Compare commits

...

7 Commits

Author SHA1 Message Date
Waleed Latif
4339c8f980 improvement(blocks): extract model config subBlocks into shared utils 2026-02-07 04:23:57 -08:00
Waleed
99ae5435e3 feat(models): updated model configs, updated anthropic provider to propagate errors back to user if any (#3159)
* feat(models): updated model configs, updated anthropic provider to propagate errors back to user if any

* moved max tokens to advanced

* updated model configs and tested

* removed default in max config for output tokens

* moved more stuff to advanced mode in the agent block

* stronger typing

* move api key under model, update mistral and groq

* update openrouter, fixed serializer to allow ollama/vllm models without api key

* removed ollama handling
2026-02-06 22:35:57 -08:00
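
A minimal sketch of the error-propagation change described above, assuming the Anthropic TypeScript SDK's `APIError` shape; the wrapper function and model id are illustrative, not the repository's actual code:

```typescript
import Anthropic from '@anthropic-ai/sdk'

// Hypothetical wrapper: surface the provider's own error message to the user
// instead of swallowing it behind a generic failure.
async function callAnthropic(client: Anthropic, prompt: string): Promise<string> {
  try {
    const response = await client.messages.create({
      model: 'claude-sonnet-4-5', // illustrative model id
      max_tokens: 1024,
      messages: [{ role: 'user', content: prompt }],
    })
    const block = response.content[0]
    return block.type === 'text' ? block.text : ''
  } catch (error) {
    if (error instanceof Anthropic.APIError) {
      // Propagate status and message (rate limit, invalid model, overloaded)
      // back to the user rather than a generic 500.
      throw new Error(`Anthropic request failed (${error.status}): ${error.message}`)
    }
    throw error
  }
}
```
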
Vikhyath Mondreti
925f06add7 improvement(preview): render nested values like input format correctly in workflow execution preview (#3154)
* improvement(preview): nested workflow snapshots/preview when not executed

* improvements to resolve nested subblock values

* few more things

* add try catch

* fix fallback case

* deps
2026-02-06 22:12:40 -08:00
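
The preview diffs later in this compare route sibling sub-block lookups through a `resolvePreviewContextValue` helper. Its implementation is not shown here; a rough sketch of what such a helper might do, assuming preview snapshots can hold either raw values or nested `{ value }` wrappers:

```typescript
// Hypothetical sketch only: the real helper lives in the sub-block utils
// and is imported, not defined, in the diffs below.
export function resolvePreviewContextValue(entry: unknown): unknown {
  if (entry && typeof entry === 'object' && 'value' in entry) {
    return (entry as { value: unknown }).value
  }
  return entry
}
```
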
Vikhyath Mondreti
193b95cfec fix(auth): swap out hybrid auth in relevant callsites (#3160)
* fix(logs): execution files should always use our internal route

* correct degree of access control

* fix tests

* fix tag defs flag

* fix type check

* fix mcp tools

* make webhooks consistent

* fix ollama and vllm visibility

* remove dup test
2026-02-06 22:07:55 -08:00
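
The auth swap above replaces the catch-all `checkHybridAuth` with narrower helpers at each callsite. The signatures below are inferred from how the diffs in this compare call them; the semantics are an assumption based on the names:

```typescript
import type { NextRequest } from 'next/server'

interface AuthResult {
  success: boolean
  userId?: string
  authType?: 'session' | 'internal_jwt' | 'api_key'
  error?: string
}

// Assumed: accepts browser sessions and internal JWTs but rejects API keys,
// replacing the old pattern of calling checkHybridAuth and then manually
// branching on authType === 'api_key' (see the tag-definitions diff below).
declare function checkSessionOrInternalAuth(
  request: NextRequest,
  options?: { requireWorkflowId?: boolean }
): Promise<AuthResult>

// Assumed: accepts only internal service-to-service JWTs, used where
// requests originate from workflow execution (memory, file parse).
declare function checkInternalAuth(
  request: NextRequest,
  options?: { requireWorkflowId?: boolean }
): Promise<AuthResult>
```
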
Waleed
0ca25bbab6 fix(function): isolated-vm worker pool to prevent single-worker bottleneck + execution user id resolution (#3155)
* fix(executor): isolated-vm worker pool to prevent single-worker bottleneck

* chore(helm): add isolated-vm worker pool env vars to values.yaml

* fix(userid): resolution for fair scheduling

* add fallback back

* add to helm charts

* remove constant fallbacks

* fix

* address bugbot comments

* fix fallbacks

* one more bugbot comment

---------

Co-authored-by: Vikhyath Mondreti <vikhyath@simstudio.ai>
2026-02-06 18:34:03 -08:00
Waleed
1edaf197b2 fix(azure): add azure-anthropic support to router, evaluator, copilot, and tokenization (#3158)
* fix(azure): add azure-anthropic support to router, evaluator, copilot, and tokenization

* added azure anthropic values to env

* fix(azure): make anthropic-version configurable for azure-anthropic provider

* fix(azure): thread provider credentials through guardrails and fix translate missing bedrockAccessKeyId

* updated guardrails

* ack'd PR comments

* fix(azure): unify credential passing pattern across all LLM handlers

- Pass all provider credentials unconditionally in router, evaluator (matching agent pattern)
- Remove conditional if-branching on providerId for credential fields
- Thread workspaceId through guardrails → hallucination validator for BYOK key resolution
- Remove getApiKey() from hallucination validator, let executeProviderRequest handle it
- Resolve vertex OAuth credentials in hallucination validator matching agent handler pattern

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-06 15:26:10 -08:00
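
A minimal sketch of the "pass all provider credentials unconditionally" pattern this commit describes; the field names mirror the guardrails diff further down, while the handler plumbing itself is assumed:

```typescript
// Credential bag threaded through router/evaluator/guardrails handlers.
interface ProviderCredentials {
  azureEndpoint?: string
  azureApiVersion?: string
  vertexProject?: string
  vertexLocation?: string
  vertexCredential?: string
  bedrockAccessKeyId?: string
  bedrockSecretKey?: string
  bedrockRegion?: string
}

// Old (assumed) shape: per-provider if-branching on providerId.
// New shape: forward everything and let the provider layer pick what it
// needs; fields a given provider doesn't use are simply ignored.
function buildProviderRequest(
  model: string,
  apiKey: string | undefined,
  credentials: ProviderCredentials
) {
  return { model, apiKey, ...credentials }
}
```
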
Waleed
474b1af145 improvement(ui): improved skills UI, validation, and permissions (#3156)
* improvement(ui): improved skills UI, validation, and permissions

* stronger typing for Skill interface

* added missing docs description

* ack comment
2026-02-06 13:11:56 -08:00
107 changed files with 3290 additions and 1313 deletions

View File

@@ -5462,3 +5462,24 @@ export function EnrichSoIcon(props: SVGProps<SVGSVGElement>) {
</svg>
)
}
export function AgentSkillsIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg
{...props}
xmlns='http://www.w3.org/2000/svg'
width='16'
height='16'
viewBox='0 0 16 16'
fill='none'
>
<path
d='M8 1L14.0622 4.5V11.5L8 15L1.93782 11.5V4.5L8 1Z'
stroke='currentColor'
strokeWidth='1.5'
fill='none'
/>
<path d='M8 4.5L11 6.25V9.75L8 11.5L5 9.75V6.25L8 4.5Z' fill='currentColor' />
</svg>
)
}

View File

@@ -18,7 +18,9 @@ This means you can attach many skills to an agent without bloating its context w
## Creating Skills
Go to **Settings** (gear icon) and select **Skills** under the Tools section.
Go to **Settings** and select **Skills** under the Tools section.
![Manage Skills](/static/skills/manage-skills.png)
Click **Add** to create a new skill with three fields:
@@ -52,11 +54,22 @@ Use when the user asks you to write, optimize, or debug SQL queries.
...
```
**Recommended structure:**
- **When to use** — Specific triggers and scenarios
- **Instructions** — Step-by-step guidance with numbered lists
- **Examples** — Input/output samples showing expected behavior
- **Common Patterns** — Reusable approaches for frequent tasks
- **Edge Cases** — Gotchas and special considerations
Keep skills focused and under 500 lines. If a skill grows too large, split it into multiple specialized skills.
## Adding Skills to an Agent
Open any **Agent** block and find the **Skills** dropdown below the tools section. Select the skills you want the agent to have access to.
Selected skills appear as chips that you can click to edit or remove.
![Add Skill](/static/skills/add-skill.png)
Selected skills appear as cards that you can click to edit or remove.
### What Happens at Runtime
@@ -69,12 +82,50 @@ When the workflow runs:
This works across all supported LLM providers — the `load_skill` tool uses standard tool-calling, so no provider-specific configuration is needed.
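
Since `load_skill` rides on standard tool-calling, a provider-agnostic definition might look roughly like the sketch below; the schema and field names are assumptions, not the product's actual tool definition:

```typescript
// Hypothetical JSON-schema tool definition: any provider that supports
// standard tool-calling can invoke load_skill without special configuration.
const loadSkillTool = {
  name: 'load_skill',
  description: 'Load the full instructions of an attached skill by name.',
  parameters: {
    type: 'object',
    properties: {
      skill_name: {
        type: 'string',
        description: 'Name of the skill to load, e.g. "sql-expert".',
      },
    },
    required: ['skill_name'],
  },
} as const
```
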
## Tips
## Common Use Cases
- **Keep descriptions actionable** — Instead of "Helps with SQL", write "Write optimized SQL queries for PostgreSQL, MySQL, and SQLite, including index recommendations and query plan analysis"
Skills are most valuable when agents need specialized knowledge or multi-step workflows:
**Domain Expertise**
- `api-integration-expert` — Best practices for calling specific APIs (authentication, rate limiting, error handling)
- `data-transformation` — ETL patterns, data cleaning, and validation rules
- `code-reviewer` — Code review guidelines specific to your team's standards
**Workflow Templates**
- `bug-investigation` — Step-by-step debugging methodology (reproduce → isolate → test → fix)
- `feature-implementation` — Development workflow from requirements to deployment
- `document-generator` — Templates and formatting rules for technical documentation
**Company-Specific Knowledge**
- `our-architecture` — System architecture diagrams, service dependencies, and deployment processes
- `style-guide` — Brand guidelines, writing tone, UI/UX patterns
- `customer-onboarding` — Standard procedures and common customer questions
**When to use skills vs. agent instructions:**
- Use **skills** for knowledge that applies across multiple workflows or changes frequently
- Use **agent instructions** for task-specific context that's unique to a single agent
## Best Practices
**Writing Effective Descriptions**
- **Be specific and keyword-rich** — Instead of "Helps with SQL", write "Write optimized SQL queries for PostgreSQL, MySQL, and SQLite, including index recommendations and query plan analysis"
- **Include activation triggers** — Mention specific words or phrases that should prompt the skill (e.g., "Use when the user mentions PDFs, forms, or document extraction")
- **Keep it under 200 words** — Agents scan descriptions quickly; make every word count
**Skill Scope and Organization**
- **One skill per domain** — A focused `sql-expert` skill works better than a broad `database-everything` skill
- **Use markdown structure** — Headers, lists, and code blocks help the agent parse and follow instructions
- **Test iteratively** — Run your workflow and check if the agent activates the skill when expected
- **Limit to 5-10 skills per agent** — More skills = more decision overhead; start small and add as needed
- **Split large skills** — If a skill exceeds 500 lines, break it into focused sub-skills
**Content Structure**
- **Use markdown formatting** — Headers, lists, and code blocks help agents parse and follow instructions
- **Provide examples** — Show input/output pairs so agents understand expected behavior
- **Be explicit about edge cases** — Don't assume agents will infer special handling
**Testing and Iteration**
- **Test activation** — Run your workflow and verify the agent loads the skill when expected
- **Check for false positives** — Make sure skills aren't activating when they shouldn't
- **Refine descriptions** — If a skill isn't loading when needed, add more keywords to the description
## Learn More

View File

@@ -10,6 +10,21 @@ import { BlockInfoCard } from "@/components/ui/block-info-card"
color="#6366F1"
/>
{/* MANUAL-CONTENT-START:intro */}
[Airweave](https://airweave.ai/) is an AI-powered semantic search platform that helps you discover and retrieve knowledge across all your synced data sources. Built for modern teams, Airweave enables fast, relevant search results using neural, hybrid, or keyword-based strategies tailored to your needs.
With Airweave, you can:
- **Search smarter**: Use natural language queries to uncover information stored across your connected tools and databases
- **Unify your data**: Seamlessly access content from sources like code, docs, chat, emails, cloud files, and more
- **Customize retrieval**: Select between hybrid (semantic + keyword), neural, or keyword search strategies for optimal results
- **Boost recall**: Expand search queries with AI to find more comprehensive answers
- **Rerank results using AI**: Prioritize the most relevant answers with powerful language models
- **Get instant answers**: Generate clear, AI-powered responses synthesized from your data
In Sim, the Airweave integration empowers your agents to search, summarize, and extract insights from all your organization's data via a single tool. Use Airweave to drive rich, contextual knowledge retrieval within your workflows—whether answering questions, generating summaries, or supporting dynamic decision-making.
{/* MANUAL-CONTENT-END */}
## Usage Instructions
Search across your synced data sources using Airweave. Supports semantic search with hybrid, neural, or keyword retrieval strategies. Optionally generate AI-powered answers from search results.
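
As a sketch of how those retrieval options might combine in a single request, with assumed field names (the block's actual parameters are not shown in this diff):

```typescript
// Hypothetical request shape for an Airweave-style semantic search call.
interface AirweaveSearchRequest {
  query: string
  retrievalStrategy: 'hybrid' | 'neural' | 'keyword'
  expandQuery?: boolean // AI query expansion for better recall
  rerank?: boolean // LLM-based reranking of results
  generateAnswer?: boolean // synthesize an AI-powered answer from results
  limit?: number
}

const request: AirweaveSearchRequest = {
  query: 'What did we decide about Q3 pricing?',
  retrievalStrategy: 'hybrid',
  expandQuery: true,
  rerank: true,
  generateAnswer: true,
  limit: 10,
}
```
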

Binary file not shown (new file, 28 KiB)

Binary file not shown (new file, 56 KiB)

View File

@@ -5,7 +5,7 @@ import { eq } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { generateAgentCard, generateSkillsFromWorkflow } from '@/lib/a2a/agent-card'
import type { AgentCapabilities, AgentSkill } from '@/lib/a2a/types'
import { checkHybridAuth } from '@/lib/auth/hybrid'
import { checkSessionOrInternalAuth } from '@/lib/auth/hybrid'
import { getRedisClient } from '@/lib/core/config/redis'
import { loadWorkflowFromNormalizedTables } from '@/lib/workflows/persistence/utils'
import { checkWorkspaceAccess } from '@/lib/workspaces/permissions/utils'
@@ -40,7 +40,7 @@ export async function GET(request: NextRequest, { params }: { params: Promise<Ro
}
if (!agent.agent.isPublished) {
const auth = await checkHybridAuth(request, { requireWorkflowId: false })
const auth = await checkSessionOrInternalAuth(request, { requireWorkflowId: false })
if (!auth.success) {
return NextResponse.json({ error: 'Agent not published' }, { status: 404 })
}
@@ -81,7 +81,7 @@ export async function PUT(request: NextRequest, { params }: { params: Promise<Ro
const { agentId } = await params
try {
const auth = await checkHybridAuth(request, { requireWorkflowId: false })
const auth = await checkSessionOrInternalAuth(request, { requireWorkflowId: false })
if (!auth.success || !auth.userId) {
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}
@@ -151,7 +151,7 @@ export async function DELETE(request: NextRequest, { params }: { params: Promise
const { agentId } = await params
try {
const auth = await checkHybridAuth(request, { requireWorkflowId: false })
const auth = await checkSessionOrInternalAuth(request, { requireWorkflowId: false })
if (!auth.success || !auth.userId) {
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}
@@ -189,7 +189,7 @@ export async function POST(request: NextRequest, { params }: { params: Promise<R
const { agentId } = await params
try {
const auth = await checkHybridAuth(request, { requireWorkflowId: false })
const auth = await checkSessionOrInternalAuth(request, { requireWorkflowId: false })
if (!auth.success || !auth.userId) {
logger.warn('A2A agent publish auth failed:', { error: auth.error, hasUserId: !!auth.userId })
return NextResponse.json({ error: auth.error || 'Unauthorized' }, { status: 401 })

View File

@@ -13,7 +13,7 @@ import { v4 as uuidv4 } from 'uuid'
import { generateSkillsFromWorkflow } from '@/lib/a2a/agent-card'
import { A2A_DEFAULT_CAPABILITIES } from '@/lib/a2a/constants'
import { sanitizeAgentName } from '@/lib/a2a/utils'
import { checkHybridAuth } from '@/lib/auth/hybrid'
import { checkSessionOrInternalAuth } from '@/lib/auth/hybrid'
import { loadWorkflowFromNormalizedTables } from '@/lib/workflows/persistence/utils'
import { hasValidStartBlockInState } from '@/lib/workflows/triggers/trigger-utils'
import { getWorkspaceById } from '@/lib/workspaces/permissions/utils'
@@ -27,7 +27,7 @@ export const dynamic = 'force-dynamic'
*/
export async function GET(request: NextRequest) {
try {
const auth = await checkHybridAuth(request, { requireWorkflowId: false })
const auth = await checkSessionOrInternalAuth(request, { requireWorkflowId: false })
if (!auth.success || !auth.userId) {
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}
@@ -87,7 +87,7 @@ export async function GET(request: NextRequest) {
*/
export async function POST(request: NextRequest) {
try {
const auth = await checkHybridAuth(request, { requireWorkflowId: false })
const auth = await checkSessionOrInternalAuth(request, { requireWorkflowId: false })
if (!auth.success || !auth.userId) {
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}

View File

@@ -5,7 +5,7 @@ import { and, eq } from 'drizzle-orm'
import { jwtDecode } from 'jwt-decode'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { checkHybridAuth } from '@/lib/auth/hybrid'
import { checkSessionOrInternalAuth } from '@/lib/auth/hybrid'
import { generateRequestId } from '@/lib/core/utils/request'
import { evaluateScopeCoverage, type OAuthProvider, parseProvider } from '@/lib/oauth'
import { getUserEntityPermissions } from '@/lib/workspaces/permissions/utils'
@@ -81,7 +81,7 @@ export async function GET(request: NextRequest) {
const { provider: providerParam, workflowId, credentialId } = parseResult.data
// Authenticate requester (supports session, API key, internal JWT)
const authResult = await checkHybridAuth(request)
const authResult = await checkSessionOrInternalAuth(request)
if (!authResult.success || !authResult.userId) {
logger.warn(`[${requestId}] Unauthenticated credentials request rejected`)
return NextResponse.json({ error: 'User not authenticated' }, { status: 401 })

View File

@@ -12,7 +12,7 @@ describe('OAuth Token API Routes', () => {
const mockRefreshTokenIfNeeded = vi.fn()
const mockGetOAuthToken = vi.fn()
const mockAuthorizeCredentialUse = vi.fn()
const mockCheckHybridAuth = vi.fn()
const mockCheckSessionOrInternalAuth = vi.fn()
const mockLogger = createMockLogger()
@@ -42,7 +42,7 @@ describe('OAuth Token API Routes', () => {
}))
vi.doMock('@/lib/auth/hybrid', () => ({
checkHybridAuth: mockCheckHybridAuth,
checkSessionOrInternalAuth: mockCheckSessionOrInternalAuth,
}))
})
@@ -235,7 +235,7 @@ describe('OAuth Token API Routes', () => {
describe('credentialAccountUserId + providerId path', () => {
it('should reject unauthenticated requests', async () => {
mockCheckHybridAuth.mockResolvedValueOnce({
mockCheckSessionOrInternalAuth.mockResolvedValueOnce({
success: false,
error: 'Authentication required',
})
@@ -255,30 +255,8 @@ describe('OAuth Token API Routes', () => {
expect(mockGetOAuthToken).not.toHaveBeenCalled()
})
it('should reject API key authentication', async () => {
mockCheckHybridAuth.mockResolvedValueOnce({
success: true,
authType: 'api_key',
userId: 'test-user-id',
})
const req = createMockRequest('POST', {
credentialAccountUserId: 'test-user-id',
providerId: 'google',
})
const { POST } = await import('@/app/api/auth/oauth/token/route')
const response = await POST(req)
const data = await response.json()
expect(response.status).toBe(401)
expect(data).toHaveProperty('error', 'User not authenticated')
expect(mockGetOAuthToken).not.toHaveBeenCalled()
})
it('should reject internal JWT authentication', async () => {
mockCheckHybridAuth.mockResolvedValueOnce({
mockCheckSessionOrInternalAuth.mockResolvedValueOnce({
success: true,
authType: 'internal_jwt',
userId: 'test-user-id',
@@ -300,7 +278,7 @@ describe('OAuth Token API Routes', () => {
})
it('should reject requests for other users credentials', async () => {
mockCheckHybridAuth.mockResolvedValueOnce({
mockCheckSessionOrInternalAuth.mockResolvedValueOnce({
success: true,
authType: 'session',
userId: 'attacker-user-id',
@@ -322,7 +300,7 @@ describe('OAuth Token API Routes', () => {
})
it('should allow session-authenticated users to access their own credentials', async () => {
mockCheckHybridAuth.mockResolvedValueOnce({
mockCheckSessionOrInternalAuth.mockResolvedValueOnce({
success: true,
authType: 'session',
userId: 'test-user-id',
@@ -345,7 +323,7 @@ describe('OAuth Token API Routes', () => {
})
it('should return 404 when credential not found for user', async () => {
mockCheckHybridAuth.mockResolvedValueOnce({
mockCheckSessionOrInternalAuth.mockResolvedValueOnce({
success: true,
authType: 'session',
userId: 'test-user-id',
@@ -373,7 +351,7 @@ describe('OAuth Token API Routes', () => {
*/
describe('GET handler', () => {
it('should return access token successfully', async () => {
mockCheckHybridAuth.mockResolvedValueOnce({
mockCheckSessionOrInternalAuth.mockResolvedValueOnce({
success: true,
authType: 'session',
userId: 'test-user-id',
@@ -402,7 +380,7 @@ describe('OAuth Token API Routes', () => {
expect(response.status).toBe(200)
expect(data).toHaveProperty('accessToken', 'fresh-token')
expect(mockCheckHybridAuth).toHaveBeenCalled()
expect(mockCheckSessionOrInternalAuth).toHaveBeenCalled()
expect(mockGetCredential).toHaveBeenCalledWith(mockRequestId, 'credential-id', 'test-user-id')
expect(mockRefreshTokenIfNeeded).toHaveBeenCalled()
})
@@ -421,7 +399,7 @@ describe('OAuth Token API Routes', () => {
})
it('should handle authentication failure', async () => {
mockCheckHybridAuth.mockResolvedValueOnce({
mockCheckSessionOrInternalAuth.mockResolvedValueOnce({
success: false,
error: 'Authentication required',
})
@@ -440,7 +418,7 @@ describe('OAuth Token API Routes', () => {
})
it('should handle credential not found', async () => {
mockCheckHybridAuth.mockResolvedValueOnce({
mockCheckSessionOrInternalAuth.mockResolvedValueOnce({
success: true,
authType: 'session',
userId: 'test-user-id',
@@ -461,7 +439,7 @@ describe('OAuth Token API Routes', () => {
})
it('should handle missing access token', async () => {
mockCheckHybridAuth.mockResolvedValueOnce({
mockCheckSessionOrInternalAuth.mockResolvedValueOnce({
success: true,
authType: 'session',
userId: 'test-user-id',
@@ -487,7 +465,7 @@ describe('OAuth Token API Routes', () => {
})
it('should handle token refresh failure', async () => {
mockCheckHybridAuth.mockResolvedValueOnce({
mockCheckSessionOrInternalAuth.mockResolvedValueOnce({
success: true,
authType: 'session',
userId: 'test-user-id',

View File

@@ -2,7 +2,7 @@ import { createLogger } from '@sim/logger'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { authorizeCredentialUse } from '@/lib/auth/credential-access'
import { checkHybridAuth } from '@/lib/auth/hybrid'
import { checkSessionOrInternalAuth } from '@/lib/auth/hybrid'
import { generateRequestId } from '@/lib/core/utils/request'
import { getCredential, getOAuthToken, refreshTokenIfNeeded } from '@/app/api/auth/oauth/utils'
@@ -71,7 +71,7 @@ export async function POST(request: NextRequest) {
providerId,
})
const auth = await checkHybridAuth(request, { requireWorkflowId: false })
const auth = await checkSessionOrInternalAuth(request, { requireWorkflowId: false })
if (!auth.success || auth.authType !== 'session' || !auth.userId) {
logger.warn(`[${requestId}] Unauthorized request for credentialAccountUserId path`, {
success: auth.success,
@@ -187,7 +187,7 @@ export async function GET(request: NextRequest) {
const { credentialId } = parseResult.data
// For GET requests, we only support session-based authentication
const auth = await checkHybridAuth(request, { requireWorkflowId: false })
const auth = await checkSessionOrInternalAuth(request, { requireWorkflowId: false })
if (!auth.success || auth.authType !== 'session' || !auth.userId) {
return NextResponse.json({ error: 'User not authenticated' }, { status: 401 })
}

View File

@@ -285,6 +285,14 @@ export async function POST(req: NextRequest) {
apiVersion: 'preview',
endpoint: env.AZURE_OPENAI_ENDPOINT,
}
} else if (providerEnv === 'azure-anthropic') {
providerConfig = {
provider: 'azure-anthropic',
model: envModel,
apiKey: env.AZURE_ANTHROPIC_API_KEY,
apiVersion: env.AZURE_ANTHROPIC_API_VERSION,
endpoint: env.AZURE_ANTHROPIC_ENDPOINT,
}
} else if (providerEnv === 'vertex') {
providerConfig = {
provider: 'vertex',

View File

@@ -29,7 +29,7 @@ function setupFileApiMocks(
}
vi.doMock('@/lib/auth/hybrid', () => ({
checkHybridAuth: vi.fn().mockResolvedValue({
checkSessionOrInternalAuth: vi.fn().mockResolvedValue({
success: authenticated,
userId: authenticated ? 'test-user-id' : undefined,
error: authenticated ? undefined : 'Unauthorized',

View File

@@ -1,7 +1,7 @@
import { createLogger } from '@sim/logger'
import type { NextRequest } from 'next/server'
import { NextResponse } from 'next/server'
import { checkHybridAuth } from '@/lib/auth/hybrid'
import { checkSessionOrInternalAuth } from '@/lib/auth/hybrid'
import type { StorageContext } from '@/lib/uploads/config'
import { deleteFile, hasCloudStorage } from '@/lib/uploads/core/storage-service'
import { extractStorageKey, inferContextFromKey } from '@/lib/uploads/utils/file-utils'
@@ -24,7 +24,7 @@ const logger = createLogger('FilesDeleteAPI')
*/
export async function POST(request: NextRequest) {
try {
const authResult = await checkHybridAuth(request, { requireWorkflowId: false })
const authResult = await checkSessionOrInternalAuth(request, { requireWorkflowId: false })
if (!authResult.success || !authResult.userId) {
logger.warn('Unauthorized file delete request', {

View File

@@ -1,6 +1,6 @@
import { createLogger } from '@sim/logger'
import { type NextRequest, NextResponse } from 'next/server'
import { checkHybridAuth } from '@/lib/auth/hybrid'
import { checkSessionOrInternalAuth } from '@/lib/auth/hybrid'
import type { StorageContext } from '@/lib/uploads/config'
import { hasCloudStorage } from '@/lib/uploads/core/storage-service'
import { verifyFileAccess } from '@/app/api/files/authorization'
@@ -12,7 +12,7 @@ export const dynamic = 'force-dynamic'
export async function POST(request: NextRequest) {
try {
const authResult = await checkHybridAuth(request, { requireWorkflowId: false })
const authResult = await checkSessionOrInternalAuth(request, { requireWorkflowId: false })
if (!authResult.success || !authResult.userId) {
logger.warn('Unauthorized download URL request', {

View File

@@ -35,7 +35,7 @@ function setupFileApiMocks(
}
vi.doMock('@/lib/auth/hybrid', () => ({
checkHybridAuth: vi.fn().mockResolvedValue({
checkInternalAuth: vi.fn().mockResolvedValue({
success: authenticated,
userId: authenticated ? 'test-user-id' : undefined,
error: authenticated ? undefined : 'Unauthorized',

View File

@@ -5,7 +5,7 @@ import path from 'path'
import { createLogger } from '@sim/logger'
import binaryExtensionsList from 'binary-extensions'
import { type NextRequest, NextResponse } from 'next/server'
import { checkHybridAuth } from '@/lib/auth/hybrid'
import { checkInternalAuth } from '@/lib/auth/hybrid'
import {
secureFetchWithPinnedIP,
validateUrlWithDNS,
@@ -66,7 +66,7 @@ export async function POST(request: NextRequest) {
const startTime = Date.now()
try {
const authResult = await checkHybridAuth(request, { requireWorkflowId: true })
const authResult = await checkInternalAuth(request, { requireWorkflowId: true })
if (!authResult.success) {
logger.warn('Unauthorized file parse request', {

View File

@@ -55,7 +55,7 @@ describe('File Serve API Route', () => {
})
vi.doMock('@/lib/auth/hybrid', () => ({
checkHybridAuth: vi.fn().mockResolvedValue({
checkSessionOrInternalAuth: vi.fn().mockResolvedValue({
success: true,
userId: 'test-user-id',
}),
@@ -165,7 +165,7 @@ describe('File Serve API Route', () => {
}))
vi.doMock('@/lib/auth/hybrid', () => ({
checkHybridAuth: vi.fn().mockResolvedValue({
checkSessionOrInternalAuth: vi.fn().mockResolvedValue({
success: true,
userId: 'test-user-id',
}),
@@ -226,7 +226,7 @@ describe('File Serve API Route', () => {
}))
vi.doMock('@/lib/auth/hybrid', () => ({
checkHybridAuth: vi.fn().mockResolvedValue({
checkSessionOrInternalAuth: vi.fn().mockResolvedValue({
success: true,
userId: 'test-user-id',
}),
@@ -291,7 +291,7 @@ describe('File Serve API Route', () => {
}))
vi.doMock('@/lib/auth/hybrid', () => ({
checkHybridAuth: vi.fn().mockResolvedValue({
checkSessionOrInternalAuth: vi.fn().mockResolvedValue({
success: true,
userId: 'test-user-id',
}),
@@ -350,7 +350,7 @@ describe('File Serve API Route', () => {
for (const test of contentTypeTests) {
it(`should serve ${test.ext} file with correct content type`, async () => {
vi.doMock('@/lib/auth/hybrid', () => ({
checkHybridAuth: vi.fn().mockResolvedValue({
checkSessionOrInternalAuth: vi.fn().mockResolvedValue({
success: true,
userId: 'test-user-id',
}),

View File

@@ -2,7 +2,7 @@ import { readFile } from 'fs/promises'
import { createLogger } from '@sim/logger'
import type { NextRequest } from 'next/server'
import { NextResponse } from 'next/server'
import { checkHybridAuth } from '@/lib/auth/hybrid'
import { checkSessionOrInternalAuth } from '@/lib/auth/hybrid'
import { CopilotFiles, isUsingCloudStorage } from '@/lib/uploads'
import type { StorageContext } from '@/lib/uploads/config'
import { downloadFile } from '@/lib/uploads/core/storage-service'
@@ -49,7 +49,7 @@ export async function GET(
return await handleLocalFilePublic(fullPath)
}
const authResult = await checkHybridAuth(request, { requireWorkflowId: false })
const authResult = await checkSessionOrInternalAuth(request, { requireWorkflowId: false })
if (!authResult.success || !authResult.userId) {
logger.warn('Unauthorized file access attempt', {

View File

@@ -845,6 +845,8 @@ export async function POST(req: NextRequest) {
contextVariables,
timeoutMs: timeout,
requestId,
ownerKey: `user:${auth.userId}`,
ownerWeight: 1,
})
const executionTime = Date.now() - startTime
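
The `ownerKey`/`ownerWeight` fields added above feed fair scheduling across the isolated-vm worker pool from this compare's worker-pool commit. A toy sketch of per-owner round-robin dispatch (weights omitted for brevity; the real pool and its API are not shown here):

```typescript
// Toy illustration: rotate across owner keys so a single heavy owner
// (e.g. `user:${userId}`) cannot monopolize every worker.
interface Job {
  ownerKey: string
  run: () => Promise<void>
}

class FairQueue {
  private queues = new Map<string, Job[]>()
  private rotation: string[] = []

  enqueue(job: Job) {
    if (!this.queues.has(job.ownerKey)) {
      this.queues.set(job.ownerKey, [])
      this.rotation.push(job.ownerKey)
    }
    this.queues.get(job.ownerKey)!.push(job)
  }

  /** Next job, cycling owners round-robin; drained owners drop out. */
  next(): Job | undefined {
    while (this.rotation.length > 0) {
      const key = this.rotation.shift()!
      const queue = this.queues.get(key)!
      if (queue.length > 0) {
        this.rotation.push(key) // owner goes to the back of the rotation
        return queue.shift()
      }
      this.queues.delete(key) // drained owner leaves the rotation
    }
    return undefined
  }
}
```
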

View File

@@ -23,7 +23,16 @@ export async function POST(request: NextRequest) {
topK,
model,
apiKey,
azureEndpoint,
azureApiVersion,
vertexProject,
vertexLocation,
vertexCredential,
bedrockAccessKeyId,
bedrockSecretKey,
bedrockRegion,
workflowId,
workspaceId,
piiEntityTypes,
piiMode,
piiLanguage,
@@ -110,7 +119,18 @@ export async function POST(request: NextRequest) {
topK,
model,
apiKey,
{
azureEndpoint,
azureApiVersion,
vertexProject,
vertexLocation,
vertexCredential,
bedrockAccessKeyId,
bedrockSecretKey,
bedrockRegion,
},
workflowId,
workspaceId,
piiEntityTypes,
piiMode,
piiLanguage,
@@ -178,7 +198,18 @@ async function executeValidation(
topK: string | undefined,
model: string,
apiKey: string | undefined,
providerCredentials: {
azureEndpoint?: string
azureApiVersion?: string
vertexProject?: string
vertexLocation?: string
vertexCredential?: string
bedrockAccessKeyId?: string
bedrockSecretKey?: string
bedrockRegion?: string
},
workflowId: string | undefined,
workspaceId: string | undefined,
piiEntityTypes: string[] | undefined,
piiMode: string | undefined,
piiLanguage: string | undefined,
@@ -219,7 +250,9 @@ async function executeValidation(
topK: topK ? Number.parseInt(topK) : 10, // Default topK is 10
model: model,
apiKey,
providerCredentials,
workflowId,
workspaceId,
requestId,
})
}

View File

@@ -2,7 +2,7 @@ import { randomUUID } from 'crypto'
import { createLogger } from '@sim/logger'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { checkHybridAuth } from '@/lib/auth/hybrid'
import { checkSessionOrInternalAuth } from '@/lib/auth/hybrid'
import { SUPPORTED_FIELD_TYPES } from '@/lib/knowledge/constants'
import { createTagDefinition, getTagDefinitions } from '@/lib/knowledge/tags/service'
import { checkKnowledgeBaseAccess } from '@/app/api/knowledge/utils'
@@ -19,19 +19,11 @@ export async function GET(req: NextRequest, { params }: { params: Promise<{ id:
try {
logger.info(`[${requestId}] Getting tag definitions for knowledge base ${knowledgeBaseId}`)
const auth = await checkHybridAuth(req, { requireWorkflowId: false })
const auth = await checkSessionOrInternalAuth(req, { requireWorkflowId: false })
if (!auth.success) {
return NextResponse.json({ error: auth.error || 'Unauthorized' }, { status: 401 })
}
// Only allow session and internal JWT auth (not API key)
if (auth.authType === 'api_key') {
return NextResponse.json(
{ error: 'API key auth not supported for this endpoint' },
{ status: 401 }
)
}
// For session auth, verify KB access. Internal JWT is trusted.
if (auth.authType === 'session' && auth.userId) {
const accessCheck = await checkKnowledgeBaseAccess(knowledgeBaseId, auth.userId)
@@ -64,19 +56,11 @@ export async function POST(req: NextRequest, { params }: { params: Promise<{ id:
try {
logger.info(`[${requestId}] Creating tag definition for knowledge base ${knowledgeBaseId}`)
const auth = await checkHybridAuth(req, { requireWorkflowId: false })
const auth = await checkSessionOrInternalAuth(req, { requireWorkflowId: false })
if (!auth.success) {
return NextResponse.json({ error: auth.error || 'Unauthorized' }, { status: 401 })
}
// Only allow session and internal JWT auth (not API key)
if (auth.authType === 'api_key') {
return NextResponse.json(
{ error: 'API key auth not supported for this endpoint' },
{ status: 401 }
)
}
// For session auth, verify KB access. Internal JWT is trusted.
if (auth.authType === 'session' && auth.userId) {
const accessCheck = await checkKnowledgeBaseAccess(knowledgeBaseId, auth.userId)

View File

@@ -8,7 +8,7 @@ import {
import { createLogger } from '@sim/logger'
import { and, eq, inArray } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { checkHybridAuth } from '@/lib/auth/hybrid'
import { checkSessionOrInternalAuth } from '@/lib/auth/hybrid'
import { generateRequestId } from '@/lib/core/utils/request'
import type { TraceSpan, WorkflowExecutionLog } from '@/lib/logs/types'
@@ -23,7 +23,7 @@ export async function GET(
try {
const { executionId } = await params
const authResult = await checkHybridAuth(request, { requireWorkflowId: false })
const authResult = await checkSessionOrInternalAuth(request, { requireWorkflowId: false })
if (!authResult.success || !authResult.userId) {
logger.warn(`[${requestId}] Unauthorized execution data access attempt for: ${executionId}`)
return NextResponse.json(

View File

@@ -4,7 +4,7 @@ import { createLogger } from '@sim/logger'
import { and, eq } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { checkHybridAuth } from '@/lib/auth/hybrid'
import { checkInternalAuth } from '@/lib/auth/hybrid'
import { generateRequestId } from '@/lib/core/utils/request'
import { checkWorkspaceAccess } from '@/lib/workspaces/permissions/utils'
@@ -36,7 +36,7 @@ async function validateMemoryAccess(
requestId: string,
action: 'read' | 'write'
): Promise<{ userId: string } | { error: NextResponse }> {
const authResult = await checkHybridAuth(request, { requireWorkflowId: false })
const authResult = await checkInternalAuth(request, { requireWorkflowId: false })
if (!authResult.success || !authResult.userId) {
logger.warn(`[${requestId}] Unauthorized memory ${action} attempt`)
return {

View File

@@ -3,7 +3,7 @@ import { memory } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { and, eq, isNull, like } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { checkHybridAuth } from '@/lib/auth/hybrid'
import { checkInternalAuth } from '@/lib/auth/hybrid'
import { generateRequestId } from '@/lib/core/utils/request'
import { checkWorkspaceAccess } from '@/lib/workspaces/permissions/utils'
@@ -16,7 +16,7 @@ export async function GET(request: NextRequest) {
const requestId = generateRequestId()
try {
const authResult = await checkHybridAuth(request)
const authResult = await checkInternalAuth(request)
if (!authResult.success || !authResult.userId) {
logger.warn(`[${requestId}] Unauthorized memory access attempt`)
return NextResponse.json(
@@ -89,7 +89,7 @@ export async function POST(request: NextRequest) {
const requestId = generateRequestId()
try {
const authResult = await checkHybridAuth(request)
const authResult = await checkInternalAuth(request)
if (!authResult.success || !authResult.userId) {
logger.warn(`[${requestId}] Unauthorized memory creation attempt`)
return NextResponse.json(
@@ -228,7 +228,7 @@ export async function DELETE(request: NextRequest) {
const requestId = generateRequestId()
try {
const authResult = await checkHybridAuth(request)
const authResult = await checkInternalAuth(request)
if (!authResult.success || !authResult.userId) {
logger.warn(`[${requestId}] Unauthorized memory deletion attempt`)
return NextResponse.json(

View File

@@ -3,7 +3,7 @@ import { createLogger } from '@sim/logger'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { createA2AClient } from '@/lib/a2a/utils'
import { checkHybridAuth } from '@/lib/auth/hybrid'
import { checkSessionOrInternalAuth } from '@/lib/auth/hybrid'
import { generateRequestId } from '@/lib/core/utils/request'
const logger = createLogger('A2ACancelTaskAPI')
@@ -20,7 +20,7 @@ export async function POST(request: NextRequest) {
const requestId = generateRequestId()
try {
const authResult = await checkHybridAuth(request, { requireWorkflowId: false })
const authResult = await checkSessionOrInternalAuth(request, { requireWorkflowId: false })
if (!authResult.success) {
logger.warn(`[${requestId}] Unauthorized A2A cancel task attempt`)

View File

@@ -2,7 +2,7 @@ import { createLogger } from '@sim/logger'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { createA2AClient } from '@/lib/a2a/utils'
import { checkHybridAuth } from '@/lib/auth/hybrid'
import { checkSessionOrInternalAuth } from '@/lib/auth/hybrid'
import { generateRequestId } from '@/lib/core/utils/request'
export const dynamic = 'force-dynamic'
@@ -20,7 +20,7 @@ export async function POST(request: NextRequest) {
const requestId = generateRequestId()
try {
const authResult = await checkHybridAuth(request, { requireWorkflowId: false })
const authResult = await checkSessionOrInternalAuth(request, { requireWorkflowId: false })
if (!authResult.success) {
logger.warn(

View File

@@ -2,7 +2,7 @@ import { createLogger } from '@sim/logger'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { createA2AClient } from '@/lib/a2a/utils'
import { checkHybridAuth } from '@/lib/auth/hybrid'
import { checkSessionOrInternalAuth } from '@/lib/auth/hybrid'
import { generateRequestId } from '@/lib/core/utils/request'
export const dynamic = 'force-dynamic'
@@ -18,7 +18,7 @@ export async function POST(request: NextRequest) {
const requestId = generateRequestId()
try {
const authResult = await checkHybridAuth(request, { requireWorkflowId: false })
const authResult = await checkSessionOrInternalAuth(request, { requireWorkflowId: false })
if (!authResult.success) {
logger.warn(`[${requestId}] Unauthorized A2A get agent card attempt: ${authResult.error}`)

View File

@@ -2,7 +2,7 @@ import { createLogger } from '@sim/logger'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { createA2AClient } from '@/lib/a2a/utils'
import { checkHybridAuth } from '@/lib/auth/hybrid'
import { checkSessionOrInternalAuth } from '@/lib/auth/hybrid'
import { generateRequestId } from '@/lib/core/utils/request'
export const dynamic = 'force-dynamic'
@@ -19,7 +19,7 @@ export async function POST(request: NextRequest) {
const requestId = generateRequestId()
try {
const authResult = await checkHybridAuth(request, { requireWorkflowId: false })
const authResult = await checkSessionOrInternalAuth(request, { requireWorkflowId: false })
if (!authResult.success) {
logger.warn(

View File

@@ -3,7 +3,7 @@ import { createLogger } from '@sim/logger'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { createA2AClient } from '@/lib/a2a/utils'
import { checkHybridAuth } from '@/lib/auth/hybrid'
import { checkSessionOrInternalAuth } from '@/lib/auth/hybrid'
import { generateRequestId } from '@/lib/core/utils/request'
export const dynamic = 'force-dynamic'
@@ -21,7 +21,7 @@ export async function POST(request: NextRequest) {
const requestId = generateRequestId()
try {
const authResult = await checkHybridAuth(request, { requireWorkflowId: false })
const authResult = await checkSessionOrInternalAuth(request, { requireWorkflowId: false })
if (!authResult.success) {
logger.warn(`[${requestId}] Unauthorized A2A get task attempt: ${authResult.error}`)

View File

@@ -10,7 +10,7 @@ import { createLogger } from '@sim/logger'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { createA2AClient, extractTextContent, isTerminalState } from '@/lib/a2a/utils'
import { checkHybridAuth } from '@/lib/auth/hybrid'
import { checkSessionOrInternalAuth } from '@/lib/auth/hybrid'
import { generateRequestId } from '@/lib/core/utils/request'
const logger = createLogger('A2AResubscribeAPI')
@@ -27,7 +27,7 @@ export async function POST(request: NextRequest) {
const requestId = generateRequestId()
try {
const authResult = await checkHybridAuth(request, { requireWorkflowId: false })
const authResult = await checkSessionOrInternalAuth(request, { requireWorkflowId: false })
if (!authResult.success) {
logger.warn(`[${requestId}] Unauthorized A2A resubscribe attempt`)

View File

@@ -3,7 +3,7 @@ import { createLogger } from '@sim/logger'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { createA2AClient, extractTextContent, isTerminalState } from '@/lib/a2a/utils'
import { checkHybridAuth } from '@/lib/auth/hybrid'
import { checkSessionOrInternalAuth } from '@/lib/auth/hybrid'
import { validateUrlWithDNS } from '@/lib/core/security/input-validation.server'
import { generateRequestId } from '@/lib/core/utils/request'
@@ -32,7 +32,7 @@ export async function POST(request: NextRequest) {
const requestId = generateRequestId()
try {
const authResult = await checkHybridAuth(request, { requireWorkflowId: false })
const authResult = await checkSessionOrInternalAuth(request, { requireWorkflowId: false })
if (!authResult.success) {
logger.warn(`[${requestId}] Unauthorized A2A send message attempt: ${authResult.error}`)

View File

@@ -2,7 +2,7 @@ import { createLogger } from '@sim/logger'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { createA2AClient } from '@/lib/a2a/utils'
import { checkHybridAuth } from '@/lib/auth/hybrid'
import { checkSessionOrInternalAuth } from '@/lib/auth/hybrid'
import { validateUrlWithDNS } from '@/lib/core/security/input-validation.server'
import { generateRequestId } from '@/lib/core/utils/request'
@@ -22,7 +22,7 @@ export async function POST(request: NextRequest) {
const requestId = generateRequestId()
try {
const authResult = await checkHybridAuth(request, { requireWorkflowId: false })
const authResult = await checkSessionOrInternalAuth(request, { requireWorkflowId: false })
if (!authResult.success) {
logger.warn(`[${requestId}] Unauthorized A2A set push notification attempt`, {

View File

@@ -1,7 +1,7 @@
import { createLogger } from '@sim/logger'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { checkHybridAuth } from '@/lib/auth/hybrid'
import { checkSessionOrInternalAuth } from '@/lib/auth/hybrid'
import { getUserUsageLogs, type UsageLogSource } from '@/lib/billing/core/usage-log'
const logger = createLogger('UsageLogsAPI')
@@ -20,7 +20,7 @@ const QuerySchema = z.object({
*/
export async function GET(req: NextRequest) {
try {
const auth = await checkHybridAuth(req, { requireWorkflowId: false })
const auth = await checkSessionOrInternalAuth(req, { requireWorkflowId: false })
if (!auth.success || !auth.userId) {
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })

View File

@@ -325,6 +325,11 @@ export async function POST(req: NextRequest, { params }: { params: Promise<{ id:
requestId
)
// Client-side sessions and personal API keys bill/permission-check the
// authenticated user, not the workspace billed account.
const useAuthenticatedUserAsActor =
isClientSession || (auth.authType === 'api_key' && auth.apiKeyType === 'personal')
const preprocessResult = await preprocessExecution({
workflowId,
userId,
@@ -334,6 +339,7 @@ export async function POST(req: NextRequest, { params }: { params: Promise<{ id:
checkDeployment: !shouldUseDraftState,
loggingSession,
useDraftState: shouldUseDraftState,
useAuthenticatedUserAsActor,
})
if (!preprocessResult.success) {

View File

@@ -74,8 +74,7 @@ function FileCard({ file, isExecutionFile = false, workspaceId }: FileCardProps)
}
if (isExecutionFile) {
const serveUrl =
file.url || `/api/files/serve/${encodeURIComponent(file.key)}?context=execution`
const serveUrl = `/api/files/serve/${encodeURIComponent(file.key)}?context=execution`
window.open(serveUrl, '_blank')
logger.info(`Opened execution file serve URL: ${serveUrl}`)
} else {
@@ -88,16 +87,12 @@ function FileCard({ file, isExecutionFile = false, workspaceId }: FileCardProps)
logger.warn(
`Could not construct viewer URL for file: ${file.name}, falling back to serve URL`
)
const serveUrl =
file.url || `/api/files/serve/${encodeURIComponent(file.key)}?context=workspace`
const serveUrl = `/api/files/serve/${encodeURIComponent(file.key)}?context=workspace`
window.open(serveUrl, '_blank')
}
}
} catch (error) {
logger.error(`Failed to download file ${file.name}:`, error)
if (file.url) {
window.open(file.url, '_blank')
}
} finally {
setIsDownloading(false)
}
@@ -198,8 +193,7 @@ export function FileDownload({
}
if (isExecutionFile) {
const serveUrl =
file.url || `/api/files/serve/${encodeURIComponent(file.key)}?context=execution`
const serveUrl = `/api/files/serve/${encodeURIComponent(file.key)}?context=execution`
window.open(serveUrl, '_blank')
logger.info(`Opened execution file serve URL: ${serveUrl}`)
} else {
@@ -212,16 +206,12 @@ export function FileDownload({
logger.warn(
`Could not construct viewer URL for file: ${file.name}, falling back to serve URL`
)
const serveUrl =
file.url || `/api/files/serve/${encodeURIComponent(file.key)}?context=workspace`
const serveUrl = `/api/files/serve/${encodeURIComponent(file.key)}?context=workspace`
window.open(serveUrl, '_blank')
}
}
} catch (error) {
logger.error(`Failed to download file ${file.name}:`, error)
if (file.url) {
window.open(file.url, '_blank')
}
} finally {
setIsDownloading(false)
}

View File

@@ -89,7 +89,7 @@ export function WorkflowSelector({
onMouseDown={(e) => handleRemove(e, w.id)}
>
{w.name}
<X className='h-3 w-3' />
<X className='!text-[var(--text-primary)] h-4 w-4 flex-shrink-0 opacity-50' />
</Badge>
))}
{selectedWorkflows.length > 2 && (

View File

@@ -35,6 +35,7 @@ interface CredentialSelectorProps {
disabled?: boolean
isPreview?: boolean
previewValue?: any | null
previewContextValues?: Record<string, unknown>
}
export function CredentialSelector({
@@ -43,6 +44,7 @@ export function CredentialSelector({
disabled = false,
isPreview = false,
previewValue,
previewContextValues,
}: CredentialSelectorProps) {
const [showOAuthModal, setShowOAuthModal] = useState(false)
const [editingValue, setEditingValue] = useState('')
@@ -67,7 +69,11 @@ export function CredentialSelector({
canUseCredentialSets
)
const { depsSatisfied, dependsOn } = useDependsOnGate(blockId, subBlock, { disabled, isPreview })
const { depsSatisfied, dependsOn } = useDependsOnGate(blockId, subBlock, {
disabled,
isPreview,
previewContextValues,
})
const hasDependencies = dependsOn.length > 0
const effectiveDisabled = disabled || (hasDependencies && !depsSatisfied)

View File

@@ -5,6 +5,7 @@ import { Tooltip } from '@/components/emcn'
import { SelectorCombobox } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/components/selector-combobox/selector-combobox'
import { useDependsOnGate } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/hooks/use-depends-on-gate'
import { useSubBlockValue } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/hooks/use-sub-block-value'
import { resolvePreviewContextValue } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/utils'
import type { SubBlockConfig } from '@/blocks/types'
import type { SelectorContext } from '@/hooks/selectors/types'
@@ -33,7 +34,9 @@ export function DocumentSelector({
previewContextValues,
})
const [knowledgeBaseIdFromStore] = useSubBlockValue(blockId, 'knowledgeBaseId')
const knowledgeBaseIdValue = previewContextValues?.knowledgeBaseId ?? knowledgeBaseIdFromStore
const knowledgeBaseIdValue = previewContextValues
? resolvePreviewContextValue(previewContextValues.knowledgeBaseId)
: knowledgeBaseIdFromStore
const normalizedKnowledgeBaseId =
typeof knowledgeBaseIdValue === 'string' && knowledgeBaseIdValue.trim().length > 0
? knowledgeBaseIdValue

View File

@@ -17,6 +17,7 @@ import { formatDisplayText } from '@/app/workspace/[workspaceId]/w/[workflowId]/
import { TagDropdown } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/components/tag-dropdown/tag-dropdown'
import { useSubBlockInput } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/hooks/use-sub-block-input'
import { useSubBlockValue } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/hooks/use-sub-block-value'
import { resolvePreviewContextValue } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/utils'
import { useAccessibleReferencePrefixes } from '@/app/workspace/[workspaceId]/w/[workflowId]/hooks/use-accessible-reference-prefixes'
import type { SubBlockConfig } from '@/blocks/types'
import { useKnowledgeBaseTagDefinitions } from '@/hooks/kb/use-knowledge-base-tag-definitions'
@@ -77,7 +78,9 @@ export function DocumentTagEntry({
})
const [knowledgeBaseIdFromStore] = useSubBlockValue(blockId, 'knowledgeBaseId')
const knowledgeBaseIdValue = previewContextValues?.knowledgeBaseId ?? knowledgeBaseIdFromStore
const knowledgeBaseIdValue = previewContextValues
? resolvePreviewContextValue(previewContextValues.knowledgeBaseId)
: knowledgeBaseIdFromStore
const knowledgeBaseId =
typeof knowledgeBaseIdValue === 'string' && knowledgeBaseIdValue.trim().length > 0
? knowledgeBaseIdValue

View File

@@ -9,6 +9,7 @@ import { SelectorCombobox } from '@/app/workspace/[workspaceId]/w/[workflowId]/c
import { useDependsOnGate } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/hooks/use-depends-on-gate'
import { useForeignCredential } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/hooks/use-foreign-credential'
import { useSubBlockValue } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/hooks/use-sub-block-value'
import { resolvePreviewContextValue } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/utils'
import { getBlock } from '@/blocks/registry'
import type { SubBlockConfig } from '@/blocks/types'
import { isDependency } from '@/blocks/utils'
@@ -62,42 +63,56 @@ export function FileSelectorInput({
const [domainValueFromStore] = useSubBlockValue(blockId, 'domain')
const connectedCredential = previewContextValues?.credential ?? blockValues.credential
const domainValue = previewContextValues?.domain ?? domainValueFromStore
const connectedCredential = previewContextValues
? resolvePreviewContextValue(previewContextValues.credential)
: blockValues.credential
const domainValue = previewContextValues
? resolvePreviewContextValue(previewContextValues.domain)
: domainValueFromStore
const teamIdValue = useMemo(
() =>
previewContextValues?.teamId ??
resolveDependencyValue('teamId', blockValues, canonicalIndex, canonicalModeOverrides),
[previewContextValues?.teamId, blockValues, canonicalIndex, canonicalModeOverrides]
previewContextValues
? resolvePreviewContextValue(previewContextValues.teamId)
: resolveDependencyValue('teamId', blockValues, canonicalIndex, canonicalModeOverrides),
[previewContextValues, blockValues, canonicalIndex, canonicalModeOverrides]
)
const siteIdValue = useMemo(
() =>
previewContextValues?.siteId ??
resolveDependencyValue('siteId', blockValues, canonicalIndex, canonicalModeOverrides),
[previewContextValues?.siteId, blockValues, canonicalIndex, canonicalModeOverrides]
previewContextValues
? resolvePreviewContextValue(previewContextValues.siteId)
: resolveDependencyValue('siteId', blockValues, canonicalIndex, canonicalModeOverrides),
[previewContextValues, blockValues, canonicalIndex, canonicalModeOverrides]
)
const collectionIdValue = useMemo(
() =>
previewContextValues?.collectionId ??
resolveDependencyValue('collectionId', blockValues, canonicalIndex, canonicalModeOverrides),
[previewContextValues?.collectionId, blockValues, canonicalIndex, canonicalModeOverrides]
previewContextValues
? resolvePreviewContextValue(previewContextValues.collectionId)
: resolveDependencyValue(
'collectionId',
blockValues,
canonicalIndex,
canonicalModeOverrides
),
[previewContextValues, blockValues, canonicalIndex, canonicalModeOverrides]
)
const projectIdValue = useMemo(
() =>
previewContextValues?.projectId ??
resolveDependencyValue('projectId', blockValues, canonicalIndex, canonicalModeOverrides),
[previewContextValues?.projectId, blockValues, canonicalIndex, canonicalModeOverrides]
previewContextValues
? resolvePreviewContextValue(previewContextValues.projectId)
: resolveDependencyValue('projectId', blockValues, canonicalIndex, canonicalModeOverrides),
[previewContextValues, blockValues, canonicalIndex, canonicalModeOverrides]
)
const planIdValue = useMemo(
() =>
previewContextValues?.planId ??
resolveDependencyValue('planId', blockValues, canonicalIndex, canonicalModeOverrides),
[previewContextValues?.planId, blockValues, canonicalIndex, canonicalModeOverrides]
previewContextValues
? resolvePreviewContextValue(previewContextValues.planId)
: resolveDependencyValue('planId', blockValues, canonicalIndex, canonicalModeOverrides),
[previewContextValues, blockValues, canonicalIndex, canonicalModeOverrides]
)
const normalizedCredentialId =

View File

@@ -6,6 +6,7 @@ import { SelectorCombobox } from '@/app/workspace/[workspaceId]/w/[workflowId]/c
import { useDependsOnGate } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/hooks/use-depends-on-gate'
import { useForeignCredential } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/hooks/use-foreign-credential'
import { useSubBlockValue } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/hooks/use-sub-block-value'
import { resolvePreviewContextValue } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/utils'
import type { SubBlockConfig } from '@/blocks/types'
import { resolveSelectorForSubBlock } from '@/hooks/selectors/resolution'
import { useCollaborativeWorkflow } from '@/hooks/use-collaborative-workflow'
@@ -17,6 +18,7 @@ interface FolderSelectorInputProps {
disabled?: boolean
isPreview?: boolean
previewValue?: any | null
previewContextValues?: Record<string, unknown>
}
export function FolderSelectorInput({
@@ -25,9 +27,13 @@ export function FolderSelectorInput({
disabled = false,
isPreview = false,
previewValue,
previewContextValues,
}: FolderSelectorInputProps) {
const [storeValue] = useSubBlockValue(blockId, subBlock.id)
const [connectedCredential] = useSubBlockValue(blockId, 'credential')
const [credentialFromStore] = useSubBlockValue(blockId, 'credential')
const connectedCredential = previewContextValues
? resolvePreviewContextValue(previewContextValues.credential)
: credentialFromStore
const { collaborativeSetSubblockValue } = useCollaborativeWorkflow()
const { activeWorkflowId } = useWorkflowRegistry()
const [selectedFolderId, setSelectedFolderId] = useState<string>('')
@@ -47,7 +53,11 @@ export function FolderSelectorInput({
)
// Central dependsOn gating
const { finalDisabled } = useDependsOnGate(blockId, subBlock, { disabled, isPreview })
const { finalDisabled } = useDependsOnGate(blockId, subBlock, {
disabled,
isPreview,
previewContextValues,
})
// Get the current value from the store or prop value if in preview mode
useEffect(() => {

View File

@@ -7,6 +7,7 @@ import { formatDisplayText } from '@/app/workspace/[workspaceId]/w/[workflowId]/
import { TagDropdown } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/components/tag-dropdown/tag-dropdown'
import { useSubBlockInput } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/hooks/use-sub-block-input'
import { useSubBlockValue } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/hooks/use-sub-block-value'
import { resolvePreviewContextValue } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/utils'
import { useAccessibleReferencePrefixes } from '@/app/workspace/[workspaceId]/w/[workflowId]/hooks/use-accessible-reference-prefixes'
import { useWorkflowState } from '@/hooks/queries/workflows'
@@ -37,6 +38,8 @@ interface InputMappingProps {
isPreview?: boolean
previewValue?: Record<string, unknown>
disabled?: boolean
/** Sub-block values from the preview context for resolving sibling sub-block values */
previewContextValues?: Record<string, unknown>
}
/**
@@ -50,9 +53,13 @@ export function InputMapping({
isPreview = false,
previewValue,
disabled = false,
previewContextValues,
}: InputMappingProps) {
const [mapping, setMapping] = useSubBlockValue(blockId, subBlockId)
const [selectedWorkflowId] = useSubBlockValue(blockId, 'workflowId')
const [storeWorkflowId] = useSubBlockValue(blockId, 'workflowId')
const selectedWorkflowId = previewContextValues
? resolvePreviewContextValue(previewContextValues.workflowId)
: storeWorkflowId
const inputController = useSubBlockInput({
blockId,

View File

@@ -17,6 +17,7 @@ import { type FilterFieldType, getOperatorsForFieldType } from '@/lib/knowledge/
import { formatDisplayText } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/components/formatted-text'
import { TagDropdown } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/components/tag-dropdown/tag-dropdown'
import { useSubBlockInput } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/hooks/use-sub-block-input'
import { resolvePreviewContextValue } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/utils'
import { useAccessibleReferencePrefixes } from '@/app/workspace/[workspaceId]/w/[workflowId]/hooks/use-accessible-reference-prefixes'
import type { SubBlockConfig } from '@/blocks/types'
import { useKnowledgeBaseTagDefinitions } from '@/hooks/kb/use-knowledge-base-tag-definitions'
@@ -69,7 +70,9 @@ export function KnowledgeTagFilters({
const overlayRefs = useRef<Record<string, HTMLDivElement>>({})
const [knowledgeBaseIdFromStore] = useSubBlockValue(blockId, 'knowledgeBaseId')
const knowledgeBaseIdValue = previewContextValues?.knowledgeBaseId ?? knowledgeBaseIdFromStore
const knowledgeBaseIdValue = previewContextValues
? resolvePreviewContextValue(previewContextValues.knowledgeBaseId)
: knowledgeBaseIdFromStore
const knowledgeBaseId =
typeof knowledgeBaseIdValue === 'string' && knowledgeBaseIdValue.trim().length > 0
? knowledgeBaseIdValue

View File

@@ -6,6 +6,7 @@ import { cn } from '@/lib/core/utils/cn'
import { LongInput } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/components/long-input/long-input'
import { ShortInput } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/components/short-input/short-input'
import { useSubBlockValue } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/hooks/use-sub-block-value'
import { resolvePreviewContextValue } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/utils'
import type { SubBlockConfig } from '@/blocks/types'
import { useMcpTools } from '@/hooks/mcp/use-mcp-tools'
import { formatParameterLabel } from '@/tools/params'
@@ -18,6 +19,7 @@ interface McpDynamicArgsProps {
disabled?: boolean
isPreview?: boolean
previewValue?: any
previewContextValues?: Record<string, unknown>
}
/**
@@ -47,12 +49,19 @@ export function McpDynamicArgs({
disabled = false,
isPreview = false,
previewValue,
previewContextValues,
}: McpDynamicArgsProps) {
const params = useParams()
const workspaceId = params.workspaceId as string
const { mcpTools, isLoading } = useMcpTools(workspaceId)
const [selectedTool] = useSubBlockValue(blockId, 'tool')
const [cachedSchema] = useSubBlockValue(blockId, '_toolSchema')
const [toolFromStore] = useSubBlockValue(blockId, 'tool')
const selectedTool = previewContextValues
? resolvePreviewContextValue(previewContextValues.tool)
: toolFromStore
const [schemaFromStore] = useSubBlockValue(blockId, '_toolSchema')
const cachedSchema = previewContextValues
? resolvePreviewContextValue(previewContextValues._toolSchema)
: schemaFromStore
const [toolArgs, setToolArgs] = useSubBlockValue(blockId, subBlockId)
const selectedToolConfig = mcpTools.find((tool) => tool.id === selectedTool)

View File

@@ -4,6 +4,7 @@ import { useEffect, useMemo, useState } from 'react'
import { useParams } from 'next/navigation'
import { Combobox } from '@/components/emcn/components'
import { useSubBlockValue } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/hooks/use-sub-block-value'
import { resolvePreviewContextValue } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/utils'
import type { SubBlockConfig } from '@/blocks/types'
import { useMcpTools } from '@/hooks/mcp/use-mcp-tools'
@@ -13,6 +14,7 @@ interface McpToolSelectorProps {
disabled?: boolean
isPreview?: boolean
previewValue?: string | null
previewContextValues?: Record<string, unknown>
}
export function McpToolSelector({
@@ -21,6 +23,7 @@ export function McpToolSelector({
disabled = false,
isPreview = false,
previewValue,
previewContextValues,
}: McpToolSelectorProps) {
const params = useParams()
const workspaceId = params.workspaceId as string
@@ -31,7 +34,10 @@ export function McpToolSelector({
const [storeValue, setStoreValue] = useSubBlockValue(blockId, subBlock.id)
const [, setSchemaCache] = useSubBlockValue(blockId, '_toolSchema')
const [serverValue] = useSubBlockValue(blockId, 'server')
const [serverFromStore] = useSubBlockValue(blockId, 'server')
const serverValue = previewContextValues
? resolvePreviewContextValue(previewContextValues.server)
: serverFromStore
const label = subBlock.placeholder || 'Select tool'

View File

@@ -9,6 +9,7 @@ import { SelectorCombobox } from '@/app/workspace/[workspaceId]/w/[workflowId]/c
import { useDependsOnGate } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/hooks/use-depends-on-gate'
import { useForeignCredential } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/hooks/use-foreign-credential'
import { useSubBlockValue } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/hooks/use-sub-block-value'
import { resolvePreviewContextValue } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/utils'
import { getBlock } from '@/blocks/registry'
import type { SubBlockConfig } from '@/blocks/types'
import { resolveSelectorForSubBlock } from '@/hooks/selectors/resolution'
@@ -55,14 +56,19 @@ export function ProjectSelectorInput({
return (workflowValues as Record<string, Record<string, unknown>>)[blockId] || {}
})
const connectedCredential = previewContextValues?.credential ?? blockValues.credential
const jiraDomain = previewContextValues?.domain ?? jiraDomainFromStore
const connectedCredential = previewContextValues
? resolvePreviewContextValue(previewContextValues.credential)
: blockValues.credential
const jiraDomain = previewContextValues
? resolvePreviewContextValue(previewContextValues.domain)
: jiraDomainFromStore
const linearTeamId = useMemo(
() =>
previewContextValues?.teamId ??
resolveDependencyValue('teamId', blockValues, canonicalIndex, canonicalModeOverrides),
[previewContextValues?.teamId, blockValues, canonicalIndex, canonicalModeOverrides]
previewContextValues
? resolvePreviewContextValue(previewContextValues.teamId)
: resolveDependencyValue('teamId', blockValues, canonicalIndex, canonicalModeOverrides),
[previewContextValues, blockValues, canonicalIndex, canonicalModeOverrides]
)
const serviceId = subBlock.serviceId || ''

View File
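
One subtle point in the ProjectSelectorInput hunk above: because `linearTeamId` now branches on the presence of `previewContextValues` itself, its dependency array must list the whole object rather than `previewContextValues?.teamId`. A reduced sketch of the hazard, with hypothetical names and the store-side fallback simplified away:

```typescript
import { useMemo } from 'react'

// With deps of [preview?.teamId], toggling `preview` between undefined and an
// object that lacks teamId leaves the dep value unchanged (undefined in both
// cases), so the memo would keep serving the stale store-derived result.
// Depending on the object itself re-evaluates whenever preview mode toggles.
function useTeamId(preview: Record<string, unknown> | undefined, storeTeamId: string | null) {
  return useMemo(
    () => (preview ? ((preview.teamId as string | undefined) ?? null) : storeTeamId),
    [preview, storeTeamId]
  )
}
```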

@@ -8,6 +8,7 @@ import { buildCanonicalIndex, resolveDependencyValue } from '@/lib/workflows/sub
import { SelectorCombobox } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/components/selector-combobox/selector-combobox'
import { useDependsOnGate } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/hooks/use-depends-on-gate'
import { useForeignCredential } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/hooks/use-foreign-credential'
import { resolvePreviewContextValue } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/utils'
import { getBlock } from '@/blocks/registry'
import type { SubBlockConfig } from '@/blocks/types'
import { resolveSelectorForSubBlock, type SelectorResolution } from '@/hooks/selectors/resolution'
@@ -66,9 +67,12 @@ export function SheetSelectorInput({
[blockValues, canonicalIndex, canonicalModeOverrides]
)
const connectedCredential = previewContextValues?.credential ?? connectedCredentialFromStore
const connectedCredential = previewContextValues
? resolvePreviewContextValue(previewContextValues.credential)
: connectedCredentialFromStore
const spreadsheetId = previewContextValues
? (previewContextValues.spreadsheetId ?? previewContextValues.manualSpreadsheetId)
? (resolvePreviewContextValue(previewContextValues.spreadsheetId) ??
resolvePreviewContextValue(previewContextValues.manualSpreadsheetId))
: spreadsheetIdFromStore
const normalizedCredentialId =

View File

@@ -130,39 +130,52 @@ export function SkillInput({
onOpenChange={setOpen}
/>
{selectedSkills.length > 0 && (
<div className='flex flex-wrap gap-[4px]'>
{selectedSkills.map((stored) => {
const fullSkill = workspaceSkills.find((s) => s.id === stored.skillId)
return (
{selectedSkills.length > 0 &&
selectedSkills.map((stored) => {
const fullSkill = workspaceSkills.find((s) => s.id === stored.skillId)
return (
<div
key={stored.skillId}
className='group relative flex flex-col overflow-hidden rounded-[4px] border border-[var(--border-1)] transition-all duration-200 ease-in-out'
>
<div
key={stored.skillId}
className='flex cursor-pointer items-center gap-[4px] rounded-[4px] border border-[var(--border-1)] bg-[var(--surface-5)] px-[6px] py-[2px] font-medium text-[12px] text-[var(--text-secondary)] hover:bg-[var(--surface-6)]'
className='flex cursor-pointer items-center justify-between gap-[8px] rounded-t-[4px] bg-[var(--surface-4)] px-[8px] py-[6.5px]'
onClick={() => {
if (fullSkill && !disabled && !isPreview) {
setEditingSkill(fullSkill)
}
}}
>
<AgentSkillsIcon className='h-[10px] w-[10px] text-[var(--text-tertiary)]' />
<span className='max-w-[140px] truncate'>{resolveSkillName(stored)}</span>
{!disabled && !isPreview && (
<button
type='button'
onClick={(e) => {
e.stopPropagation()
handleRemove(stored.skillId)
}}
className='ml-[2px] rounded-[2px] p-[1px] text-[var(--text-tertiary)] hover:bg-[var(--surface-7)] hover:text-[var(--text-secondary)]'
<div className='flex min-w-0 flex-1 items-center gap-[8px]'>
<div
className='flex h-[16px] w-[16px] flex-shrink-0 items-center justify-center rounded-[4px]'
style={{ backgroundColor: '#e0e0e0' }}
>
<XIcon className='h-[10px] w-[10px]' />
</button>
)}
<AgentSkillsIcon className='h-[10px] w-[10px] text-[#333]' />
</div>
<span className='truncate font-medium text-[13px] text-[var(--text-primary)]'>
{resolveSkillName(stored)}
</span>
</div>
<div className='flex flex-shrink-0 items-center gap-[8px]'>
{!disabled && !isPreview && (
<button
type='button'
onClick={(e) => {
e.stopPropagation()
handleRemove(stored.skillId)
}}
className='flex items-center justify-center text-[var(--text-tertiary)] transition-colors hover:text-[var(--text-primary)]'
aria-label='Remove skill'
>
<XIcon className='h-[13px] w-[13px]' />
</button>
)}
</div>
</div>
)
})}
</div>
)}
</div>
)
})}
</div>
<SkillModal

View File

@@ -8,6 +8,7 @@ import { SelectorCombobox } from '@/app/workspace/[workspaceId]/w/[workflowId]/c
import { useDependsOnGate } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/hooks/use-depends-on-gate'
import { useForeignCredential } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/hooks/use-foreign-credential'
import { useSubBlockValue } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/hooks/use-sub-block-value'
import { resolvePreviewContextValue } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/utils'
import type { SubBlockConfig } from '@/blocks/types'
import type { SelectorContext, SelectorKey } from '@/hooks/selectors/types'
@@ -58,9 +59,15 @@ export function SlackSelectorInput({
const [botToken] = useSubBlockValue(blockId, 'botToken')
const [connectedCredential] = useSubBlockValue(blockId, 'credential')
const effectiveAuthMethod = previewContextValues?.authMethod ?? authMethod
const effectiveBotToken = previewContextValues?.botToken ?? botToken
const effectiveCredential = previewContextValues?.credential ?? connectedCredential
const effectiveAuthMethod = previewContextValues
? resolvePreviewContextValue(previewContextValues.authMethod)
: authMethod
const effectiveBotToken = previewContextValues
? resolvePreviewContextValue(previewContextValues.botToken)
: botToken
const effectiveCredential = previewContextValues
? resolvePreviewContextValue(previewContextValues.credential)
: connectedCredential
const [_selectedValue, setSelectedValue] = useState<string | null>(null)
const serviceId = subBlock.serviceId || ''

View File

@@ -332,6 +332,7 @@ function FolderSelectorSyncWrapper({
dependsOn: uiComponent.dependsOn,
}}
disabled={disabled}
previewContextValues={previewContextValues}
/>
</GenericSyncWrapper>
)

View File

@@ -797,6 +797,7 @@ function SubBlockComponent({
disabled={isDisabled}
isPreview={isPreview}
previewValue={previewValue}
previewContextValues={isPreview ? subBlockValues : undefined}
/>
)
@@ -832,6 +833,7 @@ function SubBlockComponent({
disabled={isDisabled}
isPreview={isPreview}
previewValue={previewValue}
previewContextValues={isPreview ? subBlockValues : undefined}
/>
)
@@ -843,6 +845,7 @@ function SubBlockComponent({
disabled={isDisabled}
isPreview={isPreview}
previewValue={previewValue}
previewContextValues={isPreview ? subBlockValues : undefined}
/>
)
@@ -865,6 +868,7 @@ function SubBlockComponent({
disabled={isDisabled}
isPreview={isPreview}
previewValue={previewValue as any}
previewContextValues={isPreview ? subBlockValues : undefined}
/>
)
@@ -876,6 +880,7 @@ function SubBlockComponent({
disabled={isDisabled}
isPreview={isPreview}
previewValue={previewValue as any}
previewContextValues={isPreview ? subBlockValues : undefined}
/>
)
@@ -887,6 +892,7 @@ function SubBlockComponent({
disabled={isDisabled}
isPreview={isPreview}
previewValue={previewValue as any}
previewContextValues={isPreview ? subBlockValues : undefined}
/>
)
@@ -911,6 +917,7 @@ function SubBlockComponent({
isPreview={isPreview}
previewValue={previewValue as any}
disabled={isDisabled}
previewContextValues={isPreview ? subBlockValues : undefined}
/>
)
@@ -946,6 +953,7 @@ function SubBlockComponent({
disabled={isDisabled}
isPreview={isPreview}
previewValue={previewValue}
previewContextValues={isPreview ? subBlockValues : undefined}
/>
)
@@ -979,6 +987,7 @@ function SubBlockComponent({
disabled={isDisabled}
isPreview={isPreview}
previewValue={previewValue as any}
previewContextValues={isPreview ? subBlockValues : undefined}
/>
)
@@ -990,6 +999,7 @@ function SubBlockComponent({
disabled={isDisabled}
isPreview={isPreview}
previewValue={previewValue}
previewContextValues={isPreview ? subBlockValues : undefined}
/>
)

View File

@@ -0,0 +1,18 @@
/**
* Extracts the raw value from a preview context entry.
*
* @remarks
* In the sub-block preview context, values are wrapped as `{ value: T }` objects
* (the full sub-block state). In the tool-input preview context, values are already
* raw. This function normalizes both cases to return the underlying value.
*
* @param raw - The preview context entry, which may be a raw value or a `{ value: T }` wrapper
* @returns The unwrapped value, or `null` if the input is nullish
*/
export function resolvePreviewContextValue(raw: unknown): unknown {
if (raw === null || raw === undefined) return null
if (typeof raw === 'object' && !Array.isArray(raw) && 'value' in raw) {
return (raw as Record<string, unknown>).value ?? null
}
return raw
}
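
To make the two wrapper shapes concrete, a few illustrative calls (results follow directly from the function above):

```typescript
resolvePreviewContextValue({ value: 'kb_123' }) // 'kb_123' — sub-block state wrapper
resolvePreviewContextValue('kb_123')            // 'kb_123' — already-raw tool-input value
resolvePreviewContextValue({ value: null })     // null      — wrapped but unset
resolvePreviewContextValue(undefined)           // null      — missing entry
resolvePreviewContextValue(['a', 'b'])          // ['a','b'] — arrays pass through untouched
```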

View File

@@ -6,6 +6,7 @@ import {
isSubBlockVisibleForMode,
} from '@/lib/workflows/subblocks/visibility'
import type { BlockConfig, SubBlockConfig, SubBlockType } from '@/blocks/types'
import { usePermissionConfig } from '@/hooks/use-permission-config'
import { useWorkflowDiffStore } from '@/stores/workflow-diff'
import { mergeSubblockState } from '@/stores/workflows/utils'
import { useWorkflowStore } from '@/stores/workflows/workflow/store'
@@ -35,6 +36,7 @@ export function useEditorSubblockLayout(
const blockDataFromStore = useWorkflowStore(
useCallback((state) => state.blocks?.[blockId]?.data, [blockId])
)
const { config: permissionConfig } = usePermissionConfig()
return useMemo(() => {
// Guard against missing config or block selection
@@ -100,6 +102,9 @@ export function useEditorSubblockLayout(
const visibleSubBlocks = (config.subBlocks || []).filter((block) => {
if (block.hidden) return false
// Hide skill-input subblock when skills are disabled via permissions
if (block.type === 'skill-input' && permissionConfig.disableSkills) return false
// Check required feature if specified - declarative feature gating
if (!isSubBlockFeatureEnabled(block)) return false
@@ -149,5 +154,6 @@ export function useEditorSubblockLayout(
activeWorkflowId,
isSnapshotView,
blockDataFromStore,
permissionConfig.disableSkills,
])
}

View File

@@ -40,6 +40,7 @@ import { useCustomTools } from '@/hooks/queries/custom-tools'
import { useMcpServers, useMcpToolsQuery } from '@/hooks/queries/mcp'
import { useCredentialName } from '@/hooks/queries/oauth-credentials'
import { useReactivateSchedule, useScheduleInfo } from '@/hooks/queries/schedules'
import { useSkills } from '@/hooks/queries/skills'
import { useDeployChildWorkflow } from '@/hooks/queries/workflows'
import { useSelectorDisplayName } from '@/hooks/use-selector-display-name'
import { useVariablesStore } from '@/stores/panel'
@@ -618,6 +619,48 @@ const SubBlockRow = memo(function SubBlockRow({
return `${toolNames[0]}, ${toolNames[1]} +${toolNames.length - 2}`
}, [subBlock?.type, rawValue, customTools, workspaceId])
/**
* Hydrates skill references to display names.
* Resolves skill IDs to their current names from the skills query.
*/
const { data: workspaceSkills = [] } = useSkills(workspaceId || '')
const skillsDisplayValue = useMemo(() => {
if (subBlock?.type !== 'skill-input' || !Array.isArray(rawValue) || rawValue.length === 0) {
return null
}
interface StoredSkill {
skillId: string
name?: string
}
const skillNames = rawValue
.map((skill: StoredSkill) => {
if (!skill || typeof skill !== 'object') return null
// Priority 1: Resolve skill name from the skills query (fresh data)
if (skill.skillId) {
const foundSkill = workspaceSkills.find((s) => s.id === skill.skillId)
if (foundSkill?.name) return foundSkill.name
}
// Priority 2: Fall back to stored name (for deleted skills)
if (skill.name && typeof skill.name === 'string') return skill.name
// Priority 3: Use skillId as last resort
if (skill.skillId) return skill.skillId
return null
})
.filter((name): name is string => !!name)
if (skillNames.length === 0) return null
if (skillNames.length === 1) return skillNames[0]
if (skillNames.length === 2) return `${skillNames[0]}, ${skillNames[1]}`
return `${skillNames[0]}, ${skillNames[1]} +${skillNames.length - 2}`
}, [subBlock?.type, rawValue, workspaceSkills])
const isPasswordField = subBlock?.password === true
const maskedValue = isPasswordField && value && value !== '-' ? '•••' : null
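
The skills memo above reuses the same `first, second +N` summary shape as the tools row before it. Extracted for illustration:

```typescript
// The truncated-list summary both display rows produce.
function summarizeNames(names: string[]): string | null {
  if (names.length === 0) return null
  if (names.length === 1) return names[0]
  if (names.length === 2) return `${names[0]}, ${names[1]}`
  return `${names[0]}, ${names[1]} +${names.length - 2}`
}

summarizeNames(['search-docs'])              // 'search-docs'
summarizeNames(['search-docs', 'summarize']) // 'search-docs, summarize'
summarizeNames(['a', 'b', 'c', 'd'])         // 'a, b +2'
```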
@@ -627,6 +670,7 @@ const SubBlockRow = memo(function SubBlockRow({
dropdownLabel ||
variablesDisplayValue ||
toolsDisplayValue ||
skillsDisplayValue ||
knowledgeBaseDisplayName ||
workflowSelectionName ||
mcpServerDisplayName ||

View File

@@ -784,8 +784,12 @@ function PreviewEditorContent({
? childWorkflowSnapshotState
: childWorkflowState
const resolvedIsLoadingChildWorkflow = isExecutionMode ? false : isLoadingChildWorkflow
const isBlockNotExecuted = isExecutionMode && !executionData
const isMissingChildWorkflow =
Boolean(childWorkflowId) && !resolvedIsLoadingChildWorkflow && !resolvedChildWorkflowState
Boolean(childWorkflowId) &&
!isBlockNotExecuted &&
!resolvedIsLoadingChildWorkflow &&
!resolvedChildWorkflowState
/** Drills down into the child workflow or opens it in a new tab */
const handleExpandChildWorkflow = useCallback(() => {
@@ -1192,7 +1196,7 @@ function PreviewEditorContent({
<div ref={subBlocksRef} className='subblocks-section flex flex-1 flex-col overflow-hidden'>
<div className='flex-1 overflow-y-auto overflow-x-hidden'>
{/* Not Executed Banner - shown when in execution mode but block wasn't executed */}
{isExecutionMode && !executionData && (
{isBlockNotExecuted && (
<div className='flex min-w-0 flex-col gap-[8px] overflow-hidden border-[var(--border)] border-b px-[12px] py-[10px]'>
<div className='flex items-center justify-between'>
<Badge variant='gray-secondary' size='sm' dot>
@@ -1419,9 +1423,11 @@ function PreviewEditorContent({
) : (
<div className='flex h-full items-center justify-center bg-[var(--surface-3)]'>
<span className='text-[13px] text-[var(--text-tertiary)]'>
{isMissingChildWorkflow
? DELETED_WORKFLOW_LABEL
: 'Unable to load preview'}
{isBlockNotExecuted
? 'Not Executed'
: isMissingChildWorkflow
? DELETED_WORKFLOW_LABEL
: 'Unable to load preview'}
</span>
</div>
)}

View File
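
With the new `isBlockNotExecuted` flag, the preview pane's fallback label resolves in a fixed order; as a standalone function (the deleted-workflow label text itself is not shown in this diff):

```typescript
// Label precedence for the child-workflow preview pane, per the hunks above:
// not-executed beats missing-workflow, which beats the generic failure text.
function previewFallbackLabel(
  isBlockNotExecuted: boolean,
  isMissingChildWorkflow: boolean,
  deletedWorkflowLabel: string
): string {
  if (isBlockNotExecuted) return 'Not Executed'
  if (isMissingChildWorkflow) return deletedWorkflowLabel
  return 'Unable to load preview'
}
```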

@@ -27,6 +27,13 @@ interface SkillModalProps {
const KEBAB_CASE_REGEX = /^[a-z0-9]+(-[a-z0-9]+)*$/
interface FieldErrors {
name?: string
description?: string
content?: string
general?: string
}
export function SkillModal({
open,
onOpenChange,
@@ -43,7 +50,7 @@ export function SkillModal({
const [name, setName] = useState('')
const [description, setDescription] = useState('')
const [content, setContent] = useState('')
const [formError, setFormError] = useState('')
const [errors, setErrors] = useState<FieldErrors>({})
const [saving, setSaving] = useState(false)
useEffect(() => {
@@ -57,7 +64,7 @@ export function SkillModal({
setDescription('')
setContent('')
}
setFormError('')
setErrors({})
}
}, [open, initialValues])
@@ -71,24 +78,26 @@ export function SkillModal({
}, [name, description, content, initialValues])
const handleSave = async () => {
const newErrors: FieldErrors = {}
if (!name.trim()) {
setFormError('Name is required')
return
}
if (name.length > 64) {
setFormError('Name must be 64 characters or less')
return
}
if (!KEBAB_CASE_REGEX.test(name)) {
setFormError('Name must be kebab-case (e.g. my-skill)')
return
newErrors.name = 'Name is required'
} else if (name.length > 64) {
newErrors.name = 'Name must be 64 characters or less'
} else if (!KEBAB_CASE_REGEX.test(name)) {
newErrors.name = 'Name must be kebab-case (e.g. my-skill)'
}
if (!description.trim()) {
setFormError('Description is required')
return
newErrors.description = 'Description is required'
}
if (!content.trim()) {
setFormError('Content is required')
newErrors.content = 'Content is required'
}
if (Object.keys(newErrors).length > 0) {
setErrors(newErrors)
return
}
@@ -113,7 +122,7 @@ export function SkillModal({
error instanceof Error && error.message.includes('already exists')
? error.message
: 'Failed to save skill. Please try again.'
setFormError(message)
setErrors({ general: message })
} finally {
setSaving(false)
}
@@ -135,12 +144,17 @@ export function SkillModal({
value={name}
onChange={(e) => {
setName(e.target.value)
if (formError) setFormError('')
if (errors.name || errors.general)
setErrors((prev) => ({ ...prev, name: undefined, general: undefined }))
}}
/>
<span className='text-[11px] text-[var(--text-muted)]'>
Lowercase letters, numbers, and hyphens (e.g. my-skill)
</span>
{errors.name ? (
<p className='text-[12px] text-[var(--text-error)]'>{errors.name}</p>
) : (
<span className='text-[11px] text-[var(--text-muted)]'>
Lowercase letters, numbers, and hyphens (e.g. my-skill)
</span>
)}
</div>
<div className='flex flex-col gap-[4px]'>
@@ -153,10 +167,14 @@ export function SkillModal({
value={description}
onChange={(e) => {
setDescription(e.target.value)
if (formError) setFormError('')
if (errors.description || errors.general)
setErrors((prev) => ({ ...prev, description: undefined, general: undefined }))
}}
maxLength={1024}
/>
{errors.description && (
<p className='text-[12px] text-[var(--text-error)]'>{errors.description}</p>
)}
</div>
<div className='flex flex-col gap-[4px]'>
@@ -169,13 +187,19 @@ export function SkillModal({
value={content}
onChange={(e: ChangeEvent<HTMLTextAreaElement>) => {
setContent(e.target.value)
if (formError) setFormError('')
if (errors.content || errors.general)
setErrors((prev) => ({ ...prev, content: undefined, general: undefined }))
}}
className='min-h-[200px] resize-y font-mono text-[13px]'
/>
{errors.content && (
<p className='text-[12px] text-[var(--text-error)]'>{errors.content}</p>
)}
</div>
{formError && <span className='text-[11px] text-[var(--text-error)]'>{formError}</span>}
{errors.general && (
<p className='text-[12px] text-[var(--text-error)]'>{errors.general}</p>
)}
</div>
</ModalBody>
<ModalFooter className='items-center justify-between'>

View File
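
The modal above moves from a single `formError` string to per-field errors collected in one pass, so every invalid field surfaces at once instead of only the first. A condensed sketch of the collect-then-commit validation (names match the diff; the standalone function wrapper is for illustration):

```typescript
interface FieldErrors {
  name?: string
  description?: string
  content?: string
  general?: string
}

const KEBAB_CASE_REGEX = /^[a-z0-9]+(-[a-z0-9]+)*$/

// Collect every violation rather than returning at the first one,
// then commit the whole map in a single state update.
function validateSkill(name: string, description: string, content: string): FieldErrors {
  const errors: FieldErrors = {}
  if (!name.trim()) errors.name = 'Name is required'
  else if (name.length > 64) errors.name = 'Name must be 64 characters or less'
  else if (!KEBAB_CASE_REGEX.test(name)) errors.name = 'Name must be kebab-case (e.g. my-skill)'
  if (!description.trim()) errors.description = 'Description is required'
  if (!content.trim()) errors.content = 'Content is required'
  return errors
}
```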

@@ -1,31 +1,9 @@
import { createLogger } from '@sim/logger'
import { AgentIcon } from '@/components/icons'
import { isHosted } from '@/lib/core/config/feature-flags'
import type { BlockConfig } from '@/blocks/types'
import { AuthMode } from '@/blocks/types'
import {
getBaseModelProviders,
getHostedModels,
getMaxTemperature,
getProviderIcon,
getReasoningEffortValuesForModel,
getThinkingLevelsForModel,
getVerbosityValuesForModel,
MODELS_WITH_REASONING_EFFORT,
MODELS_WITH_THINKING,
MODELS_WITH_VERBOSITY,
providers,
supportsTemperature,
} from '@/providers/utils'
const getCurrentOllamaModels = () => {
return useProvidersStore.getState().providers.ollama.models
}
const getCurrentVLLMModels = () => {
return useProvidersStore.getState().providers.vllm.models
}
import { getApiKeyCondition, getModelConfigSubBlocks, MODEL_CONFIG_INPUTS } from '@/blocks/utils'
import { getBaseModelProviders, getProviderIcon, providers } from '@/providers/utils'
import { useProvidersStore } from '@/stores/providers'
import type { ToolResponse } from '@/tools/types'
@@ -158,165 +136,7 @@ Return ONLY the JSON array.`,
value: providers.vertex.models,
},
},
{
id: 'reasoningEffort',
title: 'Reasoning Effort',
type: 'dropdown',
placeholder: 'Select reasoning effort...',
options: [
{ label: 'low', id: 'low' },
{ label: 'medium', id: 'medium' },
{ label: 'high', id: 'high' },
],
dependsOn: ['model'],
fetchOptions: async (blockId: string) => {
const { useSubBlockStore } = await import('@/stores/workflows/subblock/store')
const { useWorkflowRegistry } = await import('@/stores/workflows/registry/store')
const activeWorkflowId = useWorkflowRegistry.getState().activeWorkflowId
if (!activeWorkflowId) {
return [
{ label: 'low', id: 'low' },
{ label: 'medium', id: 'medium' },
{ label: 'high', id: 'high' },
]
}
const workflowValues = useSubBlockStore.getState().workflowValues[activeWorkflowId]
const blockValues = workflowValues?.[blockId]
const modelValue = blockValues?.model as string
if (!modelValue) {
return [
{ label: 'low', id: 'low' },
{ label: 'medium', id: 'medium' },
{ label: 'high', id: 'high' },
]
}
const validOptions = getReasoningEffortValuesForModel(modelValue)
if (!validOptions) {
return [
{ label: 'low', id: 'low' },
{ label: 'medium', id: 'medium' },
{ label: 'high', id: 'high' },
]
}
return validOptions.map((opt) => ({ label: opt, id: opt }))
},
value: () => 'medium',
condition: {
field: 'model',
value: MODELS_WITH_REASONING_EFFORT,
},
},
{
id: 'verbosity',
title: 'Verbosity',
type: 'dropdown',
placeholder: 'Select verbosity...',
options: [
{ label: 'low', id: 'low' },
{ label: 'medium', id: 'medium' },
{ label: 'high', id: 'high' },
],
dependsOn: ['model'],
fetchOptions: async (blockId: string) => {
const { useSubBlockStore } = await import('@/stores/workflows/subblock/store')
const { useWorkflowRegistry } = await import('@/stores/workflows/registry/store')
const activeWorkflowId = useWorkflowRegistry.getState().activeWorkflowId
if (!activeWorkflowId) {
return [
{ label: 'low', id: 'low' },
{ label: 'medium', id: 'medium' },
{ label: 'high', id: 'high' },
]
}
const workflowValues = useSubBlockStore.getState().workflowValues[activeWorkflowId]
const blockValues = workflowValues?.[blockId]
const modelValue = blockValues?.model as string
if (!modelValue) {
return [
{ label: 'low', id: 'low' },
{ label: 'medium', id: 'medium' },
{ label: 'high', id: 'high' },
]
}
const validOptions = getVerbosityValuesForModel(modelValue)
if (!validOptions) {
return [
{ label: 'low', id: 'low' },
{ label: 'medium', id: 'medium' },
{ label: 'high', id: 'high' },
]
}
return validOptions.map((opt) => ({ label: opt, id: opt }))
},
value: () => 'medium',
condition: {
field: 'model',
value: MODELS_WITH_VERBOSITY,
},
},
{
id: 'thinkingLevel',
title: 'Thinking Level',
type: 'dropdown',
placeholder: 'Select thinking level...',
options: [
{ label: 'minimal', id: 'minimal' },
{ label: 'low', id: 'low' },
{ label: 'medium', id: 'medium' },
{ label: 'high', id: 'high' },
{ label: 'max', id: 'max' },
],
dependsOn: ['model'],
fetchOptions: async (blockId: string) => {
const { useSubBlockStore } = await import('@/stores/workflows/subblock/store')
const { useWorkflowRegistry } = await import('@/stores/workflows/registry/store')
const activeWorkflowId = useWorkflowRegistry.getState().activeWorkflowId
if (!activeWorkflowId) {
return [
{ label: 'low', id: 'low' },
{ label: 'high', id: 'high' },
]
}
const workflowValues = useSubBlockStore.getState().workflowValues[activeWorkflowId]
const blockValues = workflowValues?.[blockId]
const modelValue = blockValues?.model as string
if (!modelValue) {
return [
{ label: 'low', id: 'low' },
{ label: 'high', id: 'high' },
]
}
const validOptions = getThinkingLevelsForModel(modelValue)
if (!validOptions) {
return [
{ label: 'low', id: 'low' },
{ label: 'high', id: 'high' },
]
}
return validOptions.map((opt) => ({ label: opt, id: opt }))
},
value: () => 'high',
condition: {
field: 'model',
value: MODELS_WITH_THINKING,
},
},
...getModelConfigSubBlocks(),
{
id: 'azureEndpoint',
title: 'Azure Endpoint',
@@ -333,11 +153,11 @@ Return ONLY the JSON array.`,
id: 'azureApiVersion',
title: 'Azure API Version',
type: 'short-input',
placeholder: '2024-07-01-preview',
placeholder: 'Enter API version',
connectionDroppable: false,
condition: {
field: 'model',
value: providers['azure-openai'].models,
value: [...providers['azure-openai'].models, ...providers['azure-anthropic'].models],
},
},
{
@@ -401,6 +221,16 @@ Return ONLY the JSON array.`,
value: providers.bedrock.models,
},
},
{
id: 'apiKey',
title: 'API Key',
type: 'short-input',
placeholder: 'Enter your API key',
password: true,
connectionDroppable: false,
required: true,
condition: getApiKeyCondition(),
},
{
id: 'tools',
title: 'Tools',
@@ -413,32 +243,6 @@ Return ONLY the JSON array.`,
type: 'skill-input',
defaultValue: [],
},
{
id: 'apiKey',
title: 'API Key',
type: 'short-input',
placeholder: 'Enter your API key',
password: true,
connectionDroppable: false,
required: true,
// Hide API key for hosted models, Ollama models, vLLM models, Vertex models (uses OAuth), and Bedrock (uses AWS credentials)
condition: isHosted
? {
field: 'model',
value: [...getHostedModels(), ...providers.vertex.models, ...providers.bedrock.models],
not: true, // Show for all models EXCEPT those listed
}
: () => ({
field: 'model',
value: [
...getCurrentOllamaModels(),
...getCurrentVLLMModels(),
...providers.vertex.models,
...providers.bedrock.models,
],
not: true, // Show for all models EXCEPT Ollama, vLLM, Vertex, and Bedrock models
}),
},
{
id: 'memoryType',
title: 'Memory',
@@ -486,46 +290,6 @@ Return ONLY the JSON array.`,
value: ['sliding_window_tokens'],
},
},
{
id: 'temperature',
title: 'Temperature',
type: 'slider',
min: 0,
max: 1,
defaultValue: 0.3,
condition: () => ({
field: 'model',
value: (() => {
const allModels = Object.keys(getBaseModelProviders())
return allModels.filter(
(model) => supportsTemperature(model) && getMaxTemperature(model) === 1
)
})(),
}),
},
{
id: 'temperature',
title: 'Temperature',
type: 'slider',
min: 0,
max: 2,
defaultValue: 0.3,
condition: () => ({
field: 'model',
value: (() => {
const allModels = Object.keys(getBaseModelProviders())
return allModels.filter(
(model) => supportsTemperature(model) && getMaxTemperature(model) === 2
)
})(),
}),
},
{
id: 'maxTokens',
title: 'Max Output Tokens',
type: 'short-input',
placeholder: 'Enter max tokens (e.g., 4096)...',
},
{
id: 'responseFormat',
title: 'Response Format',
@@ -715,7 +479,7 @@ Example 3 (Array Input):
},
model: { type: 'string', description: 'AI model to use' },
apiKey: { type: 'string', description: 'Provider API key' },
azureEndpoint: { type: 'string', description: 'Azure OpenAI endpoint URL' },
azureEndpoint: { type: 'string', description: 'Azure endpoint URL' },
azureApiVersion: { type: 'string', description: 'Azure API version' },
vertexProject: { type: 'string', description: 'Google Cloud project ID for Vertex AI' },
vertexLocation: { type: 'string', description: 'Google Cloud location for Vertex AI' },
@@ -766,14 +530,7 @@ Example 3 (Array Input):
required: ['schema'],
},
},
temperature: { type: 'number', description: 'Response randomness level' },
maxTokens: { type: 'number', description: 'Maximum number of tokens in the response' },
reasoningEffort: { type: 'string', description: 'Reasoning effort level for GPT-5 models' },
verbosity: { type: 'string', description: 'Verbosity level for GPT-5 models' },
thinkingLevel: {
type: 'string',
description: 'Thinking level for models with extended thinking (Anthropic Claude, Gemini 3)',
},
...MODEL_CONFIG_INPUTS,
tools: { type: 'json', description: 'Available tools configuration' },
skills: { type: 'json', description: 'Selected skills configuration' },
},

View File

@@ -76,8 +76,9 @@ export const TranslateBlock: BlockConfig = {
vertexProject: params.vertexProject,
vertexLocation: params.vertexLocation,
vertexCredential: params.vertexCredential,
bedrockRegion: params.bedrockRegion,
bedrockAccessKeyId: params.bedrockAccessKeyId,
bedrockSecretKey: params.bedrockSecretKey,
bedrockRegion: params.bedrockRegion,
}),
},
},

View File

@@ -208,7 +208,7 @@ export interface SubBlockConfig {
not?: boolean
}
}
| (() => {
| ((values?: Record<string, unknown>) => {
field: string
value: string | number | boolean | Array<string | number | boolean>
not?: boolean
@@ -261,7 +261,7 @@ export interface SubBlockConfig {
not?: boolean
}
}
| (() => {
| ((values?: Record<string, unknown>) => {
field: string
value: string | number | boolean | Array<string | number | boolean>
not?: boolean

View File

@@ -1,6 +1,19 @@
import { isHosted } from '@/lib/core/config/feature-flags'
import type { BlockOutput, OutputFieldDefinition, SubBlockConfig } from '@/blocks/types'
import { getHostedModels, providers } from '@/providers/utils'
import {
getBaseModelProviders,
getHostedModels,
getMaxTemperature,
getProviderFromModel,
getReasoningEffortValuesForModel,
getThinkingLevelsForModel,
getVerbosityValuesForModel,
MODELS_WITH_REASONING_EFFORT,
MODELS_WITH_THINKING,
MODELS_WITH_VERBOSITY,
providers,
supportsTemperature,
} from '@/providers/utils'
import { useProvidersStore } from '@/stores/providers/store'
/**
@@ -48,11 +61,54 @@ const getCurrentOllamaModels = () => {
return useProvidersStore.getState().providers.ollama.models
}
/**
* Helper to get current vLLM models from store
*/
const getCurrentVLLMModels = () => {
return useProvidersStore.getState().providers.vllm.models
function buildModelVisibilityCondition(model: string, shouldShow: boolean) {
if (!model) {
return { field: 'model', value: '__no_model_selected__' }
}
return shouldShow ? { field: 'model', value: model } : { field: 'model', value: model, not: true }
}
function shouldRequireApiKeyForModel(model: string): boolean {
const normalizedModel = model.trim().toLowerCase()
if (!normalizedModel) return false
const hostedModels = getHostedModels()
const isHostedModel = hostedModels.some(
(hostedModel) => hostedModel.toLowerCase() === normalizedModel
)
if (isHosted && isHostedModel) return false
if (normalizedModel.startsWith('vertex/') || normalizedModel.startsWith('bedrock/')) {
return false
}
if (normalizedModel.startsWith('vllm/')) {
return false
}
const currentOllamaModels = getCurrentOllamaModels()
if (currentOllamaModels.some((ollamaModel) => ollamaModel.toLowerCase() === normalizedModel)) {
return false
}
if (!isHosted) {
try {
const providerId = getProviderFromModel(model)
if (
providerId === 'ollama' ||
providerId === 'vllm' ||
providerId === 'vertex' ||
providerId === 'bedrock'
) {
return false
}
} catch {
// If model resolution fails, fall through and require an API key.
}
}
return true
}
/**
@@ -60,27 +116,16 @@ const getCurrentVLLMModels = () => {
 * Handles hosted vs self-hosted environments and excludes providers that don't need an API key.
*/
export function getApiKeyCondition() {
return isHosted
? {
field: 'model',
value: [...getHostedModels(), ...providers.vertex.models, ...providers.bedrock.models],
not: true,
}
: () => ({
field: 'model',
value: [
...getCurrentOllamaModels(),
...getCurrentVLLMModels(),
...providers.vertex.models,
...providers.bedrock.models,
],
not: true,
})
return (values?: Record<string, unknown>) => {
const model = typeof values?.model === 'string' ? values.model : ''
const shouldShow = shouldRequireApiKeyForModel(model)
return buildModelVisibilityCondition(model, shouldShow)
}
}
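
`getApiKeyCondition` now returns a callback that receives the block's current sub-block values, so API-key visibility tracks the actually selected model instead of a precomputed model list. How the renderer evaluates such callbacks is not shown in this diff; a simplified sketch of a plausible evaluation site, flattening the condition type to its basic form:

```typescript
type FlatCondition = {
  field: string
  value: string | number | boolean | Array<string | number | boolean>
  not?: boolean
}
type Condition = FlatCondition | ((values?: Record<string, unknown>) => FlatCondition)

// Hypothetical evaluator: resolve the callback form against live values,
// then test the referenced field, honoring the `not` inversion.
function isSubBlockVisible(condition: Condition, values: Record<string, unknown>): boolean {
  const resolved = typeof condition === 'function' ? condition(values) : condition
  const expected = Array.isArray(resolved.value) ? resolved.value : [resolved.value]
  const matches = expected.some((v) => v === values[resolved.field])
  return resolved.not ? !matches : matches
}
```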
/**
* Returns the standard provider credential subblocks used by LLM-based blocks.
* This includes: Vertex AI OAuth, API Key, Azure OpenAI, Vertex AI config, and Bedrock config.
* This includes: Vertex AI OAuth, API Key, Azure (OpenAI + Anthropic), Vertex AI config, and Bedrock config.
*
* Usage: Spread into your block's subBlocks array after block-specific fields
*/
@@ -111,25 +156,25 @@ export function getProviderCredentialSubBlocks(): SubBlockConfig[] {
},
{
id: 'azureEndpoint',
title: 'Azure OpenAI Endpoint',
title: 'Azure Endpoint',
type: 'short-input',
password: true,
placeholder: 'https://your-resource.openai.azure.com',
placeholder: 'https://your-resource.services.ai.azure.com',
connectionDroppable: false,
condition: {
field: 'model',
value: providers['azure-openai'].models,
value: [...providers['azure-openai'].models, ...providers['azure-anthropic'].models],
},
},
{
id: 'azureApiVersion',
title: 'Azure API Version',
type: 'short-input',
placeholder: '2024-07-01-preview',
placeholder: 'Enter API version',
connectionDroppable: false,
condition: {
field: 'model',
value: providers['azure-openai'].models,
value: [...providers['azure-openai'].models, ...providers['azure-anthropic'].models],
},
},
{
@@ -202,7 +247,7 @@ export function getProviderCredentialSubBlocks(): SubBlockConfig[] {
*/
export const PROVIDER_CREDENTIAL_INPUTS = {
apiKey: { type: 'string', description: 'Provider API key' },
azureEndpoint: { type: 'string', description: 'Azure OpenAI endpoint URL' },
azureEndpoint: { type: 'string', description: 'Azure endpoint URL' },
azureApiVersion: { type: 'string', description: 'Azure API version' },
vertexProject: { type: 'string', description: 'Google Cloud project ID for Vertex AI' },
vertexLocation: { type: 'string', description: 'Google Cloud location for Vertex AI' },
@@ -250,6 +295,239 @@ export function createVersionedToolSelector<TParams extends Record<string, any>>
}
}
/**
* Returns the standard model configuration subBlocks used by LLM-based blocks.
* Includes: reasoningEffort, verbosity, thinkingLevel, temperature (max=1 and max=2), maxTokens.
*
* Usage: Spread into your block's subBlocks array after provider credential fields
*/
export function getModelConfigSubBlocks(): SubBlockConfig[] {
return [
{
id: 'reasoningEffort',
title: 'Reasoning Effort',
type: 'dropdown',
placeholder: 'Select reasoning effort...',
options: [
{ label: 'auto', id: 'auto' },
{ label: 'low', id: 'low' },
{ label: 'medium', id: 'medium' },
{ label: 'high', id: 'high' },
],
dependsOn: ['model'],
fetchOptions: async (blockId: string) => {
const { useSubBlockStore } = await import('@/stores/workflows/subblock/store')
const { useWorkflowRegistry } = await import('@/stores/workflows/registry/store')
const autoOption = { label: 'auto', id: 'auto' }
const activeWorkflowId = useWorkflowRegistry.getState().activeWorkflowId
if (!activeWorkflowId) {
return [
autoOption,
{ label: 'low', id: 'low' },
{ label: 'medium', id: 'medium' },
{ label: 'high', id: 'high' },
]
}
const workflowValues = useSubBlockStore.getState().workflowValues[activeWorkflowId]
const blockValues = workflowValues?.[blockId]
const modelValue = blockValues?.model as string
if (!modelValue) {
return [
autoOption,
{ label: 'low', id: 'low' },
{ label: 'medium', id: 'medium' },
{ label: 'high', id: 'high' },
]
}
const validOptions = getReasoningEffortValuesForModel(modelValue)
if (!validOptions) {
return [
autoOption,
{ label: 'low', id: 'low' },
{ label: 'medium', id: 'medium' },
{ label: 'high', id: 'high' },
]
}
return [autoOption, ...validOptions.map((opt) => ({ label: opt, id: opt }))]
},
mode: 'advanced',
condition: {
field: 'model',
value: MODELS_WITH_REASONING_EFFORT,
},
},
{
id: 'verbosity',
title: 'Verbosity',
type: 'dropdown',
placeholder: 'Select verbosity...',
options: [
{ label: 'auto', id: 'auto' },
{ label: 'low', id: 'low' },
{ label: 'medium', id: 'medium' },
{ label: 'high', id: 'high' },
],
dependsOn: ['model'],
fetchOptions: async (blockId: string) => {
const { useSubBlockStore } = await import('@/stores/workflows/subblock/store')
const { useWorkflowRegistry } = await import('@/stores/workflows/registry/store')
const autoOption = { label: 'auto', id: 'auto' }
const activeWorkflowId = useWorkflowRegistry.getState().activeWorkflowId
if (!activeWorkflowId) {
return [
autoOption,
{ label: 'low', id: 'low' },
{ label: 'medium', id: 'medium' },
{ label: 'high', id: 'high' },
]
}
const workflowValues = useSubBlockStore.getState().workflowValues[activeWorkflowId]
const blockValues = workflowValues?.[blockId]
const modelValue = blockValues?.model as string
if (!modelValue) {
return [
autoOption,
{ label: 'low', id: 'low' },
{ label: 'medium', id: 'medium' },
{ label: 'high', id: 'high' },
]
}
const validOptions = getVerbosityValuesForModel(modelValue)
if (!validOptions) {
return [
autoOption,
{ label: 'low', id: 'low' },
{ label: 'medium', id: 'medium' },
{ label: 'high', id: 'high' },
]
}
return [autoOption, ...validOptions.map((opt) => ({ label: opt, id: opt }))]
},
mode: 'advanced',
condition: {
field: 'model',
value: MODELS_WITH_VERBOSITY,
},
},
{
id: 'thinkingLevel',
title: 'Thinking Level',
type: 'dropdown',
placeholder: 'Select thinking level...',
options: [
{ label: 'none', id: 'none' },
{ label: 'minimal', id: 'minimal' },
{ label: 'low', id: 'low' },
{ label: 'medium', id: 'medium' },
{ label: 'high', id: 'high' },
{ label: 'max', id: 'max' },
],
dependsOn: ['model'],
fetchOptions: async (blockId: string) => {
const { useSubBlockStore } = await import('@/stores/workflows/subblock/store')
const { useWorkflowRegistry } = await import('@/stores/workflows/registry/store')
const noneOption = { label: 'none', id: 'none' }
const activeWorkflowId = useWorkflowRegistry.getState().activeWorkflowId
if (!activeWorkflowId) {
return [noneOption, { label: 'low', id: 'low' }, { label: 'high', id: 'high' }]
}
const workflowValues = useSubBlockStore.getState().workflowValues[activeWorkflowId]
const blockValues = workflowValues?.[blockId]
const modelValue = blockValues?.model as string
if (!modelValue) {
return [noneOption, { label: 'low', id: 'low' }, { label: 'high', id: 'high' }]
}
const validOptions = getThinkingLevelsForModel(modelValue)
if (!validOptions) {
return [noneOption, { label: 'low', id: 'low' }, { label: 'high', id: 'high' }]
}
return [noneOption, ...validOptions.map((opt) => ({ label: opt, id: opt }))]
},
mode: 'advanced',
condition: {
field: 'model',
value: MODELS_WITH_THINKING,
},
},
{
id: 'temperature',
title: 'Temperature',
type: 'slider',
min: 0,
max: 1,
defaultValue: 0.3,
mode: 'advanced',
condition: () => ({
field: 'model',
value: (() => {
const allModels = Object.keys(getBaseModelProviders())
return allModels.filter(
(model) => supportsTemperature(model) && getMaxTemperature(model) === 1
)
})(),
}),
},
{
id: 'temperature',
title: 'Temperature',
type: 'slider',
min: 0,
max: 2,
defaultValue: 0.3,
mode: 'advanced',
condition: () => ({
field: 'model',
value: (() => {
const allModels = Object.keys(getBaseModelProviders())
return allModels.filter(
(model) => supportsTemperature(model) && getMaxTemperature(model) === 2
)
})(),
}),
},
{
id: 'maxTokens',
title: 'Max Output Tokens',
type: 'short-input',
placeholder: 'Enter max tokens (e.g., 4096)...',
mode: 'advanced',
},
]
}
/**
* Returns the standard input definitions for model configuration parameters.
* Use this in your block's inputs definition.
*/
export const MODEL_CONFIG_INPUTS = {
temperature: { type: 'number', description: 'Response randomness level' },
maxTokens: { type: 'number', description: 'Maximum number of tokens in the response' },
reasoningEffort: { type: 'string', description: 'Reasoning effort level' },
verbosity: { type: 'string', description: 'Verbosity level' },
thinkingLevel: {
type: 'string',
description: 'Thinking level for models with extended thinking',
},
} as const
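
Per the usage notes above, a block config composes these helpers by spreading; a minimal hypothetical block for illustration (only the composition is the point, other fields are elided):

```typescript
import { getApiKeyCondition, getModelConfigSubBlocks, MODEL_CONFIG_INPUTS } from '@/blocks/utils'

// Hypothetical block config: credentials first, then the shared model config,
// mirroring the ordering the agent block uses after this refactor.
const ExampleLlmBlock = {
  subBlocks: [
    { id: 'model', title: 'Model', type: 'dropdown' },
    {
      id: 'apiKey',
      title: 'API Key',
      type: 'short-input',
      password: true,
      required: true,
      condition: getApiKeyCondition(),
    },
    ...getModelConfigSubBlocks(),
  ],
  inputs: {
    model: { type: 'string', description: 'AI model to use' },
    ...MODEL_CONFIG_INPUTS,
  },
}
```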
const DEFAULT_MULTIPLE_FILES_ERROR =
'File reference must be a single file, not an array. Use <block.files[0]> to select one file.'

View File

@@ -5468,18 +5468,18 @@ export function AgentSkillsIcon(props: SVGProps<SVGSVGElement>) {
<svg
{...props}
xmlns='http://www.w3.org/2000/svg'
width='24'
height='24'
viewBox='0 0 32 32'
width='16'
height='16'
viewBox='0 0 16 16'
fill='none'
>
<path d='M16 0.5L29.4234 8.25V23.75L16 31.5L2.57661 23.75V8.25L16 0.5Z' fill='currentColor' />
<path
d='M16 6L24.6603 11V21L16 26L7.33975 21V11L16 6Z'
fill='currentColor'
stroke='var(--background, white)'
strokeWidth='3'
d='M8 1L14.0622 4.5V11.5L8 15L1.93782 11.5V4.5L8 1Z'
stroke='currentColor'
strokeWidth='1.5'
fill='none'
/>
<path d='M8 4.5L11 6.25V9.75L8 11.5L5 9.75V6.25L8 4.5Z' fill='currentColor' />
</svg>
)
}

View File

@@ -326,6 +326,7 @@ export class AgentBlockHandler implements BlockHandler {
_context: {
workflowId: ctx.workflowId,
workspaceId: ctx.workspaceId,
userId: ctx.userId,
isDeployedContext: ctx.isDeployedContext,
},
},
@@ -377,6 +378,9 @@ export class AgentBlockHandler implements BlockHandler {
if (ctx.workflowId) {
params.workflowId = ctx.workflowId
}
if (ctx.userId) {
params.userId = ctx.userId
}
const url = buildAPIUrl('/api/tools/custom', params)
const response = await fetch(url.toString(), {
@@ -487,7 +491,9 @@ export class AgentBlockHandler implements BlockHandler {
usageControl: tool.usageControl || 'auto',
executeFunction: async (callParams: Record<string, any>) => {
const headers = await buildAuthHeaders()
const execUrl = buildAPIUrl('/api/mcp/tools/execute')
const execParams: Record<string, string> = {}
if (ctx.userId) execParams.userId = ctx.userId
const execUrl = buildAPIUrl('/api/mcp/tools/execute', execParams)
const execResponse = await fetch(execUrl.toString(), {
method: 'POST',
@@ -596,6 +602,7 @@ export class AgentBlockHandler implements BlockHandler {
serverId,
workspaceId: ctx.workspaceId,
workflowId: ctx.workflowId,
...(ctx.userId ? { userId: ctx.userId } : {}),
})
const maxAttempts = 2
@@ -670,7 +677,9 @@ export class AgentBlockHandler implements BlockHandler {
usageControl: tool.usageControl || 'auto',
executeFunction: async (callParams: Record<string, any>) => {
const headers = await buildAuthHeaders()
const execUrl = buildAPIUrl('/api/mcp/tools/execute')
const discoverExecParams: Record<string, string> = {}
if (ctx.userId) discoverExecParams.userId = ctx.userId
const execUrl = buildAPIUrl('/api/mcp/tools/execute', discoverExecParams)
const execResponse = await fetch(execUrl.toString(), {
method: 'POST',
@@ -906,24 +915,17 @@ export class AgentBlockHandler implements BlockHandler {
}
}
// Find first system message
const firstSystemIndex = messages.findIndex((msg) => msg.role === 'system')
if (firstSystemIndex === -1) {
// No system message exists - add at position 0
messages.unshift({ role: 'system', content })
} else if (firstSystemIndex === 0) {
// System message already at position 0 - replace it
// Explicit systemPrompt parameter takes precedence over memory/messages
messages[0] = { role: 'system', content }
} else {
// System message exists but not at position 0 - move it to position 0
// and update with new content
messages.splice(firstSystemIndex, 1)
messages.unshift({ role: 'system', content })
}
// Remove any additional system messages (keep only the first one)
for (let i = messages.length - 1; i >= 1; i--) {
if (messages[i].role === 'system') {
messages.splice(i, 1)
@@ -989,13 +991,14 @@ export class AgentBlockHandler implements BlockHandler {
workflowId: ctx.workflowId,
workspaceId: ctx.workspaceId,
stream: streaming,
messages,
messages: messages?.map(({ executionId, ...msg }) => msg),
environmentVariables: ctx.environmentVariables || {},
workflowVariables: ctx.workflowVariables || {},
blockData,
blockNameMapping,
reasoningEffort: inputs.reasoningEffort,
verbosity: inputs.verbosity,
thinkingLevel: inputs.thinkingLevel,
}
}
@@ -1055,6 +1058,7 @@ export class AgentBlockHandler implements BlockHandler {
responseFormat: providerRequest.responseFormat,
workflowId: providerRequest.workflowId,
workspaceId: ctx.workspaceId,
userId: ctx.userId,
stream: providerRequest.stream,
messages: 'messages' in providerRequest ? providerRequest.messages : undefined,
environmentVariables: ctx.environmentVariables || {},
@@ -1064,6 +1068,7 @@ export class AgentBlockHandler implements BlockHandler {
isDeployedContext: ctx.isDeployedContext,
reasoningEffort: providerRequest.reasoningEffort,
verbosity: providerRequest.verbosity,
thinkingLevel: providerRequest.thinkingLevel,
})
return this.processProviderResponse(response, block, responseFormat)
@@ -1081,8 +1086,6 @@ export class AgentBlockHandler implements BlockHandler {
logger.info(`[${requestId}] Resolving Vertex AI credential: ${credentialId}`)
// Get the credential - we need to find the owner
// Since we're in a workflow context, we can query the credential directly
const credential = await db.query.account.findFirst({
where: eq(account.id, credentialId),
})
@@ -1091,7 +1094,6 @@ export class AgentBlockHandler implements BlockHandler {
throw new Error(`Vertex AI credential not found: ${credentialId}`)
}
// Refresh the token if needed
const { accessToken } = await refreshTokenIfNeeded(requestId, credential, credentialId)
if (!accessToken) {

View File
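
The reworked message handling in the agent handler above enforces one invariant: after normalization there is exactly one system message and it sits at index 0, with an explicit `systemPrompt` overriding whatever memory or prior messages contributed. As a standalone sketch:

```typescript
interface Message {
  role: 'system' | 'user' | 'assistant'
  content: string
}

// Invariant on return: messages[0] is the only system message.
function applySystemPrompt(messages: Message[], content: string): void {
  const firstSystemIndex = messages.findIndex((msg) => msg.role === 'system')
  if (firstSystemIndex === -1) {
    messages.unshift({ role: 'system', content })
  } else if (firstSystemIndex === 0) {
    // Explicit systemPrompt takes precedence over memory/messages
    messages[0] = { role: 'system', content }
  } else {
    messages.splice(firstSystemIndex, 1)
    messages.unshift({ role: 'system', content })
  }
  // Drop any remaining system messages beyond index 0
  for (let i = messages.length - 1; i >= 1; i--) {
    if (messages[i].role === 'system') messages.splice(i, 1)
  }
}
```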

@@ -34,6 +34,7 @@ export interface AgentInputs {
bedrockRegion?: string
reasoningEffort?: string
verbosity?: string
thinkingLevel?: string
}
export interface ToolInput {

View File

@@ -72,6 +72,7 @@ export class ApiBlockHandler implements BlockHandler {
workflowId: ctx.workflowId,
workspaceId: ctx.workspaceId,
executionId: ctx.executionId,
userId: ctx.userId,
isDeployedContext: ctx.isDeployedContext,
},
},

View File

@@ -48,6 +48,7 @@ export async function evaluateConditionExpression(
_context: {
workflowId: ctx.workflowId,
workspaceId: ctx.workspaceId,
userId: ctx.userId,
isDeployedContext: ctx.isDeployedContext,
},
},

View File

@@ -104,7 +104,7 @@ export class EvaluatorBlockHandler implements BlockHandler {
}
try {
const url = buildAPIUrl('/api/providers')
const url = buildAPIUrl('/api/providers', ctx.userId ? { userId: ctx.userId } : {})
const providerRequest: Record<string, any> = {
provider: providerId,
@@ -121,26 +121,17 @@ export class EvaluatorBlockHandler implements BlockHandler {
temperature: EVALUATOR.DEFAULT_TEMPERATURE,
apiKey: finalApiKey,
azureEndpoint: inputs.azureEndpoint,
azureApiVersion: inputs.azureApiVersion,
vertexProject: evaluatorConfig.vertexProject,
vertexLocation: evaluatorConfig.vertexLocation,
bedrockAccessKeyId: evaluatorConfig.bedrockAccessKeyId,
bedrockSecretKey: evaluatorConfig.bedrockSecretKey,
bedrockRegion: evaluatorConfig.bedrockRegion,
workflowId: ctx.workflowId,
workspaceId: ctx.workspaceId,
}
if (providerId === 'vertex') {
providerRequest.vertexProject = evaluatorConfig.vertexProject
providerRequest.vertexLocation = evaluatorConfig.vertexLocation
}
if (providerId === 'azure-openai') {
providerRequest.azureEndpoint = inputs.azureEndpoint
providerRequest.azureApiVersion = inputs.azureApiVersion
}
if (providerId === 'bedrock') {
providerRequest.bedrockAccessKeyId = evaluatorConfig.bedrockAccessKeyId
providerRequest.bedrockSecretKey = evaluatorConfig.bedrockSecretKey
providerRequest.bedrockRegion = evaluatorConfig.bedrockRegion
}
const response = await fetch(url.toString(), {
method: 'POST',
headers: await buildAuthHeaders(),

View File

@@ -39,6 +39,7 @@ export class FunctionBlockHandler implements BlockHandler {
_context: {
workflowId: ctx.workflowId,
workspaceId: ctx.workspaceId,
userId: ctx.userId,
isDeployedContext: ctx.isDeployedContext,
},
},

View File

@@ -66,6 +66,7 @@ export class GenericBlockHandler implements BlockHandler {
workflowId: ctx.workflowId,
workspaceId: ctx.workspaceId,
executionId: ctx.executionId,
userId: ctx.userId,
isDeployedContext: ctx.isDeployedContext,
},
},

View File

@@ -605,6 +605,7 @@ export class HumanInTheLoopBlockHandler implements BlockHandler {
_context: {
workflowId: ctx.workflowId,
workspaceId: ctx.workspaceId,
userId: ctx.userId,
isDeployedContext: ctx.isDeployedContext,
},
blockData: blockDataWithPause,

View File

@@ -80,6 +80,7 @@ export class RouterBlockHandler implements BlockHandler {
try {
const url = new URL('/api/providers', getBaseUrl())
if (ctx.userId) url.searchParams.set('userId', ctx.userId)
const messages = [{ role: 'user', content: routerConfig.prompt }]
const systemPrompt = generateRouterPrompt(routerConfig.prompt, targetBlocks)
@@ -96,26 +97,17 @@ export class RouterBlockHandler implements BlockHandler {
context: JSON.stringify(messages),
temperature: ROUTER.INFERENCE_TEMPERATURE,
apiKey: finalApiKey,
azureEndpoint: inputs.azureEndpoint,
azureApiVersion: inputs.azureApiVersion,
vertexProject: routerConfig.vertexProject,
vertexLocation: routerConfig.vertexLocation,
bedrockAccessKeyId: routerConfig.bedrockAccessKeyId,
bedrockSecretKey: routerConfig.bedrockSecretKey,
bedrockRegion: routerConfig.bedrockRegion,
workflowId: ctx.workflowId,
workspaceId: ctx.workspaceId,
}
if (providerId === 'vertex') {
providerRequest.vertexProject = routerConfig.vertexProject
providerRequest.vertexLocation = routerConfig.vertexLocation
}
if (providerId === 'azure-openai') {
providerRequest.azureEndpoint = inputs.azureEndpoint
providerRequest.azureApiVersion = inputs.azureApiVersion
}
if (providerId === 'bedrock') {
providerRequest.bedrockAccessKeyId = routerConfig.bedrockAccessKeyId
providerRequest.bedrockSecretKey = routerConfig.bedrockSecretKey
providerRequest.bedrockRegion = routerConfig.bedrockRegion
}
const response = await fetch(url.toString(), {
method: 'POST',
headers: await buildAuthHeaders(),
@@ -218,6 +210,7 @@ export class RouterBlockHandler implements BlockHandler {
try {
const url = new URL('/api/providers', getBaseUrl())
if (ctx.userId) url.searchParams.set('userId', ctx.userId)
const messages = [{ role: 'user', content: routerConfig.context }]
const systemPrompt = generateRouterV2Prompt(routerConfig.context, routes)
@@ -234,6 +227,13 @@ export class RouterBlockHandler implements BlockHandler {
context: JSON.stringify(messages),
temperature: ROUTER.INFERENCE_TEMPERATURE,
apiKey: finalApiKey,
azureEndpoint: inputs.azureEndpoint,
azureApiVersion: inputs.azureApiVersion,
vertexProject: routerConfig.vertexProject,
vertexLocation: routerConfig.vertexLocation,
bedrockAccessKeyId: routerConfig.bedrockAccessKeyId,
bedrockSecretKey: routerConfig.bedrockSecretKey,
bedrockRegion: routerConfig.bedrockRegion,
workflowId: ctx.workflowId,
workspaceId: ctx.workspaceId,
responseFormat: {
@@ -257,22 +257,6 @@ export class RouterBlockHandler implements BlockHandler {
},
}
if (providerId === 'vertex') {
providerRequest.vertexProject = routerConfig.vertexProject
providerRequest.vertexLocation = routerConfig.vertexLocation
}
if (providerId === 'azure-openai') {
providerRequest.azureEndpoint = inputs.azureEndpoint
providerRequest.azureApiVersion = inputs.azureApiVersion
}
if (providerId === 'bedrock') {
providerRequest.bedrockAccessKeyId = routerConfig.bedrockAccessKeyId
providerRequest.bedrockSecretKey = routerConfig.bedrockSecretKey
providerRequest.bedrockRegion = routerConfig.bedrockRegion
}
const response = await fetch(url.toString(), {
method: 'POST',
headers: await buildAuthHeaders(),

View File

@@ -511,6 +511,8 @@ export class LoopOrchestrator {
contextVariables: {},
timeoutMs: LOOP_CONDITION_TIMEOUT_MS,
requestId,
ownerKey: `user:${ctx.userId}`,
ownerWeight: 1,
})
if (vmResult.error) {

View File
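
The `ownerKey`/`ownerWeight` fields above feed the isolated-vm worker pool's fair scheduler, keying queued condition evaluations by user so one tenant cannot monopolize the workers. The pool's real scheduler is not part of this diff; a deliberately simplified round-robin-by-owner sketch that ignores `ownerWeight`, with all names hypothetical:

```typescript
// Hypothetical illustration: rotate across per-owner queues so each ownerKey
// gets a turn, instead of draining one owner's backlog before the next.
interface Job {
  ownerKey: string
  run: () => Promise<void>
}

class FairQueue {
  private queues = new Map<string, Job[]>()
  private order: string[] = []

  enqueue(job: Job): void {
    if (!this.queues.has(job.ownerKey)) {
      this.queues.set(job.ownerKey, [])
      this.order.push(job.ownerKey)
    }
    this.queues.get(job.ownerKey)!.push(job)
  }

  next(): Job | undefined {
    const key = this.order.shift()
    if (key === undefined) return undefined
    const queue = this.queues.get(key)!
    const job = queue.shift()
    if (queue.length > 0) this.order.push(key)
    else this.queues.delete(key)
    return job
  }
}
```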

@@ -2,13 +2,13 @@ import { db } from '@sim/db'
import { account, workflow as workflowTable } from '@sim/db/schema'
import { eq } from 'drizzle-orm'
import type { NextRequest } from 'next/server'
import { checkHybridAuth } from '@/lib/auth/hybrid'
import { checkSessionOrInternalAuth } from '@/lib/auth/hybrid'
import { getUserEntityPermissions } from '@/lib/workspaces/permissions/utils'
export interface CredentialAccessResult {
ok: boolean
error?: string
authType?: 'session' | 'api_key' | 'internal_jwt'
authType?: 'session' | 'internal_jwt'
requesterUserId?: string
credentialOwnerUserId?: string
workspaceId?: string
@@ -16,10 +16,10 @@ export interface CredentialAccessResult {
/**
* Centralizes auth + collaboration rules for credential use.
* - Uses checkHybridAuth to authenticate the caller
* - Uses checkSessionOrInternalAuth to authenticate the caller
* - Fetches credential owner
* - Authorization rules:
* - session/api_key: allow if requester owns the credential; otherwise require workflowId and
* - session: allow if requester owns the credential; otherwise require workflowId and
* verify BOTH requester and owner have access to the workflow's workspace
* - internal_jwt: require workflowId (by default) and verify credential owner has access to the
* workflow's workspace (requester identity is the system/workflow)
@@ -30,7 +30,9 @@ export async function authorizeCredentialUse(
): Promise<CredentialAccessResult> {
const { credentialId, workflowId, requireWorkflowIdForInternal = true } = params
const auth = await checkHybridAuth(request, { requireWorkflowId: requireWorkflowIdForInternal })
const auth = await checkSessionOrInternalAuth(request, {
requireWorkflowId: requireWorkflowIdForInternal,
})
if (!auth.success || !auth.userId) {
return { ok: false, error: auth.error || 'Authentication required' }
}
@@ -52,7 +54,7 @@ export async function authorizeCredentialUse(
if (auth.authType !== 'internal_jwt' && auth.userId === credentialOwnerUserId) {
return {
ok: true,
authType: auth.authType,
authType: auth.authType as CredentialAccessResult['authType'],
requesterUserId: auth.userId,
credentialOwnerUserId,
}
@@ -85,14 +87,14 @@ export async function authorizeCredentialUse(
}
return {
ok: true,
authType: auth.authType,
authType: auth.authType as CredentialAccessResult['authType'],
requesterUserId: auth.userId,
credentialOwnerUserId,
workspaceId: wf.workspaceId,
}
}
// Session/API key: verify BOTH requester and owner belong to the workflow's workspace
// Session: verify BOTH requester and owner belong to the workflow's workspace
const requesterPerm = await getUserEntityPermissions(auth.userId, 'workspace', wf.workspaceId)
const ownerPerm = await getUserEntityPermissions(
credentialOwnerUserId,
@@ -105,7 +107,7 @@ export async function authorizeCredentialUse(
return {
ok: true,
authType: auth.authType,
authType: auth.authType as CredentialAccessResult['authType'],
requesterUserId: auth.userId,
credentialOwnerUserId,
workspaceId: wf.workspaceId,
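
With API keys removed from this path, a route that brokers credential use now reads roughly as below (import path and route shape are assumptions; the second argument mirrors the destructured params in the hunk above):

import type { NextRequest } from 'next/server'
import { authorizeCredentialUse } from '@/lib/auth/credential-access' // path assumed

export async function POST(request: NextRequest) {
  const { credentialId, workflowId } = await request.json()
  const access = await authorizeCredentialUse(request, { credentialId, workflowId })
  if (!access.ok) {
    return new Response(access.error ?? 'Forbidden', { status: 403 })
  }
  // access.authType is now only ever 'session' | 'internal_jwt'
  return Response.json({ owner: access.credentialOwnerUserId, workspace: access.workspaceId })
}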

View File

@@ -1,7 +1,4 @@
import { db } from '@sim/db'
import { workflow } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { eq } from 'drizzle-orm'
import type { NextRequest } from 'next/server'
import { authenticateApiKeyFromHeader, updateApiKeyLastUsed } from '@/lib/api-key/service'
import { getSession } from '@/lib/auth'
@@ -13,35 +10,33 @@ export interface AuthResult {
success: boolean
userId?: string
authType?: 'session' | 'api_key' | 'internal_jwt'
apiKeyType?: 'personal' | 'workspace'
error?: string
}
/**
* Resolves userId from a verified internal JWT token.
* Extracts workflowId/userId from URL params or POST body, then looks up userId if needed.
* Extracts userId from the JWT payload, URL search params, or POST body.
*/
async function resolveUserFromJwt(
request: NextRequest,
verificationUserId: string | null,
options: { requireWorkflowId?: boolean }
): Promise<AuthResult> {
let workflowId: string | null = null
let userId: string | null = verificationUserId
const { searchParams } = new URL(request.url)
workflowId = searchParams.get('workflowId')
if (!userId) {
const { searchParams } = new URL(request.url)
userId = searchParams.get('userId')
}
if (!workflowId && !userId && request.method === 'POST') {
if (!userId && request.method === 'POST') {
try {
const clonedRequest = request.clone()
const bodyText = await clonedRequest.text()
if (bodyText) {
const body = JSON.parse(bodyText)
workflowId = body.workflowId || body._context?.workflowId
userId = userId || body.userId || body._context?.userId
userId = body.userId || body._context?.userId || null
}
} catch {
// Ignore JSON parse errors
@@ -52,22 +47,8 @@ async function resolveUserFromJwt(
return { success: true, userId, authType: 'internal_jwt' }
}
if (workflowId) {
const [workflowData] = await db
.select({ userId: workflow.userId })
.from(workflow)
.where(eq(workflow.id, workflowId))
.limit(1)
if (!workflowData) {
return { success: false, error: 'Workflow not found' }
}
return { success: true, userId: workflowData.userId, authType: 'internal_jwt' }
}
if (options.requireWorkflowId !== false) {
return { success: false, error: 'workflowId or userId required for internal JWT calls' }
return { success: false, error: 'userId required for internal JWT calls' }
}
return { success: true, authType: 'internal_jwt' }
@@ -222,6 +203,7 @@ export async function checkHybridAuth(
success: true,
userId: result.userId!,
authType: 'api_key',
apiKeyType: result.keyType,
}
}
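
checkHybridAuth now also reports whether an API key was personal or workspace-scoped. A brief consumption sketch (request/handler context assumed; the import path appears elsewhere in this diff):

import { checkHybridAuth } from '@/lib/auth/hybrid'

const auth = await checkHybridAuth(request, { requireWorkflowId: false })
if (auth.success && auth.authType === 'api_key') {
  if (auth.apiKeyType === 'personal') {
    // e.g. treat the key's owner as the billing/permission actor
  } else {
    // 'workspace' keys fall back to workspace-level billing resolution
  }
}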

View File

@@ -12,6 +12,7 @@ const VALID_PROVIDER_IDS: readonly ProviderId[] = [
'openai',
'azure-openai',
'anthropic',
'azure-anthropic',
'google',
'deepseek',
'xai',

View File

@@ -147,6 +147,13 @@ export type CopilotProviderConfig =
apiVersion?: string
endpoint?: string
}
| {
provider: 'azure-anthropic'
model: string
apiKey?: string
apiVersion?: string
endpoint?: string
}
| {
provider: 'vertex'
model: string
@@ -155,7 +162,7 @@ export type CopilotProviderConfig =
vertexLocation?: string
}
| {
provider: Exclude<ProviderId, 'azure-openai' | 'vertex'>
provider: Exclude<ProviderId, 'azure-openai' | 'azure-anthropic' | 'vertex'>
model?: string
apiKey?: string
}
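
An example value satisfying the new azure-anthropic branch of CopilotProviderConfig (model name and endpoint are placeholders):

const copilotConfig: CopilotProviderConfig = {
  provider: 'azure-anthropic',
  model: 'claude-sonnet-4-5', // placeholder deployment/model name
  apiKey: process.env.AZURE_ANTHROPIC_API_KEY,
  apiVersion: '2023-06-01',
  endpoint: 'https://example.azure-api.net',
}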

View File

@@ -95,6 +95,9 @@ export const env = createEnv({
AZURE_OPENAI_ENDPOINT: z.string().url().optional(), // Shared Azure OpenAI service endpoint
AZURE_OPENAI_API_VERSION: z.string().optional(), // Shared Azure OpenAI API version
AZURE_OPENAI_API_KEY: z.string().min(1).optional(), // Shared Azure OpenAI API key
AZURE_ANTHROPIC_ENDPOINT: z.string().url().optional(), // Azure Anthropic service endpoint
AZURE_ANTHROPIC_API_KEY: z.string().min(1).optional(), // Azure Anthropic API key
AZURE_ANTHROPIC_API_VERSION: z.string().min(1).optional(), // Azure Anthropic API version (e.g. 2023-06-01)
KB_OPENAI_MODEL_NAME: z.string().optional(), // Knowledge base OpenAI model name (works with both regular OpenAI and Azure OpenAI)
WAND_OPENAI_MODEL_NAME: z.string().optional(), // Wand generation OpenAI model name (works with both regular OpenAI and Azure OpenAI)
OCR_AZURE_ENDPOINT: z.string().url().optional(), // Azure Mistral OCR service endpoint
@@ -180,6 +183,24 @@ export const env = createEnv({
EXECUTION_TIMEOUT_ASYNC_TEAM: z.string().optional().default('5400'), // 90 minutes
EXECUTION_TIMEOUT_ASYNC_ENTERPRISE: z.string().optional().default('5400'), // 90 minutes
// Isolated-VM Worker Pool Configuration
IVM_POOL_SIZE: z.string().optional().default('4'), // Max worker processes in pool
IVM_MAX_CONCURRENT: z.string().optional().default('10000'), // Max concurrent executions globally
IVM_MAX_PER_WORKER: z.string().optional().default('2500'), // Max concurrent executions per worker
IVM_WORKER_IDLE_TIMEOUT_MS: z.string().optional().default('60000'), // Worker idle cleanup timeout (ms)
IVM_MAX_QUEUE_SIZE: z.string().optional().default('10000'), // Max pending queued executions in memory
IVM_MAX_FETCH_RESPONSE_BYTES: z.string().optional().default('8388608'), // Max bytes read from sandbox fetch responses
IVM_MAX_FETCH_RESPONSE_CHARS: z.string().optional().default('4000000'), // Max chars returned to sandbox from fetch body
IVM_MAX_FETCH_OPTIONS_JSON_CHARS: z.string().optional().default('262144'), // Max JSON payload size for sandbox fetch options
IVM_MAX_FETCH_URL_LENGTH: z.string().optional().default('8192'), // Max URL length accepted by sandbox fetch
IVM_MAX_STDOUT_CHARS: z.string().optional().default('200000'), // Max captured stdout characters per execution
IVM_MAX_ACTIVE_PER_OWNER: z.string().optional().default('200'), // Max active executions per owner (per process)
IVM_MAX_QUEUED_PER_OWNER: z.string().optional().default('2000'), // Max queued executions per owner (per process)
IVM_MAX_OWNER_WEIGHT: z.string().optional().default('5'), // Max accepted weight for weighted owner scheduling
IVM_DISTRIBUTED_MAX_INFLIGHT_PER_OWNER: z.string().optional().default('2200'), // Max owner in-flight leases across replicas
IVM_DISTRIBUTED_LEASE_MIN_TTL_MS: z.string().optional().default('120000'), // Min TTL for distributed in-flight leases (ms)
IVM_QUEUE_TIMEOUT_MS: z.string().optional().default('300000'), // Max queue wait before rejection (ms)
// Knowledge Base Processing Configuration - Shared across all processing methods
KB_CONFIG_MAX_DURATION: z.number().optional().default(600), // Max processing duration in seconds (10 minutes)
KB_CONFIG_MAX_ATTEMPTS: z.number().optional().default(3), // Max retry attempts
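
All IVM_* values are strings parsed at startup. A minimal sketch of how the pool limits compose (parsing shown inline for illustration; the real code reads them through the createEnv schema above):

const poolSize = Number.parseInt(env.IVM_POOL_SIZE, 10) // worker processes
const maxPerWorker = Number.parseInt(env.IVM_MAX_PER_WORKER, 10) // executions per worker
const maxConcurrent = Number.parseInt(env.IVM_MAX_CONCURRENT, 10) // global in-flight cap

// With the defaults above, 4 workers x 2500 per worker = 10000, matching
// IVM_MAX_CONCURRENT; excess work queues up to IVM_MAX_QUEUE_SIZE and is
// rejected once it has waited longer than IVM_QUEUE_TIMEOUT_MS.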

View File

@@ -103,6 +103,7 @@ export interface SecureFetchOptions {
body?: string | Buffer | Uint8Array
timeout?: number
maxRedirects?: number
maxResponseBytes?: number
}
export class SecureFetchHeaders {
@@ -165,6 +166,7 @@ export async function secureFetchWithPinnedIP(
redirectCount = 0
): Promise<SecureFetchResponse> {
const maxRedirects = options.maxRedirects ?? DEFAULT_MAX_REDIRECTS
const maxResponseBytes = options.maxResponseBytes
return new Promise((resolve, reject) => {
const parsed = new URL(url)
@@ -237,14 +239,32 @@ export async function secureFetchWithPinnedIP(
}
const chunks: Buffer[] = []
let totalBytes = 0
let responseTerminated = false
res.on('data', (chunk: Buffer) => chunks.push(chunk))
res.on('data', (chunk: Buffer) => {
if (responseTerminated) return
totalBytes += chunk.length
if (
typeof maxResponseBytes === 'number' &&
maxResponseBytes > 0 &&
totalBytes > maxResponseBytes
) {
responseTerminated = true
res.destroy(new Error(`Response exceeded maximum size of ${maxResponseBytes} bytes`))
return
}
chunks.push(chunk)
})
res.on('error', (error) => {
reject(error)
})
res.on('end', () => {
if (responseTerminated) return
const bodyBuffer = Buffer.concat(chunks)
const body = bodyBuffer.toString('utf-8')
const headersRecord: Record<string, string> = {}
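
Callers opt in via the new field; a body that exceeds the cap destroys the response mid-stream and rejects the promise. Only fields shown in this diff are used below, and the 8 MiB figure mirrors the IVM_MAX_FETCH_RESPONSE_BYTES default:

const fetchOptions: SecureFetchOptions = {
  timeout: 30_000,
  maxRedirects: 3,
  maxResponseBytes: 8 * 1024 * 1024, // rejects with "Response exceeded maximum size of 8388608 bytes"
}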

View File

@@ -9,6 +9,21 @@ const USER_CODE_START_LINE = 4
const pendingFetches = new Map()
let fetchIdCounter = 0
const FETCH_TIMEOUT_MS = 300000 // 5 minutes
const MAX_STDOUT_CHARS = Number.parseInt(process.env.IVM_MAX_STDOUT_CHARS || '', 10) || 200000
const MAX_FETCH_OPTIONS_JSON_CHARS =
Number.parseInt(process.env.IVM_MAX_FETCH_OPTIONS_JSON_CHARS || '', 10) || 256 * 1024
function stringifyLogValue(value) {
if (typeof value !== 'object' || value === null) {
return String(value)
}
try {
return JSON.stringify(value)
} catch {
return '[unserializable]'
}
}
/**
* Extract line and column from error stack or message
@@ -101,8 +116,32 @@ function convertToCompatibleError(errorInfo, userCode) {
async function executeCode(request) {
const { code, params, envVars, contextVariables, timeoutMs, requestId } = request
const stdoutChunks = []
let stdoutLength = 0
let stdoutTruncated = false
let isolate = null
const appendStdout = (line) => {
if (stdoutTruncated || !line) return
const remaining = MAX_STDOUT_CHARS - stdoutLength
if (remaining <= 0) {
stdoutTruncated = true
stdoutChunks.push('[stdout truncated]\n')
return
}
if (line.length <= remaining) {
stdoutChunks.push(line)
stdoutLength += line.length
return
}
stdoutChunks.push(line.slice(0, remaining))
stdoutChunks.push('\n[stdout truncated]\n')
stdoutLength = MAX_STDOUT_CHARS
stdoutTruncated = true
}
try {
isolate = new ivm.Isolate({ memoryLimit: 128 })
const context = await isolate.createContext()
@@ -111,18 +150,14 @@ async function executeCode(request) {
await jail.set('global', jail.derefInto())
const logCallback = new ivm.Callback((...args) => {
const message = args
.map((arg) => (typeof arg === 'object' ? JSON.stringify(arg) : String(arg)))
.join(' ')
stdoutChunks.push(`${message}\n`)
const message = args.map((arg) => stringifyLogValue(arg)).join(' ')
appendStdout(`${message}\n`)
})
await jail.set('__log', logCallback)
const errorCallback = new ivm.Callback((...args) => {
const message = args
.map((arg) => (typeof arg === 'object' ? JSON.stringify(arg) : String(arg)))
.join(' ')
stdoutChunks.push(`ERROR: ${message}\n`)
const message = args.map((arg) => stringifyLogValue(arg)).join(' ')
appendStdout(`ERROR: ${message}\n`)
})
await jail.set('__error', errorCallback)
@@ -178,6 +213,9 @@ async function executeCode(request) {
} catch {
throw new Error('fetch options must be JSON-serializable');
}
if (optionsJson.length > ${MAX_FETCH_OPTIONS_JSON_CHARS}) {
throw new Error('fetch options exceed maximum payload size');
}
}
const resultJson = await __fetchRef.apply(undefined, [url, optionsJson], { result: { promise: true } });
let result;

View File

@@ -0,0 +1,500 @@
import { EventEmitter } from 'node:events'
import { afterEach, describe, expect, it, vi } from 'vitest'
type MockProc = EventEmitter & {
connected: boolean
stderr: EventEmitter
send: (message: unknown) => boolean
kill: () => boolean
}
type SpawnFactory = () => MockProc
type RedisEval = (...args: any[]) => unknown | Promise<unknown>
type SecureFetchImpl = (...args: any[]) => unknown | Promise<unknown>
function createBaseProc(): MockProc {
const proc = new EventEmitter() as MockProc
proc.connected = true
proc.stderr = new EventEmitter()
proc.send = () => true
proc.kill = () => {
if (!proc.connected) return true
proc.connected = false
setImmediate(() => proc.emit('exit', 0))
return true
}
return proc
}
function createStartupFailureProc(): MockProc {
const proc = createBaseProc()
setImmediate(() => {
proc.connected = false
proc.emit('exit', 1)
})
return proc
}
function createReadyProc(result: unknown): MockProc {
const proc = createBaseProc()
proc.send = (message: unknown) => {
const msg = message as { type?: string; executionId?: number }
if (msg.type === 'execute') {
setImmediate(() => {
proc.emit('message', {
type: 'result',
executionId: msg.executionId,
result: { result, stdout: '' },
})
})
}
return true
}
setImmediate(() => proc.emit('message', { type: 'ready' }))
return proc
}
function createReadyProcWithDelay(delayMs: number): MockProc {
const proc = createBaseProc()
proc.send = (message: unknown) => {
const msg = message as { type?: string; executionId?: number; request?: { requestId?: string } }
if (msg.type === 'execute') {
setTimeout(() => {
proc.emit('message', {
type: 'result',
executionId: msg.executionId,
result: { result: msg.request?.requestId ?? 'unknown', stdout: '' },
})
}, delayMs)
}
return true
}
setImmediate(() => proc.emit('message', { type: 'ready' }))
return proc
}
function createReadyFetchProxyProc(fetchMessage: { url: string; optionsJson?: string }): MockProc {
const proc = createBaseProc()
let currentExecutionId = 0
proc.send = (message: unknown) => {
const msg = message as { type?: string; executionId?: number; request?: { requestId?: string } }
if (msg.type === 'execute') {
currentExecutionId = msg.executionId ?? 0
setImmediate(() => {
proc.emit('message', {
type: 'fetch',
fetchId: 1,
requestId: msg.request?.requestId ?? 'fetch-test',
url: fetchMessage.url,
optionsJson: fetchMessage.optionsJson,
})
})
return true
}
if (msg.type === 'fetchResponse') {
const fetchResponse = message as { response?: string }
setImmediate(() => {
proc.emit('message', {
type: 'result',
executionId: currentExecutionId,
result: { result: fetchResponse.response ?? '', stdout: '' },
})
})
return true
}
return true
}
setImmediate(() => proc.emit('message', { type: 'ready' }))
return proc
}
async function loadExecutionModule(options: {
envOverrides?: Record<string, string>
spawns: SpawnFactory[]
redisEvalImpl?: RedisEval
secureFetchImpl?: SecureFetchImpl
}) {
vi.resetModules()
const spawnQueue = [...options.spawns]
const spawnMock = vi.fn(() => {
const next = spawnQueue.shift()
if (!next) {
throw new Error('No mock spawn factory configured')
}
return next() as any
})
vi.doMock('@sim/logger', () => ({
createLogger: () => ({
info: vi.fn(),
warn: vi.fn(),
error: vi.fn(),
}),
}))
const secureFetchMock = vi.fn(
options.secureFetchImpl ??
(async () => ({
ok: true,
status: 200,
statusText: 'OK',
headers: new Map<string, string>(),
text: async () => '',
json: async () => ({}),
arrayBuffer: async () => new ArrayBuffer(0),
}))
)
vi.doMock('@/lib/core/security/input-validation.server', () => ({
secureFetchWithValidation: secureFetchMock,
}))
vi.doMock('@/lib/core/config/env', () => ({
env: {
IVM_POOL_SIZE: '1',
IVM_MAX_CONCURRENT: '100',
IVM_MAX_PER_WORKER: '100',
IVM_WORKER_IDLE_TIMEOUT_MS: '60000',
IVM_MAX_QUEUE_SIZE: '10',
IVM_MAX_ACTIVE_PER_OWNER: '100',
IVM_MAX_QUEUED_PER_OWNER: '10',
IVM_MAX_OWNER_WEIGHT: '5',
IVM_DISTRIBUTED_MAX_INFLIGHT_PER_OWNER: '100',
IVM_DISTRIBUTED_LEASE_MIN_TTL_MS: '1000',
IVM_QUEUE_TIMEOUT_MS: '1000',
...(options.envOverrides ?? {}),
},
}))
const redisEval = options.redisEvalImpl ? vi.fn(options.redisEvalImpl) : undefined
vi.doMock('@/lib/core/config/redis', () => ({
getRedisClient: vi.fn(() =>
redisEval
? ({
eval: redisEval,
} as any)
: null
),
}))
vi.doMock('node:child_process', () => ({
execSync: vi.fn(() => Buffer.from('v23.11.0')),
spawn: spawnMock,
}))
const mod = await import('./isolated-vm')
return { ...mod, spawnMock, secureFetchMock }
}
describe('isolated-vm scheduler', () => {
afterEach(() => {
vi.restoreAllMocks()
vi.resetModules()
})
it('recovers from an initial spawn failure and drains queued work', async () => {
const { executeInIsolatedVM, spawnMock } = await loadExecutionModule({
spawns: [createStartupFailureProc, () => createReadyProc('ok')],
})
const result = await executeInIsolatedVM({
code: 'return "ok"',
params: {},
envVars: {},
contextVariables: {},
timeoutMs: 100,
requestId: 'req-1',
})
expect(result.error).toBeUndefined()
expect(result.result).toBe('ok')
expect(spawnMock).toHaveBeenCalledTimes(2)
})
it('rejects new requests when the queue is full', async () => {
const { executeInIsolatedVM } = await loadExecutionModule({
envOverrides: {
IVM_MAX_QUEUE_SIZE: '1',
IVM_QUEUE_TIMEOUT_MS: '200',
},
spawns: [createStartupFailureProc, createStartupFailureProc, createStartupFailureProc],
})
const firstPromise = executeInIsolatedVM({
code: 'return 1',
params: {},
envVars: {},
contextVariables: {},
timeoutMs: 100,
requestId: 'req-2',
ownerKey: 'user:a',
})
await new Promise((resolve) => setTimeout(resolve, 25))
const second = await executeInIsolatedVM({
code: 'return 2',
params: {},
envVars: {},
contextVariables: {},
timeoutMs: 100,
requestId: 'req-3',
ownerKey: 'user:b',
})
expect(second.error?.message).toContain('at capacity')
const first = await firstPromise
expect(first.error?.message).toContain('timed out waiting')
})
it('enforces per-owner queued limit', async () => {
const { executeInIsolatedVM } = await loadExecutionModule({
envOverrides: {
IVM_MAX_QUEUED_PER_OWNER: '1',
IVM_QUEUE_TIMEOUT_MS: '200',
},
spawns: [createStartupFailureProc, createStartupFailureProc, createStartupFailureProc],
})
const firstPromise = executeInIsolatedVM({
code: 'return 1',
params: {},
envVars: {},
contextVariables: {},
timeoutMs: 100,
requestId: 'req-4',
ownerKey: 'user:hog',
})
await new Promise((resolve) => setTimeout(resolve, 25))
const second = await executeInIsolatedVM({
code: 'return 2',
params: {},
envVars: {},
contextVariables: {},
timeoutMs: 100,
requestId: 'req-5',
ownerKey: 'user:hog',
})
expect(second.error?.message).toContain('Too many concurrent')
const first = await firstPromise
expect(first.error?.message).toContain('timed out waiting')
})
it('enforces distributed owner in-flight lease limit when Redis is configured', async () => {
const { executeInIsolatedVM } = await loadExecutionModule({
envOverrides: {
IVM_DISTRIBUTED_MAX_INFLIGHT_PER_OWNER: '1',
REDIS_URL: 'redis://localhost:6379',
},
spawns: [() => createReadyProc('ok')],
redisEvalImpl: (...args: any[]) => {
const script = String(args[0] ?? '')
if (script.includes('ZREMRANGEBYSCORE')) {
return 0
}
return 1
},
})
const result = await executeInIsolatedVM({
code: 'return "blocked"',
params: {},
envVars: {},
contextVariables: {},
timeoutMs: 100,
requestId: 'req-6',
ownerKey: 'user:distributed',
})
expect(result.error?.message).toContain('Too many concurrent')
})
it('fails closed when Redis is configured but unavailable', async () => {
const { executeInIsolatedVM } = await loadExecutionModule({
envOverrides: {
REDIS_URL: 'redis://localhost:6379',
},
spawns: [() => createReadyProc('ok')],
})
const result = await executeInIsolatedVM({
code: 'return "blocked"',
params: {},
envVars: {},
contextVariables: {},
timeoutMs: 100,
requestId: 'req-7',
ownerKey: 'user:redis-down',
})
expect(result.error?.message).toContain('temporarily unavailable')
})
it('fails closed when Redis lease evaluation errors', async () => {
const { executeInIsolatedVM } = await loadExecutionModule({
envOverrides: {
REDIS_URL: 'redis://localhost:6379',
},
spawns: [() => createReadyProc('ok')],
redisEvalImpl: (...args: any[]) => {
const script = String(args[0] ?? '')
if (script.includes('ZREMRANGEBYSCORE')) {
throw new Error('redis timeout')
}
return 1
},
})
const result = await executeInIsolatedVM({
code: 'return "blocked"',
params: {},
envVars: {},
contextVariables: {},
timeoutMs: 100,
requestId: 'req-8',
ownerKey: 'user:redis-error',
})
expect(result.error?.message).toContain('temporarily unavailable')
})
it('applies weighted owner scheduling when draining queued executions', async () => {
const { executeInIsolatedVM } = await loadExecutionModule({
envOverrides: {
IVM_MAX_PER_WORKER: '1',
},
spawns: [() => createReadyProcWithDelay(10)],
})
const completionOrder: string[] = []
const pushCompletion = (label: string) => (res: { result: unknown }) => {
completionOrder.push(String(res.result ?? label))
return res
}
const p1 = executeInIsolatedVM({
code: 'return 1',
params: {},
envVars: {},
contextVariables: {},
timeoutMs: 500,
requestId: 'a-1',
ownerKey: 'user:a',
ownerWeight: 2,
}).then(pushCompletion('a-1'))
const p2 = executeInIsolatedVM({
code: 'return 2',
params: {},
envVars: {},
contextVariables: {},
timeoutMs: 500,
requestId: 'a-2',
ownerKey: 'user:a',
ownerWeight: 2,
}).then(pushCompletion('a-2'))
const p3 = executeInIsolatedVM({
code: 'return 3',
params: {},
envVars: {},
contextVariables: {},
timeoutMs: 500,
requestId: 'b-1',
ownerKey: 'user:b',
ownerWeight: 1,
}).then(pushCompletion('b-1'))
const p4 = executeInIsolatedVM({
code: 'return 4',
params: {},
envVars: {},
contextVariables: {},
timeoutMs: 500,
requestId: 'b-2',
ownerKey: 'user:b',
ownerWeight: 1,
}).then(pushCompletion('b-2'))
const p5 = executeInIsolatedVM({
code: 'return 5',
params: {},
envVars: {},
contextVariables: {},
timeoutMs: 500,
requestId: 'a-3',
ownerKey: 'user:a',
ownerWeight: 2,
}).then(pushCompletion('a-3'))
await Promise.all([p1, p2, p3, p4, p5])
expect(completionOrder.slice(0, 3)).toEqual(['a-1', 'a-2', 'a-3'])
expect(completionOrder).toEqual(['a-1', 'a-2', 'a-3', 'b-1', 'b-2'])
})
it('rejects oversized fetch options payloads before outbound call', async () => {
const { executeInIsolatedVM, secureFetchMock } = await loadExecutionModule({
envOverrides: {
IVM_MAX_FETCH_OPTIONS_JSON_CHARS: '50',
},
spawns: [
() =>
createReadyFetchProxyProc({
url: 'https://example.com',
optionsJson: 'x'.repeat(100),
}),
],
})
const result = await executeInIsolatedVM({
code: 'return "fetch-options"',
params: {},
envVars: {},
contextVariables: {},
timeoutMs: 100,
requestId: 'req-fetch-options',
})
const payload = JSON.parse(String(result.result))
expect(payload.error).toContain('Fetch options exceed maximum payload size')
expect(secureFetchMock).not.toHaveBeenCalled()
})
it('rejects overly long fetch URLs before outbound call', async () => {
const { executeInIsolatedVM, secureFetchMock } = await loadExecutionModule({
envOverrides: {
IVM_MAX_FETCH_URL_LENGTH: '30',
},
spawns: [
() =>
createReadyFetchProxyProc({
url: 'https://example.com/path/to/a/very/long/resource',
}),
],
})
const result = await executeInIsolatedVM({
code: 'return "fetch-url"',
params: {},
envVars: {},
contextVariables: {},
timeoutMs: 100,
requestId: 'req-fetch-url',
})
const payload = JSON.parse(String(result.result))
expect(payload.error).toContain('fetch URL exceeds maximum length')
expect(secureFetchMock).not.toHaveBeenCalled()
})
})

File diff suppressed because it is too large

View File

@@ -124,6 +124,7 @@ export interface PreprocessExecutionOptions {
workspaceId?: string // If known, used for billing resolution
loggingSession?: LoggingSession // If provided, will be used for error logging
isResumeContext?: boolean // If true, allows fallback billing on resolution failure (for paused workflow resumes)
useAuthenticatedUserAsActor?: boolean // If true, use the authenticated userId as actorUserId (for client-side executions and personal API keys)
/** @deprecated No longer used - background/async executions always use deployed state */
useDraftState?: boolean
}
@@ -170,6 +171,7 @@ export async function preprocessExecution(
workspaceId: providedWorkspaceId,
loggingSession: providedLoggingSession,
isResumeContext = false,
useAuthenticatedUserAsActor = false,
} = options
logger.info(`[${requestId}] Starting execution preprocessing`, {
@@ -257,7 +259,14 @@ export async function preprocessExecution(
let actorUserId: string | null = null
try {
if (workspaceId) {
// For client-side executions and personal API keys, the authenticated
// user is the billing and permission actor — not the workspace owner.
if (useAuthenticatedUserAsActor && userId) {
actorUserId = userId
logger.info(`[${requestId}] Using authenticated user as actor: ${actorUserId}`)
}
if (!actorUserId && workspaceId) {
actorUserId = await getWorkspaceBilledAccountUserId(workspaceId)
if (actorUserId) {
logger.info(`[${requestId}] Using workspace billed account: ${actorUserId}`)
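
The resolution order is: explicit authenticated actor first, then the workspace's billed account. A standalone sketch of that order (the wrapper function is illustrative; getWorkspaceBilledAccountUserId is the same helper the webhook path uses later in this diff):

import { getWorkspaceBilledAccountUserId } from '@/lib/workspaces/utils'

async function resolveActorUserId(
  userId: string | undefined,
  workspaceId: string | undefined,
  useAuthenticatedUserAsActor: boolean
): Promise<string | null> {
  if (useAuthenticatedUserAsActor && userId) return userId // client-side runs, personal API keys
  if (workspaceId) return getWorkspaceBilledAccountUserId(workspaceId)
  return null
}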

View File

@@ -1,7 +1,11 @@
import { db } from '@sim/db'
import { account } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { eq } from 'drizzle-orm'
import { getBaseUrl } from '@/lib/core/utils/urls'
import { refreshTokenIfNeeded } from '@/app/api/auth/oauth/utils'
import { executeProviderRequest } from '@/providers'
import { getApiKey, getProviderFromModel } from '@/providers/utils'
import { getProviderFromModel } from '@/providers/utils'
const logger = createLogger('HallucinationValidator')
@@ -19,7 +23,18 @@ export interface HallucinationValidationInput {
topK: number // Number of chunks to retrieve, default 10
model: string
apiKey?: string
providerCredentials?: {
azureEndpoint?: string
azureApiVersion?: string
vertexProject?: string
vertexLocation?: string
vertexCredential?: string
bedrockAccessKeyId?: string
bedrockSecretKey?: string
bedrockRegion?: string
}
workflowId?: string
workspaceId?: string
requestId: string
}
@@ -89,7 +104,9 @@ async function scoreHallucinationWithLLM(
userInput: string,
ragContext: string[],
model: string,
apiKey: string,
apiKey: string | undefined,
providerCredentials: HallucinationValidationInput['providerCredentials'],
workspaceId: string | undefined,
requestId: string
): Promise<{ score: number; reasoning: string }> {
try {
@@ -127,6 +144,23 @@ Evaluate the consistency and provide your score and reasoning in JSON format.`
const providerId = getProviderFromModel(model)
let finalApiKey: string | undefined = apiKey
if (providerId === 'vertex' && providerCredentials?.vertexCredential) {
const credential = await db.query.account.findFirst({
where: eq(account.id, providerCredentials.vertexCredential),
})
if (credential) {
const { accessToken } = await refreshTokenIfNeeded(
requestId,
credential,
providerCredentials.vertexCredential
)
if (accessToken) {
finalApiKey = accessToken
}
}
}
const response = await executeProviderRequest(providerId, {
model,
systemPrompt,
@@ -137,7 +171,15 @@ Evaluate the consistency and provide your score and reasoning in JSON format.`
},
],
temperature: 0.1, // Low temperature for consistent scoring
apiKey,
apiKey: finalApiKey,
azureEndpoint: providerCredentials?.azureEndpoint,
azureApiVersion: providerCredentials?.azureApiVersion,
vertexProject: providerCredentials?.vertexProject,
vertexLocation: providerCredentials?.vertexLocation,
bedrockAccessKeyId: providerCredentials?.bedrockAccessKeyId,
bedrockSecretKey: providerCredentials?.bedrockSecretKey,
bedrockRegion: providerCredentials?.bedrockRegion,
workspaceId,
})
if (response instanceof ReadableStream || ('stream' in response && 'execution' in response)) {
@@ -184,8 +226,18 @@ Evaluate the consistency and provide your score and reasoning in JSON format.`
export async function validateHallucination(
input: HallucinationValidationInput
): Promise<HallucinationValidationResult> {
const { userInput, knowledgeBaseId, threshold, topK, model, apiKey, workflowId, requestId } =
input
const {
userInput,
knowledgeBaseId,
threshold,
topK,
model,
apiKey,
providerCredentials,
workflowId,
workspaceId,
requestId,
} = input
try {
if (!userInput || userInput.trim().length === 0) {
@@ -202,17 +254,6 @@ export async function validateHallucination(
}
}
let finalApiKey: string
try {
const providerId = getProviderFromModel(model)
finalApiKey = getApiKey(providerId, model, apiKey)
} catch (error: any) {
return {
passed: false,
error: `API key error: ${error.message}`,
}
}
// Step 1: Query knowledge base with RAG
const ragContext = await queryKnowledgeBase(
knowledgeBaseId,
@@ -234,7 +275,9 @@ export async function validateHallucination(
userInput,
ragContext,
model,
finalApiKey,
apiKey,
providerCredentials,
workspaceId,
requestId
)
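
An illustrative HallucinationValidationInput that carries BYOK provider credentials end to end (ids, model name, and key values are placeholders; threshold is assumed to be a 0-1 score):

const input: HallucinationValidationInput = {
  userInput: 'Summarize the refund policy.',
  knowledgeBaseId: 'kb_123',
  threshold: 0.7,
  topK: 10,
  model: 'claude-sonnet-4-5',
  providerCredentials: {
    bedrockAccessKeyId: 'AKIA...',
    bedrockSecretKey: '...',
    bedrockRegion: 'us-east-1',
  },
  workspaceId: 'ws_123',
  requestId: 'req_123',
}
const result = await validateHallucination(input)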

View File

@@ -33,11 +33,25 @@ export class SnapshotService implements ISnapshotService {
const existingSnapshot = await this.getSnapshotByHash(workflowId, stateHash)
if (existingSnapshot) {
let refreshedState: WorkflowState = existingSnapshot.stateData
try {
await db
.update(workflowExecutionSnapshots)
.set({ stateData: state })
.where(eq(workflowExecutionSnapshots.id, existingSnapshot.id))
refreshedState = state
} catch (error) {
logger.warn(
`Failed to refresh snapshot stateData for ${existingSnapshot.id}, continuing with existing data`,
error
)
}
logger.info(
`Reusing existing snapshot for workflow ${workflowId} (hash: ${stateHash.slice(0, 12)}...)`
)
return {
snapshot: existingSnapshot,
snapshot: { ...existingSnapshot, stateData: refreshedState },
isNew: false,
}
}

View File

@@ -1,6 +1,6 @@
import { createLogger } from '@sim/logger'
import type { NextRequest, NextResponse } from 'next/server'
import { checkHybridAuth } from '@/lib/auth/hybrid'
import { checkSessionOrInternalAuth } from '@/lib/auth/hybrid'
import { generateRequestId } from '@/lib/core/utils/request'
import { createMcpErrorResponse } from '@/lib/mcp/utils'
import { getUserEntityPermissions } from '@/lib/workspaces/permissions/utils'
@@ -43,7 +43,7 @@ async function validateMcpAuth(
const requestId = generateRequestId()
try {
const auth = await checkHybridAuth(request, { requireWorkflowId: false })
const auth = await checkSessionOrInternalAuth(request, { requireWorkflowId: false })
if (!auth.success || !auth.userId) {
logger.warn(`[${requestId}] Authentication failed: ${auth.error}`)
return {

View File

@@ -21,6 +21,11 @@ export const TOKENIZATION_CONFIG = {
confidence: 'high',
supportedMethods: ['heuristic', 'fallback'],
},
'azure-anthropic': {
avgCharsPerToken: 4.5,
confidence: 'high',
supportedMethods: ['heuristic', 'fallback'],
},
google: {
avgCharsPerToken: 5,
confidence: 'medium',

View File

@@ -204,6 +204,7 @@ export function estimateTokenCount(text: string, providerId?: string): TokenEsti
estimatedTokens = estimateOpenAITokens(text)
break
case 'anthropic':
case 'azure-anthropic':
estimatedTokens = estimateAnthropicTokens(text)
break
case 'google':
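
A worked example of the chars-per-token heuristic this config drives (rounding is an assumption; the real estimator may clamp or round differently):

const AVG_CHARS_PER_TOKEN = 4.5 // anthropic / azure-anthropic entry above

function estimateTokensSketch(text: string): number {
  return Math.ceil(text.length / AVG_CHARS_PER_TOKEN)
}

// e.g. a 900-character prompt estimates to Math.ceil(900 / 4.5) = 200 tokens.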

View File

@@ -24,6 +24,7 @@ import {
validateTypeformSignature,
verifyProviderWebhook,
} from '@/lib/webhooks/utils.server'
import { getWorkspaceBilledAccountUserId } from '@/lib/workspaces/utils'
import { executeWebhookJob } from '@/background/webhook-execution'
import { resolveEnvVarReferences } from '@/executor/utils/reference-validation'
import { isGitHubEventMatch } from '@/triggers/github/utils'
@@ -1003,10 +1004,23 @@ export async function queueWebhookExecution(
}
}
if (!foundWorkflow.workspaceId) {
logger.error(`[${options.requestId}] Workflow ${foundWorkflow.id} has no workspaceId`)
return NextResponse.json({ error: 'Workflow has no associated workspace' }, { status: 500 })
}
const actorUserId = await getWorkspaceBilledAccountUserId(foundWorkflow.workspaceId)
if (!actorUserId) {
logger.error(
`[${options.requestId}] No billing account for workspace ${foundWorkflow.workspaceId}`
)
return NextResponse.json({ error: 'Unable to resolve billing account' }, { status: 500 })
}
const payload = {
webhookId: foundWebhook.id,
workflowId: foundWorkflow.id,
userId: foundWorkflow.userId,
userId: actorUserId,
provider: foundWebhook.provider,
body,
headers,
@@ -1017,7 +1031,7 @@ export async function queueWebhookExecution(
const jobQueue = await getJobQueue()
const jobId = await jobQueue.enqueue('webhook-execution', payload, {
metadata: { workflowId: foundWorkflow.id, userId: foundWorkflow.userId },
metadata: { workflowId: foundWorkflow.id, userId: actorUserId },
})
logger.info(
`[${options.requestId}] Queued webhook execution task ${jobId} for ${foundWebhook.provider} webhook`

View File

@@ -156,6 +156,15 @@ describe('evaluateSubBlockCondition', () => {
expect(evaluateSubBlockCondition(condition, values)).toBe(true)
})
it.concurrent('passes current values into function conditions', () => {
const condition = (values?: Record<string, unknown>) => ({
field: 'model',
value: typeof values?.model === 'string' ? values.model : '__no_model_selected__',
})
const values = { model: 'ollama/gemma3:4b' }
expect(evaluateSubBlockCondition(condition, values)).toBe(true)
})
it.concurrent('handles boolean values', () => {
const condition = { field: 'enabled', value: true }
const values = { enabled: true }

View File

@@ -100,11 +100,14 @@ export function resolveCanonicalMode(
* Evaluate a subblock condition against a map of raw values.
*/
export function evaluateSubBlockCondition(
condition: SubBlockCondition | (() => SubBlockCondition) | undefined,
condition:
| SubBlockCondition
| ((values?: Record<string, unknown>) => SubBlockCondition)
| undefined,
values: Record<string, unknown>
): boolean {
if (!condition) return true
const actual = typeof condition === 'function' ? condition() : condition
const actual = typeof condition === 'function' ? condition(values) : condition
const fieldValue = values[actual.field]
const valueMatch = Array.isArray(actual.value)
? fieldValue != null &&

View File

@@ -1,5 +1,6 @@
import type Anthropic from '@anthropic-ai/sdk'
import { transformJSONSchema } from '@anthropic-ai/sdk/lib/transform-json-schema'
import type { RawMessageStreamEvent } from '@anthropic-ai/sdk/resources/messages/messages'
import type { Logger } from '@sim/logger'
import type { StreamingExecution } from '@/executor/types'
import { MAX_TOOL_ITERATIONS } from '@/providers'
@@ -34,11 +35,21 @@ export interface AnthropicProviderConfig {
logger: Logger
}
/**
* Custom payload type extending the SDK's base message creation params.
* Adds fields not yet in the SDK: adaptive thinking, output_format, output_config.
*/
interface AnthropicPayload extends Omit<Anthropic.Messages.MessageStreamParams, 'thinking'> {
thinking?: Anthropic.Messages.ThinkingConfigParam | { type: 'adaptive' }
output_format?: { type: 'json_schema'; schema: Record<string, unknown> }
output_config?: { effort: string }
}
/**
* Generates prompt-based schema instructions for older models that don't support native structured outputs.
* This is a fallback approach that adds schema requirements to the system prompt.
*/
function generateSchemaInstructions(schema: any, schemaName?: string): string {
function generateSchemaInstructions(schema: Record<string, unknown>, schemaName?: string): string {
const name = schemaName || 'response'
return `IMPORTANT: You must respond with a valid JSON object that conforms to the following schema.
Do not include any text before or after the JSON object. Only output the JSON.
@@ -113,6 +124,30 @@ function buildThinkingConfig(
}
}
/**
* The Anthropic SDK requires streaming whenever max_tokens exceeds this threshold,
* to avoid HTTP timeouts. When thinking is enabled and pushes max_tokens above this
* limit, we stream internally and collect the final message.
*/
const ANTHROPIC_SDK_NON_STREAMING_MAX_TOKENS = 21333
/**
* Creates an Anthropic message, automatically using streaming internally when max_tokens
* exceeds the SDK's non-streaming threshold. Returns the same Message object either way.
*/
async function createMessage(
anthropic: Anthropic,
payload: AnthropicPayload
): Promise<Anthropic.Messages.Message> {
if (payload.max_tokens > ANTHROPIC_SDK_NON_STREAMING_MAX_TOKENS && !payload.stream) {
const stream = anthropic.messages.stream(payload as Anthropic.Messages.MessageStreamParams)
return stream.finalMessage()
}
return anthropic.messages.create(
payload as Anthropic.Messages.MessageCreateParamsNonStreaming
) as Promise<Anthropic.Messages.Message>
}
/**
* Executes a request using the Anthropic API with full tool loop support.
* This is the shared core implementation used by both the standard Anthropic provider
@@ -135,7 +170,7 @@ export async function executeAnthropicProviderRequest(
const anthropic = config.createClient(request.apiKey, useNativeStructuredOutputs)
const messages: any[] = []
const messages: Anthropic.Messages.MessageParam[] = []
let systemPrompt = request.systemPrompt || ''
if (request.context) {
@@ -153,8 +188,8 @@ export async function executeAnthropicProviderRequest(
content: [
{
type: 'tool_result',
tool_use_id: msg.name,
content: msg.content,
tool_use_id: msg.name || '',
content: msg.content || undefined,
},
],
})
@@ -188,12 +223,12 @@ export async function executeAnthropicProviderRequest(
systemPrompt = ''
}
let anthropicTools = request.tools?.length
let anthropicTools: Anthropic.Messages.Tool[] | undefined = request.tools?.length
? request.tools.map((tool) => ({
name: tool.id,
description: tool.description,
input_schema: {
type: 'object',
type: 'object' as const,
properties: tool.parameters.properties,
required: tool.parameters.required,
},
@@ -238,13 +273,12 @@ export async function executeAnthropicProviderRequest(
}
}
const payload: any = {
const payload: AnthropicPayload = {
model: request.model,
messages,
system: systemPrompt,
max_tokens:
Number.parseInt(String(request.maxTokens)) ||
getMaxOutputTokensForModel(request.model, request.stream ?? false),
Number.parseInt(String(request.maxTokens)) || getMaxOutputTokensForModel(request.model),
temperature: Number.parseFloat(String(request.temperature ?? 0.7)),
}
@@ -268,13 +302,35 @@ export async function executeAnthropicProviderRequest(
}
// Add extended thinking configuration if supported and requested
if (request.thinkingLevel) {
// The 'none' sentinel means "disable thinking" — skip configuration entirely.
if (request.thinkingLevel && request.thinkingLevel !== 'none') {
const thinkingConfig = buildThinkingConfig(request.model, request.thinkingLevel)
if (thinkingConfig) {
payload.thinking = thinkingConfig.thinking
if (thinkingConfig.outputConfig) {
payload.output_config = thinkingConfig.outputConfig
}
// Per Anthropic docs: budget_tokens must be less than max_tokens.
// Ensure max_tokens leaves room for both thinking and text output.
if (
thinkingConfig.thinking.type === 'enabled' &&
'budget_tokens' in thinkingConfig.thinking
) {
const budgetTokens = thinkingConfig.thinking.budget_tokens
const minMaxTokens = budgetTokens + 4096
if (payload.max_tokens < minMaxTokens) {
const modelMax = getMaxOutputTokensForModel(request.model)
payload.max_tokens = Math.min(minMaxTokens, modelMax)
logger.info(
`Adjusted max_tokens to ${payload.max_tokens} to satisfy budget_tokens (${budgetTokens}) constraint`
)
}
}
// Per Anthropic docs: thinking is not compatible with temperature or top_k modifications.
payload.temperature = undefined
const isAdaptive = thinkingConfig.thinking.type === 'adaptive'
logger.info(
`Using ${isAdaptive ? 'adaptive' : 'extended'} thinking for model: ${modelId} with ${isAdaptive ? `effort: ${request.thinkingLevel}` : `budget: ${(thinkingConfig.thinking as { budget_tokens: number }).budget_tokens}`}`
@@ -288,7 +344,16 @@ export async function executeAnthropicProviderRequest(
if (anthropicTools?.length) {
payload.tools = anthropicTools
if (toolChoice !== 'auto') {
// Per Anthropic docs: forced tool_choice (type: "tool" or "any") is incompatible with
// thinking. Only auto and none are supported when thinking is enabled.
if (payload.thinking) {
// Per Anthropic docs: only 'auto' (default) and 'none' work with thinking.
if (toolChoice === 'none') {
payload.tool_choice = { type: 'none' }
}
} else if (toolChoice === 'none') {
payload.tool_choice = { type: 'none' }
} else if (toolChoice !== 'auto') {
payload.tool_choice = toolChoice
}
}
@@ -301,42 +366,46 @@ export async function executeAnthropicProviderRequest(
const providerStartTime = Date.now()
const providerStartTimeISO = new Date(providerStartTime).toISOString()
const streamResponse: any = await anthropic.messages.create({
const streamResponse = await anthropic.messages.create({
...payload,
stream: true,
})
} as Anthropic.Messages.MessageCreateParamsStreaming)
const streamingResult = {
stream: createReadableStreamFromAnthropicStream(streamResponse, (content, usage) => {
streamingResult.execution.output.content = content
streamingResult.execution.output.tokens = {
input: usage.input_tokens,
output: usage.output_tokens,
total: usage.input_tokens + usage.output_tokens,
}
stream: createReadableStreamFromAnthropicStream(
streamResponse as AsyncIterable<RawMessageStreamEvent>,
(content, usage) => {
streamingResult.execution.output.content = content
streamingResult.execution.output.tokens = {
input: usage.input_tokens,
output: usage.output_tokens,
total: usage.input_tokens + usage.output_tokens,
}
const costResult = calculateCost(request.model, usage.input_tokens, usage.output_tokens)
streamingResult.execution.output.cost = {
input: costResult.input,
output: costResult.output,
total: costResult.total,
}
const costResult = calculateCost(request.model, usage.input_tokens, usage.output_tokens)
streamingResult.execution.output.cost = {
input: costResult.input,
output: costResult.output,
total: costResult.total,
}
const streamEndTime = Date.now()
const streamEndTimeISO = new Date(streamEndTime).toISOString()
const streamEndTime = Date.now()
const streamEndTimeISO = new Date(streamEndTime).toISOString()
if (streamingResult.execution.output.providerTiming) {
streamingResult.execution.output.providerTiming.endTime = streamEndTimeISO
streamingResult.execution.output.providerTiming.duration =
streamEndTime - providerStartTime
if (streamingResult.execution.output.providerTiming.timeSegments?.[0]) {
streamingResult.execution.output.providerTiming.timeSegments[0].endTime = streamEndTime
streamingResult.execution.output.providerTiming.timeSegments[0].duration =
if (streamingResult.execution.output.providerTiming) {
streamingResult.execution.output.providerTiming.endTime = streamEndTimeISO
streamingResult.execution.output.providerTiming.duration =
streamEndTime - providerStartTime
if (streamingResult.execution.output.providerTiming.timeSegments?.[0]) {
streamingResult.execution.output.providerTiming.timeSegments[0].endTime =
streamEndTime
streamingResult.execution.output.providerTiming.timeSegments[0].duration =
streamEndTime - providerStartTime
}
}
}
}),
),
execution: {
success: true,
output: {
@@ -385,21 +454,13 @@ export async function executeAnthropicProviderRequest(
const providerStartTime = Date.now()
const providerStartTimeISO = new Date(providerStartTime).toISOString()
// Cap intermediate calls at non-streaming limit to avoid SDK timeout errors,
// but allow users to set lower values if desired
const nonStreamingLimit = getMaxOutputTokensForModel(request.model, false)
const nonStreamingMaxTokens = request.maxTokens
? Math.min(Number.parseInt(String(request.maxTokens)), nonStreamingLimit)
: nonStreamingLimit
const intermediatePayload = { ...payload, max_tokens: nonStreamingMaxTokens }
try {
const initialCallTime = Date.now()
const originalToolChoice = intermediatePayload.tool_choice
const originalToolChoice = payload.tool_choice
const forcedTools = preparedTools?.forcedTools || []
let usedForcedTools: string[] = []
let currentResponse = await anthropic.messages.create(intermediatePayload)
let currentResponse = await createMessage(anthropic, payload)
const firstResponseTime = Date.now() - initialCallTime
let content = ''
@@ -468,10 +529,10 @@ export async function executeAnthropicProviderRequest(
const toolExecutionPromises = toolUses.map(async (toolUse) => {
const toolCallStartTime = Date.now()
const toolName = toolUse.name
const toolArgs = toolUse.input as Record<string, any>
const toolArgs = toolUse.input as Record<string, unknown>
try {
const tool = request.tools?.find((t: any) => t.id === toolName)
const tool = request.tools?.find((t) => t.id === toolName)
if (!tool) return null
const { toolParams, executionParams } = prepareToolExecution(tool, toolArgs, request)
@@ -512,17 +573,8 @@ export async function executeAnthropicProviderRequest(
const executionResults = await Promise.allSettled(toolExecutionPromises)
// Collect all tool_use and tool_result blocks for batching
const toolUseBlocks: Array<{
type: 'tool_use'
id: string
name: string
input: Record<string, unknown>
}> = []
const toolResultBlocks: Array<{
type: 'tool_result'
tool_use_id: string
content: string
}> = []
const toolUseBlocks: Anthropic.Messages.ToolUseBlockParam[] = []
const toolResultBlocks: Anthropic.Messages.ToolResultBlockParam[] = []
for (const settledResult of executionResults) {
if (settledResult.status === 'rejected' || !settledResult.value) continue
@@ -583,11 +635,25 @@ export async function executeAnthropicProviderRequest(
})
}
// Add ONE assistant message with ALL tool_use blocks
// Per Anthropic docs: thinking blocks must be preserved in assistant messages
// during tool use to maintain reasoning continuity.
const thinkingBlocks = currentResponse.content.filter(
(
item
): item is
| Anthropic.Messages.ThinkingBlock
| Anthropic.Messages.RedactedThinkingBlock =>
item.type === 'thinking' || item.type === 'redacted_thinking'
)
// Add ONE assistant message with thinking + tool_use blocks
if (toolUseBlocks.length > 0) {
currentMessages.push({
role: 'assistant',
content: toolUseBlocks as unknown as Anthropic.Messages.ContentBlock[],
content: [
...thinkingBlocks,
...toolUseBlocks,
] as Anthropic.Messages.ContentBlockParam[],
})
}
@@ -595,19 +661,23 @@ export async function executeAnthropicProviderRequest(
if (toolResultBlocks.length > 0) {
currentMessages.push({
role: 'user',
content: toolResultBlocks as unknown as Anthropic.Messages.ContentBlockParam[],
content: toolResultBlocks as Anthropic.Messages.ContentBlockParam[],
})
}
const thisToolsTime = Date.now() - toolsStartTime
toolsTime += thisToolsTime
const nextPayload = {
...intermediatePayload,
const nextPayload: AnthropicPayload = {
...payload,
messages: currentMessages,
}
// Per Anthropic docs: forced tool_choice is incompatible with thinking.
// Only auto and none are supported when thinking is enabled.
const thinkingEnabled = !!payload.thinking
if (
!thinkingEnabled &&
typeof originalToolChoice === 'object' &&
hasUsedForcedTool &&
forcedTools.length > 0
@@ -624,7 +694,11 @@ export async function executeAnthropicProviderRequest(
nextPayload.tool_choice = undefined
logger.info('All forced tools have been used, removing tool_choice parameter')
}
} else if (hasUsedForcedTool && typeof originalToolChoice === 'object') {
} else if (
!thinkingEnabled &&
hasUsedForcedTool &&
typeof originalToolChoice === 'object'
) {
nextPayload.tool_choice = undefined
logger.info(
'Removing tool_choice parameter for subsequent requests after forced tool was used'
@@ -633,7 +707,7 @@ export async function executeAnthropicProviderRequest(
const nextModelStartTime = Date.now()
currentResponse = await anthropic.messages.create(nextPayload)
currentResponse = await createMessage(anthropic, nextPayload)
const nextCheckResult = checkForForcedToolUsage(
currentResponse,
@@ -682,33 +756,38 @@ export async function executeAnthropicProviderRequest(
tool_choice: undefined,
}
const streamResponse: any = await anthropic.messages.create(streamingPayload)
const streamResponse = await anthropic.messages.create(
streamingPayload as Anthropic.Messages.MessageCreateParamsStreaming
)
const streamingResult = {
stream: createReadableStreamFromAnthropicStream(streamResponse, (streamContent, usage) => {
streamingResult.execution.output.content = streamContent
streamingResult.execution.output.tokens = {
input: tokens.input + usage.input_tokens,
output: tokens.output + usage.output_tokens,
total: tokens.total + usage.input_tokens + usage.output_tokens,
}
stream: createReadableStreamFromAnthropicStream(
streamResponse as AsyncIterable<RawMessageStreamEvent>,
(streamContent, usage) => {
streamingResult.execution.output.content = streamContent
streamingResult.execution.output.tokens = {
input: tokens.input + usage.input_tokens,
output: tokens.output + usage.output_tokens,
total: tokens.total + usage.input_tokens + usage.output_tokens,
}
const streamCost = calculateCost(request.model, usage.input_tokens, usage.output_tokens)
streamingResult.execution.output.cost = {
input: accumulatedCost.input + streamCost.input,
output: accumulatedCost.output + streamCost.output,
total: accumulatedCost.total + streamCost.total,
}
const streamCost = calculateCost(request.model, usage.input_tokens, usage.output_tokens)
streamingResult.execution.output.cost = {
input: accumulatedCost.input + streamCost.input,
output: accumulatedCost.output + streamCost.output,
total: accumulatedCost.total + streamCost.total,
}
const streamEndTime = Date.now()
const streamEndTimeISO = new Date(streamEndTime).toISOString()
const streamEndTime = Date.now()
const streamEndTimeISO = new Date(streamEndTime).toISOString()
if (streamingResult.execution.output.providerTiming) {
streamingResult.execution.output.providerTiming.endTime = streamEndTimeISO
streamingResult.execution.output.providerTiming.duration =
streamEndTime - providerStartTime
if (streamingResult.execution.output.providerTiming) {
streamingResult.execution.output.providerTiming.endTime = streamEndTimeISO
streamingResult.execution.output.providerTiming.duration =
streamEndTime - providerStartTime
}
}
}),
),
execution: {
success: true,
output: {
@@ -778,21 +857,13 @@ export async function executeAnthropicProviderRequest(
const providerStartTime = Date.now()
const providerStartTimeISO = new Date(providerStartTime).toISOString()
// Cap intermediate calls at non-streaming limit to avoid SDK timeout errors,
// but allow users to set lower values if desired
const nonStreamingLimit = getMaxOutputTokensForModel(request.model, false)
const toolLoopMaxTokens = request.maxTokens
? Math.min(Number.parseInt(String(request.maxTokens)), nonStreamingLimit)
: nonStreamingLimit
const toolLoopPayload = { ...payload, max_tokens: toolLoopMaxTokens }
try {
const initialCallTime = Date.now()
const originalToolChoice = toolLoopPayload.tool_choice
const originalToolChoice = payload.tool_choice
const forcedTools = preparedTools?.forcedTools || []
let usedForcedTools: string[] = []
let currentResponse = await anthropic.messages.create(toolLoopPayload)
let currentResponse = await createMessage(anthropic, payload)
const firstResponseTime = Date.now() - initialCallTime
let content = ''
@@ -872,7 +943,7 @@ export async function executeAnthropicProviderRequest(
const toolExecutionPromises = toolUses.map(async (toolUse) => {
const toolCallStartTime = Date.now()
const toolName = toolUse.name
const toolArgs = toolUse.input as Record<string, any>
const toolArgs = toolUse.input as Record<string, unknown>
// Preserve the original tool_use ID from Claude's response
const toolUseId = toolUse.id
@@ -918,17 +989,8 @@ export async function executeAnthropicProviderRequest(
const executionResults = await Promise.allSettled(toolExecutionPromises)
// Collect all tool_use and tool_result blocks for batching
const toolUseBlocks: Array<{
type: 'tool_use'
id: string
name: string
input: Record<string, unknown>
}> = []
const toolResultBlocks: Array<{
type: 'tool_result'
tool_use_id: string
content: string
}> = []
const toolUseBlocks: Anthropic.Messages.ToolUseBlockParam[] = []
const toolResultBlocks: Anthropic.Messages.ToolResultBlockParam[] = []
for (const settledResult of executionResults) {
if (settledResult.status === 'rejected' || !settledResult.value) continue
@@ -989,11 +1051,23 @@ export async function executeAnthropicProviderRequest(
})
}
// Add ONE assistant message with ALL tool_use blocks
// Per Anthropic docs: thinking blocks must be preserved in assistant messages
// during tool use to maintain reasoning continuity.
const thinkingBlocks = currentResponse.content.filter(
(
item
): item is Anthropic.Messages.ThinkingBlock | Anthropic.Messages.RedactedThinkingBlock =>
item.type === 'thinking' || item.type === 'redacted_thinking'
)
// Add ONE assistant message with thinking + tool_use blocks
if (toolUseBlocks.length > 0) {
currentMessages.push({
role: 'assistant',
content: toolUseBlocks as unknown as Anthropic.Messages.ContentBlock[],
content: [
...thinkingBlocks,
...toolUseBlocks,
] as Anthropic.Messages.ContentBlockParam[],
})
}
@@ -1001,19 +1075,27 @@ export async function executeAnthropicProviderRequest(
if (toolResultBlocks.length > 0) {
currentMessages.push({
role: 'user',
content: toolResultBlocks as unknown as Anthropic.Messages.ContentBlockParam[],
content: toolResultBlocks as Anthropic.Messages.ContentBlockParam[],
})
}
const thisToolsTime = Date.now() - toolsStartTime
toolsTime += thisToolsTime
const nextPayload = {
...toolLoopPayload,
const nextPayload: AnthropicPayload = {
...payload,
messages: currentMessages,
}
if (typeof originalToolChoice === 'object' && hasUsedForcedTool && forcedTools.length > 0) {
// Per Anthropic docs: forced tool_choice is incompatible with thinking.
// Only auto and none are supported when thinking is enabled.
const thinkingEnabled = !!payload.thinking
if (
!thinkingEnabled &&
typeof originalToolChoice === 'object' &&
hasUsedForcedTool &&
forcedTools.length > 0
) {
const remainingTools = forcedTools.filter((tool) => !usedForcedTools.includes(tool))
if (remainingTools.length > 0) {
@@ -1026,7 +1108,11 @@ export async function executeAnthropicProviderRequest(
nextPayload.tool_choice = undefined
logger.info('All forced tools have been used, removing tool_choice parameter')
}
} else if (hasUsedForcedTool && typeof originalToolChoice === 'object') {
} else if (
!thinkingEnabled &&
hasUsedForcedTool &&
typeof originalToolChoice === 'object'
) {
nextPayload.tool_choice = undefined
logger.info(
'Removing tool_choice parameter for subsequent requests after forced tool was used'
@@ -1035,7 +1121,7 @@ export async function executeAnthropicProviderRequest(
const nextModelStartTime = Date.now()
currentResponse = await anthropic.messages.create(nextPayload)
currentResponse = await createMessage(anthropic, nextPayload)
const nextCheckResult = checkForForcedToolUsage(
currentResponse,
@@ -1098,33 +1184,38 @@ export async function executeAnthropicProviderRequest(
tool_choice: undefined,
}
const streamResponse: any = await anthropic.messages.create(streamingPayload)
const streamResponse = await anthropic.messages.create(
streamingPayload as Anthropic.Messages.MessageCreateParamsStreaming
)
const streamingResult = {
stream: createReadableStreamFromAnthropicStream(streamResponse, (streamContent, usage) => {
streamingResult.execution.output.content = streamContent
streamingResult.execution.output.tokens = {
input: tokens.input + usage.input_tokens,
output: tokens.output + usage.output_tokens,
total: tokens.total + usage.input_tokens + usage.output_tokens,
}
stream: createReadableStreamFromAnthropicStream(
streamResponse as AsyncIterable<RawMessageStreamEvent>,
(streamContent, usage) => {
streamingResult.execution.output.content = streamContent
streamingResult.execution.output.tokens = {
input: tokens.input + usage.input_tokens,
output: tokens.output + usage.output_tokens,
total: tokens.total + usage.input_tokens + usage.output_tokens,
}
const streamCost = calculateCost(request.model, usage.input_tokens, usage.output_tokens)
streamingResult.execution.output.cost = {
input: cost.input + streamCost.input,
output: cost.output + streamCost.output,
total: cost.total + streamCost.total,
}
const streamCost = calculateCost(request.model, usage.input_tokens, usage.output_tokens)
streamingResult.execution.output.cost = {
input: cost.input + streamCost.input,
output: cost.output + streamCost.output,
total: cost.total + streamCost.total,
}
const streamEndTime = Date.now()
const streamEndTimeISO = new Date(streamEndTime).toISOString()
const streamEndTime = Date.now()
const streamEndTimeISO = new Date(streamEndTime).toISOString()
if (streamingResult.execution.output.providerTiming) {
streamingResult.execution.output.providerTiming.endTime = streamEndTimeISO
streamingResult.execution.output.providerTiming.duration =
streamEndTime - providerStartTime
if (streamingResult.execution.output.providerTiming) {
streamingResult.execution.output.providerTiming.endTime = streamEndTimeISO
streamingResult.execution.output.providerTiming.duration =
streamEndTime - providerStartTime
}
}
}),
),
execution: {
success: true,
output: {
@@ -1179,7 +1270,7 @@ export async function executeAnthropicProviderRequest(
toolCalls.length > 0
? toolCalls.map((tc) => ({
name: tc.name,
arguments: tc.arguments as Record<string, any>,
arguments: tc.arguments as Record<string, unknown>,
startTime: tc.startTime,
endTime: tc.endTime,
duration: tc.duration,


@@ -35,6 +35,8 @@ export const azureAnthropicProvider: ProviderConfig = {
// The SDK appends /v1/messages automatically
const baseURL = `${request.azureEndpoint.replace(/\/$/, '')}/anthropic`
const anthropicVersion = request.azureApiVersion || '2023-06-01'
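    // Illustrative: callers can now pin the header per request, e.g.
    //   { ...request, azureApiVersion: '2024-01-01' }  // hypothetical version string
    // which is threaded into the 'anthropic-version' header below; the default
    // stays '2023-06-01'.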
return executeAnthropicProviderRequest(
{
...request,
@@ -49,7 +51,7 @@ export const azureAnthropicProvider: ProviderConfig = {
apiKey,
defaultHeaders: {
'api-key': apiKey,
'anthropic-version': '2023-06-01',
'anthropic-version': anthropicVersion,
...(useNativeStructuredOutputs
? { 'anthropic-beta': 'structured-outputs-2025-11-13' }
: {}),


@@ -1,6 +1,14 @@
import { createLogger } from '@sim/logger'
import { AzureOpenAI } from 'openai'
import type { ChatCompletionCreateParamsStreaming } from 'openai/resources/chat/completions'
import type {
ChatCompletion,
ChatCompletionCreateParamsBase,
ChatCompletionCreateParamsStreaming,
ChatCompletionMessageParam,
ChatCompletionTool,
ChatCompletionToolChoiceOption,
} from 'openai/resources/chat/completions'
import type { ReasoningEffort } from 'openai/resources/shared'
import { env } from '@/lib/core/config/env'
import type { StreamingExecution } from '@/executor/types'
import { MAX_TOOL_ITERATIONS } from '@/providers'
@@ -16,6 +24,7 @@ import {
import { getProviderDefaultModel, getProviderModels } from '@/providers/models'
import { executeResponsesProviderRequest } from '@/providers/openai/core'
import type {
FunctionCallResponse,
ProviderConfig,
ProviderRequest,
ProviderResponse,
@@ -59,7 +68,7 @@ async function executeChatCompletionsRequest(
endpoint: azureEndpoint,
})
const allMessages: any[] = []
const allMessages: ChatCompletionMessageParam[] = []
if (request.systemPrompt) {
allMessages.push({
@@ -76,12 +85,12 @@ async function executeChatCompletionsRequest(
}
if (request.messages) {
allMessages.push(...request.messages)
allMessages.push(...(request.messages as ChatCompletionMessageParam[]))
}
const tools = request.tools?.length
const tools: ChatCompletionTool[] | undefined = request.tools?.length
? request.tools.map((tool) => ({
type: 'function',
type: 'function' as const,
function: {
name: tool.id,
description: tool.description,
@@ -90,7 +99,7 @@ async function executeChatCompletionsRequest(
}))
: undefined
const payload: any = {
const payload: ChatCompletionCreateParamsBase & { verbosity?: string } = {
model: deploymentName,
messages: allMessages,
}
@@ -98,8 +107,10 @@ async function executeChatCompletionsRequest(
if (request.temperature !== undefined) payload.temperature = request.temperature
if (request.maxTokens != null) payload.max_completion_tokens = request.maxTokens
if (request.reasoningEffort !== undefined) payload.reasoning_effort = request.reasoningEffort
if (request.verbosity !== undefined) payload.verbosity = request.verbosity
if (request.reasoningEffort !== undefined && request.reasoningEffort !== 'auto')
payload.reasoning_effort = request.reasoningEffort as ReasoningEffort
if (request.verbosity !== undefined && request.verbosity !== 'auto')
payload.verbosity = request.verbosity
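  // 'auto' now means "omit the field and let the service default apply" for both
  // reasoning_effort and verbosity, rather than forwarding the literal string.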
if (request.responseFormat) {
payload.response_format = {
@@ -121,8 +132,8 @@ async function executeChatCompletionsRequest(
const { tools: filteredTools, toolChoice } = preparedTools
if (filteredTools?.length && toolChoice) {
payload.tools = filteredTools
payload.tool_choice = toolChoice
payload.tools = filteredTools as ChatCompletionTool[]
payload.tool_choice = toolChoice as ChatCompletionToolChoiceOption
logger.info('Azure OpenAI request configuration:', {
toolCount: filteredTools.length,
@@ -231,7 +242,7 @@ async function executeChatCompletionsRequest(
const forcedTools = preparedTools?.forcedTools || []
let usedForcedTools: string[] = []
let currentResponse = await azureOpenAI.chat.completions.create(payload)
let currentResponse = (await azureOpenAI.chat.completions.create(payload)) as ChatCompletion
const firstResponseTime = Date.now() - initialCallTime
let content = currentResponse.choices[0]?.message?.content || ''
@@ -240,8 +251,8 @@ async function executeChatCompletionsRequest(
output: currentResponse.usage?.completion_tokens || 0,
total: currentResponse.usage?.total_tokens || 0,
}
const toolCalls = []
const toolResults = []
const toolCalls: (FunctionCallResponse & { success: boolean })[] = []
const toolResults: Record<string, unknown>[] = []
const currentMessages = [...allMessages]
let iterationCount = 0
let modelTime = firstResponseTime
@@ -260,7 +271,7 @@ async function executeChatCompletionsRequest(
const firstCheckResult = checkForForcedToolUsage(
currentResponse,
originalToolChoice,
originalToolChoice ?? 'auto',
logger,
forcedTools,
usedForcedTools
@@ -356,10 +367,10 @@ async function executeChatCompletionsRequest(
duration: duration,
})
let resultContent: any
let resultContent: Record<string, unknown>
if (result.success) {
toolResults.push(result.output)
resultContent = result.output
toolResults.push(result.output as Record<string, unknown>)
resultContent = result.output as Record<string, unknown>
} else {
resultContent = {
error: true,
@@ -409,11 +420,11 @@ async function executeChatCompletionsRequest(
}
const nextModelStartTime = Date.now()
currentResponse = await azureOpenAI.chat.completions.create(nextPayload)
currentResponse = (await azureOpenAI.chat.completions.create(nextPayload)) as ChatCompletion
const nextCheckResult = checkForForcedToolUsage(
currentResponse,
nextPayload.tool_choice,
nextPayload.tool_choice ?? 'auto',
logger,
forcedTools,
usedForcedTools


@@ -1,4 +1,5 @@
import type { Logger } from '@sim/logger'
import type OpenAI from 'openai'
import type { ChatCompletionChunk } from 'openai/resources/chat/completions'
import type { CompletionUsage } from 'openai/resources/completions'
import type { Stream } from 'openai/streaming'
@@ -20,8 +21,8 @@ export function createReadableStreamFromAzureOpenAIStream(
* Uses the shared OpenAI-compatible forced tool usage helper.
*/
export function checkForForcedToolUsage(
response: any,
toolChoice: string | { type: string; function?: { name: string }; name?: string; any?: any },
response: OpenAI.Chat.Completions.ChatCompletion,
toolChoice: string | { type: string; function?: { name: string }; name?: string },
_logger: Logger,
forcedTools: string[],
usedForcedTools: string[]


@@ -197,6 +197,9 @@ export const bedrockProvider: ProviderConfig = {
} else if (tc.type === 'function' && tc.function?.name) {
toolChoice = { tool: { name: tc.function.name } }
logger.info(`Using Bedrock tool_choice format: force tool "${tc.function.name}"`)
} else if (tc.type === 'any') {
toolChoice = { any: {} }
logger.info('Using Bedrock tool_choice format: any tool')
} else {
toolChoice = { auto: {} }
}
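          // The three Converse API toolChoice shapes produced above:
          //   { auto: {} }             -> model decides whether to call a tool
          //   { any: {} }              -> model must call some tool
          //   { tool: { name: 'x' } }  -> model must call the named tool ('x' is illustrative)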
@@ -413,6 +416,7 @@ export const bedrockProvider: ProviderConfig = {
input: initialCost.input,
output: initialCost.output,
total: initialCost.total,
pricing: initialCost.pricing,
}
const toolCalls: any[] = []
@@ -860,6 +864,12 @@ export const bedrockProvider: ProviderConfig = {
content,
model: request.model,
tokens,
cost: {
input: cost.input,
output: cost.output,
total: cost.total,
pricing: cost.pricing,
},
toolCalls:
toolCalls.length > 0
? toolCalls.map((tc) => ({


@@ -24,7 +24,6 @@ import {
extractTextContent,
mapToThinkingLevel,
} from '@/providers/google/utils'
import { getThinkingCapability } from '@/providers/models'
import type { FunctionCallResponse, ProviderRequest, ProviderResponse } from '@/providers/types'
import {
calculateCost,
@@ -432,13 +431,11 @@ export async function executeGeminiRequest(
logger.warn('Gemini does not support responseFormat with tools. Structured output ignored.')
}
// Configure thinking for models that support it
const thinkingCapability = getThinkingCapability(model)
if (thinkingCapability) {
const level = request.thinkingLevel ?? thinkingCapability.default ?? 'high'
// Configure thinking only when the user explicitly selects a thinking level
if (request.thinkingLevel && request.thinkingLevel !== 'none') {
const thinkingConfig: ThinkingConfig = {
includeThoughts: false,
thinkingLevel: mapToThinkingLevel(level),
thinkingLevel: mapToThinkingLevel(request.thinkingLevel),
}
geminiConfig.thinkingConfig = thinkingConfig
}
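  // e.g. thinkingLevel === 'none' (or unset) now leaves thinkingConfig off entirely,
  // rather than falling back to the model's default thinking level.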


@@ -141,7 +141,6 @@ export const mistralProvider: ProviderConfig = {
const streamingParams: ChatCompletionCreateParamsStreaming = {
...payload,
stream: true,
stream_options: { include_usage: true },
}
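        // stream_options is dropped from both streaming paths in this file,
        // presumably because Mistral's chat-completions endpoint rejects the
        // OpenAI-specific include_usage flag.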
const streamResponse = await mistral.chat.completions.create(streamingParams)
@@ -453,7 +452,6 @@ export const mistralProvider: ProviderConfig = {
messages: currentMessages,
tool_choice: 'auto',
stream: true,
stream_options: { include_usage: true },
}
const streamResponse = await mistral.chat.completions.create(streamingParams)


@@ -34,17 +34,8 @@ export interface ModelCapabilities {
toolUsageControl?: boolean
computerUse?: boolean
nativeStructuredOutputs?: boolean
/**
* Max output tokens configuration for Anthropic SDK's streaming timeout workaround.
* The Anthropic SDK throws an error for non-streaming requests that may take >10 minutes.
* This only applies to direct Anthropic API calls, not Bedrock (which uses AWS SDK).
*/
maxOutputTokens?: {
/** Maximum tokens for streaming requests */
max: number
/** Safe default for non-streaming requests (to avoid Anthropic SDK timeout errors) */
default: number
}
/** Maximum supported output tokens for this model */
maxOutputTokens?: number
reasoningEffort?: {
values: string[]
}
@@ -109,7 +100,7 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
name: 'OpenAI',
description: "OpenAI's models",
defaultModel: 'gpt-4o',
modelPatterns: [/^gpt/, /^o1/, /^text-embedding/],
modelPatterns: [/^gpt/, /^o\d/, /^text-embedding/],
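    // /^o\d/ broadens the old /^o1/ pattern so o3 and o4-mini also match.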
icon: OpenAIIcon,
capabilities: {
toolUsageControl: true,
@@ -138,7 +129,7 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
},
capabilities: {
reasoningEffort: {
values: ['none', 'minimal', 'low', 'medium', 'high', 'xhigh'],
values: ['none', 'low', 'medium', 'high', 'xhigh'],
},
verbosity: {
values: ['low', 'medium', 'high'],
@@ -164,60 +155,6 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
},
contextWindow: 400000,
},
// {
// id: 'gpt-5.1-mini',
// pricing: {
// input: 0.25,
// cachedInput: 0.025,
// output: 2.0,
// updatedAt: '2025-11-14',
// },
// capabilities: {
// reasoningEffort: {
// values: ['none', 'low', 'medium', 'high'],
// },
// verbosity: {
// values: ['low', 'medium', 'high'],
// },
// },
// contextWindow: 400000,
// },
// {
// id: 'gpt-5.1-nano',
// pricing: {
// input: 0.05,
// cachedInput: 0.005,
// output: 0.4,
// updatedAt: '2025-11-14',
// },
// capabilities: {
// reasoningEffort: {
// values: ['none', 'low', 'medium', 'high'],
// },
// verbosity: {
// values: ['low', 'medium', 'high'],
// },
// },
// contextWindow: 400000,
// },
// {
// id: 'gpt-5.1-codex',
// pricing: {
// input: 1.25,
// cachedInput: 0.125,
// output: 10.0,
// updatedAt: '2025-11-14',
// },
// capabilities: {
// reasoningEffort: {
// values: ['none', 'medium', 'high'],
// },
// verbosity: {
// values: ['low', 'medium', 'high'],
// },
// },
// contextWindow: 400000,
// },
{
id: 'gpt-5',
pricing: {
@@ -280,8 +217,10 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
output: 10.0,
updatedAt: '2025-08-07',
},
capabilities: {},
contextWindow: 400000,
capabilities: {
temperature: { min: 0, max: 2 },
},
contextWindow: 128000,
},
{
id: 'o1',
@@ -311,7 +250,7 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
values: ['low', 'medium', 'high'],
},
},
contextWindow: 128000,
contextWindow: 200000,
},
{
id: 'o4-mini',
@@ -326,7 +265,7 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
values: ['low', 'medium', 'high'],
},
},
contextWindow: 128000,
contextWindow: 200000,
},
{
id: 'gpt-4.1',
@@ -391,7 +330,7 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
capabilities: {
temperature: { min: 0, max: 1 },
nativeStructuredOutputs: true,
maxOutputTokens: { max: 128000, default: 8192 },
maxOutputTokens: 128000,
thinking: {
levels: ['low', 'medium', 'high', 'max'],
default: 'high',
@@ -410,10 +349,10 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
capabilities: {
temperature: { min: 0, max: 1 },
nativeStructuredOutputs: true,
maxOutputTokens: { max: 64000, default: 8192 },
maxOutputTokens: 64000,
thinking: {
levels: ['low', 'medium', 'high'],
default: 'medium',
default: 'high',
},
},
contextWindow: 200000,
@@ -429,10 +368,10 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
capabilities: {
temperature: { min: 0, max: 1 },
nativeStructuredOutputs: true,
maxOutputTokens: { max: 64000, default: 8192 },
maxOutputTokens: 64000,
thinking: {
levels: ['low', 'medium', 'high'],
default: 'medium',
default: 'high',
},
},
contextWindow: 200000,
@@ -447,10 +386,10 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
},
capabilities: {
temperature: { min: 0, max: 1 },
maxOutputTokens: { max: 64000, default: 8192 },
maxOutputTokens: 64000,
thinking: {
levels: ['low', 'medium', 'high'],
default: 'medium',
default: 'high',
},
},
contextWindow: 200000,
@@ -466,10 +405,10 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
capabilities: {
temperature: { min: 0, max: 1 },
nativeStructuredOutputs: true,
maxOutputTokens: { max: 64000, default: 8192 },
maxOutputTokens: 64000,
thinking: {
levels: ['low', 'medium', 'high'],
default: 'medium',
default: 'high',
},
},
contextWindow: 200000,
@@ -484,10 +423,10 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
},
capabilities: {
temperature: { min: 0, max: 1 },
maxOutputTokens: { max: 64000, default: 8192 },
maxOutputTokens: 64000,
thinking: {
levels: ['low', 'medium', 'high'],
default: 'medium',
default: 'high',
},
},
contextWindow: 200000,
@@ -503,10 +442,10 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
capabilities: {
temperature: { min: 0, max: 1 },
nativeStructuredOutputs: true,
maxOutputTokens: { max: 64000, default: 8192 },
maxOutputTokens: 64000,
thinking: {
levels: ['low', 'medium', 'high'],
default: 'medium',
default: 'high',
},
},
contextWindow: 200000,
@@ -515,13 +454,13 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
id: 'claude-3-haiku-20240307',
pricing: {
input: 0.25,
cachedInput: 0.025,
cachedInput: 0.03,
output: 1.25,
updatedAt: '2026-02-05',
},
capabilities: {
temperature: { min: 0, max: 1 },
maxOutputTokens: { max: 4096, default: 4096 },
maxOutputTokens: 4096,
},
contextWindow: 200000,
},
@@ -536,10 +475,10 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
capabilities: {
temperature: { min: 0, max: 1 },
computerUse: true,
maxOutputTokens: { max: 8192, default: 8192 },
maxOutputTokens: 64000,
thinking: {
levels: ['low', 'medium', 'high'],
default: 'medium',
default: 'high',
},
},
contextWindow: 200000,
@@ -580,7 +519,7 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
},
capabilities: {
reasoningEffort: {
values: ['none', 'minimal', 'low', 'medium', 'high', 'xhigh'],
values: ['none', 'low', 'medium', 'high', 'xhigh'],
},
verbosity: {
values: ['low', 'medium', 'high'],
@@ -606,42 +545,6 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
},
contextWindow: 400000,
},
{
id: 'azure/gpt-5.1-mini',
pricing: {
input: 0.25,
cachedInput: 0.025,
output: 2.0,
updatedAt: '2025-11-14',
},
capabilities: {
reasoningEffort: {
values: ['none', 'low', 'medium', 'high'],
},
verbosity: {
values: ['low', 'medium', 'high'],
},
},
contextWindow: 400000,
},
{
id: 'azure/gpt-5.1-nano',
pricing: {
input: 0.05,
cachedInput: 0.005,
output: 0.4,
updatedAt: '2025-11-14',
},
capabilities: {
reasoningEffort: {
values: ['none', 'low', 'medium', 'high'],
},
verbosity: {
values: ['low', 'medium', 'high'],
},
},
contextWindow: 400000,
},
{
id: 'azure/gpt-5.1-codex',
pricing: {
@@ -652,7 +555,7 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
},
capabilities: {
reasoningEffort: {
values: ['none', 'medium', 'high'],
values: ['none', 'low', 'medium', 'high'],
},
verbosity: {
values: ['low', 'medium', 'high'],
@@ -722,23 +625,25 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
output: 10.0,
updatedAt: '2025-08-07',
},
capabilities: {},
contextWindow: 400000,
capabilities: {
temperature: { min: 0, max: 2 },
},
contextWindow: 128000,
},
{
id: 'azure/o3',
pricing: {
input: 10,
cachedInput: 2.5,
output: 40,
updatedAt: '2025-06-15',
input: 2,
cachedInput: 0.5,
output: 8,
updatedAt: '2026-02-06',
},
capabilities: {
reasoningEffort: {
values: ['low', 'medium', 'high'],
},
},
contextWindow: 128000,
contextWindow: 200000,
},
{
id: 'azure/o4-mini',
@@ -753,7 +658,7 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
values: ['low', 'medium', 'high'],
},
},
contextWindow: 128000,
contextWindow: 200000,
},
{
id: 'azure/gpt-4.1',
@@ -763,7 +668,35 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
output: 8.0,
updatedAt: '2025-06-15',
},
capabilities: {},
capabilities: {
temperature: { min: 0, max: 2 },
},
contextWindow: 1000000,
},
{
id: 'azure/gpt-4.1-mini',
pricing: {
input: 0.4,
cachedInput: 0.1,
output: 1.6,
updatedAt: '2025-06-15',
},
capabilities: {
temperature: { min: 0, max: 2 },
},
contextWindow: 1000000,
},
{
id: 'azure/gpt-4.1-nano',
pricing: {
input: 0.1,
cachedInput: 0.025,
output: 0.4,
updatedAt: '2025-06-15',
},
capabilities: {
temperature: { min: 0, max: 2 },
},
contextWindow: 1000000,
},
{
@@ -775,7 +708,7 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
updatedAt: '2025-06-15',
},
capabilities: {},
contextWindow: 1000000,
contextWindow: 200000,
},
],
},
@@ -801,7 +734,7 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
capabilities: {
temperature: { min: 0, max: 1 },
nativeStructuredOutputs: true,
maxOutputTokens: { max: 128000, default: 8192 },
maxOutputTokens: 128000,
thinking: {
levels: ['low', 'medium', 'high', 'max'],
default: 'high',
@@ -820,10 +753,10 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
capabilities: {
temperature: { min: 0, max: 1 },
nativeStructuredOutputs: true,
maxOutputTokens: { max: 64000, default: 8192 },
maxOutputTokens: 64000,
thinking: {
levels: ['low', 'medium', 'high'],
default: 'medium',
default: 'high',
},
},
contextWindow: 200000,
@@ -839,10 +772,10 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
capabilities: {
temperature: { min: 0, max: 1 },
nativeStructuredOutputs: true,
maxOutputTokens: { max: 64000, default: 8192 },
maxOutputTokens: 64000,
thinking: {
levels: ['low', 'medium', 'high'],
default: 'medium',
default: 'high',
},
},
contextWindow: 200000,
@@ -858,10 +791,10 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
capabilities: {
temperature: { min: 0, max: 1 },
nativeStructuredOutputs: true,
maxOutputTokens: { max: 64000, default: 8192 },
maxOutputTokens: 64000,
thinking: {
levels: ['low', 'medium', 'high'],
default: 'medium',
default: 'high',
},
},
contextWindow: 200000,
@@ -877,10 +810,10 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
capabilities: {
temperature: { min: 0, max: 1 },
nativeStructuredOutputs: true,
maxOutputTokens: { max: 64000, default: 8192 },
maxOutputTokens: 64000,
thinking: {
levels: ['low', 'medium', 'high'],
default: 'medium',
default: 'high',
},
},
contextWindow: 200000,
@@ -2548,14 +2481,11 @@ export function getThinkingLevelsForModel(modelId: string): string[] | null {
}
/**
* Get the max output tokens for a specific model
* Returns the model's max capacity for streaming requests,
* or the model's safe default for non-streaming requests to avoid timeout issues.
* Get the max output tokens for a specific model.
*
* @param modelId - The model ID
* @param streaming - Whether the request is streaming (default: false)
*/
export function getMaxOutputTokensForModel(modelId: string, streaming = false): number {
export function getMaxOutputTokensForModel(modelId: string): number {
const normalizedModelId = modelId.toLowerCase()
const STANDARD_MAX_OUTPUT_TOKENS = 4096
@@ -2563,11 +2493,7 @@ export function getMaxOutputTokensForModel(modelId: string, streaming = false):
for (const model of provider.models) {
const baseModelId = model.id.toLowerCase()
if (normalizedModelId === baseModelId || normalizedModelId.startsWith(`${baseModelId}-`)) {
const outputTokens = model.capabilities.maxOutputTokens
if (outputTokens) {
return streaming ? outputTokens.max : outputTokens.default
}
return STANDARD_MAX_OUTPUT_TOKENS
return model.capabilities.maxOutputTokens || STANDARD_MAX_OUTPUT_TOKENS
}
}
}
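// Usage sketch with the simplified capability (value taken from the claude-3-haiku
// definition above; unmatched models presumably fall back to STANDARD_MAX_OUTPUT_TOKENS):
//   getMaxOutputTokensForModel('claude-3-haiku-20240307')  // -> 4096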


@@ -1,4 +1,5 @@
import type { Logger } from '@sim/logger'
import type OpenAI from 'openai'
import type { StreamingExecution } from '@/executor/types'
import { MAX_TOOL_ITERATIONS } from '@/providers'
import type { Message, ProviderRequest, ProviderResponse, TimeSegment } from '@/providers/types'
@@ -30,7 +31,7 @@ type ToolChoice = PreparedTools['toolChoice']
* - Sets additionalProperties: false on all object types.
* - Ensures required includes ALL property keys.
*/
function enforceStrictSchema(schema: any): any {
function enforceStrictSchema(schema: Record<string, unknown>): Record<string, unknown> {
if (!schema || typeof schema !== 'object') return schema
const result = { ...schema }
@@ -41,23 +42,26 @@ function enforceStrictSchema(schema: any): any {
// Recursively process properties and ensure required includes all keys
if (result.properties && typeof result.properties === 'object') {
const propKeys = Object.keys(result.properties)
const propKeys = Object.keys(result.properties as Record<string, unknown>)
result.required = propKeys // Strict mode requires ALL properties
result.properties = Object.fromEntries(
Object.entries(result.properties).map(([key, value]) => [key, enforceStrictSchema(value)])
Object.entries(result.properties as Record<string, unknown>).map(([key, value]) => [
key,
enforceStrictSchema(value as Record<string, unknown>),
])
)
}
}
// Handle array items
if (result.type === 'array' && result.items) {
result.items = enforceStrictSchema(result.items)
result.items = enforceStrictSchema(result.items as Record<string, unknown>)
}
// Handle anyOf, oneOf, allOf
for (const keyword of ['anyOf', 'oneOf', 'allOf']) {
if (Array.isArray(result[keyword])) {
result[keyword] = result[keyword].map(enforceStrictSchema)
result[keyword] = (result[keyword] as Record<string, unknown>[]).map(enforceStrictSchema)
}
}
@@ -65,7 +69,10 @@ function enforceStrictSchema(schema: any): any {
for (const defKey of ['$defs', 'definitions']) {
if (result[defKey] && typeof result[defKey] === 'object') {
result[defKey] = Object.fromEntries(
Object.entries(result[defKey]).map(([key, value]) => [key, enforceStrictSchema(value)])
Object.entries(result[defKey] as Record<string, unknown>).map(([key, value]) => [
key,
enforceStrictSchema(value as Record<string, unknown>),
])
)
}
}
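// Illustrative input/output, assuming the behavior documented above (strict mode
// requires ALL properties and forbids additionalProperties):
//   enforceStrictSchema({ type: 'object', properties: { a: { type: 'string' } } })
//   // -> { type: 'object', additionalProperties: false, required: ['a'],
//   //      properties: { a: { type: 'string' } } }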
@@ -123,29 +130,29 @@ export async function executeResponsesProviderRequest(
const initialInput = buildResponsesInputFromMessages(allMessages)
const basePayload: Record<string, any> = {
const basePayload: Record<string, unknown> = {
model: config.modelName,
}
if (request.temperature !== undefined) basePayload.temperature = request.temperature
if (request.maxTokens != null) basePayload.max_output_tokens = request.maxTokens
if (request.reasoningEffort !== undefined) {
if (request.reasoningEffort !== undefined && request.reasoningEffort !== 'auto') {
basePayload.reasoning = {
effort: request.reasoningEffort,
summary: 'auto',
}
}
if (request.verbosity !== undefined) {
if (request.verbosity !== undefined && request.verbosity !== 'auto') {
basePayload.text = {
...(basePayload.text ?? {}),
...((basePayload.text as Record<string, unknown>) ?? {}),
verbosity: request.verbosity,
}
}
// Store response format config - for Azure with tools, we defer applying it until after tool calls complete
let deferredTextFormat: { type: string; name: string; schema: any; strict: boolean } | undefined
let deferredTextFormat: OpenAI.Responses.ResponseFormatTextJSONSchemaConfig | undefined
const hasTools = !!request.tools?.length
const isAzure = config.providerId === 'azure-openai'
@@ -171,7 +178,7 @@ export async function executeResponsesProviderRequest(
)
} else {
basePayload.text = {
...(basePayload.text ?? {}),
...((basePayload.text as Record<string, unknown>) ?? {}),
format: textFormat,
}
logger.info(`Added JSON schema response format to ${config.providerLabel} request`)
@@ -231,7 +238,10 @@ export async function executeResponsesProviderRequest(
}
}
const createRequestBody = (input: ResponsesInputItem[], overrides: Record<string, any> = {}) => ({
const createRequestBody = (
input: ResponsesInputItem[],
overrides: Record<string, unknown> = {}
) => ({
...basePayload,
input,
...overrides,
@@ -247,7 +257,9 @@ export async function executeResponsesProviderRequest(
}
}
const postResponses = async (body: Record<string, any>) => {
const postResponses = async (
body: Record<string, unknown>
): Promise<OpenAI.Responses.Response> => {
const response = await fetch(config.endpoint, {
method: 'POST',
headers: config.headers,
@@ -496,10 +508,10 @@ export async function executeResponsesProviderRequest(
duration: duration,
})
let resultContent: any
let resultContent: Record<string, unknown>
if (result.success) {
toolResults.push(result.output)
resultContent = result.output
resultContent = result.output as Record<string, unknown>
} else {
resultContent = {
error: true,
@@ -615,11 +627,11 @@ export async function executeResponsesProviderRequest(
}
// Make final call with the response format - build payload without tools
const finalPayload: Record<string, any> = {
const finalPayload: Record<string, unknown> = {
model: config.modelName,
input: formattedInput,
text: {
...(basePayload.text ?? {}),
...((basePayload.text as Record<string, unknown>) ?? {}),
format: deferredTextFormat,
},
}
@@ -627,15 +639,15 @@ export async function executeResponsesProviderRequest(
// Copy over non-tool related settings
if (request.temperature !== undefined) finalPayload.temperature = request.temperature
if (request.maxTokens != null) finalPayload.max_output_tokens = request.maxTokens
if (request.reasoningEffort !== undefined) {
if (request.reasoningEffort !== undefined && request.reasoningEffort !== 'auto') {
finalPayload.reasoning = {
effort: request.reasoningEffort,
summary: 'auto',
}
}
if (request.verbosity !== undefined) {
if (request.verbosity !== undefined && request.verbosity !== 'auto') {
finalPayload.text = {
...finalPayload.text,
...((finalPayload.text as Record<string, unknown>) ?? {}),
verbosity: request.verbosity,
}
}
@@ -679,10 +691,10 @@ export async function executeResponsesProviderRequest(
const accumulatedCost = calculateCost(request.model, tokens.input, tokens.output)
// For Azure with deferred format in streaming mode, include the format in the streaming call
const streamOverrides: Record<string, any> = { stream: true, tool_choice: 'auto' }
const streamOverrides: Record<string, unknown> = { stream: true, tool_choice: 'auto' }
if (deferredTextFormat) {
streamOverrides.text = {
...(basePayload.text ?? {}),
...((basePayload.text as Record<string, unknown>) ?? {}),
format: deferredTextFormat,
}
}


@@ -1,4 +1,5 @@
import { createLogger } from '@sim/logger'
import type OpenAI from 'openai'
import type { Message } from '@/providers/types'
const logger = createLogger('ResponsesUtils')
@@ -38,7 +39,7 @@ export interface ResponsesToolDefinition {
type: 'function'
name: string
description?: string
parameters?: Record<string, any>
parameters?: Record<string, unknown>
}
/**
@@ -85,7 +86,15 @@ export function buildResponsesInputFromMessages(messages: Message[]): ResponsesI
/**
* Converts tool definitions to the Responses API format.
*/
export function convertToolsToResponses(tools: any[]): ResponsesToolDefinition[] {
export function convertToolsToResponses(
tools: Array<{
type?: string
name?: string
description?: string
parameters?: Record<string, unknown>
function?: { name: string; description?: string; parameters?: Record<string, unknown> }
}>
): ResponsesToolDefinition[] {
return tools
.map((tool) => {
const name = tool.function?.name ?? tool.name
@@ -131,7 +140,7 @@ export function toResponsesToolChoice(
return 'auto'
}
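// Illustrative: convertToolsToResponses normalizes both flat and Chat
// Completions-style tools to the Responses shape (exact passthrough fields
// assumed from the mapping above):
//   convertToolsToResponses([{ function: { name: 'search', parameters: {} } }])
//   // -> [{ type: 'function', name: 'search', parameters: {} }]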
function extractTextFromMessageItem(item: any): string {
function extractTextFromMessageItem(item: Record<string, unknown>): string {
if (!item) {
return ''
}
@@ -170,7 +179,7 @@ function extractTextFromMessageItem(item: any): string {
/**
* Extracts plain text from Responses API output items.
*/
export function extractResponseText(output: unknown): string {
export function extractResponseText(output: OpenAI.Responses.ResponseOutputItem[]): string {
if (!Array.isArray(output)) {
return ''
}
@@ -181,7 +190,7 @@ export function extractResponseText(output: unknown): string {
continue
}
const text = extractTextFromMessageItem(item)
const text = extractTextFromMessageItem(item as unknown as Record<string, unknown>)
if (text) {
textParts.push(text)
}
@@ -193,7 +202,9 @@ export function extractResponseText(output: unknown): string {
/**
* Converts Responses API output items into input items for subsequent calls.
*/
export function convertResponseOutputToInputItems(output: unknown): ResponsesInputItem[] {
export function convertResponseOutputToInputItems(
output: OpenAI.Responses.ResponseOutputItem[]
): ResponsesInputItem[] {
if (!Array.isArray(output)) {
return []
}
@@ -205,7 +216,7 @@ export function convertResponseOutputToInputItems(output: unknown): ResponsesInp
}
if (item.type === 'message') {
const text = extractTextFromMessageItem(item)
const text = extractTextFromMessageItem(item as unknown as Record<string, unknown>)
if (text) {
items.push({
role: 'assistant',
@@ -213,18 +224,20 @@ export function convertResponseOutputToInputItems(output: unknown): ResponsesInp
})
}
const toolCalls = Array.isArray(item.tool_calls) ? item.tool_calls : []
// Handle Chat Completions-style tool_calls nested under message items
const msgRecord = item as unknown as Record<string, unknown>
const toolCalls = Array.isArray(msgRecord.tool_calls) ? msgRecord.tool_calls : []
for (const toolCall of toolCalls) {
const callId = toolCall?.id
const name = toolCall?.function?.name ?? toolCall?.name
const tc = toolCall as Record<string, unknown>
const fn = tc.function as Record<string, unknown> | undefined
const callId = tc.id as string | undefined
const name = (fn?.name ?? tc.name) as string | undefined
if (!callId || !name) {
continue
}
const argumentsValue =
typeof toolCall?.function?.arguments === 'string'
? toolCall.function.arguments
: JSON.stringify(toolCall?.function?.arguments ?? {})
typeof fn?.arguments === 'string' ? fn.arguments : JSON.stringify(fn?.arguments ?? {})
items.push({
type: 'function_call',
@@ -238,14 +251,18 @@ export function convertResponseOutputToInputItems(output: unknown): ResponsesInp
}
if (item.type === 'function_call') {
const callId = item.call_id ?? item.id
const name = item.name ?? item.function?.name
const fc = item as OpenAI.Responses.ResponseFunctionToolCall
const fcRecord = item as unknown as Record<string, unknown>
const callId = fc.call_id ?? (fcRecord.id as string | undefined)
const name =
fc.name ??
((fcRecord.function as Record<string, unknown> | undefined)?.name as string | undefined)
if (!callId || !name) {
continue
}
const argumentsValue =
typeof item.arguments === 'string' ? item.arguments : JSON.stringify(item.arguments ?? {})
typeof fc.arguments === 'string' ? fc.arguments : JSON.stringify(fc.arguments ?? {})
items.push({
type: 'function_call',
@@ -262,7 +279,9 @@ export function convertResponseOutputToInputItems(output: unknown): ResponsesInp
/**
* Extracts tool calls from Responses API output items.
*/
export function extractResponseToolCalls(output: unknown): ResponsesToolCall[] {
export function extractResponseToolCalls(
output: OpenAI.Responses.ResponseOutputItem[]
): ResponsesToolCall[] {
if (!Array.isArray(output)) {
return []
}
@@ -275,14 +294,18 @@ export function extractResponseToolCalls(output: unknown): ResponsesToolCall[] {
}
if (item.type === 'function_call') {
const callId = item.call_id ?? item.id
const name = item.name ?? item.function?.name
const fc = item as OpenAI.Responses.ResponseFunctionToolCall
const fcRecord = item as unknown as Record<string, unknown>
const callId = fc.call_id ?? (fcRecord.id as string | undefined)
const name =
fc.name ??
((fcRecord.function as Record<string, unknown> | undefined)?.name as string | undefined)
if (!callId || !name) {
continue
}
const argumentsValue =
typeof item.arguments === 'string' ? item.arguments : JSON.stringify(item.arguments ?? {})
typeof fc.arguments === 'string' ? fc.arguments : JSON.stringify(fc.arguments ?? {})
toolCalls.push({
id: callId,
@@ -292,18 +315,20 @@ export function extractResponseToolCalls(output: unknown): ResponsesToolCall[] {
continue
}
if (item.type === 'message' && Array.isArray(item.tool_calls)) {
for (const toolCall of item.tool_calls) {
const callId = toolCall?.id
const name = toolCall?.function?.name ?? toolCall?.name
// Handle Chat Completions-style tool_calls nested under message items
const msgRecord = item as unknown as Record<string, unknown>
if (item.type === 'message' && Array.isArray(msgRecord.tool_calls)) {
for (const toolCall of msgRecord.tool_calls) {
const tc = toolCall as Record<string, unknown>
const fn = tc.function as Record<string, unknown> | undefined
const callId = tc.id as string | undefined
const name = (fn?.name ?? tc.name) as string | undefined
if (!callId || !name) {
continue
}
const argumentsValue =
typeof toolCall?.function?.arguments === 'string'
? toolCall.function.arguments
: JSON.stringify(toolCall?.function?.arguments ?? {})
typeof fn?.arguments === 'string' ? fn.arguments : JSON.stringify(fn?.arguments ?? {})
toolCalls.push({
id: callId,
@@ -323,15 +348,17 @@ export function extractResponseToolCalls(output: unknown): ResponsesToolCall[] {
* Note: output_tokens is expected to include reasoning tokens; fall back to reasoning_tokens
* when output_tokens is missing or zero.
*/
export function parseResponsesUsage(usage: any): ResponsesUsageTokens | undefined {
if (!usage || typeof usage !== 'object') {
export function parseResponsesUsage(
usage: OpenAI.Responses.ResponseUsage | undefined
): ResponsesUsageTokens | undefined {
if (!usage) {
return undefined
}
const inputTokens = Number(usage.input_tokens ?? 0)
const outputTokens = Number(usage.output_tokens ?? 0)
const cachedTokens = Number(usage.input_tokens_details?.cached_tokens ?? 0)
const reasoningTokens = Number(usage.output_tokens_details?.reasoning_tokens ?? 0)
const inputTokens = usage.input_tokens ?? 0
const outputTokens = usage.output_tokens ?? 0
const cachedTokens = usage.input_tokens_details?.cached_tokens ?? 0
const reasoningTokens = usage.output_tokens_details?.reasoning_tokens ?? 0
const completionTokens = Math.max(outputTokens, reasoningTokens)
const totalTokens = inputTokens + completionTokens
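  // Worked example: usage = { input_tokens: 10, output_tokens: 0,
  //   output_tokens_details: { reasoning_tokens: 25 } }
  //   -> completionTokens = max(0, 25) = 25, totalTokens = 10 + 25 = 35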
@@ -398,7 +425,7 @@ export function createReadableStreamFromResponses(
continue
}
let event: any
let event: Record<string, unknown>
try {
event = JSON.parse(data)
} catch (error) {
@@ -416,7 +443,8 @@ export function createReadableStreamFromResponses(
eventType === 'error' ||
eventType === 'response.failed'
) {
const message = event?.error?.message || 'Responses API stream error'
const errorObj = event.error as Record<string, unknown> | undefined
const message = (errorObj?.message as string) || 'Responses API stream error'
controller.error(new Error(message))
return
}
@@ -426,12 +454,13 @@ export function createReadableStreamFromResponses(
eventType === 'response.output_json.delta'
) {
let deltaText = ''
if (typeof event.delta === 'string') {
deltaText = event.delta
} else if (event.delta && typeof event.delta.text === 'string') {
deltaText = event.delta.text
} else if (event.delta && event.delta.json !== undefined) {
deltaText = JSON.stringify(event.delta.json)
const delta = event.delta as string | Record<string, unknown> | undefined
if (typeof delta === 'string') {
deltaText = delta
} else if (delta && typeof delta.text === 'string') {
deltaText = delta.text
} else if (delta && delta.json !== undefined) {
deltaText = JSON.stringify(delta.json)
} else if (event.json !== undefined) {
deltaText = JSON.stringify(event.json)
} else if (typeof event.text === 'string') {
@@ -445,7 +474,11 @@ export function createReadableStreamFromResponses(
}
if (eventType === 'response.completed') {
finalUsage = parseResponsesUsage(event?.response?.usage ?? event?.usage)
const responseObj = event.response as Record<string, unknown> | undefined
const usageData = (responseObj?.usage ?? event.usage) as
| OpenAI.Responses.ResponseUsage
| undefined
finalUsage = parseResponsesUsage(usageData)
}
}
}


@@ -431,19 +431,13 @@ export const openRouterProvider: ProviderConfig = {
const accumulatedCost = calculateCost(requestedModel, tokens.input, tokens.output)
const streamingParams: ChatCompletionCreateParamsStreaming & { provider?: any } = {
model: payload.model,
...payload,
messages: [...currentMessages],
tool_choice: 'auto',
stream: true,
stream_options: { include_usage: true },
}
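        // Spreading `payload` keeps temperature, max_tokens, and OpenRouter's
        // provider routing preferences in sync with the non-streaming request;
        // the per-field copies below are the old approach being removed.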
if (payload.temperature !== undefined) {
streamingParams.temperature = payload.temperature
}
if (payload.max_tokens !== undefined) {
streamingParams.max_tokens = payload.max_tokens
}
if (request.responseFormat) {
;(streamingParams as any).messages = await applyResponseFormat(
streamingParams as any,

Some files were not shown because too many files have changed in this diff.