---
title: Logging and Cost Calculation
description: Understanding workflow logs and how execution costs are calculated in Sim
---

import { Accordion, Accordions } from 'fumadocs-ui/components/accordion'
import { Callout } from 'fumadocs-ui/components/callout'
import { Tab, Tabs } from 'fumadocs-ui/components/tabs'
import { ThemeImage } from '@/components/ui/theme-image'

Sim provides comprehensive logging for workflow executions and automatic cost calculation for AI model usage.

## Logging System

Sim offers two complementary logging interfaces:

### Real-Time Console (Manual Executions)

During manual workflow execution, logs appear in real time in the Console panel on the right side of the workflow editor:

<ThemeImage
  lightSrc="/static/light/console-panel-light.png"
  darkSrc="/static/dark/console-panel-dark.png"
  alt="Real-time Console Panel"
  width={600}
  height={400}
/>

The console shows:

- Block execution progress with active block highlighting
- Real-time outputs as blocks complete
- Execution timing for each block
- Success/error status indicators

### Logs Page (All Executions)

Every workflow execution, whether triggered manually or via API, Chat, Schedule, or Webhook, is logged to the dedicated Logs page:

<ThemeImage
  lightSrc="/static/light/logs-page-light.png"
  darkSrc="/static/dark/logs-page-dark.png"
  alt="Logs Page"
  width={600}
  height={400}
/>

The Logs page provides:

- Comprehensive filtering by time range, status, trigger type, folder, and workflow
- Search functionality across all logs
- Live mode for real-time updates
- 7-day log retention (upgradeable for longer retention)

## Log Details Sidebar

Clicking on any log entry opens a detailed sidebar view:

<ThemeImage
  lightSrc="/static/light/logs-sidebar-light.png"
  darkSrc="/static/dark/logs-sidebar-dark.png"
  alt="Logs Sidebar Details"
  width={600}
  height={400}
/>

### Block Input/Output

View the complete data flow for each block, with tabs to switch between the block's output and input:

<Tabs items={['Output', 'Input']}>
  <Tab>
    **Output Tab** shows the block's execution result:

    - Structured data with JSON formatting
    - Markdown rendering for AI-generated content
    - Copy button for easy data extraction
  </Tab>
  <Tab>
    **Input Tab** displays what was passed to the block:

    - Resolved variable values
    - Referenced outputs from other blocks
    - Environment variables used
    - Automatic redaction of API keys for security
  </Tab>
</Tabs>

### Execution Timeline

For workflow-level logs, view detailed execution metrics:

- Start and end timestamps
- Total workflow duration
- Individual block execution times
- Performance bottleneck identification

### Model Breakdown

For workflows using AI blocks, expand the Model Breakdown section to see:

<ThemeImage
  lightSrc="/static/light/model-breakdown-light.png"
  darkSrc="/static/dark/model-breakdown-dark.png"
  alt="Model Breakdown"
  width={600}
  height={400}
/>

- **Token Usage**: Input and output token counts for each model
- **Cost Breakdown**: Individual costs per model and operation
- **Model Distribution**: Which models were used and how many times
- **Total Cost**: Aggregate cost for the entire workflow execution

### Workflow Snapshot

For any logged execution, click "View Snapshot" to see the exact workflow state at execution time:

<ThemeImage
  lightSrc="/static/light/workflow-snapshot-light.png"
  darkSrc="/static/dark/workflow-snapshot-dark.png"
  alt="Workflow Snapshot"
  width={600}
  height={400}
/>

The snapshot provides:

- Frozen canvas showing the workflow structure
- Block states and connections as they were during execution
- Clickable blocks for inspecting their inputs and outputs
- A point-in-time reference for debugging workflows that have since been modified

<Callout type="info">
  Workflow snapshots are only available for executions after the enhanced logging system was introduced. Older migrated logs show a "Logged State Not Found" message.
</Callout>

## Cost Calculation

Sim automatically calculates costs for all AI model usage.

### How Costs Are Calculated

Every workflow execution includes two cost components:

**Base Execution Charge**: $0.001 per execution

**AI Model Usage**: Variable cost based on token consumption

```javascript
// Model prices are quoted in dollars per million tokens
modelCost = (inputTokens * inputPrice + outputTokens * outputPrice) / 1_000_000
totalCost = baseExecutionCharge + modelCost
```

<Callout type="info">
  AI model prices are per million tokens. The calculation divides by 1,000,000 to get the actual cost. Workflows without AI blocks only incur the base execution charge.
</Callout>
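
For illustration, here is a minimal sketch of the calculation, assuming a single hosted GPT-4o call with hypothetical token counts (2,000 input / 500 output); the per-million-token rates are taken from the hosted pricing table below:

```javascript
// Hypothetical example: one hosted GPT-4o call
const baseExecutionCharge = 0.001 // flat charge per execution
const inputTokens = 2_000         // assumed prompt size
const outputTokens = 500          // assumed completion size
const inputPrice = 6.25           // hosted GPT-4o input, $ per 1M tokens
const outputPrice = 25.0          // hosted GPT-4o output, $ per 1M tokens

const modelCost = (inputTokens * inputPrice + outputTokens * outputPrice) / 1_000_000
const totalCost = baseExecutionCharge + modelCost

console.log(modelCost.toFixed(4)) // 0.0250
console.log(totalCost.toFixed(4)) // 0.0260
```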
### Pricing Options

<Tabs items={['Hosted Models', 'Bring Your Own API Key']}>
  <Tab>
    **Hosted Models** - Sim provides API keys with a 2.5x pricing multiplier:

    | Model | Base Price per 1M Tokens (Input / Output) | Hosted Price per 1M Tokens (Input / Output) |
    |-------|-------------------------------------------|---------------------------------------------|
    | GPT-4o | $2.50 / $10.00 | $6.25 / $25.00 |
    | GPT-4.1 | $2.00 / $8.00 | $5.00 / $20.00 |
    | o1 | $15.00 / $60.00 | $37.50 / $150.00 |
    | o3 | $2.00 / $8.00 | $5.00 / $20.00 |
    | Claude 3.5 Sonnet | $3.00 / $15.00 | $7.50 / $37.50 |
    | Claude Opus 4.0 | $15.00 / $75.00 | $37.50 / $187.50 |

    *The 2.5x multiplier covers infrastructure and API management costs.*
  </Tab>
  <Tab>
    **Your Own API Keys** - Use any model at base pricing:

    | Provider | Models | Price per 1M Tokens (Input / Output) |
    |----------|--------|--------------------------------------|
    | Google | Gemini 2.5 | $0.15 / $0.60 |
    | Deepseek | V3, R1 | $0.75 / $1.00 |
    | xAI | Grok 4, Grok 3 | $5.00 / $25.00 |
    | Groq | Llama 4 Scout | $0.40 / $0.60 |
    | Cerebras | Llama 3.3 70B | $0.94 / $0.94 |
    | Ollama | Local models | Free |

    *Pay providers directly with no markup.*
  </Tab>
</Tabs>

<Callout type="warning">
  Pricing shown reflects rates as of July 14, 2025. Check provider documentation for current pricing.
</Callout>
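
The hosted rates above are simply the base rates multiplied by 2.5. A quick sketch of the relationship, using values from the tables (all figures are $ per 1M tokens):

```javascript
// Hosted price = base price * 2.5 (per the hosted pricing table above)
const hostedRate = (baseRate) => baseRate * 2.5

console.log(hostedRate(2.5))  // 6.25 -> GPT-4o input
console.log(hostedRate(10.0)) // 25   -> GPT-4o output
console.log(hostedRate(15.0)) // 37.5 -> o1 or Claude Opus 4.0 input
```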
### Cost Optimization

<Accordions>
  <Accordion title="Model Selection">
    Choose models based on task complexity. Simple tasks can use GPT-4.1-nano ($0.10/$0.40) while complex reasoning might need o1 or Claude Opus (see the cost comparison sketch below).
  </Accordion>
  <Accordion title="Prompt Engineering">
    Well-structured, concise prompts reduce token usage without sacrificing quality.
  </Accordion>
  <Accordion title="Local Models">
    Use Ollama for non-critical tasks to eliminate API costs entirely.
  </Accordion>
</Accordions>
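
To make the model-selection gap concrete, here is a hypothetical comparison, assuming the same 2,000 input / 500 output tokens on GPT-4.1-nano versus o1 at the base rates listed above:

```javascript
// Hypothetical token counts; rates are $ per 1M tokens (base pricing)
const cost = (inTok, outTok, inPrice, outPrice) =>
  (inTok * inPrice + outTok * outPrice) / 1_000_000

console.log(cost(2_000, 500, 0.10, 0.40)) // GPT-4.1-nano: 0.0004
console.log(cost(2_000, 500, 15.0, 60.0)) // o1: 0.06
```

On these assumed token counts the o1 call costs 150x more, which is why model selection is usually the highest-leverage optimization.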
## Usage Monitoring

Monitor your usage and billing in Settings → Subscription:

- **Current Usage**: Real-time usage and costs for the current period
- **Usage Limits**: Plan limits with visual progress indicators
- **Billing Details**: Projected charges and minimum commitments
- **Plan Management**: Upgrade options and billing history