Compare commits


7 Commits

Author SHA1 Message Date
Zamil Majdy
0a48c49902 test(backend): Update test to match removed SET search_path call
The SET search_path call was removed in favor of explicit schema
qualification, so the test now expects only 1 execute_raw call.
2026-01-19 21:31:40 -05:00
Zamil Majdy
1fc1102eb4 fix(backend): Use explicit schema qualification for pgvector types
Fix intermittent "type 'vector' does not exist" errors when using
PgBouncer in transaction mode. The issue was that SET search_path
and the actual query could run on different backend connections.

Changes:
- Add {schema} placeholder for raw schema name (e.g., platform)
- Use explicit type casting: {schema}.vector instead of ::vector
- Use explicit operator: OPERATOR({schema}.<=>) instead of <=>
- Remove set_public_search_path parameter (no longer needed)
- Remove search_path manipulation from DATABASE_URL
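
A minimal sketch of the resulting query shape (assumes the `platform` schema named in this message; the `Embedding` table and surrounding code are illustrative, not the actual backend helpers):

```python
# With PgBouncer in transaction mode, "SET search_path = ..." and the query
# that follows it may run on different backend connections, so the
# search_path cannot be relied on. Qualifying every type and operator
# explicitly avoids this.
SCHEMA = "platform"  # raw schema name substituted for the {schema} placeholder

# Before (intermittently broken behind PgBouncer):
#   SET search_path = platform;
#   ... ORDER BY embedding <=> $1::vector ...

# After: no search_path, everything schema-qualified explicitly.
query = f"""
    SELECT id
    FROM {SCHEMA}."Embedding"
    ORDER BY embedding OPERATOR({SCHEMA}.<=>) $1::{SCHEMA}.vector
    LIMIT 10;
"""
```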

Tested on both local and dev environments via kubectl exec.

Fixes: AUTOGPT-SERVER-763, AUTOGPT-SERVER-764
2026-01-19 18:14:29 -05:00
Swifty
bc75d70e7d refactor(backend): Improve Langfuse tracing with v3 SDK patterns and @observe decorators (#11803)

This PR improves the Langfuse tracing implementation in the chat feature
by adopting the v3 SDK patterns, resulting in cleaner code and better
observability.

### Changes 🏗️

- **Simplified Langfuse client usage**: Replace manual client
initialization with `langfuse.get_client()` global singleton
- **Use v3 context managers**: Switch to
`start_as_current_observation()` and `propagate_attributes()` for
automatic trace propagation
- **Auto-instrument OpenAI calls**: Use `langfuse.openai` wrapper for
automatic LLM call tracing instead of manual generation tracking
- **Add `@observe` decorators**: All chat tools now have
`@observe(as_type="tool")` decorators for automatic tool execution
tracing:
  - `add_understanding`
  - `view_agent_output` (renamed from `agent_output`)
  - `create_agent`
  - `edit_agent`
  - `find_agent`
  - `find_block`
  - `find_library_agent`
  - `get_doc_page`
  - `run_agent`
  - `run_block`
  - `search_docs`
- **Remove manual trace lifecycle**: Eliminated the verbose `finally`
block that manually ended traces/generations
- **Rename tool**: `agent_output` → `view_agent_output` for clarity
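
A condensed sketch of the v3 pattern (simplified from the real streaming handler; assumes Langfuse credentials are configured via environment variables):

```python
from langfuse import get_client, observe, propagate_attributes
from langfuse.openai import openai  # auto-instruments OpenAI calls

client = openai.AsyncOpenAI()  # LLM calls traced automatically, no manual generations
langfuse = get_client()        # global singleton, replaces manual Langfuse(...) setup


@observe(as_type="tool", name="find_agent")
async def find_agent(query: str) -> list[str]:
    # Runs as a nested "tool" observation under the current trace.
    return [f"agent matching {query!r}"]  # placeholder body


async def handle_request(session_id: str, user_id: str, message: str):
    with langfuse.start_as_current_observation(
        as_type="span", name="user-copilot-request", input=message
    ):
        with propagate_attributes(
            session_id=session_id, user_id=user_id, tags=["copilot"]
        ):
            await find_agent(message)
            # ...stream the completion; the context managers end the
            # observations on exit, so no manual trace/generation cleanup.
```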

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Verified chat feature works with Langfuse tracing enabled
  - [x] Confirmed traces appear correctly in Langfuse dashboard with tool spans
  - [x] Tested tool execution flows show up as nested observations

#### For configuration changes:

- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my changes
- [x] I have included a list of my configuration changes in the PR description (under **Changes**)

No configuration changes required - uses existing Langfuse environment
variables.
2026-01-19 20:56:51 +00:00
Nicholas Tindle
c1a1767034 feat(docs): Add block documentation auto-generation system (#11707)
- Add generate_block_docs.py script that introspects block code to
generate markdown
- Support manual content preservation via <!-- MANUAL: --> markers
- Add migrate_block_docs.py to preserve existing manual content from git
HEAD
- Add CI workflow (docs-block-sync.yml) to fail if docs drift from code
- Add Claude PR review workflow (docs-claude-review.yml) for doc changes
- Add manual LLM enhancement workflow (docs-enhance.yml)
- Add GitBook configuration (.gitbook.yaml, SUMMARY.md)
- Fix non-deterministic category ordering (categories is a set)
- Add comprehensive test suite (32 tests)
- Generate docs for 444 blocks with 66 preserved manual sections
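
The preservation mechanism, roughly (a sketch; the real logic lives in `generate_block_docs.py` and handles more cases):

```python
import re

# Human-written content sits between MANUAL markers and survives regeneration.
MANUAL_RE = re.compile(
    r"<!-- MANUAL: (?P<key>\w+) -->(?P<body>.*?)<!-- END MANUAL -->",
    re.DOTALL,
)

def extract_manual_sections(old_doc: str) -> dict[str, str]:
    """Collect existing manual bodies, keyed by section name."""
    return {m["key"]: m["body"] for m in MANUAL_RE.finditer(old_doc)}

def merge_manual_sections(generated: str, manual: dict[str, str]) -> str:
    """Re-insert preserved manual bodies into freshly generated markdown."""
    def _replace(m: re.Match) -> str:
        body = manual.get(m["key"], m["body"])  # fall back to the placeholder
        return f"<!-- MANUAL: {m['key']} -->{body}<!-- END MANUAL -->"
    return MANUAL_RE.sub(_replace, generated)
```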

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Extensively test code generation for the docs pages



---

> [!NOTE]
> Introduces an automated documentation pipeline for blocks and integrates it into CI.
>
> - Adds `scripts/generate_block_docs.py` (+ tests) to introspect blocks and generate `docs/integrations/**`, preserving `<!-- MANUAL: -->` sections
> - New CI workflows: **docs-block-sync** (fails if docs drift), **docs-claude-review** (AI review for block/docs PRs), and **docs-enhance** (optional LLM improvements)
> - Updates existing Claude workflows to use `CLAUDE_CODE_OAUTH_TOKEN` instead of `ANTHROPIC_API_KEY`
> - Improves numerous block descriptions/typos and links across backend blocks to standardize docs output
> - Commits initial generated docs including `docs/integrations/README.md` and many provider/category pages
>
> <sup>Written by [Cursor Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit 631e53e0f6. This will update automatically on new commits. Configure [here](https://cursor.com/dashboard?tab=bugbot).</sup>

---------

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-19 07:03:19 +00:00
Nicholas Tindle
1b56ff13d9 test 2026-01-18 15:32:10 -06:00
Zamil Majdy
f31c160043 feat(platform): add endedAt field and fix execution analytics timestamps (#11759)
## Summary

This PR adds proper execution end time tracking and fixes timestamp
handling throughout the execution analytics system.

### Key Changes

1. **Added `endedAt` field to database schema** - Executions now have a
dedicated field for tracking when they finish
2. **Fixed timestamp nullable handling** - `started_at` and `ended_at`
are now properly nullable in types
3. **Fixed chart aggregation** - Reduced threshold from ≥3 to ≥1
executions per day
4. **Improved timestamp display** - Moved timestamps to expandable
details section in analytics table
5. **Fixed nullable timestamp bugs** - Updated all frontend code to
handle null timestamps correctly

## Problem Statement

### Issue 1: Missing Execution End Times
Previously, executions used `updatedAt` (last DB update) as a proxy for
"end time". This broke when adding correctness scores retroactively -
the end time would change to whenever the score was added, not when the
execution actually finished.

### Issue 2: Chart Shows Only One Data Point
The accuracy trends chart showed only one data point despite having
executions across multiple days. Root cause: aggregation required ≥3
executions per day.

### Issue 3: Incorrect Type Definitions
Manually maintained types defined `started_at` and `ended_at` as
non-nullable `Date`, contradicting reality where QUEUED executions
haven't started yet.

## Solution

### Database Schema (`schema.prisma`)
```prisma
model AgentGraphExecution {
  // ...
  startedAt DateTime?
  endedAt   DateTime?  // NEW FIELD
  // ...
}
```

### Execution Lifecycle
- **QUEUED**: `startedAt = null`, `endedAt = null` (not started)
- **RUNNING**: `startedAt = set`, `endedAt = null` (in progress)  
- **COMPLETED/FAILED/TERMINATED**: `startedAt = set`, `endedAt = set`
(finished)
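
In code terms the rule is simple (a sketch, not the actual `update_graph_execution_stats()` implementation):

```python
from datetime import datetime, timezone

TERMINAL_STATUSES = {"COMPLETED", "FAILED", "TERMINATED"}

def on_status_change(execution, new_status: str) -> None:
    now = datetime.now(timezone.utc)
    if new_status == "RUNNING" and execution.started_at is None:
        execution.started_at = now
    if new_status in TERMINAL_STATUSES and execution.ended_at is None:
        # endedAt is frozen at the terminal transition and never touched again,
        # so later writes (e.g. retroactive correctness scores) can't move it.
        execution.ended_at = now
    execution.status = new_status
```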

### Migration Strategy
```sql
-- Add endedAt column
ALTER TABLE "AgentGraphExecution" ADD COLUMN "endedAt" TIMESTAMP(3);

-- Backfill ONLY terminal executions (prevents marking RUNNING executions as ended)
UPDATE "AgentGraphExecution"
SET "endedAt" = "updatedAt"
WHERE "endedAt" IS NULL
  AND "executionStatus" IN ('COMPLETED', 'FAILED', 'TERMINATED');
```

## Changes by Component

### Backend

**`schema.prisma`**
- Added `endedAt` field to `AgentGraphExecution`

**`execution.py`**
- Made `started_at` and `ended_at` optional with Field descriptions
- Updated `from_db()` to use `endedAt` instead of `updatedAt`
- `update_graph_execution_stats()` sets `endedAt` when status becomes
terminal

**`execution_analytics_routes.py`**
- Removed `created_at`/`updated_at` from `ExecutionAnalyticsResult` (DB
metadata, not execution data)
- Kept only `started_at`/`ended_at` (actual execution runtime)
- Made settings global (avoid recreation)
- Moved OpenAI key validation to `_process_batch` (only check when LLM
actually runs)
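
Sketchwise, these two route changes look like this (simplified from `execution_analytics_routes.py`):

```python
from fastapi import HTTPException
from backend.util.settings import Settings

settings = Settings()  # module-level: instantiated once, not per request

async def _process_batch(executions, model):
    # Validation moved here so the key is only required when the LLM
    # is actually invoked, not on every analytics request.
    if not settings.secrets.openai_internal_api_key:
        raise HTTPException(status_code=500, detail="OpenAI API key not configured")
    ...
```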

**`analytics.py`**
- Fixed aggregation: `COUNT(*) >= 1` (was 3) - include all days with ≥1
execution
- Uses `createdAt` for chart grouping (when execution was queued)

**`late_execution_monitor.py`**
- Handle optional `started_at` with fallback to `datetime.min` for
sorting
- Display "Not started" when `started_at` is null
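
The null-safe handling is a small pattern (a sketch; assumes naive datetimes, otherwise use `datetime.min.replace(tzinfo=timezone.utc)` as the fallback):

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Execution:
    id: str
    started_at: datetime | None  # null until the run actually starts

def sort_key(execution: Execution) -> datetime:
    # None (not started yet) sorts before any real timestamp.
    return execution.started_at or datetime.min

def started_label(execution: Execution) -> str:
    return execution.started_at.isoformat() if execution.started_at else "Not started"

runs = sorted([Execution("b", None), Execution("a", datetime(2026, 1, 16))], key=sort_key)
```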

### Frontend

**Type Definitions**
- Fixed manually maintained `types.ts`: `started_at: Date | null` (was
non-nullable)
- Generated types were already correct

**Analytics Components**
- `AnalyticsResultsTable.tsx`: Show only `started_at`/`ended_at` in
2-column expandable grid
- `ExecutionAnalyticsForm.tsx`: Added filter explanation UI

**Monitoring Components** - Fixed null handling bugs:
- `OldAgentLibraryView.tsx`: Handle null in reduce function
- `agent-runs-selector-list.tsx`: Safe sorting with `?.getTime() ?? 0`
- `AgentFlowList.tsx`: Filter/sort with null checks
- `FlowRunsStatus.tsx`: Filter null timestamps
- `FlowRunsTimeline.tsx`: Filter executions with null timestamps before
rendering
- `monitoring/page.tsx`: Safe sorting
- `ActivityItem.tsx`: Fallback to "recently" for null timestamps

## Benefits

- **Accurate End Times**: `endedAt` is frozen when execution finishes, not updated later
- **Type Safety**: Nullable types match reality, exposing real bugs
- **Better UX**: Chart shows all days with data (not just days with ≥3 executions)
- **Bug Fixes**: 7+ frontend components now handle null timestamps correctly
- **Documentation**: Field descriptions explain when timestamps are null

## Testing

### Backend
```bash
cd autogpt_platform/backend
poetry run format  # All checks passed
poetry run lint    # All checks passed
```

### Frontend  
```bash
cd autogpt_platform/frontend
pnpm format        # All checks passed
pnpm lint          # All checks passed
pnpm types         # All type errors fixed
```

### Test Data Generation
Created script to generate 35 test executions across 7 days with
correctness scores:
```bash
poetry run python scripts/generate_test_analytics_data.py
```

## Migration Notes

⚠️ **Important**: The migration only backfills `endedAt` for executions
with terminal status (COMPLETED, FAILED, TERMINATED). Active executions
(QUEUED, RUNNING) correctly keep `endedAt = null`.

## Breaking Changes

None - this is backward compatible:
- `endedAt` is nullable, existing code that doesn't use it is unaffected
- Frontend already used generated types which were correct
- Migration safely backfills historical data

---

> [!NOTE]
> Introduces explicit execution end-time tracking and normalizes timestamp handling across backend and frontend.
>
> - Adds `endedAt` to `AgentGraphExecution` (schema + migration); backfills terminal executions; sets `endedAt` on terminal status updates
> - Makes `GraphExecutionMeta.started_at/ended_at` optional; updates `from_db()` to use DB `endedAt`; exposes timestamps in `ExecutionAnalyticsResult`
> - Moves OpenAI key validation into batch processing; instantiates `Settings` once
> - Accuracy trends: reduce daily aggregation threshold to `>= 1`; optional historical series
> - Monitoring/analytics UI: results table shows/exports `started_at`/`ended_at`; adds chart filter explainer
> - Frontend null-safety: update types (`Date | null`) and fix sorting/filtering/rendering for nullable timestamps across monitoring and library views
> - Late execution monitor: safe sorting/display when `started_at` is null
> - OpenAPI specs updated for new/nullable fields
>
> <sup>Written by [Cursor Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit 1d987ca6e5. This will update automatically on new commits. Configure [here](https://cursor.com/dashboard?tab=bugbot).</sup>

---------

Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
2026-01-16 21:44:24 +00:00
Nicholas Tindle
06550a87eb feat(backend): add missed default credentials (#11760)
### Changes 🏗️

**Fixed missing default credentials and provider name mismatch in the
credentials store:**

1. **Provider name correction** (`credentials_store.py:97-103`)
   - Changed `provider="unreal"` → `provider="unreal_speech"` to match the existing `unreal_speech_api_key` setting and block usage
   - Updated title from "Use Credits for Unreal" → "Use Credits for Unreal Speech" for clarity

2. **Added missing OpenWeatherMap credentials** (`credentials_store.py:219-226`)
   - New `openweathermap_credentials` definition with `APIKeyCredentials`
   - Uses existing `settings.secrets.openweathermap_api_key` setting that was previously defined but had no credential object
   - Added to `DEFAULT_CREDENTIALS` list

3. **Fixed credentials not exposed in `get_all_creds()`** (`credentials_store.py:343-354`)
   - Added `llama_api_credentials` conditional append (was defined but not returned to users)
   - Added `v0_credentials` conditional append (was defined but not returned to users)
   - Added `openweathermap_credentials` conditional append
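
The exposure pattern being fixed, as a hedged sketch (`APIKeyCredentials`, `settings`, and the llama/v0 credential objects are the module's existing definitions; IDs and the title below are illustrative, and only the shape of the logic is shown):

```python
from pydantic import SecretStr

# Simplified definition; the real one carries a stable built-in credential ID.
openweathermap_credentials = APIKeyCredentials(
    provider="openweathermap",  # must match the provider name used by blocks
    api_key=SecretStr(settings.secrets.openweathermap_api_key),
    title="Use Credits for OpenWeatherMap",  # illustrative title
)

def get_all_creds() -> list[APIKeyCredentials]:
    creds: list[APIKeyCredentials] = []
    if settings.secrets.llama_api_key:
        creds.append(llama_api_credentials)       # was defined but never returned
    if settings.secrets.v0_api_key:
        creds.append(v0_credentials)              # was defined but never returned
    if settings.secrets.openweathermap_api_key:
        creds.append(openweathermap_credentials)  # newly added
    return creds
```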

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Verified provider name `unreal_speech` matches block usage in `text_to_speech_block.py`
  - [x] Confirmed `openweathermap_api_key` setting exists in secrets
  - [x] Confirmed `llama_api_key` and `v0_api_key` settings exist in secrets

---

> [!NOTE]
> Aligns backend credential definitions and exposes missing system creds; updates frontend to hide new built-ins.
>
> - Backend `credentials_store.py`:
>   - Corrects `provider` to `unreal_speech` and updates title
>   - Adds `openweathermap_credentials`; includes in `DEFAULT_CREDENTIALS` and `get_all_creds()` when key present
>   - Ensures `llama_api_credentials` and `v0_credentials` are returned by `get_all_creds()`
> - Frontend `integrations/page.tsx`:
>   - Extends `hiddenCredentials` with IDs for `v0`, `webshare_proxy`, and `openweathermap`
>
> <sup>Written by [Cursor Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit e7d46b76c6. This will update automatically on new commits. Configure [here](https://cursor.com/dashboard?tab=bugbot).</sup>

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <ntindle@users.noreply.github.com>
2026-01-16 21:18:12 +00:00
235 changed files with 21824 additions and 2618 deletions

View File

@@ -93,5 +93,5 @@ jobs:
Error logs:
${{ toJSON(fromJSON(steps.failure_details.outputs.result).errorLogs) }}
- anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
+ claude_code_oauth_token: ${{ secrets.CLAUDE_CODE_OAUTH_TOKEN }}
claude_args: "--allowedTools 'Edit,MultiEdit,Write,Read,Glob,Grep,LS,Bash(git:*),Bash(bun:*),Bash(npm:*),Bash(npx:*),Bash(gh:*)'"

View File

@@ -7,7 +7,7 @@
# - Provide actionable recommendations for the development team
#
# Triggered on: Dependabot PRs (opened, synchronize)
- # Requirements: ANTHROPIC_API_KEY secret must be configured
+ # Requirements: CLAUDE_CODE_OAUTH_TOKEN secret must be configured
name: Claude Dependabot PR Review
@@ -308,7 +308,7 @@ jobs:
id: claude_review
uses: anthropics/claude-code-action@v1
with:
- anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
+ claude_code_oauth_token: ${{ secrets.CLAUDE_CODE_OAUTH_TOKEN }}
claude_args: |
--allowedTools "Bash(npm:*),Bash(pnpm:*),Bash(poetry:*),Bash(git:*),Edit,Replace,NotebookEditCell,mcp__github_inline_comment__create_inline_comment,Bash(gh pr comment:*), Bash(gh pr diff:*), Bash(gh pr view:*)"
prompt: |

View File

@@ -323,7 +323,7 @@ jobs:
id: claude
uses: anthropics/claude-code-action@v1
with:
- anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
+ claude_code_oauth_token: ${{ secrets.CLAUDE_CODE_OAUTH_TOKEN }}
claude_args: |
--allowedTools "Bash(npm:*),Bash(pnpm:*),Bash(poetry:*),Bash(git:*),Edit,Replace,NotebookEditCell,mcp__github_inline_comment__create_inline_comment,Bash(gh pr comment:*), Bash(gh pr diff:*), Bash(gh pr view:*), Bash(gh pr edit:*)"
--model opus

.github/workflows/docs-block-sync.yml (new file, +78 lines)
View File

@@ -0,0 +1,78 @@
name: Block Documentation Sync Check
on:
push:
branches: [master, dev]
paths:
- "autogpt_platform/backend/backend/blocks/**"
- "docs/integrations/**"
- "autogpt_platform/backend/scripts/generate_block_docs.py"
- ".github/workflows/docs-block-sync.yml"
pull_request:
branches: [master, dev]
paths:
- "autogpt_platform/backend/backend/blocks/**"
- "docs/integrations/**"
- "autogpt_platform/backend/scripts/generate_block_docs.py"
- ".github/workflows/docs-block-sync.yml"
jobs:
check-docs-sync:
runs-on: ubuntu-latest
timeout-minutes: 15
steps:
- name: Checkout code
uses: actions/checkout@v4
with:
fetch-depth: 1
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: "3.11"
- name: Set up Python dependency cache
uses: actions/cache@v4
with:
path: ~/.cache/pypoetry
key: poetry-${{ runner.os }}-${{ hashFiles('autogpt_platform/backend/poetry.lock') }}
restore-keys: |
poetry-${{ runner.os }}-
- name: Install Poetry
run: |
cd autogpt_platform/backend
HEAD_POETRY_VERSION=$(python3 ../../.github/workflows/scripts/get_package_version_from_lockfile.py poetry)
echo "Found Poetry version ${HEAD_POETRY_VERSION} in backend/poetry.lock"
curl -sSL https://install.python-poetry.org | POETRY_VERSION=$HEAD_POETRY_VERSION python3 -
echo "$HOME/.local/bin" >> $GITHUB_PATH
- name: Install dependencies
working-directory: autogpt_platform/backend
run: |
poetry install --only main
poetry run prisma generate
- name: Check block documentation is in sync
working-directory: autogpt_platform/backend
run: |
echo "Checking if block documentation is in sync with code..."
poetry run python scripts/generate_block_docs.py --check
- name: Show diff if out of sync
if: failure()
working-directory: autogpt_platform/backend
run: |
echo "::error::Block documentation is out of sync with code!"
echo ""
echo "To fix this, run the following command locally:"
echo " cd autogpt_platform/backend && poetry run python scripts/generate_block_docs.py"
echo ""
echo "Then commit the updated documentation files."
echo ""
echo "Regenerating docs to show diff..."
poetry run python scripts/generate_block_docs.py
echo ""
echo "Changes detected:"
git diff ../../docs/integrations/ || true

View File

@@ -0,0 +1,95 @@
name: Claude Block Docs Review
on:
pull_request:
types: [opened, synchronize]
paths:
- "docs/integrations/**"
- "autogpt_platform/backend/backend/blocks/**"
jobs:
claude-review:
# Only run for PRs from members/collaborators
if: |
github.event.pull_request.author_association == 'OWNER' ||
github.event.pull_request.author_association == 'MEMBER' ||
github.event.pull_request.author_association == 'COLLABORATOR'
runs-on: ubuntu-latest
timeout-minutes: 15
permissions:
contents: read
pull-requests: write
id-token: write
steps:
- name: Checkout code
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: "3.11"
- name: Set up Python dependency cache
uses: actions/cache@v4
with:
path: ~/.cache/pypoetry
key: poetry-${{ runner.os }}-${{ hashFiles('autogpt_platform/backend/poetry.lock') }}
restore-keys: |
poetry-${{ runner.os }}-
- name: Install Poetry
run: |
cd autogpt_platform/backend
HEAD_POETRY_VERSION=$(python3 ../../.github/workflows/scripts/get_package_version_from_lockfile.py poetry)
curl -sSL https://install.python-poetry.org | POETRY_VERSION=$HEAD_POETRY_VERSION python3 -
echo "$HOME/.local/bin" >> $GITHUB_PATH
- name: Install dependencies
working-directory: autogpt_platform/backend
run: |
poetry install --only main
poetry run prisma generate
- name: Run Claude Code Review
uses: anthropics/claude-code-action@v1
with:
claude_code_oauth_token: ${{ secrets.CLAUDE_CODE_OAUTH_TOKEN }}
claude_args: |
--allowedTools "Read,Glob,Grep,Bash(gh pr comment:*),Bash(gh pr diff:*),Bash(gh pr view:*)"
prompt: |
You are reviewing a PR that modifies block documentation or block code for AutoGPT.
## Your Task
Review the changes in this PR and provide constructive feedback. Focus on:
1. **Documentation Accuracy**: For any block code changes, verify that:
- Input/output tables in docs match the actual block schemas
- Description text accurately reflects what the block does
- Any new blocks have corresponding documentation
2. **Manual Content Quality**: Check manual sections (marked with `<!-- MANUAL: -->` markers):
- "How it works" sections should have clear technical explanations
- "Possible use case" sections should have practical, real-world examples
- Content should be helpful for users trying to understand the blocks
3. **Template Compliance**: Ensure docs follow the standard template:
- What it is (brief intro)
- What it does (description)
- How it works (technical explanation)
- Inputs table
- Outputs table
- Possible use case
4. **Cross-references**: Check that links and anchors are correct
## Review Process
1. First, get the PR diff to see what changed: `gh pr diff ${{ github.event.pull_request.number }}`
2. Read any modified block files to understand the implementation
3. Read corresponding documentation files to verify accuracy
4. Provide your feedback as a PR comment
Be constructive and specific. If everything looks good, say so!
If there are issues, explain what's wrong and suggest how to fix it.

.github/workflows/docs-enhance.yml (new file, +194 lines)
View File

@@ -0,0 +1,194 @@
name: Enhance Block Documentation
on:
workflow_dispatch:
inputs:
block_pattern:
description: 'Block file pattern to enhance (e.g., "google/*.md" or "*" for all blocks)'
required: true
default: '*'
type: string
dry_run:
description: 'Dry run mode - show proposed changes without committing'
type: boolean
default: true
max_blocks:
description: 'Maximum number of blocks to process (0 for unlimited)'
type: number
default: 10
jobs:
enhance-docs:
runs-on: ubuntu-latest
timeout-minutes: 45
permissions:
contents: write
pull-requests: write
id-token: write
steps:
- name: Checkout code
uses: actions/checkout@v4
with:
fetch-depth: 1
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: "3.11"
- name: Set up Python dependency cache
uses: actions/cache@v4
with:
path: ~/.cache/pypoetry
key: poetry-${{ runner.os }}-${{ hashFiles('autogpt_platform/backend/poetry.lock') }}
restore-keys: |
poetry-${{ runner.os }}-
- name: Install Poetry
run: |
cd autogpt_platform/backend
HEAD_POETRY_VERSION=$(python3 ../../.github/workflows/scripts/get_package_version_from_lockfile.py poetry)
curl -sSL https://install.python-poetry.org | POETRY_VERSION=$HEAD_POETRY_VERSION python3 -
echo "$HOME/.local/bin" >> $GITHUB_PATH
- name: Install dependencies
working-directory: autogpt_platform/backend
run: |
poetry install --only main
poetry run prisma generate
- name: Run Claude Enhancement
uses: anthropics/claude-code-action@v1
with:
claude_code_oauth_token: ${{ secrets.CLAUDE_CODE_OAUTH_TOKEN }}
claude_args: |
--allowedTools "Read,Edit,Glob,Grep,Write,Bash(git:*),Bash(gh:*),Bash(find:*),Bash(ls:*)"
prompt: |
You are enhancing block documentation for AutoGPT. Your task is to improve the MANUAL sections
of block documentation files by reading the actual block implementations and writing helpful content.
## Configuration
- Block pattern: ${{ inputs.block_pattern }}
- Dry run: ${{ inputs.dry_run }}
- Max blocks to process: ${{ inputs.max_blocks }}
## Your Task
1. **Find Documentation Files**
Find block documentation files matching the pattern in `docs/integrations/`
Pattern: ${{ inputs.block_pattern }}
Use: `find docs/integrations -name "*.md" -type f`
2. **For Each Documentation File** (up to ${{ inputs.max_blocks }} files):
a. Read the documentation file
b. Identify which block(s) it documents (look for the block class name)
c. Find and read the corresponding block implementation in `autogpt_platform/backend/backend/blocks/`
d. Improve the MANUAL sections:
**"How it works" section** (within `<!-- MANUAL: how_it_works -->` markers):
- Explain the technical flow of the block
- Describe what APIs or services it connects to
- Note any important configuration or prerequisites
- Keep it concise but informative (2-4 paragraphs)
**"Possible use case" section** (within `<!-- MANUAL: use_case -->` markers):
- Provide 2-3 practical, real-world examples
- Make them specific and actionable
- Show how this block could be used in an automation workflow
3. **Important Rules**
- ONLY modify content within `<!-- MANUAL: -->` and `<!-- END MANUAL -->` markers
- Do NOT modify auto-generated sections (inputs/outputs tables, descriptions)
- Keep content accurate based on the actual block implementation
- Write for users who may not be technical experts
4. **Output**
${{ inputs.dry_run == true && 'DRY RUN MODE: Show proposed changes for each file but do NOT actually edit the files. Describe what you would change.' || 'LIVE MODE: Actually edit the files to improve the documentation.' }}
## Example Improvements
**Before (How it works):**
```
_Add technical explanation here._
```
**After (How it works):**
```
This block connects to the GitHub API to retrieve issue information. When executed,
it authenticates using your GitHub credentials and fetches issue details including
title, body, labels, and assignees.
The block requires a valid GitHub OAuth connection with repository access permissions.
It supports both public and private repositories you have access to.
```
**Before (Possible use case):**
```
_Add practical use case examples here._
```
**After (Possible use case):**
```
**Customer Support Automation**: Monitor a GitHub repository for new issues with
the "bug" label, then automatically create a ticket in your support system and
notify the on-call engineer via Slack.
**Release Notes Generation**: When a new release is published, gather all closed
issues since the last release and generate a summary for your changelog.
```
Begin by finding and listing the documentation files to process.
- name: Create PR with enhanced documentation
if: ${{ inputs.dry_run == false }}
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
run: |
# Check if there are changes
if git diff --quiet docs/integrations/; then
echo "No changes to commit"
exit 0
fi
# Configure git
git config user.name "github-actions[bot]"
git config user.email "github-actions[bot]@users.noreply.github.com"
# Create branch and commit
BRANCH_NAME="docs/enhance-blocks-$(date +%Y%m%d-%H%M%S)"
git checkout -b "$BRANCH_NAME"
git add docs/integrations/
git commit -m "docs: enhance block documentation with LLM-generated content
Pattern: ${{ inputs.block_pattern }}
Max blocks: ${{ inputs.max_blocks }}
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>"
# Push and create PR
git push -u origin "$BRANCH_NAME"
gh pr create \
--title "docs: LLM-enhanced block documentation" \
--body "## Summary
This PR contains LLM-enhanced documentation for block files matching pattern: \`${{ inputs.block_pattern }}\`
The following manual sections were improved:
- **How it works**: Technical explanations based on block implementations
- **Possible use case**: Practical, real-world examples
## Review Checklist
- [ ] Content is accurate based on block implementations
- [ ] Examples are practical and helpful
- [ ] No auto-generated sections were modified
---
🤖 Generated with [Claude Code](https://claude.com/claude-code)" \
--base dev

View File

@@ -28,6 +28,7 @@ from backend.executor.manager import get_db_async_client
from backend.util.settings import Settings
logger = logging.getLogger(__name__)
settings = Settings()
class ExecutionAnalyticsRequest(BaseModel):
@@ -63,6 +64,8 @@ class ExecutionAnalyticsResult(BaseModel):
score: Optional[float]
status: str # "success", "failed", "skipped"
error_message: Optional[str] = None
started_at: Optional[datetime] = None
ended_at: Optional[datetime] = None
class ExecutionAnalyticsResponse(BaseModel):
@@ -224,11 +227,6 @@ async def generate_execution_analytics(
)
try:
- # Validate model configuration
- settings = Settings()
- if not settings.secrets.openai_internal_api_key:
-     raise HTTPException(status_code=500, detail="OpenAI API key not configured")
# Get database client
db_client = get_db_async_client()
@@ -320,6 +318,8 @@ async def generate_execution_analytics(
),
status="skipped",
error_message=None, # Not an error - just already processed
started_at=execution.started_at,
ended_at=execution.ended_at,
)
)
@@ -349,6 +349,9 @@ async def _process_batch(
) -> list[ExecutionAnalyticsResult]:
"""Process a batch of executions concurrently."""
if not settings.secrets.openai_internal_api_key:
raise HTTPException(status_code=500, detail="OpenAI API key not configured")
async def process_single_execution(execution) -> ExecutionAnalyticsResult:
try:
# Generate activity status and score using the specified model
@@ -387,6 +390,8 @@ async def _process_batch(
score=None,
status="skipped",
error_message="Activity generation returned None",
started_at=execution.started_at,
ended_at=execution.ended_at,
)
# Update the execution stats
@@ -416,6 +421,8 @@ async def _process_batch(
summary_text=activity_response["activity_status"],
score=activity_response["correctness_score"],
status="success",
started_at=execution.started_at,
ended_at=execution.ended_at,
)
except Exception as e:
@@ -429,6 +436,8 @@ async def _process_batch(
score=None,
status="failed",
error_message=str(e),
started_at=execution.started_at,
ended_at=execution.ended_at,
)
# Process all executions in the batch concurrently

View File

@@ -4,14 +4,9 @@ from collections.abc import AsyncGenerator
from typing import Any
import orjson
- from langfuse import Langfuse
- from openai import (
-     APIConnectionError,
-     APIError,
-     APIStatusError,
-     AsyncOpenAI,
-     RateLimitError,
- )
+ from langfuse import get_client, propagate_attributes
+ from langfuse.openai import openai  # type: ignore
+ from openai import APIConnectionError, APIError, APIStatusError, RateLimitError
from openai.types.chat import ChatCompletionChunk, ChatCompletionToolParam
from backend.data.understanding import (
@@ -21,7 +16,6 @@ from backend.data.understanding import (
from backend.util.exceptions import NotFoundError
from backend.util.settings import Settings
from . import db as chat_db
from .config import ChatConfig
from .model import (
ChatMessage,
@@ -50,10 +44,10 @@ logger = logging.getLogger(__name__)
config = ChatConfig()
settings = Settings()
- client = AsyncOpenAI(api_key=config.api_key, base_url=config.base_url)
+ client = openai.AsyncOpenAI(api_key=config.api_key, base_url=config.base_url)
- # Langfuse client (lazy initialization)
- _langfuse_client: Langfuse | None = None
+ langfuse = get_client()
class LangfuseNotConfiguredError(Exception):
@@ -69,65 +63,6 @@ def _is_langfuse_configured() -> bool:
)
def _get_langfuse_client() -> Langfuse:
"""Get or create the Langfuse client for prompt management and tracing."""
global _langfuse_client
if _langfuse_client is None:
if not _is_langfuse_configured():
raise LangfuseNotConfiguredError(
"Langfuse is not configured. The chat feature requires Langfuse for prompt management. "
"Please set the LANGFUSE_PUBLIC_KEY and LANGFUSE_SECRET_KEY environment variables."
)
_langfuse_client = Langfuse(
public_key=settings.secrets.langfuse_public_key,
secret_key=settings.secrets.langfuse_secret_key,
host=settings.secrets.langfuse_host or "https://cloud.langfuse.com",
)
return _langfuse_client
def _get_environment() -> str:
"""Get the current environment name for Langfuse tagging."""
return settings.config.app_env.value
def _get_langfuse_prompt() -> str:
"""Fetch the latest production prompt from Langfuse.
Returns:
The compiled prompt text from Langfuse.
Raises:
Exception: If Langfuse is unavailable or prompt fetch fails.
"""
try:
langfuse = _get_langfuse_client()
# cache_ttl_seconds=0 disables SDK caching to always get the latest prompt
prompt = langfuse.get_prompt(config.langfuse_prompt_name, cache_ttl_seconds=0)
compiled = prompt.compile()
logger.info(
f"Fetched prompt '{config.langfuse_prompt_name}' from Langfuse "
f"(version: {prompt.version})"
)
return compiled
except Exception as e:
logger.error(f"Failed to fetch prompt from Langfuse: {e}")
raise
async def _is_first_session(user_id: str) -> bool:
"""Check if this is the user's first chat session.
Returns True if the user has 1 or fewer sessions (meaning this is their first).
"""
try:
session_count = await chat_db.get_user_session_count(user_id)
return session_count <= 1
except Exception as e:
logger.warning(f"Failed to check session count for user {user_id}: {e}")
return False # Default to non-onboarding if we can't check
async def _build_system_prompt(user_id: str | None) -> tuple[str, Any]:
"""Build the full system prompt including business understanding if available.
@@ -139,8 +74,6 @@ async def _build_system_prompt(user_id: str | None) -> tuple[str, Any]:
Tuple of (compiled prompt string, Langfuse prompt object for tracing)
"""
langfuse = _get_langfuse_client()
# cache_ttl_seconds=0 disables SDK caching to always get the latest prompt
prompt = langfuse.get_prompt(config.langfuse_prompt_name, cache_ttl_seconds=0)
@@ -158,7 +91,7 @@ async def _build_system_prompt(user_id: str | None) -> tuple[str, Any]:
context = "This is the first time you are meeting the user. Greet them and introduce them to the platform"
compiled = prompt.compile(users_information=context)
- return compiled, prompt
+ return compiled, understanding
async def _generate_session_title(message: str) -> str | None:
@@ -217,6 +150,7 @@ async def assign_user_to_session(
async def stream_chat_completion(
session_id: str,
message: str | None = None,
tool_call_response: str | None = None,
is_user_message: bool = True,
user_id: str | None = None,
retry_count: int = 0,
@@ -256,11 +190,6 @@ async def stream_chat_completion(
yield StreamFinish()
return
# Langfuse observations will be created after session is loaded (need messages for input)
# Initialize to None so finally block can safely check and end them
trace = None
generation = None
# Only fetch from Redis if session not provided (initial call)
if session is None:
session = await get_chat_session(session_id, user_id)
@@ -336,297 +265,259 @@ async def stream_chat_completion(
asyncio.create_task(_update_title())
# Build system prompt with business understanding
system_prompt, langfuse_prompt = await _build_system_prompt(user_id)
# Build input messages including system prompt for complete Langfuse logging
trace_input_messages = [{"role": "system", "content": system_prompt}] + [
m.model_dump() for m in session.messages
]
system_prompt, understanding = await _build_system_prompt(user_id)
# Create Langfuse trace for this LLM call (each call gets its own trace, grouped by session_id)
# Using v3 SDK: start_observation creates a root span, update_trace sets trace-level attributes
try:
langfuse = _get_langfuse_client()
env = _get_environment()
trace = langfuse.start_observation(
name="chat_completion",
input={"messages": trace_input_messages},
metadata={
"environment": env,
"model": config.model,
"message_count": len(session.messages),
"prompt_name": langfuse_prompt.name if langfuse_prompt else None,
"prompt_version": langfuse_prompt.version if langfuse_prompt else None,
},
)
# Set trace-level attributes (session_id, user_id, tags)
trace.update_trace(
input = message
if not message and tool_call_response:
input = tool_call_response
langfuse = get_client()
with langfuse.start_as_current_observation(
as_type="span",
name="user-copilot-request",
input=input,
) as span:
with propagate_attributes(
session_id=session_id,
user_id=user_id,
tags=[env, "copilot"],
)
except Exception as e:
logger.warning(f"Failed to create Langfuse trace: {e}")
tags=["copilot"],
metadata={
"users_information": format_understanding_for_prompt(understanding)[
:200
] # langfuse only accepts upto to 200 chars
},
):
# Initialize variables that will be used in finally block (must be defined before try)
assistant_response = ChatMessage(
role="assistant",
content="",
)
accumulated_tool_calls: list[dict[str, Any]] = []
# Wrap main logic in try/finally to ensure Langfuse observations are always ended
try:
has_yielded_end = False
has_yielded_error = False
has_done_tool_call = False
has_received_text = False
text_streaming_ended = False
tool_response_messages: list[ChatMessage] = []
should_retry = False
# Generate unique IDs for AI SDK protocol
import uuid as uuid_module
message_id = str(uuid_module.uuid4())
text_block_id = str(uuid_module.uuid4())
# Yield message start
yield StreamStart(messageId=message_id)
# Create Langfuse generation for each LLM call, linked to the prompt
# Using v3 SDK: start_observation with as_type="generation"
generation = (
trace.start_observation(
as_type="generation",
name="llm_call",
model=config.model,
input={"messages": trace_input_messages},
prompt=langfuse_prompt,
# Initialize variables that will be used in finally block (must be defined before try)
assistant_response = ChatMessage(
role="assistant",
content="",
)
if trace
else None
)
accumulated_tool_calls: list[dict[str, Any]] = []
try:
async for chunk in _stream_chat_chunks(
session=session,
tools=tools,
system_prompt=system_prompt,
text_block_id=text_block_id,
):
# Wrap main logic in try/finally to ensure Langfuse observations are always ended
has_yielded_end = False
has_yielded_error = False
has_done_tool_call = False
has_received_text = False
text_streaming_ended = False
tool_response_messages: list[ChatMessage] = []
should_retry = False
if isinstance(chunk, StreamTextStart):
# Emit text-start before first text delta
if not has_received_text:
# Generate unique IDs for AI SDK protocol
import uuid as uuid_module
message_id = str(uuid_module.uuid4())
text_block_id = str(uuid_module.uuid4())
# Yield message start
yield StreamStart(messageId=message_id)
try:
async for chunk in _stream_chat_chunks(
session=session,
tools=tools,
system_prompt=system_prompt,
text_block_id=text_block_id,
):
if isinstance(chunk, StreamTextStart):
# Emit text-start before first text delta
if not has_received_text:
yield chunk
elif isinstance(chunk, StreamTextDelta):
delta = chunk.delta or ""
assert assistant_response.content is not None
assistant_response.content += delta
has_received_text = True
yield chunk
elif isinstance(chunk, StreamTextDelta):
delta = chunk.delta or ""
assert assistant_response.content is not None
assistant_response.content += delta
has_received_text = True
yield chunk
elif isinstance(chunk, StreamTextEnd):
# Emit text-end after text completes
if has_received_text and not text_streaming_ended:
text_streaming_ended = True
yield chunk
elif isinstance(chunk, StreamToolInputStart):
# Emit text-end before first tool call, but only if we've received text
if has_received_text and not text_streaming_ended:
yield StreamTextEnd(id=text_block_id)
text_streaming_ended = True
yield chunk
elif isinstance(chunk, StreamToolInputAvailable):
# Accumulate tool calls in OpenAI format
accumulated_tool_calls.append(
{
"id": chunk.toolCallId,
"type": "function",
"function": {
"name": chunk.toolName,
"arguments": orjson.dumps(chunk.input).decode("utf-8"),
},
}
)
elif isinstance(chunk, StreamToolOutputAvailable):
result_content = (
chunk.output
if isinstance(chunk.output, str)
else orjson.dumps(chunk.output).decode("utf-8")
)
tool_response_messages.append(
ChatMessage(
role="tool",
content=result_content,
tool_call_id=chunk.toolCallId,
)
)
has_done_tool_call = True
# Track if any tool execution failed
if not chunk.success:
logger.warning(
f"Tool {chunk.toolName} (ID: {chunk.toolCallId}) execution failed"
)
yield chunk
elif isinstance(chunk, StreamFinish):
if not has_done_tool_call:
# Emit text-end before finish if we received text but haven't closed it
elif isinstance(chunk, StreamTextEnd):
# Emit text-end after text completes
if has_received_text and not text_streaming_ended:
text_streaming_ended = True
if assistant_response.content:
logger.warn(
f"StreamTextEnd: Attempting to set output {assistant_response.content}"
)
span.update_trace(output=assistant_response.content)
span.update(output=assistant_response.content)
yield chunk
elif isinstance(chunk, StreamToolInputStart):
# Emit text-end before first tool call, but only if we've received text
if has_received_text and not text_streaming_ended:
yield StreamTextEnd(id=text_block_id)
text_streaming_ended = True
has_yielded_end = True
yield chunk
elif isinstance(chunk, StreamError):
has_yielded_error = True
elif isinstance(chunk, StreamUsage):
session.usage.append(
Usage(
prompt_tokens=chunk.promptTokens,
completion_tokens=chunk.completionTokens,
total_tokens=chunk.totalTokens,
elif isinstance(chunk, StreamToolInputAvailable):
# Accumulate tool calls in OpenAI format
accumulated_tool_calls.append(
{
"id": chunk.toolCallId,
"type": "function",
"function": {
"name": chunk.toolName,
"arguments": orjson.dumps(chunk.input).decode(
"utf-8"
),
},
}
)
elif isinstance(chunk, StreamToolOutputAvailable):
result_content = (
chunk.output
if isinstance(chunk.output, str)
else orjson.dumps(chunk.output).decode("utf-8")
)
tool_response_messages.append(
ChatMessage(
role="tool",
content=result_content,
tool_call_id=chunk.toolCallId,
)
)
has_done_tool_call = True
# Track if any tool execution failed
if not chunk.success:
logger.warning(
f"Tool {chunk.toolName} (ID: {chunk.toolCallId}) execution failed"
)
yield chunk
elif isinstance(chunk, StreamFinish):
if not has_done_tool_call:
# Emit text-end before finish if we received text but haven't closed it
if has_received_text and not text_streaming_ended:
yield StreamTextEnd(id=text_block_id)
text_streaming_ended = True
has_yielded_end = True
yield chunk
elif isinstance(chunk, StreamError):
has_yielded_error = True
elif isinstance(chunk, StreamUsage):
session.usage.append(
Usage(
prompt_tokens=chunk.promptTokens,
completion_tokens=chunk.completionTokens,
total_tokens=chunk.totalTokens,
)
)
else:
logger.error(
f"Unknown chunk type: {type(chunk)}", exc_info=True
)
if assistant_response.content:
langfuse.update_current_trace(output=assistant_response.content)
langfuse.update_current_span(output=assistant_response.content)
elif tool_response_messages:
langfuse.update_current_trace(output=str(tool_response_messages))
langfuse.update_current_span(output=str(tool_response_messages))
except Exception as e:
logger.error(f"Error during stream: {e!s}", exc_info=True)
# Check if this is a retryable error (JSON parsing, incomplete tool calls, etc.)
is_retryable = isinstance(
e, (orjson.JSONDecodeError, KeyError, TypeError)
)
if is_retryable and retry_count < config.max_retries:
logger.info(
f"Retryable error encountered. Attempt {retry_count + 1}/{config.max_retries}"
)
should_retry = True
else:
logger.error(f"Unknown chunk type: {type(chunk)}", exc_info=True)
except Exception as e:
logger.error(f"Error during stream: {e!s}", exc_info=True)
# Non-retryable error or max retries exceeded
# Save any partial progress before reporting error
messages_to_save: list[ChatMessage] = []
# Check if this is a retryable error (JSON parsing, incomplete tool calls, etc.)
is_retryable = isinstance(e, (orjson.JSONDecodeError, KeyError, TypeError))
# Add assistant message if it has content or tool calls
if accumulated_tool_calls:
assistant_response.tool_calls = accumulated_tool_calls
if assistant_response.content or assistant_response.tool_calls:
messages_to_save.append(assistant_response)
if is_retryable and retry_count < config.max_retries:
# Add tool response messages after assistant message
messages_to_save.extend(tool_response_messages)
session.messages.extend(messages_to_save)
await upsert_chat_session(session)
if not has_yielded_error:
error_message = str(e)
if not is_retryable:
error_message = f"Non-retryable error: {error_message}"
elif retry_count >= config.max_retries:
error_message = f"Max retries ({config.max_retries}) exceeded: {error_message}"
error_response = StreamError(errorText=error_message)
yield error_response
if not has_yielded_end:
yield StreamFinish()
return
# Handle retry outside of exception handler to avoid nesting
if should_retry and retry_count < config.max_retries:
logger.info(
f"Retryable error encountered. Attempt {retry_count + 1}/{config.max_retries}"
f"Retrying stream_chat_completion for session {session_id}, attempt {retry_count + 1}"
)
should_retry = True
else:
# Non-retryable error or max retries exceeded
# Save any partial progress before reporting error
messages_to_save: list[ChatMessage] = []
async for chunk in stream_chat_completion(
session_id=session.session_id,
user_id=user_id,
retry_count=retry_count + 1,
session=session,
context=context,
):
yield chunk
return # Exit after retry to avoid double-saving in finally block
# Add assistant message if it has content or tool calls
if accumulated_tool_calls:
assistant_response.tool_calls = accumulated_tool_calls
if assistant_response.content or assistant_response.tool_calls:
messages_to_save.append(assistant_response)
# Add tool response messages after assistant message
messages_to_save.extend(tool_response_messages)
session.messages.extend(messages_to_save)
await upsert_chat_session(session)
if not has_yielded_error:
error_message = str(e)
if not is_retryable:
error_message = f"Non-retryable error: {error_message}"
elif retry_count >= config.max_retries:
error_message = f"Max retries ({config.max_retries}) exceeded: {error_message}"
error_response = StreamError(errorText=error_message)
yield error_response
if not has_yielded_end:
yield StreamFinish()
return
# Handle retry outside of exception handler to avoid nesting
if should_retry and retry_count < config.max_retries:
# Normal completion path - save session and handle tool call continuation
logger.info(
f"Retrying stream_chat_completion for session {session_id}, attempt {retry_count + 1}"
)
async for chunk in stream_chat_completion(
session_id=session.session_id,
user_id=user_id,
retry_count=retry_count + 1,
session=session,
context=context,
):
yield chunk
return # Exit after retry to avoid double-saving in finally block
# Normal completion path - save session and handle tool call continuation
logger.info(
f"Normal completion path: session={session.session_id}, "
f"current message_count={len(session.messages)}"
)
# Build the messages list in the correct order
messages_to_save: list[ChatMessage] = []
# Add assistant message with tool_calls if any
if accumulated_tool_calls:
assistant_response.tool_calls = accumulated_tool_calls
logger.info(
f"Added {len(accumulated_tool_calls)} tool calls to assistant message"
)
if assistant_response.content or assistant_response.tool_calls:
messages_to_save.append(assistant_response)
logger.info(
f"Saving assistant message with content_len={len(assistant_response.content or '')}, tool_calls={len(assistant_response.tool_calls or [])}"
f"Normal completion path: session={session.session_id}, "
f"current message_count={len(session.messages)}"
)
# Add tool response messages after assistant message
messages_to_save.extend(tool_response_messages)
logger.info(
f"Saving {len(tool_response_messages)} tool response messages, "
f"total_to_save={len(messages_to_save)}"
)
# Build the messages list in the correct order
messages_to_save: list[ChatMessage] = []
session.messages.extend(messages_to_save)
logger.info(
f"Extended session messages, new message_count={len(session.messages)}"
)
await upsert_chat_session(session)
# If we did a tool call, stream the chat completion again to get the next response
if has_done_tool_call:
logger.info(
"Tool call executed, streaming chat completion again to get assistant response"
)
async for chunk in stream_chat_completion(
session_id=session.session_id,
user_id=user_id,
session=session, # Pass session object to avoid Redis refetch
context=context,
):
yield chunk
finally:
# Always end Langfuse observations to prevent resource leaks
# Guard against None and catch errors to avoid masking original exceptions
if generation is not None:
try:
latest_usage = session.usage[-1] if session.usage else None
generation.update(
model=config.model,
output={
"content": assistant_response.content,
"tool_calls": accumulated_tool_calls or None,
},
usage_details=(
{
"input": latest_usage.prompt_tokens,
"output": latest_usage.completion_tokens,
"total": latest_usage.total_tokens,
}
if latest_usage
else None
),
# Add assistant message with tool_calls if any
if accumulated_tool_calls:
assistant_response.tool_calls = accumulated_tool_calls
logger.info(
f"Added {len(accumulated_tool_calls)} tool calls to assistant message"
)
if assistant_response.content or assistant_response.tool_calls:
messages_to_save.append(assistant_response)
logger.info(
f"Saving assistant message with content_len={len(assistant_response.content or '')}, tool_calls={len(assistant_response.tool_calls or [])}"
)
generation.end()
except Exception as e:
logger.warning(f"Failed to end Langfuse generation: {e}")
if trace is not None:
try:
if accumulated_tool_calls:
trace.update_trace(output={"tool_calls": accumulated_tool_calls})
else:
trace.update_trace(output={"response": assistant_response.content})
trace.end()
except Exception as e:
logger.warning(f"Failed to end Langfuse trace: {e}")
# Add tool response messages after assistant message
messages_to_save.extend(tool_response_messages)
logger.info(
f"Saving {len(tool_response_messages)} tool response messages, "
f"total_to_save={len(messages_to_save)}"
)
session.messages.extend(messages_to_save)
logger.info(
f"Extended session messages, new message_count={len(session.messages)}"
)
await upsert_chat_session(session)
# If we did a tool call, stream the chat completion again to get the next response
if has_done_tool_call:
logger.info(
"Tool call executed, streaming chat completion again to get assistant response"
)
async for chunk in stream_chat_completion(
session_id=session.session_id,
user_id=user_id,
session=session, # Pass session object to avoid Redis refetch
context=context,
tool_call_response=str(tool_response_messages),
):
yield chunk
# Retry configuration for OpenAI API calls
@@ -900,5 +791,4 @@ async def _yield_tool_call(
session=session,
)
logger.info(f"Yielding Tool execution response: {tool_execution_response}")
yield tool_execution_response

View File

@@ -30,7 +30,7 @@ TOOL_REGISTRY: dict[str, BaseTool] = {
"find_library_agent": FindLibraryAgentTool(),
"run_agent": RunAgentTool(),
"run_block": RunBlockTool(),
"agent_output": AgentOutputTool(),
"view_agent_output": AgentOutputTool(),
"search_docs": SearchDocsTool(),
"get_doc_page": GetDocPageTool(),
}

View File

@@ -3,6 +3,8 @@
import logging
from typing import Any
from langfuse import observe
from backend.api.features.chat.model import ChatSession
from backend.data.understanding import (
BusinessUnderstandingInput,
@@ -59,6 +61,7 @@ and automations for the user's specific needs."""
"""Requires authentication to store user-specific data."""
return True
@observe(as_type="tool", name="add_understanding")
async def _execute(
self,
user_id: str | None,

View File

@@ -218,7 +218,6 @@ async def save_agent_to_library(
library_agents = await library_db.create_library_agent(
graph=created_graph,
user_id=user_id,
- sensitive_action_safe_mode=True,
create_library_agents_for_sub_graphs=False,
)

View File

@@ -5,6 +5,7 @@ import re
from datetime import datetime, timedelta, timezone
from typing import Any
from langfuse import observe
from pydantic import BaseModel, field_validator
from backend.api.features.chat.model import ChatSession
@@ -103,7 +104,7 @@ class AgentOutputTool(BaseTool):
@property
def name(self) -> str:
return "agent_output"
return "view_agent_output"
@property
def description(self) -> str:
@@ -328,6 +329,7 @@ class AgentOutputTool(BaseTool):
total_executions=len(available_executions) if available_executions else 1,
)
@observe(as_type="tool", name="view_agent_output")
async def _execute(
self,
user_id: str | None,

View File

@@ -3,6 +3,8 @@
import logging
from typing import Any
from langfuse import observe
from backend.api.features.chat.model import ChatSession
from .agent_generator import (
@@ -78,6 +80,7 @@ class CreateAgentTool(BaseTool):
"required": ["description"],
}
@observe(as_type="tool", name="create_agent")
async def _execute(
self,
user_id: str | None,

View File

@@ -3,6 +3,8 @@
import logging
from typing import Any
from langfuse import observe
from backend.api.features.chat.model import ChatSession
from .agent_generator import (
@@ -85,6 +87,7 @@ class EditAgentTool(BaseTool):
"required": ["agent_id", "changes"],
}
@observe(as_type="tool", name="edit_agent")
async def _execute(
self,
user_id: str | None,

View File

@@ -2,6 +2,8 @@
from typing import Any
from langfuse import observe
from backend.api.features.chat.model import ChatSession
from .agent_search import search_agents
@@ -35,6 +37,7 @@ class FindAgentTool(BaseTool):
"required": ["query"],
}
@observe(as_type="tool", name="find_agent")
async def _execute(
self, user_id: str | None, session: ChatSession, **kwargs
) -> ToolResponseBase:

View File

@@ -1,6 +1,7 @@
import logging
from typing import Any
from langfuse import observe
from prisma.enums import ContentType
from backend.api.features.chat.model import ChatSession
@@ -55,6 +56,7 @@ class FindBlockTool(BaseTool):
def requires_auth(self) -> bool:
return True
@observe(as_type="tool", name="find_block")
async def _execute(
self,
user_id: str | None,

View File

@@ -2,6 +2,8 @@
from typing import Any
from langfuse import observe
from backend.api.features.chat.model import ChatSession
from .agent_search import search_agents
@@ -41,6 +43,7 @@ class FindLibraryAgentTool(BaseTool):
def requires_auth(self) -> bool:
return True
@observe(as_type="tool", name="find_library_agent")
async def _execute(
self, user_id: str | None, session: ChatSession, **kwargs
) -> ToolResponseBase:

View File

@@ -4,6 +4,8 @@ import logging
from pathlib import Path
from typing import Any
from langfuse import observe
from backend.api.features.chat.model import ChatSession
from backend.api.features.chat.tools.base import BaseTool
from backend.api.features.chat.tools.models import (
@@ -71,6 +73,7 @@ class GetDocPageTool(BaseTool):
url_path = path.rsplit(".", 1)[0] if "." in path else path
return f"{DOCS_BASE_URL}/{url_path}"
@observe(as_type="tool", name="get_doc_page")
async def _execute(
self,
user_id: str | None,

View File

@@ -3,6 +3,7 @@
import logging
from typing import Any
from langfuse import observe
from pydantic import BaseModel, Field, field_validator
from backend.api.features.chat.config import ChatConfig
@@ -154,6 +155,7 @@ class RunAgentTool(BaseTool):
"""All operations require authentication."""
return True
@observe(as_type="tool", name="run_agent")
async def _execute(
self,
user_id: str | None,

View File

@@ -4,6 +4,8 @@ import logging
from collections import defaultdict
from typing import Any
from langfuse import observe
from backend.api.features.chat.model import ChatSession
from backend.data.block import get_block
from backend.data.execution import ExecutionContext
@@ -127,6 +129,7 @@ class RunBlockTool(BaseTool):
return matched_credentials, missing_credentials
@observe(as_type="tool", name="run_block")
async def _execute(
self,
user_id: str | None,

View File

@@ -3,6 +3,7 @@
import logging
from typing import Any
from langfuse import observe
from prisma.enums import ContentType
from backend.api.features.chat.model import ChatSession
@@ -87,6 +88,7 @@ class SearchDocsTool(BaseTool):
url_path = path.rsplit(".", 1)[0] if "." in path else path
return f"{DOCS_BASE_URL}/{url_path}"
@observe(as_type="tool", name="search_docs")
async def _execute(
self,
user_id: str | None,

View File

@@ -401,10 +401,27 @@ async def add_generated_agent_image(
)
def _initialize_graph_settings(graph: graph_db.GraphModel) -> GraphSettings:
"""
Initialize GraphSettings based on graph content.
Args:
graph: The graph to analyze
Returns:
GraphSettings with appropriate human_in_the_loop_safe_mode value
"""
if graph.has_human_in_the_loop:
# Graph has HITL blocks - set safe mode to True by default
return GraphSettings(human_in_the_loop_safe_mode=True)
else:
# Graph has no HITL blocks - keep None
return GraphSettings(human_in_the_loop_safe_mode=None)
async def create_library_agent(
graph: graph_db.GraphModel,
user_id: str,
sensitive_action_safe_mode: bool = False,
create_library_agents_for_sub_graphs: bool = True,
) -> list[library_model.LibraryAgent]:
"""
@@ -413,7 +430,6 @@ async def create_library_agent(
Args:
agent: The agent/Graph to add to the library.
user_id: The user to whom the agent will be added.
sensitive_action_safe_mode: Whether sensitive action blocks require review.
create_library_agents_for_sub_graphs: If True, creates LibraryAgent records for sub-graphs as well.
Returns:
@@ -449,10 +465,7 @@ async def create_library_agent(
}
},
settings=SafeJson(
GraphSettings.from_graph(
graph_entry,
sensitive_action_safe_mode=sensitive_action_safe_mode,
).model_dump()
_initialize_graph_settings(graph_entry).model_dump()
),
),
include=library_agent_include(
@@ -614,6 +627,33 @@ async def update_library_agent(
raise DatabaseError("Failed to update library agent") from e
async def update_library_agent_settings(
user_id: str,
agent_id: str,
settings: GraphSettings,
) -> library_model.LibraryAgent:
"""
Updates the settings for a specific LibraryAgent.
Args:
user_id: The owner of the LibraryAgent.
agent_id: The ID of the LibraryAgent to update.
settings: New GraphSettings to apply.
Returns:
The updated LibraryAgent.
Raises:
NotFoundError: If the specified LibraryAgent does not exist.
DatabaseError: If there's an error in the update operation.
"""
return await update_library_agent(
library_agent_id=agent_id,
user_id=user_id,
settings=settings,
)
async def delete_library_agent(
library_agent_id: str, user_id: str, soft_delete: bool = True
) -> None:
@@ -798,7 +838,7 @@ async def add_store_agent_to_library(
"isCreatedByUser": False,
"useGraphIsActiveVersion": False,
"settings": SafeJson(
GraphSettings.from_graph(graph_model).model_dump()
_initialize_graph_settings(graph_model).model_dump()
),
},
include=library_agent_include(
@@ -1188,14 +1228,8 @@ async def fork_library_agent(
)
new_graph = await on_graph_activate(new_graph, user_id=user_id)
# Create a library agent for the new graph, preserving sensitive_action_safe_mode
return (
await create_library_agent(
new_graph,
user_id,
sensitive_action_safe_mode=original_agent.settings.sensitive_action_safe_mode,
)
)[0]
# Create a library agent for the new graph
return (await create_library_agent(new_graph, user_id))[0]
except prisma.errors.PrismaError as e:
logger.error(f"Database error cloning library agent: {e}")
raise DatabaseError("Failed to fork library agent") from e

View File
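For context, a sketch of how the new helper behaves (illustrative; `graph` is any `GraphModel`):

```python
# _initialize_graph_settings collapses the old from_graph(...) parameters
# into a single HITL-based rule:
settings = _initialize_graph_settings(graph)
if graph.has_human_in_the_loop:
    assert settings.human_in_the_loop_safe_mode is True  # review on by default
else:
    assert settings.human_in_the_loop_safe_mode is None  # undecided until set
```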

@@ -73,12 +73,6 @@ class LibraryAgent(pydantic.BaseModel):
has_external_trigger: bool = pydantic.Field(
description="Whether the agent has an external trigger (e.g. webhook) node"
)
has_human_in_the_loop: bool = pydantic.Field(
description="Whether the agent has human-in-the-loop blocks"
)
has_sensitive_action: bool = pydantic.Field(
description="Whether the agent has sensitive action blocks"
)
trigger_setup_info: Optional[GraphTriggerInfo] = None
# Indicates whether there's a new output (based on recent runs)
@@ -186,8 +180,6 @@ class LibraryAgent(pydantic.BaseModel):
graph.credentials_input_schema if sub_graphs is not None else None
),
has_external_trigger=graph.has_external_trigger,
has_human_in_the_loop=graph.has_human_in_the_loop,
has_sensitive_action=graph.has_sensitive_action,
trigger_setup_info=graph.trigger_setup_info,
new_output=new_output,
can_access_graph=can_access_graph,

View File

@@ -52,8 +52,6 @@ async def test_get_library_agents_success(
output_schema={"type": "object", "properties": {}},
credentials_input_schema={"type": "object", "properties": {}},
has_external_trigger=False,
has_human_in_the_loop=False,
has_sensitive_action=False,
status=library_model.LibraryAgentStatus.COMPLETED,
recommended_schedule_cron=None,
new_output=False,
@@ -77,8 +75,6 @@ async def test_get_library_agents_success(
output_schema={"type": "object", "properties": {}},
credentials_input_schema={"type": "object", "properties": {}},
has_external_trigger=False,
has_human_in_the_loop=False,
has_sensitive_action=False,
status=library_model.LibraryAgentStatus.COMPLETED,
recommended_schedule_cron=None,
new_output=False,
@@ -154,8 +150,6 @@ async def test_get_favorite_library_agents_success(
output_schema={"type": "object", "properties": {}},
credentials_input_schema={"type": "object", "properties": {}},
has_external_trigger=False,
has_human_in_the_loop=False,
has_sensitive_action=False,
status=library_model.LibraryAgentStatus.COMPLETED,
recommended_schedule_cron=None,
new_output=False,
@@ -224,8 +218,6 @@ def test_add_agent_to_library_success(
output_schema={"type": "object", "properties": {}},
credentials_input_schema={"type": "object", "properties": {}},
has_external_trigger=False,
has_human_in_the_loop=False,
has_sensitive_action=False,
status=library_model.LibraryAgentStatus.COMPLETED,
new_output=False,
can_access_graph=True,

View File

@@ -154,15 +154,16 @@ async def store_content_embedding(
# Upsert the embedding
# WHERE clause in DO UPDATE prevents PostgreSQL 15 bug with NULLS NOT DISTINCT
# Use {schema}.vector for explicit pgvector type qualification
await execute_raw_with_schema(
"""
INSERT INTO {schema_prefix}"UnifiedContentEmbedding" (
"id", "contentType", "contentId", "userId", "embedding", "searchableText", "metadata", "createdAt", "updatedAt"
)
VALUES (gen_random_uuid()::text, $1::{schema_prefix}"ContentType", $2, $3, $4::vector, $5, $6::jsonb, NOW(), NOW())
VALUES (gen_random_uuid()::text, $1::{schema_prefix}"ContentType", $2, $3, $4::{schema}.vector, $5, $6::jsonb, NOW(), NOW())
ON CONFLICT ("contentType", "contentId", "userId")
DO UPDATE SET
"embedding" = $4::vector,
"embedding" = $4::{schema}.vector,
"searchableText" = $5,
"metadata" = $6::jsonb,
"updatedAt" = NOW()
@@ -177,7 +178,6 @@ async def store_content_embedding(
searchable_text,
metadata_json,
client=client,
set_public_search_path=True,
)
logger.info(f"Stored embedding for {content_type}:{content_id}")
@@ -236,7 +236,6 @@ async def get_content_embedding(
content_type,
content_id,
user_id,
set_public_search_path=True,
)
if result and len(result) > 0:
@@ -871,31 +870,34 @@ async def semantic_search(
# Add content type parameters and build placeholders dynamically
content_type_start_idx = len(params) + 1
content_type_placeholders = ", ".join(
f'${content_type_start_idx + i}::{{{{schema_prefix}}}}"ContentType"'
'$' + str(content_type_start_idx + i) + '::{schema_prefix}"ContentType"'
for i in range(len(content_types))
)
params.extend([ct.value for ct in content_types])
sql = f"""
# Build min_similarity param index before appending
min_similarity_idx = len(params) + 1
params.append(min_similarity)
# Use regular string (not f-string) for template to preserve {schema_prefix} and {schema} placeholders
# Use OPERATOR({schema}.<=>) for explicit operator schema qualification
sql = """
SELECT
"contentId" as content_id,
"contentType" as content_type,
"searchableText" as searchable_text,
metadata,
1 - (embedding <=> '{embedding_str}'::vector) as similarity
FROM {{{{schema_prefix}}}}"UnifiedContentEmbedding"
WHERE "contentType" IN ({content_type_placeholders})
{user_filter}
AND 1 - (embedding <=> '{embedding_str}'::vector) >= ${len(params) + 1}
1 - (embedding OPERATOR({schema}.<=>) '""" + embedding_str + """'::{schema}.vector) as similarity
FROM {schema_prefix}"UnifiedContentEmbedding"
WHERE "contentType" IN (""" + content_type_placeholders + """)
""" + user_filter + """
AND 1 - (embedding OPERATOR({schema}.<=>) '""" + embedding_str + """'::{schema}.vector) >= $""" + str(min_similarity_idx) + """
ORDER BY similarity DESC
LIMIT $1
"""
params.append(min_similarity)
try:
results = await query_raw_with_schema(
sql, *params, set_public_search_path=True
)
results = await query_raw_with_schema(sql, *params)
return [
{
"content_id": row["content_id"],
@@ -922,31 +924,33 @@ async def semantic_search(
# Add content type parameters and build placeholders dynamically
content_type_start_idx = len(params_lexical) + 1
content_type_placeholders_lexical = ", ".join(
f'${content_type_start_idx + i}::{{{{schema_prefix}}}}"ContentType"'
'$' + str(content_type_start_idx + i) + '::{schema_prefix}"ContentType"'
for i in range(len(content_types))
)
params_lexical.extend([ct.value for ct in content_types])
sql_lexical = f"""
# Build query param index before appending
query_param_idx = len(params_lexical) + 1
params_lexical.append(f"%{query}%")
# Use regular string (not f-string) for template to preserve {schema_prefix} placeholders
sql_lexical = """
SELECT
"contentId" as content_id,
"contentType" as content_type,
"searchableText" as searchable_text,
metadata,
0.0 as similarity
FROM {{{{schema_prefix}}}}"UnifiedContentEmbedding"
WHERE "contentType" IN ({content_type_placeholders_lexical})
{user_filter}
AND "searchableText" ILIKE ${len(params_lexical) + 1}
FROM {schema_prefix}"UnifiedContentEmbedding"
WHERE "contentType" IN (""" + content_type_placeholders_lexical + """)
""" + user_filter + """
AND "searchableText" ILIKE $""" + str(query_param_idx) + """
ORDER BY "updatedAt" DESC
LIMIT $1
"""
params_lexical.append(f"%{query}%")
try:
results = await query_raw_with_schema(
sql_lexical, *params_lexical, set_public_search_path=True
)
results = await query_raw_with_schema(sql_lexical, *params_lexical)
return [
{
"content_id": row["content_id"],

View File
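The two placeholders differ in shape: `{schema_prefix}` is a quoted prefix for tables and enum types, while `{schema}` is the bare schema name needed for `::{schema}.vector` casts and `OPERATOR({schema}.<=>)`. A sketch of how the template renders, assuming the schema is `platform` (illustrative values, mirroring the prefix logic in `_raw_with_schema` below):

```python
schema = "platform"
schema_prefix = f'"{schema}".' if schema != "public" else ""

template = (
    "SELECT 1 - (embedding OPERATOR({schema}.<=>) $1::{schema}.vector) AS sim "
    'FROM {schema_prefix}"UnifiedContentEmbedding"'
)
print(template.format(schema=schema, schema_prefix=schema_prefix))
# SELECT 1 - (embedding OPERATOR(platform.<=>) $1::platform.vector) AS sim
# FROM "platform"."UnifiedContentEmbedding"
```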

@@ -155,18 +155,14 @@ async def test_store_embedding_success(mocker):
)
assert result is True
# execute_raw is called twice: once for SET search_path, once for INSERT
assert mock_client.execute_raw.call_count == 2
# execute_raw is called once for INSERT (no separate SET search_path needed)
assert mock_client.execute_raw.call_count == 1
# First call: SET search_path
first_call_args = mock_client.execute_raw.call_args_list[0][0]
assert "SET search_path" in first_call_args[0]
# Second call: INSERT query with the actual data
second_call_args = mock_client.execute_raw.call_args_list[1][0]
assert "test-version-id" in second_call_args
assert "[0.1,0.2,0.3]" in second_call_args
assert None in second_call_args # userId should be None for store agents
# Verify the INSERT query with the actual data
call_args = mock_client.execute_raw.call_args_list[0][0]
assert "test-version-id" in call_args
assert "[0.1,0.2,0.3]" in call_args
assert None in call_args # userId should be None for store agents
@pytest.mark.asyncio(loop_scope="session")

View File

@@ -295,7 +295,7 @@ async def unified_hybrid_search(
FROM {{schema_prefix}}"UnifiedContentEmbedding" uce
WHERE uce."contentType" = ANY({content_types_param}::{{schema_prefix}}"ContentType"[])
{user_filter}
ORDER BY uce.embedding <=> {embedding_param}::vector
ORDER BY uce.embedding OPERATOR({{schema}}.<=>) {embedding_param}::{{schema}}.vector
LIMIT 200
)
),
@@ -307,7 +307,7 @@ async def unified_hybrid_search(
uce.metadata,
uce."updatedAt" as updated_at,
-- Semantic score: cosine similarity (1 - distance)
COALESCE(1 - (uce.embedding <=> {embedding_param}::vector), 0) as semantic_score,
COALESCE(1 - (uce.embedding OPERATOR({{schema}}.<=>) {embedding_param}::{{schema}}.vector), 0) as semantic_score,
-- Lexical score: ts_rank_cd
COALESCE(ts_rank_cd(uce.search, plainto_tsquery('english', {query_param})), 0) as lexical_raw,
-- Category match from metadata
@@ -363,9 +363,7 @@ async def unified_hybrid_search(
LIMIT {limit_param} OFFSET {offset_param}
"""
results = await query_raw_with_schema(
sql_query, *params, set_public_search_path=True
)
results = await query_raw_with_schema(sql_query, *params)
total = results[0]["total_count"] if results else 0
# Apply BM25 reranking
@@ -585,7 +583,7 @@ async def hybrid_search(
WHERE uce."contentType" = 'STORE_AGENT'::{{schema_prefix}}"ContentType"
AND uce."userId" IS NULL
AND {where_clause}
ORDER BY uce.embedding <=> {embedding_param}::vector
ORDER BY uce.embedding OPERATOR({{schema}}.<=>) {embedding_param}::{{schema}}.vector
LIMIT 200
) uce
),
@@ -607,7 +605,7 @@ async def hybrid_search(
-- Searchable text for BM25 reranking
COALESCE(sa.agent_name, '') || ' ' || COALESCE(sa.sub_heading, '') || ' ' || COALESCE(sa.description, '') as searchable_text,
-- Semantic score
COALESCE(1 - (uce.embedding <=> {embedding_param}::vector), 0) as semantic_score,
COALESCE(1 - (uce.embedding OPERATOR({{schema}}.<=>) {embedding_param}::{{schema}}.vector), 0) as semantic_score,
-- Lexical score (raw, will normalize)
COALESCE(ts_rank_cd(uce.search, plainto_tsquery('english', {query_param})), 0) as lexical_raw,
-- Category match
@@ -688,9 +686,7 @@ async def hybrid_search(
LIMIT {limit_param} OFFSET {offset_param}
"""
results = await query_raw_with_schema(
sql_query, *params, set_public_search_path=True
)
results = await query_raw_with_schema(sql_query, *params)
total = results[0]["total_count"] if results else 0

View File

@@ -761,8 +761,10 @@ async def create_new_graph(
graph.reassign_ids(user_id=user_id, reassign_graph_id=True)
graph.validate_graph(for_run=False)
# The return value of the create graph & library function is intentionally not used here,
# as the graph is already valid and no sub-graphs are returned.
await graph_db.create_graph(graph, user_id=user_id)
await library_db.create_library_agent(graph, user_id)
await library_db.create_library_agent(graph, user_id=user_id)
activated_graph = await on_graph_activate(graph, user_id=user_id)
if create_graph.source == "builder":
@@ -886,19 +888,21 @@ async def set_graph_active_version(
async def _update_library_agent_version_and_settings(
user_id: str, agent_graph: graph_db.GraphModel
) -> library_model.LibraryAgent:
# Keep the library agent up to date with the new active version
library = await library_db.update_agent_version_in_library(
user_id, agent_graph.id, agent_graph.version
)
updated_settings = GraphSettings.from_graph(
graph=agent_graph,
hitl_safe_mode=library.settings.human_in_the_loop_safe_mode,
sensitive_action_safe_mode=library.settings.sensitive_action_safe_mode,
)
if updated_settings != library.settings:
library = await library_db.update_library_agent(
library_agent_id=library.id,
# If the graph has HITL node, initialize the setting if it's not already set.
if (
agent_graph.has_human_in_the_loop
and library.settings.human_in_the_loop_safe_mode is None
):
await library_db.update_library_agent_settings(
user_id=user_id,
settings=updated_settings,
agent_id=library.id,
settings=library.settings.model_copy(
update={"human_in_the_loop_safe_mode": True}
),
)
return library
@@ -915,18 +919,21 @@ async def update_graph_settings(
user_id: Annotated[str, Security(get_user_id)],
) -> GraphSettings:
"""Update graph settings for the user's library agent."""
# Get the library agent for this graph
library_agent = await library_db.get_library_agent_by_graph_id(
graph_id=graph_id, user_id=user_id
)
if not library_agent:
raise HTTPException(404, f"Graph #{graph_id} not found in user's library")
updated_agent = await library_db.update_library_agent(
library_agent_id=library_agent.id,
# Update the library agent settings
updated_agent = await library_db.update_library_agent_settings(
user_id=user_id,
agent_id=library_agent.id,
settings=settings,
)
# Return the updated settings
return GraphSettings.model_validate(updated_agent.settings)

View File

@@ -174,7 +174,7 @@ class AIShortformVideoCreatorBlock(Block):
)
frame_rate: int = SchemaField(description="Frame rate of the video", default=60)
generation_preset: GenerationPreset = SchemaField(
description="Generation preset for visual style - only effects AI generated visuals",
description="Generation preset for visual style - only affects AI-generated visuals",
default=GenerationPreset.LEONARDO,
placeholder=GenerationPreset.LEONARDO,
)

View File

@@ -381,7 +381,7 @@ Each range you add needs to be a string, with the upper and lower numbers of the
organization_locations: Optional[list[str]] = SchemaField(
description="""The location of the company headquarters. You can search across cities, US states, and countries.
If a company has several office locations, results are still based on the headquarters location. For example, if you search chicago but a company's HQ location is in boston, any Boston-based companies will not appearch in your search results, even if they match other parameters.
If a company has several office locations, results are still based on the headquarters location. For example, if you search chicago but a company's HQ location is in boston, any Boston-based companies will not appear in your search results, even if they match other parameters.
To exclude companies based on location, use the organization_not_locations parameter.
""",

View File

@@ -34,7 +34,7 @@ Each range you add needs to be a string, with the upper and lower numbers of the
organization_locations: list[str] = SchemaField(
description="""The location of the company headquarters. You can search across cities, US states, and countries.
If a company has several office locations, results are still based on the headquarters location. For example, if you search chicago but a company's HQ location is in boston, any Boston-based companies will not appearch in your search results, even if they match other parameters.
If a company has several office locations, results are still based on the headquarters location. For example, if you search chicago but a company's HQ location is in boston, any Boston-based companies will not appear in your search results, even if they match other parameters.
To exclude companies based on location, use the organization_not_locations parameter.
""",

View File

@@ -81,7 +81,7 @@ class StoreValueBlock(Block):
def __init__(self):
super().__init__(
id="1ff065e9-88e8-4358-9d82-8dc91f622ba9",
description="This block forwards an input value as output, allowing reuse without change.",
description="A basic block that stores and forwards a value throughout workflows, allowing it to be reused without changes across multiple blocks.",
categories={BlockCategory.BASIC},
input_schema=StoreValueBlock.Input,
output_schema=StoreValueBlock.Output,
@@ -111,7 +111,7 @@ class PrintToConsoleBlock(Block):
def __init__(self):
super().__init__(
id="f3b1c1b2-4c4f-4f0d-8d2f-4c4f0d8d2f4c",
description="Print the given text to the console, this is used for a debugging purpose.",
description="A debugging block that outputs text to the console for monitoring and troubleshooting workflow execution.",
categories={BlockCategory.BASIC},
input_schema=PrintToConsoleBlock.Input,
output_schema=PrintToConsoleBlock.Output,
@@ -137,7 +137,7 @@ class NoteBlock(Block):
def __init__(self):
super().__init__(
id="cc10ff7b-7753-4ff2-9af6-9399b1a7eddc",
description="This block is used to display a sticky note with the given text.",
description="A visual annotation block that displays a sticky note in the workflow editor for documentation and organization purposes.",
categories={BlockCategory.BASIC},
input_schema=NoteBlock.Input,
output_schema=NoteBlock.Output,

View File

@@ -159,7 +159,7 @@ class FindInDictionaryBlock(Block):
def __init__(self):
super().__init__(
id="0e50422c-6dee-4145-83d6-3a5a392f65de",
description="Lookup the given key in the input dictionary/object/list and return the value.",
description="A block that looks up a value in a dictionary, list, or object by key or index and returns the corresponding value.",
input_schema=FindInDictionaryBlock.Input,
output_schema=FindInDictionaryBlock.Output,
test_input=[

View File

@@ -51,7 +51,7 @@ class GithubCommentBlock(Block):
def __init__(self):
super().__init__(
id="a8db4d8d-db1c-4a25-a1b0-416a8c33602b",
description="This block posts a comment on a specified GitHub issue or pull request.",
description="A block that posts comments on GitHub issues or pull requests using the GitHub API.",
categories={BlockCategory.DEVELOPER_TOOLS},
input_schema=GithubCommentBlock.Input,
output_schema=GithubCommentBlock.Output,
@@ -151,7 +151,7 @@ class GithubUpdateCommentBlock(Block):
def __init__(self):
super().__init__(
id="b3f4d747-10e3-4e69-8c51-f2be1d99c9a7",
description="This block updates a comment on a specified GitHub issue or pull request.",
description="A block that updates an existing comment on a GitHub issue or pull request.",
categories={BlockCategory.DEVELOPER_TOOLS},
input_schema=GithubUpdateCommentBlock.Input,
output_schema=GithubUpdateCommentBlock.Output,
@@ -249,7 +249,7 @@ class GithubListCommentsBlock(Block):
def __init__(self):
super().__init__(
id="c4b5fb63-0005-4a11-b35a-0c2467bd6b59",
description="This block lists all comments for a specified GitHub issue or pull request.",
description="A block that retrieves all comments from a GitHub issue or pull request, including comment metadata and content.",
categories={BlockCategory.DEVELOPER_TOOLS},
input_schema=GithubListCommentsBlock.Input,
output_schema=GithubListCommentsBlock.Output,
@@ -363,7 +363,7 @@ class GithubMakeIssueBlock(Block):
def __init__(self):
super().__init__(
id="691dad47-f494-44c3-a1e8-05b7990f2dab",
description="This block creates a new issue on a specified GitHub repository.",
description="A block that creates new issues on GitHub repositories with a title and body content.",
categories={BlockCategory.DEVELOPER_TOOLS},
input_schema=GithubMakeIssueBlock.Input,
output_schema=GithubMakeIssueBlock.Output,
@@ -433,7 +433,7 @@ class GithubReadIssueBlock(Block):
def __init__(self):
super().__init__(
id="6443c75d-032a-4772-9c08-230c707c8acc",
description="This block reads the body, title, and user of a specified GitHub issue.",
description="A block that retrieves information about a specific GitHub issue, including its title, body content, and creator.",
categories={BlockCategory.DEVELOPER_TOOLS},
input_schema=GithubReadIssueBlock.Input,
output_schema=GithubReadIssueBlock.Output,
@@ -510,7 +510,7 @@ class GithubListIssuesBlock(Block):
def __init__(self):
super().__init__(
id="c215bfd7-0e57-4573-8f8c-f7d4963dcd74",
description="This block lists all issues for a specified GitHub repository.",
description="A block that retrieves a list of issues from a GitHub repository with their titles and URLs.",
categories={BlockCategory.DEVELOPER_TOOLS},
input_schema=GithubListIssuesBlock.Input,
output_schema=GithubListIssuesBlock.Output,
@@ -597,7 +597,7 @@ class GithubAddLabelBlock(Block):
def __init__(self):
super().__init__(
id="98bd6b77-9506-43d5-b669-6b9733c4b1f1",
description="This block adds a label to a specified GitHub issue or pull request.",
description="A block that adds a label to a GitHub issue or pull request for categorization and organization.",
categories={BlockCategory.DEVELOPER_TOOLS},
input_schema=GithubAddLabelBlock.Input,
output_schema=GithubAddLabelBlock.Output,
@@ -657,7 +657,7 @@ class GithubRemoveLabelBlock(Block):
def __init__(self):
super().__init__(
id="78f050c5-3e3a-48c0-9e5b-ef1ceca5589c",
description="This block removes a label from a specified GitHub issue or pull request.",
description="A block that removes a label from a GitHub issue or pull request.",
categories={BlockCategory.DEVELOPER_TOOLS},
input_schema=GithubRemoveLabelBlock.Input,
output_schema=GithubRemoveLabelBlock.Output,
@@ -720,7 +720,7 @@ class GithubAssignIssueBlock(Block):
def __init__(self):
super().__init__(
id="90507c72-b0ff-413a-886a-23bbbd66f542",
description="This block assigns a user to a specified GitHub issue.",
description="A block that assigns a GitHub user to an issue for task ownership and tracking.",
categories={BlockCategory.DEVELOPER_TOOLS},
input_schema=GithubAssignIssueBlock.Input,
output_schema=GithubAssignIssueBlock.Output,
@@ -786,7 +786,7 @@ class GithubUnassignIssueBlock(Block):
def __init__(self):
super().__init__(
id="d154002a-38f4-46c2-962d-2488f2b05ece",
description="This block unassigns a user from a specified GitHub issue.",
description="A block that removes a user's assignment from a GitHub issue.",
categories={BlockCategory.DEVELOPER_TOOLS},
input_schema=GithubUnassignIssueBlock.Input,
output_schema=GithubUnassignIssueBlock.Output,

View File

@@ -353,7 +353,7 @@ class GmailReadBlock(GmailBase):
def __init__(self):
super().__init__(
id="25310c70-b89b-43ba-b25c-4dfa7e2a481c",
description="This block reads emails from Gmail.",
description="A block that retrieves and reads emails from a Gmail account based on search criteria, returning detailed message information including subject, sender, body, and attachments.",
categories={BlockCategory.COMMUNICATION},
disabled=not GOOGLE_OAUTH_IS_CONFIGURED,
input_schema=GmailReadBlock.Input,
@@ -743,7 +743,7 @@ class GmailListLabelsBlock(GmailBase):
def __init__(self):
super().__init__(
id="3e1c2c1c-c689-4520-b956-1f3bf4e02bb7",
description="This block lists all labels in Gmail.",
description="A block that retrieves all labels (categories) from a Gmail account for organizing and categorizing emails.",
categories={BlockCategory.COMMUNICATION},
input_schema=GmailListLabelsBlock.Input,
output_schema=GmailListLabelsBlock.Output,
@@ -807,7 +807,7 @@ class GmailAddLabelBlock(GmailBase):
def __init__(self):
super().__init__(
id="f884b2fb-04f4-4265-9658-14f433926ac9",
description="This block adds a label to a Gmail message.",
description="A block that adds a label to a specific email message in Gmail, creating the label if it doesn't exist.",
categories={BlockCategory.COMMUNICATION},
input_schema=GmailAddLabelBlock.Input,
output_schema=GmailAddLabelBlock.Output,
@@ -893,7 +893,7 @@ class GmailRemoveLabelBlock(GmailBase):
def __init__(self):
super().__init__(
id="0afc0526-aba1-4b2b-888e-a22b7c3f359d",
description="This block removes a label from a Gmail message.",
description="A block that removes a label from a specific email message in a Gmail account.",
categories={BlockCategory.COMMUNICATION},
input_schema=GmailRemoveLabelBlock.Input,
output_schema=GmailRemoveLabelBlock.Output,
@@ -961,7 +961,7 @@ class GmailGetThreadBlock(GmailBase):
def __init__(self):
super().__init__(
id="21a79166-9df7-4b5f-9f36-96f639d86112",
description="Get a full Gmail thread by ID",
description="A block that retrieves an entire Gmail thread (email conversation) by ID, returning all messages with decoded bodies for reading complete conversations.",
categories={BlockCategory.COMMUNICATION},
input_schema=GmailGetThreadBlock.Input,
output_schema=GmailGetThreadBlock.Output,

View File

@@ -282,7 +282,7 @@ class GoogleSheetsReadBlock(Block):
def __init__(self):
super().__init__(
id="5724e902-3635-47e9-a108-aaa0263a4988",
description="This block reads data from a Google Sheets spreadsheet.",
description="A block that reads data from a Google Sheets spreadsheet using A1 notation range selection.",
categories={BlockCategory.DATA},
input_schema=GoogleSheetsReadBlock.Input,
output_schema=GoogleSheetsReadBlock.Output,
@@ -409,7 +409,7 @@ class GoogleSheetsWriteBlock(Block):
def __init__(self):
super().__init__(
id="d9291e87-301d-47a8-91fe-907fb55460e5",
description="This block writes data to a Google Sheets spreadsheet.",
description="A block that writes data to a Google Sheets spreadsheet at a specified A1 notation range.",
categories={BlockCategory.DATA},
input_schema=GoogleSheetsWriteBlock.Input,
output_schema=GoogleSheetsWriteBlock.Output,

View File

@@ -84,7 +84,7 @@ class HITLReviewHelper:
Exception: If review creation or status update fails
"""
# Skip review if safe mode is disabled - return auto-approved result
if not execution_context.human_in_the_loop_safe_mode:
if not execution_context.safe_mode:
logger.info(
f"Block {block_name} skipping review for node {node_exec_id} - safe mode disabled"
)

View File

@@ -104,7 +104,7 @@ class HumanInTheLoopBlock(Block):
execution_context: ExecutionContext,
**_kwargs,
) -> BlockOutput:
if not execution_context.human_in_the_loop_safe_mode:
if not execution_context.safe_mode:
logger.info(
f"HITL block skipping review for node {node_exec_id} - safe mode disabled"
)

View File

@@ -76,7 +76,7 @@ class AgentInputBlock(Block):
super().__init__(
**{
"id": "c0a8e994-ebf1-4a9c-a4d8-89d09c86741b",
"description": "Base block for user inputs.",
"description": "A block that accepts and processes user input values within a workflow, supporting various input types and validation.",
"input_schema": AgentInputBlock.Input,
"output_schema": AgentInputBlock.Output,
"test_input": [
@@ -168,7 +168,7 @@ class AgentOutputBlock(Block):
def __init__(self):
super().__init__(
id="363ae599-353e-4804-937e-b2ee3cef3da4",
description="Stores the output of the graph for users to see.",
description="A block that records and formats workflow results for display to users, with optional Jinja2 template formatting support.",
input_schema=AgentOutputBlock.Input,
output_schema=AgentOutputBlock.Output,
test_input=[

View File

@@ -854,7 +854,7 @@ class AIStructuredResponseGeneratorBlock(AIBlockBase):
def __init__(self):
super().__init__(
id="ed55ac19-356e-4243-a6cb-bc599e9b716f",
description="Call a Large Language Model (LLM) to generate formatted object based on the given prompt.",
description="A block that generates structured JSON responses using a Large Language Model (LLM), with schema validation and format enforcement.",
categories={BlockCategory.AI},
input_schema=AIStructuredResponseGeneratorBlock.Input,
output_schema=AIStructuredResponseGeneratorBlock.Output,
@@ -1265,7 +1265,7 @@ class AITextGeneratorBlock(AIBlockBase):
def __init__(self):
super().__init__(
id="1f292d4a-41a4-4977-9684-7c8d560b9f91",
description="Call a Large Language Model (LLM) to generate a string based on the given prompt.",
description="A block that produces text responses using a Large Language Model (LLM) based on customizable prompts and system instructions.",
categories={BlockCategory.AI},
input_schema=AITextGeneratorBlock.Input,
output_schema=AITextGeneratorBlock.Output,
@@ -1361,7 +1361,7 @@ class AITextSummarizerBlock(AIBlockBase):
def __init__(self):
super().__init__(
id="a0a69be1-4528-491c-a85a-a4ab6873e3f0",
description="Utilize a Large Language Model (LLM) to summarize a long text.",
description="A block that summarizes long texts using a Large Language Model (LLM), with configurable focus topics and summary styles.",
categories={BlockCategory.AI, BlockCategory.TEXT},
input_schema=AITextSummarizerBlock.Input,
output_schema=AITextSummarizerBlock.Output,
@@ -1562,7 +1562,7 @@ class AIConversationBlock(AIBlockBase):
def __init__(self):
super().__init__(
id="32a87eab-381e-4dd4-bdb8-4c47151be35a",
description="Advanced LLM call that takes a list of messages and sends them to the language model.",
description="A block that facilitates multi-turn conversations with a Large Language Model (LLM), maintaining context across message exchanges.",
categories={BlockCategory.AI},
input_schema=AIConversationBlock.Input,
output_schema=AIConversationBlock.Output,
@@ -1682,7 +1682,7 @@ class AIListGeneratorBlock(AIBlockBase):
def __init__(self):
super().__init__(
id="9c0b0450-d199-458b-a731-072189dd6593",
description="Generate a list of values based on the given prompt using a Large Language Model (LLM).",
description="A block that creates lists of items based on prompts using a Large Language Model (LLM), with optional source data for context.",
categories={BlockCategory.AI, BlockCategory.TEXT},
input_schema=AIListGeneratorBlock.Input,
output_schema=AIListGeneratorBlock.Output,

View File

@@ -46,7 +46,7 @@ class PublishToMediumBlock(Block):
class Input(BlockSchemaInput):
author_id: BlockSecret = SecretField(
key="medium_author_id",
description="""The Medium AuthorID of the user. You can get this by calling the /me endpoint of the Medium API.\n\ncurl -H "Authorization: Bearer YOUR_ACCESS_TOKEN" https://api.medium.com/v1/me" the response will contain the authorId field.""",
description="""The Medium AuthorID of the user. You can get this by calling the /me endpoint of the Medium API.\n\ncurl -H "Authorization: Bearer YOUR_ACCESS_TOKEN" https://api.medium.com/v1/me\n\nThe response will contain the authorId field.""",
placeholder="Enter the author's Medium AuthorID",
)
title: str = SchemaField(

View File

@@ -50,7 +50,7 @@ class CreateTalkingAvatarVideoBlock(Block):
description="The voice provider to use", default="microsoft"
)
voice_id: str = SchemaField(
description="The voice ID to use, get list of voices [here](https://docs.agpt.co/server/d_id)",
description="The voice ID to use, see [available voice IDs](https://agpt.co/docs/platform/using-ai-services/d_id)",
default="en-US-JennyNeural",
)
presenter_id: str = SchemaField(

View File

@@ -242,7 +242,7 @@ async def test_smart_decision_maker_tracks_llm_stats():
outputs = {}
# Create execution context
mock_execution_context = ExecutionContext(human_in_the_loop_safe_mode=False)
mock_execution_context = ExecutionContext(safe_mode=False)
# Create a mock execution processor for tests
@@ -343,7 +343,7 @@ async def test_smart_decision_maker_parameter_validation():
# Create execution context
mock_execution_context = ExecutionContext(human_in_the_loop_safe_mode=False)
mock_execution_context = ExecutionContext(safe_mode=False)
# Create a mock execution processor for tests
@@ -409,7 +409,7 @@ async def test_smart_decision_maker_parameter_validation():
# Create execution context
mock_execution_context = ExecutionContext(human_in_the_loop_safe_mode=False)
mock_execution_context = ExecutionContext(safe_mode=False)
# Create a mock execution processor for tests
@@ -471,7 +471,7 @@ async def test_smart_decision_maker_parameter_validation():
outputs = {}
# Create execution context
mock_execution_context = ExecutionContext(human_in_the_loop_safe_mode=False)
mock_execution_context = ExecutionContext(safe_mode=False)
# Create a mock execution processor for tests
@@ -535,7 +535,7 @@ async def test_smart_decision_maker_parameter_validation():
outputs = {}
# Create execution context
mock_execution_context = ExecutionContext(human_in_the_loop_safe_mode=False)
mock_execution_context = ExecutionContext(safe_mode=False)
# Create a mock execution processor for tests
@@ -658,7 +658,7 @@ async def test_smart_decision_maker_raw_response_conversion():
outputs = {}
# Create execution context
mock_execution_context = ExecutionContext(human_in_the_loop_safe_mode=False)
mock_execution_context = ExecutionContext(safe_mode=False)
# Create a mock execution processor for tests
@@ -730,7 +730,7 @@ async def test_smart_decision_maker_raw_response_conversion():
outputs = {}
# Create execution context
mock_execution_context = ExecutionContext(human_in_the_loop_safe_mode=False)
mock_execution_context = ExecutionContext(safe_mode=False)
# Create a mock execution processor for tests
@@ -786,7 +786,7 @@ async def test_smart_decision_maker_raw_response_conversion():
outputs = {}
# Create execution context
mock_execution_context = ExecutionContext(human_in_the_loop_safe_mode=False)
mock_execution_context = ExecutionContext(safe_mode=False)
# Create a mock execution processor for tests
@@ -905,7 +905,7 @@ async def test_smart_decision_maker_agent_mode():
# Create a mock execution context
mock_execution_context = ExecutionContext(
human_in_the_loop_safe_mode=False,
safe_mode=False,
)
# Create a mock execution processor for agent mode tests
@@ -1027,7 +1027,7 @@ async def test_smart_decision_maker_traditional_mode_default():
# Create execution context
mock_execution_context = ExecutionContext(human_in_the_loop_safe_mode=False)
mock_execution_context = ExecutionContext(safe_mode=False)
# Create a mock execution processor for tests

View File

@@ -386,7 +386,7 @@ async def test_output_yielding_with_dynamic_fields():
outputs = {}
from backend.data.execution import ExecutionContext
mock_execution_context = ExecutionContext(human_in_the_loop_safe_mode=False)
mock_execution_context = ExecutionContext(safe_mode=False)
mock_execution_processor = MagicMock()
async for output_name, output_value in block.run(
@@ -609,9 +609,7 @@ async def test_validation_errors_dont_pollute_conversation():
outputs = {}
from backend.data.execution import ExecutionContext
mock_execution_context = ExecutionContext(
human_in_the_loop_safe_mode=False
)
mock_execution_context = ExecutionContext(safe_mode=False)
# Create a proper mock execution processor for agent mode
from collections import defaultdict

View File

@@ -104,7 +104,7 @@ async def get_accuracy_trends_and_alerts(
AND e."executionStatus" IN ('COMPLETED', 'FAILED', 'TERMINATED')
{user_filter}
GROUP BY DATE(e."createdAt")
HAVING COUNT(*) >= 3 -- Need at least 3 executions per day
HAVING COUNT(*) >= 1 -- Include all days with at least 1 execution
),
trends AS (
SELECT

View File

@@ -474,7 +474,7 @@ class Block(ABC, Generic[BlockSchemaInputType, BlockSchemaOutputType]):
self.block_type = block_type
self.webhook_config = webhook_config
self.execution_stats: NodeExecutionStats = NodeExecutionStats()
self.is_sensitive_action: bool = False
self.requires_human_review: bool = False
if self.webhook_config:
if isinstance(self.webhook_config, BlockWebhookConfig):
@@ -637,9 +637,8 @@ class Block(ABC, Generic[BlockSchemaInputType, BlockSchemaOutputType]):
- should_pause: True if execution should be paused for review
- input_data_to_use: The input data to use (may be modified by reviewer)
"""
if not (
self.is_sensitive_action and execution_context.sensitive_action_safe_mode
):
# Skip review if not required or safe mode is disabled
if not self.requires_human_review or not execution_context.safe_mode:
return False, input_data
from backend.blocks.helpers.review import HITLReviewHelper

View File
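Restated as a standalone predicate (a sketch, not the actual method): review now hinges on the renamed `requires_human_review` flag plus the consolidated `safe_mode`:

```python
def needs_review(block: "Block", ctx: "ExecutionContext") -> bool:
    # Pause for human review only when the block opts in AND the execution
    # context has safe mode enabled; otherwise run straight through.
    return block.requires_human_review and ctx.safe_mode
```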

@@ -38,20 +38,6 @@ POOL_TIMEOUT = os.getenv("DB_POOL_TIMEOUT")
if POOL_TIMEOUT:
DATABASE_URL = add_param(DATABASE_URL, "pool_timeout", POOL_TIMEOUT)
# Add public schema to search_path for pgvector type access
# The vector extension is in public schema, but search_path is determined by schema parameter
# Extract the schema from DATABASE_URL or default to 'public' (matching get_database_schema())
parsed_url = urlparse(DATABASE_URL)
url_params = dict(parse_qsl(parsed_url.query))
db_schema = url_params.get("schema", "public")
# Build search_path, avoiding duplicates if db_schema is already 'public'
search_path_schemas = list(
dict.fromkeys([db_schema, "public"])
) # Preserves order, removes duplicates
search_path = ",".join(search_path_schemas)
# This allows using ::vector without schema qualification
DATABASE_URL = add_param(DATABASE_URL, "options", f"-c search_path={search_path}")
HTTP_TIMEOUT = int(POOL_TIMEOUT) if POOL_TIMEOUT else None
prisma = Prisma(
@@ -127,38 +113,43 @@ async def _raw_with_schema(
*args,
execute: bool = False,
client: Prisma | None = None,
set_public_search_path: bool = False,
) -> list[dict] | int:
"""Internal: Execute raw SQL with proper schema handling.
Use query_raw_with_schema() or execute_raw_with_schema() instead.
Supports placeholders:
- {schema_prefix}: Table/type prefix (e.g., "platform".)
- {schema}: Raw schema name (e.g., platform) for pgvector types and operators
Args:
query_template: SQL query with {schema_prefix} placeholder
query_template: SQL query with {schema_prefix} and/or {schema} placeholders
*args: Query parameters
execute: If False, executes SELECT query. If True, executes INSERT/UPDATE/DELETE.
client: Optional Prisma client for transactions (only used when execute=True).
set_public_search_path: If True, sets search_path to include public schema.
Needed for pgvector types and other public schema objects.
Returns:
- list[dict] if execute=False (query results)
- int if execute=True (number of affected rows)
Example with vector type:
await execute_raw_with_schema(
'INSERT INTO {schema_prefix}"Embedding" (vec) VALUES ($1::{schema}.vector)',
embedding_data
)
"""
schema = get_database_schema()
schema_prefix = f'"{schema}".' if schema != "public" else ""
formatted_query = query_template.format(schema_prefix=schema_prefix)
formatted_query = query_template.format(
schema_prefix=schema_prefix,
schema=schema,
)
import prisma as prisma_module
db_client = client if client else prisma_module.get_client()
# Set search_path to include public schema if requested
# Prisma doesn't support the 'options' connection parameter, so we set it per-session
# This is idempotent and safe to call multiple times
if set_public_search_path:
await db_client.execute_raw(f"SET search_path = {schema}, public") # type: ignore
if execute:
result = await db_client.execute_raw(formatted_query, *args) # type: ignore
else:
@@ -167,16 +158,12 @@ async def _raw_with_schema(
return result
async def query_raw_with_schema(
query_template: str, *args, set_public_search_path: bool = False
) -> list[dict]:
async def query_raw_with_schema(query_template: str, *args) -> list[dict]:
"""Execute raw SQL SELECT query with proper schema handling.
Args:
query_template: SQL query with {schema_prefix} placeholder
query_template: SQL query with {schema_prefix} and/or {schema} placeholders
*args: Query parameters
set_public_search_path: If True, sets search_path to include public schema.
Needed for pgvector types and other public schema objects.
Returns:
List of result rows as dictionaries
@@ -187,23 +174,20 @@ async def query_raw_with_schema(
user_id
)
"""
return await _raw_with_schema(query_template, *args, execute=False, set_public_search_path=set_public_search_path) # type: ignore
return await _raw_with_schema(query_template, *args, execute=False) # type: ignore
async def execute_raw_with_schema(
query_template: str,
*args,
client: Prisma | None = None,
set_public_search_path: bool = False,
) -> int:
"""Execute raw SQL command (INSERT/UPDATE/DELETE) with proper schema handling.
Args:
query_template: SQL query with {schema_prefix} placeholder
query_template: SQL query with {schema_prefix} and/or {schema} placeholders
*args: Query parameters
client: Optional Prisma client for transactions
set_public_search_path: If True, sets search_path to include public schema.
Needed for pgvector types and other public schema objects.
Returns:
Number of affected rows
@@ -215,7 +199,7 @@ async def execute_raw_with_schema(
client=tx # Optional transaction client
)
"""
return await _raw_with_schema(query_template, *args, execute=True, client=client, set_public_search_path=set_public_search_path) # type: ignore
return await _raw_with_schema(query_template, *args, execute=True, client=client) # type: ignore
class BaseDbModel(BaseModel):

View File
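The removed `SET search_path` round-trip is the crux of the PgBouncer fix. A conceptual sketch of the failure mode, assuming a transaction-mode pooler and a `platform` schema (illustrative, not production code):

```python
import prisma


async def demo(client: prisma.Prisma, emb: str) -> None:
    # Old approach: session state is set on one backend connection...
    await client.execute_raw("SET search_path = platform, public")
    # ...while the next statement may be routed to another connection,
    # where the bare 'vector' name no longer resolves -> intermittent
    # "type 'vector' does not exist".
    await client.query_raw("SELECT $1::vector AS v", emb)

    # New approach: the schema is embedded in the statement itself, so it
    # is correct on whichever backend connection serves it.
    await client.query_raw("SELECT $1::platform.vector AS v", emb)
```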

@@ -81,8 +81,7 @@ class ExecutionContext(BaseModel):
This includes information needed by blocks, sub-graphs, and execution management.
"""
human_in_the_loop_safe_mode: bool = True
sensitive_action_safe_mode: bool = False
safe_mode: bool = True
user_timezone: str = "UTC"
root_execution_id: Optional[str] = None
parent_execution_id: Optional[str] = None
@@ -154,8 +153,14 @@ class GraphExecutionMeta(BaseDbModel):
nodes_input_masks: Optional[dict[str, BlockInput]]
preset_id: Optional[str]
status: ExecutionStatus
started_at: datetime
ended_at: datetime
started_at: Optional[datetime] = Field(
None,
description="When execution started running. Null if not yet started (QUEUED).",
)
ended_at: Optional[datetime] = Field(
None,
description="When execution finished. Null if not yet completed (QUEUED, RUNNING, INCOMPLETE, REVIEW).",
)
is_shared: bool = False
share_token: Optional[str] = None
@@ -230,10 +235,8 @@ class GraphExecutionMeta(BaseDbModel):
@staticmethod
def from_db(_graph_exec: AgentGraphExecution):
now = datetime.now(timezone.utc)
# TODO: make started_at and ended_at optional
start_time = _graph_exec.startedAt or _graph_exec.createdAt
end_time = _graph_exec.updatedAt or now
start_time = _graph_exec.startedAt
end_time = _graph_exec.endedAt
try:
stats = GraphExecutionStats.model_validate(_graph_exec.stats)
@@ -903,6 +906,14 @@ async def update_graph_execution_stats(
if status:
update_data["executionStatus"] = status
# Set endedAt when execution reaches a terminal status
terminal_statuses = [
ExecutionStatus.COMPLETED,
ExecutionStatus.FAILED,
ExecutionStatus.TERMINATED,
]
if status in terminal_statuses:
update_data["endedAt"] = datetime.now(tz=timezone.utc)
where_clause: AgentGraphExecutionWhereInput = {"id": graph_exec_id}

View File
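Callers of `GraphExecutionMeta` now have to handle `None` timestamps. A small sketch (illustrative helper, not part of the diff):

```python
from datetime import datetime


def run_duration_seconds(
    started_at: datetime | None, ended_at: datetime | None
) -> float | None:
    # None while the execution is QUEUED (not started) or still RUNNING
    # (no endedAt yet); terminal statuses set endedAt, as shown above.
    if started_at is None or ended_at is None:
        return None
    return (ended_at - started_at).total_seconds()
```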

@@ -62,23 +62,7 @@ logger = logging.getLogger(__name__)
class GraphSettings(BaseModel):
human_in_the_loop_safe_mode: bool = True
sensitive_action_safe_mode: bool = False
@classmethod
def from_graph(
cls,
graph: "GraphModel",
hitl_safe_mode: bool | None = None,
sensitive_action_safe_mode: bool = False,
) -> "GraphSettings":
# Default to True if not explicitly set
if hitl_safe_mode is None:
hitl_safe_mode = True
return cls(
human_in_the_loop_safe_mode=hitl_safe_mode,
sensitive_action_safe_mode=sensitive_action_safe_mode,
)
human_in_the_loop_safe_mode: bool | None = None
class Link(BaseDbModel):
@@ -260,14 +244,10 @@ class BaseGraph(BaseDbModel):
return any(
node.block_id
for node in self.nodes
if node.block.block_type == BlockType.HUMAN_IN_THE_LOOP
)
@computed_field
@property
def has_sensitive_action(self) -> bool:
return any(
node.block_id for node in self.nodes if node.block.is_sensitive_action
if (
node.block.block_type == BlockType.HUMAN_IN_THE_LOOP
or node.block.requires_human_review
)
)
@property

View File

@@ -328,6 +328,8 @@ async def clear_business_understanding(user_id: str) -> bool:
def format_understanding_for_prompt(understanding: BusinessUnderstanding) -> str:
"""Format business understanding as text for system prompt injection."""
if not understanding:
return ""
sections = []
# User info section

View File

@@ -873,8 +873,11 @@ async def add_graph_execution(
settings = await gdb.get_graph_settings(user_id=user_id, graph_id=graph_id)
execution_context = ExecutionContext(
human_in_the_loop_safe_mode=settings.human_in_the_loop_safe_mode,
sensitive_action_safe_mode=settings.sensitive_action_safe_mode,
safe_mode=(
settings.human_in_the_loop_safe_mode
if settings.human_in_the_loop_safe_mode is not None
else True
),
user_timezone=(
user.timezone if user.timezone != USER_TIMEZONE_NOT_SET else "UTC"
),

View File
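The tri-state setting collapses to a boolean at execution time, with unset meaning safe. The resolution rule above, isolated (a sketch):

```python
def resolve_safe_mode(hitl_setting: bool | None) -> bool:
    # None (never configured) defaults to True, matching ExecutionContext's
    # safe_mode default; an explicit True/False passes through unchanged.
    return hitl_setting if hitl_setting is not None else True
```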

@@ -386,7 +386,6 @@ async def test_add_graph_execution_is_repeatable(mocker: MockerFixture):
mock_user.timezone = "UTC"
mock_settings = mocker.MagicMock()
mock_settings.human_in_the_loop_safe_mode = True
mock_settings.sensitive_action_safe_mode = False
mock_udb.get_user_by_id = mocker.AsyncMock(return_value=mock_user)
mock_gdb.get_graph_settings = mocker.AsyncMock(return_value=mock_settings)
@@ -652,7 +651,6 @@ async def test_add_graph_execution_with_nodes_to_skip(mocker: MockerFixture):
mock_user.timezone = "UTC"
mock_settings = mocker.MagicMock()
mock_settings.human_in_the_loop_safe_mode = True
mock_settings.sensitive_action_safe_mode = False
mock_udb.get_user_by_id = mocker.AsyncMock(return_value=mock_user)
mock_gdb.get_graph_settings = mocker.AsyncMock(return_value=mock_settings)

View File

@@ -96,9 +96,9 @@ jina_credentials = APIKeyCredentials(
)
unreal_credentials = APIKeyCredentials(
id="66f20754-1b81-48e4-91d0-f4f0dd82145f",
provider="unreal",
provider="unreal_speech",
api_key=SecretStr(settings.secrets.unreal_speech_api_key),
title="Use Credits for Unreal",
title="Use Credits for Unreal Speech",
expires_at=None,
)
open_router_credentials = APIKeyCredentials(
@@ -216,6 +216,14 @@ webshare_proxy_credentials = UserPasswordCredentials(
title="Use Credits for Webshare Proxy",
)
openweathermap_credentials = APIKeyCredentials(
id="8b3d4e5f-6a7b-8c9d-0e1f-2a3b4c5d6e7f",
provider="openweathermap",
api_key=SecretStr(settings.secrets.openweathermap_api_key),
title="Use Credits for OpenWeatherMap",
expires_at=None,
)
DEFAULT_CREDENTIALS = [
ollama_credentials,
revid_credentials,
@@ -243,6 +251,7 @@ DEFAULT_CREDENTIALS = [
llama_api_credentials,
v0_credentials,
webshare_proxy_credentials,
openweathermap_credentials,
]
SYSTEM_CREDENTIAL_IDS = {cred.id for cred in DEFAULT_CREDENTIALS}
@@ -346,11 +355,17 @@ class IntegrationCredentialsStore:
all_credentials.append(zerobounce_credentials)
if settings.secrets.google_maps_api_key:
all_credentials.append(google_maps_credentials)
if settings.secrets.llama_api_key:
all_credentials.append(llama_api_credentials)
if settings.secrets.v0_api_key:
all_credentials.append(v0_credentials)
if (
settings.secrets.webshare_proxy_username
and settings.secrets.webshare_proxy_password
):
all_credentials.append(webshare_proxy_credentials)
if settings.secrets.openweathermap_api_key:
all_credentials.append(openweathermap_credentials)
return all_credentials
async def get_creds_by_id(

View File

@@ -60,8 +60,10 @@ class LateExecutionMonitor:
if not all_late_executions:
return "No late executions detected."
# Sort by created time (oldest first)
all_late_executions.sort(key=lambda x: x.started_at)
# Sort by started time (oldest first), with None values (unstarted) first
all_late_executions.sort(
key=lambda x: x.started_at or datetime.min.replace(tzinfo=timezone.utc)
)
num_total_late = len(all_late_executions)
num_queued = len(queued_late_executions)
@@ -74,7 +76,7 @@ class LateExecutionMonitor:
was_truncated = num_total_late > tuncate_size
late_execution_details = [
f"* `Execution ID: {exec.id}, Graph ID: {exec.graph_id}v{exec.graph_version}, User ID: {exec.user_id}, Status: {exec.status}, Created At: {exec.started_at.isoformat()}`"
f"* `Execution ID: {exec.id}, Graph ID: {exec.graph_id}v{exec.graph_version}, User ID: {exec.user_id}, Status: {exec.status}, Started At: {exec.started_at.isoformat() if exec.started_at else 'Not started'}`"
for exec in truncated_executions
]

View File
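One subtlety in the sort key above: `started_at` values are timezone-aware, and Python raises `TypeError` when comparing aware and naive datetimes, so the `None` sentinel must be made aware too. A minimal demonstration (illustrative):

```python
from datetime import datetime, timezone

aware = datetime.now(timezone.utc)
try:
    _ = aware < datetime.min  # naive sentinel: aware-vs-naive comparison
except TypeError:
    pass  # raises, which is why the key can't use bare datetime.min
_ = aware < datetime.min.replace(tzinfo=timezone.utc)  # aware sentinel: fine
```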

@@ -1,9 +1,10 @@
-- CreateExtension
-- Supabase: pgvector must be enabled via Dashboard → Database → Extensions first
-- Create in public schema so vector type is available across all schemas
-- Creates extension in current schema (determined by search_path)
-- The extension may already exist in a different schema (e.g., Supabase pre-enables it)
DO $$
BEGIN
CREATE EXTENSION IF NOT EXISTS "vector" WITH SCHEMA "public";
CREATE EXTENSION IF NOT EXISTS "vector";
EXCEPTION WHEN OTHERS THEN
RAISE NOTICE 'vector extension not available or already exists, skipping';
END $$;
@@ -12,6 +13,7 @@ END $$;
CREATE TYPE "ContentType" AS ENUM ('STORE_AGENT', 'BLOCK', 'INTEGRATION', 'DOCUMENTATION', 'LIBRARY_AGENT');
-- CreateTable
-- Note: vector type is unqualified - relies on search_path including the schema where pgvector is installed
CREATE TABLE "UnifiedContentEmbedding" (
"id" TEXT NOT NULL,
"createdAt" TIMESTAMP(3) NOT NULL DEFAULT CURRENT_TIMESTAMP,
@@ -19,7 +21,7 @@ CREATE TABLE "UnifiedContentEmbedding" (
"contentType" "ContentType" NOT NULL,
"contentId" TEXT NOT NULL,
"userId" TEXT,
"embedding" public.vector(1536) NOT NULL,
"embedding" vector(1536) NOT NULL,
"searchableText" TEXT NOT NULL,
"metadata" JSONB NOT NULL DEFAULT '{}',
@@ -45,4 +47,4 @@ CREATE UNIQUE INDEX "UnifiedContentEmbedding_contentType_contentId_userId_key" O
-- Uses cosine distance operator (<=>), which matches the query in hybrid_search.py
-- Note: Drop first in case Prisma created a btree index (Prisma doesn't support HNSW)
DROP INDEX IF EXISTS "UnifiedContentEmbedding_embedding_idx";
CREATE INDEX "UnifiedContentEmbedding_embedding_idx" ON "UnifiedContentEmbedding" USING hnsw ("embedding" public.vector_cosine_ops);
CREATE INDEX "UnifiedContentEmbedding_embedding_idx" ON "UnifiedContentEmbedding" USING hnsw ("embedding" vector_cosine_ops);

View File

@@ -0,0 +1,8 @@
-- AlterTable
ALTER TABLE "AgentGraphExecution" ADD COLUMN "endedAt" TIMESTAMP(3);
-- Set endedAt to updatedAt for existing records with terminal status only
UPDATE "AgentGraphExecution"
SET "endedAt" = "updatedAt"
WHERE "endedAt" IS NULL
AND "executionStatus" IN ('COMPLETED', 'FAILED', 'TERMINATED');

View File

@@ -450,6 +450,7 @@ model AgentGraphExecution {
createdAt DateTime @default(now())
updatedAt DateTime? @updatedAt
startedAt DateTime?
endedAt DateTime?
isDeleted Boolean @default(false)

View File

@@ -0,0 +1,864 @@
#!/usr/bin/env python3
"""
Block Documentation Generator
Generates markdown documentation for all blocks from code introspection.
Preserves manually-written content between marker comments.
Usage:
# Generate all docs
poetry run python scripts/generate_block_docs.py
# Check mode for CI (exits 1 if stale)
poetry run python scripts/generate_block_docs.py --check
# Verbose output
poetry run python scripts/generate_block_docs.py -v
"""
import argparse
import inspect
import logging
import re
import sys
from collections import defaultdict
from dataclasses import dataclass, field
from pathlib import Path
from typing import Any
# Add backend to path for imports
backend_dir = Path(__file__).parent.parent
sys.path.insert(0, str(backend_dir))
logger = logging.getLogger(__name__)
# Default output directory relative to repo root
DEFAULT_OUTPUT_DIR = (
Path(__file__).parent.parent.parent.parent / "docs" / "integrations"
)
@dataclass
class FieldDoc:
"""Documentation for a single input/output field."""
name: str
description: str
type_str: str
required: bool
default: Any = None
advanced: bool = False
hidden: bool = False
placeholder: str | None = None
@dataclass
class BlockDoc:
"""Documentation data extracted from a block."""
id: str
name: str
class_name: str
description: str
categories: list[str]
category_descriptions: dict[str, str]
inputs: list[FieldDoc]
outputs: list[FieldDoc]
block_type: str
source_file: str
contributors: list[str] = field(default_factory=list)
# Category to human-readable name mapping
CATEGORY_DISPLAY_NAMES = {
"AI": "AI and Language Models",
"BASIC": "Basic Operations",
"TEXT": "Text Processing",
"SEARCH": "Search and Information Retrieval",
"SOCIAL": "Social Media and Content",
"DEVELOPER_TOOLS": "Developer Tools",
"DATA": "Data Processing",
"LOGIC": "Logic and Control Flow",
"COMMUNICATION": "Communication",
"INPUT": "Input/Output",
"OUTPUT": "Input/Output",
"MULTIMEDIA": "Media Generation",
"PRODUCTIVITY": "Productivity",
"HARDWARE": "Hardware",
"AGENT": "Agent Integration",
"CRM": "CRM Services",
"SAFETY": "AI Safety",
"ISSUE_TRACKING": "Issue Tracking",
"MARKETING": "Marketing",
}
# Category to doc file mapping (for grouping related blocks)
CATEGORY_FILE_MAP = {
"BASIC": "basic",
"TEXT": "text",
"AI": "llm",
"SEARCH": "search",
"DATA": "data",
"LOGIC": "logic",
"COMMUNICATION": "communication",
"MULTIMEDIA": "multimedia",
"PRODUCTIVITY": "productivity",
}
def class_name_to_display_name(class_name: str) -> str:
"""Convert BlockClassName to 'Block Class Name'."""
# Remove 'Block' suffix (only at the end, not all occurrences)
name = class_name.removesuffix("Block")
# Insert space before capitals
name = re.sub(r"([a-z])([A-Z])", r"\1 \2", name)
# Handle consecutive capitals (e.g., 'HTTPRequest' -> 'HTTP Request')
name = re.sub(r"([A-Z]+)([A-Z][a-z])", r"\1 \2", name)
return name.strip()
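# Examples (illustrative): "GithubListIssuesBlock" -> "Github List Issues";
# "HTTPRequestBlock" -> "HTTP Request" (consecutive capitals kept together).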
def type_to_readable(type_schema: dict[str, Any] | Any) -> str:
"""Convert JSON schema type to human-readable string."""
if not isinstance(type_schema, dict):
return str(type_schema) if type_schema else "Any"
if "anyOf" in type_schema:
# Union type - show options
any_of = type_schema["anyOf"]
if not isinstance(any_of, list):
return "Any"
options = []
for opt in any_of:
if isinstance(opt, dict) and opt.get("type") == "null":
continue
options.append(type_to_readable(opt))
if not options:
return "None"
if len(options) == 1:
return options[0]
return " | ".join(options)
if "allOf" in type_schema:
all_of = type_schema["allOf"]
if not isinstance(all_of, list) or not all_of:
return "Any"
return type_to_readable(all_of[0])
schema_type = type_schema.get("type")
if schema_type == "array":
items = type_schema.get("items", {})
item_type = type_to_readable(items)
return f"List[{item_type}]"
if schema_type == "object":
if "additionalProperties" in type_schema:
additional_props = type_schema["additionalProperties"]
# additionalProperties: true means any value type is allowed
if additional_props is True:
return "Dict[str, Any]"
value_type = type_to_readable(additional_props)
return f"Dict[str, {value_type}]"
# Check if it's a specific model
title = type_schema.get("title", "Object")
return title
if schema_type == "string":
if "enum" in type_schema:
return " | ".join(f'"{v}"' for v in type_schema["enum"])
if "format" in type_schema:
return f"str ({type_schema['format']})"
return "str"
if schema_type == "integer":
return "int"
if schema_type == "number":
return "float"
if schema_type == "boolean":
return "bool"
if schema_type == "null":
return "None"
# Fallback
return type_schema.get("title", schema_type or "Any")
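# Examples (illustrative):
#   {"type": "array", "items": {"type": "string"}}    -> "List[str]"
#   {"anyOf": [{"type": "string"}, {"type": "null"}]} -> "str"
#   {"type": "string", "enum": ["low", "high"]}       -> '"low" | "high"'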
def safe_get(d: Any, key: str, default: Any = None) -> Any:
"""Safely get a value from a dict-like object."""
if isinstance(d, dict):
return d.get(key, default)
return default
def file_path_to_title(file_path: str) -> str:
"""Convert file path to a readable title.
Examples:
"github/issues.md" -> "GitHub Issues"
"basic.md" -> "Basic"
"llm.md" -> "LLM"
"google/sheets.md" -> "Google Sheets"
"""
# Special case replacements (applied after title casing)
TITLE_FIXES = {
"Llm": "LLM",
"Github": "GitHub",
"Api": "API",
"Ai": "AI",
"Oauth": "OAuth",
"Url": "URL",
"Ci": "CI",
"Pr": "PR",
"Gmb": "GMB", # Google My Business
"Hubspot": "HubSpot",
"Linkedin": "LinkedIn",
"Tiktok": "TikTok",
"Youtube": "YouTube",
}
def apply_fixes(text: str) -> str:
# Split into words, fix each word, rejoin
words = text.split()
fixed_words = [TITLE_FIXES.get(word, word) for word in words]
return " ".join(fixed_words)
path = Path(file_path)
name = path.stem # e.g., "issues" or "sheets"
# Get parent dir if exists
parent = path.parent.name if path.parent.name != "." else None
# Title case and apply fixes
if parent:
parent_title = apply_fixes(parent.replace("_", " ").title())
name_title = apply_fixes(name.replace("_", " ").title())
return f"{parent_title} {name_title}"
return apply_fixes(name.replace("_", " ").title())
def extract_block_doc(block_cls: type) -> BlockDoc:
"""Extract documentation data from a block class."""
block = block_cls.create()
# Get source file
try:
source_file = inspect.getfile(block_cls)
# Make relative to blocks directory
blocks_dir = Path(source_file).parent
while blocks_dir.name != "blocks" and blocks_dir.parent != blocks_dir:
blocks_dir = blocks_dir.parent
source_file = str(Path(source_file).relative_to(blocks_dir.parent))
except (TypeError, ValueError):
source_file = "unknown"
# Extract input fields
input_schema = block.input_schema.jsonschema()
input_properties = safe_get(input_schema, "properties", {})
if not isinstance(input_properties, dict):
input_properties = {}
required_raw = safe_get(input_schema, "required", [])
# Handle edge cases where required might not be a list
if isinstance(required_raw, (list, set, tuple)):
required_inputs = set(required_raw)
else:
required_inputs = set()
inputs = []
for field_name, field_schema in input_properties.items():
if not isinstance(field_schema, dict):
continue
# Skip credentials fields in docs (they're auto-handled)
if "credentials" in field_name.lower():
continue
inputs.append(
FieldDoc(
name=field_name,
description=safe_get(field_schema, "description", ""),
type_str=type_to_readable(field_schema),
required=field_name in required_inputs,
default=safe_get(field_schema, "default"),
advanced=safe_get(field_schema, "advanced", False) or False,
hidden=safe_get(field_schema, "hidden", False) or False,
placeholder=safe_get(field_schema, "placeholder"),
)
)
# Extract output fields
output_schema = block.output_schema.jsonschema()
output_properties = safe_get(output_schema, "properties", {})
if not isinstance(output_properties, dict):
output_properties = {}
outputs = []
for field_name, field_schema in output_properties.items():
if not isinstance(field_schema, dict):
continue
outputs.append(
FieldDoc(
name=field_name,
description=safe_get(field_schema, "description", ""),
type_str=type_to_readable(field_schema),
required=True, # Outputs are always produced
hidden=safe_get(field_schema, "hidden", False) or False,
)
)
# Get category info (sort for deterministic ordering since it's a set)
categories = []
category_descriptions = {}
for cat in sorted(block.categories, key=lambda c: c.name):
categories.append(cat.name)
category_descriptions[cat.name] = cat.value
# Get contributors
contributors = []
for contrib in block.contributors:
contributors.append(contrib.name if hasattr(contrib, "name") else str(contrib))
return BlockDoc(
id=block.id,
name=class_name_to_display_name(block.name),
class_name=block.name,
description=block.description,
categories=categories,
category_descriptions=category_descriptions,
inputs=inputs,
outputs=outputs,
block_type=block.block_type.value,
source_file=source_file,
contributors=contributors,
)
def generate_anchor(name: str) -> str:
"""Generate markdown anchor from block name."""
return name.lower().replace(" ", "-").replace("(", "").replace(")", "")
def extract_manual_content(existing_content: str) -> dict[str, str]:
"""Extract content between MANUAL markers from existing file."""
manual_sections = {}
# Pattern: <!-- MANUAL: section_name -->content<!-- END MANUAL -->
pattern = r"<!-- MANUAL: (\w+) -->\s*(.*?)\s*<!-- END MANUAL -->"
matches = re.findall(pattern, existing_content, re.DOTALL)
for section_name, content in matches:
manual_sections[section_name] = content.strip()
return manual_sections
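# Illustrative round trip: given a section such as
#   <!-- MANUAL: how_it_works -->
#   Fetches current conditions from the provider's API.
#   <!-- END MANUAL -->
# this returns {"how_it_works": "Fetches current conditions from the provider's API."}.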
def generate_block_markdown(
block: BlockDoc,
manual_content: dict[str, str] | None = None,
) -> str:
"""Generate markdown documentation for a single block."""
manual_content = manual_content or {}
lines = []
# All blocks use ## heading, sections use ### (consistent siblings)
lines.append(f"## {block.name}")
lines.append("")
# What it is (full description)
lines.append(f"### What it is")
lines.append(block.description or "No description available.")
lines.append("")
# How it works (manual section)
lines.append(f"### How it works")
how_it_works = manual_content.get(
"how_it_works", "_Add technical explanation here._"
)
lines.append("<!-- MANUAL: how_it_works -->")
lines.append(how_it_works)
lines.append("<!-- END MANUAL -->")
lines.append("")
# Inputs table (auto-generated)
visible_inputs = [f for f in block.inputs if not f.hidden]
if visible_inputs:
lines.append(f"### Inputs")
lines.append("")
lines.append("| Input | Description | Type | Required |")
lines.append("|-------|-------------|------|----------|")
for inp in visible_inputs:
required = "Yes" if inp.required else "No"
desc = inp.description or "-"
type_str = inp.type_str or "-"
# Normalize newlines and escape pipes for valid table syntax
desc = desc.replace("\n", " ").replace("|", "\\|")
type_str = type_str.replace("|", "\\|")
lines.append(f"| {inp.name} | {desc} | {type_str} | {required} |")
lines.append("")
# Outputs table (auto-generated)
visible_outputs = [f for f in block.outputs if not f.hidden]
if visible_outputs:
lines.append(f"### Outputs")
lines.append("")
lines.append("| Output | Description | Type |")
lines.append("|--------|-------------|------|")
for out in visible_outputs:
desc = out.description or "-"
type_str = out.type_str or "-"
# Normalize newlines and escape pipes for valid table syntax
desc = desc.replace("\n", " ").replace("|", "\\|")
type_str = type_str.replace("|", "\\|")
lines.append(f"| {out.name} | {desc} | {type_str} |")
lines.append("")
# Possible use case (manual section)
lines.append(f"### Possible use case")
use_case = manual_content.get("use_case", "_Add practical use case examples here._")
lines.append("<!-- MANUAL: use_case -->")
lines.append(use_case)
lines.append("<!-- END MANUAL -->")
lines.append("")
lines.append("---")
lines.append("")
return "\n".join(lines)
def get_block_file_mapping(blocks: list[BlockDoc]) -> dict[str, list[BlockDoc]]:
"""
Map blocks to their documentation files.
Returns dict of {relative_file_path: [blocks]}
"""
file_mapping = defaultdict(list)
for block in blocks:
# Determine file path based on source file or category
source_path = Path(block.source_file)
# If source is in a subdirectory (e.g., google/gmail.py), use that structure
if len(source_path.parts) > 2: # blocks/subdir/file.py
subdir = source_path.parts[1] # e.g., "google"
# Use the Python filename as the md filename
md_file = source_path.stem + ".md" # e.g., "gmail.md"
file_path = f"{subdir}/{md_file}"
else:
# Use category-based grouping for top-level blocks
primary_category = block.categories[0] if block.categories else "BASIC"
file_name = CATEGORY_FILE_MAP.get(primary_category, "misc")
file_path = f"{file_name}.md"
file_mapping[file_path].append(block)
return dict(file_mapping)
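# Mapping examples (paths illustrative):
#   blocks/google/gmail.py             -> google/gmail.md  (source subdirectory wins)
#   blocks/text.py with category TEXT  -> text.md          (top-level file, category map)
#   top-level block, unmapped category -> misc.md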
def generate_overview_table(blocks: list[BlockDoc]) -> str:
"""Generate the overview table markdown (blocks.md)."""
lines = []
lines.append("# AutoGPT Blocks Overview")
lines.append("")
lines.append(
'AutoGPT uses a modular approach with various "blocks" to handle different tasks. These blocks are the building blocks of AutoGPT workflows, allowing users to create complex automations by combining simple, specialized components.'
)
lines.append("")
lines.append('!!! info "Creating Your Own Blocks"')
lines.append(" Want to create your own custom blocks? Check out our guides:")
lines.append(" ")
lines.append(
" - [Build your own Blocks](https://docs.agpt.co/platform/new_blocks/) - Step-by-step tutorial with examples"
)
lines.append(
" - [Block SDK Guide](https://docs.agpt.co/platform/block-sdk-guide/) - Advanced SDK patterns with OAuth, webhooks, and provider configuration"
)
lines.append("")
lines.append(
"Below is a comprehensive list of all available blocks, categorized by their primary function. Click on any block name to view its detailed documentation."
)
lines.append("")
# Group blocks by category
by_category = defaultdict(list)
for block in blocks:
primary_cat = block.categories[0] if block.categories else "BASIC"
by_category[primary_cat].append(block)
# Sort categories
category_order = [
"BASIC",
"DATA",
"TEXT",
"AI",
"SEARCH",
"SOCIAL",
"COMMUNICATION",
"DEVELOPER_TOOLS",
"MULTIMEDIA",
"PRODUCTIVITY",
"LOGIC",
"INPUT",
"OUTPUT",
"AGENT",
"CRM",
"SAFETY",
"ISSUE_TRACKING",
"HARDWARE",
"MARKETING",
]
# Track emitted display names to avoid duplicate headers
# (e.g., INPUT and OUTPUT both map to "Input/Output")
emitted_display_names: set[str] = set()
for category in category_order:
if category not in by_category:
continue
display_name = CATEGORY_DISPLAY_NAMES.get(category, category)
# Collect all blocks for this display name (may span multiple categories)
if display_name in emitted_display_names:
# Already emitted header, just add rows to existing table
# Find the position before the last empty line and insert rows
cat_blocks = sorted(by_category[category], key=lambda b: b.name)
# Remove the trailing empty line, add rows, then re-add empty line
lines.pop()
for block in cat_blocks:
file_mapping = get_block_file_mapping([block])
file_path = list(file_mapping.keys())[0]
anchor = generate_anchor(block.name)
short_desc = (
block.description.split(".")[0]
if block.description
else "No description"
)
short_desc = short_desc.replace("\n", " ").replace("|", "\\|")
lines.append(f"| [{block.name}]({file_path}#{anchor}) | {short_desc} |")
lines.append("")
continue
emitted_display_names.add(display_name)
cat_blocks = sorted(by_category[category], key=lambda b: b.name)
lines.append(f"## {display_name}")
lines.append("")
lines.append("| Block Name | Description |")
lines.append("|------------|-------------|")
for block in cat_blocks:
# Determine link path
file_mapping = get_block_file_mapping([block])
file_path = list(file_mapping.keys())[0]
anchor = generate_anchor(block.name)
# Short description (first sentence)
short_desc = (
block.description.split(".")[0]
if block.description
else "No description"
)
short_desc = short_desc.replace("\n", " ").replace("|", "\\|")
lines.append(f"| [{block.name}]({file_path}#{anchor}) | {short_desc} |")
lines.append("")
return "\n".join(lines)
def load_all_blocks_for_docs() -> list[BlockDoc]:
"""Load all blocks and extract documentation."""
from backend.blocks import load_all_blocks
block_classes = load_all_blocks()
blocks = []
for _block_id, block_cls in block_classes.items():
try:
block_doc = extract_block_doc(block_cls)
blocks.append(block_doc)
except Exception as e:
logger.warning(f"Failed to extract docs for {block_cls.__name__}: {e}")
return blocks
def write_block_docs(
output_dir: Path,
blocks: list[BlockDoc],
verbose: bool = False,
) -> dict[str, str]:
"""
Write block documentation files.
Returns dict of {file_path: content} for all generated files.
"""
output_dir = Path(output_dir)
output_dir.mkdir(parents=True, exist_ok=True)
file_mapping = get_block_file_mapping(blocks)
generated_files = {}
for file_path, file_blocks in file_mapping.items():
full_path = output_dir / file_path
# Create subdirectories if needed
full_path.parent.mkdir(parents=True, exist_ok=True)
# Load existing content for manual section preservation
existing_content = ""
if full_path.exists():
existing_content = full_path.read_text()
# Always generate title from file path (with fixes applied)
file_title = file_path_to_title(file_path)
# Extract existing file description if present (preserve manual content)
file_header_pattern = (
r"^# .+?\n<!-- MANUAL: file_description -->\n(.*?)\n<!-- END MANUAL -->"
)
file_header_match = re.search(file_header_pattern, existing_content, re.DOTALL)
if file_header_match:
file_description = file_header_match.group(1)
else:
file_description = "_Add a description of this category of blocks._"
# Generate file header
file_header = f"# {file_title}\n"
file_header += "<!-- MANUAL: file_description -->\n"
file_header += f"{file_description}\n"
file_header += "<!-- END MANUAL -->\n"
# Generate content for each block
content_parts = []
for block in sorted(file_blocks, key=lambda b: b.name):
# Extract manual content specific to this block
# Match block heading (h2) and capture until --- separator
block_pattern = rf"(?:^|\n)## {re.escape(block.name)}\s*\n(.*?)(?=\n---|\Z)"
block_match = re.search(block_pattern, existing_content, re.DOTALL)
if block_match:
manual_content = extract_manual_content(block_match.group(1))
else:
manual_content = {}
content_parts.append(
generate_block_markdown(
block,
manual_content,
)
)
full_content = file_header + "\n" + "\n".join(content_parts)
generated_files[str(file_path)] = full_content
if verbose:
print(f" Writing {file_path} ({len(file_blocks)} blocks)")
full_path.write_text(full_content)
# Generate overview file
overview_content = generate_overview_table(blocks)
overview_path = output_dir / "README.md"
generated_files["README.md"] = overview_content
overview_path.write_text(overview_content)
if verbose:
print(" Writing README.md (overview)")
return generated_files
def check_docs_in_sync(output_dir: Path, blocks: list[BlockDoc]) -> bool:
"""
Check if generated docs match existing docs.
Returns True if in sync, False otherwise.
"""
output_dir = Path(output_dir)
file_mapping = get_block_file_mapping(blocks)
all_match = True
out_of_sync_details: list[tuple[str, list[str]]] = []
for file_path, file_blocks in file_mapping.items():
full_path = output_dir / file_path
if not full_path.exists():
block_names = [b.name for b in sorted(file_blocks, key=lambda b: b.name)]
print(f"MISSING: {file_path}")
print(f" Blocks: {', '.join(block_names)}")
out_of_sync_details.append((file_path, block_names))
all_match = False
continue
existing_content = full_path.read_text()
# Always generate title from file path (with fixes applied)
file_title = file_path_to_title(file_path)
# Extract existing file description if present (preserve manual content)
file_header_pattern = (
r"^# .+?\n<!-- MANUAL: file_description -->\n(.*?)\n<!-- END MANUAL -->"
)
file_header_match = re.search(file_header_pattern, existing_content, re.DOTALL)
if file_header_match:
file_description = file_header_match.group(1)
else:
file_description = "_Add a description of this category of blocks._"
# Generate expected file header
file_header = f"# {file_title}\n"
file_header += "<!-- MANUAL: file_description -->\n"
file_header += f"{file_description}\n"
file_header += "<!-- END MANUAL -->\n"
# Extract manual content from existing file
manual_sections_by_block = {}
for block in file_blocks:
block_pattern = rf"(?:^|\n)## {re.escape(block.name)}\s*\n(.*?)(?=\n---|\Z)"
block_match = re.search(block_pattern, existing_content, re.DOTALL)
if block_match:
manual_sections_by_block[block.name] = extract_manual_content(
block_match.group(1)
)
# Generate expected content and check each block individually
content_parts = []
mismatched_blocks = []
for block in sorted(file_blocks, key=lambda b: b.name):
manual_content = manual_sections_by_block.get(block.name, {})
expected_block_content = generate_block_markdown(
block,
manual_content,
)
content_parts.append(expected_block_content)
# Check if this specific block's section exists and matches
# Include the --- separator to match generate_block_markdown output
block_pattern = rf"(?:^|\n)(## {re.escape(block.name)}\s*\n.*?\n---\n)"
block_match = re.search(block_pattern, existing_content, re.DOTALL)
if not block_match:
mismatched_blocks.append(f"{block.name} (missing)")
elif block_match.group(1).strip() != expected_block_content.strip():
mismatched_blocks.append(block.name)
expected_content = file_header + "\n" + "\n".join(content_parts)
if existing_content.strip() != expected_content.strip():
print(f"OUT OF SYNC: {file_path}")
if mismatched_blocks:
print(f" Affected blocks: {', '.join(mismatched_blocks)}")
out_of_sync_details.append((file_path, mismatched_blocks))
all_match = False
# Check overview
overview_path = output_dir / "README.md"
if overview_path.exists():
existing_overview = overview_path.read_text()
expected_overview = generate_overview_table(blocks)
if existing_overview.strip() != expected_overview.strip():
print("OUT OF SYNC: README.md (overview)")
print(" The blocks overview table needs regeneration")
out_of_sync_details.append(("README.md", ["overview table"]))
all_match = False
else:
print("MISSING: README.md (overview)")
out_of_sync_details.append(("README.md", ["overview table"]))
all_match = False
# Check for unfilled manual sections
unfilled_patterns = [
"_Add a description of this category of blocks._",
"_Add technical explanation here._",
"_Add practical use case examples here._",
]
files_with_unfilled = []
for file_path in file_mapping.keys():
full_path = output_dir / file_path
if full_path.exists():
content = full_path.read_text()
unfilled_count = sum(1 for p in unfilled_patterns if p in content)
if unfilled_count > 0:
files_with_unfilled.append((file_path, unfilled_count))
if files_with_unfilled:
print("\nWARNING: Files with unfilled manual sections:")
for file_path, count in sorted(files_with_unfilled):
print(f" {file_path}: {count} unfilled section(s)")
print(
f"\nTotal: {len(files_with_unfilled)} files with unfilled manual sections"
)
return all_match
def main():
parser = argparse.ArgumentParser(
description="Generate block documentation from code introspection"
)
parser.add_argument(
"--output-dir",
type=Path,
default=DEFAULT_OUTPUT_DIR,
help="Output directory for generated docs",
)
parser.add_argument(
"--check",
action="store_true",
help="Check if docs are in sync (for CI), exit 1 if not",
)
parser.add_argument(
"-v",
"--verbose",
action="store_true",
help="Verbose output",
)
args = parser.parse_args()
logging.basicConfig(
level=logging.DEBUG if args.verbose else logging.INFO,
format="%(levelname)s: %(message)s",
)
print("Loading blocks...")
blocks = load_all_blocks_for_docs()
print(f"Found {len(blocks)} blocks")
if args.check:
print(f"Checking docs in {args.output_dir}...")
in_sync = check_docs_in_sync(args.output_dir, blocks)
if in_sync:
print("All documentation is in sync!")
sys.exit(0)
else:
print("\n" + "=" * 60)
print("Documentation is out of sync!")
print("=" * 60)
print("\nTo fix this, run one of the following:")
print("\n Option 1 - Run locally:")
print(
" cd autogpt_platform/backend && poetry run python scripts/generate_block_docs.py"
)
print("\n Option 2 - Ask Claude Code to run it:")
print(' "Run the block docs generator script to sync documentation"')
print("\n" + "=" * 60)
sys.exit(1)
else:
print(f"Generating docs to {args.output_dir}...")
write_block_docs(
args.output_dir,
blocks,
verbose=args.verbose,
)
print("Done!")
if __name__ == "__main__":
main()
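# Typical invocations (run from autogpt_platform/backend; the --output-dir
# value is illustrative and defaults to DEFAULT_OUTPUT_DIR):
#   poetry run python scripts/generate_block_docs.py          # regenerate docs
#   poetry run python scripts/generate_block_docs.py --check  # CI drift check, exits 1 if stale
#   poetry run python scripts/generate_block_docs.py -v --output-dir docs/integrations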

View File

@@ -0,0 +1,208 @@
#!/usr/bin/env python3
"""Tests for the block documentation generator."""
import pytest
from scripts.generate_block_docs import (
class_name_to_display_name,
extract_manual_content,
generate_anchor,
type_to_readable,
)
class TestClassNameToDisplayName:
"""Tests for class_name_to_display_name function."""
def test_simple_block_name(self):
assert class_name_to_display_name("PrintBlock") == "Print"
def test_multi_word_block_name(self):
assert class_name_to_display_name("GetWeatherBlock") == "Get Weather"
def test_consecutive_capitals(self):
assert class_name_to_display_name("HTTPRequestBlock") == "HTTP Request"
def test_ai_prefix(self):
assert class_name_to_display_name("AIConditionBlock") == "AI Condition"
def test_no_block_suffix(self):
assert class_name_to_display_name("SomeClass") == "Some Class"
class TestTypeToReadable:
"""Tests for type_to_readable function."""
def test_string_type(self):
assert type_to_readable({"type": "string"}) == "str"
def test_integer_type(self):
assert type_to_readable({"type": "integer"}) == "int"
def test_number_type(self):
assert type_to_readable({"type": "number"}) == "float"
def test_boolean_type(self):
assert type_to_readable({"type": "boolean"}) == "bool"
def test_array_type(self):
result = type_to_readable({"type": "array", "items": {"type": "string"}})
assert result == "List[str]"
def test_object_type(self):
result = type_to_readable({"type": "object", "title": "MyModel"})
assert result == "MyModel"
def test_anyof_with_null(self):
result = type_to_readable({"anyOf": [{"type": "string"}, {"type": "null"}]})
assert result == "str"
def test_anyof_multiple_types(self):
result = type_to_readable({"anyOf": [{"type": "string"}, {"type": "integer"}]})
assert result == "str | int"
def test_enum_type(self):
result = type_to_readable(
{"type": "string", "enum": ["option1", "option2", "option3"]}
)
assert result == '"option1" | "option2" | "option3"'
def test_none_input(self):
assert type_to_readable(None) == "Any"
def test_non_dict_input(self):
assert type_to_readable("string") == "string"
class TestExtractManualContent:
"""Tests for extract_manual_content function."""
def test_extract_how_it_works(self):
content = """
### How it works
<!-- MANUAL: how_it_works -->
This is how it works.
<!-- END MANUAL -->
"""
result = extract_manual_content(content)
assert result == {"how_it_works": "This is how it works."}
def test_extract_use_case(self):
content = """
### Possible use case
<!-- MANUAL: use_case -->
Example use case here.
<!-- END MANUAL -->
"""
result = extract_manual_content(content)
assert result == {"use_case": "Example use case here."}
def test_extract_multiple_sections(self):
content = """
<!-- MANUAL: how_it_works -->
How it works content.
<!-- END MANUAL -->
<!-- MANUAL: use_case -->
Use case content.
<!-- END MANUAL -->
"""
result = extract_manual_content(content)
assert result == {
"how_it_works": "How it works content.",
"use_case": "Use case content.",
}
def test_empty_content(self):
result = extract_manual_content("")
assert result == {}
def test_no_markers(self):
result = extract_manual_content("Some content without markers")
assert result == {}
class TestGenerateAnchor:
"""Tests for generate_anchor function."""
def test_simple_name(self):
assert generate_anchor("Print") == "print"
def test_multi_word_name(self):
assert generate_anchor("Get Weather") == "get-weather"
def test_name_with_parentheses(self):
assert generate_anchor("Something (Optional)") == "something-optional"
def test_already_lowercase(self):
assert generate_anchor("already lowercase") == "already-lowercase"
class TestIntegration:
"""Integration tests that require block loading."""
def test_load_blocks(self):
"""Test that blocks can be loaded successfully."""
import logging
import sys
from pathlib import Path
logging.disable(logging.CRITICAL)
sys.path.insert(0, str(Path(__file__).parent.parent))
from scripts.generate_block_docs import load_all_blocks_for_docs
blocks = load_all_blocks_for_docs()
assert len(blocks) > 0, "Should load at least one block"
def test_block_doc_has_required_fields(self):
"""Test that extracted block docs have required fields."""
import logging
import sys
from pathlib import Path
logging.disable(logging.CRITICAL)
sys.path.insert(0, str(Path(__file__).parent.parent))
from scripts.generate_block_docs import load_all_blocks_for_docs
blocks = load_all_blocks_for_docs()
block = blocks[0]
assert hasattr(block, "id")
assert hasattr(block, "name")
assert hasattr(block, "description")
assert hasattr(block, "categories")
assert hasattr(block, "inputs")
assert hasattr(block, "outputs")
def test_file_mapping_is_deterministic(self):
"""Test that file mapping produces consistent results."""
import logging
import sys
from pathlib import Path
logging.disable(logging.CRITICAL)
sys.path.insert(0, str(Path(__file__).parent.parent))
from scripts.generate_block_docs import (
get_block_file_mapping,
load_all_blocks_for_docs,
)
# Load blocks twice and compare mappings
blocks1 = load_all_blocks_for_docs()
blocks2 = load_all_blocks_for_docs()
mapping1 = get_block_file_mapping(blocks1)
mapping2 = get_block_file_mapping(blocks2)
# Check same files are generated
assert set(mapping1.keys()) == set(mapping2.keys())
# Check same block counts per file
for file_path in mapping1:
assert len(mapping1[file_path]) == len(mapping2[file_path])
if __name__ == "__main__":
pytest.main([__file__, "-v"])

View File

@@ -11,7 +11,6 @@
"forked_from_version": null,
"has_external_trigger": false,
"has_human_in_the_loop": false,
"has_sensitive_action": false,
"id": "graph-123",
"input_schema": {
"properties": {},

View File

@@ -11,7 +11,6 @@
"forked_from_version": null,
"has_external_trigger": false,
"has_human_in_the_loop": false,
"has_sensitive_action": false,
"id": "graph-123",
"input_schema": {
"properties": {},

View File

@@ -27,8 +27,6 @@
"properties": {}
},
"has_external_trigger": false,
"has_human_in_the_loop": false,
"has_sensitive_action": false,
"trigger_setup_info": null,
"new_output": false,
"can_access_graph": true,
@@ -36,8 +34,7 @@
"is_favorite": false,
"recommended_schedule_cron": null,
"settings": {
"human_in_the_loop_safe_mode": true,
"sensitive_action_safe_mode": false
"human_in_the_loop_safe_mode": null
},
"marketplace_listing": null
},
@@ -68,8 +65,6 @@
"properties": {}
},
"has_external_trigger": false,
"has_human_in_the_loop": false,
"has_sensitive_action": false,
"trigger_setup_info": null,
"new_output": false,
"can_access_graph": false,
@@ -77,8 +72,7 @@
"is_favorite": false,
"recommended_schedule_cron": null,
"settings": {
"human_in_the_loop_safe_mode": true,
"sensitive_action_safe_mode": false
"human_in_the_loop_safe_mode": null
},
"marketplace_listing": null
}

View File

@@ -51,6 +51,8 @@ export function AnalyticsResultsTable({ results }: Props) {
"Execution ID",
"Status",
"Score",
"Started At",
"Ended At",
"Summary Text",
"Error Message",
];
@@ -62,6 +64,8 @@ export function AnalyticsResultsTable({ results }: Props) {
result.exec_id,
result.status,
result.score?.toString() || "",
result.started_at ? new Date(result.started_at).toLocaleString() : "",
result.ended_at ? new Date(result.ended_at).toLocaleString() : "",
`"${(result.summary_text || "").replace(/"/g, '""')}"`, // Escape quotes in summary
`"${(result.error_message || "").replace(/"/g, '""')}"`, // Escape quotes in error
]);
@@ -248,15 +252,13 @@ export function AnalyticsResultsTable({ results }: Props) {
)}
</td>
<td className="px-4 py-3">
{(result.summary_text || result.error_message) && (
<Button
variant="ghost"
size="small"
onClick={() => toggleRowExpansion(result.exec_id)}
>
<EyeIcon size={16} />
</Button>
)}
<Button
variant="ghost"
size="small"
onClick={() => toggleRowExpansion(result.exec_id)}
>
<EyeIcon size={16} />
</Button>
</td>
</tr>
@@ -264,6 +266,44 @@ export function AnalyticsResultsTable({ results }: Props) {
<tr>
<td colSpan={7} className="bg-gray-50 px-4 py-3">
<div className="space-y-3">
{/* Timestamps section */}
<div className="grid grid-cols-2 gap-4 border-b border-gray-200 pb-3">
<div>
<Text
variant="body"
className="text-xs font-medium text-gray-600"
>
Started At:
</Text>
<Text
variant="body"
className="text-sm text-gray-700"
>
{result.started_at
? new Date(
result.started_at,
).toLocaleString()
: "—"}
</Text>
</div>
<div>
<Text
variant="body"
className="text-xs font-medium text-gray-600"
>
Ended At:
</Text>
<Text
variant="body"
className="text-sm text-gray-700"
>
{result.ended_at
? new Date(result.ended_at).toLocaleString()
: "—"}
</Text>
</div>
</div>
{result.summary_text && (
<div>
<Text

View File

@@ -541,7 +541,19 @@ export function ExecutionAnalyticsForm() {
{/* Accuracy Trends Display */}
{trendsData && (
<div className="space-y-4">
<h3 className="text-lg font-semibold">Execution Accuracy Trends</h3>
<div className="flex items-start justify-between">
<h3 className="text-lg font-semibold">Execution Accuracy Trends</h3>
<div className="rounded-md bg-blue-50 px-3 py-2 text-xs text-blue-700">
<p className="font-medium">
Chart Filters (matches monitoring system):
</p>
<ul className="mt-1 list-inside list-disc space-y-1">
<li>Only days with 1 execution with correctness score</li>
<li>Last 30 days</li>
<li>Averages calculated from scored executions only</li>
</ul>
</div>
</div>
{/* Alert Section */}
{trendsData.alert && (

View File

@@ -31,18 +31,10 @@ export function AgentSettingsModal({
}
}
const {
currentHITLSafeMode,
showHITLToggle,
handleHITLToggle,
currentSensitiveActionSafeMode,
showSensitiveActionToggle,
handleSensitiveActionToggle,
isPending,
shouldShowToggle,
} = useAgentSafeMode(agent);
const { currentSafeMode, isPending, hasHITLBlocks, handleToggle } =
useAgentSafeMode(agent);
if (!shouldShowToggle) return null;
if (!hasHITLBlocks) return null;
return (
<Dialog
@@ -65,48 +57,23 @@ export function AgentSettingsModal({
)}
<Dialog.Content>
<div className="space-y-6">
{showHITLToggle && (
<div className="flex w-full flex-col items-start gap-4 rounded-xl border border-zinc-100 bg-white p-6">
<div className="flex w-full items-start justify-between gap-4">
<div className="flex-1">
<Text variant="large-semibold">
Human-in-the-loop approval
</Text>
<Text variant="large" className="mt-1 text-zinc-900">
The agent will pause at human-in-the-loop blocks and wait
for your review before continuing
</Text>
</div>
<Switch
checked={currentHITLSafeMode || false}
onCheckedChange={handleHITLToggle}
disabled={isPending}
className="mt-1"
/>
<div className="flex w-full flex-col items-start gap-4 rounded-xl border border-zinc-100 bg-white p-6">
<div className="flex w-full items-start justify-between gap-4">
<div className="flex-1">
<Text variant="large-semibold">Require human approval</Text>
<Text variant="large" className="mt-1 text-zinc-900">
The agent will pause and wait for your review before
continuing
</Text>
</div>
<Switch
checked={currentSafeMode || false}
onCheckedChange={handleToggle}
disabled={isPending}
className="mt-1"
/>
</div>
)}
{showSensitiveActionToggle && (
<div className="flex w-full flex-col items-start gap-4 rounded-xl border border-zinc-100 bg-white p-6">
<div className="flex w-full items-start justify-between gap-4">
<div className="flex-1">
<Text variant="large-semibold">
Sensitive action approval
</Text>
<Text variant="large" className="mt-1 text-zinc-900">
The agent will pause at sensitive action blocks and wait for
your review before continuing
</Text>
</div>
<Switch
checked={currentSensitiveActionSafeMode}
onCheckedChange={handleSensitiveActionToggle}
disabled={isPending}
className="mt-1"
/>
</div>
</div>
)}
</div>
</div>
</Dialog.Content>
</Dialog>

View File

@@ -13,16 +13,8 @@ interface Props {
}
export function SelectedSettingsView({ agent, onClearSelectedRun }: Props) {
const {
currentHITLSafeMode,
showHITLToggle,
handleHITLToggle,
currentSensitiveActionSafeMode,
showSensitiveActionToggle,
handleSensitiveActionToggle,
isPending,
shouldShowToggle,
} = useAgentSafeMode(agent);
const { currentSafeMode, isPending, hasHITLBlocks, handleToggle } =
useAgentSafeMode(agent);
return (
<SelectedViewLayout agent={agent}>
@@ -42,51 +34,24 @@ export function SelectedSettingsView({ agent, onClearSelectedRun }: Props) {
</div>
<div className={`${AGENT_LIBRARY_SECTION_PADDING_X} space-y-6`}>
{shouldShowToggle ? (
<>
{showHITLToggle && (
<div className="flex w-full max-w-2xl flex-col items-start gap-4 rounded-xl border border-zinc-100 bg-white p-6">
<div className="flex w-full items-start justify-between gap-4">
<div className="flex-1">
<Text variant="large-semibold">
Human-in-the-loop approval
</Text>
<Text variant="large" className="mt-1 text-zinc-900">
The agent will pause at human-in-the-loop blocks and
wait for your review before continuing
</Text>
</div>
<Switch
checked={currentHITLSafeMode || false}
onCheckedChange={handleHITLToggle}
disabled={isPending}
className="mt-1"
/>
</div>
{hasHITLBlocks ? (
<div className="flex w-full max-w-2xl flex-col items-start gap-4 rounded-xl border border-zinc-100 bg-white p-6">
<div className="flex w-full items-start justify-between gap-4">
<div className="flex-1">
<Text variant="large-semibold">Require human approval</Text>
<Text variant="large" className="mt-1 text-zinc-900">
The agent will pause and wait for your review before
continuing
</Text>
</div>
)}
{showSensitiveActionToggle && (
<div className="flex w-full max-w-2xl flex-col items-start gap-4 rounded-xl border border-zinc-100 bg-white p-6">
<div className="flex w-full items-start justify-between gap-4">
<div className="flex-1">
<Text variant="large-semibold">
Sensitive action approval
</Text>
<Text variant="large" className="mt-1 text-zinc-900">
The agent will pause at sensitive action blocks and wait
for your review before continuing
</Text>
</div>
<Switch
checked={currentSensitiveActionSafeMode}
onCheckedChange={handleSensitiveActionToggle}
disabled={isPending}
className="mt-1"
/>
</div>
</div>
)}
</>
<Switch
checked={currentSafeMode || false}
onCheckedChange={handleToggle}
disabled={isPending}
className="mt-1"
/>
</div>
</div>
) : (
<div className="rounded-xl border border-zinc-100 bg-white p-6">
<Text variant="body" className="text-muted-foreground">

View File

@@ -173,8 +173,9 @@ export function OldAgentLibraryView() {
if (agentRuns.length > 0) {
// select latest run
const latestRun = agentRuns.reduce((latest, current) => {
if (latest.started_at && !current.started_at) return current;
else if (!latest.started_at) return latest;
if (!latest.started_at && !current.started_at) return latest;
if (!latest.started_at) return current;
if (!current.started_at) return latest;
return latest.started_at > current.started_at ? latest : current;
}, agentRuns[0]);
selectRun(latestRun.id as GraphExecutionID);

View File

@@ -184,9 +184,11 @@ export function AgentRunsSelectorList({
))}
{agentPresets.length > 0 && <Separator className="my-1" />}
{agentRuns
.toSorted(
(a, b) => b.started_at.getTime() - a.started_at.getTime(),
)
.toSorted((a, b) => {
const aTime = a.started_at?.getTime() ?? 0;
const bTime = b.started_at?.getTime() ?? 0;
return bTime - aTime;
})
.map((run) => (
<AgentRunSummaryCard
className={listItemClasses}
@@ -199,7 +201,7 @@ export function AgentRunsSelectorList({
?.name
: null) ?? agent.name
}
timestamp={run.started_at}
timestamp={run.started_at ?? undefined}
selected={selectedView.id === run.id}
onClick={() => onSelectRun(run.id)}
onDelete={() => doDeleteRun(run as GraphExecutionMeta)}

View File

@@ -120,9 +120,11 @@ export const AgentFlowList = ({
lastRun =
runCount == 0
? null
: _flowRuns.reduce((a, c) =>
a.started_at > c.started_at ? a : c,
);
: _flowRuns.reduce((a, c) => {
const aTime = a.started_at?.getTime() ?? 0;
const cTime = c.started_at?.getTime() ?? 0;
return aTime > cTime ? a : c;
});
}
return { flow, runCount, lastRun };
})
@@ -130,10 +132,9 @@ export const AgentFlowList = ({
if (!a.lastRun && !b.lastRun) return 0;
if (!a.lastRun) return 1;
if (!b.lastRun) return -1;
return (
b.lastRun.started_at.getTime() -
a.lastRun.started_at.getTime()
);
const bTime = b.lastRun.started_at?.getTime() ?? 0;
const aTime = a.lastRun.started_at?.getTime() ?? 0;
return bTime - aTime;
})
.map(({ flow, runCount, lastRun }) => (
<TableRow

View File

@@ -29,7 +29,10 @@ export const FlowRunsStatus: React.FC<{
: statsSince;
const filteredFlowRuns =
statsSinceTimestamp != null
? executions.filter((fr) => fr.started_at.getTime() > statsSinceTimestamp)
? executions.filter(
(fr) =>
fr.started_at && fr.started_at.getTime() > statsSinceTimestamp,
)
: executions;
return (

View File

@@ -98,40 +98,43 @@ export const FlowRunsTimeline = ({
<Scatter
key={flow.id}
data={executions
.filter((e) => e.graph_id == flow.graph_id)
.filter((e) => e.graph_id == flow.graph_id && e.started_at)
.map((e) => ({
...e,
time:
e.started_at.getTime() + (e.stats?.node_exec_time ?? 0) * 1000,
(e.started_at?.getTime() ?? 0) +
(e.stats?.node_exec_time ?? 0) * 1000,
_duration: e.stats?.node_exec_time ?? 0,
}))}
name={flow.name}
fill={`hsl(${(hashString(flow.id) * 137.5) % 360}, 70%, 50%)`}
/>
))}
{executions.map((execution) => (
<Line
key={execution.id}
type="linear"
dataKey="_duration"
data={[
{
...execution,
time: execution.started_at.getTime(),
_duration: 0,
},
{
...execution,
time: execution.ended_at.getTime(),
_duration: execution.stats?.node_exec_time ?? 0,
},
]}
stroke={`hsl(${(hashString(execution.graph_id) * 137.5) % 360}, 70%, 50%)`}
strokeWidth={2}
dot={false}
legendType="none"
/>
))}
{executions
.filter((e) => e.started_at && e.ended_at)
.map((execution) => (
<Line
key={execution.id}
type="linear"
dataKey="_duration"
data={[
{
...execution,
time: execution.started_at!.getTime(),
_duration: 0,
},
{
...execution,
time: execution.ended_at!.getTime(),
_duration: execution.stats?.node_exec_time ?? 0,
},
]}
stroke={`hsl(${(hashString(execution.graph_id) * 137.5) % 360}, 70%, 50%)`}
strokeWidth={2}
dot={false}
legendType="none"
/>
))}
<Legend
content={<ScrollableLegend />}
wrapperStyle={{

View File

@@ -98,7 +98,11 @@ const Monitor = () => {
...(selectedFlow
? executions.filter((v) => v.graph_id == selectedFlow.graph_id)
: executions),
].sort((a, b) => b.started_at.getTime() - a.started_at.getTime())}
].sort((a, b) => {
const aTime = a.started_at?.getTime() ?? 0;
const bTime = b.started_at?.getTime() ?? 0;
return bTime - aTime;
})}
selectedRun={selectedRun}
onSelectRun={(r) => setSelectedRun(r.id == selectedRun?.id ? null : r)}
/>

View File

@@ -116,6 +116,9 @@ export default function UserIntegrationsPage() {
"63a6e279-2dc2-448e-bf57-85776f7176dc", // ZeroBounce
"9aa1bde0-4947-4a70-a20c-84daa3850d52", // Google Maps
"d44045af-1c33-4833-9e19-752313214de2", // Llama API
"c4e6d1a0-3b5f-4789-a8e2-9b123456789f", // V0 by Vercel
"a5b3c7d9-2e4f-4a6b-8c1d-9e0f1a2b3c4d", // Webshare Proxy
"8b3d4e5f-6a7b-8c9d-0e1f-2a3b4c5d6e7f", // OpenWeatherMap
],
[],
);

View File

@@ -6383,11 +6383,6 @@
"title": "Has Human In The Loop",
"readOnly": true
},
"has_sensitive_action": {
"type": "boolean",
"title": "Has Sensitive Action",
"readOnly": true
},
"trigger_setup_info": {
"anyOf": [
{ "$ref": "#/components/schemas/GraphTriggerInfo" },
@@ -6404,7 +6399,6 @@
"output_schema",
"has_external_trigger",
"has_human_in_the_loop",
"has_sensitive_action",
"trigger_setup_info"
],
"title": "BaseGraph"
@@ -7154,6 +7148,20 @@
"error_message": {
"anyOf": [{ "type": "string" }, { "type": "null" }],
"title": "Error Message"
},
"started_at": {
"anyOf": [
{ "type": "string", "format": "date-time" },
{ "type": "null" }
],
"title": "Started At"
},
"ended_at": {
"anyOf": [
{ "type": "string", "format": "date-time" },
{ "type": "null" }
],
"title": "Ended At"
}
},
"type": "object",
@@ -7260,14 +7268,20 @@
},
"status": { "$ref": "#/components/schemas/AgentExecutionStatus" },
"started_at": {
"type": "string",
"format": "date-time",
"title": "Started At"
"anyOf": [
{ "type": "string", "format": "date-time" },
{ "type": "null" }
],
"title": "Started At",
"description": "When execution started running. Null if not yet started (QUEUED)."
},
"ended_at": {
"type": "string",
"format": "date-time",
"title": "Ended At"
"anyOf": [
{ "type": "string", "format": "date-time" },
{ "type": "null" }
],
"title": "Ended At",
"description": "When execution finished. Null if not yet completed (QUEUED, RUNNING, INCOMPLETE, REVIEW)."
},
"is_shared": {
"type": "boolean",
@@ -7301,8 +7315,6 @@
"nodes_input_masks",
"preset_id",
"status",
"started_at",
"ended_at",
"stats",
"outputs"
],
@@ -7401,14 +7413,20 @@
},
"status": { "$ref": "#/components/schemas/AgentExecutionStatus" },
"started_at": {
"type": "string",
"format": "date-time",
"title": "Started At"
"anyOf": [
{ "type": "string", "format": "date-time" },
{ "type": "null" }
],
"title": "Started At",
"description": "When execution started running. Null if not yet started (QUEUED)."
},
"ended_at": {
"type": "string",
"format": "date-time",
"title": "Ended At"
"anyOf": [
{ "type": "string", "format": "date-time" },
{ "type": "null" }
],
"title": "Ended At",
"description": "When execution finished. Null if not yet completed (QUEUED, RUNNING, INCOMPLETE, REVIEW)."
},
"is_shared": {
"type": "boolean",
@@ -7437,8 +7455,6 @@
"nodes_input_masks",
"preset_id",
"status",
"started_at",
"ended_at",
"stats"
],
"title": "GraphExecutionMeta"
@@ -7485,14 +7501,20 @@
},
"status": { "$ref": "#/components/schemas/AgentExecutionStatus" },
"started_at": {
"type": "string",
"format": "date-time",
"title": "Started At"
"anyOf": [
{ "type": "string", "format": "date-time" },
{ "type": "null" }
],
"title": "Started At",
"description": "When execution started running. Null if not yet started (QUEUED)."
},
"ended_at": {
"type": "string",
"format": "date-time",
"title": "Ended At"
"anyOf": [
{ "type": "string", "format": "date-time" },
{ "type": "null" }
],
"title": "Ended At",
"description": "When execution finished. Null if not yet completed (QUEUED, RUNNING, INCOMPLETE, REVIEW)."
},
"is_shared": {
"type": "boolean",
@@ -7531,8 +7553,6 @@
"nodes_input_masks",
"preset_id",
"status",
"started_at",
"ended_at",
"stats",
"outputs",
"node_executions"
@@ -7609,11 +7629,6 @@
"title": "Has Human In The Loop",
"readOnly": true
},
"has_sensitive_action": {
"type": "boolean",
"title": "Has Sensitive Action",
"readOnly": true
},
"trigger_setup_info": {
"anyOf": [
{ "$ref": "#/components/schemas/GraphTriggerInfo" },
@@ -7637,7 +7652,6 @@
"output_schema",
"has_external_trigger",
"has_human_in_the_loop",
"has_sensitive_action",
"trigger_setup_info",
"credentials_input_schema"
],
@@ -7716,11 +7730,6 @@
"title": "Has Human In The Loop",
"readOnly": true
},
"has_sensitive_action": {
"type": "boolean",
"title": "Has Sensitive Action",
"readOnly": true
},
"trigger_setup_info": {
"anyOf": [
{ "$ref": "#/components/schemas/GraphTriggerInfo" },
@@ -7745,7 +7754,6 @@
"output_schema",
"has_external_trigger",
"has_human_in_the_loop",
"has_sensitive_action",
"trigger_setup_info",
"credentials_input_schema"
],
@@ -7754,14 +7762,8 @@
"GraphSettings": {
"properties": {
"human_in_the_loop_safe_mode": {
"type": "boolean",
"title": "Human In The Loop Safe Mode",
"default": true
},
"sensitive_action_safe_mode": {
"type": "boolean",
"title": "Sensitive Action Safe Mode",
"default": false
"anyOf": [{ "type": "boolean" }, { "type": "null" }],
"title": "Human In The Loop Safe Mode"
}
},
"type": "object",
@@ -7919,16 +7921,6 @@
"title": "Has External Trigger",
"description": "Whether the agent has an external trigger (e.g. webhook) node"
},
"has_human_in_the_loop": {
"type": "boolean",
"title": "Has Human In The Loop",
"description": "Whether the agent has human-in-the-loop blocks"
},
"has_sensitive_action": {
"type": "boolean",
"title": "Has Sensitive Action",
"description": "Whether the agent has sensitive action blocks"
},
"trigger_setup_info": {
"anyOf": [
{ "$ref": "#/components/schemas/GraphTriggerInfo" },
@@ -7975,8 +7967,6 @@
"output_schema",
"credentials_input_schema",
"has_external_trigger",
"has_human_in_the_loop",
"has_sensitive_action",
"new_output",
"can_access_graph",
"is_latest_version",

View File

@@ -50,7 +50,9 @@ export function ActivityItem({ execution }: Props) {
execution.status === AgentExecutionStatus.QUEUED;
if (isActiveStatus) {
const timeAgo = formatTimeAgo(execution.started_at.toString());
const timeAgo = execution.started_at
? formatTimeAgo(execution.started_at.toString())
: "recently";
const statusText =
execution.status === AgentExecutionStatus.QUEUED ? "queued" : "running";
return [
@@ -61,7 +63,9 @@ export function ActivityItem({ execution }: Props) {
// Handle all other statuses with time display
const timeAgo = execution.ended_at
? formatTimeAgo(execution.ended_at.toString())
: formatTimeAgo(execution.started_at.toString());
: execution.started_at
? formatTimeAgo(execution.started_at.toString())
: "recently";
let statusText = "ended";
switch (execution.status) {

View File

@@ -20,15 +20,11 @@ function hasHITLBlocks(graph: GraphModel | LibraryAgent | Graph): boolean {
if ("has_human_in_the_loop" in graph) {
return !!graph.has_human_in_the_loop;
}
return false;
}
function hasSensitiveActionBlocks(
graph: GraphModel | LibraryAgent | Graph,
): boolean {
if ("has_sensitive_action" in graph) {
return !!graph.has_sensitive_action;
if (isLibraryAgent(graph)) {
return graph.settings?.human_in_the_loop_safe_mode !== null;
}
return false;
}
@@ -44,9 +40,7 @@ export function useAgentSafeMode(graph: GraphModel | LibraryAgent | Graph) {
const graphId = getGraphId(graph);
const isAgent = isLibraryAgent(graph);
const showHITLToggle = hasHITLBlocks(graph);
const showSensitiveActionToggle = hasSensitiveActionBlocks(graph);
const shouldShowToggle = showHITLToggle || showSensitiveActionToggle;
const shouldShowToggle = hasHITLBlocks(graph);
const { mutateAsync: updateGraphSettings, isPending } =
usePatchV1UpdateGraphSettings();
@@ -62,37 +56,27 @@ export function useAgentSafeMode(graph: GraphModel | LibraryAgent | Graph) {
},
);
const [localHITLSafeMode, setLocalHITLSafeMode] = useState<boolean>(true);
const [localSensitiveActionSafeMode, setLocalSensitiveActionSafeMode] =
useState<boolean>(false);
const [isLocalStateLoaded, setIsLocalStateLoaded] = useState<boolean>(false);
const [localSafeMode, setLocalSafeMode] = useState<boolean | null>(null);
useEffect(() => {
if (!isAgent && libraryAgent) {
setLocalHITLSafeMode(
libraryAgent.settings?.human_in_the_loop_safe_mode ?? true,
);
setLocalSensitiveActionSafeMode(
libraryAgent.settings?.sensitive_action_safe_mode ?? false,
);
setIsLocalStateLoaded(true);
const backendValue = libraryAgent.settings?.human_in_the_loop_safe_mode;
if (backendValue !== undefined) {
setLocalSafeMode(backendValue);
}
}
}, [isAgent, libraryAgent]);
const currentHITLSafeMode = isAgent
? (graph.settings?.human_in_the_loop_safe_mode ?? true)
: localHITLSafeMode;
const currentSafeMode = isAgent
? graph.settings?.human_in_the_loop_safe_mode
: localSafeMode;
const currentSensitiveActionSafeMode = isAgent
? (graph.settings?.sensitive_action_safe_mode ?? false)
: localSensitiveActionSafeMode;
const isStateUndetermined = isAgent
? graph.settings?.human_in_the_loop_safe_mode == null
: isLoading || localSafeMode === null;
const isHITLStateUndetermined = isAgent
? false
: isLoading || !isLocalStateLoaded;
const handleHITLToggle = useCallback(async () => {
const newSafeMode = !currentHITLSafeMode;
const handleToggle = useCallback(async () => {
const newSafeMode = !currentSafeMode;
try {
await updateGraphSettings({
@@ -101,7 +85,7 @@ export function useAgentSafeMode(graph: GraphModel | LibraryAgent | Graph) {
});
if (!isAgent) {
setLocalHITLSafeMode(newSafeMode);
setLocalSafeMode(newSafeMode);
}
if (isAgent) {
@@ -117,62 +101,37 @@ export function useAgentSafeMode(graph: GraphModel | LibraryAgent | Graph) {
queryClient.invalidateQueries({ queryKey: ["v2", "executions"] });
toast({
title: `HITL safe mode ${newSafeMode ? "enabled" : "disabled"}`,
title: `Safe mode ${newSafeMode ? "enabled" : "disabled"}`,
description: newSafeMode
? "Human-in-the-loop blocks will require manual review"
: "Human-in-the-loop blocks will proceed automatically",
duration: 2000,
});
} catch (error) {
handleToggleError(error, isAgent, toast);
}
}, [
currentHITLSafeMode,
graphId,
isAgent,
graph.id,
updateGraphSettings,
queryClient,
toast,
]);
const isNotFoundError =
error instanceof Error &&
(error.message.includes("404") || error.message.includes("not found"));
const handleSensitiveActionToggle = useCallback(async () => {
const newSafeMode = !currentSensitiveActionSafeMode;
try {
await updateGraphSettings({
graphId,
data: { sensitive_action_safe_mode: newSafeMode },
});
if (!isAgent) {
setLocalSensitiveActionSafeMode(newSafeMode);
}
if (isAgent) {
queryClient.invalidateQueries({
queryKey: getGetV2GetLibraryAgentQueryOptions(graph.id.toString())
.queryKey,
if (!isAgent && isNotFoundError) {
toast({
title: "Safe mode not available",
description:
"To configure safe mode, please save this graph to your library first.",
variant: "destructive",
});
} else {
toast({
title: "Failed to update safe mode",
description:
error instanceof Error
? error.message
: "An unexpected error occurred.",
variant: "destructive",
});
}
queryClient.invalidateQueries({
queryKey: ["v1", "graphs", graphId, "executions"],
});
queryClient.invalidateQueries({ queryKey: ["v2", "executions"] });
toast({
title: `Sensitive action safe mode ${newSafeMode ? "enabled" : "disabled"}`,
description: newSafeMode
? "Sensitive action blocks will require manual review"
: "Sensitive action blocks will proceed automatically",
duration: 2000,
});
} catch (error) {
handleToggleError(error, isAgent, toast);
}
}, [
currentSensitiveActionSafeMode,
currentSafeMode,
graphId,
isAgent,
graph.id,
@@ -182,53 +141,11 @@ export function useAgentSafeMode(graph: GraphModel | LibraryAgent | Graph) {
]);
return {
// HITL safe mode
currentHITLSafeMode,
showHITLToggle,
isHITLStateUndetermined,
handleHITLToggle,
// Sensitive action safe mode
currentSensitiveActionSafeMode,
showSensitiveActionToggle,
handleSensitiveActionToggle,
// General
currentSafeMode,
isPending,
shouldShowToggle,
// Backwards compatibility
currentSafeMode: currentHITLSafeMode,
isStateUndetermined: isHITLStateUndetermined,
handleToggle: handleHITLToggle,
hasHITLBlocks: showHITLToggle,
isStateUndetermined,
handleToggle,
hasHITLBlocks: shouldShowToggle,
};
}
function handleToggleError(
error: unknown,
isAgent: boolean,
toast: ReturnType<typeof useToast>["toast"],
) {
const isNotFoundError =
error instanceof Error &&
(error.message.includes("404") || error.message.includes("not found"));
if (!isAgent && isNotFoundError) {
toast({
title: "Safe mode not available",
description:
"To configure safe mode, please save this graph to your library first.",
variant: "destructive",
});
} else {
toast({
title: "Failed to update safe mode",
description:
error instanceof Error
? error.message
: "An unexpected error occurred.",
variant: "destructive",
});
}
}

View File

@@ -327,8 +327,8 @@ export type GraphExecutionMeta = {
| "FAILED"
| "INCOMPLETE"
| "REVIEW";
started_at: Date;
ended_at: Date;
started_at: Date | null;
ended_at: Date | null;
stats: {
error: string | null;
cost: number;

View File

@@ -0,0 +1 @@
# Video editing blocks

docs/integrations/README.md Normal file
View File

@@ -0,0 +1,558 @@
# AutoGPT Blocks Overview
AutoGPT uses a modular approach with various "blocks" to handle different tasks. These blocks are the building blocks of AutoGPT workflows, allowing users to create complex automations by combining simple, specialized components.
!!! info "Creating Your Own Blocks"
Want to create your own custom blocks? Check out our guides:
- [Build your own Blocks](https://docs.agpt.co/platform/new_blocks/) - Step-by-step tutorial with examples
- [Block SDK Guide](https://docs.agpt.co/platform/block-sdk-guide/) - Advanced SDK patterns with OAuth, webhooks, and provider configuration
Below is a comprehensive list of all available blocks, categorized by their primary function. Click on any block name to view its detailed documentation.
## Basic Operations
| Block Name | Description |
|------------|-------------|
| [Add Memory](basic.md#add-memory) | Add new memories to Mem0 with user segmentation |
| [Add To Dictionary](basic.md#add-to-dictionary) | Adds a new key-value pair to a dictionary |
| [Add To Library From Store](system/library_operations.md#add-to-library-from-store) | Add an agent from the store to your personal library |
| [Add To List](basic.md#add-to-list) | Adds a new entry to a list |
| [Agent Date Input](basic.md#agent-date-input) | Block for date input |
| [Agent Dropdown Input](basic.md#agent-dropdown-input) | Block for dropdown text selection |
| [Agent File Input](basic.md#agent-file-input) | Block for file upload input (string path for example) |
| [Agent Google Drive File Input](basic.md#agent-google-drive-file-input) | Block for selecting a file from Google Drive |
| [Agent Input](basic.md#agent-input) | A block that accepts and processes user input values within a workflow, supporting various input types and validation |
| [Agent Long Text Input](basic.md#agent-long-text-input) | Block for long text input (multi-line) |
| [Agent Number Input](basic.md#agent-number-input) | Block for number input |
| [Agent Output](basic.md#agent-output) | A block that records and formats workflow results for display to users, with optional Jinja2 template formatting support |
| [Agent Short Text Input](basic.md#agent-short-text-input) | Block for short text input (single-line) |
| [Agent Table Input](basic.md#agent-table-input) | Block for table data input with customizable headers |
| [Agent Time Input](basic.md#agent-time-input) | Block for time input |
| [Agent Toggle Input](basic.md#agent-toggle-input) | Block for boolean toggle input |
| [Block Installation](basic.md#block-installation) | Given a code string, this block allows the verification and installation of a block code into the system |
| [Dictionary Is Empty](basic.md#dictionary-is-empty) | Checks if a dictionary is empty |
| [File Store](basic.md#file-store) | Stores the input file in the temporary directory |
| [Find In Dictionary](basic.md#find-in-dictionary) | A block that looks up a value in a dictionary, list, or object by key or index and returns the corresponding value |
| [Find In List](basic.md#find-in-list) | Finds the index of the value in the list |
| [Get All Memories](basic.md#get-all-memories) | Retrieve all memories from Mem0 with optional conversation filtering |
| [Get Latest Memory](basic.md#get-latest-memory) | Retrieve the latest memory from Mem0 with optional key filtering |
| [Get List Item](basic.md#get-list-item) | Returns the element at the given index |
| [Get Store Agent Details](system/store_operations.md#get-store-agent-details) | Get detailed information about an agent from the store |
| [Get Weather Information](basic.md#get-weather-information) | Retrieves weather information for a specified location using OpenWeatherMap API |
| [Human In The Loop](basic.md#human-in-the-loop) | Pause execution and wait for human approval or modification of data |
| [Linear Search Issues](linear/issues.md#linear-search-issues) | Searches for issues on Linear |
| [List Is Empty](basic.md#list-is-empty) | Checks if a list is empty |
| [List Library Agents](system/library_operations.md#list-library-agents) | List all agents in your personal library |
| [Note](basic.md#note) | A visual annotation block that displays a sticky note in the workflow editor for documentation and organization purposes |
| [Print To Console](basic.md#print-to-console) | A debugging block that outputs text to the console for monitoring and troubleshooting workflow execution |
| [Remove From Dictionary](basic.md#remove-from-dictionary) | Removes a key-value pair from a dictionary |
| [Remove From List](basic.md#remove-from-list) | Removes an item from a list by value or index |
| [Replace Dictionary Value](basic.md#replace-dictionary-value) | Replaces the value for a specified key in a dictionary |
| [Replace List Item](basic.md#replace-list-item) | Replaces an item at the specified index |
| [Reverse List Order](basic.md#reverse-list-order) | Reverses the order of elements in a list |
| [Search Memory](basic.md#search-memory) | Search memories in Mem0 by user |
| [Search Store Agents](system/store_operations.md#search-store-agents) | Search for agents in the store |
| [Slant3D Cancel Order](slant3d/order.md#slant3d-cancel-order) | Cancel an existing order |
| [Slant3D Create Order](slant3d/order.md#slant3d-create-order) | Create a new print order |
| [Slant3D Estimate Order](slant3d/order.md#slant3d-estimate-order) | Get order cost estimate |
| [Slant3D Estimate Shipping](slant3d/order.md#slant3d-estimate-shipping) | Get shipping cost estimate |
| [Slant3D Filament](slant3d/filament.md#slant3d-filament) | Get list of available filaments |
| [Slant3D Get Orders](slant3d/order.md#slant3d-get-orders) | Get all orders for the account |
| [Slant3D Slicer](slant3d/slicing.md#slant3d-slicer) | Slice a 3D model file and get pricing information |
| [Slant3D Tracking](slant3d/order.md#slant3d-tracking) | Track order status and shipping |
| [Store Value](basic.md#store-value) | A basic block that stores and forwards a value throughout workflows, allowing it to be reused without changes across multiple blocks |
| [Universal Type Converter](basic.md#universal-type-converter) | This block is used to convert a value to a universal type |
| [XML Parser](basic.md#xml-parser) | Parses XML using gravitasml to tokenize and converts it to dict |
## Data Processing
| Block Name | Description |
|------------|-------------|
| [Airtable Create Base](airtable/bases.md#airtable-create-base) | Create or find a base in Airtable |
| [Airtable Create Field](airtable/schema.md#airtable-create-field) | Add a new field to an Airtable table |
| [Airtable Create Records](airtable/records.md#airtable-create-records) | Create records in an Airtable table |
| [Airtable Create Table](airtable/schema.md#airtable-create-table) | Create a new table in an Airtable base |
| [Airtable Delete Records](airtable/records.md#airtable-delete-records) | Delete records from an Airtable table |
| [Airtable Get Record](airtable/records.md#airtable-get-record) | Get a single record from Airtable |
| [Airtable List Bases](airtable/bases.md#airtable-list-bases) | List all bases in Airtable |
| [Airtable List Records](airtable/records.md#airtable-list-records) | List records from an Airtable table |
| [Airtable List Schema](airtable/schema.md#airtable-list-schema) | Get the complete schema of an Airtable base |
| [Airtable Update Field](airtable/schema.md#airtable-update-field) | Update field properties in an Airtable table |
| [Airtable Update Records](airtable/records.md#airtable-update-records) | Update records in an Airtable table |
| [Airtable Update Table](airtable/schema.md#airtable-update-table) | Update table properties |
| [Airtable Webhook Trigger](airtable/triggers.md#airtable-webhook-trigger) | Starts a flow whenever Airtable emits a webhook event |
| [Baas Bot Delete Recording](baas/bots.md#baas-bot-delete-recording) | Permanently delete a meeting's recorded data |
| [Baas Bot Fetch Meeting Data](baas/bots.md#baas-bot-fetch-meeting-data) | Retrieve recorded meeting data |
| [Create Dictionary](data.md#create-dictionary) | Creates a dictionary with the specified key-value pairs |
| [Create List](data.md#create-list) | Creates a list with the specified values |
| [Data For Seo Keyword Suggestions](dataforseo/keyword_suggestions.md#data-for-seo-keyword-suggestions) | Get keyword suggestions from DataForSEO Labs Google API |
| [Data For Seo Related Keywords](dataforseo/related_keywords.md#data-for-seo-related-keywords) | Get related keywords from DataForSEO Labs Google API |
| [Exa Create Import](exa/websets_import_export.md#exa-create-import) | Import CSV data to use with websets for targeted searches |
| [Exa Delete Import](exa/websets_import_export.md#exa-delete-import) | Delete an import |
| [Exa Export Webset](exa/websets_import_export.md#exa-export-webset) | Export webset data in JSON, CSV, or JSON Lines format |
| [Exa Get Import](exa/websets_import_export.md#exa-get-import) | Get the status and details of an import |
| [Exa Get New Items](exa/websets_items.md#exa-get-new-items) | Get items added since a cursor - enables incremental processing without reprocessing |
| [Exa List Imports](exa/websets_import_export.md#exa-list-imports) | List all imports with pagination support |
| [File Read](data.md#file-read) | Reads a file and returns its content as a string, with optional chunking by delimiter and size limits |
| [Google Calendar Read Events](google/calendar.md#google-calendar-read-events) | Retrieves upcoming events from a Google Calendar with filtering options |
| [Google Docs Append Markdown](google/docs.md#google-docs-append-markdown) | Append Markdown content to the end of a Google Doc with full formatting - ideal for LLM/AI output |
| [Google Docs Append Plain Text](google/docs.md#google-docs-append-plain-text) | Append plain text to the end of a Google Doc (no formatting applied) |
| [Google Docs Create](google/docs.md#google-docs-create) | Create a new Google Doc |
| [Google Docs Delete Content](google/docs.md#google-docs-delete-content) | Delete a range of content from a Google Doc |
| [Google Docs Export](google/docs.md#google-docs-export) | Export a Google Doc to PDF, Word, text, or other formats |
| [Google Docs Find Replace Plain Text](google/docs.md#google-docs-find-replace-plain-text) | Find and replace plain text in a Google Doc (no formatting applied to replacement) |
| [Google Docs Format Text](google/docs.md#google-docs-format-text) | Apply formatting (bold, italic, color, etc.) to text in a Google Doc |
| [Google Docs Get Metadata](google/docs.md#google-docs-get-metadata) | Get metadata about a Google Doc |
| [Google Docs Get Structure](google/docs.md#google-docs-get-structure) | Get document structure with index positions for precise editing operations |
| [Google Docs Insert Markdown At](google/docs.md#google-docs-insert-markdown-at) | Insert formatted Markdown at a specific position in a Google Doc - ideal for LLM/AI output |
| [Google Docs Insert Page Break](google/docs.md#google-docs-insert-page-break) | Insert a page break into a Google Doc |
| [Google Docs Insert Plain Text](google/docs.md#google-docs-insert-plain-text) | Insert plain text at a specific position in a Google Doc (no formatting applied) |
| [Google Docs Insert Table](google/docs.md#google-docs-insert-table) | Insert a table into a Google Doc, optionally with content and Markdown formatting |
| [Google Docs Read](google/docs.md#google-docs-read) | Read text content from a Google Doc |
| [Google Docs Replace All With Markdown](google/docs.md#google-docs-replace-all-with-markdown) | Replace entire Google Doc content with formatted Markdown - ideal for LLM/AI output |
| [Google Docs Replace Content With Markdown](google/docs.md#google-docs-replace-content-with-markdown) | Find text and replace it with formatted Markdown - ideal for LLM/AI output and templates |
| [Google Docs Replace Range With Markdown](google/docs.md#google-docs-replace-range-with-markdown) | Replace a specific index range in a Google Doc with formatted Markdown - ideal for LLM/AI output |
| [Google Docs Set Public Access](google/docs.md#google-docs-set-public-access) | Make a Google Doc public or private |
| [Google Docs Share](google/docs.md#google-docs-share) | Share a Google Doc with specific users |
| [Google Sheets Add Column](google/sheets.md#google-sheets-add-column) | Add a new column with a header |
| [Google Sheets Add Dropdown](google/sheets.md#google-sheets-add-dropdown) | Add a dropdown list (data validation) to cells |
| [Google Sheets Add Note](google/sheets.md#google-sheets-add-note) | Add a note to a cell in a Google Sheet |
| [Google Sheets Append Row](google/sheets.md#google-sheets-append-row) | Append or Add a single row to the end of a Google Sheet |
| [Google Sheets Batch Operations](google/sheets.md#google-sheets-batch-operations) | This block performs multiple operations on a Google Sheets spreadsheet in a single batch request |
| [Google Sheets Clear](google/sheets.md#google-sheets-clear) | This block clears data from a specified range in a Google Sheets spreadsheet |
| [Google Sheets Copy To Spreadsheet](google/sheets.md#google-sheets-copy-to-spreadsheet) | Copy a sheet from one spreadsheet to another |
| [Google Sheets Create Named Range](google/sheets.md#google-sheets-create-named-range) | Create a named range to reference cells by name instead of A1 notation |
| [Google Sheets Create Spreadsheet](google/sheets.md#google-sheets-create-spreadsheet) | This block creates a new Google Sheets spreadsheet with specified sheets |
| [Google Sheets Delete Column](google/sheets.md#google-sheets-delete-column) | Delete a column by header name or column letter |
| [Google Sheets Delete Rows](google/sheets.md#google-sheets-delete-rows) | Delete specific rows from a Google Sheet by their row indices |
| [Google Sheets Export Csv](google/sheets.md#google-sheets-export-csv) | Export a Google Sheet as CSV data |
| [Google Sheets Filter Rows](google/sheets.md#google-sheets-filter-rows) | Filter rows in a Google Sheet based on a column condition |
| [Google Sheets Find](google/sheets.md#google-sheets-find) | Find text in a Google Sheets spreadsheet |
| [Google Sheets Find Replace](google/sheets.md#google-sheets-find-replace) | This block finds and replaces text in a Google Sheets spreadsheet |
| [Google Sheets Format](google/sheets.md#google-sheets-format) | Format a range in a Google Sheet (sheet optional) |
| [Google Sheets Get Column](google/sheets.md#google-sheets-get-column) | Extract all values from a specific column |
| [Google Sheets Get Notes](google/sheets.md#google-sheets-get-notes) | Get notes from cells in a Google Sheet |
| [Google Sheets Get Row](google/sheets.md#google-sheets-get-row) | Get a specific row by its index |
| [Google Sheets Get Row Count](google/sheets.md#google-sheets-get-row-count) | Get row count and dimensions of a Google Sheet |
| [Google Sheets Get Unique Values](google/sheets.md#google-sheets-get-unique-values) | Get unique values from a column |
| [Google Sheets Import Csv](google/sheets.md#google-sheets-import-csv) | Import CSV data into a Google Sheet |
| [Google Sheets Insert Row](google/sheets.md#google-sheets-insert-row) | Insert a single row at a specific position |
| [Google Sheets List Named Ranges](google/sheets.md#google-sheets-list-named-ranges) | List all named ranges in a spreadsheet |
| [Google Sheets Lookup Row](google/sheets.md#google-sheets-lookup-row) | Look up a row by finding a value in a specific column |
| [Google Sheets Manage Sheet](google/sheets.md#google-sheets-manage-sheet) | Create, delete, or copy sheets (sheet optional) |
| [Google Sheets Metadata](google/sheets.md#google-sheets-metadata) | This block retrieves metadata about a Google Sheets spreadsheet including sheet names and properties |
| [Google Sheets Protect Range](google/sheets.md#google-sheets-protect-range) | Protect a cell range or entire sheet from editing |
| [Google Sheets Read](google/sheets.md#google-sheets-read) | A block that reads data from a Google Sheets spreadsheet using A1 notation range selection |
| [Google Sheets Remove Duplicates](google/sheets.md#google-sheets-remove-duplicates) | Remove duplicate rows based on specified columns |
| [Google Sheets Set Public Access](google/sheets.md#google-sheets-set-public-access) | Make a Google Spreadsheet public or private |
| [Google Sheets Share Spreadsheet](google/sheets.md#google-sheets-share-spreadsheet) | Share a Google Spreadsheet with users or get shareable link |
| [Google Sheets Sort](google/sheets.md#google-sheets-sort) | Sort a Google Sheet by one or two columns |
| [Google Sheets Update Cell](google/sheets.md#google-sheets-update-cell) | Update a single cell in a Google Sheets spreadsheet |
| [Google Sheets Update Row](google/sheets.md#google-sheets-update-row) | Update a specific row by its index |
| [Google Sheets Write](google/sheets.md#google-sheets-write) | A block that writes data to a Google Sheets spreadsheet at a specified A1 notation range |
| [Keyword Suggestion Extractor](dataforseo/keyword_suggestions.md#keyword-suggestion-extractor) | Extract individual fields from a KeywordSuggestion object |
| [Persist Information](data.md#persist-information) | Persist key-value information for the current user |
| [Read Spreadsheet](data.md#read-spreadsheet) | Reads CSV and Excel files and outputs the data as a list of dictionaries and individual rows |
| [Related Keyword Extractor](dataforseo/related_keywords.md#related-keyword-extractor) | Extract individual fields from a RelatedKeyword object |
| [Retrieve Information](data.md#retrieve-information) | Retrieve key-value information for the current user |
| [Screenshot Web Page](data.md#screenshot-web-page) | Takes a screenshot of a specified website using ScreenshotOne API |
## Text Processing
| Block Name | Description |
|------------|-------------|
| [Code Extraction](text.md#code-extraction) | Extracts code blocks from text and identifies their programming languages |
| [Combine Texts](text.md#combine-texts) | This block combines multiple input texts into a single output text |
| [Countdown Timer](text.md#countdown-timer) | This block triggers after a specified duration |
| [Extract Text Information](text.md#extract-text-information) | This block extracts text from the given input using a regex pattern |
| [Fill Text Template](text.md#fill-text-template) | This block formats the given texts using the format template |
| [Get Current Date](text.md#get-current-date) | This block outputs the current date with an optional offset |
| [Get Current Date And Time](text.md#get-current-date-and-time) | This block outputs the current date and time |
| [Get Current Time](text.md#get-current-time) | This block outputs the current time |
| [Match Text Pattern](text.md#match-text-pattern) | Matches text against a regex pattern and forwards data to positive or negative output based on the match |
| [Text Decoder](text.md#text-decoder) | Decodes a string containing escape sequences into actual text |
| [Text Replace](text.md#text-replace) | This block replaces occurrences of a text with new text |
| [Text Split](text.md#text-split) | This block splits a text into a list of strings |
| [Word Character Count](text.md#word-character-count) | Counts the number of words and characters in a given text |
## AI and Language Models
| Block Name | Description |
|------------|-------------|
| [AI Ad Maker Video Creator](llm.md#ai-ad-maker-video-creator) | Creates an AI-generated 30-second advert (text + images) |
| [AI Condition](llm.md#ai-condition) | Uses AI to evaluate natural language conditions and provide conditional outputs |
| [AI Conversation](llm.md#ai-conversation) | A block that facilitates multi-turn conversations with a Large Language Model (LLM), maintaining context across message exchanges |
| [AI Image Customizer](llm.md#ai-image-customizer) | Generate and edit custom images using Google's Nano-Banana model from Gemini 2 |
| [AI Image Editor](llm.md#ai-image-editor) | Edit images using BlackForest Labs' Flux Kontext models |
| [AI Image Generator](llm.md#ai-image-generator) | Generate images using various AI models through a unified interface |
| [AI List Generator](llm.md#ai-list-generator) | A block that creates lists of items based on prompts using a Large Language Model (LLM), with optional source data for context |
| [AI Music Generator](llm.md#ai-music-generator) | This block generates music using Meta's MusicGen model on Replicate |
| [AI Screenshot To Video Ad](llm.md#ai-screenshot-to-video-ad) | Turns a screenshot into an engaging, avatar-narrated video advert |
| [AI Shortform Video Creator](llm.md#ai-shortform-video-creator) | Creates a shortform video using revid |
| [AI Structured Response Generator](llm.md#ai-structured-response-generator) | A block that generates structured JSON responses using a Large Language Model (LLM), with schema validation and format enforcement |
| [AI Text Generator](llm.md#ai-text-generator) | A block that produces text responses using a Large Language Model (LLM) based on customizable prompts and system instructions |
| [AI Text Summarizer](llm.md#ai-text-summarizer) | A block that summarizes long texts using a Large Language Model (LLM), with configurable focus topics and summary styles |
| [AI Video Generator](fal/ai_video_generator.md#ai-video-generator) | Generate videos using FAL AI models |
| [Bannerbear Text Overlay](bannerbear/text_overlay.md#bannerbear-text-overlay) | Add text overlay to images using Bannerbear templates |
| [Code Generation](llm.md#code-generation) | Generate or refactor code using OpenAI's Codex (Responses API) |
| [Create Talking Avatar Video](llm.md#create-talking-avatar-video) | This block integrates with D-ID to create video clips and retrieve their URLs |
| [Exa Answer](exa/answers.md#exa-answer) | Get an LLM answer to a question informed by Exa search results |
| [Exa Create Enrichment](exa/websets_enrichment.md#exa-create-enrichment) | Create enrichments to extract additional structured data from webset items |
| [Exa Create Research](exa/research.md#exa-create-research) | Create research task with optional waiting - explores web and synthesizes findings with citations |
| [Ideogram Model](llm.md#ideogram-model) | This block runs Ideogram models with both simple and advanced settings |
| [Jina Chunking](jina/chunking.md#jina-chunking) | Chunks texts using Jina AI's segmentation service |
| [Jina Embedding](jina/embeddings.md#jina-embedding) | Generates embeddings using Jina AI |
| [Perplexity](llm.md#perplexity) | Query Perplexity's sonar models with real-time web search capabilities and receive annotated responses with source citations |
| [Replicate Flux Advanced Model](replicate/flux_advanced.md#replicate-flux-advanced-model) | This block runs Flux models on Replicate with advanced settings |
| [Replicate Model](replicate/replicate_block.md#replicate-model) | Run Replicate models synchronously |
| [Smart Decision Maker](llm.md#smart-decision-maker) | Uses AI to intelligently decide what tool to use |
| [Stagehand Act](stagehand/blocks.md#stagehand-act) | Interact with a web page by performing actions on a web page |
| [Stagehand Extract](stagehand/blocks.md#stagehand-extract) | Extract structured data from a webpage |
| [Stagehand Observe](stagehand/blocks.md#stagehand-observe) | Find suggested actions for your workflows |
| [Unreal Text To Speech](llm.md#unreal-text-to-speech) | Converts text to speech using the Unreal Speech API |
## Search and Information Retrieval
| Block Name | Description |
|------------|-------------|
| [Ask Wolfram](wolfram/llm_api.md#ask-wolfram) | Ask Wolfram Alpha a question |
| [Exa Bulk Webset Items](exa/websets_items.md#exa-bulk-webset-items) | Get all items from a webset in bulk (with configurable limits) |
| [Exa Cancel Enrichment](exa/websets_enrichment.md#exa-cancel-enrichment) | Cancel a running enrichment operation |
| [Exa Cancel Webset](exa/websets.md#exa-cancel-webset) | Cancel all operations being performed on a Webset |
| [Exa Cancel Webset Search](exa/websets_search.md#exa-cancel-webset-search) | Cancel a running webset search |
| [Exa Contents](exa/contents.md#exa-contents) | Retrieves document contents using Exa's contents API |
| [Exa Create Monitor](exa/websets_monitor.md#exa-create-monitor) | Create automated monitors to keep websets updated with fresh data on a schedule |
| [Exa Create Or Find Webset](exa/websets.md#exa-create-or-find-webset) | Create a new webset or return existing one by external_id (idempotent operation) |
| [Exa Create Webset](exa/websets.md#exa-create-webset) | Create a new Exa Webset for persistent web search collections with optional waiting for initial results |
| [Exa Create Webset Search](exa/websets_search.md#exa-create-webset-search) | Add a new search to an existing webset to find more items |
| [Exa Delete Enrichment](exa/websets_enrichment.md#exa-delete-enrichment) | Delete an enrichment from a webset |
| [Exa Delete Monitor](exa/websets_monitor.md#exa-delete-monitor) | Delete a monitor from a webset |
| [Exa Delete Webset](exa/websets.md#exa-delete-webset) | Delete a Webset and all its items |
| [Exa Delete Webset Item](exa/websets_items.md#exa-delete-webset-item) | Delete a specific item from a webset |
| [Exa Find Or Create Search](exa/websets_search.md#exa-find-or-create-search) | Find existing search by query or create new - prevents duplicate searches in workflows |
| [Exa Find Similar](exa/similar.md#exa-find-similar) | Finds similar links using Exa's findSimilar API |
| [Exa Get Enrichment](exa/websets_enrichment.md#exa-get-enrichment) | Get the status and details of a webset enrichment |
| [Exa Get Monitor](exa/websets_monitor.md#exa-get-monitor) | Get the details and status of a webset monitor |
| [Exa Get Research](exa/research.md#exa-get-research) | Get status and results of a research task |
| [Exa Get Webset](exa/websets.md#exa-get-webset) | Retrieve a Webset by ID or external ID |
| [Exa Get Webset Item](exa/websets_items.md#exa-get-webset-item) | Get a specific item from a webset by its ID |
| [Exa Get Webset Search](exa/websets_search.md#exa-get-webset-search) | Get the status and details of a webset search |
| [Exa List Monitors](exa/websets_monitor.md#exa-list-monitors) | List all monitors with optional webset filtering |
| [Exa List Research](exa/research.md#exa-list-research) | List all research tasks with pagination support |
| [Exa List Webset Items](exa/websets_items.md#exa-list-webset-items) | List items in a webset with pagination support |
| [Exa List Websets](exa/websets.md#exa-list-websets) | List all Websets with pagination support |
| [Exa Preview Webset](exa/websets.md#exa-preview-webset) | Preview how a search query will be interpreted before creating a webset |
| [Exa Search](exa/search.md#exa-search) | Searches the web using Exa's advanced search API |
| [Exa Update Enrichment](exa/websets_enrichment.md#exa-update-enrichment) | Update an existing enrichment configuration |
| [Exa Update Monitor](exa/websets_monitor.md#exa-update-monitor) | Update a monitor's status, schedule, or metadata |
| [Exa Update Webset](exa/websets.md#exa-update-webset) | Update metadata for an existing Webset |
| [Exa Wait For Enrichment](exa/websets_polling.md#exa-wait-for-enrichment) | Wait for a webset enrichment to complete with progress tracking |
| [Exa Wait For Research](exa/research.md#exa-wait-for-research) | Wait for a research task to complete with configurable timeout |
| [Exa Wait For Search](exa/websets_polling.md#exa-wait-for-search) | Wait for a specific webset search to complete with progress tracking |
| [Exa Wait For Webset](exa/websets_polling.md#exa-wait-for-webset) | Wait for a webset to reach a specific status with progress tracking |
| [Exa Webset Items Summary](exa/websets_items.md#exa-webset-items-summary) | Get a summary of webset items without retrieving all data |
| [Exa Webset Status](exa/websets.md#exa-webset-status) | Get a quick status overview of a webset |
| [Exa Webset Summary](exa/websets.md#exa-webset-summary) | Get a comprehensive summary of a webset with samples and statistics |
| [Extract Website Content](jina/search.md#extract-website-content) | This block scrapes the content from the given web URL |
| [Fact Checker](jina/fact_checker.md#fact-checker) | This block checks the factuality of a given statement using Jina AI's Grounding API |
| [Firecrawl Crawl](firecrawl/crawl.md#firecrawl-crawl) | Firecrawl crawls websites to extract comprehensive data while bypassing blockers |
| [Firecrawl Extract](firecrawl/extract.md#firecrawl-extract) | Firecrawl crawls websites to extract comprehensive data while bypassing blockers |
| [Firecrawl Map Website](firecrawl/map.md#firecrawl-map-website) | Firecrawl maps a website to extract all the links |
| [Firecrawl Scrape](firecrawl/scrape.md#firecrawl-scrape) | Firecrawl scrapes a website to extract comprehensive data while bypassing blockers |
| [Firecrawl Search](firecrawl/search.md#firecrawl-search) | Firecrawl searches the web for the given query |
| [Get Person Detail](apollo/person.md#get-person-detail) | Get detailed person data with Apollo API, including email reveal |
| [Get Wikipedia Summary](search.md#get-wikipedia-summary) | This block fetches the summary of a given topic from Wikipedia |
| [Google Maps Search](search.md#google-maps-search) | This block searches for local businesses using Google Maps API |
| [Search Organizations](apollo/organization.md#search-organizations) | Search for organizations in Apollo |
| [Search People](apollo/people.md#search-people) | Search for people in Apollo |
| [Search The Web](jina/search.md#search-the-web) | This block searches the internet for the given search query |
| [Validate Emails](zerobounce/validate_emails.md#validate-emails) | Validate emails |
## Social Media and Content
| Block Name | Description |
|------------|-------------|
| [Create Discord Thread](discord/bot_blocks.md#create-discord-thread) | Creates a new thread in a Discord channel |
| [Create Reddit Post](misc.md#create-reddit-post) | Create a new post on a subreddit |
| [Delete Reddit Comment](misc.md#delete-reddit-comment) | Delete a Reddit comment that you own |
| [Delete Reddit Post](misc.md#delete-reddit-post) | Delete a Reddit post that you own |
| [Discord Channel Info](discord/bot_blocks.md#discord-channel-info) | Resolves Discord channel names to IDs and vice versa |
| [Discord Get Current User](discord/oauth_blocks.md#discord-get-current-user) | Gets information about the currently authenticated Discord user using OAuth2 credentials |
| [Discord User Info](discord/bot_blocks.md#discord-user-info) | Gets information about a Discord user by their ID |
| [Edit Reddit Post](misc.md#edit-reddit-post) | Edit the body text of an existing Reddit post that you own |
| [Get Linkedin Profile](enrichlayer/linkedin.md#get-linkedin-profile) | Fetch LinkedIn profile data using Enrichlayer |
| [Get Linkedin Profile Picture](enrichlayer/linkedin.md#get-linkedin-profile-picture) | Get LinkedIn profile pictures using Enrichlayer |
| [Get Reddit Comment](misc.md#get-reddit-comment) | Get details about a specific Reddit comment by its ID |
| [Get Reddit Comment Replies](misc.md#get-reddit-comment-replies) | Get replies to a specific Reddit comment |
| [Get Reddit Inbox](misc.md#get-reddit-inbox) | Get messages, mentions, and comment replies from your Reddit inbox |
| [Get Reddit Post](misc.md#get-reddit-post) | Get detailed information about a specific Reddit post by its ID |
| [Get Reddit Post Comments](misc.md#get-reddit-post-comments) | Get top-level comments on a Reddit post |
| [Get Reddit Posts](misc.md#get-reddit-posts) | This block fetches Reddit posts from a defined subreddit name |
| [Get Reddit User Info](misc.md#get-reddit-user-info) | Get information about a Reddit user including karma, account age, and verification status |
| [Get Subreddit Flairs](misc.md#get-subreddit-flairs) | Get available link flair options for a subreddit |
| [Get Subreddit Info](misc.md#get-subreddit-info) | Get information about a subreddit including subscriber count, description, and rules |
| [Get Subreddit Rules](misc.md#get-subreddit-rules) | Get the rules for a subreddit to ensure compliance before posting |
| [Get User Posts](misc.md#get-user-posts) | Fetch posts by a specific Reddit user |
| [Linkedin Person Lookup](enrichlayer/linkedin.md#linkedin-person-lookup) | Look up LinkedIn profiles by person information using Enrichlayer |
| [Linkedin Role Lookup](enrichlayer/linkedin.md#linkedin-role-lookup) | Look up LinkedIn profiles by role in a company using Enrichlayer |
| [Post Reddit Comment](misc.md#post-reddit-comment) | This block posts a Reddit comment on a specified Reddit post |
| [Post To Bluesky](ayrshare/post_to_bluesky.md#post-to-bluesky) | Post to Bluesky using Ayrshare |
| [Post To Facebook](ayrshare/post_to_facebook.md#post-to-facebook) | Post to Facebook using Ayrshare |
| [Post To GMB](ayrshare/post_to_gmb.md#post-to-gmb) | Post to Google My Business using Ayrshare |
| [Post To Instagram](ayrshare/post_to_instagram.md#post-to-instagram) | Post to Instagram using Ayrshare |
| [Post To Linked In](ayrshare/post_to_linkedin.md#post-to-linked-in) | Post to LinkedIn using Ayrshare |
| [Post To Pinterest](ayrshare/post_to_pinterest.md#post-to-pinterest) | Post to Pinterest using Ayrshare |
| [Post To Reddit](ayrshare/post_to_reddit.md#post-to-reddit) | Post to Reddit using Ayrshare |
| [Post To Snapchat](ayrshare/post_to_snapchat.md#post-to-snapchat) | Post to Snapchat using Ayrshare |
| [Post To Telegram](ayrshare/post_to_telegram.md#post-to-telegram) | Post to Telegram using Ayrshare |
| [Post To Threads](ayrshare/post_to_threads.md#post-to-threads) | Post to Threads using Ayrshare |
| [Post To Tik Tok](ayrshare/post_to_tiktok.md#post-to-tik-tok) | Post to TikTok using Ayrshare |
| [Post To X](ayrshare/post_to_x.md#post-to-x) | Post to X / Twitter using Ayrshare |
| [Post To You Tube](ayrshare/post_to_youtube.md#post-to-you-tube) | Post to YouTube using Ayrshare |
| [Publish To Medium](misc.md#publish-to-medium) | Publishes a post to Medium |
| [Read Discord Messages](discord/bot_blocks.md#read-discord-messages) | Reads messages from a Discord channel using a bot token |
| [Reddit Get My Posts](misc.md#reddit-get-my-posts) | Fetch posts created by the authenticated Reddit user (you) |
| [Reply To Discord Message](discord/bot_blocks.md#reply-to-discord-message) | Replies to a specific Discord message |
| [Reply To Reddit Comment](misc.md#reply-to-reddit-comment) | Reply to a specific Reddit comment |
| [Search Reddit](misc.md#search-reddit) | Search Reddit for posts matching a query |
| [Send Discord DM](discord/bot_blocks.md#send-discord-dm) | Sends a direct message to a Discord user using their user ID |
| [Send Discord Embed](discord/bot_blocks.md#send-discord-embed) | Sends a rich embed message to a Discord channel |
| [Send Discord File](discord/bot_blocks.md#send-discord-file) | Sends a file attachment to a Discord channel |
| [Send Discord Message](discord/bot_blocks.md#send-discord-message) | Sends a message to a Discord channel using a bot token |
| [Send Reddit Message](misc.md#send-reddit-message) | Send a private message (DM) to a Reddit user |
| [Transcribe Youtube Video](misc.md#transcribe-youtube-video) | Transcribes a YouTube video using a proxy |
| [Twitter Add List Member](twitter/list_members.md#twitter-add-list-member) | This block adds a specified user to a Twitter List owned by the authenticated user |
| [Twitter Bookmark Tweet](twitter/bookmark.md#twitter-bookmark-tweet) | This block bookmarks a tweet on Twitter |
| [Twitter Create List](twitter/manage_lists.md#twitter-create-list) | This block creates a new Twitter List for the authenticated user |
| [Twitter Delete List](twitter/manage_lists.md#twitter-delete-list) | This block deletes a specified Twitter List owned by the authenticated user |
| [Twitter Delete Tweet](twitter/manage.md#twitter-delete-tweet) | This block deletes a tweet on Twitter |
| [Twitter Follow List](twitter/list_follows.md#twitter-follow-list) | This block follows a specified Twitter list for the authenticated user |
| [Twitter Follow User](twitter/follows.md#twitter-follow-user) | This block follows a specified Twitter user |
| [Twitter Get Blocked Users](twitter/blocks.md#twitter-get-blocked-users) | This block retrieves a list of users blocked by the authenticating user |
| [Twitter Get Bookmarked Tweets](twitter/bookmark.md#twitter-get-bookmarked-tweets) | This block retrieves bookmarked tweets from Twitter |
| [Twitter Get Followers](twitter/follows.md#twitter-get-followers) | This block retrieves followers of a specified Twitter user |
| [Twitter Get Following](twitter/follows.md#twitter-get-following) | This block retrieves the users that a specified Twitter user is following |
| [Twitter Get Home Timeline](twitter/timeline.md#twitter-get-home-timeline) | This block retrieves the authenticated user's home timeline |
| [Twitter Get Liked Tweets](twitter/like.md#twitter-get-liked-tweets) | This block gets information about tweets liked by a user |
| [Twitter Get Liking Users](twitter/like.md#twitter-get-liking-users) | This block gets information about users who liked a tweet |
| [Twitter Get List](twitter/list_lookup.md#twitter-get-list) | This block retrieves information about a specified Twitter List |
| [Twitter Get List Members](twitter/list_members.md#twitter-get-list-members) | This block retrieves the members of a specified Twitter List |
| [Twitter Get List Memberships](twitter/list_members.md#twitter-get-list-memberships) | This block retrieves all Lists that a specified user is a member of |
| [Twitter Get List Tweets](twitter/list_tweets_lookup.md#twitter-get-list-tweets) | This block retrieves tweets from a specified Twitter list |
| [Twitter Get Muted Users](twitter/mutes.md#twitter-get-muted-users) | This block gets a list of users muted by the authenticating user |
| [Twitter Get Owned Lists](twitter/list_lookup.md#twitter-get-owned-lists) | This block retrieves all Lists owned by a specified Twitter user |
| [Twitter Get Pinned Lists](twitter/pinned_lists.md#twitter-get-pinned-lists) | This block returns the Lists pinned by the authenticated user |
| [Twitter Get Quote Tweets](twitter/quote.md#twitter-get-quote-tweets) | This block gets quote tweets for a specific tweet |
| [Twitter Get Retweeters](twitter/retweet.md#twitter-get-retweeters) | This block gets information about who has retweeted a tweet |
| [Twitter Get Space Buyers](twitter/spaces_lookup.md#twitter-get-space-buyers) | This block retrieves a list of users who purchased tickets to a Twitter Space |
| [Twitter Get Space By Id](twitter/spaces_lookup.md#twitter-get-space-by-id) | This block retrieves information about a single Twitter Space |
| [Twitter Get Space Tweets](twitter/spaces_lookup.md#twitter-get-space-tweets) | This block retrieves tweets shared in a Twitter Space |
| [Twitter Get Spaces](twitter/spaces_lookup.md#twitter-get-spaces) | This block retrieves information about multiple Twitter Spaces |
| [Twitter Get Tweet](twitter/tweet_lookup.md#twitter-get-tweet) | This block retrieves information about a specific Tweet |
| [Twitter Get Tweets](twitter/tweet_lookup.md#twitter-get-tweets) | This block retrieves information about multiple Tweets |
| [Twitter Get User](twitter/user_lookup.md#twitter-get-user) | This block retrieves information about a specified Twitter user |
| [Twitter Get User Mentions](twitter/timeline.md#twitter-get-user-mentions) | This block retrieves Tweets mentioning a specific user |
| [Twitter Get User Tweets](twitter/timeline.md#twitter-get-user-tweets) | This block retrieves Tweets composed by a single user |
| [Twitter Get Users](twitter/user_lookup.md#twitter-get-users) | This block retrieves information about multiple Twitter users |
| [Twitter Hide Reply](twitter/hide.md#twitter-hide-reply) | This block hides a reply to a tweet |
| [Twitter Like Tweet](twitter/like.md#twitter-like-tweet) | This block likes a tweet |
| [Twitter Mute User](twitter/mutes.md#twitter-mute-user) | This block mutes a specified Twitter user |
| [Twitter Pin List](twitter/pinned_lists.md#twitter-pin-list) | This block allows the authenticated user to pin a specified List |
| [Twitter Post Tweet](twitter/manage.md#twitter-post-tweet) | This block posts a tweet on Twitter |
| [Twitter Remove Bookmark Tweet](twitter/bookmark.md#twitter-remove-bookmark-tweet) | This block removes a bookmark from a tweet on Twitter |
| [Twitter Remove List Member](twitter/list_members.md#twitter-remove-list-member) | This block removes a specified user from a Twitter List owned by the authenticated user |
| [Twitter Remove Retweet](twitter/retweet.md#twitter-remove-retweet) | This block removes a retweet on Twitter |
| [Twitter Retweet](twitter/retweet.md#twitter-retweet) | This block retweets a tweet on Twitter |
| [Twitter Search Recent Tweets](twitter/manage.md#twitter-search-recent-tweets) | This block searches all public Tweets in Twitter history |
| [Twitter Search Spaces](twitter/search_spaces.md#twitter-search-spaces) | This block searches for Twitter Spaces based on specified terms |
| [Twitter Unfollow List](twitter/list_follows.md#twitter-unfollow-list) | This block unfollows a specified Twitter list for the authenticated user |
| [Twitter Unfollow User](twitter/follows.md#twitter-unfollow-user) | This block unfollows a specified Twitter user |
| [Twitter Unhide Reply](twitter/hide.md#twitter-unhide-reply) | This block unhides a reply to a tweet |
| [Twitter Unlike Tweet](twitter/like.md#twitter-unlike-tweet) | This block unlikes a tweet |
| [Twitter Unmute User](twitter/mutes.md#twitter-unmute-user) | This block unmutes a specified Twitter user |
| [Twitter Unpin List](twitter/pinned_lists.md#twitter-unpin-list) | This block allows the authenticated user to unpin a specified List |
| [Twitter Update List](twitter/manage_lists.md#twitter-update-list) | This block updates a specified Twitter List owned by the authenticated user |
## Communication
| Block Name | Description |
|------------|-------------|
| [Baas Bot Join Meeting](baas/bots.md#baas-bot-join-meeting) | Deploy a bot to join and record a meeting |
| [Baas Bot Leave Meeting](baas/bots.md#baas-bot-leave-meeting) | Remove a bot from an ongoing meeting |
| [Gmail Add Label](google/gmail.md#gmail-add-label) | A block that adds a label to a specific email message in Gmail, creating the label if it doesn't exist |
| [Gmail Create Draft](google/gmail.md#gmail-create-draft) | Create draft emails in Gmail with automatic HTML detection and proper text formatting |
| [Gmail Draft Reply](google/gmail.md#gmail-draft-reply) | Create draft replies to Gmail threads with automatic HTML detection and proper text formatting |
| [Gmail Forward](google/gmail.md#gmail-forward) | Forward Gmail messages to other recipients with automatic HTML detection and proper formatting |
| [Gmail Get Profile](google/gmail.md#gmail-get-profile) | Get the authenticated user's Gmail profile details including email address and message statistics |
| [Gmail Get Thread](google/gmail.md#gmail-get-thread) | A block that retrieves an entire Gmail thread (email conversation) by ID, returning all messages with decoded bodies for reading complete conversations |
| [Gmail List Labels](google/gmail.md#gmail-list-labels) | A block that retrieves all labels (categories) from a Gmail account for organizing and categorizing emails |
| [Gmail Read](google/gmail.md#gmail-read) | A block that retrieves and reads emails from a Gmail account based on search criteria, returning detailed message information including subject, sender, body, and attachments |
| [Gmail Remove Label](google/gmail.md#gmail-remove-label) | A block that removes a label from a specific email message in a Gmail account |
| [Gmail Reply](google/gmail.md#gmail-reply) | Reply to Gmail threads with automatic HTML detection and proper text formatting |
| [Gmail Send](google/gmail.md#gmail-send) | Send emails via Gmail with automatic HTML detection and proper text formatting |
| [Hub Spot Engagement](hubspot/engagement.md#hub-spot-engagement) | Manages HubSpot engagements - sends emails and tracks engagement metrics |
## Developer Tools
| Block Name | Description |
|------------|-------------|
| [Exa Code Context](exa/code_context.md#exa-code-context) | Search billions of GitHub repos, docs, and Stack Overflow for relevant code examples |
| [Execute Code](misc.md#execute-code) | Executes code in a sandbox environment with internet access |
| [Execute Code Step](misc.md#execute-code-step) | Execute code in a previously instantiated sandbox |
| [Github Add Label](github/issues.md#github-add-label) | A block that adds a label to a GitHub issue or pull request for categorization and organization |
| [Github Assign Issue](github/issues.md#github-assign-issue) | A block that assigns a GitHub user to an issue for task ownership and tracking |
| [Github Assign PR Reviewer](github/pull_requests.md#github-assign-pr-reviewer) | This block assigns a reviewer to a specified GitHub pull request |
| [Github Comment](github/issues.md#github-comment) | A block that posts comments on GitHub issues or pull requests using the GitHub API |
| [Github Create Check Run](github/checks.md#github-create-check-run) | Creates a new check run for a specific commit in a GitHub repository |
| [Github Create Comment Object](github/reviews.md#github-create-comment-object) | Creates a comment object for use with GitHub blocks |
| [Github Create File](github/repo.md#github-create-file) | This block creates a new file in a GitHub repository |
| [Github Create PR Review](github/reviews.md#github-create-pr-review) | This block creates a review on a GitHub pull request with optional inline comments |
| [Github Create Repository](github/repo.md#github-create-repository) | This block creates a new GitHub repository |
| [Github Create Status](github/statuses.md#github-create-status) | Creates a new commit status in a GitHub repository |
| [Github Delete Branch](github/repo.md#github-delete-branch) | This block deletes a specified branch |
| [Github Discussion Trigger](github/triggers.md#github-discussion-trigger) | This block triggers on GitHub Discussions events |
| [Github Get CI Results](github/ci.md#github-get-ci-results) | This block gets CI results for a commit or PR, with optional search for specific errors/warnings in logs |
| [Github Get PR Review Comments](github/reviews.md#github-get-pr-review-comments) | This block gets all review comments from a GitHub pull request or from a specific review |
| [Github Issues Trigger](github/triggers.md#github-issues-trigger) | This block triggers on GitHub issues events |
| [Github List Branches](github/repo.md#github-list-branches) | This block lists all branches for a specified GitHub repository |
| [Github List Comments](github/issues.md#github-list-comments) | A block that retrieves all comments from a GitHub issue or pull request, including comment metadata and content |
| [Github List Discussions](github/repo.md#github-list-discussions) | This block lists recent discussions for a specified GitHub repository |
| [Github List Issues](github/issues.md#github-list-issues) | A block that retrieves a list of issues from a GitHub repository with their titles and URLs |
| [Github List PR Reviewers](github/pull_requests.md#github-list-pr-reviewers) | This block lists all reviewers for a specified GitHub pull request |
| [Github List PR Reviews](github/reviews.md#github-list-pr-reviews) | This block lists all reviews for a specified GitHub pull request |
| [Github List Pull Requests](github/pull_requests.md#github-list-pull-requests) | This block lists all pull requests for a specified GitHub repository |
| [Github List Releases](github/repo.md#github-list-releases) | This block lists all releases for a specified GitHub repository |
| [Github List Stargazers](github/repo.md#github-list-stargazers) | This block lists all users who have starred a specified GitHub repository |
| [Github List Tags](github/repo.md#github-list-tags) | This block lists all tags for a specified GitHub repository |
| [Github Make Branch](github/repo.md#github-make-branch) | This block creates a new branch from a specified source branch |
| [Github Make Issue](github/issues.md#github-make-issue) | A block that creates new issues on GitHub repositories with a title and body content |
| [Github Make Pull Request](github/pull_requests.md#github-make-pull-request) | This block creates a new pull request on a specified GitHub repository |
| [Github Pull Request Trigger](github/triggers.md#github-pull-request-trigger) | This block triggers on pull request events and outputs the event type and payload |
| [Github Read File](github/repo.md#github-read-file) | This block reads the content of a specified file from a GitHub repository |
| [Github Read Folder](github/repo.md#github-read-folder) | This block reads the content of a specified folder from a GitHub repository |
| [Github Read Issue](github/issues.md#github-read-issue) | A block that retrieves information about a specific GitHub issue, including its title, body content, and creator |
| [Github Read Pull Request](github/pull_requests.md#github-read-pull-request) | This block reads the body, title, user, and changes of a specified GitHub pull request |
| [Github Release Trigger](github/triggers.md#github-release-trigger) | This block triggers on GitHub release events |
| [Github Remove Label](github/issues.md#github-remove-label) | A block that removes a label from a GitHub issue or pull request |
| [Github Resolve Review Discussion](github/reviews.md#github-resolve-review-discussion) | This block resolves or unresolves a review discussion thread on a GitHub pull request |
| [Github Star Trigger](github/triggers.md#github-star-trigger) | This block triggers on GitHub star events |
| [Github Submit Pending Review](github/reviews.md#github-submit-pending-review) | This block submits a pending (draft) review on a GitHub pull request |
| [Github Unassign Issue](github/issues.md#github-unassign-issue) | A block that removes a user's assignment from a GitHub issue |
| [Github Unassign PR Reviewer](github/pull_requests.md#github-unassign-pr-reviewer) | This block unassigns a reviewer from a specified GitHub pull request |
| [Github Update Check Run](github/checks.md#github-update-check-run) | Updates an existing check run in a GitHub repository |
| [Github Update Comment](github/issues.md#github-update-comment) | A block that updates an existing comment on a GitHub issue or pull request |
| [Github Update File](github/repo.md#github-update-file) | This block updates an existing file in a GitHub repository |
| [Instantiate Code Sandbox](misc.md#instantiate-code-sandbox) | Instantiate a sandbox environment with internet access in which you can execute code with the Execute Code Step block |
| [Slant3D Order Webhook](slant3d/webhook.md#slant3d-order-webhook) | This block triggers on Slant3D order status updates and outputs the event details, including tracking information when orders are shipped |
## Media Generation
| Block Name | Description |
|------------|-------------|
| [Add Audio To Video](multimedia.md#add-audio-to-video) | Block to attach an audio file to a video file using moviepy |
| [Loop Video](multimedia.md#loop-video) | Block to loop a video to a given duration or number of repeats |
| [Media Duration](multimedia.md#media-duration) | Block to get the duration of a media file |
## Productivity
| Block Name | Description |
|------------|-------------|
| [Google Calendar Create Event](google/calendar.md#google-calendar-create-event) | This block creates a new event in Google Calendar with customizable parameters |
| [Notion Create Page](notion/create_page.md#notion-create-page) | Create a new page in Notion |
| [Notion Read Database](notion/read_database.md#notion-read-database) | Query a Notion database with optional filtering and sorting, returning structured entries |
| [Notion Read Page](notion/read_page.md#notion-read-page) | Read a Notion page by its ID and return its raw JSON |
| [Notion Read Page Markdown](notion/read_page_markdown.md#notion-read-page-markdown) | Read a Notion page and convert it to Markdown format with proper formatting for headings, lists, links, and rich text |
| [Notion Search](notion/search.md#notion-search) | Search your Notion workspace for pages and databases by text query |
| [Todoist Close Task](todoist/tasks.md#todoist-close-task) | Closes a task in Todoist |
| [Todoist Create Comment](todoist/comments.md#todoist-create-comment) | Creates a new comment on a Todoist task or project |
| [Todoist Create Label](todoist/labels.md#todoist-create-label) | Creates a new label in Todoist; fails if a label with the same name already exists |
| [Todoist Create Project](todoist/projects.md#todoist-create-project) | Creates a new project in Todoist |
| [Todoist Create Task](todoist/tasks.md#todoist-create-task) | Creates a new task in a Todoist project |
| [Todoist Delete Comment](todoist/comments.md#todoist-delete-comment) | Deletes a Todoist comment |
| [Todoist Delete Label](todoist/labels.md#todoist-delete-label) | Deletes a personal label in Todoist |
| [Todoist Delete Project](todoist/projects.md#todoist-delete-project) | Deletes a Todoist project and all its contents |
| [Todoist Delete Section](todoist/sections.md#todoist-delete-section) | Deletes a section and all its tasks from Todoist |
| [Todoist Delete Task](todoist/tasks.md#todoist-delete-task) | Deletes a task in Todoist |
| [Todoist Get Comment](todoist/comments.md#todoist-get-comment) | Get a single comment from Todoist |
| [Todoist Get Comments](todoist/comments.md#todoist-get-comments) | Get all comments for a Todoist task or project |
| [Todoist Get Label](todoist/labels.md#todoist-get-label) | Gets a personal label from Todoist by ID |
| [Todoist Get Project](todoist/projects.md#todoist-get-project) | Gets details for a specific Todoist project |
| [Todoist Get Section](todoist/sections.md#todoist-get-section) | Gets a single section by ID from Todoist |
| [Todoist Get Shared Labels](todoist/labels.md#todoist-get-shared-labels) | Gets all shared labels from Todoist |
| [Todoist Get Task](todoist/tasks.md#todoist-get-task) | Get an active task from Todoist |
| [Todoist Get Tasks](todoist/tasks.md#todoist-get-tasks) | Get active tasks from Todoist |
| [Todoist List Collaborators](todoist/projects.md#todoist-list-collaborators) | Gets all collaborators for a specific Todoist project |
| [Todoist List Labels](todoist/labels.md#todoist-list-labels) | Gets all personal labels from Todoist |
| [Todoist List Projects](todoist/projects.md#todoist-list-projects) | Gets all projects and their details from Todoist |
| [Todoist List Sections](todoist/sections.md#todoist-list-sections) | Gets all sections and their details from Todoist |
| [Todoist Remove Shared Labels](todoist/labels.md#todoist-remove-shared-labels) | Removes all instances of a shared label |
| [Todoist Rename Shared Labels](todoist/labels.md#todoist-rename-shared-labels) | Renames all instances of a shared label |
| [Todoist Reopen Task](todoist/tasks.md#todoist-reopen-task) | Reopens a task in Todoist |
| [Todoist Update Comment](todoist/comments.md#todoist-update-comment) | Updates a Todoist comment |
| [Todoist Update Label](todoist/labels.md#todoist-update-label) | Updates a personal label in Todoist |
| [Todoist Update Project](todoist/projects.md#todoist-update-project) | Updates an existing project in Todoist |
| [Todoist Update Task](todoist/tasks.md#todoist-update-task) | Updates an existing task in Todoist |
## Logic and Control Flow
| Block Name | Description |
|------------|-------------|
| [Calculator](logic.md#calculator) | Performs a mathematical operation on two numbers |
| [Condition](logic.md#condition) | Handles conditional logic based on comparison operators |
| [Count Items](logic.md#count-items) | Counts the number of items in a collection |
| [Data Sampling](logic.md#data-sampling) | This block samples data from a given dataset using various sampling methods |
| [Exa Webset Ready Check](exa/websets.md#exa-webset-ready-check) | Check if webset is ready for next operation - enables conditional workflow branching |
| [If Input Matches](logic.md#if-input-matches) | Handles conditional logic based on comparison operators |
| [Pinecone Init](logic.md#pinecone-init) | Initializes a Pinecone index |
| [Pinecone Insert](logic.md#pinecone-insert) | Upload data to a Pinecone index |
| [Pinecone Query](logic.md#pinecone-query) | Queries a Pinecone index |
| [Step Through Items](logic.md#step-through-items) | Iterates over a list or dictionary and outputs each item |
## Input/Output
| Block Name | Description |
|------------|-------------|
| [Exa Webset Webhook](exa/webhook_blocks.md#exa-webset-webhook) | Receive webhook notifications for Exa webset events |
| [Generic Webhook Trigger](generic_webhook/triggers.md#generic-webhook-trigger) | This block outputs the contents of the generic webhook input |
| [Read RSS Feed](misc.md#read-rss-feed) | Reads RSS feed entries from a given URL |
| [Send Authenticated Web Request](misc.md#send-authenticated-web-request) | Make an authenticated HTTP request with host-scoped credentials (JSON / form / multipart) |
| [Send Email](misc.md#send-email) | This block sends an email using the provided SMTP credentials |
| [Send Web Request](misc.md#send-web-request) | Make an HTTP request (JSON / form / multipart) |
## Agent Integration
| Block Name | Description |
|------------|-------------|
| [Agent Executor](misc.md#agent-executor) | Executes an existing agent inside your agent |
## CRM Services
| Block Name | Description |
|------------|-------------|
| [Add Lead To Campaign](smartlead/campaign.md#add-lead-to-campaign) | Add a lead to a campaign in SmartLead |
| [Create Campaign](smartlead/campaign.md#create-campaign) | Create a campaign in SmartLead |
| [Hub Spot Company](hubspot/company.md#hub-spot-company) | Manages HubSpot companies - create, update, and retrieve company information |
| [Hub Spot Contact](hubspot/contact.md#hub-spot-contact) | Manages HubSpot contacts - create, update, and retrieve contact information |
| [Save Campaign Sequences](smartlead/campaign.md#save-campaign-sequences) | Save sequences within a campaign |
## AI Safety
| Block Name | Description |
|------------|-------------|
| [Nvidia Deepfake Detect](nvidia/deepfake.md#nvidia-deepfake-detect) | Detects potential deepfakes in images using Nvidia's AI API |
## Issue Tracking
| Block Name | Description |
|------------|-------------|
| [Linear Create Comment](linear/comment.md#linear-create-comment) | Creates a new comment on a Linear issue |
| [Linear Create Issue](linear/issues.md#linear-create-issue) | Creates a new issue on Linear |
| [Linear Get Project Issues](linear/issues.md#linear-get-project-issues) | Gets issues from a Linear project filtered by status and assignee |
| [Linear Search Projects](linear/projects.md#linear-search-projects) | Searches for projects on Linear |
## Hardware
| Block Name | Description |
|------------|-------------|
| [Compass AI Trigger](compass/triggers.md#compass-ai-trigger) | This block outputs the contents of the Compass transcription |

# Airtable Bases
<!-- MANUAL: file_description -->
Blocks for creating and managing Airtable bases, which are the top-level containers for tables, records, and data in Airtable.
<!-- END MANUAL -->
## Airtable Create Base
### What it is
Create or find a base in Airtable
### How it works
<!-- MANUAL: how_it_works -->
This block creates a new Airtable base in a specified workspace, or finds an existing one with the same name. When creating, you can optionally define initial tables and their fields to set up the schema.
Enable find_existing to search for a base with the same name before creating a new one, preventing duplicates in your workspace.
<!-- END MANUAL -->
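For reference, a minimal Python sketch of the direct Airtable Web API call this block presumably wraps (assuming the `requests` library and a personal access token; `API_TOKEN` and `WORKSPACE_ID` are illustrative placeholders, not block inputs):

```python
import requests

API_TOKEN = "pat..."      # hypothetical personal access token
WORKSPACE_ID = "wspXXXX"  # hypothetical workspace ID

# Airtable's create-base endpoint requires at least one table
# with at least one field, matching the `tables` input below.
resp = requests.post(
    "https://api.airtable.com/v0/meta/bases",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json={
        "name": "Project Tracker",
        "workspaceId": WORKSPACE_ID,
        "tables": [
            {"name": "Tasks", "fields": [{"name": "Title", "type": "singleLineText"}]}
        ],
    },
)
resp.raise_for_status()
print(resp.json()["id"])  # ID of the new base, e.g. "appXXXX"
```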
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| workspace_id | The workspace ID where the base will be created | str | Yes |
| name | The name of the new base | str | Yes |
| find_existing | If true, return existing base with same name instead of creating duplicate | bool | No |
| tables | Array of table objects to create in the base; each table should have 'name' and 'fields' properties. Airtable requires at least one table with at least one field | List[Dict[str, Any]] | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| base_id | The ID of the created or found base | str |
| tables | Array of table objects | List[Dict[str, Any]] |
| table | A single table object | Dict[str, Any] |
| was_created | True if a new base was created, False if existing was found | bool |
### Possible use case
<!-- MANUAL: use_case -->
**Project Setup**: Automatically create new bases when projects start with predefined table structures.
**Template Deployment**: Deploy standardized base templates across teams or clients.
**Multi-Tenant Apps**: Create separate bases for each customer or project programmatically.
<!-- END MANUAL -->
---
## Airtable List Bases
### What it is
List all bases in Airtable
### How it works
<!-- MANUAL: how_it_works -->
This block retrieves a list of all Airtable bases accessible to your connected account. It returns basic information about each base including ID, name, and permission level.
Results are paginated; use the offset output to retrieve additional pages if there are more bases than returned in a single call.
<!-- END MANUAL -->
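As an illustration, the pagination loop that the block's offset input/output supports can be sketched in Python against Airtable's meta API (assuming `requests`; `API_TOKEN` is a placeholder):

```python
import requests

API_TOKEN = "pat..."  # hypothetical personal access token

bases, offset = [], None
while True:
    resp = requests.get(
        "https://api.airtable.com/v0/meta/bases",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        params={"offset": offset} if offset else None,
    )
    resp.raise_for_status()
    data = resp.json()
    bases.extend(data["bases"])
    offset = data.get("offset")  # absent once the last page is reached
    if not offset:
        break

print([b["name"] for b in bases])
```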
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| trigger | Trigger the block to run - value is ignored | str | No |
| offset | Pagination offset from previous request | str | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| bases | Array of base objects | List[Dict[str, Any]] |
| offset | Offset for next page (null if no more bases) | str |
### Possible use case
<!-- MANUAL: use_case -->
**Base Discovery**: Find available bases for building dynamic dropdowns or navigation.
**Inventory Management**: List all bases in an organization for auditing or documentation.
**Cross-Base Operations**: Enumerate bases to perform operations across multiple databases.
<!-- END MANUAL -->
---

# Airtable Records
<!-- MANUAL: file_description -->
Blocks for creating, reading, updating, and deleting records in Airtable tables.
<!-- END MANUAL -->
## Airtable Create Records
### What it is
Create records in an Airtable table
### How it works
<!-- MANUAL: how_it_works -->
This block creates new records in an Airtable table using the Airtable API. Each record is specified with a fields object containing field names and values. You can create up to 10 records in a single call.
Enable typecast to automatically convert string values to appropriate field types (dates, numbers, etc.). The block returns the created records with their assigned IDs.
<!-- END MANUAL -->
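A minimal Python sketch of the underlying Airtable call (assuming `requests`; `API_TOKEN`, `BASE_ID`, and `TABLE` are placeholders):

```python
import requests

API_TOKEN = "pat..."  # hypothetical placeholders
BASE_ID = "appXXXX"
TABLE = "tblXXXX"     # table ID or name

resp = requests.post(
    f"https://api.airtable.com/v0/{BASE_ID}/{TABLE}",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json={
        # Up to 10 records per request; each entry wraps its values in "fields".
        "records": [{"fields": {"Name": "Ada Lovelace", "Age": "36"}}],
        "typecast": True,  # lets Airtable coerce "36" into a number field
    },
)
resp.raise_for_status()
print([r["id"] for r in resp.json()["records"]])  # IDs assigned by Airtable
```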
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| base_id | The Airtable base ID | str | Yes |
| table_id_or_name | Table ID or name | str | Yes |
| records | Array of records to create (each with 'fields' object) | List[Dict[str, Any]] | Yes |
| skip_normalization | Skip output normalization to get raw Airtable response (faster but may have missing fields) | bool | No |
| typecast | Automatically convert string values to appropriate types | bool | No |
| return_fields_by_field_id | Return fields by field ID | bool | No |
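For illustration, a minimal `records` payload might look like this (field names are hypothetical and must match your table's schema):
```python
# Up to 10 records per call; each entry wraps its values in a "fields" object.
records = [
    {"fields": {"Name": "Acme Corp", "Status": "Active"}},
    {"fields": {"Name": "Globex", "Status": "Prospect"}},
]
```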
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| records | Array of created record objects | List[Dict[str, Any]] |
| details | Details of the created records | Dict[str, Any] |
### Possible use case
<!-- MANUAL: use_case -->
**Data Import**: Bulk import data from external sources into Airtable tables.
**Form Submissions**: Create records from form submissions or API integrations.
**Workflow Output**: Save workflow results or processed data to Airtable for tracking.
<!-- END MANUAL -->
---
## Airtable Delete Records
### What it is
Delete records from an Airtable table
### How it works
<!-- MANUAL: how_it_works -->
This block deletes records from an Airtable table by their record IDs. You can delete up to 10 records in a single call. The operation is permanent and cannot be undone.
Provide an array of record IDs to delete. Using the table ID instead of the name is recommended for reliability.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| base_id | The Airtable base ID | str | Yes |
| table_id_or_name | Table ID or name - It's better to use the table ID instead of the name | str | Yes |
| record_ids | Array of up to 10 record IDs to delete | List[str] | Yes |
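Since each call accepts at most 10 IDs, a larger deletion is typically chunked upstream; a minimal sketch (IDs are hypothetical):
```python
record_ids = ["recAAAAAAAAAAAAAA", "recBBBBBBBBBBBBBB"]  # hypothetical IDs

# One batch per block invocation, respecting the 10-record API limit.
batches = [record_ids[i:i + 10] for i in range(0, len(record_ids), 10)]
```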
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| records | Array of deletion results | List[Dict[str, Any]] |
### Possible use case
<!-- MANUAL: use_case -->
**Data Cleanup**: Remove outdated or duplicate records from tables.
**Workflow Cleanup**: Delete temporary records after processing is complete.
**Batch Removal**: Remove multiple records that match certain criteria.
<!-- END MANUAL -->
---
## Airtable Get Record
### What it is
Get a single record from Airtable
### How it works
<!-- MANUAL: how_it_works -->
This block retrieves a single record from an Airtable table by its ID. The record includes all field values and metadata like creation time. Enable normalize_output to ensure all fields are included with proper empty values.
Optionally include field metadata for type information and configuration details about each field.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| base_id | The Airtable base ID | str | Yes |
| table_id_or_name | Table ID or name | str | Yes |
| record_id | The record ID to retrieve | str | Yes |
| normalize_output | Normalize output to include all fields with proper empty values (disable to skip schema fetch and get raw Airtable response) | bool | No |
| include_field_metadata | Include field type and configuration metadata (requires normalize_output=true) | bool | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| id | The record ID | str |
| fields | The record fields | Dict[str, Any] |
| created_time | The record created time | str |
| field_metadata | Field type and configuration metadata (only when include_field_metadata=true) | Dict[str, Any] |
### Possible use case
<!-- MANUAL: use_case -->
**Detail View**: Fetch complete record data for display or detailed processing.
**Record Lookup**: Retrieve specific records by ID from webhook payloads or references.
**Data Validation**: Check record contents before performing updates or related operations.
<!-- END MANUAL -->
---
## Airtable List Records
### What it is
List records from an Airtable table
### How it works
<!-- MANUAL: how_it_works -->
This block queries records from an Airtable table with optional filtering, sorting, and pagination. Use Airtable formulas to filter records and specify sort order by field and direction.
Results can be limited, paginated with offsets, and restricted to specific fields. Enable normalize_output for consistent field values across records.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| base_id | The Airtable base ID | str | Yes |
| table_id_or_name | Table ID or name | str | Yes |
| filter_formula | Airtable formula to filter records | str | No |
| view | View ID or name to use | str | No |
| sort | Sort configuration (array of {field, direction}) | List[Dict[str, Any]] | No |
| max_records | Maximum number of records to return | int | No |
| page_size | Number of records per page (max 100) | int | No |
| offset | Pagination offset from previous request | str | No |
| return_fields | Specific fields to return (comma-separated) | List[str] | No |
| normalize_output | Normalize output to include all fields with proper empty values (disable to skip schema fetch and get raw Airtable response) | bool | No |
| include_field_metadata | Include field type and configuration metadata (requires normalize_output=true) | bool | No |
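A sketch of a filtered, sorted query; the {Status} and Completed At fields are hypothetical, and the formula uses Airtable's standard formula syntax:
```python
inputs = {
    "filter_formula": "{Status} = 'Done'",  # Airtable formula over a hypothetical field
    "sort": [{"field": "Completed At", "direction": "desc"}],
    "page_size": 100,    # API maximum per page
    "max_records": 250,  # stop after this many records overall
}
```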
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| records | Array of record objects | List[Dict[str, Any]] |
| offset | Offset for next page (null if no more records) | str |
| field_metadata | Field type and configuration metadata (only when include_field_metadata=true) | Dict[str, Any] |
### Possible use case
<!-- MANUAL: use_case -->
**Report Generation**: Query records with filters to build reports or dashboards.
**Data Export**: Fetch records matching criteria for export to other systems.
**Batch Processing**: List records to process in subsequent workflow steps.
<!-- END MANUAL -->
---
## Airtable Update Records
### What it is
Update records in an Airtable table
### How it works
<!-- MANUAL: how_it_works -->
This block updates existing records in an Airtable table. Each record update requires the record ID and a fields object with the values to update. Only specified fields are modified; other fields remain unchanged.
Enable typecast to automatically convert string values to appropriate types. You can update up to 10 records per call.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| base_id | The Airtable base ID | str | Yes |
| table_id_or_name | Table ID or name - It's better to use the table ID instead of the name | str | Yes |
| records | Array of records to update (each with 'id' and 'fields') | List[Dict[str, Any]] | Yes |
| typecast | Automatically convert string values to appropriate types | bool | No |
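Each update pairs a record ID with only the fields to change; a minimal sketch with hypothetical IDs and fields:
```python
records = [
    {"id": "recXXXXXXXXXXXXXX", "fields": {"Status": "Shipped"}},
    {"id": "recYYYYYYYYYYYYYY", "fields": {"Status": "Cancelled"}},
]  # up to 10 records per call; unlisted fields are left untouched
```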
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| records | Array of updated record objects | List[Dict[str, Any]] |
### Possible use case
<!-- MANUAL: use_case -->
**Status Updates**: Update record status fields as workflows progress.
**Data Enrichment**: Add computed or fetched data to existing records.
**Batch Modifications**: Update multiple records based on processed results.
<!-- END MANUAL -->
---

@@ -0,0 +1,202 @@
# Airtable Schema
<!-- MANUAL: file_description -->
Blocks for managing Airtable schema including tables, fields, and their configurations.
<!-- END MANUAL -->
## Airtable Create Field
### What it is
Add a new field to an Airtable table
### How it works
<!-- MANUAL: how_it_works -->
This block adds a new field to an existing Airtable table using the Airtable API. Specify the field type (text, email, URL, etc.), name, and optional description and configuration options.
The field is created immediately and becomes available for use in all records. Returns the created field object with its assigned ID.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| base_id | The Airtable base ID | str | Yes |
| table_id | The table ID to add field to | str | Yes |
| field_type | The type of the field to create | "singleLineText" \| "email" \| "url" \| "multilineText" \| "number" \| "percent" \| "currency" \| "singleSelect" \| "multipleSelects" \| "singleCollaborator" \| "multipleCollaborators" \| "multipleRecordLinks" \| "date" \| "dateTime" \| "phoneNumber" \| "multipleAttachments" \| "checkbox" \| "formula" \| "createdTime" \| "rollup" \| "count" \| "lookup" \| "multipleLookupValues" \| "autoNumber" \| "barcode" \| "rating" \| "richText" \| "duration" \| "lastModifiedTime" \| "button" \| "createdBy" \| "lastModifiedBy" \| "externalSyncSource" \| "aiText" | No |
| name | The name of the field to create | str | Yes |
| description | The description of the field to create | str | No |
| options | The options of the field to create | Dict[str, str] | No |
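As an example, a checkbox field whose options fit the Dict[str, str] shape (the icon/color values follow Airtable's checkbox field options; treat them as illustrative):
```python
inputs = {
    "field_type": "checkbox",
    "name": "Reviewed",
    "description": "Set once the record has been QA'd",
    "options": {"icon": "check", "color": "greenBright"},
}
```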
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| field | Created field object | Dict[str, Any] |
| field_id | ID of the created field | str |
### Possible use case
<!-- MANUAL: use_case -->
**Schema Evolution**: Add new fields to tables as application requirements grow.
**Dynamic Forms**: Create fields based on user configuration or form builder settings.
**Data Integration**: Add fields to capture data from newly integrated external systems.
<!-- END MANUAL -->
---
## Airtable Create Table
### What it is
Create a new table in an Airtable base
### How it works
<!-- MANUAL: how_it_works -->
This block creates a new table in an Airtable base with the specified name and optional field definitions. Each field definition includes name, type, and type-specific options.
The table is created with the defined schema and is immediately ready for use. Returns the created table object with its ID.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| base_id | The Airtable base ID | str | Yes |
| table_name | The name of the table to create | str | Yes |
| table_fields | Table fields with name, type, and options | List[Dict[str, Any]] | No |
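A minimal two-field table definition (names are hypothetical; the first field becomes the table's primary field):
```python
inputs = {
    "table_name": "Tasks",
    "table_fields": [
        {"name": "Title", "type": "singleLineText"},  # primary field
        {"name": "Done", "type": "checkbox",
         "options": {"icon": "check", "color": "greenBright"}},
    ],
}
```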
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| table | Created table object | Dict[str, Any] |
| table_id | ID of the created table | str |
### Possible use case
<!-- MANUAL: use_case -->
**Application Scaffolding**: Create tables programmatically when setting up new application modules.
**Multi-Tenant Setup**: Generate customer-specific tables dynamically.
**Feature Expansion**: Add new tables as features are enabled or installed.
<!-- END MANUAL -->
---
## Airtable List Schema
### What it is
Get the complete schema of an Airtable base
### How it works
<!-- MANUAL: how_it_works -->
This block retrieves the complete schema of an Airtable base, including all tables, their fields, field types, and views. This metadata is essential for building dynamic integrations that need to understand table structure.
The schema includes field configurations, validation rules, and relationship definitions between tables.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| base_id | The Airtable base ID | str | Yes |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| base_schema | Complete base schema with tables, fields, and views | Dict[str, Any] |
| tables | Array of table objects | List[Dict[str, Any]] |
### Possible use case
<!-- MANUAL: use_case -->
**Schema Discovery**: Understand table structure for building dynamic forms or queries.
**Documentation**: Generate documentation of database schema automatically.
**Migration Planning**: Analyze schema before migrating data to other systems.
<!-- END MANUAL -->
---
## Airtable Update Field
### What it is
Update field properties in an Airtable table
### How it works
<!-- MANUAL: how_it_works -->
This block updates properties of an existing field in an Airtable table. You can modify the field name and description. Note that field type cannot be changed after creation.
Changes take effect immediately across all records and views that use the field.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| base_id | The Airtable base ID | str | Yes |
| table_id | The table ID containing the field | str | Yes |
| field_id | The field ID to update | str | Yes |
| name | The name of the field to update | str | No |
| description | The description of the field to update | str | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| field | Updated field object | Dict[str, Any] |
### Possible use case
<!-- MANUAL: use_case -->
**Field Renaming**: Update field names to match evolving terminology or standards.
**Documentation Updates**: Add or update field descriptions for better team understanding.
**Schema Maintenance**: Keep field metadata current as application requirements change.
<!-- END MANUAL -->
---
## Airtable Update Table
### What it is
Update table properties
### How it works
<!-- MANUAL: how_it_works -->
This block updates table properties in an Airtable base. You can change the table name, description, and date dependency settings. Changes apply immediately and affect all users accessing the table.
This is useful for maintaining table metadata and organizing your base structure.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| base_id | The Airtable base ID | str | Yes |
| table_id | The table ID to update | str | Yes |
| table_name | The name of the table to update | str | No |
| table_description | The description of the table to update | str | No |
| date_dependency | The date dependency of the table to update | Dict[str, Any] | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| table | Updated table object | Dict[str, Any] |
### Possible use case
<!-- MANUAL: use_case -->
**Table Organization**: Rename tables to follow naming conventions or reflect current usage.
**Description Management**: Update table descriptions for documentation purposes.
**Configuration Updates**: Modify table settings like date dependencies as requirements change.
<!-- END MANUAL -->
---

@@ -0,0 +1,42 @@
# Airtable Triggers
<!-- MANUAL: file_description -->
Blocks for triggering workflows based on Airtable events like record creation, updates, and deletions.
<!-- END MANUAL -->
## Airtable Webhook Trigger
### What it is
Starts a flow whenever Airtable emits a webhook event
### How it works
<!-- MANUAL: how_it_works -->
This block subscribes to Airtable webhook events for a specific base and table. When records are created, updated, or deleted, Airtable sends a webhook notification that triggers your workflow.
You specify which events to listen for using the event selector. The webhook payload includes details about the changed records and the type of change that occurred.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| base_id | Airtable base ID | str | Yes |
| table_id_or_name | Airtable table ID or name | str | Yes |
| events | Airtable webhook event filter | AirtableEventSelector | Yes |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| payload | Airtable webhook payload | WebhookPayload |
### Possible use case
<!-- MANUAL: use_case -->
**Real-Time Sync**: Automatically sync Airtable changes to other systems like CRMs or databases.
**Notification Workflows**: Send alerts when specific records are created or modified in Airtable.
**Automated Processing**: Trigger document generation or emails when new entries are added to a table.
<!-- END MANUAL -->
---

@@ -0,0 +1,47 @@
# Apollo Organization
<!-- MANUAL: file_description -->
Blocks for searching and retrieving organization data from Apollo's B2B database.
<!-- END MANUAL -->
## Search Organizations
### What it is
Search for organizations in Apollo
### How it works
<!-- MANUAL: how_it_works -->
This block searches the Apollo database for organizations using various filters like employee count, location, and keywords. Apollo maintains a comprehensive database of company information for sales and marketing purposes.
Results can be filtered by headquarters location, excluded locations, industry keywords, and specific Apollo organization IDs.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| organization_num_employees_range | The number range of employees working for the company. This enables you to find companies based on headcount. You can add multiple ranges to expand your search results. Each range you add needs to be a string, with the upper and lower numbers of the range separated only by a comma. | List[int] | No |
| organization_locations | The location of the company headquarters. You can search across cities, US states, and countries. If a company has several office locations, results are still based on the headquarters location. For example, if you search chicago but a company's HQ is in boston, that Boston-based company will not appear in your search results, even if it matches other parameters. To exclude companies based on location, use the organizations_not_locations parameter. | List[str] | No |
| organizations_not_locations | Exclude companies from search results based on the location of the company headquarters. You can use cities, US states, and countries as locations to exclude. This parameter is useful for ensuring you do not prospect in an undesirable territory. For example, if you use ireland as a value, no Ireland-based companies will appear in your search results. | List[str] | No |
| q_organization_keyword_tags | Filter search results based on keywords associated with companies. For example, you can enter mining as a value to return only companies that have an association with the mining industry. | List[str] | No |
| q_organization_name | Filter search results to include a specific company name. If the value you enter for this parameter does not match with a company's name, the company will not appear in search results, even if it matches other parameters. Partial matches are accepted. For example, if you filter by the value marketing, a company called NY Marketing Unlimited would still be eligible as a search result, but NY Market Analysis would not be eligible. | str | No |
| organization_ids | The Apollo IDs for the companies you want to include in your search results. Each company in the Apollo database is assigned a unique ID. To find IDs, identify the values for organization_id when you call this endpoint. | List[str] | No |
| max_results | The maximum number of results to return. If you don't specify this parameter, the default is 100. | int | No |
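A sketch of a search payload following the field descriptions above; note that each headcount range is written as a single comma-separated string:
```python
inputs = {
    "organization_num_employees_range": ["1,10", "11,50"],  # two ranges widen the search
    "organization_locations": ["chicago"],
    "organizations_not_locations": ["ireland"],
    "q_organization_keyword_tags": ["mining"],
    "max_results": 100,
}
```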
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the search failed | str |
| organizations | List of organizations found | List[Dict[str, Any]] |
| organization | Each found organization, one at a time | Dict[str, Any] |
### Possible use case
<!-- MANUAL: use_case -->
**Market Research**: Find companies matching specific criteria for market analysis.
**Lead List Building**: Build targeted lists of companies for outbound sales campaigns.
**Competitive Intelligence**: Research competitors and similar companies in your market.
<!-- END MANUAL -->
---

@@ -0,0 +1,50 @@
# Apollo People
<!-- MANUAL: file_description -->
Blocks for searching people in Apollo's B2B contact database with various filters.
<!-- END MANUAL -->
## Search People
### What it is
Search for people in Apollo
### How it works
<!-- MANUAL: how_it_works -->
This block searches Apollo's database for people based on job titles, seniority, location, company, and other criteria. It's designed for finding prospects and contacts for sales and marketing.
Enable enrich_info to get detailed contact information including verified email addresses (costs more credits).
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| person_titles | Job titles held by the people you want to find. For a person to be included in search results, they only need to match 1 of the job titles you add. Adding more job titles expands your search results. Results also include job titles with the same terms, even if they are not exact matches. For example, searching for marketing manager might return people with the job title content marketing manager. Use this parameter in combination with the person_seniorities[] parameter to find people based on specific job functions and seniority levels. | List[str] | No |
| person_locations | The location where people live. You can search across cities, US states, and countries. To find people based on the headquarters locations of their current employer, use the organization_locations parameter. | List[str] | No |
| person_seniorities | The job seniority that people hold within their current employer. This enables you to find people that currently hold positions at certain reporting levels, such as Director level or senior IC level. For a person to be included in search results, they only need to match 1 of the seniorities you add. Adding more seniorities expands your search results. Searches only return results based on their current job title, so searching for Director-level employees only returns people that currently hold a Director-level title. If someone was previously a Director, but is currently a VP, they would not be included in your search results. Use this parameter in combination with the person_titles[] parameter to find people based on specific job functions and seniority levels. | List["owner" \| "founder" \| "c_suite" \| "partner" \| "vp" \| "head" \| "director" \| "manager" \| "senior" \| "entry" \| "intern"] | No |
| organization_locations | The location of the company headquarters for a person's current employer. You can search across cities, US states, and countries. If a company has several office locations, results are still based on the headquarters location. For example, if you search chicago but a company's HQ location is in boston, people that work for the Boston-based company will not appear in your results, even if they match other parameters. To find people based on their personal location, use the person_locations parameter. | List[str] | No |
| q_organization_domains | The domain name for the person's employer. This can be the current employer or a previous employer. Do not include www., the @ symbol, or similar. You can add multiple domains to search across companies. Examples: apollo.io and microsoft.com | List[str] | No |
| contact_email_statuses | The email statuses for the people you want to find. You can add multiple statuses to expand your search. | List["verified" \| "unverified" \| "likely_to_engage" \| "unavailable"] | No |
| organization_ids | The Apollo IDs for the companies (employers) you want to include in your search results. Each company in the Apollo database is assigned a unique ID. To find IDs, call the Organization Search endpoint and identify the values for organization_id. | List[str] | No |
| organization_num_employees_range | The number range of employees working for the company. This enables you to find companies based on headcount. You can add multiple ranges to expand your search results. Each range you add needs to be a string, with the upper and lower numbers of the range separated only by a comma. | List[int] | No |
| q_keywords | A string of words over which we want to filter the results | str | No |
| max_results | The maximum number of results to return. If you don't specify this parameter, the default is 25. Limited to 500 to prevent overspending. | int | No |
| enrich_info | Whether to enrich contacts with detailed information including real email addresses. This will double the search cost. | bool | No |
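A sketch combining titles with seniorities, as the descriptions above recommend (values are illustrative):
```python
inputs = {
    "person_titles": ["marketing manager"],
    "person_seniorities": ["director", "vp"],
    "q_organization_domains": ["apollo.io", "microsoft.com"],  # no www. or @
    "contact_email_statuses": ["verified"],
    "max_results": 25,
    "enrich_info": False,  # True reveals full contact details but doubles the cost
}
```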
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the search failed | str |
| people | List of people found | List[Dict[str, Any]] |
### Possible use case
<!-- MANUAL: use_case -->
**Prospecting**: Find decision-makers at target companies for outbound sales.
**Recruiting**: Search for candidates with specific titles and experience.
**ABM Campaigns**: Build contact lists at specific accounts for account-based marketing.
<!-- END MANUAL -->
---

@@ -0,0 +1,49 @@
# Apollo Person
<!-- MANUAL: file_description -->
Blocks for enriching individual person data including contact details and email discovery.
<!-- END MANUAL -->
## Get Person Detail
### What it is
Get detailed person data with Apollo API, including email reveal
### How it works
<!-- MANUAL: how_it_works -->
This block enriches person data using Apollo's API. You can look up by Apollo person ID for best accuracy, or match by name plus company information, LinkedIn URL, or email address.
Returns comprehensive contact details including email addresses (if available), job title, company information, and social profiles.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| person_id | Apollo person ID to enrich (most accurate method) | str | No |
| first_name | First name of the person to enrich | str | No |
| last_name | Last name of the person to enrich | str | No |
| name | Full name of the person to enrich (alternative to first_name + last_name) | str | No |
| email | Known email address of the person (helps with matching) | str | No |
| domain | Company domain of the person (e.g., 'google.com') | str | No |
| company | Company name of the person | str | No |
| linkedin_url | LinkedIn URL of the person | str | No |
| organization_id | Apollo organization ID of the person's company | str | No |
| title | Job title of the person to enrich | str | No |
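Either pass person_id alone, or combine a name with company signals for matching; a hypothetical example:
```python
inputs = {
    "name": "Jane Doe",        # hypothetical person
    "domain": "example.com",   # employer domain, without www. or @
    "title": "Head of Growth",
}
```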
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if enrichment failed | str |
| contact | Enriched contact information | Dict[str, Any] |
### Possible use case
<!-- MANUAL: use_case -->
**Contact Enrichment**: Get full contact details from partial information like name and company.
**Email Discovery**: Find verified email addresses for outreach campaigns.
**Profile Completion**: Fill in missing contact details in your CRM or database.
<!-- END MANUAL -->
---

@@ -0,0 +1,52 @@
# Ayrshare Post To Bluesky
<!-- MANUAL: file_description -->
Blocks for posting content to Bluesky using the Ayrshare social media management API.
<!-- END MANUAL -->
## Post To Bluesky
### What it is
Post to Bluesky using Ayrshare
### How it works
<!-- MANUAL: how_it_works -->
This block uses Ayrshare's social media API to publish content to Bluesky. It handles text posts (up to 300 characters), images (up to 4), and video content with support for scheduling, accessibility features like alt text, and link shortening.
The block authenticates through your Ayrshare credentials and sends the post data to Ayrshare's unified API, which then publishes to Bluesky. It returns post identifiers and status information upon completion, or error details if the operation fails.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| post | The post text to be published (max 300 characters for Bluesky) | str | No |
| media_urls | Optional list of media URLs to include. Bluesky supports up to 4 images or 1 video. | List[str] | No |
| is_video | Whether the media is a video | bool | No |
| schedule_date | UTC datetime for scheduling (YYYY-MM-DDThh:mm:ssZ) | str (date-time) | No |
| disable_comments | Whether to disable comments | bool | No |
| shorten_links | Whether to shorten links | bool | No |
| unsplash | Unsplash image configuration | str | No |
| requires_approval | Whether to enable approval workflow | bool | No |
| random_post | Whether to generate random post text | bool | No |
| random_media_url | Whether to generate random media | bool | No |
| notes | Additional notes for the post | str | No |
| alt_text | Alt text for each media item (accessibility) | List[str] | No |
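A scheduled two-image post with per-image alt text; URLs and dates are illustrative:
```python
inputs = {
    "post": "New photo set from the weekend build.",  # max 300 chars on Bluesky
    "media_urls": ["https://example.com/a.jpg", "https://example.com/b.jpg"],
    "alt_text": ["Workbench with parts laid out", "Finished assembly"],  # one per media item
    "schedule_date": "2026-02-01T15:30:00Z",  # UTC, YYYY-MM-DDThh:mm:ssZ
}
```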
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| post_result | The result of the post | PostResponse |
| post | The result of the post | PostIds |
### Possible use case
<!-- MANUAL: use_case -->
**Cross-Platform Publishing**: Automatically share content across Bluesky and other social networks from a single workflow.
**Scheduled Content Calendar**: Queue up posts with specific publishing times to maintain consistent presence on Bluesky.
**Visual Content Sharing**: Share image galleries with accessibility-friendly alt text for photo-focused content strategies.
<!-- END MANUAL -->
---

@@ -0,0 +1,68 @@
# Ayrshare Post To Facebook
<!-- MANUAL: file_description -->
Blocks for posting content to Facebook Pages using the Ayrshare social media management API.
<!-- END MANUAL -->
## Post To Facebook
### What it is
Post to Facebook using Ayrshare
### How it works
<!-- MANUAL: how_it_works -->
This block uses Ayrshare's social media API to publish content to Facebook Pages. It supports text posts, images, videos, carousels (2-10 items), Reels, and Stories, with features like audience targeting by age and country, location tagging, and scheduling.
The block authenticates through Ayrshare and leverages the Meta Graph API to handle various Facebook-specific formats. Advanced options include draft mode for Meta Business Suite, custom link previews, and video thumbnails. Results include post IDs for tracking engagement.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| post | The post text to be published | str | No |
| media_urls | Optional list of media URLs to include. Set is_video in advanced settings to true if you want to upload videos. | List[str] | No |
| is_video | Whether the media is a video | bool | No |
| schedule_date | UTC datetime for scheduling (YYYY-MM-DDThh:mm:ssZ) | str (date-time) | No |
| disable_comments | Whether to disable comments | bool | No |
| shorten_links | Whether to shorten links | bool | No |
| unsplash | Unsplash image configuration | str | No |
| requires_approval | Whether to enable approval workflow | bool | No |
| random_post | Whether to generate random post text | bool | No |
| random_media_url | Whether to generate random media | bool | No |
| notes | Additional notes for the post | str | No |
| is_carousel | Whether to post a carousel | bool | No |
| carousel_link | The URL for the 'See More At' button in the carousel | str | No |
| carousel_items | List of carousel items with name, link and picture URLs. Min 2, max 10 items. | List[CarouselItem] | No |
| is_reels | Whether to post to Facebook Reels | bool | No |
| reels_title | Title for the Reels video (max 255 chars) | str | No |
| reels_thumbnail | Thumbnail URL for Reels video (JPEG/PNG, <10MB) | str | No |
| is_story | Whether to post as a Facebook Story | bool | No |
| media_captions | Captions for each media item | List[str] | No |
| location_id | Facebook Page ID or name for location tagging | str | No |
| age_min | Minimum age for audience targeting (13,15,18,21,25) | int | No |
| target_countries | List of country codes to target (max 25) | List[str] | No |
| alt_text | Alt text for each media item | List[str] | No |
| video_title | Title for video post | str | No |
| video_thumbnail | Thumbnail URL for video post | str | No |
| is_draft | Save as draft in Meta Business Suite | bool | No |
| scheduled_publish_date | Schedule publish time in Meta Business Suite (UTC) | str | No |
| preview_link | URL for custom link preview | str | No |
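A sketch of a carousel post; the item keys follow the name/link/picture shape described above, and all URLs are placeholders:
```python
inputs = {
    "post": "Spring lineup is live.",
    "is_carousel": True,
    "carousel_link": "https://example.com/shop",  # 'See More At' button target
    "carousel_items": [  # min 2, max 10 items
        {"name": "Desk lamp", "link": "https://example.com/lamp",
         "picture": "https://example.com/lamp.jpg"},
        {"name": "Side table", "link": "https://example.com/table",
         "picture": "https://example.com/table.jpg"},
    ],
}
```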
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| post_result | The result of the post | PostResponse |
| post | The result of the post | PostIds |
### Possible use case
<!-- MANUAL: use_case -->
**Product Launches**: Create carousel posts showcasing multiple product images with links to purchase pages.
**Event Promotion**: Share event details with age-targeted reach and location tagging for local business events.
**Short-Form Video**: Automatically publish Reels with custom thumbnails to maximize video content reach.
<!-- END MANUAL -->
---

@@ -0,0 +1,64 @@
# Ayrshare Post To GMB
<!-- MANUAL: file_description -->
Blocks for posting content to Google My Business profiles using the Ayrshare social media management API.
<!-- END MANUAL -->
## Post To GMB
### What it is
Post to Google My Business using Ayrshare
### How it works
<!-- MANUAL: how_it_works -->
This block uses Ayrshare's API to publish content to Google My Business profiles. It supports standard posts, photo/video posts (categorized by type like exterior, interior, product), and special post types including events and promotional offers with coupon codes.
The block integrates with Google's Business Profile API through Ayrshare, enabling call-to-action buttons (book, order, shop, learn more, sign up, call), event scheduling with start/end dates, and promotional offers with terms and redemption URLs.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| post | The post text to be published | str | No |
| media_urls | Optional list of media URLs. GMB supports only one image or video per post. | List[str] | No |
| is_video | Whether the media is a video | bool | No |
| schedule_date | UTC datetime for scheduling (YYYY-MM-DDThh:mm:ssZ) | str (date-time) | No |
| disable_comments | Whether to disable comments | bool | No |
| shorten_links | Whether to shorten links | bool | No |
| unsplash | Unsplash image configuration | str | No |
| requires_approval | Whether to enable approval workflow | bool | No |
| random_post | Whether to generate random post text | bool | No |
| random_media_url | Whether to generate random media | bool | No |
| notes | Additional notes for the post | str | No |
| is_photo_video | Whether this is a photo/video post (appears in Photos section) | bool | No |
| photo_category | Category for photo/video: cover, profile, logo, exterior, interior, product, at_work, food_and_drink, menu, common_area, rooms, teams | str | No |
| call_to_action_type | Type of action button: 'book', 'order', 'shop', 'learn_more', 'sign_up', or 'call' | str | No |
| call_to_action_url | URL for the action button (not required for 'call' action) | str | No |
| event_title | Event title for event posts | str | No |
| event_start_date | Event start date in ISO format (e.g., '2024-03-15T09:00:00Z') | str | No |
| event_end_date | Event end date in ISO format (e.g., '2024-03-15T17:00:00Z') | str | No |
| offer_title | Offer title for promotional posts | str | No |
| offer_start_date | Offer start date in ISO format (e.g., '2024-03-15T00:00:00Z') | str | No |
| offer_end_date | Offer end date in ISO format (e.g., '2024-04-15T23:59:59Z') | str | No |
| offer_coupon_code | Coupon code for the offer (max 58 characters) | str | No |
| offer_redeem_online_url | URL where customers can redeem the offer online | str | No |
| offer_terms_conditions | Terms and conditions for the offer | str | No |
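A sketch of a promotional offer post using the documented date format (values are illustrative):
```python
inputs = {
    "post": "Spring tune-up special: 20% off through April 15.",
    "offer_title": "Spring Tune-Up",
    "offer_start_date": "2024-03-15T00:00:00Z",
    "offer_end_date": "2024-04-15T23:59:59Z",
    "offer_coupon_code": "SPRING20",  # max 58 characters
    "offer_redeem_online_url": "https://example.com/redeem",
    "offer_terms_conditions": "One redemption per customer.",
}
```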
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| post_result | The result of the post | PostResponse |
| post | The result of the post | PostIds |
### Possible use case
<!-- MANUAL: use_case -->
**Local Business Updates**: Post daily specials, new arrivals, or service announcements directly to your Google Business Profile.
**Promotional Campaigns**: Create time-limited offers with coupon codes and online redemption links to drive sales.
**Event Marketing**: Announce upcoming events with dates, descriptions, and call-to-action buttons for reservations.
<!-- END MANUAL -->
---

@@ -0,0 +1,61 @@
# Ayrshare Post To Instagram
<!-- MANUAL: file_description -->
Blocks for posting content to Instagram using the Ayrshare social media management API.
<!-- END MANUAL -->
## Post To Instagram
### What it is
Post to Instagram using Ayrshare. Requires a Business or Creator Instagram Account connected with a Facebook Page
### How it works
<!-- MANUAL: how_it_works -->
This block uses Ayrshare's API to publish content to Instagram Business or Creator accounts. It supports feed posts, Stories (24-hour expiration), Reels, and carousels (up to 10 images/videos), with features like collaborator invitations, location tagging, and user tags with coordinates.
The block requires an Instagram account connected to a Facebook Page and authenticates through Meta's Graph API via Ayrshare. Instagram-specific features include auto-resize for optimal dimensions, audio naming for Reels, and thumbnail customization with frame offset control.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| post | The post text (max 2,200 chars, up to 30 hashtags, 3 @mentions) | str | No |
| media_urls | Optional list of media URLs. Instagram supports up to 10 images/videos in a carousel. | List[str] | No |
| is_video | Whether the media is a video | bool | No |
| schedule_date | UTC datetime for scheduling (YYYY-MM-DDThh:mm:ssZ) | str (date-time) | No |
| disable_comments | Whether to disable comments | bool | No |
| shorten_links | Whether to shorten links | bool | No |
| unsplash | Unsplash image configuration | str | No |
| requires_approval | Whether to enable approval workflow | bool | No |
| random_post | Whether to generate random post text | bool | No |
| random_media_url | Whether to generate random media | bool | No |
| notes | Additional notes for the post | str | No |
| is_story | Whether to post as Instagram Story (24-hour expiration) | bool | No |
| share_reels_feed | Whether Reel should appear in both Feed and Reels tabs | bool | No |
| audio_name | Audio name for Reels (e.g., 'The Weeknd - Blinding Lights') | str | No |
| thumbnail | Thumbnail URL for Reel video | str | No |
| thumbnail_offset | Thumbnail frame offset in milliseconds (default: 0) | int | No |
| alt_text | Alt text for each media item (up to 1,000 chars each, accessibility feature); each item in the list corresponds to a media item in the media_urls list | List[str] | No |
| location_id | Facebook Page ID or name for location tagging (e.g., '7640348500' or '@guggenheimmuseum') | str | No |
| user_tags | List of users to tag with coordinates for images | List[Dict[str, Any]] | No |
| collaborators | Instagram usernames to invite as collaborators (max 3, public accounts only) | List[str] | No |
| auto_resize | Auto-resize images to 1080x1080px for Instagram | bool | No |
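A sketch of a tagged collaboration post; the user_tags shape (username plus normalized 0-1 image coordinates) is an assumption based on the description above:
```python
inputs = {
    "post": "Studio day. #behindthescenes",
    "media_urls": ["https://example.com/photo.jpg"],
    "user_tags": [
        {"username": "somecollaborator", "x": 0.5, "y": 0.4},  # assumed coordinate shape
    ],
    "collaborators": ["somecollaborator"],  # max 3, public accounts only
    "auto_resize": True,  # resize to 1080x1080px
}
```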
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| post_result | The result of the post | PostResponse |
| post | The result of the post | PostIds |
### Possible use case
<!-- MANUAL: use_case -->
**Influencer Collaborations**: Create posts with collaborator tags to feature brand partnerships across multiple accounts.
**E-commerce Product Showcases**: Share carousel posts of product images with location tags for local discovery.
**Reels Automation**: Automatically publish short-form video content with custom thumbnails and trending audio.
<!-- END MANUAL -->
---

@@ -0,0 +1,63 @@
# Ayrshare Post To LinkedIn
<!-- MANUAL: file_description -->
Blocks for posting content to LinkedIn using the Ayrshare social media management API.
<!-- END MANUAL -->
## Post To Linked In
### What it is
Post to LinkedIn using Ayrshare
### How it works
<!-- MANUAL: how_it_works -->
This block uses Ayrshare's social media API to post content to LinkedIn. It handles text posts, images, videos, and documents, with support for scheduling and audience targeting. The block authenticates through Ayrshare's API.
LinkedIn-specific features include visibility controls, comment management, and targeting by country, seniority, industry, and other demographics (requires 300+ followers in target audience).
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| post | The post text (max 3,000 chars, hashtags supported with #) | str | No |
| media_urls | Optional list of media URLs. LinkedIn supports up to 9 images, videos, or documents (PPT, PPTX, DOC, DOCX, PDF <100MB, <300 pages). | List[str] | No |
| is_video | Whether the media is a video | bool | No |
| schedule_date | UTC datetime for scheduling (YYYY-MM-DDThh:mm:ssZ) | str (date-time) | No |
| disable_comments | Whether to disable comments | bool | No |
| shorten_links | Whether to shorten links | bool | No |
| unsplash | Unsplash image configuration | str | No |
| requires_approval | Whether to enable approval workflow | bool | No |
| random_post | Whether to generate random post text | bool | No |
| random_media_url | Whether to generate random media | bool | No |
| notes | Additional notes for the post | str | No |
| visibility | Post visibility: 'public' (default), 'connections' (personal only), 'loggedin' | str | No |
| alt_text | Alt text for each image (accessibility feature, not supported for videos/documents) | List[str] | No |
| titles | Title/caption for each image or video | List[str] | No |
| document_title | Title for document posts (max 400 chars, uses filename if not specified) | str | No |
| thumbnail | Thumbnail URL for video (PNG/JPG, same dimensions as video, <10MB) | str | No |
| targeting_countries | Country codes for targeting (e.g., ['US', 'IN', 'DE', 'GB']). Requires 300+ followers in target audience. | List[str] | No |
| targeting_seniorities | Seniority levels for targeting (e.g., ['Senior', 'VP']). Requires 300+ followers in target audience. | List[str] | No |
| targeting_degrees | Education degrees for targeting. Requires 300+ followers in target audience. | List[str] | No |
| targeting_fields_of_study | Fields of study for targeting. Requires 300+ followers in target audience. | List[str] | No |
| targeting_industries | Industry categories for targeting. Requires 300+ followers in target audience. | List[str] | No |
| targeting_job_functions | Job function categories for targeting. Requires 300+ followers in target audience. | List[str] | No |
| targeting_staff_count_ranges | Company size ranges for targeting. Requires 300+ followers in target audience. | List[str] | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| post_result | The result of the post | PostResponse |
| post | The result of the post | PostIds |
### Possible use case
<!-- MANUAL: use_case -->
**Thought Leadership**: Automatically share blog posts or industry insights with professional network.
**Scheduled Content**: Queue up a week's worth of LinkedIn posts with scheduled publishing times.
**Targeted Announcements**: Share company updates targeted to specific industries or seniority levels.
<!-- END MANUAL -->
---

@@ -0,0 +1,58 @@
# Ayrshare Post To Pinterest
<!-- MANUAL: file_description -->
Blocks for posting pins to Pinterest using the Ayrshare social media management API.
<!-- END MANUAL -->
## Post To Pinterest
### What it is
Post to Pinterest using Ayrshare
### How it works
<!-- MANUAL: how_it_works -->
This block uses Ayrshare's API to publish pins to Pinterest boards. It supports image pins, video pins (with required thumbnails), and carousel pins (up to 5 images), with customizable titles, descriptions, destination links, and private notes.
The block connects to Pinterest's API through Ayrshare, allowing you to specify target boards, add alt text for accessibility, and configure per-image carousel options including individual titles, links, and descriptions. Pins can be scheduled for future publishing.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| post | Pin description (max 500 chars, links not clickable - use link field instead) | str | No |
| media_urls | Required image/video URLs. Pinterest requires at least one image. Videos need thumbnail. Up to 5 images for carousel. | List[str] | No |
| is_video | Whether the media is a video | bool | No |
| schedule_date | UTC datetime for scheduling (YYYY-MM-DDThh:mm:ssZ) | str (date-time) | No |
| disable_comments | Whether to disable comments | bool | No |
| shorten_links | Whether to shorten links | bool | No |
| unsplash | Unsplash image configuration | str | No |
| requires_approval | Whether to enable approval workflow | bool | No |
| random_post | Whether to generate random post text | bool | No |
| random_media_url | Whether to generate random media | bool | No |
| notes | Additional notes for the post | str | No |
| pin_title | Pin title displayed in 'Add your title' section (max 100 chars) | str | No |
| link | Clickable destination URL when users click the pin (max 2048 chars) | str | No |
| board_id | Pinterest Board ID to post to (from /user/details endpoint, uses default board if not specified) | str | No |
| note | Private note for the pin (only visible to you and board collaborators) | str | No |
| thumbnail | Required thumbnail URL for video pins (must have valid image Content-Type) | str | No |
| carousel_options | Options for each image in carousel (title, link, description per image) | List[PinterestCarouselOption] | No |
| alt_text | Alt text for each image/video (max 500 chars each, accessibility feature) | List[str] | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| post_result | The result of the post | PostResponse |
| post | The result of the post | PostIds |
### Possible use case
<!-- MANUAL: use_case -->
**Product Catalog Distribution**: Automatically pin product images with direct links to purchase pages organized by board category.
**Content Repurposing**: Convert blog posts and articles into visual pins with clickable destination URLs.
**Visual Inspiration Boards**: Create carousel pins showcasing design ideas, recipes, or tutorials with step-by-step images.
<!-- END MANUAL -->
---

@@ -0,0 +1,51 @@
# Ayrshare Post To Reddit
<!-- MANUAL: file_description -->
Blocks for posting content to Reddit using the Ayrshare social media management API.
<!-- END MANUAL -->
## Post To Reddit
### What it is
Post to Reddit using Ayrshare
### How it works
<!-- MANUAL: how_it_works -->
This block uses Ayrshare's API to publish content to Reddit. It supports text posts, image posts, and video submissions with optional scheduling and link shortening features.
The block authenticates through Ayrshare and submits content to your connected Reddit account. Common options include approval workflows for content review before publishing, random content generation, and Unsplash integration for sourcing images.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| post | The post text to be published | str | No |
| media_urls | Optional list of media URLs to include. Set is_video in advanced settings to true if you want to upload videos. | List[str] | No |
| is_video | Whether the media is a video | bool | No |
| schedule_date | UTC datetime for scheduling (YYYY-MM-DDThh:mm:ssZ) | str (date-time) | No |
| disable_comments | Whether to disable comments | bool | No |
| shorten_links | Whether to shorten links | bool | No |
| unsplash | Unsplash image configuration | str | No |
| requires_approval | Whether to enable approval workflow | bool | No |
| random_post | Whether to generate random post text | bool | No |
| random_media_url | Whether to generate random media | bool | No |
| notes | Additional notes for the post | str | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| post_result | The result of the post | PostResponse |
| post | The result of the post | PostIds |
### Possible use case
<!-- MANUAL: use_case -->
**Community Engagement**: Share relevant content to niche subreddits as part of community marketing strategies.
**Content Distribution**: Cross-post blog articles or announcements to relevant Reddit communities for broader reach.
**Brand Monitoring Response**: Automatically share updates or responses in communities where your brand is discussed.
<!-- END MANUAL -->
---

@@ -0,0 +1,53 @@
# Ayrshare Post To Snapchat
<!-- MANUAL: file_description -->
Blocks for posting video content to Snapchat using the Ayrshare social media management API.
<!-- END MANUAL -->
## Post To Snapchat
### What it is
Post to Snapchat using Ayrshare
### How it works
<!-- MANUAL: how_it_works -->
This block uses Ayrshare's API to publish video content to Snapchat. Snapchat only supports video content, with three destination options: Stories (24-hour ephemeral content), Saved Stories (persistent Stories), and Spotlight (public discovery feed).
The block authenticates through Ayrshare and uploads video content with optional custom thumbnails. Videos can be scheduled for future publishing and support approval workflows for content review before going live.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| post | The post text (optional for video-only content) | str | No |
| media_urls | Required video URL for Snapchat posts. Snapchat only supports video content. | List[str] | No |
| is_video | Whether the media is a video | bool | No |
| schedule_date | UTC datetime for scheduling (YYYY-MM-DDThh:mm:ssZ) | str (date-time) | No |
| disable_comments | Whether to disable comments | bool | No |
| shorten_links | Whether to shorten links | bool | No |
| unsplash | Unsplash image configuration | str | No |
| requires_approval | Whether to enable approval workflow | bool | No |
| random_post | Whether to generate random post text | bool | No |
| random_media_url | Whether to generate random media | bool | No |
| notes | Additional notes for the post | str | No |
| story_type | Type of Snapchat content: 'story' (24-hour Stories), 'saved_story' (Saved Stories), or 'spotlight' (Spotlight posts) | str | No |
| video_thumbnail | Thumbnail URL for video content (optional, auto-generated if not provided) | str | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| post_result | The result of the post | PostResponse |
| post | The result of the post | PostIds |
### Possible use case
<!-- MANUAL: use_case -->
**Ephemeral Marketing**: Share time-sensitive promotions or behind-the-scenes content that creates urgency through 24-hour Stories.
**Public Discovery**: Post engaging video content to Spotlight to reach new audiences beyond your followers.
**Scheduled Story Series**: Plan and schedule a sequence of video Stories for product launches or events.
<!-- END MANUAL -->
---

@@ -0,0 +1,51 @@
# Ayrshare Post To Telegram
<!-- MANUAL: file_description -->
Blocks for posting messages to Telegram channels using the Ayrshare social media management API.
<!-- END MANUAL -->
## Post To Telegram
### What it is
Post to Telegram using Ayrshare
### How it works
<!-- MANUAL: how_it_works -->
This block uses Ayrshare's API to publish messages to Telegram channels. It supports text messages, images, videos, and animated GIFs, with automatic link preview generation unless media is included.
The block authenticates through Ayrshare and sends content to your connected Telegram channel or bot. User mentions are supported via @handle syntax, and content can be scheduled for future delivery.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| post | The post text (empty string allowed). Use @handle to mention other Telegram users. | str | No |
| media_urls | Optional list of media URLs. For animated GIFs, only one URL is allowed. Telegram will auto-preview links unless image/video is included. | List[str] | No |
| is_video | Whether the media is a video. Set to true for animated GIFs that don't end in .gif/.GIF extension. | bool | No |
| schedule_date | UTC datetime for scheduling (YYYY-MM-DDThh:mm:ssZ) | str (date-time) | No |
| disable_comments | Whether to disable comments | bool | No |
| shorten_links | Whether to shorten links | bool | No |
| unsplash | Unsplash image configuration | str | No |
| requires_approval | Whether to enable approval workflow | bool | No |
| random_post | Whether to generate random post text | bool | No |
| random_media_url | Whether to generate random media | bool | No |
| notes | Additional notes for the post | str | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| post_result | The result of the post | PostResponse |
| post | The result of the post | PostIds |
### Possible use case
<!-- MANUAL: use_case -->
**Channel Broadcasting**: Automatically distribute announcements, updates, or news to Telegram channel subscribers.
**Alert Systems**: Send automated notifications with media attachments to monitoring or alert channels.
**Content Syndication**: Cross-post content from other platforms to Telegram communities for broader reach.
<!-- END MANUAL -->
---

@@ -0,0 +1,51 @@
# Ayrshare Post To Threads
<!-- MANUAL: file_description -->
Blocks for posting content to Threads using the Ayrshare social media management API.
<!-- END MANUAL -->
## Post To Threads
### What it is
Post to Threads using Ayrshare
### How it works
<!-- MANUAL: how_it_works -->
This block uses Ayrshare's API to publish content to Threads (Meta's text-based social platform). It supports text posts (up to 500 characters with one hashtag), images, videos, and carousels (up to 20 items), with automatic link previews when no media is attached.
The block authenticates through Meta's API via Ayrshare. Content can mention users via @handle syntax, be scheduled for future publishing, and include approval workflows for content review.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| post | The post text (max 500 chars, empty string allowed). Only 1 hashtag allowed. Use @handle to mention users. | str | No |
| media_urls | Optional list of media URLs. Supports up to 20 images/videos in a carousel. Auto-preview links unless media is included. | List[str] | No |
| is_video | Whether the media is a video | bool | No |
| schedule_date | UTC datetime for scheduling (YYYY-MM-DDThh:mm:ssZ) | str (date-time) | No |
| disable_comments | Whether to disable comments | bool | No |
| shorten_links | Whether to shorten links | bool | No |
| unsplash | Unsplash image configuration | str | No |
| requires_approval | Whether to enable approval workflow | bool | No |
| random_post | Whether to generate random post text | bool | No |
| random_media_url | Whether to generate random media | bool | No |
| notes | Additional notes for the post | str | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| post_result | The result of the post | PostResponse |
| post | The result of the post | PostIds |
### Possible use case
<!-- MANUAL: use_case -->
**Thought Leadership**: Share quick insights, opinions, or industry commentary in a conversational format.
**Cross-Platform Text Content**: Automatically syndicate text-based content from other platforms to Threads.
**Community Engagement**: Post discussion prompts or responses to engage with your Threads audience.
<!-- END MANUAL -->
---

@@ -0,0 +1,62 @@
# Ayrshare Post To TikTok
<!-- MANUAL: file_description -->
Blocks for posting videos and image slideshows to TikTok using the Ayrshare social media management API.
<!-- END MANUAL -->
## Post To Tik Tok
### What it is
Post to TikTok using Ayrshare
### How it works
<!-- MANUAL: how_it_works -->
This block uses Ayrshare's API to publish content to TikTok. It supports video posts and image slideshows (up to 35 images), with extensive options for content labeling including AI-generated disclosure, branded content, and brand organic content tags.
The block connects to TikTok's API through Ayrshare with controls for visibility, duet/stitch permissions, comment settings, auto-music, and thumbnail selection. Videos can be posted as drafts for final review, and scheduled for future publishing.
<!-- END MANUAL -->
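To show how the TikTok-specific toggles might travel with the request, here is a hedged sketch. Ayrshare nests platform options in a per-platform object; the `tikTokOptions` key and its field names below mirror the inputs listed in the next table but are assumptions, not taken from this block's source.
```python
# Hedged sketch; the "tikTokOptions" key and its field names are assumptions.
import requests

payload = {
    "post": "Behind the scenes of our build process",  # max 2,200 chars
    "platforms": ["tiktok"],
    "mediaUrls": ["https://example.com/clip.mp4"],  # 1 video OR up to 35 images
    "tikTokOptions": {
        "autoAddMusic": False,  # auto_add_music input
        "disableDuet": False,   # disable_duet input
        "disableStitch": True,  # disable_stitch input
        "isAiGenerated": True,  # applies the AI-generated label
        "visibility": "public",
        "draft": False,         # set True to post as a draft for final review
    },
}

resp = requests.post(
    "https://api.ayrshare.com/api/post",
    json=payload,
    headers={"Authorization": "Bearer your-api-key"},
    timeout=30,
)
resp.raise_for_status()
```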
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| post | The post text (max 2,200 chars, empty string allowed). Use @handle to mention users. Line breaks will be ignored. | str | Yes |
| media_urls | Required media URLs. Either 1 video OR up to 35 images (JPG/JPEG/WEBP only). Cannot mix video and images. | List[str] | No |
| is_video | Whether the media is a video | bool | No |
| schedule_date | UTC datetime for scheduling (YYYY-MM-DDThh:mm:ssZ) | str (date-time) | No |
| disable_comments | Disable comments on the published post | bool | No |
| shorten_links | Whether to shorten links | bool | No |
| unsplash | Unsplash image configuration | str | No |
| requires_approval | Whether to enable approval workflow | bool | No |
| random_post | Whether to generate random post text | bool | No |
| random_media_url | Whether to generate random media | bool | No |
| notes | Additional notes for the post | str | No |
| auto_add_music | Whether to automatically add recommended music to the post. If you set this field to true, you can change the music later in the TikTok app. | bool | No |
| disable_duet | Disable duets on published video (video only) | bool | No |
| disable_stitch | Disable stitch on published video (video only) | bool | No |
| is_ai_generated | If you enable the toggle, your video will be labeled as “Creator labeled as AI-generated” once posted, and this can't be changed. The “Creator labeled as AI-generated” label indicates that the content was completely AI-generated or significantly edited with AI. | bool | No |
| is_branded_content | Whether to enable the Branded Content toggle. If this field is set to true, the video will be labeled as Branded Content, indicating you are in a paid partnership with a brand. A “Paid partnership” label will be attached to the video. | bool | No |
| is_brand_organic | Whether to enable the Brand Organic Content toggle. If this field is set to true, the video will be labeled as Brand Organic Content, indicating you are promoting yourself or your own business. A “Promotional content” label will be attached to the video. | bool | No |
| image_cover_index | Index of image to use as cover (0-based, image posts only) | int | No |
| title | Title for image posts | str | No |
| thumbnail_offset | Video thumbnail frame offset in milliseconds (video only) | int | No |
| visibility | Post visibility: 'public', 'private', 'followers', or 'friends' | "public" \| "private" \| "followers" \| "friends" | No |
| draft | Create as draft post (video only) | bool | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| post_result | The full response returned by Ayrshare for the post | PostResponse |
| post | The IDs of the created post | PostIds |
### Possible use case
<!-- MANUAL: use_case -->
**Creator Content Pipeline**: Automate video uploads with proper AI disclosure labels and visibility settings for content creators.
**Brand Campaigns**: Publish branded content with proper disclosure labels to maintain FTC compliance and platform guidelines.
**Image Slideshow Posts**: Create TikTok slideshows from product images or photo series with automatic cover selection.
<!-- END MANUAL -->
---

@@ -0,0 +1,64 @@
# Ayrshare Post To X
<!-- MANUAL: file_description -->
Blocks for posting tweets and threads to X (Twitter) using the Ayrshare social media management API.
<!-- END MANUAL -->
## Post To X
### What it is
Post to X / Twitter using Ayrshare
### How it works
<!-- MANUAL: how_it_works -->
This block uses Ayrshare's API to publish content to X (formerly Twitter). It supports standard tweets (280 characters, or 25,000 for Premium users), threads, polls, quote tweets, and replies, with up to 4 media attachments including video with subtitles.
The block authenticates through Ayrshare and handles X-specific features like automatic thread breaking using double newlines, thread numbering, per-post media attachments, and long-form video uploads (with approval). Poll options and duration can be configured for engagement posts.
<!-- END MANUAL -->
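The thread-breaking behavior is easiest to see in a concrete payload. In the sketch below, double newlines in `post` mark thread boundaries; the `twitterOptions` object and its key names are assumptions modeled on the inputs listed in the next table, not confirmed by this diff.
```python
# Hedged sketch of a threaded X post; the "twitterOptions" key names are assumptions.
import requests

thread_text = (
    "Three lessons from automating our posting pipeline.\n\n"  # \n\n marks a thread break
    "Lesson one: schedule everything in UTC.\n\n"
    "Lesson two: attach media per thread post, not per thread."
)

payload = {
    "post": thread_text,
    "platforms": ["twitter"],
    "twitterOptions": {
        "thread": True,        # is_thread: split on double newlines
        "threadNumber": True,  # thread_number: append 1/n numbering
    },
}

resp = requests.post(
    "https://api.ayrshare.com/api/post",
    json=payload,
    headers={"Authorization": "Bearer your-api-key"},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())  # IDs for the created thread posts
```
Each double-newline segment becomes its own post, numbered 1/n when thread numbering is enabled.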
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| post | The post text (max 280 chars, up to 25,000 for Premium users). Use @handle to mention users. Use \n\n for thread breaks. | str | Yes |
| media_urls | Optional list of media URLs. X supports up to 4 images or videos per tweet. Auto-preview links unless media is included. | List[str] | No |
| is_video | Whether the media is a video | bool | No |
| schedule_date | UTC datetime for scheduling (YYYY-MM-DDThh:mm:ssZ) | str (date-time) | No |
| disable_comments | Whether to disable comments | bool | No |
| shorten_links | Whether to shorten links | bool | No |
| unsplash | Unsplash image configuration | str | No |
| requires_approval | Whether to enable approval workflow | bool | No |
| random_post | Whether to generate random post text | bool | No |
| random_media_url | Whether to generate random media | bool | No |
| notes | Additional notes for the post | str | No |
| reply_to_id | ID of the tweet to reply to | str | No |
| quote_tweet_id | ID of the tweet to quote (low-level Tweet ID) | str | No |
| poll_options | Poll options (2-4 choices) | List[str] | No |
| poll_duration | Poll duration in minutes (1-10080) | int | No |
| alt_text | Alt text for each image (max 1,000 chars each, not supported for videos) | List[str] | No |
| is_thread | Whether to automatically break post into thread based on line breaks | bool | No |
| thread_number | Add thread numbers (1/n format) to each thread post | bool | No |
| thread_media_urls | Media URLs for thread posts (one per thread, use 'null' to skip) | List[str] | No |
| long_post | Force long form post (requires Premium X account) | bool | No |
| long_video | Enable long video upload (requires approval and Business/Enterprise plan) | bool | No |
| subtitle_url | URL to SRT subtitle file for videos (must be HTTPS and end in .srt) | str | No |
| subtitle_language | Language code for subtitles (default: 'en') | str | No |
| subtitle_name | Name of caption track (max 150 chars, default: 'English') | str | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| post_result | The full response returned by Ayrshare for the post | PostResponse |
| post | The IDs of the created post | PostIds |
### Possible use case
<!-- MANUAL: use_case -->
**Thread Publishing**: Automatically format and publish long-form content as numbered thread sequences.
**Engagement Polls**: Create polls to gather audience feedback or drive interaction with scheduled posting.
**Reply Automation**: Build workflows that automatically respond to mentions or engage in conversations.
<!-- END MANUAL -->
---

Some files were not shown because too many files have changed in this diff.