Compare commits

...

12 Commits

Author SHA1 Message Date
Nicholas Tindle
04cf8d01b4 docs(blocks): Add improved block documentation for new platform location
Add block documentation files that were generated with improved descriptions
from Bobby's GitBook site. These docs include technical explanations and
use cases for blocks including Twitter, Google, GitHub, Wolfram, ZeroBounce,
and many other integrations.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-09 16:49:55 -07:00
Nicholas Tindle
3d78b833f0 fix(docs): Update generate_block_docs.py output path
Update DEFAULT_OUTPUT_DIR from docs/content/platform/blocks
to docs/platform/blocks to match the new docs structure.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-09 16:45:30 -07:00
Nicholas Tindle
eea4bca826 Merge dev into figure-out-docs
Accept dev's docs restructure (docs/content/platform → docs/platform).
Resolves merge conflicts by accepting dev's file deletions.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-09 16:44:34 -07:00
Nicholas Tindle
0bf55b453b update 2026-01-09 16:16:44 -07:00
Nicholas Tindle
f25d2a0ae6 docs(blocks): Improve documentation clarity and consistency across various blocks 2026-01-09 13:20:45 -07:00
Nicholas Tindle
4baf0f7ee3 Delete docs/content/SUMMARY.md 2026-01-08 00:32:48 -06:00
Nicholas Tindle
56f296af36 Delete .claude/settings.json 2026-01-08 00:32:20 -06:00
Nicholas Tindle
302e6d548d Delete docs/.gitbook.yaml 2026-01-08 00:32:01 -06:00
Nicholas Tindle
8d7defc89a docs(blocks): Add technical explanations and use cases to all block documentation
Replace placeholder text in MANUAL sections with comprehensive documentation:
- Added 2-paragraph technical explanations for how_it_works sections
- Added 3 practical use case examples with bold titles for use_case sections

Services updated: Twitter, Exa, Ayrshare, Google, GitHub, Firecrawl, Notion,
Airtable, Slant3D, Linear, Jina, HubSpot, Apollo, Discord, DataForSEO, Todoist,
System, Core blocks (data, logic, misc, multimedia, search, llm), BaaS,
Bannerbear, Replicate, Enrichlayer, Wolfram, FAL, Stagehand, SmartLead,
Nvidia, Compass, ZeroBounce, and generic webhooks.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-07 23:18:53 -07:00
Nicholas Tindle
7f94404dc2 Merge branch 'dev' into figure-out-docs 2026-01-07 21:22:41 -07:00
Nicholas Tindle
7fda42d48b Merge branch 'dev' into figure-out-docs 2026-01-06 19:38:15 -07:00
Nicholas Tindle
615d20613b feat(docs): Add block documentation auto-generation system
- Add generate_block_docs.py script that introspects block code to generate markdown
- Support manual content preservation via <!-- MANUAL: --> markers
- Add migrate_block_docs.py to preserve existing manual content from git HEAD
- Add CI workflow (docs-block-sync.yml) to fail if docs drift from code
- Add Claude PR review workflow (docs-claude-review.yml) for doc changes
- Add manual LLM enhancement workflow (docs-enhance.yml)
- Add GitBook configuration (.gitbook.yaml, SUMMARY.md)
- Fix non-deterministic category ordering (categories is a set)
- Add comprehensive test suite (32 tests)
- Generate docs for 444 blocks with 66 preserved manual sections

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-06 18:05:16 -07:00
136 changed files with 18674 additions and 1111 deletions
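Note on the mechanism these commits build: block pages are regenerated from the block schemas on every run, and only the hand-written sections wrapped in `<!-- MANUAL: section -->` / `<!-- END MANUAL -->` markers are carried over. The snippet below is a minimal, self-contained sketch of that extraction step; the marker format and regular expression mirror `extract_manual_content()` in `scripts/generate_block_docs.py` further down in this diff, while the sample markdown text is purely illustrative.

```
# Minimal sketch of the manual-content preservation step (illustrative only).
# The marker format and regex mirror extract_manual_content() in
# scripts/generate_block_docs.py; the sample markdown below is made up.
import re

SAMPLE_DOC = """\
### How it works
<!-- MANUAL: how_it_works -->
This block calls the GitHub API to fetch issue details.
<!-- END MANUAL -->

### Possible use case
<!-- MANUAL: use_case -->
**Triage**: label incoming issues and notify the on-call engineer.
<!-- END MANUAL -->
"""

MARKER_PATTERN = r"<!-- MANUAL: (\w+) -->\s*(.*?)\s*<!-- END MANUAL -->"


def extract_manual_sections(markdown: str) -> dict[str, str]:
    """Return {section_name: content} for every MANUAL block in a doc page."""
    return {
        name: body.strip()
        for name, body in re.findall(MARKER_PATTERN, markdown, re.DOTALL)
    }


if __name__ == "__main__":
    sections = extract_manual_sections(SAMPLE_DOC)
    # These sections would be re-inserted verbatim into the regenerated page.
    print(sections["how_it_works"])
    print(sections["use_case"])
```

Everything outside the markers is treated as disposable and rewritten from the block's input/output schemas, which is what lets the `--check` mode used by the CI workflow compare freshly generated output against the committed files.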

.github/workflows/docs-block-sync.yml

@@ -0,0 +1,74 @@
name: Block Documentation Sync Check
on:
push:
branches: [master, dev]
paths:
- "autogpt_platform/backend/backend/blocks/**"
- "docs/content/platform/blocks/**"
- "autogpt_platform/backend/scripts/generate_block_docs.py"
- ".github/workflows/docs-block-sync.yml"
pull_request:
branches: [master, dev]
paths:
- "autogpt_platform/backend/backend/blocks/**"
- "docs/content/platform/blocks/**"
- "autogpt_platform/backend/scripts/generate_block_docs.py"
- ".github/workflows/docs-block-sync.yml"
jobs:
check-docs-sync:
runs-on: ubuntu-latest
timeout-minutes: 15
steps:
- name: Checkout code
uses: actions/checkout@v4
with:
fetch-depth: 1
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: "3.11"
- name: Set up Python dependency cache
uses: actions/cache@v4
with:
path: ~/.cache/pypoetry
key: poetry-${{ runner.os }}-${{ hashFiles('autogpt_platform/backend/poetry.lock') }}
restore-keys: |
poetry-${{ runner.os }}-
- name: Install Poetry
run: |
cd autogpt_platform/backend
HEAD_POETRY_VERSION=$(python3 ../../.github/workflows/scripts/get_package_version_from_lockfile.py poetry)
echo "Found Poetry version ${HEAD_POETRY_VERSION} in backend/poetry.lock"
curl -sSL https://install.python-poetry.org | POETRY_VERSION=$HEAD_POETRY_VERSION python3 -
echo "$HOME/.local/bin" >> $GITHUB_PATH
- name: Install dependencies
working-directory: autogpt_platform/backend
run: |
poetry install --only main
poetry run prisma generate
- name: Check block documentation is in sync
working-directory: autogpt_platform/backend
run: |
echo "Checking if block documentation is in sync with code..."
poetry run python scripts/generate_block_docs.py --check
- name: Show diff if out of sync
if: failure()
run: |
echo "::error::Block documentation is out of sync with code!"
echo ""
echo "To fix this, run the following command locally:"
echo " cd autogpt_platform/backend && poetry run python scripts/generate_block_docs.py"
echo ""
echo "Then commit the updated documentation files."
echo ""
echo "Changes detected:"
git diff docs/content/platform/blocks/ || true


@@ -0,0 +1,94 @@
name: Claude Block Docs Review
on:
pull_request:
types: [opened, synchronize]
paths:
- "docs/content/platform/blocks/**"
- "autogpt_platform/backend/backend/blocks/**"
jobs:
claude-review:
# Only run for PRs from members/collaborators
if: |
github.event.pull_request.author_association == 'OWNER' ||
github.event.pull_request.author_association == 'MEMBER' ||
github.event.pull_request.author_association == 'COLLABORATOR'
runs-on: ubuntu-latest
timeout-minutes: 15
permissions:
contents: read
pull-requests: write
steps:
- name: Checkout code
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: "3.11"
- name: Set up Python dependency cache
uses: actions/cache@v4
with:
path: ~/.cache/pypoetry
key: poetry-${{ runner.os }}-${{ hashFiles('autogpt_platform/backend/poetry.lock') }}
restore-keys: |
poetry-${{ runner.os }}-
- name: Install Poetry
run: |
cd autogpt_platform/backend
HEAD_POETRY_VERSION=$(python3 ../../.github/workflows/scripts/get_package_version_from_lockfile.py poetry)
curl -sSL https://install.python-poetry.org | POETRY_VERSION=$HEAD_POETRY_VERSION python3 -
echo "$HOME/.local/bin" >> $GITHUB_PATH
- name: Install dependencies
working-directory: autogpt_platform/backend
run: |
poetry install --only main
poetry run prisma generate
- name: Run Claude Code Review
uses: anthropics/claude-code-action@v1
with:
anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
claude_args: |
--allowedTools "Read,Glob,Grep,Bash(gh pr comment:*),Bash(gh pr diff:*),Bash(gh pr view:*)"
prompt: |
You are reviewing a PR that modifies block documentation or block code for AutoGPT.
## Your Task
Review the changes in this PR and provide constructive feedback. Focus on:
1. **Documentation Accuracy**: For any block code changes, verify that:
- Input/output tables in docs match the actual block schemas
- Description text accurately reflects what the block does
- Any new blocks have corresponding documentation
2. **Manual Content Quality**: Check manual sections (marked with `<!-- MANUAL: -->` markers):
- "How it works" sections should have clear technical explanations
- "Possible use case" sections should have practical, real-world examples
- Content should be helpful for users trying to understand the blocks
3. **Template Compliance**: Ensure docs follow the standard template:
- What it is (brief intro)
- What it does (description)
- How it works (technical explanation)
- Inputs table
- Outputs table
- Possible use case
4. **Cross-references**: Check that links and anchors are correct
## Review Process
1. First, get the PR diff to see what changed: `gh pr diff ${{ github.event.pull_request.number }}`
2. Read any modified block files to understand the implementation
3. Read corresponding documentation files to verify accuracy
4. Provide your feedback as a PR comment
Be constructive and specific. If everything looks good, say so!
If there are issues, explain what's wrong and suggest how to fix it.

.github/workflows/docs-enhance.yml

@@ -0,0 +1,193 @@
name: Enhance Block Documentation
on:
workflow_dispatch:
inputs:
block_pattern:
description: 'Block file pattern to enhance (e.g., "google/*.md" or "*" for all blocks)'
required: true
default: '*'
type: string
dry_run:
description: 'Dry run mode - show proposed changes without committing'
type: boolean
default: true
max_blocks:
description: 'Maximum number of blocks to process (0 for unlimited)'
type: number
default: 10
jobs:
enhance-docs:
runs-on: ubuntu-latest
timeout-minutes: 45
permissions:
contents: write
pull-requests: write
steps:
- name: Checkout code
uses: actions/checkout@v4
with:
fetch-depth: 1
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: "3.11"
- name: Set up Python dependency cache
uses: actions/cache@v4
with:
path: ~/.cache/pypoetry
key: poetry-${{ runner.os }}-${{ hashFiles('autogpt_platform/backend/poetry.lock') }}
restore-keys: |
poetry-${{ runner.os }}-
- name: Install Poetry
run: |
cd autogpt_platform/backend
HEAD_POETRY_VERSION=$(python3 ../../.github/workflows/scripts/get_package_version_from_lockfile.py poetry)
curl -sSL https://install.python-poetry.org | POETRY_VERSION=$HEAD_POETRY_VERSION python3 -
echo "$HOME/.local/bin" >> $GITHUB_PATH
- name: Install dependencies
working-directory: autogpt_platform/backend
run: |
poetry install --only main
poetry run prisma generate
- name: Run Claude Enhancement
uses: anthropics/claude-code-action@v1
with:
anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
claude_args: |
--allowedTools "Read,Edit,Glob,Grep,Write,Bash(git:*),Bash(gh:*),Bash(find:*),Bash(ls:*)"
prompt: |
You are enhancing block documentation for AutoGPT. Your task is to improve the MANUAL sections
of block documentation files by reading the actual block implementations and writing helpful content.
## Configuration
- Block pattern: ${{ inputs.block_pattern }}
- Dry run: ${{ inputs.dry_run }}
- Max blocks to process: ${{ inputs.max_blocks }}
## Your Task
1. **Find Documentation Files**
Find block documentation files matching the pattern in `docs/content/platform/blocks/`
Pattern: ${{ inputs.block_pattern }}
Use: `find docs/content/platform/blocks -name "*.md" -type f`
2. **For Each Documentation File** (up to ${{ inputs.max_blocks }} files):
a. Read the documentation file
b. Identify which block(s) it documents (look for the block class name)
c. Find and read the corresponding block implementation in `autogpt_platform/backend/backend/blocks/`
d. Improve the MANUAL sections:
**"How it works" section** (within `<!-- MANUAL: how_it_works -->` markers):
- Explain the technical flow of the block
- Describe what APIs or services it connects to
- Note any important configuration or prerequisites
- Keep it concise but informative (2-4 paragraphs)
**"Possible use case" section** (within `<!-- MANUAL: use_case -->` markers):
- Provide 2-3 practical, real-world examples
- Make them specific and actionable
- Show how this block could be used in an automation workflow
3. **Important Rules**
- ONLY modify content within `<!-- MANUAL: -->` and `<!-- END MANUAL -->` markers
- Do NOT modify auto-generated sections (inputs/outputs tables, descriptions)
- Keep content accurate based on the actual block implementation
- Write for users who may not be technical experts
4. **Output**
${{ inputs.dry_run == true && 'DRY RUN MODE: Show proposed changes for each file but do NOT actually edit the files. Describe what you would change.' || 'LIVE MODE: Actually edit the files to improve the documentation.' }}
## Example Improvements
**Before (How it works):**
```
_Add technical explanation here._
```
**After (How it works):**
```
This block connects to the GitHub API to retrieve issue information. When executed,
it authenticates using your GitHub credentials and fetches issue details including
title, body, labels, and assignees.
The block requires a valid GitHub OAuth connection with repository access permissions.
It supports both public and private repositories you have access to.
```
**Before (Possible use case):**
```
_Add practical use case examples here._
```
**After (Possible use case):**
```
**Customer Support Automation**: Monitor a GitHub repository for new issues with
the "bug" label, then automatically create a ticket in your support system and
notify the on-call engineer via Slack.
**Release Notes Generation**: When a new release is published, gather all closed
issues since the last release and generate a summary for your changelog.
```
Begin by finding and listing the documentation files to process.
- name: Create PR with enhanced documentation
if: ${{ inputs.dry_run == false }}
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
run: |
# Check if there are changes
if git diff --quiet docs/content/platform/blocks/; then
echo "No changes to commit"
exit 0
fi
# Configure git
git config user.name "github-actions[bot]"
git config user.email "github-actions[bot]@users.noreply.github.com"
# Create branch and commit
BRANCH_NAME="docs/enhance-blocks-$(date +%Y%m%d-%H%M%S)"
git checkout -b "$BRANCH_NAME"
git add docs/content/platform/blocks/
git commit -m "docs: enhance block documentation with LLM-generated content
Pattern: ${{ inputs.block_pattern }}
Max blocks: ${{ inputs.max_blocks }}
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>"
# Push and create PR
git push -u origin "$BRANCH_NAME"
gh pr create \
--title "docs: LLM-enhanced block documentation" \
--body "## Summary
This PR contains LLM-enhanced documentation for block files matching pattern: \`${{ inputs.block_pattern }}\`
The following manual sections were improved:
- **How it works**: Technical explanations based on block implementations
- **Possible use case**: Practical, real-world examples
## Review Checklist
- [ ] Content is accurate based on block implementations
- [ ] Examples are practical and helpful
- [ ] No auto-generated sections were modified
---
🤖 Generated with [Claude Code](https://claude.com/claude-code)" \
--base dev


@@ -81,7 +81,7 @@ class StoreValueBlock(Block):
def __init__(self):
super().__init__(
id="1ff065e9-88e8-4358-9d82-8dc91f622ba9",
description="This block forwards an input value as output, allowing reuse without change.",
description="A basic block that stores and forwards a value throughout workflows, allowing it to be reused without changes across multiple blocks.",
categories={BlockCategory.BASIC},
input_schema=StoreValueBlock.Input,
output_schema=StoreValueBlock.Output,
@@ -111,7 +111,7 @@ class PrintToConsoleBlock(Block):
def __init__(self):
super().__init__(
id="f3b1c1b2-4c4f-4f0d-8d2f-4c4f0d8d2f4c",
description="Print the given text to the console, this is used for a debugging purpose.",
description="A debugging block that outputs text to the console for monitoring and troubleshooting workflow execution.",
categories={BlockCategory.BASIC},
input_schema=PrintToConsoleBlock.Input,
output_schema=PrintToConsoleBlock.Output,
@@ -137,7 +137,7 @@ class NoteBlock(Block):
def __init__(self):
super().__init__(
id="cc10ff7b-7753-4ff2-9af6-9399b1a7eddc",
description="This block is used to display a sticky note with the given text.",
description="A visual annotation block that displays a sticky note in the workflow editor for documentation and organization purposes.",
categories={BlockCategory.BASIC},
input_schema=NoteBlock.Input,
output_schema=NoteBlock.Output,


@@ -159,7 +159,7 @@ class FindInDictionaryBlock(Block):
def __init__(self):
super().__init__(
id="0e50422c-6dee-4145-83d6-3a5a392f65de",
description="Lookup the given key in the input dictionary/object/list and return the value.",
description="A block that looks up a value in a dictionary, list, or object by key or index and returns the corresponding value.",
input_schema=FindInDictionaryBlock.Input,
output_schema=FindInDictionaryBlock.Output,
test_input=[


@@ -51,7 +51,7 @@ class GithubCommentBlock(Block):
def __init__(self):
super().__init__(
id="a8db4d8d-db1c-4a25-a1b0-416a8c33602b",
description="This block posts a comment on a specified GitHub issue or pull request.",
description="A block that posts comments on GitHub issues or pull requests using the GitHub API.",
categories={BlockCategory.DEVELOPER_TOOLS},
input_schema=GithubCommentBlock.Input,
output_schema=GithubCommentBlock.Output,
@@ -151,7 +151,7 @@ class GithubUpdateCommentBlock(Block):
def __init__(self):
super().__init__(
id="b3f4d747-10e3-4e69-8c51-f2be1d99c9a7",
description="This block updates a comment on a specified GitHub issue or pull request.",
description="A block that updates an existing comment on a GitHub issue or pull request.",
categories={BlockCategory.DEVELOPER_TOOLS},
input_schema=GithubUpdateCommentBlock.Input,
output_schema=GithubUpdateCommentBlock.Output,
@@ -249,7 +249,7 @@ class GithubListCommentsBlock(Block):
def __init__(self):
super().__init__(
id="c4b5fb63-0005-4a11-b35a-0c2467bd6b59",
description="This block lists all comments for a specified GitHub issue or pull request.",
description="A block that retrieves all comments from a GitHub issue or pull request, including comment metadata and content.",
categories={BlockCategory.DEVELOPER_TOOLS},
input_schema=GithubListCommentsBlock.Input,
output_schema=GithubListCommentsBlock.Output,
@@ -363,7 +363,7 @@ class GithubMakeIssueBlock(Block):
def __init__(self):
super().__init__(
id="691dad47-f494-44c3-a1e8-05b7990f2dab",
description="This block creates a new issue on a specified GitHub repository.",
description="A block that creates new issues on GitHub repositories with a title and body content.",
categories={BlockCategory.DEVELOPER_TOOLS},
input_schema=GithubMakeIssueBlock.Input,
output_schema=GithubMakeIssueBlock.Output,
@@ -433,7 +433,7 @@ class GithubReadIssueBlock(Block):
def __init__(self):
super().__init__(
id="6443c75d-032a-4772-9c08-230c707c8acc",
description="This block reads the body, title, and user of a specified GitHub issue.",
description="A block that retrieves information about a specific GitHub issue, including its title, body content, and creator.",
categories={BlockCategory.DEVELOPER_TOOLS},
input_schema=GithubReadIssueBlock.Input,
output_schema=GithubReadIssueBlock.Output,
@@ -510,7 +510,7 @@ class GithubListIssuesBlock(Block):
def __init__(self):
super().__init__(
id="c215bfd7-0e57-4573-8f8c-f7d4963dcd74",
description="This block lists all issues for a specified GitHub repository.",
description="A block that retrieves a list of issues from a GitHub repository with their titles and URLs.",
categories={BlockCategory.DEVELOPER_TOOLS},
input_schema=GithubListIssuesBlock.Input,
output_schema=GithubListIssuesBlock.Output,
@@ -597,7 +597,7 @@ class GithubAddLabelBlock(Block):
def __init__(self):
super().__init__(
id="98bd6b77-9506-43d5-b669-6b9733c4b1f1",
description="This block adds a label to a specified GitHub issue or pull request.",
description="A block that adds a label to a GitHub issue or pull request for categorization and organization.",
categories={BlockCategory.DEVELOPER_TOOLS},
input_schema=GithubAddLabelBlock.Input,
output_schema=GithubAddLabelBlock.Output,
@@ -657,7 +657,7 @@ class GithubRemoveLabelBlock(Block):
def __init__(self):
super().__init__(
id="78f050c5-3e3a-48c0-9e5b-ef1ceca5589c",
description="This block removes a label from a specified GitHub issue or pull request.",
description="A block that removes a label from a GitHub issue or pull request.",
categories={BlockCategory.DEVELOPER_TOOLS},
input_schema=GithubRemoveLabelBlock.Input,
output_schema=GithubRemoveLabelBlock.Output,
@@ -720,7 +720,7 @@ class GithubAssignIssueBlock(Block):
def __init__(self):
super().__init__(
id="90507c72-b0ff-413a-886a-23bbbd66f542",
description="This block assigns a user to a specified GitHub issue.",
description="A block that assigns a GitHub user to an issue for task ownership and tracking.",
categories={BlockCategory.DEVELOPER_TOOLS},
input_schema=GithubAssignIssueBlock.Input,
output_schema=GithubAssignIssueBlock.Output,
@@ -786,7 +786,7 @@ class GithubUnassignIssueBlock(Block):
def __init__(self):
super().__init__(
id="d154002a-38f4-46c2-962d-2488f2b05ece",
description="This block unassigns a user from a specified GitHub issue.",
description="A block that removes a user's assignment from a GitHub issue.",
categories={BlockCategory.DEVELOPER_TOOLS},
input_schema=GithubUnassignIssueBlock.Input,
output_schema=GithubUnassignIssueBlock.Output,


@@ -353,7 +353,7 @@ class GmailReadBlock(GmailBase):
def __init__(self):
super().__init__(
id="25310c70-b89b-43ba-b25c-4dfa7e2a481c",
description="This block reads emails from Gmail.",
description="A block that retrieves and reads emails from a Gmail account based on search criteria, returning detailed message information including subject, sender, body, and attachments.",
categories={BlockCategory.COMMUNICATION},
disabled=not GOOGLE_OAUTH_IS_CONFIGURED,
input_schema=GmailReadBlock.Input,
@@ -743,7 +743,7 @@ class GmailListLabelsBlock(GmailBase):
def __init__(self):
super().__init__(
id="3e1c2c1c-c689-4520-b956-1f3bf4e02bb7",
description="This block lists all labels in Gmail.",
description="A block that retrieves all labels (categories) from a Gmail account for organizing and categorizing emails.",
categories={BlockCategory.COMMUNICATION},
input_schema=GmailListLabelsBlock.Input,
output_schema=GmailListLabelsBlock.Output,
@@ -807,7 +807,7 @@ class GmailAddLabelBlock(GmailBase):
def __init__(self):
super().__init__(
id="f884b2fb-04f4-4265-9658-14f433926ac9",
description="This block adds a label to a Gmail message.",
description="A block that adds a label to a specific email message in Gmail, creating the label if it doesn't exist.",
categories={BlockCategory.COMMUNICATION},
input_schema=GmailAddLabelBlock.Input,
output_schema=GmailAddLabelBlock.Output,
@@ -893,7 +893,7 @@ class GmailRemoveLabelBlock(GmailBase):
def __init__(self):
super().__init__(
id="0afc0526-aba1-4b2b-888e-a22b7c3f359d",
description="This block removes a label from a Gmail message.",
description="A block that removes a label from a specific email message in a Gmail account.",
categories={BlockCategory.COMMUNICATION},
input_schema=GmailRemoveLabelBlock.Input,
output_schema=GmailRemoveLabelBlock.Output,
@@ -961,7 +961,7 @@ class GmailGetThreadBlock(GmailBase):
def __init__(self):
super().__init__(
id="21a79166-9df7-4b5f-9f36-96f639d86112",
description="Get a full Gmail thread by ID",
description="A block that retrieves an entire Gmail thread (email conversation) by ID, returning all messages with decoded bodies for reading complete conversations.",
categories={BlockCategory.COMMUNICATION},
input_schema=GmailGetThreadBlock.Input,
output_schema=GmailGetThreadBlock.Output,


@@ -282,7 +282,7 @@ class GoogleSheetsReadBlock(Block):
def __init__(self):
super().__init__(
id="5724e902-3635-47e9-a108-aaa0263a4988",
description="This block reads data from a Google Sheets spreadsheet.",
description="A block that reads data from a Google Sheets spreadsheet using A1 notation range selection.",
categories={BlockCategory.DATA},
input_schema=GoogleSheetsReadBlock.Input,
output_schema=GoogleSheetsReadBlock.Output,
@@ -409,7 +409,7 @@ class GoogleSheetsWriteBlock(Block):
def __init__(self):
super().__init__(
id="d9291e87-301d-47a8-91fe-907fb55460e5",
description="This block writes data to a Google Sheets spreadsheet.",
description="A block that writes data to a Google Sheets spreadsheet at a specified A1 notation range.",
categories={BlockCategory.DATA},
input_schema=GoogleSheetsWriteBlock.Input,
output_schema=GoogleSheetsWriteBlock.Output,


@@ -76,7 +76,7 @@ class AgentInputBlock(Block):
super().__init__(
**{
"id": "c0a8e994-ebf1-4a9c-a4d8-89d09c86741b",
"description": "Base block for user inputs.",
"description": "A block that accepts and processes user input values within a workflow, supporting various input types and validation.",
"input_schema": AgentInputBlock.Input,
"output_schema": AgentInputBlock.Output,
"test_input": [
@@ -168,7 +168,7 @@ class AgentOutputBlock(Block):
def __init__(self):
super().__init__(
id="363ae599-353e-4804-937e-b2ee3cef3da4",
description="Stores the output of the graph for users to see.",
description="A block that records and formats workflow results for display to users, with optional Jinja2 template formatting support.",
input_schema=AgentOutputBlock.Input,
output_schema=AgentOutputBlock.Output,
test_input=[


@@ -854,7 +854,7 @@ class AIStructuredResponseGeneratorBlock(AIBlockBase):
def __init__(self):
super().__init__(
id="ed55ac19-356e-4243-a6cb-bc599e9b716f",
description="Call a Large Language Model (LLM) to generate formatted object based on the given prompt.",
description="A block that generates structured JSON responses using a Large Language Model (LLM), with schema validation and format enforcement.",
categories={BlockCategory.AI},
input_schema=AIStructuredResponseGeneratorBlock.Input,
output_schema=AIStructuredResponseGeneratorBlock.Output,
@@ -1265,7 +1265,7 @@ class AITextGeneratorBlock(AIBlockBase):
def __init__(self):
super().__init__(
id="1f292d4a-41a4-4977-9684-7c8d560b9f91",
description="Call a Large Language Model (LLM) to generate a string based on the given prompt.",
description="A block that produces text responses using a Large Language Model (LLM) based on customizable prompts and system instructions.",
categories={BlockCategory.AI},
input_schema=AITextGeneratorBlock.Input,
output_schema=AITextGeneratorBlock.Output,
@@ -1361,7 +1361,7 @@ class AITextSummarizerBlock(AIBlockBase):
def __init__(self):
super().__init__(
id="a0a69be1-4528-491c-a85a-a4ab6873e3f0",
description="Utilize a Large Language Model (LLM) to summarize a long text.",
description="A block that summarizes long texts using a Large Language Model (LLM), with configurable focus topics and summary styles.",
categories={BlockCategory.AI, BlockCategory.TEXT},
input_schema=AITextSummarizerBlock.Input,
output_schema=AITextSummarizerBlock.Output,
@@ -1562,7 +1562,7 @@ class AIConversationBlock(AIBlockBase):
def __init__(self):
super().__init__(
id="32a87eab-381e-4dd4-bdb8-4c47151be35a",
description="Advanced LLM call that takes a list of messages and sends them to the language model.",
description="A block that facilitates multi-turn conversations with a Large Language Model (LLM), maintaining context across message exchanges.",
categories={BlockCategory.AI},
input_schema=AIConversationBlock.Input,
output_schema=AIConversationBlock.Output,
@@ -1682,7 +1682,7 @@ class AIListGeneratorBlock(AIBlockBase):
def __init__(self):
super().__init__(
id="9c0b0450-d199-458b-a731-072189dd6593",
description="Generate a list of values based on the given prompt using a Large Language Model (LLM).",
description="A block that creates lists of items based on prompts using a Large Language Model (LLM), with optional source data for context.",
categories={BlockCategory.AI, BlockCategory.TEXT},
input_schema=AIListGeneratorBlock.Input,
output_schema=AIListGeneratorBlock.Output,


@@ -0,0 +1,746 @@
#!/usr/bin/env python3
"""
Block Documentation Generator
Generates markdown documentation for all blocks from code introspection.
Preserves manually-written content between marker comments.
Usage:
# Generate all docs
poetry run python scripts/generate_block_docs.py
# Check mode for CI (exits 1 if stale)
poetry run python scripts/generate_block_docs.py --check
# Migrate existing docs (add markers, preserve content)
poetry run python scripts/generate_block_docs.py --migrate
# Verbose output
poetry run python scripts/generate_block_docs.py -v
"""
import argparse
import inspect
import logging
import re
import sys
from collections import defaultdict
from dataclasses import dataclass, field
from pathlib import Path
from typing import Any
# Add backend to path for imports
backend_dir = Path(__file__).parent.parent
sys.path.insert(0, str(backend_dir))
logger = logging.getLogger(__name__)
# Default output directory relative to repo root
DEFAULT_OUTPUT_DIR = (
Path(__file__).parent.parent.parent.parent / "docs" / "platform" / "blocks"
)
@dataclass
class FieldDoc:
"""Documentation for a single input/output field."""
name: str
description: str
type_str: str
required: bool
default: Any = None
advanced: bool = False
hidden: bool = False
placeholder: str | None = None
@dataclass
class BlockDoc:
"""Documentation data extracted from a block."""
id: str
name: str
class_name: str
description: str
categories: list[str]
category_descriptions: dict[str, str]
inputs: list[FieldDoc]
outputs: list[FieldDoc]
block_type: str
source_file: str
contributors: list[str] = field(default_factory=list)
# Category to human-readable name mapping
CATEGORY_DISPLAY_NAMES = {
"AI": "AI and Language Models",
"BASIC": "Basic Operations",
"TEXT": "Text Processing",
"SEARCH": "Search and Information Retrieval",
"SOCIAL": "Social Media and Content",
"DEVELOPER_TOOLS": "Developer Tools",
"DATA": "Data Processing",
"LOGIC": "Logic and Control Flow",
"COMMUNICATION": "Communication",
"INPUT": "Input/Output",
"OUTPUT": "Input/Output",
"MULTIMEDIA": "Media Generation",
"PRODUCTIVITY": "Productivity",
"HARDWARE": "Hardware",
"AGENT": "Agent Integration",
"CRM": "CRM Services",
"SAFETY": "AI Safety",
"ISSUE_TRACKING": "Issue Tracking",
"MARKETING": "Marketing",
}
# Category to doc file mapping (for grouping related blocks)
CATEGORY_FILE_MAP = {
"BASIC": "basic",
"TEXT": "text",
"AI": "llm",
"SEARCH": "search",
"DATA": "data",
"LOGIC": "logic",
"COMMUNICATION": "communication",
"MULTIMEDIA": "multimedia",
"PRODUCTIVITY": "productivity",
}
def class_name_to_display_name(class_name: str) -> str:
"""Convert BlockClassName to 'Block Class Name'."""
# Remove 'Block' suffix
name = class_name.replace("Block", "")
# Insert space before capitals
name = re.sub(r"([a-z])([A-Z])", r"\1 \2", name)
# Handle consecutive capitals (e.g., 'HTTPRequest' -> 'HTTP Request')
name = re.sub(r"([A-Z]+)([A-Z][a-z])", r"\1 \2", name)
return name.strip()
def type_to_readable(type_schema: dict[str, Any]) -> str:
"""Convert JSON schema type to human-readable string."""
if not isinstance(type_schema, dict):
return str(type_schema) if type_schema else "Any"
if "anyOf" in type_schema:
# Union type - show options
any_of = type_schema["anyOf"]
if not isinstance(any_of, list):
return "Any"
options = []
for opt in any_of:
if isinstance(opt, dict) and opt.get("type") == "null":
continue
options.append(type_to_readable(opt))
if len(options) == 1:
return options[0]
return " | ".join(options)
if "allOf" in type_schema:
all_of = type_schema["allOf"]
if not isinstance(all_of, list) or not all_of:
return "Any"
return type_to_readable(all_of[0])
schema_type = type_schema.get("type")
if schema_type == "array":
items = type_schema.get("items", {})
item_type = type_to_readable(items)
return f"List[{item_type}]"
if schema_type == "object":
if "additionalProperties" in type_schema:
value_type = type_to_readable(type_schema["additionalProperties"])
return f"Dict[str, {value_type}]"
# Check if it's a specific model
title = type_schema.get("title", "Object")
return title
if schema_type == "string":
if "enum" in type_schema:
return " | ".join(f'"{v}"' for v in type_schema["enum"][:3])
if "format" in type_schema:
return f"str ({type_schema['format']})"
return "str"
if schema_type == "integer":
return "int"
if schema_type == "number":
return "float"
if schema_type == "boolean":
return "bool"
if schema_type == "null":
return "None"
# Fallback
return type_schema.get("title", schema_type or "Any")
def safe_get(d: Any, key: str, default: Any = None) -> Any:
"""Safely get a value from a dict-like object."""
if isinstance(d, dict):
return d.get(key, default)
return default
def extract_block_doc(block_cls: type) -> BlockDoc:
"""Extract documentation data from a block class."""
block = block_cls.create()
# Get source file
try:
source_file = inspect.getfile(block_cls)
# Make relative to blocks directory
blocks_dir = Path(source_file).parent
while blocks_dir.name != "blocks" and blocks_dir.parent != blocks_dir:
blocks_dir = blocks_dir.parent
source_file = str(Path(source_file).relative_to(blocks_dir.parent))
except (TypeError, ValueError):
source_file = "unknown"
# Extract input fields
input_schema = block.input_schema.jsonschema()
input_properties = safe_get(input_schema, "properties", {})
if not isinstance(input_properties, dict):
input_properties = {}
required_raw = safe_get(input_schema, "required", [])
# Handle edge cases where required might not be a list
if isinstance(required_raw, (list, set, tuple)):
required_inputs = set(required_raw)
else:
required_inputs = set()
inputs = []
for field_name, field_schema in input_properties.items():
if not isinstance(field_schema, dict):
continue
# Skip credentials fields in docs (they're auto-handled)
if "credentials" in field_name.lower():
continue
inputs.append(
FieldDoc(
name=field_name,
description=safe_get(field_schema, "description", ""),
type_str=type_to_readable(field_schema),
required=field_name in required_inputs,
default=safe_get(field_schema, "default"),
advanced=safe_get(field_schema, "advanced", False) or False,
hidden=safe_get(field_schema, "hidden", False) or False,
placeholder=safe_get(field_schema, "placeholder"),
)
)
# Extract output fields
output_schema = block.output_schema.jsonschema()
output_properties = safe_get(output_schema, "properties", {})
if not isinstance(output_properties, dict):
output_properties = {}
outputs = []
for field_name, field_schema in output_properties.items():
if not isinstance(field_schema, dict):
continue
outputs.append(
FieldDoc(
name=field_name,
description=safe_get(field_schema, "description", ""),
type_str=type_to_readable(field_schema),
required=True, # Outputs are always produced
hidden=safe_get(field_schema, "hidden", False) or False,
)
)
# Get category info (sort for deterministic ordering since it's a set)
categories = []
category_descriptions = {}
for cat in sorted(block.categories, key=lambda c: c.name):
categories.append(cat.name)
category_descriptions[cat.name] = cat.value
# Get contributors
contributors = []
for contrib in block.contributors:
contributors.append(contrib.name if hasattr(contrib, "name") else str(contrib))
return BlockDoc(
id=block.id,
name=class_name_to_display_name(block.name),
class_name=block.name,
description=block.description,
categories=categories,
category_descriptions=category_descriptions,
inputs=inputs,
outputs=outputs,
block_type=block.block_type.value,
source_file=source_file,
contributors=contributors,
)
def generate_anchor(name: str) -> str:
"""Generate markdown anchor from block name."""
return name.lower().replace(" ", "-").replace("(", "").replace(")", "")
def extract_manual_content(existing_content: str) -> dict[str, str]:
"""Extract content between MANUAL markers from existing file."""
manual_sections = {}
# Pattern: <!-- MANUAL: section_name -->content<!-- END MANUAL -->
pattern = r"<!-- MANUAL: (\w+) -->\s*(.*?)\s*<!-- END MANUAL -->"
matches = re.findall(pattern, existing_content, re.DOTALL)
for section_name, content in matches:
manual_sections[section_name] = content.strip()
return manual_sections
def strip_markers(content: str) -> str:
"""Remove MANUAL markers from content."""
# Remove opening markers
content = re.sub(r"<!-- MANUAL: \w+ -->\s*", "", content)
# Remove closing markers
content = re.sub(r"\s*<!-- END MANUAL -->", "", content)
return content.strip()
def extract_legacy_content(existing_content: str) -> dict[str, str]:
"""Extract content from legacy docs without markers (for migration)."""
manual_sections = {}
# Try to extract "How it works" section
how_it_works_match = re.search(
r"### How it works\s*\n(.*?)(?=\n### |\n## |\Z)", existing_content, re.DOTALL
)
if how_it_works_match:
content = strip_markers(how_it_works_match.group(1).strip())
if content and not content.startswith("|"): # Not a table
manual_sections["how_it_works"] = content
# Try to extract "Possible use case" section
use_case_match = re.search(
r"### Possible use case\s*\n(.*?)(?=\n### |\n## |\n---|\Z)",
existing_content,
re.DOTALL,
)
if use_case_match:
content = strip_markers(use_case_match.group(1).strip())
if content:
manual_sections["use_case"] = content
return manual_sections
def generate_block_markdown(
block: BlockDoc,
manual_content: dict[str, str] | None = None,
is_first_in_file: bool = True,
) -> str:
"""Generate markdown documentation for a single block."""
manual_content = manual_content or {}
lines = []
# Block heading
heading_level = "#" if is_first_in_file else "##"
lines.append(f"{heading_level} {block.name}")
lines.append("")
# What it is (full description)
lines.append("### What it is")
lines.append(block.description or "No description available.")
lines.append("")
# How it works (manual section)
lines.append("### How it works")
how_it_works = manual_content.get(
"how_it_works", "_Add technical explanation here._"
)
lines.append("<!-- MANUAL: how_it_works -->")
lines.append(how_it_works)
lines.append("<!-- END MANUAL -->")
lines.append("")
# Inputs table (auto-generated)
visible_inputs = [f for f in block.inputs if not f.hidden]
if visible_inputs:
lines.append("### Inputs")
lines.append("| Input | Description | Type | Required |")
lines.append("|-------|-------------|------|----------|")
for inp in visible_inputs:
required = "Yes" if inp.required else "No"
desc = inp.description or "-"
# Escape pipes in description
desc = desc.replace("|", "\\|")
lines.append(f"| {inp.name} | {desc} | {inp.type_str} | {required} |")
lines.append("")
# Outputs table (auto-generated)
visible_outputs = [f for f in block.outputs if not f.hidden]
if visible_outputs:
lines.append("### Outputs")
lines.append("| Output | Description | Type |")
lines.append("|--------|-------------|------|")
for out in visible_outputs:
desc = out.description or "-"
desc = desc.replace("|", "\\|")
lines.append(f"| {out.name} | {desc} | {out.type_str} |")
lines.append("")
# Possible use case (manual section)
lines.append("### Possible use case")
use_case = manual_content.get("use_case", "_Add practical use case examples here._")
lines.append("<!-- MANUAL: use_case -->")
lines.append(use_case)
lines.append("<!-- END MANUAL -->")
lines.append("")
lines.append("---")
lines.append("")
return "\n".join(lines)
def get_block_file_mapping(blocks: list[BlockDoc]) -> dict[str, list[BlockDoc]]:
"""
Map blocks to their documentation files.
Returns dict of {relative_file_path: [blocks]}
"""
file_mapping = defaultdict(list)
for block in blocks:
# Determine file path based on source file or category
source_path = Path(block.source_file)
# If source is in a subdirectory (e.g., google/gmail.py), use that structure
if len(source_path.parts) > 2: # blocks/subdir/file.py
subdir = source_path.parts[1] # e.g., "google"
# Use the Python filename as the md filename
md_file = source_path.stem + ".md" # e.g., "gmail.md"
file_path = f"{subdir}/{md_file}"
else:
# Use category-based grouping for top-level blocks
primary_category = block.categories[0] if block.categories else "BASIC"
file_name = CATEGORY_FILE_MAP.get(primary_category, "misc")
file_path = f"{file_name}.md"
file_mapping[file_path].append(block)
return dict(file_mapping)
def generate_overview_table(blocks: list[BlockDoc]) -> str:
"""Generate the overview table markdown (blocks.md)."""
lines = []
lines.append("# AutoGPT Blocks Overview")
lines.append("")
lines.append(
'AutoGPT uses a modular approach with various "blocks" to handle different tasks. These blocks are the building blocks of AutoGPT workflows, allowing users to create complex automations by combining simple, specialized components.'
)
lines.append("")
lines.append('!!! info "Creating Your Own Blocks"')
lines.append(" Want to create your own custom blocks? Check out our guides:")
lines.append(" ")
lines.append(
" - [Build your own Blocks](../new_blocks.md) - Step-by-step tutorial with examples"
)
lines.append(
" - [Block SDK Guide](../block-sdk-guide.md) - Advanced SDK patterns with OAuth, webhooks, and provider configuration"
)
lines.append("")
lines.append(
"Below is a comprehensive list of all available blocks, categorized by their primary function. Click on any block name to view its detailed documentation."
)
lines.append("")
# Group blocks by category
by_category = defaultdict(list)
for block in blocks:
primary_cat = block.categories[0] if block.categories else "BASIC"
by_category[primary_cat].append(block)
# Sort categories
category_order = [
"BASIC",
"DATA",
"TEXT",
"AI",
"SEARCH",
"SOCIAL",
"COMMUNICATION",
"DEVELOPER_TOOLS",
"MULTIMEDIA",
"PRODUCTIVITY",
"LOGIC",
"INPUT",
"OUTPUT",
"AGENT",
"CRM",
"SAFETY",
"ISSUE_TRACKING",
"HARDWARE",
"MARKETING",
]
for category in category_order:
if category not in by_category:
continue
cat_blocks = sorted(by_category[category], key=lambda b: b.name)
display_name = CATEGORY_DISPLAY_NAMES.get(category, category)
lines.append(f"## {display_name}")
lines.append("| Block Name | Description |")
lines.append("|------------|-------------|")
for block in cat_blocks:
# Determine link path
file_mapping = get_block_file_mapping([block])
file_path = list(file_mapping.keys())[0]
anchor = generate_anchor(block.name)
# Short description (first sentence)
short_desc = (
block.description.split(".")[0]
if block.description
else "No description"
)
short_desc = short_desc.replace("|", "\\|")
lines.append(f"| [{block.name}]({file_path}#{anchor}) | {short_desc} |")
lines.append("")
return "\n".join(lines)
def load_all_blocks_for_docs() -> list[BlockDoc]:
"""Load all blocks and extract documentation."""
from backend.blocks import load_all_blocks
block_classes = load_all_blocks()
blocks = []
for _block_id, block_cls in block_classes.items():
try:
block_doc = extract_block_doc(block_cls)
blocks.append(block_doc)
except Exception as e:
logger.warning(f"Failed to extract docs for {block_cls.__name__}: {e}")
return blocks
def write_block_docs(
output_dir: Path,
blocks: list[BlockDoc],
migrate: bool = False,
verbose: bool = False,
) -> dict[str, str]:
"""
Write block documentation files.
Returns dict of {file_path: content} for all generated files.
"""
output_dir = Path(output_dir)
output_dir.mkdir(parents=True, exist_ok=True)
file_mapping = get_block_file_mapping(blocks)
generated_files = {}
for file_path, file_blocks in file_mapping.items():
full_path = output_dir / file_path
# Create subdirectories if needed
full_path.parent.mkdir(parents=True, exist_ok=True)
# Load existing content for manual section preservation
existing_content = ""
if full_path.exists():
existing_content = full_path.read_text()
# Generate content for each block
content_parts = []
for i, block in enumerate(sorted(file_blocks, key=lambda b: b.name)):
# Try to extract manual content
if migrate:
manual_content = extract_legacy_content(existing_content)
else:
# Extract manual content specific to this block
# Look for content after the block heading
block_pattern = (
rf"(?:^|\n)##? {re.escape(block.name)}\s*\n(.*?)(?=\n##? |\Z)"
)
block_match = re.search(block_pattern, existing_content, re.DOTALL)
if block_match:
manual_content = extract_manual_content(block_match.group(1))
else:
manual_content = {}
content_parts.append(
generate_block_markdown(
block,
manual_content,
is_first_in_file=(i == 0),
)
)
full_content = "\n".join(content_parts)
generated_files[str(file_path)] = full_content
if verbose:
print(f" Writing {file_path} ({len(file_blocks)} blocks)")
full_path.write_text(full_content)
# Generate overview file
overview_content = generate_overview_table(blocks)
overview_path = output_dir / "blocks.md"
generated_files["blocks.md"] = overview_content
overview_path.write_text(overview_content)
if verbose:
print(" Writing blocks.md (overview)")
return generated_files
def check_docs_in_sync(output_dir: Path, blocks: list[BlockDoc]) -> bool:
"""
Check if generated docs match existing docs.
Returns True if in sync, False otherwise.
"""
output_dir = Path(output_dir)
file_mapping = get_block_file_mapping(blocks)
all_match = True
for file_path, file_blocks in file_mapping.items():
full_path = output_dir / file_path
if not full_path.exists():
print(f"MISSING: {file_path}")
all_match = False
continue
existing_content = full_path.read_text()
# Extract manual content from existing file
manual_sections_by_block = {}
for block in file_blocks:
block_pattern = (
rf"(?:^|\n)##? {re.escape(block.name)}\s*\n(.*?)(?=\n##? |\Z)"
)
block_match = re.search(block_pattern, existing_content, re.DOTALL)
if block_match:
manual_sections_by_block[block.name] = extract_manual_content(
block_match.group(1)
)
# Generate expected content
content_parts = []
for i, block in enumerate(sorted(file_blocks, key=lambda b: b.name)):
manual_content = manual_sections_by_block.get(block.name, {})
content_parts.append(
generate_block_markdown(
block,
manual_content,
is_first_in_file=(i == 0),
)
)
expected_content = "\n".join(content_parts)
if existing_content.strip() != expected_content.strip():
print(f"OUT OF SYNC: {file_path}")
all_match = False
# Check overview
overview_path = output_dir / "blocks.md"
if overview_path.exists():
existing_overview = overview_path.read_text()
expected_overview = generate_overview_table(blocks)
if existing_overview.strip() != expected_overview.strip():
print("OUT OF SYNC: blocks.md (overview)")
all_match = False
else:
print("MISSING: blocks.md (overview)")
all_match = False
return all_match
def main():
parser = argparse.ArgumentParser(
description="Generate block documentation from code introspection"
)
parser.add_argument(
"--output-dir",
type=Path,
default=DEFAULT_OUTPUT_DIR,
help="Output directory for generated docs",
)
parser.add_argument(
"--check",
action="store_true",
help="Check if docs are in sync (for CI), exit 1 if not",
)
parser.add_argument(
"--migrate",
action="store_true",
help="Migrate existing docs (extract legacy manual content)",
)
parser.add_argument(
"-v",
"--verbose",
action="store_true",
help="Verbose output",
)
args = parser.parse_args()
logging.basicConfig(
level=logging.DEBUG if args.verbose else logging.INFO,
format="%(levelname)s: %(message)s",
)
print("Loading blocks...")
blocks = load_all_blocks_for_docs()
print(f"Found {len(blocks)} blocks")
if args.check:
print(f"Checking docs in {args.output_dir}...")
in_sync = check_docs_in_sync(args.output_dir, blocks)
if in_sync:
print("All documentation is in sync!")
sys.exit(0)
else:
print("\nDocumentation is out of sync!")
print(
"Run: cd autogpt_platform/backend && poetry run python scripts/generate_block_docs.py"
)
sys.exit(1)
else:
print(f"Generating docs to {args.output_dir}...")
write_block_docs(
args.output_dir,
blocks,
migrate=args.migrate,
verbose=args.verbose,
)
print("Done!")
if __name__ == "__main__":
main()


@@ -0,0 +1,211 @@
#!/usr/bin/env python3
"""
Migration script to preserve manual content from existing docs.
This script:
1. Reads all existing block documentation (from git HEAD)
2. Extracts manual content (How it works, Possible use case) by block name
3. Creates a JSON mapping of block_name -> manual_content
4. Generates new docs using current block structure while preserving manual content
"""
import json
import re
import subprocess
import sys
from pathlib import Path
sys.path.insert(0, str(Path(__file__).parent.parent))
from scripts.generate_block_docs import (
generate_block_markdown,
generate_overview_table,
get_block_file_mapping,
load_all_blocks_for_docs,
strip_markers,
)
def get_git_file_content(file_path: str) -> str | None:
"""Get file content from git HEAD."""
try:
result = subprocess.run(
["git", "show", f"HEAD:{file_path}"],
capture_output=True,
text=True,
cwd=Path(__file__).parent.parent.parent.parent, # repo root
)
if result.returncode == 0:
return result.stdout
return None
except Exception:
return None
def extract_blocks_from_doc(content: str) -> dict[str, dict[str, str]]:
"""Extract all block sections and their manual content from a doc file."""
blocks = {}
# Find all block headings (# or ##)
block_pattern = r"(?:^|\n)(##?) ([^\n]+)\n"
matches = list(re.finditer(block_pattern, content))
for i, match in enumerate(matches):
block_name = match.group(2).strip()
start = match.end()
# Find end (next heading or end of file)
if i + 1 < len(matches):
end = matches[i + 1].start()
else:
end = len(content)
block_content = content[start:end]
# Extract manual sections
manual_content = {}
# How it works
how_match = re.search(
r"### How it works\s*\n(.*?)(?=\n### |\Z)", block_content, re.DOTALL
)
if how_match:
text = strip_markers(how_match.group(1).strip())
# Skip if it's just placeholder or a table
if text and not text.startswith("|") and not text.startswith("_Add"):
manual_content["how_it_works"] = text
# Possible use case
use_case_match = re.search(
r"### Possible use case\s*\n(.*?)(?=\n### |\n## |\n---|\Z)",
block_content,
re.DOTALL,
)
if use_case_match:
text = strip_markers(use_case_match.group(1).strip())
if text and not text.startswith("_Add"):
manual_content["use_case"] = text
if manual_content:
blocks[block_name] = manual_content
return blocks
def collect_existing_manual_content() -> dict[str, dict[str, str]]:
"""Collect all manual content from existing git HEAD docs."""
all_manual_content = {}
# Find all existing md files via git
result = subprocess.run(
["git", "ls-files", "docs/content/platform/blocks/"],
capture_output=True,
text=True,
cwd=Path(__file__).parent.parent.parent.parent,
)
if result.returncode != 0:
print("Failed to list git files")
return {}
for file_path in result.stdout.strip().split("\n"):
if not file_path.endswith(".md"):
continue
if file_path.endswith("blocks.md"): # Skip overview
continue
print(f"Processing: {file_path}")
content = get_git_file_content(file_path)
if content:
blocks = extract_blocks_from_doc(content)
for block_name, manual_content in blocks.items():
if block_name in all_manual_content:
# Merge if already exists
all_manual_content[block_name].update(manual_content)
else:
all_manual_content[block_name] = manual_content
return all_manual_content
def run_migration():
"""Run the migration."""
print("Step 1: Collecting existing manual content from git HEAD...")
manual_content_cache = collect_existing_manual_content()
print(f"\nFound manual content for {len(manual_content_cache)} blocks")
# Show some examples
for name, content in list(manual_content_cache.items())[:3]:
print(f" - {name}: {list(content.keys())}")
# Save cache for reference
cache_path = Path(__file__).parent / "manual_content_cache.json"
with open(cache_path, "w") as f:
json.dump(manual_content_cache, f, indent=2)
print(f"\nSaved cache to {cache_path}")
print("\nStep 2: Loading blocks from code...")
blocks = load_all_blocks_for_docs()
print(f"Found {len(blocks)} blocks")
print("\nStep 3: Generating new documentation...")
output_dir = (
Path(__file__).parent.parent.parent.parent
/ "docs"
/ "content"
/ "platform"
/ "blocks"
)
file_mapping = get_block_file_mapping(blocks)
# Track statistics
preserved_count = 0
missing_count = 0
for file_path, file_blocks in file_mapping.items():
full_path = output_dir / file_path
full_path.parent.mkdir(parents=True, exist_ok=True)
content_parts = []
for i, block in enumerate(sorted(file_blocks, key=lambda b: b.name)):
# Look up manual content by block name
manual_content = manual_content_cache.get(block.name, {})
if manual_content:
preserved_count += 1
else:
# Try with class name
manual_content = manual_content_cache.get(block.class_name, {})
if not manual_content:
missing_count += 1
content_parts.append(
generate_block_markdown(
block,
manual_content,
is_first_in_file=(i == 0),
)
)
full_content = "\n".join(content_parts)
full_path.write_text(full_content)
print(f" Wrote {file_path} ({len(file_blocks)} blocks)")
# Generate overview
overview_content = generate_overview_table(blocks)
overview_path = output_dir / "blocks.md"
overview_path.write_text(overview_content)
print(" Wrote blocks.md (overview)")
print("\nMigration complete!")
print(f" - Blocks with preserved manual content: {preserved_count}")
print(f" - Blocks without manual content: {missing_count}")
print(
"\nYou can now run `poetry run python scripts/generate_block_docs.py --check` to verify"
)
if __name__ == "__main__":
run_migration()


@@ -0,0 +1,233 @@
#!/usr/bin/env python3
"""Tests for the block documentation generator."""
import pytest
from scripts.generate_block_docs import (
class_name_to_display_name,
extract_manual_content,
generate_anchor,
strip_markers,
type_to_readable,
)
class TestClassNameToDisplayName:
"""Tests for class_name_to_display_name function."""
def test_simple_block_name(self):
assert class_name_to_display_name("PrintBlock") == "Print"
def test_multi_word_block_name(self):
assert class_name_to_display_name("GetWeatherBlock") == "Get Weather"
def test_consecutive_capitals(self):
assert class_name_to_display_name("HTTPRequestBlock") == "HTTP Request"
def test_ai_prefix(self):
assert class_name_to_display_name("AIConditionBlock") == "AI Condition"
def test_no_block_suffix(self):
assert class_name_to_display_name("SomeClass") == "Some Class"
class TestTypeToReadable:
"""Tests for type_to_readable function."""
def test_string_type(self):
assert type_to_readable({"type": "string"}) == "str"
def test_integer_type(self):
assert type_to_readable({"type": "integer"}) == "int"
def test_number_type(self):
assert type_to_readable({"type": "number"}) == "float"
def test_boolean_type(self):
assert type_to_readable({"type": "boolean"}) == "bool"
def test_array_type(self):
result = type_to_readable({"type": "array", "items": {"type": "string"}})
assert result == "List[str]"
def test_object_type(self):
result = type_to_readable({"type": "object", "title": "MyModel"})
assert result == "MyModel"
def test_anyof_with_null(self):
result = type_to_readable({"anyOf": [{"type": "string"}, {"type": "null"}]})
assert result == "str"
def test_anyof_multiple_types(self):
result = type_to_readable({"anyOf": [{"type": "string"}, {"type": "integer"}]})
assert result == "str | int"
def test_enum_type(self):
result = type_to_readable(
{"type": "string", "enum": ["option1", "option2", "option3"]}
)
assert result == '"option1" | "option2" | "option3"'
def test_none_input(self):
assert type_to_readable(None) == "Any"
def test_non_dict_input(self):
assert type_to_readable("string") == "string"
class TestExtractManualContent:
"""Tests for extract_manual_content function."""
def test_extract_how_it_works(self):
content = """
### How it works
<!-- MANUAL: how_it_works -->
This is how it works.
<!-- END MANUAL -->
"""
result = extract_manual_content(content)
assert result == {"how_it_works": "This is how it works."}
def test_extract_use_case(self):
content = """
### Possible use case
<!-- MANUAL: use_case -->
Example use case here.
<!-- END MANUAL -->
"""
result = extract_manual_content(content)
assert result == {"use_case": "Example use case here."}
def test_extract_multiple_sections(self):
content = """
<!-- MANUAL: how_it_works -->
How it works content.
<!-- END MANUAL -->
<!-- MANUAL: use_case -->
Use case content.
<!-- END MANUAL -->
"""
result = extract_manual_content(content)
assert result == {
"how_it_works": "How it works content.",
"use_case": "Use case content.",
}
def test_empty_content(self):
result = extract_manual_content("")
assert result == {}
def test_no_markers(self):
result = extract_manual_content("Some content without markers")
assert result == {}
class TestStripMarkers:
"""Tests for strip_markers function."""
def test_strip_opening_marker(self):
content = "<!-- MANUAL: how_it_works -->\nContent here"
result = strip_markers(content)
assert result == "Content here"
def test_strip_closing_marker(self):
content = "Content here\n<!-- END MANUAL -->"
result = strip_markers(content)
assert result == "Content here"
def test_strip_both_markers(self):
content = "<!-- MANUAL: section -->\nContent here\n<!-- END MANUAL -->"
result = strip_markers(content)
assert result == "Content here"
def test_no_markers(self):
content = "Content without markers"
result = strip_markers(content)
assert result == "Content without markers"
class TestGenerateAnchor:
"""Tests for generate_anchor function."""
def test_simple_name(self):
assert generate_anchor("Print") == "print"
def test_multi_word_name(self):
assert generate_anchor("Get Weather") == "get-weather"
def test_name_with_parentheses(self):
assert generate_anchor("Something (Optional)") == "something-optional"
def test_already_lowercase(self):
assert generate_anchor("already lowercase") == "already-lowercase"
class TestIntegration:
"""Integration tests that require block loading."""
def test_load_blocks(self):
"""Test that blocks can be loaded successfully."""
import logging
import sys
from pathlib import Path
logging.disable(logging.CRITICAL)
sys.path.insert(0, str(Path(__file__).parent.parent))
from scripts.generate_block_docs import load_all_blocks_for_docs
blocks = load_all_blocks_for_docs()
assert len(blocks) > 0, "Should load at least one block"
def test_block_doc_has_required_fields(self):
"""Test that extracted block docs have required fields."""
import logging
import sys
from pathlib import Path
logging.disable(logging.CRITICAL)
sys.path.insert(0, str(Path(__file__).parent.parent))
from scripts.generate_block_docs import load_all_blocks_for_docs
blocks = load_all_blocks_for_docs()
block = blocks[0]
assert hasattr(block, "id")
assert hasattr(block, "name")
assert hasattr(block, "description")
assert hasattr(block, "categories")
assert hasattr(block, "inputs")
assert hasattr(block, "outputs")
def test_file_mapping_is_deterministic(self):
"""Test that file mapping produces consistent results."""
import logging
import sys
from pathlib import Path
logging.disable(logging.CRITICAL)
sys.path.insert(0, str(Path(__file__).parent.parent))
from scripts.generate_block_docs import (
get_block_file_mapping,
load_all_blocks_for_docs,
)
# Load blocks twice and compare mappings
blocks1 = load_all_blocks_for_docs()
blocks2 = load_all_blocks_for_docs()
mapping1 = get_block_file_mapping(blocks1)
mapping2 = get_block_file_mapping(blocks2)
# Check same files are generated
assert set(mapping1.keys()) == set(mapping2.keys())
# Check same block counts per file
for file_path in mapping1:
assert len(mapping1[file_path]) == len(mapping2[file_path])
if __name__ == "__main__":
pytest.main([__file__, "-v"])


@@ -0,0 +1,75 @@
# Airtable Create Base
### What it is
Create or find a base in Airtable
### How it works
<!-- MANUAL: how_it_works -->
This block creates a new Airtable base in a specified workspace, or finds an existing one with the same name. When creating, you can optionally define initial tables and their fields to set up the schema.
Enable find_existing to search for a base with the same name before creating a new one, preventing duplicates in your workspace.
<!-- END MANUAL -->
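For reference, the underlying call is roughly equivalent to the following sketch against the Airtable Web API (the token, workspace ID, and table definition below are placeholders; the duplicate check for `find_existing` is handled by the block itself):

```python
import requests

token = "patXXXXXXXX"  # placeholder personal access token
url = "https://api.airtable.com/v0/meta/bases"

payload = {
    "name": "Project Tracker",
    "workspaceId": "wspXXXXXXXX",  # placeholder workspace ID
    "tables": [  # at least one table with one field is required
        {"name": "Tasks", "fields": [{"name": "Title", "type": "singleLineText"}]}
    ],
}
resp = requests.post(url, json=payload, headers={"Authorization": f"Bearer {token}"})
resp.raise_for_status()
print(resp.json()["id"])  # ID of the newly created base
```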
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| workspace_id | The workspace ID where the base will be created | str | Yes |
| name | The name of the new base | str | Yes |
| find_existing | If true, return existing base with same name instead of creating duplicate | bool | No |
| tables | At least one table and field must be specified. Array of table objects to create in the base. Each table should have 'name' and 'fields' properties | List[Dict[str, True]] | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| base_id | The ID of the created or found base | str |
| tables | Array of table objects | List[Dict[str, True]] |
| table | A single table object | Dict[str, True] |
| was_created | True if a new base was created, False if existing was found | bool |
### Possible use case
<!-- MANUAL: use_case -->
**Project Setup**: Automatically create new bases when projects start with predefined table structures.
**Template Deployment**: Deploy standardized base templates across teams or clients.
**Multi-Tenant Apps**: Create separate bases for each customer or project programmatically.
<!-- END MANUAL -->
---
## Airtable List Bases
### What it is
List all bases in Airtable
### How it works
<!-- MANUAL: how_it_works -->
This block retrieves a list of all Airtable bases accessible to your connected account. It returns basic information about each base including ID, name, and permission level.
Results are paginated; use the offset output to retrieve additional pages if there are more bases than returned in a single call.
<!-- END MANUAL -->
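A minimal pagination loop against the Airtable meta API, mirroring how this block follows the offset (token is a placeholder):

```python
import requests

token = "patXXXXXXXX"  # placeholder
url = "https://api.airtable.com/v0/meta/bases"

bases, offset = [], None
while True:
    params = {"offset": offset} if offset else {}
    data = requests.get(url, params=params,
                        headers={"Authorization": f"Bearer {token}"}).json()
    bases.extend(data.get("bases", []))
    offset = data.get("offset")  # null/absent when there are no more pages
    if not offset:
        break
print(f"{len(bases)} bases accessible")
```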
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| trigger | Trigger the block to run - value is ignored | str | No |
| offset | Pagination offset from previous request | str | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| bases | Array of base objects | List[Dict[str, True]] |
| offset | Offset for next page (null if no more bases) | str |
### Possible use case
<!-- MANUAL: use_case -->
**Base Discovery**: Find available bases for building dynamic dropdowns or navigation.
**Inventory Management**: List all bases in an organization for auditing or documentation.
**Cross-Base Operations**: Enumerate bases to perform operations across multiple databases.
<!-- END MANUAL -->
---


@@ -0,0 +1,199 @@
# Airtable Create Records
### What it is
Create records in an Airtable table
### How it works
<!-- MANUAL: how_it_works -->
This block creates new records in an Airtable table using the Airtable API. Each record is specified with a fields object containing field names and values. You can create up to 10 records in a single call.
Enable typecast to automatically convert string values to appropriate field types (dates, numbers, etc.). The block returns the created records with their assigned IDs.
<!-- END MANUAL -->
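A minimal sketch of the equivalent Airtable REST call (token, base ID, table name, and field values are placeholders):

```python
import requests

base_id, table = "appXXXXXXXX", "Tasks"  # placeholders
url = f"https://api.airtable.com/v0/{base_id}/{table}"

payload = {
    "records": [
        {"fields": {"Title": "Write launch post", "Due": "2026-02-01"}},
        {"fields": {"Title": "Review metrics"}},
    ],
    "typecast": True,  # let Airtable coerce strings into dates, numbers, etc.
}
resp = requests.post(url, json=payload,
                     headers={"Authorization": "Bearer patXXXXXXXX"})
print([r["id"] for r in resp.json()["records"]])  # IDs assigned to the new records
```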
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| base_id | The Airtable base ID | str | Yes |
| table_id_or_name | Table ID or name | str | Yes |
| records | Array of records to create (each with 'fields' object) | List[Dict[str, True]] | Yes |
| skip_normalization | Skip output normalization to get raw Airtable response (faster but may have missing fields) | bool | No |
| typecast | Automatically convert string values to appropriate types | bool | No |
| return_fields_by_field_id | Return fields by field ID | bool | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| records | Array of created record objects | List[Dict[str, True]] |
| details | Details of the created records | Dict[str, True] |
### Possible use case
<!-- MANUAL: use_case -->
**Data Import**: Bulk import data from external sources into Airtable tables.
**Form Submissions**: Create records from form submissions or API integrations.
**Workflow Output**: Save workflow results or processed data to Airtable for tracking.
<!-- END MANUAL -->
---
## Airtable Delete Records
### What it is
Delete records from an Airtable table
### How it works
<!-- MANUAL: how_it_works -->
This block deletes records from an Airtable table by their record IDs. You can delete up to 10 records in a single call. The operation is permanent and cannot be undone.
Provide an array of record IDs to delete. Using the table ID instead of the name is recommended for reliability.
<!-- END MANUAL -->
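A minimal sketch of the equivalent Airtable REST call (IDs and token are placeholders):

```python
import requests

base_id, table = "appXXXXXXXX", "tblXXXXXXXX"  # table ID preferred over name
url = f"https://api.airtable.com/v0/{base_id}/{table}"

# Up to 10 record IDs per call, passed as repeated records[] query parameters
params = [("records[]", rid) for rid in ["recAAAAAAAA", "recBBBBBBBB"]]
resp = requests.delete(url, params=params,
                       headers={"Authorization": "Bearer patXXXXXXXX"})
print(resp.json()["records"])  # [{"id": "...", "deleted": true}, ...]
```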
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| base_id | The Airtable base ID | str | Yes |
| table_id_or_name | Table ID or name - It's better to use the table ID instead of the name | str | Yes |
| record_ids | Array of up to 10 record IDs to delete | List[str] | Yes |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| records | Array of deletion results | List[Dict[str, True]] |
### Possible use case
<!-- MANUAL: use_case -->
**Data Cleanup**: Remove outdated or duplicate records from tables.
**Workflow Cleanup**: Delete temporary records after processing is complete.
**Batch Removal**: Remove multiple records that match certain criteria.
<!-- END MANUAL -->
---
## Airtable Get Record
### What it is
Get a single record from Airtable
### How it works
<!-- MANUAL: how_it_works -->
This block retrieves a single record from an Airtable table by its ID. The record includes all field values and metadata like creation time. Enable normalize_output to ensure all fields are included with proper empty values.
Optionally include field metadata for type information and configuration details about each field.
<!-- END MANUAL -->
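A minimal sketch of the raw retrieval call (IDs and token are placeholders); note that Airtable omits empty fields from the raw response, which is what the normalize_output option compensates for:

```python
import requests

url = "https://api.airtable.com/v0/appXXXXXXXX/tblXXXXXXXX/recXXXXXXXX"
record = requests.get(url, headers={"Authorization": "Bearer patXXXXXXXX"}).json()

print(record["id"], record["createdTime"])
print(record["fields"])  # only non-empty fields appear in the raw response
```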
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| base_id | The Airtable base ID | str | Yes |
| table_id_or_name | Table ID or name | str | Yes |
| record_id | The record ID to retrieve | str | Yes |
| normalize_output | Normalize output to include all fields with proper empty values (disable to skip schema fetch and get raw Airtable response) | bool | No |
| include_field_metadata | Include field type and configuration metadata (requires normalize_output=true) | bool | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| id | The record ID | str |
| fields | The record fields | Dict[str, True] |
| created_time | The record created time | str |
| field_metadata | Field type and configuration metadata (only when include_field_metadata=true) | Dict[str, True] |
### Possible use case
<!-- MANUAL: use_case -->
**Detail View**: Fetch complete record data for display or detailed processing.
**Record Lookup**: Retrieve specific records by ID from webhook payloads or references.
**Data Validation**: Check record contents before performing updates or related operations.
<!-- END MANUAL -->
---
## Airtable List Records
### What it is
List records from an Airtable table
### How it works
<!-- MANUAL: how_it_works -->
This block queries records from an Airtable table with optional filtering, sorting, and pagination. Use Airtable formulas to filter records and specify sort order by field and direction.
Results can be limited, paginated with offsets, and restricted to specific fields. Enable normalize_output for consistent field values across records.
<!-- END MANUAL -->
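A minimal sketch of an equivalent filtered, sorted, paginated query (base ID, table, formula, and token are placeholders):

```python
import requests

url = "https://api.airtable.com/v0/appXXXXXXXX/Tasks"
headers = {"Authorization": "Bearer patXXXXXXXX"}
params = {
    "filterByFormula": "{Status} = 'Open'",  # Airtable formula syntax
    "sort[0][field]": "Due",
    "sort[0][direction]": "asc",
    "pageSize": 100,
}

records, offset = [], None
while True:
    if offset:
        params["offset"] = offset
    data = requests.get(url, params=params, headers=headers).json()
    records.extend(data["records"])
    offset = data.get("offset")
    if not offset:
        break
print(f"{len(records)} open tasks")
```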
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| base_id | The Airtable base ID | str | Yes |
| table_id_or_name | Table ID or name | str | Yes |
| filter_formula | Airtable formula to filter records | str | No |
| view | View ID or name to use | str | No |
| sort | Sort configuration (array of {field, direction}) | List[Dict[str, True]] | No |
| max_records | Maximum number of records to return | int | No |
| page_size | Number of records per page (max 100) | int | No |
| offset | Pagination offset from previous request | str | No |
| return_fields | Specific fields to return (comma-separated) | List[str] | No |
| normalize_output | Normalize output to include all fields with proper empty values (disable to skip schema fetch and get raw Airtable response) | bool | No |
| include_field_metadata | Include field type and configuration metadata (requires normalize_output=true) | bool | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| records | Array of record objects | List[Dict[str, True]] |
| offset | Offset for next page (null if no more records) | str |
| field_metadata | Field type and configuration metadata (only when include_field_metadata=true) | Dict[str, True] |
### Possible use case
<!-- MANUAL: use_case -->
**Report Generation**: Query records with filters to build reports or dashboards.
**Data Export**: Fetch records matching criteria for export to other systems.
**Batch Processing**: List records to process in subsequent workflow steps.
<!-- END MANUAL -->
---
## Airtable Update Records
### What it is
Update records in an Airtable table
### How it works
<!-- MANUAL: how_it_works -->
This block updates existing records in an Airtable table. Each record update requires the record ID and a fields object with the values to update. Only specified fields are modified; other fields remain unchanged.
Enable typecast to automatically convert string values to appropriate types. You can update up to 10 records per call.
<!-- END MANUAL -->
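A minimal sketch of the equivalent PATCH call (IDs, token, and field values are placeholders):

```python
import requests

url = "https://api.airtable.com/v0/appXXXXXXXX/tblXXXXXXXX"
payload = {
    "records": [
        {"id": "recAAAAAAAA", "fields": {"Status": "Done"}},       # only these fields change
        {"id": "recBBBBBBBB", "fields": {"Status": "In review"}},
    ],
    "typecast": True,
}
resp = requests.patch(url, json=payload,
                      headers={"Authorization": "Bearer patXXXXXXXX"})
print([r["fields"]["Status"] for r in resp.json()["records"]])
```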
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| base_id | The Airtable base ID | str | Yes |
| table_id_or_name | Table ID or name - It's better to use the table ID instead of the name | str | Yes |
| records | Array of records to update (each with 'id' and 'fields') | List[Dict[str, True]] | Yes |
| typecast | Automatically convert string values to appropriate types | bool | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| records | Array of updated record objects | List[Dict[str, True]] |
### Possible use case
<!-- MANUAL: use_case -->
**Status Updates**: Update record status fields as workflows progress.
**Data Enrichment**: Add computed or fetched data to existing records.
**Batch Modifications**: Update multiple records based on processed results.
<!-- END MANUAL -->
---


@@ -0,0 +1,187 @@
# Airtable Create Field
### What it is
Add a new field to an Airtable table
### How it works
<!-- MANUAL: how_it_works -->
This block adds a new field to an existing Airtable table using the Airtable API. Specify the field type (text, email, URL, etc.), name, and optional description and configuration options.
The field is created immediately and becomes available for use in all records. Returns the created field object with its assigned ID.
<!-- END MANUAL -->
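A minimal sketch of the equivalent schema call (IDs and token are placeholders):

```python
import requests

base_id, table_id = "appXXXXXXXX", "tblXXXXXXXX"
url = f"https://api.airtable.com/v0/meta/bases/{base_id}/tables/{table_id}/fields"

payload = {
    "name": "Contact email",
    "type": "email",
    "description": "Primary contact address for this record",
}
field = requests.post(url, json=payload,
                      headers={"Authorization": "Bearer patXXXXXXXX"}).json()
print(field["id"])  # e.g. "fldXXXXXXXX"
```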
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| base_id | The Airtable base ID | str | Yes |
| table_id | The table ID to add field to | str | Yes |
| field_type | The type of the field to create | "singleLineText" | "email" | "url" | No |
| name | The name of the field to create | str | Yes |
| description | The description of the field to create | str | No |
| options | The options of the field to create | Dict[str, str] | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| field | Created field object | Dict[str, True] |
| field_id | ID of the created field | str |
### Possible use case
<!-- MANUAL: use_case -->
**Schema Evolution**: Add new fields to tables as application requirements grow.
**Dynamic Forms**: Create fields based on user configuration or form builder settings.
**Data Integration**: Add fields to capture data from newly integrated external systems.
<!-- END MANUAL -->
---
## Airtable Create Table
### What it is
Create a new table in an Airtable base
### How it works
<!-- MANUAL: how_it_works -->
This block creates a new table in an Airtable base with the specified name and optional field definitions. Each field definition includes name, type, and type-specific options.
The table is created with the defined schema and is immediately ready for use. Returns the created table object with its ID.
<!-- END MANUAL -->
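A minimal sketch of the equivalent schema call (base ID, token, and field definitions are placeholders):

```python
import requests

url = "https://api.airtable.com/v0/meta/bases/appXXXXXXXX/tables"
payload = {
    "name": "Invoices",
    "fields": [
        {"name": "Invoice number", "type": "singleLineText"},
        {"name": "Amount", "type": "number", "options": {"precision": 2}},
    ],
}
table = requests.post(url, json=payload,
                      headers={"Authorization": "Bearer patXXXXXXXX"}).json()
print(table["id"])  # e.g. "tblXXXXXXXX"
```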
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| base_id | The Airtable base ID | str | Yes |
| table_name | The name of the table to create | str | Yes |
| table_fields | Table fields with name, type, and options | List[Dict[str, True]] | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| table | Created table object | Dict[str, True] |
| table_id | ID of the created table | str |
### Possible use case
<!-- MANUAL: use_case -->
**Application Scaffolding**: Create tables programmatically when setting up new application modules.
**Multi-Tenant Setup**: Generate customer-specific tables dynamically.
**Feature Expansion**: Add new tables as features are enabled or installed.
<!-- END MANUAL -->
---
## Airtable List Schema
### What it is
Get the complete schema of an Airtable base
### How it works
<!-- MANUAL: how_it_works -->
This block retrieves the complete schema of an Airtable base, including all tables, their fields, field types, and views. This metadata is essential for building dynamic integrations that need to understand table structure.
The schema includes field configurations, validation rules, and relationship definitions between tables.
<!-- END MANUAL -->
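A minimal sketch of fetching and walking the schema (base ID and token are placeholders):

```python
import requests

url = "https://api.airtable.com/v0/meta/bases/appXXXXXXXX/tables"
schema = requests.get(url, headers={"Authorization": "Bearer patXXXXXXXX"}).json()

for table in schema["tables"]:
    field_names = [f["name"] for f in table["fields"]]
    print(table["name"], "->", field_names)
```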
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| base_id | The Airtable base ID | str | Yes |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| base_schema | Complete base schema with tables, fields, and views | Dict[str, True] |
| tables | Array of table objects | List[Dict[str, True]] |
### Possible use case
<!-- MANUAL: use_case -->
**Schema Discovery**: Understand table structure for building dynamic forms or queries.
**Documentation**: Generate documentation of database schema automatically.
**Migration Planning**: Analyze schema before migrating data to other systems.
<!-- END MANUAL -->
---
## Airtable Update Field
### What it is
Update field properties in an Airtable table
### How it works
<!-- MANUAL: how_it_works -->
This block updates properties of an existing field in an Airtable table. You can modify the field name and description. Note that field type cannot be changed after creation.
Changes take effect immediately across all records and views that use the field.
<!-- END MANUAL -->
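A minimal sketch of the equivalent PATCH call (IDs, token, and values are placeholders):

```python
import requests

url = ("https://api.airtable.com/v0/meta/bases/appXXXXXXXX"
       "/tables/tblXXXXXXXX/fields/fldXXXXXXXX")
payload = {"name": "Customer email", "description": "Verified contact address"}
field = requests.patch(url, json=payload,
                       headers={"Authorization": "Bearer patXXXXXXXX"}).json()
print(field["name"])
```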
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| base_id | The Airtable base ID | str | Yes |
| table_id | The table ID containing the field | str | Yes |
| field_id | The field ID to update | str | Yes |
| name | The name of the field to update | str | No |
| description | The description of the field to update | str | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| field | Updated field object | Dict[str, True] |
### Possible use case
<!-- MANUAL: use_case -->
**Field Renaming**: Update field names to match evolving terminology or standards.
**Documentation Updates**: Add or update field descriptions for better team understanding.
**Schema Maintenance**: Keep field metadata current as application requirements change.
<!-- END MANUAL -->
---
## Airtable Update Table
### What it is
Update table properties
### How it works
<!-- MANUAL: how_it_works -->
This block updates table properties in an Airtable base. You can change the table name, description, and date dependency settings. Changes apply immediately and affect all users accessing the table.
This is useful for maintaining table metadata and organizing your base structure.
<!-- END MANUAL -->
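A minimal sketch of the equivalent PATCH call (IDs, token, and values are placeholders):

```python
import requests

url = "https://api.airtable.com/v0/meta/bases/appXXXXXXXX/tables/tblXXXXXXXX"
payload = {"name": "Active projects", "description": "Projects currently in flight"}
table = requests.patch(url, json=payload,
                       headers={"Authorization": "Bearer patXXXXXXXX"}).json()
print(table["name"])
```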
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| base_id | The Airtable base ID | str | Yes |
| table_id | The table ID to update | str | Yes |
| table_name | The name of the table to update | str | No |
| table_description | The description of the table to update | str | No |
| date_dependency | The date dependency of the table to update | Dict[str, True] | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| table | Updated table object | Dict[str, True] |
### Possible use case
<!-- MANUAL: use_case -->
**Table Organization**: Rename tables to follow naming conventions or reflect current usage.
**Description Management**: Update table descriptions for documentation purposes.
**Configuration Updates**: Modify table settings like date dependencies as requirements change.
<!-- END MANUAL -->
---


@@ -0,0 +1,35 @@
# Airtable Webhook Trigger
### What it is
Starts a flow whenever Airtable emits a webhook event
### How it works
<!-- MANUAL: how_it_works -->
This block subscribes to Airtable webhook events for a specific base and table. When records are created, updated, or deleted, Airtable sends a webhook notification that triggers your workflow.
You specify which events to listen for using the event selector. The webhook payload includes details about the changed records and the type of change that occurred.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| base_id | Airtable base ID | str | Yes |
| table_id_or_name | Airtable table ID or name | str | Yes |
| events | Airtable webhook event filter | AirtableEventSelector | Yes |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| payload | Airtable webhook payload | WebhookPayload |
### Possible use case
<!-- MANUAL: use_case -->
**Real-Time Sync**: Automatically sync Airtable changes to other systems like CRMs or databases.
**Notification Workflows**: Send alerts when specific records are created or modified in Airtable.
**Automated Processing**: Trigger document generation or emails when new entries are added to a table.
<!-- END MANUAL -->
---


@@ -0,0 +1,54 @@
# Search Organizations
### What it is
Search for organizations in Apollo
### How it works
<!-- MANUAL: how_it_works -->
This block searches the Apollo database for organizations using various filters like employee count, location, and keywords. Apollo maintains a comprehensive database of company information for sales and marketing purposes.
Results can be filtered by headquarters location, excluded locations, industry keywords, and specific Apollo organization IDs.
<!-- END MANUAL -->
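As a rough illustration only, a search similar to what this block performs might look like the sketch below. The endpoint path, header, and exact parameter spellings follow Apollo's public API as commonly documented and should be treated as assumptions; the API key is a placeholder.

```python
import requests

url = "https://api.apollo.io/api/v1/mixed_companies/search"  # assumed endpoint
payload = {
    "organization_locations": ["chicago"],
    "q_organization_keyword_tags": ["mining"],
    "organization_num_employees_ranges": ["50,200"],  # "lower,upper" strings
    "per_page": 100,
}
resp = requests.post(url, json=payload,
                     headers={"X-Api-Key": "APOLLO_API_KEY"})  # placeholder key
for org in resp.json().get("organizations", []):
    print(org.get("name"))
```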
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| organization_num_employees_range | The number range of employees working for the company. This enables you to find companies based on headcount. You can add multiple ranges to expand your search results. Each range you add needs to be a string, with the upper and lower numbers of the range separated only by a comma. | List[int] | No |
| organization_locations | The location of the company headquarters. You can search across cities, US states, and countries. If a company has several office locations, results are still based on the headquarters location. For example, if you search chicago but a company's HQ location is in boston, any Boston-based companies will not appear in your search results, even if they match other parameters. To exclude companies based on location, use the organization_not_locations parameter. | List[str] | No |
| organizations_not_locations | Exclude companies from search results based on the location of the company headquarters. You can use cities, US states, and countries as locations to exclude. This parameter is useful for ensuring you do not prospect in an undesirable territory. For example, if you use ireland as a value, no Ireland-based companies will appear in your search results. | List[str] | No |
| q_organization_keyword_tags | Filter search results based on keywords associated with companies. For example, you can enter mining as a value to return only companies that have an association with the mining industry. | List[str] | No |
| q_organization_name | Filter search results to include a specific company name. If the value you enter for this parameter does not match with a company's name, the company will not appear in search results, even if it matches other parameters. Partial matches are accepted. For example, if you filter by the value marketing, a company called NY Marketing Unlimited would still be eligible as a search result, but NY Market Analysis would not be eligible. | str | No |
| organization_ids | The Apollo IDs for the companies you want to include in your search results. Each company in the Apollo database is assigned a unique ID. To find IDs, identify the values for organization_id when you call this endpoint. | List[str] | No |
| max_results | The maximum number of results to return. If you don't specify this parameter, the default is 100. | int | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the search failed | str |
| organizations | List of organizations found | List[Dict[str, True]] |
| organization | Each found organization, one at a time | Dict[str, True] |
### Possible use case
<!-- MANUAL: use_case -->
**Market Research**: Find companies matching specific criteria for market analysis.
**Lead List Building**: Build targeted lists of companies for outbound sales campaigns.
**Competitive Intelligence**: Research competitors and similar companies in your market.
<!-- END MANUAL -->
---


@@ -0,0 +1,68 @@
# Search People
### What it is
Search for people in Apollo
### How it works
<!-- MANUAL: how_it_works -->
This block searches Apollo's database for people based on job titles, seniority, location, company, and other criteria. It's designed for finding prospects and contacts for sales and marketing.
Enable enrich_info to get detailed contact information including verified email addresses (costs more credits).
<!-- END MANUAL -->
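As a rough illustration only (endpoint path and parameter spellings follow Apollo's public API as commonly documented and should be treated as assumptions; the key is a placeholder):

```python
import requests

url = "https://api.apollo.io/api/v1/mixed_people/search"  # assumed endpoint
payload = {
    "person_titles": ["marketing manager"],
    "person_seniorities": ["director"],
    "person_locations": ["United States"],
    "contact_email_statuses": ["verified"],
    "per_page": 25,
}
resp = requests.post(url, json=payload,
                     headers={"X-Api-Key": "APOLLO_API_KEY"})  # placeholder key
for person in resp.json().get("people", []):
    print(person.get("name"), person.get("title"))
```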
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| person_titles | Job titles held by the people you want to find. For a person to be included in search results, they only need to match 1 of the job titles you add. Adding more job titles expands your search results. Results also include job titles with the same terms, even if they are not exact matches. For example, searching for marketing manager might return people with the job title content marketing manager. Use this parameter in combination with the person_seniorities[] parameter to find people based on specific job functions and seniority levels. | List[str] | No |
| person_locations | The location where people live. You can search across cities, US states, and countries. To find people based on the headquarters locations of their current employer, use the organization_locations parameter. | List[str] | No |
| person_seniorities | The job seniority that people hold within their current employer. This enables you to find people that currently hold positions at certain reporting levels, such as Director level or senior IC level. For a person to be included in search results, they only need to match 1 of the seniorities you add. Adding more seniorities expands your search results. Searches only return results based on their current job title, so searching for Director-level employees only returns people that currently hold a Director-level title. If someone was previously a Director, but is currently a VP, they would not be included in your search results. Use this parameter in combination with the person_titles[] parameter to find people based on specific job functions and seniority levels. | List["owner" | "founder" | "c_suite"] | No |
| organization_locations | The location of the company headquarters for a person's current employer. You can search across cities, US states, and countries. If a company has several office locations, results are still based on the headquarters location. For example, if you search chicago but a company's HQ location is in boston, people that work for the Boston-based company will not appear in your results, even if they match other parameters. To find people based on their personal location, use the person_locations parameter. | List[str] | No |
| q_organization_domains | The domain name for the person's employer. This can be the current employer or a previous employer. Do not include www., the @ symbol, or similar. You can add multiple domains to search across companies. Examples: apollo.io and microsoft.com | List[str] | No |
| contact_email_statuses | The email statuses for the people you want to find. You can add multiple statuses to expand your search. | List["verified" | "unverified" | "likely_to_engage"] | No |
| organization_ids | The Apollo IDs for the companies (employers) you want to include in your search results. Each company in the Apollo database is assigned a unique ID. To find IDs, call the Organization Search endpoint and identify the values for organization_id. | List[str] | No |
| organization_num_employees_range | The number range of employees working for the company. This enables you to find companies based on headcount. You can add multiple ranges to expand your search results. Each range you add needs to be a string, with the upper and lower numbers of the range separated only by a comma. | List[int] | No |
| q_keywords | A string of words over which we want to filter the results | str | No |
| max_results | The maximum number of results to return. If you don't specify this parameter, the default is 25. Limited to 500 to prevent overspending. | int | No |
| enrich_info | Whether to enrich contacts with detailed information including real email addresses. This will double the search cost. | bool | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the search failed | str |
| people | List of people found | List[Dict[str, True]] |
### Possible use case
<!-- MANUAL: use_case -->
**Prospecting**: Find decision-makers at target companies for outbound sales.
**Recruiting**: Search for candidates with specific titles and experience.
**ABM Campaigns**: Build contact lists at specific accounts for account-based marketing.
<!-- END MANUAL -->
---


@@ -0,0 +1,42 @@
# Get Person Detail
### What it is
Get detailed person data with Apollo API, including email reveal
### How it works
<!-- MANUAL: how_it_works -->
This block enriches person data using Apollo's API. You can look up by Apollo person ID for best accuracy, or match by name plus company information, LinkedIn URL, or email address.
Returns comprehensive contact details including email addresses (if available), job title, company information, and social profiles.
<!-- END MANUAL -->
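As a rough illustration only, an enrichment lookup by name and company domain might resemble the sketch below. The endpoint path and response shape follow Apollo's public API as commonly documented and should be treated as assumptions; the key and values are placeholders.

```python
import requests

url = "https://api.apollo.io/api/v1/people/match"  # assumed enrichment endpoint
payload = {
    "first_name": "Jane",
    "last_name": "Doe",
    "domain": "example.com",
}
resp = requests.post(url, json=payload,
                     headers={"X-Api-Key": "APOLLO_API_KEY"})  # placeholder key
person = resp.json().get("person", {})
print(person.get("email"), person.get("title"), person.get("linkedin_url"))
```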
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| person_id | Apollo person ID to enrich (most accurate method) | str | No |
| first_name | First name of the person to enrich | str | No |
| last_name | Last name of the person to enrich | str | No |
| name | Full name of the person to enrich (alternative to first_name + last_name) | str | No |
| email | Known email address of the person (helps with matching) | str | No |
| domain | Company domain of the person (e.g., 'google.com') | str | No |
| company | Company name of the person | str | No |
| linkedin_url | LinkedIn URL of the person | str | No |
| organization_id | Apollo organization ID of the person's company | str | No |
| title | Job title of the person to enrich | str | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if enrichment failed | str |
| contact | Enriched contact information | Dict[str, True] |
### Possible use case
<!-- MANUAL: use_case -->
**Contact Enrichment**: Get full contact details from partial information like name and company.
**Email Discovery**: Find verified email addresses for outreach campaigns.
**Profile Completion**: Fill in missing contact details in your CRM or database.
<!-- END MANUAL -->
---


@@ -0,0 +1,45 @@
# Post To Bluesky
### What it is
Post to Bluesky using Ayrshare
### How it works
<!-- MANUAL: how_it_works -->
This block uses Ayrshare's social media API to publish content to Bluesky. It handles text posts (up to 300 characters), images (up to 4), and video content with support for scheduling, accessibility features like alt text, and link shortening.
The block authenticates through your Ayrshare credentials and sends the post data to Ayrshare's unified API, which then publishes to Bluesky. It returns post identifiers and status information upon completion, or error details if the operation fails.
<!-- END MANUAL -->
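All of the Ayrshare posting blocks in this document wrap the same unified posting endpoint; a minimal sketch for a Bluesky post is below. The API key is a placeholder, a connected Bluesky profile is assumed, and the field names follow Ayrshare's public API.

```python
import requests

payload = {
    "post": "Shipping a new release today.",
    "platforms": ["bluesky"],                       # Ayrshare routes by platform name
    "mediaUrls": ["https://example.com/screenshot.png"],
    "scheduleDate": "2026-02-01T15:00:00Z",         # optional UTC schedule
}
resp = requests.post(
    "https://app.ayrshare.com/api/post",
    json=payload,
    headers={"Authorization": "Bearer AYRSHARE_API_KEY"},  # placeholder key
)
data = resp.json()
print(data.get("status"), data.get("postIds"))
```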
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| post | The post text to be published (max 300 characters for Bluesky) | str | No |
| media_urls | Optional list of media URLs to include. Bluesky supports up to 4 images or 1 video. | List[str] | No |
| is_video | Whether the media is a video | bool | No |
| schedule_date | UTC datetime for scheduling (YYYY-MM-DDThh:mm:ssZ) | str (date-time) | No |
| disable_comments | Whether to disable comments | bool | No |
| shorten_links | Whether to shorten links | bool | No |
| unsplash | Unsplash image configuration | str | No |
| requires_approval | Whether to enable approval workflow | bool | No |
| random_post | Whether to generate random post text | bool | No |
| random_media_url | Whether to generate random media | bool | No |
| notes | Additional notes for the post | str | No |
| alt_text | Alt text for each media item (accessibility) | List[str] | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| post_result | The result of the post | PostResponse |
| post | The result of the post | PostIds |
### Possible use case
<!-- MANUAL: use_case -->
**Cross-Platform Publishing**: Automatically share content across Bluesky and other social networks from a single workflow.
**Scheduled Content Calendar**: Queue up posts with specific publishing times to maintain consistent presence on Bluesky.
**Visual Content Sharing**: Share image galleries with accessibility-friendly alt text for photo-focused content strategies.
<!-- END MANUAL -->
---


@@ -0,0 +1,61 @@
# Post To Facebook
### What it is
Post to Facebook using Ayrshare
### How it works
<!-- MANUAL: how_it_works -->
This block uses Ayrshare's social media API to publish content to Facebook Pages. It supports text posts, images, videos, carousels (2-10 items), Reels, and Stories, with features like audience targeting by age and country, location tagging, and scheduling.
The block authenticates through Ayrshare and leverages the Meta Graph API to handle various Facebook-specific formats. Advanced options include draft mode for Meta Business Suite, custom link previews, and video thumbnails. Results include post IDs for tracking engagement.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| post | The post text to be published | str | No |
| media_urls | Optional list of media URLs to include. Set is_video in advanced settings to true if you want to upload videos. | List[str] | No |
| is_video | Whether the media is a video | bool | No |
| schedule_date | UTC datetime for scheduling (YYYY-MM-DDThh:mm:ssZ) | str (date-time) | No |
| disable_comments | Whether to disable comments | bool | No |
| shorten_links | Whether to shorten links | bool | No |
| unsplash | Unsplash image configuration | str | No |
| requires_approval | Whether to enable approval workflow | bool | No |
| random_post | Whether to generate random post text | bool | No |
| random_media_url | Whether to generate random media | bool | No |
| notes | Additional notes for the post | str | No |
| is_carousel | Whether to post a carousel | bool | No |
| carousel_link | The URL for the 'See More At' button in the carousel | str | No |
| carousel_items | List of carousel items with name, link and picture URLs. Min 2, max 10 items. | List[CarouselItem] | No |
| is_reels | Whether to post to Facebook Reels | bool | No |
| reels_title | Title for the Reels video (max 255 chars) | str | No |
| reels_thumbnail | Thumbnail URL for Reels video (JPEG/PNG, <10MB) | str | No |
| is_story | Whether to post as a Facebook Story | bool | No |
| media_captions | Captions for each media item | List[str] | No |
| location_id | Facebook Page ID or name for location tagging | str | No |
| age_min | Minimum age for audience targeting (13,15,18,21,25) | int | No |
| target_countries | List of country codes to target (max 25) | List[str] | No |
| alt_text | Alt text for each media item | List[str] | No |
| video_title | Title for video post | str | No |
| video_thumbnail | Thumbnail URL for video post | str | No |
| is_draft | Save as draft in Meta Business Suite | bool | No |
| scheduled_publish_date | Schedule publish time in Meta Business Suite (UTC) | str | No |
| preview_link | URL for custom link preview | str | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| post_result | The result of the post | PostResponse |
| post | The result of the post | PostIds |
### Possible use case
<!-- MANUAL: use_case -->
**Product Launches**: Create carousel posts showcasing multiple product images with links to purchase pages.
**Event Promotion**: Share event details with age-targeted reach and location tagging for local business events.
**Short-Form Video**: Automatically publish Reels with custom thumbnails to maximize video content reach.
<!-- END MANUAL -->
---


@@ -0,0 +1,57 @@
# Post To GMB
### What it is
Post to Google My Business using Ayrshare
### How it works
<!-- MANUAL: how_it_works -->
This block uses Ayrshare's API to publish content to Google My Business profiles. It supports standard posts, photo/video posts (categorized by type like exterior, interior, product), and special post types including events and promotional offers with coupon codes.
The block integrates with Google's Business Profile API through Ayrshare, enabling call-to-action buttons (book, order, shop, learn more, sign up, call), event scheduling with start/end dates, and promotional offers with terms and redemption URLs.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| post | The post text to be published | str | No |
| media_urls | Optional list of media URLs. GMB supports only one image or video per post. | List[str] | No |
| is_video | Whether the media is a video | bool | No |
| schedule_date | UTC datetime for scheduling (YYYY-MM-DDThh:mm:ssZ) | str (date-time) | No |
| disable_comments | Whether to disable comments | bool | No |
| shorten_links | Whether to shorten links | bool | No |
| unsplash | Unsplash image configuration | str | No |
| requires_approval | Whether to enable approval workflow | bool | No |
| random_post | Whether to generate random post text | bool | No |
| random_media_url | Whether to generate random media | bool | No |
| notes | Additional notes for the post | str | No |
| is_photo_video | Whether this is a photo/video post (appears in Photos section) | bool | No |
| photo_category | Category for photo/video: cover, profile, logo, exterior, interior, product, at_work, food_and_drink, menu, common_area, rooms, teams | str | No |
| call_to_action_type | Type of action button: 'book', 'order', 'shop', 'learn_more', 'sign_up', or 'call' | str | No |
| call_to_action_url | URL for the action button (not required for 'call' action) | str | No |
| event_title | Event title for event posts | str | No |
| event_start_date | Event start date in ISO format (e.g., '2024-03-15T09:00:00Z') | str | No |
| event_end_date | Event end date in ISO format (e.g., '2024-03-15T17:00:00Z') | str | No |
| offer_title | Offer title for promotional posts | str | No |
| offer_start_date | Offer start date in ISO format (e.g., '2024-03-15T00:00:00Z') | str | No |
| offer_end_date | Offer end date in ISO format (e.g., '2024-04-15T23:59:59Z') | str | No |
| offer_coupon_code | Coupon code for the offer (max 58 characters) | str | No |
| offer_redeem_online_url | URL where customers can redeem the offer online | str | No |
| offer_terms_conditions | Terms and conditions for the offer | str | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| post_result | The result of the post | PostResponse |
| post | The result of the post | PostIds |
### Possible use case
<!-- MANUAL: use_case -->
**Local Business Updates**: Post daily specials, new arrivals, or service announcements directly to your Google Business Profile.
**Promotional Campaigns**: Create time-limited offers with coupon codes and online redemption links to drive sales.
**Event Marketing**: Announce upcoming events with dates, descriptions, and call-to-action buttons for reservations.
<!-- END MANUAL -->
---


@@ -0,0 +1,54 @@
# Post To Instagram
### What it is
Post to Instagram using Ayrshare. Requires a Business or Creator Instagram Account connected with a Facebook Page
### How it works
<!-- MANUAL: how_it_works -->
This block uses Ayrshare's API to publish content to Instagram Business or Creator accounts. It supports feed posts, Stories (24-hour expiration), Reels, and carousels (up to 10 images/videos), with features like collaborator invitations, location tagging, and user tags with coordinates.
The block requires an Instagram account connected to a Facebook Page and authenticates through Meta's Graph API via Ayrshare. Instagram-specific features include auto-resize for optimal dimensions, audio naming for Reels, and thumbnail customization with frame offset control.
<!-- END MANUAL -->
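A minimal sketch of a scheduled carousel post through Ayrshare's unified endpoint (key and URLs are placeholders); Instagram-specific inputs such as Stories, collaborators, and auto-resize map onto additional Ayrshare parameters not shown here.

```python
import requests

payload = {
    "post": "Behind the scenes from today's shoot #studio",
    "platforms": ["instagram"],
    "mediaUrls": [  # multiple items become a carousel (up to 10)
        "https://example.com/shot-1.jpg",
        "https://example.com/shot-2.jpg",
        "https://example.com/shot-3.jpg",
    ],
    "scheduleDate": "2026-02-02T18:30:00Z",
}
resp = requests.post("https://app.ayrshare.com/api/post", json=payload,
                     headers={"Authorization": "Bearer AYRSHARE_API_KEY"})  # placeholder
print(resp.json().get("postIds"))
```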
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| post | The post text (max 2,200 chars, up to 30 hashtags, 3 @mentions) | str | No |
| media_urls | Optional list of media URLs. Instagram supports up to 10 images/videos in a carousel. | List[str] | No |
| is_video | Whether the media is a video | bool | No |
| schedule_date | UTC datetime for scheduling (YYYY-MM-DDThh:mm:ssZ) | str (date-time) | No |
| disable_comments | Whether to disable comments | bool | No |
| shorten_links | Whether to shorten links | bool | No |
| unsplash | Unsplash image configuration | str | No |
| requires_approval | Whether to enable approval workflow | bool | No |
| random_post | Whether to generate random post text | bool | No |
| random_media_url | Whether to generate random media | bool | No |
| notes | Additional notes for the post | str | No |
| is_story | Whether to post as Instagram Story (24-hour expiration) | bool | No |
| share_reels_feed | Whether Reel should appear in both Feed and Reels tabs | bool | No |
| audio_name | Audio name for Reels (e.g., 'The Weeknd - Blinding Lights') | str | No |
| thumbnail | Thumbnail URL for Reel video | str | No |
| thumbnail_offset | Thumbnail frame offset in milliseconds (default: 0) | int | No |
| alt_text | Alt text for each media item (up to 1,000 chars each, accessibility feature), each item in the list corresponds to a media item in the media_urls list | List[str] | No |
| location_id | Facebook Page ID or name for location tagging (e.g., '7640348500' or '@guggenheimmuseum') | str | No |
| user_tags | List of users to tag with coordinates for images | List[Dict[str, True]] | No |
| collaborators | Instagram usernames to invite as collaborators (max 3, public accounts only) | List[str] | No |
| auto_resize | Auto-resize images to 1080x1080px for Instagram | bool | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| post_result | The result of the post | PostResponse |
| post | The result of the post | PostIds |
### Possible use case
<!-- MANUAL: use_case -->
**Influencer Collaborations**: Create posts with collaborator tags to feature brand partnerships across multiple accounts.
**E-commerce Product Showcases**: Share carousel posts of product images with location tags for local discovery.
**Reels Automation**: Automatically publish short-form video content with custom thumbnails and trending audio.
<!-- END MANUAL -->
---


@@ -0,0 +1,56 @@
# Post To Linked In
### What it is
Post to LinkedIn using Ayrshare
### How it works
<!-- MANUAL: how_it_works -->
This block uses Ayrshare's social media API to post content to LinkedIn. It handles text posts, images, videos, and documents, with support for scheduling and audience targeting. The block authenticates through Ayrshare's API.
LinkedIn-specific features include visibility controls, comment management, and targeting by country, seniority, industry, and other demographics (requires 300+ followers in target audience).
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| post | The post text (max 3,000 chars, hashtags supported with #) | str | No |
| media_urls | Optional list of media URLs. LinkedIn supports up to 9 images, videos, or documents (PPT, PPTX, DOC, DOCX, PDF <100MB, <300 pages). | List[str] | No |
| is_video | Whether the media is a video | bool | No |
| schedule_date | UTC datetime for scheduling (YYYY-MM-DDThh:mm:ssZ) | str (date-time) | No |
| disable_comments | Whether to disable comments | bool | No |
| shorten_links | Whether to shorten links | bool | No |
| unsplash | Unsplash image configuration | str | No |
| requires_approval | Whether to enable approval workflow | bool | No |
| random_post | Whether to generate random post text | bool | No |
| random_media_url | Whether to generate random media | bool | No |
| notes | Additional notes for the post | str | No |
| visibility | Post visibility: 'public' (default), 'connections' (personal only), 'loggedin' | str | No |
| alt_text | Alt text for each image (accessibility feature, not supported for videos/documents) | List[str] | No |
| titles | Title/caption for each image or video | List[str] | No |
| document_title | Title for document posts (max 400 chars, uses filename if not specified) | str | No |
| thumbnail | Thumbnail URL for video (PNG/JPG, same dimensions as video, <10MB) | str | No |
| targeting_countries | Country codes for targeting (e.g., ['US', 'IN', 'DE', 'GB']). Requires 300+ followers in target audience. | List[str] | No |
| targeting_seniorities | Seniority levels for targeting (e.g., ['Senior', 'VP']). Requires 300+ followers in target audience. | List[str] | No |
| targeting_degrees | Education degrees for targeting. Requires 300+ followers in target audience. | List[str] | No |
| targeting_fields_of_study | Fields of study for targeting. Requires 300+ followers in target audience. | List[str] | No |
| targeting_industries | Industry categories for targeting. Requires 300+ followers in target audience. | List[str] | No |
| targeting_job_functions | Job function categories for targeting. Requires 300+ followers in target audience. | List[str] | No |
| targeting_staff_count_ranges | Company size ranges for targeting. Requires 300+ followers in target audience. | List[str] | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| post_result | The result of the post | PostResponse |
| post | The result of the post | PostIds |
### Possible use case
<!-- MANUAL: use_case -->
**Thought Leadership**: Automatically share blog posts or industry insights with professional network.
**Scheduled Content**: Queue up a week's worth of LinkedIn posts with scheduled publishing times.
**Targeted Announcements**: Share company updates targeted to specific industries or seniority levels.
<!-- END MANUAL -->
---


@@ -0,0 +1,51 @@
# Post To Pinterest
### What it is
Post to Pinterest using Ayrshare
### How it works
<!-- MANUAL: how_it_works -->
This block uses Ayrshare's API to publish pins to Pinterest boards. It supports image pins, video pins (with required thumbnails), and carousel pins (up to 5 images), with customizable titles, descriptions, destination links, and private notes.
The block connects to Pinterest's API through Ayrshare, allowing you to specify target boards, add alt text for accessibility, and configure per-image carousel options including individual titles, links, and descriptions. Pins can be scheduled for future publishing.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| post | Pin description (max 500 chars, links not clickable - use link field instead) | str | No |
| media_urls | Required image/video URLs. Pinterest requires at least one image. Videos need thumbnail. Up to 5 images for carousel. | List[str] | No |
| is_video | Whether the media is a video | bool | No |
| schedule_date | UTC datetime for scheduling (YYYY-MM-DDThh:mm:ssZ) | str (date-time) | No |
| disable_comments | Whether to disable comments | bool | No |
| shorten_links | Whether to shorten links | bool | No |
| unsplash | Unsplash image configuration | str | No |
| requires_approval | Whether to enable approval workflow | bool | No |
| random_post | Whether to generate random post text | bool | No |
| random_media_url | Whether to generate random media | bool | No |
| notes | Additional notes for the post | str | No |
| pin_title | Pin title displayed in 'Add your title' section (max 100 chars) | str | No |
| link | Clickable destination URL when users click the pin (max 2048 chars) | str | No |
| board_id | Pinterest Board ID to post to (from /user/details endpoint, uses default board if not specified) | str | No |
| note | Private note for the pin (only visible to you and board collaborators) | str | No |
| thumbnail | Required thumbnail URL for video pins (must have valid image Content-Type) | str | No |
| carousel_options | Options for each image in carousel (title, link, description per image) | List[PinterestCarouselOption] | No |
| alt_text | Alt text for each image/video (max 500 chars each, accessibility feature) | List[str] | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| post_result | The result of the post | PostResponse |
| post | The result of the post | PostIds |
### Possible use case
<!-- MANUAL: use_case -->
**Product Catalog Distribution**: Automatically pin product images with direct links to purchase pages organized by board category.
**Content Repurposing**: Convert blog posts and articles into visual pins with clickable destination URLs.
**Visual Inspiration Boards**: Create carousel pins showcasing design ideas, recipes, or tutorials with step-by-step images.
<!-- END MANUAL -->
---

View File

@@ -0,0 +1,44 @@
# Post To Reddit
### What it is
Post to Reddit using Ayrshare
### How it works
<!-- MANUAL: how_it_works -->
This block uses Ayrshare's API to publish content to Reddit. It supports text posts, image posts, and video submissions with optional scheduling and link shortening features.
The block authenticates through Ayrshare and submits content to your connected Reddit account. Common options include approval workflows for content review before publishing, random content generation, and Unsplash integration for sourcing images.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| post | The post text to be published | str | No |
| media_urls | Optional list of media URLs to include. Set is_video in advanced settings to true if you want to upload videos. | List[str] | No |
| is_video | Whether the media is a video | bool | No |
| schedule_date | UTC datetime for scheduling (YYYY-MM-DDThh:mm:ssZ) | str (date-time) | No |
| disable_comments | Whether to disable comments | bool | No |
| shorten_links | Whether to shorten links | bool | No |
| unsplash | Unsplash image configuration | str | No |
| requires_approval | Whether to enable approval workflow | bool | No |
| random_post | Whether to generate random post text | bool | No |
| random_media_url | Whether to generate random media | bool | No |
| notes | Additional notes for the post | str | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| post_result | The result of the post | PostResponse |
| post | The result of the post | PostIds |
### Possible use case
<!-- MANUAL: use_case -->
**Community Engagement**: Share relevant content to niche subreddits as part of community marketing strategies.
**Content Distribution**: Cross-post blog articles or announcements to relevant Reddit communities for broader reach.
**Brand Monitoring Response**: Automatically share updates or responses in communities where your brand is discussed.
<!-- END MANUAL -->
---

View File

@@ -0,0 +1,46 @@
# Post To Snapchat
### What it is
Post to Snapchat using Ayrshare
### How it works
<!-- MANUAL: how_it_works -->
This block uses Ayrshare's API to publish video content to Snapchat. Snapchat only supports video content, with three destination options: Stories (24-hour ephemeral content), Saved Stories (persistent Stories), and Spotlight (public discovery feed).
The block authenticates through Ayrshare and uploads video content with optional custom thumbnails. Videos can be scheduled for future publishing and support approval workflows for content review before going live.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| post | The post text (optional for video-only content) | str | No |
| media_urls | Required video URL for Snapchat posts. Snapchat only supports video content. | List[str] | No |
| is_video | Whether the media is a video | bool | No |
| schedule_date | UTC datetime for scheduling (YYYY-MM-DDThh:mm:ssZ) | str (date-time) | No |
| disable_comments | Whether to disable comments | bool | No |
| shorten_links | Whether to shorten links | bool | No |
| unsplash | Unsplash image configuration | str | No |
| requires_approval | Whether to enable approval workflow | bool | No |
| random_post | Whether to generate random post text | bool | No |
| random_media_url | Whether to generate random media | bool | No |
| notes | Additional notes for the post | str | No |
| story_type | Type of Snapchat content: 'story' (24-hour Stories), 'saved_story' (Saved Stories), or 'spotlight' (Spotlight posts) | str | No |
| video_thumbnail | Thumbnail URL for video content (optional, auto-generated if not provided) | str | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| post_result | The result of the post | PostResponse |
| post | The result of the post | PostIds |
### Possible use case
<!-- MANUAL: use_case -->
**Ephemeral Marketing**: Share time-sensitive promotions or behind-the-scenes content that creates urgency through 24-hour Stories.
**Public Discovery**: Post engaging video content to Spotlight to reach new audiences beyond your followers.
**Scheduled Story Series**: Plan and schedule a sequence of video Stories for product launches or events.
<!-- END MANUAL -->
---

View File

@@ -0,0 +1,44 @@
# Post To Telegram
### What it is
Post to Telegram using Ayrshare
### How it works
<!-- MANUAL: how_it_works -->
This block uses Ayrshare's API to publish messages to Telegram channels. It supports text messages, images, videos, and animated GIFs, with automatic link preview generation unless media is included.
The block authenticates through Ayrshare and sends content to your connected Telegram channel or bot. User mentions are supported via @handle syntax, and content can be scheduled for future delivery.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| post | The post text (empty string allowed). Use @handle to mention other Telegram users. | str | No |
| media_urls | Optional list of media URLs. For animated GIFs, only one URL is allowed. Telegram will auto-preview links unless image/video is included. | List[str] | No |
| is_video | Whether the media is a video. Set to true for animated GIFs that don't end in .gif/.GIF extension. | bool | No |
| schedule_date | UTC datetime for scheduling (YYYY-MM-DDThh:mm:ssZ) | str (date-time) | No |
| disable_comments | Whether to disable comments | bool | No |
| shorten_links | Whether to shorten links | bool | No |
| unsplash | Unsplash image configuration | str | No |
| requires_approval | Whether to enable approval workflow | bool | No |
| random_post | Whether to generate random post text | bool | No |
| random_media_url | Whether to generate random media | bool | No |
| notes | Additional notes for the post | str | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| post_result | The result of the post | PostResponse |
| post | The result of the post | PostIds |
### Possible use case
<!-- MANUAL: use_case -->
**Channel Broadcasting**: Automatically distribute announcements, updates, or news to Telegram channel subscribers.
**Alert Systems**: Send automated notifications with media attachments to monitoring or alert channels.
**Content Syndication**: Cross-post content from other platforms to Telegram communities for broader reach.
<!-- END MANUAL -->
---
View File
@@ -0,0 +1,44 @@
# Post To Threads
### What it is
Post to Threads using Ayrshare
### How it works
<!-- MANUAL: how_it_works -->
This block uses Ayrshare's API to publish content to Threads (Meta's text-based social platform). It supports text posts (up to 500 characters with one hashtag), images, videos, and carousels (up to 20 items), with automatic link previews when no media is attached.
The block authenticates through Meta's API via Ayrshare. Content can mention users via @handle syntax, be scheduled for future publishing, and include approval workflows for content review.
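The 500-character and single-hashtag limits are easy to check before posting; the following snippet is an illustrative pre-flight check, not the block's own validation logic:

```python
# Illustrative pre-flight check mirroring the Threads limits described above.
def validate_threads_post(text: str, media_urls: list[str]) -> None:
    if len(text) > 500:
        raise ValueError(f"Threads posts are limited to 500 characters (got {len(text)})")
    hashtags = [word for word in text.split() if word.startswith("#")]
    if len(hashtags) > 1:
        raise ValueError("Threads allows only one hashtag per post")
    if len(media_urls) > 20:
        raise ValueError("Threads carousels support at most 20 media items")

validate_threads_post("Shipping the new onboarding flow today #buildinpublic", [])
```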
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| post | The post text (max 500 chars, empty string allowed). Only 1 hashtag allowed. Use @handle to mention users. | str | No |
| media_urls | Optional list of media URLs. Supports up to 20 images/videos in a carousel. Auto-preview links unless media is included. | List[str] | No |
| is_video | Whether the media is a video | bool | No |
| schedule_date | UTC datetime for scheduling (YYYY-MM-DDThh:mm:ssZ) | str (date-time) | No |
| disable_comments | Whether to disable comments | bool | No |
| shorten_links | Whether to shorten links | bool | No |
| unsplash | Unsplash image configuration | str | No |
| requires_approval | Whether to enable approval workflow | bool | No |
| random_post | Whether to generate random post text | bool | No |
| random_media_url | Whether to generate random media | bool | No |
| notes | Additional notes for the post | str | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| post_result | The result of the post | PostResponse |
| post | The result of the post | PostIds |
### Possible use case
<!-- MANUAL: use_case -->
**Thought Leadership**: Share quick insights, opinions, or industry commentary in a conversational format.
**Cross-Platform Text Content**: Automatically syndicate text-based content from other platforms to Threads.
**Community Engagement**: Post discussion prompts or responses to engage with your Threads audience.
<!-- END MANUAL -->
---
View File
@@ -0,0 +1,55 @@
# Post To Tik Tok
### What it is
Post to TikTok using Ayrshare
### How it works
<!-- MANUAL: how_it_works -->
This block uses Ayrshare's API to publish content to TikTok. It supports video posts and image slideshows (up to 35 images), with extensive options for content labeling including AI-generated disclosure, branded content, and brand organic content tags.
The block connects to TikTok's API through Ayrshare with controls for visibility, duet/stitch permissions, comment settings, auto-music, and thumbnail selection. Videos can be posted as drafts for final review, and scheduled for future publishing.
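Because a post may contain either one video or an image slideshow but never both, a quick media check helps avoid rejected requests; this snippet simply restates the input rules above and is illustrative only:

```python
# Illustrative media-type check for TikTok posts (restates the input rules).
ALLOWED_IMAGE_EXTS = (".jpg", ".jpeg", ".webp")

def check_tiktok_media(media_urls: list[str], is_video: bool) -> None:
    if is_video:
        if len(media_urls) != 1:
            raise ValueError("TikTok video posts require exactly one video URL")
        return
    if not 1 <= len(media_urls) <= 35:
        raise ValueError("TikTok image slideshows support 1-35 images")
    bad = [url for url in media_urls if not url.lower().endswith(ALLOWED_IMAGE_EXTS)]
    if bad:
        raise ValueError(f"Only JPG/JPEG/WEBP images are accepted: {bad}")
```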
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| post | The post text (max 2,200 chars, empty string allowed). Use @handle to mention users. Line breaks will be ignored. | str | Yes |
| media_urls | Required media URLs. Either 1 video OR up to 35 images (JPG/JPEG/WEBP only). Cannot mix video and images. | List[str] | No |
| is_video | Whether the media is a video | bool | No |
| schedule_date | UTC datetime for scheduling (YYYY-MM-DDThh:mm:ssZ) | str (date-time) | No |
| disable_comments | Disable comments on the published post | bool | No |
| shorten_links | Whether to shorten links | bool | No |
| unsplash | Unsplash image configuration | str | No |
| requires_approval | Whether to enable approval workflow | bool | No |
| random_post | Whether to generate random post text | bool | No |
| random_media_url | Whether to generate random media | bool | No |
| notes | Additional notes for the post | str | No |
| auto_add_music | Whether to automatically add recommended music to the post. If you set this field to true, you can change the music later in the TikTok app. | bool | No |
| disable_duet | Disable duets on published video (video only) | bool | No |
| disable_stitch | Disable stitch on published video (video only) | bool | No |
| is_ai_generated | If you enable the toggle, your video will be labeled as “Creator labeled as AI-generated” once posted and can't be changed. The “Creator labeled as AI-generated” label indicates that the content was completely AI-generated or significantly edited with AI. | bool | No |
| is_branded_content | Whether to enable the Branded Content toggle. If this field is set to true, the video will be labeled as Branded Content, indicating you are in a paid partnership with a brand. A “Paid partnership” label will be attached to the video. | bool | No |
| is_brand_organic | Whether to enable the Brand Organic Content toggle. If this field is set to true, the video will be labeled as Brand Organic Content, indicating you are promoting yourself or your own business. A “Promotional content” label will be attached to the video. | bool | No |
| image_cover_index | Index of image to use as cover (0-based, image posts only) | int | No |
| title | Title for image posts | str | No |
| thumbnail_offset | Video thumbnail frame offset in milliseconds (video only) | int | No |
| visibility | Post visibility: 'public', 'private', 'followers', or 'friends' | "public" \| "private" \| "followers" | No |
| draft | Create as draft post (video only) | bool | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| post_result | The result of the post | PostResponse |
| post | The result of the post | PostIds |
### Possible use case
<!-- MANUAL: use_case -->
**Creator Content Pipeline**: Automate video uploads with proper AI disclosure labels and visibility settings for content creators.
**Brand Campaigns**: Publish branded content with proper disclosure labels to maintain FTC compliance and platform guidelines.
**Image Slideshow Posts**: Create TikTok slideshows from product images or photo series with automatic cover selection.
<!-- END MANUAL -->
---
View File
@@ -0,0 +1,57 @@
# Post To X
### What it is
Post to X / Twitter using Ayrshare
### How it works
<!-- MANUAL: how_it_works -->
This block uses Ayrshare's API to publish content to X (formerly Twitter). It supports standard tweets (280 characters, or 25,000 for Premium users), threads, polls, quote tweets, and replies, with up to 4 media attachments including video with subtitles.
The block authenticates through Ayrshare and handles X-specific features like automatic thread breaking using double newlines, thread numbering, per-post media attachments, and long-form video uploads (with approval). Poll options and duration can be configured for engagement posts.
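Thread breaking is easiest to picture as splitting on blank lines and optionally appending `i/n` numbering; the sketch below shows that transformation in rough form (Ayrshare's exact algorithm may differ):

```python
# Rough sketch of splitting a post into a thread on double newlines,
# with optional "i/n" numbering. Illustrative only.
def split_into_thread(post: str, add_numbers: bool = True) -> list[str]:
    parts = [p.strip() for p in post.split("\n\n") if p.strip()]
    if add_numbers and len(parts) > 1:
        total = len(parts)
        parts = [f"{text} {i}/{total}" for i, text in enumerate(parts, start=1)]
    return parts

tweets = split_into_thread("First insight.\n\nSecond insight.\n\nWrap-up and link.")
# -> ['First insight. 1/3', 'Second insight. 2/3', 'Wrap-up and link. 3/3']
```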
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| post | The post text (max 280 chars, up to 25,000 for Premium users). Use @handle to mention users. Use \n\n for thread breaks. | str | Yes |
| media_urls | Optional list of media URLs. X supports up to 4 images or videos per tweet. Auto-preview links unless media is included. | List[str] | No |
| is_video | Whether the media is a video | bool | No |
| schedule_date | UTC datetime for scheduling (YYYY-MM-DDThh:mm:ssZ) | str (date-time) | No |
| disable_comments | Whether to disable comments | bool | No |
| shorten_links | Whether to shorten links | bool | No |
| unsplash | Unsplash image configuration | str | No |
| requires_approval | Whether to enable approval workflow | bool | No |
| random_post | Whether to generate random post text | bool | No |
| random_media_url | Whether to generate random media | bool | No |
| notes | Additional notes for the post | str | No |
| reply_to_id | ID of the tweet to reply to | str | No |
| quote_tweet_id | ID of the tweet to quote (low-level Tweet ID) | str | No |
| poll_options | Poll options (2-4 choices) | List[str] | No |
| poll_duration | Poll duration in minutes (1-10080) | int | No |
| alt_text | Alt text for each image (max 1,000 chars each, not supported for videos) | List[str] | No |
| is_thread | Whether to automatically break post into thread based on line breaks | bool | No |
| thread_number | Add thread numbers (1/n format) to each thread post | bool | No |
| thread_media_urls | Media URLs for thread posts (one per thread, use 'null' to skip) | List[str] | No |
| long_post | Force long form post (requires Premium X account) | bool | No |
| long_video | Enable long video upload (requires approval and Business/Enterprise plan) | bool | No |
| subtitle_url | URL to SRT subtitle file for videos (must be HTTPS and end in .srt) | str | No |
| subtitle_language | Language code for subtitles (default: 'en') | str | No |
| subtitle_name | Name of caption track (max 150 chars, default: 'English') | str | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| post_result | The result of the post | PostResponse |
| post | The result of the post | PostIds |
### Possible use case
<!-- MANUAL: use_case -->
**Thread Publishing**: Automatically format and publish long-form content as numbered thread sequences.
**Engagement Polls**: Create polls to gather audience feedback or drive interaction with scheduled posting.
**Reply Automation**: Build workflows that automatically respond to mentions or engage in conversations.
<!-- END MANUAL -->
---
View File
@@ -0,0 +1,60 @@
# Post To You Tube
### What it is
Post to YouTube using Ayrshare
### How it works
<!-- MANUAL: how_it_works -->
This block uses Ayrshare's API to upload videos to YouTube. It handles video uploads with extensive metadata including titles, descriptions, tags, custom thumbnails, playlist assignment, category selection, and visibility controls (public, private, unlisted).
The block supports YouTube Shorts (up to 3 minutes), geographic targeting to allow or block specific countries, subtitle files (SRT/SBV format), synthetic/AI content disclosure, kids content labeling, and subscriber notification controls. Videos can be scheduled for specific publish times.
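Both `schedule_date` and `publish_at` expect UTC timestamps in the `2022-10-08T21:18:36Z` form; one way to produce that string (an illustrative helper, not part of the block):

```python
# Illustrative helper for the UTC timestamp format these fields expect.
from datetime import datetime, timedelta, timezone

def utc_timestamp(dt: datetime) -> str:
    return dt.astimezone(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")

publish_at = utc_timestamp(datetime.now(timezone.utc) + timedelta(days=2))
print(publish_at)  # e.g. 2026-01-11T17:30:00Z
```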
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| post | Video description (max 5,000 chars, empty string allowed). Cannot contain < or > characters. | str | Yes |
| media_urls | Required video URL. YouTube only supports 1 video per post. | List[str] | No |
| is_video | Whether the media is a video | bool | No |
| schedule_date | UTC datetime for scheduling (YYYY-MM-DDThh:mm:ssZ) | str (date-time) | No |
| disable_comments | Whether to disable comments | bool | No |
| shorten_links | Whether to shorten links | bool | No |
| unsplash | Unsplash image configuration | str | No |
| requires_approval | Whether to enable approval workflow | bool | No |
| random_post | Whether to generate random post text | bool | No |
| random_media_url | Whether to generate random media | bool | No |
| notes | Additional notes for the post | str | No |
| title | Video title (max 100 chars, required). Cannot contain < or > characters. | str | Yes |
| visibility | Video visibility: 'private' (default), 'public', or 'unlisted' | "private" \| "public" \| "unlisted" | No |
| thumbnail | Thumbnail URL (JPEG/PNG under 2MB, must end in .png/.jpg/.jpeg). Requires phone verification. | str | No |
| playlist_id | Playlist ID to add video (user must own playlist) | str | No |
| tags | Video tags (min 2 chars each, max 500 chars total) | List[str] | No |
| made_for_kids | Self-declared kids content | bool | No |
| is_shorts | Post as YouTube Short (max 3 minutes, adds #shorts) | bool | No |
| notify_subscribers | Send notification to subscribers | bool | No |
| category_id | Video category ID (e.g., 24 = Entertainment) | int | No |
| contains_synthetic_media | Disclose realistic AI/synthetic content | bool | No |
| publish_at | UTC publish time (YouTube controlled, format: 2022-10-08T21:18:36Z) | str | No |
| targeting_block_countries | Country codes to block from viewing (e.g., ['US', 'CA']) | List[str] | No |
| targeting_allow_countries | Country codes to allow viewing (e.g., ['GB', 'AU']) | List[str] | No |
| subtitle_url | URL to SRT or SBV subtitle file (must be HTTPS and end in .srt/.sbv, under 100MB) | str | No |
| subtitle_language | Language code for subtitles (default: 'en') | str | No |
| subtitle_name | Name of caption track (max 150 chars, default: 'English') | str | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| post_result | The result of the post | PostResponse |
| post | The result of the post | PostIds |
### Possible use case
<!-- MANUAL: use_case -->
**Video Publishing Pipeline**: Automate video uploads with thumbnails, descriptions, and playlist organization for content creators.
**YouTube Shorts Automation**: Publish short-form vertical videos to YouTube Shorts with proper metadata and hashtags.
**Multi-Region Content**: Upload videos with geographic restrictions for region-specific content licensing or compliance.
<!-- END MANUAL -->
---
View File
@@ -0,0 +1,147 @@
# Baas Bot Delete Recording
### What it is
Permanently delete a meeting's recorded data
### How it works
<!-- MANUAL: how_it_works -->
This block permanently deletes the recorded data for a meeting bot using the BaaS (Bot as a Service) API. The deletion is irreversible and removes all associated recording files and transcripts.
Provide the bot_id from a previous recording session to delete that specific meeting's data.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| bot_id | UUID of the bot whose data to delete | str | Yes |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| deleted | Whether the data was successfully deleted | bool |
### Possible use case
<!-- MANUAL: use_case -->
**Privacy Compliance**: Delete recordings to comply with data retention policies or user requests.
**Storage Management**: Clean up old recordings to manage storage costs.
**Post-Processing Cleanup**: Delete recordings after extracting needed information.
<!-- END MANUAL -->
---
## Baas Bot Fetch Meeting Data
### What it is
Retrieve recorded meeting data
### How it works
<!-- MANUAL: how_it_works -->
This block retrieves recorded meeting data including video URL, transcript, and metadata from a completed bot session. The video URL is time-limited and should be downloaded promptly.
Enable include_transcripts to receive the full meeting transcript with speaker identification and timestamps.
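Because `mp4_url` is time-limited, downstream steps typically download the file right away; a minimal sketch of doing so (the URL shown is a hypothetical value):

```python
# Minimal sketch: download the time-limited recording URL before it expires.
import requests

mp4_url = "https://storage.example.com/recordings/abc123.mp4"  # hypothetical value

with requests.get(mp4_url, stream=True, timeout=60) as resp:
    resp.raise_for_status()
    with open("meeting-recording.mp4", "wb") as fh:
        for chunk in resp.iter_content(chunk_size=1 << 20):  # 1 MiB chunks
            fh.write(chunk)
```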
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| bot_id | UUID of the bot whose data to fetch | str | Yes |
| include_transcripts | Include transcript data in response | bool | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| mp4_url | URL to download the meeting recording (time-limited) | str |
| transcript | Meeting transcript data | List[Any] |
| metadata | Meeting metadata and bot information | Dict[str, True] |
### Possible use case
<!-- MANUAL: use_case -->
**Meeting Summarization**: Retrieve transcripts for AI summarization and action item extraction.
**Recording Archive**: Download and store meeting recordings for compliance or reference.
**Analytics**: Extract meeting metadata for participation and duration analytics.
<!-- END MANUAL -->
---
## Baas Bot Join Meeting
### What it is
Deploy a bot to join and record a meeting
### How it works
<!-- MANUAL: how_it_works -->
This block deploys a recording bot to join a video meeting (Zoom, Google Meet, Teams). Configure the bot's display name, avatar, and entry message. The bot joins, records, and transcribes the meeting.
Use webhooks to receive notifications when the meeting ends and recordings are ready.
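Conceptually, the block sends a single join request carrying the meeting URL and bot configuration; the sketch below uses a placeholder endpoint and header, which are assumptions rather than the provider's documented API:

```python
# Conceptual sketch of deploying a recording bot. The endpoint and header
# names below are placeholders/assumptions, not the provider's documented API.
import requests

BAAS_API_KEY = "your-baas-api-key"                    # placeholder
BAAS_ENDPOINT = "https://api.example-baas.com/bots"   # hypothetical endpoint

payload = {
    "meeting_url": "https://meet.google.com/abc-defg-hij",
    "bot_name": "Notetaker",
    "entry_message": "Hi! I'm recording this meeting for the team notes.",
    "webhook_url": "https://example.com/webhooks/baas",  # notified when recording is ready
}

resp = requests.post(BAAS_ENDPOINT, json=payload,
                     headers={"x-api-key": BAAS_API_KEY}, timeout=30)
resp.raise_for_status()
bot_id = resp.json().get("bot_id")  # used later to fetch or delete the recording
```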
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| meeting_url | The URL of the meeting the bot should join | str | Yes |
| bot_name | Display name for the bot in the meeting | str | Yes |
| bot_image | URL to an image for the bot's avatar (16:9 ratio recommended) | str | No |
| entry_message | Chat message the bot will post upon entry | str | No |
| reserved | Use a reserved bot slot (joins 4 min before meeting) | bool | No |
| start_time | Unix timestamp (ms) when bot should join | int | No |
| webhook_url | URL to receive webhook events for this bot | str | No |
| timeouts | Automatic leave timeouts configuration | Dict[str, True] | No |
| extra | Custom metadata to attach to the bot | Dict[str, True] | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| bot_id | UUID of the deployed bot | str |
| join_response | Full response from join operation | Dict[str, True] |
### Possible use case
<!-- MANUAL: use_case -->
**Automated Recording**: Record meetings automatically without requiring host intervention.
**Meeting Assistant**: Deploy bots to take notes and transcribe customer or team meetings.
**Compliance Recording**: Ensure all meetings are recorded for compliance or quality assurance.
<!-- END MANUAL -->
---
## Baas Bot Leave Meeting
### What it is
Remove a bot from an ongoing meeting
### How it works
<!-- MANUAL: how_it_works -->
This block removes a recording bot from an ongoing meeting. Use this when you need to stop recording before the meeting naturally ends.
The bot leaves gracefully and recording data becomes available for retrieval.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| bot_id | UUID of the bot to remove from meeting | str | Yes |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| left | Whether the bot successfully left | bool |
### Possible use case
<!-- MANUAL: use_case -->
**Early Termination**: Stop recording when a meeting transitions to an off-record discussion.
**Time-Based Recording**: Leave after capturing a specific portion of a meeting.
**Error Recovery**: Remove and redeploy bots when issues occur during recording.
<!-- END MANUAL -->
---
View File
@@ -0,0 +1,42 @@
# Bannerbear Text Overlay
### What it is
Add text overlay to images using Bannerbear templates. Perfect for creating social media graphics, marketing materials, and dynamic image content.
### How it works
<!-- MANUAL: how_it_works -->
This block uses Bannerbear's API to generate images by populating templates with dynamic text and images. Create templates in Bannerbear with text layers, then modify layer content programmatically.
Webhooks can notify you when asynchronous generation completes. Include custom metadata for tracking generated images.
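The request the block builds corresponds roughly to Bannerbear's image-generation API, where each text layer is addressed by name in a `modifications` list; the sketch below assumes that public API shape:

```python
# Sketch of a Bannerbear image-generation request (assumes the public
# POST /v2/images API shape; the block's internal client may differ).
import requests

BANNERBEAR_API_KEY = "your-bannerbear-api-key"  # placeholder

payload = {
    "template": "template_uid_here",
    "modifications": [
        {"name": "headline", "text": "Summer Sale - 30% off"},
        {"name": "subtitle", "text": "This weekend only"},
        {"name": "photo", "image_url": "https://example.com/product.jpg"},
    ],
    "webhook_url": "https://example.com/webhooks/bannerbear",  # optional
    "metadata": "campaign=summer-sale",                        # optional
}

resp = requests.post(
    "https://api.bannerbear.com/v2/images",
    json=payload,
    headers={"Authorization": f"Bearer {BANNERBEAR_API_KEY}"},
    timeout=30,
)
resp.raise_for_status()
result = resp.json()
print(result["uid"], result["status"])  # status stays "pending" until generated
```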
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| template_id | The unique ID of your Bannerbear template | str | Yes |
| project_id | Optional: Project ID (required when using Master API Key) | str | No |
| text_modifications | List of text layers to modify in the template | List[TextModification] | Yes |
| image_url | Optional: URL of an image to use in the template | str | No |
| image_layer_name | Optional: Name of the image layer in the template | str | No |
| webhook_url | Optional: URL to receive webhook notification when image is ready | str | No |
| metadata | Optional: Custom metadata to attach to the image | str | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| success | Whether the image generation was successfully initiated | bool |
| image_url | URL of the generated image (if synchronous) or placeholder | str |
| uid | Unique identifier for the generated image | str |
| status | Status of the image generation | str |
### Possible use case
<!-- MANUAL: use_case -->
**Social Media Graphics**: Generate personalized social posts with dynamic quotes, stats, or headlines.
**Marketing Banners**: Create ad banners with different product names, prices, or offers.
**Certificates & Cards**: Generate personalized certificates, invitations, or greeting cards.
<!-- END MANUAL -->
---
File diff suppressed because it is too large
View File
@@ -13,194 +13,515 @@ Below is a comprehensive list of all available blocks, categorized by their prim
## Basic Operations
| Block Name | Description |
|------------|-------------|
| [Store Value](basic.md#store-value) | Stores and forwards a value |
| [Print to Console](basic.md#print-to-console) | Outputs text to the console for debugging |
| [Find in Dictionary](basic.md#find-in-dictionary) | Looks up a value in a dictionary or list |
| [Agent Input](basic.md#agent-input) | Accepts user input in a workflow |
| [Agent Output](basic.md#agent-output) | Records and formats workflow results |
| [Add to Dictionary](basic.md#add-to-dictionary) | Adds a new key-value pair to a dictionary |
| [Add to List](basic.md#add-to-list) | Adds a new entry to a list |
| [Note](basic.md#note) | Displays a sticky note in the workflow |
| [Add Memory](basic.md#add-memory) | Add new memories to Mem0 with user segmentation |
| [Add To Dictionary](basic.md#add-to-dictionary) | Adds a new key-value pair to a dictionary |
| [Add To Library From Store](system/library_operations.md#add-to-library-from-store) | Add an agent from the store to your personal library |
| [Add To List](basic.md#add-to-list) | Adds a new entry to a list |
| [Agent Date Input](basic.md#agent-date-input) | Block for date input |
| [Agent Dropdown Input](basic.md#agent-dropdown-input) | Block for dropdown text selection |
| [Agent File Input](basic.md#agent-file-input) | Block for file upload input (string path for example) |
| [Agent Google Drive File Input](basic.md#agent-google-drive-file-input) | Block for selecting a file from Google Drive |
| [Agent Input](basic.md#agent-input) | A block that accepts and processes user input values within a workflow, supporting various input types and validation |
| [Agent Long Text Input](basic.md#agent-long-text-input) | Block for long text input (multi-line) |
| [Agent Number Input](basic.md#agent-number-input) | Block for number input |
| [Agent Output](basic.md#agent-output) | A block that records and formats workflow results for display to users, with optional Jinja2 template formatting support |
| [Agent Short Text Input](basic.md#agent-short-text-input) | Block for short text input (single-line) |
| [Agent Table Input](basic.md#agent-table-input) | Block for table data input with customizable headers |
| [Agent Time Input](basic.md#agent-time-input) | Block for time input |
| [Agent Toggle Input](basic.md#agent-toggle-input) | Block for boolean toggle input |
| [Dictionary Is Empty](basic.md#dictionary-is-empty) | Checks if a dictionary is empty |
| [File Store](basic.md#file-store) | Stores the input file in the temporary directory |
| [Find In Dictionary](basic.md#find-in-dictionary) | A block that looks up a value in a dictionary, list, or object by key or index and returns the corresponding value |
| [Find In List](basic.md#find-in-list) | Finds the index of the value in the list |
| [Get All Memories](basic.md#get-all-memories) | Retrieve all memories from Mem0 with optional conversation filtering |
| [Get Latest Memory](basic.md#get-latest-memory) | Retrieve the latest memory from Mem0 with optional key filtering |
| [Get List Item](basic.md#get-list-item) | Returns the element at the given index |
| [Get Store Agent Details](system/store_operations.md#get-store-agent-details) | Get detailed information about an agent from the store |
| [Get Weather Information](basic.md#get-weather-information) | Retrieves weather information for a specified location using OpenWeatherMap API |
| [Human In The Loop](basic.md#human-in-the-loop) | Pause execution and wait for human approval or modification of data |
| [Installation](basic.md#installation) | Given a code string, this block allows the verification and installation of a block code into the system |
| [Linear Search Issues](linear/issues.md#linear-search-issues) | Searches for issues on Linear |
| [List Is Empty](basic.md#list-is-empty) | Checks if a list is empty |
| [List Library Agents](system/library_operations.md#list-library-agents) | List all agents in your personal library |
| [Note](basic.md#note) | A visual annotation block that displays a sticky note in the workflow editor for documentation and organization purposes |
| [Print To Console](basic.md#print-to-console) | A debugging block that outputs text to the console for monitoring and troubleshooting workflow execution |
| [Remove From Dictionary](basic.md#remove-from-dictionary) | Removes a key-value pair from a dictionary |
| [Remove From List](basic.md#remove-from-list) | Removes an item from a list by value or index |
| [Replace Dictionary Value](basic.md#replace-dictionary-value) | Replaces the value for a specified key in a dictionary |
| [Replace List Item](basic.md#replace-list-item) | Replaces an item at the specified index |
| [Reverse List Order](basic.md#reverse-list-order) | Reverses the order of elements in a list |
| [Search Memory](basic.md#search-memory) | Search memories in Mem0 by user |
| [Search Store Agents](system/store_operations.md#search-store-agents) | Search for agents in the store |
| [Slant3D Cancel Order](slant3d/order.md#slant3d-cancel-order) | Cancel an existing order |
| [Slant3D Create Order](slant3d/order.md#slant3d-create-order) | Create a new print order |
| [Slant3D Estimate Order](slant3d/order.md#slant3d-estimate-order) | Get order cost estimate |
| [Slant3D Estimate Shipping](slant3d/order.md#slant3d-estimate-shipping) | Get shipping cost estimate |
| [Slant3D Filament](slant3d/filament.md#slant3d-filament) | Get list of available filaments |
| [Slant3D Get Orders](slant3d/order.md#slant3d-get-orders) | Get all orders for the account |
| [Slant3D Slicer](slant3d/slicing.md#slant3d-slicer) | Slice a 3D model file and get pricing information |
| [Slant3D Tracking](slant3d/order.md#slant3d-tracking) | Track order status and shipping |
| [Store Value](basic.md#store-value) | A basic block that stores and forwards a value throughout workflows, allowing it to be reused without changes across multiple blocks |
| [Universal Type Converter](basic.md#universal-type-converter) | This block is used to convert a value to a universal type |
| [XML Parser](basic.md#xml-parser) | Parses XML using gravitasml to tokenize and converts it to dict |
## Data Processing
| Block Name | Description |
|------------|-------------|
| [Read CSV](csv.md#read-csv) | Processes and extracts data from CSV files |
| [Data Sampling](sampling.md#data-sampling) | Selects a subset of data using various sampling methods |
| [Airtable Create Base](airtable/bases.md#airtable-create-base) | Create or find a base in Airtable |
| [Airtable Create Field](airtable/schema.md#airtable-create-field) | Add a new field to an Airtable table |
| [Airtable Create Records](airtable/records.md#airtable-create-records) | Create records in an Airtable table |
| [Airtable Create Table](airtable/schema.md#airtable-create-table) | Create a new table in an Airtable base |
| [Airtable Delete Records](airtable/records.md#airtable-delete-records) | Delete records from an Airtable table |
| [Airtable Get Record](airtable/records.md#airtable-get-record) | Get a single record from Airtable |
| [Airtable List Bases](airtable/bases.md#airtable-list-bases) | List all bases in Airtable |
| [Airtable List Records](airtable/records.md#airtable-list-records) | List records from an Airtable table |
| [Airtable List Schema](airtable/schema.md#airtable-list-schema) | Get the complete schema of an Airtable base |
| [Airtable Update Field](airtable/schema.md#airtable-update-field) | Update field properties in an Airtable table |
| [Airtable Update Records](airtable/records.md#airtable-update-records) | Update records in an Airtable table |
| [Airtable Update Table](airtable/schema.md#airtable-update-table) | Update table properties |
| [Airtable Webhook Trigger](airtable/triggers.md#airtable-webhook-trigger) | Starts a flow whenever Airtable emits a webhook event |
| [Baas Bot Delete Recording](baas/bots.md#baas-bot-delete-recording) | Permanently delete a meeting's recorded data |
| [Baas Bot Fetch Meeting Data](baas/bots.md#baas-bot-fetch-meeting-data) | Retrieve recorded meeting data |
| [Create Dictionary](data.md#create-dictionary) | Creates a dictionary with the specified key-value pairs |
| [Create List](data.md#create-list) | Creates a list with the specified values |
| [Data For Seo Keyword Suggestions](dataforseo/keyword_suggestions.md#data-for-seo-keyword-suggestions) | Get keyword suggestions from DataForSEO Labs Google API |
| [Data For Seo Related Keywords](dataforseo/related_keywords.md#data-for-seo-related-keywords) | Get related keywords from DataForSEO Labs Google API |
| [Exa Create Import](exa/websets_import_export.md#exa-create-import) | Import CSV data to use with websets for targeted searches |
| [Exa Delete Import](exa/websets_import_export.md#exa-delete-import) | Delete an import |
| [Exa Export Webset](exa/websets_import_export.md#exa-export-webset) | Export webset data in JSON, CSV, or JSON Lines format |
| [Exa Get Import](exa/websets_import_export.md#exa-get-import) | Get the status and details of an import |
| [Exa Get New Items](exa/websets_items.md#exa-get-new-items) | Get items added since a cursor - enables incremental processing without reprocessing |
| [Exa List Imports](exa/websets_import_export.md#exa-list-imports) | List all imports with pagination support |
| [File Read](data.md#file-read) | Reads a file and returns its content as a string, with optional chunking by delimiter and size limits |
| [Google Calendar Read Events](google/calendar.md#google-calendar-read-events) | Retrieves upcoming events from a Google Calendar with filtering options |
| [Google Docs Append Markdown](google/docs.md#google-docs-append-markdown) | Append Markdown content to the end of a Google Doc with full formatting - ideal for LLM/AI output |
| [Google Docs Append Plain Text](google/docs.md#google-docs-append-plain-text) | Append plain text to the end of a Google Doc (no formatting applied) |
| [Google Docs Create](google/docs.md#google-docs-create) | Create a new Google Doc |
| [Google Docs Delete Content](google/docs.md#google-docs-delete-content) | Delete a range of content from a Google Doc |
| [Google Docs Export](google/docs.md#google-docs-export) | Export a Google Doc to PDF, Word, text, or other formats |
| [Google Docs Find Replace Plain Text](google/docs.md#google-docs-find-replace-plain-text) | Find and replace plain text in a Google Doc (no formatting applied to replacement) |
| [Google Docs Format Text](google/docs.md#google-docs-format-text) | Apply formatting (bold, italic, color, etc |
| [Google Docs Get Metadata](google/docs.md#google-docs-get-metadata) | Get metadata about a Google Doc |
| [Google Docs Get Structure](google/docs.md#google-docs-get-structure) | Get document structure with index positions for precise editing operations |
| [Google Docs Insert Markdown At](google/docs.md#google-docs-insert-markdown-at) | Insert formatted Markdown at a specific position in a Google Doc - ideal for LLM/AI output |
| [Google Docs Insert Page Break](google/docs.md#google-docs-insert-page-break) | Insert a page break into a Google Doc |
| [Google Docs Insert Plain Text](google/docs.md#google-docs-insert-plain-text) | Insert plain text at a specific position in a Google Doc (no formatting applied) |
| [Google Docs Insert Table](google/docs.md#google-docs-insert-table) | Insert a table into a Google Doc, optionally with content and Markdown formatting |
| [Google Docs Read](google/docs.md#google-docs-read) | Read text content from a Google Doc |
| [Google Docs Replace All With Markdown](google/docs.md#google-docs-replace-all-with-markdown) | Replace entire Google Doc content with formatted Markdown - ideal for LLM/AI output |
| [Google Docs Replace Content With Markdown](google/docs.md#google-docs-replace-content-with-markdown) | Find text and replace it with formatted Markdown - ideal for LLM/AI output and templates |
| [Google Docs Replace Range With Markdown](google/docs.md#google-docs-replace-range-with-markdown) | Replace a specific index range in a Google Doc with formatted Markdown - ideal for LLM/AI output |
| [Google Docs Set Public Access](google/docs.md#google-docs-set-public-access) | Make a Google Doc public or private |
| [Google Docs Share](google/docs.md#google-docs-share) | Share a Google Doc with specific users |
| [Google Sheets Add Column](google/sheets.md#google-sheets-add-column) | Add a new column with a header |
| [Google Sheets Add Dropdown](google/sheets.md#google-sheets-add-dropdown) | Add a dropdown list (data validation) to cells |
| [Google Sheets Add Note](google/sheets.md#google-sheets-add-note) | Add a note to a cell in a Google Sheet |
| [Google Sheets Append Row](google/sheets.md#google-sheets-append-row) | Append or Add a single row to the end of a Google Sheet |
| [Google Sheets Batch Operations](google/sheets.md#google-sheets-batch-operations) | This block performs multiple operations on a Google Sheets spreadsheet in a single batch request |
| [Google Sheets Clear](google/sheets.md#google-sheets-clear) | This block clears data from a specified range in a Google Sheets spreadsheet |
| [Google Sheets Copy To Spreadsheet](google/sheets.md#google-sheets-copy-to-spreadsheet) | Copy a sheet from one spreadsheet to another |
| [Google Sheets Create Named Range](google/sheets.md#google-sheets-create-named-range) | Create a named range to reference cells by name instead of A1 notation |
| [Google Sheets Create Spreadsheet](google/sheets.md#google-sheets-create-spreadsheet) | This block creates a new Google Sheets spreadsheet with specified sheets |
| [Google Sheets Delete Column](google/sheets.md#google-sheets-delete-column) | Delete a column by header name or column letter |
| [Google Sheets Delete Rows](google/sheets.md#google-sheets-delete-rows) | Delete specific rows from a Google Sheet by their row indices |
| [Google Sheets Export Csv](google/sheets.md#google-sheets-export-csv) | Export a Google Sheet as CSV data |
| [Google Sheets Filter Rows](google/sheets.md#google-sheets-filter-rows) | Filter rows in a Google Sheet based on a column condition |
| [Google Sheets Find](google/sheets.md#google-sheets-find) | Find text in a Google Sheets spreadsheet |
| [Google Sheets Find Replace](google/sheets.md#google-sheets-find-replace) | This block finds and replaces text in a Google Sheets spreadsheet |
| [Google Sheets Format](google/sheets.md#google-sheets-format) | Format a range in a Google Sheet (sheet optional) |
| [Google Sheets Get Column](google/sheets.md#google-sheets-get-column) | Extract all values from a specific column |
| [Google Sheets Get Notes](google/sheets.md#google-sheets-get-notes) | Get notes from cells in a Google Sheet |
| [Google Sheets Get Row](google/sheets.md#google-sheets-get-row) | Get a specific row by its index |
| [Google Sheets Get Row Count](google/sheets.md#google-sheets-get-row-count) | Get row count and dimensions of a Google Sheet |
| [Google Sheets Get Unique Values](google/sheets.md#google-sheets-get-unique-values) | Get unique values from a column |
| [Google Sheets Import Csv](google/sheets.md#google-sheets-import-csv) | Import CSV data into a Google Sheet |
| [Google Sheets Insert Row](google/sheets.md#google-sheets-insert-row) | Insert a single row at a specific position |
| [Google Sheets List Named Ranges](google/sheets.md#google-sheets-list-named-ranges) | List all named ranges in a spreadsheet |
| [Google Sheets Lookup Row](google/sheets.md#google-sheets-lookup-row) | Look up a row by finding a value in a specific column |
| [Google Sheets Manage Sheet](google/sheets.md#google-sheets-manage-sheet) | Create, delete, or copy sheets (sheet optional) |
| [Google Sheets Metadata](google/sheets.md#google-sheets-metadata) | This block retrieves metadata about a Google Sheets spreadsheet including sheet names and properties |
| [Google Sheets Protect Range](google/sheets.md#google-sheets-protect-range) | Protect a cell range or entire sheet from editing |
| [Google Sheets Read](google/sheets.md#google-sheets-read) | A block that reads data from a Google Sheets spreadsheet using A1 notation range selection |
| [Google Sheets Remove Duplicates](google/sheets.md#google-sheets-remove-duplicates) | Remove duplicate rows based on specified columns |
| [Google Sheets Set Public Access](google/sheets.md#google-sheets-set-public-access) | Make a Google Spreadsheet public or private |
| [Google Sheets Share Spreadsheet](google/sheets.md#google-sheets-share-spreadsheet) | Share a Google Spreadsheet with users or get shareable link |
| [Google Sheets Sort](google/sheets.md#google-sheets-sort) | Sort a Google Sheet by one or two columns |
| [Google Sheets Update Cell](google/sheets.md#google-sheets-update-cell) | Update a single cell in a Google Sheets spreadsheet |
| [Google Sheets Update Row](google/sheets.md#google-sheets-update-row) | Update a specific row by its index |
| [Google Sheets Write](google/sheets.md#google-sheets-write) | A block that writes data to a Google Sheets spreadsheet at a specified A1 notation range |
| [Keyword Suggestion Extractor](dataforseo/keyword_suggestions.md#keyword-suggestion-extractor) | Extract individual fields from a KeywordSuggestion object |
| [Persist Information](data.md#persist-information) | Persist key-value information for the current user |
| [Read Spreadsheet](data.md#read-spreadsheet) | Reads CSV and Excel files and outputs the data as a list of dictionaries and individual rows |
| [Related Keyword Extractor](dataforseo/related_keywords.md#related-keyword-extractor) | Extract individual fields from a RelatedKeyword object |
| [Retrieve Information](data.md#retrieve-information) | Retrieve key-value information for the current user |
| [Screenshot Web Page](data.md#screenshot-web-page) | Takes a screenshot of a specified website using ScreenshotOne API |
## Text Processing
| Block Name | Description |
|------------|-------------|
| [Match Text Pattern](text.md#match-text-pattern) | Checks if text matches a specified pattern |
| [Extract Text Information](text.md#extract-text-information) | Extracts specific information from text using patterns |
| [Fill Text Template](text.md#fill-text-template) | Populates a template with provided values |
| [Combine Texts](text.md#combine-texts) | Merges multiple text inputs into one |
| [Text Decoder](decoder_block.md#text-decoder) | Converts encoded text into readable format |
| [Code Extraction](text.md#code-extraction) | Extracts code blocks from text and identifies their programming languages |
| [Combine Texts](text.md#combine-texts) | This block combines multiple input texts into a single output text |
| [Countdown Timer](text.md#countdown-timer) | This block triggers after a specified duration |
| [Extract Text Information](text.md#extract-text-information) | This block extracts the text from the given text using the pattern (regex) |
| [Fill Text Template](text.md#fill-text-template) | This block formats the given texts using the format template |
| [Get Current Date](text.md#get-current-date) | This block outputs the current date with an optional offset |
| [Get Current Date And Time](text.md#get-current-date-and-time) | This block outputs the current date and time |
| [Get Current Time](text.md#get-current-time) | This block outputs the current time |
| [Match Text Pattern](text.md#match-text-pattern) | Matches text against a regex pattern and forwards data to positive or negative output based on the match |
| [Text Decoder](text.md#text-decoder) | Decodes a string containing escape sequences into actual text |
| [Text Replace](text.md#text-replace) | This block is used to replace a text with a new text |
| [Text Split](text.md#text-split) | This block is used to split a text into a list of strings |
| [Word Character Count](text.md#word-character-count) | Counts the number of words and characters in a given text |
## AI and Language Models
| Block Name | Description |
|------------|-------------|
| [AI Structured Response Generator](llm.md#ai-structured-response-generator) | Generates structured responses using LLMs |
| [AI Text Generator](llm.md#ai-text-generator) | Produces text responses using LLMs |
| [AI Text Summarizer](llm.md#ai-text-summarizer) | Summarizes long texts using LLMs |
| [AI Conversation](llm.md#ai-conversation) | Facilitates multi-turn conversations with LLMs |
| [AI List Generator](llm.md#ai-list-generator) | Creates lists based on prompts using LLMs |
## Web and API Interactions
| Block Name | Description |
|------------|-------------|
| [Send Web Request](http.md#send-web-request) | Makes HTTP requests to specified web addresses |
| [Read RSS Feed](rss.md#read-rss-feed) | Retrieves and processes entries from RSS feeds |
| [Get Weather Information](search.md#get-weather-information) | Fetches current weather data for a location |
| [Google Maps Search](google_maps.md#google-maps-search) | Searches for local businesses using Google Maps API |
## Social Media and Content
| Block Name | Description |
|------------|-------------|
| [Get Reddit Posts](reddit.md#get-reddit-posts) | Retrieves posts from specified subreddits |
| [Post Reddit Comment](reddit.md#post-reddit-comment) | Posts comments on Reddit |
| [Publish to Medium](medium.md#publish-to-medium) | Publishes content directly to Medium |
| [Read Discord Messages](discord.md#read-discord-messages) | Retrieves messages from Discord channels |
| [Send Discord Message](discord.md#send-discord-message) | Sends messages to Discord channels |
| [AI Ad Maker Video Creator](llm.md#ai-ad-maker-video-creator) | Creates an AI-generated 30-second advert (text + images) |
| [AI Condition](llm.md#ai-condition) | Uses AI to evaluate natural language conditions and provide conditional outputs |
| [AI Conversation](llm.md#ai-conversation) | A block that facilitates multi-turn conversations with a Large Language Model (LLM), maintaining context across message exchanges |
| [AI Image Customizer](llm.md#ai-image-customizer) | Generate and edit custom images using Google's Nano-Banana model from Gemini 2 |
| [AI Image Editor](llm.md#ai-image-editor) | Edit images using BlackForest Labs' Flux Kontext models |
| [AI Image Generator](llm.md#ai-image-generator) | Generate images using various AI models through a unified interface |
| [AI List Generator](llm.md#ai-list-generator) | A block that creates lists of items based on prompts using a Large Language Model (LLM), with optional source data for context |
| [AI Music Generator](llm.md#ai-music-generator) | This block generates music using Meta's MusicGen model on Replicate |
| [AI Screenshot To Video Ad](llm.md#ai-screenshot-to-video-ad) | Turns a screenshot into an engaging, avatar-narrated video advert |
| [AI Shortform Video Creator](llm.md#ai-shortform-video-creator) | Creates a shortform video using revid |
| [AI Structured Response Generator](llm.md#ai-structured-response-generator) | A block that generates structured JSON responses using a Large Language Model (LLM), with schema validation and format enforcement |
| [AI Text Generator](llm.md#ai-text-generator) | A block that produces text responses using a Large Language Model (LLM) based on customizable prompts and system instructions |
| [AI Text Summarizer](llm.md#ai-text-summarizer) | A block that summarizes long texts using a Large Language Model (LLM), with configurable focus topics and summary styles |
| [AI Video Generator](fal/ai_video_generator.md#ai-video-generator) | Generate videos using FAL AI models |
| [Bannerbear Text Overlay](bannerbear/text_overlay.md#bannerbear-text-overlay) | Add text overlay to images using Bannerbear templates |
| [Code Generation](llm.md#code-generation) | Generate or refactor code using OpenAI's Codex (Responses API) |
| [Create Talking Avatar Video](llm.md#create-talking-avatar-video) | This block integrates with D-ID to create video clips and retrieve their URLs |
| [Exa Answer](exa/answers.md#exa-answer) | Get an LLM answer to a question informed by Exa search results |
| [Exa Create Enrichment](exa/websets_enrichment.md#exa-create-enrichment) | Create enrichments to extract additional structured data from webset items |
| [Exa Create Research](exa/research.md#exa-create-research) | Create research task with optional waiting - explores web and synthesizes findings with citations |
| [Ideogram Model](llm.md#ideogram-model) | This block runs Ideogram models with both simple and advanced settings |
| [Jina Chunking](jina/chunking.md#jina-chunking) | Chunks texts using Jina AI's segmentation service |
| [Jina Embedding](jina/embeddings.md#jina-embedding) | Generates embeddings using Jina AI |
| [Perplexity](llm.md#perplexity) | Query Perplexity's sonar models with real-time web search capabilities and receive annotated responses with source citations |
| [Replicate Flux Advanced Model](replicate/flux_advanced.md#replicate-flux-advanced-model) | This block runs Flux models on Replicate with advanced settings |
| [Replicate Model](replicate/replicate_block.md#replicate-model) | Run Replicate models synchronously |
| [Smart Decision Maker](llm.md#smart-decision-maker) | Uses AI to intelligently decide what tool to use |
| [Stagehand Act](stagehand/blocks.md#stagehand-act) | Interact with a web page by performing actions on a web page |
| [Stagehand Extract](stagehand/blocks.md#stagehand-extract) | Extract structured data from a webpage |
| [Stagehand Observe](stagehand/blocks.md#stagehand-observe) | Find suggested actions for your workflows |
| [Unreal Text To Speech](llm.md#unreal-text-to-speech) | Converts text to speech using the Unreal Speech API |
## Search and Information Retrieval
| Block Name | Description |
|------------|-------------|
| [Get Wikipedia Summary](search.md#get-wikipedia-summary) | Fetches summaries of topics from Wikipedia |
| [Search The Web](search.md#search-the-web) | Performs web searches and returns results |
| [Extract Website Content](search.md#extract-website-content) | Retrieves and extracts content from websites |
| [Ask Wolfram](wolfram/llm_api.md#ask-wolfram) | Ask Wolfram Alpha a question |
| [Exa Bulk Webset Items](exa/websets_items.md#exa-bulk-webset-items) | Get all items from a webset in bulk (with configurable limits) |
| [Exa Cancel Enrichment](exa/websets_enrichment.md#exa-cancel-enrichment) | Cancel a running enrichment operation |
| [Exa Cancel Webset](exa/websets.md#exa-cancel-webset) | Cancel all operations being performed on a Webset |
| [Exa Cancel Webset Search](exa/websets_search.md#exa-cancel-webset-search) | Cancel a running webset search |
| [Exa Contents](exa/contents.md#exa-contents) | Retrieves document contents using Exa's contents API |
| [Exa Create Monitor](exa/websets_monitor.md#exa-create-monitor) | Create automated monitors to keep websets updated with fresh data on a schedule |
| [Exa Create Or Find Webset](exa/websets.md#exa-create-or-find-webset) | Create a new webset or return existing one by external_id (idempotent operation) |
| [Exa Create Webset](exa/websets.md#exa-create-webset) | Create a new Exa Webset for persistent web search collections with optional waiting for initial results |
| [Exa Create Webset Search](exa/websets_search.md#exa-create-webset-search) | Add a new search to an existing webset to find more items |
| [Exa Delete Enrichment](exa/websets_enrichment.md#exa-delete-enrichment) | Delete an enrichment from a webset |
| [Exa Delete Monitor](exa/websets_monitor.md#exa-delete-monitor) | Delete a monitor from a webset |
| [Exa Delete Webset](exa/websets.md#exa-delete-webset) | Delete a Webset and all its items |
| [Exa Delete Webset Item](exa/websets_items.md#exa-delete-webset-item) | Delete a specific item from a webset |
| [Exa Find Or Create Search](exa/websets_search.md#exa-find-or-create-search) | Find existing search by query or create new - prevents duplicate searches in workflows |
| [Exa Find Similar](exa/similar.md#exa-find-similar) | Finds similar links using Exa's findSimilar API |
| [Exa Get Enrichment](exa/websets_enrichment.md#exa-get-enrichment) | Get the status and details of a webset enrichment |
| [Exa Get Monitor](exa/websets_monitor.md#exa-get-monitor) | Get the details and status of a webset monitor |
| [Exa Get Research](exa/research.md#exa-get-research) | Get status and results of a research task |
| [Exa Get Webset](exa/websets.md#exa-get-webset) | Retrieve a Webset by ID or external ID |
| [Exa Get Webset Item](exa/websets_items.md#exa-get-webset-item) | Get a specific item from a webset by its ID |
| [Exa Get Webset Search](exa/websets_search.md#exa-get-webset-search) | Get the status and details of a webset search |
| [Exa List Monitors](exa/websets_monitor.md#exa-list-monitors) | List all monitors with optional webset filtering |
| [Exa List Research](exa/research.md#exa-list-research) | List all research tasks with pagination support |
| [Exa List Webset Items](exa/websets_items.md#exa-list-webset-items) | List items in a webset with pagination support |
| [Exa List Websets](exa/websets.md#exa-list-websets) | List all Websets with pagination support |
| [Exa Preview Webset](exa/websets.md#exa-preview-webset) | Preview how a search query will be interpreted before creating a webset |
| [Exa Search](exa/search.md#exa-search) | Searches the web using Exa's advanced search API |
| [Exa Update Enrichment](exa/websets_enrichment.md#exa-update-enrichment) | Update an existing enrichment configuration |
| [Exa Update Monitor](exa/websets_monitor.md#exa-update-monitor) | Update a monitor's status, schedule, or metadata |
| [Exa Update Webset](exa/websets.md#exa-update-webset) | Update metadata for an existing Webset |
| [Exa Wait For Enrichment](exa/websets_polling.md#exa-wait-for-enrichment) | Wait for a webset enrichment to complete with progress tracking |
| [Exa Wait For Research](exa/research.md#exa-wait-for-research) | Wait for a research task to complete with configurable timeout |
| [Exa Wait For Search](exa/websets_polling.md#exa-wait-for-search) | Wait for a specific webset search to complete with progress tracking |
| [Exa Wait For Webset](exa/websets_polling.md#exa-wait-for-webset) | Wait for a webset to reach a specific status with progress tracking |
| [Exa Webset Items Summary](exa/websets_items.md#exa-webset-items-summary) | Get a summary of webset items without retrieving all data |
| [Exa Webset Status](exa/websets.md#exa-webset-status) | Get a quick status overview of a webset |
| [Exa Webset Summary](exa/websets.md#exa-webset-summary) | Get a comprehensive summary of a webset with samples and statistics |
| [Extract Website Content](jina/search.md#extract-website-content) | This block scrapes the content from the given web URL |
| [Fact Checker](jina/fact_checker.md#fact-checker) | This block checks the factuality of a given statement using Jina AI's Grounding API |
| [Firecrawl Crawl](firecrawl/crawl.md#firecrawl-crawl) | Firecrawl crawls websites to extract comprehensive data while bypassing blockers |
| [Firecrawl Extract](firecrawl/extract.md#firecrawl-extract) | Firecrawl crawls websites to extract comprehensive data while bypassing blockers |
| [Firecrawl Map Website](firecrawl/map.md#firecrawl-map-website) | Firecrawl maps a website to extract all the links |
| [Firecrawl Scrape](firecrawl/scrape.md#firecrawl-scrape) | Firecrawl scrapes a website to extract comprehensive data while bypassing blockers |
| [Firecrawl Search](firecrawl/search.md#firecrawl-search) | Firecrawl searches the web for the given query |
| [Get Person Detail](apollo/person.md#get-person-detail) | Get detailed person data with Apollo API, including email reveal |
| [Get Wikipedia Summary](search.md#get-wikipedia-summary) | This block fetches the summary of a given topic from Wikipedia |
| [Google Maps Search](search.md#google-maps-search) | This block searches for local businesses using Google Maps API |
| [Search Organizations](apollo/organization.md#search-organizations) | Search for organizations in Apollo |
| [Search People](apollo/people.md#search-people) | Search for people in Apollo |
| [Search The Web](jina/search.md#search-the-web) | This block searches the internet for the given search query |
| [Validate Emails](zerobounce/validate_emails.md#validate-emails) | Validate emails |
## Time and Date
## Social Media and Content
| Block Name | Description |
|------------|-------------|
| [Get Current Time](time_blocks.md#get-current-time) | Provides the current time |
| [Get Current Date](time_blocks.md#get-current-date) | Provides the current date |
| [Get Current Date and Time](time_blocks.md#get-current-date-and-time) | Provides both current date and time |
| [Countdown Timer](time_blocks.md#countdown-timer) | Acts as a countdown timer |
| [Create Discord Thread](discord/bot_blocks.md#create-discord-thread) | Creates a new thread in a Discord channel |
| [Discord Channel Info](discord/bot_blocks.md#discord-channel-info) | Resolves Discord channel names to IDs and vice versa |
| [Discord Get Current User](discord/oauth_blocks.md#discord-get-current-user) | Gets information about the currently authenticated Discord user using OAuth2 credentials |
| [Discord User Info](discord/bot_blocks.md#discord-user-info) | Gets information about a Discord user by their ID |
| [Get Linkedin Profile](enrichlayer/linkedin.md#get-linkedin-profile) | Fetch LinkedIn profile data using Enrichlayer |
| [Get Linkedin Profile Picture](enrichlayer/linkedin.md#get-linkedin-profile-picture) | Get LinkedIn profile pictures using Enrichlayer |
| [Get Reddit Posts](misc.md#get-reddit-posts) | This block fetches Reddit posts from a defined subreddit name |
| [Linkedin Person Lookup](enrichlayer/linkedin.md#linkedin-person-lookup) | Look up LinkedIn profiles by person information using Enrichlayer |
| [Linkedin Role Lookup](enrichlayer/linkedin.md#linkedin-role-lookup) | Look up LinkedIn profiles by role in a company using Enrichlayer |
| [Post Reddit Comment](misc.md#post-reddit-comment) | This block posts a Reddit comment on a specified Reddit post |
| [Post To Bluesky](ayrshare/post_to_bluesky.md#post-to-bluesky) | Post to Bluesky using Ayrshare |
| [Post To Facebook](ayrshare/post_to_facebook.md#post-to-facebook) | Post to Facebook using Ayrshare |
| [Post To GMB](ayrshare/post_to_gmb.md#post-to-gmb) | Post to Google My Business using Ayrshare |
| [Post To Instagram](ayrshare/post_to_instagram.md#post-to-instagram) | Post to Instagram using Ayrshare |
| [Post To Linked In](ayrshare/post_to_linkedin.md#post-to-linked-in) | Post to LinkedIn using Ayrshare |
| [Post To Pinterest](ayrshare/post_to_pinterest.md#post-to-pinterest) | Post to Pinterest using Ayrshare |
| [Post To Reddit](ayrshare/post_to_reddit.md#post-to-reddit) | Post to Reddit using Ayrshare |
| [Post To Snapchat](ayrshare/post_to_snapchat.md#post-to-snapchat) | Post to Snapchat using Ayrshare |
| [Post To Telegram](ayrshare/post_to_telegram.md#post-to-telegram) | Post to Telegram using Ayrshare |
| [Post To Threads](ayrshare/post_to_threads.md#post-to-threads) | Post to Threads using Ayrshare |
| [Post To Tik Tok](ayrshare/post_to_tiktok.md#post-to-tik-tok) | Post to TikTok using Ayrshare |
| [Post To X](ayrshare/post_to_x.md#post-to-x) | Post to X / Twitter using Ayrshare |
| [Post To You Tube](ayrshare/post_to_youtube.md#post-to-you-tube) | Post to YouTube using Ayrshare |
| [Publish To Medium](misc.md#publish-to-medium) | Publishes a post to Medium |
| [Read Discord Messages](discord/bot_blocks.md#read-discord-messages) | Reads messages from a Discord channel using a bot token |
| [Reply To Discord Message](discord/bot_blocks.md#reply-to-discord-message) | Replies to a specific Discord message |
| [Send Discord DM](discord/bot_blocks.md#send-discord-dm) | Sends a direct message to a Discord user using their user ID |
| [Send Discord Embed](discord/bot_blocks.md#send-discord-embed) | Sends a rich embed message to a Discord channel |
| [Send Discord File](discord/bot_blocks.md#send-discord-file) | Sends a file attachment to a Discord channel |
| [Send Discord Message](discord/bot_blocks.md#send-discord-message) | Sends a message to a Discord channel using a bot token |
| [Transcribe Youtube Video](misc.md#transcribe-youtube-video) | Transcribes a YouTube video using a proxy |
| [Twitter Add List Member](twitter/list_members.md#twitter-add-list-member) | This block adds a specified user to a Twitter List owned by the authenticated user |
| [Twitter Bookmark Tweet](twitter/bookmark.md#twitter-bookmark-tweet) | This block bookmarks a tweet on Twitter |
| [Twitter Create List](twitter/manage_lists.md#twitter-create-list) | This block creates a new Twitter List for the authenticated user |
| [Twitter Delete List](twitter/manage_lists.md#twitter-delete-list) | This block deletes a specified Twitter List owned by the authenticated user |
| [Twitter Delete Tweet](twitter/manage.md#twitter-delete-tweet) | This block deletes a tweet on Twitter |
| [Twitter Follow List](twitter/list_follows.md#twitter-follow-list) | This block follows a specified Twitter list for the authenticated user |
| [Twitter Follow User](twitter/follows.md#twitter-follow-user) | This block follows a specified Twitter user |
| [Twitter Get Bookmarked Tweets](twitter/bookmark.md#twitter-get-bookmarked-tweets) | This block retrieves bookmarked tweets from Twitter |
| [Twitter Get Followers](twitter/follows.md#twitter-get-followers) | This block retrieves followers of a specified Twitter user |
| [Twitter Get Following](twitter/follows.md#twitter-get-following) | This block retrieves the users that a specified Twitter user is following |
| [Twitter Get Home Timeline](twitter/timeline.md#twitter-get-home-timeline) | This block retrieves the authenticated user's home timeline |
| [Twitter Get Liked Tweets](twitter/like.md#twitter-get-liked-tweets) | This block gets information about tweets liked by a user |
| [Twitter Get Liking Users](twitter/like.md#twitter-get-liking-users) | This block gets information about users who liked a tweet |
| [Twitter Get List](twitter/list_lookup.md#twitter-get-list) | This block retrieves information about a specified Twitter List |
| [Twitter Get List Members](twitter/list_members.md#twitter-get-list-members) | This block retrieves the members of a specified Twitter List |
| [Twitter Get List Memberships](twitter/list_members.md#twitter-get-list-memberships) | This block retrieves all Lists that a specified user is a member of |
| [Twitter Get List Tweets](twitter/list_tweets_lookup.md#twitter-get-list-tweets) | This block retrieves tweets from a specified Twitter list |
| [Twitter Get Muted Users](twitter/mutes.md#twitter-get-muted-users) | This block gets a list of users muted by the authenticating user |
| [Twitter Get Owned Lists](twitter/list_lookup.md#twitter-get-owned-lists) | This block retrieves all Lists owned by a specified Twitter user |
| [Twitter Get Pinned Lists](twitter/pinned_lists.md#twitter-get-pinned-lists) | This block returns the Lists pinned by the authenticated user |
| [Twitter Get Quote Tweets](twitter/quote.md#twitter-get-quote-tweets) | This block gets quote tweets for a specific tweet |
| [Twitter Get Retweeters](twitter/retweet.md#twitter-get-retweeters) | This block gets information about who has retweeted a tweet |
| [Twitter Get Space Buyers](twitter/spaces_lookup.md#twitter-get-space-buyers) | This block retrieves a list of users who purchased tickets to a Twitter Space |
| [Twitter Get Space By Id](twitter/spaces_lookup.md#twitter-get-space-by-id) | This block retrieves information about a single Twitter Space |
| [Twitter Get Space Tweets](twitter/spaces_lookup.md#twitter-get-space-tweets) | This block retrieves tweets shared in a Twitter Space |
| [Twitter Get Spaces](twitter/spaces_lookup.md#twitter-get-spaces) | This block retrieves information about multiple Twitter Spaces |
| [Twitter Get Tweet](twitter/tweet_lookup.md#twitter-get-tweet) | This block retrieves information about a specific Tweet |
| [Twitter Get Tweets](twitter/tweet_lookup.md#twitter-get-tweets) | This block retrieves information about multiple Tweets |
| [Twitter Get User](twitter/user_lookup.md#twitter-get-user) | This block retrieves information about a specified Twitter user |
| [Twitter Get User Mentions](twitter/timeline.md#twitter-get-user-mentions) | This block retrieves Tweets mentioning a specific user |
| [Twitter Get User Tweets](twitter/timeline.md#twitter-get-user-tweets) | This block retrieves Tweets composed by a single user |
| [Twitter Get Users](twitter/user_lookup.md#twitter-get-users) | This block retrieves information about multiple Twitter users |
| [Twitter Get Blocked Users](twitter/blocks.md#twitter-geted-users) | This block retrieves a list of users blocked by the authenticating user |
| [Twitter Hide Reply](twitter/hide.md#twitter-hide-reply) | This block hides a reply to a tweet |
| [Twitter Like Tweet](twitter/like.md#twitter-like-tweet) | This block likes a tweet |
| [Twitter Mute User](twitter/mutes.md#twitter-mute-user) | This block mutes a specified Twitter user |
| [Twitter Pin List](twitter/pinned_lists.md#twitter-pin-list) | This block allows the authenticated user to pin a specified List |
| [Twitter Post Tweet](twitter/manage.md#twitter-post-tweet) | This block posts a tweet on Twitter |
| [Twitter Remove Bookmark Tweet](twitter/bookmark.md#twitter-remove-bookmark-tweet) | This block removes a bookmark from a tweet on Twitter |
| [Twitter Remove List Member](twitter/list_members.md#twitter-remove-list-member) | This block removes a specified user from a Twitter List owned by the authenticated user |
| [Twitter Remove Retweet](twitter/retweet.md#twitter-remove-retweet) | This block removes a retweet on Twitter |
| [Twitter Retweet](twitter/retweet.md#twitter-retweet) | This block retweets a tweet on Twitter |
| [Twitter Search Recent Tweets](twitter/manage.md#twitter-search-recent-tweets) | This block searches all public Tweets in Twitter history |
| [Twitter Search Spaces](twitter/search_spaces.md#twitter-search-spaces) | This block searches for Twitter Spaces based on specified terms |
| [Twitter Unfollow List](twitter/list_follows.md#twitter-unfollow-list) | This block unfollows a specified Twitter list for the authenticated user |
| [Twitter Unfollow User](twitter/follows.md#twitter-unfollow-user) | This block unfollows a specified Twitter user |
| [Twitter Unhide Reply](twitter/hide.md#twitter-unhide-reply) | This block unhides a reply to a tweet |
| [Twitter Unlike Tweet](twitter/like.md#twitter-unlike-tweet) | This block unlikes a tweet |
| [Twitter Unmute User](twitter/mutes.md#twitter-unmute-user) | This block unmutes a specified Twitter user |
| [Twitter Unpin List](twitter/pinned_lists.md#twitter-unpin-list) | This block allows the authenticated user to unpin a specified List |
| [Twitter Update List](twitter/manage_lists.md#twitter-update-list) | This block updates a specified Twitter List owned by the authenticated user |
## Math and Calculations
| Block Name | Description |
|------------|-------------|
| [Calculator](maths.md#calculator) | Performs basic mathematical operations |
| [Count Items](maths.md#count-items) | Counts items in a collection |
## Communication
| Block Name | Description |
|------------|-------------|
| [Baas Bot Join Meeting](baas/bots.md#baas-bot-join-meeting) | Deploy a bot to join and record a meeting |
| [Baas Bot Leave Meeting](baas/bots.md#baas-bot-leave-meeting) | Remove a bot from an ongoing meeting |
| [Gmail Add Label](google/gmail.md#gmail-add-label) | A block that adds a label to a specific email message in Gmail, creating the label if it doesn't exist |
| [Gmail Create Draft](google/gmail.md#gmail-create-draft) | Create draft emails in Gmail with automatic HTML detection and proper text formatting |
| [Gmail Draft Reply](google/gmail.md#gmail-draft-reply) | Create draft replies to Gmail threads with automatic HTML detection and proper text formatting |
| [Gmail Forward](google/gmail.md#gmail-forward) | Forward Gmail messages to other recipients with automatic HTML detection and proper formatting |
| [Gmail Get Profile](google/gmail.md#gmail-get-profile) | Get the authenticated user's Gmail profile details including email address and message statistics |
| [Gmail Get Thread](google/gmail.md#gmail-get-thread) | A block that retrieves an entire Gmail thread (email conversation) by ID, returning all messages with decoded bodies for reading complete conversations |
| [Gmail List Labels](google/gmail.md#gmail-list-labels) | A block that retrieves all labels (categories) from a Gmail account for organizing and categorizing emails |
| [Gmail Read](google/gmail.md#gmail-read) | A block that retrieves and reads emails from a Gmail account based on search criteria, returning detailed message information including subject, sender, body, and attachments |
| [Gmail Remove Label](google/gmail.md#gmail-remove-label) | A block that removes a label from a specific email message in a Gmail account |
| [Gmail Reply](google/gmail.md#gmail-reply) | Reply to Gmail threads with automatic HTML detection and proper text formatting |
| [Gmail Send](google/gmail.md#gmail-send) | Send emails via Gmail with automatic HTML detection and proper text formatting |
| [Hub Spot Engagement](hubspot/engagement.md#hub-spot-engagement) | Manages HubSpot engagements - sends emails and tracks engagement metrics |
## Developer Tools
| Block Name | Description |
|------------|-------------|
| [Exa Code Context](exa/code_context.md#exa-code-context) | Search billions of GitHub repos, docs, and Stack Overflow for relevant code examples |
| [Execute Code](misc.md#execute-code) | Executes code in a sandbox environment with internet access |
| [Execute Code Step](misc.md#execute-code-step) | Execute code in a previously instantiated sandbox |
| [Github Add Label](github/issues.md#github-add-label) | A block that adds a label to a GitHub issue or pull request for categorization and organization |
| [Github Assign Issue](github/issues.md#github-assign-issue) | A block that assigns a GitHub user to an issue for task ownership and tracking |
| [Github Assign PR Reviewer](github/pull_requests.md#github-assign-pr-reviewer) | This block assigns a reviewer to a specified GitHub pull request |
| [Github Comment](github/issues.md#github-comment) | A block that posts comments on GitHub issues or pull requests using the GitHub API |
| [Github Create Check Run](github/checks.md#github-create-check-run) | Creates a new check run for a specific commit in a GitHub repository |
| [Github Create Comment Object](github/reviews.md#github-create-comment-object) | Creates a comment object for use with GitHub blocks |
| [Github Create File](github/repo.md#github-create-file) | This block creates a new file in a GitHub repository |
| [Github Create PR Review](github/reviews.md#github-create-pr-review) | This block creates a review on a GitHub pull request with optional inline comments |
| [Github Create Repository](github/repo.md#github-create-repository) | This block creates a new GitHub repository |
| [Github Create Status](github/statuses.md#github-create-status) | Creates a new commit status in a GitHub repository |
| [Github Delete Branch](github/repo.md#github-delete-branch) | This block deletes a specified branch |
| [Github Discussion Trigger](github/triggers.md#github-discussion-trigger) | This block triggers on GitHub Discussions events |
| [Github Get CI Results](github/ci.md#github-get-ci-results) | This block gets CI results for a commit or PR, with optional search for specific errors/warnings in logs |
| [Github Get PR Review Comments](github/reviews.md#github-get-pr-review-comments) | This block gets all review comments from a GitHub pull request or from a specific review |
| [Github Issues Trigger](github/triggers.md#github-issues-trigger) | This block triggers on GitHub issues events |
| [Github List Branches](github/repo.md#github-list-branches) | This block lists all branches for a specified GitHub repository |
| [Github List Comments](github/issues.md#github-list-comments) | A block that retrieves all comments from a GitHub issue or pull request, including comment metadata and content |
| [Github List Discussions](github/repo.md#github-list-discussions) | This block lists recent discussions for a specified GitHub repository |
| [Github List Issues](github/issues.md#github-list-issues) | A block that retrieves a list of issues from a GitHub repository with their titles and URLs |
| [Github List PR Reviewers](github/pull_requests.md#github-list-pr-reviewers) | This block lists all reviewers for a specified GitHub pull request |
| [Github List PR Reviews](github/reviews.md#github-list-pr-reviews) | This block lists all reviews for a specified GitHub pull request |
| [Github List Pull Requests](github/pull_requests.md#github-list-pull-requests) | This block lists all pull requests for a specified GitHub repository |
| [Github List Releases](github/repo.md#github-list-releases) | This block lists all releases for a specified GitHub repository |
| [Github List Stargazers](github/repo.md#github-list-stargazers) | This block lists all users who have starred a specified GitHub repository |
| [Github List Tags](github/repo.md#github-list-tags) | This block lists all tags for a specified GitHub repository |
| [Github Make Branch](github/repo.md#github-make-branch) | This block creates a new branch from a specified source branch |
| [Github Make Issue](github/issues.md#github-make-issue) | A block that creates new issues on GitHub repositories with a title and body content |
| [Github Make Pull Request](github/pull_requests.md#github-make-pull-request) | This block creates a new pull request on a specified GitHub repository |
| [Github Pull Request Trigger](github/triggers.md#github-pull-request-trigger) | This block triggers on pull request events and outputs the event type and payload |
| [Github Read File](github/repo.md#github-read-file) | This block reads the content of a specified file from a GitHub repository |
| [Github Read Folder](github/repo.md#github-read-folder) | This block reads the content of a specified folder from a GitHub repository |
| [Github Read Issue](github/issues.md#github-read-issue) | A block that retrieves information about a specific GitHub issue, including its title, body content, and creator |
| [Github Read Pull Request](github/pull_requests.md#github-read-pull-request) | This block reads the body, title, user, and changes of a specified GitHub pull request |
| [Github Release Trigger](github/triggers.md#github-release-trigger) | This block triggers on GitHub release events |
| [Github Remove Label](github/issues.md#github-remove-label) | A block that removes a label from a GitHub issue or pull request |
| [Github Resolve Review Discussion](github/reviews.md#github-resolve-review-discussion) | This block resolves or unresolves a review discussion thread on a GitHub pull request |
| [Github Star Trigger](github/triggers.md#github-star-trigger) | This block triggers on GitHub star events |
| [Github Submit Pending Review](github/reviews.md#github-submit-pending-review) | This block submits a pending (draft) review on a GitHub pull request |
| [Github Unassign Issue](github/issues.md#github-unassign-issue) | A block that removes a user's assignment from a GitHub issue |
| [Github Unassign PR Reviewer](github/pull_requests.md#github-unassign-pr-reviewer) | This block unassigns a reviewer from a specified GitHub pull request |
| [Github Update Check Run](github/checks.md#github-update-check-run) | Updates an existing check run in a GitHub repository |
| [Github Update Comment](github/issues.md#github-update-comment) | A block that updates an existing comment on a GitHub issue or pull request |
| [Github Update File](github/repo.md#github-update-file) | This block updates an existing file in a GitHub repository |
| [Instantiate Code Sandbox](misc.md#instantiate-code-sandbox) | Instantiate a sandbox environment with internet access in which you can execute code with the Execute Code Step block |
| [Slant3D Order Webhook](slant3d/webhook.md#slant3d-order-webhook) | This block triggers on Slant3D order status updates and outputs the event details, including tracking information when orders are shipped |
## Media Generation
| Block Name | Description |
|------------|-------------|
| [Ideogram Model](ideogram.md#ideogram-model) | Generates images based on text prompts |
| [Create Talking Avatar Video](talking_head.md#create-talking-avatar-video) | Creates videos with talking avatars |
| [Unreal Text to Speech](text_to_speech_block.md#unreal-text-to-speech) | Converts text to speech using Unreal Speech API |
| [AI Shortform Video Creator](ai_shortform_video_block.md#ai-shortform-video-creator) | Generates short-form videos using AI |
| [Replicate Flux Advanced Model](replicate_flux_advanced.md#replicate-flux-advanced-model) | Creates images using Replicate's Flux models |
| [Flux Kontext](flux_kontext.md#flux-kontext) | Text-based image editing using Flux Kontext |
| [Add Audio To Video](multimedia.md#add-audio-to-video) | Block to attach an audio file to a video file using moviepy |
| [Loop Video](multimedia.md#loop-video) | Block to loop a video to a given duration or number of repeats |
| [Media Duration](multimedia.md#media-duration) | Block to get the duration of a media file |
## Miscellaneous
| Block Name | Description |
|------------|-------------|
| [Transcribe YouTube Video](youtube.md#transcribe-youtube-video) | Transcribes audio from YouTube videos |
| [Send Email](email_block.md#send-email) | Sends emails using SMTP |
| [Condition Block](branching.md#condition-block) | Evaluates conditions for workflow branching |
| [Step Through Items](iteration.md#step-through-items) | Iterates through lists or dictionaries |
## Productivity
| Block Name | Description |
|------------|-------------|
| [Google Calendar Create Event](google/calendar.md#google-calendar-create-event) | This block creates a new event in Google Calendar with customizable parameters |
| [Notion Create Page](notion/create_page.md#notion-create-page) | Create a new page in Notion |
| [Notion Read Database](notion/read_database.md#notion-read-database) | Query a Notion database with optional filtering and sorting, returning structured entries |
| [Notion Read Page](notion/read_page.md#notion-read-page) | Read a Notion page by its ID and return its raw JSON |
| [Notion Read Page Markdown](notion/read_page_markdown.md#notion-read-page-markdown) | Read a Notion page and convert it to Markdown format with proper formatting for headings, lists, links, and rich text |
| [Notion Search](notion/search.md#notion-search) | Search your Notion workspace for pages and databases by text query |
| [Todoist Close Task](todoist/tasks.md#todoist-close-task) | Closes a task in Todoist |
| [Todoist Create Comment](todoist/comments.md#todoist-create-comment) | Creates a new comment on a Todoist task or project |
| [Todoist Create Label](todoist/labels.md#todoist-create-label) | Creates a new label in Todoist; it will not work if a label with the same name already exists |
| [Todoist Create Project](todoist/projects.md#todoist-create-project) | Creates a new project in Todoist |
| [Todoist Create Task](todoist/tasks.md#todoist-create-task) | Creates a new task in a Todoist project |
| [Todoist Delete Comment](todoist/comments.md#todoist-delete-comment) | Deletes a Todoist comment |
| [Todoist Delete Label](todoist/labels.md#todoist-delete-label) | Deletes a personal label in Todoist |
| [Todoist Delete Project](todoist/projects.md#todoist-delete-project) | Deletes a Todoist project and all its contents |
| [Todoist Delete Section](todoist/sections.md#todoist-delete-section) | Deletes a section and all its tasks from Todoist |
| [Todoist Delete Task](todoist/tasks.md#todoist-delete-task) | Deletes a task in Todoist |
| [Todoist Get Comment](todoist/comments.md#todoist-get-comment) | Get a single comment from Todoist |
| [Todoist Get Comments](todoist/comments.md#todoist-get-comments) | Get all comments for a Todoist task or project |
| [Todoist Get Label](todoist/labels.md#todoist-get-label) | Gets a personal label from Todoist by ID |
| [Todoist Get Project](todoist/projects.md#todoist-get-project) | Gets details for a specific Todoist project |
| [Todoist Get Section](todoist/sections.md#todoist-get-section) | Gets a single section by ID from Todoist |
| [Todoist Get Shared Labels](todoist/labels.md#todoist-get-shared-labels) | Gets all shared labels from Todoist |
| [Todoist Get Task](todoist/tasks.md#todoist-get-task) | Get an active task from Todoist |
| [Todoist Get Tasks](todoist/tasks.md#todoist-get-tasks) | Get active tasks from Todoist |
| [Todoist List Collaborators](todoist/projects.md#todoist-list-collaborators) | Gets all collaborators for a specific Todoist project |
| [Todoist List Labels](todoist/labels.md#todoist-list-labels) | Gets all personal labels from Todoist |
| [Todoist List Projects](todoist/projects.md#todoist-list-projects) | Gets all projects and their details from Todoist |
| [Todoist List Sections](todoist/sections.md#todoist-list-sections) | Gets all sections and their details from Todoist |
| [Todoist Remove Shared Labels](todoist/labels.md#todoist-remove-shared-labels) | Removes all instances of a shared label |
| [Todoist Rename Shared Labels](todoist/labels.md#todoist-rename-shared-labels) | Renames all instances of a shared label |
| [Todoist Reopen Task](todoist/tasks.md#todoist-reopen-task) | Reopens a task in Todoist |
| [Todoist Update Comment](todoist/comments.md#todoist-update-comment) | Updates a Todoist comment |
| [Todoist Update Label](todoist/labels.md#todoist-update-label) | Updates a personal label in Todoist |
| [Todoist Update Project](todoist/projects.md#todoist-update-project) | Updates an existing project in Todoist |
| [Todoist Update Task](todoist/tasks.md#todoist-update-task) | Updates an existing task in Todoist |
## Google Services
| Block Name | Description |
|------------|-------------|
| [Gmail Read](google/gmail.md#gmail-read) | Retrieves and reads emails from a Gmail account |
| [Gmail Get Thread](google/gmail.md#gmail-get-thread) | Returns every message in a Gmail thread |
| [Gmail Reply](google/gmail.md#gmail-reply) | Sends a reply that stays in the same thread |
| [Gmail Send](google/gmail.md#gmail-send) | Sends emails using a Gmail account |
| [Gmail List Labels](google/gmail.md#gmail-list-labels) | Retrieves all labels from a Gmail account |
| [Gmail Add Label](google/gmail.md#gmail-add-label) | Adds a label to a specific email in a Gmail account |
| [Gmail Remove Label](google/gmail.md#gmail-remove-label) | Removes a label from a specific email in a Gmail account |
| [Google Sheets Read](google/sheet.md#google-sheets-read) | Reads data from a Google Sheets spreadsheet |
| [Google Sheets Write](google/sheet.md#google-sheets-write) | Writes data to a Google Sheets spreadsheet |
| [Google Maps Search](google_maps.md#google-maps-search) | Searches for local businesses using the Google Maps API |
## Logic and Control Flow
| Block Name | Description |
|------------|-------------|
| [Calculator](logic.md#calculator) | Performs a mathematical operation on two numbers |
| [Condition](logic.md#condition) | Handles conditional logic based on comparison operators |
| [Count Items](logic.md#count-items) | Counts the number of items in a collection |
| [Data Sampling](logic.md#data-sampling) | This block samples data from a given dataset using various sampling methods |
| [Exa Webset Ready Check](exa/websets.md#exa-webset-ready-check) | Check if webset is ready for next operation - enables conditional workflow branching |
| [If Input Matches](logic.md#if-input-matches) | Handles conditional logic based on comparison operators |
| [Pinecone Init](logic.md#pinecone-init) | Initializes a Pinecone index |
| [Pinecone Insert](logic.md#pinecone-insert) | Upload data to a Pinecone index |
| [Pinecone Query](logic.md#pinecone-query) | Queries a Pinecone index |
| [Step Through Items](logic.md#step-through-items) | Iterates over a list or dictionary and outputs each item |
## GitHub Integration
| Block Name | Description |
|------------|-------------|
| [GitHub Comment](github/issues.md#github-comment) | Posts comments on GitHub issues or pull requests |
| [GitHub Make Issue](github/issues.md#github-make-issue) | Creates new issues on GitHub repositories |
| [GitHub Read Issue](github/issues.md#github-read-issue) | Retrieves information about a specific GitHub issue |
| [GitHub List Issues](github/issues.md#github-list-issues) | Retrieves a list of issues from a GitHub repository |
| [GitHub Add Label](github/issues.md#github-add-label) | Adds a label to a GitHub issue or pull request |
| [GitHub Remove Label](github/issues.md#github-remove-label) | Removes a label from a GitHub issue or pull request |
| [GitHub Assign Issue](github/issues.md#github-assign-issue) | Assigns a user to a GitHub issue |
| [GitHub List Tags](github/repo.md#github-list-tags) | Retrieves and lists all tags for a specified GitHub repository |
| [GitHub List Branches](github/repo.md#github-list-branches) | Retrieves and lists all branches for a specified GitHub repository |
| [GitHub List Discussions](github/repo.md#github-list-discussions) | Retrieves and lists recent discussions for a specified GitHub repository |
| [GitHub Make Branch](github/repo.md#github-make-branch) | Creates a new branch in a GitHub repository |
| [GitHub Delete Branch](github/repo.md#github-delete-branch) | Deletes a specified branch from a GitHub repository |
| [GitHub List Pull Requests](github/pull_requests.md#github-list-pull-requests) | Retrieves a list of pull requests from a specified GitHub repository |
| [GitHub Make Pull Request](github/pull_requests.md#github-make-pull-request) | Creates a new pull request in a specified GitHub repository |
| [GitHub Read Pull Request](github/pull_requests.md#github-read-pull-request) | Retrieves detailed information about a specific GitHub pull request |
| [GitHub Assign PR Reviewer](github/pull_requests.md#github-assign-pr-reviewer) | Assigns a reviewer to a specific GitHub pull request |
| [GitHub Unassign PR Reviewer](github/pull_requests.md#github-unassign-pr-reviewer) | Removes an assigned reviewer from a specific GitHub pull request |
| [GitHub List PR Reviewers](github/pull_requests.md#github-list-pr-reviewers) | Retrieves a list of all assigned reviewers for a specific GitHub pull request |
## Input/Output
| Block Name | Description |
|------------|-------------|
| [Exa Webset Webhook](exa/webhook_blocks.md#exa-webset-webhook) | Receive webhook notifications for Exa webset events |
| [Generic Webhook Trigger](generic_webhook/triggers.md#generic-webhook-trigger) | This block will output the contents of the generic input for the webhook |
| [Read RSS Feed](misc.md#read-rss-feed) | Reads RSS feed entries from a given URL |
## Twitter Integration
| Block Name | Description |
|------------|-------------|
| [Twitter Post Tweet](twitter/twitter.md#twitter-post-tweet-block) | Creates a tweet on Twitter with text content and optional attachments including media, polls, quotes, or deep links |
| [Twitter Delete Tweet](twitter/twitter.md#twitter-delete-tweet-block) | Deletes a specified tweet using its tweet ID |
| [Twitter Search Recent Tweets](twitter/twitter.md#twitter-search-recent-tweets-block) | Searches for tweets matching specified criteria with options for filtering and pagination |
| [Twitter Get Quote Tweets](twitter/twitter.md#twitter-get-quote-tweets-block) | Gets tweets that quote a specified tweet ID with options for pagination and filtering |
| [Twitter Retweet](twitter/twitter.md#twitter-retweet-block) | Creates a retweet of a specified tweet using its tweet ID |
| [Twitter Remove Retweet](twitter/twitter.md#twitter-remove-retweet-block) | Removes an existing retweet of a specified tweet |
| [Twitter Get Retweeters](twitter/twitter.md#twitter-get-retweeters-block) | Gets list of users who have retweeted a specified tweet with pagination and filtering options |
| [Twitter Get User Mentions](twitter/twitter.md#twitter-get-user-mentions-block) | Gets tweets where a specific user is mentioned using their user ID |
| [Twitter Get Home Timeline](twitter/twitter.md#twitter-get-home-timeline-block) | Gets recent tweets and retweets from authenticated user and followed accounts |
| [Twitter Get User](twitter/twitter.md#twitter-get-user-block) | Gets detailed profile information for a single Twitter user |
| [Twitter Get Users](twitter/twitter.md#twitter-get-users-block) | Gets profile information for multiple Twitter users (up to 100) |
| [Twitter Search Spaces](twitter/twitter.md#twitter-search-spaces-block) | Searches for Twitter Spaces matching title keywords with state filtering |
| [Twitter Get Spaces](twitter/twitter.md#twitter-get-spaces-block) | Gets information about multiple Twitter Spaces by Space IDs or creator IDs |
| [Twitter Get Space By Id](twitter/twitter.md#twitter-get-space-by-id-block) | Gets detailed information about a single Twitter Space |
| [Twitter Get Space Tweets](twitter/twitter.md#twitter-get-space-tweets-block) | Gets tweets that were shared during a Twitter Space session |
| [Twitter Follow List](twitter/twitter.md#twitter-follow-list-block) | Follows a Twitter List using its List ID |
| [Twitter Unfollow List](twitter/twitter.md#twitter-unfollow-list-block) | Unfollows a previously followed Twitter List |
| [Twitter Get List](twitter/twitter.md#twitter-get-list-block) | Gets detailed information about a specific Twitter List |
| [Twitter Get Owned Lists](twitter/twitter.md#twitter-get-owned-lists-block) | Gets all Twitter Lists owned by a specified user |
| [Twitter Get List Members](twitter/twitter.md#twitter-get-list-members-block) | Gets information about members of a specified Twitter List |
| [Twitter Add List Member](twitter/twitter.md#twitter-add-list-member-block) | Adds a specified user as a member to a Twitter List |
| [Twitter Remove List Member](twitter/twitter.md#twitter-remove-list-member-block) | Removes a specified user from a Twitter List |
| [Twitter Get List Tweets](twitter/twitter.md#twitter-get-list-tweets-block) | Gets tweets posted within a specified Twitter List |
| [Twitter Create List](twitter/twitter.md#twitter-create-list-block) | Creates a new Twitter List with specified name and settings |
| [Twitter Update List](twitter/twitter.md#twitter-update-list-block) | Updates name and/or description of an existing Twitter List |
| [Twitter Delete List](twitter/twitter.md#twitter-delete-list-block) | Deletes a specified Twitter List |
| [Twitter Pin List](twitter/twitter.md#twitter-pin-list-block) | Pins a Twitter List to appear at top of Lists |
| [Twitter Unpin List](twitter/twitter.md#twitter-unpin-list-block) | Removes a Twitter List from pinned Lists |
| [Twitter Get Pinned Lists](twitter/twitter.md#twitter-get-pinned-lists-block) | Gets all Twitter Lists that are currently pinned |
| Twitter List Get Followers | (In progress) Gets all followers of a specified Twitter List |
| Twitter Get Followed Lists | (In progress) Gets all Lists that a user follows |
| Twitter Get DM Events | (In progress) Retrieves direct message events for a user |
| Twitter Send Direct Message | (In progress) Sends a direct message to a specified user |
| Twitter Create DM Conversation | (In progress) Creates a new direct message conversation |
## Input/Output
| Block Name | Description |
|------------|-------------|
| [Send Authenticated Web Request](misc.md#send-authenticated-web-request) | Make an authenticated HTTP request with host-scoped credentials (JSON / form / multipart) |
| [Send Email](misc.md#send-email) | This block sends an email using the provided SMTP credentials |
| [Send Web Request](misc.md#send-web-request) | Make an HTTP request (JSON / form / multipart) |
## Todoist Integration
| Block Name | Description |
|------------|-------------|
| [Todoist Create Label](todoist.md#todoist-create-label) | Creates a new label in Todoist |
| [Todoist List Labels](todoist.md#todoist-list-labels) | Retrieves all personal labels from Todoist |
| [Todoist Get Label](todoist.md#todoist-get-label) | Retrieves a specific label by ID |
| [Todoist Create Task](todoist.md#todoist-create-task) | Creates a new task in Todoist |
| [Todoist Get Tasks](todoist.md#todoist-get-tasks) | Retrieves active tasks from Todoist |
| [Todoist Update Task](todoist.md#todoist-update-task) | Updates an existing task |
| [Todoist Close Task](todoist.md#todoist-close-task) | Completes/closes a task |
| [Todoist Reopen Task](todoist.md#todoist-reopen-task) | Reopens a completed task |
| [Todoist Delete Task](todoist.md#todoist-delete-task) | Permanently deletes a task |
| [Todoist List Projects](todoist.md#todoist-list-projects) | Retrieves all projects from Todoist |
| [Todoist Create Project](todoist.md#todoist-create-project) | Creates a new project in Todoist |
| [Todoist Get Project](todoist.md#todoist-get-project) | Retrieves details for a specific project |
| [Todoist Update Project](todoist.md#todoist-update-project) | Updates an existing project |
| [Todoist Delete Project](todoist.md#todoist-delete-project) | Deletes a project and its contents |
| [Todoist List Collaborators](todoist.md#todoist-list-collaborators) | Retrieves collaborators on a project |
| [Todoist List Sections](todoist.md#todoist-list-sections) | Retrieves sections from Todoist |
| [Todoist Get Section](todoist.md#todoist-get-section) | Retrieves details for a specific section |
| [Todoist Delete Section](todoist.md#todoist-delete-section) | Deletes a section and its tasks |
| [Todoist Create Comment](todoist.md#todoist-create-comment) | Creates a new comment on a task or project |
| [Todoist Get Comments](todoist.md#todoist-get-comments) | Retrieves all comments for a task or project |
| [Todoist Get Comment](todoist.md#todoist-get-comment) | Retrieves a specific comment by ID |
| [Todoist Update Comment](todoist.md#todoist-update-comment) | Updates an existing comment |
| [Todoist Delete Comment](todoist.md#todoist-delete-comment) | Deletes a comment |
## Agent Integration
| Block Name | Description |
|------------|-------------|
| [Agent Executor](misc.md#agent-executor) | Executes an existing agent inside your agent |
This comprehensive list covers all the blocks available in AutoGPT. Each block is designed to perform a specific task, and they can be combined to create powerful, automated workflows. For more detailed information on each block, click on its name to view the full documentation.
## CRM Services
| Block Name | Description |
|------------|-------------|
| [Add Lead To Campaign](smartlead/campaign.md#add-lead-to-campaign) | Add a lead to a campaign in SmartLead |
| [Create Campaign](smartlead/campaign.md#create-campaign) | Create a campaign in SmartLead |
| [Hub Spot Company](hubspot/company.md#hub-spot-company) | Manages HubSpot companies - create, update, and retrieve company information |
| [Hub Spot Contact](hubspot/contact.md#hub-spot-contact) | Manages HubSpot contacts - create, update, and retrieve contact information |
| [Save Campaign Sequences](smartlead/campaign.md#save-campaign-sequences) | Save sequences within a campaign |
## AI Safety
| Block Name | Description |
|------------|-------------|
| [Nvidia Deepfake Detect](nvidia/deepfake.md#nvidia-deepfake-detect) | Detects potential deepfakes in images using Nvidia's AI API |
## Issue Tracking
| Block Name | Description |
|------------|-------------|
| [Linear Create Comment](linear/comment.md#linear-create-comment) | Creates a new comment on a Linear issue |
| [Linear Create Issue](linear/issues.md#linear-create-issue) | Creates a new issue on Linear |
| [Linear Get Project Issues](linear/issues.md#linear-get-project-issues) | Gets issues from a Linear project filtered by status and assignee |
| [Linear Search Projects](linear/projects.md#linear-search-projects) | Searches for projects on Linear |
## Hardware
| Block Name | Description |
|------------|-------------|
| [Compass AI Trigger](compass/triggers.md#compass-ai-trigger) | This block will output the contents of the compass transcription |

@@ -0,0 +1,28 @@
# Compass AI Trigger
### What it is
This block will output the contents of the compass transcription.
### How it works
<!-- MANUAL: how_it_works -->
This block triggers when a Compass AI transcription is received. It outputs the transcription text content, enabling workflows that process voice input or meeting transcripts from Compass AI.
The transcription is output as a string for downstream processing, analysis, or storage.
<!-- END MANUAL -->
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| transcription | The contents of the compass transcription. | str |
### Possible use case
<!-- MANUAL: use_case -->
**Voice Command Processing**: Trigger workflows from voice commands transcribed by Compass AI.
**Meeting Automation**: Process meeting transcripts to extract action items or summaries.
**Transcription Analysis**: Analyze transcribed content for sentiment, topics, or key information.
<!-- END MANUAL -->
---

@@ -0,0 +1,266 @@
# Create Dictionary
### What it is
Creates a dictionary with the specified key-value pairs. Use this when you know all the values you want to add upfront.
### How it works
<!-- MANUAL: how_it_works -->
This block creates a new dictionary from specified key-value pairs in a single operation. It's designed for cases where you know all the data upfront, rather than building the dictionary incrementally.
The block takes a dictionary input and outputs it as-is, making it useful as a starting point for workflows that need to pass structured data between blocks.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| values | Key-value pairs to create the dictionary with | Dict[str, Any] | Yes |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if dictionary creation failed | str |
| dictionary | The created dictionary containing the specified key-value pairs | Dict[str, Any] |
### Possible use case
<!-- MANUAL: use_case -->
**API Request Payloads**: Create complete request body objects with all required fields before sending to an API.
**Configuration Objects**: Build settings dictionaries with predefined values for initializing services or workflows.
**Data Mapping**: Transform input data into a structured format with specific keys expected by downstream blocks.
<!-- END MANUAL -->
---
## Create List
### What it is
Creates a list with the specified values. Use this when you know all the values you want to add upfront. This block can also yield the list in batches based on a maximum size or token limit.
### How it works
<!-- MANUAL: how_it_works -->
This block creates a list from provided values and can optionally chunk it into smaller batches. When max_size is set, the list is yielded in chunks of that size. When max_tokens is set, chunks are sized to fit within token limits for LLM processing.
This batching capability is particularly useful when processing large datasets that need to be split for API limits or memory constraints.
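A minimal sketch of the size-based batching described above (the helper name and chunk-by-count logic are illustrative only; the real block can also size chunks by token count using a tokenizer):
```python
from typing import Any, Iterator, List


def yield_in_batches(values: List[Any], max_size: int = 0) -> Iterator[List[Any]]:
    """Yield the whole list, or fixed-size chunks of it when max_size is set."""
    if max_size <= 0:
        yield values
        return
    for start in range(0, len(values), max_size):
        yield values[start:start + max_size]


# Example: a 5-item list with max_size=2 is emitted as [1, 2], [3, 4], [5]
for chunk in yield_in_batches([1, 2, 3, 4, 5], max_size=2):
    print(chunk)
```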
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| values | A list of values to be combined into a new list. | List[Any] | Yes |
| max_size | Maximum size of the list. If provided, the list will be yielded in chunks of this size. | int | No |
| max_tokens | Maximum tokens for the list. If provided, the list will be yielded in chunks that fit within this token limit. | int | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| list | The created list containing the specified values. | List[Any] |
### Possible use case
<!-- MANUAL: use_case -->
**Batch Processing**: Split large datasets into manageable chunks for API calls with rate limits.
**LLM Token Management**: Divide text content into token-limited batches for processing by language models.
**Parallel Processing**: Create batches of work items that can be processed concurrently by multiple blocks.
<!-- END MANUAL -->
---
## File Read
### What it is
Reads a file and returns its content as a string, with optional chunking by delimiter and size limits
### How it works
<!-- MANUAL: how_it_works -->
This block reads file content from various sources (URL, data URI, or local path) and returns it as a string. It supports chunking via delimiter (like newlines) or size limits, yielding content in manageable pieces.
Use skip_rows and skip_size to skip header content or initial bytes. When delimiter and limits are set, content is yielded chunk by chunk, enabling processing of large files without loading everything into memory.
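A rough sketch of delimiter-based chunking, assuming a local file path and a hypothetical helper name; the actual block also handles URLs, data URIs, and byte-size limits:
```python
from typing import Iterator


def read_rows(path: str, delimiter: str = "\n", skip_rows: int = 0) -> Iterator[str]:
    """Stream delimiter-separated rows without loading the whole file at once."""
    buffer = ""
    with open(path, "r", encoding="utf-8") as f:
        while True:
            piece = f.read(8192)              # read in modest chunks
            if not piece:
                break
            buffer += piece
            *rows, buffer = buffer.split(delimiter)
            for row in rows:
                if skip_rows > 0:
                    skip_rows -= 1            # drop header/preamble rows
                    continue
                yield row
    if buffer and skip_rows <= 0:
        yield buffer                          # trailing row with no final delimiter


# Example (hypothetical file): process a log file line by line, skipping a one-line header
for line in read_rows("app.log", skip_rows=1):
    pass  # filter, transform, or forward each entry downstream
```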
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| file_input | The file to read from (URL, data URI, or local path) | str (file) | Yes |
| delimiter | Delimiter to split the content into rows/chunks (e.g., '\n' for lines) | str | No |
| size_limit | Maximum size in bytes per chunk to yield (0 for no limit) | int | No |
| row_limit | Maximum number of rows to process (0 for no limit, requires delimiter) | int | No |
| skip_size | Number of characters to skip from the beginning of the file | int | No |
| skip_rows | Number of rows to skip from the beginning (requires delimiter) | int | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| content | File content, yielded as individual chunks when delimiter or size limits are applied | str |
### Possible use case
<!-- MANUAL: use_case -->
**Log File Processing**: Read and process log files line by line, filtering or transforming each entry.
**Large Document Analysis**: Read large text files in chunks for summarization or analysis without memory issues.
**Data Import**: Read text-based data files and process them row by row for database import.
<!-- END MANUAL -->
---
## Persist Information
### What it is
Persist key-value information for the current user
### How it works
<!-- MANUAL: how_it_works -->
This block stores key-value data that persists across workflow runs. You can scope the persistence to either within_agent (available to all runs of this specific agent) or across_agents (available to all agents for this user).
The stored data remains available until explicitly overwritten, enabling state management and configuration persistence between workflow executions.
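One way to picture the scoped storage (a sketch only; the SQLite backend, table layout, and key shape are assumptions, not the platform's actual implementation). The same store also serves the Retrieve Information block described later:
```python
import sqlite3

db = sqlite3.connect("user_store.db")
db.execute(
    "CREATE TABLE IF NOT EXISTS kv ("
    " user_id TEXT, scope TEXT, key TEXT, value TEXT,"
    " PRIMARY KEY (user_id, scope, key))"
)


def persist(user_id: str, key: str, value: str, scope: str = "within_agent") -> None:
    """Store a value under (user, scope, key); later runs can read it back.
    A real within_agent scope would also fold the agent ID into the key."""
    db.execute(
        "INSERT OR REPLACE INTO kv (user_id, scope, key, value) VALUES (?, ?, ?, ?)",
        (user_id, scope, key, value),
    )
    db.commit()


def retrieve(user_id: str, key: str, scope: str = "within_agent", default=None):
    row = db.execute(
        "SELECT value FROM kv WHERE user_id = ? AND scope = ? AND key = ?",
        (user_id, scope, key),
    ).fetchone()
    return row[0] if row else default


persist("user-123", "last_processed_id", "42")
print(retrieve("user-123", "last_processed_id", default="0"))  # -> "42"
```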
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| key | Key to store the information under | str | Yes |
| value | Value to store | Value | Yes |
| scope | Scope of persistence: within_agent (shared across all runs of this agent) or across_agents (shared across all agents for this user) | "within_agent" \| "across_agents" | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| value | Value that was stored | Value |
### Possible use case
<!-- MANUAL: use_case -->
**User Preferences**: Store user settings like preferred language or notification preferences for future runs.
**Progress Tracking**: Save the last processed item ID to resume batch processing where you left off.
**API Token Caching**: Store refreshed API tokens that can be reused across multiple workflow executions.
<!-- END MANUAL -->
---
## Read Spreadsheet
### What it is
Reads CSV and Excel files and outputs the data as a list of dictionaries and individual rows. Excel files are automatically converted to CSV format.
### How it works
<!-- MANUAL: how_it_works -->
This block parses CSV and Excel files, converting each row into a dictionary with column headers as keys. Excel files are automatically converted to CSV format before processing.
Configure delimiter, quote character, and escape character for proper CSV parsing. Use skip_rows to ignore headers or initial rows, and skip_columns to exclude unwanted columns from the output.
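A small sketch of the header-to-key mapping using Python's csv module (the sample data and the mapping of inputs to options are illustrative):
```python
import csv
import io

# Sample CSV content; in the block this comes from `contents` or `file_input`.
contents = "name,email\nAda,ada@example.com\nGrace,grace@example.com\n"

reader = csv.DictReader(
    io.StringIO(contents),
    delimiter=",",       # `delimiter` input
    quotechar='"',       # `quotechar` input
)

rows = []
for row in reader:
    # Each row becomes a dict keyed by the header row, e.g. {"name": "Ada", ...}
    rows.append({k: v.strip() for k, v in row.items()})   # `strip` input

print(rows[0]["email"])  # -> "ada@example.com"
```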
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| contents | The contents of the CSV/spreadsheet data to read | str | No |
| file_input | CSV or Excel file to read from (URL, data URI, or local path). Excel files are automatically converted to CSV | str (file) | No |
| delimiter | The delimiter used in the CSV/spreadsheet data | str | No |
| quotechar | The character used to quote fields | str | No |
| escapechar | The character used to escape the delimiter | str | No |
| has_header | Whether the CSV file has a header row | bool | No |
| skip_rows | The number of rows to skip from the start of the file | int | No |
| strip | Whether to strip whitespace from the values | bool | No |
| skip_columns | The columns to skip from the start of the row | List[str] | No |
| produce_singular_result | If True, yield individual 'row' outputs only (can be slow). If False, yield both 'row' and 'rows' (all data) outputs | bool | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| row | The data produced from each row in the spreadsheet | Dict[str, str] |
| rows | All the data in the spreadsheet as a list of rows | List[Dict[str, str]] |
### Possible use case
<!-- MANUAL: use_case -->
**Data Import**: Import product catalogs, contact lists, or inventory data from spreadsheet exports.
**Report Processing**: Parse generated CSV reports from other systems for analysis or transformation.
**Bulk Operations**: Process spreadsheets of email addresses, user records, or configuration data row by row.
<!-- END MANUAL -->
---
## Retrieve Information
### What it is
Retrieve key-value information for the current user
### How it works
<!-- MANUAL: how_it_works -->
This block retrieves previously stored key-value data for the current user. Specify the key and scope to fetch the corresponding value. If the key doesn't exist, the default_value is returned.
Use within_agent scope for agent-specific data or across_agents for data shared across all user agents.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| key | Key to retrieve the information for | str | Yes |
| scope | Scope of persistence: within_agent (shared across all runs of this agent) or across_agents (shared across all agents for this user) | "within_agent" \| "across_agents" | No |
| default_value | Default value to return if key is not found | Default Value | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| value | Retrieved value or default value | Value |
### Possible use case
<!-- MANUAL: use_case -->
**Resume Processing**: Retrieve the last processed item ID to continue batch operations from where you left off.
**Load Preferences**: Fetch stored user preferences at workflow start to customize behavior.
**State Restoration**: Retrieve workflow state saved from a previous run to maintain continuity.
<!-- END MANUAL -->
---
## Screenshot Web Page
### What it is
Takes a screenshot of a specified website using ScreenshotOne API
### How it works
<!-- MANUAL: how_it_works -->
This block uses the ScreenshotOne API to capture screenshots of web pages. Configure viewport dimensions, output format, and whether to capture the full page or just the visible area.
Optional features include blocking ads, cookie banners, and chat widgets for cleaner screenshots. Caching can be enabled to improve performance for repeated captures of the same page.
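A hedged sketch of the kind of request involved; the endpoint and query-parameter names below simply mirror the block's inputs and should be checked against ScreenshotOne's API reference, and the access key is a placeholder:
```python
import requests

params = {
    "access_key": "YOUR_SCREENSHOTONE_KEY",   # placeholder credential
    "url": "https://example.com",
    "viewport_width": 1280,
    "viewport_height": 800,
    "full_page": "true",
    "format": "png",
    "block_ads": "true",
    "block_cookie_banners": "true",
    "cache": "false",
}

resp = requests.get("https://api.screenshotone.com/take", params=params, timeout=60)
resp.raise_for_status()

with open("screenshot.png", "wb") as f:
    f.write(resp.content)   # binary image data, matching the `image` output
```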
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| url | URL of the website to screenshot | str | Yes |
| viewport_width | Width of the viewport in pixels | int | No |
| viewport_height | Height of the viewport in pixels | int | No |
| full_page | Whether to capture the full page length | bool | No |
| format | Output format (png, jpeg, webp) | "png" \| "jpeg" \| "webp" | No |
| block_ads | Whether to block ads | bool | No |
| block_cookie_banners | Whether to block cookie banners | bool | No |
| block_chats | Whether to block chat widgets | bool | No |
| cache | Whether to enable caching | bool | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| image | The screenshot image data | str (file) |
### Possible use case
<!-- MANUAL: use_case -->
**Visual Documentation**: Capture screenshots of web pages for documentation, reports, or archives.
**Competitive Monitoring**: Regularly screenshot competitor websites to track design and content changes.
**Visual Testing**: Capture page renders for visual regression testing or design verification workflows.
<!-- END MANUAL -->
---

@@ -0,0 +1,82 @@
# Data For Seo Keyword Suggestions
### What it is
Get keyword suggestions from DataForSEO Labs Google API
### How it works
<!-- MANUAL: how_it_works -->
This block calls the DataForSEO Labs Google Keyword Suggestions API to generate keyword ideas based on a seed keyword. It provides search volume, competition metrics, CPC data, and keyword difficulty scores for each suggestion.
Configure location and language targeting to get region-specific results. Optional SERP and clickstream data provide additional insights into search behavior and click patterns.
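For orientation, a hedged sketch of the underlying DataForSEO Labs call; the endpoint path, payload fields, and response shape are assumptions based on the block's inputs and should be verified against DataForSEO's documentation (credentials are placeholders):
```python
import requests

payload = [{
    "keyword": "coffee grinder",
    "location_code": 2840,        # USA
    "language_code": "en",
    "include_serp_info": False,
    "limit": 100,
}]

resp = requests.post(
    "https://api.dataforseo.com/v3/dataforseo_labs/google/keyword_suggestions/live",
    auth=("DATAFORSEO_LOGIN", "DATAFORSEO_PASSWORD"),
    json=payload,
    timeout=60,
)
resp.raise_for_status()

# Assumed response shape: tasks -> result -> items, each carrying keyword metrics.
for item in resp.json()["tasks"][0]["result"][0]["items"]:
    info = item.get("keyword_info", {})
    print(item["keyword"], info.get("search_volume"), info.get("cpc"))
```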
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| keyword | Seed keyword to get suggestions for | str | Yes |
| location_code | Location code for targeting (e.g., 2840 for USA) | int | No |
| language_code | Language code (e.g., 'en' for English) | str | No |
| include_seed_keyword | Include the seed keyword in results | bool | No |
| include_serp_info | Include SERP information | bool | No |
| include_clickstream_data | Include clickstream metrics | bool | No |
| limit | Maximum number of results (up to 3000) | int | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| suggestions | List of keyword suggestions with metrics | List[KeywordSuggestion] |
| suggestion | A single keyword suggestion with metrics | KeywordSuggestion |
| total_count | Total number of suggestions returned | int |
| seed_keyword | The seed keyword used for the query | str |
### Possible use case
<!-- MANUAL: use_case -->
**Content Planning**: Generate blog post and article ideas based on keyword suggestions with high search volume.
**SEO Strategy**: Discover new keyword opportunities to target based on competition and difficulty metrics.
**PPC Campaigns**: Find keywords for advertising campaigns using CPC and competition data.
<!-- END MANUAL -->
---
## Keyword Suggestion Extractor
### What it is
Extract individual fields from a KeywordSuggestion object
### How it works
<!-- MANUAL: how_it_works -->
This block extracts individual fields from a KeywordSuggestion object returned by the Keyword Suggestions block. It decomposes the suggestion into separate outputs for easier use in workflows.
Each field including keyword text, search volume, competition level, CPC, difficulty score, and optional SERP/clickstream data becomes available as individual outputs for downstream processing.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| suggestion | The keyword suggestion object to extract fields from | KeywordSuggestion | Yes |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| keyword | The keyword suggestion | str |
| search_volume | Monthly search volume | int |
| competition | Competition level (0-1) | float |
| cpc | Cost per click in USD | float |
| keyword_difficulty | Keyword difficulty score | int |
| serp_info | SERP data for each keyword | Dict[str, Any] |
| clickstream_data | Clickstream data metrics | Dict[str, Any] |
### Possible use case
<!-- MANUAL: use_case -->
**Keyword Filtering**: Extract search volume and difficulty to filter keywords meeting specific thresholds.
**Data Analysis**: Access individual metrics for comparison, sorting, or custom scoring algorithms.
**Report Generation**: Pull specific fields like CPC and competition for SEO or PPC reports.
<!-- END MANUAL -->
---

@@ -0,0 +1,83 @@
# Data For Seo Related Keywords
### What it is
Get related keywords from DataForSEO Labs Google API
### How it works
<!-- MANUAL: how_it_works -->
This block uses the DataForSEO Labs Google Related Keywords API to find semantically related keywords based on a seed keyword. It returns keywords that share similar search intent or topic relevance.
The depth parameter controls the breadth of the search, with higher values returning exponentially more related keywords. Results include search metrics, competition data, and optional SERP/clickstream information.
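The documented result counts follow a simple fan-out of roughly 8 related keywords per level, accumulated across levels; a quick sanity check against the figures above (an approximation, not an API guarantee):
```python
# depth 0 returns only the seed keyword; each further level fans out ~8x.
for depth in range(5):
    approx = 1 if depth == 0 else sum(8 ** k for k in range(1, depth + 1))
    print(depth, approx)   # 0->1, 1->8, 2->72, 3->584, 4->4680
```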
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| keyword | Seed keyword to find related keywords for | str | Yes |
| location_code | Location code for targeting (e.g., 2840 for USA) | int | No |
| language_code | Language code (e.g., 'en' for English) | str | No |
| include_seed_keyword | Include the seed keyword in results | bool | No |
| include_serp_info | Include SERP information | bool | No |
| include_clickstream_data | Include clickstream metrics | bool | No |
| limit | Maximum number of results (up to 3000) | int | No |
| depth | Keyword search depth (0-4). Controls the number of returned keywords: 0=1 keyword, 1=~8 keywords, 2=~72 keywords, 3=~584 keywords, 4=~4680 keywords | int | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| related_keywords | List of related keywords with metrics | List[RelatedKeyword] |
| related_keyword | A related keyword with metrics | RelatedKeyword |
| total_count | Total number of related keywords returned | int |
| seed_keyword | The seed keyword used for the query | str |
### Possible use case
<!-- MANUAL: use_case -->
**Topic Clustering**: Group related keywords to build comprehensive content clusters around a topic.
**Semantic SEO**: Discover LSI (latent semantic indexing) keywords to improve content relevance.
**Keyword Expansion**: Expand targeting beyond exact match to capture related search traffic.
<!-- END MANUAL -->
---
## Related Keyword Extractor
### What it is
Extract individual fields from a RelatedKeyword object
### How it works
<!-- MANUAL: how_it_works -->
This block extracts individual fields from a RelatedKeyword object returned by the Related Keywords block. It separates the compound object into distinct outputs for workflow integration.
Outputs include the keyword text, search volume, competition score, CPC, keyword difficulty, and any SERP or clickstream data that was requested in the original search.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| related_keyword | The related keyword object to extract fields from | RelatedKeyword | Yes |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| keyword | The related keyword | str |
| search_volume | Monthly search volume | int |
| competition | Competition level (0-1) | float |
| cpc | Cost per click in USD | float |
| keyword_difficulty | Keyword difficulty score | int |
| serp_info | SERP data for the keyword | Dict[str, Any] |
| clickstream_data | Clickstream data metrics | Dict[str, Any] |
### Possible use case
<!-- MANUAL: use_case -->
**Keyword Prioritization**: Extract metrics to rank related keywords by opportunity score.
**Content Optimization**: Access keyword difficulty and search volume for content planning decisions.
**Competitive Analysis**: Pull competition and CPC data to assess keyword viability.
<!-- END MANUAL -->
---

@@ -0,0 +1,336 @@
# Create Discord Thread
### What it is
Creates a new thread in a Discord channel.
### How it works
<!-- MANUAL: how_it_works -->
This block uses the Discord API with a bot token to create a new thread in a specified channel. Threads can be public or private (private requires Boost Level 2+).
Configure auto-archive duration and optionally send an initial message when the thread is created.
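A rough sketch of the Discord REST calls this block wraps, assuming you already have a channel ID and a bot token (both placeholders); thread type 11 is public and 12 is private:
```python
import requests

BOT_TOKEN = "YOUR_BOT_TOKEN"
CHANNEL_ID = "123456789012345678"
headers = {"Authorization": f"Bot {BOT_TOKEN}"}

# Start a thread in the channel (no parent message).
resp = requests.post(
    f"https://discord.com/api/v10/channels/{CHANNEL_ID}/threads",
    headers=headers,
    json={"name": "support-ticket-42", "auto_archive_duration": 1440, "type": 11},
    timeout=30,
)
resp.raise_for_status()
thread = resp.json()

# Optionally post an initial message into the new thread.
requests.post(
    f"https://discord.com/api/v10/channels/{thread['id']}/messages",
    headers=headers,
    json={"content": "Thread created automatically."},
    timeout=30,
)
```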
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| channel_name | Channel ID or channel name to create the thread in | str | Yes |
| server_name | Server name (only needed if using channel name) | str | No |
| thread_name | The name of the thread to create | str | Yes |
| is_private | Whether to create a private thread (requires Boost Level 2+) or public thread | bool | No |
| auto_archive_duration | Duration before the thread is automatically archived | "60" \| "1440" \| "4320" | No |
| message_content | Optional initial message to send in the thread | str | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| status | Operation status | str |
| thread_id | ID of the created thread | str |
| thread_name | Name of the created thread | str |
### Possible use case
<!-- MANUAL: use_case -->
**Support Tickets**: Create threads for individual support conversations to keep channels organized.
**Discussion Topics**: Automatically create threads for new topics or announcements.
**Project Channels**: Spin up discussion threads for specific tasks or features.
<!-- END MANUAL -->
---
## Discord Channel Info
### What it is
Resolves Discord channel names to IDs and vice versa.
### How it works
<!-- MANUAL: how_it_works -->
This block resolves Discord channel identifiers, converting between channel names and IDs. It queries the Discord API to find the channel and returns comprehensive information including server details.
Useful for workflows that receive channel names but need IDs for other Discord operations.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| channel_identifier | Channel name or channel ID to look up | str | Yes |
| server_name | Server name (optional, helps narrow down search) | str | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| channel_id | The channel's ID | str |
| channel_name | The channel's name | str |
| server_id | The server's ID | str |
| server_name | The server's name | str |
| channel_type | Type of channel (text, voice, etc) | str |
### Possible use case
<!-- MANUAL: use_case -->
**Dynamic Routing**: Look up channel IDs to route messages to user-specified channels by name.
**Validation**: Verify channel existence before attempting to send messages.
**Workflow Setup**: Get channel details during workflow configuration.
<!-- END MANUAL -->
---
## Discord User Info
### What it is
Gets information about a Discord user by their ID.
### How it works
<!-- MANUAL: how_it_works -->
This block retrieves information about a Discord user by their ID. It queries the Discord API and returns profile details including username, display name, avatar, and account creation date.
The user must be visible to your bot (share a server with your bot).
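As a rough illustration, the lookup boils down to a single GET against Discord's users endpoint (the token and user ID below are placeholders):
```python
import requests

resp = requests.get(
    "https://discord.com/api/v10/users/123456789012345678",  # placeholder user ID
    headers={"Authorization": "Bot YOUR_BOT_TOKEN"},          # placeholder token
    timeout=30,
)
resp.raise_for_status()
user = resp.json()
avatar_url = (
    f"https://cdn.discordapp.com/avatars/{user['id']}/{user['avatar']}.png"
    if user.get("avatar") else None
)
print(user["id"], user["username"], user.get("global_name"), user.get("bot", False), avatar_url)
```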
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| user_id | The Discord user ID to get information about | str | Yes |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| user_id | The user's ID (passed through for chaining) | str |
| username | The user's username | str |
| display_name | The user's display name | str |
| discriminator | The user's discriminator (if applicable) | str |
| avatar_url | URL to the user's avatar | str |
| is_bot | Whether the user is a bot | bool |
| created_at | When the account was created | str |
### Possible use case
<!-- MANUAL: use_case -->
**User Profiling**: Get user details to personalize responses or create user profiles.
**Mention Resolution**: Look up user information when processing mentions in messages.
**Activity Logging**: Retrieve user details for logging or analytics purposes.
<!-- END MANUAL -->
---
## Read Discord Messages
### What it is
Reads messages from a Discord channel using a bot token.
### How it works
<!-- MANUAL: how_it_works -->
The block uses a Discord bot to log into a server and listen for new messages. When a message is received, it extracts the content, channel name, and username of the sender. If the message contains a text file attachment, the block also retrieves and includes the file's content.
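A minimal sketch of the same listening pattern with the `discord.py` library (the token is a placeholder, and the message-content intent must be enabled for the bot in the Discord developer portal); the block's internals may differ:
```python
import discord

intents = discord.Intents.default()
intents.message_content = True  # required to read message text
client = discord.Client(intents=intents)

@client.event
async def on_message(message: discord.Message):
    if message.author.bot:
        return  # skip bot traffic, including our own messages
    print(message.channel, message.author, message.content)
    for attachment in message.attachments:
        if attachment.filename.endswith(".txt"):
            text = (await attachment.read()).decode("utf-8", errors="replace")
            print(text)

client.run("YOUR_BOT_TOKEN")  # placeholder
```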
<!-- END MANUAL -->
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| message_content | The content of the message received | str |
| message_id | The ID of the message | str |
| channel_id | The ID of the channel | str |
| channel_name | The name of the channel the message was received from | str |
| user_id | The ID of the user who sent the message | str |
| username | The username of the user who sent the message | str |
### Possible use case
<!-- MANUAL: use_case -->
This block could be used to monitor a Discord channel for support requests. When a user posts a message, the block captures it, allowing another part of the system to process and respond to the request.
<!-- END MANUAL -->
---
## Reply To Discord Message
### What it is
Replies to a specific Discord message.
### How it works
<!-- MANUAL: how_it_works -->
This block sends a reply to a specific Discord message, creating a threaded reply that references the original message. Optionally mention the original author to notify them.
The reply appears linked to the original message in Discord's UI, maintaining conversation context.
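Under the hood this maps onto a normal message-create call carrying a `message_reference`; a hedged `requests` sketch with placeholder token and IDs:
```python
import requests

def reply_to_message(channel_id: str, message_id: str, content: str,
                     mention_author: bool = False) -> str:
    resp = requests.post(
        f"https://discord.com/api/v10/channels/{channel_id}/messages",
        headers={"Authorization": "Bot YOUR_BOT_TOKEN"},  # placeholder token
        json={
            "content": content,
            "message_reference": {"message_id": message_id},
            "allowed_mentions": {"replied_user": mention_author},
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["id"]  # ID of the reply message
```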
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| channel_id | The channel ID where the message to reply to is located | str | Yes |
| message_id | The ID of the message to reply to | str | Yes |
| reply_content | The content of the reply | str | Yes |
| mention_author | Whether to mention the original message author | bool | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| status | Operation status | str |
| reply_id | ID of the reply message | str |
### Possible use case
<!-- MANUAL: use_case -->
**Conversation Bots**: Reply to user questions maintaining conversation context.
**Support Responses**: Respond to support requests by replying to the original message.
**Interactive Commands**: Reply to command messages with results or confirmations.
<!-- END MANUAL -->
---
## Send Discord DM
### What it is
Sends a direct message to a Discord user using their user ID.
### How it works
<!-- MANUAL: how_it_works -->
This block sends a direct message to a Discord user. It opens a DM channel with the user (if not already open) and sends the message. The user must allow DMs from server members or share a server with your bot.
Returns the message ID of the sent DM for tracking purposes.
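The two-step flow (open the DM channel, then post into it) looks roughly like this with `requests`; the token, user ID, and message are placeholders:
```python
import requests

API = "https://discord.com/api/v10"
HEADERS = {"Authorization": "Bot YOUR_BOT_TOKEN"}  # placeholder token

def send_dm(user_id: str, content: str) -> str:
    # 1. Open (or reuse) the DM channel with the user.
    channel = requests.post(f"{API}/users/@me/channels", headers=HEADERS,
                            json={"recipient_id": user_id}, timeout=30)
    channel.raise_for_status()
    # 2. Send the message into that channel.
    msg = requests.post(f"{API}/channels/{channel.json()['id']}/messages",
                        headers=HEADERS, json={"content": content}, timeout=30)
    msg.raise_for_status()
    return msg.json()["id"]

send_dm("123456789012345678", "Your verification code is 427913")  # placeholder values
```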
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| user_id | The Discord user ID to send the DM to (e.g., '123456789012345678') | str | Yes |
| message_content | The content of the direct message to send | str | Yes |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| status | The status of the operation | str |
| message_id | The ID of the sent message | str |
### Possible use case
<!-- MANUAL: use_case -->
**Private Notifications**: Send private alerts or notifications to specific users.
**Welcome Messages**: DM new server members with welcome information.
**Verification Systems**: Send verification codes or instructions via DM.
<!-- END MANUAL -->
---
## Send Discord Embed
### What it is
Sends a rich embed message to a Discord channel.
### How it works
<!-- MANUAL: how_it_works -->
This block sends a rich embed message to a Discord channel. Embeds support formatted content with titles, descriptions, colors, images, thumbnails, author sections, footers, and structured fields.
Configure the embed's appearance with colors, images, and multiple fields for organized information display.
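A hedged sketch of the embed payload Discord expects; the channel ID, token, and field values are placeholders, and the block assembles an equivalent structure from its inputs:
```python
import requests

embed = {
    "title": "Deployment finished",
    "description": "Build 142 deployed to production.",
    "color": 0x00FF00,  # green
    "thumbnail": {"url": "https://example.com/logo.png"},  # placeholder image
    "fields": [
        {"name": "Branch", "value": "dev", "inline": True},
        {"name": "Duration", "value": "3m 12s", "inline": True},
    ],
    "footer": {"text": "CI pipeline"},
}
resp = requests.post(
    "https://discord.com/api/v10/channels/123456789012345678/messages",  # placeholder channel
    headers={"Authorization": "Bot YOUR_BOT_TOKEN"},                      # placeholder token
    json={"embeds": [embed]},
    timeout=30,
)
resp.raise_for_status()
```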
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| channel_identifier | Channel ID or channel name to send the embed to | str | Yes |
| server_name | Server name (only needed if using channel name) | str | No |
| title | The title of the embed | str | No |
| description | The main content/description of the embed | str | No |
| color | Embed color as integer (e.g., 0x00ff00 for green) | int | No |
| thumbnail_url | URL for the thumbnail image | str | No |
| image_url | URL for the main embed image | str | No |
| author_name | Author name to display | str | No |
| footer_text | Footer text | str | No |
| fields | List of field dictionaries with 'name', 'value', and optional 'inline' keys | List[Dict[str, True]] | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| status | Operation status | str |
| message_id | ID of the sent embed message | str |
### Possible use case
<!-- MANUAL: use_case -->
**Status Updates**: Send formatted status updates with colors and structured information.
**Data Displays**: Present data in organized embed fields for easy reading.
**Announcements**: Create visually appealing announcements with images and branding.
<!-- END MANUAL -->
---
## Send Discord File
### What it is
Sends a file attachment to a Discord channel.
### How it works
<!-- MANUAL: how_it_works -->
This block uploads and sends a file attachment to a Discord channel. It supports various file types including images, documents, and other media. Files can be provided as URLs, data URIs, or local paths.
Optionally include a message along with the file attachment.
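File uploads go through Discord's message endpoint as multipart form data; a minimal sketch in which the token, channel ID, and file are placeholders:
```python
import json
import requests

def send_file(channel_id: str, filename: str, file_bytes: bytes, message: str = "") -> dict:
    resp = requests.post(
        f"https://discord.com/api/v10/channels/{channel_id}/messages",
        headers={"Authorization": "Bot YOUR_BOT_TOKEN"},          # placeholder token
        data={"payload_json": json.dumps({"content": message})},  # optional text part
        files={"files[0]": (filename, file_bytes)},               # the attachment itself
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()

with open("report.pdf", "rb") as f:  # placeholder file
    send_file("123456789012345678", "report.pdf", f.read(), "Weekly report attached.")
```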
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| channel_identifier | Channel ID or channel name to send the file to | str | Yes |
| server_name | Server name (only needed if using channel name) | str | No |
| file | The file to send (URL, data URI, or local path). Supports images, videos, documents, etc. | str (file) | Yes |
| filename | Name of the file when sent (e.g., 'report.pdf', 'image.png') | str | No |
| message_content | Optional message to send with the file | str | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| status | Operation status | str |
| message_id | ID of the sent message | str |
### Possible use case
<!-- MANUAL: use_case -->
**Report Sharing**: Send generated reports or documents to Discord channels.
**Image Posting**: Share images from workflows or external sources.
**Backup Distribution**: Share backup files or exports with team channels.
<!-- END MANUAL -->
---
## Send Discord Message
### What it is
Sends a message to a Discord channel using a bot token.
### How it works
<!-- MANUAL: how_it_works -->
The block uses a Discord bot to log into a server, locate the specified channel, and send the provided message. If the message is longer than Discord's character limit, it automatically splits the message into smaller chunks and sends them sequentially.
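The splitting behaviour amounts to chunking at Discord's 2,000-character limit; a simplified sketch (the real block may split more carefully, for example on word boundaries), with a placeholder token and channel ID:
```python
import requests

DISCORD_LIMIT = 2000  # Discord's per-message character limit

def send_message(channel_id: str, content: str) -> list[str]:
    """Send content as one or more <=2000-character messages; return their IDs."""
    chunks = [content[i:i + DISCORD_LIMIT] for i in range(0, len(content), DISCORD_LIMIT)] or [""]
    message_ids = []
    for chunk in chunks:
        resp = requests.post(
            f"https://discord.com/api/v10/channels/{channel_id}/messages",
            headers={"Authorization": "Bot YOUR_BOT_TOKEN"},  # placeholder token
            json={"content": chunk},
            timeout=30,
        )
        resp.raise_for_status()
        message_ids.append(resp.json()["id"])
    return message_ids
```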
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| message_content | The content of the message to send | str | Yes |
| channel_name | Channel ID or channel name to send the message to | str | Yes |
| server_name | Server name (only needed if using channel name) | str | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| status | The status of the operation (e.g., 'Message sent', 'Error') | str |
| message_id | The ID of the sent message | str |
| channel_id | The ID of the channel where the message was sent | str |
### Possible use case
<!-- MANUAL: use_case -->
This block could be used as part of an automated notification system. For example, it could send alerts to a Discord channel when certain events occur in another system, such as when a new user signs up or when a critical error is detected.
<!-- END MANUAL -->
---
@@ -0,0 +1,32 @@
# Discord Get Current User
### What it is
Gets information about the currently authenticated Discord user using OAuth2 credentials.
### How it works
<!-- MANUAL: how_it_works -->
This block uses Discord's OAuth2 API to retrieve information about the currently authenticated user. It requires valid OAuth2 credentials that have been obtained through Discord's authorization flow with the `identify` scope.
The block queries the Discord `/users/@me` endpoint and returns the user's profile information including their unique ID, username, avatar, and customization settings like banner and accent color.
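A minimal sketch of the underlying call, assuming an OAuth2 access token obtained with the `identify` scope (the token is a placeholder):
```python
import requests

resp = requests.get(
    "https://discord.com/api/v10/users/@me",
    headers={"Authorization": "Bearer USER_OAUTH2_ACCESS_TOKEN"},  # placeholder token
    timeout=30,
)
resp.raise_for_status()
me = resp.json()
avatar_url = (
    f"https://cdn.discordapp.com/avatars/{me['id']}/{me['avatar']}.png"
    if me.get("avatar") else None
)
print(me["id"], me["username"], avatar_url, me.get("accent_color"))
```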
<!-- END MANUAL -->
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| user_id | The authenticated user's Discord ID | str |
| username | The user's username | str |
| avatar_url | URL to the user's avatar image | str |
| banner_url | URL to the user's banner image (if set) | str |
| accent_color | The user's accent color as an integer | int |
### Possible use case
<!-- MANUAL: use_case -->
**User Authentication**: Verify user identity after OAuth login to personalize experiences or grant access.
**Profile Integration**: Display Discord user information in external applications or dashboards.
**Account Linking**: Connect Discord accounts with other services using the unique user ID.
<!-- END MANUAL -->
---
@@ -0,0 +1,151 @@
# Get Linkedin Profile
### What it is
Fetch LinkedIn profile data using Enrichlayer
### How it works
<!-- MANUAL: how_it_works -->
This block retrieves comprehensive LinkedIn profile data using Enrichlayer's API. Provide a LinkedIn profile URL to fetch details including work history, education, skills, and contact information.
Configure caching options for performance and optionally include additional data like inferred salary, personal email, or social media links.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| linkedin_url | LinkedIn profile URL to fetch data from | str | Yes |
| fallback_to_cache | Cache usage if live fetch fails | "on-error" \| "never" | No |
| use_cache | Cache utilization strategy | "if-present" \| "never" | No |
| include_skills | Include skills data | bool | No |
| include_inferred_salary | Include inferred salary data | bool | No |
| include_personal_email | Include personal email | bool | No |
| include_personal_contact_number | Include personal contact number | bool | No |
| include_social_media | Include social media profiles | bool | No |
| include_extra | Include additional data | bool | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| profile | LinkedIn profile data | PersonProfileResponse |
### Possible use case
<!-- MANUAL: use_case -->
**Lead Enrichment**: Enrich sales leads with detailed professional background information.
**Recruitment Research**: Gather candidate information for hiring and outreach workflows.
**Contact Discovery**: Find contact details associated with LinkedIn profiles.
<!-- END MANUAL -->
---
## Get Linkedin Profile Picture
### What it is
Get LinkedIn profile pictures using Enrichlayer
### How it works
<!-- MANUAL: how_it_works -->
This block retrieves the profile picture URL for a LinkedIn profile using Enrichlayer's API. Provide the LinkedIn profile URL to get a direct link to the user's profile photo.
The returned URL can be used for display, download, or further image processing.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| linkedin_profile_url | LinkedIn profile URL | str | Yes |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| profile_picture_url | LinkedIn profile picture URL | str (file) |
### Possible use case
<!-- MANUAL: use_case -->
**CRM Enhancement**: Add profile photos to contact records for visual identification.
**Personalized Outreach**: Include profile pictures in personalized email or message templates.
**Identity Verification**: Retrieve profile photos for manual identity verification workflows.
<!-- END MANUAL -->
---
## Linkedin Person Lookup
### What it is
Look up LinkedIn profiles by person information using Enrichlayer
### How it works
<!-- MANUAL: how_it_works -->
This block finds LinkedIn profiles by matching person details like name, company, and title using Enrichlayer's API. Provide first name and company domain as minimum inputs, with optional last name, location, and title for better matching.
Enable similarity checks and profile enrichment for more detailed results.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| first_name | Person's first name | str | Yes |
| last_name | Person's last name | str | No |
| company_domain | Domain of the company they work for | str | Yes |
| location | Person's location (optional) | str | No |
| title | Person's job title (optional) | str | No |
| include_similarity_checks | Include similarity checks | bool | No |
| enrich_profile | Enrich the profile with additional data | bool | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| lookup_result | LinkedIn profile lookup result | PersonLookupResponse |
### Possible use case
<!-- MANUAL: use_case -->
**Lead Discovery**: Find LinkedIn profiles for leads when you only have name and company.
**Contact Matching**: Match CRM contacts to their LinkedIn profiles for enrichment.
**Prospecting**: Discover LinkedIn profiles of people at target companies.
<!-- END MANUAL -->
---
## Linkedin Role Lookup
### What it is
Look up LinkedIn profiles by role in a company using Enrichlayer
### How it works
<!-- MANUAL: how_it_works -->
This block finds LinkedIn profiles by role title and company using Enrichlayer's API. Specify a role like CEO, CTO, or VP of Sales along with the company name to find matching profiles.
Enable enrich_profile to automatically fetch full profile data for the matched result.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| role | Role title (e.g., CEO, CTO) | str | Yes |
| company_name | Name of the company | str | Yes |
| enrich_profile | Enrich the profile with additional data | bool | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| role_lookup_result | LinkedIn role lookup result | RoleLookupResponse |
### Possible use case
<!-- MANUAL: use_case -->
**Decision Maker Discovery**: Find key decision makers at target companies for sales outreach.
**Executive Research**: Look up C-suite executives for account-based marketing.
**Org Chart Building**: Map leadership at companies by looking up specific roles.
<!-- END MANUAL -->
---
@@ -0,0 +1,36 @@
# Exa Answer
### What it is
Get an LLM answer to a question informed by Exa search results
### How it works
<!-- MANUAL: how_it_works -->
This block sends your question to the Exa Answer API, which performs a semantic search across billions of web pages to find relevant information. The API then uses an LLM to synthesize the search results into a coherent answer with citations.
The block returns both the generated answer and the source citations that informed it. You can optionally include full text content from the search results for more comprehensive answers.
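For orientation, a hedged sketch of the HTTP call involved, assuming Exa's `/answer` endpoint and `x-api-key` header; the key is a placeholder, and field names should be verified against the current Exa API reference:
```python
import requests

resp = requests.post(
    "https://api.exa.ai/answer",
    headers={"x-api-key": "YOUR_EXA_API_KEY"},  # placeholder key
    json={"query": "What is retrieval-augmented generation?", "text": True},
    timeout=60,
)
resp.raise_for_status()
data = resp.json()
print(data["answer"])
for citation in data.get("citations", []):
    print("-", citation.get("title"), citation.get("url"))
```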
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| query | The question or query to answer | str | Yes |
| text | Include full text content in the search results used for the answer | bool | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the request failed | str |
| answer | The generated answer based on search results | str |
| citations | Search results used to generate the answer | List[AnswerCitation] |
| citation | Individual citation from the answer | AnswerCitation |
### Possible use case
<!-- MANUAL: use_case -->
**Research Assistance**: Get quick, sourced answers to complex questions without manually searching multiple websites.
**Fact Verification**: Verify claims or statements by getting answers backed by real web sources with citations.
**Content Creation**: Generate research-backed content by asking questions about topics and using the cited sources.
<!-- END MANUAL -->
---
@@ -0,0 +1,40 @@
# Exa Code Context
### What it is
Search billions of GitHub repos, docs, and Stack Overflow for relevant code examples
### How it works
<!-- MANUAL: how_it_works -->
This block uses Exa's specialized code search API to find relevant code examples from GitHub repositories, official documentation, and Stack Overflow. The search is optimized for code context, returning formatted snippets with source references.
The block returns code snippets along with metadata including the source URL, search time, and token counts. You can control response size with the tokens_num parameter to balance comprehensiveness with cost.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| query | Search query to find relevant code snippets. Describe what you're trying to do or what code you're looking for. | str | Yes |
| tokens_num | Token limit for response. Use 'dynamic' for automatic sizing, 5000 for standard queries, or 10000 for comprehensive examples. | str \| int | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| request_id | Unique identifier for this request | str |
| query | The search query used | str |
| response | Formatted code snippets and contextual examples with sources | str |
| results_count | Number of code sources found and included | int |
| cost_dollars | Cost of this request in dollars | str |
| search_time | Time taken to search in milliseconds | float |
| output_tokens | Number of tokens in the response | int |
### Possible use case
<!-- MANUAL: use_case -->
**API Integration Examples**: Find real-world code examples showing how to integrate with specific APIs or libraries.
**Debugging Assistance**: Search for code patterns related to error messages or specific programming challenges.
**Learning New Technologies**: Discover implementation examples when learning a new framework or programming language.
<!-- END MANUAL -->
---
@@ -0,0 +1,47 @@
# Exa Contents
### What it is
Retrieves document contents using Exa's contents API
### How it works
<!-- MANUAL: how_it_works -->
This block retrieves full content from web pages using Exa's contents API. You can provide URLs directly or document IDs from previous searches. The API supports live crawling to fetch fresh content and can extract text, highlights, and AI-generated summaries.
The block supports subpage crawling to gather related content and offers various content retrieval options including full text extraction, relevant highlights, and customizable summary generation. Results are formatted for easy use with LLMs.
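A hedged sketch of a contents request, assuming Exa's `/contents` endpoint and these option names; the API key and URL are placeholders, and the exact request shape should be checked against Exa's reference:
```python
import requests

resp = requests.post(
    "https://api.exa.ai/contents",
    headers={"x-api-key": "YOUR_EXA_API_KEY"},  # placeholder key
    json={
        "urls": ["https://example.com/article"],  # placeholder URL
        "text": True,                             # full page text
        "summary": {"query": "key findings"},     # optional LLM-generated summary
        "livecrawl": "fallback",                  # crawl live only if cached content is missing
    },
    timeout=120,
)
resp.raise_for_status()
for result in resp.json().get("results", []):
    print(result.get("url"), (result.get("text") or "")[:200])
```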
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| urls | Array of URLs to crawl (preferred over 'ids') | List[str] | No |
| ids | [DEPRECATED - use 'urls' instead] Array of document IDs obtained from searches | List[str] | No |
| text | Retrieve text content from pages | bool | No |
| highlights | Text snippets most relevant from each page | HighlightSettings | No |
| summary | LLM-generated summary of the webpage | SummarySettings | No |
| livecrawl | Livecrawling options: never, fallback (default), always, preferred | "never" \| "fallback" \| "always" | No |
| livecrawl_timeout | Timeout for livecrawling in milliseconds | int | No |
| subpages | Number of subpages to crawl | int | No |
| subpage_target | Keyword(s) to find specific subpages of search results | str \| List[str] | No |
| extras | Extra parameters for additional content | ExtrasSettings | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the request failed | str |
| results | List of document contents with metadata | List[ExaSearchResults] |
| result | Single document content result | ExaSearchResults |
| context | A formatted string of the results ready for LLMs | str |
| request_id | Unique identifier for the request | str |
| statuses | Status information for each requested URL | List[ContentStatus] |
| cost_dollars | Cost breakdown for the request | CostDollars |
### Possible use case
<!-- MANUAL: use_case -->
**Content Aggregation**: Retrieve full article content from multiple URLs for analysis or summarization.
**Competitive Research**: Crawl competitor websites to extract product information, pricing, or feature details.
**Data Enrichment**: Fetch detailed content from URLs discovered through Exa searches to build comprehensive datasets.
<!-- END MANUAL -->
---
@@ -0,0 +1,173 @@
# Exa Create Research
### What it is
Create research task with optional waiting - explores web and synthesizes findings with citations
### How it works
<!-- MANUAL: how_it_works -->
This block creates an asynchronous research task using Exa's Research API. The API autonomously explores the web, searches for relevant information, and synthesizes findings into a comprehensive report with citations.
You can choose from different model tiers (fast, standard, pro) depending on your speed vs. depth requirements. The block supports structured output via JSON Schema and can optionally wait for completion to return results immediately.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| instructions | Research instructions - clearly define what information to find, how to conduct research, and desired output format. | str | Yes |
| model | Research model: 'fast' for quick results, 'standard' for balanced quality, 'pro' for thorough analysis | "exa-research-fast" \| "exa-research" \| "exa-research-pro" | No |
| output_schema | JSON Schema to enforce structured output. When provided, results are validated and returned as parsed JSON. | Dict[str, True] | No |
| wait_for_completion | Wait for research to complete before returning. Ensures you get results immediately. | bool | No |
| polling_timeout | Maximum time to wait for completion in seconds (only if wait_for_completion is True) | int | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| research_id | Unique identifier for tracking this research request | str |
| status | Final status of the research | str |
| model | The research model used | str |
| instructions | The research instructions provided | str |
| created_at | When the research was created (Unix timestamp in ms) | int |
| output_content | Research output as text (only if wait_for_completion was True and completed) | str |
| output_parsed | Structured JSON output (only if wait_for_completion and outputSchema were provided) | Dict[str, True] |
| cost_total | Total cost in USD (only if wait_for_completion was True and completed) | float |
| elapsed_time | Time taken to complete in seconds (only if wait_for_completion was True) | float |
### Possible use case
<!-- MANUAL: use_case -->
**Market Research**: Automatically research market trends, competitors, or industry developments with cited sources.
**Due Diligence**: Conduct comprehensive background research on companies, people, or technologies.
**Content Research**: Gather research on topics for articles, reports, or presentations with proper citations.
<!-- END MANUAL -->
---
## Exa Get Research
### What it is
Get status and results of a research task
### How it works
<!-- MANUAL: how_it_works -->
This block retrieves the current status and results of a previously created research task. You can check whether the research is still running, completed, or failed.
When the research is complete, the block returns the full output content along with cost breakdown including searches performed, pages crawled, and tokens used. You can also optionally retrieve the detailed event log of research operations.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| research_id | The ID of the research task to retrieve | str | Yes |
| include_events | Include detailed event log of research operations | bool | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| research_id | The research task identifier | str |
| status | Current status: pending, running, completed, canceled, or failed | str |
| instructions | The original research instructions | str |
| model | The research model used | str |
| created_at | When research was created (Unix timestamp in ms) | int |
| finished_at | When research finished (Unix timestamp in ms, if completed/canceled/failed) | int |
| output_content | Research output as text (if completed) | str |
| output_parsed | Structured JSON output matching outputSchema (if provided and completed) | Dict[str, True] |
| cost_total | Total cost in USD (if completed) | float |
| cost_searches | Number of searches performed (if completed) | int |
| cost_pages | Number of pages crawled (if completed) | int |
| cost_reasoning_tokens | AI tokens used for reasoning (if completed) | int |
| error_message | Error message if research failed | str |
| events | Detailed event log (if include_events was True) | List[Dict[str, True]] |
### Possible use case
<!-- MANUAL: use_case -->
**Status Monitoring**: Check progress of long-running research tasks that were started asynchronously.
**Result Retrieval**: Fetch completed research results from tasks started earlier in your workflow.
**Cost Tracking**: Review the cost breakdown of completed research for budgeting and optimization.
<!-- END MANUAL -->
---
## Exa List Research
### What it is
List all research tasks with pagination support
### How it works
<!-- MANUAL: how_it_works -->
This block retrieves a list of all your research tasks, ordered by creation time with newest first. It supports pagination for handling large numbers of tasks.
The block returns basic information about each task including its ID, status, instructions, and timestamps. Use this to find specific research tasks or monitor all ongoing research activities.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| cursor | Cursor for pagination through results | str | No |
| limit | Number of research tasks to return (1-50) | int | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| research_tasks | List of research tasks ordered by creation time (newest first) | List[ResearchTaskModel] |
| research_task | Individual research task (yielded for each task) | ResearchTaskModel |
| has_more | Whether there are more tasks to paginate through | bool |
| next_cursor | Cursor for the next page of results | str |
### Possible use case
<!-- MANUAL: use_case -->
**Research Management**: View all active and completed research tasks for project management.
**Task Discovery**: Find previously created research tasks to retrieve their results or check status.
**Activity Auditing**: Review research activity history for compliance or reporting purposes.
<!-- END MANUAL -->
---
## Exa Wait For Research
### What it is
Wait for a research task to complete with configurable timeout
### How it works
<!-- MANUAL: how_it_works -->
This block polls a research task until it completes or times out. It periodically checks the task status at configurable intervals and returns the final results when done.
The block is useful when you need to block workflow execution until research completes. It returns whether the operation timed out, allowing you to handle incomplete research gracefully.
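The waiting behaviour is essentially a poll-with-timeout loop. A generic sketch is shown below; `fetch_status` is a stand-in for whatever retrieves the task state (for example, a wrapper around the Exa Get Research block) and is not a real API name:
```python
import time
from typing import Callable

def wait_for_research(
    research_id: str,
    fetch_status: Callable[[str], dict],  # stand-in for an "Exa Get Research" call
    timeout: float = 600.0,
    check_interval: float = 10.0,
) -> dict:
    """Poll until the task reaches a terminal state or the timeout elapses."""
    deadline = time.monotonic() + timeout
    while True:
        status = fetch_status(research_id)
        if status.get("status") in {"completed", "canceled", "failed"}:
            return {"timed_out": False, **status}
        if time.monotonic() >= deadline:
            return {"timed_out": True, "final_status": status.get("status")}
        time.sleep(check_interval)
```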
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| research_id | The ID of the research task to wait for | str | Yes |
| timeout | Maximum time to wait in seconds | int | No |
| check_interval | Seconds between status checks | int | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| research_id | The research task identifier | str |
| final_status | Final status when polling stopped | str |
| output_content | Research output as text (if completed) | str |
| output_parsed | Structured JSON output (if outputSchema was provided and completed) | Dict[str, True] |
| cost_total | Total cost in USD | float |
| elapsed_time | Total time waited in seconds | float |
| timed_out | Whether polling timed out before completion | bool |
### Possible use case
<!-- MANUAL: use_case -->
**Sequential Workflows**: Ensure research completes before proceeding to dependent workflow steps.
**Synchronous Integration**: Convert asynchronous research into synchronous operations for simpler workflow logic.
**Timeout Handling**: Implement research with graceful timeout handling for time-sensitive applications.
<!-- END MANUAL -->
---
@@ -0,0 +1,52 @@
# Exa Search
### What it is
Searches the web using Exa's advanced search API
### How it works
<!-- MANUAL: how_it_works -->
This block uses Exa's advanced search API to find web content. Unlike traditional search engines, Exa offers neural search that understands semantic meaning, making it excellent for finding specific types of content. You can choose between keyword search (traditional), neural search (semantic understanding), or fast search.
The block supports powerful filtering by domain, date ranges, content categories (companies, research papers, news, etc.), and text patterns. Results include URLs, titles, and optionally full content extraction.
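A hedged sketch of a filtered search request, assuming Exa's `/search` endpoint and camelCase field names; the API key, domains, and dates are placeholders, and the exact request the block builds may differ:
```python
import requests

resp = requests.post(
    "https://api.exa.ai/search",
    headers={"x-api-key": "YOUR_EXA_API_KEY"},  # placeholder key
    json={
        "query": "AI agent frameworks for workflow automation",
        "type": "neural",                      # semantic search; "keyword" for traditional matching
        "category": "company",
        "numResults": 10,
        "includeDomains": ["techcrunch.com"],  # placeholder domain filter
        "startPublishedDate": "2025-01-01T00:00:00.000Z",
        "contents": {"text": True},            # return page text with each result
    },
    timeout=60,
)
resp.raise_for_status()
for r in resp.json().get("results", []):
    print(r.get("title"), r.get("url"))
```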
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| query | The search query | str | Yes |
| type | Type of search | "keyword" \| "neural" \| "fast" | No |
| category | Category to search within: company, research paper, news, pdf, github, tweet, personal site, linkedin profile, financial report | "company" \| "research paper" \| "news" | No |
| user_location | The two-letter ISO country code of the user (e.g., 'US') | str | No |
| number_of_results | Number of results to return | int | No |
| include_domains | Domains to include in search | List[str] | No |
| exclude_domains | Domains to exclude from search | List[str] | No |
| start_crawl_date | Start date for crawled content | str (date-time) | No |
| end_crawl_date | End date for crawled content | str (date-time) | No |
| start_published_date | Start date for published content | str (date-time) | No |
| end_published_date | End date for published content | str (date-time) | No |
| include_text | Text patterns to include | List[str] | No |
| exclude_text | Text patterns to exclude | List[str] | No |
| contents | Content retrieval settings | ContentSettings | No |
| moderation | Enable content moderation to filter unsafe content from search results | bool | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the request failed | str |
| results | List of search results | List[ExaSearchResults] |
| result | Single search result | ExaSearchResults |
| context | A formatted string of the search results ready for LLMs. | str |
| search_type | For auto searches, indicates which search type was selected. | str |
| resolved_search_type | The search type that was actually used for this request (neural or keyword) | str |
| cost_dollars | Cost breakdown for the request | CostDollars |
### Possible use case
<!-- MANUAL: use_case -->
**Competitive Research**: Search for companies in a specific industry, filtered by recent news or funding announcements.
**Content Curation**: Find relevant articles and research papers on specific topics for newsletters or content aggregation.
**Lead Generation**: Search for companies matching specific criteria (industry, size, recent activity) for sales prospecting.
<!-- END MANUAL -->
---
@@ -0,0 +1,48 @@
# Exa Find Similar
### What it is
Finds similar links using Exa's findSimilar API
### How it works
<!-- MANUAL: how_it_works -->
This block uses Exa's findSimilar API to discover web pages that are semantically similar to a given URL. The API analyzes the content and context of the provided page to find related content across the web.
The block supports filtering by domains, date ranges, and text patterns to refine results. You can retrieve content directly with results and enable content moderation to filter unsafe content.
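A hedged sketch of the equivalent request, assuming Exa's `/findSimilar` endpoint; the API key and source URL are placeholders, and field names should be verified against the current API docs:
```python
import requests

resp = requests.post(
    "https://api.exa.ai/findSimilar",
    headers={"x-api-key": "YOUR_EXA_API_KEY"},  # placeholder key
    json={
        "url": "https://example.com/blog/vector-databases",  # placeholder source page
        "numResults": 10,
        "excludeDomains": ["example.com"],  # skip the source site itself
        "contents": {"text": True},
    },
    timeout=60,
)
resp.raise_for_status()
for r in resp.json().get("results", []):
    print(r.get("score"), r.get("title"), r.get("url"))
```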
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| url | The url for which you would like to find similar links | str | Yes |
| number_of_results | Number of results to return | int | No |
| include_domains | List of domains to include in the search. If specified, results will only come from these domains. | List[str] | No |
| exclude_domains | Domains to exclude from search | List[str] | No |
| start_crawl_date | Start date for crawled content | str (date-time) | No |
| end_crawl_date | End date for crawled content | str (date-time) | No |
| start_published_date | Start date for published content | str (date-time) | No |
| end_published_date | End date for published content | str (date-time) | No |
| include_text | Text patterns to include (max 1 string, up to 5 words) | List[str] | No |
| exclude_text | Text patterns to exclude (max 1 string, up to 5 words) | List[str] | No |
| contents | Content retrieval settings | ContentSettings | No |
| moderation | Enable content moderation to filter unsafe content from search results | bool | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the request failed | str |
| results | List of similar documents with metadata and content | List[ExaSearchResults] |
| result | Single similar document result | ExaSearchResults |
| context | A formatted string of the results ready for LLMs. | str |
| request_id | Unique identifier for the request | str |
| cost_dollars | Cost breakdown for the request | CostDollars |
### Possible use case
<!-- MANUAL: use_case -->
**Content Discovery**: Find related articles, blog posts, or resources similar to content you already like.
**Competitor Analysis**: Discover similar companies or products by finding pages similar to known competitors.
**Research Expansion**: Expand your research by finding additional sources similar to key reference materials.
<!-- END MANUAL -->
---
@@ -0,0 +1,39 @@
# Exa Webset Webhook
### What it is
Receive webhook notifications for Exa webset events
### How it works
<!-- MANUAL: how_it_works -->
This block acts as a webhook receiver for Exa webset events. When events occur on your websets (like new items found, searches completed, or enrichments finished), Exa sends notifications to this webhook endpoint.
The block can filter events by webset ID and event type. It parses incoming webhook payloads and outputs structured event data including the event type, affected webset, and event-specific details.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| webset_id | The webset ID to monitor (optional, monitors all if empty) | str | No |
| event_filter | Configure which events to receive | WebsetEventFilter | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| event_type | Type of event that occurred | str |
| event_id | Unique identifier for this event | str |
| webset_id | ID of the affected webset | str |
| data | Event-specific data | Dict[str, True] |
| timestamp | When the event occurred | str |
| metadata | Additional event metadata | Dict[str, True] |
### Possible use case
<!-- MANUAL: use_case -->
**Real-Time Processing**: Trigger workflows automatically when new items are added to websets without polling.
**Alert Systems**: Receive instant notifications when webset searches find new relevant results.
**Integration Pipelines**: Build event-driven integrations that react to webset changes in real time.
<!-- END MANUAL -->
---
@@ -0,0 +1,457 @@
# Exa Cancel Webset
### What it is
Cancel all operations being performed on a Webset
### How it works
<!-- MANUAL: how_it_works -->
This block cancels all running operations (searches, enrichments) on a webset. The webset transitions to an idle state and any in-progress operations are stopped.
The block is useful for stopping long-running operations that are no longer needed or when you need to modify the webset configuration. Items already processed before cancellation are retained.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| webset_id | The ID or external ID of the Webset to cancel | str | Yes |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| webset_id | The unique identifier for the webset | str |
| status | The status of the webset after cancellation | str |
| external_id | The external identifier for the webset | str |
| success | Whether the cancellation was successful | str |
### Possible use case
<!-- MANUAL: use_case -->
**Resource Management**: Stop expensive operations on websets that are no longer needed.
**Configuration Updates**: Cancel operations before making changes to webset settings.
**Error Recovery**: Stop problematic operations and restart with corrected parameters.
<!-- END MANUAL -->
---
## Exa Create Or Find Webset
### What it is
Create a new webset or return existing one by external_id (idempotent operation)
### How it works
<!-- MANUAL: how_it_works -->
This block implements idempotent webset creation using an external ID. If a webset with the given external_id already exists, it returns that webset. Otherwise, it creates a new one.
This pattern prevents duplicate websets when workflows retry or run multiple times. The block indicates whether the webset was newly created or already existed.
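The idempotency pattern itself is simple and worth spelling out; in the sketch below, `find` and `create` are hypothetical callables standing in for Webset API calls, not real Exa function names:
```python
from typing import Callable, Optional, Tuple

def create_or_find_webset(
    external_id: str,
    find: Callable[[str], Optional[dict]],  # hypothetical: look up a webset by external_id
    create: Callable[[str], dict],          # hypothetical: create a webset tagged with external_id
) -> Tuple[dict, bool]:
    """Return (webset, was_created); safe to call repeatedly with the same external_id."""
    existing = find(external_id)
    if existing is not None:
        return existing, False  # reuse the webset from a previous run
    return create(external_id), True
```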
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| external_id | External identifier for this webset - used to find existing or create new | str | Yes |
| search_query | Search query (optional - only needed if creating new webset) | str | No |
| search_count | Number of items to find in initial search | int | No |
| metadata | Key-value pairs to associate with the webset | Dict[str, True] | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| webset | The webset (existing or newly created) | Webset |
| was_created | True if webset was newly created, False if it already existed | bool |
### Possible use case
<!-- MANUAL: use_case -->
**Idempotent Workflows**: Safely re-run workflows without creating duplicate websets.
**External System Integration**: Map websets to IDs from your own systems for easy reference.
**Retry-Safe Operations**: Handle workflow retries gracefully by reusing existing websets.
<!-- END MANUAL -->
---
## Exa Create Webset
### What it is
Create a new Exa Webset for persistent web search collections with optional waiting for initial results
### How it works
<!-- MANUAL: how_it_works -->
This block creates a new Exa Webset, a persistent collection that stores web search results. You define a search query, entity type, and optional criteria that items must meet. The webset continuously evaluates potential matches against your criteria.
The block supports advanced features like scoped searches (searching within specific imports or other websets), enrichments for extracting structured data, and relationship-based "hop" searches. You can wait for initial results or return immediately for asynchronous processing.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| search_query | Your search query. Use this to describe what you are looking for. Any URL provided will be crawled and used as context for the search. | str | Yes |
| search_count | Number of items the search will attempt to find. The actual number of items found may be less than this number depending on the search complexity. | int | No |
| search_entity_type | Entity type: 'company', 'person', 'article', 'research_paper', or 'custom'. If not provided, we automatically detect the entity from the query. | "company" \| "person" \| "article" | No |
| search_entity_description | Description for custom entity type (required when search_entity_type is 'custom') | str | No |
| search_criteria | List of criteria descriptions that every item will be evaluated against. If not provided, we automatically detect the criteria from the query. | List[str] | No |
| search_exclude_sources | List of source IDs (imports or websets) to exclude from search results | List[str] | No |
| search_exclude_types | List of source types corresponding to exclude sources ('import' or 'webset') | List["import" \| "webset"] | No |
| search_scope_sources | List of source IDs (imports or websets) to limit search scope to | List[str] | No |
| search_scope_types | List of source types corresponding to scope sources ('import' or 'webset') | List["import" \| "webset"] | No |
| search_scope_relationships | List of relationship definitions for hop searches (optional, one per scope source) | List[str] | No |
| search_scope_relationship_limits | List of limits on the number of related entities to find (optional, one per scope relationship) | List[int] | No |
| import_sources | List of source IDs to import from | List[str] | No |
| import_types | List of source types corresponding to import sources ('import' or 'webset') | List["import" \| "webset"] | No |
| enrichment_descriptions | List of enrichment task descriptions to perform on each webset item | List[str] | No |
| enrichment_formats | List of formats for enrichment responses ('text', 'date', 'number', 'options', 'email', 'phone'). If not specified, we automatically select the best format. | List["text" \| "date" \| "number"] | No |
| enrichment_options | List of option lists for enrichments with 'options' format. Each inner list contains the option labels. | List[List[str]] | No |
| enrichment_metadata | List of metadata dictionaries for enrichments | List[Dict[str, True]] | No |
| external_id | External identifier for the webset. You can use this to reference the webset by your own internal identifiers. | str | No |
| metadata | Key-value pairs to associate with this webset | Dict[str, True] | No |
| wait_for_initial_results | Wait for the initial search to complete before returning. This ensures you get results immediately. | bool | No |
| polling_timeout | Maximum time to wait for completion in seconds (only used if wait_for_initial_results is True) | int | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| webset | The created webset with full details | Webset |
| initial_item_count | Number of items found in the initial search (only if wait_for_initial_results was True) | int |
| completion_time | Time taken to complete the initial search in seconds (only if wait_for_initial_results was True) | float |
### Possible use case
<!-- MANUAL: use_case -->
**Lead Generation**: Create websets to find companies or people matching specific criteria for sales outreach.
**Competitive Intelligence**: Build persistent collections tracking competitors, market entrants, or industry news.
**Research Databases**: Compile curated collections of articles, papers, or resources on specific topics.
<!-- END MANUAL -->
---
## Exa Delete Webset
### What it is
Delete a Webset and all its items
### How it works
<!-- MANUAL: how_it_works -->
This block permanently deletes a webset and all of its items, searches, enrichments, and monitors. The operation cannot be undone.
Use this to clean up websets that are no longer needed or to remove test data. The block accepts either the Exa-generated ID or your custom external_id.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| webset_id | The ID or external ID of the Webset to delete | str | Yes |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| webset_id | The unique identifier for the deleted webset | str |
| external_id | The external identifier for the deleted webset | str |
| status | The status of the deleted webset | str |
| success | Whether the deletion was successful | str |
### Possible use case
<!-- MANUAL: use_case -->
**Cleanup Operations**: Remove completed or abandoned websets to maintain organization.
**Data Management**: Delete websets containing outdated or irrelevant data.
**Cost Control**: Remove unused websets to prevent unnecessary storage costs.
<!-- END MANUAL -->
---
## Exa Get Webset
### What it is
Retrieve a Webset by ID or external ID
### How it works
<!-- MANUAL: how_it_works -->
This block retrieves detailed information about a specific webset including its status, configured searches, enrichments, and monitors.
The block returns the webset's current state, metadata, and timestamps. Use this to check webset configuration or monitor status before performing operations.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| webset_id | The ID or external ID of the Webset to retrieve | str | Yes |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| webset_id | The unique identifier for the webset | str |
| status | The status of the webset | str |
| external_id | The external identifier for the webset | str |
| searches | The searches performed on the webset | List[Dict[str, True]] |
| enrichments | The enrichments applied to the webset | List[Dict[str, True]] |
| monitors | The monitors for the webset | List[Dict[str, True]] |
| metadata | Key-value pairs associated with the webset | Dict[str, True] |
| created_at | The date and time the webset was created | str |
| updated_at | The date and time the webset was last updated | str |
### Possible use case
<!-- MANUAL: use_case -->
**Configuration Review**: Retrieve webset details to verify settings before making changes.
**Status Checking**: Check the current status and configuration of a webset.
**Workflow Integration**: Fetch webset information for use in downstream workflow steps.
<!-- END MANUAL -->
---
## Exa List Websets
### What it is
List all Websets with pagination support
### How it works
<!-- MANUAL: how_it_works -->
This block retrieves a paginated list of all your websets. Results include basic webset information and can be paginated through using cursor tokens.
Use this to discover existing websets, find specific websets by browsing, or build management interfaces for your webset collections.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| trigger | Trigger for the webset, value is ignored! | Any | No |
| cursor | Cursor for pagination through results | str | No |
| limit | Number of websets to return (1-100) | int | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| websets | List of websets | List[Webset] |
| has_more | Whether there are more results to paginate through | bool |
| next_cursor | Cursor for the next page of results | str |
### Possible use case
<!-- MANUAL: use_case -->
**Inventory Management**: List all websets to understand your current data collections.
**Bulk Operations**: Iterate through websets to perform batch updates or cleanup.
**Dashboard Building**: Retrieve webset listings for management dashboards or reporting.
<!-- END MANUAL -->
---
## Exa Preview Webset
### What it is
Preview how a search query will be interpreted before creating a webset. Helps understand entity detection, criteria generation, and available enrichments.
### How it works
<!-- MANUAL: how_it_works -->
This block analyzes your search query and shows how Exa will interpret it before you create a webset. It reveals the detected entity type, generated criteria, and available enrichment columns.
Use this to refine your query and understand what results to expect. The block also provides suggestions for improving your query for better results.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| query | Your search query to preview. Use this to see how Exa will interpret your search before creating a webset. | str | Yes |
| entity_type | Entity type to force: 'company', 'person', 'article', 'research_paper', or 'custom'. If not provided, Exa will auto-detect. | "company" \| "person" \| "article" | No |
| entity_description | Description for custom entity type (required when entity_type is 'custom') | str | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| preview | Full preview response with search and enrichment details | PreviewWebsetModel |
| entity_type | The detected or specified entity type | str |
| entity_description | Description of the entity type | str |
| criteria | Generated search criteria that will be used | List[PreviewCriterionModel] |
| enrichment_columns | Available enrichment columns that can be extracted | List[PreviewEnrichmentModel] |
| interpretation | Human-readable interpretation of how the query will be processed | str |
| suggestions | Suggestions for improving the query | List[str] |
### Possible use case
<!-- MANUAL: use_case -->
**Query Optimization**: Test and refine search queries before committing to webset creation.
**Entity Validation**: Verify that Exa correctly detects the entity type for your use case.
**Enrichment Planning**: Discover available enrichment columns to plan data extraction.
<!-- END MANUAL -->
---
## Exa Update Webset
### What it is
Update metadata for an existing Webset
### How it works
<!-- MANUAL: how_it_works -->
This block updates the metadata associated with an existing webset. Metadata is stored as key-value pairs and can be used to organize, tag, or annotate websets.
Setting metadata to null clears all existing metadata. This operation does not affect the webset's items, searches, or enrichments.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| webset_id | The ID or external ID of the Webset to update | str | Yes |
| metadata | Key-value pairs to associate with this webset (set to null to clear) | Dict[str, True] | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| webset_id | The unique identifier for the webset | str |
| status | The status of the webset | str |
| external_id | The external identifier for the webset | str |
| metadata | Updated metadata for the webset | Dict[str, True] |
| updated_at | The date and time the webset was updated | str |
### Possible use case
<!-- MANUAL: use_case -->
**Tagging Systems**: Add tags or labels to websets for organization and filtering.
**Project Association**: Link websets to specific projects or campaigns via metadata.
**Workflow State**: Store workflow-related state or flags in webset metadata.
<!-- END MANUAL -->
---
## Exa Webset Ready Check
### What it is
Check if webset is ready for next operation - enables conditional workflow branching
### How it works
<!-- MANUAL: how_it_works -->
This block checks if a webset is idle (no running operations) and optionally has a minimum number of items. It returns a boolean ready status along with a recommendation for the next action.
Use this block for conditional workflow branching to decide whether to proceed with processing, wait for more results, or add additional searches.
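The decision rule can be pictured with a minimal sketch. This is not the block's implementation; it only mirrors the idle-plus-minimum-items rule and the recommendation values listed in the Outputs table below.

```python
def readiness(status: str, item_count: int, has_searches: bool,
              min_items: int = 0) -> tuple[bool, str]:
    """Minimal sketch of the ready check: idle AND enough items."""
    if status == "idle" and item_count >= min_items:
        return True, "ready_to_process"
    if not has_searches:
        return False, "needs_search"
    return False, "waiting_for_results"

# An idle webset with 40 items clears a 25-item threshold; a running one does not.
print(readiness("idle", 40, True, min_items=25))     # (True, 'ready_to_process')
print(readiness("running", 3, True, min_items=25))   # (False, 'waiting_for_results')
```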
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| webset_id | The ID or external ID of the Webset to check | str | Yes |
| min_items | Minimum number of items required to be 'ready' | int | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| is_ready | True if webset is idle AND has minimum items | bool |
| status | Current webset status | str |
| item_count | Number of items in webset | int |
| has_searches | Whether webset has any searches configured | bool |
| has_enrichments | Whether webset has any enrichments | bool |
| recommendation | Suggested next action (ready_to_process, waiting_for_results, needs_search, etc.) | str |
### Possible use case
<!-- MANUAL: use_case -->
**Workflow Gating**: Only proceed with data processing when webset has enough items.
**Conditional Branching**: Route workflow based on webset readiness for different scenarios.
**Polling Logic**: Implement smart polling by checking readiness before fetching items.
<!-- END MANUAL -->
---
## Exa Webset Status
### What it is
Get a quick status overview of a webset
### How it works
<!-- MANUAL: how_it_works -->
This block returns a lightweight status overview of a webset without retrieving full item data. It includes counts for items, searches, enrichments, and monitors along with the current processing status.
Use this for quick status checks and monitoring without the overhead of retrieving complete webset details or items.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| webset_id | The ID or external ID of the Webset | str | Yes |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| webset_id | The webset identifier | str |
| status | Current status (idle, running, paused, etc.) | str |
| item_count | Total number of items in the webset | int |
| search_count | Number of searches performed | int |
| enrichment_count | Number of enrichments configured | int |
| monitor_count | Number of monitors configured | int |
| last_updated | When the webset was last updated | str |
| is_processing | Whether any operations are currently running | bool |
### Possible use case
<!-- MANUAL: use_case -->
**Status Dashboards**: Display webset status in monitoring dashboards.
**Health Checks**: Verify websets are active and processing as expected.
**Lightweight Polling**: Check status without fetching full webset data.
<!-- END MANUAL -->
---
## Exa Webset Summary
### What it is
Get a comprehensive summary of a webset with samples and statistics
### How it works
<!-- MANUAL: how_it_works -->
This block generates a comprehensive summary of a webset including statistics, sample items, and detailed breakdowns of searches and enrichments. It provides an overview useful for reporting and analysis.
You can control what the summary includes (sample items, search details, enrichment details) to balance comprehensiveness with response size.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| webset_id | The ID or external ID of the Webset | str | Yes |
| include_sample_items | Include sample items in the summary | bool | No |
| sample_size | Number of sample items to include | int | No |
| include_search_details | Include details about searches | bool | No |
| include_enrichment_details | Include details about enrichments | bool | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| webset_id | The webset identifier | str |
| status | Current status | str |
| entity_type | Type of entities in the webset | str |
| total_items | Total number of items | int |
| sample_items | Sample items from the webset | List[Dict[str, True]] |
| search_summary | Summary of searches performed | SearchSummaryModel |
| enrichment_summary | Summary of enrichments applied | EnrichmentSummaryModel |
| monitor_summary | Summary of monitors configured | MonitorSummaryModel |
| statistics | Various statistics about the webset | WebsetStatisticsModel |
| created_at | When the webset was created | str |
| updated_at | When the webset was last updated | str |
### Possible use case
<!-- MANUAL: use_case -->
**Executive Reporting**: Generate summaries of webset collections for stakeholder reports.
**Quality Review**: Review sample items and statistics to assess webset quality.
**Progress Tracking**: Monitor webset growth and activity through periodic summaries.
<!-- END MANUAL -->
---


@@ -0,0 +1,211 @@
# Exa Cancel Enrichment
### What it is
Cancel a running enrichment operation
### How it works
<!-- MANUAL: how_it_works -->
This block stops a running enrichment operation on a webset. Items already enriched before cancellation retain their enrichment data, but remaining items will not be processed.
Use this when an enrichment is taking too long, producing unexpected results, or is no longer needed. The block returns the approximate number of items enriched before cancellation.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| webset_id | The ID or external ID of the Webset | str | Yes |
| enrichment_id | The ID of the enrichment to cancel | str | Yes |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| enrichment_id | The ID of the canceled enrichment | str |
| status | Status after cancellation | str |
| items_enriched_before_cancel | Approximate number of items enriched before cancellation | int |
| success | Whether the cancellation was successful | str |
### Possible use case
<!-- MANUAL: use_case -->
**Cost Control**: Stop enrichments that are exceeding budget or taking too long.
**Error Handling**: Cancel enrichments producing incorrect results to fix configuration.
**Priority Changes**: Stop lower-priority enrichments to free resources for urgent tasks.
<!-- END MANUAL -->
---
## Exa Create Enrichment
### What it is
Create enrichments to extract additional structured data from webset items
### How it works
<!-- MANUAL: how_it_works -->
This block creates an enrichment task that extracts specific data from each webset item using AI. You define what to extract via a description, and the enrichment runs against all current and future items in the webset.
Enrichments support various output formats including text, dates, numbers, and predefined options. You can apply enrichments to existing items immediately or configure them to run only on new items.
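As an illustration, a classification-style enrichment might be configured with inputs like these. Field names follow the Inputs table below, the 'options' format is the one referenced by the options input, and all IDs and values are placeholders.

```python
# Illustrative inputs for an options-format classification enrichment.
enrichment_inputs = {
    "webset_id": "ws_example",                        # placeholder webset ID
    "description": "The company's primary industry sector",
    "title": "Industry",
    "format": "options",
    "options": ["SaaS", "Fintech", "Healthcare", "Other"],
    "apply_to_existing": True,
    "wait_for_completion": False,
}
```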
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| webset_id | The ID or external ID of the Webset | str | Yes |
| description | What data to extract from each item | str | Yes |
| title | Short title for this enrichment (auto-generated if not provided) | str | No |
| format | Expected format of the extracted data | "text" \| "date" \| "number" | No |
| options | Available options when format is 'options' | List[str] | No |
| apply_to_existing | Apply this enrichment to existing items in the webset | bool | No |
| metadata | Metadata to attach to the enrichment | Dict[str, True] | No |
| wait_for_completion | Wait for the enrichment to complete on existing items | bool | No |
| polling_timeout | Maximum time to wait for completion in seconds | int | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| enrichment_id | The unique identifier for the created enrichment | str |
| webset_id | The webset this enrichment belongs to | str |
| status | Current status of the enrichment | str |
| title | Title of the enrichment | str |
| description | Description of what data is extracted | str |
| format | Format of the extracted data | str |
| instructions | Generated instructions for the enrichment | str |
| items_enriched | Number of items enriched (if wait_for_completion was True) | int |
| completion_time | Time taken to complete in seconds (if wait_for_completion was True) | float |
### Possible use case
<!-- MANUAL: use_case -->
**Data Extraction**: Extract specific fields like founding dates, employee counts, or contact info from company profiles.
**Classification**: Categorize items into predefined buckets using the options format.
**Sentiment Analysis**: Analyze sentiment or tone from article content or reviews.
<!-- END MANUAL -->
---
## Exa Delete Enrichment
### What it is
Delete an enrichment from a webset
### How it works
<!-- MANUAL: how_it_works -->
This block removes an enrichment configuration from a webset. The enrichment will no longer be applied to new items, but existing enrichment data on items is not deleted.
Use this to clean up enrichments that are no longer needed or to remove misconfigured enrichments before creating corrected ones.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| webset_id | The ID or external ID of the Webset | str | Yes |
| enrichment_id | The ID of the enrichment to delete | str | Yes |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| enrichment_id | The ID of the deleted enrichment | str |
| success | Whether the deletion was successful | str |
### Possible use case
<!-- MANUAL: use_case -->
**Configuration Cleanup**: Remove enrichments that are no longer relevant to your data needs.
**Reconfiguration**: Delete misconfigured enrichments before creating corrected replacements.
**Cost Optimization**: Remove unnecessary enrichments to reduce processing costs on new items.
<!-- END MANUAL -->
---
## Exa Get Enrichment
### What it is
Get the status and details of a webset enrichment
### How it works
<!-- MANUAL: how_it_works -->
This block retrieves detailed information about a specific enrichment including its configuration, current status, and processing progress.
Use this to monitor enrichment progress, verify configuration, or troubleshoot issues with enrichment results. Returns the full enrichment specification along with timestamps.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| webset_id | The ID or external ID of the Webset | str | Yes |
| enrichment_id | The ID of the enrichment to retrieve | str | Yes |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| enrichment_id | The unique identifier for the enrichment | str |
| status | Current status of the enrichment | str |
| title | Title of the enrichment | str |
| description | Description of what data is extracted | str |
| format | Format of the extracted data | str |
| options | Available options (for 'options' format) | List[str] |
| instructions | Generated instructions for the enrichment | str |
| created_at | When the enrichment was created | str |
| updated_at | When the enrichment was last updated | str |
| metadata | Metadata attached to the enrichment | Dict[str, True] |
### Possible use case
<!-- MANUAL: use_case -->
**Progress Monitoring**: Check enrichment status to monitor completion of large batch operations.
**Configuration Verification**: Retrieve enrichment details to verify settings before making changes.
**Debugging**: Investigate enrichment configuration when results don't match expectations.
<!-- END MANUAL -->
---
## Exa Update Enrichment
### What it is
Update an existing enrichment configuration
### How it works
<!-- MANUAL: how_it_works -->
This block modifies an existing enrichment's configuration. You can update the description, output format, available options, or metadata without recreating the enrichment.
Changes apply to future items; existing enrichment data is not reprocessed unless you explicitly re-run the enrichment on existing items.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| webset_id | The ID or external ID of the Webset | str | Yes |
| enrichment_id | The ID of the enrichment to update | str | Yes |
| description | New description for what data to extract | str | No |
| format | New format for the extracted data | "text" \| "date" \| "number" | No |
| options | New options when format is 'options' | List[str] | No |
| metadata | New metadata to attach to the enrichment | Dict[str, True] | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| enrichment_id | The unique identifier for the enrichment | str |
| status | Current status of the enrichment | str |
| title | Title of the enrichment | str |
| description | Updated description | str |
| format | Updated format | str |
| success | Whether the update was successful | str |
### Possible use case
<!-- MANUAL: use_case -->
**Refinement**: Improve enrichment descriptions based on initial results to get better extractions.
**Option Updates**: Add or modify options for classification enrichments as needs evolve.
**Metadata Management**: Update enrichment metadata for organization or tracking purposes.
<!-- END MANUAL -->
---


@@ -0,0 +1,207 @@
# Exa Create Import
### What it is
Import CSV data to use with websets for targeted searches
### How it works
<!-- MANUAL: how_it_works -->
This block creates an import from CSV data that can be used as a source for webset searches. Imports allow you to bring your own data (like company lists or contact lists) and use them for scoped or exclusion searches.
You specify the entity type and which columns contain identifiers and URLs. The import becomes available as a source that can be referenced when creating webset searches.
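For instance, a small customer list might be supplied like this; the CSV content and column indices are illustrative, and the field names follow the Inputs table below.

```python
# Column 0 holds the identifier, column 1 the URL (both indices are 0-based).
csv_data = (
    "company_name,website\n"
    "Acme Corp,https://acme.example.com\n"
    "Globex,https://globex.example.com\n"
)

import_inputs = {
    "title": "Existing customers",
    "csv_data": csv_data,
    "entity_type": "company",
    "identifier_column": 0,
    "url_column": 1,
}
```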
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| title | Title for this import | str | Yes |
| csv_data | CSV data to import (as a string) | str | Yes |
| entity_type | Type of entities being imported | "company" \| "person" \| "article" | No |
| entity_description | Description for custom entity type | str | No |
| identifier_column | Column index containing the identifier (0-based) | int | No |
| url_column | Column index containing URLs (optional) | int | No |
| metadata | Metadata to attach to the import | Dict[str, True] | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| import_id | The unique identifier for the created import | str |
| status | Current status of the import | str |
| title | Title of the import | str |
| count | Number of items in the import | int |
| entity_type | Type of entities imported | str |
| upload_url | Upload URL for CSV data (only if csv_data not provided in request) | str |
| upload_valid_until | Expiration time for upload URL (only if upload_url is provided) | str |
| created_at | When the import was created | str |
### Possible use case
<!-- MANUAL: use_case -->
**Customer Enrichment**: Import your customer list to find similar companies or related contacts.
**Exclusion Lists**: Import existing leads to exclude from new prospecting searches.
**Targeted Expansion**: Use imported data as a starting point for relationship-based searches.
<!-- END MANUAL -->
---
## Exa Delete Import
### What it is
Delete an import
### How it works
<!-- MANUAL: how_it_works -->
This block permanently deletes an import and its data. Any websets that reference this import for scoped or exclusion searches will no longer have access to it.
Use this to clean up imports that are no longer needed or contain outdated data. The deletion cannot be undone.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| import_id | The ID of the import to delete | str | Yes |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| import_id | The ID of the deleted import | str |
| success | Whether the deletion was successful | str |
### Possible use case
<!-- MANUAL: use_case -->
**Data Refresh**: Delete outdated imports before uploading updated versions.
**Cleanup Operations**: Remove imports that are no longer used in any webset searches.
**Compliance**: Delete imports containing data that needs to be removed for privacy compliance.
<!-- END MANUAL -->
---
## Exa Export Webset
### What it is
Export webset data in JSON, CSV, or JSON Lines format
### How it works
<!-- MANUAL: how_it_works -->
This block exports all items from a webset in your chosen format. You can include full content and enrichment data in the export, and limit the number of items exported.
Supported formats include JSON for structured data, CSV for spreadsheet compatibility, and JSON Lines for streaming or large dataset processing.
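Downstream steps usually need to parse the export_data string. A minimal sketch, assuming the JSON export is a top-level array of item objects:

```python
import csv
import io
import json

def parse_export(export_data: str, fmt: str) -> list[dict]:
    """Turn the export_data string into a list of row dictionaries."""
    if fmt == "json":
        return json.loads(export_data)  # assumes a top-level JSON array
    if fmt == "jsonl":
        return [json.loads(line) for line in export_data.splitlines() if line.strip()]
    if fmt == "csv":
        return list(csv.DictReader(io.StringIO(export_data)))
    raise ValueError(f"Unsupported format: {fmt}")
```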
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| webset_id | The ID or external ID of the Webset to export | str | Yes |
| format | Export format | "json" \| "csv" \| "jsonl" | No |
| include_content | Include full content in export | bool | No |
| include_enrichments | Include enrichment data in export | bool | No |
| max_items | Maximum number of items to export | int | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| export_data | Exported data in the requested format | str |
| item_count | Number of items exported | int |
| total_items | Total number of items in the webset | int |
| truncated | Whether the export was truncated due to max_items limit | bool |
| format | Format of the exported data | str |
### Possible use case
<!-- MANUAL: use_case -->
**CRM Integration**: Export webset data as CSV to import into CRM or marketing automation systems.
**Reporting**: Generate exports for analysis in spreadsheets or business intelligence tools.
**Backup**: Create periodic exports of valuable webset data for archival purposes.
<!-- END MANUAL -->
---
## Exa Get Import
### What it is
Get the status and details of an import
### How it works
<!-- MANUAL: how_it_works -->
This block retrieves detailed information about an import including its status, item count, and configuration. Use this to check if an import is ready to use or to troubleshoot failed imports.
The block returns upload status information if the import is pending data upload, or failure details if the import encountered errors.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| import_id | The ID of the import to retrieve | str | Yes |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| import_id | The unique identifier for the import | str |
| status | Current status of the import | str |
| title | Title of the import | str |
| format | Format of the imported data | str |
| entity_type | Type of entities imported | str |
| count | Number of items imported | int |
| upload_url | Upload URL for CSV data (if import not yet uploaded) | str |
| upload_valid_until | Expiration time for upload URL (if applicable) | str |
| failed_reason | Reason for failure (if applicable) | str |
| failed_message | Detailed failure message (if applicable) | str |
| created_at | When the import was created | str |
| updated_at | When the import was last updated | str |
| metadata | Metadata attached to the import | Dict[str, True] |
### Possible use case
<!-- MANUAL: use_case -->
**Status Verification**: Check import status after upload to confirm data is ready for use.
**Error Investigation**: Retrieve import details to understand why an import failed.
**Audit Trail**: Review import configuration and metadata for documentation purposes.
<!-- END MANUAL -->
---
## Exa List Imports
### What it is
List all imports with pagination support
### How it works
<!-- MANUAL: how_it_works -->
This block retrieves a paginated list of all your imports. Results include basic information about each import such as title, status, and item count.
Use this to discover existing imports that can be referenced in webset searches or to manage your import library.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| limit | Number of imports to return | int | No |
| cursor | Cursor for pagination | str | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| imports | List of imports | List[Dict[str, True]] |
| import_item | Individual import (yielded for each import) | Dict[str, True] |
| has_more | Whether there are more imports to paginate through | bool |
| next_cursor | Cursor for the next page of results | str |
### Possible use case
<!-- MANUAL: use_case -->
**Import Discovery**: Find existing imports to reference when creating new webset searches.
**Library Management**: Review all imports to identify outdated data that can be cleaned up.
**Source Selection**: Browse available imports when setting up scoped or exclusion searches.
<!-- END MANUAL -->
---


@@ -0,0 +1,237 @@
# Exa Bulk Webset Items
### What it is
Get all items from a webset in bulk (with configurable limits)
### How it works
<!-- MANUAL: how_it_works -->
This block retrieves all items from a webset in a single operation, automatically handling pagination internally. You can specify a maximum number of items and choose whether to include enrichments and full content.
Use this for batch processing when you need all webset data at once rather than paginating through results manually.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| webset_id | The ID or external ID of the Webset | str | Yes |
| max_items | Maximum number of items to retrieve (1-1000). Note: Large values may take longer. | int | No |
| include_enrichments | Include enrichment data for each item | bool | No |
| include_content | Include full content for each item | bool | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| items | All items from the webset | List[WebsetItemModel] |
| item | Individual item (yielded for each item) | WebsetItemModel |
| total_retrieved | Total number of items retrieved | int |
| truncated | Whether results were truncated due to max_items limit | bool |
### Possible use case
<!-- MANUAL: use_case -->
**Batch Processing**: Retrieve all webset items for bulk analysis or processing in external systems.
**Data Export**: Get complete webset data for integration with other tools or databases.
**Full Dataset Analysis**: Analyze entire webset contents when pagination isn't practical.
<!-- END MANUAL -->
---
## Exa Delete Webset Item
### What it is
Delete a specific item from a webset
### How it works
<!-- MANUAL: how_it_works -->
This block permanently removes a specific item from a webset. The item and all its enrichment data are deleted and cannot be recovered.
Use this to clean up irrelevant results, remove duplicates, or curate webset contents by removing items that don't meet your quality standards.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| webset_id | The ID or external ID of the Webset | str | Yes |
| item_id | The ID of the item to delete | str | Yes |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| item_id | The ID of the deleted item | str |
| success | Whether the deletion was successful | str |
### Possible use case
<!-- MANUAL: use_case -->
**Data Curation**: Remove irrelevant or low-quality items to improve webset accuracy.
**Duplicate Removal**: Delete duplicate entries discovered during review.
**Compliance**: Remove items that shouldn't be included for legal or policy reasons.
<!-- END MANUAL -->
---
## Exa Get New Items
### What it is
Get items added since a cursor - enables incremental processing without reprocessing
### How it works
<!-- MANUAL: how_it_works -->
This block retrieves only items added to a webset since your last check, identified by a cursor. This enables efficient incremental processing without re-fetching previously processed items.
Save the returned next_cursor for subsequent calls to implement continuous incremental processing of new webset additions.
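A typical pattern is to persist the cursor between scheduled runs. The sketch below stores it in a local JSON file; the storage location is an illustrative choice, not something the block prescribes.

```python
import json
from pathlib import Path

CURSOR_FILE = Path("webset_cursor.json")  # illustrative storage location

def load_cursor() -> str | None:
    """Return the saved cursor, or None on the first run."""
    if CURSOR_FILE.exists():
        return json.loads(CURSOR_FILE.read_text()).get("next_cursor")
    return None

def save_cursor(next_cursor: str) -> None:
    """Persist the next_cursor output for the following run."""
    CURSOR_FILE.write_text(json.dumps({"next_cursor": next_cursor}))

# Each run: pass load_cursor() as since_cursor, process only the returned
# new_items, then call save_cursor(...) with the block's next_cursor output.
```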
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| webset_id | The ID or external ID of the Webset | str | Yes |
| since_cursor | Cursor from previous run - only items after this will be returned. Leave empty on first run. | str | No |
| max_items | Maximum number of new items to retrieve | int | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| new_items | Items added since the cursor | List[WebsetItemModel] |
| item | Individual item (yielded for each new item) | WebsetItemModel |
| count | Number of new items found | int |
| next_cursor | Save this cursor for the next run to get only newer items | str |
| has_more | Whether there are more new items beyond max_items | bool |
### Possible use case
<!-- MANUAL: use_case -->
**Incremental Processing**: Process only new webset items in scheduled workflows without duplicating work.
**Real-Time Pipelines**: Build efficient pipelines that react to new data without full dataset scans.
**Change Detection**: Track what's new in websets for alerting or notification systems.
<!-- END MANUAL -->
---
## Exa Get Webset Item
### What it is
Get a specific item from a webset by its ID
### How it works
<!-- MANUAL: how_it_works -->
This block retrieves detailed information about a specific webset item including its content, entity data, and enrichments. Use this when you need complete data for a particular item.
The block returns the full item record with all available data, timestamps, and any enrichment results that have been applied.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| webset_id | The ID or external ID of the Webset | str | Yes |
| item_id | The ID of the specific item to retrieve | str | Yes |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| item_id | The unique identifier for the item | str |
| url | The URL of the original source | str |
| title | The title of the item | str |
| content | The main content of the item | str |
| entity_data | Entity-specific structured data | Dict[str, True] |
| enrichments | Enrichment data added to the item | Dict[str, True] |
| created_at | When the item was added to the webset | str |
| updated_at | When the item was last updated | str |
### Possible use case
<!-- MANUAL: use_case -->
**Detail View**: Fetch complete item data for display in detail views or profiles.
**Enrichment Review**: Retrieve item with enrichments to verify data extraction quality.
**Reference Lookup**: Get specific items by ID for cross-referencing or validation.
<!-- END MANUAL -->
---
## Exa List Webset Items
### What it is
List items in a webset with pagination support
### How it works
<!-- MANUAL: how_it_works -->
This block retrieves a paginated list of items from a webset. You control page size and can optionally wait for items if the webset is still processing.
Use pagination cursors to iterate through large websets efficiently. Each page returns items along with metadata about whether more pages exist.
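The pagination loop looks roughly like this; fetch_page is a stand-in for one execution of the block, not a real API call.

```python
from typing import Callable, Iterator, Optional

def iterate_items(fetch_page: Callable[[Optional[str]], dict]) -> Iterator[dict]:
    """Drain a cursor-paginated listing one page at a time.

    fetch_page takes the current cursor (None on the first call) and returns
    a dict with 'items', 'has_more', and 'next_cursor', mirroring the Outputs
    table below.
    """
    cursor: Optional[str] = None
    while True:
        page = fetch_page(cursor)
        yield from page["items"]
        if not page.get("has_more"):
            break
        cursor = page["next_cursor"]
```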
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| webset_id | The ID or external ID of the Webset | str | Yes |
| limit | Number of items to return (1-100) | int | No |
| cursor | Cursor for pagination through results | str | No |
| wait_for_items | Wait for items to be available if webset is still processing | bool | No |
| wait_timeout | Maximum time to wait for items in seconds | int | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| items | List of webset items | List[WebsetItemModel] |
| webset_id | The ID of the webset | str |
| item | Individual item (yielded for each item in the list) | WebsetItemModel |
| has_more | Whether there are more items to paginate through | bool |
| next_cursor | Cursor for the next page of results | str |
### Possible use case
<!-- MANUAL: use_case -->
**Paginated Display**: Build UIs that display webset items with pagination controls.
**Streaming Processing**: Process webset items in manageable batches to avoid memory issues.
**Controlled Iteration**: Step through large websets methodically for thorough analysis.
<!-- END MANUAL -->
---
## Exa Webset Items Summary
### What it is
Get a summary of webset items without retrieving all data
### How it works
<!-- MANUAL: how_it_works -->
This block provides a lightweight summary of webset items including total count, entity type, available enrichment columns, and optional sample items. It's efficient for getting an overview without fetching full data.
Use this to understand webset contents at a glance, check enrichment availability, or get sample data for validation.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| webset_id | The ID or external ID of the Webset | str | Yes |
| sample_size | Number of sample items to include | int | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| total_items | Total number of items in the webset | int |
| entity_type | Type of entities in the webset | str |
| sample_items | Sample of items from the webset | List[WebsetItemModel] |
| enrichment_columns | List of enrichment columns available | List[str] |
### Possible use case
<!-- MANUAL: use_case -->
**Quick Overview**: Get webset statistics and samples without loading all data.
**Schema Discovery**: Check what enrichment columns are available before building exports.
**Validation**: Review sample items to verify webset quality before full processing.
<!-- END MANUAL -->
---


@@ -0,0 +1,212 @@
# Exa Create Monitor
### What it is
Create automated monitors to keep websets updated with fresh data on a schedule
### How it works
<!-- MANUAL: how_it_works -->
This block creates a scheduled monitor that automatically updates a webset on a cron schedule. Monitors can either search for new items matching criteria or refresh existing item content and enrichments.
Configure the cron expression for your desired frequency (daily, weekly, etc.) and choose between search behavior to find new items or refresh behavior to update existing data.
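A few example 5-field cron expressions (minute, hour, day of month, month, day of week); the values are illustrative and all respect the documented limit of at most one run per day.

```python
# Illustrative schedules, interpreted in the configured IANA timezone.
DAILY_AT_6AM   = "0 6 * * *"     # every day at 06:00
WEEKDAYS_NOON  = "0 12 * * 1-5"  # Monday through Friday at 12:00
WEEKLY_MONDAYS = "0 7 * * 1"     # Mondays at 07:00
# Something like "*/15 * * * *" (every 15 minutes) would exceed the once-per-day limit.
```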
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| webset_id | The ID or external ID of the Webset to monitor | str | Yes |
| cron_expression | Cron expression for scheduling (5 fields, max once per day) | str | Yes |
| timezone | IANA timezone for the schedule | str | No |
| behavior_type | Type of monitor behavior (search for new items or refresh existing) | "search" \| "refresh" | No |
| search_query | Search query for finding new items (required for search behavior) | str | No |
| search_count | Number of items to find in each search | int | No |
| search_criteria | Criteria that items must meet | List[str] | No |
| search_behavior | How new results interact with existing items | "append" \| "override" | No |
| entity_type | Type of entity to search for (company, person, etc.) | str | No |
| refresh_content | Refresh content from source URLs (for refresh behavior) | bool | No |
| refresh_enrichments | Re-run enrichments on items (for refresh behavior) | bool | No |
| metadata | Metadata to attach to the monitor | Dict[str, True] | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| monitor_id | The unique identifier for the created monitor | str |
| webset_id | The webset this monitor belongs to | str |
| status | Status of the monitor | str |
| behavior_type | Type of monitor behavior | str |
| next_run_at | When the monitor will next run | str |
| cron_expression | The schedule cron expression | str |
| timezone | The timezone for scheduling | str |
| created_at | When the monitor was created | str |
### Possible use case
<!-- MANUAL: use_case -->
**Continuous Lead Generation**: Schedule daily searches to find new companies matching your criteria.
**News Monitoring**: Set up monitors to discover fresh articles on topics of interest.
**Data Freshness**: Schedule periodic refreshes to keep enrichment data current.
<!-- END MANUAL -->
---
## Exa Delete Monitor
### What it is
Delete a monitor from a webset
### How it works
<!-- MANUAL: how_it_works -->
This block permanently deletes a monitor, stopping all future scheduled runs. Any data already collected by the monitor remains in the webset.
Use this to clean up monitors that are no longer needed or to stop scheduled operations before deleting a webset.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| monitor_id | The ID of the monitor to delete | str | Yes |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| monitor_id | The ID of the deleted monitor | str |
| success | Whether the deletion was successful | str |
### Possible use case
<!-- MANUAL: use_case -->
**Project Completion**: Delete monitors when monitoring campaigns or projects conclude.
**Cost Management**: Remove monitors that are no longer providing value to reduce costs.
**Configuration Cleanup**: Delete old monitors before creating updated replacements.
<!-- END MANUAL -->
---
## Exa Get Monitor
### What it is
Get the details and status of a webset monitor
### How it works
<!-- MANUAL: how_it_works -->
This block retrieves detailed information about a monitor including its configuration, schedule, current status, and information about the last run.
Use this to verify monitor settings, check when the next run is scheduled, or review results from recent executions.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| monitor_id | The ID of the monitor to retrieve | str | Yes |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| monitor_id | The unique identifier for the monitor | str |
| webset_id | The webset this monitor belongs to | str |
| status | Current status of the monitor | str |
| behavior_type | Type of monitor behavior | str |
| behavior_config | Configuration for the monitor behavior | Dict[str, True] |
| cron_expression | The schedule cron expression | str |
| timezone | The timezone for scheduling | str |
| next_run_at | When the monitor will next run | str |
| last_run | Information about the last run | Dict[str, True] |
| created_at | When the monitor was created | str |
| updated_at | When the monitor was last updated | str |
| metadata | Metadata attached to the monitor | Dict[str, True] |
### Possible use case
<!-- MANUAL: use_case -->
**Schedule Verification**: Check when monitors are scheduled to run next.
**Performance Review**: Examine last run details to assess monitor effectiveness.
**Configuration Audit**: Retrieve monitor settings for documentation or troubleshooting.
<!-- END MANUAL -->
---
## Exa List Monitors
### What it is
List all monitors with optional webset filtering
### How it works
<!-- MANUAL: how_it_works -->
This block retrieves a paginated list of all monitors, optionally filtered by webset. Results include basic monitor information such as status, schedule, and next run time.
Use this to get an overview of all active monitors or find monitors associated with a specific webset.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| webset_id | Filter monitors by webset ID | str | No |
| limit | Number of monitors to return | int | No |
| cursor | Cursor for pagination | str | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| monitors | List of monitors | List[Dict[str, True]] |
| monitor | Individual monitor (yielded for each monitor) | Dict[str, True] |
| has_more | Whether there are more monitors to paginate through | bool |
| next_cursor | Cursor for the next page of results | str |
### Possible use case
<!-- MANUAL: use_case -->
**Monitor Dashboard**: Build dashboards showing all active monitors and their schedules.
**Webset Management**: Find monitors associated with websets before making changes.
**Activity Overview**: Review all scheduled monitoring activity across your account.
<!-- END MANUAL -->
---
## Exa Update Monitor
### What it is
Update a monitor's status, schedule, or metadata
### How it works
<!-- MANUAL: how_it_works -->
This block modifies an existing monitor's configuration. You can enable, disable, or pause monitors, change their schedule, update the timezone, or modify metadata.
Changes take effect immediately. Disabling a monitor stops future scheduled runs until re-enabled.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| monitor_id | The ID of the monitor to update | str | Yes |
| status | New status for the monitor | "enabled" \| "disabled" \| "paused" | No |
| cron_expression | New cron expression for scheduling | str | No |
| timezone | New timezone for the schedule | str | No |
| metadata | New metadata for the monitor | Dict[str, True] | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| monitor_id | The unique identifier for the monitor | str |
| status | Updated status of the monitor | str |
| next_run_at | When the monitor will next run | str |
| updated_at | When the monitor was updated | str |
| success | Whether the update was successful | str |
### Possible use case
<!-- MANUAL: use_case -->
**Schedule Changes**: Adjust monitor frequency based on data velocity or business needs.
**Pause/Resume**: Temporarily pause monitors during maintenance or when not needed.
**Status Management**: Enable or disable monitors programmatically based on conditions.
<!-- END MANUAL -->
---


@@ -0,0 +1,132 @@
# Exa Wait For Enrichment
### What it is
Wait for a webset enrichment to complete with progress tracking
### How it works
<!-- MANUAL: how_it_works -->
This block polls an enrichment operation until it completes or times out. It checks status at configurable intervals and can include sample results when done.
Use this to block workflow execution until enrichments finish, enabling sequential operations that depend on enrichment data being available.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| webset_id | The ID or external ID of the Webset | str | Yes |
| enrichment_id | The ID of the enrichment to monitor | str | Yes |
| timeout | Maximum time to wait in seconds | int | No |
| check_interval | Initial interval between status checks in seconds | int | No |
| sample_results | Include sample enrichment results in output | bool | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| enrichment_id | The enrichment ID that was monitored | str |
| final_status | The final status of the enrichment | str |
| items_enriched | Number of items successfully enriched | int |
| enrichment_title | Title/description of the enrichment | str |
| elapsed_time | Total time elapsed in seconds | float |
| sample_data | Sample of enriched data (if requested) | List[SampleEnrichmentModel] |
| timed_out | Whether the operation timed out | bool |
### Possible use case
<!-- MANUAL: use_case -->
**Sequential Processing**: Wait for enrichments to complete before proceeding to export or analysis.
**Data Validation**: Ensure enrichments finish and review samples before continuing workflow.
**Synchronous Workflows**: Convert async enrichment operations to blocking calls for simpler logic.
<!-- END MANUAL -->
---
## Exa Wait For Search
### What it is
Wait for a specific webset search to complete with progress tracking
### How it works
<!-- MANUAL: how_it_works -->
This block polls a webset search operation until it completes or times out. It provides progress information including items found, items analyzed, and completion percentage.
Use this when you need search results before proceeding with downstream operations like enrichments or exports.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| webset_id | The ID or external ID of the Webset | str | Yes |
| search_id | The ID of the search to monitor | str | Yes |
| timeout | Maximum time to wait in seconds | int | No |
| check_interval | Initial interval between status checks in seconds | int | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| search_id | The search ID that was monitored | str |
| final_status | The final status of the search | str |
| items_found | Number of items found by the search | int |
| items_analyzed | Number of items analyzed | int |
| completion_percentage | Completion percentage (0-100) | int |
| elapsed_time | Total time elapsed in seconds | float |
| recall_info | Information about expected results and confidence | Dict[str, True] |
| timed_out | Whether the operation timed out | bool |
### Possible use case
<!-- MANUAL: use_case -->
**Search Completion**: Wait for initial webset population before accessing items.
**Progress Monitoring**: Track search progress in long-running operations.
**Sequential Workflows**: Ensure searches complete before starting enrichments.
<!-- END MANUAL -->
---
## Exa Wait For Webset
### What it is
Wait for a webset to reach a specific status with progress tracking
### How it works
<!-- MANUAL: how_it_works -->
This block polls a webset until it reaches a target status (idle, completed, or running). It uses exponential backoff for efficient polling and provides detailed progress information.
Use this for general-purpose waiting on webset operations when you don't need to track a specific search or enrichment.
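The waiting behavior can be sketched as a backoff loop. Here get_status stands in for a webset status lookup, and the doubling factor is an assumption about the backoff, not a documented constant.

```python
import time
from typing import Callable, Tuple

def wait_for_status(get_status: Callable[[], str], target: str = "idle",
                    timeout: float = 600.0, check_interval: float = 5.0,
                    max_interval: float = 60.0) -> Tuple[str, bool]:
    """Poll until the target status is reached or the timeout expires.

    Returns (final_status, timed_out).
    """
    start, interval = time.monotonic(), check_interval
    while True:
        status = get_status()
        if status == target:
            return status, False
        if time.monotonic() - start >= timeout:
            return status, True
        time.sleep(interval)
        interval = min(interval * 2, max_interval)  # exponential backoff, capped
```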
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| webset_id | The ID or external ID of the Webset to monitor | str | Yes |
| target_status | Status to wait for (idle=all operations complete, completed=search done, running=actively processing) | "idle" \| "completed" \| "running" | No |
| timeout | Maximum time to wait in seconds | int | No |
| check_interval | Initial interval between status checks in seconds | int | No |
| max_interval | Maximum interval between checks (for exponential backoff) | int | No |
| include_progress | Include detailed progress information in output | bool | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| webset_id | The webset ID that was monitored | str |
| final_status | The final status of the webset | str |
| elapsed_time | Total time elapsed in seconds | float |
| item_count | Number of items found | int |
| search_progress | Detailed search progress information | Dict[str, True] |
| enrichment_progress | Detailed enrichment progress information | Dict[str, True] |
| timed_out | Whether the operation timed out | bool |
### Possible use case
<!-- MANUAL: use_case -->
**Workflow Orchestration**: Wait for all webset operations to complete before next workflow steps.
**Idle State Detection**: Ensure webset is fully idle before making configuration changes.
**Completion Gates**: Block workflow until webset reaches a specific readiness state.
<!-- END MANUAL -->
---


@@ -0,0 +1,182 @@
# Exa Cancel Webset Search
### What it is
Cancel a running webset search
### How it works
<!-- MANUAL: how_it_works -->
This block stops a running search operation on a webset. Items already found before cancellation are retained in the webset.
Use this when a search is taking too long, returning unexpected results, or is no longer needed. The block returns the number of items found before cancellation.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| webset_id | The ID or external ID of the Webset | str | Yes |
| search_id | The ID of the search to cancel | str | Yes |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| search_id | The ID of the canceled search | str |
| status | Status after cancellation | str |
| items_found_before_cancel | Number of items found before cancellation | int |
| success | Whether the cancellation was successful | str |
### Possible use case
<!-- MANUAL: use_case -->
**Resource Control**: Stop searches that are taking longer than expected.
**Query Refinement**: Cancel searches to adjust query and restart with better parameters.
**Partial Results**: Stop searches early when you have enough items for your needs.
<!-- END MANUAL -->
---
## Exa Create Webset Search
### What it is
Add a new search to an existing webset to find more items
### How it works
<!-- MANUAL: how_it_works -->
This block adds a new search to an existing webset to expand its contents. You define the search query, target count, and how results should integrate with existing items (append, override, or merge).
Searches support scoped and exclusion sources, criteria validation, and relationship-based "hop" searches to find related entities.
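For example, an expansion search that appends results while excluding an existing import might use inputs along these lines; the IDs and values are placeholders, and field names follow the Inputs table below.

```python
# Illustrative inputs for an append-mode search with an exclusion source.
search_inputs = {
    "webset_id": "ws_example",                        # placeholder webset ID
    "query": "B2B payments startups headquartered in Europe",
    "count": 50,
    "entity_type": "company",
    "behavior": "append",                             # keep existing items, add new ones
    "exclude_source_ids": ["import_existing_leads"],  # placeholder import ID
    "exclude_source_types": ["import"],
    "wait_for_completion": True,
}
```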
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| webset_id | The ID or external ID of the Webset | str | Yes |
| query | Search query describing what to find | str | Yes |
| count | Number of items to find | int | No |
| entity_type | Type of entity to search for | "company" \| "person" \| "article" | No |
| entity_description | Description for custom entity type | str | No |
| criteria | List of criteria that items must meet. If not provided, auto-detected from query. | List[str] | No |
| behavior | How new results interact with existing items | "override" \| "append" \| "merge" | No |
| recall | Enable recall estimation for expected results | bool | No |
| exclude_source_ids | IDs of imports/websets to exclude from results | List[str] | No |
| exclude_source_types | Types of sources to exclude ('import' or 'webset') | List[str] | No |
| scope_source_ids | IDs of imports/websets to limit search scope to | List[str] | No |
| scope_source_types | Types of scope sources ('import' or 'webset') | List[str] | No |
| scope_relationships | Relationship definitions for hop searches | List[str] | No |
| scope_relationship_limits | Limits on related entities to find | List[int] | No |
| metadata | Metadata to attach to the search | Dict[str, True] | No |
| wait_for_completion | Wait for the search to complete before returning | bool | No |
| polling_timeout | Maximum time to wait for completion in seconds | int | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| search_id | The unique identifier for the created search | str |
| webset_id | The webset this search belongs to | str |
| status | Current status of the search | str |
| query | The search query | str |
| expected_results | Recall estimation of expected results | Dict[str, True] |
| items_found | Number of items found (if wait_for_completion was True) | int |
| completion_time | Time taken to complete in seconds (if wait_for_completion was True) | float |
### Possible use case
<!-- MANUAL: use_case -->
**Webset Expansion**: Add more items to existing websets with new or refined queries.
**Multi-Criteria Collection**: Run multiple searches with different criteria to build comprehensive datasets.
**Iterative Building**: Progressively expand websets based on analysis of initial results.
<!-- END MANUAL -->
---
## Exa Find Or Create Search
### What it is
Find existing search by query or create new - prevents duplicate searches in workflows
### How it works
<!-- MANUAL: how_it_works -->
This block implements idempotent search creation. If a search with the same query already exists in the webset, it returns that search. Otherwise, it creates a new one.
Use this pattern to prevent duplicate searches when workflows retry or run multiple times with the same parameters.
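The idempotency boils down to a find-or-create pattern. In the sketch below, list_searches and create_search are stand-ins for the underlying lookups, and exact query-string matching is an assumption about how duplicates are detected.

```python
from typing import Callable, List, Tuple

def find_or_create(query: str,
                   list_searches: Callable[[], List[dict]],
                   create_search: Callable[[str], dict]) -> Tuple[dict, bool]:
    """Reuse an existing search with the same query, otherwise create one.

    Returns (search, was_created), mirroring the was_created output below.
    """
    for search in list_searches():
        if search.get("query") == query:
            return search, False
    return create_search(query), True
```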
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| webset_id | The ID or external ID of the Webset | str | Yes |
| query | Search query to find or create | str | Yes |
| count | Number of items to find (only used if creating new search) | int | No |
| entity_type | Entity type (only used if creating) | "company" \| "person" \| "article" | No |
| behavior | Search behavior (only used if creating) | "override" \| "append" \| "merge" | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| search_id | The search ID (existing or new) | str |
| webset_id | The webset ID | str |
| status | Current search status | str |
| query | The search query | str |
| was_created | True if search was newly created, False if already existed | bool |
| items_found | Number of items found (0 if still running) | int |
### Possible use case
<!-- MANUAL: use_case -->
**Retry-Safe Workflows**: Safely handle workflow retries without creating duplicate searches.
**Deduplication**: Avoid running the same search multiple times when called from different workflow branches.
**Efficient Operations**: Skip search creation when results from identical queries already exist.
<!-- END MANUAL -->
---
## Exa Get Webset Search
### What it is
Get the status and details of a webset search
### How it works
<!-- MANUAL: how_it_works -->
This block retrieves detailed information about a webset search including its query, criteria, progress, and recall estimation.
Use this to monitor search progress, verify search configuration, or investigate search behavior when results don't match expectations.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| webset_id | The ID or external ID of the Webset | str | Yes |
| search_id | The ID of the search to retrieve | str | Yes |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| search_id | The unique identifier for the search | str |
| status | Current status of the search | str |
| query | The search query | str |
| entity_type | Type of entity being searched | str |
| criteria | Criteria used for verification | List[Dict[str, True]] |
| progress | Search progress information | Dict[str, True] |
| recall | Recall estimation information | Dict[str, True] |
| created_at | When the search was created | str |
| updated_at | When the search was last updated | str |
| canceled_at | When the search was canceled (if applicable) | str |
| canceled_reason | Reason for cancellation (if applicable) | str |
| metadata | Metadata attached to the search | Dict[str, True] |
### Possible use case
<!-- MANUAL: use_case -->
**Progress Tracking**: Monitor search completion and items found during long-running operations.
**Configuration Review**: Retrieve search details to verify criteria and settings are correct.
**Debugging**: Investigate search configuration when results don't match expectations.
<!-- END MANUAL -->
---


@@ -0,0 +1,35 @@
# AI Video Generator
### What it is
Generate videos using FAL AI models.
### How it works
<!-- MANUAL: how_it_works -->
This block generates videos from text prompts using FAL.ai's video generation models including Mochi, Luma Dream Machine, and Veo3. Describe the video you want to create, and the AI generates it.
The generated video URL is returned along with progress logs for monitoring longer generation jobs.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| prompt | Description of the video to generate. | str | Yes |
| model | The FAL model to use for video generation. | "fal-ai/mochi-v1" \| "fal-ai/luma-dream-machine" \| "fal-ai/veo3" | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if video generation failed. | str |
| video_url | The URL of the generated video. | str |
| logs | Generation progress logs. | List[str] |
### Possible use case
<!-- MANUAL: use_case -->
**Content Creation**: Generate video clips for social media, ads, or creative projects.
**Visualization**: Create visual representations of concepts, products, or stories.
**Prototyping**: Generate video mockups for creative ideation and storyboarding.
<!-- END MANUAL -->
---


@@ -0,0 +1,46 @@
# Firecrawl Crawl
### What it is
Firecrawl crawls websites to extract comprehensive data while bypassing blockers.
### How it works
<!-- MANUAL: how_it_works -->
This block uses Firecrawl's API to crawl multiple pages of a website starting from a given URL. It navigates through links, handling JavaScript rendering and bypassing anti-bot measures to extract clean content from each page.
Set the number of pages to crawl with the limit parameter, choose output formats (markdown, HTML, or raw HTML), and optionally filter to main content only. The block can reuse cached pages up to a configurable max age and wait a specified time for dynamic content to load.
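An illustrative configuration (the values are examples, not the block's defaults):

```python
# Example crawl settings for a documentation site.
crawl_inputs = {
    "url": "https://docs.example.com",
    "limit": 25,                      # crawl at most 25 pages
    "only_main_content": True,        # strip headers, navs, and footers
    "formats": ["markdown", "html"],
    "max_age": 3_600_000,             # accept cached pages up to 1 hour old (ms)
    "wait_for": 2_000,                # wait 2 seconds for dynamic content (ms)
}
```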
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| url | The URL to crawl | str | Yes |
| limit | The number of pages to crawl | int | No |
| only_main_content | Only return the main content of the page excluding headers, navs, footers, etc. | bool | No |
| max_age | The maximum age of the page in milliseconds - default is 1 hour | int | No |
| wait_for | Specify a delay in milliseconds before fetching the content, allowing the page sufficient time to load. | int | No |
| formats | The formats to return for each crawled page | List["markdown" \| "html" \| "rawHtml"] | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the crawl failed | str |
| data | The result of the crawl | List[Dict[str, True]] |
| markdown | The markdown of the crawl | str |
| html | The html of the crawl | str |
| raw_html | The raw html of the crawl | str |
| links | The links of the crawl | List[str] |
| screenshot | The screenshot of the crawl | str |
| screenshot_full_page | The screenshot full page of the crawl | str |
| json_data | The json data of the crawl | Dict[str, True] |
| change_tracking | The change tracking of the crawl | Dict[str, True] |
### Possible use case
<!-- MANUAL: use_case -->
**Documentation Indexing**: Crawl entire documentation sites to build searchable knowledge bases or training data.
**Competitor Research**: Extract content from competitor websites for market analysis and comparison.
**Content Archival**: Systematically archive website content for backup or compliance purposes.
<!-- END MANUAL -->
---

View File

@@ -0,0 +1,36 @@
# Firecrawl Extract
### What it is
Firecrawl crawls websites to extract comprehensive data while bypassing blockers.
### How it works
<!-- MANUAL: how_it_works -->
This block uses Firecrawl's extraction API to pull structured data from web pages based on a prompt or schema. It crawls the specified URLs and uses AI to extract information matching your requirements.
Define the data structure you want using a JSON schema for precise extraction, or use natural language prompts for flexible extraction. Wildcards in URLs allow extracting data from multiple pages matching a pattern.
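As a rough illustration of the schema-driven mode, the sketch below sends a JSON Schema and a prompt to Firecrawl's extract endpoint. The endpoint path, payload keys, and response handling are assumptions based on Firecrawl's public API, and all URLs and keys are placeholders.
```python
# Simplified sketch: schema-based extraction across pages matching a wildcard.
import requests

schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "price": {"type": "number"},
        "in_stock": {"type": "boolean"},
    },
    "required": ["name", "price"],
}

resp = requests.post(
    "https://api.firecrawl.dev/v1/extract",
    headers={"Authorization": "Bearer fc-..."},  # placeholder API key
    json={
        "urls": ["https://shop.example.com/products/*"],  # wildcard pattern
        "prompt": "Extract the product name, price, and stock status.",
        "schema": schema,
    },
).json()

# Depending on the API version, results come back directly or as a job to poll.
print(resp)
```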
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| urls | The URLs to crawl - at least one is required. Wildcards are supported. (/*) | List[str] | Yes |
| prompt | The prompt to use for the crawl | str | No |
| output_schema | A Json Schema describing the output structure if more rigid structure is desired. | Dict[str, True] | No |
| enable_web_search | When true, extraction can follow links outside the specified domain. | bool | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the extraction failed | str |
| data | The result of the crawl | Dict[str, True] |
### Possible use case
<!-- MANUAL: use_case -->
**Product Data Extraction**: Extract structured product information (prices, specs, reviews) from e-commerce sites.
**Contact Scraping**: Pull business contact information from company websites in a structured format.
**Data Pipeline Input**: Automatically extract and structure web data for analysis or database population.
<!-- END MANUAL -->
---

View File

@@ -0,0 +1,34 @@
# Firecrawl Map Website
### What it is
Firecrawl maps a website to extract all the links.
### How it works
<!-- MANUAL: how_it_works -->
This block uses Firecrawl's mapping API to discover all links on a website without extracting full content. It quickly scans the site structure and returns a comprehensive list of URLs found.
The block is useful for understanding site architecture before performing targeted scraping or for building site maps. Results include both the raw list of links and structured results with titles and descriptions.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| url | The website url to map | str | Yes |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the map failed | str |
| links | List of URLs found on the website | List[str] |
| results | List of search results with url, title, and description | List[Dict[str, True]] |
### Possible use case
<!-- MANUAL: use_case -->
**Site Audit**: Map all pages on a website to identify broken links, orphan pages, or SEO issues.
**Crawl Planning**: Discover site structure before deciding which pages to scrape in detail.
**Content Discovery**: Find all blog posts, product pages, or documentation entries on a site.
<!-- END MANUAL -->
---

View File

@@ -0,0 +1,46 @@
# Firecrawl Scrape
### What it is
Firecrawl scrapes a website to extract comprehensive data while bypassing blockers.
### How it works
<!-- MANUAL: how_it_works -->
This block uses Firecrawl's scraping API to extract content from a single URL. It handles JavaScript rendering, bypasses anti-bot measures, and can return content in multiple formats including markdown, HTML, and screenshots.
Configure output formats, filter to main content only, and set wait times for dynamic pages. The block returns comprehensive results including extracted content, links found on the page, and optional change tracking data.
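The sketch below shows a comparable single-page scrape against Firecrawl's public v1 API; the payload keys mirror this block's inputs but are assumptions, and the URL and key are placeholders.
```python
# Simplified sketch: scrape one page as markdown plus its links.
import requests

resp = requests.post(
    "https://api.firecrawl.dev/v1/scrape",
    headers={"Authorization": "Bearer fc-..."},  # placeholder API key
    json={
        "url": "https://example.com/pricing",
        "formats": ["markdown", "links"],
        "onlyMainContent": True,
        "waitFor": 2000,  # ms to let dynamic content load
    },
).json()

data = resp.get("data", {})
print(data.get("markdown", "")[:500])
print(data.get("links", []))
```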
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| url | The URL to crawl | str | Yes |
| limit | The number of pages to crawl | int | No |
| only_main_content | Only return the main content of the page excluding headers, navs, footers, etc. | bool | No |
| max_age | The maximum age of the page in milliseconds - default is 1 hour | int | No |
| wait_for | Specify a delay in milliseconds before fetching the content, allowing the page sufficient time to load. | int | No |
| formats | The format of the crawl | List["markdown" \| "html" \| "rawHtml"] | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the scrape failed | str |
| data | The result of the crawl | Dict[str, True] |
| markdown | The markdown of the crawl | str |
| html | The html of the crawl | str |
| raw_html | The raw html of the crawl | str |
| links | The links of the crawl | List[str] |
| screenshot | The screenshot of the crawl | str |
| screenshot_full_page | The screenshot full page of the crawl | str |
| json_data | The json data of the crawl | Dict[str, True] |
| change_tracking | The change tracking of the crawl | Dict[str, True] |
### Possible use case
<!-- MANUAL: use_case -->
**Article Extraction**: Scrape news articles or blog posts to extract clean, readable content.
**Price Monitoring**: Regularly scrape product pages to track price changes over time.
**Content Backup**: Create markdown backups of important web pages for offline reference.
<!-- END MANUAL -->
---

View File

@@ -0,0 +1,38 @@
# Firecrawl Search
### What it is
Firecrawl searches the web for the given query.
### How it works
<!-- MANUAL: how_it_works -->
This block uses Firecrawl's search API to find web pages matching your query and optionally extract their content. It performs a web search and can return results with full page content in your chosen format.
Configure the number of results to return, output formats (markdown, HTML, raw HTML), and caching behavior. The wait_for parameter allows time for JavaScript-heavy pages to fully render before extraction.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| query | The query to search for | str | Yes |
| limit | The number of pages to crawl | int | No |
| max_age | The maximum age of the page in milliseconds - default is 1 hour | int | No |
| wait_for | Specify a delay in milliseconds before fetching the content, allowing the page sufficient time to load. | int | No |
| formats | Returns the content of the search if specified | List["markdown" \| "html" \| "rawHtml"] | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the search failed | str |
| data | The result of the search | Dict[str, True] |
| site | The site of the search | Dict[str, True] |
### Possible use case
<!-- MANUAL: use_case -->
**Research Automation**: Search for topics and automatically extract content from relevant pages for analysis.
**Lead Generation**: Find companies or contacts matching specific criteria across the web.
**Content Aggregation**: Gather articles, reviews, or information on specific topics from multiple sources.
<!-- END MANUAL -->
---

View File

@@ -0,0 +1,34 @@
# Generic Webhook Trigger
### What it is
This block will output the contents of the generic input for the webhook.
### How it works
<!-- MANUAL: how_it_works -->
This block creates a webhook endpoint that receives and outputs any incoming HTTP payload. When external services send data to this webhook URL, the block triggers and outputs the complete payload as a dictionary.
Constants can be configured to pass additional static values alongside the dynamic webhook data.
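To exercise a graph that starts with this trigger, any HTTP client can POST JSON to the webhook URL shown on the node. A minimal sketch follows; the URL and payload are placeholders.
```python
# Sketch: send a test payload to the webhook URL shown on the block.
import requests

webhook_url = "https://your-platform-host/api/webhooks/<webhook-id>"  # placeholder
payload = {"event": "order.created", "order_id": 1234, "total": 59.90}

resp = requests.post(webhook_url, json=payload, timeout=10)
print(resp.status_code)
```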
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| constants | The constants to be set when the block is put on the graph | Dict[str, True] | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| payload | The complete webhook payload that was received from the generic webhook. | Dict[str, True] |
| constants | The constants to be set when the block is put on the graph | Dict[str, True] |
### Possible use case
<!-- MANUAL: use_case -->
**External Integrations**: Receive data from any third-party service that supports webhooks.
**Custom Triggers**: Create custom workflow triggers from external systems or internal tools.
**Event Processing**: Capture and process events from IoT devices, payment processors, or notification services.
<!-- END MANUAL -->
---

View File

@@ -0,0 +1,81 @@
# Github Create Check Run
### What it is
Creates a new check run for a specific commit in a GitHub repository
### How it works
<!-- MANUAL: how_it_works -->
This block creates a new check run associated with a specific commit using the GitHub Checks API. Check runs represent individual test suites, linting tools, or other CI processes that report status against commits or pull requests.
You specify the commit SHA, check name, and current status. For completed checks, provide a conclusion (success, failure, or neutral) and optional detailed output including title, summary, and extended text for rich reporting in the GitHub UI.
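For reference, the underlying REST call looks roughly like the sketch below. It is illustrative only: the owner, repo, SHA, and token are placeholders, and the Checks API generally requires GitHub App credentials rather than a personal token.
```python
# Sketch: create a completed check run via the GitHub Checks API.
import requests

resp = requests.post(
    "https://api.github.com/repos/OWNER/REPO/check-runs",
    headers={
        "Authorization": "Bearer <token>",
        "Accept": "application/vnd.github+json",
    },
    json={
        "name": "code-coverage",
        "head_sha": "abc123def456",
        "status": "completed",
        "conclusion": "success",
        "output": {
            "title": "Coverage report",
            "summary": "92% line coverage",
            "text": "Per-module coverage details go here.",
        },
    },
).json()
print(resp.get("id"), resp.get("html_url"))
```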
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| repo_url | URL of the GitHub repository | str | Yes |
| name | The name of the check run (e.g., 'code-coverage') | str | Yes |
| head_sha | The SHA of the commit to check | str | Yes |
| status | Current status of the check run | "queued" \| "in_progress" \| "completed" | No |
| conclusion | The final conclusion of the check (required if status is completed) | "success" \| "failure" \| "neutral" | No |
| details_url | The URL for the full details of the check | str | No |
| output_title | Title of the check run output | str | No |
| output_summary | Summary of the check run output | str | No |
| output_text | Detailed text of the check run output | str | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if check run creation failed | str |
| check_run | Details of the created check run | CheckRunResult |
### Possible use case
<!-- MANUAL: use_case -->
**Custom CI Integration**: Create check runs for external CI systems that aren't natively integrated with GitHub.
**Code Quality Reporting**: Report linting, security scan, or test coverage results directly on commits and PRs.
**Deployment Status**: Track deployment progress by creating check runs that show pending, in-progress, and completed states.
<!-- END MANUAL -->
---
## Github Update Check Run
### What it is
Updates an existing check run in a GitHub repository
### How it works
<!-- MANUAL: how_it_works -->
This block updates an existing check run's status, conclusion, and output details via the GitHub Checks API. Use it to report progress as your CI process advances through different stages.
You can update the status from queued to in_progress to completed, and set the final conclusion when done. The output fields allow you to provide detailed results, annotations, and summaries visible in the GitHub UI.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| repo_url | URL of the GitHub repository | str | Yes |
| check_run_id | The ID of the check run to update | int | Yes |
| status | New status of the check run | "queued" \| "in_progress" \| "completed" | Yes |
| conclusion | The final conclusion of the check (required if status is completed) | "success" \| "failure" \| "neutral" | Yes |
| output_title | New title of the check run output | str | No |
| output_summary | New summary of the check run output | str | No |
| output_text | New detailed text of the check run output | str | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| check_run | Details of the updated check run | CheckRunResult |
### Possible use case
<!-- MANUAL: use_case -->
**Progress Reporting**: Update check runs as your CI pipeline progresses through build, test, and deployment stages.
**Real-Time Feedback**: Provide immediate feedback on pull requests as tests complete, rather than waiting for the entire suite.
**Failure Details**: Update check runs with detailed error messages and output when tests fail.
<!-- END MANUAL -->
---

View File

@@ -0,0 +1,44 @@
# Github Get CI Results
### What it is
This block gets CI results for a commit or PR, with optional search for specific errors/warnings in logs.
### How it works
<!-- MANUAL: how_it_works -->
This block retrieves CI check results for a specific commit or pull request using the GitHub Checks API. It aggregates results from all CI checks, providing an overall status summary along with individual check details.
Optionally search through CI logs using regex patterns to find specific errors or warnings. You can filter by check name to focus on particular CI jobs. The block returns comprehensive results including pass/fail counts and matched log lines.
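Conceptually, the aggregation resembles the simplified sketch below: it lists check runs for a commit and greps their output summaries, whereas the block can also search full job logs. Owner, repo, SHA, and token are placeholders.
```python
# Simplified sketch: aggregate check runs for a commit and grep their summaries.
import re
import requests

headers = {"Authorization": "Bearer <token>", "Accept": "application/vnd.github+json"}
sha = "abc123def456"  # placeholder commit SHA

runs = requests.get(
    f"https://api.github.com/repos/OWNER/REPO/commits/{sha}/check-runs",
    headers=headers,
).json()["check_runs"]

passed = sum(1 for r in runs if r.get("conclusion") == "success")
failed = sum(1 for r in runs if r.get("conclusion") == "failure")
print(f"{passed} passed, {failed} failed out of {len(runs)} checks")

pattern = re.compile(r"error|warning", re.IGNORECASE)
for r in runs:
    summary = (r.get("output") or {}).get("summary") or ""
    for line in summary.splitlines():
        if pattern.search(line):
            print(f"[{r['name']}] {line}")
```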
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| repo | GitHub repository | str | Yes |
| target | Commit SHA or PR number to get CI results for | str \| int | Yes |
| search_pattern | Optional regex pattern to search for in CI logs (e.g., error messages, file names) | str | No |
| check_name_filter | Optional filter for specific check names (supports wildcards) | str | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| check_run | Individual CI check run with details | Check Run |
| check_runs | List of all CI check runs | List[CheckRunItem] |
| matched_line | Line matching the search pattern with context | Matched Line |
| matched_lines | All lines matching the search pattern across all checks | List[MatchedLine] |
| overall_status | Overall CI status (pending, success, failure) | str |
| overall_conclusion | Overall CI conclusion if completed | str |
| total_checks | Total number of CI checks | int |
| passed_checks | Number of passed checks | int |
| failed_checks | Number of failed checks | int |
### Possible use case
<!-- MANUAL: use_case -->
**CI Status Monitoring**: Check the overall CI status of commits or PRs before merging or deploying.
**Error Diagnosis**: Search CI logs for specific error patterns to quickly identify why builds are failing.
**Automated PR Validation**: Verify all required checks pass before automatically proceeding with merge or deployment workflows.
<!-- END MANUAL -->
---

View File

@@ -1,152 +1,213 @@
# GitHub Issues
# Github Add Label
### What it is
A block that adds a label to a GitHub issue or pull request for categorization and organization.
### How it works
<!-- MANUAL: how_it_works -->
The block takes the GitHub credentials, the URL of the issue or pull request, and the label to be added as inputs. It then sends a request to the GitHub API to add the label to the specified issue or pull request.
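The underlying REST call is roughly the following sketch (owner, repo, issue number, label, and token are placeholders):
```python
# Sketch: add a label to issue #42 via the GitHub REST API.
import requests

resp = requests.post(
    "https://api.github.com/repos/OWNER/REPO/issues/42/labels",
    headers={"Authorization": "Bearer <token>", "Accept": "application/vnd.github+json"},
    json={"labels": ["needs-triage"]},
)
print(resp.status_code)  # 200 with the issue's updated label list on success
```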
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| issue_url | URL of the GitHub issue or pull request | str | Yes |
| label | Label to add to the issue or pull request | str | Yes |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the label addition failed | str |
| status | Status of the label addition operation | str |
### Possible use case
<!-- MANUAL: use_case -->
Automatically categorizing issues based on their content or assigning priority labels to newly created issues.
<!-- END MANUAL -->
---
## Github Assign Issue
### What it is
A block that assigns a GitHub user to an issue for task ownership and tracking.
### How it works
<!-- MANUAL: how_it_works -->
The block takes the GitHub credentials, the URL of the issue, and the username of the person to be assigned as inputs. It then sends a request to the GitHub API to assign the specified user to the issue.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| issue_url | URL of the GitHub issue | str | Yes |
| assignee | Username to assign to the issue | str | Yes |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the issue assignment failed | str |
| status | Status of the issue assignment operation | str |
### Possible use case
<!-- MANUAL: use_case -->
Automatically assigning new issues to team members based on their expertise or workload.
<!-- END MANUAL -->
---
## Github Comment
### What it is
A block that posts comments on GitHub issues or pull requests.
### What it does
This block allows users to add comments to existing GitHub issues or pull requests using the GitHub API.
A block that posts comments on GitHub issues or pull requests using the GitHub API.
### How it works
<!-- MANUAL: how_it_works -->
The block takes the GitHub credentials, the URL of the issue or pull request, and the comment text as inputs. It then sends a request to the GitHub API to post the comment on the specified issue or pull request.
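A rough standalone equivalent of that request is sketched below (owner, repo, issue number, and token are placeholders; pull requests use the same issues endpoint for comments):
```python
# Sketch: post a comment on issue or PR #42 via the GitHub REST API.
import requests

resp = requests.post(
    "https://api.github.com/repos/OWNER/REPO/issues/42/comments",
    headers={"Authorization": "Bearer <token>", "Accept": "application/vnd.github+json"},
    json={"body": "Thanks for the report! We're looking into it."},
).json()
print(resp["id"], resp["html_url"])  # roughly what the block's id and url outputs map to
```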
<!-- END MANUAL -->
### Inputs
| Input | Description |
|-------|-------------|
| Credentials | GitHub authentication information |
| Issue URL | The URL of the GitHub issue or pull request where the comment will be posted |
| Comment | The text content of the comment to be posted |
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| issue_url | URL of the GitHub issue or pull request | str | Yes |
| comment | Comment to post on the issue or pull request | str | Yes |
### Outputs
| Output | Description |
|--------|-------------|
| ID | The unique identifier of the created comment |
| URL | The direct link to the posted comment on GitHub |
| Error | Any error message if the comment posting fails |
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the comment posting failed | str |
| id | ID of the created comment | int |
| url | URL to the comment on GitHub | str |
### Possible use case
<!-- MANUAL: use_case -->
Automating responses to issues in a GitHub repository, such as thanking contributors for their submissions or providing status updates on reported bugs.
<!-- END MANUAL -->
---
## Github Make Issue
## Github List Comments
### What it is
A block that creates new issues on GitHub repositories.
### What it does
This block allows users to create new issues in a specified GitHub repository with a title and body content.
A block that retrieves all comments from a GitHub issue or pull request, including comment metadata and content.
### How it works
The block takes the GitHub credentials, repository URL, issue title, and issue body as inputs. It then sends a request to the GitHub API to create a new issue with the provided information.
<!-- MANUAL: how_it_works -->
This block retrieves all comments from a GitHub issue or pull request via the GitHub API. It authenticates using your GitHub credentials and fetches the complete comment history, returning both individual comments and a list of all comments with their metadata.
Each comment includes the comment ID, body text, author username, and a direct URL to the comment on GitHub.
<!-- END MANUAL -->
### Inputs
| Input | Description |
|-------|-------------|
| Credentials | GitHub authentication information |
| Repo URL | The URL of the GitHub repository where the issue will be created |
| Title | The title of the new issue |
| Body | The main content or description of the new issue |
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| issue_url | URL of the GitHub issue or pull request | str | Yes |
### Outputs
| Output | Description |
|--------|-------------|
| Number | The issue number assigned by GitHub |
| URL | The direct link to the newly created issue on GitHub |
| Error | Any error message if the issue creation fails |
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| comment | Comments with their ID, body, user, and URL | Comment |
| comments | List of comments with their ID, body, user, and URL | List[CommentItem] |
### Possible use case
Automatically creating issues for bug reports or feature requests submitted through an external system or form.
<!-- MANUAL: use_case -->
**Conversation Analysis**: Extract all comments from an issue to analyze the discussion or generate a summary of the conversation.
---
**Comment Monitoring**: Track all responses on specific issues to monitor team communication or customer feedback.
## Github Read Issue
### What it is
A block that retrieves information about a specific GitHub issue.
### What it does
This block fetches the details of a given GitHub issue, including its title, body content, and the user who created it.
### How it works
The block takes the GitHub credentials and the issue URL as inputs. It then sends a request to the GitHub API to fetch the issue's details and returns the relevant information.
### Inputs
| Input | Description |
|-------|-------------|
| Credentials | GitHub authentication information |
| Issue URL | The URL of the GitHub issue to be read |
### Outputs
| Output | Description |
|--------|-------------|
| Title | The title of the issue |
| Body | The main content or description of the issue |
| User | The username of the person who created the issue |
| Error | Any error message if reading the issue fails |
### Possible use case
Gathering information about reported issues for analysis or to display on a dashboard.
**Audit Trails**: Collect comment history for compliance or documentation purposes.
<!-- END MANUAL -->
---
## Github List Issues
### What it is
A block that retrieves a list of issues from a GitHub repository.
### What it does
This block fetches all open issues from a specified GitHub repository and provides their titles and URLs.
A block that retrieves a list of issues from a GitHub repository with their titles and URLs.
### How it works
<!-- MANUAL: how_it_works -->
The block takes the GitHub credentials and repository URL as inputs. It then sends a request to the GitHub API to fetch the list of issues and returns their details.
<!-- END MANUAL -->
### Inputs
| Input | Description |
|-------|-------------|
| Credentials | GitHub authentication information |
| Repo URL | The URL of the GitHub repository to list issues from |
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| repo_url | URL of the GitHub repository | str | Yes |
### Outputs
| Output | Description |
|--------|-------------|
| Issue | A list of issues, each containing: |
| - Title | The title of the issue |
| - URL | The direct link to the issue on GitHub |
| Error | Any error message if listing the issues fails |
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| issue | Issues with their title and URL | Issue |
| issues | List of issues with their title and URL | List[IssueItem] |
### Possible use case
<!-- MANUAL: use_case -->
Creating a summary of open issues for a project status report or displaying them on a project management dashboard.
<!-- END MANUAL -->
---
## Github Add Label
## Github Make Issue
### What it is
A block that adds a label to a GitHub issue or pull request.
### What it does
This block allows users to add a specified label to an existing GitHub issue or pull request.
A block that creates new issues on GitHub repositories with a title and body content.
### How it works
The block takes the GitHub credentials, the URL of the issue or pull request, and the label to be added as inputs. It then sends a request to the GitHub API to add the label to the specified issue or pull request.
<!-- MANUAL: how_it_works -->
The block takes the GitHub credentials, repository URL, issue title, and issue body as inputs. It then sends a request to the GitHub API to create a new issue with the provided information.
<!-- END MANUAL -->
### Inputs
| Input | Description |
|-------|-------------|
| Credentials | GitHub authentication information |
| Issue URL | The URL of the GitHub issue or pull request to add the label to |
| Label | The name of the label to be added |
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| repo_url | URL of the GitHub repository | str | Yes |
| title | Title of the issue | str | Yes |
| body | Body of the issue | str | Yes |
### Outputs
| Output | Description |
|--------|-------------|
| Status | A message indicating whether the label was successfully added |
| Error | Any error message if adding the label fails |
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the issue creation failed | str |
| number | Number of the created issue | int |
| url | URL of the created issue | str |
### Possible use case
Automatically categorizing issues based on their content or assigning priority labels to newly created issues.
<!-- MANUAL: use_case -->
Automatically creating issues for bug reports or feature requests submitted through an external system or form.
<!-- END MANUAL -->
---
## Github Read Issue
### What it is
A block that retrieves information about a specific GitHub issue, including its title, body content, and creator.
### How it works
<!-- MANUAL: how_it_works -->
The block takes the GitHub credentials and the issue URL as inputs. It then sends a request to the GitHub API to fetch the issue's details and returns the relevant information.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| issue_url | URL of the GitHub issue | str | Yes |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if reading the issue failed | str |
| title | Title of the issue | str |
| body | Body of the issue | str |
| user | User who created the issue | str |
### Possible use case
<!-- MANUAL: use_case -->
Gathering information about reported issues for analysis or to display on a dashboard.
<!-- END MANUAL -->
---
@@ -155,82 +216,93 @@ Automatically categorizing issues based on their content or assigning priority l
### What it is
A block that removes a label from a GitHub issue or pull request.
### What it does
This block allows users to remove a specified label from an existing GitHub issue or pull request.
### How it works
<!-- MANUAL: how_it_works -->
The block takes the GitHub credentials, the URL of the issue or pull request, and the label to be removed as inputs. It then sends a request to the GitHub API to remove the label from the specified issue or pull request.
<!-- END MANUAL -->
### Inputs
| Input | Description |
|-------|-------------|
| Credentials | GitHub authentication information |
| Issue URL | The URL of the GitHub issue or pull request to remove the label from |
| Label | The name of the label to be removed |
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| issue_url | URL of the GitHub issue or pull request | str | Yes |
| label | Label to remove from the issue or pull request | str | Yes |
### Outputs
| Output | Description |
|--------|-------------|
| Status | A message indicating whether the label was successfully removed |
| Error | Any error message if removing the label fails |
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the label removal failed | str |
| status | Status of the label removal operation | str |
### Possible use case
<!-- MANUAL: use_case -->
Updating the status of issues as they progress through a workflow, such as removing an "In Progress" label when an issue is completed.
---
## Github Assign Issue
### What it is
A block that assigns a user to a GitHub issue.
### What it does
This block allows users to assign a specific GitHub user to an existing issue.
### How it works
The block takes the GitHub credentials, the URL of the issue, and the username of the person to be assigned as inputs. It then sends a request to the GitHub API to assign the specified user to the issue.
### Inputs
| Input | Description |
|-------|-------------|
| Credentials | GitHub authentication information |
| Issue URL | The URL of the GitHub issue to assign |
| Assignee | The username of the person to be assigned to the issue |
### Outputs
| Output | Description |
|--------|-------------|
| Status | A message indicating whether the issue was successfully assigned |
| Error | Any error message if assigning the issue fails |
### Possible use case
Automatically assigning new issues to team members based on their expertise or workload.
<!-- END MANUAL -->
---
## Github Unassign Issue
### What it is
A block that unassigns a user from a GitHub issue.
### What it does
This block allows users to remove a specific GitHub user's assignment from an existing issue.
A block that removes a user's assignment from a GitHub issue.
### How it works
<!-- MANUAL: how_it_works -->
The block takes the GitHub credentials, the URL of the issue, and the username of the person to be unassigned as inputs. It then sends a request to the GitHub API to remove the specified user's assignment from the issue.
<!-- END MANUAL -->
### Inputs
| Input | Description |
|-------|-------------|
| Credentials | GitHub authentication information |
| Issue URL | The URL of the GitHub issue to unassign |
| Assignee | The username of the person to be unassigned from the issue |
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| issue_url | URL of the GitHub issue | str | Yes |
| assignee | Username to unassign from the issue | str | Yes |
### Outputs
| Output | Description |
|--------|-------------|
| Status | A message indicating whether the issue was successfully unassigned |
| Error | Any error message if unassigning the issue fails |
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the issue unassignment failed | str |
| status | Status of the issue unassignment operation | str |
### Possible use case
Automatically unassigning issues that have been inactive for a certain period or when reassigning workload among team members.
<!-- MANUAL: use_case -->
Automatically unassigning issues that have been inactive for a certain period or when reassigning workload among team members.
<!-- END MANUAL -->
---
## Github Update Comment
### What it is
A block that updates an existing comment on a GitHub issue or pull request.
### How it works
<!-- MANUAL: how_it_works -->
This block updates an existing comment on a GitHub issue or pull request. You can identify the comment to update using either the direct comment URL, or a combination of the issue URL and comment ID. The block sends a PATCH request to the GitHub API to replace the comment's content.
The updated comment retains its original author and timestamp context while replacing the body text with your new content.
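A minimal sketch of the underlying PATCH request (comment ID, owner, repo, and token are placeholders):
```python
# Sketch: replace the body of an existing issue/PR comment.
import requests

comment_id = 123456789  # placeholder
resp = requests.patch(
    f"https://api.github.com/repos/OWNER/REPO/issues/comments/{comment_id}",
    headers={"Authorization": "Bearer <token>", "Accept": "application/vnd.github+json"},
    json={"body": "**Status:** deploy finished at 14:32 UTC."},
).json()
print(resp["html_url"])
```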
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| comment_url | URL of the GitHub comment | str | No |
| issue_url | URL of the GitHub issue or pull request | str | No |
| comment_id | ID of the GitHub comment | str | No |
| comment | Comment to update | str | Yes |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the comment update failed | str |
| id | ID of the updated comment | int |
| url | URL to the comment on GitHub | str |
### Possible use case
<!-- MANUAL: use_case -->
**Status Updates**: Modify a pinned status comment to reflect current progress on an issue.
**Bot Maintenance**: Update automated bot comments with new information instead of creating duplicate comments.
**Error Corrections**: Fix typos or incorrect information in previously posted comments.
<!-- END MANUAL -->
---

View File

@@ -1,182 +1,216 @@
# Pull Requests
## GitHub List Pull Requests
# Github Assign PR Reviewer
### What it is
A block that retrieves a list of pull requests from a specified GitHub repository.
### What it does
This block fetches all open pull requests for a given GitHub repository and provides their titles and URLs.
This block assigns a reviewer to a specified GitHub pull request.
### How it works
It connects to the GitHub API using the provided credentials and repository URL, then retrieves the list of pull requests and formats the information for easy viewing.
<!-- MANUAL: how_it_works -->
This block requests a code review from a specific user on a GitHub pull request. It uses the GitHub API to add the specified username to the list of requested reviewers, triggering a notification to that user.
The reviewer must have access to the repository. Organization members can typically be assigned as reviewers on any repository they have at least read access to.
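The underlying request is roughly the sketch below (owner, repo, PR number, username, and token are placeholders):
```python
# Sketch: request a review from a user on PR #17.
import requests

resp = requests.post(
    "https://api.github.com/repos/OWNER/REPO/pulls/17/requested_reviewers",
    headers={"Authorization": "Bearer <token>", "Accept": "application/vnd.github+json"},
    json={"reviewers": ["octocat"]},
)
print(resp.status_code)  # 201 when the review request is created
```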
<!-- END MANUAL -->
### Inputs
| Input | Description |
|-------|-------------|
| Credentials | GitHub authentication details to access the repository |
| Repository URL | The URL of the GitHub repository to fetch pull requests from |
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| pr_url | URL of the GitHub pull request | str | Yes |
| reviewer | Username of the reviewer to assign | str | Yes |
### Outputs
| Output | Description |
|--------|-------------|
| Pull Request | A list of pull requests, each containing: |
| - Title | The title of the pull request |
| - URL | The web address of the pull request |
| Error | An error message if the operation fails |
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the reviewer assignment failed | str |
| status | Status of the reviewer assignment operation | str |
### Possible use case
A development team leader wants to quickly review all open pull requests in their project repository to prioritize code reviews.
<!-- MANUAL: use_case -->
**Automated Code Review Assignment**: Automatically assign reviewers based on the files changed or the PR author.
**Round-Robin Reviews**: Distribute code review load evenly across team members.
**Expertise-Based Routing**: Assign reviewers who are experts in the specific area of code being modified.
<!-- END MANUAL -->
---
## GitHub Make Pull Request
## Github List PR Reviewers
### What it is
A block that creates a new pull request in a specified GitHub repository.
### What it does
This block allows users to create a new pull request by providing details such as title, body, and branch information.
This block lists all reviewers for a specified GitHub pull request.
### How it works
It uses the GitHub API to create a new pull request with the given information, including the source and target branches for the changes.
<!-- MANUAL: how_it_works -->
This block retrieves the list of requested reviewers for a GitHub pull request. It queries the GitHub API to fetch all users who have been requested to review the PR, returning their usernames and profile URLs.
This includes both pending review requests and users who have already submitted reviews.
<!-- END MANUAL -->
### Inputs
| Input | Description |
|-------|-------------|
| Credentials | GitHub authentication details to access the repository |
| Repository URL | The URL of the GitHub repository where the pull request will be created |
| Title | The title of the new pull request |
| Body | The description or content of the pull request |
| Head | The name of the branch containing the changes |
| Base | The name of the branch you want to merge the changes into |
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| pr_url | URL of the GitHub pull request | str | Yes |
### Outputs
| Output | Description |
|--------|-------------|
| Number | The unique identifier of the created pull request |
| URL | The web address of the newly created pull request |
| Error | An error message if the pull request creation fails |
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if listing reviewers failed | str |
| reviewer | Reviewers with their username and profile URL | Reviewer |
| reviewers | List of reviewers with their username and profile URL | List[ReviewerItem] |
### Possible use case
A developer has finished working on a new feature in a separate branch and wants to create a pull request to merge their changes into the main branch for review.
<!-- MANUAL: use_case -->
**Review Status Monitoring**: Check which reviewers have been assigned to a PR and send reminders to those who haven't responded.
**Workflow Validation**: Verify that required reviewers have been assigned before a PR can be merged.
**Team Dashboard**: Display reviewer assignments across multiple PRs for team visibility.
<!-- END MANUAL -->
---
## GitHub Read Pull Request
## Github List Pull Requests
### What it is
A block that retrieves detailed information about a specific GitHub pull request.
### What it does
This block fetches and provides comprehensive information about a given pull request, including its title, body, author, and optionally, the changes made.
This block lists all pull requests for a specified GitHub repository.
### How it works
It connects to the GitHub API using the provided credentials and pull request URL, then retrieves and formats the requested information.
<!-- MANUAL: how_it_works -->
This block fetches all open pull requests from a GitHub repository. It queries the GitHub API and returns a list of PRs with their titles and URLs, outputting both individual PRs and a complete list.
The block returns open pull requests by default, allowing you to monitor pending code changes in a repository.
<!-- END MANUAL -->
### Inputs
| Input | Description |
|-------|-------------|
| Credentials | GitHub authentication details to access the repository |
| Pull Request URL | The URL of the specific GitHub pull request to read |
| Include PR Changes | An option to include the actual changes made in the pull request |
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| repo_url | URL of the GitHub repository | str | Yes |
### Outputs
| Output | Description |
|--------|-------------|
| Title | The title of the pull request |
| Body | The description or content of the pull request |
| Author | The username of the person who created the pull request |
| Changes | A list of changes made in the pull request (if requested) |
| Error | An error message if reading the pull request fails |
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if listing pull requests failed | str |
| pull_request | PRs with their title and URL | Pull Request |
| pull_requests | List of pull requests with their title and URL | List[PRItem] |
### Possible use case
A code reviewer wants to get a comprehensive overview of a pull request, including its description and changes, before starting the review process.
<!-- MANUAL: use_case -->
**PR Dashboard**: Create a dashboard showing all open pull requests across your repositories.
**Merge Queue Monitoring**: Track pending PRs to prioritize code reviews and identify bottlenecks.
**Stale PR Detection**: List PRs to identify those that have been open too long and need attention.
<!-- END MANUAL -->
---
## GitHub Assign PR Reviewer
## Github Make Pull Request
### What it is
A block that assigns a reviewer to a specific GitHub pull request.
### What it does
This block allows users to assign a designated reviewer to a given pull request in a GitHub repository.
This block creates a new pull request on a specified GitHub repository.
### How it works
It uses the GitHub API to add the specified user as a reviewer for the given pull request.
<!-- MANUAL: how_it_works -->
This block creates a new pull request on a GitHub repository. It uses the GitHub API to submit a PR from your source branch (head) to the target branch (base), with the specified title and description.
For cross-repository PRs, format the head branch as "username:branch". The branches must exist and have divergent commits for the PR to be created successfully.
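A minimal standalone sketch of the same request (owner, repo, branch names, and token are placeholders):
```python
# Sketch: open a PR from a feature branch into main.
import requests

resp = requests.post(
    "https://api.github.com/repos/OWNER/REPO/pulls",
    headers={"Authorization": "Bearer <token>", "Accept": "application/vnd.github+json"},
    json={
        "title": "Add retry logic to the export job",
        "body": "Retries transient failures up to 3 times with backoff.",
        "head": "feature/export-retries",  # use "username:branch" for a fork
        "base": "main",
    },
).json()
print(resp["number"], resp["html_url"])
```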
<!-- END MANUAL -->
### Inputs
| Input | Description |
|-------|-------------|
| Credentials | GitHub authentication details to access the repository |
| Pull Request URL | The URL of the specific GitHub pull request to assign a reviewer to |
| Reviewer | The username of the GitHub user to be assigned as a reviewer |
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| repo_url | URL of the GitHub repository | str | Yes |
| title | Title of the pull request | str | Yes |
| body | Body of the pull request | str | Yes |
| head | The name of the branch where your changes are implemented. For cross-repository pull requests in the same network, namespace head with a user like this: username:branch. | str | Yes |
| base | The name of the branch you want the changes pulled into. | str | Yes |
### Outputs
| Output | Description |
|--------|-------------|
| Status | A message indicating whether the reviewer was successfully assigned |
| Error | An error message if the reviewer assignment fails |
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the pull request creation failed | str |
| number | Number of the created pull request | int |
| url | URL of the created pull request | str |
### Possible use case
A project manager wants to assign a specific team member to review a newly created pull request for a critical feature.
<!-- MANUAL: use_case -->
**Automated Releases**: Create PRs automatically when a release branch is ready to merge to main.
**Dependency Updates**: Programmatically create PRs for dependency updates after testing passes.
**Feature Flags**: Automatically create PRs to enable feature flags in configuration files.
<!-- END MANUAL -->
---
## GitHub Unassign PR Reviewer
## Github Read Pull Request
### What it is
A block that removes an assigned reviewer from a specific GitHub pull request.
### What it does
This block allows users to unassign a previously designated reviewer from a given pull request in a GitHub repository.
This block reads the body, title, user, and changes of a specified GitHub pull request.
### How it works
It uses the GitHub API to remove the specified user from the list of reviewers for the given pull request.
<!-- MANUAL: how_it_works -->
This block reads the details of a GitHub pull request including its title, description, author, and optionally the code diff. It fetches this information via the GitHub API using your credentials.
When include_pr_changes is enabled, the block also retrieves the full diff of all changes in the PR, which can be useful for code review automation or analysis.
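For orientation, the sketch below reads PR metadata and then re-requests the same endpoint with the diff media type to get the raw changes (owner, repo, PR number, and token are placeholders):
```python
# Sketch: read PR metadata, then fetch the unified diff as plain text.
import requests

headers = {"Authorization": "Bearer <token>", "Accept": "application/vnd.github+json"}
url = "https://api.github.com/repos/OWNER/REPO/pulls/17"

pr = requests.get(url, headers=headers).json()
print(pr["title"], pr["user"]["login"])

diff = requests.get(
    url, headers={**headers, "Accept": "application/vnd.github.v3.diff"}
).text
print(diff[:1000])
```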
<!-- END MANUAL -->
### Inputs
| Input | Description |
|-------|-------------|
| Credentials | GitHub authentication details to access the repository |
| Pull Request URL | The URL of the specific GitHub pull request to unassign a reviewer from |
| Reviewer | The username of the GitHub user to be unassigned as a reviewer |
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| pr_url | URL of the GitHub pull request | str | Yes |
| include_pr_changes | Whether to include the changes made in the pull request | bool | No |
### Outputs
| Output | Description |
|--------|-------------|
| Status | A message indicating whether the reviewer was successfully unassigned |
| Error | An error message if the reviewer unassignment fails |
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if reading the pull request failed | str |
| title | Title of the pull request | str |
| body | Body of the pull request | str |
| author | User who created the pull request | str |
| changes | Changes made in the pull request | str |
### Possible use case
A team lead realizes that an assigned reviewer is unavailable and wants to remove them from a pull request to reassign it to another team member.
<!-- MANUAL: use_case -->
**Automated Code Review**: Read PR content and changes to perform automated code analysis or send to AI for review.
**Changelog Generation**: Extract PR titles and descriptions to automatically compile release notes.
**PR Summarization**: Read PR details to generate summaries for stakeholder updates.
<!-- END MANUAL -->
---
## GitHub List PR Reviewers
## Github Unassign PR Reviewer
### What it is
A block that retrieves a list of all assigned reviewers for a specific GitHub pull request.
### What it does
This block fetches and provides information about all the reviewers currently assigned to a given pull request in a GitHub repository.
This block unassigns a reviewer from a specified GitHub pull request.
### How it works
It connects to the GitHub API using the provided credentials and pull request URL, then retrieves and formats the list of assigned reviewers.
<!-- MANUAL: how_it_works -->
This block removes a reviewer from a GitHub pull request's review request list. It uses the GitHub API to remove the specified user from pending reviewers, which stops further review notifications to that user.
This is useful for reassigning reviews or removing reviewers who are unavailable.
<!-- END MANUAL -->
### Inputs
| Input | Description |
|-------|-------------|
| Credentials | GitHub authentication details to access the repository |
| Pull Request URL | The URL of the specific GitHub pull request to list reviewers for |
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| pr_url | URL of the GitHub pull request | str | Yes |
| reviewer | Username of the reviewer to unassign | str | Yes |
### Outputs
| Output | Description |
|--------|-------------|
| Reviewer | A list of assigned reviewers, each containing: |
| - Username | The GitHub username of the reviewer |
| - URL | The profile URL of the reviewer |
| Error | An error message if listing the reviewers fails |
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the reviewer unassignment failed | str |
| status | Status of the reviewer unassignment operation | str |
### Possible use case
A project coordinator wants to check who is currently assigned to review a specific pull request to ensure all necessary team members are involved in the code review process.
<!-- MANUAL: use_case -->
**Reviewer Reassignment**: Remove unavailable reviewers and replace them with available team members.
**Load Balancing**: Unassign reviewers who have too many pending reviews.
**Vacation Coverage**: Automatically remove reviewers who are out of office.
<!-- END MANUAL -->
---

View File

@@ -1,234 +1,438 @@
# Repository
## GitHub List Tags
# Github Create File
### What it is
A block that retrieves and lists all tags for a specified GitHub repository.
### What it does
This block fetches all tags associated with a given GitHub repository and provides their names and URLs.
This block creates a new file in a GitHub repository.
### How it works
The block connects to the GitHub API using provided credentials, sends a request to retrieve tag information for the specified repository, and then processes the response to extract tag names and URLs.
<!-- MANUAL: how_it_works -->
This block creates a new file in a GitHub repository using the GitHub Contents API. It commits the file with the specified content to the chosen branch (or the default branch if not specified).
The commit message can be customized, and the block returns the URL of the created file along with the commit SHA for tracking purposes.
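A rough standalone equivalent: the Contents API expects base64-encoded content in a PUT request (owner, repo, path, branch, and token are placeholders):
```python
# Sketch: create a new file on a branch via the GitHub Contents API.
import base64
import requests

content = base64.b64encode(b"# Runbook\n\nSteps go here.\n").decode()

resp = requests.put(
    "https://api.github.com/repos/OWNER/REPO/contents/docs/runbook.md",
    headers={"Authorization": "Bearer <token>", "Accept": "application/vnd.github+json"},
    json={
        "message": "docs: add runbook skeleton",
        "content": content,  # must be base64-encoded
        "branch": "main",
    },
).json()
print(resp["content"]["html_url"], resp["commit"]["sha"])
```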
<!-- END MANUAL -->
### Inputs
| Input | Description |
|-------|-------------|
| Credentials | GitHub authentication credentials required to access the repository |
| Repository URL | The URL of the GitHub repository to fetch tags from |
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| repo_url | URL of the GitHub repository | str | Yes |
| file_path | Path where the file should be created | str | Yes |
| content | Content to write to the file | str | Yes |
| branch | Branch where the file should be created | str | No |
| commit_message | Message for the commit | str | No |
### Outputs
| Output | Description |
|--------|-------------|
| Tag | Information about each tag, including its name and URL |
| Error | Any error message if the tag listing process fails |
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the file creation failed | str |
| url | URL of the created file | str |
| sha | SHA of the commit | str |
### Possible use case
A developer wants to quickly view all release tags for a project to identify the latest version or track the project's release history.
<!-- MANUAL: use_case -->
**Configuration Deployment**: Automatically add configuration files to repositories during project setup.
**Documentation Generation**: Create markdown files or documentation pages programmatically.
**Template Deployment**: Add boilerplate files like LICENSE, .gitignore, or CI configs to repositories.
<!-- END MANUAL -->
---
## GitHub List Branches
## Github Create Repository
### What it is
A block that retrieves and lists all branches for a specified GitHub repository.
### What it does
This block fetches all branches associated with a given GitHub repository and provides their names and URLs.
This block creates a new GitHub repository.
### How it works
The block authenticates with the GitHub API, sends a request to get branch information for the specified repository, and then processes the response to extract branch names and URLs.
<!-- MANUAL: how_it_works -->
This block creates a new GitHub repository under your account using the GitHub API. You can configure visibility (public/private), add a description, and optionally initialize with a README and .gitignore file based on common templates.
The block returns both the web URL for viewing the repository and the clone URL for git operations.
<!-- END MANUAL -->
### Inputs
| Input | Description |
|-------|-------------|
| Credentials | GitHub authentication credentials required to access the repository |
| Repository URL | The URL of the GitHub repository to fetch branches from |
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| name | Name of the repository to create | str | Yes |
| description | Description of the repository | str | No |
| private | Whether the repository should be private | bool | No |
| auto_init | Whether to initialize the repository with a README | bool | No |
| gitignore_template | Git ignore template to use (e.g., Python, Node, Java) | str | No |
### Outputs
| Output | Description |
|--------|-------------|
| Branch | Information about each branch, including its name and URL |
| Error | Any error message if the branch listing process fails |
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the repository creation failed | str |
| url | URL of the created repository | str |
| clone_url | Git clone URL of the repository | str |
### Possible use case
A project manager wants to review all active branches in a repository to track ongoing development efforts and feature implementations.
<!-- MANUAL: use_case -->
**Project Bootstrapping**: Automatically create repositories with standard configuration when starting new projects.
**Template Deployment**: Create pre-configured repositories from templates for team members.
**Automated Workflows**: Generate repositories programmatically as part of onboarding or project management workflows.
<!-- END MANUAL -->
---
## GitHub List Discussions
## Github Delete Branch
### What it is
A block that retrieves and lists recent discussions for a specified GitHub repository.
### What it does
This block fetches a specified number of recent discussions from a given GitHub repository and provides their titles and URLs.
This block deletes a specified branch.
### How it works
The block uses the GitHub GraphQL API to request discussion data for the specified repository, processes the response, and extracts discussion titles and URLs.
<!-- MANUAL: how_it_works -->
This block deletes a specified branch from a GitHub repository using the GitHub References API. The branch is permanently removed, so use with caution—this cannot be undone without re-pushing the branch.
Protected branches cannot be deleted unless protection rules are first removed.
<!-- END MANUAL -->
### Inputs
| Input | Description |
|-------|-------------|
| Credentials | GitHub authentication credentials required to access the repository |
| Repository URL | The URL of the GitHub repository to fetch discussions from |
| Number of Discussions | The number of recent discussions to retrieve (default is 5) |
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| repo_url | URL of the GitHub repository | str | Yes |
| branch | Name of the branch to delete | str | Yes |
### Outputs
| Output | Description |
|--------|-------------|
| Discussion | Information about each discussion, including its title and URL |
| Error | Any error message if the discussion listing process fails |
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the branch deletion failed | str |
| status | Status of the branch deletion operation | str |
### Possible use case
A community manager wants to monitor recent discussions in a project's repository to identify trending topics or issues that need attention.
<!-- MANUAL: use_case -->
**Post-Merge Cleanup**: Automatically delete feature branches after they've been merged.
**Stale Branch Management**: Clean up old or abandoned branches to keep the repository tidy.
**CI/CD Automation**: Delete temporary branches created during build or deployment processes.
<!-- END MANUAL -->
---
## GitHub List Releases
## Github List Branches
### What it is
A block that retrieves and lists all releases for a specified GitHub repository.
### What it does
This block fetches all releases associated with a given GitHub repository and provides their names and URLs.
This block lists all branches for a specified GitHub repository.
### How it works
The block connects to the GitHub API, sends a request to get release information for the specified repository, and then processes the response to extract release names and URLs.
<!-- MANUAL: how_it_works -->
This block retrieves all branches from a GitHub repository. It queries the GitHub API and returns each branch with its name and a URL to browse the files at that branch.
This provides visibility into all development streams in a repository.
<!-- END MANUAL -->
### Inputs
| Input | Description |
|-------|-------------|
| Credentials | GitHub authentication credentials required to access the repository |
| Repository URL | The URL of the GitHub repository to fetch releases from |
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| repo_url | URL of the GitHub repository | str | Yes |
### Outputs
| Output | Description |
|--------|-------------|
| Release | Information about each release, including its name and URL |
| Error | Any error message if the release listing process fails |
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| branch | Branches with their name and file tree browser URL | Branch |
| branches | List of branches with their name and file tree browser URL | List[BranchItem] |
### Possible use case
A user wants to view all official releases of a software project to choose the appropriate version for installation or to track the project's release history.
<!-- MANUAL: use_case -->
**Branch Inventory**: Create a dashboard showing all active branches across repositories.
**Naming Convention Validation**: Check branch names against team conventions.
**Active Development Tracking**: Monitor which branches exist to track parallel development efforts.
<!-- END MANUAL -->
---
## GitHub Read File
## Github List Discussions
### What it is
A block that reads the content of a specified file from a GitHub repository.
### What it does
This block retrieves the content of a specified file from a given GitHub repository, providing both the raw and decoded text content along with the file size.
This block lists recent discussions for a specified GitHub repository.
### How it works
The block authenticates with the GitHub API, sends a request to fetch the specified file's content, and then processes the response to provide the file's raw content, decoded text content, and size.
<!-- MANUAL: how_it_works -->
This block fetches recent discussions from a GitHub repository using the GitHub GraphQL API. Discussions are a forum-style feature for community conversations separate from issues and PRs.
You can limit the number of discussions retrieved with the num_discussions parameter.
<!-- END MANUAL -->
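For reference, a rough sketch of the kind of GraphQL query this wraps (assuming a personal access token in `GITHUB_TOKEN`; `OWNER`/`REPO` are placeholders, and the block's actual query may differ):

```python
import os
import requests

QUERY = """
query($owner: String!, $name: String!, $n: Int!) {
  repository(owner: $owner, name: $name) {
    discussions(first: $n) {
      nodes { title url }
    }
  }
}
"""

resp = requests.post(
    "https://api.github.com/graphql",
    json={"query": QUERY, "variables": {"owner": "OWNER", "name": "REPO", "n": 5}},
    headers={"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"},
    timeout=30,
)
for node in resp.json()["data"]["repository"]["discussions"]["nodes"]:
    print(node["title"], node["url"])  # mirrors the block's discussion/discussions outputs
```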
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| repo_url | URL of the GitHub repository | str | Yes |
| num_discussions | Number of discussions to fetch | int | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if listing discussions failed | str |
| discussion | Discussions with their title and URL | Discussion |
| discussions | List of discussions with their title and URL | List[DiscussionItem] |
### Possible use case
<!-- MANUAL: use_case -->
**Community Monitoring**: Track community discussions to identify popular topics or user concerns.
**Q&A Automation**: Monitor discussions for questions that can be answered automatically.
**Content Aggregation**: Collect discussion topics for community newsletters or summaries.
<!-- END MANUAL -->
---
## Github List Releases
### What it is
This block lists all releases for a specified GitHub repository.
### How it works
<!-- MANUAL: how_it_works -->
This block retrieves all releases from a GitHub repository. Releases are versioned packages of your software that may include release notes, binaries, and source code archives.
The block returns release information including names and URLs, outputting both individual releases and a complete list.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| repo_url | URL of the GitHub repository | str | Yes |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| release | Releases with their name and file tree browser URL | Release |
| releases | List of releases with their name and file tree browser URL | List[ReleaseItem] |
### Possible use case
<!-- MANUAL: use_case -->
**Version Tracking**: Monitor releases across dependencies to stay current with updates.
**Changelog Compilation**: Gather release information for documentation or announcement purposes.
**Dependency Monitoring**: Track when new versions of libraries your project depends on are released.
<!-- END MANUAL -->
---
## Github List Stargazers
### What it is
This block lists all users who have starred a specified GitHub repository.
### How it works
<!-- MANUAL: how_it_works -->
This block retrieves the list of users who have starred a GitHub repository. Stars are a way for users to bookmark or show appreciation for repositories.
Each stargazer entry includes their username and a link to their GitHub profile.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| repo_url | URL of the GitHub repository | str | Yes |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if listing stargazers failed | str |
| stargazer | Stargazers with their username and profile URL | Stargazer |
| stargazers | List of stargazers with their username and profile URL | List[StargazerItem] |
### Possible use case
<!-- MANUAL: use_case -->
**Community Engagement**: Identify and thank users who have starred your repository.
**Growth Analytics**: Track repository popularity over time by monitoring star growth.
**User Research**: Analyze who is interested in your project based on their profiles.
<!-- END MANUAL -->
---
## Github List Tags
### What it is
This block lists all tags for a specified GitHub repository.
### How it works
<!-- MANUAL: how_it_works -->
This block retrieves all git tags from a GitHub repository. Tags are typically used to mark release points or significant milestones in the repository history.
Each tag includes its name and a URL to browse the repository files at that tag.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| repo_url | URL of the GitHub repository | str | Yes |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| tag | Tags with their name and file tree browser URL | Tag |
| tags | List of tags with their name and file tree browser URL | List[TagItem] |
### Possible use case
<!-- MANUAL: use_case -->
**Version Enumeration**: List all versions of a project to check for available updates.
**Release Verification**: Confirm that tags exist for expected release versions.
**Historical Code Access**: Find tags to access the codebase at specific historical points.
<!-- END MANUAL -->
---
## Github Make Branch
### What it is
This block creates a new branch from a specified source branch.
### How it works
<!-- MANUAL: how_it_works -->
This block creates a new branch in a GitHub repository based on an existing source branch. It uses the GitHub References API to create a new ref pointing to the same commit as the source branch.
The new branch immediately contains all the code from the source branch at the time of creation.
<!-- END MANUAL -->
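As a rough illustration of the References API calls involved (a hedged sketch using `requests`; `OWNER`, `REPO`, the branch names, and the token handling are placeholders, not the block's internals):

```python
import os
import requests

API = "https://api.github.com/repos/OWNER/REPO"
HEADERS = {"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
           "Accept": "application/vnd.github+json"}

# 1. Look up the commit the source branch currently points to.
ref = requests.get(f"{API}/git/ref/heads/dev", headers=HEADERS, timeout=30).json()
source_sha = ref["object"]["sha"]

# 2. Create a new ref pointing at that same commit.
resp = requests.post(
    f"{API}/git/refs",
    json={"ref": "refs/heads/feature/new-branch", "sha": source_sha},
    headers=HEADERS,
    timeout=30,
)
resp.raise_for_status()  # failures surface through the block's `error` output
```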
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| repo_url | URL of the GitHub repository | str | Yes |
| new_branch | Name of the new branch | str | Yes |
| source_branch | Name of the source branch | str | Yes |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the branch creation failed | str |
| status | Status of the branch creation operation | str |
### Possible use case
<!-- MANUAL: use_case -->
**Feature Branch Creation**: Automatically create feature branches from main when work begins.
**Release Preparation**: Create release branches from development when ready to stabilize.
**Hotfix Workflows**: Quickly create hotfix branches from production for urgent fixes.
<!-- END MANUAL -->
---
## Github Read File
### What it is
This block reads the content of a specified file from a GitHub repository.
### How it works
<!-- MANUAL: how_it_works -->
This block reads the contents of a file from a GitHub repository using the Contents API. You can specify which branch to read from, defaulting to the repository's default branch.
The block returns both the decoded text content (for text files) and the raw base64-encoded content, along with the file size.
<!-- END MANUAL -->
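A minimal sketch of the Contents API call this wraps (assuming a `GITHUB_TOKEN` environment variable; `OWNER`/`REPO` and the path are placeholders):

```python
import base64
import os
import requests

resp = requests.get(
    "https://api.github.com/repos/OWNER/REPO/contents/README.md",
    params={"ref": "main"},
    headers={"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"},
    timeout=30,
)
data = resp.json()
raw_content = data["content"]                               # base64-encoded, like the block's raw_content output
text_content = base64.b64decode(raw_content).decode("utf-8")  # like the block's text_content output
size = data["size"]                                         # bytes, like the block's size output
```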
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| repo_url | URL of the GitHub repository | str | Yes |
| file_path | Path to the file in the repository | str | Yes |
| branch | Branch to read from | str | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| text_content | Content of the file (decoded as UTF-8 text) | str |
| raw_content | Raw base64-encoded content of the file | str |
| size | The size of the file (in bytes) | int |
### Possible use case
<!-- MANUAL: use_case -->
**Configuration Reading**: Fetch configuration files from repositories for processing or validation.
**Code Analysis**: Read source files for automated analysis, linting, or documentation generation.
**Version Comparison**: Compare file contents across different branches or versions.
<!-- END MANUAL -->
---
## Github Read Folder
### What it is
This block reads the content of a specified folder from a GitHub repository.
### How it works
<!-- MANUAL: how_it_works -->
This block lists the contents of a folder in a GitHub repository. It returns separate outputs for files and directories found in the specified path, allowing you to explore the repository structure.
You can specify which branch to read from; it defaults to master if not specified.
<!-- END MANUAL -->
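Roughly, the underlying call lists a directory via the Contents API and splits the entries by type (sketch with placeholder `OWNER`/`REPO` and folder path):

```python
import os
import requests

resp = requests.get(
    "https://api.github.com/repos/OWNER/REPO/contents/docs",
    params={"ref": "master"},
    headers={"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"},
    timeout=30,
)
entries = resp.json()  # a JSON array when the path points at a directory

files = [e for e in entries if e["type"] == "file"]  # -> the block's `file` output
dirs = [e for e in entries if e["type"] == "dir"]    # -> the block's `dir` output
for f in files:
    print(f["name"], f["path"], f["size"])
```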
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| repo_url | URL of the GitHub repository | str | Yes |
| folder_path | Path to the folder in the repository | str | Yes |
| branch | Branch name to read from (defaults to master) | str | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if reading the folder failed | str |
| file | Files in the folder | FileEntry |
| dir | Directories in the folder | DirEntry |
### Possible use case
<!-- MANUAL: use_case -->
**Repository Exploration**: Browse repository structure to understand project organization.
**File Discovery**: Find specific file types in directories for batch processing.
**Directory Monitoring**: Check for expected files in specific locations.
<!-- END MANUAL -->
---
## Github Update File
### What it is
This block updates an existing file in a GitHub repository.
### How it works
<!-- MANUAL: how_it_works -->
This block updates an existing file in a GitHub repository using the Contents API. It creates a new commit with the updated file content. The block automatically handles the required SHA of the existing file.
You can customize the commit message and specify which branch to update.
<!-- END MANUAL -->
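The Contents API requires the current file's SHA when updating; a hedged sketch of the two calls involved (placeholders throughout):

```python
import base64
import os
import requests

API = "https://api.github.com/repos/OWNER/REPO/contents/config/settings.yaml"
HEADERS = {"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"}

# Fetch the existing file to obtain its SHA.
current = requests.get(API, params={"ref": "main"}, headers=HEADERS, timeout=30).json()

resp = requests.put(
    API,
    json={
        "message": "Update settings",
        "content": base64.b64encode(b"key: new-value\n").decode("ascii"),
        "sha": current["sha"],   # required so GitHub can detect conflicting edits
        "branch": "main",
    },
    headers=HEADERS,
    timeout=30,
)
result = resp.json()
print(result["content"]["html_url"], result["commit"]["sha"])  # mirror the block's url/sha outputs
```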
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| repo_url | URL of the GitHub repository | str | Yes |
| file_path | Path to the file to update | str | Yes |
| content | New content for the file | str | Yes |
| branch | Branch containing the file | str | No |
| commit_message | Message for the commit | str | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| url | URL of the updated file | str |
| sha | SHA of the commit | str |
### Possible use case
<!-- MANUAL: use_case -->
**Configuration Updates**: Programmatically update configuration files in repositories.
**Version Bumping**: Automatically update version numbers in package files.
**Documentation Sync**: Update documentation files based on code changes.
<!-- END MANUAL -->
---
@@ -0,0 +1,228 @@
# Github Create Comment Object
### What it is
Creates a comment object for use with GitHub blocks. Note: For review comments, only path, body, and position are used. Side fields are only for standalone PR comments.
### How it works
<!-- MANUAL: how_it_works -->
This block creates a structured comment object that can be used with GitHub review blocks. It formats the comment data according to GitHub API requirements, including file path, body text, and position information.
For review comments, only path, body, and position fields are used. The side, start_line, and start_side fields are only applicable for standalone PR comments, not review comments.
<!-- END MANUAL -->
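For illustration, the resulting object is essentially a small dictionary in the shape the Reviews API expects (a sketch, not the block's code):

```python
def make_review_comment(path: str, body: str, position: int) -> dict:
    """Build a review-comment object: file path, comment text, and diff position."""
    return {"path": path, "body": body, "position": position}

comment = make_review_comment("src/app.py", "Consider extracting this into a helper.", 4)
```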
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| path | The file path to comment on | str | Yes |
| body | The comment text | str | Yes |
| position | Position in the diff (line number from first @@ hunk). Use this OR line. | int | No |
| line | Line number in the file (will be used as position if position not provided) | int | No |
| side | Side of the diff to comment on (NOTE: Only for standalone comments, not review comments) | str | No |
| start_line | Start line for multi-line comments (NOTE: Only for standalone comments, not review comments) | int | No |
| start_side | Side for the start of multi-line comments (NOTE: Only for standalone comments, not review comments) | str | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| comment_object | The comment object formatted for GitHub API | Dict[str, True] |
### Possible use case
<!-- MANUAL: use_case -->
**Automated Code Review**: Generate comment objects for automated review systems that analyze code changes.
**Batch Comments**: Create multiple comment objects to submit together in a single review.
**Template Responses**: Build reusable comment templates for common code review feedback patterns.
<!-- END MANUAL -->
---
## Github Create PR Review
### What it is
This block creates a review on a GitHub pull request with optional inline comments. You can create it as a draft or post immediately. Note: For inline comments, 'position' should be the line number in the diff (starting from the first @@ hunk header).
### How it works
<!-- MANUAL: how_it_works -->
This block creates a code review on a GitHub pull request using the Reviews API. Reviews can include a summary comment and optionally inline comments on specific lines of code in the diff.
You can create reviews as drafts (pending) for later submission, or post them immediately with an action: COMMENT for neutral feedback, APPROVE to approve the changes, or REQUEST_CHANGES to block merging until addressed.
<!-- END MANUAL -->
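A sketch of the Reviews API request behind this (the entries in `comments` follow the path/position/body shape produced by the Github Create Comment Object block; repo and PR number are placeholders):

```python
import os
import requests

resp = requests.post(
    "https://api.github.com/repos/OWNER/REPO/pulls/123/reviews",
    json={
        "body": "Looks good overall, two small suggestions inline.",
        "event": "COMMENT",  # omit this field entirely to leave the review as a pending draft
        "comments": [
            {"path": "src/app.py", "position": 4, "body": "Consider extracting this into a helper."},
        ],
    },
    headers={"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"},
    timeout=30,
)
review = resp.json()
print(review["id"], review["state"], review["html_url"])  # mirror review_id/state/html_url
```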
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| repo | GitHub repository | str | Yes |
| pr_number | Pull request number | int | Yes |
| body | Body of the review comment | str | Yes |
| event | The review action to perform | "COMMENT" | "APPROVE" | "REQUEST_CHANGES" | No |
| create_as_draft | Create the review as a draft (pending) or post it immediately | bool | No |
| comments | Optional inline comments to add to specific files/lines. Note: Only path, body, and position are supported. Position is line number in diff from first @@ hunk. | List[ReviewComment] | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the review creation failed | str |
| review_id | ID of the created review | int |
| state | State of the review (e.g., PENDING, COMMENTED, APPROVED, CHANGES_REQUESTED) | str |
| html_url | URL of the created review | str |
### Possible use case
<!-- MANUAL: use_case -->
**Automated Code Review**: Submit AI-generated code reviews with inline comments on specific lines.
**Review Workflows**: Create structured reviews as part of automated CI/CD pipelines.
**Approval Automation**: Automatically approve PRs that pass all automated checks and criteria.
<!-- END MANUAL -->
---
## Github Get PR Review Comments
### What it is
This block gets all review comments from a GitHub pull request or from a specific review.
### How it works
<!-- MANUAL: how_it_works -->
This block retrieves review comments from a GitHub pull request. Review comments are inline comments made on specific lines of code during code review, distinct from general issue-style comments.
You can get all review comments on the PR, or filter to comments from a specific review by providing the review ID. Each comment includes metadata like the file path, line number, author, and comment body.
<!-- END MANUAL -->
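Roughly, the two REST endpoints involved look like this (all review comments on the PR versus the comments of one specific review; IDs are placeholders):

```python
import os
import requests

HEADERS = {"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"}
BASE = "https://api.github.com/repos/OWNER/REPO/pulls/123"

all_comments = requests.get(f"{BASE}/comments", headers=HEADERS, timeout=30).json()
one_review = requests.get(f"{BASE}/reviews/456/comments", headers=HEADERS, timeout=30).json()

for c in all_comments:
    print(c["path"], c.get("line"), c["user"]["login"], c["body"])
```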
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| repo | GitHub repository | str | Yes |
| pr_number | Pull request number | int | Yes |
| review_id | ID of a specific review to get comments from (optional) | int | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| comment | Individual review comment with details | Comment |
| comments | List of all review comments on the pull request | List[CommentItem] |
### Possible use case
<!-- MANUAL: use_case -->
**Review Analysis**: Extract all review comments to analyze feedback patterns or generate summaries.
**Comment Tracking**: Monitor which review feedback has been addressed by comparing comments to code changes.
**Documentation**: Collect review discussions for documentation or knowledge base purposes.
<!-- END MANUAL -->
---
## Github List PR Reviews
### What it is
This block lists all reviews for a specified GitHub pull request.
### How it works
<!-- MANUAL: how_it_works -->
This block retrieves all reviews submitted on a GitHub pull request. It returns information about each review including the reviewer, their verdict (approve, request changes, or comment), and the review body.
Use this to check approval status, see who has reviewed, or analyze the review history of a pull request.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| repo | GitHub repository | str | Yes |
| pr_number | Pull request number | int | Yes |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| review | Individual review with details | Review |
| reviews | List of all reviews on the pull request | List[ReviewItem] |
### Possible use case
<!-- MANUAL: use_case -->
**Approval Status Check**: Verify that required reviewers have approved before proceeding with merge.
**Review Metrics**: Track review participation and response times across team members.
**Merge Readiness**: Check review states to determine if a PR meets merge requirements.
<!-- END MANUAL -->
---
## Github Resolve Review Discussion
### What it is
This block resolves or unresolves a review discussion thread on a GitHub pull request.
### How it works
<!-- MANUAL: how_it_works -->
This block resolves or unresolves a review discussion thread on a GitHub pull request using the GraphQL API. Resolved discussions are collapsed in the GitHub UI, indicating the feedback has been addressed.
Specify the comment ID of the thread to resolve. Set resolve to true to mark as resolved, or false to reopen the discussion.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| repo | GitHub repository | str | Yes |
| pr_number | Pull request number | int | Yes |
| comment_id | ID of the review comment to resolve/unresolve | int | Yes |
| resolve | Whether to resolve (true) or unresolve (false) the discussion | bool | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| success | Whether the operation was successful | bool |
### Possible use case
<!-- MANUAL: use_case -->
**Automated Resolution**: Mark discussions as resolved when automated systems verify the feedback was addressed.
**Review Cleanup**: Bulk resolve outdated discussions that no longer apply after significant refactoring.
**Review Management**: Programmatically manage discussion states as part of review workflows.
<!-- END MANUAL -->
---
## Github Submit Pending Review
### What it is
This block submits a pending (draft) review on a GitHub pull request.
### How it works
<!-- MANUAL: how_it_works -->
This block submits a pending (draft) review on a GitHub pull request. Draft reviews allow you to compose multiple inline comments before publishing them together as a cohesive review.
When submitting, choose the review event: COMMENT for general feedback, APPROVE to approve the PR, or REQUEST_CHANGES to request modifications before merging.
<!-- END MANUAL -->
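Roughly, submitting a pending review is a single call to the review events endpoint (sketch; 123 and 456 stand in for the PR number and pending review ID):

```python
import os
import requests

resp = requests.post(
    "https://api.github.com/repos/OWNER/REPO/pulls/123/reviews/456/events",
    json={"event": "APPROVE", "body": "All comments addressed, approving."},
    headers={"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"},
    timeout=30,
)
submitted = resp.json()
print(submitted["state"], submitted["html_url"])  # mirror the block's state/html_url outputs
```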
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| repo | GitHub repository | str | Yes |
| pr_number | Pull request number | int | Yes |
| review_id | ID of the pending review to submit | int | Yes |
| event | The review action to perform when submitting | "COMMENT" \| "APPROVE" \| "REQUEST_CHANGES" | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the review submission failed | str |
| state | State of the submitted review | str |
| html_url | URL of the submitted review | str |
### Possible use case
<!-- MANUAL: use_case -->
**Batch Review Submission**: Build up multiple comments in a draft, then submit them all at once.
**Review Finalization**: Complete the review process after adding all inline comments and deciding on the verdict.
**Two-Phase Review**: Create draft reviews for internal review before officially submitting to the PR author.
<!-- END MANUAL -->
---
@@ -0,0 +1,38 @@
# Github Create Status
### What it is
Creates a new commit status in a GitHub repository
### How it works
<!-- MANUAL: how_it_works -->
This block creates a commit status using the GitHub Status API. Commit statuses are simpler than check runs and appear as colored indicators (pending yellow, success green, failure red, error red) on commits and pull requests.
Provide a context label to differentiate this status from others, an optional target URL for detailed results, and a description. Multiple statuses can exist on the same commit with different context labels.
<!-- END MANUAL -->
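A minimal sketch of the Status API call (owner, repo, commit SHA, and URLs are placeholders):

```python
import os
import requests

resp = requests.post(
    "https://api.github.com/repos/OWNER/REPO/statuses/abc123def456",
    json={
        "state": "success",
        "target_url": "https://ci.example.com/builds/42",
        "description": "Build passed",
        "context": "ci/example-build",  # the block's check_name input maps to this label
    },
    headers={"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"},
    timeout=30,
)
resp.raise_for_status()
```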
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| repo_url | URL of the GitHub repository | str | Yes |
| sha | The SHA of the commit to set status for | str | Yes |
| state | The state of the status (error, failure, pending, success) | "error" \| "failure" \| "pending" \| "success" | Yes |
| target_url | URL with additional details about this status | str | No |
| description | Short description of the status | str | No |
| check_name | Label to differentiate this status from others | str | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| status | Details of the created status | StatusResult |
### Possible use case
<!-- MANUAL: use_case -->
**External CI Integration**: Report build status from CI systems that don't have native GitHub integration.
**Deployment Tracking**: Set commit statuses to indicate deployment state (pending, deployed, failed).
**Required Status Checks**: Create statuses that GitHub branch protection rules require before merging.
<!-- END MANUAL -->
---
@@ -0,0 +1,225 @@
# Github Discussion Trigger
### What it is
This block triggers on GitHub Discussions events. Great for syncing Q&A to Discord or auto-responding to common questions. Note: Discussions must be enabled on the repository.
### How it works
<!-- MANUAL: how_it_works -->
This block creates a webhook subscription to GitHub Discussions events using the GitHub Webhooks API. When a discussion event occurs (created, edited, answered, etc.), GitHub sends a webhook payload that triggers your workflow.
The block parses the webhook payload and extracts discussion details including the title, body, category, state, and the user who triggered the event. Note that GitHub Discussions must be enabled on the repository.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| repo | Repository to subscribe to. **Note:** Make sure your GitHub credentials have permissions to create webhooks on this repo. | str | Yes |
| events | The discussion events to subscribe to | Events | Yes |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the payload could not be processed | str |
| payload | The complete webhook payload that was received from GitHub. Includes information about the affected resource (e.g. pull request), the event, and the user who triggered the event. | Dict[str, True] |
| triggered_by_user | Object representing the GitHub user who triggered the event | Dict[str, True] |
| event | The discussion event that triggered the webhook | str |
| number | The discussion number | int |
| discussion | The full discussion object | Dict[str, True] |
| discussion_url | URL to the discussion | str |
| title | The discussion title | str |
| body | The discussion body | str |
| category | The discussion category object | Dict[str, True] |
| category_name | Name of the category | str |
| state | Discussion state | str |
### Possible use case
<!-- MANUAL: use_case -->
**Discord Sync**: Post new discussions to Discord channels to keep the community engaged across platforms.
**Auto-Responder**: Automatically respond to common questions in discussions with helpful resources.
**Q&A Routing**: Route discussion questions to the appropriate team members based on category or content.
<!-- END MANUAL -->
---
## Github Issues Trigger
### What it is
This block triggers on GitHub issues events. Useful for automated triage, notifications, and welcoming first-time contributors.
### How it works
<!-- MANUAL: how_it_works -->
This block creates a webhook subscription to GitHub Issues events. When an issue event occurs (opened, closed, labeled, assigned, etc.), GitHub sends a webhook payload that triggers your workflow.
The block extracts issue details including the title, body, labels, assignees, state, and the user who triggered the event. Use this for automated triage, notifications, and issue management workflows.
<!-- END MANUAL -->
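To give a feel for the mapping, here is a sketch of how this trigger's outputs relate to a standard `issues` webhook payload (field names follow GitHub's documented payload; this is not the block's actual code):

```python
def extract_issue_fields(payload: dict) -> dict:
    """Map a GitHub 'issues' webhook payload onto the outputs this trigger exposes."""
    issue = payload.get("issue", {})
    return {
        "event": payload.get("action"),  # e.g. 'opened', 'closed', 'labeled'
        "number": issue.get("number"),
        "issue_url": issue.get("html_url"),
        "issue_title": issue.get("title"),
        "issue_body": issue.get("body"),
        "labels": [lbl.get("name") for lbl in issue.get("labels", [])],
        "assignees": [a.get("login") for a in issue.get("assignees", [])],
        "state": issue.get("state"),
        "triggered_by_user": payload.get("sender", {}),
    }
```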
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| repo | Repository to subscribe to. **Note:** Make sure your GitHub credentials have permissions to create webhooks on this repo. | str | Yes |
| events | The issue events to subscribe to | Events | Yes |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the payload could not be processed | str |
| payload | The complete webhook payload that was received from GitHub. Includes information about the affected resource (e.g. pull request), the event, and the user who triggered the event. | Dict[str, True] |
| triggered_by_user | Object representing the GitHub user who triggered the event | Dict[str, True] |
| event | The issue event that triggered the webhook (e.g., 'opened') | str |
| number | The issue number | int |
| issue | The full issue object | Dict[str, True] |
| issue_url | URL to the issue | str |
| issue_title | The issue title | str |
| issue_body | The issue body/description | str |
| labels | List of labels on the issue | List[Any] |
| assignees | List of assignees | List[Any] |
| state | Issue state ('open' or 'closed') | str |
### Possible use case
<!-- MANUAL: use_case -->
**Automated Triage**: Automatically label new issues based on keywords in title or description.
**Welcome Messages**: Send welcome messages to first-time contributors when they open their first issue.
**Slack Notifications**: Post notifications to Slack when issues are opened or closed.
<!-- END MANUAL -->
---
## Github Pull Request Trigger
### What it is
This block triggers on pull request events and outputs the event type and payload.
### How it works
<!-- MANUAL: how_it_works -->
This block creates a webhook subscription to GitHub Pull Request events. When a PR event occurs (opened, closed, merged, review requested, etc.), GitHub sends a webhook payload that triggers your workflow.
The block extracts PR details including the number, URL, and full pull request object. This enables automated code review, CI/CD pipelines, and notification workflows.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| repo | Repository to subscribe to. **Note:** Make sure your GitHub credentials have permissions to create webhooks on this repo. | str | Yes |
| events | The events to subscribe to | Events | Yes |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the payload could not be processed | str |
| payload | The complete webhook payload that was received from GitHub. Includes information about the affected resource (e.g. pull request), the event, and the user who triggered the event. | Dict[str, True] |
| triggered_by_user | Object representing the GitHub user who triggered the event | Dict[str, True] |
| event | The PR event that triggered the webhook (e.g. 'opened') | str |
| number | The number of the affected pull request | int |
| pull_request | Object representing the affected pull request | Dict[str, True] |
| pull_request_url | The URL of the affected pull request | str |
### Possible use case
<!-- MANUAL: use_case -->
**Automated Code Review**: Trigger AI-powered code review when new PRs are opened.
**CI/CD Automation**: Start builds and tests when PRs are created or updated.
**Reviewer Assignment**: Automatically assign reviewers based on files changed or PR author.
<!-- END MANUAL -->
---
## Github Release Trigger
### What it is
This block triggers on GitHub release events. Perfect for automating announcements to Discord, Twitter, or other platforms.
### How it works
<!-- MANUAL: how_it_works -->
This block creates a webhook subscription to GitHub Release events. When a release event occurs (published, created, edited, etc.), GitHub sends a webhook payload that triggers your workflow.
The block extracts release details including tag name, release name, release notes, prerelease flag, and associated assets. Use this to automate announcements and deployment workflows.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| repo | Repository to subscribe to. **Note:** Make sure your GitHub credentials have permissions to create webhooks on this repo. | str | Yes |
| events | The release events to subscribe to | Events | Yes |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the payload could not be processed | str |
| payload | The complete webhook payload that was received from GitHub. Includes information about the affected resource (e.g. pull request), the event, and the user who triggered the event. | Dict[str, True] |
| triggered_by_user | Object representing the GitHub user who triggered the event | Dict[str, True] |
| event | The release event that triggered the webhook (e.g., 'published') | str |
| release | The full release object | Dict[str, True] |
| release_url | URL to the release page | str |
| tag_name | The release tag name (e.g., 'v1.0.0') | str |
| release_name | Human-readable release name | str |
| body | Release notes/description | str |
| prerelease | Whether this is a prerelease | bool |
| draft | Whether this is a draft release | bool |
| assets | List of release assets/files | List[Any] |
### Possible use case
<!-- MANUAL: use_case -->
**Release Announcements**: Post release announcements to Discord, Twitter, or Slack when new versions are published.
**Changelog Distribution**: Automatically send release notes to mailing lists or documentation sites.
**Deployment Triggers**: Initiate deployment workflows when releases are published.
<!-- END MANUAL -->
---
## Github Star Trigger
### What it is
This block triggers on GitHub star events. Useful for celebrating milestones (e.g., 1k, 10k stars) or tracking engagement.
### How it works
<!-- MANUAL: how_it_works -->
This block creates a webhook subscription to GitHub Star events. When someone stars or unstars your repository, GitHub sends a webhook payload that triggers your workflow.
The block extracts star details including the timestamp, current star count, repository name, and the user who starred. Use this to track engagement and celebrate milestones.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| repo | Repository to subscribe to. **Note:** Make sure your GitHub credentials have permissions to create webhooks on this repo. | str | Yes |
| events | The star events to subscribe to | Events | Yes |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the payload could not be processed | str |
| payload | The complete webhook payload that was received from GitHub. Includes information about the affected resource (e.g. pull request), the event, and the user who triggered the event. | Dict[str, True] |
| triggered_by_user | Object representing the GitHub user who triggered the event | Dict[str, True] |
| event | The star event that triggered the webhook ('created' or 'deleted') | str |
| starred_at | ISO timestamp when the repo was starred (empty if deleted) | str |
| stargazers_count | Current number of stars on the repository | int |
| repository_name | Full name of the repository (owner/repo) | str |
| repository_url | URL to the repository | str |
### Possible use case
<!-- MANUAL: use_case -->
**Milestone Celebrations**: Announce when your repository reaches star milestones (100, 1k, 10k stars).
**Engagement Tracking**: Log star events to track repository popularity over time.
**Thank You Messages**: Send personalized thank you messages to users who star your repository.
<!-- END MANUAL -->
---
@@ -0,0 +1,85 @@
# Google Calendar Create Event
### What it is
This block creates a new event in Google Calendar with customizable parameters.
### How it works
<!-- MANUAL: how_it_works -->
This block creates events in Google Calendar via the Google Calendar API. It handles various event parameters including timing, location, guest invitations, Google Meet links, and recurring schedules. The block authenticates using your connected Google account credentials.
When you specify guests, they receive email invitations (if notifications are enabled). The Google Meet option adds a video conference link to the event automatically.
<!-- END MANUAL -->
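For orientation, a minimal sketch of the equivalent Calendar API call with `google-api-python-client` (assuming `creds` is an authorized credentials object; the block manages credentials for you):

```python
from googleapiclient.discovery import build

service = build("calendar", "v3", credentials=creds)
event = {
    "summary": "Project sync",
    "location": "Room 4",
    "description": "Weekly status meeting",
    "start": {"dateTime": "2024-06-03T10:00:00-07:00"},
    "end": {"dateTime": "2024-06-03T10:30:00-07:00"},
    "attendees": [{"email": "teammate@example.com"}],
    "reminders": {"useDefault": False, "overrides": [{"method": "popup", "minutes": 15}]},
}
created = service.events().insert(
    calendarId="primary",
    body=event,
    sendUpdates="all",  # email the invited guests
).execute()
print(created["id"], created.get("htmlLink"))  # mirror the block's event_id/event_link outputs
```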
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| event_title | Title of the event | str | Yes |
| location | Location of the event | str | No |
| description | Description of the event | str | No |
| timing | Specify when the event starts and ends | Timing | No |
| calendar_id | Calendar ID (use 'primary' for your main calendar) | str | No |
| guest_emails | Email addresses of guests to invite | List[str] | No |
| send_notifications | Send email notifications to guests | bool | No |
| add_google_meet | Include a Google Meet video conference link | bool | No |
| recurrence | Whether the event repeats | Recurrence | No |
| reminder_minutes | When to send reminders before the event | List[int] | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| event_id | ID of the created event | str |
| event_link | Link to view the event in Google Calendar | str |
### Possible use case
<!-- MANUAL: use_case -->
**Automated Meeting Scheduling**: Create calendar events when appointments are booked through a form or scheduling system.
**Event Reminders**: Schedule events with custom reminder notifications for team deadlines or milestones.
**Team Coordination**: Create recurring meetings with Google Meet links when onboarding new team members.
<!-- END MANUAL -->
---
## Google Calendar Read Events
### What it is
Retrieves upcoming events from a Google Calendar with filtering options
### How it works
<!-- MANUAL: how_it_works -->
This block fetches upcoming events from Google Calendar using the Calendar API. It retrieves events within a specified time range, with options to filter by search term or exclude declined events. Pagination support allows handling large numbers of events.
Events are returned with details like title, time, location, and attendees. Use 'primary' as the calendar_id to access your main calendar.
<!-- END MANUAL -->
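A rough equivalent of the underlying list call (again assuming an authorized `creds` object; the search term is a placeholder):

```python
from googleapiclient.discovery import build

service = build("calendar", "v3", credentials=creds)
resp = service.events().list(
    calendarId="primary",
    timeMin="2024-06-03T00:00:00Z",
    maxResults=10,
    singleEvents=True,   # expand recurring events into individual instances
    orderBy="startTime",
    q="standup",         # optional search term
).execute()

for ev in resp.get("items", []):
    print(ev.get("summary"), ev.get("start", {}).get("dateTime"))
next_token = resp.get("nextPageToken")  # pass back in as page_token for the next batch
```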
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| calendar_id | Calendar ID (use 'primary' for your main calendar) | str | No |
| max_events | Maximum number of events to retrieve | int | No |
| start_time | Retrieve events starting from this time | str (date-time) | No |
| time_range_days | Number of days to look ahead for events | int | No |
| search_term | Optional search term to filter events by | str | No |
| page_token | Page token from previous request to get the next batch of events. You can use this if you have lots of events you want to process in a loop | str | No |
| include_declined_events | Include events you've declined | bool | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the request failed | str |
| events | List of calendar events in the requested time range | List[CalendarEvent] |
| event | One of the calendar events in the requested time range | CalendarEvent |
| next_page_token | Token for retrieving the next page of events if more exist | str |
### Possible use case
<!-- MANUAL: use_case -->
**Daily Briefings**: Fetch today's events to generate a morning summary or prepare for upcoming meetings.
**Schedule Conflicts**: Check for overlapping events before scheduling new appointments.
**Meeting Preparation**: Retrieve upcoming meetings to pre-load relevant documents or send reminders.
<!-- END MANUAL -->
---
@@ -0,0 +1,715 @@
# Google Docs Append Markdown
### What it is
Append Markdown content to the end of a Google Doc with full formatting - ideal for LLM/AI output
### How it works
<!-- MANUAL: how_it_works -->
This block appends Markdown content to the end of a Google Doc and automatically converts it to native Google Docs formatting using the Google Docs API. It supports headers, bold, italic, links, lists, and code formatting.
Set add_newline to true to insert a line break before the appended content. The document is returned for chaining with other document operations.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| document | Select a Google Doc to append to | Document | No |
| markdown | Markdown content to append to the document | str | Yes |
| add_newline | Add a newline before the appended content | bool | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if operation failed | str |
| result | Result of the append operation | Dict[str, True] |
| document | The document for chaining | GoogleDriveFile |
### Possible use case
<!-- MANUAL: use_case -->
**AI Report Generation**: Append LLM-generated analysis or summaries to existing report documents with proper formatting.
**Content Aggregation**: Continuously add formatted content from multiple sources to a running document.
**Meeting Notes**: Append AI-transcribed and formatted meeting notes to shared team documents.
<!-- END MANUAL -->
---
## Google Docs Append Plain Text
### What it is
Append plain text to the end of a Google Doc (no formatting applied)
### How it works
<!-- MANUAL: how_it_works -->
This block appends unformatted text to the end of a Google Doc using the Google Docs API. Unlike the Markdown version, text is inserted exactly as provided without any formatting interpretation.
The block finds the document's end index and inserts the text there, with an optional newline prefix. This is useful for log entries, raw data, or when formatting is handled elsewhere.
<!-- END MANUAL -->
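Conceptually the append is a get-then-batchUpdate, roughly like this sketch (`creds` and `DOC_ID` are placeholders for an authorized credentials object and a document ID):

```python
from googleapiclient.discovery import build

docs = build("docs", "v1", credentials=creds)
doc = docs.documents().get(documentId=DOC_ID).execute()
end_index = doc["body"]["content"][-1]["endIndex"]  # end of the last structural element

docs.documents().batchUpdate(
    documentId=DOC_ID,
    body={"requests": [{
        "insertText": {
            "location": {"index": end_index - 1},  # just before the document's trailing newline
            "text": "\nAppended log entry",
        }
    }]},
).execute()
```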
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| document | Select a Google Doc to append to | Document | No |
| text | Plain text to append (no formatting applied) | str | Yes |
| add_newline | Add a newline before the appended text | bool | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if append failed | str |
| result | Result of the append operation | Dict[str, True] |
| document | The document for chaining | GoogleDriveFile |
### Possible use case
<!-- MANUAL: use_case -->
**Activity Logging**: Append timestamped log entries to document-based activity logs.
**Data Capture**: Add raw data or transcript text that will be formatted later.
**Simple Notes**: Quickly add text notes without worrying about formatting.
<!-- END MANUAL -->
---
## Google Docs Create
### What it is
Create a new Google Doc
### How it works
<!-- MANUAL: how_it_works -->
This block creates a new Google Doc in the user's Google Drive using the Google Docs API. You specify a title for the document and optionally provide initial text content.
The newly created document is returned with its ID and URL, allowing immediate access and chaining to other document operations like formatting or sharing.
<!-- END MANUAL -->
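A minimal sketch of the create call (initial content would typically be added with a follow-up batchUpdate; the edit URL below is the conventional Docs URL pattern, not an API field):

```python
from googleapiclient.discovery import build

docs = build("docs", "v1", credentials=creds)
doc = docs.documents().create(body={"title": "Q3 Planning Notes"}).execute()

document_id = doc["documentId"]
document_url = f"https://docs.google.com/document/d/{document_id}/edit"  # conventional edit URL
print(document_id, document_url)  # mirrors the block's document_id/document_url outputs
```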
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| title | Title for the new document | str | Yes |
| initial_content | Optional initial text content | str | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if creation failed | str |
| document | The created document | GoogleDriveFile |
| document_id | ID of the created document | str |
| document_url | URL to open the document | str |
### Possible use case
<!-- MANUAL: use_case -->
**Report Templates**: Create new documents for each report cycle with standardized titles.
**Dynamic Document Generation**: Generate personalized documents for customers or projects.
**Workflow Automation**: Create documents as part of onboarding or project kickoff workflows.
<!-- END MANUAL -->
---
## Google Docs Delete Content
### What it is
Delete a range of content from a Google Doc
### How it works
<!-- MANUAL: how_it_works -->
This block removes content from a Google Doc by specifying start and end index positions using the Google Docs API. Index positions are 1-based (index 0 is reserved for a section break).
Use the Get Structure block first to find the correct index positions for content you want to delete. The deletion operation shifts all subsequent content to fill the gap.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| document | Select a Google Doc | Document | No |
| start_index | Start index of content to delete (must be >= 1, as index 0 is a section break) | int | Yes |
| end_index | End index of content to delete | int | Yes |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if operation failed | str |
| result | Result of delete operation | Dict[str, True] |
| document | The document for chaining | GoogleDriveFile |
### Possible use case
<!-- MANUAL: use_case -->
**Content Cleanup**: Remove outdated sections or placeholder text from templates.
**Document Restructuring**: Delete sections as part of document reorganization workflows.
**Revision Management**: Remove draft content before finalizing documents.
<!-- END MANUAL -->
---
## Google Docs Export
### What it is
Export a Google Doc to PDF, Word, text, or other formats
### How it works
<!-- MANUAL: how_it_works -->
This block exports a Google Doc to various formats (PDF, DOCX, ODT) using the Google Drive API's export functionality. The exported content is returned as base64-encoded data for binary formats.
The export preserves document formatting as closely as possible in the target format. PDF exports are ideal for final distribution, while DOCX exports enable further editing in Microsoft Word.
<!-- END MANUAL -->
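The export goes through the Drive API rather than the Docs API; a hedged sketch:

```python
import base64
from googleapiclient.discovery import build

drive = build("drive", "v3", credentials=creds)
pdf_bytes = drive.files().export(
    fileId=DOC_ID,
    mimeType="application/pdf",
).execute()  # returns the exported bytes directly for small documents

content = base64.b64encode(pdf_bytes).decode("ascii")  # base64, as in the block's content output
```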
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| document | Select a Google Doc to export | Document | No |
| format | Export format | "application/pdf" \| "application/vnd.openxmlformats-officedocument.wordprocessingml.document" \| "application/vnd.oasis.opendocument.text" | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if export failed | str |
| content | Exported content (base64 encoded for binary formats) | str |
| mime_type | MIME type of exported content | str |
| document | The document for chaining | GoogleDriveFile |
### Possible use case
<!-- MANUAL: use_case -->
**Report Distribution**: Export finalized reports as PDF for email distribution or archival.
**Cross-Platform Sharing**: Export to Word format for recipients who don't use Google Docs.
**Backup Creation**: Create periodic PDF exports of important documents for offline storage.
<!-- END MANUAL -->
---
## Google Docs Find Replace Plain Text
### What it is
Find and replace plain text in a Google Doc (no formatting applied to replacement)
### How it works
<!-- MANUAL: how_it_works -->
This block performs a find-and-replace operation across the entire Google Doc using the Google Docs API. It searches for all occurrences of the specified text and replaces them with the provided replacement text.
The replacement preserves the surrounding formatting but does not apply any new formatting to the replacement text. Case-matching is configurable.
<!-- END MANUAL -->
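The operation maps onto a single `replaceAllText` request, roughly:

```python
from googleapiclient.discovery import build

docs = build("docs", "v1", credentials=creds)
result = docs.documents().batchUpdate(
    documentId=DOC_ID,
    body={"requests": [{
        "replaceAllText": {
            "containsText": {"text": "{{NAME}}", "matchCase": True},
            "replaceText": "Ada Lovelace",
        }
    }]},
).execute()

# The reply carries the replacement count that the block reports in its result output.
changed = result["replies"][0]["replaceAllText"].get("occurrencesChanged", 0)
```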
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| document | Select a Google Doc | Document | No |
| find_text | Plain text to find | str | Yes |
| replace_text | Plain text to replace with (no formatting applied) | str | Yes |
| match_case | Match case when finding text | bool | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if operation failed | str |
| result | Result with replacement count | Dict[str, True] |
| document | The document for chaining | GoogleDriveFile |
### Possible use case
<!-- MANUAL: use_case -->
**Template Population**: Replace placeholder tokens like {{NAME}} with actual values in document templates.
**Batch Updates**: Update company names, dates, or other text across multiple documents.
**Error Correction**: Fix common typos or outdated terminology across documents.
<!-- END MANUAL -->
---
## Google Docs Format Text
### What it is
Apply formatting (bold, italic, color, etc.) to text in a Google Doc
### How it works
<!-- MANUAL: how_it_works -->
This block applies text formatting to a specific range within a Google Doc using the Google Docs API. You specify start and end indexes and choose formatting options like bold, italic, underline, font size, and text color.
Use the Get Structure block to identify the correct index positions. Multiple formatting options can be applied simultaneously in a single request.
<!-- END MANUAL -->
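A sketch of the `updateTextStyle` request involved; the `fields` mask tells the API which style properties to change (indexes here are placeholders found via the Get Structure block):

```python
from googleapiclient.discovery import build

docs = build("docs", "v1", credentials=creds)
docs.documents().batchUpdate(
    documentId=DOC_ID,
    body={"requests": [{
        "updateTextStyle": {
            "range": {"startIndex": 1, "endIndex": 25},
            "textStyle": {
                "bold": True,
                "fontSize": {"magnitude": 14, "unit": "PT"},
                "foregroundColor": {"color": {"rgbColor": {"red": 1.0, "green": 0.0, "blue": 0.0}}},
            },
            "fields": "bold,fontSize,foregroundColor",  # only these properties are touched
        }
    }]},
).execute()
```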
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| document | Select a Google Doc | Document | No |
| start_index | Start index of text to format (must be >= 1, as index 0 is a section break) | int | Yes |
| end_index | End index of text to format | int | Yes |
| bold | Make text bold | bool | No |
| italic | Make text italic | bool | No |
| underline | Underline text | bool | No |
| font_size | Font size in points (0 = no change) | int | No |
| foreground_color | Text color as hex (e.g., #FF0000 for red) | str | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if operation failed | str |
| result | Result of format operation | Dict[str, True] |
| document | The document for chaining | GoogleDriveFile |
### Possible use case
<!-- MANUAL: use_case -->
**Highlight Important Content**: Apply bold or color formatting to emphasize key findings or action items.
**Conditional Formatting**: Format text based on workflow conditions (e.g., red for overdue items).
**Document Styling**: Apply consistent formatting to generated content that matches brand guidelines.
<!-- END MANUAL -->
---
## Google Docs Get Metadata
### What it is
Get metadata about a Google Doc
### How it works
<!-- MANUAL: how_it_works -->
This block retrieves document metadata from a Google Doc using the Google Docs API. It returns information including the document title, unique ID, current revision ID, and the URL for accessing the document.
This metadata is useful for tracking document versions, building document inventories, or generating links for sharing.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| document | Select a Google Doc | Document | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if operation failed | str |
| title | Document title | str |
| document_id | Document ID | str |
| revision_id | Current revision ID | str |
| document_url | URL to open the document | str |
| document | The document for chaining | GoogleDriveFile |
### Possible use case
<!-- MANUAL: use_case -->
**Document Inventory**: Gather metadata from multiple documents for tracking and cataloging.
**Version Monitoring**: Track revision IDs to detect when documents have been modified.
**Link Generation**: Extract document URLs for sharing via email or other channels.
<!-- END MANUAL -->
---
## Google Docs Get Structure
### What it is
Get document structure with index positions for precise editing operations
### How it works
<!-- MANUAL: how_it_works -->
This block analyzes a Google Doc's structure and returns detailed information about content segments with their index positions using the Google Docs API. Use flat mode for a simple list of segments or detailed mode for full hierarchical structure.
The index positions are essential for precise editing operations like formatting, deletion, or insertion at specific locations within the document.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| document | Select a Google Doc to analyze | Document | No |
| detailed | Return full hierarchical structure instead of flat segments | bool | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if operation failed | str |
| segments | Flat list of content segments with indexes (when detailed=False) | List[Dict[str, True]] |
| structure | Full hierarchical document structure (when detailed=True) | Dict[str, True] |
| document | The document for chaining | GoogleDriveFile |
### Possible use case
<!-- MANUAL: use_case -->
**Position Discovery**: Find correct index positions before performing insert or delete operations.
**Document Analysis**: Understand document structure for content extraction or manipulation.
**Navigation Aid**: Map document sections to enable targeted content operations.
<!-- END MANUAL -->
---
## Google Docs Insert Markdown At
### What it is
Insert formatted Markdown at a specific position in a Google Doc - ideal for LLM/AI output
### How it works
<!-- MANUAL: how_it_works -->
This block inserts Markdown content at a specific index position within a Google Doc, converting the Markdown to native Google Docs formatting using the Google Docs API. Index 1 inserts at the document start.
The Markdown parser handles headers, bold, italic, links, lists, and code formatting. This enables inserting AI-generated content with proper formatting at precise document locations.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| document | Select a Google Doc to insert into | Document | No |
| markdown | Markdown content to insert | str | Yes |
| index | Position index to insert at (1 = start of document) | int | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if operation failed | str |
| result | Result of the insert operation | Dict[str, True] |
| document | The document for chaining | GoogleDriveFile |
### Possible use case
<!-- MANUAL: use_case -->
**Content Insertion**: Insert AI-generated sections at specific locations in templates.
**Document Assembly**: Build documents by inserting formatted content blocks at designated positions.
**Dynamic Reports**: Insert data-driven formatted content at specific sections of report templates.
<!-- END MANUAL -->
---
## Google Docs Insert Page Break
### What it is
Insert a page break into a Google Doc
### How it works
<!-- MANUAL: how_it_works -->
This block inserts a page break at a specified index position in a Google Doc using the Google Docs API. Setting index to 0 inserts at the end of the document.
Page breaks force subsequent content to start on a new page, useful for separating document sections for printing or PDF generation.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| document | Select a Google Doc | Document | No |
| index | Position to insert page break (0 = end of document) | int | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if operation failed | str |
| result | Result of page break insertion | Dict[str, True] |
| document | The document for chaining | GoogleDriveFile |
### Possible use case
<!-- MANUAL: use_case -->
**Report Formatting**: Add page breaks between major sections of generated reports.
**Print Preparation**: Insert page breaks to control page layout before PDF export.
**Document Structure**: Separate document chapters or sections for better readability.
<!-- END MANUAL -->
---
## Google Docs Insert Plain Text
### What it is
Insert plain text at a specific position in a Google Doc (no formatting applied)
### How it works
<!-- MANUAL: how_it_works -->
This block inserts unformatted text at a specific index position within a Google Doc using the Google Docs API. Index 1 inserts at the document start.
Unlike the Markdown insert, text is inserted exactly as provided without any formatting interpretation, preserving surrounding document formatting.
<!-- END MANUAL -->
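For orientation, the underlying call is a Docs API `batchUpdate` containing a single `insertText` request. A minimal sketch using the official `google-api-python-client`, assuming you already hold authorized credentials (`creds`) and the target document ID (this is an illustration, not the block's actual implementation):
```python
from googleapiclient.discovery import build

def insert_plain_text(creds, document_id: str, text: str, index: int = 1) -> dict:
    """Insert unformatted text at `index` (1 = start of the document body)."""
    docs = build("docs", "v1", credentials=creds)
    requests = [{"insertText": {"location": {"index": index}, "text": text}}]
    return docs.documents().batchUpdate(
        documentId=document_id, body={"requests": requests}
    ).execute()
```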
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| document | Select a Google Doc to insert into | Document | No |
| text | Plain text to insert (no formatting applied) | str | Yes |
| index | Position index to insert at (1 = start of document) | int | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if insert failed | str |
| result | Result of the insert operation | Dict[str, True] |
| document | The document for chaining | GoogleDriveFile |
### Possible use case
<!-- MANUAL: use_case -->
**Data Insertion**: Insert raw data values at specific positions in documents.
**Template Variables**: Insert variable values at designated template positions.
**Sequential Content**: Add text entries to specific locations in running documents.
<!-- END MANUAL -->
---
## Google Docs Insert Table
### What it is
Insert a table into a Google Doc, optionally with content and Markdown formatting
### How it works
<!-- MANUAL: how_it_works -->
This block inserts a table into a Google Doc at a specified position using the Google Docs API. You can create empty tables by specifying row/column counts, or provide a 2D array of cell content to create pre-populated tables.
Cell content can optionally be formatted as Markdown, enabling rich formatting like bold headers or links within table cells.
<!-- END MANUAL -->
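The empty-table case maps to a single `insertTable` request; pre-populating cells additionally requires follow-up `insertText` requests at the computed cell indices, which the block handles for you. A simplified sketch of the empty-table path only, assuming `creds` and a document ID:
```python
from googleapiclient.discovery import build

def insert_empty_table(creds, document_id: str, rows: int, columns: int) -> dict:
    """Append an empty rows x columns table at the end of the document body."""
    docs = build("docs", "v1", credentials=creds)
    request = {
        "insertTable": {
            "rows": rows,
            "columns": columns,
            # Target the end of the body segment (empty segmentId = main body)
            "endOfSegmentLocation": {"segmentId": ""},
        }
    }
    return docs.documents().batchUpdate(
        documentId=document_id, body={"requests": [request]}
    ).execute()
```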
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| document | Select a Google Doc | Document | No |
| rows | Number of rows (ignored if content provided) | int | No |
| columns | Number of columns (ignored if content provided) | int | No |
| content | Optional 2D array of cell content, e.g. [['Header1', 'Header2'], ['Row1Col1', 'Row1Col2']]. If provided, rows/columns are derived from this. | List[List[str]] | No |
| index | Position to insert table (0 = end of document) | int | No |
| format_as_markdown | Format cell content as Markdown (headers, bold, links, etc.) | bool | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if operation failed | str |
| result | Result of table insertion | Dict[str, True] |
| document | The document for chaining | GoogleDriveFile |
### Possible use case
<!-- MANUAL: use_case -->
**Data Presentation**: Insert tables to display structured data from APIs or databases.
**Report Tables**: Add summary tables with metrics, comparisons, or status information.
**Template Tables**: Create table structures that get populated with dynamic content.
<!-- END MANUAL -->
---
## Google Docs Read
### What it is
Read text content from a Google Doc
### How it works
<!-- MANUAL: how_it_works -->
This block extracts the plain text content from a Google Doc using the Google Docs API. It returns the document's text content without formatting information, along with the document title.
Use this for content analysis, text processing, or feeding document content to AI models for summarization or other processing.
<!-- END MANUAL -->
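Conceptually the read is a `documents().get` followed by a walk over the body's paragraph elements. A rough sketch that ignores tables and other structural elements for brevity, assuming `creds` and a document ID:
```python
from googleapiclient.discovery import build

def read_doc_text(creds, document_id: str) -> tuple[str, str]:
    """Return (title, plain_text) for a Google Doc."""
    docs = build("docs", "v1", credentials=creds)
    doc = docs.documents().get(documentId=document_id).execute()
    chunks = []
    for element in doc.get("body", {}).get("content", []):
        for run in element.get("paragraph", {}).get("elements", []):
            chunks.append(run.get("textRun", {}).get("content", ""))
    return doc.get("title", ""), "".join(chunks)
```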
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| document | Select a Google Doc to read | Document | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if read failed | str |
| text | Plain text content of the document | str |
| title | Document title | str |
| document | The document for chaining | GoogleDriveFile |
### Possible use case
<!-- MANUAL: use_case -->
**Content Extraction**: Read document text for processing, analysis, or AI summarization.
**Search and Index**: Extract text from documents for full-text search indexing.
**Content Migration**: Read document content to transform or migrate to other systems.
<!-- END MANUAL -->
---
## Google Docs Replace All With Markdown
### What it is
Replace entire Google Doc content with formatted Markdown - ideal for LLM/AI output
### How it works
<!-- MANUAL: how_it_works -->
This block clears all existing content from a Google Doc and replaces it with new formatted Markdown content using the Google Docs API. The Markdown is converted to native Google Docs formatting.
This is ideal for completely regenerating document content from AI-generated Markdown output.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| document | Select a Google Doc to replace content in | Document | No |
| markdown | Markdown content to replace the document with | str | Yes |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if operation failed | str |
| result | Result of the replace operation | Dict[str, True] |
| document | The document for chaining | GoogleDriveFile |
### Possible use case
<!-- MANUAL: use_case -->
**Document Regeneration**: Completely replace document content with newly generated AI output.
**Content Refresh**: Update recurring documents with fresh content while preserving the document.
**Template Reset**: Clear and repopulate template documents for new projects or periods.
<!-- END MANUAL -->
---
## Google Docs Replace Content With Markdown
### What it is
Find text and replace it with formatted Markdown - ideal for LLM/AI output and templates
### How it works
<!-- MANUAL: how_it_works -->
This block finds specific text (like a placeholder token) in a Google Doc and replaces it with formatted Markdown content using the Google Docs API. The Markdown is converted to native Google Docs formatting.
Use this for template systems where placeholders like {{SECTION}} are replaced with AI-generated formatted content.
<!-- END MANUAL -->
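For comparison, the plain-text version of this operation is the API's built-in `replaceAllText` request; replacing with formatted Markdown additionally involves locating the match, deleting that range, and inserting styled content, which the block does internally. A sketch of the plain-text analogue only, assuming `creds` and a document ID:
```python
from googleapiclient.discovery import build

def replace_placeholder(creds, document_id: str, find_text: str,
                        replace_text: str, match_case: bool = False) -> dict:
    """Replace every occurrence of `find_text` with plain `replace_text`."""
    docs = build("docs", "v1", credentials=creds)
    request = {
        "replaceAllText": {
            "containsText": {"text": find_text, "matchCase": match_case},
            "replaceText": replace_text,
        }
    }
    return docs.documents().batchUpdate(
        documentId=document_id, body={"requests": [request]}
    ).execute()
```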
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| document | Select a Google Doc | Document | No |
| find_text | Text to find and replace (e.g., '{{PLACEHOLDER}}' or any text) | str | Yes |
| markdown | Markdown content to replace the found text with | str | Yes |
| match_case | Match case when finding text | bool | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if operation failed | str |
| result | Result with replacement count | Dict[str, True] |
| document | The document for chaining | GoogleDriveFile |
### Possible use case
<!-- MANUAL: use_case -->
**Smart Templates**: Replace placeholder tokens with AI-generated formatted content in templates.
**Dynamic Sections**: Populate document sections with contextual formatted content.
**Mail Merge Plus**: Advanced mail merge with formatted content replacement, not just plain text.
<!-- END MANUAL -->
---
## Google Docs Replace Range With Markdown
### What it is
Replace a specific index range in a Google Doc with formatted Markdown - ideal for LLM/AI output
### How it works
<!-- MANUAL: how_it_works -->
This block replaces content between specific start and end index positions with formatted Markdown content using the Google Docs API. The existing content in the range is deleted and replaced with the new formatted content.
Use Get Structure to find the correct index positions. This enables precise replacement of specific document sections with new formatted content.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| document | Select a Google Doc | Document | No |
| markdown | Markdown content to insert in place of the range | str | Yes |
| start_index | Start index of the range to replace (must be >= 1) | int | Yes |
| end_index | End index of the range to replace | int | Yes |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if operation failed | str |
| result | Result of the replace operation | Dict[str, True] |
| document | The document for chaining | GoogleDriveFile |
### Possible use case
<!-- MANUAL: use_case -->
**Section Updates**: Replace specific document sections with updated content while preserving the rest.
**Targeted Regeneration**: Regenerate specific portions of documents with new AI-generated content.
**Incremental Updates**: Update identified sections of recurring reports without affecting other areas.
<!-- END MANUAL -->
---
## Google Docs Set Public Access
### What it is
Make a Google Doc public or private
### How it works
<!-- MANUAL: how_it_works -->
This block modifies the sharing permissions of a Google Doc using the Google Drive API to make it publicly accessible or private. You can set the access level to reader (view only) or commenter.
When made public, anyone with the link can access the document according to the specified role. The share link is returned for distribution.
<!-- END MANUAL -->
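Public access is a Drive permission rather than a Docs operation. A minimal sketch with the Drive v3 API, assuming `creds` and the document's file ID, with error handling omitted:
```python
from googleapiclient.discovery import build

def set_public_access(creds, file_id: str, public: bool, role: str = "reader") -> str:
    """Toggle 'anyone with the link' access and return the document's link."""
    drive = build("drive", "v3", credentials=creds)
    if public:
        drive.permissions().create(
            fileId=file_id, body={"type": "anyone", "role": role}
        ).execute()
    else:
        # Drop any existing 'anyone' permission to make the file private again
        perms = drive.permissions().list(fileId=file_id).execute()
        for perm in perms.get("permissions", []):
            if perm.get("type") == "anyone":
                drive.permissions().delete(
                    fileId=file_id, permissionId=perm["id"]
                ).execute()
    meta = drive.files().get(fileId=file_id, fields="webViewLink").execute()
    return meta["webViewLink"]
```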
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| document | Select a Google Doc | Document | No |
| public | True to make public, False to make private | bool | No |
| role | Permission role for public access | "reader" \| "commenter" | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if operation failed | str |
| result | Result of the operation | Dict[str, True] |
| share_link | Link to the document | str |
| document | The document for chaining | GoogleDriveFile |
### Possible use case
<!-- MANUAL: use_case -->
**Public Publishing**: Make finalized documents publicly accessible for broad distribution.
**Access Toggle**: Automate switching document access based on workflow stages.
**Link Sharing**: Generate shareable links for documents that don't require individual access grants.
<!-- END MANUAL -->
---
## Google Docs Share
### What it is
Share a Google Doc with specific users
### How it works
<!-- MANUAL: how_it_works -->
This block shares a Google Doc with specific users by email address using the Google Drive API. You can set the permission level (reader, writer, commenter) and optionally send a notification email with a custom message.
Leave the email blank to just generate a shareable link. The block returns the share link for easy distribution.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| document | Select a Google Doc to share | Document | No |
| email | Email address to share with. Leave empty for link sharing. | str | No |
| role | Permission role for the user | "reader" \| "writer" \| "commenter" | No |
| send_notification | Send notification email to the user | bool | No |
| message | Optional message to include in notification email | str | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if share failed | str |
| result | Result of the share operation | Dict[str, True] |
| share_link | Link to the document | str |
| document | The document for chaining | GoogleDriveFile |
### Possible use case
<!-- MANUAL: use_case -->
**Automated Collaboration**: Share generated documents with stakeholders automatically after creation.
**Workflow Notifications**: Share documents and notify recipients as part of approval workflows.
**Client Delivery**: Share completed deliverables with clients including notification messages.
<!-- END MANUAL -->
---
@@ -1,203 +1,383 @@
# Gmail
## Gmail Read
# Gmail Add Label
### What it is
A block that retrieves and reads emails from a Gmail account.
### What it does
This block searches for and retrieves emails from a specified Gmail account based on given search criteria. It can fetch multiple emails and provide detailed information about each email, including subject, sender, recipient, date, body content, and attachments.
### How it works
The block connects to the user's Gmail account using their credentials, performs a search based on the provided query, and retrieves the specified number of email messages. It then processes each email to extract relevant information and returns the results.
### Inputs
| Input | Description |
|-------|-------------|
| Credentials | The user's Gmail account credentials for authentication |
| Query | A search query to filter emails (e.g., "is:unread" for unread emails). Ignored if using only the `gmail.metadata` scope. |
| Max Results | The maximum number of emails to retrieve |
### Outputs
| Output | Description |
|--------|-------------|
| Email | Detailed information about a single email (now includes `threadId`) |
| Emails | A list of email data for multiple emails |
| Error | An error message if something goes wrong during the process |
### Possible use case
Automatically checking for new customer inquiries in a support email inbox and organizing them for quick response.
---
## Gmail Send
### What it is
A block that sends emails using a Gmail account.
### What it does
This block allows users to compose and send emails through their Gmail account. It handles the creation of the email message and sends it to the specified recipient.
### How it works
The block authenticates with the user's Gmail account, creates an email message with the provided details (recipient, subject, and body), and then sends the email using Gmail's API.
### Inputs
| Input | Description |
|-------|-------------|
| Credentials | The user's Gmail account credentials for authentication |
| To | The recipient's email address |
| Subject | The subject line of the email |
| Body | The main content of the email |
### Outputs
| Output | Description |
|--------|-------------|
| Result | Confirmation of the sent email, including a message ID |
| Error | An error message if something goes wrong during the process |
### Possible use case
Automatically sending confirmation emails to customers after they make a purchase on an e-commerce website.
---
## Gmail List Labels
### What it is
A block that retrieves all labels (categories) from a Gmail account.
### What it does
This block fetches and lists all the labels or categories that are set up in the user's Gmail account. These labels are used to organize and categorize emails.
### How it works
The block connects to the user's Gmail account and requests a list of all labels. It then processes this information and returns a simplified list of label names and their corresponding IDs.
### Inputs
| Input | Description |
|-------|-------------|
| Credentials | The user's Gmail account credentials for authentication |
### Outputs
| Output | Description |
|--------|-------------|
| Result | A list of labels, including their names and IDs |
| Error | An error message if something goes wrong during the process |
### Possible use case
Creating a dashboard that shows an overview of how many emails are in each category or label in a business email account.
---
## Gmail Add Label
### What it is
A block that adds a label to a specific email in a Gmail account.
### What it does
This block allows users to add a label (category) to a particular email message in their Gmail account. If the label doesn't exist, it creates a new one.
A block that adds a label to a specific email message in Gmail, creating the label if it doesn't exist.
### How it works
<!-- MANUAL: how_it_works -->
The block first checks if the specified label exists in the user's Gmail account. If it doesn't, it creates the label. Then, it adds the label to the specified email message using the message ID.
<!-- END MANUAL -->
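In Gmail API terms this is a `labels.list`/`labels.create` lookup followed by `messages.modify`. A minimal sketch, assuming authorized `creds` (not the block's actual code):
```python
from googleapiclient.discovery import build

def add_label(creds, message_id: str, label_name: str) -> dict:
    """Attach `label_name` to a message, creating the label if needed."""
    gmail = build("gmail", "v1", credentials=creds)
    labels = gmail.users().labels().list(userId="me").execute().get("labels", [])
    label_id = next((l["id"] for l in labels if l["name"] == label_name), None)
    if label_id is None:
        created = gmail.users().labels().create(
            userId="me", body={"name": label_name}
        ).execute()
        label_id = created["id"]
    return gmail.users().messages().modify(
        userId="me", id=message_id, body={"addLabelIds": [label_id]}
    ).execute()
```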
### Inputs
| Input | Description |
|-------|-------------|
| Credentials | The user's Gmail account credentials for authentication |
| Message ID | The unique identifier of the email message to be labeled |
| Label Name | The name of the label to add to the email |
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| message_id | Message ID to add label to | str | Yes |
| label_name | Label name to add | str | Yes |
### Outputs
| Output | Description |
|--------|-------------|
| Result | Confirmation of the label addition, including the label ID |
| Error | An error message if something goes wrong during the process |
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if any | str |
| result | Label addition result | GmailLabelResult |
### Possible use case
<!-- MANUAL: use_case -->
Automatically categorizing incoming customer emails based on their content, adding labels like "Urgent," "Feedback," or "Invoice" for easier processing.
<!-- END MANUAL -->
---
## Gmail Remove Label
## Gmail Create Draft
### What it is
A block that removes a label from a specific email in a Gmail account.
### What it does
This block allows users to remove a label (category) from a particular email message in their Gmail account.
Create draft emails in Gmail with automatic HTML detection and proper text formatting. Plain text drafts preserve natural paragraph flow without 78-character line wrapping. HTML content is automatically detected and formatted correctly.
### How it works
The block first finds the ID of the specified label in the user's Gmail account. If the label exists, it removes it from the specified email message using the message ID.
<!-- MANUAL: how_it_works -->
This block creates a draft email in Gmail without sending it. The draft is saved to your Drafts folder where you can review and send it manually. The block automatically detects HTML content or you can explicitly set the content type.
Plain text emails preserve natural formatting without forced line breaks. HTML emails support rich formatting. File attachments are supported by providing file paths.
<!-- END MANUAL -->
### Inputs
| Input | Description |
|-------|-------------|
| Credentials | The user's Gmail account credentials for authentication |
| Message ID | The unique identifier of the email message to remove the label from |
| Label Name | The name of the label to remove from the email |
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| to | Recipient email addresses | List[str] | Yes |
| subject | Email subject | str | Yes |
| body | Email body (plain text or HTML) | str | Yes |
| cc | CC recipients | List[str] | No |
| bcc | BCC recipients | List[str] | No |
| content_type | Content type: 'auto' (default - detects HTML), 'plain', or 'html' | "auto" \| "plain" \| "html" | No |
| attachments | Files to attach | List[str (file)] | No |
### Outputs
| Output | Description |
|--------|-------------|
| Result | Confirmation of the label removal, including the label ID |
| Error | An error message if something goes wrong during the process |
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if any | str |
| result | Draft creation result | GmailDraftResult |
### Possible use case
Automatically removing the "Unread" label from emails after they have been processed by a customer service representative.
<!-- MANUAL: use_case -->
**Email Review Workflow**: Create draft emails for human review before sending important communications.
**Newsletter Preparation**: Build email drafts with dynamic content that can be finalized before distribution.
**Template Saving**: Save email templates as drafts for quick access and reuse.
<!-- END MANUAL -->
---
## Gmail Draft Reply
### What it is
Create draft replies to Gmail threads with automatic HTML detection and proper text formatting. Plain text draft replies maintain natural paragraph flow without 78-character line wrapping. HTML content is automatically detected and formatted correctly.
### How it works
<!-- MANUAL: how_it_works -->
This block creates a draft reply within an existing email thread. The draft maintains proper threading so your reply appears in the conversation. Use replyAll to respond to all original recipients, or specify custom recipients.
The block preserves the thread context and adds proper email headers for threading. Draft replies can be reviewed in Gmail before sending.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| threadId | Thread ID to reply in | str | Yes |
| parentMessageId | ID of the message being replied to | str | Yes |
| to | To recipients | List[str] | No |
| cc | CC recipients | List[str] | No |
| bcc | BCC recipients | List[str] | No |
| replyAll | Reply to all original recipients | bool | No |
| subject | Email subject | str | No |
| body | Email body (plain text or HTML) | str | Yes |
| content_type | Content type: 'auto' (default - detects HTML), 'plain', or 'html' | "auto" \| "plain" \| "html" | No |
| attachments | Files to attach | List[str (file)] | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| draftId | Created draft ID | str |
| messageId | Draft message ID | str |
| threadId | Thread ID | str |
| status | Draft creation status | str |
### Possible use case
<!-- MANUAL: use_case -->
**Support Response Preparation**: Draft replies to customer inquiries for review before sending.
**Approval Workflows**: Create reply drafts that require manager approval before being sent.
**Scheduled Responses**: Prepare replies to be reviewed and sent at appropriate times.
<!-- END MANUAL -->
---
## Gmail Forward
### What it is
Forward Gmail messages to other recipients with automatic HTML detection and proper formatting. Preserves original message threading and attachments.
### How it works
<!-- MANUAL: how_it_works -->
This block forwards an existing Gmail message to new recipients. The original message content is preserved and can include attachments from the original email. You can add your own message before the forwarded content.
The block handles proper email threading and formatting, prepending "Fwd:" to the subject unless you specify a custom subject.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| messageId | ID of the message to forward | str | Yes |
| to | Recipients to forward the message to | List[str] | Yes |
| cc | CC recipients | List[str] | No |
| bcc | BCC recipients | List[str] | No |
| subject | Optional custom subject (defaults to 'Fwd: [original subject]') | str | No |
| forwardMessage | Optional message to include before the forwarded content | str | No |
| includeAttachments | Include attachments from the original message | bool | No |
| content_type | Content type: 'auto' (default - detects HTML), 'plain', or 'html' | "auto" \| "plain" \| "html" | No |
| additionalAttachments | Additional files to attach | List[str (file)] | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| messageId | Forwarded message ID | str |
| threadId | Thread ID | str |
| status | Forward status | str |
### Possible use case
<!-- MANUAL: use_case -->
**Email Escalation**: Automatically forward emails matching certain criteria to managers or specialists.
**Team Distribution**: Forward important updates to relevant team members based on content.
**Record Keeping**: Forward copies of important communications to an archive address.
<!-- END MANUAL -->
---
## Gmail Get Profile
### What it is
Get the authenticated user's Gmail profile details including email address and message statistics.
### How it works
<!-- MANUAL: how_it_works -->
This block retrieves profile information for the authenticated Gmail user via the Gmail API. It returns the email address, total message count, thread count, and storage usage statistics.
This is useful for verifying which account is connected and gathering basic mailbox statistics.
<!-- END MANUAL -->
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| profile | Gmail user profile information | Profile |
### Possible use case
<!-- MANUAL: use_case -->
**Account Verification**: Confirm which Gmail account is connected before performing operations.
**Usage Monitoring**: Check storage usage and message counts for mailbox management.
**Multi-Account Workflows**: Get the current user's email address to route workflows appropriately.
<!-- END MANUAL -->
---
## Gmail Get Thread
### What it is
A block that retrieves an entire Gmail thread.
A block that retrieves an entire Gmail thread (email conversation) by ID, returning all messages with decoded bodies for reading complete conversations.
### What it does
Given a `threadId`, this block fetches all messages in that thread and decodes the text bodies.
### How it works
<!-- MANUAL: how_it_works -->
This block retrieves a complete Gmail thread (email conversation) by its thread ID. It returns all messages in the thread with decoded bodies, allowing you to read the full conversation history.
The thread includes all messages, their senders, timestamps, and content, making it easy to analyze entire email conversations.
<!-- END MANUAL -->
### Inputs
| Input | Description |
|-------|-------------|
| Credentials | The user's Gmail account credentials for authentication |
| threadId | The ID of the thread to fetch |
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| threadId | Gmail thread ID | str | Yes |
### Outputs
| Output | Description |
|--------|-------------|
| Thread | Gmail thread with decoded messages |
| Error | An error message if something goes wrong |
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| thread | Gmail thread with decoded message bodies | Thread |
### Possible use case
Checking if a recipient replied in an existing conversation.
<!-- MANUAL: use_case -->
**Conversation Analysis**: Read an entire email thread to understand the full context of a discussion.
**Reply Detection**: Check if a recipient has responded within a conversation thread.
**Thread Summarization**: Gather all messages in a thread for AI-powered summarization.
<!-- END MANUAL -->
---
## Gmail List Labels
### What it is
A block that retrieves all labels (categories) from a Gmail account for organizing and categorizing emails.
### How it works
<!-- MANUAL: how_it_works -->
The block connects to the user's Gmail account and requests a list of all labels. It then processes this information and returns a simplified list of label names and their corresponding IDs.
<!-- END MANUAL -->
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if any | str |
| result | List of labels | List[Dict[str, True]] |
### Possible use case
<!-- MANUAL: use_case -->
Creating a dashboard that shows an overview of how many emails are in each category or label in a business email account.
<!-- END MANUAL -->
---
## Gmail Read
### What it is
A block that retrieves and reads emails from a Gmail account based on search criteria, returning detailed message information including subject, sender, body, and attachments.
### How it works
<!-- MANUAL: how_it_works -->
The block connects to the user's Gmail account using their credentials, performs a search based on the provided query, and retrieves the specified number of email messages. It then processes each email to extract relevant information and returns the results.
<!-- END MANUAL -->
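Roughly speaking, the block pages through `messages.list` for the query and fetches each hit with `messages.get`. A trimmed-down sketch that collects a few headers per message, assuming authorized `creds`:
```python
from googleapiclient.discovery import build

def read_emails(creds, query: str = "is:unread", max_results: int = 10) -> list[dict]:
    gmail = build("gmail", "v1", credentials=creds)
    listing = gmail.users().messages().list(
        userId="me", q=query, maxResults=max_results
    ).execute()
    emails = []
    for ref in listing.get("messages", []):
        msg = gmail.users().messages().get(
            userId="me", id=ref["id"], format="metadata",
            metadataHeaders=["Subject", "From", "Date"],
        ).execute()
        headers = {h["name"]: h["value"] for h in msg["payload"]["headers"]}
        emails.append({
            "id": msg["id"],
            "threadId": msg["threadId"],
            "subject": headers.get("Subject", ""),
            "from": headers.get("From", ""),
            "snippet": msg.get("snippet", ""),
        })
    return emails
```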
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| query | Search query for reading emails | str | No |
| max_results | Maximum number of emails to retrieve | int | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if any | str |
| email | Email data | Email |
| emails | List of email data | List[Email] |
### Possible use case
<!-- MANUAL: use_case -->
Automatically checking for new customer inquiries in a support email inbox and organizing them for quick response.
<!-- END MANUAL -->
---
## Gmail Remove Label
### What it is
A block that removes a label from a specific email message in a Gmail account.
### How it works
<!-- MANUAL: how_it_works -->
The block first finds the ID of the specified label in the user's Gmail account. If the label exists, it removes it from the specified email message using the message ID.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| message_id | Message ID to remove label from | str | Yes |
| label_name | Label name to remove | str | Yes |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if any | str |
| result | Label removal result | GmailLabelResult |
### Possible use case
<!-- MANUAL: use_case -->
Automatically removing the "Unread" label from emails after they have been processed by a customer service representative.
<!-- END MANUAL -->
---
## Gmail Reply
### What it is
A block that sends a reply within an existing Gmail thread.
Reply to Gmail threads with automatic HTML detection and proper text formatting. Plain text replies maintain natural paragraph flow without 78-character line wrapping. HTML content is automatically detected and sent with correct MIME type.
### What it does
This block builds a properly formatted reply email and sends it so Gmail keeps it in the same conversation.
### How it works
<!-- MANUAL: how_it_works -->
This block sends a reply directly within an existing Gmail thread. Unlike the draft reply block, this immediately sends the message. The reply maintains proper threading and appears in the conversation.
Use replyAll to respond to all recipients, or specify custom recipients. The block handles email headers and threading automatically.
<!-- END MANUAL -->
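Threading hinges on the RFC 2822 `In-Reply-To`/`References` headers (which carry the parent's `Message-ID` header value, not its Gmail API id) plus the `threadId` passed on the send call. A hedged plain-text-only sketch, assuming authorized `creds`:
```python
import base64
from email.mime.text import MIMEText
from googleapiclient.discovery import build

def send_reply(creds, thread_id: str, parent_rfc_message_id: str,
               to: list[str], subject: str, body: str) -> dict:
    mime = MIMEText(body, "plain")
    mime["To"] = ", ".join(to)
    mime["Subject"] = subject if subject.startswith("Re:") else f"Re: {subject}"
    # These headers keep the reply inside the original conversation
    mime["In-Reply-To"] = parent_rfc_message_id
    mime["References"] = parent_rfc_message_id
    raw = base64.urlsafe_b64encode(mime.as_bytes()).decode()
    gmail = build("gmail", "v1", credentials=creds)
    return gmail.users().messages().send(
        userId="me", body={"raw": raw, "threadId": thread_id}
    ).execute()
```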
### Inputs
| Input | Description |
|-------|-------------|
| Credentials | The user's Gmail account credentials for authentication |
| threadId | The thread to reply in |
| parentMessageId | The ID of the message you are replying to |
| To | List of recipients |
| Cc | List of CC recipients |
| Bcc | List of BCC recipients |
| Subject | Optional subject (defaults to `Re:` prefix) |
| Body | The email body |
| Attachments | Optional files to include |
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| threadId | Thread ID to reply in | str | Yes |
| parentMessageId | ID of the message being replied to | str | Yes |
| to | To recipients | List[str] | No |
| cc | CC recipients | List[str] | No |
| bcc | BCC recipients | List[str] | No |
| replyAll | Reply to all original recipients | bool | No |
| subject | Email subject | str | No |
| body | Email body (plain text or HTML) | str | Yes |
| content_type | Content type: 'auto' (default - detects HTML), 'plain', or 'html' | "auto" \| "plain" \| "html" | No |
| attachments | Files to attach | List[str (file)] | No |
### Outputs
| Output | Description |
|--------|-------------|
| MessageId | The ID of the sent message |
| ThreadId | The thread the reply belongs to |
| Message | Full Gmail message object |
| Error | Error message if something goes wrong |
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| messageId | Sent message ID | str |
| threadId | Thread ID | str |
| message | Raw Gmail message object | Dict[str, True] |
| email | Parsed email object with decoded body and attachments | Email |
### Possible use case
Automatically respond "Thanks, see you then" to a scheduling email while keeping the conversation tidy.
<!-- MANUAL: use_case -->
**Auto-Acknowledgments**: Automatically send acknowledgment replies to incoming support requests.
**Scheduled Follow-ups**: Reply to threads with follow-up messages at appropriate times.
**Conversation Continuity**: Respond to ongoing threads while keeping all messages organized.
<!-- END MANUAL -->
---
## Gmail Send
### What it is
Send emails via Gmail with automatic HTML detection and proper text formatting. Plain text emails are sent without 78-character line wrapping, preserving natural paragraph flow. HTML emails are automatically detected and sent with correct MIME type.
### How it works
<!-- MANUAL: how_it_works -->
The block authenticates with the user's Gmail account, creates an email message with the provided details (recipient, subject, and body), and then sends the email using Gmail's API.
<!-- END MANUAL -->
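The `content_type='auto'` behaviour essentially comes down to sniffing the body for HTML tags before picking the MIME subtype. A simplified sketch without attachments, assuming authorized `creds` (the detection heuristic here is illustrative):
```python
import base64
import re
from email.mime.text import MIMEText
from googleapiclient.discovery import build

def send_email(creds, to: list[str], subject: str, body: str,
               content_type: str = "auto") -> dict:
    if content_type == "auto":
        # Crude heuristic: any tag-like pattern means we send text/html
        content_type = "html" if re.search(r"<[a-zA-Z][^>]*>", body) else "plain"
    mime = MIMEText(body, content_type)
    mime["To"] = ", ".join(to)
    mime["Subject"] = subject
    raw = base64.urlsafe_b64encode(mime.as_bytes()).decode()
    gmail = build("gmail", "v1", credentials=creds)
    return gmail.users().messages().send(userId="me", body={"raw": raw}).execute()
```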
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| to | Recipient email addresses | List[str] | Yes |
| subject | Email subject | str | Yes |
| body | Email body (plain text or HTML) | str | Yes |
| cc | CC recipients | List[str] | No |
| bcc | BCC recipients | List[str] | No |
| content_type | Content type: 'auto' (default - detects HTML), 'plain', or 'html' | "auto" \| "plain" \| "html" | No |
| attachments | Files to attach | List[str (file)] | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if any | str |
| result | Send confirmation | GmailSendResult |
### Possible use case
<!-- MANUAL: use_case -->
Automatically sending confirmation emails to customers after they make a purchase on an e-commerce website.
<!-- END MANUAL -->
---
File diff suppressed because it is too large
@@ -0,0 +1,36 @@
# Hub Spot Company
### What it is
Manages HubSpot companies - create, update, and retrieve company information
### How it works
<!-- MANUAL: how_it_works -->
This block interacts with the HubSpot CRM API to manage company records. It supports three operations: create new companies, update existing companies, and retrieve company information by domain.
Company data is passed as a dictionary with standard HubSpot company properties like name, domain, industry, and custom properties.
<!-- END MANUAL -->
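The create path corresponds to a POST against HubSpot's CRM v3 objects endpoint. A rough sketch with `requests`, assuming a private-app access token in `HUBSPOT_TOKEN` (illustrative, not the block's internals):
```python
import requests

HUBSPOT_TOKEN = "your-private-app-token"  # assumption for this sketch

def create_company(properties: dict) -> dict:
    """Create a company, e.g. properties={'name': 'Acme', 'domain': 'acme.com'}."""
    resp = requests.post(
        "https://api.hubapi.com/crm/v3/objects/companies",
        headers={
            "Authorization": f"Bearer {HUBSPOT_TOKEN}",
            "Content-Type": "application/json",
        },
        json={"properties": properties},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()
```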
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| operation | Operation to perform (create, update, get) | str | No |
| company_data | Company data for create/update operations | Dict[str, True] | No |
| domain | Company domain for get/update operations | str | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| company | Company information | Dict[str, True] |
| status | Operation status | str |
### Possible use case
<!-- MANUAL: use_case -->
**Lead Enrichment**: Create or update company records when new leads come in from forms or integrations.
**Data Sync**: Keep company information synchronized between HubSpot and other business systems.
**Account Management**: Retrieve company details to personalize communications or trigger workflows.
<!-- END MANUAL -->
---
@@ -0,0 +1,36 @@
# Hub Spot Contact
### What it is
Manages HubSpot contacts - create, update, and retrieve contact information
### How it works
<!-- MANUAL: how_it_works -->
This block interacts with the HubSpot CRM API to manage contact records. It supports creating new contacts, updating existing contacts, and retrieving contacts by email address.
Contact data includes standard properties like email, first name, last name, phone, and any custom properties defined in your HubSpot account.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| operation | Operation to perform (create, update, get) | str | No |
| contact_data | Contact data for create/update operations | Dict[str, True] | No |
| email | Email address for get/update operations | str | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| contact | Contact information | Dict[str, True] |
| status | Operation status | str |
### Possible use case
<!-- MANUAL: use_case -->
**Lead Capture**: Create contacts automatically from form submissions or integrations.
**Contact Updates**: Update contact information when customers change their details.
**CRM Lookup**: Retrieve contact details for personalization or workflow decisions.
<!-- END MANUAL -->
---
@@ -0,0 +1,37 @@
# Hub Spot Engagement
### What it is
Manages HubSpot engagements - sends emails and tracks engagement metrics
### How it works
<!-- MANUAL: how_it_works -->
This block manages HubSpot engagements including sending emails and tracking engagement metrics. Use send_email to send emails through HubSpot, or track_engagement to retrieve engagement history for a contact.
Engagement tracking returns metrics like email opens, clicks, and other interactions within a specified timeframe.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| operation | Operation to perform (send_email, track_engagement) | str | No |
| email_data | Email data including recipient, subject, content | Dict[str, True] | No |
| contact_id | Contact ID for engagement tracking | str | No |
| timeframe_days | Number of days to look back for engagement | int | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| result | Operation result | Dict[str, True] |
| status | Operation status | str |
### Possible use case
<!-- MANUAL: use_case -->
**Automated Outreach**: Send personalized emails to contacts based on triggers or workflows.
**Engagement Scoring**: Track contact engagement to prioritize outreach efforts.
**Follow-Up Automation**: Trigger follow-up actions based on engagement metrics.
<!-- END MANUAL -->
---
@@ -0,0 +1,36 @@
# Jina Chunking
### What it is
Chunks texts using Jina AI's segmentation service
### How it works
<!-- MANUAL: how_it_works -->
This block uses Jina AI's segmentation service to split texts into semantically meaningful chunks. Unlike simple splitting by character count, Jina's chunking preserves semantic coherence, making it ideal for RAG applications.
Configure maximum chunk length and optionally return token information for each chunk.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| texts | List of texts to chunk | List[Any] | Yes |
| max_chunk_length | Maximum length of each chunk | int | No |
| return_tokens | Whether to return token information | bool | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| chunks | List of chunked texts | List[Any] |
| tokens | List of token information for each chunk | List[Any] |
### Possible use case
<!-- MANUAL: use_case -->
**RAG Preprocessing**: Chunk documents for retrieval-augmented generation systems.
**Embedding Preparation**: Split long texts into optimal chunks for embedding generation.
**Document Processing**: Break down large documents for analysis or storage in vector databases.
<!-- END MANUAL -->
---
@@ -0,0 +1,34 @@
# Jina Embedding
### What it is
Generates embeddings using Jina AI
### How it works
<!-- MANUAL: how_it_works -->
This block generates vector embeddings for text using Jina AI's embedding models. Embeddings are numerical representations that capture semantic meaning, enabling similarity search and clustering.
Optionally specify which Jina model to use for embedding generation.
<!-- END MANUAL -->
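Jina serves embeddings over an OpenAI-compatible REST endpoint. A hedged sketch with `requests`; the model name is an example rather than the block's default, and an API key is assumed in `JINA_API_KEY`:
```python
import requests

JINA_API_KEY = "your-jina-api-key"  # assumption for this sketch

def embed_texts(texts: list[str], model: str = "jina-embeddings-v3") -> list[list[float]]:
    resp = requests.post(
        "https://api.jina.ai/v1/embeddings",
        headers={"Authorization": f"Bearer {JINA_API_KEY}"},
        json={"model": model, "input": texts},
        timeout=60,
    )
    resp.raise_for_status()
    return [item["embedding"] for item in resp.json()["data"]]
```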
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| texts | List of texts to embed | List[Any] | Yes |
| model | Jina embedding model to use | str | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| embeddings | List of embeddings | List[Any] |
### Possible use case
<!-- MANUAL: use_case -->
**Semantic Search**: Generate embeddings to enable semantic similarity search over documents.
**Vector Database**: Create embeddings for storage in vector databases like Pinecone or Weaviate.
**Document Clustering**: Embed documents to cluster similar content or find related items.
<!-- END MANUAL -->
---
@@ -0,0 +1,36 @@
# Fact Checker
### What it is
This block checks the factuality of a given statement using Jina AI's Grounding API.
### How it works
<!-- MANUAL: how_it_works -->
This block uses Jina AI's Grounding API to verify the factuality of statements. It analyzes the statement against reliable sources and returns a factuality score, result, reasoning, and supporting references.
The API searches for evidence and determines whether the statement is supported, contradicted, or uncertain based on available information.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| statement | The statement to check for factuality | str | Yes |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| factuality | The factuality score of the statement | float |
| result | The result of the factuality check | bool |
| reason | The reason for the factuality result | str |
| references | List of references supporting or contradicting the statement | List[Reference] |
### Possible use case
<!-- MANUAL: use_case -->
**Content Verification**: Verify claims in articles or social media posts before publishing.
**AI Output Validation**: Check factuality of AI-generated content to ensure accuracy.
**Research Support**: Validate statements in research or journalism with supporting references.
<!-- END MANUAL -->
---
@@ -0,0 +1,56 @@
# Extract Website Content
### What it is
This block scrapes the content from the given web URL.
### How it works
<!-- MANUAL: how_it_works -->
The block sends a request to the given URL, downloads the HTML content, and uses content extraction algorithms to identify and extract the main text content of the page.
<!-- END MANUAL -->
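When `raw_content` is false, the Jina Reader path amounts to prefixing the target URL with `https://r.jina.ai/`, which returns the page's main content as readable text. A minimal sketch (the optional API key handling is an assumption):
```python
import requests

def extract_content(url: str, raw_content: bool = False,
                    jina_api_key: str | None = None) -> str:
    if raw_content:
        # Raw scrape: just fetch the page as-is
        resp = requests.get(url, timeout=30)
        resp.raise_for_status()
        return resp.text
    # Jina Reader returns cleaned, readable text/markdown for the page
    headers = {"Authorization": f"Bearer {jina_api_key}"} if jina_api_key else {}
    resp = requests.get(f"https://r.jina.ai/{url}", headers=headers, timeout=60)
    resp.raise_for_status()
    return resp.text
```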
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| url | The URL to scrape the content from | str | Yes |
| raw_content | Whether to do a raw scrape of the content or use Jina-ai Reader to scrape the content | bool | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the content cannot be retrieved | str |
| content | The scraped content from the given URL | str |
### Possible use case
<!-- MANUAL: use_case -->
A data analyst could use this block to automatically extract article content from news websites for sentiment analysis or topic modeling.
<!-- END MANUAL -->
---
## Search The Web
### What it is
This block searches the internet for the given search query.
### How it works
<!-- MANUAL: how_it_works -->
The block sends the search query to a search engine API, processes the results, and returns them in a structured format.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| query | The search query to search the web for | str | Yes |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| results | The search results including content from top 5 URLs | str |
### Possible use case
<!-- MANUAL: use_case -->
A content creator could use this block to research trending topics in their field, gathering ideas for new articles or videos.
<!-- END MANUAL -->
---
@@ -0,0 +1,35 @@
# Linear Create Comment
### What it is
Creates a new comment on a Linear issue
### How it works
<!-- MANUAL: how_it_works -->
This block creates a new comment on a Linear issue using the Linear GraphQL API. Provide the issue ID and comment text, and the block posts the comment and returns its ID.
Comments appear in the issue's activity timeline and notify relevant team members.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| issue_id | ID of the issue to comment on | str | Yes |
| comment | Comment text to add to the issue | str | Yes |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| comment_id | ID of the created comment | str |
| comment_body | Text content of the created comment | str |
### Possible use case
<!-- MANUAL: use_case -->
**Automated Updates**: Post status updates or progress reports to issues automatically.
**Integration Comments**: Add comments when external systems (CI/CD, monitoring) detect relevant changes.
**Cross-Tool Communication**: Post comments from chatbots or customer support integrations.
<!-- END MANUAL -->
---
@@ -0,0 +1,109 @@
# Linear Create Issue
### What it is
Creates a new issue on Linear
### How it works
<!-- MANUAL: how_it_works -->
This block creates a new issue in Linear using the GraphQL API. Specify the team, title, description, and optionally priority and project. The issue is created immediately and assigned to the specified team's workflow.
Returns the created issue's ID and title for tracking or further operations.
<!-- END MANUAL -->
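Linear exposes a single GraphQL endpoint, and issue creation is an `issueCreate` mutation. A simplified sketch with `requests`, assuming a personal API key in `LINEAR_API_KEY` and an already-resolved team ID (the block resolves the team and project from their names for you):
```python
import requests

LINEAR_API_KEY = "your-linear-api-key"  # assumption for this sketch

ISSUE_CREATE = """
mutation IssueCreate($input: IssueCreateInput!) {
  issueCreate(input: $input) {
    success
    issue { id title }
  }
}
"""

def create_issue(team_id: str, title: str, description: str,
                 priority: int | None = None) -> dict:
    issue_input = {"teamId": team_id, "title": title, "description": description}
    if priority is not None:
        issue_input["priority"] = priority
    resp = requests.post(
        "https://api.linear.app/graphql",
        # Personal API keys go in Authorization as-is; OAuth tokens use "Bearer ..."
        headers={"Authorization": LINEAR_API_KEY, "Content-Type": "application/json"},
        json={"query": ISSUE_CREATE, "variables": {"input": issue_input}},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["data"]["issueCreate"]["issue"]
```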
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| title | Title of the issue | str | Yes |
| description | Description of the issue | str | Yes |
| team_name | Name of the team to create the issue on | str | Yes |
| priority | Priority of the issue | int | No |
| project_name | Name of the project to create the issue on | str | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| issue_id | ID of the created issue | str |
| issue_title | Title of the created issue | str |
### Possible use case
<!-- MANUAL: use_case -->
**Bug Reporting**: Automatically create issues from error monitoring or customer reports.
**Feature Requests**: Convert feature requests from forms or support tickets into Linear issues.
**Task Automation**: Create issues based on scheduled events or external triggers.
<!-- END MANUAL -->
---
## Linear Get Project Issues
### What it is
Gets issues from a Linear project filtered by status and assignee
### How it works
<!-- MANUAL: how_it_works -->
This block retrieves issues from a Linear project with optional filtering by status and assignee. It queries the Linear GraphQL API and returns matching issues with their details.
Optionally include comments in the response for comprehensive issue data.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| project | Name of the project to get issues from | str | Yes |
| status | Status/state name to filter issues by (e.g., 'In Progress', 'Done') | str | Yes |
| is_assigned | Filter by assignee status - True to get assigned issues, False to get unassigned issues | bool | No |
| include_comments | Whether to include comments in the response | bool | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| issues | List of issues matching the criteria | List[Issue] |
### Possible use case
<!-- MANUAL: use_case -->
**Sprint Reports**: Generate reports of issues in specific states for sprint reviews.
**Workload Analysis**: Find unassigned or overdue issues across projects.
**Status Dashboards**: Build dashboards showing issue distribution by status.
<!-- END MANUAL -->
---
## Linear Search Issues
### What it is
Searches for issues on Linear
### How it works
<!-- MANUAL: how_it_works -->
This block searches for issues in Linear using a text query. It searches across issue titles, descriptions, and other fields to find matching issues.
Returns a list of issues matching the search term.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| term | Term to search for issues | str | Yes |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| issues | List of issues | List[Issue] |
### Possible use case
<!-- MANUAL: use_case -->
**Duplicate Detection**: Search for existing issues before creating new ones.
**Related Issues**: Find issues related to a specific topic or feature.
**Quick Lookup**: Search for issues by keyword for customer support or research.
<!-- END MANUAL -->
---
@@ -0,0 +1,33 @@
# Linear Search Projects
### What it is
Searches for projects on Linear
### How it works
<!-- MANUAL: how_it_works -->
This block searches for projects in Linear using a text query. It queries the Linear GraphQL API to find projects matching the search term.
Returns a list of projects with their details for further use in workflows.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| term | Term to search for projects | str | Yes |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| projects | List of projects | List[Project] |
### Possible use case
<!-- MANUAL: use_case -->
**Project Discovery**: Find projects by name to use in issue creation or queries.
**Portfolio Overview**: Search for projects to build portfolio dashboards.
**Dynamic Forms**: Populate project dropdowns in custom interfaces.
<!-- END MANUAL -->
---
@@ -1,173 +1,743 @@
# Large Language Model (LLM) Blocks
## AI Structured Response Generator
# AI Ad Maker Video Creator
### What it is
A block that generates structured responses using a Large Language Model (LLM).
### What it does
It takes a prompt and other parameters, sends them to an LLM, and returns a structured response in a specified format.
Creates an AI-generated 30-second advert (text + images)
### How it works
The block sends the input prompt to a chosen LLM, along with any system prompts and expected response format. It then processes the LLM's response, ensuring it matches the expected format, and returns the structured data.
<!-- MANUAL: how_it_works -->
This block generates video advertisements by combining AI-generated visuals with narrated scripts. Line breaks in the script create scene transitions. Choose from various voices and background music options.
Optionally provide your own images via input_media_urls, or let the AI generate visuals. The finished video is returned as a URL for download or embedding.
<!-- END MANUAL -->
### Inputs
| Input | Description |
|-------|-------------|
| Prompt | The main text prompt to send to the LLM |
| Expected Format | A dictionary specifying the structure of the desired response |
| Model | The specific LLM to use (e.g., GPT-4 Turbo, Claude 3) |
| API Key | The secret key for accessing the LLM service |
| System Prompt | An optional prompt to guide the LLM's behavior |
| Retry | Number of attempts to generate a valid response |
| Prompt Values | Dictionary of values to fill in the prompt template |
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| script | Short advertising copy. Line breaks create new scenes. | str | Yes |
| ratio | Aspect ratio | str | No |
| target_duration | Desired length of the ad in seconds. | int | No |
| voice | Narration voice | "Lily" \| "Daniel" \| "Brian" | No |
| background_music | Background track | "Observer" \| "Futuristic Beat" \| "Science Documentary" | No |
| input_media_urls | List of image URLs to feature in the advert. | List[str] | No |
| use_only_provided_media | Restrict visuals to supplied images only. | bool | No |
### Outputs
| Output | Description |
|--------|-------------|
| Response | The structured response from the LLM |
| Error | Any error message if the process fails |
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| video_url | URL of the finished advert | str |
### Possible use case
Extracting specific information from unstructured text, such as generating a product description with predefined fields (name, features, price) from a lengthy product review.
<!-- MANUAL: use_case -->
**Product Marketing**: Create quick promotional videos for products or services.
**Social Media Ads**: Generate short video ads for social media advertising campaigns.
**Content Automation**: Automatically create video ads from product descriptions or scripts.
<!-- END MANUAL -->
---
## AI Text Generator
## AI Condition
### What it is
A block that generates text responses using a Large Language Model (LLM).
### What it does
It takes a prompt and other parameters, sends them to an LLM, and returns a text response.
Uses AI to evaluate natural language conditions and provide conditional outputs
### How it works
The block sends the input prompt to a chosen LLM, processes the response, and returns the generated text.
<!-- MANUAL: how_it_works -->
This block uses an LLM to evaluate natural language conditions that can't be expressed with simple comparisons. Describe the condition in plain English, and the AI determines if it's true or false for the given input.
The result routes data to yes_output or no_output, enabling intelligent branching based on meaning, sentiment, or complex criteria.
<!-- END MANUAL -->
### Inputs
| Input | Description |
|-------|-------------|
| Prompt | The main text prompt to send to the LLM |
| Model | The specific LLM to use (e.g., GPT-4 Turbo, Claude 3) |
| API Key | The secret key for accessing the LLM service |
| System Prompt | An optional prompt to guide the LLM's behavior |
| Retry | Number of attempts to generate a valid response |
| Prompt Values | Dictionary of values to fill in the prompt template |
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| input_value | The input value to evaluate with the AI condition | Input Value | Yes |
| condition | A plaintext English description of the condition to evaluate | str | Yes |
| yes_value | (Optional) Value to output if the condition is true. If not provided, input_value will be used. | Yes Value | No |
| no_value | (Optional) Value to output if the condition is false. If not provided, input_value will be used. | No Value | No |
| model | The language model to use for evaluating the condition. | "o3-mini" \| "o3-2025-04-16" \| "o1" | No |
### Outputs
| Output | Description |
|--------|-------------|
| Response | The text response from the LLM |
| Error | Any error message if the process fails |
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the AI evaluation is uncertain or fails | str |
| result | The result of the AI condition evaluation (True or False) | bool |
| yes_output | The output value if the condition is true | Yes Output |
| no_output | The output value if the condition is false | No Output |
### Possible use case
Generating creative writing, such as short stories or poetry, based on a given theme or starting sentence.
<!-- MANUAL: use_case -->
**Sentiment Routing**: Route messages differently based on whether they express frustration or satisfaction.
---
**Content Moderation**: Check if content contains inappropriate material or policy violations.
## AI Text Summarizer
### What it is
A block that summarizes long texts using a Large Language Model (LLM).
### What it does
It takes a long text, breaks it into manageable chunks, summarizes each chunk, and then combines these summaries into a final summary.
### How it works
The block splits the input text into smaller chunks, sends each chunk to an LLM for summarization, and then combines these summaries. If the combined summary is still too long, it repeats the process until a concise summary is achieved.
### Inputs
| Input | Description |
|-------|-------------|
| Text | The long text to be summarized |
| Model | The specific LLM to use for summarization |
| Focus | The main topic or aspect to focus on in the summary |
| Style | The desired style of the summary (e.g., concise, detailed, bullet points) |
| API Key | The secret key for accessing the LLM service |
| Max Tokens | The maximum number of tokens for each chunk |
| Chunk Overlap | The number of overlapping tokens between chunks |
### Outputs
| Output | Description |
|--------|-------------|
| Summary | The final summarized text |
| Error | Any error message if the process fails |
### Possible use case
Summarizing lengthy research papers or articles to quickly grasp the main points and key findings.
**Intent Detection**: Determine if a user message is a question, complaint, or request.
<!-- END MANUAL -->
---
## AI Conversation
### What it is
A block that facilitates multi-turn conversations using a Large Language Model (LLM).
### What it does
It takes a list of conversation messages, sends them to an LLM, and returns the model's response to continue the conversation.
A block that facilitates multi-turn conversations with a Large Language Model (LLM), maintaining context across message exchanges.
### How it works
<!-- MANUAL: how_it_works -->
The block sends the entire conversation history to the chosen LLM, including system messages, user inputs, and previous responses. It then returns the LLM's response as the next part of the conversation.
<!-- END MANUAL -->
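For illustration, a minimal Python sketch of the kind of message list this block maintains, assuming the common role/content message structure (the exact schema accepted by your chosen model provider may differ):

```python
# Assumed role/content message format; each run appends the model's reply
# so the next call carries the full conversation context.
messages = [
    {"role": "system", "content": "You are a helpful customer-service assistant."},
    {"role": "user", "content": "My order #1234 hasn't arrived yet."},
    {"role": "assistant", "content": "I'm sorry to hear that. Let me check the status."},
    {"role": "user", "content": "Thanks, it was placed last Tuesday."},
]

def extend_history(messages: list[dict], reply: str) -> list[dict]:
    """Append the assistant's reply so later turns keep the conversation context."""
    return messages + [{"role": "assistant", "content": reply}]
```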
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| prompt | The prompt to send to the language model. | str | No |
| messages | List of messages in the conversation. | List[Any] | Yes |
| model | The language model to use for the conversation. | "o3-mini" | "o3-2025-04-16" | "o1" | No |
| max_tokens | The maximum number of tokens to generate in the chat completion. | int | No |
| ollama_host | Ollama host for local models | str | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| response | The model's response to the conversation. | str |
| prompt | The prompt sent to the language model. | List[Any] |
### Possible use case
<!-- MANUAL: use_case -->
Creating an interactive chatbot that can maintain context over multiple exchanges, such as a customer service assistant or a language learning companion.
<!-- END MANUAL -->
---
## AI Image Customizer
### What it is
Generate and edit custom images using Google's Nano-Banana model from Gemini 2.5. Provide a prompt and optional reference images to create or modify images.
### How it works
<!-- MANUAL: how_it_works -->
This block uses Google's Gemini Nano-Banana models for image generation and editing. Provide a text prompt describing the desired image, and optionally include reference images for style guidance or modification.
Configure aspect ratio to match your needs and choose between JPG or PNG output format. The generated image is returned as a URL.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| prompt | A text description of the image you want to generate | str | Yes |
| model | The AI model to use for image generation and editing | "google/nano-banana" | "google/nano-banana-pro" | No |
| images | Optional list of input images to reference or modify | List[str (file)] | No |
| aspect_ratio | Aspect ratio of the generated image | "match_input_image" | "1:1" | "2:3" | No |
| output_format | Format of the output image | "jpg" | "png" | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| image_url | URL of the generated image | str (file) |
### Possible use case
<!-- MANUAL: use_case -->
**Product Visualization**: Generate product images with different backgrounds or settings.
**Creative Content**: Create unique images for marketing, social media, or presentations.
**Image Modification**: Edit existing images by providing them as references with modification prompts.
<!-- END MANUAL -->
---
## AI Image Editor
### What it is
Edit images using BlackForest Labs' Flux Kontext models. Provide a prompt and optional reference image to generate a modified image.
### How it works
<!-- MANUAL: how_it_works -->
This block uses BlackForest Labs' Flux Kontext models for context-aware image editing. Describe the desired edit in the prompt, and optionally provide an input image to modify.
Choose between Flux Kontext Pro or Max for different quality/speed tradeoffs. Set a seed for reproducible results across multiple runs.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| prompt | Text instruction describing the desired edit | str | Yes |
| input_image | Reference image URI (jpeg, png, gif, webp) | str (file) | No |
| aspect_ratio | Aspect ratio of the generated image | "match_input_image" | "1:1" | "16:9" | No |
| seed | Random seed. Set for reproducible generation | int | No |
| model | Model variant to use | "Flux Kontext Pro" | "Flux Kontext Max" | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| output_image | URL of the transformed image | str (file) |
### Possible use case
<!-- MANUAL: use_case -->
**Style Transfer**: Transform images to match different artistic styles or moods.
**Object Editing**: Add, remove, or modify specific elements in existing images.
**Background Changes**: Replace or modify image backgrounds while preserving subjects.
<!-- END MANUAL -->
---
## AI Image Generator
### What it is
Generate images using various AI models through a unified interface
### How it works
<!-- MANUAL: how_it_works -->
This block generates images from text prompts using your choice of AI models including Flux and Recraft. Select the image size (square, landscape, portrait, wide, or tall) and visual style to match your needs.
The unified interface allows switching between models without changing your workflow, making it easy to compare results or adapt to different use cases.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| prompt | Text prompt for image generation | str | Yes |
| model | The AI model to use for image generation | "Flux 1.1 Pro" | "Flux 1.1 Pro Ultra" | "Recraft v3" | No |
| size | Format of the generated image: Square (profile pictures, icons), Landscape (traditional photo format), Portrait (vertical photos, portraits), Wide (cinematic format, desktop wallpapers), Tall (mobile wallpapers, social media stories) | "square" | "landscape" | "portrait" | No |
| style | Visual style for the generated image | "any" | "realistic_image" | "realistic_image/b_and_w" | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| image_url | URL of the generated image | str |
### Possible use case
<!-- MANUAL: use_case -->
**Content Creation**: Generate images for blog posts, articles, or social media.
**Marketing Visuals**: Create product images, banners, or promotional graphics.
**Illustration**: Generate custom illustrations for presentations or documents.
<!-- END MANUAL -->
---
## AI List Generator
### What it is
A block that creates lists of items based on prompts using a Large Language Model (LLM), with optional source data for context.
### How it works
<!-- MANUAL: how_it_works -->
The block formulates a prompt based on the given focus or source data, sends it to the chosen LLM, and then processes the response to ensure it's a valid Python list. It can retry multiple times if the initial attempts fail.
<!-- END MANUAL -->
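A rough sketch of the retry-and-validate loop described above, assuming the model is asked to return a JSON array; `call_llm` is a placeholder for whatever sends the prompt and returns raw text, not the block's actual implementation:

```python
import json

def parse_list_with_retries(call_llm, prompt: str, max_retries: int = 3) -> list[str]:
    """Ask the LLM for a JSON array and retry until the reply parses as a list."""
    last_error = None
    for _ in range(max_retries):
        raw = call_llm(prompt)
        try:
            result = json.loads(raw)
            if isinstance(result, list):
                return [str(item) for item in result]
            last_error = f"Expected a list, got {type(result).__name__}"
        except json.JSONDecodeError as exc:
            last_error = str(exc)
        # Feed the failure back so the next attempt can correct itself.
        prompt += f"\nYour previous reply was invalid ({last_error}). Return only a JSON array."
    raise ValueError(f"No valid list after {max_retries} attempts: {last_error}")
```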
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| focus | The focus of the list to generate. | str | No |
| source_data | The data to generate the list from. | str | No |
| model | The language model to use for generating the list. | "o3-mini" | "o3-2025-04-16" | "o1" | No |
| max_retries | Maximum number of retries for generating a valid list. | int | No |
| force_json_output | Whether to force the LLM to produce a JSON-only response. This can increase the block's reliability, but may also reduce the quality of the response because it prohibits the LLM from reasoning before providing its JSON response. | bool | No |
| max_tokens | The maximum number of tokens to generate in the chat completion. | int | No |
| ollama_host | Ollama host for local models | str | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| generated_list | The generated list. | List[str] |
| list_item | Each individual item in the list. | str |
| prompt | The prompt sent to the language model. | List[Any] |
### Possible use case
<!-- MANUAL: use_case -->
Automatically generating a list of key points or action items from a long meeting transcript or summarizing the main topics discussed in a series of documents.
<!-- END MANUAL -->
---
# Providers
There are several providers that AutoGPT users can use for running inference with LLM models.
## Llama API
Llama API is a Meta-hosted API service that helps you integrate Llama models quickly and efficiently. Using OpenAI-compatible endpoints, you can easily access the power of Llama models without the need for complex setup or configuration!
Join the [waitlist](https://llama.developer.meta.com?utm_source=partner-autogpt&utm_medium=readme) to get access!
Try the Llama API provider by selecting any of the following LLM model names from the AI blocks mentioned above:
- Llama-4-Scout-17B-16E-Instruct-FP8
- Llama-4-Maverick-17B-128E-Instruct-FP8
- Llama-3.3-8B-Instruct
- Llama-3-70B-Instruct
---
## AI Music Generator
### What it is
This block generates music using Meta's MusicGen model on Replicate.
### How it works
<!-- MANUAL: how_it_works -->
This block uses Meta's MusicGen model to generate original music from text descriptions. Describe the desired music style, mood, and instruments in the prompt, and the AI creates a matching audio track.
Configure duration, temperature (for variety), and output format. Higher temperature produces more diverse results, while lower values stay closer to typical patterns.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| prompt | A description of the music you want to generate | str | Yes |
| music_gen_model_version | Model to use for generation | "stereo-large" | "melody-large" | "large" | No |
| duration | Duration of the generated audio in seconds | int | No |
| temperature | Controls the 'conservativeness' of the sampling process. Higher temperature means more diversity | float | No |
| top_k | Reduces sampling to the k most likely tokens | int | No |
| top_p | Reduces sampling to tokens with cumulative probability of p. When set to 0 (default), top_k sampling is used | float | No |
| classifier_free_guidance | Increases the influence of inputs on the output. Higher values produce lower-variance outputs that adhere more closely to inputs | int | No |
| output_format | Output format for generated audio | "wav" | "mp3" | No |
| normalization_strategy | Strategy for normalizing audio | "loudness" | "clip" | "peak" | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| result | URL of the generated audio file | str |
### Possible use case
<!-- MANUAL: use_case -->
**Video Soundtracks**: Generate background music for videos, podcasts, or presentations.
**Content Creation**: Create original music for social media or marketing content.
**Prototyping**: Quickly generate music concepts for creative projects.
<!-- END MANUAL -->
---
## AI Screenshot To Video Ad
### What it is
Turns a screenshot into an engaging, avatar-narrated video advert.
### How it works
<!-- MANUAL: how_it_works -->
This block creates video advertisements featuring a screenshot with AI-generated narration. Provide the screenshot URL and narration script, and the block generates a video with voice and background music.
Choose from various voices and music tracks. The video showcases the screenshot while the AI narrator reads your script.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| script | Narration that will accompany the screenshot. | str | Yes |
| screenshot_url | Screenshot or image URL to showcase. | str | Yes |
| ratio | - | str | No |
| target_duration | - | int | No |
| voice | - | "Lily" | "Daniel" | "Brian" | No |
| background_music | - | "Observer" | "Futuristic Beat" | "Science Documentary" | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| video_url | Rendered video URL | str |
### Possible use case
<!-- MANUAL: use_case -->
**App Demos**: Create narrated demonstrations of software features from screenshots.
**Product Tours**: Turn product screenshots into engaging video walkthroughs.
**Tutorial Videos**: Generate instructional videos from step-by-step screenshots.
<!-- END MANUAL -->
---
## AI Shortform Video Creator
### What it is
Creates a shortform video using revid.ai
### How it works
<!-- MANUAL: how_it_works -->
This block creates short-form videos from scripts using revid.ai. Format scripts with line breaks for scene changes and use [brackets] to guide visual generation. Text outside brackets becomes narration.
Choose video style (stock video, moving images, or AI-generated), voice, background music, and generation presets. The finished video URL is returned for download or sharing.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| script | 1. Use short, punctuated sentences. 2. Use line breaks to create a new clip. 3. Text outside of brackets is spoken by the AI, and [text between brackets] guides the visual generation; for example, [close-up of a cat] will show a close-up of a cat. | str | Yes |
| ratio | Aspect ratio of the video | str | No |
| resolution | Resolution of the video | str | No |
| frame_rate | Frame rate of the video | int | No |
| generation_preset | Generation preset for visual style - only affects AI-generated visuals | "Default" | "Anime" | "Realist" | No |
| background_music | Background music track | "Observer" | "Futuristic Beat" | "Science Documentary" | No |
| voice | AI voice to use for narration | "Lily" | "Daniel" | "Brian" | No |
| video_style | Type of visual media to use for the video | "stockVideo" | "movingImage" | "aiVideo" | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| video_url | The URL of the created video | str |
### Possible use case
<!-- MANUAL: use_case -->
**Social Media Content**: Create TikTok, Reels, or Shorts content automatically.
**Explainer Videos**: Generate short educational or promotional videos.
**Content Repurposing**: Convert written content into engaging short-form video.
<!-- END MANUAL -->
---
## AI Structured Response Generator
### What it is
A block that generates structured JSON responses using a Large Language Model (LLM), with schema validation and format enforcement.
### How it works
<!-- MANUAL: how_it_works -->
The block sends the input prompt to a chosen LLM, along with any system prompts and expected response format. It then processes the LLM's response, ensuring it matches the expected format, and returns the structured data.
<!-- END MANUAL -->
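A minimal sketch of the validation step described above, assuming the reply is JSON and `expected_format` maps field names to descriptions as in the block's input (this is an illustration, not the block's real parser):

```python
import json

def validate_structured_response(raw: str, expected_format: dict[str, str]) -> dict:
    """Parse the LLM reply as JSON and check that every expected field is present.

    Raises ValueError so a caller can retry with corrective feedback.
    """
    data = json.loads(raw)
    missing = [key for key in expected_format if key not in data]
    if missing:
        raise ValueError(f"Response is missing expected fields: {missing}")
    return data

# Example: validate against the same shape you would pass to expected_format.
validate_structured_response(
    '{"name": "Widget", "price": "9.99"}',
    {"name": "Product name", "price": "Price in USD"},
)
```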
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| prompt | The prompt to send to the language model. | str | Yes |
| expected_format | Expected format of the response. If provided, the response will be validated against this format. The keys should be the expected fields in the response, and the values should be the description of the field. | Dict[str, str] | Yes |
| list_result | Whether the response should be a list of objects in the expected format. | bool | No |
| model | The language model to use for answering the prompt. | "o3-mini" | "o3-2025-04-16" | "o1" | No |
| force_json_output | Whether to force the LLM to produce a JSON-only response. This can increase the block's reliability, but may also reduce the quality of the response because it prohibits the LLM from reasoning before providing its JSON response. | bool | No |
| sys_prompt | The system prompt to provide additional context to the model. | str | No |
| conversation_history | The conversation history to provide context for the prompt. | List[Dict[str, True]] | No |
| retry | Number of times to retry the LLM call if the response does not match the expected format. | int | No |
| prompt_values | Values used to fill in the prompt. The values can be used in the prompt by putting them in a double curly braces, e.g. {{variable_name}}. | Dict[str, str] | No |
| max_tokens | The maximum number of tokens to generate in the chat completion. | int | No |
| compress_prompt_to_fit | Whether to compress the prompt to fit within the model's context window. | bool | No |
| ollama_host | Ollama host for local models | str | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| response | The response object generated by the language model. | Dict[str, True] | List[Dict[str, True]] |
| prompt | The prompt sent to the language model. | List[Any] |
### Possible use case
<!-- MANUAL: use_case -->
Extracting specific information from unstructured text, such as generating a product description with predefined fields (name, features, price) from a lengthy product review.
<!-- END MANUAL -->
---
## AI Text Generator
### What it is
A block that produces text responses using a Large Language Model (LLM) based on customizable prompts and system instructions.
### How it works
<!-- MANUAL: how_it_works -->
The block sends the input prompt to a chosen LLM, processes the response, and returns the generated text.
<!-- END MANUAL -->
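For reference, a small sketch of how the double-curly-brace substitution described for `prompt_values` might work; the block's actual templating rules may differ, so treat this as an assumption:

```python
import re

def fill_prompt(template: str, prompt_values: dict[str, str]) -> str:
    """Replace {{variable_name}} placeholders with values from prompt_values."""
    def substitute(match: re.Match) -> str:
        key = match.group(1)
        return prompt_values.get(key, match.group(0))  # leave unknown keys untouched
    return re.sub(r"\{\{(\w+)\}\}", substitute, template)

print(fill_prompt("Write a {{tone}} email about {{topic}}.",
                  {"tone": "friendly", "topic": "our Q3 results"}))
# -> Write a friendly email about our Q3 results.
```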
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| prompt | The prompt to send to the language model. You can use any of the {keys} from Prompt Values to fill in the prompt with values from the prompt values dictionary by putting them in curly braces. | str | Yes |
| model | The language model to use for answering the prompt. | "o3-mini" | "o3-2025-04-16" | "o1" | No |
| sys_prompt | The system prompt to provide additional context to the model. | str | No |
| retry | Number of times to retry the LLM call if the response does not match the expected format. | int | No |
| prompt_values | Values used to fill in the prompt. The values can be used in the prompt by putting them in a double curly braces, e.g. {{variable_name}}. | Dict[str, str] | No |
| ollama_host | Ollama host for local models | str | No |
| max_tokens | The maximum number of tokens to generate in the chat completion. | int | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| response | The response generated by the language model. | str |
| prompt | The prompt sent to the language model. | List[Any] |
### Possible use case
<!-- MANUAL: use_case -->
Generating creative writing, such as short stories or poetry, based on a given theme or starting sentence.
<!-- END MANUAL -->
---
## AI Text Summarizer
### What it is
A block that summarizes long texts using a Large Language Model (LLM), with configurable focus topics and summary styles.
### How it works
<!-- MANUAL: how_it_works -->
The block splits the input text into smaller chunks, sends each chunk to an LLM for summarization, and then combines these summaries. If the combined summary is still too long, it repeats the process until a concise summary is achieved.
<!-- END MANUAL -->
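A simplified sketch of the chunk-then-recombine strategy described above, using word counts as a stand-in for token counts and a placeholder `call_llm` function (not the block's actual implementation):

```python
def chunk_text(words: list[str], max_words: int, overlap: int) -> list[str]:
    """Split a word list into overlapping chunks; overlap must be smaller than max_words."""
    chunks, start = [], 0
    while start < len(words):
        chunks.append(" ".join(words[start:start + max_words]))
        start += max_words - overlap
    return chunks

def summarize(text: str, call_llm, max_words: int = 800, overlap: int = 50) -> str:
    """Summarize each chunk, then recursively summarize the combined result until it fits."""
    words = text.split()
    if len(words) <= max_words:
        return call_llm(f"Summarize concisely:\n{text}")
    partial = [call_llm(f"Summarize concisely:\n{chunk}")
               for chunk in chunk_text(words, max_words, overlap)]
    return summarize(" ".join(partial), call_llm, max_words, overlap)
```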
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| text | The text to summarize. | str | Yes |
| model | The language model to use for summarizing the text. | "o3-mini" | "o3-2025-04-16" | "o1" | No |
| focus | The topic to focus on in the summary | str | No |
| style | The style of the summary to generate. | "concise" | "detailed" | "bullet points" | No |
| max_tokens | The maximum number of tokens to generate in the chat completion. | int | No |
| chunk_overlap | The number of overlapping tokens between chunks to maintain context. | int | No |
| ollama_host | Ollama host for local models | str | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| summary | The final summary of the text. | str |
| prompt | The prompt sent to the language model. | List[Any] |
### Possible use case
<!-- MANUAL: use_case -->
Summarizing lengthy research papers or articles to quickly grasp the main points and key findings.
<!-- END MANUAL -->
---
## Code Generation
### What it is
Generate or refactor code using OpenAI's Codex (Responses API).
### How it works
<!-- MANUAL: how_it_works -->
This block uses OpenAI's Codex model optimized for code generation and refactoring. Provide a prompt describing the code you need, and optionally a system prompt with coding guidelines or context.
Configure reasoning_effort to control how much the model "thinks" before responding. The block returns generated code along with any reasoning the model produced.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| prompt | Primary coding request passed to the Codex model. | str | Yes |
| system_prompt | Optional instructions injected via the Responses API instructions field. | str | No |
| model | Codex-optimized model served via the Responses API. | "gpt-5.1-codex" | No |
| reasoning_effort | Controls the Responses API reasoning budget. Select 'none' to skip reasoning configs. | "none" | "low" | "medium" | No |
| max_output_tokens | Upper bound for generated tokens (hard limit 128,000). Leave blank to let OpenAI decide. | int | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| response | Code-focused response returned by the Codex model. | str |
| reasoning | Reasoning summary returned by the model, if available. | str |
| response_id | ID of the Responses API call for auditing/debugging. | str |
### Possible use case
<!-- MANUAL: use_case -->
**Code Automation**: Generate boilerplate code, functions, or entire modules from descriptions.
**Refactoring**: Transform existing code to follow different patterns or conventions.
**Code Completion**: Fill in missing implementation details based on signatures or comments.
<!-- END MANUAL -->
---
## Create Talking Avatar Video
### What it is
This block integrates with D-ID to create video clips and retrieve their URLs.
### How it works
<!-- MANUAL: how_it_works -->
The block sends a request to the D-ID API with your specified parameters. It then regularly checks the status of the video creation process until it's complete or an error occurs.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| script_input | The text input for the script | str | Yes |
| provider | The voice provider to use | "microsoft" | "elevenlabs" | "amazon" | No |
| voice_id | The voice ID to use, get list of voices [here](https://docs.agpt.co/server/d_id) | str | No |
| presenter_id | The presenter ID to use | str | No |
| driver_id | The driver ID to use | str | No |
| result_format | The desired result format | "mp4" | "gif" | "wav" | No |
| crop_type | The crop type for the presenter | "wide" | "square" | "vertical" | No |
| subtitles | Whether to include subtitles | bool | No |
| ssml | Whether the input is SSML | bool | No |
| max_polling_attempts | Maximum number of polling attempts | int | No |
| polling_interval | Interval between polling attempts in seconds | int | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| video_url | The URL of the created video | str |
### Possible use case
<!-- MANUAL: use_case -->
A marketing team could use this block to create engaging video content for social media. They could input a script promoting a new product, select a friendly-looking avatar, and generate a video that explains the product's features in an appealing way.
<!-- END MANUAL -->
---
## Ideogram Model
### What it is
This block runs Ideogram models with both simple and advanced settings.
### How it works
<!-- MANUAL: how_it_works -->
This block generates images using Ideogram's models (V1, V2, V3) which excel at rendering text within images. Configure aspect ratio, style type, and optionally enable MagicPrompt for enhanced results.
Advanced options include upscaling, custom color palettes, and negative prompts to exclude unwanted elements. Set a seed for reproducible generation.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| prompt | Text prompt for image generation | str | Yes |
| ideogram_model_name | The name of the Image Generation Model, e.g., V_3 | "V_3" | "V_2" | "V_1" | No |
| aspect_ratio | Aspect ratio for the generated image | "ASPECT_10_16" | "ASPECT_16_10" | "ASPECT_9_16" | No |
| upscale | Upscale the generated image | "AI Upscale" | "No Upscale" | No |
| magic_prompt_option | Whether to use MagicPrompt for enhancing the request | "AUTO" | "ON" | "OFF" | No |
| seed | Random seed. Set for reproducible generation | int | No |
| style_type | Style type to apply, applicable for V_2 and above | "AUTO" | "GENERAL" | "REALISTIC" | No |
| negative_prompt | Description of what to exclude from the image | str | No |
| color_palette_name | Color palette preset name, choose 'None' to skip | "NONE" | "EMBER" | "FRESH" | No |
| custom_color_palette | Only available for model version V_2 or V_2_TURBO. Provide one or more color hex codes (e.g., ['#000030', '#1C0C47', '#9900FF', '#4285F4', '#FFFFFF']) to define a custom color palette. Only used if 'color_palette_name' is 'NONE'. | List[str] | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| result | Generated image URL | str |
### Possible use case
<!-- MANUAL: use_case -->
**Text-Heavy Graphics**: Create images with logos, signs, or text overlays.
**Marketing Materials**: Generate promotional images with clear, readable text.
**Social Media Graphics**: Create quote images, announcements, or branded content with text.
<!-- END MANUAL -->
---
## Perplexity
### What it is
Query Perplexity's sonar models with real-time web search capabilities and receive annotated responses with source citations.
### How it works
<!-- MANUAL: how_it_works -->
This block queries Perplexity's sonar models which combine LLM capabilities with real-time web search. Responses include source citations as annotations, providing verifiable references for the information.
Choose from different sonar model variants including deep-research for comprehensive analysis. The block returns both the response text and structured citation data.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| prompt | The query to send to the Perplexity model. | str | Yes |
| model | The Perplexity sonar model to use. | "perplexity/sonar" | "perplexity/sonar-pro" | "perplexity/sonar-deep-research" | No |
| system_prompt | Optional system prompt to provide context to the model. | str | No |
| max_tokens | The maximum number of tokens to generate. | int | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| response | The response from the Perplexity model. | str |
| annotations | List of URL citations and annotations from the response. | List[Dict[str, True]] |
### Possible use case
<!-- MANUAL: use_case -->
**Research Automation**: Get answers to questions with verifiable sources for fact-checking.
**Current Events**: Query real-time information that LLMs with static training data can't provide.
**Competitive Intelligence**: Research companies, products, or markets with cited sources.
<!-- END MANUAL -->
---
## Smart Decision Maker
### What it is
Uses AI to intelligently decide what tool to use.
### How it works
<!-- MANUAL: how_it_works -->
This block enables agentic behavior by letting an LLM decide which tools to use based on the prompt. Connect tool outputs to feed back results, creating autonomous reasoning loops.
Configure agent_mode_max_iterations to control loop behavior: 0 for single decisions, -1 for infinite looping, or a positive number for max iterations. The block outputs tool calls or a finished message.
<!-- END MANUAL -->
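A schematic sketch of the loop behavior implied by `agent_mode_max_iterations`; `decide` and `execute_tool` are placeholders standing in for the LLM call and tool execution, not the block's real internals:

```python
def run_agent_loop(decide, execute_tool, max_iterations: int):
    """Illustrates 0 = single decision, -1 = loop until finished, N = capped iterations.

    `decide` returns either ("tool", name, args) or ("finished", message).
    """
    last_output = None
    iteration = 0
    while True:
        kind, *payload = decide(last_output)
        if kind == "finished":
            return payload[0]
        if max_iterations == 0:
            # Traditional mode: yield the tool call for external execution.
            return ("tool_call", *payload)
        last_output = execute_tool(*payload)
        iteration += 1
        if max_iterations > 0 and iteration >= max_iterations:
            return ("stopped", f"Reached {max_iterations} iterations")
        # max_iterations == -1 keeps looping until the model reports it is finished.
```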
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| prompt | The prompt to send to the language model. | str | Yes |
| model | The language model to use for answering the prompt. | "o3-mini" | "o3-2025-04-16" | "o1" | No |
| multiple_tool_calls | Whether to allow multiple tool calls in a single response. | bool | No |
| sys_prompt | The system prompt to provide additional context to the model. | str | No |
| conversation_history | The conversation history to provide context for the prompt. | List[Dict[str, True]] | No |
| last_tool_output | The output of the last tool that was called. | Last Tool Output | No |
| retry | Number of times to retry the LLM call if the response does not match the expected format. | int | No |
| prompt_values | Values used to fill in the prompt. The values can be used in the prompt by putting them in a double curly braces, e.g. {{variable_name}}. | Dict[str, str] | No |
| max_tokens | The maximum number of tokens to generate in the chat completion. | int | No |
| ollama_host | Ollama host for local models | str | No |
| agent_mode_max_iterations | Maximum iterations for agent mode. 0 = traditional mode (single LLM call, yield tool calls for external execution), -1 = infinite agent mode (loop until finished), 1+ = agent mode with max iterations limit. | int | No |
| conversation_compaction | Automatically compact the context window once it hits the limit | bool | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| tools | The tools that are available to use. | Tools |
| finished | The finished message to display to the user. | str |
| conversations | The conversation history to provide context for the prompt. | List[Any] |
### Possible use case
<!-- MANUAL: use_case -->
**Autonomous Agents**: Build agents that can independently decide which tools to use for tasks.
**Dynamic Workflows**: Create workflows that adapt their execution path based on AI decisions.
**Multi-Tool Orchestration**: Let AI coordinate multiple tools to accomplish complex goals.
<!-- END MANUAL -->
---
## Unreal Text To Speech
### What it is
Converts text to speech using the Unreal Speech API
### How it works
<!-- MANUAL: how_it_works -->
This block converts text into natural-sounding speech using Unreal Speech API. Provide the text content and optionally select a specific voice ID to customize the audio output.
The generated audio is returned as an MP3 URL that can be downloaded, played, or used as input for video creation blocks.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| text | The text to be converted to speech | str | Yes |
| voice_id | The voice ID to use for text-to-speech conversion | str | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| mp3_url | The URL of the generated MP3 file | str |
### Possible use case
<!-- MANUAL: use_case -->
**Voiceover Generation**: Create narration audio for videos, presentations, or tutorials.
**Accessibility**: Convert written content to audio for visually impaired users.
**Audio Content**: Generate podcast intros, announcements, or automated phone messages.
<!-- END MANUAL -->
---

@@ -0,0 +1,328 @@
# Calculator
### What it is
Performs a mathematical operation on two numbers.
### How it works
<!-- MANUAL: how_it_works -->
The Calculator block takes in two numbers and an operation choice. It then applies the chosen operation to the numbers and returns the result. If rounding is selected, it rounds the result to the nearest whole number.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| operation | Choose the math operation you want to perform | "Add" | "Subtract" | "Multiply" | Yes |
| a | Enter the first number (A) | float | Yes |
| b | Enter the second number (B) | float | Yes |
| round_result | Do you want to round the result to a whole number? | bool | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| result | The result of your calculation | float |
### Possible use case
<!-- MANUAL: use_case -->
A user wants to quickly perform a calculation, such as adding two numbers or calculating a percentage. They can input the numbers and operation into this block and receive the result instantly.
<!-- END MANUAL -->
---
## Condition
### What it is
Handles conditional logic based on comparison operators
### How it works
<!-- MANUAL: how_it_works -->
This block compares two values using standard operators (==, !=, >, <, >=, <=) and routes data based on the result. The comparison result determines which output receives data: yes_output for true conditions, no_output for false.
Optionally specify yes_value and no_value to output different data than the input values. If not specified, value1 is used as the output value.
<!-- END MANUAL -->
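A minimal sketch of the comparison-and-routing behavior described above, using Python's `operator` module (an illustration of the logic, not the block's source):

```python
import operator

OPERATORS = {
    "==": operator.eq, "!=": operator.ne,
    ">": operator.gt, "<": operator.lt,
    ">=": operator.ge, "<=": operator.le,
}

def evaluate_condition(value1, op: str, value2, yes_value=None, no_value=None):
    """Return (result, yes_output, no_output) the way the block's outputs are described."""
    result = OPERATORS[op](value1, value2)
    if result:
        return True, (yes_value if yes_value is not None else value1), None
    return False, None, (no_value if no_value is not None else value1)

print(evaluate_condition(120, ">", 100, yes_value="needs approval"))
# -> (True, 'needs approval', None)
```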
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| value1 | Enter the first value for comparison | Value1 | Yes |
| operator | Choose the comparison operator | "==" | "!=" | ">" | Yes |
| value2 | Enter the second value for comparison | Value2 | Yes |
| yes_value | (Optional) Value to output if the condition is true. If not provided, value1 will be used. | Yes Value | No |
| no_value | (Optional) Value to output if the condition is false. If not provided, value1 will be used. | No Value | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| result | The result of the condition evaluation (True or False) | bool |
| yes_output | The output value if the condition is true | Yes Output |
| no_output | The output value if the condition is false | No Output |
### Possible use case
<!-- MANUAL: use_case -->
**Threshold Checks**: Route workflow differently when values exceed limits (e.g., order total > $100 triggers approval).
**Status Validation**: Check if a status equals "complete" or "error" to branch workflow logic.
**Numeric Comparisons**: Compare scores, counts, or metrics to conditionally trigger actions.
<!-- END MANUAL -->
---
## Count Items
### What it is
Counts the number of items in a collection.
### How it works
<!-- MANUAL: how_it_works -->
The Count Items block receives a collection as input. It then determines the type of collection and uses the appropriate method to count the items. For most collections, it uses the length function. For other iterable objects, it counts the items one by one.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| collection | Enter the collection you want to count. This can be a list, dictionary, string, or any other iterable. | Collection | Yes |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| count | The number of items in the collection | int |
### Possible use case
<!-- MANUAL: use_case -->
A user has a list of customer names and wants to quickly determine how many customers are in the list. They can input the list into this block and receive the total count immediately.
<!-- END MANUAL -->
---
## Data Sampling
### What it is
This block samples data from a given dataset using various sampling methods.
### How it works
<!-- MANUAL: how_it_works -->
This block extracts a sample from a dataset using various methods: random sampling, systematic sampling (every nth item), or top-N selection. Advanced options include stratified sampling by key, weighted sampling, and cluster sampling.
Configure sample_size to control how many items to select. Use random_seed for reproducible results. The accumulate option collects data before sampling, useful when processing streaming inputs.
<!-- END MANUAL -->
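A quick sketch of the three basic sampling methods described above (random, systematic, top-N); the stratified, weighted, and cluster options are omitted here for brevity:

```python
import random

def sample(data: list, size: int, method: str = "random", seed: int | None = None) -> list:
    """Return a subset of `data` using the chosen sampling method."""
    if seed is not None:
        random.seed(seed)          # makes random sampling reproducible
    if method == "random":
        return random.sample(data, min(size, len(data)))
    if method == "systematic":
        step = max(len(data) // size, 1)
        return data[::step][:size]  # every nth item
    if method == "top":
        return data[:size]          # first N items
    raise ValueError(f"Unknown sampling method: {method}")
```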
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| data | The dataset to sample from. Can be a single dictionary, a list of dictionaries, or a list of lists. | Dict[str, True] | List[Dict[str, True] | List[Any]] | Yes |
| sample_size | The number of samples to take from the dataset. | int | No |
| sampling_method | The method to use for sampling. | "random" | "systematic" | "top" | No |
| accumulate | Whether to accumulate data before sampling. | bool | No |
| random_seed | Seed for random number generator (optional). | int | No |
| stratify_key | Key to use for stratified sampling (required for stratified sampling). | str | No |
| weight_key | Key to use for weighted sampling (required for weighted sampling). | str | No |
| cluster_key | Key to use for cluster sampling (required for cluster sampling). | str | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| sampled_data | The sampled subset of the input data. | List[Dict[str, True] | List[Any]] |
| sample_indices | The indices of the sampled data in the original dataset. | List[int] |
### Possible use case
<!-- MANUAL: use_case -->
**A/B Testing**: Randomly sample users or records for testing different workflow paths.
**Representative Subsets**: Extract stratified samples from large datasets for analysis or testing.
**Performance Testing**: Select a smaller sample from large data for workflow development and debugging.
<!-- END MANUAL -->
---
## If Input Matches
### What it is
Checks whether an input matches a specified value and routes data based on the result.
### How it works
<!-- MANUAL: how_it_works -->
This block checks if an input matches a specified value and routes data accordingly. When the input equals the value, data flows to yes_output; otherwise, it goes to no_output.
Use yes_value and no_value to specify what data to output in each case. This provides a simple equality check for branching workflow logic.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| input | The input to match against | Input | Yes |
| value | The value to compare the input against | Value | Yes |
| yes_value | The value to output if the input matches | Yes Value | No |
| no_value | The value to output if the input does not match | No Value | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| result | The result of the condition evaluation (True or False) | bool |
| yes_output | The output value if the condition is true | Yes Output |
| no_output | The output value if the condition is false | No Output |
### Possible use case
<!-- MANUAL: use_case -->
**Category Routing**: Route items to different processing paths based on their category or type.
**Feature Flags**: Check if a feature flag equals "enabled" to conditionally execute new functionality.
**Status Handling**: Branch workflow based on specific status values like "pending", "approved", or "rejected".
<!-- END MANUAL -->
---
## Pinecone Init
### What it is
Initializes a Pinecone index
### How it works
<!-- MANUAL: how_it_works -->
This block initializes or connects to a Pinecone vector database index. Specify the index name, vector dimension, and distance metric (cosine, euclidean, or dot product) for new indexes.
For serverless deployment, configure the cloud provider and region. The block returns the initialized index name for use with insert and query operations.
<!-- END MANUAL -->
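For orientation, a minimal sketch of the equivalent call with the Pinecone Python SDK (assuming the v3+ `pinecone` package; the placeholder API key, index name, and dimension are examples, and exact arguments may vary by SDK version):

```python
from pinecone import Pinecone, ServerlessSpec

pc = Pinecone(api_key="YOUR_PINECONE_API_KEY")  # placeholder credential
pc.create_index(
    name="documents",
    dimension=1536,          # must match the embedding model you use
    metric="cosine",         # or "euclidean" / "dotproduct"
    spec=ServerlessSpec(cloud="aws", region="us-east-1"),
)
```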
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| index_name | Name of the Pinecone index | str | Yes |
| dimension | Dimension of the vectors | int | No |
| metric | Distance metric for the index | str | No |
| cloud | Cloud provider for serverless | str | No |
| region | Region for serverless | str | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| index | Name of the initialized Pinecone index | str |
| message | Status message | str |
### Possible use case
<!-- MANUAL: use_case -->
**RAG Pipeline Setup**: Initialize a vector index for storing document embeddings in retrieval-augmented generation.
**Semantic Search**: Set up a vector database for similarity search across products, documents, or media.
**Knowledge Base**: Create a searchable vector store for FAQ answers or support documentation.
<!-- END MANUAL -->
---
## Pinecone Insert
### What it is
Upload data to a Pinecone index
### How it works
<!-- MANUAL: how_it_works -->
This block uploads vectors and associated text chunks to a Pinecone index. Each chunk is paired with its embedding vector, and optional metadata can be attached for filtering during queries.
Use namespaces to organize vectors into logical groups within the same index. The upsert operation adds new vectors or updates existing ones with matching IDs.
<!-- END MANUAL -->
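A hedged sketch of the upsert the block performs, again assuming the v3+ Pinecone SDK; the chunk texts and embedding vectors below are placeholders:

```python
from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_PINECONE_API_KEY")
index = pc.Index("documents")

chunks = ["AutoGPT lets you build agents.", "Blocks are composable steps."]
embeddings = [[0.1] * 1536, [0.2] * 1536]  # stand-ins for real embedding vectors

index.upsert(
    vectors=[
        {"id": f"chunk-{i}", "values": vec, "metadata": {"text": text}}
        for i, (text, vec) in enumerate(zip(chunks, embeddings))
    ],
    namespace="docs",
)
```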
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| index | Initialized Pinecone index | str | Yes |
| chunks | List of text chunks to ingest | List[Any] | Yes |
| embeddings | List of embeddings corresponding to the chunks | List[Any] | Yes |
| namespace | Namespace to use in Pinecone | str | No |
| metadata | Additional metadata to store with each vector | Dict[str, True] | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| upsert_response | Response from Pinecone upsert operation | str |
### Possible use case
<!-- MANUAL: use_case -->
**Document Indexing**: Store document chunks with their embeddings for later semantic search.
**Knowledge Ingestion**: Add FAQ entries, product descriptions, or support articles to a searchable index.
**Memory Storage**: Store conversation history or agent memories as searchable vectors.
<!-- END MANUAL -->
---
## Pinecone Query
### What it is
Queries a Pinecone index
### How it works
<!-- MANUAL: how_it_works -->
This block searches a Pinecone index for vectors similar to a query vector. Specify top_k to control how many results to return, and use namespace to search within a specific partition.
Results include similarity scores and optionally the vector values and metadata. Combined results aggregate the text chunks for easy use in RAG pipelines.
<!-- END MANUAL -->
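A minimal sketch of the similarity query and the kind of combined context it enables, assuming the v3+ Pinecone SDK and that each vector was stored with a `text` metadata field as in the insert example above:

```python
from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_PINECONE_API_KEY")
index = pc.Index("documents")

query_vector = [0.1] * 1536  # embedding of the user's question
results = index.query(
    vector=query_vector,
    top_k=3,
    namespace="docs",
    include_metadata=True,
)

# Combine the matched text chunks for use as RAG context.
context = "\n".join(match.metadata["text"] for match in results.matches)
```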
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| query_vector | Query vector | List[Any] | Yes |
| namespace | Namespace to query in Pinecone | str | No |
| top_k | Number of top results to return | int | No |
| include_values | Whether to include vector values in the response | bool | No |
| include_metadata | Whether to include metadata in the response | bool | No |
| host | Host for pinecone | str | No |
| idx_name | Index name for pinecone | str | Yes |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| results | Query results from Pinecone | Results |
| combined_results | Combined results from Pinecone | Combined Results |
### Possible use case
<!-- MANUAL: use_case -->
**Semantic Search**: Find the most relevant documents or answers based on meaning, not just keywords.
**RAG Context**: Retrieve relevant context passages to augment LLM prompts with domain-specific knowledge.
**Similar Content**: Find products, articles, or media similar to a reference item.
<!-- END MANUAL -->
---
## Step Through Items
### What it is
Iterates over a list or dictionary and outputs each item.
### How it works
<!-- MANUAL: how_it_works -->
When given a list or dictionary, the block processes each item individually. For lists, it keeps track of the item's position (index). For dictionaries, it focuses on the values, using the value as both the item and the key in the output.
<!-- END MANUAL -->
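A small sketch of the iteration behavior described above; it mirrors the documented outputs (index as key for lists, value as both key and item for dictionaries) but is only an illustration:

```python
def step_through(items):
    """Yield (key, item) pairs the way the block's outputs are described."""
    if isinstance(items, dict):
        for value in items.values():
            yield value, value        # value serves as both key and item
    else:
        for index, item in enumerate(items):
            yield index, item         # list index serves as the key

for key, item in step_through(["alice@example.com", "bob@example.com"]):
    print(key, item)  # 0 alice@example.com, then 1 bob@example.com
```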
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| items | The list or dictionary of items to iterate over | List[Any] | No |
| items_object | The list or dictionary of items to iterate over | Dict[str, True] | No |
| items_str | The list or dictionary of items to iterate over | str | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| item | The current item in the iteration | Item |
| key | The key or index of the current item in the iteration | Key |
### Possible use case
<!-- MANUAL: use_case -->
Imagine you have a list of customer names and you want to perform a specific action for each customer, like sending a personalized email. This block could help you go through the list one by one, allowing you to process each customer individually.
<!-- END MANUAL -->
---

@@ -0,0 +1,462 @@
# Agent Executor
### What it is
Executes an existing agent inside your agent
### How it works
<!-- MANUAL: how_it_works -->
This block runs another agent as a sub-agent within your workflow. You provide the agent's graph ID, version, and input data, and the block executes that agent and returns its outputs.
Input and output schemas define the expected data structure for communication between the parent and child agents, enabling modular, reusable agent composition.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| user_id | User ID | str | Yes |
| graph_id | Graph ID | str | Yes |
| graph_version | Graph Version | int | Yes |
| agent_name | Name to display in the Builder UI | str | No |
| inputs | Input data for the graph | Dict[str, True] | Yes |
| input_schema | Input schema for the graph | Dict[str, True] | Yes |
| output_schema | Output schema for the graph | Dict[str, True] | Yes |
### Possible use case
<!-- MANUAL: use_case -->
**Modular Workflows**: Break complex workflows into smaller, reusable agents that can be composed together.
**Specialized Agents**: Call domain-specific agents (like a research agent or formatter) from a main orchestration agent.
**Dynamic Routing**: Execute different agents based on input type or user preferences.
<!-- END MANUAL -->
---
## Execute Code
### What it is
Executes code in a sandbox environment with internet access.
### How it works
<!-- MANUAL: how_it_works -->
This block executes Python, JavaScript, or Bash code in an isolated E2B sandbox with internet access. Use setup_commands to install dependencies before running your code.
The sandbox includes pip and npm pre-installed. Set timeout to limit execution time, and use dispose_sandbox to clean up after execution or keep the sandbox running for follow-up steps.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| setup_commands | Shell commands to set up the sandbox before running the code. You can use `curl` or `git` to install your desired Debian-based package manager; `pip` and `npm` are pre-installed. These commands are executed with `sh`, in the foreground. | List[str] | No |
| code | Code to execute in the sandbox | str | No |
| language | Programming language to execute | "python" | "js" | "bash" | No |
| timeout | Execution timeout in seconds | int | No |
| dispose_sandbox | Whether to dispose of the sandbox immediately after execution. If disabled, the sandbox will run until its timeout expires. | bool | No |
| template_id | You can use an E2B sandbox template by entering its ID here. Check out the E2B docs for more details: [E2B - Sandbox template](https://e2b.dev/docs/sandbox-template) | str | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| main_result | The main result from the code execution | Main Result |
| results | List of results from the code execution | List[CodeExecutionResult] |
| response | Text output (if any) of the main execution result | str |
| stdout_logs | Standard output logs from execution | str |
| stderr_logs | Standard error logs from execution | str |
### Possible use case
<!-- MANUAL: use_case -->
**Data Processing**: Run Python scripts to transform, analyze, or visualize data that can't be handled by standard blocks.
**Custom Integrations**: Execute code to call APIs or services not covered by built-in blocks.
**Dynamic Computation**: Generate and execute code based on AI suggestions for flexible problem-solving.
<!-- END MANUAL -->
---
## Execute Code Step
### What it is
Execute code in a previously instantiated sandbox.
### How it works
<!-- MANUAL: how_it_works -->
This block executes additional code in a sandbox that was previously created with the Instantiate Code Sandbox block. The sandbox maintains state between steps, so variables and installed packages persist.
Use this for multi-step code execution where each step builds on previous results. Set dispose_sandbox to true on the final step to clean up.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| sandbox_id | ID of the sandbox instance to execute the code in | str | Yes |
| step_code | Code to execute in the sandbox | str | No |
| language | Programming language to execute | "python" | "js" | "bash" | No |
| dispose_sandbox | Whether to dispose of the sandbox after executing this code. | bool | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| main_result | The main result from the code execution | Main Result |
| results | List of results from the code execution | List[CodeExecutionResult] |
| response | Text output (if any) of the main execution result | str |
| stdout_logs | Standard output logs from execution | str |
| stderr_logs | Standard error logs from execution | str |
### Possible use case
<!-- MANUAL: use_case -->
**Iterative Processing**: Load data in one step, transform it in another, and export in a third.
**Stateful Computation**: Build up results across multiple code executions with shared variables.
**Interactive Analysis**: Run exploratory data analysis steps sequentially in the same environment.
<!-- END MANUAL -->
---
## Get Reddit Posts
### What it is
This block fetches Reddit posts from a defined subreddit name.
### How it works
<!-- MANUAL: how_it_works -->
The block connects to Reddit using provided credentials, accesses the specified subreddit, and retrieves posts based on the given parameters. It can limit the number of posts, stop at a specific post, or fetch posts within a certain time frame.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| subreddit | Subreddit name, excluding the /r/ prefix | str | No |
| last_minutes | Post time to stop minutes ago while fetching posts | int | No |
| last_post | Post ID to stop when reached while fetching posts | str | No |
| post_limit | Number of posts to fetch | int | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| post | Reddit post | RedditPost |
| posts | List of all Reddit posts | List[RedditPost] |
### Possible use case
<!-- MANUAL: use_case -->
A content curator could use this block to gather recent posts from a specific subreddit for analysis, summarization, or inclusion in a newsletter.
<!-- END MANUAL -->
---
## Instantiate Code Sandbox
### What it is
Instantiate a sandbox environment with internet access in which you can execute code with the Execute Code Step block.
### How it works
<!-- MANUAL: how_it_works -->
This block creates a persistent E2B sandbox environment that can be used for multiple code execution steps. Run setup_commands and setup_code to prepare the environment with dependencies and initial state.
The sandbox persists until its timeout expires or it's explicitly disposed. Use the returned sandbox_id with Execute Code Step blocks for subsequent code execution.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| setup_commands | Shell commands to set up the sandbox before running the code. You can use `curl` or `git` to install your desired Debian-based package manager; `pip` and `npm` are pre-installed. These commands are executed with `sh`, in the foreground. | List[str] | No |
| setup_code | Code to execute in the sandbox | str | No |
| language | Programming language to execute | "python" | "js" | "bash" | No |
| timeout | Execution timeout in seconds | int | No |
| template_id | You can use an E2B sandbox template by entering its ID here. Check out the E2B docs for more details: [E2B - Sandbox template](https://e2b.dev/docs/sandbox-template) | str | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| sandbox_id | ID of the sandbox instance | str |
| response | Text result (if any) of the setup code execution | str |
| stdout_logs | Standard output logs from execution | str |
| stderr_logs | Standard error logs from execution | str |
### Possible use case
<!-- MANUAL: use_case -->
**Complex Pipelines**: Set up an environment with data science libraries for multi-step analysis.
**Persistent State**: Create a sandbox with loaded models or data that multiple workflow branches can access.
**Custom Environments**: Configure specialized environments with specific package versions for reproducible execution.
<!-- END MANUAL -->
---
## Post Reddit Comment
### What it is
This block posts a Reddit comment on a specified Reddit post.
### How it works
<!-- MANUAL: how_it_works -->
The block connects to Reddit using the provided credentials, locates the specified post, and then adds the given comment to that post.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| data | Reddit comment | RedditComment | Yes |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| comment_id | Posted comment ID | str |
### Possible use case
<!-- MANUAL: use_case -->
An automated moderation system could use this block to post pre-defined responses or warnings on Reddit posts that violate community guidelines.
<!-- END MANUAL -->
---
## Publish To Medium
### What it is
Publishes a post to Medium.
### How it works
<!-- MANUAL: how_it_works -->
This block publishes articles to Medium using their API. Provide the content in HTML or Markdown format along with a title, tags, and publishing options. The author_id can be obtained from Medium's /me API endpoint.
Configure publish_status to publish immediately, save as draft, or make unlisted. The block returns the published post's ID and URL.
<!-- END MANUAL -->
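For context, a hedged sketch of the underlying call based on Medium's public v1 API (the endpoint path, camelCase field names, and response shape are assumptions taken from that API's documentation; the block handles authentication and error parsing for you):

```python
import requests

ACCESS_TOKEN = "YOUR_MEDIUM_ACCESS_TOKEN"   # placeholder
AUTHOR_ID = "YOUR_AUTHOR_ID"                # from GET https://api.medium.com/v1/me

response = requests.post(
    f"https://api.medium.com/v1/users/{AUTHOR_ID}/posts",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json={
        "title": "Hello from AutoGPT",
        "contentFormat": "markdown",
        "content": "# Hello\nThis post was published automatically.",
        "tags": ["automation", "autogpt"],
        "publishStatus": "draft",
    },
    timeout=30,
)
response.raise_for_status()
print(response.json()["data"]["url"])  # assumed response shape: {"data": {..., "url": ...}}
```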
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| author_id | The Medium AuthorID of the user. You can get this by calling the /me endpoint of the Medium API: `curl -H "Authorization: Bearer YOUR_ACCESS_TOKEN" https://api.medium.com/v1/me`; the response will contain the authorId field. | str | No |
| title | The title of your Medium post | str | Yes |
| content | The main content of your Medium post | str | Yes |
| content_format | The format of the content: 'html' or 'markdown' | str | Yes |
| tags | List of tags for your Medium post (up to 5) | List[str] | Yes |
| canonical_url | The original home of this content, if it was originally published elsewhere | str | No |
| publish_status | The publish status | "public" \| "draft" \| "unlisted" | Yes |
| license | The license of the post: 'all-rights-reserved', 'cc-40-by', 'cc-40-by-sa', 'cc-40-by-nd', 'cc-40-by-nc', 'cc-40-by-nc-nd', 'cc-40-by-nc-sa', 'cc-40-zero', 'public-domain' | str | No |
| notify_followers | Whether to notify followers that the user has published | bool | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the post creation failed | str |
| post_id | The ID of the created Medium post | str |
| post_url | The URL of the created Medium post | str |
| published_at | The timestamp when the post was published | int |
### Possible use case
<!-- MANUAL: use_case -->
**Content Syndication**: Automatically publish blog posts or newsletters to Medium to reach a wider audience.
**AI Content Publishing**: Generate articles with AI and publish them directly to Medium.
**Cross-Posting**: Republish existing content from other platforms to Medium with proper canonical URL attribution.
<!-- END MANUAL -->
---
## Read RSS Feed
### What it is
Reads RSS feed entries from a given URL.
### How it works
<!-- MANUAL: how_it_works -->
This block fetches and parses RSS or Atom feeds from a URL. Filter entries by time_period to only get recent items. When run_continuously is enabled, the block polls the feed at the specified polling_rate interval.
Each entry is output individually, enabling processing of new content as it appears. The block also outputs all entries as a list for batch processing.
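A minimal sketch of the same parsing and time filtering with the `feedparser` library (an assumption about the implementation; the cutoff mirrors the time_period input):

```python
import calendar
import time

import feedparser  # assumption: feedparser is a standard choice for RSS/Atom parsing

FEED_URL = "https://example.com/feed.xml"  # placeholder feed
TIME_PERIOD_MINUTES = 60

cutoff = time.time() - TIME_PERIOD_MINUTES * 60
parsed = feedparser.parse(FEED_URL)

for entry in parsed.entries:
    published = entry.get("published_parsed") or entry.get("updated_parsed")
    if published and calendar.timegm(published) >= cutoff:
        print(entry.title, entry.link)  # one "entry" output per new item
```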
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| rss_url | The URL of the RSS feed to read | str | Yes |
| time_period | The time period to check, in minutes, relative to when the block runs; e.g. 60 checks for new entries in the last hour. | int | No |
| polling_rate | The number of seconds to wait between polling attempts. | int | Yes |
| run_continuously | Whether to run the block continuously or just once. | bool | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| entry | The RSS item | RSSEntry |
| entries | List of all RSS entries | List[RSSEntry] |
### Possible use case
<!-- MANUAL: use_case -->
**News Monitoring**: Track industry news feeds and process new articles for summarization or alerts.
**Content Aggregation**: Collect posts from multiple RSS feeds for a curated digest or newsletter.
**Blog Triggers**: Monitor a competitor's blog feed to trigger analysis or response workflows.
<!-- END MANUAL -->
---
## Send Authenticated Web Request
### What it is
Make an authenticated HTTP request with host-scoped credentials (JSON / form / multipart).
### How it works
<!-- MANUAL: how_it_works -->
This block makes HTTP requests with automatic credential injection based on the request URL's host. Credentials are managed separately and applied when the URL matches a configured host pattern.
Supports JSON, form-encoded, and multipart requests with file uploads. The response is parsed and returned along with separate error outputs for client (4xx) and server (5xx) errors.
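A minimal sketch of the host-scoped injection idea with `requests`; the header name and matching rule are illustrative assumptions, not the platform's actual credential store:

```python
from urllib.parse import urlparse

import requests

# Hypothetical host -> headers mapping standing in for managed credentials
HOST_CREDENTIALS = {
    "api.example.com": {"Authorization": "Bearer EXAMPLE_TOKEN"},
}

def send_authenticated_request(url: str, method: str = "GET", **kwargs) -> requests.Response:
    host = urlparse(url).hostname or ""
    headers = dict(kwargs.pop("headers", {}) or {})
    headers.update(HOST_CREDENTIALS.get(host, {}))  # inject only for configured hosts
    return requests.request(method, url, headers=headers, timeout=30, **kwargs)

resp = send_authenticated_request("https://api.example.com/v1/items")
print(resp.status_code)
```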
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| url | The URL to send the request to | str | Yes |
| method | The HTTP method to use for the request | "GET" \| "POST" \| "PUT" | No |
| headers | The headers to include in the request | Dict[str, str] | No |
| json_format | If true, send the body as JSON (unless files are also present). | bool | No |
| body | Form/JSON body payload. If files are supplied, this must be a mapping of form fields. | Dict[str, True] | No |
| files_name | The name of the file field in the form data. | str | No |
| files | Mapping of *form field name* → Image url / path / base64 url. | List[str (file)] | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Errors for all other exceptions | str |
| response | The response from the server | Response |
| client_error | Errors on 4xx status codes | Client Error |
| server_error | Errors on 5xx status codes | Server Error |
### Possible use case
<!-- MANUAL: use_case -->
**Private API Access**: Call APIs that require authentication without exposing credentials in the workflow.
**OAuth Integrations**: Access protected resources using pre-configured OAuth tokens.
**Multi-Tenant APIs**: Make requests to APIs where credentials vary by host or endpoint.
<!-- END MANUAL -->
---
## Send Email
### What it is
This block sends an email using the provided SMTP credentials.
### How it works
<!-- MANUAL: how_it_works -->
This block sends emails via SMTP using your configured email server credentials. Provide the recipient address, subject, and body content. The SMTP configuration includes server host, port, username, and password.
The block handles connection, authentication, and message delivery, returning a status indicating success or failure.
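A minimal sketch of the underlying SMTP flow with Python's standard library; STARTTLS on port 587 is an assumption, so match it to your server's configuration:

```python
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "bot@example.com"    # placeholder sender
msg["To"] = "alerts@example.com"   # to_email
msg["Subject"] = "Workflow finished"
msg.set_content("The nightly report has been generated.")

with smtplib.SMTP("smtp.example.com", 587, timeout=30) as server:
    server.starttls()                                # upgrade the connection to TLS
    server.login("bot@example.com", "APP_PASSWORD")  # placeholder credentials
    server.send_message(msg)
```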
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| to_email | Recipient email address | str | Yes |
| subject | Subject of the email | str | Yes |
| body | Body of the email | str | Yes |
| config | SMTP Config | SMTP Config | Yes |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the email sending failed | str |
| status | Status of the email sending operation | str |
### Possible use case
<!-- MANUAL: use_case -->
**Notification Emails**: Send automated notifications when workflow events occur.
**Report Delivery**: Email generated reports or summaries to stakeholders.
**Alert System**: Send email alerts when monitoring workflows detect issues or thresholds.
<!-- END MANUAL -->
---
## Send Web Request
### What it is
Make an HTTP request (JSON / form / multipart).
### How it works
<!-- MANUAL: how_it_works -->
This block makes HTTP requests to any URL. Configure the method (GET, POST, PUT, DELETE, PATCH), headers, and request body. Supports JSON, form-encoded, and multipart content types with file uploads.
The response body is parsed and returned. Separate error outputs distinguish between client errors (4xx), server errors (5xx), and other failures.
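A minimal sketch of the three body modes using `requests` (placeholder endpoint; the block's internals aren't shown here):

```python
import requests

URL = "https://httpbin.org/post"  # placeholder endpoint

# JSON body (json_format=True)
requests.post(URL, json={"name": "Ada"}, headers={"X-Trace": "demo"}, timeout=30)

# Form-encoded body (json_format=False)
requests.post(URL, data={"name": "Ada"}, timeout=30)

# Multipart: files plus regular form fields
with open("report.pdf", "rb") as fh:
    requests.post(URL, data={"note": "attached"}, files={"file": fh}, timeout=30)
```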
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| url | The URL to send the request to | str | Yes |
| method | The HTTP method to use for the request | "GET" \| "POST" \| "PUT" | No |
| headers | The headers to include in the request | Dict[str, str] | No |
| json_format | If true, send the body as JSON (unless files are also present). | bool | No |
| body | Form/JSON body payload. If files are supplied, this must be a mapping of form fields. | Dict[str, True] | No |
| files_name | The name of the file field in the form data. | str | No |
| files | Mapping of *form field name* → Image url / path / base64 url. | List[str (file)] | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Errors for all other exceptions | str |
| response | The response from the server | Response |
| client_error | Errors on 4xx status codes | Client Error |
| server_error | Errors on 5xx status codes | Server Error |
### Possible use case
<!-- MANUAL: use_case -->
**API Integration**: Call REST APIs to fetch data, trigger actions, or send updates.
**Webhook Delivery**: Send webhook notifications to external services when events occur.
**Custom Services**: Integrate with services that don't have dedicated blocks using their HTTP APIs.
<!-- END MANUAL -->
---
## Transcribe Youtube Video
### What it is
Transcribes a YouTube video using a proxy.
### How it works
<!-- MANUAL: how_it_works -->
This block extracts transcripts from YouTube videos using a proxy service. It parses the YouTube URL to get the video ID and retrieves the available transcript, typically the auto-generated or manually uploaded captions.
The transcript text is returned as a single string, suitable for summarization, analysis, or other text processing.
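A sketch of the two steps, using `youtube-transcript-api` for the fetch; that library (and its classic `get_transcript` call) is an assumption here, and the block's proxy routing isn't reproduced:

```python
from urllib.parse import parse_qs, urlparse

from youtube_transcript_api import YouTubeTranscriptApi  # assumption, not necessarily what the block uses

def extract_video_id(url: str) -> str:
    parsed = urlparse(url)
    if parsed.hostname == "youtu.be":
        return parsed.path.lstrip("/")
    return parse_qs(parsed.query)["v"][0]

video_id = extract_video_id("https://www.youtube.com/watch?v=dQw4w9WgXcQ")
segments = YouTubeTranscriptApi.get_transcript(video_id)       # list of {"text", "start", "duration"}
transcript = " ".join(segment["text"] for segment in segments)
print(video_id, transcript[:200])
```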
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| youtube_url | The URL of the YouTube video to transcribe | str | Yes |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Any error message if the transcription fails | str |
| video_id | The extracted YouTube video ID | str |
| transcript | The transcribed text of the video | str |
### Possible use case
<!-- MANUAL: use_case -->
**Video Summarization**: Extract video transcripts for AI summarization or key point extraction.
**Content Repurposing**: Convert YouTube content into written articles, social posts, or documentation.
**Research Automation**: Transcribe educational or informational videos for analysis and note-taking.
<!-- END MANUAL -->
---

@@ -0,0 +1,108 @@
# Add Audio To Video
### What it is
Block to attach an audio file to a video file using moviepy.
### How it works
<!-- MANUAL: how_it_works -->
This block combines a video file with an audio file using the moviepy library. The audio track is attached to the video, optionally with volume adjustment via the volume parameter (1.0 = original volume).
Input files can be URLs, data URIs, or local paths. The output can be returned as either a file path or base64 data URI.
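A minimal sketch with the classic `moviepy.editor` API (MoviePy 2.x renames some of these calls, e.g. `with_audio`, so treat the exact names as version-dependent):

```python
from moviepy.editor import AudioFileClip, VideoFileClip

video = VideoFileClip("input_video.mp4")              # placeholder paths
audio = AudioFileClip("voiceover.mp3").volumex(1.0)   # 1.0 = original volume

video.set_audio(audio).write_videofile(
    "output_with_audio.mp4", codec="libx264", audio_codec="aac"
)
```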
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| video_in | Video input (URL, data URI, or local path). | str (file) | Yes |
| audio_in | Audio input (URL, data URI, or local path). | str (file) | Yes |
| volume | Volume scale for the newly attached audio track (1.0 = original). | float | No |
| output_return_type | Return the final output as a relative path or base64 data URI. | "file_path" \| "data_uri" | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| video_out | Final video (with attached audio), as a path or data URI. | str (file) |
### Possible use case
<!-- MANUAL: use_case -->
**Add Voiceover**: Combine generated voiceover audio with video content for narrated videos.
**Background Music**: Add music tracks to silent videos or replace existing audio.
**Audio Replacement**: Swap the audio track of a video for localization or accessibility.
<!-- END MANUAL -->
---
## Loop Video
### What it is
Block to loop a video to a given duration or number of repeats.
### How it works
<!-- MANUAL: how_it_works -->
This block extends a video by repeating it to reach a target duration or number of loops. Set duration to specify the total length in seconds, or use n_loops to repeat the video a specific number of times.
The looped video is seamlessly concatenated and can be output as a file path or base64 data URI.
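A minimal sketch of the repeat-then-trim approach with `moviepy.editor` (assumed; the block may implement looping differently):

```python
from moviepy.editor import VideoFileClip, concatenate_videoclips

clip = VideoFileClip("short_clip.mp4")  # placeholder path
n_loops = 3

looped = concatenate_videoclips([clip] * n_loops)
# To hit an exact target duration instead, trim the concatenated result:
# looped = looped.subclip(0, target_duration)
looped.write_videofile("looped.mp4", codec="libx264", audio_codec="aac")
```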
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| video_in | The input video (can be a URL, data URI, or local path). | str (file) | Yes |
| duration | Target duration (in seconds) to loop the video to. If omitted, defaults to no looping. | float | No |
| n_loops | Number of times to repeat the video. If omitted, defaults to 1 (no repeat). | int | No |
| output_return_type | How to return the output video. Either a relative path or base64 data URI. | "file_path" \| "data_uri" | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| video_out | Looped video returned either as a relative path or a data URI. | str |
### Possible use case
<!-- MANUAL: use_case -->
**Background Videos**: Loop short clips to match the duration of longer audio or content.
**GIF-Like Content**: Create seamlessly looping video content for social media.
**Filler Content**: Extend short video clips to meet minimum duration requirements.
<!-- END MANUAL -->
---
## Media Duration
### What it is
Block to get the duration of a media file.
### How it works
<!-- MANUAL: how_it_works -->
This block analyzes a media file and returns its duration in seconds. Set is_video to true for video files or false for audio files to ensure proper parsing.
The input can be a URL, data URI, or local file path. The duration is returned as a float for precise timing calculations.
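A minimal sketch of the same check using `moviepy.editor`, mirroring the is_video switch (an assumption about the underlying library):

```python
from moviepy.editor import AudioFileClip, VideoFileClip

def media_duration(path: str, is_video: bool = True) -> float:
    clip = VideoFileClip(path) if is_video else AudioFileClip(path)
    try:
        return float(clip.duration)  # duration in seconds
    finally:
        clip.close()

print(media_duration("clip.mp4", is_video=True))  # placeholder file
```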
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| media_in | Media input (URL, data URI, or local path). | str (file) | Yes |
| is_video | Whether the media is a video (True) or audio (False). | bool | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| duration | Duration of the media file (in seconds). | float |
### Possible use case
<!-- MANUAL: use_case -->
**Video Processing Prep**: Get video duration before deciding how to loop, trim, or synchronize it.
**Audio Matching**: Determine audio length to generate matching-length video content.
**Content Validation**: Verify that uploaded media meets duration requirements.
<!-- END MANUAL -->
---

@@ -0,0 +1,39 @@
# Notion Create Page
### What it is
Create a new page in Notion. Requires EITHER a parent_page_id OR parent_database_id. Supports markdown content.
### How it works
<!-- MANUAL: how_it_works -->
This block creates a new page in Notion using the Notion API. You can create pages as children of existing pages or as entries in a database. The parent must be accessible to your integration.
Content can be provided as markdown, which gets converted to Notion blocks. For database pages, you can set additional properties like Status or Priority. Optionally add an emoji icon to the page.
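A minimal sketch of the underlying Notion API call with `requests`; the markdown-to-block conversion the block performs is reduced to a single paragraph here, and the token and IDs are placeholders:

```python
import requests

HEADERS = {
    "Authorization": "Bearer YOUR_NOTION_TOKEN",  # placeholder integration token
    "Notion-Version": "2022-06-28",
    "Content-Type": "application/json",
}

payload = {
    "parent": {"page_id": "PARENT_PAGE_ID"},      # or {"database_id": "..."}
    "icon": {"type": "emoji", "emoji": "📄"},
    "properties": {
        "title": {"title": [{"type": "text", "text": {"content": "Meeting notes"}}]}
    },
    "children": [
        {
            "object": "block",
            "type": "paragraph",
            "paragraph": {"rich_text": [{"type": "text", "text": {"content": "Agenda..."}}]},
        }
    ],
}

resp = requests.post("https://api.notion.com/v1/pages", headers=HEADERS, json=payload, timeout=30)
resp.raise_for_status()
page = resp.json()
print(page["id"], page["url"])  # maps to the page_id and page_url outputs
```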
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| parent_page_id | Parent page ID to create the page under. Either this OR parent_database_id is required. | str | No |
| parent_database_id | Parent database ID to create the page in. Either this OR parent_page_id is required. | str | No |
| title | Title of the new page | str | Yes |
| content | Content for the page. Can be plain text or markdown - will be converted to Notion blocks. | str | No |
| properties | Additional properties for database pages (e.g., {'Status': 'In Progress', 'Priority': 'High'}) | Dict[str, True] | No |
| icon_emoji | Emoji to use as the page icon (e.g., '📄', '🚀') | str | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| page_id | ID of the created page. | str |
| page_url | URL of the created page. | str |
### Possible use case
<!-- MANUAL: use_case -->
**Meeting Notes**: Automatically create meeting notes pages from calendar events with template content.
**Task Creation**: Add new entries to a task database when issues are created in other systems.
**Content Publishing**: Create draft pages in a content calendar from AI-generated or imported content.
<!-- END MANUAL -->
---

@@ -0,0 +1,43 @@
# Notion Read Database
### What it is
Query a Notion database with optional filtering and sorting, returning structured entries.
### How it works
<!-- MANUAL: how_it_works -->
This block queries a Notion database using the Notion API. It retrieves entries with optional filtering by property values and sorting. The block requires your Notion integration to have access to the database.
Results include all property values for each entry, the entry IDs for further operations, and the total count. The database must be shared with your integration from within Notion.
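A minimal sketch of the query call with `requests`; the filter below assumes a select-type property, and other property types use different filter keys:

```python
import requests

HEADERS = {
    "Authorization": "Bearer YOUR_NOTION_TOKEN",  # placeholder
    "Notion-Version": "2022-06-28",
    "Content-Type": "application/json",
}
DATABASE_ID = "YOUR_DATABASE_ID"

resp = requests.post(
    f"https://api.notion.com/v1/databases/{DATABASE_ID}/query",
    headers=HEADERS,
    json={
        "filter": {"property": "Status", "select": {"equals": "In Progress"}},
        "sorts": [{"property": "Due", "direction": "ascending"}],
        "page_size": 25,
    },
    timeout=30,
)
resp.raise_for_status()
for entry in resp.json()["results"]:
    print(entry["id"], list(entry["properties"]))  # one "entry" / "entry_id" output per result
```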
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| database_id | Notion database ID. Must be accessible by the connected integration. | str | Yes |
| filter_property | Property name to filter by (e.g., 'Status', 'Priority') | str | No |
| filter_value | Value to filter for in the specified property | str | No |
| sort_property | Property name to sort by | str | No |
| sort_direction | Sort direction: 'ascending' or 'descending' | str | No |
| limit | Maximum number of entries to retrieve | int | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| entries | List of database entries with their properties. | List[Dict[str, True]] |
| entry | Individual database entry (yields one per entry found). | Dict[str, True] |
| entry_ids | List of entry IDs for batch operations. | List[str] |
| entry_id | Individual entry ID (yields one per entry found). | str |
| count | Number of entries retrieved. | int |
| database_title | Title of the database. | str |
### Possible use case
<!-- MANUAL: use_case -->
**Task Management**: Query a Notion task database to find items with a specific status or assigned to a particular person.
**Content Pipeline**: Read entries from a content calendar database to identify posts scheduled for today or this week.
**CRM Sync**: Fetch customer records from a Notion database to sync with other systems or trigger workflows.
<!-- END MANUAL -->
---

@@ -0,0 +1,33 @@
# Notion Read Page
### What it is
Read a Notion page by its ID and return its raw JSON.
### How it works
<!-- MANUAL: how_it_works -->
This block retrieves a Notion page by its ID using the Notion API. The page must be accessible to your connected integration, which requires sharing the page with your integration from within Notion.
The block returns the raw JSON representation of the page, including all properties, metadata, and block IDs. This format is useful for programmatic processing or when you need full access to page data.
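A minimal sketch of the retrieve call with `requests` (placeholder token; the example ID matches the one in the input description):

```python
import requests

PAGE_ID = "586edd711467478da59fe35e29a1ffab"  # example ID taken from the page URL
resp = requests.get(
    f"https://api.notion.com/v1/pages/{PAGE_ID}",
    headers={
        "Authorization": "Bearer YOUR_NOTION_TOKEN",  # placeholder
        "Notion-Version": "2022-06-28",
    },
    timeout=30,
)
resp.raise_for_status()
page = resp.json()  # raw page JSON: properties, parent, timestamps, etc.
print(page["id"], page["url"])
```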
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| page_id | Notion page ID. Must be accessible by the connected integration. You can get this from the page URL, e.g. for notion.so/A-Page-586edd711467478da59fe35e29a1ffab the ID would be 586edd711467478da59fe35e29a1ffab | str | Yes |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| page | Raw Notion page JSON. | Dict[str, True] |
### Possible use case
<!-- MANUAL: use_case -->
**Data Extraction**: Read page properties and metadata for analysis or migration to other systems.
**Automation Triggers**: Check page properties to decide what actions to take in a workflow.
**Content Backup**: Retrieve full page data for archival or backup purposes.
<!-- END MANUAL -->
---

@@ -0,0 +1,35 @@
# Notion Read Page Markdown
### What it is
Read a Notion page and convert it to Markdown format with proper formatting for headings, lists, links, and rich text.
### How it works
<!-- MANUAL: how_it_works -->
This block reads a Notion page and converts its content to Markdown format. It handles Notion's block structure and rich text, translating headings, lists, links, bold, italic, and other formatting into standard Markdown.
The conversion preserves the document structure while making the content portable and usable in other contexts. Optionally include the page title as a top-level header in the output.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| page_id | Notion page ID. Must be accessible by the connected integration. You can get this from the page URL, e.g. for notion.so/A-Page-586edd711467478da59fe35e29a1ffab the ID would be 586edd711467478da59fe35e29a1ffab | str | Yes |
| include_title | Whether to include the page title as a header in the markdown | bool | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| markdown | Page content in Markdown format. | str |
| title | Page title. | str |
### Possible use case
<!-- MANUAL: use_case -->
**Content Export**: Export Notion pages as Markdown for use in static site generators or documentation tools.
**AI Processing**: Convert Notion content to Markdown for LLM processing, summarization, or analysis.
**Cross-Platform Publishing**: Use Notion as a CMS and export content as Markdown for blogs or wikis.
<!-- END MANUAL -->
---

@@ -0,0 +1,38 @@
# Notion Search
### What it is
Search your Notion workspace for pages and databases by text query.
### How it works
<!-- MANUAL: how_it_works -->
This block searches across your Notion workspace using the Notion Search API. It finds pages and databases matching your query text, with optional filtering by type (page or database).
Results include titles, types, URLs, and metadata for each match. Leave the query empty to retrieve all accessible pages and databases. Pagination is handled automatically up to the specified limit.
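A minimal sketch of the search call with `requests`; the filter_type input maps onto the API's object filter shown here:

```python
import requests

resp = requests.post(
    "https://api.notion.com/v1/search",
    headers={
        "Authorization": "Bearer YOUR_NOTION_TOKEN",  # placeholder
        "Notion-Version": "2022-06-28",
        "Content-Type": "application/json",
    },
    json={
        "query": "roadmap",                                 # empty string returns everything accessible
        "filter": {"property": "object", "value": "page"},  # or "database"; omit to get both
        "page_size": 20,
    },
    timeout=30,
)
resp.raise_for_status()
for result in resp.json()["results"]:
    print(result["object"], result["id"], result.get("url"))
```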
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| query | Search query text. Leave empty to get all accessible pages/databases. | str | No |
| filter_type | Filter results by type: 'page' or 'database'. Leave empty for both. | str | No |
| limit | Maximum number of results to return | int | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| results | List of search results with title, type, URL, and metadata. | List[NotionSearchResult] |
| result | Individual search result (yields one per result found). | NotionSearchResult |
| result_ids | List of IDs from search results for batch operations. | List[str] |
| count | Number of results found. | int |
### Possible use case
<!-- MANUAL: use_case -->
**Content Discovery**: Find relevant pages in your workspace based on keywords or topics.
**Database Lookup**: Search for specific databases to use in subsequent operations.
**Knowledge Retrieval**: Search your Notion workspace to find answers or related documentation.
<!-- END MANUAL -->
---

@@ -0,0 +1,36 @@
# Nvidia Deepfake Detect
### What it is
Detects potential deepfakes in images using Nvidia's AI API
### How it works
<!-- MANUAL: how_it_works -->
This block analyzes images using Nvidia's AI-powered deepfake detection model. It returns a probability score (0-1) indicating the likelihood that an image has been synthetically manipulated.
Set return_image to true to receive a processed image with detection markings highlighting areas of concern.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| image_base64 | Image to analyze for deepfakes | str (file) | Yes |
| return_image | Whether to return the processed image with markings | bool | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| status | Detection status (SUCCESS, ERROR, CONTENT_FILTERED) | str |
| image | Processed image with detection markings (if return_image=True) | str (file) |
| is_deepfake | Probability that the image is a deepfake (0-1) | float |
### Possible use case
<!-- MANUAL: use_case -->
**Content Verification**: Verify authenticity of user-uploaded profile photos or identity documents.
**Media Integrity**: Screen submitted images for signs of AI manipulation.
**Trust & Safety**: Detect potentially misleading synthetic content in social or news platforms.
<!-- END MANUAL -->
---

@@ -0,0 +1,47 @@
# Replicate Flux Advanced Model
### What it is
This block runs Flux models on Replicate with advanced settings.
### How it works
<!-- MANUAL: how_it_works -->
The block takes a text prompt and several customization options as input. It then sends this information to the selected Flux model on the Replicate platform. The AI model processes the input and generates an image based on the provided specifications. Finally, the block returns a URL to the generated image.
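A minimal sketch with the `replicate` Python client; the model slug and input names below mirror the block's options but may differ from the exact parameters each Flux version accepts on Replicate:

```python
import replicate  # reads REPLICATE_API_TOKEN from the environment

output = replicate.run(
    "black-forest-labs/flux-schnell",  # assumed slug; Flux Pro variants use different slugs
    input={
        "prompt": "A futuristic spaceport on a distant planet with multiple moons",
        "aspect_ratio": "16:9",
        "output_format": "webp",
        "output_quality": 90,
        "seed": 42,
    },
)
print(output)  # URL(s) of the generated image
```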
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| prompt | Text prompt for image generation | str | Yes |
| replicate_model_name | The name of the Image Generation Model, e.g. Flux Schnell | "Flux Schnell" \| "Flux Pro" \| "Flux Pro 1.1" | No |
| seed | Random seed. Set for reproducible generation | int | No |
| steps | Number of diffusion steps | int | No |
| guidance | Controls the balance between adherence to the text prompt and image quality/diversity. Higher values make the output more closely match the prompt but may reduce overall image quality. | float | No |
| interval | Interval is a setting that increases the variance in possible outputs. Setting this value low will ensure strong prompt following with more consistent outputs. | float | No |
| aspect_ratio | Aspect ratio for the generated image | str | No |
| output_format | File format of the output image | "webp" \| "jpg" \| "png" | No |
| output_quality | Quality when saving the output images, from 0 to 100. Not relevant for .png outputs | int | No |
| safety_tolerance | Safety tolerance, 1 is most strict and 5 is most permissive | int | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| result | Generated output | str |
### Possible use case
<!-- MANUAL: use_case -->
A graphic designer could use this block to quickly generate concept art for a sci-fi game. They might input a prompt like "A futuristic spaceport on a distant planet with multiple moons in the sky" and adjust the settings to get the desired style and quality. The generated image could then serve as inspiration or a starting point for further design work.
<!-- END MANUAL -->
---

@@ -0,0 +1,37 @@
# Replicate Model
### What it is
Run Replicate models synchronously
### How it works
<!-- MANUAL: how_it_works -->
This block runs any model hosted on Replicate using their API. Specify the model name in owner/model format, provide inputs as a dictionary, and optionally pin to a specific version.
The block waits for completion and returns the model output along with status information.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| model_name | The Replicate model name (format: 'owner/model-name') | str | Yes |
| model_inputs | Dictionary of inputs to pass to the model | Dict[str, str \| int] | No |
| version | Specific version hash of the model (optional) | str | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| result | The output from the Replicate model | str |
| status | Status of the prediction | str |
| model_name | Name of the model used | str |
### Possible use case
<!-- MANUAL: use_case -->
**Model Flexibility**: Access thousands of open-source AI models from a single interface.
**Custom Models**: Run your own models deployed on Replicate in workflows.
**Specialized AI Tasks**: Use best-of-breed models for specific tasks like upscaling, segmentation, or captioning.
<!-- END MANUAL -->
---

@@ -1,111 +1,63 @@
## Get Wikipedia Summary
# Get Wikipedia Summary
### What it is
A block that retrieves a summary of a given topic from Wikipedia.
### What it does
This block takes a topic as input and fetches a concise summary about that topic from Wikipedia's API.
This block fetches the summary of a given topic from Wikipedia.
### How it works
<!-- MANUAL: how_it_works -->
The block sends a request to Wikipedia's API with the provided topic. It then extracts the summary from the response and returns it. If there's an error during this process, it will return an error message instead.
<!-- END MANUAL -->
### Inputs
| Input | Description |
|-------|-------------|
| Topic | The subject you want to get a summary about from Wikipedia |
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| topic | The topic to fetch the summary for | str | Yes |
### Outputs
| Output | Description |
|--------|-------------|
| Summary | A brief overview of the requested topic from Wikipedia |
| Error | An error message if the summary retrieval fails |
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the summary cannot be retrieved | str |
| summary | The summary of the given topic | str |
### Possible use case
<!-- MANUAL: use_case -->
A student researching for a project could use this block to quickly get overviews of various topics, helping them decide which areas to focus on for more in-depth study.
<!-- END MANUAL -->
---
## Search The Web
## Google Maps Search
### What it is
A block that performs web searches and returns the results.
### What it does
This block takes a search query and returns a list of relevant web pages, including their titles, URLs, and brief descriptions.
This block searches for local businesses using Google Maps API.
### How it works
The block sends the search query to a search engine API, processes the results, and returns them in a structured format.
<!-- MANUAL: how_it_works -->
This block uses the Google Maps Places API to search for businesses and locations based on a query. Configure radius (up to 50km) to limit the search area and max_results (up to 60) to control how many places are returned.
Each place result includes name, address, rating, reviews, and geographic coordinates for integration with mapping or navigation workflows.
<!-- END MANUAL -->
### Inputs
| Input | Description |
|-------|-------------|
| Query | The search term or phrase to look up on the web |
| Number of Results | How many search results to return (optional, default may vary) |
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| query | Search query for local businesses | str | Yes |
| radius | Search radius in meters (max 50000) | int | No |
| max_results | Maximum number of results to return (max 60) | int | No |
### Outputs
| Output | Description |
|--------|-------------|
| Results | A list of search results, each containing a title, URL, and description |
| Error | An error message if the search fails |
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed | str |
| place | Place found | Place |
### Possible use case
A content creator could use this block to research trending topics in their field, gathering ideas for new articles or videos.
<!-- MANUAL: use_case -->
**Lead Generation**: Find businesses in a specific area for sales outreach.
**Competitive Analysis**: Search for competitors in target locations to analyze their presence and ratings.
**Local SEO**: Gather data on local businesses for market research or directory building.
<!-- END MANUAL -->
---
## Extract Website Content
### What it is
A block that retrieves and extracts content from specified websites.
### What it does
This block takes a URL as input, visits the webpage, and extracts the main content, removing navigation elements, ads, and other non-essential parts.
### How it works
The block sends a request to the given URL, downloads the HTML content, and uses content extraction algorithms to identify and extract the main text content of the page.
### Inputs
| Input | Description |
|-------|-------------|
| URL | The web address of the page to extract content from |
### Outputs
| Output | Description |
|--------|-------------|
| Content | The main text content extracted from the webpage |
| Title | The title of the webpage |
| Error | An error message if the content extraction fails |
### Possible use case
A data analyst could use this block to automatically extract article content from news websites for sentiment analysis or topic modeling.
---
## Get Weather Information
### What it is
A block that fetches current weather data for a specified location.
### What it does
This block takes a location name as input and returns current weather information such as temperature, humidity, and weather conditions.
### How it works
The block sends a request to a weather API (like OpenWeatherMap) with the provided location. It then processes the response to extract relevant weather data.
### Inputs
| Input | Description |
|-------|-------------|
| Location | The city or area you want to get weather information for |
| API Key | Your personal OpenWeatherMap API key (this is kept secret) |
| Use Celsius | An option to choose between Celsius (true) or Fahrenheit (false) for temperature |
### Outputs
| Output | Description |
|--------|-------------|
| Temperature | The current temperature in the specified location |
| Humidity | The current humidity percentage in the specified location |
| Condition | A description of the current weather condition (e.g., "overcast clouds") |
| Error | A message explaining what went wrong if the weather data retrieval fails |
### Possible use case
A travel planning application could use this block to provide users with current weather information for their destination cities.

Some files were not shown because too many files have changed in this diff.