Compare commits

..

203 Commits

Author SHA1 Message Date
Ubbe
e57438eac1 Merge branch 'dev' into fix/launch-darkly-card 2025-12-18 17:59:49 +01:00
Lluis Agusti
1f326af67c fix(chat): chat flag should be disabled 2025-12-18 17:54:55 +01:00
Abhimanyu Yadav
e640d36265 feat(frontend): Add special handling for AGENT block type in form and output handlers (#11595)

Agent blocks require different handling compared to standard blocks,
particularly for:
- Handle ID generation (using direct keys instead of generated IDs)
- Form data storage structure (nested under `inputs` key)
- Field ID parsing (filtering out schema path prefixes)

This PR implements special handling for `BlockUIType.AGENT` throughout
the form rendering and output handling components to ensure agents work
correctly in the flow editor.

### Changes 🏗️

- **CustomNode.tsx**: Pass `uiType` prop to the `OutputHandler` component
- **FormCreator.tsx**:
  - Store agent form data in `hardcodedValues.inputs` instead of directly in `hardcodedValues`
  - Extract initial values from `hardcodedValues.inputs` for agent blocks
- **OutputHandler.tsx**:
  - Accept `uiType` prop
  - Use the direct key as handle ID for agents instead of `generateHandleId(key)`
- **useMarketplaceAgentsContent.ts**:
  - Fetch full agent details using `getV2GetLibraryAgent` before adding to the builder
  - Ensures agent schemas are properly populated (fixes an issue where the marketplace endpoint returns empty schemas)
- **AnyOfField.tsx**: Generate handle IDs for agents by filtering out "root" and "properties" from the schema path
- **FieldTemplate.tsx**: Apply the same handle ID generation logic to agent fields

### Checklist 📋

#### For code changes:

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Add an agent block from the marketplace and verify it renders correctly
  - [x] Connect inputs/outputs to/from an agent block and verify connections work
  - [x] Fill in form fields for an agent block and verify data persists correctly
  - [x] Verify agent blocks work in both new and existing flows
  - [x] Test that non-agent blocks still work as before (regression test)
2025-12-18 03:42:55 +00:00
Zamil Majdy
cc9179178f feat(block): Human in The Loop Block restructure (#11627)
## Summary

This PR refactors the Human-In-The-Loop (HITL) review system backend to
improve data handling and API consistency.

## Changes

### Backend Refactoring

#### 1. **Block Output Schema Update** (`human_in_the_loop.py`)
- Replaced single `reviewed_data` and `status` fields with separate
`approved_data` and `rejected_data` outputs
- This allows downstream blocks to handle approved vs rejected data
differently without checking status
- Simplified test outputs to match new schema
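
A minimal sketch of what the split enables, assuming an `approved` flag on the review record and the platform's convention of yielding named outputs (illustrative, not the exact code):

```python
# Route the reviewer's payload to exactly one output pin, so downstream
# blocks can react to approval vs. rejection without checking a status field.
if review.approved:
    yield "approved_data", review.payload
else:
    yield "rejected_data", review.payload
```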

#### 2. **Review Data Handling** (`human_review.py`)
- Modified `get_or_create_human_review` to always return
`review.payload` regardless of approval status
- Previously returned `None` for rejected reviews, which could cause
data loss
- Now preserves reviewer-modified data for both approved and rejected
cases
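
In sketch form (illustrative):

```python
# Previously the helper returned None for rejected reviews, dropping edits;
# now the (possibly reviewer-modified) payload is returned for both outcomes.
return review.payload
```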

#### 3. **API Route Simplification** (`review/routes.py`)
- Streamlined review decision processing logic using ternary operator
- Unified data handling for both approved and rejected reviews
- Maintains backward compatibility while improving code clarity

## Why These Changes?

- **Better Data Flow**: Separate output pins for approved/rejected data
make workflow design more intuitive
- **Data Preservation**: Rejected reviews can still pass modified data
downstream for logging or alternative processing
- **Cleaner API**: Simplified decision processing reduces code
complexity and potential bugs

## Testing

- All existing tests pass with updated schema
- Backward compatibility maintained for existing workflows
- Human review functionality verified in both approved and rejected
scenarios

## Related

This is the backend portion of changes from #11529, applied separately
to the `feat/hitl` branch.
2025-12-16 12:14:14 +00:00
Ubbe
e8d37ab116 feat(frontend): add nice scrollable tabs on Selected Run view (#11596)
## Changes 🏗️


https://github.com/user-attachments/assets/7e49ed5b-c818-4aa3-b5d6-4fa86fada7ee

When the content of Summary + Outputs + Inputs is long enough, it will
show in this new `<ScrollableTabs />` component, which auto-scrolls the
content as you click on a tab.

## Checklist 📋

### For code changes

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run the app locally
  - [x] Check the new page with scrollable tabs
2025-12-15 16:57:36 +07:00
Ubbe
7f7ef6a271 feat(frontend): improve agent inputs read-only (#11621)
## Changes 🏗️

The main goal of this PR is to improve how we display inputs used for a
given task.

Agent inputs can be of many types (text, long text, date, select, file,
etc.). Until now, we have tried to display them as text, which has not
always worked. Given we already have `<RunAgentInputs />`, which uses
form elements to display the inputs ( _prefilled with data_ ), most of
the time it will look better and less buggy than text.

### Before

<img width="800" height="614" alt="Screenshot 2025-12-14 at 17 45 44"
src="https://github.com/user-attachments/assets/3d851adf-9638-46c1-adfa-b5e68dc78bb0"
/>

### After

<img width="800" height="708" alt="Screenshot 2025-12-14 at 17 45 21"
src="https://github.com/user-attachments/assets/367f32b4-2c30-4368-8d63-4cad06e32437"
/>

### Other improvements

- 🗑️ Removed `<EditInputsModal />`
  - it is not used, given the API does not support editing inputs for a schedule yet
- Made `<InformationTooltip />` icon size customisable

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run the app locally
  - [x] Check that the new task views use form elements instead of text to display inputs
2025-12-15 00:11:27 +07:00
Ubbe
aefac541d9 fix(frontend): force light mode for now (#11619)
## Changes 🏗️

We have the setup for light/dark mode support (Tailwind + `next-themes`), but not yet the contributor capacity to make the app dark-mode ready. First, we need to make it look good in light mode 😆

This disables `dark:` mode classes in the code, to prevent the app from looking broken when the user views it in a browser with a dark-mode preference:

### Before these changes

<img width="800" height="739" alt="Screenshot 2025-12-14 at 17 09 25"
src="https://github.com/user-attachments/assets/76333e03-930a-40b6-b91e-47ee01bf2c00"
/>

### After

<img width="800" height="722" alt="Screenshot 2025-12-14 at 16 55 46"
src="https://github.com/user-attachments/assets/34d85359-c68f-474c-8c66-2bebf28f923e"
/>

## Checklist 📋

### For code changes

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run the app on a browser with dark mode preference
  - [x] It still renders in light mode without broken styles
2025-12-15 00:10:36 +07:00
Krzysztof Czerwinski
ff5c8f324b Merge branch 'master' into dev 2025-12-12 22:26:39 +09:00
Krzysztof Czerwinski
f121a22544 hotfix: update next (#11612)
Update next to `15.4.10`
2025-12-12 13:42:36 +01:00
Zamil Majdy
71157bddd7 feat(backend): add agent mode support to SmartDecisionMakerBlock with autonomous tool execution loops (#11547)
## Summary

<img width="2072" height="1836" alt="image"
src="https://github.com/user-attachments/assets/9d231a77-6309-46b9-bc11-befb5d8e9fcc"
/>

**🚀 Major Feature: Agent Mode Support**

Adds autonomous agent mode to SmartDecisionMakerBlock, enabling it to
execute tools directly in loops until tasks are completed, rather than
just yielding tool calls for external execution.

##  **Key New Features**

### 🤖 **Agent Mode with Tool Execution Loops**
- **New `agent_mode_max_iterations` parameter** controls execution
behavior:
  - `0` = Traditional mode (single LLM call, yield tool calls)
  - `1+` = Agent mode with iteration limit
  - `-1` = Infinite agent mode (loop until finished)

### 🔄 **Autonomous Tool Execution**  
- **Direct tool execution** instead of yielding for external handling
- **Multi-iteration loops** with conversation state management
- **Automatic completion detection** when LLM stops making tool calls
- **Iteration limit handling** with graceful completion messages

### 🏗️ **Proper Database Operations**
- **Replace manual execution ID generation** with proper
`upsert_execution_input`/`upsert_execution_output`
- **Real NodeExecutionEntry objects** from database results
- **Proper execution status management**: QUEUED → RUNNING →
COMPLETED/FAILED

### 🔧 **Enhanced Type Safety**
- **Pydantic models** replace TypedDict: `ToolInfo` and
`ExecutionParams`
- **Runtime validation** with better error messages
- **Improved developer experience** with IDE support

## 🔧 **Technical Implementation**

### Agent Mode Flow:
```python
# Agent mode enabled with iterations
if input_data.agent_mode_max_iterations != 0:
    async for result in self._execute_tools_agent_mode(...):
        yield result  # "conversations", "finished"
    return

# Traditional mode (existing behavior)  
# Single LLM call + yield tool calls for external execution
```

### Tool Execution with Database Operations:
```python
# Before: Manual execution IDs
tool_exec_id = f"{node_exec_id}_tool_{sink_node_id}_{len(input_data)}"

# After: Proper database operations
node_exec_result, final_input_data = await db_client.upsert_execution_input(
    node_id=sink_node_id,
    graph_exec_id=execution_params.graph_exec_id,
    input_name=input_name, 
    input_data=input_value,
)
```

### Type Safety with Pydantic:
```python
# Before: Dict access prone to errors
execution_params["user_id"]  

# After: Validated model access
execution_params.user_id  # Runtime validation + IDE support
```

## 🧪 **Comprehensive Test Coverage**

- **Agent mode execution tests** with multi-iteration scenarios
- **Database operation verification** 
- **Type safety validation**
- **Backward compatibility** for traditional mode
- **Enhanced dynamic fields tests**

## 📊 **Usage Examples**

### Traditional Mode (Existing Behavior):
```python
SmartDecisionMakerBlock.Input(
    prompt="Search for keywords",
    agent_mode_max_iterations=0  # Default
)
# → Yields tool calls for external execution
```

### Agent Mode (New Feature):
```python  
SmartDecisionMakerBlock.Input(
    prompt="Complete this task using available tools",
    agent_mode_max_iterations=5  # Max 5 iterations
)
# → Executes tools directly until task completion or iteration limit
```

### Infinite Agent Mode:
```python
SmartDecisionMakerBlock.Input(
    prompt="Analyze and process this data thoroughly", 
    agent_mode_max_iterations=-1  # No limit, run until finished
)
# → Executes tools autonomously until LLM indicates completion
```

##  **Backward Compatibility**

- **Zero breaking changes** to existing functionality
- **Traditional mode remains default** (`agent_mode_max_iterations=0`)
- **All existing tests pass**
- **Same API for tool definitions and execution**

This transforms the SmartDecisionMakerBlock from a simple tool call
generator into a powerful autonomous agent capable of complex multi-step
task execution! 🎯

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-12-12 09:58:06 +00:00
Bentlybro
152e747ea6 Merge branch 'master' into dev 2025-12-11 14:40:01 +00:00
Reinier van der Leer
4d4741d558 fix(frontend/library): Transition from empty tasks view on task init (#11600)
- Resolves #11599

### Changes 🏗️

- Manually update item counts when initiating a task from `EmptyTasks`
view
- Other improvements made while debugging

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] `NewAgentLibraryView` transitions to full layout when a first task is created
  - [x] `NewAgentLibraryView` transitions to full layout when a first trigger is set up
2025-12-11 11:13:53 +00:00
Krzysztof Czerwinski
bd37fe946d feat(platform): Builder search history (#11457)
Preserve user searches in the new builder and cache search results for efficiency. Searches are saved, so users can see their previous searches.

### Changes 🏗️

- Add `BuilderSearch` column&migration to save user search (with all
filters)
- Builder `db.py` now caches all search results using `@cached` and returns paginated results, so subsequent pages are returned much more quickly (see the sketch after this list)
- Score and sort results
- Update models&routes
- Update frontend, so it works properly with modified endpoints
- Frontend: store `searchId` and use it for subsequent searches, so we don't save partial searches (e.g. "b", "bl", ..., "block"). The search id is reset when the user clears the search field.
- Add clickable chips to the Suggestions builder tab
- Add `HorizontalScroll` component (chips use it)
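
A rough sketch of the caching shape described above (illustrative names; the real code uses the project's `@cached` decorator rather than `lru_cache`):

```python
from functools import lru_cache

def _run_search(query: str, filters: tuple) -> list[dict]:
    ...  # hits the database; stubbed here

@lru_cache(maxsize=256)
def _cached_results(query: str, filters: tuple) -> tuple[dict, ...]:
    # Compute the full, scored, sorted result set once per (query, filters);
    # subsequent pages slice this cache instead of re-querying the database.
    results = _run_search(query, filters)
    return tuple(sorted(results, key=lambda r: r.get("score", 0), reverse=True))

def search_page(query: str, filters: tuple, page: int, page_size: int = 20):
    results = _cached_results(query, filters)
    start = page * page_size
    return list(results[start : start + page_size])
```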

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Search works and is cached
  - [x] Search sorts results
  - [x] Searches are preserved properly

---------

Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
2025-12-10 17:32:17 +00:00
Nicholas Tindle
7ff282c908 fix(frontend): Disable single dollar sign LaTeX mode in markdown rend… (#11598)
Single dollar signs ($10, $variable) are commonly used in content and
were being incorrectly interpreted as inline LaTeX math delimiters. This
change disables that behavior while keeping double dollar sign ($$...$$)
math blocks working.
## Changes 🏗️

- Configure the `remarkMath` plugin with `singleDollarTextMath: false` in `MarkdownRenderer.tsx`
- Double dollar sign display math ($$...$$) continues to work as expected
- Single dollar signs are no longer interpreted as inline math delimiters

## Checklist 📋

#### For code changes:

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Verify content with dollar amounts (e.g., "$100") renders as plain text
  - [x] Verify double dollar sign math blocks ($$x^2$$) still render as LaTeX
  - [x] Verify other markdown features (code blocks, tables, links) still work correctly

Co-authored-by: Claude <noreply@anthropic.com>
2025-12-10 17:10:52 +00:00
Reinier van der Leer
117bb05438 fix(frontend/library): Fix trigger UX flows in v3 library (#11589)
- Resolves #11586
- Follow-up to #11580

### Changes 🏗️

- Fix logic to include manual triggers as a possibility
- Fix input render logic to use trigger setup schema if applicable
- Fix rendering payload input for externally triggered runs
- Amend `RunAgentModal` to load preset inputs+credentials if selected
- Amend `SelectedTemplateView` to use modified input for run (if
applicable)
- Hide non-applicable buttons in `SelectedRunView` for externally
triggered runs
- Implement auto-navigation to `SelectedTriggerView` on trigger setup

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Can set up manual triggers
    - [x] Navigates to trigger view after setup
  - [x] Can set up automatic triggers
  - [x] Can create templates from runs
  - [x] Can run templates
  - [x] Can run templates with modified input
2025-12-10 15:52:02 +00:00
Nicholas Tindle
979d7c3b74 feat(blocks): Add 4 new GitHub webhook trigger blocks (#11588)
I want to be able to automate some actions on social media or our
sevrver in response to actions from discord


### Changes 🏗️
Add trigger blocks for common GitHub events to enable OSS automation:
- GithubReleaseTriggerBlock: Trigger on release events (published, etc.)
- GithubStarTriggerBlock: Trigger on star events for milestone
celebrations
- GithubIssuesTriggerBlock: Trigger on issue events for
triage/notifications
- GithubDiscussionTriggerBlock: Trigger on discussion events for Q&A
sync

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Test Stars
  - [x] Test Discussions
  - [x] Test Issues
  - [x] Test Release

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-12-09 21:25:43 +00:00
Nicholas Tindle
95200b67f8 feat(blocks): add many new spreadsheet blocks (#11574)
We have a lot we want to do with Google Sheets, and we don't want a lack of blocks to be a limiter, so I pre-built a lot of blocks!

### Changes 🏗️
Adds 24 new blocks for Google Sheets (tested and working):

| # | Block | Description |
|----|------------------------------------|----------------------------------------|
| 1 | GoogleSheetsFilterRowsBlock | Filter rows based on column conditions |
| 2 | GoogleSheetsLookupRowBlock | VLOOKUP-style row lookup |
| 3 | GoogleSheetsDeleteRowsBlock | Delete rows from a sheet |
| 4 | GoogleSheetsGetColumnBlock | Get data from a specific column |
| 5 | GoogleSheetsSortBlock | Sort sheet data |
| 6 | GoogleSheetsGetUniqueValuesBlock | Get unique values from a column |
| 7 | GoogleSheetsInsertRowBlock | Insert rows into a sheet |
| 8 | GoogleSheetsAddColumnBlock | Add a new column |
| 9 | GoogleSheetsGetRowCountBlock | Get the number of rows |
| 10 | GoogleSheetsRemoveDuplicatesBlock | Remove duplicate rows |
| 11 | GoogleSheetsUpdateRowBlock | Update an existing row |
| 12 | GoogleSheetsGetRowBlock | Get a specific row by index |
| 13 | GoogleSheetsDeleteColumnBlock | Delete a column |
| 14 | GoogleSheetsCreateNamedRangeBlock | Create a named range |
| 15 | GoogleSheetsListNamedRangesBlock | List all named ranges |
| 16 | GoogleSheetsAddDropdownBlock | Add dropdown validation to cells |
| 17 | GoogleSheetsCopyToSpreadsheetBlock | Copy sheet to another spreadsheet |
| 18 | GoogleSheetsProtectRangeBlock | Protect a range from editing |
| 19 | GoogleSheetsExportCsvBlock | Export sheet as CSV |
| 20 | GoogleSheetsImportCsvBlock | Import CSV data |
| 21 | GoogleSheetsAddNoteBlock | Add notes to cells |
| 22 | GoogleSheetsGetNotesBlock | Get notes from cells |
| 23 | GoogleSheetsShareSpreadsheetBlock | Share spreadsheet with users |
| 24 | GoogleSheetsSetPublicAccessBlock | Set public access permissions |


### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Tested using the attached agent [super test for spreadsheets_v9.json](https://github.com/user-attachments/files/24041582/super.test.for.spreadsheets_v9.json)


<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> Introduces a large suite of Google Sheets blocks for row/column ops,
filtering/sorting/lookup, CSV import/export, notes, named ranges,
protections, sheet copy, and sharing/public access, plus refactors
append to a simpler single-row append.
> 
> - **Google Sheets blocks (new)**:
> - **Data ops**: `GoogleSheetsFilterRowsBlock`,
`GoogleSheetsLookupRowBlock`, `GoogleSheetsDeleteRowsBlock`,
`GoogleSheetsGetColumnBlock`, `GoogleSheetsSortBlock`,
`GoogleSheetsGetUniqueValuesBlock`, `GoogleSheetsInsertRowBlock`,
`GoogleSheetsAddColumnBlock`, `GoogleSheetsGetRowCountBlock`,
`GoogleSheetsRemoveDuplicatesBlock`, `GoogleSheetsUpdateRowBlock`,
`GoogleSheetsGetRowBlock`, `GoogleSheetsDeleteColumnBlock`.
> - **Named ranges & validation**: `GoogleSheetsCreateNamedRangeBlock`,
`GoogleSheetsListNamedRangesBlock`, `GoogleSheetsAddDropdownBlock`.
> - **Sheet/admin**: `GoogleSheetsCopyToSpreadsheetBlock`,
`GoogleSheetsProtectRangeBlock`.
> - **CSV & notes**: `GoogleSheetsExportCsvBlock`,
`GoogleSheetsImportCsvBlock`, `GoogleSheetsAddNoteBlock`,
`GoogleSheetsGetNotesBlock`.
> - **Sharing**: `GoogleSheetsShareSpreadsheetBlock`,
`GoogleSheetsSetPublicAccessBlock`.
> - **Refactor**:
> - Rename and simplify append: `GoogleSheetsAppendRowBlock` (replaces
multi-row/dict input with single `row`), fixed insert option to
`INSERT_ROWS` and streamlined response.
> - **Utilities/Enums**:
> - Add helpers (`_column_letter_to_index`, `_index_to_column_letter`,
`_apply_filter`) and enums (`FilterOperator`, `SortOrder`, `ShareRole`,
`PublicAccessRole`).
> - Drive/Sheets service builders and file validation reused across new
blocks.
> 
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
6e9e2f4024. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->
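
The note above mentions the column-letter helpers; the usual bijective base-26 conversion looks like this (a sketch for reference; the actual helpers may differ):

```python
def _column_letter_to_index(letters: str) -> int:
    """Convert a spreadsheet column label ("A", "Z", "AA", "AB") to a 1-based index."""
    index = 0
    for ch in letters.upper():
        index = index * 26 + (ord(ch) - ord("A") + 1)
    return index

def _index_to_column_letter(index: int) -> str:
    """Inverse conversion: 1 -> "A", 26 -> "Z", 28 -> "AB"."""
    letters = ""
    while index > 0:
        index, rem = divmod(index - 1, 26)
        letters = chr(ord("A") + rem) + letters
    return letters
```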

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Cursor Agent <cursoragent@cursor.com>
2025-12-09 17:28:22 +00:00
Abhimanyu Yadav
f8afc6044e fix(frontend): prevent file upload buttons from triggering form submission (#11576)

In the File Widget, the upload button was incorrectly behaving like a
submit button. When users clicked it, the rjsf library immediately
triggered form validation and displayed validation errors, even though
the user was only trying to upload a file.

This happened because HTML buttons inside a form default to
`type="submit"`, which triggers form submission on click. By explicitly
setting `type="button"` on all file-related buttons, we prevent them
from submitting the form while still allowing them to trigger the file
input dialog.

### Changes 🏗️

- Added `type="button"` attribute to the clear button in the compact
variant
- Added `type="button"` attribute to the upload button in the compact
variant
- Added `type="button"` attribute to the "Browse File" button in the
default variant

This ensures that clicking any of these buttons only triggers the
intended file selection/upload action without causing unwanted form
validation or submission.

### Checklist 📋

#### For code changes:

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Tested clicking the upload button in a form with File Widget - the form should not submit or show validation errors
  - [x] Tested clicking the clear button - should clear the file without triggering form validation
  - [x] Tested clicking the "Browse File" button - should open the file dialog without triggering form validation
  - [x] Verified file upload functionality still works correctly after selecting a file
2025-12-09 15:54:17 +00:00
Abhimanyu Yadav
7edf01777e fix(frontend): sync flowVersion to URL when loading graph from Library (#11585)
<!-- Clearly explain the need for these changes: -->

When opening a graph from the Library, the `flowVersion` query parameter
was not being set in the URL. This caused issues when the graph data
didn't contain an internal `graphVersion`, resulting in the builder
failing and graphs breaking when running.

The `useGetV1GetSpecificGraph` hook relies on the `flowVersion` query
parameter to fetch the correct graph version. Without it being properly
set in the URL, the graph loading logic fails when the version
information is missing from the graph data itself.

### Changes 🏗️

- Added `setQueryStates` to the `useQueryStates` hook return value in
`useFlow.ts`
- Added logic to sync `flowVersion` to the URL query parameters when a
graph is loaded
- When `graph.version` is available, it now updates the `flowVersion`
query parameter in the URL (defaults to `1` if version is undefined)

This ensures the URL stays in sync with the loaded graph's version,
preventing builder failures and execution issues.

### Checklist 📋

#### For code changes:

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Open a graph from the Library that doesn't have a version in the URL
  - [x] Verify that `flowVersion` is automatically added to the URL query parameters
  - [x] Verify that the graph loads correctly and can be executed
  - [x] Verify that graphs with an existing `flowVersion` in the URL continue to work correctly
  - [x] Verify that graphs opened from the Library with version information sync correctly
2025-12-09 15:26:45 +00:00
Ubbe
c9681f5d44 fix(frontend): library page adjustments (#11587)
## Changes 🏗️

### Adjust layout and styles on mobile 📱 

<img width="448" height="843" alt="Screenshot 2025-12-09 at 22 53 14"
src="https://github.com/user-attachments/assets/159bdf4f-e6b2-42f5-8fdf-25f8a62c62d1"
/>

### Make the sidebar cards have contextual actions

<img width="486" height="243" alt="Screenshot 2025-12-09 at 22 53 27"
src="https://github.com/user-attachments/assets/2f530168-3217-47c4-b08d-feccbb9e9152"
/>

Depending on the card type, different actions are shown...

### Make buttons in "About agent" card do something

<img width="344" height="346" alt="Screenshot 2025-12-09 at 22 54 01"
src="https://github.com/user-attachments/assets/47181f80-1f68-4ef1-aecc-bbadc7cc9c44"
/>

### Other

- Hide `Schedule` button for agents with trigger run type
- Adjust secondary button background colour...
- Make drawer content scrollable on mobile 

## Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run locally and test the above

Co-authored-by: Abhimanyu Yadav <122007096+Abhi1992002@users.noreply.github.com>
2025-12-09 23:17:44 +07:00
Abhimanyu Yadav
1305325813 fix(frontend): preserve button shape in credential select when content is long (#11577)

When the content inside the credential select dropdown becomes too long,
the adjacent link buttons lose their rounded shape and appear squarish.
This happens when the text stretches the container or affects the layout
of the buttons.

The issue occurs because the button's width can shrink below its
intended size when the flex container is stretched by long credential
names. By adding an explicit minimum width constraint with `!min-w-8`,
we ensure the button maintains its proper dimensions and rounded
appearance regardless of the select dropdown's content length.

### Changes 🏗️

- Added `!min-w-8` to the external link button's className in
`SelectCredential` component to enforce a minimum width of 2rem (8 *
0.25rem)
- This ensures the button maintains its rounded shape even when the
adjacent select dropdown contains long credential names

### Checklist 📋

#### For code changes:

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Tested credential select with short credential names - button should maintain rounded shape
  - [x] Tested credential select with very long credential names (e.g., long provider names, usernames, and hosts) - button should maintain rounded shape
2025-12-09 15:26:22 +00:00
Abhimanyu Yadav
4f349281bd feat(frontend): switch copied graph storage from local storage to clipboard (#11578)
### Changes 🏗️

This PR migrates the copy/paste functionality for graph nodes and edges
from local storage to the Clipboard API. This change addresses storage
limitations and enables cross-tab copying.


https://github.com/user-attachments/assets/6ef55713-ca5b-4562-bb54-4c12db241d30


**Key changes:**
- Replaced `localStorage` with `navigator.clipboard` API for copy/paste
operations
- Added `CLIPBOARD_PREFIX` constant (`"autogpt-flow-data:"`) to identify
our clipboard data and prevent conflicts with other clipboard content
- Added toast notifications to provide user feedback when copying nodes
- Added error handling for clipboard read/write operations with console
error logging
- Removed dependency on `@/services/storage/local-storage` for copied
flow data
- Updated `useCopyPaste` hook to use async clipboard operations with
proper promise handling

**Benefits:**
-  Removes local storage size limitations (5-10MB)
-  Enables copying nodes between browser tabs/windows
-  Provides better user feedback through toast notifications
-  More standard approach using native browser Clipboard API

### Checklist 📋

#### For code changes:

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Select one or more nodes in the flow editor
  - [x] Press `Ctrl+C` (or `Cmd+C` on Mac) to copy nodes
  - [x] Verify toast notification appears showing "Copied successfully" with node count
  - [x] Press `Ctrl+V` (or `Cmd+V` on Mac) to paste nodes
  - [x] Verify nodes are pasted at viewport center with new unique IDs
  - [x] Verify edges between copied nodes are also pasted correctly
  - [x] Test copying nodes in one browser tab and pasting in another tab (should work)
  - [x] Test copying non-flow data (e.g., text) and verify paste doesn't interfere with flow editor
2025-12-09 15:26:12 +00:00
Ubbe
6c43b34dee feat(frontend): add templates/triggers to new Library page view (#11580)
## Changes 🏗️

Add Templates to the new Agent Library page:

<img width="800" height="889" alt="Screenshot 2025-12-09 at 14 10 01"
src="https://github.com/user-attachments/assets/85f0d478-f3f9-4ccf-81df-b9a7f4ae8849"
/>

- You can create a template from a run ( new action button )
- Templates are listed and can be selected on the sidebar
- When viewing a template, you can edit it, create a task or delete it

Add Triggers to the new Agent Library page:

<img width="800" height="836" alt="Screenshot 2025-12-09 at 14 10 43"
src="https://github.com/user-attachments/assets/c722f807-c72f-4a4d-8778-e36bea203f6e"
/>

- When an agent contains a trigger block, the modal will create a trigger
- When there are triggers, they are listed on the sidebar
- A trigger can be viewed and edited

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run the new page locally
2025-12-09 18:30:13 +07:00
Zamil Majdy
c1e21d07e6 feat(platform): add execution accuracy alert system (#11562)
## Summary

<img width="1263" height="883" alt="image"
src="https://github.com/user-attachments/assets/98d4f449-1897-4019-a599-846c27df4191"
/>
<img width="398" height="190" alt="image"
src="https://github.com/user-attachments/assets/0138ac02-420d-4f96-b980-74eb41e3c968"
/>

- Add execution accuracy monitoring with moving averages and Discord
alerts
- Dashboard visualization for accuracy trends and alert detection  
- Hourly monitoring for marketplace agents (≥10 executions in 30 days)
- Generated API client integration with type-safe models

## Features
- **Moving Average Analysis**: 3-day vs 7-day comparison with
configurable thresholds
- **Discord Notifications**: Hourly alerts for accuracy drops ≥10%
- **Dashboard UI**: Real-time trends visualization with alert status
- **Type Safety**: Generated API hooks and models throughout
- **Error Handling**: Graceful OpenAI configuration handling
- **PostgreSQL Optimization**: Window functions for efficient trend
queries
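
A toy sketch of the alert condition described above (illustrative; whether the 10% threshold is absolute or relative to the baseline is an assumption here):

```python
def moving_average(values: list[float], window: int) -> float:
    # Average over the most recent `window` daily accuracy samples.
    recent = values[-window:]
    return sum(recent) / len(recent)

def should_alert(daily_accuracy: list[float], threshold: float = 0.10) -> bool:
    # Compare the short-term (3-day) trend against the longer (7-day) baseline
    # and alert when it has dropped by at least `threshold`.
    if len(daily_accuracy) < 7:
        return False
    return moving_average(daily_accuracy, 7) - moving_average(daily_accuracy, 3) >= threshold
```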

## Test plan
- [x] Backend accuracy monitoring logic tested with sample data
- [x] Frontend components using generated API hooks (no manual fetch)
- [x] Discord notification integration working
- [x] Admin authentication and authorization working
- [x] All formatting and linting checks passing
- [x] Error handling for missing OpenAI configuration
- [x] Test data available with `test-accuracy-agent-001`

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-12-08 19:28:57 +00:00
Ubbe
aaa8dcc5a8 feat(frontend): design updates run agent modal (#11570)
## Changes 🏗️

<img width="500" height="927" alt="Screenshot 2025-12-08 at 16 30 04"
src="https://github.com/user-attachments/assets/97170302-50ae-4750-99e1-275861ee9e08"
/>

<img width="500" height="897" alt="Screenshot 2025-12-08 at 16 30 10"
src="https://github.com/user-attachments/assets/0a7ac828-ebea-40de-a19e-fbf77b0c2eb7"
/>

Update the new **Run Agent Modal** as a new view. From now on, sections (inputs, credentials, etc.) are enclosed. Refactor all the existing code to be more readable and [adhere to code conventions](https://github.com/Significant-Gravitas/AutoGPT/blob/master/autogpt_platform/frontend/CONTRIBUTING.md).

### Other improvements

- Refactor `<CredentialInputs />`:
  - to adhere to code conventions
  - to show all existing credentials for a given provider
  - to allow deletion of a given credential
  - to allow selecting/switching credentials for a given task
- Run Graph Errors:
  - when a run fails, show the errors nicely in the toast and link to the builder for further debugging
- Design System:
  - secondary button colour has been updated
  - label size for form fields has been updated

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run the app locally
  - [x] Test the above updates running agents from the new library page
2025-12-08 21:47:32 +07:00
Ubbe
a46976decd fix(frontend): allow to close toasts (#11550)
## Changes 🏗️

<img width="795" height="275" alt="Screenshot 2025-12-04 at 21 05 55"
src="https://github.com/user-attachments/assets/e7572635-3976-4822-89ea-3e5e26196ab8"
/>

Allow closing toasts 🍞

## Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run Storybook locally
  - [x] Toasts have a close button top-right

---------

Co-authored-by: Abhimanyu Yadav <122007096+Abhi1992002@users.noreply.github.com>
2025-12-08 17:29:14 +07:00
Nicholas Tindle
c4eb7edb65 feat(platform): Improve Google Sheets/Drive integration with unified credentials (#11520)
Simplifies and improves the Google Sheets/Drive integration by merging
credentials with the file picker and using narrower OAuth scopes.

### Changes 🏗️

- Merge Google credentials and file picker into a single unified input
field for better UX
- Create spreadsheets using Drive API instead of Sheets API for proper
scope support
- Simplify Google Drive OAuth scope to only use `drive.file` (narrowest
permission needed)
- Clean up unused imports (NormalizedPickedFile)
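
A rough sketch of the unified value this produces (field names assumed from the summary below; the real model lives in `backend/blocks/google/_drive.py` and stores the credential reference as `_credentials_id`):

```python
from pydantic import BaseModel

class GoogleDriveFile(BaseModel):
    # One input now carries both "which file" and "authorized as whom",
    # instead of separate credentials and file-picker fields.
    id: str
    name: str
    mime_type: str
    credentials_id: str  # reference to the saved OAuth credential (name assumed)
```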

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Test creating a new Google Spreadsheet with GoogleSheetsCreateSpreadsheetBlock
  - [x] Test reading from existing spreadsheets with GoogleSheetsReadBlock
  - [x] Test writing to spreadsheets with GoogleSheetsWriteBlock
  - [x] Verify OAuth flow works with simplified scopes
  - [x] Verify file picker works with merged credentials field

#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> Unifies Google Drive picker and credentials with auto-credentials
across backend and frontend, updates all Sheets blocks and execution to
use it, and adds Drive-based spreadsheet creation plus supporting tests
and UI fixes.
> 
> - **Backend**:
> - **Google Drive model/field**: Introduce `GoogleDriveFile` (with
`_credentials_id`) and `GoogleDriveFileField()` for unified auth+picker
(`backend/blocks/google/_drive.py`).
> - **Sheets blocks**: Replace `GoogleDrivePickerField` and explicit
credentials with `GoogleDriveFileField` across all Sheets blocks;
preserve and emit credentials for chaining; add Drive service; create
spreadsheets via Drive API then manage via Sheets API.
> - **IO block**: Add `AgentGoogleDriveFileInputBlock` providing a Drive
picker input.
> - **Execution**: Support auto-generated credentials via
`BlockSchema.get_auto_credentials_fields()`; acquire/release multiple
credential locks; pass creds by `credentials_kwarg`
(`executor/manager.py`, `data/block.py`, `util/test.py`).
> - **Tests**: Add validation tests for duplicate/unique
`auto_credentials.kwarg_name` and defaults.
> - **Frontend**:
> - **Picker**: Enhance Google Drive picker to require/use saved
platform credentials, pass `_credentials_id`, validate scopes, and
manage dialog z-index/interaction; expose `requirePlatformCredentials`.
> - **UI**: Update dialogs/CSS to keep Google picker on top and prevent
overlay interactions.
> - **Types**: Extend `GoogleDrivePickerConfig` with `auto_credentials`
and related typings.
> 
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
7d25534def. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Copilot <198982749+Copilot@users.noreply.github.com>
2025-12-05 09:44:38 -06:00
Nicholas Tindle
3f690ea7b8 fix(platform/frontend): security upgrade next from 15.4.7 to 15.4.8 (#11536)
![snyk-top-banner](https://res.cloudinary.com/snyk/image/upload/r-d/scm-platform/snyk-pull-requests/pr-banner-default.svg)

### Snyk has created this PR to fix 1 vulnerability in the yarn dependencies of this project.

#### Snyk changed the following file(s):

- `autogpt_platform/frontend/package.json`


#### Note for
[zero-installs](https://yarnpkg.com/features/zero-installs) users

If you are using the Yarn feature
[zero-installs](https://yarnpkg.com/features/zero-installs) that was
introduced in Yarn V2, note that this PR does not update the
`.yarn/cache/` directory meaning this code cannot be pulled and
immediately developed on as one would expect for a zero-install project
- you will need to run `yarn` to update the contents of the
`./yarn/cache` directory.
If you are not using zero-install you can ignore this as your flow
should likely be unchanged.



<details>
<summary>⚠️ <b>Warning</b></summary>

```
Failed to update the yarn.lock, please update manually before merging.
```

</details>



#### Vulnerabilities that will be fixed with an upgrade:

| Severity | Issue |
|:---:|:---|
| Critical | Arbitrary Code Injection<br/>[SNYK-JS-NEXT-14173355](https://snyk.io/vuln/SNYK-JS-NEXT-14173355) |




---

> [!IMPORTANT]
>
> - Check the changes in this PR to ensure they won't cause issues with
your project.
> - Max score is 1000. Note that the real score may have changed since
the PR was raised.
> - This PR was automatically created by Snyk using the credentials of a
real user.

---

**Note:** _You are seeing this because you or someone else with access
to this repository has authorized Snyk to open fix PRs._

For more information:
🧐 [View latest project
report](https://app.snyk.io/org/significant-gravitas/project/3d924968-0cf3-4767-9609-501fa4962856?utm_source&#x3D;github&amp;utm_medium&#x3D;referral&amp;page&#x3D;fix-pr)
📜 [Customise PR
templates](https://docs.snyk.io/scan-using-snyk/pull-requests/snyk-fix-pull-or-merge-requests/customize-pr-templates?utm_source=github&utm_content=fix-pr-template)
🛠 [Adjust project
settings](https://app.snyk.io/org/significant-gravitas/project/3d924968-0cf3-4767-9609-501fa4962856?utm_source&#x3D;github&amp;utm_medium&#x3D;referral&amp;page&#x3D;fix-pr/settings)
📚 [Read about Snyk's upgrade
logic](https://docs.snyk.io/scan-with-snyk/snyk-open-source/manage-vulnerabilities/upgrade-package-versions-to-fix-vulnerabilities?utm_source=github&utm_content=fix-pr-template)

---

**Learn how to fix vulnerabilities with free interactive lessons:**

🦉 [Arbitrary Code
Injection](https://learn.snyk.io/lesson/insecure-deserialization/?loc&#x3D;fix-pr)


<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> Upgrades Next.js from 15.4.7 to 15.4.8 in the frontend and updates
lockfile/transitive references accordingly.
> 
> - **Dependencies**:
> - Bump `next` to `15.4.8` in `autogpt_platform/frontend/package.json`.
> - Update lockfile to align, including `@next/*` SWC binaries and
packages that peer-depend on `next` (e.g., `@sentry/nextjs`,
`@storybook/nextjs`, `@vercel/*`, `geist`, `nuqs`,
`@next/third-parties`).
> - Minor transitive tweak: `sharp` dependency `semver` updated to
`7.7.3`.
> 
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
e7741cbfb5. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->

---------

Co-authored-by: snyk-bot <snyk-bot@snyk.io>
Co-authored-by: Bentlybro <Github@bentlybro.com>
Co-authored-by: Abhimanyu Yadav <122007096+Abhi1992002@users.noreply.github.com>
2025-12-05 09:44:12 -06:00
Swifty
8be3c88711 feat(backend): add default store agents for seeding test databases (#11552)
This PR adds a collection of pre-built store agents that can be loaded
into test databases for development and testing purposes.

### Changes 🏗️

- Add 17 exported agent JSON files in `backend/agents/` directory
- Add `StoreAgent_rows.csv` containing store listing metadata (titles,
descriptions, categories, images)
- Add `load_store_agents.py` script to load agents into the test
database
- Add `load-store-agents` Makefile target for easy execution

**Included Agents:**
- Flux AI Image Generator
- YouTube Transcription Scraper  
- Decision Maker Lead Finder
- Smart Meeting Prep
- Automated Support Agent
- Unspirational Poster Maker
- AI Video Generator
- Automated SEO Blog Writer
- Lead Finder (Local Businesses)
- LinkedIn Post Generator
- YouTube to LinkedIn Post Converter
- Personal Newsletter
- Email Scout - Contact Finder Assistant
- YouTube Video to SEO Blog Writer
- AI Webpage Copy Improver
- Domain Name Finder
- AI Function

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run `make load-store-agents` and verify agents are loaded into the database
  - [x] Verify store listings appear correctly with metadata from CSV
  - [x] Confirm no sensitive information (API keys, secrets) is included in the exported agents

#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)

No configuration changes required - this only adds test data and a
loading script.
2025-12-05 16:08:37 +01:00
Zamil Majdy
e4d0dbc283 feat(platform): add Agent Output Demo field to marketplace submission form (#11538)
## Summary
- Add Agent Output Demo field to marketplace agent submission form,
positioned below the Description field
- Store agent output demo URLs in database for future CoPilot
integration
- Implement proper video/image ordering on marketplace pages
- Add shared YouTube URL validation utility to eliminate code
duplication

## Changes Made

### Frontend
- **Agent submission form**: Added Agent Output Demo field with YouTube
URL validation
- **Edit agent form**: Added Agent Output Demo field for existing
submissions
- **Marketplace display**: Implemented proper video/image ordering:
  1. YouTube/Overview video (if exists)
  2. First image (hero)
  3. Agent Output Demo (if exists) 
  4. Additional images
- **Shared utilities**: Created `validateYouTubeUrl` function in
`src/lib/utils.ts`

### Backend
- **Database schema**: Added `agentOutputDemoUrl` field to
`StoreListingVersion` model
- **Database views**: Updated `StoreAgent` view to include
`agent_output_demo` field
- **API models**: Added `agent_output_demo_url` to submission requests
and `agent_output_demo` to responses
- **Database migration**: Added migration to create new column and
update view
- **Test files**: Updated all test files to include the new required
field

## Test Plan
- [x] Frontend form validation works correctly for YouTube URLs
- [x] Database migration applies successfully 
- [x] Backend API accepts and returns the new field
- [x] Marketplace displays videos in correct order
- [x] Both frontend and backend formatting/linting pass
- [x] All test files include required field to prevent failures

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-12-05 11:40:12 +00:00
Swifty
8e476c3f8d fix(backend): pass credential type from SDK registry to integrations API (#11544)
### Changes 🏗️

This PR improves the `/integrations/providers` endpoint to dynamically
determine supported authentication types from the SDK registry instead
of using hardcoded values.

**What changed:**
- The `list_providers` function now looks up each provider in the
`AutoRegistry` to get its `supported_auth_types`
- If a provider has defined auth types in the SDK registry, those are
used to set `supports_api_key`, `supports_user_password`, and
`supports_host_scoped` flags
- Falls back to legacy hardcoded behavior for providers not registered
in the SDK (maintains backwards compatibility)

**Why:**
- Providers can now correctly declare their supported authentication
methods via the SDK
- Removes brittle hardcoded checks like `name in ("smtp",)` for specific
providers
- Makes the credential type system more extensible and maintainable
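
The lookup is roughly shaped like this (illustrative accessor and auth-type names; only the `smtp` legacy check is taken from the description above):

```python
def provider_auth_flags(name: str) -> dict[str, bool]:
    provider = AutoRegistry.get_provider(name)  # assumed accessor name
    if provider and provider.supported_auth_types:
        auth = set(provider.supported_auth_types)
        return {
            "supports_api_key": "api_key" in auth,
            "supports_user_password": "user_password" in auth,
            "supports_host_scoped": "host_scoped" in auth,
        }
    # Legacy fallback for providers not registered in the SDK
    # (defaults here are illustrative, not the actual legacy values)
    return {
        "supports_api_key": True,
        "supports_user_password": name in ("smtp",),
        "supports_host_scoped": False,
    }
```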

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Verified providers with SDK-defined auth types return correct flags
  - [x] Verified legacy providers still work with fallback behavior
  - [x] Tested the `/integrations/providers` endpoint returns expected data

#### For configuration changes:

- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)

No configuration changes required for this PR.
2025-12-05 12:42:49 +01:00
Zamil Majdy
2f63defb53 fix(backend): Mark ValueError as known block errors (#11537)
### Changes 🏗️

Mark ValueError as known block errors

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
2025-12-05 11:12:18 +00:00
Sukhtumur Narantuya
2934e9ea69 fix(backend): replace print() statements with proper logging (#11499)
- Replace print() with logger.info() in reddit.py for login message
- Replace print() with logger.debug() in airtable/_api.py for API params
- Replace print() with logger.debug() in _manual_base.py for webhook URL
- Add logging imports and logger initialization where missing
- Update FIXME to TODO with GitHub issue reference #8537
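
The pattern applied in each spot, as a runnable sketch with stand-in values:

```python
import logging

logger = logging.getLogger(__name__)

params = {"name": "My base"}   # stand-in for the Airtable API params
username = "example_user"      # stand-in for the Reddit login

logger.debug("create_base params: %s", params)  # was: print(params)
logger.info("Logged in as %s", username)        # was: print(f"Logged in as {username}")
```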

### Changes 🏗️

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] test it still works


<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> Switch `print()` to `logger.info/debug()` across Airtable, Reddit, and
manual webhook modules; add logger initialization and clarify TODO with
issue reference.
> 
> - **Backend**:
>   - **Airtable (`backend/blocks/airtable/_api.py`)**:
>     - Replace `print(params)` with `logger.debug` in `create_base`.
>   - **Reddit (`backend/blocks/reddit.py`)**:
>     - Add `logging` import and `logger` initialization.
>     - Replace login `print` with `logger.info` in `get_praw`.
>   - **Webhooks (`backend/integrations/webhooks/_manual_base.py`)**:
> - Replace `print` with `logger.debug` in `_register_webhook` and add
`logger`.
>     - Update `FIXME` to `TODO` with GitHub issue reference `#8537`.
> 
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
3add9b0fa9. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->

Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
2025-12-05 07:49:20 +00:00
Krzysztof Czerwinski
c880db439d feat(platform): Backend completion of Onboarding tasks (#11375)
Make onboarding task completion backend-authoritative, which prevents cheating (previously users could instantly mark all tasks as completed and collect rewards) and makes task completion more reliable. Task completion is moved to the backend, with the exception of introductory onboarding tasks and visit-page tasks.
### Changes 🏗️

- Move run-counter incrementing to the backend, and make webhook-triggered and scheduled executions count as well
- Use the user's timezone for calculating the run streak (see the sketch after this list)
- Frontend task completion is moved from the update-onboarding-state call to a separate endpoint and guarded so only frontend tasks can be completed
- Graph creation, graph execution, and adding a marketplace agent to the library accept a `source`, so the appropriate tasks can be completed
- Replace `client.ts` API calls with orval-generated ones and remove no-longer-used functions from `client.ts`
- Add a `resolveResponse` helper function that unwraps an orval-generated call result to a 2xx response
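
For the run-streak item, the key detail is computing each run's calendar day in the user's timezone rather than UTC; a minimal sketch using the stdlib (illustrative function names):

```python
from datetime import date, datetime
from zoneinfo import ZoneInfo

def run_day(run_at_utc: datetime, user_tz: str) -> date:
    # A run at 23:30 UTC can still be "today" for a user west of UTC,
    # or already "tomorrow" for a user east of it.
    return run_at_utc.astimezone(ZoneInfo(user_tz)).date()

def extends_streak(last_day: date, new_day: date) -> bool:
    # The streak continues only if the new run lands on the next local day.
    return (new_day - last_day).days == 1
```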

Small changes&bug fixes:
- Make Redis notification bus serialize all payload fields
- Fix confetti when group is finished
- Collapse finished group when opening Wallet
- Play confetti only for tasks that are listed in the Wallet UI

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Onboarding can be finished
  - [x] All tasks can be finished and work properly
  - [x] Confetti works properly
2025-12-05 02:32:28 +00:00
Nicholas Tindle
486099140d fix(platform/frontend): security upgrade next from 15.4.7 to 15.4.8 (#11536)
![snyk-top-banner](https://res.cloudinary.com/snyk/image/upload/r-d/scm-platform/snyk-pull-requests/pr-banner-default.svg)

### Snyk has created this PR to fix 1 vulnerability in the yarn dependencies of this project.

#### Snyk changed the following file(s):

- `autogpt_platform/frontend/package.json`


#### Note for
[zero-installs](https://yarnpkg.com/features/zero-installs) users

If you are using the Yarn feature
[zero-installs](https://yarnpkg.com/features/zero-installs) that was
introduced in Yarn V2, note that this PR does not update the
`.yarn/cache/` directory meaning this code cannot be pulled and
immediately developed on as one would expect for a zero-install project
- you will need to run `yarn` to update the contents of the
`./yarn/cache` directory.
If you are not using zero-install you can ignore this as your flow
should likely be unchanged.



<details>
<summary>⚠️ <b>Warning</b></summary>

```
Failed to update the yarn.lock, please update manually before merging.
```

</details>



#### Vulnerabilities that will be fixed with an upgrade:

| Severity | Issue |
|:---:|:---|
| Critical | Arbitrary Code Injection<br/>[SNYK-JS-NEXT-14173355](https://snyk.io/vuln/SNYK-JS-NEXT-14173355) |




---

> [!IMPORTANT]
>
> - Check the changes in this PR to ensure they won't cause issues with
your project.
> - Max score is 1000. Note that the real score may have changed since
the PR was raised.
> - This PR was automatically created by Snyk using the credentials of a
real user.

---

**Note:** _You are seeing this because you or someone else with access
to this repository has authorized Snyk to open fix PRs._

For more information:
🧐 [View latest project
report](https://app.snyk.io/org/significant-gravitas/project/3d924968-0cf3-4767-9609-501fa4962856?utm_source&#x3D;github&amp;utm_medium&#x3D;referral&amp;page&#x3D;fix-pr)
📜 [Customise PR
templates](https://docs.snyk.io/scan-using-snyk/pull-requests/snyk-fix-pull-or-merge-requests/customize-pr-templates?utm_source=github&utm_content=fix-pr-template)
🛠 [Adjust project
settings](https://app.snyk.io/org/significant-gravitas/project/3d924968-0cf3-4767-9609-501fa4962856?utm_source&#x3D;github&amp;utm_medium&#x3D;referral&amp;page&#x3D;fix-pr/settings)
📚 [Read about Snyk's upgrade
logic](https://docs.snyk.io/scan-with-snyk/snyk-open-source/manage-vulnerabilities/upgrade-package-versions-to-fix-vulnerabilities?utm_source=github&utm_content=fix-pr-template)

---

**Learn how to fix vulnerabilities with free interactive lessons:**

🦉 [Arbitrary Code
Injection](https://learn.snyk.io/lesson/insecure-deserialization/?loc&#x3D;fix-pr)


<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> Upgrades Next.js from 15.4.7 to 15.4.8 in the frontend and updates
lockfile/transitive references accordingly.
> 
> - **Dependencies**:
> - Bump `next` to `15.4.8` in `autogpt_platform/frontend/package.json`.
> - Update lockfile to align, including `@next/*` SWC binaries and
packages that peer-depend on `next` (e.g., `@sentry/nextjs`,
`@storybook/nextjs`, `@vercel/*`, `geist`, `nuqs`,
`@next/third-parties`).
> - Minor transitive tweak: `sharp` dependency `semver` updated to
`7.7.3`.
> 
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
e7741cbfb5. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->

---------

Co-authored-by: snyk-bot <snyk-bot@snyk.io>
Co-authored-by: Bentlybro <Github@bentlybro.com>
Co-authored-by: Abhimanyu Yadav <122007096+Abhi1992002@users.noreply.github.com>
2025-12-04 20:09:49 +00:00
Ubbe
6d8906ced7 fix(frontend): summary card layout (#11556)
## Changes 🏗️

- Make it less verbose and adapt the summary card to the new design

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run locally and see summary displaying well
2025-12-04 23:33:57 +07:00
Abhimanyu Yadav
bf32a76f49 fix(frontend): prevent NodeHandle error when array schema items is undefined (#11549)
- Added conditional rendering in `NodeArrayInput` component to only
render `NodeHandle` when `schema.items` is defined
- This prevents the error when array schemas don't have an `items`
property defined
- The input field rendering logic already had proper null checks, so
only the handle rendering needed to be fixed

### Checklist 📋

#### For code changes:

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Open the legacy builder and create a new agent
  - [x] Add a Smart Decision block to the agent
- [x] Verify that the `conversation_history` field renders without
errors
- [x] Verify that array fields with defined `items` still render
correctly (e.g., test with other blocks that use arrays)
- [x] Verify that connecting/disconnecting handles to the
conversation_history field works correctly
  - [x] Verify that the field can accept input values when not connected
2025-12-04 15:13:31 +00:00
Ubbe
f7a8e372dd feat(frontend): implement new actions sidebar + summary (#11545)
## Changes 🏗️

<img width="800" height="767" alt="Screenshot 2025-12-04 at 17 40 10"
src="https://github.com/user-attachments/assets/37036246-bcdb-46eb-832c-f91fddfd9014"
/>

<img width="800" height="492" alt="Screenshot 2025-12-04 at 17 40 16"
src="https://github.com/user-attachments/assets/ba547e54-016a-403c-9ab6-99465d01af6b"
/>

On the new Agent Library page:

- Implement the new actions sidebar ( main change... ) 
- Refactor the layout/components to accommodate that
- Implement the missing "Summary" functionality 
- Update icon buttons in Design system with new designs

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run the app locally and test it
2025-12-04 22:46:42 +07:00
Abhimanyu Yadav
3ccc712463 feat(frontend): add host-scoped credentials support to CredentialField (#11546)
### Changes 🏗️

This PR adds support for `host_scoped` credential type in the new
builder's `CredentialField` component. This enables blocks that require
sensitive headers for custom API endpoints to configure host-scoped
credentials directly from the credential field.
<img width="745" height="843" alt="Screenshot 2025-12-04 at 4 31 09 PM"
src="https://github.com/user-attachments/assets/d076b797-64c4-4c31-9c88-47a064814055"
/>

<img width="418" height="180" alt="Screenshot 2025-12-04 at 4 36 02 PM"
src="https://github.com/user-attachments/assets/b4fa6d8d-d8f4-41ff-ab11-7c708017f8fd"
/>

**Key changes:**

- **Added `HostScopedCredentialsModal` component**
(`models/HostScopedCredentialsModal/`)
- Modal dialog for creating host-scoped credentials with host pattern,
optional title, and dynamic header pairs (key-value)
- Auto-populates host from discriminator value (URL field) when
available
  - Supports adding/removing multiple header pairs with validation

- **Enhanced credential filtering logic** (`helpers.ts`)
- Updated `filterCredentialsByProvider` to accept `schema` and
`discriminatorValue` parameters
  - Added intelligent filtering for:
    - Credential types supported by the block
    - OAuth credentials with sufficient scopes
    - Host-scoped credentials matched by host from discriminator value
  - Extracted `getDiscriminatorValue` helper function for reusability

- **Updated `CredentialField` component**
  - Added `supportsHostScoped` check in `useCredentialField` hook
- Conditionally renders `HostScopedCredentialsModal` when
`supportsHostScoped && discriminatorValue` is true
  - Exports `discriminatorValue` for use in child components

- **Updated `useCredentialField` hook**
- Calculates `discriminatorValue` using new `getDiscriminatorValue`
helper
- Passes `schema` and `discriminatorValue` to enhanced
`filterCredentialsByProvider` function
- Returns `supportsHostScoped` and `discriminatorValue` for component
consumption

**Technical details:**
- Host extraction uses `getHostFromUrl` utility to parse host from
discriminator value (URL)
- Header pairs are managed as state with add/remove functionality
- Form validation uses `react-hook-form` with `zod` schema
- Credential creation integrates with existing API endpoints and query
invalidation
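
A minimal sketch of the host-matching step, assuming a simple credential
shape; `getHostFromUrl` is named above, but its body here is illustrative:

```typescript
type HostScopedCredential = { id: string; type: "host_scoped"; host: string };

// Parse the host out of a discriminator value (URL), tolerating values
// without an explicit scheme. Sketch only; the real helper may differ.
function getHostFromUrl(url: string): string | null {
  try {
    return new URL(url.includes("://") ? url : `https://${url}`).host;
  } catch {
    return null; // not a parseable URL
  }
}

// A host-scoped credential matches when its host equals the host of the
// block's discriminator value.
function matchesHost(
  credential: HostScopedCredential,
  discriminatorValue: string,
): boolean {
  const host = getHostFromUrl(discriminatorValue);
  return host !== null && credential.host === host;
}
```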

### Checklist 📋

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Verify `HostScopedCredentialsModal` appears when block supports
`host_scoped` credentials and discriminator value is present
  - [x] Test host auto-population from discriminator value (URL field)
  - [x] Test manual host entry when discriminator value is not available
  - [x] Test adding/removing multiple header pairs
- [x] Test form validation (host required, empty header pairs filtered
out)
  - [x] Test credential creation and successful toast notification
  - [x] Verify credentials list refreshes after creation
- [x] Test host-scoped credential filtering matches credentials by host
from URL
- [x] Verify existing credential types (api_key, oauth2, user_password)
still work correctly
  - [x] Test OAuth scope filtering still works as expected
- [x] Verify modal only shows when `supportsHostScoped &&
discriminatorValue` conditions are met
2025-12-04 15:13:23 +00:00
Abhimanyu Yadav
2b9816cfa5 fix(frontend): ensure node selection state is set before copying in context menu (#11535)
<!-- Clearly explain the need for these changes: -->

Fixes an issue where copying a node via the context menu dialog fails on
the first attempt in a new session. The problem occurs because the node
selection state update and the copy operation happen in quick
succession, causing a race condition where `copySelectedNodes()` reads
the store before the selection state is properly updated.


### Checklist 📋

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Start a new browser session (or clear storage)
  - [x] Open the flow editor
- [x] Right-click on a node and select "Copy Node" from the context menu
  - [x] Verify the node is successfully copied on the first attempt
2025-12-04 15:13:13 +00:00
Abhimanyu Yadav
4e87f668e3 feat(frontend): add file input widget with variants and base64 support (#11533)
<!-- Clearly explain the need for these changes: -->

This PR enhances the FileInput component to support multiple variants
and modes, and integrates it into the form renderer as a file widget.
The changes enable a more flexible file input experience with both
server upload and local base64 conversion capabilities.

### Compact one
<img width="354" height="91" alt="Screenshot 2025-12-03 at 8 05 51 PM"
src="https://github.com/user-attachments/assets/1295a34f-4d9f-4a65-89f2-4b6516f176ff"
/>
<img width="386" height="96" alt="Screenshot 2025-12-03 at 8 06 11 PM"
src="https://github.com/user-attachments/assets/3c10e350-8ddc-43ff-bf0a-68b23f8db394"
/>

### Default one
<img width="671" height="165" alt="Screenshot 2025-12-03 at 8 05 08 PM"
src="https://github.com/user-attachments/assets/bd17c5f1-8cdb-4818-9850-a9e61252d8c1"
/>
<img width="656" height="141" alt="Screenshot 2025-12-03 at 8 05 21 PM"
src="https://github.com/user-attachments/assets/1edf260f-6245-44b9-bd3e-dd962f497413"
/>

### Changes 🏗️

#### FileInput Component Enhancements
- **Added variant support**: Introduced `default` and `compact` variants
- `default`: Full-featured UI with drag & drop, progress bar, and
storage note
- `compact`: Minimal inline design suitable for tight spaces like node
inputs
- **Added mode support**: Introduced `upload` and `base64` modes
- `upload`: Uploads files to server (requires `onUploadFile` and
`uploadProgress`)
  - `base64`: Converts files to base64 locally without server upload (see
the sketch after the changes list)
- **Improved type safety**: Refactored props using discriminated unions
(`UploadModeProps | Base64ModeProps`) to ensure type-safe usage
- **Enhanced file handling**: Added `getFileLabelFromValue` helper to
extract file labels from base64 data URIs or file paths
- **Better UX**: Added `showStorageNote` prop to control visibility of
storage disclaimer

#### FileWidget Integration
- **Replaced legacy Input**: Migrated from legacy `Input` component to
new `FileInput` component
- **Smart variant selection**: Automatically selects `default` or
`compact` variant based on form context size
- **Base64 mode**: Uses base64 mode for form inputs, eliminating need
for server uploads in builder context
- **Improved accessibility**: Better disabled/readonly state handling
with visual feedback

#### Form Renderer Updates
- **Disabled validation**: Added `noValidate={true}` and
`liveValidate={false}` to prevent premature form validation

#### Storybook Updates
- **Expanded stories**: Added comprehensive stories for all variant/mode
combinations:
  - `Default`: Default variant with upload mode
  - `Compact`: Compact variant with base64 mode
  - `CompactWithUpload`: Compact variant with upload mode
  - `DefaultWithBase64`: Default variant with base64 mode
- **Improved documentation**: Updated component descriptions to clearly
explain variants and modes
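
The base64 mode referenced above boils down to a local `FileReader`
conversion. A minimal sketch, assuming the helper produces a data URI (the
actual `FileInput` internals may differ):

```typescript
// Convert a File to a data URI locally, with no server round-trip.
function fileToBase64(file: File): Promise<string> {
  return new Promise((resolve, reject) => {
    const reader = new FileReader();
    reader.onload = () => resolve(reader.result as string);
    reader.onerror = () => reject(reader.error);
    reader.readAsDataURL(file); // yields "data:<mime>;base64,<payload>"
  });
}
```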

### Checklist 📋

#### For code changes:

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Test FileInput component in Storybook with all variant/mode
combinations
  - [x] Test file upload flow in default variant with upload mode
  - [x] Test base64 conversion in compact variant with base64 mode
  - [x] Test file widget in form renderer (node inputs)
  - [x] Test file type validation (accept prop)
  - [x] Test file size validation (maxFileSize prop)
  - [x] Test error handling for invalid files
  - [x] Test disabled and readonly states
  - [x] Test file clearing/removal functionality
  - [x] Verify compact variant renders correctly in tight spaces
  - [x] Verify default variant shows storage note only in upload mode
  - [x] Test drag & drop functionality in default variant
2025-12-04 15:13:01 +00:00
Abhimanyu Yadav
729400dbe1 feat(frontend): display graph validation errors inline on node fields (#11524)
When running a graph in the new builder, validation errors were only
displayed in toast notifications, making it difficult for users to
identify which specific fields had errors. Users needed to see
validation errors directly next to the problematic fields within each
node for better UX and faster debugging.

<img width="1319" height="953" alt="Screenshot 2025-12-03 at 12 48
15 PM"
src="https://github.com/user-attachments/assets/d444bc71-9bee-4fa7-8b7f-33339bd0cb24"
/>

### Changes 🏗️

- **Error handling in graph execution** (`useRunGraph.ts`):
- Added detection for graph validation errors using
`ApiError.isGraphValidationError()`
  - Parse and store node-level errors from backend validation response
  - Clear all node errors on successful graph execution
- Enhanced toast messages to guide users to fix validation errors on
highlighted nodes

- **Node store error management** (`nodeStore.ts`):
  - Added `errors` field to node data structure
  - Implemented `updateNodeErrors()` to set errors for a specific node
- Implemented `clearNodeErrors()` to remove errors from a specific node
  - Implemented `getNodeErrors()` to retrieve errors for a specific node
- Implemented `setNodeErrorsForBackendId()` to set errors by backend ID
(supports matching by `metadata.backend_id` or node `id`)
- Implemented `clearAllNodeErrors()` to clear all node errors across the
graph

- **Visual error indication** (`CustomNode.tsx`, `NodeContainer.tsx`):
- Added error detection logic to identify both configuration errors and
output errors
- Applied error styling to nodes with validation errors (using `FAILED`
status styling)
- Nodes with errors now display with red border/ring to visually
indicate issues

- **Field-level error display** (`FieldTemplate.tsx`):
  - Fetch node errors from store for the current node
- Match field IDs with error keys (handles both underscore and dot
notation)
  - Display field-specific error messages below each field in red text
- Added helper function `getFieldErrorKey()` to normalize field IDs for
error matching

- **Utility helpers** (`helpers.ts`):
- Created `getFieldErrorKey()` function to extract field key from field
ID (removes `root_` prefix)
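
A sketch of the normalization and matching described above.
`getFieldErrorKey()` is named in this PR; `findFieldError` and the
error-map shape are illustrative assumptions:

```typescript
// Strip the RJSF "root_" prefix so field IDs can be compared to error keys.
function getFieldErrorKey(fieldId: string): string {
  return fieldId.replace(/^root_/, "");
}

// Backend errors may use dot notation ("inputs.query") while field IDs use
// underscores ("root_inputs_query"), so try both spellings.
function findFieldError(
  fieldId: string,
  nodeErrors: Record<string, string>,
): string | undefined {
  const key = getFieldErrorKey(fieldId);
  return nodeErrors[key] ?? nodeErrors[key.replace(/_/g, ".")];
}
```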

### Checklist 📋

#### For code changes:

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Create a graph with multiple nodes and intentionally leave
required fields empty
- [x] Run the graph and verify that validation errors appear in toast
notification
- [x] Verify that nodes with errors are highlighted with red border/ring
styling
- [x] Verify that field-specific error messages appear below each
problematic field in red text
- [x] Verify that error messages handle both underscore and dot notation
in field keys
- [x] Fix validation errors and run graph again - verify errors are
cleared
  - [x] Verify that successful graph execution clears all node errors
- [x] Test with nodes that have `backend_id` in metadata vs nodes
without
  - [x] Verify that nodes without errors don't show error styling
- [x] Test with nested fields and array fields to ensure error matching
works correctly
2025-12-04 15:12:51 +00:00
Abhimanyu Yadav
f6608e99c8 feat(frontend): add expandable text input modal for better editing experience (#11510)
Text inputs in the form builder can be difficult to edit when dealing
with longer content. Users need a way to expand text inputs into a
larger, more comfortable editing interface, especially for multi-line
text, passwords, and longer string values.



https://github.com/user-attachments/assets/443bf4eb-c77c-4bf6-b34c-77091e005c6d


### Changes 🏗️

- **Added `InputExpanderModal` component**: A new modal component that
provides a larger textarea (300px min-height) for editing text inputs
with the following features:
- Copy-to-clipboard functionality with visual feedback (checkmark icon;
see the sketch after the changes list)
  - Toast notification on successful copy
  - Auto-focus on open for better UX
  - Proper state management to reset values when modal opens/closes
  
- **Enhanced `TextInputWidget`**:
- Added expand button (ArrowsOutIcon) with tooltip for text, password,
and textarea input types
  - Button appears inline next to the input field
  - Integrated the new `InputExpanderModal` component
  - Improved layout with flexbox to accommodate the expand button
- Added padding-right to input when expand button is visible to prevent
text overlap

- **Refactored file structure**:
  - Moved `TextInputWidget.tsx` into `TextInputWidget/` directory
  - Updated import path in `widgets/index.ts`

- **UX improvements**:
- Expand button only shows for applicable input types (text, password,
textarea)
  - Number and integer inputs don't show expand button (not needed)
- Modal preserves schema title, description, and placeholder for context
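
A small sketch of the copy flow referenced above; `setCopied` and the
2-second revert are assumptions about the component's internals:

```typescript
async function copyToClipboard(
  text: string,
  setCopied: (copied: boolean) => void,
): Promise<void> {
  await navigator.clipboard.writeText(text); // may reject without permission
  setCopied(true); // swap the copy icon for a checkmark
  setTimeout(() => setCopied(false), 2000); // revert the icon afterwards
}
```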

### Checklist 📋

#### For code changes:

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Test expand button appears for text input fields
  - [x] Test expand button appears for password input fields  
  - [x] Test expand button appears for textarea fields
  - [x] Test expand button does NOT appear for number/integer inputs
  - [x] Test clicking expand button opens modal with current value
  - [x] Test editing text in modal and saving updates the input field
  - [x] Test cancel button closes modal without saving changes
- [x] Test copy-to-clipboard button copies text and shows success state
  - [x] Test toast notification appears on successful copy
  - [x] Test modal resets to original value when reopened
  - [x] Test modal auto-focuses textarea on open
  - [x] Test expand button tooltip displays correctly
  - [x] Test input field layout with expand button (no text overlap)
2025-12-04 15:12:32 +00:00
Abhimanyu Yadav
78c2245269 feat(frontend): add automatic collision resolution for flow editor nodes (#11506)
When users drag and drop nodes in the new flow editor, nodes can overlap
with each other, making the graph difficult to read and interact with.
This PR adds an automatic collision resolution algorithm that runs when
a node is dropped, ensuring nodes are automatically separated to prevent
overlaps and maintain a clean, readable graph layout.

### Changes 🏗️

- **Added collision resolution algorithm** (`resolve-collision.ts`):
- Implements an iterative collision detection and resolution system
using Flatbush for efficient spatial indexing
- Automatically resolves overlaps by moving nodes apart along the axis
with the smallest overlap (see the sketch after this list)
- Configurable options: `maxIterations`, `overlapThreshold`, and
`margin`
- Uses actual node dimensions (`width`, `height`, or `measured` values)
when available

- **Integrated collision resolution into Flow component**:
- Added `onNodeDragStop` callback that triggers collision resolution
after a node is dropped
- Configured with `maxIterations: Infinity`, `overlapThreshold: 0.5`,
and `margin: 15px`

- **Enhanced node dimension handling**:
- Updated `nodeStore.ts` to prioritize actual node dimensions
(`node.width`, `node.measured.width`) over hardcoded defaults when
calculating positions
  - Ensures collision detection uses accurate node sizes

- **Added dependency**:
- Added `flatbush@4.5.0` for efficient spatial indexing and collision
detection
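
A compact sketch of the resolution loop under stated assumptions: the real
`resolve-collision.ts` indexes boxes with Flatbush so only nearby pairs are
compared, whereas this O(n²) pairwise version keeps just the move-apart
logic, and `NodeBox`/`resolveCollisions` are illustrative names:

```typescript
type NodeBox = { id: string; x: number; y: number; width: number; height: number };

function resolveCollisions(
  nodes: NodeBox[],
  { maxIterations = 100, overlapThreshold = 0.5, margin = 15 } = {},
): NodeBox[] {
  const boxes = nodes.map((n) => ({ ...n }));
  for (let iter = 0; iter < maxIterations; iter++) {
    let moved = false;
    for (let i = 0; i < boxes.length; i++) {
      for (let j = i + 1; j < boxes.length; j++) {
        const a = boxes[i];
        const b = boxes[j];
        // Overlap on each axis, padded by the desired margin between nodes.
        const dx = Math.min(a.x + a.width, b.x + b.width) - Math.max(a.x, b.x) + margin;
        const dy = Math.min(a.y + a.height, b.y + b.height) - Math.max(a.y, b.y) + margin;
        if (dx <= overlapThreshold || dy <= overlapThreshold) continue;
        // Separate along the axis with the smaller overlap, half each way.
        if (dx < dy) {
          const shift = (a.x < b.x ? -1 : 1) * (dx / 2);
          a.x += shift;
          b.x -= shift;
        } else {
          const shift = (a.y < b.y ? -1 : 1) * (dy / 2);
          a.y += shift;
          b.y -= shift;
        }
        moved = true;
      }
    }
    if (!moved) break; // layout is stable
  }
  return boxes;
}
```

The configuration in this PR passes `maxIterations: Infinity`, relying on
the `moved` flag to terminate once the layout stabilises.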

### Checklist 📋

#### For code changes:

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Drag a node and drop it on top of another node - verify nodes
automatically separate
- [x] Drag multiple nodes to create overlapping clusters - verify all
overlaps are resolved
- [x] Drag nodes with different sizes (NOTE blocks vs regular blocks) -
verify collision detection uses correct dimensions
- [x] Drag nodes near the edge of the canvas - verify nodes don't get
pushed off-screen
- [x] Test with a graph containing many nodes (20+) - verify performance
is acceptable
  - [x] Verify nodes maintain their positions when no collisions occur
- [x] Test with nodes that have custom measured dimensions - verify
accurate collision detection
2025-12-04 14:46:43 +00:00
Nicholas Tindle
113df689dc feat(platform): Improve Google Sheets/Drive integration with unified credentials (#11520)
Simplifies and improves the Google Sheets/Drive integration by merging
credentials with the file picker and using narrower OAuth scopes.

### Changes 🏗️

- Merge Google credentials and file picker into a single unified input
field for better UX
- Create spreadsheets using Drive API instead of Sheets API for proper
scope support
- Simplify Google Drive OAuth scope to only use `drive.file` (narrowest
permission needed)
- Clean up unused imports (NormalizedPickedFile)

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Test creating a new Google Spreadsheet with
GoogleSheetsCreateSpreadsheetBlock
- [x] Test reading from existing spreadsheets with GoogleSheetsReadBlock
  - [x] Test writing to spreadsheets with GoogleSheetsWriteBlock
  - [x] Verify OAuth flow works with simplified scopes
  - [x] Verify file picker works with merged credentials field

#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> Unifies Google Drive picker and credentials with auto-credentials
across backend and frontend, updates all Sheets blocks and execution to
use it, and adds Drive-based spreadsheet creation plus supporting tests
and UI fixes.
> 
> - **Backend**:
> - **Google Drive model/field**: Introduce `GoogleDriveFile` (with
`_credentials_id`) and `GoogleDriveFileField()` for unified auth+picker
(`backend/blocks/google/_drive.py`).
> - **Sheets blocks**: Replace `GoogleDrivePickerField` and explicit
credentials with `GoogleDriveFileField` across all Sheets blocks;
preserve and emit credentials for chaining; add Drive service; create
spreadsheets via Drive API then manage via Sheets API.
> - **IO block**: Add `AgentGoogleDriveFileInputBlock` providing a Drive
picker input.
> - **Execution**: Support auto-generated credentials via
`BlockSchema.get_auto_credentials_fields()`; acquire/release multiple
credential locks; pass creds by `credentials_kwarg`
(`executor/manager.py`, `data/block.py`, `util/test.py`).
> - **Tests**: Add validation tests for duplicate/unique
`auto_credentials.kwarg_name` and defaults.
> - **Frontend**:
> - **Picker**: Enhance Google Drive picker to require/use saved
platform credentials, pass `_credentials_id`, validate scopes, and
manage dialog z-index/interaction; expose `requirePlatformCredentials`.
> - **UI**: Update dialogs/CSS to keep Google picker on top and prevent
overlay interactions.
> - **Types**: Extend `GoogleDrivePickerConfig` with `auto_credentials`
and related typings.
> 
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
7d25534def. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Copilot <198982749+Copilot@users.noreply.github.com>
2025-12-04 14:40:30 +00:00
Ubbe
f1c6c94636 ci(frontend): fix concurrency groups (#11551)
## Changes 🏗️

Fix concurrency grouping on Front-end workflows.

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] We will see once merged
2025-12-04 21:44:06 +07:00
Swifty
6588110bf2 fix(frontend): forward X-API-Key header through proxy (#11530)
The Next.js API proxy was stripping the X-API-Key header when forwarding
requests to the backend, causing API key authentication to fail in
environments where requests go through the proxy (e.g., dev
environment).

### Changes 🏗️

- Updated `createRequestHeaders()` in
`frontend/src/lib/autogpt-server-api/helpers.ts` to forward the
`X-API-Key` header from the original request to the backend
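
Roughly, the fix looks like this (a sketch; the real `createRequestHeaders()`
also handles the Authorization and admin-impersonation headers):

```typescript
function createRequestHeaders(incoming: Headers): Headers {
  const headers = new Headers();
  const authorization = incoming.get("Authorization");
  if (authorization) headers.set("Authorization", authorization);
  // The fix: forward the API key instead of dropping it at the proxy.
  const apiKey = incoming.get("X-API-Key");
  if (apiKey) headers.set("X-API-Key", apiKey);
  return headers;
}
```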

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Verify API key authentication works when requests go through the
Next.js proxy
- [x] Verify existing authentication (Authorization header) still works
  - [x] Verify admin impersonation header forwarding still works

#### For configuration changes:

- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)

No configuration changes required.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-12-03 13:39:17 +01:00
Ubbe
bfbd4eee53 ci(frontend): concurrency optimizations (#11525)
## Changes 🏗️

Added the same concurrency optimisation to the Front-end and Fullstack
CI workflows. It will:

- Cancel in-progress runs when a new workflow starts for the same
branch/PR
- Reduce CI costs by avoiding redundant runs
- Ensure only the latest workflow runs

Both workflows now use the same concurrency strategy to optimise CI
billing.
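
For reference, the standard GitHub Actions pattern for this (the exact
group key used in these workflows is an assumption):

```yaml
concurrency:
  # One group per workflow + branch/PR; a new push cancels the previous run.
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true
```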

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] We will see as we push commits...
2025-12-03 18:05:22 +07:00
Swifty
7b93600973 fix duplicate prometheus metrics 2025-12-03 11:04:38 +01:00
Abhimanyu Yadav
02d9ff8db2 fix(frontend): improve error message extraction in agent execution error handler (#11527)
When agent execution fails, the error toast was only showing
`error.message`, which often lacks detail. The API returns more specific
error messages in `error.response.detail.message`, but these weren't
being displayed to users, making debugging harder.

<img width="1018" height="796" alt="Screenshot 2025-12-03 at 2 14 17 PM"
src="https://github.com/user-attachments/assets/6a93aed9-9e18-450a-9995-8760d7d5ca35"
/>

### Changes 🏗️

- Updated error message extraction in `useAgentRunModal` to check
`error.response.detail.message` first, then fall back to
`error.message`, then to the default message
- This ensures users see the most specific error message available from
the API response
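
The fallback chain, sketched with an illustrative error shape (the real
`ApiError` type and default copy may differ):

```typescript
type AgentRunError = Error & {
  response?: { detail?: { message?: string } };
};

function getRunErrorMessage(error: AgentRunError): string {
  // `||` (not `??`) so empty-string messages also fall through.
  return (
    error.response?.detail?.message ||
    error.message ||
    "Agent execution failed." // assumed default copy
  );
}
```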

### Checklist 📋

#### For code changes:

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Triggered an agent execution error (e.g., invalid inputs) and
verified the toast shows the detailed error message from
`error.response.detail.message`
- [x] Verified fallback to `error.message` when
`error.response.detail.message` is not available
  - [x] Verified fallback to default message when neither is available
2025-12-03 09:19:29 +00:00
Ubbe
b4a69c49a1 feat(frontend): use websockets on new library page + fixes (#11526)
## Changes 🏗️

<img width="800" height="614" alt="Screenshot 2025-12-03 at 14 52 46"
src="https://github.com/user-attachments/assets/c7012d7a-96d4-4268-a53b-27f2f7322a39"
/>

- Create a new `useExecutionEvents()` hook that can be used to subscribe
to agent executions
- now both the **Activity Dropdown** and the new **Library Agent Page**
use that
  - so subscribing to executions is centralised in a single place
- Apply a couple of design fixes
- Fix not being able to select the new templates tab 
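
A sketch of what the hook's shape could look like; the websocket endpoint
and event type below are assumptions, since the real hook wraps the
platform's existing websocket client:

```typescript
import { useEffect } from "react";

export function useExecutionEvents(
  graphId: string,
  onEvent: (event: unknown) => void,
): void {
  useEffect(() => {
    // Hypothetical endpoint; the real client and URL live in the platform API.
    const socket = new WebSocket(`wss://example.invalid/ws/executions/${graphId}`);
    socket.onmessage = (message) => onEvent(JSON.parse(message.data));
    return () => socket.close(); // unsubscribe on unmount
  }, [graphId, onEvent]);
}
```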

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run the app locally and verify the above
2025-12-03 16:24:58 +07:00
Ubbe
e62a8fb572 feat(frontend): design updates on new library page... (#11522)
## Changes 🏗️

### Design updates

Design updates on the new Library page. New empty views with
illustration and overall changes on the sidebar and selected run
sections...

<img width="800" height="871" alt="Screenshot 2025-12-03 at 14 03 45"
src="https://github.com/user-attachments/assets/6970af99-dda8-4cd8-a9b5-97fe5ee2032e"
/>

<img width="800" height="844" alt="Screenshot 2025-12-03 at 14 03 52"
src="https://github.com/user-attachments/assets/92e6af79-db9f-4098-961f-9ae3b3300ffa"
/>

<img width="800" height="816" alt="Screenshot 2025-12-03 at 14 03 57"
src="https://github.com/user-attachments/assets/7e23e612-b446-4c53-8ea2-f0e2b1574ec3"
/>

<img width="800" height="862" alt="Screenshot 2025-12-03 at 14 04 07"
src="https://github.com/user-attachments/assets/e3398232-e74b-4a06-8702-30a96862dc00"
/>

### Architecture

- Make selected tabs/items synced with the URL via `?activeTab=` and
`?activeItem=`, so it is easy and predictable to change their state...
- Some minor updates on the design system I missed on previous PRs ... 

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run the app locally and check the new page ( still wip )
2025-12-03 14:33:18 +07:00
Reinier van der Leer
233ff40a24 fix(frontend/marketplace): Fix rendering creator links without schema (#11516)
- [OPEN-2871: TypeError: URL constructor: www.agpt.co is not a valid
URL.](https://linear.app/autogpt/issue/OPEN-2871/typeerror-url-constructor-wwwagptco-is-not-a-valid-url)
- [Sentry Issue BUILDER-56D: TypeError: URL constructor: www.agpt.co is
not a valid
URL.](https://significant-gravitas.sentry.io/issues/7081476631/)

### Changes 🏗️

- Amend URL handling in `CreatorLinks` to correctly handle URLs with
implicit scheme
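
A minimal sketch, assuming the fix prepends a scheme when none is present
(the actual `CreatorLinks` handling may differ):

```typescript
function toAbsoluteUrl(link: string): URL {
  // new URL("www.agpt.co") throws "not a valid URL", so default to https.
  const hasScheme = /^[a-z][a-z0-9+.-]*:\/\//i.test(link);
  return new URL(hasScheme ? link : `https://${link}`);
}

// toAbsoluteUrl("www.agpt.co").href === "https://www.agpt.co/"
```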

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - Trivial change, CI is sufficient
2025-12-03 06:57:55 +00:00
seer-by-sentry[bot]
fa567991b3 fix(backend): Handle HTTP errors in HTTP block by returning response objects (#11515)
### Changes 🏗️

- Modify the HTTP block to handle HTTP errors (4xx, 5xx) by returning
response objects instead of raising exceptions.
- This allows proper handling of client_error and server_error outputs.

Fixes
[AUTOGPT-SERVER-6VP](https://sentry.io/organizations/significant-gravitas/issues/7023985892/).
The issue was that HTTP errors were raised as exceptions by the default
behavior of `Requests`, bypassing the block's intended error output
handling and resulting in `BlockUnknownError`.

This fix was generated by Seer in Sentry, triggered by Nicholas Tindle.
👁️ Run ID: 4902617

Not quite right? [Click here to continue debugging with
Seer.](https://sentry.io/organizations/significant-gravitas/issues/7023985892/?seerDrawer=true)

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
- [x] Tested with a service that will return 4XX and 5XX errors to make
sure the correct paths are followed



<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> HTTP block now returns 4xx/5xx responses instead of raising, and
Requests gains retry_max_attempts with last-result handling.
> 
> - **Backend**
>   - **HTTP block (`backend/blocks/http.py`)**:
> - Use `Requests(raise_for_status=False, retry_max_attempts=1)` so
4xx/5xx return response objects and route to
`client_error`/`server_error` outputs.
>   - **HTTP client util (`backend/util/request.py`)**:
> - Add `retry_max_attempts` option with `stop_after_attempt` and
`_return_last_result` to return the final response when retries stop.
> - Build `tenacity` retry config dynamically in `Requests.request()`;
validate `retry_max_attempts >= 1` when provided.
> 
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
fccae61c26. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->

---------

Co-authored-by: seer-by-sentry[bot] <157164994+seer-by-sentry[bot]@users.noreply.github.com>
Co-authored-by: Cursor Agent <cursoragent@cursor.com>
Co-authored-by: nicholas.tindle <nicholas.tindle@agpt.co>
2025-12-02 19:00:43 +00:00
Swifty
2cb6fd581c feat(platform): Integration management from external api (#11472)
Allow the external API to manage credentials

### Changes 🏗️

- add the ability for the external API to manage credentials

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] tested it works

<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> Introduces external API endpoints to manage integrations (OAuth
initiation/completion and credential CRUD), adds external OAuth state
fields, and new API key permissions/config.
> 
> - **External API – Integrations**:
> - Add router `backend/server/external/routes/integrations.py` with
endpoints to:
> - `GET /v1/integrations/providers` list providers (incl. default
scopes)
> - `POST /v1/integrations/{provider}/oauth/initiate` and `POST
/oauth/complete` for external OAuth (custom callback, state)
> - `GET /v1/integrations/credentials` and `GET /{provider}/credentials`
to list credentials
> - `POST /{provider}/credentials` to create `api_key`, `user_password`,
`host_scoped` creds; `DELETE /{provider}/credentials/{cred_id}` to
delete
>   - Wire router in `backend/server/external/api.py`.
> - **Auth/Permissions**:
> - Add `APIKeyPermission` values: `MANAGE_INTEGRATIONS`,
`READ_INTEGRATIONS`, `DELETE_INTEGRATIONS` (schema + migration +
OpenAPI).
> - **Data model / Store**:
> - Extend `OAuthState` with external-flow fields: `callback_url`,
`state_metadata`, `api_key_id`, `is_external`.
> - Update `IntegrationCredentialsStore.store_state_token(...)` to
accept/store external OAuth metadata.
> - **OAuth providers**:
> - Set GitHub handler `DEFAULT_SCOPES = ["repo"]` in
`integrations/oauth/github.py`.
> - **Config**:
> - Add `config.external_oauth_callback_origins` in
`backend/util/settings.py` to validate allowed OAuth callback origins.
> 
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
249bba9e59. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->

---------

Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
2025-12-02 17:42:53 +01:00
Nicholas Tindle
55af799083 fix(blocks): clamp Twitter search start_time to 10 seconds before now (#11461)
## Summary
- Clamp `start_time` to at least 10 seconds before request time (Twitter
API requirement)
- Update input description to document this automatic adjustment
- Fix `serialize_list` to handle `None` data gracefully (exposed by the
fix)

## Background
Twitter API returns `400 Bad Request` when `start_time` is less than 10
seconds before the request time. Users providing current/future times
would hit this error.
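
The clamp itself is simple; a language-agnostic sketch in TypeScript (the
affected blocks are Python, so the names here are illustrative):

```typescript
// Twitter rejects start_time less than 10 seconds before the request time,
// so clamp user-provided values back to now - 10s.
function clampStartTime(startTime: Date, now: Date = new Date()): Date {
  const latestAllowed = new Date(now.getTime() - 10_000);
  return startTime > latestAllowed ? latestAllowed : startTime;
}
```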

**Sentry Issue:**
[BUILDER-3PG](https://significant-gravitas.sentry.io/issues/6919685270/)

## Affected Blocks
- `TwitterSearchRecentTweetsBlock`
- `TwitterGetUserMentionsBlock`
- `TwitterGetHomeTimelineBlock`
- `TwitterGetUserTweetsBlock`

## Test plan
- [x] Tested `TwitterSearchRecentTweetsBlock` with current time as
`start_time`
- [x] Verified clamping works and API call succeeds
- [x] Verified "No tweets found" is returned correctly when search
window has no results

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Abhimanyu Yadav <122007096+Abhi1992002@users.noreply.github.com>
2025-12-02 15:45:31 +00:00
Zamil Majdy
6590fcb76f fix(backend): fix broken update_agent_version_in_library and reduce the method code duplication (#11514)
## Summary
Fix broken `update_agent_version_in_library` functionality by eagerly
loading `AgentGraph` when loading the library, and consolidate the
duplicate code that updates the agent version in the library and
configures HITL safe mode settings.

## Problem
The `update_agent_version_in_library` call currently fails with this
error:
```
  File "/Users/abhi/Documents/AutoGPT/autogpt_platform/backend/backend/server/v2/library/model.py", line 110, in from_db
    raise ValueError("Associated Agent record is required.")
ValueError: Associated Agent record is required.
```

Also, the logic was duplicated across two router endpoints with identical
implementations, creating a maintenance burden and potential for
inconsistencies.

## Changes Made

### Created Helper Method
- Add `_update_library_agent_version_and_settings()` helper function  
- Fixes broken `update_agent_version_in_library` by centralizing the
logic
- Uses proper error handling and settings merging with `model_copy()`

### Replaced Duplicate Code  
- **In `update_graph` function** (v1.py:863) - replaced 13 lines with
single helper call
- **In `set_graph_active_version` function** (v1.py:920) - replaced 13
lines with single helper call

### Benefits
- **Fixes broken functionality**: Centralizes
`update_agent_version_in_library` logic
- **DRY Principle**: Eliminates code duplication across two router
endpoints
- **Maintainability**: Single place to modify the library agent update
logic
- **Consistency**: Ensures both endpoints use identical logic for HITL
safe mode configuration
- **Readability**: Cleaner, more focused endpoint implementations

## Technical Details
The helper method fixes broken `update_agent_version_in_library` by
handling:
1. Updating agent version in library via
`update_agent_version_in_library()`
2. Conditionally setting `human_in_the_loop_safe_mode: true` if graph
has HITL blocks and setting is not already configured
3. Proper settings merging to preserve existing configuration

## Testing
- [x] Code compiles and passes type checking
- [x] Pre-commit hooks pass (linting, formatting, type checking)
- [x] Both affected endpoints maintain same functionality with cleaner
implementation

Fixes broken duplicate code identified in v1.py router endpoints for
`update_agent_version_in_library`.

Co-authored-by: Claude <noreply@anthropic.com>
2025-12-02 14:31:36 +00:00
Abhimanyu Yadav
f987937046 fix(frontend): prune empty values from node inputs and fix object editor connection detection (#11507)
This PR addresses two issues:
1. **Empty values being sent to backend**: Node inputs were including
empty strings, null, and undefined values when converting to backend
format, causing unnecessary data to be sent.
2. **Incorrect connection detection in ObjectEditor**: The component was
checking if the parent field was connected instead of checking
individual key-value pairs, causing incorrect UI state for dynamic
properties.

### Changes 🏗️

<!-- Concisely describe all of the changes made in this pull request:
-->

- **nodeStore.ts**: Added `pruneEmptyValues` utility to remove
empty/null/undefined values from `hardcodedValues` before converting
nodes to backend format. This ensures only meaningful input values are
sent to the backend API (a sketch follows the changes list).
- **ObjectEditorWidget.tsx**: 
  - Removed unused `fieldKey` prop from component interface
- Fixed connection detection logic to check individual key-value handle
IDs instead of the parent field key, ensuring correct connection state
for each dynamic property in object editors
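
A minimal sketch of such a utility; exactly which values the real
`pruneEmptyValues` treats as empty, and how it handles arrays, may differ:

```typescript
function pruneEmptyValues(value: unknown): unknown {
  if (Array.isArray(value)) {
    return value.map(pruneEmptyValues); // array handling is an assumption
  }
  if (value !== null && typeof value === "object") {
    const entries = Object.entries(value as Record<string, unknown>)
      .map(([key, v]) => [key, pruneEmptyValues(v)] as const)
      .filter(([, v]) => v !== undefined && v !== null && v !== "");
    return Object.fromEntries(entries);
  }
  return value;
}

// pruneEmptyValues({ a: "", b: null, c: { d: undefined, e: 1 } })
// -> { c: { e: 1 } }
```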

### Checklist 📋

#### For code changes:

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Create a node with object input fields and verify empty values are
not sent to backend
  - [x] Add multiple key-value pairs to an object editor widget
- [x] Connect individual key-value pairs via handles and verify
connection state is correctly displayed
- [x] Disconnect a key-value pair and verify the input field becomes
editable
- [x] Save and reload an agent with object inputs to verify empty values
are pruned correctly
- [x] Verify that non-empty values are still correctly preserved and
sent to backend
2025-12-02 12:47:12 +00:00
Abhimanyu Yadav
b8d67fc2a9 fix(frontend): Update isGraphRunning state when manually executing graphs (#11489)
This PR fixes an issue where the `isGraphRunning` state wasn't being
updated when users manually executed graphs. This caused the UI to not
show proper feedback that a graph was running, such as the running
background animation. The fix adds a `setIsGraphRunning` method to the
graph store and ensures it's called both when starting execution (for
immediate UI feedback) and when errors occur (to reset the state).

### Changes 🏗️

- **Added `setIsGraphRunning` method to graphStore**: Provides a way to
manually control the graph running state
- **Set running state on manual execution**: Updates `isGraphRunning` to
`true` immediately when users click the Run button, providing instant UI
feedback
- **Reset state on execution errors**: Ensures `isGraphRunning` is set
back to `false` if the graph execution fails, preventing the UI from
showing incorrect running state
- **Improved UI responsiveness**: Users now see immediate visual
feedback (like the running background) when they start a graph execution
- **Cleaned up unused import**: Removed unnecessary `flowID` from
useQueryStates in Flow component
- I’ve also changed the colour of the running background to a lighter
shade of purple.
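
A sketch of the store change and how the run handler could use it (the
real `graphStore` holds more state, and `executeGraph` is a stand-in):

```typescript
import { create } from "zustand";

type GraphStore = {
  isGraphRunning: boolean;
  setIsGraphRunning: (running: boolean) => void;
};

export const useGraphStore = create<GraphStore>((set) => ({
  isGraphRunning: false,
  setIsGraphRunning: (running) => set({ isGraphRunning: running }),
}));

// Flip the flag before the request for instant feedback; reset it on failure.
async function handleRun(executeGraph: () => Promise<void>): Promise<void> {
  useGraphStore.getState().setIsGraphRunning(true);
  try {
    await executeGraph();
  } catch (error) {
    useGraphStore.getState().setIsGraphRunning(false);
    throw error;
  }
}
```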

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Click Run button and verify the running background appears
immediately
  - [x] Test with graphs that require input values via the input dialog
  - [x] Verify running state is cleared when execution completes
- [x] Test error scenarios and confirm running state is reset on failure
- [x] Confirm all UI elements that depend on `isGraphRunning` update
correctly
- [x] Test multiple consecutive runs to ensure state management is
consistent
2025-12-02 12:42:47 +00:00
Zamil Majdy
e4102bf0fb fix(backend): resolve HITL execution context validation and re-enable tests (#11509)
## Summary
Fix critical validation errors in GraphExecutionEntry and
NodeExecutionEntry models that were preventing HITL block execution, and
re-enable HITL test suite.

## Root Cause
After introducing ExecutionContext as a mandatory field, existing
messages in the execution queue lacked this field, causing validation
failures:
```
[GraphExecutor] [ExecutionManager] Could not parse run message: 1 validation error for GraphExecutionEntry
execution_context
  Field required [type=missing, input_value={'user_id': '26db15cb-a29...
```

## Changes Made

### 🔧 Execution Model Fixes (`backend/data/execution.py`)
- **Add backward compatibility**: `execution_context: ExecutionContext =
Field(default_factory=ExecutionContext)`
- **Prevent shared mutable defaults**: Use `default_factory` instead of
direct instantiation to avoid mutation issues
- **Ignore unknown fields**: Add `model_config = {"extra": "ignore"}`
for future compatibility
- **Applied to both models**: GraphExecutionEntry and NodeExecutionEntry

### 🧪 Re-enable HITL Test Suite
- **Remove test skips**: Remove `pytestmark = pytest.mark.skip()` from
`human_review_test.py`
- **Remove test skips**: Remove `pytestmark = pytest.mark.skip()` from
`review_routes_test.py`
- **Restore test coverage**: HITL functionality now properly tested in
CI

## Default ExecutionContext Behavior
When execution_context is missing from old messages, provides safe
defaults:
- `safe_mode: bool = True` (HITL blocks require approval by default)
- `user_timezone: str = "UTC"` (safe timezone default)  
- `root_execution_id: Optional[str] = None`
- `parent_execution_id: Optional[str] = None`

## Impact
- **Fixes deployment validation errors** in dev environment
- **Maintains backward compatibility** with existing queue messages
- **Restores proper HITL test coverage** in CI
- **Ensures isolation**: Each execution gets its own ExecutionContext
instance
- **Future-proofs**: Protects against message format changes

## Testing
- [x] HITL test suite re-enabled and should pass in CI
- [x] Existing executions continue to work with sensible defaults
- [x] New executions receive proper ExecutionContext from caller
- [x] Verified `default_factory` prevents shared mutable instances

## Files Changed
- `backend/data/execution.py` - Add backward-compatible ExecutionContext
defaults
- `backend/data/human_review_test.py` - Re-enable test suite
- `backend/server/v2/executions/review/review_routes_test.py` -
Re-enable test suite

Resolves the "Field required" validation error preventing HITL block
execution in dev environment.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

Co-authored-by: Claude <noreply@anthropic.com>
2025-12-02 11:02:21 +00:00
Zamil Majdy
7b951c977e feat(platform): implement graph-level Safe Mode toggle for HITL blocks (#11455)
## Summary

This PR implements a graph-level Safe Mode toggle system for
Human-in-the-Loop (HITL) blocks. When Safe Mode is ON (default), HITL
blocks require manual review before proceeding. When OFF, they execute
automatically.

## 🔧 Backend Changes

- **Database**: Added `metadata` JSON column to `AgentGraph` table with
migration
- **API**: Updated `execute_graph` endpoint to accept `safe_mode`
parameter
- **Execution**: Enhanced execution context to use graph metadata as
default with API override capability
- **Auto-detection**: Automatically populate `has_human_in_the_loop` for
graphs containing HITL blocks
- **Block Detection**: HITL block ID:
`8b2a7b3c-6e9d-4a5f-8c1b-2e3f4a5b6c7d`

## 🎨 Frontend Changes

- **Component**: New `FloatingSafeModeToggle` with dual variants:
  - **White variant**: For library pages, integrates with action buttons
  - **Black variant**: For builders, floating positioned  
- **Integration**: Added toggles to both new/legacy builders and library
pages
- **API Integration**: Direct graph metadata updates via
`usePutV1UpdateGraphVersion`
- **Query Management**: React Query cache invalidation for consistent UI
updates
- **Conditional Display**: Toggle only appears when graph contains HITL
blocks

## 🛠 Technical Implementation

- **Safe Mode ON** (default): HITL blocks require manual review before
proceeding
- **Safe Mode OFF**: HITL blocks execute automatically without
intervention
- **Priority**: Backend API `safe_mode` parameter takes precedence over
graph metadata
- **Detection**: Auto-populates `has_human_in_the_loop` metadata field
- **Positioning**: Proper z-index and responsive positioning for
floating elements
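
The precedence rule, sketched in TypeScript for illustration (the
resolution actually happens in the Python backend):

```typescript
// Explicit safe_mode on the execute call wins over graph metadata;
// when neither is set, Safe Mode defaults to ON.
function resolveSafeMode(
  apiSafeMode: boolean | undefined,
  metadata: { human_in_the_loop_safe_mode?: boolean },
): boolean {
  return apiSafeMode ?? metadata.human_in_the_loop_safe_mode ?? true;
}
```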

## 🚧 Known Issues (Work in Progress)

### High Priority
- [ ] **Toggle state persistence**: Always shows "ON" regardless of
actual state - query invalidation issue
- [ ] **LibraryAgent metadata**: Missing metadata field causing
TypeScript errors
- [ ] **Tooltip z-index**: Still covered by some UI elements despite
high z-index

### Medium Priority  
- [ ] **HITL detection**: Logic needs improvement for reliable block
detection
- [ ] **Error handling**: Removing HITL blocks from graph causes save
errors
- [ ] **TypeScript**: Fix type mismatches between GraphModel and
LibraryAgent

### Low Priority
- [ ] **Frontend API**: Add `safe_mode` parameter to execution calls
once OpenAPI is regenerated
- [ ] **Performance**: Consider debouncing rapid toggle clicks

## 🧪 Test Plan

- [ ] Verify toggle appears only when graph has HITL blocks
- [ ] Test toggle persistence across page refreshes  
- [ ] Confirm API calls update graph metadata correctly
- [ ] Validate execution behavior respects safe mode setting
- [ ] Check styling consistency across builder and library contexts

## 🔗 Related

- Addresses requirements for graph-level HITL configuration
- Builds on existing FloatingReviewsPanel infrastructure
- Integrates with existing graph metadata system

🤖 Generated with [Claude Code](https://claude.ai/code)
2025-12-02 09:55:55 +00:00
Nicholas Tindle
3b2a67a711 ci(claude): add free disk space step to prevent runner out of space (#11503)
## Summary
- Add `jlumbroso/free-disk-space@v1.3.1` action to claude.yml workflow
- Frees up disk space on Ubuntu runner before heavy Docker and
dependency setup
- Skips `large-packages` (slow) and `docker-images` (needed for
workflow)

## Test plan
- [x] Verify claude.yml workflow runs successfully without disk space
errors

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude <noreply@anthropic.com>
2025-12-01 20:06:56 +00:00
Bently
a58ac2150f feat(backend/blocks): add Discord create thread block (#11131)
Adds a new Discord block that allows users to create threads in Discord
channels. This addresses issue OPEN-2666 which requested the ability to
create Discord threads from workflows.

## Solution

Implemented `CreateDiscordThreadBlock` in
`autogpt_platform/backend/backend/blocks/discord/bot_blocks.py` with the
following features:

- Create public or private threads in Discord channels via bot token
- Support for both channel ID and channel name lookup (with optional
server name)
- Configurable thread type (public/private toggle)
- Configurable auto-archive duration (60, 1440, 4320, or 10080 minutes)
- Optional initial message to send in the newly created thread
- Outputs: status, thread_id, and thread_name for workflow chaining

The block follows the existing Discord block patterns and includes
proper error handling for permissions, channel not found, and login
failures.

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Verify block appears in the workflow builder UI
- [x] Test creating a public thread with valid bot token and channel ID
- [x] Test creating a private thread with valid bot token and channel
name
  - [x] Test with invalid channel ID/name to verify error handling
  - [x] Test with bot lacking thread creation permissions
  - [x] Verify thread_id output can be chained to subsequent blocks
- [x] Test auto-archive duration options (60, 1440, 4320, 10080 minutes)
  - [x] Test sending initial message in newly created thread

Video to show the blocks working!



https://github.com/user-attachments/assets/f248f315-05b3-47e2-bd6b-4c64d53c35fc
2025-12-01 20:01:27 +00:00
Swifty
9f37342bc6 feat(platform): Simplify the chat tool system to use only 2 tools (#11464)
Simplifying the chat tool system to use only 2 tools

### Changes 🏗️

- remove old tools
- expand run_agent tool to include all stages

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] tested adding credentials work
  - [x] tested running an agent works
  - [x] tested scheduling an agent works
2025-12-01 20:56:16 +01:00
Swifty
7dc3b201b7 feat(platform): Explain None Message in BlockError Messages (#11490)
Sometimes block errors are raised with the message set to None; we now
handle this case

### Changes 🏗️

- handle case of None message
- Add tests

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] write unit tests

<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> Ensure `BlockExecutionError` and `BlockUnknownError` provide default
messages when given None/empty input, with new unit tests covering
formatting and inheritance.
> 
> - **Backend**:
>   - `backend/util/exceptions.py`:
> - `BlockExecutionError`: default `None` message to `"Output error was
None"`.
> - `BlockUnknownError`: default empty/`None` message to `"Unknown error
occurred"`.
> - **Tests**:
> - `backend/util/exceptions_test.py`: add tests for message formatting,
`None`/empty handling, and exception inheritance.
> 
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
6e9b31ae47. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->

Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
2025-12-01 20:55:02 +01:00
Swifty
7d53c0de27 fix(backend): Fix Youtube blocking our cloud ips (#11456)
YouTube can block cloud IPs, causing the YouTube transcribe blocks to
stop working. This PR adds a Webshare proxy to get around this issue.

### Changes 🏗️

- add webshare proxy to youtube transcribe block 

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] I have tested this works locally using the proxy

<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> Routes YouTube transcript fetching through Webshare proxy using
user/password credentials, wiring in provider enum, settings, default
credentials, and updated tests.
> 
> - **Blocks** (`backend/blocks/youtube.py`):
> - Use `WebshareProxyConfig` with `YouTubeTranscriptApi` to fetch
transcripts via proxy.
> - Add `credentials` input (`user_password` for `webshare_proxy`);
include test credentials and mocks.
> - Update method signatures: `get_transcript(video_id, credentials)`
and `run(..., *, credentials, ...)`.
>   - Change description to indicate proxy usage; add logging.
> - **Integrations**:
> - Providers (`backend/integrations/providers.py`): add
`ProviderName.WEBSHARE_PROXY`.
> - Credentials store (`backend/integrations/credentials_store.py`): add
`webshare_proxy` `UserPasswordCredentials`; include in
`DEFAULT_CREDENTIALS` and conditionally in `get_all_creds`.
> - **Settings** (`backend/util/settings.py`): add secrets
`webshare_proxy_username` and `webshare_proxy_password`.
> - **Tests** (`test/blocks/test_youtube.py`): update to pass
credentials and assert proxy config; add custom-credentials test; adjust
fallback/priority tests.
> 
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
d060898488. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->

Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
2025-12-01 20:54:52 +01:00
Nicholas Tindle
0728f3bd49 fix(backend): Remove Google Sheets API scopes from block inputs (#11484)
Eliminates explicit Google Sheets API scopes from credentials fields in
all Google Sheets-related blocks, so API scopes can be centralized or
managed dynamically elsewhere, simplifying block configuration.

<!-- Clearly explain the need for these changes: -->

### Changes 🏗️
- removes the scopes we aren't approved to use
<!-- Concisely describe all of the changes made in this pull request:
-->

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Bently tested it on his fresh account and it worked!
2025-12-01 18:07:31 +00:00
Lluis Agusti
eb7e919450 Revert "chore: experiment"
This reverts commit 686412e7df.
2025-12-01 21:20:51 +07:00
Lluis Agusti
686412e7df chore: experiment 2025-12-01 21:19:21 +07:00
Ubbe
b4e95dba14 feat(frontend): update empty task view designs (#11498)
## Changes 🏗️

Update the new library agent page, empty view to look like:

<img width="900" height="1060" alt="Screenshot 2025-12-01 at 14 12 10"
src="https://github.com/user-attachments/assets/e6a22a4f-35f4-434e-bbb1-593390034b9a"
/>

Now we display an **About this agent** card on the left when the agent
is published on the marketplace. I expanded the:
```
/api/library/agents/{id}
```
endpoint to also return the following:
```js
{
  // ...
  created_at: "timestamp",
  marketplace_listing: {
    creator: { name: "string", slug: "string", id: "string" },
    name: "string",
    slug: "string",
    id: "string"
  }
}
```
To be able to display this extra information on the card and link to the
creator and marketplace pages.

Also:
- design system updates regarding navbar and colors

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run locally and see the new page for an agent with no runs
2025-12-01 20:28:44 +07:00
Swifty
00148f4e3d feat(platform): add external api routes for store search and tool usage (#11463)
We want to allow external tools to explore the marketplace and use the
chat agent tools.


### Changes 🏗️

- add store API routes (see the sketch below)
- add tool API routes
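
A minimal sketch of what an external store-search route could look like on the platform's FastAPI stack; the path, parameters, and the stand-in for the store lookup are all assumptions for illustration:

```python
from fastapi import APIRouter

router = APIRouter(prefix="/api/external/store", tags=["store"])

async def search_agents(query: str, page: int) -> list[dict]:
    # stand-in for the real store DB lookup (assumption for this sketch)
    return []

@router.get("/agents")
async def search_store_agents(query: str = "", page: int = 1) -> dict:
    # return one page of marketplace listings matching the query
    return {"agents": await search_agents(query, page), "page": page}
```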

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] tested all endpoints work

---------

Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
2025-12-01 12:04:03 +00:00
Ubbe
6db18b8445 feat(frontend): design system tokens update (#11501)
## Changes 🏗️

Update tokens of the design system with new values 🖌️ 🎨 

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Storybook build passes, no type errors
2025-12-01 18:14:10 +07:00
Abhimanyu Yadav
627a33468b feat(frontend): Default node output accordion to expanded state (#11483)
This PR improves the user experience by defaulting the node output
accordion to an expanded state. Previously, users had to manually expand
the accordion to view execution results, which added an unnecessary
click to the workflow. With this change, output data is immediately
visible when available, allowing users to quickly see the results of
their node executions. The ability to collapse the accordion is
preserved for users who prefer a more compact interface.

### Changes 🏗️

- **Changed default state of node output accordion**: The node output
section now defaults to expanded (`isExpanded = true`) instead of
collapsed
- **Improved user experience**: Users can now immediately see node
execution results without needing to manually expand the accordion
- **Maintains collapse functionality**: Users can still manually
collapse the accordion if they prefer a more compact view

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Execute a node and verify the output accordion is expanded by
default
  - [x] Verify output data is immediately visible after node execution
  - [x] Test that the accordion can still be manually collapsed
- [x] Confirm the accordion state resets to expanded when switching
between nodes
- [x] Test with different types of output data (simple values, objects,
arrays)
2025-12-01 05:57:08 +00:00
Abhimanyu Yadav
ea4b55f967 feat(frontend): Add minimum movement threshold for node position history tracking (#11481)
This PR implements a minimum movement threshold of 50 pixels for node
position changes before they are logged to the history system. This
prevents the undo/redo history from being cluttered with minor,
unintentional movements that occur when users interact with block inputs
or accidentally nudge nodes.

### Changes 🏗️

- **Added movement threshold for history tracking**: Implemented a 50px
minimum movement requirement before logging node position changes to
history
- **Prevents history spam**: Stops small, unintentional movements (like
clicking on inputs inside blocks) from cluttering the undo/redo history
- **Tracks drag start positions**: Maintains initial positions when
dragging begins to accurately calculate total movement distance
- **Improved history management**: Only significant node movements are
now recorded, matching the behavior of the old builder
- **Memory efficient**: Cleans up tracked positions after drag
operations complete

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Drag a node less than 50px and verify no history entry is created
  - [x] Drag a node more than 50px and verify history entry is created
- [x] Click on inputs inside blocks and verify no history entries are
created
  - [x] Test undo/redo functionality works correctly with the threshold
  - [x] Verify adding/removing nodes still creates history entries
  - [x] Test multiple nodes being dragged simultaneously
2025-12-01 05:43:50 +00:00
Abhimanyu Yadav
321ab8a48a feat(frontend): Add trigger agent banner for webhook-based flows (#11480)
This PR addresses the need for better user awareness when building
trigger-based agents. When a user adds webhook/trigger nodes to their
flow, a prominent banner now appears at the bottom of the builder
informing them they're creating a "Trigger Agent" and providing a direct
link to monitor its activity in the Agent Library.

### Changes 🏗️

- **Added TriggerAgentBanner component**: New banner that displays at
the bottom of the builder when the flow contains webhook/trigger nodes
- **Implemented webhook node detection**: Added `hasWebhookNodes` method
to nodeStore that checks if any nodes are of type WEBHOOK or
WEBHOOK_MANUAL
- **Conditional banner display**: Builder now shows TriggerAgentBanner
instead of BuilderActions when webhook nodes are present
- **Dynamic library link**: Banner includes a link to the specific agent
in the library (if found) or defaults to the general library page
- **Integrated with existing flow context**: Uses the current flowID to
fetch the corresponding library agent for proper linking

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Add a webhook node to a flow and verify the banner appears
  - [x] Remove all webhook nodes and verify the banner disappears
  - [x] Test with both WEBHOOK and WEBHOOK_MANUAL node types
- [x] Click the library link and verify it navigates to the correct
agent page
  - [x] Test library link fallback when agent is not found in library
- [x] Verify banner styling and positioning at bottom center of builder
2025-12-01 05:43:04 +00:00
Abhimanyu Yadav
cd6a8cbd47 fix(frontend): Include edges when copying multiple nodes with Shift+Select (#11478)
This PR fixes an issue where edges were not copied when selecting
multiple nodes with Shift+Select.

### Changes 🏗️

- **Fixed edge copying logic**: Removed the requirement for edges to be
explicitly selected - now automatically includes all edges between
selected nodes when copying
- **Migrated to custom stores**: Refactored copy-paste functionality to
use `useNodeStore` and `useEdgeStore` instead of ReactFlow's built-in
state management
- **Improved type safety**: Replaced generic `Node` and `Edge` types
with `CustomNode` and `CustomEdge` for better type checking
- **Enhanced paste behavior**: 
- Deselects existing nodes before pasting to ensure only pasted nodes
are selected
- Uses store methods for adding nodes and edges, ensuring proper state
management
- **Simplified node data handling**: Removed manual clearing of
backend_id, status, and nodeExecutionResult - now handled by the store's
addNode method
- **Added debugging support**: Added console logging for copied data to
aid in troubleshooting

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Select multiple nodes using Shift+Select and copy with Ctrl/Cmd+C
  - [x] Verify edges between selected nodes are included in the copy
  - [x] Paste with Ctrl/Cmd+V and confirm both nodes and edges appear
  - [x] Verify pasted elements maintain correct connections
  - [x] Confirm only pasted nodes are selected after paste
  - [x] Test that pasted nodes have unique IDs
  - [x] Verify pasted nodes appear centered in the current viewport
2025-12-01 05:41:25 +00:00
Abhimanyu Yadav
7fabbb25c4 fix(frontend): Preserve customized_name only when explicitly set in node metadata (#11477)
This PR fixes an issue where undefined `customized_name` values were
being included in node metadata, which could cause issues with data
persistence and agent exports. The fix ensures that the
`customized_name` property is only included when it has been explicitly
set by the user.

### Changes 🏗️

- **Fixed metadata serialization**: Modified `getNodeAsGraphNode` to
only include `customized_name` in the metadata when it has been
explicitly set (not undefined)
- **Improved data structure**: Uses object spread syntax to
conditionally include the `customized_name` property, preventing
undefined values from being stored
- **Cleaned up code**: Removed unnecessary TODO comment and improved
code readability

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Create a new agent and verify metadata doesn't include undefined
customized_name
- [x] Customize a block name and verify it's properly saved in metadata
- [x] Export an agent and verify the JSON doesn't contain undefined
customized_name fields
- [x] Import an agent with customized names and verify they are
preserved
- [x] Save and reload an agent to ensure metadata persistence works
correctly
2025-12-01 05:40:52 +00:00
Abhimanyu Yadav
35eb563241 feat(platform): enhance BlockMenuSearch with agent addition (#11474)
This PR enables users to add agents directly to the builder from search
results and marketplace views. Previously, users had to navigate to
different sections to add agents - now they can do it with a single
click from wherever they find the agent. The change includes proper
loading states, error handling, and success notifications to provide a
smooth user experience.

### Changes 🏗️

- **Added direct agent-to-builder functionality**: Users can now add
agents directly to the builder from search results and marketplace views
- **Created reusable hook `useAddAgentToBuilder`**: Centralized logic
for adding both library and marketplace agents to the builder
- **Enhanced search results interaction**: Added click handlers and
loading states to agent cards in search results
- **Improved marketplace agent addition**: Marketplace agents are now
added to both library and builder with proper feedback
- **Added loading states**: Visual feedback when agents are being added
(loading spinners on cards)
- **Improved error handling**: Added toast notifications for success and
failure cases with descriptive error messages
- **Added Sentry error tracking**: Captures exceptions for better
debugging in production

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Search for agents and add them to builder from search results
- [x] Add marketplace agents which should appear in both library and
builder
  - [x] Verify loading states appear during agent addition
  - [x] Test error scenarios (network failure, invalid agent)
- [x] Confirm toast notifications appear for both success and error
cases
  - [x] Verify builder viewport centers on newly added agent
2025-12-01 05:26:38 +00:00
Ubbe
a37b527744 refactor(frontend): agent runs view folder structure (#11475)
## Changes 🏗️

Re-arrange the folder structure of the new Library page sub-components
so they are grouped by location:

### Before

<img width="238" height="506" alt="Screenshot 2025-11-27 at 23 45 27"
src="https://github.com/user-attachments/assets/429fda6e-bf74-4d80-9306-028365789ca1"
/>

All components were on a single level, which works fine for simpler
pages without many sub-components, but on a page with this much
functionality it ends up messy.

### After 

<img width="226" height="517" alt="Screenshot 2025-11-27 at 23 45 46"
src="https://github.com/user-attachments/assets/99c098ea-ff11-4779-bad8-7d524bf91605"
/>


### Imports order

I edited some files, and the linter/formatter automatically sorted
import order as per the lint plugin.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run the new library agent page locally and click around
2025-11-28 19:00:41 +07:00
Zamil Majdy
3d08c22dd5 feat(platform): add Human In The Loop block with review workflow (#11380)
## Summary
This PR implements a comprehensive Human In The Loop (HITL) block that
allows agents to pause execution and wait for human
approval/modification of data before continuing.



https://github.com/user-attachments/assets/c027d731-17d3-494c-85ca-97c3bf33329c


## Key Features
- Added WAITING_FOR_REVIEW status to AgentExecutionStatus enum
- Created PendingHumanReview database table for storing review requests
- Implemented HumanInTheLoopBlock that extracts input data and creates
review entries
- Added API endpoints at /api/executions/review for fetching and
reviewing pending data (see the sketch after this list)
- Updated execution manager to properly handle waiting status and resume
after approval
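
As an illustration, a hedged sketch of how a client might approve a pending review through that endpoint; the exact path shape, payload, and field names here are assumptions, not the actual API contract:

```python
import httpx

async def approve_review(base_url: str, review_id: str, payload: dict, token: str) -> dict:
    # hypothetical request shape: approve the pending review, optionally
    # with reviewer-edited data, so the paused execution can resume
    async with httpx.AsyncClient(base_url=base_url) as client:
        resp = await client.post(
            f"/api/executions/review/{review_id}",
            json={"approved": True, "data": payload},
            headers={"Authorization": f"Bearer {token}"},
        )
        resp.raise_for_status()
        return resp.json()
```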

## Frontend Components
- PendingReviewCard for individual review handling
- PendingReviewsList for multiple reviews
- FloatingReviewsPanel for graph builder integration
- Integrated review UI into 3 locations: legacy library, new library,
and graph builder

## Technical Implementation
- Added proper type safety throughout with SafeJson handling
- Optimized database queries using count functions instead of full data
fetching
- Fixed imports to be top-level instead of local
- All formatters and linters pass

## Test plan
- [ ] Test Human In The Loop block creation in graph builder
- [ ] Test block execution pauses and creates pending review
- [ ] Test review UI appears in all 3 locations
- [ ] Test data modification and approval workflow
- [ ] Test rejection workflow
- [ ] Test execution resumes after approval

🤖 Generated with [Claude Code](https://claude.ai/code)

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

* **New Features**
* Added Human-In-The-Loop review workflows to pause executions for human
validation.
* Users can approve or reject pending tasks, optionally editing
submitted data and adding a message.
* New "Waiting for Review" execution status with UI indicators across
run lists, badges, and activity views.
* Review management UI: pending review cards, list view, and a floating
reviews panel for quick access.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-11-27 12:07:46 +07:00
Zamil Majdy
ff5dd7a5b4 fix(backend): migrate all query_raw calls to query_raw_with_schema for proper schema handling (#11462)
## Summary

Complete migration of all non-test `query_raw` calls to use
`query_raw_with_schema` for proper PostgreSQL schema context handling.
This resolves the marketplace API failures where queries were looking
for unqualified table names.

## Root Cause

Prisma's `query_raw()` doesn't respect the `schema` parameter in
`DATABASE_URL` (`?schema=platform`) for raw SQL queries, causing queries
to fail when looking for unqualified table names in multi-schema
environments.

## Changes Made

### Files Updated
- **backend/server/v2/store/db.py**: Already updated in previous
commit
- **backend/server/v2/builder/db.py**: Updated `get_suggested_blocks`
query at line 343
- **backend/check_store_data.py**: Updated all 4 `query_raw` calls to
use schema-aware queries
- **backend/check_db.py**: Updated all `query_raw` calls (import
already existed)

### Technical Implementation
- Add import: `from backend.data.db import query_raw_with_schema`
- Replace `prisma.get_client().query_raw()` with
`query_raw_with_schema()`
- Add `{schema_prefix}` placeholder to table references in SQL queries
- Fix f-string template conflicts by using double braces
`{{schema_prefix}}`

### Query Examples

**Before:**
```sql
FROM "StoreAgent"
FROM "AgentNodeExecution" execution
```

**After:**
```sql  
FROM {schema_prefix}"StoreAgent"
FROM {schema_prefix}"AgentNodeExecution" execution
```
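
For illustration, the migrated call pattern looks roughly like this (a sketch: the helper's exact signature is assumed to mirror Prisma's `query_raw` with schema-prefix injection):

```python
from backend.data.db import query_raw_with_schema

async def list_store_agents() -> list[dict]:
    # {schema_prefix} expands to e.g. '"platform".' (or '' on public-schema deployments)
    return await query_raw_with_schema(
        'SELECT * FROM {schema_prefix}"StoreAgent" LIMIT 10'
    )
```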

## Impact

- All raw SQL queries now properly respect platform schema context
- Fixes "relation does not exist" errors in multi-schema environments
- Maintains backward compatibility with public schema deployments
- Code formatting passes with `poetry run format`

## Testing

- All `query_raw` usages in non-test code successfully migrated
- `query_raw_with_schema` automatically handles schema prefix injection
- Existing query logic unchanged, only schema awareness added

## Before/After

**Before:** GET /api/store/agents → "relation 'StoreAgent' does not
exist"
**After:** GET /api/store/agents → Returns store agents correctly

Resolves the marketplace API failures and ensures consistent schema
handling across all raw SQL operations.

Co-authored-by: Claude <noreply@anthropic.com>
2025-11-27 04:04:20 +00:00
Nicholas Tindle
02f8a69c6a feat(platform): add Google Drive Picker field type for enhanced file selection (#11311)
### Changes 🏗️

This PR adds a Google Drive Picker field type to enhance the user
experience of existing Google blocks, replacing manual file ID entry
with a visual file picker.

#### Backend Changes
- **Added  and  types** in :
  - Configurable picker field with OAuth scope management
  - Support for multiselect, folder selection, and MIME type filtering
  - Proper access token handling for file downloads
- **Enhanced Gmail blocks**: Updated attachment fields to use Google
Drive Picker for better UX
- **Enhanced Google Sheets blocks**: Updated spreadsheet selection to
use picker instead of manual ID entry
- **Added utility**: Async file download with virus scanning and 100MB
size limit (see the sketch below)
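
A rough sketch of such a download utility, assuming the standard Drive v3 media endpoint (the function name is hypothetical, and the real utility also virus-scans the payload):

```python
import httpx

MAX_DOWNLOAD_SIZE = 100 * 1024 * 1024  # the 100MB limit described above

async def download_picked_file(file_id: str, access_token: str) -> bytes:
    # alt=media asks the Drive v3 files API for the raw file contents
    url = f"https://www.googleapis.com/drive/v3/files/{file_id}?alt=media"
    async with httpx.AsyncClient() as client:
        resp = await client.get(url, headers={"Authorization": f"Bearer {access_token}"})
        resp.raise_for_status()
    if len(resp.content) > MAX_DOWNLOAD_SIZE:
        raise ValueError("file exceeds the 100MB download limit")
    return resp.content
```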

#### Frontend Changes  
- **Enhanced GoogleDrivePicker component**: Improved UI with folder icon
and multiselect messaging
- **Integrated picker in form renderers**: Auto-renders for fields with
format
- **Added shared GoogleDrivePickerInput component**: Eliminates code
duplication between NodeInputs and RunAgentInputs
- **Added type definitions**: Complete TypeScript support for picker
schemas and responses

#### Key Features
- 🎯 **Visual file selection**: Replace manual Google Drive file ID entry
with intuitive picker
- 📁 **Flexible configuration**: Support for documents, spreadsheets,
folders, and custom MIME types
- 🔒 **Minimal OAuth scopes**: Uses scope for security (only access to
user-selected files)
- **Enhanced UX**: Seamless integration in both block configuration
and agent run modals
- 🛡️ **Security**: Virus scanning and file size limits for downloaded
attachments

#### Migration Impact
- **Backward compatible**: Existing blocks continue to work with manual
ID entry
- **Progressive enhancement**: New picker fields provide better UX for
the same functionality
- **No breaking changes**: all existing blocks should be unaffected

This enhancement improves the user experience of Google blocks without
introducing new systems or breaking existing functionality.


### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
- [x] Test multiple of the new blocks (note: the create spreadsheet
block should not be used for now, as it uses the API rather than the
Drive picker)
  - [x] chain the blocks together and pass values between them

---------

Co-authored-by: Lluis Agusti <hi@llu.lu>
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
Co-authored-by: Claude <noreply@anthropic.com>
2025-11-27 03:01:29 +00:00
Toran Bruce Richards
e983d5c49a fix(backend): Implement passed uploaded media support for AI image customizer block (#11441)
- Added `store_media_file` utility to convert local file paths to Data
URIs for image processing.
- Updated `AIImageCustomizerBlock` to utilize processed images in model
execution, improving compatibility with Replicate API.
- Added optional Aspect ratio input to AIImageCustomizerBlock

This change enhances the image handling capabilities of the AI image
customizer, ensuring that images are properly formatted for external
processing.
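
For context, converting a local media file to a Data URI boils down to base64-encoding the bytes with a MIME prefix; a minimal sketch (not the actual `store_media_file` implementation):

```python
import base64
import mimetypes
import pathlib

def to_data_uri(path: str) -> str:
    # guess the MIME type from the file extension, defaulting to generic binary
    mime = mimetypes.guess_type(path)[0] or "application/octet-stream"
    encoded = base64.b64encode(pathlib.Path(path).read_bytes()).decode("ascii")
    return f"data:{mime};base64,{encoded}"
```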

<!-- Clearly explain the need for these changes: -->

### Changes 🏗️

<!-- Concisely describe all of the changes made in this pull request:
-->

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
- [x] Created agent using AI Image Customizer block attached to agent
file input
  - [x] Run agent, confirmed block is working
- [x] Confirm block is still working in original direct file upload
setup.


### Testing Results

#### Before (dev cloud):
<img width="836" height="592" alt="image"
src="https://github.com/user-attachments/assets/88c75668-c5c9-44bb-bec5-6554088a0cb7"
/>


#### After (local):
<img width="827" height="587" alt="image"
src="https://github.com/user-attachments/assets/04fea431-70a5-4173-bc84-d354c03d7174"
/>

<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> Preprocesses input images to data URIs and adds an `aspect_ratio`
option, wiring both through to Replicate in `AIImageCustomizerBlock`.
> 
> - **Backend**
>   - **`backend/blocks/ai_image_customizer.py`**:
> - Preprocesses input images via `store_media_file(...,
return_content=True)` to Data URIs before invoking Replicate.
> - Adds `AspectRatio` enum and `aspect_ratio` input; passed through
`run_model` and included in Replicate input.
>     - Updates block test input accordingly.
> 
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
4116cf80d7. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->

---------

Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
2025-11-27 00:41:45 +00:00
Lluis Agusti
bdb94a3cf9 hotfix(frontend): clear cache account on logout 2025-11-26 23:33:17 +07:00
Lluis Agusti
8daec53230 hotfix(frontend): add profile loading state and error boundary 2025-11-26 22:42:42 +07:00
Ubbe
ec6f593edc fix(frontend): code scanning vulnerability (#11459)
## Changes 🏗️

Addresses this code scanning alert
[security/code-scanning/156](https://github.com/Significant-Gravitas/AutoGPT/security/code-scanning/156)

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] No prototype pollution
2025-11-26 12:25:21 +00:00
Lluis Agusti
e6ed83462d fix(frontend): update next.config.mjs to not use standalone output 2025-11-26 19:13:33 +07:00
Nicholas Tindle
1851264a6a [Snyk] Security upgrade @sentry/nextjs from 10.22.0 to 10.27.0 (#11451)

### Snyk has created this PR to fix 3 vulnerabilities in the yarn
dependencies of this project.

#### Snyk changed the following file(s):

- `autogpt_platform/frontend/package.json`


#### Note for
[zero-installs](https://yarnpkg.com/features/zero-installs) users

If you are using the Yarn feature
[zero-installs](https://yarnpkg.com/features/zero-installs) that was
introduced in Yarn V2, note that this PR does not update the
`.yarn/cache/` directory meaning this code cannot be pulled and
immediately developed on as one would expect for a zero-install project
- you will need to run `yarn` to update the contents of the
`./yarn/cache` directory.
If you are not using zero-install you can ignore this as your flow
should likely be unchanged.



<details>
<summary>⚠️ <b>Warning</b></summary>

```
Failed to update the yarn.lock, please update manually before merging.
```

</details>



#### Vulnerabilities that will be fixed with an upgrade:

| Severity | Issue |
|:-------------------------:|:-------------------------|
| high | Insertion of Sensitive Information Into Sent Data<br/>[SNYK-JS-SENTRYCORE-14105053](https://snyk.io/vuln/SNYK-JS-SENTRYCORE-14105053) |
| high | Insertion of Sensitive Information Into Sent Data<br/>[SNYK-JS-SENTRYNEXTJS-14105054](https://snyk.io/vuln/SNYK-JS-SENTRYNEXTJS-14105054) |
| high | Insertion of Sensitive Information Into Sent Data<br/>[SNYK-JS-SENTRYNODECORE-14105055](https://snyk.io/vuln/SNYK-JS-SENTRYNODECORE-14105055) |




---

> [!IMPORTANT]
>
> - Check the changes in this PR to ensure they won't cause issues with
your project.
> - Max score is 1000. Note that the real score may have changed since
the PR was raised.
> - This PR was automatically created by Snyk using the credentials of a
real user.

---

**Note:** _You are seeing this because you or someone else with access
to this repository has authorized Snyk to open fix PRs._

For more information:
🧐 [View latest project
report](https://app.snyk.io/org/significant-gravitas/project/3d924968-0cf3-4767-9609-501fa4962856?utm_source&#x3D;github&amp;utm_medium&#x3D;referral&amp;page&#x3D;fix-pr)
📜 [Customise PR
templates](https://docs.snyk.io/scan-using-snyk/pull-requests/snyk-fix-pull-or-merge-requests/customize-pr-templates?utm_source=github&utm_content=fix-pr-template)
🛠 [Adjust project
settings](https://app.snyk.io/org/significant-gravitas/project/3d924968-0cf3-4767-9609-501fa4962856?utm_source&#x3D;github&amp;utm_medium&#x3D;referral&amp;page&#x3D;fix-pr/settings)
📚 [Read about Snyk's upgrade
logic](https://docs.snyk.io/scan-with-snyk/snyk-open-source/manage-vulnerabilities/upgrade-package-versions-to-fix-vulnerabilities?utm_source=github&utm_content=fix-pr-template)

---

**Learn how to fix vulnerabilities with free interactive lessons:**

🦉 [Learn about vulnerability in an interactive lesson of Snyk
Learn.](https://learn.snyk.io/?loc&#x3D;fix-pr)


---------

Co-authored-by: snyk-bot <snyk-bot@snyk.io>
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
2025-11-25 17:32:40 +00:00
Reinier van der Leer
8f25d43089 Merge branch 'master' into dev 2025-11-25 17:55:14 +01:00
Reinier van der Leer
0c435c4afa dx: Update Node.js versions in GitHub workflows to match frontend requirement (#11449)
This unbreaks the Claude Code and Copilot workflows in our repo.

- Follow-up to #11288

### Changes 🏗️

- Update `node-version` on `actions/setup-node@v4` from v21 to v22
2025-11-25 14:48:57 +00:00
dependabot[bot]
18002cb8f0 chore(frontend/deps-dev): bump cross-env from 7.0.3 to 10.1.0 in /autogpt_platform/frontend (#11353)
Bumps [cross-env](https://github.com/kentcdodds/cross-env) from 7.0.3 to
10.1.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/kentcdodds/cross-env/releases">cross-env's
releases</a>.</em></p>
<blockquote>
<h2>v10.1.0</h2>
<h1><a
href="https://github.com/kentcdodds/cross-env/compare/v10.0.0...v10.1.0">10.1.0</a>
(2025-09-29)</h1>
<h3>Features</h3>
<ul>
<li>add support for default value syntax (<a
href="152ae6a85b">152ae6a</a>)</li>
</ul>
<p>For example:</p>
<pre lang="json"><code>&quot;dev:server&quot;: &quot;cross-env wrangler
dev --port ${PORT:-8787}&quot;,
</code></pre>
<p>If <code>PORT</code> is already set, use that value, otherwise
fallback to <code>8787</code>.</p>
<p>Learn more about <a
href="https://www.gnu.org/software/bash/manual/html_node/Shell-Parameter-Expansion.html">Shell
Parameter Expansion</a></p>
<h2>v10.0.0</h2>
<h1><a
href="https://github.com/kentcdodds/cross-env/compare/v9.0.0...v10.0.0">10.0.0</a>
(2025-07-25)</h1>
<p>TL;DR: You should probably not have to change anything if:</p>
<ul>
<li>You're using a modern maintained version of Node.js (v20+ is
tested)</li>
<li>You're only using the CLI (most of you are as that's the intended
purpose)</li>
</ul>
<p>In this release (which should have been v8 except I had some issues
with automated releases 🙈), I've updated all the things and modernized
the package. This happened in <a
href="https://redirect.github.com/kentcdodds/cross-env/issues/261">#261</a></p>
<p>Was this needed? Not really, but I just thought it'd be fun to
modernize this package.</p>
<p>Here's the highlights of what was done.</p>
<ul>
<li>Replace Jest with Vitest for testing</li>
<li>Convert all source files from .js to .ts with proper TypeScript
types</li>
<li>Use zshy for ESM-only builds (removes CJS support)</li>
<li>Adopt <code>@​epic-web/config</code> for TypeScript, ESLint, and
Prettier</li>
<li>Update to Node.js &gt;=20 requirement</li>
<li>Remove kcd-scripts dependency</li>
<li>Add comprehensive e2e tests with GitHub Actions matrix testing</li>
<li>Update GitHub workflow with caching and cross-platform testing</li>
<li>Modernize documentation and remove outdated sections</li>
<li>Update all dependencies to latest versions</li>
<li>Add proper TypeScript declarations and exports</li>
</ul>
<p>The tool maintains its original functionality while being completely
modernized with the latest tooling and best practices</p>
<h3>BREAKING CHANGES</h3>
<ul>
<li>This is a major rewrite that changes the module format from CommonJS
to ESM-only. The package now requires Node.js &gt;=20 and only exports
ESM modules (not relevant in most cases).</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="152ae6a85b"><code>152ae6a</code></a>
feat: add support ofr default value syntax</li>
<li><a
href="bd70d1ab25"><code>bd70d1a</code></a>
chore: upgrade zshy</li>
<li><a
href="8e0b190df9"><code>8e0b190</code></a>
chore(ci): get coverage</li>
<li><a
href="8635e80e81"><code>8635e80</code></a>
fix(release): manually release a major version</li>
<li><a
href="3a58f22360"><code>3a58f22</code></a>
chore: fix npmrc registry</li>
<li><a
href="b70bfff5ec"><code>b70bfff</code></a>
chore(ci): add names to steps and workflows</li>
<li><a
href="cc5759dc36"><code>cc5759d</code></a>
fix(release): manually release a major version</li>
<li><a
href="080a859190"><code>080a859</code></a>
chore: remove publish script</li>
<li><a
href="31e5bc70e7"><code>31e5bc7</code></a>
chore(ci): restore built files</li>
<li><a
href="81e9c34f55"><code>81e9c34</code></a>
chore(ci): add back semantic-release</li>
<li>Additional commits viewable in <a
href="https://github.com/kentcdodds/cross-env/compare/v7.0.3...v10.1.0">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=cross-env&package-manager=npm_and_yarn&previous-version=7.0.3&new-version=10.1.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.



Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
2025-11-25 14:28:01 +00:00
Ubbe
240a65e7b3 fix(frontend): show spinner login/logout (#11447)
## Changes 🏗️

Show spinners on the login/logout buttons while they are being
processed.

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Login with password: there is a spinner on the button while
logging in
  - [x] Logout: there is a spinner on the button while logging out
2025-11-25 20:16:16 +07:00
Ubbe
07368468a4 fix(frontend): make agent output clickable (#11433)
### Changes 🏗️

<img width="1490" height="432" alt="Screenshot 2025-11-24 at 23 26 12"
src="https://github.com/user-attachments/assets/e5a149f2-7751-4276-9b76-707db7afdd46"
/>


The agent output buttons on the new run page weren't always clickable
due to a missing `z-index`.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Login
  - [x] Open run in new runs page
  - [x] Output buttons are clickable without issues
2025-11-25 20:15:46 +07:00
Ubbe
52aac09577 feat(frontend): environment aware favicon (#11431)
## Changes 🏗️

Change the favicon colour in the tab depending on the environment, so
when you have multiple tabs open (production, staging, local...) it is
clear which tab maps to which environment.

### Local ( orange )

<img width="257" height="40" alt="Screenshot 2025-11-24 at 22 38 27"
src="https://github.com/user-attachments/assets/705ddf6b-cc4a-498a-ad15-ed2c60f6b397"
/>

### Dev ( green )

<img width="263" height="40" alt="Screenshot 2025-11-24 at 22 45 20"
src="https://github.com/user-attachments/assets/eda3ba16-8646-4032-ad3c-7a8fc4db778c"
/>

### Example

<img width="513" height="41" alt="Screenshot 2025-11-24 at 22 45 09"
src="https://github.com/user-attachments/assets/1a43f860-536a-465e-9c10-a68c5218a58c"
/>

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Load the app and the favicon colour matches the env
2025-11-25 20:15:31 +07:00
Bently
64a775dfa7 feat(backend/blocks): Add GPT-5.1 and GPT-5.1-codex (#11406)
This PR adds the latest gpt-5.1 and gpt-5.1-codex LLMs from OpenAI, as
well as updating the price of the gpt-5-chat model.

https://platform.openai.com/docs/models/gpt-5.1
https://platform.openai.com/docs/models/gpt-5.1-codex

I also had to add a new codex block, as it uses a different OpenAI API
and has options the main LLMs don't use; see the sketch below.
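
For context, a minimal sketch of calling a codex model directly with the OpenAI Python SDK; the codex models are served through the Responses API rather than Chat Completions, which is presumably the different API referred to above:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

resp = client.responses.create(
    model="gpt-5.1-codex",
    input="Write a function that reverses a linked list.",
)
print(resp.output_text)
```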

<img width="231" height="755" alt="image"
src="https://github.com/user-attachments/assets/a4056633-7b0f-446f-ae86-d7755c5b88ec"
/>


#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Test the latest gpt-5.1 llm
  - [x] Test the latest gpt-5.1-codex block

---------

Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
Co-authored-by: Claude <noreply@anthropic.com>
2025-11-25 09:33:11 +00:00
Bently
5d97706bb8 feat(backend/blocks): Add claude opus 4.5 (#11446)
This PR adds the latest [claude opus
4.5](https://www.anthropic.com/news/claude-opus-4-5) model to the
platform

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Test and use the llm to make sure it works
2025-11-25 09:11:02 +00:00
dependabot[bot]
244f3c7c71 chore(backend/deps-dev): bump faker from 37.8.0 to 38.2.0 in /autogpt_platform/backend (#11435)
Bumps [faker](https://github.com/joke2k/faker) from 37.8.0 to 38.2.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/joke2k/faker/releases">faker's
releases</a>.</em></p>
<blockquote>
<h2>Release v38.2.0</h2>
<p>See <a
href="https://github.com/joke2k/faker/blob/refs/tags/v38.2.0/CHANGELOG.md">CHANGELOG.md</a>.</p>
<h2>Release v38.1.0</h2>
<p>See <a
href="https://github.com/joke2k/faker/blob/refs/tags/v38.1.0/CHANGELOG.md">CHANGELOG.md</a>.</p>
<h2>Release v38.0.0</h2>
<p>See <a
href="https://github.com/joke2k/faker/blob/refs/tags/v38.0.0/CHANGELOG.md">CHANGELOG.md</a>.</p>
<h2>Release v37.12.0</h2>
<p>See <a
href="https://github.com/joke2k/faker/blob/refs/tags/v37.12.0/CHANGELOG.md">CHANGELOG.md</a>.</p>
<h2>Release v37.11.0</h2>
<p>See <a
href="https://github.com/joke2k/faker/blob/refs/tags/v37.11.0/CHANGELOG.md">CHANGELOG.md</a>.</p>
<h2>Release v37.10.0</h2>
<p>See <a
href="https://github.com/joke2k/faker/blob/refs/tags/v37.10.0/CHANGELOG.md">CHANGELOG.md</a>.</p>
<h2>Release v37.9.0</h2>
<p>See <a
href="https://github.com/joke2k/faker/blob/refs/tags/v37.9.0/CHANGELOG.md">CHANGELOG.md</a>.</p>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/joke2k/faker/blob/master/CHANGELOG.md">faker's
changelog</a>.</em></p>
<blockquote>
<h3><a
href="https://github.com/joke2k/faker/compare/v38.1.0...v38.2.0">v38.2.0
- 2025-11-19</a></h3>
<ul>
<li>Add localized UniqueProxy. Thanks <a
href="https://github.com/azmeuk"><code>@​azmeuk</code></a></li>
</ul>
<h3><a
href="https://github.com/joke2k/faker/compare/v38.0.0...v38.1.0">v38.1.0
- 2025-11-19</a></h3>
<ul>
<li>Add <code>person</code> provider for <code>ar_DZ</code> locale.
Thanks <a
href="https://github.com/othmane099"><code>@​othmane099</code></a>.</li>
<li>Add <code>person</code>, <code>phone_number</code>,
<code>date_time</code> for <code>fr_DZ</code> locale. Thanks <a
href="https://github.com/othmane099"><code>@​othmane099</code></a>.</li>
</ul>
<h3><a
href="https://github.com/joke2k/faker/compare/v37.12.0...v38.0.0">v38.0.0
- 2025-11-11</a></h3>
<ul>
<li>Drop support for Python 3.9</li>
<li>Add support for Python 3.14</li>
</ul>
<h3><a
href="https://github.com/joke2k/faker/compare/v37.11.0...v37.12.0">v37.12.0
- 2025-10-07</a></h3>
<ul>
<li>Add french VAT number. Thanks <a
href="https://github.com/fabien-michel"><code>@​fabien-michel</code></a>.</li>
</ul>
<h3><a
href="https://github.com/joke2k/faker/compare/v37.9.0...v37.11.0">v37.11.0
- 2025-10-07</a></h3>
<ul>
<li>Add French company APE code. Thanks <a
href="https://github.com/fabien-michel"><code>@​fabien-michel</code></a>.</li>
</ul>
<h3><a
href="https://github.com/joke2k/faker/compare/v37.8.0...v37.9.0">v37.9.0
- 2025-10-07</a></h3>
<ul>
<li>Add names generation to <code>en_KE</code> locale. Thanks <a
href="https://github.com/titustum"><code>@​titustum</code></a>.</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="337f8faea2"><code>337f8fa</code></a>
Bump version: 38.1.0 → 38.2.0</li>
<li><a
href="d8fb7f20fa"><code>d8fb7f2</code></a>
📝 Update CHANGELOG.md</li>
<li><a
href="243e3174c0"><code>243e317</code></a>
lint docs</li>
<li><a
href="e398287902"><code>e398287</code></a>
📝 Update docs</li>
<li><a
href="3cc7f7750f"><code>3cc7f77</code></a>
feat: localized UniqueProxy (<a
href="https://redirect.github.com/joke2k/faker/issues/2279">#2279</a>)</li>
<li><a
href="8ba30da5f7"><code>8ba30da</code></a>
Bump version: 38.0.0 → 38.1.0</li>
<li><a
href="921bde120f"><code>921bde1</code></a>
📝 Update CHANGELOG.md</li>
<li><a
href="702e23b8e3"><code>702e23b</code></a>
fix newline</li>
<li><a
href="d5051a98db"><code>d5051a9</code></a>
add_faker_pk_pypi_link (<a
href="https://redirect.github.com/joke2k/faker/issues/2281">#2281</a>)</li>
<li><a
href="050de370cc"><code>050de37</code></a>
Add <code>person</code> provider for <code>ar_DZ</code> locale (<a
href="https://redirect.github.com/joke2k/faker/issues/2271">#2271</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/joke2k/faker/compare/v37.8.0...v38.2.0">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=faker&package-manager=pip&previous-version=37.8.0&new-version=38.2.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.



---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
2025-11-25 09:05:48 +00:00
Ubbe
355219acbd feat(frontend): fix tab titles (#11432)
## Changes 🏗️

Add some missing page titles, most importantly the missing one on the
new runs page.

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run the app
  - [x] Page titles are there
2025-11-25 15:47:38 +07:00
Ubbe
1ab66eaed4 fix(frontend): login/signup pages away redirects (#11430)
## Changes 🏗️

When the user is logged in and tries to navigate to `/login` or
`/signup` manually, redirect them away to `/marketplace`.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Login
  - [x] Go to `/login` or `/signup`
  - [x] You are redirected back into `/marketplace`
2025-11-25 15:47:29 +07:00
Bently
126d5838a0 feat(backend/blocks): add latest grok models (#11422)
This PR adds some of the latest grok models to the platform
``x-ai/grok-4-fast``, ``x-ai/grok-4.1-fast`` and ``ai/grok-code-fast-1``

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
- [x] Test all of the latest grok models to make sure they work and they
do!

<img width="1089" height="714" alt="image"
src="https://github.com/user-attachments/assets/0d1e3984-69e8-432b-982a-b04c16bc4f41"
/>
2025-11-24 13:25:48 +00:00
Bently
643aea849b feat(backend/blocks): Add google banana pro (#11425)
This PR adds the latest google banana pro image generator and editor to
the platform and fixes up some of the prices for the image generation
models

I asked for ``Generate a image of a dog on a skateboard`` and this is
what i got:
<img width="2048" height="2048" alt="image"
src="https://github.com/user-attachments/assets/9b6c16d8-df8f-4fb6-a009-d6d342f9beb7"
/>

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
- [x] Test the image generator and image editor block using the latest
google banana pro model and it works

---------

Co-authored-by: Abhimanyu Yadav <122007096+Abhi1992002@users.noreply.github.com>
2025-11-24 13:23:54 +00:00
Swifty
3b092f34d8 feat(platform): Add Get Linear Issues Block (#11415)
Added the ability to get all issues for a given project.

### Changes 🏗️

- added api query
- added new models
- added new block that gets all issues for a given project

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] I have ensured the new block works in dev
  - [x] I have ensured the other linear blocks still work
2025-11-24 11:43:10 +00:00
Swifty
0921d23628 fix(block): Improve error handling of SendEmailBlock (#11420)
Currently, if the SMTP server is not configured, it results in a
platform error. This PR simplifies the error handling.

### Changes 🏗️
 
- removed the default value for the SMTP server host
- capture common errors and yield them as an error output (see the sketch below)
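
As an illustration of the second point (a sketch, not the block's actual code), common SMTP failures can be caught and yielded as an error output instead of raised:

```python
import smtplib
from email.message import EmailMessage

def send_email(host: str, port: int, user: str, password: str, msg: EmailMessage):
    """Yield an error output instead of raising on common SMTP failures."""
    try:
        with smtplib.SMTP(host, port, timeout=30) as smtp:
            smtp.starttls()
            smtp.login(user, password)
            smtp.send_message(msg)
        yield "status", "sent"
    except (smtplib.SMTPException, OSError) as e:
        # covers bad host/port config, auth failures, and connection errors
        yield "error", f"Failed to send email: {e}"
```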

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Checked all tests still pass
2025-11-24 11:42:38 +00:00
Lluis Agusti
0edc669874 refactor(frontend): debug preview stealing dev 2025-11-21 10:39:12 +07:00
Abhimanyu Yadav
e64d3d9b99 feat(frontend): implement agent outputs viewer and improve UI/UX in new builder (#11421)
This PR introduces several improvements to the new builder experience:

**1. Agent Outputs Feature** 
- Implemented a new `AgentOutputs` component that displays execution
outputs from OUTPUT blocks
- Added a slide-out sheet UI to view agent outputs with proper
formatting
- Integrated with existing output renderers from the library view
- Shows output block names, descriptions, and rendered values
- Added beta badge to indicate feature is still experimental

**2. UI/UX Improvements** 🎨
- Fixed graph loading spinner color from violet to neutral zinc for
better consistency
- Adjusted node shadow styling for better visual hierarchy (reduced
shadow when not selected)
- Fixed credential field button spacing to prevent layout overflow
- Improved array editor widget delete button positioning
- Added proper link handling for integration redirects (opens in new
tab)
- Fixed object editor to handle null values gracefully

**3. Performance & State Management** 🚀
- Fixed race condition in run input dialog by awaiting execution before
closing
- Added proper history initialization after graph loads
- Added `outputSchema` to graph store for tracking output blocks
- Fixed search bar to maintain query state properly
- Added automatic fit view on graph load for better initial viewport

**4. Build Actions Bar** 🔧
- Reduced padding for more compact appearance
- Enabled/disabled Agent Outputs button based on presence of output
blocks
- Removed loading icon from manual run button when not executing

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Created and executed an agent with OUTPUT blocks to verify outputs
display correctly
- [x] Tested output viewer with different data types (text, JSON,
images, etc.)
- [x] Verified credential field layouts don't overflow in constrained
spaces
  - [x] Tested array editor delete functionality and button positioning
- [x] Confirmed graph loads with proper fit view and history
initialization
  - [x] Tested run input dialog closes only after execution starts
  - [x] Verified integration links open in new tabs
  - [x] Tested object editor with null values
2025-11-20 17:06:02 +00:00
Lluis Agusti
41dc39b97d fix(frontend): preview banner amend 2025-11-20 23:38:51 +07:00
Ubbe
80e573f33b feat(frontend): PR preview banner (#11412)
## Changes 🏗️

<img width="900" height="757" alt="Screenshot 2025-11-19 at 12 18 38"
src="https://github.com/user-attachments/assets/e2c2a4cf-a05e-431e-853d-fb0a68729e54"
/>

When the dev environment is used for a PR preview, show a banner at the
top of the page to indicate this.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Create a PR preview against Dev
  - [x] Check it shows the banner once this is merged
  - [x] Or try locally with the env var set

### For configuration changes:

`NEXT_PUBLIC_PREVIEW_STEALING_DEV` is set programmatically via our Infra
CI.
2025-11-20 23:08:32 +07:00
dependabot[bot]
06d20e7e4c chore(backend/deps-dev): bump the development-dependencies group across 1 directory with 3 updates (#11411)
Bumps the development-dependencies group with 3 updates in the
/autogpt_platform/backend directory:
[pre-commit](https://github.com/pre-commit/pre-commit),
[pyright](https://github.com/RobertCraigie/pyright-python) and
[ruff](https://github.com/astral-sh/ruff).

Updates `pre-commit` from 4.3.0 to 4.4.0
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/pre-commit/pre-commit/releases">pre-commit's
releases</a>.</em></p>
<blockquote>
<h2>pre-commit v4.4.0</h2>
<h3>Features</h3>
<ul>
<li>Add <code>--fail-fast</code> option to <code>pre-commit run</code>.
<ul>
<li><a
href="https://redirect.github.com/pre-commit/pre-commit/issues/3528">#3528</a>
PR by <a
href="https://github.com/JulianMaurin"><code>@​JulianMaurin</code></a>.</li>
</ul>
</li>
<li>Upgrade <code>ruby-build</code> / <code>rbenv</code>.
<ul>
<li><a
href="https://redirect.github.com/pre-commit/pre-commit/issues/3566">#3566</a>
PR by <a
href="https://github.com/asottile"><code>@​asottile</code></a>.</li>
<li><a
href="https://redirect.github.com/pre-commit/pre-commit/issues/3565">#3565</a>
issue by <a
href="https://github.com/MRigal"><code>@​MRigal</code></a>.</li>
</ul>
</li>
<li>Add <code>language: unsupported</code> / <code>language:
unsupported_script</code> as aliases for <code>language: system</code> /
<code>language: script</code> (which will eventually be deprecated).
<ul>
<li><a
href="https://redirect.github.com/pre-commit/pre-commit/issues/3577">#3577</a>
PR by <a
href="https://github.com/asottile"><code>@​asottile</code></a>.</li>
</ul>
</li>
<li>Add support docker-in-docker detection for cgroups v2.
<ul>
<li><a
href="https://redirect.github.com/pre-commit/pre-commit/issues/3535">#3535</a>
PR by <a
href="https://github.com/br-rhrbacek"><code>@​br-rhrbacek</code></a>.</li>
<li><a
href="https://redirect.github.com/pre-commit/pre-commit/issues/3360">#3360</a>
issue by <a
href="https://github.com/JasonAlt"><code>@​JasonAlt</code></a>.</li>
</ul>
</li>
</ul>
<h3>Fixes</h3>
<ul>
<li>Handle when docker gives <code>SecurityOptions: null</code>.
<ul>
<li><a
href="https://redirect.github.com/pre-commit/pre-commit/issues/3537">#3537</a>
PR by <a
href="https://github.com/asottile"><code>@​asottile</code></a>.</li>
<li><a
href="https://redirect.github.com/pre-commit/pre-commit/issues/3514">#3514</a>
issue by <a
href="https://github.com/jenstroeger"><code>@​jenstroeger</code></a>.</li>
</ul>
</li>
<li>Fix error context for invalid <code>stages</code> in
<code>.pre-commit-config.yaml</code>.
<ul>
<li><a
href="https://redirect.github.com/pre-commit/pre-commit/issues/3576">#3576</a>
PR by <a
href="https://github.com/asottile"><code>@​asottile</code></a>.</li>
</ul>
</li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/pre-commit/pre-commit/blob/main/CHANGELOG.md">pre-commit's
changelog</a>.</em></p>
<blockquote>
<h1>4.4.0 - 2025-11-08</h1>
<h3>Features</h3>
<ul>
<li>Add <code>--fail-fast</code> option to <code>pre-commit run</code>.
<ul>
<li><a
href="https://redirect.github.com/pre-commit/pre-commit/issues/3528">#3528</a>
PR by <a
href="https://github.com/JulianMaurin"><code>@​JulianMaurin</code></a>.</li>
</ul>
</li>
<li>Upgrade <code>ruby-build</code> / <code>rbenv</code>.
<ul>
<li><a
href="https://redirect.github.com/pre-commit/pre-commit/issues/3566">#3566</a>
PR by <a
href="https://github.com/asottile"><code>@​asottile</code></a>.</li>
<li><a
href="https://redirect.github.com/pre-commit/pre-commit/issues/3565">#3565</a>
issue by <a
href="https://github.com/MRigal"><code>@​MRigal</code></a>.</li>
</ul>
</li>
<li>Add <code>language: unsupported</code> / <code>language:
unsupported_script</code> as aliases
for <code>language: system</code> / <code>language: script</code> (which
will eventually be
deprecated).
<ul>
<li><a
href="https://redirect.github.com/pre-commit/pre-commit/issues/3577">#3577</a>
PR by <a
href="https://github.com/asottile"><code>@​asottile</code></a>.</li>
</ul>
</li>
<li>Add support docker-in-docker detection for cgroups v2.
<ul>
<li><a
href="https://redirect.github.com/pre-commit/pre-commit/issues/3535">#3535</a>
PR by <a
href="https://github.com/br-rhrbacek"><code>@​br-rhrbacek</code></a>.</li>
<li><a
href="https://redirect.github.com/pre-commit/pre-commit/issues/3360">#3360</a>
issue by <a
href="https://github.com/JasonAlt"><code>@​JasonAlt</code></a>.</li>
</ul>
</li>
</ul>
<h3>Fixes</h3>
<ul>
<li>Handle when docker gives <code>SecurityOptions: null</code>.
<ul>
<li><a
href="https://redirect.github.com/pre-commit/pre-commit/issues/3537">#3537</a>
PR by <a
href="https://github.com/asottile"><code>@​asottile</code></a>.</li>
<li><a
href="https://redirect.github.com/pre-commit/pre-commit/issues/3514">#3514</a>
issue by <a
href="https://github.com/jenstroeger"><code>@​jenstroeger</code></a>.</li>
</ul>
</li>
<li>Fix error context for invalid <code>stages</code> in
<code>.pre-commit-config.yaml</code>.
<ul>
<li><a
href="https://redirect.github.com/pre-commit/pre-commit/issues/3576">#3576</a>
PR by <a
href="https://github.com/asottile"><code>@​asottile</code></a>.</li>
</ul>
</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="17cf886473"><code>17cf886</code></a>
v4.4.0</li>
<li><a
href="cb63a5cb9a"><code>cb63a5c</code></a>
Merge pull request <a
href="https://redirect.github.com/pre-commit/pre-commit/issues/3535">#3535</a>
from br-rhrbacek/fix-cgroups</li>
<li><a
href="f80801d75a"><code>f80801d</code></a>
Fix docker-in-docker detection for cgroups v2</li>
<li><a
href="9143fc3545"><code>9143fc3</code></a>
Merge pull request <a
href="https://redirect.github.com/pre-commit/pre-commit/issues/3577">#3577</a>
from pre-commit/language-unsupported</li>
<li><a
href="725acc969a"><code>725acc9</code></a>
rename system and script languages to unsupported /
unsupported_script</li>
<li><a
href="3815e2e6d8"><code>3815e2e</code></a>
Merge pull request <a
href="https://redirect.github.com/pre-commit/pre-commit/issues/3576">#3576</a>
from pre-commit/fix-stages-config-error</li>
<li><a
href="aa2961c122"><code>aa2961c</code></a>
fix missing context in error for stages</li>
<li><a
href="46297f7cd6"><code>46297f7</code></a>
Merge pull request <a
href="https://redirect.github.com/pre-commit/pre-commit/issues/3575">#3575</a>
from pre-commit/rm-python3-hooks-repo</li>
<li><a
href="95eec75004"><code>95eec75</code></a>
rm python3_hooks_repo</li>
<li><a
href="5e4b3546f3"><code>5e4b354</code></a>
Merge pull request <a
href="https://redirect.github.com/pre-commit/pre-commit/issues/3574">#3574</a>
from pre-commit/rm-hook-with-spaces-test</li>
<li>Additional commits viewable in <a
href="https://github.com/pre-commit/pre-commit/compare/v4.3.0...v4.4.0">compare
view</a></li>
</ul>
</details>
<br />

Updates `pyright` from 1.1.406 to 1.1.407
<details>
<summary>Commits</summary>
<ul>
<li><a
href="53e8efb463"><code>53e8efb</code></a>
Pyright NPM Package update to 1.1.407 (<a
href="https://redirect.github.com/RobertCraigie/pyright-python/issues/356">#356</a>)</li>
<li>See full diff in <a
href="https://github.com/RobertCraigie/pyright-python/compare/v1.1.406...v1.1.407">compare
view</a></li>
</ul>
</details>
<br />

Updates `ruff` from 0.13.3 to 0.14.5
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/astral-sh/ruff/releases">ruff's
releases</a>.</em></p>
<blockquote>
<h2>0.14.5</h2>
<h2>Release Notes</h2>
<p>Released on 2025-11-13.</p>
<h3>Preview features</h3>
<ul>
<li>[<code>flake8-simplify</code>] Apply <code>SIM113</code> when index
variable is of type <code>int</code> (<a
href="https://redirect.github.com/astral-sh/ruff/pull/21395">#21395</a>)</li>
<li>[<code>pydoclint</code>] Fix false positive when Sphinx directives
follow a &quot;Raises&quot; section (<code>DOC502</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/20535">#20535</a>)</li>
<li>[<code>pydoclint</code>] Support NumPy-style comma-separated
parameters (<code>DOC102</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/20972">#20972</a>)</li>
<li>[<code>refurb</code>] Auto-fix annotated assignments
(<code>FURB101</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/21278">#21278</a>)</li>
<li>[<code>ruff</code>] Ignore <code>str()</code> when not used for
simple conversion (<code>RUF065</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/21330">#21330</a>)</li>
</ul>
<h3>Bug fixes</h3>
<ul>
<li>Fix syntax error false positive on alternative <code>match</code>
patterns (<a
href="https://redirect.github.com/astral-sh/ruff/pull/21362">#21362</a>)</li>
<li>[<code>flake8-simplify</code>] Fix false positive for iterable
initializers with generator arguments (<code>SIM222</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/21187">#21187</a>)</li>
<li>[<code>pyupgrade</code>] Fix false positive on relative imports from
local <code>.builtins</code> module (<code>UP029</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/21309">#21309</a>)</li>
<li>[<code>pyupgrade</code>] Consistently set the deprecated tag
(<code>UP035</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/21396">#21396</a>)</li>
</ul>
<h3>Rule changes</h3>
<ul>
<li>[<code>refurb</code>] Detect empty f-strings (<code>FURB105</code>)
(<a
href="https://redirect.github.com/astral-sh/ruff/pull/21348">#21348</a>)</li>
</ul>
<h3>CLI</h3>
<ul>
<li>Add option to provide a reason to <code>--add-noqa</code> (<a
href="https://redirect.github.com/astral-sh/ruff/pull/21294">#21294</a>)</li>
<li>Add upstream linter URL to <code>ruff linter
--output-format=json</code> (<a
href="https://redirect.github.com/astral-sh/ruff/pull/21316">#21316</a>)</li>
<li>Add color to <code>--help</code> (<a
href="https://redirect.github.com/astral-sh/ruff/pull/21337">#21337</a>)</li>
</ul>
<h3>Documentation</h3>
<ul>
<li>Add a new &quot;Opening a PR&quot; section to the contribution guide
(<a
href="https://redirect.github.com/astral-sh/ruff/pull/21298">#21298</a>)</li>
<li>Added the PyScripter IDE to the list of &quot;Who is using
Ruff?&quot; (<a
href="https://redirect.github.com/astral-sh/ruff/pull/21402">#21402</a>)</li>
<li>Update PyCharm setup instructions (<a
href="https://redirect.github.com/astral-sh/ruff/pull/21409">#21409</a>)</li>
<li>[<code>flake8-annotations</code>] Add link to
<code>allow-star-arg-any</code> option (<code>ANN401</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/21326">#21326</a>)</li>
</ul>
<h3>Other changes</h3>
<ul>
<li>[<code>configuration</code>] Improve error message when
<code>line-length</code> exceeds <code>u16::MAX</code> (<a
href="https://redirect.github.com/astral-sh/ruff/pull/21329">#21329</a>)</li>
</ul>
<h3>Contributors</h3>
<ul>
<li><a href="https://github.com/njhearp"><code>@​njhearp</code></a></li>
<li><a href="https://github.com/11happy"><code>@​11happy</code></a></li>
<li><a href="https://github.com/hugovk"><code>@​hugovk</code></a></li>
<li><a href="https://github.com/Gankra"><code>@​Gankra</code></a></li>
<li><a href="https://github.com/ntBre"><code>@​ntBre</code></a></li>
<li><a
href="https://github.com/pyscripter"><code>@​pyscripter</code></a></li>
<li><a
href="https://github.com/danparizher"><code>@​danparizher</code></a></li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/astral-sh/ruff/blob/main/CHANGELOG.md">ruff's
changelog</a>.</em></p>
<blockquote>
<h2>0.14.5</h2>
<p>Released on 2025-11-13.</p>
<h3>Preview features</h3>
<ul>
<li>[<code>flake8-simplify</code>] Apply <code>SIM113</code> when index
variable is of type <code>int</code> (<a
href="https://redirect.github.com/astral-sh/ruff/pull/21395">#21395</a>)</li>
<li>[<code>pydoclint</code>] Fix false positive when Sphinx directives
follow a &quot;Raises&quot; section (<code>DOC502</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/20535">#20535</a>)</li>
<li>[<code>pydoclint</code>] Support NumPy-style comma-separated
parameters (<code>DOC102</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/20972">#20972</a>)</li>
<li>[<code>refurb</code>] Auto-fix annotated assignments
(<code>FURB101</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/21278">#21278</a>)</li>
<li>[<code>ruff</code>] Ignore <code>str()</code> when not used for
simple conversion (<code>RUF065</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/21330">#21330</a>)</li>
</ul>
<h3>Bug fixes</h3>
<ul>
<li>Fix syntax error false positive on alternative <code>match</code>
patterns (<a
href="https://redirect.github.com/astral-sh/ruff/pull/21362">#21362</a>)</li>
<li>[<code>flake8-simplify</code>] Fix false positive for iterable
initializers with generator arguments (<code>SIM222</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/21187">#21187</a>)</li>
<li>[<code>pyupgrade</code>] Fix false positive on relative imports from
local <code>.builtins</code> module (<code>UP029</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/21309">#21309</a>)</li>
<li>[<code>pyupgrade</code>] Consistently set the deprecated tag
(<code>UP035</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/21396">#21396</a>)</li>
</ul>
<h3>Rule changes</h3>
<ul>
<li>[<code>refurb</code>] Detect empty f-strings (<code>FURB105</code>)
(<a
href="https://redirect.github.com/astral-sh/ruff/pull/21348">#21348</a>)</li>
</ul>
<h3>CLI</h3>
<ul>
<li>Add option to provide a reason to <code>--add-noqa</code> (<a
href="https://redirect.github.com/astral-sh/ruff/pull/21294">#21294</a>)</li>
<li>Add upstream linter URL to <code>ruff linter
--output-format=json</code> (<a
href="https://redirect.github.com/astral-sh/ruff/pull/21316">#21316</a>)</li>
<li>Add color to <code>--help</code> (<a
href="https://redirect.github.com/astral-sh/ruff/pull/21337">#21337</a>)</li>
</ul>
<h3>Documentation</h3>
<ul>
<li>Add a new &quot;Opening a PR&quot; section to the contribution guide
(<a
href="https://redirect.github.com/astral-sh/ruff/pull/21298">#21298</a>)</li>
<li>Added the PyScripter IDE to the list of &quot;Who is using
Ruff?&quot; (<a
href="https://redirect.github.com/astral-sh/ruff/pull/21402">#21402</a>)</li>
<li>Update PyCharm setup instructions (<a
href="https://redirect.github.com/astral-sh/ruff/pull/21409">#21409</a>)</li>
<li>[<code>flake8-annotations</code>] Add link to
<code>allow-star-arg-any</code> option (<code>ANN401</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/21326">#21326</a>)</li>
</ul>
<h3>Other changes</h3>
<ul>
<li>[<code>configuration</code>] Improve error message when
<code>line-length</code> exceeds <code>u16::MAX</code> (<a
href="https://redirect.github.com/astral-sh/ruff/pull/21329">#21329</a>)</li>
</ul>
<h3>Contributors</h3>
<ul>
<li><a href="https://github.com/njhearp"><code>@​njhearp</code></a></li>
<li><a href="https://github.com/11happy"><code>@​11happy</code></a></li>
<li><a href="https://github.com/hugovk"><code>@​hugovk</code></a></li>
<li><a href="https://github.com/Gankra"><code>@​Gankra</code></a></li>
<li><a href="https://github.com/ntBre"><code>@​ntBre</code></a></li>
<li><a
href="https://github.com/pyscripter"><code>@​pyscripter</code></a></li>
<li><a
href="https://github.com/danparizher"><code>@​danparizher</code></a></li>
<li><a
href="https://github.com/MichaReiser"><code>@​MichaReiser</code></a></li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="87dafb8787"><code>87dafb8</code></a>
Bump 0.14.5 (<a
href="https://redirect.github.com/astral-sh/ruff/issues/21435">#21435</a>)</li>
<li><a
href="9e80e5a3a6"><code>9e80e5a</code></a>
[ty] Support <code>type[…]</code> and <code>Type[…]</code> in implicit
type aliases (<a
href="https://redirect.github.com/astral-sh/ruff/issues/21421">#21421</a>)</li>
<li><a
href="f9cc26aa12"><code>f9cc26a</code></a>
[ty] Respect notebook cell boundaries when adding an auto import (<a
href="https://redirect.github.com/astral-sh/ruff/issues/21322">#21322</a>)</li>
<li><a
href="d49c326309"><code>d49c326</code></a>
Update PyCharm setup instructions (<a
href="https://redirect.github.com/astral-sh/ruff/issues/21409">#21409</a>)</li>
<li><a
href="e70fccbf25"><code>e70fccb</code></a>
[ty] Improve LSP test server logging (<a
href="https://redirect.github.com/astral-sh/ruff/issues/21432">#21432</a>)</li>
<li><a
href="90b32f3b3b"><code>90b32f3</code></a>
[ty] Ensure annotation/type expressions in stub files are always
deferred (<a
href="https://redirect.github.com/astral-sh/ruff/issues/2">#2</a>...</li>
<li><a
href="99694b6e4a"><code>99694b6</code></a>
Use <code>profiling</code> profile for <code>cargo test(linux,
release)</code> (<a
href="https://redirect.github.com/astral-sh/ruff/issues/21429">#21429</a>)</li>
<li><a
href="67e54fffe1"><code>67e54ff</code></a>
[ty] Fix panic for cyclic star imports (<a
href="https://redirect.github.com/astral-sh/ruff/issues/21428">#21428</a>)</li>
<li><a
href="a01b0d7780"><code>a01b0d7</code></a>
[ty] Press 'enter' to rerun all mdtests (<a
href="https://redirect.github.com/astral-sh/ruff/issues/21427">#21427</a>)</li>
<li><a
href="04ab9170d6"><code>04ab917</code></a>
[ty] Further improve subscript assignment diagnostics (<a
href="https://redirect.github.com/astral-sh/ruff/issues/21411">#21411</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/astral-sh/ruff/compare/0.13.3...0.14.5">compare
view</a></li>
</ul>
</details>
<br />


Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore <dependency name> major version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's major version (unless you unignore this specific
dependency's major version or upgrade to it yourself)
- `@dependabot ignore <dependency name> minor version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's minor version (unless you unignore this specific
dependency's minor version or upgrade to it yourself)
- `@dependabot ignore <dependency name>` will close this group update PR
and stop Dependabot creating any more for the specific dependency
(unless you unignore this specific dependency or upgrade to it yourself)
- `@dependabot unignore <dependency name>` will remove all of the ignore
conditions of the specified dependency
- `@dependabot unignore <dependency name> <ignore condition>` will
remove the ignore condition of the specified dependency and ignore
conditions


</details>

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
2025-11-20 07:47:06 +00:00
Bently
07b5fe859a feat(platform/backend): add gemini-3-pro-preview (#11413)
This adds `gemini-3-pro-preview` from OpenRouter

https://openrouter.ai/google/gemini-3-pro-preview

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Tested the Gemini 3 model in the LLM blocks and verified it works
2025-11-19 14:56:54 +00:00
Abhimanyu Yadav
746dbbac84 refactor(frontend): improve new builder performance and UX with position handling and store optimizations (#11397)
This PR introduces several performance and user experience improvements
to the new builder, focusing on node positioning, state management
optimizations, and visual enhancements.

The new builder had several issues that impacted developer experience
and runtime performance:
- Inefficient store subscriptions causing unnecessary re-renders
- No intelligent node positioning when adding blocks via clicking
- useEffect dependencies causing potential stale closures
- Width constraints missing on form fields affecting layout consistency

### Changes 🏗️

#### Performance Optimizations
- **Store subscription optimization**: Added `useShallow` from zustand
to prevent unnecessary re-renders in
[NodeContainer](file:///app/(platform)/build/components/FlowEditor/nodes/CustomNode/components/NodeContainer.tsx)
and
[NodeExecutionBadge](file:///app/(platform)/build/components/FlowEditor/nodes/CustomNode/components/NodeExecutionBadge.tsx)
- **useEffect cleanup**: Split combined useEffects in
[useFlow](file:///app/(platform)/build/hooks/useFlow.ts) for clearer
dependencies and better performance
- **Memoization**: Added `memo` to
[NewControlPanel](file:///app/(platform)/build/components/NewControlPanel/NewControlPanel.tsx)
to prevent unnecessary re-renders
- **Callback optimization**: Wrapped `onDrop` handler in `useCallback`
to prevent recreation on every render
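
As a minimal sketch of the subscription pattern described above (the store shape, selector, and component names are illustrative assumptions):

```tsx
import { memo } from "react";
import { create } from "zustand";
import { useShallow } from "zustand/react/shallow";

// Hypothetical store shape, for illustration only.
type NodeState = {
  selectedIds: string[];
  statuses: Record<string, string>;
};

const useNodeStore = create<NodeState>(() => ({ selectedIds: [], statuses: {} }));

// useShallow compares the selected slice by shallow equality, so this
// component re-renders only when `selectedIds` actually changes,
// not on every store update.
const NodeContainer = memo(function NodeContainer({ id }: { id: string }) {
  const selectedIds = useNodeStore(useShallow((s) => s.selectedIds));
  return <div data-selected={selectedIds.includes(id)} />;
});
```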

#### UX Improvements  
- **Smart node positioning**: Implemented `findFreePosition` algorithm
in [helper.ts](file:///app/(platform)/build/components/helper.ts) that:
  - Automatically finds non-overlapping positions for new nodes
  - Tries right, left, then below existing nodes
  - Falls back to a far-right position if no space is available (see the sketch after this list)
- **Click-to-add blocks**: Added click handlers to blocks that:
  - Add the block at an intelligent position
- Automatically pan viewport to center the new node with smooth
animation
- **Visual feedback**: Added loading state with spinner icon for agent
blocks during fetch
- **Form field width**: Added `max-w-[340px]` constraint to prevent
overflow in
[FieldTemplate](file:///components/renderers/input-renderer/templates/FieldTemplate.tsx)
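
A sketch of a `findFreePosition`-style search following the fallback order above; node dimensions and spacing constants are assumptions:

```ts
type XY = { x: number; y: number };
type Rect = XY & { width: number; height: number };

const GAP = 40; // assumed spacing between nodes

function overlaps(a: Rect, b: Rect): boolean {
  return (
    a.x < b.x + b.width + GAP &&
    a.x + a.width + GAP > b.x &&
    a.y < b.y + b.height + GAP &&
    a.y + a.height + GAP > b.y
  );
}

// Try right, then left, then below the anchor; fall back to the far right.
function findFreePosition(anchor: Rect, existing: Rect[]): XY {
  const candidates: XY[] = [
    { x: anchor.x + anchor.width + GAP, y: anchor.y }, // right
    { x: anchor.x - anchor.width - GAP, y: anchor.y }, // left
    { x: anchor.x, y: anchor.y + anchor.height + GAP }, // below
  ];
  for (const pos of candidates) {
    const rect = { ...anchor, ...pos };
    if (!existing.some((other) => overlaps(rect, other))) return pos;
  }
  const farRight = Math.max(...existing.map((n) => n.x + n.width));
  return { x: farRight + GAP, y: anchor.y };
}
```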


### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Create from scratch and execute an agent with at least 3 blocks
  - [x] Test adding blocks via drag-and-drop ensures no overlapping
  - [x] Test adding blocks via click positions them intelligently
  - [x] Test viewport animation when adding blocks via click
- [x] Import an agent from file upload, and confirm it executes
correctly
  - [x] Test loading spinner appears when adding agents from "My Agents"
- [x] Verify performance improvements by checking React DevTools for
reduced re-renders
2025-11-19 13:52:18 +00:00
Zamil Majdy
901bb31e14 feat(backend): parameterize activity status generation with customizable prompts (#11407)
## Summary

Implement comprehensive parameterization of the activity status
generation system to enable custom prompts for admin analytics
dashboard.

## Changes Made

### Core Function Enhancement (`activity_status_generator.py`)
- **Extract hardcoded prompts to constants**: `DEFAULT_SYSTEM_PROMPT`
and `DEFAULT_USER_PROMPT`
- **Add prompt parameters**: `system_prompt`, `user_prompt` with
defaults to maintain backward compatibility
- **Template substitution system**: User prompt supports
`{{GRAPH_NAME}}` and `{{EXECUTION_DATA}}` placeholders
- **Skip existing flag**: `skip_existing` parameter allows admin to
force regeneration of existing data
- **Maintain manager compatibility**: All existing calls continue to
work with default parameters

### Admin API Enhancement (`execution_analytics_routes.py`)
- **Custom prompt fields**: `system_prompt` and `user_prompt` optional
fields in `ExecutionAnalyticsRequest`
- **Skip existing control**: `skip_existing` boolean flag for admin
regeneration option
- **Template documentation**: Clear documentation of placeholder system
in field descriptions
- **Backward compatibility**: All existing API calls work unchanged

### Template System Design
- **Simple placeholder replacement**: `{{GRAPH_NAME}}` → actual graph
name, `{{EXECUTION_DATA}}` → JSON execution data
- **No dependencies**: Uses simple `string.replace()` for maximum
compatibility
- **JSON safety**: Execution data properly serialized as indented JSON
- **Validation tested**: Template substitution verified to work
correctly
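
The backend implementation is Python; as an illustration, here is the same substitution logic sketched in TypeScript (the prompt text is an assumption):

```ts
// Illustrative only; the real code lives in activity_status_generator.py.
const DEFAULT_USER_PROMPT =
  "Summarize the run of {{GRAPH_NAME}} given:\n{{EXECUTION_DATA}}"; // assumed text

function renderUserPrompt(
  template: string,
  graphName: string,
  executionData: unknown,
): string {
  // Plain replaceAll keeps the template system dependency-free, mirroring
  // the simple string.replace() approach described above.
  return template
    .replaceAll("{{GRAPH_NAME}}", graphName)
    .replaceAll("{{EXECUTION_DATA}}", JSON.stringify(executionData, null, 2));
}
```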

## Key Features

### For Regular Users (Manager Integration)
- **No changes required**: Existing manager.py calls work unchanged
- **Default behavior preserved**: Same prompts and logic as before
- **Feature flag compatibility**: LaunchDarkly integration unchanged

### For Admin Analytics Dashboard
- **Custom system prompts**: Admins can override the AI evaluation
criteria
- **Custom user prompts**: Admins can modify the analysis instructions
with execution data templates
- **Force regeneration**: `skip_existing=False` allows reprocessing
existing executions with new prompts
- **Complete model list**: Access to all LLM models from `llm.py` (70+
models including GPT, Claude, Gemini, etc.)

## Technical Validation
- Template substitution tested and working
- Default behavior preserved for existing code
- Admin API parameter validation working
- All imports and function signatures correct
- Backward compatibility maintained

## Use Cases Enabled
- **A/B testing**: Compare different prompt strategies on same execution
data
- **Custom evaluation**: Tailor success criteria for specific graph
types
- **Prompt optimization**: Iterate on prompt design based on admin
feedback
- **Bulk reprocessing**: Regenerate activity status with improved
prompts

## Testing
- Template substitution functionality verified
- Function signatures and imports validated
- Code formatting and linting passed
- Backward compatibility confirmed

## Breaking Changes
None - all existing functionality preserved with default parameters.

## Related Issues
Resolves the requirement to expose prompt customization on the frontend
execution analytics dashboard.

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-11-19 13:38:08 +00:00
Swifty
9438817702 fix(platform): Capture Sentry Block Errors Correctly (#11404)
Currently we capture block errors via the scope only; this change
captures the error directly.

### Changes 🏗️

- Capture the error as well as the scope in the executor manager
- Update the block error message to include additional details
- Remove the `__str__` function from `BlockError` as it is no longer needed

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Checked that errors are still captured in dev
2025-11-19 12:21:47 +01:00
Abhimanyu Yadav
184a73de7d fix(frontend): Add custom validator to handle short-text and long-text formats in form renderer (#11395)
The rjsf library was throwing validation errors for our custom format
types `short-text` and `long-text` because these are not standard JSON
Schema formats. This was causing form validation to fail even though
these formats are valid in our application context.

<img width="792" height="85" alt="Screenshot 2025-11-18 at 9 39 08 AM"
src="https://github.com/user-attachments/assets/c75c584f-b991-483c-8779-fc93877028e0"
/>

### Changes 🏗️

- Created a custom validator using `@rjsf/validator-ajv8`'s
`customizeValidator` function
- Added support for `short-text` and `long-text` custom formats that
accept any string value
- Replaced the default validator with our custom validator in the
FormRenderer component
- Disabled strict mode and format validation in AJV options to prevent
validation errors for non-standard formats
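
A minimal sketch of such a validator using `customizeValidator`; the option values mirror the description above:

```ts
import { customizeValidator } from "@rjsf/validator-ajv8";

// Accept any string value for our custom formats, and disable strict
// mode plus format validation so AJV doesn't reject non-standard formats.
const validator = customizeValidator({
  customFormats: {
    "short-text": () => true,
    "long-text": () => true,
  },
  ajvOptionsOverrides: {
    strict: false,
    validateFormats: false,
  },
});

// Usage: pass it to the form instead of the default validator,
// e.g. <Form schema={schema} validator={validator} />.
```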

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Create an agent with input blocks that use short-text format
  - [x] Create an agent with input blocks that use long-text format
  - [x] Execute the agent and verify no validation errors appear
  - [x] Verify that form submission works correctly with both formats
- [x] Test that other standard formats (email, URL, etc.) still work as
expected
2025-11-19 04:51:25 +00:00
Abhimanyu Yadav
1154f86a5c feat(frontend): add inline node title editing with double-click (#11370)
- depends on https://github.com/Significant-Gravitas/AutoGPT/pull/11368

This PR adds the ability to rename nodes directly in the flow editor by
double-clicking on their titles.


https://github.com/user-attachments/assets/1de3fc5c-f859-425e-b4cf-dfb21c3efe3d

### Changes 🏗️

- **Added inline node title editing functionality:**
  - Users can now double-click on any node title to enter edit mode
  - Custom titles are saved on Enter key or blur, and canceled on Escape key (see the sketch after this list)
- Custom node names are persisted in the node's metadata as
`customized_name`
  - Added tooltip to display full title when text is truncated

- **Modified node data handling:**
- Updated `nodeStore` to include `customized_name` in metadata when
converting nodes
- Modified `helper.ts` to pass metadata (including custom titles) to
custom nodes
  - Added metadata property to `CustomNodeData` type

- **UI improvements:**
  - Added hover cursor indication for editable titles
  - Implemented proper focus management during editing
  - Maintained consistent styling between display and edit modes
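
A minimal sketch of the editing interaction described in the first item above; the component shape and prop names are assumptions:

```tsx
import { useRef, useState } from "react";

function EditableTitle({
  title,
  onRename,
}: {
  title: string;
  onRename: (name: string) => void;
}) {
  const [editing, setEditing] = useState(false);
  const [draft, setDraft] = useState(title);
  const cancelled = useRef(false);

  if (!editing) {
    // Double-click enters edit mode; the cursor hints that the title is editable.
    return (
      <span
        className="cursor-text"
        onDoubleClick={() => {
          setDraft(title);
          cancelled.current = false;
          setEditing(true);
        }}
      >
        {title}
      </span>
    );
  }
  return (
    <input
      autoFocus
      value={draft}
      onChange={(e) => setDraft(e.target.value)}
      // Enter commits via blur; Escape flags the edit as cancelled first.
      onKeyDown={(e) => {
        if (e.key === "Enter") e.currentTarget.blur();
        if (e.key === "Escape") {
          cancelled.current = true;
          e.currentTarget.blur();
        }
      }}
      // Blur saves unless cancelled, then leaves edit mode either way.
      onBlur={() => {
        if (!cancelled.current && draft !== title) onRename(draft);
        setEditing(false);
      }}
    />
  );
}
```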

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Double-click on various node types to enter edit mode
  - [x] Type new names and press Enter to save
  - [x] Press Escape to cancel editing and revert to original name
  - [x] Click outside the input field to save changes
  - [x] Verify custom names persist after page refresh
  - [x] Test with long node names to ensure tooltip appears
  - [x] Verify custom names are saved with the graph
- [x] Test editing on all node types (standard, input, output, webhook,
etc.)
2025-11-19 04:51:11 +00:00
Zamil Majdy
73c93cf554 fix(backend): resolve production failures with comprehensive token handling and conversation safety fixes (#11394)
## Summary

Resolves multiple production failures including execution
**6239b448-0434-4687-a42b-9ff0ddf01c1d** where AI Text Generator failed
with `'NoneType' object is not iterable`.

This implements comprehensive fixes addressing both the root cause
(unrealistic token limits) and masking issues (Sentry SDK bug +
conversation history null safety).

## Root Cause Analysis

Three interconnected issues caused production failures:

### 1. Unrealistic Perplexity Token Limits 
- **PERPLEXITY_SONAR**: 127,000 max_output_tokens (equivalent to ~95,000
words!)
- **PERPLEXITY_SONAR_DEEP_RESEARCH**: 128,000 max_output_tokens
- **Problem**: Newsletter generation defaulted to 127K output tokens 
- **Result**: Exceeded OpenRouter's 128K total limit, causing API
failures

### 2. Sentry SDK OpenAI Integration Bug 🐛
- **Location**: `sentry_sdk/integrations/openai.py:157`
- **Bug**: `for choice in response.choices:` failed when `choices=None`
- **Impact**: Masked real token limit errors with confusing TypeError

### 3. Conversation History Null Safety Issues ⚠️
- **Problem**: `get_pending_tool_calls()` expected non-null
conversation_history
- **Impact**: SmartDecisionMaker crashes when conversation_history is
None
- **Pattern**: Common in various LLM block scenarios

## Changes Made

### Fix 1: Realistic Perplexity Token Limits (`backend/blocks/llm.py`)

```python
# Before (PROBLEMATIC)
LlmModel.PERPLEXITY_SONAR: ModelMetadata("open_router", 127000, 127000)
LlmModel.PERPLEXITY_SONAR_DEEP_RESEARCH: ModelMetadata("open_router", 128000, 128000)

# After (FIXED)
LlmModel.PERPLEXITY_SONAR: ModelMetadata("open_router", 127000, 8000)
LlmModel.PERPLEXITY_SONAR_DEEP_RESEARCH: ModelMetadata("open_router", 128000, 16000)
```

**Rationale:**
- **8K tokens** (SONAR): Matches industry standard, sufficient for long
content (6K words)
- **16K tokens** (DEEP_RESEARCH): Higher limit for research, supports
very long content (12K words)
- **Industry pattern**: 3-4% of context window (consistent with other
OpenRouter models)

### Fix 2: Sentry SDK Upgrade (`pyproject.toml`)

- **Upgrade**: `^2.33.2` → `^2.44.0`  
- **Result**: OpenAI integration bug fixed in SDK (no code changes
needed)

### Fix 3: Conversation History Null Safety
(`backend/blocks/smart_decision_maker.py`)

```python
# Before
def get_pending_tool_calls(conversation_history: list[Any]) -> dict[str, int]:

# After  
def get_pending_tool_calls(conversation_history: list[Any] | None) -> dict[str, int]:
    if not conversation_history:
        return {}
```

- **Added**: Proper null checking for conversation_history parameter
- **Prevents**: `'NoneType' object is not iterable` errors
- **Impact**: Improves SmartDecisionMaker reliability across all
scenarios

## Impact & Benefits

### 🎯 Production Reliability
- **Prevents token limit errors** for realistic content generation
- **Clear error handling** without masked Sentry TypeError crashes
- **Better conversation safety** with proper null checking
- **Multiple failure scenarios resolved** comprehensively

### 📈 User Experience  
- **Faster responses** (reasonable output lengths)
- **Lower costs** (more focused content generation)
- **More stable workflows** with better error handling
- **Maintains flexibility** - users can override with explicit
`max_tokens`

### 🔧 Technical Improvements
- **Follows industry standards** - aligns with other OpenRouter models
- **Breaking change risk: LOW** - users can override if needed
- **Root cause resolution** - fixes error chain at source
- **Defensive programming** - better null safety patterns

## Validation

### Industry Analysis 
- Large context models typically use 8K-16K output limits (not 127K)
- Newsletter generation needs 650-10K tokens typically, not 127K tokens
- Pattern analysis of 13 OpenRouter models confirms 3-4% context ratio

### Production Testing 
- **Before**: Newsletter generation → 127K tokens → API failure → Sentry
crash
- **After**: Newsletter generation → 8K tokens → successful completion
- **Error handling**: Clear token limit errors instead of confusing
TypeErrors
- **Null safety**: Conversation history None/undefined handled
gracefully

### Dependencies 
- **Sentry SDK**: Confirmed 2.44.0 fixes OpenAI integration crashes
- **Poetry lock**: All dependencies updated successfully
- **Backward compatibility**: Maintained for existing workflows

## Related Issues

- Fixes flowExecutionID **6239b448-0434-4687-a42b-9ff0ddf01c1d** 
- Resolves AI Text Generator reliability issues
- Improves overall platform token handling and conversation safety
- Addresses multiple production failure patterns comprehensively

## Breaking Changes Assessment

**Risk Level**: 🟡 **LOW-MEDIUM**

- **Perplexity limits**: Users relying on 127K+ output would be limited
(likely unintentional usage)
- **Override available**: Users can explicitly set `max_tokens` for
custom limits
- **Conversation safety**: Only improves reliability, no breaking
changes
- **Most use cases**: Unaffected or improved by realistic defaults

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-11-18 22:32:58 +00:00
Zamil Majdy
02757d68f3 fix(backend): resolve marketplace agent access in get_graph_execution endpoint (#11396)
## Summary
Fixes a critical issue where `GET
/graphs/{graph_id}/executions/{graph_exec_id}` failed for marketplace
agents with "Graph not found" errors due to incorrect version access
checking.

## Root Cause
The endpoint was checking access to the **latest version** of a graph
instead of the **specific version used in the execution**. This broke
marketplace agents when:

1. User executes a marketplace agent (e.g., v3)
2. Graph owner later publishes a new version (e.g., v4) 
3. User tries to view execution details
4. **BUG**: Code checked access to latest version (v4) instead of
execution version (v3)
5. If v4 wasn't published to marketplace → access denied → "Graph not
found"

## Original Problematic Code
```python
# routers/v1.py - get_graph_execution (WRONG ORDER)
graph = await graph_db.get_graph(graph_id=graph_id, user_id=user_id)  #  Uses LATEST version
if not graph:
    raise HTTPException(404, f"Graph #{graph_id} not found")

result = await execution_db.get_graph_execution(...)  # Gets execution data
```

## Solution
**Reordered operations** to check access against the **execution's
specific version**:

```python
# NEW CODE (CORRECT ORDER)
result = await execution_db.get_graph_execution(...)  #  Get execution FIRST

if not await graph_db.get_graph(
    graph_id=result.graph_id,
    version=result.graph_version,  #  Use execution's version, not latest!
    user_id=user_id,
):
    raise HTTPException(404, f"Graph #{graph_id} not found")
```

### Key Changes Made

1. **Fixed version access logic** (routers/v1.py:1075-1095):
   - Reordered operations to get execution data first 
   - Check access using `result.graph_version` instead of latest version
   - Applied same fix to external API routes

2. **Enhanced `get_graph()` marketplace fallback**
(data/graph.py:919-935):
   - Added proper marketplace lookup when user doesn't own the graph
   - Supports version-specific marketplace access checking
   - Maintains security by only allowing approved, non-deleted listings

3. **Activity status generator fix**
(activity_status_generator.py:139-144):
   - Use `skip_access_check=True` for internal system operations

4. **Missing block handling** (data/graph.py:94-103):
- Added `_UnknownBlockBase` placeholder for graceful handling of deleted
blocks

## Example Scenario Fixed
1. **User**: Installs marketplace agent "Blog Writer" v3
2. **Owner**: Later publishes v4 (not to marketplace yet)  
3. **User**: Runs the agent (executes v3)
4. **Before**: Viewing execution details fails because code checked v4
access
5. **After**: Viewing execution details works because code checks v3
access

## Impact
- **Marketplace agents work correctly**: Users can view execution
details for any marketplace agent version they've used
- **Backward compatibility**: Existing owned graphs continue working
- **Security maintained**: Only allows access to versions user
legitimately executed
- **Version-aware access control**: Proper access checking for
specific versions, not just latest

## Testing
- [x] Marketplace agents: Execution details now accessible for all
executed versions
- [x] Owned graphs: Continue working as before
- [x] Version scenarios: Access control works correctly for specific
versions
- [x] Missing blocks: Graceful handling without errors

**Root issue resolved**: Version mismatch between execution version and
access check version that was breaking marketplace agent execution
viewing.

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-11-18 15:44:10 +00:00
Reinier van der Leer
2569576d78 fix(frontend/builder): Fix save-and-run with no agent inputs (#11401)
- Resolves #11390

This unbreaks the last step of the Builder tutorial :)

### Changes 🏗️

- Give `isSaving` time to propagate before calling dependent callback
`saveAndRun`

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Run through Builder tutorial; Run (with implicit save) should work
at once
2025-11-18 12:49:37 +00:00
Ubbe
3b34c04a7a fix(frontend): logout console issues (#11400)
## Changes 🏗️

Fixed the logout errors by removing duplicate redirects. `serverLogout`
was calling `redirect("/login")` (which throws `NEXT_REDIRECT`), and
then `useSupabaseStore` was also calling `router.refresh()`, causing
conflicts.

Updated `serverLogout` to return a result object instead of redirecting,
and moved the redirect to the client using `router.push("/login")` after
logout completes. This removes the `NEXT_REDIRECT` error and ensures a
single redirect.
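
A sketch of the reworked flow; the function shapes and the auth client are assumptions:

```ts
// Assumed auth client shape, for illustration.
declare const supabase: { auth: { signOut(): Promise<{ error: unknown }> } };

// Server action: report the outcome instead of calling redirect("/login"),
// which throws NEXT_REDIRECT and clashed with the client-side refresh.
export async function serverLogout(): Promise<{ success: boolean }> {
  const { error } = await supabase.auth.signOut();
  return { success: !error };
}

// Client: a single redirect, issued only after logout completes.
export async function handleLogout(router: { push: (path: string) => void }) {
  const { success } = await serverLogout();
  if (success) router.push("/login");
}
```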

<img width="800" height="706" alt="Screenshot 2025-11-18 at 16 14 54"
src="https://github.com/user-attachments/assets/38e0e55c-f48d-4b25-a07b-d4729e229c70"
/>

Also addressed 401 errors during logout. Hooks like `useCredits` were
still making API calls after logout, causing "Authorization header is
missing" errors. Added a check in `_makeClientRequest` to detect
logout-in-progress and suppress authentication errors during that
window. This prevents console noise and avoids unnecessary error
handling.
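
A sketch of the guard; the flag and wrapper names are assumptions:

```ts
let logoutInProgress = false; // assumed flag, set when logout starts

async function makeClientRequest(input: RequestInfo, init?: RequestInit) {
  const response = await fetch(input, init);
  // Swallow auth errors while a logout is in flight, so hooks that fire
  // one last request after the session is gone don't spam the console.
  if (response.status === 401 && logoutInProgress) {
    return null;
  }
  if (!response.ok) throw new Error(`Request failed: ${response.status}`);
  return response;
}
```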

<img width="800" height="742" alt="Screenshot 2025-11-18 at 16 14 45"
src="https://github.com/user-attachments/assets/6fb2270a-97a0-4411-9e5a-9b4b52117af3"
/>


## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Log out of your account
  - [x] There are no errors showing up on the browser devtools
2025-11-18 16:41:51 +07:00
Abhimanyu Yadav
34c9ecf6bc refactor(frontend): improve customNode reusability and add multi-block support (#11368)
This refactor improves developer experience (DX) by creating a more
maintainable and extensible architecture.

The previous `CustomNode` implementation had several issues:
- Code was duplicated across different node types (StandardNodeBlock,
OutputBlock, etc.)
- Poor separation of concerns with all logic in a single component
- Limited flexibility for handling different block types
- Inconsistent handle display logic across different node types

<img width="2133" height="831" alt="Screenshot 2025-11-12 at 9 25 10 PM"
src="https://github.com/user-attachments/assets/02864bba-9ffe-4629-98ab-1c43fa644844"
/>

## Changes 🏗️

- **Refactored CustomNode structure**:
- Extracted reusable components:
[`NodeContainer`](file:///Users/abhi/Documents/AutoGPT/autogpt_platform/frontend/src/app/(platform)/build/components/FlowEditor/nodes/CustomNode/components/NodeContainer.tsx),
[`NodeHeader`](file:///Users/abhi/Documents/AutoGPT/autogpt_platform/frontend/src/app/(platform)/build/components/FlowEditor/nodes/CustomNode/components/NodeHeader.tsx),
[`NodeAdvancedToggle`](file:///Users/abhi/Documents/AutoGPT/autogpt_platform/frontend/src/app/(platform)/build/components/FlowEditor/nodes/CustomNode/components/NodeAdvancedToggle.tsx),
[`WebhookDisclaimer`](file:///Users/abhi/Documents/AutoGPT/autogpt_platform/frontend/src/app/(platform)/build/components/FlowEditor/nodes/CustomNode/components/WebhookDisclaimer.tsx)
- Removed `StandardNodeBlock.tsx` and consolidated logic into
[`CustomNode.tsx`](file:///Users/abhi/Documents/AutoGPT/autogpt_platform/frontend/src/app/(platform)/build/components/FlowEditor/nodes/CustomNode/CustomNode.tsx)
- Moved
[`StickyNoteBlock`](file:///Users/abhi/Documents/AutoGPT/autogpt_platform/frontend/src/app/(platform)/build/components/FlowEditor/nodes/CustomNode/components/StickyNoteBlock.tsx)
to components folder for better organization

- **Added BlockUIType-specific logic**:
- Implemented conditional handle display based on block type (INPUT,
WEBHOOK, WEBHOOK_MANUAL blocks don't show handles)
- Added special handling for AGENT blocks with dynamic input/output
schemas
- Added webhook-specific disclaimer component with library agent
integration
  - Fixed OUTPUT block's name field to not show input handle

- **Enhanced FormCreator**:
  - Added `showHandles` prop for granular control
- Added `className` prop for styling flexibility (used for webhook
opacity)

- **Improved nodeStore**:
  - Added `getNodeBlockUIType` method for retrieving node UI types

- **UI/UX improvements**:
- Fixed duplicate gap classes in
[`BuilderActions`](file:///Users/abhi/Documents/AutoGPT/autogpt_platform/frontend/src/app/(platform)/build/components/BuilderActions/BuilderActions.tsx)
- Added proper styling for webhook blocks (disabled state with reduced
opacity)
  - Improved field template spacing for specific block types

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Create and test a standard node block with input/output handles
  - [x] Create and test INPUT block (verify no input handles)
  - [x] Create and test OUTPUT block (verify name field has no handle)
- [x] Create and test WEBHOOK block (verify disclaimer appears and form
is disabled)
  - [x] Create and test AGENT block with custom schemas
  - [x] Create and test sticky note block
  - [x] Verify advanced toggle works for all node types
  - [x] Test node execution badges display correctly
  - [x] Verify node selection highlighting works
2025-11-18 03:11:32 +00:00
Swifty
a66219fc1f fix(platform): Remove un-runnable agents from schedule (#11374)
Currently, when an agent fails validation during a scheduled run, we
raise an error and then try again, regardless of the cause.

This change removes the agent's schedule and notifies the user.

### Changes 🏗️

- add schedule_id to the GraphExecutionJobArgs
- add agent_name to the GraphExecutionJobArgs
- Delete schedule on GraphValidationError
- Notify the user with a message that includes the agent name

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] I have ensured the scheduler tests work with these changes
2025-11-17 15:24:40 +00:00
Bently
8b3a741f60 refactor(turnstile): Remove turnstile (#11387)
This PR removes Turnstile from the platform.

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Test to make sure that Turnstile is gone (it is)
  - [x] Test logging in without Turnstile to make sure it still works
  - [x] Test registering a new account without Turnstile and verify it works
2025-11-17 15:14:31 +00:00
Bently
7c48598f44 fix(frontend): Replace question mark icon with "Give Feedback" text button (#11381)
## Summary
- Replaced the question mark icon with explicit "Give Feedback" text in
the feedback button
- Applied consistent styling to match the "Tutorial" button
- Removed QuestionMarkCircledIcon dependency from TallyPopup component

## Motivation
Users reported not knowing what the question mark icon was for, which
prevented them from discovering the feedback feature. Making the button
text-based and explicit removes this confusion.

## Changes
- Removed `QuestionMarkCircledIcon` import and icon element
- Changed button to display only "Give Feedback" text
- Added consistent styling (height, rounded corners, background color)
to match Tutorial button
- Button text can wrap to two lines if needed for better readability

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
- [x] Check the UI to see that the question mark on the tally button has
been replaced with "Give Feedback"


Before
<img width="618" height="198" alt="image"
src="https://github.com/user-attachments/assets/0d4803eb-9a05-4a43-aaff-cc43b6d0cda4"
/>


After
<img width="298" height="126" alt="image"
src="https://github.com/user-attachments/assets/c1e1c3b5-94b4-4ad9-87e9-a0feca1143e3"
/>

---------

Co-authored-by: Abhimanyu Yadav <122007096+Abhi1992002@users.noreply.github.com>
2025-11-17 12:14:05 +00:00
Bently
804e3b403a feat(frontend): add mobile warning banner to login and signup pages (#11383)
## Summary
Adds a non-blocking warning banner to Login and Sign Up pages that
alerts mobile users about potential limitations in the mobile
experience.

## Changes
- Created `MobileWarningBanner` component in `src/components/auth/`
- Integrated banner into Login page (`/login`)
- Integrated banner into Sign Up page (`/signup`)
- Banner displays only on mobile devices (viewports < 768px)
- Uses existing `useBreakpoint` hook for responsive detection

## Design Details
- **Position**: Appears below the login/signup card (after the bottom
"Sign up"/"Log in" links)
- **Style**: Amber-themed warning banner with DeviceMobile icon
- **Message**: 
  - Title: "Heads up: AutoGPT works best on desktop"
- Description: "Some features may be limited on mobile. For the best
experience, consider switching to a desktop."
- **Behavior**: Non-blocking, no user interaction required
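
A sketch of the banner gating; the shape of the existing `useBreakpoint` hook is an assumption:

```tsx
// Assumed shape of the existing useBreakpoint hook (true below 768px).
declare function useBreakpoint(): { isMobile: boolean };

export function MobileWarningBanner() {
  const { isMobile } = useBreakpoint();
  if (!isMobile) return null; // desktop viewports render nothing

  return (
    <div role="status" className="rounded-md bg-amber-50 p-3 text-amber-900">
      <p className="font-medium">Heads up: AutoGPT works best on desktop</p>
      <p>
        Some features may be limited on mobile. For the best experience,
        consider switching to a desktop.
      </p>
    </div>
  );
}
```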

<img width="342" height="81" alt="image"
src="https://github.com/user-attachments/assets/b6584299-b388-4d8d-b951-02bd95915566"
/>

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
	- [x] Verified banner appears on mobile viewports (< 768px)
	- [x] Verified banner is hidden on desktop viewports (≥ 768px)
	- [x] Tested on Login page
	- [x] Tested on Sign Up page

<img width="342" height="758" alt="image"
src="https://github.com/user-attachments/assets/077b3e0a-ab9c-41c7-83b7-7ee80a3396fd"
/>

<img width="342" height="759" alt="image"
src="https://github.com/user-attachments/assets/77a64b28-748b-4d97-bd7c-67c55e5e9f22"
/>

---------

Co-authored-by: Abhimanyu Yadav <122007096+Abhi1992002@users.noreply.github.com>
2025-11-17 11:31:59 +00:00
Lluis Agusti
9c3f679f30 fix(frontend): simplify login redirect 2025-11-15 01:14:22 +07:00
Lluis Agusti
9977144b3d fix(frontend): onboarding fixes... (2) 2025-11-15 00:58:13 +07:00
Lluis Agusti
81d61a0c94 fix(frontend): redirects improvements... 2025-11-15 00:31:42 +07:00
Ubbe
e1e0fb7b25 fix(frontend): post login/signup/onboarding redirect clash (#11382)
## Changes 🏗️

### Issue 1: login/signup redirect conflict

There are 2 hooks, both on the login and signup pages, that attempt to
call `router.push` once a user logs in or is created.

The main offender seems to be this hook:
```tsx
  useEffect(() => {
    if (user) router.push("/");
  }, [user]);
```
Which is in place on both pages to prevent logged-in users from
accessing `/login` or `/signup`. What happens is when a user signs up or
logs in, if they need onboarding, there is a `router.push` down the line
to redirect them there, which conflicts with the one done in this hook.
 
**Solution**

I moved the logic from that hook to `middleware.ts`, which is a
better place for it. It no longer conflicts with the onboarding
redirects done on those pages.
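
A sketch of what the middleware guard could look like; the session-cookie check is an assumption and depends on the actual auth setup:

```ts
import { NextResponse, type NextRequest } from "next/server";

export function middleware(request: NextRequest) {
  // Assumed: a session cookie indicates a logged-in user.
  const isLoggedIn = Boolean(request.cookies.get("sb-access-token"));
  const { pathname } = request.nextUrl;

  // Keep logged-in users out of /login and /signup. Because this runs on
  // the server before the page renders, it no longer races against the
  // onboarding redirects pushed from those pages.
  if (isLoggedIn && (pathname === "/login" || pathname === "/signup")) {
    return NextResponse.redirect(new URL("/", request.url));
  }
  return NextResponse.next();
}

export const config = { matcher: ["/login", "/signup"] };
```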

### Issue 2: onboarding server redirects

Potential race condition: both the server component and the client
`<OnboardingProvider />` perform redirects. The server component
redirects happen first, but if onboarding state changes after mount, the
provider can redirect again, causing rapid mount/unmount cycles.

**Solution**

Centralize all onboarding redirects in `/onboarding`, which is now a
client component that performs client-side redirects only, displaying
a spinner while it does so.

## Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Tested locally login/logout/signup and trying to access `/login`
and `/signup` being logged in
2025-11-14 15:30:58 +01:00
Reinier van der Leer
a054740aac dx(frontend): Fix source map upload configuration (#11378)
Source maps aren't being uploaded to Sentry, so debugging errors in
production is really hard.

### Changes 🏗️

- Fix config so source maps are found and uploaded to Sentry
- Disable deleting source maps after upload (so they are available in
the browser)

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Tested locally
2025-11-13 17:23:37 +01:00
Abhimanyu Yadav
f78a6df96c feat(frontend): add static style and beads in custom edge (#11364)
<!-- Clearly explain the need for these changes: -->
This PR enhances the visual feedback in the flow editor by adding
animated "beads" that travel along edges during execution. This provides
users with clear, real-time visualization of data flow and execution
progress through the graph, making it easier to understand which
connections are active and track execution state.


https://github.com/user-attachments/assets/df4a4650-8192-403f-a200-15f6af95e384

### Changes 🏗️

<!-- Concisely describe all of the changes made in this pull request:
-->
- **Added new edge data types and structure:**
- Added `CustomEdgeData` type with `isStatic`, `beadUp`, `beadDown`, and
`beadData` properties
  - Created `CustomEdge` type extending XYEdge with custom data (see the type sketch after this list)
  
- **Implemented bead animation components:**
- Added `JSBeads.tsx` - JavaScript-based animation component with
real-time updates
- Added `SVGBeads.tsx` - SVG-based animation component (for future
consideration)
  - Added helper functions for path calculations and bead positioning
  
- **Updated edge rendering:**
  - Modified `CustomEdge` component to display beads during execution
  - Added static edge styling with dashed lines (`stroke-dasharray: 6`)
- Improved visual hierarchy with different stroke styles for
selected/unselected states
  
- **Refactored edge management:**
- Converted `edgeStore` from using `Connection` type to `CustomEdge`
type
- Added `updateEdgeBeads` and `resetEdgeBeads` methods for bead state
management
  - Updated `copyPasteStore` to work with new edge structure
  
- **Added support for static outputs:**
  - Added `staticOutput` property to `CustomNodeData`
- Static edges show continuous bead animation while regular edges show
one-time animation
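
The new edge data shape, sketched from the description above; the base `Edge` type comes from `@xyflow/react` (presumably aliased as `XYEdge`), and the field types are assumptions:

```ts
import type { Edge } from "@xyflow/react";

// Per-edge execution state driving the bead animation.
type CustomEdgeData = {
  isStatic?: boolean; // static edges render dashed and animate continuously
  beadUp?: number; // incremented when an execution starts on this edge
  beadDown?: number; // incremented when an execution completes
  beadData?: unknown[]; // payloads currently travelling along the edge
};

type CustomEdge = Edge<CustomEdgeData>;
```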

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Create a flow with multiple blocks and verify beads animate along
edges during execution
- [x] Test that beads increment when execution starts (`beadUp`) and
decrement when completed (`beadDown`)
- [x] Verify static edges display with dashed lines and continuous
animation
  - [x] Confirm copy/paste operations preserve edge data and bead states
  - [x] Test edge animations performance with complex graphs (10+ nodes)
  - [x] Verify bead animations complete properly before disappearing
- [x] Test that multiple beads can animate on the same edge for
concurrent executions
- [x] Verify edge selection/deletion still works with new visualization
- [x] Test that bead state resets properly when starting new executions
2025-11-13 15:36:40 +00:00
Reinier van der Leer
4d43570552 Merge branch 'master' into dev 2025-11-13 12:44:43 +01:00
Bently
d3d78660da fix(frontend/builder): Generate unique IDs for copied blocks (#11344)
## Changes 🏗️

- Clear backend_id when pasting blocks to prevent duplicate ID errors
- Add copy/paste functionality to new FlowEditor
- Ensure pasted blocks use newly generated UUIDs when saving

Fixes an issue where copying and pasting blocks would fail with a
'Unique constraint failed' error because the old backend_id was being
reused instead of the new node ID.
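
A sketch of the paste-time fix; the node shape and field names are assumptions:

```ts
type PastedNode = {
  id: string;
  data: { backend_id?: string; [key: string]: unknown };
};

// Give every pasted node a fresh ID and drop the stale backend_id, so the
// backend treats it as a brand-new node instead of a duplicate.
function preparePastedNodes(copied: PastedNode[]): PastedNode[] {
  return copied.map((node) => ({
    ...node,
    id: crypto.randomUUID(),
    data: { ...node.data, backend_id: undefined },
  }));
}
```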

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Add any block to the builder, save and run the graph, and wait for
it to finish; then copy and paste the first block, try to use it, and
verify it works without any issues/errors



https://github.com/user-attachments/assets/c24f9a9a-8e4f-4988-8731-cddc34a0da13

---------

Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
2025-11-13 09:48:49 +00:00
Reinier van der Leer
749be06599 fix(blocks/ai): Make AI List Generator block more reliable (#11317)
- Resolves #11305

### Changes 🏗️

Make `AIListGeneratorBlock` more reliable:
- Leverage `AIStructuredResponseGenerator`'s robust
prompt/retry/validate logic
  - Use JSON format instead of Python list format
  - Add `force_json_output` toggle
- Fix output instructions in prompt (only string values allowed)

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Works without `force_json_output`
  - [x] Works with `force_json_output`
  - [x] Retry mechanism works as intended
2025-11-13 09:46:23 +00:00
Krzysztof Czerwinski
27d886f05c feat(platform): WebSocket Onboarding notifications (#11335)
Use WebSocket notifications from the backend to display confetti.

### Changes 🏗️

- Send WebSocket notifications to the browser when new onboarding steps
are completed
- Handle WebSocket notification events in the Wallet and use them
instead of frontend-based logic to play confetti (fixes confetti
appearing on every refresh; see the sketch after this list)
- Scroll to newly completed tasks when wallet opens just before confetti
plays
- Fix: make `Run again` button complete `RE_RUN_AGENT` task
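
A sketch of the client-side handling referenced above; the message shape and handler names are assumptions:

```ts
// Assumed message shape sent by the backend when a step completes.
type OnboardingNotification = {
  type: "onboarding_step_completed";
  step: string; // e.g. "RE_RUN_AGENT"
};

function handleWalletMessage(
  raw: string,
  playConfetti: (step: string) => void,
) {
  const msg = JSON.parse(raw) as OnboardingNotification;
  // Confetti fires only on a real completion event from the backend,
  // never on a plain page refresh.
  if (msg.type === "onboarding_step_completed") playConfetti(msg.step);
}
```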

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Confetti are displayed when previously uncompleted tasks are
completed
  - [x] Confetti do not appear on page refresh
  - [x] Wallet scrolls on open before confetti is displayed
  - [x] `Run again` button completes `RE_RUN_AGENT` task
2025-11-13 09:45:13 +00:00
Abhimanyu Yadav
be01a1316a fix(tests/frontend): resolve flaky test due to duplicate heading elements on marketplace page (#11369)
This PR fixes a flaky test issue in the signup flow where Playwright's
strict mode was failing due to duplicate heading elements on the
marketplace page.

### Problem
The test was failing intermittently with the following error:

```
Error: strict mode violation: getByText('Bringing you AI agents designed by thinkers from around the world') resolved to 2 elements
```

This occurred because the marketplace page contains two identical `<h3>`
elements with the same text, causing Playwright's strict mode to throw
an error when trying to select a single element.


### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run E2E tests locally multiple times to ensure no flakiness
  - [x] Check CI pipeline runs successfully
2025-11-13 09:11:55 +00:00
Bently
32bb6705d1 fix(frontend/library): fix schedule display issues for recurring schedules (#11362)
Fixes two related bugs in the agent scheduling UI that caused confusion
for users setting up recurring schedules:

1. **"on day nan of every month" display bug**: When scheduling an agent
to repeat every N days (e.g., "every 2 days"), the schedule info panel
incorrectly displayed "on day nan of every month" instead of the correct
"Every N days at HH:MM" format.

2. **Confusing time picker for hourly intervals**: When setting up a
schedule with "every N hours", the UI displayed a time picker labeled
"at 9 o'clock" which was confusing because the time setting is ignored
for hourly intervals. Users were unclear about what this setting meant
or if it had any effect.

### Changes 🏗️

**Fixed `humanizeCronExpression` function**
(`autogpt_platform/frontend/src/lib/cron-expression-utils.ts`):
- Reordered cron expression parsing logic to handle day intervals
(`*/N`) before monthly checks
- Added `!dayOfMonth.startsWith("*/")` guard to monthly and yearly
checks to prevent misinterpreting day intervals as monthly day lists
- This ensures expressions like `0 9 */2 * *` (every 2 days at 9:00) are
correctly displayed as "Every 2 days at 09:00" instead of "on day nan of
every month"

**Updated `CronScheduler` component**
(`autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/AgentRunsView/components/ScheduleAgentModal/components/CronScheduler/CronScheduler.tsx`):
- Hide `TimeAt` component for custom intervals with unit "hours" (time
is ignored for hourly intervals)
- Pass context-aware label to `TimeAt`: "Starting at" for custom day
intervals, "At" for other frequencies
- This clarifies that the time setting is the starting time for day
intervals and removes confusion for hourly intervals

**Enhanced `TimeAt` component**
(`autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/AgentRunsView/components/ScheduleAgentModal/components/CronScheduler/TimeAt.tsx`):
- Added optional `label` prop (defaults to "At") to allow context-aware
labeling
- Component now displays "Starting at" when used with custom day
intervals for better clarity

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
- [x] Schedule an agent with "Custom" frequency, "Every 2 days" interval
- verify it displays as "Every 2 days at HH:MM" in the schedule info
panel (not "on day nan of every month")
- [x] Schedule an agent with "Monthly" frequency - verify it displays
correctly (e.g., "On day 1, 15 of every month at HH:MM")


<img width="845" height="388" alt="image"
src="https://github.com/user-attachments/assets/02ed0b73-bf5e-48fd-a7b0-6f4d4687eb13"
/>
<img width="839" height="374" alt="image"
src="https://github.com/user-attachments/assets/be62eee2-3fdd-4b20-aecf-669c3c6c6fb2"
/>
2025-11-13 08:13:21 +00:00
Reinier van der Leer
536e2a5ec8 fix(blocks): Make Smart Decision Maker tool pin handling consistent and reliable (#11363)
- Resolves #11345

### Changes 🏗️

- Move tool use routing logic from frontend to backend: routing info was
being baked into graph links by the frontend, inconsistently, causing
issues
- Rework tool use routing to use target node ID instead of target block
name
- Add a bit of magic to `NodeOutputs` component to show tool node title
instead of ID

DX:
- Removed `build` from `.prettierignore` -> re-enable formatting for
builder components

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Use SDM block in a graph; verify it works
  - [x] Use SDM block with agent executor block as tool; verify it works
  - Tests for `parse_execution_output` pass (checked by CI)
2025-11-12 18:55:38 +01:00
Abhimanyu Yadav
a3e5f7fce2 fix(frontend): consolidate graph save logic to prevent duplicate event listeners (#11367)
## Summary

This PR fixes an issue where multiple keyboard save event listeners were
being registered when the same save hook was used in multiple
components, causing the graph to be saved multiple times (3x) when using
Ctrl/Cmd+S.

## Changes

- **Created a centralized `useSaveGraph` hook** in
`/hooks/useSaveGraph.ts` that encapsulates all graph saving logic
- **Refactored `useNewSaveControl`** to use the new centralized hook
instead of duplicating save logic
- **Updated `useRunGraph` and `useScheduleGraph`** to use the
centralized `useSaveGraph` hook directly
- **Simplified the save control component** by removing redundant logic
and using cleaner naming conventions

## Problem

The previous implementation had the save logic duplicated in
`useNewSaveControl`, and when this hook was used in multiple places
(NewSaveControl component, RunGraph, ScheduleGraph), each instance would
register its own keyboard event listener for Ctrl/Cmd+S. This caused:
- Multiple save requests being sent simultaneously
- "Unique constraint failed on the fields: ('id', 'version')" errors
from the backend
- Poor performance due to unnecessary re-renders

## Solution

By centralizing the save logic in a dedicated `useSaveGraph` hook:
- Save logic is now in one place, making it easier to maintain
- Components can use the save functionality without registering
duplicate event listeners
- The keyboard shortcut listener is only registered once in the
`useNewSaveControl` hook
- Other components (RunGraph, ScheduleGraph) can call `saveGraph`
directly without side effects
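
A sketch of the resulting shape (hook names are from this PR; the bodies are simplified assumptions):

```tsx
import { useCallback, useEffect } from "react";

// Single source of truth for saving; safe to call from any component.
export function useSaveGraph() {
  const saveGraph = useCallback(async () => {
    // ...serialize nodes/edges and send exactly one save request
  }, []);
  return { saveGraph };
}

// Only this hook registers the Ctrl/Cmd+S listener, so the shortcut
// triggers one save no matter how many consumers use useSaveGraph().
export function useNewSaveControl() {
  const { saveGraph } = useSaveGraph();
  useEffect(() => {
    const onKeyDown = (e: KeyboardEvent) => {
      if ((e.metaKey || e.ctrlKey) && e.key.toLowerCase() === "s") {
        e.preventDefault();
        void saveGraph();
      }
    };
    window.addEventListener("keydown", onKeyDown);
    return () => window.removeEventListener("keydown", onKeyDown);
  }, [saveGraph]);
  return { saveGraph };
}
```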

## Testing
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Verified Ctrl/Cmd+S saves the graph only once
  - [x] Tested save functionality from Save Control popup
- [x] Confirmed Run Graph and Schedule Graph still save before execution
  - [x] Verified no duplicate save requests in network tab
  - [x] Checked that save toast notifications appear correctly
2025-11-12 14:16:30 +00:00
Swifty
d674cb80e2 refactor(backend): Improve Block Error Handling (#11366)
We need a way to differentiate between serious errors that trigger on-call
alerts and block errors.
This PR addresses this need by ensuring all errors that occur during
execution of a block are of subtype `BlockError`.

### Changes 🏗️

- Introduced `BlockError` and its subtypes
- Updated current errors that are emitted by blocks to use `BlockError`
- Updated the executor manager to wrap errors emitted while running a
block that are not of type `BlockError` in `BlockUnknownError`

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] checked tests still work
  - [x] Ensured block error message is readable and useful
2025-11-12 13:58:42 +00:00
Abhimanyu Yadav
c3a6235cee feat(frontend): integrate drag-and-drop functionality in new builder (#11341)
In this PR, I’ve added drag-and-drop functionality to the new builder
using built-in HTML drag-and-drop.



https://github.com/user-attachments/assets/b27c281e-6216-4131-9a89-e10b0dd56a8f

### Changes

- Added ReactFlowProvider to manage flow state in BuilderPage and Flow
components.
- Implemented drag-and-drop support for blocks in the NewControlPanel,
allowing users to drag blocks from the menu and drop them onto the
canvas.
- Enhanced the Block component to handle drag events and provide visual
feedback during dragging.
- Updated useFlow hook to include onDragOver and onDrop handlers for
managing block placement.
- Adjusted nodeStore to accept position parameters for added blocks,
improving placement accuracy.
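
Roughly, the handlers look like this (a sketch assuming the `@xyflow/react` API; the data-transfer key and `addBlock` action are illustrative, not the actual code):

```tsx
import type { DragEvent } from "react";
import { useReactFlow, type XYPosition } from "@xyflow/react";

// Assumed nodeStore action: adds a block node at a canvas position.
declare function addBlock(blockId: string, position: XYPosition): void;

export function useDragAndDrop() {
  const { screenToFlowPosition } = useReactFlow();

  const onDragStart = (e: DragEvent, blockId: string) => {
    // Tag the payload so onDrop knows which block was picked up.
    e.dataTransfer.setData("application/x-block-id", blockId);
    e.dataTransfer.effectAllowed = "move";
  };

  const onDragOver = (e: DragEvent) => {
    e.preventDefault(); // required, or the browser rejects the drop
    e.dataTransfer.dropEffect = "move";
  };

  const onDrop = (e: DragEvent) => {
    e.preventDefault();
    const blockId = e.dataTransfer.getData("application/x-block-id");
    // Convert viewport coordinates into canvas coordinates.
    addBlock(blockId, screenToFlowPosition({ x: e.clientX, y: e.clientY }));
  };

  return { onDragStart, onDragOver, onDrop };
}
```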

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] I’ve tried dragging and dropping multiple blocks, and it works
perfectly as shown in the video.
2025-11-11 06:05:06 +00:00
Abhimanyu Yadav
43638defa2 feat(frontend): add context menu in custom node (#11342)
- depends on https://github.com/Significant-Gravitas/AutoGPT/pull/11339

In this PR, I’ve added a dropdown menu to the custom node. This allows
you to delete a node, copy a node, and if the node is an agent node, you
can also navigate to that specific agent.

<img width="633" height="403" alt="Screenshot 2025-11-08 at 7 24 38 PM"
src="https://github.com/user-attachments/assets/89dd2906-95f5-40a5-82d1-de05075e4f30"
/>

### Changes
- Added context menu to custom nodes with copy, delete, and open agent
options
- Added performance optimization with memo for custom edge

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] All three buttons are working perfectly.
  - [x] The “Go to graph” option is only visible in the sub-graph node.
2025-11-11 06:04:56 +00:00
Abhimanyu Yadav
9371528aab feat(frontend): add copy paste functionality in new builder (#11339)
In the new builder, I’ve added a copy-paste functionality using the
keyboard.


https://github.com/user-attachments/assets/3106ae86-3f47-4807-a598-9c0b166eaae9

### Changes 🏗️

- Added useCopyPasteKeyboard hook for handling keyboard shortcuts
- Created new copyPasteStore for state management
- Implemented performance optimizations (memo on CustomEdge)
- Updated nodeStore and edgeStore to support the functionality

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] The copy-paste functionality is working correctly as you can see
in the video.
2025-11-10 15:39:53 +00:00
Zamil Majdy
711c439642 fix(frontend): enable admin impersonation for server-side rendered requests (#11343)
## Summary
Fix admin impersonation not working for graph execution requests that
are server-side rendered.

## Problem
- Build page uses SSR, so API calls go through _makeServerRequest
instead of _makeClientRequest
- Server-side requests cannot access sessionStorage where impersonation
ID is stored
- Graph execution requests were missing X-Act-As-User-Id header

## Simple Solution
1. **Store impersonation in cookie** (useAdminImpersonation.ts): 
   - Set/clear cookie alongside sessionStorage for server access
2. **Read cookie on server** (_makeServerRequest in client.ts):
   - Check for impersonation cookie using Next.js cookies() API  
   - Create fake Request with X-Act-As-User-Id header
   - Pass to existing makeAuthenticatedRequest flow
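
In sketch form (the cookie name and helpers are assumptions, and the two halves live in separate client/server modules in practice; recent Next.js versions expose `cookies()` as async):

```ts
import { cookies } from "next/headers"; // server-only; shown here for brevity

// Client side (useAdminImpersonation): mirror sessionStorage into a
// cookie so server-rendered requests can see the impersonation target.
export function setImpersonatedUser(userId: string | null) {
  if (userId) {
    sessionStorage.setItem("impersonated_user_id", userId);
    document.cookie = `impersonated_user_id=${userId}; path=/; SameSite=Lax`;
  } else {
    sessionStorage.removeItem("impersonated_user_id");
    document.cookie = "impersonated_user_id=; path=/; max-age=0";
  }
}

// Server side (_makeServerRequest): read the cookie and forward it as
// the X-Act-As-User-Id header.
export async function getImpersonationHeaders(): Promise<Record<string, string>> {
  const id = (await cookies()).get("impersonated_user_id")?.value;
  return id ? { "X-Act-As-User-Id": id } : {};
}
```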

## Changes Made
- useAdminImpersonation.ts: 2 lines to set/clear cookie
- client.ts: 1 method to read cookie and create header  
- No changes to existing proxy/header/helpers logic

## Result
- Graph execution requests now include the impersonation header
- Works for both client-side and server-side rendered requests
- Minimal changes, leverages existing header forwarding logic
- Backward compatible with all existing functionality

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-11-10 21:23:23 +07:00
Ubbe
33989f09d0 feat(frontend): supabase + zustand for speed (#11333)
### Changes 🏗️

This change uses [Zustand](https://github.com/pmndrs/zustand) (a
lightweight state management library) to centralize authentication state
across the app. Previously, each component mounting `useSupabase()`
would create its own local state, causing duplicate API calls and
inconsistent user data. Now, user state is cached globally with Zustand
- when multiple components need auth data, they share the same cached
state instead of each fetching separately. This reduces server load and
improves app responsiveness.

**File structure:**
```
src/lib/supabase/hooks/
├── useSupabase.ts           # React hook interface (modified)
├── useSupabaseStore.ts      # Zustand state management (new)
└── helpers.ts               # Pure business logic (new)
```

**What was extracted to helpers:**
- `ensureSupabaseClient()` - Singleton client initialization
- `fetchUser()` - User fetching with error handling  
- `validateSession()` - Session validation logic
- `refreshSession()` - Session refresh logic
- `handleStorageEvent()` - Cross-tab logout handling
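
The store itself can stay tiny; a sketch of the assumed shape:

```ts
import { create } from "zustand";
import type { User } from "@supabase/supabase-js";

type SupabaseAuthState = {
  user: User | null;
  isLoading: boolean;
  setUser: (user: User | null) => void;
  reset: () => void;
};

// One module-level store: every useSupabase() consumer reads the same
// cached user instead of fetching its own copy on mount.
export const useSupabaseStore = create<SupabaseAuthState>((set) => ({
  user: null,
  isLoading: true,
  setUser: (user) => set({ user, isLoading: false }),
  reset: () => set({ user: null, isLoading: false }),
}));
```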

### Checklist 📋

#### For code changes:

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Verified no TypeScript errors in modified files
  - [x] Tested login flow works correctly
  - [x] Tested logout flow works correctly
  - [x] Verified session validation on tab focus/visibility
  - [x] Tested cross-tab logout synchronization
  - [x] Confirmed WebSocket disconnection on logout
2025-11-10 12:08:37 +00:00
Zamil Majdy
d6ee402483 feat(platform): Add execution analytics admin endpoint with feature flag bypass (#11327)
This PR adds a comprehensive execution analytics admin endpoint that
generates AI-powered activity summaries and correctness scores for graph
executions, with proper feature flag bypass for admin use.

### Changes 🏗️

**Backend Changes:**
- Added admin endpoint: `/api/executions/admin/execution_analytics`
- Implemented feature flag bypass with `skip_feature_flag=True`
parameter for admin operations
- Fixed async database client usage (`get_db_async_client`) to resolve
async/await errors
- Added batch processing with configurable size limits to handle large
datasets
- Comprehensive error handling and logging for troubleshooting
- Renamed entire feature from "Activity Backfill" to "Execution
Analytics" for clarity

**Frontend Changes:**
- Created clean admin UI for execution analytics generation at
`/admin/execution-analytics`
- Built form with graph ID input, model selection dropdown, and optional
filters
- Implemented results table with status badges and detailed execution
information
- Added CSV export functionality for analytics results
- Integrated with generated TypeScript API client for proper
authentication
- Added proper error handling with toast notifications and loading
states

**Database & API:**
- Fixed critical async/await issue by switching from sync to async
database client
- Updated router configuration and endpoint naming for consistency
- Generated proper TypeScript types and API client integration
- Applied feature flag filtering at API level while bypassing for admin
operations

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:

**Test Plan:**
- [x] Admin can access execution analytics page at
`/admin/execution-analytics`
- [x] Form validation works correctly (requires graph ID, validates
inputs)
- [x] API endpoint `/api/executions/admin/execution_analytics` responds
correctly
- [x] Authentication works properly through generated API client
- [x] Analytics generation works with different LLM models (gpt-4o-mini,
gpt-4o, etc.)
- [x] Results display correctly with appropriate status badges
(success/failed/skipped)
- [x] CSV export functionality downloads correct data
- [x] Error handling displays appropriate toast messages
- [x] Feature flag bypass works for admin users (generates analytics
regardless of user flags)
- [x] Batch processing handles multiple executions correctly
- [x] Loading states show proper feedback during processing

#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] No configuration changes required for this feature

**Related to:** PR #11325 (base correctness score functionality)

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Zamil Majdy <majdyz@users.noreply.github.com>
2025-11-10 10:27:44 +00:00
Lluis Agusti
08a4a6652d fix(frontend): onboarding provider calls only when user is logged in 2025-11-10 16:53:55 +07:00
Lluis Agusti
58928b516b fix(frontend): onboarding provider calls only when user is logged in 2025-11-10 16:52:39 +07:00
Krzysztof Czerwinski
18bb78d93e feat(platform): WebSocket-based notifications (#11297)
This enables real-time notifications from the backend to the browser via
WebSocket, using a Redis bus to move notifications from the REST process
to the WebSocket process.
This is needed for (follow-up) backend-completion of onboarding tasks
with instant notifications.

### Changes 🏗️

- Add new `AsyncRedisNotificationEventBus` to enable publishing
notifications to the Redis event bus
- Consume notifications in `ws_api.py` similarly to execution events and
send them via WebSocket
- Store WebSocket user connections in `ConnectionManager`
- Add relevant tests and types

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Notifications are sent to the frontend
2025-11-09 01:42:20 +00:00
Swifty
8058b9487b feat(platform): Chat UI refinements with simplified tool status indicators (#11337) 2025-11-07 22:49:03 +01:00
Ubbe
e68896a25a feat(backend): allow regex on CORS allowed origins (#11336)
## Changes 🏗️

Allow dynamic URLs in the CORS config, to match them via regex. This
helps because currently we have Front-end preview deployments which are
isolated ( _nice they don't pollute or override other domains_ ) like:
```
https://autogpt-git-{branch_name}-{commit}-significant-gravitas.vercel.app
```
The Front-end builds and works there, but as soon as you login, any API
requests to endpoints that need auth will fail due to CORS, given our
current CORS config does not support dynamically generated domains.

### Changes

After these changes we can specify dynamic domains to be allowed under
CORS. I also disabled `localhost` origins when the API runs in
production, for safety...

### Before

```yml
cors:
  allowOrigin: "https://dev-builder.agpt.co" # could only specify full URL strings, not dynamic ones
```

### After

```yml
cors:
  allowOrigins:
    - "https://dev-builder.agpt.co"
    - "regex:https://autogpt-git-[a-z0-9-]+\\.vercel\\.app" # dynamic domains supported via regex
```

### Files

- add `build_cors_params` utility to parse literal/regex origins and
block localhost in production (`backend/server/utils/cors.py`)
- apply the helper in both `AgentServer` and `WebsocketServer` so CORS
logic and validations remain consistent
- add reusable `override_config` testing helper and update existing
WebSocket tests to cover the shared CORS behavior
- introduce targeted unit tests for the new CORS helper
(`backend/server/utils/cors_test.py`)

## Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] We will know once we make the origin config changes on infra and
test with this...
2025-11-07 23:28:14 +07:00
Ubbe
dfed092869 dx(frontend): make preview deploys work + minor improvements (#11329)
## Changes 🏗️

Make sure we can log in on preview deployments generated by Vercel to
test Front-end changes. As of now, the Cloudflare CAPTCHA verification
fails there, and we don't need it active on previews anyway.

### Minor improvements

<img width="1599" height="755" alt="Screenshot 2025-11-06 at 16 18 10"
src="https://github.com/user-attachments/assets/0a3fb1f3-2d4d-49fe-885f-10f141dc0ce4"
/>

Prevent the following build error:
```
15:58:01.507
     at j (.next/server/app/(no-navbar)/onboarding/reset/page.js:1:5125)
15:58:01.507
     at <unknown> (.next/server/chunks/5826.js:2:14221)
15:58:01.507
     at b.handleCallbackErrors (.next/server/chunks/5826.js:43:43068)
15:58:01.507
     at <unknown> (.next/server/chunks/5826.js:2:14194) {
15:58:01.507
   description: "Route /onboarding/reset couldn't be rendered statically because it used `cookies`. See more info here: https://nextjs.org/docs/messages/dynamic-server-error",
15:58:01.507
   digest: 'DYNAMIC_SERVER_USAGE'
15:58:01.507
 }
 ```
by making the reset onboarding route a client one. I made a new component, `<LoadingSpinner />`, and that page will show it while onboarding is being reset.

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] You can login/signup on the app and use it in the preview URL generated by Vercel
2025-11-07 21:03:57 +07:00
Swifty
5559d978d7 fix(platform): chat duplicate messages (#11332) 2025-11-06 17:20:46 +01:00
Bently
dcecb17bd1 feat(backend): Remove deprecated LLM models and add migration script (#11331)
These models have become deprecated
- deepseek-r1-distill-llama-70b
- gemma2-9b-it
- llama3-70b-8192
- llama3-8b-8192
- google/gemini-flash-1.5

I have removed them and set up a migration that converts all old
versions of these models to new ones. The model changes are as follows:

- llama3-70b-8192 → llama-3.3-70b-versatile
- llama3-8b-8192 → llama-3.1-8b-instant
- google/gemini-flash-1.5 → google/gemini-2.5-flash
- deepseek-r1-distill-llama-70b → gpt-5-chat-latest
- gemma2-9b-it → gpt-5-chat-latest 

### Changes 🏗️

<!-- Concisely describe all of the changes made in this pull request:
-->

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Check to see if old models were removed
- [x] Check to see if the migration worked and converted old models to
new ones in graphs
2025-11-06 12:36:42 +00:00
Swifty
a056d9e71a feature(backend): Limit Chat to Auth Users, Limit Agent Runs Per Chat (#11330) 2025-11-06 13:11:15 +01:00
Swifty
42b643579f feat(frontend): Chat UI Frontend (#11290) 2025-11-06 11:37:36 +01:00
seer-by-sentry[bot]
5b52ca9227 fix(platform): Implement backwards-compatible Set operations for input validation (#11197)
### Changes 🏗️

- Replaces `isSupersetOf` and `difference` Set operations with
backwards-compatible implementations using `Array.from` and
`every`/`filter` methods.
- This ensures compatibility with older JavaScript environments that may
not fully support modern Set operations.
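
The replacements amount to the following (a sketch consistent with the description above, not the exact code):

```ts
// a.isSupersetOf(b): every element of b must be in a.
// Works on ES2015+ runtimes, unlike the ES2024 Set methods.
function isSupersetOf<T>(a: Set<T>, b: Set<T>): boolean {
  return Array.from(b).every((item) => a.has(item));
}

// a.difference(b): elements of a that are not in b.
function difference<T>(a: Set<T>, b: Set<T>): Set<T> {
  return new Set(Array.from(a).filter((item) => !b.has(item)));
}
```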

Fixes
[BUILDER-451](https://sentry.io/organizations/significant-gravitas/issues/6952591149/).
The issue was that: ES2024 Set methods `isSupersetOf` and `difference`
are unsupported in iOS Safari 16.7, causing a TypeError during component
render.

This fix was generated by Seer in Sentry, triggered automatically. 👁️
Run ID: 2032240

Not quite right? [Click here to continue debugging with
Seer.](https://sentry.io/organizations/significant-gravitas/issues/6952591149/?seerDrawer=true)

### Checklist 📋

#### For code changes:
- [ ] I have clearly listed my changes in the PR description
- [ ] I have made a test plan
- [ ] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
- [ ] Test on iOS Safari 16.7 to ensure no TypeError occurs during
component render.
- [ ] Verify that the replaced `isSupersetOf` and `difference`
implementations function correctly in other supported browsers.

<details>
  <summary>Example test plan</summary>
  
  - [ ] Create from scratch and execute an agent with at least 3 blocks
- [ ] Import an agent from file upload, and confirm it executes
correctly
  - [ ] Upload agent to marketplace
- [ ] Import an agent from marketplace and confirm it executes correctly
  - [ ] Edit an agent from monitor, and confirm it executes correctly
</details>

#### For configuration changes:

- [ ] `.env.default` is updated or already compatible with my changes
- [ ] `docker-compose.yml` is updated or already compatible with my
changes
- [ ] I have included a list of my configuration changes in the PR
description (under **Changes**)

<details>
  <summary>Examples of configuration changes</summary>

  - Changing ports
  - Adding new services that need to communicate with each other
  - Secrets or environment variable changes
  - New or infrastructure changes such as databases
</details>

Co-authored-by: seer-by-sentry[bot] <157164994+seer-by-sentry[bot]@users.noreply.github.com>
2025-11-06 16:55:47 +07:00
Bently
8e83586d13 feat(frontend): Cookie consent banner and settings (#11306)
Implements a cookie consent banner and settings modal for GDPR
compliance, allowing users to manage preferences for analytics and
monitoring cookies. Integrates consent checks with Sentry, Vercel
Analytics, and Google Analytics, ensuring tracking is only enabled with
user permission. Refactors dialog components for improved layout and
adds consent management utilities and hooks.
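
The gating pattern, in sketch form (the localStorage key comes from the test plan below; the stored value's shape is an assumption):

```ts
type CookieConsent = {
  analytics: boolean; // Vercel Analytics / Google Analytics
  monitoring: boolean; // Sentry error monitoring & session replay
};

export function getStoredConsent(): CookieConsent | null {
  const raw = localStorage.getItem("autogpt_cookie_consent");
  return raw ? (JSON.parse(raw) as CookieConsent) : null;
}

// Trackers are only initialized when the matching flag is true, e.g.:
//   if (getStoredConsent()?.analytics) initAnalytics();
```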

#### For code changes:
- [x] Banner appears at bottom of page on first visit with rounded
corners and proper spacing (40px margins)
- [x] Banner shows three buttons: "Reject All", "Settings", and "Accept
All"
- [x] Clicking "Accept All" hides banner and enables
analytics/monitoring
- [x] Clicking "Reject All" hides banner and keeps analytics/monitoring
disabled
- [x] Banner does not reappear after consent is given (check
localStorage: `autogpt_cookie_consent`)

**Cookie Settings Modal:**
- [x] Clicking "Settings" button opens the Cookie Settings modal
- [x] Modal displays three categories: Essential Cookies (always
active), Analytics & Performance (toggle), Error Monitoring & Session
Replay (toggle)
- [x] Clicking "Save Preferences" saves custom settings and closes modal
- [x] Clicking "Accept All" enables all cookies and closes modal
- [x] Clicking "Reject All" disables optional cookies and closes modal
- [x] Modal can be closed with X button or clicking outside

**Consent Persistence:**
- [x] Refresh page after giving consent - banner should not reappear
- [x] Clear localStorage and refresh - banner should reappear
- [x] Consent choices persist across browser sessions

<img width="1123" height="126" alt="image"
src="https://github.com/user-attachments/assets/7425efab-b5cc-4449-802d-0e12bd65053b"
/>

<img width="1124" height="372" alt="image"
src="https://github.com/user-attachments/assets/2f28919a-97e8-44f5-9021-70d3836bb996"
/>
2025-11-06 09:25:43 +00:00
seer-by-sentry[bot]
df9850a141 fix(frontend): prevent state updates on unmounted OnboardingProvider (#11211)
<!-- Clearly explain the need for these changes: -->
Fixes
[BUILDER-48G](https://sentry.io/organizations/significant-gravitas/issues/6960009111/).
The issue was that: Asynchronous API update scheduled via
`setTimeout(0)` in `OnboardingProvider` creates a race condition,
causing React-DOM's portal cleanup (`removeChild`) to fail during
concurrent component unmounting.

### Changes 🏗️

<!-- Concisely describe all of the changes made in this pull request:
-->
- Prevents state updates and API calls after the `OnboardingProvider`
component has been unmounted.
- Introduces an `isMounted` ref to track the component's mount status.
- Uses a `pendingUpdatesRef` to manage and cancel pending API update
promises on unmount, preventing memory leaks and errors.
- Ensures that API update errors are only logged if the component is
still mounted.
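
A sketch of the guard pattern (simplified; the real provider also cancels pending update promises, and the `api.updateOnboarding` call below is hypothetical):

```tsx
import { useEffect, useRef } from "react";

export function useIsMountedRef() {
  const isMounted = useRef(true);
  useEffect(() => {
    isMounted.current = true;
    return () => {
      // Flip on unmount so in-flight async callbacks can bail out.
      isMounted.current = false;
    };
  }, []);
  return isMounted;
}

// Inside OnboardingProvider, async results only commit while mounted:
//   const result = await api.updateOnboarding(payload);
//   if (isMounted.current) setState(result);
```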


This fix was generated by Seer in Sentry, triggered by Craig Swift. 👁️
Run ID: 2058387

Not quite right? [Click here to continue debugging with
Seer.](https://sentry.io/organizations/significant-gravitas/issues/6960009111/?seerDrawer=true)

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Onboarding works and does not throw errors when unmounted

---------

Co-authored-by: seer-by-sentry[bot] <157164994+seer-by-sentry[bot]@users.noreply.github.com>
Co-authored-by: Lluis Agusti <hi@llu.lu>
Co-authored-by: Ubbe <hi@ubbe.dev>
2025-11-06 16:20:23 +07:00
Reinier van der Leer
cbe4086e79 feat(frontend/marketplace): Update agent download UI (#11322)
- Resolves #11314

### Changes 🏗️

- Change "Download agent" CTA button to action link at bottom of summary
agent info
- Move agent ratings above CTA buttons to prevent it from jumping on
page load
- Update vertical spacings to more closely match designs

![example of new
look](https://github.com/user-attachments/assets/2db4d21e-b4f0-4930-9648-d4d4b20ef6fe)

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Designer approves of new look
  - [x] Tests pass
2025-11-06 10:18:29 +01:00
seer-by-sentry[bot]
fbd5f34a61 fix(frontend): Ensure agent version is fetched when active_version_id exists (#11291)
### Changes 🏗️

Fixes
[BUILDER-4HJ](https://sentry.io/organizations/significant-gravitas/issues/6979388537/).
The issue was that: Server-side rendering failed to retrieve the
Supabase access token, causing authenticated API calls to omit the
Authorization header.

- Ensures that the agent version is fetched only when
`creator_agent.active_version_id` exists and the status code is 200.
- Enables the `prefetchGetV2GetAgentByStoreIdQuery` query when
`creator_agent.active_version_id` exists.

This fix was generated by Seer in Sentry, triggered by Craig Swift. 👁️
Run ID: 2234004

Not quite right? [Click here to continue debugging with
Seer.](https://sentry.io/organizations/significant-gravitas/issues/6979388537/?seerDrawer=true)

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Loading marketplace works...

Co-authored-by: seer-by-sentry[bot] <157164994+seer-by-sentry[bot]@users.noreply.github.com>
Co-authored-by: Ubbe <hi@ubbe.dev>
2025-11-06 08:29:43 +00:00
Ubbe
0b680d4990 feat(frontend): <Button variant="link" /> (#11320)
## Changes 🏗️

<img width="800" height="800" alt="Screenshot 2025-11-04 at 23 05 22"
src="https://github.com/user-attachments/assets/ecb3f442-8f1b-4a80-a6c9-0c4b6d5e0427"
/>

New `<Button variant="link" />` for when you need to render an HTML
`<button>` but with our link styles.

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run Storybook locally
  - [x] Looks good
2025-11-06 15:06:56 +07:00
Zamil Majdy
6037f80502 feat(backend): Add correctness score to execution activity generation (#11325)
## Summary
Add AI-generated correctness score field to execution activity status
generation to provide quantitative assessment of how well executions
achieved their intended purpose.

New page:
<img width="1000" height="229" alt="image"
src="https://github.com/user-attachments/assets/5cb907cf-5bc7-4b96-8128-8eecccde9960"
/>

Old page:
<img width="1000" alt="image"
src="https://github.com/user-attachments/assets/ece0dfab-1e50-4121-9985-d585f7fcd4d2"
/>


## What Changed
- Added `correctness_score` field (float 0.0-1.0) to
`GraphExecutionStats` model
- **REFACTORED**: Removed duplicate `llm_utils.py` and reused existing
`AIStructuredResponseGeneratorBlock` logic
- Updated activity status generator to use structured responses instead
of plain text
- Modified prompts to include correctness assessment with 5-tier scoring
system:
  - 0.0-0.2: Failure
  - 0.2-0.4: Poor 
  - 0.4-0.6: Partial Success
  - 0.6-0.8: Mostly Successful
  - 0.8-1.0: Success
- Updated manager.py to extract and set both activity_status and
correctness_score
- Fixed tests to work with existing structured response interface

## Technical Details
- **Code Reuse**: Eliminated duplication by using existing
`AIStructuredResponseGeneratorBlock` instead of creating new LLM
utilities
- Added JSON validation with retry logic for malformed responses
- Maintained backward compatibility for existing activity status
functionality
- Score is clamped to valid 0.0-1.0 range and validated
- All type errors resolved and linting passes

## Test Plan
- [x] All existing tests pass with refactored structure
- [x] Structured LLM call functionality tested with success and error
cases
- [x] Activity status generation tested with various execution scenarios
- [x] Integration tests verify both fields are properly set in execution
stats
- [x] No code duplication - reuses existing block logic

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Zamil Majdy <majdyz@users.noreply.github.com>
2025-11-06 04:42:13 +00:00
Nicholas Tindle
37b3e4e82e feat(blocks)!: Update Exa search block to match latest API specification (#11185)
BREAKING CHANGE: Removed deprecated use_auto_prompt field from Input
schema. Existing workflows using this field will need to be updated to
use the type field set to "auto" instead.

## Summary of Changes 📝

This PR comprehensively updates all Exa search blocks to match the
latest Exa API specification and adds significant new functionality
through the Websets API integration.

### Core API Updates 🔄

- **Migration to Exa SDK**: Replaced manual API calls with the official
`exa_py` AsyncExa SDK across all blocks for better reliability and
maintainability
- **Removed deprecated fields**: Eliminated
`use_auto_prompt`/`useAutoprompt` field (breaking change)
- **Fixed incomplete field definitions**: Corrected `user_location`
field definition
- **Added new input fields**: Added `moderation` and `context` fields
for enhanced content filtering

### Enhanced Content Settings 🛠️

- **Text field improvements**: Support both boolean and advanced object
configurations
- **New content options**: 
  - Added `livecrawl` settings (never, fallback, always, preferred)
  - Added `subpages` support for deeper content retrieval
  - Added `extras` settings for links and images
  - Added `context` settings for additional contextual information
- **Updated settings**: Enhanced `highlight` and `summary`
configurations with new query and schema options

### Comprehensive Cost Tracking 💰

- Added detailed cost tracking models:
  - `CostDollars` for monetary costs
  - `CostCredits` for API credit tracking
  - `CostDuration` for time-based costs
- New output fields: `request_id`, `resolved_search_type`,
`cost_dollars`
- Improved response handling to conditionally yield fields based on
availability

### New Websets API Integration 🚀

Added eight new specialized blocks for Exa's Websets API:
- **`websets.py`**: Core webset management (create, get, list, delete)
- **`websets_search.py`**: Search operations within websets
- **`websets_items.py`**: Individual item management (add, get, update,
delete)
- **`websets_enrichment.py`**: Data enrichment operations
- **`websets_import_export.py`**: Bulk import/export functionality
- **`websets_monitor.py`**: Monitor and track webset changes
- **`websets_polling.py`**: Poll for updates and changes

### New Special-Purpose Blocks 🎯

- **`code_context.py`**: Code search capabilities for finding relevant
code snippets from open source repositories, documentation, and Stack
Overflow
- **`research.py`**: Asynchronous research capabilities that explore the
web, gather sources, synthesize findings, and return structured results
with citations

### Code Organization Improvements 📁

- **Removed legacy code**: Deleted `model.py` file containing deprecated
API models
- **Centralized helpers**: Consolidated shared models and utilities in
`helpers.py`
- **Improved modularity**: Each webset operation is now in its own
dedicated file

### Other Changes 🔧

- Updated `.gitignore` for better development workflow
- Updated `CLAUDE.md` with project-specific instructions
- Updated documentation in `docs/content/platform/new_blocks.md` with
error handling, data models, and file input guidelines
- Improved webhook block implementations with SDK integration

### Files Changed 📂

- **Modified (11 files)**:
  - `.gitignore`
  - `autogpt_platform/CLAUDE.md`
  - `autogpt_platform/backend/backend/blocks/exa/answers.py`
  - `autogpt_platform/backend/backend/blocks/exa/contents.py`
  - `autogpt_platform/backend/backend/blocks/exa/helpers.py`
  - `autogpt_platform/backend/backend/blocks/exa/search.py`
  - `autogpt_platform/backend/backend/blocks/exa/similar.py`
  - `autogpt_platform/backend/backend/blocks/exa/webhook_blocks.py`
  - `autogpt_platform/backend/backend/blocks/exa/websets.py`
  - `docs/content/platform/new_blocks.md`

- **Added (8 files)**:
  - `autogpt_platform/backend/backend/blocks/exa/code_context.py`
  - `autogpt_platform/backend/backend/blocks/exa/research.py`
  - `autogpt_platform/backend/backend/blocks/exa/websets_enrichment.py`
- `autogpt_platform/backend/backend/blocks/exa/websets_import_export.py`
  - `autogpt_platform/backend/backend/blocks/exa/websets_items.py`
  - `autogpt_platform/backend/backend/blocks/exa/websets_monitor.py`
  - `autogpt_platform/backend/backend/blocks/exa/websets_polling.py`
  - `autogpt_platform/backend/backend/blocks/exa/websets_search.py`

- **Deleted (1 file)**:
  - `autogpt_platform/backend/backend/blocks/exa/model.py`

### Migration Guide 🚦

For users with existing workflows using the deprecated `use_auto_prompt`
field:
1. Remove the `use_auto_prompt` field from your input configuration
2. Set the `type` field to `ExaSearchTypes.AUTO` (or "auto" in JSON) to
achieve the same behavior
3. Review any custom content settings as the structure has been enhanced

### Testing Recommendations 

- Test existing workflows to ensure they handle the breaking change
- Verify cost tracking fields are properly returned
- Test new content settings options (livecrawl, subpages, extras,
context)
- Validate websets functionality if using the new Websets API blocks

🤖 Generated with [Claude Code](https://claude.com/claude-code)

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] made + ran a test agent for the blocks and flows between them
[Exa
Tests_v44.json](https://github.com/user-attachments/files/23226143/Exa.Tests_v44.json)

<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> Migrates Exa blocks to AsyncExa SDK, adds comprehensive
Websets/research/code-context blocks, updates existing
search/content/answers/similar, deletes legacy models, adjusts
tests/docs; breaking: remove `use_auto_prompt` in favor of
`type="auto"`.
> 
> - **Backend — Exa integration (SDK migration & BREAKING)**:
> - Replace manual HTTP calls with `exa_py.AsyncExa` across `search`,
`similar`, `contents`, `answers`, and webhooks; richer outputs
(citations, context, costs, resolved search type).
>   - BREAKING: remove `Input.use_auto_prompt`; use `type = "auto"`.
> - Centralize models/utilities in `exa/helpers.py` (content settings,
cost models, result mappers).
> - **New Blocks**:
> - **Websets**: management (`websets.py`), searches, items,
enrichments, imports/exports, monitors, polling (new files under
`exa/websets_*`).
> - **Research**: async research task create/get/wait/list
(`exa/research.py`).
> - **Code Context**: code snippet/context retrieval
(`exa/code_context.py`).
> - **Removals**:
>   - Delete deprecated `exa/model.py`.
> - **Docs & DX**:
> - Update `docs/new_blocks.md` (error handling, models, file input) and
`CLAUDE.md`; ignore backend logs in `.gitignore`.
> - **Frontend Tests**:
> - Split/extend “e” block tests and improve block add robustness in
Playwright (`build.spec.ts`, `build.page.ts`).
> 
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
6e5e572322. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

* **New Features**
* Added multiple Exa research and webset management blocks for task
creation, monitoring, and completion tracking.
* Introduced new search capabilities including code context retrieval,
content search, and enhanced filtering options.
* Added webset enrichment, import/export, and item management
functionality.
  * Expanded search with location-based and category filters.

* **Documentation**
* Updated guidance on error handling, data models, and file input
handling.

* **Refactor**
* Modernized backend API integration with improved response structure
and error reporting.
  * Simplified configuration options for search operations.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-11-05 19:52:48 +00:00
Reinier van der Leer
de7c5b5c31 Merge branch 'master' into dev 2025-11-05 20:17:27 +01:00
Reinier van der Leer
d68dceb9c1 fix(backend/executor): Improve graph execution permission check (#11323)
- Resolves #11316
- Durable fix to replace #11318

### Changes 🏗️

- Expand graph execution permissions check
  - Don't require library membership for execution as sub-graph

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Can run sub-agent with non-latest graph version
- [x] Can run sub-agent that is available in Marketplace but not added
to Library
2025-11-05 17:13:41 +00:00
Zamil Majdy
193866232c hotfix(backend): fix rate-limited messages blocking queue by republishing to back (#11326)
## Summary
Fix critical queue blocking issue where rate-limited user messages
prevent other users' executions from being processed, causing the 135
late executions reported in production.

## Root Cause Analysis
When a user exceeds `max_concurrent_graph_executions_per_user` (25), the
executor uses `basic_nack(requeue=True)` which sends the message to the
**FRONT** of the RabbitMQ queue. This creates an infinite blocking loop
where:
1. Rate-limited message goes to front of queue
2. Gets processed, hits rate limit again  
3. Goes back to front of queue
4. Blocks all other users' messages indefinitely

## Solution Implementation

### 🔧 Core Changes
- **New setting**: `requeue_by_republishing` (default: `True`) in
`backend/util/settings.py`
- **Smart `_ack_message`**: Automatically uses republishing when
`requeue=True` and setting enabled
- **Efficient implementation**: Uses existing `self.run_client`
connection instead of creating new ones
- **Integration test**: Real RabbitMQ test validates queue ordering
behavior

### 🔄 Technical Implementation
**Before (blocking):**
```python
basic_nack(delivery_tag, requeue=True)  # Goes to FRONT of queue 
```

**After (non-blocking):**
```python
if requeue and self.config.requeue_by_republishing:
    # First: Republish to BACK of queue
    self.run_client.publish_message(...)
    # Then: Reject without requeue
    basic_nack(delivery_tag, requeue=False)
```

### 📊 Impact
- **Other users' executions no longer blocked** by rate-limited users
- **Fair queue processing** - FIFO behavior maintained for all users
- **Rate limiting still works** - just doesn't block others
- **Configurable** - can revert to old behavior with
`requeue_by_republishing=False`
- **Zero performance impact** - uses existing connections

## Test Plan
- **Integration test**: `test_requeue_integration.py` validates real
RabbitMQ queue ordering
- **Scenario testing**: Confirms rate-limited messages go to back of
queue
- **Cross-user validation**: Verifies other users' messages process
correctly
- **Setting test**: Confirms configuration loads with correct defaults

## Deployment Strategy
This is a **hotfix** that can be deployed immediately:
- **Backward compatible**: Old behavior available via config
- **Safe default**: New behavior is safer than current state
- **No breaking changes**: All existing functionality preserved
- **Immediate relief**: Resolves production queue blocking

## Files Modified
- `backend/executor/manager.py`: Enhanced `_ack_message` logic and
`_requeue_message_to_back` method
- `backend/util/settings.py`: Added `requeue_by_republishing`
configuration field
- `test_requeue_integration.py`: Integration test for queue ordering
validation

## Related Issues
Fixes the 135 late executions issue where messages were stuck in QUEUED
state despite available executor capacity (583m/600m utilization).

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-11-05 16:24:07 +00:00
Krzysztof Czerwinski
979826f559 fix(platform): Wallet fixes (#11304)
### Changes 🏗️

- Unmask for Sentry:
  - Agent name&creator on onboarding cards
  - Edge paths
  - Block I/O names
- Prevent firing `onClick` when onboarding agents are loading
- Prevent confetti on null elements and top-left corner
- Fix tooltip on Wallet hover
- Fix `0` appearing in place of notification dot on the Wallet button

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Onboarding works and can be completed
  - [x] Wallet confetti works properly
  - [x] Tooltip works
2025-11-05 14:17:43 +00:00
Swifty
2f87e13d17 feat(platform): Chat system backend (#11230)
Implements foundational backend infrastructure for chat-based agent
interaction system. Users will be able to discover, configure, and run
marketplace agents through conversational AI.

**Note:** Chat routes are behind a feature flag 

### Changes 🏗️

**Core Chat System:**
- Chat service with LLM orchestration (Claude 3.5 Sonnet, Haiku, GPT-4)
- REST API routes for sessions and messages
- Database layer for chat persistence
- System prompts and configuration

**5 Conversational Tools:**
1. `find_agent` - Search marketplace by keywords
2. `get_agent_details` - Fetch agent info, inputs, credentials
3. `get_required_setup_info` - Check user readiness, missing credentials
4. `run_agent` - Execute agents immediately
5. `setup_agent` - Configure scheduled execution with cron

**Testing:**
- 28 tests across chat tools (23 passing, 5 skipped for scheduler)
- Test fixtures for simple, LLM, and Firecrawl agents
- Service and data layer tests

**Bug Fixes:**
- Fixed `setup_agent.py` to create schedules instead of immediate
execution
- Fixed graph lookup to use UUID instead of username/slug
- Fixed credential matching by provider/type instead of ID
- Fixed internal tool calls to use `._execute()` instead of `.execute()`

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] All 28 chat tool tests pass (23 pass, 5 skip - require scheduler)
  - [x] Code formatting and linting pass
  - [x] Tool execution flow validated through unit tests
  - [x] Agent discovery, details, and execution tested
  - [x] Credential parsing and matching tested

#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)

No configuration changes required - all existing settings compatible.
2025-11-05 13:49:01 +00:00
Zamil Majdy
2ad5a88a5c feat(frontend): Change copywriting for execution task summary (#11324)
### Changes 🏗️

Change copywriting for execution task summary

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Manual review
2025-11-05 12:59:21 +00:00
Abhimanyu Yadav
e9cd40c0d4 fix(frontend): Correctly rendering all types of outputs in the custom node in the new builder (#11258)
Currently, we render text for all types of outputs, even if the output
is a video, an image, or another type. This PR fixes that by rendering
each type correctly. Some output actions also weren’t working, so those
are fixed as well.

<img width="1486" height="1080" alt="Screenshot 2025-10-27 at 4 36
33 PM"
src="https://github.com/user-attachments/assets/4e4ee43f-5400-477e-8fa9-2914acf11466"
/>

<img width="463" height="683" alt="Screenshot 2025-10-27 at 4 39 00 PM"
src="https://github.com/user-attachments/assets/bfc09c00-58dd-4a0d-96a2-aa51cc282797"
/>

<img width="1455" height="753" alt="Screenshot 2025-10-27 at 4 36 56 PM"
src="https://github.com/user-attachments/assets/52870ffe-3e47-4b0f-bfa3-8d8bbe38cbbd"
/>

<img width="1131" height="1062" alt="Screenshot 2025-10-27 at 4 37
17 PM"
src="https://github.com/user-attachments/assets/e55040e9-33e6-45a8-8397-bf912e93840f"
/>

### Changes 🏗️

- Add a new design for the node output.
- Render the correct HTML tag for each type.
- Make all the output actions below the data section workable, such as
viewing the complete data or copying it.
- Add a “View more” button: only two output pins are shown by default,
and if there are more, all the output can be viewed in a dialog box.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
   - [x] able to render different types of output data correctly.
   - [x] All output actions are working perfectly.

---------

Co-authored-by: Krzysztof Czerwinski <34861343+kcze@users.noreply.github.com>
Co-authored-by: Ubbe <hi@ubbe.dev>
2025-11-05 07:01:49 +00:00
dependabot[bot]
4744675ef9 chore(frontend/deps-dev): bump the development-dependencies group across 1 directory with 13 updates (#11288)
Bumps the development-dependencies group with 13 updates in the
/autogpt_platform/frontend directory:

| Package | From | To |
| --- | --- | --- |
|
[@chromatic-com/storybook](https://github.com/chromaui/addon-visual-tests)
| `4.1.1` | `4.1.2` |
| [@playwright/test](https://github.com/microsoft/playwright) | `1.55.0`
| `1.56.1` |
|
[@tanstack/eslint-plugin-query](https://github.com/TanStack/query/tree/HEAD/packages/eslint-plugin-query)
| `5.86.0` | `5.91.2` |
|
[@tanstack/react-query-devtools](https://github.com/TanStack/query/tree/HEAD/packages/react-query-devtools)
| `5.87.3` | `5.90.2` |
|
[@types/node](https://github.com/DefinitelyTyped/DefinitelyTyped/tree/HEAD/types/node)
| `24.3.1` | `24.9.2` |
| [axe-playwright](https://github.com/abhinaba-ghosh/axe-playwright) |
`2.1.0` | `2.2.2` |
| [chromatic](https://github.com/chromaui/chromatic-cli) | `13.1.4` |
`13.3.2` |
| [msw](https://github.com/mswjs/msw) | `2.11.1` | `2.11.6` |
|
[msw-storybook-addon](https://github.com/mswjs/msw-storybook-addon/tree/HEAD/packages/msw-addon)
| `2.0.5` | `2.0.6` |
| [orval](https://github.com/orval-labs/orval) | `7.11.2` | `7.15.0` |
| [pbkdf2](https://github.com/browserify/pbkdf2) | `3.1.3` | `3.1.5` |
|
[prettier-plugin-tailwindcss](https://github.com/tailwindlabs/prettier-plugin-tailwindcss)
| `0.6.14` | `0.7.1` |
| [typescript](https://github.com/microsoft/TypeScript) | `5.9.2` |
`5.9.3` |


Updates `@chromatic-com/storybook` from 4.1.1 to 4.1.2
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/chromaui/addon-visual-tests/releases"><code>@​chromatic-com/storybook</code>'s
releases</a>.</em></p>
<blockquote>
<h2>v4.1.2</h2>
<h4>🐛 Bug Fix</h4>
<ul>
<li>Broaden version-range for storybook peerDependency to include
10.1.0-0 <a
href="https://redirect.github.com/chromaui/addon-visual-tests/pull/392">#392</a>
(<a
href="https://github.com/ndelangen"><code>@​ndelangen</code></a>)</li>
</ul>
<h4>Authors: 2</h4>
<ul>
<li><a
href="https://github.com/chromatic-support"><code>@​chromatic-support</code></a></li>
<li>Norbert de Langen (<a
href="https://github.com/ndelangen"><code>@​ndelangen</code></a>)</li>
</ul>
<h2>v4.1.2-next.4</h2>
<h4>⚠️ Pushed to <code>next</code></h4>
<ul>
<li>Broaden version-range for storybook peerDependency to include
10.2.0-0 and 10.3.0-0 (<a
href="https://github.com/ndelangen"><code>@​ndelangen</code></a>)</li>
</ul>
<h4>Authors: 1</h4>
<ul>
<li>Norbert de Langen (<a
href="https://github.com/ndelangen"><code>@​ndelangen</code></a>)</li>
</ul>
<h2>v4.1.2-next.3</h2>
<h4>⚠️ Pushed to <code>next</code></h4>
<ul>
<li>Update GitHub Actions workflow to fetch full git history and tags
with optimized settings (<a
href="https://github.com/ndelangen"><code>@​ndelangen</code></a>)</li>
</ul>
<h4>Authors: 1</h4>
<ul>
<li>Norbert de Langen (<a
href="https://github.com/ndelangen"><code>@​ndelangen</code></a>)</li>
</ul>
<h2>v4.1.2-next.2</h2>
<h4>⚠️ Pushed to <code>next</code></h4>
<ul>
<li>bump yarn version (<a
href="https://github.com/ndelangen"><code>@​ndelangen</code></a>)</li>
</ul>
<h4>Authors: 1</h4>
<ul>
<li>Norbert de Langen (<a
href="https://github.com/ndelangen"><code>@​ndelangen</code></a>)</li>
</ul>
<h2>v4.1.2-next.1</h2>
<h4>🐛 Bug Fix</h4>
<ul>
<li>Broaden version-range for storybook peerDependency to include
10.1.0-0 <a
href="https://redirect.github.com/chromaui/addon-visual-tests/pull/392">#392</a>
(<a
href="https://github.com/ndelangen"><code>@​ndelangen</code></a>)</li>
</ul>
<h4>Authors: 1</h4>
<ul>
<li>Norbert de Langen (<a
href="https://github.com/ndelangen"><code>@​ndelangen</code></a>)</li>
</ul>
<h2>v4.1.2-next.0</h2>
<h4>🐛 Bug Fix</h4>
<ul>
<li>Main <a
href="https://redirect.github.com/chromaui/addon-visual-tests/pull/391">#391</a>
(<a href="https://github.com/ndelangen"><code>@​ndelangen</code></a> <a
href="https://github.com/chromatic-support"><code>@​chromatic-support</code></a>)</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/chromaui/addon-visual-tests/blob/v4.1.2/CHANGELOG.md"><code>@​chromatic-com/storybook</code>'s
changelog</a>.</em></p>
<blockquote>
<h1>v4.1.2 (Wed Oct 29 2025)</h1>
<h4>🐛 Bug Fix</h4>
<ul>
<li>Broaden version-range for storybook peerDependency to include
10.1.0-0 <a
href="https://redirect.github.com/chromaui/addon-visual-tests/pull/392">#392</a>
(<a
href="https://github.com/ndelangen"><code>@​ndelangen</code></a>)</li>
<li>Main <a
href="https://redirect.github.com/chromaui/addon-visual-tests/pull/391">#391</a>
(<a href="https://github.com/ndelangen"><code>@​ndelangen</code></a> <a
href="https://github.com/chromatic-support"><code>@​chromatic-support</code></a>)</li>
</ul>
<h4>Authors: 2</h4>
<ul>
<li><a
href="https://github.com/chromatic-support"><code>@​chromatic-support</code></a></li>
<li>Norbert de Langen (<a
href="https://github.com/ndelangen"><code>@​ndelangen</code></a>)</li>
</ul>
<hr />
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="a3af186430"><code>a3af186</code></a>
Bump version to: 4.1.2 [skip ci]</li>
<li><a
href="5c28b499ed"><code>5c28b49</code></a>
Update CHANGELOG.md [skip ci]</li>
<li><a
href="b5fef8d290"><code>b5fef8d</code></a>
Merge pull request <a
href="https://redirect.github.com/chromaui/addon-visual-tests/issues/393">#393</a>
from chromaui/next</li>
<li><a
href="dbc88e7877"><code>dbc88e7</code></a>
Broaden version-range for storybook peerDependency to include 10.2.0-0
and 10...</li>
<li><a
href="2b73fc62cc"><code>2b73fc6</code></a>
Broaden version-range for storybook peerDependency to include 10.2.0-0
and 10...</li>
<li><a
href="70f0e52433"><code>70f0e52</code></a>
Update GitHub Actions workflow to fetch full git history and tags with
optimi...</li>
<li><a
href="5cf104a81e"><code>5cf104a</code></a>
bump yarn version</li>
<li><a
href="0cb03a5386"><code>0cb03a5</code></a>
Merge pull request <a
href="https://redirect.github.com/chromaui/addon-visual-tests/issues/392">#392</a>
from chromaui/norbert/broaden-version-range</li>
<li><a
href="aee436179e"><code>aee4361</code></a>
fix linting</li>
<li><a
href="86004684e8"><code>8600468</code></a>
regen lockfile for updates</li>
<li>Additional commits viewable in <a
href="https://github.com/chromaui/addon-visual-tests/compare/v4.1.1...v4.1.2">compare
view</a></li>
</ul>
</details>
<br />

Updates `@playwright/test` from 1.55.0 to 1.56.1
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/microsoft/playwright/releases"><code>@​playwright/test</code>'s
releases</a>.</em></p>
<blockquote>
<h2>v1.56.1</h2>
<h2>Highlights</h2>
<p><a
href="https://redirect.github.com/microsoft/playwright/issues/37871">#37871</a>
chore: allow local-network-access permission in chromium
<a
href="https://redirect.github.com/microsoft/playwright/issues/37891">#37891</a>
fix(agents): remove workspaceFolder ref from vscode mcp
<a
href="https://redirect.github.com/microsoft/playwright/issues/37759">#37759</a>
chore: rename agents to test agents
<a
href="https://redirect.github.com/microsoft/playwright/issues/37757">#37757</a>
chore(mcp): fallback to cwd when resolving test config</p>
<h2>Browser Versions</h2>
<ul>
<li>Chromium 141.0.7390.37</li>
<li>Mozilla Firefox 142.0.1</li>
<li>WebKit 26.0</li>
</ul>
<h2>v1.56.0</h2>
<h2>Playwright Agents</h2>
<p>Introducing Playwright Agents, three custom agent definitions
designed to guide LLMs through the core process of building a Playwright
test:</p>
<ul>
<li><strong>🎭 planner</strong> explores the app and produces a Markdown
test plan</li>
<li><strong>🎭 generator</strong> transforms the Markdown plan into the
Playwright Test files</li>
<li><strong>🎭 healer</strong> executes the test suite and automatically
repairs failing tests</li>
</ul>
<p>Run <code>npx playwright init-agents</code> with your client of
choice to generate the latest agent definitions:</p>
<pre lang="bash"><code># Generate agent files for each agentic loop
# Visual Studio Code
npx playwright init-agents --loop=vscode
# Claude Code
npx playwright init-agents --loop=claude
# opencode
npx playwright init-agents --loop=opencode
</code></pre>
<blockquote>
<p>[!NOTE]
VS Code v1.105 (currently on the VS Code Insiders channel) is needed for
the agentic experience in VS Code. It will become stable shortly; we are
a bit ahead of time with this functionality!</p>
</blockquote>
<p><a href="https://playwright.dev/docs/test-agents">Learn more about
Playwright Agents</a></p>
<h2>New APIs</h2>
<ul>
<li>New methods <a
href="https://playwright.dev/docs/api/class-page#page-console-messages">page.consoleMessages()</a>
and <a
href="https://playwright.dev/docs/api/class-page#page-page-errors">page.pageErrors()</a>
for retrieving the most recent console messages from the page</li>
<li>New method <a
href="https://playwright.dev/docs/api/class-page#page-requests">page.requests()</a>
for retrieving the most recent network requests from the page</li>
<li>Added <a
href="https://playwright.dev/docs/test-cli#test-list"><code>--test-list</code>
and <code>--test-list-invert</code></a> to allow manual specification of
specific tests from a file</li>
</ul>
<h2>UI Mode and HTML Reporter</h2>
<ul>
<li>Added option to <code>'html'</code> reporter to disable the
&quot;Copy prompt&quot; button</li>
<li>Added option to <code>'html'</code> reporter and UI Mode to merge
files, collapsing test and describe blocks into a single unified
list</li>
<li>Added option to UI Mode mirroring the
<code>--update-snapshots</code> options</li>
<li>Added option to UI Mode to run only a single worker at a time</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
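<p>As an illustration of the new introspection APIs listed above, a
minimal sketch (not from the release notes; the target URL and
assertions are hypothetical):</p>
<pre lang="ts"><code>import { test, expect } from '@playwright/test';

test('collects console and network activity', async ({ page }) =&gt; {
  await page.goto('https://example.com');
  // New in 1.56: retrieve the most recent console messages and requests.
  const messages = await page.consoleMessages();
  const requests = await page.requests();
  expect(messages.filter((m) =&gt; m.type() === 'error')).toHaveLength(0);
  expect(requests.length).toBeGreaterThan(0);
});
</code></pre>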
<details>
<summary>Commits</summary>
<ul>
<li><a
href="54c711571a"><code>54c7115</code></a>
chore: revert &quot;minimal vscode version notice&quot; (<a
href="https://redirect.github.com/microsoft/playwright/issues/37892">#37892</a>)</li>
<li><a
href="7d45eb331a"><code>7d45eb3</code></a>
chore: mark v1.56.1 (<a
href="https://redirect.github.com/microsoft/playwright/issues/37784">#37784</a>)</li>
<li><a
href="e6ef6974be"><code>e6ef697</code></a>
cherry-pick(<a
href="https://redirect.github.com/microsoft/playwright/issues/37871">#37871</a>):
chore: allow local-network-access permission in chromium</li>
<li><a
href="932542c3c1"><code>932542c</code></a>
cherry-pick(<a
href="https://redirect.github.com/microsoft/playwright/issues/37891">#37891</a>):
fix(agents): remove workspaceFolder ref from vscode mcp</li>
<li><a
href="0662dd29ee"><code>0662dd2</code></a>
cherry-pick(<a
href="https://redirect.github.com/microsoft/playwright/issues/37759">#37759</a>):
chore: rename agents to test agents</li>
<li><a
href="919549ec2c"><code>919549e</code></a>
cherry-pick(<a
href="https://redirect.github.com/microsoft/playwright/issues/37758">#37758</a>):
docs: mention VS Code insiders in the agents docs</li>
<li><a
href="e593c64187"><code>e593c64</code></a>
cherry-pick(<a
href="https://redirect.github.com/microsoft/playwright/issues/37757">#37757</a>):
chore(mcp): fallback to cwd when resolving test config</li>
<li><a
href="a8a6e1049b"><code>a8a6e10</code></a>
cherry-pick(<a
href="https://redirect.github.com/microsoft/playwright/issues/37755">#37755</a>):
chore(mcp): minimal vscode version notice</li>
<li><a
href="f36b2eec65"><code>f36b2ee</code></a>
cherry-pick(<a
href="https://redirect.github.com/microsoft/playwright/issues/37731">#37731</a>):
docs: add agents video to agents page (<a
href="https://redirect.github.com/microsoft/playwright/issues/37733">#37733</a>)</li>
<li><a
href="b6af258d07"><code>b6af258</code></a>
cherry-pick(<a
href="https://redirect.github.com/microsoft/playwright/issues/37727">#37727</a>):
devops: fix NPM release step (<a
href="https://redirect.github.com/microsoft/playwright/issues/37728">#37728</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/microsoft/playwright/compare/v1.55.0...v1.56.1">compare
view</a></li>
</ul>
</details>
<details>
<summary>Maintainer changes</summary>
<p>This version was pushed to npm by <a
href="https://www.npmjs.com/~GitHub">GitHub Actions</a>, a new releaser
for <code>@playwright/test</code> since your current version.</p>
</details>
<br />

Updates `@tanstack/eslint-plugin-query` from 5.86.0 to 5.91.2
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/TanStack/query/releases"><code>@​tanstack/eslint-plugin-query</code>'s
releases</a>.</em></p>
<blockquote>
<h2><code>@tanstack/eslint-plugin-query@5.91.2</code></h2>
<h3>Patch Changes</h3>
<ul>
<li>fix: allow useQueries with combine property in no-unstable-deps rule
(<a
href="https://redirect.github.com/TanStack/query/pull/9720">#9720</a>)</li>
</ul>
<h2><code>@tanstack/eslint-plugin-query@5.91.1</code></h2>
<h3>Patch Changes</h3>
<ul>
<li>avoid typescript import in no-void-query-fn rule (<a
href="https://redirect.github.com/TanStack/query/pull/9759">#9759</a>)</li>
</ul>
<h2><code>@tanstack/eslint-plugin-query@5.91.0</code></h2>
<h3>Minor Changes</h3>
<ul>
<li>feat: improve type of exported plugin (<a
href="https://redirect.github.com/TanStack/query/pull/9700">#9700</a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/TanStack/query/blob/main/packages/eslint-plugin-query/CHANGELOG.md"><code>@​tanstack/eslint-plugin-query</code>'s
changelog</a>.</em></p>
<blockquote>
<h2>5.91.2</h2>
<h3>Patch Changes</h3>
<ul>
<li>fix: allow useQueries with combine property in no-unstable-deps rule
(<a
href="https://redirect.github.com/TanStack/query/pull/9720">#9720</a>)</li>
</ul>
<h2>5.91.1</h2>
<h3>Patch Changes</h3>
<ul>
<li>avoid typescript import in no-void-query-fn rule (<a
href="https://redirect.github.com/TanStack/query/pull/9759">#9759</a>)</li>
</ul>
<h2>5.91.0</h2>
<h3>Minor Changes</h3>
<ul>
<li>feat: improve type of exported plugin (<a
href="https://redirect.github.com/TanStack/query/pull/9700">#9700</a>)</li>
</ul>
<h2>5.90.2</h2>
<h3>Patch Changes</h3>
<ul>
<li>fix: exhaustive-deps with variables and type assertions (<a
href="https://redirect.github.com/TanStack/query/pull/9687">#9687</a>)</li>
</ul>
</blockquote>
</details>
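<p>For reference, the pattern exempted by the 5.91.2
<code>no-unstable-deps</code> fix looks roughly like this (a sketch;
<code>fetchTodo</code> and its endpoint are hypothetical):</p>
<pre lang="ts"><code>import { useQueries } from '@tanstack/react-query';

async function fetchTodo(id: number) {
  const res = await fetch(`/api/todos/${id}`); // hypothetical endpoint
  return res.json();
}

function useTodoData(ids: number[]) {
  return useQueries({
    queries: ids.map((id) =&gt; ({
      queryKey: ['todo', id],
      queryFn: () =&gt; fetchTodo(id),
    })),
    // With `combine`, the rule no longer flags the returned value.
    combine: (results) =&gt; results.map((r) =&gt; r.data),
  });
}
</code></pre>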
<details>
<summary>Commits</summary>
<ul>
<li><a
href="9df1308181"><code>9df1308</code></a>
ci: Version Packages (<a
href="https://github.com/TanStack/query/tree/HEAD/packages/eslint-plugin-query/issues/9765">#9765</a>)</li>
<li><a
href="9595254b2f"><code>9595254</code></a>
fix: allow useQueries with combine property in no-unstable-deps rule (<a
href="https://github.com/TanStack/query/tree/HEAD/packages/eslint-plugin-query/issues/9720">#9720</a>)</li>
<li><a
href="b59925ebcc"><code>b59925e</code></a>
ci: Version Packages (<a
href="https://github.com/TanStack/query/tree/HEAD/packages/eslint-plugin-query/issues/9760">#9760</a>)</li>
<li><a
href="b9a478dc79"><code>b9a478d</code></a>
fix(eslint-plugin): avoid imports from typescript (<a
href="https://github.com/TanStack/query/tree/HEAD/packages/eslint-plugin-query/issues/9759">#9759</a>)</li>
<li><a
href="a242f98b82"><code>a242f98</code></a>
Revert &quot;chore(deps): update all non-major dependencies&quot; (<a
href="https://github.com/TanStack/query/tree/HEAD/packages/eslint-plugin-query/issues/9715">#9715</a>)</li>
<li><a
href="571bc184fd"><code>571bc18</code></a>
chore(deps): update all non-major dependencies (<a
href="https://github.com/TanStack/query/tree/HEAD/packages/eslint-plugin-query/issues/9712">#9712</a>)</li>
<li><a
href="dcc7bd9616"><code>dcc7bd9</code></a>
ci: Version Packages (<a
href="https://github.com/TanStack/query/tree/HEAD/packages/eslint-plugin-query/issues/9709">#9709</a>)</li>
<li><a
href="832fac36ab"><code>832fac3</code></a>
feat(eslint-plugin): improve type of exported plugin (<a
href="https://github.com/TanStack/query/tree/HEAD/packages/eslint-plugin-query/issues/9700">#9700</a>)</li>
<li><a
href="5e8fd61126"><code>5e8fd61</code></a>
chore: update <code>@​eslint-react/eslint-plugin</code> (<a
href="https://github.com/TanStack/query/tree/HEAD/packages/eslint-plugin-query/issues/9702">#9702</a>)</li>
<li><a
href="51ca638fac"><code>51ca638</code></a>
ci: Version Packages</li>
<li>Additional commits viewable in <a
href="https://github.com/TanStack/query/commits/@tanstack/eslint-plugin-query@5.91.2/packages/eslint-plugin-query">compare
view</a></li>
</ul>
</details>
<br />

Updates `@tanstack/react-query-devtools` from 5.87.3 to 5.90.2
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/TanStack/query/releases"><code>@​tanstack/react-query-devtools</code>'s
releases</a>.</em></p>
<blockquote>
<h2>v5.90.2</h2>
<p>Version 5.90.2 - 9/23/25, 7:37 AM</p>
<h2>Changes</h2>
<h3>Fix</h3>
<ul>
<li>types: onMutateResult is always defined in onSuccess callback (<a
href="https://github.com/TanStack/query/tree/HEAD/packages/react-query-devtools/issues/9677">#9677</a>)
(2cf3ec9) by Dominik Dorfmeister</li>
</ul>
<h2>Packages</h2>
<ul>
<li><code>@tanstack/query-core@5.90.2</code></li>
<li><code>@tanstack/react-query@5.90.2</code></li>
<li><code>@tanstack/angular-query-experimental@5.90.2</code></li>
<li><code>@tanstack/query-async-storage-persister@5.90.2</code></li>
<li><code>@tanstack/query-broadcast-client-experimental@5.90.2</code></li>
<li><code>@tanstack/query-persist-client-core@5.90.2</code></li>
<li><code>@tanstack/query-sync-storage-persister@5.90.2</code></li>
<li><code>@tanstack/react-query-devtools@5.90.2</code></li>
<li><code>@tanstack/react-query-persist-client@5.90.2</code></li>
<li><code>@tanstack/react-query-next-experimental@5.90.2</code></li>
<li><code>@tanstack/solid-query@5.90.2</code></li>
<li><code>@tanstack/solid-query-devtools@5.90.2</code></li>
<li><code>@tanstack/solid-query-persist-client@5.90.2</code></li>
<li><code>@tanstack/svelte-query@5.90.2</code></li>
<li><code>@tanstack/svelte-query-devtools@5.90.2</code></li>
<li><code>@tanstack/svelte-query-persist-client@5.90.2</code></li>
<li><code>@tanstack/vue-query@5.90.2</code></li>
<li><code>@tanstack/vue-query-devtools@5.90.2</code></li>
</ul>
<h2>v5.90.1</h2>
<p>Version 5.90.1 - 9/22/25, 6:41 AM</p>
<h2>Changes</h2>
<h3>Fix</h3>
<ul>
<li>vue-query: Support infiniteQueryOptions for MaybeRef argument (<a
href="https://github.com/TanStack/query/tree/HEAD/packages/react-query-devtools/issues/9634">#9634</a>)
(49243c8) by hriday330</li>
</ul>
<h3>Chore</h3>
<ul>
<li>deps: update marocchino/sticky-pull-request-comment digest to
fd19551 (<a
href="https://github.com/TanStack/query/tree/HEAD/packages/react-query-devtools/issues/9674">#9674</a>)
(cd4ef5c) by renovate[bot]</li>
</ul>
<h3>Ci</h3>
<ul>
<li>update checkout action (<a
href="https://github.com/TanStack/query/tree/HEAD/packages/react-query-devtools/issues/9673">#9673</a>)
(cbf0896) by Lachlan Collins</li>
<li>update workspace config (<a
href="https://github.com/TanStack/query/tree/HEAD/packages/react-query-devtools/issues/9671">#9671</a>)
(fb48985) by Lachlan Collins</li>
</ul>
<h3>Docs</h3>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="0eaafe0821"><code>0eaafe0</code></a>
release: v5.90.2</li>
<li><a
href="fcd23c9b1a"><code>fcd23c9</code></a>
release: v5.90.1</li>
<li><a
href="fb48985f63"><code>fb48985</code></a>
ci: update workspace config (<a
href="https://github.com/TanStack/query/tree/HEAD/packages/react-query-devtools/issues/9671">#9671</a>)</li>
<li><a
href="2a00fb6504"><code>2a00fb6</code></a>
release: v5.89.0</li>
<li><a
href="230435d112"><code>230435d</code></a>
release: v5.87.4</li>
<li>See full diff in <a
href="https://github.com/TanStack/query/commits/v5.90.2/packages/react-query-devtools">compare
view</a></li>
</ul>
</details>
<br />

Updates `@types/node` from 24.3.1 to 24.9.2
<details>
<summary>Commits</summary>
<ul>
<li>See full diff in <a
href="https://github.com/DefinitelyTyped/DefinitelyTyped/commits/HEAD/types/node">compare
view</a></li>
</ul>
</details>
<br />

Updates `axe-playwright` from 2.1.0 to 2.2.2
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/abhinaba-ghosh/axe-playwright/releases">axe-playwright's
releases</a>.</em></p>
<blockquote>
<h2>Release v2.2.2</h2>
<p>See <a
href="https://github.com/abhinaba-ghosh/axe-playwright/blob/v2.2.2/CHANGELOG.md">CHANGELOG.md</a>
for detailed changes.</p>
<h2>What's Changed</h2>
<ul>
<li>docs(types): add full type definitions and JSDoc for axe-playwright
m… by <a
href="https://github.com/abhinaba-ghosh"><code>@​abhinaba-ghosh</code></a>
in <a
href="https://redirect.github.com/abhinaba-ghosh/axe-playwright/pull/246">abhinaba-ghosh/axe-playwright#246</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/abhinaba-ghosh/axe-playwright/compare/v2.2.1...v2.2.2">https://github.com/abhinaba-ghosh/axe-playwright/compare/v2.2.1...v2.2.2</a></p>
<h2>Release v2.2.1</h2>
<p>See <a
href="https://github.com/abhinaba-ghosh/axe-playwright/blob/v2.2.1/CHANGELOG.md">CHANGELOG.md</a>
for detailed changes.</p>
<h2>What's Changed</h2>
<ul>
<li>fixed: cci junit report format by <a
href="https://github.com/abhinaba-ghosh"><code>@​abhinaba-ghosh</code></a>
in <a
href="https://redirect.github.com/abhinaba-ghosh/axe-playwright/pull/245">abhinaba-ghosh/axe-playwright#245</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/abhinaba-ghosh/axe-playwright/compare/v2.2.0...v2.2.1">https://github.com/abhinaba-ghosh/axe-playwright/compare/v2.2.0...v2.2.1</a></p>
<h2>Release v2.2.0</h2>
<p>See <a
href="https://github.com/abhinaba-ghosh/axe-playwright/blob/v2.2.0/CHANGELOG.md">CHANGELOG.md</a>
for detailed changes.</p>
<h2>What's Changed</h2>
<ul>
<li>checkA11y - <code>skipFailure</code> options not working when
reporter is 'junit' by <a
href="https://github.com/iamhoonse"><code>@​iamhoonse</code></a> in <a
href="https://redirect.github.com/abhinaba-ghosh/axe-playwright/pull/241">abhinaba-ghosh/axe-playwright#241</a></li>
<li>Feat/add change log generation by <a
href="https://github.com/abhinaba-ghosh"><code>@​abhinaba-ghosh</code></a>
in <a
href="https://redirect.github.com/abhinaba-ghosh/axe-playwright/pull/242">abhinaba-ghosh/axe-playwright#242</a></li>
<li>set skipFailure to true by <a
href="https://github.com/abhinaba-ghosh"><code>@​abhinaba-ghosh</code></a>
in <a
href="https://redirect.github.com/abhinaba-ghosh/axe-playwright/pull/243">abhinaba-ghosh/axe-playwright#243</a></li>
<li>Fix/test in ci by <a
href="https://github.com/abhinaba-ghosh"><code>@​abhinaba-ghosh</code></a>
in <a
href="https://redirect.github.com/abhinaba-ghosh/axe-playwright/pull/244">abhinaba-ghosh/axe-playwright#244</a></li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a href="https://github.com/iamhoonse"><code>@​iamhoonse</code></a>
made their first contribution in <a
href="https://redirect.github.com/abhinaba-ghosh/axe-playwright/pull/241">abhinaba-ghosh/axe-playwright#241</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/abhinaba-ghosh/axe-playwright/compare/v2.1.0...v2.2.0">https://github.com/abhinaba-ghosh/axe-playwright/compare/v2.1.0...v2.2.0</a></p>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/abhinaba-ghosh/axe-playwright/blob/master/CHANGELOG.md">axe-playwright's
changelog</a>.</em></p>
<blockquote>
<h3><a
href="https://github.com/abhinaba-ghosh/axe-playwright/compare/v2.2.1...v2.2.2">2.2.2</a>
(2025-09-12)</h3>
<h3>Bug Fixes</h3>
<ul>
<li><strong>types:</strong> ensure custom axe-playwright types are
resolved via typeRoots (<a
href="4ce358c7ec">4ce358c</a>)</li>
</ul>
<h3><a
href="https://github.com/abhinaba-ghosh/axe-playwright/compare/v2.2.0...v2.2.1">2.2.1</a>
(2025-09-10)</h3>
<h2><a
href="https://github.com/abhinaba-ghosh/axe-playwright/compare/v2.1.0...v2.2.0">2.2.0</a>
(2025-09-09)</h2>
<h3>Features</h3>
<ul>
<li>added changelog (<a
href="2f2001164b">2f20011</a>)</li>
</ul>
<h3>Bug Fixes</h3>
<ul>
<li>change reporter conditionals in <code>checkA11y()</code> to fix
<code>skipFailure</code> options not working when reporter is 'junit'
(<a
href="b4514d043e">b4514d0</a>)</li>
</ul>
</blockquote>
</details>
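<p>The <code>skipFailure</code>/junit fix above concerns calls of this
shape (a minimal sketch; the URL is hypothetical):</p>
<pre lang="ts"><code>import { test } from '@playwright/test';
import { injectAxe, checkA11y } from 'axe-playwright';

test('home page has no a11y violations', async ({ page }) =&gt; {
  await page.goto('http://localhost:3000'); // hypothetical dev server
  await injectAxe(page);
  // 4th argument is skipFailures; 2.2.0 fixes how it interacts
  // with the 'junit' reporter.
  await checkA11y(page, undefined, undefined, true, 'junit');
});
</code></pre>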
<details>
<summary>Commits</summary>
<ul>
<li><a
href="7b5f359bfc"><code>7b5f359</code></a>
chore(release): 2.2.2</li>
<li><a
href="5f02048e03"><code>5f02048</code></a>
Merge pull request <a
href="https://redirect.github.com/abhinaba-ghosh/axe-playwright/issues/246">#246</a>
from abhinaba-ghosh/docs/axe-playwright-types</li>
<li><a
href="07a09be7e4"><code>07a09be</code></a>
chore(types): remove obsolete axe-playwright.d.ts file</li>
<li><a
href="4ce358c7ec"><code>4ce358c</code></a>
fix(types): ensure custom axe-playwright types are resolved via
typeRoots</li>
<li><a
href="7cdc2dc50c"><code>7cdc2dc</code></a>
docs(types): add full type definitions and JSDoc for axe-playwright
methods</li>
<li><a
href="0be5f49d11"><code>0be5f49</code></a>
chore(release): 2.2.1</li>
<li><a
href="6be4157af8"><code>6be4157</code></a>
Merge pull request <a
href="https://redirect.github.com/abhinaba-ghosh/axe-playwright/issues/245">#245</a>
from abhinaba-ghosh/fix/cci</li>
<li><a
href="7ea08dc53b"><code>7ea08dc</code></a>
fixed: cci junit report format</li>
<li><a
href="8bd78dc4a2"><code>8bd78dc</code></a>
Update CHANGELOG.md</li>
<li><a
href="0bca3b0b4b"><code>0bca3b0</code></a>
Update CHANGELOG.md</li>
<li>Additional commits viewable in <a
href="https://github.com/abhinaba-ghosh/axe-playwright/compare/v2.1.0...v2.2.2">compare
view</a></li>
</ul>
</details>
<br />

Updates `chromatic` from 13.1.4 to 13.3.2
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/chromaui/chromatic-cli/releases">chromatic's
releases</a>.</em></p>
<blockquote>
<h2>v13.3.2</h2>
<h4>🐛 Bug Fix</h4>
<ul>
<li>Fix Node 24 security warning <a
href="https://redirect.github.com/chromaui/chromatic-cli/pull/1215">#1215</a>
(<a href="https://github.com/codykaup"><code>@​codykaup</code></a>)</li>
<li>Remove <code>corepack enable</code> from GitHub Actions <a
href="https://redirect.github.com/chromaui/chromatic-cli/pull/1218">#1218</a>
(<a
href="https://github.com/justin-thurman"><code>@​justin-thurman</code></a>)</li>
<li>Setup permissions for NPM trusted publishing <a
href="https://redirect.github.com/chromaui/chromatic-cli/pull/1216">#1216</a>
(<a href="https://github.com/codykaup"><code>@​codykaup</code></a>)</li>
</ul>
<h4>Authors: 2</h4>
<ul>
<li>Cody Kaup (<a
href="https://github.com/codykaup"><code>@​codykaup</code></a>)</li>
<li>Justin Thurman (<a
href="https://github.com/justin-thurman"><code>@​justin-thurman</code></a>)</li>
</ul>
<h2>v13.3.1</h2>
<h4>🐛 Bug Fix</h4>
<ul>
<li>Prevent log file from writes during metadata upload <a
href="https://redirect.github.com/chromaui/chromatic-cli/pull/1214">#1214</a>
(<a href="https://github.com/codykaup"><code>@​codykaup</code></a>)</li>
</ul>
<h4>Authors: 1</h4>
<ul>
<li>Cody Kaup (<a
href="https://github.com/codykaup"><code>@​codykaup</code></a>)</li>
</ul>
<h2>v13.3.0</h2>
<h4>🚀 Enhancement</h4>
<ul>
<li>Use <code>--stats-json</code> with storybook version guard. <a
href="https://redirect.github.com/chromaui/chromatic-cli/pull/1210">#1210</a>
(<a href="https://github.com/jmhobbs"><code>@​jmhobbs</code></a>)</li>
</ul>
<h4>🐛 Bug Fix</h4>
<ul>
<li>Trim down some Sentry error reporting <a
href="https://redirect.github.com/chromaui/chromatic-cli/pull/1209">#1209</a>
(<a href="https://github.com/jmhobbs"><code>@​jmhobbs</code></a>)</li>
</ul>
<h4>Authors: 1</h4>
<ul>
<li>John Hobbs (<a
href="https://github.com/jmhobbs"><code>@​jmhobbs</code></a>)</li>
</ul>
<h2>v13.2.1</h2>
<h4>🐛 Bug Fix</h4>
<ul>
<li>Remove unused references to view layer <a
href="https://redirect.github.com/chromaui/chromatic-cli/pull/1207">#1207</a>
(<a
href="https://github.com/justin-thurman"><code>@​justin-thurman</code></a>)</li>
</ul>
<h4>Authors: 1</h4>
<ul>
<li>Justin Thurman (<a
href="https://github.com/justin-thurman"><code>@​justin-thurman</code></a>)</li>
</ul>
<h2>v13.2.0</h2>
<h4>🚀 Enhancement</h4>
<ul>
<li>Updated the mapping for the storybook entry with a recent change in
storybook-rsbuild <a
href="https://redirect.github.com/chromaui/chromatic-cli/pull/1206">#1206</a>
(<a
href="https://github.com/ethriel3695"><code>@​ethriel3695</code></a>)</li>
</ul>
<h4>Authors: 1</h4>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/chromaui/chromatic-cli/blob/main/CHANGELOG.md">chromatic's
changelog</a>.</em></p>
<blockquote>
<h1>v13.3.2 (Fri Oct 24 2025)</h1>
<h4>🐛 Bug Fix</h4>
<ul>
<li>Fix Node 24 security warning <a
href="https://redirect.github.com/chromaui/chromatic-cli/pull/1215">#1215</a>
(<a href="https://github.com/codykaup"><code>@​codykaup</code></a>)</li>
<li>Remove <code>corepack enable</code> from GitHub Actions <a
href="https://redirect.github.com/chromaui/chromatic-cli/pull/1218">#1218</a>
(<a
href="https://github.com/justin-thurman"><code>@​justin-thurman</code></a>)</li>
<li>Setup permissions for NPM trusted publishing <a
href="https://redirect.github.com/chromaui/chromatic-cli/pull/1216">#1216</a>
(<a href="https://github.com/codykaup"><code>@​codykaup</code></a>)</li>
</ul>
<h4>Authors: 2</h4>
<ul>
<li>Cody Kaup (<a
href="https://github.com/codykaup"><code>@​codykaup</code></a>)</li>
<li>Justin Thurman (<a
href="https://github.com/justin-thurman"><code>@​justin-thurman</code></a>)</li>
</ul>
<hr />
<h1>v13.3.1 (Tue Oct 21 2025)</h1>
<h4>🐛 Bug Fix</h4>
<ul>
<li>Prevent log file from writes during metadata upload <a
href="https://redirect.github.com/chromaui/chromatic-cli/pull/1214">#1214</a>
(<a href="https://github.com/codykaup"><code>@​codykaup</code></a>)</li>
</ul>
<h4>Authors: 1</h4>
<ul>
<li>Cody Kaup (<a
href="https://github.com/codykaup"><code>@​codykaup</code></a>)</li>
</ul>
<hr />
<h1>v13.3.0 (Mon Sep 29 2025)</h1>
<h4>🚀 Enhancement</h4>
<ul>
<li>Use <code>--stats-json</code> with storybook version guard. <a
href="https://redirect.github.com/chromaui/chromatic-cli/pull/1210">#1210</a>
(<a href="https://github.com/jmhobbs"><code>@​jmhobbs</code></a>)</li>
</ul>
<h4>🐛 Bug Fix</h4>
<ul>
<li>Trim down some Sentry error reporting <a
href="https://redirect.github.com/chromaui/chromatic-cli/pull/1209">#1209</a>
(<a href="https://github.com/jmhobbs"><code>@​jmhobbs</code></a>)</li>
</ul>
<h4>Authors: 1</h4>
<ul>
<li>John Hobbs (<a
href="https://github.com/jmhobbs"><code>@​jmhobbs</code></a>)</li>
</ul>
<hr />
<h1>v13.2.1 (Fri Sep 26 2025)</h1>
<h4>🐛 Bug Fix</h4>
<ul>
<li>Remove unused references to view layer <a
href="https://redirect.github.com/chromaui/chromatic-cli/pull/1207">#1207</a>
(<a
href="https://github.com/justin-thurman"><code>@​justin-thurman</code></a>)</li>
</ul>
<h4>Authors: 1</h4>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="1548300b71"><code>1548300</code></a>
Bump version to: 13.3.2 [skip ci]</li>
<li><a
href="39e30af667"><code>39e30af</code></a>
Update CHANGELOG.md [skip ci]</li>
<li><a
href="4e3f944c05"><code>4e3f944</code></a>
Merge pull request <a
href="https://redirect.github.com/chromaui/chromatic-cli/issues/1215">#1215</a>
from chromaui/cody/cap-3598-shell-true-causes-securi...</li>
<li><a
href="f73c621eb8"><code>f73c621</code></a>
Merge pull request <a
href="https://redirect.github.com/chromaui/chromatic-cli/issues/1218">#1218</a>
from chromaui/CAP-3707</li>
<li><a
href="b9d0d3b3c7"><code>b9d0d3b</code></a>
Remove corepack enable from actions</li>
<li><a
href="5b685c3f48"><code>5b685c3</code></a>
Clean up some unnecessary test helpers</li>
<li><a
href="12c5291b67"><code>12c5291</code></a>
execaCommand -&gt; execa</li>
<li><a
href="2141d58aa6"><code>2141d58</code></a>
Upgrade execa</li>
<li><a
href="7c3f508721"><code>7c3f508</code></a>
Merge pull request <a
href="https://redirect.github.com/chromaui/chromatic-cli/issues/1216">#1216</a>
from chromaui/cody/cap-3623-look-into-using-trusted-...</li>
<li><a
href="77dcf1f024"><code>77dcf1f</code></a>
Use trusted publishing instead of NPM_TOKEN</li>
<li>Additional commits viewable in <a
href="https://github.com/chromaui/chromatic-cli/compare/v13.1.4...v13.3.2">compare
view</a></li>
</ul>
</details>
<details>
<summary>Maintainer changes</summary>
<p>This version was pushed to npm by <a
href="https://www.npmjs.com/~GitHub">GitHub Actions</a>, a new releaser
for chromatic since your current version.</p>
</details>
<br />

Updates `msw` from 2.11.1 to 2.11.6
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/mswjs/msw/releases">msw's
releases</a>.</em></p>
<blockquote>
<h2>v2.11.6 (2025-10-20)</h2>
<h3>Bug Fixes</h3>
<ul>
<li>update <code>@mswjs/interceptors</code> to 0.40.0 (<a
href="https://redirect.github.com/mswjs/msw/issues/2613">#2613</a>)
(50028b7f61b2f6ba9998ed111d9e517ef08bf538) <a
href="https://github.com/kettanaito"><code>@​kettanaito</code></a></li>
</ul>
<h2>v2.11.5 (2025-10-09)</h2>
<h3>Bug Fixes</h3>
<ul>
<li>export <code>onUnhandledRequest</code> for use in
<code>@​msw/playwright</code> (<a
href="https://redirect.github.com/mswjs/msw/issues/2562">#2562</a>)
(54ce91965796ebbdf04426c3f84e23e627d273bc) <a
href="https://github.com/zachatrocity"><code>@​zachatrocity</code></a>
<a
href="https://github.com/kettanaito"><code>@​kettanaito</code></a></li>
</ul>
<h2>v2.11.4 (2025-10-08)</h2>
<h3>Bug Fixes</h3>
<ul>
<li>add missing parameter documentation for <code>getResponse</code>
function (<a
href="https://redirect.github.com/mswjs/msw/issues/2580">#2580</a>)
(f33fb47c098be4f46d57fad16ea1c6b1d06ef471) <a
href="https://github.com/djpremier"><code>@​djpremier</code></a> <a
href="https://github.com/kettanaito"><code>@​kettanaito</code></a></li>
<li><strong>HttpResponse:</strong> preserve request body type after
cloning the request (<a
href="https://redirect.github.com/mswjs/msw/issues/2600">#2600</a>)
(f40515bf18486b4b3570b309159a10d609779f08) <a
href="https://github.com/Slessi"><code>@​Slessi</code></a> <a
href="https://github.com/kettanaito"><code>@​kettanaito</code></a></li>
<li>use <code>statuses</code> as a shim (<a
href="https://redirect.github.com/mswjs/msw/issues/2607">#2607</a>)
(fee715c503d1cd00ca70f0688dd28b6c297aa158) <a
href="https://github.com/kettanaito"><code>@​kettanaito</code></a></li>
<li>use <code>cookie</code> directly via a shim (<a
href="https://redirect.github.com/mswjs/msw/issues/2606">#2606</a>)
(29d0b53b8088e773ed9e7acb3abd7457b8a29965) <a
href="https://github.com/kettanaito"><code>@​kettanaito</code></a></li>
<li><strong>setupWorker:</strong> prevent <code>WorkerChannel</code>
access in fallback mode (<a
href="https://redirect.github.com/mswjs/msw/issues/2594">#2594</a>)
(1e653c9b06e9bc3e59f71d3500f31ced73f538cb) <a
href="https://github.com/kettanaito"><code>@​kettanaito</code></a></li>
<li><strong>setupWorker:</strong> remove unused
<code>deferNetworkRequestsUntil</code> (<a
href="https://redirect.github.com/mswjs/msw/issues/2595">#2595</a>)
(44d13d23ef1c9a75820c89961585cd01049a323c) <a
href="https://github.com/kettanaito"><code>@​kettanaito</code></a></li>
</ul>
<h2>v2.11.3 (2025-09-20)</h2>
<h3>Bug Fixes</h3>
<ul>
<li>migrate to <code>until-async</code> (<a
href="https://redirect.github.com/mswjs/msw/issues/2590">#2590</a>)
(7087b1e29eb7ca0a414eff36ed3c98d03147044b) <a
href="https://github.com/kettanaito"><code>@​kettanaito</code></a></li>
</ul>
<h2>v2.11.2 (2025-09-10)</h2>
<h3>Bug Fixes</h3>
<ul>
<li><strong>setupWorker:</strong> handle in-flight requests after
calling <code>worker.stop()</code> (<a
href="https://redirect.github.com/mswjs/msw/issues/2578">#2578</a>)
(97cf4c744d9b1a17f42ca65ac8ef93b2632b935b) <a
href="https://github.com/kettanaito"><code>@​kettanaito</code></a></li>
</ul>
</blockquote>
</details>
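<p>For context, the <code>setupWorker</code> lifecycle these 2.11.x
fixes touch (a sketch; the <code>/api/user</code> handler is
hypothetical):</p>
<pre lang="ts"><code>import { setupWorker } from 'msw/browser';
import { http, HttpResponse } from 'msw';

const worker = setupWorker(
  http.get('/api/user', () =&gt; HttpResponse.json({ name: 'Ada' })),
);

await worker.start(); // top-level await in an ESM entry module
// 2.11.2 ensures requests still in flight settle cleanly after stop().
worker.stop();
</code></pre>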
<details>
<summary>Commits</summary>
<ul>
<li><a
href="d4c287c127"><code>d4c287c</code></a>
chore(release): v2.11.6</li>
<li><a
href="24cfde8997"><code>24cfde8</code></a>
chore: add missing <code>@types/serviceworker</code> for development (<a
href="https://redirect.github.com/mswjs/msw/issues/2614">#2614</a>)</li>
<li><a
href="50028b7f61"><code>50028b7</code></a>
fix: update <code>@mswjs/interceptors</code> to 0.40.0 (<a
href="https://redirect.github.com/mswjs/msw/issues/2613">#2613</a>)</li>
<li><a
href="7a49fdadfc"><code>7a49fda</code></a>
chore(release): v2.11.5</li>
<li><a
href="54ce919657"><code>54ce919</code></a>
fix: export <code>onUnhandledRequest</code> for use in
<code>@​msw/playwright</code> (<a
href="https://redirect.github.com/mswjs/msw/issues/2562">#2562</a>)</li>
<li><a
href="326e2b574e"><code>326e2b5</code></a>
chore(release): v2.11.4</li>
<li><a
href="f33fb47c09"><code>f33fb47</code></a>
fix: add missing parameter documentation for <code>getResponse</code>
function (<a
href="https://redirect.github.com/mswjs/msw/issues/2580">#2580</a>)</li>
<li><a
href="f40515bf18"><code>f40515b</code></a>
fix(HttpResponse): preserve request body type after cloning the request
(<a
href="https://redirect.github.com/mswjs/msw/issues/2600">#2600</a>)</li>
<li><a
href="fee715c503"><code>fee715c</code></a>
fix: use <code>statuses</code> as a shim (<a
href="https://redirect.github.com/mswjs/msw/issues/2607">#2607</a>)</li>
<li><a
href="29d0b53b80"><code>29d0b53</code></a>
fix: use <code>cookie</code> directly via a shim (<a
href="https://redirect.github.com/mswjs/msw/issues/2606">#2606</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/mswjs/msw/compare/v2.11.1...v2.11.6">compare
view</a></li>
</ul>
</details>
<br />

Updates `msw-storybook-addon` from 2.0.5 to 2.0.6
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/mswjs/msw-storybook-addon/releases">msw-storybook-addon's
releases</a>.</em></p>
<blockquote>
<h2>v2.0.6</h2>
<h4>🐛 Bug Fix</h4>
<ul>
<li>fix: add a <code>@deprecated</code> tag to the
<code>mswDecorator</code> <a
href="https://redirect.github.com/mswjs/msw-storybook-addon/pull/178">#178</a>
(<a
href="https://github.com/connorshea"><code>@​connorshea</code></a>)</li>
</ul>
<h4>Authors: 1</h4>
<ul>
<li>Connor Shea (<a
href="https://github.com/connorshea"><code>@​connorshea</code></a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/mswjs/msw-storybook-addon/blob/main/packages/msw-addon/CHANGELOG.md">msw-storybook-addon's
changelog</a>.</em></p>
<blockquote>
<h1>v2.0.6 (Fri Oct 10 2025)</h1>
<h4>🐛 Bug Fix</h4>
<ul>
<li>fix: add a <code>@deprecated</code> tag to the
<code>mswDecorator</code> <a
href="https://redirect.github.com/mswjs/msw-storybook-addon/pull/178">#178</a>
(<a
href="https://github.com/connorshea"><code>@​connorshea</code></a>)</li>
</ul>
<h4>Authors: 1</h4>
<ul>
<li>Connor Shea (<a
href="https://github.com/connorshea"><code>@​connorshea</code></a>)</li>
</ul>
<hr />
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="ae7ce4de01"><code>ae7ce4d</code></a>
Update CHANGELOG.md [skip ci]</li>
<li><a
href="0b9594003c"><code>0b95940</code></a>
fix: add a <code>@deprecated</code> tag to the <code>mswDecorator</code>
(<a
href="https://github.com/mswjs/msw-storybook-addon/tree/HEAD/packages/msw-addon/issues/178">#178</a>)</li>
<li>See full diff in <a
href="https://github.com/mswjs/msw-storybook-addon/commits/v2.0.6/packages/msw-addon">compare
view</a></li>
</ul>
</details>
<br />

Updates `orval` from 7.11.2 to 7.15.0
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/orval-labs/orval/releases">orval's
releases</a>.</em></p>
<blockquote>
<h2>Release v7.15.0</h2>
<h2>What's Changed</h2>
<blockquote>
<p>[!IMPORTANT]<br />
<strong>Breaking change</strong>: Node <strong>22.18.0</strong> or
higher is required</p>
</blockquote>
<ul>
<li>chore(deps): bump actions/setup-node from 4 to 6 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/orval-labs/orval/pull/2453">orval-labs/orval#2453</a></li>
<li>fix: generate prefetch hook for hook based mutator by <a
href="https://github.com/aizerin"><code>@​aizerin</code></a> in <a
href="https://redirect.github.com/orval-labs/orval/pull/2456">orval-labs/orval#2456</a></li>
<li>feat: option to generate query invalidations by <a
href="https://github.com/aizerin"><code>@​aizerin</code></a> in <a
href="https://redirect.github.com/orval-labs/orval/pull/2457">orval-labs/orval#2457</a></li>
<li>chore(deps): bump hono from 4.9.7 to 4.10.2 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/orval-labs/orval/pull/2462">orval-labs/orval#2462</a></li>
<li>fix(hono): zValidator wrapper doesn't typecheck with 0.7.4 by <a
href="https://github.com/xandris"><code>@​xandris</code></a> in <a
href="https://redirect.github.com/orval-labs/orval/pull/2468">orval-labs/orval#2468</a></li>
<li>fix(zod): add line breaks to export constants by <a
href="https://github.com/melloware"><code>@​melloware</code></a> in <a
href="https://redirect.github.com/orval-labs/orval/pull/2469">orval-labs/orval#2469</a></li>
<li>fix(query): deduplicate error response types by <a
href="https://github.com/melloware"><code>@​melloware</code></a> in <a
href="https://redirect.github.com/orval-labs/orval/pull/2471">orval-labs/orval#2471</a></li>
<li>fix(zod): fixed bug where zod did not correctly use namespace import
by <a
href="https://github.com/chcederquist"><code>@​chcederquist</code></a>
in <a
href="https://redirect.github.com/orval-labs/orval/pull/2472">orval-labs/orval#2472</a></li>
<li>fix(axios): &quot;responseType: text&quot; incorrectly being added
by <a href="https://github.com/snebjorn"><code>@​snebjorn</code></a> in
<a
href="https://redirect.github.com/orval-labs/orval/pull/2473">orval-labs/orval#2473</a></li>
<li>Release v7.15.0 by <a
href="https://github.com/github-actions"><code>@​github-actions</code></a>[bot]
in <a
href="https://redirect.github.com/orval-labs/orval/pull/2474">orval-labs/orval#2474</a></li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a href="https://github.com/aizerin"><code>@​aizerin</code></a> made
their first contribution in <a
href="https://redirect.github.com/orval-labs/orval/pull/2456">orval-labs/orval#2456</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/orval-labs/orval/compare/v7.14.0...v7.15.0">https://github.com/orval-labs/orval/compare/v7.14.0...v7.15.0</a></p>
<h2>Release v7.14.0</h2>
<blockquote>
<p>[!IMPORTANT]<br />
<strong>Breaking change</strong>: Node <strong>22.18.0</strong> or
higher is required</p>
</blockquote>
<h2>What's Changed</h2>
<ul>
<li>Breaking change: Node 22.18.0 or higher is required</li>
<li>Skip version check when using a Bun/Pnpm workspace catalog by <a
href="https://github.com/xorob0"><code>@​xorob0</code></a> in <a
href="https://redirect.github.com/orval-labs/orval/pull/2434">orval-labs/orval#2434</a></li>
<li>chore: removes the last require() usages by <a
href="https://github.com/snebjorn"><code>@​snebjorn</code></a> in <a
href="https://redirect.github.com/orval-labs/orval/pull/2435">orval-labs/orval#2435</a></li>
<li>Fix typo for infinite by <a
href="https://github.com/JurianArie"><code>@​JurianArie</code></a> in <a
href="https://redirect.github.com/orval-labs/orval/pull/2437">orval-labs/orval#2437</a></li>
<li>Docs: Update output.md by <a
href="https://github.com/gialmisi"><code>@​gialmisi</code></a> in <a
href="https://redirect.github.com/orval-labs/orval/pull/2438">orval-labs/orval#2438</a></li>
<li>fix: don't mangle external URL $ref by <a
href="https://github.com/xandris"><code>@​xandris</code></a> in <a
href="https://redirect.github.com/orval-labs/orval/pull/2439">orval-labs/orval#2439</a></li>
<li>fix(zod): consider path-level parameters when generating zod schemas
by <a href="https://github.com/xandris"><code>@​xandris</code></a> in <a
href="https://redirect.github.com/orval-labs/orval/pull/2440">orval-labs/orval#2440</a></li>
<li>fix(useOperationIdAsQueryKey): make sure operationName is returned
as string by <a
href="https://github.com/mortik"><code>@​mortik</code></a> in <a
href="https://redirect.github.com/orval-labs/orval/pull/2443">orval-labs/orval#2443</a></li>
<li>fix(hono): zValidator not imported consistently by <a
href="https://github.com/xandris"><code>@​xandris</code></a> in <a
href="https://redirect.github.com/orval-labs/orval/pull/2444">orval-labs/orval#2444</a></li>
<li>fix(zod): update type handling from 'any' to 'unknown' in schema
definitions by <a
href="https://github.com/alexmarqs"><code>@​alexmarqs</code></a> in <a
href="https://redirect.github.com/orval-labs/orval/pull/2445">orval-labs/orval#2445</a></li>
<li>fix(hono): some configs generate wrong imports by <a
href="https://github.com/xandris"><code>@​xandris</code></a> in <a
href="https://redirect.github.com/orval-labs/orval/pull/2448">orval-labs/orval#2448</a></li>
<li>Fix: Support OpenAPI 3.1 Multi-Type Arrays in Zod Generator by <a
href="https://github.com/rang-ali"><code>@​rang-ali</code></a> in <a
href="https://redirect.github.com/orval-labs/orval/pull/2447">orval-labs/orval#2447</a></li>
<li>Release v7.14.0 by <a
href="https://github.com/github-actions"><code>@​github-actions</code></a>[bot]
in <a
href="https://redirect.github.com/orval-labs/orval/pull/2449">orval-labs/orval#2449</a></li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a
href="https://github.com/JurianArie"><code>@​JurianArie</code></a> made
their first contribution in <a
href="https://redirect.github.com/orval-labs/orval/pull/2437">orval-labs/orval#2437</a></li>
<li><a href="https://github.com/gialmisi"><code>@​gialmisi</code></a>
made their first contribution in <a
href="https://redirect.github.com/orval-labs/orval/pull/2438">orval-labs/orval#2438</a></li>
<li><a href="https://github.com/xandris"><code>@​xandris</code></a> made
their first contribution in <a
href="https://redirect.github.com/orval-labs/orval/pull/2439">orval-labs/orval#2439</a></li>
<li><a href="https://github.com/alexmarqs"><code>@​alexmarqs</code></a>
made their first contribution in <a
href="https://redirect.github.com/orval-labs/orval/pull/2445">orval-labs/orval#2445</a></li>
<li><a href="https://github.com/rang-ali"><code>@​rang-ali</code></a>
made their first contribution in <a
href="https://redirect.github.com/orval-labs/orval/pull/2447">orval-labs/orval#2447</a></li>
<li><a
href="https://github.com/github-actions"><code>@​github-actions</code></a>[bot]
made their first contribution in <a
href="https://redirect.github.com/orval-labs/orval/pull/2449">orval-labs/orval#2449</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/orval-labs/orval/compare/v7.13.2...v7.14.0">https://github.com/orval-labs/orval/compare/v7.13.2...v7.14.0</a></p>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
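<p>Most of these fixes apply at generation time; a minimal orval config
of the kind affected looks roughly like this (a sketch; the spec and
output paths are hypothetical):</p>
<pre lang="ts"><code>// orval.config.ts
import { defineConfig } from 'orval';

export default defineConfig({
  api: {
    input: './openapi.yaml', // hypothetical spec path
    output: {
      target: './src/api/generated.ts',
      // The prefetch/invalidation changes above target this client.
      client: 'react-query',
    },
  },
});
</code></pre>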
<details>
<summary>Commits</summary>
<ul>
<li><a
href="bdacff4b5d"><code>bdacff4</code></a>
chore(release): bump version to v7.15.0 (<a
href="https://redirect.github.com/orval-labs/orval/issues/2474">#2474</a>)</li>
<li><a
href="0dfe795fad"><code>0dfe795</code></a>
fix(axios): &quot;responseType: text&quot; incorrectly being added (<a
href="https://redirect.github.com/orval-labs/orval/issues/2473">#2473</a>)</li>
<li><a
href="b119209125"><code>b119209</code></a>
fix(zod): fixed bug where zod did not correctly use namespace import (<a
href="https://redirect.github.com/orval-labs/orval/issues/2472">#2472</a>)</li>
<li><a
href="8d0812b8cf"><code>8d0812b</code></a>
fix(query): deduplicate error response types (<a
href="https://redirect.github.com/orval-labs/orval/issues/2471">#2471</a>)</li>
<li><a
href="3592baec13"><code>3592bae</code></a>
Disable samples update step in tests workflow</li>
<li><a
href="86b6d4b093"><code>86b6d4b</code></a>
fix(zod): add line breaks to export constants (<a
href="https://redirect.github.com/orval-labs/orval/issues/2469">#2469</a>)</li>
<li><a
href="13e868b9b4"><code>13e868b</code></a>
fix(hono): zValidator wrapper doesn't typecheck with 0.7.4 (<a
href="https://redirect.github.com/orval-labs/orval/issues/2468">#2468</a>)</li>
<li><a
href="37a1f88109"><code>37a1f88</code></a>
chore(deps): bump hono from 4.9.7 to 4.10.2 (<a
href="https://redirect.github.com/orval-labs/orval/issues/2462">#2462</a>)</li>
<li><a
href="fa69296064"><code>fa69296</code></a>
chore(format): format code</li>
<li><a
href="d58241fba0"><code>d58241f</code></a>
Fix <a
href="https://redirect.github.com/orval-labs/orval/issues/2431">#2431</a>:
Input docs for secure URL</li>
<li>Additional commits viewable in <a
href="https://github.com/orval-labs/orval/compare/v7.11.2...v7.15.0">compare
view</a></li>
</ul>
</details>
<details>
<summary>Maintainer changes</summary>
<p>This version was pushed to npm by <a
href="https://www.npmjs.com/~melloware">melloware</a>, a new releaser
for orval since your current version.</p>
</details>
<br />

Updates `pbkdf2` from 3.1.3 to 3.1.5
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/browserify/pbkdf2/blob/master/CHANGELOG.md">pbkdf2's
changelog</a>.</em></p>
<blockquote>
<h2><a
href="https://github.com/browserify/pbkdf2/compare/v3.1.4...v3.1.5">v3.1.5</a>
- 2025-09-23</h2>
<h3>Commits</h3>
<ul>
<li>[Fix] only allow finite iterations <a
href="67bd94dbbf"><code>67bd94d</code></a></li>
<li>[Fix] restore node 0.10 support <a
href="8f59d962f7"><code>8f59d96</code></a></li>
<li>[Fix] check parameters before the &quot;no Promise&quot; bailout <a
href="d2dc5f052c"><code>d2dc5f0</code></a></li>
</ul>
<h2><a
href="https://github.com/browserify/pbkdf2/compare/v3.1.3...v3.1.4">v3.1.4</a>
- 2025-09-22</h2>
<h3>Commits</h3>
<ul>
<li>[Deps] update <code>create-hash</code>, <code>ripemd160</code>,
<code>sha.js</code>, <code>to-buffer</code> <a
href="8dbf49b382"><code>8dbf49b</code></a></li>
<li>[meta] update repo URLs <a
href="d15bc351de"><code>d15bc35</code></a></li>
<li>[Dev Deps] update <code>@ljharb/eslint-config</code> <a
href="aaf870b1d1"><code>aaf870b</code></a></li>
</ul>
</blockquote>
</details>
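<p>The &quot;finite iterations&quot; fix guards the standard call shape
(a sketch; parameter values are illustrative only):</p>
<pre lang="ts"><code>import { pbkdf2Sync } from 'pbkdf2';

// Derive a 32-byte key; 3.1.5 rejects non-finite iteration counts
// (e.g. Infinity) before doing any work.
const key = pbkdf2Sync('correct horse', 'some-salt', 100000, 32, 'sha512');
console.log(key.toString('hex'));
</code></pre>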
<details>
<summary>Commits</summary>
<ul>
<li><a
href="3687905291"><code>3687905</code></a>
v3.1.5</li>
<li><a
href="67bd94dbbf"><code>67bd94d</code></a>
[Fix] only allow finite iterations</li>
<li><a
href="8f59d962f7"><code>8f59d96</code></a>
[Fix] restore node 0.10 support</li>
<li><a
href="d2dc5f052c"><code>d2dc5f0</code></a>
[Fix] check parameters before the &quot;no Promise&quot; bailout</li>
<li><a
href="b2ad6154b9"><code>b2ad615</code></a>
v3.1.4</li>
<li><a
href="8dbf49b382"><code>8dbf49b</code></a>
[Deps] update <code>create-hash</code>, <code>ripemd160</code>,
<code>sha.js</code>, <code>to-buffer</code></li>
<li><a
href="aaf870b1d1"><code>aaf870b</code></a>
[Dev Deps] update <code>@ljharb/eslint-config</code></li>
<li><a
href="d15bc351de"><code>d15bc35</code></a>
[meta] update repo URLs</li>
<li>See full diff in <a
href="https://github.com/browserify/pbkdf2/compare/v3.1.3...v3.1.5">compare
view</a></li>
</ul>
</details>
<br />

Updates `prettier-plugin-tailwindcss` from 0.6.14 to 0.7.1
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/tailwindlabs/prettier-plugin-tailwindcss/releases">prettier-plugin-tailwindcss's
releases</a>.</em></p>
<blockquote>
<h2>v0.7.1</h2>
<h3>Fixed</h3>
<ul>
<li>Match against correct name of dynamic attributes when using regexes
(<a
href="https://redirect.github.com/tailwindlabs/prettier-plugin-tailwindcss/pull/410">#410</a>)</li>
</ul>
<h2>v0.7.0</h2>
<h3>Added</h3>
<ul>
<li>Format quotes in <code>@source</code>, <code>@plugin</code>, and
<code>@config</code> (<a
href="https://redirect.github.com/tailwindlabs/prettier-plugin-tailwindcss/pull/387">#387</a>)</li>
<li>Sort in function calls in Twig (<a
href="https://redirect.github.com/tailwindlabs/prettier-plugin-tailwindcss/pull/358">#358</a>)</li>
<li>Sort in callable template literals (<a
href="https://redirect.github.com/tailwindlabs/prettier-plugin-tailwindcss/pull/367">#367</a>)</li>
<li>Sort in function calls mixed with property accesses (<a
href="https://redirect.github.com/tailwindlabs/prettier-plugin-tailwindcss/pull/367">#367</a>)</li>
<li>Support regular expression patterns for attributes (<a
href="https://redirect.github.com/tailwindlabs/prettier-plugin-tailwindcss/pull/405">#405</a>)</li>
<li>Support regular expression patterns for function names (<a
href="https://redirect.github.com/tailwindlabs/prettier-plugin-tailwindcss/pull/405">#405</a>)</li>
</ul>
<h3>Changed</h3>
<ul>
<li>Improved monorepo support by loading Tailwind CSS relative to the
input file instead of prettier config file (<a
href="https://redirect.github.com/tailwindlabs/prettier-plugin-tailwindcss/pull/386">#386</a>)</li>
<li>Improved monorepo support by loading v3 configs relative to the
input file instead of prettier config file (<a
href="https://redirect.github.com/tailwindlabs/prettier-plugin-tailwindcss/pull/386">#386</a>)</li>
<li>Fallback to Tailwind CSS v4 instead of v3 by default (<a
href="https://redirect.github.com/tailwindlabs/prettier-plugin-tailwindcss/pull/390">#390</a>)</li>
<li>Don't augment global Prettier <code>ParserOptions</code> and
<code>RequiredOptions</code> types (<a
href="https://redirect.github.com/tailwindlabs/prettier-plugin-tailwindcss/pull/354">#354</a>)</li>
<li>Drop support for <code>prettier-plugin-import-sort</code> (<a
href="https://redirect.github.com/tailwindlabs/prettier-plugin-tailwindcss/pull/385">#385</a>)</li>
</ul>
<h3>Fixed</h3>
<ul>
<li>Handle quote escapes in LESS when sorting <code>@apply</code> (<a
href="https://redirect.github.com/tailwindlabs/prettier-plugin-tailwindcss/pull/392">#392</a>)</li>
<li>Fix whitespace removal inside nested concat and template expressions
(<a
href="https://redirect.github.com/tailwindlabs/prettier-plugin-tailwindcss/pull/396">#396</a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/tailwindlabs/prettier-plugin-tailwindcss/blob/main/CHANGELOG.md">prettier-plugin-tailwindcss's
changelog</a>.</em></p>
<blockquote>
<h2>[0.7.1] - 2025-10-17</h2>
<h3>Fixed</h3>
<ul>
<li>Match against correct name of dynamic attributes when using regexes
(<a
href="https://redirect.github.com/tailwindlabs/prettier-plugin-tailwindcss/pull/410">#410</a>)</li>
</ul>
<h2>[0.7.0] - 2025-10-14</h2>
<h3>Added</h3>
<ul>
<li>Format quotes in <code>@source</code>, <code>@plugin</code>, and
<code>@config</code> (<a
href="https://redirect.github.com/tailwindlabs/prettier-plugin-tailwindcss/pull/387">#387</a>)</li>
<li>Sort in function calls in Twig (<a
href="https://redirect.github.com/tailwindlabs/prettier-plugin-tailwindcss/pull/358">#358</a>)</li>
<li>Sort in callable template literals (<a
href="https://redirect.github.com/tailwindlabs/prettier-plugin-tailwindcss/pull/367">#367</a>)</li>
<li>Sort in function calls mixed with property accesses (<a
href="https://redirect.github.com/tailwindlabs/prettier-plugin-tailwindcss/pull/367">#367</a>)</li>
<li>Support regular expression patterns for attributes (<a
href="https://redirect.github.com/tailwindlabs/prettier-plugin-tailwindcss/pull/405">#405</a>)</li>
<li>Support regular expression patterns for function names (<a
href="https://redirect.github.com/tailwindlabs/prettier-plugin-tailwindcss/pull/405">#405</a>)</li>
</ul>
<h3>Changed</h3>
<ul>
<li>Improved monorepo support by loading Tailwind CSS relative to the
input file instead of prettier config file (<a
href="https://redirect.github.com/tailwindlabs/prettier-plugin-tailwindcss/pull/386">#386</a>)</li>
<li>Improved monorepo support by loading v3 configs relative to the
input file instead of prettier config file (<a
href="https://redirect.github.com/tailwindlabs/prettier-plugin-tailwindcss/pull/386">#386</a>)</li>
<li>Fallback to Tailwind CSS v4 instead of v3 by default (<a
href="https://redirect.github.com/tailwindlabs/prettier-plugin-tailwindcss/pull/390">#390</a>)</li>
<li>Don't augment global Prettier <code>ParserOptions</code> and
<code>RequiredOptions</code> types (<a
href="https://redirect.github.com/tailwindlabs/prettier-plugin-tailwindcss/pull/354">#354</a>)</li>
<li>Drop support for <code>prettier-plugin-import-sort</code> (<a
href="https://redirect.github.com/tailwindlabs/prettier-plugin-tailwindcss/pull/385">#385</a>)</li>
</ul>
<h3>Fixed</h3>
<ul>
<li>Handle quote escapes in LESS when sorting <code>@apply</code> (<a
href="https://redirect.github.com/tailwindlabs/prettier-plugin-tailwindcss/pull/392">#392</a>)</li>
<li>Fix whitespace removal inside nested concat and template expressions
(<a
href="https://redirect.github.com/tailwindlabs/prettier-plugin-tailwindcss/pull/396">#396</a>)</li>
</ul>
</blockquote>
</details>
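<p>The attribute/function options that 0.7.0 extends with regex support
are configured like this (a sketch; the helper and prop names are
hypothetical):</p>
<pre lang="ts"><code>// prettier.config.mjs
export default {
  plugins: ['prettier-plugin-tailwindcss'],
  // 0.7.0 also accepts regular-expression patterns for these options:
  tailwindFunctions: ['clsx', 'cva'],
  tailwindAttributes: ['wrapperClassName'],
};
</code></pre>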
<details>
<summary>Commits</summary>
<ul>
<li><a
href="a0fea3f3c2"><code>a0fea3f</code></a>
0.7.1</li>
<li><a href="https://github.com/tailwindlabs/pre...

_Description has been truncated_

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Lluis Agusti <hi@llu.lu>
2025-11-04 22:48:21 +07:00
Zamil Majdy
910fd2640d hotfix(backend): Temporarily disable library existence check for graph execution (#11318)
### Changes 🏗️

`add_store_agent_to_library` does not add subagents to the user library,
so this check can cause issues.

### Checklist 📋

#### For code changes:
- [ ] I have clearly listed my changes in the PR description
- [ ] I have made a test plan
- [ ] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [ ] ...

<details>
  <summary>Example test plan</summary>
  
  - [ ] Create from scratch and execute an agent with at least 3 blocks
- [ ] Import an agent from file upload, and confirm it executes
correctly
  - [ ] Upload agent to marketplace
- [ ] Import an agent from marketplace and confirm it executes correctly
  - [ ] Edit an agent from monitor, and confirm it executes correctly
</details>

#### For configuration changes:

- [ ] `.env.default` is updated or already compatible with my changes
- [ ] `docker-compose.yml` is updated or already compatible with my
changes
- [ ] I have included a list of my configuration changes in the PR
description (under **Changes**)

<details>
  <summary>Examples of configuration changes</summary>

  - Changing ports
  - Adding new services that need to communicate with each other
  - Secrets or environment variable changes
  - New or infrastructure changes such as databases
</details>
2025-11-04 13:54:48 +00:00
Ubbe
eae2616fb5 fix(frontend): marketplace breadcrumbs typo (#11315)
## Changes 🏗️

Fixing a ✍🏽  typo found by @Pwuts 

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run the app
  - [x] No typos on the breadcrumbs
2025-11-04 13:28:29 +00:00
Abhimanyu Yadav
ad3ea59d90 feat(frontend): Add cron-based scheduling functionality to new builder with input/credential support (#11312)
This PR introduces scheduling functionality to the new builder, allowing
users to create cron-based schedules for automated graph execution with
configurable inputs and credentials.


https://github.com/user-attachments/assets/20c1359f-a3d6-47bf-a881-4f22c657906c

## What's New

### 🚀 Features

#### Scheduling Infrastructure
- **CronSchedulerDialog Component**: Interactive dialog for creating
scheduled runs with:
  - Schedule name configuration
  - Cron expression builder with visual UI
  - Timezone support (displays user timezone or defaults to UTC)
  - Integration with backend scheduling API

- **ScheduleGraph Component**: New action button in builder actions
toolbar
  - Clock icon button to initiate scheduling workflow
  - Handles conditional flow based on input/credential requirements

#### Enhanced Input Management
- **Unified RunInputDialog**: Refactored to support both manual runs and
scheduled runs
- Dynamic "purpose" prop (`"run"` | `"schedule"`) for contextual
behavior
  - Seamless credential and input collection flow
  - Transitions to cron scheduler when scheduling

#### Builder Actions Improvements
- **New Action Buttons Layout**: Three primary actions in the builder
toolbar:
  1. Agent Outputs (placeholder for future implementation)
  2. Run Graph (play/stop button with gradient styling)
  3. Schedule Graph (clock icon for scheduling)

## Technical Details

### New Components
- `CronSchedulerDialog` - Main scheduling dialog component
- `useCronSchedulerDialog` - Hook managing scheduling logic and API
calls
- `ScheduleGraph` - Schedule button component
- `useScheduleGraph` - Hook for scheduling flow control
- `AgentOutputs` - Placeholder component for future outputs feature

### Modified Components
- `BuilderActions` - Added new action buttons
- `RunGraph` - Enhanced with tooltip support
- `RunInputDialog` - Made multi-purpose for run/schedule
- `useRunInputDialog` - Added scheduling dialog state management

### API Integration
- Uses `usePostV1CreateExecutionSchedule` for schedule creation
- Fetches user timezone with `useGetV1GetUserTimezone`
- Validates and passes graph ID, version, inputs, and credentials
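
A rough sketch of how these hooks wire together (the hook names are
from this PR; the import paths, request shape, and helper are
assumptions, not the actual implementation):

```ts
// Hypothetical wiring of the generated hooks named above.
import { usePostV1CreateExecutionSchedule } from "@/api/__generated__/schedules"; // assumed path
import { useGetV1GetUserTimezone } from "@/api/__generated__/users"; // assumed path

export function useCronScheduler(graphId: string, graphVersion: number) {
  const { data: timezone } = useGetV1GetUserTimezone();
  const { mutateAsync: createSchedule, isPending } =
    usePostV1CreateExecutionSchedule();

  async function schedule(
    name: string,
    cron: string,
    inputs: Record<string, unknown>,
  ) {
    // Field names below are assumed for illustration.
    await createSchedule({
      data: { name, cron, graph_id: graphId, graph_version: graphVersion, inputs },
    });
  }

  return { schedule, isPending, timezone: timezone ?? "UTC" };
}
```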

## User Experience

1. **Without Inputs/Credentials**: 
   - Click schedule button → Opens cron scheduler directly

2. **With Inputs/Credentials**:
   - Click schedule button → Opens input dialog
   - Fill required fields → Click "Schedule Run"
   - Configure cron expression → Create schedule

3. **Timezone Awareness**:
   - Shows user's configured timezone
   - Warns if no timezone is set (defaults to UTC)
   - Provides link to timezone settings

## Testing Checklist

- [x] Create a schedule without inputs/credentials
- [x] Create a schedule with required inputs
- [x] Create a schedule with credentials
- [x] Verify timezone display (with and without user timezone)
2025-11-04 11:37:10 +00:00
Reinier van der Leer
69b6b732a2 feat(frontend/ui): Increase contrast of Switch component (#11309)
- Resolves #11308

### Changes 🏗️

- Change background color of `Switch` in unchecked state from
`neutral-200` to `zinc-300`

Before / after:
<center>
<img width="48%" alt="before"
src="https://github.com/user-attachments/assets/d23c9531-2f7e-49d3-8a92-f4ad40e9fa14"
/>

<img width="48%" alt="after"
src="https://github.com/user-attachments/assets/9f27fbee-081e-4b26-8b24-74d5d5cdcef8"
/>
</center>

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Visually verified new look
2025-11-04 08:10:24 +00:00
Abhimanyu Yadav
b1a2d21892 feat(frontend): add undo/redo functionality with keyboard shortcuts to flow builder (#11307)
This PR introduces comprehensive undo/redo functionality to the flow
builder, allowing users to revert and restore changes to their
workflows. The implementation includes keyboard shortcuts (Ctrl/Cmd+Z
for undo, Ctrl/Cmd+Y for redo) and visual controls in the UI.



https://github.com/user-attachments/assets/514253a6-4e86-4ac5-96b4-992180fb3b00



### What's New 🚀
- **Undo/Redo State Management**: Implemented a dedicated Zustand store
(`historyStore`) that tracks up to 50 historical states of nodes and
connections
- **Keyboard Shortcuts**: Added cross-platform keyboard shortcuts:
  - `Ctrl/Cmd + Z` for undo
  - `Ctrl/Cmd + Y` for redo
- **UI Controls**: Added dedicated undo/redo buttons to the control
panel with:
  - Visual feedback when actions are available/disabled
  - Tooltips for better user guidance
  - Proper accessibility attributes
- **Automatic History Tracking**: Integrated history tracking into node
operations (add, remove, position changes, data updates)

### Technical Details 🔧

#### Architecture
- **History Store** (`historyStore.ts`): Manages past and future states
using a stack-based approach (see the sketch after this list)
  - Stores snapshots of nodes and connections
  - Implements state deduplication to prevent duplicate history entries
  - Limits history to 50 states to manage memory usage

- **Integration Points**:
- `nodeStore.ts`: Modified to push state changes to history on relevant
operations
- `Flow.tsx`: Added the new `useFlowRealtime` hook for real-time updates
- `NewControlPanel.tsx`: Integrated the new `UndoRedoButtons` component
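
As a language-agnostic illustration of the bounded stack approach (the
actual store is a TypeScript Zustand store; names here are hypothetical),
a minimal Python sketch:

```python
# Minimal sketch of bounded undo/redo stacks with deduplication.
from dataclasses import dataclass, field
from typing import Any

MAX_HISTORY = 50  # matches the 50-state cap described above


@dataclass
class History:
    past: list[Any] = field(default_factory=list)
    future: list[Any] = field(default_factory=list)

    def push(self, snapshot: Any) -> None:
        if self.past and self.past[-1] == snapshot:
            return  # deduplicate identical consecutive states
        self.past.append(snapshot)
        if len(self.past) > MAX_HISTORY:
            self.past.pop(0)  # drop the oldest snapshot to bound memory
        self.future.clear()  # any new action invalidates the redo stack

    def undo(self, current: Any) -> Any | None:
        if not self.past:
            return None  # nothing to undo; the UI disables the button
        self.future.append(current)
        return self.past.pop()

    def redo(self, current: Any) -> Any | None:
        if not self.future:
            return None
        self.past.append(current)
        return self.future.pop()
```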

#### UI Improvements
- **Enhanced Control Panel Button**: Updated to support different HTML
elements (button/div) with proper role attributes for accessibility
- **Block Menu Tooltips**: Added tooltips to improve user guidance
- **Responsive UI**: Adjusted tooltip delays for better responsiveness
(100ms delay)

### Testing Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description 
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Create a new flow with multiple nodes and verify undo/redo works
for node additions
  - [x] Move nodes and verify position changes can be undone/redone
  - [x] Delete nodes and verify deletions can be undone
- [x] Test keyboard shortcuts (Ctrl/Cmd+Z and Ctrl/Cmd+Y) on different
platforms
- [x] Verify undo/redo buttons are disabled when no history is available
- [x] Test with complex flows (10+ nodes) to ensure performance remains
good
2025-11-04 05:17:29 +00:00
Zamil Majdy
a78b08f5e7 feat(platform): implement admin user impersonation with header-based authentication (#11298)
## Summary

Implement comprehensive admin user impersonation functionality to enable
admins to act on behalf of any user for debugging and support purposes.

## 🔐 Security Features

- **Admin Role Validation**: Only users with 'admin' role can
impersonate others
- **Header-Based Authentication**: Uses `X-Act-As-User-Id` header for
impersonation requests
- **Comprehensive Audit Logging**: All impersonation attempts logged
with admin details
- **Secure Error Handling**: Proper HTTP 403/401 responses for
unauthorized access
- **SSR Safety**: Client-side environment checks prevent server-side
rendering issues

## 🏗️ Architecture

### Backend Implementation (`autogpt_libs/auth/dependencies.py`)
- Enhanced `get_user_id` FastAPI dependency to process impersonation
headers
- Admin role verification using existing `verify_user()` function
- Audit trail logging with admin email, user ID, and target user
- Seamless integration with all existing routes using `get_user_id`
dependency
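
Condensed, the dependency logic described above looks roughly like this
(simplified sketch; error handling and imports trimmed to the essentials,
with `verify_user` being the existing helper):

```python
# Simplified sketch of the impersonation-aware get_user_id dependency.
import logging

import fastapi

IMPERSONATION_HEADER_NAME = "X-Act-As-User-Id"
logger = logging.getLogger(__name__)


async def get_user_id(request: fastapi.Request, jwt_payload: dict) -> str:
    user_id = jwt_payload.get("sub")
    if not user_id:
        raise fastapi.HTTPException(
            status_code=401, detail="User ID not found in token"
        )

    target = request.headers.get(IMPERSONATION_HEADER_NAME, "").strip()
    if not target:
        return user_id

    user = verify_user(jwt_payload, admin_only=False)  # existing helper
    if user.role != "admin":
        raise fastapi.HTTPException(
            status_code=403, detail="Only admin users can impersonate other users"
        )
    # Audit trail: every impersonated request is logged with admin details
    logger.info(f"Admin {user.email} acting as user {target}")
    return target
```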

### Frontend Implementation
- **React Hook**: `useAdminImpersonation` for state management and API
calls
- **Security Banner**: Prominent warning when impersonation is active
- **Admin Panel**: Control interface for starting/stopping impersonation
- **Session Persistence**: Maintains impersonation state across page
refreshes
- **Full Page Refresh**: Ensures all data updates correctly on state
changes

### API Integration
- **Header Forwarding**: All API requests include impersonation header
when active
- **Proxy Support**: Next.js API proxy forwards headers to backend
- **Generated Hooks**: Compatible with existing React Query API hooks
- **Error Handling**: Graceful fallback for storage/authentication
failures

## 🎯 User Experience

### For Admins
1. Navigate to `/admin/impersonation` 
2. Enter target user ID (UUID format with validation)
3. System displays security banner during active impersonation
4. All API calls automatically use impersonated user context
5. Click "Stop Impersonation" to return to admin context

### Security Notice
- **Audit Trail**: All impersonation logged with `logger.info()`
including admin email
- **Session Isolation**: Impersonation state stored in sessionStorage
(not persistent)
- **No Token Manipulation**: Uses header-based approach, preserving
admin's JWT
- **Role Enforcement**: Backend validates admin role on every
impersonated request

## 🔧 Technical Details

### Constants & Configuration
- `IMPERSONATION_HEADER_NAME = "X-Act-As-User-Id"`
- `IMPERSONATION_STORAGE_KEY = "admin-impersonate-user-id"`
- Centralized in `frontend/src/lib/constants.ts` and
`autogpt_libs/auth/dependencies.py`

### Code Quality Improvements
- **DRY Principle**: Eliminated duplicate header forwarding logic
- **Icon Compliance**: Uses Phosphor Icons per coding guidelines  
- **Type Safety**: Proper TypeScript interfaces and error handling
- **SSR Compatibility**: Environment checks for client-side only
operations
- **Error Consistency**: Uniform silent failure with logging approach

### Testing
- Updated backend auth dependency tests for new function signatures
- Added Mock Request objects for comprehensive test coverage
- Maintained existing test functionality while extending capabilities

## 🚀 CodeRabbit Review Responses

All CodeRabbit feedback has been addressed:

1.  **DRY Principle**: Refactored duplicate header forwarding logic
2.  **Icon Library**: Replaced lucide-react with Phosphor Icons  
3.  **SSR Safety**: Added environment checks for sessionStorage
4.  **UI Improvements**: Synchronous initialization prevents flicker
5.  **Error Handling**: Consistent silent failure with logging
6.  **Backend Validation**: Confirmed comprehensive security
implementation
7.  **Type Safety**: Addressed TypeScript concerns
8.  **Code Standards**: Followed all coding guidelines and best
practices

## 🧪 Testing Instructions

1. **Login as Admin**: Ensure user has admin role
2. **Navigate to Panel**: Go to `/admin/impersonation`
3. **Test Impersonation**: Enter valid user UUID and start impersonation
4. **Verify Banner**: Security banner should appear at top of all pages
5. **Test API Calls**: Verify credits/graphs/etc show impersonated
user's data
6. **Check Logging**: Backend logs should show impersonation audit trail
7. **Stop Impersonation**: Verify return to admin context works
correctly

## 📝 Files Modified

### Backend
- `autogpt_libs/auth/dependencies.py` - Core impersonation logic
- `autogpt_libs/auth/dependencies_test.py` - Updated test signatures

### Frontend
- `src/hooks/useAdminImpersonation.ts` - State management hook
- `src/components/admin/AdminImpersonationBanner.tsx` - Security warning
banner
- `src/components/admin/AdminImpersonationPanel.tsx` - Admin control
interface
- `src/app/(platform)/admin/impersonation/page.tsx` - Admin page
- `src/app/(platform)/admin/layout.tsx` - Navigation integration
- `src/app/(platform)/layout.tsx` - Banner integration
- `src/lib/autogpt-server-api/client.ts` - Header injection for API
calls
- `src/lib/autogpt-server-api/helpers.ts` - Header forwarding logic
- `src/app/api/proxy/[...path]/route.ts` - Proxy header forwarding
- `src/app/api/mutators/custom-mutator.ts` - Enhanced error handling
- `src/lib/constants.ts` - Shared constants

## 🔒 Security Compliance

- **Authorization**: Admin role required for impersonation access
- **Authentication**: Uses existing JWT validation with additional role
checks
- **Audit Logging**: Comprehensive logging of all impersonation
activities
- **Error Handling**: Secure error responses without information leakage
- **Session Management**: Temporary sessionStorage without persistent
data
- **Header Validation**: Proper sanitization and validation of
impersonation headers

This implementation provides a secure, auditable, and user-friendly
admin impersonation system that integrates seamlessly with the existing
AutoGPT Platform architecture.

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

* **New Features**
  * Admin user impersonation to view the app as another user.
* New "User Impersonation" admin page for entering target user IDs and
managing sessions.
  * Sidebar link for quick access to the impersonation page.
* Persistent impersonation state that updates app data (e.g., credits)
and survives page reloads.
* Top warning banner when impersonation is active with a Stop
Impersonation control.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-11-04 03:51:28 +00:00
Ubbe
5359f20070 feat(frontend): Google Drive Picker component (#11286)
## Changes 🏗️

<img width="800" height="876" alt="Screenshot_2025-10-29_at_22 56 43"
src="https://github.com/user-attachments/assets/e1d9cf62-0a81-4658-82c2-6e673d636479"
/>

New `<GoogleDrivePicker />` component that, when rendered:
- re-uses existing Google credentials OR asks the user to SSO
- uses the Google Drive Picker script to launch a modal for the user to
select files

We will need these 3 new environment variables on the front end for it
to work:
```
# Google Drive Picker
NEXT_PUBLIC_GOOGLE_CLIENT_ID=
NEXT_PUBLIC_GOOGLE_API_KEY=
NEXT_PUBLIC_GOOGLE_APP_ID=
```
Updated `.env.default` with them.

### Next

We need to figure out how to map this to an agent input type and update
the Back-end to accept the files as input.

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] I tried the whole flow

### For configuration changes:

- [x] `.env.default` is updated or already compatible with my changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
2025-11-03 13:48:28 +00:00
Abhimanyu Yadav
427c7eb1d4 feat(frontend): Add dynamic input dialog for agent execution with credential support (#11301)
### Changes 🏗️

This PR enhances the agent execution functionality by introducing a
dynamic input dialog that collects both regular inputs and credentials
before running agents.

<img width="1309" height="826" alt="Screenshot 2025-11-03 at 10 16
38 AM"
src="https://github.com/user-attachments/assets/2015da5d-055d-49c5-8e7e-31bd0fe369f4"
/>

#### New Features
- **Dynamic Input Dialog**: Added a new `RunInputDialog` component that
automatically detects when agents require inputs or credentials and
prompts users before execution
- **Credential Management**: Integrated credential input handling
directly into the execution flow, supporting various credential types
(API keys, OAuth, passwords)
- **Enhanced Run Controls**: Improved the `RunGraph` component with
better state management and visual feedback for running/stopping agents
- **Form Renderer**: Created a new unified `FormRenderer` component for
consistent input rendering across the application

#### 🔧 Refactoring
- **Input Renderer Migration**: Moved input renderer components from
FlowEditor-specific location to a shared components directory for better
reusability:
  - Migrated fields (AnyOfField, CredentialField, ObjectField)
- Migrated widgets (ArrayEditor, DateInput, SelectWidget, TextInput,
etc.)
  - Migrated templates (FieldTemplate, ArrayFieldTemplate)
- **State Management**: Enhanced `graphStore` with schemas for inputs
and credentials, including helper methods to check for their presence
- **Component Organization**: Restructured BuilderActions components for
better modularity

#### 🗑️ Cleanup
- Removed outdated FlowEditor documentation files (FORM_CREATOR.md,
README.md)
- Removed deprecated `RunGraph` and `useRunGraph` implementations from
FlowEditor
- Consolidated duplicate functionality into new shared components

#### 🎨 UI/UX Improvements
- Added gradient styling to Run/Stop button for better visual appeal
- Improved dialog layout with clear sections for Credentials and Inputs
- Enhanced form fields with size variants (small, medium, large) for
better responsiveness
- Added loading states and proper error handling during execution

### Technical Details
- The new system automatically detects input requirements from the graph
schema
- Credentials are handled separately with special UI treatment based on
credential type
- The dialog only appears when inputs or credentials are actually
required
- Execution flow: Save graph → Check for inputs/credentials → Show
dialog if needed → Execute with provided values

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Create an agent without inputs and verify it runs directly without
dialog
- [x] Create an agent with input blocks and verify the dialog appears
with correct fields
- [x] Create an agent requiring credentials and verify credential
selection/creation works
  - [x] Test agent execution with both inputs and credentials
  - [x] Verify Stop Agent functionality during execution
  - [x] Test error handling for invalid inputs or missing credentials
  - [x] Verify that the dialog closes properly after submission
  - [x] Test that execution state is properly reflected in the UI
2025-11-03 12:05:45 +00:00
Krzysztof Czerwinski
c17a2f807d fix(frontend): Reset beads on run (#11303)
Beads are reset when saving but not on run, which can result in beads
from previous runs accumulating on the opened graph.

### Changes 🏗️

- Move the bead reset code into a function and call it before each run

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Beads reset on every run
2025-11-03 09:23:39 +00:00
Krzysztof Czerwinski
f80739d38c Merge branch 'master' into dev 2025-11-03 10:28:57 +09:00
Krzysztof Czerwinski
f97e19f418 hotfix: Patch onboarding (#11299)
### Changes 🏗️

- Prevent removing progress of user onboarding tasks by merging arrays
on the backend instead of replacing them (see the sketch below)
- New endpoint for onboarding reset
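
A minimal sketch of the merge-instead-of-replace idea (function and field
names hypothetical):

```python
def merge_completed_tasks(existing: list[str], incoming: list[str]) -> list[str]:
    # Union of both arrays, preserving order and dropping duplicates,
    # so previously completed tasks can never be wiped by an update.
    return list(dict.fromkeys(existing + incoming))


assert merge_completed_tasks(["a", "b"], ["b", "c"]) == ["a", "b", "c"]
```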

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Tasks are not being reset
  - [x] `/onboarding/reset` works
2025-11-01 10:19:55 +01:00
Reinier van der Leer
42b9facd4a hotfix(backend/scheduler): Bump apscheduler to DST-fixed version 3.11.1 (#11294)
- #11273

- Bump `apscheduler` to v3.11.1 which contains a fix for the issue

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] "It's a rather ugly solution but the test proves that it works."
~the maintainer
  - [x] CI passes
2025-10-31 23:09:28 +01:00
Reinier van der Leer
a02b8d9ad7 fix(backend/scheduler): Bump apscheduler to DST-fixed version 3.11.1 (#11294)
- #11273

### Changes 🏗️

- Bump `apscheduler` to v3.11.1 which contains a fix for the issue

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] "It's a rather ugly solution but the test proves that it works."
~the maintainer
  - [x] CI passes
2025-10-31 21:40:44 +00:00
Nicholas Tindle
834617d221 hotfix(backend): Clarify prompt requirements for list generation for our friend claude (#11293) 2025-10-31 12:28:05 -05:00
Lluis Agusti
e6fb649ced Merge 'master' into 'dev' 2025-10-30 20:05:55 +07:00
Zamil Majdy
2f8cdf62ba feat(backend): Standardize error handling with BlockSchemaInput & BlockSchemaOutput base class (#11257)
<!-- Clearly explain the need for these changes: -->

This PR addresses the need for consistent error handling across all
blocks in the AutoGPT platform. Previously, each block had to manually
define an `error` field in their output schema, leading to code
duplication and potential inconsistencies. Some blocks might forget to
include the error field, making error handling unpredictable.

### Changes 🏗️

<!-- Concisely describe all of the changes made in this pull request:
-->

- **Created `BlockSchemaOutput` base class**: New base class that
extends `BlockSchema` with a standardized `error` field
- **Created `BlockSchemaInput` base class**: Added for consistency and
future extensibility
- **Updated 140+ block implementations**: Changed all block `Output`
classes from `class Output(BlockSchema):` to `class
Output(BlockSchemaOutput):`
- **Removed manual error field definitions**: Eliminated hundreds of
duplicate `error: str = SchemaField(...)` definitions
- **Updated type annotations**: Changed `Block[BlockSchema,
BlockSchema]` to `Block[BlockSchemaInput, BlockSchemaOutput]` throughout
the codebase
- **Fixed imports**: Added `BlockSchemaInput` and `BlockSchemaOutput`
imports to all relevant files
- **Maintained backward compatibility**: Updated `EmptySchema` to
inherit from `BlockSchemaOutput`

**Key Benefits:**
- Consistent error handling across all blocks
- Reduced code duplication (removed ~200 lines of repetitive error field
definitions)
- Type safety improvements with distinct input/output schema types
- Blocks can still override error field with more specific descriptions
when needed
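
A minimal sketch of the pattern (import paths and field descriptions are
assumptions, not copied from the codebase):

```python
# Sketch: standardized error output via a shared base class.
from backend.data.block import Block, BlockSchema  # paths assumed
from backend.data.model import SchemaField


class BlockSchemaOutput(BlockSchema):
    # Every block output now inherits a standardized error field
    error: str = SchemaField(description="Error message if the block failed")


class MyBlock(Block):
    class Output(BlockSchemaOutput):
        # Blocks declare only their own outputs; `error` comes for free,
        # and can still be overridden with a more specific description.
        result: str = SchemaField(description="The result of the block")
```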

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
- [x] Verified `poetry run format` passes (all linting, formatting, and
type checking)
- [x] Tested block instantiation works correctly (MediaDurationBlock,
UnrealTextToSpeechBlock)
- [x] Confirmed error fields are automatically present in all updated
blocks
- [x] Verified block loading system works (successfully loads 353+
blocks)
  - [x] Tested backward compatibility with EmptySchema
- [x] Confirmed blocks can still override error field with custom
descriptions
  - [x] Validated core schema inheritance chain works correctly

#### For configuration changes:

- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)

*Note: No configuration changes were needed for this refactoring.*

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Lluis Agusti <hi@llu.lu>
Co-authored-by: Ubbe <hi@ubbe.dev>
2025-10-30 12:28:08 +00:00
seer-by-sentry[bot]
3dc5208f71 feat(backend): Increase max_field_size in aiohttp requests (#11261)
### Changes 🏗️

- Increased `max_field_size` in `aiohttp.ClientSession` to 16KB to
handle servers with large headers (e.g., long CSP headers).
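
A minimal usage sketch of the changed setting:

```python
import asyncio

import aiohttp


async def fetch(url: str) -> str:
    # Raise the per-header-field limit to 16 KB (default is 8190 bytes) so
    # responses with very long headers, e.g. large CSP headers, parse cleanly.
    async with aiohttp.ClientSession(max_field_size=16384) as session:
        async with session.get(url) as resp:
            return await resp.text()


asyncio.run(fetch("https://example.com"))
```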

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x]  Add unit test that checks it can now parse headers over 8k size

---------

Co-authored-by: seer-by-sentry[bot] <157164994+seer-by-sentry[bot]@users.noreply.github.com>
Co-authored-by: Swifty <craigswift13@gmail.com>
Co-authored-by: Ubbe <hi@ubbe.dev>
2025-10-30 10:41:22 +00:00
Ubbe
04493598e2 fix(frontend): more wallet popover fixes (#11285)
## Changes 🏗️

<img width="800" height="547" alt="Screenshot 2025-10-29 at 22 11 35"
src="https://github.com/user-attachments/assets/5c700ddc-d770-48ef-9847-7e652c5dedcb"
/>
<br /><br />

- Use
[`react-currency-input-field`](https://www.npmjs.com/package/react-currency-input-field)
for `<Input type="amount" />` under the hood
  - so it formats numbers nicely with `,` and `.`
- Simplify form logic
- Make the popover cover the trigger button when open
- Re-organize imports
- Show a `$` prefix in front of the amount inputs

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Login
  - [x] Open the wallet with credits enabled
  - [x] Play with the inputs

---------

Co-authored-by: Swifty <craigswift13@gmail.com>
2025-10-30 14:44:29 +04:00
seer-by-sentry[bot]
4140331731 fix(blocks/llm): Validate LLM summary responses are strings (#11275)
### Changes 🏗️

- Added validation to ensure that the `summary` and `final_summary`
returned by the LLM are strings.
- Raises a `ValueError` if the LLM returns a list or other non-string
type, providing a descriptive error message to aid debugging.
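
A sketch of the validation described above (names illustrative):

```python
def ensure_string_summary(summary: object, field: str = "summary") -> str:
    # LLMs occasionally return a list of strings instead of a single string,
    # which previously broke the downstream join in _combine_summaries.
    if not isinstance(summary, str):
        raise ValueError(
            f"Expected LLM to return a string for {field!r}, "
            f"got {type(summary).__name__}: {summary!r}"
        )
    return summary
```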

Fixes
[AUTOGPT-SERVER-6M4](https://sentry.io/organizations/significant-gravitas/issues/6978480131/).
The issue was that: LLM returned list of strings instead of single
string summary, causing `_combine_summaries` to fail on `join`.

This fix was generated by Seer in Sentry, triggered by Craig Swift. 👁️
Run ID: 2230933

Not quite right? [Click here to continue debugging with
Seer.](https://sentry.io/organizations/significant-gravitas/issues/6978480131/?seerDrawer=true)

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
- [x] Added a unit test to verify that a ValueError is raised when the
LLM returns a list instead of a string for summary or final_summary.

---------

Co-authored-by: seer-by-sentry[bot] <157164994+seer-by-sentry[bot]@users.noreply.github.com>
Co-authored-by: Swifty <craigswift13@gmail.com>
2025-10-30 09:52:50 +00:00
Swifty
594b1adcf7 fix(frontend): Fix marketplace sort by (#11284)
The marketplace sort-by functionality was not working on the frontend.
This PR fixes it.

### Changes 🏗️

- Add type hints for sort by
- Fix marketplace sort by drop downs


### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] tested locally
2025-10-30 08:46:11 +00:00
Swifty
cab6590908 fix(frontend): Safely parse error response body in handleFetchError (#11274)
### Changes 🏗️

- Ensures `handleFetchError` can handle non-JSON error responses (e.g.,
HTML error pages).
- Attempts to parse the response body as JSON, but falls back to text if
JSON parsing fails.
- Logs a warning to the console if JSON parsing fails.
- Sets `responseData` to null if parsing fails.

Fixes
[BUILDER-482](https://sentry.io/organizations/significant-gravitas/issues/6958135748/).
The issue was that: Frontend error handler unconditionally calls
`response.json()` on a non-JSON HTML error page starting with 'A'.

This fix was generated by Seer in Sentry, triggered by Craig Swift. 👁️
Run ID: 2206951

Not quite right? [Click here to continue debugging with
Seer.](https://sentry.io/organizations/significant-gravitas/issues/6958135748/?seerDrawer=true)

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Test Plan:
    - [x] Created unit tests for the issue that caused the error
    - [x] Created unit tests to ensure responses are parsed gracefully
2025-10-29 16:22:47 +00:00
Swifty
a1ac109356 fix(backend): Further enhance sanitization of SQL raw queries (#11279)
### Changes 🏗️

Enhanced SQL query security in the store search functionality by
implementing proper parameterization to prevent SQL injection
vulnerabilities.

**Security Improvements:**
- Replaced string interpolation with PostgreSQL positional parameters
(`$1`, `$2`, etc.) for all user inputs
- Added ORDER BY whitelist validation to prevent injection via
`sorted_by` parameter
- Parameterized search term, creators array, category, and pagination
values
- Fixed variable naming conflict (`sql_where_clause` vs `where_clause`)
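
An illustrative sketch of the approach (table and column names are
placeholders, not the actual store schema):

```python
# User inputs travel as PostgreSQL positional parameters; the ORDER BY
# clause comes only from a fixed whitelist, never from raw user input.
ORDER_BY_WHITELIST = {
    "runs": "runs DESC",
    "rating": "rating DESC",
}


def build_search_query(search_term: str, category: str | None, sorted_by: str):
    params: list[str] = [search_term]
    where = "search_text ILIKE '%' || $1 || '%'"
    if category:
        params.append(category)
        where += f" AND ${len(params)} = ANY(categories)"
    order_by = ORDER_BY_WHITELIST.get(sorted_by, "runs DESC")
    return f"SELECT * FROM store_agents WHERE {where} ORDER BY {order_by}", params
```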

**Testing:**
- Added 4 comprehensive tests validating SQL injection prevention across
different attack vectors
- Tests verify that malicious input in search queries, filters, sorting,
and categories are safely handled
- All 10 tests in db_test.py pass successfully

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] All existing tests pass (10/10 tests passing)
  - [x] New security tests validate SQL injection prevention
  - [x] Verified parameterized queries handle malicious input safely
  - [x] Code formatting passes (`poetry run format`)

#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)

*Note: No configuration changes required for this security fix*
2025-10-29 15:21:27 +00:00
Zamil Majdy
5506d59da1 fix(backend/executor): make graph execution permission check version-agnostic (#11283)
## Summary
Fix critical issue where pre-execution permission validation broke
execution of graphs that reference older versions of sub-graphs.

## Problem
The `validate_graph_execution_permissions` function was checking for the
specific version of a graph in the user's library. This caused failures
when:
1. A parent graph references an older version of a sub-graph  
2. The user updates the sub-graph to a newer version
3. The older version is no longer in their library
4. Execution of the parent graph fails with `GraphNotInLibraryError`

## Root Cause
In `backend/executor/utils.py` line 523, the function was checking for
the exact version, but sub-graphs legitimately reference older versions
that may no longer be in the library.

## Solution

### 1. Remove Version-Specific Check (backend/executor/utils.py)
- Remove `graph_version=graph.version` parameter from validation call
- Add explanatory comment about version-agnostic behavior
- Now only checks that the graph ID exists in user's library (any
version)

### 2. Enhance Documentation (backend/data/graph.py)  
- Update function docstring to explain version-agnostic behavior
- Document that `None` (now default) allows execution of any version
- Clarify this is important for sub-graph version compatibility

## Technical Details
The `validate_graph_execution_permissions` function was already designed
to handle version-agnostic checks when `graph_version=None`. By omitting
the version parameter, we skip the version check and only verify:
- Graph exists in user's library  
- Graph is not deleted/archived
- User has execution permissions

## Impact
-  Parent graphs can execute even when they reference older sub-graph
versions
-  Sub-graph updates don't break existing parent graphs  
-  Maintains security: still checks library membership and permissions
-  No breaking changes: version-specific validation still available
when needed

## Example Scenario Fixed
1. User creates parent graph that uses sub-graph v1
2. User updates sub-graph to v2 (v1 removed from library)  
3. Parent graph still references sub-graph v1
4. **Before**: Execution fails with `GraphNotInLibraryError`
5. **After**: Execution succeeds (version-agnostic permission check)

## Testing
- [x] Code formatting and linting passes
- [x] Type checking passes
- [x] No breaking changes to existing functionality
- [x] Security still maintained through library membership checks

## Files Changed
- `backend/executor/utils.py`: Remove version-specific permission check
- `backend/data/graph.py`: Enhanced documentation for version-agnostic
behavior

Closes #[issue-number-if-applicable]

Co-authored-by: Claude <noreply@anthropic.com>
2025-10-29 14:13:23 +00:00
Ubbe
749341100b fix(frontend): prevent Wallet rendering twice (#11282)
## Changes 🏗️

The `<Wallet />` was being rendered twice (one instance hidden with CSS
`hidden`) because of the Navbar layout, which caused logic issues within
the wallet. I changed it to render conditionally via JavaScript instead,
which is better practice than using `hidden`, especially for components
with actual logic.

I also moved the component files closer to where they are used (in the
navbar).

I have a Cursor plugin that removes unused imports but annoyingly
re-organizes them, hence the changes around that...

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Login
  - [x] There is only 1 Wallet in the DOM
2025-10-29 14:09:13 +00:00
Zamil Majdy
4922f88851 feat(backend/executor): Implement cascading stop for nested graph executions (#11277)
## Summary
Fixes critical issue where child executions spawned by
`AgentExecutorBlock` continue running after parent execution is stopped.
Implements parent-child execution tracking and recursive cascading stop
logic to ensure entire execution trees are terminated together.

## Background
When a parent graph execution containing `AgentExecutorBlock` nodes is
stopped, only the parent was terminated. Child executions continued
running, leading to:
-  Orphaned child executions consuming credits
-  No user control over execution trees  
-  Race conditions where children start after parent stops
-  Resource leaks from abandoned executions

## Core Changes

### 1. Database Schema (`schema.prisma` + migration)
```sql
-- Add nullable parent tracking field
ALTER TABLE "AgentGraphExecution" ADD COLUMN "parentGraphExecutionId" TEXT;

-- Add self-referential foreign key with graceful deletion
ALTER TABLE "AgentGraphExecution" ADD CONSTRAINT "AgentGraphExecution_parentGraphExecutionId_fkey" 
  FOREIGN KEY ("parentGraphExecutionId") REFERENCES "AgentGraphExecution"("id") 
  ON DELETE SET NULL ON UPDATE CASCADE;

-- Add index for efficient child queries
CREATE INDEX "AgentGraphExecution_parentGraphExecutionId_idx" 
  ON "AgentGraphExecution"("parentGraphExecutionId");
```

### 2. Parent ID Propagation (`backend/blocks/agent.py`)
```python
# Extract current graph execution ID and pass as parent to child
execution = add_graph_execution(
    # ... other params
    parent_graph_exec_id=graph_exec_id,  # NEW: Track parent relationship
)
```

### 3. Data Layer (`backend/data/execution.py`)
```python
async def get_child_graph_executions(parent_exec_id: str) -> list[GraphExecution]:
    """Get all child executions of a parent execution."""
    children = await AgentGraphExecution.prisma().find_many(
        where={"parentGraphExecutionId": parent_exec_id, "isDeleted": False}
    )
    return [GraphExecution.from_db(child) for child in children]
```

### 4. Cascading Stop Logic (`backend/executor/utils.py`)
```python
async def stop_graph_execution(
    user_id: str,
    graph_exec_id: str,
    wait_timeout: float = 15.0,
    cascade: bool = True,  # NEW parameter
):
    # 1. Find all child executions
    if cascade:
        children = await _get_child_executions(graph_exec_id)
        
        # 2. Stop all children recursively in parallel
        if children:
            await asyncio.gather(
                *[stop_graph_execution(user_id, child.id, wait_timeout, True) 
                  for child in children],
                return_exceptions=True,  # Don't fail parent if child fails
            )
    
    # 3. Stop the parent execution
    # ... existing stop logic
```

### 5. Race Condition Prevention (`backend/executor/manager.py`)
```python
# Before executing queued child, check if parent was terminated
if parent_graph_exec_id:
    parent_exec = get_db_client().get_graph_execution_meta(parent_graph_exec_id, user_id)
    if parent_exec and parent_exec.status == ExecutionStatus.TERMINATED:
        # Skip execution, mark child as terminated
        get_db_client().update_graph_execution_stats(
            graph_exec_id=graph_exec_id,
            status=ExecutionStatus.TERMINATED,
        )
        return  # Don't start orphaned child
```

## How It Works

### Before (Broken)
```
User stops parent execution
    ↓
Parent terminates ✓
    ↓
Child executions keep running ✗
    ↓
User cannot stop children ✗
```

### After (Fixed)
```
User stops parent execution
    ↓
Query database for all children
    ↓
Recursively stop all children in parallel
    ↓
Wait for children to terminate
    ↓
Stop parent execution
    ↓
All executions in tree stopped ✓
```

### Race Prevention
```
Child in QUEUED status
    ↓
Parent stopped
    ↓
Child picked up by executor
    ↓
Pre-flight check: parent TERMINATED?
    ↓
Yes → Skip execution, mark child TERMINATED
    ↓
Child never runs ✓
```

## Edge Cases Handled
- **Deep nesting** - Recursive cascading handles multi-level trees
- **Queued children** - Pre-flight check prevents execution
- **Race conditions** - Child spawned during stop operation
- **Partial failures** - `return_exceptions=True` continues on error
- **Multiple children** - Parallel stop via `asyncio.gather()`
- **No parent** - Backward compatible (nullable field)
- **Already completed** - Existing status check handles it

## Performance Impact
- **Stop operation**: O(depth) with parallel execution vs O(1) before
- **Memory**: +36 bytes per execution (one UUID reference)
- **Database**: +1 query per tree level, indexed for efficiency

## API Changes (Backward Compatible)

### `stop_graph_execution()` - New Optional Parameter
```python
# Before
async def stop_graph_execution(user_id: str, graph_exec_id: str, wait_timeout: float = 15.0)

# After  
async def stop_graph_execution(user_id: str, graph_exec_id: str, wait_timeout: float = 15.0, cascade: bool = True)
```
**Default `cascade=True`** means existing callers get the new behavior
automatically.

### `add_graph_execution()` - New Optional Parameter
```python
async def add_graph_execution(..., parent_graph_exec_id: Optional[str] = None)
```

## Security & Safety
-  **User verification** - Users can only stop their own executions
(parent + children)
-  **No cycles** - Self-referential FK prevents infinite loops  
-  **Graceful degradation** - Errors in child stops don't block parent
stop
-  **Rate limits** - Existing execution rate limits still apply

## Testing Checklist

### Database Migration
- [x] Migration runs successfully  
- [x] Prisma client regenerates without errors
- [x] Existing tests pass

### Core Functionality  
- [ ] Manual test: Stop parent with running child → child stops
- [ ] Manual test: Stop parent with queued child → child never starts
- [ ] Unit test: Cascading stop with multiple children
- [ ] Unit test: Deep nesting (3+ levels)
- [ ] Integration test: Race condition prevention

## Breaking Changes
**None** - All changes are backward compatible with existing code.

## Rollback Plan
If issues arise:
1. **Code rollback**: Revert PR, redeploy
2. **Database rollback**: Drop column and constraints (non-destructive)

---

**Note**: This branch contains additional unrelated changes from merging
with `dev`. The core cascading stop feature involves only:
- `schema.prisma` + migration
- `backend/data/execution.py` 
- `backend/executor/utils.py`
- `backend/blocks/agent.py`
- `backend/executor/manager.py`

All other file changes are from dev branch updates and not part of this
feature.

🤖 Generated with [Claude Code](https://claude.ai/code)

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

* **New Features**
* Nested graph executions: parent-child tracking and retrieval of child
executions

* **Improvements**
* Cascading stop: stopping a parent optionally terminates child
executions
  * Parent execution IDs propagated through runs and surfaced in logs
  * Per-user/graph concurrent execution limits enforced

* **Bug Fixes**
* Skip enqueuing children if parent is terminated; robust handling when
parent-status checks fail

* **Tests**
  * Updated tests to cover parent linkage in graph creation
<!-- end of auto-generated comment: release notes by coderabbit.ai -->

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-10-29 11:11:22 +00:00
Zamil Majdy
5fb142c656 fix(backend/executor): ensure cluster lock release on all execution submission failures (#11281)
## Root Cause
During rolling deployment, execution
`97058338-052a-4528-87f4-98c88416bb7f` got stuck in QUEUED state
because:

1. Pod acquired cluster lock successfully during shutdown  
2. Subsequent setup operations failed (ThreadPoolExecutor shutdown,
resource exhaustion, etc.)
3. **No error handling existed** around the critical section after lock
acquisition
4. Cluster lock remained stuck in Redis for 5 minutes (TTL timeout)
5. Other pods couldn't acquire the lock, leaving execution permanently
queued

## The Fix

### Problem: Critical Section Not Protected
The original code had no error handling for the entire critical section
after successful lock acquisition:
```python
# Original code - no error handling after lock acquired
current_owner = cluster_lock.try_acquire()
if current_owner != self.executor_id:
    return  # didn't get lock
    
# CRITICAL SECTION - any failure here leaves lock stuck
self._execution_locks[graph_exec_id] = cluster_lock  # Could fail: memory
logger.info("Acquired cluster lock...")              # Could fail: logging  
cancel_event = threading.Event()                     # Could fail: resources
future = self.executor.submit(...)                   # Could fail: shutdown
self.active_graph_runs[...] = (future, cancel_event) # Could fail: memory
```

### Solution: Wrap Entire Critical Section  
Protect ALL operations after successful lock acquisition:
```python
# Fixed code - comprehensive error handling
current_owner = cluster_lock.try_acquire()
if current_owner != self.executor_id:
    return  # didn't get lock

# Wrap ENTIRE critical section after successful acquisition
try:
    self._execution_locks[graph_exec_id] = cluster_lock
    logger.info("Acquired cluster lock...")
    cancel_event = threading.Event()
    future = self.executor.submit(...)
    self.active_graph_runs[...] = (future, cancel_event)
except Exception as e:
    # Release cluster lock before requeue
    cluster_lock.release()
    del self._execution_locks[graph_exec_id] 
    _ack_message(reject=True, requeue=True)
    return
```

### Why This Comprehensive Approach Works
- **Complete protection**: Any failure in critical section → lock
released
- **Proper cleanup order**: Lock released → message requeued → another
pod can try
- **Uses existing infrastructure**: Leverages established
`_ack_message()` requeue logic
- **Handles all scenarios**: ThreadPoolExecutor shutdown, resource
exhaustion, memory issues, logging failures

## Protected Failure Scenarios
1. **Memory exhaustion**: `_execution_locks` assignment or
`active_graph_runs` assignment
2. **Resource exhaustion**: `threading.Event()` creation fails
3. **ThreadPoolExecutor shutdown**: `executor.submit()` with "cannot
schedule new futures after shutdown"
4. **Logging system failures**: `logger.info()` calls fail
5. **Any unexpected exceptions**: Network issues, disk problems, etc.

## Validation
-  All existing tests pass  
-  Maintains exact same success path behavior
-  Comprehensive error handling for all failure points
-  Minimal code change with maximum protection

## Impact
- **Eliminates stuck executions** during pod lifecycle events (rolling
deployments, scaling, crashes)
- **Faster recovery**: Immediate requeue vs 5-minute Redis TTL wait
- **Higher reliability**: Handles ANY failure in the critical section
- **Production-ready**: Comprehensive solution for distributed lock
management

This prevents the exact race condition that caused execution
`97058338-052a-4528-87f4-98c88416bb7f` to be stuck for >300 seconds,
plus many other potential failure scenarios.

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-10-29 08:56:24 +00:00
Pratyush Singh
e14594ff4a fix: handle oversized notifications by sending summary email (#11119) (#11130)
📨 Fix: Handle Oversized Notification Emails

## Summary

This PR adds logic to detect and handle oversized notification emails
exceeding Postmark's 5 MB limit. Instead of retrying indefinitely, the
system now sends a lightweight summary email with key stats and a
dashboard link.

## Changes

- Added size check in `EmailSender.send_templated()`
- Sends summary email when payload > ~4.5 MB
- Prevents infinite retries and queue clogging
- Added logs for oversized detection

Fixes #11119
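
A hedged sketch of the size check (threshold per the description above;
helper name hypothetical):

```python
MAX_EMAIL_BYTES = int(4.5 * 1024 * 1024)  # stay safely under Postmark's 5 MB limit


def body_too_large(rendered_body: str) -> bool:
    # Measure the encoded payload, since the limit applies to bytes on the wire
    return len(rendered_body.encode("utf-8")) > MAX_EMAIL_BYTES
```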

---------

Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
2025-10-29 00:57:13 +00:00
Zamil Majdy
de70ede54a fix(backend): prevent execution of deleted agents and cleanup orphaned resources (#11243)
## Summary
Fix critical bug where deleted agents continue running scheduled and
triggered executions indefinitely, consuming credits without user
control.

## Problem
When agents are deleted from user libraries, their schedules and webhook
triggers remain active, leading to:
-  Uncontrolled resource consumption 
-  "Unknown agent" executions that charge credits
-  No way for users to stop orphaned executions
-  Accumulation of orphaned database records

## Solution

### 1. Prevention: Library Validation Before Execution
- Add `is_graph_in_user_library()` function with efficient database
queries
- Validate graph accessibility before all executions in
`validate_and_construct_node_execution_input()`
- Use specific `GraphNotInLibraryError` for clear error handling

### 2. Cleanup: Remove Schedules & Webhooks on Deletion
- Enhanced `delete_library_agent()` to clean up associated schedules and
webhooks
- Comprehensive cleanup functions for both scheduled and triggered
executions
- Proper database transaction handling

### 3. Error-Based Cleanup: Handle Existing Orphaned Resources
- Catch `GraphNotInLibraryError` in scheduler and webhook handlers
- Automatically clean up orphaned resources when execution fails
- Graceful degradation without breaking existing workflows

### 4. Migration: Clean Up Historical Orphans
- SQL migration to remove existing orphaned schedules and webhooks
- Performance index for faster cleanup queries
- Proper logging and error handling

## Key Changes

### Core Library Validation
```python
# backend/data/graph.py - Single source of truth
async def is_graph_in_user_library(graph_id: str, user_id: str, graph_version: Optional[int] = None) -> bool:
    where_clause = {"userId": user_id, "agentGraphId": graph_id, "isDeleted": False, "isArchived": False}
    if graph_version is not None:
        where_clause["agentGraphVersion"] = graph_version
    count = await LibraryAgent.prisma().count(where=where_clause)
    return count > 0
```

### Enhanced Agent Deletion
```python
# backend/server/v2/library/db.py
async def delete_library_agent(library_agent_id: str, user_id: str, soft_delete: bool = True) -> None:
    # ... existing deletion logic ...
    await _cleanup_schedules_for_graph(graph_id=graph_id, user_id=user_id)
    await _cleanup_webhooks_for_graph(graph_id=graph_id, user_id=user_id)
```

### Execution Prevention
```python
# backend/executor/utils.py
if not await gdb.is_graph_in_user_library(graph_id=graph_id, user_id=user_id, graph_version=graph.version):
    raise GraphNotInLibraryError(f"Graph #{graph_id} is not accessible in your library")
```

### Error-Based Cleanup
```python
# backend/executor/scheduler.py & backend/server/integrations/router.py
except GraphNotInLibraryError as e:
    logger.warning(f"Execution blocked for deleted/archived graph {graph_id}")
    await _cleanup_orphaned_resources_for_graph(graph_id, user_id)
```

## Technical Implementation

### Database Efficiency
- Use `count()` instead of `find_first()` for faster queries
- Add performance index: `idx_library_agent_user_graph_active`
- Follow existing `prisma.is_connected()` patterns

### Error Handling Hierarchy
- **`GraphNotInLibraryError`**: Specific exception for deleted/archived
graphs
- **`NotAuthorizedError`**: Generic authorization errors (preserved for
user ID mismatches)
- Clear error messages for better debugging

### Code Organization
- Single source of truth for library validation in
`backend/data/graph.py`
- Import from centralized location to avoid duplication
- Top-level imports following codebase conventions

## Testing & Validation

### Functional Testing
-  Library validation prevents execution of deleted agents
-  Cleanup functions remove schedules and webhooks properly  
-  Error-based cleanup handles orphaned resources gracefully
-  Migration removes existing orphaned records

### Integration Testing
-  All existing tests pass (including `test_store_listing_graph`)
-  No breaking changes to existing functionality
-  Proper error propagation and handling

### Performance Testing
-  Efficient database queries with proper indexing
-  Minimal overhead for normal execution flows
-  Cleanup operations don't impact performance

## Impact

### User Experience
- 🎯 **Immediate**: Deleted agents stop running automatically
- 🎯 **Ongoing**: No more unexpected credit charges from orphaned
executions
- 🎯 **Cleanup**: Historical orphaned resources are removed

### System Reliability
- 🔒 **Security**: Users can only execute agents they have access to
- 🧹 **Cleanup**: Automatic removal of orphaned database records
- 📈 **Performance**: Efficient validation with minimal overhead

### Developer Experience
- 🎯 **Clear Errors**: Specific exception types for better debugging
- 🔧 **Maintainable**: Centralized library validation logic
- 📚 **Documented**: Comprehensive error handling patterns

## Files Modified
- `backend/data/graph.py` - Library validation function
- `backend/server/v2/library/db.py` - Enhanced agent deletion with
cleanup
- `backend/executor/utils.py` - Execution validation and prevention
- `backend/executor/scheduler.py` - Error-based cleanup for schedules
- `backend/server/integrations/router.py` - Error-based cleanup for
webhooks
- `backend/util/exceptions.py` - Specific error type for deleted graphs
-
`migrations/20251023000000_cleanup_orphaned_schedules_and_webhooks/migration.sql`
- Historical cleanup

## Breaking Changes
None. All changes are backward compatible and preserve existing
functionality.

## Follow-up Tasks
- [ ] Monitor cleanup effectiveness in production
- [ ] Consider adding metrics for orphaned resource detection
- [ ] Potential optimization of cleanup batch operations

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-10-28 23:48:35 +00:00
Ubbe
59657eb42e fix(frontend): onboarding step 5 adjustments (#11276)
## Changes 🏗️

A couple of improvements on **Onboarding Step 5**:
- Show a spinner when the page is loading (better contrast/context than
a skeleton in this case)
- Prevent the run button from being disabled if credentials failed to load
  - While disabling it is good/expected behavior, this will help us debug
the production issue where credentials fail to load silently, since
running the agent will throw an error we can see

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Create a new account/signup
  - [x] On Onboarding Step 5 test the above
2025-10-28 13:58:04 +00:00
Reinier van der Leer
5e5f45a713 fix(backend): Fix various warnings (#11252)
- Resolves #11251

This fixes all the warnings mentioned in #11251, reducing noise and
making our logs and error alerts more useful :)

### Changes 🏗️

- Remove "Block {block_name} has multiple credential inputs" warning
(not actually an issue)
- Rename `json` attribute of `MainCodeExecutionResult` to `json_data`;
retain serialized name through a field alias (sketched after this list)
- Replace `Path(regex=...)` with `Path(pattern=...)` in
`get_shared_execution` endpoint parameter config
- Change Uvicorn's WebSocket module to new Sans-I/O implementation for
WS server
- Disable Uvicorn's WebSocket module for REST server
- Remove deprecated `enable_cleanup_closed=True` argument in
`CloudStorageHandler` implementation
- Replace Prisma transaction timeout `int` argument with a `timedelta`
value
- Update Sentry SDK to latest version (v2.42.1)
- Broaden filter for cleanup warnings from indirect dependency `litellm`
- Fix handling of `MissingConfigError` in REST server endpoints
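
For the `json_data` rename, a sketch of the alias pattern (Pydantic v2
assumed; the field type here is illustrative):

```python
from typing import Any

from pydantic import BaseModel, Field


class MainCodeExecutionResult(BaseModel):
    # Attribute renamed to json_data, while "json" stays the wire name
    json_data: Any = Field(
        default=None, validation_alias="json", serialization_alias="json"
    )


m = MainCodeExecutionResult.model_validate({"json": {"ok": True}})
print(m.model_dump(by_alias=True))  # {"json": {"ok": True}}
```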

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - Check that the warnings are actually gone
- [x] Deploy to dev environment and run a graph; check for any warnings
  - Test WebSocket server
- [x] Run an agent in the Builder; make sure real-time execution updates
still work
2025-10-28 13:18:45 +00:00
Swifty
b31d60276a fix(backend/store): Sanitize all sql terms (#11228)
Categories and creators were not sanitized in the full-text search.

- Apply sanitization to categories and creators

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] run tests to check it still works
2025-10-27 13:16:37 +01:00
Toran Bruce Richards
b52e95e1fc fix(blocks): Add missing error output pins to all Firecrawl blocks (#11256)
Added error output pins to all Firecrawl blocks as standard on the
AutoGPT platform. The base block execution code already handles error
yielding, so no try-catch logic was needed.

- FirecrawlScrapeBlock: Added error output pin for scrape failures
- FirecrawlCrawlBlock: Added error output pin for crawl failures
- FirecrawlExtractBlock: Added error output pin for extraction failures
- FirecrawlMapBlock: Added error output pin for map failures
- FirecrawlSearchBlock: Added error output pin for search failures

Resolves #11253

<!-- Clearly explain the need for these changes: -->

### Changes 🏗️

<!-- Concisely describe all of the changes made in this pull request:
-->

### Checklist 📋

#### For code changes:
- [ ] I have clearly listed my changes in the PR description
- [ ] I have made a test plan
- [ ] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [ ] ...

<details>
  <summary>Example test plan</summary>
  
  - [ ] Create from scratch and execute an agent with at least 3 blocks
- [ ] Import an agent from file upload, and confirm it executes
correctly
  - [ ] Upload agent to marketplace
- [ ] Import an agent from marketplace and confirm it executes correctly
  - [ ] Edit an agent from monitor, and confirm it executes correctly
</details>

#### For configuration changes:

- [ ] `.env.default` is updated or already compatible with my changes
- [ ] `docker-compose.yml` is updated or already compatible with my
changes
- [ ] I have included a list of my configuration changes in the PR
description (under **Changes**)

<details>
  <summary>Examples of configuration changes</summary>

  - Changing ports
  - Adding new services that need to communicate with each other
  - Secrets or environment variable changes
  - New or infrastructure changes such as databases
</details>

Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Toran Bruce Richards <Torantulino@users.noreply.github.com>
2025-10-27 08:36:28 +00:00
Bently
f4ba02f2f1 feat(blocks/revid): Add cost configs for revid video blocks (#11242)
Updated block costs in `backend/backend/data/block_cost_config.py`:
- **AIShortformVideoCreatorBlock**: Updated from 50 credits to 307
- **AIAdMakerVideoCreatorBlock**: Added cost of 714 credits
- **AIScreenshotToVideoAdBlock**: Added cost of 612 credits

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Verify AIShortformVideoCreatorBlock costs 307 credits when executed
  - [x] Verify AIAdMakerVideoCreatorBlock costs 714 credits when executed
  - [x] Verify AIScreenshotToVideoAdBlock costs 612 credits when executed
2025-10-24 18:35:37 +01:00
824 changed files with 95141 additions and 12598 deletions

View File

@@ -80,7 +80,7 @@ jobs:
- name: Set up Node.js
uses: actions/setup-node@v4
with:
node-version: "21"
node-version: "22"
- name: Enable corepack
run: corepack enable

View File

@@ -44,6 +44,12 @@ jobs:
with:
fetch-depth: 1
- name: Free Disk Space (Ubuntu)
uses: jlumbroso/free-disk-space@v1.3.1
with:
large-packages: false # slow
docker-images: false # limited benefit
# Backend Python/Poetry setup (mirrors platform-backend-ci.yml)
- name: Set up Python
uses: actions/setup-python@v5
@@ -90,7 +96,7 @@ jobs:
- name: Set up Node.js
uses: actions/setup-node@v4
with:
node-version: "21"
node-version: "22"
- name: Enable corepack
run: corepack enable

View File

@@ -78,7 +78,7 @@ jobs:
- name: Set up Node.js
uses: actions/setup-node@v4
with:
node-version: "21"
node-version: "22"
- name: Enable corepack
run: corepack enable
@@ -299,4 +299,4 @@ jobs:
echo "✅ AutoGPT Platform development environment setup complete!"
echo "🚀 Ready for development with Docker services running"
echo "📝 Backend server: poetry run serve (port 8000)"
echo "🌐 Frontend server: pnpm dev (port 3000)"
echo "🌐 Frontend server: pnpm dev (port 3000)"

View File

@@ -12,6 +12,10 @@ on:
- "autogpt_platform/frontend/**"
merge_group:
concurrency:
group: ${{ github.workflow }}-${{ github.event_name == 'merge_group' && format('merge-queue-{0}', github.ref) || format('{0}-{1}', github.ref, github.event.pull_request.number || github.sha) }}
cancel-in-progress: ${{ github.event_name == 'pull_request' }}
defaults:
run:
shell: bash
@@ -30,7 +34,7 @@ jobs:
- name: Set up Node.js
uses: actions/setup-node@v4
with:
node-version: "21"
node-version: "22.18.0"
- name: Enable corepack
run: corepack enable
@@ -62,7 +66,7 @@ jobs:
- name: Set up Node.js
uses: actions/setup-node@v4
with:
node-version: "21"
node-version: "22.18.0"
- name: Enable corepack
run: corepack enable
@@ -97,7 +101,7 @@ jobs:
- name: Set up Node.js
uses: actions/setup-node@v4
with:
node-version: "21"
node-version: "22.18.0"
- name: Enable corepack
run: corepack enable
@@ -138,7 +142,7 @@ jobs:
- name: Set up Node.js
uses: actions/setup-node@v4
with:
node-version: "21"
node-version: "22.18.0"
- name: Enable corepack
run: corepack enable

View File

@@ -12,6 +12,10 @@ on:
- "autogpt_platform/**"
merge_group:
concurrency:
group: ${{ github.workflow }}-${{ github.event_name == 'merge_group' && format('merge-queue-{0}', github.ref) || github.head_ref && format('pr-{0}', github.event.pull_request.number) || github.sha }}
cancel-in-progress: ${{ github.event_name == 'pull_request' }}
defaults:
run:
shell: bash
@@ -30,7 +34,7 @@ jobs:
- name: Set up Node.js
uses: actions/setup-node@v4
with:
node-version: "21"
node-version: "22.18.0"
- name: Enable corepack
run: corepack enable
@@ -66,7 +70,7 @@ jobs:
- name: Set up Node.js
uses: actions/setup-node@v4
with:
node-version: "21"
node-version: "22.18.0"
- name: Enable corepack
run: corepack enable

1
.gitignore vendored
View File

@@ -178,3 +178,4 @@ autogpt_platform/backend/settings.py
*.ign.*
.test-contents
.claude/settings.local.json
/autogpt_platform/backend/logs

View File

@@ -192,6 +192,8 @@ Quick steps:
Note: when making many new blocks analyze the interfaces for each of these blocks and picture if they would go well together in a graph based editor or would they struggle to connect productively?
ex: do the inputs and outputs tie well together?
If you get any pushback or hit complex block conditions check the new_blocks guide in the docs.
**Modifying the API:**
1. Update route in `/backend/backend/server/routers/`

View File

@@ -1,4 +1,4 @@
.PHONY: start-core stop-core logs-core format lint migrate run-backend run-frontend
.PHONY: start-core stop-core logs-core format lint migrate run-backend run-frontend load-store-agents
# Run just Supabase + Redis + RabbitMQ
start-core:
@@ -42,7 +42,10 @@ run-frontend:
test-data:
cd backend && poetry run python test/test_data_creator.py
load-store-agents:
cd backend && poetry run load-store-agents
help:
@echo "Usage: make <target>"
@echo "Targets:"
@@ -54,4 +57,5 @@ help:
@echo " migrate - Run backend database migrations"
@echo " run-backend - Run the backend FastAPI server"
@echo " run-frontend - Run the frontend Next.js development server"
@echo " test-data - Run the test data creator"
@echo " test-data - Run the test data creator"
@echo " load-store-agents - Load store agents from agents/ folder into test database"

View File

@@ -1,5 +1,10 @@
from .config import verify_settings
from .dependencies import get_user_id, requires_admin_user, requires_user
from .dependencies import (
get_optional_user_id,
get_user_id,
requires_admin_user,
requires_user,
)
from .helpers import add_auth_responses_to_openapi
from .models import User
@@ -8,6 +13,7 @@ __all__ = [
"get_user_id",
"requires_admin_user",
"requires_user",
"get_optional_user_id",
"add_auth_responses_to_openapi",
"User",
]

View File

@@ -4,11 +4,53 @@ FastAPI dependency functions for JWT-based authentication and authorization.
These are the high-level dependency functions used in route definitions.
"""
import logging
import fastapi
from fastapi.security import HTTPAuthorizationCredentials, HTTPBearer
from .jwt_utils import get_jwt_payload, verify_user
from .models import User
optional_bearer = HTTPBearer(auto_error=False)
# Header name for admin impersonation
IMPERSONATION_HEADER_NAME = "X-Act-As-User-Id"
logger = logging.getLogger(__name__)
def get_optional_user_id(
credentials: HTTPAuthorizationCredentials | None = fastapi.Security(
optional_bearer
),
) -> str | None:
"""
Attempts to extract the user ID ("sub" claim) from a Bearer JWT if provided.
This dependency allows for both authenticated and anonymous access. If a valid bearer token is
supplied, it parses the JWT and extracts the user ID. If the token is missing or invalid, it returns None,
treating the request as anonymous.
Args:
credentials: Optional HTTPAuthorizationCredentials object from FastAPI Security dependency.
Returns:
The user ID (str) extracted from the JWT "sub" claim, or None if no valid token is present.
"""
if not credentials:
return None
try:
# Parse JWT token to get user ID
from autogpt_libs.auth.jwt_utils import parse_jwt_token
payload = parse_jwt_token(credentials.credentials)
return payload.get("sub")
except Exception as e:
logger.debug(f"Auth token validation failed (anonymous access): {e}")
return None
async def requires_user(jwt_payload: dict = fastapi.Security(get_jwt_payload)) -> User:
"""
@@ -32,16 +74,44 @@ async def requires_admin_user(
return verify_user(jwt_payload, admin_only=True)
async def get_user_id(jwt_payload: dict = fastapi.Security(get_jwt_payload)) -> str:
async def get_user_id(
request: fastapi.Request, jwt_payload: dict = fastapi.Security(get_jwt_payload)
) -> str:
"""
FastAPI dependency that returns the ID of the authenticated user.
Supports admin impersonation via X-Act-As-User-Id header:
- If the header is present and user is admin, returns the impersonated user ID
- Otherwise returns the authenticated user's own ID
- Logs all impersonation actions for audit trail
Raises:
HTTPException: 401 for authentication failures or missing user ID
HTTPException: 403 if non-admin tries to use impersonation
"""
# Get the authenticated user's ID from JWT
user_id = jwt_payload.get("sub")
if not user_id:
raise fastapi.HTTPException(
status_code=401, detail="User ID not found in token"
)
# Check for admin impersonation header
impersonate_header = request.headers.get(IMPERSONATION_HEADER_NAME, "").strip()
if impersonate_header:
# Verify the authenticated user is an admin
authenticated_user = verify_user(jwt_payload, admin_only=False)
if authenticated_user.role != "admin":
raise fastapi.HTTPException(
status_code=403, detail="Only admin users can impersonate other users"
)
# Log the impersonation for audit trail
logger.info(
f"Admin impersonation: {authenticated_user.user_id} ({authenticated_user.email}) "
f"acting as user {impersonate_header} for requesting {request.method} {request.url}"
)
return impersonate_header
return user_id
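For orientation, a minimal sketch of how the exported dependencies might be wired into routes — get_optional_user_id where anonymous access is allowed, get_user_id where the impersonation header should be honored. The route paths here are illustrative, not taken from the codebase:

import fastapi

from autogpt_libs.auth import get_optional_user_id, get_user_id

app = fastapi.FastAPI()

@app.get("/public-listing")  # hypothetical mixed-access endpoint
def public_listing(user_id: str | None = fastapi.Depends(get_optional_user_id)):
    # user_id is None for anonymous visitors, the JWT "sub" claim otherwise
    return {"personalized": user_id is not None}

@app.get("/my-agents")  # hypothetical authenticated endpoint
def my_agents(user_id: str = fastapi.Depends(get_user_id)):
    # With an admin token plus X-Act-As-User-Id, user_id is the impersonated user
    return {"user_id": user_id}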

View File

@@ -4,9 +4,10 @@ Tests the full authentication flow from HTTP requests to user validation.
"""
import os
from unittest.mock import Mock
import pytest
from fastapi import FastAPI, HTTPException, Security
from fastapi import FastAPI, HTTPException, Request, Security
from fastapi.testclient import TestClient
from pytest_mock import MockerFixture
@@ -45,6 +46,7 @@ class TestAuthDependencies:
"""Create a test client."""
return TestClient(app)
@pytest.mark.asyncio
async def test_requires_user_with_valid_jwt_payload(self, mocker: MockerFixture):
"""Test requires_user with valid JWT payload."""
jwt_payload = {"sub": "user-123", "role": "user", "email": "user@example.com"}
@@ -58,6 +60,7 @@ class TestAuthDependencies:
assert user.user_id == "user-123"
assert user.role == "user"
@pytest.mark.asyncio
async def test_requires_user_with_admin_jwt_payload(self, mocker: MockerFixture):
"""Test requires_user accepts admin users."""
jwt_payload = {
@@ -73,6 +76,7 @@ class TestAuthDependencies:
assert user.user_id == "admin-456"
assert user.role == "admin"
@pytest.mark.asyncio
async def test_requires_user_missing_sub(self):
"""Test requires_user with missing user ID."""
jwt_payload = {"role": "user", "email": "user@example.com"}
@@ -82,6 +86,7 @@ class TestAuthDependencies:
assert exc_info.value.status_code == 401
assert "User ID not found" in exc_info.value.detail
@pytest.mark.asyncio
async def test_requires_user_empty_sub(self):
"""Test requires_user with empty user ID."""
jwt_payload = {"sub": "", "role": "user"}
@@ -90,6 +95,7 @@ class TestAuthDependencies:
await requires_user(jwt_payload)
assert exc_info.value.status_code == 401
@pytest.mark.asyncio
async def test_requires_admin_user_with_admin(self, mocker: MockerFixture):
"""Test requires_admin_user with admin role."""
jwt_payload = {
@@ -105,6 +111,7 @@ class TestAuthDependencies:
assert user.user_id == "admin-789"
assert user.role == "admin"
@pytest.mark.asyncio
async def test_requires_admin_user_with_regular_user(self):
"""Test requires_admin_user rejects regular users."""
jwt_payload = {"sub": "user-123", "role": "user", "email": "user@example.com"}
@@ -114,6 +121,7 @@ class TestAuthDependencies:
assert exc_info.value.status_code == 403
assert "Admin access required" in exc_info.value.detail
@pytest.mark.asyncio
async def test_requires_admin_user_missing_role(self):
"""Test requires_admin_user with missing role."""
jwt_payload = {"sub": "user-123", "email": "user@example.com"}
@@ -121,31 +129,40 @@ class TestAuthDependencies:
with pytest.raises(KeyError):
await requires_admin_user(jwt_payload)
@pytest.mark.asyncio
async def test_get_user_id_with_valid_payload(self, mocker: MockerFixture):
"""Test get_user_id extracts user ID correctly."""
request = Mock(spec=Request)
request.headers = {}
jwt_payload = {"sub": "user-id-xyz", "role": "user"}
mocker.patch(
"autogpt_libs.auth.dependencies.get_jwt_payload", return_value=jwt_payload
)
user_id = await get_user_id(jwt_payload)
user_id = await get_user_id(request, jwt_payload)
assert user_id == "user-id-xyz"
@pytest.mark.asyncio
async def test_get_user_id_missing_sub(self):
"""Test get_user_id with missing user ID."""
request = Mock(spec=Request)
request.headers = {}
jwt_payload = {"role": "user"}
with pytest.raises(HTTPException) as exc_info:
await get_user_id(jwt_payload)
await get_user_id(request, jwt_payload)
assert exc_info.value.status_code == 401
assert "User ID not found" in exc_info.value.detail
@pytest.mark.asyncio
async def test_get_user_id_none_sub(self):
"""Test get_user_id with None user ID."""
request = Mock(spec=Request)
request.headers = {}
jwt_payload = {"sub": None, "role": "user"}
with pytest.raises(HTTPException) as exc_info:
await get_user_id(jwt_payload)
await get_user_id(request, jwt_payload)
assert exc_info.value.status_code == 401
@@ -170,6 +187,7 @@ class TestAuthDependenciesIntegration:
return _create_token
@pytest.mark.asyncio
async def test_endpoint_auth_enabled_no_token(self):
"""Test endpoints require token when auth is enabled."""
app = FastAPI()
@@ -184,6 +202,7 @@ class TestAuthDependenciesIntegration:
response = client.get("/test")
assert response.status_code == 401
@pytest.mark.asyncio
async def test_endpoint_with_valid_token(self, create_token):
"""Test endpoint with valid JWT token."""
app = FastAPI()
@@ -203,6 +222,7 @@ class TestAuthDependenciesIntegration:
assert response.status_code == 200
assert response.json()["user_id"] == "test-user"
@pytest.mark.asyncio
async def test_admin_endpoint_requires_admin_role(self, create_token):
"""Test admin endpoint rejects non-admin users."""
app = FastAPI()
@@ -240,6 +260,7 @@ class TestAuthDependenciesIntegration:
class TestAuthDependenciesEdgeCases:
"""Edge case tests for authentication dependencies."""
@pytest.mark.asyncio
async def test_dependency_with_complex_payload(self):
"""Test dependencies handle complex JWT payloads."""
complex_payload = {
@@ -263,6 +284,7 @@ class TestAuthDependenciesEdgeCases:
admin = await requires_admin_user(complex_payload)
assert admin.role == "admin"
@pytest.mark.asyncio
async def test_dependency_with_unicode_in_payload(self):
"""Test dependencies handle unicode in JWT payloads."""
unicode_payload = {
@@ -276,6 +298,7 @@ class TestAuthDependenciesEdgeCases:
assert "😀" in user.user_id
assert user.email == "测试@example.com"
@pytest.mark.asyncio
async def test_dependency_with_null_values(self):
"""Test dependencies handle null values in payload."""
null_payload = {
@@ -290,6 +313,7 @@ class TestAuthDependenciesEdgeCases:
assert user.user_id == "user-123"
assert user.email is None
@pytest.mark.asyncio
async def test_concurrent_requests_isolation(self):
"""Test that concurrent requests don't interfere with each other."""
payload1 = {"sub": "user-1", "role": "user"}
@@ -314,6 +338,7 @@ class TestAuthDependenciesEdgeCases:
({"sub": "user", "role": "user"}, "Admin access required", True),
],
)
@pytest.mark.asyncio
async def test_dependency_error_cases(
self, payload, expected_error: str, admin_only: bool
):
@@ -325,6 +350,7 @@ class TestAuthDependenciesEdgeCases:
verify_user(payload, admin_only=admin_only)
assert expected_error in exc_info.value.detail
@pytest.mark.asyncio
async def test_dependency_valid_user(self):
"""Test valid user case for dependency."""
# Import verify_user to test it directly since dependencies use FastAPI Security
@@ -333,3 +359,196 @@ class TestAuthDependenciesEdgeCases:
# Valid case
user = verify_user({"sub": "user", "role": "user"}, admin_only=False)
assert user.user_id == "user"
class TestAdminImpersonation:
"""Test suite for admin user impersonation functionality."""
@pytest.mark.asyncio
async def test_admin_impersonation_success(self, mocker: MockerFixture):
"""Test admin successfully impersonating another user."""
request = Mock(spec=Request)
request.headers = {"X-Act-As-User-Id": "target-user-123"}
jwt_payload = {
"sub": "admin-456",
"role": "admin",
"email": "admin@example.com",
}
# Mock verify_user to return admin user data
mock_verify_user = mocker.patch("autogpt_libs.auth.dependencies.verify_user")
mock_verify_user.return_value = Mock(
user_id="admin-456", email="admin@example.com", role="admin"
)
# Mock logger to verify audit logging
mock_logger = mocker.patch("autogpt_libs.auth.dependencies.logger")
mocker.patch(
"autogpt_libs.auth.dependencies.get_jwt_payload", return_value=jwt_payload
)
user_id = await get_user_id(request, jwt_payload)
# Should return the impersonated user ID
assert user_id == "target-user-123"
# Should log the impersonation attempt
mock_logger.info.assert_called_once()
log_call = mock_logger.info.call_args[0][0]
assert "Admin impersonation:" in log_call
assert "admin@example.com" in log_call
assert "target-user-123" in log_call
@pytest.mark.asyncio
async def test_non_admin_impersonation_attempt(self, mocker: MockerFixture):
"""Test non-admin user attempting impersonation returns 403."""
request = Mock(spec=Request)
request.headers = {"X-Act-As-User-Id": "target-user-123"}
jwt_payload = {
"sub": "regular-user",
"role": "user",
"email": "user@example.com",
}
# Mock verify_user to return regular user data
mock_verify_user = mocker.patch("autogpt_libs.auth.dependencies.verify_user")
mock_verify_user.return_value = Mock(
user_id="regular-user", email="user@example.com", role="user"
)
mocker.patch(
"autogpt_libs.auth.dependencies.get_jwt_payload", return_value=jwt_payload
)
with pytest.raises(HTTPException) as exc_info:
await get_user_id(request, jwt_payload)
assert exc_info.value.status_code == 403
assert "Only admin users can impersonate other users" in exc_info.value.detail
@pytest.mark.asyncio
async def test_impersonation_empty_header(self, mocker: MockerFixture):
"""Test impersonation with empty header falls back to regular user ID."""
request = Mock(spec=Request)
request.headers = {"X-Act-As-User-Id": ""}
jwt_payload = {
"sub": "admin-456",
"role": "admin",
"email": "admin@example.com",
}
mocker.patch(
"autogpt_libs.auth.dependencies.get_jwt_payload", return_value=jwt_payload
)
user_id = await get_user_id(request, jwt_payload)
# Should fall back to the admin's own user ID
assert user_id == "admin-456"
@pytest.mark.asyncio
async def test_impersonation_missing_header(self, mocker: MockerFixture):
"""Test normal behavior when impersonation header is missing."""
request = Mock(spec=Request)
request.headers = {} # No impersonation header
jwt_payload = {
"sub": "admin-456",
"role": "admin",
"email": "admin@example.com",
}
mocker.patch(
"autogpt_libs.auth.dependencies.get_jwt_payload", return_value=jwt_payload
)
user_id = await get_user_id(request, jwt_payload)
# Should return the admin's own user ID
assert user_id == "admin-456"
@pytest.mark.asyncio
async def test_impersonation_audit_logging_details(self, mocker: MockerFixture):
"""Test that impersonation audit logging includes all required details."""
request = Mock(spec=Request)
request.headers = {"X-Act-As-User-Id": "victim-user-789"}
jwt_payload = {
"sub": "admin-999",
"role": "admin",
"email": "superadmin@company.com",
}
# Mock verify_user to return admin user data
mock_verify_user = mocker.patch("autogpt_libs.auth.dependencies.verify_user")
mock_verify_user.return_value = Mock(
user_id="admin-999", email="superadmin@company.com", role="admin"
)
# Mock logger to capture audit trail
mock_logger = mocker.patch("autogpt_libs.auth.dependencies.logger")
mocker.patch(
"autogpt_libs.auth.dependencies.get_jwt_payload", return_value=jwt_payload
)
user_id = await get_user_id(request, jwt_payload)
# Verify all audit details are logged
assert user_id == "victim-user-789"
mock_logger.info.assert_called_once()
log_message = mock_logger.info.call_args[0][0]
assert "Admin impersonation:" in log_message
assert "superadmin@company.com" in log_message
assert "victim-user-789" in log_message
@pytest.mark.asyncio
async def test_impersonation_header_case_sensitivity(self, mocker: MockerFixture):
"""Test that impersonation header is case-sensitive."""
request = Mock(spec=Request)
# Use wrong case - should not trigger impersonation
request.headers = {"x-act-as-user-id": "target-user-123"}
jwt_payload = {
"sub": "admin-456",
"role": "admin",
"email": "admin@example.com",
}
mocker.patch(
"autogpt_libs.auth.dependencies.get_jwt_payload", return_value=jwt_payload
)
user_id = await get_user_id(request, jwt_payload)
# Should fall back to admin's own ID (header case mismatch)
assert user_id == "admin-456"
@pytest.mark.asyncio
async def test_impersonation_with_whitespace_header(self, mocker: MockerFixture):
"""Test impersonation with whitespace in header value."""
request = Mock(spec=Request)
request.headers = {"X-Act-As-User-Id": " target-user-123 "}
jwt_payload = {
"sub": "admin-456",
"role": "admin",
"email": "admin@example.com",
}
# Mock verify_user to return admin user data
mock_verify_user = mocker.patch("autogpt_libs.auth.dependencies.verify_user")
mock_verify_user.return_value = Mock(
user_id="admin-456", email="admin@example.com", role="admin"
)
# Mock logger
mock_logger = mocker.patch("autogpt_libs.auth.dependencies.logger")
mocker.patch(
"autogpt_libs.auth.dependencies.get_jwt_payload", return_value=jwt_payload
)
user_id = await get_user_id(request, jwt_payload)
# Should strip whitespace and impersonate successfully
assert user_id == "target-user-123"
mock_logger.info.assert_called_once()
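From the client's perspective, the impersonation flow exercised by these tests is just one extra header on an admin-authenticated request. A hedged sketch using requests (URL and token are placeholders):

import requests

ADMIN_JWT = "<admin bearer token>"  # placeholder

resp = requests.get(
    "https://api.example.com/my-agents",  # placeholder URL
    headers={
        "Authorization": f"Bearer {ADMIN_JWT}",  # admin's own token
        "X-Act-As-User-Id": "target-user-123",   # user to act as
    },
)
# A non-admin token with this header gets 403; omitting the header runs the
# request as the token's own user, and the server logs every successful
# impersonation for the audit trail.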

View File

@@ -134,13 +134,6 @@ POSTMARK_WEBHOOK_TOKEN=
# Error Tracking
SENTRY_DSN=
# Cloudflare Turnstile (CAPTCHA) Configuration
# Get these from the Cloudflare Turnstile dashboard: https://dash.cloudflare.com/?to=/:account/turnstile
# This is the backend secret key
TURNSTILE_SECRET_KEY=
# This is the verify URL
TURNSTILE_VERIFY_URL=https://challenges.cloudflare.com/turnstile/v0/siteverify
# Feature Flags
LAUNCH_DARKLY_SDK_KEY=

View File

@@ -0,0 +1,242 @@
listing_id,storeListingVersionId,slug,agent_name,agent_video,agent_image,featured,sub_heading,description,categories,useForOnboarding,is_available
6e60a900-9d7d-490e-9af2-a194827ed632,d85882b8-633f-44ce-a315-c20a8c123d19,flux-ai-image-generator,Flux AI Image Generator,,"[""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/ca154dd1-140e-454c-91bd-2d8a00de3f08.jpg"",""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/577d995d-bc38-40a9-a23f-1f30f5774bdb.jpg"",""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/415db1b7-115c-43ab-bd6c-4e9f7ef95be1.jpg""]",false,Transform ideas into breathtaking images,"Transform ideas into breathtaking images with this AI-powered Image Generator. Using cutting-edge Flux AI technology, the tool crafts highly detailed, photorealistic visuals from simple text prompts. Perfect for artists, marketers, and content creators, this generator produces unique images tailored to user specifications. From fantastical scenes to lifelike portraits, users can unleash creativity with professional-quality results in seconds. Easy to use and endlessly versatile, bring imagination to life with the AI Image Generator today!","[""creative""]",false,true
f11fc6e9-6166-4676-ac5d-f07127b270c1,c775f60d-b99f-418b-8fe0-53172258c3ce,youtube-transcription-scraper,YouTube Transcription Scraper,https://youtu.be/H8S3pU68lGE,"[""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/65bce54b-0124-4b0d-9e3e-f9b89d0dc99e.jpg""]",false,Fetch the transcriptions from the most popular YouTube videos in your chosen topic,"Effortlessly gather transcriptions from multiple YouTube videos with this agent. It scrapes and compiles video transcripts into a clean, organized list, making it easy to extract insights, quotes, or content from various sources in one go. Ideal for researchers, content creators, and marketers looking to quickly analyze or repurpose video content.","[""writing""]",false,true
17908889-b599-4010-8e4f-bed19b8f3446,6e16e65a-ad34-4108-b4fd-4a23fced5ea2,business-ownerceo-finder,Decision Maker Lead Finder,,"[""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/1020d94e-b6a2-4fa7-bbdf-2c218b0de563.jpg""]",false,Contact CEOs today,"Find the key decision-makers you need, fast.
This agent identifies business owners or CEOs of local companies in any area you choose. Simply enter what kind of businesses you're looking for and where, and it will:
* Search the area and gather public information
* Return names, roles, and contact details when available
* Provide smart Google search suggestions if details aren't found
Perfect for:
* B2B sales teams seeking verified leads
* Recruiters sourcing local talent
* Researchers looking to connect with business leaders
Save hours of manual searching and get straight to the people who matter most.","[""business""]",true,true
72beca1d-45ea-4403-a7ce-e2af168ee428,415b7352-0dc6-4214-9d87-0ad3751b711d,smart-meeting-brief,Smart Meeting Prep,https://youtu.be/9ydZR2hkxaY,"[""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/2f116ce1-63ae-4d39-a5cd-f514defc2b97.png"",""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/0a71a60a-2263-4f12-9836-9c76ab49f155.png"",""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/95327695-9184-403c-907a-a9d3bdafa6a5.png"",""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/2bc77788-790b-47d4-8a61-ce97b695e9f5.png""]",true,Business meeting briefings delivered daily,"Never walk into a meeting unprepared again. Every day at 4 pm, the Smart Meeting Prep Agent scans your calendar for tomorrow's external meetings. It reviews your past email exchanges, researches each participant's background and role, and compiles the insights into a concise briefing, so you can close your workday ready for tomorrow's calls.
How It Works
1. At 4 pm, the agent scans your calendar and identifies external meetings scheduled for the next day.
2. It reviews recent email threads with each participant to surface key relationship history and communication context.
3. It conducts online research to gather publicly available information on roles, company backgrounds, and relevant professional data.
4. It produces a unified briefing for each participant, including past exchange highlights, profile notes, and strategic conversation points.","[""personal""]",true,true
9fa5697a-617b-4fae-aea0-7dbbed279976,b8ceb480-a7a2-4c90-8513-181a49f7071f,automated-support-ai,Automated Support Agent,https://youtu.be/nBMfu_5sgDA,"[""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/ed56febc-2205-4179-9e7e-505d8500b66c.png""]",true,Automate up to 80 percent of inbound support emails,"Overview:
Support teams spend countless hours on basic tickets. This agent automates repetitive customer support tasks. It reads incoming requests, researches your knowledge base, and responds automatically when confident. When unsure, it escalates to a human for final resolution.
How it Works:
New support emails are routed to the agent.
The agent checks internal documentation for answers.
It measures confidence in the answer found and either replies directly or escalates to a human.
Business Value:
Automating the easy 80 percent of support tickets allows your team to focus on high-value, complex customer issues, improving efficiency and response times.","[""business""]",false,true
2bdac92b-a12c-4131-bb46-0e3b89f61413,31daf49d-31d3-476b-aa4c-099abc59b458,unspirational-poster-maker,Unspirational Poster Maker,,"[""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/6a490dac-27e5-405f-a4c4-8d1c55b85060.jpg"",""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/d343fbb5-478c-4e38-94df-4337293b61f1.jpg""]",false,Because adulting is hard,"This witty AI agent generates hilariously relatable ""motivational"" posters that tackle the everyday struggles of procrastination, overthinking, and workplace chaos with a blend of absurdity and sarcasm. From goldfish facing impossible tasks to cats in existential crises, The Unspirational Poster Maker designs tongue-in-cheek graphics and captions that mock productivity clichés and embrace our collective struggles to ""get it together."" Perfect for adding a touch of humour to the workday, these posters remind us that sometimes, all we can do is laugh at the chaos.","[""creative""]",false,true
9adf005e-2854-4cc7-98cf-f7103b92a7b7,a03b0d8c-4751-43d6-a54e-c3b7856ba4e3,ai-shortform-video-generator-create-viral-ready-content,AI Video Generator,,"[""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/8d2670b9-fea5-4966-a597-0a4511bffdc3.png"",""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/aabe8aec-0110-4ce7-a259-4f86fe8fe07d.png""]",false,Create Viral-Ready Shorts Content in Seconds,"OVERVIEW
Transform any trending headline or broad topic into a polished, vertical short-form video in a single run.
The agent automates research, scriptwriting, metadata creation, and Revid.ai rendering, returning one ready-to-publish MP4 plus its title, script and hashtags.
HOW IT WORKS
1. Input a topic or an exact news headline.
2. The agent fetches live search results and selects the most engaging related story.
3. Key facts are summarised into concise research notes.
4. Claude writes a 30–35 second script with visual cues, a three-second hook, tension loops, and a call-to-action.
5. GPT-4o generates an eye-catching title and one or two discoverability hashtags.
6. The script is sent to a state-of-the-art AI video generator to render a single 9:16 MP4 (default: 720 p, 30 fps, voice “Brian”, style “movingImage”, music “Bladerunner 2049”).
All voice, style and resolution settings can be adjusted in the Builder before you press ""Run"".
7. Output delivered: Title, Script, Hashtags, Video URL.
KEY USE CASES
- Broad-topic explainers (e.g. “Artificial Intelligence” or “Climate Tech”).
- Real-time newsjacking with a specific breaking headline.
- Product-launch spotlights and quick event recaps while interest is high.
BUSINESS VALUE
- One-click speed: from idea to finished video in minutes.
- Consistent brand look: Revid presets keep voice, style and aspect ratio on spec.
- No-code workflow: marketers create social video without design or development queues.
- Cloud convenience: Auto-GPT Cloud users are pre-configured with all required keys.
Self-hosted users simply add OpenAI, Anthropic, Perplexity (OpenRouter/Jina) and Revid keys once.
IMPORTANT NOTES
- The agent outputs exactly one video per execution. Run it again for additional shorts.
- Video rendering time varies; AI-generated footage may take several minutes.","[""writing""]",false,true
864e48ef-fee5-42c1-b6a4-2ae139db9fc1,55d40473-0f31-4ada-9e40-d3a7139fcbd4,automated-blog-writer,Automated SEO Blog Writer,https://youtu.be/nKcDCbDVobs,"[""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/2dd5f95b-5b30-4bf8-a11b-bac776c5141a.jpg""]",true,"Automate research, writing, and publishing for high-ranking blog posts","Scale your blog with a fully automated content engine. The Automated SEO Blog Writer learns your brand voice, finds high-demand keywords, and creates SEO-optimized articles that attract organic traffic and boost visibility.
How it works:
1. Share your pitch, website, and values.
2. The agent studies your site and uncovers proven SEO opportunities.
3. It spends two hours researching and drafting each post.
4. You set the cadence—publishing runs on autopilot.
Business value: Consistently publish research-backed, optimized posts that build domain authority, rankings, and thought leadership while you focus on what matters most.
Use cases:
• Founders: Keep your blog active with no time drain.
• Agencies: Deliver scalable SEO content for clients.
• Strategists: Automate execution, focus on strategy.
• Marketers: Drive steady organic growth.
• Local businesses: Capture nearby search traffic.","[""writing""]",false,true
6046f42e-eb84-406f-bae0-8e052064a4fa,a548e507-09a7-4b30-909c-f63fcda10fff,lead-finder-local-businesses,Lead Finder,,"[""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/abd6605f-d5f8-426b-af36-052e8ba5044f.webp""]",false,Auto-Prospect Like a Pro,"Turbo-charge your local lead generation with the AutoGPT Marketplace's top Google Maps prospecting agent. “Lead Finder: Local Businesses” delivers verified, ready-to-contact prospects in any niche and city—so you can focus on closing, not searching.
**WHAT IT DOES**
• Searches Google Maps via the official API (no scraping)
• Prompts like “dentists in Chicago” or “coffee shops near me”
• Returns: Name, Website, Rating, Reviews, **Phone & Address**
• Exports instantly to your CRM, sheet, or outreach workflow
**WHY YOU'LL LOVE IT**
✓ Hyper-targeted leads in minutes
✓ Unlimited searches & locations
✓ Zero CAPTCHAs or IP blocks
✓ Works on AutoGPT Cloud or self-hosted (with your API key)
✓ Cut prospecting time by 90%
**PERFECT FOR**
— Marketers & PPC agencies
— SEO consultants & designers
— SaaS founders & sales teams
Stop scrolling directories—start filling your pipeline. Start now and let AI prospect while you profit.
→ Click *Add to Library* and own your market today.","[""business""]",true,true
f623c862-24e9-44fc-8ce8-d8282bb51ad2,eafa21d3-bf14-4f63-a97f-a5ee41df83b3,linkedin-post-generator,LinkedIn Post Generator,,"[""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/297f6a8e-81a8-43e2-b106-c7ad4a5662df.png"",""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/fceebdc1-aef6-4000-97fc-4ef587f56bda.png""]",false,Auto-craft LinkedIn gold,"Create research-driven, high-impact LinkedIn posts in minutes. This agent searches YouTube for the best videos on your chosen topic, pulls their transcripts, and distils the most valuable insights into a polished post ready for your company page or personal feed.
FEATURES
• Automated YouTube research – discovers and analyses top-ranked videos so you don't have to
• AI-curated synthesis – combines multiple transcripts into one authoritative narrative
• Full creative control – adjust style, tone, objective, opinion, clarity, target word count and number of videos
• LinkedIn-optimised output – hook, 2-3 key points, CTA, strategic line breaks, 3-5 hashtags, no markdown
• One-click publish – returns a ready-to-post text block (≤1,300 characters)
HOW IT WORKS
1. Enter a topic and your preferred writing parameters.
2. The agent builds a YouTube search, fetches the page, and extracts the top N video URLs.
3. It pulls each transcript, then feeds them—plus your settings—into Claude 3.5 Sonnet.
4. The model writes a concise, engaging post designed for maximum LinkedIn engagement.
USE CASES
• Thought-leadership updates backed by fresh video research
• Rapid industry summaries after major events, webinars, or conferences
• Consistent LinkedIn content for busy founders, marketers, and creators
WHY YOU'LL LOVE IT
Save hours of manual research, avoid surface-level hot-takes, and publish posts that showcase real expertise—without the heavy lift.","[""writing""]",true,true
7d4120ad-b6b3-4419-8bdb-7dd7d350ef32,e7bb29a1-23c7-4fee-aa3b-5426174b8c52,youtube-to-linkedin-post-converter,YouTube to LinkedIn Post Converter,,"[""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/f084b326-a708-4396-be51-7ba59ad2ef32.png""]",false,Transform Your YouTube Videos into Engaging LinkedIn Posts with AI,"WHAT IT DOES:
This agent converts YouTube video content into a LinkedIn post by analyzing the video's transcript. It provides you with a tailored post that reflects the core ideas, key takeaways, and tone of the original video, optimizing it for engagement on LinkedIn.
HOW IT WORKS:
- You provide the URL to the YouTube video (required)
- You can choose the structure for the LinkedIn post (e.g., Personal Achievement Story, Lesson Learned, Thought Leadership, etc.)
- You can also select the tone (e.g., Inspirational, Analytical, Conversational, etc.)
- The transcript of the video is analyzed by the GPT-4 model and the Claude 3.5 Sonnet model
- The models extract key insights, memorable quotes, and the main points from the video
- You'll receive a LinkedIn post, formatted according to your chosen structure and tone, optimized for professional engagement
INPUTS:
- Source YouTube Video – Provide the URL to the YouTube video
- Structure – Choose the post format (e.g., Personal Achievement Story, Thought Leadership, etc.)
- Content – Specify the main message or idea of the post (e.g., Hot Take, Key Takeaways, etc.)
- Tone – Select the tone for the post (e.g., Conversational, Inspirational, etc.)
OUTPUT:
- LinkedIn Post – A well-crafted, AI-generated LinkedIn post with a professional tone, based on the video content and your specified preferences
Perfect for content creators, marketers, and professionals who want to repurpose YouTube videos for LinkedIn and boost their professional branding.","[""writing""]",false,true
c61d6a83-ea48-4df8-b447-3da2d9fe5814,00fdd42c-a14c-4d19-a567-65374ea0e87f,personalized-morning-coffee-newsletter,Personal Newsletter,,"[""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/f4b38e4c-8166-4caf-9411-96c9c4c82d4c.png""]",false,Start your day with personalized AI newsletters that deliver credibility and context for every interest or mood.,"This Personal Newsletter Agent provides a bespoke daily digest on your favorite topics and tone. Whether you prefer industry insights, lighthearted reads, or breaking news, this agent crafts your own unique newsletter to keep you informed and entertained.
How It Works
1. Enter your favorite topics, industries, or areas of interest.
2. Choose your tone—professional, casual, or humorous.
3. Set your preferred delivery cadence: daily or weekly.
4. The agent scans top sources and compiles 3–5 engaging stories, insights, and fun facts into a conversational newsletter.
Skip the morning scroll and enjoy a thoughtfully curated newsletter designed just for you. Stay ahead of trends, spark creative ideas, and enjoy an effortless, informed start to your day.
Use Cases
• Executives: Get a daily digest of market updates and leadership insights.
• Marketers: Receive curated creative trends and campaign inspiration.
• Entrepreneurs: Stay updated on your industry without information overload.","[""research""]",true,true
e2e49cfc-4a39-4d62-a6b3-c095f6d025ff,fc2c9976-0962-4625-a27b-d316573a9e7f,email-address-finder,Email Scout - Contact Finder Assistant,,"[""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/da8a690a-7a8b-4c1d-b6f8-e2f840c0205d.jpg"",""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/6a2ac25c-1609-4881-8140-e6da2421afb3.jpg"",""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/26179263-fe06-45bd-b6a0-0754660a0a46.jpg""]",false,Find contact details from name and location using AI search,"Finding someone's professional email address can be time-consuming and frustrating. Manual searching across multiple websites, social profiles, and business directories often leads to dead ends or outdated information.
Email Scout automates this process by intelligently searching across publicly available sources when you provide a person's name and location. Simply input basic information like ""Tim Cook, USA"" or ""Sarah Smith, London"" and let the AI assistant do the work of finding potential contact details.
Key Features:
- Quick search from just name and location
- Scans multiple public sources
- Automated AI-powered search process
- Easy to use with simple inputs
Perfect for recruiters, business development professionals, researchers, and anyone needing to establish professional contact.
Note: This tool searches only publicly available information. Search results depend on what contact information people have made public. Some searches may not yield results if the information isn't publicly accessible.","[""""]",false,true
81bcc372-0922-4a36-bc35-f7b1e51d6939,e437cc95-e671-489d-b915-76561fba8c7f,ai-youtube-to-blog-converter,YouTube Video to SEO Blog Writer,,"[""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/239e5a41-2515-4e1c-96ef-31d0d37ecbeb.webp"",""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/c7d96966-786f-4be6-ad7d-3a51c84efc0e.png"",""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/0275a74c-e2c2-4e29-a6e4-3a616c3c35dd.png""]",false,One link. One click. One powerful blog post.,"Effortlessly transform your YouTube videos into high-quality, SEO-optimized blog posts.
Your videos deserve a second life—in writing.
Make your content work twice as hard by repurposing it into engaging, searchable articles.
Perfect for content creators, marketers, and bloggers, this tool analyzes video content and generates well-structured blog posts tailored to your tone, audience, and word count. Just paste a YouTube URL and let the AI handle the rest.
FEATURES
• CONTENT ANALYSIS
Extracts key points from the video while preserving your message and intent.
• CUSTOMIZABLE OUTPUT
Select a tone that fits your audience: casual, professional, educational, or formal.
• SEO OPTIMIZATION
Automatically creates engaging titles and structured subheadings for better search visibility.
• USER-FRIENDLY
Repurpose your videos into written content to expand your reach and improve accessibility.
Whether you're looking to grow your blog, boost SEO, or simply get more out of your content, the AI YouTube-to-Blog Converter makes it effortless.
","[""writing""]",true,true
5c3510d2-fc8b-4053-8e19-67f53c86eb1a,f2cc74bb-f43f-4395-9c35-ecb30b5b4fc9,ai-webpage-copy-improver,AI Webpage Copy Improver,,"[""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/d562d26f-5891-4b09-8859-fbb205972313.jpg""]",false,Boost Your Website's Search Engine Performance,"Elevate your web content with this powerful AI Webpage Copy Improver. Designed for marketers, SEO specialists, and web developers, this tool analyses and enhances website copy for maximum impact. Using advanced language models, it optimizes text for better clarity, SEO performance, and increased conversion rates. The AI examines your existing content, identifies areas for improvement, and generates refined copy that maintains your brand voice while boosting engagement. From homepage headlines to product descriptions, transform your web presence with AI-driven insights. Improve readability, incorporate targeted keywords, and craft compelling calls-to-action - all with the click of a button. Take your digital marketing to the next level with the AI Webpage Copy Improver.","[""marketing""]",true,true
94d03bd3-7d44-4d47-b60c-edb2f89508d6,b6f6f0d3-49f4-4e3b-8155-ffe9141b32c0,domain-name-finder,Domain Name Finder,,"[""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/28545e09-b2b8-4916-b4c6-67f982510a78.jpeg""]",false,Instantly generate brand-ready domain names that are actually available,"Overview:
Finding a domain name that fits your brand shouldn't take hours of searching and failed checks. The Domain Name Finder Agent turns your pitch into hundreds of creative, brand-ready domain ideas—filtered by live availability so every result is actionable.
How It Works
1. Input your product pitch, company name, or core keywords.
2. The agent analyzes brand tone, audience, and industry context.
3. It generates a list of unique, memorable domains that match your criteria.
4. All names are pre-filtered for real-time availability, so you can register immediately.
Business Value
Save hours of guesswork and eliminate dead ends. Accelerate brand launches, startup naming, and campaign creation with ready-to-claim domains.
Key Use Cases
• Startup Founders: Quickly find brand-ready domains for MVP launches or rebrands.
• Marketers: Test name options across campaigns with instant availability data.
• Entrepreneurs: Validate ideas faster with instant domain options.","[""business""]",false,true
7a831906-daab-426f-9d66-bcf98d869426,516d813b-d1bc-470f-add7-c63a4b2c2bad,ai-function,AI Function,,"[""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/620e8117-2ee1-4384-89e6-c2ef4ec3d9c9.webp"",""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/476259e2-5a79-4a7b-8e70-deeebfca70d7.png""]",false,Never Code Again,"AI FUNCTION MAGIC
Your AI-powered assistant for turning plain-English descriptions into working Python functions.
HOW IT WORKS
1. Describe what the function should do.
2. Specify the inputs it needs.
3. Receive the generated Python code.
FEATURES
- Effortless Function Generation: convert natural-language specs into complete functions.
- Customizable Inputs: define the parameters that matter to you.
- Versatile Use Cases: simulate data, automate tasks, prototype ideas.
- Seamless Integration: add the generated function directly to your codebase.
EXAMPLE
Request: “Create a function that generates 20 examples of fake people, each with a name, date of birth, job title, and age.”
Input parameter: number_of_people (default 20)
Result: a list of dictionaries such as
[
{ ""name"": ""Emma Martinez"", ""date_of_birth"": ""19921103"", ""job_title"": ""Data Analyst"", ""age"": 32 },
{ ""name"": ""Liam OConnor"", ""date_of_birth"": ""19850719"", ""job_title"": ""Marketing Manager"", ""age"": 39 },
…18 more entries…
]","[""development""]",false,true
17 94d03bd3-7d44-4d47-b60c-edb2f89508d6 b6f6f0d3-49f4-4e3b-8155-ffe9141b32c0 domain-name-finder Domain Name Finder ["https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/28545e09-b2b8-4916-b4c6-67f982510a78.jpeg"] false Instantly generate brand-ready domain names that are actually available Overview: Finding a domain name that fits your brand shouldn’t take hours of searching and failed checks. The Domain Name Finder Agent turns your pitch into hundreds of creative, brand-ready domain ideas—filtered by live availability so every result is actionable. How It Works 1. Input your product pitch, company name, or core keywords. 2. The agent analyzes brand tone, audience, and industry context. 3. It generates a list of unique, memorable domains that match your criteria. 4. All names are pre-filtered for real-time availability, so you can register immediately. Business Value Save hours of guesswork and eliminate dead ends. Accelerate brand launches, startup naming, and campaign creation with ready-to-claim domains. Key Use Cases • Startup Founders: Quickly find brand-ready domains for MVP launches or rebrands. • Marketers: Test name options across campaigns with instant availability data. • Entrepreneurs: Validate ideas faster with instant domain options. ["business"] false true
18 7a831906-daab-426f-9d66-bcf98d869426 516d813b-d1bc-470f-add7-c63a4b2c2bad ai-function AI Function ["https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/620e8117-2ee1-4384-89e6-c2ef4ec3d9c9.webp","https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/476259e2-5a79-4a7b-8e70-deeebfca70d7.png"] false Never Code Again AI FUNCTION MAGIC Your AI‑powered assistant for turning plain‑English descriptions into working Python functions. HOW IT WORKS 1. Describe what the function should do. 2. Specify the inputs it needs. 3. Receive the generated Python code. FEATURES - Effortless Function Generation: convert natural‑language specs into complete functions. - Customizable Inputs: define the parameters that matter to you. - Versatile Use Cases: simulate data, automate tasks, prototype ideas. - Seamless Integration: add the generated function directly to your codebase. EXAMPLE Request: “Create a function that generates 20 examples of fake people, each with a name, date of birth, job title, and age.” Input parameter: number_of_people (default 20) Result: a list of dictionaries such as [ { "name": "Emma Martinez", "date_of_birth": "1992‑11‑03", "job_title": "Data Analyst", "age": 32 }, { "name": "Liam O’Connor", "date_of_birth": "1985‑07‑19", "job_title": "Marketing Manager", "age": 39 }, …18 more entries… ] ["development"] false true
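
For illustration, the EXAMPLE in the AI Function listing above maps naturally onto ordinary Python. The sketch below shows roughly what the generated `fake_people` function could look like; the sample name and job pools and the use of the `random` module are assumptions for illustration only, since the real agent has an LLM produce the values rather than executing code.

```python
import random
from datetime import date

# Illustrative sample pools; the actual AI Function agent asks an LLM to
# invent values instead of drawing from fixed lists like these.
FIRST_NAMES = ["Emma", "Liam", "Olivia", "Noah", "Ava", "Sophia"]
LAST_NAMES = ["Martinez", "O'Connor", "Chen", "Patel", "Kowalski", "Garcia"]
JOB_TITLES = ["Data Analyst", "Marketing Manager", "Software Engineer", "Teacher"]


def fake_people(n: int = 20) -> list[dict]:
    """Generate n fake person records with name, DoB, job title, and age."""
    people = []
    for _ in range(n):
        birth_year = random.randint(1960, 2002)
        dob = date(birth_year, random.randint(1, 12), random.randint(1, 28))
        people.append({
            "name": f"{random.choice(FIRST_NAMES)} {random.choice(LAST_NAMES)}",
            "date_of_birth": dob.isoformat(),
            "job_title": random.choice(JOB_TITLES),
            "age": date.today().year - birth_year,  # approximate; ignores birthday
        })
    return people


if __name__ == "__main__":
    print(fake_people(3))
```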

File diff suppressed because it is too large

View File

@@ -0,0 +1,590 @@
{
"id": "7b2e2095-782a-4f8d-adda-e62b661bccf5",
"version": 29,
"is_active": false,
"name": "Unspirational Poster Maker",
"description": "This witty AI agent generates hilariously relatable \"motivational\" posters that tackle the everyday struggles of procrastination, overthinking, and workplace chaos with a blend of absurdity and sarcasm. From goldfish facing impossible tasks to cats in existential crises, The Unspirational Poster Maker designs tongue-in-cheek graphics and captions that mock productivity clich\u00e9s and embrace our collective struggles to \"get it together.\" Perfect for adding a touch of humour to the workday, these posters remind us that sometimes, all we can do is laugh at the chaos.",
"instructions": null,
"recommended_schedule_cron": null,
"nodes": [
{
"id": "5ac3727a-1ea7-436b-a902-ef1bfd883a30",
"block_id": "363ae599-353e-4804-937e-b2ee3cef3da4",
"input_default": {
"name": "Generated Image",
"description": "The resulting generated image ready for you to review and post."
},
"metadata": {
"position": {
"x": 2329.937006807125,
"y": 80.49068076698347
}
},
"input_links": [
{
"id": "c6c511e8-e6a4-4969-9bc8-f67d60c1e229",
"source_id": "86665e90-ffbf-48fb-ad3f-e5d31fd50c51",
"sink_id": "5ac3727a-1ea7-436b-a902-ef1bfd883a30",
"source_name": "result",
"sink_name": "value",
"is_static": false
},
{
"id": "20845dda-91de-4508-8077-0504b1a5ae03",
"source_id": "28bda769-b88b-44c9-be5c-52c2667f137e",
"sink_id": "5ac3727a-1ea7-436b-a902-ef1bfd883a30",
"source_name": "result",
"sink_name": "value",
"is_static": false
},
{
"id": "6524c611-774b-45e9-899d-9a6aa80c549c",
"source_id": "e7cdc1a2-4427-4a8a-a31b-63c8e74842f8",
"sink_id": "5ac3727a-1ea7-436b-a902-ef1bfd883a30",
"source_name": "result",
"sink_name": "value",
"is_static": false
},
{
"id": "714a0821-e5ba-4af7-9432-50491adda7b1",
"source_id": "576c5677-9050-4d1c-aad4-36b820c04fef",
"sink_id": "5ac3727a-1ea7-436b-a902-ef1bfd883a30",
"source_name": "result",
"sink_name": "value",
"is_static": false
}
],
"output_links": [],
"graph_id": "7b2e2095-782a-4f8d-adda-e62b661bccf5",
"graph_version": 29,
"webhook_id": null,
"webhook": null
},
{
"id": "7e026d19-f9a6-412f-8082-610f9ba0c410",
"block_id": "c0a8e994-ebf1-4a9c-a4d8-89d09c86741b",
"input_default": {
"name": "Theme",
"value": "Cooking"
},
"metadata": {
"position": {
"x": -1219.5966324967521,
"y": 80.50339731789956
}
},
"input_links": [],
"output_links": [
{
"id": "8c2bd1f7-b17b-4835-81b6-bb336097aa7a",
"source_id": "7e026d19-f9a6-412f-8082-610f9ba0c410",
"sink_id": "7543b9b0-0409-4cf8-bc4e-e0336273e2c4",
"source_name": "result",
"sink_name": "prompt_values_#_THEME",
"is_static": true
}
],
"graph_id": "7b2e2095-782a-4f8d-adda-e62b661bccf5",
"graph_version": 29,
"webhook_id": null,
"webhook": null
},
{
"id": "28bda769-b88b-44c9-be5c-52c2667f137e",
"block_id": "6ab085e2-20b3-4055-bc3e-08036e01eca6",
"input_default": {
"upscale": "No Upscale"
},
"metadata": {
"position": {
"x": 1132.373897280427,
"y": 88.44610377514573
}
},
"input_links": [
{
"id": "54588c74-e090-4e49-89e4-844b9952a585",
"source_id": "7543b9b0-0409-4cf8-bc4e-e0336273e2c4",
"sink_id": "28bda769-b88b-44c9-be5c-52c2667f137e",
"source_name": "response",
"sink_name": "prompt",
"is_static": false
}
],
"output_links": [
{
"id": "20845dda-91de-4508-8077-0504b1a5ae03",
"source_id": "28bda769-b88b-44c9-be5c-52c2667f137e",
"sink_id": "5ac3727a-1ea7-436b-a902-ef1bfd883a30",
"source_name": "result",
"sink_name": "value",
"is_static": false
}
],
"graph_id": "7b2e2095-782a-4f8d-adda-e62b661bccf5",
"graph_version": 29,
"webhook_id": null,
"webhook": null
},
{
"id": "e7cdc1a2-4427-4a8a-a31b-63c8e74842f8",
"block_id": "6ab085e2-20b3-4055-bc3e-08036e01eca6",
"input_default": {
"upscale": "No Upscale"
},
"metadata": {
"position": {
"x": 590.7543882245375,
"y": 85.69546832466654
}
},
"input_links": [
{
"id": "66646786-3006-4417-a6b7-0158f2603d1d",
"source_id": "7543b9b0-0409-4cf8-bc4e-e0336273e2c4",
"sink_id": "e7cdc1a2-4427-4a8a-a31b-63c8e74842f8",
"source_name": "response",
"sink_name": "prompt",
"is_static": false
}
],
"output_links": [
{
"id": "6524c611-774b-45e9-899d-9a6aa80c549c",
"source_id": "e7cdc1a2-4427-4a8a-a31b-63c8e74842f8",
"sink_id": "5ac3727a-1ea7-436b-a902-ef1bfd883a30",
"source_name": "result",
"sink_name": "value",
"is_static": false
}
],
"graph_id": "7b2e2095-782a-4f8d-adda-e62b661bccf5",
"graph_version": 29,
"webhook_id": null,
"webhook": null
},
{
"id": "576c5677-9050-4d1c-aad4-36b820c04fef",
"block_id": "6ab085e2-20b3-4055-bc3e-08036e01eca6",
"input_default": {
"upscale": "No Upscale"
},
"metadata": {
"position": {
"x": 60.48904654237981,
"y": 86.06183359510214
}
},
"input_links": [
{
"id": "201d3e03-bc06-4cee-846d-4c3c804d8857",
"source_id": "7543b9b0-0409-4cf8-bc4e-e0336273e2c4",
"sink_id": "576c5677-9050-4d1c-aad4-36b820c04fef",
"source_name": "response",
"sink_name": "prompt",
"is_static": false
}
],
"output_links": [
{
"id": "714a0821-e5ba-4af7-9432-50491adda7b1",
"source_id": "576c5677-9050-4d1c-aad4-36b820c04fef",
"sink_id": "5ac3727a-1ea7-436b-a902-ef1bfd883a30",
"source_name": "result",
"sink_name": "value",
"is_static": false
}
],
"graph_id": "7b2e2095-782a-4f8d-adda-e62b661bccf5",
"graph_version": 29,
"webhook_id": null,
"webhook": null
},
{
"id": "86665e90-ffbf-48fb-ad3f-e5d31fd50c51",
"block_id": "6ab085e2-20b3-4055-bc3e-08036e01eca6",
"input_default": {
"prompt": "A cat sprawled dramatically across an important-looking document during a work-from-home meeting, making direct eye contact with the camera while knocking over a coffee mug in slow motion. Text Overlay: \"Chaos is a career path. Be the obstacle everyone has to work around.\"",
"upscale": "No Upscale"
},
"metadata": {
"position": {
"x": 1668.3572666956795,
"y": 89.69665262457966
}
},
"input_links": [
{
"id": "509b7587-1940-4a06-808d-edde9a74f400",
"source_id": "7543b9b0-0409-4cf8-bc4e-e0336273e2c4",
"sink_id": "86665e90-ffbf-48fb-ad3f-e5d31fd50c51",
"source_name": "response",
"sink_name": "prompt",
"is_static": false
}
],
"output_links": [
{
"id": "c6c511e8-e6a4-4969-9bc8-f67d60c1e229",
"source_id": "86665e90-ffbf-48fb-ad3f-e5d31fd50c51",
"sink_id": "5ac3727a-1ea7-436b-a902-ef1bfd883a30",
"source_name": "result",
"sink_name": "value",
"is_static": false
}
],
"graph_id": "7b2e2095-782a-4f8d-adda-e62b661bccf5",
"graph_version": 29,
"webhook_id": null,
"webhook": null
},
{
"id": "7543b9b0-0409-4cf8-bc4e-e0336273e2c4",
"block_id": "1f292d4a-41a4-4977-9684-7c8d560b9f91",
"input_default": {
"model": "gpt-4o",
"prompt": "<example_output>\nA photo of a sloth lounging on a desk, with its head resting on a keyboard. The keyboard is on top of a laptop with a blank spreadsheet open. A to-do list is placed beside the laptop, with the top item written as \"Do literally anything\". There is a text overlay that says \"If you can't outwork them, outnap them.\".\n</example_output>\n\nCreate a relatable satirical, snarky, user-deprecating motivational style image based on the theme: \"{{THEME}}\".\n\nOutput only the image description and caption, without any additional commentary or formatting.",
"prompt_values": {}
},
"metadata": {
"position": {
"x": -561.1139207164056,
"y": 78.60434452403524
}
},
"input_links": [
{
"id": "8c2bd1f7-b17b-4835-81b6-bb336097aa7a",
"source_id": "7e026d19-f9a6-412f-8082-610f9ba0c410",
"sink_id": "7543b9b0-0409-4cf8-bc4e-e0336273e2c4",
"source_name": "result",
"sink_name": "prompt_values_#_THEME",
"is_static": true
}
],
"output_links": [
{
"id": "54588c74-e090-4e49-89e4-844b9952a585",
"source_id": "7543b9b0-0409-4cf8-bc4e-e0336273e2c4",
"sink_id": "28bda769-b88b-44c9-be5c-52c2667f137e",
"source_name": "response",
"sink_name": "prompt",
"is_static": false
},
{
"id": "201d3e03-bc06-4cee-846d-4c3c804d8857",
"source_id": "7543b9b0-0409-4cf8-bc4e-e0336273e2c4",
"sink_id": "576c5677-9050-4d1c-aad4-36b820c04fef",
"source_name": "response",
"sink_name": "prompt",
"is_static": false
},
{
"id": "509b7587-1940-4a06-808d-edde9a74f400",
"source_id": "7543b9b0-0409-4cf8-bc4e-e0336273e2c4",
"sink_id": "86665e90-ffbf-48fb-ad3f-e5d31fd50c51",
"source_name": "response",
"sink_name": "prompt",
"is_static": false
},
{
"id": "66646786-3006-4417-a6b7-0158f2603d1d",
"source_id": "7543b9b0-0409-4cf8-bc4e-e0336273e2c4",
"sink_id": "e7cdc1a2-4427-4a8a-a31b-63c8e74842f8",
"source_name": "response",
"sink_name": "prompt",
"is_static": false
}
],
"graph_id": "7b2e2095-782a-4f8d-adda-e62b661bccf5",
"graph_version": 29,
"webhook_id": null,
"webhook": null
}
],
"links": [
{
"id": "66646786-3006-4417-a6b7-0158f2603d1d",
"source_id": "7543b9b0-0409-4cf8-bc4e-e0336273e2c4",
"sink_id": "e7cdc1a2-4427-4a8a-a31b-63c8e74842f8",
"source_name": "response",
"sink_name": "prompt",
"is_static": false
},
{
"id": "c6c511e8-e6a4-4969-9bc8-f67d60c1e229",
"source_id": "86665e90-ffbf-48fb-ad3f-e5d31fd50c51",
"sink_id": "5ac3727a-1ea7-436b-a902-ef1bfd883a30",
"source_name": "result",
"sink_name": "value",
"is_static": false
},
{
"id": "6524c611-774b-45e9-899d-9a6aa80c549c",
"source_id": "e7cdc1a2-4427-4a8a-a31b-63c8e74842f8",
"sink_id": "5ac3727a-1ea7-436b-a902-ef1bfd883a30",
"source_name": "result",
"sink_name": "value",
"is_static": false
},
{
"id": "20845dda-91de-4508-8077-0504b1a5ae03",
"source_id": "28bda769-b88b-44c9-be5c-52c2667f137e",
"sink_id": "5ac3727a-1ea7-436b-a902-ef1bfd883a30",
"source_name": "result",
"sink_name": "value",
"is_static": false
},
{
"id": "8c2bd1f7-b17b-4835-81b6-bb336097aa7a",
"source_id": "7e026d19-f9a6-412f-8082-610f9ba0c410",
"sink_id": "7543b9b0-0409-4cf8-bc4e-e0336273e2c4",
"source_name": "result",
"sink_name": "prompt_values_#_THEME",
"is_static": true
},
{
"id": "201d3e03-bc06-4cee-846d-4c3c804d8857",
"source_id": "7543b9b0-0409-4cf8-bc4e-e0336273e2c4",
"sink_id": "576c5677-9050-4d1c-aad4-36b820c04fef",
"source_name": "response",
"sink_name": "prompt",
"is_static": false
},
{
"id": "714a0821-e5ba-4af7-9432-50491adda7b1",
"source_id": "576c5677-9050-4d1c-aad4-36b820c04fef",
"sink_id": "5ac3727a-1ea7-436b-a902-ef1bfd883a30",
"source_name": "result",
"sink_name": "value",
"is_static": false
},
{
"id": "54588c74-e090-4e49-89e4-844b9952a585",
"source_id": "7543b9b0-0409-4cf8-bc4e-e0336273e2c4",
"sink_id": "28bda769-b88b-44c9-be5c-52c2667f137e",
"source_name": "response",
"sink_name": "prompt",
"is_static": false
},
{
"id": "509b7587-1940-4a06-808d-edde9a74f400",
"source_id": "7543b9b0-0409-4cf8-bc4e-e0336273e2c4",
"sink_id": "86665e90-ffbf-48fb-ad3f-e5d31fd50c51",
"source_name": "response",
"sink_name": "prompt",
"is_static": false
}
],
"forked_from_id": null,
"forked_from_version": null,
"sub_graphs": [],
"user_id": "",
"created_at": "2024-12-20T19:58:34.390Z",
"input_schema": {
"type": "object",
"properties": {
"Theme": {
"advanced": false,
"secret": false,
"title": "Theme",
"default": "Cooking"
}
},
"required": []
},
"output_schema": {
"type": "object",
"properties": {
"Generated Image": {
"advanced": false,
"secret": false,
"title": "Generated Image",
"description": "The resulting generated image ready for you to review and post."
}
},
"required": [
"Generated Image"
]
},
"has_external_trigger": false,
"has_human_in_the_loop": false,
"trigger_setup_info": null,
"credentials_input_schema": {
"properties": {
"ideogram_api_key_credentials": {
"credentials_provider": [
"ideogram"
],
"credentials_types": [
"api_key"
],
"properties": {
"id": {
"title": "Id",
"type": "string"
},
"title": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Title"
},
"provider": {
"const": "ideogram",
"title": "Provider",
"type": "string"
},
"type": {
"const": "api_key",
"title": "Type",
"type": "string"
}
},
"required": [
"id",
"provider",
"type"
],
"title": "CredentialsMetaInput[Literal[<ProviderName.IDEOGRAM: 'ideogram'>], Literal['api_key']]",
"type": "object",
"discriminator_values": []
},
"openai_api_key_credentials": {
"credentials_provider": [
"openai"
],
"credentials_types": [
"api_key"
],
"properties": {
"id": {
"title": "Id",
"type": "string"
},
"title": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Title"
},
"provider": {
"const": "openai",
"title": "Provider",
"type": "string"
},
"type": {
"const": "api_key",
"title": "Type",
"type": "string"
}
},
"required": [
"id",
"provider",
"type"
],
"title": "CredentialsMetaInput[Literal[<ProviderName.OPENAI: 'openai'>], Literal['api_key']]",
"type": "object",
"discriminator": "model",
"discriminator_mapping": {
"Llama-3.3-70B-Instruct": "llama_api",
"Llama-3.3-8B-Instruct": "llama_api",
"Llama-4-Maverick-17B-128E-Instruct-FP8": "llama_api",
"Llama-4-Scout-17B-16E-Instruct-FP8": "llama_api",
"Qwen/Qwen2.5-72B-Instruct-Turbo": "aiml_api",
"amazon/nova-lite-v1": "open_router",
"amazon/nova-micro-v1": "open_router",
"amazon/nova-pro-v1": "open_router",
"claude-3-7-sonnet-20250219": "anthropic",
"claude-3-haiku-20240307": "anthropic",
"claude-haiku-4-5-20251001": "anthropic",
"claude-opus-4-1-20250805": "anthropic",
"claude-opus-4-20250514": "anthropic",
"claude-opus-4-5-20251101": "anthropic",
"claude-sonnet-4-20250514": "anthropic",
"claude-sonnet-4-5-20250929": "anthropic",
"cohere/command-r-08-2024": "open_router",
"cohere/command-r-plus-08-2024": "open_router",
"deepseek/deepseek-chat": "open_router",
"deepseek/deepseek-r1-0528": "open_router",
"dolphin-mistral:latest": "ollama",
"google/gemini-2.0-flash-001": "open_router",
"google/gemini-2.0-flash-lite-001": "open_router",
"google/gemini-2.5-flash": "open_router",
"google/gemini-2.5-flash-lite-preview-06-17": "open_router",
"google/gemini-2.5-pro-preview-03-25": "open_router",
"google/gemini-3-pro-preview": "open_router",
"gpt-3.5-turbo": "openai",
"gpt-4-turbo": "openai",
"gpt-4.1-2025-04-14": "openai",
"gpt-4.1-mini-2025-04-14": "openai",
"gpt-4o": "openai",
"gpt-4o-mini": "openai",
"gpt-5-2025-08-07": "openai",
"gpt-5-chat-latest": "openai",
"gpt-5-mini-2025-08-07": "openai",
"gpt-5-nano-2025-08-07": "openai",
"gpt-5.1-2025-11-13": "openai",
"gryphe/mythomax-l2-13b": "open_router",
"llama-3.1-8b-instant": "groq",
"llama-3.3-70b-versatile": "groq",
"llama3": "ollama",
"llama3.1:405b": "ollama",
"llama3.2": "ollama",
"llama3.3": "ollama",
"meta-llama/Llama-3.2-3B-Instruct-Turbo": "aiml_api",
"meta-llama/Llama-3.3-70B-Instruct-Turbo": "aiml_api",
"meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo": "aiml_api",
"meta-llama/llama-4-maverick": "open_router",
"meta-llama/llama-4-scout": "open_router",
"microsoft/wizardlm-2-8x22b": "open_router",
"mistralai/mistral-nemo": "open_router",
"moonshotai/kimi-k2": "open_router",
"nousresearch/hermes-3-llama-3.1-405b": "open_router",
"nousresearch/hermes-3-llama-3.1-70b": "open_router",
"nvidia/llama-3.1-nemotron-70b-instruct": "aiml_api",
"o1": "openai",
"o1-mini": "openai",
"o3-2025-04-16": "openai",
"o3-mini": "openai",
"openai/gpt-oss-120b": "open_router",
"openai/gpt-oss-20b": "open_router",
"perplexity/sonar": "open_router",
"perplexity/sonar-deep-research": "open_router",
"perplexity/sonar-pro": "open_router",
"qwen/qwen3-235b-a22b-thinking-2507": "open_router",
"qwen/qwen3-coder": "open_router",
"v0-1.0-md": "v0",
"v0-1.5-lg": "v0",
"v0-1.5-md": "v0",
"x-ai/grok-4": "open_router",
"x-ai/grok-4-fast": "open_router",
"x-ai/grok-4.1-fast": "open_router",
"x-ai/grok-code-fast-1": "open_router"
},
"discriminator_values": [
"gpt-4o"
]
}
},
"required": [
"ideogram_api_key_credentials",
"openai_api_key_credentials"
],
"title": "UnspirationalPosterMakerCredentialsInputSchema",
"type": "object"
}
}
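
The export above illustrates the graph format these marketplace files share: each node records a `block_id` and `input_default` values, every edge appears both per-node (`input_links`/`output_links`) and once more in the top-level `links` array, and template inputs are wired through sink names such as `prompt_values_#_THEME`. As a minimal sketch (not the platform's own loader; the file name is assumed), the execution order can be recovered from the top-level `links` alone:

```python
import json
from collections import defaultdict, deque


def topological_order(graph: dict) -> list[str]:
    """Return node IDs in dependency order using the top-level "links" array.

    Each edge's source_id must run before its sink_id; assumes the graph
    is acyclic, which holds for the export above.
    """
    indegree = {node["id"]: 0 for node in graph["nodes"]}
    downstream = defaultdict(list)
    for link in graph["links"]:
        downstream[link["source_id"]].append(link["sink_id"])
        indegree[link["sink_id"]] += 1

    ready = deque(nid for nid, deg in indegree.items() if deg == 0)
    order = []
    while ready:
        nid = ready.popleft()
        order.append(nid)
        for nxt in downstream[nid]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                ready.append(nxt)
    if len(order) != len(indegree):
        raise ValueError("graph contains a cycle")
    return order


# Hypothetical usage; the file name is assumed:
with open("unspirational_poster_maker.json") as f:
    print(topological_order(json.load(f)))
```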

File diff suppressed because it is too large

View File

@@ -0,0 +1,447 @@
{
"id": "622849a7-5848-4838-894d-01f8f07e3fad",
"version": 18,
"is_active": true,
"name": "AI Function",
"description": "## AI-Powered Function Magic: Never code again!\nProvide a description of a python function and your inputs and AI will provide the results.",
"instructions": null,
"recommended_schedule_cron": null,
"nodes": [
{
"id": "26ff2973-3f9a-451d-b902-d45e5da0a7fe",
"block_id": "363ae599-353e-4804-937e-b2ee3cef3da4",
"input_default": {
"name": "return",
"title": null,
"value": null,
"format": "",
"secret": false,
"advanced": false,
"description": "The value returned by the function"
},
"metadata": {
"position": {
"x": 1598.8622921127233,
"y": 291.59140862204725
}
},
"input_links": [
{
"id": "caecc1de-fdbc-4fd9-9570-074057bb15f9",
"source_id": "c5d16ee4-de9e-4d93-bf32-ac2d15760d5b",
"sink_id": "26ff2973-3f9a-451d-b902-d45e5da0a7fe",
"source_name": "response",
"sink_name": "value",
"is_static": false
}
],
"output_links": [],
"graph_id": "622849a7-5848-4838-894d-01f8f07e3fad",
"graph_version": 18,
"webhook_id": null,
"webhook": null
},
{
"id": "c5d16ee4-de9e-4d93-bf32-ac2d15760d5b",
"block_id": "1f292d4a-41a4-4977-9684-7c8d560b9f91",
"input_default": {
"model": "o3-mini",
"retry": 3,
"prompt": "{{ARGS}}",
"sys_prompt": "You are now the following python function:\n\n```\n# {{DESCRIPTION}}\n{{FUNCTION}}\n```\n\nThe user will provide your input arguments.\nOnly respond with your `return` value.\nDo not include any commentary or additional text in your response. \nDo not include ``` backticks or any other decorators.",
"ollama_host": "localhost:11434",
"prompt_values": {}
},
"metadata": {
"position": {
"x": 995,
"y": 290.50000000000006
}
},
"input_links": [
{
"id": "dc7cb15f-76cc-4533-b96c-dd9e3f7f75ed",
"source_id": "4eab3a55-20f2-4c1d-804c-7377ba8202d2",
"sink_id": "c5d16ee4-de9e-4d93-bf32-ac2d15760d5b",
"source_name": "result",
"sink_name": "prompt_values_#_FUNCTION",
"is_static": true
},
{
"id": "093bdca5-9f44-42f9-8e1c-276dd2971675",
"source_id": "844530de-2354-46d8-b748-67306b7bbca1",
"sink_id": "c5d16ee4-de9e-4d93-bf32-ac2d15760d5b",
"source_name": "result",
"sink_name": "prompt_values_#_ARGS",
"is_static": true
},
{
"id": "6c63d8ee-b63d-4ff6-bae0-7db8f99bb7af",
"source_id": "0fd6ef54-c1cd-478d-b764-17e40f882b99",
"sink_id": "c5d16ee4-de9e-4d93-bf32-ac2d15760d5b",
"source_name": "result",
"sink_name": "prompt_values_#_DESCRIPTION",
"is_static": true
}
],
"output_links": [
{
"id": "caecc1de-fdbc-4fd9-9570-074057bb15f9",
"source_id": "c5d16ee4-de9e-4d93-bf32-ac2d15760d5b",
"sink_id": "26ff2973-3f9a-451d-b902-d45e5da0a7fe",
"source_name": "response",
"sink_name": "value",
"is_static": false
}
],
"graph_id": "622849a7-5848-4838-894d-01f8f07e3fad",
"graph_version": 18,
"webhook_id": null,
"webhook": null
},
{
"id": "4eab3a55-20f2-4c1d-804c-7377ba8202d2",
"block_id": "7fcd3bcb-8e1b-4e69-903d-32d3d4a92158",
"input_default": {
"name": "Function Definition",
"title": null,
"value": "def fake_people(n: int) -> list[dict]:",
"secret": false,
"advanced": false,
"description": "The function definition (text). This is what you would type on the first line of the function when programming.\n\ne.g \"def fake_people(n: int) -> list[dict]:\"",
"placeholder_values": []
},
"metadata": {
"position": {
"x": -672.6908629664215,
"y": 302.42044359789116
}
},
"input_links": [],
"output_links": [
{
"id": "dc7cb15f-76cc-4533-b96c-dd9e3f7f75ed",
"source_id": "4eab3a55-20f2-4c1d-804c-7377ba8202d2",
"sink_id": "c5d16ee4-de9e-4d93-bf32-ac2d15760d5b",
"source_name": "result",
"sink_name": "prompt_values_#_FUNCTION",
"is_static": true
}
],
"graph_id": "622849a7-5848-4838-894d-01f8f07e3fad",
"graph_version": 18,
"webhook_id": null,
"webhook": null
},
{
"id": "844530de-2354-46d8-b748-67306b7bbca1",
"block_id": "7fcd3bcb-8e1b-4e69-903d-32d3d4a92158",
"input_default": {
"name": "Arguments",
"title": null,
"value": "20",
"secret": false,
"advanced": false,
"description": "The function's inputs\n\ne.g \"20\"",
"placeholder_values": []
},
"metadata": {
"position": {
"x": -158.1623599617334,
"y": 295.410856928333
}
},
"input_links": [],
"output_links": [
{
"id": "093bdca5-9f44-42f9-8e1c-276dd2971675",
"source_id": "844530de-2354-46d8-b748-67306b7bbca1",
"sink_id": "c5d16ee4-de9e-4d93-bf32-ac2d15760d5b",
"source_name": "result",
"sink_name": "prompt_values_#_ARGS",
"is_static": true
}
],
"graph_id": "622849a7-5848-4838-894d-01f8f07e3fad",
"graph_version": 18,
"webhook_id": null,
"webhook": null
},
{
"id": "0fd6ef54-c1cd-478d-b764-17e40f882b99",
"block_id": "90a56ffb-7024-4b2b-ab50-e26c5e5ab8ba",
"input_default": {
"name": "Description",
"title": null,
"value": "Generates n examples of fake data representing people, each with a name, DoB, Job title, and an age.",
"secret": false,
"advanced": false,
"description": "Describe what the function does.\n\ne.g \"Generates n examples of fake data representing people, each with a name, DoB, Job title, and an age.\"",
"placeholder_values": []
},
"metadata": {
"position": {
"x": 374.4548658057796,
"y": 290.3779121974126
}
},
"input_links": [],
"output_links": [
{
"id": "6c63d8ee-b63d-4ff6-bae0-7db8f99bb7af",
"source_id": "0fd6ef54-c1cd-478d-b764-17e40f882b99",
"sink_id": "c5d16ee4-de9e-4d93-bf32-ac2d15760d5b",
"source_name": "result",
"sink_name": "prompt_values_#_DESCRIPTION",
"is_static": true
}
],
"graph_id": "622849a7-5848-4838-894d-01f8f07e3fad",
"graph_version": 18,
"webhook_id": null,
"webhook": null
}
],
"links": [
{
"id": "caecc1de-fdbc-4fd9-9570-074057bb15f9",
"source_id": "c5d16ee4-de9e-4d93-bf32-ac2d15760d5b",
"sink_id": "26ff2973-3f9a-451d-b902-d45e5da0a7fe",
"source_name": "response",
"sink_name": "value",
"is_static": false
},
{
"id": "6c63d8ee-b63d-4ff6-bae0-7db8f99bb7af",
"source_id": "0fd6ef54-c1cd-478d-b764-17e40f882b99",
"sink_id": "c5d16ee4-de9e-4d93-bf32-ac2d15760d5b",
"source_name": "result",
"sink_name": "prompt_values_#_DESCRIPTION",
"is_static": true
},
{
"id": "093bdca5-9f44-42f9-8e1c-276dd2971675",
"source_id": "844530de-2354-46d8-b748-67306b7bbca1",
"sink_id": "c5d16ee4-de9e-4d93-bf32-ac2d15760d5b",
"source_name": "result",
"sink_name": "prompt_values_#_ARGS",
"is_static": true
},
{
"id": "dc7cb15f-76cc-4533-b96c-dd9e3f7f75ed",
"source_id": "4eab3a55-20f2-4c1d-804c-7377ba8202d2",
"sink_id": "c5d16ee4-de9e-4d93-bf32-ac2d15760d5b",
"source_name": "result",
"sink_name": "prompt_values_#_FUNCTION",
"is_static": true
}
],
"forked_from_id": null,
"forked_from_version": null,
"sub_graphs": [],
"user_id": "",
"created_at": "2025-04-19T17:10:48.857Z",
"input_schema": {
"type": "object",
"properties": {
"Function Definition": {
"advanced": false,
"anyOf": [
{
"format": "short-text",
"type": "string"
},
{
"type": "null"
}
],
"secret": false,
"title": "Function Definition",
"description": "The function definition (text). This is what you would type on the first line of the function when programming.\n\ne.g \"def fake_people(n: int) -> list[dict]:\"",
"default": "def fake_people(n: int) -> list[dict]:"
},
"Arguments": {
"advanced": false,
"anyOf": [
{
"format": "short-text",
"type": "string"
},
{
"type": "null"
}
],
"secret": false,
"title": "Arguments",
"description": "The function's inputs\n\ne.g \"20\"",
"default": "20"
},
"Description": {
"advanced": false,
"anyOf": [
{
"format": "long-text",
"type": "string"
},
{
"type": "null"
}
],
"secret": false,
"title": "Description",
"description": "Describe what the function does.\n\ne.g \"Generates n examples of fake data representing people, each with a name, DoB, Job title, and an age.\"",
"default": "Generates n examples of fake data representing people, each with a name, DoB, Job title, and an age."
}
},
"required": []
},
"output_schema": {
"type": "object",
"properties": {
"return": {
"advanced": false,
"secret": false,
"title": "return",
"description": "The value returned by the function"
}
},
"required": [
"return"
]
},
"has_external_trigger": false,
"has_human_in_the_loop": false,
"trigger_setup_info": null,
"credentials_input_schema": {
"properties": {
"openai_api_key_credentials": {
"credentials_provider": [
"openai"
],
"credentials_types": [
"api_key"
],
"properties": {
"id": {
"title": "Id",
"type": "string"
},
"title": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Title"
},
"provider": {
"const": "openai",
"title": "Provider",
"type": "string"
},
"type": {
"const": "api_key",
"title": "Type",
"type": "string"
}
},
"required": [
"id",
"provider",
"type"
],
"title": "CredentialsMetaInput[Literal[<ProviderName.OPENAI: 'openai'>], Literal['api_key']]",
"type": "object",
"discriminator": "model",
"discriminator_mapping": {
"Llama-3.3-70B-Instruct": "llama_api",
"Llama-3.3-8B-Instruct": "llama_api",
"Llama-4-Maverick-17B-128E-Instruct-FP8": "llama_api",
"Llama-4-Scout-17B-16E-Instruct-FP8": "llama_api",
"Qwen/Qwen2.5-72B-Instruct-Turbo": "aiml_api",
"amazon/nova-lite-v1": "open_router",
"amazon/nova-micro-v1": "open_router",
"amazon/nova-pro-v1": "open_router",
"claude-3-7-sonnet-20250219": "anthropic",
"claude-3-haiku-20240307": "anthropic",
"claude-haiku-4-5-20251001": "anthropic",
"claude-opus-4-1-20250805": "anthropic",
"claude-opus-4-20250514": "anthropic",
"claude-opus-4-5-20251101": "anthropic",
"claude-sonnet-4-20250514": "anthropic",
"claude-sonnet-4-5-20250929": "anthropic",
"cohere/command-r-08-2024": "open_router",
"cohere/command-r-plus-08-2024": "open_router",
"deepseek/deepseek-chat": "open_router",
"deepseek/deepseek-r1-0528": "open_router",
"dolphin-mistral:latest": "ollama",
"google/gemini-2.0-flash-001": "open_router",
"google/gemini-2.0-flash-lite-001": "open_router",
"google/gemini-2.5-flash": "open_router",
"google/gemini-2.5-flash-lite-preview-06-17": "open_router",
"google/gemini-2.5-pro-preview-03-25": "open_router",
"google/gemini-3-pro-preview": "open_router",
"gpt-3.5-turbo": "openai",
"gpt-4-turbo": "openai",
"gpt-4.1-2025-04-14": "openai",
"gpt-4.1-mini-2025-04-14": "openai",
"gpt-4o": "openai",
"gpt-4o-mini": "openai",
"gpt-5-2025-08-07": "openai",
"gpt-5-chat-latest": "openai",
"gpt-5-mini-2025-08-07": "openai",
"gpt-5-nano-2025-08-07": "openai",
"gpt-5.1-2025-11-13": "openai",
"gryphe/mythomax-l2-13b": "open_router",
"llama-3.1-8b-instant": "groq",
"llama-3.3-70b-versatile": "groq",
"llama3": "ollama",
"llama3.1:405b": "ollama",
"llama3.2": "ollama",
"llama3.3": "ollama",
"meta-llama/Llama-3.2-3B-Instruct-Turbo": "aiml_api",
"meta-llama/Llama-3.3-70B-Instruct-Turbo": "aiml_api",
"meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo": "aiml_api",
"meta-llama/llama-4-maverick": "open_router",
"meta-llama/llama-4-scout": "open_router",
"microsoft/wizardlm-2-8x22b": "open_router",
"mistralai/mistral-nemo": "open_router",
"moonshotai/kimi-k2": "open_router",
"nousresearch/hermes-3-llama-3.1-405b": "open_router",
"nousresearch/hermes-3-llama-3.1-70b": "open_router",
"nvidia/llama-3.1-nemotron-70b-instruct": "aiml_api",
"o1": "openai",
"o1-mini": "openai",
"o3-2025-04-16": "openai",
"o3-mini": "openai",
"openai/gpt-oss-120b": "open_router",
"openai/gpt-oss-20b": "open_router",
"perplexity/sonar": "open_router",
"perplexity/sonar-deep-research": "open_router",
"perplexity/sonar-pro": "open_router",
"qwen/qwen3-235b-a22b-thinking-2507": "open_router",
"qwen/qwen3-coder": "open_router",
"v0-1.0-md": "v0",
"v0-1.5-lg": "v0",
"v0-1.5-md": "v0",
"x-ai/grok-4": "open_router",
"x-ai/grok-4-fast": "open_router",
"x-ai/grok-4.1-fast": "open_router",
"x-ai/grok-code-fast-1": "open_router"
},
"discriminator_values": [
"o3-mini"
]
}
},
"required": [
"openai_api_key_credentials"
],
"title": "AIFunctionCredentialsInputSchema",
"type": "object"
}
}
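
The `sys_prompt` in this export relies on `{{NAME}}` placeholders that the upstream input blocks fill via sinks named `prompt_values_#_FUNCTION`, `prompt_values_#_ARGS`, and `prompt_values_#_DESCRIPTION`. Below is a minimal sketch of that substitution step, assuming a simple regex-based templater; the platform's real implementation may differ, and the example strings are abbreviated from the prompt above.

```python
import re


def fill_template(template: str, values: dict[str, str]) -> str:
    """Substitute {{NAME}} placeholders with entries from prompt_values."""
    def replace(match: re.Match) -> str:
        key = match.group(1)
        if key not in values:
            raise KeyError(f"missing prompt value: {key}")
        return values[key]

    return re.sub(r"\{\{(\w+)\}\}", replace, template)


# Abbreviated from the sys_prompt in the export above:
sys_prompt = "You are now the following python function:\n# {{DESCRIPTION}}\n{{FUNCTION}}"
print(fill_template(sys_prompt, {
    "DESCRIPTION": "Generates n examples of fake data representing people.",
    "FUNCTION": "def fake_people(n: int) -> list[dict]:",
}))
```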

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because one or more lines are too long

File diff suppressed because it is too large

File diff suppressed because one or more lines are too long

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@@ -0,0 +1,403 @@
{
"id": "ed2091cf-5b27-45a9-b3ea-42396f95b256",
"version": 12,
"is_active": true,
"name": "Flux AI Image Generator",
"description": "Transform ideas into breathtaking images with this AI-powered Image Generator. Using cutting-edge Flux AI technology, the tool crafts highly detailed, photorealistic visuals from simple text prompts. Perfect for artists, marketers, and content creators, this generator produces unique images tailored to user specifications. From fantastical scenes to lifelike portraits, users can unleash creativity with professional-quality results in seconds. Easy to use and endlessly versatile, bring imagination to life with the AI Image Generator today!",
"instructions": null,
"recommended_schedule_cron": null,
"nodes": [
{
"id": "7482c59d-725f-4686-82b9-0dfdc4e92316",
"block_id": "cc10ff7b-7753-4ff2-9af6-9399b1a7eddc",
"input_default": {
"text": "Press the \"Advanced\" toggle and input your replicate API key.\n\nYou can get one here:\nhttps://replicate.com/account/api-tokens\n"
},
"metadata": {
"position": {
"x": 872.8268131538296,
"y": 614.9436919065381
}
},
"input_links": [],
"output_links": [],
"graph_id": "ed2091cf-5b27-45a9-b3ea-42396f95b256",
"graph_version": 12,
"webhook_id": null,
"webhook": null
},
{
"id": "0d1dec1a-e4ee-4349-9673-449a01bbf14e",
"block_id": "363ae599-353e-4804-937e-b2ee3cef3da4",
"input_default": {
"name": "Generated Image"
},
"metadata": {
"position": {
"x": 1453.6844137728922,
"y": 963.2466395125115
}
},
"input_links": [
{
"id": "06665d23-2f3d-4445-8f22-573446fcff5b",
"source_id": "50bc23e9-f2b7-4959-8710-99679ed9eeea",
"sink_id": "0d1dec1a-e4ee-4349-9673-449a01bbf14e",
"source_name": "result",
"sink_name": "value",
"is_static": false
}
],
"output_links": [],
"graph_id": "ed2091cf-5b27-45a9-b3ea-42396f95b256",
"graph_version": 12,
"webhook_id": null,
"webhook": null
},
{
"id": "6f24c45f-1548-4eda-9784-da06ce0abef8",
"block_id": "c0a8e994-ebf1-4a9c-a4d8-89d09c86741b",
"input_default": {
"name": "Image Subject",
"value": "Otto the friendly, purple \"Chief Automation Octopus\" helping people automate their tedious tasks.",
"description": "The subject of the image"
},
"metadata": {
"position": {
"x": -314.43009631839783,
"y": 962.935949165938
}
},
"input_links": [],
"output_links": [
{
"id": "1077c61a-a32a-4ed7-becf-11bcf835b914",
"source_id": "6f24c45f-1548-4eda-9784-da06ce0abef8",
"sink_id": "0d1bca9a-d9b8-4bfd-a19c-fe50b54f4b12",
"source_name": "result",
"sink_name": "prompt_values_#_TOPIC",
"is_static": true
}
],
"graph_id": "ed2091cf-5b27-45a9-b3ea-42396f95b256",
"graph_version": 12,
"webhook_id": null,
"webhook": null
},
{
"id": "50bc23e9-f2b7-4959-8710-99679ed9eeea",
"block_id": "90f8c45e-e983-4644-aa0b-b4ebe2f531bc",
"input_default": {
"prompt": "dog",
"output_format": "png",
"replicate_model_name": "Flux Pro 1.1"
},
"metadata": {
"position": {
"x": 873.0119949791526,
"y": 966.1604399052493
}
},
"input_links": [
{
"id": "a17ec505-9377-4700-8fe0-124ca81d43a9",
"source_id": "0d1bca9a-d9b8-4bfd-a19c-fe50b54f4b12",
"sink_id": "50bc23e9-f2b7-4959-8710-99679ed9eeea",
"source_name": "response",
"sink_name": "prompt",
"is_static": false
}
],
"output_links": [
{
"id": "06665d23-2f3d-4445-8f22-573446fcff5b",
"source_id": "50bc23e9-f2b7-4959-8710-99679ed9eeea",
"sink_id": "0d1dec1a-e4ee-4349-9673-449a01bbf14e",
"source_name": "result",
"sink_name": "value",
"is_static": false
}
],
"graph_id": "ed2091cf-5b27-45a9-b3ea-42396f95b256",
"graph_version": 12,
"webhook_id": null,
"webhook": null
},
{
"id": "0d1bca9a-d9b8-4bfd-a19c-fe50b54f4b12",
"block_id": "1f292d4a-41a4-4977-9684-7c8d560b9f91",
"input_default": {
"model": "gpt-4o-mini",
"prompt": "Generate an incredibly detailed, photorealistic image prompt about {{TOPIC}}, describing the camera it's taken with and prompting the diffusion model to use all the best quality techniques.\n\nOutput only the prompt with no additional commentary.",
"prompt_values": {}
},
"metadata": {
"position": {
"x": 277.3057034159709,
"y": 962.8382498113764
}
},
"input_links": [
{
"id": "1077c61a-a32a-4ed7-becf-11bcf835b914",
"source_id": "6f24c45f-1548-4eda-9784-da06ce0abef8",
"sink_id": "0d1bca9a-d9b8-4bfd-a19c-fe50b54f4b12",
"source_name": "result",
"sink_name": "prompt_values_#_TOPIC",
"is_static": true
}
],
"output_links": [
{
"id": "a17ec505-9377-4700-8fe0-124ca81d43a9",
"source_id": "0d1bca9a-d9b8-4bfd-a19c-fe50b54f4b12",
"sink_id": "50bc23e9-f2b7-4959-8710-99679ed9eeea",
"source_name": "response",
"sink_name": "prompt",
"is_static": false
}
],
"graph_id": "ed2091cf-5b27-45a9-b3ea-42396f95b256",
"graph_version": 12,
"webhook_id": null,
"webhook": null
}
],
"links": [
{
"id": "1077c61a-a32a-4ed7-becf-11bcf835b914",
"source_id": "6f24c45f-1548-4eda-9784-da06ce0abef8",
"sink_id": "0d1bca9a-d9b8-4bfd-a19c-fe50b54f4b12",
"source_name": "result",
"sink_name": "prompt_values_#_TOPIC",
"is_static": true
},
{
"id": "06665d23-2f3d-4445-8f22-573446fcff5b",
"source_id": "50bc23e9-f2b7-4959-8710-99679ed9eeea",
"sink_id": "0d1dec1a-e4ee-4349-9673-449a01bbf14e",
"source_name": "result",
"sink_name": "value",
"is_static": false
},
{
"id": "a17ec505-9377-4700-8fe0-124ca81d43a9",
"source_id": "0d1bca9a-d9b8-4bfd-a19c-fe50b54f4b12",
"sink_id": "50bc23e9-f2b7-4959-8710-99679ed9eeea",
"source_name": "response",
"sink_name": "prompt",
"is_static": false
}
],
"forked_from_id": null,
"forked_from_version": null,
"sub_graphs": [],
"user_id": "",
"created_at": "2024-12-20T18:46:11.492Z",
"input_schema": {
"type": "object",
"properties": {
"Image Subject": {
"advanced": false,
"secret": false,
"title": "Image Subject",
"description": "The subject of the image",
"default": "Otto the friendly, purple \"Chief Automation Octopus\" helping people automate their tedious tasks."
}
},
"required": []
},
"output_schema": {
"type": "object",
"properties": {
"Generated Image": {
"advanced": false,
"secret": false,
"title": "Generated Image"
}
},
"required": [
"Generated Image"
]
},
"has_external_trigger": false,
"has_human_in_the_loop": false,
"trigger_setup_info": null,
"credentials_input_schema": {
"properties": {
"replicate_api_key_credentials": {
"credentials_provider": [
"replicate"
],
"credentials_types": [
"api_key"
],
"properties": {
"id": {
"title": "Id",
"type": "string"
},
"title": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Title"
},
"provider": {
"const": "replicate",
"title": "Provider",
"type": "string"
},
"type": {
"const": "api_key",
"title": "Type",
"type": "string"
}
},
"required": [
"id",
"provider",
"type"
],
"title": "CredentialsMetaInput[Literal[<ProviderName.REPLICATE: 'replicate'>], Literal['api_key']]",
"type": "object",
"discriminator_values": []
},
"openai_api_key_credentials": {
"credentials_provider": [
"openai"
],
"credentials_types": [
"api_key"
],
"properties": {
"id": {
"title": "Id",
"type": "string"
},
"title": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Title"
},
"provider": {
"const": "openai",
"title": "Provider",
"type": "string"
},
"type": {
"const": "api_key",
"title": "Type",
"type": "string"
}
},
"required": [
"id",
"provider",
"type"
],
"title": "CredentialsMetaInput[Literal[<ProviderName.OPENAI: 'openai'>], Literal['api_key']]",
"type": "object",
"discriminator": "model",
"discriminator_mapping": {
"Llama-3.3-70B-Instruct": "llama_api",
"Llama-3.3-8B-Instruct": "llama_api",
"Llama-4-Maverick-17B-128E-Instruct-FP8": "llama_api",
"Llama-4-Scout-17B-16E-Instruct-FP8": "llama_api",
"Qwen/Qwen2.5-72B-Instruct-Turbo": "aiml_api",
"amazon/nova-lite-v1": "open_router",
"amazon/nova-micro-v1": "open_router",
"amazon/nova-pro-v1": "open_router",
"claude-3-7-sonnet-20250219": "anthropic",
"claude-3-haiku-20240307": "anthropic",
"claude-haiku-4-5-20251001": "anthropic",
"claude-opus-4-1-20250805": "anthropic",
"claude-opus-4-20250514": "anthropic",
"claude-opus-4-5-20251101": "anthropic",
"claude-sonnet-4-20250514": "anthropic",
"claude-sonnet-4-5-20250929": "anthropic",
"cohere/command-r-08-2024": "open_router",
"cohere/command-r-plus-08-2024": "open_router",
"deepseek/deepseek-chat": "open_router",
"deepseek/deepseek-r1-0528": "open_router",
"dolphin-mistral:latest": "ollama",
"google/gemini-2.0-flash-001": "open_router",
"google/gemini-2.0-flash-lite-001": "open_router",
"google/gemini-2.5-flash": "open_router",
"google/gemini-2.5-flash-lite-preview-06-17": "open_router",
"google/gemini-2.5-pro-preview-03-25": "open_router",
"google/gemini-3-pro-preview": "open_router",
"gpt-3.5-turbo": "openai",
"gpt-4-turbo": "openai",
"gpt-4.1-2025-04-14": "openai",
"gpt-4.1-mini-2025-04-14": "openai",
"gpt-4o": "openai",
"gpt-4o-mini": "openai",
"gpt-5-2025-08-07": "openai",
"gpt-5-chat-latest": "openai",
"gpt-5-mini-2025-08-07": "openai",
"gpt-5-nano-2025-08-07": "openai",
"gpt-5.1-2025-11-13": "openai",
"gryphe/mythomax-l2-13b": "open_router",
"llama-3.1-8b-instant": "groq",
"llama-3.3-70b-versatile": "groq",
"llama3": "ollama",
"llama3.1:405b": "ollama",
"llama3.2": "ollama",
"llama3.3": "ollama",
"meta-llama/Llama-3.2-3B-Instruct-Turbo": "aiml_api",
"meta-llama/Llama-3.3-70B-Instruct-Turbo": "aiml_api",
"meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo": "aiml_api",
"meta-llama/llama-4-maverick": "open_router",
"meta-llama/llama-4-scout": "open_router",
"microsoft/wizardlm-2-8x22b": "open_router",
"mistralai/mistral-nemo": "open_router",
"moonshotai/kimi-k2": "open_router",
"nousresearch/hermes-3-llama-3.1-405b": "open_router",
"nousresearch/hermes-3-llama-3.1-70b": "open_router",
"nvidia/llama-3.1-nemotron-70b-instruct": "aiml_api",
"o1": "openai",
"o1-mini": "openai",
"o3-2025-04-16": "openai",
"o3-mini": "openai",
"openai/gpt-oss-120b": "open_router",
"openai/gpt-oss-20b": "open_router",
"perplexity/sonar": "open_router",
"perplexity/sonar-deep-research": "open_router",
"perplexity/sonar-pro": "open_router",
"qwen/qwen3-235b-a22b-thinking-2507": "open_router",
"qwen/qwen3-coder": "open_router",
"v0-1.0-md": "v0",
"v0-1.5-lg": "v0",
"v0-1.5-md": "v0",
"x-ai/grok-4": "open_router",
"x-ai/grok-4-fast": "open_router",
"x-ai/grok-4.1-fast": "open_router",
"x-ai/grok-code-fast-1": "open_router"
},
"discriminator_values": [
"gpt-4o-mini"
]
}
},
"required": [
"replicate_api_key_credentials",
"openai_api_key_credentials"
],
"title": "FluxAIImageGeneratorCredentialsInputSchema",
"type": "object"
}
}
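
Note the `credentials_input_schema` pattern repeated in each export: `discriminator_mapping` lists every known model name with its provider, while `discriminator_values` names the models this particular graph actually uses (here `gpt-4o-mini`), which is what lets a client decide whose API key to prompt for. A small sketch of that resolution, offered as an illustration rather than the platform's actual validator:

```python
def required_provider(cred_field: dict) -> str | None:
    """Resolve the provider implied by a credential field's selected models."""
    mapping = cred_field.get("discriminator_mapping", {})
    used = cred_field.get("discriminator_values", [])
    providers = {mapping[m] for m in used if m in mapping}
    if len(providers) > 1:
        raise ValueError(f"models span multiple providers: {sorted(providers)}")
    return providers.pop() if providers else None


# Mirroring the openai_api_key_credentials field above (mapping truncated):
field = {
    "discriminator_mapping": {"gpt-4o-mini": "openai", "llama3": "ollama"},
    "discriminator_values": ["gpt-4o-mini"],
}
assert required_provider(field) == "openai"
```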

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@@ -0,0 +1,505 @@
{
"id": "0d440799-44ba-4d6c-85b3-b3739f1e1287",
"version": 12,
"is_active": true,
"name": "AI Webpage Copy Improver",
"description": "Elevate your web content with this powerful AI Webpage Copy Improver. Designed for marketers, SEO specialists, and web developers, this tool analyses and enhances website copy for maximum impact. Using advanced language models, it optimizes text for better clarity, SEO performance, and increased conversion rates. The AI examines your existing content, identifies areas for improvement, and generates refined copy that maintains your brand voice while boosting engagement. From homepage headlines to product descriptions, transform your web presence with AI-driven insights. Improve readability, incorporate targeted keywords, and craft compelling calls-to-action - all with the click of a button. Take your digital marketing to the next level with the AI Webpage Copy Improver.",
"instructions": null,
"recommended_schedule_cron": null,
"nodes": [
{
"id": "130ec496-f75d-4fe2-9cd6-8c00d08ea4a7",
"block_id": "363ae599-353e-4804-937e-b2ee3cef3da4",
"input_default": {
"name": "Improved Webpage Copy"
},
"metadata": {
"position": {
"x": 1039.5884372540172,
"y": -0.8359099621230968
}
},
"input_links": [
{
"id": "d4334477-3616-454f-a430-614ca27f5b36",
"source_id": "c9924577-70d8-4ccb-9106-6f796df09ef9",
"sink_id": "130ec496-f75d-4fe2-9cd6-8c00d08ea4a7",
"source_name": "response",
"sink_name": "value",
"is_static": false
}
],
"output_links": [],
"graph_id": "0d440799-44ba-4d6c-85b3-b3739f1e1287",
"graph_version": 12,
"webhook_id": null,
"webhook": null
},
{
"id": "cefccd07-fe70-4feb-bf76-46b20aaa5d35",
"block_id": "363ae599-353e-4804-937e-b2ee3cef3da4",
"input_default": {
"name": "Original Page Analysis",
"description": "Analysis of the webpage as it currently stands."
},
"metadata": {
"position": {
"x": 1037.7724103954706,
"y": -606.5934325506903
}
},
"input_links": [
{
"id": "f979ab78-0903-4f19-a7c2-a419d5d81aef",
"source_id": "08612ce2-625b-4c17-accd-3acace7b6477",
"sink_id": "cefccd07-fe70-4feb-bf76-46b20aaa5d35",
"source_name": "response",
"sink_name": "value",
"is_static": false
}
],
"output_links": [],
"graph_id": "0d440799-44ba-4d6c-85b3-b3739f1e1287",
"graph_version": 12,
"webhook_id": null,
"webhook": null
},
{
"id": "375f8bc3-afd9-4025-ad8e-9aeb329af7ce",
"block_id": "c0a8e994-ebf1-4a9c-a4d8-89d09c86741b",
"input_default": {
"name": "Homepage URL",
"value": "https://agpt.co",
"description": "Enter the URL of the homepage you want to improve"
},
"metadata": {
"position": {
"x": -1195.1455674454749,
"y": 0
}
},
"input_links": [],
"output_links": [
{
"id": "cbb12335-fefd-4560-9fff-98675130fbad",
"source_id": "375f8bc3-afd9-4025-ad8e-9aeb329af7ce",
"sink_id": "b40595c6-dba3-4779-a129-cd4f01fff103",
"source_name": "result",
"sink_name": "url",
"is_static": true
}
],
"graph_id": "0d440799-44ba-4d6c-85b3-b3739f1e1287",
"graph_version": 12,
"webhook_id": null,
"webhook": null
},
{
"id": "b40595c6-dba3-4779-a129-cd4f01fff103",
"block_id": "436c3984-57fd-4b85-8e9a-459b356883bd",
"input_default": {
"raw_content": false
},
"metadata": {
"position": {
"x": -631.7330786555249,
"y": 1.9638396496230826
}
},
"input_links": [
{
"id": "cbb12335-fefd-4560-9fff-98675130fbad",
"source_id": "375f8bc3-afd9-4025-ad8e-9aeb329af7ce",
"sink_id": "b40595c6-dba3-4779-a129-cd4f01fff103",
"source_name": "result",
"sink_name": "url",
"is_static": true
}
],
"output_links": [
{
"id": "adfa6113-77b3-4e32-b136-3e694b87553e",
"source_id": "b40595c6-dba3-4779-a129-cd4f01fff103",
"sink_id": "c9924577-70d8-4ccb-9106-6f796df09ef9",
"source_name": "content",
"sink_name": "prompt_values_#_CONTENT",
"is_static": false
},
{
"id": "5d5656fd-4208-4296-bc70-e39cc31caada",
"source_id": "b40595c6-dba3-4779-a129-cd4f01fff103",
"sink_id": "08612ce2-625b-4c17-accd-3acace7b6477",
"source_name": "content",
"sink_name": "prompt_values_#_CONTENT",
"is_static": false
}
],
"graph_id": "0d440799-44ba-4d6c-85b3-b3739f1e1287",
"graph_version": 12,
"webhook_id": null,
"webhook": null
},
{
"id": "c9924577-70d8-4ccb-9106-6f796df09ef9",
"block_id": "1f292d4a-41a4-4977-9684-7c8d560b9f91",
"input_default": {
"model": "gpt-4o",
"prompt": "Current Webpage Content:\n```\n{{CONTENT}}\n```\n\nBased on the following analysis of the webpage content:\n\n```\n{{ANALYSIS}}\n```\n\nRewrite and improve the content to address the identified issues. Focus on:\n1. Enhancing clarity and readability\n2. Optimizing for SEO (suggest and incorporate relevant keywords)\n3. Improving calls-to-action for better conversion rates\n4. Refining the structure and organization\n5. Maintaining brand consistency while improving the overall tone\n\nProvide the improved content in HTML format inside a code-block with \"```\" backticks, preserving the original structure where appropriate. Also, include a brief summary of the changes made and their potential impact.",
"prompt_values": {}
},
"metadata": {
"position": {
"x": 488.37278423303917,
"y": 0
}
},
"input_links": [
{
"id": "adfa6113-77b3-4e32-b136-3e694b87553e",
"source_id": "b40595c6-dba3-4779-a129-cd4f01fff103",
"sink_id": "c9924577-70d8-4ccb-9106-6f796df09ef9",
"source_name": "content",
"sink_name": "prompt_values_#_CONTENT",
"is_static": false
},
{
"id": "6bcca45d-c9d5-439e-ac43-e4a1264d8f57",
"source_id": "08612ce2-625b-4c17-accd-3acace7b6477",
"sink_id": "c9924577-70d8-4ccb-9106-6f796df09ef9",
"source_name": "response",
"sink_name": "prompt_values_#_ANALYSIS",
"is_static": false
}
],
"output_links": [
{
"id": "d4334477-3616-454f-a430-614ca27f5b36",
"source_id": "c9924577-70d8-4ccb-9106-6f796df09ef9",
"sink_id": "130ec496-f75d-4fe2-9cd6-8c00d08ea4a7",
"source_name": "response",
"sink_name": "value",
"is_static": false
}
],
"graph_id": "0d440799-44ba-4d6c-85b3-b3739f1e1287",
"graph_version": 12,
"webhook_id": null,
"webhook": null
},
{
"id": "08612ce2-625b-4c17-accd-3acace7b6477",
"block_id": "1f292d4a-41a4-4977-9684-7c8d560b9f91",
"input_default": {
"model": "gpt-4o",
"prompt": "Analyze the following webpage content and provide a detailed report on its current state, including strengths and weaknesses in terms of clarity, SEO optimization, and potential for conversion:\n\n{{CONTENT}}\n\nInclude observations on:\n1. Overall readability and clarity\n2. Use of keywords and SEO-friendly language\n3. Effectiveness of calls-to-action\n4. Structure and organization of content\n5. Tone and brand consistency",
"prompt_values": {}
},
"metadata": {
"position": {
"x": -72.66206703605442,
"y": -0.58403945075381
}
},
"input_links": [
{
"id": "5d5656fd-4208-4296-bc70-e39cc31caada",
"source_id": "b40595c6-dba3-4779-a129-cd4f01fff103",
"sink_id": "08612ce2-625b-4c17-accd-3acace7b6477",
"source_name": "content",
"sink_name": "prompt_values_#_CONTENT",
"is_static": false
}
],
"output_links": [
{
"id": "f979ab78-0903-4f19-a7c2-a419d5d81aef",
"source_id": "08612ce2-625b-4c17-accd-3acace7b6477",
"sink_id": "cefccd07-fe70-4feb-bf76-46b20aaa5d35",
"source_name": "response",
"sink_name": "value",
"is_static": false
},
{
"id": "6bcca45d-c9d5-439e-ac43-e4a1264d8f57",
"source_id": "08612ce2-625b-4c17-accd-3acace7b6477",
"sink_id": "c9924577-70d8-4ccb-9106-6f796df09ef9",
"source_name": "response",
"sink_name": "prompt_values_#_ANALYSIS",
"is_static": false
}
],
"graph_id": "0d440799-44ba-4d6c-85b3-b3739f1e1287",
"graph_version": 12,
"webhook_id": null,
"webhook": null
}
],
"links": [
{
"id": "adfa6113-77b3-4e32-b136-3e694b87553e",
"source_id": "b40595c6-dba3-4779-a129-cd4f01fff103",
"sink_id": "c9924577-70d8-4ccb-9106-6f796df09ef9",
"source_name": "content",
"sink_name": "prompt_values_#_CONTENT",
"is_static": false
},
{
"id": "d4334477-3616-454f-a430-614ca27f5b36",
"source_id": "c9924577-70d8-4ccb-9106-6f796df09ef9",
"sink_id": "130ec496-f75d-4fe2-9cd6-8c00d08ea4a7",
"source_name": "response",
"sink_name": "value",
"is_static": false
},
{
"id": "5d5656fd-4208-4296-bc70-e39cc31caada",
"source_id": "b40595c6-dba3-4779-a129-cd4f01fff103",
"sink_id": "08612ce2-625b-4c17-accd-3acace7b6477",
"source_name": "content",
"sink_name": "prompt_values_#_CONTENT",
"is_static": false
},
{
"id": "f979ab78-0903-4f19-a7c2-a419d5d81aef",
"source_id": "08612ce2-625b-4c17-accd-3acace7b6477",
"sink_id": "cefccd07-fe70-4feb-bf76-46b20aaa5d35",
"source_name": "response",
"sink_name": "value",
"is_static": false
},
{
"id": "6bcca45d-c9d5-439e-ac43-e4a1264d8f57",
"source_id": "08612ce2-625b-4c17-accd-3acace7b6477",
"sink_id": "c9924577-70d8-4ccb-9106-6f796df09ef9",
"source_name": "response",
"sink_name": "prompt_values_#_ANALYSIS",
"is_static": false
},
{
"id": "cbb12335-fefd-4560-9fff-98675130fbad",
"source_id": "375f8bc3-afd9-4025-ad8e-9aeb329af7ce",
"sink_id": "b40595c6-dba3-4779-a129-cd4f01fff103",
"source_name": "result",
"sink_name": "url",
"is_static": true
}
],
"forked_from_id": null,
"forked_from_version": null,
"sub_graphs": [],
"user_id": "",
"created_at": "2024-12-20T19:47:22.036Z",
"input_schema": {
"type": "object",
"properties": {
"Homepage URL": {
"advanced": false,
"secret": false,
"title": "Homepage URL",
"description": "Enter the URL of the homepage you want to improve",
"default": "https://agpt.co"
}
},
"required": []
},
"output_schema": {
"type": "object",
"properties": {
"Improved Webpage Copy": {
"advanced": false,
"secret": false,
"title": "Improved Webpage Copy"
},
"Original Page Analysis": {
"advanced": false,
"secret": false,
"title": "Original Page Analysis",
"description": "Analysis of the webpage as it currently stands."
}
},
"required": [
"Improved Webpage Copy",
"Original Page Analysis"
]
},
"has_external_trigger": false,
"has_human_in_the_loop": false,
"trigger_setup_info": null,
"credentials_input_schema": {
"properties": {
"jina_api_key_credentials": {
"credentials_provider": [
"jina"
],
"credentials_types": [
"api_key"
],
"properties": {
"id": {
"title": "Id",
"type": "string"
},
"title": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Title"
},
"provider": {
"const": "jina",
"title": "Provider",
"type": "string"
},
"type": {
"const": "api_key",
"title": "Type",
"type": "string"
}
},
"required": [
"id",
"provider",
"type"
],
"title": "CredentialsMetaInput[Literal[<ProviderName.JINA: 'jina'>], Literal['api_key']]",
"type": "object",
"discriminator_values": []
},
"openai_api_key_credentials": {
"credentials_provider": [
"openai"
],
"credentials_types": [
"api_key"
],
"properties": {
"id": {
"title": "Id",
"type": "string"
},
"title": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Title"
},
"provider": {
"const": "openai",
"title": "Provider",
"type": "string"
},
"type": {
"const": "api_key",
"title": "Type",
"type": "string"
}
},
"required": [
"id",
"provider",
"type"
],
"title": "CredentialsMetaInput[Literal[<ProviderName.OPENAI: 'openai'>], Literal['api_key']]",
"type": "object",
"discriminator": "model",
"discriminator_mapping": {
"Llama-3.3-70B-Instruct": "llama_api",
"Llama-3.3-8B-Instruct": "llama_api",
"Llama-4-Maverick-17B-128E-Instruct-FP8": "llama_api",
"Llama-4-Scout-17B-16E-Instruct-FP8": "llama_api",
"Qwen/Qwen2.5-72B-Instruct-Turbo": "aiml_api",
"amazon/nova-lite-v1": "open_router",
"amazon/nova-micro-v1": "open_router",
"amazon/nova-pro-v1": "open_router",
"claude-3-7-sonnet-20250219": "anthropic",
"claude-3-haiku-20240307": "anthropic",
"claude-haiku-4-5-20251001": "anthropic",
"claude-opus-4-1-20250805": "anthropic",
"claude-opus-4-20250514": "anthropic",
"claude-opus-4-5-20251101": "anthropic",
"claude-sonnet-4-20250514": "anthropic",
"claude-sonnet-4-5-20250929": "anthropic",
"cohere/command-r-08-2024": "open_router",
"cohere/command-r-plus-08-2024": "open_router",
"deepseek/deepseek-chat": "open_router",
"deepseek/deepseek-r1-0528": "open_router",
"dolphin-mistral:latest": "ollama",
"google/gemini-2.0-flash-001": "open_router",
"google/gemini-2.0-flash-lite-001": "open_router",
"google/gemini-2.5-flash": "open_router",
"google/gemini-2.5-flash-lite-preview-06-17": "open_router",
"google/gemini-2.5-pro-preview-03-25": "open_router",
"google/gemini-3-pro-preview": "open_router",
"gpt-3.5-turbo": "openai",
"gpt-4-turbo": "openai",
"gpt-4.1-2025-04-14": "openai",
"gpt-4.1-mini-2025-04-14": "openai",
"gpt-4o": "openai",
"gpt-4o-mini": "openai",
"gpt-5-2025-08-07": "openai",
"gpt-5-chat-latest": "openai",
"gpt-5-mini-2025-08-07": "openai",
"gpt-5-nano-2025-08-07": "openai",
"gpt-5.1-2025-11-13": "openai",
"gryphe/mythomax-l2-13b": "open_router",
"llama-3.1-8b-instant": "groq",
"llama-3.3-70b-versatile": "groq",
"llama3": "ollama",
"llama3.1:405b": "ollama",
"llama3.2": "ollama",
"llama3.3": "ollama",
"meta-llama/Llama-3.2-3B-Instruct-Turbo": "aiml_api",
"meta-llama/Llama-3.3-70B-Instruct-Turbo": "aiml_api",
"meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo": "aiml_api",
"meta-llama/llama-4-maverick": "open_router",
"meta-llama/llama-4-scout": "open_router",
"microsoft/wizardlm-2-8x22b": "open_router",
"mistralai/mistral-nemo": "open_router",
"moonshotai/kimi-k2": "open_router",
"nousresearch/hermes-3-llama-3.1-405b": "open_router",
"nousresearch/hermes-3-llama-3.1-70b": "open_router",
"nvidia/llama-3.1-nemotron-70b-instruct": "aiml_api",
"o1": "openai",
"o1-mini": "openai",
"o3-2025-04-16": "openai",
"o3-mini": "openai",
"openai/gpt-oss-120b": "open_router",
"openai/gpt-oss-20b": "open_router",
"perplexity/sonar": "open_router",
"perplexity/sonar-deep-research": "open_router",
"perplexity/sonar-pro": "open_router",
"qwen/qwen3-235b-a22b-thinking-2507": "open_router",
"qwen/qwen3-coder": "open_router",
"v0-1.0-md": "v0",
"v0-1.5-lg": "v0",
"v0-1.5-md": "v0",
"x-ai/grok-4": "open_router",
"x-ai/grok-4-fast": "open_router",
"x-ai/grok-4.1-fast": "open_router",
"x-ai/grok-code-fast-1": "open_router"
},
"discriminator_values": [
"gpt-4o"
]
}
},
"required": [
"jina_api_key_credentials",
"openai_api_key_credentials"
],
"title": "AIWebpageCopyImproverCredentialsInputSchema",
"type": "object"
}
}

View File

@@ -0,0 +1,615 @@
{
"id": "4c6b68cb-bb75-4044-b1cb-2cee3fd39b26",
"version": 29,
"is_active": true,
"name": "Email Address Finder",
"description": "Input information of a business and find their email address",
"instructions": null,
"recommended_schedule_cron": null,
"nodes": [
{
"id": "04cad535-9f1a-4876-8b07-af5897d8c282",
"block_id": "c0a8e994-ebf1-4a9c-a4d8-89d09c86741b",
"input_default": {
"name": "Address",
"value": "USA"
},
"metadata": {
"position": {
"x": 1047.9357219838776,
"y": 1067.9123910370954
}
},
"input_links": [],
"output_links": [
{
"id": "aac29f7b-3cd1-4c91-9a2a-72a8301c0957",
"source_id": "04cad535-9f1a-4876-8b07-af5897d8c282",
"sink_id": "28b5ddcc-dc20-41cc-ad21-c54ff459f694",
"source_name": "result",
"sink_name": "values_#_ADDRESS",
"is_static": true
}
],
"graph_id": "4c6b68cb-bb75-4044-b1cb-2cee3fd39b26",
"graph_version": 29,
"webhook_id": null,
"webhook": null
},
{
"id": "a6e7355e-5bf8-4b09-b11c-a5e140389981",
"block_id": "3146e4fe-2cdd-4f29-bd12-0c9d5bb4deb0",
"input_default": {
"group": 1,
"pattern": "<email>(.*?)<\\/email>"
},
"metadata": {
"position": {
"x": 3381.2821481740634,
"y": 246.091098184158
}
},
"input_links": [
{
"id": "9f8188ce-1f3d-46fb-acda-b2a57c0e5da6",
"source_id": "510937b3-0134-4e45-b2ba-05a447bbaf50",
"sink_id": "a6e7355e-5bf8-4b09-b11c-a5e140389981",
"source_name": "response",
"sink_name": "text",
"is_static": false
}
],
"output_links": [
{
"id": "b15b5143-27b7-486e-a166-4095e72e5235",
"source_id": "a6e7355e-5bf8-4b09-b11c-a5e140389981",
"sink_id": "266b7255-11c4-4b88-99e2-85db31a2e865",
"source_name": "negative",
"sink_name": "values_#_Result",
"is_static": false
},
{
"id": "23591872-3c6b-4562-87d3-5b6ade698e48",
"source_id": "a6e7355e-5bf8-4b09-b11c-a5e140389981",
"sink_id": "310c8fab-2ae6-4158-bd48-01dbdc434130",
"source_name": "positive",
"sink_name": "value",
"is_static": false
}
],
"graph_id": "4c6b68cb-bb75-4044-b1cb-2cee3fd39b26",
"graph_version": 29,
"webhook_id": null,
"webhook": null
},
{
"id": "310c8fab-2ae6-4158-bd48-01dbdc434130",
"block_id": "363ae599-353e-4804-937e-b2ee3cef3da4",
"input_default": {
"name": "Email"
},
"metadata": {
"position": {
"x": 4525.4246310882,
"y": 246.36913665010354
}
},
"input_links": [
{
"id": "d87b07ea-dcec-4d38-a644-2c1d741ea3cb",
"source_id": "266b7255-11c4-4b88-99e2-85db31a2e865",
"sink_id": "310c8fab-2ae6-4158-bd48-01dbdc434130",
"source_name": "output",
"sink_name": "value",
"is_static": false
},
{
"id": "23591872-3c6b-4562-87d3-5b6ade698e48",
"source_id": "a6e7355e-5bf8-4b09-b11c-a5e140389981",
"sink_id": "310c8fab-2ae6-4158-bd48-01dbdc434130",
"source_name": "positive",
"sink_name": "value",
"is_static": false
}
],
"output_links": [],
"graph_id": "4c6b68cb-bb75-4044-b1cb-2cee3fd39b26",
"graph_version": 29,
"webhook_id": null,
"webhook": null
},
{
"id": "4a41df99-ffe2-4c12-b528-632979c9c030",
"block_id": "87840993-2053-44b7-8da4-187ad4ee518c",
"input_default": {},
"metadata": {
"position": {
"x": 2182.7499999999995,
"y": 242.00001144409185
}
},
"input_links": [
{
"id": "2e411d3d-79ba-4958-9c1c-b76a45a2e649",
"source_id": "28b5ddcc-dc20-41cc-ad21-c54ff459f694",
"sink_id": "4a41df99-ffe2-4c12-b528-632979c9c030",
"source_name": "output",
"sink_name": "query",
"is_static": false
}
],
"output_links": [
{
"id": "899cc7d8-a96b-4107-b3c6-4c78edcf0c6b",
"source_id": "4a41df99-ffe2-4c12-b528-632979c9c030",
"sink_id": "510937b3-0134-4e45-b2ba-05a447bbaf50",
"source_name": "results",
"sink_name": "prompt_values_#_WEBSITE_CONTENT",
"is_static": false
}
],
"graph_id": "4c6b68cb-bb75-4044-b1cb-2cee3fd39b26",
"graph_version": 29,
"webhook_id": null,
"webhook": null
},
{
"id": "9708a10a-8be0-4c44-abb3-bd0f7c594794",
"block_id": "c0a8e994-ebf1-4a9c-a4d8-89d09c86741b",
"input_default": {
"name": "Business Name",
"value": "Tim Cook"
},
"metadata": {
"position": {
"x": 1049.9704155272595,
"y": 244.49931152418344
}
},
"input_links": [],
"output_links": [
{
"id": "946b522c-365f-4ee0-96f9-28863d9882ea",
"source_id": "9708a10a-8be0-4c44-abb3-bd0f7c594794",
"sink_id": "28b5ddcc-dc20-41cc-ad21-c54ff459f694",
"source_name": "result",
"sink_name": "values_#_NAME",
"is_static": true
},
{
"id": "43e920a7-0bb4-4fae-9a22-91df95c7342a",
"source_id": "9708a10a-8be0-4c44-abb3-bd0f7c594794",
"sink_id": "510937b3-0134-4e45-b2ba-05a447bbaf50",
"source_name": "result",
"sink_name": "prompt_values_#_BUSINESS_NAME",
"is_static": true
}
],
"graph_id": "4c6b68cb-bb75-4044-b1cb-2cee3fd39b26",
"graph_version": 29,
"webhook_id": null,
"webhook": null
},
{
"id": "28b5ddcc-dc20-41cc-ad21-c54ff459f694",
"block_id": "db7d8f02-2f44-4c55-ab7a-eae0941f0c30",
"input_default": {
"format": "Email Address of {{NAME}}, {{ADDRESS}}",
"values": {}
},
"metadata": {
"position": {
"x": 1625.25,
"y": 243.25001144409185
}
},
"input_links": [
{
"id": "946b522c-365f-4ee0-96f9-28863d9882ea",
"source_id": "9708a10a-8be0-4c44-abb3-bd0f7c594794",
"sink_id": "28b5ddcc-dc20-41cc-ad21-c54ff459f694",
"source_name": "result",
"sink_name": "values_#_NAME",
"is_static": true
},
{
"id": "aac29f7b-3cd1-4c91-9a2a-72a8301c0957",
"source_id": "04cad535-9f1a-4876-8b07-af5897d8c282",
"sink_id": "28b5ddcc-dc20-41cc-ad21-c54ff459f694",
"source_name": "result",
"sink_name": "values_#_ADDRESS",
"is_static": true
}
],
"output_links": [
{
"id": "2e411d3d-79ba-4958-9c1c-b76a45a2e649",
"source_id": "28b5ddcc-dc20-41cc-ad21-c54ff459f694",
"sink_id": "4a41df99-ffe2-4c12-b528-632979c9c030",
"source_name": "output",
"sink_name": "query",
"is_static": false
}
],
"graph_id": "4c6b68cb-bb75-4044-b1cb-2cee3fd39b26",
"graph_version": 29,
"webhook_id": null,
"webhook": null
},
{
"id": "266b7255-11c4-4b88-99e2-85db31a2e865",
"block_id": "db7d8f02-2f44-4c55-ab7a-eae0941f0c30",
"input_default": {
"format": "Failed to find email. \nResult:\n{{RESULT}}",
"values": {}
},
"metadata": {
"position": {
"x": 3949.7493830805934,
"y": 705.209819698647
}
},
"input_links": [
{
"id": "b15b5143-27b7-486e-a166-4095e72e5235",
"source_id": "a6e7355e-5bf8-4b09-b11c-a5e140389981",
"sink_id": "266b7255-11c4-4b88-99e2-85db31a2e865",
"source_name": "negative",
"sink_name": "values_#_Result",
"is_static": false
}
],
"output_links": [
{
"id": "d87b07ea-dcec-4d38-a644-2c1d741ea3cb",
"source_id": "266b7255-11c4-4b88-99e2-85db31a2e865",
"sink_id": "310c8fab-2ae6-4158-bd48-01dbdc434130",
"source_name": "output",
"sink_name": "value",
"is_static": false
}
],
"graph_id": "4c6b68cb-bb75-4044-b1cb-2cee3fd39b26",
"graph_version": 29,
"webhook_id": null,
"webhook": null
},
{
"id": "510937b3-0134-4e45-b2ba-05a447bbaf50",
"block_id": "1f292d4a-41a4-4977-9684-7c8d560b9f91",
"input_default": {
"model": "claude-sonnet-4-5-20250929",
"prompt": "<business_website>\n{{WEBSITE_CONTENT}}\n</business_website>\n\nExtract the Contact Email of {{BUSINESS_NAME}}.\n\nIf no email that can be used to contact {{BUSINESS_NAME}} is present, output `N/A`.\nDo not share any emails other than the email for this specific entity.\n\nIf multiple present pick the likely best one.\n\nRespond with the email (or N/A) inside <email></email> tags.\n\nExample Response:\n\n<thoughts_or_comments>\nThere were many emails present, but luckily one was for {{BUSINESS_NAME}} which I have included below.\n</thoughts_or_comments>\n<email>\nexample@email.com\n</email>",
"prompt_values": {}
},
"metadata": {
"position": {
"x": 2774.879259081777,
"y": 243.3102035752969
}
},
"input_links": [
{
"id": "43e920a7-0bb4-4fae-9a22-91df95c7342a",
"source_id": "9708a10a-8be0-4c44-abb3-bd0f7c594794",
"sink_id": "510937b3-0134-4e45-b2ba-05a447bbaf50",
"source_name": "result",
"sink_name": "prompt_values_#_BUSINESS_NAME",
"is_static": true
},
{
"id": "899cc7d8-a96b-4107-b3c6-4c78edcf0c6b",
"source_id": "4a41df99-ffe2-4c12-b528-632979c9c030",
"sink_id": "510937b3-0134-4e45-b2ba-05a447bbaf50",
"source_name": "results",
"sink_name": "prompt_values_#_WEBSITE_CONTENT",
"is_static": false
}
],
"output_links": [
{
"id": "9f8188ce-1f3d-46fb-acda-b2a57c0e5da6",
"source_id": "510937b3-0134-4e45-b2ba-05a447bbaf50",
"sink_id": "a6e7355e-5bf8-4b09-b11c-a5e140389981",
"source_name": "response",
"sink_name": "text",
"is_static": false
}
],
"graph_id": "4c6b68cb-bb75-4044-b1cb-2cee3fd39b26",
"graph_version": 29,
"webhook_id": null,
"webhook": null
}
],
"links": [
{
"id": "9f8188ce-1f3d-46fb-acda-b2a57c0e5da6",
"source_id": "510937b3-0134-4e45-b2ba-05a447bbaf50",
"sink_id": "a6e7355e-5bf8-4b09-b11c-a5e140389981",
"source_name": "response",
"sink_name": "text",
"is_static": false
},
{
"id": "b15b5143-27b7-486e-a166-4095e72e5235",
"source_id": "a6e7355e-5bf8-4b09-b11c-a5e140389981",
"sink_id": "266b7255-11c4-4b88-99e2-85db31a2e865",
"source_name": "negative",
"sink_name": "values_#_Result",
"is_static": false
},
{
"id": "d87b07ea-dcec-4d38-a644-2c1d741ea3cb",
"source_id": "266b7255-11c4-4b88-99e2-85db31a2e865",
"sink_id": "310c8fab-2ae6-4158-bd48-01dbdc434130",
"source_name": "output",
"sink_name": "value",
"is_static": false
},
{
"id": "946b522c-365f-4ee0-96f9-28863d9882ea",
"source_id": "9708a10a-8be0-4c44-abb3-bd0f7c594794",
"sink_id": "28b5ddcc-dc20-41cc-ad21-c54ff459f694",
"source_name": "result",
"sink_name": "values_#_NAME",
"is_static": true
},
{
"id": "23591872-3c6b-4562-87d3-5b6ade698e48",
"source_id": "a6e7355e-5bf8-4b09-b11c-a5e140389981",
"sink_id": "310c8fab-2ae6-4158-bd48-01dbdc434130",
"source_name": "positive",
"sink_name": "value",
"is_static": false
},
{
"id": "43e920a7-0bb4-4fae-9a22-91df95c7342a",
"source_id": "9708a10a-8be0-4c44-abb3-bd0f7c594794",
"sink_id": "510937b3-0134-4e45-b2ba-05a447bbaf50",
"source_name": "result",
"sink_name": "prompt_values_#_BUSINESS_NAME",
"is_static": true
},
{
"id": "2e411d3d-79ba-4958-9c1c-b76a45a2e649",
"source_id": "28b5ddcc-dc20-41cc-ad21-c54ff459f694",
"sink_id": "4a41df99-ffe2-4c12-b528-632979c9c030",
"source_name": "output",
"sink_name": "query",
"is_static": false
},
{
"id": "aac29f7b-3cd1-4c91-9a2a-72a8301c0957",
"source_id": "04cad535-9f1a-4876-8b07-af5897d8c282",
"sink_id": "28b5ddcc-dc20-41cc-ad21-c54ff459f694",
"source_name": "result",
"sink_name": "values_#_ADDRESS",
"is_static": true
},
{
"id": "899cc7d8-a96b-4107-b3c6-4c78edcf0c6b",
"source_id": "4a41df99-ffe2-4c12-b528-632979c9c030",
"sink_id": "510937b3-0134-4e45-b2ba-05a447bbaf50",
"source_name": "results",
"sink_name": "prompt_values_#_WEBSITE_CONTENT",
"is_static": false
}
],
"forked_from_id": null,
"forked_from_version": null,
"sub_graphs": [],
"user_id": "",
"created_at": "2025-01-03T00:46:30.244Z",
"input_schema": {
"type": "object",
"properties": {
"Address": {
"advanced": false,
"secret": false,
"title": "Address",
"default": "USA"
},
"Business Name": {
"advanced": false,
"secret": false,
"title": "Business Name",
"default": "Tim Cook"
}
},
"required": []
},
"output_schema": {
"type": "object",
"properties": {
"Email": {
"advanced": false,
"secret": false,
"title": "Email"
}
},
"required": [
"Email"
]
},
"has_external_trigger": false,
"has_human_in_the_loop": false,
"trigger_setup_info": null,
"credentials_input_schema": {
"properties": {
"jina_api_key_credentials": {
"credentials_provider": [
"jina"
],
"credentials_types": [
"api_key"
],
"properties": {
"id": {
"title": "Id",
"type": "string"
},
"title": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Title"
},
"provider": {
"const": "jina",
"title": "Provider",
"type": "string"
},
"type": {
"const": "api_key",
"title": "Type",
"type": "string"
}
},
"required": [
"id",
"provider",
"type"
],
"title": "CredentialsMetaInput[Literal[<ProviderName.JINA: 'jina'>], Literal['api_key']]",
"type": "object",
"discriminator_values": []
},
"anthropic_api_key_credentials": {
"credentials_provider": [
"anthropic"
],
"credentials_types": [
"api_key"
],
"properties": {
"id": {
"title": "Id",
"type": "string"
},
"title": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Title"
},
"provider": {
"const": "anthropic",
"title": "Provider",
"type": "string"
},
"type": {
"const": "api_key",
"title": "Type",
"type": "string"
}
},
"required": [
"id",
"provider",
"type"
],
"title": "CredentialsMetaInput[Literal[<ProviderName.ANTHROPIC: 'anthropic'>], Literal['api_key']]",
"type": "object",
"discriminator": "model",
"discriminator_mapping": {
"Llama-3.3-70B-Instruct": "llama_api",
"Llama-3.3-8B-Instruct": "llama_api",
"Llama-4-Maverick-17B-128E-Instruct-FP8": "llama_api",
"Llama-4-Scout-17B-16E-Instruct-FP8": "llama_api",
"Qwen/Qwen2.5-72B-Instruct-Turbo": "aiml_api",
"amazon/nova-lite-v1": "open_router",
"amazon/nova-micro-v1": "open_router",
"amazon/nova-pro-v1": "open_router",
"claude-3-7-sonnet-20250219": "anthropic",
"claude-3-haiku-20240307": "anthropic",
"claude-haiku-4-5-20251001": "anthropic",
"claude-opus-4-1-20250805": "anthropic",
"claude-opus-4-20250514": "anthropic",
"claude-opus-4-5-20251101": "anthropic",
"claude-sonnet-4-20250514": "anthropic",
"claude-sonnet-4-5-20250929": "anthropic",
"cohere/command-r-08-2024": "open_router",
"cohere/command-r-plus-08-2024": "open_router",
"deepseek/deepseek-chat": "open_router",
"deepseek/deepseek-r1-0528": "open_router",
"dolphin-mistral:latest": "ollama",
"google/gemini-2.0-flash-001": "open_router",
"google/gemini-2.0-flash-lite-001": "open_router",
"google/gemini-2.5-flash": "open_router",
"google/gemini-2.5-flash-lite-preview-06-17": "open_router",
"google/gemini-2.5-pro-preview-03-25": "open_router",
"google/gemini-3-pro-preview": "open_router",
"gpt-3.5-turbo": "openai",
"gpt-4-turbo": "openai",
"gpt-4.1-2025-04-14": "openai",
"gpt-4.1-mini-2025-04-14": "openai",
"gpt-4o": "openai",
"gpt-4o-mini": "openai",
"gpt-5-2025-08-07": "openai",
"gpt-5-chat-latest": "openai",
"gpt-5-mini-2025-08-07": "openai",
"gpt-5-nano-2025-08-07": "openai",
"gpt-5.1-2025-11-13": "openai",
"gryphe/mythomax-l2-13b": "open_router",
"llama-3.1-8b-instant": "groq",
"llama-3.3-70b-versatile": "groq",
"llama3": "ollama",
"llama3.1:405b": "ollama",
"llama3.2": "ollama",
"llama3.3": "ollama",
"meta-llama/Llama-3.2-3B-Instruct-Turbo": "aiml_api",
"meta-llama/Llama-3.3-70B-Instruct-Turbo": "aiml_api",
"meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo": "aiml_api",
"meta-llama/llama-4-maverick": "open_router",
"meta-llama/llama-4-scout": "open_router",
"microsoft/wizardlm-2-8x22b": "open_router",
"mistralai/mistral-nemo": "open_router",
"moonshotai/kimi-k2": "open_router",
"nousresearch/hermes-3-llama-3.1-405b": "open_router",
"nousresearch/hermes-3-llama-3.1-70b": "open_router",
"nvidia/llama-3.1-nemotron-70b-instruct": "aiml_api",
"o1": "openai",
"o1-mini": "openai",
"o3-2025-04-16": "openai",
"o3-mini": "openai",
"openai/gpt-oss-120b": "open_router",
"openai/gpt-oss-20b": "open_router",
"perplexity/sonar": "open_router",
"perplexity/sonar-deep-research": "open_router",
"perplexity/sonar-pro": "open_router",
"qwen/qwen3-235b-a22b-thinking-2507": "open_router",
"qwen/qwen3-coder": "open_router",
"v0-1.0-md": "v0",
"v0-1.5-lg": "v0",
"v0-1.5-md": "v0",
"x-ai/grok-4": "open_router",
"x-ai/grok-4-fast": "open_router",
"x-ai/grok-4.1-fast": "open_router",
"x-ai/grok-code-fast-1": "open_router"
},
"discriminator_values": [
"claude-sonnet-4-5-20250929"
]
}
},
"required": [
"jina_api_key_credentials",
"anthropic_api_key_credentials"
],
"title": "EmailAddressFinderCredentialsInputSchema",
"type": "object"
}
}
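The extraction step in this graph (block `a6e7355e-5bf8-4b09-b11c-a5e140389981`) pulls the address out of the LLM response using the pattern `<email>(.*?)<\/email>` with `group: 1`; its `positive` pin feeds the `Email` output and its `negative` pin feeds the "Failed to find email" formatter. A minimal stand-alone sketch of that match, not the repo's block implementation; `re.DOTALL` is an assumption needed here because the prompt puts the address on its own line:

```python
import re

# Pattern and capture group taken from the "Email Address Finder" graph above
# (the escaped slash in the stored pattern is equivalent to a plain "/").
PATTERN = r"<email>(.*?)</email>"

llm_response = (
    "<thoughts_or_comments>\n"
    "There were many emails present, but one was for this business.\n"
    "</thoughts_or_comments>\n"
    "<email>\nexample@email.com\n</email>"
)

# re.DOTALL lets "." cross the newlines surrounding the address.
match = re.search(PATTERN, llm_response, flags=re.DOTALL)
if match:
    print(match.group(1).strip())  # -> example@email.com (the "positive" pin)
else:
    print("Failed to find email.")  # the "negative" pin's formatter path
```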

View File

@@ -7,10 +7,11 @@ from backend.data.block import (
BlockInput,
BlockOutput,
BlockSchema,
BlockSchemaInput,
BlockType,
get_block,
)
from backend.data.execution import ExecutionStatus, NodesInputMasks
from backend.data.execution import ExecutionContext, ExecutionStatus, NodesInputMasks
from backend.data.model import NodeExecutionStats, SchemaField
from backend.util.json import validate_with_jsonschema
from backend.util.retry import func_retry
@@ -19,7 +20,7 @@ _logger = logging.getLogger(__name__)
class AgentExecutorBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
user_id: str = SchemaField(description="User ID")
graph_id: str = SchemaField(description="Graph ID")
graph_version: int = SchemaField(description="Graph Version")
@@ -53,6 +54,7 @@ class AgentExecutorBlock(Block):
return validate_with_jsonschema(cls.get_input_schema(data), data)
class Output(BlockSchema):
# Use BlockSchema to avoid automatic error field that could clash with graph outputs
pass
def __init__(self):
@@ -65,8 +67,14 @@ class AgentExecutorBlock(Block):
categories={BlockCategory.AGENT},
)
async def run(self, input_data: Input, **kwargs) -> BlockOutput:
async def run(
self,
input_data: Input,
*,
graph_exec_id: str,
execution_context: ExecutionContext,
**kwargs,
) -> BlockOutput:
from backend.executor import utils as execution_utils
graph_exec = await execution_utils.add_graph_execution(
@@ -75,6 +83,9 @@ class AgentExecutorBlock(Block):
user_id=input_data.user_id,
inputs=input_data.inputs,
nodes_input_masks=input_data.nodes_input_masks,
execution_context=execution_context.model_copy(
update={"parent_execution_id": graph_exec_id},
),
)
logger = execution_utils.LogMetadata(
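The key behavioral change above: `run` now receives the current `graph_exec_id` and the inherited `ExecutionContext`, and forwards a copy whose `parent_execution_id` is set to the current execution, so nested graph runs stay linked to their parent. A minimal sketch of the `model_copy(update=...)` pattern with a stand-in pydantic model; only `parent_execution_id` is taken from the diff, the other field is illustrative:

```python
from typing import Optional

from pydantic import BaseModel

class ExecutionContext(BaseModel):
    # Stand-in for backend.data.execution.ExecutionContext;
    # parent_execution_id comes from the diff, user_id is illustrative.
    parent_execution_id: Optional[str] = None
    user_id: str = ""

ctx = ExecutionContext(user_id="user-123")

# model_copy(update=...) returns a new instance with the given fields
# overridden, leaving the original context untouched.
child_ctx = ctx.model_copy(update={"parent_execution_id": "graph-exec-456"})

assert ctx.parent_execution_id is None
assert child_ctx.parent_execution_id == "graph-exec-456"
```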

View File

@@ -10,7 +10,12 @@ from backend.blocks.llm import (
LLMResponse,
llm_call,
)
from backend.data.block import BlockCategory, BlockOutput, BlockSchema
from backend.data.block import (
BlockCategory,
BlockOutput,
BlockSchemaInput,
BlockSchemaOutput,
)
from backend.data.model import APIKeyCredentials, NodeExecutionStats, SchemaField
@@ -23,7 +28,7 @@ class AIConditionBlock(AIBlockBase):
It provides the same yes/no data pass-through functionality as the standard ConditionBlock.
"""
class Input(BlockSchema):
class Input(BlockSchemaInput):
input_value: Any = SchemaField(
description="The input value to evaluate with the AI condition",
placeholder="Enter the value to be evaluated (text, number, or any data)",
@@ -50,7 +55,7 @@ class AIConditionBlock(AIBlockBase):
)
credentials: AICredentials = AICredentialsField()
class Output(BlockSchema):
class Output(BlockSchemaOutput):
result: bool = SchemaField(
description="The result of the AI condition evaluation (True or False)"
)
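This is the same two-line migration applied across the remaining block files in this compare: `Input` schemas now subclass `BlockSchemaInput` and `Output` schemas subclass `BlockSchemaOutput`. Read together with the `AgentExecutorBlock` comment above ("avoid automatic error field") and the explicit `error` fields deleted in later files, `BlockSchemaOutput` appears to supply a shared `error` output automatically. A self-contained sketch of that convention; the real base classes live in `backend.data.block`, and these pydantic stubs are assumptions for illustration only:

```python
from typing import Optional

from pydantic import BaseModel

class BlockSchemaInput(BaseModel):
    """Stand-in base for block input schemas."""

class BlockSchemaOutput(BaseModel):
    """Stand-in base, assumed to add the shared `error` output
    so individual blocks no longer declare `error` themselves."""
    error: Optional[str] = None

class Input(BlockSchemaInput):
    input_value: str
    condition: str

class Output(BlockSchemaOutput):
    result: bool  # `error` is inherited, not re-declared per block

out = Output(result=True)
assert out.error is None  # provided by the shared output base
```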

View File

@@ -1,3 +1,4 @@
import asyncio
from enum import Enum
from typing import Literal
@@ -5,7 +6,13 @@ from pydantic import SecretStr
from replicate.client import Client as ReplicateClient
from replicate.helpers import FileOutput
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.block import (
Block,
BlockCategory,
BlockOutput,
BlockSchemaInput,
BlockSchemaOutput,
)
from backend.data.model import (
APIKeyCredentials,
CredentialsField,
@@ -13,11 +20,26 @@ from backend.data.model import (
SchemaField,
)
from backend.integrations.providers import ProviderName
from backend.util.file import MediaFileType
from backend.util.file import MediaFileType, store_media_file
class GeminiImageModel(str, Enum):
NANO_BANANA = "google/nano-banana"
NANO_BANANA_PRO = "google/nano-banana-pro"
class AspectRatio(str, Enum):
MATCH_INPUT_IMAGE = "match_input_image"
ASPECT_1_1 = "1:1"
ASPECT_2_3 = "2:3"
ASPECT_3_2 = "3:2"
ASPECT_3_4 = "3:4"
ASPECT_4_3 = "4:3"
ASPECT_4_5 = "4:5"
ASPECT_5_4 = "5:4"
ASPECT_9_16 = "9:16"
ASPECT_16_9 = "16:9"
ASPECT_21_9 = "21:9"
class OutputFormat(str, Enum):
@@ -42,7 +64,7 @@ TEST_CREDENTIALS_INPUT = {
class AIImageCustomizerBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput[
Literal[ProviderName.REPLICATE], Literal["api_key"]
] = CredentialsField(
@@ -62,15 +84,19 @@ class AIImageCustomizerBlock(Block):
default=[],
title="Input Images",
)
aspect_ratio: AspectRatio = SchemaField(
description="Aspect ratio of the generated image",
default=AspectRatio.MATCH_INPUT_IMAGE,
title="Aspect Ratio",
)
output_format: OutputFormat = SchemaField(
description="Format of the output image",
default=OutputFormat.PNG,
title="Output Format",
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
image_url: MediaFileType = SchemaField(description="URL of the generated image")
error: str = SchemaField(description="Error message if generation failed")
def __init__(self):
super().__init__(
@@ -86,6 +112,7 @@ class AIImageCustomizerBlock(Block):
"prompt": "Make the scene more vibrant and colorful",
"model": GeminiImageModel.NANO_BANANA,
"images": [],
"aspect_ratio": AspectRatio.MATCH_INPUT_IMAGE,
"output_format": OutputFormat.JPG,
"credentials": TEST_CREDENTIALS_INPUT,
},
@@ -110,11 +137,25 @@ class AIImageCustomizerBlock(Block):
**kwargs,
) -> BlockOutput:
try:
# Convert local file paths to Data URIs (base64) so Replicate can access them
processed_images = await asyncio.gather(
*(
store_media_file(
graph_exec_id=graph_exec_id,
file=img,
user_id=user_id,
return_content=True,
)
for img in input_data.images
)
)
result = await self.run_model(
api_key=credentials.api_key,
model_name=input_data.model.value,
prompt=input_data.prompt,
images=input_data.images,
images=processed_images,
aspect_ratio=input_data.aspect_ratio.value,
output_format=input_data.output_format.value,
)
yield "image_url", result
@@ -127,12 +168,14 @@ class AIImageCustomizerBlock(Block):
model_name: str,
prompt: str,
images: list[MediaFileType],
aspect_ratio: str,
output_format: str,
) -> MediaFileType:
client = ReplicateClient(api_token=api_key.get_secret_value())
input_params: dict = {
"prompt": prompt,
"aspect_ratio": aspect_ratio,
"output_format": output_format,
}
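The fix above exists because the block previously passed workspace-local paths straight to Replicate, which cannot read them; `store_media_file(..., return_content=True)` inlines each image as a base64 data URI, and `asyncio.gather` converts them concurrently. A rough stand-alone equivalent, where `build_data_uri` is a hypothetical helper rather than the repo's API:

```python
import asyncio
import base64
import mimetypes
from pathlib import Path

def build_data_uri(path: str) -> str:
    # Inline a local file as a data URI so a remote API can consume it.
    mime = mimetypes.guess_type(path)[0] or "application/octet-stream"
    payload = base64.b64encode(Path(path).read_bytes()).decode()
    return f"data:{mime};base64,{payload}"

async def inline_images(paths: list[str]) -> list[str]:
    # Same fan-out shape as the diff: convert every image concurrently
    # while preserving input order.
    return list(
        await asyncio.gather(*(asyncio.to_thread(build_data_uri, p) for p in paths))
    )
```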

View File

@@ -5,7 +5,7 @@ from pydantic import SecretStr
from replicate.client import Client as ReplicateClient
from replicate.helpers import FileOutput
from backend.data.block import Block, BlockCategory, BlockSchema
from backend.data.block import Block, BlockCategory, BlockSchemaInput, BlockSchemaOutput
from backend.data.model import (
APIKeyCredentials,
CredentialsField,
@@ -60,6 +60,14 @@ SIZE_TO_RECRAFT_DIMENSIONS = {
ImageSize.TALL: "1024x1536",
}
SIZE_TO_NANO_BANANA_RATIO = {
ImageSize.SQUARE: "1:1",
ImageSize.LANDSCAPE: "4:3",
ImageSize.PORTRAIT: "3:4",
ImageSize.WIDE: "16:9",
ImageSize.TALL: "9:16",
}
class ImageStyle(str, Enum):
"""
@@ -98,10 +106,11 @@ class ImageGenModel(str, Enum):
FLUX_ULTRA = "Flux 1.1 Pro Ultra"
RECRAFT = "Recraft v3"
SD3_5 = "Stable Diffusion 3.5 Medium"
NANO_BANANA_PRO = "Nano Banana Pro"
class AIImageGeneratorBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput[
Literal[ProviderName.REPLICATE], Literal["api_key"]
] = CredentialsField(
@@ -135,9 +144,8 @@ class AIImageGeneratorBlock(Block):
title="Image Style",
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
image_url: str = SchemaField(description="URL of the generated image")
error: str = SchemaField(description="Error message if generation failed")
def __init__(self):
super().__init__(
@@ -262,6 +270,20 @@ class AIImageGeneratorBlock(Block):
)
return output
elif input_data.model == ImageGenModel.NANO_BANANA_PRO:
# Use Nano Banana Pro (Google Gemini 3 Pro Image)
input_params = {
"prompt": modified_prompt,
"aspect_ratio": SIZE_TO_NANO_BANANA_RATIO[input_data.size],
"resolution": "2K", # Default to 2K for good quality/cost balance
"output_format": "jpg",
"safety_filter_level": "block_only_high", # Most permissive
}
output = await self._run_client(
credentials, "google/nano-banana-pro", input_params
)
return output
except Exception as e:
raise RuntimeError(f"Failed to generate image: {str(e)}")

View File

@@ -6,7 +6,13 @@ from typing import Literal
from pydantic import SecretStr
from replicate.client import Client as ReplicateClient
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.block import (
Block,
BlockCategory,
BlockOutput,
BlockSchemaInput,
BlockSchemaOutput,
)
from backend.data.model import (
APIKeyCredentials,
CredentialsField,
@@ -54,7 +60,7 @@ class NormalizationStrategy(str, Enum):
class AIMusicGeneratorBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput[
Literal[ProviderName.REPLICATE], Literal["api_key"]
] = CredentialsField(
@@ -107,9 +113,8 @@ class AIMusicGeneratorBlock(Block):
title="Normalization Strategy",
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
result: str = SchemaField(description="URL of the generated audio file")
error: str = SchemaField(description="Error message if the model run failed")
def __init__(self):
super().__init__(

View File

@@ -6,7 +6,13 @@ from typing import Literal
from pydantic import SecretStr
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.block import (
Block,
BlockCategory,
BlockOutput,
BlockSchemaInput,
BlockSchemaOutput,
)
from backend.data.model import (
APIKeyCredentials,
CredentialsField,
@@ -148,7 +154,7 @@ logger = logging.getLogger(__name__)
class AIShortformVideoCreatorBlock(Block):
"""Creates a shortform texttovideo clip using stock or AI imagery."""
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput[
Literal[ProviderName.REVID], Literal["api_key"]
] = CredentialsField(
@@ -187,9 +193,8 @@ class AIShortformVideoCreatorBlock(Block):
placeholder=VisualMediaType.STOCK_VIDEOS,
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
video_url: str = SchemaField(description="The URL of the created video")
error: str = SchemaField(description="Error message if the request failed")
async def create_webhook(self) -> tuple[str, str]:
"""Create a new webhook URL for receiving notifications."""
@@ -336,7 +341,7 @@ class AIShortformVideoCreatorBlock(Block):
class AIAdMakerVideoCreatorBlock(Block):
"""Generates a 30second vertical AI advert using optional usersupplied imagery."""
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput[
Literal[ProviderName.REVID], Literal["api_key"]
] = CredentialsField(
@@ -364,9 +369,8 @@ class AIAdMakerVideoCreatorBlock(Block):
description="Restrict visuals to supplied images only.", default=True
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
video_url: str = SchemaField(description="URL of the finished advert")
error: str = SchemaField(description="Error message on failure")
async def create_webhook(self) -> tuple[str, str]:
"""Create a new webhook URL for receiving notifications."""
@@ -524,7 +528,7 @@ class AIAdMakerVideoCreatorBlock(Block):
class AIScreenshotToVideoAdBlock(Block):
"""Creates an advert where the supplied screenshot is narrated by an AI avatar."""
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput[
Literal[ProviderName.REVID], Literal["api_key"]
] = CredentialsField(description="Revid.ai API key")
@@ -542,9 +546,8 @@ class AIScreenshotToVideoAdBlock(Block):
default=AudioTrack.DONT_STOP_ME_ABSTRACT_FUTURE_BASS
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
video_url: str = SchemaField(description="Rendered video URL")
error: str = SchemaField(description="Error, if encountered")
async def create_webhook(self) -> tuple[str, str]:
"""Create a new webhook URL for receiving notifications."""

View File

@@ -1371,7 +1371,7 @@ async def create_base(
if tables:
params["tables"] = tables
print(params)
logger.debug(f"Creating Airtable base with params: {params}")
response = await Requests().post(
"https://api.airtable.com/v0/meta/bases",

View File

@@ -9,7 +9,8 @@ from backend.sdk import (
Block,
BlockCategory,
BlockOutput,
BlockSchema,
BlockSchemaInput,
BlockSchemaOutput,
CredentialsMetaInput,
SchemaField,
)
@@ -23,7 +24,7 @@ class AirtableCreateBaseBlock(Block):
Creates a new base in an Airtable workspace, or returns the existing base if one with the same name exists.
"""
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput = airtable.credentials_field(
description="Airtable API credentials"
)
@@ -53,7 +54,7 @@ class AirtableCreateBaseBlock(Block):
],
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
base_id: str = SchemaField(description="The ID of the created or found base")
tables: list[dict] = SchemaField(description="Array of table objects")
table: dict = SchemaField(description="A single table object")
@@ -118,7 +119,7 @@ class AirtableListBasesBlock(Block):
Lists all bases in an Airtable workspace that the user has access to.
"""
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput = airtable.credentials_field(
description="Airtable API credentials"
)
@@ -129,7 +130,7 @@ class AirtableListBasesBlock(Block):
description="Pagination offset from previous request", default=""
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
bases: list[dict] = SchemaField(description="Array of base objects")
offset: Optional[str] = SchemaField(
description="Offset for next page (null if no more bases)", default=None

View File

@@ -9,7 +9,8 @@ from backend.sdk import (
Block,
BlockCategory,
BlockOutput,
BlockSchema,
BlockSchemaInput,
BlockSchemaOutput,
CredentialsMetaInput,
SchemaField,
)
@@ -31,7 +32,7 @@ class AirtableListRecordsBlock(Block):
Lists records from an Airtable table with optional filtering, sorting, and pagination.
"""
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput = airtable.credentials_field(
description="Airtable API credentials"
)
@@ -65,7 +66,7 @@ class AirtableListRecordsBlock(Block):
default=False,
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
records: list[dict] = SchemaField(description="Array of record objects")
offset: Optional[str] = SchemaField(
description="Offset for next page (null if no more records)", default=None
@@ -137,7 +138,7 @@ class AirtableGetRecordBlock(Block):
Retrieves a single record from an Airtable table by its ID.
"""
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput = airtable.credentials_field(
description="Airtable API credentials"
)
@@ -153,7 +154,7 @@ class AirtableGetRecordBlock(Block):
default=False,
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
id: str = SchemaField(description="The record ID")
fields: dict = SchemaField(description="The record fields")
created_time: str = SchemaField(description="The record created time")
@@ -217,7 +218,7 @@ class AirtableCreateRecordsBlock(Block):
Creates one or more records in an Airtable table.
"""
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput = airtable.credentials_field(
description="Airtable API credentials"
)
@@ -239,7 +240,7 @@ class AirtableCreateRecordsBlock(Block):
default=None,
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
records: list[dict] = SchemaField(description="Array of created record objects")
details: dict = SchemaField(description="Details of the created records")
@@ -290,7 +291,7 @@ class AirtableUpdateRecordsBlock(Block):
Updates one or more existing records in an Airtable table.
"""
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput = airtable.credentials_field(
description="Airtable API credentials"
)
@@ -306,7 +307,7 @@ class AirtableUpdateRecordsBlock(Block):
default=None,
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
records: list[dict] = SchemaField(description="Array of updated record objects")
def __init__(self):
@@ -339,7 +340,7 @@ class AirtableDeleteRecordsBlock(Block):
Deletes one or more records from an Airtable table.
"""
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput = airtable.credentials_field(
description="Airtable API credentials"
)
@@ -351,7 +352,7 @@ class AirtableDeleteRecordsBlock(Block):
description="Array of upto 10 record IDs to delete"
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
records: list[dict] = SchemaField(description="Array of deletion results")
def __init__(self):

View File

@@ -7,7 +7,8 @@ from backend.sdk import (
Block,
BlockCategory,
BlockOutput,
BlockSchema,
BlockSchemaInput,
BlockSchemaOutput,
CredentialsMetaInput,
Requests,
SchemaField,
@@ -23,13 +24,13 @@ class AirtableListSchemaBlock(Block):
fields, and views.
"""
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput = airtable.credentials_field(
description="Airtable API credentials"
)
base_id: str = SchemaField(description="The Airtable base ID")
class Output(BlockSchema):
class Output(BlockSchemaOutput):
base_schema: dict = SchemaField(
description="Complete base schema with tables, fields, and views"
)
@@ -66,7 +67,7 @@ class AirtableCreateTableBlock(Block):
Creates a new table in an Airtable base with specified fields and views.
"""
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput = airtable.credentials_field(
description="Airtable API credentials"
)
@@ -77,7 +78,7 @@ class AirtableCreateTableBlock(Block):
default=[{"name": "Name", "type": "singleLineText"}],
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
table: dict = SchemaField(description="Created table object")
table_id: str = SchemaField(description="ID of the created table")
@@ -109,7 +110,7 @@ class AirtableUpdateTableBlock(Block):
Updates an existing table's properties such as name or description.
"""
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput = airtable.credentials_field(
description="Airtable API credentials"
)
@@ -125,7 +126,7 @@ class AirtableUpdateTableBlock(Block):
description="The date dependency of the table to update", default=None
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
table: dict = SchemaField(description="Updated table object")
def __init__(self):
@@ -157,7 +158,7 @@ class AirtableCreateFieldBlock(Block):
Adds a new field (column) to an existing Airtable table.
"""
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput = airtable.credentials_field(
description="Airtable API credentials"
)
@@ -176,7 +177,7 @@ class AirtableCreateFieldBlock(Block):
description="The options of the field to create", default=None
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
field: dict = SchemaField(description="Created field object")
field_id: str = SchemaField(description="ID of the created field")
@@ -209,7 +210,7 @@ class AirtableUpdateFieldBlock(Block):
Updates an existing field's properties in an Airtable table.
"""
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput = airtable.credentials_field(
description="Airtable API credentials"
)
@@ -225,7 +226,7 @@ class AirtableUpdateFieldBlock(Block):
advanced=False,
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
field: dict = SchemaField(description="Updated field object")
def __init__(self):

View File

@@ -3,7 +3,8 @@ from backend.sdk import (
Block,
BlockCategory,
BlockOutput,
BlockSchema,
BlockSchemaInput,
BlockSchemaOutput,
BlockType,
BlockWebhookConfig,
CredentialsMetaInput,
@@ -32,7 +33,7 @@ class AirtableWebhookTriggerBlock(Block):
Thin wrapper just forwards the payloads one at a time to the next block.
"""
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput = airtable.credentials_field(
description="Airtable API credentials"
)
@@ -43,7 +44,7 @@ class AirtableWebhookTriggerBlock(Block):
description="Airtable webhook event filter"
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
payload: WebhookPayload = SchemaField(description="Airtable webhook payload")
def __init__(self):

View File

@@ -10,14 +10,20 @@ from backend.blocks.apollo.models import (
PrimaryPhone,
SearchOrganizationsRequest,
)
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.block import (
Block,
BlockCategory,
BlockOutput,
BlockSchemaInput,
BlockSchemaOutput,
)
from backend.data.model import CredentialsField, SchemaField
class SearchOrganizationsBlock(Block):
"""Search for organizations in Apollo"""
class Input(BlockSchema):
class Input(BlockSchemaInput):
organization_num_employees_range: list[int] = SchemaField(
description="""The number range of employees working for the company. This enables you to find companies based on headcount. You can add multiple ranges to expand your search results.
@@ -69,7 +75,7 @@ To find IDs, identify the values for organization_id when you call this endpoint
description="Apollo credentials",
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
organizations: list[Organization] = SchemaField(
description="List of organizations found",
default_factory=list,

View File

@@ -14,14 +14,20 @@ from backend.blocks.apollo.models import (
SearchPeopleRequest,
SenorityLevels,
)
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.block import (
Block,
BlockCategory,
BlockOutput,
BlockSchemaInput,
BlockSchemaOutput,
)
from backend.data.model import CredentialsField, SchemaField
class SearchPeopleBlock(Block):
"""Search for people in Apollo"""
class Input(BlockSchema):
class Input(BlockSchemaInput):
person_titles: list[str] = SchemaField(
description="""Job titles held by the people you want to find. For a person to be included in search results, they only need to match 1 of the job titles you add. Adding more job titles expands your search results.
@@ -109,7 +115,7 @@ class SearchPeopleBlock(Block):
description="Apollo credentials",
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
people: list[Contact] = SchemaField(
description="List of people found",
default_factory=list,

View File

@@ -6,14 +6,20 @@ from backend.blocks.apollo._auth import (
ApolloCredentialsInput,
)
from backend.blocks.apollo.models import Contact, EnrichPersonRequest
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.block import (
Block,
BlockCategory,
BlockOutput,
BlockSchemaInput,
BlockSchemaOutput,
)
from backend.data.model import CredentialsField, SchemaField
class GetPersonDetailBlock(Block):
"""Get detailed person data with Apollo API, including email reveal"""
class Input(BlockSchema):
class Input(BlockSchemaInput):
person_id: str = SchemaField(
description="Apollo person ID to enrich (most accurate method)",
default="",
@@ -68,7 +74,7 @@ class GetPersonDetailBlock(Block):
description="Apollo credentials",
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
contact: Contact = SchemaField(
description="Enriched contact information",
)

View File

@@ -3,7 +3,7 @@ from typing import Optional
from pydantic import BaseModel, Field
from backend.data.block import BlockSchema
from backend.data.block import BlockSchemaInput
from backend.data.model import SchemaField, UserIntegrations
from backend.integrations.ayrshare import AyrshareClient
from backend.util.clients import get_database_manager_async_client
@@ -17,7 +17,7 @@ async def get_profile_key(user_id: str):
return user_integrations.managed_credentials.ayrshare_profile_key
class BaseAyrshareInput(BlockSchema):
class BaseAyrshareInput(BlockSchemaInput):
"""Base input model for Ayrshare social media posts with common fields."""
post: str = SchemaField(

View File

@@ -3,7 +3,7 @@ from backend.sdk import (
Block,
BlockCategory,
BlockOutput,
BlockSchema,
BlockSchemaOutput,
BlockType,
SchemaField,
)
@@ -38,7 +38,7 @@ class PostToBlueskyBlock(Block):
advanced=True,
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
post_result: PostResponse = SchemaField(description="The result of the post")
post: PostIds = SchemaField(description="The result of the post")

View File

@@ -3,7 +3,7 @@ from backend.sdk import (
Block,
BlockCategory,
BlockOutput,
BlockSchema,
BlockSchemaOutput,
BlockType,
SchemaField,
)
@@ -101,7 +101,7 @@ class PostToFacebookBlock(Block):
description="URL for custom link preview", default="", advanced=True
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
post_result: PostResponse = SchemaField(description="The result of the post")
post: PostIds = SchemaField(description="The result of the post")

View File

@@ -3,7 +3,7 @@ from backend.sdk import (
Block,
BlockCategory,
BlockOutput,
BlockSchema,
BlockSchemaOutput,
BlockType,
SchemaField,
)
@@ -94,7 +94,7 @@ class PostToGMBBlock(Block):
advanced=True,
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
post_result: PostResponse = SchemaField(description="The result of the post")
post: PostIds = SchemaField(description="The result of the post")

View File

@@ -5,7 +5,7 @@ from backend.sdk import (
Block,
BlockCategory,
BlockOutput,
BlockSchema,
BlockSchemaOutput,
BlockType,
SchemaField,
)
@@ -94,7 +94,7 @@ class PostToInstagramBlock(Block):
advanced=True,
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
post_result: PostResponse = SchemaField(description="The result of the post")
post: PostIds = SchemaField(description="The result of the post")

View File

@@ -3,7 +3,7 @@ from backend.sdk import (
Block,
BlockCategory,
BlockOutput,
BlockSchema,
BlockSchemaOutput,
BlockType,
SchemaField,
)
@@ -94,7 +94,7 @@ class PostToLinkedInBlock(Block):
advanced=True,
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
post_result: PostResponse = SchemaField(description="The result of the post")
post: PostIds = SchemaField(description="The result of the post")

View File

@@ -3,7 +3,7 @@ from backend.sdk import (
Block,
BlockCategory,
BlockOutput,
BlockSchema,
BlockSchemaOutput,
BlockType,
SchemaField,
)
@@ -73,7 +73,7 @@ class PostToPinterestBlock(Block):
advanced=True,
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
post_result: PostResponse = SchemaField(description="The result of the post")
post: PostIds = SchemaField(description="The result of the post")

View File

@@ -3,7 +3,7 @@ from backend.sdk import (
Block,
BlockCategory,
BlockOutput,
BlockSchema,
BlockSchemaOutput,
BlockType,
SchemaField,
)
@@ -19,7 +19,7 @@ class PostToRedditBlock(Block):
pass # Uses all base fields
class Output(BlockSchema):
class Output(BlockSchemaOutput):
post_result: PostResponse = SchemaField(description="The result of the post")
post: PostIds = SchemaField(description="The result of the post")

View File

@@ -3,7 +3,7 @@ from backend.sdk import (
Block,
BlockCategory,
BlockOutput,
BlockSchema,
BlockSchemaOutput,
BlockType,
SchemaField,
)
@@ -43,7 +43,7 @@ class PostToSnapchatBlock(Block):
advanced=True,
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
post_result: PostResponse = SchemaField(description="The result of the post")
post: PostIds = SchemaField(description="The result of the post")

View File

@@ -3,7 +3,7 @@ from backend.sdk import (
Block,
BlockCategory,
BlockOutput,
BlockSchema,
BlockSchemaOutput,
BlockType,
SchemaField,
)
@@ -38,7 +38,7 @@ class PostToTelegramBlock(Block):
advanced=True,
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
post_result: PostResponse = SchemaField(description="The result of the post")
post: PostIds = SchemaField(description="The result of the post")

View File

@@ -3,7 +3,7 @@ from backend.sdk import (
Block,
BlockCategory,
BlockOutput,
BlockSchema,
BlockSchemaOutput,
BlockType,
SchemaField,
)
@@ -31,7 +31,7 @@ class PostToThreadsBlock(Block):
advanced=False,
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
post_result: PostResponse = SchemaField(description="The result of the post")
post: PostIds = SchemaField(description="The result of the post")

View File

@@ -5,7 +5,7 @@ from backend.sdk import (
Block,
BlockCategory,
BlockOutput,
BlockSchema,
BlockSchemaOutput,
BlockType,
SchemaField,
)
@@ -98,7 +98,7 @@ class PostToTikTokBlock(Block):
advanced=True,
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
post_result: PostResponse = SchemaField(description="The result of the post")
post: PostIds = SchemaField(description="The result of the post")

View File

@@ -3,7 +3,7 @@ from backend.sdk import (
Block,
BlockCategory,
BlockOutput,
BlockSchema,
BlockSchemaOutput,
BlockType,
SchemaField,
)
@@ -97,7 +97,7 @@ class PostToXBlock(Block):
advanced=True,
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
post_result: PostResponse = SchemaField(description="The result of the post")
post: PostIds = SchemaField(description="The result of the post")

View File

@@ -6,7 +6,7 @@ from backend.sdk import (
Block,
BlockCategory,
BlockOutput,
BlockSchema,
BlockSchemaOutput,
BlockType,
SchemaField,
)
@@ -119,7 +119,7 @@ class PostToYouTubeBlock(Block):
advanced=True,
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
post_result: PostResponse = SchemaField(description="The result of the post")
post: PostIds = SchemaField(description="The result of the post")

View File

@@ -9,7 +9,8 @@ from backend.sdk import (
Block,
BlockCategory,
BlockOutput,
BlockSchema,
BlockSchemaInput,
BlockSchemaOutput,
CredentialsMetaInput,
SchemaField,
)
@@ -23,7 +24,7 @@ class BaasBotJoinMeetingBlock(Block):
Deploy a bot immediately or at a scheduled start_time to join and record a meeting.
"""
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput = baas.credentials_field(
description="Meeting BaaS API credentials"
)
@@ -57,7 +58,7 @@ class BaasBotJoinMeetingBlock(Block):
description="Custom metadata to attach to the bot", default={}
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
bot_id: str = SchemaField(description="UUID of the deployed bot")
join_response: dict = SchemaField(
description="Full response from join operation"
@@ -103,13 +104,13 @@ class BaasBotLeaveMeetingBlock(Block):
Force the bot to exit the call.
"""
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput = baas.credentials_field(
description="Meeting BaaS API credentials"
)
bot_id: str = SchemaField(description="UUID of the bot to remove from meeting")
class Output(BlockSchema):
class Output(BlockSchemaOutput):
left: bool = SchemaField(description="Whether the bot successfully left")
def __init__(self):
@@ -138,7 +139,7 @@ class BaasBotFetchMeetingDataBlock(Block):
Pull MP4 URL, transcript & metadata for a completed meeting.
"""
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput = baas.credentials_field(
description="Meeting BaaS API credentials"
)
@@ -147,7 +148,7 @@ class BaasBotFetchMeetingDataBlock(Block):
description="Include transcript data in response", default=True
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
mp4_url: str = SchemaField(
description="URL to download the meeting recording (time-limited)"
)
@@ -185,13 +186,13 @@ class BaasBotDeleteRecordingBlock(Block):
Purge MP4 + transcript data for privacy or storage management.
"""
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput = baas.credentials_field(
description="Meeting BaaS API credentials"
)
bot_id: str = SchemaField(description="UUID of the bot whose data to delete")
class Output(BlockSchema):
class Output(BlockSchemaOutput):
deleted: bool = SchemaField(
description="Whether the data was successfully deleted"
)

View File

@@ -11,7 +11,8 @@ from backend.sdk import (
Block,
BlockCategory,
BlockOutput,
BlockSchema,
BlockSchemaInput,
BlockSchemaOutput,
CredentialsMetaInput,
Requests,
SchemaField,
@@ -27,7 +28,7 @@ TEST_CREDENTIALS = APIKeyCredentials(
)
class TextModification(BlockSchema):
class TextModification(BlockSchemaInput):
name: str = SchemaField(
description="The name of the layer to modify in the template"
)
@@ -60,7 +61,7 @@ class TextModification(BlockSchema):
class BannerbearTextOverlayBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput = bannerbear.credentials_field(
description="API credentials for Bannerbear"
)
@@ -96,7 +97,7 @@ class BannerbearTextOverlayBlock(Block):
advanced=True,
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
success: bool = SchemaField(
description="Whether the image generation was successfully initiated"
)
@@ -105,7 +106,6 @@ class BannerbearTextOverlayBlock(Block):
)
uid: str = SchemaField(description="Unique identifier for the generated image")
status: str = SchemaField(description="Status of the image generation")
error: str = SchemaField(description="Error message if the operation failed")
def __init__(self):
super().__init__(

View File

@@ -1,14 +1,21 @@
import enum
from typing import Any
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema, BlockType
from backend.data.block import (
Block,
BlockCategory,
BlockOutput,
BlockSchemaInput,
BlockSchemaOutput,
BlockType,
)
from backend.data.model import SchemaField
from backend.util.file import store_media_file
from backend.util.type import MediaFileType, convert
class FileStoreBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
file_in: MediaFileType = SchemaField(
description="The file to store in the temporary directory, it can be a URL, data URI, or local path."
)
@@ -19,7 +26,7 @@ class FileStoreBlock(Block):
title="Produce Base64 Output",
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
file_out: MediaFileType = SchemaField(
description="The relative path to the stored file in the temporary directory."
)
@@ -57,7 +64,7 @@ class StoreValueBlock(Block):
The block output will be static, the output can be consumed multiple times.
"""
class Input(BlockSchema):
class Input(BlockSchemaInput):
input: Any = SchemaField(
description="Trigger the block to produce the output. "
"The value is only used when `data` is None."
@@ -68,7 +75,7 @@ class StoreValueBlock(Block):
default=None,
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
output: Any = SchemaField(description="The stored data retained in the block.")
def __init__(self):
@@ -94,10 +101,10 @@ class StoreValueBlock(Block):
class PrintToConsoleBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
text: Any = SchemaField(description="The data to print to the console.")
class Output(BlockSchema):
class Output(BlockSchemaOutput):
output: Any = SchemaField(description="The data printed to the console.")
status: str = SchemaField(description="The status of the print operation.")
@@ -121,10 +128,10 @@ class PrintToConsoleBlock(Block):
class NoteBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
text: str = SchemaField(description="The text to display in the sticky note.")
class Output(BlockSchema):
class Output(BlockSchemaOutput):
output: str = SchemaField(description="The text to display in the sticky note.")
def __init__(self):
@@ -154,15 +161,14 @@ class TypeOptions(enum.Enum):
class UniversalTypeConverterBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
value: Any = SchemaField(
description="The value to convert to a universal type."
)
type: TypeOptions = SchemaField(description="The type to convert the value to.")
class Output(BlockSchema):
class Output(BlockSchemaOutput):
value: Any = SchemaField(description="The converted value.")
error: str = SchemaField(description="Error message if conversion failed.")
def __init__(self):
super().__init__(
@@ -195,10 +201,10 @@ class ReverseListOrderBlock(Block):
A block which takes in a list and returns it in the opposite order.
"""
class Input(BlockSchema):
class Input(BlockSchemaInput):
input_list: list[Any] = SchemaField(description="The list to reverse")
class Output(BlockSchema):
class Output(BlockSchemaOutput):
reversed_list: list[Any] = SchemaField(description="The list in reversed order")
def __init__(self):

View File

@@ -2,7 +2,13 @@ import os
import re
from typing import Type
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.block import (
Block,
BlockCategory,
BlockOutput,
BlockSchemaInput,
BlockSchemaOutput,
)
from backend.data.model import SchemaField
@@ -15,12 +21,12 @@ class BlockInstallationBlock(Block):
for development purposes only.
"""
class Input(BlockSchema):
class Input(BlockSchemaInput):
code: str = SchemaField(
description="Python code of the block to be installed",
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
success: str = SchemaField(
description="Success message if the block is installed successfully",
)

View File

@@ -1,7 +1,13 @@
from enum import Enum
from typing import Any
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.block import (
Block,
BlockCategory,
BlockOutput,
BlockSchemaInput,
BlockSchemaOutput,
)
from backend.data.model import SchemaField
from backend.util.type import convert
@@ -16,7 +22,7 @@ class ComparisonOperator(Enum):
class ConditionBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
value1: Any = SchemaField(
description="Enter the first value for comparison",
placeholder="For example: 10 or 'hello' or True",
@@ -40,7 +46,7 @@ class ConditionBlock(Block):
default=None,
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
result: bool = SchemaField(
description="The result of the condition evaluation (True or False)"
)
@@ -111,7 +117,7 @@ class ConditionBlock(Block):
class IfInputMatchesBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
input: Any = SchemaField(
description="The input to match against",
placeholder="For example: 10 or 'hello' or True",
@@ -131,7 +137,7 @@ class IfInputMatchesBlock(Block):
default=None,
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
result: bool = SchemaField(
description="The result of the condition evaluation (True or False)"
)

View File

@@ -4,9 +4,15 @@ from typing import Any, Literal, Optional
from e2b_code_interpreter import AsyncSandbox
from e2b_code_interpreter import Result as E2BExecutionResult
from e2b_code_interpreter.charts import Chart as E2BExecutionResultChart
from pydantic import BaseModel, JsonValue, SecretStr
from pydantic import BaseModel, Field, JsonValue, SecretStr
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.block import (
Block,
BlockCategory,
BlockOutput,
BlockSchemaInput,
BlockSchemaOutput,
)
from backend.data.model import (
APIKeyCredentials,
CredentialsField,
@@ -61,7 +67,7 @@ class MainCodeExecutionResult(BaseModel):
jpeg: Optional[str] = None
pdf: Optional[str] = None
latex: Optional[str] = None
json: Optional[JsonValue] = None # type: ignore (reportIncompatibleMethodOverride)
json_data: Optional[JsonValue] = Field(None, alias="json")
javascript: Optional[str] = None
data: Optional[dict] = None
chart: Optional[Chart] = None
@@ -159,7 +165,7 @@ class ExecuteCodeBlock(Block, BaseE2BExecutorMixin):
# TODO : Add support to upload and download files
# NOTE: Currently, you can only customize the CPU and Memory
# by creating a pre customized sandbox template
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput[
Literal[ProviderName.E2B], Literal["api_key"]
] = CredentialsField(
@@ -217,7 +223,7 @@ class ExecuteCodeBlock(Block, BaseE2BExecutorMixin):
advanced=True,
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
main_result: MainCodeExecutionResult = SchemaField(
title="Main Result", description="The main result from the code execution"
)
@@ -232,7 +238,6 @@ class ExecuteCodeBlock(Block, BaseE2BExecutorMixin):
description="Standard output logs from execution"
)
stderr_logs: str = SchemaField(description="Standard error logs from execution")
error: str = SchemaField(description="Error message if execution failed")
def __init__(self):
super().__init__(
@@ -296,7 +301,7 @@ class ExecuteCodeBlock(Block, BaseE2BExecutorMixin):
class InstantiateCodeSandboxBlock(Block, BaseE2BExecutorMixin):
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput[
Literal[ProviderName.E2B], Literal["api_key"]
] = CredentialsField(
@@ -346,7 +351,7 @@ class InstantiateCodeSandboxBlock(Block, BaseE2BExecutorMixin):
advanced=True,
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
sandbox_id: str = SchemaField(description="ID of the sandbox instance")
response: str = SchemaField(
title="Text Result",
@@ -356,7 +361,6 @@ class InstantiateCodeSandboxBlock(Block, BaseE2BExecutorMixin):
description="Standard output logs from execution"
)
stderr_logs: str = SchemaField(description="Standard error logs from execution")
error: str = SchemaField(description="Error message if execution failed")
def __init__(self):
super().__init__(
@@ -421,7 +425,7 @@ class InstantiateCodeSandboxBlock(Block, BaseE2BExecutorMixin):
class ExecuteCodeStepBlock(Block, BaseE2BExecutorMixin):
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput[
Literal[ProviderName.E2B], Literal["api_key"]
] = CredentialsField(
@@ -454,7 +458,7 @@ class ExecuteCodeStepBlock(Block, BaseE2BExecutorMixin):
default=False,
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
main_result: MainCodeExecutionResult = SchemaField(
title="Main Result", description="The main result from the code execution"
)
@@ -469,7 +473,6 @@ class ExecuteCodeStepBlock(Block, BaseE2BExecutorMixin):
description="Standard output logs from execution"
)
stderr_logs: str = SchemaField(description="Standard error logs from execution")
error: str = SchemaField(description="Error message if execution failed")
def __init__(self):
super().__init__(

View File
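
The `json` → `json_data` rename in `MainCodeExecutionResult` keeps the wire key via a pydantic alias instead of shadowing the reserved attribute name. A minimal sketch of the alias behavior (the `populate_by_name` config is added here for the round-trip demo, not taken from the diff):

```python
from typing import Optional

from pydantic import BaseModel, ConfigDict, Field, JsonValue


class MainResult(BaseModel):
    # Accept the "json" key on input without shadowing BaseModel's legacy attribute.
    model_config = ConfigDict(populate_by_name=True)
    json_data: Optional[JsonValue] = Field(None, alias="json")


r = MainResult.model_validate({"json": {"ok": True}})
assert r.json_data == {"ok": True}
# Serializing by alias restores the original key:
assert r.model_dump(by_alias=True)["json"] == {"ok": True}
```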

@@ -1,17 +1,23 @@
import re
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.block import (
Block,
BlockCategory,
BlockOutput,
BlockSchemaInput,
BlockSchemaOutput,
)
from backend.data.model import SchemaField
class CodeExtractionBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
text: str = SchemaField(
description="Text containing code blocks to extract (e.g., AI response)",
placeholder="Enter text containing code blocks",
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
html: str = SchemaField(description="Extracted HTML code")
css: str = SchemaField(description="Extracted CSS code")
javascript: str = SchemaField(description="Extracted JavaScript code")

View File

@@ -0,0 +1,224 @@
from dataclasses import dataclass
from enum import Enum
from typing import Any, Literal
from openai import AsyncOpenAI
from openai.types.responses import Response as OpenAIResponse
from pydantic import SecretStr
from backend.data.block import (
Block,
BlockCategory,
BlockOutput,
BlockSchemaInput,
BlockSchemaOutput,
)
from backend.data.model import (
APIKeyCredentials,
CredentialsField,
CredentialsMetaInput,
NodeExecutionStats,
SchemaField,
)
from backend.integrations.providers import ProviderName
@dataclass
class CodexCallResult:
"""Structured response returned by Codex invocations."""
response: str
reasoning: str
response_id: str
class CodexModel(str, Enum):
"""Codex-capable OpenAI models."""
GPT5_1_CODEX = "gpt-5.1-codex"
class CodexReasoningEffort(str, Enum):
"""Configuration for the Responses API reasoning effort."""
NONE = "none"
LOW = "low"
MEDIUM = "medium"
HIGH = "high"
CodexCredentials = CredentialsMetaInput[
Literal[ProviderName.OPENAI], Literal["api_key"]
]
TEST_CREDENTIALS = APIKeyCredentials(
id="e2fcb203-3f2d-4ad4-a344-8df3bc7db36b",
provider="openai",
api_key=SecretStr("mock-openai-api-key"),
title="Mock OpenAI API key",
expires_at=None,
)
TEST_CREDENTIALS_INPUT = {
"provider": TEST_CREDENTIALS.provider,
"id": TEST_CREDENTIALS.id,
"type": TEST_CREDENTIALS.type,
"title": TEST_CREDENTIALS.title,
}
def CodexCredentialsField() -> CodexCredentials:
return CredentialsField(
description="OpenAI API key with access to Codex models (Responses API).",
)
class CodeGenerationBlock(Block):
"""Block that talks to Codex models via the OpenAI Responses API."""
class Input(BlockSchemaInput):
prompt: str = SchemaField(
description="Primary coding request passed to the Codex model.",
placeholder="Generate a Python function that reverses a list.",
)
system_prompt: str = SchemaField(
title="System Prompt",
default=(
"You are Codex, an elite software engineer. "
"Favor concise, working code and highlight important caveats."
),
description="Optional instructions injected via the Responses API instructions field.",
advanced=True,
)
model: CodexModel = SchemaField(
title="Codex Model",
default=CodexModel.GPT5_1_CODEX,
description="Codex-optimized model served via the Responses API.",
advanced=False,
)
reasoning_effort: CodexReasoningEffort = SchemaField(
title="Reasoning Effort",
default=CodexReasoningEffort.MEDIUM,
description="Controls the Responses API reasoning budget. Select 'none' to skip reasoning configs.",
advanced=True,
)
max_output_tokens: int | None = SchemaField(
title="Max Output Tokens",
default=2048,
description="Upper bound for generated tokens (hard limit 128,000). Leave blank to let OpenAI decide.",
advanced=True,
)
credentials: CodexCredentials = CodexCredentialsField()
class Output(BlockSchemaOutput):
response: str = SchemaField(
description="Code-focused response returned by the Codex model."
)
reasoning: str = SchemaField(
description="Reasoning summary returned by the model, if available.",
default="",
)
response_id: str = SchemaField(
description="ID of the Responses API call for auditing/debugging.",
default="",
)
def __init__(self):
super().__init__(
id="86a2a099-30df-47b4-b7e4-34ae5f83e0d5",
description="Generate or refactor code using OpenAI's Codex (Responses API).",
categories={BlockCategory.AI, BlockCategory.DEVELOPER_TOOLS},
input_schema=CodeGenerationBlock.Input,
output_schema=CodeGenerationBlock.Output,
test_input=[
{
"prompt": "Write a TypeScript function that deduplicates an array.",
"credentials": TEST_CREDENTIALS_INPUT,
}
],
test_output=[
("response", str),
("reasoning", str),
("response_id", str),
],
test_mock={
"call_codex": lambda *_args, **_kwargs: CodexCallResult(
response="function dedupe<T>(items: T[]): T[] { return [...new Set(items)]; }",
reasoning="Used Set to remove duplicates in O(n).",
response_id="resp_test",
)
},
test_credentials=TEST_CREDENTIALS,
)
self.execution_stats = NodeExecutionStats()
async def call_codex(
self,
*,
credentials: APIKeyCredentials,
model: CodexModel,
prompt: str,
system_prompt: str,
max_output_tokens: int | None,
reasoning_effort: CodexReasoningEffort,
) -> CodexCallResult:
"""Invoke the OpenAI Responses API."""
client = AsyncOpenAI(api_key=credentials.api_key.get_secret_value())
request_payload: dict[str, Any] = {
"model": model.value,
"input": prompt,
}
if system_prompt:
request_payload["instructions"] = system_prompt
if max_output_tokens is not None:
request_payload["max_output_tokens"] = max_output_tokens
if reasoning_effort != CodexReasoningEffort.NONE:
request_payload["reasoning"] = {"effort": reasoning_effort.value}
response = await client.responses.create(**request_payload)
if not isinstance(response, OpenAIResponse):
raise TypeError(f"Expected OpenAIResponse, got {type(response).__name__}")
# Extract data directly from typed response
text_output = response.output_text or ""
reasoning_summary = (
str(response.reasoning.summary)
if response.reasoning and response.reasoning.summary
else ""
)
response_id = response.id or ""
# Update usage stats
self.execution_stats.input_token_count = (
response.usage.input_tokens if response.usage else 0
)
self.execution_stats.output_token_count = (
response.usage.output_tokens if response.usage else 0
)
self.execution_stats.llm_call_count += 1
return CodexCallResult(
response=text_output,
reasoning=reasoning_summary,
response_id=response_id,
)
async def run(
self,
input_data: Input,
*,
credentials: APIKeyCredentials,
**_kwargs,
) -> BlockOutput:
result = await self.call_codex(
credentials=credentials,
model=input_data.model,
prompt=input_data.prompt,
system_prompt=input_data.system_prompt,
max_output_tokens=input_data.max_output_tokens,
reasoning_effort=input_data.reasoning_effort,
)
yield "response", result.response
yield "reasoning", result.reasoning
yield "response_id", result.response_id

View File
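
A hedged sketch of the payload assembly `call_codex` performs before hitting the Responses API; the helper below mirrors the optional-field logic without any network call:

```python
from typing import Any, Optional


def build_responses_payload(
    model: str,
    prompt: str,
    system_prompt: str = "",
    max_output_tokens: Optional[int] = None,
    reasoning_effort: Optional[str] = None,  # "none" means skip, as in the block
) -> dict[str, Any]:
    """Mirror call_codex's payload assembly, omitting unset optional fields."""
    payload: dict[str, Any] = {"model": model, "input": prompt}
    if system_prompt:
        payload["instructions"] = system_prompt
    if max_output_tokens is not None:
        payload["max_output_tokens"] = max_output_tokens
    if reasoning_effort and reasoning_effort != "none":
        payload["reasoning"] = {"effort": reasoning_effort}
    return payload


assert build_responses_payload("gpt-5.1-codex", "Reverse a list.") == {
    "model": "gpt-5.1-codex",
    "input": "Reverse a list.",
}
```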

@@ -5,7 +5,8 @@ from backend.data.block import (
BlockCategory,
BlockManualWebhookConfig,
BlockOutput,
BlockSchema,
BlockSchemaInput,
BlockSchemaOutput,
)
from backend.data.model import SchemaField
from backend.integrations.providers import ProviderName
@@ -27,10 +28,10 @@ class TranscriptionDataModel(BaseModel):
class CompassAITriggerBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
payload: TranscriptionDataModel = SchemaField(hidden=True)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
transcription: str = SchemaField(
description="The contents of the compass transcription."
)

View File

@@ -1,16 +1,22 @@
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.block import (
Block,
BlockCategory,
BlockOutput,
BlockSchemaInput,
BlockSchemaOutput,
)
from backend.data.model import SchemaField
class WordCharacterCountBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
text: str = SchemaField(
description="Input text to count words and characters",
placeholder="Enter your text here",
advanced=False,
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
word_count: int = SchemaField(description="Number of words in the input text")
character_count: int = SchemaField(
description="Number of characters in the input text"

View File

@@ -1,6 +1,12 @@
from typing import Any, List
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.block import (
Block,
BlockCategory,
BlockOutput,
BlockSchemaInput,
BlockSchemaOutput,
)
from backend.data.model import SchemaField
from backend.util.json import loads
from backend.util.mock import MockObject
@@ -12,13 +18,13 @@ from backend.util.prompt import estimate_token_count_str
class CreateDictionaryBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
values: dict[str, Any] = SchemaField(
description="Key-value pairs to create the dictionary with",
placeholder="e.g., {'name': 'Alice', 'age': 25}",
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
dictionary: dict[str, Any] = SchemaField(
description="The created dictionary containing the specified key-value pairs"
)
@@ -62,7 +68,7 @@ class CreateDictionaryBlock(Block):
class AddToDictionaryBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
dictionary: dict[Any, Any] = SchemaField(
default_factory=dict,
description="The dictionary to add the entry to. If not provided, a new dictionary will be created.",
@@ -86,11 +92,10 @@ class AddToDictionaryBlock(Block):
advanced=True,
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
updated_dictionary: dict = SchemaField(
description="The dictionary with the new entry added."
)
error: str = SchemaField(description="Error message if the operation failed.")
def __init__(self):
super().__init__(
@@ -141,11 +146,11 @@ class AddToDictionaryBlock(Block):
class FindInDictionaryBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
input: Any = SchemaField(description="Dictionary to lookup from")
key: str | int = SchemaField(description="Key to lookup in the dictionary")
class Output(BlockSchema):
class Output(BlockSchemaOutput):
output: Any = SchemaField(description="Value found for the given key")
missing: Any = SchemaField(
description="Value of the input that missing the key"
@@ -201,7 +206,7 @@ class FindInDictionaryBlock(Block):
class RemoveFromDictionaryBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
dictionary: dict[Any, Any] = SchemaField(
description="The dictionary to modify."
)
@@ -210,12 +215,11 @@ class RemoveFromDictionaryBlock(Block):
default=False, description="Whether to return the removed value."
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
updated_dictionary: dict[Any, Any] = SchemaField(
description="The dictionary after removal."
)
removed_value: Any = SchemaField(description="The removed value if requested.")
error: str = SchemaField(description="Error message if the operation failed.")
def __init__(self):
super().__init__(
@@ -251,19 +255,18 @@ class RemoveFromDictionaryBlock(Block):
class ReplaceDictionaryValueBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
dictionary: dict[Any, Any] = SchemaField(
description="The dictionary to modify."
)
key: str | int = SchemaField(description="Key to replace the value for.")
value: Any = SchemaField(description="The new value for the given key.")
class Output(BlockSchema):
class Output(BlockSchemaOutput):
updated_dictionary: dict[Any, Any] = SchemaField(
description="The dictionary after replacement."
)
old_value: Any = SchemaField(description="The value that was replaced.")
error: str = SchemaField(description="Error message if the operation failed.")
def __init__(self):
super().__init__(
@@ -300,10 +303,10 @@ class ReplaceDictionaryValueBlock(Block):
class DictionaryIsEmptyBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
dictionary: dict[Any, Any] = SchemaField(description="The dictionary to check.")
class Output(BlockSchema):
class Output(BlockSchemaOutput):
is_empty: bool = SchemaField(description="True if the dictionary is empty.")
def __init__(self):
@@ -327,7 +330,7 @@ class DictionaryIsEmptyBlock(Block):
class CreateListBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
values: List[Any] = SchemaField(
description="A list of values to be combined into a new list.",
placeholder="e.g., ['Alice', 25, True]",
@@ -343,11 +346,10 @@ class CreateListBlock(Block):
advanced=True,
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
list: List[Any] = SchemaField(
description="The created list containing the specified values."
)
error: str = SchemaField(description="Error message if list creation failed.")
def __init__(self):
super().__init__(
@@ -404,7 +406,7 @@ class CreateListBlock(Block):
class AddToListBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
list: List[Any] = SchemaField(
default_factory=list,
advanced=False,
@@ -425,11 +427,10 @@ class AddToListBlock(Block):
description="The position to insert the new entry. If not provided, the entry will be appended to the end of the list.",
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
updated_list: List[Any] = SchemaField(
description="The list with the new entry added."
)
error: str = SchemaField(description="Error message if the operation failed.")
def __init__(self):
super().__init__(
@@ -484,11 +485,11 @@ class AddToListBlock(Block):
class FindInListBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
list: List[Any] = SchemaField(description="The list to search in.")
value: Any = SchemaField(description="The value to search for.")
class Output(BlockSchema):
class Output(BlockSchemaOutput):
index: int = SchemaField(description="The index of the value in the list.")
found: bool = SchemaField(
description="Whether the value was found in the list."
@@ -526,15 +527,14 @@ class FindInListBlock(Block):
class GetListItemBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
list: List[Any] = SchemaField(description="The list to get the item from.")
index: int = SchemaField(
description="The 0-based index of the item (supports negative indices)."
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
item: Any = SchemaField(description="The item at the specified index.")
error: str = SchemaField(description="Error message if the operation failed.")
def __init__(self):
super().__init__(
@@ -561,7 +561,7 @@ class GetListItemBlock(Block):
class RemoveFromListBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
list: List[Any] = SchemaField(description="The list to modify.")
value: Any = SchemaField(
default=None, description="Value to remove from the list."
@@ -574,10 +574,9 @@ class RemoveFromListBlock(Block):
default=False, description="Whether to return the removed item."
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
updated_list: List[Any] = SchemaField(description="The list after removal.")
removed_item: Any = SchemaField(description="The removed item if requested.")
error: str = SchemaField(description="Error message if the operation failed.")
def __init__(self):
super().__init__(
@@ -618,17 +617,16 @@ class RemoveFromListBlock(Block):
class ReplaceListItemBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
list: List[Any] = SchemaField(description="The list to modify.")
index: int = SchemaField(
description="Index of the item to replace (supports negative indices)."
)
value: Any = SchemaField(description="The new value for the given index.")
class Output(BlockSchema):
class Output(BlockSchemaOutput):
updated_list: List[Any] = SchemaField(description="The list after replacement.")
old_item: Any = SchemaField(description="The item that was replaced.")
error: str = SchemaField(description="Error message if the operation failed.")
def __init__(self):
super().__init__(
@@ -663,10 +661,10 @@ class ReplaceListItemBlock(Block):
class ListIsEmptyBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
list: List[Any] = SchemaField(description="The list to check.")
class Output(BlockSchema):
class Output(BlockSchemaOutput):
is_empty: bool = SchemaField(description="True if the list is empty.")
def __init__(self):

View File
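
The dictionary and list blocks above share a routing pattern: the happy path yields to a data pin, while lookup failures yield to a `missing` pin instead of raising. A minimal standalone sketch of that routing (simplified from `FindInDictionaryBlock`; list and attribute lookup paths omitted):

```python
import asyncio
from typing import Any, AsyncGenerator


async def find_in_dictionary(
    input_value: Any, key: str | int
) -> AsyncGenerator[tuple[str, Any], None]:
    """Simplified sketch of FindInDictionaryBlock's output routing."""
    if isinstance(input_value, dict) and key in input_value:
        yield "output", input_value[key]
    else:
        # Nodes wired to the `missing` pin receive the original input.
        yield "missing", input_value


async def main() -> None:
    async for pin, value in find_in_dictionary({"a": 1}, "a"):
        print(pin, value)  # -> output 1


asyncio.run(main())
```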

@@ -8,7 +8,8 @@ from backend.sdk import (
Block,
BlockCategory,
BlockOutput,
BlockSchema,
BlockSchemaInput,
BlockSchemaOutput,
CredentialsMetaInput,
SchemaField,
UserPasswordCredentials,
@@ -18,7 +19,7 @@ from ._api import DataForSeoClient
from ._config import dataforseo
class KeywordSuggestion(BlockSchema):
class KeywordSuggestion(BlockSchemaInput):
"""Schema for a keyword suggestion result."""
keyword: str = SchemaField(description="The keyword suggestion")
@@ -45,7 +46,7 @@ class KeywordSuggestion(BlockSchema):
class DataForSeoKeywordSuggestionsBlock(Block):
"""Block for getting keyword suggestions from DataForSEO Labs."""
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput = dataforseo.credentials_field(
description="DataForSEO credentials (username and password)"
)
@@ -77,7 +78,7 @@ class DataForSeoKeywordSuggestionsBlock(Block):
le=3000,
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
suggestions: List[KeywordSuggestion] = SchemaField(
description="List of keyword suggestions with metrics"
)
@@ -90,7 +91,6 @@ class DataForSeoKeywordSuggestionsBlock(Block):
seed_keyword: str = SchemaField(
description="The seed keyword used for the query"
)
error: str = SchemaField(description="Error message if the API call failed")
def __init__(self):
super().__init__(
@@ -213,12 +213,12 @@ class DataForSeoKeywordSuggestionsBlock(Block):
class KeywordSuggestionExtractorBlock(Block):
"""Extracts individual fields from a KeywordSuggestion object."""
class Input(BlockSchema):
class Input(BlockSchemaInput):
suggestion: KeywordSuggestion = SchemaField(
description="The keyword suggestion object to extract fields from"
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
keyword: str = SchemaField(description="The keyword suggestion")
search_volume: Optional[int] = SchemaField(
description="Monthly search volume", default=None

View File

@@ -8,7 +8,8 @@ from backend.sdk import (
Block,
BlockCategory,
BlockOutput,
BlockSchema,
BlockSchemaInput,
BlockSchemaOutput,
CredentialsMetaInput,
SchemaField,
UserPasswordCredentials,
@@ -18,7 +19,7 @@ from ._api import DataForSeoClient
from ._config import dataforseo
class RelatedKeyword(BlockSchema):
class RelatedKeyword(BlockSchemaInput):
"""Schema for a related keyword result."""
keyword: str = SchemaField(description="The related keyword")
@@ -45,7 +46,7 @@ class RelatedKeyword(BlockSchema):
class DataForSeoRelatedKeywordsBlock(Block):
"""Block for getting related keywords from DataForSEO Labs."""
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput = dataforseo.credentials_field(
description="DataForSEO credentials (username and password)"
)
@@ -85,7 +86,7 @@ class DataForSeoRelatedKeywordsBlock(Block):
le=4,
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
related_keywords: List[RelatedKeyword] = SchemaField(
description="List of related keywords with metrics"
)
@@ -98,7 +99,6 @@ class DataForSeoRelatedKeywordsBlock(Block):
seed_keyword: str = SchemaField(
description="The seed keyword used for the query"
)
error: str = SchemaField(description="Error message if the API call failed")
def __init__(self):
super().__init__(
@@ -231,12 +231,12 @@ class DataForSeoRelatedKeywordsBlock(Block):
class RelatedKeywordExtractorBlock(Block):
"""Extracts individual fields from a RelatedKeyword object."""
class Input(BlockSchema):
class Input(BlockSchemaInput):
related_keyword: RelatedKeyword = SchemaField(
description="The related keyword object to extract fields from"
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
keyword: str = SchemaField(description="The related keyword")
search_volume: Optional[int] = SchemaField(
description="Monthly search volume", default=None

View File

@@ -1,17 +1,23 @@
import codecs
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.block import (
Block,
BlockCategory,
BlockOutput,
BlockSchemaInput,
BlockSchemaOutput,
)
from backend.data.model import SchemaField
class TextDecoderBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
text: str = SchemaField(
description="A string containing escaped characters to be decoded",
placeholder='Your entire text block with \\n and \\" escaped characters',
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
decoded_text: str = SchemaField(
description="The decoded text with escape sequences processed"
)

View File
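
`TextDecoderBlock` above turns escaped sequences back into literal characters; the core transformation can be shown in a few lines (the exact codec the block uses is an assumption):

```python
import codecs

raw = 'Hello\\nWorld with a \\"quote\\"'
decoded = codecs.decode(raw, "unicode_escape")
assert decoded == 'Hello\nWorld with a "quote"'
```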

@@ -1,13 +1,20 @@
import base64
import io
import mimetypes
from enum import Enum
from pathlib import Path
from typing import Any
from typing import Any, Literal, cast
import discord
from pydantic import SecretStr
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.block import (
Block,
BlockCategory,
BlockOutput,
BlockSchemaInput,
BlockSchemaOutput,
)
from backend.data.model import APIKeyCredentials, SchemaField
from backend.util.file import store_media_file
from backend.util.request import Requests
@@ -27,11 +34,24 @@ TEST_CREDENTIALS = TEST_BOT_CREDENTIALS
TEST_CREDENTIALS_INPUT = TEST_BOT_CREDENTIALS_INPUT
class ThreadArchiveDuration(str, Enum):
"""Discord thread auto-archive duration options"""
ONE_HOUR = "60"
ONE_DAY = "1440"
THREE_DAYS = "4320"
ONE_WEEK = "10080"
def to_minutes(self) -> int:
"""Convert the duration string to minutes for Discord API"""
return int(self.value)
class ReadDiscordMessagesBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: DiscordCredentials = DiscordCredentialsField()
class Output(BlockSchema):
class Output(BlockSchemaOutput):
message_content: str = SchemaField(
description="The content of the message received"
)
@@ -164,7 +184,7 @@ class ReadDiscordMessagesBlock(Block):
class SendDiscordMessageBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: DiscordCredentials = DiscordCredentialsField()
message_content: str = SchemaField(
description="The content of the message to send"
@@ -178,7 +198,7 @@ class SendDiscordMessageBlock(Block):
default="",
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
status: str = SchemaField(
description="The status of the operation (e.g., 'Message sent', 'Error')"
)
@@ -310,7 +330,7 @@ class SendDiscordMessageBlock(Block):
class SendDiscordDMBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: DiscordCredentials = DiscordCredentialsField()
user_id: str = SchemaField(
description="The Discord user ID to send the DM to (e.g., '123456789012345678')"
@@ -319,7 +339,7 @@ class SendDiscordDMBlock(Block):
description="The content of the direct message to send"
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
status: str = SchemaField(description="The status of the operation")
message_id: str = SchemaField(description="The ID of the sent message")
@@ -399,7 +419,7 @@ class SendDiscordDMBlock(Block):
class SendDiscordEmbedBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: DiscordCredentials = DiscordCredentialsField()
channel_identifier: str = SchemaField(
description="Channel ID or channel name to send the embed to"
@@ -436,7 +456,7 @@ class SendDiscordEmbedBlock(Block):
default=[],
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
status: str = SchemaField(description="Operation status")
message_id: str = SchemaField(description="ID of the sent embed message")
@@ -586,7 +606,7 @@ class SendDiscordEmbedBlock(Block):
class SendDiscordFileBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: DiscordCredentials = DiscordCredentialsField()
channel_identifier: str = SchemaField(
description="Channel ID or channel name to send the file to"
@@ -607,7 +627,7 @@ class SendDiscordFileBlock(Block):
description="Optional message to send with the file", default=""
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
status: str = SchemaField(description="Operation status")
message_id: str = SchemaField(description="ID of the sent message")
@@ -788,7 +808,7 @@ class SendDiscordFileBlock(Block):
class ReplyToDiscordMessageBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: DiscordCredentials = DiscordCredentialsField()
channel_id: str = SchemaField(
description="The channel ID where the message to reply to is located"
@@ -799,7 +819,7 @@ class ReplyToDiscordMessageBlock(Block):
description="Whether to mention the original message author", default=True
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
status: str = SchemaField(description="Operation status")
reply_id: str = SchemaField(description="ID of the reply message")
@@ -913,13 +933,13 @@ class ReplyToDiscordMessageBlock(Block):
class DiscordUserInfoBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: DiscordCredentials = DiscordCredentialsField()
user_id: str = SchemaField(
description="The Discord user ID to get information about"
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
user_id: str = SchemaField(
description="The user's ID (passed through for chaining)"
)
@@ -1030,7 +1050,7 @@ class DiscordUserInfoBlock(Block):
class DiscordChannelInfoBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: DiscordCredentials = DiscordCredentialsField()
channel_identifier: str = SchemaField(
description="Channel name or channel ID to look up"
@@ -1041,7 +1061,7 @@ class DiscordChannelInfoBlock(Block):
default="",
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
channel_id: str = SchemaField(description="The channel's ID")
channel_name: str = SchemaField(description="The channel's name")
server_id: str = SchemaField(description="The server's ID")
@@ -1160,3 +1180,211 @@ class DiscordChannelInfoBlock(Block):
raise ValueError(f"Login error occurred: {login_err}")
except Exception as e:
raise ValueError(f"An error occurred: {e}")
class CreateDiscordThreadBlock(Block):
class Input(BlockSchemaInput):
credentials: DiscordCredentials = DiscordCredentialsField()
channel_name: str = SchemaField(
description="Channel ID or channel name to create the thread in"
)
server_name: str = SchemaField(
description="Server name (only needed if using channel name)",
advanced=True,
default="",
)
thread_name: str = SchemaField(description="The name of the thread to create")
is_private: bool = SchemaField(
description="Whether to create a private thread (requires Boost Level 2+) or public thread",
default=False,
)
auto_archive_duration: ThreadArchiveDuration = SchemaField(
description="Duration before the thread is automatically archived",
advanced=True,
default=ThreadArchiveDuration.ONE_WEEK,
)
message_content: str = SchemaField(
description="Optional initial message to send in the thread",
advanced=True,
default="",
)
class Output(BlockSchemaOutput):
status: str = SchemaField(description="Operation status")
thread_id: str = SchemaField(description="ID of the created thread")
thread_name: str = SchemaField(description="Name of the created thread")
def __init__(self):
super().__init__(
id="e8f3c9a2-7b5d-4f1e-9c6a-3d8e2b4f7a1c",
input_schema=CreateDiscordThreadBlock.Input,
output_schema=CreateDiscordThreadBlock.Output,
description="Creates a new thread in a Discord channel.",
categories={BlockCategory.SOCIAL},
test_input={
"channel_name": "general",
"thread_name": "Test Thread",
"is_private": False,
"auto_archive_duration": ThreadArchiveDuration.ONE_HOUR,
"credentials": TEST_CREDENTIALS_INPUT,
},
test_output=[
("status", "Thread created successfully"),
("thread_id", "123456789012345678"),
("thread_name", "Test Thread"),
],
test_mock={
"create_thread": lambda *args, **kwargs: {
"status": "Thread created successfully",
"thread_id": "123456789012345678",
"thread_name": "Test Thread",
}
},
test_credentials=TEST_CREDENTIALS,
)
async def create_thread(
self,
token: str,
channel_name: str,
server_name: str | None,
thread_name: str,
is_private: bool,
auto_archive_duration: ThreadArchiveDuration,
message_content: str,
) -> dict:
intents = discord.Intents.default()
intents.guilds = True
intents.message_content = True # Required for sending messages in threads
client = discord.Client(intents=intents)
result = {}
@client.event
async def on_ready():
channel = None
# Try to parse as channel ID first
try:
channel_id = int(channel_name)
try:
channel = await client.fetch_channel(channel_id)
except discord.errors.NotFound:
result["status"] = f"Channel with ID {channel_id} not found"
await client.close()
return
except discord.errors.Forbidden:
result["status"] = (
f"Bot does not have permission to view channel {channel_id}"
)
await client.close()
return
except ValueError:
# Not an ID, treat as channel name
# Collect all matching channels to detect duplicates
matching_channels = []
for guild in client.guilds:
# Skip guilds if server_name is provided and doesn't match
if (
server_name
and server_name.strip()
and guild.name != server_name
):
continue
for ch in guild.text_channels:
if ch.name == channel_name:
matching_channels.append(ch)
if not matching_channels:
result["status"] = f"Channel not found: {channel_name}"
await client.close()
return
elif len(matching_channels) > 1:
result["status"] = (
f"Multiple channels named '{channel_name}' found. "
"Please specify server_name to disambiguate."
)
await client.close()
return
else:
channel = matching_channels[0]
if not channel:
result["status"] = "Failed to resolve channel"
await client.close()
return
# Type check - ensure it's a text channel that can create threads
if not hasattr(channel, "create_thread"):
result["status"] = (
f"Channel {channel_name} cannot create threads (not a text channel)"
)
await client.close()
return
# After the hasattr check, we know channel is a TextChannel
channel = cast(discord.TextChannel, channel)
try:
# Create the thread using discord.py 2.0+ API
thread_type = (
discord.ChannelType.private_thread
if is_private
else discord.ChannelType.public_thread
)
# Cast to the specific Literal type that discord.py expects
duration_minutes = cast(
Literal[60, 1440, 4320, 10080], auto_archive_duration.to_minutes()
)
# The 'type' parameter exists in discord.py 2.0+ but isn't in type stubs yet
# pyright: ignore[reportCallIssue]
thread = await channel.create_thread(
name=thread_name,
type=thread_type,
auto_archive_duration=duration_minutes,
)
# Send initial message if provided
if message_content:
await thread.send(message_content)
result["status"] = "Thread created successfully"
result["thread_id"] = str(thread.id)
result["thread_name"] = thread.name
except discord.errors.Forbidden as e:
result["status"] = (
f"Bot does not have permission to create threads in this channel. {str(e)}"
)
except Exception as e:
result["status"] = f"Error creating thread: {str(e)}"
finally:
await client.close()
await client.start(token)
return result
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
try:
result = await self.create_thread(
token=credentials.api_key.get_secret_value(),
channel_name=input_data.channel_name,
server_name=input_data.server_name or None,
thread_name=input_data.thread_name,
is_private=input_data.is_private,
auto_archive_duration=input_data.auto_archive_duration,
message_content=input_data.message_content,
)
yield "status", result.get("status", "Unknown error")
if "thread_id" in result:
yield "thread_id", result["thread_id"]
if "thread_name" in result:
yield "thread_name", result["thread_name"]
except discord.errors.LoginFailure as login_err:
raise ValueError(f"Login error occurred: {login_err}")

View File
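
discord.py types `auto_archive_duration` as `Literal[60, 1440, 4320, 10080]`, which is why `CreateDiscordThreadBlock` stores string enum values and converts through `to_minutes()` plus a `cast`. The conversion in isolation:

```python
from enum import Enum
from typing import Literal, cast

ArchiveMinutes = Literal[60, 1440, 4320, 10080]


class ThreadArchiveDuration(str, Enum):
    ONE_HOUR = "60"
    ONE_DAY = "1440"
    THREE_DAYS = "4320"
    ONE_WEEK = "10080"

    def to_minutes(self) -> int:
        """Convert the duration string to the minute count Discord expects."""
        return int(self.value)


duration = cast(ArchiveMinutes, ThreadArchiveDuration.ONE_WEEK.to_minutes())
assert duration == 10080
```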

@@ -2,7 +2,13 @@
Discord OAuth-based blocks.
"""
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.block import (
Block,
BlockCategory,
BlockOutput,
BlockSchemaInput,
BlockSchemaOutput,
)
from backend.data.model import OAuth2Credentials, SchemaField
from ._api import DiscordOAuthUser, get_current_user
@@ -21,12 +27,12 @@ class DiscordGetCurrentUserBlock(Block):
This block requires Discord OAuth2 credentials (not bot tokens).
"""
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: DiscordOAuthCredentialsInput = DiscordOAuthCredentialsField(
["identify"]
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
user_id: str = SchemaField(description="The authenticated user's Discord ID")
username: str = SchemaField(description="The user's username")
avatar_url: str = SchemaField(description="URL to the user's avatar image")

View File

@@ -1,11 +1,19 @@
import smtplib
import socket
import ssl
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
from typing import Literal
from pydantic import BaseModel, ConfigDict, SecretStr
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.block import (
Block,
BlockCategory,
BlockOutput,
BlockSchemaInput,
BlockSchemaOutput,
)
from backend.data.model import (
CredentialsField,
CredentialsMetaInput,
@@ -42,16 +50,14 @@ def SMTPCredentialsField() -> SMTPCredentialsInput:
class SMTPConfig(BaseModel):
smtp_server: str = SchemaField(
default="smtp.example.com", description="SMTP server address"
)
smtp_server: str = SchemaField(description="SMTP server address")
smtp_port: int = SchemaField(default=25, description="SMTP port number")
model_config = ConfigDict(title="SMTP Config")
class SendEmailBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
to_email: str = SchemaField(
description="Recipient email address", placeholder="recipient@example.com"
)
@@ -61,13 +67,10 @@ class SendEmailBlock(Block):
body: str = SchemaField(
description="Body of the email", placeholder="Enter the email body"
)
config: SMTPConfig = SchemaField(
description="SMTP Config",
default=SMTPConfig(),
)
config: SMTPConfig = SchemaField(description="SMTP Config")
credentials: SMTPCredentialsInput = SMTPCredentialsField()
class Output(BlockSchema):
class Output(BlockSchemaOutput):
status: str = SchemaField(description="Status of the email sending operation")
error: str = SchemaField(
description="Error message if the email sending failed"
@@ -114,7 +117,7 @@ class SendEmailBlock(Block):
msg["Subject"] = subject
msg.attach(MIMEText(body, "plain"))
with smtplib.SMTP(smtp_server, smtp_port) as server:
with smtplib.SMTP(smtp_server, smtp_port, timeout=30) as server:
server.starttls()
server.login(smtp_username, smtp_password)
server.sendmail(smtp_username, to_email, msg.as_string())
@@ -124,10 +127,59 @@ class SendEmailBlock(Block):
async def run(
self, input_data: Input, *, credentials: SMTPCredentials, **kwargs
) -> BlockOutput:
yield "status", self.send_email(
config=input_data.config,
to_email=input_data.to_email,
subject=input_data.subject,
body=input_data.body,
credentials=credentials,
)
try:
status = self.send_email(
config=input_data.config,
to_email=input_data.to_email,
subject=input_data.subject,
body=input_data.body,
credentials=credentials,
)
yield "status", status
except socket.gaierror:
yield "error", (
f"Cannot connect to SMTP server '{input_data.config.smtp_server}'. "
"Please verify the server address is correct."
)
except socket.timeout:
yield "error", (
f"Connection timeout to '{input_data.config.smtp_server}' "
f"on port {input_data.config.smtp_port}. "
"The server may be down or unreachable."
)
except ConnectionRefusedError:
yield "error", (
f"Connection refused to '{input_data.config.smtp_server}' "
f"on port {input_data.config.smtp_port}. "
"Common SMTP ports are: 587 (TLS), 465 (SSL), 25 (plain). "
"Please verify the port is correct."
)
except smtplib.SMTPNotSupportedError:
yield "error", (
f"STARTTLS not supported by server '{input_data.config.smtp_server}'. "
"Try using port 465 for SSL or port 25 for unencrypted connection."
)
except ssl.SSLError as e:
yield "error", (
f"SSL/TLS error when connecting to '{input_data.config.smtp_server}': {str(e)}. "
"The server may require a different security protocol."
)
except smtplib.SMTPAuthenticationError:
yield "error", (
"Authentication failed. Please verify your username and password are correct."
)
except smtplib.SMTPRecipientsRefused:
yield "error", (
f"Recipient email address '{input_data.to_email}' was rejected by the server. "
"Please verify the email address is valid."
)
except smtplib.SMTPSenderRefused:
yield "error", (
"Sender email address defined in the credentials that where used"
"was rejected by the server. "
"Please verify your account is authorized to send emails."
)
except smtplib.SMTPDataError as e:
yield "error", f"Email data rejected by server: {str(e)}"
except Exception as e:
raise e

View File
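
The rewritten `run` above maps transport failures to the `error` pin rather than letting them propagate. A trimmed, runnable sketch of the same classification (the hostname below is a placeholder that will fail DNS resolution):

```python
import smtplib
import socket


def classify_smtp_failure(server: str, port: int) -> str:
    """Probe an SMTP endpoint and classify failures like SendEmailBlock.run does."""
    try:
        with smtplib.SMTP(server, port, timeout=5) as conn:
            conn.noop()
        return "reachable"
    except socket.gaierror:
        return f"Cannot connect to SMTP server '{server}'. Verify the address."
    except TimeoutError:
        return f"Connection timeout to '{server}' on port {port}."
    except ConnectionRefusedError:
        return f"Connection refused on port {port}; common SMTP ports are 587, 465, 25."


print(classify_smtp_failure("smtp.invalid", 587))  # placeholder host -> resolution error
```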

@@ -8,7 +8,13 @@ which provides access to LinkedIn profile data and related information.
import logging
from typing import Optional
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.block import (
Block,
BlockCategory,
BlockOutput,
BlockSchemaInput,
BlockSchemaOutput,
)
from backend.data.model import APIKeyCredentials, CredentialsField, SchemaField
from backend.util.type import MediaFileType
@@ -29,7 +35,7 @@ logger = logging.getLogger(__name__)
class GetLinkedinProfileBlock(Block):
"""Block to fetch LinkedIn profile data using Enrichlayer API."""
class Input(BlockSchema):
class Input(BlockSchemaInput):
"""Input schema for GetLinkedinProfileBlock."""
linkedin_url: str = SchemaField(
@@ -80,13 +86,12 @@ class GetLinkedinProfileBlock(Block):
description="Enrichlayer API credentials"
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
"""Output schema for GetLinkedinProfileBlock."""
profile: PersonProfileResponse = SchemaField(
description="LinkedIn profile data"
)
error: str = SchemaField(description="Error message if the request failed")
def __init__(self):
"""Initialize GetLinkedinProfileBlock."""
@@ -199,7 +204,7 @@ class GetLinkedinProfileBlock(Block):
class LinkedinPersonLookupBlock(Block):
"""Block to look up LinkedIn profiles by person's information using Enrichlayer API."""
class Input(BlockSchema):
class Input(BlockSchemaInput):
"""Input schema for LinkedinPersonLookupBlock."""
first_name: str = SchemaField(
@@ -242,13 +247,12 @@ class LinkedinPersonLookupBlock(Block):
description="Enrichlayer API credentials"
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
"""Output schema for LinkedinPersonLookupBlock."""
lookup_result: PersonLookupResponse = SchemaField(
description="LinkedIn profile lookup result"
)
error: str = SchemaField(description="Error message if the request failed")
def __init__(self):
"""Initialize LinkedinPersonLookupBlock."""
@@ -346,7 +350,7 @@ class LinkedinPersonLookupBlock(Block):
class LinkedinRoleLookupBlock(Block):
"""Block to look up LinkedIn profiles by role in a company using Enrichlayer API."""
class Input(BlockSchema):
class Input(BlockSchemaInput):
"""Input schema for LinkedinRoleLookupBlock."""
role: str = SchemaField(
@@ -366,13 +370,12 @@ class LinkedinRoleLookupBlock(Block):
description="Enrichlayer API credentials"
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
"""Output schema for LinkedinRoleLookupBlock."""
role_lookup_result: RoleLookupResponse = SchemaField(
description="LinkedIn role lookup result"
)
error: str = SchemaField(description="Error message if the request failed")
def __init__(self):
"""Initialize LinkedinRoleLookupBlock."""
@@ -449,7 +452,7 @@ class LinkedinRoleLookupBlock(Block):
class GetLinkedinProfilePictureBlock(Block):
"""Block to get LinkedIn profile pictures using Enrichlayer API."""
class Input(BlockSchema):
class Input(BlockSchemaInput):
"""Input schema for GetLinkedinProfilePictureBlock."""
linkedin_profile_url: str = SchemaField(
@@ -460,13 +463,12 @@ class GetLinkedinProfilePictureBlock(Block):
description="Enrichlayer API credentials"
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
"""Output schema for GetLinkedinProfilePictureBlock."""
profile_picture_url: MediaFileType = SchemaField(
description="LinkedIn profile picture URL"
)
error: str = SchemaField(description="Error message if the request failed")
def __init__(self):
"""Initialize GetLinkedinProfilePictureBlock."""

View File

@@ -0,0 +1,22 @@
"""
Test credentials and helpers for Exa blocks.
"""
from pydantic import SecretStr
from backend.data.model import APIKeyCredentials
TEST_CREDENTIALS = APIKeyCredentials(
id="01234567-89ab-cdef-0123-456789abcdef",
provider="exa",
api_key=SecretStr("mock-exa-api-key"),
title="Mock Exa API key",
expires_at=None,
)
TEST_CREDENTIALS_INPUT = {
"provider": TEST_CREDENTIALS.provider,
"id": TEST_CREDENTIALS.id,
"type": TEST_CREDENTIALS.type,
"title": TEST_CREDENTIALS.title,
}

View File

@@ -1,55 +1,59 @@
from typing import Optional
from exa_py import AsyncExa
from exa_py.api import AnswerResponse
from pydantic import BaseModel
from backend.sdk import (
APIKeyCredentials,
BaseModel,
Block,
BlockCategory,
BlockOutput,
BlockSchema,
BlockSchemaInput,
BlockSchemaOutput,
CredentialsMetaInput,
Requests,
MediaFileType,
SchemaField,
)
from ._config import exa
class CostBreakdown(BaseModel):
keywordSearch: float
neuralSearch: float
contentText: float
contentHighlight: float
contentSummary: float
class AnswerCitation(BaseModel):
"""Citation model for answer endpoint."""
id: str = SchemaField(description="The temporary ID for the document")
url: str = SchemaField(description="The URL of the search result")
title: Optional[str] = SchemaField(description="The title of the search result")
author: Optional[str] = SchemaField(description="The author of the content")
publishedDate: Optional[str] = SchemaField(
description="An estimate of the creation date"
)
text: Optional[str] = SchemaField(description="The full text content of the source")
image: Optional[MediaFileType] = SchemaField(
description="The URL of the image associated with the result"
)
favicon: Optional[MediaFileType] = SchemaField(
description="The URL of the favicon for the domain"
)
class SearchBreakdown(BaseModel):
search: float
contents: float
breakdown: CostBreakdown
class PerRequestPrices(BaseModel):
neuralSearch_1_25_results: float
neuralSearch_26_100_results: float
neuralSearch_100_plus_results: float
keywordSearch_1_100_results: float
keywordSearch_100_plus_results: float
class PerPagePrices(BaseModel):
contentText: float
contentHighlight: float
contentSummary: float
class CostDollars(BaseModel):
total: float
breakDown: list[SearchBreakdown]
perRequestPrices: PerRequestPrices
perPagePrices: PerPagePrices
@classmethod
def from_sdk(cls, sdk_citation) -> "AnswerCitation":
"""Convert SDK AnswerResult (dataclass) to our Pydantic model."""
return cls(
id=getattr(sdk_citation, "id", ""),
url=getattr(sdk_citation, "url", ""),
title=getattr(sdk_citation, "title", None),
author=getattr(sdk_citation, "author", None),
publishedDate=getattr(sdk_citation, "published_date", None),
text=getattr(sdk_citation, "text", None),
image=getattr(sdk_citation, "image", None),
favicon=getattr(sdk_citation, "favicon", None),
)
class ExaAnswerBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput = exa.credentials_field(
description="The Exa integration requires an API Key."
)
@@ -58,31 +62,21 @@ class ExaAnswerBlock(Block):
placeholder="What is the latest valuation of SpaceX?",
)
text: bool = SchemaField(
default=False,
description="If true, the response includes full text content in the search results",
advanced=True,
)
model: str = SchemaField(
default="exa",
description="The search model to use (exa or exa-pro)",
placeholder="exa",
advanced=True,
description="Include full text content in the search results used for the answer",
default=True,
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
answer: str = SchemaField(
description="The generated answer based on search results"
)
citations: list[dict] = SchemaField(
description="Search results used to generate the answer",
default_factory=list,
citations: list[AnswerCitation] = SchemaField(
description="Search results used to generate the answer"
)
cost_dollars: CostDollars = SchemaField(
description="Cost breakdown of the request"
)
error: str = SchemaField(
description="Error message if the request failed", default=""
citation: AnswerCitation = SchemaField(
description="Individual citation from the answer"
)
error: str = SchemaField(description="Error message if the request failed")
def __init__(self):
super().__init__(
@@ -96,26 +90,24 @@ class ExaAnswerBlock(Block):
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
url = "https://api.exa.ai/answer"
headers = {
"Content-Type": "application/json",
"x-api-key": credentials.api_key.get_secret_value(),
}
aexa = AsyncExa(api_key=credentials.api_key.get_secret_value())
# Build the payload
payload = {
"query": input_data.query,
"text": input_data.text,
"model": input_data.model,
}
# Get answer using SDK (stream=False for blocks) - this IS async, needs await
response = await aexa.answer(
query=input_data.query, text=input_data.text, stream=False
)
try:
response = await Requests().post(url, headers=headers, json=payload)
data = response.json()
# This should remain true as long as the SDK doesn't default to streaming-only responses;
# it provides a bit of safety against SDK updates.
assert type(response) is AnswerResponse
yield "answer", data.get("answer", "")
yield "citations", data.get("citations", [])
yield "cost_dollars", data.get("costDollars", {})
yield "answer", response.answer
except Exception as e:
yield "error", str(e)
citations = [
AnswerCitation.from_sdk(sdk_citation)
for sdk_citation in response.citations or []
]
yield "citations", citations
for citation in citations:
yield "citation", citation

View File
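
The `from_sdk` converter above reads SDK dataclass attributes through `getattr` with defaults, so citations missing optional attributes still validate. A self-contained sketch (the stand-in dataclass is hypothetical, not exa_py's real `AnswerResult`):

```python
from dataclasses import dataclass
from typing import Optional

from pydantic import BaseModel


class AnswerCitation(BaseModel):  # trimmed to three fields for the demo
    id: str
    url: str
    title: Optional[str] = None

    @classmethod
    def from_sdk(cls, sdk_citation) -> "AnswerCitation":
        # getattr with defaults tolerates attributes absent from the SDK object.
        return cls(
            id=getattr(sdk_citation, "id", ""),
            url=getattr(sdk_citation, "url", ""),
            title=getattr(sdk_citation, "title", None),
        )


@dataclass
class FakeSdkResult:  # hypothetical stand-in; note: no `title` attribute at all
    id: str
    url: str


citation = AnswerCitation.from_sdk(FakeSdkResult(id="doc1", url="https://example.com"))
assert citation.title is None
```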

@@ -0,0 +1,118 @@
"""
Exa Code Context Block
Provides code search capabilities to find relevant code snippets and examples
from open source repositories, documentation, and Stack Overflow.
"""
from typing import Union
from pydantic import BaseModel
from backend.sdk import (
APIKeyCredentials,
Block,
BlockCategory,
BlockOutput,
BlockSchemaInput,
BlockSchemaOutput,
CredentialsMetaInput,
Requests,
SchemaField,
)
from ._config import exa
class CodeContextResponse(BaseModel):
"""Stable output model for code context responses."""
request_id: str
query: str
response: str
results_count: int
cost_dollars: str
search_time: float
output_tokens: int
@classmethod
def from_api(cls, data: dict) -> "CodeContextResponse":
"""Convert API response to our stable model."""
return cls(
request_id=data.get("requestId", ""),
query=data.get("query", ""),
response=data.get("response", ""),
results_count=data.get("resultsCount", 0),
cost_dollars=data.get("costDollars", ""),
search_time=data.get("searchTime", 0.0),
output_tokens=data.get("outputTokens", 0),
)
class ExaCodeContextBlock(Block):
"""Get relevant code snippets and examples from open source repositories."""
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput = exa.credentials_field(
description="The Exa integration requires an API Key."
)
query: str = SchemaField(
description="Search query to find relevant code snippets. Describe what you're trying to do or what code you're looking for.",
placeholder="how to use React hooks for state management",
)
tokens_num: Union[str, int] = SchemaField(
default="dynamic",
description="Token limit for response. Use 'dynamic' for automatic sizing, 5000 for standard queries, or 10000 for comprehensive examples.",
placeholder="dynamic",
)
class Output(BlockSchemaOutput):
request_id: str = SchemaField(description="Unique identifier for this request")
query: str = SchemaField(description="The search query used")
response: str = SchemaField(
description="Formatted code snippets and contextual examples with sources"
)
results_count: int = SchemaField(
description="Number of code sources found and included"
)
cost_dollars: str = SchemaField(description="Cost of this request in dollars")
search_time: float = SchemaField(
description="Time taken to search in milliseconds"
)
output_tokens: int = SchemaField(description="Number of tokens in the response")
def __init__(self):
super().__init__(
id="8f9e0d1c-2b3a-4567-8901-23456789abcd",
description="Search billions of GitHub repos, docs, and Stack Overflow for relevant code examples",
categories={BlockCategory.SEARCH, BlockCategory.DEVELOPER_TOOLS},
input_schema=ExaCodeContextBlock.Input,
output_schema=ExaCodeContextBlock.Output,
)
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
url = "https://api.exa.ai/context"
headers = {
"Content-Type": "application/json",
"x-api-key": credentials.api_key.get_secret_value(),
}
payload = {
"query": input_data.query,
"tokensNum": input_data.tokens_num,
}
response = await Requests().post(url, headers=headers, json=payload)
data = response.json()
context = CodeContextResponse.from_api(data)
yield "request_id", context.request_id
yield "query", context.query
yield "response", context.response
yield "results_count", context.results_count
yield "cost_dollars", context.cost_dollars
yield "search_time", context.search_time
yield "output_tokens", context.output_tokens

View File
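
`CodeContextResponse.from_api` normalizes the `/context` endpoint's camelCase keys into the block's snake_case outputs with safe defaults. The mapping in isolation (trimmed to two fields):

```python
from pydantic import BaseModel


class CodeContextResponse(BaseModel):
    request_id: str
    results_count: int

    @classmethod
    def from_api(cls, data: dict) -> "CodeContextResponse":
        # camelCase API keys -> snake_case model fields, with safe defaults
        return cls(
            request_id=data.get("requestId", ""),
            results_count=data.get("resultsCount", 0),
        )


resp = CodeContextResponse.from_api({"requestId": "req_123", "resultsCount": 7})
assert (resp.request_id, resp.results_count) == ("req_123", 7)
```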

@@ -1,39 +1,127 @@
from enum import Enum
from typing import Optional
from exa_py import AsyncExa
from pydantic import BaseModel
from backend.sdk import (
APIKeyCredentials,
Block,
BlockCategory,
BlockOutput,
BlockSchema,
BlockSchemaInput,
BlockSchemaOutput,
CredentialsMetaInput,
Requests,
SchemaField,
)
from ._config import exa
from .helpers import ContentSettings
from .helpers import (
CostDollars,
ExaSearchResults,
ExtrasSettings,
HighlightSettings,
LivecrawlTypes,
SummarySettings,
)
class ContentStatusTag(str, Enum):
CRAWL_NOT_FOUND = "CRAWL_NOT_FOUND"
CRAWL_TIMEOUT = "CRAWL_TIMEOUT"
CRAWL_LIVECRAWL_TIMEOUT = "CRAWL_LIVECRAWL_TIMEOUT"
SOURCE_NOT_AVAILABLE = "SOURCE_NOT_AVAILABLE"
CRAWL_UNKNOWN_ERROR = "CRAWL_UNKNOWN_ERROR"
class ContentError(BaseModel):
tag: Optional[ContentStatusTag] = SchemaField(
default=None, description="Specific error type"
)
httpStatusCode: Optional[int] = SchemaField(
default=None, description="The corresponding HTTP status code"
)
class ContentStatus(BaseModel):
id: str = SchemaField(description="The URL that was requested")
status: str = SchemaField(
description="Status of the content fetch operation (success or error)"
)
error: Optional[ContentError] = SchemaField(
default=None, description="Error details, only present when status is 'error'"
)
class ExaContentsBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput = exa.credentials_field(
description="The Exa integration requires an API Key."
)
ids: list[str] = SchemaField(
description="Array of document IDs obtained from searches"
urls: list[str] = SchemaField(
description="Array of URLs to crawl (preferred over 'ids')",
default_factory=list,
advanced=False,
)
contents: ContentSettings = SchemaField(
description="Content retrieval settings",
default=ContentSettings(),
ids: list[str] = SchemaField(
description="[DEPRECATED - use 'urls' instead] Array of document IDs obtained from searches",
default_factory=list,
advanced=True,
)
text: bool = SchemaField(
description="Retrieve text content from pages",
default=True,
)
highlights: HighlightSettings = SchemaField(
description="Text snippets most relevant from each page",
default=HighlightSettings(),
)
summary: SummarySettings = SchemaField(
description="LLM-generated summary of the webpage",
default=SummarySettings(),
)
livecrawl: Optional[LivecrawlTypes] = SchemaField(
description="Livecrawling options: never, fallback (default), always, preferred",
default=LivecrawlTypes.FALLBACK,
advanced=True,
)
livecrawl_timeout: Optional[int] = SchemaField(
description="Timeout for livecrawling in milliseconds",
default=10000,
advanced=True,
)
subpages: Optional[int] = SchemaField(
description="Number of subpages to crawl", default=0, ge=0, advanced=True
)
subpage_target: Optional[str | list[str]] = SchemaField(
description="Keyword(s) to find specific subpages of search results",
default=None,
advanced=True,
)
extras: ExtrasSettings = SchemaField(
description="Extra parameters for additional content",
default=ExtrasSettings(),
advanced=True,
)
class Output(BlockSchema):
results: list = SchemaField(
description="List of document contents", default_factory=list
class Output(BlockSchemaOutput):
results: list[ExaSearchResults] = SchemaField(
description="List of document contents with metadata"
)
error: str = SchemaField(
description="Error message if the request failed", default=""
result: ExaSearchResults = SchemaField(
description="Single document content result"
)
context: str = SchemaField(
description="A formatted string of the results ready for LLMs"
)
request_id: str = SchemaField(description="Unique identifier for the request")
statuses: list[ContentStatus] = SchemaField(
description="Status information for each requested URL"
)
cost_dollars: Optional[CostDollars] = SchemaField(
description="Cost breakdown for the request"
)
error: str = SchemaField(description="Error message if the request failed")
def __init__(self):
super().__init__(
@@ -47,23 +135,91 @@ class ExaContentsBlock(Block):
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
url = "https://api.exa.ai/contents"
headers = {
"Content-Type": "application/json",
"x-api-key": credentials.api_key.get_secret_value(),
}
if not input_data.urls and not input_data.ids:
raise ValueError("Either 'urls' or 'ids' must be provided")
# Convert ContentSettings to API format
payload = {
"ids": input_data.ids,
"text": input_data.contents.text,
"highlights": input_data.contents.highlights,
"summary": input_data.contents.summary,
}
sdk_kwargs = {}
try:
response = await Requests().post(url, headers=headers, json=payload)
data = response.json()
yield "results", data.get("results", [])
except Exception as e:
yield "error", str(e)
# Prefer urls over ids
if input_data.urls:
sdk_kwargs["urls"] = input_data.urls
elif input_data.ids:
sdk_kwargs["ids"] = input_data.ids
if input_data.text:
sdk_kwargs["text"] = {"includeHtmlTags": True}
# Handle highlights - only include if modified from defaults
if input_data.highlights and (
input_data.highlights.num_sentences != 1
or input_data.highlights.highlights_per_url != 1
or input_data.highlights.query is not None
):
highlights_dict = {}
highlights_dict["numSentences"] = input_data.highlights.num_sentences
highlights_dict["highlightsPerUrl"] = (
input_data.highlights.highlights_per_url
)
if input_data.highlights.query:
highlights_dict["query"] = input_data.highlights.query
sdk_kwargs["highlights"] = highlights_dict
# Handle summary - only include if modified from defaults
if input_data.summary and (
input_data.summary.query is not None
or input_data.summary.schema is not None
):
summary_dict = {}
if input_data.summary.query:
summary_dict["query"] = input_data.summary.query
if input_data.summary.schema:
summary_dict["schema"] = input_data.summary.schema
sdk_kwargs["summary"] = summary_dict
if input_data.livecrawl:
sdk_kwargs["livecrawl"] = input_data.livecrawl.value
if input_data.livecrawl_timeout is not None:
sdk_kwargs["livecrawl_timeout"] = input_data.livecrawl_timeout
if input_data.subpages is not None:
sdk_kwargs["subpages"] = input_data.subpages
if input_data.subpage_target:
sdk_kwargs["subpage_target"] = input_data.subpage_target
# Handle extras - only include if modified from defaults
if input_data.extras and (
input_data.extras.links > 0 or input_data.extras.image_links > 0
):
extras_dict = {}
if input_data.extras.links:
extras_dict["links"] = input_data.extras.links
if input_data.extras.image_links:
extras_dict["image_links"] = input_data.extras.image_links
sdk_kwargs["extras"] = extras_dict
# Always enable context for LLM-ready output
sdk_kwargs["context"] = True
aexa = AsyncExa(api_key=credentials.api_key.get_secret_value())
response = await aexa.get_contents(**sdk_kwargs)
converted_results = [
ExaSearchResults.from_sdk(sdk_result)
for sdk_result in response.results or []
]
yield "results", converted_results
for result in converted_results:
yield "result", result
if response.context:
yield "context", response.context
if response.statuses:
yield "statuses", response.statuses
if response.cost_dollars:
yield "cost_dollars", response.cost_dollars

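A hedged, minimal sketch (not part of the diff) of the SDK call the new run() assembles, assuming exa_py is installed and an EXA_API_KEY environment variable; the kwargs mirror what the block builds (urls preferred over ids, text with HTML tags, context=True for an LLM-ready string):

import asyncio
import os

from exa_py import AsyncExa

async def main():
    aexa = AsyncExa(api_key=os.environ["EXA_API_KEY"])
    response = await aexa.get_contents(
        urls=["https://example.com"],
        text={"includeHtmlTags": True},  # same shape the block sends for text=True
        context=True,                    # always enabled by the block
    )
    for result in response.results or []:
        print(result.url, (result.text or "")[:80])

asyncio.run(main())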

@@ -1,51 +1,150 @@
from typing import Optional
from enum import Enum
from typing import Any, Dict, Literal, Optional, Union
from backend.sdk import BaseModel, SchemaField
from backend.sdk import BaseModel, MediaFileType, SchemaField
class TextSettings(BaseModel):
max_characters: int = SchemaField(
default=1000,
class LivecrawlTypes(str, Enum):
NEVER = "never"
FALLBACK = "fallback"
ALWAYS = "always"
PREFERRED = "preferred"
class TextEnabled(BaseModel):
discriminator: Literal["enabled"] = "enabled"
class TextDisabled(BaseModel):
discriminator: Literal["disabled"] = "disabled"
class TextAdvanced(BaseModel):
discriminator: Literal["advanced"] = "advanced"
max_characters: Optional[int] = SchemaField(
default=None,
description="Maximum number of characters to return",
placeholder="1000",
)
include_html_tags: bool = SchemaField(
default=False,
description="Whether to include HTML tags in the text",
description="Include HTML tags in the response, helps LLMs understand text structure",
placeholder="False",
)
class HighlightSettings(BaseModel):
num_sentences: int = SchemaField(
default=3,
default=1,
description="Number of sentences per highlight",
placeholder="3",
placeholder="1",
ge=1,
)
highlights_per_url: int = SchemaField(
default=3,
default=1,
description="Number of highlights per URL",
placeholder="3",
placeholder="1",
ge=1,
)
query: Optional[str] = SchemaField(
default=None,
description="Custom query to direct the LLM's selection of highlights",
placeholder="Key advancements",
)
class SummarySettings(BaseModel):
query: Optional[str] = SchemaField(
default="",
description="Query string for summarization",
placeholder="Enter query",
default=None,
description="Custom query for the LLM-generated summary",
placeholder="Main developments",
)
schema: Optional[dict] = SchemaField( # type: ignore
default=None,
description="JSON schema for structured output from summary",
advanced=True,
)
class ExtrasSettings(BaseModel):
links: int = SchemaField(
default=0,
description="Number of URLs to return from each webpage",
placeholder="1",
ge=0,
)
image_links: int = SchemaField(
default=0,
description="Number of images to return for each result",
placeholder="1",
ge=0,
)
class ContextEnabled(BaseModel):
discriminator: Literal["enabled"] = "enabled"
class ContextDisabled(BaseModel):
discriminator: Literal["disabled"] = "disabled"
class ContextAdvanced(BaseModel):
discriminator: Literal["advanced"] = "advanced"
max_characters: Optional[int] = SchemaField(
default=None,
description="Maximum character limit for context string",
placeholder="10000",
)
class ContentSettings(BaseModel):
text: TextSettings = SchemaField(
default=TextSettings(),
text: Optional[Union[bool, TextEnabled, TextDisabled, TextAdvanced]] = SchemaField(
default=None,
description="Text content retrieval. Boolean for simple enable/disable or object for advanced settings",
)
highlights: HighlightSettings = SchemaField(
default=HighlightSettings(),
highlights: Optional[HighlightSettings] = SchemaField(
default=None,
description="Text snippets most relevant from each page",
)
summary: SummarySettings = SchemaField(
default=SummarySettings(),
summary: Optional[SummarySettings] = SchemaField(
default=None,
description="LLM-generated summary of the webpage",
)
livecrawl: Optional[LivecrawlTypes] = SchemaField(
default=None,
description="Livecrawling options: never, fallback, always, preferred",
advanced=True,
)
livecrawl_timeout: Optional[int] = SchemaField(
default=None,
description="Timeout for livecrawling in milliseconds",
placeholder="10000",
advanced=True,
)
subpages: Optional[int] = SchemaField(
default=None,
description="Number of subpages to crawl",
placeholder="0",
ge=0,
advanced=True,
)
subpage_target: Optional[Union[str, list[str]]] = SchemaField(
default=None,
description="Keyword(s) to find specific subpages of search results",
advanced=True,
)
extras: Optional[ExtrasSettings] = SchemaField(
default=None,
description="Extra parameters for additional content",
advanced=True,
)
context: Optional[Union[bool, ContextEnabled, ContextDisabled, ContextAdvanced]] = (
SchemaField(
default=None,
description="Format search results into a context string for LLMs",
advanced=True,
)
)
@@ -127,3 +226,225 @@ class WebsetEnrichmentConfig(BaseModel):
default=None,
description="Options for the enrichment",
)
# Shared result models
class ExaSearchExtras(BaseModel):
links: list[str] = SchemaField(
default_factory=list, description="Array of links from the search result"
)
imageLinks: list[str] = SchemaField(
default_factory=list, description="Array of image links from the search result"
)
class ExaSearchResults(BaseModel):
title: str | None = None
url: str | None = None
publishedDate: str | None = None
author: str | None = None
id: str
image: MediaFileType | None = None
favicon: MediaFileType | None = None
text: str | None = None
highlights: list[str] = SchemaField(default_factory=list)
highlightScores: list[float] = SchemaField(default_factory=list)
summary: str | None = None
subpages: list[dict] = SchemaField(default_factory=list)
extras: ExaSearchExtras | None = None
@classmethod
def from_sdk(cls, sdk_result) -> "ExaSearchResults":
"""Convert SDK Result (dataclass) to our Pydantic model."""
return cls(
id=getattr(sdk_result, "id", ""),
url=getattr(sdk_result, "url", None),
title=getattr(sdk_result, "title", None),
author=getattr(sdk_result, "author", None),
publishedDate=getattr(sdk_result, "published_date", None),
text=getattr(sdk_result, "text", None),
highlights=getattr(sdk_result, "highlights", None) or [],
highlightScores=getattr(sdk_result, "highlight_scores", None) or [],
summary=getattr(sdk_result, "summary", None),
subpages=getattr(sdk_result, "subpages", None) or [],
image=getattr(sdk_result, "image", None),
favicon=getattr(sdk_result, "favicon", None),
extras=getattr(sdk_result, "extras", None),
)
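# Sketch of the mapping above: from_sdk translates the SDK's snake_case
# attributes into the camelCase fields of this model, e.g.
#   sdk_result.published_date   -> publishedDate
#   sdk_result.highlight_scores -> highlightScores
# with getattr defaults so partially populated SDK results still convert.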
# Cost tracking models
class CostBreakdown(BaseModel):
keywordSearch: float = SchemaField(default=0.0)
neuralSearch: float = SchemaField(default=0.0)
contentText: float = SchemaField(default=0.0)
contentHighlight: float = SchemaField(default=0.0)
contentSummary: float = SchemaField(default=0.0)
class CostBreakdownItem(BaseModel):
search: float = SchemaField(default=0.0)
contents: float = SchemaField(default=0.0)
breakdown: CostBreakdown = SchemaField(default_factory=CostBreakdown)
class PerRequestPrices(BaseModel):
neuralSearch_1_25_results: float = SchemaField(default=0.005)
neuralSearch_26_100_results: float = SchemaField(default=0.025)
neuralSearch_100_plus_results: float = SchemaField(default=1.0)
keywordSearch_1_100_results: float = SchemaField(default=0.0025)
keywordSearch_100_plus_results: float = SchemaField(default=3.0)
class PerPagePrices(BaseModel):
contentText: float = SchemaField(default=0.001)
contentHighlight: float = SchemaField(default=0.001)
contentSummary: float = SchemaField(default=0.001)
class CostDollars(BaseModel):
total: float = SchemaField(description="Total dollar cost for your request")
breakDown: list[CostBreakdownItem] = SchemaField(
default_factory=list, description="Breakdown of costs by operation type"
)
perRequestPrices: PerRequestPrices = SchemaField(
default_factory=PerRequestPrices,
description="Standard price per request for different operations",
)
perPagePrices: PerPagePrices = SchemaField(
default_factory=PerPagePrices,
description="Standard price per page for different content operations",
)
# Helper functions for payload processing
def process_text_field(
text: Union[bool, TextEnabled, TextDisabled, TextAdvanced, None]
) -> Optional[Union[bool, Dict[str, Any]]]:
"""Process text field for API payload."""
if text is None:
return None
# Handle backward compatibility with boolean
if isinstance(text, bool):
return text
elif isinstance(text, TextDisabled):
return False
elif isinstance(text, TextEnabled):
return True
elif isinstance(text, TextAdvanced):
text_dict = {}
if text.max_characters:
text_dict["maxCharacters"] = text.max_characters
if text.include_html_tags:
text_dict["includeHtmlTags"] = text.include_html_tags
return text_dict if text_dict else True
return None
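# Illustrative outcomes for process_text_field (a sketch, not exhaustive):
#   process_text_field(True)           -> True
#   process_text_field(TextDisabled()) -> False
#   process_text_field(TextAdvanced(max_characters=500, include_html_tags=True))
#       -> {"maxCharacters": 500, "includeHtmlTags": True}
#   process_text_field(TextAdvanced()) -> True   (empty dict falls back to enabled)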
def process_contents_settings(contents: Optional[ContentSettings]) -> Dict[str, Any]:
"""Process ContentSettings into API payload format."""
if not contents:
return {}
content_settings = {}
# Handle text field (can be boolean or object)
text_value = process_text_field(contents.text)
if text_value is not None:
content_settings["text"] = text_value
# Handle highlights
if contents.highlights:
highlights_dict: Dict[str, Any] = {
"numSentences": contents.highlights.num_sentences,
"highlightsPerUrl": contents.highlights.highlights_per_url,
}
if contents.highlights.query:
highlights_dict["query"] = contents.highlights.query
content_settings["highlights"] = highlights_dict
if contents.summary:
summary_dict = {}
if contents.summary.query:
summary_dict["query"] = contents.summary.query
if contents.summary.schema:
summary_dict["schema"] = contents.summary.schema
content_settings["summary"] = summary_dict
if contents.livecrawl:
content_settings["livecrawl"] = contents.livecrawl.value
if contents.livecrawl_timeout is not None:
content_settings["livecrawlTimeout"] = contents.livecrawl_timeout
if contents.subpages is not None:
content_settings["subpages"] = contents.subpages
if contents.subpage_target:
content_settings["subpageTarget"] = contents.subpage_target
if contents.extras:
extras_dict = {}
if contents.extras.links:
extras_dict["links"] = contents.extras.links
if contents.extras.image_links:
extras_dict["imageLinks"] = contents.extras.image_links
content_settings["extras"] = extras_dict
context_value = process_context_field(contents.context)
if context_value is not None:
content_settings["context"] = context_value
return content_settings
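# Example (a sketch): text enabled plus a summary query
#   process_contents_settings(
#       ContentSettings(text=True, summary=SummarySettings(query="key points"))
#   )
# yields {"text": True, "summary": {"query": "key points"}}; only fields the
# caller actually set make it into the API payload.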
def process_context_field(
context: Union[bool, dict, ContextEnabled, ContextDisabled, ContextAdvanced, None]
) -> Optional[Union[bool, Dict[str, int]]]:
"""Process context field for API payload."""
if context is None:
return None
# Handle backward compatibility with boolean
if isinstance(context, bool):
return context if context else None
elif isinstance(context, dict) and "maxCharacters" in context:
return {"maxCharacters": context["maxCharacters"]}
elif isinstance(context, ContextDisabled):
return None # Don't send context field at all when disabled
elif isinstance(context, ContextEnabled):
return True
elif isinstance(context, ContextAdvanced):
if context.max_characters:
return {"maxCharacters": context.max_characters}
return True
return None
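# Illustrative outcomes (sketch): True -> True; ContextDisabled() -> None (the
# field is omitted entirely); ContextAdvanced(max_characters=8000) ->
# {"maxCharacters": 8000}; a raw {"maxCharacters": 8000} dict passes through
# unchanged for backward compatibility.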
def format_date_fields(
input_data: Any, date_field_mapping: Dict[str, str]
) -> Dict[str, str]:
"""Format datetime fields for API payload."""
formatted_dates = {}
for input_field, api_field in date_field_mapping.items():
value = getattr(input_data, input_field, None)
if value:
formatted_dates[api_field] = value.strftime("%Y-%m-%dT%H:%M:%S.000Z")
return formatted_dates
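# Worked example (sketch): with input_data.start_published_date set to
# datetime(2024, 1, 2), calling format_date_fields(input_data,
# {"start_published_date": "startPublishedDate"}) returns
# {"startPublishedDate": "2024-01-02T00:00:00.000Z"}.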
def add_optional_fields(
input_data: Any,
field_mapping: Dict[str, str],
payload: Dict[str, Any],
process_enums: bool = False,
) -> None:
"""Add optional fields to payload if they have values."""
for input_field, api_field in field_mapping.items():
value = getattr(input_data, input_field, None)
if value: # Only add non-empty values
if process_enums and hasattr(value, "value"):
payload[api_field] = value.value
else:
payload[api_field] = value
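# Usage sketch: add_optional_fields(input_data, {"category": "category"},
# payload, process_enums=True) copies input_data.category.value into
# payload["category"] only when the field is set; empty values stay out of
# the request.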


@@ -1,247 +0,0 @@
from datetime import datetime
from enum import Enum
from typing import Any, Dict, List, Optional
from pydantic import BaseModel, Field
# Enum definitions based on available options
class WebsetStatus(str, Enum):
IDLE = "idle"
PENDING = "pending"
RUNNING = "running"
PAUSED = "paused"
class WebsetSearchStatus(str, Enum):
CREATED = "created"
# Add more if known; based on the example, it's "created"
class ImportStatus(str, Enum):
PENDING = "pending"
# Add more if known
class ImportFormat(str, Enum):
CSV = "csv"
# Add more if known
class EnrichmentStatus(str, Enum):
PENDING = "pending"
# Add more if known
class EnrichmentFormat(str, Enum):
TEXT = "text"
# Add more if known
class MonitorStatus(str, Enum):
ENABLED = "enabled"
# Add more if known
class MonitorBehaviorType(str, Enum):
SEARCH = "search"
# Add more if known
class MonitorRunStatus(str, Enum):
CREATED = "created"
# Add more if known
class CanceledReason(str, Enum):
WEBSET_DELETED = "webset_deleted"
# Add more if known
class FailedReason(str, Enum):
INVALID_FORMAT = "invalid_format"
# Add more if known
class Confidence(str, Enum):
HIGH = "high"
# Add more if known
# Nested models
class Entity(BaseModel):
type: str
class Criterion(BaseModel):
description: str
successRate: Optional[int] = None
class ExcludeItem(BaseModel):
source: str = Field(default="import")
id: str
class Relationship(BaseModel):
definition: str
limit: Optional[float] = None
class ScopeItem(BaseModel):
source: str = Field(default="import")
id: str
relationship: Optional[Relationship] = None
class Progress(BaseModel):
found: int
analyzed: int
completion: int
timeLeft: int
class Bounds(BaseModel):
min: int
max: int
class Expected(BaseModel):
total: int
confidence: str = Field(default="high") # Use str or Confidence enum
bounds: Bounds
class Recall(BaseModel):
expected: Expected
reasoning: str
class WebsetSearch(BaseModel):
id: str
object: str = Field(default="webset_search")
status: str = Field(default="created") # Or use WebsetSearchStatus
websetId: str
query: str
entity: Entity
criteria: List[Criterion]
count: int
behavior: str = Field(default="override")
exclude: List[ExcludeItem]
scope: List[ScopeItem]
progress: Progress
recall: Recall
metadata: Dict[str, Any] = Field(default_factory=dict)
canceledAt: Optional[datetime] = None
canceledReason: Optional[str] = Field(default=None) # Or use CanceledReason
createdAt: datetime
updatedAt: datetime
class ImportEntity(BaseModel):
type: str
class Import(BaseModel):
id: str
object: str = Field(default="import")
status: str = Field(default="pending") # Or use ImportStatus
format: str = Field(default="csv") # Or use ImportFormat
entity: ImportEntity
title: str
count: int
metadata: Dict[str, Any] = Field(default_factory=dict)
failedReason: Optional[str] = Field(default=None) # Or use FailedReason
failedAt: Optional[datetime] = None
failedMessage: Optional[str] = None
createdAt: datetime
updatedAt: datetime
class Option(BaseModel):
label: str
class WebsetEnrichment(BaseModel):
id: str
object: str = Field(default="webset_enrichment")
status: str = Field(default="pending") # Or use EnrichmentStatus
websetId: str
title: str
description: str
format: str = Field(default="text") # Or use EnrichmentFormat
options: List[Option]
instructions: str
metadata: Dict[str, Any] = Field(default_factory=dict)
createdAt: datetime
updatedAt: datetime
class Cadence(BaseModel):
cron: str
timezone: str = Field(default="Etc/UTC")
class BehaviorConfig(BaseModel):
query: Optional[str] = None
criteria: Optional[List[Criterion]] = None
entity: Optional[Entity] = None
count: Optional[int] = None
behavior: Optional[str] = Field(default=None)
class Behavior(BaseModel):
type: str = Field(default="search") # Or use MonitorBehaviorType
config: BehaviorConfig
class MonitorRun(BaseModel):
id: str
object: str = Field(default="monitor_run")
status: str = Field(default="created") # Or use MonitorRunStatus
monitorId: str
type: str = Field(default="search")
completedAt: Optional[datetime] = None
failedAt: Optional[datetime] = None
failedReason: Optional[str] = None
canceledAt: Optional[datetime] = None
createdAt: datetime
updatedAt: datetime
class Monitor(BaseModel):
id: str
object: str = Field(default="monitor")
status: str = Field(default="enabled") # Or use MonitorStatus
websetId: str
cadence: Cadence
behavior: Behavior
lastRun: Optional[MonitorRun] = None
nextRunAt: Optional[datetime] = None
metadata: Dict[str, Any] = Field(default_factory=dict)
createdAt: datetime
updatedAt: datetime
class Webset(BaseModel):
id: str
object: str = Field(default="webset")
status: WebsetStatus
externalId: Optional[str] = None
title: Optional[str] = None
searches: List[WebsetSearch]
imports: List[Import]
enrichments: List[WebsetEnrichment]
monitors: List[Monitor]
streams: List[Any]
createdAt: datetime
updatedAt: datetime
metadata: Dict[str, Any] = Field(default_factory=dict)
class ListWebsets(BaseModel):
data: List[Webset]
hasMore: bool
nextCursor: Optional[str] = None


@@ -0,0 +1,518 @@
"""
Exa Research Task Blocks
Provides asynchronous research capabilities that explore the web, gather sources,
synthesize findings, and return structured results with citations.
"""
import asyncio
import time
from enum import Enum
from typing import Any, Dict, List, Optional
from pydantic import BaseModel
from backend.sdk import (
APIKeyCredentials,
Block,
BlockCategory,
BlockOutput,
BlockSchemaInput,
BlockSchemaOutput,
CredentialsMetaInput,
Requests,
SchemaField,
)
from ._config import exa
class ResearchModel(str, Enum):
"""Available research models."""
FAST = "exa-research-fast"
STANDARD = "exa-research"
PRO = "exa-research-pro"
class ResearchStatus(str, Enum):
"""Research task status."""
PENDING = "pending"
RUNNING = "running"
COMPLETED = "completed"
CANCELED = "canceled"
FAILED = "failed"
class ResearchCostModel(BaseModel):
"""Cost breakdown for a research request."""
total: float
num_searches: int
num_pages: int
reasoning_tokens: int
@classmethod
def from_api(cls, data: dict) -> "ResearchCostModel":
"""Convert API response, rounding fractional counts to integers."""
return cls(
total=data.get("total", 0.0),
num_searches=int(round(data.get("numSearches", 0))),
num_pages=int(round(data.get("numPages", 0))),
reasoning_tokens=int(round(data.get("reasoningTokens", 0))),
)
class ResearchOutputModel(BaseModel):
"""Research output with content and optional structured data."""
content: str
parsed: Optional[Dict[str, Any]] = None
class ResearchTaskModel(BaseModel):
"""Stable output model for research tasks."""
research_id: str
created_at: int
model: str
instructions: str
status: str
output_schema: Optional[Dict[str, Any]] = None
output: Optional[ResearchOutputModel] = None
cost_dollars: Optional[ResearchCostModel] = None
finished_at: Optional[int] = None
error: Optional[str] = None
@classmethod
def from_api(cls, data: dict) -> "ResearchTaskModel":
"""Convert API response to our stable model."""
output_data = data.get("output")
output = None
if output_data:
output = ResearchOutputModel(
content=output_data.get("content", ""),
parsed=output_data.get("parsed"),
)
cost_data = data.get("costDollars")
cost = None
if cost_data:
cost = ResearchCostModel.from_api(cost_data)
return cls(
research_id=data.get("researchId", ""),
created_at=data.get("createdAt", 0),
model=data.get("model", "exa-research"),
instructions=data.get("instructions", ""),
status=data.get("status", "pending"),
output_schema=data.get("outputSchema"),
output=output,
cost_dollars=cost,
finished_at=data.get("finishedAt"),
error=data.get("error"),
)
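# Sketch of the conversion above: an API payload like
#   {"researchId": "r_123", "createdAt": 1700000000000,
#    "model": "exa-research", "status": "completed",
#    "output": {"content": "...", "parsed": None}}
# becomes ResearchTaskModel(research_id="r_123", created_at=1700000000000, ...)
# with the nested output wrapped in ResearchOutputModel.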
class ExaCreateResearchBlock(Block):
"""Create an asynchronous research task that explores the web and synthesizes findings."""
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput = exa.credentials_field(
description="The Exa integration requires an API Key."
)
instructions: str = SchemaField(
description="Research instructions - clearly define what information to find, how to conduct research, and desired output format.",
placeholder="Research the top 5 AI coding assistants, their features, pricing, and user reviews",
)
model: ResearchModel = SchemaField(
default=ResearchModel.STANDARD,
description="Research model: 'fast' for quick results, 'standard' for balanced quality, 'pro' for thorough analysis",
)
output_schema: Optional[dict] = SchemaField(
default=None,
description="JSON Schema to enforce structured output. When provided, results are validated and returned as parsed JSON.",
advanced=True,
)
wait_for_completion: bool = SchemaField(
default=True,
description="Wait for research to complete before returning. Ensures you get results immediately.",
)
polling_timeout: int = SchemaField(
default=600,
description="Maximum time to wait for completion in seconds (only if wait_for_completion is True)",
advanced=True,
ge=1,
le=3600,
)
class Output(BlockSchemaOutput):
research_id: str = SchemaField(
description="Unique identifier for tracking this research request"
)
status: str = SchemaField(description="Final status of the research")
model: str = SchemaField(description="The research model used")
instructions: str = SchemaField(
description="The research instructions provided"
)
created_at: int = SchemaField(
description="When the research was created (Unix timestamp in ms)"
)
output_content: Optional[str] = SchemaField(
description="Research output as text (only if wait_for_completion was True and completed)"
)
output_parsed: Optional[dict] = SchemaField(
description="Structured JSON output (only if wait_for_completion and outputSchema were provided)"
)
cost_total: Optional[float] = SchemaField(
description="Total cost in USD (only if wait_for_completion was True and completed)"
)
elapsed_time: Optional[float] = SchemaField(
description="Time taken to complete in seconds (only if wait_for_completion was True)"
)
def __init__(self):
super().__init__(
id="a1f2e3d4-c5b6-4a78-9012-3456789abcde",
description="Create research task with optional waiting - explores web and synthesizes findings with citations",
categories={BlockCategory.SEARCH, BlockCategory.AI},
input_schema=ExaCreateResearchBlock.Input,
output_schema=ExaCreateResearchBlock.Output,
)
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
url = "https://api.exa.ai/research/v1"
headers = {
"Content-Type": "application/json",
"x-api-key": credentials.api_key.get_secret_value(),
}
payload: Dict[str, Any] = {
"model": input_data.model.value,
"instructions": input_data.instructions,
}
if input_data.output_schema:
payload["outputSchema"] = input_data.output_schema
response = await Requests().post(url, headers=headers, json=payload)
data = response.json()
research_id = data.get("researchId", "")
if input_data.wait_for_completion:
start_time = time.time()
get_url = f"https://api.exa.ai/research/v1/{research_id}"
get_headers = {"x-api-key": credentials.api_key.get_secret_value()}
check_interval = 10
while time.time() - start_time < input_data.polling_timeout:
poll_response = await Requests().get(url=get_url, headers=get_headers)
poll_data = poll_response.json()
status = poll_data.get("status", "")
if status in ["completed", "failed", "canceled"]:
elapsed = time.time() - start_time
research = ResearchTaskModel.from_api(poll_data)
yield "research_id", research.research_id
yield "status", research.status
yield "model", research.model
yield "instructions", research.instructions
yield "created_at", research.created_at
yield "elapsed_time", elapsed
if research.output:
yield "output_content", research.output.content
yield "output_parsed", research.output.parsed
if research.cost_dollars:
yield "cost_total", research.cost_dollars.total
return
await asyncio.sleep(check_interval)
raise ValueError(
f"Research did not complete within {input_data.polling_timeout} seconds"
)
else:
yield "research_id", research_id
yield "status", data.get("status", "pending")
yield "model", data.get("model", input_data.model.value)
yield "instructions", data.get("instructions", input_data.instructions)
yield "created_at", data.get("createdAt", 0)
class ExaGetResearchBlock(Block):
"""Get the status and results of a research task."""
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput = exa.credentials_field(
description="The Exa integration requires an API Key."
)
research_id: str = SchemaField(
description="The ID of the research task to retrieve",
placeholder="01jszdfs0052sg4jc552sg4jc5",
)
include_events: bool = SchemaField(
default=False,
description="Include detailed event log of research operations",
advanced=True,
)
class Output(BlockSchemaOutput):
research_id: str = SchemaField(description="The research task identifier")
status: str = SchemaField(
description="Current status: pending, running, completed, canceled, or failed"
)
instructions: str = SchemaField(
description="The original research instructions"
)
model: str = SchemaField(description="The research model used")
created_at: int = SchemaField(
description="When research was created (Unix timestamp in ms)"
)
finished_at: Optional[int] = SchemaField(
description="When research finished (Unix timestamp in ms, if completed/canceled/failed)"
)
output_content: Optional[str] = SchemaField(
description="Research output as text (if completed)"
)
output_parsed: Optional[dict] = SchemaField(
description="Structured JSON output matching outputSchema (if provided and completed)"
)
cost_total: Optional[float] = SchemaField(
description="Total cost in USD (if completed)"
)
cost_searches: Optional[int] = SchemaField(
description="Number of searches performed (if completed)"
)
cost_pages: Optional[int] = SchemaField(
description="Number of pages crawled (if completed)"
)
cost_reasoning_tokens: Optional[int] = SchemaField(
description="AI tokens used for reasoning (if completed)"
)
error_message: Optional[str] = SchemaField(
description="Error message if research failed"
)
events: Optional[List[dict]] = SchemaField(
description="Detailed event log (if include_events was True)"
)
def __init__(self):
super().__init__(
id="b2e3f4a5-6789-4bcd-9012-3456789abcde",
description="Get status and results of a research task",
categories={BlockCategory.SEARCH},
input_schema=ExaGetResearchBlock.Input,
output_schema=ExaGetResearchBlock.Output,
)
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
url = f"https://api.exa.ai/research/v1/{input_data.research_id}"
headers = {
"x-api-key": credentials.api_key.get_secret_value(),
}
params = {}
if input_data.include_events:
params["events"] = "true"
response = await Requests().get(url, headers=headers, params=params)
data = response.json()
research = ResearchTaskModel.from_api(data)
yield "research_id", research.research_id
yield "status", research.status
yield "instructions", research.instructions
yield "model", research.model
yield "created_at", research.created_at
yield "finished_at", research.finished_at
if research.output:
yield "output_content", research.output.content
yield "output_parsed", research.output.parsed
if research.cost_dollars:
yield "cost_total", research.cost_dollars.total
yield "cost_searches", research.cost_dollars.num_searches
yield "cost_pages", research.cost_dollars.num_pages
yield "cost_reasoning_tokens", research.cost_dollars.reasoning_tokens
yield "error_message", research.error
if input_data.include_events:
yield "events", data.get("events", [])
class ExaWaitForResearchBlock(Block):
"""Wait for a research task to complete with progress tracking."""
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput = exa.credentials_field(
description="The Exa integration requires an API Key."
)
research_id: str = SchemaField(
description="The ID of the research task to wait for",
placeholder="01jszdfs0052sg4jc552sg4jc5",
)
timeout: int = SchemaField(
default=600,
description="Maximum time to wait in seconds",
ge=1,
le=3600,
)
check_interval: int = SchemaField(
default=10,
description="Seconds between status checks",
advanced=True,
ge=1,
le=60,
)
class Output(BlockSchemaOutput):
research_id: str = SchemaField(description="The research task identifier")
final_status: str = SchemaField(description="Final status when polling stopped")
output_content: Optional[str] = SchemaField(
description="Research output as text (if completed)"
)
output_parsed: Optional[dict] = SchemaField(
description="Structured JSON output (if outputSchema was provided and completed)"
)
cost_total: Optional[float] = SchemaField(description="Total cost in USD")
elapsed_time: float = SchemaField(description="Total time waited in seconds")
timed_out: bool = SchemaField(
description="Whether polling timed out before completion"
)
def __init__(self):
super().__init__(
id="c3d4e5f6-7890-4abc-9012-3456789abcde",
description="Wait for a research task to complete with configurable timeout",
categories={BlockCategory.SEARCH},
input_schema=ExaWaitForResearchBlock.Input,
output_schema=ExaWaitForResearchBlock.Output,
)
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
start_time = time.time()
url = f"https://api.exa.ai/research/v1/{input_data.research_id}"
headers = {
"x-api-key": credentials.api_key.get_secret_value(),
}
while time.time() - start_time < input_data.timeout:
response = await Requests().get(url, headers=headers)
data = response.json()
status = data.get("status", "")
if status in ["completed", "failed", "canceled"]:
elapsed = time.time() - start_time
research = ResearchTaskModel.from_api(data)
yield "research_id", research.research_id
yield "final_status", research.status
yield "elapsed_time", elapsed
yield "timed_out", False
if research.output:
yield "output_content", research.output.content
yield "output_parsed", research.output.parsed
if research.cost_dollars:
yield "cost_total", research.cost_dollars.total
return
await asyncio.sleep(input_data.check_interval)
elapsed = time.time() - start_time
response = await Requests().get(url, headers=headers)
data = response.json()
yield "research_id", input_data.research_id
yield "final_status", data.get("status", "unknown")
yield "elapsed_time", elapsed
yield "timed_out", True
class ExaListResearchBlock(Block):
"""List all research tasks with pagination support."""
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput = exa.credentials_field(
description="The Exa integration requires an API Key."
)
cursor: Optional[str] = SchemaField(
default=None,
description="Cursor for pagination through results",
advanced=True,
)
limit: int = SchemaField(
default=10,
description="Number of research tasks to return (1-50)",
ge=1,
le=50,
advanced=True,
)
class Output(BlockSchemaOutput):
research_tasks: List[ResearchTaskModel] = SchemaField(
description="List of research tasks ordered by creation time (newest first)"
)
research_task: ResearchTaskModel = SchemaField(
description="Individual research task (yielded for each task)"
)
has_more: bool = SchemaField(
description="Whether there are more tasks to paginate through"
)
next_cursor: Optional[str] = SchemaField(
description="Cursor for the next page of results"
)
def __init__(self):
super().__init__(
id="d4e5f6a7-8901-4bcd-9012-3456789abcde",
description="List all research tasks with pagination support",
categories={BlockCategory.SEARCH},
input_schema=ExaListResearchBlock.Input,
output_schema=ExaListResearchBlock.Output,
)
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
url = "https://api.exa.ai/research/v1"
headers = {
"x-api-key": credentials.api_key.get_secret_value(),
}
params: Dict[str, Any] = {
"limit": input_data.limit,
}
if input_data.cursor:
params["cursor"] = input_data.cursor
response = await Requests().get(url, headers=headers, params=params)
data = response.json()
tasks = [ResearchTaskModel.from_api(task) for task in data.get("data", [])]
yield "research_tasks", tasks
for task in tasks:
yield "research_task", task
yield "has_more", data.get("hasMore", False)
yield "next_cursor", data.get("nextCursor")

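A hedged sketch of draining the pagination that ExaListResearchBlock exposes via has_more/next_cursor, assuming the same GET https://api.exa.ai/research/v1 endpoint; the api_key parameter stands in for real credentials:

from typing import Any

from backend.sdk import Requests

async def list_all_research(api_key: str) -> list[dict[str, Any]]:
    headers = {"x-api-key": api_key}
    tasks: list[dict[str, Any]] = []
    cursor = None
    while True:
        params: dict[str, Any] = {"limit": 50}  # 50 is the block's documented maximum
        if cursor:
            params["cursor"] = cursor
        response = await Requests().get(
            "https://api.exa.ai/research/v1", headers=headers, params=params
        )
        data = response.json()
        tasks.extend(data.get("data", []))
        if not data.get("hasMore"):
            return tasks
        cursor = data.get("nextCursor")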

@@ -1,32 +1,66 @@
from datetime import datetime
from enum import Enum
from typing import Optional
from exa_py import AsyncExa
from backend.sdk import (
APIKeyCredentials,
Block,
BlockCategory,
BlockOutput,
BlockSchema,
BlockSchemaInput,
BlockSchemaOutput,
CredentialsMetaInput,
Requests,
SchemaField,
)
from ._config import exa
from .helpers import ContentSettings
from .helpers import (
ContentSettings,
CostDollars,
ExaSearchResults,
process_contents_settings,
)
class ExaSearchTypes(Enum):
KEYWORD = "keyword"
NEURAL = "neural"
FAST = "fast"
AUTO = "auto"
class ExaSearchCategories(Enum):
COMPANY = "company"
RESEARCH_PAPER = "research paper"
NEWS = "news"
PDF = "pdf"
GITHUB = "github"
TWEET = "tweet"
PERSONAL_SITE = "personal site"
LINKEDIN_PROFILE = "linkedin profile"
FINANCIAL_REPORT = "financial report"
class ExaSearchBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput = exa.credentials_field(
description="The Exa integration requires an API Key."
)
query: str = SchemaField(description="The search query")
use_auto_prompt: bool = SchemaField(
description="Whether to use autoprompt", default=True, advanced=True
type: ExaSearchTypes = SchemaField(
description="Type of search", default=ExaSearchTypes.AUTO, advanced=True
)
type: str = SchemaField(description="Type of search", default="", advanced=True)
category: str = SchemaField(
description="Category to search within", default="", advanced=True
category: ExaSearchCategories | None = SchemaField(
description="Category to search within: company, research paper, news, pdf, github, tweet, personal site, linkedin profile, financial report",
default=None,
advanced=True,
)
user_location: str | None = SchemaField(
description="The two-letter ISO country code of the user (e.g., 'US')",
default=None,
advanced=True,
)
number_of_results: int = SchemaField(
description="Number of results to return", default=10, advanced=True
@@ -39,17 +73,17 @@ class ExaSearchBlock(Block):
default_factory=list,
advanced=True,
)
start_crawl_date: datetime = SchemaField(
description="Start date for crawled content"
start_crawl_date: datetime | None = SchemaField(
description="Start date for crawled content", advanced=True, default=None
)
end_crawl_date: datetime = SchemaField(
description="End date for crawled content"
end_crawl_date: datetime | None = SchemaField(
description="End date for crawled content", advanced=True, default=None
)
start_published_date: datetime = SchemaField(
description="Start date for published content"
start_published_date: datetime | None = SchemaField(
description="Start date for published content", advanced=True, default=None
)
end_published_date: datetime = SchemaField(
description="End date for published content"
end_published_date: datetime | None = SchemaField(
description="End date for published content", advanced=True, default=None
)
include_text: list[str] = SchemaField(
description="Text patterns to include", default_factory=list, advanced=True
@@ -62,14 +96,30 @@ class ExaSearchBlock(Block):
default=ContentSettings(),
advanced=True,
)
moderation: bool = SchemaField(
description="Enable content moderation to filter unsafe content from search results",
default=False,
advanced=True,
)
class Output(BlockSchema):
results: list = SchemaField(
description="List of search results", default_factory=list
class Output(BlockSchemaOutput):
results: list[ExaSearchResults] = SchemaField(
description="List of search results"
)
error: str = SchemaField(
description="Error message if the request failed",
result: ExaSearchResults = SchemaField(description="Single search result")
context: str = SchemaField(
description="A formatted string of the search results ready for LLMs."
)
search_type: str = SchemaField(
description="For auto searches, indicates which search type was selected."
)
resolved_search_type: str = SchemaField(
description="The search type that was actually used for this request (neural or keyword)"
)
cost_dollars: Optional[CostDollars] = SchemaField(
description="Cost breakdown for the request"
)
error: str = SchemaField(description="Error message if the request failed")
def __init__(self):
super().__init__(
@@ -83,51 +133,76 @@ class ExaSearchBlock(Block):
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
url = "https://api.exa.ai/search"
headers = {
"Content-Type": "application/json",
"x-api-key": credentials.api_key.get_secret_value(),
}
payload = {
sdk_kwargs = {
"query": input_data.query,
"useAutoprompt": input_data.use_auto_prompt,
"numResults": input_data.number_of_results,
"contents": input_data.contents.model_dump(),
"num_results": input_data.number_of_results,
}
date_field_mapping = {
"start_crawl_date": "startCrawlDate",
"end_crawl_date": "endCrawlDate",
"start_published_date": "startPublishedDate",
"end_published_date": "endPublishedDate",
}
if input_data.type:
sdk_kwargs["type"] = input_data.type.value
# Add dates if they exist
for input_field, api_field in date_field_mapping.items():
value = getattr(input_data, input_field, None)
if value:
payload[api_field] = value.strftime("%Y-%m-%dT%H:%M:%S.000Z")
if input_data.category:
sdk_kwargs["category"] = input_data.category.value
optional_field_mapping = {
"type": "type",
"category": "category",
"include_domains": "includeDomains",
"exclude_domains": "excludeDomains",
"include_text": "includeText",
"exclude_text": "excludeText",
}
if input_data.user_location:
sdk_kwargs["user_location"] = input_data.user_location
# Add other fields
for input_field, api_field in optional_field_mapping.items():
value = getattr(input_data, input_field)
if value: # Only add non-empty values
payload[api_field] = value
# Handle domains
if input_data.include_domains:
sdk_kwargs["include_domains"] = input_data.include_domains
if input_data.exclude_domains:
sdk_kwargs["exclude_domains"] = input_data.exclude_domains
try:
response = await Requests().post(url, headers=headers, json=payload)
data = response.json()
# Extract just the results array from the response
yield "results", data.get("results", [])
except Exception as e:
yield "error", str(e)
# Handle dates
if input_data.start_crawl_date:
sdk_kwargs["start_crawl_date"] = input_data.start_crawl_date.isoformat()
if input_data.end_crawl_date:
sdk_kwargs["end_crawl_date"] = input_data.end_crawl_date.isoformat()
if input_data.start_published_date:
sdk_kwargs["start_published_date"] = (
input_data.start_published_date.isoformat()
)
if input_data.end_published_date:
sdk_kwargs["end_published_date"] = input_data.end_published_date.isoformat()
# Handle text filters
if input_data.include_text:
sdk_kwargs["include_text"] = input_data.include_text
if input_data.exclude_text:
sdk_kwargs["exclude_text"] = input_data.exclude_text
if input_data.moderation:
sdk_kwargs["moderation"] = input_data.moderation
# check if we need to use search_and_contents
content_settings = process_contents_settings(input_data.contents)
aexa = AsyncExa(api_key=credentials.api_key.get_secret_value())
if content_settings:
sdk_kwargs["text"] = content_settings.get("text", False)
if "highlights" in content_settings:
sdk_kwargs["highlights"] = content_settings["highlights"]
if "summary" in content_settings:
sdk_kwargs["summary"] = content_settings["summary"]
response = await aexa.search_and_contents(**sdk_kwargs)
else:
response = await aexa.search(**sdk_kwargs)
converted_results = [
ExaSearchResults.from_sdk(sdk_result)
for sdk_result in response.results or []
]
yield "results", converted_results
for result in converted_results:
yield "result", result
if response.context:
yield "context", response.context
if response.resolved_search_type:
yield "resolved_search_type", response.resolved_search_type
if response.cost_dollars:
yield "cost_dollars", response.cost_dollars

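A minimal sketch of the branch the rewritten run() takes, assuming exa_py's AsyncExa client and an EXA_API_KEY environment variable: when process_contents_settings returns anything, the block calls search_and_contents with the text/highlights/summary kwargs; otherwise it falls back to plain search.

import asyncio
import os

from exa_py import AsyncExa

async def demo_search():
    aexa = AsyncExa(api_key=os.environ["EXA_API_KEY"])
    # Content settings present -> search_and_contents
    with_contents = await aexa.search_and_contents(
        "top AI coding assistants", num_results=5, text=True
    )
    # No content settings -> plain search (links and metadata only)
    links_only = await aexa.search("top AI coding assistants", num_results=5)
    print(len(with_contents.results), len(links_only.results))

asyncio.run(demo_search())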

@@ -1,23 +1,30 @@
from datetime import datetime
from typing import Any
from typing import Optional
from exa_py import AsyncExa
from backend.sdk import (
APIKeyCredentials,
Block,
BlockCategory,
BlockOutput,
BlockSchema,
BlockSchemaInput,
BlockSchemaOutput,
CredentialsMetaInput,
Requests,
SchemaField,
)
from ._config import exa
from .helpers import ContentSettings
from .helpers import (
ContentSettings,
CostDollars,
ExaSearchResults,
process_contents_settings,
)
class ExaFindSimilarBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput = exa.credentials_field(
description="The Exa integration requires an API Key."
)
@@ -28,7 +35,7 @@ class ExaFindSimilarBlock(Block):
description="Number of results to return", default=10, advanced=True
)
include_domains: list[str] = SchemaField(
description="Domains to include in search",
description="List of domains to include in the search. If specified, results will only come from these domains.",
default_factory=list,
advanced=True,
)
@@ -37,17 +44,17 @@ class ExaFindSimilarBlock(Block):
default_factory=list,
advanced=True,
)
start_crawl_date: datetime = SchemaField(
description="Start date for crawled content"
start_crawl_date: Optional[datetime] = SchemaField(
description="Start date for crawled content", advanced=True, default=None
)
end_crawl_date: datetime = SchemaField(
description="End date for crawled content"
end_crawl_date: Optional[datetime] = SchemaField(
description="End date for crawled content", advanced=True, default=None
)
start_published_date: datetime = SchemaField(
description="Start date for published content"
start_published_date: Optional[datetime] = SchemaField(
description="Start date for published content", advanced=True, default=None
)
end_published_date: datetime = SchemaField(
description="End date for published content"
end_published_date: Optional[datetime] = SchemaField(
description="End date for published content", advanced=True, default=None
)
include_text: list[str] = SchemaField(
description="Text patterns to include (max 1 string, up to 5 words)",
@@ -64,15 +71,27 @@ class ExaFindSimilarBlock(Block):
default=ContentSettings(),
advanced=True,
)
moderation: bool = SchemaField(
description="Enable content moderation to filter unsafe content from search results",
default=False,
advanced=True,
)
class Output(BlockSchema):
results: list[Any] = SchemaField(
description="List of similar documents with title, URL, published date, author, and score",
default_factory=list,
class Output(BlockSchemaOutput):
results: list[ExaSearchResults] = SchemaField(
description="List of similar documents with metadata and content"
)
error: str = SchemaField(
description="Error message if the request failed", default=""
result: ExaSearchResults = SchemaField(
description="Single similar document result"
)
context: str = SchemaField(
description="A formatted string of the results ready for LLMs."
)
request_id: str = SchemaField(description="Unique identifier for the request")
cost_dollars: Optional[CostDollars] = SchemaField(
description="Cost breakdown for the request"
)
error: str = SchemaField(description="Error message if the request failed")
def __init__(self):
super().__init__(
@@ -86,47 +105,65 @@ class ExaFindSimilarBlock(Block):
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
url = "https://api.exa.ai/findSimilar"
headers = {
"Content-Type": "application/json",
"x-api-key": credentials.api_key.get_secret_value(),
}
payload = {
sdk_kwargs = {
"url": input_data.url,
"numResults": input_data.number_of_results,
"contents": input_data.contents.model_dump(),
"num_results": input_data.number_of_results,
}
optional_field_mapping = {
"include_domains": "includeDomains",
"exclude_domains": "excludeDomains",
"include_text": "includeText",
"exclude_text": "excludeText",
}
# Handle domains
if input_data.include_domains:
sdk_kwargs["include_domains"] = input_data.include_domains
if input_data.exclude_domains:
sdk_kwargs["exclude_domains"] = input_data.exclude_domains
# Add optional fields if they have values
for input_field, api_field in optional_field_mapping.items():
value = getattr(input_data, input_field)
if value: # Only add non-empty values
payload[api_field] = value
# Handle dates
if input_data.start_crawl_date:
sdk_kwargs["start_crawl_date"] = input_data.start_crawl_date.isoformat()
if input_data.end_crawl_date:
sdk_kwargs["end_crawl_date"] = input_data.end_crawl_date.isoformat()
if input_data.start_published_date:
sdk_kwargs["start_published_date"] = (
input_data.start_published_date.isoformat()
)
if input_data.end_published_date:
sdk_kwargs["end_published_date"] = input_data.end_published_date.isoformat()
date_field_mapping = {
"start_crawl_date": "startCrawlDate",
"end_crawl_date": "endCrawlDate",
"start_published_date": "startPublishedDate",
"end_published_date": "endPublishedDate",
}
# Handle text filters
if input_data.include_text:
sdk_kwargs["include_text"] = input_data.include_text
if input_data.exclude_text:
sdk_kwargs["exclude_text"] = input_data.exclude_text
# Add dates if they exist
for input_field, api_field in date_field_mapping.items():
value = getattr(input_data, input_field, None)
if value:
payload[api_field] = value.strftime("%Y-%m-%dT%H:%M:%S.000Z")
if input_data.moderation:
sdk_kwargs["moderation"] = input_data.moderation
try:
response = await Requests().post(url, headers=headers, json=payload)
data = response.json()
yield "results", data.get("results", [])
except Exception as e:
yield "error", str(e)
# check if we need to use find_similar_and_contents
content_settings = process_contents_settings(input_data.contents)
aexa = AsyncExa(api_key=credentials.api_key.get_secret_value())
if content_settings:
# Use find_similar_and_contents when contents are requested
sdk_kwargs["text"] = content_settings.get("text", False)
if "highlights" in content_settings:
sdk_kwargs["highlights"] = content_settings["highlights"]
if "summary" in content_settings:
sdk_kwargs["summary"] = content_settings["summary"]
response = await aexa.find_similar_and_contents(**sdk_kwargs)
else:
response = await aexa.find_similar(**sdk_kwargs)
converted_results = [
ExaSearchResults.from_sdk(sdk_result)
for sdk_result in response.results or []
]
yield "results", converted_results
for result in converted_results:
yield "result", result
if response.context:
yield "context", response.context
if response.cost_dollars:
yield "cost_dollars", response.cost_dollars

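The same pattern applies to the find-similar path; a short sketch under the same assumptions (exa_py installed, EXA_API_KEY set):

import asyncio
import os

from exa_py import AsyncExa

async def demo_similar():
    aexa = AsyncExa(api_key=os.environ["EXA_API_KEY"])
    response = await aexa.find_similar_and_contents(
        url="https://example.com", num_results=5, text=True
    )
    for result in response.results or []:
        print(result.url, result.title)

asyncio.run(demo_similar())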

@@ -9,7 +9,8 @@ from backend.sdk import (
Block,
BlockCategory,
BlockOutput,
BlockSchema,
BlockSchemaInput,
BlockSchemaOutput,
BlockType,
BlockWebhookConfig,
CredentialsMetaInput,
@@ -84,7 +85,7 @@ class ExaWebsetWebhookBlock(Block):
including creation, updates, searches, and exports.
"""
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput = exa.credentials_field(
description="Exa API credentials for webhook management"
)
@@ -104,7 +105,7 @@ class ExaWebsetWebhookBlock(Block):
description="Webhook payload data", default={}, hidden=True
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
event_type: str = SchemaField(description="Type of event that occurred")
event_id: str = SchemaField(description="Unique identifier for this event")
webset_id: str = SchemaField(description="ID of the affected webset")
@@ -131,45 +132,33 @@ class ExaWebsetWebhookBlock(Block):
async def run(self, input_data: Input, **kwargs) -> BlockOutput:
"""Process incoming Exa webhook payload."""
try:
payload = input_data.payload
payload = input_data.payload
# Extract event details
event_type = payload.get("eventType", "unknown")
event_id = payload.get("eventId", "")
# Extract event details
event_type = payload.get("eventType", "unknown")
event_id = payload.get("eventId", "")
# Get webset ID from payload or input
webset_id = payload.get("websetId", input_data.webset_id)
# Get webset ID from payload or input
webset_id = payload.get("websetId", input_data.webset_id)
# Check if we should process this event based on filter
should_process = self._should_process_event(
event_type, input_data.event_filter
)
# Check if we should process this event based on filter
should_process = self._should_process_event(event_type, input_data.event_filter)
if not should_process:
# Skip events that don't match our filter
return
if not should_process:
# Skip events that don't match our filter
return
# Extract event data
event_data = payload.get("data", {})
timestamp = payload.get("occurredAt", payload.get("createdAt", ""))
metadata = payload.get("metadata", {})
# Extract event data
event_data = payload.get("data", {})
timestamp = payload.get("occurredAt", payload.get("createdAt", ""))
metadata = payload.get("metadata", {})
yield "event_type", event_type
yield "event_id", event_id
yield "webset_id", webset_id
yield "data", event_data
yield "timestamp", timestamp
yield "metadata", metadata
except Exception as e:
# Handle errors gracefully
yield "event_type", "error"
yield "event_id", ""
yield "webset_id", input_data.webset_id
yield "data", {"error": str(e)}
yield "timestamp", ""
yield "metadata", {}
yield "event_type", event_type
yield "event_id", event_id
yield "webset_id", webset_id
yield "data", event_data
yield "timestamp", timestamp
yield "metadata", metadata
def _should_process_event(
self, event_type: str, event_filter: WebsetEventFilter


@@ -0,0 +1,554 @@
"""
Exa Websets Enrichment Management Blocks
This module provides blocks for creating and managing enrichments on webset items,
allowing extraction of additional structured data from existing items.
"""
from enum import Enum
from typing import Any, Dict, List, Optional
from exa_py import AsyncExa
from exa_py.websets.types import WebsetEnrichment as SdkWebsetEnrichment
from pydantic import BaseModel
from backend.sdk import (
APIKeyCredentials,
Block,
BlockCategory,
BlockOutput,
BlockSchemaInput,
BlockSchemaOutput,
CredentialsMetaInput,
Requests,
SchemaField,
)
from ._config import exa
# Mirrored model for stability
class WebsetEnrichmentModel(BaseModel):
"""Stable output model mirroring SDK WebsetEnrichment."""
id: str
webset_id: str
status: str
title: Optional[str]
description: str
format: str
options: List[str]
instructions: Optional[str]
metadata: Dict[str, Any]
created_at: str
updated_at: str
@classmethod
def from_sdk(cls, enrichment: SdkWebsetEnrichment) -> "WebsetEnrichmentModel":
"""Convert SDK WebsetEnrichment to our stable model."""
# Extract options
options_list = []
if enrichment.options:
for option in enrichment.options:
option_dict = option.model_dump(by_alias=True)
options_list.append(option_dict.get("label", ""))
return cls(
id=enrichment.id,
webset_id=enrichment.webset_id,
status=(
enrichment.status.value
if hasattr(enrichment.status, "value")
else str(enrichment.status)
),
title=enrichment.title,
description=enrichment.description,
format=(
enrichment.format.value
if enrichment.format and hasattr(enrichment.format, "value")
else "text"
),
options=options_list,
instructions=enrichment.instructions,
metadata=enrichment.metadata if enrichment.metadata else {},
created_at=(
enrichment.created_at.isoformat() if enrichment.created_at else ""
),
updated_at=(
enrichment.updated_at.isoformat() if enrichment.updated_at else ""
),
)
class EnrichmentFormat(str, Enum):
"""Format types for enrichment responses."""
TEXT = "text" # Free text response
DATE = "date" # Date/datetime format
NUMBER = "number" # Numeric value
OPTIONS = "options" # Multiple choice from provided options
EMAIL = "email" # Email address format
PHONE = "phone" # Phone number format
class ExaCreateEnrichmentBlock(Block):
"""Create a new enrichment to extract additional data from webset items."""
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput = exa.credentials_field(
description="The Exa integration requires an API Key."
)
webset_id: str = SchemaField(
description="The ID or external ID of the Webset",
placeholder="webset-id-or-external-id",
)
description: str = SchemaField(
description="What data to extract from each item",
placeholder="Extract the company's main product or service offering",
)
title: Optional[str] = SchemaField(
default=None,
description="Short title for this enrichment (auto-generated if not provided)",
placeholder="Main Product",
)
format: EnrichmentFormat = SchemaField(
default=EnrichmentFormat.TEXT,
description="Expected format of the extracted data",
)
options: list[str] = SchemaField(
default_factory=list,
description="Available options when format is 'options'",
placeholder='["B2B", "B2C", "Both", "Unknown"]',
advanced=True,
)
apply_to_existing: bool = SchemaField(
default=True,
description="Apply this enrichment to existing items in the webset",
)
metadata: Optional[dict] = SchemaField(
default=None,
description="Metadata to attach to the enrichment",
advanced=True,
)
wait_for_completion: bool = SchemaField(
default=False,
description="Wait for the enrichment to complete on existing items",
)
polling_timeout: int = SchemaField(
default=300,
description="Maximum time to wait for completion in seconds",
advanced=True,
ge=1,
le=600,
)
class Output(BlockSchemaOutput):
enrichment_id: str = SchemaField(
description="The unique identifier for the created enrichment"
)
webset_id: str = SchemaField(
description="The webset this enrichment belongs to"
)
status: str = SchemaField(description="Current status of the enrichment")
title: str = SchemaField(description="Title of the enrichment")
description: str = SchemaField(
description="Description of what data is extracted"
)
format: str = SchemaField(description="Format of the extracted data")
instructions: str = SchemaField(
description="Generated instructions for the enrichment"
)
items_enriched: Optional[int] = SchemaField(
description="Number of items enriched (if wait_for_completion was True)"
)
completion_time: Optional[float] = SchemaField(
description="Time taken to complete in seconds (if wait_for_completion was True)"
)
def __init__(self):
super().__init__(
id="71146ae8-0cb1-4a15-8cde-eae30de71cb6",
description="Create enrichments to extract additional structured data from webset items",
categories={BlockCategory.AI, BlockCategory.SEARCH},
input_schema=ExaCreateEnrichmentBlock.Input,
output_schema=ExaCreateEnrichmentBlock.Output,
)
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
import time
# Build the payload
payload: dict[str, Any] = {
"description": input_data.description,
"format": input_data.format.value,
}
# Add title if provided
if input_data.title:
payload["title"] = input_data.title
# Add options for 'options' format
if input_data.format == EnrichmentFormat.OPTIONS and input_data.options:
payload["options"] = [{"label": opt} for opt in input_data.options]
# Add metadata if provided
if input_data.metadata:
payload["metadata"] = input_data.metadata
start_time = time.time()
# Use AsyncExa SDK
aexa = AsyncExa(api_key=credentials.api_key.get_secret_value())
sdk_enrichment = aexa.websets.enrichments.create(
webset_id=input_data.webset_id, params=payload
)
enrichment_id = sdk_enrichment.id
status = (
sdk_enrichment.status.value
if hasattr(sdk_enrichment.status, "value")
else str(sdk_enrichment.status)
)
# If wait_for_completion is True and apply_to_existing is True, poll for completion
if input_data.wait_for_completion and input_data.apply_to_existing:
import asyncio
poll_interval = 5
max_interval = 30
poll_start = time.time()
items_enriched = 0
while time.time() - poll_start < input_data.polling_timeout:
current_enrich = aexa.websets.enrichments.get(
webset_id=input_data.webset_id, id=enrichment_id
)
current_status = (
current_enrich.status.value
if hasattr(current_enrich.status, "value")
else str(current_enrich.status)
)
if current_status in ["completed", "failed", "cancelled"]:
# Estimate items from webset searches
webset = aexa.websets.get(id=input_data.webset_id)
if webset.searches:
for search in webset.searches:
if search.progress:
items_enriched += search.progress.found
completion_time = time.time() - start_time
yield "enrichment_id", enrichment_id
yield "webset_id", input_data.webset_id
yield "status", current_status
yield "title", sdk_enrichment.title
yield "description", input_data.description
yield "format", input_data.format.value
yield "instructions", sdk_enrichment.instructions
yield "items_enriched", items_enriched
yield "completion_time", completion_time
return
await asyncio.sleep(poll_interval)
poll_interval = min(poll_interval * 1.5, max_interval)
# Timeout
completion_time = time.time() - start_time
yield "enrichment_id", enrichment_id
yield "webset_id", input_data.webset_id
yield "status", status
yield "title", sdk_enrichment.title
yield "description", input_data.description
yield "format", input_data.format.value
yield "instructions", sdk_enrichment.instructions
yield "items_enriched", 0
yield "completion_time", completion_time
else:
yield "enrichment_id", enrichment_id
yield "webset_id", input_data.webset_id
yield "status", status
yield "title", sdk_enrichment.title
yield "description", input_data.description
yield "format", input_data.format.value
yield "instructions", sdk_enrichment.instructions
class ExaGetEnrichmentBlock(Block):
"""Get the status and details of a webset enrichment."""
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput = exa.credentials_field(
description="The Exa integration requires an API Key."
)
webset_id: str = SchemaField(
description="The ID or external ID of the Webset",
placeholder="webset-id-or-external-id",
)
enrichment_id: str = SchemaField(
description="The ID of the enrichment to retrieve",
placeholder="enrichment-id",
)
class Output(BlockSchemaOutput):
enrichment_id: str = SchemaField(
description="The unique identifier for the enrichment"
)
status: str = SchemaField(description="Current status of the enrichment")
title: str = SchemaField(description="Title of the enrichment")
description: str = SchemaField(
description="Description of what data is extracted"
)
format: str = SchemaField(description="Format of the extracted data")
options: list[str] = SchemaField(
description="Available options (for 'options' format)"
)
instructions: str = SchemaField(
description="Generated instructions for the enrichment"
)
created_at: str = SchemaField(description="When the enrichment was created")
updated_at: str = SchemaField(
description="When the enrichment was last updated"
)
metadata: dict = SchemaField(description="Metadata attached to the enrichment")
def __init__(self):
super().__init__(
id="b8c9d0e1-f2a3-4567-89ab-cdef01234567",
description="Get the status and details of a webset enrichment",
categories={BlockCategory.SEARCH},
input_schema=ExaGetEnrichmentBlock.Input,
output_schema=ExaGetEnrichmentBlock.Output,
)
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
# Use AsyncExa SDK
aexa = AsyncExa(api_key=credentials.api_key.get_secret_value())
sdk_enrichment = aexa.websets.enrichments.get(
webset_id=input_data.webset_id, id=input_data.enrichment_id
)
enrichment = WebsetEnrichmentModel.from_sdk(sdk_enrichment)
yield "enrichment_id", enrichment.id
yield "status", enrichment.status
yield "title", enrichment.title
yield "description", enrichment.description
yield "format", enrichment.format
yield "options", enrichment.options
yield "instructions", enrichment.instructions
yield "created_at", enrichment.created_at
yield "updated_at", enrichment.updated_at
yield "metadata", enrichment.metadata
class ExaUpdateEnrichmentBlock(Block):
"""Update an existing enrichment configuration."""
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput = exa.credentials_field(
description="The Exa integration requires an API Key."
)
webset_id: str = SchemaField(
description="The ID or external ID of the Webset",
placeholder="webset-id-or-external-id",
)
enrichment_id: str = SchemaField(
description="The ID of the enrichment to update",
placeholder="enrichment-id",
)
description: Optional[str] = SchemaField(
default=None,
description="New description for what data to extract",
)
format: Optional[EnrichmentFormat] = SchemaField(
default=None,
description="New format for the extracted data",
)
options: Optional[list[str]] = SchemaField(
default=None,
description="New options when format is 'options'",
)
metadata: Optional[dict] = SchemaField(
default=None,
description="New metadata to attach to the enrichment",
)
class Output(BlockSchemaOutput):
enrichment_id: str = SchemaField(
description="The unique identifier for the enrichment"
)
status: str = SchemaField(description="Current status of the enrichment")
title: str = SchemaField(description="Title of the enrichment")
description: str = SchemaField(description="Updated description")
format: str = SchemaField(description="Updated format")
success: str = SchemaField(description="Whether the update was successful")
def __init__(self):
super().__init__(
id="c8d5c5fb-9684-4a29-bd2a-5b38d71776c9",
description="Update an existing enrichment configuration",
categories={BlockCategory.SEARCH},
input_schema=ExaUpdateEnrichmentBlock.Input,
output_schema=ExaUpdateEnrichmentBlock.Output,
)
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
url = f"https://api.exa.ai/websets/v0/websets/{input_data.webset_id}/enrichments/{input_data.enrichment_id}"
headers = {
"Content-Type": "application/json",
"x-api-key": credentials.api_key.get_secret_value(),
}
# Build the update payload
payload = {}
if input_data.description is not None:
payload["description"] = input_data.description
if input_data.format is not None:
payload["format"] = input_data.format.value
if input_data.options is not None:
payload["options"] = [{"label": opt} for opt in input_data.options]
if input_data.metadata is not None:
payload["metadata"] = input_data.metadata
try:
response = await Requests().patch(url, headers=headers, json=payload)
data = response.json()
yield "enrichment_id", data.get("id", "")
yield "status", data.get("status", "")
yield "title", data.get("title", "")
yield "description", data.get("description", "")
yield "format", data.get("format", "")
yield "success", "true"
except ValueError as e:
# Re-raise user input validation errors
raise ValueError(f"Failed to update enrichment: {e}") from e
# Let all other exceptions propagate naturally
class ExaDeleteEnrichmentBlock(Block):
"""Delete an enrichment from a webset."""
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput = exa.credentials_field(
description="The Exa integration requires an API Key."
)
webset_id: str = SchemaField(
description="The ID or external ID of the Webset",
placeholder="webset-id-or-external-id",
)
enrichment_id: str = SchemaField(
description="The ID of the enrichment to delete",
placeholder="enrichment-id",
)
class Output(BlockSchemaOutput):
enrichment_id: str = SchemaField(description="The ID of the deleted enrichment")
success: str = SchemaField(description="Whether the deletion was successful")
def __init__(self):
super().__init__(
id="b250de56-2ca6-4237-a7b8-b5684892189f",
description="Delete an enrichment from a webset",
categories={BlockCategory.SEARCH},
input_schema=ExaDeleteEnrichmentBlock.Input,
output_schema=ExaDeleteEnrichmentBlock.Output,
)
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
# Use AsyncExa SDK
aexa = AsyncExa(api_key=credentials.api_key.get_secret_value())
deleted_enrichment = aexa.websets.enrichments.delete(
webset_id=input_data.webset_id, id=input_data.enrichment_id
)
yield "enrichment_id", deleted_enrichment.id
yield "success", "true"
class ExaCancelEnrichmentBlock(Block):
"""Cancel a running enrichment operation."""
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput = exa.credentials_field(
description="The Exa integration requires an API Key."
)
webset_id: str = SchemaField(
description="The ID or external ID of the Webset",
placeholder="webset-id-or-external-id",
)
enrichment_id: str = SchemaField(
description="The ID of the enrichment to cancel",
placeholder="enrichment-id",
)
class Output(BlockSchemaOutput):
enrichment_id: str = SchemaField(
description="The ID of the canceled enrichment"
)
status: str = SchemaField(description="Status after cancellation")
items_enriched_before_cancel: int = SchemaField(
description="Approximate number of items enriched before cancellation"
)
success: str = SchemaField(
description="Whether the cancellation was successful"
)
def __init__(self):
super().__init__(
id="7e1f8f0f-b6ab-43b3-bd1d-0c534a649295",
description="Cancel a running enrichment operation",
categories={BlockCategory.SEARCH},
input_schema=ExaCancelEnrichmentBlock.Input,
output_schema=ExaCancelEnrichmentBlock.Output,
)
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
# Use AsyncExa SDK
aexa = AsyncExa(api_key=credentials.api_key.get_secret_value())
canceled_enrichment = aexa.websets.enrichments.cancel(
webset_id=input_data.webset_id, id=input_data.enrichment_id
)
# Try to estimate how many items were enriched before cancellation
items_enriched = 0
items_response = aexa.websets.items.list(
webset_id=input_data.webset_id, limit=100
)
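# Only the first page (up to 100 items) is inspected, so this count is approximate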
for sdk_item in items_response.data:
# Check if this enrichment is present
for enrich_result in sdk_item.enrichments:
if enrich_result.enrichment_id == input_data.enrichment_id:
items_enriched += 1
break
status = (
canceled_enrichment.status.value
if hasattr(canceled_enrichment.status, "value")
else str(canceled_enrichment.status)
)
yield "enrichment_id", canceled_enrichment.id
yield "status", status
yield "items_enriched_before_cancel", items_enriched
yield "success", "true"


@@ -0,0 +1,676 @@
"""
Exa Websets Import/Export Management Blocks
This module provides blocks for importing data into websets from CSV files
and exporting webset data in various formats.
"""
import csv
import json
from enum import Enum
from io import StringIO
from typing import Optional, Union
from exa_py import AsyncExa
from exa_py.websets.types import CreateImportResponse
from exa_py.websets.types import Import as SdkImport
from pydantic import BaseModel
from backend.sdk import (
APIKeyCredentials,
Block,
BlockCategory,
BlockOutput,
BlockSchemaInput,
BlockSchemaOutput,
CredentialsMetaInput,
SchemaField,
)
from ._config import exa
from ._test import TEST_CREDENTIALS, TEST_CREDENTIALS_INPUT
# Mirrored model for stability - don't use SDK types directly in block outputs
class ImportModel(BaseModel):
"""Stable output model mirroring SDK Import."""
id: str
status: str
title: str
format: str
entity_type: str
count: int
upload_url: Optional[str] # Only in CreateImportResponse
upload_valid_until: Optional[str] # Only in CreateImportResponse
failed_reason: str
failed_message: str
metadata: dict
created_at: str
updated_at: str
@classmethod
def from_sdk(
cls, import_obj: Union[SdkImport, CreateImportResponse]
) -> "ImportModel":
"""Convert SDK Import or CreateImportResponse to our stable model."""
# Extract entity type from union (may be None)
entity_type = "unknown"
if import_obj.entity:
entity_dict = import_obj.entity.model_dump(by_alias=True, exclude_none=True)
entity_type = entity_dict.get("type", "unknown")
# Handle status enum
status_str = (
import_obj.status.value
if hasattr(import_obj.status, "value")
else str(import_obj.status)
)
# Handle format enum
format_str = (
import_obj.format.value
if hasattr(import_obj.format, "value")
else str(import_obj.format)
)
# Handle failed_reason enum (may be None or enum)
failed_reason_str = ""
if import_obj.failed_reason:
failed_reason_str = (
import_obj.failed_reason.value
if hasattr(import_obj.failed_reason, "value")
else str(import_obj.failed_reason)
)
return cls(
id=import_obj.id,
status=status_str,
title=import_obj.title or "",
format=format_str,
entity_type=entity_type,
count=int(import_obj.count or 0),
upload_url=getattr(
import_obj, "upload_url", None
), # Only in CreateImportResponse
upload_valid_until=getattr(
import_obj, "upload_valid_until", None
), # Only in CreateImportResponse
failed_reason=failed_reason_str,
failed_message=import_obj.failed_message or "",
metadata=import_obj.metadata or {},
created_at=(
import_obj.created_at.isoformat() if import_obj.created_at else ""
),
updated_at=(
import_obj.updated_at.isoformat() if import_obj.updated_at else ""
),
)
class ImportFormat(str, Enum):
"""Supported import formats."""
CSV = "csv"
# JSON = "json" # Future support
class ImportEntityType(str, Enum):
"""Entity types for imports."""
COMPANY = "company"
PERSON = "person"
ARTICLE = "article"
RESEARCH_PAPER = "research_paper"
CUSTOM = "custom"
class ExportFormat(str, Enum):
"""Supported export formats."""
JSON = "json"
CSV = "csv"
JSON_LINES = "jsonl"
class ExaCreateImportBlock(Block):
"""Create an import to load external data that can be used with websets."""
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput = exa.credentials_field(
description="The Exa integration requires an API Key."
)
title: str = SchemaField(
description="Title for this import",
placeholder="Customer List Import",
)
csv_data: str = SchemaField(
description="CSV data to import (as a string)",
placeholder="name,url\nAcme Corp,https://acme.com\nExample Inc,https://example.com",
)
entity_type: ImportEntityType = SchemaField(
default=ImportEntityType.COMPANY,
description="Type of entities being imported",
)
entity_description: Optional[str] = SchemaField(
default=None,
description="Description for custom entity type",
advanced=True,
)
identifier_column: int = SchemaField(
default=0,
description="Column index containing the identifier (0-based)",
ge=0,
)
url_column: Optional[int] = SchemaField(
default=None,
description="Column index containing URLs (optional)",
ge=0,
advanced=True,
)
metadata: Optional[dict] = SchemaField(
default=None,
description="Metadata to attach to the import",
advanced=True,
)
class Output(BlockSchemaOutput):
import_id: str = SchemaField(
description="The unique identifier for the created import"
)
status: str = SchemaField(description="Current status of the import")
title: str = SchemaField(description="Title of the import")
count: int = SchemaField(description="Number of items in the import")
entity_type: str = SchemaField(description="Type of entities imported")
upload_url: Optional[str] = SchemaField(
description="Upload URL for CSV data (only if csv_data not provided in request)"
)
upload_valid_until: Optional[str] = SchemaField(
description="Expiration time for upload URL (only if upload_url is provided)"
)
created_at: str = SchemaField(description="When the import was created")
def __init__(self):
super().__init__(
id="020a35d8-8a53-4e60-8b60-1de5cbab1df3",
description="Import CSV data to use with websets for targeted searches",
categories={BlockCategory.DATA},
input_schema=ExaCreateImportBlock.Input,
output_schema=ExaCreateImportBlock.Output,
test_input={
"credentials": TEST_CREDENTIALS_INPUT,
"title": "Test Import",
"csv_data": "name,url\nAcme,https://acme.com",
"entity_type": ImportEntityType.COMPANY,
"identifier_column": 0,
},
test_output=[
("import_id", "import-123"),
("status", "pending"),
("title", "Test Import"),
("count", 1),
("entity_type", "company"),
("upload_url", None),
("upload_valid_until", None),
("created_at", "2024-01-01T00:00:00"),
],
test_credentials=TEST_CREDENTIALS,
test_mock=self._create_test_mock(),
)
@staticmethod
def _create_test_mock():
"""Create test mocks for the AsyncExa SDK."""
from datetime import datetime
from unittest.mock import MagicMock
# Create mock SDK import object
mock_import = MagicMock()
mock_import.id = "import-123"
mock_import.status = MagicMock(value="pending")
mock_import.title = "Test Import"
mock_import.format = MagicMock(value="csv")
mock_import.count = 1
mock_import.upload_url = None
mock_import.upload_valid_until = None
mock_import.failed_reason = None
mock_import.failed_message = ""
mock_import.metadata = {}
mock_import.created_at = datetime.fromisoformat("2024-01-01T00:00:00")
mock_import.updated_at = datetime.fromisoformat("2024-01-01T00:00:00")
# Mock entity
mock_entity = MagicMock()
mock_entity.model_dump = MagicMock(return_value={"type": "company"})
mock_import.entity = mock_entity
return {
"_get_client": lambda *args, **kwargs: MagicMock(
websets=MagicMock(
imports=MagicMock(create=lambda *args, **kwargs: mock_import)
)
)
}
def _get_client(self, api_key: str) -> AsyncExa:
"""Get Exa client (separated for testing)."""
return AsyncExa(api_key=api_key)
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
aexa = self._get_client(credentials.api_key.get_secret_value())
csv_reader = csv.reader(StringIO(input_data.csv_data))
rows = list(csv_reader)
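# The first row is assumed to be the CSV header, so it is excluded from the item count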
count = len(rows) - 1 if len(rows) > 1 else 0
size = len(input_data.csv_data.encode("utf-8"))
payload = {
"title": input_data.title,
"format": ImportFormat.CSV.value,
"count": count,
"size": size,
"csv": {
"identifier": input_data.identifier_column,
},
}
# Add URL column if specified
if input_data.url_column is not None:
payload["csv"]["url"] = input_data.url_column
# Add entity configuration
entity = {"type": input_data.entity_type.value}
if (
input_data.entity_type == ImportEntityType.CUSTOM
and input_data.entity_description
):
entity["description"] = input_data.entity_description
payload["entity"] = entity
# Add metadata if provided
if input_data.metadata:
payload["metadata"] = input_data.metadata
sdk_import = aexa.websets.imports.create(
params=payload, csv_data=input_data.csv_data
)
import_obj = ImportModel.from_sdk(sdk_import)
yield "import_id", import_obj.id
yield "status", import_obj.status
yield "title", import_obj.title
yield "count", import_obj.count
yield "entity_type", import_obj.entity_type
yield "upload_url", import_obj.upload_url
yield "upload_valid_until", import_obj.upload_valid_until
yield "created_at", import_obj.created_at
class ExaGetImportBlock(Block):
"""Get the status and details of an import."""
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput = exa.credentials_field(
description="The Exa integration requires an API Key."
)
import_id: str = SchemaField(
description="The ID of the import to retrieve",
placeholder="import-id",
)
class Output(BlockSchemaOutput):
import_id: str = SchemaField(description="The unique identifier for the import")
status: str = SchemaField(description="Current status of the import")
title: str = SchemaField(description="Title of the import")
format: str = SchemaField(description="Format of the imported data")
entity_type: str = SchemaField(description="Type of entities imported")
count: int = SchemaField(description="Number of items imported")
upload_url: Optional[str] = SchemaField(
description="Upload URL for CSV data (if import not yet uploaded)"
)
upload_valid_until: Optional[str] = SchemaField(
description="Expiration time for upload URL (if applicable)"
)
failed_reason: Optional[str] = SchemaField(
description="Reason for failure (if applicable)"
)
failed_message: Optional[str] = SchemaField(
description="Detailed failure message (if applicable)"
)
created_at: str = SchemaField(description="When the import was created")
updated_at: str = SchemaField(description="When the import was last updated")
metadata: dict = SchemaField(description="Metadata attached to the import")
def __init__(self):
super().__init__(
id="236663c8-a8dc-45f7-a050-2676bb0a3dd2",
description="Get the status and details of an import",
categories={BlockCategory.DATA},
input_schema=ExaGetImportBlock.Input,
output_schema=ExaGetImportBlock.Output,
)
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
# Use AsyncExa SDK
aexa = AsyncExa(api_key=credentials.api_key.get_secret_value())
sdk_import = aexa.websets.imports.get(import_id=input_data.import_id)
import_obj = ImportModel.from_sdk(sdk_import)
# Yield all fields
yield "import_id", import_obj.id
yield "status", import_obj.status
yield "title", import_obj.title
yield "format", import_obj.format
yield "entity_type", import_obj.entity_type
yield "count", import_obj.count
yield "upload_url", import_obj.upload_url
yield "upload_valid_until", import_obj.upload_valid_until
yield "failed_reason", import_obj.failed_reason
yield "failed_message", import_obj.failed_message
yield "created_at", import_obj.created_at
yield "updated_at", import_obj.updated_at
yield "metadata", import_obj.metadata
class ExaListImportsBlock(Block):
"""List all imports with pagination."""
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput = exa.credentials_field(
description="The Exa integration requires an API Key."
)
limit: int = SchemaField(
default=25,
description="Number of imports to return",
ge=1,
le=100,
)
cursor: Optional[str] = SchemaField(
default=None,
description="Cursor for pagination",
advanced=True,
)
class Output(BlockSchemaOutput):
imports: list[dict] = SchemaField(description="List of imports")
import_item: dict = SchemaField(
description="Individual import (yielded for each import)"
)
has_more: bool = SchemaField(
description="Whether there are more imports to paginate through"
)
next_cursor: Optional[str] = SchemaField(
description="Cursor for the next page of results"
)
def __init__(self):
super().__init__(
id="65323630-f7e9-4692-a624-184ba14c0686",
description="List all imports with pagination support",
categories={BlockCategory.DATA},
input_schema=ExaListImportsBlock.Input,
output_schema=ExaListImportsBlock.Output,
)
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
# Use AsyncExa SDK
aexa = AsyncExa(api_key=credentials.api_key.get_secret_value())
response = aexa.websets.imports.list(
cursor=input_data.cursor,
limit=input_data.limit,
)
# Convert SDK imports to our stable models
imports = [ImportModel.from_sdk(i) for i in response.data]
yield "imports", [i.model_dump() for i in imports]
for import_obj in imports:
yield "import_item", import_obj.model_dump()
yield "has_more", response.has_more
yield "next_cursor", response.next_cursor
class ExaDeleteImportBlock(Block):
"""Delete an import."""
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput = exa.credentials_field(
description="The Exa integration requires an API Key."
)
import_id: str = SchemaField(
description="The ID of the import to delete",
placeholder="import-id",
)
class Output(BlockSchemaOutput):
import_id: str = SchemaField(description="The ID of the deleted import")
success: str = SchemaField(description="Whether the deletion was successful")
def __init__(self):
super().__init__(
id="81ae30ed-c7ba-4b5d-8483-b726846e570c",
description="Delete an import",
categories={BlockCategory.DATA},
input_schema=ExaDeleteImportBlock.Input,
output_schema=ExaDeleteImportBlock.Output,
)
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
# Use AsyncExa SDK
aexa = AsyncExa(api_key=credentials.api_key.get_secret_value())
deleted_import = aexa.websets.imports.delete(import_id=input_data.import_id)
yield "import_id", deleted_import.id
yield "success", "true"
class ExaExportWebsetBlock(Block):
"""Export all data from a webset in various formats."""
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput = exa.credentials_field(
description="The Exa integration requires an API Key."
)
webset_id: str = SchemaField(
description="The ID or external ID of the Webset to export",
placeholder="webset-id-or-external-id",
)
format: ExportFormat = SchemaField(
default=ExportFormat.JSON,
description="Export format",
)
include_content: bool = SchemaField(
default=True,
description="Include full content in export",
)
include_enrichments: bool = SchemaField(
default=True,
description="Include enrichment data in export",
)
max_items: int = SchemaField(
default=100,
description="Maximum number of items to export",
ge=1,
le=100,
)
class Output(BlockSchemaOutput):
export_data: str = SchemaField(
description="Exported data in the requested format"
)
item_count: int = SchemaField(description="Number of items exported")
total_items: int = SchemaField(
description="Total number of items in the webset"
)
truncated: bool = SchemaField(
description="Whether the export was truncated due to max_items limit"
)
format: str = SchemaField(description="Format of the exported data")
def __init__(self):
super().__init__(
id="5da9d0fd-4b5b-4318-8302-8f71d0ccce9d",
description="Export webset data in JSON, CSV, or JSON Lines format",
categories={BlockCategory.DATA},
input_schema=ExaExportWebsetBlock.Input,
output_schema=ExaExportWebsetBlock.Output,
test_input={
"credentials": TEST_CREDENTIALS_INPUT,
"webset_id": "test-webset",
"format": ExportFormat.JSON,
"include_content": True,
"include_enrichments": True,
"max_items": 10,
},
test_output=[
("export_data", str),
("item_count", 2),
("total_items", 2),
("truncated", False),
("format", "json"),
],
test_credentials=TEST_CREDENTIALS,
test_mock=self._create_test_mock(),
)
@staticmethod
def _create_test_mock():
"""Create test mocks for the AsyncExa SDK."""
from unittest.mock import MagicMock
# Create mock webset items
mock_item1 = MagicMock()
mock_item1.model_dump = MagicMock(
return_value={
"id": "item-1",
"url": "https://example.com",
"title": "Test Item 1",
}
)
mock_item2 = MagicMock()
mock_item2.model_dump = MagicMock(
return_value={
"id": "item-2",
"url": "https://example.org",
"title": "Test Item 2",
}
)
# Create mock iterator
mock_items = [mock_item1, mock_item2]
return {
"_get_client": lambda *args, **kwargs: MagicMock(
websets=MagicMock(
items=MagicMock(list_all=lambda *args, **kwargs: iter(mock_items))
)
)
}
def _get_client(self, api_key: str) -> AsyncExa:
"""Get Exa client (separated for testing)."""
return AsyncExa(api_key=api_key)
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
# Use AsyncExa SDK
aexa = self._get_client(credentials.api_key.get_secret_value())
try:
all_items = []
# Use SDK's list_all iterator to fetch items
item_iterator = aexa.websets.items.list_all(
webset_id=input_data.webset_id, limit=input_data.max_items
)
for sdk_item in item_iterator:
if len(all_items) >= input_data.max_items:
break
# Convert to dict for export
item_dict = sdk_item.model_dump(by_alias=True, exclude_none=True)
all_items.append(item_dict)
# The SDK does not expose a webset-wide total, so total_items reflects only the items fetched
total_items = len(all_items)
# Heuristic: fetching exactly max_items is treated as a truncated export
truncated = len(all_items) >= input_data.max_items
# Process items based on include flags
if not input_data.include_content:
for item in all_items:
item.pop("content", None)
if not input_data.include_enrichments:
for item in all_items:
item.pop("enrichments", None)
# Format the export data
export_data = ""
if input_data.format == ExportFormat.JSON:
export_data = json.dumps(all_items, indent=2, default=str)
elif input_data.format == ExportFormat.JSON_LINES:
lines = [json.dumps(item, default=str) for item in all_items]
export_data = "\n".join(lines)
elif input_data.format == ExportFormat.CSV:
# Extract all unique keys for CSV headers
all_keys = set()
for item in all_items:
all_keys.update(self._flatten_dict(item).keys())
# Create CSV
output = StringIO()
writer = csv.DictWriter(output, fieldnames=sorted(all_keys))
writer.writeheader()
for item in all_items:
flat_item = self._flatten_dict(item)
writer.writerow(flat_item)
export_data = output.getvalue()
yield "export_data", export_data
yield "item_count", len(all_items)
yield "total_items", total_items
yield "truncated", truncated
yield "format", input_data.format.value
except ValueError as e:
# Re-raise user input validation errors
raise ValueError(f"Failed to export webset: {e}") from e
# Let all other exceptions propagate naturally
def _flatten_dict(self, d: dict, parent_key: str = "", sep: str = "_") -> dict:
"""Flatten nested dictionaries for CSV export."""
items = []
for k, v in d.items():
new_key = f"{parent_key}{sep}{k}" if parent_key else k
if isinstance(v, dict):
items.extend(self._flatten_dict(v, new_key, sep=sep).items())
elif isinstance(v, list):
# Convert lists to JSON strings for CSV
items.append((new_key, json.dumps(v, default=str)))
else:
items.append((new_key, v))
return dict(items)
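# Illustrative example (not from the source): _flatten_dict({"a": {"b": 1}, "tags": ["x"]})
# returns {"a_b": 1, "tags": '["x"]'}: nested keys are joined with "_" and lists become JSON strings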


@@ -0,0 +1,591 @@
"""
Exa Websets Item Management Blocks
This module provides blocks for managing items within Exa websets, including
retrieving, listing, deleting, and bulk operations on webset items.
"""
from typing import Any, Dict, List, Optional
from exa_py import AsyncExa
from exa_py.websets.types import WebsetItem as SdkWebsetItem
from exa_py.websets.types import (
WebsetItemArticleProperties,
WebsetItemCompanyProperties,
WebsetItemCustomProperties,
WebsetItemPersonProperties,
WebsetItemResearchPaperProperties,
)
from pydantic import AnyUrl, BaseModel
from backend.sdk import (
APIKeyCredentials,
Block,
BlockCategory,
BlockOutput,
BlockSchemaInput,
BlockSchemaOutput,
CredentialsMetaInput,
SchemaField,
)
from ._config import exa
# Mirrored model for enrichment results
class EnrichmentResultModel(BaseModel):
"""Stable output model mirroring SDK EnrichmentResult."""
enrichment_id: str
format: str
result: Optional[List[str]]
reasoning: Optional[str]
references: List[Dict[str, Any]]
@classmethod
def from_sdk(cls, sdk_enrich) -> "EnrichmentResultModel":
"""Convert SDK EnrichmentResult to our model."""
format_str = (
sdk_enrich.format.value
if hasattr(sdk_enrich.format, "value")
else str(sdk_enrich.format)
)
# Convert references to dicts
references_list = []
if sdk_enrich.references:
for ref in sdk_enrich.references:
references_list.append(ref.model_dump(by_alias=True, exclude_none=True))
return cls(
enrichment_id=sdk_enrich.enrichment_id,
format=format_str,
result=sdk_enrich.result,
reasoning=sdk_enrich.reasoning,
references=references_list,
)
# Mirrored model for stability - don't use SDK types directly in block outputs
class WebsetItemModel(BaseModel):
"""Stable output model mirroring SDK WebsetItem."""
id: str
url: Optional[AnyUrl]
title: str
content: str
entity_data: Dict[str, Any]
enrichments: Dict[str, EnrichmentResultModel]
created_at: str
updated_at: str
@classmethod
def from_sdk(cls, item: SdkWebsetItem) -> "WebsetItemModel":
"""Convert SDK WebsetItem to our stable model."""
# Extract properties from the union type
properties_dict = {}
url_value = None
title = ""
content = ""
if hasattr(item, "properties") and item.properties:
properties_dict = item.properties.model_dump(
by_alias=True, exclude_none=True
)
# URL is always available on all property types
url_value = item.properties.url
# Extract title using isinstance checks on the union type
if isinstance(item.properties, WebsetItemPersonProperties):
title = item.properties.person.name
content = "" # Person type has no content
elif isinstance(item.properties, WebsetItemCompanyProperties):
title = item.properties.company.name
content = item.properties.content or ""
elif isinstance(item.properties, WebsetItemArticleProperties):
title = item.properties.description
content = item.properties.content or ""
elif isinstance(item.properties, WebsetItemResearchPaperProperties):
title = item.properties.description
content = item.properties.content or ""
elif isinstance(item.properties, WebsetItemCustomProperties):
title = item.properties.description
content = item.properties.content or ""
else:
# Fallback
title = item.properties.description
content = getattr(item.properties, "content", "")
# Convert enrichments from list to dict keyed by enrichment_id using Pydantic models
enrichments_dict: Dict[str, EnrichmentResultModel] = {}
if hasattr(item, "enrichments") and item.enrichments:
for sdk_enrich in item.enrichments:
enrich_model = EnrichmentResultModel.from_sdk(sdk_enrich)
enrichments_dict[enrich_model.enrichment_id] = enrich_model
return cls(
id=item.id,
url=url_value,
title=title,
content=content or "",
entity_data=properties_dict,
enrichments=enrichments_dict,
created_at=item.created_at.isoformat() if item.created_at else "",
updated_at=item.updated_at.isoformat() if item.updated_at else "",
)
class ExaGetWebsetItemBlock(Block):
"""Get a specific item from a webset by its ID."""
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput = exa.credentials_field(
description="The Exa integration requires an API Key."
)
webset_id: str = SchemaField(
description="The ID or external ID of the Webset",
placeholder="webset-id-or-external-id",
)
item_id: str = SchemaField(
description="The ID of the specific item to retrieve",
placeholder="item-id",
)
class Output(BlockSchemaOutput):
item_id: str = SchemaField(description="The unique identifier for the item")
url: str = SchemaField(description="The URL of the original source")
title: str = SchemaField(description="The title of the item")
content: str = SchemaField(description="The main content of the item")
entity_data: dict = SchemaField(description="Entity-specific structured data")
enrichments: dict = SchemaField(description="Enrichment data added to the item")
created_at: str = SchemaField(
description="When the item was added to the webset"
)
updated_at: str = SchemaField(description="When the item was last updated")
def __init__(self):
super().__init__(
id="c4a7d9e2-8f3b-4a6c-9d8e-a5b6c7d8e9f0",
description="Get a specific item from a webset by its ID",
categories={BlockCategory.SEARCH},
input_schema=ExaGetWebsetItemBlock.Input,
output_schema=ExaGetWebsetItemBlock.Output,
)
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
aexa = AsyncExa(api_key=credentials.api_key.get_secret_value())
sdk_item = aexa.websets.items.get(
webset_id=input_data.webset_id, id=input_data.item_id
)
item = WebsetItemModel.from_sdk(sdk_item)
yield "item_id", item.id
yield "url", item.url
yield "title", item.title
yield "content", item.content
yield "entity_data", item.entity_data
yield "enrichments", item.enrichments
yield "created_at", item.created_at
yield "updated_at", item.updated_at
class ExaListWebsetItemsBlock(Block):
"""List items in a webset with pagination and optional filtering."""
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput = exa.credentials_field(
description="The Exa integration requires an API Key."
)
webset_id: str = SchemaField(
description="The ID or external ID of the Webset",
placeholder="webset-id-or-external-id",
)
limit: int = SchemaField(
default=25,
description="Number of items to return (1-100)",
ge=1,
le=100,
)
cursor: Optional[str] = SchemaField(
default=None,
description="Cursor for pagination through results",
advanced=True,
)
wait_for_items: bool = SchemaField(
default=False,
description="Wait for items to be available if webset is still processing",
advanced=True,
)
wait_timeout: int = SchemaField(
default=60,
description="Maximum time to wait for items in seconds",
advanced=True,
ge=1,
le=300,
)
class Output(BlockSchemaOutput):
items: list[WebsetItemModel] = SchemaField(
description="List of webset items",
)
webset_id: str = SchemaField(
description="The ID of the webset",
)
item: WebsetItemModel = SchemaField(
description="Individual item (yielded for each item in the list)",
)
has_more: bool = SchemaField(
description="Whether there are more items to paginate through",
)
next_cursor: Optional[str] = SchemaField(
description="Cursor for the next page of results",
)
def __init__(self):
super().__init__(
id="7b5e8c9f-01a2-43c4-95e6-f7a8b9c0d1e2",
description="List items in a webset with pagination support",
categories={BlockCategory.SEARCH},
input_schema=ExaListWebsetItemsBlock.Input,
output_schema=ExaListWebsetItemsBlock.Output,
)
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
aexa = AsyncExa(api_key=credentials.api_key.get_secret_value())
if input_data.wait_for_items:
import asyncio
import time
start_time = time.time()
interval = 2
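# Re-list every 2s at first, backing off 1.2x per attempt (max 10s)
# until items appear or wait_timeout elapses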
response = None
while time.time() - start_time < input_data.wait_timeout:
response = aexa.websets.items.list(
webset_id=input_data.webset_id,
cursor=input_data.cursor,
limit=input_data.limit,
)
if response.data:
break
await asyncio.sleep(interval)
interval = min(interval * 1.2, 10)
if not response:
response = aexa.websets.items.list(
webset_id=input_data.webset_id,
cursor=input_data.cursor,
limit=input_data.limit,
)
else:
response = aexa.websets.items.list(
webset_id=input_data.webset_id,
cursor=input_data.cursor,
limit=input_data.limit,
)
items = [WebsetItemModel.from_sdk(item) for item in response.data]
yield "items", items
for item in items:
yield "item", item
yield "has_more", response.has_more
yield "next_cursor", response.next_cursor
yield "webset_id", input_data.webset_id
class ExaDeleteWebsetItemBlock(Block):
"""Delete a specific item from a webset."""
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput = exa.credentials_field(
description="The Exa integration requires an API Key."
)
webset_id: str = SchemaField(
description="The ID or external ID of the Webset",
placeholder="webset-id-or-external-id",
)
item_id: str = SchemaField(
description="The ID of the item to delete",
placeholder="item-id",
)
class Output(BlockSchemaOutput):
item_id: str = SchemaField(description="The ID of the deleted item")
success: str = SchemaField(description="Whether the deletion was successful")
def __init__(self):
super().__init__(
id="12c57fbe-c270-4877-a2b6-d2d05529ba79",
description="Delete a specific item from a webset",
categories={BlockCategory.SEARCH},
input_schema=ExaDeleteWebsetItemBlock.Input,
output_schema=ExaDeleteWebsetItemBlock.Output,
)
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
aexa = AsyncExa(api_key=credentials.api_key.get_secret_value())
deleted_item = aexa.websets.items.delete(
webset_id=input_data.webset_id, id=input_data.item_id
)
yield "item_id", deleted_item.id
yield "success", "true"
class ExaBulkWebsetItemsBlock(Block):
"""Get all items from a webset in a single operation (with size limits)."""
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput = exa.credentials_field(
description="The Exa integration requires an API Key."
)
webset_id: str = SchemaField(
description="The ID or external ID of the Webset",
placeholder="webset-id-or-external-id",
)
max_items: int = SchemaField(
default=100,
description="Maximum number of items to retrieve (1-1000). Note: Large values may take longer.",
ge=1,
le=1000,
)
include_enrichments: bool = SchemaField(
default=True,
description="Include enrichment data for each item",
)
include_content: bool = SchemaField(
default=True,
description="Include full content for each item",
)
class Output(BlockSchemaOutput):
items: list[WebsetItemModel] = SchemaField(
description="All items from the webset"
)
item: WebsetItemModel = SchemaField(
description="Individual item (yielded for each item)"
)
total_retrieved: int = SchemaField(
description="Total number of items retrieved"
)
truncated: bool = SchemaField(
description="Whether results were truncated due to max_items limit"
)
def __init__(self):
super().__init__(
id="dbd619f5-476e-4395-af9a-a7a7c0fb8c4e",
description="Get all items from a webset in bulk (with configurable limits)",
categories={BlockCategory.SEARCH},
input_schema=ExaBulkWebsetItemsBlock.Input,
output_schema=ExaBulkWebsetItemsBlock.Output,
)
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
# Use AsyncExa SDK
aexa = AsyncExa(api_key=credentials.api_key.get_secret_value())
all_items: List[WebsetItemModel] = []
item_iterator = aexa.websets.items.list_all(
webset_id=input_data.webset_id, limit=input_data.max_items
)
for sdk_item in item_iterator:
if len(all_items) >= input_data.max_items:
break
item = WebsetItemModel.from_sdk(sdk_item)
if not input_data.include_enrichments:
item.enrichments = {}
if not input_data.include_content:
item.content = ""
all_items.append(item)
yield "items", all_items
for item in all_items:
yield "item", item
yield "total_retrieved", len(all_items)
yield "truncated", len(all_items) >= input_data.max_items
class ExaWebsetItemsSummaryBlock(Block):
"""Get a summary of items in a webset without retrieving all data."""
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput = exa.credentials_field(
description="The Exa integration requires an API Key."
)
webset_id: str = SchemaField(
description="The ID or external ID of the Webset",
placeholder="webset-id-or-external-id",
)
sample_size: int = SchemaField(
default=5,
description="Number of sample items to include",
ge=0,
le=10,
)
class Output(BlockSchemaOutput):
total_items: int = SchemaField(
description="Total number of items in the webset"
)
entity_type: str = SchemaField(description="Type of entities in the webset")
sample_items: list[WebsetItemModel] = SchemaField(
description="Sample of items from the webset"
)
enrichment_columns: list[str] = SchemaField(
description="List of enrichment columns available"
)
def __init__(self):
super().__init__(
id="db7813ad-10bd-4652-8623-5667d6fecdd5",
description="Get a summary of webset items without retrieving all data",
categories={BlockCategory.SEARCH},
input_schema=ExaWebsetItemsSummaryBlock.Input,
output_schema=ExaWebsetItemsSummaryBlock.Output,
)
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
# Use AsyncExa SDK
aexa = AsyncExa(api_key=credentials.api_key.get_secret_value())
webset = aexa.websets.get(id=input_data.webset_id)
entity_type = "unknown"
if webset.searches:
first_search = webset.searches[0]
if first_search.entity:
# The entity is a union type, extract type field
entity_dict = first_search.entity.model_dump(by_alias=True)
entity_type = entity_dict.get("type", "unknown")
# Get enrichment columns
enrichment_columns = []
if webset.enrichments:
enrichment_columns = [
e.title if e.title else e.description for e in webset.enrichments
]
# Get sample items if requested
sample_items: List[WebsetItemModel] = []
if input_data.sample_size > 0:
items_response = aexa.websets.items.list(
webset_id=input_data.webset_id, limit=input_data.sample_size
)
# Convert to our stable models
sample_items = [
WebsetItemModel.from_sdk(item) for item in items_response.data
]
total_items = 0
if webset.searches:
for search in webset.searches:
if search.progress:
total_items += search.progress.found
yield "total_items", total_items
yield "entity_type", entity_type
yield "sample_items", sample_items
yield "enrichment_columns", enrichment_columns
class ExaGetNewItemsBlock(Block):
"""Get items added to a webset since a specific cursor (incremental processing helper)."""
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput = exa.credentials_field(
description="The Exa integration requires an API Key."
)
webset_id: str = SchemaField(
description="The ID or external ID of the Webset",
placeholder="webset-id-or-external-id",
)
since_cursor: Optional[str] = SchemaField(
default=None,
description="Cursor from previous run - only items after this will be returned. Leave empty on first run.",
placeholder="cursor-from-previous-run",
)
max_items: int = SchemaField(
default=100,
description="Maximum number of new items to retrieve",
ge=1,
le=1000,
)
class Output(BlockSchemaOutput):
new_items: list[WebsetItemModel] = SchemaField(
description="Items added since the cursor"
)
item: WebsetItemModel = SchemaField(
description="Individual item (yielded for each new item)"
)
count: int = SchemaField(description="Number of new items found")
next_cursor: Optional[str] = SchemaField(
description="Save this cursor for the next run to get only newer items"
)
has_more: bool = SchemaField(
description="Whether there are more new items beyond max_items"
)
def __init__(self):
super().__init__(
id="3ff9bdf5-9613-4d21-8a60-90eb8b69c414",
description="Get items added since a cursor - enables incremental processing without reprocessing",
categories={BlockCategory.SEARCH, BlockCategory.DATA},
input_schema=ExaGetNewItemsBlock.Input,
output_schema=ExaGetNewItemsBlock.Output,
)
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
# Use AsyncExa SDK
aexa = AsyncExa(api_key=credentials.api_key.get_secret_value())
# Get items starting from cursor
response = aexa.websets.items.list(
webset_id=input_data.webset_id,
cursor=input_data.since_cursor,
limit=input_data.max_items,
)
# Convert SDK items to our stable models
new_items = [WebsetItemModel.from_sdk(item) for item in response.data]
# Yield the full list
yield "new_items", new_items
# Yield individual items for processing
for item in new_items:
yield "item", item
# Yield metadata for next run
yield "count", len(new_items)
yield "next_cursor", response.next_cursor
yield "has_more", response.has_more


@@ -0,0 +1,600 @@
"""
Exa Websets Monitor Management Blocks
This module provides blocks for creating and managing monitors that automatically
keep websets updated with fresh data on a schedule.
"""
from enum import Enum
from typing import Optional
from exa_py import AsyncExa
from exa_py.websets.types import Monitor as SdkMonitor
from pydantic import BaseModel
from backend.sdk import (
APIKeyCredentials,
Block,
BlockCategory,
BlockOutput,
BlockSchemaInput,
BlockSchemaOutput,
CredentialsMetaInput,
SchemaField,
)
from ._config import exa
from ._test import TEST_CREDENTIALS, TEST_CREDENTIALS_INPUT
# Mirrored model for stability - don't use SDK types directly in block outputs
class MonitorModel(BaseModel):
"""Stable output model mirroring SDK Monitor."""
id: str
status: str
webset_id: str
behavior_type: str
behavior_config: dict
cron_expression: str
timezone: str
next_run_at: str
last_run: dict
metadata: dict
created_at: str
updated_at: str
@classmethod
def from_sdk(cls, monitor: SdkMonitor) -> "MonitorModel":
"""Convert SDK Monitor to our stable model."""
# Extract behavior information
behavior_dict = monitor.behavior.model_dump(by_alias=True, exclude_none=True)
behavior_type = behavior_dict.get("type", "unknown")
behavior_config = behavior_dict.get("config", {})
# Extract cadence information
cadence_dict = monitor.cadence.model_dump(by_alias=True, exclude_none=True)
cron_expr = cadence_dict.get("cron", "")
timezone = cadence_dict.get("timezone", "Etc/UTC")
# Extract last run information
last_run_dict = {}
if monitor.last_run:
last_run_dict = monitor.last_run.model_dump(
by_alias=True, exclude_none=True
)
# Handle status enum
status_str = (
monitor.status.value
if hasattr(monitor.status, "value")
else str(monitor.status)
)
return cls(
id=monitor.id,
status=status_str,
webset_id=monitor.webset_id,
behavior_type=behavior_type,
behavior_config=behavior_config,
cron_expression=cron_expr,
timezone=timezone,
next_run_at=monitor.next_run_at.isoformat() if monitor.next_run_at else "",
last_run=last_run_dict,
metadata=monitor.metadata or {},
created_at=monitor.created_at.isoformat() if monitor.created_at else "",
updated_at=monitor.updated_at.isoformat() if monitor.updated_at else "",
)
class MonitorStatus(str, Enum):
"""Status of a monitor."""
ENABLED = "enabled"
DISABLED = "disabled"
PAUSED = "paused"
class MonitorBehaviorType(str, Enum):
"""Type of behavior for a monitor."""
SEARCH = "search" # Run new searches
REFRESH = "refresh" # Refresh existing items
class SearchBehavior(str, Enum):
"""How search results interact with existing items."""
APPEND = "append"
OVERRIDE = "override"
class ExaCreateMonitorBlock(Block):
"""Create a monitor to automatically keep a webset updated on a schedule."""
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput = exa.credentials_field(
description="The Exa integration requires an API Key."
)
webset_id: str = SchemaField(
description="The ID or external ID of the Webset to monitor",
placeholder="webset-id-or-external-id",
)
# Schedule configuration
cron_expression: str = SchemaField(
description="Cron expression for scheduling (5 fields, max once per day)",
placeholder="0 9 * * 1", # Every Monday at 9 AM
)
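# Other illustrative schedules: "0 6 * * *" = daily at 06:00; "0 9 * * 1-5" = weekdays at 09:00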
timezone: str = SchemaField(
default="Etc/UTC",
description="IANA timezone for the schedule",
placeholder="America/New_York",
advanced=True,
)
# Behavior configuration
behavior_type: MonitorBehaviorType = SchemaField(
default=MonitorBehaviorType.SEARCH,
description="Type of monitor behavior (search for new items or refresh existing)",
)
# Search configuration (for SEARCH behavior)
search_query: Optional[str] = SchemaField(
default=None,
description="Search query for finding new items (required for search behavior)",
placeholder="AI startups that raised funding in the last week",
)
search_count: int = SchemaField(
default=10,
description="Number of items to find in each search",
ge=1,
le=100,
)
search_criteria: list[str] = SchemaField(
default_factory=list,
description="Criteria that items must meet",
advanced=True,
)
search_behavior: SearchBehavior = SchemaField(
default=SearchBehavior.APPEND,
description="How new results interact with existing items",
advanced=True,
)
entity_type: Optional[str] = SchemaField(
default=None,
description="Type of entity to search for (company, person, etc.)",
advanced=True,
)
# Refresh configuration (for REFRESH behavior)
refresh_content: bool = SchemaField(
default=True,
description="Refresh content from source URLs (for refresh behavior)",
advanced=True,
)
refresh_enrichments: bool = SchemaField(
default=True,
description="Re-run enrichments on items (for refresh behavior)",
advanced=True,
)
# Metadata
metadata: Optional[dict] = SchemaField(
default=None,
description="Metadata to attach to the monitor",
advanced=True,
)
class Output(BlockSchemaOutput):
monitor_id: str = SchemaField(
description="The unique identifier for the created monitor"
)
webset_id: str = SchemaField(description="The webset this monitor belongs to")
status: str = SchemaField(description="Status of the monitor")
behavior_type: str = SchemaField(description="Type of monitor behavior")
next_run_at: Optional[str] = SchemaField(
description="When the monitor will next run"
)
cron_expression: str = SchemaField(description="The schedule cron expression")
timezone: str = SchemaField(description="The timezone for scheduling")
created_at: str = SchemaField(description="When the monitor was created")
def __init__(self):
super().__init__(
id="f8a9b0c1-d2e3-4567-890a-bcdef1234567",
description="Create automated monitors to keep websets updated with fresh data on a schedule",
categories={BlockCategory.SEARCH},
input_schema=ExaCreateMonitorBlock.Input,
output_schema=ExaCreateMonitorBlock.Output,
test_input={
"credentials": TEST_CREDENTIALS_INPUT,
"webset_id": "test-webset",
"cron_expression": "0 9 * * 1",
"behavior_type": MonitorBehaviorType.SEARCH,
"search_query": "AI startups",
"search_count": 10,
},
test_output=[
("monitor_id", "monitor-123"),
("webset_id", "test-webset"),
("status", "enabled"),
("behavior_type", "search"),
("next_run_at", "2024-01-01T00:00:00"),
("cron_expression", "0 9 * * 1"),
("timezone", "Etc/UTC"),
("created_at", "2024-01-01T00:00:00"),
],
test_credentials=TEST_CREDENTIALS,
test_mock=self._create_test_mock(),
)
@staticmethod
def _create_test_mock():
"""Create test mocks for the AsyncExa SDK."""
from datetime import datetime
from unittest.mock import MagicMock
# Create mock SDK monitor object
mock_monitor = MagicMock()
mock_monitor.id = "monitor-123"
mock_monitor.status = MagicMock(value="enabled")
mock_monitor.webset_id = "test-webset"
mock_monitor.next_run_at = datetime.fromisoformat("2024-01-01T00:00:00")
mock_monitor.created_at = datetime.fromisoformat("2024-01-01T00:00:00")
mock_monitor.updated_at = datetime.fromisoformat("2024-01-01T00:00:00")
mock_monitor.metadata = {}
mock_monitor.last_run = None
# Mock behavior
mock_behavior = MagicMock()
mock_behavior.model_dump = MagicMock(
return_value={"type": "search", "config": {}}
)
mock_monitor.behavior = mock_behavior
# Mock cadence
mock_cadence = MagicMock()
mock_cadence.model_dump = MagicMock(
return_value={"cron": "0 9 * * 1", "timezone": "Etc/UTC"}
)
mock_monitor.cadence = mock_cadence
return {
"_get_client": lambda *args, **kwargs: MagicMock(
websets=MagicMock(
monitors=MagicMock(create=lambda *args, **kwargs: mock_monitor)
)
)
}
def _get_client(self, api_key: str) -> AsyncExa:
"""Get Exa client (separated for testing)."""
return AsyncExa(api_key=api_key)
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
aexa = self._get_client(credentials.api_key.get_secret_value())
# Build the payload
payload = {
"websetId": input_data.webset_id,
"cadence": {
"cron": input_data.cron_expression,
"timezone": input_data.timezone,
},
}
# Build behavior configuration based on type
if input_data.behavior_type == MonitorBehaviorType.SEARCH:
behavior_config = {
"query": input_data.search_query or "",
"count": input_data.search_count,
"behavior": input_data.search_behavior.value,
}
if input_data.search_criteria:
behavior_config["criteria"] = [
{"description": c} for c in input_data.search_criteria
]
if input_data.entity_type:
behavior_config["entity"] = {"type": input_data.entity_type}
payload["behavior"] = {
"type": "search",
"config": behavior_config,
}
else:
# REFRESH behavior
payload["behavior"] = {
"type": "refresh",
"config": {
"content": input_data.refresh_content,
"enrichments": input_data.refresh_enrichments,
},
}
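# Illustrative shape of the final payload for a search monitor (not an API excerpt):
# {"websetId": "...", "cadence": {"cron": "0 9 * * 1", "timezone": "Etc/UTC"},
#  "behavior": {"type": "search", "config": {"query": "...", "count": 10, "behavior": "append"}}}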
# Add metadata if provided
if input_data.metadata:
payload["metadata"] = input_data.metadata
sdk_monitor = aexa.websets.monitors.create(params=payload)
monitor = MonitorModel.from_sdk(sdk_monitor)
# Yield all fields
yield "monitor_id", monitor.id
yield "webset_id", monitor.webset_id
yield "status", monitor.status
yield "behavior_type", monitor.behavior_type
yield "next_run_at", monitor.next_run_at
yield "cron_expression", monitor.cron_expression
yield "timezone", monitor.timezone
yield "created_at", monitor.created_at
class ExaGetMonitorBlock(Block):
"""Get the details and status of a monitor."""
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput = exa.credentials_field(
description="The Exa integration requires an API Key."
)
monitor_id: str = SchemaField(
description="The ID of the monitor to retrieve",
placeholder="monitor-id",
)
class Output(BlockSchemaOutput):
monitor_id: str = SchemaField(
description="The unique identifier for the monitor"
)
webset_id: str = SchemaField(description="The webset this monitor belongs to")
status: str = SchemaField(description="Current status of the monitor")
behavior_type: str = SchemaField(description="Type of monitor behavior")
behavior_config: dict = SchemaField(
description="Configuration for the monitor behavior"
)
cron_expression: str = SchemaField(description="The schedule cron expression")
timezone: str = SchemaField(description="The timezone for scheduling")
next_run_at: Optional[str] = SchemaField(
description="When the monitor will next run"
)
last_run: Optional[dict] = SchemaField(
description="Information about the last run"
)
created_at: str = SchemaField(description="When the monitor was created")
updated_at: str = SchemaField(description="When the monitor was last updated")
metadata: dict = SchemaField(description="Metadata attached to the monitor")
def __init__(self):
super().__init__(
id="5c852a2d-d505-4a56-b711-7def8dd14e72",
description="Get the details and status of a webset monitor",
categories={BlockCategory.SEARCH},
input_schema=ExaGetMonitorBlock.Input,
output_schema=ExaGetMonitorBlock.Output,
)
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
# Use AsyncExa SDK
aexa = AsyncExa(api_key=credentials.api_key.get_secret_value())
sdk_monitor = aexa.websets.monitors.get(monitor_id=input_data.monitor_id)
monitor = MonitorModel.from_sdk(sdk_monitor)
# Yield all fields
yield "monitor_id", monitor.id
yield "webset_id", monitor.webset_id
yield "status", monitor.status
yield "behavior_type", monitor.behavior_type
yield "behavior_config", monitor.behavior_config
yield "cron_expression", monitor.cron_expression
yield "timezone", monitor.timezone
yield "next_run_at", monitor.next_run_at
yield "last_run", monitor.last_run
yield "created_at", monitor.created_at
yield "updated_at", monitor.updated_at
yield "metadata", monitor.metadata
class ExaUpdateMonitorBlock(Block):
"""Update a monitor's configuration."""
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput = exa.credentials_field(
description="The Exa integration requires an API Key."
)
monitor_id: str = SchemaField(
description="The ID of the monitor to update",
placeholder="monitor-id",
)
status: Optional[MonitorStatus] = SchemaField(
default=None,
description="New status for the monitor",
)
cron_expression: Optional[str] = SchemaField(
default=None,
description="New cron expression for scheduling",
)
timezone: Optional[str] = SchemaField(
default=None,
description="New timezone for the schedule",
advanced=True,
)
metadata: Optional[dict] = SchemaField(
default=None,
description="New metadata for the monitor",
advanced=True,
)
class Output(BlockSchemaOutput):
monitor_id: str = SchemaField(
description="The unique identifier for the monitor"
)
status: str = SchemaField(description="Updated status of the monitor")
next_run_at: Optional[str] = SchemaField(
description="When the monitor will next run"
)
updated_at: str = SchemaField(description="When the monitor was updated")
success: str = SchemaField(description="Whether the update was successful")
def __init__(self):
super().__init__(
id="245102c3-6af3-4515-a308-c2210b7939d2",
description="Update a monitor's status, schedule, or metadata",
categories={BlockCategory.SEARCH},
input_schema=ExaUpdateMonitorBlock.Input,
output_schema=ExaUpdateMonitorBlock.Output,
)
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
# Use AsyncExa SDK
aexa = AsyncExa(api_key=credentials.api_key.get_secret_value())
# Build update payload
payload = {}
if input_data.status is not None:
payload["status"] = input_data.status.value
if input_data.cron_expression is not None or input_data.timezone is not None:
cadence = {}
if input_data.cron_expression:
cadence["cron"] = input_data.cron_expression
if input_data.timezone:
cadence["timezone"] = input_data.timezone
payload["cadence"] = cadence
if input_data.metadata is not None:
payload["metadata"] = input_data.metadata
sdk_monitor = await aexa.websets.monitors.update(
monitor_id=input_data.monitor_id, params=payload
)
# Convert to our stable model
monitor = MonitorModel.from_sdk(sdk_monitor)
# Yield fields
yield "monitor_id", monitor.id
yield "status", monitor.status
yield "next_run_at", monitor.next_run_at
yield "updated_at", monitor.updated_at
yield "success", "true"
class ExaDeleteMonitorBlock(Block):
"""Delete a monitor from a webset."""
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput = exa.credentials_field(
description="The Exa integration requires an API Key."
)
monitor_id: str = SchemaField(
description="The ID of the monitor to delete",
placeholder="monitor-id",
)
class Output(BlockSchemaOutput):
monitor_id: str = SchemaField(description="The ID of the deleted monitor")
success: str = SchemaField(description="Whether the deletion was successful")
def __init__(self):
super().__init__(
id="f16f9b10-0c4d-4db8-997d-7b96b6026094",
description="Delete a monitor from a webset",
categories={BlockCategory.SEARCH},
input_schema=ExaDeleteMonitorBlock.Input,
output_schema=ExaDeleteMonitorBlock.Output,
)
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
# Use AsyncExa SDK
aexa = AsyncExa(api_key=credentials.api_key.get_secret_value())
deleted_monitor = await aexa.websets.monitors.delete(monitor_id=input_data.monitor_id)
yield "monitor_id", deleted_monitor.id
yield "success", "true"
class ExaListMonitorsBlock(Block):
"""List all monitors with pagination."""
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput = exa.credentials_field(
description="The Exa integration requires an API Key."
)
webset_id: Optional[str] = SchemaField(
default=None,
description="Filter monitors by webset ID",
placeholder="webset-id",
)
limit: int = SchemaField(
default=25,
description="Number of monitors to return",
ge=1,
le=100,
)
cursor: Optional[str] = SchemaField(
default=None,
description="Cursor for pagination",
advanced=True,
)
class Output(BlockSchemaOutput):
monitors: list[dict] = SchemaField(description="List of monitors")
monitor: dict = SchemaField(
description="Individual monitor (yielded for each monitor)"
)
has_more: bool = SchemaField(
description="Whether there are more monitors to paginate through"
)
next_cursor: Optional[str] = SchemaField(
description="Cursor for the next page of results"
)
def __init__(self):
super().__init__(
id="f06e2b38-5397-4e8f-aa85-491149dd98df",
description="List all monitors with optional webset filtering",
categories={BlockCategory.SEARCH},
input_schema=ExaListMonitorsBlock.Input,
output_schema=ExaListMonitorsBlock.Output,
)
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
# Use AsyncExa SDK
aexa = AsyncExa(api_key=credentials.api_key.get_secret_value())
response = await aexa.websets.monitors.list(
cursor=input_data.cursor,
limit=input_data.limit,
webset_id=input_data.webset_id,
)
# Convert SDK monitors to our stable models
monitors = [MonitorModel.from_sdk(m) for m in response.data]
# Yield the full list
yield "monitors", [m.model_dump() for m in monitors]
# Yield individual monitors for graph chaining
for monitor in monitors:
yield "monitor", monitor.model_dump()
# Yield pagination metadata
yield "has_more", response.has_more
yield "next_cursor", response.next_cursor

View File

@@ -0,0 +1,600 @@
"""
Exa Websets Polling Blocks
This module provides dedicated polling blocks for waiting on webset operations
to complete, with progress tracking and timeout management.
"""
import asyncio
import time
from enum import Enum
from typing import Any, Dict
from exa_py import AsyncExa
from pydantic import BaseModel
from backend.sdk import (
APIKeyCredentials,
Block,
BlockCategory,
BlockOutput,
BlockSchemaInput,
BlockSchemaOutput,
CredentialsMetaInput,
SchemaField,
)
from ._config import exa
# Import WebsetItemModel for use in enrichment samples
# This is safe as websets_items doesn't import from websets_polling
from .websets_items import WebsetItemModel
# Model for sample enrichment data
class SampleEnrichmentModel(BaseModel):
"""Sample enrichment result for display."""
item_id: str
item_title: str
enrichment_data: Dict[str, Any]
class WebsetTargetStatus(str, Enum):
IDLE = "idle"
COMPLETED = "completed"
RUNNING = "running"
PAUSED = "paused"
ANY_COMPLETE = "any_complete" # Either idle or completed
class ExaWaitForWebsetBlock(Block):
"""Wait for a webset to reach a specific status with progress tracking."""
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput = exa.credentials_field(
description="The Exa integration requires an API Key."
)
webset_id: str = SchemaField(
description="The ID or external ID of the Webset to monitor",
placeholder="webset-id-or-external-id",
)
target_status: WebsetTargetStatus = SchemaField(
default=WebsetTargetStatus.IDLE,
description="Status to wait for (idle=all operations complete, completed=search done, running=actively processing)",
)
timeout: int = SchemaField(
default=300,
description="Maximum time to wait in seconds",
ge=1,
le=1800, # 30 minutes max
)
check_interval: int = SchemaField(
default=5,
description="Initial interval between status checks in seconds",
advanced=True,
ge=1,
le=60,
)
max_interval: int = SchemaField(
default=30,
description="Maximum interval between checks (for exponential backoff)",
advanced=True,
ge=5,
le=120,
)
include_progress: bool = SchemaField(
default=True,
description="Include detailed progress information in output",
)
class Output(BlockSchemaOutput):
webset_id: str = SchemaField(description="The webset ID that was monitored")
final_status: str = SchemaField(description="The final status of the webset")
elapsed_time: float = SchemaField(description="Total time elapsed in seconds")
item_count: int = SchemaField(description="Number of items found")
search_progress: dict = SchemaField(
description="Detailed search progress information"
)
enrichment_progress: dict = SchemaField(
description="Detailed enrichment progress information"
)
timed_out: bool = SchemaField(description="Whether the operation timed out")
def __init__(self):
super().__init__(
id="619d71e8-b72a-434d-8bd4-23376dd0342c",
description="Wait for a webset to reach a specific status with progress tracking",
categories={BlockCategory.SEARCH},
input_schema=ExaWaitForWebsetBlock.Input,
output_schema=ExaWaitForWebsetBlock.Output,
)
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
start_time = time.time()
aexa = AsyncExa(api_key=credentials.api_key.get_secret_value())
try:
if input_data.target_status in [
WebsetTargetStatus.IDLE,
WebsetTargetStatus.ANY_COMPLETE,
]:
final_webset = await aexa.websets.wait_until_idle(
id=input_data.webset_id,
timeout=input_data.timeout,
poll_interval=input_data.check_interval,
)
elapsed = time.time() - start_time
status_str = (
final_webset.status.value
if hasattr(final_webset.status, "value")
else str(final_webset.status)
)
item_count = 0
if final_webset.searches:
for search in final_webset.searches:
if search.progress:
item_count += search.progress.found
# Extract progress if requested
search_progress = {}
enrichment_progress = {}
if input_data.include_progress:
webset_dict = final_webset.model_dump(
by_alias=True, exclude_none=True
)
search_progress = self._extract_search_progress(webset_dict)
enrichment_progress = self._extract_enrichment_progress(webset_dict)
yield "webset_id", input_data.webset_id
yield "final_status", status_str
yield "elapsed_time", elapsed
yield "item_count", item_count
if input_data.include_progress:
yield "search_progress", search_progress
yield "enrichment_progress", enrichment_progress
yield "timed_out", False
else:
# For other status targets, manually poll
interval = input_data.check_interval
while time.time() - start_time < input_data.timeout:
# Get current webset status
webset = await aexa.websets.get(id=input_data.webset_id)
current_status = (
webset.status.value
if hasattr(webset.status, "value")
else str(webset.status)
)
# Check if target status reached
if current_status == input_data.target_status.value:
elapsed = time.time() - start_time
# Estimate item count from search progress
item_count = 0
if webset.searches:
for search in webset.searches:
if search.progress:
item_count += search.progress.found
search_progress = {}
enrichment_progress = {}
if input_data.include_progress:
webset_dict = webset.model_dump(
by_alias=True, exclude_none=True
)
search_progress = self._extract_search_progress(webset_dict)
enrichment_progress = self._extract_enrichment_progress(
webset_dict
)
yield "webset_id", input_data.webset_id
yield "final_status", current_status
yield "elapsed_time", elapsed
yield "item_count", item_count
if input_data.include_progress:
yield "search_progress", search_progress
yield "enrichment_progress", enrichment_progress
yield "timed_out", False
return
# Wait before next check with exponential backoff
await asyncio.sleep(interval)
interval = min(interval * 1.5, input_data.max_interval)
# Timeout reached
elapsed = time.time() - start_time
webset = await aexa.websets.get(id=input_data.webset_id)
final_status = (
webset.status.value
if hasattr(webset.status, "value")
else str(webset.status)
)
item_count = 0
if webset.searches:
for search in webset.searches:
if search.progress:
item_count += search.progress.found
search_progress = {}
enrichment_progress = {}
if input_data.include_progress:
webset_dict = webset.model_dump(by_alias=True, exclude_none=True)
search_progress = self._extract_search_progress(webset_dict)
enrichment_progress = self._extract_enrichment_progress(webset_dict)
yield "webset_id", input_data.webset_id
yield "final_status", final_status
yield "elapsed_time", elapsed
yield "item_count", item_count
if input_data.include_progress:
yield "search_progress", search_progress
yield "enrichment_progress", enrichment_progress
yield "timed_out", True
except asyncio.TimeoutError:
raise ValueError(
f"Polling timed out after {input_data.timeout} seconds"
) from None
def _extract_search_progress(self, webset_data: dict) -> dict:
"""Extract search progress information from webset data."""
progress = {}
searches = webset_data.get("searches", [])
for idx, search in enumerate(searches):
search_id = search.get("id", f"search_{idx}")
search_progress = search.get("progress", {})
progress[search_id] = {
"status": search.get("status", "unknown"),
"found": search_progress.get("found", 0),
"analyzed": search_progress.get("analyzed", 0),
"completion": search_progress.get("completion", 0),
"time_left": search_progress.get("timeLeft", 0),
}
return progress
def _extract_enrichment_progress(self, webset_data: dict) -> dict:
"""Extract enrichment progress information from webset data."""
progress = {}
enrichments = webset_data.get("enrichments", [])
for idx, enrichment in enumerate(enrichments):
enrich_id = enrichment.get("id", f"enrichment_{idx}")
progress[enrich_id] = {
"status": enrichment.get("status", "unknown"),
"title": enrichment.get("title", ""),
"description": enrichment.get("description", ""),
}
return progress
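# Editor's sketch of the backoff pattern used by the polling loops above
# (illustrative only): the sleep interval grows geometrically by 1.5x per check
# and is capped at max_interval, e.g. 5s -> 7.5s -> 11.25s -> ... -> 30s.
#
# async def _poll_until(check, interval: float, max_interval: float, timeout: float) -> bool:
#     start = time.time()
#     while time.time() - start < timeout:
#         if await check():
#             return True
#         await asyncio.sleep(interval)
#         interval = min(interval * 1.5, max_interval)
#     return False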
class ExaWaitForSearchBlock(Block):
"""Wait for a specific webset search to complete with progress tracking."""
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput = exa.credentials_field(
description="The Exa integration requires an API Key."
)
webset_id: str = SchemaField(
description="The ID or external ID of the Webset",
placeholder="webset-id-or-external-id",
)
search_id: str = SchemaField(
description="The ID of the search to monitor",
placeholder="search-id",
)
timeout: int = SchemaField(
default=300,
description="Maximum time to wait in seconds",
ge=1,
le=1800,
)
check_interval: int = SchemaField(
default=5,
description="Initial interval between status checks in seconds",
advanced=True,
ge=1,
le=60,
)
class Output(BlockSchemaOutput):
search_id: str = SchemaField(description="The search ID that was monitored")
final_status: str = SchemaField(description="The final status of the search")
items_found: int = SchemaField(
description="Number of items found by the search"
)
items_analyzed: int = SchemaField(description="Number of items analyzed")
completion_percentage: int = SchemaField(
description="Completion percentage (0-100)"
)
elapsed_time: float = SchemaField(description="Total time elapsed in seconds")
recall_info: dict = SchemaField(
description="Information about expected results and confidence"
)
timed_out: bool = SchemaField(description="Whether the operation timed out")
def __init__(self):
super().__init__(
id="14da21ae-40a1-41bc-a111-c8e5c9ef012b",
description="Wait for a specific webset search to complete with progress tracking",
categories={BlockCategory.SEARCH},
input_schema=ExaWaitForSearchBlock.Input,
output_schema=ExaWaitForSearchBlock.Output,
)
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
start_time = time.time()
interval = input_data.check_interval
max_interval = 30
# Use AsyncExa SDK
aexa = AsyncExa(api_key=credentials.api_key.get_secret_value())
try:
while time.time() - start_time < input_data.timeout:
# Get current search status using SDK
search = await aexa.websets.searches.get(
webset_id=input_data.webset_id, id=input_data.search_id
)
# Extract status
status = (
search.status.value
if hasattr(search.status, "value")
else str(search.status)
)
# Check if search is complete
if status in ["completed", "failed", "canceled"]:
elapsed = time.time() - start_time
# Extract progress information
progress_dict = {}
if search.progress:
progress_dict = search.progress.model_dump(
by_alias=True, exclude_none=True
)
# Extract recall information
recall_info = {}
if search.recall:
recall_dict = search.recall.model_dump(
by_alias=True, exclude_none=True
)
expected = recall_dict.get("expected", {})
recall_info = {
"expected_total": expected.get("total", 0),
"confidence": expected.get("confidence", ""),
"min_expected": expected.get("bounds", {}).get("min", 0),
"max_expected": expected.get("bounds", {}).get("max", 0),
"reasoning": recall_dict.get("reasoning", ""),
}
yield "search_id", input_data.search_id
yield "final_status", status
yield "items_found", progress_dict.get("found", 0)
yield "items_analyzed", progress_dict.get("analyzed", 0)
yield "completion_percentage", progress_dict.get("completion", 0)
yield "elapsed_time", elapsed
yield "recall_info", recall_info
yield "timed_out", False
return
# Wait before next check with exponential backoff
await asyncio.sleep(interval)
interval = min(interval * 1.5, max_interval)
# Timeout reached
elapsed = time.time() - start_time
# Get last known status
search = await aexa.websets.searches.get(
webset_id=input_data.webset_id, id=input_data.search_id
)
final_status = (
search.status.value
if hasattr(search.status, "value")
else str(search.status)
)
progress_dict = {}
if search.progress:
progress_dict = search.progress.model_dump(
by_alias=True, exclude_none=True
)
yield "search_id", input_data.search_id
yield "final_status", final_status
yield "items_found", progress_dict.get("found", 0)
yield "items_analyzed", progress_dict.get("analyzed", 0)
yield "completion_percentage", progress_dict.get("completion", 0)
yield "elapsed_time", elapsed
yield "timed_out", True
except asyncio.TimeoutError:
raise ValueError(
f"Search polling timed out after {input_data.timeout} seconds"
) from None
class ExaWaitForEnrichmentBlock(Block):
"""Wait for a webset enrichment to complete with progress tracking."""
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput = exa.credentials_field(
description="The Exa integration requires an API Key."
)
webset_id: str = SchemaField(
description="The ID or external ID of the Webset",
placeholder="webset-id-or-external-id",
)
enrichment_id: str = SchemaField(
description="The ID of the enrichment to monitor",
placeholder="enrichment-id",
)
timeout: int = SchemaField(
default=300,
description="Maximum time to wait in seconds",
ge=1,
le=1800,
)
check_interval: int = SchemaField(
default=5,
description="Initial interval between status checks in seconds",
advanced=True,
ge=1,
le=60,
)
sample_results: bool = SchemaField(
default=True,
description="Include sample enrichment results in output",
)
class Output(BlockSchemaOutput):
enrichment_id: str = SchemaField(
description="The enrichment ID that was monitored"
)
final_status: str = SchemaField(
description="The final status of the enrichment"
)
items_enriched: int = SchemaField(
description="Number of items successfully enriched"
)
enrichment_title: str = SchemaField(
description="Title/description of the enrichment"
)
elapsed_time: float = SchemaField(description="Total time elapsed in seconds")
sample_data: list[SampleEnrichmentModel] = SchemaField(
description="Sample of enriched data (if requested)"
)
timed_out: bool = SchemaField(description="Whether the operation timed out")
def __init__(self):
super().__init__(
id="a11865c3-ac80-4721-8a40-ac4e3b71a558",
description="Wait for a webset enrichment to complete with progress tracking",
categories={BlockCategory.SEARCH},
input_schema=ExaWaitForEnrichmentBlock.Input,
output_schema=ExaWaitForEnrichmentBlock.Output,
)
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
start_time = time.time()
interval = input_data.check_interval
max_interval = 30
# Use AsyncExa SDK
aexa = AsyncExa(api_key=credentials.api_key.get_secret_value())
try:
while time.time() - start_time < input_data.timeout:
# Get current enrichment status using SDK
enrichment = await aexa.websets.enrichments.get(
webset_id=input_data.webset_id, id=input_data.enrichment_id
)
# Extract status
status = (
enrichment.status.value
if hasattr(enrichment.status, "value")
else str(enrichment.status)
)
# Check if enrichment is complete
if status in ["completed", "failed", "canceled"]:
elapsed = time.time() - start_time
# Get sample enriched items if requested
sample_data = []
items_enriched = 0
if input_data.sample_results and status == "completed":
sample_data, items_enriched = (
await self._get_sample_enrichments(
input_data.webset_id, input_data.enrichment_id, aexa
)
)
yield "enrichment_id", input_data.enrichment_id
yield "final_status", status
yield "items_enriched", items_enriched
yield "enrichment_title", enrichment.title or enrichment.description or ""
yield "elapsed_time", elapsed
if input_data.sample_results:
yield "sample_data", sample_data
yield "timed_out", False
return
# Wait before next check with exponential backoff
await asyncio.sleep(interval)
interval = min(interval * 1.5, max_interval)
# Timeout reached
elapsed = time.time() - start_time
# Get last known status
enrichment = await aexa.websets.enrichments.get(
webset_id=input_data.webset_id, id=input_data.enrichment_id
)
final_status = (
enrichment.status.value
if hasattr(enrichment.status, "value")
else str(enrichment.status)
)
title = enrichment.title or enrichment.description or ""
yield "enrichment_id", input_data.enrichment_id
yield "final_status", final_status
yield "items_enriched", 0
yield "enrichment_title", title
yield "elapsed_time", elapsed
yield "timed_out", True
except asyncio.TimeoutError:
raise ValueError(
f"Enrichment polling timed out after {input_data.timeout} seconds"
) from None
async def _get_sample_enrichments(
self, webset_id: str, enrichment_id: str, aexa: AsyncExa
) -> tuple[list[SampleEnrichmentModel], int]:
"""Get sample enriched data and count."""
# Get a few items to see enrichment results using SDK
response = await aexa.websets.items.list(webset_id=webset_id, limit=5)
sample_data: list[SampleEnrichmentModel] = []
enriched_count = 0
for sdk_item in response.data:
# Convert to our WebsetItemModel first
item = WebsetItemModel.from_sdk(sdk_item)
# Check if this item has the enrichment we're looking for
if enrichment_id in item.enrichments:
enriched_count += 1
enrich_model = item.enrichments[enrichment_id]
# Create sample using our typed model
sample = SampleEnrichmentModel(
item_id=item.id,
item_title=item.title,
enrichment_data=enrich_model.model_dump(exclude_none=True),
)
sample_data.append(sample)
return sample_data, enriched_count
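# Illustrative sample (editor's example; values hypothetical): each entry
# serializes to a plain dict for downstream blocks, e.g.
#     SampleEnrichmentModel(
#         item_id="item_123",
#         item_title="Acme Corp",
#         enrichment_data={"status": "completed", "result": ["..."]},
#     ).model_dump()
# Note the enriched count is taken from the first page only (limit=5), so it is
# a sample size, not a webset-wide total.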

View File

@@ -0,0 +1,650 @@
"""
Exa Websets Search Management Blocks
This module provides blocks for creating and managing searches within websets,
including adding new searches, checking status, and canceling operations.
"""
from enum import Enum
from typing import Any, Dict, List, Optional
from exa_py import AsyncExa
from exa_py.websets.types import WebsetSearch as SdkWebsetSearch
from pydantic import BaseModel
from backend.sdk import (
APIKeyCredentials,
Block,
BlockCategory,
BlockOutput,
BlockSchemaInput,
BlockSchemaOutput,
CredentialsMetaInput,
SchemaField,
)
from ._config import exa
# Mirrored model for stability
class WebsetSearchModel(BaseModel):
"""Stable output model mirroring SDK WebsetSearch."""
id: str
webset_id: str
status: str
query: str
entity_type: str
criteria: List[Dict[str, Any]]
count: int
behavior: str
progress: Dict[str, Any]
recall: Optional[Dict[str, Any]]
created_at: str
updated_at: str
canceled_at: Optional[str]
canceled_reason: Optional[str]
metadata: Dict[str, Any]
@classmethod
def from_sdk(cls, search: SdkWebsetSearch) -> "WebsetSearchModel":
"""Convert SDK WebsetSearch to our stable model."""
# Extract entity type
entity_type = "auto"
if search.entity:
entity_dict = search.entity.model_dump(by_alias=True)
entity_type = entity_dict.get("type", "auto")
# Convert criteria
criteria = [c.model_dump(by_alias=True) for c in search.criteria]
# Convert progress
progress_dict = {}
if search.progress:
progress_dict = search.progress.model_dump(by_alias=True)
# Convert recall
recall_dict = None
if search.recall:
recall_dict = search.recall.model_dump(by_alias=True)
return cls(
id=search.id,
webset_id=search.webset_id,
status=(
search.status.value
if hasattr(search.status, "value")
else str(search.status)
),
query=search.query,
entity_type=entity_type,
criteria=criteria,
count=search.count,
behavior=search.behavior.value if search.behavior else "override",
progress=progress_dict,
recall=recall_dict,
created_at=search.created_at.isoformat() if search.created_at else "",
updated_at=search.updated_at.isoformat() if search.updated_at else "",
canceled_at=search.canceled_at.isoformat() if search.canceled_at else None,
canceled_reason=(
search.canceled_reason.value if search.canceled_reason else None
),
metadata=search.metadata if search.metadata else {},
)
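# Editor's note (illustrative): from_sdk flattens nested SDK objects into
# JSON-safe primitives, so a converted search dumps cleanly, e.g.
#     WebsetSearchModel.from_sdk(sdk_search).model_dump()
#     # -> {"id": "...", "status": "running", "entity_type": "company", ...}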
class SearchBehavior(str, Enum):
"""Behavior for how new search results interact with existing items."""
OVERRIDE = "override" # Replace existing items
APPEND = "append" # Add to existing items
MERGE = "merge" # Merge with existing items
class SearchEntityType(str, Enum):
COMPANY = "company"
PERSON = "person"
ARTICLE = "article"
RESEARCH_PAPER = "research_paper"
CUSTOM = "custom"
AUTO = "auto"
class ExaCreateWebsetSearchBlock(Block):
"""Add a new search to an existing webset."""
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput = exa.credentials_field(
description="The Exa integration requires an API Key."
)
webset_id: str = SchemaField(
description="The ID or external ID of the Webset",
placeholder="webset-id-or-external-id",
)
query: str = SchemaField(
description="Search query describing what to find",
placeholder="Engineering managers at Fortune 500 companies",
)
count: int = SchemaField(
default=10,
description="Number of items to find",
ge=1,
le=1000,
)
# Entity configuration
entity_type: SearchEntityType = SchemaField(
default=SearchEntityType.AUTO,
description="Type of entity to search for",
)
entity_description: Optional[str] = SchemaField(
default=None,
description="Description for custom entity type",
advanced=True,
)
# Criteria for verification
criteria: list[str] = SchemaField(
default_factory=list,
description="List of criteria that items must meet. If not provided, auto-detected from query.",
advanced=True,
)
# Advanced search options
behavior: SearchBehavior = SchemaField(
default=SearchBehavior.APPEND,
description="How new results interact with existing items",
advanced=True,
)
recall: bool = SchemaField(
default=True,
description="Enable recall estimation for expected results",
advanced=True,
)
# Exclude sources
exclude_source_ids: list[str] = SchemaField(
default_factory=list,
description="IDs of imports/websets to exclude from results",
advanced=True,
)
exclude_source_types: list[str] = SchemaField(
default_factory=list,
description="Types of sources to exclude ('import' or 'webset')",
advanced=True,
)
# Scope sources
scope_source_ids: list[str] = SchemaField(
default_factory=list,
description="IDs of imports/websets to limit search scope to",
advanced=True,
)
scope_source_types: list[str] = SchemaField(
default_factory=list,
description="Types of scope sources ('import' or 'webset')",
advanced=True,
)
scope_relationships: list[str] = SchemaField(
default_factory=list,
description="Relationship definitions for hop searches",
advanced=True,
)
scope_relationship_limits: list[int] = SchemaField(
default_factory=list,
description="Limits on related entities to find",
advanced=True,
)
metadata: Optional[dict] = SchemaField(
default=None,
description="Metadata to attach to the search",
advanced=True,
)
# Polling options
wait_for_completion: bool = SchemaField(
default=False,
description="Wait for the search to complete before returning",
)
polling_timeout: int = SchemaField(
default=300,
description="Maximum time to wait for completion in seconds",
advanced=True,
ge=1,
le=600,
)
class Output(BlockSchemaOutput):
search_id: str = SchemaField(
description="The unique identifier for the created search"
)
webset_id: str = SchemaField(description="The webset this search belongs to")
status: str = SchemaField(description="Current status of the search")
query: str = SchemaField(description="The search query")
expected_results: dict = SchemaField(
description="Recall estimation of expected results"
)
items_found: Optional[int] = SchemaField(
description="Number of items found (if wait_for_completion was True)"
)
completion_time: Optional[float] = SchemaField(
description="Time taken to complete in seconds (if wait_for_completion was True)"
)
def __init__(self):
super().__init__(
id="342ff776-2e2c-4cdb-b392-4eeb34b21d5f",
description="Add a new search to an existing webset to find more items",
categories={BlockCategory.SEARCH},
input_schema=ExaCreateWebsetSearchBlock.Input,
output_schema=ExaCreateWebsetSearchBlock.Output,
)
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
import time
# Build the payload
payload = {
"query": input_data.query,
"count": input_data.count,
"behavior": input_data.behavior.value,
"recall": input_data.recall,
}
# Add entity configuration
if input_data.entity_type != SearchEntityType.AUTO:
entity = {"type": input_data.entity_type.value}
if (
input_data.entity_type == SearchEntityType.CUSTOM
and input_data.entity_description
):
entity["description"] = input_data.entity_description
payload["entity"] = entity
# Add criteria if provided
if input_data.criteria:
payload["criteria"] = [{"description": c} for c in input_data.criteria]
# Add exclude sources
if input_data.exclude_source_ids:
exclude_list = []
for idx, src_id in enumerate(input_data.exclude_source_ids):
src_type = "import"
if input_data.exclude_source_types and idx < len(
input_data.exclude_source_types
):
src_type = input_data.exclude_source_types[idx]
exclude_list.append({"source": src_type, "id": src_id})
payload["exclude"] = exclude_list
# Add scope sources
if input_data.scope_source_ids:
scope_list: list[dict[str, Any]] = []
for idx, src_id in enumerate(input_data.scope_source_ids):
scope_item: dict[str, Any] = {"source": "import", "id": src_id}
if input_data.scope_source_types and idx < len(
input_data.scope_source_types
):
scope_item["source"] = input_data.scope_source_types[idx]
# Add relationship if provided
if input_data.scope_relationships and idx < len(
input_data.scope_relationships
):
relationship: dict[str, Any] = {
"definition": input_data.scope_relationships[idx]
}
if input_data.scope_relationship_limits and idx < len(
input_data.scope_relationship_limits
):
relationship["limit"] = input_data.scope_relationship_limits[
idx
]
scope_item["relationship"] = relationship
scope_list.append(scope_item)
payload["scope"] = scope_list
# Add metadata if provided
if input_data.metadata:
payload["metadata"] = input_data.metadata
start_time = time.time()
aexa = AsyncExa(api_key=credentials.api_key.get_secret_value())
sdk_search = await aexa.websets.searches.create(
webset_id=input_data.webset_id, params=payload
)
search_id = sdk_search.id
status = (
sdk_search.status.value
if hasattr(sdk_search.status, "value")
else str(sdk_search.status)
)
# Extract expected results from recall
expected_results = {}
if sdk_search.recall:
recall_dict = sdk_search.recall.model_dump(by_alias=True)
expected = recall_dict.get("expected", {})
expected_results = {
"total": expected.get("total", 0),
"confidence": expected.get("confidence", ""),
"min": expected.get("bounds", {}).get("min", 0),
"max": expected.get("bounds", {}).get("max", 0),
"reasoning": recall_dict.get("reasoning", ""),
}
# If wait_for_completion is True, poll for completion
if input_data.wait_for_completion:
import asyncio
poll_interval = 5
max_interval = 30
poll_start = time.time()
while time.time() - poll_start < input_data.polling_timeout:
current_search = await aexa.websets.searches.get(
webset_id=input_data.webset_id, id=search_id
)
current_status = (
current_search.status.value
if hasattr(current_search.status, "value")
else str(current_search.status)
)
if current_status in ["completed", "failed", "cancelled"]:
items_found = 0
if current_search.progress:
items_found = current_search.progress.found
completion_time = time.time() - start_time
yield "search_id", search_id
yield "webset_id", input_data.webset_id
yield "status", current_status
yield "query", input_data.query
yield "expected_results", expected_results
yield "items_found", items_found
yield "completion_time", completion_time
return
await asyncio.sleep(poll_interval)
poll_interval = min(poll_interval * 1.5, max_interval)
# Timeout - yield what we have
yield "search_id", search_id
yield "webset_id", input_data.webset_id
yield "status", status
yield "query", input_data.query
yield "expected_results", expected_results
yield "items_found", 0
yield "completion_time", time.time() - start_time
else:
yield "search_id", search_id
yield "webset_id", input_data.webset_id
yield "status", status
yield "query", input_data.query
yield "expected_results", expected_results
class ExaGetWebsetSearchBlock(Block):
"""Get the status and details of a webset search."""
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput = exa.credentials_field(
description="The Exa integration requires an API Key."
)
webset_id: str = SchemaField(
description="The ID or external ID of the Webset",
placeholder="webset-id-or-external-id",
)
search_id: str = SchemaField(
description="The ID of the search to retrieve",
placeholder="search-id",
)
class Output(BlockSchemaOutput):
search_id: str = SchemaField(description="The unique identifier for the search")
status: str = SchemaField(description="Current status of the search")
query: str = SchemaField(description="The search query")
entity_type: str = SchemaField(description="Type of entity being searched")
criteria: list[dict] = SchemaField(description="Criteria used for verification")
progress: dict = SchemaField(description="Search progress information")
recall: dict = SchemaField(description="Recall estimation information")
created_at: str = SchemaField(description="When the search was created")
updated_at: str = SchemaField(description="When the search was last updated")
canceled_at: Optional[str] = SchemaField(
description="When the search was canceled (if applicable)"
)
canceled_reason: Optional[str] = SchemaField(
description="Reason for cancellation (if applicable)"
)
metadata: dict = SchemaField(description="Metadata attached to the search")
def __init__(self):
super().__init__(
id="4fa3e627-a0ff-485f-8732-52148051646c",
description="Get the status and details of a webset search",
categories={BlockCategory.SEARCH},
input_schema=ExaGetWebsetSearchBlock.Input,
output_schema=ExaGetWebsetSearchBlock.Output,
)
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
# Use AsyncExa SDK
aexa = AsyncExa(api_key=credentials.api_key.get_secret_value())
sdk_search = await aexa.websets.searches.get(
webset_id=input_data.webset_id, id=input_data.search_id
)
search = WebsetSearchModel.from_sdk(sdk_search)
# Extract progress information
progress_info = {
"found": search.progress.get("found", 0),
"analyzed": search.progress.get("analyzed", 0),
"completion": search.progress.get("completion", 0),
"time_left": search.progress.get("timeLeft", 0),
}
# Extract recall information
recall_data = {}
if search.recall:
expected = search.recall.get("expected", {})
recall_data = {
"expected_total": expected.get("total", 0),
"confidence": expected.get("confidence", ""),
"min_expected": expected.get("bounds", {}).get("min", 0),
"max_expected": expected.get("bounds", {}).get("max", 0),
"reasoning": search.recall.get("reasoning", ""),
}
yield "search_id", search.id
yield "status", search.status
yield "query", search.query
yield "entity_type", search.entity_type
yield "criteria", search.criteria
yield "progress", progress_info
yield "recall", recall_data
yield "created_at", search.created_at
yield "updated_at", search.updated_at
yield "canceled_at", search.canceled_at
yield "canceled_reason", search.canceled_reason
yield "metadata", search.metadata
class ExaCancelWebsetSearchBlock(Block):
"""Cancel a running webset search."""
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput = exa.credentials_field(
description="The Exa integration requires an API Key."
)
webset_id: str = SchemaField(
description="The ID or external ID of the Webset",
placeholder="webset-id-or-external-id",
)
search_id: str = SchemaField(
description="The ID of the search to cancel",
placeholder="search-id",
)
class Output(BlockSchemaOutput):
search_id: str = SchemaField(description="The ID of the canceled search")
status: str = SchemaField(description="Status after cancellation")
items_found_before_cancel: int = SchemaField(
description="Number of items found before cancellation"
)
success: str = SchemaField(
description="Whether the cancellation was successful"
)
def __init__(self):
super().__init__(
id="74ef9f1e-ae89-4c7f-9d7d-d217214815b4",
description="Cancel a running webset search",
categories={BlockCategory.SEARCH},
input_schema=ExaCancelWebsetSearchBlock.Input,
output_schema=ExaCancelWebsetSearchBlock.Output,
)
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
# Use AsyncExa SDK
aexa = AsyncExa(api_key=credentials.api_key.get_secret_value())
canceled_search = await aexa.websets.searches.cancel(
webset_id=input_data.webset_id, id=input_data.search_id
)
# Extract items found before cancellation
items_found = 0
if canceled_search.progress:
items_found = canceled_search.progress.found
status = (
canceled_search.status.value
if hasattr(canceled_search.status, "value")
else str(canceled_search.status)
)
yield "search_id", canceled_search.id
yield "status", status
yield "items_found_before_cancel", items_found
yield "success", "true"
class ExaFindOrCreateSearchBlock(Block):
"""Find existing search by query or create new one (prevents duplicate searches)."""
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput = exa.credentials_field(
description="The Exa integration requires an API Key."
)
webset_id: str = SchemaField(
description="The ID or external ID of the Webset",
placeholder="webset-id-or-external-id",
)
query: str = SchemaField(
description="Search query to find or create",
placeholder="AI companies in San Francisco",
)
count: int = SchemaField(
default=10,
description="Number of items to find (only used if creating new search)",
ge=1,
le=1000,
)
entity_type: SearchEntityType = SchemaField(
default=SearchEntityType.AUTO,
description="Entity type (only used if creating)",
advanced=True,
)
behavior: SearchBehavior = SchemaField(
default=SearchBehavior.OVERRIDE,
description="Search behavior (only used if creating)",
advanced=True,
)
class Output(BlockSchemaOutput):
search_id: str = SchemaField(description="The search ID (existing or new)")
webset_id: str = SchemaField(description="The webset ID")
status: str = SchemaField(description="Current search status")
query: str = SchemaField(description="The search query")
was_created: bool = SchemaField(
description="True if search was newly created, False if already existed"
)
items_found: int = SchemaField(
description="Number of items found (0 if still running)"
)
def __init__(self):
super().__init__(
id="cbdb05ac-cb73-4b03-a493-6d34e9a011da",
description="Find existing search by query or create new - prevents duplicate searches in workflows",
categories={BlockCategory.SEARCH},
input_schema=ExaFindOrCreateSearchBlock.Input,
output_schema=ExaFindOrCreateSearchBlock.Output,
)
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
# Use AsyncExa SDK
aexa = AsyncExa(api_key=credentials.api_key.get_secret_value())
# Get webset to check existing searches
webset = await aexa.websets.get(id=input_data.webset_id)
# Look for existing search with same query
existing_search = None
if webset.searches:
for search in webset.searches:
if search.query.strip().lower() == input_data.query.strip().lower():
existing_search = search
break
if existing_search:
# Found existing search
search = WebsetSearchModel.from_sdk(existing_search)
yield "search_id", search.id
yield "webset_id", input_data.webset_id
yield "status", search.status
yield "query", search.query
yield "was_created", False
yield "items_found", search.progress.get("found", 0)
else:
# Create new search
payload: Dict[str, Any] = {
"query": input_data.query,
"count": input_data.count,
"behavior": input_data.behavior.value,
}
# Add entity if not auto
if input_data.entity_type != SearchEntityType.AUTO:
payload["entity"] = {"type": input_data.entity_type.value}
sdk_search = await aexa.websets.searches.create(
webset_id=input_data.webset_id, params=payload
)
search = WebsetSearchModel.from_sdk(sdk_search)
yield "search_id", search.id
yield "webset_id", input_data.webset_id
yield "status", search.status
yield "query", search.query
yield "was_created", True
yield "items_found", 0 # Newly created, no items yet

View File

@@ -10,7 +10,13 @@ from backend.blocks.fal._auth import (
     FalCredentialsField,
     FalCredentialsInput,
 )
-from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
+from backend.data.block import (
+    Block,
+    BlockCategory,
+    BlockOutput,
+    BlockSchemaInput,
+    BlockSchemaOutput,
+)
 from backend.data.model import SchemaField
 from backend.util.request import ClientResponseError, Requests
@@ -24,7 +30,7 @@ class FalModel(str, Enum):
 class AIVideoGeneratorBlock(Block):
-    class Input(BlockSchema):
+    class Input(BlockSchemaInput):
         prompt: str = SchemaField(
             description="Description of the video to generate.",
             placeholder="A dog running in a field.",
@@ -36,7 +42,7 @@ class AIVideoGeneratorBlock(Block):
         )
         credentials: FalCredentialsInput = FalCredentialsField()
-    class Output(BlockSchema):
+    class Output(BlockSchemaOutput):
         video_url: str = SchemaField(description="The URL of the generated video.")
         error: str = SchemaField(
             description="Error message if video generation failed."

View File

@@ -9,7 +9,8 @@ from backend.sdk import (
     Block,
     BlockCategory,
     BlockOutput,
-    BlockSchema,
+    BlockSchemaInput,
+    BlockSchemaOutput,
     CredentialsMetaInput,
     SchemaField,
 )
@@ -19,7 +20,7 @@ from ._format_utils import convert_to_format_options
 class FirecrawlCrawlBlock(Block):
-    class Input(BlockSchema):
+    class Input(BlockSchemaInput):
         credentials: CredentialsMetaInput = firecrawl.credentials_field()
         url: str = SchemaField(description="The URL to crawl")
         limit: int = SchemaField(description="The number of pages to crawl", default=10)
@@ -39,7 +40,7 @@ class FirecrawlCrawlBlock(Block):
             description="The format of the crawl", default=[ScrapeFormat.MARKDOWN]
         )
-    class Output(BlockSchema):
+    class Output(BlockSchemaOutput):
         data: list[dict[str, Any]] = SchemaField(description="The result of the crawl")
         markdown: str = SchemaField(description="The markdown of the crawl")
         html: str = SchemaField(description="The html of the crawl")
@@ -55,6 +56,10 @@ class FirecrawlCrawlBlock(Block):
         change_tracking: dict[str, Any] = SchemaField(
             description="The change tracking of the crawl"
         )
+        error: str = SchemaField(
+            description="Error message if the crawl failed",
+            default="",
+        )
     def __init__(self):
         super().__init__(

View File

@@ -9,7 +9,8 @@ from backend.sdk import (
     BlockCost,
     BlockCostType,
     BlockOutput,
-    BlockSchema,
+    BlockSchemaInput,
+    BlockSchemaOutput,
     CredentialsMetaInput,
     SchemaField,
     cost,
@@ -20,7 +21,7 @@ from ._config import firecrawl
 @cost(BlockCost(2, BlockCostType.RUN))
 class FirecrawlExtractBlock(Block):
-    class Input(BlockSchema):
+    class Input(BlockSchemaInput):
         credentials: CredentialsMetaInput = firecrawl.credentials_field()
         urls: list[str] = SchemaField(
             description="The URLs to crawl - at least one is required. Wildcards are supported. (/*)"
@@ -37,8 +38,12 @@ class FirecrawlExtractBlock(Block):
             default=False,
         )
-    class Output(BlockSchema):
+    class Output(BlockSchemaOutput):
         data: dict[str, Any] = SchemaField(description="The result of the crawl")
+        error: str = SchemaField(
+            description="Error message if the extraction failed",
+            default="",
+        )
     def __init__(self):
         super().__init__(

View File

@@ -7,7 +7,8 @@ from backend.sdk import (
     Block,
     BlockCategory,
     BlockOutput,
-    BlockSchema,
+    BlockSchemaInput,
+    BlockSchemaOutput,
     CredentialsMetaInput,
     SchemaField,
 )
@@ -16,16 +17,20 @@ from ._config import firecrawl
 class FirecrawlMapWebsiteBlock(Block):
-    class Input(BlockSchema):
+    class Input(BlockSchemaInput):
         credentials: CredentialsMetaInput = firecrawl.credentials_field()
         url: str = SchemaField(description="The website url to map")
-    class Output(BlockSchema):
+    class Output(BlockSchemaOutput):
         links: list[str] = SchemaField(description="List of URLs found on the website")
         results: list[dict[str, Any]] = SchemaField(
             description="List of search results with url, title, and description"
         )
+        error: str = SchemaField(
+            description="Error message if the map failed",
+            default="",
+        )
     def __init__(self):
         super().__init__(

View File

@@ -8,7 +8,8 @@ from backend.sdk import (
     Block,
     BlockCategory,
     BlockOutput,
-    BlockSchema,
+    BlockSchemaInput,
+    BlockSchemaOutput,
     CredentialsMetaInput,
     SchemaField,
 )
@@ -18,7 +19,7 @@ from ._format_utils import convert_to_format_options
 class FirecrawlScrapeBlock(Block):
-    class Input(BlockSchema):
+    class Input(BlockSchemaInput):
         credentials: CredentialsMetaInput = firecrawl.credentials_field()
         url: str = SchemaField(description="The URL to crawl")
         limit: int = SchemaField(description="The number of pages to crawl", default=10)
@@ -38,7 +39,7 @@ class FirecrawlScrapeBlock(Block):
             description="The format of the crawl", default=[ScrapeFormat.MARKDOWN]
         )
-    class Output(BlockSchema):
+    class Output(BlockSchemaOutput):
         data: dict[str, Any] = SchemaField(description="The result of the crawl")
         markdown: str = SchemaField(description="The markdown of the crawl")
         html: str = SchemaField(description="The html of the crawl")
@@ -54,6 +55,10 @@ class FirecrawlScrapeBlock(Block):
         change_tracking: dict[str, Any] = SchemaField(
             description="The change tracking of the crawl"
         )
+        error: str = SchemaField(
+            description="Error message if the scrape failed",
+            default="",
+        )
     def __init__(self):
         super().__init__(

View File

@@ -9,7 +9,8 @@ from backend.sdk import (
     Block,
     BlockCategory,
     BlockOutput,
-    BlockSchema,
+    BlockSchemaInput,
+    BlockSchemaOutput,
     CredentialsMetaInput,
     SchemaField,
 )
@@ -19,7 +20,7 @@ from ._format_utils import convert_to_format_options
 class FirecrawlSearchBlock(Block):
-    class Input(BlockSchema):
+    class Input(BlockSchemaInput):
         credentials: CredentialsMetaInput = firecrawl.credentials_field()
         query: str = SchemaField(description="The query to search for")
         limit: int = SchemaField(description="The number of pages to crawl", default=10)
@@ -35,9 +36,13 @@ class FirecrawlSearchBlock(Block):
             description="Returns the content of the search if specified", default=[]
         )
-    class Output(BlockSchema):
+    class Output(BlockSchemaOutput):
         data: dict[str, Any] = SchemaField(description="The result of the search")
         site: dict[str, Any] = SchemaField(description="The site of the search")
+        error: str = SchemaField(
+            description="Error message if the search failed",
+            default="",
+        )
     def __init__(self):
         super().__init__(

View File

@@ -5,7 +5,13 @@ from pydantic import SecretStr
 from replicate.client import Client as ReplicateClient
 from replicate.helpers import FileOutput
-from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
+from backend.data.block import (
+    Block,
+    BlockCategory,
+    BlockOutput,
+    BlockSchemaInput,
+    BlockSchemaOutput,
+)
 from backend.data.model import (
     APIKeyCredentials,
     CredentialsField,
@@ -57,7 +63,7 @@ class AspectRatio(str, Enum):
 class AIImageEditorBlock(Block):
-    class Input(BlockSchema):
+    class Input(BlockSchemaInput):
         credentials: CredentialsMetaInput[
             Literal[ProviderName.REPLICATE], Literal["api_key"]
         ] = CredentialsField(
@@ -90,11 +96,10 @@ class AIImageEditorBlock(Block):
             title="Model",
         )
-    class Output(BlockSchema):
+    class Output(BlockSchemaOutput):
         output_image: MediaFileType = SchemaField(
             description="URL of the transformed image"
         )
-        error: str = SchemaField(description="Error message if generation failed")
     def __init__(self):
         super().__init__(

Some files were not shown because too many files have changed in this diff.