Compare commits

..

33 Commits

Author SHA1 Message Date
Swifty
402bec4595 pr comments 2026-01-08 14:43:07 +01:00
Swifty
1c8cba9c5f fix more linting issues 2026-01-08 13:30:51 +01:00
Swifty
072c647baa fix additional formatting issues 2026-01-08 13:24:30 +01:00
Swifty
d5f490b85d Merge branch 'dev' into swiftyos/fix-linting-errors 2026-01-08 13:07:05 +01:00
Swifty
6686de1701 fix linter errors 2026-01-08 13:04:02 +01:00
Ubbe
fc25e008b3 feat(frontend): update library agent cards to use DS (#11720)
## Changes 🏗️

<img width="700" height="838" alt="Screenshot 2026-01-07 at 16 11 04"
src="https://github.com/user-attachments/assets/0b38d2e1-d4a8-4036-862c-b35c82c496c2"
/>

- Update the agent library cards to the new designs
- Update the page to use Design System components
- Allow editing/deleting/duplicating agents on the library list page
- Add missing actions on the library agent detail page

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run locally and test the above


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

* **New Features**
* Marketplace info shown on agent cards and improved favoriting with
optimistic UI and feedback.
  * Delete agent and delete schedule flows with confirmation dialogs.

* **Refactor**
* New composable form system, modernized upload dialog, streamlined
search bar, and multiple library components converted to named exports
with layout tweaks.
  * New agent card menu and favorite button UI.

* **Chores**
  * Removed notification UI and dropped a drag-drop dependency.

* **Tests**
  * Increased timeouts and stabilized upload/pagination flows.

<sub>✏️ Tip: You can customize this high-level summary in your review
settings.</sub>
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2026-01-08 18:28:27 +07:00
Abhimanyu Yadav
a81ac150da fix(frontend): add word wrapping to CodeRenderer and improve output actions visibility (#11724)
## Changes 🏗️
- Updated the `CodeRenderer` component to add `whitespace-pre-wrap` and
`break-words` CSS classes to the `<code>` element
- This enables proper wrapping of long code lines while preserving
whitespace formatting
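
A minimal sketch of the change, assuming a hypothetical `CodeRenderer` shape (the real component and its props may differ):

```tsx
import React from "react";

// Hypothetical sketch: the real CodeRenderer has more props and styling.
export function CodeRenderer({ code }: { code: string }) {
  return (
    <pre className="overflow-hidden rounded-md p-3">
      {/* whitespace-pre-wrap keeps indentation/newlines, break-words wraps long tokens */}
      <code className="whitespace-pre-wrap break-words">{code}</code>
    </pre>
  );
}
```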

Before


![image.png](https://app.graphite.com/user-attachments/assets/aca769cc-0f6f-4e25-8cdd-c491fcbf21bb.png)

After

![Screenshot 2026-01-08 at
3.02.53 PM.png](https://app.graphite.com/user-attachments/assets/99e23efa-be2a-441b-b0d6-50fa2a08cdb0.png)

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Verified code with long lines wraps correctly
  - [x] Confirmed whitespace and indentation are preserved
  - [x] Tested code display in various viewport sizes

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

* **Bug Fixes**
* Code blocks now preserve whitespace and wrap long lines for improved
readability.
* Output action controls are hidden when there is only a single output
item, reducing unnecessary UI elements.

<sub>✏️ Tip: You can customize this high-level summary in your review
settings.</sub>
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2026-01-08 11:13:47 +00:00
Abhimanyu Yadav
49ee087496 feat(frontend): add new integration images for Webshare and WordPress (#11725)
### Changes 🏗️

Added two new integration icons to the frontend:
- `webshare_proxy.png` - Icon for WebShare Proxy integration
- `wordpress.png` - Icon for WordPress integration

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Verified both icons display correctly in the integrations section
  - [x] Confirmed icons render properly at different screen sizes
  - [x] Checked that the icons maintain quality when scaled

#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
2026-01-08 11:13:34 +00:00
Ubbe
b0855e8cf2 feat(frontend): context menu right click new builder (#11703)
## Changes 🏗️

<img width="250" height="504" alt="Screenshot 2026-01-06 at 17 53 26"
src="https://github.com/user-attachments/assets/52013448-f49c-46b6-b86a-39f98270cbc3"
/>

<img width="300" height="544" alt="Screenshot 2026-01-06 at 17 53 29"
src="https://github.com/user-attachments/assets/e6334034-68e4-4346-9092-3774ab3e8445"
/>

On the **New Builder**:
- right-clicking on a node now shows the context menu
- the same menu is used for right-click and for clicking on `...` (see the sketch below)
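
A rough sketch of the wiring with hypothetical component names; only the `onContextMenu`/`preventDefault` pattern is the point here (the real builder uses the shared SecondaryMenu primitives):

```tsx
import React, { useState } from "react";

// Hypothetical sketch: the real node and menu components differ.
export function NodeWithContextMenu({ menu, children }: { menu: React.ReactNode; children: React.ReactNode }) {
  const [open, setOpen] = useState(false);
  return (
    <div
      onContextMenu={(e) => {
        e.preventDefault(); // suppress the browser's default context menu
        setOpen(true); // show the same menu content used by the "..." button
      }}
    >
      {children}
      {open && menu}
    </div>
  );
}
```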

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run locally and test the above



<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

* **New Features**
* Added a custom right-click context menu for nodes with Copy, Open
agent (when available), and Delete actions; browser default menu is
suppressed while preserving zoom/drag/wiring.
* Introduced reusable SecondaryMenu primitives for context and dropdown
menus.

* **Documentation**
* Added Storybook examples demonstrating the context menu and dropdown
menu usage.

* **Style**
* Updated menu styling and icons with improved consistency and dark-mode
support.

<sub>✏️ Tip: You can customize this high-level summary in your review
settings.</sub>
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2026-01-08 17:35:49 +07:00
Abhimanyu Yadav
5e2146dd76 feat(frontend): add CustomSchemaField wrapper for dynamic form field routing
(#11722)

### Changes 🏗️

This PR introduces automatic UI schema generation for custom form
fields, eliminating manual field mapping.

#### 1. **generateUiSchemaForCustomFields Utility**
(`generate-ui-schema.ts`) - New File
   - Auto-generates `ui:field` settings for custom fields
   - Detects custom fields using `findCustomFieldId()` matcher
   - Handles nested objects and array items recursively
   - Merges with existing UI schema without overwriting

#### 2. **FormRenderer Integration** (`FormRenderer.tsx`)
   - Imports and uses `generateUiSchemaForCustomFields`
   - Creates merged UI schema with `useMemo`
   - Passes merged schema to Form component
   - Enables automatic custom field detection

#### 3. **Preprocessor Cleanup** (`input-schema-pre-processor.ts`)
   - Removed manual `$id` assignment for custom fields
   - Removed unused `findCustomFieldId` import
   - Simplified to focus only on type validation

### Why these changes?

- Custom fields now auto-detect without manual `ui:field` configuration
- Uses standard RJSF approach (UI schema) for field routing
- Centralized custom field detection logic improves maintainability
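
A simplified sketch of the auto-generation idea; `findCustomFieldId` is the matcher named in the PR, everything else here is illustrative:

```ts
type JsonSchema = {
  type?: string;
  properties?: Record<string, JsonSchema>;
  items?: JsonSchema;
};
type UiSchema = Record<string, any>;

// Assumed matcher shape: returns a custom field identifier or undefined.
declare function findCustomFieldId(schema: JsonSchema): string | undefined;

export function generateUiSchemaForCustomFields(schema: JsonSchema, existing: UiSchema = {}): UiSchema {
  const out: UiSchema = { ...existing };
  const customId = findCustomFieldId(schema);
  if (customId && !out["ui:field"]) out["ui:field"] = customId; // never overwrite an explicit setting
  for (const [key, child] of Object.entries(schema.properties ?? {})) {
    out[key] = generateUiSchemaForCustomFields(child, out[key] ?? {});
  }
  if (schema.items) out.items = generateUiSchemaForCustomFields(schema.items, out.items ?? {});
  return out;
}
```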

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Verify custom fields render correctly when present in schema
- [x] Verify standard fields continue to render with default SchemaField
- [x] Verify multiple instances of same custom field type have unique
IDs
  - [x] Test form submission with custom fields

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

* **Bug Fixes**
* Improved custom field rendering in forms by optimizing the UI schema
generation process.

<sub>✏️ Tip: You can customize this high-level summary in your review
settings.</sub>

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2026-01-08 08:47:52 +00:00
Abhimanyu Yadav
103a62c9da feat(frontend/builder): add filters to blocks menu (#11654)
### Changes 🏗️

This PR adds filtering functionality to the new blocks menu, allowing
users to filter search results by category and creator.

**New Components:**
- `BlockMenuFilters`: Main filter component displaying active filters
and filter chips
- `FilterSheet`: Slide-out panel for selecting filters with categories
and creators
- `BlockMenuSearchContent`: Refactored search results display component

**Features Added:**
- Filter by categories: Blocks, Integrations, Marketplace agents, My
agents
- Filter by creator: Shows all available creators from search results
- Category counts: Display number of results per category
- Interactive filter chips with animations (using framer-motion)
- Hover states showing result counts on filter chips
- "All filters" sheet with apply/clear functionality

**State Management:**
- Extended `blockMenuStore` with filter state management
- Added `filters`, `creators`, `creators_list`, and `categoryCounts` to
store
- Integrated filters with search API (`filter` and `by_creator`
parameters)

**Refactoring:**
- Moved search logic from `BlockMenuSearch` to `BlockMenuSearchContent`
- Renamed `useBlockMenuSearch` to `useBlockMenuSearchContent`
- Moved helper functions to `BlockMenuSearchContent` directory

**API Changes:**
- Updated `custom-mutator.ts` to properly handle query parameter
encoding
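
A minimal sketch of the filter state described above, assuming a zustand-style `blockMenuStore`; category identifiers are guesses and the real store holds more than this:

```ts
import { create } from "zustand";

type Category = "blocks" | "integrations" | "marketplace_agents" | "my_agents";

interface BlockMenuFilterState {
  filters: Category[];
  creators: string[];
  categoryCounts: Partial<Record<Category, number>>;
  toggleFilter: (category: Category) => void;
  clearFilters: () => void;
}

export const useBlockMenuStore = create<BlockMenuFilterState>((set) => ({
  filters: [],
  creators: [],
  categoryCounts: {},
  toggleFilter: (category) =>
    set((state) => ({
      filters: state.filters.includes(category)
        ? state.filters.filter((f) => f !== category)
        : [...state.filters, category],
    })),
  clearFilters: () => set({ filters: [], creators: [] }),
}));
```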


### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Search for blocks and verify filter chips appear
- [x] Click "All filters" and verify filter sheet opens with categories
- [x] Select/deselect category filters and verify results update
accordingly
  - [x] Filter by creator and verify only blocks from that creator show
  - [x] Clear all filters and verify reset to default state
  - [x] Verify filter counts display correctly
  - [x] Test filter chip hover animations
2026-01-08 08:02:21 +00:00
Bentlybro
fc8434fb30 Merge branch 'master' into dev 2026-01-07 12:02:15 +00:00
Ubbe
3ae08cd48e feat(frontend): use Google Drive Picker on new builder (#11702)
## Changes 🏗️

<img width="600" height="960" alt="Screenshot 2026-01-06 at 17 40 23"
src="https://github.com/user-attachments/assets/61085ec5-a367-45c7-acaa-e3fc0f0af647"
/>

- So when using Google Blocks on the new builder, it shows the Google Drive
Picker 🏁

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
  - [x] Run app locally and test the above


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

* **New Features**
* Added a Google Drive picker field and widget for forms with an
always-visible remove button and improved single/multi selection
handling.

* **Bug Fixes**
* Better validation and normalization of selected files and consolidated
error messaging.
* Adjusted layout spacing around the picker and selected files for
clearer display.

<sub>✏️ Tip: You can customize this high-level summary in your review
settings.</sub>
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2026-01-07 17:07:09 +07:00
Swifty
4db13837b9 Revert "extracted frontend changes out of the hackathon/copilot branch"
This reverts commit df87867625.
2026-01-07 09:27:25 +01:00
Swifty
df87867625 extracted frontend changes out of the hackathon/copilot branch 2026-01-07 09:25:10 +01:00
Abhimanyu Yadav
e503126170 feat(frontend): upgrade RJSF to v6 and implement new FormRenderer system
(#11677)

Fixes #11686

### Changes 🏗️

This PR upgrades the React JSON Schema Form (RJSF) library from v5 to v6
and introduces a complete rewrite of the form rendering system with
improved architecture and new features.

#### Core Library Updates
- Upgraded `@rjsf/core` from 5.24.13 to 6.1.2
- Upgraded `@rjsf/utils` from 5.24.13 to 6.1.2
- Added `@radix-ui/react-slider` 1.3.6 for new slider components

#### New Form Renderer Architecture
- **Base Templates**: Created modular base templates for arrays,
objects, and standard fields
- **AnyOf Support**: Implemented `AnyOfField` component with type
selector for union types
- **Array Fields**: New `ArrayFieldTemplate`, `ArrayFieldItemTemplate`,
and `ArraySchemaField` with context provider
- **Object Fields**: Enhanced `ObjectFieldTemplate` with better support
for additional properties via `WrapIfAdditionalTemplate`
- **Field Templates**: New `TitleField`, `DescriptionField`, and
`FieldTemplate` with improved styling
- **Custom Widgets**: Implemented TextWidget, SelectWidget,
CheckboxWidget, FileWidget, DateWidget, TimeWidget, and DateTimeWidget
- **Button Components**: Custom AddButton, RemoveButton, and CopyButton
components

#### Node Handle System Refactor
- Split `NodeHandle` into `InputNodeHandle` and `OutputNodeHandle` for
better separation of concerns
- Refactored handle ID generation logic in `helpers.ts` with new
`generateHandleIdFromTitleId` function
- Improved handle connection detection using edge store
- Added support for nested output handles (objects within outputs)

#### Edge Store Improvements
- Added `removeEdgesByHandlePrefix` method for bulk edge removal
- Improved `isInputConnected` with handle ID cleanup
- Optimized `updateEdgeBeads` to only update when changes occur
- Better edge management with `applyEdgeChanges`

#### Node Store Enhancements
- Added `syncHardcodedValuesWithHandleIds` method to maintain
consistency between form data and handle connections
- Better handling of additional properties in objects
- Improved path parsing with `parseHandleIdToPath` and
`ensurePathExists`

#### Draft Recovery Improvements
- Added diff calculation with `calculateDraftDiff` to show what changed
- New `formatDiffSummary` to display changes in a readable format (e.g.,
"+2/-1 blocks, +3 connections")
- Better visual feedback for draft changes
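
A sketch of what the summary formatting could look like; only the output format ("+2/-1 blocks, +3 connections") comes from the PR, and the real `calculateDraftDiff`/`formatDiffSummary` signatures may differ:

```ts
// Hypothetical diff shape for illustration.
interface DraftDiff {
  blocksAdded: number;
  blocksRemoved: number;
  connectionsAdded: number;
  connectionsRemoved: number;
}

function part(added: number, removed: number, label: string): string | null {
  if (!added && !removed) return null;
  return removed ? `+${added}/-${removed} ${label}` : `+${added} ${label}`;
}

export function formatDiffSummary(diff: DraftDiff): string {
  const parts = [
    part(diff.blocksAdded, diff.blocksRemoved, "blocks"),
    part(diff.connectionsAdded, diff.connectionsRemoved, "connections"),
  ].filter(Boolean);
  return parts.length ? parts.join(", ") : "no changes";
}

// formatDiffSummary({ blocksAdded: 2, blocksRemoved: 1, connectionsAdded: 3, connectionsRemoved: 0 })
// -> "+2/-1 blocks, +3 connections"
```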

#### UI/UX Enhancements
- Fixed node container width to 350px for consistency
- Improved field error display with inline error messages
- Better spacing and styling throughout forms
- Enhanced tooltip support for field descriptions
- Improved array item controls with better button placement
- Context-aware field sizing (small/large)

#### Output Handler Updates
- Recursive rendering of nested output properties
- Better type display with color coding
- Improved handle connections for complex output schemas

#### Migration & Cleanup
- Updated `RunInputDialog` to use new FormRenderer
- Updated `FormCreator` to use new FormRenderer
- Moved OAuth callback types to separate file
- Updated import paths from `input-renderer` to `InputRenderer`
- Removed unused console.log statements
- Added `type="button"` to buttons to prevent form submission

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Test form rendering with various field types (text, number,
boolean, arrays, objects)
  - [x] Test anyOf field type selector functionality
  - [x] Test array item addition/removal
  - [x] Test nested object fields with additional properties
  - [x] Test input/output node handle connections
  - [x] Test draft recovery with diff display
  - [x] Verify backward compatibility with existing agents
  - [x] Test field validation and error display
  - [x] Verify handle ID generation for complex schemas

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

* **New Features**
* Improved form field rendering with enhanced support for optional
types, arrays, and nested objects.
* Enhanced draft recovery display showing detailed difference tracking
(added, removed, modified items).
  * Better OAuth popup callback handling with structured message types.

* **Bug Fixes**
  * Improved node handle ID normalization and synchronization.
  * Enhanced edge management for complex field changes.
  * Fixed styling consistency across form components.

* **Dependencies**
  * Updated React JSON Schema Form library to version 6.1.2.
  * Added Radix UI slider component support.

<sub>✏️ Tip: You can customize this high-level summary in your review
settings.</sub>

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2026-01-07 05:06:34 +00:00
Zamil Majdy
7ee28197a3 docs(gitbook): sync documentation updates with dev branch (#11709)
## Summary

Sync GitBook documentation changes from the gitbook branch to dev. This
PR contains comprehensive documentation updates including new assets,
content restructuring, and infrastructure improvements.

## Changes 🏗️

### Documentation Updates
- **New GitBook Assets**: Added 9 new documentation images and
screenshots
  - Platform overview images (AGPT_Platform.png, Banner_image.png)
- Feature illustrations (Contribute.png, Integrations.png, hosted.jpg,
no-code.jpg, api-reference.jpg)
  - Screenshots and examples for better user guidance
- **Content Updates**: Enhanced README.md and SUMMARY.md with improved
structure and navigation
- **Visual Documentation**: Added comprehensive visual guides for
platform features

### Infrastructure 
- **Cloudflare Worker**: Added redirect handler for docs.agpt.co →
agpt.co/docs migration
  - Complete URL mapping for 71+ redirect patterns
  - Handles platform blocks restructuring and edge cases
  - Ready for deployment to Cloudflare Workers

### Merge Conflict Resolution
- **Clean merge from dev**: Successfully merged dev's major backend
restructuring (server/ → api/)
- **File resurrection fix**: Removed files that were accidentally
resurrected during merge conflict resolution
  - Cleaned up BuilderActionButton.tsx (deleted in dev)
  - Cleaned up old PreviewBanner.tsx location (moved in dev)
  - Synced pnpm-lock.yaml and layout.tsx with dev's current state

## Technical Details

This PR represents a careful synchronization that:
1. **Preserves all GitBook documentation work** while staying current
with dev
2. **Maintains clean diff**: Only documentation-related changes remain
after merge cleanup
3. **Resolves merge conflicts**: Handled major backend API restructuring
without breaking docs
4. **Infrastructure ready**: Cloudflare Worker ready for docs migration
deployment

## Files Changed
- `docs/`: GitBook documentation assets and content
- `autogpt_platform/cloudflare_worker.js`: Docs infrastructure for URL
redirects

## Validation
-  All TypeScript compilation errors resolved
-  Pre-commit hooks passing (Prettier, TypeCheck)
-  Only documentation changes remain in diff vs dev
-  Cloudflare Worker tested with comprehensive URL mapping
-  No non-documentation code changes after cleanup

## Deployment Notes
The Cloudflare Worker can be deployed via:
```bash
# Cloudflare Dashboard → Workers → Create → Paste code → Add route docs.agpt.co/*
```
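
For reference, a skeleton of what such a redirect Worker typically looks like; the paths below are placeholders, and the real `cloudflare_worker.js` carries the full 71+ entry mapping:

```ts
// Placeholder mapping: the actual worker contains the complete docs.agpt.co -> agpt.co/docs table.
const REDIRECTS: Record<string, string> = {
  "/": "https://agpt.co/docs",
  "/platform/getting-started": "https://agpt.co/docs/platform/getting-started", // example entry only
};

export default {
  async fetch(request: Request): Promise<Response> {
    const url = new URL(request.url);
    // Fall back to a prefix rewrite for unmapped paths (an assumption, not confirmed by the PR).
    const target = REDIRECTS[url.pathname] ?? `https://agpt.co/docs${url.pathname}`;
    return Response.redirect(target, 301);
  },
};
```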

This completes the GitBook synchronization and prepares for docs site
migration.

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: bobby.gaffin <bobby.gaffin@agpt.co>
Co-authored-by: Bently <Github@bentlybro.com>
Co-authored-by: Abhimanyu Yadav <122007096+Abhi1992002@users.noreply.github.com>
Co-authored-by: Swifty <craigswift13@gmail.com>
Co-authored-by: Ubbe <hi@ubbe.dev>
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Lluis Agusti <hi@llu.lu>
2026-01-07 02:11:11 +00:00
Nicholas Tindle
818de26d24 fix(platform/blocks): XMLParserBlock list object error (#11517)
<!-- Clearly explain the need for these changes: -->

### Need for these changes 💡

The `XMLParserBlock` was susceptible to crashing with an
`AttributeError: 'List' object has no attribute 'add_text'` when
processing malformed XML inputs, such as documents with multiple root
elements or stray text outside the root. This PR introduces robust
validation to prevent these crashes and provide clear, actionable error
messages to users.

### Changes 🏗️

<!-- Concisely describe all of the changes made in this pull request:
-->

- Added a `_validate_tokens` static method to `XMLParserBlock` to
perform pre-parsing validation on the token stream. This method ensures
the XML input has a single root element and no text content outside of
it.
- Modified the `XMLParserBlock.run` method to call `_validate_tokens`
immediately after tokenization and before passing the tokens to
`gravitasml.Parser`.
- Introduced a new test case, `test_rejects_text_outside_root`, in
`test_blocks_dos_vulnerability.py` to verify that the `XMLParserBlock`
correctly raises a `ValueError` when encountering XML with text outside
the root element.
- Imported `Token` for type hinting in `xml_parser.py`.
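
An illustrative TypeScript sketch of the same single-root / balanced-tags / no-stray-text check; the real `_validate_tokens` is Python and operates on gravitasml tokens, so the token shape here is hypothetical:

```ts
type Token =
  | { kind: "open"; name: string }
  | { kind: "close"; name: string }
  | { kind: "text"; value: string };

function validateTokens(tokens: Token[]): void {
  let depth = 0;
  let roots = 0;
  for (const token of tokens) {
    if (token.kind === "open") {
      if (depth === 0 && ++roots > 1) throw new Error("XML must have exactly one root element");
      depth++;
    } else if (token.kind === "close") {
      if (--depth < 0) throw new Error("Unbalanced closing tag");
    } else if (depth === 0 && token.value.trim() !== "") {
      throw new Error("Text content outside the root element is not allowed");
    }
  }
  if (depth !== 0) throw new Error("Unbalanced tags: missing closing tag(s)");
}
```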

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
- [x] Confirm that the `test_rejects_text_outside_root` test passes,
asserting that `ValueError` is raised for invalid XML.
  - [x] Confirm that other relevant XML parsing tests continue to pass.


---
Linear Issue:
[OPEN-2835](https://linear.app/autogpt/issue/OPEN-2835/blockunknownerror-raised-by-xmlparserblock-with-message-list-object)

[Open in Cursor](https://cursor.com/background-agent?bcId=bc-4495ea93-6836-412c-b2e3-0adb31113169) | [Open in Web](https://cursor.com/agents?id=bc-4495ea93-6836-412c-b2e3-0adb31113169)


<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> Strengthens XML parsing robustness and error clarity.
> 
> - Adds `_validate_tokens` in `XMLParserBlock` to ensure a single root
element, balanced tags, and no text outside the root before parsing
> - Updates `run` to `list(tokenize(...))` and validate tokens prior to
`Parser.parse()`; maintains 10MB input size guard
> - Introduces `test_rejects_text_outside_root` asserting a readable
`ValueError` for trailing text
> - Bumps `gravitasml` to `0.1.4` in `pyproject.toml` and lockfile
> 
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
22cc5149c5. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

* **Bug Fixes**
* Improved XML parsing validation with stricter enforcement of
single-root elements and prevention of trailing text, providing clearer
error messages for invalid XML input.

* **Tests**
* Added test coverage for XML parser validation of invalid root text
scenarios.

* **Chores**
  * Updated GravitasML dependency to latest compatible version.

<sub>✏️ Tip: You can customize this high-level summary in your review
settings.</sub>

<!-- end of auto-generated comment: release notes by coderabbit.ai -->

---------

Co-authored-by: Cursor Agent <cursoragent@cursor.com>
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <ntindle@users.noreply.github.com>
2026-01-06 20:02:53 +00:00
Nicholas Tindle
cb08def96c feat(blocks): Add Google Docs integration blocks (#11608)
Introduces a new module with blocks for Google Docs operations,
including reading, creating, appending, inserting, formatting,
exporting, sharing, and managing public access for Google Docs. Updates
dependencies in pyproject.toml and poetry.lock to support these
features.



https://github.com/user-attachments/assets/3597366b-a9eb-4f8e-8a0a-5a0bc8ebc09b



<!-- Clearly explain the need for these changes: -->

### Changes 🏗️
Adds lots of basic docs tools + a dependency to use them with markdown

Block | Description | Key Features
-- | -- | --
Read & Create | |
GoogleDocsReadBlock | Read content from a Google Doc | Returns text content, title, revision ID
GoogleDocsCreateBlock | Create a new Google Doc | Title, optional initial content
GoogleDocsGetMetadataBlock | Get document metadata | Title, revision ID, locale, suggested modes
GoogleDocsGetStructureBlock | Get document structure with indexes | Flat segments or detailed hierarchy; shows start/end indexes
Plain Text Operations | |
GoogleDocsAppendPlainTextBlock | Append plain text to end | No formatting applied
GoogleDocsInsertPlainTextBlock | Insert plain text at position | Requires index; no formatting
GoogleDocsFindReplacePlainTextBlock | Find and replace plain text | Case-sensitive option; no formatting on replacement
Markdown Operations | (ideal for LLM/AI output) |
GoogleDocsAppendMarkdownBlock | Append Markdown to end | Full formatting via gravitas-md2gdocs
GoogleDocsInsertMarkdownAtBlock | Insert Markdown at position | Requires index
GoogleDocsReplaceAllWithMarkdownBlock | Replace entire doc with Markdown | Clears and rewrites
GoogleDocsReplaceRangeWithMarkdownBlock | Replace index range with Markdown | Requires start/end index
GoogleDocsReplaceContentWithMarkdownBlock | Find text and replace with Markdown | Text-based search; great for templates
Structural Operations | |
GoogleDocsInsertTableBlock | Insert a table | Rows/columns OR content array; optional Markdown in cells
GoogleDocsInsertPageBreakBlock | Insert a page break | Position index (0 = end)
GoogleDocsDeleteContentBlock | Delete content range | Requires start/end index
GoogleDocsFormatTextBlock | Apply formatting to text range | Bold, italic, underline, font size/color, etc.
Export & Sharing | |
GoogleDocsExportBlock | Export to different formats | PDF, DOCX, TXT, HTML, RTF, ODT, EPUB
GoogleDocsShareBlock | Share with specific users | Reader, commenter, writer, owner roles
GoogleDocsSetPublicAccessBlock | Set public access level | Private, anyone with link (view/comment/edit)


<!-- Concisely describe all of the changes made in this pull request:
-->

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Build, run, verify, and upload a block super test
- [x] [Google Docs Super
Agent_v8.json](https://github.com/user-attachments/files/24134215/Google.Docs.Super.Agent_v8.json)
works


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

* **Chores**
  * Updated backend dependencies.

<sub>✏️ Tip: You can customize this high-level summary in your review
settings.</sub>

<!-- end of auto-generated comment: release notes by coderabbit.ai -->

<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> Adds end-to-end Google Docs capabilities under
`backend/blocks/google/docs.py`, including rich Markdown support.
> 
> - New blocks: read/create docs; plain-text
`append`/`insert`/`find_replace`/`delete`; text `format`;
`insert_table`; `insert_page_break`; `get_metadata`; `get_structure`
> - Markdown-powered blocks (via `gravitas_md2gdocs.to_requests`):
`append_markdown`, `insert_markdown_at`, `replace_all_with_markdown`,
`replace_range_with_markdown`, `replace_content_with_markdown`
> - Export and sharing: `export` (PDF/DOCX/TXT/HTML/RTF/ODT/EPUB),
`share` (user roles), `set_public_access`
> - Dependency updates: add `gravitas-md2gdocs` to `pyproject.toml` and
update `poetry.lock`
> 
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
73512a95b2. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->

---------

Co-authored-by: Cursor Agent <cursoragent@cursor.com>
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <ntindle@users.noreply.github.com>
2026-01-05 18:36:56 +00:00
Krzysztof Czerwinski
ac2daee5f8 feat(backend): Add GPT-5.2 and update default models (#11652)
### Changes 🏗️

- Add OpenAI `GPT-5.2` with metadata&cost
- Add const `DEFAULT_LLM_MODEL` (set to GPT-5.2) and use it instead of
hardcoded model across llm blocks and tests

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] GPT-5.2 is set as default and works on llm blocks
2026-01-05 16:13:35 +00:00
lif
266e0d79d4 fix(blocks): add YouTube Shorts URL support (#11659)
## Summary
Added support for parsing YouTube Shorts URLs (`youtube.com/shorts/...`)
in the TranscribeYoutubeVideoBlock to extract video IDs correctly.

## Changes
- Modified `_extract_video_id` method in `youtube.py` to handle Shorts
URL format
- Added test cases for YouTube Shorts URL extraction
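
As a pattern sketch only (the real change is in Python, `_extract_video_id` in `youtube.py`), the Shorts handling boils down to recognising one more path shape:

```ts
// Illustrative only: handles watch URLs, youtu.be short links, /embed/ and /shorts/ paths.
export function extractVideoId(rawUrl: string): string | null {
  const url = new URL(rawUrl);
  if (url.hostname === "youtu.be") return url.pathname.slice(1) || null;
  const fromQuery = url.searchParams.get("v");
  if (fromQuery) return fromQuery;
  const match = url.pathname.match(/^\/(?:shorts|embed)\/([\w-]{5,})/);
  return match ? match[1] : null;
}
```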

## Related Issue
Fixes #11500

## Test Plan
- [x] Added unit tests for YouTube Shorts URL extraction
- [x] Verified existing YouTube URL formats still work
- [x] CI should pass all existing tests

---------

Co-authored-by: Ubbe <hi@ubbe.dev>
2026-01-05 16:11:45 +00:00
lif
01f443190e fix(frontend): allow empty values in number inputs and fix AnyOfField toggle (#11661)
<!-- ⚠️ Reminder: Think about your Changeset/Docs changes! -->
<!-- If you are introducing new blocks or features, document them for
users. -->
<!-- Reference:
https://github.com/Significant-Gravitas/AutoGPT/blob/dev/CONTRIBUTING.md
-->

## Summary

This PR fixes two related issues with number/integer inputs in the
frontend:

1. **HTMLType typo fix**: INTEGER input type was incorrectly mapped to
`htmlType: 'account'` (which is not a valid HTML input type) instead of
`htmlType: 'number'`.

2. **AnyOfField toggle fix**: When a user cleared a number input field,
the input would disappear because `useAnyOfField` checked for both
`null` AND `undefined` in `isEnabled`. This PR changes it to only check
for explicit `null` (set by toggle off), allowing `undefined` (empty
input) to keep the field visible.

### Root cause analysis

When a user cleared a number input:
1. `handleChange` returned `undefined` (because `v === "" ? undefined :
Number(v)`)
2. In `useAnyOfField`, `isEnabled = formData !== null && formData !==
undefined` became `false`
3. The input field disappeared

### Fix

Changed `useAnyOfField.tsx` line 67:
```diff
- const isEnabled = formData !== null && formData !== undefined;
+ const isEnabled = formData !== null;
```

This way:
- When toggle is OFF → `formData` is `null` → `isEnabled` is `false`
(input hidden) ✓
- When toggle is ON but input is cleared → `formData` is `undefined` →
`isEnabled` is `true` (input visible) ✓

## Test plan

- [x] Verified INTEGER inputs now render correctly with `type="number"`
- [x] Verified clearing a number input keeps the field visible
- [x] Verified toggling the nullable switch still works correctly

Fixes #11594

🤖 AI-assisted development disclaimer: This PR was developed with
assistance from Claude Code.

---------

Signed-off-by: majiayu000 <1835304752@qq.com>
Co-authored-by: Abhimanyu Yadav <122007096+Abhi1992002@users.noreply.github.com>
2026-01-05 16:10:47 +00:00
Ubbe
bdba0033de refactor(frontend): move NodeInput files (#11695)
## Changes 🏗️

Move the `<NodeInput />` component next to the old builder code where it
is used.

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run app locally and click around, E2E is fine
2026-01-05 10:29:12 +00:00
Abhimanyu Yadav
b87c64ce38 feat(frontend): Add delete key bindings to ReactFlow editor
(#11693)

Issues fixed by this PR
- https://github.com/Significant-Gravitas/AutoGPT/issues/11688
- https://github.com/Significant-Gravitas/AutoGPT/issues/11687

### **Changes 🏗️**

Added keyboard delete functionality to the ReactFlow editor by enabling
the `deleteKeyCode` prop with both "Backspace" and "Delete" keys. This
allows users to delete selected nodes and edges using standard keyboard
shortcuts, improving the editing experience.

**Modified:**

- `Flow.tsx`: Added `deleteKeyCode={["Backspace", "Delete"]}` prop to
the ReactFlow component to enable deletion of selected elements via
keyboard
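
A minimal sketch of the prop in isolation, assuming the `@xyflow/react` package; the real `Flow.tsx` passes many more props:

```tsx
import React from "react";
import { ReactFlow, type Node, type Edge } from "@xyflow/react";

// Hypothetical wrapper for illustration; deleteKeyCode is the only point here.
export function FlowEditor({ nodes, edges }: { nodes: Node[]; edges: Edge[] }) {
  return (
    <ReactFlow
      nodes={nodes}
      edges={edges}
      // Either key removes the currently selected nodes/edges.
      deleteKeyCode={["Backspace", "Delete"]}
    />
  );
}
```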

### **Checklist 📋**

#### **For code changes:**

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Select a node in the flow editor and press Delete key to confirm
it deletes
- [x] Select a node in the flow editor and press Backspace key to
confirm it deletes
    - [x] Verify deletion works for multiple selected elements
2026-01-05 10:28:57 +00:00
Ubbe
003affca43 refactor(frontend): fix new builder buttons (#11696)
## Changes 🏗️

<img width="800" height="964" alt="Screenshot 2026-01-05 at 15 26 21"
src="https://github.com/user-attachments/assets/f8c7fc47-894a-4db2-b2f1-62b4d70e8453"
/>

- Adjust the new builder to use the Design System components
- Re-structure imports to match formatting rules
- Small improvement on `use-get-flag`
- Move file which is the main hook

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run locally and check the new buttons look good
2026-01-05 09:09:47 +00:00
Abhimanyu Yadav
290d0d9a9b feat(frontend): add auto-save Draft Recovery feature with IndexedDB persistence
(#11658)

## Summary
Implements an auto-save draft recovery system that persists unsaved flow
builder state across browser sessions, tab closures, and refreshes. When
users return to a flow with unsaved changes, they can choose to restore
or discard the draft via an intuitive recovery popup.



https://github.com/user-attachments/assets/0f77173b-7834-48d2-b7aa-73c6cd2eaff6



## Changes 🏗️

### Core Features
- **Draft Recovery Popup** (`DraftRecoveryPopup.tsx`)
  - Displays amber-themed notification with unsaved changes metadata
  - Shows node count, edge count, and relative time since last save
  - Provides restore and discard actions with tooltips
  - Auto-dismisses on click outside or ESC key

- **Auto-Save System** (`useDraftManager.ts`)
  - Automatically saves draft state every 15 seconds
  - Saves on browser tab close/refresh via `beforeunload`
  - Tracks nodes, edges, graph schemas, node counter, and flow version
  - Smart dirty checking - only saves when actual changes detected
  - Cleans up expired drafts (24-hour TTL)

- **IndexedDB Persistence** (`db.ts`, `draft-service.ts`)
  - Uses Dexie library for reliable client-side storage
- Handles both existing flows (by flowID) and new flows (via temp
session IDs)
- Compares draft state with current state to determine if recovery
needed
  - Automatically clears drafts after successful save

### Integration Changes
- **Flow Editor** (`Flow.tsx`)
  - Integrated `DraftRecoveryPopup` component
  - Passes `isInitialLoadComplete` state for proper timing

- **useFlow Hook** (`useFlow.ts`)
  - Added `isInitialLoadComplete` state to track when flow is ready
  - Ensures draft check happens after initial graph load
  - Resets state on flow/version changes

- **useCopyPaste Hook** (`useCopyPaste.ts`)
  - Refactored to manage keyboard event listeners internally
  - Simplified integration by removing external event handler setup

- **useSaveGraph Hook** (`useSaveGraph.ts`)
  - Clears draft after successful save (both create and update)
  - Removes temp flow ID from session storage on first save

### Dependencies
- Added `dexie@4.2.1` - Modern IndexedDB wrapper for reliable
client-side storage

## Technical Details

**Auto-Save Flow:**
1. User makes changes to nodes/edges
2. Change triggers 15-second debounced save
3. Draft saved to IndexedDB with timestamp
4. On save, current state compared with last saved state
5. Only saves if meaningful changes detected

**Recovery Flow:**
1. User loads flow/refreshes page
2. After initial load completes, check for existing draft
3. Compare draft with current state
4. If different and non-empty, show recovery popup
5. User chooses to restore or discard
6. Draft cleared after either action

**Session Management:**
- Existing flows: Use actual flowID for draft key
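
A condensed sketch of the Dexie-backed persistence, with assumed table and field names (the real `db.ts`/`draft-service.ts` differ in detail):

```ts
import Dexie, { type Table } from "dexie";

// Assumed draft shape; the real draft also stores graph schemas, node counter, and flow version.
interface FlowDraft {
  flowId: string; // real flow ID, or a temp session ID for brand-new flows
  nodes: unknown[];
  edges: unknown[];
  savedAt: number; // epoch ms, used for the 24-hour TTL
}

class BuilderDraftDb extends Dexie {
  drafts!: Table<FlowDraft, string>;
  constructor() {
    super("builder-drafts");
    this.version(1).stores({ drafts: "flowId, savedAt" }); // primary key + index
  }
}

const db = new BuilderDraftDb();

export const saveDraft = (draft: FlowDraft) => db.drafts.put(draft);

export async function loadDraft(flowId: string): Promise<FlowDraft | undefined> {
  const draft = await db.drafts.get(flowId);
  if (draft && Date.now() - draft.savedAt > 24 * 60 * 60 * 1000) {
    await db.drafts.delete(flowId); // expired draft: clean up and report nothing to recover
    return undefined;
  }
  return draft;
}
```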

### Test Plan 🧪

- [x] Create a new flow with 3+ blocks and connections, wait 15+
seconds, then refresh the page - verify recovery popup appears with
correct counts and restoring works
- [x] Create a flow with blocks, refresh, then click "Discard" button on
recovery popup - verify popup disappears and draft is deleted
- [x] Add blocks to a flow, save successfully - verify draft is cleared
from IndexedDB (check DevTools > Application > IndexedDB)
- [x] Make changes to an existing flow, refresh page - verify recovery
popup shows and restoring preserves all changes correctly
- [x] Verify empty flows (0 nodes) don't trigger recovery popup or save
drafts
2025-12-31 14:49:53 +00:00
Abhimanyu Yadav
fba61c72ed feat(frontend): fix duplicate publish button and improve BuilderActionButton styling
(#11669)

Fixes duplicate "Publish to Marketplace" buttons in the builder by
adding a `showTrigger` prop to control modal trigger visibility.

<img width="296" height="99" alt="Screenshot 2025-12-23 at 8 18 58 AM"
src="https://github.com/user-attachments/assets/d5dbfba8-e854-4c0c-a6b7-da47133ec815"
/>


### Changes 🏗️

**BuilderActionButton.tsx**

- Removed borders on hover and active states for a cleaner visual
appearance
- Added `hover:border-none` and `active:border-none` to maintain
consistent styling during interactions

**PublishToMarketplace.tsx**

- Pass `showTrigger={false}` to `PublishAgentModal` to hide the default
trigger button
- This prevents duplicate buttons when a custom trigger is already
rendered

**PublishAgentModal.tsx**

- Added `showTrigger` prop (defaults to `true`) to conditionally render
the modal trigger
- Allows parent components to control whether the built-in trigger
button should be displayed
- Maintains backward compatibility with existing usage
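
A stripped-down sketch of the `showTrigger` idea, using Radix dialog primitives as a stand-in for the real modal:

```tsx
import React from "react";
import { Dialog, DialogTrigger, DialogContent } from "@radix-ui/react-dialog";

interface PublishAgentModalProps {
  showTrigger?: boolean; // defaults to true so existing call sites keep their built-in trigger
  children?: React.ReactNode; // the modal body
}

// Hypothetical shape: the real PublishAgentModal renders the full publish flow.
export function PublishAgentModal({ showTrigger = true, children }: PublishAgentModalProps) {
  return (
    <Dialog>
      {showTrigger && (
        <DialogTrigger asChild>
          <button type="button">Publish to Marketplace</button>
        </DialogTrigger>
      )}
      <DialogContent>{children}</DialogContent>
    </Dialog>
  );
}
```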

### Checklist 📋

#### For code changes:

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Verify only one "Publish to Marketplace" button appears in the
builder
- [x] Confirm button hover/active states display correctly without
border artifacts
- [x] Verify modal can still be triggered programmatically without the
trigger button
2025-12-31 09:46:12 +00:00
Nicholas Tindle
79d45a15d0 feat(platform): Deduplicate insufficient funds Discord + email notifications (#11672)
Add Redis-based deduplication for insufficient funds notifications (both
Discord alerts and user emails) when users run out of credits. This
prevents spamming users and the PRODUCT Discord channel with repeated
alerts for the same user+agent combination.
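
The core mechanism is a Redis `SET NX` guard keyed on user+agent. The actual change is Python (`backend/executor/manager.py`); purely as a pattern illustration, a TypeScript sketch using ioredis with assumed key naming:

```ts
import Redis from "ioredis";

const redis = new Redis();
const PREFIX = "insufficient_funds_notified:"; // stands in for INSUFFICIENT_FUNDS_NOTIFIED_PREFIX
const TTL_SECONDS = 30 * 24 * 60 * 60; // 30-day fallback cleanup

// Returns true if this user+agent pair has not been notified yet (and marks it as notified).
export async function shouldNotify(userId: string, graphId: string): Promise<boolean> {
  try {
    const key = `${PREFIX}${userId}:${graphId}`;
    // SET ... EX <ttl> NX only succeeds if the key does not exist yet.
    return (await redis.set(key, "1", "EX", TTL_SECONDS, "NX")) === "OK";
  } catch {
    return true; // graceful degradation: if Redis fails, still send the notification
  }
}

export async function clearInsufficientFundsNotifications(userId: string) {
  const keys = await redis.keys(`${PREFIX}${userId}:*`);
  if (keys.length) await redis.del(...keys);
}
```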

### Changes 🏗️

- **Redis-based deduplication** (`backend/executor/manager.py`):
- Add `INSUFFICIENT_FUNDS_NOTIFIED_PREFIX` constant for Redis key prefix
- Add `INSUFFICIENT_FUNDS_NOTIFIED_TTL_SECONDS` (30 days) as fallback
cleanup
- Implement deduplication in `_handle_insufficient_funds_notif` using
Redis `SET NX`
- Skip both email (`ZERO_BALANCE`) and Discord notifications for
duplicate alerts per user+agent
- Add `clear_insufficient_funds_notifications(user_id)` function to
remove all notification flags for a user

- **Clear flags on credit top-up** (`backend/data/credit.py`):
- Call `clear_insufficient_funds_notifications` in `_top_up_credits`
after successful auto-charge
- Call `clear_insufficient_funds_notifications` in `fulfill_checkout`
after successful manual top-up
- This allows users to receive notifications again if they run out of
funds in the future

- **Comprehensive test coverage**
(`backend/executor/manager_insufficient_funds_test.py`):
  - Test first-time notification sends both email and Discord alert
  - Test duplicate notifications are skipped for same user+agent
  - Test different agents for same user get separate alerts
  - Test clearing notifications removes all keys for a user
  - Test handling when no notification keys exist
- Test notifications still sent when Redis fails (graceful degradation)

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] First insufficient funds alert sends both email and Discord
notification
  - [x] Duplicate alerts for same user+agent are skipped
  - [x] Different agents for same user each get their own notification
  - [x] Topping up credits clears notification flags
  - [x] Redis failure gracefully falls back to sending notifications
  - [x] 30-day TTL provides automatic cleanup as fallback
  - [x] Manually test this works with scheduled agents
 

<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> Introduces Redis-backed deduplication for insufficient-funds alerts
and resets flags on successful credit additions.
> 
> - **Dedup insufficient-funds alerts** in `executor/manager.py` using
Redis `SET NX` with `INSUFFICIENT_FUNDS_NOTIFIED_PREFIX` and 30‑day TTL;
skips duplicate ZERO_BALANCE email + Discord alerts per
`user_id`+`graph_id`, with graceful fallback if Redis fails.
> - **Reset notification flags on credit increases** by adding
`clear_insufficient_funds_notifications(user_id)` and invoking it when
enabling/adding positive `GRANT`/`TOP_UP` transactions in
`data/credit.py`.
> - **Tests** (`executor/manager_insufficient_funds_test.py`):
first-time vs duplicate behavior, per-agent separation, clearing keys
(including no-key and Redis-error cases), and clearing on
`_add_transaction`/`_enable_transaction`.
> 
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
1a4413b3a1. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->

---------

Co-authored-by: Ubbe <hi@ubbe.dev>
Co-authored-by: Claude <noreply@anthropic.com>
2025-12-30 18:10:30 +00:00
Ubbe
66f0d97ca2 fix(frontend): hide better chat link if not enabled (#11648)
## Changes 🏗️

- Make `<Navbar />` a client component so its rendering is more
predictable
- Remove the `useMemo()` for the chat link to prevent the flash...
- Make sure chat is added to the navbar links only after checking the
flag is enabled
- Improve logout with `useTransition`
- Simplify feature flags setup
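
The essential behaviour is a flag-gated filter over the nav links. A sketch with assumed link data and an assumed import path for the flag hook:

```tsx
import React from "react";
import { useGetFlag, Flag } from "@/services/feature-flags/use-get-flag"; // assumed path

const loggedInLinks = [
  { name: "Marketplace", href: "/marketplace" },
  { name: "Library", href: "/library" },
  { name: "Chat", href: "/chat" },
];

export function NavbarLinks() {
  const chatEnabled = useGetFlag(Flag.CHAT); // false until LaunchDarkly confirms it is on
  return (
    <nav>
      {loggedInLinks
        .filter((link) => link.name !== "Chat" || chatEnabled)
        .map((link) => (
          <a key={link.href} href={link.href}>
            {link.name}
          </a>
        ))}
    </nav>
  );
}
```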

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run locally and test the above

<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> Ensures the `Chat` nav item is hidden when the feature flag is off
across desktop and mobile nav.
> 
> - Inline-filters `loggedInLinks` to skip `Chat` when `Flag.CHAT` is
false for both `NavbarLink` rendering and `MobileNavBar` menu items
> - Removes `useMemo`/`linksWithChat` helper; maps directly over
`loggedInLinks` and filters nulls in mobile, keeping icon mapping intact
> - Cleans up unused `useMemo` import
> 
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
79c42d87b4. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->
2025-12-30 13:21:53 +00:00
Ubbe
5894a8fcdf fix(frontend): use DS Dialog on old builder (#11643)
## Changes 🏗️

Use the Design System `<Dialog />` on the old builder, which supports
long content scrolling (the current one does not, causing issues in
graphs with many run inputs)...

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run locally and test the above


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

* **New Features**
* Added Enhanced Rendering toggle for improved output handling and
display (controlled via feature flag)

* **Improvements**
  * Refined dialog layouts and user interactions
* Enhanced copy-to-clipboard functionality with toast notifications upon
copying

<sub>✏️ Tip: You can customize this high-level summary in your review
settings.</sub>

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-12-30 20:22:57 +07:00
Ubbe
dff8efa35d fix(frontend): favico colour override issue (#11681)
## Changes 🏗️

Sometimes, on Dev, when navigating between pages, the favicon colour
would revert from Green 🟢 (Dev) to Purple 🟣 (Default). That's because the
`/marketplace` page had custom code overriding it that I didn't notice
earlier...

I also made it use the Next.js metadata API, so it handles the favicon
correctly across navigations.
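
A minimal sketch of the Metadata-API approach in a root `layout.tsx`, with assumed icon file names and environment flag:

```ts
import type { Metadata } from "next";

// Assumed env flag and icon paths, for illustration only.
const isDevEnvironment = process.env.NEXT_PUBLIC_APP_ENV === "dev";

export const metadata: Metadata = {
  icons: {
    icon: isDevEnvironment ? "/favicon-green.svg" : "/favicon-purple.svg",
  },
};
```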

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run locally and test the above
2025-12-30 20:22:32 +07:00
seer-by-sentry[bot]
e26822998f fix: Handle missing or null 'items' key in DataForSEO Related Keywords block (#10989)
### Changes 🏗️

- Modified the DataForSEO Related Keywords block to handle cases where
the 'items' key is missing or has a null value in the API response.
- Ensures that the code gracefully handles these scenarios by defaulting
to an empty list, preventing potential errors. Fixes
[AUTOGPT-SERVER-66D](https://sentry.io/organizations/significant-gravitas/issues/6902944636/).

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
- [x] The DataForSEO API now returns an empty list when there are no
results, preventing the code from attempting to iterate on a null value.

<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> Strengthens parsing of DataForSEO Labs response to avoid errors when
`items` is missing or null.
> 
> - In `backend/blocks/dataforseo/related_keywords.py` `run()`, sets
`items = first_result.get("items") or []` when `first_result` is a
`dict`, otherwise `[]`, ensuring safe iteration
> - Prevents exceptions and yields empty results when no items are
returned
> 
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
cc465ddbf2. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->

Co-authored-by: seer-by-sentry[bot] <157164994+seer-by-sentry[bot]@users.noreply.github.com>
Co-authored-by: Toran Bruce Richards <toran.richards@gmail.com>
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <ntindle@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
2025-12-26 16:17:24 +00:00
Ubbe
4a7bc006a8 hotfix(frontend): chat should be disabled by default (#11639)
### Changes 🏗️

Chat should be disabled by default; otherwise, it flashes, and if Launch
Darkly fails to load, it is dangerous.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run locally with Launch Darkly disabled and test the above
2025-12-18 19:04:13 +01:00
627 changed files with 181709 additions and 32814 deletions

View File

@@ -29,7 +29,7 @@ jobs:
strategy:
fail-fast: false
matrix:
python-version: ["3.12", "3.13", "3.14"]
python-version: ["3.10"]
platform-os: [ubuntu, macos, macos-arm64, windows]
runs-on: ${{ matrix.platform-os != 'macos-arm64' && format('{0}-latest', matrix.platform-os) || 'macos-14' }}

View File

@@ -11,6 +11,9 @@ on:
- 'classic/original_autogpt/**'
- 'classic/forge/**'
- 'classic/benchmark/**'
- 'classic/run'
- 'classic/cli.py'
- 'classic/setup.py'
- '!**/*.md'
pull_request:
branches: [ master, dev, release-* ]
@@ -19,6 +22,9 @@ on:
- 'classic/original_autogpt/**'
- 'classic/forge/**'
- 'classic/benchmark/**'
- 'classic/run'
- 'classic/cli.py'
- 'classic/setup.py'
- '!**/*.md'
defaults:
@@ -53,15 +59,10 @@ jobs:
run: |
curl -sSL https://install.python-poetry.org | python -
- name: Install dependencies
working-directory: ./classic/${{ matrix.agent-name }}/
run: poetry install
- name: Run regression tests
run: |
./run agent start ${{ matrix.agent-name }}
cd ${{ matrix.agent-name }}
poetry run serve &
sleep 10 # Wait for server to start
poetry run agbenchmark --mock --test=BasicRetrieval --test=Battleship --test=WebArenaTask_0
poetry run agbenchmark --test=WriteFile
env:

View File

@@ -23,7 +23,7 @@ defaults:
shell: bash
env:
min-python-version: '3.12'
min-python-version: '3.10'
jobs:
test:
@@ -33,7 +33,7 @@ jobs:
strategy:
fail-fast: false
matrix:
python-version: ["3.12", "3.13", "3.14"]
python-version: ["3.10"]
platform-os: [ubuntu, macos, macos-arm64, windows]
runs-on: ${{ matrix.platform-os != 'macos-arm64' && format('{0}-latest', matrix.platform-os) || 'macos-14' }}
defaults:
@@ -128,16 +128,11 @@ jobs:
run: |
curl -sSL https://install.python-poetry.org | python -
- name: Install agent dependencies
working-directory: classic/${{ matrix.agent-name }}
run: poetry install
- name: Run regression tests
working-directory: classic
run: |
./run agent start ${{ matrix.agent-name }}
cd ${{ matrix.agent-name }}
poetry run python -m forge &
sleep 10 # Wait for server to start
set +e # Ignore non-zero exit codes and continue execution
echo "Running the following command: poetry run agbenchmark --maintain --mock"

View File

@@ -31,7 +31,7 @@ jobs:
strategy:
fail-fast: false
matrix:
python-version: ["3.12", "3.13", "3.14"]
python-version: ["3.10"]
platform-os: [ubuntu, macos, macos-arm64, windows]
runs-on: ${{ matrix.platform-os != 'macos-arm64' && format('{0}-latest', matrix.platform-os) || 'macos-14' }}

View File

@@ -0,0 +1,60 @@
name: Classic - Frontend CI/CD
on:
push:
branches:
- master
- dev
- 'ci-test*' # This will match any branch that starts with "ci-test"
paths:
- 'classic/frontend/**'
- '.github/workflows/classic-frontend-ci.yml'
pull_request:
paths:
- 'classic/frontend/**'
- '.github/workflows/classic-frontend-ci.yml'
jobs:
build:
permissions:
contents: write
pull-requests: write
runs-on: ubuntu-latest
env:
BUILD_BRANCH: ${{ format('classic-frontend-build/{0}', github.ref_name) }}
steps:
- name: Checkout Repo
uses: actions/checkout@v4
- name: Setup Flutter
uses: subosito/flutter-action@v2
with:
flutter-version: '3.13.2'
- name: Build Flutter to Web
run: |
cd classic/frontend
flutter build web --base-href /app/
# - name: Commit and Push to ${{ env.BUILD_BRANCH }}
# if: github.event_name == 'push'
# run: |
# git config --local user.email "action@github.com"
# git config --local user.name "GitHub Action"
# git add classic/frontend/build/web
# git checkout -B ${{ env.BUILD_BRANCH }}
# git commit -m "Update frontend build to ${GITHUB_SHA:0:7}" -a
# git push -f origin ${{ env.BUILD_BRANCH }}
- name: Create PR ${{ env.BUILD_BRANCH }} -> ${{ github.ref_name }}
if: github.event_name == 'push'
uses: peter-evans/create-pull-request@v7
with:
add-paths: classic/frontend/build/web
base: ${{ github.ref_name }}
branch: ${{ env.BUILD_BRANCH }}
delete-branch: true
title: "Update frontend build in `${{ github.ref_name }}`"
body: "This PR updates the frontend build based on commit ${{ github.sha }}."
commit-message: "Update frontend build based on commit ${{ github.sha }}"

View File

@@ -59,7 +59,7 @@ jobs:
needs: get-changed-parts
runs-on: ubuntu-latest
env:
min-python-version: "3.12"
min-python-version: "3.10"
strategy:
matrix:
@@ -111,7 +111,7 @@ jobs:
needs: get-changed-parts
runs-on: ubuntu-latest
env:
min-python-version: "3.12"
min-python-version: "3.10"
strategy:
matrix:

.gitignore
View File

@@ -3,7 +3,6 @@
classic/original_autogpt/keys.py
classic/original_autogpt/*.json
auto_gpt_workspace/*
.autogpt/
*.mpeg
.env
# Root .env files
@@ -178,5 +177,5 @@ autogpt_platform/backend/settings.py
*.ign.*
.test-contents
**/.claude/settings.local.json
.claude/settings.local.json
/autogpt_platform/backend/logs

View File

@@ -3,6 +3,7 @@ from datetime import UTC, datetime
from os import getenv
import pytest
from prisma.types import ProfileCreateInput
from pydantic import SecretStr
from backend.api.features.chat.model import ChatSession
@@ -49,13 +50,13 @@ async def setup_test_data():
# 1b. Create a profile with username for the user (required for store agent lookup)
username = user.email.split("@")[0]
await prisma.profile.create(
data={
"userId": user.id,
"username": username,
"name": f"Test User {username}",
"description": "Test user profile",
"links": [], # Required field - empty array for test profiles
}
data=ProfileCreateInput(
userId=user.id,
username=username,
name=f"Test User {username}",
description="Test user profile",
links=[], # Required field - empty array for test profiles
)
)
# 2. Create a test graph with agent input -> agent output
@@ -172,13 +173,13 @@ async def setup_llm_test_data():
# 1b. Create a profile with username for the user (required for store agent lookup)
username = user.email.split("@")[0]
await prisma.profile.create(
data={
"userId": user.id,
"username": username,
"name": f"Test User {username}",
"description": "Test user profile for LLM tests",
"links": [], # Required field - empty array for test profiles
}
data=ProfileCreateInput(
userId=user.id,
username=username,
name=f"Test User {username}",
description="Test user profile for LLM tests",
links=[], # Required field - empty array for test profiles
)
)
# 2. Create test OpenAI credentials for the user
@@ -332,13 +333,13 @@ async def setup_firecrawl_test_data():
# 1b. Create a profile with username for the user (required for store agent lookup)
username = user.email.split("@")[0]
await prisma.profile.create(
data={
"userId": user.id,
"username": username,
"name": f"Test User {username}",
"description": "Test user profile for Firecrawl tests",
"links": [], # Required field - empty array for test profiles
}
data=ProfileCreateInput(
userId=user.id,
username=username,
name=f"Test User {username}",
description="Test user profile for Firecrawl tests",
links=[], # Required field - empty array for test profiles
)
)
# NOTE: We deliberately do NOT create Firecrawl credentials for this user

View File

@@ -817,18 +817,16 @@ async def add_store_agent_to_library(
# Create LibraryAgent entry
added_agent = await prisma.models.LibraryAgent.prisma().create(
data={
"User": {"connect": {"id": user_id}},
"AgentGraph": {
data=prisma.types.LibraryAgentCreateInput(
User={"connect": {"id": user_id}},
AgentGraph={
"connect": {
"graphVersionId": {"id": graph.id, "version": graph.version}
}
},
"isCreatedByUser": False,
"settings": SafeJson(
_initialize_graph_settings(graph_model).model_dump()
),
},
isCreatedByUser=False,
settings=SafeJson(_initialize_graph_settings(graph_model).model_dump()),
),
include=library_agent_include(
user_id, include_nodes=False, include_executions=False
),

View File

@@ -27,6 +27,13 @@ from prisma.models import OAuthApplication as PrismaOAuthApplication
from prisma.models import OAuthAuthorizationCode as PrismaOAuthAuthorizationCode
from prisma.models import OAuthRefreshToken as PrismaOAuthRefreshToken
from prisma.models import User as PrismaUser
from prisma.types import (
OAuthAccessTokenCreateInput,
OAuthApplicationCreateInput,
OAuthAuthorizationCodeCreateInput,
OAuthRefreshTokenCreateInput,
UserCreateInput,
)
from backend.api.rest_api import app
@@ -48,11 +55,11 @@ def test_user_id() -> str:
async def test_user(server, test_user_id: str):
"""Create a test user in the database."""
await PrismaUser.prisma().create(
data={
"id": test_user_id,
"email": f"oauth-test-{test_user_id}@example.com",
"name": "OAuth Test User",
}
data=UserCreateInput(
id=test_user_id,
email=f"oauth-test-{test_user_id}@example.com",
name="OAuth Test User",
)
)
yield test_user_id
@@ -77,22 +84,22 @@ async def test_oauth_app(test_user: str):
client_secret_hash, client_secret_salt = keysmith.hash_key(client_secret_plaintext)
await PrismaOAuthApplication.prisma().create(
data={
"id": app_id,
"name": "Test OAuth App",
"description": "Test application for integration tests",
"clientId": client_id,
"clientSecret": client_secret_hash,
"clientSecretSalt": client_secret_salt,
"redirectUris": [
data=OAuthApplicationCreateInput(
id=app_id,
name="Test OAuth App",
description="Test application for integration tests",
clientId=client_id,
clientSecret=client_secret_hash,
clientSecretSalt=client_secret_salt,
redirectUris=[
"https://example.com/callback",
"http://localhost:3000/callback",
],
"grantTypes": ["authorization_code", "refresh_token"],
"scopes": [APIKeyPermission.EXECUTE_GRAPH, APIKeyPermission.READ_GRAPH],
"ownerId": test_user,
"isActive": True,
}
grantTypes=["authorization_code", "refresh_token"],
scopes=[APIKeyPermission.EXECUTE_GRAPH, APIKeyPermission.READ_GRAPH],
ownerId=test_user,
isActive=True,
)
)
yield {
@@ -296,19 +303,19 @@ async def inactive_oauth_app(test_user: str):
client_secret_hash, client_secret_salt = keysmith.hash_key(client_secret_plaintext)
await PrismaOAuthApplication.prisma().create(
data={
"id": app_id,
"name": "Inactive OAuth App",
"description": "Inactive test application",
"clientId": client_id,
"clientSecret": client_secret_hash,
"clientSecretSalt": client_secret_salt,
"redirectUris": ["https://example.com/callback"],
"grantTypes": ["authorization_code", "refresh_token"],
"scopes": [APIKeyPermission.EXECUTE_GRAPH],
"ownerId": test_user,
"isActive": False, # Inactive!
}
data=OAuthApplicationCreateInput(
id=app_id,
name="Inactive OAuth App",
description="Inactive test application",
clientId=client_id,
clientSecret=client_secret_hash,
clientSecretSalt=client_secret_salt,
redirectUris=["https://example.com/callback"],
grantTypes=["authorization_code", "refresh_token"],
scopes=[APIKeyPermission.EXECUTE_GRAPH],
ownerId=test_user,
isActive=False, # Inactive!
)
)
yield {
@@ -699,14 +706,14 @@ async def test_token_authorization_code_expired(
now = datetime.now(timezone.utc)
await PrismaOAuthAuthorizationCode.prisma().create(
data={
"code": expired_code,
"applicationId": test_oauth_app["id"],
"userId": test_user,
"scopes": [APIKeyPermission.EXECUTE_GRAPH],
"redirectUri": test_oauth_app["redirect_uri"],
"expiresAt": now - timedelta(hours=1), # Already expired
}
data=OAuthAuthorizationCodeCreateInput(
code=expired_code,
applicationId=test_oauth_app["id"],
userId=test_user,
scopes=[APIKeyPermission.EXECUTE_GRAPH],
redirectUri=test_oauth_app["redirect_uri"],
expiresAt=now - timedelta(hours=1), # Already expired
)
)
response = await client.post(
@@ -942,13 +949,13 @@ async def test_token_refresh_expired(
now = datetime.now(timezone.utc)
await PrismaOAuthRefreshToken.prisma().create(
data={
"token": expired_token_hash,
"applicationId": test_oauth_app["id"],
"userId": test_user,
"scopes": [APIKeyPermission.EXECUTE_GRAPH],
"expiresAt": now - timedelta(days=1), # Already expired
}
data=OAuthRefreshTokenCreateInput(
token=expired_token_hash,
applicationId=test_oauth_app["id"],
userId=test_user,
scopes=[APIKeyPermission.EXECUTE_GRAPH],
expiresAt=now - timedelta(days=1), # Already expired
)
)
response = await client.post(
@@ -980,14 +987,14 @@ async def test_token_refresh_revoked(
now = datetime.now(timezone.utc)
await PrismaOAuthRefreshToken.prisma().create(
data={
"token": revoked_token_hash,
"applicationId": test_oauth_app["id"],
"userId": test_user,
"scopes": [APIKeyPermission.EXECUTE_GRAPH],
"expiresAt": now + timedelta(days=30), # Not expired
"revokedAt": now - timedelta(hours=1), # But revoked
}
data=OAuthRefreshTokenCreateInput(
token=revoked_token_hash,
applicationId=test_oauth_app["id"],
userId=test_user,
scopes=[APIKeyPermission.EXECUTE_GRAPH],
expiresAt=now + timedelta(days=30), # Not expired
revokedAt=now - timedelta(hours=1), # But revoked
)
)
response = await client.post(
@@ -1013,19 +1020,19 @@ async def other_oauth_app(test_user: str):
client_secret_hash, client_secret_salt = keysmith.hash_key(client_secret_plaintext)
await PrismaOAuthApplication.prisma().create(
data={
"id": app_id,
"name": "Other OAuth App",
"description": "Second test application",
"clientId": client_id,
"clientSecret": client_secret_hash,
"clientSecretSalt": client_secret_salt,
"redirectUris": ["https://other.example.com/callback"],
"grantTypes": ["authorization_code", "refresh_token"],
"scopes": [APIKeyPermission.EXECUTE_GRAPH],
"ownerId": test_user,
"isActive": True,
}
data=OAuthApplicationCreateInput(
id=app_id,
name="Other OAuth App",
description="Second test application",
clientId=client_id,
clientSecret=client_secret_hash,
clientSecretSalt=client_secret_salt,
redirectUris=["https://other.example.com/callback"],
grantTypes=["authorization_code", "refresh_token"],
scopes=[APIKeyPermission.EXECUTE_GRAPH],
ownerId=test_user,
isActive=True,
)
)
yield {
@@ -1052,13 +1059,13 @@ async def test_token_refresh_wrong_application(
now = datetime.now(timezone.utc)
await PrismaOAuthRefreshToken.prisma().create(
data={
"token": token_hash,
"applicationId": test_oauth_app["id"], # Belongs to test_oauth_app
"userId": test_user,
"scopes": [APIKeyPermission.EXECUTE_GRAPH],
"expiresAt": now + timedelta(days=30),
}
data=OAuthRefreshTokenCreateInput(
token=token_hash,
applicationId=test_oauth_app["id"], # Belongs to test_oauth_app
userId=test_user,
scopes=[APIKeyPermission.EXECUTE_GRAPH],
expiresAt=now + timedelta(days=30),
)
)
# Try to use it with `other_oauth_app`
@@ -1267,19 +1274,19 @@ async def test_validate_access_token_fails_when_app_disabled(
client_secret_hash, client_secret_salt = keysmith.hash_key(client_secret_plaintext)
await PrismaOAuthApplication.prisma().create(
data={
"id": app_id,
"name": "App To Be Disabled",
"description": "Test app for disabled validation",
"clientId": client_id,
"clientSecret": client_secret_hash,
"clientSecretSalt": client_secret_salt,
"redirectUris": ["https://example.com/callback"],
"grantTypes": ["authorization_code"],
"scopes": [APIKeyPermission.EXECUTE_GRAPH],
"ownerId": test_user,
"isActive": True,
}
data=OAuthApplicationCreateInput(
id=app_id,
name="App To Be Disabled",
description="Test app for disabled validation",
clientId=client_id,
clientSecret=client_secret_hash,
clientSecretSalt=client_secret_salt,
redirectUris=["https://example.com/callback"],
grantTypes=["authorization_code"],
scopes=[APIKeyPermission.EXECUTE_GRAPH],
ownerId=test_user,
isActive=True,
)
)
# Create an access token directly in the database
@@ -1288,13 +1295,13 @@ async def test_validate_access_token_fails_when_app_disabled(
now = datetime.now(timezone.utc)
await PrismaOAuthAccessToken.prisma().create(
data={
"token": token_hash,
"applicationId": app_id,
"userId": test_user,
"scopes": [APIKeyPermission.EXECUTE_GRAPH],
"expiresAt": now + timedelta(hours=1),
}
data=OAuthAccessTokenCreateInput(
token=token_hash,
applicationId=app_id,
userId=test_user,
scopes=[APIKeyPermission.EXECUTE_GRAPH],
expiresAt=now + timedelta(hours=1),
)
)
# Token should be valid while app is active
@@ -1561,19 +1568,19 @@ async def test_revoke_token_from_different_app_fails_silently(
)
await PrismaOAuthApplication.prisma().create(
data={
"id": app2_id,
"name": "Second Test OAuth App",
"description": "Second test application for cross-app revocation test",
"clientId": app2_client_id,
"clientSecret": app2_client_secret_hash,
"clientSecretSalt": app2_client_secret_salt,
"redirectUris": ["https://other-app.com/callback"],
"grantTypes": ["authorization_code", "refresh_token"],
"scopes": [APIKeyPermission.EXECUTE_GRAPH, APIKeyPermission.READ_GRAPH],
"ownerId": test_user,
"isActive": True,
}
data=OAuthApplicationCreateInput(
id=app2_id,
name="Second Test OAuth App",
description="Second test application for cross-app revocation test",
clientId=app2_client_id,
clientSecret=app2_client_secret_hash,
clientSecretSalt=app2_client_secret_salt,
redirectUris=["https://other-app.com/callback"],
grantTypes=["authorization_code", "refresh_token"],
scopes=[APIKeyPermission.EXECUTE_GRAPH, APIKeyPermission.READ_GRAPH],
ownerId=test_user,
isActive=True,
)
)
# App 2 tries to revoke App 1's access token

View File

@@ -249,7 +249,9 @@ async def log_search_term(search_query: str):
date = datetime.now(timezone.utc).replace(hour=0, minute=0, second=0, microsecond=0)
try:
await prisma.models.SearchTerms.prisma().create(
data={"searchTerm": search_query, "createdDate": date}
data=prisma.types.SearchTermsCreateInput(
searchTerm=search_query, createdDate=date
)
)
except Exception as e:
# Fail silently here so that logging search terms doesn't break the app
@@ -1456,11 +1458,9 @@ async def _approve_sub_agent(
# Create new version if no matching version found
next_version = max((v.version for v in listing.Versions or []), default=0) + 1
await prisma.models.StoreListingVersion.prisma(tx).create(
data={
**_create_sub_agent_version_data(sub_graph, heading, main_agent_name),
"version": next_version,
"storeListingId": listing.id,
}
data=_create_sub_agent_version_data(
sub_graph, heading, main_agent_name, next_version, listing.id
)
)
await prisma.models.StoreListing.prisma(tx).update(
where={"id": listing.id}, data={"hasApprovedVersion": True}
@@ -1468,10 +1468,14 @@ async def _approve_sub_agent(
def _create_sub_agent_version_data(
sub_graph: prisma.models.AgentGraph, heading: str, main_agent_name: str
sub_graph: prisma.models.AgentGraph,
heading: str,
main_agent_name: str,
version: typing.Optional[int] = None,
store_listing_id: typing.Optional[str] = None,
) -> prisma.types.StoreListingVersionCreateInput:
"""Create store listing version data for a sub-agent"""
return prisma.types.StoreListingVersionCreateInput(
data = prisma.types.StoreListingVersionCreateInput(
agentGraphId=sub_graph.id,
agentGraphVersion=sub_graph.version,
name=sub_graph.name or heading,
@@ -1486,6 +1490,11 @@ def _create_sub_agent_version_data(
imageUrls=[], # Sub-agents don't need images
categories=[], # Sub-agents don't need categories
)
if version is not None:
data["version"] = version
if store_listing_id is not None:
data["storeListingId"] = store_listing_id
return data
async def review_store_submission(

View File
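The `_create_sub_agent_version_data` change above adds optional `version` and `store_listing_id` parameters and only writes them into the payload when provided. A hedged sketch of why conditional assignment is used here, with a stand-in TypedDict rather than the real generated type:

```python
# Stand-in for the generated StoreListingVersionCreateInput (total=False means
# keys are optional); optional values are added only when supplied, instead of
# being passed through as None.
from typing import Optional, TypedDict

class VersionCreateInput(TypedDict, total=False):
    agentGraphId: str
    version: int
    storeListingId: str

def build_version_data(
    graph_id: str,
    version: Optional[int] = None,
    store_listing_id: Optional[str] = None,
) -> VersionCreateInput:
    data: VersionCreateInput = {"agentGraphId": graph_id}
    if version is not None:
        data["version"] = version
    if store_listing_id is not None:
        data["storeListingId"] = store_listing_id
    return data
```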

@@ -39,7 +39,7 @@ import backend.data.user
import backend.integrations.webhooks.utils
import backend.util.service
import backend.util.settings
from backend.blocks.llm import LlmModel
from backend.blocks.llm import DEFAULT_LLM_MODEL
from backend.data.model import Credentials
from backend.integrations.providers import ProviderName
from backend.monitoring.instrumentation import instrument_fastapi
@@ -113,7 +113,7 @@ async def lifespan_context(app: fastapi.FastAPI):
await backend.data.user.migrate_and_encrypt_user_integrations()
await backend.data.graph.fix_llm_provider_credentials()
await backend.data.graph.migrate_llm_models(LlmModel.GPT4O)
await backend.data.graph.migrate_llm_models(DEFAULT_LLM_MODEL)
await backend.integrations.webhooks.utils.migrate_legacy_triggered_graphs()
with launch_darkly_context():

View File

@@ -1,6 +1,7 @@
from typing import Any
from backend.blocks.llm import (
DEFAULT_LLM_MODEL,
TEST_CREDENTIALS,
TEST_CREDENTIALS_INPUT,
AIBlockBase,
@@ -49,7 +50,7 @@ class AIConditionBlock(AIBlockBase):
)
model: LlmModel = SchemaField(
title="LLM Model",
default=LlmModel.GPT4O,
default=DEFAULT_LLM_MODEL,
description="The language model to use for evaluating the condition.",
advanced=False,
)
@@ -81,7 +82,7 @@ class AIConditionBlock(AIBlockBase):
"condition": "the input is an email address",
"yes_value": "Valid email",
"no_value": "Not an email",
"model": LlmModel.GPT4O,
"model": DEFAULT_LLM_MODEL,
"credentials": TEST_CREDENTIALS_INPUT,
},
test_credentials=TEST_CREDENTIALS,

View File

@@ -182,13 +182,10 @@ class DataForSeoRelatedKeywordsBlock(Block):
if results and len(results) > 0:
# results is a list, get the first element
first_result = results[0] if isinstance(results, list) else results
items = (
first_result.get("items", [])
if isinstance(first_result, dict)
else []
)
# Ensure items is never None
if items is None:
# Handle missing key, null value, or valid list value
if isinstance(first_result, dict):
items = first_result.get("items") or []
else:
items = []
for item in items:
# Extract keyword_data from the item
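The rewrite above matters because `dict.get("items", [])` only falls back to the default when the key is absent, not when the key exists with a `null` value. A small self-contained illustration of the difference:

```python
# API responses can contain "items": null, which defeats a .get() default.
first_result = {"items": None}

items_with_default = first_result.get("items", [])   # -> None (default not used)
items_normalized = first_result.get("items") or []   # -> [] (handles missing and null)

assert items_with_default is None
assert items_normalized == []
```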

File diff suppressed because it is too large.

View File

@@ -92,8 +92,9 @@ class LlmModel(str, Enum, metaclass=LlmModelMeta):
O1 = "o1"
O1_MINI = "o1-mini"
# GPT-5 models
GPT5 = "gpt-5-2025-08-07"
GPT5_2 = "gpt-5.2-2025-12-11"
GPT5_1 = "gpt-5.1-2025-11-13"
GPT5 = "gpt-5-2025-08-07"
GPT5_MINI = "gpt-5-mini-2025-08-07"
GPT5_NANO = "gpt-5-nano-2025-08-07"
GPT5_CHAT = "gpt-5-chat-latest"
@@ -194,8 +195,9 @@ MODEL_METADATA = {
LlmModel.O1: ModelMetadata("openai", 200000, 100000), # o1-2024-12-17
LlmModel.O1_MINI: ModelMetadata("openai", 128000, 65536), # o1-mini-2024-09-12
# GPT-5 models
LlmModel.GPT5: ModelMetadata("openai", 400000, 128000),
LlmModel.GPT5_2: ModelMetadata("openai", 400000, 128000),
LlmModel.GPT5_1: ModelMetadata("openai", 400000, 128000),
LlmModel.GPT5: ModelMetadata("openai", 400000, 128000),
LlmModel.GPT5_MINI: ModelMetadata("openai", 400000, 128000),
LlmModel.GPT5_NANO: ModelMetadata("openai", 400000, 128000),
LlmModel.GPT5_CHAT: ModelMetadata("openai", 400000, 16384),
@@ -303,6 +305,8 @@ MODEL_METADATA = {
LlmModel.V0_1_0_MD: ModelMetadata("v0", 128000, 64000),
}
DEFAULT_LLM_MODEL = LlmModel.GPT5_2
for model in LlmModel:
if model not in MODEL_METADATA:
raise ValueError(f"Missing MODEL_METADATA metadata for model: {model}")
@@ -790,7 +794,7 @@ class AIStructuredResponseGeneratorBlock(AIBlockBase):
)
model: LlmModel = SchemaField(
title="LLM Model",
default=LlmModel.GPT4O,
default=DEFAULT_LLM_MODEL,
description="The language model to use for answering the prompt.",
advanced=False,
)
@@ -855,7 +859,7 @@ class AIStructuredResponseGeneratorBlock(AIBlockBase):
input_schema=AIStructuredResponseGeneratorBlock.Input,
output_schema=AIStructuredResponseGeneratorBlock.Output,
test_input={
"model": LlmModel.GPT4O,
"model": DEFAULT_LLM_MODEL,
"credentials": TEST_CREDENTIALS_INPUT,
"expected_format": {
"key1": "value1",
@@ -1221,7 +1225,7 @@ class AITextGeneratorBlock(AIBlockBase):
)
model: LlmModel = SchemaField(
title="LLM Model",
default=LlmModel.GPT4O,
default=DEFAULT_LLM_MODEL,
description="The language model to use for answering the prompt.",
advanced=False,
)
@@ -1317,7 +1321,7 @@ class AITextSummarizerBlock(AIBlockBase):
)
model: LlmModel = SchemaField(
title="LLM Model",
default=LlmModel.GPT4O,
default=DEFAULT_LLM_MODEL,
description="The language model to use for summarizing the text.",
)
focus: str = SchemaField(
@@ -1534,7 +1538,7 @@ class AIConversationBlock(AIBlockBase):
)
model: LlmModel = SchemaField(
title="LLM Model",
default=LlmModel.GPT4O,
default=DEFAULT_LLM_MODEL,
description="The language model to use for the conversation.",
)
credentials: AICredentials = AICredentialsField()
@@ -1572,7 +1576,7 @@ class AIConversationBlock(AIBlockBase):
},
{"role": "user", "content": "Where was it played?"},
],
"model": LlmModel.GPT4O,
"model": DEFAULT_LLM_MODEL,
"credentials": TEST_CREDENTIALS_INPUT,
},
test_credentials=TEST_CREDENTIALS,
@@ -1635,7 +1639,7 @@ class AIListGeneratorBlock(AIBlockBase):
)
model: LlmModel = SchemaField(
title="LLM Model",
default=LlmModel.GPT4O,
default=DEFAULT_LLM_MODEL,
description="The language model to use for generating the list.",
advanced=True,
)
@@ -1692,7 +1696,7 @@ class AIListGeneratorBlock(AIBlockBase):
"drawing explorers to uncover its mysteries. Each planet showcases the limitless possibilities of "
"fictional worlds."
),
"model": LlmModel.GPT4O,
"model": DEFAULT_LLM_MODEL,
"credentials": TEST_CREDENTIALS_INPUT,
"max_retries": 3,
"force_json_output": False,

View File
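The llm.py hunk above introduces a single `DEFAULT_LLM_MODEL` constant so block defaults and tests no longer hard-code a specific model. A trimmed sketch of the idea, using only names that appear in the hunk (the enum body here is a stand-in, not the full list):

```python
from enum import Enum

class LlmModel(str, Enum):              # trimmed stand-in for the real enum
    GPT4O = "gpt-4o"
    GPT5_2 = "gpt-5.2-2025-12-11"

MODEL_METADATA = {LlmModel.GPT4O: ..., LlmModel.GPT5_2: ...}

DEFAULT_LLM_MODEL = LlmModel.GPT5_2     # future model bumps touch one constant

# Startup guard from the hunk above: every enum member must have metadata.
for model in LlmModel:
    if model not in MODEL_METADATA:
        raise ValueError(f"Missing MODEL_METADATA metadata for model: {model}")
```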

@@ -226,7 +226,7 @@ class SmartDecisionMakerBlock(Block):
)
model: llm.LlmModel = SchemaField(
title="LLM Model",
default=llm.LlmModel.GPT4O,
default=llm.DEFAULT_LLM_MODEL,
description="The language model to use for answering the prompt.",
advanced=False,
)

View File

@@ -196,6 +196,15 @@ class TestXMLParserBlockSecurity:
async for _ in block.run(XMLParserBlock.Input(input_xml=large_xml)):
pass
async def test_rejects_text_outside_root(self):
"""Ensure parser surfaces readable errors for invalid root text."""
block = XMLParserBlock()
invalid_xml = "<root><child>value</child></root> trailing"
with pytest.raises(ValueError, match="text outside the root element"):
async for _ in block.run(XMLParserBlock.Input(input_xml=invalid_xml)):
pass
class TestStoreMediaFileSecurity:
"""Test file storage security limits."""

View File

@@ -28,7 +28,7 @@ class TestLLMStatsTracking:
response = await llm.llm_call(
credentials=llm.TEST_CREDENTIALS,
llm_model=llm.LlmModel.GPT4O,
llm_model=llm.DEFAULT_LLM_MODEL,
prompt=[{"role": "user", "content": "Hello"}],
max_tokens=100,
)
@@ -65,7 +65,7 @@ class TestLLMStatsTracking:
input_data = llm.AIStructuredResponseGeneratorBlock.Input(
prompt="Test prompt",
expected_format={"key1": "desc1", "key2": "desc2"},
model=llm.LlmModel.GPT4O,
model=llm.DEFAULT_LLM_MODEL,
credentials=llm.TEST_CREDENTIALS_INPUT,  # type: ignore
)
@@ -109,7 +109,7 @@ class TestLLMStatsTracking:
# Run the block
input_data = llm.AITextGeneratorBlock.Input(
prompt="Generate text",
model=llm.LlmModel.GPT4O,
model=llm.DEFAULT_LLM_MODEL,
credentials=llm.TEST_CREDENTIALS_INPUT, # type: ignore
)
@@ -170,7 +170,7 @@ class TestLLMStatsTracking:
input_data = llm.AIStructuredResponseGeneratorBlock.Input(
prompt="Test prompt",
expected_format={"key1": "desc1", "key2": "desc2"},
model=llm.LlmModel.GPT4O,
model=llm.DEFAULT_LLM_MODEL,
credentials=llm.TEST_CREDENTIALS_INPUT, # type: ignore
retry=2,
)
@@ -228,7 +228,7 @@ class TestLLMStatsTracking:
input_data = llm.AITextSummarizerBlock.Input(
text=long_text,
model=llm.LlmModel.GPT4O,
model=llm.DEFAULT_LLM_MODEL,
credentials=llm.TEST_CREDENTIALS_INPUT, # type: ignore
max_tokens=100, # Small chunks
chunk_overlap=10,
@@ -299,7 +299,7 @@ class TestLLMStatsTracking:
# Test with very short text (should only need 1 chunk + 1 final summary)
input_data = llm.AITextSummarizerBlock.Input(
text="This is a short text.",
model=llm.LlmModel.GPT4O,
model=llm.DEFAULT_LLM_MODEL,
credentials=llm.TEST_CREDENTIALS_INPUT, # type: ignore
max_tokens=1000, # Large enough to avoid chunking
)
@@ -346,7 +346,7 @@ class TestLLMStatsTracking:
{"role": "assistant", "content": "Hi there!"},
{"role": "user", "content": "How are you?"},
],
model=llm.LlmModel.GPT4O,
model=llm.DEFAULT_LLM_MODEL,
credentials=llm.TEST_CREDENTIALS_INPUT, # type: ignore
)
@@ -387,7 +387,7 @@ class TestLLMStatsTracking:
# Run the block
input_data = llm.AIListGeneratorBlock.Input(
focus="test items",
model=llm.LlmModel.GPT4O,
model=llm.DEFAULT_LLM_MODEL,
credentials=llm.TEST_CREDENTIALS_INPUT, # type: ignore
max_retries=3,
)
@@ -469,7 +469,7 @@ class TestLLMStatsTracking:
input_data = llm.AIStructuredResponseGeneratorBlock.Input(
prompt="Test",
expected_format={"result": "desc"},
model=llm.LlmModel.GPT4O,
model=llm.DEFAULT_LLM_MODEL,
credentials=llm.TEST_CREDENTIALS_INPUT, # type: ignore
)
@@ -513,7 +513,7 @@ class TestAITextSummarizerValidation:
# Create input data
input_data = llm.AITextSummarizerBlock.Input(
text="Some text to summarize",
model=llm.LlmModel.GPT4O,
model=llm.DEFAULT_LLM_MODEL,
credentials=llm.TEST_CREDENTIALS_INPUT, # type: ignore
style=llm.SummaryStyle.BULLET_POINTS,
)
@@ -558,7 +558,7 @@ class TestAITextSummarizerValidation:
# Create input data
input_data = llm.AITextSummarizerBlock.Input(
text="Some text to summarize",
model=llm.LlmModel.GPT4O,
model=llm.DEFAULT_LLM_MODEL,
credentials=llm.TEST_CREDENTIALS_INPUT, # type: ignore
style=llm.SummaryStyle.BULLET_POINTS,
max_tokens=1000,
@@ -593,7 +593,7 @@ class TestAITextSummarizerValidation:
# Create input data
input_data = llm.AITextSummarizerBlock.Input(
text="Some text to summarize",
model=llm.LlmModel.GPT4O,
model=llm.DEFAULT_LLM_MODEL,
credentials=llm.TEST_CREDENTIALS_INPUT, # type: ignore
)
@@ -623,7 +623,7 @@ class TestAITextSummarizerValidation:
# Create input data
input_data = llm.AITextSummarizerBlock.Input(
text="Some text to summarize",
model=llm.LlmModel.GPT4O,
model=llm.DEFAULT_LLM_MODEL,
credentials=llm.TEST_CREDENTIALS_INPUT, # type: ignore
max_tokens=1000,
)
@@ -654,7 +654,7 @@ class TestAITextSummarizerValidation:
# Create input data
input_data = llm.AITextSummarizerBlock.Input(
text="Some text to summarize",
model=llm.LlmModel.GPT4O,
model=llm.DEFAULT_LLM_MODEL,
credentials=llm.TEST_CREDENTIALS_INPUT, # type: ignore
)

View File

@@ -233,7 +233,7 @@ async def test_smart_decision_maker_tracks_llm_stats():
# Create test input
input_data = SmartDecisionMakerBlock.Input(
prompt="Should I continue with this task?",
model=llm_module.LlmModel.GPT4O,
model=llm_module.DEFAULT_LLM_MODEL,
credentials=llm_module.TEST_CREDENTIALS_INPUT, # type: ignore
agent_mode_max_iterations=0,
)
@@ -335,7 +335,7 @@ async def test_smart_decision_maker_parameter_validation():
input_data = SmartDecisionMakerBlock.Input(
prompt="Search for keywords",
model=llm_module.LlmModel.GPT4O,
model=llm_module.DEFAULT_LLM_MODEL,
credentials=llm_module.TEST_CREDENTIALS_INPUT, # type: ignore
retry=2, # Set retry to 2 for testing
agent_mode_max_iterations=0,
@@ -402,7 +402,7 @@ async def test_smart_decision_maker_parameter_validation():
input_data = SmartDecisionMakerBlock.Input(
prompt="Search for keywords",
model=llm_module.LlmModel.GPT4O,
model=llm_module.DEFAULT_LLM_MODEL,
credentials=llm_module.TEST_CREDENTIALS_INPUT, # type: ignore
agent_mode_max_iterations=0,
)
@@ -462,7 +462,7 @@ async def test_smart_decision_maker_parameter_validation():
input_data = SmartDecisionMakerBlock.Input(
prompt="Search for keywords",
model=llm_module.LlmModel.GPT4O,
model=llm_module.DEFAULT_LLM_MODEL,
credentials=llm_module.TEST_CREDENTIALS_INPUT, # type: ignore
agent_mode_max_iterations=0,
)
@@ -526,7 +526,7 @@ async def test_smart_decision_maker_parameter_validation():
input_data = SmartDecisionMakerBlock.Input(
prompt="Search for keywords",
model=llm_module.LlmModel.GPT4O,
model=llm_module.DEFAULT_LLM_MODEL,
credentials=llm_module.TEST_CREDENTIALS_INPUT, # type: ignore
agent_mode_max_iterations=0,
)
@@ -648,7 +648,7 @@ async def test_smart_decision_maker_raw_response_conversion():
input_data = SmartDecisionMakerBlock.Input(
prompt="Test prompt",
model=llm_module.LlmModel.GPT4O,
model=llm_module.DEFAULT_LLM_MODEL,
credentials=llm_module.TEST_CREDENTIALS_INPUT, # type: ignore
retry=2,
agent_mode_max_iterations=0,
@@ -722,7 +722,7 @@ async def test_smart_decision_maker_raw_response_conversion():
):
input_data = SmartDecisionMakerBlock.Input(
prompt="Simple prompt",
model=llm_module.LlmModel.GPT4O,
model=llm_module.DEFAULT_LLM_MODEL,
credentials=llm_module.TEST_CREDENTIALS_INPUT, # type: ignore
agent_mode_max_iterations=0,
)
@@ -778,7 +778,7 @@ async def test_smart_decision_maker_raw_response_conversion():
):
input_data = SmartDecisionMakerBlock.Input(
prompt="Another test",
model=llm_module.LlmModel.GPT4O,
model=llm_module.DEFAULT_LLM_MODEL,
credentials=llm_module.TEST_CREDENTIALS_INPUT, # type: ignore
agent_mode_max_iterations=0,
)
@@ -931,7 +931,7 @@ async def test_smart_decision_maker_agent_mode():
# Test agent mode with max_iterations = 3
input_data = SmartDecisionMakerBlock.Input(
prompt="Complete this task using tools",
model=llm_module.LlmModel.GPT4O,
model=llm_module.DEFAULT_LLM_MODEL,
credentials=llm_module.TEST_CREDENTIALS_INPUT, # type: ignore
agent_mode_max_iterations=3, # Enable agent mode with 3 max iterations
)
@@ -1020,7 +1020,7 @@ async def test_smart_decision_maker_traditional_mode_default():
# Test default behavior (traditional mode)
input_data = SmartDecisionMakerBlock.Input(
prompt="Test prompt",
model=llm_module.LlmModel.GPT4O,
model=llm_module.DEFAULT_LLM_MODEL,
credentials=llm_module.TEST_CREDENTIALS_INPUT, # type: ignore
agent_mode_max_iterations=0, # Traditional mode
)

View File

@@ -373,7 +373,7 @@ async def test_output_yielding_with_dynamic_fields():
input_data = block.input_schema(
prompt="Create a user dictionary",
credentials=llm.TEST_CREDENTIALS_INPUT,
model=llm.LlmModel.GPT4O,
model=llm.DEFAULT_LLM_MODEL,
agent_mode_max_iterations=0, # Use traditional mode to test output yielding
)
@@ -594,7 +594,7 @@ async def test_validation_errors_dont_pollute_conversation():
input_data = block.input_schema(
prompt="Test prompt",
credentials=llm.TEST_CREDENTIALS_INPUT,
model=llm.LlmModel.GPT4O,
model=llm.DEFAULT_LLM_MODEL,
retry=3, # Allow retries
agent_mode_max_iterations=1,
)

View File

@@ -1,5 +1,5 @@
from gravitasml.parser import Parser
from gravitasml.token import tokenize
from gravitasml.token import Token, tokenize
from backend.data.block import Block, BlockOutput, BlockSchemaInput, BlockSchemaOutput
from backend.data.model import SchemaField
@@ -25,6 +25,38 @@ class XMLParserBlock(Block):
],
)
@staticmethod
def _validate_tokens(tokens: list[Token]) -> None:
"""Ensure the XML has a single root element and no stray text."""
if not tokens:
raise ValueError("XML input is empty.")
depth = 0
root_seen = False
for token in tokens:
if token.type == "TAG_OPEN":
if depth == 0 and root_seen:
raise ValueError("XML must have a single root element.")
depth += 1
if depth == 1:
root_seen = True
elif token.type == "TAG_CLOSE":
depth -= 1
if depth < 0:
raise SyntaxError("Unexpected closing tag in XML input.")
elif token.type in {"TEXT", "ESCAPE"}:
if depth == 0 and token.value:
raise ValueError(
"XML contains text outside the root element; "
"wrap content in a single root tag."
)
if depth != 0:
raise SyntaxError("Unclosed tag detected in XML input.")
if not root_seen:
raise ValueError("XML must include a root element.")
async def run(self, input_data: Input, **kwargs) -> BlockOutput:
# Security fix: Add size limits to prevent XML bomb attacks
MAX_XML_SIZE = 10 * 1024 * 1024 # 10MB limit for XML input
@@ -35,7 +67,9 @@ class XMLParserBlock(Block):
)
try:
tokens = tokenize(input_data.input_xml)
tokens = list(tokenize(input_data.input_xml))
self._validate_tokens(tokens)
parser = Parser(tokens)
parsed_result = parser.parse()
yield "parsed_xml", parsed_result

View File
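The `_validate_tokens` method added above enforces a single root element and rejects stray text at the top level. An illustrative sketch of that rule with a stand-in `Token` dataclass (not the gravitasml API), reduced to the "text outside root" case:

```python
from dataclasses import dataclass

@dataclass
class Token:
    type: str   # e.g. "TAG_OPEN", "TAG_CLOSE", "TEXT"
    value: str

def has_text_outside_root(tokens: list[Token]) -> bool:
    depth = 0
    for token in tokens:
        if token.type == "TAG_OPEN":
            depth += 1
        elif token.type == "TAG_CLOSE":
            depth -= 1
        elif token.type == "TEXT" and depth == 0 and token.value.strip():
            return True   # e.g. "<root>...</root> trailing"
    return False
```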

@@ -111,6 +111,8 @@ class TranscribeYoutubeVideoBlock(Block):
return parsed_url.path.split("/")[2]
if parsed_url.path[:3] == "/v/":
return parsed_url.path.split("/")[2]
if parsed_url.path.startswith("/shorts/"):
return parsed_url.path.split("/")[2]
raise ValueError(f"Invalid YouTube URL: {url}")
def get_transcript(

View File
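The YouTube block change above adds support for `/shorts/` URLs. A minimal standard-library sketch of just that branch (the real block also handles watch, embed, and short-link formats):

```python
from urllib.parse import urlparse

def extract_shorts_video_id(url: str) -> str:
    parsed = urlparse(url)
    if parsed.path.startswith("/shorts/"):
        return parsed.path.split("/")[2]
    raise ValueError(f"Invalid YouTube URL: {url}")

# extract_shorts_video_id("https://youtube.com/shorts/abc123XYZ") -> "abc123XYZ"
```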

@@ -42,6 +42,7 @@ from urllib.parse import urlparse
import click
from autogpt_libs.api_key.keysmith import APIKeySmith
from prisma.enums import APIKeyPermission
from prisma.types import OAuthApplicationCreateInput
keysmith = APIKeySmith()
@@ -147,7 +148,7 @@ def format_sql_insert(creds: dict) -> str:
sql = f"""
-- ============================================================
-- OAuth Application: {creds['name']}
-- OAuth Application: {creds["name"]}
-- Generated: {now_iso} UTC
-- ============================================================
@@ -167,14 +168,14 @@ INSERT INTO "OAuthApplication" (
"isActive"
)
VALUES (
'{creds['id']}',
'{creds["id"]}',
NOW(),
NOW(),
'{creds['name']}',
{f"'{creds['description']}'" if creds['description'] else 'NULL'},
'{creds['client_id']}',
'{creds['client_secret_hash']}',
'{creds['client_secret_salt']}',
'{creds["name"]}',
{f"'{creds['description']}'" if creds["description"] else "NULL"},
'{creds["client_id"]}',
'{creds["client_secret_hash"]}',
'{creds["client_secret_salt"]}',
ARRAY{redirect_uris_pg}::TEXT[],
ARRAY{grant_types_pg}::TEXT[],
ARRAY{scopes_pg}::"APIKeyPermission"[],
@@ -186,8 +187,8 @@ VALUES (
-- ⚠️ IMPORTANT: Save these credentials securely!
-- ============================================================
--
-- Client ID: {creds['client_id']}
-- Client Secret: {creds['client_secret_plaintext']}
-- Client ID: {creds["client_id"]}
-- Client Secret: {creds["client_secret_plaintext"]}
--
-- ⚠️ The client secret is shown ONLY ONCE!
-- ⚠️ Store it securely and share only with the application developer.
@@ -200,7 +201,7 @@ VALUES (
-- To verify the application was created:
-- SELECT "clientId", name, scopes, "redirectUris", "isActive"
-- FROM "OAuthApplication"
-- WHERE "clientId" = '{creds['client_id']}';
-- WHERE "clientId" = '{creds["client_id"]}';
"""
return sql
@@ -834,19 +835,19 @@ async def create_test_app_in_db(
# Insert into database
app = await OAuthApplication.prisma().create(
data={
"id": creds["id"],
"name": creds["name"],
"description": creds["description"],
"clientId": creds["client_id"],
"clientSecret": creds["client_secret_hash"],
"clientSecretSalt": creds["client_secret_salt"],
"redirectUris": creds["redirect_uris"],
"grantTypes": creds["grant_types"],
"scopes": creds["scopes"],
"ownerId": owner_id,
"isActive": True,
}
data=OAuthApplicationCreateInput(
id=creds["id"],
name=creds["name"],
description=creds["description"],
clientId=creds["client_id"],
clientSecret=creds["client_secret_hash"],
clientSecretSalt=creds["client_secret_salt"],
redirectUris=creds["redirect_uris"],
grantTypes=creds["grant_types"],
scopes=creds["scopes"],
ownerId=owner_id,
isActive=True,
)
)
click.echo(f"✓ Created test OAuth application: {app.clientId}")

View File

@@ -6,7 +6,7 @@ from typing import Literal, Optional
from autogpt_libs.api_key.keysmith import APIKeySmith
from prisma.enums import APIKeyPermission, APIKeyStatus
from prisma.models import APIKey as PrismaAPIKey
from prisma.types import APIKeyWhereUniqueInput
from prisma.types import APIKeyCreateInput, APIKeyWhereUniqueInput
from pydantic import Field
from backend.data.includes import MAX_USER_API_KEYS_FETCH
@@ -82,17 +82,17 @@ async def create_api_key(
generated_key = keysmith.generate_key()
saved_key_obj = await PrismaAPIKey.prisma().create(
data={
"id": str(uuid.uuid4()),
"name": name,
"head": generated_key.head,
"tail": generated_key.tail,
"hash": generated_key.hash,
"salt": generated_key.salt,
"permissions": [p for p in permissions],
"description": description,
"userId": user_id,
}
data=APIKeyCreateInput(
id=str(uuid.uuid4()),
name=name,
head=generated_key.head,
tail=generated_key.tail,
hash=generated_key.hash,
salt=generated_key.salt,
permissions=[p for p in permissions],
description=description,
userId=user_id,
)
)
return APIKeyInfo.from_db(saved_key_obj), generated_key.key

View File

@@ -22,7 +22,12 @@ from prisma.models import OAuthAccessToken as PrismaOAuthAccessToken
from prisma.models import OAuthApplication as PrismaOAuthApplication
from prisma.models import OAuthAuthorizationCode as PrismaOAuthAuthorizationCode
from prisma.models import OAuthRefreshToken as PrismaOAuthRefreshToken
from prisma.types import OAuthApplicationUpdateInput
from prisma.types import (
OAuthAccessTokenCreateInput,
OAuthApplicationUpdateInput,
OAuthAuthorizationCodeCreateInput,
OAuthRefreshTokenCreateInput,
)
from pydantic import BaseModel, Field, SecretStr
from .base import APIAuthorizationInfo
@@ -359,17 +364,17 @@ async def create_authorization_code(
expires_at = now + AUTHORIZATION_CODE_TTL
saved_code = await PrismaOAuthAuthorizationCode.prisma().create(
data={
"id": str(uuid.uuid4()),
"code": code,
"expiresAt": expires_at,
"applicationId": application_id,
"userId": user_id,
"scopes": [s for s in scopes],
"redirectUri": redirect_uri,
"codeChallenge": code_challenge,
"codeChallengeMethod": code_challenge_method,
}
data=OAuthAuthorizationCodeCreateInput(
id=str(uuid.uuid4()),
code=code,
expiresAt=expires_at,
applicationId=application_id,
userId=user_id,
scopes=[s for s in scopes],
redirectUri=redirect_uri,
codeChallenge=code_challenge,
codeChallengeMethod=code_challenge_method,
)
)
return OAuthAuthorizationCodeInfo.from_db(saved_code)
@@ -490,14 +495,14 @@ async def create_access_token(
expires_at = now + ACCESS_TOKEN_TTL
saved_token = await PrismaOAuthAccessToken.prisma().create(
data={
"id": str(uuid.uuid4()),
"token": token_hash, # SHA256 hash for direct lookup
"expiresAt": expires_at,
"applicationId": application_id,
"userId": user_id,
"scopes": [s for s in scopes],
}
data=OAuthAccessTokenCreateInput(
id=str(uuid.uuid4()),
token=token_hash, # SHA256 hash for direct lookup
expiresAt=expires_at,
applicationId=application_id,
userId=user_id,
scopes=[s for s in scopes],
)
)
return OAuthAccessToken.from_db(saved_token, plaintext_token=plaintext_token)
@@ -607,14 +612,14 @@ async def create_refresh_token(
expires_at = now + REFRESH_TOKEN_TTL
saved_token = await PrismaOAuthRefreshToken.prisma().create(
data={
"id": str(uuid.uuid4()),
"token": token_hash, # SHA256 hash for direct lookup
"expiresAt": expires_at,
"applicationId": application_id,
"userId": user_id,
"scopes": [s for s in scopes],
}
data=OAuthRefreshTokenCreateInput(
id=str(uuid.uuid4()),
token=token_hash, # SHA256 hash for direct lookup
expiresAt=expires_at,
applicationId=application_id,
userId=user_id,
scopes=[s for s in scopes],
)
)
return OAuthRefreshToken.from_db(saved_token, plaintext_token=plaintext_token)

View File

@@ -59,12 +59,13 @@ from backend.integrations.credentials_store import (
MODEL_COST: dict[LlmModel, int] = {
LlmModel.O3: 4,
LlmModel.O3_MINI: 2, # $1.10 / $4.40
LlmModel.O1: 16, # $15 / $60
LlmModel.O3_MINI: 2,
LlmModel.O1: 16,
LlmModel.O1_MINI: 4,
# GPT-5 models
LlmModel.GPT5: 2,
LlmModel.GPT5_2: 6,
LlmModel.GPT5_1: 5,
LlmModel.GPT5: 2,
LlmModel.GPT5_MINI: 1,
LlmModel.GPT5_NANO: 1,
LlmModel.GPT5_CHAT: 5,
@@ -87,7 +88,7 @@ MODEL_COST: dict[LlmModel, int] = {
LlmModel.AIML_API_LLAMA3_3_70B: 1,
LlmModel.AIML_API_META_LLAMA_3_1_70B: 1,
LlmModel.AIML_API_LLAMA_3_2_3B: 1,
LlmModel.LLAMA3_3_70B: 1, # $0.59 / $0.79
LlmModel.LLAMA3_3_70B: 1,
LlmModel.LLAMA3_1_8B: 1,
LlmModel.OLLAMA_LLAMA3_3: 1,
LlmModel.OLLAMA_LLAMA3_2: 1,

View File

@@ -341,6 +341,19 @@ class UserCreditBase(ABC):
if result:
# UserBalance is already updated by the CTE
# Clear insufficient funds notification flags when credits are added
# so user can receive alerts again if they run out in the future.
if transaction.amount > 0 and transaction.type in [
CreditTransactionType.GRANT,
CreditTransactionType.TOP_UP,
]:
from backend.executor.manager import (
clear_insufficient_funds_notifications,
)
await clear_insufficient_funds_notifications(user_id)
return result[0]["balance"]
async def _add_transaction(
@@ -530,6 +543,22 @@ class UserCreditBase(ABC):
if result:
new_balance, tx_key = result[0]["balance"], result[0]["transactionKey"]
# UserBalance is already updated by the CTE
# Clear insufficient funds notification flags when credits are added
# so user can receive alerts again if they run out in the future.
if (
amount > 0
and is_active
and transaction_type
in [CreditTransactionType.GRANT, CreditTransactionType.TOP_UP]
):
# Lazy import to avoid circular dependency with executor.manager
from backend.executor.manager import (
clear_insufficient_funds_notifications,
)
await clear_insufficient_funds_notifications(user_id)
return new_balance, tx_key
# If no result, either user doesn't exist or insufficient balance

View File
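The credit.py hunks above clear the executor's insufficient-funds notification flags whenever a positive GRANT or TOP_UP lands, and import the executor helper lazily to avoid a circular import. A hedged sketch of that hook; `on_transaction_applied` is a hypothetical wrapper, the other names come from the hunks:

```python
from prisma.enums import CreditTransactionType

async def on_transaction_applied(
    user_id: str, amount: int, tx_type: CreditTransactionType
) -> None:
    if amount > 0 and tx_type in (
        CreditTransactionType.GRANT,
        CreditTransactionType.TOP_UP,
    ):
        # Lazy import: executor.manager also depends on the credit module,
        # so importing at module load would create a cycle.
        from backend.executor.manager import clear_insufficient_funds_notifications

        await clear_insufficient_funds_notifications(user_id)
```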

@@ -11,6 +11,7 @@ import pytest
from prisma.enums import CreditTransactionType
from prisma.errors import UniqueViolationError
from prisma.models import CreditTransaction, User, UserBalance
from prisma.types import UserBalanceCreateInput, UserBalanceUpsertInput, UserCreateInput
from backend.data.credit import UserCredit
from backend.util.json import SafeJson
@@ -21,11 +22,11 @@ async def create_test_user(user_id: str) -> None:
"""Create a test user for ceiling tests."""
try:
await User.prisma().create(
data={
"id": user_id,
"email": f"test-{user_id}@example.com",
"name": f"Test User {user_id[:8]}",
}
data=UserCreateInput(
id=user_id,
email=f"test-{user_id}@example.com",
name=f"Test User {user_id[:8]}",
)
)
except UniqueViolationError:
# User already exists, continue
@@ -33,7 +34,10 @@ async def create_test_user(user_id: str) -> None:
await UserBalance.prisma().upsert(
where={"userId": user_id},
data={"create": {"userId": user_id, "balance": 0}, "update": {"balance": 0}},
data=UserBalanceUpsertInput(
create=UserBalanceCreateInput(userId=user_id, balance=0),
update={"balance": 0},
),
)

View File

@@ -14,6 +14,7 @@ import pytest
from prisma.enums import CreditTransactionType
from prisma.errors import UniqueViolationError
from prisma.models import CreditTransaction, User, UserBalance
from prisma.types import UserBalanceCreateInput, UserBalanceUpsertInput, UserCreateInput
from backend.data.credit import POSTGRES_INT_MAX, UsageTransactionMetadata, UserCredit
from backend.util.exceptions import InsufficientBalanceError
@@ -28,11 +29,11 @@ async def create_test_user(user_id: str) -> None:
"""Create a test user with initial balance."""
try:
await User.prisma().create(
data={
"id": user_id,
"email": f"test-{user_id}@example.com",
"name": f"Test User {user_id[:8]}",
}
data=UserCreateInput(
id=user_id,
email=f"test-{user_id}@example.com",
name=f"Test User {user_id[:8]}",
)
)
except UniqueViolationError:
# User already exists, continue
@@ -41,7 +42,10 @@ async def create_test_user(user_id: str) -> None:
# Ensure UserBalance record exists
await UserBalance.prisma().upsert(
where={"userId": user_id},
data={"create": {"userId": user_id, "balance": 0}, "update": {"balance": 0}},
data=UserBalanceUpsertInput(
create=UserBalanceCreateInput(userId=user_id, balance=0),
update={"balance": 0},
),
)
@@ -342,10 +346,10 @@ async def test_integer_overflow_protection(server: SpinTestServer):
# First, set balance near max
await UserBalance.prisma().upsert(
where={"userId": user_id},
data={
"create": {"userId": user_id, "balance": max_int - 100},
"update": {"balance": max_int - 100},
},
data=UserBalanceUpsertInput(
create=UserBalanceCreateInput(userId=user_id, balance=max_int - 100),
update={"balance": max_int - 100},
),
)
# Try to add more than possible - should clamp to POSTGRES_INT_MAX
@@ -507,7 +511,7 @@ async def test_concurrent_multiple_spends_sufficient_balance(server: SpinTestSer
sorted_timings = sorted(timings.items(), key=lambda x: x[1]["start"])
print("\nExecution order by start time:")
for i, (label, timing) in enumerate(sorted_timings):
print(f" {i+1}. {label}: {timing['start']:.4f} -> {timing['end']:.4f}")
print(f" {i + 1}. {label}: {timing['start']:.4f} -> {timing['end']:.4f}")
# Check for overlap (true concurrency) vs serialization
overlaps = []
@@ -546,7 +550,7 @@ async def test_concurrent_multiple_spends_sufficient_balance(server: SpinTestSer
print("\nDatabase transaction order (by createdAt):")
for i, tx in enumerate(transactions):
print(
f" {i+1}. Amount {tx.amount}, Running balance: {tx.runningBalance}, Created: {tx.createdAt}"
f" {i + 1}. Amount {tx.amount}, Running balance: {tx.runningBalance}, Created: {tx.createdAt}"
)
# Verify running balances are chronologically consistent (ordered by createdAt)
@@ -707,7 +711,7 @@ async def test_prove_database_locking_behavior(server: SpinTestServer):
for i, result in enumerate(sorted_results):
print(
f" {i+1}. {result['label']}: DB operation took {result['db_duration']:.4f}s"
f" {i + 1}. {result['label']}: DB operation took {result['db_duration']:.4f}s"
)
# Check if any operations overlapped at the database level

View File

@@ -8,6 +8,7 @@ which would have caught the CreditTransactionType enum casting bug.
import pytest
from prisma.enums import CreditTransactionType
from prisma.models import CreditTransaction, User, UserBalance
from prisma.types import UserCreateInput
from backend.data.credit import (
AutoTopUpConfig,
@@ -29,12 +30,12 @@ async def cleanup_test_user():
# Create the user first
try:
await User.prisma().create(
data={
"id": user_id,
"email": f"test-{user_id}@example.com",
"topUpConfig": SafeJson({}),
"timezone": "UTC",
}
data=UserCreateInput(
id=user_id,
email=f"test-{user_id}@example.com",
topUpConfig=SafeJson({}),
timezone="UTC",
)
)
except Exception:
# User might already exist, that's fine

View File

@@ -12,6 +12,12 @@ import pytest
import stripe
from prisma.enums import CreditTransactionType
from prisma.models import CreditRefundRequest, CreditTransaction, User, UserBalance
from prisma.types import (
CreditRefundRequestCreateInput,
CreditTransactionCreateInput,
UserBalanceCreateInput,
UserCreateInput,
)
from backend.data.credit import UserCredit
from backend.util.json import SafeJson
@@ -35,32 +41,32 @@ async def setup_test_user_with_topup():
# Create user
await User.prisma().create(
data={
"id": REFUND_TEST_USER_ID,
"email": f"{REFUND_TEST_USER_ID}@example.com",
"name": "Refund Test User",
}
data=UserCreateInput(
id=REFUND_TEST_USER_ID,
email=f"{REFUND_TEST_USER_ID}@example.com",
name="Refund Test User",
)
)
# Create user balance
await UserBalance.prisma().create(
data={
"userId": REFUND_TEST_USER_ID,
"balance": 1000, # $10
}
data=UserBalanceCreateInput(
userId=REFUND_TEST_USER_ID,
balance=1000, # $10
)
)
# Create a top-up transaction that can be refunded
topup_tx = await CreditTransaction.prisma().create(
data={
"userId": REFUND_TEST_USER_ID,
"amount": 1000,
"type": CreditTransactionType.TOP_UP,
"transactionKey": "pi_test_12345",
"runningBalance": 1000,
"isActive": True,
"metadata": SafeJson({"stripe_payment_intent": "pi_test_12345"}),
}
data=CreditTransactionCreateInput(
userId=REFUND_TEST_USER_ID,
amount=1000,
type=CreditTransactionType.TOP_UP,
transactionKey="pi_test_12345",
runningBalance=1000,
isActive=True,
metadata=SafeJson({"stripe_payment_intent": "pi_test_12345"}),
)
)
return topup_tx
@@ -93,12 +99,12 @@ async def test_deduct_credits_atomic(server: SpinTestServer):
# Create refund request record (simulating webhook flow)
await CreditRefundRequest.prisma().create(
data={
"userId": REFUND_TEST_USER_ID,
"amount": 500,
"transactionKey": topup_tx.transactionKey, # Should match the original transaction
"reason": "Test refund",
}
data=CreditRefundRequestCreateInput(
userId=REFUND_TEST_USER_ID,
amount=500,
transactionKey=topup_tx.transactionKey, # Should match the original transaction
reason="Test refund",
)
)
# Call deduct_credits
@@ -286,12 +292,12 @@ async def test_concurrent_refunds(server: SpinTestServer):
refund_requests = []
for i in range(5):
req = await CreditRefundRequest.prisma().create(
data={
"userId": REFUND_TEST_USER_ID,
"amount": 100, # $1 each
"transactionKey": topup_tx.transactionKey,
"reason": f"Test refund {i}",
}
data=CreditRefundRequestCreateInput(
userId=REFUND_TEST_USER_ID,
amount=100, # $1 each
transactionKey=topup_tx.transactionKey,
reason=f"Test refund {i}",
)
)
refund_requests.append(req)

View File

@@ -3,6 +3,11 @@ from datetime import datetime, timedelta, timezone
import pytest
from prisma.enums import CreditTransactionType
from prisma.models import CreditTransaction, UserBalance
from prisma.types import (
CreditTransactionCreateInput,
UserBalanceCreateInput,
UserBalanceUpsertInput,
)
from backend.blocks.llm import AITextGeneratorBlock
from backend.data.block import get_block
@@ -23,10 +28,10 @@ async def disable_test_user_transactions():
old_date = datetime.now(timezone.utc) - timedelta(days=35) # More than a month ago
await UserBalance.prisma().upsert(
where={"userId": DEFAULT_USER_ID},
data={
"create": {"userId": DEFAULT_USER_ID, "balance": 0},
"update": {"balance": 0, "updatedAt": old_date},
},
data=UserBalanceUpsertInput(
create=UserBalanceCreateInput(userId=DEFAULT_USER_ID, balance=0),
update={"balance": 0, "updatedAt": old_date},
),
)
@@ -140,23 +145,23 @@ async def test_block_credit_reset(server: SpinTestServer):
# Manually create a transaction with month 1 timestamp to establish history
await CreditTransaction.prisma().create(
data={
"userId": DEFAULT_USER_ID,
"amount": 100,
"type": CreditTransactionType.TOP_UP,
"runningBalance": 1100,
"isActive": True,
"createdAt": month1, # Set specific timestamp
}
data=CreditTransactionCreateInput(
userId=DEFAULT_USER_ID,
amount=100,
type=CreditTransactionType.TOP_UP,
runningBalance=1100,
isActive=True,
createdAt=month1, # Set specific timestamp
)
)
# Update user balance to match
await UserBalance.prisma().upsert(
where={"userId": DEFAULT_USER_ID},
data={
"create": {"userId": DEFAULT_USER_ID, "balance": 1100},
"update": {"balance": 1100},
},
data=UserBalanceUpsertInput(
create=UserBalanceCreateInput(userId=DEFAULT_USER_ID, balance=1100),
update={"balance": 1100},
),
)
# Now test month 2 behavior
@@ -175,14 +180,14 @@ async def test_block_credit_reset(server: SpinTestServer):
# Create a month 2 transaction to update the last transaction time
await CreditTransaction.prisma().create(
data={
"userId": DEFAULT_USER_ID,
"amount": -700, # Spent 700 to get to 400
"type": CreditTransactionType.USAGE,
"runningBalance": 400,
"isActive": True,
"createdAt": month2,
}
data=CreditTransactionCreateInput(
userId=DEFAULT_USER_ID,
amount=-700, # Spent 700 to get to 400
type=CreditTransactionType.USAGE,
runningBalance=400,
isActive=True,
createdAt=month2,
)
)
# Move to month 3

View File

@@ -12,6 +12,7 @@ import pytest
from prisma.enums import CreditTransactionType
from prisma.errors import UniqueViolationError
from prisma.models import CreditTransaction, User, UserBalance
from prisma.types import UserBalanceCreateInput, UserBalanceUpsertInput, UserCreateInput
from backend.data.credit import POSTGRES_INT_MIN, UserCredit
from backend.util.test import SpinTestServer
@@ -21,11 +22,11 @@ async def create_test_user(user_id: str) -> None:
"""Create a test user for underflow tests."""
try:
await User.prisma().create(
data={
"id": user_id,
"email": f"test-{user_id}@example.com",
"name": f"Test User {user_id[:8]}",
}
data=UserCreateInput(
id=user_id,
email=f"test-{user_id}@example.com",
name=f"Test User {user_id[:8]}",
)
)
except UniqueViolationError:
# User already exists, continue
@@ -33,7 +34,10 @@ async def create_test_user(user_id: str) -> None:
await UserBalance.prisma().upsert(
where={"userId": user_id},
data={"create": {"userId": user_id, "balance": 0}, "update": {"balance": 0}},
data=UserBalanceUpsertInput(
create=UserBalanceCreateInput(userId=user_id, balance=0),
update={"balance": 0},
),
)
@@ -66,14 +70,14 @@ async def test_debug_underflow_step_by_step(server: SpinTestServer):
initial_balance_target = POSTGRES_INT_MIN + 100
# Use direct database update to set the balance close to underflow
from prisma.models import UserBalance
await UserBalance.prisma().upsert(
where={"userId": user_id},
data={
"create": {"userId": user_id, "balance": initial_balance_target},
"update": {"balance": initial_balance_target},
},
data=UserBalanceUpsertInput(
create=UserBalanceCreateInput(
userId=user_id, balance=initial_balance_target
),
update={"balance": initial_balance_target},
),
)
current_balance = await credit_system.get_credits(user_id)
@@ -110,10 +114,10 @@ async def test_debug_underflow_step_by_step(server: SpinTestServer):
# Set balance to exactly POSTGRES_INT_MIN
await UserBalance.prisma().upsert(
where={"userId": user_id},
data={
"create": {"userId": user_id, "balance": POSTGRES_INT_MIN},
"update": {"balance": POSTGRES_INT_MIN},
},
data=UserBalanceUpsertInput(
create=UserBalanceCreateInput(userId=user_id, balance=POSTGRES_INT_MIN),
update={"balance": POSTGRES_INT_MIN},
),
)
edge_balance = await credit_system.get_credits(user_id)
@@ -147,15 +151,13 @@ async def test_underflow_protection_large_refunds(server: SpinTestServer):
# Set up balance close to underflow threshold to test the protection
# Set balance to POSTGRES_INT_MIN + 1000, then try to subtract 2000
# This should trigger underflow protection
from prisma.models import UserBalance
test_balance = POSTGRES_INT_MIN + 1000
await UserBalance.prisma().upsert(
where={"userId": user_id},
data={
"create": {"userId": user_id, "balance": test_balance},
"update": {"balance": test_balance},
},
data=UserBalanceUpsertInput(
create=UserBalanceCreateInput(userId=user_id, balance=test_balance),
update={"balance": test_balance},
),
)
current_balance = await credit_system.get_credits(user_id)
@@ -212,15 +214,13 @@ async def test_multiple_large_refunds_cumulative_underflow(server: SpinTestServe
try:
# Set up balance close to underflow threshold
from prisma.models import UserBalance
initial_balance = POSTGRES_INT_MIN + 500 # Close to minimum but with some room
await UserBalance.prisma().upsert(
where={"userId": user_id},
data={
"create": {"userId": user_id, "balance": initial_balance},
"update": {"balance": initial_balance},
},
data=UserBalanceUpsertInput(
create=UserBalanceCreateInput(userId=user_id, balance=initial_balance),
update={"balance": initial_balance},
),
)
# Apply multiple refunds that would cumulatively underflow
@@ -295,10 +295,10 @@ async def test_concurrent_large_refunds_no_underflow(server: SpinTestServer):
initial_balance = POSTGRES_INT_MIN + 1000 # Close to minimum
await UserBalance.prisma().upsert(
where={"userId": user_id},
data={
"create": {"userId": user_id, "balance": initial_balance},
"update": {"balance": initial_balance},
},
data=UserBalanceUpsertInput(
create=UserBalanceCreateInput(userId=user_id, balance=initial_balance),
update={"balance": initial_balance},
),
)
async def large_refund(amount: int, label: str):

View File

@@ -14,6 +14,7 @@ import pytest
from prisma.enums import CreditTransactionType
from prisma.errors import UniqueViolationError
from prisma.models import CreditTransaction, User, UserBalance
from prisma.types import UserBalanceCreateInput, UserCreateInput
from backend.data.credit import UsageTransactionMetadata, UserCredit
from backend.util.json import SafeJson
@@ -24,11 +25,11 @@ async def create_test_user(user_id: str) -> None:
"""Create a test user for migration tests."""
try:
await User.prisma().create(
data={
"id": user_id,
"email": f"test-{user_id}@example.com",
"name": f"Test User {user_id[:8]}",
}
data=UserCreateInput(
id=user_id,
email=f"test-{user_id}@example.com",
name=f"Test User {user_id[:8]}",
)
)
except UniqueViolationError:
# User already exists, continue
@@ -121,7 +122,7 @@ async def test_detect_stale_user_balance_queries(server: SpinTestServer):
try:
# Create UserBalance with specific value
await UserBalance.prisma().create(
data={"userId": user_id, "balance": 5000} # $50
data=UserBalanceCreateInput(userId=user_id, balance=5000) # $50
)
# Verify that get_credits returns UserBalance value (5000), not any stale User.balance value
@@ -160,7 +161,9 @@ async def test_concurrent_operations_use_userbalance_only(server: SpinTestServer
try:
# Set initial balance in UserBalance
await UserBalance.prisma().create(data={"userId": user_id, "balance": 1000})
await UserBalance.prisma().create(
data=UserBalanceCreateInput(userId=user_id, balance=1000)
)
# Run concurrent operations to ensure they all use UserBalance atomic operations
async def concurrent_spend(amount: int, label: str):

View File

@@ -28,6 +28,7 @@ from prisma.models import (
AgentNodeExecutionKeyValueData,
)
from prisma.types import (
AgentGraphExecutionCreateInput,
AgentGraphExecutionUpdateManyMutationInput,
AgentGraphExecutionWhereInput,
AgentNodeExecutionCreateInput,
@@ -35,7 +36,6 @@ from prisma.types import (
AgentNodeExecutionKeyValueDataCreateInput,
AgentNodeExecutionUpdateInput,
AgentNodeExecutionWhereInput,
AgentNodeExecutionWhereUniqueInput,
)
from pydantic import BaseModel, ConfigDict, JsonValue, ValidationError
from pydantic.fields import Field
@@ -709,18 +709,18 @@ async def create_graph_execution(
The id of the AgentGraphExecution and the list of ExecutionResult for each node.
"""
result = await AgentGraphExecution.prisma().create(
data={
"agentGraphId": graph_id,
"agentGraphVersion": graph_version,
"executionStatus": ExecutionStatus.INCOMPLETE,
"inputs": SafeJson(inputs),
"credentialInputs": (
data=AgentGraphExecutionCreateInput(
agentGraphId=graph_id,
agentGraphVersion=graph_version,
executionStatus=ExecutionStatus.INCOMPLETE,
inputs=SafeJson(inputs),
credentialInputs=(
SafeJson(credential_inputs) if credential_inputs else Json({})
),
"nodesInputMasks": (
nodesInputMasks=(
SafeJson(nodes_input_masks) if nodes_input_masks else Json({})
),
"NodeExecutions": {
NodeExecutions={
"create": [
AgentNodeExecutionCreateInput(
agentNodeId=node_id,
@@ -736,10 +736,10 @@ async def create_graph_execution(
for node_id, node_input in starting_nodes_input
]
},
"userId": user_id,
"agentPresetId": preset_id,
"parentGraphExecutionId": parent_graph_exec_id,
},
userId=user_id,
agentPresetId=preset_id,
parentGraphExecutionId=parent_graph_exec_id,
),
include=GRAPH_EXECUTION_INCLUDE_WITH_NODES,
)
@@ -831,10 +831,10 @@ async def upsert_execution_output(
"""
Insert an AgentNodeExecutionInputOutput record as one of the AgentNodeExecution's outputs.
"""
data: AgentNodeExecutionInputOutputCreateInput = {
"name": output_name,
"referencedByOutputExecId": node_exec_id,
}
data = AgentNodeExecutionInputOutputCreateInput(
name=output_name,
referencedByOutputExecId=node_exec_id,
)
if output_data is not None:
data["data"] = SafeJson(output_data)
await AgentNodeExecutionInputOutput.prisma().create(data=data)
@@ -964,6 +964,12 @@ async def update_node_execution_status(
execution_data: BlockInput | None = None,
stats: dict[str, Any] | None = None,
) -> NodeExecutionResult:
"""
Update a node execution's status with validation of allowed transitions.
⚠️ Internal executor use only - no user_id check. Callers (executor/manager.py)
are responsible for validating user authorization before invoking this function.
"""
if status == ExecutionStatus.QUEUED and execution_data is None:
raise ValueError("Execution data must be provided when queuing an execution.")
@@ -974,25 +980,27 @@ async def update_node_execution_status(
f"Invalid status transition: {status} has no valid source statuses"
)
if res := await AgentNodeExecution.prisma().update(
where=cast(
AgentNodeExecutionWhereUniqueInput,
{
"id": node_exec_id,
"executionStatus": {"in": [s.value for s in allowed_from]},
},
),
# Fetch current execution to validate status transition
current = await AgentNodeExecution.prisma().find_unique(
where={"id": node_exec_id}, include=EXECUTION_RESULT_INCLUDE
)
if not current:
raise ValueError(f"Execution {node_exec_id} not found.")
# Validate current status allows transition to the new status
if current.executionStatus not in allowed_from:
# Return current state without updating if transition is not allowed
return NodeExecutionResult.from_db(current)
# Perform the update with only the unique identifier
res = await AgentNodeExecution.prisma().update(
where={"id": node_exec_id},
data=_get_update_status_data(status, execution_data, stats),
include=EXECUTION_RESULT_INCLUDE,
):
return NodeExecutionResult.from_db(res)
if res := await AgentNodeExecution.prisma().find_unique(
where={"id": node_exec_id}, include=EXECUTION_RESULT_INCLUDE
):
return NodeExecutionResult.from_db(res)
raise ValueError(f"Execution {node_exec_id} not found.")
)
if not res:
raise ValueError(f"Failed to update execution {node_exec_id}.")
return NodeExecutionResult.from_db(res)
def _get_update_status_data(

View File
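The rewritten `update_node_execution_status` above fetches the current row first and only performs the update when the stored status is a valid source for the requested status. A simplified sketch of that guard; the transition table here is assumed for illustration, not copied from the diff:

```python
from enum import Enum

class ExecutionStatus(str, Enum):
    INCOMPLETE = "INCOMPLETE"
    QUEUED = "QUEUED"
    RUNNING = "RUNNING"
    COMPLETED = "COMPLETED"

# Assumed example table: which source statuses may enter each target status.
ALLOWED_FROM: dict[ExecutionStatus, set[ExecutionStatus]] = {
    ExecutionStatus.QUEUED: {ExecutionStatus.INCOMPLETE},
    ExecutionStatus.RUNNING: {ExecutionStatus.QUEUED},
    ExecutionStatus.COMPLETED: {ExecutionStatus.RUNNING},
}

def can_transition(current: ExecutionStatus, new: ExecutionStatus) -> bool:
    return current in ALLOWED_FROM.get(new, set())
```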

@@ -10,7 +10,11 @@ from typing import Optional
from prisma.enums import ReviewStatus
from prisma.models import PendingHumanReview
from prisma.types import PendingHumanReviewUpdateInput
from prisma.types import (
PendingHumanReviewCreateInput,
PendingHumanReviewUpdateInput,
PendingHumanReviewUpsertInput,
)
from pydantic import BaseModel
from backend.api.features.executions.review.model import (
@@ -66,20 +70,20 @@ async def get_or_create_human_review(
# Upsert - get existing or create new review
review = await PendingHumanReview.prisma().upsert(
where={"nodeExecId": node_exec_id},
data={
"create": {
"userId": user_id,
"nodeExecId": node_exec_id,
"graphExecId": graph_exec_id,
"graphId": graph_id,
"graphVersion": graph_version,
"payload": SafeJson(input_data),
"instructions": message,
"editable": editable,
"status": ReviewStatus.WAITING,
},
"update": {}, # Do nothing on update - keep existing review as is
},
data=PendingHumanReviewUpsertInput(
create=PendingHumanReviewCreateInput(
userId=user_id,
nodeExecId=node_exec_id,
graphExecId=graph_exec_id,
graphId=graph_id,
graphVersion=graph_version,
payload=SafeJson(input_data),
instructions=message,
editable=editable,
status=ReviewStatus.WAITING,
),
update={}, # Do nothing on update - keep existing review as is
),
)
logger.info(

View File

@@ -7,7 +7,11 @@ import prisma
import pydantic
from prisma.enums import OnboardingStep
from prisma.models import UserOnboarding
from prisma.types import UserOnboardingCreateInput, UserOnboardingUpdateInput
from prisma.types import (
UserOnboardingCreateInput,
UserOnboardingUpdateInput,
UserOnboardingUpsertInput,
)
from backend.api.features.store.model import StoreAgentDetails
from backend.api.model import OnboardingNotificationPayload
@@ -92,6 +96,7 @@ async def reset_user_onboarding(user_id: str):
async def update_user_onboarding(user_id: str, data: UserOnboardingUpdate):
update: UserOnboardingUpdateInput = {}
# get_user_onboarding guarantees the record exists via upsert
onboarding = await get_user_onboarding(user_id)
if data.walletShown:
update["walletShown"] = data.walletShown
@@ -110,12 +115,14 @@ async def update_user_onboarding(user_id: str, data: UserOnboardingUpdate):
if data.onboardingAgentExecutionId is not None:
update["onboardingAgentExecutionId"] = data.onboardingAgentExecutionId
# The create branch is never taken since get_user_onboarding ensures the record exists,
# but upsert requires a create payload so we provide a minimal one
return await UserOnboarding.prisma().upsert(
where={"userId": user_id},
data={
"create": {"userId": user_id, **update},
"update": update,
},
data=UserOnboardingUpsertInput(
create=UserOnboardingCreateInput(userId=user_id),
update=update,
),
)

View File

@@ -114,6 +114,40 @@ utilization_gauge = Gauge(
"Ratio of active graph runs to max graph workers",
)
# Redis key prefix for tracking insufficient funds Discord notifications.
# We only send one notification per user per agent until they top up credits.
INSUFFICIENT_FUNDS_NOTIFIED_PREFIX = "insufficient_funds_discord_notified"
# TTL for the notification flag (30 days) - acts as a fallback cleanup
INSUFFICIENT_FUNDS_NOTIFIED_TTL_SECONDS = 30 * 24 * 60 * 60
async def clear_insufficient_funds_notifications(user_id: str) -> int:
"""
Clear all insufficient funds notification flags for a user.
This should be called when a user tops up their credits, allowing
Discord notifications to be sent again if they run out of funds.
Args:
user_id: The user ID to clear notifications for.
Returns:
The number of keys that were deleted.
"""
try:
redis_client = await redis.get_redis_async()
pattern = f"{INSUFFICIENT_FUNDS_NOTIFIED_PREFIX}:{user_id}:*"
keys = [key async for key in redis_client.scan_iter(match=pattern)]
if keys:
return await redis_client.delete(*keys)
return 0
except Exception as e:
logger.warning(
f"Failed to clear insufficient funds notification flags for user "
f"{user_id}: {e}"
)
return 0
# Thread-local storage for ExecutionProcessor instances
_tls = threading.local()
@@ -1261,12 +1295,40 @@ class ExecutionProcessor:
graph_id: str,
e: InsufficientBalanceError,
):
# Check if we've already sent a notification for this user+agent combo.
# We only send one notification per user per agent until they top up credits.
redis_key = f"{INSUFFICIENT_FUNDS_NOTIFIED_PREFIX}:{user_id}:{graph_id}"
try:
redis_client = redis.get_redis()
# SET NX returns True only if the key was newly set (didn't exist)
is_new_notification = redis_client.set(
redis_key,
"1",
nx=True,
ex=INSUFFICIENT_FUNDS_NOTIFIED_TTL_SECONDS,
)
if not is_new_notification:
# Already notified for this user+agent, skip all notifications
logger.debug(
f"Skipping duplicate insufficient funds notification for "
f"user={user_id}, graph={graph_id}"
)
return
except Exception as redis_error:
# If Redis fails, log and continue to send the notification
# (better to occasionally duplicate than to never notify)
logger.warning(
f"Failed to check/set insufficient funds notification flag in Redis: "
f"{redis_error}"
)
shortfall = abs(e.amount) - e.balance
metadata = db_client.get_graph_metadata(graph_id)
base_url = (
settings.config.frontend_base_url or settings.config.platform_base_url
)
# Queue user email notification
queue_notification(
NotificationEventModel(
user_id=user_id,
@@ -1280,6 +1342,7 @@ class ExecutionProcessor:
)
)
# Send Discord system alert
try:
user_email = db_client.get_user_email_by_id(user_id)

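The handler above gates both the email and the Discord alert on a Redis SET with NX and EX. A standalone sketch of that deduplication pattern, using plain redis-py with an illustrative helper name (the real code goes through the backend's own redis utility):

# Standalone sketch of the SET NX + EX deduplication used above
# (plain redis-py client; helper name and key layout are illustrative).
import redis

r = redis.Redis()
TTL_SECONDS = 30 * 24 * 60 * 60  # 30 days, matching the fallback cleanup above

def should_notify(user_id: str, graph_id: str) -> bool:
    key = f"insufficient_funds_discord_notified:{user_id}:{graph_id}"
    # SET ... NX EX returns a truthy value only when the key did not exist,
    # so exactly one caller wins the right to send the notification until
    # the flag is cleared or the TTL expires.
    return bool(r.set(key, "1", nx=True, ex=TTL_SECONDS))
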
View File

@@ -0,0 +1,560 @@
from unittest.mock import AsyncMock, MagicMock, patch
import pytest
from prisma.enums import NotificationType
from backend.data.notifications import ZeroBalanceData
from backend.executor.manager import (
INSUFFICIENT_FUNDS_NOTIFIED_PREFIX,
ExecutionProcessor,
clear_insufficient_funds_notifications,
)
from backend.util.exceptions import InsufficientBalanceError
from backend.util.test import SpinTestServer
async def async_iter(items):
"""Helper to create an async iterator from a list."""
for item in items:
yield item
@pytest.mark.asyncio(loop_scope="session")
async def test_handle_insufficient_funds_sends_discord_alert_first_time(
server: SpinTestServer,
):
"""Test that the first insufficient funds notification sends a Discord alert."""
execution_processor = ExecutionProcessor()
user_id = "test-user-123"
graph_id = "test-graph-456"
error = InsufficientBalanceError(
message="Insufficient balance",
user_id=user_id,
balance=72, # $0.72
amount=-714, # Attempting to spend $7.14
)
with patch(
"backend.executor.manager.queue_notification"
) as mock_queue_notif, patch(
"backend.executor.manager.get_notification_manager_client"
) as mock_get_client, patch(
"backend.executor.manager.settings"
) as mock_settings, patch(
"backend.executor.manager.redis"
) as mock_redis_module:
# Setup mocks
mock_client = MagicMock()
mock_get_client.return_value = mock_client
mock_settings.config.frontend_base_url = "https://test.com"
# Mock Redis to simulate first-time notification (set returns True)
mock_redis_client = MagicMock()
mock_redis_module.get_redis.return_value = mock_redis_client
mock_redis_client.set.return_value = True # Key was newly set
# Create mock database client
mock_db_client = MagicMock()
mock_graph_metadata = MagicMock()
mock_graph_metadata.name = "Test Agent"
mock_db_client.get_graph_metadata.return_value = mock_graph_metadata
mock_db_client.get_user_email_by_id.return_value = "test@example.com"
# Test the insufficient funds handler
execution_processor._handle_insufficient_funds_notif(
db_client=mock_db_client,
user_id=user_id,
graph_id=graph_id,
e=error,
)
# Verify notification was queued
mock_queue_notif.assert_called_once()
notification_call = mock_queue_notif.call_args[0][0]
assert notification_call.type == NotificationType.ZERO_BALANCE
assert notification_call.user_id == user_id
assert isinstance(notification_call.data, ZeroBalanceData)
assert notification_call.data.current_balance == 72
# Verify Redis was checked with correct key pattern
expected_key = f"{INSUFFICIENT_FUNDS_NOTIFIED_PREFIX}:{user_id}:{graph_id}"
mock_redis_client.set.assert_called_once()
call_args = mock_redis_client.set.call_args
assert call_args[0][0] == expected_key
assert call_args[1]["nx"] is True
# Verify Discord alert was sent
mock_client.discord_system_alert.assert_called_once()
discord_message = mock_client.discord_system_alert.call_args[0][0]
assert "Insufficient Funds Alert" in discord_message
assert "test@example.com" in discord_message
assert "Test Agent" in discord_message
@pytest.mark.asyncio(loop_scope="session")
async def test_handle_insufficient_funds_skips_duplicate_notifications(
server: SpinTestServer,
):
"""Test that duplicate insufficient funds notifications skip both email and Discord."""
execution_processor = ExecutionProcessor()
user_id = "test-user-123"
graph_id = "test-graph-456"
error = InsufficientBalanceError(
message="Insufficient balance",
user_id=user_id,
balance=72,
amount=-714,
)
with patch(
"backend.executor.manager.queue_notification"
) as mock_queue_notif, patch(
"backend.executor.manager.get_notification_manager_client"
) as mock_get_client, patch(
"backend.executor.manager.settings"
) as mock_settings, patch(
"backend.executor.manager.redis"
) as mock_redis_module:
# Setup mocks
mock_client = MagicMock()
mock_get_client.return_value = mock_client
mock_settings.config.frontend_base_url = "https://test.com"
# Mock Redis to simulate duplicate notification (set returns False/None)
mock_redis_client = MagicMock()
mock_redis_module.get_redis.return_value = mock_redis_client
mock_redis_client.set.return_value = None # Key already existed
# Create mock database client
mock_db_client = MagicMock()
mock_db_client.get_graph_metadata.return_value = MagicMock(name="Test Agent")
# Test the insufficient funds handler
execution_processor._handle_insufficient_funds_notif(
db_client=mock_db_client,
user_id=user_id,
graph_id=graph_id,
e=error,
)
# Verify email notification was NOT queued (deduplication worked)
mock_queue_notif.assert_not_called()
# Verify Discord alert was NOT sent (deduplication worked)
mock_client.discord_system_alert.assert_not_called()
@pytest.mark.asyncio(loop_scope="session")
async def test_handle_insufficient_funds_different_agents_get_separate_alerts(
server: SpinTestServer,
):
"""Test that different agents for the same user get separate Discord alerts."""
execution_processor = ExecutionProcessor()
user_id = "test-user-123"
graph_id_1 = "test-graph-111"
graph_id_2 = "test-graph-222"
error = InsufficientBalanceError(
message="Insufficient balance",
user_id=user_id,
balance=72,
amount=-714,
)
with patch("backend.executor.manager.queue_notification"), patch(
"backend.executor.manager.get_notification_manager_client"
) as mock_get_client, patch(
"backend.executor.manager.settings"
) as mock_settings, patch(
"backend.executor.manager.redis"
) as mock_redis_module:
mock_client = MagicMock()
mock_get_client.return_value = mock_client
mock_settings.config.frontend_base_url = "https://test.com"
mock_redis_client = MagicMock()
mock_redis_module.get_redis.return_value = mock_redis_client
# Both calls return True (first time for each agent)
mock_redis_client.set.return_value = True
mock_db_client = MagicMock()
mock_graph_metadata = MagicMock()
mock_graph_metadata.name = "Test Agent"
mock_db_client.get_graph_metadata.return_value = mock_graph_metadata
mock_db_client.get_user_email_by_id.return_value = "test@example.com"
# First agent notification
execution_processor._handle_insufficient_funds_notif(
db_client=mock_db_client,
user_id=user_id,
graph_id=graph_id_1,
e=error,
)
# Second agent notification
execution_processor._handle_insufficient_funds_notif(
db_client=mock_db_client,
user_id=user_id,
graph_id=graph_id_2,
e=error,
)
# Verify Discord alerts were sent for both agents
assert mock_client.discord_system_alert.call_count == 2
# Verify Redis was called with different keys
assert mock_redis_client.set.call_count == 2
calls = mock_redis_client.set.call_args_list
assert (
calls[0][0][0]
== f"{INSUFFICIENT_FUNDS_NOTIFIED_PREFIX}:{user_id}:{graph_id_1}"
)
assert (
calls[1][0][0]
== f"{INSUFFICIENT_FUNDS_NOTIFIED_PREFIX}:{user_id}:{graph_id_2}"
)
@pytest.mark.asyncio(loop_scope="session")
async def test_clear_insufficient_funds_notifications(server: SpinTestServer):
"""Test that clearing notifications removes all keys for a user."""
user_id = "test-user-123"
with patch("backend.executor.manager.redis") as mock_redis_module:
mock_redis_client = MagicMock()
# get_redis_async is an async function, so we need AsyncMock for it
mock_redis_module.get_redis_async = AsyncMock(return_value=mock_redis_client)
# Mock scan_iter to return some keys as an async iterator
mock_keys = [
f"{INSUFFICIENT_FUNDS_NOTIFIED_PREFIX}:{user_id}:graph-1",
f"{INSUFFICIENT_FUNDS_NOTIFIED_PREFIX}:{user_id}:graph-2",
f"{INSUFFICIENT_FUNDS_NOTIFIED_PREFIX}:{user_id}:graph-3",
]
mock_redis_client.scan_iter.return_value = async_iter(mock_keys)
# delete is awaited, so use AsyncMock
mock_redis_client.delete = AsyncMock(return_value=3)
# Clear notifications
result = await clear_insufficient_funds_notifications(user_id)
# Verify correct pattern was used
expected_pattern = f"{INSUFFICIENT_FUNDS_NOTIFIED_PREFIX}:{user_id}:*"
mock_redis_client.scan_iter.assert_called_once_with(match=expected_pattern)
# Verify delete was called with all keys
mock_redis_client.delete.assert_called_once_with(*mock_keys)
# Verify return value
assert result == 3
@pytest.mark.asyncio(loop_scope="session")
async def test_clear_insufficient_funds_notifications_no_keys(server: SpinTestServer):
"""Test clearing notifications when there are no keys to clear."""
user_id = "test-user-no-notifications"
with patch("backend.executor.manager.redis") as mock_redis_module:
mock_redis_client = MagicMock()
# get_redis_async is an async function, so we need AsyncMock for it
mock_redis_module.get_redis_async = AsyncMock(return_value=mock_redis_client)
# Mock scan_iter to return no keys as an async iterator
mock_redis_client.scan_iter.return_value = async_iter([])
# Clear notifications
result = await clear_insufficient_funds_notifications(user_id)
# Verify delete was not called
mock_redis_client.delete.assert_not_called()
# Verify return value
assert result == 0
@pytest.mark.asyncio(loop_scope="session")
async def test_clear_insufficient_funds_notifications_handles_redis_error(
server: SpinTestServer,
):
"""Test that clearing notifications handles Redis errors gracefully."""
user_id = "test-user-redis-error"
with patch("backend.executor.manager.redis") as mock_redis_module:
# Mock get_redis_async to raise an error
mock_redis_module.get_redis_async = AsyncMock(
side_effect=Exception("Redis connection failed")
)
# Clear notifications should not raise, just return 0
result = await clear_insufficient_funds_notifications(user_id)
# Verify it returned 0 (graceful failure)
assert result == 0
@pytest.mark.asyncio(loop_scope="session")
async def test_handle_insufficient_funds_continues_on_redis_error(
server: SpinTestServer,
):
"""Test that both email and Discord notifications are still sent when Redis fails."""
execution_processor = ExecutionProcessor()
user_id = "test-user-123"
graph_id = "test-graph-456"
error = InsufficientBalanceError(
message="Insufficient balance",
user_id=user_id,
balance=72,
amount=-714,
)
with patch(
"backend.executor.manager.queue_notification"
) as mock_queue_notif, patch(
"backend.executor.manager.get_notification_manager_client"
) as mock_get_client, patch(
"backend.executor.manager.settings"
) as mock_settings, patch(
"backend.executor.manager.redis"
) as mock_redis_module:
mock_client = MagicMock()
mock_get_client.return_value = mock_client
mock_settings.config.frontend_base_url = "https://test.com"
# Mock Redis to raise an error
mock_redis_client = MagicMock()
mock_redis_module.get_redis.return_value = mock_redis_client
mock_redis_client.set.side_effect = Exception("Redis connection error")
mock_db_client = MagicMock()
mock_graph_metadata = MagicMock()
mock_graph_metadata.name = "Test Agent"
mock_db_client.get_graph_metadata.return_value = mock_graph_metadata
mock_db_client.get_user_email_by_id.return_value = "test@example.com"
# Test the insufficient funds handler
execution_processor._handle_insufficient_funds_notif(
db_client=mock_db_client,
user_id=user_id,
graph_id=graph_id,
e=error,
)
# Verify email notification was still queued despite Redis error
mock_queue_notif.assert_called_once()
# Verify Discord alert was still sent despite Redis error
mock_client.discord_system_alert.assert_called_once()
@pytest.mark.asyncio(loop_scope="session")
async def test_add_transaction_clears_notifications_on_grant(server: SpinTestServer):
"""Test that _add_transaction clears notification flags when adding GRANT credits."""
from prisma.enums import CreditTransactionType
from backend.data.credit import UserCredit
user_id = "test-user-grant-clear"
with patch("backend.data.credit.query_raw_with_schema") as mock_query, patch(
"backend.executor.manager.redis"
) as mock_redis_module:
# Mock the query to return a successful transaction
mock_query.return_value = [{"balance": 1000, "transactionKey": "test-tx-key"}]
# Mock async Redis for notification clearing
mock_redis_client = MagicMock()
mock_redis_module.get_redis_async = AsyncMock(return_value=mock_redis_client)
mock_redis_client.scan_iter.return_value = async_iter(
[f"{INSUFFICIENT_FUNDS_NOTIFIED_PREFIX}:{user_id}:graph-1"]
)
mock_redis_client.delete = AsyncMock(return_value=1)
# Create a concrete instance
credit_model = UserCredit()
# Call _add_transaction with GRANT type (should clear notifications)
await credit_model._add_transaction(
user_id=user_id,
amount=500, # Positive amount
transaction_type=CreditTransactionType.GRANT,
is_active=True, # Active transaction
)
# Verify notification clearing was called
mock_redis_module.get_redis_async.assert_called_once()
mock_redis_client.scan_iter.assert_called_once_with(
match=f"{INSUFFICIENT_FUNDS_NOTIFIED_PREFIX}:{user_id}:*"
)
@pytest.mark.asyncio(loop_scope="session")
async def test_add_transaction_clears_notifications_on_top_up(server: SpinTestServer):
"""Test that _add_transaction clears notification flags when adding TOP_UP credits."""
from prisma.enums import CreditTransactionType
from backend.data.credit import UserCredit
user_id = "test-user-topup-clear"
with patch("backend.data.credit.query_raw_with_schema") as mock_query, patch(
"backend.executor.manager.redis"
) as mock_redis_module:
# Mock the query to return a successful transaction
mock_query.return_value = [{"balance": 2000, "transactionKey": "test-tx-key-2"}]
# Mock async Redis for notification clearing
mock_redis_client = MagicMock()
mock_redis_module.get_redis_async = AsyncMock(return_value=mock_redis_client)
mock_redis_client.scan_iter.return_value = async_iter([])
mock_redis_client.delete = AsyncMock(return_value=0)
credit_model = UserCredit()
# Call _add_transaction with TOP_UP type (should clear notifications)
await credit_model._add_transaction(
user_id=user_id,
amount=1000, # Positive amount
transaction_type=CreditTransactionType.TOP_UP,
is_active=True,
)
# Verify notification clearing was attempted
mock_redis_module.get_redis_async.assert_called_once()
@pytest.mark.asyncio(loop_scope="session")
async def test_add_transaction_skips_clearing_for_inactive_transaction(
server: SpinTestServer,
):
"""Test that _add_transaction does NOT clear notifications for inactive transactions."""
from prisma.enums import CreditTransactionType
from backend.data.credit import UserCredit
user_id = "test-user-inactive"
with patch("backend.data.credit.query_raw_with_schema") as mock_query, patch(
"backend.executor.manager.redis"
) as mock_redis_module:
# Mock the query to return a successful transaction
mock_query.return_value = [{"balance": 500, "transactionKey": "test-tx-key-3"}]
# Mock async Redis
mock_redis_client = MagicMock()
mock_redis_module.get_redis_async = AsyncMock(return_value=mock_redis_client)
credit_model = UserCredit()
# Call _add_transaction with is_active=False (should NOT clear notifications)
await credit_model._add_transaction(
user_id=user_id,
amount=500,
transaction_type=CreditTransactionType.TOP_UP,
is_active=False, # Inactive - pending Stripe payment
)
# Verify notification clearing was NOT called
mock_redis_module.get_redis_async.assert_not_called()
@pytest.mark.asyncio(loop_scope="session")
async def test_add_transaction_skips_clearing_for_usage_transaction(
server: SpinTestServer,
):
"""Test that _add_transaction does NOT clear notifications for USAGE transactions."""
from prisma.enums import CreditTransactionType
from backend.data.credit import UserCredit
user_id = "test-user-usage"
with patch("backend.data.credit.query_raw_with_schema") as mock_query, patch(
"backend.executor.manager.redis"
) as mock_redis_module:
# Mock the query to return a successful transaction
mock_query.return_value = [{"balance": 400, "transactionKey": "test-tx-key-4"}]
# Mock async Redis
mock_redis_client = MagicMock()
mock_redis_module.get_redis_async = AsyncMock(return_value=mock_redis_client)
credit_model = UserCredit()
# Call _add_transaction with USAGE type (spending, should NOT clear)
await credit_model._add_transaction(
user_id=user_id,
amount=-100, # Negative - spending credits
transaction_type=CreditTransactionType.USAGE,
is_active=True,
)
# Verify notification clearing was NOT called
mock_redis_module.get_redis_async.assert_not_called()
@pytest.mark.asyncio(loop_scope="session")
async def test_enable_transaction_clears_notifications(server: SpinTestServer):
"""Test that _enable_transaction clears notification flags when enabling a TOP_UP."""
from prisma.enums import CreditTransactionType
from backend.data.credit import UserCredit
user_id = "test-user-enable"
with patch("backend.data.credit.CreditTransaction") as mock_credit_tx, patch(
"backend.data.credit.query_raw_with_schema"
) as mock_query, patch("backend.executor.manager.redis") as mock_redis_module:
# Mock finding the pending transaction
mock_transaction = MagicMock()
mock_transaction.amount = 1000
mock_transaction.type = CreditTransactionType.TOP_UP
mock_credit_tx.prisma.return_value.find_first = AsyncMock(
return_value=mock_transaction
)
# Mock the query to return updated balance
mock_query.return_value = [{"balance": 1500}]
# Mock async Redis for notification clearing
mock_redis_client = MagicMock()
mock_redis_module.get_redis_async = AsyncMock(return_value=mock_redis_client)
mock_redis_client.scan_iter.return_value = async_iter(
[f"{INSUFFICIENT_FUNDS_NOTIFIED_PREFIX}:{user_id}:graph-1"]
)
mock_redis_client.delete = AsyncMock(return_value=1)
credit_model = UserCredit()
# Call _enable_transaction (simulates Stripe checkout completion)
from backend.util.json import SafeJson
await credit_model._enable_transaction(
transaction_key="cs_test_123",
user_id=user_id,
metadata=SafeJson({"payment": "completed"}),
)
# Verify notification clearing was called
mock_redis_module.get_redis_async.assert_called_once()
mock_redis_client.scan_iter.assert_called_once_with(
match=f"{INSUFFICIENT_FUNDS_NOTIFIED_PREFIX}:{user_id}:*"
)

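The _add_transaction tests above imply the clearing call is gated on positive, already-active GRANT/TOP_UP transactions. A hedged sketch of that gate, with an illustrative function name; the real logic lives in backend.data.credit and may differ in detail:

# Hedged sketch of the gating the tests above exercise; function name is
# illustrative and not the repository's actual implementation.
from prisma.enums import CreditTransactionType

from backend.executor.manager import clear_insufficient_funds_notifications

async def maybe_clear_flags(
    user_id: str,
    amount: int,
    transaction_type: CreditTransactionType,
    is_active: bool,
) -> None:
    # Only credit-adding, already-active transactions should re-enable alerts;
    # pending Stripe payments and USAGE spends leave the flags in place.
    if (
        is_active
        and amount > 0
        and transaction_type
        in (CreditTransactionType.GRANT, CreditTransactionType.TOP_UP)
    ):
        await clear_insufficient_funds_notifications(user_id)
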
View File

@@ -1906,16 +1906,32 @@ httpx = {version = ">=0.26,<0.29", extras = ["http2"]}
pydantic = ">=1.10,<3"
pyjwt = ">=2.10.1,<3.0.0"
[[package]]
name = "gravitas-md2gdocs"
version = "0.1.0"
description = "Convert Markdown to Google Docs API requests"
optional = false
python-versions = ">=3.10"
groups = ["main"]
files = [
{file = "gravitas_md2gdocs-0.1.0-py3-none-any.whl", hash = "sha256:0cb0627779fdd65c1604818af4142eea1b25d055060183363de1bae4d9e46508"},
{file = "gravitas_md2gdocs-0.1.0.tar.gz", hash = "sha256:bb3122fe9fa35c528f3f00b785d3f1398d350082d5d03f60f56c895bdcc68033"},
]
[package.extras]
dev = ["google-auth-oauthlib (>=1.0.0)", "pytest (>=7.0.0)", "pytest-cov (>=4.0.0)", "python-dotenv (>=1.0.0)", "ruff (>=0.1.0)"]
google = ["google-api-python-client (>=2.0.0)", "google-auth (>=2.0.0)"]
[[package]]
name = "gravitasml"
version = "0.1.3"
version = "0.1.4"
description = ""
optional = false
python-versions = "<4.0,>=3.10"
groups = ["main"]
files = [
{file = "gravitasml-0.1.3-py3-none-any.whl", hash = "sha256:51ff98b4564b7a61f7796f18d5f2558b919d30b3722579296089645b7bc18b85"},
{file = "gravitasml-0.1.3.tar.gz", hash = "sha256:04d240b9fa35878252d57a36032130b6516487468847fcdced1022c032a20f57"},
{file = "gravitasml-0.1.4-py3-none-any.whl", hash = "sha256:671a18b11d3d8a0e270c6a80c72cd058458b18d5ef7560d00010e962ab1bca74"},
{file = "gravitasml-0.1.4.tar.gz", hash = "sha256:35d0d9fec7431817482d53d9c976e375557c3e041d1eb6928e809324a8c866e3"},
]
[package.dependencies]
@@ -7279,4 +7295,4 @@ cffi = ["cffi (>=1.11)"]
[metadata]
lock-version = "2.1"
python-versions = ">=3.10,<3.14"
content-hash = "13b191b2a1989d3321ff713c66ff6f5f4f3b82d15df4d407e0e5dbf87d7522c4"
content-hash = "a93ba0cea3b465cb6ec3e3f258b383b09f84ea352ccfdbfa112902cde5653fc6"

View File

@@ -27,7 +27,7 @@ google-api-python-client = "^2.177.0"
google-auth-oauthlib = "^1.2.2"
google-cloud-storage = "^3.2.0"
googlemaps = "^4.10.0"
gravitasml = "^0.1.3"
gravitasml = "^0.1.4"
groq = "^0.30.0"
html2text = "^2024.2.26"
jinja2 = "^3.1.6"
@@ -82,6 +82,7 @@ firecrawl-py = "^4.3.6"
exa-py = "^1.14.20"
croniter = "^6.0.0"
stagehand = "^0.5.1"
gravitas-md2gdocs = "^0.1.0"
[tool.poetry.group.dev.dependencies]
aiohappyeyeballs = "^2.6.1"

View File

@@ -0,0 +1,113 @@
from unittest.mock import Mock
from backend.blocks.google.docs import GoogleDocsFormatTextBlock
def _make_mock_docs_service() -> Mock:
service = Mock()
# Ensure chained call exists: service.documents().batchUpdate(...).execute()
service.documents.return_value.batchUpdate.return_value.execute.return_value = {}
return service
def test_format_text_parses_shorthand_hex_color():
block = GoogleDocsFormatTextBlock()
service = _make_mock_docs_service()
result = block._format_text(
service,
document_id="doc_1",
start_index=1,
end_index=2,
bold=False,
italic=False,
underline=False,
font_size=0,
foreground_color="#FFF",
)
assert result["success"] is True
# Verify request body contains correct rgbColor for white.
_, kwargs = service.documents.return_value.batchUpdate.call_args
requests = kwargs["body"]["requests"]
rgb = requests[0]["updateTextStyle"]["textStyle"]["foregroundColor"]["color"][
"rgbColor"
]
assert rgb == {"red": 1.0, "green": 1.0, "blue": 1.0}
def test_format_text_parses_full_hex_color():
block = GoogleDocsFormatTextBlock()
service = _make_mock_docs_service()
result = block._format_text(
service,
document_id="doc_1",
start_index=1,
end_index=2,
bold=False,
italic=False,
underline=False,
font_size=0,
foreground_color="#FF0000",
)
assert result["success"] is True
_, kwargs = service.documents.return_value.batchUpdate.call_args
requests = kwargs["body"]["requests"]
rgb = requests[0]["updateTextStyle"]["textStyle"]["foregroundColor"]["color"][
"rgbColor"
]
assert rgb == {"red": 1.0, "green": 0.0, "blue": 0.0}
def test_format_text_ignores_invalid_color_when_other_fields_present():
block = GoogleDocsFormatTextBlock()
service = _make_mock_docs_service()
result = block._format_text(
service,
document_id="doc_1",
start_index=1,
end_index=2,
bold=True,
italic=False,
underline=False,
font_size=0,
foreground_color="#GGG",
)
assert result["success"] is True
assert "warning" in result
# Should still apply bold, but should NOT include foregroundColor in textStyle.
_, kwargs = service.documents.return_value.batchUpdate.call_args
requests = kwargs["body"]["requests"]
text_style = requests[0]["updateTextStyle"]["textStyle"]
fields = requests[0]["updateTextStyle"]["fields"]
assert text_style == {"bold": True}
assert fields == "bold"
def test_format_text_invalid_color_only_does_not_call_api():
block = GoogleDocsFormatTextBlock()
service = _make_mock_docs_service()
result = block._format_text(
service,
document_id="doc_1",
start_index=1,
end_index=2,
bold=False,
italic=False,
underline=False,
font_size=0,
foreground_color="#F",
)
assert result["success"] is False
assert "Invalid foreground_color" in result["message"]
service.documents.return_value.batchUpdate.assert_not_called()

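The tests above pin down how foreground_color hex strings should be interpreted: shorthand "#FFF" expands to "#FFFFFF", components are scaled to 0.0–1.0 floats, and non-hex input is rejected. One possible shape of that parsing, shown as an illustrative helper rather than the block's actual code:

# Illustrative hex-color parser consistent with the tests above.
def parse_hex_color(value: str) -> dict | None:
    s = value.lstrip("#")
    if len(s) == 3:  # expand shorthand, e.g. "FFF" -> "FFFFFF"
        s = "".join(ch * 2 for ch in s)
    if len(s) != 6:
        return None
    try:
        r, g, b = (int(s[i : i + 2], 16) for i in (0, 2, 4))
    except ValueError:  # e.g. "GGG" is not valid hex
        return None
    return {"red": r / 255, "green": g / 255, "blue": b / 255}

assert parse_hex_color("#FFF") == {"red": 1.0, "green": 1.0, "blue": 1.0}
assert parse_hex_color("#FF0000") == {"red": 1.0, "green": 0.0, "blue": 0.0}
assert parse_hex_color("#GGG") is None and parse_hex_color("#F") is None
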
View File

@@ -37,6 +37,18 @@ class TestTranscribeYoutubeVideoBlock:
video_id = self.youtube_block.extract_video_id(url)
assert video_id == "dQw4w9WgXcQ"
def test_extract_video_id_shorts_url(self):
"""Test extracting video ID from YouTube Shorts URL."""
url = "https://www.youtube.com/shorts/dtUqwMu3e-g"
video_id = self.youtube_block.extract_video_id(url)
assert video_id == "dtUqwMu3e-g"
def test_extract_video_id_shorts_url_with_params(self):
"""Test extracting video ID from YouTube Shorts URL with query parameters."""
url = "https://www.youtube.com/shorts/dtUqwMu3e-g?feature=share"
video_id = self.youtube_block.extract_video_id(url)
assert video_id == "dtUqwMu3e-g"
@patch("backend.blocks.youtube.YouTubeTranscriptApi")
def test_get_transcript_english_available(self, mock_api_class):
"""Test getting transcript when English is available."""

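The new Shorts tests assume the video ID is the path segment after /shorts/, with query parameters ignored. An illustrative sketch of that parsing, not necessarily the block's real extract_video_id:

# Illustrative Shorts-aware video-id extraction matching the tests above.
from urllib.parse import urlparse

def extract_video_id(url: str) -> str | None:
    parsed = urlparse(url)
    parts = [p for p in parsed.path.split("/") if p]
    if parts and parts[0] == "shorts" and len(parts) > 1:
        # query parameters such as ?feature=share are already split off by urlparse
        return parts[1]
    return None

assert extract_video_id("https://www.youtube.com/shorts/dtUqwMu3e-g") == "dtUqwMu3e-g"
assert (
    extract_video_id("https://www.youtube.com/shorts/dtUqwMu3e-g?feature=share")
    == "dtUqwMu3e-g"
)
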
View File

@@ -22,6 +22,7 @@ import random
from typing import Any, Dict, List
from faker import Faker
from prisma.types import AgentBlockCreateInput
# Import API functions from the backend
from backend.api.features.library.db import create_library_agent, create_preset
@@ -179,12 +180,12 @@ class TestDataCreator:
for block in blocks_to_create:
try:
await prisma.agentblock.create(
data={
"id": block.id,
"name": block.name,
"inputSchema": "{}",
"outputSchema": "{}",
}
data=AgentBlockCreateInput(
id=block.id,
name=block.name,
inputSchema="{}",
outputSchema="{}",
)
)
except Exception as e:
print(f"Error creating block {block.name}: {e}")

View File

@@ -30,13 +30,19 @@ from prisma.types import (
AgentGraphCreateInput,
AgentNodeCreateInput,
AgentNodeLinkCreateInput,
AgentPresetCreateInput,
AnalyticsDetailsCreateInput,
AnalyticsMetricsCreateInput,
APIKeyCreateInput,
CreditTransactionCreateInput,
IntegrationWebhookCreateInput,
LibraryAgentCreateInput,
ProfileCreateInput,
StoreListingCreateInput,
StoreListingReviewCreateInput,
StoreListingVersionCreateInput,
UserCreateInput,
UserOnboardingCreateInput,
)
faker = Faker()
@@ -172,14 +178,14 @@ async def main():
for _ in range(num_presets): # Create 1 AgentPreset per user
graph = random.choice(agent_graphs)
preset = await db.agentpreset.create(
data={
"name": faker.sentence(nb_words=3),
"description": faker.text(max_nb_chars=200),
"userId": user.id,
"agentGraphId": graph.id,
"agentGraphVersion": graph.version,
"isActive": True,
}
data=AgentPresetCreateInput(
name=faker.sentence(nb_words=3),
description=faker.text(max_nb_chars=200),
userId=user.id,
agentGraphId=graph.id,
agentGraphVersion=graph.version,
isActive=True,
)
)
agent_presets.append(preset)
@@ -220,18 +226,18 @@ async def main():
)
library_agent = await db.libraryagent.create(
data={
"userId": user.id,
"agentGraphId": graph.id,
"agentGraphVersion": graph.version,
"creatorId": creator_profile.id if creator_profile else None,
"imageUrl": get_image() if random.random() < 0.5 else None,
"useGraphIsActiveVersion": random.choice([True, False]),
"isFavorite": random.choice([True, False]),
"isCreatedByUser": random.choice([True, False]),
"isArchived": random.choice([True, False]),
"isDeleted": random.choice([True, False]),
}
data=LibraryAgentCreateInput(
userId=user.id,
agentGraphId=graph.id,
agentGraphVersion=graph.version,
creatorId=creator_profile.id if creator_profile else None,
imageUrl=get_image() if random.random() < 0.5 else None,
useGraphIsActiveVersion=random.choice([True, False]),
isFavorite=random.choice([True, False]),
isCreatedByUser=random.choice([True, False]),
isArchived=random.choice([True, False]),
isDeleted=random.choice([True, False]),
)
)
library_agents.append(library_agent)
@@ -392,13 +398,13 @@ async def main():
user = random.choice(users)
slug = faker.slug()
listing = await db.storelisting.create(
data={
"agentGraphId": graph.id,
"agentGraphVersion": graph.version,
"owningUserId": user.id,
"hasApprovedVersion": random.choice([True, False]),
"slug": slug,
}
data=StoreListingCreateInput(
agentGraphId=graph.id,
agentGraphVersion=graph.version,
owningUserId=user.id,
hasApprovedVersion=random.choice([True, False]),
slug=slug,
)
)
store_listings.append(listing)
@@ -408,26 +414,26 @@ async def main():
for listing in store_listings:
graph = [g for g in agent_graphs if g.id == listing.agentGraphId][0]
version = await db.storelistingversion.create(
data={
"agentGraphId": graph.id,
"agentGraphVersion": graph.version,
"name": graph.name or faker.sentence(nb_words=3),
"subHeading": faker.sentence(),
"videoUrl": get_video_url() if random.random() < 0.3 else None,
"imageUrls": [get_image() for _ in range(3)],
"description": faker.text(),
"categories": [faker.word() for _ in range(3)],
"isFeatured": random.choice([True, False]),
"isAvailable": True,
"storeListingId": listing.id,
"submissionStatus": random.choice(
data=StoreListingVersionCreateInput(
agentGraphId=graph.id,
agentGraphVersion=graph.version,
name=graph.name or faker.sentence(nb_words=3),
subHeading=faker.sentence(),
videoUrl=get_video_url() if random.random() < 0.3 else None,
imageUrls=[get_image() for _ in range(3)],
description=faker.text(),
categories=[faker.word() for _ in range(3)],
isFeatured=random.choice([True, False]),
isAvailable=True,
storeListingId=listing.id,
submissionStatus=random.choice(
[
prisma.enums.SubmissionStatus.PENDING,
prisma.enums.SubmissionStatus.APPROVED,
prisma.enums.SubmissionStatus.REJECTED,
]
),
}
)
)
store_listing_versions.append(version)
@@ -469,51 +475,49 @@ async def main():
try:
await db.useronboarding.create(
data={
"userId": user.id,
"completedSteps": completed_steps,
"walletShown": random.choice([True, False]),
"notified": (
data=UserOnboardingCreateInput(
userId=user.id,
completedSteps=completed_steps,
walletShown=random.choice([True, False]),
notified=(
random.sample(completed_steps, k=min(3, len(completed_steps)))
if completed_steps
else []
),
"rewardedFor": (
rewardedFor=(
random.sample(completed_steps, k=min(2, len(completed_steps)))
if completed_steps
else []
),
"usageReason": (
usageReason=(
random.choice(["personal", "business", "research", "learning"])
if random.random() < 0.7
else None
),
"integrations": random.sample(
integrations=random.sample(
["github", "google", "discord", "slack"], k=random.randint(0, 2)
),
"otherIntegrations": (
faker.word() if random.random() < 0.2 else None
),
"selectedStoreListingVersionId": (
otherIntegrations=(faker.word() if random.random() < 0.2 else None),
selectedStoreListingVersionId=(
random.choice(store_listing_versions).id
if store_listing_versions and random.random() < 0.5
else None
),
"onboardingAgentExecutionId": (
onboardingAgentExecutionId=(
random.choice(agent_graph_executions).id
if agent_graph_executions and random.random() < 0.3
else None
),
"agentRuns": random.randint(0, 10),
}
agentRuns=random.randint(0, 10),
)
)
except Exception as e:
print(f"Error creating onboarding for user {user.id}: {e}")
# Try simpler version
await db.useronboarding.create(
data={
"userId": user.id,
}
data=UserOnboardingCreateInput(
userId=user.id,
)
)
# Insert IntegrationWebhooks for some users
@@ -544,20 +548,20 @@ async def main():
for user in users:
api_key = APIKeySmith().generate_key()
await db.apikey.create(
data={
"name": faker.word(),
"head": api_key.head,
"tail": api_key.tail,
"hash": api_key.hash,
"salt": api_key.salt,
"status": prisma.enums.APIKeyStatus.ACTIVE,
"permissions": [
data=APIKeyCreateInput(
name=faker.word(),
head=api_key.head,
tail=api_key.tail,
hash=api_key.hash,
salt=api_key.salt,
status=prisma.enums.APIKeyStatus.ACTIVE,
permissions=[
prisma.enums.APIKeyPermission.EXECUTE_GRAPH,
prisma.enums.APIKeyPermission.READ_GRAPH,
],
"description": faker.text(),
"userId": user.id,
}
description=faker.text(),
userId=user.id,
)
)
# Refresh materialized views

View File

@@ -16,6 +16,7 @@ from datetime import datetime, timedelta
import prisma.enums
from faker import Faker
from prisma import Json, Prisma
from prisma.types import CreditTransactionCreateInput, StoreListingReviewCreateInput
faker = Faker()
@@ -166,16 +167,16 @@ async def main():
score = random.choices([1, 2, 3, 4, 5], weights=[5, 10, 20, 40, 25])[0]
await db.storelistingreview.create(
data={
"storeListingVersionId": version.id,
"reviewByUserId": reviewer.id,
"score": score,
"comments": (
data=StoreListingReviewCreateInput(
storeListingVersionId=version.id,
reviewByUserId=reviewer.id,
score=score,
comments=(
faker.text(max_nb_chars=200)
if random.random() < 0.7
else None
),
}
)
)
new_reviews_count += 1
@@ -244,17 +245,17 @@ async def main():
)
await db.credittransaction.create(
data={
"userId": user.id,
"amount": amount,
"type": transaction_type,
"metadata": Json(
data=CreditTransactionCreateInput(
userId=user.id,
amount=amount,
type=transaction_type,
metadata=Json(
{
"source": "test_updater",
"timestamp": datetime.now().isoformat(),
}
),
}
)
)
transaction_count += 1

View File

@@ -0,0 +1,146 @@
/**
* Cloudflare Workers Script for docs.agpt.co → agpt.co/docs migration
*
* Deploy this script to handle all redirects with a single JavaScript file.
* No rule limits, easy to maintain, handles all edge cases.
*/
// URL mapping for special cases that don't follow patterns
const SPECIAL_MAPPINGS = {
// Root page
'/': '/docs/platform',
// Special cases that don't follow standard patterns
'/platform/d_id/': '/docs/integrations/block-integrations/d-id',
'/platform/blocks/blocks/': '/docs/integrations',
'/platform/blocks/decoder_block/': '/docs/integrations/block-integrations/text-decoder',
'/platform/blocks/http': '/docs/integrations/block-integrations/send-web-request',
'/platform/blocks/llm/': '/docs/integrations/block-integrations/ai-and-llm',
'/platform/blocks/time_blocks': '/docs/integrations/block-integrations/time-and-date',
'/platform/blocks/text_to_speech_block': '/docs/integrations/block-integrations/text-to-speech',
'/platform/blocks/ai_shortform_video_block': '/docs/integrations/block-integrations/ai-shortform-video',
'/platform/blocks/replicate_flux_advanced': '/docs/integrations/block-integrations/replicate-flux-advanced',
'/platform/blocks/flux_kontext': '/docs/integrations/block-integrations/flux-kontext',
'/platform/blocks/ai_condition/': '/docs/integrations/block-integrations/ai-condition',
'/platform/blocks/email_block': '/docs/integrations/block-integrations/email',
'/platform/blocks/google_maps': '/docs/integrations/block-integrations/google-maps',
'/platform/blocks/google/gmail': '/docs/integrations/block-integrations/gmail',
'/platform/blocks/github/issues/': '/docs/integrations/block-integrations/github-issues',
'/platform/blocks/github/repo/': '/docs/integrations/block-integrations/github-repo',
'/platform/blocks/github/pull_requests': '/docs/integrations/block-integrations/github-pull-requests',
'/platform/blocks/twitter/twitter': '/docs/integrations/block-integrations/twitter',
'/classic/setup/': '/docs/classic/setup/setting-up-autogpt-classic',
'/code-of-conduct/': '/docs/classic/help-us-improve-autogpt/code-of-conduct',
'/contributing/': '/docs/classic/contributing',
'/contribute/': '/docs/contribute',
'/forge/components/introduction/': '/docs/classic/forge/introduction'
};
/**
* Transform path by replacing underscores with hyphens and removing trailing slashes
*/
function transformPath(path) {
return path.replace(/_/g, '-').replace(/\/$/, '');
}
/**
* Handle docs.agpt.co redirects
*/
function handleDocsRedirect(url) {
const pathname = url.pathname;
// Check special mappings first
if (SPECIAL_MAPPINGS[pathname]) {
return `https://agpt.co${SPECIAL_MAPPINGS[pathname]}`;
}
// Pattern-based redirects
// Platform blocks: /platform/blocks/* → /docs/integrations/block-integrations/*
if (pathname.startsWith('/platform/blocks/')) {
const blockName = pathname.substring('/platform/blocks/'.length);
const transformedName = transformPath(blockName);
return `https://agpt.co/docs/integrations/block-integrations/${transformedName}`;
}
// Platform contributing: /platform/contributing/* → /docs/platform/contributing/*
if (pathname.startsWith('/platform/contributing/')) {
const subPath = pathname.substring('/platform/contributing/'.length);
return `https://agpt.co/docs/platform/contributing/${subPath}`;
}
// Platform general: /platform/* → /docs/platform/* (with underscore→hyphen)
if (pathname.startsWith('/platform/')) {
const subPath = pathname.substring('/platform/'.length);
const transformedPath = transformPath(subPath);
return `https://agpt.co/docs/platform/${transformedPath}`;
}
// Forge components: /forge/components/* → /docs/classic/forge/introduction/*
if (pathname.startsWith('/forge/components/')) {
const subPath = pathname.substring('/forge/components/'.length);
return `https://agpt.co/docs/classic/forge/introduction/${subPath}`;
}
// Forge general: /forge/* → /docs/classic/forge/*
if (pathname.startsWith('/forge/')) {
const subPath = pathname.substring('/forge/'.length);
return `https://agpt.co/docs/classic/forge/${subPath}`;
}
// Classic: /classic/* → /docs/classic/*
if (pathname.startsWith('/classic/')) {
const subPath = pathname.substring('/classic/'.length);
return `https://agpt.co/docs/classic/${subPath}`;
}
// Default fallback
return 'https://agpt.co/docs/';
}
/**
* Main Worker function
*/
export default {
async fetch(request, env, ctx) {
const url = new URL(request.url);
// Only handle docs.agpt.co requests
if (url.hostname === 'docs.agpt.co') {
const redirectUrl = handleDocsRedirect(url);
return new Response(null, {
status: 301,
headers: {
'Location': redirectUrl,
'Cache-Control': 'max-age=300' // Cache redirects for 5 minutes
}
});
}
// For non-docs requests, pass through or return 404
return new Response('Not Found', { status: 404 });
}
};
// Test function for local development
export function testRedirects() {
const testCases = [
'https://docs.agpt.co/',
'https://docs.agpt.co/platform/getting-started/',
'https://docs.agpt.co/platform/advanced_setup/',
'https://docs.agpt.co/platform/blocks/basic/',
'https://docs.agpt.co/platform/blocks/ai_condition/',
'https://docs.agpt.co/classic/setup/',
'https://docs.agpt.co/forge/components/agents/',
'https://docs.agpt.co/contributing/',
'https://docs.agpt.co/unknown-page'
];
console.log('Testing redirects:');
testCases.forEach(testUrl => {
const url = new URL(testUrl);
const result = handleDocsRedirect(url);
console.log(`${testUrl} → ${result}`);
});
}

View File

@@ -46,14 +46,15 @@
"@radix-ui/react-scroll-area": "1.2.10",
"@radix-ui/react-select": "2.2.6",
"@radix-ui/react-separator": "1.1.7",
"@radix-ui/react-slider": "1.3.6",
"@radix-ui/react-slot": "1.2.3",
"@radix-ui/react-switch": "1.2.6",
"@radix-ui/react-tabs": "1.1.13",
"@radix-ui/react-toast": "1.2.15",
"@radix-ui/react-tooltip": "1.2.8",
"@rjsf/core": "5.24.13",
"@rjsf/utils": "5.24.13",
"@rjsf/validator-ajv8": "5.24.13",
"@rjsf/core": "6.1.2",
"@rjsf/utils": "6.1.2",
"@rjsf/validator-ajv8": "6.1.2",
"@sentry/nextjs": "10.27.0",
"@supabase/ssr": "0.7.0",
"@supabase/supabase-js": "2.78.0",
@@ -69,6 +70,7 @@
"cmdk": "1.1.1",
"cookie": "1.0.2",
"date-fns": "4.1.0",
"dexie": "4.2.1",
"dotenv": "17.2.3",
"elliptic": "6.6.1",
"embla-carousel-react": "8.6.0",
@@ -90,7 +92,6 @@
"react-currency-input-field": "4.0.3",
"react-day-picker": "9.11.1",
"react-dom": "18.3.1",
"react-drag-drop-files": "2.4.0",
"react-hook-form": "7.66.0",
"react-icons": "5.5.0",
"react-markdown": "9.0.3",

File diff suppressed because it is too large

Binary file not shown (new image, 2.6 KiB).

Binary file not shown (new image, 16 KiB).

View File

@@ -1,4 +1,4 @@
import { OAuthPopupResultMessage } from "@/components/renderers/input-renderer/fields/CredentialField/models/OAuthCredentialModal/useOAuthCredentialModal";
import { OAuthPopupResultMessage } from "./types";
import { NextResponse } from "next/server";
// This route is intended to be used as the callback for integration OAuth flows,

View File

@@ -0,0 +1,11 @@
export type OAuthPopupResultMessage = { message_type: "oauth_popup_result" } & (
| {
success: true;
code: string;
state: string;
}
| {
success: false;
message: string;
}
);

View File

@@ -16,6 +16,7 @@ import {
SheetTitle,
SheetTrigger,
} from "@/components/__legacy__/ui/sheet";
import { Button } from "@/components/atoms/Button/Button";
import {
Tooltip,
TooltipContent,
@@ -25,7 +26,6 @@ import {
import { BookOpenIcon } from "@phosphor-icons/react";
import { useMemo } from "react";
import { useShallow } from "zustand/react/shallow";
import { BuilderActionButton } from "../BuilderActionButton";
export const AgentOutputs = ({ flowID }: { flowID: string | null }) => {
const hasOutputs = useGraphStore(useShallow((state) => state.hasOutputs));
@@ -76,9 +76,13 @@ export const AgentOutputs = ({ flowID }: { flowID: string | null }) => {
<Tooltip>
<TooltipTrigger asChild>
<SheetTrigger asChild>
<BuilderActionButton disabled={!flowID || !hasOutputs()}>
<BookOpenIcon className="size-6" />
</BuilderActionButton>
<Button
variant="outline"
size="icon"
disabled={!flowID || !hasOutputs()}
>
<BookOpenIcon className="size-4" />
</Button>
</SheetTrigger>
</TooltipTrigger>
<TooltipContent>

View File

@@ -1,37 +0,0 @@
import { Button } from "@/components/atoms/Button/Button";
import { ButtonProps } from "@/components/atoms/Button/helpers";
import { cn } from "@/lib/utils";
import { CircleNotchIcon } from "@phosphor-icons/react";
export const BuilderActionButton = ({
children,
className,
isLoading,
...props
}: ButtonProps & { isLoading?: boolean }) => {
return (
<Button
variant="icon"
size={"small"}
className={cn(
"relative h-12 w-12 min-w-0 text-lg",
"bg-gradient-to-br from-zinc-50 to-zinc-200",
"border border-zinc-200",
"shadow-[inset_0_3px_0_0_rgba(255,255,255,0.5),0_2px_4px_0_rgba(0,0,0,0.2)]",
"dark:shadow-[inset_0_1px_0_0_rgba(255,255,255,0.1),0_2px_4px_0_rgba(0,0,0,0.4)]",
"hover:shadow-[inset_0_1px_0_0_rgba(255,255,255,0.5),0_1px_2px_0_rgba(0,0,0,0.2)]",
"active:shadow-[inset_0_2px_4px_0_rgba(0,0,0,0.2)]",
"transition-all duration-150",
"disabled:cursor-not-allowed disabled:opacity-50",
className,
)}
{...props}
>
{!isLoading ? (
children
) : (
<CircleNotchIcon className="size-6 animate-spin" />
)}
</Button>
);
};

View File

@@ -1,12 +1,12 @@
import { ShareIcon } from "@phosphor-icons/react";
import { BuilderActionButton } from "../BuilderActionButton";
import { Button } from "@/components/atoms/Button/Button";
import {
Tooltip,
TooltipContent,
TooltipTrigger,
} from "@/components/atoms/Tooltip/BaseTooltip";
import { usePublishToMarketplace } from "./usePublishToMarketplace";
import { PublishAgentModal } from "@/components/contextual/PublishAgentModal/PublishAgentModal";
import { ShareIcon } from "@phosphor-icons/react";
import { usePublishToMarketplace } from "./usePublishToMarketplace";
export const PublishToMarketplace = ({ flowID }: { flowID: string | null }) => {
const { handlePublishToMarketplace, publishState, handleStateChange } =
@@ -16,12 +16,14 @@ export const PublishToMarketplace = ({ flowID }: { flowID: string | null }) => {
<>
<Tooltip>
<TooltipTrigger asChild>
<BuilderActionButton
<Button
variant="outline"
size="icon"
onClick={handlePublishToMarketplace}
disabled={!flowID}
>
<ShareIcon className="size-6 drop-shadow-sm" />
</BuilderActionButton>
<ShareIcon className="size-4" />
</Button>
</TooltipTrigger>
<TooltipContent>Publish to Marketplace</TooltipContent>
</Tooltip>
@@ -30,6 +32,7 @@ export const PublishToMarketplace = ({ flowID }: { flowID: string | null }) => {
targetState={publishState}
onStateChange={handleStateChange}
preSelectedAgentId={flowID || undefined}
showTrigger={false}
/>
</>
);

View File

@@ -1,15 +1,14 @@
import { useRunGraph } from "./useRunGraph";
import { useGraphStore } from "@/app/(platform)/build/stores/graphStore";
import { useShallow } from "zustand/react/shallow";
import { PlayIcon, StopIcon } from "@phosphor-icons/react";
import { cn } from "@/lib/utils";
import { RunInputDialog } from "../RunInputDialog/RunInputDialog";
import { Button } from "@/components/atoms/Button/Button";
import {
Tooltip,
TooltipContent,
TooltipTrigger,
} from "@/components/atoms/Tooltip/BaseTooltip";
import { BuilderActionButton } from "../BuilderActionButton";
import { PlayIcon, StopIcon } from "@phosphor-icons/react";
import { useShallow } from "zustand/react/shallow";
import { RunInputDialog } from "../RunInputDialog/RunInputDialog";
import { useRunGraph } from "./useRunGraph";
export const RunGraph = ({ flowID }: { flowID: string | null }) => {
const {
@@ -29,21 +28,19 @@ export const RunGraph = ({ flowID }: { flowID: string | null }) => {
<>
<Tooltip>
<TooltipTrigger asChild>
<BuilderActionButton
className={cn(
isGraphRunning &&
"border-red-500 bg-gradient-to-br from-red-400 to-red-500 shadow-[inset_0_2px_0_0_rgba(255,255,255,0.5),0_2px_4px_0_rgba(0,0,0,0.2)]",
)}
<Button
size="icon"
variant={isGraphRunning ? "destructive" : "primary"}
onClick={isGraphRunning ? handleStopGraph : handleRunGraph}
disabled={!flowID || isExecutingGraph || isTerminatingGraph}
isLoading={isExecutingGraph || isTerminatingGraph || isSaving}
loading={isExecutingGraph || isTerminatingGraph || isSaving}
>
{!isGraphRunning ? (
<PlayIcon className="size-6 drop-shadow-sm" />
<PlayIcon className="size-4" />
) : (
<StopIcon className="size-6 drop-shadow-sm" />
<StopIcon className="size-4" />
)}
</BuilderActionButton>
</Button>
</TooltipTrigger>
<TooltipContent>
{isGraphRunning ? "Stop agent" : "Run agent"}

View File

@@ -5,7 +5,7 @@ import { useGraphStore } from "@/app/(platform)/build/stores/graphStore";
import { Button } from "@/components/atoms/Button/Button";
import { ClockIcon, PlayIcon } from "@phosphor-icons/react";
import { Text } from "@/components/atoms/Text/Text";
import { FormRenderer } from "@/components/renderers/input-renderer/FormRenderer";
import { FormRenderer } from "@/components/renderers/InputRenderer/FormRenderer";
import { useRunInputDialog } from "./useRunInputDialog";
import { CronSchedulerDialog } from "../CronSchedulerDialog/CronSchedulerDialog";

View File

@@ -8,7 +8,7 @@ import {
import { parseAsInteger, parseAsString, useQueryStates } from "nuqs";
import { useMemo, useState } from "react";
import { uiSchema } from "../../../FlowEditor/nodes/uiSchema";
import { isCredentialFieldSchema } from "@/components/renderers/input-renderer/fields/CredentialField/helpers";
import { isCredentialFieldSchema } from "@/components/renderers/InputRenderer/custom/CredentialField/helpers";
export const useRunInputDialog = ({
setIsOpen,

View File

@@ -1,14 +1,14 @@
import { ClockIcon } from "@phosphor-icons/react";
import { RunInputDialog } from "../RunInputDialog/RunInputDialog";
import { useScheduleGraph } from "./useScheduleGraph";
import { Button } from "@/components/atoms/Button/Button";
import {
Tooltip,
TooltipContent,
TooltipProvider,
TooltipTrigger,
} from "@/components/atoms/Tooltip/BaseTooltip";
import { ClockIcon } from "@phosphor-icons/react";
import { CronSchedulerDialog } from "../CronSchedulerDialog/CronSchedulerDialog";
import { BuilderActionButton } from "../BuilderActionButton";
import { RunInputDialog } from "../RunInputDialog/RunInputDialog";
import { useScheduleGraph } from "./useScheduleGraph";
export const ScheduleGraph = ({ flowID }: { flowID: string | null }) => {
const {
@@ -23,12 +23,14 @@ export const ScheduleGraph = ({ flowID }: { flowID: string | null }) => {
<TooltipProvider>
<Tooltip>
<TooltipTrigger asChild>
<BuilderActionButton
<Button
variant="outline"
size="icon"
onClick={handleScheduleGraph}
disabled={!flowID}
>
<ClockIcon className="size-6" />
</BuilderActionButton>
<ClockIcon className="size-4" />
</Button>
</TooltipTrigger>
<TooltipContent>
<p>Schedule Graph</p>

View File

@@ -0,0 +1,160 @@
"use client";
import { Button } from "@/components/atoms/Button/Button";
import { ClockCounterClockwiseIcon, XIcon } from "@phosphor-icons/react";
import { cn } from "@/lib/utils";
import { formatTimeAgo } from "@/lib/utils/time";
import {
Tooltip,
TooltipContent,
TooltipTrigger,
} from "@/components/atoms/Tooltip/BaseTooltip";
import { useDraftRecoveryPopup } from "./useDraftRecoveryPopup";
import { Text } from "@/components/atoms/Text/Text";
import { AnimatePresence, motion } from "framer-motion";
import { DraftDiff } from "@/lib/dexie/draft-utils";
interface DraftRecoveryPopupProps {
isInitialLoadComplete: boolean;
}
function formatDiffSummary(diff: DraftDiff | null): string {
if (!diff) return "";
const parts: string[] = [];
// Node changes
const nodeChanges: string[] = [];
if (diff.nodes.added > 0) nodeChanges.push(`+${diff.nodes.added}`);
if (diff.nodes.removed > 0) nodeChanges.push(`-${diff.nodes.removed}`);
if (diff.nodes.modified > 0) nodeChanges.push(`~${diff.nodes.modified}`);
if (nodeChanges.length > 0) {
parts.push(
`${nodeChanges.join("/")} block${diff.nodes.added + diff.nodes.removed + diff.nodes.modified !== 1 ? "s" : ""}`,
);
}
// Edge changes
const edgeChanges: string[] = [];
if (diff.edges.added > 0) edgeChanges.push(`+${diff.edges.added}`);
if (diff.edges.removed > 0) edgeChanges.push(`-${diff.edges.removed}`);
if (diff.edges.modified > 0) edgeChanges.push(`~${diff.edges.modified}`);
if (edgeChanges.length > 0) {
parts.push(
`${edgeChanges.join("/")} connection${diff.edges.added + diff.edges.removed + diff.edges.modified !== 1 ? "s" : ""}`,
);
}
return parts.join(", ");
}
export function DraftRecoveryPopup({
isInitialLoadComplete,
}: DraftRecoveryPopupProps) {
const {
isOpen,
popupRef,
nodeCount,
edgeCount,
diff,
savedAt,
onLoad,
onDiscard,
} = useDraftRecoveryPopup(isInitialLoadComplete);
const diffSummary = formatDiffSummary(diff);
return (
<AnimatePresence>
{isOpen && (
<motion.div
ref={popupRef}
className={cn("absolute left-1/2 top-4 z-50")}
initial={{
opacity: 0,
x: "-50%",
y: "-150%",
scale: 0.5,
filter: "blur(20px)",
}}
animate={{
opacity: 1,
x: "-50%",
y: "0%",
scale: 1,
filter: "blur(0px)",
}}
exit={{
opacity: 0,
y: "-150%",
scale: 0.5,
filter: "blur(20px)",
transition: { duration: 0.4, type: "spring", bounce: 0.2 },
}}
transition={{ duration: 0.2, type: "spring", bounce: 0.2 }}
>
<div
className={cn(
"flex items-center gap-3 rounded-xlarge border border-amber-200 bg-amber-50 px-4 py-3 shadow-lg",
)}
>
<div className="flex items-center gap-2 text-amber-700 dark:text-amber-300">
<ClockCounterClockwiseIcon className="h-5 w-5" weight="fill" />
</div>
<div className="flex flex-col">
<Text
variant="small-medium"
className="text-amber-900 dark:text-amber-100"
>
Unsaved changes found
</Text>
<Text
variant="small"
className="text-amber-700 dark:text-amber-400"
>
{diffSummary ||
`${nodeCount} block${nodeCount !== 1 ? "s" : ""}, ${edgeCount} connection${edgeCount !== 1 ? "s" : ""}`}{" "}
{formatTimeAgo(new Date(savedAt).toISOString())}
</Text>
</div>
<div className="ml-2 flex items-center gap-2">
<Tooltip delayDuration={10}>
<TooltipTrigger asChild>
<Button
variant="primary"
size="small"
onClick={onLoad}
className="aspect-square min-w-0 p-1.5"
>
<ClockCounterClockwiseIcon size={20} weight="fill" />
<span className="sr-only">Restore changes</span>
</Button>
</TooltipTrigger>
<TooltipContent>Restore changes</TooltipContent>
</Tooltip>
<Tooltip delayDuration={10}>
<TooltipTrigger asChild>
<Button
variant="destructive"
size="icon"
onClick={onDiscard}
aria-label="Discard changes"
className="aspect-square min-w-0 p-1.5"
>
<XIcon size={20} />
<span className="sr-only">Discard changes</span>
</Button>
</TooltipTrigger>
<TooltipContent>Discard changes</TooltipContent>
</Tooltip>
</div>
</div>
</motion.div>
)}
</AnimatePresence>
);
}

View File

@@ -0,0 +1,63 @@
import { useEffect, useRef } from "react";
import { useDraftManager } from "../FlowEditor/Flow/useDraftManager";
export const useDraftRecoveryPopup = (isInitialLoadComplete: boolean) => {
const popupRef = useRef<HTMLDivElement>(null);
const {
isRecoveryOpen: isOpen,
savedAt,
nodeCount,
edgeCount,
diff,
loadDraft: onLoad,
discardDraft: onDiscard,
} = useDraftManager(isInitialLoadComplete);
useEffect(() => {
if (!isOpen) return;
const handleClickOutside = (event: MouseEvent) => {
if (
popupRef.current &&
!popupRef.current.contains(event.target as Node)
) {
onDiscard();
}
};
const timeoutId = setTimeout(() => {
document.addEventListener("mousedown", handleClickOutside);
}, 100);
return () => {
clearTimeout(timeoutId);
document.removeEventListener("mousedown", handleClickOutside);
};
}, [isOpen, onDiscard]);
useEffect(() => {
if (!isOpen) return;
const handleKeyDown = (event: KeyboardEvent) => {
if (event.key === "Escape") {
onDiscard();
}
};
document.addEventListener("keydown", handleKeyDown);
return () => {
document.removeEventListener("keydown", handleKeyDown);
};
}, [isOpen, onDiscard]);
return {
popupRef,
isOpen,
nodeCount,
edgeCount,
diff,
savedAt,
onLoad,
onDiscard,
};
};

View File

@@ -1,26 +1,27 @@
import { ReactFlow, Background } from "@xyflow/react";
import NewControlPanel from "../../NewControlPanel/NewControlPanel";
import CustomEdge from "../edges/CustomEdge";
import { useFlow } from "./useFlow";
import { useShallow } from "zustand/react/shallow";
import { useNodeStore } from "../../../stores/nodeStore";
import { useMemo, useEffect, useCallback } from "react";
import { CustomNode } from "../nodes/CustomNode/CustomNode";
import { useCustomEdge } from "../edges/useCustomEdge";
import { useFlowRealtime } from "./useFlowRealtime";
import { GraphLoadingBox } from "./components/GraphLoadingBox";
import { BuilderActions } from "../../BuilderActions/BuilderActions";
import { RunningBackground } from "./components/RunningBackground";
import { useGraphStore } from "../../../stores/graphStore";
import { useCopyPaste } from "./useCopyPaste";
import { FloatingReviewsPanel } from "@/components/organisms/FloatingReviewsPanel/FloatingReviewsPanel";
import { parseAsString, useQueryStates } from "nuqs";
import { CustomControls } from "./components/CustomControl";
import { useGetV1GetSpecificGraph } from "@/app/api/__generated__/endpoints/graphs/graphs";
import { okData } from "@/app/api/helpers";
import { FloatingReviewsPanel } from "@/components/organisms/FloatingReviewsPanel/FloatingReviewsPanel";
import { Background, ReactFlow } from "@xyflow/react";
import { parseAsString, useQueryStates } from "nuqs";
import { useCallback, useMemo } from "react";
import { useShallow } from "zustand/react/shallow";
import { useGraphStore } from "../../../stores/graphStore";
import { useNodeStore } from "../../../stores/nodeStore";
import { BuilderActions } from "../../BuilderActions/BuilderActions";
import { DraftRecoveryPopup } from "../../DraftRecoveryDialog/DraftRecoveryPopup";
import { FloatingSafeModeToggle } from "../../FloatingSafeModeToogle";
import NewControlPanel from "../../NewControlPanel/NewControlPanel";
import CustomEdge from "../edges/CustomEdge";
import { useCustomEdge } from "../edges/useCustomEdge";
import { CustomNode } from "../nodes/CustomNode/CustomNode";
import { CustomControls } from "./components/CustomControl";
import { GraphLoadingBox } from "./components/GraphLoadingBox";
import { RunningBackground } from "./components/RunningBackground";
import { TriggerAgentBanner } from "./components/TriggerAgentBanner";
import { resolveCollisions } from "./helpers/resolve-collision";
import { FloatingSafeModeToggle } from "../../FloatingSafeModeToogle";
import { useCopyPaste } from "./useCopyPaste";
import { useFlow } from "./useFlow";
import { useFlowRealtime } from "./useFlowRealtime";
export const Flow = () => {
const [{ flowID, flowExecutionID }] = useQueryStates({
@@ -41,14 +42,18 @@ export const Flow = () => {
const nodes = useNodeStore(useShallow((state) => state.nodes));
const setNodes = useNodeStore(useShallow((state) => state.setNodes));
const onNodesChange = useNodeStore(
useShallow((state) => state.onNodesChange),
);
const hasWebhookNodes = useNodeStore(
useShallow((state) => state.hasWebhookNodes()),
);
const nodeTypes = useMemo(() => ({ custom: CustomNode }), []);
const edgeTypes = useMemo(() => ({ custom: CustomEdge }), []);
const onNodeDragStop = useCallback(() => {
setNodes(
resolveCollisions(nodes, {
@@ -60,29 +65,26 @@ export const Flow = () => {
}, [setNodes, nodes]);
const { edges, onConnect, onEdgesChange } = useCustomEdge();
// We use this hook to load the graph and convert it into custom nodes and edges.
const { onDragOver, onDrop, isFlowContentLoading, isLocked, setIsLocked } =
useFlow();
// for loading purposes
const {
onDragOver,
onDrop,
isFlowContentLoading,
isInitialLoadComplete,
isLocked,
setIsLocked,
} = useFlow();
// This hook is used for websocket realtime updates.
useFlowRealtime();
// Copy/paste functionality
const handleCopyPaste = useCopyPaste();
useCopyPaste();
useEffect(() => {
const handleKeyDown = (event: KeyboardEvent) => {
handleCopyPaste(event);
};
window.addEventListener("keydown", handleKeyDown);
return () => {
window.removeEventListener("keydown", handleKeyDown);
};
}, [handleCopyPaste]);
const isGraphRunning = useGraphStore(
useShallow((state) => state.isGraphRunning),
);
return (
<div className="flex h-full w-full dark:bg-slate-900">
<div className="relative flex-1">
@@ -95,6 +97,9 @@ export const Flow = () => {
onConnect={onConnect}
onEdgesChange={onEdgesChange}
onNodeDragStop={onNodeDragStop}
onNodeContextMenu={(event) => {
event.preventDefault();
}}
maxZoom={2}
minZoom={0.1}
onDragOver={onDragOver}
@@ -102,6 +107,7 @@ export const Flow = () => {
nodesDraggable={!isLocked}
nodesConnectable={!isLocked}
elementsSelectable={!isLocked}
deleteKeyCode={["Backspace", "Delete"]}
>
<Background />
<CustomControls setIsLocked={setIsLocked} isLocked={isLocked} />
@@ -115,6 +121,7 @@ export const Flow = () => {
className="right-2 top-32 p-2"
/>
)}
<DraftRecoveryPopup isInitialLoadComplete={isInitialLoadComplete} />
</ReactFlow>
</div>
{/* TODO: Update this in the future; do not pass executionId as a prop, use useQueryState inside the component instead */}

View File

@@ -48,8 +48,6 @@ export const resolveCollisions: CollisionAlgorithm = (
const width = (node.width ?? node.measured?.width ?? 0) + margin * 2;
const height = (node.height ?? node.measured?.height ?? 0) + margin * 2;
console.log("width", width);
console.log("height", height);
const x = node.position.x - margin;
const y = node.position.y - margin;

View File

@@ -1,4 +1,4 @@
import { useCallback } from "react";
import { useCallback, useEffect } from "react";
import { useReactFlow } from "@xyflow/react";
import { v4 as uuidv4 } from "uuid";
import { useNodeStore } from "../../../stores/nodeStore";
@@ -151,5 +151,16 @@ export function useCopyPaste() {
[getViewport, toast],
);
useEffect(() => {
const handleKeyDown = (event: KeyboardEvent) => {
handleCopyPaste(event);
};
window.addEventListener("keydown", handleKeyDown);
return () => {
window.removeEventListener("keydown", handleKeyDown);
};
}, [handleCopyPaste]);
return handleCopyPaste;
}
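With the keydown wiring moved inside useCopyPaste, consumers now only call the hook, as the Flow change above shows. A minimal sketch of a consumer (the component name and props here are illustrative, not taken from the codebase):
import { ReactFlow } from "@xyflow/react";
import { useCopyPaste } from "./useCopyPaste";
// Hypothetical consumer: the hook registers the window keydown listener
// on mount and removes it on unmount, so no extra wiring is needed here.
export function BuilderCanvas() {
  useCopyPaste();
  return <ReactFlow nodes={[]} edges={[]} />;
}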

View File

@@ -0,0 +1,319 @@
import { useState, useCallback, useEffect, useRef } from "react";
import { parseAsString, parseAsInteger, useQueryStates } from "nuqs";
import {
draftService,
getTempFlowId,
getOrCreateTempFlowId,
DraftData,
} from "@/services/builder-draft/draft-service";
import { BuilderDraft } from "@/lib/dexie/db";
import {
cleanNodes,
cleanEdges,
calculateDraftDiff,
DraftDiff,
} from "@/lib/dexie/draft-utils";
import { useNodeStore } from "../../../stores/nodeStore";
import { useEdgeStore } from "../../../stores/edgeStore";
import { useGraphStore } from "../../../stores/graphStore";
import { useHistoryStore } from "../../../stores/historyStore";
import isEqual from "lodash/isEqual";
const AUTO_SAVE_INTERVAL_MS = 15000; // 15 seconds
interface DraftRecoveryState {
isOpen: boolean;
draft: BuilderDraft | null;
diff: DraftDiff | null;
}
/**
* Consolidated hook for draft persistence and recovery
* - Auto-saves builder state every 15 seconds
* - Saves on beforeunload event
* - Checks for and manages unsaved drafts on load
*/
export function useDraftManager(isInitialLoadComplete: boolean) {
const [state, setState] = useState<DraftRecoveryState>({
isOpen: false,
draft: null,
diff: null,
});
const [{ flowID, flowVersion }] = useQueryStates({
flowID: parseAsString,
flowVersion: parseAsInteger,
});
const lastSavedStateRef = useRef<DraftData | null>(null);
const saveTimeoutRef = useRef<NodeJS.Timeout | null>(null);
const isDirtyRef = useRef(false);
const hasCheckedForDraft = useRef(false);
const getEffectiveFlowId = useCallback((): string => {
return flowID || getOrCreateTempFlowId();
}, [flowID]);
const getCurrentState = useCallback((): DraftData => {
const nodes = useNodeStore.getState().nodes;
const edges = useEdgeStore.getState().edges;
const nodeCounter = useNodeStore.getState().nodeCounter;
const graphStore = useGraphStore.getState();
return {
nodes,
edges,
graphSchemas: {
input: graphStore.inputSchema,
credentials: graphStore.credentialsInputSchema,
output: graphStore.outputSchema,
},
nodeCounter,
flowVersion: flowVersion ?? undefined,
};
}, [flowVersion]);
const cleanStateForComparison = useCallback((stateData: DraftData) => {
return {
nodes: cleanNodes(stateData.nodes),
edges: cleanEdges(stateData.edges),
};
}, []);
const hasChanges = useCallback((): boolean => {
const currentState = getCurrentState();
if (!lastSavedStateRef.current) {
return currentState.nodes.length > 0;
}
const currentClean = cleanStateForComparison(currentState);
const lastClean = cleanStateForComparison(lastSavedStateRef.current);
return !isEqual(currentClean, lastClean);
}, [getCurrentState, cleanStateForComparison]);
const saveDraft = useCallback(async () => {
const effectiveFlowId = getEffectiveFlowId();
const currentState = getCurrentState();
if (currentState.nodes.length === 0 && currentState.edges.length === 0) {
return;
}
if (!hasChanges()) {
return;
}
try {
await draftService.saveDraft(effectiveFlowId, currentState);
lastSavedStateRef.current = currentState;
isDirtyRef.current = false;
} catch (error) {
console.error("[DraftPersistence] Failed to save draft:", error);
}
}, [getEffectiveFlowId, getCurrentState, hasChanges]);
const scheduleSave = useCallback(() => {
isDirtyRef.current = true;
if (saveTimeoutRef.current) {
clearTimeout(saveTimeoutRef.current);
}
saveTimeoutRef.current = setTimeout(() => {
saveDraft();
}, AUTO_SAVE_INTERVAL_MS);
}, [saveDraft]);
useEffect(() => {
const unsubscribeNodes = useNodeStore.subscribe((storeState, prevState) => {
if (storeState.nodes !== prevState.nodes) {
scheduleSave();
}
});
const unsubscribeEdges = useEdgeStore.subscribe((storeState, prevState) => {
if (storeState.edges !== prevState.edges) {
scheduleSave();
}
});
return () => {
unsubscribeNodes();
unsubscribeEdges();
};
}, [scheduleSave]);
useEffect(() => {
const handleBeforeUnload = () => {
if (isDirtyRef.current) {
const effectiveFlowId = getEffectiveFlowId();
const currentState = getCurrentState();
if (
currentState.nodes.length === 0 &&
currentState.edges.length === 0
) {
return;
}
draftService.saveDraft(effectiveFlowId, currentState).catch(() => {
// Ignore errors on unload
});
}
};
window.addEventListener("beforeunload", handleBeforeUnload);
return () => {
window.removeEventListener("beforeunload", handleBeforeUnload);
};
}, [getEffectiveFlowId, getCurrentState]);
useEffect(() => {
return () => {
if (saveTimeoutRef.current) {
clearTimeout(saveTimeoutRef.current);
}
if (isDirtyRef.current) {
saveDraft();
}
};
}, [saveDraft]);
useEffect(() => {
draftService.cleanupExpired().catch((error) => {
console.error(
"[DraftPersistence] Failed to cleanup expired drafts:",
error,
);
});
}, []);
const checkForDraft = useCallback(async () => {
const effectiveFlowId = flowID || getTempFlowId();
if (!effectiveFlowId) {
return;
}
try {
const draft = await draftService.loadDraft(effectiveFlowId);
if (!draft) {
return;
}
const currentNodes = useNodeStore.getState().nodes;
const currentEdges = useEdgeStore.getState().edges;
const isDifferent = draftService.isDraftDifferent(
draft,
currentNodes,
currentEdges,
);
if (isDifferent && (draft.nodes.length > 0 || draft.edges.length > 0)) {
const diff = calculateDraftDiff(
draft.nodes,
draft.edges,
currentNodes,
currentEdges,
);
setState({
isOpen: true,
draft,
diff,
});
} else {
await draftService.deleteDraft(effectiveFlowId);
}
} catch (error) {
console.error("[DraftRecovery] Failed to check for draft:", error);
}
}, [flowID]);
useEffect(() => {
if (isInitialLoadComplete && !hasCheckedForDraft.current) {
hasCheckedForDraft.current = true;
checkForDraft();
}
}, [isInitialLoadComplete, checkForDraft]);
useEffect(() => {
hasCheckedForDraft.current = false;
setState({
isOpen: false,
draft: null,
diff: null,
});
}, [flowID]);
const loadDraft = useCallback(async () => {
if (!state.draft) return;
const { draft } = state;
try {
useNodeStore.getState().setNodes(draft.nodes);
useEdgeStore.getState().setEdges(draft.edges);
draft.nodes.forEach((node) => {
useNodeStore.getState().syncHardcodedValuesWithHandleIds(node.id);
});
if (draft.nodeCounter !== undefined) {
useNodeStore.setState({ nodeCounter: draft.nodeCounter });
}
if (draft.graphSchemas) {
useGraphStore
.getState()
.setGraphSchemas(
draft.graphSchemas.input as Record<string, unknown> | null,
draft.graphSchemas.credentials as Record<string, unknown> | null,
draft.graphSchemas.output as Record<string, unknown> | null,
);
}
setTimeout(() => {
useHistoryStore.getState().initializeHistory();
}, 100);
await draftService.deleteDraft(draft.id);
setState({
isOpen: false,
draft: null,
diff: null,
});
} catch (error) {
console.error("[DraftRecovery] Failed to load draft:", error);
}
}, [state.draft]);
const discardDraft = useCallback(async () => {
if (!state.draft) {
setState({ isOpen: false, draft: null, diff: null });
return;
}
try {
await draftService.deleteDraft(state.draft.id);
} catch (error) {
console.error("[DraftRecovery] Failed to discard draft:", error);
}
setState({ isOpen: false, draft: null, diff: null });
}, [state.draft]);
return {
// Recovery popup props
isRecoveryOpen: state.isOpen,
savedAt: state.draft?.savedAt ?? 0,
nodeCount: state.draft?.nodes.length ?? 0,
edgeCount: state.draft?.edges.length ?? 0,
diff: state.diff,
loadDraft,
discardDraft,
};
}
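The auto-save in this hook is effectively a debounce over store changes: every node or edge change restarts a 15-second timer, and the eventual save is skipped when the cleaned state matches the last saved snapshot. A minimal standalone sketch of that scheduling, using the AUTO_SAVE_INTERVAL_MS value above (the save callback is a placeholder):
// Sketch only: each call restarts the quiet-period timer, so a burst of
// edits produces at most one save once the builder has been idle for 15 s.
const AUTO_SAVE_INTERVAL_MS = 15_000;
let timer: ReturnType<typeof setTimeout> | null = null;
function scheduleSave(save: () => Promise<void>) {
  if (timer) clearTimeout(timer);
  timer = setTimeout(() => void save(), AUTO_SAVE_INTERVAL_MS);
}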

View File

@@ -21,6 +21,7 @@ import { AgentExecutionStatus } from "@/app/api/__generated__/models/agentExecut
export const useFlow = () => {
const [isLocked, setIsLocked] = useState(false);
const [hasAutoFramed, setHasAutoFramed] = useState(false);
const [isInitialLoadComplete, setIsInitialLoadComplete] = useState(false);
const addNodes = useNodeStore(useShallow((state) => state.addNodes));
const addLinks = useEdgeStore(useShallow((state) => state.addLinks));
const updateNodeStatus = useNodeStore(
@@ -120,6 +121,14 @@ export const useFlow = () => {
if (customNodes.length > 0) {
useNodeStore.getState().setNodes([]);
addNodes(customNodes);
// Sync hardcoded values with handle IDs.
// If a key-value field has a key without a value, the backend omits it from hardcoded values,
// but a handleId may still exist for that key, causing an inconsistency.
// This ensures hardcoded values stay in sync with handle IDs.
customNodes.forEach((node) => {
useNodeStore.getState().syncHardcodedValuesWithHandleIds(node.id);
});
}
}, [customNodes, addNodes]);
@@ -174,11 +183,23 @@ export const useFlow = () => {
if (customNodes.length > 0 && graph?.links) {
const timer = setTimeout(() => {
useHistoryStore.getState().initializeHistory();
// Mark initial load as complete after history is initialized
setIsInitialLoadComplete(true);
}, 100);
return () => clearTimeout(timer);
}
}, [customNodes, graph?.links]);
// Also mark as complete for new flows (no flowID) after a short delay
useEffect(() => {
if (!flowID && !isGraphLoading && !isBlocksLoading) {
const timer = setTimeout(() => {
setIsInitialLoadComplete(true);
}, 200);
return () => clearTimeout(timer);
}
}, [flowID, isGraphLoading, isBlocksLoading]);
useEffect(() => {
return () => {
useNodeStore.getState().setNodes([]);
@@ -217,6 +238,7 @@ export const useFlow = () => {
useEffect(() => {
setHasAutoFramed(false);
setIsInitialLoadComplete(false);
}, [flowID, flowVersion]);
// Drag and drop block from block menu
@@ -253,6 +275,7 @@ export const useFlow = () => {
return {
isFlowContentLoading: isGraphLoading || isBlocksLoading,
isInitialLoadComplete,
onDragOver,
onDrop,
isLocked,

View File

@@ -1,12 +1,17 @@
import { Connection as RFConnection, EdgeChange } from "@xyflow/react";
import {
Connection as RFConnection,
EdgeChange,
applyEdgeChanges,
} from "@xyflow/react";
import { useEdgeStore } from "@/app/(platform)/build/stores/edgeStore";
import { useCallback } from "react";
import { useNodeStore } from "../../../stores/nodeStore";
import { CustomEdge } from "./CustomEdge";
export const useCustomEdge = () => {
const edges = useEdgeStore((s) => s.edges);
const addEdge = useEdgeStore((s) => s.addEdge);
const removeEdge = useEdgeStore((s) => s.removeEdge);
const setEdges = useEdgeStore((s) => s.setEdges);
const onConnect = useCallback(
(conn: RFConnection) => {
@@ -45,14 +50,10 @@ export const useCustomEdge = () => {
);
const onEdgesChange = useCallback(
(changes: EdgeChange[]) => {
changes.forEach((change) => {
if (change.type === "remove") {
removeEdge(change.id);
}
});
(changes: EdgeChange<CustomEdge>[]) => {
setEdges(applyEdgeChanges(changes, edges));
},
[removeEdge],
[edges, setEdges],
);
return { edges, onConnect, onEdgesChange };
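Switching from the remove-only handler to applyEdgeChanges folds every edge change React Flow emits (removals, selections, and the rest) into the store in one pass. A small sketch of the reducer shape this relies on, assuming the @xyflow/react helper's documented signature:
import { applyEdgeChanges, type Edge, type EdgeChange } from "@xyflow/react";
// applyEdgeChanges(changes, edges) returns a new edge array with all pending
// changes applied, so the store only needs a single setEdges call.
function nextEdges(current: Edge[], changes: EdgeChange[]): Edge[] {
  return applyEdgeChanges(changes, current);
}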

View File

@@ -1,26 +1,32 @@
import { CircleIcon } from "@phosphor-icons/react";
import { Handle, Position } from "@xyflow/react";
import { useEdgeStore } from "../../../stores/edgeStore";
import { cleanUpHandleId } from "@/components/renderers/InputRenderer/helpers";
import { cn } from "@/lib/utils";
const NodeHandle = ({
const InputNodeHandle = ({
handleId,
isConnected,
side,
nodeId,
}: {
handleId: string;
isConnected: boolean;
side: "left" | "right";
nodeId: string;
}) => {
const cleanedHandleId = cleanUpHandleId(handleId);
const isInputConnected = useEdgeStore((state) =>
state.isInputConnected(nodeId ?? "", cleanedHandleId),
);
return (
<Handle
type={side === "left" ? "target" : "source"}
position={side === "left" ? Position.Left : Position.Right}
id={handleId}
className={side === "left" ? "-ml-4 mr-2" : "-mr-2 ml-2"}
type={"target"}
position={Position.Left}
id={cleanedHandleId}
className={"-ml-6 mr-2"}
>
<div className="pointer-events-none">
<CircleIcon
size={16}
weight={isConnected ? "fill" : "duotone"}
weight={isInputConnected ? "fill" : "duotone"}
className={"text-gray-400 opacity-100"}
/>
</div>
@@ -28,4 +34,35 @@ const NodeHandle = ({
);
};
export default NodeHandle;
const OutputNodeHandle = ({
field_name,
nodeId,
hexColor,
}: {
field_name: string;
nodeId: string;
hexColor: string;
}) => {
const isOutputConnected = useEdgeStore((state) =>
state.isOutputConnected(nodeId, field_name),
);
return (
<Handle
type={"source"}
position={Position.Right}
id={field_name}
className={"-mr-2 ml-2"}
>
<div className="pointer-events-none">
<CircleIcon
size={16}
weight={"duotone"}
color={isOutputConnected ? hexColor : "gray"}
className={cn("text-gray-400 opacity-100")}
/>
</div>
</Handle>
);
};
export { InputNodeHandle, OutputNodeHandle };

View File

@@ -1,31 +1,4 @@
/**
* Handle ID Types for different input structures
*
* Examples:
* SIMPLE: "message"
* NESTED: "config.api_key"
* ARRAY: "items_$_0", "items_$_1"
* KEY_VALUE: "headers_#_Authorization", "params_#_limit"
*
* Note: All handle IDs are sanitized to remove spaces and special characters.
* Spaces become underscores, and special characters are removed.
* Example: "user name" becomes "user_name", "email@domain.com" becomes "emaildomaincom"
*/
export enum HandleIdType {
SIMPLE = "SIMPLE",
NESTED = "NESTED",
ARRAY = "ARRAY",
KEY_VALUE = "KEY_VALUE",
}
const fromRjsfId = (id: string): string => {
if (!id) return "";
const parts = id.split("_");
const filtered = parts.filter(
(p) => p !== "root" && p !== "properties" && p.length > 0,
);
return filtered.join("_") || "";
};
// Only a single level of nesting is handled here; extend this if deeper nesting is needed in the future.
const sanitizeForHandleId = (str: string): string => {
if (!str) return "";
@@ -38,51 +11,53 @@ const sanitizeForHandleId = (str: string): string => {
.replace(/^_|_$/g, ""); // Remove leading/trailing underscores
};
export const generateHandleId = (
const cleanTitleId = (id: string): string => {
if (!id) return "";
if (id.endsWith("_title")) {
id = id.slice(0, -6);
}
const parts = id.split("_");
const filtered = parts.filter(
(p) => p !== "root" && p !== "properties" && p.length > 0,
);
const filtered_id = filtered.join("_") || "";
return filtered_id;
};
export const generateHandleIdFromTitleId = (
fieldKey: string,
nestedValues: string[] = [],
type: HandleIdType = HandleIdType.SIMPLE,
{
isObjectProperty,
isAdditionalProperty,
isArrayItem,
}: {
isArrayItem?: boolean;
isObjectProperty?: boolean;
isAdditionalProperty?: boolean;
} = {
isArrayItem: false,
isObjectProperty: false,
isAdditionalProperty: false,
},
): string => {
if (!fieldKey) return "";
fieldKey = fromRjsfId(fieldKey);
fieldKey = sanitizeForHandleId(fieldKey);
const filteredKey = cleanTitleId(fieldKey);
if (isAdditionalProperty || isArrayItem) {
return filteredKey;
}
const cleanedKey = sanitizeForHandleId(filteredKey);
if (type === HandleIdType.SIMPLE || nestedValues.length === 0) {
return fieldKey;
if (isObjectProperty) {
// "config_api_key" -> "config.api_key"
const parts = cleanedKey.split("_");
if (parts.length >= 2) {
const baseName = parts[0];
const propertyName = parts.slice(1).join("_");
return `${baseName}.${propertyName}`;
}
}
const sanitizedNestedValues = nestedValues.map((value) =>
sanitizeForHandleId(value),
);
switch (type) {
case HandleIdType.NESTED:
return [fieldKey, ...sanitizedNestedValues].join(".");
case HandleIdType.ARRAY:
return [fieldKey, ...sanitizedNestedValues].join("_$_");
case HandleIdType.KEY_VALUE:
return [fieldKey, ...sanitizedNestedValues].join("_#_");
default:
return fieldKey;
}
};
export const parseKeyValueHandleId = (
handleId: string,
type: HandleIdType,
): string => {
if (type === HandleIdType.KEY_VALUE) {
return handleId.split("_#_")[1];
} else if (type === HandleIdType.ARRAY) {
return handleId.split("_$_")[1];
} else if (type === HandleIdType.NESTED) {
return handleId.split(".")[1];
} else if (type === HandleIdType.SIMPLE) {
return handleId.split("_")[1];
}
return "";
return cleanedKey;
};
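A few worked conversions implied by the logic above; the RJSF title IDs on the left are illustrative inputs, not values taken from the codebase:
// Plain field: the "_title" suffix and the "root"/"properties" segments are stripped.
generateHandleIdFromTitleId("root_properties_message_title"); // "message"
// Object property: the cleaned key is split into a base name and a property path.
generateHandleIdFromTitleId("root_config_api_key_title", { isObjectProperty: true }); // "config.api_key"
// Array items and additional properties return the cleaned key without further sanitization.
generateHandleIdFromTitleId("root_items_0_title", { isArrayItem: true }); // "items_0"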

View File

@@ -1,24 +1,25 @@
import React from "react";
import { Node as XYNode, NodeProps } from "@xyflow/react";
import { RJSFSchema } from "@rjsf/utils";
import { BlockUIType } from "../../../types";
import { StickyNoteBlock } from "./components/StickyNoteBlock";
import { BlockInfoCategoriesItem } from "@/app/api/__generated__/models/blockInfoCategoriesItem";
import { BlockCost } from "@/app/api/__generated__/models/blockCost";
import { AgentExecutionStatus } from "@/app/api/__generated__/models/agentExecutionStatus";
import { BlockCost } from "@/app/api/__generated__/models/blockCost";
import { BlockInfoCategoriesItem } from "@/app/api/__generated__/models/blockInfoCategoriesItem";
import { NodeExecutionResult } from "@/app/api/__generated__/models/nodeExecutionResult";
import { NodeContainer } from "./components/NodeContainer";
import { NodeHeader } from "./components/NodeHeader";
import { FormCreator } from "../FormCreator";
import { preprocessInputSchema } from "@/components/renderers/input-renderer/utils/input-schema-pre-processor";
import { OutputHandler } from "../OutputHandler";
import { NodeAdvancedToggle } from "./components/NodeAdvancedToggle";
import { NodeDataRenderer } from "./components/NodeOutput/NodeOutput";
import { NodeExecutionBadge } from "./components/NodeExecutionBadge";
import { cn } from "@/lib/utils";
import { WebhookDisclaimer } from "./components/WebhookDisclaimer";
import { AyrshareConnectButton } from "./components/AyrshareConnectButton";
import { NodeModelMetadata } from "@/app/api/__generated__/models/nodeModelMetadata";
import { preprocessInputSchema } from "@/components/renderers/InputRenderer/utils/input-schema-pre-processor";
import { cn } from "@/lib/utils";
import { RJSFSchema } from "@rjsf/utils";
import { NodeProps, Node as XYNode } from "@xyflow/react";
import React from "react";
import { BlockUIType } from "../../../types";
import { FormCreator } from "../FormCreator";
import { OutputHandler } from "../OutputHandler";
import { AyrshareConnectButton } from "./components/AyrshareConnectButton";
import { NodeAdvancedToggle } from "./components/NodeAdvancedToggle";
import { NodeContainer } from "./components/NodeContainer";
import { NodeExecutionBadge } from "./components/NodeExecutionBadge";
import { NodeHeader } from "./components/NodeHeader";
import { NodeDataRenderer } from "./components/NodeOutput/NodeOutput";
import { NodeRightClickMenu } from "./components/NodeRightClickMenu";
import { StickyNoteBlock } from "./components/StickyNoteBlock";
import { WebhookDisclaimer } from "./components/WebhookDisclaimer";
export type CustomNodeData = {
hardcodedValues: {
@@ -88,7 +89,7 @@ export const CustomNode: React.FC<NodeProps<CustomNode>> = React.memo(
// Currently all block type designs are similar, which is why the same component is used for all of them.
// If some block types need drastically different designs in the future, separate components can be created for them.
return (
const node = (
<NodeContainer selected={selected} nodeId={nodeId} hasErrors={hasErrors}>
<div className="rounded-xlarge bg-white">
<NodeHeader data={data} nodeId={nodeId} />
@@ -99,7 +100,7 @@ export const CustomNode: React.FC<NodeProps<CustomNode>> = React.memo(
nodeId={nodeId}
uiType={data.uiType}
className={cn(
"bg-white pr-6",
"bg-white px-4",
isWebhook && "pointer-events-none opacity-50",
)}
showHandles={showHandles}
@@ -117,6 +118,15 @@ export const CustomNode: React.FC<NodeProps<CustomNode>> = React.memo(
<NodeExecutionBadge nodeId={nodeId} />
</NodeContainer>
);
return (
<NodeRightClickMenu
nodeId={nodeId}
subGraphID={data.hardcodedValues?.graph_id}
>
{node}
</NodeRightClickMenu>
);
},
);

View File

@@ -8,7 +8,7 @@ export const NodeAdvancedToggle = ({ nodeId }: { nodeId: string }) => {
);
const setShowAdvanced = useNodeStore((state) => state.setShowAdvanced);
return (
<div className="flex items-center justify-between gap-2 rounded-b-xlarge border-t border-slate-200/50 bg-white px-5 py-3.5">
<div className="flex items-center justify-between gap-2 rounded-b-xlarge border-t border-zinc-200 bg-white px-5 py-3.5">
<Text variant="body" className="font-medium text-slate-700">
Advanced
</Text>

View File

@@ -22,7 +22,7 @@ export const NodeContainer = ({
return (
<div
className={cn(
"z-12 max-w-[370px] rounded-xlarge ring-1 ring-slate-200/60",
"z-12 w-[350px] rounded-xlarge ring-1 ring-slate-200/60",
selected && "shadow-lg ring-2 ring-slate-200",
status && nodeStyleBasedOnStatus[status],
hasErrors ? nodeStyleBasedOnStatus[AgentExecutionStatus.FAILED] : "",

View File

@@ -1,26 +1,31 @@
import { Separator } from "@/components/__legacy__/ui/separator";
import { useCopyPasteStore } from "@/app/(platform)/build/stores/copyPasteStore";
import { useNodeStore } from "@/app/(platform)/build/stores/nodeStore";
import {
DropdownMenu,
DropdownMenuContent,
DropdownMenuItem,
DropdownMenuTrigger,
} from "@/components/molecules/DropdownMenu/DropdownMenu";
import { DotsThreeOutlineVerticalIcon } from "@phosphor-icons/react";
import { Copy, Trash2, ExternalLink } from "lucide-react";
import { useNodeStore } from "@/app/(platform)/build/stores/nodeStore";
import { useCopyPasteStore } from "@/app/(platform)/build/stores/copyPasteStore";
import {
SecondaryDropdownMenuContent,
SecondaryDropdownMenuItem,
SecondaryDropdownMenuSeparator,
} from "@/components/molecules/SecondaryMenu/SecondaryMenu";
import {
ArrowSquareOutIcon,
CopyIcon,
DotsThreeOutlineVerticalIcon,
TrashIcon,
} from "@phosphor-icons/react";
import { useReactFlow } from "@xyflow/react";
export const NodeContextMenu = ({
nodeId,
subGraphID,
}: {
type Props = {
nodeId: string;
subGraphID?: string;
}) => {
};
export const NodeContextMenu = ({ nodeId, subGraphID }: Props) => {
const { deleteElements } = useReactFlow();
const handleCopy = () => {
function handleCopy() {
useNodeStore.setState((state) => ({
nodes: state.nodes.map((node) => ({
...node,
@@ -30,47 +35,47 @@ export const NodeContextMenu = ({
useCopyPasteStore.getState().copySelectedNodes();
useCopyPasteStore.getState().pasteNodes();
};
}
const handleDelete = () => {
function handleDelete() {
deleteElements({ nodes: [{ id: nodeId }] });
};
}
return (
<DropdownMenu>
<DropdownMenuTrigger className="py-2">
<DotsThreeOutlineVerticalIcon size={16} weight="fill" />
</DropdownMenuTrigger>
<DropdownMenuContent
side="right"
align="start"
className="rounded-xlarge"
>
<DropdownMenuItem onClick={handleCopy} className="hover:rounded-xlarge">
<Copy className="mr-2 h-4 w-4" />
Copy Node
</DropdownMenuItem>
<SecondaryDropdownMenuContent side="right" align="start">
<SecondaryDropdownMenuItem onClick={handleCopy}>
<CopyIcon size={20} className="mr-2 dark:text-gray-100" />
<span className="dark:text-gray-100">Copy</span>
</SecondaryDropdownMenuItem>
<SecondaryDropdownMenuSeparator />
{subGraphID && (
<DropdownMenuItem
onClick={() => window.open(`/build?flowID=${subGraphID}`)}
className="hover:rounded-xlarge"
>
<ExternalLink className="mr-2 h-4 w-4" />
Open Agent
</DropdownMenuItem>
<>
<SecondaryDropdownMenuItem
onClick={() => window.open(`/build?flowID=${subGraphID}`)}
>
<ArrowSquareOutIcon
size={20}
className="mr-2 dark:text-gray-100"
/>
<span className="dark:text-gray-100">Open agent</span>
</SecondaryDropdownMenuItem>
<SecondaryDropdownMenuSeparator />
</>
)}
<Separator className="my-2" />
<DropdownMenuItem
onClick={handleDelete}
className="text-red-600 hover:rounded-xlarge"
>
<Trash2 className="mr-2 h-4 w-4" />
Delete
</DropdownMenuItem>
</DropdownMenuContent>
<SecondaryDropdownMenuItem variant="destructive" onClick={handleDelete}>
<TrashIcon
size={20}
className="mr-2 text-red-500 dark:text-red-400"
/>
<span className="dark:text-red-400">Delete</span>
</SecondaryDropdownMenuItem>
</SecondaryDropdownMenuContent>
</DropdownMenu>
);
};

View File

@@ -1,29 +1,30 @@
import { Text } from "@/components/atoms/Text/Text";
import { beautifyString, cn } from "@/lib/utils";
import { NodeCost } from "./NodeCost";
import { NodeBadges } from "./NodeBadges";
import { NodeContextMenu } from "./NodeContextMenu";
import { CustomNodeData } from "../CustomNode";
import { useNodeStore } from "@/app/(platform)/build/stores/nodeStore";
import { useState } from "react";
import { Text } from "@/components/atoms/Text/Text";
import {
Tooltip,
TooltipContent,
TooltipProvider,
TooltipTrigger,
} from "@/components/atoms/Tooltip/BaseTooltip";
import { beautifyString, cn } from "@/lib/utils";
import { useState } from "react";
import { CustomNodeData } from "../CustomNode";
import { NodeBadges } from "./NodeBadges";
import { NodeContextMenu } from "./NodeContextMenu";
import { NodeCost } from "./NodeCost";
export const NodeHeader = ({
data,
nodeId,
}: {
type Props = {
data: CustomNodeData;
nodeId: string;
}) => {
};
export const NodeHeader = ({ data, nodeId }: Props) => {
const updateNodeData = useNodeStore((state) => state.updateNodeData);
const title = (data.metadata?.customized_name as string) || data.title;
const [isEditingTitle, setIsEditingTitle] = useState(false);
const [editedTitle, setEditedTitle] = useState(title);
const [editedTitle, setEditedTitle] = useState(
beautifyString(title).replace("Block", "").trim(),
);
const handleTitleEdit = () => {
updateNodeData(nodeId, {
@@ -41,7 +42,7 @@ export const NodeHeader = ({
};
return (
<div className="flex h-auto flex-col gap-1 rounded-xlarge border-b border-slate-200/50 bg-gradient-to-r from-slate-50/80 to-white/90 px-4 py-4 pt-3">
<div className="flex h-auto flex-col gap-1 rounded-xlarge border-b border-zinc-200 bg-gradient-to-r from-slate-50/80 to-white/90 px-4 py-4 pt-3">
{/* Title row with context menu */}
<div className="flex items-start justify-between gap-2">
<div className="flex min-w-0 flex-1 items-center gap-2">
@@ -68,12 +69,12 @@ export const NodeHeader = ({
<TooltipTrigger asChild>
<div>
<Text variant="large-semibold" className="line-clamp-1">
{beautifyString(title)}
{beautifyString(title).replace("Block", "").trim()}
</Text>
</div>
</TooltipTrigger>
<TooltipContent>
<p>{beautifyString(title)}</p>
<p>{beautifyString(title).replace("Block", "").trim()}</p>
</TooltipContent>
</Tooltip>
</TooltipProvider>

View File

@@ -23,7 +23,7 @@ export const NodeDataRenderer = ({ nodeId }: { nodeId: string }) => {
}
return (
<div className="flex flex-col gap-3 rounded-b-xl border-t border-slate-200/50 px-4 py-4">
<div className="flex flex-col gap-3 rounded-b-xl border-t border-zinc-200 px-4 py-4">
<div className="flex items-center justify-between">
<Text variant="body-medium" className="!font-semibold text-slate-700">
Node Output

View File

@@ -151,7 +151,7 @@ export const NodeDataViewer: FC<NodeDataViewerProps> = ({
</div>
<div className="flex justify-end pt-4">
{outputItems.length > 0 && (
{outputItems.length > 1 && (
<OutputActions
items={outputItems.map((item) => ({
value: item.value,

View File

@@ -0,0 +1,104 @@
import { useCopyPasteStore } from "@/app/(platform)/build/stores/copyPasteStore";
import { useNodeStore } from "@/app/(platform)/build/stores/nodeStore";
import {
SecondaryMenuContent,
SecondaryMenuItem,
SecondaryMenuSeparator,
} from "@/components/molecules/SecondaryMenu/SecondaryMenu";
import { ArrowSquareOutIcon, CopyIcon, TrashIcon } from "@phosphor-icons/react";
import * as ContextMenu from "@radix-ui/react-context-menu";
import { useReactFlow } from "@xyflow/react";
import { useEffect, useRef } from "react";
import { CustomNode } from "../CustomNode";
type Props = {
nodeId: string;
subGraphID?: string;
children: React.ReactNode;
};
const DOUBLE_CLICK_TIMEOUT = 300;
export function NodeRightClickMenu({ nodeId, subGraphID, children }: Props) {
const { deleteElements } = useReactFlow<CustomNode>();
const lastRightClickTime = useRef<number>(0);
const containerRef = useRef<HTMLDivElement>(null);
function copyNode() {
useNodeStore.setState((state) => ({
nodes: state.nodes.map((node) => ({
...node,
selected: node.id === nodeId,
})),
}));
useCopyPasteStore.getState().copySelectedNodes();
useCopyPasteStore.getState().pasteNodes();
}
function deleteNode() {
deleteElements({ nodes: [{ id: nodeId }] });
}
useEffect(() => {
const container = containerRef.current;
if (!container) return;
function handleContextMenu(e: MouseEvent) {
const now = Date.now();
const timeSinceLastClick = now - lastRightClickTime.current;
if (timeSinceLastClick < DOUBLE_CLICK_TIMEOUT) {
e.stopImmediatePropagation();
lastRightClickTime.current = 0;
return;
}
lastRightClickTime.current = now;
}
container.addEventListener("contextmenu", handleContextMenu, true);
return () => {
container.removeEventListener("contextmenu", handleContextMenu, true);
};
}, []);
return (
<ContextMenu.Root>
<ContextMenu.Trigger asChild>
<div ref={containerRef}>{children}</div>
</ContextMenu.Trigger>
<SecondaryMenuContent>
<SecondaryMenuItem onSelect={copyNode}>
<CopyIcon size={20} className="mr-2 dark:text-gray-100" />
<span className="dark:text-gray-100">Copy</span>
</SecondaryMenuItem>
<SecondaryMenuSeparator />
{subGraphID && (
<>
<SecondaryMenuItem
onClick={() => window.open(`/build?flowID=${subGraphID}`)}
>
<ArrowSquareOutIcon
size={20}
className="mr-2 dark:text-gray-100"
/>
<span className="dark:text-gray-100">Open agent</span>
</SecondaryMenuItem>
<SecondaryMenuSeparator />
</>
)}
<SecondaryMenuItem variant="destructive" onSelect={deleteNode}>
<TrashIcon
size={20}
className="mr-2 text-red-500 dark:text-red-400"
/>
<span className="dark:text-red-400">Delete</span>
</SecondaryMenuItem>
</SecondaryMenuContent>
</ContextMenu.Root>
);
}

View File

@@ -1,6 +1,6 @@
import { useMemo } from "react";
import { FormCreator } from "../../FormCreator";
import { preprocessInputSchema } from "@/components/renderers/input-renderer/utils/input-schema-pre-processor";
import { preprocessInputSchema } from "@/components/renderers/InputRenderer/utils/input-schema-pre-processor";
import { CustomNodeData } from "../CustomNode";
import { Text } from "@/components/atoms/Text/Text";
import { cn } from "@/lib/utils";

View File

@@ -3,7 +3,7 @@ import React from "react";
import { uiSchema } from "./uiSchema";
import { useNodeStore } from "../../../stores/nodeStore";
import { BlockUIType } from "../../types";
import { FormRenderer } from "@/components/renderers/input-renderer/FormRenderer";
import { FormRenderer } from "@/components/renderers/InputRenderer/FormRenderer";
export const FormCreator = React.memo(
({

View File

@@ -4,7 +4,7 @@ import { CaretDownIcon, InfoIcon } from "@phosphor-icons/react";
import { RJSFSchema } from "@rjsf/utils";
import { useState } from "react";
import NodeHandle from "../handlers/NodeHandle";
import { OutputNodeHandle } from "../handlers/NodeHandle";
import {
Tooltip,
TooltipContent,
@@ -13,7 +13,6 @@ import {
} from "@/components/atoms/Tooltip/BaseTooltip";
import { useEdgeStore } from "@/app/(platform)/build/stores/edgeStore";
import { getTypeDisplayInfo } from "./helpers";
import { generateHandleId } from "../handlers/helpers";
import { BlockUIType } from "../../types";
export const OutputHandler = ({
@@ -29,8 +28,73 @@ export const OutputHandler = ({
const properties = outputSchema?.properties || {};
const [isOutputVisible, setIsOutputVisible] = useState(true);
const showHandles = uiType !== BlockUIType.OUTPUT;
const renderOutputHandles = (
schema: RJSFSchema,
keyPrefix: string = "",
titlePrefix: string = "",
): React.ReactNode[] => {
return Object.entries(schema).map(
([key, fieldSchema]: [string, RJSFSchema]) => {
const fullKey = keyPrefix ? `${keyPrefix}_#_${key}` : key;
const fieldTitle = titlePrefix + (fieldSchema?.title || key);
const isConnected = isOutputConnected(nodeId, fullKey);
const shouldShow = isConnected || isOutputVisible;
const { displayType, colorClass, hexColor } =
getTypeDisplayInfo(fieldSchema);
return shouldShow ? (
<div key={fullKey} className="flex flex-col items-end gap-2">
<div className="relative flex items-center gap-2">
{fieldSchema?.description && (
<TooltipProvider>
<Tooltip>
<TooltipTrigger asChild>
<span
style={{ marginLeft: 6, cursor: "pointer" }}
aria-label="info"
tabIndex={0}
>
<InfoIcon />
</span>
</TooltipTrigger>
<TooltipContent>{fieldSchema?.description}</TooltipContent>
</Tooltip>
</TooltipProvider>
)}
<Text variant="body" className="text-slate-700">
{fieldTitle}
</Text>
<Text variant="small" as="span" className={colorClass}>
({displayType})
</Text>
{showHandles && (
<OutputNodeHandle
field_name={fullKey}
nodeId={nodeId}
hexColor={hexColor}
/>
)}
</div>
{/* Recursively render nested properties */}
{fieldSchema?.properties &&
renderOutputHandles(
fieldSchema.properties,
fullKey,
`${fieldTitle}.`,
)}
</div>
) : null;
},
);
};
return (
<div className="flex flex-col items-end justify-between gap-2 rounded-b-xlarge border-t border-slate-200/50 bg-white py-3.5">
<div className="flex flex-col items-end justify-between gap-2 rounded-b-xlarge border-t border-zinc-200 bg-white py-3.5">
<Button
variant="ghost"
className="mr-4 h-fit min-w-0 p-0 hover:border-transparent hover:bg-transparent"
@@ -49,50 +113,9 @@ export const OutputHandler = ({
</Text>
</Button>
{
<div className="flex flex-col items-end gap-2">
{Object.entries(properties).map(([key, property]: [string, any]) => {
const isConnected = isOutputConnected(nodeId, key);
const shouldShow = isConnected || isOutputVisible;
const { displayType, colorClass } = getTypeDisplayInfo(property);
return shouldShow ? (
<div key={key} className="relative flex items-center gap-2">
{property?.description && (
<TooltipProvider>
<Tooltip>
<TooltipTrigger asChild>
<span
style={{ marginLeft: 6, cursor: "pointer" }}
aria-label="info"
tabIndex={0}
>
<InfoIcon />
</span>
</TooltipTrigger>
<TooltipContent>{property?.description}</TooltipContent>
</Tooltip>
</TooltipProvider>
)}
<Text variant="body" className="text-slate-700">
{property?.title || key}{" "}
</Text>
<Text variant="small" as="span" className={colorClass}>
({displayType})
</Text>
<NodeHandle
handleId={
uiType === BlockUIType.AGENT ? key : generateHandleId(key)
}
isConnected={isConnected}
side="right"
/>
</div>
) : null;
})}
</div>
}
<div className="flex flex-col items-end gap-2">
{renderOutputHandles(properties)}
</div>
</div>
);
};

View File

@@ -92,14 +92,38 @@ export const getTypeDisplayInfo = (schema: any) => {
if (schema?.type === "string" && schema?.format) {
const formatMap: Record<
string,
{ displayType: string; colorClass: string }
{ displayType: string; colorClass: string; hexColor: string }
> = {
file: { displayType: "file", colorClass: "!text-green-500" },
date: { displayType: "date", colorClass: "!text-blue-500" },
time: { displayType: "time", colorClass: "!text-blue-500" },
"date-time": { displayType: "datetime", colorClass: "!text-blue-500" },
"long-text": { displayType: "text", colorClass: "!text-green-500" },
"short-text": { displayType: "text", colorClass: "!text-green-500" },
file: {
displayType: "file",
colorClass: "!text-green-500",
hexColor: "#22c55e",
},
date: {
displayType: "date",
colorClass: "!text-blue-500",
hexColor: "#3b82f6",
},
time: {
displayType: "time",
colorClass: "!text-blue-500",
hexColor: "#3b82f6",
},
"date-time": {
displayType: "datetime",
colorClass: "!text-blue-500",
hexColor: "#3b82f6",
},
"long-text": {
displayType: "text",
colorClass: "!text-green-500",
hexColor: "#22c55e",
},
"short-text": {
displayType: "text",
colorClass: "!text-green-500",
hexColor: "#22c55e",
},
};
const formatInfo = formatMap[schema.format];
@@ -131,10 +155,23 @@ export const getTypeDisplayInfo = (schema: any) => {
any: "!text-gray-500",
};
const hexColorMap: Record<string, string> = {
string: "#22c55e",
number: "#3b82f6",
integer: "#3b82f6",
boolean: "#eab308",
object: "#a855f7",
array: "#6366f1",
null: "#6b7280",
any: "#6b7280",
};
const colorClass = colorMap[schema?.type] || "!text-gray-500";
const hexColor = hexColorMap[schema?.type] || "#6b7280";
return {
displayType,
colorClass,
hexColor,
};
};

View File

@@ -24,7 +24,7 @@ export const ControlPanelButton: React.FC<Props> = ({
role={as === "div" ? "button" : undefined}
disabled={as === "button" ? disabled : undefined}
className={cn(
"flex h-[4.25rem] w-[4.25rem] items-center justify-center whitespace-normal bg-white p-[1.38rem] text-zinc-800 shadow-none hover:cursor-pointer hover:bg-zinc-100 hover:text-zinc-950 focus:ring-0",
"flex w-auto items-center justify-center whitespace-normal bg-white px-4 py-4 text-zinc-800 shadow-none hover:cursor-pointer hover:bg-zinc-100 hover:text-zinc-950 focus:ring-0",
selected &&
"bg-violet-50 text-violet-700 hover:cursor-default hover:bg-violet-50 hover:text-violet-700 active:bg-violet-50 active:text-violet-700",
disabled && "cursor-not-allowed opacity-50 hover:cursor-not-allowed",

View File

@@ -1,18 +1,17 @@
import React from "react";
import { useControlPanelStore } from "@/app/(platform)/build/stores/controlPanelStore";
import {
Popover,
PopoverContent,
PopoverTrigger,
} from "@/components/__legacy__/ui/popover";
import { BlockMenuContent } from "../BlockMenuContent/BlockMenuContent";
import { ControlPanelButton } from "../../ControlPanelButton";
import { LegoIcon } from "@phosphor-icons/react";
import { useControlPanelStore } from "@/app/(platform)/build/stores/controlPanelStore";
import {
Tooltip,
TooltipContent,
TooltipTrigger,
} from "@/components/atoms/Tooltip/BaseTooltip";
import { LegoIcon } from "@phosphor-icons/react";
import { ControlPanelButton } from "../../ControlPanelButton";
import { BlockMenuContent } from "../BlockMenuContent/BlockMenuContent";
export const BlockMenu = () => {
const { blockMenuOpen, setBlockMenuOpen } = useControlPanelStore();
@@ -28,7 +27,7 @@ export const BlockMenu = () => {
selected={blockMenuOpen}
className="rounded-none"
>
<LegoIcon className="h-6 w-6" />
<LegoIcon className="size-5" />
</ControlPanelButton>
</PopoverTrigger>
</TooltipTrigger>

View File

@@ -0,0 +1,57 @@
import { useBlockMenuStore } from "@/app/(platform)/build/stores/blockMenuStore";
import { FilterChip } from "../FilterChip";
import { categories } from "./constants";
import { FilterSheet } from "../FilterSheet/FilterSheet";
import { GetV2BuilderSearchFilterAnyOfItem } from "@/app/api/__generated__/models/getV2BuilderSearchFilterAnyOfItem";
export const BlockMenuFilters = () => {
const {
filters,
addFilter,
removeFilter,
categoryCounts,
creators,
addCreator,
removeCreator,
} = useBlockMenuStore();
const handleFilterClick = (filter: GetV2BuilderSearchFilterAnyOfItem) => {
if (filters.includes(filter)) {
removeFilter(filter);
} else {
addFilter(filter);
}
};
const handleCreatorClick = (creator: string) => {
if (creators.includes(creator)) {
removeCreator(creator);
} else {
addCreator(creator);
}
};
return (
<div className="flex flex-wrap gap-2">
<FilterSheet categories={categories} />
{creators.length > 0 &&
creators.map((creator) => (
<FilterChip
key={creator}
name={"Created by " + creator.slice(0, 10) + "..."}
selected={creators.includes(creator)}
onClick={() => handleCreatorClick(creator)}
/>
))}
{categories.map((category) => (
<FilterChip
key={category.key}
name={category.name}
selected={filters.includes(category.key)}
onClick={() => handleFilterClick(category.key)}
number={categoryCounts[category.key] ?? 0}
/>
))}
</div>
);
};

View File

@@ -0,0 +1,15 @@
import { GetV2BuilderSearchFilterAnyOfItem } from "@/app/api/__generated__/models/getV2BuilderSearchFilterAnyOfItem";
import { CategoryKey } from "./types";
export const categories: Array<{ key: CategoryKey; name: string }> = [
{ key: GetV2BuilderSearchFilterAnyOfItem.blocks, name: "Blocks" },
{
key: GetV2BuilderSearchFilterAnyOfItem.integrations,
name: "Integrations",
},
{
key: GetV2BuilderSearchFilterAnyOfItem.marketplace_agents,
name: "Marketplace agents",
},
{ key: GetV2BuilderSearchFilterAnyOfItem.my_agents, name: "My agents" },
];

View File

@@ -0,0 +1,26 @@
import { GetV2BuilderSearchFilterAnyOfItem } from "@/app/api/__generated__/models/getV2BuilderSearchFilterAnyOfItem";
export type DefaultStateType =
| "suggestion"
| "all_blocks"
| "input_blocks"
| "action_blocks"
| "output_blocks"
| "integrations"
| "marketplace_agents"
| "my_agents";
export type CategoryKey = GetV2BuilderSearchFilterAnyOfItem;
export interface Filters {
categories: {
blocks: boolean;
integrations: boolean;
marketplace_agents: boolean;
my_agents: boolean;
providers: boolean;
};
createdBy: string[];
}
export type CategoryCounts = Record<CategoryKey, number>;

View File

@@ -1,111 +1,14 @@
import { Text } from "@/components/atoms/Text/Text";
import { useBlockMenuSearch } from "./useBlockMenuSearch";
import { InfiniteScroll } from "@/components/contextual/InfiniteScroll/InfiniteScroll";
import { LoadingSpinner } from "@/components/__legacy__/ui/loading";
import { SearchResponseItemsItem } from "@/app/api/__generated__/models/searchResponseItemsItem";
import { MarketplaceAgentBlock } from "../MarketplaceAgentBlock";
import { Block } from "../Block";
import { UGCAgentBlock } from "../UGCAgentBlock";
import { getSearchItemType } from "./helper";
import { useBlockMenuStore } from "../../../../stores/blockMenuStore";
import { blockMenuContainerStyle } from "../style";
import { cn } from "@/lib/utils";
import { NoSearchResult } from "../NoSearchResult";
import { BlockMenuFilters } from "../BlockMenuFilters/BlockMenuFilters";
import { BlockMenuSearchContent } from "../BlockMenuSearchContent/BlockMenuSearchContent";
export const BlockMenuSearch = () => {
const {
searchResults,
isFetchingNextPage,
fetchNextPage,
hasNextPage,
searchLoading,
handleAddLibraryAgent,
handleAddMarketplaceAgent,
addingLibraryAgentId,
addingMarketplaceAgentSlug,
} = useBlockMenuSearch();
const { searchQuery } = useBlockMenuStore();
if (searchLoading) {
return (
<div
className={cn(
blockMenuContainerStyle,
"flex items-center justify-center",
)}
>
<LoadingSpinner className="size-13" />
</div>
);
}
if (searchResults.length === 0) {
return <NoSearchResult />;
}
return (
<div className={blockMenuContainerStyle}>
<BlockMenuFilters />
<Text variant="body-medium">Search results</Text>
<InfiniteScroll
isFetchingNextPage={isFetchingNextPage}
fetchNextPage={fetchNextPage}
hasNextPage={hasNextPage}
loader={<LoadingSpinner className="size-13" />}
className="space-y-2.5"
>
{searchResults.map((item: SearchResponseItemsItem, index: number) => {
const { type, data } = getSearchItemType(item);
// The backend currently supports only these 3 types; integration and AI agent types will be supported in follow-up PRs.
switch (type) {
case "store_agent":
return (
<MarketplaceAgentBlock
key={index}
slug={data.slug}
highlightedText={searchQuery}
title={data.agent_name}
image_url={data.agent_image}
creator_name={data.creator}
number_of_runs={data.runs}
loading={addingMarketplaceAgentSlug === data.slug}
onClick={() =>
handleAddMarketplaceAgent({
creator_name: data.creator,
slug: data.slug,
})
}
/>
);
case "block":
return (
<Block
key={index}
title={data.name}
highlightedText={searchQuery}
description={data.description}
blockData={data}
/>
);
case "library_agent":
return (
<UGCAgentBlock
key={index}
title={data.name}
highlightedText={searchQuery}
image_url={data.image_url}
version={data.graph_version}
edited_time={data.updated_at}
isLoading={addingLibraryAgentId === data.id}
onClick={() => handleAddLibraryAgent(data)}
/>
);
default:
return null;
}
})}
</InfiniteScroll>
<BlockMenuSearchContent />
</div>
);
};

View File

@@ -0,0 +1,108 @@
import { SearchResponseItemsItem } from "@/app/api/__generated__/models/searchResponseItemsItem";
import { LoadingSpinner } from "@/components/atoms/LoadingSpinner/LoadingSpinner";
import { InfiniteScroll } from "@/components/contextual/InfiniteScroll/InfiniteScroll";
import { getSearchItemType } from "./helper";
import { MarketplaceAgentBlock } from "../MarketplaceAgentBlock";
import { Block } from "../Block";
import { UGCAgentBlock } from "../UGCAgentBlock";
import { useBlockMenuSearchContent } from "./useBlockMenuSearchContent";
import { useBlockMenuStore } from "@/app/(platform)/build/stores/blockMenuStore";
import { cn } from "@/lib/utils";
import { blockMenuContainerStyle } from "../style";
import { NoSearchResult } from "../NoSearchResult";
export const BlockMenuSearchContent = () => {
const {
searchResults,
isFetchingNextPage,
fetchNextPage,
hasNextPage,
searchLoading,
handleAddLibraryAgent,
handleAddMarketplaceAgent,
addingLibraryAgentId,
addingMarketplaceAgentSlug,
} = useBlockMenuSearchContent();
const { searchQuery } = useBlockMenuStore();
if (searchLoading) {
return (
<div
className={cn(
blockMenuContainerStyle,
"flex items-center justify-center",
)}
>
<LoadingSpinner className="size-13" />
</div>
);
}
if (searchResults.length === 0) {
return <NoSearchResult />;
}
return (
<InfiniteScroll
isFetchingNextPage={isFetchingNextPage}
fetchNextPage={fetchNextPage}
hasNextPage={hasNextPage}
loader={<LoadingSpinner className="size-13" />}
className="space-y-2.5"
>
{searchResults.map((item: SearchResponseItemsItem, index: number) => {
const { type, data } = getSearchItemType(item);
// The backend currently supports only these 3 types; integration and AI agent types will be supported in follow-up PRs.
switch (type) {
case "store_agent":
return (
<MarketplaceAgentBlock
key={index}
slug={data.slug}
highlightedText={searchQuery}
title={data.agent_name}
image_url={data.agent_image}
creator_name={data.creator}
number_of_runs={data.runs}
loading={addingMarketplaceAgentSlug === data.slug}
onClick={() =>
handleAddMarketplaceAgent({
creator_name: data.creator,
slug: data.slug,
})
}
/>
);
case "block":
return (
<Block
key={index}
title={data.name}
highlightedText={searchQuery}
description={data.description}
blockData={data}
/>
);
case "library_agent":
return (
<UGCAgentBlock
key={index}
title={data.name}
highlightedText={searchQuery}
image_url={data.image_url}
version={data.graph_version}
edited_time={data.updated_at}
isLoading={addingLibraryAgentId === data.id}
onClick={() => handleAddLibraryAgent(data)}
/>
);
default:
return null;
}
})}
</InfiniteScroll>
);
};

View File

@@ -23,9 +23,19 @@ import { LibraryAgent } from "@/app/api/__generated__/models/libraryAgent";
import { getQueryClient } from "@/lib/react-query/queryClient";
import { useToast } from "@/components/molecules/Toast/use-toast";
import * as Sentry from "@sentry/nextjs";
import { GetV2BuilderSearchFilterAnyOfItem } from "@/app/api/__generated__/models/getV2BuilderSearchFilterAnyOfItem";
export const useBlockMenuSearchContent = () => {
const {
searchQuery,
searchId,
setSearchId,
filters,
setCreatorsList,
creators,
setCategoryCounts,
} = useBlockMenuStore();
export const useBlockMenuSearch = () => {
const { searchQuery, searchId, setSearchId } = useBlockMenuStore();
const { toast } = useToast();
const { addAgentToBuilder, addLibraryAgentToBuilder } =
useAddAgentToBuilder();
@@ -57,6 +67,8 @@ export const useBlockMenuSearch = () => {
page_size: 8,
search_query: searchQuery,
search_id: searchId,
filter: filters.length > 0 ? filters : undefined,
by_creator: creators.length > 0 ? creators : undefined,
},
{
query: { getNextPageParam: getPaginationNextPageNumber },
@@ -98,6 +110,26 @@ export const useBlockMenuSearch = () => {
}
}, [searchQueryData, searchId, setSearchId]);
// From the latest results, collect the unique creators and category counts.
useEffect(() => {
if (!searchQueryData?.pages?.length) {
return;
}
const latestData = okData(searchQueryData.pages.at(-1));
setCategoryCounts(
(latestData?.total_items as Record<
GetV2BuilderSearchFilterAnyOfItem,
number
>) || {
blocks: 0,
integrations: 0,
marketplace_agents: 0,
my_agents: 0,
},
);
setCreatorsList(latestData?.items || []);
}, [searchQueryData]);
useEffect(() => {
if (searchId && !searchQuery) {
resetSearchSession();

View File

@@ -1,7 +1,9 @@
import { Button } from "@/components/__legacy__/ui/button";
import { cn } from "@/lib/utils";
import { X } from "lucide-react";
import React, { ButtonHTMLAttributes } from "react";
import { XIcon } from "@phosphor-icons/react";
import { AnimatePresence, motion } from "framer-motion";
import React, { ButtonHTMLAttributes, useState } from "react";
interface Props extends ButtonHTMLAttributes<HTMLButtonElement> {
selected?: boolean;
@@ -16,39 +18,51 @@ export const FilterChip: React.FC<Props> = ({
className,
...rest
}) => {
const [isHovered, setIsHovered] = useState(false);
return (
<Button
className={cn(
"group w-fit space-x-1 rounded-[1.5rem] border border-zinc-300 bg-transparent px-[0.625rem] py-[0.375rem] shadow-none transition-transform duration-300 ease-in-out",
"hover:border-violet-500 hover:bg-transparent focus:ring-0 disabled:cursor-not-allowed",
selected && "border-0 bg-violet-700 hover:border",
className,
)}
{...rest}
>
<span
<AnimatePresence mode="wait">
<Button
onMouseEnter={() => setIsHovered(true)}
onMouseLeave={() => setIsHovered(false)}
className={cn(
"font-sans text-sm font-medium leading-[1.375rem] text-zinc-600 group-hover:text-zinc-600 group-disabled:text-zinc-400",
selected && "text-zinc-50",
"group w-fit space-x-1 rounded-[1.5rem] border border-zinc-300 bg-transparent px-[0.625rem] py-[0.375rem] shadow-none",
"hover:border-violet-500 hover:bg-transparent focus:ring-0 disabled:cursor-not-allowed",
selected && "border-0 bg-violet-700 hover:border",
className,
)}
{...rest}
>
{name}
</span>
{selected && (
<>
<span className="flex h-4 w-4 items-center justify-center rounded-full bg-zinc-50 transition-all duration-300 ease-in-out group-hover:hidden">
<X
className="h-3 w-3 rounded-full text-violet-700"
strokeWidth={2}
/>
</span>
{number !== undefined && (
<span className="hidden h-[1.375rem] items-center rounded-[1.25rem] bg-violet-700 p-[0.375rem] text-zinc-50 transition-all duration-300 ease-in-out animate-in fade-in zoom-in group-hover:flex">
{number > 100 ? "100+" : number}
</span>
<span
className={cn(
"font-sans text-sm font-medium leading-[1.375rem] text-zinc-600 group-hover:text-zinc-600 group-disabled:text-zinc-400",
selected && "text-zinc-50",
)}
</>
)}
</Button>
>
{name}
</span>
{selected && !isHovered && (
<motion.span
initial={{ opacity: 0.5, scale: 0.5, filter: "blur(20px)" }}
animate={{ opacity: 1, scale: 1, filter: "blur(0px)" }}
exit={{ opacity: 0.5, scale: 0.5, filter: "blur(20px)" }}
transition={{ duration: 0.3, type: "spring", bounce: 0.2 }}
className="flex h-4 w-4 items-center justify-center rounded-full bg-zinc-50"
>
<XIcon size={12} weight="bold" className="text-violet-700" />
</motion.span>
)}
{number !== undefined && isHovered && (
<motion.span
initial={{ opacity: 0.5, scale: 0.5, filter: "blur(10px)" }}
animate={{ opacity: 1, scale: 1, filter: "blur(0px)" }}
exit={{ opacity: 0.5, scale: 0.5, filter: "blur(10px)" }}
transition={{ duration: 0.3, type: "spring", bounce: 0.2 }}
className="flex h-[1.375rem] items-center rounded-[1.25rem] bg-violet-700 p-[0.375rem] text-zinc-50"
>
{number > 100 ? "100+" : number}
</motion.span>
)}
</Button>
</AnimatePresence>
);
};

View File

@@ -0,0 +1,156 @@
import { FilterChip } from "../FilterChip";
import { cn } from "@/lib/utils";
import { CategoryKey } from "../BlockMenuFilters/types";
import { AnimatePresence, motion } from "framer-motion";
import { XIcon } from "@phosphor-icons/react";
import { Button } from "@/components/atoms/Button/Button";
import { Text } from "@/components/atoms/Text/Text";
import { Separator } from "@/components/__legacy__/ui/separator";
import { Checkbox } from "@/components/__legacy__/ui/checkbox";
import { useFilterSheet } from "./useFilterSheet";
import { INITIAL_CREATORS_TO_SHOW } from "./constant";
export function FilterSheet({
categories,
}: {
categories: Array<{ key: CategoryKey; name: string }>;
}) {
const {
isOpen,
localCategories,
localCreators,
displayedCreatorsCount,
handleLocalCategoryChange,
handleToggleShowMoreCreators,
handleLocalCreatorChange,
handleClearFilters,
handleCloseButton,
handleApplyFilters,
hasLocalActiveFilters,
visibleCreators,
creators,
handleOpenFilters,
hasActiveFilters,
} = useFilterSheet();
return (
<div className="m-0 inline w-fit p-0">
<FilterChip
name={hasActiveFilters() ? "Edit filters" : "All filters"}
onClick={handleOpenFilters}
/>
<AnimatePresence>
{isOpen && (
<motion.div
className={cn(
"absolute bottom-2 left-2 top-2 z-20 w-3/4 max-w-[22.5rem] space-y-4 overflow-hidden rounded-[0.75rem] bg-white pb-4 shadow-[0_4px_12px_2px_rgba(0,0,0,0.1)]",
)}
initial={{ x: "-100%", filter: "blur(10px)" }}
animate={{ x: 0, filter: "blur(0px)" }}
exit={{ x: "-110%", filter: "blur(10px)" }}
transition={{ duration: 0.4, type: "spring", bounce: 0.2 }}
>
{/* Top section */}
<div className="flex items-center justify-between px-5 pt-4">
<Text variant="body">Filters</Text>
<Button
className="p-0"
variant="ghost"
size="icon"
onClick={handleCloseButton}
>
<XIcon size={20} />
</Button>
</div>
<Separator className="h-[1px] w-full text-zinc-300" />
{/* Category section */}
<div className="space-y-4 px-5">
<Text variant="large">Categories</Text>
<div className="space-y-2">
{categories.map((category) => (
<div
key={category.key}
className="flex items-center space-x-2"
>
<Checkbox
id={category.key}
checked={localCategories.includes(category.key)}
onCheckedChange={() =>
handleLocalCategoryChange(category.key)
}
className="border border-[#D4D4D4] shadow-none data-[state=checked]:border-none data-[state=checked]:bg-violet-700 data-[state=checked]:text-white"
/>
<label
htmlFor={category.key}
className="font-sans text-sm leading-[1.375rem] text-zinc-600"
>
{category.name}
</label>
</div>
))}
</div>
</div>
{/* Created by section */}
<div className="space-y-4 px-5">
<p className="font-sans text-base font-medium text-zinc-800">
Created by
</p>
<div className="space-y-2">
{visibleCreators.map((creator, i) => (
<div key={i} className="flex items-center space-x-2">
<Checkbox
id={`creator-${creator}`}
checked={localCreators.includes(creator)}
onCheckedChange={() => handleLocalCreatorChange(creator)}
className="border border-[#D4D4D4] shadow-none data-[state=checked]:border-none data-[state=checked]:bg-violet-700 data-[state=checked]:text-white"
/>
<label
htmlFor={`creator-${creator}`}
className="font-sans text-sm leading-[1.375rem] text-zinc-600"
>
{creator}
</label>
</div>
))}
</div>
{creators.length > INITIAL_CREATORS_TO_SHOW && (
<Button
variant={"link"}
className="m-0 p-0 font-sans text-sm font-medium leading-[1.375rem] text-zinc-800 underline hover:text-zinc-600"
onClick={handleToggleShowMoreCreators}
>
{displayedCreatorsCount < creators.length ? "More" : "Less"}
</Button>
)}
</div>
{/* Footer section */}
<div className="fixed bottom-0 flex w-full justify-between gap-3 border-t border-zinc-200 bg-white px-5 py-3">
<Button
size="small"
variant={"outline"}
onClick={handleClearFilters}
className="rounded-[8px] px-2 py-1.5"
>
Clear
</Button>
<Button
size="small"
onClick={handleApplyFilters}
disabled={!hasLocalActiveFilters()}
className="rounded-[8px] px-2 py-1.5"
>
Apply filters
</Button>
</div>
</motion.div>
)}
</AnimatePresence>
</div>
);
}
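For orientation, here is a minimal usage sketch of the new FilterSheet. The component only takes the category options as a prop; everything else comes through useFilterSheet. The wrapper name and the category values below are illustrative placeholders, not code from this PR:

```tsx
// Hypothetical mounting point for FilterSheet (not part of this diff).
// The real CategoryKey union lives in ../BlockMenuFilters/types; the
// string keys below are placeholders cast to that type for the sketch.
import { FilterSheet } from "./FilterSheet";
import { CategoryKey } from "../BlockMenuFilters/types";

const categoryOptions: Array<{ key: CategoryKey; name: string }> = [
  { key: "ai" as CategoryKey, name: "AI" },
  { key: "data" as CategoryKey, name: "Data" },
];

export function BlockMenuFilterBar() {
  // FilterSheet renders its own trigger chip plus the animated sheet,
  // so the parent only needs to supply the selectable categories.
  return <FilterSheet categories={categoryOptions} />;
}
```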


@@ -0,0 +1 @@
export const INITIAL_CREATORS_TO_SHOW = 5;


@@ -0,0 +1,100 @@
import { useBlockMenuStore } from "@/app/(platform)/build/stores/blockMenuStore";
import { useState } from "react";
import { INITIAL_CREATORS_TO_SHOW } from "./constant";
import { GetV2BuilderSearchFilterAnyOfItem } from "@/app/api/__generated__/models/getV2BuilderSearchFilterAnyOfItem";
export const useFilterSheet = () => {
const { filters, creators_list, creators, setFilters, setCreators } =
useBlockMenuStore();
const [isOpen, setIsOpen] = useState(false);
const [localCategories, setLocalCategories] =
useState<GetV2BuilderSearchFilterAnyOfItem[]>(filters);
const [localCreators, setLocalCreators] = useState<string[]>(creators);
const [displayedCreatorsCount, setDisplayedCreatorsCount] = useState(
INITIAL_CREATORS_TO_SHOW,
);
const handleLocalCategoryChange = (
category: GetV2BuilderSearchFilterAnyOfItem,
) => {
setLocalCategories((prev) => {
if (prev.includes(category)) {
return prev.filter((c) => c !== category);
}
return [...prev, category];
});
};
const hasActiveFilters = () => {
return filters.length > 0 || creators.length > 0;
};
const handleToggleShowMoreCreators = () => {
if (displayedCreatorsCount < creators.length) {
setDisplayedCreatorsCount(creators.length);
} else {
setDisplayedCreatorsCount(INITIAL_CREATORS_TO_SHOW);
}
};
const handleLocalCreatorChange = (creator: string) => {
setLocalCreators((prev) => {
if (prev.includes(creator)) {
return prev.filter((c) => c !== creator);
}
return [...prev, creator];
});
};
const handleClearFilters = () => {
setLocalCategories([]);
setLocalCreators([]);
setDisplayedCreatorsCount(INITIAL_CREATORS_TO_SHOW);
};
const handleCloseButton = () => {
setIsOpen(false);
setLocalCategories(filters);
setLocalCreators(creators);
setDisplayedCreatorsCount(INITIAL_CREATORS_TO_SHOW);
};
const handleApplyFilters = () => {
setFilters(localCategories);
setCreators(localCreators);
setIsOpen(false);
};
const handleOpenFilters = () => {
setIsOpen(true);
setLocalCategories(filters);
setLocalCreators(creators);
};
const hasLocalActiveFilters = () => {
return localCategories.length > 0 || localCreators.length > 0;
};
const visibleCreators = creators_list.slice(0, displayedCreatorsCount);
return {
creators,
isOpen,
setIsOpen,
localCategories,
localCreators,
displayedCreatorsCount,
setDisplayedCreatorsCount,
handleLocalCategoryChange,
handleToggleShowMoreCreators,
handleLocalCreatorChange,
handleClearFilters,
handleCloseButton,
handleOpenFilters,
handleApplyFilters,
hasLocalActiveFilters,
visibleCreators,
hasActiveFilters,
};
};
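useFilterSheet keeps only the draft (local) selections; the applied filters live in useBlockMenuStore, whose definition is not part of this diff. As a rough sketch of the contract the hook assumes, the store needs at least the slice below. It is written as a Zustand-style store purely for illustration; the field names come from the destructuring above, everything else is assumed:

```ts
// Illustrative sketch of the store slice useFilterSheet depends on.
// The real blockMenuStore almost certainly holds more state than this.
import { create } from "zustand";
import { GetV2BuilderSearchFilterAnyOfItem } from "@/app/api/__generated__/models/getV2BuilderSearchFilterAnyOfItem";

interface BlockMenuFilterSlice {
  filters: GetV2BuilderSearchFilterAnyOfItem[]; // applied category filters
  creators: string[]; // applied creator filters
  creators_list: string[]; // all creators available to filter by
  setFilters: (filters: GetV2BuilderSearchFilterAnyOfItem[]) => void;
  setCreators: (creators: string[]) => void;
}

export const useBlockMenuStore = create<BlockMenuFilterSlice>((set) => ({
  filters: [],
  creators: [],
  creators_list: [],
  setFilters: (filters) => set({ filters }),
  setCreators: (creators) => set({ creators }),
}));
```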


@@ -7,10 +7,10 @@ import { useNewControlPanel } from "./useNewControlPanel";
import { GraphExecutionID } from "@/lib/autogpt-server-api";
// import { ControlPanelButton } from "../ControlPanelButton";
// import { GraphSearchMenu } from "../GraphMenu/GraphMenu";
import { Flag, useGetFlag } from "@/services/feature-flags/use-get-flag";
import { Separator } from "@/components/__legacy__/ui/separator";
import { NewSaveControl } from "./NewSaveControl/NewSaveControl";
import { Flag, useGetFlag } from "@/services/feature-flags/use-get-flag";
import { CustomNode } from "../FlowEditor/nodes/CustomNode/CustomNode";
import { NewSaveControl } from "./NewSaveControl/NewSaveControl";
import { UndoRedoButtons } from "./UndoRedoButtons";
export type Control = {
@@ -56,7 +56,7 @@ export const NewControlPanel = memo(
return (
<section
className={cn(
"absolute left-4 top-10 z-10 w-[4.25rem] overflow-hidden rounded-[1rem] border-none bg-white p-0 shadow-[0_1px_5px_0_rgba(0,0,0,0.1)]",
"absolute left-4 top-10 z-10 overflow-hidden rounded-[1rem] border-none bg-white p-0 shadow-[0_1px_5px_0_rgba(0,0,0,0.1)]",
)}
>
<div className="flex flex-col items-center justify-center rounded-[1rem] p-0">


@@ -1,22 +1,21 @@
import React from "react";
import { Card, CardContent, CardFooter } from "@/components/__legacy__/ui/card";
import { Form, FormField } from "@/components/__legacy__/ui/form";
import {
Popover,
PopoverContent,
PopoverTrigger,
} from "@/components/__legacy__/ui/popover";
import { Card, CardContent, CardFooter } from "@/components/__legacy__/ui/card";
import { Button } from "@/components/atoms/Button/Button";
import { Input } from "@/components/atoms/Input/Input";
import {
Tooltip,
TooltipContent,
TooltipTrigger,
} from "@/components/atoms/Tooltip/BaseTooltip";
import { useNewSaveControl } from "./useNewSaveControl";
import { Form, FormField } from "@/components/__legacy__/ui/form";
import { ControlPanelButton } from "../ControlPanelButton";
import { useControlPanelStore } from "../../../stores/controlPanelStore";
import { FloppyDiskIcon } from "@phosphor-icons/react";
import { Input } from "@/components/atoms/Input/Input";
import { Button } from "@/components/atoms/Button/Button";
import { useControlPanelStore } from "../../../stores/controlPanelStore";
import { ControlPanelButton } from "../ControlPanelButton";
import { useNewSaveControl } from "./useNewSaveControl";
export const NewSaveControl = () => {
const { form, isSaving, graphVersion, handleSave } = useNewSaveControl();
@@ -33,7 +32,7 @@ export const NewSaveControl = () => {
selected={saveControlOpen}
className="rounded-none"
>
- <FloppyDiskIcon className="h-6 w-6" />
+ <FloppyDiskIcon className="size-5" />
</ControlPanelButton>
</PopoverTrigger>
</TooltipTrigger>


@@ -1,13 +1,13 @@
import React from "react";
import { CustomNode } from "@/app/(platform)/build/components/legacy-builder/CustomNode/CustomNode";
import {
Popover,
PopoverContent,
PopoverTrigger,
} from "@/components/__legacy__/ui/popover";
import { MagnifyingGlassIcon } from "@phosphor-icons/react";
import { GraphSearchContent } from "../GraphMenuContent/GraphContent";
import React from "react";
import { ControlPanelButton } from "../../ControlPanelButton";
import { CustomNode } from "@/app/(platform)/build/components/legacy-builder/CustomNode/CustomNode";
import { GraphSearchContent } from "../GraphMenuContent/GraphContent";
import { useGraphMenu } from "./useGraphMenu";
interface GraphSearchMenuProps {
@@ -50,7 +50,7 @@ export const GraphSearchMenu: React.FC<GraphSearchMenuProps> = ({
selected={blockMenuSelected === "search"}
className="rounded-none"
>
- <MagnifyingGlassIcon className="h-5 w-6" strokeWidth={2} />
+ <MagnifyingGlassIcon className="size-5" strokeWidth={2} />
</ControlPanelButton>
</PopoverTrigger>


@@ -1,12 +1,12 @@
import { Separator } from "@/components/__legacy__/ui/separator";
import { ControlPanelButton } from "./ControlPanelButton";
import { ArrowUUpLeftIcon, ArrowUUpRightIcon } from "@phosphor-icons/react";
import {
Tooltip,
TooltipContent,
TooltipTrigger,
} from "@/components/atoms/Tooltip/BaseTooltip";
import { ArrowUUpLeftIcon, ArrowUUpRightIcon } from "@phosphor-icons/react";
import { useHistoryStore } from "../../stores/historyStore";
import { ControlPanelButton } from "./ControlPanelButton";
import { useEffect } from "react";
@@ -43,7 +43,7 @@ export const UndoRedoButtons = () => {
<Tooltip delayDuration={100}>
<TooltipTrigger asChild>
<ControlPanelButton as="button" disabled={!canUndo()} onClick={undo}>
- <ArrowUUpLeftIcon className="h-6 w-6" />
+ <ArrowUUpLeftIcon className="size-5" />
</ControlPanelButton>
</TooltipTrigger>
<TooltipContent side="right">Undo</TooltipContent>
@@ -52,7 +52,7 @@ export const UndoRedoButtons = () => {
<Tooltip delayDuration={100}>
<TooltipTrigger asChild>
<ControlPanelButton as="button" disabled={!canRedo()} onClick={redo}>
- <ArrowUUpRightIcon className="h-6 w-6" />
+ <ArrowUUpRightIcon className="size-5" />
</ControlPanelButton>
</TooltipTrigger>
<TooltipContent side="right">Redo</TooltipContent>

Some files were not shown because too many files have changed in this diff.