Compare commits


128 Commits

Author SHA1 Message Date
Torantulino
9560aa8b41 feat(langfuse): integrate Langfuse for prompt management
- Added Langfuse configuration to settings and environment variables.
- Introduced Langfuse client for fetching prompts in chat service.
- Updated ChatConfig to include Langfuse prompt name.
- Enhanced service logic to retrieve prompts from Langfuse, improving prompt management capabilities.
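
A rough sketch of the runtime fetch pattern this enables, using the Langfuse Python SDK (the prompt name and template variable here are illustrative placeholders, not the actual ChatConfig values):

```python
# Hedged sketch: fetch a prompt at runtime from Langfuse. The prompt
# name and template variable are hypothetical placeholders.
from langfuse import Langfuse

langfuse = Langfuse()  # reads LANGFUSE_PUBLIC_KEY / LANGFUSE_SECRET_KEY / LANGFUSE_HOST from env

def get_copilot_system_prompt(user_name: str) -> str:
    prompt = langfuse.get_prompt("copilot-system-prompt")  # hypothetical name
    # compile() substitutes {{user_name}}-style variables in the stored template
    return prompt.compile(user_name=user_name)
```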

This integration allows for rapid runtime prompt updates and, eventually, analytics on the Co-Pilot system's performance.

This commit trials the service as a potential option.
2026-01-11 21:05:05 +00:00
Swifty
5f0a39bbf0 fix(backend): set search_path for vector type visibility in hybrid search
- Add SET LOCAL search_path TO platform, public; to queries using vector types
- This ensures the vector type is found while keeping operators working
- Fixes hybrid search on databases using platform schema
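
A minimal sketch of the pattern, assuming a raw asyncpg connection (the actual code goes through the project's own DB layer; the table and column names are illustrative):

```python
# Sketch of the SET LOCAL search_path fix, assuming asyncpg; table and
# column names are illustrative.
import asyncpg

async def hybrid_search(pool: asyncpg.Pool, query_embedding: str) -> list:
    async with pool.acquire() as conn:
        async with conn.transaction():
            # SET LOCAL is scoped to this transaction: the unqualified
            # vector type resolves via the platform schema without
            # permanently mutating the session's search_path.
            await conn.execute("SET LOCAL search_path TO platform, public")
            return await conn.fetch(
                "SELECT id FROM store_embeddings "
                "ORDER BY embedding <=> $1::vector LIMIT 10",
                query_embedding,  # e.g. "[0.1, 0.2, ...]"
            )
```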

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-09 18:45:29 +01:00
Swifty
9cfe70e554 fix: use unqualified vector type for operator compatibility 2026-01-09 17:58:39 +01:00
Swifty
e69f14353e make hybrid search work in platform schema 2026-01-09 16:27:43 +01:00
Swifty
1b96d990c5 fix credentials import 2026-01-08 17:59:15 +01:00
Swifty
6db59a2665 fix hook issue 2026-01-08 17:34:37 +01:00
Swifty
1fd4ec079f Merge remote-tracking branch 'origin/dev' into hackathon/copilot 2026-01-07 09:24:01 +01:00
Abhimanyu Yadav
e503126170 feat(frontend): upgrade RJSF to v6 and implement new FormRenderer system
(#11677)

Fixes #11686

### Changes 🏗️

This PR upgrades the React JSON Schema Form (RJSF) library from v5 to v6
and introduces a complete rewrite of the form rendering system with
improved architecture and new features.

#### Core Library Updates
- Upgraded `@rjsf/core` from 5.24.13 to 6.1.2
- Upgraded `@rjsf/utils` from 5.24.13 to 6.1.2
- Added `@radix-ui/react-slider` 1.3.6 for new slider components

#### New Form Renderer Architecture
- **Base Templates**: Created modular base templates for arrays,
objects, and standard fields
- **AnyOf Support**: Implemented `AnyOfField` component with type
selector for union types
- **Array Fields**: New `ArrayFieldTemplate`, `ArrayFieldItemTemplate`,
and `ArraySchemaField` with context provider
- **Object Fields**: Enhanced `ObjectFieldTemplate` with better support
for additional properties via `WrapIfAdditionalTemplate`
- **Field Templates**: New `TitleField`, `DescriptionField`, and
`FieldTemplate` with improved styling
- **Custom Widgets**: Implemented TextWidget, SelectWidget,
CheckboxWidget, FileWidget, DateWidget, TimeWidget, and DateTimeWidget
- **Button Components**: Custom AddButton, RemoveButton, and CopyButton
components

#### Node Handle System Refactor
- Split `NodeHandle` into `InputNodeHandle` and `OutputNodeHandle` for
better separation of concerns
- Refactored handle ID generation logic in `helpers.ts` with new
`generateHandleIdFromTitleId` function
- Improved handle connection detection using edge store
- Added support for nested output handles (objects within outputs)

#### Edge Store Improvements
- Added `removeEdgesByHandlePrefix` method for bulk edge removal
- Improved `isInputConnected` with handle ID cleanup
- Optimized `updateEdgeBeads` to only update when changes occur
- Better edge management with `applyEdgeChanges`

#### Node Store Enhancements
- Added `syncHardcodedValuesWithHandleIds` method to maintain
consistency between form data and handle connections
- Better handling of additional properties in objects
- Improved path parsing with `parseHandleIdToPath` and
`ensurePathExists`

#### Draft Recovery Improvements
- Added diff calculation with `calculateDraftDiff` to show what changed
- New `formatDiffSummary` to display changes in a readable format (e.g.,
"+2/-1 blocks, +3 connections")
- Better visual feedback for draft changes

#### UI/UX Enhancements
- Fixed node container width to 350px for consistency
- Improved field error display with inline error messages
- Better spacing and styling throughout forms
- Enhanced tooltip support for field descriptions
- Improved array item controls with better button placement
- Context-aware field sizing (small/large)

#### Output Handler Updates
- Recursive rendering of nested output properties
- Better type display with color coding
- Improved handle connections for complex output schemas

#### Migration & Cleanup
- Updated `RunInputDialog` to use new FormRenderer
- Updated `FormCreator` to use new FormRenderer
- Moved OAuth callback types to separate file
- Updated import paths from `input-renderer` to `InputRenderer`
- Removed unused console.log statements
- Added `type="button"` to buttons to prevent form submission

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Test form rendering with various field types (text, number,
boolean, arrays, objects)
  - [x] Test anyOf field type selector functionality
  - [x] Test array item addition/removal
  - [x] Test nested object fields with additional properties
  - [x] Test input/output node handle connections
  - [x] Test draft recovery with diff display
  - [x] Verify backward compatibility with existing agents
  - [x] Test field validation and error display
  - [x] Verify handle ID generation for complex schemas


## Summary by CodeRabbit

* **New Features**
* Improved form field rendering with enhanced support for optional
types, arrays, and nested objects.
* Enhanced draft recovery display showing detailed difference tracking
(added, removed, modified items).
  * Better OAuth popup callback handling with structured message types.

* **Bug Fixes**
  * Improved node handle ID normalization and synchronization.
  * Enhanced edge management for complex field changes.
  * Fixed styling consistency across form components.

* **Dependencies**
  * Updated React JSON Schema Form library to version 6.1.2.
  * Added Radix UI slider component support.


2026-01-07 05:06:34 +00:00
Zamil Majdy
7ee28197a3 docs(gitbook): sync documentation updates with dev branch (#11709)
## Summary

Sync GitBook documentation changes from the gitbook branch to dev. This
PR contains comprehensive documentation updates including new assets,
content restructuring, and infrastructure improvements.

## Changes 🏗️

### Documentation Updates
- **New GitBook Assets**: Added 9 new documentation images and
screenshots
  - Platform overview images (AGPT_Platform.png, Banner_image.png)
- Feature illustrations (Contribute.png, Integrations.png, hosted.jpg,
no-code.jpg, api-reference.jpg)
  - Screenshots and examples for better user guidance
- **Content Updates**: Enhanced README.md and SUMMARY.md with improved
structure and navigation
- **Visual Documentation**: Added comprehensive visual guides for
platform features

### Infrastructure 
- **Cloudflare Worker**: Added redirect handler for docs.agpt.co →
agpt.co/docs migration
  - Complete URL mapping for 71+ redirect patterns
  - Handles platform blocks restructuring and edge cases
  - Ready for deployment to Cloudflare Workers

### Merge Conflict Resolution
- **Clean merge from dev**: Successfully merged dev's major backend
restructuring (server/ → api/)
- **File resurrection fix**: Removed files that were accidentally
resurrected during merge conflict resolution
  - Cleaned up BuilderActionButton.tsx (deleted in dev)
  - Cleaned up old PreviewBanner.tsx location (moved in dev)
  - Synced pnpm-lock.yaml and layout.tsx with dev's current state

## Technical Details

This PR represents a careful synchronization that:
1. **Preserves all GitBook documentation work** while staying current
with dev
2. **Maintains clean diff**: Only documentation-related changes remain
after merge cleanup
3. **Resolves merge conflicts**: Handled major backend API restructuring
without breaking docs
4. **Infrastructure ready**: Cloudflare Worker ready for docs migration
deployment

## Files Changed
- `docs/`: GitBook documentation assets and content
- `autogpt_platform/cloudflare_worker.js`: Docs infrastructure for URL
redirects

## Validation
- ✅ All TypeScript compilation errors resolved
- ✅ Pre-commit hooks passing (Prettier, TypeCheck)
- ✅ Only documentation changes remain in diff vs dev
- ✅ Cloudflare Worker tested with comprehensive URL mapping
- ✅ No non-documentation code changes after cleanup

## Deployment Notes
The Cloudflare Worker can be deployed via the Cloudflare Dashboard: Workers → Create → Paste code → Add route `docs.agpt.co/*`.

This completes the GitBook synchronization and prepares for docs site
migration.

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: bobby.gaffin <bobby.gaffin@agpt.co>
Co-authored-by: Bently <Github@bentlybro.com>
Co-authored-by: Abhimanyu Yadav <122007096+Abhi1992002@users.noreply.github.com>
Co-authored-by: Swifty <craigswift13@gmail.com>
Co-authored-by: Ubbe <hi@ubbe.dev>
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Lluis Agusti <hi@llu.lu>
2026-01-07 02:11:11 +00:00
Zamil Majdy
fb87d6536f Merge branch 'dev' into hackathon/copilot 2026-01-07 03:34:01 +07:00
Nicholas Tindle
818de26d24 fix(platform/blocks): XMLParserBlock list object error (#11517)

### Need for these changes 💡

The `XMLParserBlock` was susceptible to crashing with an
`AttributeError: 'List' object has no attribute 'add_text'` when
processing malformed XML inputs, such as documents with multiple root
elements or stray text outside the root. This PR introduces robust
validation to prevent these crashes and provide clear, actionable error
messages to users.

### Changes 🏗️


- Added a `_validate_tokens` static method to `XMLParserBlock` to
perform pre-parsing validation on the token stream. This method ensures
the XML input has a single root element and no text content outside of
it.
- Modified the `XMLParserBlock.run` method to call `_validate_tokens`
immediately after tokenization and before passing the tokens to
`gravitasml.Parser`.
- Introduced a new test case, `test_rejects_text_outside_root`, in
`test_blocks_dos_vulnerability.py` to verify that the `XMLParserBlock`
correctly raises a `ValueError` when encountering XML with text outside
the root element.
- Imported `Token` for type hinting in `xml_parser.py`.
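
A hypothetical sketch of the kind of pre-parse validation described above (gravitasml's real token objects differ; tokens are modeled here as `(kind, value)` tuples):

```python
# Hypothetical single-root / no-stray-text validation over a token
# stream; gravitasml's actual Token API looks different.
def validate_tokens(tokens: list[tuple[str, str]]) -> None:
    depth = 0
    roots = 0
    for kind, value in tokens:
        if kind == "start":
            if depth == 0:
                roots += 1
                if roots > 1:
                    raise ValueError("XML must have exactly one root element")
            depth += 1
        elif kind == "end":
            depth -= 1
            if depth < 0:
                raise ValueError("Unbalanced closing tag")
        elif kind == "text" and depth == 0 and value.strip():
            raise ValueError("Text content is not allowed outside the root element")
    if depth != 0:
        raise ValueError("Unclosed tag(s) at end of input")
```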

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Confirm that the `test_rejects_text_outside_root` test passes,
asserting that `ValueError` is raised for invalid XML.
  - [x] Confirm that other relevant XML parsing tests continue to pass.


---
Linear Issue:
[OPEN-2835](https://linear.app/autogpt/issue/OPEN-2835/blockunknownerror-raised-by-xmlparserblock-with-message-list-object)

---

> [!NOTE]
> Strengthens XML parsing robustness and error clarity.
> 
> - Adds `_validate_tokens` in `XMLParserBlock` to ensure a single root
element, balanced tags, and no text outside the root before parsing
> - Updates `run` to `list(tokenize(...))` and validate tokens prior to
`Parser.parse()`; maintains 10MB input size guard
> - Introduces `test_rejects_text_outside_root` asserting a readable
`ValueError` for trailing text
> - Bumps `gravitasml` to `0.1.4` in `pyproject.toml` and lockfile
> 


## Summary by CodeRabbit

* **Bug Fixes**
* Improved XML parsing validation with stricter enforcement of
single-root elements and prevention of trailing text, providing clearer
error messages for invalid XML input.

* **Tests**
* Added test coverage for XML parser validation of invalid root text
scenarios.

* **Chores**
  * Updated GravitasML dependency to latest compatible version.



---------

Co-authored-by: Cursor Agent <cursoragent@cursor.com>
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <ntindle@users.noreply.github.com>
2026-01-06 20:02:53 +00:00
Swifty
7ee03fb0ec fixing ci issues 2026-01-05 21:02:49 +01:00
Nicholas Tindle
cb08def96c feat(blocks): Add Google Docs integration blocks (#11608)
Introduces a new module with blocks for Google Docs operations,
including reading, creating, appending, inserting, formatting,
exporting, sharing, and managing public access for Google Docs. Updates
dependencies in pyproject.toml and poetry.lock to support these
features.



https://github.com/user-attachments/assets/3597366b-a9eb-4f8e-8a0a-5a0bc8ebc09b




### Changes 🏗️
Adds lots of basic docs tools + a dependency to use them with markdown

Block | Description | Key Features
-- | -- | --
Read & Create |   |
GoogleDocsReadBlock | Read content from a Google Doc | Returns text content, title, revision ID
GoogleDocsCreateBlock | Create a new Google Doc | Title, optional initial content
GoogleDocsGetMetadataBlock | Get document metadata | Title, revision ID, locale, suggested modes
GoogleDocsGetStructureBlock | Get document structure with indexes | Flat segments or detailed hierarchy; shows start/end indexes
Plain Text Operations |   |
GoogleDocsAppendPlainTextBlock | Append plain text to end | No formatting applied
GoogleDocsInsertPlainTextBlock | Insert plain text at position | Requires index; no formatting
GoogleDocsFindReplacePlainTextBlock | Find and replace plain text | Case-sensitive option; no formatting on replacement
Markdown Operations | (ideal for LLM/AI output) |
GoogleDocsAppendMarkdownBlock | Append Markdown to end | Full formatting via gravitas-md2gdocs
GoogleDocsInsertMarkdownAtBlock | Insert Markdown at position | Requires index
GoogleDocsReplaceAllWithMarkdownBlock | Replace entire doc with Markdown | Clears and rewrites
GoogleDocsReplaceRangeWithMarkdownBlock | Replace index range with Markdown | Requires start/end index
GoogleDocsReplaceContentWithMarkdownBlock | Find text and replace with Markdown | Text-based search; great for templates
Structural Operations |   |
GoogleDocsInsertTableBlock | Insert a table | Rows/columns OR content array; optional Markdown in cells
GoogleDocsInsertPageBreakBlock | Insert a page break | Position index (0 = end)
GoogleDocsDeleteContentBlock | Delete content range | Requires start/end index
GoogleDocsFormatTextBlock | Apply formatting to text range | Bold, italic, underline, font size/color, etc.
Export & Sharing |   |
GoogleDocsExportBlock | Export to different formats | PDF, DOCX, TXT, HTML, RTF, ODT, EPUB
GoogleDocsShareBlock | Share with specific users | Reader, commenter, writer, owner roles
GoogleDocsSetPublicAccessBlock | Set public access level | Private, anyone with link (view/comment/edit)



### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Build, run, verify, and upload a block super test
- [x] [Google Docs Super
Agent_v8.json](https://github.com/user-attachments/files/24134215/Google.Docs.Super.Agent_v8.json)
works



## Summary by CodeRabbit

* **Chores**
  * Updated backend dependencies.



---

> [!NOTE]
> Adds end-to-end Google Docs capabilities under
`backend/blocks/google/docs.py`, including rich Markdown support.
> 
> - New blocks: read/create docs; plain-text
`append`/`insert`/`find_replace`/`delete`; text `format`;
`insert_table`; `insert_page_break`; `get_metadata`; `get_structure`
> - Markdown-powered blocks (via `gravitas_md2gdocs.to_requests`):
`append_markdown`, `insert_markdown_at`, `replace_all_with_markdown`,
`replace_range_with_markdown`, `replace_content_with_markdown`
> - Export and sharing: `export` (PDF/DOCX/TXT/HTML/RTF/ODT/EPUB),
`share` (user roles), `set_public_access`
> - Dependency updates: add `gravitas-md2gdocs` to `pyproject.toml` and
update `poetry.lock`
> 

---------

Co-authored-by: Cursor Agent <cursoragent@cursor.com>
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <ntindle@users.noreply.github.com>
2026-01-05 18:36:56 +00:00
Krzysztof Czerwinski
ac2daee5f8 feat(backend): Add GPT-5.2 and update default models (#11652)
### Changes 🏗️

- Add OpenAI `GPT-5.2` with metadata & cost
- Add const `DEFAULT_LLM_MODEL` (set to GPT-5.2) and use it instead of
hardcoded model across llm blocks and tests
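
A sketch of the pattern under assumed names (the real constant sits alongside the backend's model registry):

```python
# Approximate sketch: one constant instead of hardcoded model strings.
DEFAULT_LLM_MODEL = "gpt-5.2"  # swap the default in exactly one place

def run_llm_block(prompt: str, model: str = DEFAULT_LLM_MODEL) -> dict:
    # Blocks and tests reference DEFAULT_LLM_MODEL, so a future default
    # change doesn't require touching every call site.
    return {"model": model, "prompt": prompt}
```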

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] GPT-5.2 is set as default and works on llm blocks
2026-01-05 16:13:35 +00:00
lif
266e0d79d4 fix(blocks): add YouTube Shorts URL support (#11659)
## Summary
Added support for parsing YouTube Shorts URLs (`youtube.com/shorts/...`)
in the TranscribeYoutubeVideoBlock to extract video IDs correctly.

## Changes
- Modified `_extract_video_id` method in `youtube.py` to handle Shorts
URL format
- Added test cases for YouTube Shorts URL extraction
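
A hedged sketch of Shorts-aware ID extraction (the real `_extract_video_id` in `youtube.py` may be structured differently):

```python
# Sketch of video-ID extraction covering the Shorts URL form.
from urllib.parse import urlparse, parse_qs

def extract_video_id(url: str) -> str | None:
    parsed = urlparse(url)
    if parsed.hostname == "youtu.be":
        return parsed.path.lstrip("/") or None
    if parsed.path.startswith("/shorts/"):
        # youtube.com/shorts/<id> carries the ID in the path, not ?v=
        parts = parsed.path.split("/")
        return parts[2] if len(parts) > 2 and parts[2] else None
    return parse_qs(parsed.query).get("v", [None])[0]
```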

## Related Issue
Fixes #11500

## Test Plan
- [x] Added unit tests for YouTube Shorts URL extraction
- [x] Verified existing YouTube URL formats still work
- [x] CI should pass all existing tests

---------

Co-authored-by: Ubbe <hi@ubbe.dev>
2026-01-05 16:11:45 +00:00
lif
01f443190e fix(frontend): allow empty values in number inputs and fix AnyOfField toggle (#11661)

## Summary

This PR fixes two related issues with number/integer inputs in the
frontend:

1. **HTMLType typo fix**: INTEGER input type was incorrectly mapped to
`htmlType: 'account'` (which is not a valid HTML input type) instead of
`htmlType: 'number'`.

2. **AnyOfField toggle fix**: When a user cleared a number input field,
the input would disappear because `useAnyOfField` checked for both
`null` AND `undefined` in `isEnabled`. This PR changes it to only check
for explicit `null` (set by toggle off), allowing `undefined` (empty
input) to keep the field visible.

### Root cause analysis

When a user cleared a number input:
1. `handleChange` returned `undefined` (because `v === "" ? undefined :
Number(v)`)
2. In `useAnyOfField`, `isEnabled = formData !== null && formData !==
undefined` became `false`
3. The input field disappeared

### Fix

Changed `useAnyOfField.tsx` line 67:
```diff
- const isEnabled = formData !== null && formData !== undefined;
+ const isEnabled = formData !== null;
```

This way:
- When toggle is OFF → `formData` is `null` → `isEnabled` is `false`
(input hidden) ✓
- When toggle is ON but input is cleared → `formData` is `undefined` →
`isEnabled` is `true` (input visible) ✓

## Test plan

- [x] Verified INTEGER inputs now render correctly with `type="number"`
- [x] Verified clearing a number input keeps the field visible
- [x] Verified toggling the nullable switch still works correctly

Fixes #11594

🤖 AI-assisted development disclaimer: This PR was developed with
assistance from Claude Code.

---------

Signed-off-by: majiayu000 <1835304752@qq.com>
Co-authored-by: Abhimanyu Yadav <122007096+Abhi1992002@users.noreply.github.com>
2026-01-05 16:10:47 +00:00
Swifty
9a83af2787 Merge origin/dev - fix NodeInputs.tsx conflict 2026-01-05 12:47:56 +01:00
Swifty
dc1099e205 fix: update imports to use new api/features paths
- Updated all chat tools imports from backend.server.v2 to backend.api.features
- Updated store imports (backfill_embeddings, db, hybrid_search)
- Fixed CredentialsInput import in setup-wizard page
2026-01-05 12:35:49 +01:00
Ubbe
bdba0033de refactor(frontend): move NodeInput files (#11695)
## Changes 🏗️

Move the `<NodeInput />` component next to the old builder code where it
is used.

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run app locally and click around, E2E is fine
2026-01-05 10:29:12 +00:00
Abhimanyu Yadav
b87c64ce38 feat(frontend): Add delete key bindings to ReactFlow editor
(#11693)

Issues fixed by this PR
- https://github.com/Significant-Gravitas/AutoGPT/issues/11688
- https://github.com/Significant-Gravitas/AutoGPT/issues/11687

### **Changes 🏗️**

Added keyboard delete functionality to the ReactFlow editor by enabling
the `deleteKeyCode` prop with both "Backspace" and "Delete" keys. This
allows users to delete selected nodes and edges using standard keyboard
shortcuts, improving the editing experience.

**Modified:**

- `Flow.tsx`: Added `deleteKeyCode={["Backspace", "Delete"]}` prop to
the ReactFlow component to enable deletion of selected elements via
keyboard

### **Checklist 📋**

#### **For code changes:**

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Select a node in the flow editor and press Delete key to confirm
it deletes
- [x] Select a node in the flow editor and press Backspace key to
confirm it deletes
    - [x] Verify deletion works for multiple selected elements
2026-01-05 10:28:57 +00:00
Swifty
e68a9eb771 Merge origin/dev into hackathon/copilot
Resolved conflicts from api restructuring:
- backend/server/v2/* -> backend/api/features/*
- Updated imports to use new paths
- Kept chat/copilot functionality with new structure
- Accepted openapi.json from dev (regenerate after merge)
- Resolved useCredentialsInput naming conflict (singular)
- NavbarView.tsx merged into Navbar.tsx
2026-01-05 11:05:31 +01:00
Ubbe
003affca43 refactor(frontend): fix new builder buttons (#11696)
## Changes 🏗️

<img width="800" height="964" alt="Screenshot 2026-01-05 at 15 26 21"
src="https://github.com/user-attachments/assets/f8c7fc47-894a-4db2-b2f1-62b4d70e8453"
/>

- Adjust the new builder to use the Design System components
- Re-structure imports to match formatting rules
- Small improvement on `use-get-flag`
- Move file which is the main hook

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run locally and check the new buttons look good
2026-01-05 09:09:47 +00:00
Abhimanyu Yadav
290d0d9a9b feat(frontend): add auto-save Draft Recovery feature with IndexedDB persistence
(#11658)

## Summary
Implements an auto-save draft recovery system that persists unsaved flow
builder state across browser sessions, tab closures, and refreshes. When
users return to a flow with unsaved changes, they can choose to restore
or discard the draft via an intuitive recovery popup.



https://github.com/user-attachments/assets/0f77173b-7834-48d2-b7aa-73c6cd2eaff6



## Changes 🏗️

### Core Features
- **Draft Recovery Popup** (`DraftRecoveryPopup.tsx`)
  - Displays amber-themed notification with unsaved changes metadata
  - Shows node count, edge count, and relative time since last save
  - Provides restore and discard actions with tooltips
  - Auto-dismisses on click outside or ESC key

- **Auto-Save System** (`useDraftManager.ts`)
  - Automatically saves draft state every 15 seconds
  - Saves on browser tab close/refresh via `beforeunload`
  - Tracks nodes, edges, graph schemas, node counter, and flow version
  - Smart dirty checking - only saves when actual changes detected
  - Cleans up expired drafts (24-hour TTL)

- **IndexedDB Persistence** (`db.ts`, `draft-service.ts`)
  - Uses Dexie library for reliable client-side storage
- Handles both existing flows (by flowID) and new flows (via temp
session IDs)
- Compares draft state with current state to determine if recovery
needed
  - Automatically clears drafts after successful save

### Integration Changes
- **Flow Editor** (`Flow.tsx`)
  - Integrated `DraftRecoveryPopup` component
  - Passes `isInitialLoadComplete` state for proper timing

- **useFlow Hook** (`useFlow.ts`)
  - Added `isInitialLoadComplete` state to track when flow is ready
  - Ensures draft check happens after initial graph load
  - Resets state on flow/version changes

- **useCopyPaste Hook** (`useCopyPaste.ts`)
  - Refactored to manage keyboard event listeners internally
  - Simplified integration by removing external event handler setup

- **useSaveGraph Hook** (`useSaveGraph.ts`)
  - Clears draft after successful save (both create and update)
  - Removes temp flow ID from session storage on first save

### Dependencies
- Added `dexie@4.2.1` - Modern IndexedDB wrapper for reliable
client-side storage

## Technical Details

**Auto-Save Flow:**
1. User makes changes to nodes/edges
2. Change triggers 15-second debounced save
3. Draft saved to IndexedDB with timestamp
4. On save, current state compared with last saved state
5. Only saves if meaningful changes detected

**Recovery Flow:**
1. User loads flow/refreshes page
2. After initial load completes, check for existing draft
3. Compare draft with current state
4. If different and non-empty, show recovery popup
5. User chooses to restore or discard
6. Draft cleared after either action

**Session Management:**
- Existing flows: Use actual flowID for draft key

### Test Plan 🧪

- [x] Create a new flow with 3+ blocks and connections, wait 15+
seconds, then refresh the page - verify recovery popup appears with
correct counts and restoring works
- [x] Create a flow with blocks, refresh, then click "Discard" button on
recovery popup - verify popup disappears and draft is deleted
- [x] Add blocks to a flow, save successfully - verify draft is cleared
from IndexedDB (check DevTools > Application > IndexedDB)
- [x] Make changes to an existing flow, refresh page - verify recovery
popup shows and restoring preserves all changes correctly
- [x] Verify empty flows (0 nodes) don't trigger recovery popup or save
drafts
2025-12-31 14:49:53 +00:00
Abhimanyu Yadav
fba61c72ed feat(frontend): fix duplicate publish button and improve BuilderActionButton styling
(#11669)

Fixes duplicate "Publish to Marketplace" buttons in the builder by
adding a `showTrigger` prop to control modal trigger visibility.

<img width="296" height="99" alt="Screenshot 2025-12-23 at 8 18 58 AM"
src="https://github.com/user-attachments/assets/d5dbfba8-e854-4c0c-a6b7-da47133ec815"
/>


### Changes 🏗️

**BuilderActionButton.tsx**

- Removed borders on hover and active states for a cleaner visual
appearance
- Added `hover:border-none` and `active:border-none` to maintain
consistent styling during interactions

**PublishToMarketplace.tsx**

- Pass `showTrigger={false}` to `PublishAgentModal` to hide the default
trigger button
- This prevents duplicate buttons when a custom trigger is already
rendered

**PublishAgentModal.tsx**

- Added `showTrigger` prop (defaults to `true`) to conditionally render
the modal trigger
- Allows parent components to control whether the built-in trigger
button should be displayed
- Maintains backward compatibility with existing usage

### Checklist 📋

#### For code changes:

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Verify only one "Publish to Marketplace" button appears in the
builder
- [x] Confirm button hover/active states display correctly without
border artifacts
- [x] Verify modal can still be triggered programmatically without the
trigger button
2025-12-31 09:46:12 +00:00
Nicholas Tindle
79d45a15d0 feat(platform): Deduplicate insufficient funds Discord + email notifications (#11672)
Add Redis-based deduplication for insufficient funds notifications (both
Discord alerts and user emails) when users run out of credits. This
prevents spamming users and the PRODUCT Discord channel with repeated
alerts for the same user+agent combination.

### Changes 🏗️

- **Redis-based deduplication** (`backend/executor/manager.py`):
- Add `INSUFFICIENT_FUNDS_NOTIFIED_PREFIX` constant for Redis key prefix
- Add `INSUFFICIENT_FUNDS_NOTIFIED_TTL_SECONDS` (30 days) as fallback
cleanup
- Implement deduplication in `_handle_insufficient_funds_notif` using
Redis `SET NX`
- Skip both email (`ZERO_BALANCE`) and Discord notifications for
duplicate alerts per user+agent
- Add `clear_insufficient_funds_notifications(user_id)` function to
remove all notification flags for a user

- **Clear flags on credit top-up** (`backend/data/credit.py`):
- Call `clear_insufficient_funds_notifications` in `_top_up_credits`
after successful auto-charge
- Call `clear_insufficient_funds_notifications` in `fulfill_checkout`
after successful manual top-up
- This allows users to receive notifications again if they run out of
funds in the future

- **Comprehensive test coverage**
(`backend/executor/manager_insufficient_funds_test.py`):
  - Test first-time notification sends both email and Discord alert
  - Test duplicate notifications are skipped for same user+agent
  - Test different agents for same user get separate alerts
  - Test clearing notifications removes all keys for a user
  - Test handling when no notification keys exist
- Test notifications still sent when Redis fails (graceful degradation)
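
A hedged sketch of the dedup mechanics described above (the helper shape and key naming are illustrative; the real constants and functions live in `backend/executor/manager.py`):

```python
# Sketch of Redis SET NX deduplication with a 30-day TTL fallback.
import redis

INSUFFICIENT_FUNDS_NOTIFIED_PREFIX = "insufficient_funds_notified:"
TTL_SECONDS = 30 * 24 * 60 * 60  # 30-day fallback cleanup

def should_notify(r: redis.Redis, user_id: str, graph_id: str) -> bool:
    key = f"{INSUFFICIENT_FUNDS_NOTIFIED_PREFIX}{user_id}:{graph_id}"
    try:
        # SET NX succeeds only if the key did not exist yet, so the
        # first alert wins and duplicates are skipped until the TTL
        # expires or the flags are cleared on top-up.
        return bool(r.set(key, "1", nx=True, ex=TTL_SECONDS))
    except redis.RedisError:
        # Graceful degradation: if Redis is down, prefer a duplicate
        # notification over silently dropping one.
        return True

def clear_insufficient_funds_notifications(r: redis.Redis, user_id: str) -> None:
    # scan_iter avoids blocking Redis the way KEYS would.
    for key in r.scan_iter(f"{INSUFFICIENT_FUNDS_NOTIFIED_PREFIX}{user_id}:*"):
        r.delete(key)
```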

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] First insufficient funds alert sends both email and Discord
notification
  - [x] Duplicate alerts for same user+agent are skipped
  - [x] Different agents for same user each get their own notification
  - [x] Topping up credits clears notification flags
  - [x] Redis failure gracefully falls back to sending notifications
  - [x] 30-day TTL provides automatic cleanup as fallback
  - [x] Manually test this works with scheduled agents
 

---

> [!NOTE]
> Introduces Redis-backed deduplication for insufficient-funds alerts
and resets flags on successful credit additions.
> 
> - **Dedup insufficient-funds alerts** in `executor/manager.py` using
Redis `SET NX` with `INSUFFICIENT_FUNDS_NOTIFIED_PREFIX` and 30‑day TTL;
skips duplicate ZERO_BALANCE email + Discord alerts per
`user_id`+`graph_id`, with graceful fallback if Redis fails.
> - **Reset notification flags on credit increases** by adding
`clear_insufficient_funds_notifications(user_id)` and invoking it when
enabling/adding positive `GRANT`/`TOP_UP` transactions in
`data/credit.py`.
> - **Tests** (`executor/manager_insufficient_funds_test.py`):
first-time vs duplicate behavior, per-agent separation, clearing keys
(including no-key and Redis-error cases), and clearing on
`_add_transaction`/`_enable_transaction`.
> 

---------

Co-authored-by: Ubbe <hi@ubbe.dev>
Co-authored-by: Claude <noreply@anthropic.com>
2025-12-30 18:10:30 +00:00
Ubbe
66f0d97ca2 fix(frontend): hide better chat link if not enabled (#11648)
## Changes 🏗️

- Make `<Navbar />` a client component so its rendering is more
predictable
- Remove the `useMemo()` for the chat link to prevent the flash...
- Make sure chat is added to the navbar links only after checking the
flag is enabled
- Improve logout with `useTransition`
- Simplify feature flags setup

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run locally and test the above

---

> [!NOTE]
> Ensures the `Chat` nav item is hidden when the feature flag is off
across desktop and mobile nav.
> 
> - Inline-filters `loggedInLinks` to skip `Chat` when `Flag.CHAT` is
false for both `NavbarLink` rendering and `MobileNavBar` menu items
> - Removes `useMemo`/`linksWithChat` helper; maps directly over
`loggedInLinks` and filters nulls in mobile, keeping icon mapping intact
> - Cleans up unused `useMemo` import
> 
2025-12-30 13:21:53 +00:00
Ubbe
5894a8fcdf fix(frontend): use DS Dialog on old builder (#11643)
## Changes 🏗️

Use the Design System `<Dialog />` on the old builder, which supports
long content scrolling ( the current one does not, causing issues in
graphs with many run inputs )...

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run locally and test the above



## Summary by CodeRabbit

* **New Features**
* Added Enhanced Rendering toggle for improved output handling and
display (controlled via feature flag)

* **Improvements**
  * Refined dialog layouts and user interactions
* Enhanced copy-to-clipboard functionality with toast notifications upon
copying


2025-12-30 20:22:57 +07:00
Ubbe
dff8efa35d fix(frontend): favico colour override issue (#11681)
## Changes 🏗️

Sometimes, on Dev, when navigating between pages, the Favico colour
would revert from Green 🟢 (Dev) to Purple 🟣(Default). That's because the
`/marketplace` page had custom code overriding it that I didn't notice
earlier...

I also made it use the Next.js metadata API, so it handles the favicon
correctly across navigations.

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run locally and test the above
2025-12-30 20:22:32 +07:00
seer-by-sentry[bot]
e26822998f fix: Handle missing or null 'items' key in DataForSEO Related Keywords block (#10989)
### Changes 🏗️

- Modified the DataForSEO Related Keywords block to handle cases where
the 'items' key is missing or has a null value in the API response.
- Ensures that the code gracefully handles these scenarios by defaulting
to an empty list, preventing potential errors. Fixes
[AUTOGPT-SERVER-66D](https://sentry.io/organizations/significant-gravitas/issues/6902944636/).
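
The defensive default boils down to one line; a minimal illustration:

```python
# Minimal illustration of the fallback described above.
first_result = {"items": None}  # the API sometimes returns null here
items = first_result.get("items") or []  # missing OR null -> empty list
for item in items:
    print(item)  # safely skipped when there are no results
```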

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] The DataForSEO API now returns an empty list when there are no
results, preventing the code from attempting to iterate on a null value.

---

> [!NOTE]
> Strengthens parsing of DataForSEO Labs response to avoid errors when
`items` is missing or null.
> 
> - In `backend/blocks/dataforseo/related_keywords.py` `run()`, sets
`items = first_result.get("items") or []` when `first_result` is a
`dict`, otherwise `[]`, ensuring safe iteration
> - Prevents exceptions and yields empty results when no items are
returned
> 

Co-authored-by: seer-by-sentry[bot] <157164994+seer-by-sentry[bot]@users.noreply.github.com>
Co-authored-by: Toran Bruce Richards <toran.richards@gmail.com>
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <ntindle@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
2025-12-26 16:17:24 +00:00
Zamil Majdy
88731b1f76 feat(platform): marketplace update notifications with enhanced publishing workflow (#11630)
## Summary
This PR implements a comprehensive marketplace update notification
system that allows users to discover and update to newer agent versions,
along with enhanced publishing workflows and UI improvements.

<img width="1500" height="533" alt="image"
src="https://github.com/user-attachments/assets/ee331838-d712-4718-b231-1f9ec21bcd8e"
/>

<img width="600" height="610" alt="image"
src="https://github.com/user-attachments/assets/b881a7b8-91a5-460d-a159-f64765b339f1"
/>

<img width="1500" height="416" alt="image"
src="https://github.com/user-attachments/assets/a2d61904-2673-4e44-bcc5-c47d36af7a38"
/>

<img width="1500" height="1015" alt="image"
src="https://github.com/user-attachments/assets/2dd978c7-20cc-4230-977e-9c62157b9f23"
/>


## Core Features

### 🔔 Marketplace Update Notifications
- **Update detection**: Automatically detects when marketplace has newer
agent versions than user's local copy
- **Creator notifications**: Shows banners for creators with unpublished
changes ready to publish
- **Non-creator support**: Enables regular users to discover and update
to newer marketplace versions
- **Version comparison**: Intelligent logic comparing `graph_version` vs
marketplace listing versions
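
A hedged sketch of that comparison (field names are approximate; the real values come from the `StoreAgent` / `LibraryAgent` models):

```python
# Sketch of the update check: an update exists when the marketplace
# lists a graph version newer than the user's local copy.
def has_marketplace_update(local_graph_version: int, published_versions: list[int]) -> bool:
    return bool(published_versions) and max(published_versions) > local_graph_version
```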

### 📋 Enhanced Publishing Workflow  
- **Builder integration**: Added "Publish to Marketplace" button
directly in the builder actions
- **Unified banner system**: Consistent `MarketplaceBanners` component
across library and marketplace pages
- **Streamlined UX**: Fixed layout issues, improved button placement and
styling
- **Modal improvements**: Fixed thumbnail loading race conditions and
infinite loop bugs

### 📚 Version History & Changelog
- **Inline version history**: Added version changelog directly to
marketplace agent pages
- **Version comparison**: Clear display of available versions with
current version highlighting
- **Update mechanism**: Direct updates using `graph_version` parameter
for accuracy

## Technical Implementation

### Backend Changes
- **Database schema**: Added `agentGraphVersions` and `agentGraphId`
fields to `StoreAgent` model
- **API enhancement**: Updated store endpoints to expose graph version
data for version comparison
- **Data migration**: Fixed agent version field naming from `version` to
`agentGraphVersions`
- **Model updates**: Enhanced `LibraryAgentUpdateRequest` with
`graph_version` field

### Frontend Architecture
- **`useMarketplaceUpdate` hook**: Centralized marketplace update
detection and creator identification
- **`MarketplaceBanners` component**: Unified banner system with proper
vertical layout and styling
- **`AgentVersionChangelog` component**: Version history display for
marketplace pages
- **`PublishToMarketplace` component**: Builder integration with modal
workflow

### Key Bug Fixes
- **Thumbnail loading**: Fixed race condition where images wouldn't load
on first modal open
- **Infinite loops**: Used refs to prevent circular dependencies in
`useThumbnailImages` hook
- **Layout issues**: Fixed banner placement, removed duplicate
breadcrumbs, corrected vertical layout
- **Field naming**: Fixed `agent_version` vs `version` field
inconsistencies across APIs

## Files Changed

### Backend
- `autogpt_platform/backend/backend/server/v2/store/` - Enhanced store
API with graph version data
- `autogpt_platform/backend/backend/server/v2/library/` - Updated
library API models
- `autogpt_platform/backend/migrations/` - Database migrations for
version fields
- `autogpt_platform/backend/schema.prisma` - Schema updates for graph
versions

### Frontend
- `src/app/(platform)/components/MarketplaceBanners/` - New unified
banner component
- `src/app/(platform)/library/agents/[id]/components/` - Enhanced
library views with banners
- `src/app/(platform)/build/components/BuilderActions/` - Added
marketplace publish button
- `src/app/(platform)/marketplace/components/AgentInfo/` - Added inline
version history
- `src/components/contextual/PublishAgentModal/` - Fixed thumbnail
loading and modal workflow

## User Experience Impact
- **Better discovery**: Users automatically notified of newer agent
versions
- **Streamlined publishing**: Direct publish access from builder
interface
- **Reduced friction**: Fixed UI bugs, improved loading states,
consistent design
- **Enhanced transparency**: Inline version history on marketplace pages
- **Creator workflow**: Better notifications for creators with
unpublished changes

## Testing
- ✅ Update banners appear correctly when marketplace has newer versions
- ✅ Creator banners show for users with unpublished changes
- ✅ Version comparison logic works with graph_version vs marketplace versions
- ✅ Publish button in builder opens modal correctly with pre-populated data
- ✅ Thumbnail images load properly on first modal open without infinite loops
- ✅ Database migrations completed successfully with version field fixes
- ✅ All existing tests updated and passing with new schema changes

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Lluis Agusti <hi@llu.lu>
Co-authored-by: Ubbe <hi@ubbe.dev>
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
2025-12-22 11:13:06 +00:00
Abhimanyu Yadav
c3e407ef09 feat(frontend): add hover state to edge delete button in FlowEditor (#11601)

The delete button on flow editor edges is always visible, which creates
visual clutter. This change makes the button only appear on hover,
improving the UI while keeping it accessible.

### Changes 🏗️

- Added hover state management using `useState` to track when the edge
delete button is hovered
- Applied opacity transition to the delete button (fades in on hover,
fades out when not hovered)
- Added `onMouseEnter` and `onMouseLeave` handlers to the button to
control hover state
- Used `cn` utility for conditional className management
- Button remains interactive even when `opacity-0` (still clickable for
better UX)

### Checklist 📋

#### For code changes:

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Hover over an edge in the flow editor and verify the delete button
fades in smoothly
- [x] Move mouse away from edge and verify the delete button fades out
smoothly
- [x] Click the delete button while hovered to verify it still removes
the edge connection
- [x] Test with multiple edges to ensure hover state is independent per
edge
2025-12-22 01:30:58 +00:00
Reinier van der Leer
08a60dcb9b refactor(frontend): Clean up React Query-related code (#11604)
- #11603

### Changes 🏗️

Frontend:
- Make `okData` infer the response data type instead of casting
- Generalize infinite query utilities from `SidebarRunsList/helpers.ts`
  - Move to `@/app/api/helpers` and use wherever possible
- Simplify/replace boilerplate checks and conditions with `okData` in
many places
- Add `useUserTimezone` hook to replace all the boilerplate timezone
queries

Backend:
- Fix response type annotation of `GET
/api/store/graph/{store_listing_version_id}` endpoint
- Fix documentation and error behavior of `GET
/api/review/execution/{graph_exec_id}` endpoint

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - CI passes
  - [x] Clicking around the app manually -> no obvious issues
  - [x] Test Onboarding step 5 (run)
  - [x] Library runs list loads normally
2025-12-20 22:46:24 +01:00
Reinier van der Leer
de78d062a9 refactor(backend/api): Clean up API file structure (#11629)
We'll soon be needing a more feature-complete external API. To make way
for this, I'm moving some files around so:
- We can more easily create new versions of our external API
- The file structure of our internal API is more homogeneous

These changes are quite opinionated, but IMO in any case they're better
than the chaotic structure we have now.

### Changes 🏗️

- Move `backend/server` -> `backend/api`
- Move `backend/server/routers` + `backend/server/v2` ->
`backend/api/features`
  - Change absolute sibling imports to relative imports
- Move `backend/server/v2/AutoMod` -> `backend/executor/automod`
- Combine `backend/server/routers/analytics_*test.py` ->
`backend/api/features/analytics_test.py`
- Sort OpenAPI spec file

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - CI tests
  - [x] Clicking around in the app -> no obvious breakage
2025-12-20 20:33:10 +00:00
Zamil Majdy
217e3718d7 feat(platform): implement HITL UI redesign with improved review flow (#11529)
## Summary

• Redesigned Human-in-the-Loop review interface with yellow warning
scheme
• Implemented separate approved_data/rejected_data output pins for
human_in_the_loop block
• Added real-time execution status tracking to legacy flow for review
detection
• Fixed button loading states and improved UI consistency across flows
• Standardized Tailwind CSS usage removing custom values

<img width="1500" alt="image"
src="https://github.com/user-attachments/assets/4ca6dd98-f3c4-41c0-a06b-92b3bca22490"
/>
<img width="1500" alt="image"
src="https://github.com/user-attachments/assets/0afae211-09f0-465e-b477-c3949f13c876"
/>
<img width="1500" alt="image"
src="https://github.com/user-attachments/assets/05d9d1ed-cd40-4c73-92b8-0dab21713ca9"
/>



## Changes Made

### Backend Changes
- Modified `human_in_the_loop.py` block to output separate
`approved_data` and `rejected_data` pins instead of single reviewed_data
with status
- Updated block output schema to support better data flow in graph
builder
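
An illustrative sketch of the two-pin output pattern (the actual block class in `human_in_the_loop.py` defines schemas and carries review context):

```python
# Sketch only: route reviewed data to one of two named output pins.
from typing import Any, Generator

def route_review(data: Any, approved: bool) -> Generator[tuple[str, Any], None, None]:
    if approved:
        yield "approved_data", data  # feeds the approved_data output pin
    else:
        yield "rejected_data", data  # feeds the rejected_data output pin
```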

### Frontend UI Changes
- Redesigned PendingReviewsList with yellow warning color scheme
(replacing orange)
- Fixed button loading states to show spinner only on clicked button 
- Improved FloatingReviewsPanel layout removing redundant headers
- Added real-time status tracking to legacy flow using useFlowRealtime
hook
- Fixed AgentActivityDropdown text overflow and layout issues
- Enhanced Safe Mode toggle positioning and toast timing
- Standardized all custom Tailwind values to use standard classes

### Design System Updates
- Added yellow design tokens (25, 150, 600) for warning states
- Unified REVIEW status handling across all components
- Improved component composition patterns

## Test Plan
- [x] Verify HITL blocks create separate output pins for
approved/rejected data
- [x] Test review flow works in both new and legacy flow builders
- [x] Confirm button loading states work correctly (only clicked button
shows spinner)
- [x] Validate AgentActivityDropdown properly displays review status
- [x] Check Safe Mode toggle positioning matches old flow
- [x] Ensure real-time status updates work in legacy flow
- [x] Verify yellow warning colors are consistent throughout

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Lluis Agusti <hi@llu.lu>
2025-12-20 15:52:51 +00:00
Reinier van der Leer
3dbc03e488 feat(platform): OAuth API & Single Sign-On (#11617)
We want to provide Single Sign-On for multiple AutoGPT apps that use the
Platform as their backend.

### Changes 🏗️

Backend:
- DB + logic + API for OAuth flow (w/ tests)
  - DB schema additions for OAuth apps, codes, and tokens
  - Token creation/validation/management logic
- OAuth flow endpoints (app info, authorize, token exchange, introspect,
revoke)
  - E2E OAuth API integration tests
- Other OAuth-related endpoints (upload app logo, list owned apps,
external `/me` endpoint)
    - App logo asset management
  - Adjust external API middleware to support auth with access token
  - Expired token clean-up job
    - Add `OAUTH_TOKEN_CLEANUP_INTERVAL_HOURS` setting (optional)
- `poetry run oauth-tool`: dev tool to test the OAuth flows and register
new OAuth apps
- `poetry run export-api-schema`: dev tool to quickly export the OpenAPI
schema (much quicker than spinning up the backend)

Frontend:
- Frontend UI for app authorization (`/auth/authorize`)
  - Re-redirect after login/signup
- Frontend flow to batch-auth integrations on request of the client app
(`/auth/integrations/setup-wizard`)
  - Debug `CredentialInputs` component
- Add `/profile/oauth-apps` management page
- Add `isOurProblem` flag to `ErrorCard` to hide action buttons when the
error isn't our fault
- Add `showTitle` flag to `CredentialsInput` to hide built-in title for
layout reasons

DX:
- Add [API
guide](https://github.com/Significant-Gravitas/AutoGPT/blob/pwuts/sso/docs/content/platform/integrating/api-guide.md)
and [OAuth
guide](https://github.com/Significant-Gravitas/AutoGPT/blob/pwuts/sso/docs/content/platform/integrating/oauth-guide.md)

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Manually verify test coverage of OAuth API tests
  - Test `/auth/authorize` using `poetry run oauth-tool test-server`
    - [x] Works
    - [x] Looks okay
- Test `/auth/integrations/setup-wizard` using `poetry run oauth-tool
test-server`
    - [x] Works
    - [x] Looks okay
  - Test `/profile/oauth-apps` page
    - [x] All owned OAuth apps show up
    - [x] Enabling/disabling apps works
- [ ] ~~Uploading logos works~~ can only test this once deployed to dev

#### For configuration changes:

- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
2025-12-19 21:05:16 +01:00
Zamil Majdy
b76b5a37c5 fix(backend): Convert generic exceptions to appropriate typed exceptions (#11641)
## Summary
- Fix TimeoutError in AIShortformVideoCreatorBlock → BlockExecutionError
- Fix generic exceptions in SearchTheWebBlock → BlockExecutionError with
proper HTTP error handling
- Fix FirecrawlError 504 timeouts → BlockExecutionError with
service-specific messages
- Fix ReplicateBlock validation errors → BlockInputError for 422 status,
BlockExecutionError for others
- Add comprehensive HTTP error handling with
HTTPClientError/HTTPServerError classes
- Implement filename sanitization for "File name too long" errors
- Add proper User-Agent handling for Wikipedia API compliance
- Fix type conversion for string subclasses like ShortTextType
- Add support for moderation errors with proper context propagation

## Test plan
- [x] All modified blocks now properly categorize errors instead of
raising BlockUnknownError
- [x] Type conversion tests pass for ShortTextType and other string
subclasses
- [x] Formatting and linting pass
- [x] Exception constructors include required block_name and block_id
parameters

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-12-19 13:19:58 +01:00
Bently
eed07b173a fix(frontend/builder): automatically frame agent when opening in builder (#11640)
## Summary
- Fixed auto-frame timing in new builder - now calls `fitView` after
nodes are rendered instead of on mount
- Replaced manual viewport calculation in legacy builder with React
Flow's `fitView` for consistency
- Both builders now properly center and frame all blocks when opening an
agent

  ## Test plan
- [x] Open an existing agent with multiple blocks in the new builder -
verify all blocks are visible and centered
- [x] Open an existing agent in the legacy builder - verify all blocks
are visible and centered
  - [x] Verify the manual "Frame" button still works correctly
2025-12-18 18:07:40 +00:00
Ubbe
c5e8b0b08f fix(frontend): modal hidden overflow (#11642)
## Changes 🏗️

### Before

<img width="300" height="528" alt="Screenshot 2025-12-18 at 19 07 37"
src="https://github.com/user-attachments/assets/6bedf02d-e1fd-4f87-956e-96fcd9996392"
/>


### After

<img width="300" height="576" alt="Screenshot 2025-12-18 at 19 02 12"
src="https://github.com/user-attachments/assets/f3175b94-447c-41b0-83cf-4334c02d1378"
/>


## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run locally and check crop
2025-12-18 19:51:24 +01:00
Ubbe
cd3e35df9e fix(frontend): small library/mobile improvements (#11626)
## Changes 🏗️

Adds the following improvements:

### Prevent credential row overflowing on mobile 📱 

**Before**

<img width="300" height="469" alt="Screenshot 2025-12-15 at 16 42 05"
src="https://github.com/user-attachments/assets/0d27394c-cec9-45a4-be82-804827343212"
/>

**After**

<img width="300" height="446" alt="Screenshot 2025-12-15 at 16 44 22"
src="https://github.com/user-attachments/assets/0f19e220-500d-4488-955e-612d38704727"
/>

_Just hide the ****** on mobile..._

### Make touch targets bigger on 📱 on the mobile menu

**Before**

<img width="300" height="607" alt="Screenshot 2025-12-15 at 16 58 28"
src="https://github.com/user-attachments/assets/762b7d4e-5269-41a4-88d2-ea745c50324e"
/>

Touch targets were quite small on mobile, especially for people with big
fingers...

**After**

<img width="300" height="589" alt="Screenshot 2025-12-15 at 16 54 02"
src="https://github.com/user-attachments/assets/beede0c4-5439-47a9-8bec-143b44306c6b"
/>

### New `<OverflowText />` component

<img width="600" height="551" alt="Screenshot 2025-12-15 at 16 48 20"
src="https://github.com/user-attachments/assets/a969223f-dd6a-497a-857e-18483aea28d7"
/>

A component that will render text like `<Text />`, but automatically
displays `...` and the full text content on a tooltip if it detects
there is no space for the full text length. Pretty useful for the type
of dashboard we are building, where sometimes titles or user-generated
content can be quite long, making the UI look whack.

### Google Drive Picker

Only allow the removal of files if it is not in read-only mode.


## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Checkout branch locally
  - [x] Test the above
2025-12-18 19:33:30 +01:00
dependabot[bot]
4c474417bc chore(frontend/deps-dev): bump import-in-the-middle from 1.14.2 to 2.0.0 in /autogpt_platform/frontend (#11357)
Bumps
[import-in-the-middle](https://github.com/nodejs/import-in-the-middle)
from 1.14.2 to 2.0.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/nodejs/import-in-the-middle/releases">import-in-the-middle's
releases</a>.</em></p>
<blockquote>
<h2>import-in-the-middle: v2.0.0</h2>
<h2><a
href="https://github.com/nodejs/import-in-the-middle/compare/import-in-the-middle-v1.15.0...import-in-the-middle-v2.0.0">2.0.0</a>
(2025-10-14)</h2>
<h3>⚠ BREAKING CHANGES</h3>
<p>This was only a new major out of an abundance of caution. The hook
code has been converted to ESM to work around some loader issues. There
should actually be no breaking changes when using
<code>import-in-the-middle/hook.mjs</code> or the exported
<code>Hook</code> API.</p>
<h3>Features</h3>
<ul>
<li>convert all modules running in loader thread to ESM (<a
href="https://redirect.github.com/nodejs/import-in-the-middle/issues/210">#210</a>)
(<a
href="da7c7a6904">da7c7a6</a>)</li>
</ul>
<h2>import-in-the-middle: v1.15.0</h2>
<h2><a
href="https://github.com/nodejs/import-in-the-middle/compare/import-in-the-middle-v1.14.4...import-in-the-middle-v1.15.0">1.15.0</a>
(2025-10-09)</h2>
<h3>Features</h3>
<ul>
<li>Compatibility with specifier imports (<a
href="https://redirect.github.com/nodejs/import-in-the-middle/issues/211">#211</a>)
(<a
href="83d662a8e1">83d662a</a>)</li>
</ul>
<h2>import-in-the-middle: v1.14.4</h2>
<h2><a
href="https://github.com/nodejs/import-in-the-middle/compare/import-in-the-middle-v1.14.3...import-in-the-middle-v1.14.4">1.14.4</a>
(2025-09-25)</h2>
<h3>Bug Fixes</h3>
<ul>
<li>Revert &quot;use <code>createRequire</code> to load
<code>hook.js</code> (<a
href="https://redirect.github.com/nodejs/import-in-the-middle/issues/205">#205</a>)&quot;
(<a
href="https://redirect.github.com/nodejs/import-in-the-middle/issues/208">#208</a>)
(<a
href="f23b7ef9e8">f23b7ef</a>)</li>
</ul>
<h2>import-in-the-middle: v1.14.3</h2>
<h2><a
href="https://github.com/nodejs/import-in-the-middle/compare/import-in-the-middle-v1.14.2...import-in-the-middle-v1.14.3">1.14.3</a>
(2025-09-24)</h2>
<h3>Bug Fixes</h3>
<ul>
<li>use <code>createRequire</code> to load <code>hook.js</code> (<a
href="https://redirect.github.com/nodejs/import-in-the-middle/issues/205">#205</a>)
(<a
href="81a2ae0ea0">81a2ae0</a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/nodejs/import-in-the-middle/blob/main/CHANGELOG.md">import-in-the-middle's
changelog</a>.</em></p>
<blockquote>
<h2><a
href="https://github.com/nodejs/import-in-the-middle/compare/import-in-the-middle-v1.15.0...import-in-the-middle-v2.0.0">2.0.0</a>
(2025-10-14)</h2>
<h3>⚠ BREAKING CHANGES</h3>
<p>Converting all modules running in the loader thread to ESM should not
be a
breaking change for most users since it primarily affects internal
implementation
details. However, if you were referencing internal CJS files like
<code>hook.js</code> this will no longer work.</p>
<h3>Features</h3>
<ul>
<li>convert all modules running in loader thread to ESM (<a
href="https://redirect.github.com/nodejs/import-in-the-middle/issues/210">#210</a>)
(<a
href="da7c7a6904">da7c7a6</a>)</li>
</ul>
<h2><a
href="https://github.com/nodejs/import-in-the-middle/compare/import-in-the-middle-v1.14.4...import-in-the-middle-v1.15.0">1.15.0</a>
(2025-10-09)</h2>
<h3>Features</h3>
<ul>
<li>Compatibility with specifier imports (<a
href="https://redirect.github.com/nodejs/import-in-the-middle/issues/211">#211</a>)
(<a
href="83d662a8e1">83d662a</a>)</li>
</ul>
<h2><a
href="https://github.com/nodejs/import-in-the-middle/compare/import-in-the-middle-v1.14.3...import-in-the-middle-v1.14.4">1.14.4</a>
(2025-09-25)</h2>
<h3>Bug Fixes</h3>
<ul>
<li>Revert &quot;use <code>createRequire</code> to load
<code>hook.js</code> (<a
href="https://redirect.github.com/nodejs/import-in-the-middle/issues/205">#205</a>)&quot;
(<a
href="https://redirect.github.com/nodejs/import-in-the-middle/issues/208">#208</a>)
(<a
href="f23b7ef9e8">f23b7ef</a>)</li>
</ul>
<h2><a
href="https://github.com/nodejs/import-in-the-middle/compare/import-in-the-middle-v1.14.2...import-in-the-middle-v1.14.3">1.14.3</a>
(2025-09-24)</h2>
<h3>Bug Fixes</h3>
<ul>
<li>use <code>createRequire</code> to load <code>hook.js</code> (<a
href="https://redirect.github.com/nodejs/import-in-the-middle/issues/205">#205</a>)
(<a
href="81a2ae0ea0">81a2ae0</a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="d96619f7f2"><code>d96619f</code></a>
chore: release v2.0.0 (<a
href="https://redirect.github.com/nodejs/import-in-the-middle/issues/214">#214</a>)</li>
<li><a
href="da7c7a6904"><code>da7c7a6</code></a>
feat!: convert all modules running in loader thread to ESM (<a
href="https://redirect.github.com/nodejs/import-in-the-middle/issues/210">#210</a>)</li>
<li><a
href="b8fd97e98c"><code>b8fd97e</code></a>
chore: npm ignore several files used for dev (<a
href="https://redirect.github.com/nodejs/import-in-the-middle/issues/137">#137</a>)</li>
<li><a
href="94837a7427"><code>94837a7</code></a>
chore: release v1.15.0 (<a
href="https://redirect.github.com/nodejs/import-in-the-middle/issues/213">#213</a>)</li>
<li><a
href="83d662a8e1"><code>83d662a</code></a>
feat: Compatibility with specifier imports (<a
href="https://redirect.github.com/nodejs/import-in-the-middle/issues/211">#211</a>)</li>
<li><a
href="ffb5682e3f"><code>ffb5682</code></a>
chore: release v1.14.4 (<a
href="https://redirect.github.com/nodejs/import-in-the-middle/issues/209">#209</a>)</li>
<li><a
href="f23b7ef9e8"><code>f23b7ef</code></a>
fix: Revert &quot;use <code>createRequire</code> to load
<code>hook.js</code> (<a
href="https://redirect.github.com/nodejs/import-in-the-middle/issues/205">#205</a>)&quot;
(<a
href="https://redirect.github.com/nodejs/import-in-the-middle/issues/208">#208</a>)</li>
<li><a
href="f67ecb8323"><code>f67ecb8</code></a>
chore: release v1.14.3 (<a
href="https://redirect.github.com/nodejs/import-in-the-middle/issues/206">#206</a>)</li>
<li><a
href="81a2ae0ea0"><code>81a2ae0</code></a>
fix: use <code>createRequire</code> to load <code>hook.js</code> (<a
href="https://redirect.github.com/nodejs/import-in-the-middle/issues/205">#205</a>)</li>
<li><a
href="2fb1f21b3d"><code>2fb1f21</code></a>
chore: use npm@11 for OIDC publishing (<a
href="https://redirect.github.com/nodejs/import-in-the-middle/issues/204">#204</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/nodejs/import-in-the-middle/compare/import-in-the-middle-v1.14.2...import-in-the-middle-v2.0.0">compare
view</a></li>
</ul>
</details>
<details>
<summary>Maintainer changes</summary>
<p>This version was pushed to npm by <a
href="https://www.npmjs.com/~GitHub">GitHub Actions</a>, a new releaser
for import-in-the-middle since your current version.</p>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=import-in-the-middle&package-manager=npm_and_yarn&previous-version=1.14.2&new-version=2.0.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)


---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Lluis Agusti <hi@llu.lu>
Co-authored-by: Ubbe <hi@ubbe.dev>
2025-12-18 18:58:27 +01:00
dependabot[bot]
99e2261254 chore(frontend/deps-dev): bump eslint-config-next from 15.5.2 to 15.5.6 in /autogpt_platform/frontend (#11355)
Bumps
[eslint-config-next](https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next)
from 15.5.2 to 15.5.6.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/vercel/next.js/releases">eslint-config-next's
releases</a>.</em></p>
<blockquote>
<h2>v15.5.6</h2>
<blockquote>
<p>[!NOTE]<br />
This release is backporting bug fixes. It does <strong>not</strong>
include all pending features/changes on canary.</p>
</blockquote>
<h3>Core Changes</h3>
<ul>
<li>Turbopack: don't define process.cwd() in node_modules <a
href="https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next/issues/83452">#83452</a></li>
</ul>
<h3>Credits</h3>
<p>Huge thanks to <a
href="https://github.com/mischnic"><code>@​mischnic</code></a> for
helping!</p>
<h2>v15.5.5</h2>
<blockquote>
<p>[!NOTE]<br />
This release is backporting bug fixes. It does <strong>not</strong>
include all pending features/changes on canary.</p>
</blockquote>
<h3>Core Changes</h3>
<ul>
<li>Split code-frame into separate compiled package (<a
href="https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next/issues/84238">#84238</a>)</li>
<li>Add deprecation warning to Runtime config (<a
href="https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next/issues/84650">#84650</a>)</li>
<li>fix: unstable_cache should perform blocking revalidation during ISR
revalidation (<a
href="https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next/issues/84716">#84716</a>)</li>
<li>feat: <code>experimental.middlewareClientMaxBodySize</code> body
cloning limit (<a
href="https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next/issues/84722">#84722</a>)</li>
<li>fix: missing next/link types with typedRoutes (<a
href="https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next/issues/84779">#84779</a>)</li>
</ul>
<h3>Misc Changes</h3>
<ul>
<li>docs: early October improvements and fixes (<a
href="https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next/issues/84334">#84334</a>)</li>
</ul>
<h3>Credits</h3>
<p>Huge thanks to <a
href="https://github.com/devjiwonchoi"><code>@​devjiwonchoi</code></a>,
<a href="https://github.com/ztanner"><code>@​ztanner</code></a>, and <a
href="https://github.com/icyJoseph"><code>@​icyJoseph</code></a> for
helping!</p>
<h2>v15.5.4</h2>
<blockquote>
<p>[!NOTE]<br />
This release is backporting bug fixes. It does <strong>not</strong>
include all pending features/changes on canary.</p>
</blockquote>
<h3>Core Changes</h3>
<ul>
<li>fix: ensure onRequestError is invoked when otel enabled (<a
href="https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next/issues/83343">#83343</a>)</li>
<li>fix: devtools initial position should be from next config (<a
href="https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next/issues/83571">#83571</a>)</li>
<li>[devtool] fix overlay styles are missing (<a
href="https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next/issues/83721">#83721</a>)</li>
<li>Turbopack: don't match dynamic pattern for node_modules packages (<a
href="https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next/issues/83176">#83176</a>)</li>
<li>Turbopack: don't treat metadata routes as RSC (<a
href="https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next/issues/82911">#82911</a>)</li>
<li>[turbopack] Improve handling of symlink resolution errors in
track_glob and read_glob (<a
href="https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next/issues/83357">#83357</a>)</li>
<li>Turbopack: throw large static metadata error earlier (<a
href="https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next/issues/82939">#82939</a>)</li>
<li>fix: error overlay not closing when backdrop clicked (<a
href="https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next/issues/83981">#83981</a>)</li>
<li>Turbopack: flush Node.js worker IPC on error (<a
href="https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next/issues/84077">#84077</a>)</li>
</ul>
<h3>Misc Changes</h3>
<ul>
<li>[CNA] use linter preference (<a
href="https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next/issues/83194">#83194</a>)</li>
<li>CI: use KV for test timing data (<a
href="https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next/issues/83745">#83745</a>)</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="55ef0e3ebc"><code>55ef0e3</code></a>
v15.5.6</li>
<li><a
href="81f530db26"><code>81f530d</code></a>
v15.5.5</li>
<li><a
href="40f1d7814d"><code>40f1d78</code></a>
v15.5.4</li>
<li><a
href="07d1cbc9c6"><code>07d1cbc</code></a>
v15.5.3</li>
<li>See full diff in <a
href="https://github.com/vercel/next.js/commits/v15.5.6/packages/eslint-config-next">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=eslint-config-next&package-manager=npm_and_yarn&previous-version=15.5.2&new-version=15.5.6)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)


Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Lluis Agusti <hi@llu.lu>
Co-authored-by: Ubbe <hi@ubbe.dev>
2025-12-18 18:55:57 +01:00
dependabot[bot]
cab498fa8c chore(deps): Bump actions/stale from 9 to 10 (#10871)
Bumps [actions/stale](https://github.com/actions/stale) from 9 to 10.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/actions/stale/releases">actions/stale's
releases</a>.</em></p>
<blockquote>
<h2>v10.0.0</h2>
<h2>What's Changed</h2>
<h3>Breaking Changes</h3>
<ul>
<li>Upgrade to node 24 by <a
href="https://github.com/salmanmkc"><code>@​salmanmkc</code></a> in <a
href="https://redirect.github.com/actions/stale/pull/1279">actions/stale#1279</a>
Make sure your runner is on version v2.327.1 or later to ensure
compatibility with this release. <a
href="https://github.com/actions/runner/releases/tag/v2.327.1">Release
Notes</a></li>
</ul>
<h3>Enhancement</h3>
<ul>
<li>Introducing sort-by option by <a
href="https://github.com/suyashgaonkar"><code>@​suyashgaonkar</code></a>
in <a
href="https://redirect.github.com/actions/stale/pull/1254">actions/stale#1254</a></li>
</ul>
<h3>Dependency Upgrades</h3>
<ul>
<li>Upgrade actions/publish-immutable-action from 0.0.3 to 0.0.4 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/actions/stale/pull/1186">actions/stale#1186</a></li>
<li>Upgrade undici from 5.28.4 to 5.28.5 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/actions/stale/pull/1201">actions/stale#1201</a></li>
<li>Upgrade <code>@​action/cache</code> from 4.0.0 to 4.0.2 by <a
href="https://github.com/aparnajyothi-y"><code>@​aparnajyothi-y</code></a>
in <a
href="https://redirect.github.com/actions/stale/pull/1226">actions/stale#1226</a></li>
<li>Upgrade <code>@​action/cache</code> from 4.0.2 to 4.0.3 by <a
href="https://github.com/suyashgaonkar"><code>@​suyashgaonkar</code></a>
in <a
href="https://redirect.github.com/actions/stale/pull/1233">actions/stale#1233</a></li>
<li>Upgrade undici from 5.28.5 to 5.29.0 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/actions/stale/pull/1251">actions/stale#1251</a></li>
<li>Upgrade form-data to bring in fix for critical vulnerability by <a
href="https://github.com/gowridurgad"><code>@​gowridurgad</code></a> in
<a
href="https://redirect.github.com/actions/stale/pull/1277">actions/stale#1277</a></li>
</ul>
<h3>Documentation changes</h3>
<ul>
<li>Changelog update for recent releases by <a
href="https://github.com/suyashgaonkar"><code>@​suyashgaonkar</code></a>
in <a
href="https://redirect.github.com/actions/stale/pull/1224">actions/stale#1224</a></li>
<li>Permissions update in Readme by <a
href="https://github.com/ghadimir"><code>@​ghadimir</code></a> in <a
href="https://redirect.github.com/actions/stale/pull/1248">actions/stale#1248</a></li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a
href="https://github.com/suyashgaonkar"><code>@​suyashgaonkar</code></a>
made their first contribution in <a
href="https://redirect.github.com/actions/stale/pull/1224">actions/stale#1224</a></li>
<li><a href="https://github.com/GhadimiR"><code>@​GhadimiR</code></a>
made their first contribution in <a
href="https://redirect.github.com/actions/stale/pull/1248">actions/stale#1248</a></li>
<li><a
href="https://github.com/gowridurgad"><code>@​gowridurgad</code></a>
made their first contribution in <a
href="https://redirect.github.com/actions/stale/pull/1277">actions/stale#1277</a></li>
<li><a href="https://github.com/salmanmkc"><code>@​salmanmkc</code></a>
made their first contribution in <a
href="https://redirect.github.com/actions/stale/pull/1279">actions/stale#1279</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/actions/stale/compare/v9...v10.0.0">https://github.com/actions/stale/compare/v9...v10.0.0</a></p>
<h2>v9.1.0</h2>
<h2>What's Changed</h2>
<ul>
<li>Documentation update by <a
href="https://github.com/Marukome0743"><code>@​Marukome0743</code></a>
in <a
href="https://redirect.github.com/actions/stale/pull/1116">actions/stale#1116</a></li>
<li>Add workflow file for publishing releases to immutable action
package by <a
href="https://github.com/Jcambass"><code>@​Jcambass</code></a> in <a
href="https://redirect.github.com/actions/stale/pull/1179">actions/stale#1179</a></li>
<li>Update undici from 5.28.2 to 5.28.4 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/actions/stale/pull/1150">actions/stale#1150</a></li>
<li>Update actions/checkout from 3 to 4 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/actions/stale/pull/1091">actions/stale#1091</a></li>
<li>Update actions/publish-action from 0.2.2 to 0.3.0 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/actions/stale/pull/1147">actions/stale#1147</a></li>
<li>Update ts-jest from 29.1.1 to 29.2.5 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/actions/stale/pull/1175">actions/stale#1175</a></li>
<li>Update <code>@​actions/core</code> from 1.10.1 to 1.11.1 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/actions/stale/pull/1191">actions/stale#1191</a></li>
<li>Update <code>@​types/jest</code> from 29.5.11 to 29.5.14 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/actions/stale/pull/1193">actions/stale#1193</a></li>
<li>Update <code>@​actions/cache</code> from 3.2.2 to 4.0.0 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/actions/stale/pull/1194">actions/stale#1194</a></li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a
href="https://github.com/Marukome0743"><code>@​Marukome0743</code></a>
made their first contribution in <a
href="https://redirect.github.com/actions/stale/pull/1116">actions/stale#1116</a></li>
<li><a href="https://github.com/Jcambass"><code>@​Jcambass</code></a>
made their first contribution in <a
href="https://redirect.github.com/actions/stale/pull/1179">actions/stale#1179</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/actions/stale/compare/v9...v9.1.0">https://github.com/actions/stale/compare/v9...v9.1.0</a></p>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/actions/stale/blob/main/CHANGELOG.md">actions/stale's
changelog</a>.</em></p>
<blockquote>
<h1>Changelog</h1>
<h1>[9.1.0]</h1>
<h2>What's Changed</h2>
<ul>
<li>Documentation update by <a
href="https://github.com/Marukome0743"><code>@​Marukome0743</code></a>
in <a
href="https://redirect.github.com/actions/stale/pull/1116">actions/stale#1116</a></li>
<li>Add workflow file for publishing releases to immutable action
package by <a
href="https://github.com/Jcambass"><code>@​Jcambass</code></a> in <a
href="https://redirect.github.com/actions/stale/pull/1179">actions/stale#1179</a></li>
<li>Update undici from 5.28.2 to 5.28.4 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/actions/stale/pull/1150">actions/stale#1150</a></li>
<li>Update actions/checkout from 3 to 4 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/actions/stale/pull/1091">actions/stale#1091</a></li>
<li>Update actions/publish-action from 0.2.2 to 0.3.0 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/actions/stale/pull/1147">actions/stale#1147</a></li>
<li>Update ts-jest from 29.1.1 to 29.2.5 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/actions/stale/pull/1175">actions/stale#1175</a></li>
<li>Update <code>@​actions/core</code> from 1.10.1 to 1.11.1 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/actions/stale/pull/1191">actions/stale#1191</a></li>
<li>Update <code>@​types/jest</code> from 29.5.11 to 29.5.14 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/actions/stale/pull/1193">actions/stale#1193</a></li>
<li>Update <code>@​actions/cache</code> from 3.2.2 to 4.0.0 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/actions/stale/pull/1194">actions/stale#1194</a></li>
</ul>
<h1>[9.0.0]</h1>
<h2>Breaking Changes</h2>
<ol>
<li>Action is now stateful: If the action ends because of <a
href="https://github.com/actions/stale#operations-per-run">operations-per-run</a>
then the next run will start from the first unprocessed issue skipping
the issues processed during the previous run(s). The state is reset when
all the issues are processed. This should be considered for scheduling
workflow runs.</li>
<li>Version 9 of this action updated the runtime to Node.js 20. All
scripts are now run with Node.js 20 instead of Node.js 16 and are
affected by any breaking changes between Node.js 16 and 20.</li>
</ol>
<h2>What Else Changed</h2>
<ol>
<li>Performance optimization that removes unnecessary API calls by <a
href="https://github.com/dsame"><code>@​dsame</code></a> in <a
href="https://redirect.github.com/actions/stale/pull/1033/">#1033</a>;
fixes <a
href="https://redirect.github.com/actions/stale/issues/792">#792</a></li>
<li>Logs displaying current GitHub API rate limit by <a
href="https://github.com/dsame"><code>@​dsame</code></a> in <a
href="https://redirect.github.com/actions/stale/pull/1032">#1032</a>;
addresses <a
href="https://redirect.github.com/actions/stale/issues/1029">#1029</a></li>
</ol>
<p>For more information, please read the <a
href="https://github.com/actions/stale#readme">action documentation</a>
and its <a href="https://github.com/actions/stale#statefulness">section
about statefulness</a></p>
<h1>[4.1.1]</h1>
<p>In scope of this release we updated <a
href="https://redirect.github.com/actions/stale/pull/957">actions/core
to 1.10.0</a> for v4 and <a
href="https://redirect.github.com/actions/stale/pull/662">fixed issues
operation count</a>.</p>
<h1>[8.0.0]</h1>
<p>⚠️ This version contains breaking changes ⚠️</p>
<ul>
<li>New option labels-to-remove-when-stale enables users to specify list
of comma delimited labels that will be removed when the issue or PR
becomes stale by <a
href="https://github.com/panticmilos"><code>@​panticmilos</code></a> <a
href="https://redirect.github.com/actions/stale/issues/770">actions/stale#770</a></li>
<li>Skip deleting the branch in the upstream of a forked repo by <a
href="https://github.com/dsame"><code>@​dsame</code></a> <a
href="https://redirect.github.com/actions/stale/pull/913">actions/stale#913</a></li>
<li>abort the build on the error by <a
href="https://github.com/dsame"><code>@​dsame</code></a> in <a
href="https://redirect.github.com/actions/stale/pull/935">actions/stale#935</a></li>
</ul>
<h1>[7.0.0]</h1>
<p>⚠️ Breaking change ⚠️</p>
<ul>
<li>Allow daysBeforeStale options to be float by <a
href="https://github.com/irega"><code>@​irega</code></a> in <a
href="https://redirect.github.com/actions/stale/pull/841">actions/stale#841</a></li>
<li>Use cache in check-dist.yml by <a
href="https://github.com/jongwooo"><code>@​jongwooo</code></a> in <a
href="https://redirect.github.com/actions/stale/pull/876">actions/stale#876</a></li>
<li>fix print outputs step in existing workflows by <a
href="https://github.com/irega"><code>@​irega</code></a> in <a
href="https://redirect.github.com/actions/stale/pull/859">actions/stale#859</a></li>
<li>Update issue and PR templates, add/delete workflow files by <a
href="https://github.com/IvanZosimov"><code>@​IvanZosimov</code></a> in
<a
href="https://redirect.github.com/actions/stale/pull/880">actions/stale#880</a></li>
<li>Update how stale handles exempt items by <a
href="https://github.com/johnsudol"><code>@​johnsudol</code></a> in <a
href="https://redirect.github.com/actions/stale/pull/874">actions/stale#874</a></li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="3a9db7e6a4"><code>3a9db7e</code></a>
Upgrade to node 24 (<a
href="https://redirect.github.com/actions/stale/issues/1279">#1279</a>)</li>
<li><a
href="8f717f0dfc"><code>8f717f0</code></a>
Bumps form-data (<a
href="https://redirect.github.com/actions/stale/issues/1277">#1277</a>)</li>
<li><a
href="a92fd57ffe"><code>a92fd57</code></a>
build(deps): bump undici from 5.28.5 to 5.29.0 (<a
href="https://redirect.github.com/actions/stale/issues/1251">#1251</a>)</li>
<li><a
href="128b2c81d0"><code>128b2c8</code></a>
Introducing sort-by option (<a
href="https://redirect.github.com/actions/stale/issues/1254">#1254</a>)</li>
<li><a
href="f78de9780e"><code>f78de97</code></a>
Update README.md (<a
href="https://redirect.github.com/actions/stale/issues/1248">#1248</a>)</li>
<li><a
href="816d9db1ab"><code>816d9db</code></a>
Upgrade <code>@​action/cache</code> from 4.0.2 to 4.0.3 (<a
href="https://redirect.github.com/actions/stale/issues/1233">#1233</a>)</li>
<li><a
href="ba23c1cb02"><code>ba23c1c</code></a>
upgrade actions/cache from 4.0.0 to 4.0.2 (<a
href="https://redirect.github.com/actions/stale/issues/1226">#1226</a>)</li>
<li><a
href="a65e88a9b9"><code>a65e88a</code></a>
build(deps): bump undici from 5.28.4 to 5.28.5 (<a
href="https://redirect.github.com/actions/stale/issues/1201">#1201</a>)</li>
<li><a
href="d4df79c591"><code>d4df79c</code></a>
Updates to CHANGELOG.MD for recent releases (<a
href="https://redirect.github.com/actions/stale/issues/1224">#1224</a>)</li>
<li><a
href="ee7ef89499"><code>ee7ef89</code></a>
build(deps): bump actions/publish-immutable-action from 0.0.3 to 0.0.4
(<a
href="https://redirect.github.com/actions/stale/issues/1186">#1186</a>)</li>
<li>See full diff in <a
href="https://github.com/actions/stale/compare/v9...v10">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=actions/stale&package-manager=github_actions&previous-version=9&new-version=10)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)


---

> [!NOTE]
> Update the stale-issues workflow to use `actions/stale@v10` instead of
`v9`.

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
2025-12-18 17:34:04 +00:00
Bently
22078671df feat(frontend): increase file upload size limit to 256MB (#11634)
- Updated Next.js configuration to set body size limits for server
actions and API routes.
- Enhanced error handling in the API client to provide user-friendly
messages for file size errors.
- Added user-friendly error messages for 413 Payload Too Large responses
in API error parsing.

These changes ensure that file uploads are consistent with backend
limits and improve user experience during uploads.
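
A minimal sketch of the Next.js side of this change (assumed shape and value; the PR may set this differently or in more places):

```ts
// next.config.ts -- raise the Server Actions body size limit so large
// uploads are not rejected by the framework before reaching the backend.
import type { NextConfig } from "next";

const nextConfig: NextConfig = {
  experimental: {
    serverActions: {
      bodySizeLimit: "256mb",
    },
  },
};

export default nextConfig;
```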

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Upload a file bigger than 10MB and it works
  - [x] Upload a file bigger than 256MB and you see an official error
stating the max file size is 256MB
2025-12-18 17:29:20 +00:00
dependabot[bot]
0082a72657 chore(deps): Bump actions/labeler from 5 to 6 (#10868)
Bumps [actions/labeler](https://github.com/actions/labeler) from 5 to 6.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/actions/labeler/releases">actions/labeler's
releases</a>.</em></p>
<blockquote>
<h2>v6.0.0</h2>
<h2>What's Changed</h2>
<ul>
<li>Add workflow file for publishing releases to immutable action
package by <a
href="https://github.com/jcambass"><code>@​jcambass</code></a> in <a
href="https://redirect.github.com/actions/labeler/pull/802">actions/labeler#802</a></li>
</ul>
<h3>Breaking Changes</h3>
<ul>
<li>Upgrade Node.js version to 24 in action and dependencies <a
href="https://github.com/salmanmkc"><code>@​salmanmkc</code></a> in <a
href="https://redirect.github.com/actions/labeler/pull/891">actions/labeler#891</a>
Make sure your runner is on version v2.327.1 or later to ensure
compatibility with this release. <a
href="https://github.com/actions/runner/releases/tag/v2.327.1">Release
Notes</a></li>
</ul>
<h3>Dependency Upgrades</h3>
<ul>
<li>Upgrade eslint-config-prettier from 9.0.0 to 9.1.0 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/actions/labeler/pull/711">actions/labeler#711</a></li>
<li>Upgrade eslint from 8.52.0 to 8.55.0 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/actions/labeler/pull/720">actions/labeler#720</a></li>
<li>Upgrade <code>@​types/jest</code> from 29.5.6 to 29.5.11 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/actions/labeler/pull/719">actions/labeler#719</a></li>
<li>Upgrade <code>@​types/js-yaml</code> from 4.0.8 to 4.0.9 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/actions/labeler/pull/718">actions/labeler#718</a></li>
<li>Upgrade <code>@​typescript-eslint/parser</code> from 6.9.0 to 6.14.0
by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/actions/labeler/pull/717">actions/labeler#717</a></li>
<li>Upgrade prettier from 3.0.3 to 3.1.1 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/actions/labeler/pull/726">actions/labeler#726</a></li>
<li>Upgrade eslint from 8.55.0 to 8.56.0 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/actions/labeler/pull/725">actions/labeler#725</a></li>
<li>Upgrade <code>@​typescript-eslint/parser</code> from 6.14.0 to
6.19.0 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/actions/labeler/pull/745">actions/labeler#745</a></li>
<li>Upgrade eslint-plugin-jest from 27.4.3 to 27.6.3 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/actions/labeler/pull/744">actions/labeler#744</a></li>
<li>Upgrade <code>@​typescript-eslint/eslint-plugin</code> from 6.9.0 to
6.20.0 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/actions/labeler/pull/750">actions/labeler#750</a></li>
<li>Upgrade prettier from 3.1.1 to 3.2.5 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/actions/labeler/pull/752">actions/labeler#752</a></li>
<li>Upgrade undici from 5.26.5 to 5.28.3 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/actions/labeler/pull/757">actions/labeler#757</a></li>
<li>Upgrade braces from 3.0.2 to 3.0.3 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/actions/labeler/pull/789">actions/labeler#789</a></li>
<li>Upgrade minimatch from 9.0.3 to 10.0.1 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/actions/labeler/pull/805">actions/labeler#805</a></li>
<li>Upgrade <code>@​actions/core</code> from 1.10.1 to 1.11.1 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/actions/labeler/pull/811">actions/labeler#811</a></li>
<li>Upgrade typescript from 5.4.3 to 5.7.2 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/actions/labeler/pull/819">actions/labeler#819</a></li>
<li>Upgrade <code>@​typescript-eslint/parser</code> from 7.3.1 to 8.17.0
by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/actions/labeler/pull/824">actions/labeler#824</a></li>
<li>Upgrade prettier from 3.2.5 to 3.4.2 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/actions/labeler/pull/825">actions/labeler#825</a></li>
<li>Upgrade <code>@​types/jest</code> from 29.5.12 to 29.5.14 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/actions/labeler/pull/827">actions/labeler#827</a></li>
<li>Upgrade eslint-plugin-jest from 27.9.0 to 28.9.0 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/actions/labeler/pull/832">actions/labeler#832</a></li>
<li>Upgrade ts-jest from 29.1.2 to 29.2.5 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/actions/labeler/pull/831">actions/labeler#831</a></li>
<li>Upgrade <code>@​vercel/ncc</code> from 0.38.1 to 0.38.3 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/actions/labeler/pull/830">actions/labeler#830</a></li>
<li>Upgrade typescript from 5.7.2 to 5.7.3 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/actions/labeler/pull/835">actions/labeler#835</a></li>
<li>Upgrade eslint-plugin-jest from 28.9.0 to 28.11.0 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/actions/labeler/pull/839">actions/labeler#839</a></li>
<li>Upgrade undici from 5.28.4 to 5.28.5 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/actions/labeler/pull/842">actions/labeler#842</a></li>
<li>Upgrade <code>@​octokit/request-error</code> from 5.0.1 to 5.1.1 by
<a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/actions/labeler/pull/846">actions/labeler#846</a></li>
</ul>
<h3>Documentation changes</h3>
<ul>
<li>Add note regarding <code>pull_request_target</code> to README.md by
<a href="https://github.com/silverwind"><code>@​silverwind</code></a> in
<a
href="https://redirect.github.com/actions/labeler/pull/669">actions/labeler#669</a></li>
<li>Update readme with additional examples and important note about
<code>pull_request_target</code> event by <a
href="https://github.com/IvanZosimov"><code>@​IvanZosimov</code></a> in
<a
href="https://redirect.github.com/actions/labeler/pull/721">actions/labeler#721</a></li>
<li>Document update - permission section by <a
href="https://github.com/harithavattikuti"><code>@​harithavattikuti</code></a>
in <a
href="https://redirect.github.com/actions/labeler/pull/840">actions/labeler#840</a></li>
<li>Improvement in documentation for pull_request_target event usage in
README by <a
href="https://github.com/suyashgaonkar"><code>@​suyashgaonkar</code></a>
in <a
href="https://redirect.github.com/actions/labeler/pull/871">actions/labeler#871</a></li>
<li>Fix broken links in documentation by <a
href="https://github.com/suyashgaonkar"><code>@​suyashgaonkar</code></a>
in <a
href="https://redirect.github.com/actions/labeler/pull/822">actions/labeler#822</a></li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a
href="https://github.com/silverwind"><code>@​silverwind</code></a> made
their first contribution in <a
href="https://redirect.github.com/actions/labeler/pull/669">actions/labeler#669</a></li>
<li><a href="https://github.com/Jcambass"><code>@​Jcambass</code></a>
made their first contribution in <a
href="https://redirect.github.com/actions/labeler/pull/802">actions/labeler#802</a></li>
<li><a
href="https://github.com/suyashgaonkar"><code>@​suyashgaonkar</code></a>
made their first contribution in <a
href="https://redirect.github.com/actions/labeler/pull/822">actions/labeler#822</a></li>
<li><a
href="https://github.com/HarithaVattikuti"><code>@​HarithaVattikuti</code></a>
made their first contribution in <a
href="https://redirect.github.com/actions/labeler/pull/840">actions/labeler#840</a></li>
<li><a href="https://github.com/salmanmkc"><code>@​salmanmkc</code></a>
made their first contribution in <a
href="https://redirect.github.com/actions/labeler/pull/891">actions/labeler#891</a></li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="634933edcd"><code>634933e</code></a>
publish-action upgrade to 0.4.0 from 0.2.2 (<a
href="https://redirect.github.com/actions/labeler/issues/901">#901</a>)</li>
<li><a
href="f1a63e87db"><code>f1a63e8</code></a>
Update Node.js version to 24 in action and dependencies (<a
href="https://redirect.github.com/actions/labeler/issues/891">#891</a>)</li>
<li><a
href="b0a1180683"><code>b0a1180</code></a>
Bump <code>@​octokit/request-error</code> from 5.0.1 to 5.1.1 (<a
href="https://redirect.github.com/actions/labeler/issues/846">#846</a>)</li>
<li><a
href="110d44140c"><code>110d441</code></a>
Update README.md (<a
href="https://redirect.github.com/actions/labeler/issues/871">#871</a>)</li>
<li><a
href="bee50fefe1"><code>bee50fe</code></a>
Bump undici from 5.28.4 to 5.28.5 (<a
href="https://redirect.github.com/actions/labeler/issues/842">#842</a>)</li>
<li><a
href="6463cdb00e"><code>6463cdb</code></a>
Bump eslint-plugin-jest from 28.9.0 to 28.11.0 (<a
href="https://redirect.github.com/actions/labeler/issues/839">#839</a>)</li>
<li><a
href="c209686724"><code>c209686</code></a>
Bump typescript from 5.7.2 to 5.7.3 (<a
href="https://redirect.github.com/actions/labeler/issues/835">#835</a>)</li>
<li><a
href="5184940b54"><code>5184940</code></a>
Bump <code>@​vercel/ncc</code> from 0.38.1 to 0.38.3 (<a
href="https://redirect.github.com/actions/labeler/issues/830">#830</a>)</li>
<li><a
href="3629d5568b"><code>3629d55</code></a>
Document update - permission section (<a
href="https://redirect.github.com/actions/labeler/issues/840">#840</a>)</li>
<li><a
href="d24f7f3731"><code>d24f7f3</code></a>
Bump ts-jest from 29.1.2 to 29.2.5 (<a
href="https://redirect.github.com/actions/labeler/issues/831">#831</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/actions/labeler/compare/v5...v6">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=actions/labeler&package-manager=github_actions&previous-version=5&new-version=6)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)


Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-18 17:16:22 +00:00
Ubbe
9a1d940677 fix(frontend): onboarding run card (#11636)
## Changes 🏗️

### Before

<img width="300" height="730" alt="Screenshot 2025-12-18 at 17 16 57"
src="https://github.com/user-attachments/assets/f57efc83-aa55-4371-96af-e294cab9b03a"
/>

- extra label
- overflow

### After

<img width="300" height="642" alt="Screenshot 2025-12-18 at 17 41 53"
src="https://github.com/user-attachments/assets/50721293-c67c-438a-9c9d-ff7ffae11d14"
/>

## Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run locally
  - [x] Test the above
2025-12-18 18:08:19 +01:00
Abhimanyu Yadav
e640d36265 feat(frontend): Add special handling for AGENT block type in form and output handlers (#11595)
<!-- Clearly explain the need for these changes: -->

Agent blocks require different handling compared to standard blocks,
particularly for:
- Handle ID generation (using direct keys instead of generated IDs)
- Form data storage structure (nested under `inputs` key)
- Field ID parsing (filtering out schema path prefixes)

This PR implements special handling for `BlockUIType.AGENT` throughout
the form rendering and output handling components to ensure agents work
correctly in the flow editor.

### Changes 🏗️

<!-- Concisely describe all of the changes made in this pull request:
-->

- **CustomNode.tsx**: Pass `uiType` prop to `OutputHandler` component
- **FormCreator.tsx**: 
- Store agent form data in `hardcodedValues.inputs` instead of directly
in `hardcodedValues`
- Extract initial values from `hardcodedValues.inputs` for agent blocks
- **OutputHandler.tsx**: 
  - Accept `uiType` prop
- Use direct key as handle ID for agents instead of
`generateHandleId(key)`
- **useMarketplaceAgentsContent.ts**: 
- Fetch full agent details using `getV2GetLibraryAgent` before adding to
builder
- Ensures agent schemas are properly populated (fixes issue where
marketplace endpoint returns empty schemas)
- **AnyOfField.tsx**: Generate handle IDs for agents by filtering out
"root" and "properties" from the schema path (see the sketch after this
list)
- **FieldTemplate.tsx**: Apply the same handle ID generation logic for
agent fields
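
The schema-path filtering might look roughly like this (hypothetical helper and ID format, not the actual source):

```ts
// Hypothetical helper: derive an agent handle id from an RJSF field id by
// dropping the schema-path segments "root" and "properties" and keeping
// the remaining keys joined by the RJSF id separator.
function agentHandleIdFromFieldId(fieldId: string): string {
  return fieldId
    .split("_")
    .filter((part) => part !== "root" && part !== "properties")
    .join("_");
}

// e.g. "root_properties_inputs_properties_topic" -> "inputs_topic"
```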

### Checklist 📋

#### For code changes:

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Add an agent block from marketplace and verify it renders
correctly
- [x] Connect inputs/outputs to/from an agent block and verify
connections work
- [x] Fill in form fields for an agent block and verify data persists
correctly
  - [x] Verify agent blocks work in both new and existing flows
- [x] Test that non-agent blocks still work as before (regression test)
2025-12-18 03:42:55 +00:00
Swifty
858a8a818b updated code generation and initial chat session logic 2025-12-16 22:45:17 +01:00
Lluis Agusti
9e1354bfee chore: changes 2025-12-16 19:15:58 +01:00
Lluis Agusti
ba003a5e18 chore: fix chat history 2025-12-16 19:11:21 +01:00
Lluis Agusti
7a57531063 Merge remote-tracking branch 'origin/hackathon/copilot' into hackathon/copilot 2025-12-16 19:04:57 +01:00
Lluis Agusti
639a1ab0ed chore: improvements 2025-12-16 19:04:39 +01:00
Swifty
9abad07bbc add backfill command 2025-12-16 18:55:20 +01:00
Swifty
eeeeb5fe5f update graph generator to match graph generation project 2025-12-16 18:55:11 +01:00
Swifty
a163457bc0 added embedded store search 2025-12-16 18:52:45 +01:00
Lluis Agusti
a4e38be3e3 chore: fixes 2025-12-16 18:49:49 +01:00
Lluis Agusti
d71c39d24f Merge remote-tracking branch 'origin/hackathon/copilot' into hackathon/copilot 2025-12-16 18:32:46 +01:00
Lluis Agusti
fa7f17334d chore: improvements 2025-12-16 18:30:17 +01:00
Lluis Agusti
87728ee085 chore: more changes 2025-12-16 18:24:27 +01:00
Swifty
9932b05bc7 added block indexing 2025-12-16 18:23:47 +01:00
Swifty
7835bdd39e add onboarding endpoints 2025-12-16 18:00:07 +01:00
Lluis Agusti
806e3b63d5 Merge remote-tracking branch 'origin/hackathon/copilot' into hackathon/copilot 2025-12-16 17:58:52 +01:00
Lluis Agusti
0cc9ec5546 chore: hook up existing output renderers to chat 2025-12-16 17:58:37 +01:00
Swifty
e5fc9e8573 Merge branch 'hackathon/copilot' of github.com:Significant-Gravitas/AutoGPT into hackathon/copilot 2025-12-16 17:26:25 +01:00
Swifty
d29ae4105f updated prompt 2025-12-16 17:26:19 +01:00
Lluis Agusti
2731fd91c8 Merge remote-tracking branch 'origin/hackathon/copilot' into hackathon/copilot 2025-12-16 17:19:20 +01:00
Lluis Agusti
25bc22cc01 chore: make sessions nicer 2025-12-16 17:18:55 +01:00
Swifty
a3be6d8170 added agent generator 2025-12-16 17:18:32 +01:00
Lluis Agusti
fd4f405008 chore: sessions drawer 2025-12-16 17:07:58 +01:00
Swifty
1b352c479f add credentials popup for run_block 2025-12-16 17:06:03 +01:00
Lluis Agusti
7d17e6c470 Merge remote-tracking branch 'origin/hackathon/copilot' into hackathon/copilot 2025-12-16 16:55:18 +01:00
Lluis Agusti
0b576d4d48 chore: more frontend nice ui changes 2025-12-16 16:54:48 +01:00
Swifty
d4f76f9835 cache understanding 2025-12-16 16:36:57 +01:00
Swifty
290fe5d278 added migrations 2025-12-16 16:35:12 +01:00
Swifty
1a0dd4770b fixes 2025-12-16 16:33:00 +01:00
Swifty
e1c0c9397d Merge branch 'hackathon/copilot' of github.com:Significant-Gravitas/AutoGPT into hackathon/copilot 2025-12-16 16:31:27 +01:00
Swifty
06ce6fa9a1 fixing db queries 2025-12-16 16:31:22 +01:00
Swifty
a8c68b585a added logging, understanding and chat persistence 2025-12-16 16:30:29 +01:00
Lluis Agusti
22298c24fd chore: add page content and url to stream message 2025-12-16 16:21:59 +01:00
Lluis Agusti
5f45a33786 Merge remote-tracking branch 'origin/hackathon/copilot' into hackathon/copilot 2025-12-16 16:06:44 +01:00
Lluis Agusti
d9d6a66608 chore: refinements frontend 2025-12-16 16:06:26 +01:00
Swifty
3d8a967395 add agent output tool, find_library_agent tool and update run_agent to be able to run library agents directly 2025-12-16 15:52:26 +01:00
Lluis Agusti
17cef05b8b chore: wip 2025-12-16 15:51:10 +01:00
Swifty
917802aca8 Merge branch 'hackathon/copilot' of github.com:Significant-Gravitas/AutoGPT into hackathon/copilot 2025-12-16 15:23:59 +01:00
Swifty
e2b2d5f402 added a support faq to docs and updated search index 2025-12-16 15:23:53 +01:00
Lluis Agusti
d726db6488 Merge remote-tracking branch 'origin/hackathon/copilot' into hackathon/copilot 2025-12-16 15:08:52 +01:00
Lluis Agusti
253f2780c3 chore: move out of page into component 2025-12-16 15:08:36 +01:00
Swifty
cc2a366c6a added indexer and search example 2025-12-16 15:04:38 +01:00
Swifty
ad33659ef8 added search tool and pushed index 2025-12-16 15:04:22 +01:00
Zamil Majdy
cc9179178f feat(block): Human in The Loop Block restructure (#11627)
## Summary

This PR refactors the Human-In-The-Loop (HITL) review system backend to
improve data handling and API consistency.

## Changes

### Backend Refactoring

#### 1. **Block Output Schema Update** (`human_in_the_loop.py`)
- Replaced single `reviewed_data` and `status` fields with separate
`approved_data` and `rejected_data` outputs
- This allows downstream blocks to handle approved vs rejected data
differently without checking status
- Simplified test outputs to match the new schema (illustrated below)
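
Illustratively (a TypeScript sketch of the idea, not the block's real Python schema): with separate pins, a downstream consumer branches on which pin fired instead of inspecting a status field next to a single `reviewed_data` payload.

```ts
// Invented types for illustration only.
type ReviewOutput =
  | { pin: "approved_data"; payload: unknown }
  | { pin: "rejected_data"; payload: unknown };

function route(output: ReviewOutput) {
  if (output.pin === "approved_data") {
    // Continue the main flow with the (possibly reviewer-edited) data.
  } else {
    // Log or run an alternative branch; the reviewer-modified payload
    // is still available rather than being dropped.
  }
}
```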

#### 2. **Review Data Handling** (`human_review.py`)
- Modified `get_or_create_human_review` to always return
`review.payload` regardless of approval status
- Previously returned `None` for rejected reviews, which could cause
data loss
- Now preserves reviewer-modified data for both approved and rejected
cases

#### 3. **API Route Simplification** (`review/routes.py`)
- Streamlined review decision processing logic using ternary operator
- Unified data handling for both approved and rejected reviews
- Maintains backward compatibility while improving code clarity

## Why These Changes?

- **Better Data Flow**: Separate output pins for approved/rejected data
make workflow design more intuitive
- **Data Preservation**: Rejected reviews can still pass modified data
downstream for logging or alternative processing
- **Cleaner API**: Simplified decision processing reduces code
complexity and potential bugs

## Testing

- All existing tests pass with updated schema
- Backward compatibility maintained for existing workflows
- Human review functionality verified in both approved and rejected
scenarios

## Related

This is the backend portion of changes from #11529, applied separately
to the `feat/hitl` branch.
2025-12-16 12:14:14 +00:00
Ubbe
e8d37ab116 feat(frontend): add nice scrollable tabs on Selected Run view (#11596)
## Changes 🏗️


https://github.com/user-attachments/assets/7e49ed5b-c818-4aa3-b5d6-4fa86fada7ee

When the content of Summary + Outputs + Inputs is long enough, it will
show in this new `<ScrollableTabs />` component, which auto-scrolls the
content as you click on a tab.
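
The auto-scroll behavior amounts to something like this (assumed sketch, not the component's actual code):

```ts
// On tab click, smoothly scroll the matching section into view inside
// the shared scroll container.
function scrollToSection(sectionId: string) {
  document
    .getElementById(sectionId)
    ?.scrollIntoView({ behavior: "smooth", block: "start" });
}
```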

## Checklist 📋

### For code changes

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run the app locally
  - [x] Check the new page with scrollable tabs
2025-12-15 16:57:36 +07:00
Ubbe
7f7ef6a271 feat(frontend): improve agent inputs read-only (#11621)
## Changes 🏗️

The main goal of this PR is to improve how we display inputs used for a
given task.

Agent inputs can be of many types (text, long text, date, select, file,
etc.). Until now, we have tried to display them as text, which has not
always worked. Given we already have `<RunAgentInputs />`, which uses
form elements to display the inputs ( _prefilled with data_ ), most of
the time it will look better and less buggy than text.

### Before

<img width="800" height="614" alt="Screenshot 2025-12-14 at 17 45 44"
src="https://github.com/user-attachments/assets/3d851adf-9638-46c1-adfa-b5e68dc78bb0"
/>

### After

<img width="800" height="708" alt="Screenshot 2025-12-14 at 17 45 21"
src="https://github.com/user-attachments/assets/367f32b4-2c30-4368-8d63-4cad06e32437"
/>

### Other improvements

- 🗑️  Removed `<EditInputsModal />`
- it is not used, given the API does not support editing inputs for a
schedule yet
- Made `<InformationTooltip />` icon size customisable    

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run the app locally
- [x] Check that the new task views use form elements instead of text to
display inputs
2025-12-15 00:11:27 +07:00
Ubbe
aefac541d9 fix(frontend): force light mode for now (#11619)
## Changes 🏗️

We have the setup for light/dark mode support ( Tailwind + `next-themes`
), but not yet the contributor capacity to make the app dark-mode ready.
First, we need to make it look good in light mode 😆

This disables `dark:` mode classes in the code, to prevent the app from
looking oopsie when the user views it in a browser with a dark-mode
preference:

### Before these changes

<img width="800" height="739" alt="Screenshot 2025-12-14 at 17 09 25"
src="https://github.com/user-attachments/assets/76333e03-930a-40b6-b91e-47ee01bf2c00"
/>

### After

<img width="800" height="722" alt="Screenshot 2025-12-14 at 16 55 46"
src="https://github.com/user-attachments/assets/34d85359-c68f-474c-8c66-2bebf28f923e"
/>

## Checklist 📋

### For code changes

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run the app on a browser with dark mode preference
  - [x] It still renders in light mode without broken styles
2025-12-15 00:10:36 +07:00
Krzysztof Czerwinski
ff5c8f324b Merge branch 'master' into dev 2025-12-12 22:26:39 +09:00
Krzysztof Czerwinski
f121a22544 hotfix: update next (#11612)
Update next to `15.4.10`
2025-12-12 13:42:36 +01:00
Zamil Majdy
71157bddd7 feat(backend): add agent mode support to SmartDecisionMakerBlock with autonomous tool execution loops (#11547)
## Summary

<img width="2072" height="1836" alt="image"
src="https://github.com/user-attachments/assets/9d231a77-6309-46b9-bc11-befb5d8e9fcc"
/>

**🚀 Major Feature: Agent Mode Support**

Adds autonomous agent mode to SmartDecisionMakerBlock, enabling it to
execute tools directly in loops until tasks are completed, rather than
just yielding tool calls for external execution.

##  **Key New Features**

### 🤖 **Agent Mode with Tool Execution Loops**
- **New `agent_mode_max_iterations` parameter** controls execution
behavior:
  - `0` = Traditional mode (single LLM call, yield tool calls)
  - `1+` = Agent mode with iteration limit
  - `-1` = Infinite agent mode (loop until finished)

### 🔄 **Autonomous Tool Execution**  
- **Direct tool execution** instead of yielding for external handling
- **Multi-iteration loops** with conversation state management
- **Automatic completion detection** when LLM stops making tool calls
- **Iteration limit handling** with graceful completion messages

### 🏗️ **Proper Database Operations**
- **Replace manual execution ID generation** with proper
`upsert_execution_input`/`upsert_execution_output`
- **Real NodeExecutionEntry objects** from database results
- **Proper execution status management**: QUEUED → RUNNING →
COMPLETED/FAILED

### 🔧 **Enhanced Type Safety**
- **Pydantic models** replace TypedDict: `ToolInfo` and
`ExecutionParams`
- **Runtime validation** with better error messages
- **Improved developer experience** with IDE support

## 🔧 **Technical Implementation**

### Agent Mode Flow:
```python
# Agent mode enabled with iterations
if input_data.agent_mode_max_iterations != 0:
    async for result in self._execute_tools_agent_mode(...):
        yield result  # "conversations", "finished"
    return

# Traditional mode (existing behavior)  
# Single LLM call + yield tool calls for external execution
```

### Tool Execution with Database Operations:
```python
# Before: Manual execution IDs
tool_exec_id = f"{node_exec_id}_tool_{sink_node_id}_{len(input_data)}"

# After: Proper database operations
node_exec_result, final_input_data = await db_client.upsert_execution_input(
    node_id=sink_node_id,
    graph_exec_id=execution_params.graph_exec_id,
    input_name=input_name, 
    input_data=input_value,
)
```

### Type Safety with Pydantic:
```python
# Before: Dict access prone to errors
execution_params["user_id"]  

# After: Validated model access
execution_params.user_id  # Runtime validation + IDE support
```

## 🧪 **Comprehensive Test Coverage**

- **Agent mode execution tests** with multi-iteration scenarios
- **Database operation verification** 
- **Type safety validation**
- **Backward compatibility** for traditional mode
- **Enhanced dynamic fields tests**

## 📊 **Usage Examples**

### Traditional Mode (Existing Behavior):
```python
SmartDecisionMakerBlock.Input(
    prompt="Search for keywords",
    agent_mode_max_iterations=0  # Default
)
# → Yields tool calls for external execution
```

### Agent Mode (New Feature):
```python  
SmartDecisionMakerBlock.Input(
    prompt="Complete this task using available tools",
    agent_mode_max_iterations=5  # Max 5 iterations
)
# → Executes tools directly until task completion or iteration limit
```

### Infinite Agent Mode:
```python
SmartDecisionMakerBlock.Input(
    prompt="Analyze and process this data thoroughly", 
    agent_mode_max_iterations=-1  # No limit, run until finished
)
# → Executes tools autonomously until LLM indicates completion
```

##  **Backward Compatibility**

- **Zero breaking changes** to existing functionality
- **Traditional mode remains default** (`agent_mode_max_iterations=0`)
- **All existing tests pass**
- **Same API for tool definitions and execution**

This transforms the SmartDecisionMakerBlock from a simple tool call
generator into a powerful autonomous agent capable of complex multi-step
task execution! 🎯

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-12-12 09:58:06 +00:00
Bentlybro
152e747ea6 Merge branch 'master' into dev 2025-12-11 14:40:01 +00:00
Reinier van der Leer
4d4741d558 fix(frontend/library): Transition from empty tasks view on task init (#11600)
- Resolves #11599

### Changes 🏗️

- Manually update item counts when initiating a task from `EmptyTasks`
view
- Other improvements made while debugging

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] `NewAgentLibraryView` transitions to full layout when a first task
is created
- [x] `NewAgentLibraryView` transitions to full layout when a first
trigger is set up
2025-12-11 11:13:53 +00:00
Krzysztof Czerwinski
bd37fe946d feat(platform): Builder search history (#11457)
Preserve user searches in the new builder and cache search results for
better efficiency.
Searches are saved, so the user can see their previous searches.

### Changes 🏗️

- Add `BuilderSearch` column&migration to save user search (with all
filters)
- Builder `db.py` now caches all search results using `@cached` and
returns paginated results, so subsequent pages are returned much more quickly
- Score and sort results
- Update models&routes
- Update frontend, so it works properly with modified endpoints
- Frontend: store `searchId` and use it for subsequent searches, so we
don't save partial searches (e.g. "b", "bl", ..., "block"). The search id
is reset when the user clears the search field.
- Add clickable chips to the Suggestions builder tab
- Add `HorizontalScroll` component (chips use it)
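
The caching-plus-pagination idea, sketched with `functools.lru_cache` standing in for the platform's `@cached` decorator (toy data, not the actual `db.py` code):

```python
from functools import lru_cache

CATALOG = ["block", "blog", "clock", "blocker"]  # toy data standing in for the DB


@lru_cache(maxsize=256)
def search_blocks(query: str) -> tuple:
    # The expensive search runs once per query; results are scored/sorted and cached.
    hits = [name for name in CATALOG if query in name]
    return tuple(sorted(hits))


def search_page(query: str, page: int, page_size: int) -> list:
    # Later pages slice the cached result set instead of re-running the search.
    results = search_blocks(query)
    return list(results[page * page_size : (page + 1) * page_size])


print(search_page("blo", 0, 2))  # ['block', 'blocker'] — cached on first call
```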

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Search works and is cached
  - [x] Search sorts results
  - [x] Searches are preserved properly

---------

Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
2025-12-10 17:32:17 +00:00
Nicholas Tindle
7ff282c908 fix(frontend): Disable single dollar sign LaTeX mode in markdown rend… (#11598)
Single dollar signs ($10, $variable) are commonly used in content and
were being incorrectly interpreted as inline LaTeX math delimiters. This
change disables that behavior while keeping double dollar sign ($$...$$)
math blocks working.

## Changes 🏗️
- Configure the `remarkMath` plugin with `singleDollarTextMath: false` in
`MarkdownRenderer.tsx`
- Double dollar sign display math ($$...$$) continues to work as
expected
- Single dollar signs are no longer interpreted as inline math
delimiters

## Checklist 📋

For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Verify content with dollar amounts (e.g., “$100”) renders as plain
text
- [x] Verify double dollar sign math blocks ($$x^2$$) still render as
LaTeX
- [x] Verify other markdown features (code blocks, tables, links) still
work correctly

Co-authored-by: Claude <noreply@anthropic.com>
2025-12-10 17:10:52 +00:00
Reinier van der Leer
117bb05438 fix(frontend/library): Fix trigger UX flows in v3 library (#11589)
- Resolves #11586
- Follow-up to #11580

### Changes 🏗️

- Fix logic to include manual triggers as a possibility
- Fix input render logic to use trigger setup schema if applicable
- Fix rendering payload input for externally triggered runs
- Amend `RunAgentModal` to load preset inputs+credentials if selected
- Amend `SelectedTemplateView` to use modified input for run (if
applicable)
- Hide non-applicable buttons in `SelectedRunView` for externally
triggered runs
- Implement auto-navigation to `SelectedTriggerView` on trigger setup

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Can set up manual triggers
    - [x] Navigates to trigger view after setup
  - [x] Can set up automatic triggers
  - [x] Can create templates from runs
  - [x] Can run templates
  - [x] Can run templates with modified input
2025-12-10 15:52:02 +00:00
Nicholas Tindle
979d7c3b74 feat(blocks): Add 4 new GitHub webhook trigger blocks (#11588)
I want to be able to automate some actions on social media or our
server in response to actions on GitHub


<!-- Clearly explain the need for these changes: -->

### Changes 🏗️
Add trigger blocks for common GitHub events to enable OSS automation:
- GithubReleaseTriggerBlock: Trigger on release events (published, etc.)
- GithubStarTriggerBlock: Trigger on star events for milestone
celebrations
- GithubIssuesTriggerBlock: Trigger on issue events for
triage/notifications
- GithubDiscussionTriggerBlock: Trigger on discussion events for Q&A
sync
<!-- Concisely describe all of the changes made in this pull request:
-->

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Test Stars
  - [x] Test Discussions
  - [x] Test Issues
  - [x] Test Release

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-12-09 21:25:43 +00:00
Nicholas Tindle
95200b67f8 feat(blocks): add many new spreadsheet blocks (#11574)
<!-- Clearly explain the need for these changes: -->
We have lots we want to do with Google Sheets and we don't want a lack
of blocks to be a limiter, so I pre-built a lot of blocks!

### Changes 🏗️
Adds 24 new blocks for google sheets (tested and working)
| #  | Block                                | Description                             |
|----|--------------------------------------|-----------------------------------------|
| 1  | GoogleSheetsFilterRowsBlock          | Filter rows based on column conditions  |
| 2  | GoogleSheetsLookupRowBlock           | VLOOKUP-style row lookup                |
| 3  | GoogleSheetsDeleteRowsBlock          | Delete rows from a sheet                |
| 4  | GoogleSheetsGetColumnBlock           | Get data from a specific column         |
| 5  | GoogleSheetsSortBlock                | Sort sheet data                         |
| 6  | GoogleSheetsGetUniqueValuesBlock     | Get unique values from a column         |
| 7  | GoogleSheetsInsertRowBlock           | Insert rows into a sheet                |
| 8  | GoogleSheetsAddColumnBlock           | Add a new column                        |
| 9  | GoogleSheetsGetRowCountBlock         | Get the number of rows                  |
| 10 | GoogleSheetsRemoveDuplicatesBlock    | Remove duplicate rows                   |
| 11 | GoogleSheetsUpdateRowBlock           | Update an existing row                  |
| 12 | GoogleSheetsGetRowBlock              | Get a specific row by index             |
| 13 | GoogleSheetsDeleteColumnBlock        | Delete a column                         |
| 14 | GoogleSheetsCreateNamedRangeBlock    | Create a named range                    |
| 15 | GoogleSheetsListNamedRangesBlock     | List all named ranges                   |
| 16 | GoogleSheetsAddDropdownBlock         | Add dropdown validation to cells        |
| 17 | GoogleSheetsCopyToSpreadsheetBlock   | Copy sheet to another spreadsheet       |
| 18 | GoogleSheetsProtectRangeBlock        | Protect a range from editing            |
| 19 | GoogleSheetsExportCsvBlock           | Export sheet as CSV                     |
| 20 | GoogleSheetsImportCsvBlock           | Import CSV data                         |
| 21 | GoogleSheetsAddNoteBlock             | Add notes to cells                      |
| 22 | GoogleSheetsGetNotesBlock            | Get notes from cells                    |
| 23 | GoogleSheetsShareSpreadsheetBlock    | Share spreadsheet with users            |
| 24 | GoogleSheetsSetPublicAccessBlock     | Set public access permissions           |
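
For illustration, the column-letter arithmetic many of these blocks rely on; the helper names appear in the summary below, but the exact signatures and indexing convention are assumptions:

```python
def column_letter_to_index(letter: str) -> int:
    """'A' -> 1, 'Z' -> 26, 'AA' -> 27 (base-26 with no zero digit)."""
    index = 0
    for ch in letter.upper():
        index = index * 26 + (ord(ch) - ord("A") + 1)
    return index


def index_to_column_letter(index: int) -> str:
    """1 -> 'A', 27 -> 'AA'; inverse of the function above."""
    letters = ""
    while index > 0:
        index, rem = divmod(index - 1, 26)
        letters = chr(ord("A") + rem) + letters
    return letters


assert column_letter_to_index("AA") == 27
assert index_to_column_letter(702) == "ZZ"
```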


<!-- Concisely describe all of the changes made in this pull request:
-->

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Tested using the attached agent 
[super test for
spreadsheets_v9.json](https://github.com/user-attachments/files/24041582/super.test.for.spreadsheets_v9.json)


<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> Introduces a large suite of Google Sheets blocks for row/column ops,
filtering/sorting/lookup, CSV import/export, notes, named ranges,
protections, sheet copy, and sharing/public access, plus refactors
append to a simpler single-row append.
> 
> - **Google Sheets blocks (new)**:
> - **Data ops**: `GoogleSheetsFilterRowsBlock`,
`GoogleSheetsLookupRowBlock`, `GoogleSheetsDeleteRowsBlock`,
`GoogleSheetsGetColumnBlock`, `GoogleSheetsSortBlock`,
`GoogleSheetsGetUniqueValuesBlock`, `GoogleSheetsInsertRowBlock`,
`GoogleSheetsAddColumnBlock`, `GoogleSheetsGetRowCountBlock`,
`GoogleSheetsRemoveDuplicatesBlock`, `GoogleSheetsUpdateRowBlock`,
`GoogleSheetsGetRowBlock`, `GoogleSheetsDeleteColumnBlock`.
> - **Named ranges & validation**: `GoogleSheetsCreateNamedRangeBlock`,
`GoogleSheetsListNamedRangesBlock`, `GoogleSheetsAddDropdownBlock`.
> - **Sheet/admin**: `GoogleSheetsCopyToSpreadsheetBlock`,
`GoogleSheetsProtectRangeBlock`.
> - **CSV & notes**: `GoogleSheetsExportCsvBlock`,
`GoogleSheetsImportCsvBlock`, `GoogleSheetsAddNoteBlock`,
`GoogleSheetsGetNotesBlock`.
> - **Sharing**: `GoogleSheetsShareSpreadsheetBlock`,
`GoogleSheetsSetPublicAccessBlock`.
> - **Refactor**:
> - Rename and simplify append: `GoogleSheetsAppendRowBlock` (replaces
multi-row/dict input with single `row`), fixed insert option to
`INSERT_ROWS` and streamlined response.
> - **Utilities/Enums**:
> - Add helpers (`_column_letter_to_index`, `_index_to_column_letter`,
`_apply_filter`) and enums (`FilterOperator`, `SortOrder`, `ShareRole`,
`PublicAccessRole`).
> - Drive/Sheets service builders and file validation reused across new
blocks.
> 
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
6e9e2f4024. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Cursor Agent <cursoragent@cursor.com>
2025-12-09 17:28:22 +00:00
Abhimanyu Yadav
f8afc6044e fix(frontend): prevent file upload buttons from triggering form submission (#11576)
<!-- Clearly explain the need for these changes: -->

In the File Widget, the upload button was incorrectly behaving like a
submit button. When users clicked it, the rjsf library immediately
triggered form validation and displayed validation errors, even though
the user was only trying to upload a file.

This happened because HTML buttons inside a form default to
`type="submit"`, which triggers form submission on click. By explicitly
setting `type="button"` on all file-related buttons, we prevent them
from submitting the form while still allowing them to trigger the file
input dialog.

### Changes 🏗️

<!-- Concisely describe all of the changes made in this pull request:
-->

- Added `type="button"` attribute to the clear button in the compact
variant
- Added `type="button"` attribute to the upload button in the compact
variant
- Added `type="button"` attribute to the "Browse File" button in the
default variant

This ensures that clicking any of these buttons only triggers the
intended file selection/upload action without causing unwanted form
validation or submission.

### Checklist 📋

#### For code changes:

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Tested clicking the upload button in a form with File Widget -
form should not submit or show validation errors
- [x] Tested clicking the clear button - should clear the file without
triggering form validation
- [x] Tested clicking the "Browse File" button - should open file dialog
without triggering form validation
- [x] Verified file upload functionality still works correctly after
selecting a file
2025-12-09 15:54:17 +00:00
Abhimanyu Yadav
7edf01777e fix(frontend): sync flowVersion to URL when loading graph from Library (#11585)
<!-- Clearly explain the need for these changes: -->

When opening a graph from the Library, the `flowVersion` query parameter
was not being set in the URL. This caused issues when the graph data
didn't contain an internal `graphVersion`, resulting in the builder
failing and graphs breaking when running.

The `useGetV1GetSpecificGraph` hook relies on the `flowVersion` query
parameter to fetch the correct graph version. Without it being properly
set in the URL, the graph loading logic fails when the version
information is missing from the graph data itself.

### Changes 🏗️

- Added `setQueryStates` to the `useQueryStates` hook return value in
`useFlow.ts`
- Added logic to sync `flowVersion` to the URL query parameters when a
graph is loaded
- When `graph.version` is available, it now updates the `flowVersion`
query parameter in the URL (defaults to `1` if version is undefined)

This ensures the URL stays in sync with the loaded graph's version,
preventing builder failures and execution issues.

### Checklist 📋

#### For code changes:

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Open a graph from the Library that doesn't have a version in the
URL
- [x] Verify that `flowVersion` is automatically added to the URL query
parameters
  - [x] Verify that the graph loads correctly and can be executed
- [x] Verify that graphs with existing `flowVersion` in URL continue to
work correctly
- [x] Verify that graphs opened from Library with version information
sync correctly
2025-12-09 15:26:45 +00:00
Ubbe
c9681f5d44 fix(frontend): library page adjustments (#11587)
## Changes 🏗️

### Adjust layout and styles on mobile 📱 

<img width="448" height="843" alt="Screenshot 2025-12-09 at 22 53 14"
src="https://github.com/user-attachments/assets/159bdf4f-e6b2-42f5-8fdf-25f8a62c62d1"
/>

### Make the sidebar cards have contextual actions

<img width="486" height="243" alt="Screenshot 2025-12-09 at 22 53 27"
src="https://github.com/user-attachments/assets/2f530168-3217-47c4-b08d-feccbb9e9152"
/>

Depending on the card type, different types of actions are shown...

### Make buttons in "About agent" card do something

<img width="344" height="346" alt="Screenshot 2025-12-09 at 22 54 01"
src="https://github.com/user-attachments/assets/47181f80-1f68-4ef1-aecc-bbadc7cc9c44"
/>

### Other

- Hide `Schedule` button for agents with trigger run type
- Adjust secondary button background colour...
- Make drawer content scrollable on mobile 

## Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run locally and test the above

Co-authored-by: Abhimanyu Yadav <122007096+Abhi1992002@users.noreply.github.com>
2025-12-09 23:17:44 +07:00
Abhimanyu Yadav
1305325813 fix(frontend): preserve button shape in credential select when content is long (#11577)
<!-- Clearly explain the need for these changes: -->

When the content inside the credential select dropdown becomes too long,
the adjacent link buttons lose their rounded shape and appear squarish.
This happens when the text stretches the container or affects the layout
of the buttons.

The issue occurs because the button's width can shrink below its
intended size when the flex container is stretched by long credential
names. By adding an explicit minimum width constraint with `!min-w-8`,
we ensure the button maintains its proper dimensions and rounded
appearance regardless of the select dropdown's content length.

### Changes 🏗️

<!-- Concisely describe all of the changes made in this pull request:
-->

- Added `!min-w-8` to the external link button's className in
`SelectCredential` component to enforce a minimum width of 2rem (8 *
0.25rem)
- This ensures the button maintains its rounded shape even when the
adjacent select dropdown contains long credential names

### Checklist 📋

#### For code changes:

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Tested credential select with short credential names - button
should maintain rounded shape
- [x] Tested credential select with very long credential names (e.g.,
long provider names, usernames, and hosts) - button should maintain
rounded shape
2025-12-09 15:26:22 +00:00
Abhimanyu Yadav
4f349281bd feat(frontend): switch copied graph storage from local storage to clipboard (#11578)
### Changes 🏗️

This PR migrates the copy/paste functionality for graph nodes and edges
from local storage to the Clipboard API. This change addresses storage
limitations and enables cross-tab copying.


https://github.com/user-attachments/assets/6ef55713-ca5b-4562-bb54-4c12db241d30


**Key changes:**
- Replaced `localStorage` with `navigator.clipboard` API for copy/paste
operations
- Added `CLIPBOARD_PREFIX` constant (`"autogpt-flow-data:"`) to identify
our clipboard data and prevent conflicts with other clipboard content
- Added toast notifications to provide user feedback when copying nodes
- Added error handling for clipboard read/write operations with console
error logging
- Removed dependency on `@/services/storage/local-storage` for copied
flow data
- Updated `useCopyPaste` hook to use async clipboard operations with
proper promise handling

**Benefits:**
-  Removes local storage size limitations (5-10MB)
-  Enables copying nodes between browser tabs/windows
-  Provides better user feedback through toast notifications
-  More standard approach using native browser Clipboard API

### Checklist 📋

#### For code changes:

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Select one or more nodes in the flow editor
  - [x] Press `Ctrl+C` (or `Cmd+C` on Mac) to copy nodes
- [x] Verify toast notification appears showing "Copied successfully"
with node count
  - [x] Press `Ctrl+V` (or `Cmd+V` on Mac) to paste nodes
  - [x] Verify nodes are pasted at viewport center with new unique IDs
  - [x] Verify edges between copied nodes are also pasted correctly
- [x] Test copying nodes in one browser tab and pasting in another tab
(should work)
- [x] Test copying non-flow data (e.g., text) and verify paste doesn't
interfere with flow editor
2025-12-09 15:26:12 +00:00
Ubbe
6c43b34dee feat(frontend): add templates/triggers to new Library page view (#11580)
## Changes 🏗️

Add Templates to the new Agent Library page:

<img width="800" height="889" alt="Screenshot 2025-12-09 at 14 10 01"
src="https://github.com/user-attachments/assets/85f0d478-f3f9-4ccf-81df-b9a7f4ae8849"
/>

- You can create a template from a run ( new action button )
- Templates are listed and can be selected on the sidebar
- When viewing a template, you can edit it, create a task or delete it

Add Triggers to the new Agent Library page:

<img width="800" height="836" alt="Screenshot 2025-12-09 at 14 10 43"
src="https://github.com/user-attachments/assets/c722f807-c72f-4a4d-8778-e36bea203f6e"
/>

- When an agent contains a trigger block, the modal will create a
trigger
- When there are triggers, they are listed on the sidebar
- A trigger can be viewed and edited

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run the new page locally
2025-12-09 18:30:13 +07:00
Zamil Majdy
c1e21d07e6 feat(platform): add execution accuracy alert system (#11562)
## Summary

<img width="1263" height="883" alt="image"
src="https://github.com/user-attachments/assets/98d4f449-1897-4019-a599-846c27df4191"
/>
<img width="398" height="190" alt="image"
src="https://github.com/user-attachments/assets/0138ac02-420d-4f96-b980-74eb41e3c968"
/>

- Add execution accuracy monitoring with moving averages and Discord
alerts
- Dashboard visualization for accuracy trends and alert detection  
- Hourly monitoring for marketplace agents (≥10 executions in 30 days)
- Generated API client integration with type-safe models

## Features
- **Moving Average Analysis**: 3-day vs 7-day comparison with
configurable thresholds
- **Discord Notifications**: Hourly alerts for accuracy drops ≥10%
- **Dashboard UI**: Real-time trends visualization with alert status
- **Type Safety**: Generated API hooks and models throughout
- **Error Handling**: Graceful OpenAI configuration handling
- **PostgreSQL Optimization**: Window functions for efficient trend
queries
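
As a rough illustration of the windowed comparison (the 3-day/7-day windows and the ≥10% threshold come from this PR; the data layout and the relative-drop formula are assumptions):

```python
def accuracy_drop_alert(daily_accuracy: list[float], threshold: float = 0.10) -> bool:
    """True when the 3-day average trails the 7-day average by >= threshold (relative)."""
    if len(daily_accuracy) < 7:
        return False  # not enough history to compare the two windows
    avg3 = sum(daily_accuracy[-3:]) / 3
    avg7 = sum(daily_accuracy[-7:]) / 7
    return (avg7 - avg3) >= threshold * avg7


# Accuracy slid from ~0.9 to ~0.6 over the last three days -> alert fires.
print(accuracy_drop_alert([0.9, 0.9, 0.9, 0.9, 0.6, 0.6, 0.6]))  # True
```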

## Test plan
- [x] Backend accuracy monitoring logic tested with sample data
- [x] Frontend components using generated API hooks (no manual fetch)
- [x] Discord notification integration working
- [x] Admin authentication and authorization working
- [x] All formatting and linting checks passing
- [x] Error handling for missing OpenAI configuration
- [x] Test data available with `test-accuracy-agent-001`

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-12-08 19:28:57 +00:00
Ubbe
aaa8dcc5a8 feat(frontend): design updates run agent modal (#11570)
## Changes 🏗️

<img width="500" height="927" alt="Screenshot 2025-12-08 at 16 30 04"
src="https://github.com/user-attachments/assets/97170302-50ae-4750-99e1-275861ee9e08"
/>

<img width="500" height="897" alt="Screenshot 2025-12-08 at 16 30 10"
src="https://github.com/user-attachments/assets/0a7ac828-ebea-40de-a19e-fbf77b0c2eb7"
/>

Update the new **Run Agent Modal** as a new view. From now on, sections
( inputs, credentials, etc... ) are enclosed. Refactor all the existing
code to be more readable and [adhere to code
conventions](https://github.com/Significant-Gravitas/AutoGPT/blob/master/autogpt_platform/frontend/CONTRIBUTING.md).

### Other improvements

- Refactor `<CredentialInputs />`:
  - to adhere to code conventions
  - to show all existing credentials for a given provider
  - to allow deletion of a given credential
  - to allow to select/switch credentials for a given task
- Run Graph Errors:
- when the run fails, show the errors nicely on the toast and link to
the builder for further debugging
- Design System:
  - secondary button colour has been updated 
  - label sizes for form fields have been updated

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run the app locally
  - [x] Test the above updates running agents from the new library page
2025-12-08 21:47:32 +07:00
Ubbe
a46976decd fix(frontend): allow to close toasts (#11550)
## Changes 🏗️

<img width="795" height="275" alt="Screenshot 2025-12-04 at 21 05 55"
src="https://github.com/user-attachments/assets/e7572635-3976-4822-89ea-3e5e26196ab8"
/>

Allow closing toasts 🍞

## Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run Storybook locally
  - [x] Toasts have a close button top-right

---------

Co-authored-by: Abhimanyu Yadav <122007096+Abhi1992002@users.noreply.github.com>
2025-12-08 17:29:14 +07:00
Nicholas Tindle
c4eb7edb65 feat(platform): Improve Google Sheets/Drive integration with unified credentials (#11520)
Simplifies and improves the Google Sheets/Drive integration by merging
credentials with the file picker and using narrower OAuth scopes.

### Changes 🏗️

- Merge Google credentials and file picker into a single unified input
field for better UX
- Create spreadsheets using Drive API instead of Sheets API for proper
scope support
- Simplify Google Drive OAuth scope to only use `drive.file` (narrowest
permission needed)
- Clean up unused imports (NormalizedPickedFile)
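
A loose sketch of the unified-input idea; `GoogleDriveFile` and `GoogleDriveFileField()` are the real names per the summary below, but the types here are simplified stand-ins:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class DriveFileInput:
    file_id: str
    file_name: str
    credentials_id: Optional[str] = None  # auth travels with the picked file


def open_sheet(picked: DriveFileInput) -> str:
    # A block receives one value carrying both what to open and how to
    # authenticate, instead of separate picker and credentials fields.
    assert picked.credentials_id, "picker requires saved platform credentials"
    return f"opening {picked.file_name} ({picked.file_id}) via {picked.credentials_id}"


print(open_sheet(DriveFileInput("1abc", "Budget", credentials_id="cred-123")))
```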

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Test creating a new Google Spreadsheet with
GoogleSheetsCreateSpreadsheetBlock
- [x] Test reading from existing spreadsheets with GoogleSheetsReadBlock
  - [x] Test writing to spreadsheets with GoogleSheetsWriteBlock
  - [x] Verify OAuth flow works with simplified scopes
  - [x] Verify file picker works with merged credentials field

#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> Unifies Google Drive picker and credentials with auto-credentials
across backend and frontend, updates all Sheets blocks and execution to
use it, and adds Drive-based spreadsheet creation plus supporting tests
and UI fixes.
> 
> - **Backend**:
> - **Google Drive model/field**: Introduce `GoogleDriveFile` (with
`_credentials_id`) and `GoogleDriveFileField()` for unified auth+picker
(`backend/blocks/google/_drive.py`).
> - **Sheets blocks**: Replace `GoogleDrivePickerField` and explicit
credentials with `GoogleDriveFileField` across all Sheets blocks;
preserve and emit credentials for chaining; add Drive service; create
spreadsheets via Drive API then manage via Sheets API.
> - **IO block**: Add `AgentGoogleDriveFileInputBlock` providing a Drive
picker input.
> - **Execution**: Support auto-generated credentials via
`BlockSchema.get_auto_credentials_fields()`; acquire/release multiple
credential locks; pass creds by `credentials_kwarg`
(`executor/manager.py`, `data/block.py`, `util/test.py`).
> - **Tests**: Add validation tests for duplicate/unique
`auto_credentials.kwarg_name` and defaults.
> - **Frontend**:
> - **Picker**: Enhance Google Drive picker to require/use saved
platform credentials, pass `_credentials_id`, validate scopes, and
manage dialog z-index/interaction; expose `requirePlatformCredentials`.
> - **UI**: Update dialogs/CSS to keep Google picker on top and prevent
overlay interactions.
> - **Types**: Extend `GoogleDrivePickerConfig` with `auto_credentials`
and related typings.
> 
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
7d25534def. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Copilot <198982749+Copilot@users.noreply.github.com>
2025-12-05 09:44:38 -06:00
Nicholas Tindle
3f690ea7b8 fix(platform/frontend): security upgrade next from 15.4.7 to 15.4.8 (#11536)
![snyk-top-banner](https://res.cloudinary.com/snyk/image/upload/r-d/scm-platform/snyk-pull-requests/pr-banner-default.svg)

### Snyk has created this PR to fix 1 vulnerability in the yarn
dependencies of this project.

#### Snyk changed the following file(s):

- `autogpt_platform/frontend/package.json`


#### Note for
[zero-installs](https://yarnpkg.com/features/zero-installs) users

If you are using the Yarn feature
[zero-installs](https://yarnpkg.com/features/zero-installs) that was
introduced in Yarn V2, note that this PR does not update the
`.yarn/cache/` directory meaning this code cannot be pulled and
immediately developed on as one would expect for a zero-install project
- you will need to run `yarn` to update the contents of the
`./yarn/cache` directory.
If you are not using zero-install you can ignore this as your flow
should likely be unchanged.



<details>
<summary>⚠️ <b>Warning</b></summary>

```
Failed to update the yarn.lock, please update manually before merging.
```

</details>



#### Vulnerabilities that will be fixed with an upgrade:

|  | Issue |  
:-------------------------:|:-------------------------
![critical
severity](https://res.cloudinary.com/snyk/image/upload/w_20,h_20/v1561977819/icon/c.png
'critical severity') | Arbitrary Code Injection
<br/>[SNYK-JS-NEXT-14173355](https://snyk.io/vuln/SNYK-JS-NEXT-14173355)




---

> [!IMPORTANT]
>
> - Check the changes in this PR to ensure they won't cause issues with
your project.
> - Max score is 1000. Note that the real score may have changed since
the PR was raised.
> - This PR was automatically created by Snyk using the credentials of a
real user.

---

**Note:** _You are seeing this because you or someone else with access
to this repository has authorized Snyk to open fix PRs._

For more information:
🧐 [View latest project
report](https://app.snyk.io/org/significant-gravitas/project/3d924968-0cf3-4767-9609-501fa4962856?utm_source&#x3D;github&amp;utm_medium&#x3D;referral&amp;page&#x3D;fix-pr)
📜 [Customise PR
templates](https://docs.snyk.io/scan-using-snyk/pull-requests/snyk-fix-pull-or-merge-requests/customize-pr-templates?utm_source=github&utm_content=fix-pr-template)
🛠 [Adjust project
settings](https://app.snyk.io/org/significant-gravitas/project/3d924968-0cf3-4767-9609-501fa4962856?utm_source&#x3D;github&amp;utm_medium&#x3D;referral&amp;page&#x3D;fix-pr/settings)
📚 [Read about Snyk's upgrade
logic](https://docs.snyk.io/scan-with-snyk/snyk-open-source/manage-vulnerabilities/upgrade-package-versions-to-fix-vulnerabilities?utm_source=github&utm_content=fix-pr-template)

---

**Learn how to fix vulnerabilities with free interactive lessons:**

🦉 [Arbitrary Code
Injection](https://learn.snyk.io/lesson/insecure-deserialization/?loc&#x3D;fix-pr)


<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> Upgrades Next.js from 15.4.7 to 15.4.8 in the frontend and updates
lockfile/transitive references accordingly.
> 
> - **Dependencies**:
> - Bump `next` to `15.4.8` in `autogpt_platform/frontend/package.json`.
> - Update lockfile to align, including `@next/*` SWC binaries and
packages that peer-depend on `next` (e.g., `@sentry/nextjs`,
`@storybook/nextjs`, `@vercel/*`, `geist`, `nuqs`,
`@next/third-parties`).
> - Minor transitive tweak: `sharp` dependency `semver` updated to
`7.7.3`.
> 
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
e7741cbfb5. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->

---------

Co-authored-by: snyk-bot <snyk-bot@snyk.io>
Co-authored-by: Bentlybro <Github@bentlybro.com>
Co-authored-by: Abhimanyu Yadav <122007096+Abhi1992002@users.noreply.github.com>
2025-12-05 09:44:12 -06:00
Swifty
8be3c88711 feat(backend): add default store agents for seeding test databases (#11552)
This PR adds a collection of pre-built store agents that can be loaded
into test databases for development and testing purposes.

### Changes 🏗️

- Add 17 exported agent JSON files in `backend/agents/` directory
- Add `StoreAgent_rows.csv` containing store listing metadata (titles,
descriptions, categories, images)
- Add `load_store_agents.py` script to load agents into the test
database
- Add `load-store-agents` Makefile target for easy execution
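
A hedged sketch of what a loader like `load_store_agents.py` might do; the CSV join key and the insertion callback are assumptions, not the script's actual interface:

```python
import csv
import json
from pathlib import Path

AGENTS_DIR = Path("backend/agents")


def load_store_agents(insert_listing) -> int:
    """Pair each exported agent JSON with its CSV metadata row and insert it."""
    with open(AGENTS_DIR / "StoreAgent_rows.csv", newline="") as f:
        # Joining on "slug" is an assumption; the real CSV schema may differ.
        metadata = {row["slug"]: row for row in csv.DictReader(f)}

    count = 0
    for agent_file in sorted(AGENTS_DIR.glob("*.json")):
        graph = json.loads(agent_file.read_text())
        insert_listing(graph=graph, **metadata.get(agent_file.stem, {}))
        count += 1
    return count
```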

**Included Agents:**
- Flux AI Image Generator
- YouTube Transcription Scraper  
- Decision Maker Lead Finder
- Smart Meeting Prep
- Automated Support Agent
- Unspirational Poster Maker
- AI Video Generator
- Automated SEO Blog Writer
- Lead Finder (Local Businesses)
- LinkedIn Post Generator
- YouTube to LinkedIn Post Converter
- Personal Newsletter
- Email Scout - Contact Finder Assistant
- YouTube Video to SEO Blog Writer
- AI Webpage Copy Improver
- Domain Name Finder
- AI Function

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Run `make load-store-agents` and verify agents are loaded into the
database
  - [x] Verify store listings appear correctly with metadata from CSV
- [x] Confirm no sensitive information (API keys, secrets) is included
in the exported agents

#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)

No configuration changes required - this only adds test data and a
loading script.
2025-12-05 16:08:37 +01:00
Zamil Majdy
e4d0dbc283 feat(platform): add Agent Output Demo field to marketplace submission form (#11538)
## Summary
- Add Agent Output Demo field to marketplace agent submission form,
positioned below the Description field
- Store agent output demo URLs in database for future CoPilot
integration
- Implement proper video/image ordering on marketplace pages
- Add shared YouTube URL validation utility to eliminate code
duplication

## Changes Made

### Frontend
- **Agent submission form**: Added Agent Output Demo field with YouTube
URL validation
- **Edit agent form**: Added Agent Output Demo field for existing
submissions
- **Marketplace display**: Implemented proper video/image ordering:
  1. YouTube/Overview video (if exists)
  2. First image (hero)
  3. Agent Output Demo (if exists) 
  4. Additional images
- **Shared utilities**: Created `validateYouTubeUrl` function in
`src/lib/utils.ts`

### Backend
- **Database schema**: Added `agentOutputDemoUrl` field to
`StoreListingVersion` model
- **Database views**: Updated `StoreAgent` view to include
`agent_output_demo` field
- **API models**: Added `agent_output_demo_url` to submission requests
and `agent_output_demo` to responses
- **Database migration**: Added migration to create new column and
update view
- **Test files**: Updated all test files to include the new required
field

## Test Plan
- [x] Frontend form validation works correctly for YouTube URLs
- [x] Database migration applies successfully 
- [x] Backend API accepts and returns the new field
- [x] Marketplace displays videos in correct order
- [x] Both frontend and backend formatting/linting pass
- [x] All test files include required field to prevent failures

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-12-05 11:40:12 +00:00
Swifty
8e476c3f8d fix(backend): pass credential type from SDK registry to integrations API (#11544)
### Changes 🏗️

This PR improves the `/integrations/providers` endpoint to dynamically
determine supported authentication types from the SDK registry instead
of using hardcoded values.

**What changed:**
- The `list_providers` function now looks up each provider in the
`AutoRegistry` to get its `supported_auth_types`
- If a provider has defined auth types in the SDK registry, those are
used to set `supports_api_key`, `supports_user_password`, and
`supports_host_scoped` flags
- Falls back to legacy hardcoded behavior for providers not registered
in the SDK (maintains backwards compatibility)

**Why:**
- Providers can now correctly declare their supported authentication
methods via the SDK
- Removes brittle hardcoded checks like `name in ("smtp",)` for specific
providers
- Makes the credential type system more extensible and maintainable
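
A hedged sketch of that lookup; the flag names come from this PR, while the registry is modeled as a plain dict rather than the real `AutoRegistry`:

```python
LEGACY_USER_PASSWORD_PROVIDERS = {"smtp"}  # the old hardcoded special case


def provider_auth_flags(name: str, registry: dict) -> dict:
    auth_types = registry.get(name, {}).get("supported_auth_types")
    if auth_types is not None:
        # SDK-registered providers declare their own supported auth types.
        return {
            "supports_api_key": "api_key" in auth_types,
            "supports_user_password": "user_password" in auth_types,
            "supports_host_scoped": "host_scoped" in auth_types,
        }
    # Fallback: legacy hardcoded behavior for unregistered providers.
    return {
        "supports_api_key": True,
        "supports_user_password": name in LEGACY_USER_PASSWORD_PROVIDERS,
        "supports_host_scoped": False,
    }


registry = {"github": {"supported_auth_types": ["api_key", "oauth2"]}}
print(provider_auth_flags("github", registry))  # SDK-declared flags
print(provider_auth_flags("smtp", registry))    # legacy fallback flags
```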

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Verified providers with SDK-defined auth types return correct
flags
  - [x] Verified legacy providers still work with fallback behavior
- [x] Tested the `/integrations/providers` endpoint returns expected
data

#### For configuration changes:

- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)

No configuration changes required for this PR.
2025-12-05 12:42:49 +01:00
Zamil Majdy
2f63defb53 fix(backend): Mark ValueError as known block errors (#11537)
### Changes 🏗️

Mark ValueError as known block errors
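
A minimal sketch of the idea, using a stand-in exception class rather than the platform's actual error taxonomy:

```python
class KnownBlockError(Exception):
    """Errors that surface to the user as block errors, not internal crashes."""


def run_block(run, input_data):
    try:
        return run(input_data)
    except ValueError as exc:
        # ValueError is now classified as a known block error rather than an
        # unexpected internal failure.
        raise KnownBlockError(str(exc)) from exc
```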

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
2025-12-05 11:12:18 +00:00
Sukhtumur Narantuya
2934e9ea69 fix(backend): replace print() statements with proper logging (#11499)
- Replace print() with logger.info() in reddit.py for login message
- Replace print() with logger.debug() in airtable/_api.py for API params
- Replace print() with logger.debug() in _manual_base.py for webhook URL
- Add logging imports and logger initialization where missing
- Update FIXME to TODO with GitHub issue reference #8537
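
The pattern applied in each module, sketched minimally (module names and messages here are illustrative):

```python
import logging

logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger(__name__)

# Before: print(params) went straight to stdout with no level or context.
params = {"name": "CRM base"}
logger.debug("create_base params: %s", params)  # leveled, filterable, attributable
```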

<!-- Clearly explain the need for these changes: -->

### Changes 🏗️

<!-- Concisely describe all of the changes made in this pull request:
-->

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] test it still works


<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> Switch `print()` to `logger.info/debug()` across Airtable, Reddit, and
manual webhook modules; add logger initialization and clarify TODO with
issue reference.
> 
> - **Backend**:
>   - **Airtable (`backend/blocks/airtable/_api.py`)**:
>     - Replace `print(params)` with `logger.debug` in `create_base`.
>   - **Reddit (`backend/blocks/reddit.py`)**:
>     - Add `logging` import and `logger` initialization.
>     - Replace login `print` with `logger.info` in `get_praw`.
>   - **Webhooks (`backend/integrations/webhooks/_manual_base.py`)**:
> - Replace `print` with `logger.debug` in `_register_webhook` and add
`logger`.
>     - Update `FIXME` to `TODO` with GitHub issue reference `#8537`.
> 
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
3add9b0fa9. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->

Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
2025-12-05 07:49:20 +00:00
Krzysztof Czerwinski
c880db439d feat(platform): Backend completion of Onboarding tasks (#11375)
Make onboarding task completion backend-authoritative, which prevents
cheating (previously users could instantly mark all tasks as completed
and get rewards) and makes task completion more reliable. Task
completion is moved to the backend, with the exception of introductory
onboarding tasks and visit-page tasks.

### Changes 🏗️

- Move run-counter incrementing to the backend and make webhook-triggered
and scheduled executions count as well
- Use user timezone for calculating run streak
- Frontend task completion is moved from the update-onboarding-state
endpoint to a separate endpoint, guarded so only frontend tasks can be completed
- Graph creation, execution and add marketplace agent to library accept
`source`, so appropriate tasks can be completed
- Replace `client.ts` api calls with orval generated and remove no
longer used functions from `client.ts`
- Add `resolveResponse` helper function that unwraps an orval-generated
call result to its 2xx response

Small changes&bug fixes:
- Make Redis notification bus serialize all payload fields
- Fix confetti when group is finished
- Collapse finished group when opening Wallet
- Play confetti only for tasks that are listed in the Wallet UI

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Onboarding can be finished
  - [x] All tasks can be finished and work properly
  - [x] Confetti works properly
2025-12-05 02:32:28 +00:00
Nicholas Tindle
486099140d fix(platform/frontend): security upgrade next from 15.4.7 to 15.4.8 (#11536)
2025-12-04 20:09:49 +00:00
Ubbe
6d8906ced7 fix(frontend): summary card layout (#11556)
## Changes 🏗️

- Make it less verbose and adapt the summary card to the new design

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run locally and see summary displaying well
2025-12-04 23:33:57 +07:00
Abhimanyu Yadav
bf32a76f49 fix(frontend): prevent NodeHandle error when array schema items is undefined (#11549)
- Added conditional rendering in `NodeArrayInput` component to only
render `NodeHandle` when `schema.items` is defined
- This prevents the error when array schemas don't have an `items`
property defined
- The input field rendering logic already had proper null checks, so
only the handle rendering needed to be fixed

### Checklist 📋

#### For code changes:

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Open the legacy builder and create a new agent
  - [x] Add a Smart Decision block to the agent
- [x] Verify that the `conversation_history` field renders without
errors
- [x] Verify that array fields with defined `items` still render
correctly (e.g., test with other blocks that use arrays)
- [x] Verify that connecting/disconnecting handles to the
conversation_history field works correctly
  - [x] Verify that the field can accept input values when not connected
2025-12-04 15:13:31 +00:00
Ubbe
f7a8e372dd feat(frontend): implement new actions sidebar + summary (#11545)
## Changes 🏗️

<img width="800" height="767" alt="Screenshot 2025-12-04 at 17 40 10"
src="https://github.com/user-attachments/assets/37036246-bcdb-46eb-832c-f91fddfd9014"
/>

<img width="800" height="492" alt="Screenshot 2025-12-04 at 17 40 16"
src="https://github.com/user-attachments/assets/ba547e54-016a-403c-9ab6-99465d01af6b"
/>

On the new Agent Library page:

- Implement the new actions sidebar ( main change... ) 
- Refactor the layout/components to accommodate that
- Implement the missing "Summary" functionality 
- Update icon buttons in Design system with new designs

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run the app locally and test it
2025-12-04 22:46:42 +07:00
Abhimanyu Yadav
3ccc712463 feat(frontend): add host-scoped credentials support to CredentialField (#11546)
### Changes 🏗️

This PR adds support for `host_scoped` credential type in the new
builder's `CredentialField` component. This enables blocks that require
sensitive headers for custom API endpoints to configure host-scoped
credentials directly from the credential field.
<img width="745" height="843" alt="Screenshot 2025-12-04 at 4 31 09 PM"
src="https://github.com/user-attachments/assets/d076b797-64c4-4c31-9c88-47a064814055"
/>

<img width="418" height="180" alt="Screenshot 2025-12-04 at 4 36 02 PM"
src="https://github.com/user-attachments/assets/b4fa6d8d-d8f4-41ff-ab11-7c708017f8fd"
/>

**Key changes:**

- **Added `HostScopedCredentialsModal` component**
(`models/HostScopedCredentialsModal/`)
- Modal dialog for creating host-scoped credentials with host pattern,
optional title, and dynamic header pairs (key-value)
- Auto-populates host from discriminator value (URL field) when
available
  - Supports adding/removing multiple header pairs with validation

- **Enhanced credential filtering logic** (`helpers.ts`)
- Updated `filterCredentialsByProvider` to accept `schema` and
`discriminatorValue` parameters
  - Added intelligent filtering for:
    - Credential types supported by the block
    - OAuth credentials with sufficient scopes
    - Host-scoped credentials matched by host from discriminator value
  - Extracted `getDiscriminatorValue` helper function for reusability

- **Updated `CredentialField` component**
  - Added `supportsHostScoped` check in `useCredentialField` hook
- Conditionally renders `HostScopedCredentialsModal` when
`supportsHostScoped && discriminatorValue` is true
  - Exports `discriminatorValue` for use in child components

- **Updated `useCredentialField` hook**
- Calculates `discriminatorValue` using new `getDiscriminatorValue`
helper
- Passes `schema` and `discriminatorValue` to enhanced
`filterCredentialsByProvider` function
- Returns `supportsHostScoped` and `discriminatorValue` for component
consumption

**Technical details:**
- Host extraction uses `getHostFromUrl` utility to parse host from
discriminator value (URL)
- Header pairs are managed as state with add/remove functionality
- Form validation uses `react-hook-form` with `zod` schema
- Credential creation integrates with existing API endpoints and query
invalidation
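
A hedged sketch of the host-matching step, assuming simplified credential shapes (the real `filterCredentialsByProvider` takes more parameters):

```ts
// Simplified, assumed shapes -- illustration only.
type Credential =
  | { id: string; type: "api_key" | "oauth2" | "user_password" }
  | { id: string; type: "host_scoped"; host: string };

function getHostFromUrl(url: string): string | null {
  try {
    return new URL(url).host;
  } catch {
    return null; // discriminator value is not a valid URL
  }
}

// Keep non-host-scoped credentials as-is; match host-scoped ones against
// the host parsed from the discriminator value (usually a URL field).
function matchesHost(cred: Credential, discriminatorValue?: string): boolean {
  if (cred.type !== "host_scoped") return true;
  const host = discriminatorValue ? getHostFromUrl(discriminatorValue) : null;
  return host !== null && cred.host === host;
}
```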

### Checklist 📋

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Verify `HostScopedCredentialsModal` appears when block supports
`host_scoped` credentials and discriminator value is present
  - [x] Test host auto-population from discriminator value (URL field)
  - [x] Test manual host entry when discriminator value is not available
  - [x] Test adding/removing multiple header pairs
- [x] Test form validation (host required, empty header pairs filtered
out)
  - [x] Test credential creation and successful toast notification
  - [x] Verify credentials list refreshes after creation
- [x] Test host-scoped credential filtering matches credentials by host
from URL
- [x] Verify existing credential types (api_key, oauth2, user_password)
still work correctly
  - [x] Test OAuth scope filtering still works as expected
- [x] Verify modal only shows when `supportsHostScoped &&
discriminatorValue` conditions are met
2025-12-04 15:13:23 +00:00
Abhimanyu Yadav
2b9816cfa5 fix(frontend): ensure node selection state is set before copying in context menu (#11535)
<!-- Clearly explain the need for these changes: -->

Fixes an issue where copying a node via the context menu dialog fails on
the first attempt in a new session. The problem occurs because the node
selection state update and the copy operation happen in quick
succession, causing a race condition where `copySelectedNodes()` reads
the store before the selection state is properly updated.
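
A minimal sketch of the ordering fix using a zustand-style store (store shape assumed):

```ts
import { create } from "zustand";

type NodeStore = {
  selectedNodeIds: string[];
  setSelectedNodes: (ids: string[]) => void;
  copySelectedNodes: () => void;
};

const useNodeStore = create<NodeStore>((set, get) => ({
  selectedNodeIds: [],
  setSelectedNodes: (ids) => set({ selectedNodeIds: ids }),
  copySelectedNodes: () => {
    // Reads whatever is in the store *right now*.
    console.log("copying", get().selectedNodeIds);
  },
}));

// Update the selection synchronously in the store, then copy in the same tick,
// instead of relying on a queued React state update that lands after the copy.
function copyNodeViaContextMenu(nodeId: string) {
  const { setSelectedNodes, copySelectedNodes } = useNodeStore.getState();
  setSelectedNodes([nodeId]);
  copySelectedNodes();
}
```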


### Checklist 📋

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Start a new browser session (or clear storage)
  - [x] Open the flow editor
- [x] Right-click on a node and select "Copy Node" from the context menu
  - [x] Verify the node is successfully copied on the first attempt
2025-12-04 15:13:13 +00:00
Abhimanyu Yadav
4e87f668e3 feat(frontend): add file input widget with variants and base64 support (#11533)
<!-- Clearly explain the need for these changes: -->

This PR enhances the FileInput component to support multiple variants
and modes, and integrates it into the form renderer as a file widget.
The changes enable a more flexible file input experience with both
server upload and local base64 conversion capabilities.

### Compact one
<img width="354" height="91" alt="Screenshot 2025-12-03 at 8 05 51 PM"
src="https://github.com/user-attachments/assets/1295a34f-4d9f-4a65-89f2-4b6516f176ff"
/>
<img width="386" height="96" alt="Screenshot 2025-12-03 at 8 06 11 PM"
src="https://github.com/user-attachments/assets/3c10e350-8ddc-43ff-bf0a-68b23f8db394"
/>

### Default one
<img width="671" height="165" alt="Screenshot 2025-12-03 at 8 05 08 PM"
src="https://github.com/user-attachments/assets/bd17c5f1-8cdb-4818-9850-a9e61252d8c1"
/>
<img width="656" height="141" alt="Screenshot 2025-12-03 at 8 05 21 PM"
src="https://github.com/user-attachments/assets/1edf260f-6245-44b9-bd3e-dd962f497413"
/>

### Changes 🏗️

#### FileInput Component Enhancements
- **Added variant support**: Introduced `default` and `compact` variants
- `default`: Full-featured UI with drag & drop, progress bar, and
storage note
- `compact`: Minimal inline design suitable for tight spaces like node
inputs
- **Added mode support**: Introduced `upload` and `base64` modes
- `upload`: Uploads files to server (requires `onUploadFile` and
`uploadProgress`)
  - `base64`: Converts files to base64 locally without server upload
- **Improved type safety**: Refactored props using discriminated unions
(`UploadModeProps | Base64ModeProps`) to ensure type-safe usage (see the sketch after this list)
- **Enhanced file handling**: Added `getFileLabelFromValue` helper to
extract file labels from base64 data URIs or file paths
- **Better UX**: Added `showStorageNote` prop to control visibility of
storage disclaimer
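
A hedged sketch of how the discriminated union keeps the two modes type-safe (prop shapes beyond the two union member names are assumptions):

```ts
type CommonProps = {
  variant?: "default" | "compact";
  onChange: (value: string) => void;
};

type UploadModeProps = CommonProps & {
  mode: "upload";
  onUploadFile: (file: File) => Promise<string>; // resolves to a stored path/URL
  uploadProgress?: number;
};

type Base64ModeProps = CommonProps & {
  mode: "base64"; // converted locally, no server round-trip
};

type FileInputProps = UploadModeProps | Base64ModeProps;

// TypeScript narrows on `mode`, so upload-only props cannot leak into base64 usage.
async function handleFile(props: FileInputProps, file: File): Promise<void> {
  if (props.mode === "upload") {
    props.onChange(await props.onUploadFile(file));
  } else {
    const reader = new FileReader();
    reader.onload = () => props.onChange(String(reader.result)); // data: URI
    reader.readAsDataURL(file);
  }
}
```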

#### FileWidget Integration
- **Replaced legacy Input**: Migrated from legacy `Input` component to
new `FileInput` component
- **Smart variant selection**: Automatically selects `default` or
`compact` variant based on form context size
- **Base64 mode**: Uses base64 mode for form inputs, eliminating need
for server uploads in builder context
- **Improved accessibility**: Better disabled/readonly state handling
with visual feedback

#### Form Renderer Updates
- **Disabled validation**: Added `noValidate={true}` and
`liveValidate={false}` to prevent premature form validation

#### Storybook Updates
- **Expanded stories**: Added comprehensive stories for all variant/mode
combinations:
  - `Default`: Default variant with upload mode
  - `Compact`: Compact variant with base64 mode
  - `CompactWithUpload`: Compact variant with upload mode
  - `DefaultWithBase64`: Default variant with base64 mode
- **Improved documentation**: Updated component descriptions to clearly
explain variants and modes

### Checklist 📋

#### For code changes:

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Test FileInput component in Storybook with all variant/mode
combinations
  - [x] Test file upload flow in default variant with upload mode
  - [x] Test base64 conversion in compact variant with base64 mode
  - [x] Test file widget in form renderer (node inputs)
  - [x] Test file type validation (accept prop)
  - [x] Test file size validation (maxFileSize prop)
  - [x] Test error handling for invalid files
  - [x] Test disabled and readonly states
  - [x] Test file clearing/removal functionality
  - [x] Verify compact variant renders correctly in tight spaces
  - [x] Verify default variant shows storage note only in upload mode
  - [x] Test drag & drop functionality in default variant
2025-12-04 15:13:01 +00:00
Abhimanyu Yadav
729400dbe1 feat(frontend): display graph validation errors inline on node fields (#11524)
When running a graph in the new builder, validation errors were only
displayed in toast notifications, making it difficult for users to
identify which specific fields had errors. Users needed to see
validation errors directly next to the problematic fields within each
node for better UX and faster debugging.

<img width="1319" height="953" alt="Screenshot 2025-12-03 at 12 48
15 PM"
src="https://github.com/user-attachments/assets/d444bc71-9bee-4fa7-8b7f-33339bd0cb24"
/>

### Changes 🏗️

- **Error handling in graph execution** (`useRunGraph.ts`):
- Added detection for graph validation errors using
`ApiError.isGraphValidationError()`
  - Parse and store node-level errors from backend validation response
  - Clear all node errors on successful graph execution
- Enhanced toast messages to guide users to fix validation errors on
highlighted nodes

- **Node store error management** (`nodeStore.ts`):
  - Added `errors` field to node data structure
  - Implemented `updateNodeErrors()` to set errors for a specific node
- Implemented `clearNodeErrors()` to remove errors from a specific node
  - Implemented `getNodeErrors()` to retrieve errors for a specific node
- Implemented `setNodeErrorsForBackendId()` to set errors by backend ID
(supports matching by `metadata.backend_id` or node `id`)
- Implemented `clearAllNodeErrors()` to clear all node errors across the
graph

- **Visual error indication** (`CustomNode.tsx`, `NodeContainer.tsx`):
- Added error detection logic to identify both configuration errors and
output errors
- Applied error styling to nodes with validation errors (using `FAILED`
status styling)
- Nodes with errors now display with red border/ring to visually
indicate issues

- **Field-level error display** (`FieldTemplate.tsx`):
  - Fetch node errors from store for the current node
- Match field IDs with error keys (handles both underscore and dot
notation)
  - Display field-specific error messages below each field in red text
- Added helper function `getFieldErrorKey()` to normalize field IDs for
error matching

- **Utility helpers** (`helpers.ts`):
- Created `getFieldErrorKey()` function to extract field key from field
ID (removes `root_` prefix)
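
A hedged sketch of the matching logic described above (exact normalization rules assumed):

```ts
// Strip RJSF's `root_` prefix so a field id like "root_config_api_key"
// can be compared against backend error keys.
function getFieldErrorKey(fieldId: string): string {
  return fieldId.replace(/^root_/, "");
}

// Backend errors may use underscore or dot notation; try both.
function findFieldError(
  fieldId: string,
  nodeErrors: Record<string, string>,
): string | undefined {
  const key = getFieldErrorKey(fieldId);
  return nodeErrors[key] ?? nodeErrors[key.replace(/_/g, ".")];
}
```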

### Checklist 📋

#### For code changes:

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Create a graph with multiple nodes and intentionally leave
required fields empty
- [x] Run the graph and verify that validation errors appear in toast
notification
- [x] Verify that nodes with errors are highlighted with red border/ring
styling
- [x] Verify that field-specific error messages appear below each
problematic field in red text
- [x] Verify that error messages handle both underscore and dot notation
in field keys
- [x] Fix validation errors and run graph again - verify errors are
cleared
  - [x] Verify that successful graph execution clears all node errors
- [x] Test with nodes that have `backend_id` in metadata vs nodes
without
  - [x] Verify that nodes without errors don't show error styling
- [x] Test with nested fields and array fields to ensure error matching
works correctly
2025-12-04 15:12:51 +00:00
Abhimanyu Yadav
f6608e99c8 feat(frontend): add expandable text input modal for better editing experience (#11510)
Text inputs in the form builder can be difficult to edit when dealing
with longer content. Users need a way to expand text inputs into a
larger, more comfortable editing interface, especially for multi-line
text, passwords, and longer string values.



https://github.com/user-attachments/assets/443bf4eb-c77c-4bf6-b34c-77091e005c6d


### Changes 🏗️

- **Added `InputExpanderModal` component**: A new modal component that
provides a larger textarea (300px min-height) for editing text inputs
with the following features:
- Copy-to-clipboard functionality with visual feedback (checkmark icon)
  - Toast notification on successful copy
  - Auto-focus on open for better UX
  - Proper state management to reset values when modal opens/closes
  
- **Enhanced `TextInputWidget`**:
- Added expand button (ArrowsOutIcon) with tooltip for text, password,
and textarea input types
  - Button appears inline next to the input field
  - Integrated the new `InputExpanderModal` component
  - Improved layout with flexbox to accommodate the expand button
- Added padding-right to input when expand button is visible to prevent
text overlap

- **Refactored file structure**:
  - Moved `TextInputWidget.tsx` into `TextInputWidget/` directory
  - Updated import path in `widgets/index.ts`

- **UX improvements**:
- Expand button only shows for applicable input types (text, password,
textarea)
  - Number and integer inputs don't show expand button (not needed)
- Modal preserves schema title, description, and placeholder for context
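
A minimal sketch of the gating and copy behavior (helper names are illustrative, not from the source):

```ts
const EXPANDABLE_TYPES = new Set(["text", "password", "textarea"]);

// Number/integer inputs never show the expand button.
function showExpandButton(inputType: string): boolean {
  return EXPANDABLE_TYPES.has(inputType);
}

// Copy the current value, then let the caller flip the icon to a checkmark
// and fire the success toast.
async function copyToClipboard(value: string, onCopied: () => void): Promise<void> {
  await navigator.clipboard.writeText(value);
  onCopied();
}
```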

### Checklist 📋

#### For code changes:

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Test expand button appears for text input fields
  - [x] Test expand button appears for password input fields  
  - [x] Test expand button appears for textarea fields
  - [x] Test expand button does NOT appear for number/integer inputs
  - [x] Test clicking expand button opens modal with current value
  - [x] Test editing text in modal and saving updates the input field
  - [x] Test cancel button closes modal without saving changes
- [x] Test copy-to-clipboard button copies text and shows success state
  - [x] Test toast notification appears on successful copy
  - [x] Test modal resets to original value when reopened
  - [x] Test modal auto-focuses textarea on open
  - [x] Test expand button tooltip displays correctly
  - [x] Test input field layout with expand button (no text overlap)
2025-12-04 15:12:32 +00:00
705 changed files with 54313 additions and 16959 deletions

View File

@@ -11,7 +11,7 @@ jobs:
   stale:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/stale@v9
+      - uses: actions/stale@v10
        with:
          # operations-per-run: 5000
          stale-issue-message: >

View File

@@ -61,6 +61,6 @@ jobs:
       pull-requests: write
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/labeler@v5
+      - uses: actions/labeler@v6
        with:
          sync-labels: true

View File

@@ -0,0 +1,9 @@
{
  "permissions": {
    "allow": [
      "Bash(ls:*)",
      "WebFetch(domain:langfuse.com)",
      "Bash(poetry install:*)"
    ]
  }
}

View File

@@ -1,4 +1,4 @@
-.PHONY: start-core stop-core logs-core format lint migrate run-backend run-frontend load-store-agents
+.PHONY: start-core stop-core logs-core format lint migrate run-backend stop-backend run-frontend load-store-agents backfill-store-embeddings
 
 # Run just Supabase + Redis + RabbitMQ
 start-core:
@@ -34,7 +34,14 @@ migrate:
 	cd backend && poetry run prisma migrate deploy
 	cd backend && poetry run prisma generate
 
-run-backend:
+stop-backend:
+	@echo "Stopping backend processes..."
+	@cd backend && poetry run cli stop 2>/dev/null || true
+	@echo "Killing any processes using backend ports..."
+	@lsof -ti:8001,8002,8003,8004,8005,8006,8007 | xargs kill -9 2>/dev/null || true
+	@echo "Backend stopped"
+
+run-backend: stop-backend
 	cd backend && poetry run app
 
 run-frontend:
@@ -44,7 +51,10 @@ test-data:
 	cd backend && poetry run python test/test_data_creator.py
 
 load-store-agents:
-	cd backend && poetry run python test/load_store_agents.py
+	cd backend && poetry run load-store-agents
+
+backfill-store-embeddings:
+	cd backend && poetry run python -m backend.api.features.store.backfill_embeddings
 
 help:
 	@echo "Usage: make <target>"
@@ -55,7 +65,9 @@ help:
 	@echo "  logs-core - Tail the logs for core services"
 	@echo "  format - Format & lint backend (Python) and frontend (TypeScript) code"
 	@echo "  migrate - Run backend database migrations"
-	@echo "  run-backend - Run the backend FastAPI server"
+	@echo "  stop-backend - Stop any running backend processes"
+	@echo "  run-backend - Run the backend FastAPI server (stops existing processes first)"
 	@echo "  run-frontend - Run the frontend Next.js development server"
 	@echo "  test-data - Run the test data creator"
-	@echo "  load-store-agents - Load store agents from agents/ folder into test database"
+	@echo "  load-store-agents - Load store agents from agents/ folder into test database"
+	@echo "  backfill-store-embeddings - Generate embeddings for store agents that don't have them"

View File

@@ -57,6 +57,9 @@ class APIKeySmith:
     def hash_key(self, raw_key: str) -> tuple[str, str]:
         """Migrate a legacy hash to secure hash format."""
+        if not raw_key.startswith(self.PREFIX):
+            raise ValueError("Key without 'agpt_' prefix would fail validation")
+
         salt = self._generate_salt()
         hash = self._hash_key_with_salt(raw_key, salt)
         return hash, salt.hex()

View File

@@ -1,29 +1,25 @@
 from fastapi import FastAPI
-from fastapi.openapi.utils import get_openapi
 
 from .jwt_utils import bearer_jwt_auth
 
 
 def add_auth_responses_to_openapi(app: FastAPI) -> None:
     """
-    Set up custom OpenAPI schema generation that adds 401 responses
+    Patch a FastAPI instance's `openapi()` method to add 401 responses
     to all authenticated endpoints.
 
     This is needed when using HTTPBearer with auto_error=False to get proper
     401 responses instead of 403, but FastAPI only automatically adds security
     responses when auto_error=True.
     """
+    # Wrap current method to allow stacking OpenAPI schema modifiers like this
+    wrapped_openapi = app.openapi
 
     def custom_openapi():
         if app.openapi_schema:
             return app.openapi_schema
-        openapi_schema = get_openapi(
-            title=app.title,
-            version=app.version,
-            description=app.description,
-            routes=app.routes,
-        )
+        openapi_schema = wrapped_openapi()
 
         # Add 401 response to all endpoints that have security requirements
         for path, methods in openapi_schema["paths"].items():

View File

@@ -58,6 +58,13 @@ V0_API_KEY=
 OPEN_ROUTER_API_KEY=
 NVIDIA_API_KEY=
 
+# Langfuse Prompt Management
+# Used for managing the CoPilot system prompt externally
+# Get credentials from https://cloud.langfuse.com or your self-hosted instance
+LANGFUSE_PUBLIC_KEY=
+LANGFUSE_SECRET_KEY=
+LANGFUSE_HOST=https://cloud.langfuse.com
+
 # OAuth Credentials
 # For the OAuth callback URL, use <your_frontend_url>/auth/integrations/oauth_callback,
 # e.g. http://localhost:3000/auth/integrations/oauth_callback

View File

@@ -108,7 +108,7 @@ import fastapi.testclient
 import pytest
 from pytest_snapshot.plugin import Snapshot
 
-from backend.server.v2.myroute import router
+from backend.api.features.myroute import router
 
 app = fastapi.FastAPI()
 app.include_router(router)
@@ -149,7 +149,7 @@ These provide the easiest way to set up authentication mocking in test modules:
 import fastapi
 import fastapi.testclient
 import pytest
-from backend.server.v2.myroute import router
+from backend.api.features.myroute import router
 
 app = fastapi.FastAPI()
 app.include_router(router)

View File

@@ -3,12 +3,12 @@ from typing import Dict, Set
 
 from fastapi import WebSocket
 
+from backend.api.model import NotificationPayload, WSMessage, WSMethod
 from backend.data.execution import (
     ExecutionEventType,
     GraphExecutionEvent,
     NodeExecutionEvent,
 )
-from backend.server.model import NotificationPayload, WSMessage, WSMethod
 
 _EVENT_TYPE_TO_METHOD_MAP: dict[ExecutionEventType, WSMethod] = {
     ExecutionEventType.GRAPH_EXEC_UPDATE: WSMethod.GRAPH_EXECUTION_EVENT,

View File

@@ -4,13 +4,13 @@ from unittest.mock import AsyncMock
 
 import pytest
 from fastapi import WebSocket
 
+from backend.api.conn_manager import ConnectionManager
+from backend.api.model import NotificationPayload, WSMessage, WSMethod
 from backend.data.execution import (
     ExecutionStatus,
     GraphExecutionEvent,
     NodeExecutionEvent,
 )
-from backend.server.conn_manager import ConnectionManager
-from backend.server.model import NotificationPayload, WSMessage, WSMethod
 
 
 @pytest.fixture
@pytest.fixture

View File

@@ -0,0 +1,25 @@
from fastapi import FastAPI

from backend.api.middleware.security import SecurityHeadersMiddleware
from backend.monitoring.instrumentation import instrument_fastapi

from .v1.routes import v1_router

external_api = FastAPI(
    title="AutoGPT External API",
    description="External API for AutoGPT integrations",
    docs_url="/docs",
    version="1.0",
)

external_api.add_middleware(SecurityHeadersMiddleware)
external_api.include_router(v1_router, prefix="/v1")

# Add Prometheus instrumentation
instrument_fastapi(
    external_api,
    service_name="external-api",
    expose_endpoint=True,
    endpoint="/metrics",
    include_in_schema=True,
)

View File

@@ -0,0 +1,107 @@
from fastapi import HTTPException, Security, status
from fastapi.security import APIKeyHeader, HTTPAuthorizationCredentials, HTTPBearer
from prisma.enums import APIKeyPermission

from backend.data.auth.api_key import APIKeyInfo, validate_api_key
from backend.data.auth.base import APIAuthorizationInfo
from backend.data.auth.oauth import (
    InvalidClientError,
    InvalidTokenError,
    OAuthAccessTokenInfo,
    validate_access_token,
)

api_key_header = APIKeyHeader(name="X-API-Key", auto_error=False)
bearer_auth = HTTPBearer(auto_error=False)


async def require_api_key(api_key: str | None = Security(api_key_header)) -> APIKeyInfo:
    """Middleware for API key authentication only"""
    if api_key is None:
        raise HTTPException(
            status_code=status.HTTP_401_UNAUTHORIZED, detail="Missing API key"
        )

    api_key_obj = await validate_api_key(api_key)
    if not api_key_obj:
        raise HTTPException(
            status_code=status.HTTP_401_UNAUTHORIZED, detail="Invalid API key"
        )

    return api_key_obj


async def require_access_token(
    bearer: HTTPAuthorizationCredentials | None = Security(bearer_auth),
) -> OAuthAccessTokenInfo:
    """Middleware for OAuth access token authentication only"""
    if bearer is None:
        raise HTTPException(
            status_code=status.HTTP_401_UNAUTHORIZED,
            detail="Missing Authorization header",
        )

    try:
        token_info, _ = await validate_access_token(bearer.credentials)
    except (InvalidClientError, InvalidTokenError) as e:
        raise HTTPException(status_code=status.HTTP_401_UNAUTHORIZED, detail=str(e))

    return token_info


async def require_auth(
    api_key: str | None = Security(api_key_header),
    bearer: HTTPAuthorizationCredentials | None = Security(bearer_auth),
) -> APIAuthorizationInfo:
    """
    Unified authentication middleware supporting both API keys and OAuth tokens.

    Supports two authentication methods, which are checked in order:
    1. X-API-Key header (existing API key authentication)
    2. Authorization: Bearer <token> header (OAuth access token)

    Returns:
        APIAuthorizationInfo: base class of both APIKeyInfo and OAuthAccessTokenInfo.
    """
    # Try API key first
    if api_key is not None:
        api_key_info = await validate_api_key(api_key)
        if api_key_info:
            return api_key_info
        raise HTTPException(
            status_code=status.HTTP_401_UNAUTHORIZED, detail="Invalid API key"
        )

    # Try OAuth bearer token
    if bearer is not None:
        try:
            token_info, _ = await validate_access_token(bearer.credentials)
            return token_info
        except (InvalidClientError, InvalidTokenError) as e:
            raise HTTPException(status_code=status.HTTP_401_UNAUTHORIZED, detail=str(e))

    # No credentials provided
    raise HTTPException(
        status_code=status.HTTP_401_UNAUTHORIZED,
        detail="Missing authentication. Provide API key or access token.",
    )


def require_permission(permission: APIKeyPermission):
    """
    Dependency function for checking specific permissions
    (works with API keys and OAuth tokens)
    """

    async def check_permission(
        auth: APIAuthorizationInfo = Security(require_auth),
    ) -> APIAuthorizationInfo:
        if permission not in auth.scopes:
            raise HTTPException(
                status_code=status.HTTP_403_FORBIDDEN,
                detail=f"Missing required permission: {permission.value}",
            )
        return auth

    return check_permission
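
With `require_auth` above accepting either header, a client can authenticate with whichever credential it holds. A hedged TypeScript sketch against the `/me` endpoint added later in this changeset (base URL assumed):

```ts
const BASE = "https://example.com/external-api/v1"; // assumed base URL

async function getUserInfo(auth: { apiKey?: string; accessToken?: string }) {
  const headers: Record<string, string> = {};
  if (auth.apiKey) {
    headers["X-API-Key"] = auth.apiKey; // checked first by require_auth
  } else if (auth.accessToken) {
    headers["Authorization"] = `Bearer ${auth.accessToken}`;
  }

  const res = await fetch(`${BASE}/me`, { headers });
  if (res.status === 401) throw new Error(await res.text()); // missing/invalid credentials
  return res.json();
}
```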

View File

@@ -16,7 +16,9 @@ from fastapi import APIRouter, Body, HTTPException, Path, Security, status
from prisma.enums import APIKeyPermission
from pydantic import BaseModel, Field, SecretStr
from backend.data.api_key import APIKeyInfo
from backend.api.external.middleware import require_permission
from backend.api.features.integrations.models import get_all_provider_names
from backend.data.auth.base import APIAuthorizationInfo
from backend.data.model import (
APIKeyCredentials,
Credentials,
@@ -28,8 +30,6 @@ from backend.data.model import (
from backend.integrations.creds_manager import IntegrationCredentialsManager
from backend.integrations.oauth import CREDENTIALS_BY_PROVIDER, HANDLERS_BY_NAME
from backend.integrations.providers import ProviderName
from backend.server.external.middleware import require_permission
from backend.server.integrations.models import get_all_provider_names
from backend.util.settings import Settings
if TYPE_CHECKING:
@@ -255,7 +255,7 @@ def _get_oauth_handler_for_external(
@integrations_router.get("/providers", response_model=list[ProviderInfo])
async def list_providers(
api_key: APIKeyInfo = Security(
auth: APIAuthorizationInfo = Security(
require_permission(APIKeyPermission.READ_INTEGRATIONS)
),
) -> list[ProviderInfo]:
@@ -319,7 +319,7 @@ async def list_providers(
async def initiate_oauth(
provider: Annotated[str, Path(title="The OAuth provider")],
request: OAuthInitiateRequest,
api_key: APIKeyInfo = Security(
auth: APIAuthorizationInfo = Security(
require_permission(APIKeyPermission.MANAGE_INTEGRATIONS)
),
) -> OAuthInitiateResponse:
@@ -337,7 +337,10 @@ async def initiate_oauth(
if not validate_callback_url(request.callback_url):
raise HTTPException(
status_code=status.HTTP_400_BAD_REQUEST,
detail=f"Callback URL origin is not allowed. Allowed origins: {settings.config.external_oauth_callback_origins}",
detail=(
f"Callback URL origin is not allowed. "
f"Allowed origins: {settings.config.external_oauth_callback_origins}",
),
)
# Validate provider
@@ -359,13 +362,15 @@ async def initiate_oauth(
)
# Store state token with external flow metadata
# Note: initiated_by_api_key_id is only available for API key auth, not OAuth
api_key_id = getattr(auth, "id", None) if auth.type == "api_key" else None
state_token, code_challenge = await creds_manager.store.store_state_token(
user_id=api_key.user_id,
user_id=auth.user_id,
provider=provider if isinstance(provider_name, str) else provider_name.value,
scopes=request.scopes,
callback_url=request.callback_url,
state_metadata=request.state_metadata,
initiated_by_api_key_id=api_key.id,
initiated_by_api_key_id=api_key_id,
)
# Build login URL
@@ -393,7 +398,7 @@ async def initiate_oauth(
async def complete_oauth(
provider: Annotated[str, Path(title="The OAuth provider")],
request: OAuthCompleteRequest,
api_key: APIKeyInfo = Security(
auth: APIAuthorizationInfo = Security(
require_permission(APIKeyPermission.MANAGE_INTEGRATIONS)
),
) -> OAuthCompleteResponse:
@@ -406,7 +411,7 @@ async def complete_oauth(
"""
# Verify state token
valid_state = await creds_manager.store.verify_state_token(
api_key.user_id, request.state_token, provider
auth.user_id, request.state_token, provider
)
if not valid_state:
@@ -453,7 +458,7 @@ async def complete_oauth(
)
# Store credentials
await creds_manager.create(api_key.user_id, credentials)
await creds_manager.create(auth.user_id, credentials)
logger.info(f"Successfully completed external OAuth for provider {provider}")
@@ -470,7 +475,7 @@ async def complete_oauth(
@integrations_router.get("/credentials", response_model=list[CredentialSummary])
async def list_credentials(
api_key: APIKeyInfo = Security(
auth: APIAuthorizationInfo = Security(
require_permission(APIKeyPermission.READ_INTEGRATIONS)
),
) -> list[CredentialSummary]:
@@ -479,7 +484,7 @@ async def list_credentials(
Returns metadata about each credential without exposing sensitive tokens.
"""
credentials = await creds_manager.store.get_all_creds(api_key.user_id)
credentials = await creds_manager.store.get_all_creds(auth.user_id)
return [
CredentialSummary(
id=cred.id,
@@ -499,7 +504,7 @@ async def list_credentials(
)
async def list_credentials_by_provider(
provider: Annotated[str, Path(title="The provider to list credentials for")],
api_key: APIKeyInfo = Security(
auth: APIAuthorizationInfo = Security(
require_permission(APIKeyPermission.READ_INTEGRATIONS)
),
) -> list[CredentialSummary]:
@@ -507,7 +512,7 @@ async def list_credentials_by_provider(
List credentials for a specific provider.
"""
credentials = await creds_manager.store.get_creds_by_provider(
api_key.user_id, provider
auth.user_id, provider
)
return [
CredentialSummary(
@@ -536,7 +541,7 @@ async def create_credential(
CreateUserPasswordCredentialRequest,
CreateHostScopedCredentialRequest,
] = Body(..., discriminator="type"),
api_key: APIKeyInfo = Security(
auth: APIAuthorizationInfo = Security(
require_permission(APIKeyPermission.MANAGE_INTEGRATIONS)
),
) -> CreateCredentialResponse:
@@ -591,7 +596,7 @@ async def create_credential(
# Store credentials
try:
await creds_manager.create(api_key.user_id, credentials)
await creds_manager.create(auth.user_id, credentials)
except Exception as e:
logger.error(f"Failed to store credentials: {e}")
raise HTTPException(
@@ -623,7 +628,7 @@ class DeleteCredentialResponse(BaseModel):
async def delete_credential(
provider: Annotated[str, Path(title="The provider")],
cred_id: Annotated[str, Path(title="The credential ID to delete")],
api_key: APIKeyInfo = Security(
auth: APIAuthorizationInfo = Security(
require_permission(APIKeyPermission.DELETE_INTEGRATIONS)
),
) -> DeleteCredentialResponse:
@@ -634,7 +639,7 @@ async def delete_credential(
use the main API's delete endpoint which handles webhook cleanup and
token revocation.
"""
creds = await creds_manager.store.get_creds_by_id(api_key.user_id, cred_id)
creds = await creds_manager.store.get_creds_by_id(auth.user_id, cred_id)
if not creds:
raise HTTPException(
status_code=status.HTTP_404_NOT_FOUND, detail="Credentials not found"
@@ -645,6 +650,6 @@ async def delete_credential(
detail="Credentials do not match the specified provider",
)
await creds_manager.delete(api_key.user_id, cred_id)
await creds_manager.delete(auth.user_id, cred_id)
return DeleteCredentialResponse(deleted=True, credentials_id=cred_id)

View File

@@ -5,46 +5,60 @@ from typing import Annotated, Any, Literal, Optional, Sequence
from fastapi import APIRouter, Body, HTTPException, Security
from prisma.enums import AgentExecutionStatus, APIKeyPermission
from pydantic import BaseModel, Field
from typing_extensions import TypedDict
import backend.api.features.store.cache as store_cache
import backend.api.features.store.model as store_model
import backend.data.block
import backend.server.v2.store.cache as store_cache
import backend.server.v2.store.model as store_model
from backend.api.external.middleware import require_permission
from backend.data import execution as execution_db
from backend.data import graph as graph_db
from backend.data.api_key import APIKeyInfo
from backend.data import user as user_db
from backend.data.auth.base import APIAuthorizationInfo
from backend.data.block import BlockInput, CompletedBlockOutput
from backend.executor.utils import add_graph_execution
from backend.server.external.middleware import require_permission
from backend.util.settings import Settings
from .integrations import integrations_router
from .tools import tools_router
settings = Settings()
logger = logging.getLogger(__name__)
v1_router = APIRouter()
class NodeOutput(TypedDict):
key: str
value: Any
v1_router.include_router(integrations_router)
v1_router.include_router(tools_router)
class ExecutionNode(TypedDict):
node_id: str
input: Any
output: dict[str, Any]
class UserInfoResponse(BaseModel):
id: str
name: Optional[str]
email: str
timezone: str = Field(
description="The user's last known timezone (e.g. 'Europe/Amsterdam'), "
"or 'not-set' if not set"
)
class ExecutionNodeOutput(TypedDict):
node_id: str
outputs: list[NodeOutput]
@v1_router.get(
path="/me",
tags=["user", "meta"],
)
async def get_user_info(
auth: APIAuthorizationInfo = Security(
require_permission(APIKeyPermission.IDENTITY)
),
) -> UserInfoResponse:
user = await user_db.get_user_by_id(auth.user_id)
class GraphExecutionResult(TypedDict):
execution_id: str
status: str
nodes: list[ExecutionNode]
output: Optional[list[dict[str, str]]]
return UserInfoResponse(
id=user.id,
name=user.name,
email=user.email,
timezone=user.timezone,
)
@v1_router.get(
@@ -65,7 +79,9 @@ async def get_graph_blocks() -> Sequence[dict[Any, Any]]:
async def execute_graph_block(
block_id: str,
data: BlockInput,
api_key: APIKeyInfo = Security(require_permission(APIKeyPermission.EXECUTE_BLOCK)),
auth: APIAuthorizationInfo = Security(
require_permission(APIKeyPermission.EXECUTE_BLOCK)
),
) -> CompletedBlockOutput:
obj = backend.data.block.get_block(block_id)
if not obj:
@@ -85,12 +101,14 @@ async def execute_graph(
graph_id: str,
graph_version: int,
node_input: Annotated[dict[str, Any], Body(..., embed=True, default_factory=dict)],
api_key: APIKeyInfo = Security(require_permission(APIKeyPermission.EXECUTE_GRAPH)),
auth: APIAuthorizationInfo = Security(
require_permission(APIKeyPermission.EXECUTE_GRAPH)
),
) -> dict[str, Any]:
try:
graph_exec = await add_graph_execution(
graph_id=graph_id,
user_id=api_key.user_id,
user_id=auth.user_id,
inputs=node_input,
graph_version=graph_version,
)
@@ -100,6 +118,19 @@ async def execute_graph(
raise HTTPException(status_code=400, detail=msg)
class ExecutionNode(TypedDict):
node_id: str
input: Any
output: dict[str, Any]
class GraphExecutionResult(TypedDict):
execution_id: str
status: str
nodes: list[ExecutionNode]
output: Optional[list[dict[str, str]]]
@v1_router.get(
path="/graphs/{graph_id}/executions/{graph_exec_id}/results",
tags=["graphs"],
@@ -107,10 +138,12 @@ async def execute_graph(
async def get_graph_execution_results(
graph_id: str,
graph_exec_id: str,
api_key: APIKeyInfo = Security(require_permission(APIKeyPermission.READ_GRAPH)),
auth: APIAuthorizationInfo = Security(
require_permission(APIKeyPermission.READ_GRAPH)
),
) -> GraphExecutionResult:
graph_exec = await execution_db.get_graph_execution(
user_id=api_key.user_id,
user_id=auth.user_id,
execution_id=graph_exec_id,
include_node_executions=True,
)
@@ -122,7 +155,7 @@ async def get_graph_execution_results(
if not await graph_db.get_graph(
graph_id=graph_exec.graph_id,
version=graph_exec.graph_version,
user_id=api_key.user_id,
user_id=auth.user_id,
):
raise HTTPException(status_code=404, detail=f"Graph #{graph_id} not found.")

View File

@@ -14,19 +14,19 @@ from fastapi import APIRouter, Security
from prisma.enums import APIKeyPermission
from pydantic import BaseModel, Field
from backend.data.api_key import APIKeyInfo
from backend.server.external.middleware import require_permission
from backend.server.v2.chat.model import ChatSession
from backend.server.v2.chat.tools import find_agent_tool, run_agent_tool
from backend.server.v2.chat.tools.models import ToolResponseBase
from backend.api.external.middleware import require_permission
from backend.api.features.chat.model import ChatSession
from backend.api.features.chat.tools import find_agent_tool, run_agent_tool
from backend.api.features.chat.tools.models import ToolResponseBase
from backend.data.auth.base import APIAuthorizationInfo
logger = logging.getLogger(__name__)
tools_router = APIRouter(prefix="/tools", tags=["tools"])
# Note: We use Security() as a function parameter dependency (api_key: APIKeyInfo = Security(...))
# Note: We use Security() as a function parameter dependency (auth: APIAuthorizationInfo = Security(...))
# rather than in the decorator's dependencies= list. This avoids duplicate permission checks
# while still enforcing auth AND giving us access to the api_key for extracting user_id.
# while still enforcing auth AND giving us access to auth for extracting user_id.
# Request models
@@ -80,7 +80,9 @@ def _create_ephemeral_session(user_id: str | None) -> ChatSession:
)
async def find_agent(
request: FindAgentRequest,
api_key: APIKeyInfo = Security(require_permission(APIKeyPermission.USE_TOOLS)),
auth: APIAuthorizationInfo = Security(
require_permission(APIKeyPermission.USE_TOOLS)
),
) -> dict[str, Any]:
"""
Search for agents in the marketplace based on capabilities and user needs.
@@ -91,9 +93,9 @@ async def find_agent(
Returns:
List of matching agents or no results response
"""
session = _create_ephemeral_session(api_key.user_id)
session = _create_ephemeral_session(auth.user_id)
result = await find_agent_tool._execute(
user_id=api_key.user_id,
user_id=auth.user_id,
session=session,
query=request.query,
)
@@ -105,7 +107,9 @@ async def find_agent(
)
async def run_agent(
request: RunAgentRequest,
api_key: APIKeyInfo = Security(require_permission(APIKeyPermission.USE_TOOLS)),
auth: APIAuthorizationInfo = Security(
require_permission(APIKeyPermission.USE_TOOLS)
),
) -> dict[str, Any]:
"""
Run or schedule an agent from the marketplace.
@@ -129,9 +133,9 @@ async def run_agent(
- execution_started: If agent was run or scheduled successfully
- error: If something went wrong
"""
session = _create_ephemeral_session(api_key.user_id)
session = _create_ephemeral_session(auth.user_id)
result = await run_agent_tool._execute(
user_id=api_key.user_id,
user_id=auth.user_id,
session=session,
username_agent_slug=request.username_agent_slug,
inputs=request.inputs,
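
A hedged client-side sketch of these tool endpoints (base URL and exact route paths assumed; request shapes follow the models above):

```ts
const BASE = "https://example.com/external-api/v1"; // assumed

async function callTool(path: string, apiKey: string, body: unknown) {
  const res = await fetch(`${BASE}/tools/${path}`, {
    method: "POST",
    headers: { "X-API-Key": apiKey, "Content-Type": "application/json" },
    body: JSON.stringify(body),
  });
  return res.json();
}

async function demo(): Promise<void> {
  // find_agent: search the marketplace by capability.
  const agents = await callTool("find_agent", "agpt_...", {
    query: "summarize RSS feeds",
  });

  // run_agent: run or schedule a marketplace agent.
  const run = await callTool("run_agent", "agpt_...", {
    username_agent_slug: "someuser/some-agent", // hypothetical slug
    inputs: { topic: "AI news" },
  });

  console.log(agents, run);
}
```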

View File

@@ -6,9 +6,10 @@ from fastapi import APIRouter, Body, Security
 from prisma.enums import CreditTransactionType
 
 from backend.data.credit import admin_get_user_history, get_user_credit_model
-from backend.server.v2.admin.model import AddUserCreditsResponse, UserHistoryResponse
 from backend.util.json import SafeJson
 
+from .model import AddUserCreditsResponse, UserHistoryResponse
+
 logger = logging.getLogger(__name__)

View File

@@ -9,14 +9,15 @@ import pytest_mock
from autogpt_libs.auth.jwt_utils import get_jwt_payload
from pytest_snapshot.plugin import Snapshot
import backend.server.v2.admin.credit_admin_routes as credit_admin_routes
import backend.server.v2.admin.model as admin_model
from backend.data.model import UserTransaction
from backend.util.json import SafeJson
from backend.util.models import Pagination
from .credit_admin_routes import router as credit_admin_router
from .model import UserHistoryResponse
app = fastapi.FastAPI()
app.include_router(credit_admin_routes.router)
app.include_router(credit_admin_router)
client = fastapi.testclient.TestClient(app)
@@ -30,7 +31,7 @@ def setup_app_admin_auth(mock_jwt_admin):
def test_add_user_credits_success(
mocker: pytest_mock.MockFixture,
mocker: pytest_mock.MockerFixture,
configured_snapshot: Snapshot,
admin_user_id: str,
target_user_id: str,
@@ -42,7 +43,7 @@ def test_add_user_credits_success(
return_value=(1500, "transaction-123-uuid")
)
mocker.patch(
"backend.server.v2.admin.credit_admin_routes.get_user_credit_model",
"backend.api.features.admin.credit_admin_routes.get_user_credit_model",
return_value=mock_credit_model,
)
@@ -84,7 +85,7 @@ def test_add_user_credits_success(
def test_add_user_credits_negative_amount(
mocker: pytest_mock.MockFixture,
mocker: pytest_mock.MockerFixture,
snapshot: Snapshot,
) -> None:
"""Test credit deduction by admin (negative amount)"""
@@ -94,7 +95,7 @@ def test_add_user_credits_negative_amount(
return_value=(200, "transaction-456-uuid")
)
mocker.patch(
"backend.server.v2.admin.credit_admin_routes.get_user_credit_model",
"backend.api.features.admin.credit_admin_routes.get_user_credit_model",
return_value=mock_credit_model,
)
@@ -119,12 +120,12 @@ def test_add_user_credits_negative_amount(
def test_get_user_history_success(
mocker: pytest_mock.MockFixture,
mocker: pytest_mock.MockerFixture,
snapshot: Snapshot,
) -> None:
"""Test successful retrieval of user credit history"""
# Mock the admin_get_user_history function
mock_history_response = admin_model.UserHistoryResponse(
mock_history_response = UserHistoryResponse(
history=[
UserTransaction(
user_id="user-1",
@@ -150,7 +151,7 @@ def test_get_user_history_success(
)
mocker.patch(
"backend.server.v2.admin.credit_admin_routes.admin_get_user_history",
"backend.api.features.admin.credit_admin_routes.admin_get_user_history",
return_value=mock_history_response,
)
@@ -170,12 +171,12 @@ def test_get_user_history_success(
def test_get_user_history_with_filters(
mocker: pytest_mock.MockFixture,
mocker: pytest_mock.MockerFixture,
snapshot: Snapshot,
) -> None:
"""Test user credit history with search and filter parameters"""
# Mock the admin_get_user_history function
mock_history_response = admin_model.UserHistoryResponse(
mock_history_response = UserHistoryResponse(
history=[
UserTransaction(
user_id="user-3",
@@ -194,7 +195,7 @@ def test_get_user_history_with_filters(
)
mock_get_history = mocker.patch(
"backend.server.v2.admin.credit_admin_routes.admin_get_user_history",
"backend.api.features.admin.credit_admin_routes.admin_get_user_history",
return_value=mock_history_response,
)
@@ -230,12 +231,12 @@ def test_get_user_history_with_filters(
def test_get_user_history_empty_results(
mocker: pytest_mock.MockFixture,
mocker: pytest_mock.MockerFixture,
snapshot: Snapshot,
) -> None:
"""Test user credit history with no results"""
# Mock empty history response
mock_history_response = admin_model.UserHistoryResponse(
mock_history_response = UserHistoryResponse(
history=[],
pagination=Pagination(
total_items=0,
@@ -246,7 +247,7 @@ def test_get_user_history_empty_results(
)
mocker.patch(
"backend.server.v2.admin.credit_admin_routes.admin_get_user_history",
"backend.api.features.admin.credit_admin_routes.admin_get_user_history",
return_value=mock_history_response,
)

View File

@@ -8,6 +8,10 @@ from fastapi import APIRouter, HTTPException, Security
from pydantic import BaseModel, Field
from backend.blocks.llm import LlmModel
from backend.data.analytics import (
AccuracyTrendsResponse,
get_accuracy_trends_and_alerts,
)
from backend.data.execution import (
ExecutionStatus,
GraphExecutionMeta,
@@ -83,6 +87,18 @@ class ExecutionAnalyticsConfig(BaseModel):
recommended_model: str
class AccuracyTrendsRequest(BaseModel):
graph_id: str = Field(..., description="Graph ID to analyze", min_length=1)
user_id: Optional[str] = Field(None, description="Optional user ID filter")
days_back: int = Field(30, description="Number of days to look back", ge=7, le=90)
drop_threshold: float = Field(
10.0, description="Alert threshold percentage", ge=1.0, le=50.0
)
include_historical: bool = Field(
False, description="Include historical data for charts"
)
router = APIRouter(
prefix="/admin",
tags=["admin", "execution_analytics"],
@@ -419,3 +435,40 @@ async def _process_batch(
return await asyncio.gather(
*[process_single_execution(execution) for execution in executions]
)
@router.get(
"/execution_accuracy_trends",
response_model=AccuracyTrendsResponse,
summary="Get Execution Accuracy Trends and Alerts",
)
async def get_execution_accuracy_trends(
graph_id: str,
user_id: Optional[str] = None,
days_back: int = 30,
drop_threshold: float = 10.0,
include_historical: bool = False,
admin_user_id: str = Security(get_user_id),
) -> AccuracyTrendsResponse:
"""
Get execution accuracy trends with moving averages and alert detection.
Simple single-query approach.
"""
logger.info(
f"Admin user {admin_user_id} requesting accuracy trends for graph {graph_id}"
)
try:
result = await get_accuracy_trends_and_alerts(
graph_id=graph_id,
days_back=days_back,
user_id=user_id,
drop_threshold=drop_threshold,
include_historical=include_historical,
)
return result
except Exception as e:
logger.exception(f"Error getting accuracy trends for graph {graph_id}: {e}")
raise HTTPException(status_code=500, detail=str(e))
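
A hedged sketch of querying the new endpoint (base URL and auth header assumed; parameter bounds from the request model above):

```ts
const BASE = "https://example.com/api"; // assumed

async function fetchAccuracyTrends(graphId: string, token: string) {
  const params = new URLSearchParams({
    graph_id: graphId,
    days_back: "30",        // 7..90 per AccuracyTrendsRequest
    drop_threshold: "10",   // 1.0..50.0 alert threshold (percent)
    include_historical: "true",
  });

  const res = await fetch(`${BASE}/admin/execution_accuracy_trends?${params}`, {
    headers: { Authorization: `Bearer ${token}` }, // assumed auth scheme
  });
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  return res.json(); // AccuracyTrendsResponse
}
```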

View File

@@ -7,9 +7,10 @@ import fastapi
import fastapi.responses
import prisma.enums
import backend.server.v2.store.cache as store_cache
import backend.server.v2.store.db
import backend.server.v2.store.model
import backend.api.features.store.cache as store_cache
import backend.api.features.store.db as store_db
import backend.api.features.store.embeddings as store_embeddings
import backend.api.features.store.model as store_model
import backend.util.json
logger = logging.getLogger(__name__)
@@ -24,7 +25,7 @@ router = fastapi.APIRouter(
@router.get(
"/listings",
summary="Get Admin Listings History",
response_model=backend.server.v2.store.model.StoreListingsWithVersionsResponse,
response_model=store_model.StoreListingsWithVersionsResponse,
)
async def get_admin_listings_with_versions(
status: typing.Optional[prisma.enums.SubmissionStatus] = None,
@@ -48,7 +49,7 @@ async def get_admin_listings_with_versions(
StoreListingsWithVersionsResponse with listings and their versions
"""
try:
listings = await backend.server.v2.store.db.get_admin_listings_with_versions(
listings = await store_db.get_admin_listings_with_versions(
status=status,
search_query=search,
page=page,
@@ -68,11 +69,11 @@ async def get_admin_listings_with_versions(
@router.post(
"/submissions/{store_listing_version_id}/review",
summary="Review Store Submission",
response_model=backend.server.v2.store.model.StoreSubmission,
response_model=store_model.StoreSubmission,
)
async def review_submission(
store_listing_version_id: str,
request: backend.server.v2.store.model.ReviewSubmissionRequest,
request: store_model.ReviewSubmissionRequest,
user_id: str = fastapi.Security(autogpt_libs.auth.get_user_id),
):
"""
@@ -87,12 +88,10 @@ async def review_submission(
StoreSubmission with updated review information
"""
try:
already_approved = (
await backend.server.v2.store.db.check_submission_already_approved(
store_listing_version_id=store_listing_version_id,
)
already_approved = await store_db.check_submission_already_approved(
store_listing_version_id=store_listing_version_id,
)
submission = await backend.server.v2.store.db.review_store_submission(
submission = await store_db.review_store_submission(
store_listing_version_id=store_listing_version_id,
is_approved=request.is_approved,
external_comments=request.comments,
@@ -136,7 +135,7 @@ async def admin_download_agent_file(
Raises:
HTTPException: If the agent is not found or an unexpected error occurs.
"""
graph_data = await backend.server.v2.store.db.get_agent_as_admin(
graph_data = await store_db.get_agent_as_admin(
user_id=user_id,
store_listing_version_id=store_listing_version_id,
)
@@ -152,3 +151,54 @@ async def admin_download_agent_file(
return fastapi.responses.FileResponse(
tmp_file.name, filename=file_name, media_type="application/json"
)
@router.get(
"/embeddings/stats",
summary="Get Embedding Statistics",
)
async def get_embedding_stats() -> dict[str, typing.Any]:
"""
Get statistics about embedding coverage for store listings.
Returns counts of total approved listings, listings with embeddings,
listings without embeddings, and coverage percentage.
"""
try:
stats = await store_embeddings.get_embedding_stats()
return stats
except Exception as e:
logger.exception("Error getting embedding stats: %s", e)
raise fastapi.HTTPException(
status_code=500,
detail="An error occurred while retrieving embedding stats",
)
@router.post(
"/embeddings/backfill",
summary="Backfill Missing Embeddings",
)
async def backfill_embeddings(
batch_size: int = 10,
) -> dict[str, typing.Any]:
"""
Trigger backfill of embeddings for approved listings that don't have them.
Args:
batch_size: Number of embeddings to generate in one call (default 10)
Returns:
Dict with processed count, success count, failure count, and message
"""
try:
result = await store_embeddings.backfill_missing_embeddings(
batch_size=batch_size
)
return result
except Exception as e:
logger.exception("Error backfilling embeddings: %s", e)
raise fastapi.HTTPException(
status_code=500,
detail="An error occurred while backfilling embeddings",
)
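
A hedged sketch of driving the two embedding endpoints above (base URL, router prefix, and auth scheme assumed; the `processed` field is inferred from the docstring):

```ts
const BASE = "https://example.com/api"; // assumed

// Coverage stats: total approved listings, with/without embeddings, percentage.
async function getEmbeddingStats(token: string) {
  const res = await fetch(`${BASE}/admin/embeddings/stats`, {
    headers: { Authorization: `Bearer ${token}` }, // assumed auth scheme
  });
  return res.json();
}

// Backfill in small batches until nothing is left to process.
async function backfillAll(token: string, batchSize = 10): Promise<void> {
  for (;;) {
    const res = await fetch(
      `${BASE}/admin/embeddings/backfill?batch_size=${batchSize}`,
      { method: "POST", headers: { Authorization: `Bearer ${token}` } },
    );
    const result = await res.json();
    if (!result.processed) break; // assumed field from "processed count"
  }
}
```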

View File

@@ -6,10 +6,11 @@ from typing import Annotated
 
 import fastapi
 import pydantic
 from autogpt_libs.auth import get_user_id
+from autogpt_libs.auth.dependencies import requires_user
 
 import backend.data.analytics
 
-router = fastapi.APIRouter()
+router = fastapi.APIRouter(dependencies=[fastapi.Security(requires_user)])
 
 logger = logging.getLogger(__name__)

View File

@@ -0,0 +1,340 @@
"""Tests for analytics API endpoints."""
import json
from unittest.mock import AsyncMock, Mock
import fastapi
import fastapi.testclient
import pytest
import pytest_mock
from pytest_snapshot.plugin import Snapshot
from .analytics import router as analytics_router
app = fastapi.FastAPI()
app.include_router(analytics_router)
client = fastapi.testclient.TestClient(app)
@pytest.fixture(autouse=True)
def setup_app_auth(mock_jwt_user):
"""Setup auth overrides for all tests in this module."""
from autogpt_libs.auth.jwt_utils import get_jwt_payload
app.dependency_overrides[get_jwt_payload] = mock_jwt_user["get_jwt_payload"]
yield
app.dependency_overrides.clear()
# =============================================================================
# /log_raw_metric endpoint tests
# =============================================================================
def test_log_raw_metric_success(
mocker: pytest_mock.MockFixture,
configured_snapshot: Snapshot,
test_user_id: str,
) -> None:
"""Test successful raw metric logging."""
mock_result = Mock(id="metric-123-uuid")
mock_log_metric = mocker.patch(
"backend.data.analytics.log_raw_metric",
new_callable=AsyncMock,
return_value=mock_result,
)
request_data = {
"metric_name": "page_load_time",
"metric_value": 2.5,
"data_string": "/dashboard",
}
response = client.post("/log_raw_metric", json=request_data)
assert response.status_code == 200, f"Unexpected response: {response.text}"
assert response.json() == "metric-123-uuid"
mock_log_metric.assert_called_once_with(
user_id=test_user_id,
metric_name="page_load_time",
metric_value=2.5,
data_string="/dashboard",
)
configured_snapshot.assert_match(
json.dumps({"metric_id": response.json()}, indent=2, sort_keys=True),
"analytics_log_metric_success",
)
@pytest.mark.parametrize(
"metric_value,metric_name,data_string,test_id",
[
(100, "api_calls_count", "external_api", "integer_value"),
(0, "error_count", "no_errors", "zero_value"),
(-5.2, "temperature_delta", "cooling", "negative_value"),
(1.23456789, "precision_test", "float_precision", "float_precision"),
(999999999, "large_number", "max_value", "large_number"),
(0.0000001, "tiny_number", "min_value", "tiny_number"),
],
)
def test_log_raw_metric_various_values(
mocker: pytest_mock.MockFixture,
configured_snapshot: Snapshot,
metric_value: float,
metric_name: str,
data_string: str,
test_id: str,
) -> None:
"""Test raw metric logging with various metric values."""
mock_result = Mock(id=f"metric-{test_id}-uuid")
mocker.patch(
"backend.data.analytics.log_raw_metric",
new_callable=AsyncMock,
return_value=mock_result,
)
request_data = {
"metric_name": metric_name,
"metric_value": metric_value,
"data_string": data_string,
}
response = client.post("/log_raw_metric", json=request_data)
assert response.status_code == 200, f"Failed for {test_id}: {response.text}"
configured_snapshot.assert_match(
json.dumps(
{"metric_id": response.json(), "test_case": test_id},
indent=2,
sort_keys=True,
),
f"analytics_metric_{test_id}",
)
@pytest.mark.parametrize(
"invalid_data,expected_error",
[
({}, "Field required"),
({"metric_name": "test"}, "Field required"),
(
{"metric_name": "test", "metric_value": "not_a_number", "data_string": "x"},
"Input should be a valid number",
),
(
{"metric_name": "", "metric_value": 1.0, "data_string": "test"},
"String should have at least 1 character",
),
(
{"metric_name": "test", "metric_value": 1.0, "data_string": ""},
"String should have at least 1 character",
),
],
ids=[
"empty_request",
"missing_metric_value_and_data_string",
"invalid_metric_value_type",
"empty_metric_name",
"empty_data_string",
],
)
def test_log_raw_metric_validation_errors(
invalid_data: dict,
expected_error: str,
) -> None:
"""Test validation errors for invalid metric requests."""
response = client.post("/log_raw_metric", json=invalid_data)
assert response.status_code == 422
error_detail = response.json()
assert "detail" in error_detail, f"Missing 'detail' in error: {error_detail}"
error_text = json.dumps(error_detail)
assert (
expected_error in error_text
), f"Expected '{expected_error}' in error response: {error_text}"
def test_log_raw_metric_service_error(
mocker: pytest_mock.MockFixture,
test_user_id: str,
) -> None:
"""Test error handling when analytics service fails."""
mocker.patch(
"backend.data.analytics.log_raw_metric",
new_callable=AsyncMock,
side_effect=Exception("Database connection failed"),
)
request_data = {
"metric_name": "test_metric",
"metric_value": 1.0,
"data_string": "test",
}
response = client.post("/log_raw_metric", json=request_data)
assert response.status_code == 500
error_detail = response.json()["detail"]
assert "Database connection failed" in error_detail["message"]
assert "hint" in error_detail
# =============================================================================
# /log_raw_analytics endpoint tests
# =============================================================================
def test_log_raw_analytics_success(
mocker: pytest_mock.MockFixture,
configured_snapshot: Snapshot,
test_user_id: str,
) -> None:
"""Test successful raw analytics logging."""
mock_result = Mock(id="analytics-789-uuid")
mock_log_analytics = mocker.patch(
"backend.data.analytics.log_raw_analytics",
new_callable=AsyncMock,
return_value=mock_result,
)
request_data = {
"type": "user_action",
"data": {
"action": "button_click",
"button_id": "submit_form",
"timestamp": "2023-01-01T00:00:00Z",
"metadata": {"form_type": "registration", "fields_filled": 5},
},
"data_index": "button_click_submit_form",
}
response = client.post("/log_raw_analytics", json=request_data)
assert response.status_code == 200, f"Unexpected response: {response.text}"
assert response.json() == "analytics-789-uuid"
mock_log_analytics.assert_called_once_with(
test_user_id,
"user_action",
request_data["data"],
"button_click_submit_form",
)
configured_snapshot.assert_match(
json.dumps({"analytics_id": response.json()}, indent=2, sort_keys=True),
"analytics_log_analytics_success",
)
def test_log_raw_analytics_complex_data(
mocker: pytest_mock.MockFixture,
configured_snapshot: Snapshot,
) -> None:
"""Test raw analytics logging with complex nested data structures."""
mock_result = Mock(id="analytics-complex-uuid")
mocker.patch(
"backend.data.analytics.log_raw_analytics",
new_callable=AsyncMock,
return_value=mock_result,
)
request_data = {
"type": "agent_execution",
"data": {
"agent_id": "agent_123",
"execution_id": "exec_456",
"status": "completed",
"duration_ms": 3500,
"nodes_executed": 15,
"blocks_used": [
{"block_id": "llm_block", "count": 3},
{"block_id": "http_block", "count": 5},
{"block_id": "code_block", "count": 2},
],
"errors": [],
"metadata": {
"trigger": "manual",
"user_tier": "premium",
"environment": "production",
},
},
"data_index": "agent_123_exec_456",
}
response = client.post("/log_raw_analytics", json=request_data)
assert response.status_code == 200
configured_snapshot.assert_match(
json.dumps(
{"analytics_id": response.json(), "logged_data": request_data["data"]},
indent=2,
sort_keys=True,
),
"analytics_log_analytics_complex_data",
)
@pytest.mark.parametrize(
"invalid_data,expected_error",
[
({}, "Field required"),
({"type": "test"}, "Field required"),
(
{"type": "test", "data": "not_a_dict", "data_index": "test"},
"Input should be a valid dictionary",
),
({"type": "test", "data": {"key": "value"}}, "Field required"),
],
ids=[
"empty_request",
"missing_data_and_data_index",
"invalid_data_type",
"missing_data_index",
],
)
def test_log_raw_analytics_validation_errors(
invalid_data: dict,
expected_error: str,
) -> None:
"""Test validation errors for invalid analytics requests."""
response = client.post("/log_raw_analytics", json=invalid_data)
assert response.status_code == 422
error_detail = response.json()
assert "detail" in error_detail, f"Missing 'detail' in error: {error_detail}"
error_text = json.dumps(error_detail)
assert (
expected_error in error_text
), f"Expected '{expected_error}' in error response: {error_text}"
def test_log_raw_analytics_service_error(
mocker: pytest_mock.MockFixture,
test_user_id: str,
) -> None:
"""Test error handling when analytics service fails."""
mocker.patch(
"backend.data.analytics.log_raw_analytics",
new_callable=AsyncMock,
side_effect=Exception("Analytics DB unreachable"),
)
request_data = {
"type": "test_event",
"data": {"key": "value"},
"data_index": "test_index",
}
response = client.post("/log_raw_analytics", json=request_data)
assert response.status_code == 500
error_detail = response.json()["detail"]
assert "Analytics DB unreachable" in error_detail["message"]
assert "hint" in error_detail

View File

@@ -0,0 +1,689 @@
import logging
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from difflib import SequenceMatcher
from typing import Sequence
import prisma
import backend.api.features.library.db as library_db
import backend.api.features.library.model as library_model
import backend.api.features.store.db as store_db
import backend.api.features.store.model as store_model
import backend.data.block
from backend.blocks import load_all_blocks
from backend.blocks.llm import LlmModel
from backend.data.block import AnyBlockSchema, BlockCategory, BlockInfo, BlockSchema
from backend.data.db import query_raw_with_schema
from backend.integrations.providers import ProviderName
from backend.util.cache import cached
from backend.util.models import Pagination
from .model import (
BlockCategoryResponse,
BlockResponse,
BlockType,
CountResponse,
FilterType,
Provider,
ProviderResponse,
SearchEntry,
)
logger = logging.getLogger(__name__)
llm_models = [name.name.lower().replace("_", " ") for name in LlmModel]
MAX_LIBRARY_AGENT_RESULTS = 100
MAX_MARKETPLACE_AGENT_RESULTS = 100
MIN_SCORE_FOR_FILTERED_RESULTS = 10.0
SearchResultItem = BlockInfo | library_model.LibraryAgent | store_model.StoreAgent
@dataclass
class _ScoredItem:
item: SearchResultItem
filter_type: FilterType
score: float
sort_key: str
@dataclass
class _SearchCacheEntry:
items: list[SearchResultItem]
total_items: dict[FilterType, int]
def get_block_categories(category_blocks: int = 3) -> list[BlockCategoryResponse]:
categories: dict[BlockCategory, BlockCategoryResponse] = {}
for block_type in load_all_blocks().values():
block: AnyBlockSchema = block_type()
# Skip disabled blocks
if block.disabled:
continue
# Skip blocks that don't have categories (all should have at least one)
if not block.categories:
continue
# Add block to the categories
for category in block.categories:
if category not in categories:
categories[category] = BlockCategoryResponse(
name=category.name.lower(),
total_blocks=0,
blocks=[],
)
categories[category].total_blocks += 1
# Append if the category has less than the specified number of blocks
if len(categories[category].blocks) < category_blocks:
categories[category].blocks.append(block.get_info())
# Sort categories by name
return sorted(categories.values(), key=lambda x: x.name)
def get_blocks(
*,
category: str | None = None,
type: BlockType | None = None,
provider: ProviderName | None = None,
page: int = 1,
page_size: int = 50,
) -> BlockResponse:
"""
    Get blocks filtered by category, type, or provider (at most one filter).
    Providing no filter fetches all blocks.
"""
# Only one of category, type, or provider can be specified
if (category and type) or (category and provider) or (type and provider):
raise ValueError("Only one of category, type, or provider can be specified")
blocks: list[AnyBlockSchema] = []
skip = (page - 1) * page_size
take = page_size
total = 0
for block_type in load_all_blocks().values():
block: AnyBlockSchema = block_type()
# Skip disabled blocks
if block.disabled:
continue
# Skip blocks that don't match the category
if category and category not in {c.name.lower() for c in block.categories}:
continue
# Skip blocks that don't match the type
if (
(type == "input" and block.block_type.value != "Input")
or (type == "output" and block.block_type.value != "Output")
or (type == "action" and block.block_type.value in ("Input", "Output"))
):
continue
# Skip blocks that don't match the provider
if provider:
credentials_info = block.input_schema.get_credentials_fields_info().values()
if not any(provider in info.provider for info in credentials_info):
continue
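        # Count every matching block for pagination totals; the skip/take
        # counters below collect only the current page's slice.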
total += 1
if skip > 0:
skip -= 1
continue
if take > 0:
take -= 1
blocks.append(block)
return BlockResponse(
blocks=[b.get_info() for b in blocks],
pagination=Pagination(
total_items=total,
total_pages=(total + page_size - 1) // page_size,
current_page=page,
page_size=page_size,
),
)
def get_block_by_id(block_id: str) -> BlockInfo | None:
"""
Get a specific block by its ID.
"""
for block_type in load_all_blocks().values():
block: AnyBlockSchema = block_type()
if block.id == block_id:
return block.get_info()
return None
async def update_search(user_id: str, search: SearchEntry) -> str:
"""
Upsert a search request for the user and return the search ID.
"""
if search.search_id:
# Update existing search
await prisma.models.BuilderSearchHistory.prisma().update(
where={
"id": search.search_id,
},
data={
"searchQuery": search.search_query or "",
"filter": search.filter or [], # type: ignore
"byCreator": search.by_creator or [],
},
)
return search.search_id
else:
# Create new search
new_search = await prisma.models.BuilderSearchHistory.prisma().create(
data={
"userId": user_id,
"searchQuery": search.search_query or "",
"filter": search.filter or [], # type: ignore
"byCreator": search.by_creator or [],
}
)
return new_search.id
async def get_recent_searches(user_id: str, limit: int = 5) -> list[SearchEntry]:
"""
Get the user's most recent search requests.
"""
searches = await prisma.models.BuilderSearchHistory.prisma().find_many(
where={
"userId": user_id,
},
order={
"updatedAt": "desc",
},
take=limit,
)
return [
SearchEntry(
search_query=s.searchQuery,
filter=s.filter, # type: ignore
by_creator=s.byCreator,
search_id=s.id,
)
for s in searches
]
async def get_sorted_search_results(
*,
user_id: str,
search_query: str | None,
filters: Sequence[FilterType],
by_creator: Sequence[str] | None = None,
) -> _SearchCacheEntry:
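    # Normalize to sorted tuples so equivalent requests (same filters and
    # creators in any order) produce identical cache keys below.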
normalized_filters: tuple[FilterType, ...] = tuple(sorted(set(filters or [])))
normalized_creators: tuple[str, ...] = tuple(sorted(set(by_creator or [])))
return await _build_cached_search_results(
user_id=user_id,
search_query=search_query or "",
filters=normalized_filters,
by_creator=normalized_creators,
)
@cached(ttl_seconds=300, shared_cache=True)
async def _build_cached_search_results(
user_id: str,
search_query: str,
filters: tuple[FilterType, ...],
by_creator: tuple[str, ...],
) -> _SearchCacheEntry:
normalized_query = (search_query or "").strip().lower()
include_blocks = "blocks" in filters
include_integrations = "integrations" in filters
include_library_agents = "my_agents" in filters
include_marketplace_agents = "marketplace_agents" in filters
scored_items: list[_ScoredItem] = []
total_items: dict[FilterType, int] = {
"blocks": 0,
"integrations": 0,
"marketplace_agents": 0,
"my_agents": 0,
}
block_results, block_total, integration_total = _collect_block_results(
normalized_query=normalized_query,
include_blocks=include_blocks,
include_integrations=include_integrations,
)
scored_items.extend(block_results)
total_items["blocks"] = block_total
total_items["integrations"] = integration_total
if include_library_agents:
library_response = await library_db.list_library_agents(
user_id=user_id,
search_term=search_query or None,
page=1,
page_size=MAX_LIBRARY_AGENT_RESULTS,
)
total_items["my_agents"] = library_response.pagination.total_items
scored_items.extend(
_build_library_items(
agents=library_response.agents,
normalized_query=normalized_query,
)
)
if include_marketplace_agents:
marketplace_response = await store_db.get_store_agents(
creators=list(by_creator) or None,
search_query=search_query or None,
page=1,
page_size=MAX_MARKETPLACE_AGENT_RESULTS,
)
total_items["marketplace_agents"] = marketplace_response.pagination.total_items
scored_items.extend(
_build_marketplace_items(
agents=marketplace_response.agents,
normalized_query=normalized_query,
)
)
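    # Highest score first; ties break by item name, then filter type, so
    # ordering is deterministic across cache rebuilds.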
sorted_items = sorted(
scored_items,
key=lambda entry: (-entry.score, entry.sort_key, entry.filter_type),
)
return _SearchCacheEntry(
items=[entry.item for entry in sorted_items],
total_items=total_items,
)
def _collect_block_results(
*,
normalized_query: str,
include_blocks: bool,
include_integrations: bool,
) -> tuple[list[_ScoredItem], int, int]:
results: list[_ScoredItem] = []
block_count = 0
integration_count = 0
if not include_blocks and not include_integrations:
return results, block_count, integration_count
for block_type in load_all_blocks().values():
block: AnyBlockSchema = block_type()
if block.disabled:
continue
block_info = block.get_info()
credentials = list(block.input_schema.get_credentials_fields().values())
is_integration = len(credentials) > 0
if is_integration and not include_integrations:
continue
if not is_integration and not include_blocks:
continue
score = _score_block(block, block_info, normalized_query)
if not _should_include_item(score, normalized_query):
continue
filter_type: FilterType = "integrations" if is_integration else "blocks"
if is_integration:
integration_count += 1
else:
block_count += 1
results.append(
_ScoredItem(
item=block_info,
filter_type=filter_type,
score=score,
sort_key=_get_item_name(block_info),
)
)
return results, block_count, integration_count
def _build_library_items(
*,
agents: list[library_model.LibraryAgent],
normalized_query: str,
) -> list[_ScoredItem]:
results: list[_ScoredItem] = []
for agent in agents:
score = _score_library_agent(agent, normalized_query)
if not _should_include_item(score, normalized_query):
continue
results.append(
_ScoredItem(
item=agent,
filter_type="my_agents",
score=score,
sort_key=_get_item_name(agent),
)
)
return results
def _build_marketplace_items(
*,
agents: list[store_model.StoreAgent],
normalized_query: str,
) -> list[_ScoredItem]:
results: list[_ScoredItem] = []
for agent in agents:
score = _score_store_agent(agent, normalized_query)
if not _should_include_item(score, normalized_query):
continue
results.append(
_ScoredItem(
item=agent,
filter_type="marketplace_agents",
score=score,
sort_key=_get_item_name(agent),
)
)
return results
def get_providers(
query: str = "",
page: int = 1,
page_size: int = 50,
) -> ProviderResponse:
    providers = []
    query = query.lower()
    skip = (page - 1) * page_size
    take = page_size
    total = 0
    all_providers = _get_all_providers()
    for provider in all_providers.values():
        if (
            query not in provider.name.value.lower()
            and query not in provider.description.lower()
        ):
            continue
        # Count every match so the pagination total reflects the filtered
        # set rather than the full provider list.
        total += 1
        if skip > 0:
            skip -= 1
            continue
        if take > 0:
            take -= 1
            providers.append(provider)
return ProviderResponse(
providers=providers,
pagination=Pagination(
total_items=total,
total_pages=(total + page_size - 1) // page_size,
current_page=page,
page_size=page_size,
),
)
async def get_counts(user_id: str) -> CountResponse:
my_agents = await prisma.models.LibraryAgent.prisma().count(
where={
"userId": user_id,
"isDeleted": False,
"isArchived": False,
}
)
counts = await _get_static_counts()
return CountResponse(
my_agents=my_agents,
**counts,
)
@cached(ttl_seconds=3600)
async def _get_static_counts():
"""
Get counts of blocks, integrations, and marketplace agents.
This is cached to avoid unnecessary database queries and calculations.
"""
all_blocks = 0
input_blocks = 0
action_blocks = 0
output_blocks = 0
integrations = 0
for block_type in load_all_blocks().values():
block: AnyBlockSchema = block_type()
if block.disabled:
continue
all_blocks += 1
if block.block_type.value == "Input":
input_blocks += 1
elif block.block_type.value == "Output":
output_blocks += 1
else:
action_blocks += 1
credentials = list(block.input_schema.get_credentials_fields().values())
if len(credentials) > 0:
integrations += 1
marketplace_agents = await prisma.models.StoreAgent.prisma().count()
return {
"all_blocks": all_blocks,
"input_blocks": input_blocks,
"action_blocks": action_blocks,
"output_blocks": output_blocks,
"integrations": integrations,
"marketplace_agents": marketplace_agents,
}
def _matches_llm_model(schema_cls: type[BlockSchema], query: str) -> bool:
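    # A block matches an LLM-model query when it exposes an LlmModel field
    # and the query substring-matches any known model name (e.g. "gpt").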
for field in schema_cls.model_fields.values():
if field.annotation == LlmModel:
# Check if query matches any value in llm_models
if any(query in name for name in llm_models):
return True
return False
def _score_block(
block: AnyBlockSchema,
block_info: BlockInfo,
normalized_query: str,
) -> float:
if not normalized_query:
return 0.0
name = block_info.name.lower()
description = block_info.description.lower()
score = _score_primary_fields(name, description, normalized_query)
category_text = " ".join(
category.get("category", "").lower() for category in block_info.categories
)
score += _score_additional_field(category_text, normalized_query, 12, 6)
credentials_info = block.input_schema.get_credentials_fields_info().values()
provider_names = [
provider.value.lower()
for info in credentials_info
for provider in info.provider
]
provider_text = " ".join(provider_names)
score += _score_additional_field(provider_text, normalized_query, 15, 6)
if _matches_llm_model(block.input_schema, normalized_query):
score += 20
return score
def _score_library_agent(
agent: library_model.LibraryAgent,
normalized_query: str,
) -> float:
if not normalized_query:
return 0.0
name = agent.name.lower()
description = (agent.description or "").lower()
instructions = (agent.instructions or "").lower()
score = _score_primary_fields(name, description, normalized_query)
score += _score_additional_field(instructions, normalized_query, 15, 6)
score += _score_additional_field(
agent.creator_name.lower(), normalized_query, 10, 5
)
return score
def _score_store_agent(
agent: store_model.StoreAgent,
normalized_query: str,
) -> float:
if not normalized_query:
return 0.0
name = agent.agent_name.lower()
description = agent.description.lower()
sub_heading = agent.sub_heading.lower()
score = _score_primary_fields(name, description, normalized_query)
score += _score_additional_field(sub_heading, normalized_query, 12, 6)
score += _score_additional_field(agent.creator.lower(), normalized_query, 10, 5)
return score
def _score_primary_fields(name: str, description: str, query: str) -> float:
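    # Tiered weights: exact name match > prefix > substring, plus fuzzy
    # similarity so near-misses still outrank unrelated items.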
score = 0.0
if name == query:
score += 120
elif name.startswith(query):
score += 90
elif query in name:
score += 60
score += SequenceMatcher(None, name, query).ratio() * 50
if description:
if query in description:
score += 30
score += SequenceMatcher(None, description, query).ratio() * 25
return score
def _score_additional_field(
value: str,
query: str,
contains_weight: float,
similarity_weight: float,
) -> float:
if not value or not query:
return 0.0
score = 0.0
if query in value:
score += contains_weight
score += SequenceMatcher(None, value, query).ratio() * similarity_weight
return score
def _should_include_item(score: float, normalized_query: str) -> bool:
if not normalized_query:
return True
return score >= MIN_SCORE_FOR_FILTERED_RESULTS
def _get_item_name(item: SearchResultItem) -> str:
if isinstance(item, BlockInfo):
return item.name.lower()
if isinstance(item, library_model.LibraryAgent):
return item.name.lower()
return item.agent_name.lower()
@cached(ttl_seconds=3600)
def _get_all_providers() -> dict[ProviderName, Provider]:
providers: dict[ProviderName, Provider] = {}
for block_type in load_all_blocks().values():
block: AnyBlockSchema = block_type()
if block.disabled:
continue
credentials_info = block.input_schema.get_credentials_fields_info().values()
for info in credentials_info:
for provider in info.provider: # provider is a ProviderName enum member
if provider in providers:
providers[provider].integration_count += 1
else:
providers[provider] = Provider(
name=provider, description="", integration_count=1
)
return providers
@cached(ttl_seconds=3600)
async def get_suggested_blocks(count: int = 5) -> list[BlockInfo]:
suggested_blocks = []
# Sum the number of executions for each block type
# Prisma cannot group by nested relations, so we do a raw query
# Calculate the cutoff timestamp
timestamp_threshold = datetime.now(timezone.utc) - timedelta(days=30)
results = await query_raw_with_schema(
"""
SELECT
agent_node."agentBlockId" AS block_id,
COUNT(execution.id) AS execution_count
FROM {schema_prefix}"AgentNodeExecution" execution
JOIN {schema_prefix}"AgentNode" agent_node ON execution."agentNodeId" = agent_node.id
WHERE execution."endedTime" >= $1::timestamp
GROUP BY agent_node."agentBlockId"
ORDER BY execution_count DESC;
""",
timestamp_threshold,
)
    # Get the top blocks based on execution count
    # But ignore Input and Output blocks
    # Build a lookup table once instead of scanning results for every block
    execution_counts = {row["block_id"]: row["execution_count"] for row in results}
    blocks: list[tuple[BlockInfo, int]] = []
    for block_type in load_all_blocks().values():
        block: AnyBlockSchema = block_type()
        if block.disabled or block.block_type in (
            backend.data.block.BlockType.INPUT,
            backend.data.block.BlockType.OUTPUT,
            backend.data.block.BlockType.AGENT,
        ):
            continue
        # Find the execution count for this block (0 if it never ran recently)
        execution_count = execution_counts.get(block.id, 0)
blocks.append((block.get_info(), execution_count))
# Sort blocks by execution count
blocks.sort(key=lambda x: x[1], reverse=True)
suggested_blocks = [block[0] for block in blocks]
# Return the top blocks
return suggested_blocks[:count]

View File

@@ -2,8 +2,8 @@ from typing import Literal
from pydantic import BaseModel
import backend.server.v2.library.model as library_model
import backend.server.v2.store.model as store_model
import backend.api.features.library.model as library_model
import backend.api.features.store.model as store_model
from backend.data.block import BlockInfo
from backend.integrations.providers import ProviderName
from backend.util.models import Pagination
@@ -18,10 +18,17 @@ FilterType = Literal[
BlockType = Literal["all", "input", "action", "output"]
class SearchEntry(BaseModel):
search_query: str | None = None
filter: list[FilterType] | None = None
by_creator: list[str] | None = None
search_id: str | None = None
# Suggestions
class SuggestionsResponse(BaseModel):
otto_suggestions: list[str]
recent_searches: list[str]
recent_searches: list[SearchEntry]
providers: list[ProviderName]
top_blocks: list[BlockInfo]
@@ -32,7 +39,7 @@ class BlockCategoryResponse(BaseModel):
total_blocks: int
blocks: list[BlockInfo]
model_config = {"use_enum_values": False} # <== use enum names like "AI"
model_config = {"use_enum_values": False} # Use enum names like "AI"
# Input/Action/Output and see all for block categories
@@ -53,17 +60,11 @@ class ProviderResponse(BaseModel):
pagination: Pagination
class SearchBlocksResponse(BaseModel):
blocks: BlockResponse
total_block_count: int
total_integration_count: int
class SearchResponse(BaseModel):
items: list[BlockInfo | library_model.LibraryAgent | store_model.StoreAgent]
search_id: str
total_items: dict[FilterType, int]
page: int
more_pages: bool
pagination: Pagination
class CountResponse(BaseModel):

View File

@@ -4,15 +4,12 @@ from typing import Annotated, Sequence
import fastapi
from autogpt_libs.auth.dependencies import get_user_id, requires_user
import backend.server.v2.builder.db as builder_db
import backend.server.v2.builder.model as builder_model
import backend.server.v2.library.db as library_db
import backend.server.v2.library.model as library_model
import backend.server.v2.store.db as store_db
import backend.server.v2.store.model as store_model
from backend.integrations.providers import ProviderName
from backend.util.models import Pagination
from . import db as builder_db
from . import model as builder_model
logger = logging.getLogger(__name__)
router = fastapi.APIRouter(
@@ -45,7 +42,9 @@ def sanitize_query(query: str | None) -> str | None:
summary="Get Builder suggestions",
response_model=builder_model.SuggestionsResponse,
)
async def get_suggestions() -> builder_model.SuggestionsResponse:
async def get_suggestions(
user_id: Annotated[str, fastapi.Security(get_user_id)],
) -> builder_model.SuggestionsResponse:
"""
Get all suggestions for the Blocks Menu.
"""
@@ -55,11 +54,7 @@ async def get_suggestions() -> builder_model.SuggestionsResponse:
"Help me create a list",
"Help me feed my data to Google Maps",
],
recent_searches=[
"image generation",
"deepfake",
"competitor analysis",
],
recent_searches=await builder_db.get_recent_searches(user_id),
providers=[
ProviderName.TWITTER,
ProviderName.GITHUB,
@@ -147,7 +142,6 @@ async def get_providers(
)
# Not using POST because orval on the frontend doesn't support Infinite Query with the POST method.
@router.get(
"/search",
summary="Builder search",
@@ -157,7 +151,7 @@ async def get_providers(
async def search(
user_id: Annotated[str, fastapi.Security(get_user_id)],
search_query: Annotated[str | None, fastapi.Query()] = None,
filter: Annotated[list[str] | None, fastapi.Query()] = None,
filter: Annotated[list[builder_model.FilterType] | None, fastapi.Query()] = None,
search_id: Annotated[str | None, fastapi.Query()] = None,
by_creator: Annotated[list[str] | None, fastapi.Query()] = None,
page: Annotated[int, fastapi.Query()] = 1,
@@ -176,69 +170,43 @@ async def search(
]
search_query = sanitize_query(search_query)
# Blocks&Integrations
blocks = builder_model.SearchBlocksResponse(
blocks=builder_model.BlockResponse(
blocks=[],
pagination=Pagination.empty(),
),
total_block_count=0,
total_integration_count=0,
# Get all possible results
cached_results = await builder_db.get_sorted_search_results(
user_id=user_id,
search_query=search_query,
filters=filter,
by_creator=by_creator,
)
if "blocks" in filter or "integrations" in filter:
blocks = builder_db.search_blocks(
include_blocks="blocks" in filter,
include_integrations="integrations" in filter,
query=search_query or "",
page=page,
page_size=page_size,
)
# Library Agents
my_agents = library_model.LibraryAgentResponse(
agents=[],
pagination=Pagination.empty(),
# Paginate results
total_combined_items = len(cached_results.items)
pagination = Pagination(
total_items=total_combined_items,
total_pages=(total_combined_items + page_size - 1) // page_size,
current_page=page,
page_size=page_size,
)
if "my_agents" in filter:
my_agents = await library_db.list_library_agents(
user_id=user_id,
search_term=search_query,
page=page,
page_size=page_size,
)
# Marketplace Agents
marketplace_agents = store_model.StoreAgentsResponse(
agents=[],
pagination=Pagination.empty(),
)
if "marketplace_agents" in filter:
marketplace_agents = await store_db.get_store_agents(
creators=by_creator,
start_idx = (page - 1) * page_size
end_idx = start_idx + page_size
paginated_items = cached_results.items[start_idx:end_idx]
# Update the search entry by id
search_id = await builder_db.update_search(
user_id,
builder_model.SearchEntry(
search_query=search_query,
page=page,
page_size=page_size,
)
more_pages = False
if (
blocks.blocks.pagination.current_page < blocks.blocks.pagination.total_pages
or my_agents.pagination.current_page < my_agents.pagination.total_pages
or marketplace_agents.pagination.current_page
< marketplace_agents.pagination.total_pages
):
more_pages = True
filter=filter,
by_creator=by_creator,
search_id=search_id,
),
)
return builder_model.SearchResponse(
items=blocks.blocks.blocks + my_agents.agents + marketplace_agents.agents,
total_items={
"blocks": blocks.total_block_count,
"integrations": blocks.total_integration_count,
"marketplace_agents": marketplace_agents.pagination.total_items,
"my_agents": my_agents.pagination.total_items,
},
page=page,
more_pages=more_pages,
items=paginated_items,
search_id=search_id,
total_items=cached_results.total_items,
pagination=pagination,
)

View File

@@ -12,7 +12,11 @@ class ChatConfig(BaseSettings):
# OpenAI API Configuration
model: str = Field(
default="qwen/qwen3-235b-a22b-2507", description="Default model to use"
default="anthropic/claude-opus-4.5", description="Default model to use"
)
title_model: str = Field(
default="openai/gpt-4o-mini",
description="Model to use for generating session titles (should be fast/cheap)",
)
api_key: str | None = Field(default=None, description="OpenAI API key")
base_url: str | None = Field(
@@ -41,6 +45,13 @@ class ChatConfig(BaseSettings):
default=3, description="Maximum number of agent schedules"
)
# Langfuse Prompt Management Configuration
# Note: Langfuse credentials are in Settings().secrets (settings.py)
langfuse_prompt_name: str = Field(
default="CoPilot Prompt",
description="Name of the prompt in Langfuse to fetch",
)
@field_validator("api_key", mode="before")
@classmethod
def get_api_key(cls, v):
@@ -72,8 +83,31 @@ class ChatConfig(BaseSettings):
v = "https://openrouter.ai/api/v1"
return v
# Prompt paths for different contexts
PROMPT_PATHS: dict[str, str] = {
"default": "prompts/chat_system.md",
"onboarding": "prompts/onboarding_system.md",
}
def get_system_prompt_for_type(
self, prompt_type: str = "default", **template_vars
) -> str:
"""Load and render a system prompt by type.
Args:
prompt_type: The type of prompt to load ("default" or "onboarding")
**template_vars: Variables to substitute in the template
Returns:
Rendered system prompt string
"""
prompt_path_str = self.PROMPT_PATHS.get(
prompt_type, self.PROMPT_PATHS["default"]
)
return self._load_prompt_from_path(prompt_path_str, **template_vars)
def get_system_prompt(self, **template_vars) -> str:
"""Load and render the system prompt from file.
"""Load and render the default system prompt from file.
Args:
**template_vars: Variables to substitute in the template
@@ -82,9 +116,21 @@ class ChatConfig(BaseSettings):
Rendered system prompt string
"""
return self._load_prompt_from_path(self.system_prompt_path, **template_vars)
def _load_prompt_from_path(self, prompt_path_str: str, **template_vars) -> str:
"""Load and render a system prompt from a given path.
Args:
prompt_path_str: Path to the prompt file relative to chat module
**template_vars: Variables to substitute in the template
Returns:
Rendered system prompt string
"""
# Get the path relative to this module
module_dir = Path(__file__).parent
prompt_path = module_dir / self.system_prompt_path
prompt_path = module_dir / prompt_path_str
# Check for .j2 extension first (Jinja2 template)
j2_path = Path(str(prompt_path) + ".j2")

View File

@@ -0,0 +1,195 @@
"""Database operations for chat sessions."""
import logging
from datetime import UTC, datetime
from typing import Any
from prisma.models import ChatMessage as PrismaChatMessage
from prisma.models import ChatSession as PrismaChatSession
from prisma.types import ChatSessionUpdateInput
from backend.util.json import SafeJson
logger = logging.getLogger(__name__)
async def get_chat_session(session_id: str) -> PrismaChatSession | None:
"""Get a chat session by ID from the database."""
session = await PrismaChatSession.prisma().find_unique(
where={"id": session_id},
include={"Messages": True},
)
if session and session.Messages:
# Sort messages by sequence in Python since Prisma doesn't support order_by in include
session.Messages.sort(key=lambda m: m.sequence)
return session
async def create_chat_session(
session_id: str,
user_id: str | None,
) -> PrismaChatSession:
"""Create a new chat session in the database."""
data = {
"id": session_id,
"userId": user_id,
"credentials": SafeJson({}),
"successfulAgentRuns": SafeJson({}),
"successfulAgentSchedules": SafeJson({}),
}
return await PrismaChatSession.prisma().create(
data=data,
include={"Messages": True},
)
async def update_chat_session(
session_id: str,
credentials: dict[str, Any] | None = None,
successful_agent_runs: dict[str, Any] | None = None,
successful_agent_schedules: dict[str, Any] | None = None,
total_prompt_tokens: int | None = None,
total_completion_tokens: int | None = None,
title: str | None = None,
) -> PrismaChatSession | None:
"""Update a chat session's metadata."""
data: ChatSessionUpdateInput = {"updatedAt": datetime.now(UTC)}
if credentials is not None:
data["credentials"] = SafeJson(credentials)
if successful_agent_runs is not None:
data["successfulAgentRuns"] = SafeJson(successful_agent_runs)
if successful_agent_schedules is not None:
data["successfulAgentSchedules"] = SafeJson(successful_agent_schedules)
if total_prompt_tokens is not None:
data["totalPromptTokens"] = total_prompt_tokens
if total_completion_tokens is not None:
data["totalCompletionTokens"] = total_completion_tokens
if title is not None:
data["title"] = title
session = await PrismaChatSession.prisma().update(
where={"id": session_id},
data=data,
include={"Messages": True},
)
if session and session.Messages:
session.Messages.sort(key=lambda m: m.sequence)
return session
async def add_chat_message(
session_id: str,
role: str,
sequence: int,
content: str | None = None,
name: str | None = None,
tool_call_id: str | None = None,
refusal: str | None = None,
tool_calls: list[dict[str, Any]] | None = None,
function_call: dict[str, Any] | None = None,
) -> PrismaChatMessage:
"""Add a message to a chat session."""
data: dict[str, Any] = {
"Session": {"connect": {"id": session_id}},
"role": role,
"sequence": sequence,
}
if content is not None:
data["content"] = content
if name is not None:
data["name"] = name
if tool_call_id is not None:
data["toolCallId"] = tool_call_id
if refusal is not None:
data["refusal"] = refusal
if tool_calls is not None:
data["toolCalls"] = SafeJson(tool_calls)
if function_call is not None:
data["functionCall"] = SafeJson(function_call)
# Update session's updatedAt timestamp
await PrismaChatSession.prisma().update(
where={"id": session_id},
data={"updatedAt": datetime.now(UTC)},
)
return await PrismaChatMessage.prisma().create(data=data)
async def add_chat_messages_batch(
session_id: str,
messages: list[dict[str, Any]],
start_sequence: int,
) -> list[PrismaChatMessage]:
"""Add multiple messages to a chat session in a batch."""
if not messages:
return []
created_messages = []
for i, msg in enumerate(messages):
data: dict[str, Any] = {
"Session": {"connect": {"id": session_id}},
"role": msg["role"],
"sequence": start_sequence + i,
}
if msg.get("content") is not None:
data["content"] = msg["content"]
if msg.get("name") is not None:
data["name"] = msg["name"]
if msg.get("tool_call_id") is not None:
data["toolCallId"] = msg["tool_call_id"]
if msg.get("refusal") is not None:
data["refusal"] = msg["refusal"]
if msg.get("tool_calls") is not None:
data["toolCalls"] = SafeJson(msg["tool_calls"])
if msg.get("function_call") is not None:
data["functionCall"] = SafeJson(msg["function_call"])
created = await PrismaChatMessage.prisma().create(data=data)
created_messages.append(created)
# Update session's updatedAt timestamp
await PrismaChatSession.prisma().update(
where={"id": session_id},
data={"updatedAt": datetime.now(UTC)},
)
return created_messages
async def get_user_chat_sessions(
user_id: str,
limit: int = 50,
offset: int = 0,
) -> list[PrismaChatSession]:
"""Get chat sessions for a user, ordered by most recent."""
return await PrismaChatSession.prisma().find_many(
where={"userId": user_id},
order={"updatedAt": "desc"},
take=limit,
skip=offset,
)
async def get_user_session_count(user_id: str) -> int:
"""Get the total number of chat sessions for a user."""
return await PrismaChatSession.prisma().count(where={"userId": user_id})
async def delete_chat_session(session_id: str) -> bool:
"""Delete a chat session and all its messages."""
try:
await PrismaChatSession.prisma().delete(where={"id": session_id})
return True
except Exception as e:
logger.error(f"Failed to delete chat session {session_id}: {e}")
return False
async def get_chat_session_message_count(session_id: str) -> int:
"""Get the number of messages in a chat session."""
count = await PrismaChatMessage.prisma().count(where={"sessionId": session_id})
return count

View File

@@ -0,0 +1,473 @@
import logging
import uuid
from datetime import UTC, datetime
from openai.types.chat import (
ChatCompletionAssistantMessageParam,
ChatCompletionDeveloperMessageParam,
ChatCompletionFunctionMessageParam,
ChatCompletionMessageParam,
ChatCompletionSystemMessageParam,
ChatCompletionToolMessageParam,
ChatCompletionUserMessageParam,
)
from openai.types.chat.chat_completion_assistant_message_param import FunctionCall
from openai.types.chat.chat_completion_message_tool_call_param import (
ChatCompletionMessageToolCallParam,
Function,
)
from prisma.models import ChatMessage as PrismaChatMessage
from prisma.models import ChatSession as PrismaChatSession
from pydantic import BaseModel
from backend.data.redis_client import get_redis_async
from backend.util import json
from backend.util.exceptions import RedisError
from . import db as chat_db
from .config import ChatConfig
logger = logging.getLogger(__name__)
config = ChatConfig()
class ChatMessage(BaseModel):
role: str
content: str | None = None
name: str | None = None
tool_call_id: str | None = None
refusal: str | None = None
tool_calls: list[dict] | None = None
function_call: dict | None = None
class Usage(BaseModel):
prompt_tokens: int
completion_tokens: int
total_tokens: int
class ChatSession(BaseModel):
session_id: str
user_id: str | None
title: str | None = None
messages: list[ChatMessage]
usage: list[Usage]
credentials: dict[str, dict] = {} # Map of provider -> credential metadata
started_at: datetime
updated_at: datetime
successful_agent_runs: dict[str, int] = {}
successful_agent_schedules: dict[str, int] = {}
@staticmethod
def new(user_id: str | None) -> "ChatSession":
return ChatSession(
session_id=str(uuid.uuid4()),
user_id=user_id,
title=None,
messages=[],
usage=[],
credentials={},
started_at=datetime.now(UTC),
updated_at=datetime.now(UTC),
)
@staticmethod
def from_prisma(
prisma_session: PrismaChatSession,
prisma_messages: list[PrismaChatMessage] | None = None,
) -> "ChatSession":
"""Convert Prisma models to Pydantic ChatSession."""
messages = []
if prisma_messages:
for msg in prisma_messages:
tool_calls = None
if msg.toolCalls:
tool_calls = (
json.loads(msg.toolCalls)
if isinstance(msg.toolCalls, str)
else msg.toolCalls
)
function_call = None
if msg.functionCall:
function_call = (
json.loads(msg.functionCall)
if isinstance(msg.functionCall, str)
else msg.functionCall
)
messages.append(
ChatMessage(
role=msg.role,
content=msg.content,
name=msg.name,
tool_call_id=msg.toolCallId,
refusal=msg.refusal,
tool_calls=tool_calls,
function_call=function_call,
)
)
# Parse JSON fields from Prisma
credentials = (
json.loads(prisma_session.credentials)
if isinstance(prisma_session.credentials, str)
else prisma_session.credentials or {}
)
successful_agent_runs = (
json.loads(prisma_session.successfulAgentRuns)
if isinstance(prisma_session.successfulAgentRuns, str)
else prisma_session.successfulAgentRuns or {}
)
successful_agent_schedules = (
json.loads(prisma_session.successfulAgentSchedules)
if isinstance(prisma_session.successfulAgentSchedules, str)
else prisma_session.successfulAgentSchedules or {}
)
# Calculate usage from token counts
usage = []
if prisma_session.totalPromptTokens or prisma_session.totalCompletionTokens:
usage.append(
Usage(
prompt_tokens=prisma_session.totalPromptTokens or 0,
completion_tokens=prisma_session.totalCompletionTokens or 0,
total_tokens=(prisma_session.totalPromptTokens or 0)
+ (prisma_session.totalCompletionTokens or 0),
)
)
return ChatSession(
session_id=prisma_session.id,
user_id=prisma_session.userId,
title=prisma_session.title,
messages=messages,
usage=usage,
credentials=credentials,
started_at=prisma_session.createdAt,
updated_at=prisma_session.updatedAt,
successful_agent_runs=successful_agent_runs,
successful_agent_schedules=successful_agent_schedules,
)
def to_openai_messages(self) -> list[ChatCompletionMessageParam]:
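        # Rebuild typed OpenAI message params from stored messages; optional
        # fields (name, refusal, tool calls) are attached only when present.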
messages = []
for message in self.messages:
if message.role == "developer":
m = ChatCompletionDeveloperMessageParam(
role="developer",
content=message.content or "",
)
if message.name:
m["name"] = message.name
messages.append(m)
elif message.role == "system":
m = ChatCompletionSystemMessageParam(
role="system",
content=message.content or "",
)
if message.name:
m["name"] = message.name
messages.append(m)
elif message.role == "user":
m = ChatCompletionUserMessageParam(
role="user",
content=message.content or "",
)
if message.name:
m["name"] = message.name
messages.append(m)
elif message.role == "assistant":
m = ChatCompletionAssistantMessageParam(
role="assistant",
content=message.content or "",
)
if message.function_call:
m["function_call"] = FunctionCall(
arguments=message.function_call["arguments"],
name=message.function_call["name"],
)
if message.refusal:
m["refusal"] = message.refusal
if message.tool_calls:
t: list[ChatCompletionMessageToolCallParam] = []
for tool_call in message.tool_calls:
# Tool calls are stored with nested structure: {id, type, function: {name, arguments}}
function_data = tool_call.get("function", {})
# Skip tool calls that are missing required fields
if "id" not in tool_call or "name" not in function_data:
logger.warning(
f"Skipping invalid tool call: missing required fields. "
f"Got: {tool_call.keys()}, function keys: {function_data.keys()}"
)
continue
# Arguments are stored as a JSON string
arguments_str = function_data.get("arguments", "{}")
t.append(
ChatCompletionMessageToolCallParam(
id=tool_call["id"],
type="function",
function=Function(
arguments=arguments_str,
name=function_data["name"],
),
)
)
m["tool_calls"] = t
if message.name:
m["name"] = message.name
messages.append(m)
elif message.role == "tool":
messages.append(
ChatCompletionToolMessageParam(
role="tool",
content=message.content or "",
tool_call_id=message.tool_call_id or "",
)
)
elif message.role == "function":
messages.append(
ChatCompletionFunctionMessageParam(
role="function",
content=message.content,
name=message.name or "",
)
)
return messages
async def _get_session_from_cache(session_id: str) -> ChatSession | None:
"""Get a chat session from Redis cache."""
redis_key = f"chat:session:{session_id}"
async_redis = await get_redis_async()
raw_session: bytes | None = await async_redis.get(redis_key)
if raw_session is None:
return None
try:
session = ChatSession.model_validate_json(raw_session)
logger.info(
f"Loading session {session_id} from cache: "
f"message_count={len(session.messages)}, "
f"roles={[m.role for m in session.messages]}"
)
return session
except Exception as e:
logger.error(f"Failed to deserialize session {session_id}: {e}", exc_info=True)
raise RedisError(f"Corrupted session data for {session_id}") from e
async def _cache_session(session: ChatSession) -> None:
"""Cache a chat session in Redis."""
redis_key = f"chat:session:{session.session_id}"
async_redis = await get_redis_async()
await async_redis.setex(redis_key, config.session_ttl, session.model_dump_json())
async def _get_session_from_db(session_id: str) -> ChatSession | None:
"""Get a chat session from the database."""
prisma_session = await chat_db.get_chat_session(session_id)
if not prisma_session:
return None
messages = prisma_session.Messages
logger.info(
f"Loading session {session_id} from DB: "
f"has_messages={messages is not None}, "
f"message_count={len(messages) if messages else 0}, "
f"roles={[m.role for m in messages] if messages else []}"
)
return ChatSession.from_prisma(prisma_session, messages)
async def _save_session_to_db(
session: ChatSession, existing_message_count: int
) -> None:
"""Save or update a chat session in the database."""
# Check if session exists in DB
existing = await chat_db.get_chat_session(session.session_id)
if not existing:
# Create new session
await chat_db.create_chat_session(
session_id=session.session_id,
user_id=session.user_id,
)
existing_message_count = 0
# Calculate total tokens from usage
total_prompt = sum(u.prompt_tokens for u in session.usage)
total_completion = sum(u.completion_tokens for u in session.usage)
# Update session metadata
await chat_db.update_chat_session(
session_id=session.session_id,
credentials=session.credentials,
successful_agent_runs=session.successful_agent_runs,
successful_agent_schedules=session.successful_agent_schedules,
total_prompt_tokens=total_prompt,
total_completion_tokens=total_completion,
)
# Add new messages (only those after existing count)
new_messages = session.messages[existing_message_count:]
if new_messages:
messages_data = []
for msg in new_messages:
messages_data.append(
{
"role": msg.role,
"content": msg.content,
"name": msg.name,
"tool_call_id": msg.tool_call_id,
"refusal": msg.refusal,
"tool_calls": msg.tool_calls,
"function_call": msg.function_call,
}
)
logger.info(
f"Saving {len(new_messages)} new messages to DB for session {session.session_id}: "
f"roles={[m['role'] for m in messages_data]}, "
f"start_sequence={existing_message_count}"
)
await chat_db.add_chat_messages_batch(
session_id=session.session_id,
messages=messages_data,
start_sequence=existing_message_count,
)
async def get_chat_session(
session_id: str,
user_id: str | None,
) -> ChatSession | None:
"""Get a chat session by ID.
Checks Redis cache first, falls back to database if not found.
Caches database results back to Redis.
"""
# Try cache first
try:
session = await _get_session_from_cache(session_id)
if session:
# Verify user ownership
if session.user_id is not None and session.user_id != user_id:
logger.warning(
f"Session {session_id} user id mismatch: {session.user_id} != {user_id}"
)
return None
return session
except RedisError:
logger.warning(f"Cache error for session {session_id}, trying database")
except Exception as e:
logger.warning(f"Unexpected cache error for session {session_id}: {e}")
# Fall back to database
logger.info(f"Session {session_id} not in cache, checking database")
session = await _get_session_from_db(session_id)
if session is None:
logger.warning(f"Session {session_id} not found in cache or database")
return None
# Verify user ownership
if session.user_id is not None and session.user_id != user_id:
logger.warning(
f"Session {session_id} user id mismatch: {session.user_id} != {user_id}"
)
return None
# Cache the session from DB
try:
await _cache_session(session)
logger.info(f"Cached session {session_id} from database")
except Exception as e:
logger.warning(f"Failed to cache session {session_id}: {e}")
return session
async def upsert_chat_session(
session: ChatSession,
) -> ChatSession:
"""Update a chat session in both cache and database."""
# Get existing message count from DB for incremental saves
existing_message_count = await chat_db.get_chat_session_message_count(
session.session_id
)
# Save to database
try:
await _save_session_to_db(session, existing_message_count)
except Exception as e:
logger.error(f"Failed to save session {session.session_id} to database: {e}")
# Continue to cache even if DB fails
# Save to cache
try:
await _cache_session(session)
except Exception as e:
raise RedisError(
f"Failed to persist chat session {session.session_id} to Redis: {e}"
) from e
return session
async def create_chat_session(user_id: str | None) -> ChatSession:
"""Create a new chat session and persist it."""
session = ChatSession.new(user_id)
# Create in database first
try:
await chat_db.create_chat_session(
session_id=session.session_id,
user_id=user_id,
)
except Exception as e:
logger.error(f"Failed to create session in database: {e}")
# Continue even if DB fails - cache will still work
# Cache the session
try:
await _cache_session(session)
except Exception as e:
logger.warning(f"Failed to cache new session: {e}")
return session
async def get_user_sessions(
user_id: str,
limit: int = 50,
offset: int = 0,
) -> list[ChatSession]:
"""Get all chat sessions for a user from the database."""
prisma_sessions = await chat_db.get_user_chat_sessions(user_id, limit, offset)
sessions = []
for prisma_session in prisma_sessions:
# Convert without messages for listing (lighter weight)
sessions.append(ChatSession.from_prisma(prisma_session, None))
return sessions
async def delete_chat_session(session_id: str) -> bool:
"""Delete a chat session from both cache and database."""
# Delete from cache
try:
redis_key = f"chat:session:{session_id}"
async_redis = await get_redis_async()
await async_redis.delete(redis_key)
except Exception as e:
logger.warning(f"Failed to delete session {session_id} from cache: {e}")
# Delete from database
return await chat_db.delete_chat_session(session_id)

View File

@@ -0,0 +1,117 @@
import pytest
from .model import (
ChatMessage,
ChatSession,
Usage,
get_chat_session,
upsert_chat_session,
)
messages = [
ChatMessage(content="Hello, how are you?", role="user"),
ChatMessage(
content="I'm fine, thank you!",
role="assistant",
tool_calls=[
{
"id": "t123",
"type": "function",
"function": {
"name": "get_weather",
"arguments": '{"city": "New York"}',
},
}
],
),
ChatMessage(
content="I'm using the tool to get the weather",
role="tool",
tool_call_id="t123",
),
]
@pytest.mark.asyncio(loop_scope="session")
async def test_chatsession_serialization_deserialization():
s = ChatSession.new(user_id="abc123")
s.messages = messages
s.usage = [Usage(prompt_tokens=100, completion_tokens=200, total_tokens=300)]
serialized = s.model_dump_json()
s2 = ChatSession.model_validate_json(serialized)
assert s2.model_dump() == s.model_dump()
@pytest.mark.asyncio(loop_scope="session")
async def test_chatsession_redis_storage():
s = ChatSession.new(user_id=None)
s.messages = messages
s = await upsert_chat_session(s)
s2 = await get_chat_session(
session_id=s.session_id,
user_id=s.user_id,
)
assert s2 == s
@pytest.mark.asyncio(loop_scope="session")
async def test_chatsession_redis_storage_user_id_mismatch():
s = ChatSession.new(user_id="abc123")
s.messages = messages
s = await upsert_chat_session(s)
s2 = await get_chat_session(s.session_id, None)
assert s2 is None
@pytest.mark.asyncio(loop_scope="session")
async def test_chatsession_db_storage():
"""Test that messages are correctly saved to and loaded from DB (not cache)."""
from backend.data.redis_client import get_redis_async
# Create session with messages including assistant message
s = ChatSession.new(user_id=None)
s.messages = messages # Contains user, assistant, and tool messages
# Upsert to save to both cache and DB
s = await upsert_chat_session(s)
# Clear the Redis cache to force DB load
redis_key = f"chat:session:{s.session_id}"
async_redis = await get_redis_async()
await async_redis.delete(redis_key)
# Load from DB (cache was cleared)
s2 = await get_chat_session(
session_id=s.session_id,
user_id=s.user_id,
)
assert s2 is not None, "Session not found after loading from DB"
assert len(s2.messages) == len(
s.messages
), f"Message count mismatch: expected {len(s.messages)}, got {len(s2.messages)}"
# Verify all roles are present
roles = [m.role for m in s2.messages]
assert "user" in roles, f"User message missing. Roles found: {roles}"
assert "assistant" in roles, f"Assistant message missing. Roles found: {roles}"
assert "tool" in roles, f"Tool message missing. Roles found: {roles}"
# Verify message content
for orig, loaded in zip(s.messages, s2.messages):
assert orig.role == loaded.role, f"Role mismatch: {orig.role} != {loaded.role}"
assert (
orig.content == loaded.content
), f"Content mismatch for {orig.role}: {orig.content} != {loaded.content}"
if orig.tool_calls:
assert (
loaded.tool_calls is not None
), f"Tool calls missing for {orig.role} message"
assert len(orig.tool_calls) == len(loaded.tool_calls)

View File

@@ -0,0 +1,192 @@
You are Otto, an AI Co-Pilot and Forward Deployed Engineer for AutoGPT, an AI Business Automation tool. Your mission is to help users quickly find, create, and set up AutoGPT agents to solve their business problems.
Here are the functions available to you:
<functions>
**Understanding & Discovery:**
1. **add_understanding** - Save information about the user's business context (use this as you learn about them)
2. **find_agent** - Search the marketplace for pre-built agents that solve the user's problem
3. **find_library_agent** - Search the user's personal library of saved agents
4. **find_block** - Search for individual blocks (building components for agents)
5. **search_platform_docs** - Search AutoGPT documentation for help
**Agent Creation & Editing:**
6. **create_agent** - Create a new custom agent from scratch based on user requirements
7. **edit_agent** - Modify an existing agent (add/remove blocks, change configuration)
**Execution & Output:**
8. **run_agent** - Run or schedule an agent (automatically handles setup)
9. **run_block** - Run a single block directly without creating an agent
10. **agent_output** - Get the output/results from a running or completed agent execution
</functions>
## ALWAYS GET THE USER'S NAME
**This is critical:** If you don't know the user's name, ask for it in your first response. Use a friendly, natural approach:
- "Hi! I'm Otto. What's your name?"
- "Hey there! Before we dive in, what should I call you?"
Once you have their name, immediately save it with `add_understanding(user_name="...")` and use it throughout the conversation.
## BUILDING USER UNDERSTANDING
**If no User Business Context is provided below**, gather information naturally during conversation - don't interrogate them.
**Key information to gather (in priority order):**
1. Their name (ALWAYS first if unknown)
2. Their job title and role
3. Their business/company and industry
4. Pain points and what they want to automate
5. Tools they currently use
**How to gather this information:**
- Ask naturally as part of helping them (e.g., "What's your role?" or "What industry are you in?")
- When they share information, immediately save it using `add_understanding`
- Don't ask all questions at once - spread them across the conversation
- Prioritize understanding their immediate problem first
**Example:**
```
User: "I need help automating my social media"
Otto: I can help with that! I'm Otto - what's your name?
User: "I'm Sarah"
Otto: [calls add_understanding with user_name="Sarah"]
Nice to meet you, Sarah! What's your role - are you a social media manager or business owner?
User: "I'm the marketing director at a fintech startup"
Otto: [calls add_understanding with job_title="Marketing Director", industry="fintech", business_size="startup"]
Great! Let me find social media automation agents for you.
[calls find_agent with query="social media automation marketing"]
```
## WHEN TO USE WHICH TOOL
**Finding existing agents:**
- `find_agent` - Search the marketplace for pre-built agents others have created
- `find_library_agent` - Search agents the user has already saved to their library
**Creating/editing agents:**
- `create_agent` - When user wants a custom agent that doesn't exist, or has specific requirements
- `edit_agent` - When user wants to modify an existing agent (change inputs, add blocks, etc.)
**Running agents:**
- `run_agent` - To execute an agent (handles credentials and inputs automatically)
- `agent_output` - To check the results of a running or completed agent execution
**Direct execution:**
- `run_block` - Run a single block directly without needing a full agent
## HOW run_agent WORKS
The `run_agent` tool automatically handles the entire setup flow:
1. **First call** (no inputs) → Returns available inputs so user can decide what values to use
2. **Credentials check** → If missing, UI automatically prompts user to add them (you don't need to mention this)
3. **Execution** → Runs when you provide `inputs` OR set `use_defaults=true`
Parameters:
- `username_agent_slug` (required): Agent identifier like "creator/agent-name"
- `inputs`: Object with input values for the agent
- `use_defaults`: Set to `true` to run with default values (only after user confirms)
- `schedule_name` + `cron`: For scheduled execution
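Example two-step flow (illustrative agent slug and input names):
```
run_agent(username_agent_slug="creator/competitor-monitor")
→ returns the agent's available inputs (e.g. "competitor_url", "frequency")
run_agent(username_agent_slug="creator/competitor-monitor", inputs={"competitor_url": "https://example.com"})
→ executes and returns an execution_id for agent_output
```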
## HOW create_agent WORKS
Use `create_agent` when the user wants to build a custom automation:
- Describe what the agent should do
- The tool will create the agent structure with appropriate blocks
- Returns the agent ID for further editing or running
## HOW agent_output WORKS
Use `agent_output` to get results from agent executions:
- Pass the execution_id from a run_agent response
- Returns the current status and any outputs produced
- Useful for checking if an agent has completed and what it produced
## WORKFLOW
1. **Get their name** - If unknown, ask for it first
2. **Understand context** - Ask 1-2 questions about their problem while helping
3. **Find or create** - Use find_agent for existing solutions, create_agent for custom needs
4. **Set up and run** - Use run_agent to execute, agent_output to get results
## YOUR APPROACH
**Step 1: Greet and Identify**
- If you don't know their name, ask for it
- Be friendly and conversational
**Step 2: Understand the Problem**
- Ask maximum 1-2 targeted questions
- Focus on: What business problem are they solving?
- If they want to create/edit an agent, understand what it should do
**Step 3: Find or Create**
- For existing solutions: Use `find_agent` with relevant keywords
- For custom needs: Use `create_agent` with their requirements
- For modifications: Use `edit_agent` on an existing agent
**Step 4: Execute**
- Call `run_agent` without inputs first to see what's available
- Ask user what values they want or if defaults are okay
- Call `run_agent` again with inputs or `use_defaults=true`
- Use `agent_output` to check results when needed
## USING add_understanding
Call `add_understanding` whenever you learn something about the user:
**User info:** `user_name`, `job_title`
**Business:** `business_name`, `industry`, `business_size` (1-10, 11-50, 51-200, 201-1000, 1000+), `user_role` (decision maker, implementer, end user)
**Processes:** `key_workflows` (array), `daily_activities` (array)
**Pain points:** `pain_points` (array), `bottlenecks` (array), `manual_tasks` (array), `automation_goals` (array)
**Tools:** `current_software` (array), `existing_automation` (array)
**Other:** `additional_notes`
Example: `add_understanding(user_name="Sarah", job_title="Marketing Director", industry="fintech")`
## KEY RULES
**What You DON'T Do:**
- Don't help with login (frontend handles this)
- Don't mention or explain credentials to the user (frontend handles this automatically)
- Don't run agents without first showing available inputs to the user
- Don't use `use_defaults=true` without user explicitly confirming
- Don't write responses longer than 3 sentences
- Don't interrogate users with many questions - gather info naturally
**What You DO:**
- ALWAYS ask for user's name if you don't have it
- Save user information with `add_understanding` as you learn it
- Use their name when addressing them
- Always call run_agent first without inputs to see what's available
- Ask user what values they want OR if they want to use defaults
- Keep all responses to maximum 3 sentences
- Include the agent link in your response after successful execution
**Error Handling:**
- Authentication needed → "Please sign in via the interface"
- Credentials missing → The UI handles this automatically. Focus on asking the user about input values instead.
## RESPONSE STRUCTURE
Before responding, wrap your analysis in <thinking> tags to systematically plan your approach:
- Check if you know the user's name - if not, ask for it
- Check if you have user context - if not, plan to gather some naturally
- Extract the key business problem or request from the user's message
- Determine what function call (if any) you need to make next
- Plan your response to stay under the 3-sentence maximum
Example interaction:
```
User: "Hi, I want to build an agent that monitors my competitors"
Otto: <thinking>I don't know this user's name. I should ask for it while acknowledging their request.</thinking>
Hi! I'm Otto and I'd love to help you build a competitor monitoring agent. What's your name?
User: "I'm Mike"
Otto: [calls add_understanding with user_name="Mike"]
<thinking>Now I know Mike wants competitor monitoring. I should search for existing agents first.</thinking>
Great to meet you, Mike! Let me search for competitor monitoring agents.
[calls find_agent with query="competitor monitoring analysis"]
```
KEEP ANSWERS TO 3 SENTENCES

View File

@@ -0,0 +1,155 @@
You are Otto, an AI Co-Pilot helping new users get started with AutoGPT, an AI Business Automation platform. Your mission is to welcome them, learn about their needs, and help them run their first successful agent.
Here are the functions available to you:
<functions>
**Understanding & Discovery:**
1. **add_understanding** - Save information about the user's business context (use this as you learn about them)
2. **find_agent** - Search the marketplace for pre-built agents that solve the user's problem
3. **find_library_agent** - Search the user's personal library of saved agents
4. **find_block** - Search for individual blocks (building components for agents)
5. **search_platform_docs** - Search AutoGPT documentation for help
**Agent Creation & Editing:**
6. **create_agent** - Create a new custom agent from scratch based on user requirements
7. **edit_agent** - Modify an existing agent (add/remove blocks, change configuration)
**Execution & Output:**
8. **run_agent** - Run or schedule an agent (automatically handles setup)
9. **run_block** - Run a single block directly without creating an agent
10. **agent_output** - Get the output/results from a running or completed agent execution
</functions>
## YOUR ONBOARDING MISSION
You are guiding a new user through their first experience with AutoGPT. Your goal is to:
1. Welcome them warmly and get their name
2. Learn about them and their business
3. Find or create an agent that solves a real problem for them
4. Get that agent running successfully
5. Celebrate their success and point them to next steps
## PHASE 1: WELCOME & INTRODUCTION
**Start every conversation by:**
- Giving a warm, friendly greeting
- Introducing yourself as Otto, their AI assistant
- Asking for their name immediately
**Example opening:**
```
Hi! I'm Otto, your AI assistant. Welcome to AutoGPT! I'm here to help you set up your first automation. What's your name?
```
Once you have their name, save it immediately with `add_understanding(user_name="...")` and use it throughout.
## PHASE 2: DISCOVERY
**After getting their name, learn about them:**
- What's their role/job title?
- What industry/business are they in?
- What's one thing they'd love to automate?
**Keep it conversational - don't interrogate. Example:**
```
Nice to meet you, Sarah! What do you do for work, and what's one task you wish you could automate?
```
Save everything you learn with `add_understanding`.
## PHASE 3: FIND OR CREATE AN AGENT
**Once you understand their need:**
- Search for existing agents with `find_agent`
- Present the best match and explain how it helps them
- If nothing fits, offer to create a custom agent with `create_agent`
**Be enthusiastic about the solution:**
```
I found a great agent for you! The "Social Media Scheduler" can automatically post to your accounts on a schedule. Want to try it?
```
## PHASE 4: SETUP & RUN
**Guide them through running the agent:**
1. Call `run_agent` without inputs first to see what's needed
2. Explain each input in simple terms
3. Ask what values they want to use
4. Run the agent with their inputs or defaults
**Don't mention credentials** - the UI handles that automatically.
## PHASE 5: CELEBRATE & HANDOFF
**After successful execution:**
- Congratulate them on their first automation!
- Tell them where to find this agent (their Library)
- Mention they can explore more agents in the Marketplace
- Offer to help with anything else
**Example:**
```
You did it! Your first agent is running. You can find it anytime in your Library. Ready to explore more automations?
```
## KEY RULES
**What You DON'T Do:**
- Don't help with login (frontend handles this)
- Don't mention credentials (UI handles automatically)
- Don't run agents without showing inputs first
- Don't use `use_defaults=true` without explicit confirmation
- Don't write responses longer than 3 sentences
- Don't overwhelm with too many questions at once
**What You DO:**
- ALWAYS get the user's name first
- Be warm, encouraging, and celebratory
- Save info with `add_understanding` as you learn it
- Use their name when addressing them
- Keep responses to maximum 3 sentences
- Make them feel successful at each step
## USING add_understanding
Save information as you learn it:
**User info:** `user_name`, `job_title`
**Business:** `business_name`, `industry`, `business_size`, `user_role`
**Pain points:** `pain_points`, `manual_tasks`, `automation_goals`
**Tools:** `current_software`
Example: `add_understanding(user_name="Sarah", job_title="Marketing Manager", automation_goals=["social media scheduling"])`
## HOW run_agent WORKS
1. **First call** (no inputs) → Shows available inputs
2. **Credentials** → UI handles automatically (don't mention)
3. **Execution** → Run with `inputs={...}` or `use_defaults=true`
## RESPONSE STRUCTURE
Before responding, plan your approach in <thinking> tags:
- What phase am I in? (Welcome/Discovery/Find/Setup/Celebrate)
- Do I know their name? If not, ask for it
- What's the next step to move them forward?
- Keep response under 3 sentences
**Example flow:**
```
User: "Hi"
Otto: <thinking>Phase 1 - I need to welcome them and get their name.</thinking>
Hi! I'm Otto, welcome to AutoGPT! I'm here to help you set up your first automation - what's your name?
User: "I'm Alex"
Otto: [calls add_understanding with user_name="Alex"]
<thinking>Got their name. Phase 2 - learn about them.</thinking>
Great to meet you, Alex! What do you do for work, and what's one task you'd love to automate?
User: "I run an e-commerce store and spend hours on customer support emails"
Otto: [calls add_understanding with industry="e-commerce", pain_points=["customer support emails"]]
<thinking>Phase 3 - search for agents.</thinking>
[calls find_agent with query="customer support email automation"]
```
KEEP ANSWERS TO 3 SENTENCES - Be warm, helpful, and focused on their success!

View File

@@ -0,0 +1,472 @@
"""Chat API routes for chat session management and streaming via SSE."""
import logging
from collections.abc import AsyncGenerator
from typing import Annotated
from autogpt_libs import auth
from fastapi import APIRouter, Depends, Query, Security
from fastapi.responses import StreamingResponse
from pydantic import BaseModel
from backend.util.exceptions import NotFoundError
from . import service as chat_service
from .config import ChatConfig
config = ChatConfig()
logger = logging.getLogger(__name__)
router = APIRouter(
tags=["chat"],
)
# ========== Request/Response Models ==========
class StreamChatRequest(BaseModel):
"""Request model for streaming chat with optional context."""
message: str
is_user_message: bool = True
context: dict[str, str] | None = None # {url: str, content: str}
class CreateSessionResponse(BaseModel):
"""Response model containing information on a newly created chat session."""
id: str
created_at: str
user_id: str | None
class SessionDetailResponse(BaseModel):
"""Response model providing complete details for a chat session, including messages."""
id: str
created_at: str
updated_at: str
user_id: str | None
messages: list[dict]
class SessionSummaryResponse(BaseModel):
"""Response model for a session summary (without messages)."""
id: str
created_at: str
updated_at: str
title: str | None = None
class ListSessionsResponse(BaseModel):
"""Response model for listing chat sessions."""
sessions: list[SessionSummaryResponse]
total: int
# ========== Routes ==========
@router.get(
"/sessions",
dependencies=[Security(auth.requires_user)],
)
async def list_sessions(
user_id: Annotated[str, Security(auth.get_user_id)],
limit: int = Query(default=50, ge=1, le=100),
offset: int = Query(default=0, ge=0),
) -> ListSessionsResponse:
"""
List chat sessions for the authenticated user.
Returns a paginated list of chat sessions belonging to the current user,
ordered by most recently updated.
Args:
user_id: The authenticated user's ID.
limit: Maximum number of sessions to return (1-100).
offset: Number of sessions to skip for pagination.
Returns:
ListSessionsResponse: List of session summaries and total count.
"""
sessions = await chat_service.get_user_sessions(user_id, limit, offset)
return ListSessionsResponse(
sessions=[
SessionSummaryResponse(
id=session.session_id,
created_at=session.started_at.isoformat(),
updated_at=session.updated_at.isoformat(),
title=None, # TODO: Add title support
)
for session in sessions
],
total=len(sessions),
)
@router.post(
"/sessions",
)
async def create_session(
user_id: Annotated[str | None, Depends(auth.get_user_id)],
) -> CreateSessionResponse:
"""
Create a new chat session.
Initiates a new chat session for either an authenticated or anonymous user.
Args:
user_id: The optional authenticated user ID parsed from the JWT. If missing, creates an anonymous session.
Returns:
CreateSessionResponse: Details of the created session.
"""
logger.info(
f"Creating session with user_id: "
f"...{user_id[-8:] if user_id and len(user_id) > 8 else '<redacted>'}"
)
session = await chat_service.create_chat_session(user_id)
return CreateSessionResponse(
id=session.session_id,
created_at=session.started_at.isoformat(),
user_id=session.user_id or None,
)
@router.get(
"/sessions/{session_id}",
)
async def get_session(
session_id: str,
user_id: Annotated[str | None, Depends(auth.get_user_id)],
) -> SessionDetailResponse:
"""
Retrieve the details of a specific chat session.
Looks up a chat session by ID for the given user (if authenticated) and returns all session data including messages.
Args:
session_id: The unique identifier for the desired chat session.
user_id: The optional authenticated user ID, or None for anonymous access.
Returns:
SessionDetailResponse: Details for the requested session; raises NotFoundError if not found.
"""
session = await chat_service.get_session(session_id, user_id)
if not session:
raise NotFoundError(f"Session {session_id} not found")
messages = [message.model_dump() for message in session.messages]
logger.info(
f"Returning session {session_id}: "
f"message_count={len(messages)}, "
f"roles={[m.get('role') for m in messages]}"
)
return SessionDetailResponse(
id=session.session_id,
created_at=session.started_at.isoformat(),
updated_at=session.updated_at.isoformat(),
user_id=session.user_id or None,
messages=messages,
)
@router.post(
"/sessions/{session_id}/stream",
)
async def stream_chat_post(
session_id: str,
request: StreamChatRequest,
user_id: str | None = Depends(auth.get_user_id),
):
"""
Stream chat responses for a session (POST with context support).
Streams the AI/completion responses in real time over Server-Sent Events (SSE), including:
- Text fragments as they are generated
- Tool call UI elements (if invoked)
- Tool execution results
Args:
session_id: The chat session identifier to associate with the streamed messages.
request: Request body containing message, is_user_message, and optional context.
user_id: Optional authenticated user ID.
Returns:
StreamingResponse: SSE-formatted response chunks.
"""
# Validate session exists before starting the stream
# This prevents errors after the response has already started
session = await chat_service.get_session(session_id, user_id)
if not session:
raise NotFoundError(f"Session {session_id} not found.")
if session.user_id is None and user_id is not None:
session = await chat_service.assign_user_to_session(session_id, user_id)
async def event_generator() -> AsyncGenerator[str, None]:
async for chunk in chat_service.stream_chat_completion(
session_id,
request.message,
is_user_message=request.is_user_message,
user_id=user_id,
session=session, # Pass pre-fetched session to avoid double-fetch
context=request.context,
):
yield chunk.to_sse()
return StreamingResponse(
event_generator(),
media_type="text/event-stream",
headers={
"Cache-Control": "no-cache",
"Connection": "keep-alive",
"X-Accel-Buffering": "no", # Disable nginx buffering
},
)
@router.get(
"/sessions/{session_id}/stream",
)
async def stream_chat_get(
session_id: str,
message: Annotated[str, Query(min_length=1, max_length=10000)],
user_id: str | None = Depends(auth.get_user_id),
is_user_message: bool = Query(default=True),
):
"""
Stream chat responses for a session (GET - legacy endpoint).
Streams the AI/completion responses in real time over Server-Sent Events (SSE), including:
- Text fragments as they are generated
- Tool call UI elements (if invoked)
- Tool execution results
Args:
session_id: The chat session identifier to associate with the streamed messages.
message: The user's new message to process.
user_id: Optional authenticated user ID.
is_user_message: Whether the message is a user message.
Returns:
StreamingResponse: SSE-formatted response chunks.
"""
# Validate session exists before starting the stream
# This prevents errors after the response has already started
session = await chat_service.get_session(session_id, user_id)
if not session:
raise NotFoundError(f"Session {session_id} not found.")
if session.user_id is None and user_id is not None:
session = await chat_service.assign_user_to_session(session_id, user_id)
async def event_generator() -> AsyncGenerator[str, None]:
async for chunk in chat_service.stream_chat_completion(
session_id,
message,
is_user_message=is_user_message,
user_id=user_id,
session=session, # Pass pre-fetched session to avoid double-fetch
):
yield chunk.to_sse()
return StreamingResponse(
event_generator(),
media_type="text/event-stream",
headers={
"Cache-Control": "no-cache",
"Connection": "keep-alive",
"X-Accel-Buffering": "no", # Disable nginx buffering
},
)
@router.patch(
"/sessions/{session_id}/assign-user",
dependencies=[Security(auth.requires_user)],
status_code=200,
)
async def session_assign_user(
session_id: str,
user_id: Annotated[str, Security(auth.get_user_id)],
) -> dict:
"""
Assign an authenticated user to a chat session.
Used (typically post-login) to claim an existing anonymous session as the current authenticated user.
Args:
session_id: The identifier for the (previously anonymous) session.
user_id: The authenticated user's ID to associate with the session.
Returns:
dict: Status of the assignment.
"""
await chat_service.assign_user_to_session(session_id, user_id)
return {"status": "ok"}
# ========== Onboarding Routes ==========
# These routes use a specialized onboarding system prompt
@router.post(
"/onboarding/sessions",
)
async def create_onboarding_session(
user_id: Annotated[str | None, Depends(auth.get_user_id)],
) -> CreateSessionResponse:
"""
Create a new onboarding chat session.
Initiates a new chat session specifically for user onboarding,
using a specialized prompt that guides users through their first
experience with AutoGPT.
Args:
user_id: The optional authenticated user ID parsed from the JWT.
Returns:
CreateSessionResponse: Details of the created onboarding session.
"""
logger.info(
f"Creating onboarding session with user_id: "
f"...{user_id[-8:] if user_id and len(user_id) > 8 else '<redacted>'}"
)
session = await chat_service.create_chat_session(user_id)
return CreateSessionResponse(
id=session.session_id,
created_at=session.started_at.isoformat(),
user_id=session.user_id or None,
)
@router.get(
"/onboarding/sessions/{session_id}",
)
async def get_onboarding_session(
session_id: str,
user_id: Annotated[str | None, Depends(auth.get_user_id)],
) -> SessionDetailResponse:
"""
Retrieve the details of an onboarding chat session.
Args:
session_id: The unique identifier for the onboarding session.
user_id: The optional authenticated user ID.
Returns:
SessionDetailResponse: Details for the requested session.
"""
session = await chat_service.get_session(session_id, user_id)
if not session:
raise NotFoundError(f"Session {session_id} not found")
messages = [message.model_dump() for message in session.messages]
logger.info(
f"Returning onboarding session {session_id}: "
f"message_count={len(messages)}, "
f"roles={[m.get('role') for m in messages]}"
)
return SessionDetailResponse(
id=session.session_id,
created_at=session.started_at.isoformat(),
updated_at=session.updated_at.isoformat(),
user_id=session.user_id or None,
messages=messages,
)
@router.post(
"/onboarding/sessions/{session_id}/stream",
)
async def stream_onboarding_chat(
session_id: str,
request: StreamChatRequest,
user_id: str | None = Depends(auth.get_user_id),
):
"""
Stream onboarding chat responses for a session.
Uses the specialized onboarding system prompt to guide new users
through their first experience with AutoGPT. Streams AI responses
in real time over Server-Sent Events (SSE).
Args:
session_id: The onboarding session identifier.
request: Request body containing message and optional context.
user_id: Optional authenticated user ID.
Returns:
StreamingResponse: SSE-formatted response chunks.
"""
session = await chat_service.get_session(session_id, user_id)
if not session:
raise NotFoundError(f"Session {session_id} not found.")
if session.user_id is None and user_id is not None:
session = await chat_service.assign_user_to_session(session_id, user_id)
async def event_generator() -> AsyncGenerator[str, None]:
async for chunk in chat_service.stream_chat_completion(
session_id,
request.message,
is_user_message=request.is_user_message,
user_id=user_id,
session=session,
context=request.context,
prompt_type="onboarding", # Use onboarding system prompt
):
yield chunk.to_sse()
return StreamingResponse(
event_generator(),
media_type="text/event-stream",
headers={
"Cache-Control": "no-cache",
"Connection": "keep-alive",
"X-Accel-Buffering": "no",
},
)
# ========== Health Check ==========
@router.get("/health", status_code=200)
async def health_check() -> dict:
"""
Health check endpoint for the chat service.
Performs a full cycle test of session creation, assignment, and retrieval. Should always return healthy
if the service and data layer are operational.
Returns:
dict: A status dictionary indicating health, service name, and API version.
"""
session = await chat_service.create_chat_session(None)
await chat_service.assign_user_to_session(session.session_id, "test_user")
await chat_service.get_session(session.session_id, "test_user")
return {
"status": "healthy",
"service": "chat",
"version": "0.1.0",
}
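For reference, a minimal client-side sketch of consuming the POST streaming route above. The base URL, route prefix, and `httpx` dependency are assumptions for illustration, not taken from this changeset:
```
# Hypothetical SSE consumer; base_url and route prefix are illustrative.
import asyncio

import httpx


async def stream_message(session_id: str, message: str) -> None:
    async with httpx.AsyncClient(base_url="http://localhost:8000/api/chat") as client:
        async with client.stream(
            "POST",
            f"/sessions/{session_id}/stream",
            json={"message": message, "is_user_message": True},
            timeout=None,  # SSE streams stay open
        ) as response:
            async for line in response.aiter_lines():
                if line.startswith("data: "):  # SSE frames arrive as "data: <json>"
                    print(line[len("data: "):])


asyncio.run(stream_message("<session-id>", "Hi, I want to build an agent"))
```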

View File

@@ -4,18 +4,30 @@ from datetime import UTC, datetime
from typing import Any
import orjson
from langfuse import Langfuse
from openai import AsyncOpenAI
from openai.types.chat import ChatCompletionChunk, ChatCompletionToolParam
import backend.server.v2.chat.config
from backend.server.v2.chat.model import (
from backend.data.understanding import (
format_understanding_for_prompt,
get_business_understanding,
)
from backend.util.exceptions import NotFoundError
from backend.util.settings import Settings
from . import db as chat_db
from .config import ChatConfig
from .model import (
ChatMessage,
ChatSession,
Usage,
get_chat_session,
upsert_chat_session,
)
from backend.server.v2.chat.response_model import (
from .model import (
create_chat_session as model_create_chat_session,
)
from .response_model import (
StreamBaseResponse,
StreamEnd,
StreamError,
@@ -26,14 +38,159 @@ from backend.server.v2.chat.response_model import (
StreamToolExecutionResult,
StreamUsage,
)
from backend.server.v2.chat.tools import execute_tool, tools
from backend.util.exceptions import NotFoundError
from .tools import execute_tool, tools
logger = logging.getLogger(__name__)
config = backend.server.v2.chat.config.ChatConfig()
config = ChatConfig()
settings = Settings()
client = AsyncOpenAI(api_key=config.api_key, base_url=config.base_url)
# Langfuse client (lazy initialization)
_langfuse_client: Langfuse | None = None
def _get_langfuse_client() -> Langfuse:
"""Get or create the Langfuse client for prompt management."""
global _langfuse_client
if _langfuse_client is None:
if not settings.secrets.langfuse_public_key or not settings.secrets.langfuse_secret_key:
raise ValueError(
"Langfuse credentials not configured. "
"Set LANGFUSE_PUBLIC_KEY and LANGFUSE_SECRET_KEY environment variables."
)
_langfuse_client = Langfuse(
public_key=settings.secrets.langfuse_public_key,
secret_key=settings.secrets.langfuse_secret_key,
host=settings.secrets.langfuse_host or "https://cloud.langfuse.com",
)
return _langfuse_client
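# Expected environment for the client above (key values are illustrative):
#   LANGFUSE_PUBLIC_KEY=pk-lf-...
#   LANGFUSE_SECRET_KEY=sk-lf-...
#   LANGFUSE_HOST=https://cloud.langfuse.com  # optional; this default is used if unset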
def _get_langfuse_prompt() -> str:
"""Fetch the latest production prompt from Langfuse.
Returns:
The compiled prompt text from Langfuse.
Raises:
Exception: If Langfuse is unavailable or prompt fetch fails.
"""
try:
langfuse = _get_langfuse_client()
# cache_ttl_seconds=0 disables SDK caching to always get the latest prompt
prompt = langfuse.get_prompt(config.langfuse_prompt_name, cache_ttl_seconds=0)
compiled = prompt.compile()
logger.info(
f"Fetched prompt '{config.langfuse_prompt_name}' from Langfuse "
f"(version: {prompt.version})"
)
return compiled
except Exception as e:
logger.error(f"Failed to fetch prompt from Langfuse: {e}")
raise
async def _is_first_session(user_id: str) -> bool:
"""Check if this is the user's first chat session.
Returns True if the user has 1 or fewer sessions (meaning this is their first).
"""
try:
session_count = await chat_db.get_user_session_count(user_id)
return session_count <= 1
except Exception as e:
logger.warning(f"Failed to check session count for user {user_id}: {e}")
return False # Default to non-onboarding if we can't check
async def _build_system_prompt(
user_id: str | None, prompt_type: str = "default"
) -> str:
"""Build the full system prompt including business understanding if available.
Args:
user_id: The user ID for fetching business understanding
prompt_type: The type of prompt to load ("default" or "onboarding")
If "default" and this is the user's first session, will use "onboarding" instead.
Returns:
The full system prompt with business understanding context if available
"""
# Auto-detect: if using default prompt and this is user's first session, use onboarding
effective_prompt_type = prompt_type
if prompt_type == "default" and user_id:
if await _is_first_session(user_id):
logger.info("First session detected for user, using onboarding prompt")
effective_prompt_type = "onboarding"
# Start with the base system prompt for the specified type
if effective_prompt_type == "default":
# Fetch from Langfuse for the default prompt
base_prompt = _get_langfuse_prompt()
else:
# Use local file for other prompt types (e.g., onboarding)
base_prompt = config.get_system_prompt_for_type(effective_prompt_type)
# If user is authenticated, try to fetch their business understanding
if user_id:
try:
understanding = await get_business_understanding(user_id)
if understanding:
context = format_understanding_for_prompt(understanding)
if context:
return (
f"{base_prompt}\n\n---\n\n"
f"{context}\n\n"
"Use this context to provide more personalized recommendations "
"and to better understand the user's business needs when "
"suggesting agents and automations."
)
except Exception as e:
logger.warning(f"Failed to fetch business understanding: {e}")
return base_prompt
async def _generate_session_title(message: str) -> str | None:
"""Generate a concise title for a chat session based on the first message.
Args:
message: The first user message in the session
Returns:
A short title (3-6 words) or None if generation fails
"""
try:
response = await client.chat.completions.create(
model=config.title_model,
messages=[
{
"role": "system",
"content": (
"Generate a very short title (3-6 words) for a chat conversation "
"based on the user's first message. The title should capture the "
"main topic or intent. Return ONLY the title, no quotes or punctuation."
),
},
{"role": "user", "content": message[:500]}, # Limit input length
],
max_tokens=20,
temperature=0.7,
)
title = response.choices[0].message.content
if title:
# Clean up the title
title = title.strip().strip("\"'")
# Limit length
if len(title) > 50:
title = title[:47] + "..."
return title
return None
except Exception as e:
logger.warning(f"Failed to generate session title: {e}")
return None
async def create_chat_session(
user_id: str | None = None,
@@ -41,9 +198,7 @@ async def create_chat_session(
"""
Create a new chat session and persist it to the database.
"""
session = ChatSession.new(user_id)
# Persist the session immediately so it can be used for streaming
return await upsert_chat_session(session)
return await model_create_chat_session(user_id)
async def get_session(
@@ -56,6 +211,19 @@ async def get_session(
return await get_chat_session(session_id, user_id)
async def get_user_sessions(
user_id: str,
limit: int = 50,
offset: int = 0,
) -> list[ChatSession]:
"""
Get all chat sessions for a user.
"""
from .model import get_user_sessions as model_get_user_sessions
return await model_get_user_sessions(user_id, limit, offset)
async def assign_user_to_session(
session_id: str,
user_id: str,
@@ -77,6 +245,8 @@ async def stream_chat_completion(
user_id: str | None = None,
retry_count: int = 0,
session: ChatSession | None = None,
context: dict[str, str] | None = None, # {url: str, content: str}
prompt_type: str = "default",
) -> AsyncGenerator[StreamBaseResponse, None]:
"""Main entry point for streaming chat completions with database handling.
@@ -88,6 +258,7 @@ async def stream_chat_completion(
user_message: User's input message
user_id: User ID for authentication (None for anonymous)
session: Optional pre-loaded session object (for recursive calls to avoid Redis refetch)
prompt_type: The type of prompt to use ("default" or "onboarding")
Yields:
StreamBaseResponse objects formatted as SSE
@@ -120,9 +291,18 @@ async def stream_chat_completion(
)
if message:
# Build message content with context if provided
message_content = message
if context and context.get("url") and context.get("content"):
context_text = f"Page URL: {context['url']}\n\nPage Content:\n{context['content']}\n\n---\n\nUser Message: {message}"
message_content = context_text
logger.info(
f"Including page context: URL={context['url']}, content_length={len(context['content'])}"
)
session.messages.append(
ChatMessage(
role="user" if is_user_message else "assistant", content=message
role="user" if is_user_message else "assistant", content=message_content
)
)
logger.info(
@@ -140,6 +320,32 @@ async def stream_chat_completion(
session = await upsert_chat_session(session)
assert session, "Session not found"
# Generate title for new sessions on first user message (non-blocking)
# Check: is_user_message, no title yet, and this is the first user message
if is_user_message and message and not session.title:
user_messages = [m for m in session.messages if m.role == "user"]
if len(user_messages) == 1:
# First user message - generate title in background
import asyncio
async def _update_title():
try:
title = await _generate_session_title(message)
if title:
session.title = title
await upsert_chat_session(session)
logger.info(
f"Generated title for session {session_id}: {title}"
)
except Exception as e:
logger.warning(f"Failed to update session title: {e}")
# Fire and forget - don't block the chat response
asyncio.create_task(_update_title())
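# Caveat worth noting: asyncio.create_task() only keeps a weak reference to
# the task, so without storing it (e.g. in a module-level set) the title task
# can in principle be garbage-collected before it completes.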
# Build system prompt with business understanding
system_prompt = await _build_system_prompt(user_id, prompt_type)
assistant_response = ChatMessage(
role="assistant",
content="",
@@ -158,6 +364,7 @@ async def stream_chat_completion(
async for chunk in _stream_chat_chunks(
session=session,
tools=tools,
system_prompt=system_prompt,
):
if isinstance(chunk, StreamTextChunk):
@@ -278,6 +485,7 @@ async def stream_chat_completion(
user_id=user_id,
retry_count=retry_count + 1,
session=session,
prompt_type=prompt_type,
):
yield chunk
return # Exit after retry to avoid double-saving in finally block
@@ -323,6 +531,7 @@ async def stream_chat_completion(
session_id=session.session_id,
user_id=user_id,
session=session, # Pass session object to avoid Redis refetch
prompt_type=prompt_type,
):
yield chunk
@@ -330,6 +539,7 @@ async def stream_chat_completion(
async def _stream_chat_chunks(
session: ChatSession,
tools: list[ChatCompletionToolParam],
system_prompt: str | None = None,
) -> AsyncGenerator[StreamBaseResponse, None]:
"""
Pure streaming function for OpenAI chat completions with tool calling.
@@ -337,9 +547,9 @@ async def _stream_chat_chunks(
This function is database-agnostic and focuses only on streaming logic.
Args:
messages: Conversation context as ChatCompletionMessageParam list
session_id: Session ID
user_id: User ID for tool execution
session: Chat session with conversation history
tools: Available tools for the model
system_prompt: System prompt to prepend to messages
Yields:
SSE formatted JSON response objects
@@ -349,6 +559,17 @@ async def _stream_chat_chunks(
logger.info("Starting pure chat stream")
# Build messages with system prompt prepended
messages = session.to_openai_messages()
if system_prompt:
from openai.types.chat import ChatCompletionSystemMessageParam
system_message = ChatCompletionSystemMessageParam(
role="system",
content=system_prompt,
)
messages = [system_message] + messages
# Loop to handle tool calls and continue conversation
while True:
try:
@@ -357,7 +578,7 @@ async def _stream_chat_chunks(
# Create the stream with proper types
stream = await client.chat.completions.create(
model=model,
messages=session.to_openai_messages(),
messages=messages,
tools=tools,
tool_choice="auto",
stream=True,
@@ -501,8 +722,12 @@ async def _yield_tool_call(
"""
logger.info(f"Yielding tool call: {tool_calls[yield_idx]}")
# Parse tool call arguments - exceptions will propagate to caller
arguments = orjson.loads(tool_calls[yield_idx]["function"]["arguments"])
# Parse tool call arguments - handle empty arguments gracefully
raw_arguments = tool_calls[yield_idx]["function"]["arguments"]
if raw_arguments:
arguments = orjson.loads(raw_arguments)
else:
arguments = {}
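# (orjson.loads(b"") raises orjson.JSONDecodeError, so a tool call with no
# parameters previously aborted the stream; defaulting to {} keeps it going.)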
yield StreamToolCall(
tool_id=tool_calls[yield_idx]["id"],

View File

@@ -3,8 +3,8 @@ from os import getenv
import pytest
import backend.server.v2.chat.service as chat_service
from backend.server.v2.chat.response_model import (
from . import service as chat_service
from .response_model import (
StreamEnd,
StreamError,
StreamTextChunk,

View File

@@ -0,0 +1,73 @@
from typing import TYPE_CHECKING, Any
from openai.types.chat import ChatCompletionToolParam
from backend.api.features.chat.model import ChatSession
from .add_understanding import AddUnderstandingTool
from .agent_output import AgentOutputTool
from .base import BaseTool
from .create_agent import CreateAgentTool
from .edit_agent import EditAgentTool
from .find_agent import FindAgentTool
from .find_block import FindBlockTool
from .find_library_agent import FindLibraryAgentTool
from .run_agent import RunAgentTool
from .run_block import RunBlockTool
from .search_docs import SearchDocsTool
if TYPE_CHECKING:
from backend.api.features.chat.response_model import StreamToolExecutionResult
# Initialize tool instances
add_understanding_tool = AddUnderstandingTool()
create_agent_tool = CreateAgentTool()
edit_agent_tool = EditAgentTool()
find_agent_tool = FindAgentTool()
find_block_tool = FindBlockTool()
find_library_agent_tool = FindLibraryAgentTool()
run_agent_tool = RunAgentTool()
run_block_tool = RunBlockTool()
search_docs_tool = SearchDocsTool()
agent_output_tool = AgentOutputTool()
# Export tools as OpenAI format
tools: list[ChatCompletionToolParam] = [
add_understanding_tool.as_openai_tool(),
create_agent_tool.as_openai_tool(),
edit_agent_tool.as_openai_tool(),
find_agent_tool.as_openai_tool(),
find_block_tool.as_openai_tool(),
find_library_agent_tool.as_openai_tool(),
run_agent_tool.as_openai_tool(),
run_block_tool.as_openai_tool(),
search_docs_tool.as_openai_tool(),
agent_output_tool.as_openai_tool(),
]
async def execute_tool(
tool_name: str,
parameters: dict[str, Any],
user_id: str | None,
session: ChatSession,
tool_call_id: str,
) -> "StreamToolExecutionResult":
tool_map: dict[str, BaseTool] = {
"add_understanding": add_understanding_tool,
"create_agent": create_agent_tool,
"edit_agent": edit_agent_tool,
"find_agent": find_agent_tool,
"find_block": find_block_tool,
"find_library_agent": find_library_agent_tool,
"run_agent": run_agent_tool,
"run_block": run_block_tool,
"search_platform_docs": search_docs_tool,
"agent_output": agent_output_tool,
}
if tool_name not in tool_map:
raise ValueError(f"Tool {tool_name} not found")
return await tool_map[tool_name].execute(
user_id, session, tool_call_id, **parameters
)
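A minimal sketch of driving the dispatcher above. The import path and the awaiting context are assumptions; the tool name and argument shape follow `tool_map` and `execute_tool` as defined in this file:
```
# Hypothetical driver for execute_tool; import paths are assumed.
from backend.api.features.chat.model import ChatSession
from backend.api.features.chat.tools import execute_tool


async def demo(session: ChatSession) -> None:
    result = await execute_tool(
        tool_name="find_agent",                         # must be a key in tool_map
        parameters={"query": "competitor monitoring"},  # forwarded as **parameters
        user_id=None,                                   # anonymous call
        session=session,
        tool_call_id="call_demo_1",
    )
    print(result)
```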

View File

@@ -5,6 +5,8 @@ from os import getenv
import pytest
from pydantic import SecretStr
from backend.api.features.chat.model import ChatSession
from backend.api.features.store import db as store_db
from backend.blocks.firecrawl.scrape import FirecrawlScrapeBlock
from backend.blocks.io import AgentInputBlock, AgentOutputBlock
from backend.blocks.llm import AITextGeneratorBlock
@@ -13,8 +15,6 @@ from backend.data.graph import Graph, Link, Node, create_graph
from backend.data.model import APIKeyCredentials
from backend.data.user import get_or_create_user
from backend.integrations.credentials_store import IntegrationCredentialsStore
from backend.server.v2.chat.model import ChatSession
from backend.server.v2.store import db as store_db
def make_session(user_id: str | None = None):

View File

@@ -0,0 +1,206 @@
"""Tool for capturing user business understanding incrementally."""
import logging
from typing import Any
from backend.api.features.chat.model import ChatSession
from backend.data.understanding import (
BusinessUnderstandingInput,
upsert_business_understanding,
)
from .base import BaseTool
from .models import (
ErrorResponse,
ToolResponseBase,
UnderstandingUpdatedResponse,
)
logger = logging.getLogger(__name__)
class AddUnderstandingTool(BaseTool):
"""Tool for capturing user's business understanding incrementally."""
@property
def name(self) -> str:
return "add_understanding"
@property
def description(self) -> str:
return """Capture and store information about the user's business context,
workflows, pain points, and automation goals. Call this tool whenever the user
shares information about their business. Each call incrementally adds to the
existing understanding - you don't need to provide all fields at once.
Use this to build a comprehensive profile that helps recommend better agents
and automations for the user's specific needs."""
@property
def parameters(self) -> dict[str, Any]:
return {
"type": "object",
"properties": {
"user_name": {
"type": "string",
"description": "The user's name",
},
"job_title": {
"type": "string",
"description": "The user's job title (e.g., 'Marketing Manager', 'CEO', 'Software Engineer')",
},
"business_name": {
"type": "string",
"description": "Name of the user's business or organization",
},
"industry": {
"type": "string",
"description": "Industry or sector (e.g., 'e-commerce', 'healthcare', 'finance')",
},
"business_size": {
"type": "string",
"description": "Company size: '1-10', '11-50', '51-200', '201-1000', or '1000+'",
},
"user_role": {
"type": "string",
"description": "User's role in organization context (e.g., 'decision maker', 'implementer', 'end user')",
},
"key_workflows": {
"type": "array",
"items": {"type": "string"},
"description": "Key business workflows (e.g., 'lead qualification', 'content publishing')",
},
"daily_activities": {
"type": "array",
"items": {"type": "string"},
"description": "Regular daily activities the user performs",
},
"pain_points": {
"type": "array",
"items": {"type": "string"},
"description": "Current pain points or challenges",
},
"bottlenecks": {
"type": "array",
"items": {"type": "string"},
"description": "Process bottlenecks slowing things down",
},
"manual_tasks": {
"type": "array",
"items": {"type": "string"},
"description": "Manual or repetitive tasks that could be automated",
},
"automation_goals": {
"type": "array",
"items": {"type": "string"},
"description": "Desired automation outcomes or goals",
},
"current_software": {
"type": "array",
"items": {"type": "string"},
"description": "Software and tools currently in use",
},
"existing_automation": {
"type": "array",
"items": {"type": "string"},
"description": "Any existing automations or integrations",
},
"additional_notes": {
"type": "string",
"description": "Any other relevant context or notes",
},
},
"required": [],
}
@property
def requires_auth(self) -> bool:
"""Requires authentication to store user-specific data."""
return True
async def _execute(
self,
user_id: str | None,
session: ChatSession,
**kwargs,
) -> ToolResponseBase:
"""
Capture and store business understanding incrementally.
Each call merges new data with existing understanding:
- String fields are overwritten if provided
- List fields are appended (with deduplication)
"""
session_id = session.session_id
if not user_id:
return ErrorResponse(
message="Authentication required to save business understanding.",
session_id=session_id,
)
# Check if any data was provided
if not any(v is not None for v in kwargs.values()):
return ErrorResponse(
message="Please provide at least one field to update.",
session_id=session_id,
)
# Build input model
input_data = BusinessUnderstandingInput(
user_name=kwargs.get("user_name"),
job_title=kwargs.get("job_title"),
business_name=kwargs.get("business_name"),
industry=kwargs.get("industry"),
business_size=kwargs.get("business_size"),
user_role=kwargs.get("user_role"),
key_workflows=kwargs.get("key_workflows"),
daily_activities=kwargs.get("daily_activities"),
pain_points=kwargs.get("pain_points"),
bottlenecks=kwargs.get("bottlenecks"),
manual_tasks=kwargs.get("manual_tasks"),
automation_goals=kwargs.get("automation_goals"),
current_software=kwargs.get("current_software"),
existing_automation=kwargs.get("existing_automation"),
additional_notes=kwargs.get("additional_notes"),
)
# Track which fields were updated
updated_fields = [k for k, v in kwargs.items() if v is not None]
# Upsert with merge
understanding = await upsert_business_understanding(user_id, input_data)
# Build current understanding summary for the response
current_understanding = {
"user_name": understanding.user_name,
"job_title": understanding.job_title,
"business_name": understanding.business_name,
"industry": understanding.industry,
"business_size": understanding.business_size,
"user_role": understanding.user_role,
"key_workflows": understanding.key_workflows,
"daily_activities": understanding.daily_activities,
"pain_points": understanding.pain_points,
"bottlenecks": understanding.bottlenecks,
"manual_tasks": understanding.manual_tasks,
"automation_goals": understanding.automation_goals,
"current_software": understanding.current_software,
"existing_automation": understanding.existing_automation,
"additional_notes": understanding.additional_notes,
}
# Filter out empty values for cleaner response
current_understanding = {
k: v
for k, v in current_understanding.items()
if v is not None and v != [] and v != ""
}
return UnderstandingUpdatedResponse(
message=f"Updated understanding with: {', '.join(updated_fields)}. "
"I now have a better picture of your business context.",
session_id=session_id,
updated_fields=updated_fields,
current_understanding=current_understanding,
)
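To make the merge semantics in `_execute` concrete (string fields overwrite, list fields append with deduplication), a hypothetical two-call sequence; the `execute()` signature is inferred from how `execute_tool` invokes tools elsewhere in this changeset:
```
tool = AddUnderstandingTool()

# First call stores the initial profile.
await tool.execute(user_id, session, "call_1",
                   user_name="Sarah", pain_points=["manual reporting"])

# Second call: user_name is overwritten, pain_points is merged without duplicates.
await tool.execute(user_id, session, "call_2",
                   user_name="Sarah Chen",
                   pain_points=["manual reporting", "slow handoffs"])
# Stored result: user_name="Sarah Chen",
#                pain_points=["manual reporting", "slow handoffs"]
```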

View File

@@ -0,0 +1,29 @@
"""Agent generator package - Creates agents from natural language."""
from .core import (
apply_agent_patch,
decompose_goal,
generate_agent,
generate_agent_patch,
get_agent_as_json,
save_agent_to_library,
)
from .fixer import apply_all_fixes
from .utils import get_blocks_info
from .validator import validate_agent
__all__ = [
# Core functions
"decompose_goal",
"generate_agent",
"generate_agent_patch",
"apply_agent_patch",
"save_agent_to_library",
"get_agent_as_json",
# Fixer
"apply_all_fixes",
# Validator
"validate_agent",
# Utils
"get_blocks_info",
]
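The exports suggest a decompose → generate → fix/validate → save pipeline; a hedged sketch of the happy path (the orchestration itself is not shown in this changeset, and `user_id` is a placeholder):
```
# Hypothetical end-to-end flow using the package's public API.
instructions = await decompose_goal("Summarize my inbox every morning")
if instructions and instructions.get("type") == "instructions":
    agent_json = await generate_agent(instructions)
    if agent_json is not None:
        # apply_all_fixes / validate_agent would run here; their exact
        # signatures are not part of this diff.
        graph, library_agent = await save_agent_to_library(agent_json, user_id="...")
```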

View File

@@ -0,0 +1,25 @@
"""OpenRouter client configuration for agent generation."""
import os
from openai import AsyncOpenAI
# Configuration - use OPEN_ROUTER_API_KEY for consistency with chat/config.py
OPENROUTER_API_KEY = os.getenv("OPEN_ROUTER_API_KEY") or os.getenv("OPENROUTER_API_KEY")
AGENT_GENERATOR_MODEL = os.getenv("AGENT_GENERATOR_MODEL", "anthropic/claude-opus-4.5")
# OpenRouter client (OpenAI-compatible API)
_client: AsyncOpenAI | None = None
def get_client() -> AsyncOpenAI:
"""Get or create the OpenRouter client."""
global _client
if _client is None:
if not OPENROUTER_API_KEY:
raise ValueError(
"OPEN_ROUTER_API_KEY (or OPENROUTER_API_KEY) environment variable is required"
)
_client = AsyncOpenAI(
base_url="https://openrouter.ai/api/v1",
api_key=OPENROUTER_API_KEY,
)
return _client
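The getter caches one client per process; a minimal call-through, assuming `OPEN_ROUTER_API_KEY` is set and that the module is importable at the path used below:
```
# Sketch only; the import path is an assumption.
import asyncio

from backend.api.features.chat.agent_generator.client import (
    AGENT_GENERATOR_MODEL,
    get_client,
)


async def main() -> None:
    client = get_client()
    response = await client.chat.completions.create(
        model=AGENT_GENERATOR_MODEL,
        messages=[{"role": "user", "content": "Say hello"}],
    )
    print(response.choices[0].message.content)


asyncio.run(main())
```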

View File

@@ -0,0 +1,390 @@
"""Core agent generation functions."""
import copy
import json
import logging
import uuid
from typing import Any
from backend.api.features.library import db as library_db
from backend.data.graph import Graph, Link, Node, create_graph
from .client import AGENT_GENERATOR_MODEL, get_client
from .prompts import DECOMPOSITION_PROMPT, GENERATION_PROMPT, PATCH_PROMPT
from .utils import get_block_summaries, parse_json_from_llm
logger = logging.getLogger(__name__)
async def decompose_goal(description: str, context: str = "") -> dict[str, Any] | None:
"""Break down a goal into steps or return clarifying questions.
Args:
description: Natural language goal description
context: Additional context (e.g., answers to previous questions)
Returns:
Dict with either:
- {"type": "clarifying_questions", "questions": [...]}
- {"type": "instructions", "steps": [...]}
Or None on error
"""
client = get_client()
prompt = DECOMPOSITION_PROMPT.format(block_summaries=get_block_summaries())
full_description = description
if context:
full_description = f"{description}\n\nAdditional context:\n{context}"
try:
response = await client.chat.completions.create(
model=AGENT_GENERATOR_MODEL,
messages=[
{"role": "system", "content": prompt},
{"role": "user", "content": full_description},
],
temperature=0,
)
content = response.choices[0].message.content
if content is None:
logger.error("LLM returned empty content for decomposition")
return None
result = parse_json_from_llm(content)
if result is None:
logger.error(f"Failed to parse decomposition response: {content[:200]}")
return None
return result
except Exception as e:
logger.error(f"Error decomposing goal: {e}")
return None
async def generate_agent(instructions: dict[str, Any]) -> dict[str, Any] | None:
"""Generate agent JSON from instructions.
Args:
instructions: Structured instructions from decompose_goal
Returns:
Agent JSON dict or None on error
"""
client = get_client()
prompt = GENERATION_PROMPT.format(block_summaries=get_block_summaries())
try:
response = await client.chat.completions.create(
model=AGENT_GENERATOR_MODEL,
messages=[
{"role": "system", "content": prompt},
{"role": "user", "content": json.dumps(instructions, indent=2)},
],
temperature=0,
)
content = response.choices[0].message.content
if content is None:
logger.error("LLM returned empty content for agent generation")
return None
result = parse_json_from_llm(content)
if result is None:
logger.error(f"Failed to parse agent JSON: {content[:200]}")
return None
# Ensure required fields
if "id" not in result:
result["id"] = str(uuid.uuid4())
if "version" not in result:
result["version"] = 1
if "is_active" not in result:
result["is_active"] = True
return result
except Exception as e:
logger.error(f"Error generating agent: {e}")
return None
def json_to_graph(agent_json: dict[str, Any]) -> Graph:
"""Convert agent JSON dict to Graph model.
Args:
agent_json: Agent JSON with nodes and links
Returns:
Graph ready for saving
"""
nodes = []
for n in agent_json.get("nodes", []):
node = Node(
id=n.get("id", str(uuid.uuid4())),
block_id=n["block_id"],
input_default=n.get("input_default", {}),
metadata=n.get("metadata", {}),
)
nodes.append(node)
links = []
for link_data in agent_json.get("links", []):
link = Link(
id=link_data.get("id", str(uuid.uuid4())),
source_id=link_data["source_id"],
sink_id=link_data["sink_id"],
source_name=link_data["source_name"],
sink_name=link_data["sink_name"],
is_static=link_data.get("is_static", False),
)
links.append(link)
return Graph(
id=agent_json.get("id", str(uuid.uuid4())),
version=agent_json.get("version", 1),
is_active=agent_json.get("is_active", True),
name=agent_json.get("name", "Generated Agent"),
description=agent_json.get("description", ""),
nodes=nodes,
links=links,
)
def _reassign_node_ids(graph: Graph) -> None:
"""Reassign all node and link IDs to new UUIDs.
This is needed when creating a new version to avoid unique constraint violations.
"""
# Create mapping from old node IDs to new UUIDs
id_map = {node.id: str(uuid.uuid4()) for node in graph.nodes}
# Reassign node IDs
for node in graph.nodes:
node.id = id_map[node.id]
# Update link references to use new node IDs
for link in graph.links:
link.id = str(uuid.uuid4()) # Also give links new IDs
if link.source_id in id_map:
link.source_id = id_map[link.source_id]
if link.sink_id in id_map:
link.sink_id = id_map[link.sink_id]
async def save_agent_to_library(
agent_json: dict[str, Any], user_id: str, is_update: bool = False
) -> tuple[Graph, Any]:
"""Save agent to database and user's library.
Args:
agent_json: Agent JSON dict
user_id: User ID
is_update: Whether this is an update to an existing agent
Returns:
Tuple of (created Graph, LibraryAgent)
"""
from backend.data.graph import get_graph_all_versions
graph = json_to_graph(agent_json)
if is_update:
# For updates, keep the same graph ID but increment version
# and reassign node/link IDs to avoid conflicts
if graph.id:
existing_versions = await get_graph_all_versions(graph.id, user_id)
if existing_versions:
latest_version = max(v.version for v in existing_versions)
graph.version = latest_version + 1
# Reassign node IDs (but keep graph ID the same)
_reassign_node_ids(graph)
logger.info(f"Updating agent {graph.id} to version {graph.version}")
else:
# For new agents, always generate a fresh UUID to avoid collisions
graph.id = str(uuid.uuid4())
graph.version = 1
# Reassign all node IDs as well
_reassign_node_ids(graph)
logger.info(f"Creating new agent with ID {graph.id}")
# Save to database
created_graph = await create_graph(graph, user_id)
# Add to user's library (or update existing library agent)
library_agents = await library_db.create_library_agent(
graph=created_graph,
user_id=user_id,
create_library_agents_for_sub_graphs=False,
)
return created_graph, library_agents[0]
async def get_agent_as_json(
graph_id: str, user_id: str | None
) -> dict[str, Any] | None:
"""Fetch an agent and convert to JSON format for editing.
Args:
graph_id: Graph ID or library agent ID
user_id: User ID
Returns:
Agent as JSON dict or None if not found
"""
from backend.data.graph import get_graph
# Try to get the graph (version=None gets the active version)
graph = await get_graph(graph_id, version=None, user_id=user_id)
if not graph:
return None
# Convert to JSON format
nodes = []
for node in graph.nodes:
nodes.append(
{
"id": node.id,
"block_id": node.block_id,
"input_default": node.input_default,
"metadata": node.metadata,
}
)
links = []
for node in graph.nodes:
for link in node.output_links:
links.append(
{
"id": link.id,
"source_id": link.source_id,
"sink_id": link.sink_id,
"source_name": link.source_name,
"sink_name": link.sink_name,
"is_static": link.is_static,
}
)
return {
"id": graph.id,
"name": graph.name,
"description": graph.description,
"version": graph.version,
"is_active": graph.is_active,
"nodes": nodes,
"links": links,
}
async def generate_agent_patch(
update_request: str, current_agent: dict[str, Any]
) -> dict[str, Any] | None:
"""Generate a patch to update an existing agent.
Args:
update_request: Natural language description of changes
current_agent: Current agent JSON
Returns:
Patch dict or clarifying questions, or None on error
"""
client = get_client()
prompt = PATCH_PROMPT.format(
current_agent=json.dumps(current_agent, indent=2),
block_summaries=get_block_summaries(),
)
try:
response = await client.chat.completions.create(
model=AGENT_GENERATOR_MODEL,
messages=[
{"role": "system", "content": prompt},
{"role": "user", "content": update_request},
],
temperature=0,
)
content = response.choices[0].message.content
if content is None:
logger.error("LLM returned empty content for patch generation")
return None
return parse_json_from_llm(content)
except Exception as e:
logger.error(f"Error generating patch: {e}")
return None
def apply_agent_patch(
current_agent: dict[str, Any], patch: dict[str, Any]
) -> dict[str, Any]:
"""Apply a patch to an existing agent.
Args:
current_agent: Current agent JSON
patch: Patch dict with operations
Returns:
Updated agent JSON
"""
agent = copy.deepcopy(current_agent)
patches = patch.get("patches", [])
for p in patches:
patch_type = p.get("type")
if patch_type == "modify":
node_id = p.get("node_id")
changes = p.get("changes", {})
for node in agent.get("nodes", []):
if node["id"] == node_id:
_deep_update(node, changes)
logger.debug(f"Modified node {node_id}")
break
elif patch_type == "add":
new_nodes = p.get("new_nodes", [])
new_links = p.get("new_links", [])
agent["nodes"] = agent.get("nodes", []) + new_nodes
agent["links"] = agent.get("links", []) + new_links
logger.debug(f"Added {len(new_nodes)} nodes, {len(new_links)} links")
elif patch_type == "remove":
node_ids_to_remove = set(p.get("node_ids", []))
link_ids_to_remove = set(p.get("link_ids", []))
# Remove nodes
agent["nodes"] = [
n for n in agent.get("nodes", []) if n["id"] not in node_ids_to_remove
]
# Remove links (both explicit and those referencing removed nodes)
agent["links"] = [
link
for link in agent.get("links", [])
if link["id"] not in link_ids_to_remove
and link["source_id"] not in node_ids_to_remove
and link["sink_id"] not in node_ids_to_remove
]
logger.debug(
f"Removed {len(node_ids_to_remove)} nodes, {len(link_ids_to_remove)} links"
)
return agent
def _deep_update(target: dict, source: dict) -> None:
"""Recursively update a dict with another dict."""
for key, value in source.items():
if key in target and isinstance(target[key], dict) and isinstance(value, dict):
_deep_update(target[key], value)
else:
target[key] = value
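To make the patch semantics concrete, a hypothetical "modify" patch applied to a one-node agent (IDs shortened for readability; real IDs are UUIDs):
```
agent = {
    "nodes": [{"id": "n1", "block_id": "b1",
               "input_default": {"prompt": "old", "model": "gpt-4o"},
               "metadata": {}}],
    "links": [],
}
patch = {"patches": [{"type": "modify", "node_id": "n1",
                      "changes": {"input_default": {"prompt": "new"}}}]}

updated = apply_agent_patch(agent, patch)
# _deep_update merges nested dicts, so untouched keys survive:
assert updated["nodes"][0]["input_default"] == {"prompt": "new", "model": "gpt-4o"}
# copy.deepcopy means the input agent is left unmodified:
assert agent["nodes"][0]["input_default"]["prompt"] == "old"
```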

View File

@@ -0,0 +1,606 @@
"""Agent fixer - Fixes common LLM generation errors."""
import logging
import re
import uuid
from typing import Any
from .utils import (
ADDTODICTIONARY_BLOCK_ID,
ADDTOLIST_BLOCK_ID,
CODE_EXECUTION_BLOCK_ID,
CONDITION_BLOCK_ID,
CREATEDICT_BLOCK_ID,
CREATELIST_BLOCK_ID,
DATA_SAMPLING_BLOCK_ID,
DOUBLE_CURLY_BRACES_BLOCK_IDS,
GET_CURRENT_DATE_BLOCK_ID,
STORE_VALUE_BLOCK_ID,
UNIVERSAL_TYPE_CONVERTER_BLOCK_ID,
get_blocks_info,
is_valid_uuid,
)
logger = logging.getLogger(__name__)
def fix_agent_ids(agent: dict[str, Any]) -> dict[str, Any]:
"""Fix invalid UUIDs in agent and link IDs."""
# Fix agent ID
if not is_valid_uuid(agent.get("id", "")):
agent["id"] = str(uuid.uuid4())
logger.debug(f"Fixed agent ID: {agent['id']}")
# Fix node IDs
id_mapping = {} # Old ID -> New ID
for node in agent.get("nodes", []):
if not is_valid_uuid(node.get("id", "")):
old_id = node.get("id", "")
new_id = str(uuid.uuid4())
id_mapping[old_id] = new_id
node["id"] = new_id
logger.debug(f"Fixed node ID: {old_id} -> {new_id}")
# Fix link IDs and update references
for link in agent.get("links", []):
if not is_valid_uuid(link.get("id", "")):
link["id"] = str(uuid.uuid4())
logger.debug(f"Fixed link ID: {link['id']}")
# Update source/sink IDs if they were remapped
if link.get("source_id") in id_mapping:
link["source_id"] = id_mapping[link["source_id"]]
if link.get("sink_id") in id_mapping:
link["sink_id"] = id_mapping[link["sink_id"]]
return agent
def fix_double_curly_braces(agent: dict[str, Any]) -> dict[str, Any]:
"""Fix single curly braces to double in template blocks."""
for node in agent.get("nodes", []):
if node.get("block_id") not in DOUBLE_CURLY_BRACES_BLOCK_IDS:
continue
input_data = node.get("input_default", {})
for key in ("prompt", "format"):
if key in input_data and isinstance(input_data[key], str):
original = input_data[key]
# Fix simple variable references: {var} -> {{var}}
fixed = re.sub(
r"(?<!\{)\{([a-zA-Z_][a-zA-Z0-9_]*)\}(?!\})",
r"{{\1}}",
original,
)
if fixed != original:
input_data[key] = fixed
logger.debug(f"Fixed curly braces in {key}")
return agent
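# Behaviour of the substitution above, for reference:
#   "{var}"   -> "{{var}}"   (single braces around an identifier are doubled)
#   "{{var}}" -> "{{var}}"   (already doubled; lookbehind/lookahead skip it)
#   "{a b}"   -> "{a b}"     (pattern only matches bare identifier names)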
def fix_storevalue_before_condition(agent: dict[str, Any]) -> dict[str, Any]:
"""Add StoreValueBlock before ConditionBlock if needed for value2."""
nodes = agent.get("nodes", [])
links = agent.get("links", [])
# Find all ConditionBlock nodes
condition_node_ids = {
node["id"] for node in nodes if node.get("block_id") == CONDITION_BLOCK_ID
}
if not condition_node_ids:
return agent
new_nodes = []
new_links = []
processed_conditions = set()
for link in links:
sink_id = link.get("sink_id")
sink_name = link.get("sink_name")
# Check if this link goes to a ConditionBlock's value2
if sink_id in condition_node_ids and sink_name == "value2":
source_node = next(
(n for n in nodes if n["id"] == link.get("source_id")), None
)
# Skip if source is already a StoreValueBlock
if source_node and source_node.get("block_id") == STORE_VALUE_BLOCK_ID:
continue
# Skip if we already processed this condition
if sink_id in processed_conditions:
continue
processed_conditions.add(sink_id)
# Create StoreValueBlock
store_node_id = str(uuid.uuid4())
store_node = {
"id": store_node_id,
"block_id": STORE_VALUE_BLOCK_ID,
"input_default": {"data": None},
"metadata": {"position": {"x": 0, "y": -100}},
}
new_nodes.append(store_node)
# Create link: original source -> StoreValueBlock
new_links.append(
{
"id": str(uuid.uuid4()),
"source_id": link["source_id"],
"source_name": link["source_name"],
"sink_id": store_node_id,
"sink_name": "input",
"is_static": False,
}
)
# Update original link: StoreValueBlock -> ConditionBlock
link["source_id"] = store_node_id
link["source_name"] = "output"
logger.debug(f"Added StoreValueBlock before ConditionBlock {sink_id}")
if new_nodes:
agent["nodes"] = nodes + new_nodes
return agent
def fix_addtolist_blocks(agent: dict[str, Any]) -> dict[str, Any]:
"""Fix AddToList blocks by adding prerequisite empty AddToList block.
When an AddToList block is found:
1. Checks if there's a CreateListBlock before it
2. Removes CreateListBlock if linked directly to AddToList
3. Adds an empty AddToList block before the original
4. Ensures the original has a self-referencing link
"""
nodes = agent.get("nodes", [])
links = agent.get("links", [])
new_nodes = []
original_addtolist_ids = set()
nodes_to_remove = set()
links_to_remove = []
# First pass: identify CreateListBlock nodes to remove
for link in links:
source_node = next(
(n for n in nodes if n.get("id") == link.get("source_id")), None
)
sink_node = next((n for n in nodes if n.get("id") == link.get("sink_id")), None)
if (
source_node
and sink_node
and source_node.get("block_id") == CREATELIST_BLOCK_ID
and sink_node.get("block_id") == ADDTOLIST_BLOCK_ID
):
nodes_to_remove.add(source_node.get("id"))
links_to_remove.append(link)
logger.debug(f"Removing CreateListBlock {source_node.get('id')}")
# Second pass: process AddToList blocks
filtered_nodes = []
for node in nodes:
if node.get("id") in nodes_to_remove:
continue
if node.get("block_id") == ADDTOLIST_BLOCK_ID:
original_addtolist_ids.add(node.get("id"))
node_id = node.get("id")
pos = node.get("metadata", {}).get("position", {"x": 0, "y": 0})
# Check if already has prerequisite
has_prereq = any(
link.get("sink_id") == node_id
and link.get("sink_name") == "list"
and link.get("source_name") == "updated_list"
for link in links
)
if not has_prereq:
# Remove links to "list" input (except self-reference)
for link in links:
if (
link.get("sink_id") == node_id
and link.get("sink_name") == "list"
and link.get("source_id") != node_id
and link not in links_to_remove
):
links_to_remove.append(link)
# Create prerequisite AddToList block
prereq_id = str(uuid.uuid4())
prereq_node = {
"id": prereq_id,
"block_id": ADDTOLIST_BLOCK_ID,
"input_default": {"list": [], "entry": None, "entries": []},
"metadata": {
"position": {"x": pos.get("x", 0) - 800, "y": pos.get("y", 0)}
},
}
new_nodes.append(prereq_node)
# Link prerequisite to original
links.append(
{
"id": str(uuid.uuid4()),
"source_id": prereq_id,
"source_name": "updated_list",
"sink_id": node_id,
"sink_name": "list",
"is_static": False,
}
)
logger.debug(f"Added prerequisite AddToList block for {node_id}")
filtered_nodes.append(node)
# Remove marked links
filtered_links = [link for link in links if link not in links_to_remove]
# Add self-referencing links for original AddToList blocks
for node in filtered_nodes + new_nodes:
if (
node.get("block_id") == ADDTOLIST_BLOCK_ID
and node.get("id") in original_addtolist_ids
):
node_id = node.get("id")
has_self_ref = any(
link["source_id"] == node_id
and link["sink_id"] == node_id
and link["source_name"] == "updated_list"
and link["sink_name"] == "list"
for link in filtered_links
)
if not has_self_ref:
filtered_links.append(
{
"id": str(uuid.uuid4()),
"source_id": node_id,
"source_name": "updated_list",
"sink_id": node_id,
"sink_name": "list",
"is_static": False,
}
)
logger.debug(f"Added self-reference for AddToList {node_id}")
agent["nodes"] = filtered_nodes + new_nodes
agent["links"] = filtered_links
return agent
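# Net effect, schematically (hypothetical node IDs):
#   before: CreateList --list--> AddToList(A)
#   after:  AddToList(prereq, list=[]) --updated_list->list--> AddToList(A)
#           AddToList(A) --updated_list->list--> AddToList(A)  (self-loop so
#           the list accumulates across iterations)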
def fix_addtodictionary_blocks(agent: dict[str, Any]) -> dict[str, Any]:
"""Fix AddToDictionary blocks by removing empty CreateDictionary nodes."""
nodes = agent.get("nodes", [])
links = agent.get("links", [])
nodes_to_remove = set()
links_to_remove = []
for link in links:
source_node = next(
(n for n in nodes if n.get("id") == link.get("source_id")), None
)
sink_node = next((n for n in nodes if n.get("id") == link.get("sink_id")), None)
if (
source_node
and sink_node
and source_node.get("block_id") == CREATEDICT_BLOCK_ID
and sink_node.get("block_id") == ADDTODICTIONARY_BLOCK_ID
):
nodes_to_remove.add(source_node.get("id"))
links_to_remove.append(link)
logger.debug(f"Removing CreateDictionary {source_node.get('id')}")
agent["nodes"] = [n for n in nodes if n.get("id") not in nodes_to_remove]
agent["links"] = [link for link in links if link not in links_to_remove]
return agent
def fix_code_execution_output(agent: dict[str, Any]) -> dict[str, Any]:
"""Fix CodeExecutionBlock output: change 'response' to 'stdout_logs'."""
nodes = agent.get("nodes", [])
links = agent.get("links", [])
for link in links:
source_node = next(
(n for n in nodes if n.get("id") == link.get("source_id")), None
)
if (
source_node
and source_node.get("block_id") == CODE_EXECUTION_BLOCK_ID
and link.get("source_name") == "response"
):
link["source_name"] = "stdout_logs"
logger.debug("Fixed CodeExecutionBlock output: response -> stdout_logs")
return agent
def fix_data_sampling_sample_size(agent: dict[str, Any]) -> dict[str, Any]:
"""Fix DataSamplingBlock by setting sample_size to 1 as default."""
nodes = agent.get("nodes", [])
links = agent.get("links", [])
links_to_remove = []
for node in nodes:
if node.get("block_id") == DATA_SAMPLING_BLOCK_ID:
node_id = node.get("id")
input_default = node.get("input_default", {})
# Remove links to sample_size
for link in links:
if (
link.get("sink_id") == node_id
and link.get("sink_name") == "sample_size"
):
links_to_remove.append(link)
# Set default
input_default["sample_size"] = 1
node["input_default"] = input_default
logger.debug(f"Fixed DataSamplingBlock {node_id} sample_size to 1")
if links_to_remove:
agent["links"] = [link for link in links if link not in links_to_remove]
return agent
def fix_node_x_coordinates(agent: dict[str, Any]) -> dict[str, Any]:
"""Fix node x-coordinates to ensure 800+ unit spacing between linked nodes."""
nodes = agent.get("nodes", [])
links = agent.get("links", [])
node_lookup = {n.get("id"): n for n in nodes}
for link in links:
source_id = link.get("source_id")
sink_id = link.get("sink_id")
source_node = node_lookup.get(source_id)
sink_node = node_lookup.get(sink_id)
if not source_node or not sink_node:
continue
source_pos = source_node.get("metadata", {}).get("position", {})
sink_pos = sink_node.get("metadata", {}).get("position", {})
source_x = source_pos.get("x", 0)
sink_x = sink_pos.get("x", 0)
if abs(sink_x - source_x) < 800:
new_x = source_x + 800
if "metadata" not in sink_node:
sink_node["metadata"] = {}
if "position" not in sink_node["metadata"]:
sink_node["metadata"]["position"] = {}
sink_node["metadata"]["position"]["x"] = new_x
logger.debug(f"Fixed node {sink_id} x: {sink_x} -> {new_x}")
return agent
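# Illustrative sketch (not part of the diff): two linked nodes 100 units
# apart get pushed to the required 800-unit spacing.
#
# >>> agent = {
# ...     "nodes": [
# ...         {"id": "a", "metadata": {"position": {"x": 0, "y": 0}}},
# ...         {"id": "b", "metadata": {"position": {"x": 100, "y": 0}}},
# ...     ],
# ...     "links": [{"source_id": "a", "sink_id": "b"}],
# ... }
# >>> fix_node_x_coordinates(agent)["nodes"][1]["metadata"]["position"]["x"]
# 800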
def fix_getcurrentdate_offset(agent: dict[str, Any]) -> dict[str, Any]:
"""Fix GetCurrentDateBlock offset to ensure it's positive."""
for node in agent.get("nodes", []):
if node.get("block_id") == GET_CURRENT_DATE_BLOCK_ID:
input_default = node.get("input_default", {})
if "offset" in input_default:
offset = input_default["offset"]
if isinstance(offset, (int, float)) and offset < 0:
input_default["offset"] = abs(offset)
logger.debug(f"Fixed offset: {offset} -> {abs(offset)}")
return agent
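# Sketch (not part of the diff): a negative offset on a GetCurrentDateBlock
# is flipped to its absolute value; other blocks are left untouched.
#
# >>> agent = {"nodes": [{"block_id": GET_CURRENT_DATE_BLOCK_ID,
# ...                     "input_default": {"offset": -3}}]}
# >>> fix_getcurrentdate_offset(agent)["nodes"][0]["input_default"]["offset"]
# 3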
def fix_ai_model_parameter(
agent: dict[str, Any],
blocks_info: list[dict[str, Any]],
default_model: str = "gpt-4o",
) -> dict[str, Any]:
"""Add default model parameter to AI blocks if missing."""
block_map = {b.get("id"): b for b in blocks_info}
for node in agent.get("nodes", []):
block_id = node.get("block_id")
block = block_map.get(block_id)
if not block:
continue
# Check if block has AI category
categories = block.get("categories", [])
is_ai_block = any(
cat.get("category") == "AI" for cat in categories if isinstance(cat, dict)
)
if is_ai_block:
input_default = node.get("input_default", {})
if "model" not in input_default:
input_default["model"] = default_model
node["input_default"] = input_default
logger.debug(
f"Added model '{default_model}' to AI block {node.get('id')}"
)
return agent
def fix_link_static_properties(
agent: dict[str, Any], blocks_info: list[dict[str, Any]]
) -> dict[str, Any]:
"""Fix is_static property based on source block's staticOutput."""
block_map = {b.get("id"): b for b in blocks_info}
node_lookup = {n.get("id"): n for n in agent.get("nodes", [])}
for link in agent.get("links", []):
source_node = node_lookup.get(link.get("source_id"))
if not source_node:
continue
source_block = block_map.get(source_node.get("block_id"))
if not source_block:
continue
static_output = source_block.get("staticOutput", False)
if link.get("is_static") != static_output:
link["is_static"] = static_output
logger.debug(f"Fixed link {link.get('id')} is_static to {static_output}")
return agent
def fix_data_type_mismatch(
agent: dict[str, Any], blocks_info: list[dict[str, Any]]
) -> dict[str, Any]:
"""Fix data type mismatches by inserting UniversalTypeConverterBlock."""
nodes = agent.get("nodes", [])
links = agent.get("links", [])
block_map = {b.get("id"): b for b in blocks_info}
node_lookup = {n.get("id"): n for n in nodes}
def get_property_type(schema: dict, name: str) -> str | None:
if "_#_" in name:
parent, child = name.split("_#_", 1)
parent_schema = schema.get(parent, {})
if "properties" in parent_schema:
return parent_schema["properties"].get(child, {}).get("type")
return None
return schema.get(name, {}).get("type")
def are_types_compatible(src: str, sink: str) -> bool:
if {src, sink} <= {"integer", "number"}:
return True
return src == sink
type_mapping = {
"string": "string",
"text": "string",
"integer": "number",
"number": "number",
"float": "number",
"boolean": "boolean",
"bool": "boolean",
"array": "list",
"list": "list",
"object": "dictionary",
"dict": "dictionary",
"dictionary": "dictionary",
}
new_links = []
nodes_to_add = []
for link in links:
source_node = node_lookup.get(link.get("source_id"))
sink_node = node_lookup.get(link.get("sink_id"))
if not source_node or not sink_node:
new_links.append(link)
continue
source_block = block_map.get(source_node.get("block_id"))
sink_block = block_map.get(sink_node.get("block_id"))
if not source_block or not sink_block:
new_links.append(link)
continue
source_outputs = source_block.get("outputSchema", {}).get("properties", {})
sink_inputs = sink_block.get("inputSchema", {}).get("properties", {})
source_type = get_property_type(source_outputs, link.get("source_name", ""))
sink_type = get_property_type(sink_inputs, link.get("sink_name", ""))
if (
source_type
and sink_type
and not are_types_compatible(source_type, sink_type)
):
# Insert type converter
converter_id = str(uuid.uuid4())
target_type = type_mapping.get(sink_type, sink_type)
converter_node = {
"id": converter_id,
"block_id": UNIVERSAL_TYPE_CONVERTER_BLOCK_ID,
"input_default": {"type": target_type},
"metadata": {"position": {"x": 0, "y": 100}},
}
nodes_to_add.append(converter_node)
# source -> converter
new_links.append(
{
"id": str(uuid.uuid4()),
"source_id": link["source_id"],
"source_name": link["source_name"],
"sink_id": converter_id,
"sink_name": "value",
"is_static": False,
}
)
# converter -> sink
new_links.append(
{
"id": str(uuid.uuid4()),
"source_id": converter_id,
"source_name": "value",
"sink_id": link["sink_id"],
"sink_name": link["sink_name"],
"is_static": False,
}
)
logger.debug(f"Inserted type converter: {source_type} -> {target_type}")
else:
new_links.append(link)
if nodes_to_add:
agent["nodes"] = nodes + nodes_to_add
agent["links"] = new_links
return agent
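# Illustrative sketch (not part of the diff): a string output linked into an
# integer input gets routed through a converter node set to "number". The
# block ids and schemas below are minimal stand-ins, not real library blocks.
#
# >>> blocks = [
# ...     {"id": "src", "inputSchema": {"properties": {}},
# ...      "outputSchema": {"properties": {"out": {"type": "string"}}}},
# ...     {"id": "dst", "outputSchema": {"properties": {}},
# ...      "inputSchema": {"properties": {"n": {"type": "integer"}}}},
# ... ]
# >>> agent = {
# ...     "nodes": [{"id": "1", "block_id": "src"}, {"id": "2", "block_id": "dst"}],
# ...     "links": [{"id": "l", "source_id": "1", "source_name": "out",
# ...                "sink_id": "2", "sink_name": "n"}],
# ... }
# >>> fixed = fix_data_type_mismatch(agent, blocks)
# >>> len(fixed["nodes"]), len(fixed["links"])
# (3, 2)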
def apply_all_fixes(
agent: dict[str, Any], blocks_info: list[dict[str, Any]] | None = None
) -> dict[str, Any]:
"""Apply all fixes to an agent JSON.
Args:
agent: Agent JSON dict
blocks_info: Optional list of block info dicts for advanced fixes
Returns:
Fixed agent JSON
"""
# Basic fixes (no block info needed)
agent = fix_agent_ids(agent)
agent = fix_double_curly_braces(agent)
agent = fix_storevalue_before_condition(agent)
agent = fix_addtolist_blocks(agent)
agent = fix_addtodictionary_blocks(agent)
agent = fix_code_execution_output(agent)
agent = fix_data_sampling_sample_size(agent)
agent = fix_node_x_coordinates(agent)
agent = fix_getcurrentdate_offset(agent)
# Advanced fixes (require block info)
if blocks_info is None:
blocks_info = get_blocks_info()
agent = fix_ai_model_parameter(agent, blocks_info)
agent = fix_link_static_properties(agent, blocks_info)
agent = fix_data_type_mismatch(agent, blocks_info)
return agent
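# Usage sketch (not part of the diff; the validator import path is an
# assumption): callers are expected to run fix-then-validate, feeding the
# validator's error report back into the next generation attempt.
#
# blocks_info = get_blocks_info()
# agent = apply_all_fixes(agent, blocks_info)
# ok, errors = validate_agent(agent, blocks_info)  # from .validator
# if not ok:
#     ...  # retry generation with `errors` appended to the instructions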

View File

@@ -0,0 +1,225 @@
"""Prompt templates for agent generation."""
DECOMPOSITION_PROMPT = """
You are an expert AutoGPT Workflow Decomposer. Your task is to analyze a user's high-level goal and break it down into a clear, step-by-step plan using the available blocks.
Each step should represent a distinct, automatable action suitable for execution by an AI automation system.
---
FIRST: Analyze the user's goal and determine:
1) Design-time configuration (fixed settings that won't change per run)
2) Runtime inputs (values the agent's end-user will provide each time it runs)
For anything that can vary per run (email addresses, names, dates, search terms, etc.):
- DO NOT ask for the actual value
- Instead, define it as an Agent Input with a clear name, type, and description
Only ask clarifying questions about design-time config that affects how you build the workflow:
- Which external service to use (e.g., "Gmail vs Outlook", "Notion vs Google Docs")
- Required formats or structures (e.g., "CSV, JSON, or PDF output?")
- Business rules that must be hard-coded
IMPORTANT CLARIFICATIONS POLICY:
- Ask no more than five essential questions
- Do not ask for concrete values that can be provided at runtime as Agent Inputs
- Do not ask for API keys or credentials; the platform handles those directly
- If there is enough information to infer reasonable defaults, prefer to propose defaults
---
GUIDELINES:
1. List each step as a numbered item
2. Describe the action clearly and specify inputs/outputs
3. Ensure steps are in logical, sequential order
4. Mention block names naturally (e.g., "Use GetWeatherByLocationBlock to...")
5. Help the user reach their goal efficiently
---
RULES:
1. OUTPUT FORMAT: Only output either clarifying questions OR step-by-step instructions, not both
2. USE ONLY THE BLOCKS PROVIDED
3. ALL required_input fields must be provided
4. Data types of linked properties must match
5. Write expert-level prompts for AI-related blocks
---
CRITICAL BLOCK RESTRICTIONS:
1. AddToListBlock: Outputs the updated list after EVERY addition, not once after all additions
2. SendEmailBlock: Draft the email for user review; set SMTP config based on email type
3. ConditionBlock: value2 is reference, value1 is contrast
4. CodeExecutionBlock: DO NOT USE - use AI blocks instead
5. ReadCsvBlock: Only use the 'rows' output, not 'row'
---
OUTPUT FORMAT:
If more information is needed:
```json
{{
"type": "clarifying_questions",
"questions": [
{{
"question": "Which email provider should be used? (Gmail, Outlook, custom SMTP)",
"keyword": "email_provider",
"example": "Gmail"
}}
]
}}
```
If ready to proceed:
```json
{{
"type": "instructions",
"steps": [
{{
"step_number": 1,
"block_name": "AgentShortTextInputBlock",
"description": "Get the URL of the content to analyze.",
"inputs": [{{"name": "name", "value": "URL"}}],
"outputs": [{{"name": "result", "description": "The URL entered by user"}}]
}}
]
}}
```
---
AVAILABLE BLOCKS:
{block_summaries}
"""
GENERATION_PROMPT = """
You are an expert AI workflow builder. Generate a valid agent JSON from the given instructions.
---
NODES:
Each node must include:
- `id`: Unique UUID v4 (e.g. `a8f5b1e2-c3d4-4e5f-8a9b-0c1d2e3f4a5b`)
- `block_id`: The block identifier (must match an Allowed Block)
- `input_default`: Dict of inputs (can be empty if no static inputs needed)
- `metadata`: Must contain:
- `position`: {{"x": number, "y": number}} - adjacent nodes should differ by 800+ in X
- `customized_name`: Clear name describing this block's purpose in the workflow
---
LINKS:
Each link connects a source node's output to a sink node's input:
- `id`: MUST be UUID v4 (NOT "link-1", "link-2", etc.)
- `source_id`: ID of the source node
- `source_name`: Output field name from the source block
- `sink_id`: ID of the sink node
- `sink_name`: Input field name on the sink block
- `is_static`: true only if source block has static_output: true
CRITICAL: All IDs must be valid UUID v4 format!
---
AGENT (GRAPH):
Wrap nodes and links in:
- `id`: UUID of the agent
- `name`: Short, generic name (avoid specific company names, URLs)
- `description`: Short, generic description
- `nodes`: List of all nodes
- `links`: List of all links
- `version`: 1
- `is_active`: true
---
TIPS:
- All required_input fields must be provided via input_default or a valid link
- Ensure consistent source_id and sink_id references
- Avoid dangling links
- Input/output pins must match block schemas
- Do not invent unknown block_ids
---
ALLOWED BLOCKS:
{block_summaries}
---
Generate the complete agent JSON. Output ONLY valid JSON, no explanation.
"""
PATCH_PROMPT = """
You are an expert at modifying AutoGPT agent workflows. Given the current agent and a modification request, generate a JSON patch to update the agent.
CURRENT AGENT:
{current_agent}
AVAILABLE BLOCKS:
{block_summaries}
---
PATCH FORMAT:
Return a JSON object with the following structure:
```json
{{
"type": "patch",
"intent": "Brief description of what the patch does",
"patches": [
{{
"type": "modify",
"node_id": "uuid-of-node-to-modify",
"changes": {{
"input_default": {{"field": "new_value"}},
"metadata": {{"customized_name": "New Name"}}
}}
}},
{{
"type": "add",
"new_nodes": [
{{
"id": "new-uuid",
"block_id": "block-uuid",
"input_default": {{}},
"metadata": {{"position": {{"x": 0, "y": 0}}, "customized_name": "Name"}}
}}
],
"new_links": [
{{
"id": "link-uuid",
"source_id": "source-node-id",
"source_name": "output_field",
"sink_id": "sink-node-id",
"sink_name": "input_field"
}}
]
}},
{{
"type": "remove",
"node_ids": ["uuid-of-node-to-remove"],
"link_ids": ["uuid-of-link-to-remove"]
}}
]
}}
```
If you need more information, return:
```json
{{
"type": "clarifying_questions",
"questions": [
{{
"question": "What specific change do you want?",
"keyword": "change_type",
"example": "Add error handling"
}}
]
}}
```
Generate the minimal patch needed. Output ONLY valid JSON.
"""

View File

@@ -0,0 +1,213 @@
"""Utilities for agent generation."""
import json
import re
from typing import Any
from backend.data.block import get_blocks
# UUID validation regex
UUID_REGEX = re.compile(
r"^[a-f0-9]{8}-[a-f0-9]{4}-4[a-f0-9]{3}-[89ab][a-f0-9]{3}-[a-f0-9]{12}$"
)
# Block IDs for various fixes
STORE_VALUE_BLOCK_ID = "1ff065e9-88e8-4358-9d82-8dc91f622ba9"
CONDITION_BLOCK_ID = "715696a0-e1da-45c8-b209-c2fa9c3b0be6"
ADDTOLIST_BLOCK_ID = "aeb08fc1-2fc1-4141-bc8e-f758f183a822"
ADDTODICTIONARY_BLOCK_ID = "31d1064e-7446-4693-a7d4-65e5ca1180d1"
CREATELIST_BLOCK_ID = "a912d5c7-6e00-4542-b2a9-8034136930e4"
CREATEDICT_BLOCK_ID = "b924ddf4-de4f-4b56-9a85-358930dcbc91"
CODE_EXECUTION_BLOCK_ID = "0b02b072-abe7-11ef-8372-fb5d162dd712"
DATA_SAMPLING_BLOCK_ID = "4a448883-71fa-49cf-91cf-70d793bd7d87"
UNIVERSAL_TYPE_CONVERTER_BLOCK_ID = "95d1b990-ce13-4d88-9737-ba5c2070c97b"
GET_CURRENT_DATE_BLOCK_ID = "b29c1b50-5d0e-4d9f-8f9d-1b0e6fcbf0b1"
DOUBLE_CURLY_BRACES_BLOCK_IDS = [
"44f6c8ad-d75c-4ae1-8209-aad1c0326928", # FillTextTemplateBlock
"6ab085e2-20b3-4055-bc3e-08036e01eca6",
"90f8c45e-e983-4644-aa0b-b4ebe2f531bc",
"363ae599-353e-4804-937e-b2ee3cef3da4", # AgentOutputBlock
"3b191d9f-356f-482d-8238-ba04b6d18381",
"db7d8f02-2f44-4c55-ab7a-eae0941f0c30",
"3a7c4b8d-6e2f-4a5d-b9c1-f8d23c5a9b0e",
"ed1ae7a0-b770-4089-b520-1f0005fad19a",
"a892b8d9-3e4e-4e9c-9c1e-75f8efcf1bfa",
"b29c1b50-5d0e-4d9f-8f9d-1b0e6fcbf0b1",
"716a67b3-6760-42e7-86dc-18645c6e00fc",
"530cf046-2ce0-4854-ae2c-659db17c7a46",
"ed55ac19-356e-4243-a6cb-bc599e9b716f",
"1f292d4a-41a4-4977-9684-7c8d560b9f91", # LLM blocks
"32a87eab-381e-4dd4-bdb8-4c47151be35a",
]
def is_valid_uuid(value: str) -> bool:
"""Check if a string is a valid UUID v4."""
return isinstance(value, str) and UUID_REGEX.match(value) is not None
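# Sketch: the regex enforces the version-4 and variant nibbles and lowercase
# hex, so uppercase UUIDs are rejected.
#
# >>> is_valid_uuid("a8f5b1e2-c3d4-4e5f-8a9b-0c1d2e3f4a5b")
# True
# >>> is_valid_uuid("link-1")
# False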
def _compact_schema(schema: dict) -> dict[str, str]:
"""Extract compact type info from a JSON schema properties dict.
Returns a dict of {field_name: type_string} for essential info only.
"""
props = schema.get("properties", {})
result = {}
for name, prop in props.items():
# Skip internal/complex fields
if name.startswith("_"):
continue
# Get type string
type_str = prop.get("type", "any")
# Handle anyOf/oneOf (optional types)
if "anyOf" in prop:
types = [t.get("type", "?") for t in prop["anyOf"] if t.get("type")]
type_str = "|".join(types) if types else "any"
elif "allOf" in prop:
type_str = "object"
# Add array item type if present
if type_str == "array" and "items" in prop:
items = prop["items"]
if isinstance(items, dict):
item_type = items.get("type", "any")
type_str = f"array[{item_type}]"
result[name] = type_str
return result
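# Sketch of the compact form (underscore-prefixed fields are skipped, anyOf
# collapses to a union, arrays carry their item type):
#
# >>> _compact_schema({"properties": {
# ...     "_internal": {"type": "string"},
# ...     "tags": {"type": "array", "items": {"type": "string"}},
# ...     "limit": {"anyOf": [{"type": "integer"}, {"type": "null"}]},
# ... }})
# {'tags': 'array[string]', 'limit': 'integer|null'}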
def get_block_summaries(include_schemas: bool = True) -> str:
"""Generate compact block summaries for prompts.
Args:
include_schemas: Whether to include input/output type info
Returns:
Formatted string of block summaries (compact format)
"""
blocks = get_blocks()
summaries = []
for block_id, block_cls in blocks.items():
block = block_cls()
name = block.name
desc = getattr(block, "description", "") or ""
# Truncate description
if len(desc) > 150:
desc = desc[:147] + "..."
if not include_schemas:
summaries.append(f"- {name} (id: {block_id}): {desc}")
else:
# Compact format with type info only
inputs = {}
outputs = {}
required = []
if hasattr(block, "input_schema"):
try:
schema = block.input_schema.jsonschema()
inputs = _compact_schema(schema)
required = schema.get("required", [])
except Exception:
pass
if hasattr(block, "output_schema"):
try:
schema = block.output_schema.jsonschema()
outputs = _compact_schema(schema)
except Exception:
pass
# Build compact line format
# Format: NAME (id): desc | in: {field:type, ...} [required] | out: {field:type}
in_str = ", ".join(f"{k}:{v}" for k, v in inputs.items())
out_str = ", ".join(f"{k}:{v}" for k, v in outputs.items())
req_str = f" req=[{','.join(required)}]" if required else ""
static = " [static]" if getattr(block, "static_output", False) else ""
line = f"- {name} (id: {block_id}): {desc}"
if in_str:
line += f"\n in: {{{in_str}}}{req_str}"
if out_str:
line += f"\n out: {{{out_str}}}{static}"
summaries.append(line)
return "\n".join(summaries)
def get_blocks_info() -> list[dict[str, Any]]:
"""Get block information with schemas for validation and fixing."""
blocks = get_blocks()
blocks_info = []
for block_id, block_cls in blocks.items():
block = block_cls()
blocks_info.append(
{
"id": block_id,
"name": block.name,
"description": getattr(block, "description", ""),
"categories": getattr(block, "categories", []),
"staticOutput": getattr(block, "static_output", False),
"inputSchema": (
block.input_schema.jsonschema()
if hasattr(block, "input_schema")
else {}
),
"outputSchema": (
block.output_schema.jsonschema()
if hasattr(block, "output_schema")
else {}
),
}
)
return blocks_info
def parse_json_from_llm(text: str) -> dict[str, Any] | None:
"""Extract JSON from LLM response (handles markdown code blocks)."""
if not text:
return None
# Try fenced code block
match = re.search(r"```(?:json)?\s*([\s\S]*?)```", text, re.IGNORECASE)
if match:
try:
return json.loads(match.group(1).strip())
except json.JSONDecodeError:
pass
# Try raw text
try:
return json.loads(text.strip())
except json.JSONDecodeError:
pass
# Try finding {...} span
start = text.find("{")
end = text.rfind("}")
if start != -1 and end > start:
try:
return json.loads(text[start : end + 1])
except json.JSONDecodeError:
pass
# Try finding [...] span
start = text.find("[")
end = text.rfind("]")
if start != -1 and end > start:
try:
return json.loads(text[start : end + 1])
except json.JSONDecodeError:
pass
return None
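# Sketch: fenced blocks win over raw text; anything unparseable yields None.
#
# >>> parse_json_from_llm('Sure:\n```json\n{"a": 1}\n```')
# {'a': 1}
# >>> parse_json_from_llm("no json here") is None
# True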

View File

@@ -0,0 +1,279 @@
"""Agent validator - Validates agent structure and connections."""
import logging
import re
from typing import Any
from .utils import get_blocks_info
logger = logging.getLogger(__name__)
class AgentValidator:
"""Validator for AutoGPT agents with detailed error reporting."""
def __init__(self):
self.errors: list[str] = []
def add_error(self, error: str) -> None:
"""Add an error message."""
self.errors.append(error)
def validate_block_existence(
self, agent: dict[str, Any], blocks_info: list[dict[str, Any]]
) -> bool:
"""Validate all block IDs exist in the blocks library."""
valid = True
valid_block_ids = {b.get("id") for b in blocks_info if b.get("id")}
for node in agent.get("nodes", []):
block_id = node.get("block_id")
node_id = node.get("id")
if not block_id:
self.add_error(f"Node '{node_id}' is missing 'block_id' field.")
valid = False
continue
if block_id not in valid_block_ids:
self.add_error(
f"Node '{node_id}' references block_id '{block_id}' which does not exist."
)
valid = False
return valid
def validate_link_node_references(self, agent: dict[str, Any]) -> bool:
"""Validate all node IDs referenced in links exist."""
valid = True
valid_node_ids = {n.get("id") for n in agent.get("nodes", []) if n.get("id")}
for link in agent.get("links", []):
link_id = link.get("id", "Unknown")
source_id = link.get("source_id")
sink_id = link.get("sink_id")
if not source_id:
self.add_error(f"Link '{link_id}' is missing 'source_id'.")
valid = False
elif source_id not in valid_node_ids:
self.add_error(
f"Link '{link_id}' references non-existent source_id '{source_id}'."
)
valid = False
if not sink_id:
self.add_error(f"Link '{link_id}' is missing 'sink_id'.")
valid = False
elif sink_id not in valid_node_ids:
self.add_error(
f"Link '{link_id}' references non-existent sink_id '{sink_id}'."
)
valid = False
return valid
def validate_required_inputs(
self, agent: dict[str, Any], blocks_info: list[dict[str, Any]]
) -> bool:
"""Validate required inputs are provided."""
valid = True
block_map = {b.get("id"): b for b in blocks_info}
for node in agent.get("nodes", []):
block_id = node.get("block_id")
block = block_map.get(block_id)
if not block:
continue
required_inputs = block.get("inputSchema", {}).get("required", [])
input_defaults = node.get("input_default", {})
node_id = node.get("id")
# Get linked inputs
linked_inputs = {
link["sink_name"]
for link in agent.get("links", [])
if link.get("sink_id") == node_id
}
for req_input in required_inputs:
if (
req_input not in input_defaults
and req_input not in linked_inputs
and req_input != "credentials"
):
block_name = block.get("name", "Unknown Block")
self.add_error(
f"Node '{node_id}' ({block_name}) is missing required input '{req_input}'."
)
valid = False
return valid
def validate_data_type_compatibility(
self, agent: dict[str, Any], blocks_info: list[dict[str, Any]]
) -> bool:
"""Validate linked data types are compatible."""
valid = True
block_map = {b.get("id"): b for b in blocks_info}
node_lookup = {n.get("id"): n for n in agent.get("nodes", [])}
def get_type(schema: dict, name: str) -> str | None:
if "_#_" in name:
parent, child = name.split("_#_", 1)
parent_schema = schema.get(parent, {})
if "properties" in parent_schema:
return parent_schema["properties"].get(child, {}).get("type")
return None
return schema.get(name, {}).get("type")
def are_compatible(src: str, sink: str) -> bool:
if {src, sink} <= {"integer", "number"}:
return True
return src == sink
for link in agent.get("links", []):
source_node = node_lookup.get(link.get("source_id"))
sink_node = node_lookup.get(link.get("sink_id"))
if not source_node or not sink_node:
continue
source_block = block_map.get(source_node.get("block_id"))
sink_block = block_map.get(sink_node.get("block_id"))
if not source_block or not sink_block:
continue
source_outputs = source_block.get("outputSchema", {}).get("properties", {})
sink_inputs = sink_block.get("inputSchema", {}).get("properties", {})
source_type = get_type(source_outputs, link.get("source_name", ""))
sink_type = get_type(sink_inputs, link.get("sink_name", ""))
if source_type and sink_type and not are_compatible(source_type, sink_type):
self.add_error(
f"Type mismatch: {source_block.get('name')} output '{link['source_name']}' "
f"({source_type}) -> {sink_block.get('name')} input '{link['sink_name']}' ({sink_type})."
)
valid = False
return valid
def validate_nested_sink_links(
self, agent: dict[str, Any], blocks_info: list[dict[str, Any]]
) -> bool:
"""Validate nested sink links (with _#_ notation)."""
valid = True
block_map = {b.get("id"): b for b in blocks_info}
node_lookup = {n.get("id"): n for n in agent.get("nodes", [])}
for link in agent.get("links", []):
sink_name = link.get("sink_name", "")
if "_#_" in sink_name:
parent, child = sink_name.split("_#_", 1)
sink_node = node_lookup.get(link.get("sink_id"))
if not sink_node:
continue
block = block_map.get(sink_node.get("block_id"))
if not block:
continue
input_props = block.get("inputSchema", {}).get("properties", {})
parent_schema = input_props.get(parent)
if not parent_schema:
self.add_error(
f"Invalid nested link '{sink_name}': parent '{parent}' not found."
)
valid = False
continue
if not parent_schema.get("additionalProperties"):
if not (
isinstance(parent_schema, dict)
and "properties" in parent_schema
and child in parent_schema.get("properties", {})
):
self.add_error(
f"Invalid nested link '{sink_name}': child '{child}' not found in '{parent}'."
)
valid = False
return valid
def validate_prompt_spaces(self, agent: dict[str, Any]) -> bool:
"""Validate prompts don't have spaces in template variables."""
valid = True
for node in agent.get("nodes", []):
input_default = node.get("input_default", {})
prompt = input_default.get("prompt", "")
if not isinstance(prompt, str):
continue
# Find {{...}} with spaces
matches = re.finditer(r"\{\{([^}]+)\}\}", prompt)
for match in matches:
content = match.group(1)
if " " in content:
self.add_error(
f"Node '{node.get('id')}' has spaces in template variable: "
f"'{{{{{content}}}}}' should be '{{{{{content.replace(' ', '_')}}}}}'."
)
valid = False
return valid
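# Sketch (not part of the diff): a template variable containing a space is
# flagged, and the error suggests the underscored form.
#
# >>> v = AgentValidator()
# >>> v.validate_prompt_spaces(
# ...     {"nodes": [{"id": "n1",
# ...                 "input_default": {"prompt": "Hi {{user name}}"}}]}
# ... )
# False
# >>> "user_name" in v.errors[0]
# True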
def validate(
self, agent: dict[str, Any], blocks_info: list[dict[str, Any]] | None = None
) -> tuple[bool, str | None]:
"""Run all validations.
Returns:
Tuple of (is_valid, error_message)
"""
self.errors = []
if blocks_info is None:
blocks_info = get_blocks_info()
checks = [
self.validate_block_existence(agent, blocks_info),
self.validate_link_node_references(agent),
self.validate_required_inputs(agent, blocks_info),
self.validate_data_type_compatibility(agent, blocks_info),
self.validate_nested_sink_links(agent, blocks_info),
self.validate_prompt_spaces(agent),
]
all_passed = all(checks)
if all_passed:
logger.info("Agent validation successful")
return True, None
error_message = "Agent validation failed:\n"
for i, error in enumerate(self.errors, 1):
error_message += f"{i}. {error}\n"
logger.warning(f"Agent validation failed with {len(self.errors)} errors")
return False, error_message
def validate_agent(
agent: dict[str, Any], blocks_info: list[dict[str, Any]] | None = None
) -> tuple[bool, str | None]:
"""Convenience function to validate an agent.
Returns:
Tuple of (is_valid, error_message)
"""
validator = AgentValidator()
return validator.validate(agent, blocks_info)
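# Usage sketch (not part of the diff): passing blocks_info explicitly avoids
# loading the live block library; an empty agent passes every check.
#
# >>> validate_agent({"nodes": [], "links": []}, blocks_info=[])
# (True, None)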

View File

@@ -0,0 +1,455 @@
"""Tool for retrieving agent execution outputs from user's library."""
import logging
import re
from datetime import datetime, timedelta, timezone
from typing import Any
from pydantic import BaseModel, field_validator
from backend.api.features.chat.model import ChatSession
from backend.api.features.library import db as library_db
from backend.api.features.library.model import LibraryAgent
from backend.data import execution as execution_db
from backend.data.execution import ExecutionStatus, GraphExecution, GraphExecutionMeta
from .base import BaseTool
from .models import (
AgentOutputResponse,
ErrorResponse,
ExecutionOutputInfo,
NoResultsResponse,
ToolResponseBase,
)
from .utils import fetch_graph_from_store_slug
logger = logging.getLogger(__name__)
class AgentOutputInput(BaseModel):
"""Input parameters for the agent_output tool."""
agent_name: str = ""
library_agent_id: str = ""
store_slug: str = ""
execution_id: str = ""
run_time: str = "latest"
@field_validator(
"agent_name",
"library_agent_id",
"store_slug",
"execution_id",
"run_time",
mode="before",
)
@classmethod
def strip_strings(cls, v: Any) -> Any:
"""Strip whitespace from string fields."""
return v.strip() if isinstance(v, str) else v
def parse_time_expression(
time_expr: str | None,
) -> tuple[datetime | None, datetime | None]:
"""
Parse time expression into datetime range (start, end).
Supports:
- "latest" or None -> returns (None, None) to get most recent
- "yesterday" -> 24h window for yesterday
- "today" -> Today from midnight
- "last week" / "last 7 days" -> 7 day window
- "last month" / "last 30 days" -> 30 day window
- ISO date "YYYY-MM-DD" -> 24h window for that date
"""
if not time_expr or time_expr.lower() == "latest":
return None, None
now = datetime.now(timezone.utc)
expr = time_expr.lower().strip()
# Relative expressions
if expr == "yesterday":
end = now.replace(hour=0, minute=0, second=0, microsecond=0)
start = end - timedelta(days=1)
return start, end
if expr in ("last week", "last 7 days"):
return now - timedelta(days=7), now
if expr in ("last month", "last 30 days"):
return now - timedelta(days=30), now
if expr == "today":
start = now.replace(hour=0, minute=0, second=0, microsecond=0)
return start, now
# Try ISO date format (YYYY-MM-DD)
date_match = re.match(r"^(\d{4})-(\d{2})-(\d{2})$", expr)
if date_match:
year, month, day = map(int, date_match.groups())
start = datetime(year, month, day, 0, 0, 0, tzinfo=timezone.utc)
end = start + timedelta(days=1)
return start, end
# Try ISO datetime
try:
parsed = datetime.fromisoformat(expr.replace("Z", "+00:00"))
if parsed.tzinfo is None:
parsed = parsed.replace(tzinfo=timezone.utc)
# Return +/- 1 hour window around the specified time
return parsed - timedelta(hours=1), parsed + timedelta(hours=1)
except ValueError:
pass
# Fallback: treat as "latest"
return None, None
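# Sketch: "latest" means "no window"; a bare date maps to a 24h UTC window.
#
# >>> parse_time_expression("latest")
# (None, None)
# >>> start, end = parse_time_expression("2026-01-09")
# >>> (end - start).days
# 1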
class AgentOutputTool(BaseTool):
"""Tool for retrieving execution outputs from user's library agents."""
@property
def name(self) -> str:
return "agent_output"
@property
def description(self) -> str:
return """Retrieve execution outputs from agents in the user's library.
Identify the agent using one of:
- agent_name: Fuzzy search in user's library
- library_agent_id: Exact library agent ID
- store_slug: Marketplace format 'username/agent-name'
Select which run to retrieve using:
- execution_id: Specific execution ID
- run_time: 'latest' (default), 'yesterday', 'last week', or ISO date 'YYYY-MM-DD'
"""
@property
def parameters(self) -> dict[str, Any]:
return {
"type": "object",
"properties": {
"agent_name": {
"type": "string",
"description": "Agent name to search for in user's library (fuzzy match)",
},
"library_agent_id": {
"type": "string",
"description": "Exact library agent ID",
},
"store_slug": {
"type": "string",
"description": "Marketplace identifier: 'username/agent-slug'",
},
"execution_id": {
"type": "string",
"description": "Specific execution ID to retrieve",
},
"run_time": {
"type": "string",
"description": (
"Time filter: 'latest', 'yesterday', 'last week', or 'YYYY-MM-DD'"
),
},
},
"required": [],
}
@property
def requires_auth(self) -> bool:
return True
async def _resolve_agent(
self,
user_id: str,
agent_name: str | None,
library_agent_id: str | None,
store_slug: str | None,
) -> tuple[LibraryAgent | None, str | None]:
"""
Resolve agent from provided identifiers.
Returns (library_agent, error_message).
"""
# Priority 1: Exact library agent ID
if library_agent_id:
try:
agent = await library_db.get_library_agent(library_agent_id, user_id)
return agent, None
except Exception as e:
logger.warning(f"Failed to get library agent by ID: {e}")
return None, f"Library agent '{library_agent_id}' not found"
# Priority 2: Store slug (username/agent-name)
if store_slug and "/" in store_slug:
username, agent_slug = store_slug.split("/", 1)
graph, _ = await fetch_graph_from_store_slug(username, agent_slug)
if not graph:
return None, f"Agent '{store_slug}' not found in marketplace"
# Find in user's library by graph_id
agent = await library_db.get_library_agent_by_graph_id(user_id, graph.id)
if not agent:
return (
None,
f"Agent '{store_slug}' is not in your library. "
"Add it first to see outputs.",
)
return agent, None
# Priority 3: Fuzzy name search in library
if agent_name:
try:
response = await library_db.list_library_agents(
user_id=user_id,
search_term=agent_name,
page_size=5,
)
if not response.agents:
return (
None,
f"No agents matching '{agent_name}' found in your library",
)
# Return best match (first result from search)
return response.agents[0], None
except Exception as e:
logger.error(f"Error searching library agents: {e}")
return None, f"Error searching for agent: {e}"
return (
None,
"Please specify an agent name, library_agent_id, or store_slug",
)
async def _get_execution(
self,
user_id: str,
graph_id: str,
execution_id: str | None,
time_start: datetime | None,
time_end: datetime | None,
) -> tuple[GraphExecution | None, list[GraphExecutionMeta], str | None]:
"""
Fetch execution(s) based on filters.
Returns (single_execution, available_executions_meta, error_message).
"""
# If specific execution_id provided, fetch it directly
if execution_id:
execution = await execution_db.get_graph_execution(
user_id=user_id,
execution_id=execution_id,
include_node_executions=False,
)
if not execution:
return None, [], f"Execution '{execution_id}' not found"
return execution, [], None
# Get completed executions with time filters
executions = await execution_db.get_graph_executions(
graph_id=graph_id,
user_id=user_id,
statuses=[ExecutionStatus.COMPLETED],
created_time_gte=time_start,
created_time_lte=time_end,
limit=10,
)
if not executions:
return None, [], None # No error, just no executions
# If only one execution, fetch full details
if len(executions) == 1:
full_execution = await execution_db.get_graph_execution(
user_id=user_id,
execution_id=executions[0].id,
include_node_executions=False,
)
return full_execution, [], None
# Multiple executions - return latest with full details, plus list of available
full_execution = await execution_db.get_graph_execution(
user_id=user_id,
execution_id=executions[0].id,
include_node_executions=False,
)
return full_execution, executions, None
def _build_response(
self,
agent: LibraryAgent,
execution: GraphExecution | None,
available_executions: list[GraphExecutionMeta],
session_id: str | None,
) -> AgentOutputResponse:
"""Build the response based on execution data."""
library_agent_link = f"/library/agents/{agent.id}"
if not execution:
return AgentOutputResponse(
message=f"No completed executions found for agent '{agent.name}'",
session_id=session_id,
agent_name=agent.name,
agent_id=agent.graph_id,
library_agent_id=agent.id,
library_agent_link=library_agent_link,
total_executions=0,
)
execution_info = ExecutionOutputInfo(
execution_id=execution.id,
status=execution.status.value,
started_at=execution.started_at,
ended_at=execution.ended_at,
outputs=dict(execution.outputs),
inputs_summary=execution.inputs if execution.inputs else None,
)
available_list = None
if len(available_executions) > 1:
available_list = [
{
"id": e.id,
"status": e.status.value,
"started_at": e.started_at.isoformat() if e.started_at else None,
}
for e in available_executions[:5]
]
message = f"Found execution outputs for agent '{agent.name}'"
if len(available_executions) > 1:
message += (
f". Showing latest of {len(available_executions)} matching executions."
)
return AgentOutputResponse(
message=message,
session_id=session_id,
agent_name=agent.name,
agent_id=agent.graph_id,
library_agent_id=agent.id,
library_agent_link=library_agent_link,
execution=execution_info,
available_executions=available_list,
total_executions=len(available_executions) if available_executions else 1,
)
async def _execute(
self,
user_id: str | None,
session: ChatSession,
**kwargs,
) -> ToolResponseBase:
"""Execute the agent_output tool."""
session_id = session.session_id
# Parse and validate input
try:
input_data = AgentOutputInput(**kwargs)
except Exception as e:
logger.error(f"Invalid input: {e}")
return ErrorResponse(
message="Invalid input parameters",
error=str(e),
session_id=session_id,
)
# Ensure user_id is present (should be guaranteed by requires_auth)
if not user_id:
return ErrorResponse(
message="User authentication required",
session_id=session_id,
)
# Check if at least one identifier is provided
if not any(
[
input_data.agent_name,
input_data.library_agent_id,
input_data.store_slug,
input_data.execution_id,
]
):
return ErrorResponse(
message=(
"Please specify at least one of: agent_name, "
"library_agent_id, store_slug, or execution_id"
),
session_id=session_id,
)
# If only execution_id provided, we need to find the agent differently
if (
input_data.execution_id
and not input_data.agent_name
and not input_data.library_agent_id
and not input_data.store_slug
):
# Fetch execution directly to get graph_id
execution = await execution_db.get_graph_execution(
user_id=user_id,
execution_id=input_data.execution_id,
include_node_executions=False,
)
if not execution:
return ErrorResponse(
message=f"Execution '{input_data.execution_id}' not found",
session_id=session_id,
)
# Find library agent by graph_id
agent = await library_db.get_library_agent_by_graph_id(
user_id, execution.graph_id
)
if not agent:
return NoResultsResponse(
message=(
f"Execution found but agent not in your library. "
f"Graph ID: {execution.graph_id}"
),
session_id=session_id,
suggestions=["Add the agent to your library to see more details"],
)
return self._build_response(agent, execution, [], session_id)
# Resolve agent from identifiers
agent, error = await self._resolve_agent(
user_id=user_id,
agent_name=input_data.agent_name or None,
library_agent_id=input_data.library_agent_id or None,
store_slug=input_data.store_slug or None,
)
if error or not agent:
return NoResultsResponse(
message=error or "Agent not found",
session_id=session_id,
suggestions=[
"Check the agent name or ID",
"Make sure the agent is in your library",
],
)
# Parse time expression
time_start, time_end = parse_time_expression(input_data.run_time)
# Fetch execution(s)
execution, available_executions, exec_error = await self._get_execution(
user_id=user_id,
graph_id=agent.graph_id,
execution_id=input_data.execution_id or None,
time_start=time_start,
time_end=time_end,
)
if exec_error:
return ErrorResponse(
message=exec_error,
session_id=session_id,
)
return self._build_response(agent, execution, available_executions, session_id)

View File

@@ -5,8 +5,8 @@ from typing import Any
 from openai.types.chat import ChatCompletionToolParam
-from backend.server.v2.chat.model import ChatSession
-from backend.server.v2.chat.response_model import StreamToolExecutionResult
+from backend.api.features.chat.model import ChatSession
+from backend.api.features.chat.response_model import StreamToolExecutionResult
 from .models import ErrorResponse, NeedLoginResponse, ToolResponseBase

File diff suppressed because one or more lines are too long

View File

@@ -0,0 +1,279 @@
"""CreateAgentTool - Creates agents from natural language descriptions."""
import logging
from typing import Any
from backend.api.features.chat.model import ChatSession
from .agent_generator import (
apply_all_fixes,
decompose_goal,
generate_agent,
get_blocks_info,
save_agent_to_library,
validate_agent,
)
from .base import BaseTool
from .models import (
AgentPreviewResponse,
AgentSavedResponse,
ClarificationNeededResponse,
ClarifyingQuestion,
ErrorResponse,
ToolResponseBase,
)
logger = logging.getLogger(__name__)
# Maximum retries for agent generation with validation feedback
MAX_GENERATION_RETRIES = 2
class CreateAgentTool(BaseTool):
"""Tool for creating agents from natural language descriptions."""
@property
def name(self) -> str:
return "create_agent"
@property
def description(self) -> str:
return (
"Create a new agent workflow from a natural language description. "
"First generates a preview, then saves to library if save=true."
)
@property
def requires_auth(self) -> bool:
return True
@property
def parameters(self) -> dict[str, Any]:
return {
"type": "object",
"properties": {
"description": {
"type": "string",
"description": (
"Natural language description of what the agent should do. "
"Be specific about inputs, outputs, and the workflow steps."
),
},
"context": {
"type": "string",
"description": (
"Additional context or answers to previous clarifying questions. "
"Include any preferences or constraints mentioned by the user."
),
},
"save": {
"type": "boolean",
"description": (
"Whether to save the agent to the user's library. "
"Default is true. Set to false for preview only."
),
"default": True,
},
},
"required": ["description"],
}
async def _execute(
self,
user_id: str | None,
session: ChatSession,
**kwargs,
) -> ToolResponseBase:
"""Execute the create_agent tool.
Flow:
1. Decompose the description into steps (may return clarifying questions)
2. Generate agent JSON from the steps
3. Apply fixes to correct common LLM errors
4. Validate the agent, retrying generation with error feedback on failure
5. Preview or save based on the save parameter
"""
description = kwargs.get("description", "").strip()
context = kwargs.get("context", "")
save = kwargs.get("save", True)
session_id = session.session_id if session else None
if not description:
return ErrorResponse(
message="Please provide a description of what the agent should do.",
error="Missing description parameter",
session_id=session_id,
)
# Step 1: Decompose goal into steps
try:
decomposition_result = await decompose_goal(description, context)
except ValueError as e:
# Handle missing API key or configuration errors
return ErrorResponse(
message=f"Agent generation is not configured: {str(e)}",
error="configuration_error",
session_id=session_id,
)
if decomposition_result is None:
return ErrorResponse(
message="Failed to analyze the goal. Please try rephrasing.",
error="Decomposition failed",
session_id=session_id,
)
# Check if LLM returned clarifying questions
if decomposition_result.get("type") == "clarifying_questions":
questions = decomposition_result.get("questions", [])
return ClarificationNeededResponse(
message=(
"I need some more information to create this agent. "
"Please answer the following questions:"
),
questions=[
ClarifyingQuestion(
question=q.get("question", ""),
keyword=q.get("keyword", ""),
example=q.get("example"),
)
for q in questions
],
session_id=session_id,
)
# Check for unachievable/vague goals
if decomposition_result.get("type") == "unachievable_goal":
suggested = decomposition_result.get("suggested_goal", "")
reason = decomposition_result.get("reason", "")
return ErrorResponse(
message=(
f"This goal cannot be accomplished with the available blocks. "
f"{reason} "
f"Suggestion: {suggested}"
),
error="unachievable_goal",
details={"suggested_goal": suggested, "reason": reason},
session_id=session_id,
)
if decomposition_result.get("type") == "vague_goal":
suggested = decomposition_result.get("suggested_goal", "")
return ErrorResponse(
message=(
f"The goal is too vague to create a specific workflow. "
f"Suggestion: {suggested}"
),
error="vague_goal",
details={"suggested_goal": suggested},
session_id=session_id,
)
# Step 2: Generate agent JSON with retry on validation failure
blocks_info = get_blocks_info()
agent_json = None
validation_errors = None
for attempt in range(MAX_GENERATION_RETRIES + 1):
# Generate agent (include validation errors from previous attempt)
if attempt == 0:
agent_json = await generate_agent(decomposition_result)
else:
# Retry with validation error feedback
logger.info(
f"Retry {attempt}/{MAX_GENERATION_RETRIES} with validation feedback"
)
retry_instructions = {
**decomposition_result,
"previous_errors": validation_errors,
"retry_instructions": (
"The previous generation had validation errors. "
"Please fix these issues in the new generation:\n"
f"{validation_errors}"
),
}
agent_json = await generate_agent(retry_instructions)
if agent_json is None:
if attempt == MAX_GENERATION_RETRIES:
return ErrorResponse(
message="Failed to generate the agent. Please try again.",
error="Generation failed",
session_id=session_id,
)
continue
# Step 3: Apply fixes to correct common errors
agent_json = apply_all_fixes(agent_json, blocks_info)
# Step 4: Validate the agent
is_valid, validation_errors = validate_agent(agent_json, blocks_info)
if is_valid:
logger.info(f"Agent generated successfully on attempt {attempt + 1}")
break
logger.warning(
f"Validation failed on attempt {attempt + 1}: {validation_errors}"
)
if attempt == MAX_GENERATION_RETRIES:
# Return error with validation details
return ErrorResponse(
message=(
f"Generated agent has validation errors after {MAX_GENERATION_RETRIES + 1} attempts. "
f"Please try rephrasing your request or simplify the workflow."
),
error="validation_failed",
details={"validation_errors": validation_errors},
session_id=session_id,
)
agent_name = agent_json.get("name", "Generated Agent")
agent_description = agent_json.get("description", "")
node_count = len(agent_json.get("nodes", []))
link_count = len(agent_json.get("links", []))
# Step 5: Preview or save
if not save:
return AgentPreviewResponse(
message=(
f"I've generated an agent called '{agent_name}' with {node_count} blocks. "
f"Review it and call create_agent with save=true to save it to your library."
),
agent_json=agent_json,
agent_name=agent_name,
description=agent_description,
node_count=node_count,
link_count=link_count,
session_id=session_id,
)
# Save to library
if not user_id:
return ErrorResponse(
message="You must be logged in to save agents.",
error="auth_required",
session_id=session_id,
)
try:
created_graph, library_agent = await save_agent_to_library(
agent_json, user_id
)
return AgentSavedResponse(
message=f"Agent '{created_graph.name}' has been saved to your library!",
agent_id=created_graph.id,
agent_name=created_graph.name,
library_agent_id=library_agent.id,
library_agent_link=f"/library/{library_agent.id}",
agent_page_link=f"/build?flowID={created_graph.id}",
session_id=session_id,
)
except Exception as e:
return ErrorResponse(
message=f"Failed to save the agent: {str(e)}",
error="save_failed",
details={"exception": str(e)},
session_id=session_id,
)

File diff suppressed because one or more lines are too long

View File

@@ -0,0 +1,294 @@
"""EditAgentTool - Edits existing agents using natural language."""
import logging
from typing import Any
from backend.api.features.chat.model import ChatSession
from .agent_generator import (
apply_agent_patch,
apply_all_fixes,
generate_agent_patch,
get_agent_as_json,
get_blocks_info,
save_agent_to_library,
validate_agent,
)
from .base import BaseTool
from .models import (
AgentPreviewResponse,
AgentSavedResponse,
ClarificationNeededResponse,
ClarifyingQuestion,
ErrorResponse,
ToolResponseBase,
)
logger = logging.getLogger(__name__)
# Maximum retries for patch generation with validation feedback
MAX_GENERATION_RETRIES = 2
class EditAgentTool(BaseTool):
"""Tool for editing existing agents using natural language."""
@property
def name(self) -> str:
return "edit_agent"
@property
def description(self) -> str:
return (
"Edit an existing agent from the user's library using natural language. "
"Generates a patch to update the agent while preserving unchanged parts."
)
@property
def requires_auth(self) -> bool:
return True
@property
def parameters(self) -> dict[str, Any]:
return {
"type": "object",
"properties": {
"agent_id": {
"type": "string",
"description": (
"The ID of the agent to edit. "
"Can be a graph ID or library agent ID."
),
},
"changes": {
"type": "string",
"description": (
"Natural language description of what changes to make. "
"Be specific about what to add, remove, or modify."
),
},
"context": {
"type": "string",
"description": (
"Additional context or answers to previous clarifying questions."
),
},
"save": {
"type": "boolean",
"description": (
"Whether to save the changes. "
"Default is true. Set to false for preview only."
),
"default": True,
},
},
"required": ["agent_id", "changes"],
}
async def _execute(
self,
user_id: str | None,
session: ChatSession,
**kwargs,
) -> ToolResponseBase:
"""Execute the edit_agent tool.
Flow:
1. Fetch the current agent
2. Generate a patch based on the requested changes
3. Apply the patch to create an updated agent
4. Validate the updated agent, retrying patch generation with error feedback
5. Preview or save based on the save parameter
"""
agent_id = kwargs.get("agent_id", "").strip()
changes = kwargs.get("changes", "").strip()
context = kwargs.get("context", "")
save = kwargs.get("save", True)
session_id = session.session_id if session else None
if not agent_id:
return ErrorResponse(
message="Please provide the agent ID to edit.",
error="Missing agent_id parameter",
session_id=session_id,
)
if not changes:
return ErrorResponse(
message="Please describe what changes you want to make.",
error="Missing changes parameter",
session_id=session_id,
)
# Step 1: Fetch current agent
current_agent = await get_agent_as_json(agent_id, user_id)
if current_agent is None:
return ErrorResponse(
message=f"Could not find agent with ID '{agent_id}' in your library.",
error="agent_not_found",
session_id=session_id,
)
# Build the update request with context
update_request = changes
if context:
update_request = f"{changes}\n\nAdditional context:\n{context}"
# Step 2: Generate patch with retry on validation failure
blocks_info = get_blocks_info()
updated_agent = None
validation_errors = None
intent = "Applied requested changes"
for attempt in range(MAX_GENERATION_RETRIES + 1):
# Generate patch (include validation errors from previous attempt)
try:
if attempt == 0:
patch_result = await generate_agent_patch(
update_request, current_agent
)
else:
# Retry with validation error feedback
logger.info(
f"Retry {attempt}/{MAX_GENERATION_RETRIES} with validation feedback"
)
retry_request = (
f"{update_request}\n\n"
f"IMPORTANT: The previous edit had validation errors. "
f"Please fix these issues:\n{validation_errors}"
)
patch_result = await generate_agent_patch(
retry_request, current_agent
)
except ValueError as e:
# Handle missing API key or configuration errors
return ErrorResponse(
message=f"Agent generation is not configured: {str(e)}",
error="configuration_error",
session_id=session_id,
)
if patch_result is None:
if attempt == MAX_GENERATION_RETRIES:
return ErrorResponse(
message="Failed to generate changes. Please try rephrasing.",
error="Patch generation failed",
session_id=session_id,
)
continue
# Check if LLM returned clarifying questions
if patch_result.get("type") == "clarifying_questions":
questions = patch_result.get("questions", [])
return ClarificationNeededResponse(
message=(
"I need some more information about the changes. "
"Please answer the following questions:"
),
questions=[
ClarifyingQuestion(
question=q.get("question", ""),
keyword=q.get("keyword", ""),
example=q.get("example"),
)
for q in questions
],
session_id=session_id,
)
# Step 3: Apply patch and fixes
try:
updated_agent = apply_agent_patch(current_agent, patch_result)
updated_agent = apply_all_fixes(updated_agent, blocks_info)
except Exception as e:
if attempt == MAX_GENERATION_RETRIES:
return ErrorResponse(
message=f"Failed to apply changes: {str(e)}",
error="patch_apply_failed",
details={"exception": str(e)},
session_id=session_id,
)
validation_errors = str(e)
continue
# Step 4: Validate the updated agent
is_valid, validation_errors = validate_agent(updated_agent, blocks_info)
if is_valid:
logger.info(f"Agent edited successfully on attempt {attempt + 1}")
intent = patch_result.get("intent", "Applied requested changes")
break
logger.warning(
f"Validation failed on attempt {attempt + 1}: {validation_errors}"
)
if attempt == MAX_GENERATION_RETRIES:
# Return error with validation details
return ErrorResponse(
message=(
f"Updated agent has validation errors after "
f"{MAX_GENERATION_RETRIES + 1} attempts. "
f"Please try rephrasing your request or simplify the changes."
),
error="validation_failed",
details={"validation_errors": validation_errors},
session_id=session_id,
)
# At this point, updated_agent is guaranteed to be set (we return on all failure paths)
assert updated_agent is not None
agent_name = updated_agent.get("name", "Updated Agent")
agent_description = updated_agent.get("description", "")
node_count = len(updated_agent.get("nodes", []))
link_count = len(updated_agent.get("links", []))
# Step 5: Preview or save
if not save:
return AgentPreviewResponse(
message=(
f"I've updated the agent. Changes: {intent}. "
f"The agent now has {node_count} blocks. "
f"Review it and call edit_agent with save=true to save the changes."
),
agent_json=updated_agent,
agent_name=agent_name,
description=agent_description,
node_count=node_count,
link_count=link_count,
session_id=session_id,
)
# Save to library (creates a new version)
if not user_id:
return ErrorResponse(
message="You must be logged in to save agents.",
error="auth_required",
session_id=session_id,
)
try:
created_graph, library_agent = await save_agent_to_library(
updated_agent, user_id, is_update=True
)
return AgentSavedResponse(
message=(
f"Updated agent '{created_graph.name}' has been saved to your library! "
f"Changes: {intent}"
),
agent_id=created_graph.id,
agent_name=created_graph.name,
library_agent_id=library_agent.id,
library_agent_link=f"/library/{library_agent.id}",
agent_page_link=f"/build?flowID={created_graph.id}",
session_id=session_id,
)
except Exception as e:
return ErrorResponse(
message=f"Failed to save the updated agent: {str(e)}",
error="save_failed",
details={"exception": str(e)},
session_id=session_id,
)

View File

@@ -3,17 +3,18 @@
 import logging
 from typing import Any
-from backend.server.v2.chat.model import ChatSession
-from backend.server.v2.chat.tools.base import BaseTool
-from backend.server.v2.chat.tools.models import (
+from backend.api.features.chat.model import ChatSession
+from backend.api.features.store import db as store_db
+from backend.util.exceptions import DatabaseError, NotFoundError
+from .base import BaseTool
+from .models import (
     AgentCarouselResponse,
     AgentInfo,
     ErrorResponse,
     NoResultsResponse,
     ToolResponseBase,
 )
-from backend.server.v2.store import db as store_db
-from backend.util.exceptions import DatabaseError, NotFoundError
 
 logger = logging.getLogger(__name__)

View File

@@ -0,0 +1,253 @@
"""Tool for searching available blocks using hybrid search."""
import logging
from typing import Any
from backend.api.features.chat.model import ChatSession
from backend.blocks import load_all_blocks
from .base import BaseTool
from .models import (
BlockInfoSummary,
BlockListResponse,
ErrorResponse,
NoResultsResponse,
ToolResponseBase,
)
from .search_blocks import get_block_search_index
logger = logging.getLogger(__name__)
class FindBlockTool(BaseTool):
"""Tool for searching available blocks."""
@property
def name(self) -> str:
return "find_block"
@property
def description(self) -> str:
return (
"Search for available blocks by name or description. "
"Blocks are reusable components that perform specific tasks like "
"sending emails, making API calls, processing text, etc. "
"Use this to find blocks that can be executed directly."
)
@property
def parameters(self) -> dict[str, Any]:
return {
"type": "object",
"properties": {
"query": {
"type": "string",
"description": (
"Search query to find blocks by name or description. "
"Use keywords like 'email', 'http', 'text', 'ai', etc."
),
},
},
"required": ["query"],
}
@property
def requires_auth(self) -> bool:
return True
def _matches_query(self, block, query: str) -> tuple[int, bool]:
"""
Check if a block matches the query and return a priority score.
Returns (priority, matches) where:
- priority 0: exact name match
- priority 1: name contains query
- priority 2: description contains query
- priority 3: category contains query
- priority 4: no match (matches is False)
"""
query_lower = query.lower()
name_lower = block.name.lower()
desc_lower = block.description.lower()
# Exact name match
if query_lower == name_lower:
return 0, True
# Name contains query
if query_lower in name_lower:
return 1, True
# Description contains query
if query_lower in desc_lower:
return 2, True
# Category contains query
for category in block.categories:
if query_lower in category.name.lower():
return 3, True
return 4, False
async def _execute(
self,
user_id: str | None,
session: ChatSession,
**kwargs,
) -> ToolResponseBase:
"""Search for blocks matching the query.
Args:
user_id: User ID (required)
session: Chat session
query: Search query
Returns:
BlockListResponse: List of matching blocks
NoResultsResponse: No blocks found
ErrorResponse: Error message
"""
query = kwargs.get("query", "").strip()
session_id = session.session_id
if not query:
return ErrorResponse(
message="Please provide a search query",
session_id=session_id,
)
try:
# Try hybrid search first
search_results = self._hybrid_search(query)
if search_results is not None:
# Hybrid search succeeded
if not search_results:
return NoResultsResponse(
message=f"No blocks found matching '{query}'",
session_id=session_id,
suggestions=[
"Try more general terms",
"Search by category: ai, text, social, search, etc.",
"Check block names like 'SendEmail', 'HttpRequest', etc.",
],
)
# Get full block info for each result
all_blocks = load_all_blocks()
blocks = []
for result in search_results:
block_cls = all_blocks.get(result.block_id)
if block_cls:
block = block_cls()
blocks.append(
BlockInfoSummary(
id=block.id,
name=block.name,
description=block.description,
categories=[cat.name for cat in block.categories],
input_schema=block.input_schema.jsonschema(),
output_schema=block.output_schema.jsonschema(),
)
)
return BlockListResponse(
message=(
f"Found {len(blocks)} block{'s' if len(blocks) != 1 else ''} "
f"matching '{query}'. Use run_block to execute a block with "
"the required inputs."
),
blocks=blocks,
count=len(blocks),
query=query,
session_id=session_id,
)
# Fallback to simple search if hybrid search failed
return self._simple_search(query, session_id)
except Exception as e:
logger.error(f"Error searching blocks: {e}", exc_info=True)
return ErrorResponse(
message="Failed to search blocks. Please try again.",
error=str(e),
session_id=session_id,
)
def _hybrid_search(self, query: str) -> list | None:
"""
Perform hybrid search using embeddings and BM25.
Returns:
List of BlockSearchResult if successful, None if index not available
"""
try:
index = get_block_search_index()
if not index.load():
logger.info(
"Block search index not available, falling back to simple search"
)
return None
results = index.search(query, top_k=10)
logger.info(f"Hybrid search found {len(results)} blocks for: {query}")
return results
except Exception as e:
logger.warning(f"Hybrid search failed, falling back to simple: {e}")
return None
def _simple_search(self, query: str, session_id: str) -> ToolResponseBase:
"""Fallback simple search using substring matching."""
all_blocks = load_all_blocks()
logger.info(f"Simple searching {len(all_blocks)} blocks for: {query}")
# Find matching blocks with priority scores
matches: list[tuple[int, Any]] = []
for block_id, block_cls in all_blocks.items():
block = block_cls()
priority, is_match = self._matches_query(block, query)
if is_match:
matches.append((priority, block))
# Sort by priority (lower is better)
matches.sort(key=lambda x: x[0])
# Take top 10 results
top_matches = [block for _, block in matches[:10]]
if not top_matches:
return NoResultsResponse(
message=f"No blocks found matching '{query}'",
session_id=session_id,
suggestions=[
"Try more general terms",
"Search by category: ai, text, social, search, etc.",
"Check block names like 'SendEmail', 'HttpRequest', etc.",
],
)
# Build response
blocks = []
for block in top_matches:
blocks.append(
BlockInfoSummary(
id=block.id,
name=block.name,
description=block.description,
categories=[cat.name for cat in block.categories],
input_schema=block.input_schema.jsonschema(),
output_schema=block.output_schema.jsonschema(),
)
)
return BlockListResponse(
message=(
f"Found {len(blocks)} block{'s' if len(blocks) != 1 else ''} "
f"matching '{query}'. Use run_block to execute a block with "
"the required inputs."
),
blocks=blocks,
count=len(blocks),
query=query,
session_id=session_id,
)

View File

@@ -0,0 +1,157 @@
"""Tool for searching agents in the user's library."""
import logging
from typing import Any
from backend.api.features.chat.model import ChatSession
from backend.api.features.library import db as library_db
from backend.util.exceptions import DatabaseError
from .base import BaseTool
from .models import (
AgentCarouselResponse,
AgentInfo,
ErrorResponse,
NoResultsResponse,
ToolResponseBase,
)
logger = logging.getLogger(__name__)
class FindLibraryAgentTool(BaseTool):
"""Tool for searching agents in the user's library."""
@property
def name(self) -> str:
return "find_library_agent"
@property
def description(self) -> str:
return (
"Search for agents in the user's library. Use this to find agents "
"the user has already added to their library, including agents they "
"created or added from the marketplace."
)
@property
def parameters(self) -> dict[str, Any]:
return {
"type": "object",
"properties": {
"query": {
"type": "string",
"description": (
"Search query to find agents by name or description. "
"Use keywords for best results."
),
},
},
"required": ["query"],
}
@property
def requires_auth(self) -> bool:
return True
async def _execute(
self,
user_id: str | None,
session: ChatSession,
**kwargs,
) -> ToolResponseBase:
"""Search for agents in the user's library.
Args:
user_id: User ID (required)
session: Chat session
query: Search query
Returns:
AgentCarouselResponse: List of agents found in the library
NoResultsResponse: No agents found
ErrorResponse: Error message
"""
query = kwargs.get("query", "").strip()
session_id = session.session_id
if not query:
return ErrorResponse(
message="Please provide a search query",
session_id=session_id,
)
if not user_id:
return ErrorResponse(
message="User authentication required to search library",
session_id=session_id,
)
agents = []
try:
logger.info(f"Searching user library for: {query}")
library_results = await library_db.list_library_agents(
user_id=user_id,
search_term=query,
page_size=10,
)
logger.info(
f"Find library agents tool found {len(library_results.agents)} agents"
)
for agent in library_results.agents:
agents.append(
AgentInfo(
id=agent.id,
name=agent.name,
description=agent.description or "",
source="library",
in_library=True,
creator=agent.creator_name,
status=agent.status.value,
can_access_graph=agent.can_access_graph,
has_external_trigger=agent.has_external_trigger,
new_output=agent.new_output,
graph_id=agent.graph_id,
),
)
except DatabaseError as e:
logger.error(f"Error searching library agents: {e}", exc_info=True)
return ErrorResponse(
message="Failed to search library. Please try again.",
error=str(e),
session_id=session_id,
)
if not agents:
return NoResultsResponse(
message=(
f"No agents found matching '{query}' in your library. "
"Try different keywords or use find_agent to search the marketplace."
),
session_id=session_id,
suggestions=[
"Try more general terms",
"Use find_agent to search the marketplace",
"Check your library at /library",
],
)
title = (
f"Found {len(agents)} agent{'s' if len(agents) != 1 else ''} "
f"in your library for '{query}'"
)
return AgentCarouselResponse(
message=(
"Found agents in the user's library. You can provide a link to "
"view an agent at: /library/agents/{agent_id}. "
"Use agent_output to get execution results, or run_agent to execute."
),
title=title,
agents=agents,
count=len(agents),
session_id=session_id,
)

View File

@@ -0,0 +1,483 @@
#!/usr/bin/env python3
"""
Block Indexer for Hybrid Search
Creates a hybrid search index from blocks:
- OpenAI embeddings (text-embedding-3-small)
- BM25 index for lexical search
- Name index for title matching boost
Supports incremental updates by tracking content hashes.
Usage:
python -m backend.server.v2.chat.tools.index_blocks [--force]
"""
import argparse
import base64
import hashlib
import json
import logging
import os
import re
import sys
from collections import defaultdict
from datetime import datetime, timezone
from pathlib import Path
from typing import Any
import numpy as np
logger = logging.getLogger(__name__)
# Check for OpenAI availability
try:
import openai # noqa: F401
HAS_OPENAI = True
except ImportError:
HAS_OPENAI = False
print("Warning: openai not installed. Run: pip install openai")
# Default embedding model (OpenAI)
DEFAULT_EMBEDDING_MODEL = "text-embedding-3-small"
DEFAULT_EMBEDDING_DIM = 1536
# Output path (relative to this file)
INDEX_PATH = Path(__file__).parent / "blocks_index.json"
# Stopwords for tokenization
STOPWORDS = {
"the",
"a",
"an",
"is",
"are",
"was",
"were",
"be",
"been",
"being",
"have",
"has",
"had",
"do",
"does",
"did",
"will",
"would",
"could",
"should",
"may",
"might",
"must",
"shall",
"can",
"need",
"dare",
"ought",
"used",
"to",
"of",
"in",
"for",
"on",
"with",
"at",
"by",
"from",
"as",
"into",
"through",
"during",
"before",
"after",
"above",
"below",
"between",
"under",
"again",
"further",
"then",
"once",
"and",
"but",
"or",
"nor",
"so",
"yet",
"both",
"either",
"neither",
"not",
"only",
"own",
"same",
"than",
"too",
"very",
"just",
"also",
"now",
"here",
"there",
"when",
"where",
"why",
"how",
"all",
"each",
"every",
"few",
"more",
"most",
"other",
"some",
"such",
"no",
"any",
"this",
"that",
"these",
"those",
"it",
"its",
"block", # Too common in block context
}
def tokenize(text: str) -> list[str]:
"""Simple tokenizer for BM25."""
# Remove code blocks if any
text = re.sub(r"```[\s\S]*?```", "", text)
text = re.sub(r"`[^`]+`", "", text)
# Split camelCase first: lowercasing before this sub would make it a no-op
text = re.sub(r"([a-z])([A-Z])", r"\1 \2", text)
text = text.lower()
# Extract words
words = re.findall(r"\b[a-z][a-z0-9_-]*\b", text)
# Remove very short words and stopwords
return [w for w in words if len(w) > 2 and w not in STOPWORDS]
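# Tokenizer sketch (with the split-before-lower order above; illustrative only):
#   tokenize("GetCurrentTimeBlock") -> ["get", "current", "time"]   ("block" is a stopword)
#   tokenize("the HTTP request")    -> ["http", "request"]          (stopwords/short words dropped)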
def build_searchable_text(block: Any) -> str:
"""Build searchable text from block attributes."""
parts = []
# Block name (split camelCase for better tokenization)
name = block.name
# Split camelCase: GetCurrentTimeBlock -> Get Current Time Block
name_split = re.sub(r"([a-z])([A-Z])", r"\1 \2", name)
parts.append(name_split)
# Description
if block.description:
parts.append(block.description)
# Categories
for category in block.categories:
parts.append(category.name)
# Input schema field names and descriptions
try:
input_schema = block.input_schema.jsonschema()
if "properties" in input_schema:
for field_name, field_info in input_schema["properties"].items():
parts.append(field_name)
if "description" in field_info:
parts.append(field_info["description"])
except Exception:
pass
# Output schema field names
try:
output_schema = block.output_schema.jsonschema()
if "properties" in output_schema:
for field_name in output_schema["properties"]:
parts.append(field_name)
except Exception:
pass
return " ".join(parts)
def compute_content_hash(text: str) -> str:
"""Compute MD5 hash of text for change detection."""
return hashlib.md5(text.encode()).hexdigest()
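# Change-detection sketch: unchanged searchable text keeps the same hash, so its
# embedding is reused; any edit changes the MD5 and triggers re-embedding.
#   compute_content_hash("abc") == "900150983cd24fb0d6963f7d28e17f72"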
def load_existing_index(index_path: Path) -> dict[str, Any] | None:
"""Load existing index if present."""
if not index_path.exists():
return None
try:
with open(index_path, "r", encoding="utf-8") as f:
return json.load(f)
except Exception as e:
logger.warning(f"Failed to load existing index: {e}")
return None
def create_embeddings(
texts: list[str],
model_name: str = DEFAULT_EMBEDDING_MODEL,
batch_size: int = 100,
) -> np.ndarray:
"""Create embeddings using OpenAI API."""
if not HAS_OPENAI:
raise RuntimeError("openai not installed. Run: pip install openai")
# Import here to satisfy type checker
from openai import OpenAI
# Check for API key
api_key = os.getenv("OPENAI_API_KEY")
if not api_key:
raise RuntimeError("OPENAI_API_KEY environment variable not set")
client = OpenAI(api_key=api_key)
embeddings = []
print(f"Creating embeddings for {len(texts)} texts using {model_name}...")
for i in range(0, len(texts), batch_size):
batch = texts[i : i + batch_size]
# Truncate texts to max token limit (8191 tokens for text-embedding-3-small)
# Roughly 4 chars per token, so ~32000 chars max
batch = [text[:32000] for text in batch]
response = client.embeddings.create(
model=model_name,
input=batch,
)
for embedding_data in response.data:
embeddings.append(embedding_data.embedding)
print(f" Processed {min(i + batch_size, len(texts))}/{len(texts)} texts")
return np.array(embeddings, dtype=np.float32)
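# Usage sketch (assumes OPENAI_API_KEY is set; each call costs API credits):
#   embs = create_embeddings(["Send an email via SMTP", "Make an HTTP request"])
#   embs.shape -> (2, 1536) for text-embedding-3-small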
def build_bm25_data(
blocks_data: list[dict[str, Any]],
) -> dict[str, Any]:
"""Build BM25 metadata from block data."""
# Tokenize all searchable texts
tokenized_docs = []
for block in blocks_data:
tokens = tokenize(block["searchable_text"])
tokenized_docs.append(tokens)
# Calculate document frequencies
doc_freq: dict[str, int] = {}
for tokens in tokenized_docs:
seen = set()
for token in tokens:
if token not in seen:
doc_freq[token] = doc_freq.get(token, 0) + 1
seen.add(token)
n_docs = len(tokenized_docs)
doc_lens = [len(d) for d in tokenized_docs]
avgdl = sum(doc_lens) / max(n_docs, 1)
return {
"n_docs": n_docs,
"avgdl": avgdl,
"df": doc_freq,
"doc_lens": doc_lens,
}
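# Worked example (hypothetical two-block corpus): searchable texts
# "send email smtp" and "http request" tokenize to ["send", "email", "smtp"]
# and ["http", "request"], giving n_docs=2, doc_lens=[3, 2], avgdl=2.5,
# and df == 1 for every term (each appears in exactly one document).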
def build_name_index(
blocks_data: list[dict[str, Any]],
) -> dict[str, list[list[int | float]]]:
"""Build inverted index for name search boost."""
index: dict[str, list[list[int | float]]] = defaultdict(list)
for idx, block in enumerate(blocks_data):
# Tokenize block name
name_tokens = tokenize(block["name"])
seen = set()
for i, token in enumerate(name_tokens):
if token in seen:
continue
seen.add(token)
# Score: first token gets higher weight
score = 1.5 if i == 0 else 1.0
index[token].append([idx, score])
return dict(index)
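# Name-index sketch: for a block named "SendEmailBlock" at position 0,
#   tokenize -> ["send", "email"]  ("block" is a stopword), so
#   index == {"send": [[0, 1.5]], "email": [[0, 1.0]]}  (first token boosted)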
def build_block_index(
force_rebuild: bool = False,
output_path: Path = INDEX_PATH,
) -> dict[str, Any]:
"""
Build the block search index.
Args:
force_rebuild: If True, rebuild all embeddings even if unchanged
output_path: Path to save the index
Returns:
The generated index dictionary
"""
# Import here to avoid circular imports
from backend.blocks import load_all_blocks
print("Loading all blocks...")
all_blocks = load_all_blocks()
print(f"Found {len(all_blocks)} blocks")
# Load existing index for incremental updates
existing_index = None if force_rebuild else load_existing_index(output_path)
existing_blocks: dict[str, dict[str, Any]] = {}
if existing_index:
print(
f"Loaded existing index with {len(existing_index.get('blocks', []))} blocks"
)
for block in existing_index.get("blocks", []):
existing_blocks[block["id"]] = block
# Process each block
blocks_data: list[dict[str, Any]] = []
blocks_needing_embedding: list[tuple[int, str]] = [] # (index, searchable_text)
for block_id, block_cls in all_blocks.items():
try:
block = block_cls()
# Skip disabled blocks
if block.disabled:
continue
searchable_text = build_searchable_text(block)
content_hash = compute_content_hash(searchable_text)
block_data = {
"id": block.id,
"name": block.name,
"description": block.description,
"categories": [cat.name for cat in block.categories],
"searchable_text": searchable_text,
"content_hash": content_hash,
"emb": None, # Will be filled later
}
# Check if we can reuse existing embedding
if (
block.id in existing_blocks
and existing_blocks[block.id].get("content_hash") == content_hash
and existing_blocks[block.id].get("emb")
):
# Reuse existing embedding
block_data["emb"] = existing_blocks[block.id]["emb"]
else:
# Need new embedding
blocks_needing_embedding.append((len(blocks_data), searchable_text))
blocks_data.append(block_data)
except Exception as e:
logger.warning(f"Failed to process block {block_id}: {e}")
continue
print(f"Processed {len(blocks_data)} blocks")
print(f"Blocks needing new embeddings: {len(blocks_needing_embedding)}")
# Create embeddings for new/changed blocks
if blocks_needing_embedding and HAS_OPENAI:
texts_to_embed = [text for _, text in blocks_needing_embedding]
try:
embeddings = create_embeddings(texts_to_embed)
# Assign embeddings to blocks
for i, (block_idx, _) in enumerate(blocks_needing_embedding):
emb = embeddings[i].astype(np.float32)
# Encode as base64
blocks_data[block_idx]["emb"] = base64.b64encode(emb.tobytes()).decode(
"ascii"
)
except Exception as e:
print(f"Warning: Failed to create embeddings: {e}")
elif blocks_needing_embedding:
print(
"Warning: Cannot create embeddings (openai not installed or OPENAI_API_KEY not set)"
)
# Build BM25 data
print("Building BM25 index...")
bm25_data = build_bm25_data(blocks_data)
# Build name index
print("Building name index...")
name_index = build_name_index(blocks_data)
# Build final index
index = {
"version": "1.0.0",
"embedding_model": DEFAULT_EMBEDDING_MODEL,
"embedding_dim": DEFAULT_EMBEDDING_DIM,
"generated_at": datetime.now(timezone.utc).isoformat(),
"blocks": blocks_data,
"bm25": bm25_data,
"name_index": name_index,
}
# Save index
print(f"Saving index to {output_path}...")
with open(output_path, "w", encoding="utf-8") as f:
json.dump(index, f, separators=(",", ":"))
size_kb = output_path.stat().st_size / 1024
print(f"Index saved ({size_kb:.1f} KB)")
# Print statistics
print("\nIndex Statistics:")
print(f" Blocks indexed: {len(blocks_data)}")
print(f" BM25 vocabulary size: {len(bm25_data['df'])}")
print(f" Name index terms: {len(name_index)}")
print(f" Embeddings: {'Yes' if any(b.get('emb') for b in blocks_data) else 'No'}")
return index
def main():
parser = argparse.ArgumentParser(description="Build hybrid search index for blocks")
parser.add_argument(
"--force",
action="store_true",
help="Force rebuild all embeddings even if unchanged",
)
parser.add_argument(
"--output",
type=Path,
default=INDEX_PATH,
help=f"Output index file path (default: {INDEX_PATH})",
)
args = parser.parse_args()
try:
build_block_index(
force_rebuild=args.force,
output_path=args.output,
)
except Exception as e:
print(f"Error building index: {e}")
import traceback
traceback.print_exc()
sys.exit(1)
if __name__ == "__main__":
main()

View File

@@ -1,5 +1,6 @@
"""Pydantic models for tool responses."""
from datetime import datetime
from enum import Enum
from typing import Any
@@ -19,6 +20,15 @@ class ResponseType(str, Enum):
ERROR = "error"
NO_RESULTS = "no_results"
SUCCESS = "success"
DOC_SEARCH_RESULTS = "doc_search_results"
AGENT_OUTPUT = "agent_output"
BLOCK_LIST = "block_list"
BLOCK_OUTPUT = "block_output"
UNDERSTANDING_UPDATED = "understanding_updated"
# Agent generation responses
AGENT_PREVIEW = "agent_preview"
AGENT_SAVED = "agent_saved"
CLARIFICATION_NEEDED = "clarification_needed"
# Base response model
@@ -173,3 +183,128 @@ class ErrorResponse(ToolResponseBase):
type: ResponseType = ResponseType.ERROR
error: str | None = None
details: dict[str, Any] | None = None
# Documentation search models
class DocSearchResult(BaseModel):
"""A single documentation search result."""
title: str
path: str
section: str
snippet: str # Short excerpt for UI display
content: str # Full text content for LLM to read and understand
score: float
doc_url: str | None = None
class DocSearchResultsResponse(ToolResponseBase):
"""Response for search_docs tool."""
type: ResponseType = ResponseType.DOC_SEARCH_RESULTS
results: list[DocSearchResult]
count: int
query: str
# Agent output models
class ExecutionOutputInfo(BaseModel):
"""Summary of a single execution's outputs."""
execution_id: str
status: str
started_at: datetime | None = None
ended_at: datetime | None = None
outputs: dict[str, list[Any]]
inputs_summary: dict[str, Any] | None = None
class AgentOutputResponse(ToolResponseBase):
"""Response for agent_output tool."""
type: ResponseType = ResponseType.AGENT_OUTPUT
agent_name: str
agent_id: str
library_agent_id: str | None = None
library_agent_link: str | None = None
execution: ExecutionOutputInfo | None = None
available_executions: list[dict[str, Any]] | None = None
total_executions: int = 0
# Block models
class BlockInfoSummary(BaseModel):
"""Summary of a block for search results."""
id: str
name: str
description: str
categories: list[str]
input_schema: dict[str, Any]
output_schema: dict[str, Any]
class BlockListResponse(ToolResponseBase):
"""Response for find_block tool."""
type: ResponseType = ResponseType.BLOCK_LIST
blocks: list[BlockInfoSummary]
count: int
query: str
class BlockOutputResponse(ToolResponseBase):
"""Response for run_block tool."""
type: ResponseType = ResponseType.BLOCK_OUTPUT
block_id: str
block_name: str
outputs: dict[str, list[Any]]
success: bool = True
# Business understanding models
class UnderstandingUpdatedResponse(ToolResponseBase):
"""Response for add_understanding tool."""
type: ResponseType = ResponseType.UNDERSTANDING_UPDATED
updated_fields: list[str] = Field(default_factory=list)
current_understanding: dict[str, Any] = Field(default_factory=dict)
# Agent generation models
class ClarifyingQuestion(BaseModel):
"""A question that needs user clarification."""
question: str
keyword: str
example: str | None = None
class AgentPreviewResponse(ToolResponseBase):
"""Response for previewing a generated agent before saving."""
type: ResponseType = ResponseType.AGENT_PREVIEW
agent_json: dict[str, Any]
agent_name: str
description: str
node_count: int
link_count: int = 0
class AgentSavedResponse(ToolResponseBase):
"""Response when an agent is saved to the library."""
type: ResponseType = ResponseType.AGENT_SAVED
agent_id: str
agent_name: str
library_agent_id: str
library_agent_link: str
agent_page_link: str # Link to the agent builder/editor page
class ClarificationNeededResponse(ToolResponseBase):
"""Response when the LLM needs more information from the user."""
type: ResponseType = ResponseType.CLARIFICATION_NEEDED
questions: list[ClarifyingQuestion] = Field(default_factory=list)

View File

@@ -5,14 +5,22 @@ from typing import Any
from pydantic import BaseModel, Field, field_validator
from backend.api.features.chat.config import ChatConfig
from backend.api.features.chat.model import ChatSession
from backend.api.features.library import db as library_db
from backend.data.graph import GraphModel
from backend.data.model import CredentialsMetaInput
from backend.data.user import get_user_by_id
from backend.executor import utils as execution_utils
from backend.server.v2.chat.config import ChatConfig
from backend.server.v2.chat.model import ChatSession
from backend.server.v2.chat.tools.base import BaseTool
from backend.server.v2.chat.tools.models import (
from backend.util.clients import get_scheduler_client
from backend.util.exceptions import DatabaseError, NotFoundError
from backend.util.timezone_utils import (
convert_utc_time_to_user_timezone,
get_user_timezone_or_utc,
)
from .base import BaseTool
from .models import (
AgentDetails,
AgentDetailsResponse,
ErrorResponse,
@@ -23,19 +31,13 @@ from backend.server.v2.chat.tools.models import (
ToolResponseBase,
UserReadiness,
)
from backend.server.v2.chat.tools.utils import (
from .utils import (
check_user_has_required_credentials,
extract_credentials_from_schema,
fetch_graph_from_store_slug,
get_or_create_library_agent,
match_user_credentials_to_graph,
)
from backend.util.clients import get_scheduler_client
from backend.util.exceptions import DatabaseError, NotFoundError
from backend.util.timezone_utils import (
convert_utc_time_to_user_timezone,
get_user_timezone_or_utc,
)
logger = logging.getLogger(__name__)
config = ChatConfig()
@@ -56,6 +58,7 @@ class RunAgentInput(BaseModel):
"""Input parameters for the run_agent tool."""
username_agent_slug: str = ""
library_agent_id: str = ""
inputs: dict[str, Any] = Field(default_factory=dict)
use_defaults: bool = False
schedule_name: str = ""
@@ -63,7 +66,12 @@ class RunAgentInput(BaseModel):
timezone: str = "UTC"
@field_validator(
"username_agent_slug", "schedule_name", "cron", "timezone", mode="before"
"username_agent_slug",
"library_agent_id",
"schedule_name",
"cron",
"timezone",
mode="before",
)
@classmethod
def strip_strings(cls, v: Any) -> Any:
@@ -89,7 +97,7 @@ class RunAgentTool(BaseTool):
@property
def description(self) -> str:
return """Run or schedule an agent from the marketplace.
return """Run or schedule an agent from the marketplace or user's library.
The tool automatically handles the setup flow:
- Returns missing inputs if required fields are not provided
@@ -97,6 +105,10 @@ class RunAgentTool(BaseTool):
- Executes immediately if all requirements are met
- Schedules execution if cron expression is provided
Identify the agent using either:
- username_agent_slug: Marketplace format 'username/agent-name'
- library_agent_id: ID of an agent in the user's library
For scheduled execution, provide: schedule_name, cron, and optionally timezone."""
@property
@@ -108,6 +120,10 @@ class RunAgentTool(BaseTool):
"type": "string",
"description": "Agent identifier in format 'username/agent-name'",
},
"library_agent_id": {
"type": "string",
"description": "Library agent ID from user's library",
},
"inputs": {
"type": "object",
"description": "Input values for the agent",
@@ -130,7 +146,7 @@ class RunAgentTool(BaseTool):
"description": "IANA timezone for schedule (default: UTC)",
},
},
"required": ["username_agent_slug"],
"required": [],
}
@property
@@ -148,10 +164,16 @@ class RunAgentTool(BaseTool):
params = RunAgentInput(**kwargs)
session_id = session.session_id
# Validate agent slug format
if not params.username_agent_slug or "/" not in params.username_agent_slug:
# Validate at least one identifier is provided
has_slug = params.username_agent_slug and "/" in params.username_agent_slug
has_library_id = bool(params.library_agent_id)
if not has_slug and not has_library_id:
return ErrorResponse(
message="Please provide an agent slug in format 'username/agent-name'",
message=(
"Please provide either a username_agent_slug "
"(format 'username/agent-name') or a library_agent_id"
),
session_id=session_id,
)
@@ -166,13 +188,41 @@ class RunAgentTool(BaseTool):
is_schedule = bool(params.schedule_name or params.cron)
try:
# Step 1: Fetch agent details (always happens first)
username, agent_name = params.username_agent_slug.split("/", 1)
graph, store_agent = await fetch_graph_from_store_slug(username, agent_name)
# Step 1: Fetch agent details
graph: GraphModel | None = None
library_agent = None
# Priority: library_agent_id if provided
if has_library_id:
library_agent = await library_db.get_library_agent(
params.library_agent_id, user_id
)
if not library_agent:
return ErrorResponse(
message=f"Library agent '{params.library_agent_id}' not found",
session_id=session_id,
)
# Get the graph from the library agent
from backend.data.graph import get_graph
graph = await get_graph(
library_agent.graph_id,
library_agent.graph_version,
user_id=user_id,
)
else:
# Fetch from marketplace slug
username, agent_name = params.username_agent_slug.split("/", 1)
graph, _ = await fetch_graph_from_store_slug(username, agent_name)
if not graph:
identifier = (
params.library_agent_id
if has_library_id
else params.username_agent_slug
)
return ErrorResponse(
message=f"Agent '{params.username_agent_slug}' not found in marketplace",
message=f"Agent '{identifier}' not found",
session_id=session_id,
)

View File

@@ -3,13 +3,13 @@ import uuid
import orjson
import pytest
from backend.server.v2.chat.tools._test_data import (
from ._test_data import (
make_session,
setup_firecrawl_test_data,
setup_llm_test_data,
setup_test_data,
)
from backend.server.v2.chat.tools.run_agent import RunAgentTool
from .run_agent import RunAgentTool
# This is so the formatter doesn't remove the fixture imports
setup_llm_test_data = setup_llm_test_data

View File

@@ -0,0 +1,287 @@
"""Tool for executing blocks directly."""
import logging
from collections import defaultdict
from typing import Any
from backend.api.features.chat.model import ChatSession
from backend.data.block import get_block
from backend.data.model import CredentialsMetaInput
from backend.integrations.creds_manager import IntegrationCredentialsManager
from backend.util.exceptions import BlockError
from .base import BaseTool
from .models import (
BlockOutputResponse,
ErrorResponse,
SetupInfo,
SetupRequirementsResponse,
ToolResponseBase,
UserReadiness,
)
logger = logging.getLogger(__name__)
class RunBlockTool(BaseTool):
"""Tool for executing a block and returning its outputs."""
@property
def name(self) -> str:
return "run_block"
@property
def description(self) -> str:
return (
"Execute a specific block with the provided input data. "
"Use find_block to discover available blocks and their input schemas. "
"The block will run and return its outputs once complete."
)
@property
def parameters(self) -> dict[str, Any]:
return {
"type": "object",
"properties": {
"block_id": {
"type": "string",
"description": "The UUID of the block to execute",
},
"input_data": {
"type": "object",
"description": (
"Input values for the block. Must match the block's input schema. "
"Check the block's input_schema from find_block for required fields."
),
},
},
"required": ["block_id", "input_data"],
}
@property
def requires_auth(self) -> bool:
return True
async def _check_block_credentials(
self,
user_id: str,
block: Any,
) -> tuple[dict[str, CredentialsMetaInput], list[CredentialsMetaInput]]:
"""
Check if user has required credentials for a block.
Returns:
tuple[matched_credentials, missing_credentials]
"""
matched_credentials: dict[str, CredentialsMetaInput] = {}
missing_credentials: list[CredentialsMetaInput] = []
# Get credential field info from block's input schema
credentials_fields_info = block.input_schema.get_credentials_fields_info()
if not credentials_fields_info:
return matched_credentials, missing_credentials
# Get user's available credentials
creds_manager = IntegrationCredentialsManager()
available_creds = await creds_manager.store.get_all_creds(user_id)
for field_name, field_info in credentials_fields_info.items():
# field_info.provider is a frozenset of acceptable providers
# field_info.supported_types is a frozenset of acceptable types
matching_cred = next(
(
cred
for cred in available_creds
if cred.provider in field_info.provider
and cred.type in field_info.supported_types
),
None,
)
if matching_cred:
matched_credentials[field_name] = CredentialsMetaInput(
id=matching_cred.id,
provider=matching_cred.provider, # type: ignore
type=matching_cred.type,
title=matching_cred.title,
)
else:
# Create a placeholder for the missing credential
provider = next(iter(field_info.provider), "unknown")
cred_type = next(iter(field_info.supported_types), "api_key")
missing_credentials.append(
CredentialsMetaInput(
id=field_name,
provider=provider, # type: ignore
type=cred_type, # type: ignore
title=field_name.replace("_", " ").title(),
)
)
return matched_credentials, missing_credentials
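# Matching-rule sketch (hypothetical data): a field with
#   provider=frozenset({"openai"}), supported_types=frozenset({"api_key"})
# matches the first stored credential where
#   cred.provider in field_info.provider and cred.type in field_info.supported_types
# e.g. a stored ("openai", "api_key") credential -> matched;
#      only ("github", "oauth2") available       -> reported as missing.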
async def _execute(
self,
user_id: str | None,
session: ChatSession,
**kwargs,
) -> ToolResponseBase:
"""Execute a block with the given input data.
Args:
user_id: User ID (required)
session: Chat session
block_id: Block UUID to execute
input_data: Input values for the block
Returns:
BlockOutputResponse: Block execution outputs
SetupRequirementsResponse: Missing credentials
ErrorResponse: Error message
"""
block_id = kwargs.get("block_id", "").strip()
input_data = kwargs.get("input_data", {})
session_id = session.session_id
if not block_id:
return ErrorResponse(
message="Please provide a block_id",
session_id=session_id,
)
if not isinstance(input_data, dict):
return ErrorResponse(
message="input_data must be an object",
session_id=session_id,
)
if not user_id:
return ErrorResponse(
message="Authentication required",
session_id=session_id,
)
# Get the block
block = get_block(block_id)
if not block:
return ErrorResponse(
message=f"Block '{block_id}' not found",
session_id=session_id,
)
logger.info(f"Executing block {block.name} ({block_id}) for user {user_id}")
# Check credentials
creds_manager = IntegrationCredentialsManager()
matched_credentials, missing_credentials = await self._check_block_credentials(
user_id, block
)
if missing_credentials:
# Return setup requirements response with missing credentials
missing_creds_dict = {c.id: c.model_dump() for c in missing_credentials}
return SetupRequirementsResponse(
message=(
f"Block '{block.name}' requires credentials that are not configured. "
"Please set up the required credentials before running this block."
),
session_id=session_id,
setup_info=SetupInfo(
agent_id=block_id,
agent_name=block.name,
user_readiness=UserReadiness(
has_all_credentials=False,
missing_credentials=missing_creds_dict,
ready_to_run=False,
),
requirements={
"credentials": [c.model_dump() for c in missing_credentials],
"inputs": self._get_inputs_list(block),
"execution_modes": ["immediate"],
},
),
graph_id=None,
graph_version=None,
)
try:
# Fetch actual credentials and prepare kwargs for block execution
exec_kwargs: dict[str, Any] = {"user_id": user_id}
for field_name, cred_meta in matched_credentials.items():
# Inject metadata into input_data (for validation)
if field_name not in input_data:
input_data[field_name] = cred_meta.model_dump()
# Fetch actual credentials and pass as kwargs (for execution)
actual_credentials = await creds_manager.get(
user_id, cred_meta.id, lock=False
)
if actual_credentials:
exec_kwargs[field_name] = actual_credentials
else:
return ErrorResponse(
message=f"Failed to retrieve credentials for {field_name}",
session_id=session_id,
)
# Execute the block and collect outputs
outputs: dict[str, list[Any]] = defaultdict(list)
async for output_name, output_data in block.execute(
input_data,
**exec_kwargs,
):
outputs[output_name].append(output_data)
return BlockOutputResponse(
message=f"Block '{block.name}' executed successfully",
block_id=block_id,
block_name=block.name,
outputs=dict(outputs),
success=True,
session_id=session_id,
)
except BlockError as e:
logger.warning(f"Block execution failed: {e}")
return ErrorResponse(
message=f"Block execution failed: {e}",
error=str(e),
session_id=session_id,
)
except Exception as e:
logger.error(f"Unexpected error executing block: {e}", exc_info=True)
return ErrorResponse(
message=f"Failed to execute block: {str(e)}",
error=str(e),
session_id=session_id,
)
def _get_inputs_list(self, block: Any) -> list[dict[str, Any]]:
"""Extract non-credential inputs from block schema."""
inputs_list = []
schema = block.input_schema.jsonschema()
properties = schema.get("properties", {})
required_fields = set(schema.get("required", []))
# Get credential field names to exclude
credentials_fields = set(block.input_schema.get_credentials_fields().keys())
for field_name, field_schema in properties.items():
# Skip credential fields
if field_name in credentials_fields:
continue
inputs_list.append(
{
"name": field_name,
"title": field_schema.get("title", field_name),
"type": field_schema.get("type", "string"),
"description": field_schema.get("description", ""),
"required": field_name in required_fields,
}
)
return inputs_list
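# Extraction sketch (hypothetical schema with no credential fields):
#   {"properties": {"url": {"title": "URL", "type": "string"}}, "required": ["url"]}
# yields
#   [{"name": "url", "title": "URL", "type": "string", "description": "", "required": True}]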

View File

@@ -0,0 +1,460 @@
"""
Block Hybrid Search
Combines multiple ranking signals for block search:
- Semantic search (OpenAI embeddings + cosine similarity)
- Lexical search (BM25)
- Name matching (boost for block name matches)
- Category matching (boost for category matches)
Based on the docs search implementation.
"""
import base64
import json
import logging
import math
import os
import re
from dataclasses import dataclass
from pathlib import Path
from typing import Any, Optional
import numpy as np
logger = logging.getLogger(__name__)
# OpenAI embedding model
EMBEDDING_MODEL = "text-embedding-3-small"
# Path to the JSON index file
INDEX_PATH = Path(__file__).parent / "blocks_index.json"
# Stopwords for tokenization (same as index_blocks.py)
STOPWORDS = {
"the",
"a",
"an",
"is",
"are",
"was",
"were",
"be",
"been",
"being",
"have",
"has",
"had",
"do",
"does",
"did",
"will",
"would",
"could",
"should",
"may",
"might",
"must",
"shall",
"can",
"need",
"dare",
"ought",
"used",
"to",
"of",
"in",
"for",
"on",
"with",
"at",
"by",
"from",
"as",
"into",
"through",
"during",
"before",
"after",
"above",
"below",
"between",
"under",
"again",
"further",
"then",
"once",
"and",
"but",
"or",
"nor",
"so",
"yet",
"both",
"either",
"neither",
"not",
"only",
"own",
"same",
"than",
"too",
"very",
"just",
"also",
"now",
"here",
"there",
"when",
"where",
"why",
"how",
"all",
"each",
"every",
"few",
"more",
"most",
"other",
"some",
"such",
"no",
"any",
"this",
"that",
"these",
"those",
"it",
"its",
"block",
}
def tokenize(text: str) -> list[str]:
"""Simple tokenizer for search (must match index_blocks.tokenize)."""
# Remove code blocks if any
text = re.sub(r"```[\s\S]*?```", "", text)
text = re.sub(r"`[^`]+`", "", text)
# Split camelCase first: lowercasing before this sub would make it a no-op
text = re.sub(r"([a-z])([A-Z])", r"\1 \2", text)
text = text.lower()
# Extract words
words = re.findall(r"\b[a-z][a-z0-9_-]*\b", text)
# Remove very short words and stopwords
return [w for w in words if len(w) > 2 and w not in STOPWORDS]
@dataclass
class SearchWeights:
"""Configuration for hybrid search signal weights."""
semantic: float = 0.40 # Embedding similarity
bm25: float = 0.25 # Lexical matching
name_match: float = 0.25 # Block name matches
category_match: float = 0.10 # Category matches
@dataclass
class BlockSearchResult:
"""A single block search result."""
block_id: str
name: str
description: str
categories: list[str]
score: float
# Individual signal scores (for debugging)
semantic_score: float = 0.0
bm25_score: float = 0.0
name_score: float = 0.0
category_score: float = 0.0
class BlockSearchIndex:
"""Hybrid search index for blocks combining BM25 + embeddings."""
def __init__(self, index_path: Path = INDEX_PATH):
self.blocks: list[dict[str, Any]] = []
self.bm25_data: dict[str, Any] = {}
self.name_index: dict[str, list[list[int | float]]] = {}
self.embeddings: Optional[np.ndarray] = None
self.normalized_embeddings: Optional[np.ndarray] = None
self._loaded = False
self._index_path = index_path
self._embedding_model: Any = None
def load(self) -> bool:
"""Load the index from JSON file."""
if self._loaded:
return True
if not self._index_path.exists():
logger.warning(f"Block index not found at {self._index_path}")
return False
try:
with open(self._index_path, "r", encoding="utf-8") as f:
data = json.load(f)
self.blocks = data.get("blocks", [])
self.bm25_data = data.get("bm25", {})
self.name_index = data.get("name_index", {})
# Decode embeddings from base64
embeddings_list = []
for block in self.blocks:
if block.get("emb"):
emb_bytes = base64.b64decode(block["emb"])
emb = np.frombuffer(emb_bytes, dtype=np.float32)
embeddings_list.append(emb)
else:
# No embedding, use zeros sized to the index's embedding dimension
dim = data.get("embedding_dim", 1536)  # 1536 = text-embedding-3-small
embeddings_list.append(np.zeros(dim, dtype=np.float32))
if embeddings_list:
self.embeddings = np.stack(embeddings_list)
# Precompute normalized embeddings for cosine similarity
norms = np.linalg.norm(self.embeddings, axis=1, keepdims=True)
self.normalized_embeddings = self.embeddings / (norms + 1e-10)
self._loaded = True
logger.info(f"Loaded block index with {len(self.blocks)} blocks")
return True
except Exception as e:
logger.error(f"Failed to load block index: {e}")
return False
def _get_openai_client(self) -> Any:
"""Get OpenAI client for query embedding."""
if self._embedding_model is None:
try:
from openai import OpenAI
api_key = os.getenv("OPENAI_API_KEY")
if not api_key:
logger.warning("OPENAI_API_KEY not set")
return None
self._embedding_model = OpenAI(api_key=api_key)
except ImportError:
logger.warning("openai not installed")
return None
return self._embedding_model
def _embed_query(self, query: str) -> Optional[np.ndarray]:
"""Embed the search query using OpenAI."""
client = self._get_openai_client()
if client is None:
return None
try:
response = client.embeddings.create(
model=EMBEDDING_MODEL,
input=query,
)
embedding = response.data[0].embedding
return np.array(embedding, dtype=np.float32)
except Exception as e:
logger.warning(f"Failed to embed query: {e}")
return None
def _compute_semantic_scores(self, query_embedding: np.ndarray) -> np.ndarray:
"""Compute cosine similarity between query and all blocks."""
if self.normalized_embeddings is None:
return np.zeros(len(self.blocks))
# Normalize query embedding
query_norm = query_embedding / (np.linalg.norm(query_embedding) + 1e-10)
# Cosine similarity via dot product
similarities = self.normalized_embeddings @ query_norm
# Scale to [0, 1] (cosine ranges from -1 to 1)
return (similarities + 1) / 2
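# Scaling spot check: for a unit query q and rows [q, -q],
#   similarities = [1.0, -1.0]  ->  (similarities + 1) / 2 == [1.0, 0.0],
# so an identical block scores 1.0 and an opposite one 0.0 on this signal.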
def _compute_bm25_scores(self, query_tokens: list[str]) -> np.ndarray:
"""Compute BM25 scores for all blocks."""
scores = np.zeros(len(self.blocks))
if not self.bm25_data or not query_tokens:
return scores
# BM25 parameters
k1 = 1.5
b = 0.75
n_docs = self.bm25_data.get("n_docs", len(self.blocks))
avgdl = self.bm25_data.get("avgdl", 100)
df = self.bm25_data.get("df", {})
doc_lens = self.bm25_data.get("doc_lens", [100] * len(self.blocks))
for i, block in enumerate(self.blocks):
# Tokenize block's searchable text
block_tokens = tokenize(block.get("searchable_text", ""))
doc_len = doc_lens[i] if i < len(doc_lens) else len(block_tokens)
# Calculate BM25 score
score = 0.0
for token in query_tokens:
if token not in df:
continue
# Term frequency in this document
tf = block_tokens.count(token)
if tf == 0:
continue
# IDF
doc_freq = df.get(token, 0)
idf = math.log((n_docs - doc_freq + 0.5) / (doc_freq + 0.5) + 1)
# BM25 score component
numerator = tf * (k1 + 1)
denominator = tf + k1 * (1 - b + b * doc_len / avgdl)
score += idf * numerator / denominator
scores[i] = score
# Normalize to [0, 1]
max_score = scores.max()
if max_score > 0:
scores = scores / max_score
return scores
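# One-term spot check (hypothetical numbers, k1=1.5, b=0.75):
#   n_docs=10, df=2  -> idf = log((10 - 2 + 0.5) / (2 + 0.5) + 1) ~= 1.4816
#   tf=3, doc_len=50, avgdl=100
#   score = 1.4816 * (3 * 2.5) / (3 + 1.5 * (0.25 + 0.375)) ~= 2.82
# (scores are then max-normalized to [0, 1] above)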
def _compute_name_scores(self, query_tokens: list[str]) -> np.ndarray:
"""Compute name match scores using the name index."""
scores = np.zeros(len(self.blocks))
if not self.name_index or not query_tokens:
return scores
for token in query_tokens:
if token in self.name_index:
for block_idx, weight in self.name_index[token]:
if block_idx < len(scores):
scores[int(block_idx)] += weight
# Also check for partial matches in block names
for i, block in enumerate(self.blocks):
name_lower = block.get("name", "").lower()
for token in query_tokens:
if token in name_lower:
scores[i] += 0.5
# Normalize to [0, 1]
max_score = scores.max()
if max_score > 0:
scores = scores / max_score
return scores
def _compute_category_scores(self, query_tokens: list[str]) -> np.ndarray:
"""Compute category match scores."""
scores = np.zeros(len(self.blocks))
if not query_tokens:
return scores
for i, block in enumerate(self.blocks):
categories = block.get("categories", [])
category_text = " ".join(categories).lower()
for token in query_tokens:
if token in category_text:
scores[i] += 1.0
# Normalize to [0, 1]
max_score = scores.max()
if max_score > 0:
scores = scores / max_score
return scores
def search(
self,
query: str,
top_k: int = 10,
weights: Optional[SearchWeights] = None,
) -> list[BlockSearchResult]:
"""
Perform hybrid search combining multiple signals.
Args:
query: Search query string
top_k: Number of results to return
weights: Optional custom weights for signals
Returns:
List of BlockSearchResult sorted by score
"""
if not self._loaded and not self.load():
return []
if weights is None:
weights = SearchWeights()
# Tokenize query
query_tokens = tokenize(query)
if not query_tokens:
# Fallback: try raw query words
query_tokens = query.lower().split()
# Compute semantic scores
semantic_scores = np.zeros(len(self.blocks))
if self.normalized_embeddings is not None:
query_embedding = self._embed_query(query)
if query_embedding is not None:
semantic_scores = self._compute_semantic_scores(query_embedding)
# Compute other scores
bm25_scores = self._compute_bm25_scores(query_tokens)
name_scores = self._compute_name_scores(query_tokens)
category_scores = self._compute_category_scores(query_tokens)
# Combine scores using weights
combined_scores = (
weights.semantic * semantic_scores
+ weights.bm25 * bm25_scores
+ weights.name_match * name_scores
+ weights.category_match * category_scores
)
# Get top-k indices
top_indices = np.argsort(combined_scores)[::-1][:top_k]
# Build results
results = []
for idx in top_indices:
if combined_scores[idx] <= 0:
continue
block = self.blocks[idx]
results.append(
BlockSearchResult(
block_id=block["id"],
name=block["name"],
description=block["description"],
categories=block.get("categories", []),
score=float(combined_scores[idx]),
semantic_score=float(semantic_scores[idx]),
bm25_score=float(bm25_scores[idx]),
name_score=float(name_scores[idx]),
category_score=float(category_scores[idx]),
)
)
return results
# Global index instance (lazy loaded)
_block_search_index: Optional[BlockSearchIndex] = None
def get_block_search_index() -> BlockSearchIndex:
"""Get or create the block search index singleton."""
global _block_search_index
if _block_search_index is None:
_block_search_index = BlockSearchIndex(INDEX_PATH)
return _block_search_index
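# Usage sketch (assumes blocks_index.json was built by index_blocks.py; without
# OPENAI_API_KEY the semantic signal stays zero and lexical signals still apply):
#   index = get_block_search_index()
#   if index.load():
#       for r in index.search("send email", top_k=5):
#           print(f"{r.score:.3f} {r.name} {r.categories}")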

View File

@@ -0,0 +1,386 @@
"""Tool for searching platform documentation."""
import json
import logging
import math
import re
from pathlib import Path
from typing import Any
from backend.api.features.chat.model import ChatSession
from .base import BaseTool
from .models import (
DocSearchResult,
DocSearchResultsResponse,
ErrorResponse,
NoResultsResponse,
ToolResponseBase,
)
logger = logging.getLogger(__name__)
# Documentation base URL
DOCS_BASE_URL = "https://docs.agpt.co/platform"
# Path to the JSON index file (relative to this file)
INDEX_PATH = Path(__file__).parent / "docs_index.json"
def tokenize(text: str) -> list[str]:
"""Simple tokenizer for BM25."""
text = text.lower()
# Remove code blocks
text = re.sub(r"```[\s\S]*?```", "", text)
text = re.sub(r"`[^`]+`", "", text)
# Extract words
words = re.findall(r"\b[a-z][a-z0-9_-]*\b", text)
# Remove very short words and stopwords
stopwords = {
"the",
"a",
"an",
"is",
"are",
"was",
"were",
"be",
"been",
"being",
"have",
"has",
"had",
"do",
"does",
"did",
"will",
"would",
"could",
"should",
"may",
"might",
"must",
"shall",
"can",
"need",
"dare",
"ought",
"used",
"to",
"of",
"in",
"for",
"on",
"with",
"at",
"by",
"from",
"as",
"into",
"through",
"during",
"before",
"after",
"above",
"below",
"between",
"under",
"again",
"further",
"then",
"once",
"and",
"but",
"or",
"nor",
"so",
"yet",
"both",
"either",
"neither",
"not",
"only",
"own",
"same",
"than",
"too",
"very",
"just",
"also",
"now",
"here",
"there",
"when",
"where",
"why",
"how",
"all",
"each",
"every",
"both",
"few",
"more",
"most",
"other",
"some",
"such",
"no",
"any",
"this",
"that",
"these",
"those",
"it",
"its",
}
return [w for w in words if len(w) > 2 and w not in stopwords]
class DocSearchIndex:
"""Lightweight documentation search index using BM25."""
def __init__(self, index_path: Path):
self.chunks: list[dict] = []
self.bm25_data: dict = {}
self._loaded = False
self._index_path = index_path
def load(self) -> bool:
"""Load the index from JSON file."""
if self._loaded:
return True
if not self._index_path.exists():
logger.warning(f"Documentation index not found at {self._index_path}")
return False
try:
with open(self._index_path, "r", encoding="utf-8") as f:
data = json.load(f)
self.chunks = data.get("chunks", [])
self.bm25_data = data.get("bm25", {})
self._loaded = True
logger.info(f"Loaded documentation index with {len(self.chunks)} chunks")
return True
except Exception as e:
logger.error(f"Failed to load documentation index: {e}")
return False
def search(self, query: str, top_k: int = 5) -> list[dict]:
"""Search the index using BM25."""
if not self._loaded and not self.load():
return []
query_tokens = tokenize(query)
if not query_tokens:
return []
# BM25 parameters
k1 = 1.5
b = 0.75
n_docs = self.bm25_data.get("n_docs", len(self.chunks))
avgdl = self.bm25_data.get("avgdl", 100)
df = self.bm25_data.get("df", {})
doc_lens = self.bm25_data.get("doc_lens", [100] * len(self.chunks))
scores = []
for i, chunk in enumerate(self.chunks):
# Tokenize chunk text
chunk_tokens = tokenize(chunk.get("text", ""))
doc_len = doc_lens[i] if i < len(doc_lens) else len(chunk_tokens)
# Calculate BM25 score
score = 0.0
for token in query_tokens:
if token not in df:
continue
# Term frequency in this document
tf = chunk_tokens.count(token)
if tf == 0:
continue
# IDF
doc_freq = df.get(token, 0)
idf = math.log((n_docs - doc_freq + 0.5) / (doc_freq + 0.5) + 1)
# BM25 score component
numerator = tf * (k1 + 1)
denominator = tf + k1 * (1 - b + b * doc_len / avgdl)
score += idf * numerator / denominator
# Boost for title/heading matches
title = chunk.get("title", "").lower()
heading = chunk.get("heading", "").lower()
for token in query_tokens:
if token in title:
score *= 1.5
if token in heading:
score *= 1.2
scores.append((i, score))
# Sort by score and return top_k
scores.sort(key=lambda x: x[1], reverse=True)
results = []
seen_sections = set()
for idx, score in scores:
if score <= 0:
continue
chunk = self.chunks[idx]
section_key = (chunk.get("doc", ""), chunk.get("heading", ""))
# Deduplicate by section
if section_key in seen_sections:
continue
seen_sections.add(section_key)
results.append(
{
"title": chunk.get("title", ""),
"path": chunk.get("doc", ""),
"heading": chunk.get("heading", ""),
"text": chunk.get("text", ""), # Full text for LLM comprehension
"score": score,
}
)
if len(results) >= top_k:
break
return results
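# Boost spot check: a base BM25 score of 2.0 with one query token in the title
# and another in the heading becomes 2.0 * 1.5 * 1.2 ~= 3.6 before ranking.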
# Global index instance (lazy loaded)
_search_index: DocSearchIndex | None = None
def get_search_index() -> DocSearchIndex:
"""Get or create the search index singleton."""
global _search_index
if _search_index is None:
_search_index = DocSearchIndex(INDEX_PATH)
return _search_index
class SearchDocsTool(BaseTool):
"""Tool for searching AutoGPT platform documentation."""
@property
def name(self) -> str:
return "search_platform_docs"
@property
def description(self) -> str:
return (
"Search the AutoGPT platform documentation and support Q&A for information about "
"how to use the platform, create agents, configure blocks, "
"set up integrations, troubleshoot issues, and more. Use this when users ask "
"support questions or want to learn how to do something with AutoGPT."
)
@property
def parameters(self) -> dict[str, Any]:
return {
"type": "object",
"properties": {
"query": {
"type": "string",
"description": (
"Search query describing what the user wants to learn about. "
"Use keywords like 'blocks', 'agents', 'credentials', 'API', etc."
),
},
},
"required": ["query"],
}
async def _execute(
self,
user_id: str | None,
session: ChatSession,
**kwargs,
) -> ToolResponseBase:
"""Search documentation for the query.
Args:
user_id: User ID (may be anonymous)
session: Chat session
query: Search query
Returns:
DocSearchResultsResponse: List of matching documentation sections
NoResultsResponse: No results found
ErrorResponse: Error message
"""
query = kwargs.get("query", "").strip()
session_id = session.session_id
if not query:
return ErrorResponse(
message="Please provide a search query",
session_id=session_id,
)
try:
index = get_search_index()
results = index.search(query, top_k=5)
if not results:
return NoResultsResponse(
message=f"No documentation found for '{query}'. Try different keywords.",
session_id=session_id,
suggestions=[
"Try more general terms like 'blocks', 'agents', 'setup'",
"Check the documentation at docs.agpt.co",
],
)
# Convert to response format
doc_results = []
for r in results:
# Build documentation URL
path = r["path"]
if path.endswith(".md"):
path = path[:-3] # Remove .md extension
doc_url = f"{DOCS_BASE_URL}/{path}"
full_text = r["text"]
doc_results.append(
DocSearchResult(
title=r["title"],
path=r["path"],
section=r["heading"],
snippet=(
full_text[:300] + "..."
if len(full_text) > 300
else full_text
),
content=full_text, # Full text for LLM to read and understand
score=round(r["score"], 3),
doc_url=doc_url,
)
)
return DocSearchResultsResponse(
message=(
f"Found {len(doc_results)} relevant documentation sections. "
"Use these to help answer the user's question. "
"Include links to the documentation when helpful."
),
results=doc_results,
count=len(doc_results),
query=query,
session_id=session_id,
)
except Exception as e:
logger.error(f"Error searching documentation: {e}", exc_info=True)
return ErrorResponse(
message="Failed to search documentation. Please try again.",
error=str(e),
session_id=session_id,
)

View File

@@ -3,13 +3,13 @@
import logging
from typing import Any
from backend.api.features.library import db as library_db
from backend.api.features.library import model as library_model
from backend.api.features.store import db as store_db
from backend.data import graph as graph_db
from backend.data.graph import GraphModel
from backend.data.model import CredentialsMetaInput
from backend.integrations.creds_manager import IntegrationCredentialsManager
from backend.server.v2.library import db as library_db
from backend.server.v2.library import model as library_model
from backend.server.v2.store import db as store_db
from backend.util.exceptions import NotFoundError
logger = logging.getLogger(__name__)

View File

@@ -7,9 +7,10 @@ import pytest_mock
from prisma.enums import ReviewStatus
from pytest_snapshot.plugin import Snapshot
from backend.server.rest_api import handle_internal_http_error
from backend.server.v2.executions.review.model import PendingHumanReviewModel
from backend.server.v2.executions.review.routes import router
from backend.api.rest_api import handle_internal_http_error
from .model import PendingHumanReviewModel
from .routes import router
# Using a fixed timestamp for reproducible tests
FIXED_NOW = datetime.datetime(2023, 1, 1, 0, 0, 0, tzinfo=datetime.timezone.utc)
@@ -54,13 +55,13 @@ def sample_pending_review(test_user_id: str) -> PendingHumanReviewModel:
def test_get_pending_reviews_empty(
mocker: pytest_mock.MockFixture,
mocker: pytest_mock.MockerFixture,
snapshot: Snapshot,
test_user_id: str,
) -> None:
"""Test getting pending reviews when none exist"""
mock_get_reviews = mocker.patch(
"backend.server.v2.executions.review.routes.get_pending_reviews_for_user"
"backend.api.features.executions.review.routes.get_pending_reviews_for_user"
)
mock_get_reviews.return_value = []
@@ -72,14 +73,14 @@ def test_get_pending_reviews_empty(
def test_get_pending_reviews_with_data(
mocker: pytest_mock.MockFixture,
mocker: pytest_mock.MockerFixture,
sample_pending_review: PendingHumanReviewModel,
snapshot: Snapshot,
test_user_id: str,
) -> None:
"""Test getting pending reviews with data"""
mock_get_reviews = mocker.patch(
"backend.server.v2.executions.review.routes.get_pending_reviews_for_user"
"backend.api.features.executions.review.routes.get_pending_reviews_for_user"
)
mock_get_reviews.return_value = [sample_pending_review]
@@ -94,14 +95,14 @@ def test_get_pending_reviews_with_data(
def test_get_pending_reviews_for_execution_success(
mocker: pytest_mock.MockFixture,
mocker: pytest_mock.MockerFixture,
sample_pending_review: PendingHumanReviewModel,
snapshot: Snapshot,
test_user_id: str,
) -> None:
"""Test getting pending reviews for specific execution"""
mock_get_graph_execution = mocker.patch(
"backend.server.v2.executions.review.routes.get_graph_execution_meta"
"backend.api.features.executions.review.routes.get_graph_execution_meta"
)
mock_get_graph_execution.return_value = {
"id": "test_graph_exec_456",
@@ -109,7 +110,7 @@ def test_get_pending_reviews_for_execution_success(
}
mock_get_reviews = mocker.patch(
"backend.server.v2.executions.review.routes.get_pending_reviews_for_execution"
"backend.api.features.executions.review.routes.get_pending_reviews_for_execution"
)
mock_get_reviews.return_value = [sample_pending_review]
@@ -121,24 +122,23 @@ def test_get_pending_reviews_for_execution_success(
assert data[0]["graph_exec_id"] == "test_graph_exec_456"
def test_get_pending_reviews_for_execution_access_denied(
mocker: pytest_mock.MockFixture,
test_user_id: str,
def test_get_pending_reviews_for_execution_not_available(
mocker: pytest_mock.MockerFixture,
) -> None:
"""Test access denied when user doesn't own the execution"""
mock_get_graph_execution = mocker.patch(
"backend.server.v2.executions.review.routes.get_graph_execution_meta"
"backend.api.features.executions.review.routes.get_graph_execution_meta"
)
mock_get_graph_execution.return_value = None
response = client.get("/api/review/execution/test_graph_exec_456")
assert response.status_code == 403
assert "Access denied" in response.json()["detail"]
assert response.status_code == 404
assert "not found" in response.json()["detail"]
def test_process_review_action_approve_success(
mocker: pytest_mock.MockFixture,
mocker: pytest_mock.MockerFixture,
sample_pending_review: PendingHumanReviewModel,
test_user_id: str,
) -> None:
@@ -146,12 +146,12 @@ def test_process_review_action_approve_success(
# Mock the route functions
mock_get_reviews_for_execution = mocker.patch(
"backend.server.v2.executions.review.routes.get_pending_reviews_for_execution"
"backend.api.features.executions.review.routes.get_pending_reviews_for_execution"
)
mock_get_reviews_for_execution.return_value = [sample_pending_review]
mock_process_all_reviews = mocker.patch(
"backend.server.v2.executions.review.routes.process_all_reviews_for_execution"
"backend.api.features.executions.review.routes.process_all_reviews_for_execution"
)
# Create approved review for return
approved_review = PendingHumanReviewModel(
@@ -174,11 +174,11 @@ def test_process_review_action_approve_success(
mock_process_all_reviews.return_value = {"test_node_123": approved_review}
mock_has_pending = mocker.patch(
"backend.server.v2.executions.review.routes.has_pending_reviews_for_graph_exec"
"backend.api.features.executions.review.routes.has_pending_reviews_for_graph_exec"
)
mock_has_pending.return_value = False
mocker.patch("backend.server.v2.executions.review.routes.add_graph_execution")
mocker.patch("backend.api.features.executions.review.routes.add_graph_execution")
request_data = {
"reviews": [
@@ -202,7 +202,7 @@ def test_process_review_action_approve_success(
def test_process_review_action_reject_success(
mocker: pytest_mock.MockFixture,
mocker: pytest_mock.MockerFixture,
sample_pending_review: PendingHumanReviewModel,
test_user_id: str,
) -> None:
@@ -210,12 +210,12 @@ def test_process_review_action_reject_success(
# Mock the route functions
mock_get_reviews_for_execution = mocker.patch(
"backend.server.v2.executions.review.routes.get_pending_reviews_for_execution"
"backend.api.features.executions.review.routes.get_pending_reviews_for_execution"
)
mock_get_reviews_for_execution.return_value = [sample_pending_review]
mock_process_all_reviews = mocker.patch(
"backend.server.v2.executions.review.routes.process_all_reviews_for_execution"
"backend.api.features.executions.review.routes.process_all_reviews_for_execution"
)
rejected_review = PendingHumanReviewModel(
node_exec_id="test_node_123",
@@ -237,7 +237,7 @@ def test_process_review_action_reject_success(
mock_process_all_reviews.return_value = {"test_node_123": rejected_review}
mock_has_pending = mocker.patch(
"backend.server.v2.executions.review.routes.has_pending_reviews_for_graph_exec"
"backend.api.features.executions.review.routes.has_pending_reviews_for_graph_exec"
)
mock_has_pending.return_value = False
@@ -262,7 +262,7 @@ def test_process_review_action_reject_success(
def test_process_review_action_mixed_success(
mocker: pytest_mock.MockFixture,
mocker: pytest_mock.MockerFixture,
sample_pending_review: PendingHumanReviewModel,
test_user_id: str,
) -> None:
@@ -289,12 +289,12 @@ def test_process_review_action_mixed_success(
# Mock the route functions
mock_get_reviews_for_execution = mocker.patch(
"backend.server.v2.executions.review.routes.get_pending_reviews_for_execution"
"backend.api.features.executions.review.routes.get_pending_reviews_for_execution"
)
mock_get_reviews_for_execution.return_value = [sample_pending_review, second_review]
mock_process_all_reviews = mocker.patch(
"backend.server.v2.executions.review.routes.process_all_reviews_for_execution"
"backend.api.features.executions.review.routes.process_all_reviews_for_execution"
)
# Create approved version of first review
approved_review = PendingHumanReviewModel(
@@ -338,7 +338,7 @@ def test_process_review_action_mixed_success(
}
mock_has_pending = mocker.patch(
"backend.server.v2.executions.review.routes.has_pending_reviews_for_graph_exec"
"backend.api.features.executions.review.routes.has_pending_reviews_for_graph_exec"
)
mock_has_pending.return_value = False
@@ -369,7 +369,7 @@ def test_process_review_action_mixed_success(
def test_process_review_action_empty_request(
mocker: pytest_mock.MockFixture,
mocker: pytest_mock.MockerFixture,
test_user_id: str,
) -> None:
"""Test error when no reviews provided"""
@@ -386,19 +386,19 @@ def test_process_review_action_empty_request(
def test_process_review_action_review_not_found(
mocker: pytest_mock.MockFixture,
mocker: pytest_mock.MockerFixture,
test_user_id: str,
) -> None:
"""Test error when review is not found"""
# Mock the functions that extract graph execution ID from the request
mock_get_reviews_for_execution = mocker.patch(
"backend.server.v2.executions.review.routes.get_pending_reviews_for_execution"
"backend.api.features.executions.review.routes.get_pending_reviews_for_execution"
)
mock_get_reviews_for_execution.return_value = [] # No reviews found
# Mock process_all_reviews to simulate not finding reviews
mock_process_all_reviews = mocker.patch(
"backend.server.v2.executions.review.routes.process_all_reviews_for_execution"
"backend.api.features.executions.review.routes.process_all_reviews_for_execution"
)
# This should raise a ValueError with "Reviews not found" message based on the data/human_review.py logic
mock_process_all_reviews.side_effect = ValueError(
@@ -422,20 +422,20 @@ def test_process_review_action_review_not_found(
def test_process_review_action_partial_failure(
mocker: pytest_mock.MockFixture,
mocker: pytest_mock.MockerFixture,
sample_pending_review: PendingHumanReviewModel,
test_user_id: str,
) -> None:
"""Test handling of partial failures in review processing"""
# Mock the route functions
mock_get_reviews_for_execution = mocker.patch(
"backend.server.v2.executions.review.routes.get_pending_reviews_for_execution"
"backend.api.features.executions.review.routes.get_pending_reviews_for_execution"
)
mock_get_reviews_for_execution.return_value = [sample_pending_review]
# Mock partial failure in processing
mock_process_all_reviews = mocker.patch(
"backend.server.v2.executions.review.routes.process_all_reviews_for_execution"
"backend.api.features.executions.review.routes.process_all_reviews_for_execution"
)
mock_process_all_reviews.side_effect = ValueError("Some reviews failed validation")
@@ -456,20 +456,20 @@ def test_process_review_action_partial_failure(
def test_process_review_action_invalid_node_exec_id(
mocker: pytest_mock.MockFixture,
mocker: pytest_mock.MockerFixture,
sample_pending_review: PendingHumanReviewModel,
test_user_id: str,
) -> None:
"""Test failure when trying to process review with invalid node execution ID"""
# Mock the route functions
mock_get_reviews_for_execution = mocker.patch(
"backend.server.v2.executions.review.routes.get_pending_reviews_for_execution"
"backend.api.features.executions.review.routes.get_pending_reviews_for_execution"
)
mock_get_reviews_for_execution.return_value = [sample_pending_review]
# Mock validation failure - this should return 400, not 500
mock_process_all_reviews = mocker.patch(
"backend.server.v2.executions.review.routes.process_all_reviews_for_execution"
"backend.api.features.executions.review.routes.process_all_reviews_for_execution"
)
mock_process_all_reviews.side_effect = ValueError(
"Invalid node execution ID format"

View File

@@ -13,17 +13,14 @@ from backend.data.human_review import (
process_all_reviews_for_execution,
)
from backend.executor.utils import add_graph_execution
from backend.server.v2.executions.review.model import (
PendingHumanReviewModel,
ReviewRequest,
ReviewResponse,
)
from .model import PendingHumanReviewModel, ReviewRequest, ReviewResponse
logger = logging.getLogger(__name__)
router = APIRouter(
tags=["executions", "review", "private"],
tags=["v2", "executions", "review"],
dependencies=[Security(autogpt_auth_lib.requires_user)],
)
@@ -70,8 +67,7 @@ async def list_pending_reviews(
response_model=List[PendingHumanReviewModel],
responses={
200: {"description": "List of pending reviews for the execution"},
400: {"description": "Invalid graph execution ID"},
403: {"description": "Access denied to graph execution"},
404: {"description": "Graph execution not found"},
500: {"description": "Server error", "content": {"application/json": {}}},
},
)
@@ -94,7 +90,7 @@ async def list_pending_reviews_for_execution(
Raises:
HTTPException:
- 403: If user doesn't own the graph execution
- 404: If the graph execution doesn't exist or isn't owned by this user
- 500: If authentication fails or database error occurs
Note:
@@ -108,8 +104,8 @@ async def list_pending_reviews_for_execution(
)
if not graph_exec:
raise HTTPException(
status_code=status.HTTP_403_FORBIDDEN,
detail="Access denied to graph execution",
status_code=status.HTTP_404_NOT_FOUND,
detail=f"Graph execution #{graph_exec_id} not found",
)
return await get_pending_reviews_for_execution(graph_exec_id, user_id)
@@ -134,18 +130,14 @@ async def process_review_action(
# Build review decisions map
review_decisions = {}
for review in request.reviews:
if review.approved:
review_decisions[review.node_exec_id] = (
ReviewStatus.APPROVED,
review.reviewed_data,
review.message,
)
else:
review_decisions[review.node_exec_id] = (
ReviewStatus.REJECTED,
None,
review.message,
)
review_status = (
ReviewStatus.APPROVED if review.approved else ReviewStatus.REJECTED
)
review_decisions[review.node_exec_id] = (
review_status,
review.reviewed_data,
review.message,
)
# Process all reviews
updated_reviews = await process_all_reviews_for_execution(

View File

@@ -17,6 +17,8 @@ from fastapi import (
from pydantic import BaseModel, Field, SecretStr
from starlette.status import HTTP_500_INTERNAL_SERVER_ERROR, HTTP_502_BAD_GATEWAY
from backend.api.features.library.db import set_preset_webhook, update_preset
from backend.api.features.library.model import LibraryAgentPreset
from backend.data.graph import NodeModel, get_graph, set_node_webhook
from backend.data.integrations import (
WebhookEvent,
@@ -33,7 +35,11 @@ from backend.data.model import (
OAuth2Credentials,
UserIntegrations,
)
from backend.data.onboarding import OnboardingStep, complete_onboarding_step
from backend.data.onboarding import (
OnboardingStep,
complete_onboarding_step,
increment_runs,
)
from backend.data.user import get_user_integrations
from backend.executor.utils import add_graph_execution
from backend.integrations.ayrshare import AyrshareClient, SocialPlatform
@@ -41,13 +47,6 @@ from backend.integrations.creds_manager import IntegrationCredentialsManager
from backend.integrations.oauth import CREDENTIALS_BY_PROVIDER, HANDLERS_BY_NAME
from backend.integrations.providers import ProviderName
from backend.integrations.webhooks import get_webhook_manager
from backend.server.integrations.models import (
ProviderConstants,
ProviderNamesResponse,
get_all_provider_names,
)
from backend.server.v2.library.db import set_preset_webhook, update_preset
from backend.server.v2.library.model import LibraryAgentPreset
from backend.util.exceptions import (
GraphNotInLibraryError,
MissingConfigError,
@@ -56,6 +55,8 @@ from backend.util.exceptions import (
)
from backend.util.settings import Settings
from .models import ProviderConstants, ProviderNamesResponse, get_all_provider_names
if TYPE_CHECKING:
from backend.integrations.oauth import BaseOAuthHandler
@@ -377,6 +378,7 @@ async def webhook_ingress_generic(
return
await complete_onboarding_step(user_id, OnboardingStep.TRIGGER_WEBHOOK)
await increment_runs(user_id)
# Execute all triggers concurrently for better performance
tasks = []

View File

@@ -4,16 +4,14 @@ from typing import Literal, Optional
import fastapi
import prisma.errors
import prisma.fields
import prisma.models
import prisma.types
import backend.api.features.store.exceptions as store_exceptions
import backend.api.features.store.image_gen as store_image_gen
import backend.api.features.store.media as store_media
import backend.data.graph as graph_db
import backend.data.integrations as integrations_db
import backend.server.v2.library.model as library_model
import backend.server.v2.store.exceptions as store_exceptions
import backend.server.v2.store.image_gen as store_image_gen
import backend.server.v2.store.media as store_media
from backend.data.block import BlockInput
from backend.data.db import transaction
from backend.data.execution import get_graph_execution
@@ -28,6 +26,8 @@ from backend.util.json import SafeJson
from backend.util.models import Pagination
from backend.util.settings import Config
from . import model as library_model
logger = logging.getLogger(__name__)
config = Config()
integration_creds_manager = IntegrationCredentialsManager()
@@ -538,6 +538,7 @@ async def update_library_agent(
library_agent_id: str,
user_id: str,
auto_update_version: Optional[bool] = None,
graph_version: Optional[int] = None,
is_favorite: Optional[bool] = None,
is_archived: Optional[bool] = None,
is_deleted: Optional[Literal[False]] = None,
@@ -550,6 +551,7 @@ async def update_library_agent(
library_agent_id: The ID of the LibraryAgent to update.
user_id: The owner of this LibraryAgent.
auto_update_version: Whether the agent should auto-update to active version.
graph_version: Specific graph version to update to.
is_favorite: Whether this agent is marked as a favorite.
is_archived: Whether this agent is archived.
settings: User-specific settings for this library agent.
@@ -563,8 +565,8 @@ async def update_library_agent(
"""
logger.debug(
f"Updating library agent {library_agent_id} for user {user_id} with "
f"auto_update_version={auto_update_version}, is_favorite={is_favorite}, "
f"is_archived={is_archived}, settings={settings}"
f"auto_update_version={auto_update_version}, graph_version={graph_version}, "
f"is_favorite={is_favorite}, is_archived={is_archived}, settings={settings}"
)
update_fields: prisma.types.LibraryAgentUpdateManyMutationInput = {}
if auto_update_version is not None:
@@ -581,10 +583,23 @@ async def update_library_agent(
update_fields["isDeleted"] = is_deleted
if settings is not None:
update_fields["settings"] = SafeJson(settings.model_dump())
if not update_fields:
raise ValueError("No values were passed to update")
try:
# If graph_version is provided, update to that specific version
if graph_version is not None:
# Get the current agent to find its graph_id
agent = await get_library_agent(id=library_agent_id, user_id=user_id)
# Update to the specified version using existing function
return await update_agent_version_in_library(
user_id=user_id,
agent_graph_id=agent.graph_id,
agent_graph_version=graph_version,
)
# Otherwise, just update the simple fields
if not update_fields:
raise ValueError("No values were passed to update")
n_updated = await prisma.models.LibraryAgent.prisma().update_many(
where={"id": library_agent_id, "userId": user_id},
data=update_fields,
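For illustration, a hedged sketch of the new `graph_version` path; the IDs and the import location are assumptions based on this diff, not verified usage:

```python
# Minimal sketch: pinning a library agent to a specific graph version.
# IDs are made up; the import path follows the move to backend.api.features.
import asyncio

from backend.api.features.library.db import update_library_agent


async def pin_agent_version() -> None:
    # With graph_version set, the function delegates to
    # update_agent_version_in_library instead of a plain update_many.
    await update_library_agent(
        library_agent_id="lib-agent-123",  # hypothetical
        user_id="user-456",                # hypothetical
        graph_version=3,
    )


asyncio.run(pin_agent_version())
```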

View File

@@ -1,16 +1,15 @@
from datetime import datetime
import prisma.enums
import prisma.errors
import prisma.models
import prisma.types
import pytest
import backend.server.v2.library.db as db
import backend.server.v2.store.exceptions
import backend.api.features.store.exceptions
from backend.data.db import connect
from backend.data.includes import library_agent_include
from . import db
@pytest.mark.asyncio
async def test_get_library_agents(mocker):
@@ -88,7 +87,7 @@ async def test_add_agent_to_library(mocker):
await connect()
# Mock the transaction context
mock_transaction = mocker.patch("backend.server.v2.library.db.transaction")
mock_transaction = mocker.patch("backend.api.features.library.db.transaction")
mock_transaction.return_value.__aenter__ = mocker.AsyncMock(return_value=None)
mock_transaction.return_value.__aexit__ = mocker.AsyncMock(return_value=None)
# Mock data
@@ -151,7 +150,7 @@ async def test_add_agent_to_library(mocker):
)
# Mock graph_db.get_graph function that's called to check for HITL blocks
mock_graph_db = mocker.patch("backend.server.v2.library.db.graph_db")
mock_graph_db = mocker.patch("backend.api.features.library.db.graph_db")
mock_graph_model = mocker.Mock()
mock_graph_model.nodes = (
[]
@@ -159,7 +158,9 @@ async def test_add_agent_to_library(mocker):
mock_graph_db.get_graph = mocker.AsyncMock(return_value=mock_graph_model)
# Mock the model conversion
mock_from_db = mocker.patch("backend.server.v2.library.model.LibraryAgent.from_db")
mock_from_db = mocker.patch(
"backend.api.features.library.model.LibraryAgent.from_db"
)
mock_from_db.return_value = mocker.Mock()
# Call function
@@ -217,7 +218,7 @@ async def test_add_agent_to_library_not_found(mocker):
)
# Call function and verify exception
with pytest.raises(backend.server.v2.store.exceptions.AgentNotFoundError):
with pytest.raises(backend.api.features.store.exceptions.AgentNotFoundError):
await db.add_store_agent_to_library("version123", "test-user")
# Verify mock called correctly

View File

@@ -385,6 +385,9 @@ class LibraryAgentUpdateRequest(pydantic.BaseModel):
auto_update_version: Optional[bool] = pydantic.Field(
default=None, description="Auto-update the agent version"
)
graph_version: Optional[int] = pydantic.Field(
default=None, description="Specific graph version to update to"
)
is_favorite: Optional[bool] = pydantic.Field(
default=None, description="Mark the agent as a favorite"
)

View File

@@ -3,7 +3,7 @@ import datetime
import prisma.models
import pytest
import backend.server.v2.library.model as library_model
from . import model as library_model
@pytest.mark.asyncio

View File

@@ -1,15 +1,18 @@
import logging
from typing import Optional
from typing import Literal, Optional
import autogpt_libs.auth as autogpt_auth_lib
from fastapi import APIRouter, Body, HTTPException, Query, Security, status
from fastapi.responses import Response
from prisma.enums import OnboardingStep
import backend.server.v2.library.db as library_db
import backend.server.v2.library.model as library_model
import backend.server.v2.store.exceptions as store_exceptions
import backend.api.features.store.exceptions as store_exceptions
from backend.data.onboarding import complete_onboarding_step
from backend.util.exceptions import DatabaseError, NotFoundError
from .. import db as library_db
from .. import model as library_model
logger = logging.getLogger(__name__)
router = APIRouter(
@@ -200,6 +203,9 @@ async def get_library_agent_by_store_listing_version_id(
)
async def add_marketplace_agent_to_library(
store_listing_version_id: str = Body(embed=True),
source: Literal["onboarding", "marketplace"] = Body(
default="marketplace", embed=True
),
user_id: str = Security(autogpt_auth_lib.get_user_id),
) -> library_model.LibraryAgent:
"""
@@ -217,10 +223,15 @@ async def add_marketplace_agent_to_library(
HTTPException(500): If a server/database error occurs.
"""
try:
return await library_db.add_store_agent_to_library(
agent = await library_db.add_store_agent_to_library(
store_listing_version_id=store_listing_version_id,
user_id=user_id,
)
if source != "onboarding":
await complete_onboarding_step(
user_id, OnboardingStep.MARKETPLACE_ADD_AGENT
)
return agent
except store_exceptions.AgentNotFoundError as e:
logger.warning(
@@ -274,6 +285,7 @@ async def update_library_agent(
library_agent_id=library_agent_id,
user_id=user_id,
auto_update_version=payload.auto_update_version,
graph_version=payload.graph_version,
is_favorite=payload.is_favorite,
is_archived=payload.is_archived,
settings=payload.settings,
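As a usage sketch of the new `source` field: requests that originate from onboarding can now add an agent without re-triggering the marketplace onboarding step. The URL and auth header below are assumptions for illustration:

```python
# Hypothetical client call; the router mount path is an assumption.
import requests

resp = requests.post(
    "https://platform.example.com/api/library/agents",
    json={
        "store_listing_version_id": "test-version-id",
        # "onboarding" skips complete_onboarding_step(MARKETPLACE_ADD_AGENT);
        # the default "marketplace" records it.
        "source": "onboarding",
    },
    headers={"Authorization": "Bearer <user-jwt>"},
)
resp.raise_for_status()
```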

View File

@@ -4,18 +4,20 @@ from typing import Any, Optional
import autogpt_libs.auth as autogpt_auth_lib
from fastapi import APIRouter, Body, HTTPException, Query, Security, status
import backend.server.v2.library.db as db
import backend.server.v2.library.model as models
from backend.data.execution import GraphExecutionMeta
from backend.data.graph import get_graph
from backend.data.integrations import get_webhook
from backend.data.model import CredentialsMetaInput
from backend.data.onboarding import increment_runs
from backend.executor.utils import add_graph_execution, make_node_credentials_input_map
from backend.integrations.creds_manager import IntegrationCredentialsManager
from backend.integrations.webhooks import get_webhook_manager
from backend.integrations.webhooks.utils import setup_webhook_for_block
from backend.util.exceptions import NotFoundError
from .. import db
from .. import model as models
logger = logging.getLogger(__name__)
credentials_manager = IntegrationCredentialsManager()
@@ -401,6 +403,8 @@ async def execute_preset(
merged_node_input = preset.inputs | inputs
merged_credential_inputs = preset.credentials | credential_inputs
await increment_runs(user_id)
return await add_graph_execution(
user_id=user_id,
graph_id=preset.graph_id,

View File

@@ -1,15 +1,17 @@
import datetime
import json
from unittest.mock import AsyncMock
import fastapi.testclient
import pytest
import pytest_mock
from pytest_snapshot.plugin import Snapshot
import backend.server.v2.library.model as library_model
from backend.server.v2.library.routes import router as library_router
from backend.util.models import Pagination
from . import model as library_model
from .routes import router as library_router
app = fastapi.FastAPI()
app.include_router(library_router)
@@ -85,7 +87,7 @@ async def test_get_library_agents_success(
total_items=2, total_pages=1, current_page=1, page_size=50
),
)
mock_db_call = mocker.patch("backend.server.v2.library.db.list_library_agents")
mock_db_call = mocker.patch("backend.api.features.library.db.list_library_agents")
mock_db_call.return_value = mocked_value
response = client.get("/agents?search_term=test")
@@ -111,7 +113,7 @@ async def test_get_library_agents_success(
def test_get_library_agents_error(mocker: pytest_mock.MockFixture, test_user_id: str):
mock_db_call = mocker.patch("backend.server.v2.library.db.list_library_agents")
mock_db_call = mocker.patch("backend.api.features.library.db.list_library_agents")
mock_db_call.side_effect = Exception("Test error")
response = client.get("/agents?search_term=test")
@@ -160,7 +162,7 @@ async def test_get_favorite_library_agents_success(
),
)
mock_db_call = mocker.patch(
"backend.server.v2.library.db.list_favorite_library_agents"
"backend.api.features.library.db.list_favorite_library_agents"
)
mock_db_call.return_value = mocked_value
@@ -183,7 +185,7 @@ def test_get_favorite_library_agents_error(
mocker: pytest_mock.MockFixture, test_user_id: str
):
mock_db_call = mocker.patch(
"backend.server.v2.library.db.list_favorite_library_agents"
"backend.api.features.library.db.list_favorite_library_agents"
)
mock_db_call.side_effect = Exception("Test error")
@@ -222,9 +224,13 @@ def test_add_agent_to_library_success(
)
mock_db_call = mocker.patch(
"backend.server.v2.library.db.add_store_agent_to_library"
"backend.api.features.library.db.add_store_agent_to_library"
)
mock_db_call.return_value = mock_library_agent
mock_complete_onboarding = mocker.patch(
"backend.api.features.library.routes.agents.complete_onboarding_step",
new_callable=AsyncMock,
)
response = client.post(
"/agents", json={"store_listing_version_id": "test-version-id"}
@@ -239,11 +245,12 @@ def test_add_agent_to_library_success(
mock_db_call.assert_called_once_with(
store_listing_version_id="test-version-id", user_id=test_user_id
)
mock_complete_onboarding.assert_awaited_once()
def test_add_agent_to_library_error(mocker: pytest_mock.MockFixture, test_user_id: str):
mock_db_call = mocker.patch(
"backend.server.v2.library.db.add_store_agent_to_library"
"backend.api.features.library.db.add_store_agent_to_library"
)
mock_db_call.side_effect = Exception("Test error")

View File

@@ -0,0 +1,833 @@
"""
OAuth 2.0 Provider Endpoints
Implements OAuth 2.0 Authorization Code flow with PKCE support.
Flow:
1. User clicks "Login with AutoGPT" in 3rd party app
2. App redirects user to /auth/authorize with client_id, redirect_uri, scope, state
3. User sees consent screen (if not already logged in, redirects to login first)
4. User approves → backend creates authorization code
5. User redirected back to app with code
6. App exchanges code for access/refresh tokens at /api/oauth/token
7. App uses access token to call external API endpoints
"""
import io
import logging
import os
import uuid
from datetime import datetime
from typing import Literal, Optional
from urllib.parse import urlencode
from autogpt_libs.auth import get_user_id
from fastapi import APIRouter, Body, HTTPException, Security, UploadFile, status
from gcloud.aio import storage as async_storage
from PIL import Image
from prisma.enums import APIKeyPermission
from pydantic import BaseModel, Field
from backend.data.auth.oauth import (
InvalidClientError,
InvalidGrantError,
OAuthApplicationInfo,
TokenIntrospectionResult,
consume_authorization_code,
create_access_token,
create_authorization_code,
create_refresh_token,
get_oauth_application,
get_oauth_application_by_id,
introspect_token,
list_user_oauth_applications,
refresh_tokens,
revoke_access_token,
revoke_refresh_token,
update_oauth_application,
validate_client_credentials,
validate_redirect_uri,
validate_scopes,
)
from backend.util.settings import Settings
from backend.util.virus_scanner import scan_content_safe
settings = Settings()
logger = logging.getLogger(__name__)
router = APIRouter()
# ============================================================================
# Request/Response Models
# ============================================================================
class TokenResponse(BaseModel):
"""OAuth 2.0 token response"""
token_type: Literal["Bearer"] = "Bearer"
access_token: str
access_token_expires_at: datetime
refresh_token: str
refresh_token_expires_at: datetime
scopes: list[str]
class ErrorResponse(BaseModel):
"""OAuth 2.0 error response"""
error: str
error_description: Optional[str] = None
class OAuthApplicationPublicInfo(BaseModel):
"""Public information about an OAuth application (for consent screen)"""
name: str
description: Optional[str] = None
logo_url: Optional[str] = None
scopes: list[str]
# ============================================================================
# Application Info Endpoint
# ============================================================================
@router.get(
"/app/{client_id}",
responses={
404: {"description": "Application not found or disabled"},
},
)
async def get_oauth_app_info(
client_id: str, user_id: str = Security(get_user_id)
) -> OAuthApplicationPublicInfo:
"""
Get public information about an OAuth application.
This endpoint is used by the consent screen to display application details
to the user before they authorize access.
Returns:
- name: Application name
- description: Application description (if provided)
- scopes: List of scopes the application is allowed to request
"""
app = await get_oauth_application(client_id)
if not app or not app.is_active:
raise HTTPException(
status_code=status.HTTP_404_NOT_FOUND,
detail="Application not found",
)
return OAuthApplicationPublicInfo(
name=app.name,
description=app.description,
logo_url=app.logo_url,
scopes=[s.value for s in app.scopes],
)
# ============================================================================
# Authorization Endpoint
# ============================================================================
class AuthorizeRequest(BaseModel):
"""OAuth 2.0 authorization request"""
client_id: str = Field(description="Client identifier")
redirect_uri: str = Field(description="Redirect URI")
scopes: list[str] = Field(description="List of scopes")
state: str = Field(description="Anti-CSRF token from client")
response_type: str = Field(
default="code", description="Must be 'code' for authorization code flow"
)
code_challenge: str = Field(description="PKCE code challenge (required)")
code_challenge_method: Literal["S256", "plain"] = Field(
default="S256", description="PKCE code challenge method (S256 recommended)"
)
class AuthorizeResponse(BaseModel):
"""OAuth 2.0 authorization response with redirect URL"""
redirect_url: str = Field(description="URL to redirect the user to")
@router.post("/authorize")
async def authorize(
request: AuthorizeRequest = Body(),
user_id: str = Security(get_user_id),
) -> AuthorizeResponse:
"""
OAuth 2.0 Authorization Endpoint
User must be logged in (authenticated with Supabase JWT).
This endpoint creates an authorization code and returns a redirect URL.
PKCE (Proof Key for Code Exchange) is REQUIRED for all authorization requests.
The frontend consent screen should call this endpoint after the user approves,
then redirect the user to the returned `redirect_url`.
Request Body:
- client_id: The OAuth application's client ID
- redirect_uri: Where to redirect after authorization (must match registered URI)
- scopes: List of permissions (e.g., ["EXECUTE_GRAPH", "READ_GRAPH"])
- state: Anti-CSRF token provided by client (will be returned in redirect)
- response_type: Must be "code" (for authorization code flow)
- code_challenge: PKCE code challenge (required)
- code_challenge_method: "S256" (recommended) or "plain"
Returns:
- redirect_url: The URL to redirect the user to (includes authorization code)
Error cases return a redirect_url with error parameters, or raise HTTPException
for critical errors (like invalid redirect_uri).
"""
try:
# Validate response_type
if request.response_type != "code":
return _error_redirect_url(
request.redirect_uri,
request.state,
"unsupported_response_type",
"Only 'code' response type is supported",
)
# Get application
app = await get_oauth_application(request.client_id)
if not app:
return _error_redirect_url(
request.redirect_uri,
request.state,
"invalid_client",
"Unknown client_id",
)
if not app.is_active:
return _error_redirect_url(
request.redirect_uri,
request.state,
"invalid_client",
"Application is not active",
)
# Validate redirect URI
if not validate_redirect_uri(app, request.redirect_uri):
# For invalid redirect_uri, we can't redirect safely
# Must return error instead
raise HTTPException(
status_code=status.HTTP_400_BAD_REQUEST,
detail=(
"Invalid redirect_uri. "
f"Must be one of: {', '.join(app.redirect_uris)}"
),
)
# Parse and validate scopes
try:
requested_scopes = [APIKeyPermission(s.strip()) for s in request.scopes]
except ValueError as e:
return _error_redirect_url(
request.redirect_uri,
request.state,
"invalid_scope",
f"Invalid scope: {e}",
)
if not requested_scopes:
return _error_redirect_url(
request.redirect_uri,
request.state,
"invalid_scope",
"At least one scope is required",
)
if not validate_scopes(app, requested_scopes):
return _error_redirect_url(
request.redirect_uri,
request.state,
"invalid_scope",
"Application is not authorized for all requested scopes. "
f"Allowed: {', '.join(s.value for s in app.scopes)}",
)
# Create authorization code
auth_code = await create_authorization_code(
application_id=app.id,
user_id=user_id,
scopes=requested_scopes,
redirect_uri=request.redirect_uri,
code_challenge=request.code_challenge,
code_challenge_method=request.code_challenge_method,
)
# Build redirect URL with authorization code
params = {
"code": auth_code.code,
"state": request.state,
}
redirect_url = f"{request.redirect_uri}?{urlencode(params)}"
logger.info(
f"Authorization code issued for user #{user_id} "
f"and app {app.name} (#{app.id})"
)
return AuthorizeResponse(redirect_url=redirect_url)
except HTTPException:
raise
except Exception as e:
logger.error(f"Error in authorization endpoint: {e}", exc_info=True)
return _error_redirect_url(
request.redirect_uri,
request.state,
"server_error",
"An unexpected error occurred",
)
def _error_redirect_url(
redirect_uri: str,
state: str,
error: str,
error_description: Optional[str] = None,
) -> AuthorizeResponse:
"""Helper to build redirect URL with OAuth error parameters"""
params = {
"error": error,
"state": state,
}
if error_description:
params["error_description"] = error_description
redirect_url = f"{redirect_uri}?{urlencode(params)}"
return AuthorizeResponse(redirect_url=redirect_url)
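A hedged sketch of the consent screen's approval call described above; the `/api/oauth` mount path and the auth header are assumptions:

```python
# Hypothetical consent-screen call after the user clicks "Approve".
import requests

resp = requests.post(
    "https://platform.example.com/api/oauth/authorize",  # assumed mount path
    json={
        "client_id": "my-client-id",
        "redirect_uri": "https://app.example.com/callback",
        "scopes": ["EXECUTE_GRAPH", "READ_GRAPH"],
        "state": "<state-from-client>",
        "response_type": "code",
        "code_challenge": "<challenge-from-client>",
        "code_challenge_method": "S256",
    },
    headers={"Authorization": "Bearer <supabase-user-jwt>"},
)
# Both success and most error cases come back as a redirect_url to follow.
print(resp.json()["redirect_url"])
```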
# ============================================================================
# Token Endpoint
# ============================================================================
class TokenRequestByCode(BaseModel):
grant_type: Literal["authorization_code"]
code: str = Field(description="Authorization code")
redirect_uri: str = Field(
description="Redirect URI (must match authorization request)"
)
client_id: str
client_secret: str
code_verifier: str = Field(description="PKCE code verifier")
class TokenRequestByRefreshToken(BaseModel):
grant_type: Literal["refresh_token"]
refresh_token: str
client_id: str
client_secret: str
@router.post("/token")
async def token(
request: TokenRequestByCode | TokenRequestByRefreshToken = Body(),
) -> TokenResponse:
"""
OAuth 2.0 Token Endpoint
Exchanges authorization code or refresh token for access token.
Grant Types:
1. authorization_code: Exchange authorization code for tokens
- Required: grant_type, code, redirect_uri, client_id, client_secret,
and code_verifier (PKCE is required for all authorization requests)
2. refresh_token: Exchange refresh token for new access token
- Required: grant_type, refresh_token, client_id, client_secret
Returns:
- access_token: Bearer token for API access (1 hour TTL)
- access_token_expires_at: Timestamp when the access token expires
- token_type: "Bearer"
- refresh_token: Token for refreshing access (30 days TTL)
- refresh_token_expires_at: Timestamp when the refresh token expires
- scopes: List of scopes
"""
# Validate client credentials
try:
app = await validate_client_credentials(
request.client_id, request.client_secret
)
except InvalidClientError as e:
raise HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED,
detail=str(e),
)
# Handle authorization_code grant
if request.grant_type == "authorization_code":
# Consume authorization code
try:
user_id, scopes = await consume_authorization_code(
code=request.code,
application_id=app.id,
redirect_uri=request.redirect_uri,
code_verifier=request.code_verifier,
)
except InvalidGrantError as e:
raise HTTPException(
status_code=status.HTTP_400_BAD_REQUEST,
detail=str(e),
)
# Create access and refresh tokens
access_token = await create_access_token(app.id, user_id, scopes)
refresh_token = await create_refresh_token(app.id, user_id, scopes)
logger.info(
f"Access token issued for user #{user_id} and app {app.name} (#{app.id})"
"via authorization code"
)
if not access_token.token or not refresh_token.token:
raise HTTPException(
status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
detail="Failed to generate tokens",
)
return TokenResponse(
token_type="Bearer",
access_token=access_token.token.get_secret_value(),
access_token_expires_at=access_token.expires_at,
refresh_token=refresh_token.token.get_secret_value(),
refresh_token_expires_at=refresh_token.expires_at,
scopes=list(s.value for s in scopes),
)
# Handle refresh_token grant
elif request.grant_type == "refresh_token":
# Refresh access token
try:
new_access_token, new_refresh_token = await refresh_tokens(
request.refresh_token, app.id
)
except InvalidGrantError as e:
raise HTTPException(
status_code=status.HTTP_400_BAD_REQUEST,
detail=str(e),
)
logger.info(
f"Tokens refreshed for user #{new_access_token.user_id} "
f"by app {app.name} (#{app.id})"
)
if not new_access_token.token or not new_refresh_token.token:
raise HTTPException(
status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
detail="Failed to generate tokens",
)
return TokenResponse(
token_type="Bearer",
access_token=new_access_token.token.get_secret_value(),
access_token_expires_at=new_access_token.expires_at,
refresh_token=new_refresh_token.token.get_secret_value(),
refresh_token_expires_at=new_refresh_token.expires_at,
scopes=list(s.value for s in new_access_token.scopes),
)
else:
raise HTTPException(
status_code=status.HTTP_400_BAD_REQUEST,
detail=f"Unsupported grant_type: {request.grant_type}. "
"Must be 'authorization_code' or 'refresh_token'",
)
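A sketch of both grant types against this endpoint; the `/api/oauth` prefix follows the flow notes at the top of the file, and everything else is a placeholder:

```python
# Hypothetical token calls; client credentials and code values are placeholders.
import requests

BASE = "https://platform.example.com/api/oauth"

# authorization_code grant: trade the code (plus PKCE verifier) for tokens.
tokens = requests.post(f"{BASE}/token", json={
    "grant_type": "authorization_code",
    "code": "<code-from-redirect>",
    "redirect_uri": "https://app.example.com/callback",
    "client_id": "my-client-id",
    "client_secret": "my-client-secret",
    "code_verifier": "<verifier-generated-earlier>",
}).json()

# refresh_token grant: rotate tokens before the access token expires.
refreshed = requests.post(f"{BASE}/token", json={
    "grant_type": "refresh_token",
    "refresh_token": tokens["refresh_token"],
    "client_id": "my-client-id",
    "client_secret": "my-client-secret",
}).json()
```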
# ============================================================================
# Token Introspection Endpoint
# ============================================================================
@router.post("/introspect")
async def introspect(
token: str = Body(description="Token to introspect"),
token_type_hint: Optional[Literal["access_token", "refresh_token"]] = Body(
None, description="Hint about token type ('access_token' or 'refresh_token')"
),
client_id: str = Body(description="Client identifier"),
client_secret: str = Body(description="Client secret"),
) -> TokenIntrospectionResult:
"""
OAuth 2.0 Token Introspection Endpoint (RFC 7662)
Allows clients to check if a token is valid and get its metadata.
Returns:
- active: Whether the token is currently active
- scopes: List of authorized scopes (if active)
- client_id: The client the token was issued to (if active)
- user_id: The user the token represents (if active)
- exp: Expiration timestamp (if active)
- token_type: "access_token" or "refresh_token" (if active)
"""
# Validate client credentials
try:
await validate_client_credentials(client_id, client_secret)
except InvalidClientError as e:
raise HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED,
detail=str(e),
)
# Introspect the token
return await introspect_token(token, token_type_hint)
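For completeness, a hedged introspection call; the path is assumed by analogy with the token endpoint:

```python
# Hypothetical RFC 7662 introspection call; path and credentials are placeholders.
import requests

result = requests.post(
    "https://platform.example.com/api/oauth/introspect",
    json={
        "token": "<access-or-refresh-token>",
        "token_type_hint": "access_token",
        "client_id": "my-client-id",
        "client_secret": "my-client-secret",
    },
).json()
if result["active"]:
    print(result["user_id"], result["scopes"], result["exp"])
```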
# ============================================================================
# Token Revocation Endpoint
# ============================================================================
@router.post("/revoke")
async def revoke(
token: str = Body(description="Token to revoke"),
token_type_hint: Optional[Literal["access_token", "refresh_token"]] = Body(
None, description="Hint about token type ('access_token' or 'refresh_token')"
),
client_id: str = Body(description="Client identifier"),
client_secret: str = Body(description="Client secret"),
):
"""
OAuth 2.0 Token Revocation Endpoint (RFC 7009)
Allows clients to revoke an access or refresh token.
Note: Revoking a refresh token does NOT revoke associated access tokens.
Revoking an access token does NOT revoke the associated refresh token.
"""
# Validate client credentials
try:
app = await validate_client_credentials(client_id, client_secret)
except InvalidClientError as e:
raise HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED,
detail=str(e),
)
# Try to revoke as access token first
# Note: We pass app.id to ensure the token belongs to the authenticated app
if token_type_hint != "refresh_token":
revoked = await revoke_access_token(token, app.id)
if revoked:
logger.info(
f"Access token revoked for app {app.name} (#{app.id}); "
f"user #{revoked.user_id}"
)
return {"status": "ok"}
# Try to revoke as refresh token
revoked = await revoke_refresh_token(token, app.id)
if revoked:
logger.info(
f"Refresh token revoked for app {app.name} (#{app.id}); "
f"user #{revoked.user_id}"
)
return {"status": "ok"}
# Per RFC 7009, revocation endpoint returns 200 even if token not found
# or if token belongs to a different application.
# This prevents token scanning attacks.
logger.warning(f"Unsuccessful token revocation attempt by app {app.name} #{app.id}")
return {"status": "ok"}
# ============================================================================
# Application Management Endpoints (for app owners)
# ============================================================================
@router.get("/apps/mine")
async def list_my_oauth_apps(
user_id: str = Security(get_user_id),
) -> list[OAuthApplicationInfo]:
"""
List all OAuth applications owned by the current user.
Returns a list of OAuth applications with their details including:
- id, name, description, logo_url
- client_id (public identifier)
- redirect_uris, grant_types, scopes
- is_active status
- created_at, updated_at timestamps
Note: client_secret is never returned for security reasons.
"""
return await list_user_oauth_applications(user_id)
@router.patch("/apps/{app_id}/status")
async def update_app_status(
app_id: str,
user_id: str = Security(get_user_id),
is_active: bool = Body(description="Whether the app should be active", embed=True),
) -> OAuthApplicationInfo:
"""
Enable or disable an OAuth application.
Only the application owner can update the status.
When disabled, the application cannot be used for new authorizations
and existing access tokens will fail validation.
Returns the updated application info.
"""
updated_app = await update_oauth_application(
app_id=app_id,
owner_id=user_id,
is_active=is_active,
)
if not updated_app:
raise HTTPException(
status_code=status.HTTP_404_NOT_FOUND,
detail="Application not found or you don't have permission to update it",
)
action = "enabled" if is_active else "disabled"
logger.info(f"OAuth app {updated_app.name} (#{app_id}) {action} by user #{user_id}")
return updated_app
class UpdateAppLogoRequest(BaseModel):
logo_url: str = Field(description="URL of the uploaded logo image")
@router.patch("/apps/{app_id}/logo")
async def update_app_logo(
app_id: str,
request: UpdateAppLogoRequest = Body(),
user_id: str = Security(get_user_id),
) -> OAuthApplicationInfo:
"""
Update the logo URL for an OAuth application.
Only the application owner can update the logo.
The logo should be uploaded first using the media upload endpoint,
then this endpoint is called with the resulting URL.
Logo requirements:
- Must be square (1:1 aspect ratio)
- Minimum 512x512 pixels
- Maximum 2048x2048 pixels
Returns the updated application info.
"""
if (
not (app := await get_oauth_application_by_id(app_id))
or app.owner_id != user_id
):
raise HTTPException(
status_code=status.HTTP_404_NOT_FOUND,
detail="OAuth App not found",
)
# Delete the current app logo file (if any and it's in our cloud storage)
await _delete_app_current_logo_file(app)
updated_app = await update_oauth_application(
app_id=app_id,
owner_id=user_id,
logo_url=request.logo_url,
)
if not updated_app:
raise HTTPException(
status_code=status.HTTP_404_NOT_FOUND,
detail="Application not found or you don't have permission to update it",
)
logger.info(
f"OAuth app {updated_app.name} (#{app_id}) logo updated by user #{user_id}"
)
return updated_app
# Logo upload constraints
LOGO_MIN_SIZE = 512
LOGO_MAX_SIZE = 2048
LOGO_ALLOWED_TYPES = {"image/jpeg", "image/png", "image/webp"}
LOGO_MAX_FILE_SIZE = 3 * 1024 * 1024 # 3MB
@router.post("/apps/{app_id}/logo/upload")
async def upload_app_logo(
app_id: str,
file: UploadFile,
user_id: str = Security(get_user_id),
) -> OAuthApplicationInfo:
"""
Upload a logo image for an OAuth application.
Requirements:
- Image must be square (1:1 aspect ratio)
- Minimum 512x512 pixels
- Maximum 2048x2048 pixels
- Allowed formats: JPEG, PNG, WebP
- Maximum file size: 3MB
The image is uploaded to cloud storage and the app's logoUrl is updated.
Returns the updated application info.
"""
# Verify ownership to reduce vulnerability to DoS(torage) or DoM(oney) attacks
if (
not (app := await get_oauth_application_by_id(app_id))
or app.owner_id != user_id
):
raise HTTPException(
status_code=status.HTTP_404_NOT_FOUND,
detail="OAuth App not found",
)
# Check GCS configuration
if not settings.config.media_gcs_bucket_name:
raise HTTPException(
status_code=status.HTTP_503_SERVICE_UNAVAILABLE,
detail="Media storage is not configured",
)
# Validate content type
content_type = file.content_type
if content_type not in LOGO_ALLOWED_TYPES:
raise HTTPException(
status_code=status.HTTP_400_BAD_REQUEST,
detail=f"Invalid file type. Allowed: JPEG, PNG, WebP. Got: {content_type}",
)
# Read file content
try:
file_bytes = await file.read()
except Exception as e:
logger.error(f"Error reading logo file: {e}")
raise HTTPException(
status_code=status.HTTP_400_BAD_REQUEST,
detail="Failed to read uploaded file",
)
# Check file size
if len(file_bytes) > LOGO_MAX_FILE_SIZE:
raise HTTPException(
status_code=status.HTTP_400_BAD_REQUEST,
detail=(
"File too large. "
f"Maximum size is {LOGO_MAX_FILE_SIZE // 1024 // 1024}MB"
),
)
# Validate image dimensions
try:
image = Image.open(io.BytesIO(file_bytes))
width, height = image.size
if width != height:
raise HTTPException(
status_code=status.HTTP_400_BAD_REQUEST,
detail=f"Logo must be square. Got {width}x{height}",
)
if width < LOGO_MIN_SIZE:
raise HTTPException(
status_code=status.HTTP_400_BAD_REQUEST,
detail=f"Logo too small. Minimum {LOGO_MIN_SIZE}x{LOGO_MIN_SIZE}. "
f"Got {width}x{height}",
)
if width > LOGO_MAX_SIZE:
raise HTTPException(
status_code=status.HTTP_400_BAD_REQUEST,
detail=f"Logo too large. Maximum {LOGO_MAX_SIZE}x{LOGO_MAX_SIZE}. "
f"Got {width}x{height}",
)
except HTTPException:
raise
except Exception as e:
logger.error(f"Error validating logo image: {e}")
raise HTTPException(
status_code=status.HTTP_400_BAD_REQUEST,
detail="Invalid image file",
)
# Scan for viruses
filename = file.filename or "logo"
await scan_content_safe(file_bytes, filename=filename)
# Generate unique filename
file_ext = os.path.splitext(filename)[1].lower() or ".png"
unique_filename = f"{uuid.uuid4()}{file_ext}"
storage_path = f"oauth-apps/{app_id}/logo/{unique_filename}"
# Upload to GCS
try:
async with async_storage.Storage() as async_client:
bucket_name = settings.config.media_gcs_bucket_name
await async_client.upload(
bucket_name, storage_path, file_bytes, content_type=content_type
)
logo_url = f"https://storage.googleapis.com/{bucket_name}/{storage_path}"
except Exception as e:
logger.error(f"Error uploading logo to GCS: {e}")
raise HTTPException(
status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
detail="Failed to upload logo",
)
# Delete the current app logo file (if any and it's in our cloud storage)
await _delete_app_current_logo_file(app)
# Update the app with the new logo URL
updated_app = await update_oauth_application(
app_id=app_id,
owner_id=user_id,
logo_url=logo_url,
)
if not updated_app:
raise HTTPException(
status_code=status.HTTP_404_NOT_FOUND,
detail="Application not found or you don't have permission to update it",
)
logger.info(
f"OAuth app {updated_app.name} (#{app_id}) logo uploaded by user #{user_id}"
)
return updated_app
async def _delete_app_current_logo_file(app: OAuthApplicationInfo):
"""
Delete the current logo file for the given app, if there is one in our cloud storage
"""
bucket_name = settings.config.media_gcs_bucket_name
storage_base_url = f"https://storage.googleapis.com/{bucket_name}/"
if app.logo_url and app.logo_url.startswith(storage_base_url):
# Parse blob path from URL: https://storage.googleapis.com/{bucket}/{path}
old_path = app.logo_url.replace(storage_base_url, "")
try:
async with async_storage.Storage() as async_client:
await async_client.delete(bucket_name, old_path)
logger.info(f"Deleted old logo for OAuth app #{app.id}: {old_path}")
except Exception as e:
# Log but don't fail - the new logo was uploaded successfully
logger.warning(
f"Failed to delete old logo for OAuth app #{app.id}: {e}", exc_info=e
)

File diff suppressed because it is too large

View File

@@ -6,9 +6,9 @@ import pytest
import pytest_mock
from pytest_snapshot.plugin import Snapshot
import backend.server.v2.otto.models as otto_models
import backend.server.v2.otto.routes as otto_routes
from backend.server.v2.otto.service import OttoService
from . import models as otto_models
from . import routes as otto_routes
from .service import OttoService
app = fastapi.FastAPI()
app.include_router(otto_routes.router)

View File

@@ -4,12 +4,15 @@ from typing import Annotated
from fastapi import APIRouter, Body, HTTPException, Query, Security
from fastapi.responses import JSONResponse
from backend.api.utils.api_key_auth import APIKeyAuthenticator
from backend.data.user import (
get_user_by_email,
set_user_email_verification,
unsubscribe_user_by_token,
)
from backend.server.routers.postmark.models import (
from backend.util.settings import Settings
from .models import (
PostmarkBounceEnum,
PostmarkBounceWebhook,
PostmarkClickWebhook,
@@ -19,8 +22,6 @@ from backend.server.routers.postmark.models import (
PostmarkSubscriptionChangeWebhook,
PostmarkWebhook,
)
from backend.server.utils.api_key_auth import APIKeyAuthenticator
from backend.util.settings import Settings
logger = logging.getLogger(__name__)
settings = Settings()

View File

@@ -0,0 +1,72 @@
#!/usr/bin/env python3
"""
CLI script to backfill embeddings for store agents.
Usage:
poetry run python -m backend.api.features.store.backfill_embeddings [--batch-size N]
"""
import argparse
import asyncio
import sys
import prisma
async def main(batch_size: int = 100) -> int:
"""Run the backfill process."""
# Initialize Prisma client
client = prisma.Prisma()
await client.connect()
prisma.register(client)
try:
from backend.api.features.store.embeddings import (
backfill_missing_embeddings,
get_embedding_stats,
)
# Get current stats
print("Current embedding stats:")
stats = await get_embedding_stats()
print(f" Total approved: {stats['total_approved']}")
print(f" With embeddings: {stats['with_embeddings']}")
print(f" Without embeddings: {stats['without_embeddings']}")
print(f" Coverage: {stats['coverage_percent']}%")
if stats["without_embeddings"] == 0:
print("\nAll agents already have embeddings. Nothing to do.")
return 0
# Run backfill
print(f"\nBackfilling up to {batch_size} embeddings...")
result = await backfill_missing_embeddings(batch_size=batch_size)
print(f" Processed: {result['processed']}")
print(f" Success: {result['success']}")
print(f" Failed: {result['failed']}")
# Get final stats
print("\nFinal embedding stats:")
stats = await get_embedding_stats()
print(f" Total approved: {stats['total_approved']}")
print(f" With embeddings: {stats['with_embeddings']}")
print(f" Without embeddings: {stats['without_embeddings']}")
print(f" Coverage: {stats['coverage_percent']}%")
return 0 if result["failed"] == 0 else 1
finally:
await client.disconnect()
if __name__ == "__main__":
parser = argparse.ArgumentParser(description="Backfill embeddings for store agents")
parser.add_argument(
"--batch-size",
type=int,
default=100,
help="Number of embeddings to generate (default: 100)",
)
args = parser.parse_args()
sys.exit(asyncio.run(main(batch_size=args.batch_size)))

View File

@@ -1,8 +1,9 @@
from typing import Literal
import backend.server.v2.store.db
from backend.util.cache import cached
from . import db as store_db
##############################################
############### Caches #######################
##############################################
@@ -29,7 +30,7 @@ async def _get_cached_store_agents(
page_size: int,
):
"""Cached helper to get store agents."""
return await backend.server.v2.store.db.get_store_agents(
return await store_db.get_store_agents(
featured=featured,
creators=[creator] if creator else None,
sorted_by=sorted_by,
@@ -42,10 +43,12 @@ async def _get_cached_store_agents(
# Cache individual agent details for 5 minutes
@cached(maxsize=200, ttl_seconds=300, shared_cache=True)
async def _get_cached_agent_details(username: str, agent_name: str):
async def _get_cached_agent_details(
username: str, agent_name: str, include_changelog: bool = False
):
"""Cached helper to get agent details."""
return await backend.server.v2.store.db.get_store_agent_details(
username=username, agent_name=agent_name
return await store_db.get_store_agent_details(
username=username, agent_name=agent_name, include_changelog=include_changelog
)
@@ -59,7 +62,7 @@ async def _get_cached_store_creators(
page_size: int,
):
"""Cached helper to get store creators."""
return await backend.server.v2.store.db.get_store_creators(
return await store_db.get_store_creators(
featured=featured,
search_query=search_query,
sorted_by=sorted_by,
@@ -72,6 +75,4 @@ async def _get_cached_store_creators(
@cached(maxsize=100, ttl_seconds=300, shared_cache=True)
async def _get_cached_creator_details(username: str):
"""Cached helper to get creator details."""
return await backend.server.v2.store.db.get_store_creator_details(
username=username.lower()
)
return await store_db.get_store_creator_details(username=username.lower())
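For readers unfamiliar with the decorator used above, an illustrative sketch of a minimal async TTL cache with the same shape; the real `backend.util.cache.cached` (including `shared_cache` behavior) may differ substantially:

```python
# Illustrative only: NOT the backend.util.cache implementation.
import time
from functools import wraps


def cached(maxsize: int = 128, ttl_seconds: int = 300, shared_cache: bool = False):
    def decorator(fn):
        store: dict = {}  # key -> (inserted_at, value), insertion-ordered

        @wraps(fn)
        async def wrapper(*args, **kwargs):
            key = (args, tuple(sorted(kwargs.items())))
            hit = store.get(key)
            if hit is not None and time.monotonic() - hit[0] < ttl_seconds:
                return hit[1]  # fresh cached value
            value = await fn(*args, **kwargs)
            if len(store) >= maxsize:
                store.pop(next(iter(store)))  # evict the oldest insertion
            store[key] = (time.monotonic(), value)
            return value

        return wrapper

    return decorator
```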

Some files were not shown because too many files have changed in this diff