Compare commits

...

31 Commits

Author SHA1 Message Date
Swifty
5d3903b6fb port backend changes on top of a fresh dev branch 2026-01-08 12:12:41 +01:00
Ubbe
b0855e8cf2 feat(frontend): context menu right click new builder (#11703)
## Changes 🏗️

<img width="250" height="504" alt="Screenshot 2026-01-06 at 17 53 26"
src="https://github.com/user-attachments/assets/52013448-f49c-46b6-b86a-39f98270cbc3"
/>

<img width="300" height="544" alt="Screenshot 2026-01-06 at 17 53 29"
src="https://github.com/user-attachments/assets/e6334034-68e4-4346-9092-3774ab3e8445"
/>

On the **New Builder**:
- right-clicking a node now shows the context menu (sketched below)
- the same menu is used for right-click and for clicking `...`
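A rough sketch of the wiring (React Flow's `onNodeContextMenu` is real; the `NodeSecondaryMenu` stand-in, its props, and the `@xyflow/react` import path are assumptions):

```tsx
import { useState, type ReactNode } from "react";
import { ReactFlow, type Node } from "@xyflow/react"; // import path assumed

// Stand-in for the shared menu component that both the right-click
// and the `...` trigger render; the real props likely differ.
declare function NodeSecondaryMenu(props: {
  position: { x: number; y: number };
  nodeId: string;
  onClose: () => void;
}): ReactNode;

export function BuilderCanvas({ nodes }: { nodes: Node[] }) {
  const [menu, setMenu] = useState<{ x: number; y: number; nodeId: string } | null>(null);

  return (
    <>
      <ReactFlow
        nodes={nodes}
        onNodeContextMenu={(event, node) => {
          event.preventDefault(); // suppress the browser's default menu
          setMenu({ x: event.clientX, y: event.clientY, nodeId: node.id });
        }}
        onPaneClick={() => setMenu(null)} // dismiss on click elsewhere
      />
      {menu && (
        <NodeSecondaryMenu
          position={{ x: menu.x, y: menu.y }}
          nodeId={menu.nodeId}
          onClose={() => setMenu(null)}
        />
      )}
    </>
  );
}
```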

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run locally and test the above



<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

* **New Features**
* Added a custom right-click context menu for nodes with Copy, Open
agent (when available), and Delete actions; browser default menu is
suppressed while preserving zoom/drag/wiring.
* Introduced reusable SecondaryMenu primitives for context and dropdown
menus.

* **Documentation**
* Added Storybook examples demonstrating the context menu and dropdown
menu usage.

* **Style**
* Updated menu styling and icons with improved consistency and dark-mode
support.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2026-01-08 17:35:49 +07:00
Abhimanyu Yadav
5e2146dd76 feat(frontend): add CustomSchemaField wrapper for dynamic form field routing
(#11722)

### Changes 🏗️

This PR introduces automatic UI schema generation for custom form
fields, eliminating manual field mapping.

#### 1. **generateUiSchemaForCustomFields Utility**
(`generate-ui-schema.ts`) - New File
   - Auto-generates `ui:field` settings for custom fields
   - Detects custom fields using `findCustomFieldId()` matcher
   - Handles nested objects and array items recursively
   - Merges with existing UI schema without overwriting

#### 2. **FormRenderer Integration** (`FormRenderer.tsx`)
   - Imports and uses `generateUiSchemaForCustomFields`
   - Creates merged UI schema with `useMemo`
   - Passes merged schema to Form component
   - Enables automatic custom field detection

#### 3. **Preprocessor Cleanup** (`input-schema-pre-processor.ts`)
   - Removed manual `$id` assignment for custom fields
   - Removed unused `findCustomFieldId` import
   - Simplified to focus only on type validation
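
The rough shape of this, as a sketch rather than the actual code (the `JSONSchema`/`UiSchema` types and traversal details are simplified; `findCustomFieldId` is assumed to return a field id or nothing):

```tsx
import { useMemo } from "react";

type JSONSchema = {
  type?: string;
  properties?: Record<string, JSONSchema>;
  items?: JSONSchema;
};
type UiSchema = Record<string, any>;

// Assumed matcher mirroring findCustomFieldId(): a field id or undefined.
declare function findCustomFieldId(schema: JSONSchema): string | undefined;

function generateUiSchemaForCustomFields(schema: JSONSchema): UiSchema {
  const ui: UiSchema = {};
  const customId = findCustomFieldId(schema);
  if (customId) ui["ui:field"] = customId;
  // Recurse into nested objects...
  for (const [key, child] of Object.entries(schema.properties ?? {})) {
    const childUi = generateUiSchemaForCustomFields(child);
    if (Object.keys(childUi).length > 0) ui[key] = childUi;
  }
  // ...and into array items.
  if (schema.items) {
    const itemsUi = generateUiSchemaForCustomFields(schema.items);
    if (Object.keys(itemsUi).length > 0) ui.items = itemsUi;
  }
  return ui;
}

// FormRenderer side: generated entries merge under existing ones, so a
// hand-written UI schema is never overwritten, and useMemo keeps the
// merge from re-running on every render.
export function useMergedUiSchema(schema: JSONSchema, existing: UiSchema) {
  return useMemo(
    () => ({ ...generateUiSchemaForCustomFields(schema), ...existing }),
    [schema, existing],
  );
}
```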

### Why these changes?

- Custom fields now auto-detect without manual `ui:field` configuration
- Uses standard RJSF approach (UI schema) for field routing
- Centralized custom field detection logic improves maintainability

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Verify custom fields render correctly when present in schema
- [x] Verify standard fields continue to render with default SchemaField
- [x] Verify multiple instances of same custom field type have unique
IDs
  - [x] Test form submission with custom fields

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

* **Bug Fixes**
* Improved custom field rendering in forms by optimizing the UI schema
generation process.


<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2026-01-08 08:47:52 +00:00
Abhimanyu Yadav
103a62c9da feat(frontend/builder): add filters to blocks menu (#11654)
### Changes 🏗️

This PR adds filtering functionality to the new blocks menu, allowing
users to filter search results by category and creator.

**New Components:**
- `BlockMenuFilters`: Main filter component displaying active filters
and filter chips
- `FilterSheet`: Slide-out panel for selecting filters with categories
and creators
- `BlockMenuSearchContent`: Refactored search results display component

**Features Added:**
- Filter by categories: Blocks, Integrations, Marketplace agents, My
agents
- Filter by creator: Shows all available creators from search results
- Category counts: Display number of results per category
- Interactive filter chips with animations (using framer-motion)
- Hover states showing result counts on filter chips
- "All filters" sheet with apply/clear functionality

**State Management:**
- Extended `blockMenuStore` with filter state management
- Added `filters`, `creators`, `creators_list`, and `categoryCounts` to
store
- Integrated filters with search API (`filter` and `by_creator`
parameters)

**Refactoring:**
- Moved search logic from `BlockMenuSearch` to `BlockMenuSearchContent`
- Renamed `useBlockMenuSearch` to `useBlockMenuSearchContent`
- Moved helper functions to `BlockMenuSearchContent` directory

**API Changes:**
- Updated `custom-mutator.ts` to properly handle query parameter
encoding
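
A hedged sketch of the store extension and the query-string building (names follow the PR text; the real slice surely differs):

```ts
import { create } from "zustand";

type Category = "blocks" | "integrations" | "marketplace_agents" | "my_agents";

interface BlockMenuState {
  filters: Category[];
  creators: string[]; // selected creators
  creators_list: string[]; // all creators present in the results
  categoryCounts: Record<Category, number>;
  toggleFilter: (c: Category) => void;
  clearFilters: () => void;
}

export const useBlockMenuStore = create<BlockMenuState>((set) => ({
  filters: [],
  creators: [],
  creators_list: [],
  categoryCounts: { blocks: 0, integrations: 0, marketplace_agents: 0, my_agents: 0 },
  toggleFilter: (c) =>
    set((s) => ({
      filters: s.filters.includes(c)
        ? s.filters.filter((f) => f !== c)
        : [...s.filters, c],
    })),
  clearFilters: () => set({ filters: [], creators: [] }),
}));

// Query parameter encoding via URLSearchParams, using the PR's
// `filter` and `by_creator` parameter names.
export function buildSearchParams(q: string, filters: Category[], creators: string[]) {
  const params = new URLSearchParams({ q });
  for (const f of filters) params.append("filter", f);
  for (const c of creators) params.append("by_creator", c);
  return params.toString();
}
```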


### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Search for blocks and verify filter chips appear
- [x] Click "All filters" and verify filter sheet opens with categories
- [x] Select/deselect category filters and verify results update
accordingly
  - [x] Filter by creator and verify only blocks from that creator show
  - [x] Clear all filters and verify reset to default state
  - [x] Verify filter counts display correctly
  - [x] Test filter chip hover animations
2026-01-08 08:02:21 +00:00
Bentlybro
fc8434fb30 Merge branch 'master' into dev 2026-01-07 12:02:15 +00:00
Ubbe
3ae08cd48e feat(frontend): use Google Drive Picker on new builder (#11702)
## Changes 🏗️

<img width="600" height="960" alt="Screenshot 2026-01-06 at 17 40 23"
src="https://github.com/user-attachments/assets/61085ec5-a367-45c7-acaa-e3fc0f0af647"
/>

- When using Google blocks on the new builder, the Google Drive Picker
now shows up 🏁

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
  - [x] Run app locally and test the above


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

* **New Features**
* Added a Google Drive picker field and widget for forms with an
always-visible remove button and improved single/multi selection
handling.

* **Bug Fixes**
* Better validation and normalization of selected files and consolidated
error messaging.
* Adjusted layout spacing around the picker and selected files for
clearer display.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2026-01-07 17:07:09 +07:00
Swifty
4db13837b9 Revert "extracted frontend changes out of the hackathon/copilot branch"
This reverts commit df87867625.
2026-01-07 09:27:25 +01:00
Swifty
df87867625 extracted frontend changes out of the hackathon/copilot branch 2026-01-07 09:25:10 +01:00
Abhimanyu Yadav
e503126170 feat(frontend): upgrade RJSF to v6 and implement new FormRenderer system
(#11677)

Fixes #11686

### Changes 🏗️

This PR upgrades the React JSON Schema Form (RJSF) library from v5 to v6
and introduces a complete rewrite of the form rendering system with
improved architecture and new features.

#### Core Library Updates
- Upgraded `@rjsf/core` from 5.24.13 to 6.1.2
- Upgraded `@rjsf/utils` from 5.24.13 to 6.1.2
- Added `@radix-ui/react-slider` 1.3.6 for new slider components

#### New Form Renderer Architecture
- **Base Templates**: Created modular base templates for arrays,
objects, and standard fields
- **AnyOf Support**: Implemented `AnyOfField` component with type
selector for union types
- **Array Fields**: New `ArrayFieldTemplate`, `ArrayFieldItemTemplate`,
and `ArraySchemaField` with context provider
- **Object Fields**: Enhanced `ObjectFieldTemplate` with better support
for additional properties via `WrapIfAdditionalTemplate`
- **Field Templates**: New `TitleField`, `DescriptionField`, and
`FieldTemplate` with improved styling
- **Custom Widgets**: Implemented TextWidget, SelectWidget,
CheckboxWidget, FileWidget, DateWidget, TimeWidget, and DateTimeWidget
- **Button Components**: Custom AddButton, RemoveButton, and CopyButton
components

#### Node Handle System Refactor
- Split `NodeHandle` into `InputNodeHandle` and `OutputNodeHandle` for
better separation of concerns
- Refactored handle ID generation logic in `helpers.ts` with new
`generateHandleIdFromTitleId` function
- Improved handle connection detection using edge store
- Added support for nested output handles (objects within outputs)

#### Edge Store Improvements
- Added `removeEdgesByHandlePrefix` method for bulk edge removal
- Improved `isInputConnected` with handle ID cleanup
- Optimized `updateEdgeBeads` to only update when changes occur
- Better edge management with `applyEdgeChanges`

#### Node Store Enhancements
- Added `syncHardcodedValuesWithHandleIds` method to maintain
consistency between form data and handle connections
- Better handling of additional properties in objects
- Improved path parsing with `parseHandleIdToPath` and
`ensurePathExists`

#### Draft Recovery Improvements
- Added diff calculation with `calculateDraftDiff` to show what changed
- New `formatDiffSummary` to display changes in a readable format (e.g.,
"+2/-1 blocks, +3 connections")
- Better visual feedback for draft changes

#### UI/UX Enhancements
- Fixed node container width to 350px for consistency
- Improved field error display with inline error messages
- Better spacing and styling throughout forms
- Enhanced tooltip support for field descriptions
- Improved array item controls with better button placement
- Context-aware field sizing (small/large)

#### Output Handler Updates
- Recursive rendering of nested output properties
- Better type display with color coding
- Improved handle connections for complex output schemas

#### Migration & Cleanup
- Updated `RunInputDialog` to use new FormRenderer
- Updated `FormCreator` to use new FormRenderer
- Moved OAuth callback types to separate file
- Updated import paths from `input-renderer` to `InputRenderer`
- Removed unused console.log statements
- Added `type="button"` to buttons to prevent form submission

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Test form rendering with various field types (text, number,
boolean, arrays, objects)
  - [x] Test anyOf field type selector functionality
  - [x] Test array item addition/removal
  - [x] Test nested object fields with additional properties
  - [x] Test input/output node handle connections
  - [x] Test draft recovery with diff display
  - [x] Verify backward compatibility with existing agents
  - [x] Test field validation and error display
  - [x] Verify handle ID generation for complex schemas

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

* **New Features**
* Improved form field rendering with enhanced support for optional
types, arrays, and nested objects.
* Enhanced draft recovery display showing detailed difference tracking
(added, removed, modified items).
  * Better OAuth popup callback handling with structured message types.

* **Bug Fixes**
  * Improved node handle ID normalization and synchronization.
  * Enhanced edge management for complex field changes.
  * Fixed styling consistency across form components.

* **Dependencies**
  * Updated React JSON Schema Form library to version 6.1.2.
  * Added Radix UI slider component support.


<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2026-01-07 05:06:34 +00:00
Zamil Majdy
7ee28197a3 docs(gitbook): sync documentation updates with dev branch (#11709)
## Summary

Sync GitBook documentation changes from the gitbook branch to dev. This
PR contains comprehensive documentation updates including new assets,
content restructuring, and infrastructure improvements.

## Changes 🏗️

### Documentation Updates
- **New GitBook Assets**: Added 9 new documentation images and
screenshots
  - Platform overview images (AGPT_Platform.png, Banner_image.png)
- Feature illustrations (Contribute.png, Integrations.png, hosted.jpg,
no-code.jpg, api-reference.jpg)
  - Screenshots and examples for better user guidance
- **Content Updates**: Enhanced README.md and SUMMARY.md with improved
structure and navigation
- **Visual Documentation**: Added comprehensive visual guides for
platform features

### Infrastructure 
- **Cloudflare Worker**: Added redirect handler for docs.agpt.co →
agpt.co/docs migration
  - Complete URL mapping for 71+ redirect patterns
  - Handles platform blocks restructuring and edge cases
  - Ready for deployment to Cloudflare Workers

### Merge Conflict Resolution
- **Clean merge from dev**: Successfully merged dev's major backend
restructuring (server/ → api/)
- **File resurrection fix**: Removed files that were accidentally
resurrected during merge conflict resolution
  - Cleaned up BuilderActionButton.tsx (deleted in dev)
  - Cleaned up old PreviewBanner.tsx location (moved in dev)
  - Synced pnpm-lock.yaml and layout.tsx with dev's current state

## Technical Details

This PR represents a careful synchronization that:
1. **Preserves all GitBook documentation work** while staying current
with dev
2. **Maintains clean diff**: Only documentation-related changes remain
after merge cleanup
3. **Resolves merge conflicts**: Handled major backend API restructuring
without breaking docs
4. **Infrastructure ready**: Cloudflare Worker ready for docs migration
deployment

## Files Changed
- `docs/`: GitBook documentation assets and content
- `autogpt_platform/cloudflare_worker.js`: Docs infrastructure for URL
redirects

## Validation
- ✅ All TypeScript compilation errors resolved
- ✅ Pre-commit hooks passing (Prettier, TypeCheck)
- ✅ Only documentation changes remain in diff vs dev
- ✅ Cloudflare Worker tested with comprehensive URL mapping
- ✅ No non-documentation code changes after cleanup

## Deployment Notes
The Cloudflare Worker can be deployed via:
```bash
# Cloudflare Dashboard → Workers → Create → Paste code → Add route docs.agpt.co/*
```
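
For reference, the general shape of such a worker (a minimal sketch; the two mappings below are illustrative, not taken from the actual 71+ entry table):

```ts
// Minimal sketch of a docs.agpt.co → agpt.co/docs redirect worker.
const REDIRECTS: Record<string, string> = {
  "/": "https://agpt.co/docs",
  "/platform/getting-started": "https://agpt.co/docs/platform/getting-started", // illustrative
};

export default {
  async fetch(request: Request): Promise<Response> {
    const url = new URL(request.url);
    // Exact match first, then fall back to prefixing the docs path.
    const target = REDIRECTS[url.pathname] ?? `https://agpt.co/docs${url.pathname}`;
    return Response.redirect(target, 301); // permanent redirect for the migration
  },
};
```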

This completes the GitBook synchronization and prepares for docs site
migration.

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: bobby.gaffin <bobby.gaffin@agpt.co>
Co-authored-by: Bently <Github@bentlybro.com>
Co-authored-by: Abhimanyu Yadav <122007096+Abhi1992002@users.noreply.github.com>
Co-authored-by: Swifty <craigswift13@gmail.com>
Co-authored-by: Ubbe <hi@ubbe.dev>
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Lluis Agusti <hi@llu.lu>
2026-01-07 02:11:11 +00:00
Nicholas Tindle
818de26d24 fix(platform/blocks): XMLParserBlock list object error (#11517)

### Need for these changes 💡

The `XMLParserBlock` was susceptible to crashing with an
`AttributeError: 'List' object has no attribute 'add_text'` when
processing malformed XML inputs, such as documents with multiple root
elements or stray text outside the root. This PR introduces robust
validation to prevent these crashes and provide clear, actionable error
messages to users.

### Changes 🏗️


- Added a `_validate_tokens` static method to `XMLParserBlock` to
perform pre-parsing validation on the token stream. This method ensures
the XML input has a single root element and no text content outside of
it.
- Modified the `XMLParserBlock.run` method to call `_validate_tokens`
immediately after tokenization and before passing the tokens to
`gravitasml.Parser`.
- Introduced a new test case, `test_rejects_text_outside_root`, in
`test_blocks_dos_vulnerability.py` to verify that the `XMLParserBlock`
correctly raises a `ValueError` when encountering XML with text outside
the root element.
- Imported `Token` for type hinting in `xml_parser.py`.
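
The block itself is Python; as an illustration only, here is the same pre-parse check sketched in TypeScript over an assumed token shape:

```ts
type Token =
  | { kind: "open"; name: string }
  | { kind: "close"; name: string }
  | { kind: "text"; value: string };

// Reject multiple roots, unbalanced tags, and text outside the root
// before handing tokens to the parser.
function validateTokens(tokens: Token[]): void {
  let depth = 0;
  let rootsSeen = 0;
  for (const t of tokens) {
    if (t.kind === "open") {
      if (depth === 0 && ++rootsSeen > 1) {
        throw new Error("XML must have a single root element");
      }
      depth += 1;
    } else if (t.kind === "close") {
      depth -= 1;
      if (depth < 0) throw new Error(`Unbalanced closing tag: ${t.name}`);
    } else if (depth === 0 && t.value.trim() !== "") {
      throw new Error("Text content outside the root element is not allowed");
    }
  }
  if (depth !== 0) throw new Error("Unbalanced tags: document ends inside an element");
}
```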

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Confirm that the `test_rejects_text_outside_root` test passes,
asserting that `ValueError` is raised for invalid XML.
  - [x] Confirm that other relevant XML parsing tests continue to pass.


---
Linear Issue:
[OPEN-2835](https://linear.app/autogpt/issue/OPEN-2835/blockunknownerror-raised-by-xmlparserblock-with-message-list-object)



<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> Strengthens XML parsing robustness and error clarity.
> 
> - Adds `_validate_tokens` in `XMLParserBlock` to ensure a single root
element, balanced tags, and no text outside the root before parsing
> - Updates `run` to `list(tokenize(...))` and validate tokens prior to
`Parser.parse()`; maintains 10MB input size guard
> - Introduces `test_rejects_text_outside_root` asserting a readable
`ValueError` for trailing text
> - Bumps `gravitasml` to `0.1.4` in `pyproject.toml` and lockfile
<!-- /CURSOR_SUMMARY -->

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

* **Bug Fixes**
* Improved XML parsing validation with stricter enforcement of
single-root elements and prevention of trailing text, providing clearer
error messages for invalid XML input.

* **Tests**
* Added test coverage for XML parser validation of invalid root text
scenarios.

* **Chores**
  * Updated GravitasML dependency to latest compatible version.


<!-- end of auto-generated comment: release notes by coderabbit.ai -->

---------

Co-authored-by: Cursor Agent <cursoragent@cursor.com>
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <ntindle@users.noreply.github.com>
2026-01-06 20:02:53 +00:00
Nicholas Tindle
cb08def96c feat(blocks): Add Google Docs integration blocks (#11608)
Introduces a new module with blocks for Google Docs operations,
including reading, creating, appending, inserting, formatting,
exporting, sharing, and managing public access for Google Docs. Updates
dependencies in pyproject.toml and poetry.lock to support these
features.



https://github.com/user-attachments/assets/3597366b-a9eb-4f8e-8a0a-5a0bc8ebc09b




### Changes 🏗️
Adds lots of basic docs tools + a dependency to use them with markdown

Block | Description | Key Features
-- | -- | --
Read & Create |   |
GoogleDocsReadBlock | Read content from a Google Doc | Returns text content, title, revision ID
GoogleDocsCreateBlock | Create a new Google Doc | Title, optional initial content
GoogleDocsGetMetadataBlock | Get document metadata | Title, revision ID, locale, suggested modes
GoogleDocsGetStructureBlock | Get document structure with indexes | Flat segments or detailed hierarchy; shows start/end indexes
Plain Text Operations |   |
GoogleDocsAppendPlainTextBlock | Append plain text to end | No formatting applied
GoogleDocsInsertPlainTextBlock | Insert plain text at position | Requires index; no formatting
GoogleDocsFindReplacePlainTextBlock | Find and replace plain text | Case-sensitive option; no formatting on replacement
Markdown Operations | (ideal for LLM/AI output) |
GoogleDocsAppendMarkdownBlock | Append Markdown to end | Full formatting via gravitas-md2gdocs
GoogleDocsInsertMarkdownAtBlock | Insert Markdown at position | Requires index
GoogleDocsReplaceAllWithMarkdownBlock | Replace entire doc with Markdown | Clears and rewrites
GoogleDocsReplaceRangeWithMarkdownBlock | Replace index range with Markdown | Requires start/end index
GoogleDocsReplaceContentWithMarkdownBlock | Find text and replace with Markdown | Text-based search; great for templates
Structural Operations |   |
GoogleDocsInsertTableBlock | Insert a table | Rows/columns OR content array; optional Markdown in cells
GoogleDocsInsertPageBreakBlock | Insert a page break | Position index (0 = end)
GoogleDocsDeleteContentBlock | Delete content range | Requires start/end index
GoogleDocsFormatTextBlock | Apply formatting to text range | Bold, italic, underline, font size/color, etc.
Export & Sharing |   |
GoogleDocsExportBlock | Export to different formats | PDF, DOCX, TXT, HTML, RTF, ODT, EPUB
GoogleDocsShareBlock | Share with specific users | Reader, commenter, writer, owner roles
GoogleDocsSetPublicAccessBlock | Set public access level | Private, anyone with link (view/comment/edit)



### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Build, run, verify, and upload a block super test
- [x] [Google Docs Super
Agent_v8.json](https://github.com/user-attachments/files/24134215/Google.Docs.Super.Agent_v8.json)
works


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

* **Chores**
  * Updated backend dependencies.


<!-- end of auto-generated comment: release notes by coderabbit.ai -->

<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> Adds end-to-end Google Docs capabilities under
`backend/blocks/google/docs.py`, including rich Markdown support.
> 
> - New blocks: read/create docs; plain-text
`append`/`insert`/`find_replace`/`delete`; text `format`;
`insert_table`; `insert_page_break`; `get_metadata`; `get_structure`
> - Markdown-powered blocks (via `gravitas_md2gdocs.to_requests`):
`append_markdown`, `insert_markdown_at`, `replace_all_with_markdown`,
`replace_range_with_markdown`, `replace_content_with_markdown`
> - Export and sharing: `export` (PDF/DOCX/TXT/HTML/RTF/ODT/EPUB),
`share` (user roles), `set_public_access`
> - Dependency updates: add `gravitas-md2gdocs` to `pyproject.toml` and
update `poetry.lock`
<!-- /CURSOR_SUMMARY -->

---------

Co-authored-by: Cursor Agent <cursoragent@cursor.com>
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <ntindle@users.noreply.github.com>
2026-01-05 18:36:56 +00:00
Krzysztof Czerwinski
ac2daee5f8 feat(backend): Add GPT-5.2 and update default models (#11652)
### Changes 🏗️

- Add OpenAI `GPT-5.2` with metadata & cost
- Add const `DEFAULT_LLM_MODEL` (set to GPT-5.2) and use it instead of
hardcoded model across llm blocks and tests

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] GPT-5.2 is set as default and works on llm blocks
2026-01-05 16:13:35 +00:00
lif
266e0d79d4 fix(blocks): add YouTube Shorts URL support (#11659)
## Summary
Added support for parsing YouTube Shorts URLs (`youtube.com/shorts/...`)
in the TranscribeYoutubeVideoBlock to extract video IDs correctly.

## Changes
- Modified `_extract_video_id` method in `youtube.py` to handle Shorts
URL format
- Added test cases for YouTube Shorts URL extraction

## Related Issue
Fixes #11500

## Test Plan
- [x] Added unit tests for YouTube Shorts URL extraction
- [x] Verified existing YouTube URL formats still work
- [x] CI should pass all existing tests

---------

Co-authored-by: Ubbe <hi@ubbe.dev>
2026-01-05 16:11:45 +00:00
lif
01f443190e fix(frontend): allow empty values in number inputs and fix AnyOfField toggle (#11661)

## Summary

This PR fixes two related issues with number/integer inputs in the
frontend:

1. **HTMLType typo fix**: INTEGER input type was incorrectly mapped to
`htmlType: 'account'` (which is not a valid HTML input type) instead of
`htmlType: 'number'`.

2. **AnyOfField toggle fix**: When a user cleared a number input field,
the input would disappear because `useAnyOfField` checked for both
`null` AND `undefined` in `isEnabled`. This PR changes it to only check
for explicit `null` (set by toggle off), allowing `undefined` (empty
input) to keep the field visible.

### Root cause analysis

When a user cleared a number input:
1. `handleChange` returned `undefined` (because `v === "" ? undefined :
Number(v)`)
2. In `useAnyOfField`, `isEnabled = formData !== null && formData !==
undefined` became `false`
3. The input field disappeared

### Fix

Changed `useAnyOfField.tsx` line 67:
```diff
- const isEnabled = formData !== null && formData !== undefined;
+ const isEnabled = formData !== null;
```

This way:
- When toggle is OFF → `formData` is `null` → `isEnabled` is `false`
(input hidden) ✓
- When toggle is ON but input is cleared → `formData` is `undefined` →
`isEnabled` is `true` (input visible) ✓
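
Taken together, the tri-state convention after this fix (function names illustrative):

```ts
// number = a real value, undefined = enabled but empty, null = toggled off.
function handleNumberChange(v: string): number | undefined {
  return v === "" ? undefined : Number(v); // clearing yields undefined, not null
}

function isFieldEnabled(formData: unknown): boolean {
  return formData !== null; // only an explicit null (toggle off) hides the input
}
```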

## Test plan

- [x] Verified INTEGER inputs now render correctly with `type="number"`
- [x] Verified clearing a number input keeps the field visible
- [x] Verified toggling the nullable switch still works correctly

Fixes #11594

🤖 AI-assisted development disclaimer: This PR was developed with
assistance from Claude Code.

---------

Signed-off-by: majiayu000 <1835304752@qq.com>
Co-authored-by: Abhimanyu Yadav <122007096+Abhi1992002@users.noreply.github.com>
2026-01-05 16:10:47 +00:00
Ubbe
bdba0033de refactor(frontend): move NodeInput files (#11695)
## Changes 🏗️

Move the `<NodeInput />` component next to the old builder code where it
is used.

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run app locally and click around, E2E is fine
2026-01-05 10:29:12 +00:00
Abhimanyu Yadav
b87c64ce38 feat(frontend): Add delete key bindings to ReactFlow editor
(#11693)

Issues fixed by this PR
- https://github.com/Significant-Gravitas/AutoGPT/issues/11688
- https://github.com/Significant-Gravitas/AutoGPT/issues/11687

### **Changes 🏗️**

Added keyboard delete functionality to the ReactFlow editor by enabling
the `deleteKeyCode` prop with both "Backspace" and "Delete" keys. This
allows users to delete selected nodes and edges using standard keyboard
shortcuts, improving the editing experience.

**Modified:**

- `Flow.tsx`: Added `deleteKeyCode={["Backspace", "Delete"]}` prop to
the ReactFlow component to enable deletion of selected elements via
keyboard
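
Minimal usage of the prop (import path assumed; all other props elided):

```tsx
import { ReactFlow } from "@xyflow/react";

export function Editor() {
  // Either key deletes the currently selected nodes and edges.
  return <ReactFlow deleteKeyCode={["Backspace", "Delete"]} />;
}
```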

### **Checklist 📋**

#### **For code changes:**

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Select a node in the flow editor and press Delete key to confirm
it deletes
- [x] Select a node in the flow editor and press Backspace key to
confirm it deletes
    - [x] Verify deletion works for multiple selected elements
2026-01-05 10:28:57 +00:00
Ubbe
003affca43 refactor(frontend): fix new builder buttons (#11696)
## Changes 🏗️

<img width="800" height="964" alt="Screenshot 2026-01-05 at 15 26 21"
src="https://github.com/user-attachments/assets/f8c7fc47-894a-4db2-b2f1-62b4d70e8453"
/>

- Adjust the new builder to use the Design System components
- Re-structure imports to match formatting rules
- Small improvement on `use-get-flag`
- Move file which is the main hook

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run locally and check the new buttons look good
2026-01-05 09:09:47 +00:00
Abhimanyu Yadav
290d0d9a9b feat(frontend): add auto-save Draft Recovery feature with IndexedDB persistence
(#11658)

## Summary
Implements an auto-save draft recovery system that persists unsaved flow
builder state across browser sessions, tab closures, and refreshes. When
users return to a flow with unsaved changes, they can choose to restore
or discard the draft via an intuitive recovery popup.



https://github.com/user-attachments/assets/0f77173b-7834-48d2-b7aa-73c6cd2eaff6



## Changes 🏗️

### Core Features
- **Draft Recovery Popup** (`DraftRecoveryPopup.tsx`)
  - Displays amber-themed notification with unsaved changes metadata
  - Shows node count, edge count, and relative time since last save
  - Provides restore and discard actions with tooltips
  - Auto-dismisses on click outside or ESC key

- **Auto-Save System** (`useDraftManager.ts`)
  - Automatically saves draft state every 15 seconds
  - Saves on browser tab close/refresh via `beforeunload`
  - Tracks nodes, edges, graph schemas, node counter, and flow version
  - Smart dirty checking - only saves when actual changes detected
  - Cleans up expired drafts (24-hour TTL)

- **IndexedDB Persistence** (`db.ts`, `draft-service.ts`)
  - Uses Dexie library for reliable client-side storage
- Handles both existing flows (by flowID) and new flows (via temp
session IDs)
- Compares draft state with current state to determine if recovery
needed
  - Automatically clears drafts after successful save

### Integration Changes
- **Flow Editor** (`Flow.tsx`)
  - Integrated `DraftRecoveryPopup` component
  - Passes `isInitialLoadComplete` state for proper timing

- **useFlow Hook** (`useFlow.ts`)
  - Added `isInitialLoadComplete` state to track when flow is ready
  - Ensures draft check happens after initial graph load
  - Resets state on flow/version changes

- **useCopyPaste Hook** (`useCopyPaste.ts`)
  - Refactored to manage keyboard event listeners internally
  - Simplified integration by removing external event handler setup

- **useSaveGraph Hook** (`useSaveGraph.ts`)
  - Clears draft after successful save (both create and update)
  - Removes temp flow ID from session storage on first save

### Dependencies
- Added `dexie@4.2.1` - Modern IndexedDB wrapper for reliable
client-side storage

## Technical Details

**Auto-Save Flow:**
1. User makes changes to nodes/edges
2. Change triggers 15-second debounced save
3. Draft saved to IndexedDB with timestamp
4. On save, current state compared with last saved state
5. Only saves if meaningful changes detected

**Recovery Flow:**
1. User loads flow/refreshes page
2. After initial load completes, check for existing draft
3. Compare draft with current state
4. If different and non-empty, show recovery popup
5. User chooses to restore or discard
6. Draft cleared after either action

**Session Management:**
- Existing flows: Use actual flowID for draft key
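
A hedged sketch of the persistence layer (a Dexie 4 table keyed by flow id; field names are illustrative):

```ts
import Dexie, { type EntityTable } from "dexie";

interface FlowDraft {
  flowId: string; // real flowID, or a temp session id for new flows
  nodes: unknown[];
  edges: unknown[];
  savedAt: number; // epoch ms, drives the 24-hour TTL sweep
}

const db = new Dexie("builder-drafts") as Dexie & {
  drafts: EntityTable<FlowDraft, "flowId">;
};
db.version(1).stores({ drafts: "flowId, savedAt" });

const DRAFT_TTL_MS = 24 * 60 * 60 * 1000;

export async function saveDraft(draft: FlowDraft): Promise<void> {
  await db.drafts.put(draft); // upsert keyed by flowId
}

export async function loadDraft(flowId: string): Promise<FlowDraft | undefined> {
  const draft = await db.drafts.get(flowId);
  if (draft && Date.now() - draft.savedAt > DRAFT_TTL_MS) {
    await db.drafts.delete(flowId); // expired: clean up, report no draft
    return undefined;
  }
  return draft;
}
```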

### Test Plan 🧪

- [x] Create a new flow with 3+ blocks and connections, wait 15+
seconds, then refresh the page - verify recovery popup appears with
correct counts and restoring works
- [x] Create a flow with blocks, refresh, then click "Discard" button on
recovery popup - verify popup disappears and draft is deleted
- [x] Add blocks to a flow, save successfully - verify draft is cleared
from IndexedDB (check DevTools > Application > IndexedDB)
- [x] Make changes to an existing flow, refresh page - verify recovery
popup shows and restoring preserves all changes correctly
- [x] Verify empty flows (0 nodes) don't trigger recovery popup or save
drafts
2025-12-31 14:49:53 +00:00
Abhimanyu Yadav
fba61c72ed feat(frontend): fix duplicate publish button and improve BuilderActionButton styling
(#11669)

Fixes duplicate "Publish to Marketplace" buttons in the builder by
adding a `showTrigger` prop to control modal trigger visibility.

<img width="296" height="99" alt="Screenshot 2025-12-23 at 8 18 58 AM"
src="https://github.com/user-attachments/assets/d5dbfba8-e854-4c0c-a6b7-da47133ec815"
/>


### Changes 🏗️

**BuilderActionButton.tsx**

- Removed borders on hover and active states for a cleaner visual
appearance
- Added `hover:border-none` and `active:border-none` to maintain
consistent styling during interactions

**PublishToMarketplace.tsx**

- Pass `showTrigger={false}` to `PublishAgentModal` to hide the default
trigger button
- This prevents duplicate buttons when a custom trigger is already
rendered

**PublishAgentModal.tsx**

- Added `showTrigger` prop (defaults to `true`) to conditionally render
the modal trigger
- Allows parent components to control whether the built-in trigger
button should be displayed
- Maintains backward compatibility with existing usage

### Checklist 📋

#### For code changes:

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Verify only one "Publish to Marketplace" button appears in the
builder
- [x] Confirm button hover/active states display correctly without
border artifacts
- [x] Verify modal can still be triggered programmatically without the
trigger button
2025-12-31 09:46:12 +00:00
Nicholas Tindle
79d45a15d0 feat(platform): Deduplicate insufficient funds Discord + email notifications (#11672)
Add Redis-based deduplication for insufficient funds notifications (both
Discord alerts and user emails) when users run out of credits. This
prevents spamming users and the PRODUCT Discord channel with repeated
alerts for the same user+agent combination.

### Changes 🏗️

- **Redis-based deduplication** (`backend/executor/manager.py`):
- Add `INSUFFICIENT_FUNDS_NOTIFIED_PREFIX` constant for Redis key prefix
- Add `INSUFFICIENT_FUNDS_NOTIFIED_TTL_SECONDS` (30 days) as fallback
cleanup
- Implement deduplication in `_handle_insufficient_funds_notif` using
Redis `SET NX`
- Skip both email (`ZERO_BALANCE`) and Discord notifications for
duplicate alerts per user+agent
- Add `clear_insufficient_funds_notifications(user_id)` function to
remove all notification flags for a user

- **Clear flags on credit top-up** (`backend/data/credit.py`):
- Call `clear_insufficient_funds_notifications` in `_top_up_credits`
after successful auto-charge
- Call `clear_insufficient_funds_notifications` in `fulfill_checkout`
after successful manual top-up
- This allows users to receive notifications again if they run out of
funds in the future

- **Comprehensive test coverage**
(`backend/executor/manager_insufficient_funds_test.py`):
  - Test first-time notification sends both email and Discord alert
  - Test duplicate notifications are skipped for same user+agent
  - Test different agents for same user get separate alerts
  - Test clearing notifications removes all keys for a user
  - Test handling when no notification keys exist
- Test notifications still sent when Redis fails (graceful degradation)
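
The executor is Python, but the `SET NX` pattern itself is easy to sketch; here it is in TypeScript with node-redis v4, with illustrative key names:

```ts
import { createClient } from "redis";

const redis = createClient(); // assume connect() is awaited at startup
const PREFIX = "insufficient_funds_notified:";
const TTL_SECONDS = 30 * 24 * 60 * 60; // 30-day fallback cleanup

// True if this user+agent pair has not been notified yet; SET ... NX
// only succeeds when the key does not already exist.
export async function shouldNotify(userId: string, graphId: string): Promise<boolean> {
  try {
    const res = await redis.set(`${PREFIX}${userId}:${graphId}`, "1", {
      NX: true,
      EX: TTL_SECONDS,
    });
    return res === "OK";
  } catch {
    return true; // graceful degradation: if Redis fails, still notify
  }
}

// Called after a successful top-up so future alerts fire again.
export async function clearInsufficientFundsNotifications(userId: string): Promise<void> {
  for await (const key of redis.scanIterator({ MATCH: `${PREFIX}${userId}:*` })) {
    await redis.del(key);
  }
}
```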

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] First insufficient funds alert sends both email and Discord
notification
  - [x] Duplicate alerts for same user+agent are skipped
  - [x] Different agents for same user each get their own notification
  - [x] Topping up credits clears notification flags
  - [x] Redis failure gracefully falls back to sending notifications
  - [x] 30-day TTL provides automatic cleanup as fallback
  - [x] Manually test this works with scheduled agents
 

<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> Introduces Redis-backed deduplication for insufficient-funds alerts
and resets flags on successful credit additions.
> 
> - **Dedup insufficient-funds alerts** in `executor/manager.py` using
Redis `SET NX` with `INSUFFICIENT_FUNDS_NOTIFIED_PREFIX` and 30‑day TTL;
skips duplicate ZERO_BALANCE email + Discord alerts per
`user_id`+`graph_id`, with graceful fallback if Redis fails.
> - **Reset notification flags on credit increases** by adding
`clear_insufficient_funds_notifications(user_id)` and invoking it when
enabling/adding positive `GRANT`/`TOP_UP` transactions in
`data/credit.py`.
> - **Tests** (`executor/manager_insufficient_funds_test.py`):
first-time vs duplicate behavior, per-agent separation, clearing keys
(including no-key and Redis-error cases), and clearing on
`_add_transaction`/`_enable_transaction`.
<!-- /CURSOR_SUMMARY -->

---------

Co-authored-by: Ubbe <hi@ubbe.dev>
Co-authored-by: Claude <noreply@anthropic.com>
2025-12-30 18:10:30 +00:00
Ubbe
66f0d97ca2 fix(frontend): hide better chat link if not enabled (#11648)
## Changes 🏗️

- Make `<Navbar />` a client component so its rendering is more
predictable
- Remove the `useMemo()` for the chat link to prevent the flash...
- Make sure chat is added to the navbar links only after checking the
flag is enabled
- Improve logout with `useTransition`
- Simplify feature flags setup

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run locally and test the above

<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> Ensures the `Chat` nav item is hidden when the feature flag is off
across desktop and mobile nav.
> 
> - Inline-filters `loggedInLinks` to skip `Chat` when `Flag.CHAT` is
false for both `NavbarLink` rendering and `MobileNavBar` menu items
> - Removes `useMemo`/`linksWithChat` helper; maps directly over
`loggedInLinks` and filters nulls in mobile, keeping icon mapping intact
> - Cleans up unused `useMemo` import
<!-- /CURSOR_SUMMARY -->
2025-12-30 13:21:53 +00:00
Ubbe
5894a8fcdf fix(frontend): use DS Dialog on old builder (#11643)
## Changes 🏗️

Use the Design System `<Dialog />` on the old builder, which supports
long content scrolling ( the current one does not, causing issues in
graphs with many run inputs )...

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run locally and test the above


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

* **New Features**
* Added Enhanced Rendering toggle for improved output handling and
display (controlled via feature flag)

* **Improvements**
  * Refined dialog layouts and user interactions
* Enhanced copy-to-clipboard functionality with toast notifications upon
copying


<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-12-30 20:22:57 +07:00
Ubbe
dff8efa35d fix(frontend): favico colour override issue (#11681)
## Changes 🏗️

Sometimes on Dev, when navigating between pages, the favicon colour
would revert from Green 🟢 (Dev) to Purple 🟣 (Default). That's because the
`/marketplace` page had custom code overriding it that I didn't notice
earlier...

I also made it use the Next.js metadata API, so it handles the favicon
correctly across navigations.

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run locally and test the above
2025-12-30 20:22:32 +07:00
seer-by-sentry[bot]
e26822998f fix: Handle missing or null 'items' key in DataForSEO Related Keywords block (#10989)
### Changes 🏗️

- Modified the DataForSEO Related Keywords block to handle cases where
the 'items' key is missing or has a null value in the API response.
- Ensures that the code gracefully handles these scenarios by defaulting
to an empty list, preventing potential errors. Fixes
[AUTOGPT-SERVER-66D](https://sentry.io/organizations/significant-gravitas/issues/6902944636/).

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] The block now falls back to an empty list when the API response has
no items, preventing the code from attempting to iterate on a null value.

<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> Strengthens parsing of DataForSEO Labs response to avoid errors when
`items` is missing or null.
> 
> - In `backend/blocks/dataforseo/related_keywords.py` `run()`, sets
`items = first_result.get("items") or []` when `first_result` is a
`dict`, otherwise `[]`, ensuring safe iteration
> - Prevents exceptions and yields empty results when no items are
returned
<!-- /CURSOR_SUMMARY -->

Co-authored-by: seer-by-sentry[bot] <157164994+seer-by-sentry[bot]@users.noreply.github.com>
Co-authored-by: Toran Bruce Richards <toran.richards@gmail.com>
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <ntindle@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
2025-12-26 16:17:24 +00:00
Zamil Majdy
88731b1f76 feat(platform): marketplace update notifications with enhanced publishing workflow (#11630)
## Summary
This PR implements a comprehensive marketplace update notification
system that allows users to discover and update to newer agent versions,
along with enhanced publishing workflows and UI improvements.

<img width="1500" height="533" alt="image"
src="https://github.com/user-attachments/assets/ee331838-d712-4718-b231-1f9ec21bcd8e"
/>

<img width="600" height="610" alt="image"
src="https://github.com/user-attachments/assets/b881a7b8-91a5-460d-a159-f64765b339f1"
/>

<img width="1500" height="416" alt="image"
src="https://github.com/user-attachments/assets/a2d61904-2673-4e44-bcc5-c47d36af7a38"
/>

<img width="1500" height="1015" alt="image"
src="https://github.com/user-attachments/assets/2dd978c7-20cc-4230-977e-9c62157b9f23"
/>


## Core Features

### 🔔 Marketplace Update Notifications
- **Update detection**: Automatically detects when marketplace has newer
agent versions than user's local copy
- **Creator notifications**: Shows banners for creators with unpublished
changes ready to publish
- **Non-creator support**: Enables regular users to discover and update
to newer marketplace versions
- **Version comparison**: Intelligent logic comparing `graph_version` vs
marketplace listing versions
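
Illustratively, the comparison boils down to something like this (types assumed from the field names used in this PR; the real hook also distinguishes creators with unpublished changes from regular users):

```ts
interface LibraryAgent {
  graph_version: number; // the user's local copy
}
interface StoreListing {
  agentGraphVersions: number[]; // versions published to the marketplace
}

export function hasNewerMarketplaceVersion(agent: LibraryAgent, listing: StoreListing): boolean {
  if (listing.agentGraphVersions.length === 0) return false; // nothing published yet
  return Math.max(...listing.agentGraphVersions) > agent.graph_version;
}
```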

### 📋 Enhanced Publishing Workflow  
- **Builder integration**: Added "Publish to Marketplace" button
directly in the builder actions
- **Unified banner system**: Consistent `MarketplaceBanners` component
across library and marketplace pages
- **Streamlined UX**: Fixed layout issues, improved button placement and
styling
- **Modal improvements**: Fixed thumbnail loading race conditions and
infinite loop bugs

### 📚 Version History & Changelog
- **Inline version history**: Added version changelog directly to
marketplace agent pages
- **Version comparison**: Clear display of available versions with
current version highlighting
- **Update mechanism**: Direct updates using `graph_version` parameter
for accuracy

## Technical Implementation

### Backend Changes
- **Database schema**: Added `agentGraphVersions` and `agentGraphId`
fields to `StoreAgent` model
- **API enhancement**: Updated store endpoints to expose graph version
data for version comparison
- **Data migration**: Fixed agent version field naming from `version` to
`agentGraphVersions`
- **Model updates**: Enhanced `LibraryAgentUpdateRequest` with
`graph_version` field

### Frontend Architecture
- **`useMarketplaceUpdate` hook**: Centralized marketplace update
detection and creator identification
- **`MarketplaceBanners` component**: Unified banner system with proper
vertical layout and styling
- **`AgentVersionChangelog` component**: Version history display for
marketplace pages
- **`PublishToMarketplace` component**: Builder integration with modal
workflow

### Key Bug Fixes
- **Thumbnail loading**: Fixed race condition where images wouldn't load
on first modal open
- **Infinite loops**: Used refs to prevent circular dependencies in
`useThumbnailImages` hook
- **Layout issues**: Fixed banner placement, removed duplicate
breadcrumbs, corrected vertical layout
- **Field naming**: Fixed `agent_version` vs `version` field
inconsistencies across APIs

## Files Changed

### Backend
- `autogpt_platform/backend/backend/server/v2/store/` - Enhanced store
API with graph version data
- `autogpt_platform/backend/backend/server/v2/library/` - Updated
library API models
- `autogpt_platform/backend/migrations/` - Database migrations for
version fields
- `autogpt_platform/backend/schema.prisma` - Schema updates for graph
versions

### Frontend
- `src/app/(platform)/components/MarketplaceBanners/` - New unified
banner component
- `src/app/(platform)/library/agents/[id]/components/` - Enhanced
library views with banners
- `src/app/(platform)/build/components/BuilderActions/` - Added
marketplace publish button
- `src/app/(platform)/marketplace/components/AgentInfo/` - Added inline
version history
- `src/components/contextual/PublishAgentModal/` - Fixed thumbnail
loading and modal workflow

## User Experience Impact
- **Better discovery**: Users automatically notified of newer agent
versions
- **Streamlined publishing**: Direct publish access from builder
interface
- **Reduced friction**: Fixed UI bugs, improved loading states,
consistent design
- **Enhanced transparency**: Inline version history on marketplace pages
- **Creator workflow**: Better notifications for creators with
unpublished changes

## Testing
- ✅ Update banners appear correctly when marketplace has newer versions
- ✅ Creator banners show for users with unpublished changes
- ✅ Version comparison logic works with graph_version vs marketplace versions
- ✅ Publish button in builder opens modal correctly with pre-populated data
- ✅ Thumbnail images load properly on first modal open without infinite loops
- ✅ Database migrations completed successfully with version field fixes
- ✅ All existing tests updated and passing with new schema changes

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Lluis Agusti <hi@llu.lu>
Co-authored-by: Ubbe <hi@ubbe.dev>
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
2025-12-22 11:13:06 +00:00
Abhimanyu Yadav
c3e407ef09 feat(frontend): add hover state to edge delete button in FlowEditor (#11601)

The delete button on flow editor edges is always visible, which creates
visual clutter. This change makes the button only appear on hover,
improving the UI while keeping it accessible.

### Changes 🏗️

- Added hover state management using `useState` to track when the edge
delete button is hovered
- Applied opacity transition to the delete button (fades in on hover,
fades out when not hovered)
- Added `onMouseEnter` and `onMouseLeave` handlers to the button to
control hover state
- Used `cn` utility for conditional className management
- Button remains interactive even when `opacity-0` (still clickable for
better UX)
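
The pattern, sketched (the inline `cn` stands in for the repo's utility):

```tsx
import { useState } from "react";

// Stand-in for the repo's cn utility.
const cn = (...classes: Array<string | false>) => classes.filter(Boolean).join(" ");

export function EdgeDeleteButton({ onDelete }: { onDelete: () => void }) {
  const [hovered, setHovered] = useState(false);
  return (
    <button
      onMouseEnter={() => setHovered(true)}
      onMouseLeave={() => setHovered(false)}
      onClick={onDelete}
      className={cn(
        "transition-opacity duration-150",
        hovered ? "opacity-100" : "opacity-0", // fades out but stays clickable
      )}
    >
      ×
    </button>
  );
}
```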

### Checklist 📋

#### For code changes:

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Hover over an edge in the flow editor and verify the delete button
fades in smoothly
- [x] Move mouse away from edge and verify the delete button fades out
smoothly
- [x] Click the delete button while hovered to verify it still removes
the edge connection
- [x] Test with multiple edges to ensure hover state is independent per
edge
2025-12-22 01:30:58 +00:00
Reinier van der Leer
08a60dcb9b refactor(frontend): Clean up React Query-related code (#11604)
- #11603

### Changes 🏗️

Frontend:
- Make `okData` infer the response data type instead of casting
- Generalize infinite query utilities from `SidebarRunsList/helpers.ts`
  - Move to `@/app/api/helpers` and use wherever possible
- Simplify/replace boilerplate checks and conditions with `okData` in
many places
- Add `useUserTimezone` hook to replace all the boilerplate timezone
queries

Backend:
- Fix response type annotation of `GET
/api/store/graph/{store_listing_version_id}` endpoint
- Fix documentation and error behavior of `GET
/api/review/execution/{graph_exec_id}` endpoint

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - CI passes
  - [x] Clicking around the app manually -> no obvious issues
  - [x] Test Onboarding step 5 (run)
  - [x] Library runs list loads normally
2025-12-20 22:46:24 +01:00
Reinier van der Leer
de78d062a9 refactor(backend/api): Clean up API file structure (#11629)
We'll soon be needing a more feature-complete external API. To make way
for this, I'm moving some files around so:
- We can more easily create new versions of our external API
- The file structure of our internal API is more homogeneous

These changes are quite opinionated, but IMO in any case they're better
than the chaotic structure we have now.

### Changes 🏗️

- Move `backend/server` -> `backend/api`
- Move `backend/server/routers` + `backend/server/v2` ->
`backend/api/features`
  - Change absolute sibling imports to relative imports
- Move `backend/server/v2/AutoMod` -> `backend/executor/automod`
- Combine `backend/server/routers/analytics_*test.py` ->
`backend/api/features/analytics_test.py`
- Sort OpenAPI spec file

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - CI tests
  - [x] Clicking around in the app -> no obvious breakage
2025-12-20 20:33:10 +00:00
Zamil Majdy
217e3718d7 feat(platform): implement HITL UI redesign with improved review flow (#11529)
## Summary

• Redesigned Human-in-the-Loop review interface with yellow warning
scheme
• Implemented separate approved_data/rejected_data output pins for
human_in_the_loop block
• Added real-time execution status tracking to legacy flow for review
detection
• Fixed button loading states and improved UI consistency across flows
• Standardized Tailwind CSS usage removing custom values

<img width="1500" alt="image"
src="https://github.com/user-attachments/assets/4ca6dd98-f3c4-41c0-a06b-92b3bca22490"
/>
<img width="1500" alt="image"
src="https://github.com/user-attachments/assets/0afae211-09f0-465e-b477-c3949f13c876"
/>
<img width="1500" alt="image"
src="https://github.com/user-attachments/assets/05d9d1ed-cd40-4c73-92b8-0dab21713ca9"
/>



## Changes Made

### Backend Changes
- Modified `human_in_the_loop.py` block to output separate
`approved_data` and `rejected_data` pins instead of single reviewed_data
with status
- Updated block output schema to support better data flow in graph
builder

### Frontend UI Changes
- Redesigned PendingReviewsList with yellow warning color scheme
(replacing orange)
- Fixed button loading states to show spinner only on clicked button 
- Improved FloatingReviewsPanel layout removing redundant headers
- Added real-time status tracking to legacy flow using useFlowRealtime
hook
- Fixed AgentActivityDropdown text overflow and layout issues
- Enhanced Safe Mode toggle positioning and toast timing
- Standardized all custom Tailwind values to use standard classes

### Design System Updates
- Added yellow design tokens (25, 150, 600) for warning states
- Unified REVIEW status handling across all components
- Improved component composition patterns

## Test Plan
- [x] Verify HITL blocks create separate output pins for
approved/rejected data
- [x] Test review flow works in both new and legacy flow builders
- [x] Confirm button loading states work correctly (only clicked button
shows spinner)
- [x] Validate AgentActivityDropdown properly displays review status
- [x] Check Safe Mode toggle positioning matches old flow
- [x] Ensure real-time status updates work in legacy flow
- [x] Verify yellow warning colors are consistent throughout

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Lluis Agusti <hi@llu.lu>
2025-12-20 15:52:51 +00:00
Ubbe
4a7bc006a8 hotfix(frontend): chat should be disabled by default (#11639)
### Changes 🏗️

Chat should be disabled by default; otherwise it flashes, and if Launch
Darkly fails to load, chat would be left enabled, which is dangerous.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run locally with Launch Darkly disabled and test the above
2025-12-18 19:04:13 +01:00
441 changed files with 22715 additions and 13998 deletions

View File

@@ -6,12 +6,11 @@ start-core:
# Stop core services
stop-core:
-	docker compose stop deps
+	docker compose stop
reset-db:
docker compose stop db
rm -rf db/docker/volumes/db/data
cd backend && poetry run prisma migrate deploy
cd backend && poetry run prisma generate
# View logs for core services
logs-core:
@@ -58,4 +57,4 @@ help:
@echo " run-backend - Run the backend FastAPI server"
@echo " run-frontend - Run the frontend Next.js development server"
@echo " test-data - Run the test data creator"
-@echo " load-store-agents - Load store agents from agents/ folder into test database"
+@echo " load-store-agents - Load store agents from agents/ folder into test database"

View File

@@ -1,29 +1,25 @@
from fastapi import FastAPI
from fastapi.openapi.utils import get_openapi
from .jwt_utils import bearer_jwt_auth
def add_auth_responses_to_openapi(app: FastAPI) -> None:
"""
Set up custom OpenAPI schema generation that adds 401 responses
Patch a FastAPI instance's `openapi()` method to add 401 responses
to all authenticated endpoints.
This is needed when using HTTPBearer with auto_error=False to get proper
401 responses instead of 403, but FastAPI only automatically adds security
responses when auto_error=True.
"""
# Wrap current method to allow stacking OpenAPI schema modifiers like this
wrapped_openapi = app.openapi
def custom_openapi():
if app.openapi_schema:
return app.openapi_schema
openapi_schema = get_openapi(
title=app.title,
version=app.version,
description=app.description,
routes=app.routes,
)
openapi_schema = wrapped_openapi()
# Add 401 response to all endpoints that have security requirements
for path, methods in openapi_schema["paths"].items():
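
For context, the loop this hunk truncates — adding a 401 response to every operation that declares a security requirement — might look like the following sketch. The response body shape is an assumption based on the OpenAPI 3 responses object, not the file's actual code:

```python
# Hedged sketch of the 401-injection loop; `openapi_schema` is the dict
# returned by wrapped_openapi() above.
def add_401_to_secured_operations(openapi_schema: dict) -> dict:
    for path, methods in openapi_schema["paths"].items():
        for method, operation in methods.items():
            # Only operations that declare a security requirement need the 401;
            # non-operation keys (e.g. "parameters") are skipped.
            if isinstance(operation, dict) and operation.get("security"):
                operation.setdefault("responses", {})["401"] = {
                    "description": "Unauthorized - missing or invalid bearer token"
                }
    return openapi_schema
```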

View File

@@ -108,7 +108,7 @@ import fastapi.testclient
import pytest
from pytest_snapshot.plugin import Snapshot
from backend.server.v2.myroute import router
from backend.api.features.myroute import router
app = fastapi.FastAPI()
app.include_router(router)
@@ -149,7 +149,7 @@ These provide the easiest way to set up authentication mocking in test modules:
import fastapi
import fastapi.testclient
import pytest
from backend.server.v2.myroute import router
from backend.api.features.myroute import router
app = fastapi.FastAPI()
app.include_router(router)
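
For reference, the full module-level pattern these hunks point at — mirroring the analytics tests further down in this diff — looks roughly like this (`myroute` is the docs' placeholder; `mock_jwt_user` is assumed to come from a shared conftest):

```python
import fastapi
import fastapi.testclient
import pytest

from backend.api.features.myroute import router

app = fastapi.FastAPI()
app.include_router(router)
client = fastapi.testclient.TestClient(app)


@pytest.fixture(autouse=True)
def setup_app_auth(mock_jwt_user):
    """Override JWT payload resolution for every test in this module."""
    from autogpt_libs.auth.jwt_utils import get_jwt_payload

    app.dependency_overrides[get_jwt_payload] = mock_jwt_user["get_jwt_payload"]
    yield
    app.dependency_overrides.clear()
```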

View File

@@ -3,12 +3,12 @@ from typing import Dict, Set
from fastapi import WebSocket
from backend.api.model import NotificationPayload, WSMessage, WSMethod
from backend.data.execution import (
ExecutionEventType,
GraphExecutionEvent,
NodeExecutionEvent,
)
from backend.server.model import NotificationPayload, WSMessage, WSMethod
_EVENT_TYPE_TO_METHOD_MAP: dict[ExecutionEventType, WSMethod] = {
ExecutionEventType.GRAPH_EXEC_UPDATE: WSMethod.GRAPH_EXECUTION_EVENT,

View File

@@ -4,13 +4,13 @@ from unittest.mock import AsyncMock
import pytest
from fastapi import WebSocket
from backend.api.conn_manager import ConnectionManager
from backend.api.model import NotificationPayload, WSMessage, WSMethod
from backend.data.execution import (
ExecutionStatus,
GraphExecutionEvent,
NodeExecutionEvent,
)
from backend.server.conn_manager import ConnectionManager
from backend.server.model import NotificationPayload, WSMessage, WSMethod
@pytest.fixture

View File

@@ -0,0 +1,25 @@
from fastapi import FastAPI
from backend.api.middleware.security import SecurityHeadersMiddleware
from backend.monitoring.instrumentation import instrument_fastapi
from .v1.routes import v1_router
external_api = FastAPI(
title="AutoGPT External API",
description="External API for AutoGPT integrations",
docs_url="/docs",
version="1.0",
)
external_api.add_middleware(SecurityHeadersMiddleware)
external_api.include_router(v1_router, prefix="/v1")
# Add Prometheus instrumentation
instrument_fastapi(
external_api,
service_name="external-api",
expose_endpoint=True,
endpoint="/metrics",
include_in_schema=True,
)

View File

@@ -16,6 +16,8 @@ from fastapi import APIRouter, Body, HTTPException, Path, Security, status
from prisma.enums import APIKeyPermission
from pydantic import BaseModel, Field, SecretStr
from backend.api.external.middleware import require_permission
from backend.api.features.integrations.models import get_all_provider_names
from backend.data.auth.base import APIAuthorizationInfo
from backend.data.model import (
APIKeyCredentials,
@@ -28,8 +30,6 @@ from backend.data.model import (
from backend.integrations.creds_manager import IntegrationCredentialsManager
from backend.integrations.oauth import CREDENTIALS_BY_PROVIDER, HANDLERS_BY_NAME
from backend.integrations.providers import ProviderName
from backend.server.external.middleware import require_permission
from backend.server.integrations.models import get_all_provider_names
from backend.util.settings import Settings
if TYPE_CHECKING:

View File

@@ -8,23 +8,29 @@ from prisma.enums import AgentExecutionStatus, APIKeyPermission
from pydantic import BaseModel, Field
from typing_extensions import TypedDict
import backend.api.features.store.cache as store_cache
import backend.api.features.store.model as store_model
import backend.data.block
import backend.server.v2.store.cache as store_cache
import backend.server.v2.store.model as store_model
from backend.api.external.middleware import require_permission
from backend.data import execution as execution_db
from backend.data import graph as graph_db
from backend.data import user as user_db
from backend.data.auth.base import APIAuthorizationInfo
from backend.data.block import BlockInput, CompletedBlockOutput
from backend.executor.utils import add_graph_execution
from backend.server.external.middleware import require_permission
from backend.util.settings import Settings
from .integrations import integrations_router
from .tools import tools_router
settings = Settings()
logger = logging.getLogger(__name__)
v1_router = APIRouter()
v1_router.include_router(integrations_router)
v1_router.include_router(tools_router)
class UserInfoResponse(BaseModel):
id: str

View File

@@ -14,11 +14,11 @@ from fastapi import APIRouter, Security
from prisma.enums import APIKeyPermission
from pydantic import BaseModel, Field
from backend.api.external.middleware import require_permission
from backend.api.features.chat.model import ChatSession
from backend.api.features.chat.tools import find_agent_tool, run_agent_tool
from backend.api.features.chat.tools.models import ToolResponseBase
from backend.data.auth.base import APIAuthorizationInfo
from backend.server.external.middleware import require_permission
from backend.server.v2.chat.model import ChatSession
from backend.server.v2.chat.tools import find_agent_tool, run_agent_tool
from backend.server.v2.chat.tools.models import ToolResponseBase
logger = logging.getLogger(__name__)

View File

@@ -6,9 +6,10 @@ from fastapi import APIRouter, Body, Security
from prisma.enums import CreditTransactionType
from backend.data.credit import admin_get_user_history, get_user_credit_model
from backend.server.v2.admin.model import AddUserCreditsResponse, UserHistoryResponse
from backend.util.json import SafeJson
from .model import AddUserCreditsResponse, UserHistoryResponse
logger = logging.getLogger(__name__)

View File

@@ -9,14 +9,15 @@ import pytest_mock
from autogpt_libs.auth.jwt_utils import get_jwt_payload
from pytest_snapshot.plugin import Snapshot
import backend.server.v2.admin.credit_admin_routes as credit_admin_routes
import backend.server.v2.admin.model as admin_model
from backend.data.model import UserTransaction
from backend.util.json import SafeJson
from backend.util.models import Pagination
from .credit_admin_routes import router as credit_admin_router
from .model import UserHistoryResponse
app = fastapi.FastAPI()
app.include_router(credit_admin_routes.router)
app.include_router(credit_admin_router)
client = fastapi.testclient.TestClient(app)
@@ -30,7 +31,7 @@ def setup_app_admin_auth(mock_jwt_admin):
def test_add_user_credits_success(
mocker: pytest_mock.MockFixture,
mocker: pytest_mock.MockerFixture,
configured_snapshot: Snapshot,
admin_user_id: str,
target_user_id: str,
@@ -42,7 +43,7 @@ def test_add_user_credits_success(
return_value=(1500, "transaction-123-uuid")
)
mocker.patch(
"backend.server.v2.admin.credit_admin_routes.get_user_credit_model",
"backend.api.features.admin.credit_admin_routes.get_user_credit_model",
return_value=mock_credit_model,
)
@@ -84,7 +85,7 @@ def test_add_user_credits_success(
def test_add_user_credits_negative_amount(
mocker: pytest_mock.MockFixture,
mocker: pytest_mock.MockerFixture,
snapshot: Snapshot,
) -> None:
"""Test credit deduction by admin (negative amount)"""
@@ -94,7 +95,7 @@ def test_add_user_credits_negative_amount(
return_value=(200, "transaction-456-uuid")
)
mocker.patch(
"backend.server.v2.admin.credit_admin_routes.get_user_credit_model",
"backend.api.features.admin.credit_admin_routes.get_user_credit_model",
return_value=mock_credit_model,
)
@@ -119,12 +120,12 @@ def test_add_user_credits_negative_amount(
def test_get_user_history_success(
mocker: pytest_mock.MockFixture,
mocker: pytest_mock.MockerFixture,
snapshot: Snapshot,
) -> None:
"""Test successful retrieval of user credit history"""
# Mock the admin_get_user_history function
mock_history_response = admin_model.UserHistoryResponse(
mock_history_response = UserHistoryResponse(
history=[
UserTransaction(
user_id="user-1",
@@ -150,7 +151,7 @@ def test_get_user_history_success(
)
mocker.patch(
"backend.server.v2.admin.credit_admin_routes.admin_get_user_history",
"backend.api.features.admin.credit_admin_routes.admin_get_user_history",
return_value=mock_history_response,
)
@@ -170,12 +171,12 @@ def test_get_user_history_success(
def test_get_user_history_with_filters(
mocker: pytest_mock.MockFixture,
mocker: pytest_mock.MockerFixture,
snapshot: Snapshot,
) -> None:
"""Test user credit history with search and filter parameters"""
# Mock the admin_get_user_history function
mock_history_response = admin_model.UserHistoryResponse(
mock_history_response = UserHistoryResponse(
history=[
UserTransaction(
user_id="user-3",
@@ -194,7 +195,7 @@ def test_get_user_history_with_filters(
)
mock_get_history = mocker.patch(
"backend.server.v2.admin.credit_admin_routes.admin_get_user_history",
"backend.api.features.admin.credit_admin_routes.admin_get_user_history",
return_value=mock_history_response,
)
@@ -230,12 +231,12 @@ def test_get_user_history_with_filters(
def test_get_user_history_empty_results(
mocker: pytest_mock.MockFixture,
mocker: pytest_mock.MockerFixture,
snapshot: Snapshot,
) -> None:
"""Test user credit history with no results"""
# Mock empty history response
mock_history_response = admin_model.UserHistoryResponse(
mock_history_response = UserHistoryResponse(
history=[],
pagination=Pagination(
total_items=0,
@@ -246,7 +247,7 @@ def test_get_user_history_empty_results(
)
mocker.patch(
"backend.server.v2.admin.credit_admin_routes.admin_get_user_history",
"backend.api.features.admin.credit_admin_routes.admin_get_user_history",
return_value=mock_history_response,
)

View File

@@ -7,9 +7,9 @@ import fastapi
import fastapi.responses
import prisma.enums
import backend.server.v2.store.cache as store_cache
import backend.server.v2.store.db
import backend.server.v2.store.model
import backend.api.features.store.cache as store_cache
import backend.api.features.store.db as store_db
import backend.api.features.store.model as store_model
import backend.util.json
logger = logging.getLogger(__name__)
@@ -24,7 +24,7 @@ router = fastapi.APIRouter(
@router.get(
"/listings",
summary="Get Admin Listings History",
response_model=backend.server.v2.store.model.StoreListingsWithVersionsResponse,
response_model=store_model.StoreListingsWithVersionsResponse,
)
async def get_admin_listings_with_versions(
status: typing.Optional[prisma.enums.SubmissionStatus] = None,
@@ -48,7 +48,7 @@ async def get_admin_listings_with_versions(
StoreListingsWithVersionsResponse with listings and their versions
"""
try:
listings = await backend.server.v2.store.db.get_admin_listings_with_versions(
listings = await store_db.get_admin_listings_with_versions(
status=status,
search_query=search,
page=page,
@@ -68,11 +68,11 @@ async def get_admin_listings_with_versions(
@router.post(
"/submissions/{store_listing_version_id}/review",
summary="Review Store Submission",
response_model=backend.server.v2.store.model.StoreSubmission,
response_model=store_model.StoreSubmission,
)
async def review_submission(
store_listing_version_id: str,
request: backend.server.v2.store.model.ReviewSubmissionRequest,
request: store_model.ReviewSubmissionRequest,
user_id: str = fastapi.Security(autogpt_libs.auth.get_user_id),
):
"""
@@ -87,12 +87,10 @@ async def review_submission(
StoreSubmission with updated review information
"""
try:
already_approved = (
await backend.server.v2.store.db.check_submission_already_approved(
store_listing_version_id=store_listing_version_id,
)
already_approved = await store_db.check_submission_already_approved(
store_listing_version_id=store_listing_version_id,
)
submission = await backend.server.v2.store.db.review_store_submission(
submission = await store_db.review_store_submission(
store_listing_version_id=store_listing_version_id,
is_approved=request.is_approved,
external_comments=request.comments,
@@ -136,7 +134,7 @@ async def admin_download_agent_file(
Raises:
HTTPException: If the agent is not found or an unexpected error occurs.
"""
graph_data = await backend.server.v2.store.db.get_agent_as_admin(
graph_data = await store_db.get_agent_as_admin(
user_id=user_id,
store_listing_version_id=store_listing_version_id,
)

View File

@@ -6,10 +6,11 @@ from typing import Annotated
import fastapi
import pydantic
from autogpt_libs.auth import get_user_id
from autogpt_libs.auth.dependencies import requires_user
import backend.data.analytics
router = fastapi.APIRouter()
router = fastapi.APIRouter(dependencies=[fastapi.Security(requires_user)])
logger = logging.getLogger(__name__)

View File

@@ -0,0 +1,340 @@
"""Tests for analytics API endpoints."""
import json
from unittest.mock import AsyncMock, Mock
import fastapi
import fastapi.testclient
import pytest
import pytest_mock
from pytest_snapshot.plugin import Snapshot
from .analytics import router as analytics_router
app = fastapi.FastAPI()
app.include_router(analytics_router)
client = fastapi.testclient.TestClient(app)
@pytest.fixture(autouse=True)
def setup_app_auth(mock_jwt_user):
"""Setup auth overrides for all tests in this module."""
from autogpt_libs.auth.jwt_utils import get_jwt_payload
app.dependency_overrides[get_jwt_payload] = mock_jwt_user["get_jwt_payload"]
yield
app.dependency_overrides.clear()
# =============================================================================
# /log_raw_metric endpoint tests
# =============================================================================
def test_log_raw_metric_success(
mocker: pytest_mock.MockFixture,
configured_snapshot: Snapshot,
test_user_id: str,
) -> None:
"""Test successful raw metric logging."""
mock_result = Mock(id="metric-123-uuid")
mock_log_metric = mocker.patch(
"backend.data.analytics.log_raw_metric",
new_callable=AsyncMock,
return_value=mock_result,
)
request_data = {
"metric_name": "page_load_time",
"metric_value": 2.5,
"data_string": "/dashboard",
}
response = client.post("/log_raw_metric", json=request_data)
assert response.status_code == 200, f"Unexpected response: {response.text}"
assert response.json() == "metric-123-uuid"
mock_log_metric.assert_called_once_with(
user_id=test_user_id,
metric_name="page_load_time",
metric_value=2.5,
data_string="/dashboard",
)
configured_snapshot.assert_match(
json.dumps({"metric_id": response.json()}, indent=2, sort_keys=True),
"analytics_log_metric_success",
)
@pytest.mark.parametrize(
"metric_value,metric_name,data_string,test_id",
[
(100, "api_calls_count", "external_api", "integer_value"),
(0, "error_count", "no_errors", "zero_value"),
(-5.2, "temperature_delta", "cooling", "negative_value"),
(1.23456789, "precision_test", "float_precision", "float_precision"),
(999999999, "large_number", "max_value", "large_number"),
(0.0000001, "tiny_number", "min_value", "tiny_number"),
],
)
def test_log_raw_metric_various_values(
mocker: pytest_mock.MockFixture,
configured_snapshot: Snapshot,
metric_value: float,
metric_name: str,
data_string: str,
test_id: str,
) -> None:
"""Test raw metric logging with various metric values."""
mock_result = Mock(id=f"metric-{test_id}-uuid")
mocker.patch(
"backend.data.analytics.log_raw_metric",
new_callable=AsyncMock,
return_value=mock_result,
)
request_data = {
"metric_name": metric_name,
"metric_value": metric_value,
"data_string": data_string,
}
response = client.post("/log_raw_metric", json=request_data)
assert response.status_code == 200, f"Failed for {test_id}: {response.text}"
configured_snapshot.assert_match(
json.dumps(
{"metric_id": response.json(), "test_case": test_id},
indent=2,
sort_keys=True,
),
f"analytics_metric_{test_id}",
)
@pytest.mark.parametrize(
"invalid_data,expected_error",
[
({}, "Field required"),
({"metric_name": "test"}, "Field required"),
(
{"metric_name": "test", "metric_value": "not_a_number", "data_string": "x"},
"Input should be a valid number",
),
(
{"metric_name": "", "metric_value": 1.0, "data_string": "test"},
"String should have at least 1 character",
),
(
{"metric_name": "test", "metric_value": 1.0, "data_string": ""},
"String should have at least 1 character",
),
],
ids=[
"empty_request",
"missing_metric_value_and_data_string",
"invalid_metric_value_type",
"empty_metric_name",
"empty_data_string",
],
)
def test_log_raw_metric_validation_errors(
invalid_data: dict,
expected_error: str,
) -> None:
"""Test validation errors for invalid metric requests."""
response = client.post("/log_raw_metric", json=invalid_data)
assert response.status_code == 422
error_detail = response.json()
assert "detail" in error_detail, f"Missing 'detail' in error: {error_detail}"
error_text = json.dumps(error_detail)
assert (
expected_error in error_text
), f"Expected '{expected_error}' in error response: {error_text}"
def test_log_raw_metric_service_error(
mocker: pytest_mock.MockFixture,
test_user_id: str,
) -> None:
"""Test error handling when analytics service fails."""
mocker.patch(
"backend.data.analytics.log_raw_metric",
new_callable=AsyncMock,
side_effect=Exception("Database connection failed"),
)
request_data = {
"metric_name": "test_metric",
"metric_value": 1.0,
"data_string": "test",
}
response = client.post("/log_raw_metric", json=request_data)
assert response.status_code == 500
error_detail = response.json()["detail"]
assert "Database connection failed" in error_detail["message"]
assert "hint" in error_detail
# =============================================================================
# /log_raw_analytics endpoint tests
# =============================================================================
def test_log_raw_analytics_success(
mocker: pytest_mock.MockFixture,
configured_snapshot: Snapshot,
test_user_id: str,
) -> None:
"""Test successful raw analytics logging."""
mock_result = Mock(id="analytics-789-uuid")
mock_log_analytics = mocker.patch(
"backend.data.analytics.log_raw_analytics",
new_callable=AsyncMock,
return_value=mock_result,
)
request_data = {
"type": "user_action",
"data": {
"action": "button_click",
"button_id": "submit_form",
"timestamp": "2023-01-01T00:00:00Z",
"metadata": {"form_type": "registration", "fields_filled": 5},
},
"data_index": "button_click_submit_form",
}
response = client.post("/log_raw_analytics", json=request_data)
assert response.status_code == 200, f"Unexpected response: {response.text}"
assert response.json() == "analytics-789-uuid"
mock_log_analytics.assert_called_once_with(
test_user_id,
"user_action",
request_data["data"],
"button_click_submit_form",
)
configured_snapshot.assert_match(
json.dumps({"analytics_id": response.json()}, indent=2, sort_keys=True),
"analytics_log_analytics_success",
)
def test_log_raw_analytics_complex_data(
mocker: pytest_mock.MockFixture,
configured_snapshot: Snapshot,
) -> None:
"""Test raw analytics logging with complex nested data structures."""
mock_result = Mock(id="analytics-complex-uuid")
mocker.patch(
"backend.data.analytics.log_raw_analytics",
new_callable=AsyncMock,
return_value=mock_result,
)
request_data = {
"type": "agent_execution",
"data": {
"agent_id": "agent_123",
"execution_id": "exec_456",
"status": "completed",
"duration_ms": 3500,
"nodes_executed": 15,
"blocks_used": [
{"block_id": "llm_block", "count": 3},
{"block_id": "http_block", "count": 5},
{"block_id": "code_block", "count": 2},
],
"errors": [],
"metadata": {
"trigger": "manual",
"user_tier": "premium",
"environment": "production",
},
},
"data_index": "agent_123_exec_456",
}
response = client.post("/log_raw_analytics", json=request_data)
assert response.status_code == 200
configured_snapshot.assert_match(
json.dumps(
{"analytics_id": response.json(), "logged_data": request_data["data"]},
indent=2,
sort_keys=True,
),
"analytics_log_analytics_complex_data",
)
@pytest.mark.parametrize(
"invalid_data,expected_error",
[
({}, "Field required"),
({"type": "test"}, "Field required"),
(
{"type": "test", "data": "not_a_dict", "data_index": "test"},
"Input should be a valid dictionary",
),
({"type": "test", "data": {"key": "value"}}, "Field required"),
],
ids=[
"empty_request",
"missing_data_and_data_index",
"invalid_data_type",
"missing_data_index",
],
)
def test_log_raw_analytics_validation_errors(
invalid_data: dict,
expected_error: str,
) -> None:
"""Test validation errors for invalid analytics requests."""
response = client.post("/log_raw_analytics", json=invalid_data)
assert response.status_code == 422
error_detail = response.json()
assert "detail" in error_detail, f"Missing 'detail' in error: {error_detail}"
error_text = json.dumps(error_detail)
assert (
expected_error in error_text
), f"Expected '{expected_error}' in error response: {error_text}"
def test_log_raw_analytics_service_error(
mocker: pytest_mock.MockFixture,
test_user_id: str,
) -> None:
"""Test error handling when analytics service fails."""
mocker.patch(
"backend.data.analytics.log_raw_analytics",
new_callable=AsyncMock,
side_effect=Exception("Analytics DB unreachable"),
)
request_data = {
"type": "test_event",
"data": {"key": "value"},
"data_index": "test_index",
}
response = client.post("/log_raw_analytics", json=request_data)
assert response.status_code == 500
error_detail = response.json()["detail"]
assert "Analytics DB unreachable" in error_detail["message"]
assert "hint" in error_detail

View File

@@ -6,17 +6,20 @@ from typing import Sequence
import prisma
import backend.api.features.library.db as library_db
import backend.api.features.library.model as library_model
import backend.api.features.store.db as store_db
import backend.api.features.store.model as store_model
import backend.data.block
import backend.server.v2.library.db as library_db
import backend.server.v2.library.model as library_model
import backend.server.v2.store.db as store_db
import backend.server.v2.store.model as store_model
from backend.blocks import load_all_blocks
from backend.blocks.llm import LlmModel
from backend.data.block import AnyBlockSchema, BlockCategory, BlockInfo, BlockSchema
from backend.data.db import query_raw_with_schema
from backend.integrations.providers import ProviderName
from backend.server.v2.builder.model import (
from backend.util.cache import cached
from backend.util.models import Pagination
from .model import (
BlockCategoryResponse,
BlockResponse,
BlockType,
@@ -26,8 +29,6 @@ from backend.server.v2.builder.model import (
ProviderResponse,
SearchEntry,
)
from backend.util.cache import cached
from backend.util.models import Pagination
logger = logging.getLogger(__name__)
llm_models = [name.name.lower().replace("_", " ") for name in LlmModel]

View File

@@ -2,8 +2,8 @@ from typing import Literal
from pydantic import BaseModel
import backend.server.v2.library.model as library_model
import backend.server.v2.store.model as store_model
import backend.api.features.library.model as library_model
import backend.api.features.store.model as store_model
from backend.data.block import BlockInfo
from backend.integrations.providers import ProviderName
from backend.util.models import Pagination

View File

@@ -4,11 +4,12 @@ from typing import Annotated, Sequence
import fastapi
from autogpt_libs.auth.dependencies import get_user_id, requires_user
import backend.server.v2.builder.db as builder_db
import backend.server.v2.builder.model as builder_model
from backend.integrations.providers import ProviderName
from backend.util.models import Pagination
from . import db as builder_db
from . import model as builder_model
logger = logging.getLogger(__name__)
router = fastapi.APIRouter(

View File

@@ -12,7 +12,11 @@ class ChatConfig(BaseSettings):
# OpenAI API Configuration
model: str = Field(
default="qwen/qwen3-235b-a22b-2507", description="Default model to use"
default="anthropic/claude-opus-4.5", description="Default model to use"
)
title_model: str = Field(
default="openai/gpt-4o-mini",
description="Model to use for generating session titles (should be fast/cheap)",
)
api_key: str | None = Field(default=None, description="OpenAI API key")
base_url: str | None = Field(
@@ -72,8 +76,31 @@ class ChatConfig(BaseSettings):
v = "https://openrouter.ai/api/v1"
return v
# Prompt paths for different contexts
PROMPT_PATHS: dict[str, str] = {
"default": "prompts/chat_system.md",
"onboarding": "prompts/onboarding_system.md",
}
def get_system_prompt_for_type(
self, prompt_type: str = "default", **template_vars
) -> str:
"""Load and render a system prompt by type.
Args:
prompt_type: The type of prompt to load ("default" or "onboarding")
**template_vars: Variables to substitute in the template
Returns:
Rendered system prompt string
"""
prompt_path_str = self.PROMPT_PATHS.get(
prompt_type, self.PROMPT_PATHS["default"]
)
return self._load_prompt_from_path(prompt_path_str, **template_vars)
def get_system_prompt(self, **template_vars) -> str:
"""Load and render the system prompt from file.
"""Load and render the default system prompt from file.
Args:
**template_vars: Variables to substitute in the template
@@ -82,9 +109,21 @@ class ChatConfig(BaseSettings):
Rendered system prompt string
"""
return self._load_prompt_from_path(self.system_prompt_path, **template_vars)
def _load_prompt_from_path(self, prompt_path_str: str, **template_vars) -> str:
"""Load and render a system prompt from a given path.
Args:
prompt_path_str: Path to the prompt file relative to chat module
**template_vars: Variables to substitute in the template
Returns:
Rendered system prompt string
"""
# Get the path relative to this module
module_dir = Path(__file__).parent
prompt_path = module_dir / self.system_prompt_path
prompt_path = module_dir / prompt_path_str
# Check for .j2 extension first (Jinja2 template)
j2_path = Path(str(prompt_path) + ".j2")
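
A minimal usage sketch of the prompt-type routing added above (the template variable is an illustrative assumption):

```python
config = ChatConfig()

# Known type: loads prompts/onboarding_system.md (or its .j2 variant).
onboarding = config.get_system_prompt_for_type("onboarding", user_name="Sarah")

# Unknown types fall back to PROMPT_PATHS["default"].
fallback = config.get_system_prompt_for_type("nonexistent")

# The legacy entry point still loads the configured system_prompt_path.
default = config.get_system_prompt()
```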

View File

@@ -0,0 +1,215 @@
"""Database operations for chat sessions."""
import logging
from datetime import UTC, datetime
from typing import Any, cast
from prisma.models import ChatMessage as PrismaChatMessage
from prisma.models import ChatSession as PrismaChatSession
from prisma.types import (
ChatMessageCreateInput,
ChatSessionCreateInput,
ChatSessionUpdateInput,
)
from backend.util.json import SafeJson
logger = logging.getLogger(__name__)
async def get_chat_session(session_id: str) -> PrismaChatSession | None:
"""Get a chat session by ID from the database."""
session = await PrismaChatSession.prisma().find_unique(
where={"id": session_id},
include={"Messages": True},
)
if session and session.Messages:
# Sort messages by sequence in Python since Prisma doesn't support order_by in include
session.Messages.sort(key=lambda m: m.sequence)
return session
async def create_chat_session(
session_id: str,
user_id: str | None,
) -> PrismaChatSession:
"""Create a new chat session in the database."""
data = ChatSessionCreateInput(
id=session_id,
userId=user_id,
credentials=SafeJson({}),
successfulAgentRuns=SafeJson({}),
successfulAgentSchedules=SafeJson({}),
)
return await PrismaChatSession.prisma().create(
data=data,
include={"Messages": True},
)
async def update_chat_session(
session_id: str,
credentials: dict[str, Any] | None = None,
successful_agent_runs: dict[str, Any] | None = None,
successful_agent_schedules: dict[str, Any] | None = None,
total_prompt_tokens: int | None = None,
total_completion_tokens: int | None = None,
title: str | None = None,
) -> PrismaChatSession | None:
"""Update a chat session's metadata."""
data: ChatSessionUpdateInput = {"updatedAt": datetime.now(UTC)}
if credentials is not None:
data["credentials"] = SafeJson(credentials)
if successful_agent_runs is not None:
data["successfulAgentRuns"] = SafeJson(successful_agent_runs)
if successful_agent_schedules is not None:
data["successfulAgentSchedules"] = SafeJson(successful_agent_schedules)
if total_prompt_tokens is not None:
data["totalPromptTokens"] = total_prompt_tokens
if total_completion_tokens is not None:
data["totalCompletionTokens"] = total_completion_tokens
if title is not None:
data["title"] = title
session = await PrismaChatSession.prisma().update(
where={"id": session_id},
data=data,
include={"Messages": True},
)
if session and session.Messages:
session.Messages.sort(key=lambda m: m.sequence)
return session
async def add_chat_message(
session_id: str,
role: str,
sequence: int,
content: str | None = None,
name: str | None = None,
tool_call_id: str | None = None,
refusal: str | None = None,
tool_calls: list[dict[str, Any]] | None = None,
function_call: dict[str, Any] | None = None,
) -> PrismaChatMessage:
"""Add a message to a chat session."""
# Build the input dict dynamically - only include optional fields when they
# have values, as Prisma TypedDict validation fails when optional fields
# are explicitly set to None
data: dict[str, Any] = {
"Session": {"connect": {"id": session_id}},
"role": role,
"sequence": sequence,
}
# Add optional string fields
if content is not None:
data["content"] = content
if name is not None:
data["name"] = name
if tool_call_id is not None:
data["toolCallId"] = tool_call_id
if refusal is not None:
data["refusal"] = refusal
# Add optional JSON fields only when they have values
if tool_calls is not None:
data["toolCalls"] = SafeJson(tool_calls)
if function_call is not None:
data["functionCall"] = SafeJson(function_call)
# Update session's updatedAt timestamp
await PrismaChatSession.prisma().update(
where={"id": session_id},
data={"updatedAt": datetime.now(UTC)},
)
return await PrismaChatMessage.prisma().create(
data=cast(ChatMessageCreateInput, data)
)
async def add_chat_messages_batch(
session_id: str,
messages: list[dict[str, Any]],
start_sequence: int,
) -> list[PrismaChatMessage]:
"""Add multiple messages to a chat session in a batch."""
if not messages:
return []
created_messages = []
for i, msg in enumerate(messages):
# Build the input dict dynamically - only include optional JSON fields
# when they have values, as Prisma TypedDict validation fails when
# optional fields are explicitly set to None
data: dict[str, Any] = {
"Session": {"connect": {"id": session_id}},
"role": msg["role"],
"sequence": start_sequence + i,
}
# Add optional string fields
if msg.get("content") is not None:
data["content"] = msg["content"]
if msg.get("name") is not None:
data["name"] = msg["name"]
if msg.get("tool_call_id") is not None:
data["toolCallId"] = msg["tool_call_id"]
if msg.get("refusal") is not None:
data["refusal"] = msg["refusal"]
# Add optional JSON fields only when they have values
if msg.get("tool_calls") is not None:
data["toolCalls"] = SafeJson(msg["tool_calls"])
if msg.get("function_call") is not None:
data["functionCall"] = SafeJson(msg["function_call"])
created = await PrismaChatMessage.prisma().create(
data=cast(ChatMessageCreateInput, data)
)
created_messages.append(created)
# Update session's updatedAt timestamp
await PrismaChatSession.prisma().update(
where={"id": session_id},
data={"updatedAt": datetime.now(UTC)},
)
return created_messages
async def get_user_chat_sessions(
user_id: str,
limit: int = 50,
offset: int = 0,
) -> list[PrismaChatSession]:
"""Get chat sessions for a user, ordered by most recent."""
return await PrismaChatSession.prisma().find_many(
where={"userId": user_id},
order={"updatedAt": "desc"},
take=limit,
skip=offset,
)
async def get_user_session_count(user_id: str) -> int:
"""Get the total number of chat sessions for a user."""
return await PrismaChatSession.prisma().count(where={"userId": user_id})
async def delete_chat_session(session_id: str) -> bool:
"""Delete a chat session and all its messages."""
try:
await PrismaChatSession.prisma().delete(where={"id": session_id})
return True
except Exception as e:
logger.error(f"Failed to delete chat session {session_id}: {e}")
return False
async def get_chat_session_message_count(session_id: str) -> int:
"""Get the number of messages in a chat session."""
count = await PrismaChatMessage.prisma().count(where={"sessionId": session_id})
return count
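
Taken together, a round trip through these helpers might look like this sketch (the module path follows the backend.api.features.chat layout this PR introduces; IDs are illustrative, and a connected Prisma client is assumed):

```python
import asyncio

from backend.api.features.chat import db as chat_db  # path assumed per this PR


async def demo() -> None:
    # Create a session, append one message, then read the count back.
    session = await chat_db.create_chat_session("session-123", user_id=None)
    await chat_db.add_chat_message(
        session_id=session.id, role="user", sequence=0, content="Hello"
    )
    assert await chat_db.get_chat_session_message_count(session.id) == 1
    # Per its docstring, deleting the session also removes its messages.
    assert await chat_db.delete_chat_session(session.id)


asyncio.run(demo())
```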

View File

@@ -0,0 +1,473 @@
import logging
import uuid
from datetime import UTC, datetime
from openai.types.chat import (
ChatCompletionAssistantMessageParam,
ChatCompletionDeveloperMessageParam,
ChatCompletionFunctionMessageParam,
ChatCompletionMessageParam,
ChatCompletionSystemMessageParam,
ChatCompletionToolMessageParam,
ChatCompletionUserMessageParam,
)
from openai.types.chat.chat_completion_assistant_message_param import FunctionCall
from openai.types.chat.chat_completion_message_tool_call_param import (
ChatCompletionMessageToolCallParam,
Function,
)
from prisma.models import ChatMessage as PrismaChatMessage
from prisma.models import ChatSession as PrismaChatSession
from pydantic import BaseModel
from backend.data.redis_client import get_redis_async
from backend.util import json
from backend.util.exceptions import RedisError
from . import db as chat_db
from .config import ChatConfig
logger = logging.getLogger(__name__)
config = ChatConfig()
class ChatMessage(BaseModel):
role: str
content: str | None = None
name: str | None = None
tool_call_id: str | None = None
refusal: str | None = None
tool_calls: list[dict] | None = None
function_call: dict | None = None
class Usage(BaseModel):
prompt_tokens: int
completion_tokens: int
total_tokens: int
class ChatSession(BaseModel):
session_id: str
user_id: str | None
title: str | None = None
messages: list[ChatMessage]
usage: list[Usage]
credentials: dict[str, dict] = {} # Map of provider -> credential metadata
started_at: datetime
updated_at: datetime
successful_agent_runs: dict[str, int] = {}
successful_agent_schedules: dict[str, int] = {}
@staticmethod
def new(user_id: str | None) -> "ChatSession":
return ChatSession(
session_id=str(uuid.uuid4()),
user_id=user_id,
title=None,
messages=[],
usage=[],
credentials={},
started_at=datetime.now(UTC),
updated_at=datetime.now(UTC),
)
@staticmethod
def from_prisma(
prisma_session: PrismaChatSession,
prisma_messages: list[PrismaChatMessage] | None = None,
) -> "ChatSession":
"""Convert Prisma models to Pydantic ChatSession."""
messages = []
if prisma_messages:
for msg in prisma_messages:
tool_calls = None
if msg.toolCalls:
tool_calls = (
json.loads(msg.toolCalls)
if isinstance(msg.toolCalls, str)
else msg.toolCalls
)
function_call = None
if msg.functionCall:
function_call = (
json.loads(msg.functionCall)
if isinstance(msg.functionCall, str)
else msg.functionCall
)
messages.append(
ChatMessage(
role=msg.role,
content=msg.content,
name=msg.name,
tool_call_id=msg.toolCallId,
refusal=msg.refusal,
tool_calls=tool_calls,
function_call=function_call,
)
)
# Parse JSON fields from Prisma
credentials = (
json.loads(prisma_session.credentials)
if isinstance(prisma_session.credentials, str)
else prisma_session.credentials or {}
)
successful_agent_runs = (
json.loads(prisma_session.successfulAgentRuns)
if isinstance(prisma_session.successfulAgentRuns, str)
else prisma_session.successfulAgentRuns or {}
)
successful_agent_schedules = (
json.loads(prisma_session.successfulAgentSchedules)
if isinstance(prisma_session.successfulAgentSchedules, str)
else prisma_session.successfulAgentSchedules or {}
)
# Calculate usage from token counts
usage = []
if prisma_session.totalPromptTokens or prisma_session.totalCompletionTokens:
usage.append(
Usage(
prompt_tokens=prisma_session.totalPromptTokens or 0,
completion_tokens=prisma_session.totalCompletionTokens or 0,
total_tokens=(prisma_session.totalPromptTokens or 0)
+ (prisma_session.totalCompletionTokens or 0),
)
)
return ChatSession(
session_id=prisma_session.id,
user_id=prisma_session.userId,
title=prisma_session.title,
messages=messages,
usage=usage,
credentials=credentials,
started_at=prisma_session.createdAt,
updated_at=prisma_session.updatedAt,
successful_agent_runs=successful_agent_runs,
successful_agent_schedules=successful_agent_schedules,
)
def to_openai_messages(self) -> list[ChatCompletionMessageParam]:
messages = []
for message in self.messages:
if message.role == "developer":
m = ChatCompletionDeveloperMessageParam(
role="developer",
content=message.content or "",
)
if message.name:
m["name"] = message.name
messages.append(m)
elif message.role == "system":
m = ChatCompletionSystemMessageParam(
role="system",
content=message.content or "",
)
if message.name:
m["name"] = message.name
messages.append(m)
elif message.role == "user":
m = ChatCompletionUserMessageParam(
role="user",
content=message.content or "",
)
if message.name:
m["name"] = message.name
messages.append(m)
elif message.role == "assistant":
m = ChatCompletionAssistantMessageParam(
role="assistant",
content=message.content or "",
)
if message.function_call:
m["function_call"] = FunctionCall(
arguments=message.function_call["arguments"],
name=message.function_call["name"],
)
if message.refusal:
m["refusal"] = message.refusal
if message.tool_calls:
t: list[ChatCompletionMessageToolCallParam] = []
for tool_call in message.tool_calls:
# Tool calls are stored with nested structure: {id, type, function: {name, arguments}}
function_data = tool_call.get("function", {})
# Skip tool calls that are missing required fields
if "id" not in tool_call or "name" not in function_data:
logger.warning(
f"Skipping invalid tool call: missing required fields. "
f"Got: {tool_call.keys()}, function keys: {function_data.keys()}"
)
continue
# Arguments are stored as a JSON string
arguments_str = function_data.get("arguments", "{}")
t.append(
ChatCompletionMessageToolCallParam(
id=tool_call["id"],
type="function",
function=Function(
arguments=arguments_str,
name=function_data["name"],
),
)
)
m["tool_calls"] = t
if message.name:
m["name"] = message.name
messages.append(m)
elif message.role == "tool":
messages.append(
ChatCompletionToolMessageParam(
role="tool",
content=message.content or "",
tool_call_id=message.tool_call_id or "",
)
)
elif message.role == "function":
messages.append(
ChatCompletionFunctionMessageParam(
role="function",
content=message.content,
name=message.name or "",
)
)
return messages
async def _get_session_from_cache(session_id: str) -> ChatSession | None:
"""Get a chat session from Redis cache."""
redis_key = f"chat:session:{session_id}"
async_redis = await get_redis_async()
raw_session: bytes | None = await async_redis.get(redis_key)
if raw_session is None:
return None
try:
session = ChatSession.model_validate_json(raw_session)
logger.info(
f"Loading session {session_id} from cache: "
f"message_count={len(session.messages)}, "
f"roles={[m.role for m in session.messages]}"
)
return session
except Exception as e:
logger.error(f"Failed to deserialize session {session_id}: {e}", exc_info=True)
raise RedisError(f"Corrupted session data for {session_id}") from e
async def _cache_session(session: ChatSession) -> None:
"""Cache a chat session in Redis."""
redis_key = f"chat:session:{session.session_id}"
async_redis = await get_redis_async()
await async_redis.setex(redis_key, config.session_ttl, session.model_dump_json())
async def _get_session_from_db(session_id: str) -> ChatSession | None:
"""Get a chat session from the database."""
prisma_session = await chat_db.get_chat_session(session_id)
if not prisma_session:
return None
messages = prisma_session.Messages
logger.info(
f"Loading session {session_id} from DB: "
f"has_messages={messages is not None}, "
f"message_count={len(messages) if messages else 0}, "
f"roles={[m.role for m in messages] if messages else []}"
)
return ChatSession.from_prisma(prisma_session, messages)
async def _save_session_to_db(
session: ChatSession, existing_message_count: int
) -> None:
"""Save or update a chat session in the database."""
# Check if session exists in DB
existing = await chat_db.get_chat_session(session.session_id)
if not existing:
# Create new session
await chat_db.create_chat_session(
session_id=session.session_id,
user_id=session.user_id,
)
existing_message_count = 0
# Calculate total tokens from usage
total_prompt = sum(u.prompt_tokens for u in session.usage)
total_completion = sum(u.completion_tokens for u in session.usage)
# Update session metadata
await chat_db.update_chat_session(
session_id=session.session_id,
credentials=session.credentials,
successful_agent_runs=session.successful_agent_runs,
successful_agent_schedules=session.successful_agent_schedules,
total_prompt_tokens=total_prompt,
total_completion_tokens=total_completion,
)
# Add new messages (only those after existing count)
new_messages = session.messages[existing_message_count:]
if new_messages:
messages_data = []
for msg in new_messages:
messages_data.append(
{
"role": msg.role,
"content": msg.content,
"name": msg.name,
"tool_call_id": msg.tool_call_id,
"refusal": msg.refusal,
"tool_calls": msg.tool_calls,
"function_call": msg.function_call,
}
)
logger.info(
f"Saving {len(new_messages)} new messages to DB for session {session.session_id}: "
f"roles={[m['role'] for m in messages_data]}, "
f"start_sequence={existing_message_count}"
)
await chat_db.add_chat_messages_batch(
session_id=session.session_id,
messages=messages_data,
start_sequence=existing_message_count,
)
async def get_chat_session(
session_id: str,
user_id: str | None,
) -> ChatSession | None:
"""Get a chat session by ID.
Checks Redis cache first, falls back to database if not found.
Caches database results back to Redis.
"""
# Try cache first
try:
session = await _get_session_from_cache(session_id)
if session:
# Verify user ownership
if session.user_id is not None and session.user_id != user_id:
logger.warning(
f"Session {session_id} user id mismatch: {session.user_id} != {user_id}"
)
return None
return session
except RedisError:
logger.warning(f"Cache error for session {session_id}, trying database")
except Exception as e:
logger.warning(f"Unexpected cache error for session {session_id}: {e}")
# Fall back to database
logger.info(f"Session {session_id} not in cache, checking database")
session = await _get_session_from_db(session_id)
if session is None:
logger.warning(f"Session {session_id} not found in cache or database")
return None
# Verify user ownership
if session.user_id is not None and session.user_id != user_id:
logger.warning(
f"Session {session_id} user id mismatch: {session.user_id} != {user_id}"
)
return None
# Cache the session from DB
try:
await _cache_session(session)
logger.info(f"Cached session {session_id} from database")
except Exception as e:
logger.warning(f"Failed to cache session {session_id}: {e}")
return session
async def upsert_chat_session(
session: ChatSession,
) -> ChatSession:
"""Update a chat session in both cache and database."""
# Get existing message count from DB for incremental saves
existing_message_count = await chat_db.get_chat_session_message_count(
session.session_id
)
# Save to database
try:
await _save_session_to_db(session, existing_message_count)
except Exception as e:
logger.error(f"Failed to save session {session.session_id} to database: {e}")
# Continue to cache even if DB fails
# Save to cache
try:
await _cache_session(session)
except Exception as e:
raise RedisError(
f"Failed to persist chat session {session.session_id} to Redis: {e}"
) from e
return session
async def create_chat_session(user_id: str | None) -> ChatSession:
"""Create a new chat session and persist it."""
session = ChatSession.new(user_id)
# Create in database first
try:
await chat_db.create_chat_session(
session_id=session.session_id,
user_id=user_id,
)
except Exception as e:
logger.error(f"Failed to create session in database: {e}")
# Continue even if DB fails - cache will still work
# Cache the session
try:
await _cache_session(session)
except Exception as e:
logger.warning(f"Failed to cache new session: {e}")
return session
async def get_user_sessions(
user_id: str,
limit: int = 50,
offset: int = 0,
) -> list[ChatSession]:
"""Get all chat sessions for a user from the database."""
prisma_sessions = await chat_db.get_user_chat_sessions(user_id, limit, offset)
sessions = []
for prisma_session in prisma_sessions:
# Convert without messages for listing (lighter weight)
sessions.append(ChatSession.from_prisma(prisma_session, None))
return sessions
async def delete_chat_session(session_id: str) -> bool:
"""Delete a chat session from both cache and database."""
# Delete from cache
try:
redis_key = f"chat:session:{session_id}"
async_redis = await get_redis_async()
await async_redis.delete(redis_key)
except Exception as e:
logger.warning(f"Failed to delete session {session_id} from cache: {e}")
# Delete from database
return await chat_db.delete_chat_session(session_id)
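
Putting the pieces together, the cache-first lifecycle might be exercised like this (module path assumed per this PR's backend.api.features.chat layout; running it requires live Postgres and Redis):

```python
import asyncio

from backend.api.features.chat import model as chat_model  # path assumed per this PR


async def demo() -> None:
    # create_chat_session writes to the DB and seeds the Redis cache.
    session = await chat_model.create_chat_session(user_id="user-1")
    session.messages.append(chat_model.ChatMessage(role="user", content="Hi"))
    # upsert_chat_session persists only the messages past the existing DB
    # count, then refreshes the chat:session:<id> cache entry.
    await chat_model.upsert_chat_session(session)
    # Reads are cache-first with a DB fallback; a user-id mismatch returns None.
    loaded = await chat_model.get_chat_session(session.session_id, "user-1")
    assert loaded is not None and len(loaded.messages) == 1


asyncio.run(demo())
```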

View File

@@ -0,0 +1,117 @@
import pytest
from .model import (
ChatMessage,
ChatSession,
Usage,
get_chat_session,
upsert_chat_session,
)
messages = [
ChatMessage(content="Hello, how are you?", role="user"),
ChatMessage(
content="I'm fine, thank you!",
role="assistant",
tool_calls=[
{
"id": "t123",
"type": "function",
"function": {
"name": "get_weather",
"arguments": '{"city": "New York"}',
},
}
],
),
ChatMessage(
content="I'm using the tool to get the weather",
role="tool",
tool_call_id="t123",
),
]
@pytest.mark.asyncio(loop_scope="session")
async def test_chatsession_serialization_deserialization():
s = ChatSession.new(user_id="abc123")
s.messages = messages
s.usage = [Usage(prompt_tokens=100, completion_tokens=200, total_tokens=300)]
serialized = s.model_dump_json()
s2 = ChatSession.model_validate_json(serialized)
assert s2.model_dump() == s.model_dump()
@pytest.mark.asyncio(loop_scope="session")
async def test_chatsession_redis_storage():
s = ChatSession.new(user_id=None)
s.messages = messages
s = await upsert_chat_session(s)
s2 = await get_chat_session(
session_id=s.session_id,
user_id=s.user_id,
)
assert s2 == s
@pytest.mark.asyncio(loop_scope="session")
async def test_chatsession_redis_storage_user_id_mismatch():
s = ChatSession.new(user_id="abc123")
s.messages = messages
s = await upsert_chat_session(s)
s2 = await get_chat_session(s.session_id, None)
assert s2 is None
@pytest.mark.asyncio(loop_scope="session")
async def test_chatsession_db_storage():
"""Test that messages are correctly saved to and loaded from DB (not cache)."""
from backend.data.redis_client import get_redis_async
# Create session with messages including assistant message
s = ChatSession.new(user_id=None)
s.messages = messages # Contains user, assistant, and tool messages
assert s.session_id is not None, "Session id is not set"
# Upsert to save to both cache and DB
s = await upsert_chat_session(s)
# Clear the Redis cache to force DB load
redis_key = f"chat:session:{s.session_id}"
async_redis = await get_redis_async()
await async_redis.delete(redis_key)
# Load from DB (cache was cleared)
s2 = await get_chat_session(
session_id=s.session_id,
user_id=s.user_id,
)
assert s2 is not None, "Session not found after loading from DB"
assert len(s2.messages) == len(
s.messages
), f"Message count mismatch: expected {len(s.messages)}, got {len(s2.messages)}"
# Verify all roles are present
roles = [m.role for m in s2.messages]
assert "user" in roles, f"User message missing. Roles found: {roles}"
assert "assistant" in roles, f"Assistant message missing. Roles found: {roles}"
assert "tool" in roles, f"Tool message missing. Roles found: {roles}"
# Verify message content
for orig, loaded in zip(s.messages, s2.messages):
assert orig.role == loaded.role, f"Role mismatch: {orig.role} != {loaded.role}"
assert (
orig.content == loaded.content
), f"Content mismatch for {orig.role}: {orig.content} != {loaded.content}"
if orig.tool_calls:
assert (
loaded.tool_calls is not None
), f"Tool calls missing for {orig.role} message"
assert len(orig.tool_calls) == len(loaded.tool_calls)

View File

@@ -0,0 +1,192 @@
You are Otto, an AI Co-Pilot and Forward Deployed Engineer for AutoGPT, an AI Business Automation tool. Your mission is to help users quickly find, create, and set up AutoGPT agents to solve their business problems.
Here are the functions available to you:
<functions>
**Understanding & Discovery:**
1. **add_understanding** - Save information about the user's business context (use this as you learn about them)
2. **find_agent** - Search the marketplace for pre-built agents that solve the user's problem
3. **find_library_agent** - Search the user's personal library of saved agents
4. **find_block** - Search for individual blocks (building components for agents)
5. **search_platform_docs** - Search AutoGPT documentation for help
**Agent Creation & Editing:**
6. **create_agent** - Create a new custom agent from scratch based on user requirements
7. **edit_agent** - Modify an existing agent (add/remove blocks, change configuration)
**Execution & Output:**
8. **run_agent** - Run or schedule an agent (automatically handles setup)
9. **run_block** - Run a single block directly without creating an agent
10. **agent_output** - Get the output/results from a running or completed agent execution
</functions>
## ALWAYS GET THE USER'S NAME
**This is critical:** If you don't know the user's name, ask for it in your first response. Use a friendly, natural approach:
- "Hi! I'm Otto. What's your name?"
- "Hey there! Before we dive in, what should I call you?"
Once you have their name, immediately save it with `add_understanding(user_name="...")` and use it throughout the conversation.
## BUILDING USER UNDERSTANDING
**If no User Business Context is provided below**, gather information naturally during conversation - don't interrogate them.
**Key information to gather (in priority order):**
1. Their name (ALWAYS first if unknown)
2. Their job title and role
3. Their business/company and industry
4. Pain points and what they want to automate
5. Tools they currently use
**How to gather this information:**
- Ask naturally as part of helping them (e.g., "What's your role?" or "What industry are you in?")
- When they share information, immediately save it using `add_understanding`
- Don't ask all questions at once - spread them across the conversation
- Prioritize understanding their immediate problem first
**Example:**
```
User: "I need help automating my social media"
Otto: I can help with that! I'm Otto - what's your name?
User: "I'm Sarah"
Otto: [calls add_understanding with user_name="Sarah"]
Nice to meet you, Sarah! What's your role - are you a social media manager or business owner?
User: "I'm the marketing director at a fintech startup"
Otto: [calls add_understanding with job_title="Marketing Director", industry="fintech", business_size="startup"]
Great! Let me find social media automation agents for you.
[calls find_agent with query="social media automation marketing"]
```
## WHEN TO USE WHICH TOOL
**Finding existing agents:**
- `find_agent` - Search the marketplace for pre-built agents others have created
- `find_library_agent` - Search agents the user has already saved to their library
**Creating/editing agents:**
- `create_agent` - When user wants a custom agent that doesn't exist, or has specific requirements
- `edit_agent` - When user wants to modify an existing agent (change inputs, add blocks, etc.)
**Running agents:**
- `run_agent` - To execute an agent (handles credentials and inputs automatically)
- `agent_output` - To check the results of a running or completed agent execution
**Direct execution:**
- `run_block` - Run a single block directly without needing a full agent
## HOW run_agent WORKS
The `run_agent` tool automatically handles the entire setup flow:
1. **First call** (no inputs) → Returns available inputs so user can decide what values to use
2. **Credentials check** → If missing, UI automatically prompts user to add them (you don't need to mention this)
3. **Execution** → Runs when you provide `inputs` OR set `use_defaults=true`
Parameters:
- `username_agent_slug` (required): Agent identifier like "creator/agent-name"
- `inputs`: Object with input values for the agent
- `use_defaults`: Set to `true` to run with default values (only after user confirms)
- `schedule_name` + `cron`: For scheduled execution
## HOW create_agent WORKS
Use `create_agent` when the user wants to build a custom automation:
- Describe what the agent should do
- The tool will create the agent structure with appropriate blocks
- Returns the agent ID for further editing or running
## HOW agent_output WORKS
Use `agent_output` to get results from agent executions:
- Pass the execution_id from a run_agent response
- Returns the current status and any outputs produced
- Useful for checking if an agent has completed and what it produced
## WORKFLOW
1. **Get their name** - If unknown, ask for it first
2. **Understand context** - Ask 1-2 questions about their problem while helping
3. **Find or create** - Use find_agent for existing solutions, create_agent for custom needs
4. **Set up and run** - Use run_agent to execute, agent_output to get results
## YOUR APPROACH
**Step 1: Greet and Identify**
- If you don't know their name, ask for it
- Be friendly and conversational
**Step 2: Understand the Problem**
- Ask maximum 1-2 targeted questions
- Focus on: What business problem are they solving?
- If they want to create/edit an agent, understand what it should do
**Step 3: Find or Create**
- For existing solutions: Use `find_agent` with relevant keywords
- For custom needs: Use `create_agent` with their requirements
- For modifications: Use `edit_agent` on an existing agent
**Step 4: Execute**
- Call `run_agent` without inputs first to see what's available
- Ask user what values they want or if defaults are okay
- Call `run_agent` again with inputs or `use_defaults=true`
- Use `agent_output` to check results when needed
## USING add_understanding
Call `add_understanding` whenever you learn something about the user:
**User info:** `user_name`, `job_title`
**Business:** `business_name`, `industry`, `business_size` (1-10, 11-50, 51-200, 201-1000, 1000+), `user_role` (decision maker, implementer, end user)
**Processes:** `key_workflows` (array), `daily_activities` (array)
**Pain points:** `pain_points` (array), `bottlenecks` (array), `manual_tasks` (array), `automation_goals` (array)
**Tools:** `current_software` (array), `existing_automation` (array)
**Other:** `additional_notes`
Example: `add_understanding(user_name="Sarah", job_title="Marketing Director", industry="fintech")`
## KEY RULES
**What You DON'T Do:**
- Don't help with login (frontend handles this)
- Don't mention or explain credentials to the user (frontend handles this automatically)
- Don't run agents without first showing available inputs to the user
- Don't use `use_defaults=true` without user explicitly confirming
- Don't write responses longer than 3 sentences
- Don't interrogate users with many questions - gather info naturally
**What You DO:**
- ALWAYS ask for user's name if you don't have it
- Save user information with `add_understanding` as you learn it
- Use their name when addressing them
- Always call run_agent first without inputs to see what's available
- Ask user what values they want OR if they want to use defaults
- Keep all responses to maximum 3 sentences
- Include the agent link in your response after successful execution
**Error Handling:**
- Authentication needed → "Please sign in via the interface"
- Credentials missing → The UI handles this automatically. Focus on asking the user about input values instead.
## RESPONSE STRUCTURE
Before responding, wrap your analysis in <thinking> tags to systematically plan your approach:
- Check if you know the user's name - if not, ask for it
- Check if you have user context - if not, plan to gather some naturally
- Extract the key business problem or request from the user's message
- Determine what function call (if any) you need to make next
- Plan your response to stay under the 3-sentence maximum
Example interaction:
```
User: "Hi, I want to build an agent that monitors my competitors"
Otto: <thinking>I don't know this user's name. I should ask for it while acknowledging their request.</thinking>
Hi! I'm Otto and I'd love to help you build a competitor monitoring agent. What's your name?
User: "I'm Mike"
Otto: [calls add_understanding with user_name="Mike"]
<thinking>Now I know Mike wants competitor monitoring. I should search for existing agents first.</thinking>
Great to meet you, Mike! Let me search for competitor monitoring agents.
[calls find_agent with query="competitor monitoring analysis"]
```
KEEP ANSWERS TO 3 SENTENCES
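
The two-step `run_agent` flow described under "HOW run_agent WORKS" above can be pictured as a pair of tool calls; this trace is hypothetical and only illustrates the protocol, not actual platform output:

```python
# Step 1: call with no inputs to discover what the agent accepts.
discover = {"tool": "run_agent", "args": {"username_agent_slug": "creator/agent-name"}}
# -> response lists available inputs (the UI prompts for any missing credentials)

# Step 2: after the user picks values (or confirms defaults), actually run it.
execute = {
    "tool": "run_agent",
    "args": {
        "username_agent_slug": "creator/agent-name",
        "inputs": {"city": "New York"},  # or "use_defaults": True once confirmed
    },
}
# -> response includes an execution_id to poll with agent_output
```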

View File

@@ -0,0 +1,155 @@
You are Otto, an AI Co-Pilot helping new users get started with AutoGPT, an AI Business Automation platform. Your mission is to welcome them, learn about their needs, and help them run their first successful agent.
Here are the functions available to you:
<functions>
**Understanding & Discovery:**
1. **add_understanding** - Save information about the user's business context (use this as you learn about them)
2. **find_agent** - Search the marketplace for pre-built agents that solve the user's problem
3. **find_library_agent** - Search the user's personal library of saved agents
4. **find_block** - Search for individual blocks (building components for agents)
5. **search_platform_docs** - Search AutoGPT documentation for help
**Agent Creation & Editing:**
6. **create_agent** - Create a new custom agent from scratch based on user requirements
7. **edit_agent** - Modify an existing agent (add/remove blocks, change configuration)
**Execution & Output:**
8. **run_agent** - Run or schedule an agent (automatically handles setup)
9. **run_block** - Run a single block directly without creating an agent
10. **agent_output** - Get the output/results from a running or completed agent execution
</functions>
## YOUR ONBOARDING MISSION
You are guiding a new user through their first experience with AutoGPT. Your goal is to:
1. Welcome them warmly and get their name
2. Learn about them and their business
3. Find or create an agent that solves a real problem for them
4. Get that agent running successfully
5. Celebrate their success and point them to next steps
## PHASE 1: WELCOME & INTRODUCTION
**Start every conversation by:**
- Giving a warm, friendly greeting
- Introducing yourself as Otto, their AI assistant
- Asking for their name immediately
**Example opening:**
```
Hi! I'm Otto, your AI assistant. Welcome to AutoGPT! I'm here to help you set up your first automation. What's your name?
```
Once you have their name, save it immediately with `add_understanding(user_name="...")` and use it throughout.
## PHASE 2: DISCOVERY
**After getting their name, learn about them:**
- What's their role/job title?
- What industry/business are they in?
- What's one thing they'd love to automate?
**Keep it conversational - don't interrogate. Example:**
```
Nice to meet you, Sarah! What do you do for work, and what's one task you wish you could automate?
```
Save everything you learn with `add_understanding`.
## PHASE 3: FIND OR CREATE AN AGENT
**Once you understand their need:**
- Search for existing agents with `find_agent`
- Present the best match and explain how it helps them
- If nothing fits, offer to create a custom agent with `create_agent`
**Be enthusiastic about the solution:**
```
I found a great agent for you! The "Social Media Scheduler" can automatically post to your accounts on a schedule. Want to try it?
```
## PHASE 4: SETUP & RUN
**Guide them through running the agent:**
1. Call `run_agent` without inputs first to see what's needed
2. Explain each input in simple terms
3. Ask what values they want to use
4. Run the agent with their inputs or defaults
**Don't mention credentials** - the UI handles that automatically.
## PHASE 5: CELEBRATE & HANDOFF
**After successful execution:**
- Congratulate them on their first automation!
- Tell them where to find this agent (their Library)
- Mention they can explore more agents in the Marketplace
- Offer to help with anything else
**Example:**
```
You did it! Your first agent is running. You can find it anytime in your Library. Ready to explore more automations?
```
## KEY RULES
**What You DON'T Do:**
- Don't help with login (frontend handles this)
- Don't mention credentials (UI handles automatically)
- Don't run agents without showing inputs first
- Don't use `use_defaults=true` without explicit confirmation
- Don't write responses longer than 3 sentences
- Don't overwhelm with too many questions at once
**What You DO:**
- ALWAYS get the user's name first
- Be warm, encouraging, and celebratory
- Save info with `add_understanding` as you learn it
- Use their name when addressing them
- Keep responses to maximum 3 sentences
- Make them feel successful at each step
## USING add_understanding
Save information as you learn it:
**User info:** `user_name`, `job_title`
**Business:** `business_name`, `industry`, `business_size`, `user_role`
**Pain points:** `pain_points`, `manual_tasks`, `automation_goals`
**Tools:** `current_software`
Example: `add_understanding(user_name="Sarah", job_title="Marketing Manager", automation_goals=["social media scheduling"])`
## HOW run_agent WORKS
1. **First call** (no inputs) → Shows available inputs
2. **Credentials** → UI handles automatically (don't mention)
3. **Execution** → Run with `inputs={...}` or `use_defaults=true` (see the example below)
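**Example (illustrative - the agent slug below is hypothetical):**
```
run_agent(username_agent_slug="acme/report-generator")
→ returns the agent's available inputs
run_agent(username_agent_slug="acme/report-generator", inputs={"report_topic": "Q1 sales"})
→ executes the agent with those inputs
```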
## RESPONSE STRUCTURE
Before responding, plan your approach in <thinking> tags:
- What phase am I in? (Welcome/Discovery/Find/Setup/Celebrate)
- Do I know their name? If not, ask for it
- What's the next step to move them forward?
- Keep response under 3 sentences
**Example flow:**
```
User: "Hi"
Otto: <thinking>Phase 1 - I need to welcome them and get their name.</thinking>
Hi! I'm Otto, welcome to AutoGPT! I'm here to help you set up your first automation - what's your name?
User: "I'm Alex"
Otto: [calls add_understanding with user_name="Alex"]
<thinking>Got their name. Phase 2 - learn about them.</thinking>
Great to meet you, Alex! What do you do for work, and what's one task you'd love to automate?
User: "I run an e-commerce store and spend hours on customer support emails"
Otto: [calls add_understanding with industry="e-commerce", pain_points=["customer support emails"]]
<thinking>Phase 3 - search for agents.</thinking>
[calls find_agent with query="customer support email automation"]
```
KEEP ANSWERS TO 3 SENTENCES - Be warm, helpful, and focused on their success!


@@ -0,0 +1,472 @@
"""Chat API routes for chat session management and streaming via SSE."""
import logging
from collections.abc import AsyncGenerator
from typing import Annotated
from autogpt_libs import auth
from fastapi import APIRouter, Depends, Query, Security
from fastapi.responses import StreamingResponse
from pydantic import BaseModel
from backend.util.exceptions import NotFoundError
from . import service as chat_service
from .config import ChatConfig
config = ChatConfig()
logger = logging.getLogger(__name__)
router = APIRouter(
tags=["chat"],
)
# ========== Request/Response Models ==========
class StreamChatRequest(BaseModel):
"""Request model for streaming chat with optional context."""
message: str
is_user_message: bool = True
context: dict[str, str] | None = None # {url: str, content: str}
class CreateSessionResponse(BaseModel):
"""Response model containing information on a newly created chat session."""
id: str
created_at: str
user_id: str | None
class SessionDetailResponse(BaseModel):
"""Response model providing complete details for a chat session, including messages."""
id: str
created_at: str
updated_at: str
user_id: str | None
messages: list[dict]
class SessionSummaryResponse(BaseModel):
"""Response model for a session summary (without messages)."""
id: str
created_at: str
updated_at: str
title: str | None = None
class ListSessionsResponse(BaseModel):
"""Response model for listing chat sessions."""
sessions: list[SessionSummaryResponse]
total: int
# ========== Routes ==========
@router.get(
"/sessions",
dependencies=[Security(auth.requires_user)],
)
async def list_sessions(
user_id: Annotated[str, Security(auth.get_user_id)],
limit: int = Query(default=50, ge=1, le=100),
offset: int = Query(default=0, ge=0),
) -> ListSessionsResponse:
"""
List chat sessions for the authenticated user.
Returns a paginated list of chat sessions belonging to the current user,
ordered by most recently updated.
Args:
user_id: The authenticated user's ID.
limit: Maximum number of sessions to return (1-100).
offset: Number of sessions to skip for pagination.
Returns:
ListSessionsResponse: List of session summaries and the number of sessions returned.
"""
sessions = await chat_service.get_user_sessions(user_id, limit, offset)
return ListSessionsResponse(
sessions=[
SessionSummaryResponse(
id=session.session_id,
created_at=session.started_at.isoformat(),
updated_at=session.updated_at.isoformat(),
title=session.title,
)
for session in sessions
],
total=len(sessions),
)
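# Example request (sketch; assumes this router is mounted under /api/chat):
#   GET /api/chat/sessions?limit=20&offset=0
#   -> {"sessions": [{"id": "...", "created_at": "...",
#                     "updated_at": "...", "title": null}], "total": 1}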
@router.post(
"/sessions",
)
async def create_session(
user_id: Annotated[str | None, Depends(auth.get_user_id)],
) -> CreateSessionResponse:
"""
Create a new chat session.
Initiates a new chat session for either an authenticated or anonymous user.
Args:
user_id: The optional authenticated user ID parsed from the JWT. If missing, creates an anonymous session.
Returns:
CreateSessionResponse: Details of the created session.
"""
logger.info(
f"Creating session with user_id: "
f"...{user_id[-8:] if user_id and len(user_id) > 8 else '<redacted>'}"
)
session = await chat_service.create_chat_session(user_id)
return CreateSessionResponse(
id=session.session_id,
created_at=session.started_at.isoformat(),
user_id=session.user_id or None,
)
@router.get(
"/sessions/{session_id}",
)
async def get_session(
session_id: str,
user_id: Annotated[str | None, Depends(auth.get_user_id)],
) -> SessionDetailResponse:
"""
Retrieve the details of a specific chat session.
Looks up a chat session by ID for the given user (if authenticated) and returns all session data including messages.
Args:
session_id: The unique identifier for the desired chat session.
user_id: The optional authenticated user ID, or None for anonymous access.
Returns:
SessionDetailResponse: Details for the requested session; raises NotFoundError if not found.
"""
session = await chat_service.get_session(session_id, user_id)
if not session:
raise NotFoundError(f"Session {session_id} not found")
messages = [message.model_dump() for message in session.messages]
logger.info(
f"Returning session {session_id}: "
f"message_count={len(messages)}, "
f"roles={[m.get('role') for m in messages]}"
)
return SessionDetailResponse(
id=session.session_id,
created_at=session.started_at.isoformat(),
updated_at=session.updated_at.isoformat(),
user_id=session.user_id or None,
messages=messages,
)
@router.post(
"/sessions/{session_id}/stream",
)
async def stream_chat_post(
session_id: str,
request: StreamChatRequest,
user_id: str | None = Depends(auth.get_user_id),
):
"""
Stream chat responses for a session (POST with context support).
Streams the AI/completion responses in real time over Server-Sent Events (SSE), including:
- Text fragments as they are generated
- Tool call UI elements (if invoked)
- Tool execution results
Args:
session_id: The chat session identifier to associate with the streamed messages.
request: Request body containing message, is_user_message, and optional context.
user_id: Optional authenticated user ID.
Returns:
StreamingResponse: SSE-formatted response chunks.
"""
# Validate session exists before starting the stream
# This prevents errors after the response has already started
session = await chat_service.get_session(session_id, user_id)
if not session:
raise NotFoundError(f"Session {session_id} not found. ")
if session.user_id is None and user_id is not None:
session = await chat_service.assign_user_to_session(session_id, user_id)
async def event_generator() -> AsyncGenerator[str, None]:
async for chunk in chat_service.stream_chat_completion(
session_id,
request.message,
is_user_message=request.is_user_message,
user_id=user_id,
session=session, # Pass pre-fetched session to avoid double-fetch
context=request.context,
):
yield chunk.to_sse()
return StreamingResponse(
event_generator(),
media_type="text/event-stream",
headers={
"Cache-Control": "no-cache",
"Connection": "keep-alive",
"X-Accel-Buffering": "no", # Disable nginx buffering
},
)
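# Client sketch (illustrative; assumes the router is mounted under /api/chat and
# that the httpx library is used for the request):
#
#   import httpx
#
#   async with httpx.AsyncClient(base_url="http://localhost:8000") as http:
#       async with http.stream(
#           "POST",
#           f"/api/chat/sessions/{session_id}/stream",
#           json={"message": "Hello"},
#       ) as response:
#           async for line in response.aiter_lines():
#               ...  # each non-empty line is an SSE field carrying a JSON-encoded chunk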
@router.get(
"/sessions/{session_id}/stream",
)
async def stream_chat_get(
session_id: str,
message: Annotated[str, Query(min_length=1, max_length=10000)],
user_id: str | None = Depends(auth.get_user_id),
is_user_message: bool = Query(default=True),
):
"""
Stream chat responses for a session (GET - legacy endpoint).
Streams the AI/completion responses in real time over Server-Sent Events (SSE), including:
- Text fragments as they are generated
- Tool call UI elements (if invoked)
- Tool execution results
Args:
session_id: The chat session identifier to associate with the streamed messages.
message: The user's new message to process.
user_id: Optional authenticated user ID.
is_user_message: Whether the message is a user message.
Returns:
StreamingResponse: SSE-formatted response chunks.
"""
# Validate session exists before starting the stream
# This prevents errors after the response has already started
session = await chat_service.get_session(session_id, user_id)
if not session:
raise NotFoundError(f"Session {session_id} not found. ")
if session.user_id is None and user_id is not None:
session = await chat_service.assign_user_to_session(session_id, user_id)
async def event_generator() -> AsyncGenerator[str, None]:
async for chunk in chat_service.stream_chat_completion(
session_id,
message,
is_user_message=is_user_message,
user_id=user_id,
session=session, # Pass pre-fetched session to avoid double-fetch
):
yield chunk.to_sse()
return StreamingResponse(
event_generator(),
media_type="text/event-stream",
headers={
"Cache-Control": "no-cache",
"Connection": "keep-alive",
"X-Accel-Buffering": "no", # Disable nginx buffering
},
)
@router.patch(
"/sessions/{session_id}/assign-user",
dependencies=[Security(auth.requires_user)],
status_code=200,
)
async def session_assign_user(
session_id: str,
user_id: Annotated[str, Security(auth.get_user_id)],
) -> dict:
"""
Assign an authenticated user to a chat session.
Used (typically post-login) to claim an existing anonymous session for the current authenticated user.
Args:
session_id: The identifier for the (previously anonymous) session.
user_id: The authenticated user's ID to associate with the session.
Returns:
dict: Status of the assignment.
"""
await chat_service.assign_user_to_session(session_id, user_id)
return {"status": "ok"}
# ========== Onboarding Routes ==========
# These routes use a specialized onboarding system prompt
@router.post(
"/onboarding/sessions",
)
async def create_onboarding_session(
user_id: Annotated[str | None, Depends(auth.get_user_id)],
) -> CreateSessionResponse:
"""
Create a new onboarding chat session.
Initiates a new chat session specifically for user onboarding,
using a specialized prompt that guides users through their first
experience with AutoGPT.
Args:
user_id: The optional authenticated user ID parsed from the JWT.
Returns:
CreateSessionResponse: Details of the created onboarding session.
"""
logger.info(
f"Creating onboarding session with user_id: "
f"...{user_id[-8:] if user_id and len(user_id) > 8 else '<redacted>'}"
)
session = await chat_service.create_chat_session(user_id)
return CreateSessionResponse(
id=session.session_id,
created_at=session.started_at.isoformat(),
user_id=session.user_id or None,
)
@router.get(
"/onboarding/sessions/{session_id}",
)
async def get_onboarding_session(
session_id: str,
user_id: Annotated[str | None, Depends(auth.get_user_id)],
) -> SessionDetailResponse:
"""
Retrieve the details of an onboarding chat session.
Args:
session_id: The unique identifier for the onboarding session.
user_id: The optional authenticated user ID.
Returns:
SessionDetailResponse: Details for the requested session.
"""
session = await chat_service.get_session(session_id, user_id)
if not session:
raise NotFoundError(f"Session {session_id} not found")
messages = [message.model_dump() for message in session.messages]
logger.info(
f"Returning onboarding session {session_id}: "
f"message_count={len(messages)}, "
f"roles={[m.get('role') for m in messages]}"
)
return SessionDetailResponse(
id=session.session_id,
created_at=session.started_at.isoformat(),
updated_at=session.updated_at.isoformat(),
user_id=session.user_id or None,
messages=messages,
)
@router.post(
"/onboarding/sessions/{session_id}/stream",
)
async def stream_onboarding_chat(
session_id: str,
request: StreamChatRequest,
user_id: str | None = Depends(auth.get_user_id),
):
"""
Stream onboarding chat responses for a session.
Uses the specialized onboarding system prompt to guide new users
through their first experience with AutoGPT. Streams AI responses
in real time over Server-Sent Events (SSE).
Args:
session_id: The onboarding session identifier.
request: Request body containing message and optional context.
user_id: Optional authenticated user ID.
Returns:
StreamingResponse: SSE-formatted response chunks.
"""
session = await chat_service.get_session(session_id, user_id)
if not session:
raise NotFoundError(f"Session {session_id} not found.")
if session.user_id is None and user_id is not None:
session = await chat_service.assign_user_to_session(session_id, user_id)
async def event_generator() -> AsyncGenerator[str, None]:
async for chunk in chat_service.stream_chat_completion(
session_id,
request.message,
is_user_message=request.is_user_message,
user_id=user_id,
session=session,
context=request.context,
prompt_type="onboarding", # Use onboarding system prompt
):
yield chunk.to_sse()
return StreamingResponse(
event_generator(),
media_type="text/event-stream",
headers={
"Cache-Control": "no-cache",
"Connection": "keep-alive",
"X-Accel-Buffering": "no",
},
)
# ========== Health Check ==========
@router.get("/health", status_code=200)
async def health_check() -> dict:
"""
Health check endpoint for the chat service.
Performs a full-cycle test of session creation, assignment, and retrieval; it should
return healthy whenever the service and data layer are operational.
Returns:
dict: A status dictionary indicating health, service name, and API version.
"""
session = await chat_service.create_chat_session(None)
await chat_service.assign_user_to_session(session.session_id, "test_user")
await chat_service.get_session(session.session_id, "test_user")
return {
"status": "healthy",
"service": "chat",
"version": "0.1.0",
}
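# Example (sketch; assumes the router is mounted under /api/chat):
#   GET /api/chat/health
#   -> {"status": "healthy", "service": "chat", "version": "0.1.0"}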


@@ -7,15 +7,18 @@ import orjson
from openai import AsyncOpenAI
from openai.types.chat import ChatCompletionChunk, ChatCompletionToolParam
import backend.server.v2.chat.config
from backend.server.v2.chat.model import (
ChatMessage,
ChatSession,
Usage,
get_chat_session,
upsert_chat_session,
from backend.data.understanding import (
format_understanding_for_prompt,
get_business_understanding,
)
from backend.server.v2.chat.response_model import (
from backend.util.exceptions import NotFoundError
from . import db as chat_db
from .config import ChatConfig
from .model import ChatMessage, ChatSession, Usage
from .model import create_chat_session as model_create_chat_session
from .model import get_chat_session, upsert_chat_session
from .response_model import (
StreamBaseResponse,
StreamEnd,
StreamError,
@@ -26,24 +29,117 @@ from backend.server.v2.chat.response_model import (
StreamToolExecutionResult,
StreamUsage,
)
from backend.server.v2.chat.tools import execute_tool, tools
from backend.util.exceptions import NotFoundError
from .tools import execute_tool, tools
logger = logging.getLogger(__name__)
config = backend.server.v2.chat.config.ChatConfig()
config = ChatConfig()
client = AsyncOpenAI(api_key=config.api_key, base_url=config.base_url)
async def _is_first_session(user_id: str) -> bool:
"""Check if this is the user's first chat session.
Returns True if the user has 1 or fewer sessions (meaning this is their first).
"""
try:
session_count = await chat_db.get_user_session_count(user_id)
return session_count <= 1
except Exception as e:
logger.warning(f"Failed to check session count for user {user_id}: {e}")
return False # Default to non-onboarding if we can't check
async def _build_system_prompt(
user_id: str | None, prompt_type: str = "default"
) -> str:
"""Build the full system prompt including business understanding if available.
Args:
user_id: The user ID for fetching business understanding
prompt_type: The type of prompt to load ("default" or "onboarding")
If "default" and this is the user's first session, will use "onboarding" instead.
Returns:
The full system prompt with business understanding context if available
"""
# Auto-detect: if using default prompt and this is user's first session, use onboarding
effective_prompt_type = prompt_type
if prompt_type == "default" and user_id:
if await _is_first_session(user_id):
logger.info("First session detected for user, using onboarding prompt")
effective_prompt_type = "onboarding"
# Start with the base system prompt for the specified type
base_prompt = config.get_system_prompt_for_type(effective_prompt_type)
# If user is authenticated, try to fetch their business understanding
if user_id:
try:
understanding = await get_business_understanding(user_id)
if understanding:
context = format_understanding_for_prompt(understanding)
if context:
return (
f"{base_prompt}\n\n---\n\n"
f"{context}\n\n"
"Use this context to provide more personalized recommendations "
"and to better understand the user's business needs when "
"suggesting agents and automations."
)
except Exception as e:
logger.warning(f"Failed to fetch business understanding: {e}")
return base_prompt
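# Example (illustrative):
#   await _build_system_prompt(None)                    -> base default prompt
#   await _build_system_prompt("user-1")                -> onboarding prompt if this is
#                                                          the user's first session
#   await _build_system_prompt("user-1", "onboarding")  -> onboarding prompt, plus any
#                                                          saved business-understanding
#                                                          context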
async def _generate_session_title(message: str) -> str | None:
"""Generate a concise title for a chat session based on the first message.
Args:
message: The first user message in the session
Returns:
A short title (3-6 words) or None if generation fails
"""
try:
response = await client.chat.completions.create(
model=config.title_model,
messages=[
{
"role": "system",
"content": (
"Generate a very short title (3-6 words) for a chat conversation "
"based on the user's first message. The title should capture the "
"main topic or intent. Return ONLY the title, no quotes or punctuation."
),
},
{"role": "user", "content": message[:500]}, # Limit input length
],
max_tokens=20,
temperature=0.7,
)
title = response.choices[0].message.content
if title:
# Clean up the title
title = title.strip().strip("\"'")
# Limit length
if len(title) > 50:
title = title[:47] + "..."
return title
return None
except Exception as e:
logger.warning(f"Failed to generate session title: {e}")
return None
async def create_chat_session(
user_id: str | None = None,
) -> ChatSession:
"""
Create a new chat session and persist it to the database.
"""
session = ChatSession.new(user_id)
# Persist the session immediately so it can be used for streaming
return await upsert_chat_session(session)
return await model_create_chat_session(user_id)
async def get_session(
@@ -56,6 +152,19 @@ async def get_session(
return await get_chat_session(session_id, user_id)
async def get_user_sessions(
user_id: str,
limit: int = 50,
offset: int = 0,
) -> list[ChatSession]:
"""
Get all chat sessions for a user.
"""
from .model import get_user_sessions as model_get_user_sessions
return await model_get_user_sessions(user_id, limit, offset)
async def assign_user_to_session(
session_id: str,
user_id: str,
@@ -77,6 +186,8 @@ async def stream_chat_completion(
user_id: str | None = None,
retry_count: int = 0,
session: ChatSession | None = None,
context: dict[str, str] | None = None, # {url: str, content: str}
prompt_type: str = "default",
) -> AsyncGenerator[StreamBaseResponse, None]:
"""Main entry point for streaming chat completions with database handling.
@@ -88,6 +199,7 @@ async def stream_chat_completion(
user_message: User's input message
user_id: User ID for authentication (None for anonymous)
session: Optional pre-loaded session object (for recursive calls to avoid Redis refetch)
prompt_type: The type of prompt to use ("default" or "onboarding")
Yields:
StreamBaseResponse objects formatted as SSE
@@ -120,9 +232,18 @@ async def stream_chat_completion(
)
if message:
# Build message content with context if provided
message_content = message
if context and context.get("url") and context.get("content"):
context_text = f"Page URL: {context['url']}\n\nPage Content:\n{context['content']}\n\n---\n\nUser Message: {message}"
message_content = context_text
logger.info(
f"Including page context: URL={context['url']}, content_length={len(context['content'])}"
)
session.messages.append(
ChatMessage(
role="user" if is_user_message else "assistant", content=message
role="user" if is_user_message else "assistant", content=message_content
)
)
logger.info(
@@ -140,6 +261,32 @@ async def stream_chat_completion(
session = await upsert_chat_session(session)
assert session, "Session not found"
# Generate title for new sessions on first user message (non-blocking)
# Check: is_user_message, no title yet, and this is the first user message
if is_user_message and message and not session.title:
user_messages = [m for m in session.messages if m.role == "user"]
if len(user_messages) == 1:
# First user message - generate title in background
import asyncio
async def _update_title():
try:
title = await _generate_session_title(message)
if title:
session.title = title
await upsert_chat_session(session)
logger.info(
f"Generated title for session {session_id}: {title}"
)
except Exception as e:
logger.warning(f"Failed to update session title: {e}")
# Fire and forget - don't block the chat response
asyncio.create_task(_update_title())
# Build system prompt with business understanding
system_prompt = await _build_system_prompt(user_id, prompt_type)
assistant_response = ChatMessage(
role="assistant",
content="",
@@ -158,6 +305,7 @@ async def stream_chat_completion(
async for chunk in _stream_chat_chunks(
session=session,
tools=tools,
system_prompt=system_prompt,
):
if isinstance(chunk, StreamTextChunk):
@@ -278,6 +426,7 @@ async def stream_chat_completion(
user_id=user_id,
retry_count=retry_count + 1,
session=session,
prompt_type=prompt_type,
):
yield chunk
return # Exit after retry to avoid double-saving in finally block
@@ -323,6 +472,7 @@ async def stream_chat_completion(
session_id=session.session_id,
user_id=user_id,
session=session, # Pass session object to avoid Redis refetch
prompt_type=prompt_type,
):
yield chunk
@@ -330,6 +480,7 @@ async def stream_chat_completion(
async def _stream_chat_chunks(
session: ChatSession,
tools: list[ChatCompletionToolParam],
system_prompt: str | None = None,
) -> AsyncGenerator[StreamBaseResponse, None]:
"""
Pure streaming function for OpenAI chat completions with tool calling.
@@ -337,9 +488,9 @@ async def _stream_chat_chunks(
This function is database-agnostic and focuses only on streaming logic.
Args:
messages: Conversation context as ChatCompletionMessageParam list
session_id: Session ID
user_id: User ID for tool execution
session: Chat session with conversation history
tools: Available tools for the model
system_prompt: System prompt to prepend to messages
Yields:
SSE formatted JSON response objects
@@ -349,6 +500,17 @@ async def _stream_chat_chunks(
logger.info("Starting pure chat stream")
# Build messages with system prompt prepended
messages = session.to_openai_messages()
if system_prompt:
from openai.types.chat import ChatCompletionSystemMessageParam
system_message = ChatCompletionSystemMessageParam(
role="system",
content=system_prompt,
)
messages = [system_message] + messages
# Loop to handle tool calls and continue conversation
while True:
try:
@@ -357,7 +519,7 @@ async def _stream_chat_chunks(
# Create the stream with proper types
stream = await client.chat.completions.create(
model=model,
messages=session.to_openai_messages(),
messages=messages,
tools=tools,
tool_choice="auto",
stream=True,
@@ -501,8 +663,12 @@ async def _yield_tool_call(
"""
logger.info(f"Yielding tool call: {tool_calls[yield_idx]}")
# Parse tool call arguments - exceptions will propagate to caller
arguments = orjson.loads(tool_calls[yield_idx]["function"]["arguments"])
# Parse tool call arguments - handle empty arguments gracefully
raw_arguments = tool_calls[yield_idx]["function"]["arguments"]
if raw_arguments:
arguments = orjson.loads(raw_arguments)
else:
arguments = {}
yield StreamToolCall(
tool_id=tool_calls[yield_idx]["id"],


@@ -3,8 +3,8 @@ from os import getenv
import pytest
import backend.server.v2.chat.service as chat_service
from backend.server.v2.chat.response_model import (
from . import service as chat_service
from .response_model import (
StreamEnd,
StreamError,
StreamTextChunk,


@@ -2,23 +2,32 @@ from typing import TYPE_CHECKING, Any
from openai.types.chat import ChatCompletionToolParam
from backend.server.v2.chat.model import ChatSession
from backend.api.features.chat.model import ChatSession
from .add_understanding import AddUnderstandingTool
from .agent_output import AgentOutputTool
from .base import BaseTool
from .find_agent import FindAgentTool
from .find_library_agent import FindLibraryAgentTool
from .run_agent import RunAgentTool
if TYPE_CHECKING:
from backend.server.v2.chat.response_model import StreamToolExecutionResult
from backend.api.features.chat.response_model import StreamToolExecutionResult
# Initialize tool instances
add_understanding_tool = AddUnderstandingTool()
find_agent_tool = FindAgentTool()
find_library_agent_tool = FindLibraryAgentTool()
run_agent_tool = RunAgentTool()
agent_output_tool = AgentOutputTool()
# Export tools as OpenAI format
tools: list[ChatCompletionToolParam] = [
add_understanding_tool.as_openai_tool(),
find_agent_tool.as_openai_tool(),
find_library_agent_tool.as_openai_tool(),
run_agent_tool.as_openai_tool(),
agent_output_tool.as_openai_tool(),
]
@@ -31,8 +40,11 @@ async def execute_tool(
) -> "StreamToolExecutionResult":
tool_map: dict[str, BaseTool] = {
"add_understanding": add_understanding_tool,
"find_agent": find_agent_tool,
"find_library_agent": find_library_agent_tool,
"run_agent": run_agent_tool,
"agent_output": agent_output_tool,
}
if tool_name not in tool_map:
raise ValueError(f"Tool {tool_name} not found")


@@ -3,8 +3,11 @@ from datetime import UTC, datetime
from os import getenv
import pytest
from prisma.types import ProfileCreateInput
from pydantic import SecretStr
from backend.api.features.chat.model import ChatSession
from backend.api.features.store import db as store_db
from backend.blocks.firecrawl.scrape import FirecrawlScrapeBlock
from backend.blocks.io import AgentInputBlock, AgentOutputBlock
from backend.blocks.llm import AITextGeneratorBlock
@@ -13,8 +16,6 @@ from backend.data.graph import Graph, Link, Node, create_graph
from backend.data.model import APIKeyCredentials
from backend.data.user import get_or_create_user
from backend.integrations.credentials_store import IntegrationCredentialsStore
from backend.server.v2.chat.model import ChatSession
from backend.server.v2.store import db as store_db
def make_session(user_id: str | None = None):
@@ -49,13 +50,13 @@ async def setup_test_data():
# 1b. Create a profile with username for the user (required for store agent lookup)
username = user.email.split("@")[0]
await prisma.profile.create(
data={
"userId": user.id,
"username": username,
"name": f"Test User {username}",
"description": "Test user profile",
"links": [], # Required field - empty array for test profiles
}
data=ProfileCreateInput(
userId=user.id,
username=username,
name=f"Test User {username}",
description="Test user profile",
links=[], # Required field - empty array for test profiles
)
)
# 2. Create a test graph with agent input -> agent output
@@ -172,13 +173,13 @@ async def setup_llm_test_data():
# 1b. Create a profile with username for the user (required for store agent lookup)
username = user.email.split("@")[0]
await prisma.profile.create(
data={
"userId": user.id,
"username": username,
"name": f"Test User {username}",
"description": "Test user profile for LLM tests",
"links": [], # Required field - empty array for test profiles
}
data=ProfileCreateInput(
userId=user.id,
username=username,
name=f"Test User {username}",
description="Test user profile for LLM tests",
links=[], # Required field - empty array for test profiles
)
)
# 2. Create test OpenAI credentials for the user
@@ -332,13 +333,13 @@ async def setup_firecrawl_test_data():
# 1b. Create a profile with username for the user (required for store agent lookup)
username = user.email.split("@")[0]
await prisma.profile.create(
data={
"userId": user.id,
"username": username,
"name": f"Test User {username}",
"description": "Test user profile for Firecrawl tests",
"links": [], # Required field - empty array for test profiles
}
data=ProfileCreateInput(
userId=user.id,
username=username,
name=f"Test User {username}",
description="Test user profile for Firecrawl tests",
links=[], # Required field - empty array for test profiles
)
)
# NOTE: We deliberately do NOT create Firecrawl credentials for this user


@@ -0,0 +1,202 @@
"""Tool for capturing user business understanding incrementally."""
import logging
from typing import Any
from backend.api.features.chat.model import ChatSession
from backend.data.understanding import (
BusinessUnderstandingInput,
upsert_business_understanding,
)
from .base import BaseTool
from .models import ErrorResponse, ToolResponseBase, UnderstandingUpdatedResponse
logger = logging.getLogger(__name__)
class AddUnderstandingTool(BaseTool):
"""Tool for capturing user's business understanding incrementally."""
@property
def name(self) -> str:
return "add_understanding"
@property
def description(self) -> str:
return """Capture and store information about the user's business context,
workflows, pain points, and automation goals. Call this tool whenever the user
shares information about their business. Each call incrementally adds to the
existing understanding - you don't need to provide all fields at once.
Use this to build a comprehensive profile that helps recommend better agents
and automations for the user's specific needs."""
@property
def parameters(self) -> dict[str, Any]:
return {
"type": "object",
"properties": {
"user_name": {
"type": "string",
"description": "The user's name",
},
"job_title": {
"type": "string",
"description": "The user's job title (e.g., 'Marketing Manager', 'CEO', 'Software Engineer')",
},
"business_name": {
"type": "string",
"description": "Name of the user's business or organization",
},
"industry": {
"type": "string",
"description": "Industry or sector (e.g., 'e-commerce', 'healthcare', 'finance')",
},
"business_size": {
"type": "string",
"description": "Company size: '1-10', '11-50', '51-200', '201-1000', or '1000+'",
},
"user_role": {
"type": "string",
"description": "User's role in organization context (e.g., 'decision maker', 'implementer', 'end user')",
},
"key_workflows": {
"type": "array",
"items": {"type": "string"},
"description": "Key business workflows (e.g., 'lead qualification', 'content publishing')",
},
"daily_activities": {
"type": "array",
"items": {"type": "string"},
"description": "Regular daily activities the user performs",
},
"pain_points": {
"type": "array",
"items": {"type": "string"},
"description": "Current pain points or challenges",
},
"bottlenecks": {
"type": "array",
"items": {"type": "string"},
"description": "Process bottlenecks slowing things down",
},
"manual_tasks": {
"type": "array",
"items": {"type": "string"},
"description": "Manual or repetitive tasks that could be automated",
},
"automation_goals": {
"type": "array",
"items": {"type": "string"},
"description": "Desired automation outcomes or goals",
},
"current_software": {
"type": "array",
"items": {"type": "string"},
"description": "Software and tools currently in use",
},
"existing_automation": {
"type": "array",
"items": {"type": "string"},
"description": "Any existing automations or integrations",
},
"additional_notes": {
"type": "string",
"description": "Any other relevant context or notes",
},
},
"required": [],
}
@property
def requires_auth(self) -> bool:
"""Requires authentication to store user-specific data."""
return True
async def _execute(
self,
user_id: str | None,
session: ChatSession,
**kwargs,
) -> ToolResponseBase:
"""
Capture and store business understanding incrementally.
Each call merges new data with existing understanding:
- String fields are overwritten if provided
- List fields are appended (with deduplication)
"""
session_id = session.session_id
if not user_id:
return ErrorResponse(
message="Authentication required to save business understanding.",
session_id=session_id,
)
# Check if any data was provided
if not any(v is not None for v in kwargs.values()):
return ErrorResponse(
message="Please provide at least one field to update.",
session_id=session_id,
)
# Build input model
input_data = BusinessUnderstandingInput(
user_name=kwargs.get("user_name"),
job_title=kwargs.get("job_title"),
business_name=kwargs.get("business_name"),
industry=kwargs.get("industry"),
business_size=kwargs.get("business_size"),
user_role=kwargs.get("user_role"),
key_workflows=kwargs.get("key_workflows"),
daily_activities=kwargs.get("daily_activities"),
pain_points=kwargs.get("pain_points"),
bottlenecks=kwargs.get("bottlenecks"),
manual_tasks=kwargs.get("manual_tasks"),
automation_goals=kwargs.get("automation_goals"),
current_software=kwargs.get("current_software"),
existing_automation=kwargs.get("existing_automation"),
additional_notes=kwargs.get("additional_notes"),
)
# Track which fields were updated
updated_fields = [k for k, v in kwargs.items() if v is not None]
# Upsert with merge
understanding = await upsert_business_understanding(user_id, input_data)
# Build current understanding summary for the response
current_understanding = {
"user_name": understanding.user_name,
"job_title": understanding.job_title,
"business_name": understanding.business_name,
"industry": understanding.industry,
"business_size": understanding.business_size,
"user_role": understanding.user_role,
"key_workflows": understanding.key_workflows,
"daily_activities": understanding.daily_activities,
"pain_points": understanding.pain_points,
"bottlenecks": understanding.bottlenecks,
"manual_tasks": understanding.manual_tasks,
"automation_goals": understanding.automation_goals,
"current_software": understanding.current_software,
"existing_automation": understanding.existing_automation,
"additional_notes": understanding.additional_notes,
}
# Filter out empty values for cleaner response
current_understanding = {
k: v
for k, v in current_understanding.items()
if v is not None and v != [] and v != ""
}
return UnderstandingUpdatedResponse(
message=f"Updated understanding with: {', '.join(updated_fields)}. "
"I now have a better picture of your business context.",
session_id=session_id,
updated_fields=updated_fields,
current_understanding=current_understanding,
)
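# Merge semantics (illustrative; the field values are hypothetical):
#   call 1: add_understanding(user_name="Sarah", pain_points=["manual reporting"])
#   call 2: add_understanding(job_title="CMO", pain_points=["lead routing"])
#   result: user_name="Sarah", job_title="CMO",
#           pain_points=["manual reporting", "lead routing"]
#   (string fields are overwritten when provided; list fields are appended with
#   deduplication, as described in _execute above)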


@@ -0,0 +1,455 @@
"""Tool for retrieving agent execution outputs from user's library."""
import logging
import re
from datetime import datetime, timedelta, timezone
from typing import Any
from pydantic import BaseModel, field_validator
from backend.api.features.chat.model import ChatSession
from backend.api.features.library import db as library_db
from backend.api.features.library.model import LibraryAgent
from backend.data import execution as execution_db
from backend.data.execution import ExecutionStatus, GraphExecution, GraphExecutionMeta
from .base import BaseTool
from .models import (
AgentOutputResponse,
ErrorResponse,
ExecutionOutputInfo,
NoResultsResponse,
ToolResponseBase,
)
from .utils import fetch_graph_from_store_slug
logger = logging.getLogger(__name__)
class AgentOutputInput(BaseModel):
"""Input parameters for the agent_output tool."""
agent_name: str = ""
library_agent_id: str = ""
store_slug: str = ""
execution_id: str = ""
run_time: str = "latest"
@field_validator(
"agent_name",
"library_agent_id",
"store_slug",
"execution_id",
"run_time",
mode="before",
)
@classmethod
def strip_strings(cls, v: Any) -> Any:
"""Strip whitespace from string fields."""
return v.strip() if isinstance(v, str) else v
def parse_time_expression(
time_expr: str | None,
) -> tuple[datetime | None, datetime | None]:
"""
Parse time expression into datetime range (start, end).
Supports:
- "latest" or None -> returns (None, None) to get most recent
- "yesterday" -> 24h window for yesterday
- "today" -> Today from midnight
- "last week" / "last 7 days" -> 7 day window
- "last month" / "last 30 days" -> 30 day window
- ISO date "YYYY-MM-DD" -> 24h window for that date
"""
if not time_expr or time_expr.lower() == "latest":
return None, None
now = datetime.now(timezone.utc)
expr = time_expr.lower().strip()
# Relative expressions
if expr == "yesterday":
end = now.replace(hour=0, minute=0, second=0, microsecond=0)
start = end - timedelta(days=1)
return start, end
if expr in ("last week", "last 7 days"):
return now - timedelta(days=7), now
if expr in ("last month", "last 30 days"):
return now - timedelta(days=30), now
if expr == "today":
start = now.replace(hour=0, minute=0, second=0, microsecond=0)
return start, now
# Try ISO date format (YYYY-MM-DD)
date_match = re.match(r"^(\d{4})-(\d{2})-(\d{2})$", expr)
if date_match:
year, month, day = map(int, date_match.groups())
start = datetime(year, month, day, 0, 0, 0, tzinfo=timezone.utc)
end = start + timedelta(days=1)
return start, end
# Try ISO datetime
try:
parsed = datetime.fromisoformat(expr.replace("Z", "+00:00"))
if parsed.tzinfo is None:
parsed = parsed.replace(tzinfo=timezone.utc)
# Return +/- 1 hour window around the specified time
return parsed - timedelta(hours=1), parsed + timedelta(hours=1)
except ValueError:
pass
# Fallback: treat as "latest"
return None, None
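# Examples (illustrative; all windows are UTC):
#   parse_time_expression("latest")      -> (None, None)            # most recent run
#   parse_time_expression("yesterday")   -> (yesterday 00:00, today 00:00)
#   parse_time_expression("last week")   -> (now - 7 days, now)
#   parse_time_expression("2026-01-05")  -> (2026-01-05 00:00, 2026-01-06 00:00)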
class AgentOutputTool(BaseTool):
"""Tool for retrieving execution outputs from user's library agents."""
@property
def name(self) -> str:
return "agent_output"
@property
def description(self) -> str:
return """Retrieve execution outputs from agents in the user's library.
Identify the agent using one of:
- agent_name: Fuzzy search in user's library
- library_agent_id: Exact library agent ID
- store_slug: Marketplace format 'username/agent-name'
Select which run to retrieve using:
- execution_id: Specific execution ID
- run_time: 'latest' (default), 'yesterday', 'last week', or ISO date 'YYYY-MM-DD'
"""
@property
def parameters(self) -> dict[str, Any]:
return {
"type": "object",
"properties": {
"agent_name": {
"type": "string",
"description": "Agent name to search for in user's library (fuzzy match)",
},
"library_agent_id": {
"type": "string",
"description": "Exact library agent ID",
},
"store_slug": {
"type": "string",
"description": "Marketplace identifier: 'username/agent-slug'",
},
"execution_id": {
"type": "string",
"description": "Specific execution ID to retrieve",
},
"run_time": {
"type": "string",
"description": (
"Time filter: 'latest', 'yesterday', 'last week', or 'YYYY-MM-DD'"
),
},
},
"required": [],
}
@property
def requires_auth(self) -> bool:
return True
async def _resolve_agent(
self,
user_id: str,
agent_name: str | None,
library_agent_id: str | None,
store_slug: str | None,
) -> tuple[LibraryAgent | None, str | None]:
"""
Resolve agent from provided identifiers.
Returns (library_agent, error_message).
"""
# Priority 1: Exact library agent ID
if library_agent_id:
try:
agent = await library_db.get_library_agent(library_agent_id, user_id)
return agent, None
except Exception as e:
logger.warning(f"Failed to get library agent by ID: {e}")
return None, f"Library agent '{library_agent_id}' not found"
# Priority 2: Store slug (username/agent-name)
if store_slug and "/" in store_slug:
username, agent_slug = store_slug.split("/", 1)
graph, _ = await fetch_graph_from_store_slug(username, agent_slug)
if not graph:
return None, f"Agent '{store_slug}' not found in marketplace"
# Find in user's library by graph_id
agent = await library_db.get_library_agent_by_graph_id(user_id, graph.id)
if not agent:
return (
None,
f"Agent '{store_slug}' is not in your library. "
"Add it first to see outputs.",
)
return agent, None
# Priority 3: Fuzzy name search in library
if agent_name:
try:
response = await library_db.list_library_agents(
user_id=user_id,
search_term=agent_name,
page_size=5,
)
if not response.agents:
return (
None,
f"No agents matching '{agent_name}' found in your library",
)
# Return best match (first result from search)
return response.agents[0], None
except Exception as e:
logger.error(f"Error searching library agents: {e}")
return None, f"Error searching for agent: {e}"
return (
None,
"Please specify an agent name, library_agent_id, or store_slug",
)
async def _get_execution(
self,
user_id: str,
graph_id: str,
execution_id: str | None,
time_start: datetime | None,
time_end: datetime | None,
) -> tuple[GraphExecution | None, list[GraphExecutionMeta], str | None]:
"""
Fetch execution(s) based on filters.
Returns (single_execution, available_executions_meta, error_message).
"""
# If specific execution_id provided, fetch it directly
if execution_id:
execution = await execution_db.get_graph_execution(
user_id=user_id,
execution_id=execution_id,
include_node_executions=False,
)
if not execution:
return None, [], f"Execution '{execution_id}' not found"
return execution, [], None
# Get completed executions with time filters
executions = await execution_db.get_graph_executions(
graph_id=graph_id,
user_id=user_id,
statuses=[ExecutionStatus.COMPLETED],
created_time_gte=time_start,
created_time_lte=time_end,
limit=10,
)
if not executions:
return None, [], None # No error, just no executions
# If only one execution, fetch full details
if len(executions) == 1:
full_execution = await execution_db.get_graph_execution(
user_id=user_id,
execution_id=executions[0].id,
include_node_executions=False,
)
return full_execution, [], None
# Multiple executions - return latest with full details, plus list of available
full_execution = await execution_db.get_graph_execution(
user_id=user_id,
execution_id=executions[0].id,
include_node_executions=False,
)
return full_execution, executions, None
def _build_response(
self,
agent: LibraryAgent,
execution: GraphExecution | None,
available_executions: list[GraphExecutionMeta],
session_id: str | None,
) -> AgentOutputResponse:
"""Build the response based on execution data."""
library_agent_link = f"/library/agents/{agent.id}"
if not execution:
return AgentOutputResponse(
message=f"No completed executions found for agent '{agent.name}'",
session_id=session_id,
agent_name=agent.name,
agent_id=agent.graph_id,
library_agent_id=agent.id,
library_agent_link=library_agent_link,
total_executions=0,
)
execution_info = ExecutionOutputInfo(
execution_id=execution.id,
status=execution.status.value,
started_at=execution.started_at,
ended_at=execution.ended_at,
outputs=dict(execution.outputs),
inputs_summary=execution.inputs if execution.inputs else None,
)
available_list = None
if len(available_executions) > 1:
available_list = [
{
"id": e.id,
"status": e.status.value,
"started_at": e.started_at.isoformat() if e.started_at else None,
}
for e in available_executions[:5]
]
message = f"Found execution outputs for agent '{agent.name}'"
if len(available_executions) > 1:
message += (
f". Showing latest of {len(available_executions)} matching executions."
)
return AgentOutputResponse(
message=message,
session_id=session_id,
agent_name=agent.name,
agent_id=agent.graph_id,
library_agent_id=agent.id,
library_agent_link=library_agent_link,
execution=execution_info,
available_executions=available_list,
total_executions=len(available_executions) if available_executions else 1,
)
async def _execute(
self,
user_id: str | None,
session: ChatSession,
**kwargs,
) -> ToolResponseBase:
"""Execute the agent_output tool."""
session_id = session.session_id
# Parse and validate input
try:
input_data = AgentOutputInput(**kwargs)
except Exception as e:
logger.error(f"Invalid input: {e}")
return ErrorResponse(
message="Invalid input parameters",
error=str(e),
session_id=session_id,
)
# Ensure user_id is present (should be guaranteed by requires_auth)
if not user_id:
return ErrorResponse(
message="User authentication required",
session_id=session_id,
)
# Check if at least one identifier is provided
if not any(
[
input_data.agent_name,
input_data.library_agent_id,
input_data.store_slug,
input_data.execution_id,
]
):
return ErrorResponse(
message=(
"Please specify at least one of: agent_name, "
"library_agent_id, store_slug, or execution_id"
),
session_id=session_id,
)
# If only execution_id provided, we need to find the agent differently
if (
input_data.execution_id
and not input_data.agent_name
and not input_data.library_agent_id
and not input_data.store_slug
):
# Fetch execution directly to get graph_id
execution = await execution_db.get_graph_execution(
user_id=user_id,
execution_id=input_data.execution_id,
include_node_executions=False,
)
if not execution:
return ErrorResponse(
message=f"Execution '{input_data.execution_id}' not found",
session_id=session_id,
)
# Find library agent by graph_id
agent = await library_db.get_library_agent_by_graph_id(
user_id, execution.graph_id
)
if not agent:
return NoResultsResponse(
message=(
f"Execution found but agent not in your library. "
f"Graph ID: {execution.graph_id}"
),
session_id=session_id,
suggestions=["Add the agent to your library to see more details"],
)
return self._build_response(agent, execution, [], session_id)
# Resolve agent from identifiers
agent, error = await self._resolve_agent(
user_id=user_id,
agent_name=input_data.agent_name or None,
library_agent_id=input_data.library_agent_id or None,
store_slug=input_data.store_slug or None,
)
if error or not agent:
return NoResultsResponse(
message=error or "Agent not found",
session_id=session_id,
suggestions=[
"Check the agent name or ID",
"Make sure the agent is in your library",
],
)
# Parse time expression
time_start, time_end = parse_time_expression(input_data.run_time)
# Fetch execution(s)
execution, available_executions, exec_error = await self._get_execution(
user_id=user_id,
graph_id=agent.graph_id,
execution_id=input_data.execution_id or None,
time_start=time_start,
time_end=time_end,
)
if exec_error:
return ErrorResponse(
message=exec_error,
session_id=session_id,
)
return self._build_response(agent, execution, available_executions, session_id)


@@ -5,8 +5,8 @@ from typing import Any
from openai.types.chat import ChatCompletionToolParam
from backend.server.v2.chat.model import ChatSession
from backend.server.v2.chat.response_model import StreamToolExecutionResult
from backend.api.features.chat.model import ChatSession
from backend.api.features.chat.response_model import StreamToolExecutionResult
from .models import ErrorResponse, NeedLoginResponse, ToolResponseBase


@@ -3,17 +3,18 @@
import logging
from typing import Any
from backend.server.v2.chat.model import ChatSession
from backend.server.v2.chat.tools.base import BaseTool
from backend.server.v2.chat.tools.models import (
from backend.api.features.chat.model import ChatSession
from backend.api.features.store import db as store_db
from backend.util.exceptions import DatabaseError, NotFoundError
from .base import BaseTool
from .models import (
AgentCarouselResponse,
AgentInfo,
ErrorResponse,
NoResultsResponse,
ToolResponseBase,
)
from backend.server.v2.store import db as store_db
from backend.util.exceptions import DatabaseError, NotFoundError
logger = logging.getLogger(__name__)


@@ -0,0 +1,157 @@
"""Tool for searching agents in the user's library."""
import logging
from typing import Any
from backend.api.features.chat.model import ChatSession
from backend.api.features.library import db as library_db
from backend.util.exceptions import DatabaseError
from .base import BaseTool
from .models import (
AgentCarouselResponse,
AgentInfo,
ErrorResponse,
NoResultsResponse,
ToolResponseBase,
)
logger = logging.getLogger(__name__)
class FindLibraryAgentTool(BaseTool):
"""Tool for searching agents in the user's library."""
@property
def name(self) -> str:
return "find_library_agent"
@property
def description(self) -> str:
return (
"Search for agents in the user's library. Use this to find agents "
"the user has already added to their library, including agents they "
"created or added from the marketplace."
)
@property
def parameters(self) -> dict[str, Any]:
return {
"type": "object",
"properties": {
"query": {
"type": "string",
"description": (
"Search query to find agents by name or description. "
"Use keywords for best results."
),
},
},
"required": ["query"],
}
@property
def requires_auth(self) -> bool:
return True
async def _execute(
self,
user_id: str | None,
session: ChatSession,
**kwargs,
) -> ToolResponseBase:
"""Search for agents in the user's library.
Args:
user_id: User ID (required)
session: Chat session
query: Search query
Returns:
AgentCarouselResponse: List of agents found in the library
NoResultsResponse: No agents found
ErrorResponse: Error message
"""
query = kwargs.get("query", "").strip()
session_id = session.session_id
if not query:
return ErrorResponse(
message="Please provide a search query",
session_id=session_id,
)
if not user_id:
return ErrorResponse(
message="User authentication required to search library",
session_id=session_id,
)
agents = []
try:
logger.info(f"Searching user library for: {query}")
library_results = await library_db.list_library_agents(
user_id=user_id,
search_term=query,
page_size=10,
)
logger.info(
f"Find library agents tool found {len(library_results.agents)} agents"
)
for agent in library_results.agents:
agents.append(
AgentInfo(
id=agent.id,
name=agent.name,
description=agent.description or "",
source="library",
in_library=True,
creator=agent.creator_name,
status=agent.status.value,
can_access_graph=agent.can_access_graph,
has_external_trigger=agent.has_external_trigger,
new_output=agent.new_output,
graph_id=agent.graph_id,
),
)
except DatabaseError as e:
logger.error(f"Error searching library agents: {e}", exc_info=True)
return ErrorResponse(
message="Failed to search library. Please try again.",
error=str(e),
session_id=session_id,
)
if not agents:
return NoResultsResponse(
message=(
f"No agents found matching '{query}' in your library. "
"Try different keywords or use find_agent to search the marketplace."
),
session_id=session_id,
suggestions=[
"Try more general terms",
"Use find_agent to search the marketplace",
"Check your library at /library",
],
)
title = (
f"Found {len(agents)} agent{'s' if len(agents) != 1 else ''} "
f"in your library for '{query}'"
)
return AgentCarouselResponse(
message=(
"Found agents in the user's library. You can provide a link to "
"view an agent at: /library/agents/{agent_id}. "
"Use agent_output to get execution results, or run_agent to execute."
),
title=title,
agents=agents,
count=len(agents),
session_id=session_id,
)


@@ -1,5 +1,6 @@
"""Pydantic models for tool responses."""
from datetime import datetime
from enum import Enum
from typing import Any
@@ -19,6 +20,15 @@ class ResponseType(str, Enum):
ERROR = "error"
NO_RESULTS = "no_results"
SUCCESS = "success"
DOC_SEARCH_RESULTS = "doc_search_results"
AGENT_OUTPUT = "agent_output"
BLOCK_LIST = "block_list"
BLOCK_OUTPUT = "block_output"
UNDERSTANDING_UPDATED = "understanding_updated"
# Agent generation responses
AGENT_PREVIEW = "agent_preview"
AGENT_SAVED = "agent_saved"
CLARIFICATION_NEEDED = "clarification_needed"
# Base response model
@@ -173,3 +183,128 @@ class ErrorResponse(ToolResponseBase):
type: ResponseType = ResponseType.ERROR
error: str | None = None
details: dict[str, Any] | None = None
# Documentation search models
class DocSearchResult(BaseModel):
"""A single documentation search result."""
title: str
path: str
section: str
snippet: str # Short excerpt for UI display
content: str # Full text content for LLM to read and understand
score: float
doc_url: str | None = None
class DocSearchResultsResponse(ToolResponseBase):
"""Response for search_docs tool."""
type: ResponseType = ResponseType.DOC_SEARCH_RESULTS
results: list[DocSearchResult]
count: int
query: str
# Agent output models
class ExecutionOutputInfo(BaseModel):
"""Summary of a single execution's outputs."""
execution_id: str
status: str
started_at: datetime | None = None
ended_at: datetime | None = None
outputs: dict[str, list[Any]]
inputs_summary: dict[str, Any] | None = None
class AgentOutputResponse(ToolResponseBase):
"""Response for agent_output tool."""
type: ResponseType = ResponseType.AGENT_OUTPUT
agent_name: str
agent_id: str
library_agent_id: str | None = None
library_agent_link: str | None = None
execution: ExecutionOutputInfo | None = None
available_executions: list[dict[str, Any]] | None = None
total_executions: int = 0
# Block models
class BlockInfoSummary(BaseModel):
"""Summary of a block for search results."""
id: str
name: str
description: str
categories: list[str]
input_schema: dict[str, Any]
output_schema: dict[str, Any]
class BlockListResponse(ToolResponseBase):
"""Response for find_block tool."""
type: ResponseType = ResponseType.BLOCK_LIST
blocks: list[BlockInfoSummary]
count: int
query: str
class BlockOutputResponse(ToolResponseBase):
"""Response for run_block tool."""
type: ResponseType = ResponseType.BLOCK_OUTPUT
block_id: str
block_name: str
outputs: dict[str, list[Any]]
success: bool = True
# Business understanding models
class UnderstandingUpdatedResponse(ToolResponseBase):
"""Response for add_understanding tool."""
type: ResponseType = ResponseType.UNDERSTANDING_UPDATED
updated_fields: list[str] = Field(default_factory=list)
current_understanding: dict[str, Any] = Field(default_factory=dict)
# Agent generation models
class ClarifyingQuestion(BaseModel):
"""A question that needs user clarification."""
question: str
keyword: str
example: str | None = None
class AgentPreviewResponse(ToolResponseBase):
"""Response for previewing a generated agent before saving."""
type: ResponseType = ResponseType.AGENT_PREVIEW
agent_json: dict[str, Any]
agent_name: str
description: str
node_count: int
link_count: int = 0
class AgentSavedResponse(ToolResponseBase):
"""Response when an agent is saved to the library."""
type: ResponseType = ResponseType.AGENT_SAVED
agent_id: str
agent_name: str
library_agent_id: str
library_agent_link: str
agent_page_link: str # Link to the agent builder/editor page
class ClarificationNeededResponse(ToolResponseBase):
"""Response when the LLM needs more information from the user."""
type: ResponseType = ResponseType.CLARIFICATION_NEEDED
questions: list[ClarifyingQuestion] = Field(default_factory=list)
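
As a usage sketch of the new response models (the import path and the `session_id` base field are assumptions inferred from this diff, not confirmed API):

```python
# Hedged sketch: building and serializing a doc-search response from the
# models above. Import path and session_id base field are assumptions.
from backend.api.features.chat.tools.models import (
    DocSearchResult,
    DocSearchResultsResponse,
)

response = DocSearchResultsResponse(
    results=[
        DocSearchResult(
            title="Getting Started",
            path="docs/getting-started.md",
            section="Installation",
            snippet="Install the platform with docker compose...",  # short UI excerpt
            content="Full section text for the LLM to read...",  # full text for the model
            score=0.92,
        )
    ],
    count=1,
    query="install",
    session_id="session-123",
)
print(response.model_dump_json())
```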

View File

@@ -5,14 +5,22 @@ from typing import Any
from pydantic import BaseModel, Field, field_validator
from backend.api.features.chat.config import ChatConfig
from backend.api.features.chat.model import ChatSession
from backend.api.features.library import db as library_db
from backend.data.graph import GraphModel
from backend.data.model import CredentialsMetaInput
from backend.data.user import get_user_by_id
from backend.executor import utils as execution_utils
from backend.server.v2.chat.config import ChatConfig
from backend.server.v2.chat.model import ChatSession
from backend.server.v2.chat.tools.base import BaseTool
from backend.server.v2.chat.tools.models import (
from backend.util.clients import get_scheduler_client
from backend.util.exceptions import DatabaseError, NotFoundError
from backend.util.timezone_utils import (
convert_utc_time_to_user_timezone,
get_user_timezone_or_utc,
)
from .base import BaseTool
from .models import (
AgentDetails,
AgentDetailsResponse,
ErrorResponse,
@@ -23,19 +31,13 @@ from backend.server.v2.chat.tools.models import (
ToolResponseBase,
UserReadiness,
)
from backend.server.v2.chat.tools.utils import (
from .utils import (
check_user_has_required_credentials,
extract_credentials_from_schema,
fetch_graph_from_store_slug,
get_or_create_library_agent,
match_user_credentials_to_graph,
)
from backend.util.clients import get_scheduler_client
from backend.util.exceptions import DatabaseError, NotFoundError
from backend.util.timezone_utils import (
convert_utc_time_to_user_timezone,
get_user_timezone_or_utc,
)
logger = logging.getLogger(__name__)
config = ChatConfig()
@@ -56,6 +58,7 @@ class RunAgentInput(BaseModel):
"""Input parameters for the run_agent tool."""
username_agent_slug: str = ""
library_agent_id: str = ""
inputs: dict[str, Any] = Field(default_factory=dict)
use_defaults: bool = False
schedule_name: str = ""
@@ -63,7 +66,12 @@ class RunAgentInput(BaseModel):
timezone: str = "UTC"
@field_validator(
"username_agent_slug", "schedule_name", "cron", "timezone", mode="before"
"username_agent_slug",
"library_agent_id",
"schedule_name",
"cron",
"timezone",
mode="before",
)
@classmethod
def strip_strings(cls, v: Any) -> Any:
@@ -89,7 +97,7 @@ class RunAgentTool(BaseTool):
@property
def description(self) -> str:
return """Run or schedule an agent from the marketplace.
return """Run or schedule an agent from the marketplace or user's library.
The tool automatically handles the setup flow:
- Returns missing inputs if required fields are not provided
@@ -97,6 +105,10 @@ class RunAgentTool(BaseTool):
- Executes immediately if all requirements are met
- Schedules execution if cron expression is provided
Identify the agent using either:
- username_agent_slug: Marketplace format 'username/agent-name'
- library_agent_id: ID of an agent in the user's library
For scheduled execution, provide: schedule_name, cron, and optionally timezone."""
@property
@@ -108,6 +120,10 @@ class RunAgentTool(BaseTool):
"type": "string",
"description": "Agent identifier in format 'username/agent-name'",
},
"library_agent_id": {
"type": "string",
"description": "Library agent ID from user's library",
},
"inputs": {
"type": "object",
"description": "Input values for the agent",
@@ -130,7 +146,7 @@ class RunAgentTool(BaseTool):
"description": "IANA timezone for schedule (default: UTC)",
},
},
"required": ["username_agent_slug"],
"required": [],
}
@property
@@ -148,10 +164,16 @@ class RunAgentTool(BaseTool):
params = RunAgentInput(**kwargs)
session_id = session.session_id
# Validate agent slug format
if not params.username_agent_slug or "/" not in params.username_agent_slug:
# Validate at least one identifier is provided
has_slug = params.username_agent_slug and "/" in params.username_agent_slug
has_library_id = bool(params.library_agent_id)
if not has_slug and not has_library_id:
return ErrorResponse(
message="Please provide an agent slug in format 'username/agent-name'",
message=(
"Please provide either a username_agent_slug "
"(format 'username/agent-name') or a library_agent_id"
),
session_id=session_id,
)
@@ -166,13 +188,41 @@ class RunAgentTool(BaseTool):
is_schedule = bool(params.schedule_name or params.cron)
try:
# Step 1: Fetch agent details (always happens first)
username, agent_name = params.username_agent_slug.split("/", 1)
graph, store_agent = await fetch_graph_from_store_slug(username, agent_name)
# Step 1: Fetch agent details
graph: GraphModel | None = None
library_agent = None
# Priority: library_agent_id if provided
if has_library_id:
library_agent = await library_db.get_library_agent(
params.library_agent_id, user_id
)
if not library_agent:
return ErrorResponse(
message=f"Library agent '{params.library_agent_id}' not found",
session_id=session_id,
)
# Get the graph from the library agent
from backend.data.graph import get_graph
graph = await get_graph(
library_agent.graph_id,
library_agent.graph_version,
user_id=user_id,
)
else:
# Fetch from marketplace slug
username, agent_name = params.username_agent_slug.split("/", 1)
graph, _ = await fetch_graph_from_store_slug(username, agent_name)
if not graph:
identifier = (
params.library_agent_id
if has_library_id
else params.username_agent_slug
)
return ErrorResponse(
message=f"Agent '{params.username_agent_slug}' not found in marketplace",
message=f"Agent '{identifier}' not found",
session_id=session_id,
)
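
The dual-identifier validation reduces to a small decision table; a self-contained sketch of that logic (not the actual tool plumbing):

```python
# Minimal sketch of the identifier resolution added above: library_agent_id
# takes priority when both identifiers are supplied; neither one is an error.
def resolve_identifier(
    username_agent_slug: str = "", library_agent_id: str = ""
) -> tuple[str, str]:
    has_slug = bool(username_agent_slug) and "/" in username_agent_slug
    has_library_id = bool(library_agent_id)
    if not has_slug and not has_library_id:
        raise ValueError(
            "Provide either a username_agent_slug ('username/agent-name') "
            "or a library_agent_id"
        )
    if has_library_id:  # priority: the user's library agent
        return ("library", library_agent_id)
    return ("marketplace", username_agent_slug)


assert resolve_identifier(library_agent_id="lib-123") == ("library", "lib-123")
assert resolve_identifier("alice/scraper") == ("marketplace", "alice/scraper")
```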

View File

@@ -3,13 +3,13 @@ import uuid
import orjson
import pytest
from backend.server.v2.chat.tools._test_data import (
from ._test_data import (
make_session,
setup_firecrawl_test_data,
setup_llm_test_data,
setup_test_data,
)
from backend.server.v2.chat.tools.run_agent import RunAgentTool
from .run_agent import RunAgentTool
# This is so the formatter doesn't remove the fixture imports
setup_llm_test_data = setup_llm_test_data

View File

@@ -3,13 +3,13 @@
import logging
from typing import Any
from backend.api.features.library import db as library_db
from backend.api.features.library import model as library_model
from backend.api.features.store import db as store_db
from backend.data import graph as graph_db
from backend.data.graph import GraphModel
from backend.data.model import CredentialsMetaInput
from backend.integrations.creds_manager import IntegrationCredentialsManager
from backend.server.v2.library import db as library_db
from backend.server.v2.library import model as library_model
from backend.server.v2.store import db as store_db
from backend.util.exceptions import NotFoundError
logger = logging.getLogger(__name__)

View File

@@ -7,9 +7,10 @@ import pytest_mock
from prisma.enums import ReviewStatus
from pytest_snapshot.plugin import Snapshot
from backend.server.rest_api import handle_internal_http_error
from backend.server.v2.executions.review.model import PendingHumanReviewModel
from backend.server.v2.executions.review.routes import router
from backend.api.rest_api import handle_internal_http_error
from .model import PendingHumanReviewModel
from .routes import router
# Using a fixed timestamp for reproducible tests
FIXED_NOW = datetime.datetime(2023, 1, 1, 0, 0, 0, tzinfo=datetime.timezone.utc)
@@ -54,13 +55,13 @@ def sample_pending_review(test_user_id: str) -> PendingHumanReviewModel:
def test_get_pending_reviews_empty(
mocker: pytest_mock.MockFixture,
mocker: pytest_mock.MockerFixture,
snapshot: Snapshot,
test_user_id: str,
) -> None:
"""Test getting pending reviews when none exist"""
mock_get_reviews = mocker.patch(
"backend.server.v2.executions.review.routes.get_pending_reviews_for_user"
"backend.api.features.executions.review.routes.get_pending_reviews_for_user"
)
mock_get_reviews.return_value = []
@@ -72,14 +73,14 @@ def test_get_pending_reviews_empty(
def test_get_pending_reviews_with_data(
mocker: pytest_mock.MockFixture,
mocker: pytest_mock.MockerFixture,
sample_pending_review: PendingHumanReviewModel,
snapshot: Snapshot,
test_user_id: str,
) -> None:
"""Test getting pending reviews with data"""
mock_get_reviews = mocker.patch(
"backend.server.v2.executions.review.routes.get_pending_reviews_for_user"
"backend.api.features.executions.review.routes.get_pending_reviews_for_user"
)
mock_get_reviews.return_value = [sample_pending_review]
@@ -94,14 +95,14 @@ def test_get_pending_reviews_with_data(
def test_get_pending_reviews_for_execution_success(
mocker: pytest_mock.MockFixture,
mocker: pytest_mock.MockerFixture,
sample_pending_review: PendingHumanReviewModel,
snapshot: Snapshot,
test_user_id: str,
) -> None:
"""Test getting pending reviews for specific execution"""
mock_get_graph_execution = mocker.patch(
"backend.server.v2.executions.review.routes.get_graph_execution_meta"
"backend.api.features.executions.review.routes.get_graph_execution_meta"
)
mock_get_graph_execution.return_value = {
"id": "test_graph_exec_456",
@@ -109,7 +110,7 @@ def test_get_pending_reviews_for_execution_success(
}
mock_get_reviews = mocker.patch(
"backend.server.v2.executions.review.routes.get_pending_reviews_for_execution"
"backend.api.features.executions.review.routes.get_pending_reviews_for_execution"
)
mock_get_reviews.return_value = [sample_pending_review]
@@ -121,24 +122,23 @@ def test_get_pending_reviews_for_execution_success(
assert data[0]["graph_exec_id"] == "test_graph_exec_456"
def test_get_pending_reviews_for_execution_access_denied(
mocker: pytest_mock.MockFixture,
test_user_id: str,
def test_get_pending_reviews_for_execution_not_available(
mocker: pytest_mock.MockerFixture,
) -> None:
"""Test access denied when user doesn't own the execution"""
mock_get_graph_execution = mocker.patch(
"backend.server.v2.executions.review.routes.get_graph_execution_meta"
"backend.api.features.executions.review.routes.get_graph_execution_meta"
)
mock_get_graph_execution.return_value = None
response = client.get("/api/review/execution/test_graph_exec_456")
assert response.status_code == 403
assert "Access denied" in response.json()["detail"]
assert response.status_code == 404
assert "not found" in response.json()["detail"]
def test_process_review_action_approve_success(
mocker: pytest_mock.MockFixture,
mocker: pytest_mock.MockerFixture,
sample_pending_review: PendingHumanReviewModel,
test_user_id: str,
) -> None:
@@ -146,12 +146,12 @@ def test_process_review_action_approve_success(
# Mock the route functions
mock_get_reviews_for_execution = mocker.patch(
"backend.server.v2.executions.review.routes.get_pending_reviews_for_execution"
"backend.api.features.executions.review.routes.get_pending_reviews_for_execution"
)
mock_get_reviews_for_execution.return_value = [sample_pending_review]
mock_process_all_reviews = mocker.patch(
"backend.server.v2.executions.review.routes.process_all_reviews_for_execution"
"backend.api.features.executions.review.routes.process_all_reviews_for_execution"
)
# Create approved review for return
approved_review = PendingHumanReviewModel(
@@ -174,11 +174,11 @@ def test_process_review_action_approve_success(
mock_process_all_reviews.return_value = {"test_node_123": approved_review}
mock_has_pending = mocker.patch(
"backend.server.v2.executions.review.routes.has_pending_reviews_for_graph_exec"
"backend.api.features.executions.review.routes.has_pending_reviews_for_graph_exec"
)
mock_has_pending.return_value = False
mocker.patch("backend.server.v2.executions.review.routes.add_graph_execution")
mocker.patch("backend.api.features.executions.review.routes.add_graph_execution")
request_data = {
"reviews": [
@@ -202,7 +202,7 @@ def test_process_review_action_approve_success(
def test_process_review_action_reject_success(
mocker: pytest_mock.MockFixture,
mocker: pytest_mock.MockerFixture,
sample_pending_review: PendingHumanReviewModel,
test_user_id: str,
) -> None:
@@ -210,12 +210,12 @@ def test_process_review_action_reject_success(
# Mock the route functions
mock_get_reviews_for_execution = mocker.patch(
"backend.server.v2.executions.review.routes.get_pending_reviews_for_execution"
"backend.api.features.executions.review.routes.get_pending_reviews_for_execution"
)
mock_get_reviews_for_execution.return_value = [sample_pending_review]
mock_process_all_reviews = mocker.patch(
"backend.server.v2.executions.review.routes.process_all_reviews_for_execution"
"backend.api.features.executions.review.routes.process_all_reviews_for_execution"
)
rejected_review = PendingHumanReviewModel(
node_exec_id="test_node_123",
@@ -237,7 +237,7 @@ def test_process_review_action_reject_success(
mock_process_all_reviews.return_value = {"test_node_123": rejected_review}
mock_has_pending = mocker.patch(
"backend.server.v2.executions.review.routes.has_pending_reviews_for_graph_exec"
"backend.api.features.executions.review.routes.has_pending_reviews_for_graph_exec"
)
mock_has_pending.return_value = False
@@ -262,7 +262,7 @@ def test_process_review_action_reject_success(
def test_process_review_action_mixed_success(
mocker: pytest_mock.MockFixture,
mocker: pytest_mock.MockerFixture,
sample_pending_review: PendingHumanReviewModel,
test_user_id: str,
) -> None:
@@ -289,12 +289,12 @@ def test_process_review_action_mixed_success(
# Mock the route functions
mock_get_reviews_for_execution = mocker.patch(
"backend.server.v2.executions.review.routes.get_pending_reviews_for_execution"
"backend.api.features.executions.review.routes.get_pending_reviews_for_execution"
)
mock_get_reviews_for_execution.return_value = [sample_pending_review, second_review]
mock_process_all_reviews = mocker.patch(
"backend.server.v2.executions.review.routes.process_all_reviews_for_execution"
"backend.api.features.executions.review.routes.process_all_reviews_for_execution"
)
# Create approved version of first review
approved_review = PendingHumanReviewModel(
@@ -338,7 +338,7 @@ def test_process_review_action_mixed_success(
}
mock_has_pending = mocker.patch(
"backend.server.v2.executions.review.routes.has_pending_reviews_for_graph_exec"
"backend.api.features.executions.review.routes.has_pending_reviews_for_graph_exec"
)
mock_has_pending.return_value = False
@@ -369,7 +369,7 @@ def test_process_review_action_mixed_success(
def test_process_review_action_empty_request(
mocker: pytest_mock.MockFixture,
mocker: pytest_mock.MockerFixture,
test_user_id: str,
) -> None:
"""Test error when no reviews provided"""
@@ -386,19 +386,19 @@ def test_process_review_action_empty_request(
def test_process_review_action_review_not_found(
mocker: pytest_mock.MockFixture,
mocker: pytest_mock.MockerFixture,
test_user_id: str,
) -> None:
"""Test error when review is not found"""
# Mock the functions that extract graph execution ID from the request
mock_get_reviews_for_execution = mocker.patch(
"backend.server.v2.executions.review.routes.get_pending_reviews_for_execution"
"backend.api.features.executions.review.routes.get_pending_reviews_for_execution"
)
mock_get_reviews_for_execution.return_value = [] # No reviews found
# Mock process_all_reviews to simulate not finding reviews
mock_process_all_reviews = mocker.patch(
"backend.server.v2.executions.review.routes.process_all_reviews_for_execution"
"backend.api.features.executions.review.routes.process_all_reviews_for_execution"
)
# This should raise a ValueError with "Reviews not found" message based on the data/human_review.py logic
mock_process_all_reviews.side_effect = ValueError(
@@ -422,20 +422,20 @@ def test_process_review_action_review_not_found(
def test_process_review_action_partial_failure(
mocker: pytest_mock.MockFixture,
mocker: pytest_mock.MockerFixture,
sample_pending_review: PendingHumanReviewModel,
test_user_id: str,
) -> None:
"""Test handling of partial failures in review processing"""
# Mock the route functions
mock_get_reviews_for_execution = mocker.patch(
"backend.server.v2.executions.review.routes.get_pending_reviews_for_execution"
"backend.api.features.executions.review.routes.get_pending_reviews_for_execution"
)
mock_get_reviews_for_execution.return_value = [sample_pending_review]
# Mock partial failure in processing
mock_process_all_reviews = mocker.patch(
"backend.server.v2.executions.review.routes.process_all_reviews_for_execution"
"backend.api.features.executions.review.routes.process_all_reviews_for_execution"
)
mock_process_all_reviews.side_effect = ValueError("Some reviews failed validation")
@@ -456,20 +456,20 @@ def test_process_review_action_partial_failure(
def test_process_review_action_invalid_node_exec_id(
mocker: pytest_mock.MockFixture,
mocker: pytest_mock.MockerFixture,
sample_pending_review: PendingHumanReviewModel,
test_user_id: str,
) -> None:
"""Test failure when trying to process review with invalid node execution ID"""
# Mock the route functions
mock_get_reviews_for_execution = mocker.patch(
"backend.server.v2.executions.review.routes.get_pending_reviews_for_execution"
"backend.api.features.executions.review.routes.get_pending_reviews_for_execution"
)
mock_get_reviews_for_execution.return_value = [sample_pending_review]
# Mock validation failure - this should return 400, not 500
mock_process_all_reviews = mocker.patch(
"backend.server.v2.executions.review.routes.process_all_reviews_for_execution"
"backend.api.features.executions.review.routes.process_all_reviews_for_execution"
)
mock_process_all_reviews.side_effect = ValueError(
"Invalid node execution ID format"

View File

@@ -13,11 +13,8 @@ from backend.data.human_review import (
process_all_reviews_for_execution,
)
from backend.executor.utils import add_graph_execution
from backend.server.v2.executions.review.model import (
PendingHumanReviewModel,
ReviewRequest,
ReviewResponse,
)
from .model import PendingHumanReviewModel, ReviewRequest, ReviewResponse
logger = logging.getLogger(__name__)
@@ -70,8 +67,7 @@ async def list_pending_reviews(
response_model=List[PendingHumanReviewModel],
responses={
200: {"description": "List of pending reviews for the execution"},
400: {"description": "Invalid graph execution ID"},
403: {"description": "Access denied to graph execution"},
404: {"description": "Graph execution not found"},
500: {"description": "Server error", "content": {"application/json": {}}},
},
)
@@ -94,7 +90,7 @@ async def list_pending_reviews_for_execution(
Raises:
HTTPException:
- 403: If user doesn't own the graph execution
- 404: If the graph execution doesn't exist or isn't owned by this user
- 500: If authentication fails or database error occurs
Note:
@@ -108,8 +104,8 @@ async def list_pending_reviews_for_execution(
)
if not graph_exec:
raise HTTPException(
status_code=status.HTTP_403_FORBIDDEN,
detail="Access denied to graph execution",
status_code=status.HTTP_404_NOT_FOUND,
detail=f"Graph execution #{graph_exec_id} not found",
)
return await get_pending_reviews_for_execution(graph_exec_id, user_id)
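
The 403-to-404 change means a caller can no longer distinguish "exists but not yours" from "does not exist", which avoids leaking valid execution IDs. A minimal sketch of the new guard:

```python
# Sketch of the not-found guard above: get_graph_execution_meta returning None
# now maps to 404 whether the execution is missing or owned by another user.
from fastapi import HTTPException, status


def ensure_execution_visible(graph_exec, graph_exec_id: str) -> None:
    if not graph_exec:
        raise HTTPException(
            status_code=status.HTTP_404_NOT_FOUND,
            detail=f"Graph execution #{graph_exec_id} not found",
        )
```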

View File

@@ -17,6 +17,8 @@ from fastapi import (
from pydantic import BaseModel, Field, SecretStr
from starlette.status import HTTP_500_INTERNAL_SERVER_ERROR, HTTP_502_BAD_GATEWAY
from backend.api.features.library.db import set_preset_webhook, update_preset
from backend.api.features.library.model import LibraryAgentPreset
from backend.data.graph import NodeModel, get_graph, set_node_webhook
from backend.data.integrations import (
WebhookEvent,
@@ -45,13 +47,6 @@ from backend.integrations.creds_manager import IntegrationCredentialsManager
from backend.integrations.oauth import CREDENTIALS_BY_PROVIDER, HANDLERS_BY_NAME
from backend.integrations.providers import ProviderName
from backend.integrations.webhooks import get_webhook_manager
from backend.server.integrations.models import (
ProviderConstants,
ProviderNamesResponse,
get_all_provider_names,
)
from backend.server.v2.library.db import set_preset_webhook, update_preset
from backend.server.v2.library.model import LibraryAgentPreset
from backend.util.exceptions import (
GraphNotInLibraryError,
MissingConfigError,
@@ -60,6 +55,8 @@ from backend.util.exceptions import (
)
from backend.util.settings import Settings
from .models import ProviderConstants, ProviderNamesResponse, get_all_provider_names
if TYPE_CHECKING:
from backend.integrations.oauth import BaseOAuthHandler

View File

@@ -4,16 +4,14 @@ from typing import Literal, Optional
import fastapi
import prisma.errors
import prisma.fields
import prisma.models
import prisma.types
import backend.api.features.store.exceptions as store_exceptions
import backend.api.features.store.image_gen as store_image_gen
import backend.api.features.store.media as store_media
import backend.data.graph as graph_db
import backend.data.integrations as integrations_db
import backend.server.v2.library.model as library_model
import backend.server.v2.store.exceptions as store_exceptions
import backend.server.v2.store.image_gen as store_image_gen
import backend.server.v2.store.media as store_media
from backend.data.block import BlockInput
from backend.data.db import transaction
from backend.data.execution import get_graph_execution
@@ -28,6 +26,8 @@ from backend.util.json import SafeJson
from backend.util.models import Pagination
from backend.util.settings import Config
from . import model as library_model
logger = logging.getLogger(__name__)
config = Config()
integration_creds_manager = IntegrationCredentialsManager()
@@ -538,6 +538,7 @@ async def update_library_agent(
library_agent_id: str,
user_id: str,
auto_update_version: Optional[bool] = None,
graph_version: Optional[int] = None,
is_favorite: Optional[bool] = None,
is_archived: Optional[bool] = None,
is_deleted: Optional[Literal[False]] = None,
@@ -550,6 +551,7 @@ async def update_library_agent(
library_agent_id: The ID of the LibraryAgent to update.
user_id: The owner of this LibraryAgent.
auto_update_version: Whether the agent should auto-update to active version.
graph_version: Specific graph version to update to.
is_favorite: Whether this agent is marked as a favorite.
is_archived: Whether this agent is archived.
settings: User-specific settings for this library agent.
@@ -563,8 +565,8 @@ async def update_library_agent(
"""
logger.debug(
f"Updating library agent {library_agent_id} for user {user_id} with "
f"auto_update_version={auto_update_version}, is_favorite={is_favorite}, "
f"is_archived={is_archived}, settings={settings}"
f"auto_update_version={auto_update_version}, graph_version={graph_version}, "
f"is_favorite={is_favorite}, is_archived={is_archived}, settings={settings}"
)
update_fields: prisma.types.LibraryAgentUpdateManyMutationInput = {}
if auto_update_version is not None:
@@ -581,10 +583,23 @@ async def update_library_agent(
update_fields["isDeleted"] = is_deleted
if settings is not None:
update_fields["settings"] = SafeJson(settings.model_dump())
if not update_fields:
raise ValueError("No values were passed to update")
try:
# If graph_version is provided, update to that specific version
if graph_version is not None:
# Get the current agent to find its graph_id
agent = await get_library_agent(id=library_agent_id, user_id=user_id)
# Update to the specified version using existing function
return await update_agent_version_in_library(
user_id=user_id,
agent_graph_id=agent.graph_id,
agent_graph_version=graph_version,
)
# Otherwise, just update the simple fields
if not update_fields:
raise ValueError("No values were passed to update")
n_updated = await prisma.models.LibraryAgent.prisma().update_many(
where={"id": library_agent_id, "userId": user_id},
data=update_fields,
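
A hypothetical call sketch for the new `graph_version` parameter (IDs are made up): when the parameter is set, the update is delegated to `update_agent_version_in_library` rather than performing a plain `update_many` on the LibraryAgent row.

```python
# Hypothetical usage of the new graph_version path in update_library_agent.
from backend.api.features.library import db as library_db


async def pin_agent_version() -> None:
    updated = await library_db.update_library_agent(
        library_agent_id="lib-agent-123",  # hypothetical IDs
        user_id="user-456",
        graph_version=3,  # delegate to update_agent_version_in_library
    )
    print(updated)
```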

View File

@@ -1,16 +1,15 @@
from datetime import datetime
import prisma.enums
import prisma.errors
import prisma.models
import prisma.types
import pytest
import backend.server.v2.library.db as db
import backend.server.v2.store.exceptions
import backend.api.features.store.exceptions
from backend.data.db import connect
from backend.data.includes import library_agent_include
from . import db
@pytest.mark.asyncio
async def test_get_library_agents(mocker):
@@ -88,7 +87,7 @@ async def test_add_agent_to_library(mocker):
await connect()
# Mock the transaction context
mock_transaction = mocker.patch("backend.server.v2.library.db.transaction")
mock_transaction = mocker.patch("backend.api.features.library.db.transaction")
mock_transaction.return_value.__aenter__ = mocker.AsyncMock(return_value=None)
mock_transaction.return_value.__aexit__ = mocker.AsyncMock(return_value=None)
# Mock data
@@ -151,7 +150,7 @@ async def test_add_agent_to_library(mocker):
)
# Mock graph_db.get_graph function that's called to check for HITL blocks
mock_graph_db = mocker.patch("backend.server.v2.library.db.graph_db")
mock_graph_db = mocker.patch("backend.api.features.library.db.graph_db")
mock_graph_model = mocker.Mock()
mock_graph_model.nodes = (
[]
@@ -159,7 +158,9 @@ async def test_add_agent_to_library(mocker):
mock_graph_db.get_graph = mocker.AsyncMock(return_value=mock_graph_model)
# Mock the model conversion
mock_from_db = mocker.patch("backend.server.v2.library.model.LibraryAgent.from_db")
mock_from_db = mocker.patch(
"backend.api.features.library.model.LibraryAgent.from_db"
)
mock_from_db.return_value = mocker.Mock()
# Call function
@@ -217,7 +218,7 @@ async def test_add_agent_to_library_not_found(mocker):
)
# Call function and verify exception
with pytest.raises(backend.server.v2.store.exceptions.AgentNotFoundError):
with pytest.raises(backend.api.features.store.exceptions.AgentNotFoundError):
await db.add_store_agent_to_library("version123", "test-user")
# Verify mock called correctly

View File

@@ -385,6 +385,9 @@ class LibraryAgentUpdateRequest(pydantic.BaseModel):
auto_update_version: Optional[bool] = pydantic.Field(
default=None, description="Auto-update the agent version"
)
graph_version: Optional[int] = pydantic.Field(
default=None, description="Specific graph version to update to"
)
is_favorite: Optional[bool] = pydantic.Field(
default=None, description="Mark the agent as a favorite"
)
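
Because every field defaults to `None`, a client can send only the one field it wants changed. A self-contained sketch of that partial-update semantics, using an abridged mirror of the model above:

```python
# Abridged, self-contained mirror of LibraryAgentUpdateRequest to show the
# partial-update semantics of the new graph_version field.
from typing import Optional

import pydantic


class LibraryAgentUpdateRequestSketch(pydantic.BaseModel):
    auto_update_version: Optional[bool] = None
    graph_version: Optional[int] = None
    is_favorite: Optional[bool] = None


req = LibraryAgentUpdateRequestSketch(graph_version=3)
print(req.model_dump(exclude_none=True))  # {'graph_version': 3}
```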

View File

@@ -3,7 +3,7 @@ import datetime
import prisma.models
import pytest
import backend.server.v2.library.model as library_model
from . import model as library_model
@pytest.mark.asyncio

View File

@@ -6,12 +6,13 @@ from fastapi import APIRouter, Body, HTTPException, Query, Security, status
from fastapi.responses import Response
from prisma.enums import OnboardingStep
import backend.server.v2.library.db as library_db
import backend.server.v2.library.model as library_model
import backend.server.v2.store.exceptions as store_exceptions
import backend.api.features.store.exceptions as store_exceptions
from backend.data.onboarding import complete_onboarding_step
from backend.util.exceptions import DatabaseError, NotFoundError
from .. import db as library_db
from .. import model as library_model
logger = logging.getLogger(__name__)
router = APIRouter(
@@ -284,6 +285,7 @@ async def update_library_agent(
library_agent_id=library_agent_id,
user_id=user_id,
auto_update_version=payload.auto_update_version,
graph_version=payload.graph_version,
is_favorite=payload.is_favorite,
is_archived=payload.is_archived,
settings=payload.settings,
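
From the client side, the route now forwards `graph_version` straight into the db layer. A hypothetical request sketch (the HTTP method, port, and route path are assumptions, not confirmed by this diff):

```python
# Hypothetical client call: pin a library agent to graph version 3.
import httpx

resp = httpx.put(
    "http://localhost:8006/api/library/agents/lib-agent-123",  # path assumed
    json={"graph_version": 3},
    headers={"Authorization": "Bearer <access token>"},
)
resp.raise_for_status()
```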

View File

@@ -4,8 +4,6 @@ from typing import Any, Optional
import autogpt_libs.auth as autogpt_auth_lib
from fastapi import APIRouter, Body, HTTPException, Query, Security, status
import backend.server.v2.library.db as db
import backend.server.v2.library.model as models
from backend.data.execution import GraphExecutionMeta
from backend.data.graph import get_graph
from backend.data.integrations import get_webhook
@@ -17,6 +15,9 @@ from backend.integrations.webhooks import get_webhook_manager
from backend.integrations.webhooks.utils import setup_webhook_for_block
from backend.util.exceptions import NotFoundError
from .. import db
from .. import model as models
logger = logging.getLogger(__name__)
credentials_manager = IntegrationCredentialsManager()

View File

@@ -7,10 +7,11 @@ import pytest
import pytest_mock
from pytest_snapshot.plugin import Snapshot
import backend.server.v2.library.model as library_model
from backend.server.v2.library.routes import router as library_router
from backend.util.models import Pagination
from . import model as library_model
from .routes import router as library_router
app = fastapi.FastAPI()
app.include_router(library_router)
@@ -86,7 +87,7 @@ async def test_get_library_agents_success(
total_items=2, total_pages=1, current_page=1, page_size=50
),
)
mock_db_call = mocker.patch("backend.server.v2.library.db.list_library_agents")
mock_db_call = mocker.patch("backend.api.features.library.db.list_library_agents")
mock_db_call.return_value = mocked_value
response = client.get("/agents?search_term=test")
@@ -112,7 +113,7 @@ async def test_get_library_agents_success(
def test_get_library_agents_error(mocker: pytest_mock.MockFixture, test_user_id: str):
mock_db_call = mocker.patch("backend.server.v2.library.db.list_library_agents")
mock_db_call = mocker.patch("backend.api.features.library.db.list_library_agents")
mock_db_call.side_effect = Exception("Test error")
response = client.get("/agents?search_term=test")
@@ -161,7 +162,7 @@ async def test_get_favorite_library_agents_success(
),
)
mock_db_call = mocker.patch(
"backend.server.v2.library.db.list_favorite_library_agents"
"backend.api.features.library.db.list_favorite_library_agents"
)
mock_db_call.return_value = mocked_value
@@ -184,7 +185,7 @@ def test_get_favorite_library_agents_error(
mocker: pytest_mock.MockFixture, test_user_id: str
):
mock_db_call = mocker.patch(
"backend.server.v2.library.db.list_favorite_library_agents"
"backend.api.features.library.db.list_favorite_library_agents"
)
mock_db_call.side_effect = Exception("Test error")
@@ -223,11 +224,11 @@ def test_add_agent_to_library_success(
)
mock_db_call = mocker.patch(
"backend.server.v2.library.db.add_store_agent_to_library"
"backend.api.features.library.db.add_store_agent_to_library"
)
mock_db_call.return_value = mock_library_agent
mock_complete_onboarding = mocker.patch(
"backend.server.v2.library.routes.agents.complete_onboarding_step",
"backend.api.features.library.routes.agents.complete_onboarding_step",
new_callable=AsyncMock,
)
@@ -249,7 +250,7 @@ def test_add_agent_to_library_success(
def test_add_agent_to_library_error(mocker: pytest_mock.MockFixture, test_user_id: str):
mock_db_call = mocker.patch(
"backend.server.v2.library.db.add_store_agent_to_library"
"backend.api.features.library.db.add_store_agent_to_library"
)
mock_db_call.side_effect = Exception("Test error")

View File

@@ -5,11 +5,11 @@ Implements OAuth 2.0 Authorization Code flow with PKCE support.
Flow:
1. User clicks "Login with AutoGPT" in 3rd party app
2. App redirects user to /oauth/authorize with client_id, redirect_uri, scope, state
2. App redirects user to /auth/authorize with client_id, redirect_uri, scope, state
3. User sees consent screen (if not already logged in, redirects to login first)
4. User approves → backend creates authorization code
5. User redirected back to app with code
6. App exchanges code for access/refresh tokens at /oauth/token
6. App exchanges code for access/refresh tokens at /api/oauth/token
7. App uses access token to call external API endpoints
"""

View File

@@ -28,7 +28,7 @@ from prisma.models import OAuthAuthorizationCode as PrismaOAuthAuthorizationCode
from prisma.models import OAuthRefreshToken as PrismaOAuthRefreshToken
from prisma.models import User as PrismaUser
from backend.server.rest_api import app
from backend.api.rest_api import app
keysmith = APIKeySmith()

View File

@@ -6,9 +6,9 @@ import pytest
import pytest_mock
from pytest_snapshot.plugin import Snapshot
import backend.server.v2.otto.models as otto_models
import backend.server.v2.otto.routes as otto_routes
from backend.server.v2.otto.service import OttoService
from . import models as otto_models
from . import routes as otto_routes
from .service import OttoService
app = fastapi.FastAPI()
app.include_router(otto_routes.router)

View File

@@ -4,12 +4,15 @@ from typing import Annotated
from fastapi import APIRouter, Body, HTTPException, Query, Security
from fastapi.responses import JSONResponse
from backend.api.utils.api_key_auth import APIKeyAuthenticator
from backend.data.user import (
get_user_by_email,
set_user_email_verification,
unsubscribe_user_by_token,
)
from backend.server.routers.postmark.models import (
from backend.util.settings import Settings
from .models import (
PostmarkBounceEnum,
PostmarkBounceWebhook,
PostmarkClickWebhook,
@@ -19,8 +22,6 @@ from backend.server.routers.postmark.models import (
PostmarkSubscriptionChangeWebhook,
PostmarkWebhook,
)
from backend.server.utils.api_key_auth import APIKeyAuthenticator
from backend.util.settings import Settings
logger = logging.getLogger(__name__)
settings = Settings()

View File

@@ -1,8 +1,9 @@
from typing import Literal
import backend.server.v2.store.db
from backend.util.cache import cached
from . import db as store_db
##############################################
############### Caches #######################
##############################################
@@ -29,7 +30,7 @@ async def _get_cached_store_agents(
page_size: int,
):
"""Cached helper to get store agents."""
return await backend.server.v2.store.db.get_store_agents(
return await store_db.get_store_agents(
featured=featured,
creators=[creator] if creator else None,
sorted_by=sorted_by,
@@ -42,10 +43,12 @@ async def _get_cached_store_agents(
# Cache individual agent details for 15 minutes
@cached(maxsize=200, ttl_seconds=300, shared_cache=True)
async def _get_cached_agent_details(username: str, agent_name: str):
async def _get_cached_agent_details(
username: str, agent_name: str, include_changelog: bool = False
):
"""Cached helper to get agent details."""
return await backend.server.v2.store.db.get_store_agent_details(
username=username, agent_name=agent_name
return await store_db.get_store_agent_details(
username=username, agent_name=agent_name, include_changelog=include_changelog
)
@@ -59,7 +62,7 @@ async def _get_cached_store_creators(
page_size: int,
):
"""Cached helper to get store creators."""
return await backend.server.v2.store.db.get_store_creators(
return await store_db.get_store_creators(
featured=featured,
search_query=search_query,
sorted_by=sorted_by,
@@ -72,6 +75,4 @@ async def _get_cached_store_creators(
@cached(maxsize=100, ttl_seconds=300, shared_cache=True)
async def _get_cached_creator_details(username: str):
"""Cached helper to get creator details."""
return await backend.server.v2.store.db.get_store_creator_details(
username=username.lower()
)
return await store_db.get_store_creator_details(username=username.lower())
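
Note that adding `include_changelog` to the cached helper's signature also changes its cache key, so changelog and non-changelog lookups are cached separately (assuming the decorator keys on call arguments). An illustrative, self-contained sketch of such a call-keyed async TTL cache, not the project's actual implementation:

```python
# Illustrative async TTL cache keyed on call arguments; a stand-in for the
# semantics assumed of backend.util.cache.cached, not its real implementation.
import functools
import time


def cached(maxsize: int = 128, ttl_seconds: int = 300, shared_cache: bool = False):
    # shared_cache is ignored in this sketch (a real impl might use Redis).
    def decorator(fn):
        store: dict = {}

        @functools.wraps(fn)
        async def wrapper(*args, **kwargs):
            key = (args, tuple(sorted(kwargs.items())))
            hit = store.get(key)
            if hit is not None and time.monotonic() - hit[0] < ttl_seconds:
                return hit[1]  # fresh cache hit
            value = await fn(*args, **kwargs)
            if len(store) >= maxsize:
                store.pop(next(iter(store)))  # evict oldest insertion
            store[key] = (time.monotonic(), value)
            return value

        return wrapper

    return decorator
```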

View File

@@ -10,8 +10,6 @@ import prisma.errors
import prisma.models
import prisma.types
import backend.server.v2.store.exceptions
import backend.server.v2.store.model
from backend.data.db import query_raw_with_schema, transaction
from backend.data.graph import (
GraphMeta,
@@ -30,6 +28,9 @@ from backend.notifications.notifications import queue_notification_async
from backend.util.exceptions import DatabaseError
from backend.util.settings import Settings
from . import exceptions as store_exceptions
from . import model as store_model
logger = logging.getLogger(__name__)
settings = Settings()
@@ -47,7 +48,7 @@ async def get_store_agents(
category: str | None = None,
page: int = 1,
page_size: int = 20,
) -> backend.server.v2.store.model.StoreAgentsResponse:
) -> store_model.StoreAgentsResponse:
"""
Get PUBLIC store agents from the StoreAgent view
"""
@@ -148,10 +149,10 @@ async def get_store_agents(
total_pages = (total + page_size - 1) // page_size
# Convert raw results to StoreAgent models
store_agents: list[backend.server.v2.store.model.StoreAgent] = []
store_agents: list[store_model.StoreAgent] = []
for agent in agents:
try:
store_agent = backend.server.v2.store.model.StoreAgent(
store_agent = store_model.StoreAgent(
slug=agent["slug"],
agent_name=agent["agent_name"],
agent_image=(
@@ -197,11 +198,11 @@ async def get_store_agents(
total = await prisma.models.StoreAgent.prisma().count(where=where_clause)
total_pages = (total + page_size - 1) // page_size
store_agents: list[backend.server.v2.store.model.StoreAgent] = []
store_agents: list[store_model.StoreAgent] = []
for agent in agents:
try:
# Create the StoreAgent object safely
store_agent = backend.server.v2.store.model.StoreAgent(
store_agent = store_model.StoreAgent(
slug=agent.slug,
agent_name=agent.agent_name,
agent_image=agent.agent_image[0] if agent.agent_image else "",
@@ -223,9 +224,9 @@ async def get_store_agents(
continue
logger.debug(f"Found {len(store_agents)} agents")
return backend.server.v2.store.model.StoreAgentsResponse(
return store_model.StoreAgentsResponse(
agents=store_agents,
pagination=backend.server.v2.store.model.Pagination(
pagination=store_model.Pagination(
current_page=page,
total_items=total,
total_pages=total_pages,
@@ -256,8 +257,8 @@ async def log_search_term(search_query: str):
async def get_store_agent_details(
username: str, agent_name: str
) -> backend.server.v2.store.model.StoreAgentDetails:
username: str, agent_name: str, include_changelog: bool = False
) -> store_model.StoreAgentDetails:
"""Get PUBLIC store agent details from the StoreAgent view"""
logger.debug(f"Getting store agent details for {username}/{agent_name}")
@@ -268,7 +269,7 @@ async def get_store_agent_details(
if not agent:
logger.warning(f"Agent not found: {username}/{agent_name}")
raise backend.server.v2.store.exceptions.AgentNotFoundError(
raise store_exceptions.AgentNotFoundError(
f"Agent {username}/{agent_name} not found"
)
@@ -321,8 +322,29 @@ async def get_store_agent_details(
else:
recommended_schedule_cron = None
# Fetch changelog data if requested
changelog_data = None
if include_changelog and store_listing:
changelog_versions = (
await prisma.models.StoreListingVersion.prisma().find_many(
where={
"storeListingId": store_listing.id,
"submissionStatus": prisma.enums.SubmissionStatus.APPROVED,
},
order=[{"version": "desc"}],
)
)
changelog_data = [
store_model.ChangelogEntry(
version=str(version.version),
changes_summary=version.changesSummary or "No changes recorded",
date=version.createdAt,
)
for version in changelog_versions
]
logger.debug(f"Found agent details for {username}/{agent_name}")
return backend.server.v2.store.model.StoreAgentDetails(
return store_model.StoreAgentDetails(
store_listing_version_id=agent.storeListingVersionId,
slug=agent.slug,
agent_name=agent.agent_name,
@@ -337,12 +359,15 @@ async def get_store_agent_details(
runs=agent.runs,
rating=agent.rating,
versions=agent.versions,
agentGraphVersions=agent.agentGraphVersions,
agentGraphId=agent.agentGraphId,
last_updated=agent.updated_at,
active_version_id=active_version_id,
has_approved_version=has_approved_version,
recommended_schedule_cron=recommended_schedule_cron,
changelog=changelog_data,
)
except backend.server.v2.store.exceptions.AgentNotFoundError:
except store_exceptions.AgentNotFoundError:
raise
except Exception as e:
logger.error(f"Error getting store agent details: {e}")
@@ -378,7 +403,7 @@ async def get_available_graph(store_listing_version_id: str) -> GraphMeta:
async def get_store_agent_by_version_id(
store_listing_version_id: str,
) -> backend.server.v2.store.model.StoreAgentDetails:
) -> store_model.StoreAgentDetails:
logger.debug(f"Getting store agent details for {store_listing_version_id}")
try:
@@ -388,12 +413,12 @@ async def get_store_agent_by_version_id(
if not agent:
logger.warning(f"Agent not found: {store_listing_version_id}")
raise backend.server.v2.store.exceptions.AgentNotFoundError(
raise store_exceptions.AgentNotFoundError(
f"Agent {store_listing_version_id} not found"
)
logger.debug(f"Found agent details for {store_listing_version_id}")
return backend.server.v2.store.model.StoreAgentDetails(
return store_model.StoreAgentDetails(
store_listing_version_id=agent.storeListingVersionId,
slug=agent.slug,
agent_name=agent.agent_name,
@@ -408,9 +433,11 @@ async def get_store_agent_by_version_id(
runs=agent.runs,
rating=agent.rating,
versions=agent.versions,
agentGraphVersions=agent.agentGraphVersions,
agentGraphId=agent.agentGraphId,
last_updated=agent.updated_at,
)
except backend.server.v2.store.exceptions.AgentNotFoundError:
except store_exceptions.AgentNotFoundError:
raise
except Exception as e:
logger.error(f"Error getting store agent details: {e}")
@@ -423,7 +450,7 @@ async def get_store_creators(
sorted_by: Literal["agent_rating", "agent_runs", "num_agents"] | None = None,
page: int = 1,
page_size: int = 20,
) -> backend.server.v2.store.model.CreatorsResponse:
) -> store_model.CreatorsResponse:
"""Get PUBLIC store creators from the Creator view"""
logger.debug(
f"Getting store creators. featured={featured}, search={search_query}, sorted_by={sorted_by}, page={page}"
@@ -498,7 +525,7 @@ async def get_store_creators(
# Convert to response model
creator_models = [
backend.server.v2.store.model.Creator(
store_model.Creator(
username=creator.username,
name=creator.name,
description=creator.description,
@@ -512,9 +539,9 @@ async def get_store_creators(
]
logger.debug(f"Found {len(creator_models)} creators")
return backend.server.v2.store.model.CreatorsResponse(
return store_model.CreatorsResponse(
creators=creator_models,
pagination=backend.server.v2.store.model.Pagination(
pagination=store_model.Pagination(
current_page=page,
total_items=total,
total_pages=total_pages,
@@ -528,7 +555,7 @@ async def get_store_creators(
async def get_store_creator_details(
username: str,
) -> backend.server.v2.store.model.CreatorDetails:
) -> store_model.CreatorDetails:
logger.debug(f"Getting store creator details for {username}")
try:
@@ -539,12 +566,10 @@ async def get_store_creator_details(
if not creator:
logger.warning(f"Creator not found: {username}")
raise backend.server.v2.store.exceptions.CreatorNotFoundError(
f"Creator {username} not found"
)
raise store_exceptions.CreatorNotFoundError(f"Creator {username} not found")
logger.debug(f"Found creator details for {username}")
return backend.server.v2.store.model.CreatorDetails(
return store_model.CreatorDetails(
name=creator.name,
username=creator.username,
description=creator.description,
@@ -554,7 +579,7 @@ async def get_store_creator_details(
agent_runs=creator.agent_runs,
top_categories=creator.top_categories,
)
except backend.server.v2.store.exceptions.CreatorNotFoundError:
except store_exceptions.CreatorNotFoundError:
raise
except Exception as e:
logger.error(f"Error getting store creator details: {e}")
@@ -563,7 +588,7 @@ async def get_store_creator_details(
async def get_store_submissions(
user_id: str, page: int = 1, page_size: int = 20
) -> backend.server.v2.store.model.StoreSubmissionsResponse:
) -> store_model.StoreSubmissionsResponse:
"""Get store submissions for the authenticated user -- not an admin"""
logger.debug(f"Getting store submissions for user {user_id}, page={page}")
@@ -588,7 +613,7 @@ async def get_store_submissions(
# Convert to response models
submission_models = []
for sub in submissions:
submission_model = backend.server.v2.store.model.StoreSubmission(
submission_model = store_model.StoreSubmission(
agent_id=sub.agent_id,
agent_version=sub.agent_version,
name=sub.name,
@@ -613,9 +638,9 @@ async def get_store_submissions(
submission_models.append(submission_model)
logger.debug(f"Found {len(submission_models)} submissions")
return backend.server.v2.store.model.StoreSubmissionsResponse(
return store_model.StoreSubmissionsResponse(
submissions=submission_models,
pagination=backend.server.v2.store.model.Pagination(
pagination=store_model.Pagination(
current_page=page,
total_items=total,
total_pages=total_pages,
@@ -626,9 +651,9 @@ async def get_store_submissions(
except Exception as e:
logger.error(f"Error fetching store submissions: {e}")
# Return empty response rather than exposing internal errors
return backend.server.v2.store.model.StoreSubmissionsResponse(
return store_model.StoreSubmissionsResponse(
submissions=[],
pagination=backend.server.v2.store.model.Pagination(
pagination=store_model.Pagination(
current_page=page,
total_items=0,
total_pages=0,
@@ -661,7 +686,7 @@ async def delete_store_submission(
if not submission:
logger.warning(f"Submission not found for user {user_id}: {submission_id}")
raise backend.server.v2.store.exceptions.SubmissionNotFoundError(
raise store_exceptions.SubmissionNotFoundError(
f"Submission not found for this user. User ID: {user_id}, Submission ID: {submission_id}"
)
@@ -693,7 +718,7 @@ async def create_store_submission(
categories: list[str] = [],
changes_summary: str | None = "Initial Submission",
recommended_schedule_cron: str | None = None,
) -> backend.server.v2.store.model.StoreSubmission:
) -> store_model.StoreSubmission:
"""
Create the first (and only) store listing and thus submission as a normal user
@@ -734,7 +759,7 @@ async def create_store_submission(
logger.warning(
f"Agent not found for user {user_id}: {agent_id} v{agent_version}"
)
raise backend.server.v2.store.exceptions.AgentNotFoundError(
raise store_exceptions.AgentNotFoundError(
f"Agent not found for this user. User ID: {user_id}, Agent ID: {agent_id}, Version: {agent_version}"
)
@@ -807,7 +832,7 @@ async def create_store_submission(
logger.debug(f"Created store listing for agent {agent_id}")
# Return submission details
return backend.server.v2.store.model.StoreSubmission(
return store_model.StoreSubmission(
agent_id=agent_id,
agent_version=agent_version,
name=name,
@@ -830,7 +855,7 @@ async def create_store_submission(
logger.debug(
f"Slug '{slug}' is already in use by another agent (agent_id: {agent_id}) for user {user_id}"
)
raise backend.server.v2.store.exceptions.SlugAlreadyInUseError(
raise store_exceptions.SlugAlreadyInUseError(
f"The URL slug '{slug}' is already in use by another one of your agents. Please choose a different slug."
) from exc
else:
@@ -839,8 +864,8 @@ async def create_store_submission(
f"Unique constraint violated (not slug): {error_str}"
) from exc
except (
backend.server.v2.store.exceptions.AgentNotFoundError,
backend.server.v2.store.exceptions.ListingExistsError,
store_exceptions.AgentNotFoundError,
store_exceptions.ListingExistsError,
):
raise
except prisma.errors.PrismaError as e:
@@ -861,7 +886,7 @@ async def edit_store_submission(
changes_summary: str | None = "Update submission",
recommended_schedule_cron: str | None = None,
instructions: str | None = None,
) -> backend.server.v2.store.model.StoreSubmission:
) -> store_model.StoreSubmission:
"""
Edit an existing store listing submission.
@@ -903,7 +928,7 @@ async def edit_store_submission(
)
if not current_version:
raise backend.server.v2.store.exceptions.SubmissionNotFoundError(
raise store_exceptions.SubmissionNotFoundError(
f"Store listing version not found: {store_listing_version_id}"
)
@@ -912,7 +937,7 @@ async def edit_store_submission(
not current_version.StoreListing
or current_version.StoreListing.owningUserId != user_id
):
raise backend.server.v2.store.exceptions.UnauthorizedError(
raise store_exceptions.UnauthorizedError(
f"User {user_id} does not own submission {store_listing_version_id}"
)
@@ -921,7 +946,7 @@ async def edit_store_submission(
# Check if we can edit this submission
if current_version.submissionStatus == prisma.enums.SubmissionStatus.REJECTED:
raise backend.server.v2.store.exceptions.InvalidOperationError(
raise store_exceptions.InvalidOperationError(
"Cannot edit a rejected submission"
)
@@ -970,7 +995,7 @@ async def edit_store_submission(
if not updated_version:
raise DatabaseError("Failed to update store listing version")
return backend.server.v2.store.model.StoreSubmission(
return store_model.StoreSubmission(
agent_id=current_version.agentGraphId,
agent_version=current_version.agentGraphVersion,
name=name,
@@ -991,16 +1016,16 @@ async def edit_store_submission(
)
else:
raise backend.server.v2.store.exceptions.InvalidOperationError(
raise store_exceptions.InvalidOperationError(
f"Cannot edit submission with status: {current_version.submissionStatus}"
)
except (
backend.server.v2.store.exceptions.SubmissionNotFoundError,
backend.server.v2.store.exceptions.UnauthorizedError,
backend.server.v2.store.exceptions.AgentNotFoundError,
backend.server.v2.store.exceptions.ListingExistsError,
backend.server.v2.store.exceptions.InvalidOperationError,
store_exceptions.SubmissionNotFoundError,
store_exceptions.UnauthorizedError,
store_exceptions.AgentNotFoundError,
store_exceptions.ListingExistsError,
store_exceptions.InvalidOperationError,
):
raise
except prisma.errors.PrismaError as e:
@@ -1023,7 +1048,7 @@ async def create_store_version(
categories: list[str] = [],
changes_summary: str | None = "Initial submission",
recommended_schedule_cron: str | None = None,
) -> backend.server.v2.store.model.StoreSubmission:
) -> store_model.StoreSubmission:
"""
Create a new version for an existing store listing
@@ -1056,7 +1081,7 @@ async def create_store_version(
)
if not listing:
raise backend.server.v2.store.exceptions.ListingNotFoundError(
raise store_exceptions.ListingNotFoundError(
f"Store listing not found. User ID: {user_id}, Listing ID: {store_listing_id}"
)
@@ -1068,7 +1093,7 @@ async def create_store_version(
)
if not agent:
raise backend.server.v2.store.exceptions.AgentNotFoundError(
raise store_exceptions.AgentNotFoundError(
f"Agent not found for this user. User ID: {user_id}, Agent ID: {agent_id}, Version: {agent_version}"
)
@@ -1103,7 +1128,7 @@ async def create_store_version(
f"Created new version for listing {store_listing_id} of agent {agent_id}"
)
# Return submission details
return backend.server.v2.store.model.StoreSubmission(
return store_model.StoreSubmission(
agent_id=agent_id,
agent_version=agent_version,
name=name,
@@ -1130,7 +1155,7 @@ async def create_store_review(
store_listing_version_id: str,
score: int,
comments: str | None = None,
) -> backend.server.v2.store.model.StoreReview:
) -> store_model.StoreReview:
"""Create a review for a store listing as a user to detail their experience"""
try:
data = prisma.types.StoreListingReviewUpsertInput(
@@ -1155,7 +1180,7 @@ async def create_store_review(
data=data,
)
return backend.server.v2.store.model.StoreReview(
return store_model.StoreReview(
score=review.score,
comments=review.comments,
)
@@ -1167,7 +1192,7 @@ async def create_store_review(
async def get_user_profile(
user_id: str,
) -> backend.server.v2.store.model.ProfileDetails | None:
) -> store_model.ProfileDetails | None:
logger.debug(f"Getting user profile for {user_id}")
try:
@@ -1177,7 +1202,7 @@ async def get_user_profile(
if not profile:
return None
return backend.server.v2.store.model.ProfileDetails(
return store_model.ProfileDetails(
name=profile.name,
username=profile.username,
description=profile.description,
@@ -1190,8 +1215,8 @@ async def get_user_profile(
async def update_profile(
user_id: str, profile: backend.server.v2.store.model.Profile
) -> backend.server.v2.store.model.CreatorDetails:
user_id: str, profile: store_model.Profile
) -> store_model.CreatorDetails:
"""
Update the store profile for a user or create a new one if it doesn't exist.
Args:
@@ -1214,7 +1239,7 @@ async def update_profile(
where={"userId": user_id}
)
if not existing_profile:
raise backend.server.v2.store.exceptions.ProfileNotFoundError(
raise store_exceptions.ProfileNotFoundError(
f"Profile not found for user {user_id}. This should not be possible."
)
@@ -1250,7 +1275,7 @@ async def update_profile(
logger.error(f"Failed to update profile for user {user_id}")
raise DatabaseError("Failed to update profile")
return backend.server.v2.store.model.CreatorDetails(
return store_model.CreatorDetails(
name=updated_profile.name,
username=updated_profile.username,
description=updated_profile.description,
@@ -1270,7 +1295,7 @@ async def get_my_agents(
user_id: str,
page: int = 1,
page_size: int = 20,
) -> backend.server.v2.store.model.MyAgentsResponse:
) -> store_model.MyAgentsResponse:
"""Get the agents for the authenticated user"""
logger.debug(f"Getting my agents for user {user_id}, page={page}")
@@ -1307,7 +1332,7 @@ async def get_my_agents(
total_pages = (total + page_size - 1) // page_size
my_agents = [
backend.server.v2.store.model.MyAgent(
store_model.MyAgent(
agent_id=graph.id,
agent_version=graph.version,
agent_name=graph.name or "",
@@ -1320,9 +1345,9 @@ async def get_my_agents(
if (graph := library_agent.AgentGraph)
]
return backend.server.v2.store.model.MyAgentsResponse(
return store_model.MyAgentsResponse(
agents=my_agents,
pagination=backend.server.v2.store.model.Pagination(
pagination=store_model.Pagination(
current_page=page,
total_items=total,
total_pages=total_pages,
@@ -1469,7 +1494,7 @@ async def review_store_submission(
external_comments: str,
internal_comments: str,
reviewer_id: str,
) -> backend.server.v2.store.model.StoreSubmission:
) -> store_model.StoreSubmission:
"""Review a store listing submission as an admin."""
try:
store_listing_version = (
@@ -1682,7 +1707,7 @@ async def review_store_submission(
pass
# Convert to Pydantic model for consistency
return backend.server.v2.store.model.StoreSubmission(
return store_model.StoreSubmission(
agent_id=submission.agentGraphId,
agent_version=submission.agentGraphVersion,
name=submission.name,
@@ -1717,7 +1742,7 @@ async def get_admin_listings_with_versions(
search_query: str | None = None,
page: int = 1,
page_size: int = 20,
) -> backend.server.v2.store.model.StoreListingsWithVersionsResponse:
) -> store_model.StoreListingsWithVersionsResponse:
"""
Get store listings for admins with all their versions.
@@ -1816,10 +1841,10 @@ async def get_admin_listings_with_versions(
# Convert to response models
listings_with_versions = []
for listing in listings:
versions: list[backend.server.v2.store.model.StoreSubmission] = []
versions: list[store_model.StoreSubmission] = []
# If we have versions, turn them into StoreSubmission models
for version in listing.Versions or []:
version_model = backend.server.v2.store.model.StoreSubmission(
version_model = store_model.StoreSubmission(
agent_id=version.agentGraphId,
agent_version=version.agentGraphVersion,
name=version.name,
@@ -1847,26 +1872,24 @@ async def get_admin_listings_with_versions(
creator_email = listing.OwningUser.email if listing.OwningUser else None
listing_with_versions = (
backend.server.v2.store.model.StoreListingWithVersions(
listing_id=listing.id,
slug=listing.slug,
agent_id=listing.agentGraphId,
agent_version=listing.agentGraphVersion,
active_version_id=listing.activeVersionId,
has_approved_version=listing.hasApprovedVersion,
creator_email=creator_email,
latest_version=latest_version,
versions=versions,
)
listing_with_versions = store_model.StoreListingWithVersions(
listing_id=listing.id,
slug=listing.slug,
agent_id=listing.agentGraphId,
agent_version=listing.agentGraphVersion,
active_version_id=listing.activeVersionId,
has_approved_version=listing.hasApprovedVersion,
creator_email=creator_email,
latest_version=latest_version,
versions=versions,
)
listings_with_versions.append(listing_with_versions)
logger.debug(f"Found {len(listings_with_versions)} listings for admin")
return backend.server.v2.store.model.StoreListingsWithVersionsResponse(
return store_model.StoreListingsWithVersionsResponse(
listings=listings_with_versions,
pagination=backend.server.v2.store.model.Pagination(
pagination=store_model.Pagination(
current_page=page,
total_items=total,
total_pages=total_pages,
@@ -1876,9 +1899,9 @@ async def get_admin_listings_with_versions(
except Exception as e:
logger.error(f"Error fetching admin store listings: {e}")
# Return empty response rather than exposing internal errors
return backend.server.v2.store.model.StoreListingsWithVersionsResponse(
return store_model.StoreListingsWithVersionsResponse(
listings=[],
pagination=backend.server.v2.store.model.Pagination(
pagination=store_model.Pagination(
current_page=page,
total_items=0,
total_pages=0,
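
A hypothetical call sketch for the new changelog support; `ChangelogEntry`'s fields (`version`, `changes_summary`, `date`) are taken from its construction above, and `changelog` stays `None` unless explicitly requested:

```python
# Hypothetical usage of include_changelog with get_store_agent_details
# (the function defined above); names of the agent are made up.
async def show_changelog() -> None:
    details = await get_store_agent_details(
        username="alice", agent_name="scraper", include_changelog=True
    )
    for entry in details.changelog or []:
        print(entry.version, entry.date, entry.changes_summary)
```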

View File

@@ -6,8 +6,8 @@ import prisma.models
import pytest
from prisma import Prisma
import backend.server.v2.store.db as db
from backend.server.v2.store.model import Profile
from . import db
from .model import Profile
@pytest.fixture(autouse=True)
@@ -40,6 +40,8 @@ async def test_get_store_agents(mocker):
runs=10,
rating=4.5,
versions=["1.0"],
agentGraphVersions=["1"],
agentGraphId="test-graph-id",
updated_at=datetime.now(),
is_available=False,
useForOnboarding=False,
@@ -83,6 +85,8 @@ async def test_get_store_agent_details(mocker):
runs=10,
rating=4.5,
versions=["1.0"],
agentGraphVersions=["1"],
agentGraphId="test-graph-id",
updated_at=datetime.now(),
is_available=False,
useForOnboarding=False,
@@ -105,6 +109,8 @@ async def test_get_store_agent_details(mocker):
runs=15,
rating=4.8,
versions=["1.0", "2.0"],
agentGraphVersions=["1", "2"],
agentGraphId="test-graph-id-active",
updated_at=datetime.now(),
is_available=True,
useForOnboarding=False,

View File

@@ -5,11 +5,12 @@ import uuid
import fastapi
from gcloud.aio import storage as async_storage
import backend.server.v2.store.exceptions
from backend.util.exceptions import MissingConfigError
from backend.util.settings import Settings
from backend.util.virus_scanner import scan_content_safe
from . import exceptions as store_exceptions
logger = logging.getLogger(__name__)
ALLOWED_IMAGE_TYPES = {"image/jpeg", "image/png", "image/gif", "image/webp"}
@@ -68,61 +69,55 @@ async def upload_media(
await file.seek(0) # Reset file pointer
except Exception as e:
logger.error(f"Error reading file content: {str(e)}")
raise backend.server.v2.store.exceptions.FileReadError(
"Failed to read file content"
) from e
raise store_exceptions.FileReadError("Failed to read file content") from e
# Validate file signature/magic bytes
if file.content_type in ALLOWED_IMAGE_TYPES:
# Check image file signatures
if content.startswith(b"\xff\xd8\xff"): # JPEG
if file.content_type != "image/jpeg":
raise backend.server.v2.store.exceptions.InvalidFileTypeError(
raise store_exceptions.InvalidFileTypeError(
"File signature does not match content type"
)
elif content.startswith(b"\x89PNG\r\n\x1a\n"): # PNG
if file.content_type != "image/png":
raise backend.server.v2.store.exceptions.InvalidFileTypeError(
raise store_exceptions.InvalidFileTypeError(
"File signature does not match content type"
)
elif content.startswith(b"GIF87a") or content.startswith(b"GIF89a"): # GIF
if file.content_type != "image/gif":
raise backend.server.v2.store.exceptions.InvalidFileTypeError(
raise store_exceptions.InvalidFileTypeError(
"File signature does not match content type"
)
elif content.startswith(b"RIFF") and content[8:12] == b"WEBP": # WebP
if file.content_type != "image/webp":
raise backend.server.v2.store.exceptions.InvalidFileTypeError(
raise store_exceptions.InvalidFileTypeError(
"File signature does not match content type"
)
else:
raise backend.server.v2.store.exceptions.InvalidFileTypeError(
"Invalid image file signature"
)
raise store_exceptions.InvalidFileTypeError("Invalid image file signature")
elif file.content_type in ALLOWED_VIDEO_TYPES:
# Check video file signatures
if content.startswith(b"\x00\x00\x00") and (content[4:8] == b"ftyp"): # MP4
if file.content_type != "video/mp4":
raise backend.server.v2.store.exceptions.InvalidFileTypeError(
raise store_exceptions.InvalidFileTypeError(
"File signature does not match content type"
)
elif content.startswith(b"\x1a\x45\xdf\xa3"): # WebM
if file.content_type != "video/webm":
raise backend.server.v2.store.exceptions.InvalidFileTypeError(
raise store_exceptions.InvalidFileTypeError(
"File signature does not match content type"
)
else:
raise backend.server.v2.store.exceptions.InvalidFileTypeError(
"Invalid video file signature"
)
raise store_exceptions.InvalidFileTypeError("Invalid video file signature")
settings = Settings()
# Check required settings first before doing any file processing
if not settings.config.media_gcs_bucket_name:
logger.error("Missing GCS bucket name setting")
raise backend.server.v2.store.exceptions.StorageConfigError(
raise store_exceptions.StorageConfigError(
"Missing storage bucket configuration"
)
@@ -137,7 +132,7 @@ async def upload_media(
and content_type not in ALLOWED_VIDEO_TYPES
):
logger.warning(f"Invalid file type attempted: {content_type}")
raise backend.server.v2.store.exceptions.InvalidFileTypeError(
raise store_exceptions.InvalidFileTypeError(
f"File type not supported. Must be jpeg, png, gif, webp, mp4 or webm. Content type: {content_type}"
)
@@ -150,16 +145,14 @@ async def upload_media(
file_size += len(chunk)
if file_size > MAX_FILE_SIZE:
logger.warning(f"File size too large: {file_size} bytes")
raise backend.server.v2.store.exceptions.FileSizeTooLargeError(
raise store_exceptions.FileSizeTooLargeError(
"File too large. Maximum size is 50MB"
)
except backend.server.v2.store.exceptions.FileSizeTooLargeError:
except store_exceptions.FileSizeTooLargeError:
raise
except Exception as e:
logger.error(f"Error reading file chunks: {str(e)}")
raise backend.server.v2.store.exceptions.FileReadError(
"Failed to read uploaded file"
) from e
raise store_exceptions.FileReadError("Failed to read uploaded file") from e
# Reset file pointer
await file.seek(0)
@@ -198,14 +191,14 @@ async def upload_media(
except Exception as e:
logger.error(f"GCS storage error: {str(e)}")
raise backend.server.v2.store.exceptions.StorageUploadError(
raise store_exceptions.StorageUploadError(
"Failed to upload file to storage"
) from e
except backend.server.v2.store.exceptions.MediaUploadError:
except store_exceptions.MediaUploadError:
raise
except Exception as e:
logger.exception("Unexpected error in upload_media")
raise backend.server.v2.store.exceptions.MediaUploadError(
raise store_exceptions.MediaUploadError(
"Unexpected error during media upload"
) from e
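Aside: the repeated checks in `upload_media` above follow the standard magic-bytes pattern. A minimal, self-contained sketch of the same logic (a hypothetical `matches_signature` helper, not part of this diff; the signatures are copied from the hunk above):

```python
# Hypothetical helper mirroring the inline checks in upload_media above.
# Prefix tests come straight from the diff; WebP and MP4 also need bytes
# at a fixed offset, hence lambdas instead of a plain prefix table.
def matches_signature(content: bytes, content_type: str) -> bool:
    checks = {
        "image/jpeg": lambda c: c.startswith(b"\xff\xd8\xff"),
        "image/png": lambda c: c.startswith(b"\x89PNG\r\n\x1a\n"),
        "image/gif": lambda c: c.startswith((b"GIF87a", b"GIF89a")),
        "image/webp": lambda c: c.startswith(b"RIFF") and c[8:12] == b"WEBP",
        "video/mp4": lambda c: c.startswith(b"\x00\x00\x00") and c[4:8] == b"ftyp",
        "video/webm": lambda c: c.startswith(b"\x1a\x45\xdf\xa3"),
    }
    check = checks.get(content_type)
    return check is not None and check(content)

assert matches_signature(b"\x89PNG\r\n\x1a\n....", "image/png")
assert not matches_signature(b"GIF89a....", "image/png")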


@@ -6,17 +6,18 @@ import fastapi
import pytest
import starlette.datastructures
import backend.server.v2.store.exceptions
import backend.server.v2.store.media
from backend.util.settings import Settings
from . import exceptions as store_exceptions
from . import media as store_media
@pytest.fixture
def mock_settings(monkeypatch):
settings = Settings()
settings.config.media_gcs_bucket_name = "test-bucket"
settings.config.google_application_credentials = "test-credentials"
monkeypatch.setattr("backend.server.v2.store.media.Settings", lambda: settings)
monkeypatch.setattr("backend.api.features.store.media.Settings", lambda: settings)
return settings
@@ -32,12 +33,13 @@ def mock_storage_client(mocker):
# Mock the constructor to return our mock client
mocker.patch(
"backend.server.v2.store.media.async_storage.Storage", return_value=mock_client
"backend.api.features.store.media.async_storage.Storage",
return_value=mock_client,
)
# Mock virus scanner to avoid actual scanning
mocker.patch(
"backend.server.v2.store.media.scan_content_safe", new_callable=AsyncMock
"backend.api.features.store.media.scan_content_safe", new_callable=AsyncMock
)
return mock_client
@@ -53,7 +55,7 @@ async def test_upload_media_success(mock_settings, mock_storage_client):
headers=starlette.datastructures.Headers({"content-type": "image/jpeg"}),
)
result = await backend.server.v2.store.media.upload_media("test-user", test_file)
result = await store_media.upload_media("test-user", test_file)
assert result.startswith(
"https://storage.googleapis.com/test-bucket/users/test-user/images/"
@@ -69,8 +71,8 @@ async def test_upload_media_invalid_type(mock_settings, mock_storage_client):
headers=starlette.datastructures.Headers({"content-type": "text/plain"}),
)
with pytest.raises(backend.server.v2.store.exceptions.InvalidFileTypeError):
await backend.server.v2.store.media.upload_media("test-user", test_file)
with pytest.raises(store_exceptions.InvalidFileTypeError):
await store_media.upload_media("test-user", test_file)
mock_storage_client.upload.assert_not_called()
@@ -79,7 +81,7 @@ async def test_upload_media_missing_credentials(monkeypatch):
settings = Settings()
settings.config.media_gcs_bucket_name = ""
settings.config.google_application_credentials = ""
monkeypatch.setattr("backend.server.v2.store.media.Settings", lambda: settings)
monkeypatch.setattr("backend.api.features.store.media.Settings", lambda: settings)
test_file = fastapi.UploadFile(
filename="laptop.jpeg",
@@ -87,8 +89,8 @@ async def test_upload_media_missing_credentials(monkeypatch):
headers=starlette.datastructures.Headers({"content-type": "image/jpeg"}),
)
with pytest.raises(backend.server.v2.store.exceptions.StorageConfigError):
await backend.server.v2.store.media.upload_media("test-user", test_file)
with pytest.raises(store_exceptions.StorageConfigError):
await store_media.upload_media("test-user", test_file)
async def test_upload_media_video_type(mock_settings, mock_storage_client):
@@ -98,7 +100,7 @@ async def test_upload_media_video_type(mock_settings, mock_storage_client):
headers=starlette.datastructures.Headers({"content-type": "video/mp4"}),
)
result = await backend.server.v2.store.media.upload_media("test-user", test_file)
result = await store_media.upload_media("test-user", test_file)
assert result.startswith(
"https://storage.googleapis.com/test-bucket/users/test-user/videos/"
@@ -117,8 +119,8 @@ async def test_upload_media_file_too_large(mock_settings, mock_storage_client):
headers=starlette.datastructures.Headers({"content-type": "image/jpeg"}),
)
with pytest.raises(backend.server.v2.store.exceptions.FileSizeTooLargeError):
await backend.server.v2.store.media.upload_media("test-user", test_file)
with pytest.raises(store_exceptions.FileSizeTooLargeError):
await store_media.upload_media("test-user", test_file)
async def test_upload_media_file_read_error(mock_settings, mock_storage_client):
@@ -129,8 +131,8 @@ async def test_upload_media_file_read_error(mock_settings, mock_storage_client):
)
test_file.read = unittest.mock.AsyncMock(side_effect=Exception("Read error"))
with pytest.raises(backend.server.v2.store.exceptions.FileReadError):
await backend.server.v2.store.media.upload_media("test-user", test_file)
with pytest.raises(store_exceptions.FileReadError):
await store_media.upload_media("test-user", test_file)
async def test_upload_media_png_success(mock_settings, mock_storage_client):
@@ -140,7 +142,7 @@ async def test_upload_media_png_success(mock_settings, mock_storage_client):
headers=starlette.datastructures.Headers({"content-type": "image/png"}),
)
result = await backend.server.v2.store.media.upload_media("test-user", test_file)
result = await store_media.upload_media("test-user", test_file)
assert result.startswith(
"https://storage.googleapis.com/test-bucket/users/test-user/images/"
)
@@ -154,7 +156,7 @@ async def test_upload_media_gif_success(mock_settings, mock_storage_client):
headers=starlette.datastructures.Headers({"content-type": "image/gif"}),
)
result = await backend.server.v2.store.media.upload_media("test-user", test_file)
result = await store_media.upload_media("test-user", test_file)
assert result.startswith(
"https://storage.googleapis.com/test-bucket/users/test-user/images/"
)
@@ -168,7 +170,7 @@ async def test_upload_media_webp_success(mock_settings, mock_storage_client):
headers=starlette.datastructures.Headers({"content-type": "image/webp"}),
)
result = await backend.server.v2.store.media.upload_media("test-user", test_file)
result = await store_media.upload_media("test-user", test_file)
assert result.startswith(
"https://storage.googleapis.com/test-bucket/users/test-user/images/"
)
@@ -182,7 +184,7 @@ async def test_upload_media_webm_success(mock_settings, mock_storage_client):
headers=starlette.datastructures.Headers({"content-type": "video/webm"}),
)
result = await backend.server.v2.store.media.upload_media("test-user", test_file)
result = await store_media.upload_media("test-user", test_file)
assert result.startswith(
"https://storage.googleapis.com/test-bucket/users/test-user/videos/"
)
@@ -196,8 +198,8 @@ async def test_upload_media_mismatched_signature(mock_settings, mock_storage_cli
headers=starlette.datastructures.Headers({"content-type": "image/jpeg"}),
)
with pytest.raises(backend.server.v2.store.exceptions.InvalidFileTypeError):
await backend.server.v2.store.media.upload_media("test-user", test_file)
with pytest.raises(store_exceptions.InvalidFileTypeError):
await store_media.upload_media("test-user", test_file)
async def test_upload_media_invalid_signature(mock_settings, mock_storage_client):
@@ -207,5 +209,5 @@ async def test_upload_media_invalid_signature(mock_settings, mock_storage_client
headers=starlette.datastructures.Headers({"content-type": "image/jpeg"}),
)
with pytest.raises(backend.server.v2.store.exceptions.InvalidFileTypeError):
await backend.server.v2.store.media.upload_media("test-user", test_file)
with pytest.raises(store_exceptions.InvalidFileTypeError):
await store_media.upload_media("test-user", test_file)


@@ -7,6 +7,12 @@ import pydantic
from backend.util.models import Pagination
class ChangelogEntry(pydantic.BaseModel):
version: str
changes_summary: str
date: datetime.datetime
class MyAgent(pydantic.BaseModel):
agent_id: str
agent_version: int
@@ -55,12 +61,17 @@ class StoreAgentDetails(pydantic.BaseModel):
runs: int
rating: float
versions: list[str]
agentGraphVersions: list[str]
agentGraphId: str
last_updated: datetime.datetime
recommended_schedule_cron: str | None = None
active_version_id: str | None = None
has_approved_version: bool = False
# Optional changelog data when include_changelog=True
changelog: list[ChangelogEntry] | None = None
class Creator(pydantic.BaseModel):
name: str


@@ -2,11 +2,11 @@ import datetime
import prisma.enums
import backend.server.v2.store.model
from . import model as store_model
def test_pagination():
pagination = backend.server.v2.store.model.Pagination(
pagination = store_model.Pagination(
total_items=100, total_pages=5, current_page=2, page_size=20
)
assert pagination.total_items == 100
@@ -16,7 +16,7 @@ def test_pagination():
def test_store_agent():
agent = backend.server.v2.store.model.StoreAgent(
agent = store_model.StoreAgent(
slug="test-agent",
agent_name="Test Agent",
agent_image="test.jpg",
@@ -34,9 +34,9 @@ def test_store_agent():
def test_store_agents_response():
response = backend.server.v2.store.model.StoreAgentsResponse(
response = store_model.StoreAgentsResponse(
agents=[
backend.server.v2.store.model.StoreAgent(
store_model.StoreAgent(
slug="test-agent",
agent_name="Test Agent",
agent_image="test.jpg",
@@ -48,7 +48,7 @@ def test_store_agents_response():
rating=4.5,
)
],
pagination=backend.server.v2.store.model.Pagination(
pagination=store_model.Pagination(
total_items=1, total_pages=1, current_page=1, page_size=20
),
)
@@ -57,7 +57,7 @@ def test_store_agents_response():
def test_store_agent_details():
details = backend.server.v2.store.model.StoreAgentDetails(
details = store_model.StoreAgentDetails(
store_listing_version_id="version123",
slug="test-agent",
agent_name="Test Agent",
@@ -72,6 +72,8 @@ def test_store_agent_details():
runs=50,
rating=4.5,
versions=["1.0", "2.0"],
agentGraphVersions=["1", "2"],
agentGraphId="test-graph-id",
last_updated=datetime.datetime.now(),
)
assert details.slug == "test-agent"
@@ -81,7 +83,7 @@ def test_store_agent_details():
def test_creator():
creator = backend.server.v2.store.model.Creator(
creator = store_model.Creator(
agent_rating=4.8,
agent_runs=1000,
name="Test Creator",
@@ -96,9 +98,9 @@ def test_creator():
def test_creators_response():
response = backend.server.v2.store.model.CreatorsResponse(
response = store_model.CreatorsResponse(
creators=[
backend.server.v2.store.model.Creator(
store_model.Creator(
agent_rating=4.8,
agent_runs=1000,
name="Test Creator",
@@ -109,7 +111,7 @@ def test_creators_response():
is_featured=False,
)
],
pagination=backend.server.v2.store.model.Pagination(
pagination=store_model.Pagination(
total_items=1, total_pages=1, current_page=1, page_size=20
),
)
@@ -118,7 +120,7 @@ def test_creators_response():
def test_creator_details():
details = backend.server.v2.store.model.CreatorDetails(
details = store_model.CreatorDetails(
name="Test Creator",
username="creator1",
description="Test description",
@@ -135,7 +137,7 @@ def test_creator_details():
def test_store_submission():
submission = backend.server.v2.store.model.StoreSubmission(
submission = store_model.StoreSubmission(
agent_id="agent123",
agent_version=1,
sub_heading="Test subheading",
@@ -154,9 +156,9 @@ def test_store_submission():
def test_store_submissions_response():
response = backend.server.v2.store.model.StoreSubmissionsResponse(
response = store_model.StoreSubmissionsResponse(
submissions=[
backend.server.v2.store.model.StoreSubmission(
store_model.StoreSubmission(
agent_id="agent123",
agent_version=1,
sub_heading="Test subheading",
@@ -170,7 +172,7 @@ def test_store_submissions_response():
rating=4.5,
)
],
pagination=backend.server.v2.store.model.Pagination(
pagination=store_model.Pagination(
total_items=1, total_pages=1, current_page=1, page_size=20
),
)
@@ -179,7 +181,7 @@ def test_store_submissions_response():
def test_store_submission_request():
request = backend.server.v2.store.model.StoreSubmissionRequest(
request = store_model.StoreSubmissionRequest(
agent_id="agent123",
agent_version=1,
slug="test-agent",


@@ -9,14 +9,14 @@ import fastapi
import fastapi.responses
import backend.data.graph
import backend.server.v2.store.cache as store_cache
import backend.server.v2.store.db
import backend.server.v2.store.exceptions
import backend.server.v2.store.image_gen
import backend.server.v2.store.media
import backend.server.v2.store.model
import backend.util.json
from . import cache as store_cache
from . import db as store_db
from . import image_gen as store_image_gen
from . import media as store_media
from . import model as store_model
logger = logging.getLogger(__name__)
router = fastapi.APIRouter()
@@ -32,7 +32,7 @@ router = fastapi.APIRouter()
summary="Get user profile",
tags=["store", "private"],
dependencies=[fastapi.Security(autogpt_libs.auth.requires_user)],
response_model=backend.server.v2.store.model.ProfileDetails,
response_model=store_model.ProfileDetails,
)
async def get_profile(
user_id: str = fastapi.Security(autogpt_libs.auth.get_user_id),
@@ -41,7 +41,7 @@ async def get_profile(
Get the profile details for the authenticated user.
Cached for 1 hour per user.
"""
profile = await backend.server.v2.store.db.get_user_profile(user_id)
profile = await store_db.get_user_profile(user_id)
if profile is None:
return fastapi.responses.JSONResponse(
status_code=404,
@@ -55,10 +55,10 @@ async def get_profile(
summary="Update user profile",
tags=["store", "private"],
dependencies=[fastapi.Security(autogpt_libs.auth.requires_user)],
response_model=backend.server.v2.store.model.CreatorDetails,
response_model=store_model.CreatorDetails,
)
async def update_or_create_profile(
profile: backend.server.v2.store.model.Profile,
profile: store_model.Profile,
user_id: str = fastapi.Security(autogpt_libs.auth.get_user_id),
):
"""
@@ -74,9 +74,7 @@ async def update_or_create_profile(
Raises:
HTTPException: If there is an error updating the profile
"""
updated_profile = await backend.server.v2.store.db.update_profile(
user_id=user_id, profile=profile
)
updated_profile = await store_db.update_profile(user_id=user_id, profile=profile)
return updated_profile
@@ -89,7 +87,7 @@ async def update_or_create_profile(
"/agents",
summary="List store agents",
tags=["store", "public"],
response_model=backend.server.v2.store.model.StoreAgentsResponse,
response_model=store_model.StoreAgentsResponse,
)
async def get_agents(
featured: bool = False,
@@ -152,9 +150,13 @@ async def get_agents(
"/agents/{username}/{agent_name}",
summary="Get specific agent",
tags=["store", "public"],
response_model=backend.server.v2.store.model.StoreAgentDetails,
response_model=store_model.StoreAgentDetails,
)
async def get_agent(username: str, agent_name: str):
async def get_agent(
username: str,
agent_name: str,
include_changelog: bool = fastapi.Query(default=False),
):
"""
This is only used on the AgentDetails Page.
@@ -164,7 +166,7 @@ async def get_agent(username: str, agent_name: str):
# URL decode the agent name since it comes from the URL path
agent_name = urllib.parse.unquote(agent_name).lower()
agent = await store_cache._get_cached_agent_details(
username=username, agent_name=agent_name
username=username, agent_name=agent_name, include_changelog=include_changelog
)
return agent
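The new query flag is backwards-compatible since it defaults to `False`. A hedged example call (the base URL and route prefix are assumptions; only the path shape and parameter name come from the diff):

```python
import httpx

# Assumed local base URL/prefix; the route itself is /agents/{username}/{agent_name}.
resp = httpx.get(
    "http://localhost:8000/api/store/agents/creator1/test-agent",
    params={"include_changelog": True},
)
details = resp.json()
# "changelog" is a list of entries when requested, otherwise None.
changelog = details.get("changelog")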
@@ -175,13 +177,13 @@ async def get_agent(username: str, agent_name: str):
tags=["store"],
dependencies=[fastapi.Security(autogpt_libs.auth.requires_user)],
)
async def get_graph_meta_by_store_listing_version_id(store_listing_version_id: str):
async def get_graph_meta_by_store_listing_version_id(
store_listing_version_id: str,
) -> backend.data.graph.GraphMeta:
"""
Get Agent Graph from Store Listing Version ID.
"""
graph = await backend.server.v2.store.db.get_available_graph(
store_listing_version_id
)
graph = await store_db.get_available_graph(store_listing_version_id)
return graph
@@ -190,15 +192,13 @@ async def get_graph_meta_by_store_listing_version_id(store_listing_version_id: s
summary="Get agent by version",
tags=["store"],
dependencies=[fastapi.Security(autogpt_libs.auth.requires_user)],
response_model=backend.server.v2.store.model.StoreAgentDetails,
response_model=store_model.StoreAgentDetails,
)
async def get_store_agent(store_listing_version_id: str):
"""
Get Store Agent Details from Store Listing Version ID.
"""
agent = await backend.server.v2.store.db.get_store_agent_by_version_id(
store_listing_version_id
)
agent = await store_db.get_store_agent_by_version_id(store_listing_version_id)
return agent
@@ -208,12 +208,12 @@ async def get_store_agent(store_listing_version_id: str):
summary="Create agent review",
tags=["store"],
dependencies=[fastapi.Security(autogpt_libs.auth.requires_user)],
response_model=backend.server.v2.store.model.StoreReview,
response_model=store_model.StoreReview,
)
async def create_review(
username: str,
agent_name: str,
review: backend.server.v2.store.model.StoreReviewCreate,
review: store_model.StoreReviewCreate,
user_id: str = fastapi.Security(autogpt_libs.auth.get_user_id),
):
"""
@@ -231,7 +231,7 @@ async def create_review(
username = urllib.parse.unquote(username).lower()
agent_name = urllib.parse.unquote(agent_name).lower()
# Create the review
created_review = await backend.server.v2.store.db.create_store_review(
created_review = await store_db.create_store_review(
user_id=user_id,
store_listing_version_id=review.store_listing_version_id,
score=review.score,
@@ -250,7 +250,7 @@ async def create_review(
"/creators",
summary="List store creators",
tags=["store", "public"],
response_model=backend.server.v2.store.model.CreatorsResponse,
response_model=store_model.CreatorsResponse,
)
async def get_creators(
featured: bool = False,
@@ -295,7 +295,7 @@ async def get_creators(
"/creator/{username}",
summary="Get creator details",
tags=["store", "public"],
response_model=backend.server.v2.store.model.CreatorDetails,
response_model=store_model.CreatorDetails,
)
async def get_creator(
username: str,
@@ -319,7 +319,7 @@ async def get_creator(
summary="Get my agents",
tags=["store", "private"],
dependencies=[fastapi.Security(autogpt_libs.auth.requires_user)],
response_model=backend.server.v2.store.model.MyAgentsResponse,
response_model=store_model.MyAgentsResponse,
)
async def get_my_agents(
user_id: str = fastapi.Security(autogpt_libs.auth.get_user_id),
@@ -329,9 +329,7 @@ async def get_my_agents(
"""
Get user's own agents.
"""
agents = await backend.server.v2.store.db.get_my_agents(
user_id, page=page, page_size=page_size
)
agents = await store_db.get_my_agents(user_id, page=page, page_size=page_size)
return agents
@@ -356,7 +354,7 @@ async def delete_submission(
Returns:
bool: True if the submission was successfully deleted, False otherwise
"""
result = await backend.server.v2.store.db.delete_store_submission(
result = await store_db.delete_store_submission(
user_id=user_id,
submission_id=submission_id,
)
@@ -369,7 +367,7 @@ async def delete_submission(
summary="List my submissions",
tags=["store", "private"],
dependencies=[fastapi.Security(autogpt_libs.auth.requires_user)],
response_model=backend.server.v2.store.model.StoreSubmissionsResponse,
response_model=store_model.StoreSubmissionsResponse,
)
async def get_submissions(
user_id: str = fastapi.Security(autogpt_libs.auth.get_user_id),
@@ -399,7 +397,7 @@ async def get_submissions(
raise fastapi.HTTPException(
status_code=422, detail="Page size must be greater than 0"
)
listings = await backend.server.v2.store.db.get_store_submissions(
listings = await store_db.get_store_submissions(
user_id=user_id,
page=page,
page_size=page_size,
@@ -412,10 +410,10 @@ async def get_submissions(
summary="Create store submission",
tags=["store", "private"],
dependencies=[fastapi.Security(autogpt_libs.auth.requires_user)],
response_model=backend.server.v2.store.model.StoreSubmission,
response_model=store_model.StoreSubmission,
)
async def create_submission(
submission_request: backend.server.v2.store.model.StoreSubmissionRequest,
submission_request: store_model.StoreSubmissionRequest,
user_id: str = fastapi.Security(autogpt_libs.auth.get_user_id),
):
"""
@@ -431,7 +429,7 @@ async def create_submission(
Raises:
HTTPException: If there is an error creating the submission
"""
result = await backend.server.v2.store.db.create_store_submission(
result = await store_db.create_store_submission(
user_id=user_id,
agent_id=submission_request.agent_id,
agent_version=submission_request.agent_version,
@@ -456,11 +454,11 @@ async def create_submission(
summary="Edit store submission",
tags=["store", "private"],
dependencies=[fastapi.Security(autogpt_libs.auth.requires_user)],
response_model=backend.server.v2.store.model.StoreSubmission,
response_model=store_model.StoreSubmission,
)
async def edit_submission(
store_listing_version_id: str,
submission_request: backend.server.v2.store.model.StoreSubmissionEditRequest,
submission_request: store_model.StoreSubmissionEditRequest,
user_id: str = fastapi.Security(autogpt_libs.auth.get_user_id),
):
"""
@@ -477,7 +475,7 @@ async def edit_submission(
Raises:
HTTPException: If there is an error editing the submission
"""
result = await backend.server.v2.store.db.edit_store_submission(
result = await store_db.edit_store_submission(
user_id=user_id,
store_listing_version_id=store_listing_version_id,
name=submission_request.name,
@@ -518,9 +516,7 @@ async def upload_submission_media(
Raises:
HTTPException: If there is an error uploading the media
"""
media_url = await backend.server.v2.store.media.upload_media(
user_id=user_id, file=file
)
media_url = await store_media.upload_media(user_id=user_id, file=file)
return media_url
@@ -555,14 +551,12 @@ async def generate_image(
# Use .jpeg here since we are generating JPEG images
filename = f"agent_{agent_id}.jpeg"
existing_url = await backend.server.v2.store.media.check_media_exists(
user_id, filename
)
existing_url = await store_media.check_media_exists(user_id, filename)
if existing_url:
logger.info(f"Using existing image for agent {agent_id}")
return fastapi.responses.JSONResponse(content={"image_url": existing_url})
# Generate agent image as JPEG
image = await backend.server.v2.store.image_gen.generate_agent_image(agent=agent)
image = await store_image_gen.generate_agent_image(agent=agent)
# Create UploadFile with the correct filename and content_type
image_file = fastapi.UploadFile(
@@ -570,7 +564,7 @@ async def generate_image(
filename=filename,
)
image_url = await backend.server.v2.store.media.upload_media(
image_url = await store_media.upload_media(
user_id=user_id, file=image_file, use_file_name=True
)
@@ -599,7 +593,7 @@ async def download_agent_file(
Raises:
HTTPException: If the agent is not found or an unexpected error occurs.
"""
graph_data = await backend.server.v2.store.db.get_agent(store_listing_version_id)
graph_data = await store_db.get_agent(store_listing_version_id)
file_name = f"agent_{graph_data.id}_v{graph_data.version or 'latest'}.json"
# Sending graph as a stream (similar to marketplace v1)


@@ -8,15 +8,15 @@ import pytest
import pytest_mock
from pytest_snapshot.plugin import Snapshot
import backend.server.v2.store.model
import backend.server.v2.store.routes
from . import model as store_model
from . import routes as store_routes
# Using a fixed timestamp for reproducible tests
# 2023 date is intentionally used to ensure tests work regardless of current year
FIXED_NOW = datetime.datetime(2023, 1, 1, 0, 0, 0)
app = fastapi.FastAPI()
app.include_router(backend.server.v2.store.routes.router)
app.include_router(store_routes.router)
client = fastapi.testclient.TestClient(app)
@@ -35,23 +35,21 @@ def test_get_agents_defaults(
mocker: pytest_mock.MockFixture,
snapshot: Snapshot,
) -> None:
mocked_value = backend.server.v2.store.model.StoreAgentsResponse(
mocked_value = store_model.StoreAgentsResponse(
agents=[],
pagination=backend.server.v2.store.model.Pagination(
pagination=store_model.Pagination(
current_page=0,
total_items=0,
total_pages=0,
page_size=10,
),
)
mock_db_call = mocker.patch("backend.server.v2.store.db.get_store_agents")
mock_db_call = mocker.patch("backend.api.features.store.db.get_store_agents")
mock_db_call.return_value = mocked_value
response = client.get("/agents")
assert response.status_code == 200
data = backend.server.v2.store.model.StoreAgentsResponse.model_validate(
response.json()
)
data = store_model.StoreAgentsResponse.model_validate(response.json())
assert data.pagination.total_pages == 0
assert data.agents == []
@@ -72,9 +70,9 @@ def test_get_agents_featured(
mocker: pytest_mock.MockFixture,
snapshot: Snapshot,
) -> None:
mocked_value = backend.server.v2.store.model.StoreAgentsResponse(
mocked_value = store_model.StoreAgentsResponse(
agents=[
backend.server.v2.store.model.StoreAgent(
store_model.StoreAgent(
slug="featured-agent",
agent_name="Featured Agent",
agent_image="featured.jpg",
@@ -86,20 +84,18 @@ def test_get_agents_featured(
rating=4.5,
)
],
pagination=backend.server.v2.store.model.Pagination(
pagination=store_model.Pagination(
current_page=1,
total_items=1,
total_pages=1,
page_size=20,
),
)
mock_db_call = mocker.patch("backend.server.v2.store.db.get_store_agents")
mock_db_call = mocker.patch("backend.api.features.store.db.get_store_agents")
mock_db_call.return_value = mocked_value
response = client.get("/agents?featured=true")
assert response.status_code == 200
data = backend.server.v2.store.model.StoreAgentsResponse.model_validate(
response.json()
)
data = store_model.StoreAgentsResponse.model_validate(response.json())
assert len(data.agents) == 1
assert data.agents[0].slug == "featured-agent"
snapshot.snapshot_dir = "snapshots"
@@ -119,9 +115,9 @@ def test_get_agents_by_creator(
mocker: pytest_mock.MockFixture,
snapshot: Snapshot,
) -> None:
mocked_value = backend.server.v2.store.model.StoreAgentsResponse(
mocked_value = store_model.StoreAgentsResponse(
agents=[
backend.server.v2.store.model.StoreAgent(
store_model.StoreAgent(
slug="creator-agent",
agent_name="Creator Agent",
agent_image="agent.jpg",
@@ -133,20 +129,18 @@ def test_get_agents_by_creator(
rating=4.0,
)
],
pagination=backend.server.v2.store.model.Pagination(
pagination=store_model.Pagination(
current_page=1,
total_items=1,
total_pages=1,
page_size=20,
),
)
mock_db_call = mocker.patch("backend.server.v2.store.db.get_store_agents")
mock_db_call = mocker.patch("backend.api.features.store.db.get_store_agents")
mock_db_call.return_value = mocked_value
response = client.get("/agents?creator=specific-creator")
assert response.status_code == 200
data = backend.server.v2.store.model.StoreAgentsResponse.model_validate(
response.json()
)
data = store_model.StoreAgentsResponse.model_validate(response.json())
assert len(data.agents) == 1
assert data.agents[0].creator == "specific-creator"
snapshot.snapshot_dir = "snapshots"
@@ -166,9 +160,9 @@ def test_get_agents_sorted(
mocker: pytest_mock.MockFixture,
snapshot: Snapshot,
) -> None:
mocked_value = backend.server.v2.store.model.StoreAgentsResponse(
mocked_value = store_model.StoreAgentsResponse(
agents=[
backend.server.v2.store.model.StoreAgent(
store_model.StoreAgent(
slug="top-agent",
agent_name="Top Agent",
agent_image="top.jpg",
@@ -180,20 +174,18 @@ def test_get_agents_sorted(
rating=5.0,
)
],
pagination=backend.server.v2.store.model.Pagination(
pagination=store_model.Pagination(
current_page=1,
total_items=1,
total_pages=1,
page_size=20,
),
)
mock_db_call = mocker.patch("backend.server.v2.store.db.get_store_agents")
mock_db_call = mocker.patch("backend.api.features.store.db.get_store_agents")
mock_db_call.return_value = mocked_value
response = client.get("/agents?sorted_by=runs")
assert response.status_code == 200
data = backend.server.v2.store.model.StoreAgentsResponse.model_validate(
response.json()
)
data = store_model.StoreAgentsResponse.model_validate(response.json())
assert len(data.agents) == 1
assert data.agents[0].runs == 1000
snapshot.snapshot_dir = "snapshots"
@@ -213,9 +205,9 @@ def test_get_agents_search(
mocker: pytest_mock.MockFixture,
snapshot: Snapshot,
) -> None:
mocked_value = backend.server.v2.store.model.StoreAgentsResponse(
mocked_value = store_model.StoreAgentsResponse(
agents=[
backend.server.v2.store.model.StoreAgent(
store_model.StoreAgent(
slug="search-agent",
agent_name="Search Agent",
agent_image="search.jpg",
@@ -227,20 +219,18 @@ def test_get_agents_search(
rating=4.2,
)
],
pagination=backend.server.v2.store.model.Pagination(
pagination=store_model.Pagination(
current_page=1,
total_items=1,
total_pages=1,
page_size=20,
),
)
mock_db_call = mocker.patch("backend.server.v2.store.db.get_store_agents")
mock_db_call = mocker.patch("backend.api.features.store.db.get_store_agents")
mock_db_call.return_value = mocked_value
response = client.get("/agents?search_query=specific")
assert response.status_code == 200
data = backend.server.v2.store.model.StoreAgentsResponse.model_validate(
response.json()
)
data = store_model.StoreAgentsResponse.model_validate(response.json())
assert len(data.agents) == 1
assert "specific" in data.agents[0].description.lower()
snapshot.snapshot_dir = "snapshots"
@@ -260,9 +250,9 @@ def test_get_agents_category(
mocker: pytest_mock.MockFixture,
snapshot: Snapshot,
) -> None:
mocked_value = backend.server.v2.store.model.StoreAgentsResponse(
mocked_value = store_model.StoreAgentsResponse(
agents=[
backend.server.v2.store.model.StoreAgent(
store_model.StoreAgent(
slug="category-agent",
agent_name="Category Agent",
agent_image="category.jpg",
@@ -274,20 +264,18 @@ def test_get_agents_category(
rating=4.1,
)
],
pagination=backend.server.v2.store.model.Pagination(
pagination=store_model.Pagination(
current_page=1,
total_items=1,
total_pages=1,
page_size=20,
),
)
mock_db_call = mocker.patch("backend.server.v2.store.db.get_store_agents")
mock_db_call = mocker.patch("backend.api.features.store.db.get_store_agents")
mock_db_call.return_value = mocked_value
response = client.get("/agents?category=test-category")
assert response.status_code == 200
data = backend.server.v2.store.model.StoreAgentsResponse.model_validate(
response.json()
)
data = store_model.StoreAgentsResponse.model_validate(response.json())
assert len(data.agents) == 1
snapshot.snapshot_dir = "snapshots"
snapshot.assert_match(json.dumps(response.json(), indent=2), "agts_category")
@@ -306,9 +294,9 @@ def test_get_agents_pagination(
mocker: pytest_mock.MockFixture,
snapshot: Snapshot,
) -> None:
mocked_value = backend.server.v2.store.model.StoreAgentsResponse(
mocked_value = store_model.StoreAgentsResponse(
agents=[
backend.server.v2.store.model.StoreAgent(
store_model.StoreAgent(
slug=f"agent-{i}",
agent_name=f"Agent {i}",
agent_image=f"agent{i}.jpg",
@@ -321,20 +309,18 @@ def test_get_agents_pagination(
)
for i in range(5)
],
pagination=backend.server.v2.store.model.Pagination(
pagination=store_model.Pagination(
current_page=2,
total_items=15,
total_pages=3,
page_size=5,
),
)
mock_db_call = mocker.patch("backend.server.v2.store.db.get_store_agents")
mock_db_call = mocker.patch("backend.api.features.store.db.get_store_agents")
mock_db_call.return_value = mocked_value
response = client.get("/agents?page=2&page_size=5")
assert response.status_code == 200
data = backend.server.v2.store.model.StoreAgentsResponse.model_validate(
response.json()
)
data = store_model.StoreAgentsResponse.model_validate(response.json())
assert len(data.agents) == 5
assert data.pagination.current_page == 2
assert data.pagination.page_size == 5
@@ -365,7 +351,7 @@ def test_get_agents_malformed_request(mocker: pytest_mock.MockFixture):
assert response.status_code == 422
# Verify no DB calls were made
mock_db_call = mocker.patch("backend.server.v2.store.db.get_store_agents")
mock_db_call = mocker.patch("backend.api.features.store.db.get_store_agents")
mock_db_call.assert_not_called()
@@ -373,7 +359,7 @@ def test_get_agent_details(
mocker: pytest_mock.MockFixture,
snapshot: Snapshot,
) -> None:
mocked_value = backend.server.v2.store.model.StoreAgentDetails(
mocked_value = store_model.StoreAgentDetails(
store_listing_version_id="test-version-id",
slug="test-agent",
agent_name="Test Agent",
@@ -388,46 +374,46 @@ def test_get_agent_details(
runs=100,
rating=4.5,
versions=["1.0.0", "1.1.0"],
agentGraphVersions=["1", "2"],
agentGraphId="test-graph-id",
last_updated=FIXED_NOW,
)
mock_db_call = mocker.patch("backend.server.v2.store.db.get_store_agent_details")
mock_db_call = mocker.patch("backend.api.features.store.db.get_store_agent_details")
mock_db_call.return_value = mocked_value
response = client.get("/agents/creator1/test-agent")
assert response.status_code == 200
data = backend.server.v2.store.model.StoreAgentDetails.model_validate(
response.json()
)
data = store_model.StoreAgentDetails.model_validate(response.json())
assert data.agent_name == "Test Agent"
assert data.creator == "creator1"
snapshot.snapshot_dir = "snapshots"
snapshot.assert_match(json.dumps(response.json(), indent=2), "agt_details")
mock_db_call.assert_called_once_with(username="creator1", agent_name="test-agent")
mock_db_call.assert_called_once_with(
username="creator1", agent_name="test-agent", include_changelog=False
)
def test_get_creators_defaults(
mocker: pytest_mock.MockFixture,
snapshot: Snapshot,
) -> None:
mocked_value = backend.server.v2.store.model.CreatorsResponse(
mocked_value = store_model.CreatorsResponse(
creators=[],
pagination=backend.server.v2.store.model.Pagination(
pagination=store_model.Pagination(
current_page=0,
total_items=0,
total_pages=0,
page_size=10,
),
)
mock_db_call = mocker.patch("backend.server.v2.store.db.get_store_creators")
mock_db_call = mocker.patch("backend.api.features.store.db.get_store_creators")
mock_db_call.return_value = mocked_value
response = client.get("/creators")
assert response.status_code == 200
data = backend.server.v2.store.model.CreatorsResponse.model_validate(
response.json()
)
data = store_model.CreatorsResponse.model_validate(response.json())
assert data.pagination.total_pages == 0
assert data.creators == []
snapshot.snapshot_dir = "snapshots"
@@ -441,9 +427,9 @@ def test_get_creators_pagination(
mocker: pytest_mock.MockFixture,
snapshot: Snapshot,
) -> None:
mocked_value = backend.server.v2.store.model.CreatorsResponse(
mocked_value = store_model.CreatorsResponse(
creators=[
backend.server.v2.store.model.Creator(
store_model.Creator(
name=f"Creator {i}",
username=f"creator{i}",
description=f"Creator {i} description",
@@ -455,22 +441,20 @@ def test_get_creators_pagination(
)
for i in range(5)
],
pagination=backend.server.v2.store.model.Pagination(
pagination=store_model.Pagination(
current_page=2,
total_items=15,
total_pages=3,
page_size=5,
),
)
mock_db_call = mocker.patch("backend.server.v2.store.db.get_store_creators")
mock_db_call = mocker.patch("backend.api.features.store.db.get_store_creators")
mock_db_call.return_value = mocked_value
response = client.get("/creators?page=2&page_size=5")
assert response.status_code == 200
data = backend.server.v2.store.model.CreatorsResponse.model_validate(
response.json()
)
data = store_model.CreatorsResponse.model_validate(response.json())
assert len(data.creators) == 5
assert data.pagination.current_page == 2
assert data.pagination.page_size == 5
@@ -495,7 +479,7 @@ def test_get_creators_malformed_request(mocker: pytest_mock.MockFixture):
assert response.status_code == 422
# Verify no DB calls were made
mock_db_call = mocker.patch("backend.server.v2.store.db.get_store_creators")
mock_db_call = mocker.patch("backend.api.features.store.db.get_store_creators")
mock_db_call.assert_not_called()
@@ -503,7 +487,7 @@ def test_get_creator_details(
mocker: pytest_mock.MockFixture,
snapshot: Snapshot,
) -> None:
mocked_value = backend.server.v2.store.model.CreatorDetails(
mocked_value = store_model.CreatorDetails(
name="Test User",
username="creator1",
description="Test creator description",
@@ -513,13 +497,15 @@ def test_get_creator_details(
agent_runs=1000,
top_categories=["category1", "category2"],
)
mock_db_call = mocker.patch("backend.server.v2.store.db.get_store_creator_details")
mock_db_call = mocker.patch(
"backend.api.features.store.db.get_store_creator_details"
)
mock_db_call.return_value = mocked_value
response = client.get("/creator/creator1")
assert response.status_code == 200
data = backend.server.v2.store.model.CreatorDetails.model_validate(response.json())
data = store_model.CreatorDetails.model_validate(response.json())
assert data.username == "creator1"
assert data.name == "Test User"
snapshot.snapshot_dir = "snapshots"
@@ -532,9 +518,9 @@ def test_get_submissions_success(
snapshot: Snapshot,
test_user_id: str,
) -> None:
mocked_value = backend.server.v2.store.model.StoreSubmissionsResponse(
mocked_value = store_model.StoreSubmissionsResponse(
submissions=[
backend.server.v2.store.model.StoreSubmission(
store_model.StoreSubmission(
name="Test Agent",
description="Test agent description",
image_urls=["test.jpg"],
@@ -550,22 +536,20 @@ def test_get_submissions_success(
categories=["test-category"],
)
],
pagination=backend.server.v2.store.model.Pagination(
pagination=store_model.Pagination(
current_page=1,
total_items=1,
total_pages=1,
page_size=20,
),
)
mock_db_call = mocker.patch("backend.server.v2.store.db.get_store_submissions")
mock_db_call = mocker.patch("backend.api.features.store.db.get_store_submissions")
mock_db_call.return_value = mocked_value
response = client.get("/submissions")
assert response.status_code == 200
data = backend.server.v2.store.model.StoreSubmissionsResponse.model_validate(
response.json()
)
data = store_model.StoreSubmissionsResponse.model_validate(response.json())
assert len(data.submissions) == 1
assert data.submissions[0].name == "Test Agent"
assert data.pagination.current_page == 1
@@ -579,24 +563,22 @@ def test_get_submissions_pagination(
snapshot: Snapshot,
test_user_id: str,
) -> None:
mocked_value = backend.server.v2.store.model.StoreSubmissionsResponse(
mocked_value = store_model.StoreSubmissionsResponse(
submissions=[],
pagination=backend.server.v2.store.model.Pagination(
pagination=store_model.Pagination(
current_page=2,
total_items=10,
total_pages=2,
page_size=5,
),
)
mock_db_call = mocker.patch("backend.server.v2.store.db.get_store_submissions")
mock_db_call = mocker.patch("backend.api.features.store.db.get_store_submissions")
mock_db_call.return_value = mocked_value
response = client.get("/submissions?page=2&page_size=5")
assert response.status_code == 200
data = backend.server.v2.store.model.StoreSubmissionsResponse.model_validate(
response.json()
)
data = store_model.StoreSubmissionsResponse.model_validate(response.json())
assert data.pagination.current_page == 2
assert data.pagination.page_size == 5
snapshot.snapshot_dir = "snapshots"
@@ -618,5 +600,5 @@ def test_get_submissions_malformed_request(mocker: pytest_mock.MockFixture):
assert response.status_code == 422
# Verify no DB calls were made
mock_db_call = mocker.patch("backend.server.v2.store.db.get_store_submissions")
mock_db_call = mocker.patch("backend.api.features.store.db.get_store_submissions")
mock_db_call.assert_not_called()
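All of the route tests above share one harness: a throwaway FastAPI app that mounts only the router under test, so routes can be exercised without booting the full backend. A minimal self-contained version of that pattern (toy router, not the store router):

```python
import fastapi
import fastapi.testclient

router = fastapi.APIRouter()


@router.get("/ping")
def ping():
    return {"ok": True}


app = fastapi.FastAPI()
app.include_router(router)
client = fastapi.testclient.TestClient(app)

assert client.get("/ping").json() == {"ok": True}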


@@ -8,10 +8,11 @@ from unittest.mock import AsyncMock, patch
import pytest
from backend.server.v2.store import cache as store_cache
from backend.server.v2.store.model import StoreAgent, StoreAgentsResponse
from backend.util.models import Pagination
from . import cache as store_cache
from .model import StoreAgent, StoreAgentsResponse
class TestCacheDeletion:
"""Test cache deletion functionality for store routes."""
@@ -43,7 +44,7 @@ class TestCacheDeletion:
)
with patch(
"backend.server.v2.store.db.get_store_agents",
"backend.api.features.store.db.get_store_agents",
new_callable=AsyncMock,
return_value=mock_response,
) as mock_db:
@@ -152,7 +153,7 @@ class TestCacheDeletion:
)
with patch(
"backend.server.v2.store.db.get_store_agents",
"backend.api.features.store.db.get_store_agents",
new_callable=AsyncMock,
return_value=mock_response,
):
@@ -203,7 +204,7 @@ class TestCacheDeletion:
)
with patch(
"backend.server.v2.store.db.get_store_agents",
"backend.api.features.store.db.get_store_agents",
new_callable=AsyncMock,
return_value=mock_response,
) as mock_db:


@@ -28,9 +28,18 @@ from pydantic import BaseModel
from starlette.status import HTTP_204_NO_CONTENT, HTTP_404_NOT_FOUND
from typing_extensions import Optional, TypedDict
import backend.server.integrations.router
import backend.server.routers.analytics
import backend.server.v2.library.db as library_db
from backend.api.model import (
CreateAPIKeyRequest,
CreateAPIKeyResponse,
CreateGraph,
GraphExecutionSource,
RequestTopUp,
SetGraphActiveVersion,
TimezoneResponse,
UpdatePermissionsRequest,
UpdateTimezoneRequest,
UploadFileResponse,
)
from backend.data import execution as execution_db
from backend.data import graph as graph_db
from backend.data.auth import api_key as api_key_db
@@ -79,19 +88,6 @@ from backend.monitoring.instrumentation import (
record_graph_execution,
record_graph_operation,
)
from backend.server.model import (
CreateAPIKeyRequest,
CreateAPIKeyResponse,
CreateGraph,
GraphExecutionSource,
RequestTopUp,
SetGraphActiveVersion,
TimezoneResponse,
UpdatePermissionsRequest,
UpdateTimezoneRequest,
UploadFileResponse,
)
from backend.server.v2.store.model import StoreAgentDetails
from backend.util.cache import cached
from backend.util.clients import get_scheduler_client
from backend.util.cloud_storage import get_cloud_storage_handler
@@ -105,6 +101,10 @@ from backend.util.timezone_utils import (
)
from backend.util.virus_scanner import scan_content_safe
from .library import db as library_db
from .library import model as library_model
from .store.model import StoreAgentDetails
def _create_file_size_error(size_bytes: int, max_size_mb: int) -> HTTPException:
"""Create standardized file size error response."""
@@ -118,76 +118,9 @@ settings = Settings()
logger = logging.getLogger(__name__)
async def hide_activity_summaries_if_disabled(
executions: list[execution_db.GraphExecutionMeta], user_id: str
) -> list[execution_db.GraphExecutionMeta]:
"""Hide activity summaries and scores if AI_ACTIVITY_STATUS feature is disabled."""
if await is_feature_enabled(Flag.AI_ACTIVITY_STATUS, user_id):
return executions # Return as-is if feature is enabled
# Filter out activity features if disabled
filtered_executions = []
for execution in executions:
if execution.stats:
filtered_stats = execution.stats.without_activity_features()
execution = execution.model_copy(update={"stats": filtered_stats})
filtered_executions.append(execution)
return filtered_executions
async def hide_activity_summary_if_disabled(
execution: execution_db.GraphExecution | execution_db.GraphExecutionWithNodes,
user_id: str,
) -> execution_db.GraphExecution | execution_db.GraphExecutionWithNodes:
"""Hide activity summary and score for a single execution if AI_ACTIVITY_STATUS feature is disabled."""
if await is_feature_enabled(Flag.AI_ACTIVITY_STATUS, user_id):
return execution # Return as-is if feature is enabled
# Filter out activity features if disabled
if execution.stats:
filtered_stats = execution.stats.without_activity_features()
return execution.model_copy(update={"stats": filtered_stats})
return execution
async def _update_library_agent_version_and_settings(
user_id: str, agent_graph: graph_db.GraphModel
) -> library_db.library_model.LibraryAgent:
# Keep the library agent up to date with the new active version
library = await library_db.update_agent_version_in_library(
user_id, agent_graph.id, agent_graph.version
)
# If the graph has HITL node, initialize the setting if it's not already set.
if (
agent_graph.has_human_in_the_loop
and library.settings.human_in_the_loop_safe_mode is None
):
await library_db.update_library_agent_settings(
user_id=user_id,
agent_id=library.id,
settings=library.settings.model_copy(
update={"human_in_the_loop_safe_mode": True}
),
)
return library
# Define the API routes
v1_router = APIRouter()
v1_router.include_router(
backend.server.integrations.router.router,
prefix="/integrations",
tags=["integrations"],
)
v1_router.include_router(
backend.server.routers.analytics.router,
prefix="/analytics",
tags=["analytics"],
dependencies=[Security(requires_user)],
)
########################################################
##################### Auth #############################
@@ -953,6 +886,28 @@ async def set_graph_active_version(
await on_graph_deactivate(current_active_graph, user_id=user_id)
async def _update_library_agent_version_and_settings(
user_id: str, agent_graph: graph_db.GraphModel
) -> library_model.LibraryAgent:
# Keep the library agent up to date with the new active version
library = await library_db.update_agent_version_in_library(
user_id, agent_graph.id, agent_graph.version
)
# If the graph has HITL node, initialize the setting if it's not already set.
if (
agent_graph.has_human_in_the_loop
and library.settings.human_in_the_loop_safe_mode is None
):
await library_db.update_library_agent_settings(
user_id=user_id,
agent_id=library.id,
settings=library.settings.model_copy(
update={"human_in_the_loop_safe_mode": True}
),
)
return library
@v1_router.patch(
path="/graphs/{graph_id}/settings",
summary="Update graph settings",
@@ -1155,6 +1110,23 @@ async def list_graph_executions(
)
async def hide_activity_summaries_if_disabled(
executions: list[execution_db.GraphExecutionMeta], user_id: str
) -> list[execution_db.GraphExecutionMeta]:
"""Hide activity summaries and scores if AI_ACTIVITY_STATUS feature is disabled."""
if await is_feature_enabled(Flag.AI_ACTIVITY_STATUS, user_id):
return executions # Return as-is if feature is enabled
# Filter out activity features if disabled
filtered_executions = []
for execution in executions:
if execution.stats:
filtered_stats = execution.stats.without_activity_features()
execution = execution.model_copy(update={"stats": filtered_stats})
filtered_executions.append(execution)
return filtered_executions
@v1_router.get(
path="/graphs/{graph_id}/executions/{graph_exec_id}",
summary="Get execution details",
@@ -1197,6 +1169,21 @@ async def get_graph_execution(
return result
async def hide_activity_summary_if_disabled(
execution: execution_db.GraphExecution | execution_db.GraphExecutionWithNodes,
user_id: str,
) -> execution_db.GraphExecution | execution_db.GraphExecutionWithNodes:
"""Hide activity summary and score for a single execution if AI_ACTIVITY_STATUS feature is disabled."""
if await is_feature_enabled(Flag.AI_ACTIVITY_STATUS, user_id):
return execution # Return as-is if feature is enabled
# Filter out activity features if disabled
if execution.stats:
filtered_stats = execution.stats.without_activity_features()
return execution.model_copy(update={"stats": filtered_stats})
return execution
@v1_router.delete(
path="/executions/{graph_exec_id}",
summary="Delete graph execution",
@@ -1257,7 +1244,7 @@ async def enable_execution_sharing(
)
# Return the share URL
frontend_url = Settings().config.frontend_base_url or "http://localhost:3000"
frontend_url = settings.config.frontend_base_url or "http://localhost:3000"
share_url = f"{frontend_url}/share/{share_token}"
return ShareResponse(share_url=share_url, share_token=share_token)
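The last hunk above swaps a per-request `Settings()` construction for the module-level `settings` instance created once at import time. A toy sketch of the difference; the `Settings` class here is a stand-in for illustration, not `backend.util.settings.Settings`:

```python
class Settings:  # stand-in for illustration only
    def __init__(self) -> None:
        # imagine env/config-file parsing happening on every construction
        self.frontend_base_url = "http://localhost:3000"


settings = Settings()  # built once at module import, as in v1.py


def share_url(token: str) -> str:
    # reuses the singleton instead of re-parsing configuration per request
    base = settings.frontend_base_url or "http://localhost:3000"
    return f"{base}/share/{token}"


assert share_url("abc123") == "http://localhost:3000/share/abc123"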


@@ -11,13 +11,13 @@ import starlette.datastructures
from fastapi import HTTPException, UploadFile
from pytest_snapshot.plugin import Snapshot
import backend.server.routers.v1 as v1_routes
from backend.data.credit import AutoTopUpConfig
from backend.data.graph import GraphModel
from backend.server.routers.v1 import upload_file
from .v1 import upload_file, v1_router
app = fastapi.FastAPI()
app.include_router(v1_routes.v1_router)
app.include_router(v1_router)
client = fastapi.testclient.TestClient(app)
@@ -50,7 +50,7 @@ def test_get_or_create_user_route(
}
mocker.patch(
"backend.server.routers.v1.get_or_create_user",
"backend.api.features.v1.get_or_create_user",
return_value=mock_user,
)
@@ -71,7 +71,7 @@ def test_update_user_email_route(
) -> None:
"""Test update user email endpoint"""
mocker.patch(
"backend.server.routers.v1.update_user_email",
"backend.api.features.v1.update_user_email",
return_value=None,
)
@@ -107,7 +107,7 @@ def test_get_graph_blocks(
# Mock get_blocks
mocker.patch(
"backend.server.routers.v1.get_blocks",
"backend.api.features.v1.get_blocks",
return_value={"test-block": lambda: mock_block},
)
@@ -146,7 +146,7 @@ def test_execute_graph_block(
mock_block.execute = mock_execute
mocker.patch(
"backend.server.routers.v1.get_block",
"backend.api.features.v1.get_block",
return_value=mock_block,
)
@@ -155,7 +155,7 @@ def test_execute_graph_block(
mock_user.timezone = "UTC"
mocker.patch(
"backend.server.routers.v1.get_user_by_id",
"backend.api.features.v1.get_user_by_id",
return_value=mock_user,
)
@@ -181,7 +181,7 @@ def test_execute_graph_block_not_found(
) -> None:
"""Test execute block with non-existent block"""
mocker.patch(
"backend.server.routers.v1.get_block",
"backend.api.features.v1.get_block",
return_value=None,
)
@@ -200,7 +200,7 @@ def test_get_user_credits(
mock_credit_model = Mock()
mock_credit_model.get_credits = AsyncMock(return_value=1000)
mocker.patch(
"backend.server.routers.v1.get_user_credit_model",
"backend.api.features.v1.get_user_credit_model",
return_value=mock_credit_model,
)
@@ -227,7 +227,7 @@ def test_request_top_up(
return_value="https://checkout.example.com/session123"
)
mocker.patch(
"backend.server.routers.v1.get_user_credit_model",
"backend.api.features.v1.get_user_credit_model",
return_value=mock_credit_model,
)
@@ -254,7 +254,7 @@ def test_get_auto_top_up(
     mock_config = AutoTopUpConfig(threshold=100, amount=500)
     mocker.patch(
-        "backend.server.routers.v1.get_auto_top_up",
+        "backend.api.features.v1.get_auto_top_up",
         return_value=mock_config,
     )
@@ -279,7 +279,7 @@ def test_configure_auto_top_up(
     """Test configure auto top-up endpoint - this test would have caught the enum casting bug"""
     # Mock the set_auto_top_up function to avoid database operations
     mocker.patch(
-        "backend.server.routers.v1.set_auto_top_up",
+        "backend.api.features.v1.set_auto_top_up",
         return_value=None,
     )
@@ -289,7 +289,7 @@ def test_configure_auto_top_up(
     mock_credit_model.top_up_credits.return_value = None
     mocker.patch(
-        "backend.server.routers.v1.get_user_credit_model",
+        "backend.api.features.v1.get_user_credit_model",
         return_value=mock_credit_model,
     )
@@ -311,7 +311,7 @@ def test_configure_auto_top_up_validation_errors(
 ) -> None:
     """Test configure auto top-up endpoint validation"""
     # Mock set_auto_top_up to avoid database operations for successful case
-    mocker.patch("backend.server.routers.v1.set_auto_top_up")
+    mocker.patch("backend.api.features.v1.set_auto_top_up")
     # Mock credit model to avoid Stripe API calls for the successful case
     mock_credit_model = mocker.AsyncMock()
@@ -319,7 +319,7 @@ def test_configure_auto_top_up_validation_errors(
     mock_credit_model.top_up_credits.return_value = None
     mocker.patch(
-        "backend.server.routers.v1.get_user_credit_model",
+        "backend.api.features.v1.get_user_credit_model",
         return_value=mock_credit_model,
     )
@@ -393,7 +393,7 @@ def test_get_graph(
     )
     mocker.patch(
-        "backend.server.routers.v1.graph_db.get_graph",
+        "backend.api.features.v1.graph_db.get_graph",
         return_value=mock_graph,
     )
@@ -415,7 +415,7 @@ def test_get_graph_not_found(
 ) -> None:
     """Test get graph with non-existent ID"""
     mocker.patch(
-        "backend.server.routers.v1.graph_db.get_graph",
+        "backend.api.features.v1.graph_db.get_graph",
         return_value=None,
     )
@@ -443,15 +443,15 @@ def test_delete_graph(
     )
     mocker.patch(
-        "backend.server.routers.v1.graph_db.get_graph",
+        "backend.api.features.v1.graph_db.get_graph",
         return_value=mock_graph,
     )
     mocker.patch(
-        "backend.server.routers.v1.on_graph_deactivate",
+        "backend.api.features.v1.on_graph_deactivate",
         return_value=None,
     )
     mocker.patch(
-        "backend.server.routers.v1.graph_db.delete_graph",
+        "backend.api.features.v1.graph_db.delete_graph",
         return_value=3,  # Number of versions deleted
     )
@@ -498,8 +498,8 @@ async def test_upload_file_success(test_user_id: str):
     )
     # Mock dependencies
-    with patch("backend.server.routers.v1.scan_content_safe") as mock_scan, patch(
-        "backend.server.routers.v1.get_cloud_storage_handler"
+    with patch("backend.api.features.v1.scan_content_safe") as mock_scan, patch(
+        "backend.api.features.v1.get_cloud_storage_handler"
     ) as mock_handler_getter:
         mock_scan.return_value = None
@@ -550,8 +550,8 @@ async def test_upload_file_no_filename(test_user_id: str):
         ),
     )
-    with patch("backend.server.routers.v1.scan_content_safe") as mock_scan, patch(
-        "backend.server.routers.v1.get_cloud_storage_handler"
+    with patch("backend.api.features.v1.scan_content_safe") as mock_scan, patch(
+        "backend.api.features.v1.get_cloud_storage_handler"
     ) as mock_handler_getter:
         mock_scan.return_value = None
@@ -610,7 +610,7 @@ async def test_upload_file_virus_scan_failure(test_user_id: str):
         headers=starlette.datastructures.Headers({"content-type": "text/plain"}),
     )
-    with patch("backend.server.routers.v1.scan_content_safe") as mock_scan:
+    with patch("backend.api.features.v1.scan_content_safe") as mock_scan:
         # Mock virus scan to raise exception
         mock_scan.side_effect = RuntimeError("Virus detected!")
@@ -631,8 +631,8 @@ async def test_upload_file_cloud_storage_failure(test_user_id: str):
         headers=starlette.datastructures.Headers({"content-type": "text/plain"}),
     )
-    with patch("backend.server.routers.v1.scan_content_safe") as mock_scan, patch(
-        "backend.server.routers.v1.get_cloud_storage_handler"
+    with patch("backend.api.features.v1.scan_content_safe") as mock_scan, patch(
+        "backend.api.features.v1.get_cloud_storage_handler"
    ) as mock_handler_getter:
         mock_scan.return_value = None
@@ -678,8 +678,8 @@ async def test_upload_file_gcs_not_configured_fallback(test_user_id: str):
         headers=starlette.datastructures.Headers({"content-type": "text/plain"}),
    )
-    with patch("backend.server.routers.v1.scan_content_safe") as mock_scan, patch(
-        "backend.server.routers.v1.get_cloud_storage_handler"
+    with patch("backend.api.features.v1.scan_content_safe") as mock_scan, patch(
+        "backend.api.features.v1.get_cloud_storage_handler"
     ) as mock_handler_getter:
         mock_scan.return_value = None

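Note on the patch-target changes above: with pytest-mock, the string passed to `mocker.patch` must name the module where the function is *looked up*, not where it is defined, so every target has to follow the move from `backend.server.routers.v1` to `backend.api.features.v1`. A minimal sketch of the rule, assuming the route module does `from backend.data.credit import get_auto_top_up` (that import path is illustrative):

```python
def test_get_auto_top_up(mocker):
    # Patch the name on the module that *uses* it; patching the defining
    # module would leave the route's already-imported reference untouched.
    mocker.patch(
        "backend.api.features.v1.get_auto_top_up",
        return_value={"threshold": 100, "amount": 500},
    )
```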

@@ -3,7 +3,7 @@ from fastapi import FastAPI
 from fastapi.testclient import TestClient
 from starlette.applications import Starlette
-from backend.server.middleware.security import SecurityHeadersMiddleware
+from backend.api.middleware.security import SecurityHeadersMiddleware
 @pytest.fixture

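The `SecurityHeadersMiddleware` implementation itself is not part of this diff; for orientation, a response-header middleware of this kind is typically a few lines of Starlette. A sketch, with an illustrative header set that is not necessarily the project's actual policy:

```python
from fastapi import FastAPI
from starlette.middleware.base import BaseHTTPMiddleware

class SecurityHeadersMiddleware(BaseHTTPMiddleware):
    async def dispatch(self, request, call_next):
        response = await call_next(request)
        # Attach conservative defaults to every response.
        response.headers["X-Content-Type-Options"] = "nosniff"
        response.headers["X-Frame-Options"] = "DENY"
        return response

app = FastAPI()
app.add_middleware(SecurityHeadersMiddleware)
```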

@@ -16,36 +16,33 @@ from fastapi.middleware.gzip import GZipMiddleware
 from fastapi.routing import APIRoute
 from prisma.errors import PrismaError
+import backend.api.features.admin.credit_admin_routes
+import backend.api.features.admin.execution_analytics_routes
+import backend.api.features.admin.store_admin_routes
+import backend.api.features.builder
+import backend.api.features.builder.routes
+import backend.api.features.chat.routes as chat_routes
+import backend.api.features.executions.review.routes
+import backend.api.features.library.db
+import backend.api.features.library.model
+import backend.api.features.library.routes
+import backend.api.features.oauth
+import backend.api.features.otto.routes
+import backend.api.features.postmark.postmark
+import backend.api.features.store.model
+import backend.api.features.store.routes
+import backend.api.features.v1
 import backend.data.block
 import backend.data.db
 import backend.data.graph
 import backend.data.user
 import backend.integrations.webhooks.utils
-import backend.server.routers.oauth
-import backend.server.routers.postmark.postmark
-import backend.server.routers.v1
-import backend.server.v2.admin.credit_admin_routes
-import backend.server.v2.admin.execution_analytics_routes
-import backend.server.v2.admin.store_admin_routes
-import backend.server.v2.builder
-import backend.server.v2.builder.routes
-import backend.server.v2.chat.routes as chat_routes
-import backend.server.v2.executions.review.routes
-import backend.server.v2.library.db
-import backend.server.v2.library.model
-import backend.server.v2.library.routes
-import backend.server.v2.otto.routes
-import backend.server.v2.store.model
-import backend.server.v2.store.routes
 import backend.util.service
 import backend.util.settings
-from backend.blocks.llm import LlmModel
+from backend.blocks.llm import DEFAULT_LLM_MODEL
 from backend.data.model import Credentials
 from backend.integrations.providers import ProviderName
 from backend.monitoring.instrumentation import instrument_fastapi
-from backend.server.external.api import external_app
-from backend.server.middleware.security import SecurityHeadersMiddleware
-from backend.server.utils.cors import build_cors_params
 from backend.util import json
 from backend.util.cloud_storage import shutdown_cloud_storage_handler
 from backend.util.exceptions import (
@@ -56,6 +53,13 @@ from backend.util.exceptions import (
 from backend.util.feature_flag import initialize_launchdarkly, shutdown_launchdarkly
 from backend.util.service import UnhealthyServiceError
+from .external.fastapi_app import external_api
+from .features.analytics import router as analytics_router
+from .features.integrations.router import router as integrations_router
+from .middleware.security import SecurityHeadersMiddleware
+from .utils.cors import build_cors_params
+from .utils.openapi import sort_openapi
 settings = backend.util.settings.Settings()
 logger = logging.getLogger(__name__)
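The new `from .xyz import ...` lines are package-relative imports: since this module now lives inside the `backend.api` package, they resolve to the same modules as their absolute forms. An illustrative equivalence:

```python
# Inside backend/api/app.py (or any module of the backend.api package),
# these two imports are interchangeable:
from backend.api.utils.cors import build_cors_params  # absolute form
from .utils.cors import build_cors_params  # relative form; "." is backend.api
```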
@@ -109,7 +113,7 @@ async def lifespan_context(app: fastapi.FastAPI):
     await backend.data.user.migrate_and_encrypt_user_integrations()
     await backend.data.graph.fix_llm_provider_credentials()
-    await backend.data.graph.migrate_llm_models(LlmModel.GPT4O)
+    await backend.data.graph.migrate_llm_models(DEFAULT_LLM_MODEL)
     await backend.integrations.webhooks.utils.migrate_legacy_triggered_graphs()
     with launch_darkly_context():
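The migration call now takes its target from the shared `DEFAULT_LLM_MODEL` constant instead of hard-coding `LlmModel.GPT4O`, so the default model is defined once in `backend.blocks.llm`. A hedged sketch of the pattern; member names and value strings below are illustrative, not the project's actual definitions:

```python
from enum import Enum

class LlmModel(str, Enum):
    GPT4O = "gpt-4o"  # value string assumed for illustration
    GPT5 = "gpt-5"    # hypothetical newer member

# Single source of truth: migrations and schema defaults import this
# constant, so changing the platform default is a one-line edit.
DEFAULT_LLM_MODEL = LlmModel.GPT5
```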
@@ -176,6 +180,9 @@ app.add_middleware(GZipMiddleware, minimum_size=50_000)  # 50KB threshold
 # Add 401 responses to authenticated endpoints in OpenAPI spec
 add_auth_responses_to_openapi(app)
+# Sort OpenAPI schema to eliminate diff on refactors
+sort_openapi(app)
 # Add Prometheus instrumentation
 instrument_fastapi(
     app,
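`sort_openapi` itself is not shown in this diff; its comment says it exists to keep the generated spec stable across refactors. One plausible implementation recursively sorts the spec's mapping keys so that router registration order stops leaking into the document. A sketch under that assumption, not the actual `backend.api.utils.openapi` code:

```python
import fastapi

def sort_openapi(app: fastapi.FastAPI) -> None:
    def deep_sort(value):
        # Sort dict keys recursively; list order is preserved because it
        # can be meaningful (e.g. enum values, security requirements).
        if isinstance(value, dict):
            return {key: deep_sort(value[key]) for key in sorted(value)}
        if isinstance(value, list):
            return [deep_sort(item) for item in value]
        return value

    # app.openapi() builds and caches the schema; replace the cached copy
    # with a key-sorted one so every serialization comes out deterministic.
    app.openapi_schema = deep_sort(app.openapi())
```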
@@ -254,42 +261,52 @@ app.add_exception_handler(MissingConfigError, handle_internal_http_error(503))
 app.add_exception_handler(ValueError, handle_internal_http_error(400))
 app.add_exception_handler(Exception, handle_internal_http_error(500))
-app.include_router(backend.server.routers.v1.v1_router, tags=["v1"], prefix="/api")
+app.include_router(backend.api.features.v1.v1_router, tags=["v1"], prefix="/api")
 app.include_router(
-    backend.server.v2.store.routes.router, tags=["v2"], prefix="/api/store"
+    integrations_router,
+    prefix="/api/integrations",
+    tags=["v1", "integrations"],
 )
 app.include_router(
-    backend.server.v2.builder.routes.router, tags=["v2"], prefix="/api/builder"
+    analytics_router,
+    prefix="/api/analytics",
+    tags=["analytics"],
 )
 app.include_router(
-    backend.server.v2.admin.store_admin_routes.router,
+    backend.api.features.store.routes.router, tags=["v2"], prefix="/api/store"
+)
+app.include_router(
+    backend.api.features.builder.routes.router, tags=["v2"], prefix="/api/builder"
+)
+app.include_router(
+    backend.api.features.admin.store_admin_routes.router,
     tags=["v2", "admin"],
     prefix="/api/store",
 )
 app.include_router(
-    backend.server.v2.admin.credit_admin_routes.router,
+    backend.api.features.admin.credit_admin_routes.router,
     tags=["v2", "admin"],
     prefix="/api/credits",
 )
 app.include_router(
-    backend.server.v2.admin.execution_analytics_routes.router,
+    backend.api.features.admin.execution_analytics_routes.router,
     tags=["v2", "admin"],
     prefix="/api/executions",
 )
 app.include_router(
-    backend.server.v2.executions.review.routes.router,
+    backend.api.features.executions.review.routes.router,
     tags=["v2", "executions", "review"],
     prefix="/api/review",
 )
 app.include_router(
-    backend.server.v2.library.routes.router, tags=["v2"], prefix="/api/library"
+    backend.api.features.library.routes.router, tags=["v2"], prefix="/api/library"
 )
 app.include_router(
-    backend.server.v2.otto.routes.router, tags=["v2", "otto"], prefix="/api/otto"
+    backend.api.features.otto.routes.router, tags=["v2", "otto"], prefix="/api/otto"
 )
 app.include_router(
-    backend.server.routers.postmark.postmark.router,
+    backend.api.features.postmark.postmark.router,
     tags=["v1", "email"],
     prefix="/api/email",
 )
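All of the registrations above use the standard FastAPI pattern: `include_router` merges a router's endpoints into the app, `prefix` namespaces the paths, and `tags` groups them in the OpenAPI docs. A toy example:

```python
from fastapi import APIRouter, FastAPI

router = APIRouter()

@router.get("/ping")
def ping() -> dict[str, str]:
    return {"status": "ok"}

app = FastAPI()
app.include_router(router, prefix="/api/integrations", tags=["v1", "integrations"])
# The endpoint is now served at GET /api/integrations/ping.
```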
@@ -299,12 +316,12 @@ app.include_router(
     prefix="/api/chat",
 )
 app.include_router(
-    backend.server.routers.oauth.router,
+    backend.api.features.oauth.router,
     tags=["oauth"],
     prefix="/api/oauth",
 )
-app.mount("/external-api", external_app)
+app.mount("/external-api", external_api)
 @app.get(path="/health", tags=["health"], dependencies=[])
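Unlike `include_router`, `app.mount` attaches a fully independent ASGI application (its own middleware stack, its own OpenAPI schema) under a path prefix, which is why the external API stays a separate FastAPI instance. A sketch:

```python
from fastapi import FastAPI

external_api = FastAPI(title="External API")  # standalone sub-application

@external_api.get("/status")
def status() -> dict[str, bool]:
    return {"up": True}

app = FastAPI()
app.mount("/external-api", external_api)
# Served at GET /external-api/status, with its own docs at /external-api/docs.
```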
@@ -357,7 +374,7 @@ class AgentServer(backend.util.service.AppProcess):
         graph_version: Optional[int] = None,
         node_input: Optional[dict[str, Any]] = None,
     ):
-        return await backend.server.routers.v1.execute_graph(
+        return await backend.api.features.v1.execute_graph(
             user_id=user_id,
             graph_id=graph_id,
             graph_version=graph_version,
@@ -372,16 +389,16 @@ class AgentServer(backend.util.service.AppProcess):
         user_id: str,
         for_export: bool = False,
     ):
-        return await backend.server.routers.v1.get_graph(
+        return await backend.api.features.v1.get_graph(
             graph_id, user_id, graph_version, for_export
         )
     @staticmethod
     async def test_create_graph(
-        create_graph: backend.server.routers.v1.CreateGraph,
+        create_graph: backend.api.features.v1.CreateGraph,
         user_id: str,
     ):
-        return await backend.server.routers.v1.create_new_graph(create_graph, user_id)
+        return await backend.api.features.v1.create_new_graph(create_graph, user_id)
     @staticmethod
     async def test_get_graph_run_status(graph_exec_id: str, user_id: str):
@@ -397,45 +414,45 @@ class AgentServer(backend.util.service.AppProcess):
     @staticmethod
     async def test_delete_graph(graph_id: str, user_id: str):
         """Used for clean-up after a test run"""
-        await backend.server.v2.library.db.delete_library_agent_by_graph_id(
+        await backend.api.features.library.db.delete_library_agent_by_graph_id(
             graph_id=graph_id, user_id=user_id
         )
-        return await backend.server.routers.v1.delete_graph(graph_id, user_id)
+        return await backend.api.features.v1.delete_graph(graph_id, user_id)
     @staticmethod
     async def test_get_presets(user_id: str, page: int = 1, page_size: int = 10):
-        return await backend.server.v2.library.routes.presets.list_presets(
+        return await backend.api.features.library.routes.presets.list_presets(
             user_id=user_id, page=page, page_size=page_size
         )
     @staticmethod
     async def test_get_preset(preset_id: str, user_id: str):
-        return await backend.server.v2.library.routes.presets.get_preset(
+        return await backend.api.features.library.routes.presets.get_preset(
             preset_id=preset_id, user_id=user_id
         )
     @staticmethod
     async def test_create_preset(
-        preset: backend.server.v2.library.model.LibraryAgentPresetCreatable,
+        preset: backend.api.features.library.model.LibraryAgentPresetCreatable,
         user_id: str,
     ):
-        return await backend.server.v2.library.routes.presets.create_preset(
+        return await backend.api.features.library.routes.presets.create_preset(
             preset=preset, user_id=user_id
         )
     @staticmethod
     async def test_update_preset(
         preset_id: str,
-        preset: backend.server.v2.library.model.LibraryAgentPresetUpdatable,
+        preset: backend.api.features.library.model.LibraryAgentPresetUpdatable,
         user_id: str,
     ):
-        return await backend.server.v2.library.routes.presets.update_preset(
+        return await backend.api.features.library.routes.presets.update_preset(
             preset_id=preset_id, preset=preset, user_id=user_id
         )
     @staticmethod
     async def test_delete_preset(preset_id: str, user_id: str):
-        return await backend.server.v2.library.routes.presets.delete_preset(
+        return await backend.api.features.library.routes.presets.delete_preset(
             preset_id=preset_id, user_id=user_id
         )
@@ -445,7 +462,7 @@ class AgentServer(backend.util.service.AppProcess):
         user_id: str,
         inputs: Optional[dict[str, Any]] = None,
     ):
-        return await backend.server.v2.library.routes.presets.execute_preset(
+        return await backend.api.features.library.routes.presets.execute_preset(
             preset_id=preset_id,
             user_id=user_id,
             inputs=inputs or {},
@@ -454,18 +471,20 @@ class AgentServer(backend.util.service.AppProcess):
     @staticmethod
     async def test_create_store_listing(
-        request: backend.server.v2.store.model.StoreSubmissionRequest, user_id: str
+        request: backend.api.features.store.model.StoreSubmissionRequest, user_id: str
     ):
-        return await backend.server.v2.store.routes.create_submission(request, user_id)
+        return await backend.api.features.store.routes.create_submission(
+            request, user_id
+        )
     ### ADMIN ###
     @staticmethod
     async def test_review_store_listing(
-        request: backend.server.v2.store.model.ReviewSubmissionRequest,
+        request: backend.api.features.store.model.ReviewSubmissionRequest,
         user_id: str,
     ):
-        return await backend.server.v2.admin.store_admin_routes.review_submission(
+        return await backend.api.features.admin.store_admin_routes.review_submission(
             request.store_listing_version_id, request, user_id
         )
@@ -475,10 +494,7 @@ class AgentServer(backend.util.service.AppProcess):
         provider: ProviderName,
         credentials: Credentials,
     ) -> Credentials:
-        from backend.server.integrations.router import (
-            create_credentials,
-            get_credential,
-        )
+        from .features.integrations.router import create_credentials, get_credential
         try:
             return await create_credentials(

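The credentials import above is deliberately kept inside the method. Function-local imports like this are the usual way to break an import cycle: the router module imports from the app package at startup, so importing it back at module scope could fail. An illustrative shape, assumed to live inside the `backend.api` package (argument shape is illustrative):

```python
class AgentServer:
    @staticmethod
    async def test_create_credentials(provider, credentials):
        # Deferred to call time, after both modules finished importing;
        # a module-level import here could form a circular dependency.
        from .features.integrations.router import create_credentials

        return await create_credentials(provider, credentials)
```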

@@ -8,7 +8,7 @@ import pytest
 from fastapi import HTTPException, Request
 from starlette.status import HTTP_401_UNAUTHORIZED, HTTP_403_FORBIDDEN
-from backend.server.utils.api_key_auth import APIKeyAuthenticator
+from backend.api.utils.api_key_auth import APIKeyAuthenticator
 from backend.util.exceptions import MissingConfigError

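`APIKeyAuthenticator` itself is outside this diff; for context, a minimal FastAPI API-key guard built from the stock `APIKeyHeader` dependency looks roughly like this. All names, the header, and the comparison below are illustrative, not the project's actual API:

```python
from typing import Optional

from fastapi import Depends, FastAPI, HTTPException
from fastapi.security import APIKeyHeader
from starlette.status import HTTP_401_UNAUTHORIZED

api_key_header = APIKeyHeader(name="X-API-Key", auto_error=False)

async def require_api_key(key: Optional[str] = Depends(api_key_header)) -> str:
    # Compare against the configured secret; reject missing or wrong keys.
    if key != "expected-secret":
        raise HTTPException(status_code=HTTP_401_UNAUTHORIZED)
    return key

app = FastAPI()

@app.get("/protected", dependencies=[Depends(require_api_key)])
def protected() -> dict[str, bool]:
    return {"ok": True}
```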

@@ -1,6 +1,6 @@
 import pytest
-from backend.server.utils.cors import build_cors_params
+from backend.api.utils.cors import build_cors_params
 from backend.util.settings import AppEnvironment

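`build_cors_params`' exact signature is not visible in this diff, but whatever it returns ultimately configures Starlette's `CORSMiddleware`, along these lines (origins and kwargs are illustrative and would normally vary per `AppEnvironment`):

```python
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

app = FastAPI()
app.add_middleware(
    CORSMiddleware,
    allow_origins=["https://platform.agpt.co"],  # tightened list in production
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)
```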
Some files were not shown because too many files have changed in this diff.