Compare commits


25 Commits

Author SHA1 Message Date
Swifty
b46b9c7b97 Merge branch 'dev' into hackathon-copilot-backend 2026-01-09 16:31:39 +01:00
Swifty
843c487500 feat(backend): add prisma types stub generator for pyright compatibility (#11736)
Prisma's generated `types.py` file is 57,000+ lines of complex,
recursive TypedDict definitions that exhaust Pyright's type-inference
budget. This produces spurious type errors and makes the type checker
unreliable.

### Changes 🏗️

- Add `gen_prisma_types_stub.py` script that generates a lightweight
`.pyi` stub file
- The stub preserves safe types (Literal, TypeVar) while collapsing
complex TypedDicts to `dict[str, Any]` (see the sketch below)
- Integrate stub generation into all workflows that run `prisma
generate`:
  - `platform-backend-ci.yml`
  - `claude.yml`
  - `claude-dependabot.yml`
  - `copilot-setup-steps.yml`
  - `docker-compose.platform.yml`
  - `Dockerfile`
  - `Makefile` (migrate & reset-db targets)
  - `linter.py` (lint & format commands)
- Add `gen-prisma-stub` poetry script entry
- Fix two pre-existing type errors that were previously masked:
  - `store/db.py`: Replace private type `_StoreListingVersion_version_OrderByInput` with dict literal
  - `airtable/_webhook.py`: Add cast for `Serializable` type
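
As a rough illustration of the collapsing approach (not the actual `gen_prisma_types_stub.py`, whose details may differ), a generator of this kind can walk the generated module's AST, keep simple alias assignments such as `Literal` unions and `TypeVar` definitions, and replace each `TypedDict` class with a `dict[str, Any]` alias:

```python
# Hedged sketch only: input/output paths and the filtering rules here are
# assumptions, not the real script's behaviour.
import ast
from pathlib import Path


def build_stub(types_py: Path) -> str:
    tree = ast.parse(types_py.read_text())
    out = ["from typing import Any, Literal, TypeVar  # noqa: F401", ""]
    for node in tree.body:
        # Keep simple top-level assignments (Literal aliases, TypeVar defs).
        if isinstance(node, ast.Assign) and isinstance(node.targets[0], ast.Name):
            out.append(ast.unparse(node))
        # Collapse each TypedDict class into a plain dict alias.
        elif isinstance(node, ast.ClassDef) and any(
            isinstance(b, ast.Name) and b.id == "TypedDict" for b in node.bases
        ):
            out.append(f"{node.name} = dict[str, Any]")
    return "\n".join(out) + "\n"


if __name__ == "__main__":
    src = Path("prisma_generated/types.py")  # hypothetical location
    src.with_suffix(".pyi").write_text(build_stub(src))
```

Since type checkers prefer a `.pyi` stub over the module it shadows, Pyright then reads the lightweight stub instead of the 57,000-line `types.py`, which is what keeps its inference budget intact.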

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run `poetry run format` - passes with 0 errors (down from 57+)
  - [x] Run `poetry run lint` - passes with 0 errors
  - [x] Run `poetry run gen-prisma-stub` - generates stub successfully
  - [x] Verify stub file is created at correct location with proper content

#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

* **Chores**
* Added a lightweight Prisma type-stub generator and integrated it into
build, lint, CI/CD, and container workflows.
* Build, migration, formatting, and lint steps now generate these stubs
to improve type-checking performance and reduce overhead during builds
and deployments.
  * Exposed a project command to run stub generation manually.

<sub>✏️ Tip: You can customize this high-level summary in your review
settings.</sub>
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2026-01-09 16:31:10 +01:00
Nicholas Tindle
47a3a5ef41 feat(backend,frontend): optional credentials flag for blocks at agent level (#11716)
This feature allows agent makers to mark credential fields as optional.
When credentials are not configured for an optional block, the block
will be skipped during execution rather than causing a validation error.

**Use case:** An agent with multiple notification channels (Discord,
Twilio, Slack) where the user only needs to configure one - unconfigured
channels are simply skipped.

### Changes 🏗️

#### Backend

**Data Model Changes:**
- `backend/data/graph.py`: Added `credentials_optional` property to
`Node` model that reads from node metadata
- `backend/data/execution.py`: Added `nodes_to_skip` field to
`GraphExecutionEntry` model to track nodes that should be skipped

**Validation Changes:**
- `backend/executor/utils.py`:
  - Updated `_validate_node_input_credentials()` to return a tuple of `(credential_errors, nodes_to_skip)`
  - Nodes with `credentials_optional=True` and missing credentials are added to `nodes_to_skip` instead of raising validation errors
  - Updated `validate_graph_with_credentials()` to propagate the `nodes_to_skip` set
  - Updated `validate_and_construct_node_execution_input()` to return `nodes_to_skip`
  - Updated `add_graph_execution()` to pass `nodes_to_skip` to the execution entry

**Execution Changes:**
- `backend/executor/manager.py`:
  - Added skip logic in the `_on_graph_execution()` dispatch loop
  - When a node is in `nodes_to_skip`, it is marked as `COMPLETED` without execution (see the sketch below)
  - No outputs are produced, so downstream nodes won't trigger
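
A minimal sketch of this behaviour, with simplified names (the real `backend/executor` code tracks more state and returns richer error objects):

```python
# Hedged sketch: Node fields and function signatures are assumptions, not the
# exact backend API.
from dataclasses import dataclass, field


@dataclass
class Node:
    id: str
    credential_fields: list[str] = field(default_factory=list)
    credentials_optional: bool = False  # read from node metadata in the real model


def validate_node_credentials(
    nodes: list[Node], configured: set[str]
) -> tuple[dict[str, str], set[str]]:
    """Return (credential_errors, nodes_to_skip), as described above."""
    errors: dict[str, str] = {}
    nodes_to_skip: set[str] = set()
    for node in nodes:
        missing = [f for f in node.credential_fields if f not in configured]
        if not missing:
            continue
        if node.credentials_optional:
            nodes_to_skip.add(node.id)  # skip instead of raising a validation error
        else:
            errors[node.id] = f"missing credentials: {missing}"
    return errors, nodes_to_skip


def dispatch_status(node: Node, nodes_to_skip: set[str]) -> str:
    # A skipped node is marked COMPLETED without running, so it emits no
    # outputs and its downstream nodes never trigger.
    return "COMPLETED (skipped)" if node.id in nodes_to_skip else "EXECUTE"
```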

#### Frontend

**Node Store:**
- `frontend/src/app/(platform)/build/stores/nodeStore.ts`:
  - Added `credentials_optional` to node metadata serialization in `convertCustomNodeToBackendNode()`
  - Added `getCredentialsOptional()` and `setCredentialsOptional()` helper methods

**Credential Field Component:**
- `frontend/src/components/renderers/input-renderer/fields/CredentialField/CredentialField.tsx`:
  - Added "Optional - skip block if not configured" switch toggle
  - Switch controls the `credentials_optional` metadata flag
  - Placeholder text updates based on optional state

**Credential Field Hook:**
- `frontend/src/components/renderers/input-renderer/fields/CredentialField/useCredentialField.ts`:
  - Added `disableAutoSelect` parameter
  - When credentials are optional, auto-selection of credentials is disabled

**Feature Flags:**
- `frontend/src/services/feature-flags/use-get-flag.ts`: Minor refactor
(condition ordering)

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Build an agent using smart decision maker and downstream blocks to test this

<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> Introduces optional credentials across graph execution and UI,
allowing nodes to be skipped (no outputs, no downstream triggers) when
their credentials are not configured.
> 
> - Backend
> - Adds `Node.credentials_optional` (from node `metadata`) and computes
required credential fields in `Graph.credentials_input_schema` based on
usage.
> - Validates credentials with `_validate_node_input_credentials` →
returns `(errors, nodes_to_skip)`; plumbs `nodes_to_skip` through
`validate_graph_with_credentials`,
`_construct_starting_node_execution_input`,
`validate_and_construct_node_execution_input`, and `add_graph_execution`
into `GraphExecutionEntry`.
> - Executor: dispatch loop skips nodes in `nodes_to_skip` (marks
`COMPLETED`); `execute_node`/`on_node_execution` accept `nodes_to_skip`;
`SmartDecisionMakerBlock.run` filters tool functions whose
`_sink_node_id` is in `nodes_to_skip` and errors only if all tools are
filtered.
> - Models: `GraphExecutionEntry` gains `nodes_to_skip` field. Tests and
snapshots updated accordingly.
> 
> - Frontend
> - Builder: credential field uses `custom/credential_field` with an
"Optional – skip block if not configured" toggle; `nodeStore` persists
`credentials_optional` and history; UI hides optional toggle in run
dialogs.
> - Run dialogs: compute required credentials from
`credentials_input_schema.required`; allow selecting "None"; avoid
auto-select for optional; filter out incomplete creds before execute.
>   - Minor schema/UI wiring updates (`uiSchema`, form context flags).
> 
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
5e01fd6a3e. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <ntindle@users.noreply.github.com>
2026-01-09 14:11:35 +00:00
Ubbe
ec00aa951a fix(frontend): agent favorites layout (#11733)
## Changes 🏗️

<img width="800" height="744" alt="Screenshot 2026-01-09 at 16 07 08"
src="https://github.com/user-attachments/assets/034c97e2-18f3-441c-a13d-71f668ad672f"
/>

- Remove the feature flag for agent favorites (_keep it always visible_)
- Fix the card layout so the ❤️ icon appears next to the `...` menu
- Remove icons from toasts

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run the app locally and check the above


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

* **New Features**
* Favorites now respond to the current search term and are available to
all users (no feature-flag).

* **UI/UX Improvements**
* Redesigned Favorites section with simplified header, inline agent
counts, updated spacing/dividers, and removal of skeleton placeholders.
  * Favorite button repositioned and visually simplified on agent cards.
* Toast visuals simplified by removing per-type icons and adjusting
close-button positioning.

<sub>✏️ Tip: You can customize this high-level summary in your review
settings.</sub>
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2026-01-09 18:52:07 +07:00
Zamil Majdy
36fb1ea004 fix(platform): store submission validation and marketplace improvements (#11706)
## Summary

Major improvements to AutoGPT Platform store submission deletion,
creator detection, and marketplace functionality. This PR addresses
critical issues with submission management and significantly improves
performance.

### 🔧 **Store Submission Deletion Issues Fixed**

**Problems Solved**:
-  **Wrong deletion granularity**: Deleting entire `StoreListing` (all
versions) when users expected to delete individual submissions
-  **"Graph not found" errors**: Cascade deletion removing AgentGraphs
that were still referenced
-  **Multiple submissions deleted**: When removing one submission, all
submissions for that agent were removed
-  **Deletion of approved content**: Users could accidentally remove
live store content

**Solutions Implemented**:
-  **Granular deletion**: Now deletes individual `StoreListingVersion`
records instead of entire listings (see the sketch after this list)
-  **Protected approved content**: Prevents deletion of approved
submissions to keep store content safe
-  **Automatic cleanup**: Empty listings are automatically removed when
last version is deleted
-  **Simplified logic**: Reduced deletion function from 85 lines to 32
lines for better maintainability
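
A minimal sketch of the granular deletion flow, assuming simplified Prisma field names (the real logic in `backend/api/features/store/db.py` handles more cases):

```python
# Sketch only: submissionStatus / storeListingId field names are assumptions.
from prisma.models import StoreListing, StoreListingVersion


async def delete_store_submission(version_id: str) -> None:
    version = await StoreListingVersion.prisma().find_unique(where={"id": version_id})
    if version is None:
        raise ValueError("Submission not found")
    if version.submissionStatus == "APPROVED":
        raise ValueError("Approved submissions cannot be deleted")
    # Delete only this version; the listing and its AgentGraph stay intact.
    await StoreListingVersion.prisma().delete(where={"id": version_id})
    # Automatic cleanup: drop the listing once its last version is gone.
    remaining = await StoreListingVersion.prisma().count(
        where={"storeListingId": version.storeListingId}
    )
    if remaining == 0:
        await StoreListing.prisma().delete(where={"id": version.storeListingId})
```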

### 🔧 **Creator Detection Performance Issues Fixed**

**Problems Solved**:
-  **Inefficient API calls**: Fetching ALL user submissions just to
check if they own one specific agent
-  **Complex logic**: Convoluted creator detection requiring multiple
database queries
-  **Performance impact**: Especially bad for non-creators who would
never need this data

**Solutions Implemented**:
-  **Added `owner_user_id` field**: Direct ownership reference in
`LibraryAgent` model
-  **Simple ownership check**: `owner_user_id === user.id` instead of
complex submission fetching (see the sketch below)
-  **90%+ performance improvement**: Massive reduction in unnecessary
API calls for non-creators
-  **Optimized data fetching**: Only fetch submissions when user is
creator AND has marketplace listing
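
For illustration, the ownership check now reduces to a single field comparison (model and field names simplified from `backend/api/features/library/model.py`):

```python
# Sketch only; the real LibraryAgent model carries many more fields.
from pydantic import BaseModel


class LibraryAgent(BaseModel):
    id: str
    owner_user_id: str  # new direct ownership reference


def is_creator(agent: LibraryAgent, user_id: str) -> bool:
    # One in-memory comparison instead of fetching all of the user's submissions.
    return agent.owner_user_id == user_id
```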

### 🔧 **Original Store Submission Validation Issues (BUILDER-59F)**
Fixes "Agent not found for this user. User ID: ..., Agent ID: , Version:
0" errors:

- **Backend validation**: Added Pydantic validation for `agent_id`
(min_length=1) and `agent_version` (>0), as sketched after this list
- **Frontend validation**: Pre-submission validation with user-friendly
error messages
- **Agent selection flow**: Fixed `agentId` not being set from
`selectedAgentId`
- **State management**: Prevented state reset conflicts clearing
selected agent
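
A minimal sketch of the request-model validation (class name assumed for illustration; the real model lives in `backend/api/features/store/model.py`):

```python
from pydantic import BaseModel, Field, ValidationError


class StoreSubmissionRequest(BaseModel):  # illustrative name, not the real model
    agent_id: str = Field(min_length=1)   # rejects the empty agent_id from BUILDER-59F
    agent_version: int = Field(gt=0)      # rejects agent_version == 0


try:
    StoreSubmissionRequest(agent_id="", agent_version=0)
except ValidationError as e:
    print(e)  # both fields fail before any database lookup happens
```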

### 🔧 **Marketplace Display Improvements**
Enhanced version history and changelog display:

- Updated title from "Changelog" to "Version history"
- Added "Last updated X ago" with proper relative time formatting  
- Display version numbers as "Version X.0" format
- Replaced all hardcoded values with dynamic API data
- Improved text sizes and layout structure

### 📁 **Files Changed**

**Backend Changes**:
- `backend/api/features/store/db.py` - Simplified deletion logic, added
approval protection
- `backend/api/features/store/model.py` - Added `listing_id` field,
Pydantic validation
- `backend/api/features/library/model.py` - Added `owner_user_id` field
for efficient creator detection
- All test files - Updated with new required fields

**Frontend Changes**:
- `useMarketplaceUpdate.ts` - Optimized creator detection logic 
- `MainDashboardPage.tsx` - Added `listing_id` mapping for proper type
safety
- `useAgentTableRow.ts` - Updated deletion logic to use
`store_listing_version_id`
- `usePublishAgentModal.ts` - Fixed state reset conflicts
- Marketplace components - Enhanced version history display

###  **Benefits**

**Performance**:
- 🚀 **90%+ reduction** in unnecessary API calls for creator detection
- 🚀 **Instant ownership checks** (no database queries needed)
- 🚀 **Optimized submissions fetching** (only when needed)

**User Experience**: 
-  **Granular submission control** (delete individual versions, not
entire listings)
-  **Protected approved content** (prevents accidental store content
removal)
-  **Better error prevention** (no more "Graph not found" errors)
-  **Clear validation messages** (user-friendly error feedback)

**Code Quality**:
-  **Simplified deletion logic** (85 lines → 32 lines)
-  **Better type safety** (proper `listing_id` field usage)  
-  **Cleaner creator detection** (explicit ownership vs inferred)
-  **Automatic cleanup** (empty listings removed automatically)

### 🧪 **Testing**
- [x] Backend validation rejects empty agent_id and zero agent_version
- [x] Frontend TypeScript compilation passes
- [x] Store submission works from both creator dashboard and "become a
creator" flows
- [x] Granular submission deletion works correctly
- [x] Approved submissions are protected from deletion
- [x] Creator detection is fast and accurate
- [x] Marketplace displays version history correctly

**Breaking Changes**: None - All changes are additive and backwards
compatible.

Fixes critical submission deletion issues, improves performance
significantly, and enhances user experience across the platform.

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

* **New Features**
  * Agent ownership is now tracked and exposed across the platform.
* Store submissions and versions now include a required listing_id to
preserve listing linkage.

* **Bug Fixes**
* Prevent deletion of APPROVED submissions; remove empty listings after
deletions.
* Edits restricted to PENDING submissions with clearer invalid-operation
messages.

* **Improvements**
* Stronger publish validation and UX guards; deduplicated images and
modal open/reset refinements.
* Version history shows relative "Last updated" times and version
badges.

* **Tests**
* E2E tests updated to target pending-submission flows for edit/delete.

<sub>✏️ Tip: You can customize this high-level summary in your review
settings.</sub>
<!-- end of auto-generated comment: release notes by coderabbit.ai -->

---------

Co-authored-by: Claude <noreply@anthropic.com>
2026-01-08 19:11:38 +00:00
Swifty
08c3f19fe4 Merge branch 'dev' into hackathon-copilot-backend 2026-01-08 12:53:40 +01:00
Swifty
a23f766ed7 Merge branch 'dev' into hackathon-copilot-backend 2026-01-08 12:44:58 +01:00
Ubbe
fc25e008b3 feat(frontend): update library agent cards to use DS (#11720)
## Changes 🏗️

<img width="700" height="838" alt="Screenshot 2026-01-07 at 16 11 04"
src="https://github.com/user-attachments/assets/0b38d2e1-d4a8-4036-862c-b35c82c496c2"
/>

- Update the agent library cards to new designs
- Update page to use Design System components
- Allow editing, deleting, and duplicating agents on the library list page
- Add missing actions to the library agent detail page

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run locally and test the above


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

* **New Features**
* Marketplace info shown on agent cards and improved favoriting with
optimistic UI and feedback.
  * Delete agent and delete schedule flows with confirmation dialogs.

* **Refactor**
* New composable form system, modernized upload dialog, streamlined
search bar, and multiple library components converted to named exports
with layout tweaks.
  * New agent card menu and favorite button UI.

* **Chores**
  * Removed notification UI and dropped a drag-drop dependency.

* **Tests**
  * Increased timeouts and stabilized upload/pagination flows.

<sub>✏️ Tip: You can customize this high-level summary in your review
settings.</sub>
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2026-01-08 18:28:27 +07:00
Abhimanyu Yadav
a81ac150da fix(frontend): add word wrapping to CodeRenderer and improve output actions visibility (#11724)
## Changes 🏗️
- Updated the `CodeRenderer` component to add `whitespace-pre-wrap` and
`break-words` CSS classes to the `<code>` element
- This enables proper wrapping of long code lines while preserving
whitespace formatting

Before


![image.png](https://app.graphite.com/user-attachments/assets/aca769cc-0f6f-4e25-8cdd-c491fcbf21bb.png)

After

![Screenshot 2026-01-08 at
3.02.53 PM.png](https://app.graphite.com/user-attachments/assets/99e23efa-be2a-441b-b0d6-50fa2a08cdb0.png)

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Verified code with long lines wraps correctly
  - [x] Confirmed whitespace and indentation are preserved
  - [x] Tested code display in various viewport sizes

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

* **Bug Fixes**
* Code blocks now preserve whitespace and wrap long lines for improved
readability.
* Output action controls are hidden when there is only a single output
item, reducing unnecessary UI elements.

<sub>✏️ Tip: You can customize this high-level summary in your review
settings.</sub>
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2026-01-08 11:13:47 +00:00
Abhimanyu Yadav
49ee087496 feat(frontend): add new integration images for Webshare and WordPress (#11725)
### Changes 🏗️

Added two new integration icons to the frontend:
- `webshare_proxy.png` - Icon for WebShare Proxy integration
- `wordpress.png` - Icon for WordPress integration

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Verified both icons display correctly in the integrations section
  - [x] Confirmed icons render properly at different screen sizes
  - [x] Checked that the icons maintain quality when scaled

#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
2026-01-08 11:13:34 +00:00
Swifty
e6a5631e47 update migration 2026-01-08 12:10:55 +01:00
Ubbe
b0855e8cf2 feat(frontend): context menu right click new builder (#11703)
## Changes 🏗️

<img width="250" height="504" alt="Screenshot 2026-01-06 at 17 53 26"
src="https://github.com/user-attachments/assets/52013448-f49c-46b6-b86a-39f98270cbc3"
/>

<img width="300" height="544" alt="Screenshot 2026-01-06 at 17 53 29"
src="https://github.com/user-attachments/assets/e6334034-68e4-4346-9092-3774ab3e8445"
/>

On the **New Builder**:
- right-clicking on a node now shows the context menu
- the same menu is used for right-click and for clicking on `...`

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run locally and test the above



<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

* **New Features**
* Added a custom right-click context menu for nodes with Copy, Open
agent (when available), and Delete actions; browser default menu is
suppressed while preserving zoom/drag/wiring.
* Introduced reusable SecondaryMenu primitives for context and dropdown
menus.

* **Documentation**
* Added Storybook examples demonstrating the context menu and dropdown
menu usage.

* **Style**
* Updated menu styling and icons with improved consistency and dark-mode
support.

<sub>✏️ Tip: You can customize this high-level summary in your review
settings.</sub>
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2026-01-08 17:35:49 +07:00
Swifty
1a7e5d3caf Merge branch 'dev' into hackathon-copilot-backend 2026-01-08 10:44:25 +01:00
Swifty
acd702ff55 update openapi.json 2026-01-08 10:44:13 +01:00
Swifty
cb89669ee1 Merge branch 'hackathon-copilot-backend' of github.com:Significant-Gravitas/AutoGPT into hackathon-copilot-backend 2026-01-08 10:43:21 +01:00
Swifty
eee478fa31 fix tests 2026-01-08 10:43:15 +01:00
Swifty
8a6bdaefe7 Merge branch 'dev' into hackathon-copilot-backend 2026-01-08 10:04:16 +01:00
Abhimanyu Yadav
5e2146dd76 feat(frontend): add CustomSchemaField wrapper for dynamic form field routing (#11722)

### Changes 🏗️

This PR introduces automatic UI schema generation for custom form
fields, eliminating manual field mapping.

#### 1. **generateUiSchemaForCustomFields Utility**
(`generate-ui-schema.ts`) - New File
   - Auto-generates `ui:field` settings for custom fields
   - Detects custom fields using `findCustomFieldId()` matcher
   - Handles nested objects and array items recursively
   - Merges with existing UI schema without overwriting

#### 2. **FormRenderer Integration** (`FormRenderer.tsx`)
   - Imports and uses `generateUiSchemaForCustomFields`
   - Creates merged UI schema with `useMemo`
   - Passes merged schema to Form component
   - Enables automatic custom field detection

#### 3. **Preprocessor Cleanup** (`input-schema-pre-processor.ts`)
   - Removed manual `$id` assignment for custom fields
   - Removed unused `findCustomFieldId` import
   - Simplified to focus only on type validation

### Why these changes?

- Custom fields are now detected automatically, without manual `ui:field` configuration
- Uses standard RJSF approach (UI schema) for field routing
- Centralized custom field detection logic improves maintainability

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Verify custom fields render correctly when present in schema
  - [x] Verify standard fields continue to render with default SchemaField
  - [x] Verify multiple instances of same custom field type have unique IDs
  - [x] Test form submission with custom fields

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

* **Bug Fixes**
* Improved custom field rendering in forms by optimizing the UI schema
generation process.

<sub>✏️ Tip: You can customize this high-level summary in your review
settings.</sub>

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2026-01-08 08:47:52 +00:00
Swifty
fd768e8f3f fixing linting errors 2026-01-08 09:40:54 +01:00
Abhimanyu Yadav
103a62c9da feat(frontend/builder): add filters to blocks menu (#11654)
### Changes 🏗️

This PR adds filtering functionality to the new blocks menu, allowing
users to filter search results by category and creator.

**New Components:**
- `BlockMenuFilters`: Main filter component displaying active filters
and filter chips
- `FilterSheet`: Slide-out panel for selecting filters with categories
and creators
- `BlockMenuSearchContent`: Refactored search results display component

**Features Added:**
- Filter by categories: Blocks, Integrations, Marketplace agents, My
agents
- Filter by creator: Shows all available creators from search results
- Category counts: Display number of results per category
- Interactive filter chips with animations (using framer-motion)
- Hover states showing result counts on filter chips
- "All filters" sheet with apply/clear functionality

**State Management:**
- Extended `blockMenuStore` with filter state management
- Added `filters`, `creators`, `creators_list`, and `categoryCounts` to
store
- Integrated filters with search API (`filter` and `by_creator`
parameters)

**Refactoring:**
- Moved search logic from `BlockMenuSearch` to `BlockMenuSearchContent`
- Renamed `useBlockMenuSearch` to `useBlockMenuSearchContent`
- Moved helper functions to `BlockMenuSearchContent` directory

**API Changes:**
- Updated `custom-mutator.ts` to properly handle query parameter
encoding


### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Search for blocks and verify filter chips appear
  - [x] Click "All filters" and verify filter sheet opens with categories
  - [x] Select/deselect category filters and verify results update accordingly
  - [x] Filter by creator and verify only blocks from that creator show
  - [x] Clear all filters and verify reset to default state
  - [x] Verify filter counts display correctly
  - [x] Test filter chip hover animations
2026-01-08 08:02:21 +00:00
Swifty
21626edfa7 Merge branch 'dev' into hackathon-copilot-backend 2026-01-07 16:42:04 +01:00
Swifty
e6cd198e4d remove tool imports from tools we have not added yet 2026-01-07 15:49:37 +01:00
Swifty
e17d2a71db Merge branch 'dev' into hackathon-copilot-backend 2026-01-07 11:34:04 +01:00
Swifty
e8b878521f added prisma schema changes and understanding tool 2026-01-07 10:02:44 +01:00
Swifty
d3dd13fc55 extracted core chat changes from hackathon/copilot 2026-01-07 09:40:13 +01:00
139 changed files with 7182 additions and 2128 deletions

View File

@@ -16,6 +16,7 @@
!autogpt_platform/backend/poetry.lock
!autogpt_platform/backend/README.md
!autogpt_platform/backend/.env
!autogpt_platform/backend/gen_prisma_types_stub.py
# Platform - Market
!autogpt_platform/market/market/

View File

@@ -74,7 +74,7 @@ jobs:
- name: Generate Prisma Client
working-directory: autogpt_platform/backend
run: poetry run prisma generate
run: poetry run prisma generate && poetry run gen-prisma-stub
# Frontend Node.js/pnpm setup (mirrors platform-frontend-ci.yml)
- name: Set up Node.js

View File

@@ -90,7 +90,7 @@ jobs:
- name: Generate Prisma Client
working-directory: autogpt_platform/backend
run: poetry run prisma generate
run: poetry run prisma generate && poetry run gen-prisma-stub
# Frontend Node.js/pnpm setup (mirrors platform-frontend-ci.yml)
- name: Set up Node.js

View File

@@ -72,7 +72,7 @@ jobs:
- name: Generate Prisma Client
working-directory: autogpt_platform/backend
run: poetry run prisma generate
run: poetry run prisma generate && poetry run gen-prisma-stub
# Frontend Node.js/pnpm setup (mirrors platform-frontend-ci.yml)
- name: Set up Node.js
@@ -108,6 +108,16 @@ jobs:
# run: pnpm playwright install --with-deps chromium
# Docker setup for development environment
- name: Free up disk space
run: |
# Remove large unused tools to free disk space for Docker builds
sudo rm -rf /usr/share/dotnet
sudo rm -rf /usr/local/lib/android
sudo rm -rf /opt/ghc
sudo rm -rf /opt/hostedtoolcache/CodeQL
sudo docker system prune -af
df -h
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3

View File

@@ -134,7 +134,7 @@ jobs:
run: poetry install
- name: Generate Prisma Client
run: poetry run prisma generate
run: poetry run prisma generate && poetry run gen-prisma-stub
- id: supabase
name: Start Supabase

View File

@@ -6,12 +6,14 @@ start-core:
# Stop core services
stop-core:
docker compose stop deps
docker compose stop
reset-db:
docker compose stop db
rm -rf db/docker/volumes/db/data
cd backend && poetry run prisma migrate deploy
cd backend && poetry run prisma generate
cd backend && poetry run gen-prisma-stub
# View logs for core services
logs-core:
@@ -33,6 +35,7 @@ init-env:
migrate:
cd backend && poetry run prisma migrate deploy
cd backend && poetry run prisma generate
cd backend && poetry run gen-prisma-stub
run-backend:
cd backend && poetry run app
@@ -58,4 +61,4 @@ help:
@echo " run-backend - Run the backend FastAPI server"
@echo " run-frontend - Run the frontend Next.js development server"
@echo " test-data - Run the test data creator"
@echo " load-store-agents - Load store agents from agents/ folder into test database"
@echo " load-store-agents - Load store agents from agents/ folder into test database"

View File

@@ -48,7 +48,8 @@ RUN poetry install --no-ansi --no-root
# Generate Prisma client
COPY autogpt_platform/backend/schema.prisma ./
COPY autogpt_platform/backend/backend/data/partial_types.py ./backend/data/partial_types.py
RUN poetry run prisma generate
COPY autogpt_platform/backend/gen_prisma_types_stub.py ./
RUN poetry run prisma generate && poetry run gen-prisma-stub
FROM debian:13-slim AS server_dependencies

View File

@@ -12,7 +12,11 @@ class ChatConfig(BaseSettings):
# OpenAI API Configuration
model: str = Field(
default="qwen/qwen3-235b-a22b-2507", description="Default model to use"
default="anthropic/claude-opus-4.5", description="Default model to use"
)
title_model: str = Field(
default="openai/gpt-4o-mini",
description="Model to use for generating session titles (should be fast/cheap)",
)
api_key: str | None = Field(default=None, description="OpenAI API key")
base_url: str | None = Field(
@@ -72,8 +76,31 @@ class ChatConfig(BaseSettings):
v = "https://openrouter.ai/api/v1"
return v
# Prompt paths for different contexts
PROMPT_PATHS: dict[str, str] = {
"default": "prompts/chat_system.md",
"onboarding": "prompts/onboarding_system.md",
}
def get_system_prompt_for_type(
self, prompt_type: str = "default", **template_vars
) -> str:
"""Load and render a system prompt by type.
Args:
prompt_type: The type of prompt to load ("default" or "onboarding")
**template_vars: Variables to substitute in the template
Returns:
Rendered system prompt string
"""
prompt_path_str = self.PROMPT_PATHS.get(
prompt_type, self.PROMPT_PATHS["default"]
)
return self._load_prompt_from_path(prompt_path_str, **template_vars)
def get_system_prompt(self, **template_vars) -> str:
"""Load and render the system prompt from file.
"""Load and render the default system prompt from file.
Args:
**template_vars: Variables to substitute in the template
@@ -82,9 +109,21 @@ class ChatConfig(BaseSettings):
Rendered system prompt string
"""
return self._load_prompt_from_path(self.system_prompt_path, **template_vars)
def _load_prompt_from_path(self, prompt_path_str: str, **template_vars) -> str:
"""Load and render a system prompt from a given path.
Args:
prompt_path_str: Path to the prompt file relative to chat module
**template_vars: Variables to substitute in the template
Returns:
Rendered system prompt string
"""
# Get the path relative to this module
module_dir = Path(__file__).parent
prompt_path = module_dir / self.system_prompt_path
prompt_path = module_dir / prompt_path_str
# Check for .j2 extension first (Jinja2 template)
j2_path = Path(str(prompt_path) + ".j2")

View File

@@ -0,0 +1,215 @@
"""Database operations for chat sessions."""
import logging
from datetime import UTC, datetime
from typing import Any, cast
from prisma.models import ChatMessage as PrismaChatMessage
from prisma.models import ChatSession as PrismaChatSession
from prisma.types import (
ChatMessageCreateInput,
ChatSessionCreateInput,
ChatSessionUpdateInput,
)
from backend.util.json import SafeJson
logger = logging.getLogger(__name__)
async def get_chat_session(session_id: str) -> PrismaChatSession | None:
"""Get a chat session by ID from the database."""
session = await PrismaChatSession.prisma().find_unique(
where={"id": session_id},
include={"Messages": True},
)
if session and session.Messages:
# Sort messages by sequence in Python since Prisma doesn't support order_by in include
session.Messages.sort(key=lambda m: m.sequence)
return session
async def create_chat_session(
session_id: str,
user_id: str | None,
) -> PrismaChatSession:
"""Create a new chat session in the database."""
data = ChatSessionCreateInput(
id=session_id,
userId=user_id,
credentials=SafeJson({}),
successfulAgentRuns=SafeJson({}),
successfulAgentSchedules=SafeJson({}),
)
return await PrismaChatSession.prisma().create(
data=data,
include={"Messages": True},
)
async def update_chat_session(
session_id: str,
credentials: dict[str, Any] | None = None,
successful_agent_runs: dict[str, Any] | None = None,
successful_agent_schedules: dict[str, Any] | None = None,
total_prompt_tokens: int | None = None,
total_completion_tokens: int | None = None,
title: str | None = None,
) -> PrismaChatSession | None:
"""Update a chat session's metadata."""
data: ChatSessionUpdateInput = {"updatedAt": datetime.now(UTC)}
if credentials is not None:
data["credentials"] = SafeJson(credentials)
if successful_agent_runs is not None:
data["successfulAgentRuns"] = SafeJson(successful_agent_runs)
if successful_agent_schedules is not None:
data["successfulAgentSchedules"] = SafeJson(successful_agent_schedules)
if total_prompt_tokens is not None:
data["totalPromptTokens"] = total_prompt_tokens
if total_completion_tokens is not None:
data["totalCompletionTokens"] = total_completion_tokens
if title is not None:
data["title"] = title
session = await PrismaChatSession.prisma().update(
where={"id": session_id},
data=data,
include={"Messages": True},
)
if session and session.Messages:
session.Messages.sort(key=lambda m: m.sequence)
return session
async def add_chat_message(
session_id: str,
role: str,
sequence: int,
content: str | None = None,
name: str | None = None,
tool_call_id: str | None = None,
refusal: str | None = None,
tool_calls: list[dict[str, Any]] | None = None,
function_call: dict[str, Any] | None = None,
) -> PrismaChatMessage:
"""Add a message to a chat session."""
# Build the input dict dynamically - only include optional fields when they
# have values, as Prisma TypedDict validation fails when optional fields
# are explicitly set to None
data: dict[str, Any] = {
"Session": {"connect": {"id": session_id}},
"role": role,
"sequence": sequence,
}
# Add optional string fields
if content is not None:
data["content"] = content
if name is not None:
data["name"] = name
if tool_call_id is not None:
data["toolCallId"] = tool_call_id
if refusal is not None:
data["refusal"] = refusal
# Add optional JSON fields only when they have values
if tool_calls is not None:
data["toolCalls"] = SafeJson(tool_calls)
if function_call is not None:
data["functionCall"] = SafeJson(function_call)
# Update session's updatedAt timestamp
await PrismaChatSession.prisma().update(
where={"id": session_id},
data={"updatedAt": datetime.now(UTC)},
)
return await PrismaChatMessage.prisma().create(
data=cast(ChatMessageCreateInput, data)
)
async def add_chat_messages_batch(
session_id: str,
messages: list[dict[str, Any]],
start_sequence: int,
) -> list[PrismaChatMessage]:
"""Add multiple messages to a chat session in a batch."""
if not messages:
return []
created_messages = []
for i, msg in enumerate(messages):
# Build the input dict dynamically - only include optional JSON fields
# when they have values, as Prisma TypedDict validation fails when
# optional fields are explicitly set to None
data: dict[str, Any] = {
"Session": {"connect": {"id": session_id}},
"role": msg["role"],
"sequence": start_sequence + i,
}
# Add optional string fields
if msg.get("content") is not None:
data["content"] = msg["content"]
if msg.get("name") is not None:
data["name"] = msg["name"]
if msg.get("tool_call_id") is not None:
data["toolCallId"] = msg["tool_call_id"]
if msg.get("refusal") is not None:
data["refusal"] = msg["refusal"]
# Add optional JSON fields only when they have values
if msg.get("tool_calls") is not None:
data["toolCalls"] = SafeJson(msg["tool_calls"])
if msg.get("function_call") is not None:
data["functionCall"] = SafeJson(msg["function_call"])
created = await PrismaChatMessage.prisma().create(
data=cast(ChatMessageCreateInput, data)
)
created_messages.append(created)
# Update session's updatedAt timestamp
await PrismaChatSession.prisma().update(
where={"id": session_id},
data={"updatedAt": datetime.now(UTC)},
)
return created_messages
async def get_user_chat_sessions(
user_id: str,
limit: int = 50,
offset: int = 0,
) -> list[PrismaChatSession]:
"""Get chat sessions for a user, ordered by most recent."""
return await PrismaChatSession.prisma().find_many(
where={"userId": user_id},
order={"updatedAt": "desc"},
take=limit,
skip=offset,
)
async def get_user_session_count(user_id: str) -> int:
"""Get the total number of chat sessions for a user."""
return await PrismaChatSession.prisma().count(where={"userId": user_id})
async def delete_chat_session(session_id: str) -> bool:
"""Delete a chat session and all its messages."""
try:
await PrismaChatSession.prisma().delete(where={"id": session_id})
return True
except Exception as e:
logger.error(f"Failed to delete chat session {session_id}: {e}")
return False
async def get_chat_session_message_count(session_id: str) -> int:
"""Get the number of messages in a chat session."""
count = await PrismaChatMessage.prisma().count(where={"sessionId": session_id})
return count

View File

@@ -16,11 +16,15 @@ from openai.types.chat.chat_completion_message_tool_call_param import (
ChatCompletionMessageToolCallParam,
Function,
)
from prisma.models import ChatMessage as PrismaChatMessage
from prisma.models import ChatSession as PrismaChatSession
from pydantic import BaseModel
from backend.data.redis_client import get_redis_async
from backend.util import json
from backend.util.exceptions import RedisError
from . import db as chat_db
from .config import ChatConfig
logger = logging.getLogger(__name__)
@@ -46,6 +50,7 @@ class Usage(BaseModel):
class ChatSession(BaseModel):
session_id: str
user_id: str | None
title: str | None = None
messages: list[ChatMessage]
usage: list[Usage]
credentials: dict[str, dict] = {} # Map of provider -> credential metadata
@@ -59,6 +64,7 @@ class ChatSession(BaseModel):
return ChatSession(
session_id=str(uuid.uuid4()),
user_id=user_id,
title=None,
messages=[],
usage=[],
credentials={},
@@ -66,6 +72,85 @@ class ChatSession(BaseModel):
updated_at=datetime.now(UTC),
)
@staticmethod
def from_prisma(
prisma_session: PrismaChatSession,
prisma_messages: list[PrismaChatMessage] | None = None,
) -> "ChatSession":
"""Convert Prisma models to Pydantic ChatSession."""
messages = []
if prisma_messages:
for msg in prisma_messages:
tool_calls = None
if msg.toolCalls:
tool_calls = (
json.loads(msg.toolCalls)
if isinstance(msg.toolCalls, str)
else msg.toolCalls
)
function_call = None
if msg.functionCall:
function_call = (
json.loads(msg.functionCall)
if isinstance(msg.functionCall, str)
else msg.functionCall
)
messages.append(
ChatMessage(
role=msg.role,
content=msg.content,
name=msg.name,
tool_call_id=msg.toolCallId,
refusal=msg.refusal,
tool_calls=tool_calls,
function_call=function_call,
)
)
# Parse JSON fields from Prisma
credentials = (
json.loads(prisma_session.credentials)
if isinstance(prisma_session.credentials, str)
else prisma_session.credentials or {}
)
successful_agent_runs = (
json.loads(prisma_session.successfulAgentRuns)
if isinstance(prisma_session.successfulAgentRuns, str)
else prisma_session.successfulAgentRuns or {}
)
successful_agent_schedules = (
json.loads(prisma_session.successfulAgentSchedules)
if isinstance(prisma_session.successfulAgentSchedules, str)
else prisma_session.successfulAgentSchedules or {}
)
# Calculate usage from token counts
usage = []
if prisma_session.totalPromptTokens or prisma_session.totalCompletionTokens:
usage.append(
Usage(
prompt_tokens=prisma_session.totalPromptTokens or 0,
completion_tokens=prisma_session.totalCompletionTokens or 0,
total_tokens=(prisma_session.totalPromptTokens or 0)
+ (prisma_session.totalCompletionTokens or 0),
)
)
return ChatSession(
session_id=prisma_session.id,
user_id=prisma_session.userId,
title=prisma_session.title,
messages=messages,
usage=usage,
credentials=credentials,
started_at=prisma_session.createdAt,
updated_at=prisma_session.updatedAt,
successful_agent_runs=successful_agent_runs,
successful_agent_schedules=successful_agent_schedules,
)
def to_openai_messages(self) -> list[ChatCompletionMessageParam]:
messages = []
for message in self.messages:
@@ -155,50 +240,234 @@ class ChatSession(BaseModel):
return messages
async def get_chat_session(
session_id: str,
user_id: str | None,
) -> ChatSession | None:
"""Get a chat session by ID."""
async def _get_session_from_cache(session_id: str) -> ChatSession | None:
"""Get a chat session from Redis cache."""
redis_key = f"chat:session:{session_id}"
async_redis = await get_redis_async()
raw_session: bytes | None = await async_redis.get(redis_key)
if raw_session is None:
logger.warning(f"Session {session_id} not found in Redis")
return None
try:
session = ChatSession.model_validate_json(raw_session)
logger.info(
f"Loading session {session_id} from cache: "
f"message_count={len(session.messages)}, "
f"roles={[m.role for m in session.messages]}"
)
return session
except Exception as e:
logger.error(f"Failed to deserialize session {session_id}: {e}", exc_info=True)
raise RedisError(f"Corrupted session data for {session_id}") from e
async def _cache_session(session: ChatSession) -> None:
"""Cache a chat session in Redis."""
redis_key = f"chat:session:{session.session_id}"
async_redis = await get_redis_async()
await async_redis.setex(redis_key, config.session_ttl, session.model_dump_json())
async def _get_session_from_db(session_id: str) -> ChatSession | None:
"""Get a chat session from the database."""
prisma_session = await chat_db.get_chat_session(session_id)
if not prisma_session:
return None
messages = prisma_session.Messages
logger.info(
f"Loading session {session_id} from DB: "
f"has_messages={messages is not None}, "
f"message_count={len(messages) if messages else 0}, "
f"roles={[m.role for m in messages] if messages else []}"
)
return ChatSession.from_prisma(prisma_session, messages)
async def _save_session_to_db(
session: ChatSession, existing_message_count: int
) -> None:
"""Save or update a chat session in the database."""
# Check if session exists in DB
existing = await chat_db.get_chat_session(session.session_id)
if not existing:
# Create new session
await chat_db.create_chat_session(
session_id=session.session_id,
user_id=session.user_id,
)
existing_message_count = 0
# Calculate total tokens from usage
total_prompt = sum(u.prompt_tokens for u in session.usage)
total_completion = sum(u.completion_tokens for u in session.usage)
# Update session metadata
await chat_db.update_chat_session(
session_id=session.session_id,
credentials=session.credentials,
successful_agent_runs=session.successful_agent_runs,
successful_agent_schedules=session.successful_agent_schedules,
total_prompt_tokens=total_prompt,
total_completion_tokens=total_completion,
)
# Add new messages (only those after existing count)
new_messages = session.messages[existing_message_count:]
if new_messages:
messages_data = []
for msg in new_messages:
messages_data.append(
{
"role": msg.role,
"content": msg.content,
"name": msg.name,
"tool_call_id": msg.tool_call_id,
"refusal": msg.refusal,
"tool_calls": msg.tool_calls,
"function_call": msg.function_call,
}
)
logger.info(
f"Saving {len(new_messages)} new messages to DB for session {session.session_id}: "
f"roles={[m['role'] for m in messages_data]}, "
f"start_sequence={existing_message_count}"
)
await chat_db.add_chat_messages_batch(
session_id=session.session_id,
messages=messages_data,
start_sequence=existing_message_count,
)
async def get_chat_session(
session_id: str,
user_id: str | None,
) -> ChatSession | None:
"""Get a chat session by ID.
Checks Redis cache first, falls back to database if not found.
Caches database results back to Redis.
"""
# Try cache first
try:
session = await _get_session_from_cache(session_id)
if session:
# Verify user ownership
if session.user_id is not None and session.user_id != user_id:
logger.warning(
f"Session {session_id} user id mismatch: {session.user_id} != {user_id}"
)
return None
return session
except RedisError:
logger.warning(f"Cache error for session {session_id}, trying database")
except Exception as e:
logger.warning(f"Unexpected cache error for session {session_id}: {e}")
# Fall back to database
logger.info(f"Session {session_id} not in cache, checking database")
session = await _get_session_from_db(session_id)
if session is None:
logger.warning(f"Session {session_id} not found in cache or database")
return None
# Verify user ownership
if session.user_id is not None and session.user_id != user_id:
logger.warning(
f"Session {session_id} user id mismatch: {session.user_id} != {user_id}"
)
return None
# Cache the session from DB
try:
await _cache_session(session)
logger.info(f"Cached session {session_id} from database")
except Exception as e:
logger.warning(f"Failed to cache session {session_id}: {e}")
return session
async def upsert_chat_session(
session: ChatSession,
) -> ChatSession:
"""Update a chat session with the given messages."""
redis_key = f"chat:session:{session.session_id}"
async_redis = await get_redis_async()
resp = await async_redis.setex(
redis_key, config.session_ttl, session.model_dump_json()
"""Update a chat session in both cache and database."""
# Get existing message count from DB for incremental saves
existing_message_count = await chat_db.get_chat_session_message_count(
session.session_id
)
if not resp:
# Save to database
try:
await _save_session_to_db(session, existing_message_count)
except Exception as e:
logger.error(f"Failed to save session {session.session_id} to database: {e}")
# Continue to cache even if DB fails
# Save to cache
try:
await _cache_session(session)
except Exception as e:
raise RedisError(
f"Failed to persist chat session {session.session_id} to Redis: {resp}"
)
f"Failed to persist chat session {session.session_id} to Redis: {e}"
) from e
return session
async def create_chat_session(user_id: str | None) -> ChatSession:
"""Create a new chat session and persist it."""
session = ChatSession.new(user_id)
# Create in database first
try:
await chat_db.create_chat_session(
session_id=session.session_id,
user_id=user_id,
)
except Exception as e:
logger.error(f"Failed to create session in database: {e}")
# Continue even if DB fails - cache will still work
# Cache the session
try:
await _cache_session(session)
except Exception as e:
logger.warning(f"Failed to cache new session: {e}")
return session
async def get_user_sessions(
user_id: str,
limit: int = 50,
offset: int = 0,
) -> list[ChatSession]:
"""Get all chat sessions for a user from the database."""
prisma_sessions = await chat_db.get_user_chat_sessions(user_id, limit, offset)
sessions = []
for prisma_session in prisma_sessions:
# Convert without messages for listing (lighter weight)
sessions.append(ChatSession.from_prisma(prisma_session, None))
return sessions
async def delete_chat_session(session_id: str) -> bool:
"""Delete a chat session from both cache and database."""
# Delete from cache
try:
redis_key = f"chat:session:{session_id}"
async_redis = await get_redis_async()
await async_redis.delete(redis_key)
except Exception as e:
logger.warning(f"Failed to delete session {session_id} from cache: {e}")
# Delete from database
return await chat_db.delete_chat_session(session_id)

View File

@@ -68,3 +68,50 @@ async def test_chatsession_redis_storage_user_id_mismatch():
s2 = await get_chat_session(s.session_id, None)
assert s2 is None
@pytest.mark.asyncio(loop_scope="session")
async def test_chatsession_db_storage():
"""Test that messages are correctly saved to and loaded from DB (not cache)."""
from backend.data.redis_client import get_redis_async
# Create session with messages including assistant message
s = ChatSession.new(user_id=None)
s.messages = messages # Contains user, assistant, and tool messages
assert s.session_id is not None, "Session id is not set"
# Upsert to save to both cache and DB
s = await upsert_chat_session(s)
# Clear the Redis cache to force DB load
redis_key = f"chat:session:{s.session_id}"
async_redis = await get_redis_async()
await async_redis.delete(redis_key)
# Load from DB (cache was cleared)
s2 = await get_chat_session(
session_id=s.session_id,
user_id=s.user_id,
)
assert s2 is not None, "Session not found after loading from DB"
assert len(s2.messages) == len(
s.messages
), f"Message count mismatch: expected {len(s.messages)}, got {len(s2.messages)}"
# Verify all roles are present
roles = [m.role for m in s2.messages]
assert "user" in roles, f"User message missing. Roles found: {roles}"
assert "assistant" in roles, f"Assistant message missing. Roles found: {roles}"
assert "tool" in roles, f"Tool message missing. Roles found: {roles}"
# Verify message content
for orig, loaded in zip(s.messages, s2.messages):
assert orig.role == loaded.role, f"Role mismatch: {orig.role} != {loaded.role}"
assert (
orig.content == loaded.content
), f"Content mismatch for {orig.role}: {orig.content} != {loaded.content}"
if orig.tool_calls:
assert (
loaded.tool_calls is not None
), f"Tool calls missing for {orig.role} message"
assert len(orig.tool_calls) == len(loaded.tool_calls)

View File

@@ -1,12 +1,80 @@
You are Otto, an AI Co-Pilot and Forward Deployed Engineer for AutoGPT, an AI Business Automation tool. Your mission is to help users quickly find and set up AutoGPT agents to solve their business problems.
You are Otto, an AI Co-Pilot and Forward Deployed Engineer for AutoGPT, an AI Business Automation tool. Your mission is to help users quickly find, create, and set up AutoGPT agents to solve their business problems.
Here are the functions available to you:
<functions>
1. **find_agent** - Search for agents that solve the user's problem
2. **run_agent** - Run or schedule an agent (automatically handles setup)
**Understanding & Discovery:**
1. **add_understanding** - Save information about the user's business context (use this as you learn about them)
2. **find_agent** - Search the marketplace for pre-built agents that solve the user's problem
3. **find_library_agent** - Search the user's personal library of saved agents
4. **find_block** - Search for individual blocks (building components for agents)
5. **search_platform_docs** - Search AutoGPT documentation for help
**Agent Creation & Editing:**
6. **create_agent** - Create a new custom agent from scratch based on user requirements
7. **edit_agent** - Modify an existing agent (add/remove blocks, change configuration)
**Execution & Output:**
8. **run_agent** - Run or schedule an agent (automatically handles setup)
9. **run_block** - Run a single block directly without creating an agent
10. **agent_output** - Get the output/results from a running or completed agent execution
</functions>
## ALWAYS GET THE USER'S NAME
**This is critical:** If you don't know the user's name, ask for it in your first response. Use a friendly, natural approach:
- "Hi! I'm Otto. What's your name?"
- "Hey there! Before we dive in, what should I call you?"
Once you have their name, immediately save it with `add_understanding(user_name="...")` and use it throughout the conversation.
## BUILDING USER UNDERSTANDING
**If no User Business Context is provided below**, gather information naturally during conversation - don't interrogate them.
**Key information to gather (in priority order):**
1. Their name (ALWAYS first if unknown)
2. Their job title and role
3. Their business/company and industry
4. Pain points and what they want to automate
5. Tools they currently use
**How to gather this information:**
- Ask naturally as part of helping them (e.g., "What's your role?" or "What industry are you in?")
- When they share information, immediately save it using `add_understanding`
- Don't ask all questions at once - spread them across the conversation
- Prioritize understanding their immediate problem first
**Example:**
```
User: "I need help automating my social media"
Otto: I can help with that! I'm Otto - what's your name?
User: "I'm Sarah"
Otto: [calls add_understanding with user_name="Sarah"]
Nice to meet you, Sarah! What's your role - are you a social media manager or business owner?
User: "I'm the marketing director at a fintech startup"
Otto: [calls add_understanding with job_title="Marketing Director", industry="fintech", business_size="startup"]
Great! Let me find social media automation agents for you.
[calls find_agent with query="social media automation marketing"]
```
## WHEN TO USE WHICH TOOL
**Finding existing agents:**
- `find_agent` - Search the marketplace for pre-built agents others have created
- `find_library_agent` - Search agents the user has already saved to their library
**Creating/editing agents:**
- `create_agent` - When user wants a custom agent that doesn't exist, or has specific requirements
- `edit_agent` - When user wants to modify an existing agent (change inputs, add blocks, etc.)
**Running agents:**
- `run_agent` - To execute an agent (handles credentials and inputs automatically)
- `agent_output` - To check the results of a running or completed agent execution
**Direct execution:**
- `run_block` - Run a single block directly without needing a full agent
## HOW run_agent WORKS
The `run_agent` tool automatically handles the entire setup flow:
@@ -21,49 +89,61 @@ Parameters:
- `use_defaults`: Set to `true` to run with default values (only after user confirms)
- `schedule_name` + `cron`: For scheduled execution
## HOW create_agent WORKS
Use `create_agent` when the user wants to build a custom automation:
- Describe what the agent should do
- The tool will create the agent structure with appropriate blocks
- Returns the agent ID for further editing or running
## HOW agent_output WORKS
Use `agent_output` to get results from agent executions:
- Pass the execution_id from a run_agent response
- Returns the current status and any outputs produced
- Useful for checking if an agent has completed and what it produced
## WORKFLOW
1. **find_agent** - Search for agents that solve the user's problem
2. **run_agent** (first call, no inputs) - Get available inputs for the agent
3. **Ask user** what values they want to use OR if they want to use defaults
4. **run_agent** (second call) - Either with `inputs={...}` or `use_defaults=true`
1. **Get their name** - If unknown, ask for it first
2. **Understand context** - Ask 1-2 questions about their problem while helping
3. **Find or create** - Use find_agent for existing solutions, create_agent for custom needs
4. **Set up and run** - Use run_agent to execute, agent_output to get results
## YOUR APPROACH
**Step 1: Understand the Problem**
**Step 1: Greet and Identify**
- If you don't know their name, ask for it
- Be friendly and conversational
**Step 2: Understand the Problem**
- Ask maximum 1-2 targeted questions
- Focus on: What business problem are they solving?
- Move quickly to searching for solutions
- If they want to create/edit an agent, understand what it should do
**Step 2: Find Agents**
- Use `find_agent` immediately with relevant keywords
- Suggest the best option from search results
- Explain briefly how it solves their problem
**Step 3: Find or Create**
- For existing solutions: Use `find_agent` with relevant keywords
- For custom needs: Use `create_agent` with their requirements
- For modifications: Use `edit_agent` on an existing agent
**Step 3: Get Agent Inputs**
- Call `run_agent(username_agent_slug="creator/agent-name")` without inputs
- This returns the available inputs (required and optional)
- Present these to the user and ask what values they want
**Step 4: Execute**
- Call `run_agent` without inputs first to see what's available
- Ask user what values they want or if defaults are okay
- Call `run_agent` again with inputs or `use_defaults=true`
- Use `agent_output` to check results when needed
**Step 4: Run with User's Choice**
- If user provides values: `run_agent(username_agent_slug="...", inputs={...})`
- If user says "use defaults": `run_agent(username_agent_slug="...", use_defaults=true)`
- On success, share the agent link with the user
## USING add_understanding
**For Scheduled Execution:**
- Add `schedule_name` and `cron` parameters
- Example: `run_agent(username_agent_slug="...", inputs={...}, schedule_name="Daily Report", cron="0 9 * * *")`
Call `add_understanding` whenever you learn something about the user:
## FUNCTION CALL FORMAT
**User info:** `user_name`, `job_title`
**Business:** `business_name`, `industry`, `business_size` (1-10, 11-50, 51-200, 201-1000, 1000+), `user_role` (decision maker, implementer, end user)
**Processes:** `key_workflows` (array), `daily_activities` (array)
**Pain points:** `pain_points` (array), `bottlenecks` (array), `manual_tasks` (array), `automation_goals` (array)
**Tools:** `current_software` (array), `existing_automation` (array)
**Other:** `additional_notes`
To call a function, use this exact format:
`<function_call>function_name(parameter="value")</function_call>`
Examples:
- `<function_call>find_agent(query="social media automation")</function_call>`
- `<function_call>run_agent(username_agent_slug="creator/agent-name")</function_call>` (get inputs)
- `<function_call>run_agent(username_agent_slug="creator/agent-name", inputs={"topic": "AI news"})</function_call>`
- `<function_call>run_agent(username_agent_slug="creator/agent-name", use_defaults=true)</function_call>`
Example: `add_understanding(user_name="Sarah", job_title="Marketing Director", industry="fintech")`
## KEY RULES
@@ -73,8 +153,12 @@ Examples:
- Don't run agents without first showing available inputs to the user
- Don't use `use_defaults=true` without user explicitly confirming
- Don't write responses longer than 3 sentences
- Don't interrogate users with many questions - gather info naturally
**What You DO:**
- ALWAYS ask for user's name if you don't have it
- Save user information with `add_understanding` as you learn it
- Use their name when addressing them
- Always call run_agent first without inputs to see what's available
- Ask user what values they want OR if they want to use defaults
- Keep all responses to maximum 3 sentences
@@ -87,18 +171,22 @@ Examples:
## RESPONSE STRUCTURE
Before responding, wrap your analysis in <thinking> tags to systematically plan your approach:
- Check if you know the user's name - if not, ask for it
- Check if you have user context - if not, plan to gather some naturally
- Extract the key business problem or request from the user's message
- Determine what function call (if any) you need to make next
- Plan your response to stay under the 3-sentence maximum
Example interaction:
```
User: "Run the AI news agent for me"
Otto: <function_call>run_agent(username_agent_slug="autogpt/ai-news")</function_call>
[Tool returns: Agent accepts inputs - Required: topic. Optional: num_articles (default: 5)]
Otto: The AI News agent needs a topic. What topic would you like news about, or should I use the defaults?
User: "Use defaults"
Otto: <function_call>run_agent(username_agent_slug="autogpt/ai-news", use_defaults=true)</function_call>
User: "Hi, I want to build an agent that monitors my competitors"
Otto: <thinking>I don't know this user's name. I should ask for it while acknowledging their request.</thinking>
Hi! I'm Otto and I'd love to help you build a competitor monitoring agent. What's your name?
User: "I'm Mike"
Otto: [calls add_understanding with user_name="Mike"]
<thinking>Now I know Mike wants competitor monitoring. I should search for existing agents first.</thinking>
Great to meet you, Mike! Let me search for competitor monitoring agents.
[calls find_agent with query="competitor monitoring analysis"]
```
KEEP ANSWERS TO 3 SENTENCES

View File

@@ -0,0 +1,155 @@
You are Otto, an AI Co-Pilot helping new users get started with AutoGPT, an AI Business Automation platform. Your mission is to welcome them, learn about their needs, and help them run their first successful agent.
Here are the functions available to you:
<functions>
**Understanding & Discovery:**
1. **add_understanding** - Save information about the user's business context (use this as you learn about them)
2. **find_agent** - Search the marketplace for pre-built agents that solve the user's problem
3. **find_library_agent** - Search the user's personal library of saved agents
4. **find_block** - Search for individual blocks (building components for agents)
5. **search_platform_docs** - Search AutoGPT documentation for help
**Agent Creation & Editing:**
6. **create_agent** - Create a new custom agent from scratch based on user requirements
7. **edit_agent** - Modify an existing agent (add/remove blocks, change configuration)
**Execution & Output:**
8. **run_agent** - Run or schedule an agent (automatically handles setup)
9. **run_block** - Run a single block directly without creating an agent
10. **agent_output** - Get the output/results from a running or completed agent execution
</functions>
## YOUR ONBOARDING MISSION
You are guiding a new user through their first experience with AutoGPT. Your goal is to:
1. Welcome them warmly and get their name
2. Learn about them and their business
3. Find or create an agent that solves a real problem for them
4. Get that agent running successfully
5. Celebrate their success and point them to next steps
## PHASE 1: WELCOME & INTRODUCTION
**Start every conversation by:**
- Giving a warm, friendly greeting
- Introducing yourself as Otto, their AI assistant
- Asking for their name immediately
**Example opening:**
```
Hi! I'm Otto, your AI assistant. Welcome to AutoGPT! I'm here to help you set up your first automation. What's your name?
```
Once you have their name, save it immediately with `add_understanding(user_name="...")` and use it throughout.
## PHASE 2: DISCOVERY
**After getting their name, learn about them:**
- What's their role/job title?
- What industry/business are they in?
- What's one thing they'd love to automate?
**Keep it conversational - don't interrogate. Example:**
```
Nice to meet you, Sarah! What do you do for work, and what's one task you wish you could automate?
```
Save everything you learn with `add_understanding`.
## PHASE 3: FIND OR CREATE AN AGENT
**Once you understand their need:**
- Search for existing agents with `find_agent`
- Present the best match and explain how it helps them
- If nothing fits, offer to create a custom agent with `create_agent`
**Be enthusiastic about the solution:**
```
I found a great agent for you! The "Social Media Scheduler" can automatically post to your accounts on a schedule. Want to try it?
```
## PHASE 4: SETUP & RUN
**Guide them through running the agent:**
1. Call `run_agent` without inputs first to see what's needed
2. Explain each input in simple terms
3. Ask what values they want to use
4. Run the agent with their inputs or defaults
**Don't mention credentials** - the UI handles that automatically.
## PHASE 5: CELEBRATE & HANDOFF
**After successful execution:**
- Congratulate them on their first automation!
- Tell them where to find this agent (their Library)
- Mention they can explore more agents in the Marketplace
- Offer to help with anything else
**Example:**
```
You did it! Your first agent is running. You can find it anytime in your Library. Ready to explore more automations?
```
## KEY RULES
**What You DON'T Do:**
- Don't help with login (frontend handles this)
- Don't mention credentials (UI handles automatically)
- Don't run agents without showing inputs first
- Don't use `use_defaults=true` without explicit confirmation
- Don't write responses longer than 3 sentences
- Don't overwhelm with too many questions at once
**What You DO:**
- ALWAYS get the user's name first
- Be warm, encouraging, and celebratory
- Save info with `add_understanding` as you learn it
- Use their name when addressing them
- Keep responses to maximum 3 sentences
- Make them feel successful at each step
## USING add_understanding
Save information as you learn it:
**User info:** `user_name`, `job_title`
**Business:** `business_name`, `industry`, `business_size`, `user_role`
**Pain points:** `pain_points`, `manual_tasks`, `automation_goals`
**Tools:** `current_software`
Example: `add_understanding(user_name="Sarah", job_title="Marketing Manager", automation_goals=["social media scheduling"])`
## HOW run_agent WORKS
1. **First call** (no inputs) → Shows available inputs
2. **Credentials** → UI handles automatically (don't mention)
3. **Execution** → Run with `inputs={...}` or `use_defaults=true`
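Example: call `run_agent(username_agent_slug="creator/agent-name")` first to see the inputs, then `run_agent(username_agent_slug="creator/agent-name", use_defaults=true)` once the user confirms.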
## RESPONSE STRUCTURE
Before responding, plan your approach in <thinking> tags:
- What phase am I in? (Welcome/Discovery/Find/Setup/Celebrate)
- Do I know their name? If not, ask for it
- What's the next step to move them forward?
- Keep response under 3 sentences
**Example flow:**
```
User: "Hi"
Otto: <thinking>Phase 1 - I need to welcome them and get their name.</thinking>
Hi! I'm Otto, welcome to AutoGPT! I'm here to help you set up your first automation - what's your name?
User: "I'm Alex"
Otto: [calls add_understanding with user_name="Alex"]
<thinking>Got their name. Phase 2 - learn about them.</thinking>
Great to meet you, Alex! What do you do for work, and what's one task you'd love to automate?
User: "I run an e-commerce store and spend hours on customer support emails"
Otto: [calls add_understanding with industry="e-commerce", pain_points=["customer support emails"]]
<thinking>Phase 3 - search for agents.</thinking>
[calls find_agent with query="customer support email automation"]
```
KEEP ANSWERS TO 3 SENTENCES - Be warm, helpful, and focused on their success!

View File

@@ -26,6 +26,14 @@ router = APIRouter(
# ========== Request/Response Models ==========
class StreamChatRequest(BaseModel):
"""Request model for streaming chat with optional context."""
message: str
is_user_message: bool = True
context: dict[str, str] | None = None # {url: str, content: str}
class CreateSessionResponse(BaseModel):
"""Response model containing information on a newly created chat session."""
@@ -44,9 +52,64 @@ class SessionDetailResponse(BaseModel):
messages: list[dict]
class SessionSummaryResponse(BaseModel):
"""Response model for a session summary (without messages)."""
id: str
created_at: str
updated_at: str
title: str | None = None
class ListSessionsResponse(BaseModel):
"""Response model for listing chat sessions."""
sessions: list[SessionSummaryResponse]
total: int
# ========== Routes ==========
@router.get(
"/sessions",
dependencies=[Security(auth.requires_user)],
)
async def list_sessions(
user_id: Annotated[str, Security(auth.get_user_id)],
limit: int = Query(default=50, ge=1, le=100),
offset: int = Query(default=0, ge=0),
) -> ListSessionsResponse:
"""
List chat sessions for the authenticated user.
Returns a paginated list of chat sessions belonging to the current user,
ordered by most recently updated.
Args:
user_id: The authenticated user's ID.
limit: Maximum number of sessions to return (1-100).
offset: Number of sessions to skip for pagination.
Returns:
ListSessionsResponse: List of session summaries and total count.
"""
sessions = await chat_service.get_user_sessions(user_id, limit, offset)
return ListSessionsResponse(
sessions=[
SessionSummaryResponse(
id=session.session_id,
created_at=session.started_at.isoformat(),
updated_at=session.updated_at.isoformat(),
title=None, # TODO: Add title support
)
for session in sessions
],
total=len(sessions),  # NOTE: count of sessions in this page, not an overall total
)
@router.post(
"/sessions",
)
@@ -102,26 +165,89 @@ async def get_session(
session = await chat_service.get_session(session_id, user_id)
if not session:
raise NotFoundError(f"Session {session_id} not found")
messages = [message.model_dump() for message in session.messages]
logger.info(
f"Returning session {session_id}: "
f"message_count={len(messages)}, "
f"roles={[m.get('role') for m in messages]}"
)
return SessionDetailResponse(
id=session.session_id,
created_at=session.started_at.isoformat(),
updated_at=session.updated_at.isoformat(),
user_id=session.user_id or None,
messages=[message.model_dump() for message in session.messages],
messages=messages,
)
@router.post(
"/sessions/{session_id}/stream",
)
async def stream_chat_post(
session_id: str,
request: StreamChatRequest,
user_id: str | None = Depends(auth.get_user_id),
):
"""
Stream chat responses for a session (POST with context support).
Streams the AI/completion responses in real time over Server-Sent Events (SSE), including:
- Text fragments as they are generated
- Tool call UI elements (if invoked)
- Tool execution results
Args:
session_id: The chat session identifier to associate with the streamed messages.
request: Request body containing message, is_user_message, and optional context.
user_id: Optional authenticated user ID.
Returns:
StreamingResponse: SSE-formatted response chunks.
"""
# Validate session exists before starting the stream
# This prevents errors after the response has already started
session = await chat_service.get_session(session_id, user_id)
if not session:
raise NotFoundError(f"Session {session_id} not found. ")
if session.user_id is None and user_id is not None:
session = await chat_service.assign_user_to_session(session_id, user_id)
async def event_generator() -> AsyncGenerator[str, None]:
async for chunk in chat_service.stream_chat_completion(
session_id,
request.message,
is_user_message=request.is_user_message,
user_id=user_id,
session=session, # Pass pre-fetched session to avoid double-fetch
context=request.context,
):
yield chunk.to_sse()
return StreamingResponse(
event_generator(),
media_type="text/event-stream",
headers={
"Cache-Control": "no-cache",
"Connection": "keep-alive",
"X-Accel-Buffering": "no", # Disable nginx buffering
},
)
@router.get(
"/sessions/{session_id}/stream",
)
async def stream_chat(
async def stream_chat_get(
session_id: str,
message: Annotated[str, Query(min_length=1, max_length=10000)],
user_id: str | None = Depends(auth.get_user_id),
is_user_message: bool = Query(default=True),
):
"""
Stream chat responses for a session.
Stream chat responses for a session (GET - legacy endpoint).
Streams the AI/completion responses in real time over Server-Sent Events (SSE), including:
- Text fragments as they are generated
@@ -193,6 +319,133 @@ async def session_assign_user(
return {"status": "ok"}
# ========== Onboarding Routes ==========
# These routes use a specialized onboarding system prompt
@router.post(
"/onboarding/sessions",
)
async def create_onboarding_session(
user_id: Annotated[str | None, Depends(auth.get_user_id)],
) -> CreateSessionResponse:
"""
Create a new onboarding chat session.
Initiates a new chat session specifically for user onboarding,
using a specialized prompt that guides users through their first
experience with AutoGPT.
Args:
user_id: The optional authenticated user ID parsed from the JWT.
Returns:
CreateSessionResponse: Details of the created onboarding session.
"""
logger.info(
f"Creating onboarding session with user_id: "
f"...{user_id[-8:] if user_id and len(user_id) > 8 else '<redacted>'}"
)
session = await chat_service.create_chat_session(user_id)
return CreateSessionResponse(
id=session.session_id,
created_at=session.started_at.isoformat(),
user_id=session.user_id or None,
)
@router.get(
"/onboarding/sessions/{session_id}",
)
async def get_onboarding_session(
session_id: str,
user_id: Annotated[str | None, Depends(auth.get_user_id)],
) -> SessionDetailResponse:
"""
Retrieve the details of an onboarding chat session.
Args:
session_id: The unique identifier for the onboarding session.
user_id: The optional authenticated user ID.
Returns:
SessionDetailResponse: Details for the requested session.
"""
session = await chat_service.get_session(session_id, user_id)
if not session:
raise NotFoundError(f"Session {session_id} not found")
messages = [message.model_dump() for message in session.messages]
logger.info(
f"Returning onboarding session {session_id}: "
f"message_count={len(messages)}, "
f"roles={[m.get('role') for m in messages]}"
)
return SessionDetailResponse(
id=session.session_id,
created_at=session.started_at.isoformat(),
updated_at=session.updated_at.isoformat(),
user_id=session.user_id or None,
messages=messages,
)
@router.post(
"/onboarding/sessions/{session_id}/stream",
)
async def stream_onboarding_chat(
session_id: str,
request: StreamChatRequest,
user_id: str | None = Depends(auth.get_user_id),
):
"""
Stream onboarding chat responses for a session.
Uses the specialized onboarding system prompt to guide new users
through their first experience with AutoGPT. Streams AI responses
in real time over Server-Sent Events (SSE).
Args:
session_id: The onboarding session identifier.
request: Request body containing message and optional context.
user_id: Optional authenticated user ID.
Returns:
StreamingResponse: SSE-formatted response chunks.
"""
session = await chat_service.get_session(session_id, user_id)
if not session:
raise NotFoundError(f"Session {session_id} not found.")
if session.user_id is None and user_id is not None:
session = await chat_service.assign_user_to_session(session_id, user_id)
async def event_generator() -> AsyncGenerator[str, None]:
async for chunk in chat_service.stream_chat_completion(
session_id,
request.message,
is_user_message=request.is_user_message,
user_id=user_id,
session=session,
context=request.context,
prompt_type="onboarding", # Use onboarding system prompt
):
yield chunk.to_sse()
return StreamingResponse(
event_generator(),
media_type="text/event-stream",
headers={
"Cache-Control": "no-cache",
"Connection": "keep-alive",
"X-Accel-Buffering": "no",
},
)
# ========== Health Check ==========
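
A minimal sketch (not part of this PR) of how a client might consume the new POST streaming endpoint. The base URL, route prefix, and bearer-token header are assumptions; the request body fields, path suffix, and `text/event-stream` media type come from the code above, and the `data:` parsing assumes standard SSE framing.

```python
import httpx


async def consume_chat_stream(session_id: str, message: str, token: str | None = None) -> None:
    """Send one message to the streaming endpoint and print SSE payloads as they arrive."""
    body = {
        "message": message,
        "is_user_message": True,
        # Optional page context, matching StreamChatRequest.context
        "context": {"url": "https://example.com/pricing", "content": "…page text…"},
    }
    headers = {"Authorization": f"Bearer {token}"} if token else {}
    # Base URL and router mount prefix are assumptions; only the path suffix is from the diff.
    async with httpx.AsyncClient(base_url="http://localhost:8000") as client:
        async with client.stream(
            "POST",
            f"/sessions/{session_id}/stream",
            json=body,
            headers=headers,
            timeout=None,
        ) as response:
            response.raise_for_status()
            async for line in response.aiter_lines():
                # Standard SSE frames each payload as a "data: ..." line
                if line.startswith("data:"):
                    print(line.removeprefix("data:").strip())
```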

View File

@@ -7,16 +7,17 @@ import orjson
from openai import AsyncOpenAI
from openai.types.chat import ChatCompletionChunk, ChatCompletionToolParam
from backend.data.understanding import (
format_understanding_for_prompt,
get_business_understanding,
)
from backend.util.exceptions import NotFoundError
from . import db as chat_db
from .config import ChatConfig
from .model import (
ChatMessage,
ChatSession,
Usage,
get_chat_session,
upsert_chat_session,
)
from .model import ChatMessage, ChatSession, Usage
from .model import create_chat_session as model_create_chat_session
from .model import get_chat_session, upsert_chat_session
from .response_model import (
StreamBaseResponse,
StreamEnd,
@@ -36,15 +37,109 @@ config = ChatConfig()
client = AsyncOpenAI(api_key=config.api_key, base_url=config.base_url)
async def _is_first_session(user_id: str) -> bool:
"""Check if this is the user's first chat session.
Returns True if the user has 1 or fewer sessions (meaning this is their first).
"""
try:
session_count = await chat_db.get_user_session_count(user_id)
return session_count <= 1
except Exception as e:
logger.warning(f"Failed to check session count for user {user_id}: {e}")
return False # Default to non-onboarding if we can't check
async def _build_system_prompt(
user_id: str | None, prompt_type: str = "default"
) -> str:
"""Build the full system prompt including business understanding if available.
Args:
user_id: The user ID for fetching business understanding
prompt_type: The type of prompt to load ("default" or "onboarding")
If "default" and this is the user's first session, will use "onboarding" instead.
Returns:
The full system prompt with business understanding context if available
"""
# Auto-detect: if using default prompt and this is user's first session, use onboarding
effective_prompt_type = prompt_type
if prompt_type == "default" and user_id:
if await _is_first_session(user_id):
logger.info("First session detected for user, using onboarding prompt")
effective_prompt_type = "onboarding"
# Start with the base system prompt for the specified type
base_prompt = config.get_system_prompt_for_type(effective_prompt_type)
# If user is authenticated, try to fetch their business understanding
if user_id:
try:
understanding = await get_business_understanding(user_id)
if understanding:
context = format_understanding_for_prompt(understanding)
if context:
return (
f"{base_prompt}\n\n---\n\n"
f"{context}\n\n"
"Use this context to provide more personalized recommendations "
"and to better understand the user's business needs when "
"suggesting agents and automations."
)
except Exception as e:
logger.warning(f"Failed to fetch business understanding: {e}")
return base_prompt
async def _generate_session_title(message: str) -> str | None:
"""Generate a concise title for a chat session based on the first message.
Args:
message: The first user message in the session
Returns:
A short title (3-6 words) or None if generation fails
"""
try:
response = await client.chat.completions.create(
model=config.title_model,
messages=[
{
"role": "system",
"content": (
"Generate a very short title (3-6 words) for a chat conversation "
"based on the user's first message. The title should capture the "
"main topic or intent. Return ONLY the title, no quotes or punctuation."
),
},
{"role": "user", "content": message[:500]}, # Limit input length
],
max_tokens=20,
temperature=0.7,
)
title = response.choices[0].message.content
if title:
# Clean up the title
title = title.strip().strip("\"'")
# Limit length
if len(title) > 50:
title = title[:47] + "..."
return title
return None
except Exception as e:
logger.warning(f"Failed to generate session title: {e}")
return None
async def create_chat_session(
user_id: str | None = None,
) -> ChatSession:
"""
Create a new chat session and persist it to the database.
"""
session = ChatSession.new(user_id)
# Persist the session immediately so it can be used for streaming
return await upsert_chat_session(session)
return await model_create_chat_session(user_id)
async def get_session(
@@ -57,6 +152,19 @@ async def get_session(
return await get_chat_session(session_id, user_id)
async def get_user_sessions(
user_id: str,
limit: int = 50,
offset: int = 0,
) -> list[ChatSession]:
"""
Get all chat sessions for a user.
"""
from .model import get_user_sessions as model_get_user_sessions
return await model_get_user_sessions(user_id, limit, offset)
async def assign_user_to_session(
session_id: str,
user_id: str,
@@ -78,6 +186,8 @@ async def stream_chat_completion(
user_id: str | None = None,
retry_count: int = 0,
session: ChatSession | None = None,
context: dict[str, str] | None = None, # {url: str, content: str}
prompt_type: str = "default",
) -> AsyncGenerator[StreamBaseResponse, None]:
"""Main entry point for streaming chat completions with database handling.
@@ -89,6 +199,7 @@ async def stream_chat_completion(
user_message: User's input message
user_id: User ID for authentication (None for anonymous)
session: Optional pre-loaded session object (for recursive calls to avoid Redis refetch)
prompt_type: The type of prompt to use ("default" or "onboarding")
Yields:
StreamBaseResponse objects formatted as SSE
@@ -121,9 +232,18 @@ async def stream_chat_completion(
)
if message:
# Build message content with context if provided
message_content = message
if context and context.get("url") and context.get("content"):
context_text = f"Page URL: {context['url']}\n\nPage Content:\n{context['content']}\n\n---\n\nUser Message: {message}"
message_content = context_text
logger.info(
f"Including page context: URL={context['url']}, content_length={len(context['content'])}"
)
session.messages.append(
ChatMessage(
role="user" if is_user_message else "assistant", content=message
role="user" if is_user_message else "assistant", content=message_content
)
)
logger.info(
@@ -141,6 +261,32 @@ async def stream_chat_completion(
session = await upsert_chat_session(session)
assert session, "Session not found"
# Generate title for new sessions on first user message (non-blocking)
# Check: is_user_message, no title yet, and this is the first user message
if is_user_message and message and not session.title:
user_messages = [m for m in session.messages if m.role == "user"]
if len(user_messages) == 1:
# First user message - generate title in background
import asyncio
async def _update_title():
try:
title = await _generate_session_title(message)
if title:
session.title = title
await upsert_chat_session(session)
logger.info(
f"Generated title for session {session_id}: {title}"
)
except Exception as e:
logger.warning(f"Failed to update session title: {e}")
# Fire and forget - don't block the chat response
asyncio.create_task(_update_title())
# Build system prompt with business understanding
system_prompt = await _build_system_prompt(user_id, prompt_type)
assistant_response = ChatMessage(
role="assistant",
content="",
@@ -159,6 +305,7 @@ async def stream_chat_completion(
async for chunk in _stream_chat_chunks(
session=session,
tools=tools,
system_prompt=system_prompt,
):
if isinstance(chunk, StreamTextChunk):
@@ -279,6 +426,7 @@ async def stream_chat_completion(
user_id=user_id,
retry_count=retry_count + 1,
session=session,
prompt_type=prompt_type,
):
yield chunk
return # Exit after retry to avoid double-saving in finally block
@@ -324,6 +472,7 @@ async def stream_chat_completion(
session_id=session.session_id,
user_id=user_id,
session=session, # Pass session object to avoid Redis refetch
prompt_type=prompt_type,
):
yield chunk
@@ -331,6 +480,7 @@ async def stream_chat_completion(
async def _stream_chat_chunks(
session: ChatSession,
tools: list[ChatCompletionToolParam],
system_prompt: str | None = None,
) -> AsyncGenerator[StreamBaseResponse, None]:
"""
Pure streaming function for OpenAI chat completions with tool calling.
@@ -338,9 +488,9 @@ async def _stream_chat_chunks(
This function is database-agnostic and focuses only on streaming logic.
Args:
messages: Conversation context as ChatCompletionMessageParam list
session_id: Session ID
user_id: User ID for tool execution
session: Chat session with conversation history
tools: Available tools for the model
system_prompt: System prompt to prepend to messages
Yields:
SSE formatted JSON response objects
@@ -350,6 +500,17 @@ async def _stream_chat_chunks(
logger.info("Starting pure chat stream")
# Build messages with system prompt prepended
messages = session.to_openai_messages()
if system_prompt:
from openai.types.chat import ChatCompletionSystemMessageParam
system_message = ChatCompletionSystemMessageParam(
role="system",
content=system_prompt,
)
messages = [system_message] + messages
# Loop to handle tool calls and continue conversation
while True:
try:
@@ -358,7 +519,7 @@ async def _stream_chat_chunks(
# Create the stream with proper types
stream = await client.chat.completions.create(
model=model,
messages=session.to_openai_messages(),
messages=messages,
tools=tools,
tool_choice="auto",
stream=True,
@@ -502,8 +663,12 @@ async def _yield_tool_call(
"""
logger.info(f"Yielding tool call: {tool_calls[yield_idx]}")
# Parse tool call arguments - exceptions will propagate to caller
arguments = orjson.loads(tool_calls[yield_idx]["function"]["arguments"])
# Parse tool call arguments - handle empty arguments gracefully
raw_arguments = tool_calls[yield_idx]["function"]["arguments"]
if raw_arguments:
arguments = orjson.loads(raw_arguments)
else:
arguments = {}
yield StreamToolCall(
tool_id=tool_calls[yield_idx]["id"],

View File

@@ -4,21 +4,30 @@ from openai.types.chat import ChatCompletionToolParam
from backend.api.features.chat.model import ChatSession
from .add_understanding import AddUnderstandingTool
from .agent_output import AgentOutputTool
from .base import BaseTool
from .find_agent import FindAgentTool
from .find_library_agent import FindLibraryAgentTool
from .run_agent import RunAgentTool
if TYPE_CHECKING:
from backend.api.features.chat.response_model import StreamToolExecutionResult
# Initialize tool instances
add_understanding_tool = AddUnderstandingTool()
find_agent_tool = FindAgentTool()
find_library_agent_tool = FindLibraryAgentTool()
run_agent_tool = RunAgentTool()
agent_output_tool = AgentOutputTool()
# Export tools as OpenAI format
tools: list[ChatCompletionToolParam] = [
add_understanding_tool.as_openai_tool(),
find_agent_tool.as_openai_tool(),
find_library_agent_tool.as_openai_tool(),
run_agent_tool.as_openai_tool(),
agent_output_tool.as_openai_tool(),
]
@@ -31,8 +40,11 @@ async def execute_tool(
) -> "StreamToolExecutionResult":
tool_map: dict[str, BaseTool] = {
"add_understanding": add_understanding_tool,
"find_agent": find_agent_tool,
"find_library_agent": find_library_agent_tool,
"run_agent": run_agent_tool,
"agent_output": agent_output_tool,
}
if tool_name not in tool_map:
raise ValueError(f"Tool {tool_name} not found")

View File

@@ -3,6 +3,7 @@ from datetime import UTC, datetime
from os import getenv
import pytest
from prisma.types import ProfileCreateInput
from pydantic import SecretStr
from backend.api.features.chat.model import ChatSession
@@ -49,13 +50,13 @@ async def setup_test_data():
# 1b. Create a profile with username for the user (required for store agent lookup)
username = user.email.split("@")[0]
await prisma.profile.create(
data={
"userId": user.id,
"username": username,
"name": f"Test User {username}",
"description": "Test user profile",
"links": [], # Required field - empty array for test profiles
}
data=ProfileCreateInput(
userId=user.id,
username=username,
name=f"Test User {username}",
description="Test user profile",
links=[], # Required field - empty array for test profiles
)
)
# 2. Create a test graph with agent input -> agent output
@@ -172,13 +173,13 @@ async def setup_llm_test_data():
# 1b. Create a profile with username for the user (required for store agent lookup)
username = user.email.split("@")[0]
await prisma.profile.create(
data={
"userId": user.id,
"username": username,
"name": f"Test User {username}",
"description": "Test user profile for LLM tests",
"links": [], # Required field - empty array for test profiles
}
data=ProfileCreateInput(
userId=user.id,
username=username,
name=f"Test User {username}",
description="Test user profile for LLM tests",
links=[], # Required field - empty array for test profiles
)
)
# 2. Create test OpenAI credentials for the user
@@ -332,13 +333,13 @@ async def setup_firecrawl_test_data():
# 1b. Create a profile with username for the user (required for store agent lookup)
username = user.email.split("@")[0]
await prisma.profile.create(
data={
"userId": user.id,
"username": username,
"name": f"Test User {username}",
"description": "Test user profile for Firecrawl tests",
"links": [], # Required field - empty array for test profiles
}
data=ProfileCreateInput(
userId=user.id,
username=username,
name=f"Test User {username}",
description="Test user profile for Firecrawl tests",
links=[], # Required field - empty array for test profiles
)
)
# NOTE: We deliberately do NOT create Firecrawl credentials for this user

View File

@@ -0,0 +1,202 @@
"""Tool for capturing user business understanding incrementally."""
import logging
from typing import Any
from backend.api.features.chat.model import ChatSession
from backend.data.understanding import (
BusinessUnderstandingInput,
upsert_business_understanding,
)
from .base import BaseTool
from .models import ErrorResponse, ToolResponseBase, UnderstandingUpdatedResponse
logger = logging.getLogger(__name__)
class AddUnderstandingTool(BaseTool):
"""Tool for capturing user's business understanding incrementally."""
@property
def name(self) -> str:
return "add_understanding"
@property
def description(self) -> str:
return """Capture and store information about the user's business context,
workflows, pain points, and automation goals. Call this tool whenever the user
shares information about their business. Each call incrementally adds to the
existing understanding - you don't need to provide all fields at once.
Use this to build a comprehensive profile that helps recommend better agents
and automations for the user's specific needs."""
@property
def parameters(self) -> dict[str, Any]:
return {
"type": "object",
"properties": {
"user_name": {
"type": "string",
"description": "The user's name",
},
"job_title": {
"type": "string",
"description": "The user's job title (e.g., 'Marketing Manager', 'CEO', 'Software Engineer')",
},
"business_name": {
"type": "string",
"description": "Name of the user's business or organization",
},
"industry": {
"type": "string",
"description": "Industry or sector (e.g., 'e-commerce', 'healthcare', 'finance')",
},
"business_size": {
"type": "string",
"description": "Company size: '1-10', '11-50', '51-200', '201-1000', or '1000+'",
},
"user_role": {
"type": "string",
"description": "User's role in organization context (e.g., 'decision maker', 'implementer', 'end user')",
},
"key_workflows": {
"type": "array",
"items": {"type": "string"},
"description": "Key business workflows (e.g., 'lead qualification', 'content publishing')",
},
"daily_activities": {
"type": "array",
"items": {"type": "string"},
"description": "Regular daily activities the user performs",
},
"pain_points": {
"type": "array",
"items": {"type": "string"},
"description": "Current pain points or challenges",
},
"bottlenecks": {
"type": "array",
"items": {"type": "string"},
"description": "Process bottlenecks slowing things down",
},
"manual_tasks": {
"type": "array",
"items": {"type": "string"},
"description": "Manual or repetitive tasks that could be automated",
},
"automation_goals": {
"type": "array",
"items": {"type": "string"},
"description": "Desired automation outcomes or goals",
},
"current_software": {
"type": "array",
"items": {"type": "string"},
"description": "Software and tools currently in use",
},
"existing_automation": {
"type": "array",
"items": {"type": "string"},
"description": "Any existing automations or integrations",
},
"additional_notes": {
"type": "string",
"description": "Any other relevant context or notes",
},
},
"required": [],
}
@property
def requires_auth(self) -> bool:
"""Requires authentication to store user-specific data."""
return True
async def _execute(
self,
user_id: str | None,
session: ChatSession,
**kwargs,
) -> ToolResponseBase:
"""
Capture and store business understanding incrementally.
Each call merges new data with existing understanding:
- String fields are overwritten if provided
- List fields are appended (with deduplication)
"""
session_id = session.session_id
if not user_id:
return ErrorResponse(
message="Authentication required to save business understanding.",
session_id=session_id,
)
# Check if any data was provided
if not any(v is not None for v in kwargs.values()):
return ErrorResponse(
message="Please provide at least one field to update.",
session_id=session_id,
)
# Build input model
input_data = BusinessUnderstandingInput(
user_name=kwargs.get("user_name"),
job_title=kwargs.get("job_title"),
business_name=kwargs.get("business_name"),
industry=kwargs.get("industry"),
business_size=kwargs.get("business_size"),
user_role=kwargs.get("user_role"),
key_workflows=kwargs.get("key_workflows"),
daily_activities=kwargs.get("daily_activities"),
pain_points=kwargs.get("pain_points"),
bottlenecks=kwargs.get("bottlenecks"),
manual_tasks=kwargs.get("manual_tasks"),
automation_goals=kwargs.get("automation_goals"),
current_software=kwargs.get("current_software"),
existing_automation=kwargs.get("existing_automation"),
additional_notes=kwargs.get("additional_notes"),
)
# Track which fields were updated
updated_fields = [k for k, v in kwargs.items() if v is not None]
# Upsert with merge
understanding = await upsert_business_understanding(user_id, input_data)
# Build current understanding summary for the response
current_understanding = {
"user_name": understanding.user_name,
"job_title": understanding.job_title,
"business_name": understanding.business_name,
"industry": understanding.industry,
"business_size": understanding.business_size,
"user_role": understanding.user_role,
"key_workflows": understanding.key_workflows,
"daily_activities": understanding.daily_activities,
"pain_points": understanding.pain_points,
"bottlenecks": understanding.bottlenecks,
"manual_tasks": understanding.manual_tasks,
"automation_goals": understanding.automation_goals,
"current_software": understanding.current_software,
"existing_automation": understanding.existing_automation,
"additional_notes": understanding.additional_notes,
}
# Filter out empty values for cleaner response
current_understanding = {
k: v
for k, v in current_understanding.items()
if v is not None and v != [] and v != ""
}
return UnderstandingUpdatedResponse(
message=f"Updated understanding with: {', '.join(updated_fields)}. "
"I now have a better picture of your business context.",
session_id=session_id,
updated_fields=updated_fields,
current_understanding=current_understanding,
)
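
The merge itself lives in `upsert_business_understanding` (not shown in this diff). A hypothetical sketch, assuming a plain field-by-field merge over dicts, of the semantics the docstring describes:

```python
# Hypothetical illustration only; the real merge is implemented in
# backend.data.understanding.upsert_business_understanding.
def merge_understanding(existing: dict, incoming: dict) -> dict:
    merged = dict(existing)
    for field, value in incoming.items():
        if value is None:
            continue  # fields not provided keep their existing value
        if isinstance(value, list):
            # List fields are appended with deduplication, preserving order
            current = merged.get(field) or []
            merged[field] = current + [v for v in value if v not in current]
        else:
            # String fields are overwritten when provided
            merged[field] = value
    return merged
```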

View File

@@ -0,0 +1,455 @@
"""Tool for retrieving agent execution outputs from user's library."""
import logging
import re
from datetime import datetime, timedelta, timezone
from typing import Any
from pydantic import BaseModel, field_validator
from backend.api.features.chat.model import ChatSession
from backend.api.features.library import db as library_db
from backend.api.features.library.model import LibraryAgent
from backend.data import execution as execution_db
from backend.data.execution import ExecutionStatus, GraphExecution, GraphExecutionMeta
from .base import BaseTool
from .models import (
AgentOutputResponse,
ErrorResponse,
ExecutionOutputInfo,
NoResultsResponse,
ToolResponseBase,
)
from .utils import fetch_graph_from_store_slug
logger = logging.getLogger(__name__)
class AgentOutputInput(BaseModel):
"""Input parameters for the agent_output tool."""
agent_name: str = ""
library_agent_id: str = ""
store_slug: str = ""
execution_id: str = ""
run_time: str = "latest"
@field_validator(
"agent_name",
"library_agent_id",
"store_slug",
"execution_id",
"run_time",
mode="before",
)
@classmethod
def strip_strings(cls, v: Any) -> Any:
"""Strip whitespace from string fields."""
return v.strip() if isinstance(v, str) else v
def parse_time_expression(
time_expr: str | None,
) -> tuple[datetime | None, datetime | None]:
"""
Parse time expression into datetime range (start, end).
Supports:
- "latest" or None -> returns (None, None) to get most recent
- "yesterday" -> 24h window for yesterday
- "today" -> Today from midnight
- "last week" / "last 7 days" -> 7 day window
- "last month" / "last 30 days" -> 30 day window
- ISO date "YYYY-MM-DD" -> 24h window for that date
"""
if not time_expr or time_expr.lower() == "latest":
return None, None
now = datetime.now(timezone.utc)
expr = time_expr.lower().strip()
# Relative expressions
if expr == "yesterday":
end = now.replace(hour=0, minute=0, second=0, microsecond=0)
start = end - timedelta(days=1)
return start, end
if expr in ("last week", "last 7 days"):
return now - timedelta(days=7), now
if expr in ("last month", "last 30 days"):
return now - timedelta(days=30), now
if expr == "today":
start = now.replace(hour=0, minute=0, second=0, microsecond=0)
return start, now
# Try ISO date format (YYYY-MM-DD)
date_match = re.match(r"^(\d{4})-(\d{2})-(\d{2})$", expr)
if date_match:
year, month, day = map(int, date_match.groups())
start = datetime(year, month, day, 0, 0, 0, tzinfo=timezone.utc)
end = start + timedelta(days=1)
return start, end
# Try ISO datetime
try:
parsed = datetime.fromisoformat(expr.replace("Z", "+00:00"))
if parsed.tzinfo is None:
parsed = parsed.replace(tzinfo=timezone.utc)
# Return +/- 1 hour window around the specified time
return parsed - timedelta(hours=1), parsed + timedelta(hours=1)
except ValueError:
pass
# Fallback: treat as "latest"
return None, None
class AgentOutputTool(BaseTool):
"""Tool for retrieving execution outputs from user's library agents."""
@property
def name(self) -> str:
return "agent_output"
@property
def description(self) -> str:
return """Retrieve execution outputs from agents in the user's library.
Identify the agent using one of:
- agent_name: Fuzzy search in user's library
- library_agent_id: Exact library agent ID
- store_slug: Marketplace format 'username/agent-name'
Select which run to retrieve using:
- execution_id: Specific execution ID
- run_time: 'latest' (default), 'yesterday', 'last week', or ISO date 'YYYY-MM-DD'
"""
@property
def parameters(self) -> dict[str, Any]:
return {
"type": "object",
"properties": {
"agent_name": {
"type": "string",
"description": "Agent name to search for in user's library (fuzzy match)",
},
"library_agent_id": {
"type": "string",
"description": "Exact library agent ID",
},
"store_slug": {
"type": "string",
"description": "Marketplace identifier: 'username/agent-slug'",
},
"execution_id": {
"type": "string",
"description": "Specific execution ID to retrieve",
},
"run_time": {
"type": "string",
"description": (
"Time filter: 'latest', 'yesterday', 'last week', or 'YYYY-MM-DD'"
),
},
},
"required": [],
}
@property
def requires_auth(self) -> bool:
return True
async def _resolve_agent(
self,
user_id: str,
agent_name: str | None,
library_agent_id: str | None,
store_slug: str | None,
) -> tuple[LibraryAgent | None, str | None]:
"""
Resolve agent from provided identifiers.
Returns (library_agent, error_message).
"""
# Priority 1: Exact library agent ID
if library_agent_id:
try:
agent = await library_db.get_library_agent(library_agent_id, user_id)
return agent, None
except Exception as e:
logger.warning(f"Failed to get library agent by ID: {e}")
return None, f"Library agent '{library_agent_id}' not found"
# Priority 2: Store slug (username/agent-name)
if store_slug and "/" in store_slug:
username, agent_slug = store_slug.split("/", 1)
graph, _ = await fetch_graph_from_store_slug(username, agent_slug)
if not graph:
return None, f"Agent '{store_slug}' not found in marketplace"
# Find in user's library by graph_id
agent = await library_db.get_library_agent_by_graph_id(user_id, graph.id)
if not agent:
return (
None,
f"Agent '{store_slug}' is not in your library. "
"Add it first to see outputs.",
)
return agent, None
# Priority 3: Fuzzy name search in library
if agent_name:
try:
response = await library_db.list_library_agents(
user_id=user_id,
search_term=agent_name,
page_size=5,
)
if not response.agents:
return (
None,
f"No agents matching '{agent_name}' found in your library",
)
# Return best match (first result from search)
return response.agents[0], None
except Exception as e:
logger.error(f"Error searching library agents: {e}")
return None, f"Error searching for agent: {e}"
return (
None,
"Please specify an agent name, library_agent_id, or store_slug",
)
async def _get_execution(
self,
user_id: str,
graph_id: str,
execution_id: str | None,
time_start: datetime | None,
time_end: datetime | None,
) -> tuple[GraphExecution | None, list[GraphExecutionMeta], str | None]:
"""
Fetch execution(s) based on filters.
Returns (single_execution, available_executions_meta, error_message).
"""
# If specific execution_id provided, fetch it directly
if execution_id:
execution = await execution_db.get_graph_execution(
user_id=user_id,
execution_id=execution_id,
include_node_executions=False,
)
if not execution:
return None, [], f"Execution '{execution_id}' not found"
return execution, [], None
# Get completed executions with time filters
executions = await execution_db.get_graph_executions(
graph_id=graph_id,
user_id=user_id,
statuses=[ExecutionStatus.COMPLETED],
created_time_gte=time_start,
created_time_lte=time_end,
limit=10,
)
if not executions:
return None, [], None # No error, just no executions
# If only one execution, fetch full details
if len(executions) == 1:
full_execution = await execution_db.get_graph_execution(
user_id=user_id,
execution_id=executions[0].id,
include_node_executions=False,
)
return full_execution, [], None
# Multiple executions - return latest with full details, plus list of available
full_execution = await execution_db.get_graph_execution(
user_id=user_id,
execution_id=executions[0].id,
include_node_executions=False,
)
return full_execution, executions, None
def _build_response(
self,
agent: LibraryAgent,
execution: GraphExecution | None,
available_executions: list[GraphExecutionMeta],
session_id: str | None,
) -> AgentOutputResponse:
"""Build the response based on execution data."""
library_agent_link = f"/library/agents/{agent.id}"
if not execution:
return AgentOutputResponse(
message=f"No completed executions found for agent '{agent.name}'",
session_id=session_id,
agent_name=agent.name,
agent_id=agent.graph_id,
library_agent_id=agent.id,
library_agent_link=library_agent_link,
total_executions=0,
)
execution_info = ExecutionOutputInfo(
execution_id=execution.id,
status=execution.status.value,
started_at=execution.started_at,
ended_at=execution.ended_at,
outputs=dict(execution.outputs),
inputs_summary=execution.inputs if execution.inputs else None,
)
available_list = None
if len(available_executions) > 1:
available_list = [
{
"id": e.id,
"status": e.status.value,
"started_at": e.started_at.isoformat() if e.started_at else None,
}
for e in available_executions[:5]
]
message = f"Found execution outputs for agent '{agent.name}'"
if len(available_executions) > 1:
message += (
f". Showing latest of {len(available_executions)} matching executions."
)
return AgentOutputResponse(
message=message,
session_id=session_id,
agent_name=agent.name,
agent_id=agent.graph_id,
library_agent_id=agent.id,
library_agent_link=library_agent_link,
execution=execution_info,
available_executions=available_list,
total_executions=len(available_executions) if available_executions else 1,
)
async def _execute(
self,
user_id: str | None,
session: ChatSession,
**kwargs,
) -> ToolResponseBase:
"""Execute the agent_output tool."""
session_id = session.session_id
# Parse and validate input
try:
input_data = AgentOutputInput(**kwargs)
except Exception as e:
logger.error(f"Invalid input: {e}")
return ErrorResponse(
message="Invalid input parameters",
error=str(e),
session_id=session_id,
)
# Ensure user_id is present (should be guaranteed by requires_auth)
if not user_id:
return ErrorResponse(
message="User authentication required",
session_id=session_id,
)
# Check if at least one identifier is provided
if not any(
[
input_data.agent_name,
input_data.library_agent_id,
input_data.store_slug,
input_data.execution_id,
]
):
return ErrorResponse(
message=(
"Please specify at least one of: agent_name, "
"library_agent_id, store_slug, or execution_id"
),
session_id=session_id,
)
# If only execution_id provided, we need to find the agent differently
if (
input_data.execution_id
and not input_data.agent_name
and not input_data.library_agent_id
and not input_data.store_slug
):
# Fetch execution directly to get graph_id
execution = await execution_db.get_graph_execution(
user_id=user_id,
execution_id=input_data.execution_id,
include_node_executions=False,
)
if not execution:
return ErrorResponse(
message=f"Execution '{input_data.execution_id}' not found",
session_id=session_id,
)
# Find library agent by graph_id
agent = await library_db.get_library_agent_by_graph_id(
user_id, execution.graph_id
)
if not agent:
return NoResultsResponse(
message=(
f"Execution found but agent not in your library. "
f"Graph ID: {execution.graph_id}"
),
session_id=session_id,
suggestions=["Add the agent to your library to see more details"],
)
return self._build_response(agent, execution, [], session_id)
# Resolve agent from identifiers
agent, error = await self._resolve_agent(
user_id=user_id,
agent_name=input_data.agent_name or None,
library_agent_id=input_data.library_agent_id or None,
store_slug=input_data.store_slug or None,
)
if error or not agent:
return NoResultsResponse(
message=error or "Agent not found",
session_id=session_id,
suggestions=[
"Check the agent name or ID",
"Make sure the agent is in your library",
],
)
# Parse time expression
time_start, time_end = parse_time_expression(input_data.run_time)
# Fetch execution(s)
execution, available_executions, exec_error = await self._get_execution(
user_id=user_id,
graph_id=agent.graph_id,
execution_id=input_data.execution_id or None,
time_start=time_start,
time_end=time_end,
)
if exec_error:
return ErrorResponse(
message=exec_error,
session_id=session_id,
)
return self._build_response(agent, execution, available_executions, session_id)
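
For reference, the time windows the `parse_time_expression` helper above produces for a few typical inputs (illustrative; actual timestamps depend on the current UTC time):

```python
# Illustrative calls against parse_time_expression as defined above.
start, end = parse_time_expression("latest")       # (None, None): caller just takes the most recent run
start, end = parse_time_expression("yesterday")    # previous UTC day, midnight to midnight
start, end = parse_time_expression("last 7 days")  # (now - 7 days, now)
start, end = parse_time_expression("2026-01-09")   # that calendar day in UTC
start, end = parse_time_expression("nonsense")     # falls back to (None, None), i.e. "latest"
```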

View File

@@ -0,0 +1,157 @@
"""Tool for searching agents in the user's library."""
import logging
from typing import Any
from backend.api.features.chat.model import ChatSession
from backend.api.features.library import db as library_db
from backend.util.exceptions import DatabaseError
from .base import BaseTool
from .models import (
AgentCarouselResponse,
AgentInfo,
ErrorResponse,
NoResultsResponse,
ToolResponseBase,
)
logger = logging.getLogger(__name__)
class FindLibraryAgentTool(BaseTool):
"""Tool for searching agents in the user's library."""
@property
def name(self) -> str:
return "find_library_agent"
@property
def description(self) -> str:
return (
"Search for agents in the user's library. Use this to find agents "
"the user has already added to their library, including agents they "
"created or added from the marketplace."
)
@property
def parameters(self) -> dict[str, Any]:
return {
"type": "object",
"properties": {
"query": {
"type": "string",
"description": (
"Search query to find agents by name or description. "
"Use keywords for best results."
),
},
},
"required": ["query"],
}
@property
def requires_auth(self) -> bool:
return True
async def _execute(
self,
user_id: str | None,
session: ChatSession,
**kwargs,
) -> ToolResponseBase:
"""Search for agents in the user's library.
Args:
user_id: User ID (required)
session: Chat session
query: Search query
Returns:
AgentCarouselResponse: List of agents found in the library
NoResultsResponse: No agents found
ErrorResponse: Error message
"""
query = kwargs.get("query", "").strip()
session_id = session.session_id
if not query:
return ErrorResponse(
message="Please provide a search query",
session_id=session_id,
)
if not user_id:
return ErrorResponse(
message="User authentication required to search library",
session_id=session_id,
)
agents = []
try:
logger.info(f"Searching user library for: {query}")
library_results = await library_db.list_library_agents(
user_id=user_id,
search_term=query,
page_size=10,
)
logger.info(
f"Find library agents tool found {len(library_results.agents)} agents"
)
for agent in library_results.agents:
agents.append(
AgentInfo(
id=agent.id,
name=agent.name,
description=agent.description or "",
source="library",
in_library=True,
creator=agent.creator_name,
status=agent.status.value,
can_access_graph=agent.can_access_graph,
has_external_trigger=agent.has_external_trigger,
new_output=agent.new_output,
graph_id=agent.graph_id,
),
)
except DatabaseError as e:
logger.error(f"Error searching library agents: {e}", exc_info=True)
return ErrorResponse(
message="Failed to search library. Please try again.",
error=str(e),
session_id=session_id,
)
if not agents:
return NoResultsResponse(
message=(
f"No agents found matching '{query}' in your library. "
"Try different keywords or use find_agent to search the marketplace."
),
session_id=session_id,
suggestions=[
"Try more general terms",
"Use find_agent to search the marketplace",
"Check your library at /library",
],
)
title = (
f"Found {len(agents)} agent{'s' if len(agents) != 1 else ''} "
f"in your library for '{query}'"
)
return AgentCarouselResponse(
message=(
"Found agents in the user's library. You can provide a link to "
"view an agent at: /library/agents/{agent_id}. "
"Use agent_output to get execution results, or run_agent to execute."
),
title=title,
agents=agents,
count=len(agents),
session_id=session_id,
)

View File

@@ -1,5 +1,6 @@
"""Pydantic models for tool responses."""
from datetime import datetime
from enum import Enum
from typing import Any
@@ -19,6 +20,15 @@ class ResponseType(str, Enum):
ERROR = "error"
NO_RESULTS = "no_results"
SUCCESS = "success"
DOC_SEARCH_RESULTS = "doc_search_results"
AGENT_OUTPUT = "agent_output"
BLOCK_LIST = "block_list"
BLOCK_OUTPUT = "block_output"
UNDERSTANDING_UPDATED = "understanding_updated"
# Agent generation responses
AGENT_PREVIEW = "agent_preview"
AGENT_SAVED = "agent_saved"
CLARIFICATION_NEEDED = "clarification_needed"
# Base response model
@@ -173,3 +183,128 @@ class ErrorResponse(ToolResponseBase):
type: ResponseType = ResponseType.ERROR
error: str | None = None
details: dict[str, Any] | None = None
# Documentation search models
class DocSearchResult(BaseModel):
"""A single documentation search result."""
title: str
path: str
section: str
snippet: str # Short excerpt for UI display
content: str # Full text content for LLM to read and understand
score: float
doc_url: str | None = None
class DocSearchResultsResponse(ToolResponseBase):
"""Response for search_docs tool."""
type: ResponseType = ResponseType.DOC_SEARCH_RESULTS
results: list[DocSearchResult]
count: int
query: str
# Agent output models
class ExecutionOutputInfo(BaseModel):
"""Summary of a single execution's outputs."""
execution_id: str
status: str
started_at: datetime | None = None
ended_at: datetime | None = None
outputs: dict[str, list[Any]]
inputs_summary: dict[str, Any] | None = None
class AgentOutputResponse(ToolResponseBase):
"""Response for agent_output tool."""
type: ResponseType = ResponseType.AGENT_OUTPUT
agent_name: str
agent_id: str
library_agent_id: str | None = None
library_agent_link: str | None = None
execution: ExecutionOutputInfo | None = None
available_executions: list[dict[str, Any]] | None = None
total_executions: int = 0
# Block models
class BlockInfoSummary(BaseModel):
"""Summary of a block for search results."""
id: str
name: str
description: str
categories: list[str]
input_schema: dict[str, Any]
output_schema: dict[str, Any]
class BlockListResponse(ToolResponseBase):
"""Response for find_block tool."""
type: ResponseType = ResponseType.BLOCK_LIST
blocks: list[BlockInfoSummary]
count: int
query: str
class BlockOutputResponse(ToolResponseBase):
"""Response for run_block tool."""
type: ResponseType = ResponseType.BLOCK_OUTPUT
block_id: str
block_name: str
outputs: dict[str, list[Any]]
success: bool = True
# Business understanding models
class UnderstandingUpdatedResponse(ToolResponseBase):
"""Response for add_understanding tool."""
type: ResponseType = ResponseType.UNDERSTANDING_UPDATED
updated_fields: list[str] = Field(default_factory=list)
current_understanding: dict[str, Any] = Field(default_factory=dict)
# Agent generation models
class ClarifyingQuestion(BaseModel):
"""A question that needs user clarification."""
question: str
keyword: str
example: str | None = None
class AgentPreviewResponse(ToolResponseBase):
"""Response for previewing a generated agent before saving."""
type: ResponseType = ResponseType.AGENT_PREVIEW
agent_json: dict[str, Any]
agent_name: str
description: str
node_count: int
link_count: int = 0
class AgentSavedResponse(ToolResponseBase):
"""Response when an agent is saved to the library."""
type: ResponseType = ResponseType.AGENT_SAVED
agent_id: str
agent_name: str
library_agent_id: str
library_agent_link: str
agent_page_link: str # Link to the agent builder/editor page
class ClarificationNeededResponse(ToolResponseBase):
"""Response when the LLM needs more information from the user."""
type: ResponseType = ResponseType.CLARIFICATION_NEEDED
questions: list[ClarifyingQuestion] = Field(default_factory=list)

View File

@@ -7,6 +7,7 @@ from pydantic import BaseModel, Field, field_validator
from backend.api.features.chat.config import ChatConfig
from backend.api.features.chat.model import ChatSession
from backend.api.features.library import db as library_db
from backend.data.graph import GraphModel
from backend.data.model import CredentialsMetaInput
from backend.data.user import get_user_by_id
@@ -57,6 +58,7 @@ class RunAgentInput(BaseModel):
"""Input parameters for the run_agent tool."""
username_agent_slug: str = ""
library_agent_id: str = ""
inputs: dict[str, Any] = Field(default_factory=dict)
use_defaults: bool = False
schedule_name: str = ""
@@ -64,7 +66,12 @@ class RunAgentInput(BaseModel):
timezone: str = "UTC"
@field_validator(
"username_agent_slug", "schedule_name", "cron", "timezone", mode="before"
"username_agent_slug",
"library_agent_id",
"schedule_name",
"cron",
"timezone",
mode="before",
)
@classmethod
def strip_strings(cls, v: Any) -> Any:
@@ -90,7 +97,7 @@ class RunAgentTool(BaseTool):
@property
def description(self) -> str:
return """Run or schedule an agent from the marketplace.
return """Run or schedule an agent from the marketplace or user's library.
The tool automatically handles the setup flow:
- Returns missing inputs if required fields are not provided
@@ -98,6 +105,10 @@ class RunAgentTool(BaseTool):
- Executes immediately if all requirements are met
- Schedules execution if cron expression is provided
Identify the agent using either:
- username_agent_slug: Marketplace format 'username/agent-name'
- library_agent_id: ID of an agent in the user's library
For scheduled execution, provide: schedule_name, cron, and optionally timezone."""
@property
@@ -109,6 +120,10 @@ class RunAgentTool(BaseTool):
"type": "string",
"description": "Agent identifier in format 'username/agent-name'",
},
"library_agent_id": {
"type": "string",
"description": "Library agent ID from user's library",
},
"inputs": {
"type": "object",
"description": "Input values for the agent",
@@ -131,7 +146,7 @@ class RunAgentTool(BaseTool):
"description": "IANA timezone for schedule (default: UTC)",
},
},
"required": ["username_agent_slug"],
"required": [],
}
@property
@@ -149,10 +164,16 @@ class RunAgentTool(BaseTool):
params = RunAgentInput(**kwargs)
session_id = session.session_id
# Validate agent slug format
if not params.username_agent_slug or "/" not in params.username_agent_slug:
# Validate at least one identifier is provided
has_slug = params.username_agent_slug and "/" in params.username_agent_slug
has_library_id = bool(params.library_agent_id)
if not has_slug and not has_library_id:
return ErrorResponse(
message="Please provide an agent slug in format 'username/agent-name'",
message=(
"Please provide either a username_agent_slug "
"(format 'username/agent-name') or a library_agent_id"
),
session_id=session_id,
)
@@ -167,13 +188,41 @@ class RunAgentTool(BaseTool):
is_schedule = bool(params.schedule_name or params.cron)
try:
# Step 1: Fetch agent details (always happens first)
username, agent_name = params.username_agent_slug.split("/", 1)
graph, store_agent = await fetch_graph_from_store_slug(username, agent_name)
# Step 1: Fetch agent details
graph: GraphModel | None = None
library_agent = None
# Priority: library_agent_id if provided
if has_library_id:
library_agent = await library_db.get_library_agent(
params.library_agent_id, user_id
)
if not library_agent:
return ErrorResponse(
message=f"Library agent '{params.library_agent_id}' not found",
session_id=session_id,
)
# Get the graph from the library agent
from backend.data.graph import get_graph
graph = await get_graph(
library_agent.graph_id,
library_agent.graph_version,
user_id=user_id,
)
else:
# Fetch from marketplace slug
username, agent_name = params.username_agent_slug.split("/", 1)
graph, _ = await fetch_graph_from_store_slug(username, agent_name)
if not graph:
identifier = (
params.library_agent_id
if has_library_id
else params.username_agent_slug
)
return ErrorResponse(
message=f"Agent '{params.username_agent_slug}' not found in marketplace",
message=f"Agent '{identifier}' not found",
session_id=session_id,
)
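
A toy, dependency-free sketch of the new resolution rule (library_agent_id takes priority over the marketplace slug). The lookup tables and helper name are illustrative, not the real DB helpers.

```python
# In-memory stand-ins for the library and marketplace lookups.
LIBRARY = {"lib-1": {"graph_id": "g-1", "name": "My Agent"}}
MARKETPLACE = {("alice", "summarizer"): {"graph_id": "g-2", "name": "Summarizer"}}


def resolve_agent(username_agent_slug: str = "", library_agent_id: str = "") -> dict | str:
    has_slug = bool(username_agent_slug) and "/" in username_agent_slug
    has_library_id = bool(library_agent_id)

    if not has_slug and not has_library_id:
        return ("Please provide either a username_agent_slug "
                "(format 'username/agent-name') or a library_agent_id")

    # Priority: library_agent_id if provided.
    if has_library_id:
        agent = LIBRARY.get(library_agent_id)
        identifier = library_agent_id
    else:
        username, agent_name = username_agent_slug.split("/", 1)
        agent = MARKETPLACE.get((username, agent_name))
        identifier = username_agent_slug

    return agent if agent else f"Agent '{identifier}' not found"


print(resolve_agent(library_agent_id="lib-1"))              # library hit wins
print(resolve_agent(username_agent_slug="alice/summarizer"))
print(resolve_agent())                                       # error message
```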

View File

@@ -48,6 +48,7 @@ class LibraryAgent(pydantic.BaseModel):
id: str
graph_id: str
graph_version: int
owner_user_id: str # ID of user who owns/created this agent graph
image_url: str | None
@@ -163,6 +164,7 @@ class LibraryAgent(pydantic.BaseModel):
id=agent.id,
graph_id=agent.agentGraphId,
graph_version=agent.agentGraphVersion,
owner_user_id=agent.userId,
image_url=agent.imageUrl,
creator_name=creator_name,
creator_image_url=creator_image_url,

View File

@@ -42,6 +42,7 @@ async def test_get_library_agents_success(
id="test-agent-1",
graph_id="test-agent-1",
graph_version=1,
owner_user_id=test_user_id,
name="Test Agent 1",
description="Test Description 1",
image_url=None,
@@ -64,6 +65,7 @@ async def test_get_library_agents_success(
id="test-agent-2",
graph_id="test-agent-2",
graph_version=1,
owner_user_id=test_user_id,
name="Test Agent 2",
description="Test Description 2",
image_url=None,
@@ -138,6 +140,7 @@ async def test_get_favorite_library_agents_success(
id="test-agent-1",
graph_id="test-agent-1",
graph_version=1,
owner_user_id=test_user_id,
name="Favorite Agent 1",
description="Test Favorite Description 1",
image_url=None,
@@ -205,6 +208,7 @@ def test_add_agent_to_library_success(
id="test-library-agent-id",
graph_id="test-agent-1",
graph_version=1,
owner_user_id=test_user_id,
name="Test Agent 1",
description="Test Description 1",
image_url=None,

View File

@@ -614,6 +614,7 @@ async def get_store_submissions(
submission_models = []
for sub in submissions:
submission_model = store_model.StoreSubmission(
listing_id=sub.listing_id,
agent_id=sub.agent_id,
agent_version=sub.agent_version,
name=sub.name,
@@ -667,35 +668,48 @@ async def delete_store_submission(
submission_id: str,
) -> bool:
"""
Delete a store listing submission as the submitting user.
Delete a store submission version as the submitting user.
Args:
user_id: ID of the authenticated user
submission_id: ID of the submission to be deleted
submission_id: StoreListingVersion ID to delete
Returns:
bool: True if the submission was successfully deleted, False otherwise
bool: True if successfully deleted
"""
logger.debug(f"Deleting store submission {submission_id} for user {user_id}")
try:
# Verify the submission belongs to this user
submission = await prisma.models.StoreListing.prisma().find_first(
where={"agentGraphId": submission_id, "owningUserId": user_id}
# Find the submission version with ownership check
version = await prisma.models.StoreListingVersion.prisma().find_first(
where={"id": submission_id}, include={"StoreListing": True}
)
if not submission:
logger.warning(f"Submission not found for user {user_id}: {submission_id}")
raise store_exceptions.SubmissionNotFoundError(
f"Submission not found for this user. User ID: {user_id}, Submission ID: {submission_id}"
if (
not version
or not version.StoreListing
or version.StoreListing.owningUserId != user_id
):
raise store_exceptions.SubmissionNotFoundError("Submission not found")
# Prevent deletion of approved submissions
if version.submissionStatus == prisma.enums.SubmissionStatus.APPROVED:
raise store_exceptions.InvalidOperationError(
"Cannot delete approved submissions"
)
# Delete the submission
await prisma.models.StoreListing.prisma().delete(where={"id": submission.id})
logger.debug(
f"Successfully deleted submission {submission_id} for user {user_id}"
# Delete the version
await prisma.models.StoreListingVersion.prisma().delete(
where={"id": version.id}
)
# Clean up empty listing if this was the last version
remaining = await prisma.models.StoreListingVersion.prisma().count(
where={"storeListingId": version.storeListingId}
)
if remaining == 0:
await prisma.models.StoreListing.prisma().delete(
where={"id": version.storeListingId}
)
return True
except Exception as e:
@@ -759,9 +773,15 @@ async def create_store_submission(
logger.warning(
f"Agent not found for user {user_id}: {agent_id} v{agent_version}"
)
raise store_exceptions.AgentNotFoundError(
f"Agent not found for this user. User ID: {user_id}, Agent ID: {agent_id}, Version: {agent_version}"
)
# Provide more user-friendly error message when agent_id is empty
if not agent_id or agent_id.strip() == "":
raise store_exceptions.AgentNotFoundError(
"No agent selected. Please select an agent before submitting to the store."
)
else:
raise store_exceptions.AgentNotFoundError(
f"Agent not found for this user. User ID: {user_id}, Agent ID: {agent_id}, Version: {agent_version}"
)
# Check if listing already exists for this agent
existing_listing = await prisma.models.StoreListing.prisma().find_first(
@@ -833,6 +853,7 @@ async def create_store_submission(
logger.debug(f"Created store listing for agent {agent_id}")
# Return submission details
return store_model.StoreSubmission(
listing_id=listing.id,
agent_id=agent_id,
agent_version=agent_version,
name=name,
@@ -944,81 +965,56 @@ async def edit_store_submission(
# Currently we are not allowing user to update the agent associated with a submission
# If we allow it in future, then we need a check here to verify the agent belongs to this user.
# Check if we can edit this submission
if current_version.submissionStatus == prisma.enums.SubmissionStatus.REJECTED:
# Only allow editing of PENDING submissions
if current_version.submissionStatus != prisma.enums.SubmissionStatus.PENDING:
raise store_exceptions.InvalidOperationError(
"Cannot edit a rejected submission"
)
# For APPROVED submissions, we need to create a new version
if current_version.submissionStatus == prisma.enums.SubmissionStatus.APPROVED:
# Create a new version for the existing listing
return await create_store_version(
user_id=user_id,
agent_id=current_version.agentGraphId,
agent_version=current_version.agentGraphVersion,
store_listing_id=current_version.storeListingId,
name=name,
video_url=video_url,
agent_output_demo_url=agent_output_demo_url,
image_urls=image_urls,
description=description,
sub_heading=sub_heading,
categories=categories,
changes_summary=changes_summary,
recommended_schedule_cron=recommended_schedule_cron,
instructions=instructions,
f"Cannot edit a {current_version.submissionStatus.value.lower()} submission. Only pending submissions can be edited."
)
# For PENDING submissions, we can update the existing version
elif current_version.submissionStatus == prisma.enums.SubmissionStatus.PENDING:
# Update the existing version
updated_version = await prisma.models.StoreListingVersion.prisma().update(
where={"id": store_listing_version_id},
data=prisma.types.StoreListingVersionUpdateInput(
name=name,
videoUrl=video_url,
agentOutputDemoUrl=agent_output_demo_url,
imageUrls=image_urls,
description=description,
categories=categories,
subHeading=sub_heading,
changesSummary=changes_summary,
recommendedScheduleCron=recommended_schedule_cron,
instructions=instructions,
),
)
logger.debug(
f"Updated existing version {store_listing_version_id} for agent {current_version.agentGraphId}"
)
if not updated_version:
raise DatabaseError("Failed to update store listing version")
return store_model.StoreSubmission(
agent_id=current_version.agentGraphId,
agent_version=current_version.agentGraphVersion,
# Update the existing version
updated_version = await prisma.models.StoreListingVersion.prisma().update(
where={"id": store_listing_version_id},
data=prisma.types.StoreListingVersionUpdateInput(
name=name,
sub_heading=sub_heading,
slug=current_version.StoreListing.slug,
videoUrl=video_url,
agentOutputDemoUrl=agent_output_demo_url,
imageUrls=image_urls,
description=description,
instructions=instructions,
image_urls=image_urls,
date_submitted=updated_version.submittedAt or updated_version.createdAt,
status=updated_version.submissionStatus,
runs=0,
rating=0.0,
store_listing_version_id=updated_version.id,
changes_summary=changes_summary,
video_url=video_url,
categories=categories,
version=updated_version.version,
)
subHeading=sub_heading,
changesSummary=changes_summary,
recommendedScheduleCron=recommended_schedule_cron,
instructions=instructions,
),
)
else:
raise store_exceptions.InvalidOperationError(
f"Cannot edit submission with status: {current_version.submissionStatus}"
)
logger.debug(
f"Updated existing version {store_listing_version_id} for agent {current_version.agentGraphId}"
)
if not updated_version:
raise DatabaseError("Failed to update store listing version")
return store_model.StoreSubmission(
listing_id=current_version.StoreListing.id,
agent_id=current_version.agentGraphId,
agent_version=current_version.agentGraphVersion,
name=name,
sub_heading=sub_heading,
slug=current_version.StoreListing.slug,
description=description,
instructions=instructions,
image_urls=image_urls,
date_submitted=updated_version.submittedAt or updated_version.createdAt,
status=updated_version.submissionStatus,
runs=0,
rating=0.0,
store_listing_version_id=updated_version.id,
changes_summary=changes_summary,
video_url=video_url,
categories=categories,
version=updated_version.version,
)
except (
store_exceptions.SubmissionNotFoundError,
@@ -1097,38 +1093,78 @@ async def create_store_version(
f"Agent not found for this user. User ID: {user_id}, Agent ID: {agent_id}, Version: {agent_version}"
)
# Get the latest version number
latest_version = listing.Versions[0] if listing.Versions else None
next_version = (latest_version.version + 1) if latest_version else 1
# Create a new version for the existing listing
new_version = await prisma.models.StoreListingVersion.prisma().create(
data=prisma.types.StoreListingVersionCreateInput(
version=next_version,
agentGraphId=agent_id,
agentGraphVersion=agent_version,
name=name,
videoUrl=video_url,
agentOutputDemoUrl=agent_output_demo_url,
imageUrls=image_urls,
description=description,
instructions=instructions,
categories=categories,
subHeading=sub_heading,
submissionStatus=prisma.enums.SubmissionStatus.PENDING,
submittedAt=datetime.now(),
changesSummary=changes_summary,
recommendedScheduleCron=recommended_schedule_cron,
storeListingId=store_listing_id,
# Check if there's already a PENDING submission for this agent (any version)
existing_pending_submission = (
await prisma.models.StoreListingVersion.prisma().find_first(
where=prisma.types.StoreListingVersionWhereInput(
storeListingId=store_listing_id,
agentGraphId=agent_id,
submissionStatus=prisma.enums.SubmissionStatus.PENDING,
isDeleted=False,
)
)
)
# Handle existing pending submission and create new one atomically
async with transaction() as tx:
# Get the latest version number first
latest_listing = await prisma.models.StoreListing.prisma(tx).find_first(
where=prisma.types.StoreListingWhereInput(
id=store_listing_id, owningUserId=user_id
),
include={"Versions": {"order_by": {"version": "desc"}, "take": 1}},
)
if not latest_listing:
raise store_exceptions.ListingNotFoundError(
f"Store listing not found. User ID: {user_id}, Listing ID: {store_listing_id}"
)
latest_version = (
latest_listing.Versions[0] if latest_listing.Versions else None
)
next_version = (latest_version.version + 1) if latest_version else 1
# If there's an existing pending submission, delete it atomically before creating new one
if existing_pending_submission:
logger.info(
f"Found existing PENDING submission for agent {agent_id} (was v{existing_pending_submission.agentGraphVersion}, now v{agent_version}), replacing existing submission instead of creating duplicate"
)
await prisma.models.StoreListingVersion.prisma(tx).delete(
where={"id": existing_pending_submission.id}
)
logger.debug(
f"Deleted existing pending submission {existing_pending_submission.id}"
)
# Create a new version for the existing listing
new_version = await prisma.models.StoreListingVersion.prisma(tx).create(
data=prisma.types.StoreListingVersionCreateInput(
version=next_version,
agentGraphId=agent_id,
agentGraphVersion=agent_version,
name=name,
videoUrl=video_url,
agentOutputDemoUrl=agent_output_demo_url,
imageUrls=image_urls,
description=description,
instructions=instructions,
categories=categories,
subHeading=sub_heading,
submissionStatus=prisma.enums.SubmissionStatus.PENDING,
submittedAt=datetime.now(),
changesSummary=changes_summary,
recommendedScheduleCron=recommended_schedule_cron,
storeListingId=store_listing_id,
)
)
logger.debug(
f"Created new version for listing {store_listing_id} of agent {agent_id}"
)
# Return submission details
return store_model.StoreSubmission(
listing_id=listing.id,
agent_id=agent_id,
agent_version=agent_version,
name=name,
@@ -1708,15 +1744,12 @@ async def review_store_submission(
# Convert to Pydantic model for consistency
return store_model.StoreSubmission(
listing_id=(submission.StoreListing.id if submission.StoreListing else ""),
agent_id=submission.agentGraphId,
agent_version=submission.agentGraphVersion,
name=submission.name,
sub_heading=submission.subHeading,
slug=(
submission.StoreListing.slug
if hasattr(submission, "storeListing") and submission.StoreListing
else ""
),
slug=(submission.StoreListing.slug if submission.StoreListing else ""),
description=submission.description,
instructions=submission.instructions,
image_urls=submission.imageUrls or [],
@@ -1818,9 +1851,7 @@ async def get_admin_listings_with_versions(
where = prisma.types.StoreListingWhereInput(**where_dict)
include = prisma.types.StoreListingInclude(
Versions=prisma.types.FindManyStoreListingVersionArgsFromStoreListing(
order_by=prisma.types._StoreListingVersion_version_OrderByInput(
version="desc"
)
order_by={"version": "desc"}
),
OwningUser=True,
)
@@ -1845,6 +1876,7 @@ async def get_admin_listings_with_versions(
# If we have versions, turn them into StoreSubmission models
for version in listing.Versions or []:
version_model = store_model.StoreSubmission(
listing_id=listing.id,
agent_id=version.agentGraphId,
agent_version=version.agentGraphVersion,
name=version.name,
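
To summarise the new deletion rules, here is a toy, in-memory sketch (ownership check, refusal to delete approved versions, and cleanup of a listing once its last version is gone). The dataclasses and exception types are illustrative stand-ins for the Prisma models and store exceptions.

```python
from dataclasses import dataclass, field


@dataclass
class Version:
    id: str
    status: str  # "PENDING", "APPROVED", ...


@dataclass
class Listing:
    id: str
    owner: str
    versions: dict[str, Version] = field(default_factory=dict)


def delete_submission_version(listings: dict[str, Listing], user_id: str, version_id: str) -> bool:
    # Find the version together with its parent listing (ownership check).
    for listing in listings.values():
        version = listing.versions.get(version_id)
        if version is None:
            continue
        if listing.owner != user_id:
            raise LookupError("Submission not found")
        if version.status == "APPROVED":
            raise ValueError("Cannot delete approved submissions")
        del listing.versions[version_id]
        # Clean up the listing if this was its last version.
        if not listing.versions:
            del listings[listing.id]
        return True
    raise LookupError("Submission not found")


listings = {"l1": Listing("l1", "user-1", {"v1": Version("v1", "PENDING")})}
print(delete_submission_version(listings, "user-1", "v1"))  # True
print(listings)                                             # {} - empty listing removed
```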

View File

@@ -110,6 +110,7 @@ class Profile(pydantic.BaseModel):
class StoreSubmission(pydantic.BaseModel):
listing_id: str
agent_id: str
agent_version: int
name: str
@@ -164,8 +165,12 @@ class StoreListingsWithVersionsResponse(pydantic.BaseModel):
class StoreSubmissionRequest(pydantic.BaseModel):
agent_id: str
agent_version: int
agent_id: str = pydantic.Field(
..., min_length=1, description="Agent ID cannot be empty"
)
agent_version: int = pydantic.Field(
..., gt=0, description="Agent version must be greater than 0"
)
slug: str
name: str
sub_heading: str
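
A small sketch of what the new field constraints buy: invalid requests are rejected at the Pydantic layer before reaching the DB. The model is re-declared locally (other request fields omitted) so the snippet runs standalone with Pydantic v2.

```python
import pydantic


class StoreSubmissionRequest(pydantic.BaseModel):
    # Mirrors the constraints added above; other fields omitted for brevity.
    agent_id: str = pydantic.Field(..., min_length=1, description="Agent ID cannot be empty")
    agent_version: int = pydantic.Field(..., gt=0, description="Agent version must be greater than 0")


StoreSubmissionRequest(agent_id="agent-123", agent_version=1)  # OK

try:
    StoreSubmissionRequest(agent_id="", agent_version=0)
except pydantic.ValidationError as e:
    # Both constraints are reported before the request ever reaches the store layer.
    print(e.error_count(), "validation errors")
```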

View File

@@ -138,6 +138,7 @@ def test_creator_details():
def test_store_submission():
submission = store_model.StoreSubmission(
listing_id="listing123",
agent_id="agent123",
agent_version=1,
sub_heading="Test subheading",
@@ -159,6 +160,7 @@ def test_store_submissions_response():
response = store_model.StoreSubmissionsResponse(
submissions=[
store_model.StoreSubmission(
listing_id="listing123",
agent_id="agent123",
agent_version=1,
sub_heading="Test subheading",

View File

@@ -521,6 +521,7 @@ def test_get_submissions_success(
mocked_value = store_model.StoreSubmissionsResponse(
submissions=[
store_model.StoreSubmission(
listing_id="test-listing-id",
name="Test Agent",
description="Test agent description",
image_urls=["test.jpg"],

View File

@@ -6,6 +6,9 @@ import hashlib
import hmac
import logging
from enum import Enum
from typing import cast
from prisma.types import Serializable
from backend.sdk import (
BaseWebhooksManager,
@@ -84,7 +87,9 @@ class AirtableWebhookManager(BaseWebhooksManager):
# update webhook config
await update_webhook(
webhook.id,
config={"base_id": base_id, "cursor": response.cursor},
config=cast(
dict[str, Serializable], {"base_id": base_id, "cursor": response.cursor}
),
)
event_type = "notification"
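
The cast only narrows the static type for the checker; nothing changes at runtime. A minimal sketch, using a local stand-in for prisma's `Serializable` alias (the real alias is a broad recursive union):

```python
from typing import Union, cast

# Local stand-in for prisma.types.Serializable (illustrative, not the real alias).
Serializable = Union[None, bool, int, float, str, list, dict]

config = cast(dict[str, Serializable], {"base_id": "appXYZ", "cursor": 42})
# At runtime cast() simply returns its second argument unchanged.
assert config == {"base_id": "appXYZ", "cursor": 42}
```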

View File

@@ -495,14 +495,8 @@ class SmartDecisionMakerBlock(Block):
}
properties = {}
field_mapping = {}
for link in links:
field_name = link.sink_name
clean_field_name = SmartDecisionMakerBlock.cleanup(field_name)
field_mapping[clean_field_name] = field_name
sink_block_input_schema = sink_node.input_default["input_schema"]
sink_block_properties = sink_block_input_schema.get("properties", {}).get(
link.sink_name, {}
@@ -512,7 +506,7 @@ class SmartDecisionMakerBlock(Block):
if "description" in sink_block_properties
else f"The {link.sink_name} of the tool"
)
properties[clean_field_name] = {
properties[link.sink_name] = {
"type": "string",
"description": description,
"default": json.dumps(sink_block_properties.get("default", None)),
@@ -525,7 +519,7 @@ class SmartDecisionMakerBlock(Block):
"strict": True,
}
tool_function["_field_mapping"] = field_mapping
# Store node info for later use in output processing
tool_function["_sink_node_id"] = sink_node.id
return {"type": "function", "function": tool_function}
@@ -981,10 +975,28 @@ class SmartDecisionMakerBlock(Block):
graph_version: int,
execution_context: ExecutionContext,
execution_processor: "ExecutionProcessor",
nodes_to_skip: set[str] | None = None,
**kwargs,
) -> BlockOutput:
tool_functions = await self._create_tool_node_signatures(node_id)
original_tool_count = len(tool_functions)
# Filter out tools for nodes that should be skipped (e.g., missing optional credentials)
if nodes_to_skip:
tool_functions = [
tf
for tf in tool_functions
if tf.get("function", {}).get("_sink_node_id") not in nodes_to_skip
]
# Only raise error if we had tools but they were all filtered out
if original_tool_count > 0 and not tool_functions:
raise ValueError(
"No available tools to execute - all downstream nodes are unavailable "
"(possibly due to missing optional credentials)"
)
yield "tool_functions", json.dumps(tool_functions)
conversation_history = input_data.conversation_history or []
@@ -1135,9 +1147,8 @@ class SmartDecisionMakerBlock(Block):
original_field_name = field_mapping.get(clean_arg_name, clean_arg_name)
arg_value = tool_args.get(clean_arg_name)
# Use original_field_name directly (not sanitized) to match link sink_name
# The field_mapping already translates from LLM's cleaned names to original names
emit_key = f"tools_^_{sink_node_id}_~_{original_field_name}"
sanitized_arg_name = self.cleanup(original_field_name)
emit_key = f"tools_^_{sink_node_id}_~_{sanitized_arg_name}"
logger.debug(
"[SmartDecisionMakerBlock|geid:%s|neid:%s] emit %s",

View File

@@ -383,6 +383,7 @@ class GraphExecutionWithNodes(GraphExecution):
self,
execution_context: ExecutionContext,
compiled_nodes_input_masks: Optional[NodesInputMasks] = None,
nodes_to_skip: Optional[set[str]] = None,
):
return GraphExecutionEntry(
user_id=self.user_id,
@@ -390,6 +391,7 @@ class GraphExecutionWithNodes(GraphExecution):
graph_version=self.graph_version or 0,
graph_exec_id=self.id,
nodes_input_masks=compiled_nodes_input_masks,
nodes_to_skip=nodes_to_skip or set(),
execution_context=execution_context,
)
@@ -1145,6 +1147,8 @@ class GraphExecutionEntry(BaseModel):
graph_id: str
graph_version: int
nodes_input_masks: Optional[NodesInputMasks] = None
nodes_to_skip: set[str] = Field(default_factory=set)
"""Node IDs that should be skipped due to optional credentials not being configured."""
execution_context: ExecutionContext = Field(default_factory=ExecutionContext)

View File

@@ -94,6 +94,15 @@ class Node(BaseDbModel):
input_links: list[Link] = []
output_links: list[Link] = []
@property
def credentials_optional(self) -> bool:
"""
Whether credentials are optional for this node.
When True and credentials are not configured, the node will be skipped
during execution rather than causing a validation error.
"""
return self.metadata.get("credentials_optional", False)
@property
def block(self) -> AnyBlockSchema | "_UnknownBlockBase":
"""Get the block for this node. Returns UnknownBlock if block is deleted/missing."""
@@ -326,7 +335,35 @@ class Graph(BaseGraph):
@computed_field
@property
def credentials_input_schema(self) -> dict[str, Any]:
return self._credentials_input_schema.jsonschema()
schema = self._credentials_input_schema.jsonschema()
# Determine which credential fields are required based on credentials_optional metadata
graph_credentials_inputs = self.aggregate_credentials_inputs()
required_fields = []
# Build a map of node_id -> node for quick lookup
all_nodes = {node.id: node for node in self.nodes}
for sub_graph in self.sub_graphs:
for node in sub_graph.nodes:
all_nodes[node.id] = node
for field_key, (
_field_info,
node_field_pairs,
) in graph_credentials_inputs.items():
# A field is required if ANY node using it has credentials_optional=False
is_required = False
for node_id, _field_name in node_field_pairs:
node = all_nodes.get(node_id)
if node and not node.credentials_optional:
is_required = True
break
if is_required:
required_fields.append(field_key)
schema["required"] = required_fields
return schema
@property
def _credentials_input_schema(self) -> type[BlockSchema]:
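
A toy illustration of the required-field rule applied in `credentials_input_schema` above: a credentials field stays required if any node that uses it has `credentials_optional=False`. Data here is made up for the example.

```python
# Toy data: which nodes feed each aggregated credentials field, and whether
# each node marked its credentials as optional.
node_is_optional = {"n1": True, "n2": False, "n3": True}
field_to_nodes = {
    "discord_credentials": ["n1"],         # only optional users -> not required
    "openai_credentials": ["n2", "n3"],    # one non-optional user -> required
}

required_fields = [
    field_key
    for field_key, node_ids in field_to_nodes.items()
    if any(not node_is_optional[node_id] for node_id in node_ids)
]

print(required_fields)  # ['openai_credentials']
```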

View File

@@ -396,3 +396,58 @@ async def test_access_store_listing_graph(server: SpinTestServer):
created_graph.id, created_graph.version, "3e53486c-cf57-477e-ba2a-cb02dc828e1b"
)
assert got_graph is not None
# ============================================================================
# Tests for Optional Credentials Feature
# ============================================================================
def test_node_credentials_optional_default():
"""Test that credentials_optional defaults to False when not set in metadata."""
node = Node(
id="test_node",
block_id=StoreValueBlock().id,
input_default={},
metadata={},
)
assert node.credentials_optional is False
def test_node_credentials_optional_true():
"""Test that credentials_optional returns True when explicitly set."""
node = Node(
id="test_node",
block_id=StoreValueBlock().id,
input_default={},
metadata={"credentials_optional": True},
)
assert node.credentials_optional is True
def test_node_credentials_optional_false():
"""Test that credentials_optional returns False when explicitly set to False."""
node = Node(
id="test_node",
block_id=StoreValueBlock().id,
input_default={},
metadata={"credentials_optional": False},
)
assert node.credentials_optional is False
def test_node_credentials_optional_with_other_metadata():
"""Test that credentials_optional works correctly with other metadata present."""
node = Node(
id="test_node",
block_id=StoreValueBlock().id,
input_default={},
metadata={
"position": {"x": 100, "y": 200},
"customized_name": "My Custom Node",
"credentials_optional": True,
},
)
assert node.credentials_optional is True
assert node.metadata["position"] == {"x": 100, "y": 200}
assert node.metadata["customized_name"] == "My Custom Node"

View File

@@ -0,0 +1,429 @@
"""Data models and access layer for user business understanding."""
import logging
from datetime import datetime
from typing import Any, Optional, cast
import pydantic
from prisma.models import UserBusinessUnderstanding
from prisma.types import (
UserBusinessUnderstandingCreateInput,
UserBusinessUnderstandingUpdateInput,
)
from backend.data.redis_client import get_redis_async
from backend.util.json import SafeJson
logger = logging.getLogger(__name__)
# Cache configuration
CACHE_KEY_PREFIX = "understanding"
CACHE_TTL_SECONDS = 48 * 60 * 60 # 48 hours
def _cache_key(user_id: str) -> str:
"""Generate cache key for user business understanding."""
return f"{CACHE_KEY_PREFIX}:{user_id}"
def _json_to_list(value: Any) -> list[str]:
"""Convert Json field to list[str], handling None."""
if value is None:
return []
if isinstance(value, list):
return cast(list[str], value)
return []
class BusinessUnderstandingInput(pydantic.BaseModel):
"""Input model for updating business understanding - all fields optional for incremental updates."""
# User info
user_name: Optional[str] = pydantic.Field(None, description="The user's name")
job_title: Optional[str] = pydantic.Field(None, description="The user's job title")
# Business basics
business_name: Optional[str] = pydantic.Field(
None, description="Name of the user's business"
)
industry: Optional[str] = pydantic.Field(None, description="Industry or sector")
business_size: Optional[str] = pydantic.Field(
None, description="Company size (e.g., '1-10', '11-50')"
)
user_role: Optional[str] = pydantic.Field(
None,
description="User's role in the organization (e.g., 'decision maker', 'implementer')",
)
# Processes & activities
key_workflows: Optional[list[str]] = pydantic.Field(
None, description="Key business workflows"
)
daily_activities: Optional[list[str]] = pydantic.Field(
None, description="Daily activities performed"
)
# Pain points & goals
pain_points: Optional[list[str]] = pydantic.Field(
None, description="Current pain points"
)
bottlenecks: Optional[list[str]] = pydantic.Field(
None, description="Process bottlenecks"
)
manual_tasks: Optional[list[str]] = pydantic.Field(
None, description="Manual/repetitive tasks"
)
automation_goals: Optional[list[str]] = pydantic.Field(
None, description="Desired automation goals"
)
# Current tools
current_software: Optional[list[str]] = pydantic.Field(
None, description="Software/tools currently used"
)
existing_automation: Optional[list[str]] = pydantic.Field(
None, description="Existing automations"
)
# Additional context
additional_notes: Optional[str] = pydantic.Field(
None, description="Any additional context"
)
class BusinessUnderstanding(pydantic.BaseModel):
"""Full business understanding model returned from database."""
id: str
user_id: str
created_at: datetime
updated_at: datetime
# User info
user_name: Optional[str] = None
job_title: Optional[str] = None
# Business basics
business_name: Optional[str] = None
industry: Optional[str] = None
business_size: Optional[str] = None
user_role: Optional[str] = None
# Processes & activities
key_workflows: list[str] = pydantic.Field(default_factory=list)
daily_activities: list[str] = pydantic.Field(default_factory=list)
# Pain points & goals
pain_points: list[str] = pydantic.Field(default_factory=list)
bottlenecks: list[str] = pydantic.Field(default_factory=list)
manual_tasks: list[str] = pydantic.Field(default_factory=list)
automation_goals: list[str] = pydantic.Field(default_factory=list)
# Current tools
current_software: list[str] = pydantic.Field(default_factory=list)
existing_automation: list[str] = pydantic.Field(default_factory=list)
# Additional context
additional_notes: Optional[str] = None
@classmethod
def from_db(cls, db_record: UserBusinessUnderstanding) -> "BusinessUnderstanding":
"""Convert database record to Pydantic model."""
return cls(
id=db_record.id,
user_id=db_record.userId,
created_at=db_record.createdAt,
updated_at=db_record.updatedAt,
user_name=db_record.userName,
job_title=db_record.jobTitle,
business_name=db_record.businessName,
industry=db_record.industry,
business_size=db_record.businessSize,
user_role=db_record.userRole,
key_workflows=_json_to_list(db_record.keyWorkflows),
daily_activities=_json_to_list(db_record.dailyActivities),
pain_points=_json_to_list(db_record.painPoints),
bottlenecks=_json_to_list(db_record.bottlenecks),
manual_tasks=_json_to_list(db_record.manualTasks),
automation_goals=_json_to_list(db_record.automationGoals),
current_software=_json_to_list(db_record.currentSoftware),
existing_automation=_json_to_list(db_record.existingAutomation),
additional_notes=db_record.additionalNotes,
)
def _merge_lists(existing: list | None, new: list | None) -> list | None:
"""Merge two lists, removing duplicates while preserving order."""
if new is None:
return existing
if existing is None:
return new
# Preserve order, add new items that don't exist
merged = list(existing)
for item in new:
if item not in merged:
merged.append(item)
return merged
async def _get_from_cache(user_id: str) -> Optional[BusinessUnderstanding]:
"""Get business understanding from Redis cache."""
try:
redis = await get_redis_async()
cached_data = await redis.get(_cache_key(user_id))
if cached_data:
return BusinessUnderstanding.model_validate_json(cached_data)
except Exception as e:
logger.warning(f"Failed to get understanding from cache: {e}")
return None
async def _set_cache(user_id: str, understanding: BusinessUnderstanding) -> None:
"""Set business understanding in Redis cache with TTL."""
try:
redis = await get_redis_async()
await redis.setex(
_cache_key(user_id),
CACHE_TTL_SECONDS,
understanding.model_dump_json(),
)
except Exception as e:
logger.warning(f"Failed to set understanding in cache: {e}")
async def _delete_cache(user_id: str) -> None:
"""Delete business understanding from Redis cache."""
try:
redis = await get_redis_async()
await redis.delete(_cache_key(user_id))
except Exception as e:
logger.warning(f"Failed to delete understanding from cache: {e}")
async def get_business_understanding(
user_id: str,
) -> Optional[BusinessUnderstanding]:
"""Get the business understanding for a user.
Checks cache first, falls back to database if not cached.
Results are cached for 48 hours.
"""
# Try cache first
cached = await _get_from_cache(user_id)
if cached:
logger.debug(f"Business understanding cache hit for user {user_id}")
return cached
# Cache miss - load from database
logger.debug(f"Business understanding cache miss for user {user_id}")
record = await UserBusinessUnderstanding.prisma().find_unique(
where={"userId": user_id}
)
if record is None:
return None
understanding = BusinessUnderstanding.from_db(record)
# Store in cache for next time
await _set_cache(user_id, understanding)
return understanding
async def upsert_business_understanding(
user_id: str,
data: BusinessUnderstandingInput,
) -> BusinessUnderstanding:
"""
Create or update business understanding with incremental merge strategy.
- String fields: new value overwrites if provided (not None)
- List fields: new items are appended to existing (deduplicated)
"""
# Get existing record for merge
existing = await UserBusinessUnderstanding.prisma().find_unique(
where={"userId": user_id}
)
# Build update data with merge strategy
update_data: UserBusinessUnderstandingUpdateInput = {}
create_data: dict[str, Any] = {"userId": user_id}
# String fields - overwrite if provided
if data.user_name is not None:
update_data["userName"] = data.user_name
create_data["userName"] = data.user_name
if data.job_title is not None:
update_data["jobTitle"] = data.job_title
create_data["jobTitle"] = data.job_title
if data.business_name is not None:
update_data["businessName"] = data.business_name
create_data["businessName"] = data.business_name
if data.industry is not None:
update_data["industry"] = data.industry
create_data["industry"] = data.industry
if data.business_size is not None:
update_data["businessSize"] = data.business_size
create_data["businessSize"] = data.business_size
if data.user_role is not None:
update_data["userRole"] = data.user_role
create_data["userRole"] = data.user_role
if data.additional_notes is not None:
update_data["additionalNotes"] = data.additional_notes
create_data["additionalNotes"] = data.additional_notes
# List fields - merge with existing
if data.key_workflows is not None:
existing_list = _json_to_list(existing.keyWorkflows) if existing else None
merged = _merge_lists(existing_list, data.key_workflows)
update_data["keyWorkflows"] = SafeJson(merged)
create_data["keyWorkflows"] = SafeJson(merged)
if data.daily_activities is not None:
existing_list = _json_to_list(existing.dailyActivities) if existing else None
merged = _merge_lists(existing_list, data.daily_activities)
update_data["dailyActivities"] = SafeJson(merged)
create_data["dailyActivities"] = SafeJson(merged)
if data.pain_points is not None:
existing_list = _json_to_list(existing.painPoints) if existing else None
merged = _merge_lists(existing_list, data.pain_points)
update_data["painPoints"] = SafeJson(merged)
create_data["painPoints"] = SafeJson(merged)
if data.bottlenecks is not None:
existing_list = _json_to_list(existing.bottlenecks) if existing else None
merged = _merge_lists(existing_list, data.bottlenecks)
update_data["bottlenecks"] = SafeJson(merged)
create_data["bottlenecks"] = SafeJson(merged)
if data.manual_tasks is not None:
existing_list = _json_to_list(existing.manualTasks) if existing else None
merged = _merge_lists(existing_list, data.manual_tasks)
update_data["manualTasks"] = SafeJson(merged)
create_data["manualTasks"] = SafeJson(merged)
if data.automation_goals is not None:
existing_list = _json_to_list(existing.automationGoals) if existing else None
merged = _merge_lists(existing_list, data.automation_goals)
update_data["automationGoals"] = SafeJson(merged)
create_data["automationGoals"] = SafeJson(merged)
if data.current_software is not None:
existing_list = _json_to_list(existing.currentSoftware) if existing else None
merged = _merge_lists(existing_list, data.current_software)
update_data["currentSoftware"] = SafeJson(merged)
create_data["currentSoftware"] = SafeJson(merged)
if data.existing_automation is not None:
existing_list = _json_to_list(existing.existingAutomation) if existing else None
merged = _merge_lists(existing_list, data.existing_automation)
update_data["existingAutomation"] = SafeJson(merged)
create_data["existingAutomation"] = SafeJson(merged)
# Upsert
record = await UserBusinessUnderstanding.prisma().upsert(
where={"userId": user_id},
data={
"create": UserBusinessUnderstandingCreateInput(**create_data),
"update": update_data,
},
)
understanding = BusinessUnderstanding.from_db(record)
# Update cache with new understanding
await _set_cache(user_id, understanding)
return understanding
async def clear_business_understanding(user_id: str) -> bool:
"""Clear/delete business understanding for a user from both DB and cache."""
# Delete from cache first
await _delete_cache(user_id)
try:
await UserBusinessUnderstanding.prisma().delete(where={"userId": user_id})
return True
except Exception:
# Record might not exist
return False
def format_understanding_for_prompt(understanding: BusinessUnderstanding) -> str:
"""Format business understanding as text for system prompt injection."""
sections = []
# User info section
user_info = []
if understanding.user_name:
user_info.append(f"Name: {understanding.user_name}")
if understanding.job_title:
user_info.append(f"Job Title: {understanding.job_title}")
if user_info:
sections.append("## User\n" + "\n".join(user_info))
# Business section
business_info = []
if understanding.business_name:
business_info.append(f"Company: {understanding.business_name}")
if understanding.industry:
business_info.append(f"Industry: {understanding.industry}")
if understanding.business_size:
business_info.append(f"Size: {understanding.business_size}")
if understanding.user_role:
business_info.append(f"Role Context: {understanding.user_role}")
if business_info:
sections.append("## Business\n" + "\n".join(business_info))
# Processes section
processes = []
if understanding.key_workflows:
processes.append(f"Key Workflows: {', '.join(understanding.key_workflows)}")
if understanding.daily_activities:
processes.append(
f"Daily Activities: {', '.join(understanding.daily_activities)}"
)
if processes:
sections.append("## Processes\n" + "\n".join(processes))
# Pain points section
pain_points = []
if understanding.pain_points:
pain_points.append(f"Pain Points: {', '.join(understanding.pain_points)}")
if understanding.bottlenecks:
pain_points.append(f"Bottlenecks: {', '.join(understanding.bottlenecks)}")
if understanding.manual_tasks:
pain_points.append(f"Manual Tasks: {', '.join(understanding.manual_tasks)}")
if pain_points:
sections.append("## Pain Points\n" + "\n".join(pain_points))
# Goals section
if understanding.automation_goals:
sections.append(
"## Automation Goals\n"
+ "\n".join(f"- {goal}" for goal in understanding.automation_goals)
)
# Current tools section
tools_info = []
if understanding.current_software:
tools_info.append(
f"Current Software: {', '.join(understanding.current_software)}"
)
if understanding.existing_automation:
tools_info.append(
f"Existing Automation: {', '.join(understanding.existing_automation)}"
)
if tools_info:
sections.append("## Current Tools\n" + "\n".join(tools_info))
# Additional notes
if understanding.additional_notes:
sections.append(f"## Additional Context\n{understanding.additional_notes}")
if not sections:
return ""
return "# User Business Context\n\n" + "\n\n".join(sections)

View File

@@ -178,6 +178,7 @@ async def execute_node(
execution_processor: "ExecutionProcessor",
execution_stats: NodeExecutionStats | None = None,
nodes_input_masks: Optional[NodesInputMasks] = None,
nodes_to_skip: Optional[set[str]] = None,
) -> BlockOutput:
"""
Execute a node in the graph. This will trigger a block execution on a node,
@@ -245,6 +246,7 @@ async def execute_node(
"user_id": user_id,
"execution_context": execution_context,
"execution_processor": execution_processor,
"nodes_to_skip": nodes_to_skip or set(),
}
# Last-minute fetch credentials + acquire a system-wide read-write lock to prevent
@@ -542,6 +544,7 @@ class ExecutionProcessor:
node_exec_progress: NodeExecutionProgress,
nodes_input_masks: Optional[NodesInputMasks],
graph_stats_pair: tuple[GraphExecutionStats, threading.Lock],
nodes_to_skip: Optional[set[str]] = None,
) -> NodeExecutionStats:
log_metadata = LogMetadata(
logger=_logger,
@@ -564,6 +567,7 @@ class ExecutionProcessor:
db_client=db_client,
log_metadata=log_metadata,
nodes_input_masks=nodes_input_masks,
nodes_to_skip=nodes_to_skip,
)
if isinstance(status, BaseException):
raise status
@@ -609,6 +613,7 @@ class ExecutionProcessor:
db_client: "DatabaseManagerAsyncClient",
log_metadata: LogMetadata,
nodes_input_masks: Optional[NodesInputMasks] = None,
nodes_to_skip: Optional[set[str]] = None,
) -> ExecutionStatus:
status = ExecutionStatus.RUNNING
@@ -645,6 +650,7 @@ class ExecutionProcessor:
execution_processor=self,
execution_stats=stats,
nodes_input_masks=nodes_input_masks,
nodes_to_skip=nodes_to_skip,
):
await persist_output(output_name, output_data)
@@ -956,6 +962,21 @@ class ExecutionProcessor:
queued_node_exec = execution_queue.get()
# Check if this node should be skipped due to optional credentials
if queued_node_exec.node_id in graph_exec.nodes_to_skip:
log_metadata.info(
f"Skipping node execution {queued_node_exec.node_exec_id} "
f"for node {queued_node_exec.node_id} - optional credentials not configured"
)
# Mark the node as completed without executing
# No outputs will be produced, so downstream nodes won't trigger
update_node_execution_status(
db_client=db_client,
exec_id=queued_node_exec.node_exec_id,
status=ExecutionStatus.COMPLETED,
)
continue
log_metadata.debug(
f"Dispatching node execution {queued_node_exec.node_exec_id} "
f"for node {queued_node_exec.node_id}",
@@ -1016,6 +1037,7 @@ class ExecutionProcessor:
execution_stats,
execution_stats_lock,
),
nodes_to_skip=graph_exec.nodes_to_skip,
),
self.node_execution_loop,
)
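
A simplified, synchronous sketch of the dispatch-loop behaviour added above: queued executions whose node is in `nodes_to_skip` are marked completed without running, so they produce no outputs and never trigger downstream nodes. Node IDs and statuses are illustrative.

```python
from queue import SimpleQueue

nodes_to_skip = {"node-b"}
statuses: dict[str, str] = {}

queue: SimpleQueue = SimpleQueue()
for node_exec_id, node_id in [("exec-1", "node-a"), ("exec-2", "node-b")]:
    queue.put((node_exec_id, node_id))

while not queue.empty():
    node_exec_id, node_id = queue.get()
    if node_id in nodes_to_skip:
        # Mark as completed without executing; no outputs are produced,
        # so downstream nodes linked only to this one will not trigger.
        statuses[node_exec_id] = "COMPLETED (skipped)"
        continue
    statuses[node_exec_id] = "RUNNING -> COMPLETED"

print(statuses)
```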

View File

@@ -239,14 +239,19 @@ async def _validate_node_input_credentials(
graph: GraphModel,
user_id: str,
nodes_input_masks: Optional[NodesInputMasks] = None,
) -> dict[str, dict[str, str]]:
) -> tuple[dict[str, dict[str, str]], set[str]]:
"""
Checks all credentials for all nodes of the graph and returns structured errors.
Checks all credentials for all nodes of the graph and returns structured errors
and a set of nodes that should be skipped due to missing optional credentials.
Returns:
dict[node_id, dict[field_name, error_message]]: Credential validation errors per node
tuple[
dict[node_id, dict[field_name, error_message]]: Credential validation errors per node,
set[node_id]: Nodes that should be skipped (optional credentials not configured)
]
"""
credential_errors: dict[str, dict[str, str]] = defaultdict(dict)
nodes_to_skip: set[str] = set()
for node in graph.nodes:
block = node.block
@@ -256,27 +261,46 @@ async def _validate_node_input_credentials(
if not credentials_fields:
continue
# Track if any credential field is missing for this node
has_missing_credentials = False
for field_name, credentials_meta_type in credentials_fields.items():
try:
# Check nodes_input_masks first, then input_default
field_value = None
if (
nodes_input_masks
and (node_input_mask := nodes_input_masks.get(node.id))
and field_name in node_input_mask
):
credentials_meta = credentials_meta_type.model_validate(
node_input_mask[field_name]
)
field_value = node_input_mask[field_name]
elif field_name in node.input_default:
credentials_meta = credentials_meta_type.model_validate(
node.input_default[field_name]
)
else:
# Missing credentials
credential_errors[node.id][
field_name
] = "These credentials are required"
continue
# For optional credentials, don't use input_default - treat as missing
# This prevents stale credential IDs from failing validation
if node.credentials_optional:
field_value = None
else:
field_value = node.input_default[field_name]
# Check if credentials are missing (None, empty, or not present)
if field_value is None or (
isinstance(field_value, dict) and not field_value.get("id")
):
has_missing_credentials = True
# If node has credentials_optional flag, mark for skipping instead of error
if node.credentials_optional:
continue # Don't add error, will be marked for skip after loop
else:
credential_errors[node.id][
field_name
] = "These credentials are required"
continue
credentials_meta = credentials_meta_type.model_validate(field_value)
except ValidationError as e:
# Validation error means credentials were provided but invalid
# This should always be an error, even if optional
credential_errors[node.id][field_name] = f"Invalid credentials: {e}"
continue
@@ -287,6 +311,7 @@ async def _validate_node_input_credentials(
)
except Exception as e:
# Handle any errors fetching credentials
# If credentials were explicitly configured but unavailable, it's an error
credential_errors[node.id][
field_name
] = f"Credentials not available: {e}"
@@ -313,7 +338,19 @@ async def _validate_node_input_credentials(
] = "Invalid credentials: type/provider mismatch"
continue
return credential_errors
# If node has optional credentials and any are missing, mark for skipping
# But only if there are no other errors for this node
if (
has_missing_credentials
and node.credentials_optional
and node.id not in credential_errors
):
nodes_to_skip.add(node.id)
logger.info(
f"Node #{node.id} will be skipped: optional credentials not configured"
)
return credential_errors, nodes_to_skip
def make_node_credentials_input_map(
@@ -355,21 +392,25 @@ async def validate_graph_with_credentials(
graph: GraphModel,
user_id: str,
nodes_input_masks: Optional[NodesInputMasks] = None,
) -> Mapping[str, Mapping[str, str]]:
) -> tuple[Mapping[str, Mapping[str, str]], set[str]]:
"""
Validate graph including credentials and return structured errors per node.
Validate graph including credentials and return structured errors per node,
along with a set of nodes that should be skipped due to missing optional credentials.
Returns:
dict[node_id, dict[field_name, error_message]]: Validation errors per node
tuple[
dict[node_id, dict[field_name, error_message]]: Validation errors per node,
set[node_id]: Nodes that should be skipped (optional credentials not configured)
]
"""
# Get input validation errors
node_input_errors = GraphModel.validate_graph_get_errors(
graph, for_run=True, nodes_input_masks=nodes_input_masks
)
# Get credential input/availability/validation errors
node_credential_input_errors = await _validate_node_input_credentials(
graph, user_id, nodes_input_masks
# Get credential input/availability/validation errors and nodes to skip
node_credential_input_errors, nodes_to_skip = (
await _validate_node_input_credentials(graph, user_id, nodes_input_masks)
)
# Merge credential errors with structural errors
@@ -378,7 +419,7 @@ async def validate_graph_with_credentials(
node_input_errors[node_id] = {}
node_input_errors[node_id].update(field_errors)
return node_input_errors
return node_input_errors, nodes_to_skip
async def _construct_starting_node_execution_input(
@@ -386,7 +427,7 @@ async def _construct_starting_node_execution_input(
user_id: str,
graph_inputs: BlockInput,
nodes_input_masks: Optional[NodesInputMasks] = None,
) -> list[tuple[str, BlockInput]]:
) -> tuple[list[tuple[str, BlockInput]], set[str]]:
"""
Validates and prepares the input data for executing a graph.
This function checks the graph for starting nodes, validates the input data
@@ -400,11 +441,14 @@ async def _construct_starting_node_execution_input(
node_credentials_map: `dict[node_id, dict[input_name, CredentialsMetaInput]]`
Returns:
list[tuple[str, BlockInput]]: A list of tuples, each containing the node ID and
the corresponding input data for that node.
tuple[
list[tuple[str, BlockInput]]: A list of tuples, each containing the node ID
and the corresponding input data for that node.
set[str]: Node IDs that should be skipped (optional credentials not configured)
]
"""
# Use new validation function that includes credentials
validation_errors = await validate_graph_with_credentials(
validation_errors, nodes_to_skip = await validate_graph_with_credentials(
graph, user_id, nodes_input_masks
)
n_error_nodes = len(validation_errors)
@@ -445,7 +489,7 @@ async def _construct_starting_node_execution_input(
"No starting nodes found for the graph, make sure an AgentInput or blocks with no inbound links are present as starting nodes."
)
return nodes_input
return nodes_input, nodes_to_skip
async def validate_and_construct_node_execution_input(
@@ -456,7 +500,7 @@ async def validate_and_construct_node_execution_input(
graph_credentials_inputs: Optional[Mapping[str, CredentialsMetaInput]] = None,
nodes_input_masks: Optional[NodesInputMasks] = None,
is_sub_graph: bool = False,
) -> tuple[GraphModel, list[tuple[str, BlockInput]], NodesInputMasks]:
) -> tuple[GraphModel, list[tuple[str, BlockInput]], NodesInputMasks, set[str]]:
"""
Public wrapper that handles graph fetching, credential mapping, and validation+construction.
This centralizes the logic used by both scheduler validation and actual execution.
@@ -473,6 +517,7 @@ async def validate_and_construct_node_execution_input(
GraphModel: Full graph object for the given `graph_id`.
list[tuple[node_id, BlockInput]]: Starting node IDs with corresponding inputs.
dict[str, BlockInput]: Node input masks including all passed-in credentials.
set[str]: Node IDs that should be skipped (optional credentials not configured).
Raises:
NotFoundError: If the graph is not found.
@@ -514,14 +559,16 @@ async def validate_and_construct_node_execution_input(
nodes_input_masks or {},
)
starting_nodes_input = await _construct_starting_node_execution_input(
graph=graph,
user_id=user_id,
graph_inputs=graph_inputs,
nodes_input_masks=nodes_input_masks,
starting_nodes_input, nodes_to_skip = (
await _construct_starting_node_execution_input(
graph=graph,
user_id=user_id,
graph_inputs=graph_inputs,
nodes_input_masks=nodes_input_masks,
)
)
return graph, starting_nodes_input, nodes_input_masks
return graph, starting_nodes_input, nodes_input_masks, nodes_to_skip
def _merge_nodes_input_masks(
@@ -779,6 +826,9 @@ async def add_graph_execution(
# Use existing execution's compiled input masks
compiled_nodes_input_masks = graph_exec.nodes_input_masks or {}
# For resumed executions, nodes_to_skip was already determined at creation time
# TODO: Consider storing nodes_to_skip in DB if we need to preserve it across resumes
nodes_to_skip: set[str] = set()
logger.info(f"Resuming graph execution #{graph_exec.id} for graph #{graph_id}")
else:
@@ -787,7 +837,7 @@ async def add_graph_execution(
)
# Create new execution
graph, starting_nodes_input, compiled_nodes_input_masks = (
graph, starting_nodes_input, compiled_nodes_input_masks, nodes_to_skip = (
await validate_and_construct_node_execution_input(
graph_id=graph_id,
user_id=user_id,
@@ -836,6 +886,7 @@ async def add_graph_execution(
try:
graph_exec_entry = graph_exec.to_graph_execution_entry(
compiled_nodes_input_masks=compiled_nodes_input_masks,
nodes_to_skip=nodes_to_skip,
execution_context=execution_context,
)
logger.info(f"Publishing execution {graph_exec.id} to execution queue")

View File

@@ -367,10 +367,13 @@ async def test_add_graph_execution_is_repeatable(mocker: MockerFixture):
)
# Setup mock returns
# The function returns (graph, starting_nodes_input, compiled_nodes_input_masks, nodes_to_skip)
nodes_to_skip: set[str] = set()
mock_validate.return_value = (
mock_graph,
starting_nodes_input,
compiled_nodes_input_masks,
nodes_to_skip,
)
mock_prisma.is_connected.return_value = True
mock_edb.create_graph_execution = mocker.AsyncMock(return_value=mock_graph_exec)
@@ -456,3 +459,212 @@ async def test_add_graph_execution_is_repeatable(mocker: MockerFixture):
# Both executions should succeed (though they create different objects)
assert result1 == mock_graph_exec
assert result2 == mock_graph_exec_2
# ============================================================================
# Tests for Optional Credentials Feature
# ============================================================================
@pytest.mark.asyncio
async def test_validate_node_input_credentials_returns_nodes_to_skip(
mocker: MockerFixture,
):
"""
Test that _validate_node_input_credentials returns nodes_to_skip set
for nodes with credentials_optional=True and missing credentials.
"""
from backend.executor.utils import _validate_node_input_credentials
# Create a mock node with credentials_optional=True
mock_node = mocker.MagicMock()
mock_node.id = "node-with-optional-creds"
mock_node.credentials_optional = True
mock_node.input_default = {} # No credentials configured
# Create a mock block with credentials field
mock_block = mocker.MagicMock()
mock_credentials_field_type = mocker.MagicMock()
mock_block.input_schema.get_credentials_fields.return_value = {
"credentials": mock_credentials_field_type
}
mock_node.block = mock_block
# Create mock graph
mock_graph = mocker.MagicMock()
mock_graph.nodes = [mock_node]
# Call the function
errors, nodes_to_skip = await _validate_node_input_credentials(
graph=mock_graph,
user_id="test-user-id",
nodes_input_masks=None,
)
# Node should be in nodes_to_skip, not in errors
assert mock_node.id in nodes_to_skip
assert mock_node.id not in errors
@pytest.mark.asyncio
async def test_validate_node_input_credentials_required_missing_creds_error(
mocker: MockerFixture,
):
"""
Test that _validate_node_input_credentials returns errors
for nodes with credentials_optional=False and missing credentials.
"""
from backend.executor.utils import _validate_node_input_credentials
# Create a mock node with credentials_optional=False (required)
mock_node = mocker.MagicMock()
mock_node.id = "node-with-required-creds"
mock_node.credentials_optional = False
mock_node.input_default = {} # No credentials configured
# Create a mock block with credentials field
mock_block = mocker.MagicMock()
mock_credentials_field_type = mocker.MagicMock()
mock_block.input_schema.get_credentials_fields.return_value = {
"credentials": mock_credentials_field_type
}
mock_node.block = mock_block
# Create mock graph
mock_graph = mocker.MagicMock()
mock_graph.nodes = [mock_node]
# Call the function
errors, nodes_to_skip = await _validate_node_input_credentials(
graph=mock_graph,
user_id="test-user-id",
nodes_input_masks=None,
)
# Node should be in errors, not in nodes_to_skip
assert mock_node.id in errors
assert "credentials" in errors[mock_node.id]
assert "required" in errors[mock_node.id]["credentials"].lower()
assert mock_node.id not in nodes_to_skip
@pytest.mark.asyncio
async def test_validate_graph_with_credentials_returns_nodes_to_skip(
mocker: MockerFixture,
):
"""
Test that validate_graph_with_credentials returns nodes_to_skip set
from _validate_node_input_credentials.
"""
from backend.executor.utils import validate_graph_with_credentials
# Mock _validate_node_input_credentials to return specific values
mock_validate = mocker.patch(
"backend.executor.utils._validate_node_input_credentials"
)
expected_errors = {"node1": {"field": "error"}}
expected_nodes_to_skip = {"node2", "node3"}
mock_validate.return_value = (expected_errors, expected_nodes_to_skip)
# Mock GraphModel with validate_graph_get_errors method
mock_graph = mocker.MagicMock()
mock_graph.validate_graph_get_errors.return_value = {}
# Call the function
errors, nodes_to_skip = await validate_graph_with_credentials(
graph=mock_graph,
user_id="test-user-id",
nodes_input_masks=None,
)
# Verify nodes_to_skip is passed through
assert nodes_to_skip == expected_nodes_to_skip
assert "node1" in errors
@pytest.mark.asyncio
async def test_add_graph_execution_with_nodes_to_skip(mocker: MockerFixture):
"""
Test that add_graph_execution properly passes nodes_to_skip
to the graph execution entry.
"""
from backend.data.execution import GraphExecutionWithNodes
from backend.executor.utils import add_graph_execution
# Mock data
graph_id = "test-graph-id"
user_id = "test-user-id"
inputs = {"test_input": "test_value"}
graph_version = 1
# Mock the graph object
mock_graph = mocker.MagicMock()
mock_graph.version = graph_version
# Starting nodes and masks
starting_nodes_input = [("node1", {"input1": "value1"})]
compiled_nodes_input_masks = {}
nodes_to_skip = {"skipped-node-1", "skipped-node-2"}
# Mock the graph execution object
mock_graph_exec = mocker.MagicMock(spec=GraphExecutionWithNodes)
mock_graph_exec.id = "execution-id-123"
mock_graph_exec.node_executions = []
# Track what's passed to to_graph_execution_entry
captured_kwargs = {}
def capture_to_entry(**kwargs):
captured_kwargs.update(kwargs)
return mocker.MagicMock()
mock_graph_exec.to_graph_execution_entry.side_effect = capture_to_entry
# Setup mocks
mock_validate = mocker.patch(
"backend.executor.utils.validate_and_construct_node_execution_input"
)
mock_edb = mocker.patch("backend.executor.utils.execution_db")
mock_prisma = mocker.patch("backend.executor.utils.prisma")
mock_udb = mocker.patch("backend.executor.utils.user_db")
mock_gdb = mocker.patch("backend.executor.utils.graph_db")
mock_get_queue = mocker.patch("backend.executor.utils.get_async_execution_queue")
mock_get_event_bus = mocker.patch(
"backend.executor.utils.get_async_execution_event_bus"
)
# Setup returns - include nodes_to_skip in the tuple
mock_validate.return_value = (
mock_graph,
starting_nodes_input,
compiled_nodes_input_masks,
nodes_to_skip, # This should be passed through
)
mock_prisma.is_connected.return_value = True
mock_edb.create_graph_execution = mocker.AsyncMock(return_value=mock_graph_exec)
mock_edb.update_graph_execution_stats = mocker.AsyncMock(
return_value=mock_graph_exec
)
mock_edb.update_node_execution_status_batch = mocker.AsyncMock()
mock_user = mocker.MagicMock()
mock_user.timezone = "UTC"
mock_settings = mocker.MagicMock()
mock_settings.human_in_the_loop_safe_mode = True
mock_udb.get_user_by_id = mocker.AsyncMock(return_value=mock_user)
mock_gdb.get_graph_settings = mocker.AsyncMock(return_value=mock_settings)
mock_get_queue.return_value = mocker.AsyncMock()
mock_get_event_bus.return_value = mocker.MagicMock(publish=mocker.AsyncMock())
# Call the function
await add_graph_execution(
graph_id=graph_id,
user_id=user_id,
inputs=inputs,
graph_version=graph_version,
)
# Verify nodes_to_skip was passed to to_graph_execution_entry
assert "nodes_to_skip" in captured_kwargs
assert captured_kwargs["nodes_to_skip"] == nodes_to_skip

View File

@@ -0,0 +1,227 @@
#!/usr/bin/env python3
"""
Generate a lightweight stub for prisma/types.py that collapses all exported
symbols to Any. This prevents Pyright from spending time/budget on Prisma's
query DSL types while keeping runtime behavior unchanged.
Usage:
poetry run gen-prisma-stub
This script automatically finds the prisma package location and generates
the types.pyi stub file in the same directory as types.py.
"""
from __future__ import annotations
import ast
import importlib.util
import sys
from pathlib import Path
from typing import Iterable, Set
def _iter_assigned_names(target: ast.expr) -> Iterable[str]:
"""Extract names from assignment targets (handles tuple unpacking)."""
if isinstance(target, ast.Name):
yield target.id
elif isinstance(target, (ast.Tuple, ast.List)):
for elt in target.elts:
yield from _iter_assigned_names(elt)
def _is_private(name: str) -> bool:
"""Check if a name is private (starts with _ but not __)."""
return name.startswith("_") and not name.startswith("__")
def _is_safe_type_alias(node: ast.Assign) -> bool:
"""Check if an assignment is a safe type alias that shouldn't be stubbed.
Safe types are:
- Literal types (don't cause type budget issues)
- Simple type references (SortMode, SortOrder, etc.)
- TypeVar definitions
"""
if not node.value:
return False
# Check if it's a Subscript (like Literal[...], Union[...], TypeVar[...])
if isinstance(node.value, ast.Subscript):
# Get the base type name
if isinstance(node.value.value, ast.Name):
base_name = node.value.value.id
# Literal types are safe
if base_name == "Literal":
return True
# TypeVar is safe
if base_name == "TypeVar":
return True
elif isinstance(node.value.value, ast.Attribute):
# Handle typing_extensions.Literal etc.
if node.value.value.attr == "Literal":
return True
# Check if it's a simple Name reference (like SortMode = _types.SortMode)
if isinstance(node.value, ast.Attribute):
return True
# Check if it's a Call (like TypeVar(...))
if isinstance(node.value, ast.Call):
if isinstance(node.value.func, ast.Name):
if node.value.func.id == "TypeVar":
return True
return False
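# Illustrative classification (hypothetical names, not from the real types.py):
#   SortOrder = Literal["asc", "desc"]   -> safe: Literal subscript, preserved
#   SortMode = _types.SortMode           -> safe: simple attribute reference
#   FilterT = TypeVar("FilterT")         -> safe: TypeVar(...) call
#   UserWhereInput = TypedDict(...)      -> unsafe: collapsed to dict[str, Any]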
def collect_top_level_symbols(
tree: ast.Module, source_lines: list[str]
) -> tuple[Set[str], Set[str], list[str], Set[str]]:
"""Collect all top-level symbols from an AST module.
Returns:
Tuple of (class_names, function_names, safe_variable_sources, unsafe_variable_names)
safe_variable_sources contains the actual source code lines for safe variables
"""
classes: Set[str] = set()
functions: Set[str] = set()
safe_variable_sources: list[str] = []
unsafe_variables: Set[str] = set()
for node in tree.body:
if isinstance(node, ast.ClassDef):
if not _is_private(node.name):
classes.add(node.name)
elif isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
if not _is_private(node.name):
functions.add(node.name)
elif isinstance(node, ast.Assign):
is_safe = _is_safe_type_alias(node)
names = []
for t in node.targets:
for n in _iter_assigned_names(t):
if not _is_private(n):
names.append(n)
if names:
if is_safe:
# Extract the source code for this assignment
start_line = node.lineno - 1 # 0-indexed
end_line = node.end_lineno if node.end_lineno else node.lineno
source = "\n".join(source_lines[start_line:end_line])
safe_variable_sources.append(source)
else:
unsafe_variables.update(names)
elif isinstance(node, ast.AnnAssign) and node.target:
# Annotated assignments are always stubbed
for n in _iter_assigned_names(node.target):
if not _is_private(n):
unsafe_variables.add(n)
return classes, functions, safe_variable_sources, unsafe_variables
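# Example return value for a tiny hypothetical module:
#   ({"BooleanFilter"}, {"serialize_raw_query"},
#    ['SortOrder = Literal["asc", "desc"]'], {"UserWhereInput"})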
def find_prisma_types_path() -> Path:
"""Find the prisma types.py file in the installed package."""
spec = importlib.util.find_spec("prisma")
if spec is None or spec.origin is None:
raise RuntimeError("Could not find prisma package. Is it installed?")
prisma_dir = Path(spec.origin).parent
types_path = prisma_dir / "types.py"
if not types_path.exists():
raise RuntimeError(f"prisma/types.py not found at {types_path}")
return types_path
def generate_stub(src_path: Path, stub_path: Path) -> int:
"""Generate the .pyi stub file from the source types.py."""
code = src_path.read_text(encoding="utf-8", errors="ignore")
source_lines = code.splitlines()
tree = ast.parse(code, filename=str(src_path))
classes, functions, safe_variable_sources, unsafe_variables = (
collect_top_level_symbols(tree, source_lines)
)
header = """\
# -*- coding: utf-8 -*-
# Auto-generated stub file - DO NOT EDIT
# Generated by gen_prisma_types_stub.py
#
# This stub intentionally collapses complex Prisma query DSL types to Any.
# Prisma's generated types can explode Pyright's type inference budgets
# on large schemas. We collapse them to Any so the rest of the codebase
# can remain strongly typed while keeping runtime behavior unchanged.
#
# Safe types (Literal, TypeVar, simple references) are preserved from the
# original types.py to maintain proper type checking where possible.
from __future__ import annotations
from typing import Any
from typing_extensions import Literal
# Re-export commonly used typing constructs that may be imported from this module
from typing import TYPE_CHECKING, TypeVar, Generic, Union, Optional, List, Dict
# Base type alias for stubbed Prisma types - allows any dict structure
_PrismaDict = dict[str, Any]
"""
lines = [header]
# Include safe variable definitions (Literal types, TypeVars, etc.)
lines.append("# Safe type definitions preserved from original types.py")
for source in safe_variable_sources:
lines.append(source)
lines.append("")
# Stub all classes and unsafe variables uniformly as dict[str, Any] aliases
# This allows:
# 1. Use in type annotations: x: SomeType
# 2. Constructor calls: SomeType(...)
# 3. Dict literal assignments: x: SomeType = {...}
lines.append(
"# Stubbed types (collapsed to dict[str, Any] to prevent type budget exhaustion)"
)
all_stubbed = sorted(classes | unsafe_variables)
for name in all_stubbed:
lines.append(f"{name} = _PrismaDict")
lines.append("")
# Stub functions
for name in sorted(functions):
lines.append(f"def {name}(*args: Any, **kwargs: Any) -> Any: ...")
lines.append("")
stub_path.write_text("\n".join(lines), encoding="utf-8")
return (
len(classes)
+ len(functions)
+ len(safe_variable_sources)
+ len(unsafe_variables)
)
def main() -> None:
"""Main entry point."""
try:
types_path = find_prisma_types_path()
stub_path = types_path.with_suffix(".pyi")
print(f"Found prisma types.py at: {types_path}")
print(f"Generating stub at: {stub_path}")
num_symbols = generate_stub(types_path, stub_path)
print(f"Generated {stub_path.name} with {num_symbols} Any-typed symbols")
except Exception as e:
print(f"Error: {e}", file=sys.stderr)
sys.exit(1)
if __name__ == "__main__":
main()

View File

@@ -25,6 +25,9 @@ def run(*command: str) -> None:
def lint():
# Generate Prisma types stub before running pyright to prevent type budget exhaustion
run("gen-prisma-stub")
lint_step_args: list[list[str]] = [
["ruff", "check", *TARGET_DIRS, "--exit-zero"],
["ruff", "format", "--diff", "--check", LIBS_DIR],
@@ -49,4 +52,6 @@ def format():
run("ruff", "format", LIBS_DIR)
run("isort", "--profile", "black", BACKEND_DIR)
run("black", BACKEND_DIR)
# Generate Prisma types stub before running pyright to prevent type budget exhaustion
run("gen-prisma-stub")
run("pyright", *TARGET_DIRS)

View File

@@ -0,0 +1,81 @@
-- DropIndex
DROP INDEX "StoreListingVersion_storeListingId_version_key";
-- CreateTable
CREATE TABLE "UserBusinessUnderstanding" (
"id" TEXT NOT NULL,
"createdAt" TIMESTAMP(3) NOT NULL DEFAULT CURRENT_TIMESTAMP,
"updatedAt" TIMESTAMP(3) NOT NULL DEFAULT CURRENT_TIMESTAMP,
"userId" TEXT NOT NULL,
"userName" TEXT,
"jobTitle" TEXT,
"businessName" TEXT,
"industry" TEXT,
"businessSize" TEXT,
"userRole" TEXT,
"keyWorkflows" JSONB,
"dailyActivities" JSONB,
"painPoints" JSONB,
"bottlenecks" JSONB,
"manualTasks" JSONB,
"automationGoals" JSONB,
"currentSoftware" JSONB,
"existingAutomation" JSONB,
"additionalNotes" TEXT,
CONSTRAINT "UserBusinessUnderstanding_pkey" PRIMARY KEY ("id")
);
-- CreateTable
CREATE TABLE "ChatSession" (
"id" TEXT NOT NULL,
"createdAt" TIMESTAMP(3) NOT NULL DEFAULT CURRENT_TIMESTAMP,
"updatedAt" TIMESTAMP(3) NOT NULL DEFAULT CURRENT_TIMESTAMP,
"userId" TEXT,
"title" TEXT,
"credentials" JSONB NOT NULL DEFAULT '{}',
"successfulAgentRuns" JSONB NOT NULL DEFAULT '{}',
"successfulAgentSchedules" JSONB NOT NULL DEFAULT '{}',
"totalPromptTokens" INTEGER NOT NULL DEFAULT 0,
"totalCompletionTokens" INTEGER NOT NULL DEFAULT 0,
CONSTRAINT "ChatSession_pkey" PRIMARY KEY ("id")
);
-- CreateTable
CREATE TABLE "ChatMessage" (
"id" TEXT NOT NULL,
"createdAt" TIMESTAMP(3) NOT NULL DEFAULT CURRENT_TIMESTAMP,
"sessionId" TEXT NOT NULL,
"role" TEXT NOT NULL,
"content" TEXT,
"name" TEXT,
"toolCallId" TEXT,
"refusal" TEXT,
"toolCalls" JSONB,
"functionCall" JSONB,
"sequence" INTEGER NOT NULL,
CONSTRAINT "ChatMessage_pkey" PRIMARY KEY ("id")
);
-- CreateIndex
CREATE UNIQUE INDEX "UserBusinessUnderstanding_userId_key" ON "UserBusinessUnderstanding"("userId");
-- CreateIndex
CREATE INDEX "UserBusinessUnderstanding_userId_idx" ON "UserBusinessUnderstanding"("userId");
-- CreateIndex
CREATE INDEX "ChatSession_userId_updatedAt_idx" ON "ChatSession"("userId", "updatedAt");
-- CreateIndex
CREATE INDEX "ChatMessage_sessionId_sequence_idx" ON "ChatMessage"("sessionId", "sequence");
-- CreateIndex
CREATE UNIQUE INDEX "ChatMessage_sessionId_sequence_key" ON "ChatMessage"("sessionId", "sequence");
-- AddForeignKey
ALTER TABLE "UserBusinessUnderstanding" ADD CONSTRAINT "UserBusinessUnderstanding_userId_fkey" FOREIGN KEY ("userId") REFERENCES "User"("id") ON DELETE CASCADE ON UPDATE CASCADE;
-- AddForeignKey
ALTER TABLE "ChatMessage" ADD CONSTRAINT "ChatMessage_sessionId_fkey" FOREIGN KEY ("sessionId") REFERENCES "ChatSession"("id") ON DELETE CASCADE ON UPDATE CASCADE;

View File

@@ -117,6 +117,7 @@ lint = "linter:lint"
test = "run_tests:test"
load-store-agents = "test.load_store_agents:run"
export-api-schema = "backend.cli.generate_openapi_json:main"
gen-prisma-stub = "gen_prisma_types_stub:main"
oauth-tool = "backend.cli.oauth_tool:cli"
[tool.isort]

View File

@@ -53,6 +53,7 @@ model User {
Profile Profile[]
UserOnboarding UserOnboarding?
BusinessUnderstanding UserBusinessUnderstanding?
BuilderSearchHistory BuilderSearchHistory[]
StoreListings StoreListing[]
StoreListingReviews StoreListingReview[]
@@ -121,19 +122,109 @@ model UserOnboarding {
User User @relation(fields: [userId], references: [id], onDelete: Cascade)
}
model UserBusinessUnderstanding {
id String @id @default(uuid())
createdAt DateTime @default(now())
updatedAt DateTime @default(now()) @updatedAt
userId String @unique
User User @relation(fields: [userId], references: [id], onDelete: Cascade)
// User info
userName String?
jobTitle String?
// Business basics (string columns)
businessName String?
industry String?
businessSize String? // "1-10", "11-50", "51-200", "201-1000", "1000+"
userRole String? // Role in organization context (e.g., "decision maker", "implementer")
// Processes & activities (JSON arrays)
keyWorkflows Json?
dailyActivities Json?
// Pain points & goals (JSON arrays)
painPoints Json?
bottlenecks Json?
manualTasks Json?
automationGoals Json?
// Current tools (JSON arrays)
currentSoftware Json?
existingAutomation Json?
additionalNotes String?
@@index([userId])
}
model BuilderSearchHistory {
id String @id @default(uuid())
createdAt DateTime @default(now())
updatedAt DateTime @default(now()) @updatedAt
searchQuery String
filter String[] @default([])
byCreator String[] @default([])
filter String[] @default([])
byCreator String[] @default([])
userId String
User User @relation(fields: [userId], references: [id], onDelete: Cascade)
}
////////////////////////////////////////////////////////////
////////////////////////////////////////////////////////////
//////////////// CHAT SESSION TABLES ///////////////////
////////////////////////////////////////////////////////////
////////////////////////////////////////////////////////////
model ChatSession {
id String @id @default(uuid())
createdAt DateTime @default(now())
updatedAt DateTime @default(now()) @updatedAt
userId String?
// Session metadata
title String?
credentials Json @default("{}") // Map of provider -> credential metadata
// Rate limiting counters (stored as JSON maps)
successfulAgentRuns Json @default("{}") // Map of graph_id -> count
successfulAgentSchedules Json @default("{}") // Map of graph_id -> count
// Usage tracking
totalPromptTokens Int @default(0)
totalCompletionTokens Int @default(0)
Messages ChatMessage[]
@@index([userId, updatedAt])
}
model ChatMessage {
id String @id @default(uuid())
createdAt DateTime @default(now())
sessionId String
Session ChatSession @relation(fields: [sessionId], references: [id], onDelete: Cascade)
// Message content
role String // "user", "assistant", "system", "tool", "function"
content String?
name String?
toolCallId String?
refusal String?
toolCalls Json? // List of tool calls for assistant messages
functionCall Json? // Deprecated but kept for compatibility
// Ordering within session
sequence Int
@@unique([sessionId, sequence])
@@index([sessionId, sequence])
}
// This model describes the Agent Graph/Flow (Multi Agent System).
model AgentGraph {
id String @default(uuid())
@@ -721,26 +812,26 @@ view StoreAgent {
storeListingVersionId String
updated_at DateTime
slug String
agent_name String
agent_video String?
agent_output_demo String?
agent_image String[]
slug String
agent_name String
agent_video String?
agent_output_demo String?
agent_image String[]
featured Boolean @default(false)
creator_username String?
creator_avatar String?
sub_heading String
description String
categories String[]
search Unsupported("tsvector")? @default(dbgenerated("''::tsvector"))
runs Int
rating Float
versions String[]
agentGraphVersions String[]
agentGraphId String
is_available Boolean @default(true)
useForOnboarding Boolean @default(false)
featured Boolean @default(false)
creator_username String?
creator_avatar String?
sub_heading String
description String
categories String[]
search Unsupported("tsvector")? @default(dbgenerated("''::tsvector"))
runs Int
rating Float
versions String[]
agentGraphVersions String[]
agentGraphId String
is_available Boolean @default(true)
useForOnboarding Boolean @default(false)
// Materialized views used (refreshed every 15 minutes via pg_cron):
// - mv_agent_run_counts - Pre-aggregated agent execution counts by agentGraphId
@@ -856,14 +947,14 @@ model StoreListingVersion {
AgentGraph AgentGraph @relation(fields: [agentGraphId, agentGraphVersion], references: [id, version])
// Content fields
name String
subHeading String
videoUrl String?
agentOutputDemoUrl String?
imageUrls String[]
description String
instructions String?
categories String[]
name String
subHeading String
videoUrl String?
agentOutputDemoUrl String?
imageUrls String[]
description String
instructions String?
categories String[]
isFeatured Boolean @default(false)
@@ -899,7 +990,6 @@ model StoreListingVersion {
// Reviews for this specific version
Reviews StoreListingReview[]
@@unique([storeListingId, version])
@@index([storeListingId, submissionStatus, isAvailable])
@@index([submissionStatus])
@@index([reviewerId])
@@ -998,16 +1088,16 @@ model OAuthApplication {
updatedAt DateTime @updatedAt
// Application metadata
name String
description String?
logoUrl String? // URL to app logo stored in GCS
clientId String @unique
clientSecret String // Hashed with Scrypt (same as API keys)
clientSecretSalt String // Salt for Scrypt hashing
name String
description String?
logoUrl String? // URL to app logo stored in GCS
clientId String @unique
clientSecret String // Hashed with Scrypt (same as API keys)
clientSecretSalt String // Salt for Scrypt hashing
// OAuth configuration
redirectUris String[] // Allowed callback URLs
grantTypes String[] @default(["authorization_code", "refresh_token"])
grantTypes String[] @default(["authorization_code", "refresh_token"])
scopes APIKeyPermission[] // Which permissions the app can request
// Application management

View File

@@ -2,6 +2,7 @@
"created_at": "2025-09-04T13:37:00",
"credentials_input_schema": {
"properties": {},
"required": [],
"title": "TestGraphCredentialsInputSchema",
"type": "object"
},

View File

@@ -2,6 +2,7 @@
{
"credentials_input_schema": {
"properties": {},
"required": [],
"title": "TestGraphCredentialsInputSchema",
"type": "object"
},

View File

@@ -4,6 +4,7 @@
"id": "test-agent-1",
"graph_id": "test-agent-1",
"graph_version": 1,
"owner_user_id": "3e53486c-cf57-477e-ba2a-cb02dc828e1a",
"image_url": null,
"creator_name": "Test Creator",
"creator_image_url": "",
@@ -41,6 +42,7 @@
"id": "test-agent-2",
"graph_id": "test-agent-2",
"graph_version": 1,
"owner_user_id": "3e53486c-cf57-477e-ba2a-cb02dc828e1a",
"image_url": null,
"creator_name": "Test Creator",
"creator_image_url": "",

View File

@@ -1,6 +1,7 @@
{
"submissions": [
{
"listing_id": "test-listing-id",
"agent_id": "test-agent-id",
"agent_version": 1,
"name": "Test Agent",

View File

@@ -37,7 +37,7 @@ services:
context: ../
dockerfile: autogpt_platform/backend/Dockerfile
target: migrate
command: ["sh", "-c", "poetry run prisma generate && poetry run prisma migrate deploy"]
command: ["sh", "-c", "poetry run prisma generate && poetry run gen-prisma-stub && poetry run prisma migrate deploy"]
develop:
watch:
- path: ./

View File

@@ -92,7 +92,6 @@
"react-currency-input-field": "4.0.3",
"react-day-picker": "9.11.1",
"react-dom": "18.3.1",
"react-drag-drop-files": "2.4.0",
"react-hook-form": "7.66.0",
"react-icons": "5.5.0",
"react-markdown": "9.0.3",

View File

@@ -200,9 +200,6 @@ importers:
react-dom:
specifier: 18.3.1
version: 18.3.1(react@18.3.1)
react-drag-drop-files:
specifier: 2.4.0
version: 2.4.0(react-dom@18.3.1(react@18.3.1))(react@18.3.1)
react-hook-form:
specifier: 7.66.0
version: 7.66.0(react@18.3.1)
@@ -1004,9 +1001,6 @@ packages:
'@emotion/memoize@0.8.1':
resolution: {integrity: sha512-W2P2c/VRW1/1tLox0mVUalvnWXxavmv/Oum2aPsRcoDJuob75FC3Y8FbpfLwUegRcxINtGUMPq0tFCvYNTBXNA==}
'@emotion/unitless@0.8.1':
resolution: {integrity: sha512-KOEGMu6dmJZtpadb476IsZBclKvILjopjUii3V+7MnXIQCYh8W3NgNcgwo21n9LXZX6EDIKvqfjYxXebDwxKmQ==}
'@epic-web/invariant@1.0.0':
resolution: {integrity: sha512-lrTPqgvfFQtR/eY/qkIzp98OGdNJu0m5ji3q/nJI8v3SXkRKEnWiOxMmbvcSoAIzv/cGiuvRy57k4suKQSAdwA==}
@@ -3122,9 +3116,6 @@ packages:
'@types/statuses@2.0.6':
resolution: {integrity: sha512-xMAgYwceFhRA2zY+XbEA7mxYbA093wdiW8Vu6gZPGWy9cmOyU9XesH1tNcEWsKFd5Vzrqx5T3D38PWx1FIIXkA==}
'@types/stylis@4.2.7':
resolution: {integrity: sha512-VgDNokpBoKF+wrdvhAAfS55OMQpL6QRglwTwNC3kIgBrzZxA4WsFj+2eLfEA/uMUDzBcEhYmjSbwQakn/i3ajA==}
'@types/tedious@4.0.14':
resolution: {integrity: sha512-KHPsfX/FoVbUGbyYvk1q9MMQHLPeRZhRJZdO45Q4YjvFkv4hMNghCWTvy7rdKessBsmtz4euWCWAB6/tVpI1Iw==}
@@ -3781,9 +3772,6 @@ packages:
resolution: {integrity: sha512-QOSvevhslijgYwRx6Rv7zKdMF8lbRmx+uQGx2+vDc+KI/eBnsy9kit5aj23AgGu3pa4t9AgwbnXWqS+iOY+2aA==}
engines: {node: '>= 6'}
camelize@1.0.1:
resolution: {integrity: sha512-dU+Tx2fsypxTgtLoE36npi3UqcjSSMNYfkqgmoEhtZrraP5VWq0K7FkWVTYa8eMPtnU/G2txVsfdCJTn9uzpuQ==}
caniuse-lite@1.0.30001762:
resolution: {integrity: sha512-PxZwGNvH7Ak8WX5iXzoK1KPZttBXNPuaOvI2ZYU7NrlM+d9Ov+TUvlLOBNGzVXAntMSMMlJPd+jY6ovrVjSmUw==}
@@ -3997,10 +3985,6 @@ packages:
resolution: {integrity: sha512-r4ESw/IlusD17lgQi1O20Fa3qNnsckR126TdUuBgAu7GBYSIPvdNyONd3Zrxh0xCwA4+6w/TDArBPsMvhur+KQ==}
engines: {node: '>= 0.10'}
css-color-keywords@1.0.0:
resolution: {integrity: sha512-FyyrDHZKEjXDpNJYvVsV960FiqQyXc/LlYmsxl2BcdMb2WPx0OGRVgTg55rPSyLSNMqP52R9r8geSp7apN3Ofg==}
engines: {node: '>=4'}
css-loader@6.11.0:
resolution: {integrity: sha512-CTJ+AEQJjq5NzLga5pE39qdiSV56F8ywCIsqNIRF0r7BDgWsN25aazToqAFg7ZrtA/U016xudB3ffgweORxX7g==}
engines: {node: '>= 12.13.0'}
@@ -4016,9 +4000,6 @@ packages:
css-select@4.3.0:
resolution: {integrity: sha512-wPpOYtnsVontu2mODhA19JrqWxNsfdatRKd64kmpRbQgh1KtItko5sTnEpPdpSaJszTOhEMlF/RPz28qj4HqhQ==}
css-to-react-native@3.2.0:
resolution: {integrity: sha512-e8RKaLXMOFii+02mOlqwjbD00KSEKqblnpO9e++1aXS1fPQOpS1YoqdVHBqPjHNoxeF2mimzVqawm2KCbEdtHQ==}
css-what@6.2.2:
resolution: {integrity: sha512-u/O3vwbptzhMs3L1fQE82ZSLHQQfto5gyZzwteVIEyeaY5Fc7R4dapF/BvRoSYFeqfBk4m0V1Vafq5Pjv25wvA==}
engines: {node: '>= 6'}
@@ -6131,10 +6112,6 @@ packages:
resolution: {integrity: sha512-PS08Iboia9mts/2ygV3eLpY5ghnUcfLV/EXTOW1E2qYxJKGGBUtNjN76FYHnMs36RmARn41bC0AZmn+rR0OVpQ==}
engines: {node: ^10 || ^12 || >=14}
postcss@8.4.49:
resolution: {integrity: sha512-OCVPnIObs4N29kxTjzLfUryOkvZEq+pf8jTF0lg8E7uETuWHA+v7j3c/xJmiqpX450191LlmZfUKkXxkTry7nA==}
engines: {node: ^10 || ^12 || >=14}
postcss@8.5.6:
resolution: {integrity: sha512-3Ybi1tAuwAP9s0r1UQ2J4n5Y0G05bJkpUIO0/bI9MhwmD70S5aTWbXGBwxHrelT+XM1k6dM0pk+SwNkpTRN7Pg==}
engines: {node: ^10 || ^12 || >=14}
@@ -6306,12 +6283,6 @@ packages:
peerDependencies:
react: ^18.3.1
react-drag-drop-files@2.4.0:
resolution: {integrity: sha512-MGPV3HVVnwXEXq3gQfLtSU3jz5j5jrabvGedokpiSEMoONrDHgYl/NpIOlfsqGQ4zBv1bzzv7qbKURZNOX32PA==}
peerDependencies:
react: ^18.0.0
react-dom: ^18.0.0
react-hook-form@7.66.0:
resolution: {integrity: sha512-xXBqsWGKrY46ZqaHDo+ZUYiMUgi8suYu5kdrS20EG8KiL7VRQitEbNjm+UcrDYrNi1YLyfpmAeGjCZYXLT9YBw==}
engines: {node: '>=18.0.0'}
@@ -6678,9 +6649,6 @@ packages:
engines: {node: '>= 0.10'}
hasBin: true
shallowequal@1.1.0:
resolution: {integrity: sha512-y0m1JoUZSlPAjXVtPPW70aZWfIL/dSP7AFkRnniLCrK/8MDKog3TySTBmckD+RObVxH0v4Tox67+F14PdED2oQ==}
sharp@0.34.5:
resolution: {integrity: sha512-Ou9I5Ft9WNcCbXrU9cMgPBcCK8LiwLqcbywW3t4oDV37n1pzpuNLsYiAV8eODnjbtQlSDwZ2cUEeQz4E54Hltg==}
engines: {node: ^18.17.0 || ^20.3.0 || >=21.0.0}
@@ -6894,13 +6862,6 @@ packages:
style-to-object@1.0.14:
resolution: {integrity: sha512-LIN7rULI0jBscWQYaSswptyderlarFkjQ+t79nzty8tcIAceVomEVlLzH5VP4Cmsv6MtKhs7qaAiwlcp+Mgaxw==}
styled-components@6.2.0:
resolution: {integrity: sha512-ryFCkETE++8jlrBmC+BoGPUN96ld1/Yp0s7t5bcXDobrs4XoXroY1tN+JbFi09hV6a5h3MzbcVi8/BGDP0eCgQ==}
engines: {node: '>= 16'}
peerDependencies:
react: '>= 16.8.0'
react-dom: '>= 16.8.0'
styled-jsx@5.1.6:
resolution: {integrity: sha512-qSVyDTeMotdvQYoHWLNGwRFJHC+i+ZvdBRYosOFgC+Wg1vx4frN2/RG/NA7SYqqvKNLf39P2LSRA2pu6n0XYZA==}
engines: {node: '>= 12.0.0'}
@@ -6927,9 +6888,6 @@ packages:
babel-plugin-macros:
optional: true
stylis@4.3.6:
resolution: {integrity: sha512-yQ3rwFWRfwNUY7H5vpU0wfdkNSnvnJinhF9830Swlaxl03zsOjCfmX0ugac+3LtK0lYSgwL/KXc8oYL3mG4YFQ==}
sucrase@3.35.1:
resolution: {integrity: sha512-DhuTmvZWux4H1UOnWMB3sk0sbaCVOoQZjv8u1rDoTV0HTdGem9hkAZtl4JZy8P2z4Bg0nT+YMeOFyVr4zcG5Tw==}
engines: {node: '>=16 || 14 >=14.17'}
@@ -7096,9 +7054,6 @@ packages:
tslib@1.14.1:
resolution: {integrity: sha512-Xni35NKzjgMrwevysHTCArtLDpPvye8zV/0E4EyYn43P7/7qvQwPh9BGkHewbMulVntbigmcT7rdX3BNo9wRJg==}
tslib@2.6.2:
resolution: {integrity: sha512-AEYxH93jGFPn/a2iVAwW87VuUIkR1FVUKB77NwMF7nBTDkDrrT/Hpt/IrCJ0QXhW27jTBDcf5ZY7w6RiqTMw2Q==}
tslib@2.8.1:
resolution: {integrity: sha512-oJFu94HQb+KVduSUQL7wnpmqnfmLsOA/nAh6b6EH0wCEoK0/mPeXU6c3wKDV83MkOuHPRHtSXKKU99IBazS/2w==}
@@ -8335,10 +8290,10 @@ snapshots:
'@emotion/is-prop-valid@1.2.2':
dependencies:
'@emotion/memoize': 0.8.1
optional: true
'@emotion/memoize@0.8.1': {}
'@emotion/unitless@0.8.1': {}
'@emotion/memoize@0.8.1':
optional: true
'@epic-web/invariant@1.0.0': {}
@@ -10734,8 +10689,6 @@ snapshots:
'@types/statuses@2.0.6': {}
'@types/stylis@4.2.7': {}
'@types/tedious@4.0.14':
dependencies:
'@types/node': 24.10.0
@@ -11432,8 +11385,6 @@ snapshots:
camelcase-css@2.0.1: {}
camelize@1.0.1: {}
caniuse-lite@1.0.30001762: {}
case-sensitive-paths-webpack-plugin@2.4.0: {}
@@ -11645,8 +11596,6 @@ snapshots:
randombytes: 2.1.0
randomfill: 1.0.4
css-color-keywords@1.0.0: {}
css-loader@6.11.0(webpack@5.104.1(esbuild@0.25.12)):
dependencies:
icss-utils: 5.1.0(postcss@8.5.6)
@@ -11668,12 +11617,6 @@ snapshots:
domutils: 2.8.0
nth-check: 2.1.1
css-to-react-native@3.2.0:
dependencies:
camelize: 1.0.1
css-color-keywords: 1.0.0
postcss-value-parser: 4.2.0
css-what@6.2.2: {}
css.escape@1.5.1: {}
@@ -12127,8 +12070,8 @@ snapshots:
'@typescript-eslint/parser': 8.52.0(eslint@8.57.1)(typescript@5.9.3)
eslint: 8.57.1
eslint-import-resolver-node: 0.3.9
eslint-import-resolver-typescript: 3.10.1(eslint-plugin-import@2.32.0(@typescript-eslint/parser@8.52.0(eslint@8.57.1)(typescript@5.9.3))(eslint@8.57.1))(eslint@8.57.1)
eslint-plugin-import: 2.32.0(@typescript-eslint/parser@8.52.0(eslint@8.57.1)(typescript@5.9.3))(eslint-import-resolver-typescript@3.10.1(eslint-plugin-import@2.32.0(@typescript-eslint/parser@8.52.0(eslint@8.57.1)(typescript@5.9.3))(eslint@8.57.1))(eslint@8.57.1))(eslint@8.57.1)
eslint-import-resolver-typescript: 3.10.1(eslint-plugin-import@2.32.0)(eslint@8.57.1)
eslint-plugin-import: 2.32.0(@typescript-eslint/parser@8.52.0(eslint@8.57.1)(typescript@5.9.3))(eslint-import-resolver-typescript@3.10.1)(eslint@8.57.1)
eslint-plugin-jsx-a11y: 6.10.2(eslint@8.57.1)
eslint-plugin-react: 7.37.5(eslint@8.57.1)
eslint-plugin-react-hooks: 5.2.0(eslint@8.57.1)
@@ -12147,7 +12090,7 @@ snapshots:
transitivePeerDependencies:
- supports-color
eslint-import-resolver-typescript@3.10.1(eslint-plugin-import@2.32.0(@typescript-eslint/parser@8.52.0(eslint@8.57.1)(typescript@5.9.3))(eslint@8.57.1))(eslint@8.57.1):
eslint-import-resolver-typescript@3.10.1(eslint-plugin-import@2.32.0)(eslint@8.57.1):
dependencies:
'@nolyfill/is-core-module': 1.0.39
debug: 4.4.3
@@ -12158,22 +12101,22 @@ snapshots:
tinyglobby: 0.2.15
unrs-resolver: 1.11.1
optionalDependencies:
eslint-plugin-import: 2.32.0(@typescript-eslint/parser@8.52.0(eslint@8.57.1)(typescript@5.9.3))(eslint-import-resolver-typescript@3.10.1(eslint-plugin-import@2.32.0(@typescript-eslint/parser@8.52.0(eslint@8.57.1)(typescript@5.9.3))(eslint@8.57.1))(eslint@8.57.1))(eslint@8.57.1)
eslint-plugin-import: 2.32.0(@typescript-eslint/parser@8.52.0(eslint@8.57.1)(typescript@5.9.3))(eslint-import-resolver-typescript@3.10.1)(eslint@8.57.1)
transitivePeerDependencies:
- supports-color
eslint-module-utils@2.12.1(@typescript-eslint/parser@8.52.0(eslint@8.57.1)(typescript@5.9.3))(eslint-import-resolver-node@0.3.9)(eslint-import-resolver-typescript@3.10.1(eslint-plugin-import@2.32.0(@typescript-eslint/parser@8.52.0(eslint@8.57.1)(typescript@5.9.3))(eslint@8.57.1))(eslint@8.57.1))(eslint@8.57.1):
eslint-module-utils@2.12.1(@typescript-eslint/parser@8.52.0(eslint@8.57.1)(typescript@5.9.3))(eslint-import-resolver-node@0.3.9)(eslint-import-resolver-typescript@3.10.1)(eslint@8.57.1):
dependencies:
debug: 3.2.7
optionalDependencies:
'@typescript-eslint/parser': 8.52.0(eslint@8.57.1)(typescript@5.9.3)
eslint: 8.57.1
eslint-import-resolver-node: 0.3.9
eslint-import-resolver-typescript: 3.10.1(eslint-plugin-import@2.32.0(@typescript-eslint/parser@8.52.0(eslint@8.57.1)(typescript@5.9.3))(eslint@8.57.1))(eslint@8.57.1)
eslint-import-resolver-typescript: 3.10.1(eslint-plugin-import@2.32.0)(eslint@8.57.1)
transitivePeerDependencies:
- supports-color
eslint-plugin-import@2.32.0(@typescript-eslint/parser@8.52.0(eslint@8.57.1)(typescript@5.9.3))(eslint-import-resolver-typescript@3.10.1(eslint-plugin-import@2.32.0(@typescript-eslint/parser@8.52.0(eslint@8.57.1)(typescript@5.9.3))(eslint@8.57.1))(eslint@8.57.1))(eslint@8.57.1):
eslint-plugin-import@2.32.0(@typescript-eslint/parser@8.52.0(eslint@8.57.1)(typescript@5.9.3))(eslint-import-resolver-typescript@3.10.1)(eslint@8.57.1):
dependencies:
'@rtsao/scc': 1.1.0
array-includes: 3.1.9
@@ -12184,7 +12127,7 @@ snapshots:
doctrine: 2.1.0
eslint: 8.57.1
eslint-import-resolver-node: 0.3.9
eslint-module-utils: 2.12.1(@typescript-eslint/parser@8.52.0(eslint@8.57.1)(typescript@5.9.3))(eslint-import-resolver-node@0.3.9)(eslint-import-resolver-typescript@3.10.1(eslint-plugin-import@2.32.0(@typescript-eslint/parser@8.52.0(eslint@8.57.1)(typescript@5.9.3))(eslint@8.57.1))(eslint@8.57.1))(eslint@8.57.1)
eslint-module-utils: 2.12.1(@typescript-eslint/parser@8.52.0(eslint@8.57.1)(typescript@5.9.3))(eslint-import-resolver-node@0.3.9)(eslint-import-resolver-typescript@3.10.1)(eslint@8.57.1)
hasown: 2.0.2
is-core-module: 2.16.1
is-glob: 4.0.3
@@ -14259,12 +14202,6 @@ snapshots:
picocolors: 1.1.1
source-map-js: 1.2.1
postcss@8.4.49:
dependencies:
nanoid: 3.3.11
picocolors: 1.1.1
source-map-js: 1.2.1
postcss@8.5.6:
dependencies:
nanoid: 3.3.11
@@ -14386,13 +14323,6 @@ snapshots:
react: 18.3.1
scheduler: 0.23.2
react-drag-drop-files@2.4.0(react-dom@18.3.1(react@18.3.1))(react@18.3.1):
dependencies:
prop-types: 15.8.1
react: 18.3.1
react-dom: 18.3.1(react@18.3.1)
styled-components: 6.2.0(react-dom@18.3.1(react@18.3.1))(react@18.3.1)
react-hook-form@7.66.0(react@18.3.1):
dependencies:
react: 18.3.1
@@ -14886,8 +14816,6 @@ snapshots:
safe-buffer: 5.2.1
to-buffer: 1.2.2
shallowequal@1.1.0: {}
sharp@0.34.5:
dependencies:
'@img/colour': 1.0.0
@@ -15178,20 +15106,6 @@ snapshots:
dependencies:
inline-style-parser: 0.2.7
styled-components@6.2.0(react-dom@18.3.1(react@18.3.1))(react@18.3.1):
dependencies:
'@emotion/is-prop-valid': 1.2.2
'@emotion/unitless': 0.8.1
'@types/stylis': 4.2.7
css-to-react-native: 3.2.0
csstype: 3.2.3
postcss: 8.4.49
react: 18.3.1
react-dom: 18.3.1(react@18.3.1)
shallowequal: 1.1.0
stylis: 4.3.6
tslib: 2.6.2
styled-jsx@5.1.6(@babel/core@7.28.5)(react@18.3.1):
dependencies:
client-only: 0.0.1
@@ -15206,8 +15120,6 @@ snapshots:
optionalDependencies:
'@babel/core': 7.28.5
stylis@4.3.6: {}
sucrase@3.35.1:
dependencies:
'@jridgewell/gen-mapping': 0.3.13
@@ -15390,8 +15302,6 @@ snapshots:
tslib@1.14.1: {}
tslib@2.6.2: {}
tslib@2.8.1: {}
tty-browserify@0.0.1: {}

Binary file not shown (image added, 2.6 KiB).

Binary file not shown (image added, 16 KiB).

View File

@@ -66,6 +66,7 @@ export const RunInputDialog = ({
formContext={{
showHandles: false,
size: "large",
showOptionalToggle: false,
}}
/>
</div>

View File

@@ -66,7 +66,7 @@ export const useRunInputDialog = ({
if (isCredentialFieldSchema(fieldSchema)) {
dynamicUiSchema[fieldName] = {
...dynamicUiSchema[fieldName],
"ui:field": "credentials",
"ui:field": "custom/credential_field",
};
}
});
@@ -76,12 +76,18 @@ export const useRunInputDialog = ({
}, [credentialsSchema]);
const handleManualRun = async () => {
// Filter out incomplete credentials (those without a valid id)
// RJSF auto-populates const values (provider, type) but not the id field
const validCredentials = Object.fromEntries(
Object.entries(credentialValues).filter(([_, cred]) => cred && cred.id),
);
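// Illustrative shapes (hypothetical values):
//   { provider: "discord", type: "api_key" }               -> dropped (no id yet)
//   { provider: "discord", type: "api_key", id: "abc123" } -> kept and submitted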
await executeGraph({
graphId: flowID ?? "",
graphVersion: flowVersion || null,
data: {
inputs: inputValues,
credentials_inputs: credentialValues,
credentials_inputs: validCredentials,
source: "builder",
},
});

View File

@@ -97,6 +97,9 @@ export const Flow = () => {
onConnect={onConnect}
onEdgesChange={onEdgesChange}
onNodeDragStop={onNodeDragStop}
onNodeContextMenu={(event) => {
event.preventDefault();
}}
maxZoom={2}
minZoom={0.1}
onDragOver={onDragOver}

View File

@@ -1,24 +1,25 @@
import React from "react";
import { Node as XYNode, NodeProps } from "@xyflow/react";
import { RJSFSchema } from "@rjsf/utils";
import { BlockUIType } from "../../../types";
import { StickyNoteBlock } from "./components/StickyNoteBlock";
import { BlockInfoCategoriesItem } from "@/app/api/__generated__/models/blockInfoCategoriesItem";
import { BlockCost } from "@/app/api/__generated__/models/blockCost";
import { AgentExecutionStatus } from "@/app/api/__generated__/models/agentExecutionStatus";
import { BlockCost } from "@/app/api/__generated__/models/blockCost";
import { BlockInfoCategoriesItem } from "@/app/api/__generated__/models/blockInfoCategoriesItem";
import { NodeExecutionResult } from "@/app/api/__generated__/models/nodeExecutionResult";
import { NodeContainer } from "./components/NodeContainer";
import { NodeHeader } from "./components/NodeHeader";
import { FormCreator } from "../FormCreator";
import { preprocessInputSchema } from "@/components/renderers/InputRenderer/utils/input-schema-pre-processor";
import { OutputHandler } from "../OutputHandler";
import { NodeAdvancedToggle } from "./components/NodeAdvancedToggle";
import { NodeDataRenderer } from "./components/NodeOutput/NodeOutput";
import { NodeExecutionBadge } from "./components/NodeExecutionBadge";
import { cn } from "@/lib/utils";
import { WebhookDisclaimer } from "./components/WebhookDisclaimer";
import { AyrshareConnectButton } from "./components/AyrshareConnectButton";
import { NodeModelMetadata } from "@/app/api/__generated__/models/nodeModelMetadata";
import { preprocessInputSchema } from "@/components/renderers/InputRenderer/utils/input-schema-pre-processor";
import { cn } from "@/lib/utils";
import { RJSFSchema } from "@rjsf/utils";
import { NodeProps, Node as XYNode } from "@xyflow/react";
import React from "react";
import { BlockUIType } from "../../../types";
import { FormCreator } from "../FormCreator";
import { OutputHandler } from "../OutputHandler";
import { AyrshareConnectButton } from "./components/AyrshareConnectButton";
import { NodeAdvancedToggle } from "./components/NodeAdvancedToggle";
import { NodeContainer } from "./components/NodeContainer";
import { NodeExecutionBadge } from "./components/NodeExecutionBadge";
import { NodeHeader } from "./components/NodeHeader";
import { NodeDataRenderer } from "./components/NodeOutput/NodeOutput";
import { NodeRightClickMenu } from "./components/NodeRightClickMenu";
import { StickyNoteBlock } from "./components/StickyNoteBlock";
import { WebhookDisclaimer } from "./components/WebhookDisclaimer";
export type CustomNodeData = {
hardcodedValues: {
@@ -88,7 +89,7 @@ export const CustomNode: React.FC<NodeProps<CustomNode>> = React.memo(
// Currently all blockType designs are similar - that's why I am using the same component for all of them
// If in the future we need a drastic design change for some blockTypes, we can create separate components for them
return (
const node = (
<NodeContainer selected={selected} nodeId={nodeId} hasErrors={hasErrors}>
<div className="rounded-xlarge bg-white">
<NodeHeader data={data} nodeId={nodeId} />
@@ -117,6 +118,15 @@ export const CustomNode: React.FC<NodeProps<CustomNode>> = React.memo(
<NodeExecutionBadge nodeId={nodeId} />
</NodeContainer>
);
return (
<NodeRightClickMenu
nodeId={nodeId}
subGraphID={data.hardcodedValues?.graph_id}
>
{node}
</NodeRightClickMenu>
);
},
);

View File

@@ -1,26 +1,31 @@
import { Separator } from "@/components/__legacy__/ui/separator";
import { useCopyPasteStore } from "@/app/(platform)/build/stores/copyPasteStore";
import { useNodeStore } from "@/app/(platform)/build/stores/nodeStore";
import {
DropdownMenu,
DropdownMenuContent,
DropdownMenuItem,
DropdownMenuTrigger,
} from "@/components/molecules/DropdownMenu/DropdownMenu";
import { DotsThreeOutlineVerticalIcon } from "@phosphor-icons/react";
import { Copy, Trash2, ExternalLink } from "lucide-react";
import { useNodeStore } from "@/app/(platform)/build/stores/nodeStore";
import { useCopyPasteStore } from "@/app/(platform)/build/stores/copyPasteStore";
import {
SecondaryDropdownMenuContent,
SecondaryDropdownMenuItem,
SecondaryDropdownMenuSeparator,
} from "@/components/molecules/SecondaryMenu/SecondaryMenu";
import {
ArrowSquareOutIcon,
CopyIcon,
DotsThreeOutlineVerticalIcon,
TrashIcon,
} from "@phosphor-icons/react";
import { useReactFlow } from "@xyflow/react";
export const NodeContextMenu = ({
nodeId,
subGraphID,
}: {
type Props = {
nodeId: string;
subGraphID?: string;
}) => {
};
export const NodeContextMenu = ({ nodeId, subGraphID }: Props) => {
const { deleteElements } = useReactFlow();
const handleCopy = () => {
function handleCopy() {
useNodeStore.setState((state) => ({
nodes: state.nodes.map((node) => ({
...node,
@@ -30,47 +35,47 @@ export const NodeContextMenu = ({
useCopyPasteStore.getState().copySelectedNodes();
useCopyPasteStore.getState().pasteNodes();
};
}
const handleDelete = () => {
function handleDelete() {
deleteElements({ nodes: [{ id: nodeId }] });
};
}
return (
<DropdownMenu>
<DropdownMenuTrigger className="py-2">
<DotsThreeOutlineVerticalIcon size={16} weight="fill" />
</DropdownMenuTrigger>
<DropdownMenuContent
side="right"
align="start"
className="rounded-xlarge"
>
<DropdownMenuItem onClick={handleCopy} className="hover:rounded-xlarge">
<Copy className="mr-2 h-4 w-4" />
Copy Node
</DropdownMenuItem>
<SecondaryDropdownMenuContent side="right" align="start">
<SecondaryDropdownMenuItem onClick={handleCopy}>
<CopyIcon size={20} className="mr-2 dark:text-gray-100" />
<span className="dark:text-gray-100">Copy</span>
</SecondaryDropdownMenuItem>
<SecondaryDropdownMenuSeparator />
{subGraphID && (
<DropdownMenuItem
onClick={() => window.open(`/build?flowID=${subGraphID}`)}
className="hover:rounded-xlarge"
>
<ExternalLink className="mr-2 h-4 w-4" />
Open Agent
</DropdownMenuItem>
<>
<SecondaryDropdownMenuItem
onClick={() => window.open(`/build?flowID=${subGraphID}`)}
>
<ArrowSquareOutIcon
size={20}
className="mr-2 dark:text-gray-100"
/>
<span className="dark:text-gray-100">Open agent</span>
</SecondaryDropdownMenuItem>
<SecondaryDropdownMenuSeparator />
</>
)}
<Separator className="my-2" />
<DropdownMenuItem
onClick={handleDelete}
className="text-red-600 hover:rounded-xlarge"
>
<Trash2 className="mr-2 h-4 w-4" />
Delete
</DropdownMenuItem>
</DropdownMenuContent>
<SecondaryDropdownMenuItem variant="destructive" onClick={handleDelete}>
<TrashIcon
size={20}
className="mr-2 text-red-500 dark:text-red-400"
/>
<span className="dark:text-red-400">Delete</span>
</SecondaryDropdownMenuItem>
</SecondaryDropdownMenuContent>
</DropdownMenu>
);
};

View File

@@ -1,25 +1,24 @@
import { Text } from "@/components/atoms/Text/Text";
import { beautifyString, cn } from "@/lib/utils";
import { NodeCost } from "./NodeCost";
import { NodeBadges } from "./NodeBadges";
import { NodeContextMenu } from "./NodeContextMenu";
import { CustomNodeData } from "../CustomNode";
import { useNodeStore } from "@/app/(platform)/build/stores/nodeStore";
import { useState } from "react";
import { Text } from "@/components/atoms/Text/Text";
import {
Tooltip,
TooltipContent,
TooltipProvider,
TooltipTrigger,
} from "@/components/atoms/Tooltip/BaseTooltip";
import { beautifyString, cn } from "@/lib/utils";
import { useState } from "react";
import { CustomNodeData } from "../CustomNode";
import { NodeBadges } from "./NodeBadges";
import { NodeContextMenu } from "./NodeContextMenu";
import { NodeCost } from "./NodeCost";
export const NodeHeader = ({
data,
nodeId,
}: {
type Props = {
data: CustomNodeData;
nodeId: string;
}) => {
};
export const NodeHeader = ({ data, nodeId }: Props) => {
const updateNodeData = useNodeStore((state) => state.updateNodeData);
const title = (data.metadata?.customized_name as string) || data.title;
const [isEditingTitle, setIsEditingTitle] = useState(false);

View File

@@ -151,7 +151,7 @@ export const NodeDataViewer: FC<NodeDataViewerProps> = ({
</div>
<div className="flex justify-end pt-4">
{outputItems.length > 0 && (
{outputItems.length > 1 && (
<OutputActions
items={outputItems.map((item) => ({
value: item.value,

View File

@@ -0,0 +1,104 @@
import { useCopyPasteStore } from "@/app/(platform)/build/stores/copyPasteStore";
import { useNodeStore } from "@/app/(platform)/build/stores/nodeStore";
import {
SecondaryMenuContent,
SecondaryMenuItem,
SecondaryMenuSeparator,
} from "@/components/molecules/SecondaryMenu/SecondaryMenu";
import { ArrowSquareOutIcon, CopyIcon, TrashIcon } from "@phosphor-icons/react";
import * as ContextMenu from "@radix-ui/react-context-menu";
import { useReactFlow } from "@xyflow/react";
import { useEffect, useRef } from "react";
import { CustomNode } from "../CustomNode";
type Props = {
nodeId: string;
subGraphID?: string;
children: React.ReactNode;
};
const DOUBLE_CLICK_TIMEOUT = 300;
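// Assumption based on the handler below: a second right-click within this window
// is stopped before Radix's context-menu trigger sees it, which presumably lets
// the browser's native context menu appear instead of this custom one.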
export function NodeRightClickMenu({ nodeId, subGraphID, children }: Props) {
const { deleteElements } = useReactFlow<CustomNode>();
const lastRightClickTime = useRef<number>(0);
const containerRef = useRef<HTMLDivElement>(null);
function copyNode() {
useNodeStore.setState((state) => ({
nodes: state.nodes.map((node) => ({
...node,
selected: node.id === nodeId,
})),
}));
useCopyPasteStore.getState().copySelectedNodes();
useCopyPasteStore.getState().pasteNodes();
}
function deleteNode() {
deleteElements({ nodes: [{ id: nodeId }] });
}
useEffect(() => {
const container = containerRef.current;
if (!container) return;
function handleContextMenu(e: MouseEvent) {
const now = Date.now();
const timeSinceLastClick = now - lastRightClickTime.current;
if (timeSinceLastClick < DOUBLE_CLICK_TIMEOUT) {
e.stopImmediatePropagation();
lastRightClickTime.current = 0;
return;
}
lastRightClickTime.current = now;
}
container.addEventListener("contextmenu", handleContextMenu, true);
return () => {
container.removeEventListener("contextmenu", handleContextMenu, true);
};
}, []);
return (
<ContextMenu.Root>
<ContextMenu.Trigger asChild>
<div ref={containerRef}>{children}</div>
</ContextMenu.Trigger>
<SecondaryMenuContent>
<SecondaryMenuItem onSelect={copyNode}>
<CopyIcon size={20} className="mr-2 dark:text-gray-100" />
<span className="dark:text-gray-100">Copy</span>
</SecondaryMenuItem>
<SecondaryMenuSeparator />
{subGraphID && (
<>
<SecondaryMenuItem
onClick={() => window.open(`/build?flowID=${subGraphID}`)}
>
<ArrowSquareOutIcon
size={20}
className="mr-2 dark:text-gray-100"
/>
<span className="dark:text-gray-100">Open agent</span>
</SecondaryMenuItem>
<SecondaryMenuSeparator />
</>
)}
<SecondaryMenuItem variant="destructive" onSelect={deleteNode}>
<TrashIcon
size={20}
className="mr-2 text-red-500 dark:text-red-400"
/>
<span className="dark:text-red-400">Delete</span>
</SecondaryMenuItem>
</SecondaryMenuContent>
</ContextMenu.Root>
);
}

View File

@@ -1,6 +1,6 @@
export const uiSchema = {
credentials: {
"ui:field": "credentials",
"ui:field": "custom/credential_field",
provider: { "ui:widget": "hidden" },
type: { "ui:widget": "hidden" },
id: { "ui:autofocus": true },

View File

@@ -0,0 +1,57 @@
import { useBlockMenuStore } from "@/app/(platform)/build/stores/blockMenuStore";
import { FilterChip } from "../FilterChip";
import { categories } from "./constants";
import { FilterSheet } from "../FilterSheet/FilterSheet";
import { GetV2BuilderSearchFilterAnyOfItem } from "@/app/api/__generated__/models/getV2BuilderSearchFilterAnyOfItem";
export const BlockMenuFilters = () => {
const {
filters,
addFilter,
removeFilter,
categoryCounts,
creators,
addCreator,
removeCreator,
} = useBlockMenuStore();
const handleFilterClick = (filter: GetV2BuilderSearchFilterAnyOfItem) => {
if (filters.includes(filter)) {
removeFilter(filter);
} else {
addFilter(filter);
}
};
const handleCreatorClick = (creator: string) => {
if (creators.includes(creator)) {
removeCreator(creator);
} else {
addCreator(creator);
}
};
return (
<div className="flex flex-wrap gap-2">
<FilterSheet categories={categories} />
{creators.length > 0 &&
creators.map((creator) => (
<FilterChip
key={creator}
name={"Created by " + creator.slice(0, 10) + "..."}
selected={creators.includes(creator)}
onClick={() => handleCreatorClick(creator)}
/>
))}
{categories.map((category) => (
<FilterChip
key={category.key}
name={category.name}
selected={filters.includes(category.key)}
onClick={() => handleFilterClick(category.key)}
number={categoryCounts[category.key] ?? 0}
/>
))}
</div>
);
};

View File

@@ -0,0 +1,15 @@
import { GetV2BuilderSearchFilterAnyOfItem } from "@/app/api/__generated__/models/getV2BuilderSearchFilterAnyOfItem";
import { CategoryKey } from "./types";
export const categories: Array<{ key: CategoryKey; name: string }> = [
{ key: GetV2BuilderSearchFilterAnyOfItem.blocks, name: "Blocks" },
{
key: GetV2BuilderSearchFilterAnyOfItem.integrations,
name: "Integrations",
},
{
key: GetV2BuilderSearchFilterAnyOfItem.marketplace_agents,
name: "Marketplace agents",
},
{ key: GetV2BuilderSearchFilterAnyOfItem.my_agents, name: "My agents" },
];

View File

@@ -0,0 +1,26 @@
import { GetV2BuilderSearchFilterAnyOfItem } from "@/app/api/__generated__/models/getV2BuilderSearchFilterAnyOfItem";
export type DefaultStateType =
| "suggestion"
| "all_blocks"
| "input_blocks"
| "action_blocks"
| "output_blocks"
| "integrations"
| "marketplace_agents"
| "my_agents";
export type CategoryKey = GetV2BuilderSearchFilterAnyOfItem;
export interface Filters {
categories: {
blocks: boolean;
integrations: boolean;
marketplace_agents: boolean;
my_agents: boolean;
providers: boolean;
};
createdBy: string[];
}
export type CategoryCounts = Record<CategoryKey, number>;

View File

@@ -1,111 +1,14 @@
import { Text } from "@/components/atoms/Text/Text";
import { useBlockMenuSearch } from "./useBlockMenuSearch";
import { InfiniteScroll } from "@/components/contextual/InfiniteScroll/InfiniteScroll";
import { LoadingSpinner } from "@/components/__legacy__/ui/loading";
import { SearchResponseItemsItem } from "@/app/api/__generated__/models/searchResponseItemsItem";
import { MarketplaceAgentBlock } from "../MarketplaceAgentBlock";
import { Block } from "../Block";
import { UGCAgentBlock } from "../UGCAgentBlock";
import { getSearchItemType } from "./helper";
import { useBlockMenuStore } from "../../../../stores/blockMenuStore";
import { blockMenuContainerStyle } from "../style";
import { cn } from "@/lib/utils";
import { NoSearchResult } from "../NoSearchResult";
import { BlockMenuFilters } from "../BlockMenuFilters/BlockMenuFilters";
import { BlockMenuSearchContent } from "../BlockMenuSearchContent/BlockMenuSearchContent";
export const BlockMenuSearch = () => {
const {
searchResults,
isFetchingNextPage,
fetchNextPage,
hasNextPage,
searchLoading,
handleAddLibraryAgent,
handleAddMarketplaceAgent,
addingLibraryAgentId,
addingMarketplaceAgentSlug,
} = useBlockMenuSearch();
const { searchQuery } = useBlockMenuStore();
if (searchLoading) {
return (
<div
className={cn(
blockMenuContainerStyle,
"flex items-center justify-center",
)}
>
<LoadingSpinner className="size-13" />
</div>
);
}
if (searchResults.length === 0) {
return <NoSearchResult />;
}
return (
<div className={blockMenuContainerStyle}>
<BlockMenuFilters />
<Text variant="body-medium">Search results</Text>
<InfiniteScroll
isFetchingNextPage={isFetchingNextPage}
fetchNextPage={fetchNextPage}
hasNextPage={hasNextPage}
loader={<LoadingSpinner className="size-13" />}
className="space-y-2.5"
>
{searchResults.map((item: SearchResponseItemsItem, index: number) => {
const { type, data } = getSearchItemType(item);
// the backend supports only these 3 types right now - we need to add support for integration and AI agent types in follow-up PRs
switch (type) {
case "store_agent":
return (
<MarketplaceAgentBlock
key={index}
slug={data.slug}
highlightedText={searchQuery}
title={data.agent_name}
image_url={data.agent_image}
creator_name={data.creator}
number_of_runs={data.runs}
loading={addingMarketplaceAgentSlug === data.slug}
onClick={() =>
handleAddMarketplaceAgent({
creator_name: data.creator,
slug: data.slug,
})
}
/>
);
case "block":
return (
<Block
key={index}
title={data.name}
highlightedText={searchQuery}
description={data.description}
blockData={data}
/>
);
case "library_agent":
return (
<UGCAgentBlock
key={index}
title={data.name}
highlightedText={searchQuery}
image_url={data.image_url}
version={data.graph_version}
edited_time={data.updated_at}
isLoading={addingLibraryAgentId === data.id}
onClick={() => handleAddLibraryAgent(data)}
/>
);
default:
return null;
}
})}
</InfiniteScroll>
<BlockMenuSearchContent />
</div>
);
};

View File

@@ -0,0 +1,108 @@
import { SearchResponseItemsItem } from "@/app/api/__generated__/models/searchResponseItemsItem";
import { LoadingSpinner } from "@/components/atoms/LoadingSpinner/LoadingSpinner";
import { InfiniteScroll } from "@/components/contextual/InfiniteScroll/InfiniteScroll";
import { getSearchItemType } from "./helper";
import { MarketplaceAgentBlock } from "../MarketplaceAgentBlock";
import { Block } from "../Block";
import { UGCAgentBlock } from "../UGCAgentBlock";
import { useBlockMenuSearchContent } from "./useBlockMenuSearchContent";
import { useBlockMenuStore } from "@/app/(platform)/build/stores/blockMenuStore";
import { cn } from "@/lib/utils";
import { blockMenuContainerStyle } from "../style";
import { NoSearchResult } from "../NoSearchResult";
export const BlockMenuSearchContent = () => {
const {
searchResults,
isFetchingNextPage,
fetchNextPage,
hasNextPage,
searchLoading,
handleAddLibraryAgent,
handleAddMarketplaceAgent,
addingLibraryAgentId,
addingMarketplaceAgentSlug,
} = useBlockMenuSearchContent();
const { searchQuery } = useBlockMenuStore();
if (searchLoading) {
return (
<div
className={cn(
blockMenuContainerStyle,
"flex items-center justify-center",
)}
>
<LoadingSpinner className="size-13" />
</div>
);
}
if (searchResults.length === 0) {
return <NoSearchResult />;
}
return (
<InfiniteScroll
isFetchingNextPage={isFetchingNextPage}
fetchNextPage={fetchNextPage}
hasNextPage={hasNextPage}
loader={<LoadingSpinner className="size-13" />}
className="space-y-2.5"
>
{searchResults.map((item: SearchResponseItemsItem, index: number) => {
const { type, data } = getSearchItemType(item);
// the backend supports only these 3 types right now - we need to add support for integration and AI agent types in follow-up PRs
switch (type) {
case "store_agent":
return (
<MarketplaceAgentBlock
key={index}
slug={data.slug}
highlightedText={searchQuery}
title={data.agent_name}
image_url={data.agent_image}
creator_name={data.creator}
number_of_runs={data.runs}
loading={addingMarketplaceAgentSlug === data.slug}
onClick={() =>
handleAddMarketplaceAgent({
creator_name: data.creator,
slug: data.slug,
})
}
/>
);
case "block":
return (
<Block
key={index}
title={data.name}
highlightedText={searchQuery}
description={data.description}
blockData={data}
/>
);
case "library_agent":
return (
<UGCAgentBlock
key={index}
title={data.name}
highlightedText={searchQuery}
image_url={data.image_url}
version={data.graph_version}
edited_time={data.updated_at}
isLoading={addingLibraryAgentId === data.id}
onClick={() => handleAddLibraryAgent(data)}
/>
);
default:
return null;
}
})}
</InfiniteScroll>
);
};

View File

@@ -23,9 +23,19 @@ import { LibraryAgent } from "@/app/api/__generated__/models/libraryAgent";
import { getQueryClient } from "@/lib/react-query/queryClient";
import { useToast } from "@/components/molecules/Toast/use-toast";
import * as Sentry from "@sentry/nextjs";
import { GetV2BuilderSearchFilterAnyOfItem } from "@/app/api/__generated__/models/getV2BuilderSearchFilterAnyOfItem";
export const useBlockMenuSearchContent = () => {
const {
searchQuery,
searchId,
setSearchId,
filters,
setCreatorsList,
creators,
setCategoryCounts,
} = useBlockMenuStore();
export const useBlockMenuSearch = () => {
const { searchQuery, searchId, setSearchId } = useBlockMenuStore();
const { toast } = useToast();
const { addAgentToBuilder, addLibraryAgentToBuilder } =
useAddAgentToBuilder();
@@ -57,6 +67,8 @@ export const useBlockMenuSearch = () => {
page_size: 8,
search_query: searchQuery,
search_id: searchId,
filter: filters.length > 0 ? filters : undefined,
by_creator: creators.length > 0 ? creators : undefined,
},
{
query: { getNextPageParam: getPaginationNextPageNumber },
@@ -98,6 +110,26 @@ export const useBlockMenuSearch = () => {
}
}, [searchQueryData, searchId, setSearchId]);
// from the latest page of results, update the category counts and the list of unique creators
useEffect(() => {
if (!searchQueryData?.pages?.length) {
return;
}
const latestData = okData(searchQueryData.pages.at(-1));
setCategoryCounts(
(latestData?.total_items as Record<
GetV2BuilderSearchFilterAnyOfItem,
number
>) || {
blocks: 0,
integrations: 0,
marketplace_agents: 0,
my_agents: 0,
},
);
setCreatorsList(latestData?.items || []);
}, [searchQueryData]);
useEffect(() => {
if (searchId && !searchQuery) {
resetSearchSession();

View File

@@ -1,7 +1,9 @@
import { Button } from "@/components/__legacy__/ui/button";
import { cn } from "@/lib/utils";
import { X } from "lucide-react";
import React, { ButtonHTMLAttributes } from "react";
import { XIcon } from "@phosphor-icons/react";
import { AnimatePresence, motion } from "framer-motion";
import React, { ButtonHTMLAttributes, useState } from "react";
interface Props extends ButtonHTMLAttributes<HTMLButtonElement> {
selected?: boolean;
@@ -16,39 +18,51 @@ export const FilterChip: React.FC<Props> = ({
className,
...rest
}) => {
const [isHovered, setIsHovered] = useState(false);
return (
<Button
className={cn(
"group w-fit space-x-1 rounded-[1.5rem] border border-zinc-300 bg-transparent px-[0.625rem] py-[0.375rem] shadow-none transition-transform duration-300 ease-in-out",
"hover:border-violet-500 hover:bg-transparent focus:ring-0 disabled:cursor-not-allowed",
selected && "border-0 bg-violet-700 hover:border",
className,
)}
{...rest}
>
<span
<AnimatePresence mode="wait">
<Button
onMouseEnter={() => setIsHovered(true)}
onMouseLeave={() => setIsHovered(false)}
className={cn(
"font-sans text-sm font-medium leading-[1.375rem] text-zinc-600 group-hover:text-zinc-600 group-disabled:text-zinc-400",
selected && "text-zinc-50",
"group w-fit space-x-1 rounded-[1.5rem] border border-zinc-300 bg-transparent px-[0.625rem] py-[0.375rem] shadow-none",
"hover:border-violet-500 hover:bg-transparent focus:ring-0 disabled:cursor-not-allowed",
selected && "border-0 bg-violet-700 hover:border",
className,
)}
{...rest}
>
{name}
</span>
{selected && (
<>
<span className="flex h-4 w-4 items-center justify-center rounded-full bg-zinc-50 transition-all duration-300 ease-in-out group-hover:hidden">
<X
className="h-3 w-3 rounded-full text-violet-700"
strokeWidth={2}
/>
</span>
{number !== undefined && (
<span className="hidden h-[1.375rem] items-center rounded-[1.25rem] bg-violet-700 p-[0.375rem] text-zinc-50 transition-all duration-300 ease-in-out animate-in fade-in zoom-in group-hover:flex">
{number > 100 ? "100+" : number}
</span>
<span
className={cn(
"font-sans text-sm font-medium leading-[1.375rem] text-zinc-600 group-hover:text-zinc-600 group-disabled:text-zinc-400",
selected && "text-zinc-50",
)}
</>
)}
</Button>
>
{name}
</span>
{selected && !isHovered && (
<motion.span
initial={{ opacity: 0.5, scale: 0.5, filter: "blur(20px)" }}
animate={{ opacity: 1, scale: 1, filter: "blur(0px)" }}
exit={{ opacity: 0.5, scale: 0.5, filter: "blur(20px)" }}
transition={{ duration: 0.3, type: "spring", bounce: 0.2 }}
className="flex h-4 w-4 items-center justify-center rounded-full bg-zinc-50"
>
<XIcon size={12} weight="bold" className="text-violet-700" />
</motion.span>
)}
{number !== undefined && isHovered && (
<motion.span
initial={{ opacity: 0.5, scale: 0.5, filter: "blur(10px)" }}
animate={{ opacity: 1, scale: 1, filter: "blur(0px)" }}
exit={{ opacity: 0.5, scale: 0.5, filter: "blur(10px)" }}
transition={{ duration: 0.3, type: "spring", bounce: 0.2 }}
className="flex h-[1.375rem] items-center rounded-[1.25rem] bg-violet-700 p-[0.375rem] text-zinc-50"
>
{number > 100 ? "100+" : number}
</motion.span>
)}
</Button>
</AnimatePresence>
);
};

View File

@@ -0,0 +1,156 @@
import { FilterChip } from "../FilterChip";
import { cn } from "@/lib/utils";
import { CategoryKey } from "../BlockMenuFilters/types";
import { AnimatePresence, motion } from "framer-motion";
import { XIcon } from "@phosphor-icons/react";
import { Button } from "@/components/atoms/Button/Button";
import { Text } from "@/components/atoms/Text/Text";
import { Separator } from "@/components/__legacy__/ui/separator";
import { Checkbox } from "@/components/__legacy__/ui/checkbox";
import { useFilterSheet } from "./useFilterSheet";
import { INITIAL_CREATORS_TO_SHOW } from "./constant";
export function FilterSheet({
categories,
}: {
categories: Array<{ key: CategoryKey; name: string }>;
}) {
const {
isOpen,
localCategories,
localCreators,
displayedCreatorsCount,
handleLocalCategoryChange,
handleToggleShowMoreCreators,
handleLocalCreatorChange,
handleClearFilters,
handleCloseButton,
handleApplyFilters,
hasLocalActiveFilters,
visibleCreators,
creators,
handleOpenFilters,
hasActiveFilters,
} = useFilterSheet();
return (
<div className="m-0 inline w-fit p-0">
<FilterChip
name={hasActiveFilters() ? "Edit filters" : "All filters"}
onClick={handleOpenFilters}
/>
<AnimatePresence>
{isOpen && (
<motion.div
className={cn(
"absolute bottom-2 left-2 top-2 z-20 w-3/4 max-w-[22.5rem] space-y-4 overflow-hidden rounded-[0.75rem] bg-white pb-4 shadow-[0_4px_12px_2px_rgba(0,0,0,0.1)]",
)}
initial={{ x: "-100%", filter: "blur(10px)" }}
animate={{ x: 0, filter: "blur(0px)" }}
exit={{ x: "-110%", filter: "blur(10px)" }}
transition={{ duration: 0.4, type: "spring", bounce: 0.2 }}
>
{/* Top section */}
<div className="flex items-center justify-between px-5 pt-4">
<Text variant="body">Filters</Text>
<Button
className="p-0"
variant="ghost"
size="icon"
onClick={handleCloseButton}
>
<XIcon size={20} />
</Button>
</div>
<Separator className="h-[1px] w-full text-zinc-300" />
{/* Category section */}
<div className="space-y-4 px-5">
<Text variant="large">Categories</Text>
<div className="space-y-2">
{categories.map((category) => (
<div
key={category.key}
className="flex items-center space-x-2"
>
<Checkbox
id={category.key}
checked={localCategories.includes(category.key)}
onCheckedChange={() =>
handleLocalCategoryChange(category.key)
}
className="border border-[#D4D4D4] shadow-none data-[state=checked]:border-none data-[state=checked]:bg-violet-700 data-[state=checked]:text-white"
/>
<label
htmlFor={category.key}
className="font-sans text-sm leading-[1.375rem] text-zinc-600"
>
{category.name}
</label>
</div>
))}
</div>
</div>
{/* Created by section */}
<div className="space-y-4 px-5">
<p className="font-sans text-base font-medium text-zinc-800">
Created by
</p>
<div className="space-y-2">
{visibleCreators.map((creator, i) => (
<div key={i} className="flex items-center space-x-2">
<Checkbox
id={`creator-${creator}`}
checked={localCreators.includes(creator)}
onCheckedChange={() => handleLocalCreatorChange(creator)}
className="border border-[#D4D4D4] shadow-none data-[state=checked]:border-none data-[state=checked]:bg-violet-700 data-[state=checked]:text-white"
/>
<label
htmlFor={`creator-${creator}`}
className="font-sans text-sm leading-[1.375rem] text-zinc-600"
>
{creator}
</label>
</div>
))}
</div>
{creators.length > INITIAL_CREATORS_TO_SHOW && (
<Button
variant={"link"}
className="m-0 p-0 font-sans text-sm font-medium leading-[1.375rem] text-zinc-800 underline hover:text-zinc-600"
onClick={handleToggleShowMoreCreators}
>
{displayedCreatorsCount < creators.length ? "More" : "Less"}
</Button>
)}
</div>
{/* Footer section */}
<div className="fixed bottom-0 flex w-full justify-between gap-3 border-t border-zinc-200 bg-white px-5 py-3">
<Button
size="small"
variant={"outline"}
onClick={handleClearFilters}
className="rounded-[8px] px-2 py-1.5"
>
Clear
</Button>
<Button
size="small"
onClick={handleApplyFilters}
disabled={!hasLocalActiveFilters()}
className="rounded-[8px] px-2 py-1.5"
>
Apply filters
</Button>
</div>
</motion.div>
)}
</AnimatePresence>
</div>
);
}

View File

@@ -0,0 +1 @@
export const INITIAL_CREATORS_TO_SHOW = 5;

View File

@@ -0,0 +1,100 @@
import { useBlockMenuStore } from "@/app/(platform)/build/stores/blockMenuStore";
import { useState } from "react";
import { INITIAL_CREATORS_TO_SHOW } from "./constant";
import { GetV2BuilderSearchFilterAnyOfItem } from "@/app/api/__generated__/models/getV2BuilderSearchFilterAnyOfItem";
export const useFilterSheet = () => {
const { filters, creators_list, creators, setFilters, setCreators } =
useBlockMenuStore();
const [isOpen, setIsOpen] = useState(false);
const [localCategories, setLocalCategories] =
useState<GetV2BuilderSearchFilterAnyOfItem[]>(filters);
const [localCreators, setLocalCreators] = useState<string[]>(creators);
const [displayedCreatorsCount, setDisplayedCreatorsCount] = useState(
INITIAL_CREATORS_TO_SHOW,
);
const handleLocalCategoryChange = (
category: GetV2BuilderSearchFilterAnyOfItem,
) => {
setLocalCategories((prev) => {
if (prev.includes(category)) {
return prev.filter((c) => c !== category);
}
return [...prev, category];
});
};
const hasActiveFilters = () => {
return filters.length > 0 || creators.length > 0;
};
const handleToggleShowMoreCreators = () => {
if (displayedCreatorsCount < creators.length) {
setDisplayedCreatorsCount(creators.length);
} else {
setDisplayedCreatorsCount(INITIAL_CREATORS_TO_SHOW);
}
};
const handleLocalCreatorChange = (creator: string) => {
setLocalCreators((prev) => {
if (prev.includes(creator)) {
return prev.filter((c) => c !== creator);
}
return [...prev, creator];
});
};
const handleClearFilters = () => {
setLocalCategories([]);
setLocalCreators([]);
setDisplayedCreatorsCount(INITIAL_CREATORS_TO_SHOW);
};
const handleCloseButton = () => {
setIsOpen(false);
setLocalCategories(filters);
setLocalCreators(creators);
setDisplayedCreatorsCount(INITIAL_CREATORS_TO_SHOW);
};
const handleApplyFilters = () => {
setFilters(localCategories);
setCreators(localCreators);
setIsOpen(false);
};
const handleOpenFilters = () => {
setIsOpen(true);
setLocalCategories(filters);
setLocalCreators(creators);
};
const hasLocalActiveFilters = () => {
return localCategories.length > 0 || localCreators.length > 0;
};
const visibleCreators = creators_list.slice(0, displayedCreatorsCount);
return {
creators,
isOpen,
setIsOpen,
localCategories,
localCreators,
displayedCreatorsCount,
setDisplayedCreatorsCount,
handleLocalCategoryChange,
handleToggleShowMoreCreators,
handleLocalCreatorChange,
handleClearFilters,
handleCloseButton,
handleOpenFilters,
handleApplyFilters,
hasLocalActiveFilters,
visibleCreators,
hasActiveFilters,
};
};
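
The hook above keeps a local draft of the filters (localCategories, localCreators) that only reaches the store when "Apply filters" is clicked and is re-synced from the store when the sheet is opened or closed. A minimal sketch of that draft-and-commit pattern in isolation — the useDraft name and shape are illustrative, not part of this PR:

import { useState } from "react";

// Draft-and-commit: edits accumulate in local state and only reach the
// real store when commit() is called; reset() re-syncs from the applied value.
export function useDraft<T>(applied: T, onApply: (next: T) => void) {
  const [draft, setDraft] = useState<T>(applied);

  const commit = () => onApply(draft); // "Apply filters"
  const reset = () => setDraft(applied); // close button behaviour

  return { draft, setDraft, commit, reset };
}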

View File

@@ -1,12 +1,30 @@
import { create } from "zustand";
import { DefaultStateType } from "../components/NewControlPanel/NewBlockMenu/types";
import { SearchResponseItemsItem } from "@/app/api/__generated__/models/searchResponseItemsItem";
import { getSearchItemType } from "../components/NewControlPanel/NewBlockMenu/BlockMenuSearchContent/helper";
import { StoreAgent } from "@/app/api/__generated__/models/storeAgent";
import { GetV2BuilderSearchFilterAnyOfItem } from "@/app/api/__generated__/models/getV2BuilderSearchFilterAnyOfItem";
type BlockMenuStore = {
searchQuery: string;
searchId: string | undefined;
defaultState: DefaultStateType;
integration: string | undefined;
filters: GetV2BuilderSearchFilterAnyOfItem[];
creators: string[];
creators_list: string[];
categoryCounts: Record<GetV2BuilderSearchFilterAnyOfItem, number>;
setCategoryCounts: (
counts: Record<GetV2BuilderSearchFilterAnyOfItem, number>,
) => void;
setCreatorsList: (searchData: SearchResponseItemsItem[]) => void;
addCreator: (creator: string) => void;
setCreators: (creators: string[]) => void;
removeCreator: (creator: string) => void;
addFilter: (filter: GetV2BuilderSearchFilterAnyOfItem) => void;
setFilters: (filters: GetV2BuilderSearchFilterAnyOfItem[]) => void;
removeFilter: (filter: GetV2BuilderSearchFilterAnyOfItem) => void;
setSearchQuery: (query: string) => void;
setSearchId: (id: string | undefined) => void;
setDefaultState: (state: DefaultStateType) => void;
@@ -19,11 +37,44 @@ export const useBlockMenuStore = create<BlockMenuStore>((set) => ({
searchId: undefined,
defaultState: DefaultStateType.SUGGESTION,
integration: undefined,
filters: [],
creators: [], // creator filters that are applied to the search results
creators_list: [], // all creators that are available to filter by
categoryCounts: {
blocks: 0,
integrations: 0,
marketplace_agents: 0,
my_agents: 0,
},
setCategoryCounts: (counts) => set({ categoryCounts: counts }),
setCreatorsList: (searchData) => {
const marketplaceAgents = searchData.filter((item) => {
return getSearchItemType(item).type === "store_agent";
}) as StoreAgent[];
const newCreators = marketplaceAgents.map((agent) => agent.creator);
set((state) => ({
creators_list: Array.from(
new Set([...state.creators_list, ...newCreators]),
),
}));
},
setCreators: (creators) => set({ creators }),
setFilters: (filters) => set({ filters }),
setSearchQuery: (query) => set({ searchQuery: query }),
setSearchId: (id) => set({ searchId: id }),
setDefaultState: (state) => set({ defaultState: state }),
setIntegration: (integration) => set({ integration }),
addFilter: (filter) =>
set((state) => ({ filters: [...state.filters, filter] })),
removeFilter: (filter) =>
set((state) => ({ filters: state.filters.filter((f) => f !== filter) })),
addCreator: (creator) =>
set((state) => ({ creators: [...state.creators, creator] })),
removeCreator: (creator) =>
set((state) => ({ creators: state.creators.filter((c) => c !== creator) })),
reset: () =>
set({
searchQuery: "",
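
For reference, the new filters / creators / creators_list fields are plain zustand state, so components can subscribe to slices of them with selectors. A hypothetical helper, not part of this PR:

import { useBlockMenuStore } from "@/app/(platform)/build/stores/blockMenuStore";

// Hypothetical read-only hook: how many filters are currently applied.
// It only illustrates how the new fields compose via zustand selectors.
export function useActiveFilterCount(): number {
  const filters = useBlockMenuStore((s) => s.filters);
  const creators = useBlockMenuStore((s) => s.creators);
  return filters.length + creators.length;
}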

View File

@@ -68,6 +68,9 @@ type NodeStore = {
clearAllNodeErrors: () => void; // Add this
syncHardcodedValuesWithHandleIds: (nodeId: string) => void;
// Credentials optional helpers
setCredentialsOptional: (nodeId: string, optional: boolean) => void;
};
export const useNodeStore = create<NodeStore>((set, get) => ({
@@ -226,6 +229,9 @@ export const useNodeStore = create<NodeStore>((set, get) => ({
...(node.data.metadata?.customized_name !== undefined && {
customized_name: node.data.metadata.customized_name,
}),
...(node.data.metadata?.credentials_optional !== undefined && {
credentials_optional: node.data.metadata.credentials_optional,
}),
},
};
},
@@ -342,4 +348,30 @@ export const useNodeStore = create<NodeStore>((set, get) => ({
}));
}
},
setCredentialsOptional: (nodeId: string, optional: boolean) => {
set((state) => ({
nodes: state.nodes.map((n) =>
n.id === nodeId
? {
...n,
data: {
...n.data,
metadata: {
...n.data.metadata,
credentials_optional: optional,
},
},
}
: n,
),
}));
const newState = {
nodes: get().nodes,
edges: useEdgeStore.getState().edges,
};
useHistoryStore.getState().pushState(newState);
},
}));
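
setCredentialsOptional writes the flag into the node's metadata and records an undo/redo snapshot in one step, so callers only need to flip the flag. A hypothetical caller (the store's import path is assumed; it is not shown in this diff):

import { useNodeStore } from "@/app/(platform)/build/stores/nodeStore"; // path assumed

// Hypothetical handler for a per-node "credentials optional" toggle.
function handleOptionalToggle(nodeId: string, checked: boolean) {
  useNodeStore.getState().setCredentialsOptional(nodeId, checked);
}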

View File

@@ -34,6 +34,7 @@ type Props = {
onSelectCredentials: (newValue?: CredentialsMetaInput) => void;
onLoaded?: (loaded: boolean) => void;
readOnly?: boolean;
isOptional?: boolean;
showTitle?: boolean;
};
@@ -45,6 +46,7 @@ export function CredentialsInput({
siblingInputs,
onLoaded,
readOnly = false,
isOptional = false,
showTitle = true,
}: Props) {
const hookData = useCredentialsInput({
@@ -54,6 +56,7 @@ export function CredentialsInput({
siblingInputs,
onLoaded,
readOnly,
isOptional,
});
if (!isLoaded(hookData)) {
@@ -94,7 +97,14 @@ export function CredentialsInput({
<div className={cn("mb-6", className)}>
{showTitle && (
<div className="mb-2 flex items-center gap-2">
<Text variant="large-medium">{displayName} credentials</Text>
<Text variant="large-medium">
{displayName} credentials
{isOptional && (
<span className="ml-1 text-sm font-normal text-gray-500">
(optional)
</span>
)}
</Text>
{schema.description && (
<InformationTooltip description={schema.description} />
)}
@@ -103,14 +113,16 @@ export function CredentialsInput({
{hasCredentialsToShow ? (
<>
{credentialsToShow.length > 1 && !readOnly ? (
{(credentialsToShow.length > 1 || isOptional) && !readOnly ? (
<CredentialsSelect
credentials={credentialsToShow}
provider={provider}
displayName={displayName}
selectedCredentials={selectedCredential}
onSelectCredential={handleCredentialSelect}
onClearCredential={() => onSelectCredential(undefined)}
readOnly={readOnly}
allowNone={isOptional}
/>
) : (
<div className="mb-4 space-y-2">

View File

@@ -23,7 +23,9 @@ interface Props {
displayName: string;
selectedCredentials?: CredentialsMetaInput;
onSelectCredential: (credentialId: string) => void;
onClearCredential?: () => void;
readOnly?: boolean;
allowNone?: boolean;
}
export function CredentialsSelect({
@@ -32,20 +34,30 @@ export function CredentialsSelect({
displayName,
selectedCredentials,
onSelectCredential,
onClearCredential,
readOnly = false,
allowNone = true,
}: Props) {
// Auto-select first credential if none is selected
// Auto-select first credential if none is selected (only if allowNone is false)
useEffect(() => {
if (!selectedCredentials && credentials.length > 0) {
if (!allowNone && !selectedCredentials && credentials.length > 0) {
onSelectCredential(credentials[0].id);
}
}, [selectedCredentials, credentials, onSelectCredential]);
}, [allowNone, selectedCredentials, credentials, onSelectCredential]);
const handleValueChange = (value: string) => {
if (value === "__none__") {
onClearCredential?.();
} else {
onSelectCredential(value);
}
};
return (
<div className="mb-4 w-full">
<Select
value={selectedCredentials?.id || ""}
onValueChange={(value) => onSelectCredential(value)}
value={selectedCredentials?.id || (allowNone ? "__none__" : "")}
onValueChange={handleValueChange}
>
<SelectTrigger className="h-auto min-h-12 w-full rounded-medium border-zinc-200 p-0 pr-4 shadow-none">
{selectedCredentials ? (
@@ -70,6 +82,15 @@ export function CredentialsSelect({
)}
</SelectTrigger>
<SelectContent>
{allowNone && (
<SelectItem key="__none__" value="__none__">
<div className="flex items-center gap-2">
<Text variant="body" className="tracking-tight text-gray-500">
None (skip this credential)
</Text>
</div>
</SelectItem>
)}
{credentials.map((credential) => (
<SelectItem key={credential.id} value={credential.id}>
<div className="flex items-center gap-2">

View File

@@ -22,6 +22,7 @@ type Params = {
siblingInputs?: Record<string, any>;
onLoaded?: (loaded: boolean) => void;
readOnly?: boolean;
isOptional?: boolean;
};
export function useCredentialsInput({
@@ -31,6 +32,7 @@ export function useCredentialsInput({
siblingInputs,
onLoaded,
readOnly = false,
isOptional = false,
}: Params) {
const [isAPICredentialsModalOpen, setAPICredentialsModalOpen] =
useState(false);
@@ -99,13 +101,20 @@ export function useCredentialsInput({
: null;
}, [credentials]);
// Auto-select the one available credential
// Auto-select the one available credential (only if not optional)
useEffect(() => {
if (readOnly) return;
if (isOptional) return; // Don't auto-select when credential is optional
if (singleCredential && !selectedCredential) {
onSelectCredential(singleCredential);
}
}, [singleCredential, selectedCredential, onSelectCredential, readOnly]);
}, [
singleCredential,
selectedCredential,
onSelectCredential,
readOnly,
isOptional,
]);
if (
!credentials ||

View File

@@ -8,6 +8,7 @@ import { WebhookTriggerBanner } from "../WebhookTriggerBanner/WebhookTriggerBann
export function ModalRunSection() {
const {
agent,
defaultRunType,
presetName,
setPresetName,
@@ -24,6 +25,11 @@ export function ModalRunSection() {
const inputFields = Object.entries(agentInputFields || {});
const credentialFields = Object.entries(agentCredentialsInputFields || {});
// Get the list of required credentials from the schema
const requiredCredentials = new Set(
(agent.credentials_input_schema?.required as string[]) || [],
);
return (
<div className="flex flex-col gap-4">
{defaultRunType === "automatic-trigger" ||
@@ -99,14 +105,12 @@ export function ModalRunSection() {
schema={
{ ...inputSubSchema, discriminator: undefined } as any
}
selectedCredentials={
(inputCredentials && inputCredentials[key]) ??
inputSubSchema.default
}
selectedCredentials={inputCredentials?.[key]}
onSelectCredentials={(value) =>
setInputCredentialsValue(key, value)
}
siblingInputs={inputValues}
isOptional={!requiredCredentials.has(key)}
/>
),
)}
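
ModalRunSection now derives isOptional per credentials field from the schema's required array rather than assuming every field must be filled. The same check in isolation, as an illustrative sketch (names are not part of the PR):

type CredentialsInputSchema = { required?: string[] };

// A credentials field is optional when it is absent from the schema's
// `required` list.
function isCredentialOptional(
  schema: CredentialsInputSchema | undefined,
  fieldKey: string,
): boolean {
  return !new Set(schema?.required ?? []).has(fieldKey);
}

// isCredentialOptional({ required: ["discord"] }, "slack") === true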

View File

@@ -163,15 +163,21 @@ export function useAgentRunModal(
}, [agentInputSchema.required, inputValues]);
const [allCredentialsAreSet, missingCredentials] = useMemo(() => {
const availableCredentials = new Set(Object.keys(inputCredentials));
const allCredentials = new Set(
Object.keys(agentCredentialsInputFields || {}) ?? [],
);
const missing = [...allCredentials].filter(
(key) => !availableCredentials.has(key),
// Only check required credentials from schema, not all properties
// Credentials marked as optional in node metadata won't be in the required array
const requiredCredentials = new Set(
(agent.credentials_input_schema?.required as string[]) || [],
);
// Check if required credentials have valid id (not just key existence)
// A credential is valid only if it has an id field set
const missing = [...requiredCredentials].filter((key) => {
const cred = inputCredentials[key];
return !cred || !cred.id;
});
return [missing.length === 0, missing];
}, [agentCredentialsInputFields, inputCredentials]);
}, [agent.credentials_input_schema, inputCredentials]);
const credentialsRequired = useMemo(
() => Object.keys(agentCredentialsInputFields || {}).length > 0,
@@ -239,12 +245,18 @@ export function useAgentRunModal(
});
} else {
// Manual execution
// Filter out incomplete credentials (optional ones not selected)
// Only send credentials that have a valid id field
const validCredentials = Object.fromEntries(
Object.entries(inputCredentials).filter(([_, cred]) => cred && cred.id),
);
executeGraphMutation.mutate({
graphId: agent.graph_id,
graphVersion: agent.graph_version,
data: {
inputs: inputValues,
credentials_inputs: inputCredentials,
credentials_inputs: validCredentials,
source: "library",
},
});
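
The net effect of the reworked memo: only schema-required credential fields are validated, and a field counts as set only when its value carries an id. A small sketch of that check, with illustrative names:

type CredentialsMetaInput = { id?: string };

// Return the required keys that still lack a usable credential id.
function findMissingCredentials(
  required: string[],
  selected: Record<string, CredentialsMetaInput | undefined>,
): string[] {
  return required.filter((key) => !selected[key]?.id);
}

// findMissingCredentials(["discord"], { discord: {} }) -> ["discord"]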

View File

@@ -1,17 +1,25 @@
"use client";
import { getV1GetGraphVersion } from "@/app/api/__generated__/endpoints/graphs/graphs";
import {
getGetV2ListLibraryAgentsQueryKey,
useDeleteV2DeleteLibraryAgent,
} from "@/app/api/__generated__/endpoints/library/library";
import { GraphExecutionJobInfo } from "@/app/api/__generated__/models/graphExecutionJobInfo";
import { GraphExecutionMeta } from "@/app/api/__generated__/models/graphExecutionMeta";
import { LibraryAgent } from "@/app/api/__generated__/models/libraryAgent";
import { LibraryAgentPreset } from "@/app/api/__generated__/models/libraryAgentPreset";
import { Button } from "@/components/atoms/Button/Button";
import { Text } from "@/components/atoms/Text/Text";
import { Dialog } from "@/components/molecules/Dialog/Dialog";
import { ShowMoreText } from "@/components/molecules/ShowMoreText/ShowMoreText";
import { useToast } from "@/components/molecules/Toast/use-toast";
import { exportAsJSONFile } from "@/lib/utils";
import { formatDate } from "@/lib/utils/time";
import { useQueryClient } from "@tanstack/react-query";
import Link from "next/link";
import { useRouter } from "next/navigation";
import { useState } from "react";
import { RunAgentModal } from "../modals/RunAgentModal/RunAgentModal";
import { RunDetailCard } from "../selected-views/RunDetailCard/RunDetailCard";
import { EmptyTasksIllustration } from "./EmptyTasksIllustration";
@@ -30,6 +38,41 @@ export function EmptyTasks({
onScheduleCreated,
}: Props) {
const { toast } = useToast();
const queryClient = useQueryClient();
const router = useRouter();
const [showDeleteDialog, setShowDeleteDialog] = useState(false);
const [isDeletingAgent, setIsDeletingAgent] = useState(false);
const { mutateAsync: deleteAgent } = useDeleteV2DeleteLibraryAgent();
async function handleDeleteAgent() {
if (!agent.id) return;
setIsDeletingAgent(true);
try {
await deleteAgent({ libraryAgentId: agent.id });
await queryClient.refetchQueries({
queryKey: getGetV2ListLibraryAgentsQueryKey(),
});
toast({ title: "Agent deleted" });
setShowDeleteDialog(false);
router.push("/library");
} catch (error: unknown) {
toast({
title: "Failed to delete agent",
description:
error instanceof Error
? error.message
: "An unexpected error occurred.",
variant: "destructive",
});
} finally {
setIsDeletingAgent(false);
}
}
async function handleExport() {
try {
@@ -147,9 +190,50 @@ export function EmptyTasks({
<Button variant="secondary" size="small" onClick={handleExport}>
Export agent to file
</Button>
<Button
variant="secondary"
size="small"
onClick={() => setShowDeleteDialog(true)}
>
Delete agent
</Button>
</div>
</div>
</div>
<Dialog
controlled={{
isOpen: showDeleteDialog,
set: setShowDeleteDialog,
}}
styling={{ maxWidth: "32rem" }}
title="Delete agent"
>
<Dialog.Content>
<div>
<Text variant="large">
Are you sure you want to delete this agent? This action cannot be
undone.
</Text>
<Dialog.Footer>
<Button
variant="secondary"
disabled={isDeletingAgent}
onClick={() => setShowDeleteDialog(false)}
>
Cancel
</Button>
<Button
variant="destructive"
onClick={handleDeleteAgent}
loading={isDeletingAgent}
>
Delete Agent
</Button>
</Dialog.Footer>
</div>
</Dialog.Content>
</Dialog>
</div>
);
}

View File

@@ -83,7 +83,9 @@ function renderCode(
</div>
)}
<pre className="overflow-x-auto rounded-md bg-muted p-3">
<code className="font-mono text-sm">{codeValue}</code>
<code className="whitespace-pre-wrap break-words font-mono text-sm">
{codeValue}
</code>
</pre>
</div>
);

View File

@@ -13,7 +13,7 @@ import { LoadingSelectedContent } from "../LoadingSelectedContent";
import { RunDetailCard } from "../RunDetailCard/RunDetailCard";
import { RunDetailHeader } from "../RunDetailHeader/RunDetailHeader";
import { SelectedViewLayout } from "../SelectedViewLayout";
import { SelectedScheduleActions } from "./components/SelectedScheduleActions";
import { SelectedScheduleActions } from "./components/SelectedScheduleActions/SelectedScheduleActions";
import { useSelectedScheduleView } from "./useSelectedScheduleView";
interface Props {

View File

@@ -1,40 +0,0 @@
import { LibraryAgent } from "@/app/api/__generated__/models/libraryAgent";
import { Button } from "@/components/atoms/Button/Button";
import { EyeIcon } from "@phosphor-icons/react";
import { AgentActionsDropdown } from "../../AgentActionsDropdown";
import { useScheduleDetailHeader } from "../../RunDetailHeader/useScheduleDetailHeader";
import { SelectedActionsWrap } from "../../SelectedActionsWrap";
type Props = {
agent: LibraryAgent;
scheduleId: string;
onDeleted?: () => void;
};
export function SelectedScheduleActions({ agent, scheduleId }: Props) {
const { openInBuilderHref } = useScheduleDetailHeader(
agent.graph_id,
scheduleId,
agent.graph_version,
);
return (
<>
<SelectedActionsWrap>
{openInBuilderHref && (
<Button
variant="icon"
size="icon"
as="NextLink"
href={openInBuilderHref}
target="_blank"
aria-label="View scheduled task details"
>
<EyeIcon weight="bold" size={18} className="text-zinc-700" />
</Button>
)}
<AgentActionsDropdown agent={agent} scheduleId={scheduleId} />
</SelectedActionsWrap>
</>
);
}

View File

@@ -0,0 +1,96 @@
"use client";
import { LibraryAgent } from "@/app/api/__generated__/models/libraryAgent";
import { Button } from "@/components/atoms/Button/Button";
import { LoadingSpinner } from "@/components/atoms/LoadingSpinner/LoadingSpinner";
import { Text } from "@/components/atoms/Text/Text";
import { Dialog } from "@/components/molecules/Dialog/Dialog";
import { EyeIcon, TrashIcon } from "@phosphor-icons/react";
import { AgentActionsDropdown } from "../../../AgentActionsDropdown";
import { SelectedActionsWrap } from "../../../SelectedActionsWrap";
import { useSelectedScheduleActions } from "./useSelectedScheduleActions";
type Props = {
agent: LibraryAgent;
scheduleId: string;
onDeleted?: () => void;
};
export function SelectedScheduleActions({
agent,
scheduleId,
onDeleted,
}: Props) {
const {
openInBuilderHref,
showDeleteDialog,
setShowDeleteDialog,
handleDelete,
isDeleting,
} = useSelectedScheduleActions({ agent, scheduleId, onDeleted });
return (
<>
<SelectedActionsWrap>
{openInBuilderHref && (
<Button
variant="icon"
size="icon"
as="NextLink"
href={openInBuilderHref}
target="_blank"
aria-label="View scheduled task details"
>
<EyeIcon weight="bold" size={18} className="text-zinc-700" />
</Button>
)}
<Button
variant="icon"
size="icon"
aria-label="Delete schedule"
onClick={() => setShowDeleteDialog(true)}
disabled={isDeleting}
>
{isDeleting ? (
<LoadingSpinner size="small" />
) : (
<TrashIcon weight="bold" size={18} />
)}
</Button>
<AgentActionsDropdown agent={agent} scheduleId={scheduleId} />
</SelectedActionsWrap>
<Dialog
controlled={{
isOpen: showDeleteDialog,
set: setShowDeleteDialog,
}}
styling={{ maxWidth: "32rem" }}
title="Delete schedule"
>
<Dialog.Content>
<Text variant="large">
Are you sure you want to delete this schedule? This action cannot be
undone.
</Text>
<Dialog.Footer>
<Button
variant="secondary"
onClick={() => setShowDeleteDialog(false)}
disabled={isDeleting}
>
Cancel
</Button>
<Button
variant="destructive"
onClick={handleDelete}
loading={isDeleting}
>
Delete Schedule
</Button>
</Dialog.Footer>
</Dialog.Content>
</Dialog>
</>
);
}

View File

@@ -0,0 +1,65 @@
"use client";
import {
getGetV1ListExecutionSchedulesForAGraphQueryOptions,
useDeleteV1DeleteExecutionSchedule,
} from "@/app/api/__generated__/endpoints/schedules/schedules";
import { LibraryAgent } from "@/app/api/__generated__/models/libraryAgent";
import { useToast } from "@/components/molecules/Toast/use-toast";
import { useQueryClient } from "@tanstack/react-query";
import { useState } from "react";
interface UseSelectedScheduleActionsProps {
agent: LibraryAgent;
scheduleId: string;
onDeleted?: () => void;
}
export function useSelectedScheduleActions({
agent,
scheduleId,
onDeleted,
}: UseSelectedScheduleActionsProps) {
const { toast } = useToast();
const queryClient = useQueryClient();
const [showDeleteDialog, setShowDeleteDialog] = useState(false);
const deleteMutation = useDeleteV1DeleteExecutionSchedule({
mutation: {
onSuccess: () => {
toast({ title: "Schedule deleted" });
queryClient.invalidateQueries({
queryKey: getGetV1ListExecutionSchedulesForAGraphQueryOptions(
agent.graph_id,
).queryKey,
});
setShowDeleteDialog(false);
onDeleted?.();
},
onError: (error: unknown) =>
toast({
title: "Failed to delete schedule",
description:
error instanceof Error
? error.message
: "An unexpected error occurred.",
variant: "destructive",
}),
},
});
function handleDelete() {
if (!scheduleId) return;
deleteMutation.mutate({ scheduleId });
}
const openInBuilderHref = `/build?flowID=${agent.graph_id}&flowVersion=${agent.graph_version}`;
return {
openInBuilderHref,
showDeleteDialog,
setShowDeleteDialog,
handleDelete,
isDeleting: deleteMutation.isPending,
};
}

View File

@@ -40,15 +40,17 @@ export function useMarketplaceUpdate({ agent }: UseMarketplaceUpdateProps) {
},
);
// Get user's submissions to check for pending submissions
const { data: submissionsData } = useGetV2ListMySubmissions(
{ page: 1, page_size: 50 }, // Get enough to cover recent submissions
{
query: {
enabled: !!user?.id, // Only fetch if user is authenticated
// Get user's submissions - only fetch if user is the creator
const { data: submissionsData, isLoading: isSubmissionsLoading } =
useGetV2ListMySubmissions(
{ page: 1, page_size: 50 },
{
query: {
// Only fetch if user is the creator
enabled: !!(user?.id && agent?.owner_user_id === user.id),
},
},
},
);
);
const updateToLatestMutation = usePatchV2UpdateLibraryAgent({
mutation: {
@@ -78,11 +80,36 @@ export function useMarketplaceUpdate({ agent }: UseMarketplaceUpdateProps) {
// Check if marketplace has a newer version than user's current version
const marketplaceUpdateInfo = React.useMemo(() => {
const storeAgent = okData(storeAgentData) as any;
if (!agent || !storeAgent) {
if (!agent || isSubmissionsLoading) {
return {
hasUpdate: false,
latestVersion: undefined,
isUserCreator: false,
hasPublishUpdate: false,
};
}
const isUserCreator = agent?.owner_user_id === user?.id;
// Check if there's a pending submission for this specific agent version
const submissionsResponse = okData(submissionsData) as any;
const hasPendingSubmissionForCurrentVersion =
isUserCreator &&
submissionsResponse?.submissions?.some(
(submission: StoreSubmission) =>
submission.agent_id === agent.graph_id &&
submission.agent_version === agent.graph_version &&
submission.status === "PENDING",
);
if (!storeAgent) {
return {
hasUpdate: false,
latestVersion: undefined,
isUserCreator,
hasPublishUpdate:
isUserCreator && !hasPendingSubmissionForCurrentVersion,
};
}
@@ -97,29 +124,15 @@ export function useMarketplaceUpdate({ agent }: UseMarketplaceUpdateProps) {
)
: undefined;
// Determine if the user is the creator of this agent
// Compare current user ID with the marketplace listing creator ID
const isUserCreator =
user?.id && agent.marketplace_listing?.creator.id === user.id;
// Check if there's a pending submission for this specific agent version
const submissionsResponse = okData(submissionsData) as any;
const hasPendingSubmissionForCurrentVersion =
isUserCreator &&
submissionsResponse?.submissions?.some(
(submission: StoreSubmission) =>
submission.agent_id === agent.graph_id &&
submission.agent_version === agent.graph_version &&
submission.status === "PENDING",
);
// If user is creator and their version is newer than marketplace, show publish update banner
// BUT only if there's no pending submission for this version
// Show publish update button if:
// 1. User is the creator
// 2. No pending submission for current version
// 3. Either: agent not published yet OR local version is newer than marketplace
const hasPublishUpdate =
isUserCreator &&
!hasPendingSubmissionForCurrentVersion &&
latestMarketplaceVersion !== undefined &&
agent.graph_version > latestMarketplaceVersion;
(latestMarketplaceVersion === undefined || // Not published yet
agent.graph_version > latestMarketplaceVersion); // Or local version is newer
// If marketplace version is newer than user's version, show update banner
// This applies to both creators and non-creators
@@ -133,7 +146,7 @@ export function useMarketplaceUpdate({ agent }: UseMarketplaceUpdateProps) {
isUserCreator,
hasPublishUpdate,
};
}, [agent, storeAgentData, user, submissionsData]);
}, [agent, storeAgentData, user, submissionsData, isSubmissionsLoading]);
const handlePublishUpdate = () => {
setModalOpen(true);
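
The reworked hasPublishUpdate now also covers agents that have never been published (latestMarketplaceVersion undefined), not only those whose local version is ahead of the listing. Restated as a standalone predicate, purely for illustration:

function shouldShowPublishUpdate(opts: {
  isUserCreator: boolean;
  hasPendingSubmission: boolean;
  latestMarketplaceVersion?: number;
  graphVersion: number;
}): boolean {
  return (
    opts.isUserCreator &&
    !opts.hasPendingSubmission &&
    // never published yet, or the local graph is ahead of the listing
    (opts.latestMarketplaceVersion === undefined ||
      opts.graphVersion > opts.latestMarketplaceVersion)
  );
}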

View File

@@ -1,16 +1,17 @@
"use client";
import React from "react";
import { useFavoriteAgents } from "../../hooks/useFavoriteAgents";
import LibraryAgentCard from "../LibraryAgentCard/LibraryAgentCard";
import { useGetFlag, Flag } from "@/services/feature-flags/use-get-flag";
import { Heart } from "lucide-react";
import { Skeleton } from "@/components/__legacy__/ui/skeleton";
import { InfiniteScroll } from "@/components/contextual/InfiniteScroll/InfiniteScroll";
import { LibraryAgent } from "@/app/api/__generated__/models/libraryAgent";
import { Text } from "@/components/atoms/Text/Text";
import { InfiniteScroll } from "@/components/contextual/InfiniteScroll/InfiniteScroll";
import { HeartIcon } from "@phosphor-icons/react";
import { useFavoriteAgents } from "../../hooks/useFavoriteAgents";
import { LibraryAgentCard } from "../LibraryAgentCard/LibraryAgentCard";
export default function FavoritesSection() {
const isAgentFavoritingEnabled = useGetFlag(Flag.AGENT_FAVORITING);
interface Props {
searchTerm: string;
}
export function FavoritesSection({ searchTerm }: Props) {
const {
allAgents: favoriteAgents,
agentLoading: isLoading,
@@ -18,60 +19,50 @@ export default function FavoritesSection() {
hasNextPage,
fetchNextPage,
isFetchingNextPage,
} = useFavoriteAgents();
} = useFavoriteAgents({ searchTerm });
// Only show this section if the feature flag is enabled
if (!isAgentFavoritingEnabled) {
return null;
}
// Don't show the section if there are no favorites
if (!isLoading && favoriteAgents.length === 0) {
if (isLoading || favoriteAgents.length === 0) {
return null;
}
return (
<div className="mb-8">
<div className="flex items-center gap-[10px] p-2 pb-[10px]">
<Heart className="h-5 w-5 fill-red-500 text-red-500" />
<span className="font-poppin text-[18px] font-semibold leading-[28px] text-neutral-800">
Favorites
</span>
{!isLoading && (
<span className="font-sans text-[14px] font-normal leading-6">
{agentCount} {agentCount === 1 ? "agent" : "agents"}
</span>
)}
<div className="!mb-8">
<div className="mb-3 flex items-center gap-2 p-2">
<HeartIcon className="h-5 w-5" weight="fill" />
<div className="flex items-baseline gap-2">
<Text variant="h4">Favorites</Text>
{!isLoading && (
<Text
variant="body"
data-testid="agents-count"
className="relative bottom-px text-zinc-500"
>
{agentCount}
</Text>
)}
</div>
</div>
<div className="relative">
{isLoading ? (
<InfiniteScroll
isFetchingNextPage={isFetchingNextPage}
fetchNextPage={fetchNextPage}
hasNextPage={hasNextPage}
loader={
<div className="flex h-8 w-full items-center justify-center">
<div className="h-6 w-6 animate-spin rounded-full border-b-2 border-t-2 border-neutral-800" />
</div>
}
>
<div className="grid grid-cols-1 gap-4 sm:grid-cols-2 lg:grid-cols-3 xl:grid-cols-4">
{[...Array(4)].map((_, i) => (
<Skeleton key={i} className="h-48 w-full rounded-lg" />
{favoriteAgents.map((agent: LibraryAgent) => (
<LibraryAgentCard key={agent.id} agent={agent} />
))}
</div>
) : (
<InfiniteScroll
isFetchingNextPage={isFetchingNextPage}
fetchNextPage={fetchNextPage}
hasNextPage={hasNextPage}
loader={
<div className="flex h-8 w-full items-center justify-center">
<div className="h-6 w-6 animate-spin rounded-full border-b-2 border-t-2 border-neutral-800" />
</div>
}
>
<div className="grid grid-cols-1 gap-4 sm:grid-cols-2 lg:grid-cols-3 xl:grid-cols-4">
{favoriteAgents.map((agent: LibraryAgent) => (
<LibraryAgentCard key={agent.id} agent={agent} />
))}
</div>
</InfiniteScroll>
)}
</InfiniteScroll>
</div>
{favoriteAgents.length > 0 && <div className="mt-6 border-t pt-6" />}
{favoriteAgents.length > 0 && <div className="!mt-10 border-t" />}
</div>
);
}

View File

@@ -1,34 +1,28 @@
// import LibraryNotificationDropdown from "./library-notification-dropdown";
import { LibrarySearchBar } from "../LibrarySearchBar/LibrarySearchBar";
import LibraryUploadAgentDialog from "../LibraryUploadAgentDialog/LibraryUploadAgentDialog";
import LibrarySearchBar from "../LibrarySearchBar/LibrarySearchBar";
type LibraryActionHeaderProps = Record<string, never>;
interface Props {
setSearchTerm: (value: string) => void;
}
/**
* LibraryActionHeader component - Renders a header with search, notifications and filters
*/
const LibraryActionHeader: React.FC<LibraryActionHeaderProps> = ({}) => {
export function LibraryActionHeader({ setSearchTerm }: Props) {
return (
<>
<div className="mb-[32px] hidden items-start justify-between md:flex">
{/* <LibraryNotificationDropdown /> */}
<LibrarySearchBar />
<div className="mb-[32px] hidden items-center justify-center gap-4 md:flex">
<LibrarySearchBar setSearchTerm={setSearchTerm} />
<LibraryUploadAgentDialog />
</div>
{/* Mobile and tablet */}
<div className="flex flex-col gap-4 p-4 pt-[52px] md:hidden">
<div className="flex w-full justify-between">
{/* <LibraryNotificationDropdown /> */}
<LibraryUploadAgentDialog />
</div>
<div className="flex items-center justify-center">
<LibrarySearchBar />
<LibrarySearchBar setSearchTerm={setSearchTerm} />
</div>
</div>
</>
);
};
export default LibraryActionHeader;
}

View File

@@ -1,28 +1,28 @@
"use client";
import LibrarySortMenu from "../LibrarySortMenu/LibrarySortMenu";
import { LibraryAgentSort } from "@/app/api/__generated__/models/libraryAgentSort";
import { Text } from "@/components/atoms/Text/Text";
import { LibrarySortMenu } from "../LibrarySortMenu/LibrarySortMenu";
interface LibraryActionSubHeaderProps {
interface Props {
agentCount: number;
setLibrarySort: (value: LibraryAgentSort) => void;
}
export default function LibraryActionSubHeader({
agentCount,
}: LibraryActionSubHeaderProps) {
export function LibraryActionSubHeader({ agentCount, setLibrarySort }: Props) {
return (
<div className="flex items-center justify-between pb-[10px]">
<div className="flex items-center gap-[10px] p-2">
<span className="font-poppin w-[96px] text-[18px] font-semibold leading-[28px] text-neutral-800">
My agents
</span>
<span
className="w-[70px] font-sans text-[14px] font-normal leading-6"
<div className="flex items-baseline justify-between">
<div className="flex items-baseline gap-4">
<Text variant="h4">My agents</Text>
<Text
variant="body"
data-testid="agents-count"
className="text-zinc-500"
>
{agentCount} agents
</span>
{agentCount}
</Text>
</div>
<LibrarySortMenu />
<LibrarySortMenu setLibrarySort={setLibrarySort} />
</div>
);
}

View File

@@ -1,332 +1,126 @@
"use client";
import Link from "next/link";
import { Text } from "@/components/atoms/Text/Text";
import { CaretCircleRightIcon } from "@phosphor-icons/react";
import Image from "next/image";
import { Heart } from "@phosphor-icons/react";
import { useState, useEffect } from "react";
import { getQueryClient } from "@/lib/react-query/queryClient";
import { InfiniteData } from "@tanstack/react-query";
import NextLink from "next/link";
import { LibraryAgent } from "@/app/api/__generated__/models/libraryAgent";
import {
getV2ListLibraryAgentsResponse,
getV2ListFavoriteLibraryAgentsResponse,
} from "@/app/api/__generated__/endpoints/library/library";
import BackendAPI, { LibraryAgentID } from "@/lib/autogpt-server-api";
import { cn } from "@/lib/utils";
import { useToast } from "@/components/molecules/Toast/use-toast";
import { Flag, useGetFlag } from "@/services/feature-flags/use-get-flag";
import Avatar, {
AvatarFallback,
AvatarImage,
} from "@/components/atoms/Avatar/Avatar";
import { Link } from "@/components/atoms/Link/Link";
import { AgentCardMenu } from "./components/AgentCardMenu";
import { FavoriteButton } from "./components/FavoriteButton";
import { useLibraryAgentCard } from "./useLibraryAgentCard";
interface LibraryAgentCardProps {
interface Props {
agent: LibraryAgent;
}
export default function LibraryAgentCard({
agent: {
id,
name,
description,
graph_id,
can_access_graph,
export function LibraryAgentCard({ agent }: Props) {
const { id, name, graph_id, can_access_graph, image_url } = agent;
const {
isFromMarketplace,
isFavorite,
profile,
creator_image_url,
image_url,
is_favorite,
},
}: LibraryAgentCardProps) {
const isAgentFavoritingEnabled = useGetFlag(Flag.AGENT_FAVORITING);
const [isFavorite, setIsFavorite] = useState(is_favorite);
const [isUpdating, setIsUpdating] = useState(false);
const { toast } = useToast();
const api = new BackendAPI();
const queryClient = getQueryClient();
// Sync local state with prop when it changes (e.g., after query invalidation)
useEffect(() => {
setIsFavorite(is_favorite);
}, [is_favorite]);
const updateQueryData = (newIsFavorite: boolean) => {
// Update the agent in all library agent queries
queryClient.setQueriesData(
{ queryKey: ["/api/library/agents"] },
(
oldData:
| InfiniteData<getV2ListLibraryAgentsResponse, number | undefined>
| undefined,
) => {
if (!oldData?.pages) return oldData;
return {
...oldData,
pages: oldData.pages.map((page) => {
if (page.status !== 200) return page;
return {
...page,
data: {
...page.data,
agents: page.data.agents.map((agent: LibraryAgent) =>
agent.id === id
? { ...agent, is_favorite: newIsFavorite }
: agent,
),
},
};
}),
};
},
);
// Update or remove from favorites query based on new state
queryClient.setQueriesData(
{ queryKey: ["/api/library/agents/favorites"] },
(
oldData:
| InfiniteData<
getV2ListFavoriteLibraryAgentsResponse,
number | undefined
>
| undefined,
) => {
if (!oldData?.pages) return oldData;
if (newIsFavorite) {
// Add to favorites if not already there
const exists = oldData.pages.some(
(page) =>
page.status === 200 &&
page.data.agents.some((agent: LibraryAgent) => agent.id === id),
);
if (!exists) {
const firstPage = oldData.pages[0];
if (firstPage?.status === 200) {
const updatedAgent = {
id,
name,
description,
graph_id,
can_access_graph,
creator_image_url,
image_url,
is_favorite: true,
};
return {
...oldData,
pages: [
{
...firstPage,
data: {
...firstPage.data,
agents: [updatedAgent, ...firstPage.data.agents],
pagination: {
...firstPage.data.pagination,
total_items: firstPage.data.pagination.total_items + 1,
},
},
},
...oldData.pages.slice(1).map((page) =>
page.status === 200
? {
...page,
data: {
...page.data,
pagination: {
...page.data.pagination,
total_items: page.data.pagination.total_items + 1,
},
},
}
: page,
),
],
};
}
}
} else {
// Remove from favorites
let removedCount = 0;
return {
...oldData,
pages: oldData.pages.map((page) => {
if (page.status !== 200) return page;
const filteredAgents = page.data.agents.filter(
(agent: LibraryAgent) => agent.id !== id,
);
if (filteredAgents.length < page.data.agents.length) {
removedCount = 1;
}
return {
...page,
data: {
...page.data,
agents: filteredAgents,
pagination: {
...page.data.pagination,
total_items:
page.data.pagination.total_items - removedCount,
},
},
};
}),
};
}
return oldData;
},
);
};
const handleToggleFavorite = async (e: React.MouseEvent) => {
e.preventDefault(); // Prevent navigation when clicking the heart
e.stopPropagation();
if (isUpdating || !isAgentFavoritingEnabled) return;
const newIsFavorite = !isFavorite;
// Optimistic update
setIsFavorite(newIsFavorite);
updateQueryData(newIsFavorite);
setIsUpdating(true);
try {
await api.updateLibraryAgent(id as LibraryAgentID, {
is_favorite: newIsFavorite,
});
toast({
title: newIsFavorite ? "Added to favorites" : "Removed from favorites",
description: `${name} has been ${newIsFavorite ? "added to" : "removed from"} your favorites.`,
});
} catch (error) {
// Revert on error
console.error("Failed to update favorite status:", error);
setIsFavorite(!newIsFavorite);
updateQueryData(!newIsFavorite);
toast({
title: "Error",
description: "Failed to update favorite status. Please try again.",
variant: "destructive",
});
} finally {
setIsUpdating(false);
}
};
handleToggleFavorite,
} = useLibraryAgentCard({ agent });
return (
<div
data-testid="library-agent-card"
data-agent-id={id}
className="group inline-flex w-full max-w-[434px] flex-col items-start justify-start gap-2.5 rounded-[26px] bg-white transition-all duration-300 hover:shadow-lg dark:bg-transparent dark:hover:shadow-gray-700"
className="group relative inline-flex h-[10.625rem] w-full max-w-[25rem] flex-col items-start justify-start gap-2.5 rounded-medium border border-zinc-100 bg-white transition-all duration-300 hover:shadow-md"
>
<Link
href={`/library/agents/${id}`}
className="relative h-[200px] w-full overflow-hidden rounded-[20px]"
>
{!image_url ? (
<div
className={`h-full w-full ${
[
"bg-gradient-to-r from-green-200 to-blue-200",
"bg-gradient-to-r from-pink-200 to-purple-200",
"bg-gradient-to-r from-yellow-200 to-orange-200",
"bg-gradient-to-r from-blue-200 to-cyan-200",
"bg-gradient-to-r from-indigo-200 to-purple-200",
][parseInt(id.slice(0, 8), 16) % 5]
}`}
style={{
backgroundSize: "200% 200%",
animation: "gradient 15s ease infinite",
}}
/>
) : (
<Image
src={image_url}
alt={`${name} preview image`}
fill
className="object-cover"
/>
)}
{isAgentFavoritingEnabled && (
<button
onClick={handleToggleFavorite}
className={cn(
"absolute right-4 top-4 rounded-full bg-white/90 p-2 backdrop-blur-sm transition-all duration-200",
"hover:scale-110 hover:bg-white",
"focus:outline-none focus:ring-2 focus:ring-red-500 focus:ring-offset-2",
isUpdating && "cursor-not-allowed opacity-50",
!isFavorite && "opacity-0 group-hover:opacity-100",
)}
disabled={isUpdating}
aria-label={
isFavorite ? "Remove from favorites" : "Add to favorites"
}
>
<Heart
size={20}
weight={isFavorite ? "fill" : "regular"}
className={cn(
"transition-colors duration-200",
isFavorite
? "text-red-500"
: "text-gray-600 hover:text-red-500",
)}
/>
</button>
)}
<div className="absolute bottom-4 left-4">
<Avatar className="h-16 w-16">
<NextLink href={`/library/agents/${id}`} className="flex-shrink-0">
<div className="relative flex items-center gap-2 px-4 pt-3">
<Avatar className="h-4 w-4 rounded-full">
<AvatarImage
src={
creator_image_url
? creator_image_url
: "/avatar-placeholder.png"
isFromMarketplace
? creator_image_url || "/avatar-placeholder.png"
: profile?.avatar_url || "/avatar-placeholder.png"
}
alt={`${name} creator avatar`}
/>
<AvatarFallback size={64}>{name.charAt(0)}</AvatarFallback>
<AvatarFallback size={48}>{name.charAt(0)}</AvatarFallback>
</Avatar>
<Text
variant="small-medium"
className="uppercase tracking-wide text-zinc-400"
>
{isFromMarketplace ? "FROM MARKETPLACE" : "Built by you"}
</Text>
</div>
</Link>
</NextLink>
<FavoriteButton
isFavorite={isFavorite}
onClick={handleToggleFavorite}
className="absolute right-10 top-0"
/>
<AgentCardMenu agent={agent} />
<div className="flex w-full flex-1 flex-col px-4 py-4">
<Link href={`/library/agents/${id}`}>
<h3 className="mb-2 line-clamp-2 font-poppins text-2xl font-semibold leading-tight text-[#272727] dark:text-neutral-100">
<div className="flex w-full flex-1 flex-col px-4 pb-2">
<Link
href={`/library/agents/${id}`}
className="flex w-full items-start justify-between gap-2 no-underline hover:no-underline"
>
<Text
variant="h5"
data-testid="library-agent-card-name"
className="line-clamp-3 hyphens-auto break-words no-underline hover:no-underline"
>
{name}
</h3>
</Text>
<p className="line-clamp-3 flex-1 text-sm text-gray-600 dark:text-gray-400">
{description}
</p>
{!image_url ? (
<div
className={`h-[3.64rem] w-[6.70rem] flex-shrink-0 rounded-small ${
[
"bg-gradient-to-r from-green-200 to-blue-200",
"bg-gradient-to-r from-pink-200 to-purple-200",
"bg-gradient-to-r from-yellow-200 to-orange-200",
"bg-gradient-to-r from-blue-200 to-cyan-200",
"bg-gradient-to-r from-indigo-200 to-purple-200",
][parseInt(id.slice(0, 8), 16) % 5]
}`}
style={{
backgroundSize: "200% 200%",
animation: "gradient 15s ease infinite",
}}
/>
) : (
<Image
src={image_url}
alt={`${name} preview image`}
width={107}
height={58}
className="flex-shrink-0 rounded-small object-cover"
/>
)}
</Link>
<div className="flex-grow" />
{/* Spacer */}
<div className="items-between mt-4 flex w-full justify-between gap-3">
<div className="mt-auto flex w-full justify-start gap-6 border-t border-zinc-100 pb-1 pt-3">
<Link
href={`/library/agents/${id}`}
className="text-lg font-semibold text-neutral-800 hover:underline dark:text-neutral-200"
data-testid="library-agent-card-see-runs-link"
className="flex items-center gap-1 text-[13px]"
>
See runs
See runs <CaretCircleRightIcon size={20} />
</Link>
{can_access_graph && (
<Link
href={`/build?flowID=${graph_id}`}
className="text-lg font-semibold text-neutral-800 hover:underline dark:text-neutral-200"
data-testid="library-agent-card-open-in-builder-link"
className="flex items-center gap-1 text-[13px]"
isExternal
>
Open in builder
Open in builder <CaretCircleRightIcon size={20} />
</Link>
)}
</div>

View File

@@ -0,0 +1,188 @@
"use client";
import {
getGetV2ListLibraryAgentsQueryKey,
useDeleteV2DeleteLibraryAgent,
usePostV2ForkLibraryAgent,
} from "@/app/api/__generated__/endpoints/library/library";
import { LibraryAgent } from "@/app/api/__generated__/models/libraryAgent";
import { Button } from "@/components/atoms/Button/Button";
import { Text } from "@/components/atoms/Text/Text";
import { Dialog } from "@/components/molecules/Dialog/Dialog";
import {
DropdownMenu,
DropdownMenuContent,
DropdownMenuItem,
DropdownMenuSeparator,
DropdownMenuTrigger,
} from "@/components/molecules/DropdownMenu/DropdownMenu";
import { useToast } from "@/components/molecules/Toast/use-toast";
import { DotsThree } from "@phosphor-icons/react";
import { useQueryClient } from "@tanstack/react-query";
import Link from "next/link";
import { useRouter } from "next/navigation";
import { useState } from "react";
interface AgentCardMenuProps {
agent: LibraryAgent;
}
export function AgentCardMenu({ agent }: AgentCardMenuProps) {
const { toast } = useToast();
const queryClient = useQueryClient();
const router = useRouter();
const [showDeleteDialog, setShowDeleteDialog] = useState(false);
const [isDeletingAgent, setIsDeletingAgent] = useState(false);
const [isDuplicatingAgent, setIsDuplicatingAgent] = useState(false);
const { mutateAsync: deleteAgent } = useDeleteV2DeleteLibraryAgent();
const { mutateAsync: forkAgent } = usePostV2ForkLibraryAgent();
async function handleDuplicateAgent() {
if (!agent.id) return;
setIsDuplicatingAgent(true);
try {
const result = await forkAgent({ libraryAgentId: agent.id });
if (result.status === 200) {
await queryClient.refetchQueries({
queryKey: getGetV2ListLibraryAgentsQueryKey(),
});
toast({
title: "Agent duplicated",
description: `${result.data.name} has been created.`,
});
}
} catch (error: unknown) {
toast({
title: "Failed to duplicate agent",
description:
error instanceof Error
? error.message
: "An unexpected error occurred.",
variant: "destructive",
});
} finally {
setIsDuplicatingAgent(false);
}
}
async function handleDeleteAgent() {
if (!agent.id) return;
setIsDeletingAgent(true);
try {
await deleteAgent({ libraryAgentId: agent.id });
await queryClient.refetchQueries({
queryKey: getGetV2ListLibraryAgentsQueryKey(),
});
toast({ title: "Agent deleted" });
setShowDeleteDialog(false);
router.push("/library");
} catch (error: unknown) {
toast({
title: "Failed to delete agent",
description:
error instanceof Error
? error.message
: "An unexpected error occurred.",
variant: "destructive",
});
} finally {
setIsDeletingAgent(false);
}
}
return (
<>
<DropdownMenu>
<DropdownMenuTrigger asChild>
<button
className="absolute right-2 top-1 rounded p-1.5 transition-opacity hover:bg-neutral-100"
onClick={(e) => e.stopPropagation()}
aria-label="More actions"
>
<DotsThree className="h-5 w-5 text-neutral-600" />
</button>
</DropdownMenuTrigger>
<DropdownMenuContent align="end">
{agent.can_access_graph && (
<>
<DropdownMenuItem asChild>
<Link
href={`/build?flowID=${agent.graph_id}&flowVersion=${agent.graph_version}`}
target="_blank"
className="flex items-center gap-2"
onClick={(e) => e.stopPropagation()}
>
Edit agent
</Link>
</DropdownMenuItem>
<DropdownMenuSeparator />
</>
)}
<DropdownMenuItem
onClick={(e) => {
e.stopPropagation();
handleDuplicateAgent();
}}
disabled={isDuplicatingAgent}
className="flex items-center gap-2"
>
Duplicate agent
</DropdownMenuItem>
<DropdownMenuSeparator />
<DropdownMenuItem
onClick={(e) => {
e.stopPropagation();
setShowDeleteDialog(true);
}}
className="flex items-center gap-2 text-red-600 focus:bg-red-50 focus:text-red-600"
>
Delete agent
</DropdownMenuItem>
</DropdownMenuContent>
</DropdownMenu>
<Dialog
controlled={{
isOpen: showDeleteDialog,
set: setShowDeleteDialog,
}}
styling={{ maxWidth: "32rem" }}
title="Delete agent"
>
<Dialog.Content>
<div>
<Text variant="large">
Are you sure you want to delete this agent? This action cannot be
undone.
</Text>
<Dialog.Footer>
<Button
variant="secondary"
disabled={isDeletingAgent}
onClick={() => setShowDeleteDialog(false)}
>
Cancel
</Button>
<Button
variant="destructive"
onClick={handleDeleteAgent}
loading={isDeletingAgent}
>
Delete Agent
</Button>
</Dialog.Footer>
</div>
</Dialog.Content>
</Dialog>
</>
);
}

View File

@@ -0,0 +1,39 @@
"use client";
import { cn } from "@/lib/utils";
import { HeartIcon } from "@phosphor-icons/react";
import type { MouseEvent } from "react";
interface FavoriteButtonProps {
isFavorite: boolean;
onClick: (e: MouseEvent<HTMLButtonElement>) => void;
className?: string;
}
export function FavoriteButton({
isFavorite,
onClick,
className,
}: FavoriteButtonProps) {
return (
<button
onClick={onClick}
className={cn(
"rounded-full p-2 transition-all duration-200",
"hover:scale-110",
!isFavorite && "opacity-0 group-hover:opacity-100",
className,
)}
aria-label={isFavorite ? "Remove from favorites" : "Add to favorites"}
>
<HeartIcon
size={20}
weight={isFavorite ? "fill" : "regular"}
className={cn(
"transition-colors duration-200",
isFavorite ? "text-red-500" : "text-gray-600 hover:text-red-500",
)}
/>
</button>
);
}

View File

@@ -0,0 +1,150 @@
import { InfiniteData, QueryClient } from "@tanstack/react-query";
import {
getV2ListFavoriteLibraryAgentsResponse,
getV2ListLibraryAgentsResponse,
} from "@/app/api/__generated__/endpoints/library/library";
import { LibraryAgent } from "@/app/api/__generated__/models/libraryAgent";
interface UpdateFavoriteInQueriesParams {
queryClient: QueryClient;
agentId: string;
agent: LibraryAgent;
newIsFavorite: boolean;
}
export function updateFavoriteInQueries({
queryClient,
agentId,
agent,
newIsFavorite,
}: UpdateFavoriteInQueriesParams) {
queryClient.setQueriesData(
{ queryKey: ["/api/library/agents"] },
(
oldData:
| InfiniteData<getV2ListLibraryAgentsResponse, number | undefined>
| undefined,
) => {
if (!oldData?.pages) return oldData;
return {
...oldData,
pages: oldData.pages.map((page) => {
if (page.status !== 200) return page;
return {
...page,
data: {
...page.data,
agents: page.data.agents.map((currentAgent: LibraryAgent) =>
currentAgent.id === agentId
? { ...currentAgent, is_favorite: newIsFavorite }
: currentAgent,
),
},
};
}),
};
},
);
queryClient.setQueriesData(
{ queryKey: ["/api/library/agents/favorites"] },
(
oldData:
| InfiniteData<
getV2ListFavoriteLibraryAgentsResponse,
number | undefined
>
| undefined,
) => {
if (!oldData?.pages) return oldData;
if (newIsFavorite) {
const exists = oldData.pages.some(
(page) =>
page.status === 200 &&
page.data.agents.some(
(currentAgent: LibraryAgent) => currentAgent.id === agentId,
),
);
if (!exists) {
const firstPage = oldData.pages[0];
if (firstPage?.status === 200) {
const updatedAgent = {
id: agent.id,
name: agent.name,
description: agent.description,
graph_id: agent.graph_id,
can_access_graph: agent.can_access_graph,
creator_image_url: agent.creator_image_url,
image_url: agent.image_url,
is_favorite: true,
};
return {
...oldData,
pages: [
{
...firstPage,
data: {
...firstPage.data,
agents: [updatedAgent, ...firstPage.data.agents],
pagination: {
...firstPage.data.pagination,
total_items: firstPage.data.pagination.total_items + 1,
},
},
},
...oldData.pages.slice(1).map((page) =>
page.status === 200
? {
...page,
data: {
...page.data,
pagination: {
...page.data.pagination,
total_items: page.data.pagination.total_items + 1,
},
},
}
: page,
),
],
};
}
}
} else {
return {
...oldData,
pages: oldData.pages.map((page) => {
if (page.status !== 200) return page;
const filteredAgents = page.data.agents.filter(
(currentAgent: LibraryAgent) => currentAgent.id !== agentId,
);
const removedCount =
filteredAgents.length < page.data.agents.length ? 1 : 0;
return {
...page,
data: {
...page.data,
agents: filteredAgents,
pagination: {
...page.data.pagination,
total_items: page.data.pagination.total_items - removedCount,
},
},
};
}),
};
}
return oldData;
},
);
}
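
updateFavoriteInQueries only rewrites the React Query cache; the rollback lives in the calling hook (useLibraryAgentCard, next file), which re-applies the opposite value if the PATCH fails. The overall optimistic-update pattern, sketched generically with illustrative names:

// Write the cache first, call the server, roll back on failure.
async function optimisticToggle(
  applyToCache: (next: boolean) => void, // e.g. wraps updateFavoriteInQueries
  mutate: (next: boolean) => Promise<unknown>, // e.g. PATCH the library agent
  next: boolean,
): Promise<void> {
  applyToCache(next);
  try {
    await mutate(next);
  } catch {
    applyToCache(!next); // restore the cache so UI and server agree again
  }
}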

View File

@@ -0,0 +1,84 @@
"use client";
import { getQueryClient } from "@/lib/react-query/queryClient";
import { useEffect, useState } from "react";
import { usePatchV2UpdateLibraryAgent } from "@/app/api/__generated__/endpoints/library/library";
import { useGetV2GetUserProfile } from "@/app/api/__generated__/endpoints/store/store";
import { LibraryAgent } from "@/app/api/__generated__/models/libraryAgent";
import { okData } from "@/app/api/helpers";
import { useToast } from "@/components/molecules/Toast/use-toast";
import { updateFavoriteInQueries } from "./helpers";
interface Props {
agent: LibraryAgent;
}
export function useLibraryAgentCard({ agent }: Props) {
const { id, name, is_favorite, creator_image_url, marketplace_listing } =
agent;
const isFromMarketplace = Boolean(marketplace_listing);
const [isFavorite, setIsFavorite] = useState(is_favorite);
const { toast } = useToast();
const queryClient = getQueryClient();
const { mutateAsync: updateLibraryAgent } = usePatchV2UpdateLibraryAgent();
const { data: profile } = useGetV2GetUserProfile({
query: {
select: okData,
},
});
useEffect(() => {
setIsFavorite(is_favorite);
}, [is_favorite]);
function updateQueryData(newIsFavorite: boolean) {
updateFavoriteInQueries({
queryClient,
agentId: id,
agent,
newIsFavorite,
});
}
async function handleToggleFavorite(e: React.MouseEvent) {
e.preventDefault();
e.stopPropagation();
const newIsFavorite = !isFavorite;
setIsFavorite(newIsFavorite);
updateQueryData(newIsFavorite);
try {
await updateLibraryAgent({
libraryAgentId: id,
data: { is_favorite: newIsFavorite },
});
toast({
title: newIsFavorite ? "Added to favorites" : "Removed from favorites",
description: `${name} has been ${newIsFavorite ? "added to" : "removed from"} your favorites.`,
});
} catch {
setIsFavorite(!newIsFavorite);
updateQueryData(!newIsFavorite);
toast({
title: "Error",
description: "Failed to update favorite status. Please try again.",
variant: "destructive",
});
}
}
return {
isFromMarketplace,
isFavorite,
profile,
creator_image_url,
handleToggleFavorite,
};
}

View File

@@ -1,10 +1,22 @@
"use client";
import LibraryActionSubHeader from "../LibraryActionSubHeader/LibraryActionSubHeader";
import LibraryAgentCard from "../LibraryAgentCard/LibraryAgentCard";
import { LibraryAgentSort } from "@/app/api/__generated__/models/libraryAgentSort";
import { LoadingSpinner } from "@/components/atoms/LoadingSpinner/LoadingSpinner";
import { InfiniteScroll } from "@/components/contextual/InfiniteScroll/InfiniteScroll";
import { LibraryActionSubHeader } from "../LibraryActionSubHeader/LibraryActionSubHeader";
import { LibraryAgentCard } from "../LibraryAgentCard/LibraryAgentCard";
import { useLibraryAgentList } from "./useLibraryAgentList";
export default function LibraryAgentList() {
interface Props {
searchTerm: string;
librarySort: LibraryAgentSort;
setLibrarySort: (value: LibraryAgentSort) => void;
}
export function LibraryAgentList({
searchTerm,
librarySort,
setLibrarySort,
}: Props) {
const {
agentLoading,
agentCount,
@@ -12,28 +24,27 @@ export default function LibraryAgentList() {
hasNextPage,
isFetchingNextPage,
fetchNextPage,
} = useLibraryAgentList();
const LoadingSpinner = () => (
<div className="h-8 w-8 animate-spin rounded-full border-b-2 border-t-2 border-neutral-800" />
);
} = useLibraryAgentList({ searchTerm, librarySort });
return (
<>
<LibraryActionSubHeader agentCount={agentCount} />
<LibraryActionSubHeader
agentCount={agentCount}
setLibrarySort={setLibrarySort}
/>
<div className="px-2">
{agentLoading ? (
<div className="flex h-[200px] items-center justify-center">
<LoadingSpinner />
<LoadingSpinner size="large" />
</div>
) : (
<InfiniteScroll
isFetchingNextPage={isFetchingNextPage}
fetchNextPage={fetchNextPage}
hasNextPage={hasNextPage}
loader={<LoadingSpinner />}
loader={<LoadingSpinner size="medium" />}
>
<div className="grid grid-cols-1 gap-3 sm:grid-cols-2 md:grid-cols-2 lg:grid-cols-3 xl:grid-cols-4">
<div className="grid grid-cols-1 gap-6 sm:grid-cols-2 md:grid-cols-2 lg:grid-cols-3 xl:grid-cols-4">
{agents.map((agent) => (
<LibraryAgentCard key={agent.id} agent={agent} />
))}

View File

@@ -1,18 +1,23 @@
"use client";
import { useGetV2ListLibraryAgentsInfinite } from "@/app/api/__generated__/endpoints/library/library";
import { LibraryAgentSort } from "@/app/api/__generated__/models/libraryAgentSort";
import {
getPaginatedTotalCount,
getPaginationNextPageNumber,
unpaginate,
} from "@/app/api/helpers";
import { useGetV2ListLibraryAgentsInfinite } from "@/app/api/__generated__/endpoints/library/library";
import { useLibraryPageContext } from "../state-provider";
import { useLibraryAgentsStore } from "@/hooks/useLibraryAgents/store";
import { getInitialData } from "./helpers";
import { getQueryClient } from "@/lib/react-query/queryClient";
import { useEffect, useRef } from "react";
export const useLibraryAgentList = () => {
const { searchTerm, librarySort } = useLibraryPageContext();
const { agents: cachedAgents } = useLibraryAgentsStore();
interface Props {
searchTerm: string;
librarySort: LibraryAgentSort;
}
export function useLibraryAgentList({ searchTerm, librarySort }: Props) {
const queryClient = getQueryClient();
const prevSortRef = useRef<LibraryAgentSort | null>(null);
const {
data: agentsQueryData,
@@ -23,18 +28,28 @@ export const useLibraryAgentList = () => {
} = useGetV2ListLibraryAgentsInfinite(
{
page: 1,
page_size: 8,
page_size: 20,
search_term: searchTerm || undefined,
sort_by: librarySort,
},
{
query: {
initialData: getInitialData(cachedAgents, searchTerm, 8),
getNextPageParam: getPaginationNextPageNumber,
},
},
);
// Reset queries when sort changes to ensure fresh data with correct sorting
useEffect(() => {
if (prevSortRef.current !== null && prevSortRef.current !== librarySort) {
// Reset all library agent queries to ensure fresh fetch with new sort
queryClient.resetQueries({
queryKey: ["/api/library/agents"],
});
}
prevSortRef.current = librarySort;
}, [librarySort, queryClient]);
const allAgents = agentsQueryData
? unpaginate(agentsQueryData, "agents")
: [];
@@ -48,4 +63,4 @@ export const useLibraryAgentList = () => {
isFetchingNextPage,
fetchNextPage,
};
};
}
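The useRef/useEffect block above resets the library-agent queries only when the sort value actually changes, skipping the initial render so the cached first page is not thrown away on mount. A generic sketch of that pattern, assuming TanStack Query's standard useQueryClient hook (the diff itself goes through a local getQueryClient helper) and a stable queryKey reference:

```tsx
import { useEffect, useRef } from "react";
import { useQueryClient } from "@tanstack/react-query";

// Resets the matching queries whenever `value` changes after the first render,
// so a sort/filter change triggers a fresh fetch instead of reusing stale pages.
export function useResetQueriesOnChange<T>(value: T, queryKey: unknown[]) {
  const queryClient = useQueryClient();
  const prev = useRef<T | null>(null);

  useEffect(() => {
    if (prev.current !== null && prev.current !== value) {
      queryClient.resetQueries({ queryKey });
    }
    prev.current = value;
  }, [value, queryClient, queryKey]);
}
```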

View File

@@ -1,175 +0,0 @@
import Image from "next/image";
import { Button } from "@/components/__legacy__/ui/button";
import { Separator } from "@/components/__legacy__/ui/separator";
import {
CirclePlayIcon,
ClipboardCopy,
ImageIcon,
PlayCircle,
Share2,
X,
} from "lucide-react";
export interface NotificationCardData {
type: "text" | "image" | "video" | "audio";
title: string;
id: string;
content?: string;
mediaUrl?: string;
}
interface NotificationCardProps {
notification: NotificationCardData;
onClose: () => void;
}
const NotificationCard = ({
notification: { type, title, content, mediaUrl },
onClose,
}: NotificationCardProps) => {
const barHeights = Array.from({ length: 60 }, () =>
Math.floor(Math.random() * (34 - 20 + 1) + 20),
);
const handleClose = (e: React.MouseEvent<HTMLButtonElement>) => {
e.preventDefault();
onClose();
};
return (
<div className="w-[430px] space-y-[22px] rounded-[14px] border border-neutral-100 bg-neutral-50 p-[16px] pt-[12px]">
<div className="flex items-center justify-between">
{/* count */}
<div className="flex items-center gap-[10px]">
<p className="font-sans text-[12px] font-medium text-neutral-500">
1/4
</p>
<p className="h-[26px] rounded-[45px] bg-green-100 px-[9px] py-[3px] font-sans text-[12px] font-medium text-green-800">
Success
</p>
</div>
{/* cross icon */}
<Button
variant="ghost"
className="p-0 hover:bg-transparent"
onClick={handleClose}
>
<X
className="h-6 w-6 text-[#020617] hover:scale-105"
strokeWidth={1.25}
/>
</Button>
</div>
<div className="space-y-[6px] p-0">
<p className="font-sans text-[14px] font-medium leading-[20px] text-neutral-500">
New Output Ready!
</p>
<h2 className="font-poppin text-[20px] font-medium leading-7 text-neutral-800">
{title}
</h2>
{type === "text" && <Separator />}
</div>
<div className="p-0">
{type === "text" && (
// Maybe in the future we can add markdown support
<div className="mt-[-8px] line-clamp-6 font-sans text-sm font-[400px] text-neutral-600">
{content}
</div>
)}
{type === "image" &&
(mediaUrl ? (
<div className="relative h-[200px] w-full">
<Image
src={mediaUrl}
alt={title}
fill
className="rounded-lg object-cover"
/>
</div>
) : (
<div className="flex h-[244px] w-full items-center justify-center rounded-lg bg-[#D9D9D9]">
<ImageIcon
className="h-[138px] w-[138px] text-neutral-400"
strokeWidth={1}
/>
</div>
))}
{type === "video" && (
<div className="space-y-4">
{mediaUrl ? (
<video src={mediaUrl} controls className="w-full rounded-lg" />
) : (
<div className="flex h-[219px] w-[398px] items-center justify-center rounded-lg bg-[#D9D9D9]">
<PlayCircle
className="h-16 w-16 text-neutral-500"
strokeWidth={1}
/>
</div>
)}
</div>
)}
{type === "audio" && (
<div className="flex gap-2">
<CirclePlayIcon
className="h-10 w-10 rounded-full bg-neutral-800 text-white"
strokeWidth={1}
/>
<div className="flex flex-1 items-center justify-between">
{/* <audio src={mediaUrl} controls className="w-full" /> */}
{barHeights.map((h, i) => {
return (
<div
key={i}
className={`rounded-[8px] bg-neutral-500`}
style={{
height: `${h}px`,
width: "3px",
}}
/>
);
})}
</div>
</div>
)}
</div>
<div className="flex justify-between gap-2 p-0">
<div className="space-x-3">
<Button
variant="outline"
onClick={() => {
navigator.share({
title,
text: content,
url: mediaUrl,
});
}}
className="h-10 w-10 rounded-full border-neutral-800 p-0"
>
<Share2 className="h-5 w-5" strokeWidth={1} />
</Button>
<Button
variant="outline"
onClick={() =>
navigator.clipboard.writeText(content || mediaUrl || "")
}
className="h-10 w-10 rounded-full border-neutral-800 p-0"
>
<ClipboardCopy className="h-5 w-5" strokeWidth={1} />
</Button>
</div>
<Button className="h-[40px] rounded-[52px] bg-neutral-800 px-4 py-2">
See run
</Button>
</div>
</div>
);
};
export default NotificationCard;
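For reference, the share button in the removed card calls navigator.share unconditionally, which throws in browsers that do not implement the Web Share API. A feature-detected variant with a clipboard fallback might look like this (a hypothetical helper, not part of this diff):

```ts
// Falls back to the clipboard when the Web Share API is unavailable or fails.
async function shareOrCopy(title: string, text?: string, url?: string): Promise<void> {
  if (typeof navigator.share === "function") {
    try {
      await navigator.share({ title, text, url });
      return;
    } catch {
      // User cancelled or the share sheet failed; fall through to the clipboard.
    }
  }
  await navigator.clipboard.writeText(url ?? text ?? title);
}
```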

View File

@@ -1,132 +0,0 @@
"use client";
import React, { useState, useEffect, useMemo } from "react";
import { motion, useAnimationControls } from "framer-motion";
import { BellIcon, X } from "lucide-react";
import { Button } from "@/components/__legacy__/Button";
import {
DropdownMenu,
DropdownMenuContent,
DropdownMenuItem,
DropdownMenuLabel,
DropdownMenuTrigger,
} from "@/components/__legacy__/ui/dropdown-menu";
import NotificationCard, {
NotificationCardData,
} from "../LibraryNotificationCard/LibraryNotificationCard";
export default function LibraryNotificationDropdown(): React.ReactNode {
const controls = useAnimationControls();
const [open, setOpen] = useState(false);
const [notifications, setNotifications] = useState<
NotificationCardData[] | null
>(null);
const initialNotificationData = useMemo(
() =>
[
{
type: "audio",
title: "Audio Processing Complete",
id: "4",
},
{
type: "text",
title: "LinkedIn Post Generator: YouTube to Professional Content",
id: "1",
content:
"As artificial intelligence (AI) continues to evolve, it's increasingly clear that AI isn't just a trend—it's reshaping the way we work, innovate, and solve complex problems. However, for many professionals, the question remains: How can I leverage AI to drive meaningful results in my own field? In this article, we'll explore how AI can empower businesses and individuals alike to be more efficient, make better decisions, and unlock new opportunities. Whether you're in tech, finance, healthcare, or any other industry, understanding the potential of AI can set you apart.",
},
{
type: "image",
title: "New Image Upload",
id: "2",
},
{
type: "video",
title: "Video Processing Complete",
id: "3",
},
] as NotificationCardData[],
[],
);
useEffect(() => {
if (initialNotificationData) {
setNotifications(initialNotificationData);
}
}, [initialNotificationData]);
const handleHoverStart = () => {
controls.start({
rotate: [0, -10, 10, -10, 10, 0],
transition: { duration: 0.5 },
});
};
return (
<DropdownMenu open={open} onOpenChange={setOpen}>
<DropdownMenuTrigger className="sm:flex-1" asChild>
<Button
variant={open ? "primary" : "outline"}
onMouseEnter={handleHoverStart}
onMouseLeave={handleHoverStart}
className="w-fit max-w-[161px] transition-all duration-200 ease-in-out sm:w-[161px]"
>
<motion.div animate={controls}>
<BellIcon
className="h-5 w-5 transition-all duration-200 ease-in-out sm:mr-2"
strokeWidth={2}
/>
</motion.div>
<motion.div
initial={{ opacity: 1 }}
animate={{ opacity: 1 }}
exit={{ opacity: 0 }}
className="hidden items-center transition-opacity duration-300 sm:inline-flex"
>
Your updates
<span className="ml-2 text-[14px]">
{notifications?.length || 0}
</span>
</motion.div>
</Button>
</DropdownMenuTrigger>
<DropdownMenuContent
sideOffset={22}
className="relative left-[16px] h-[80vh] w-fit overflow-y-auto rounded-[26px] bg-[#C5C5CA] p-5"
>
<DropdownMenuLabel className="z-10 mb-4 font-sans text-[18px] text-white">
Agent run updates
</DropdownMenuLabel>
<button
className="absolute right-[10px] top-[20px] h-fit w-fit"
onClick={() => setOpen(false)}
>
<X className="h-6 w-6 text-white hover:text-white/60" />
</button>
<div className="space-y-[12px]">
{notifications && notifications.length ? (
notifications.map((notification) => (
<DropdownMenuItem key={notification.id} className="p-0">
<NotificationCard
notification={notification}
onClose={() =>
setNotifications((prev) => {
if (!prev) return null;
return prev.filter((n) => n.id !== notification.id);
})
}
/>
</DropdownMenuItem>
))
) : (
<div className="w-[464px] py-4 text-center text-white">
No notifications present
</div>
)}
</div>
</DropdownMenuContent>
</DropdownMenu>
);
}

View File

@@ -1,40 +1,37 @@
"use client";
import { Input } from "@/components/__legacy__/ui/input";
import { Search, X } from "lucide-react";
import { Input } from "@/components/atoms/Input/Input";
import { MagnifyingGlassIcon } from "@phosphor-icons/react";
import { useLibrarySearchbar } from "./useLibrarySearchbar";
export default function LibrarySearchBar(): React.ReactNode {
const { handleSearchInput, handleClear, setIsFocused, isFocused, inputRef } =
useLibrarySearchbar();
interface Props {
setSearchTerm: (value: string) => void;
}
export function LibrarySearchBar({ setSearchTerm }: Props) {
const { handleSearchInput } = useLibrarySearchbar({ setSearchTerm });
return (
<div
data-testid="search-bar"
onClick={() => inputRef.current?.focus()}
className="relative z-[21] mx-auto flex h-[50px] w-full max-w-[500px] flex-1 cursor-pointer items-center rounded-[45px] bg-[#EDEDED] px-[24px] py-[10px]"
className="relative z-[21] -mb-6 flex w-full items-center md:w-auto"
>
<Search
className="mr-2 h-[29px] w-[29px] text-neutral-900"
strokeWidth={1.25}
<MagnifyingGlassIcon
width={18}
height={18}
className="absolute left-4 top-[34%] z-20 -translate-y-1/2 text-zinc-800"
/>
<Input
ref={inputRef}
onFocus={() => setIsFocused(true)}
onBlur={() => !inputRef.current?.value && setIsFocused(false)}
label="Search agents"
id="library-search-bar"
hideLabel
onChange={handleSearchInput}
className="flex-1 border-none font-sans text-[16px] font-normal leading-7 shadow-none focus:shadow-none focus:ring-0"
className="min-w-[18rem] pl-12 lg:min-w-[30rem]"
type="text"
data-testid="library-textbox"
placeholder="Search agents"
/>
{isFocused && inputRef.current?.value && (
<X
className="ml-2 h-[29px] w-[29px] cursor-pointer text-neutral-900"
strokeWidth={1.25}
onClick={handleClear}
/>
)}
</div>
);
}

View File

@@ -1,36 +1,30 @@
import { useRef, useState } from "react";
import { useLibraryPageContext } from "../state-provider";
import { debounce } from "lodash";
import { useCallback, useEffect } from "react";
export const useLibrarySearchbar = () => {
const inputRef = useRef<HTMLInputElement>(null);
const [isFocused, setIsFocused] = useState(false);
const { setSearchTerm } = useLibraryPageContext();
interface Props {
setSearchTerm: (value: string) => void;
}
const debouncedSearch = debounce((value: string) => {
setSearchTerm(value);
}, 300);
export function useLibrarySearchbar({ setSearchTerm }: Props) {
const debouncedSearch = useCallback(
debounce((value: string) => {
setSearchTerm(value);
}, 300),
[setSearchTerm],
);
const handleSearchInput = (e: React.ChangeEvent<HTMLInputElement>) => {
useEffect(() => {
return () => {
debouncedSearch.cancel();
};
}, [debouncedSearch]);
function handleSearchInput(e: React.ChangeEvent<HTMLInputElement>) {
const searchTerm = e.target.value;
debouncedSearch(searchTerm);
};
const handleClear = (e: React.MouseEvent) => {
if (inputRef.current) {
inputRef.current.value = "";
inputRef.current.blur();
setSearchTerm("");
e.preventDefault();
}
setIsFocused(false);
};
}
return {
handleClear,
handleSearchInput,
isFocused,
inputRef,
setIsFocused,
};
};
}
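The hook above memoizes a lodash debounce and cancels it on unmount so a pending search cannot fire after the component is gone. A compact sketch of the same idea using useMemo (an equivalent alternative, not the exact diff code):

```tsx
import { type ChangeEvent, useEffect, useMemo } from "react";
import { debounce } from "lodash";

// Debounces a string setter and cancels any pending call on unmount.
export function useDebouncedSearch(setSearchTerm: (value: string) => void, waitMs = 300) {
  // One debounced instance per setter; recreated only if the setter or delay changes.
  const debounced = useMemo(() => debounce(setSearchTerm, waitMs), [setSearchTerm, waitMs]);

  useEffect(() => {
    // lodash debounced functions expose cancel(); stop pending calls on unmount.
    return () => debounced.cancel();
  }, [debounced]);

  return (e: ChangeEvent<HTMLInputElement>) => debounced(e.target.value);
}
```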

View File

@@ -1,5 +1,5 @@
"use client";
import { ArrowDownNarrowWideIcon } from "lucide-react";
import { LibraryAgentSort } from "@/app/api/__generated__/models/libraryAgentSort";
import {
Select,
SelectContent,
@@ -8,11 +8,15 @@ import {
SelectTrigger,
SelectValue,
} from "@/components/__legacy__/ui/select";
import { LibraryAgentSort } from "@/app/api/__generated__/models/libraryAgentSort";
import { ArrowDownNarrowWideIcon } from "lucide-react";
import { useLibrarySortMenu } from "./useLibrarySortMenu";
export default function LibrarySortMenu(): React.ReactNode {
const { handleSortChange } = useLibrarySortMenu();
interface Props {
setLibrarySort: (value: LibraryAgentSort) => void;
}
export function LibrarySortMenu({ setLibrarySort }: Props) {
const { handleSortChange } = useLibrarySortMenu({ setLibrarySort });
return (
<div className="flex items-center" data-testid="sort-by-dropdown">
<span className="hidden whitespace-nowrap sm:inline">sort by</span>

Some files were not shown because too many files have changed in this diff.