Compare commits

...

16 Commits

Author SHA1 Message Date
Abhimanyu Yadav
8fa75c8da4 Merge branch 'dev' into testing-claude-code 2026-01-22 16:39:44 +05:30
Abhimanyu Yadav
b0953654d9 feat(frontend): add integration testing setup with Vitest, MSW, and RTL (#11813)
### Changes 🏗️

- Added Vitest and React Testing Library for frontend unit testing
- Configured MSW (Mock Service Worker) for API mocking in tests
- Created test utilities and setup files for integration tests
- Added comprehensive testing documentation in `AGENTS.md`
- Updated Orval configuration to generate MSW mock handlers
- Added mock server and browser implementations for development testing

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run `pnpm test:unit` to verify tests pass
  - [x] Verify MSW mock handlers are generated correctly
  - [x] Check that test utilities work with sample component tests

#### For configuration changes:

- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
2026-01-22 10:10:00 +00:00
Ubbe
c5069ca48f fix(frontend): chat UX improvements (#11804)
### Changes 🏗️

<img width="1920" height="998" alt="Screenshot 2026-01-19 at 22 14 51"
src="https://github.com/user-attachments/assets/ecd1c241-6f77-4702-9774-5e58806b0b64"
/>

This PR lays the groundwork for the new UX of AutoGPT Copilot.
- Moves the Copilot to its own route `/copilot`
- Makes the Copilot the homepage when enabled
- Updates the labelling of the homepage icons
- Makes the Library the homepage when Copilot is disabled
- Improves Copilot's:
  - session handling
  - styles and UX
  - message parsing
  
### Other improvements

- Improves the logout UX by adding a new `/logout` page and using a redirect

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run locally and test the above

<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> Launches the new Copilot experience and aligns API behavior with the
UI.
> 
> - **Routing/Home**: Add `/copilot` with `CopilotShell` (desktop
sidebar + mobile drawer), make homepage route flag-driven; update
login/signup/error redirects and root page to use `getHomepageRoute`.
> - **Chat UX**: Replace legacy chat with `components/contextual/Chat/*`
(new message list, bubbles, tool call/response formatting, stop button,
initial-prompt handling, refined streaming/error handling); remove old
platform chat components.
> - **Sessions**: Add paginated session list (infinite load),
auto-select/create logic, mobile/desktop navigation, and improved
session fetching/claiming guards.
> - **Auth/Logout**: New `/logout` flow with delayed redirect; gate
various queries on auth state and logout-in-progress.
> - **Backend**: `GET /api/chat/sessions/{id}` returns `null` instead of
404; service saves assistant message on `StreamFinish` to avoid loss and
prevents duplicate saves; OpenAPI updated accordingly.
> - **Misc**: Minor UI polish in library modals, loader styling, docs
(CONTRIBUTING) additions, and small formatting fixes in block docs
generator.
> 
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
1b4776dcf5. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->
2026-01-22 16:43:42 +07:00
Zamil Majdy
5d0cd88d98 fix(backend): Use unqualified vector type for pgvector queries (#11818)
## Summary
- Remove explicit schema qualification (`{schema}.vector` and
`OPERATOR({schema}.<=>)`) from pgvector queries in `embeddings.py` and
`hybrid_search.py`
- Use unqualified `::vector` type cast and `<=>` operator which work
because pgvector is in the search_path on all environments

## Problem
The previous approach tried to explicitly qualify the vector type with
schema names, but this failed because:
- **CI environment**: pgvector is in `public` schema → `platform.vector`
doesn't exist
- **Dev (Supabase)**: pgvector is in `platform` schema → `public.vector`
doesn't exist

## Solution
Use unqualified `::vector` and `<=>` operator. PostgreSQL resolves these
via `search_path`, which includes the schema where pgvector is installed
on all environments.
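
For illustration, a minimal sketch of the resulting query shape (table and column names are hypothetical, not the actual `embeddings.py` code):

```python
# Hypothetical sketch of the unqualified pattern; names are illustrative.
def similarity_query(table: str = "embeddings", limit: int = 10) -> str:
    """Build a pgvector similarity query that relies on search_path.

    Unqualified `::vector` and `<=>` resolve to whichever schema pgvector
    is installed in (public on CI, platform on Supabase dev).
    """
    return f"""
        SELECT id, 1 - (embedding <=> $1::vector) AS similarity
        FROM {table}
        ORDER BY embedding <=> $1::vector
        LIMIT {limit}
    """
```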

Tested on both local and dev environments with a test script that
verified:
- Unqualified `::vector` type cast
- Unqualified `<=>` operator in ORDER BY
- Unqualified `<=>` in SELECT (similarity calculation)
- Combined query patterns matching actual usage

## Test plan
- [ ] CI tests pass
- [ ] Marketplace approval works on dev after deployment

Fixes: AUTOGPT-SERVER-763, AUTOGPT-SERVER-764, AUTOGPT-SERVER-76B
2026-01-21 18:11:58 +00:00
Zamil Majdy
033f58c075 fix(backend): Make Redis event bus gracefully handle connection failures (#11817)
## Summary
Adds graceful error handling to AsyncRedisEventBus and RedisEventBus so
that connection failures log exceptions with full traceback while
remaining non-breaking. This allows DatabaseManager to operate without
Redis connectivity.

## Problem
DatabaseManager was failing with "Authentication required" when trying
to publish notifications via AsyncRedisNotificationEventBus. The service
has no Redis credentials configured, causing `increment_onboarding_runs`
to fail.

## Root Cause
When `increment_onboarding_runs` publishes a notification:
1. Calls `AsyncRedisNotificationEventBus().publish()`
2. Attempts to connect to Redis via `get_redis_async()`
3. Connection fails due to missing credentials
4. Exception propagates, failing the entire DB operation

The previous fix (#11775) made the cache module lazy, but didn't address the
notification bus, which also requires Redis.

## Solution
Wrap Redis operations in try-except blocks:
- `publish_event`: Logs exception with traceback, continues without
publishing
- `listen_events`: Logs exception with traceback, returns empty
generator
- `wait_for_event`: Returns None on connection failure

Using `logger.exception()` instead of `logger.warning()` ensures full
stack traces are captured for debugging while keeping operations
non-breaking.

This allows services to operate without Redis when only using event bus
for non-critical notifications.
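
A minimal sketch of the degradation pattern, assuming a synchronous `redis` client (class and method names are illustrative, not the actual `event_bus.py` code):

```python
import logging

from redis import Redis
from redis.exceptions import RedisError

logger = logging.getLogger(__name__)


class GracefulEventBus:
    def __init__(self, client: Redis) -> None:
        self.client = client

    def publish_event(self, channel: str, payload: str) -> None:
        try:
            self.client.publish(channel, payload)
        except (RedisError, OSError):
            # logger.exception() records the full traceback but does not
            # re-raise, so callers like increment_onboarding_runs proceed.
            logger.exception("Failed to publish event to %s", channel)
```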

## Changes
- Modified `backend/data/event_bus.py`:
  - Added graceful error handling to `RedisEventBus` and `AsyncRedisEventBus`
  - All Redis operations now catch exceptions and log with `logger.exception()`
- Added `backend/data/event_bus_test.py`:
  - Tests verify graceful degradation when Redis is unavailable
  - Tests verify normal operation when Redis is available

## Test Plan
- [x] New tests verify graceful degradation when Redis unavailable
- [x] Existing notification tests still pass
- [x] DatabaseManager can increment onboarding runs without Redis

## Related Issues
Fixes https://significant-gravitas.sentry.io/issues/7205834440/
(AUTOGPT-SERVER-76D)
2026-01-21 15:51:26 +00:00
Ubbe
40ef2d511f fix(frontend): auto-select credentials correctly in old builder (#11815)
## Changes 🏗️

On the **Old Builder**, when running an agent...

### Before

<img width="800" height="614" alt="Screenshot 2026-01-21 at 21 27 05"
src="https://github.com/user-attachments/assets/a3b2ec17-597f-44d2-9130-9e7931599c38"
/>

Credentials are there, but the builder is not recognising them; you need to
click them to select them.

### After

<img width="1029" height="728" alt="Screenshot 2026-01-21 at 21 26 47"
src="https://github.com/user-attachments/assets/c6e83846-6048-439e-919d-6807674f2d5a"
/>

It uses the new credentials UI and correctly auto-selects existing ones.

### Other

Fixed a small timezone display glitch on the new library view.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run agent in old builder
  - [x] Credentials are auto-selected and use the new collapsed system credentials UI
2026-01-21 14:55:49 +00:00
Zamil Majdy
b714c0c221 fix(backend): handle null values in GraphSettings validation (#11812)
## Summary
- Fixes AUTOGPT-SERVER-76H - Error parsing LibraryAgent from database
due to null values in GraphSettings fields
- When parsing LibraryAgent settings from the database, null values for
`human_in_the_loop_safe_mode` and `sensitive_action_safe_mode` were
causing Pydantic validation errors
- Adds `BeforeValidator` annotations to coerce null values to their
defaults (True and False respectively); see the sketch below
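
A minimal sketch of the coercion, assuming Pydantic v2 (the model and helper here are illustrative, not the actual `GraphSettings` code):

```python
from typing import Annotated

from pydantic import BaseModel, BeforeValidator


def _none_to(default: bool):
    # Runs before bool validation, so a null from the DB becomes the default.
    return lambda value: default if value is None else value


class GraphSettingsSketch(BaseModel):
    human_in_the_loop_safe_mode: Annotated[
        bool, BeforeValidator(_none_to(True))
    ] = True
    sensitive_action_safe_mode: Annotated[
        bool, BeforeValidator(_none_to(False))
    ] = False


# Null values no longer raise a ValidationError:
print(GraphSettingsSketch(human_in_the_loop_safe_mode=None))
# human_in_the_loop_safe_mode=True sensitive_action_safe_mode=False
```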

## Test plan
- [x] Verified with unit tests that GraphSettings can now handle
None/null values
- [x] Backend tests pass
- [x] Manually tested with all scenarios (None, empty dict, explicit
values)
2026-01-21 08:40:38 -05:00
Krzysztof Czerwinski
ebabc4287e feat(platform): New LLM Picker UI (#11726)
Add new LLM Picker for the new Builder.

### Changes 🏗️

- Enrich `LlmModelMeta` (in `llm.py`) with human-readable model, creator,
and provider names and a price tier (note: this is a temporary measure;
all `LlmModelMeta` will be removed completely once the LLM Registry is ready)
- Add provider icons
- Add custom input field `LlmModelField` and its components & helpers

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] LLM model picker works correctly in the new Builder
  - [x] Legacy LLM model picker works in the old Builder
2026-01-21 10:52:55 +00:00
Zamil Majdy
8b25e62959 feat(backend,frontend): add explicit safe mode toggles for HITL and sensitive actions (#11756)
## Summary

This PR introduces two explicit safe mode toggles for controlling agent
execution behavior, providing clearer and more granular control over
when agents should pause for human review.

### Key Changes

**New Safe Mode Settings:**
- **`human_in_the_loop_safe_mode`** (bool, default `true`) - Controls
whether human-in-the-loop (HITL) blocks pause for review
- **`sensitive_action_safe_mode`** (bool, default `false`) - Controls
whether sensitive action blocks pause for review

**New Computed Properties on LibraryAgent:**
- `has_human_in_the_loop` - Indicates if agent contains HITL blocks
- `has_sensitive_action` - Indicates if agent contains sensitive action
blocks

**Block Changes:**
- Renamed `requires_human_review` to `is_sensitive_action` on blocks for
clarity
- Blocks marked as `is_sensitive_action=True` pause only when
`sensitive_action_safe_mode=True`
- HITL blocks pause when `human_in_the_loop_safe_mode=True` (see the sketch
after this list)

**Frontend Changes:**
- Two separate toggles in Agent Settings based on block types present
- Toggle visibility based on `has_human_in_the_loop` and
`has_sensitive_action` computed properties
- Settings cog hidden if neither toggle applies
- Proper state management for both toggles with defaults

**AI-Generated Agent Behavior:**
- AI-generated agents set `sensitive_action_safe_mode=True` by default
- This ensures sensitive actions are reviewed for AI-generated content

## Changes

**Backend:**
- `backend/data/graph.py` - Updated `GraphSettings` with two boolean
toggles (non-optional with defaults), added `has_sensitive_action`
computed property
- `backend/data/block.py` - Renamed `requires_human_review` to
`is_sensitive_action`, updated review logic
- `backend/data/execution.py` - Updated `ExecutionContext` with both
safe mode fields
- `backend/api/features/library/model.py` - Added
`has_human_in_the_loop` and `has_sensitive_action` to `LibraryAgent`
- `backend/api/features/library/db.py` - Updated to use
`sensitive_action_safe_mode` parameter
- `backend/executor/utils.py` - Simplified execution context creation

**Frontend:**
- `useAgentSafeMode.ts` - Rewritten to support two independent toggles
- `AgentSettingsModal.tsx` - Shows two separate toggles
- `SelectedSettingsView.tsx` - Shows two separate toggles
- Regenerated API types with new schema

## Test Plan

- [x] All backend tests pass (Python 3.11, 3.12, 3.13)
- [x] All frontend tests pass
- [x] Backend format and lint pass
- [x] Frontend format and lint pass
- [x] Pre-commit hooks pass

---------

Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
2026-01-21 00:56:02 +00:00
Zamil Majdy
35a13e3df5 fix(backend): Use explicit schema qualification for pgvector types (#11805)
## Summary
- Fix intermittent "type 'vector' does not exist" errors when using
PgBouncer in transaction mode
- The issue was that `SET search_path` and the actual query could run on
different backend connections
- Use explicit schema qualification (`{schema}.vector`,
`OPERATOR({schema}.<=>)`) instead of relying on search_path (illustrated below)
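
A sketch of the qualified form (names are illustrative; note this approach was later replaced by the unqualified one in #11818 above):

```python
# Hypothetical sketch; mirrors the earlier unqualified example.
def qualified_similarity_query(schema: str = "platform") -> str:
    """Qualify the type and operator so the query does not depend on
    search_path surviving PgBouncer's transaction-mode connection swaps."""
    return f"""
        SELECT id, 1 - (embedding OPERATOR({schema}.<=>) $1::{schema}.vector) AS similarity
        FROM embeddings
        ORDER BY embedding OPERATOR({schema}.<=>) $1::{schema}.vector
        LIMIT 10
    """
```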

## Test plan
- [x] Tested vector type cast on local: `'[1,2,3]'::platform.vector`
works
- [x] Tested OPERATOR syntax on local: `OPERATOR(platform.<=>)` works
- [x] Tested on dev via kubectl exec: both work correctly
- [ ] Deploy to dev and verify backfill_missing_embeddings endpoint no
longer errors

## Related Issues
Fixes: AUTOGPT-SERVER-763, AUTOGPT-SERVER-764

---------

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-20 22:18:16 +00:00
Mewael Tsegay Desta
2169b433c9 feat(backend/blocks): add ConcatenateListsBlock (#11567)
# feat(backend/blocks): add ConcatenateListsBlock

## Description

This PR implements a new block `ConcatenateListsBlock` that concatenates
multiple lists into a single list. This addresses the "good first issue"
for implementing a list concatenation block in the platform/blocks area.

The block takes a list of lists as input and combines all elements in
order into a single concatenated list. This is useful for workflows that
need to merge data from multiple sources or combine results from
different operations.

### Changes 🏗️

- **Added `ConcatenateListsBlock` class** in
`autogpt_platform/backend/backend/blocks/data_manipulation.py`
  - Input: `lists: List[List[Any]]` - accepts a list of lists to concatenate
  - Output: `concatenated_list: List[Any]` - returns a single concatenated list
  - Error output: `error: str` - provides clear error messages for invalid input types
  - Block ID: `3cf9298b-5817-4141-9d80-7c2cc5199c8e`
  - Category: `BlockCategory.BASIC` (consistent with other list manipulation blocks)

- **Added comprehensive test suite** in
`autogpt_platform/backend/test/blocks/test_concatenate_lists.py`
  - Tests using built-in `test_input`/`test_output` validation
  - Manual test cases covering edge cases (empty lists, single list, empty input)
  - Error handling tests for invalid input types
  - Category consistency verification
  - All tests passing

- **Implementation details** (a sketch follows this list):
  - Uses `extend()` method for efficient list concatenation
  - Preserves element order from all input lists
  - **Runtime type validation**: Explicitly checks `isinstance(lst, list)` before calling `extend()` to prevent:
    - Strings being iterated character-by-character (e.g., `extend("abc")` → `['a', 'b', 'c']`)
    - Non-iterable types causing `TypeError` (e.g., `extend(1)`)
  - Clear error messages indicating which index has invalid input
  - Handles edge cases: empty lists, empty input, single list, None values
  - Follows existing block patterns and conventions
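
A sketch of the core logic described above, stripped of the platform's Block scaffolding (the standalone function shape is illustrative):

```python
from typing import Any


def concatenate_lists(lists: list[list[Any]]) -> list[Any]:
    result: list[Any] = []
    for index, lst in enumerate(lists):
        if lst is None:
            continue  # None entries are skipped rather than treated as lists
        if not isinstance(lst, list):
            # Prevents extend("abc") -> ['a', 'b', 'c'] and TypeError on
            # non-iterables, and names the offending index in the message.
            raise ValueError(f"Input at index {index} is not a list")
        result.extend(lst)
    return result


assert concatenate_lists([[1, 2], [], [3]]) == [1, 2, 3]
```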

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run `poetry run pytest test/blocks/test_concatenate_lists.py -v` - all tests pass
  - [x] Verified block can be imported and instantiated
  - [x] Tested with built-in test cases (4 test scenarios)
  - [x] Tested manual edge cases (empty lists, single list, empty input)
  - [x] Tested error handling for invalid input types
  - [x] Verified category is `BASIC` for consistency
  - [x] Verified no linting errors
  - [x] Confirmed block follows same patterns as other blocks in `data_manipulation.py`

#### Code Quality:
- [x] Code follows existing patterns and conventions
- [x] Type hints are properly used
- [x] Documentation strings are clear and descriptive
- [x] Runtime type validation implemented
- [x] Error handling with clear error messages
- [x] No linting errors
- [x] Prisma client generated successfully

### Testing

**Test Results:**
```
test/blocks/test_concatenate_lists.py::test_concatenate_lists_block_builtin_tests PASSED
test/blocks/test_concatenate_lists.py::test_concatenate_lists_manual PASSED

============================== 2 passed in 8.35s ==============================
```

**Test Coverage:**
- Basic concatenation: `[[1, 2, 3], [4, 5, 6]]` → `[1, 2, 3, 4, 5, 6]`
- Mixed types: `[["a", "b"], ["c"], ["d", "e", "f"]]` → `["a", "b", "c",
"d", "e", "f"]`
- Empty list handling: `[[1, 2], []]` → `[1, 2]`
- Empty input: `[]` → `[]`
- Single list: `[[1, 2, 3]]` → `[1, 2, 3]`
- Error handling: Invalid input types (strings, non-lists) produce clear
error messages
- Category verification: Confirmed `BlockCategory.BASIC` for consistency

### Review Feedback Addressed

- **Category Consistency**: Changed from `BlockCategory.DATA` to
`BlockCategory.BASIC` to match other list manipulation blocks
(`AddToListBlock`, `FindInListBlock`, etc.)
- **Type Robustness**: Added explicit runtime validation with
`isinstance(lst, list)` check before calling `extend()` to prevent:
  - Strings being iterated character-by-character
  - Non-iterable types causing `TypeError`
- **Error Handling**: Added `error` output field with clear, descriptive
error messages indicating which index has invalid input
- **Test Coverage**: Added test case for error handling with invalid
input types

### Related Issues

- Addresses: "Implement block to concatenate lists" (good first issue,
platform/blocks, hacktoberfest)

### Notes

- This is a straightforward data manipulation block that doesn't require
external dependencies
- The block will be automatically discovered by the block loading system
- No database or configuration changes required
- Compatible with existing workflow system
- All review feedback has been addressed and incorporated


<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> Adds a new list utility and updates docs.
> 
> - **New block**: `ConcatenateListsBlock` in
`backend/blocks/data_manipulation.py`
> - Input `lists: List[List[Any]]`; outputs `concatenated_list` or
`error`
> - Skips `None` entries; emits error for non-list items; preserves
order
> - **Docs**: Adds "Concatenate Lists" section to
`docs/integrations/basic.md` and links it in
`docs/integrations/README.md`
> - **Contributor guide**: New `docs/CLAUDE.md` with manual doc section
guidelines
> 
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
4f56dd86c2. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->

---------

Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-20 18:04:12 +00:00
Nicholas Tindle
fa0b7029dd fix(platform): make chat credentials type selection deterministic (#11795)
## Background

When using chat to run blocks/agents that support multiple credential
types (e.g., GitHub blocks support both `api_key` and `oauth2`), users
reported that the credentials setup UI would randomly show either "Add
API key" or "Connect account (OAuth)" - seemingly at random between
requests or server restarts.

## Root Cause

The bug was in how the backend selected which credential type to return
when building the missing credentials response:

```python
cred_type = next(iter(field_info.supported_types), "api_key")
```

The problem is that `supported_types` is a **frozenset**. When you call
`iter()` on a frozenset and take `next()`, the iteration order is
**non-deterministic** due to Python's hash randomization. This means:
- `frozenset({'api_key', 'oauth2'})` could iterate as either
`['api_key', 'oauth2']` or `['oauth2', 'api_key']`
- The order varies between Python process restarts and sometimes between
requests
- This caused the UI to randomly show different credential options (a fix
sketch follows below)
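
A minimal demonstration of the deterministic fix (variable names follow the snippet above):

```python
supported_types = frozenset({"oauth2", "api_key"})

# Old: first element of a frozenset iteration, which can differ between
# processes because string hashing is randomized.
# cred_type = next(iter(supported_types), "api_key")

# New: sort first, so the pick is stable across processes and requests.
types = sorted(supported_types)           # always ['api_key', 'oauth2']
cred_type = types[0] if types else "api_key"
print(cred_type, types)                   # api_key ['api_key', 'oauth2']
```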

### Changes 🏗️

**Backend (`utils.py`, `run_block.py`, `run_agent.py`):**
- Added `_serialize_missing_credential()` helper that uses `sorted()`
for deterministic ordering
- Added `build_missing_credentials_from_graph()` and
`build_missing_credentials_from_field_info()` utilities
- Now returns both `type` (first sorted type, for backwards compat) and
`types` (full array with ALL supported types)

**Frontend (`helpers.ts`, `ChatCredentialsSetup.tsx`,
`useChatMessage.ts`):**
- Updated to read the `types` array from backend response
- Changed `credentialType` (single) to `credentialTypes` (array)
throughout the chat credentials flow
- Passes all supported types to `CredentialsInput` via
`credentials_types` schema field

### Result

Now `useCredentials.ts` correctly sets both `supportsApiKey=true` AND
`supportsOAuth2=true` when both are supported, ensuring:
1. **Deterministic behavior** - no more random type selection
2. **All saved credentials shown** - credentials of any supported type
appear in the selection list

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Verified GitHub block shows consistent credential options across page reloads
  - [x] Verified both OAuth and API key credentials appear in selection when user has both saved
  - [x] Verified backend returns `types: ["api_key", "oauth2"]` array (checked via Python REPL)

<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> Ensures deterministic credential type selection and surfaces all
supported types end-to-end.
> 
> - Backend: add `_serialize_missing_credential`,
`build_missing_credentials_from_graph/field_info`;
`run_agent`/`run_block` now return missing credentials with stable
ordering and both `type` (first) and `types` (all).
> - Frontend: chat helpers and UI (`helpers.ts`,
`ChatCredentialsSetup.tsx`, `useChatMessage.ts`) now read `types`,
switch from single `credentialType` to `credentialTypes`, and pass all
supported `credentials_types` in schemas.
> 
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
7d80f4f0e0. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->

---------

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <ntindle@users.noreply.github.com>
2026-01-20 16:19:57 +00:00
Abhimanyu Yadav
c20ca47bb0 feat(frontend): enhance RunGraph and RunInputDialog components with loading states and improved UI (#11808)
### Changes 🏗️

- Enhanced UI for the Run Graph button with improved loading states and
animations
- Added color-coded edges in the flow editor based on output data types
- Improved the layout of the Run Input Dialog with a two-column grid
design
- Refined the styling of flow editor controls with consistent icon sizes
and colors
- Updated tutorial icons with better color and size customization
- Fixed credential field display to show provider name with "credential"
suffix
- Optimized draft saving by excluding node position changes to prevent
excessive saves when dragging nodes

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Verified that the Run Graph button shows proper loading states
  - [x] Confirmed that edges display correct colors based on data types
  - [x] Tested the Run Input Dialog layout with various input configurations
  - [x] Checked that flow editor controls display consistently
  - [x] Verified that tutorial icons render properly
  - [x] Confirmed credential fields show proper provider names
  - [x] Tested that dragging nodes doesn't trigger unnecessary draft saves
2026-01-20 15:50:23 +00:00
abhi1992002
919cc877ad feat(frontend): enhance UI components with animations and accessibility improvements
### Changes 🏗️
- Integrated `FadeIn` animations in `AgentsSection`, `FeaturedCreators`, `FeaturedSection`, `HeroSection`, and `BecomeACreator` components for improved visual appeal.
- Replaced static elements with `StaggeredList` in `FeaturedCreators` and `AgentsSection` for a more dynamic layout.
- Updated `SearchBar` to use `type="search"` and added `aria-label` for better accessibility.
- Enhanced `StoreCard` with focus-visible styles and keyboard navigation support.
- Refactored `FilterChips` to utilize `FilterChip` component for a more consistent design.

### Checklist 📋
- [x] Verified animations function correctly across components.
- [x] Ensured accessibility improvements are in place and tested.
- [x] Confirmed UI consistency with design specifications.
2026-01-20 20:12:33 +05:30
Abhimanyu Yadav
7756e2d12d refactor(frontend): refactor credentials input with unified CredentialsGroupedView component (#11801)
### Changes 🏗️

- Refactored the credentials input handling in the RunInputDialog to use
the shared CredentialsGroupedView component
- Moved CredentialsGroupedView from agent library to a shared component
location for reuse
- Fixed source name handling in edge creation to properly handle tool
source names
- Improved node output UI by replacing custom expand/collapse with
Accordion component
- Fixed timing of hardcoded values synchronization with handle IDs to
ensure proper loading
- Enabled NEW_FLOW_EDITOR and BUILDER_VIEW_SWITCH feature flags by
default

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Verified credentials input works in both agent run dialog and builder run dialog
  - [x] Confirmed node output accordion works correctly
  - [x] Tested flow editor with tools to ensure source name handling works properly
  - [x] Verified hardcoded values sync correctly with handle IDs

#### For configuration changes:

- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
2026-01-20 12:20:25 +00:00
Swifty
bc75d70e7d refactor(backend): Improve Langfuse tracing with v3 SDK patterns and @observe decorators (#11803)
<!-- Clearly explain the need for these changes: -->

This PR improves the Langfuse tracing implementation in the chat feature
by adopting the v3 SDK patterns, resulting in cleaner code and better
observability.

### Changes 🏗️

- **Simplified Langfuse client usage**: Replace manual client
initialization with `langfuse.get_client()` global singleton
- **Use v3 context managers**: Switch to
`start_as_current_observation()` and `propagate_attributes()` for
automatic trace propagation
- **Auto-instrument OpenAI calls**: Use `langfuse.openai` wrapper for
automatic LLM call tracing instead of manual generation tracking
- **Add `@observe` decorators**: All chat tools now have
`@observe(as_type="tool")` decorators for automatic tool execution
tracing (a usage sketch follows this list):
  - `add_understanding`
  - `view_agent_output` (renamed from `agent_output`)
  - `create_agent`
  - `edit_agent`
  - `find_agent`
  - `find_block`
  - `find_library_agent`
  - `get_doc_page`
  - `run_agent`
  - `run_block`
  - `search_docs`
- **Remove manual trace lifecycle**: Eliminated the verbose `finally`
block that manually ended traces/generations
- **Rename tool**: `agent_output` → `view_agent_output` for clarity
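
A minimal usage sketch of the v3 patterns named above (the tool body is made up; assumes the `LANGFUSE_*` environment variables are configured):

```python
from langfuse import get_client, observe


@observe(as_type="tool")
def find_block(query: str) -> list[str]:
    # The decorator records this call as a nested "tool" observation.
    return [name for name in ("http_request", "send_email") if query in name]


langfuse = get_client()  # global singleton, no manual key wiring

with langfuse.start_as_current_observation(as_type="span", name="user-copilot-request"):
    find_block("email")  # appears as a child of the span in Langfuse
```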

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Verified chat feature works with Langfuse tracing enabled
  - [x] Confirmed traces appear correctly in Langfuse dashboard with tool spans
  - [x] Tested tool execution flows show up as nested observations

#### For configuration changes:

- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)

No configuration changes required - uses existing Langfuse environment
variables.
2026-01-19 20:56:51 +00:00
239 changed files with 9334 additions and 2958 deletions

View File

```diff
@@ -16,6 +16,32 @@ See `docs/content/platform/getting-started.md` for setup instructions.
 - Format Python code with `poetry run format`.
 - Format frontend code using `pnpm format`.
 
+## Frontend guidelines:
+
+See `/frontend/CONTRIBUTING.md` for complete patterns. Quick reference:
+
+1. **Pages**: Create in `src/app/(platform)/feature-name/page.tsx`
+   - Add `usePageName.ts` hook for logic
+   - Put sub-components in local `components/` folder
+2. **Components**: Structure as `ComponentName/ComponentName.tsx` + `useComponentName.ts` + `helpers.ts`
+   - Use design system components from `src/components/` (atoms, molecules, organisms)
+   - Never use `src/components/__legacy__/*`
+3. **Data fetching**: Use generated API hooks from `@/app/api/__generated__/endpoints/`
+   - Regenerate with `pnpm generate:api`
+   - Pattern: `use{Method}{Version}{OperationName}`
+4. **Styling**: Tailwind CSS only, use design tokens, Phosphor Icons only
+5. **Testing**: Add Storybook stories for new components, Playwright for E2E
+6. **Code conventions**: Function declarations (not arrow functions) for components/handlers
+   - Component props should be `interface Props { ... }` (not exported) unless the interface needs to be used outside the component
+   - Separate render logic from business logic (component.tsx + useComponent.ts + helpers.ts)
+   - Colocate state when possible and avoid creating large components, use sub-components ( local `/components` folder next to the parent component ) when sensible
+   - Avoid large hooks, abstract logic into `helpers.ts` files when sensible
+   - Use function declarations for components, arrow functions only for callbacks
+   - No barrel files or `index.ts` re-exports
+   - Do not use `useCallback` or `useMemo` unless strictly needed
+   - Avoid comments at all times unless the code is very complex
+
 ## Testing
 - Backend: `poetry run test` (runs pytest with a docker based postgres + prisma).
```

View File

```diff
@@ -201,7 +201,7 @@ If you get any pushback or hit complex block conditions check the new_blocks gui
 3. Write tests alongside the route file
 4. Run `poetry run test` to verify
 
-**Frontend feature development:**
+### Frontend guidelines:
 
 See `/frontend/CONTRIBUTING.md` for complete patterns. Quick reference:
@@ -217,6 +217,14 @@ See `/frontend/CONTRIBUTING.md` for complete patterns. Quick reference:
 4. **Styling**: Tailwind CSS only, use design tokens, Phosphor Icons only
 5. **Testing**: Add Storybook stories for new components, Playwright for E2E
 6. **Code conventions**: Function declarations (not arrow functions) for components/handlers
+   - Component props should be `interface Props { ... }` (not exported) unless the interface needs to be used outside the component
+   - Separate render logic from business logic (component.tsx + useComponent.ts + helpers.ts)
+   - Colocate state when possible and avoid creating large components, use sub-components ( local `/components` folder next to the parent component ) when sensible
+   - Avoid large hooks, abstract logic into `helpers.ts` files when sensible
+   - Use function declarations for components, arrow functions only for callbacks
+   - No barrel files or `index.ts` re-exports
+   - Do not use `useCallback` or `useMemo` unless strictly needed
+   - Avoid comments at all times unless the code is very complex
 
 ### Security Implementation
```

View File

```diff
@@ -290,6 +290,11 @@ async def _cache_session(session: ChatSession) -> None:
     await async_redis.setex(redis_key, config.session_ttl, session.model_dump_json())
 
 
+async def cache_chat_session(session: ChatSession) -> None:
+    """Cache a chat session without persisting to the database."""
+    await _cache_session(session)
+
+
 async def _get_session_from_db(session_id: str) -> ChatSession | None:
     """Get a chat session from the database."""
     prisma_session = await chat_db.get_chat_session(session_id)
```

View File

```diff
@@ -172,12 +172,12 @@ async def get_session(
         user_id: The optional authenticated user ID, or None for anonymous access.
 
     Returns:
-        SessionDetailResponse: Details for the requested session; raises NotFoundError if not found.
+        SessionDetailResponse: Details for the requested session, or None if not found.
     """
     session = await get_chat_session(session_id, user_id)
     if not session:
-        raise NotFoundError(f"Session {session_id} not found")
+        raise NotFoundError(f"Session {session_id} not found.")
 
     messages = [message.model_dump() for message in session.messages]
     logger.info(
@@ -222,6 +222,8 @@ async def stream_chat_post(
     session = await _validate_and_get_session(session_id, user_id)
 
     async def event_generator() -> AsyncGenerator[str, None]:
+        chunk_count = 0
+        first_chunk_type: str | None = None
         async for chunk in chat_service.stream_chat_completion(
             session_id,
             request.message,
@@ -230,7 +232,26 @@ async def stream_chat_post(
             session=session,  # Pass pre-fetched session to avoid double-fetch
             context=request.context,
         ):
+            if chunk_count < 3:
+                logger.info(
+                    "Chat stream chunk",
+                    extra={
+                        "session_id": session_id,
+                        "chunk_type": str(chunk.type),
+                    },
+                )
+            if not first_chunk_type:
+                first_chunk_type = str(chunk.type)
+            chunk_count += 1
             yield chunk.to_sse()
+        logger.info(
+            "Chat stream completed",
+            extra={
+                "session_id": session_id,
+                "chunk_count": chunk_count,
+                "first_chunk_type": first_chunk_type,
+            },
+        )
 
         # AI SDK protocol termination
         yield "data: [DONE]\n\n"
@@ -275,6 +296,8 @@ async def stream_chat_get(
     session = await _validate_and_get_session(session_id, user_id)
 
     async def event_generator() -> AsyncGenerator[str, None]:
+        chunk_count = 0
+        first_chunk_type: str | None = None
         async for chunk in chat_service.stream_chat_completion(
             session_id,
             message,
@@ -282,7 +305,26 @@ async def stream_chat_get(
             user_id=user_id,
             session=session,  # Pass pre-fetched session to avoid double-fetch
         ):
+            if chunk_count < 3:
+                logger.info(
+                    "Chat stream chunk",
+                    extra={
+                        "session_id": session_id,
+                        "chunk_type": str(chunk.type),
+                    },
+                )
+            if not first_chunk_type:
+                first_chunk_type = str(chunk.type)
+            chunk_count += 1
             yield chunk.to_sse()
+        logger.info(
+            "Chat stream completed",
+            extra={
+                "session_id": session_id,
+                "chunk_count": chunk_count,
+                "first_chunk_type": first_chunk_type,
+            },
+        )
 
         # AI SDK protocol termination
         yield "data: [DONE]\n\n"
```

View File

```diff
@@ -1,15 +1,18 @@
 import asyncio
 import logging
+import time
+from asyncio import CancelledError
 from collections.abc import AsyncGenerator
 from typing import Any
 
 import orjson
-from langfuse import Langfuse
+from langfuse import get_client, propagate_attributes
+from langfuse.openai import openai  # type: ignore
 from openai import (
     APIConnectionError,
     APIError,
     APIStatusError,
-    AsyncOpenAI,
+    PermissionDeniedError,
     RateLimitError,
 )
 from openai.types.chat import ChatCompletionChunk, ChatCompletionToolParam
@@ -21,12 +24,12 @@ from backend.data.understanding import (
 from backend.util.exceptions import NotFoundError
 from backend.util.settings import Settings
 
-from . import db as chat_db
 from .config import ChatConfig
 from .model import (
     ChatMessage,
     ChatSession,
     Usage,
+    cache_chat_session,
     get_chat_session,
     update_session_title,
     upsert_chat_session,
@@ -50,10 +53,10 @@ logger = logging.getLogger(__name__)
 config = ChatConfig()
 settings = Settings()
 
-client = AsyncOpenAI(api_key=config.api_key, base_url=config.base_url)
+client = openai.AsyncOpenAI(api_key=config.api_key, base_url=config.base_url)
 
-# Langfuse client (lazy initialization)
-_langfuse_client: Langfuse | None = None
+langfuse = get_client()
 
 
 class LangfuseNotConfiguredError(Exception):
@@ -69,65 +72,6 @@ def _is_langfuse_configured() -> bool:
     )
 
 
-def _get_langfuse_client() -> Langfuse:
-    """Get or create the Langfuse client for prompt management and tracing."""
-    global _langfuse_client
-    if _langfuse_client is None:
-        if not _is_langfuse_configured():
-            raise LangfuseNotConfiguredError(
-                "Langfuse is not configured. The chat feature requires Langfuse for prompt management. "
-                "Please set the LANGFUSE_PUBLIC_KEY and LANGFUSE_SECRET_KEY environment variables."
-            )
-        _langfuse_client = Langfuse(
-            public_key=settings.secrets.langfuse_public_key,
-            secret_key=settings.secrets.langfuse_secret_key,
-            host=settings.secrets.langfuse_host or "https://cloud.langfuse.com",
-        )
-    return _langfuse_client
-
-
-def _get_environment() -> str:
-    """Get the current environment name for Langfuse tagging."""
-    return settings.config.app_env.value
-
-
-def _get_langfuse_prompt() -> str:
-    """Fetch the latest production prompt from Langfuse.
-
-    Returns:
-        The compiled prompt text from Langfuse.
-
-    Raises:
-        Exception: If Langfuse is unavailable or prompt fetch fails.
-    """
-    try:
-        langfuse = _get_langfuse_client()
-        # cache_ttl_seconds=0 disables SDK caching to always get the latest prompt
-        prompt = langfuse.get_prompt(config.langfuse_prompt_name, cache_ttl_seconds=0)
-        compiled = prompt.compile()
-        logger.info(
-            f"Fetched prompt '{config.langfuse_prompt_name}' from Langfuse "
-            f"(version: {prompt.version})"
-        )
-        return compiled
-    except Exception as e:
-        logger.error(f"Failed to fetch prompt from Langfuse: {e}")
-        raise
-
-
-async def _is_first_session(user_id: str) -> bool:
-    """Check if this is the user's first chat session.
-
-    Returns True if the user has 1 or fewer sessions (meaning this is their first).
-    """
-    try:
-        session_count = await chat_db.get_user_session_count(user_id)
-        return session_count <= 1
-    except Exception as e:
-        logger.warning(f"Failed to check session count for user {user_id}: {e}")
-        return False  # Default to non-onboarding if we can't check
-
-
 async def _build_system_prompt(user_id: str | None) -> tuple[str, Any]:
     """Build the full system prompt including business understanding if available.
@@ -139,8 +83,6 @@ async def _build_system_prompt(user_id: str | None) -> tuple[str, Any]:
         Tuple of (compiled prompt string, Langfuse prompt object for tracing)
     """
-    langfuse = _get_langfuse_client()
-
     # cache_ttl_seconds=0 disables SDK caching to always get the latest prompt
     prompt = langfuse.get_prompt(config.langfuse_prompt_name, cache_ttl_seconds=0)
@@ -158,7 +100,7 @@ async def _build_system_prompt(user_id: str | None) -> tuple[str, Any]:
         context = "This is the first time you are meeting the user. Greet them and introduce them to the platform"
 
     compiled = prompt.compile(users_information=context)
-    return compiled, prompt
+    return compiled, understanding
 
 
 async def _generate_session_title(message: str) -> str | None:
@@ -217,6 +159,7 @@ async def assign_user_to_session(
 async def stream_chat_completion(
     session_id: str,
     message: str | None = None,
+    tool_call_response: str | None = None,
     is_user_message: bool = True,
     user_id: str | None = None,
     retry_count: int = 0,
@@ -256,11 +199,6 @@ async def stream_chat_completion(
         yield StreamFinish()
         return
 
-    # Langfuse observations will be created after session is loaded (need messages for input)
-    # Initialize to None so finally block can safely check and end them
-    trace = None
-    generation = None
-
     # Only fetch from Redis if session not provided (initial call)
     if session is None:
         session = await get_chat_session(session_id, user_id)
```
```diff
@@ -336,297 +274,349 @@ asyncio.create_task(_update_title())
```

The final hunk rewrites the body of `stream_chat_completion`; its side-by-side listing is flattened beyond reconstruction and the capture is cut off mid-hunk. The recoverable changes: the streaming loop now runs inside `langfuse.start_as_current_observation(as_type="span", name="user-copilot-request", input=...)` with `propagate_attributes(session_id=..., user_id=..., tags=["copilot"], metadata=...)`, where the metadata is trimmed to 200 chars of `format_understanding_for_prompt(understanding)` (Langfuse only accepts up to 200 chars); partial assistant content is cached at most once per second via `cache_chat_session`; the assistant message and tool responses are saved before `StreamFinish` is yielded, guarded by a `has_saved_assistant_message` flag to prevent duplicate saves; `CancelledError` persists an `[interrupted]` assistant message; retryable errors re-enter `stream_chat_completion` with `retry_count + 1`; and the old manual `trace`/`generation` lifecycle that was ended in a `finally` block is removed.
generation.update(
model=config.model,
output={
"content": assistant_response.content,
"tool_calls": accumulated_tool_calls or None,
},
usage_details=(
{
"input": latest_usage.prompt_tokens,
"output": latest_usage.completion_tokens,
"total": latest_usage.total_tokens,
}
if latest_usage
else None
),
) )
generation.end()
except Exception as e:
logger.warning(f"Failed to end Langfuse generation: {e}")
if trace is not None: if messages_to_save:
try: session.messages.extend(messages_to_save)
if accumulated_tool_calls: logger.info(
trace.update_trace(output={"tool_calls": accumulated_tool_calls}) f"Extended session messages, new message_count={len(session.messages)}"
else: )
trace.update_trace(output={"response": assistant_response.content}) if messages_to_save or has_appended_streaming_message:
trace.end() await upsert_chat_session(session)
except Exception as e: else:
logger.warning(f"Failed to end Langfuse trace: {e}") logger.info(
"Assistant message already saved when StreamFinish was received, "
"skipping duplicate save"
)
# If we did a tool call, stream the chat completion again to get the next response
if has_done_tool_call:
logger.info(
"Tool call executed, streaming chat completion again to get assistant response"
)
async for chunk in stream_chat_completion(
session_id=session.session_id,
user_id=user_id,
session=session, # Pass session object to avoid Redis refetch
context=context,
tool_call_response=str(tool_response_messages),
):
yield chunk
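The retry handling above deliberately re-enters `stream_chat_completion` as a fresh async generator with `retry_count + 1`, rather than retrying inside the `except` block. A self-contained sketch of that shape (stand-in names, not code from this PR):

```python
import asyncio
from collections.abc import AsyncIterator

MAX_RETRIES = 2

async def stream(attempt: int = 0) -> AsyncIterator[str]:
    should_retry = False
    try:
        yield "chunk"
        if attempt < MAX_RETRIES:
            raise KeyError("simulated retryable failure")
        yield "done"
    except KeyError:
        # Only mark for retry here; the re-entry happens outside the handler,
        # mirroring the "avoid nesting" comment in the diff above.
        if attempt < MAX_RETRIES:
            should_retry = True
        else:
            yield "[error]"
            return
    if should_retry:
        # Delegate to a fresh generator with an incremented attempt counter
        async for chunk in stream(attempt + 1):
            yield chunk

async def main() -> None:
    print([c async for c in stream()])  # ['chunk', 'chunk', 'chunk', 'done']

asyncio.run(main())
```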
# Retry configuration for OpenAI API calls

@@ -654,6 +644,12 @@ def _is_retryable_error(error: Exception) -> bool:
     return False
 
 
+def _is_region_blocked_error(error: Exception) -> bool:
+    if isinstance(error, PermissionDeniedError):
+        return "not available in your region" in str(error).lower()
+    return False
+
+
 async def _stream_chat_chunks(
     session: ChatSession,
     tools: list[ChatCompletionToolParam],
@@ -846,7 +842,18 @@ async def _stream_chat_chunks(
             f"Error in stream (not retrying): {e!s}",
             exc_info=True,
         )
-        error_response = StreamError(errorText=str(e))
+        error_code = None
+        error_text = str(e)
+        if _is_region_blocked_error(e):
+            error_code = "MODEL_NOT_AVAILABLE_REGION"
+            error_text = (
+                "This model is not available in your region. "
+                "Please connect via VPN and try again."
+            )
+        error_response = StreamError(
+            errorText=error_text,
+            code=error_code,
+        )
         yield error_response
         yield StreamFinish()
         return
@@ -900,5 +907,4 @@ async def _yield_tool_call(
         session=session,
     )
-    logger.info(f"Yielding Tool execution response: {tool_execution_response}")
     yield tool_execution_response
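A minimal, offline-runnable sketch of the region-block classification added above, using a stand-in exception class so the example does not need an OpenAI client (the real code checks `openai.PermissionDeniedError`):

```python
class PermissionDeniedError(Exception):
    """Stand-in for openai.PermissionDeniedError, so the sketch runs offline."""

def _is_region_blocked_error(error: Exception) -> bool:
    # Only provider permission errors that mention the region message qualify
    if isinstance(error, PermissionDeniedError):
        return "not available in your region" in str(error).lower()
    return False

assert _is_region_blocked_error(
    PermissionDeniedError("Model not available in your region")
)
# Same message on a different exception type is not treated as a region block
assert not _is_region_blocked_error(ValueError("not available in your region"))
```

A matching `code` field on `StreamError` lets the frontend branch on `MODEL_NOT_AVAILABLE_REGION` instead of string-matching the error text.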

View File

@@ -30,7 +30,7 @@ TOOL_REGISTRY: dict[str, BaseTool] = {
     "find_library_agent": FindLibraryAgentTool(),
     "run_agent": RunAgentTool(),
     "run_block": RunBlockTool(),
-    "agent_output": AgentOutputTool(),
+    "view_agent_output": AgentOutputTool(),
     "search_docs": SearchDocsTool(),
     "get_doc_page": GetDocPageTool(),
 }

View File

@@ -3,6 +3,8 @@
import logging import logging
from typing import Any from typing import Any
from langfuse import observe
from backend.api.features.chat.model import ChatSession from backend.api.features.chat.model import ChatSession
from backend.data.understanding import ( from backend.data.understanding import (
BusinessUnderstandingInput, BusinessUnderstandingInput,
@@ -59,6 +61,7 @@ and automations for the user's specific needs."""
"""Requires authentication to store user-specific data.""" """Requires authentication to store user-specific data."""
return True return True
@observe(as_type="tool", name="add_understanding")
async def _execute( async def _execute(
self, self,
user_id: str | None, user_id: str | None,
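The same decorator pattern is applied to every chat tool in the files below. A minimal sketch of what it looks like in isolation, assuming the Langfuse SDK is installed (`EchoTool` is hypothetical, not part of this PR):

```python
from langfuse import observe

class EchoTool:
    @observe(as_type="tool", name="echo")
    async def _execute(self, user_id: str | None, **kwargs) -> dict:
        # Langfuse records this call as a "tool"-typed observation under the
        # active trace, capturing arguments and the return value.
        return {"echo": kwargs.get("text", "")}
```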

View File

@@ -218,6 +218,7 @@ async def save_agent_to_library(
     library_agents = await library_db.create_library_agent(
         graph=created_graph,
         user_id=user_id,
+        sensitive_action_safe_mode=True,
         create_library_agents_for_sub_graphs=False,
     )

View File

@@ -5,6 +5,7 @@ import re
 from datetime import datetime, timedelta, timezone
 from typing import Any
 
+from langfuse import observe
 from pydantic import BaseModel, field_validator
 
 from backend.api.features.chat.model import ChatSession
@@ -103,7 +104,7 @@ class AgentOutputTool(BaseTool):
     @property
     def name(self) -> str:
-        return "agent_output"
+        return "view_agent_output"
 
     @property
     def description(self) -> str:
@@ -328,6 +329,7 @@ class AgentOutputTool(BaseTool):
         total_executions=len(available_executions) if available_executions else 1,
     )
 
+    @observe(as_type="tool", name="view_agent_output")
     async def _execute(
         self,
         user_id: str | None,

View File

@@ -3,6 +3,8 @@
 import logging
 from typing import Any
 
+from langfuse import observe
+
 from backend.api.features.chat.model import ChatSession
 
 from .agent_generator import (
@@ -78,6 +80,7 @@ class CreateAgentTool(BaseTool):
         "required": ["description"],
     }
 
+    @observe(as_type="tool", name="create_agent")
     async def _execute(
         self,
         user_id: str | None,

View File

@@ -3,6 +3,8 @@
 import logging
 from typing import Any
 
+from langfuse import observe
+
 from backend.api.features.chat.model import ChatSession
 
 from .agent_generator import (
@@ -85,6 +87,7 @@ class EditAgentTool(BaseTool):
         "required": ["agent_id", "changes"],
     }
 
+    @observe(as_type="tool", name="edit_agent")
     async def _execute(
         self,
         user_id: str | None,

View File

@@ -2,6 +2,8 @@
 from typing import Any
 
+from langfuse import observe
+
 from backend.api.features.chat.model import ChatSession
 
 from .agent_search import search_agents
@@ -35,6 +37,7 @@ class FindAgentTool(BaseTool):
         "required": ["query"],
     }
 
+    @observe(as_type="tool", name="find_agent")
     async def _execute(
         self, user_id: str | None, session: ChatSession, **kwargs
     ) -> ToolResponseBase:

View File

@@ -1,6 +1,7 @@
 import logging
 from typing import Any
 
+from langfuse import observe
 from prisma.enums import ContentType
 
 from backend.api.features.chat.model import ChatSession
@@ -55,6 +56,7 @@ class FindBlockTool(BaseTool):
     def requires_auth(self) -> bool:
         return True
 
+    @observe(as_type="tool", name="find_block")
     async def _execute(
         self,
         user_id: str | None,

View File

@@ -2,6 +2,8 @@
 from typing import Any
 
+from langfuse import observe
+
 from backend.api.features.chat.model import ChatSession
 
 from .agent_search import search_agents
@@ -41,6 +43,7 @@ class FindLibraryAgentTool(BaseTool):
     def requires_auth(self) -> bool:
         return True
 
+    @observe(as_type="tool", name="find_library_agent")
     async def _execute(
         self, user_id: str | None, session: ChatSession, **kwargs
     ) -> ToolResponseBase:

View File

@@ -4,6 +4,8 @@ import logging
 from pathlib import Path
 from typing import Any
 
+from langfuse import observe
+
 from backend.api.features.chat.model import ChatSession
 from backend.api.features.chat.tools.base import BaseTool
 from backend.api.features.chat.tools.models import (
@@ -71,6 +73,7 @@ class GetDocPageTool(BaseTool):
         url_path = path.rsplit(".", 1)[0] if "." in path else path
         return f"{DOCS_BASE_URL}/{url_path}"
 
+    @observe(as_type="tool", name="get_doc_page")
     async def _execute(
         self,
         user_id: str | None,

View File

@@ -3,6 +3,7 @@
 import logging
 from typing import Any
 
+from langfuse import observe
 from pydantic import BaseModel, Field, field_validator
 
 from backend.api.features.chat.config import ChatConfig
@@ -32,7 +33,7 @@ from .models import (
     UserReadiness,
 )
 from .utils import (
-    check_user_has_required_credentials,
+    build_missing_credentials_from_graph,
     extract_credentials_from_schema,
     fetch_graph_from_store_slug,
     get_or_create_library_agent,
@@ -154,6 +155,7 @@ class RunAgentTool(BaseTool):
         """All operations require authentication."""
         return True
 
+    @observe(as_type="tool", name="run_agent")
     async def _execute(
         self,
         user_id: str | None,
@@ -235,15 +237,13 @@ class RunAgentTool(BaseTool):
                 # Return credentials needed response with input data info
                 # The UI handles credential setup automatically, so the message
                 # focuses on asking about input data
-                credentials = extract_credentials_from_schema(
-                    graph.credentials_input_schema
-                )
-                missing_creds_check = await check_user_has_required_credentials(
-                    user_id, credentials
-                )
-                missing_credentials_dict = {
-                    c.id: c.model_dump() for c in missing_creds_check
-                }
+                requirements_creds_dict = build_missing_credentials_from_graph(
+                    graph, None
+                )
+                missing_credentials_dict = build_missing_credentials_from_graph(
+                    graph, graph_credentials
+                )
+                requirements_creds_list = list(requirements_creds_dict.values())
 
                 return SetupRequirementsResponse(
                     message=self._build_inputs_message(graph, MSG_WHAT_VALUES_TO_USE),
@@ -257,7 +257,7 @@ class RunAgentTool(BaseTool):
                         ready_to_run=False,
                     ),
                     requirements={
-                        "credentials": [c.model_dump() for c in credentials],
+                        "credentials": requirements_creds_list,
                         "inputs": self._get_inputs_list(graph.input_schema),
                         "execution_modes": self._get_execution_modes(graph),
                     },

View File

@@ -4,6 +4,8 @@ import logging
 from collections import defaultdict
 from typing import Any
 
+from langfuse import observe
+
 from backend.api.features.chat.model import ChatSession
 from backend.data.block import get_block
 from backend.data.execution import ExecutionContext
@@ -20,6 +22,7 @@ from .models import (
     ToolResponseBase,
     UserReadiness,
 )
+from .utils import build_missing_credentials_from_field_info
 
 logger = logging.getLogger(__name__)
@@ -127,6 +130,7 @@ class RunBlockTool(BaseTool):
 
         return matched_credentials, missing_credentials
 
+    @observe(as_type="tool", name="run_block")
     async def _execute(
         self,
         user_id: str | None,
@@ -186,7 +190,11 @@ class RunBlockTool(BaseTool):
 
         if missing_credentials:
             # Return setup requirements response with missing credentials
-            missing_creds_dict = {c.id: c.model_dump() for c in missing_credentials}
+            credentials_fields_info = block.input_schema.get_credentials_fields_info()
+            missing_creds_dict = build_missing_credentials_from_field_info(
+                credentials_fields_info, set(matched_credentials.keys())
+            )
+            missing_creds_list = list(missing_creds_dict.values())
 
             return SetupRequirementsResponse(
                 message=(
@@ -203,7 +211,7 @@ class RunBlockTool(BaseTool):
                     ready_to_run=False,
                 ),
                 requirements={
-                    "credentials": [c.model_dump() for c in missing_credentials],
+                    "credentials": missing_creds_list,
                     "inputs": self._get_inputs_list(block),
                     "execution_modes": ["immediate"],
                 },

View File

@@ -3,6 +3,7 @@
 import logging
 from typing import Any
 
+from langfuse import observe
 from prisma.enums import ContentType
 
 from backend.api.features.chat.model import ChatSession
@@ -87,6 +88,7 @@ class SearchDocsTool(BaseTool):
         url_path = path.rsplit(".", 1)[0] if "." in path else path
         return f"{DOCS_BASE_URL}/{url_path}"
 
+    @observe(as_type="tool", name="search_docs")
     async def _execute(
         self,
         user_id: str | None,

View File

@@ -8,7 +8,7 @@ from backend.api.features.library import model as library_model
 from backend.api.features.store import db as store_db
 from backend.data import graph as graph_db
 from backend.data.graph import GraphModel
-from backend.data.model import CredentialsMetaInput
+from backend.data.model import CredentialsFieldInfo, CredentialsMetaInput
 from backend.integrations.creds_manager import IntegrationCredentialsManager
 from backend.util.exceptions import NotFoundError
@@ -89,6 +89,59 @@ def extract_credentials_from_schema(
     return credentials
 
+def _serialize_missing_credential(
+    field_key: str, field_info: CredentialsFieldInfo
+) -> dict[str, Any]:
+    """
+    Convert credential field info into a serializable dict that preserves all supported
+    credential types (e.g., api_key + oauth2) so the UI can offer multiple options.
+    """
+    supported_types = sorted(field_info.supported_types)
+    provider = next(iter(field_info.provider), "unknown")
+    scopes = sorted(field_info.required_scopes or [])
+    return {
+        "id": field_key,
+        "title": field_key.replace("_", " ").title(),
+        "provider": provider,
+        "provider_name": provider.replace("_", " ").title(),
+        "type": supported_types[0] if supported_types else "api_key",
+        "types": supported_types,
+        "scopes": scopes,
+    }
+
+def build_missing_credentials_from_graph(
+    graph: GraphModel, matched_credentials: dict[str, CredentialsMetaInput] | None
+) -> dict[str, Any]:
+    """
+    Build a missing_credentials mapping from a graph's aggregated credentials inputs,
+    preserving all supported credential types for each field.
+    """
+    matched_keys = set(matched_credentials.keys()) if matched_credentials else set()
+    aggregated_fields = graph.aggregate_credentials_inputs()
+    return {
+        field_key: _serialize_missing_credential(field_key, field_info)
+        for field_key, (field_info, _node_fields) in aggregated_fields.items()
+        if field_key not in matched_keys
+    }
+
+def build_missing_credentials_from_field_info(
+    credential_fields: dict[str, CredentialsFieldInfo],
+    matched_keys: set[str],
+) -> dict[str, Any]:
+    """
+    Build missing_credentials mapping from a simple credentials field info dictionary.
+    """
+    return {
+        field_key: _serialize_missing_credential(field_key, field_info)
+        for field_key, field_info in credential_fields.items()
+        if field_key not in matched_keys
+    }
+
 def extract_credentials_as_dict(
     credentials_input_schema: dict[str, Any] | None,
 ) -> dict[str, CredentialsMetaInput]:
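An illustration of the dict shape `_serialize_missing_credential` produces, using a stand-in for `CredentialsFieldInfo` (the real class lives in `backend.data.model`) and a hypothetical `github_creds` field:

```python
from dataclasses import dataclass

@dataclass
class FieldInfoStub:  # stand-in for CredentialsFieldInfo
    supported_types: set[str]
    provider: set[str]
    required_scopes: set[str] | None

info = FieldInfoStub({"oauth2", "api_key"}, {"github"}, {"repo"})
supported = sorted(info.supported_types)
provider = next(iter(info.provider), "unknown")
entry = {
    "id": "github_creds",
    "title": "Github Creds",
    "provider": provider,
    "provider_name": provider.replace("_", " ").title(),
    "type": supported[0],   # "api_key" sorts first and becomes the default
    "types": supported,     # both options preserved so the UI can offer a choice
    "scopes": sorted(info.required_scopes or []),
}
print(entry)
```

Preserving `types` alongside the single default `type` is the point of the new helpers: the old path collapsed multi-type credential fields to one option.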

View File

@@ -401,27 +401,11 @@ async def add_generated_agent_image(
     )
 
-def _initialize_graph_settings(graph: graph_db.GraphModel) -> GraphSettings:
-    """
-    Initialize GraphSettings based on graph content.
-
-    Args:
-        graph: The graph to analyze
-
-    Returns:
-        GraphSettings with appropriate human_in_the_loop_safe_mode value
-    """
-    if graph.has_human_in_the_loop:
-        # Graph has HITL blocks - set safe mode to True by default
-        return GraphSettings(human_in_the_loop_safe_mode=True)
-    else:
-        # Graph has no HITL blocks - keep None
-        return GraphSettings(human_in_the_loop_safe_mode=None)
-
 async def create_library_agent(
     graph: graph_db.GraphModel,
     user_id: str,
+    hitl_safe_mode: bool = True,
+    sensitive_action_safe_mode: bool = False,
     create_library_agents_for_sub_graphs: bool = True,
 ) -> list[library_model.LibraryAgent]:
     """
@@ -430,6 +414,8 @@ async def create_library_agent(
     Args:
         agent: The agent/Graph to add to the library.
         user_id: The user to whom the agent will be added.
+        hitl_safe_mode: Whether HITL blocks require manual review (default True).
+        sensitive_action_safe_mode: Whether sensitive action blocks require review.
         create_library_agents_for_sub_graphs: If True, creates LibraryAgent records for sub-graphs as well.
 
     Returns:
@@ -465,7 +451,11 @@ async def create_library_agent(
                 }
             },
             settings=SafeJson(
-                _initialize_graph_settings(graph_entry).model_dump()
+                GraphSettings.from_graph(
+                    graph_entry,
+                    hitl_safe_mode=hitl_safe_mode,
+                    sensitive_action_safe_mode=sensitive_action_safe_mode,
+                ).model_dump()
             ),
         ),
         include=library_agent_include(
@@ -627,33 +617,6 @@ async def update_library_agent(
         raise DatabaseError("Failed to update library agent") from e
 
-async def update_library_agent_settings(
-    user_id: str,
-    agent_id: str,
-    settings: GraphSettings,
-) -> library_model.LibraryAgent:
-    """
-    Updates the settings for a specific LibraryAgent.
-
-    Args:
-        user_id: The owner of the LibraryAgent.
-        agent_id: The ID of the LibraryAgent to update.
-        settings: New GraphSettings to apply.
-
-    Returns:
-        The updated LibraryAgent.
-
-    Raises:
-        NotFoundError: If the specified LibraryAgent does not exist.
-        DatabaseError: If there's an error in the update operation.
-    """
-    return await update_library_agent(
-        library_agent_id=agent_id,
-        user_id=user_id,
-        settings=settings,
-    )
-
 async def delete_library_agent(
     library_agent_id: str, user_id: str, soft_delete: bool = True
 ) -> None:
@@ -838,7 +801,7 @@ async def add_store_agent_to_library(
                     "isCreatedByUser": False,
                     "useGraphIsActiveVersion": False,
                     "settings": SafeJson(
                        GraphSettings.from_graph(graph_model).model_dump()
                     ),
                 },
                 include=library_agent_include(
@@ -1228,8 +1191,15 @@ async def fork_library_agent(
         )
         new_graph = await on_graph_activate(new_graph, user_id=user_id)
 
-        # Create a library agent for the new graph
-        return (await create_library_agent(new_graph, user_id))[0]
+        # Create a library agent for the new graph, preserving safe mode settings
+        return (
+            await create_library_agent(
+                new_graph,
+                user_id,
+                hitl_safe_mode=original_agent.settings.human_in_the_loop_safe_mode,
+                sensitive_action_safe_mode=original_agent.settings.sensitive_action_safe_mode,
+            )
+        )[0]
     except prisma.errors.PrismaError as e:
         logger.error(f"Database error cloning library agent: {e}")
         raise DatabaseError("Failed to fork library agent") from e
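`GraphSettings.from_graph` replaces `_initialize_graph_settings`, but its body is not part of this diff. A plausible sketch, inferred from the removed helper and the call sites above; treat this as an assumption, not the actual implementation:

```python
from pydantic import BaseModel

class GraphSettings(BaseModel):
    human_in_the_loop_safe_mode: bool | None = None
    sensitive_action_safe_mode: bool | None = None

    @classmethod
    def from_graph(
        cls,
        graph,  # expects has_human_in_the_loop / has_sensitive_action attributes
        hitl_safe_mode: bool | None = True,
        sensitive_action_safe_mode: bool | None = False,
    ) -> "GraphSettings":
        # Mirror the old helper: only set a safe-mode value when the graph
        # actually contains the corresponding block type, otherwise keep None.
        return cls(
            human_in_the_loop_safe_mode=(
                hitl_safe_mode if graph.has_human_in_the_loop else None
            ),
            sensitive_action_safe_mode=(
                sensitive_action_safe_mode if graph.has_sensitive_action else None
            ),
        )
```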

View File

@@ -73,6 +73,12 @@ class LibraryAgent(pydantic.BaseModel):
     has_external_trigger: bool = pydantic.Field(
         description="Whether the agent has an external trigger (e.g. webhook) node"
     )
+    has_human_in_the_loop: bool = pydantic.Field(
+        description="Whether the agent has human-in-the-loop blocks"
+    )
+    has_sensitive_action: bool = pydantic.Field(
+        description="Whether the agent has sensitive action blocks"
+    )
     trigger_setup_info: Optional[GraphTriggerInfo] = None
 
     # Indicates whether there's a new output (based on recent runs)
@@ -180,6 +186,8 @@ class LibraryAgent(pydantic.BaseModel):
                 graph.credentials_input_schema if sub_graphs is not None else None
             ),
             has_external_trigger=graph.has_external_trigger,
+            has_human_in_the_loop=graph.has_human_in_the_loop,
+            has_sensitive_action=graph.has_sensitive_action,
             trigger_setup_info=graph.trigger_setup_info,
             new_output=new_output,
             can_access_graph=can_access_graph,

View File

@@ -52,6 +52,8 @@ async def test_get_library_agents_success(
         output_schema={"type": "object", "properties": {}},
         credentials_input_schema={"type": "object", "properties": {}},
         has_external_trigger=False,
+        has_human_in_the_loop=False,
+        has_sensitive_action=False,
         status=library_model.LibraryAgentStatus.COMPLETED,
         recommended_schedule_cron=None,
         new_output=False,
@@ -75,6 +77,8 @@ async def test_get_library_agents_success(
         output_schema={"type": "object", "properties": {}},
         credentials_input_schema={"type": "object", "properties": {}},
         has_external_trigger=False,
+        has_human_in_the_loop=False,
+        has_sensitive_action=False,
         status=library_model.LibraryAgentStatus.COMPLETED,
         recommended_schedule_cron=None,
         new_output=False,
@@ -150,6 +154,8 @@ async def test_get_favorite_library_agents_success(
         output_schema={"type": "object", "properties": {}},
         credentials_input_schema={"type": "object", "properties": {}},
         has_external_trigger=False,
+        has_human_in_the_loop=False,
+        has_sensitive_action=False,
         status=library_model.LibraryAgentStatus.COMPLETED,
         recommended_schedule_cron=None,
         new_output=False,
@@ -218,6 +224,8 @@ def test_add_agent_to_library_success(
         output_schema={"type": "object", "properties": {}},
         credentials_input_schema={"type": "object", "properties": {}},
         has_external_trigger=False,
+        has_human_in_the_loop=False,
+        has_sensitive_action=False,
         status=library_model.LibraryAgentStatus.COMPLETED,
         new_output=False,
         can_access_graph=True,

View File

@@ -154,6 +154,7 @@ async def store_content_embedding(
     # Upsert the embedding
     # WHERE clause in DO UPDATE prevents PostgreSQL 15 bug with NULLS NOT DISTINCT
+    # Use unqualified ::vector - pgvector is in search_path on all environments
     await execute_raw_with_schema(
         """
         INSERT INTO {schema_prefix}"UnifiedContentEmbedding" (
@@ -177,7 +178,6 @@ async def store_content_embedding(
         searchable_text,
         metadata_json,
         client=client,
-        set_public_search_path=True,
     )
 
     logger.info(f"Stored embedding for {content_type}:{content_id}")
@@ -236,7 +236,6 @@ async def get_content_embedding(
         content_type,
         content_id,
         user_id,
-        set_public_search_path=True,
     )
 
     if result and len(result) > 0:
@@ -871,31 +870,45 @@ async def semantic_search(
     # Add content type parameters and build placeholders dynamically
     content_type_start_idx = len(params) + 1
     content_type_placeholders = ", ".join(
-        f'${content_type_start_idx + i}::{{{{schema_prefix}}}}"ContentType"'
+        "$" + str(content_type_start_idx + i) + '::{schema_prefix}"ContentType"'
         for i in range(len(content_types))
     )
     params.extend([ct.value for ct in content_types])
 
-    sql = f"""
-        SELECT
-            "contentId" as content_id,
-            "contentType" as content_type,
-            "searchableText" as searchable_text,
-            metadata,
-            1 - (embedding <=> '{embedding_str}'::vector) as similarity
-        FROM {{{{schema_prefix}}}}"UnifiedContentEmbedding"
-        WHERE "contentType" IN ({content_type_placeholders})
-        {user_filter}
-        AND 1 - (embedding <=> '{embedding_str}'::vector) >= ${len(params) + 1}
-        ORDER BY similarity DESC
-        LIMIT $1
-    """
-    params.append(min_similarity)
+    # Build min_similarity param index before appending
+    min_similarity_idx = len(params) + 1
+    params.append(min_similarity)
+
+    # Use unqualified ::vector and <=> operator - pgvector is in search_path on all environments
+    sql = (
+        """
+        SELECT
+            "contentId" as content_id,
+            "contentType" as content_type,
+            "searchableText" as searchable_text,
+            metadata,
+            1 - (embedding <=> '"""
+        + embedding_str
+        + """'::vector) as similarity
+        FROM {schema_prefix}"UnifiedContentEmbedding"
+        WHERE "contentType" IN ("""
+        + content_type_placeholders
+        + """)
+        """
+        + user_filter
+        + """
+        AND 1 - (embedding <=> '"""
+        + embedding_str
+        + """'::vector) >= $"""
+        + str(min_similarity_idx)
+        + """
+        ORDER BY similarity DESC
+        LIMIT $1
+    """
+    )
 
     try:
-        results = await query_raw_with_schema(
-            sql, *params, set_public_search_path=True
-        )
+        results = await query_raw_with_schema(sql, *params)
         return [
             {
                 "content_id": row["content_id"],
@@ -922,31 +935,41 @@ async def semantic_search(
     # Add content type parameters and build placeholders dynamically
     content_type_start_idx = len(params_lexical) + 1
     content_type_placeholders_lexical = ", ".join(
-        f'${content_type_start_idx + i}::{{{{schema_prefix}}}}"ContentType"'
+        "$" + str(content_type_start_idx + i) + '::{schema_prefix}"ContentType"'
         for i in range(len(content_types))
     )
     params_lexical.extend([ct.value for ct in content_types])
 
-    sql_lexical = f"""
-        SELECT
-            "contentId" as content_id,
-            "contentType" as content_type,
-            "searchableText" as searchable_text,
-            metadata,
-            0.0 as similarity
-        FROM {{{{schema_prefix}}}}"UnifiedContentEmbedding"
-        WHERE "contentType" IN ({content_type_placeholders_lexical})
-        {user_filter}
-        AND "searchableText" ILIKE ${len(params_lexical) + 1}
-        ORDER BY "updatedAt" DESC
-        LIMIT $1
-    """
-    params_lexical.append(f"%{query}%")
+    # Build query param index before appending
+    query_param_idx = len(params_lexical) + 1
+    params_lexical.append(f"%{query}%")
+
+    # Use regular string (not f-string) for template to preserve {schema_prefix} placeholders
+    sql_lexical = (
+        """
+        SELECT
+            "contentId" as content_id,
+            "contentType" as content_type,
+            "searchableText" as searchable_text,
+            metadata,
+            0.0 as similarity
+        FROM {schema_prefix}"UnifiedContentEmbedding"
+        WHERE "contentType" IN ("""
+        + content_type_placeholders_lexical
+        + """)
+        """
+        + user_filter
+        + """
+        AND "searchableText" ILIKE $"""
+        + str(query_param_idx)
+        + """
+        ORDER BY "updatedAt" DESC
+        LIMIT $1
+    """
+    )
 
     try:
-        results = await query_raw_with_schema(
-            sql_lexical, *params_lexical, set_public_search_path=True
-        )
+        results = await query_raw_with_schema(sql_lexical, *params_lexical)
        return [
             {
                 "content_id": row["content_id"],

View File

@@ -155,18 +155,14 @@ async def test_store_embedding_success(mocker):
     )
 
     assert result is True
-    # execute_raw is called twice: once for SET search_path, once for INSERT
-    assert mock_client.execute_raw.call_count == 2
+    # execute_raw is called once for INSERT (no separate SET search_path needed)
+    assert mock_client.execute_raw.call_count == 1
 
-    # First call: SET search_path
-    first_call_args = mock_client.execute_raw.call_args_list[0][0]
-    assert "SET search_path" in first_call_args[0]
-
-    # Second call: INSERT query with the actual data
-    second_call_args = mock_client.execute_raw.call_args_list[1][0]
-    assert "test-version-id" in second_call_args
-    assert "[0.1,0.2,0.3]" in second_call_args
-    assert None in second_call_args  # userId should be None for store agents
+    # Verify the INSERT query with the actual data
+    call_args = mock_client.execute_raw.call_args_list[0][0]
+    assert "test-version-id" in call_args
+    assert "[0.1,0.2,0.3]" in call_args
+    assert None in call_args  # userId should be None for store agents
 
 @pytest.mark.asyncio(loop_scope="session")

View File

@@ -12,7 +12,7 @@ from dataclasses import dataclass
 from typing import Any, Literal
 
 from prisma.enums import ContentType
-from rank_bm25 import BM25Okapi
+from rank_bm25 import BM25Okapi  # type: ignore[import-untyped]
 
 from backend.api.features.store.embeddings import (
     EMBEDDING_DIM,
@@ -363,9 +363,7 @@ async def unified_hybrid_search(
         LIMIT {limit_param} OFFSET {offset_param}
     """
 
-    results = await query_raw_with_schema(
-        sql_query, *params, set_public_search_path=True
-    )
+    results = await query_raw_with_schema(sql_query, *params)
     total = results[0]["total_count"] if results else 0
 
     # Apply BM25 reranking
@@ -688,9 +686,7 @@ async def hybrid_search(
         LIMIT {limit_param} OFFSET {offset_param}
     """
 
-    results = await query_raw_with_schema(
-        sql_query, *params, set_public_search_path=True
-    )
+    results = await query_raw_with_schema(sql_query, *params)
     total = results[0]["total_count"] if results else 0
View File

@@ -761,10 +761,8 @@ async def create_new_graph(
     graph.reassign_ids(user_id=user_id, reassign_graph_id=True)
     graph.validate_graph(for_run=False)
 
-    # The return value of the create graph & library function is intentionally not used here,
-    # as the graph already valid and no sub-graphs are returned back.
     await graph_db.create_graph(graph, user_id=user_id)
-    await library_db.create_library_agent(graph, user_id=user_id)
+    await library_db.create_library_agent(graph, user_id)
     activated_graph = await on_graph_activate(graph, user_id=user_id)
 
     if create_graph.source == "builder":
@@ -888,21 +886,19 @@ async def set_graph_active_version(
 async def _update_library_agent_version_and_settings(
     user_id: str, agent_graph: graph_db.GraphModel
 ) -> library_model.LibraryAgent:
-    # Keep the library agent up to date with the new active version
     library = await library_db.update_agent_version_in_library(
         user_id, agent_graph.id, agent_graph.version
     )
-    # If the graph has HITL node, initialize the setting if it's not already set.
-    if (
-        agent_graph.has_human_in_the_loop
-        and library.settings.human_in_the_loop_safe_mode is None
-    ):
-        await library_db.update_library_agent_settings(
-            user_id=user_id,
-            agent_id=library.id,
-            settings=library.settings.model_copy(
-                update={"human_in_the_loop_safe_mode": True}
-            ),
-        )
+    updated_settings = GraphSettings.from_graph(
+        graph=agent_graph,
+        hitl_safe_mode=library.settings.human_in_the_loop_safe_mode,
+        sensitive_action_safe_mode=library.settings.sensitive_action_safe_mode,
+    )
+    if updated_settings != library.settings:
+        library = await library_db.update_library_agent(
+            library_agent_id=library.id,
+            user_id=user_id,
+            settings=updated_settings,
+        )
     return library
@@ -919,21 +915,18 @@ async def update_graph_settings(
     user_id: Annotated[str, Security(get_user_id)],
 ) -> GraphSettings:
     """Update graph settings for the user's library agent."""
-    # Get the library agent for this graph
     library_agent = await library_db.get_library_agent_by_graph_id(
         graph_id=graph_id, user_id=user_id
     )
     if not library_agent:
         raise HTTPException(404, f"Graph #{graph_id} not found in user's library")
 
-    # Update the library agent settings
-    updated_agent = await library_db.update_library_agent_settings(
-        user_id=user_id,
-        agent_id=library_agent.id,
-        settings=settings,
-    )
+    updated_agent = await library_db.update_library_agent(
+        library_agent_id=library_agent.id,
+        user_id=user_id,
+        settings=settings,
+    )
 
-    # Return the updated settings
     return GraphSettings.model_validate(updated_agent.settings)

View File

@@ -680,3 +680,58 @@ class ListIsEmptyBlock(Block):
     async def run(self, input_data: Input, **kwargs) -> BlockOutput:
         yield "is_empty", len(input_data.list) == 0
 
+
+class ConcatenateListsBlock(Block):
+    class Input(BlockSchemaInput):
+        lists: List[List[Any]] = SchemaField(
+            description="A list of lists to concatenate together. All lists will be combined in order into a single list.",
+            placeholder="e.g., [[1, 2], [3, 4], [5, 6]]",
+        )
+
+    class Output(BlockSchemaOutput):
+        concatenated_list: List[Any] = SchemaField(
+            description="The concatenated list containing all elements from all input lists in order."
+        )
+        error: str = SchemaField(
+            description="Error message if concatenation failed due to invalid input types."
+        )
+
+    def __init__(self):
+        super().__init__(
+            id="3cf9298b-5817-4141-9d80-7c2cc5199c8e",
+            description="Concatenates multiple lists into a single list. All elements from all input lists are combined in order.",
+            categories={BlockCategory.BASIC},
+            input_schema=ConcatenateListsBlock.Input,
+            output_schema=ConcatenateListsBlock.Output,
+            test_input=[
+                {"lists": [[1, 2, 3], [4, 5, 6]]},
+                {"lists": [["a", "b"], ["c"], ["d", "e", "f"]]},
+                {"lists": [[1, 2], []]},
+                {"lists": []},
+            ],
+            test_output=[
+                ("concatenated_list", [1, 2, 3, 4, 5, 6]),
+                ("concatenated_list", ["a", "b", "c", "d", "e", "f"]),
+                ("concatenated_list", [1, 2]),
+                ("concatenated_list", []),
+            ],
+        )
+
+    async def run(self, input_data: Input, **kwargs) -> BlockOutput:
+        concatenated = []
+        for idx, lst in enumerate(input_data.lists):
+            if lst is None:
+                # Skip None values to avoid errors
+                continue
+            if not isinstance(lst, list):
+                # Type validation: each item must be a list
+                # Strings are iterable and would cause extend() to iterate character-by-character
+                # Non-iterable types would raise TypeError
+                yield "error", (
+                    f"Invalid input at index {idx}: expected a list, got {type(lst).__name__}. "
+                    f"All items in 'lists' must be lists (e.g., [[1, 2], [3, 4]])."
+                )
+                return
+            concatenated.extend(lst)
+
+        yield "concatenated_list", concatenated

View File

@@ -84,7 +84,7 @@ class HITLReviewHelper:
             Exception: If review creation or status update fails
         """
         # Skip review if safe mode is disabled - return auto-approved result
-        if not execution_context.safe_mode:
+        if not execution_context.human_in_the_loop_safe_mode:
             logger.info(
                 f"Block {block_name} skipping review for node {node_exec_id} - safe mode disabled"
             )

View File

@@ -104,7 +104,7 @@ class HumanInTheLoopBlock(Block):
         execution_context: ExecutionContext,
         **_kwargs,
     ) -> BlockOutput:
-        if not execution_context.safe_mode:
+        if not execution_context.human_in_the_loop_safe_mode:
             logger.info(
                 f"HITL block skipping review for node {node_exec_id} - safe mode disabled"
             )
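The rename makes the safe-mode flag explicit about which review type it gates, now that sensitive-action safe mode exists alongside it. A simplified stand-in for the gate both call sites use (not the real `ExecutionContext` class):

```python
from dataclasses import dataclass

@dataclass
class ExecutionContext:  # simplified stand-in; field set is an assumption
    human_in_the_loop_safe_mode: bool = True
    sensitive_action_safe_mode: bool = False

def needs_hitl_review(ctx: ExecutionContext) -> bool:
    # HITL blocks auto-approve when their dedicated safe mode is off
    return ctx.human_in_the_loop_safe_mode

assert not needs_hitl_review(ExecutionContext(human_in_the_loop_safe_mode=False))
```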

View File

@@ -79,6 +79,10 @@ class ModelMetadata(NamedTuple):
provider: str provider: str
context_window: int context_window: int
max_output_tokens: int | None max_output_tokens: int | None
display_name: str
provider_name: str
creator_name: str
price_tier: Literal[1, 2, 3]
class LlmModelMeta(EnumMeta): class LlmModelMeta(EnumMeta):
@@ -171,6 +175,26 @@ class LlmModel(str, Enum, metaclass=LlmModelMeta):
V0_1_5_LG = "v0-1.5-lg" V0_1_5_LG = "v0-1.5-lg"
V0_1_0_MD = "v0-1.0-md" V0_1_0_MD = "v0-1.0-md"
@classmethod
def __get_pydantic_json_schema__(cls, schema, handler):
json_schema = handler(schema)
llm_model_metadata = {}
for model in cls:
model_name = model.value
metadata = model.metadata
llm_model_metadata[model_name] = {
"creator": metadata.creator_name,
"creator_name": metadata.creator_name,
"title": metadata.display_name,
"provider": metadata.provider,
"provider_name": metadata.provider_name,
"name": model_name,
"price_tier": metadata.price_tier,
}
json_schema["llm_model"] = True
json_schema["llm_model_metadata"] = llm_model_metadata
return json_schema
@property @property
def metadata(self) -> ModelMetadata: def metadata(self) -> ModelMetadata:
return MODEL_METADATA[self] return MODEL_METADATA[self]
@@ -190,119 +214,291 @@ class LlmModel(str, Enum, metaclass=LlmModelMeta):
MODEL_METADATA = { MODEL_METADATA = {
# https://platform.openai.com/docs/models # https://platform.openai.com/docs/models
LlmModel.O3: ModelMetadata("openai", 200000, 100000), LlmModel.O3: ModelMetadata("openai", 200000, 100000, "O3", "OpenAI", "OpenAI", 2),
LlmModel.O3_MINI: ModelMetadata("openai", 200000, 100000), # o3-mini-2025-01-31 LlmModel.O3_MINI: ModelMetadata(
LlmModel.O1: ModelMetadata("openai", 200000, 100000), # o1-2024-12-17 "openai", 200000, 100000, "O3 Mini", "OpenAI", "OpenAI", 1
LlmModel.O1_MINI: ModelMetadata("openai", 128000, 65536), # o1-mini-2024-09-12 ), # o3-mini-2025-01-31
LlmModel.O1: ModelMetadata(
"openai", 200000, 100000, "O1", "OpenAI", "OpenAI", 3
), # o1-2024-12-17
LlmModel.O1_MINI: ModelMetadata(
"openai", 128000, 65536, "O1 Mini", "OpenAI", "OpenAI", 2
), # o1-mini-2024-09-12
# GPT-5 models # GPT-5 models
LlmModel.GPT5_2: ModelMetadata("openai", 400000, 128000), LlmModel.GPT5_2: ModelMetadata(
LlmModel.GPT5_1: ModelMetadata("openai", 400000, 128000), "openai", 400000, 128000, "GPT-5.2", "OpenAI", "OpenAI", 3
LlmModel.GPT5: ModelMetadata("openai", 400000, 128000), ),
LlmModel.GPT5_MINI: ModelMetadata("openai", 400000, 128000), LlmModel.GPT5_1: ModelMetadata(
LlmModel.GPT5_NANO: ModelMetadata("openai", 400000, 128000), "openai", 400000, 128000, "GPT-5.1", "OpenAI", "OpenAI", 2
LlmModel.GPT5_CHAT: ModelMetadata("openai", 400000, 16384), ),
LlmModel.GPT41: ModelMetadata("openai", 1047576, 32768), LlmModel.GPT5: ModelMetadata(
LlmModel.GPT41_MINI: ModelMetadata("openai", 1047576, 32768), "openai", 400000, 128000, "GPT-5", "OpenAI", "OpenAI", 1
),
LlmModel.GPT5_MINI: ModelMetadata(
"openai", 400000, 128000, "GPT-5 Mini", "OpenAI", "OpenAI", 1
),
LlmModel.GPT5_NANO: ModelMetadata(
"openai", 400000, 128000, "GPT-5 Nano", "OpenAI", "OpenAI", 1
),
LlmModel.GPT5_CHAT: ModelMetadata(
"openai", 400000, 16384, "GPT-5 Chat Latest", "OpenAI", "OpenAI", 2
),
LlmModel.GPT41: ModelMetadata(
"openai", 1047576, 32768, "GPT-4.1", "OpenAI", "OpenAI", 1
),
LlmModel.GPT41_MINI: ModelMetadata(
"openai", 1047576, 32768, "GPT-4.1 Mini", "OpenAI", "OpenAI", 1
),
LlmModel.GPT4O_MINI: ModelMetadata( LlmModel.GPT4O_MINI: ModelMetadata(
"openai", 128000, 16384 "openai", 128000, 16384, "GPT-4o Mini", "OpenAI", "OpenAI", 1
), # gpt-4o-mini-2024-07-18 ), # gpt-4o-mini-2024-07-18
LlmModel.GPT4O: ModelMetadata("openai", 128000, 16384), # gpt-4o-2024-08-06 LlmModel.GPT4O: ModelMetadata(
"openai", 128000, 16384, "GPT-4o", "OpenAI", "OpenAI", 2
), # gpt-4o-2024-08-06
LlmModel.GPT4_TURBO: ModelMetadata( LlmModel.GPT4_TURBO: ModelMetadata(
"openai", 128000, 4096 "openai", 128000, 4096, "GPT-4 Turbo", "OpenAI", "OpenAI", 3
), # gpt-4-turbo-2024-04-09 ), # gpt-4-turbo-2024-04-09
LlmModel.GPT3_5_TURBO: ModelMetadata("openai", 16385, 4096), # gpt-3.5-turbo-0125 LlmModel.GPT3_5_TURBO: ModelMetadata(
"openai", 16385, 4096, "GPT-3.5 Turbo", "OpenAI", "OpenAI", 1
), # gpt-3.5-turbo-0125
# https://docs.anthropic.com/en/docs/about-claude/models # https://docs.anthropic.com/en/docs/about-claude/models
LlmModel.CLAUDE_4_1_OPUS: ModelMetadata( LlmModel.CLAUDE_4_1_OPUS: ModelMetadata(
"anthropic", 200000, 32000 "anthropic", 200000, 32000, "Claude Opus 4.1", "Anthropic", "Anthropic", 3
), # claude-opus-4-1-20250805 ), # claude-opus-4-1-20250805
LlmModel.CLAUDE_4_OPUS: ModelMetadata( LlmModel.CLAUDE_4_OPUS: ModelMetadata(
"anthropic", 200000, 32000 "anthropic", 200000, 32000, "Claude Opus 4", "Anthropic", "Anthropic", 3
), # claude-4-opus-20250514 ), # claude-4-opus-20250514
LlmModel.CLAUDE_4_SONNET: ModelMetadata( LlmModel.CLAUDE_4_SONNET: ModelMetadata(
"anthropic", 200000, 64000 "anthropic", 200000, 64000, "Claude Sonnet 4", "Anthropic", "Anthropic", 2
), # claude-4-sonnet-20250514 ), # claude-4-sonnet-20250514
LlmModel.CLAUDE_4_5_OPUS: ModelMetadata( LlmModel.CLAUDE_4_5_OPUS: ModelMetadata(
"anthropic", 200000, 64000 "anthropic", 200000, 64000, "Claude Opus 4.5", "Anthropic", "Anthropic", 3
), # claude-opus-4-5-20251101 ), # claude-opus-4-5-20251101
LlmModel.CLAUDE_4_5_SONNET: ModelMetadata( LlmModel.CLAUDE_4_5_SONNET: ModelMetadata(
"anthropic", 200000, 64000 "anthropic", 200000, 64000, "Claude Sonnet 4.5", "Anthropic", "Anthropic", 3
), # claude-sonnet-4-5-20250929 ), # claude-sonnet-4-5-20250929
LlmModel.CLAUDE_4_5_HAIKU: ModelMetadata( LlmModel.CLAUDE_4_5_HAIKU: ModelMetadata(
"anthropic", 200000, 64000 "anthropic", 200000, 64000, "Claude Haiku 4.5", "Anthropic", "Anthropic", 2
), # claude-haiku-4-5-20251001 ), # claude-haiku-4-5-20251001
LlmModel.CLAUDE_3_7_SONNET: ModelMetadata( LlmModel.CLAUDE_3_7_SONNET: ModelMetadata(
"anthropic", 200000, 64000 "anthropic", 200000, 64000, "Claude 3.7 Sonnet", "Anthropic", "Anthropic", 2
), # claude-3-7-sonnet-20250219 ), # claude-3-7-sonnet-20250219
LlmModel.CLAUDE_3_HAIKU: ModelMetadata( LlmModel.CLAUDE_3_HAIKU: ModelMetadata(
"anthropic", 200000, 4096 "anthropic", 200000, 4096, "Claude 3 Haiku", "Anthropic", "Anthropic", 1
), # claude-3-haiku-20240307 ), # claude-3-haiku-20240307
# https://docs.aimlapi.com/api-overview/model-database/text-models # https://docs.aimlapi.com/api-overview/model-database/text-models
LlmModel.AIML_API_QWEN2_5_72B: ModelMetadata("aiml_api", 32000, 8000), LlmModel.AIML_API_QWEN2_5_72B: ModelMetadata(
LlmModel.AIML_API_LLAMA3_1_70B: ModelMetadata("aiml_api", 128000, 40000), "aiml_api", 32000, 8000, "Qwen 2.5 72B Instruct Turbo", "AI/ML", "Qwen", 1
LlmModel.AIML_API_LLAMA3_3_70B: ModelMetadata("aiml_api", 128000, None), ),
LlmModel.AIML_API_META_LLAMA_3_1_70B: ModelMetadata("aiml_api", 131000, 2000), LlmModel.AIML_API_LLAMA3_1_70B: ModelMetadata(
LlmModel.AIML_API_LLAMA_3_2_3B: ModelMetadata("aiml_api", 128000, None), "aiml_api",
# https://console.groq.com/docs/models 128000,
LlmModel.LLAMA3_3_70B: ModelMetadata("groq", 128000, 32768), 40000,
LlmModel.LLAMA3_1_8B: ModelMetadata("groq", 128000, 8192), "Llama 3.1 Nemotron 70B Instruct",
# https://ollama.com/library "AI/ML",
LlmModel.OLLAMA_LLAMA3_3: ModelMetadata("ollama", 8192, None), "Nvidia",
LlmModel.OLLAMA_LLAMA3_2: ModelMetadata("ollama", 8192, None), 1,
LlmModel.OLLAMA_LLAMA3_8B: ModelMetadata("ollama", 8192, None), ),
LlmModel.OLLAMA_LLAMA3_405B: ModelMetadata("ollama", 8192, None), LlmModel.AIML_API_LLAMA3_3_70B: ModelMetadata(
LlmModel.OLLAMA_DOLPHIN: ModelMetadata("ollama", 32768, None), "aiml_api", 128000, None, "Llama 3.3 70B Instruct Turbo", "AI/ML", "Meta", 1
# https://openrouter.ai/models ),
LlmModel.GEMINI_2_5_PRO: ModelMetadata("open_router", 1050000, 8192), LlmModel.AIML_API_META_LLAMA_3_1_70B: ModelMetadata(
LlmModel.GEMINI_3_PRO_PREVIEW: ModelMetadata("open_router", 1048576, 65535), "aiml_api", 131000, 2000, "Llama 3.1 70B Instruct Turbo", "AI/ML", "Meta", 1
LlmModel.GEMINI_2_5_FLASH: ModelMetadata("open_router", 1048576, 65535), ),
LlmModel.GEMINI_2_0_FLASH: ModelMetadata("open_router", 1048576, 8192), LlmModel.AIML_API_LLAMA_3_2_3B: ModelMetadata(
LlmModel.GEMINI_2_5_FLASH_LITE_PREVIEW: ModelMetadata( "aiml_api", 128000, None, "Llama 3.2 3B Instruct Turbo", "AI/ML", "Meta", 1
"open_router", 1048576, 65535 ),
# https://console.groq.com/docs/models
LlmModel.LLAMA3_3_70B: ModelMetadata(
"groq", 128000, 32768, "Llama 3.3 70B Versatile", "Groq", "Meta", 1
),
LlmModel.LLAMA3_1_8B: ModelMetadata(
"groq", 128000, 8192, "Llama 3.1 8B Instant", "Groq", "Meta", 1
),
# https://ollama.com/library
LlmModel.OLLAMA_LLAMA3_3: ModelMetadata(
"ollama", 8192, None, "Llama 3.3", "Ollama", "Meta", 1
),
LlmModel.OLLAMA_LLAMA3_2: ModelMetadata(
"ollama", 8192, None, "Llama 3.2", "Ollama", "Meta", 1
),
LlmModel.OLLAMA_LLAMA3_8B: ModelMetadata(
"ollama", 8192, None, "Llama 3", "Ollama", "Meta", 1
),
LlmModel.OLLAMA_LLAMA3_405B: ModelMetadata(
"ollama", 8192, None, "Llama 3.1 405B", "Ollama", "Meta", 1
),
LlmModel.OLLAMA_DOLPHIN: ModelMetadata(
"ollama", 32768, None, "Dolphin Mistral Latest", "Ollama", "Mistral AI", 1
),
# https://openrouter.ai/models
LlmModel.GEMINI_2_5_PRO: ModelMetadata(
"open_router",
1050000,
8192,
"Gemini 2.5 Pro Preview 03.25",
"OpenRouter",
"Google",
2,
),
LlmModel.GEMINI_3_PRO_PREVIEW: ModelMetadata(
"open_router", 1048576, 65535, "Gemini 3 Pro Preview", "OpenRouter", "Google", 2
),
LlmModel.GEMINI_2_5_FLASH: ModelMetadata(
"open_router", 1048576, 65535, "Gemini 2.5 Flash", "OpenRouter", "Google", 1
),
LlmModel.GEMINI_2_0_FLASH: ModelMetadata(
"open_router", 1048576, 8192, "Gemini 2.0 Flash 001", "OpenRouter", "Google", 1
),
LlmModel.GEMINI_2_5_FLASH_LITE_PREVIEW: ModelMetadata(
"open_router",
1048576,
65535,
"Gemini 2.5 Flash Lite Preview 06.17",
"OpenRouter",
"Google",
1,
),
LlmModel.GEMINI_2_0_FLASH_LITE: ModelMetadata(
"open_router",
1048576,
8192,
"Gemini 2.0 Flash Lite 001",
"OpenRouter",
"Google",
1,
),
LlmModel.MISTRAL_NEMO: ModelMetadata(
"open_router", 128000, 4096, "Mistral Nemo", "OpenRouter", "Mistral AI", 1
),
LlmModel.COHERE_COMMAND_R_08_2024: ModelMetadata(
"open_router", 128000, 4096, "Command R 08.2024", "OpenRouter", "Cohere", 1
),
LlmModel.COHERE_COMMAND_R_PLUS_08_2024: ModelMetadata(
"open_router", 128000, 4096, "Command R Plus 08.2024", "OpenRouter", "Cohere", 2
),
LlmModel.DEEPSEEK_CHAT: ModelMetadata(
"open_router", 64000, 2048, "DeepSeek Chat", "OpenRouter", "DeepSeek", 1
),
LlmModel.DEEPSEEK_R1_0528: ModelMetadata(
"open_router", 163840, 163840, "DeepSeek R1 0528", "OpenRouter", "DeepSeek", 1
),
LlmModel.PERPLEXITY_SONAR: ModelMetadata(
"open_router", 127000, 8000, "Sonar", "OpenRouter", "Perplexity", 1
),
LlmModel.PERPLEXITY_SONAR_PRO: ModelMetadata(
"open_router", 200000, 8000, "Sonar Pro", "OpenRouter", "Perplexity", 2
), ),
LlmModel.GEMINI_2_0_FLASH_LITE: ModelMetadata("open_router", 1048576, 8192),
LlmModel.MISTRAL_NEMO: ModelMetadata("open_router", 128000, 4096),
LlmModel.COHERE_COMMAND_R_08_2024: ModelMetadata("open_router", 128000, 4096),
LlmModel.COHERE_COMMAND_R_PLUS_08_2024: ModelMetadata("open_router", 128000, 4096),
LlmModel.DEEPSEEK_CHAT: ModelMetadata("open_router", 64000, 2048),
LlmModel.DEEPSEEK_R1_0528: ModelMetadata("open_router", 163840, 163840),
LlmModel.PERPLEXITY_SONAR: ModelMetadata("open_router", 127000, 8000),
LlmModel.PERPLEXITY_SONAR_PRO: ModelMetadata("open_router", 200000, 8000),
LlmModel.PERPLEXITY_SONAR_DEEP_RESEARCH: ModelMetadata( LlmModel.PERPLEXITY_SONAR_DEEP_RESEARCH: ModelMetadata(
"open_router", "open_router",
128000, 128000,
16000, 16000,
"Sonar Deep Research",
"OpenRouter",
"Perplexity",
3,
), ),
LlmModel.NOUSRESEARCH_HERMES_3_LLAMA_3_1_405B: ModelMetadata( LlmModel.NOUSRESEARCH_HERMES_3_LLAMA_3_1_405B: ModelMetadata(
"open_router", 131000, 4096 "open_router",
131000,
4096,
"Hermes 3 Llama 3.1 405B",
"OpenRouter",
"Nous Research",
1,
), ),
LlmModel.NOUSRESEARCH_HERMES_3_LLAMA_3_1_70B: ModelMetadata( LlmModel.NOUSRESEARCH_HERMES_3_LLAMA_3_1_70B: ModelMetadata(
"open_router", 12288, 12288 "open_router",
12288,
12288,
"Hermes 3 Llama 3.1 70B",
"OpenRouter",
"Nous Research",
1,
),
LlmModel.OPENAI_GPT_OSS_120B: ModelMetadata(
"open_router", 131072, 131072, "GPT-OSS 120B", "OpenRouter", "OpenAI", 1
),
LlmModel.OPENAI_GPT_OSS_20B: ModelMetadata(
"open_router", 131072, 32768, "GPT-OSS 20B", "OpenRouter", "OpenAI", 1
),
LlmModel.AMAZON_NOVA_LITE_V1: ModelMetadata(
"open_router", 300000, 5120, "Nova Lite V1", "OpenRouter", "Amazon", 1
),
LlmModel.AMAZON_NOVA_MICRO_V1: ModelMetadata(
"open_router", 128000, 5120, "Nova Micro V1", "OpenRouter", "Amazon", 1
),
LlmModel.AMAZON_NOVA_PRO_V1: ModelMetadata(
"open_router", 300000, 5120, "Nova Pro V1", "OpenRouter", "Amazon", 1
),
LlmModel.MICROSOFT_WIZARDLM_2_8X22B: ModelMetadata(
"open_router", 65536, 4096, "WizardLM 2 8x22B", "OpenRouter", "Microsoft", 1
),
LlmModel.GRYPHE_MYTHOMAX_L2_13B: ModelMetadata(
"open_router", 4096, 4096, "MythoMax L2 13B", "OpenRouter", "Gryphe", 1
),
LlmModel.META_LLAMA_4_SCOUT: ModelMetadata(
"open_router", 131072, 131072, "Llama 4 Scout", "OpenRouter", "Meta", 1
),
LlmModel.META_LLAMA_4_MAVERICK: ModelMetadata(
"open_router", 1048576, 1000000, "Llama 4 Maverick", "OpenRouter", "Meta", 1
),
LlmModel.GROK_4: ModelMetadata(
"open_router", 256000, 256000, "Grok 4", "OpenRouter", "xAI", 3
),
LlmModel.GROK_4_FAST: ModelMetadata(
"open_router", 2000000, 30000, "Grok 4 Fast", "OpenRouter", "xAI", 1
),
LlmModel.GROK_4_1_FAST: ModelMetadata(
"open_router", 2000000, 30000, "Grok 4.1 Fast", "OpenRouter", "xAI", 1
),
LlmModel.GROK_CODE_FAST_1: ModelMetadata(
"open_router", 256000, 10000, "Grok Code Fast 1", "OpenRouter", "xAI", 1
),
LlmModel.KIMI_K2: ModelMetadata(
"open_router", 131000, 131000, "Kimi K2", "OpenRouter", "Moonshot AI", 1
),
LlmModel.QWEN3_235B_A22B_THINKING: ModelMetadata(
"open_router",
262144,
262144,
"Qwen 3 235B A22B Thinking 2507",
"OpenRouter",
"Qwen",
1,
),
LlmModel.QWEN3_CODER: ModelMetadata(
"open_router", 262144, 262144, "Qwen 3 Coder", "OpenRouter", "Qwen", 3
), ),
LlmModel.OPENAI_GPT_OSS_120B: ModelMetadata("open_router", 131072, 131072),
LlmModel.OPENAI_GPT_OSS_20B: ModelMetadata("open_router", 131072, 32768),
LlmModel.AMAZON_NOVA_LITE_V1: ModelMetadata("open_router", 300000, 5120),
LlmModel.AMAZON_NOVA_MICRO_V1: ModelMetadata("open_router", 128000, 5120),
LlmModel.AMAZON_NOVA_PRO_V1: ModelMetadata("open_router", 300000, 5120),
LlmModel.MICROSOFT_WIZARDLM_2_8X22B: ModelMetadata("open_router", 65536, 4096),
LlmModel.GRYPHE_MYTHOMAX_L2_13B: ModelMetadata("open_router", 4096, 4096),
LlmModel.META_LLAMA_4_SCOUT: ModelMetadata("open_router", 131072, 131072),
LlmModel.META_LLAMA_4_MAVERICK: ModelMetadata("open_router", 1048576, 1000000),
LlmModel.GROK_4: ModelMetadata("open_router", 256000, 256000),
LlmModel.GROK_4_FAST: ModelMetadata("open_router", 2000000, 30000),
LlmModel.GROK_4_1_FAST: ModelMetadata("open_router", 2000000, 30000),
LlmModel.GROK_CODE_FAST_1: ModelMetadata("open_router", 256000, 10000),
LlmModel.KIMI_K2: ModelMetadata("open_router", 131000, 131000),
LlmModel.QWEN3_235B_A22B_THINKING: ModelMetadata("open_router", 262144, 262144),
LlmModel.QWEN3_CODER: ModelMetadata("open_router", 262144, 262144),
# Llama API models # Llama API models
LlmModel.LLAMA_API_LLAMA_4_SCOUT: ModelMetadata("llama_api", 128000, 4028), LlmModel.LLAMA_API_LLAMA_4_SCOUT: ModelMetadata(
LlmModel.LLAMA_API_LLAMA4_MAVERICK: ModelMetadata("llama_api", 128000, 4028), "llama_api",
LlmModel.LLAMA_API_LLAMA3_3_8B: ModelMetadata("llama_api", 128000, 4028), 128000,
LlmModel.LLAMA_API_LLAMA3_3_70B: ModelMetadata("llama_api", 128000, 4028), 4028,
"Llama 4 Scout 17B 16E Instruct FP8",
"Llama API",
"Meta",
1,
),
LlmModel.LLAMA_API_LLAMA4_MAVERICK: ModelMetadata(
"llama_api",
128000,
4028,
"Llama 4 Maverick 17B 128E Instruct FP8",
"Llama API",
"Meta",
1,
),
LlmModel.LLAMA_API_LLAMA3_3_8B: ModelMetadata(
"llama_api", 128000, 4028, "Llama 3.3 8B Instruct", "Llama API", "Meta", 1
),
LlmModel.LLAMA_API_LLAMA3_3_70B: ModelMetadata(
"llama_api", 128000, 4028, "Llama 3.3 70B Instruct", "Llama API", "Meta", 1
),
# v0 by Vercel models # v0 by Vercel models
LlmModel.V0_1_5_MD: ModelMetadata("v0", 128000, 64000), LlmModel.V0_1_5_MD: ModelMetadata("v0", 128000, 64000, "v0 1.5 MD", "V0", "V0", 1),
LlmModel.V0_1_5_LG: ModelMetadata("v0", 512000, 64000), LlmModel.V0_1_5_LG: ModelMetadata("v0", 512000, 64000, "v0 1.5 LG", "V0", "V0", 1),
LlmModel.V0_1_0_MD: ModelMetadata("v0", 128000, 64000), LlmModel.V0_1_0_MD: ModelMetadata("v0", 128000, 64000, "v0 1.0 MD", "V0", "V0", 1),
} }
DEFAULT_LLM_MODEL = LlmModel.GPT5_2 DEFAULT_LLM_MODEL = LlmModel.GPT5_2
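Reviewer note: the positional arguments above read most easily against the shape of the tuple they fill. Below is a minimal sketch of what the extended `ModelMetadata` appears to carry; every field name is an assumption inferred from the call sites, and only the argument order comes from the diff.

```python
# Sketch only: field names are guesses from the call sites in this diff;
# the real definition lives elsewhere in the llm module.
from typing import NamedTuple

class ModelMetadata(NamedTuple):
    provider: str           # e.g. "open_router", "llama_api", "v0"
    context_window: int     # max input tokens, e.g. 128000
    max_output_tokens: int  # e.g. 4096
    display_name: str       # new field, e.g. "Sonar Pro"
    provider_name: str      # new field, e.g. "OpenRouter"
    creator_name: str       # new field, e.g. "Perplexity"
    cost_factor: int        # new field, rough price tier, e.g. 1-3
```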

View File

@@ -242,7 +242,7 @@ async def test_smart_decision_maker_tracks_llm_stats():
     outputs = {}

     # Create execution context
-    mock_execution_context = ExecutionContext(safe_mode=False)
+    mock_execution_context = ExecutionContext(human_in_the_loop_safe_mode=False)

     # Create a mock execution processor for tests

@@ -343,7 +343,7 @@ async def test_smart_decision_maker_parameter_validation():
     # Create execution context
-    mock_execution_context = ExecutionContext(safe_mode=False)
+    mock_execution_context = ExecutionContext(human_in_the_loop_safe_mode=False)

     # Create a mock execution processor for tests

@@ -409,7 +409,7 @@ async def test_smart_decision_maker_parameter_validation():
     # Create execution context
-    mock_execution_context = ExecutionContext(safe_mode=False)
+    mock_execution_context = ExecutionContext(human_in_the_loop_safe_mode=False)

     # Create a mock execution processor for tests

@@ -471,7 +471,7 @@ async def test_smart_decision_maker_parameter_validation():
     outputs = {}

     # Create execution context
-    mock_execution_context = ExecutionContext(safe_mode=False)
+    mock_execution_context = ExecutionContext(human_in_the_loop_safe_mode=False)

     # Create a mock execution processor for tests

@@ -535,7 +535,7 @@ async def test_smart_decision_maker_parameter_validation():
     outputs = {}

     # Create execution context
-    mock_execution_context = ExecutionContext(safe_mode=False)
+    mock_execution_context = ExecutionContext(human_in_the_loop_safe_mode=False)

     # Create a mock execution processor for tests

@@ -658,7 +658,7 @@ async def test_smart_decision_maker_raw_response_conversion():
     outputs = {}

     # Create execution context
-    mock_execution_context = ExecutionContext(safe_mode=False)
+    mock_execution_context = ExecutionContext(human_in_the_loop_safe_mode=False)

     # Create a mock execution processor for tests

@@ -730,7 +730,7 @@ async def test_smart_decision_maker_raw_response_conversion():
     outputs = {}

     # Create execution context
-    mock_execution_context = ExecutionContext(safe_mode=False)
+    mock_execution_context = ExecutionContext(human_in_the_loop_safe_mode=False)

     # Create a mock execution processor for tests

@@ -786,7 +786,7 @@ async def test_smart_decision_maker_raw_response_conversion():
     outputs = {}

     # Create execution context
-    mock_execution_context = ExecutionContext(safe_mode=False)
+    mock_execution_context = ExecutionContext(human_in_the_loop_safe_mode=False)

     # Create a mock execution processor for tests

@@ -905,7 +905,7 @@ async def test_smart_decision_maker_agent_mode():
     # Create a mock execution context
     mock_execution_context = ExecutionContext(
-        safe_mode=False,
+        human_in_the_loop_safe_mode=False,
     )

     # Create a mock execution processor for agent mode tests

@@ -1027,7 +1027,7 @@ async def test_smart_decision_maker_traditional_mode_default():
     # Create execution context
-    mock_execution_context = ExecutionContext(safe_mode=False)
+    mock_execution_context = ExecutionContext(human_in_the_loop_safe_mode=False)

     # Create a mock execution processor for tests

View File

@@ -386,7 +386,7 @@ async def test_output_yielding_with_dynamic_fields():
     outputs = {}

     from backend.data.execution import ExecutionContext

-    mock_execution_context = ExecutionContext(safe_mode=False)
+    mock_execution_context = ExecutionContext(human_in_the_loop_safe_mode=False)
     mock_execution_processor = MagicMock()

     async for output_name, output_value in block.run(

@@ -609,7 +609,9 @@ async def test_validation_errors_dont_pollute_conversation():
     outputs = {}

     from backend.data.execution import ExecutionContext

-    mock_execution_context = ExecutionContext(safe_mode=False)
+    mock_execution_context = ExecutionContext(
+        human_in_the_loop_safe_mode=False
+    )

     # Create a proper mock execution processor for agent mode
     from collections import defaultdict

View File

@@ -474,7 +474,7 @@ class Block(ABC, Generic[BlockSchemaInputType, BlockSchemaOutputType]):
         self.block_type = block_type
         self.webhook_config = webhook_config
         self.execution_stats: NodeExecutionStats = NodeExecutionStats()
-        self.requires_human_review: bool = False
+        self.is_sensitive_action: bool = False

         if self.webhook_config:
             if isinstance(self.webhook_config, BlockWebhookConfig):

@@ -637,8 +637,9 @@ class Block(ABC, Generic[BlockSchemaInputType, BlockSchemaOutputType]):
         - should_pause: True if execution should be paused for review
         - input_data_to_use: The input data to use (may be modified by reviewer)
         """
-        # Skip review if not required or safe mode is disabled
-        if not self.requires_human_review or not execution_context.safe_mode:
+        if not (
+            self.is_sensitive_action and execution_context.sensitive_action_safe_mode
+        ):
             return False, input_data

         from backend.blocks.helpers.review import HITLReviewHelper
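The gate above inverts cleanly: a block pauses for review only when it both declares `is_sensitive_action` and runs in a context with `sensitive_action_safe_mode` enabled. A self-contained sketch of the resulting truth table; only those two flag names come from the source, the surrounding types are hypothetical.

```python
# Minimal sketch of the review gate from the diff above.
from dataclasses import dataclass

@dataclass
class FakeExecutionContext:  # illustrative stand-in, not the real class
    sensitive_action_safe_mode: bool = False

def needs_review(is_sensitive_action: bool, ctx: FakeExecutionContext) -> bool:
    # Mirrors: `if not (self.is_sensitive_action and ctx.sensitive_action_safe_mode)`
    return is_sensitive_action and ctx.sensitive_action_safe_mode

assert needs_review(True, FakeExecutionContext(sensitive_action_safe_mode=True))
assert not needs_review(True, FakeExecutionContext(sensitive_action_safe_mode=False))
assert not needs_review(False, FakeExecutionContext(sensitive_action_safe_mode=True))
```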

View File

@@ -99,10 +99,15 @@ MODEL_COST: dict[LlmModel, int] = {
     LlmModel.OPENAI_GPT_OSS_20B: 1,
     LlmModel.GEMINI_2_5_PRO: 4,
     LlmModel.GEMINI_3_PRO_PREVIEW: 5,
+    LlmModel.GEMINI_2_5_FLASH: 1,
+    LlmModel.GEMINI_2_0_FLASH: 1,
+    LlmModel.GEMINI_2_5_FLASH_LITE_PREVIEW: 1,
+    LlmModel.GEMINI_2_0_FLASH_LITE: 1,
     LlmModel.MISTRAL_NEMO: 1,
     LlmModel.COHERE_COMMAND_R_08_2024: 1,
     LlmModel.COHERE_COMMAND_R_PLUS_08_2024: 3,
     LlmModel.DEEPSEEK_CHAT: 2,
+    LlmModel.DEEPSEEK_R1_0528: 1,
     LlmModel.PERPLEXITY_SONAR: 1,
     LlmModel.PERPLEXITY_SONAR_PRO: 5,
     LlmModel.PERPLEXITY_SONAR_DEEP_RESEARCH: 10,

@@ -126,11 +131,6 @@ MODEL_COST: dict[LlmModel, int] = {
     LlmModel.KIMI_K2: 1,
     LlmModel.QWEN3_235B_A22B_THINKING: 1,
     LlmModel.QWEN3_CODER: 9,
-    LlmModel.GEMINI_2_5_FLASH: 1,
-    LlmModel.GEMINI_2_0_FLASH: 1,
-    LlmModel.GEMINI_2_5_FLASH_LITE_PREVIEW: 1,
-    LlmModel.GEMINI_2_0_FLASH_LITE: 1,
-    LlmModel.DEEPSEEK_R1_0528: 1,
     # v0 by Vercel models
     LlmModel.V0_1_5_MD: 1,
     LlmModel.V0_1_5_LG: 2,

View File

@@ -38,20 +38,6 @@ POOL_TIMEOUT = os.getenv("DB_POOL_TIMEOUT")
 if POOL_TIMEOUT:
     DATABASE_URL = add_param(DATABASE_URL, "pool_timeout", POOL_TIMEOUT)

-# Add public schema to search_path for pgvector type access
-# The vector extension is in public schema, but search_path is determined by schema parameter
-# Extract the schema from DATABASE_URL or default to 'public' (matching get_database_schema())
-parsed_url = urlparse(DATABASE_URL)
-url_params = dict(parse_qsl(parsed_url.query))
-db_schema = url_params.get("schema", "public")
-# Build search_path, avoiding duplicates if db_schema is already 'public'
-search_path_schemas = list(
-    dict.fromkeys([db_schema, "public"])
-)  # Preserves order, removes duplicates
-search_path = ",".join(search_path_schemas)
-# This allows using ::vector without schema qualification
-DATABASE_URL = add_param(DATABASE_URL, "options", f"-c search_path={search_path}")

 HTTP_TIMEOUT = int(POOL_TIMEOUT) if POOL_TIMEOUT else None

 prisma = Prisma(

@@ -127,38 +113,48 @@ async def _raw_with_schema(
     *args,
     execute: bool = False,
     client: Prisma | None = None,
-    set_public_search_path: bool = False,
 ) -> list[dict] | int:
     """Internal: Execute raw SQL with proper schema handling.

     Use query_raw_with_schema() or execute_raw_with_schema() instead.

+    Supports placeholders:
+    - {schema_prefix}: Table/type prefix (e.g., "platform".)
+    - {schema}: Raw schema name for application tables (e.g., platform)
+
+    Note on pgvector types:
+        Use unqualified ::vector and <=> operator in queries. PostgreSQL resolves
+        these via search_path, which includes the schema where pgvector is installed
+        on all environments (local, CI, dev).
+
     Args:
-        query_template: SQL query with {schema_prefix} placeholder
+        query_template: SQL query with {schema_prefix} and/or {schema} placeholders
         *args: Query parameters
         execute: If False, executes SELECT query. If True, executes INSERT/UPDATE/DELETE.
         client: Optional Prisma client for transactions (only used when execute=True).
-        set_public_search_path: If True, sets search_path to include public schema.
-            Needed for pgvector types and other public schema objects.

     Returns:
         - list[dict] if execute=False (query results)
        - int if execute=True (number of affected rows)

+    Example with vector type:
+        await execute_raw_with_schema(
+            'INSERT INTO {schema_prefix}"Embedding" (vec) VALUES ($1::vector)',
+            embedding_data
+        )
     """
     schema = get_database_schema()
     schema_prefix = f'"{schema}".' if schema != "public" else ""
-    formatted_query = query_template.format(schema_prefix=schema_prefix)
+    formatted_query = query_template.format(
+        schema_prefix=schema_prefix,
+        schema=schema,
+    )

     import prisma as prisma_module

     db_client = client if client else prisma_module.get_client()

-    # Set search_path to include public schema if requested
-    # Prisma doesn't support the 'options' connection parameter, so we set it per-session
-    # This is idempotent and safe to call multiple times
-    if set_public_search_path:
-        await db_client.execute_raw(f"SET search_path = {schema}, public")  # type: ignore
-
     if execute:
         result = await db_client.execute_raw(formatted_query, *args)  # type: ignore
     else:

@@ -167,16 +163,12 @@ async def _raw_with_schema(
     return result

-async def query_raw_with_schema(
-    query_template: str, *args, set_public_search_path: bool = False
-) -> list[dict]:
+async def query_raw_with_schema(query_template: str, *args) -> list[dict]:
     """Execute raw SQL SELECT query with proper schema handling.

     Args:
-        query_template: SQL query with {schema_prefix} placeholder
+        query_template: SQL query with {schema_prefix} and/or {schema} placeholders
         *args: Query parameters
-        set_public_search_path: If True, sets search_path to include public schema.
-            Needed for pgvector types and other public schema objects.

     Returns:
         List of result rows as dictionaries

@@ -187,23 +179,20 @@ async def query_raw_with_schema(
         user_id
     )
     """
-    return await _raw_with_schema(query_template, *args, execute=False, set_public_search_path=set_public_search_path)  # type: ignore
+    return await _raw_with_schema(query_template, *args, execute=False)  # type: ignore

 async def execute_raw_with_schema(
     query_template: str,
     *args,
     client: Prisma | None = None,
-    set_public_search_path: bool = False,
 ) -> int:
     """Execute raw SQL command (INSERT/UPDATE/DELETE) with proper schema handling.

     Args:
-        query_template: SQL query with {schema_prefix} placeholder
+        query_template: SQL query with {schema_prefix} and/or {schema} placeholders
         *args: Query parameters
         client: Optional Prisma client for transactions
-        set_public_search_path: If True, sets search_path to include public schema.
-            Needed for pgvector types and other public schema objects.

     Returns:
         Number of affected rows

@@ -215,7 +204,7 @@ async def execute_raw_with_schema(
         client=tx  # Optional transaction client
     )
     """
-    return await _raw_with_schema(query_template, *args, execute=True, client=client, set_public_search_path=set_public_search_path)  # type: ignore
+    return await _raw_with_schema(query_template, *args, execute=True, client=client)  # type: ignore

 class BaseDbModel(BaseModel):
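For reference, how the two placeholders expand in practice. This is a plain-string sketch assuming `get_database_schema()` returns `"platform"`; the `"Embedding"` table name is illustrative only.

```python
# Standalone sketch of the placeholder expansion in _raw_with_schema.
schema = "platform"  # assumed return value of get_database_schema()
schema_prefix = f'"{schema}".' if schema != "public" else ""

template = 'INSERT INTO {schema_prefix}"Embedding" (vec) VALUES ($1::vector)'
print(template.format(schema_prefix=schema_prefix, schema=schema))
# -> INSERT INTO "platform"."Embedding" (vec) VALUES ($1::vector)
# With schema == "public" the prefix collapses to "" and the unqualified
# name resolves via the default search_path.
```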

View File

@@ -103,8 +103,18 @@ class RedisEventBus(BaseRedisEventBus[M], ABC):
         return redis.get_redis()

     def publish_event(self, event: M, channel_key: str):
-        message, full_channel_name = self._serialize_message(event, channel_key)
-        self.connection.publish(full_channel_name, message)
+        """
+        Publish an event to Redis. Gracefully handles connection failures
+        by logging the error instead of raising exceptions.
+        """
+        try:
+            message, full_channel_name = self._serialize_message(event, channel_key)
+            self.connection.publish(full_channel_name, message)
+        except Exception:
+            logger.exception(
+                f"Failed to publish event to Redis channel {channel_key}. "
+                "Event bus operation will continue without Redis connectivity."
+            )

     def listen_events(self, channel_key: str) -> Generator[M, None, None]:
         pubsub, full_channel_name = self._get_pubsub_channel(

@@ -128,9 +138,19 @@ class AsyncRedisEventBus(BaseRedisEventBus[M], ABC):
         return await redis.get_redis_async()

     async def publish_event(self, event: M, channel_key: str):
-        message, full_channel_name = self._serialize_message(event, channel_key)
-        connection = await self.connection
-        await connection.publish(full_channel_name, message)
+        """
+        Publish an event to Redis. Gracefully handles connection failures
+        by logging the error instead of raising exceptions.
+        """
+        try:
+            message, full_channel_name = self._serialize_message(event, channel_key)
+            connection = await self.connection
+            await connection.publish(full_channel_name, message)
+        except Exception:
+            logger.exception(
+                f"Failed to publish event to Redis channel {channel_key}. "
+                "Event bus operation will continue without Redis connectivity."
+            )

     async def listen_events(self, channel_key: str) -> AsyncGenerator[M, None]:
         pubsub, full_channel_name = self._get_pubsub_channel(

View File

@@ -0,0 +1,56 @@
"""
Tests for event_bus graceful degradation when Redis is unavailable.
"""
from unittest.mock import AsyncMock, patch
import pytest
from pydantic import BaseModel
from backend.data.event_bus import AsyncRedisEventBus
class TestEvent(BaseModel):
"""Test event model."""
message: str
class TestNotificationBus(AsyncRedisEventBus[TestEvent]):
"""Test implementation of AsyncRedisEventBus."""
Model = TestEvent
@property
def event_bus_name(self) -> str:
return "test_event_bus"
@pytest.mark.asyncio
async def test_publish_event_handles_connection_failure_gracefully():
"""Test that publish_event logs exception instead of raising when Redis is unavailable."""
bus = TestNotificationBus()
event = TestEvent(message="test message")
# Mock get_redis_async to raise connection error
with patch(
"backend.data.event_bus.redis.get_redis_async",
side_effect=ConnectionError("Authentication required."),
):
# Should not raise exception
await bus.publish_event(event, "test_channel")
@pytest.mark.asyncio
async def test_publish_event_works_with_redis_available():
"""Test that publish_event works normally when Redis is available."""
bus = TestNotificationBus()
event = TestEvent(message="test message")
# Mock successful Redis connection
mock_redis = AsyncMock()
mock_redis.publish = AsyncMock()
with patch("backend.data.event_bus.redis.get_redis_async", return_value=mock_redis):
await bus.publish_event(event, "test_channel")
mock_redis.publish.assert_called_once()

View File

@@ -81,7 +81,10 @@ class ExecutionContext(BaseModel):
     This includes information needed by blocks, sub-graphs, and execution management.
     """

-    safe_mode: bool = True
+    model_config = {"extra": "ignore"}
+
+    human_in_the_loop_safe_mode: bool = True
+    sensitive_action_safe_mode: bool = False
     user_timezone: str = "UTC"
     root_execution_id: Optional[str] = None
     parent_execution_id: Optional[str] = None
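Worth noting why `model_config = {"extra": "ignore"}` accompanies the rename: contexts serialized before this change may still carry the old `safe_mode` key, and it should be dropped rather than fail validation. A sketch of that behavior, assuming pydantic v2:

```python
from pydantic import BaseModel

class ExecutionContext(BaseModel):
    model_config = {"extra": "ignore"}

    human_in_the_loop_safe_mode: bool = True
    sensitive_action_safe_mode: bool = False

# A payload persisted before the rename still parses: the unknown legacy
# key is silently discarded and the new defaults apply.
ctx = ExecutionContext.model_validate({"safe_mode": False})
assert ctx.human_in_the_loop_safe_mode is True
```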

View File

@@ -3,7 +3,7 @@ import logging
 import uuid
 from collections import defaultdict
 from datetime import datetime, timezone
-from typing import TYPE_CHECKING, Any, Literal, Optional, cast
+from typing import TYPE_CHECKING, Annotated, Any, Literal, Optional, cast

 from prisma.enums import SubmissionStatus
 from prisma.models import (

@@ -20,7 +20,7 @@ from prisma.types import (
     AgentNodeLinkCreateInput,
     StoreListingVersionWhereInput,
 )
-from pydantic import BaseModel, Field, create_model
+from pydantic import BaseModel, BeforeValidator, Field, create_model
 from pydantic.fields import computed_field

 from backend.blocks.agent import AgentExecutorBlock

@@ -62,7 +62,31 @@ logger = logging.getLogger(__name__)
 class GraphSettings(BaseModel):
-    human_in_the_loop_safe_mode: bool | None = None
+    # Use Annotated with BeforeValidator to coerce None to default values.
+    # This handles cases where the database has null values for these fields.
+    model_config = {"extra": "ignore"}
+
+    human_in_the_loop_safe_mode: Annotated[
+        bool, BeforeValidator(lambda v: v if v is not None else True)
+    ] = True
+    sensitive_action_safe_mode: Annotated[
+        bool, BeforeValidator(lambda v: v if v is not None else False)
+    ] = False
+
+    @classmethod
+    def from_graph(
+        cls,
+        graph: "GraphModel",
+        hitl_safe_mode: bool | None = None,
+        sensitive_action_safe_mode: bool = False,
+    ) -> "GraphSettings":
+        # Default to True if not explicitly set
+        if hitl_safe_mode is None:
+            hitl_safe_mode = True
+        return cls(
+            human_in_the_loop_safe_mode=hitl_safe_mode,
+            sensitive_action_safe_mode=sensitive_action_safe_mode,
+        )

 class Link(BaseDbModel):

@@ -244,10 +268,14 @@ class BaseGraph(BaseDbModel):
         return any(
             node.block_id
             for node in self.nodes
-            if (
-                node.block.block_type == BlockType.HUMAN_IN_THE_LOOP
-                or node.block.requires_human_review
-            )
+            if node.block.block_type == BlockType.HUMAN_IN_THE_LOOP
         )
+
+    @computed_field
+    @property
+    def has_sensitive_action(self) -> bool:
+        return any(
+            node.block_id for node in self.nodes if node.block.is_sensitive_action
+        )

     @property
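The `BeforeValidator` pattern above is the load-bearing piece: it coerces a database `NULL` into the field default before bool validation runs. A self-contained sketch of the behavior (pydantic v2 assumed; the `Settings`/`hitl`/`sensitive` names here are hypothetical):

```python
from typing import Annotated
from pydantic import BaseModel, BeforeValidator

class Settings(BaseModel):
    hitl: Annotated[bool, BeforeValidator(lambda v: v if v is not None else True)] = True
    sensitive: Annotated[bool, BeforeValidator(lambda v: v if v is not None else False)] = False

# None from a nullable DB column is coerced instead of raising a bool error.
assert Settings.model_validate({"hitl": None}).hitl is True
assert Settings.model_validate({"sensitive": None}).sensitive is False
```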

View File

@@ -328,6 +328,8 @@ async def clear_business_understanding(user_id: str) -> bool:
 def format_understanding_for_prompt(understanding: BusinessUnderstanding) -> str:
     """Format business understanding as text for system prompt injection."""
+    if not understanding:
+        return ""
     sections = []

     # User info section

View File

@@ -309,7 +309,7 @@ def ensure_embeddings_coverage():
     # Process in batches until no more missing embeddings
     while True:
-        result = db_client.backfill_missing_embeddings(batch_size=10)
+        result = db_client.backfill_missing_embeddings(batch_size=100)
         total_processed += result["processed"]
         total_success += result["success"]

View File

@@ -873,11 +873,8 @@ async def add_graph_execution(
     settings = await gdb.get_graph_settings(user_id=user_id, graph_id=graph_id)
     execution_context = ExecutionContext(
-        safe_mode=(
-            settings.human_in_the_loop_safe_mode
-            if settings.human_in_the_loop_safe_mode is not None
-            else True
-        ),
+        human_in_the_loop_safe_mode=settings.human_in_the_loop_safe_mode,
+        sensitive_action_safe_mode=settings.sensitive_action_safe_mode,
         user_timezone=(
             user.timezone if user.timezone != USER_TIMEZONE_NOT_SET else "UTC"
         ),

View File

@@ -386,6 +386,7 @@ async def test_add_graph_execution_is_repeatable(mocker: MockerFixture):
     mock_user.timezone = "UTC"
     mock_settings = mocker.MagicMock()
     mock_settings.human_in_the_loop_safe_mode = True
+    mock_settings.sensitive_action_safe_mode = False

     mock_udb.get_user_by_id = mocker.AsyncMock(return_value=mock_user)
     mock_gdb.get_graph_settings = mocker.AsyncMock(return_value=mock_settings)

@@ -651,6 +652,7 @@ async def test_add_graph_execution_with_nodes_to_skip(mocker: MockerFixture):
     mock_user.timezone = "UTC"
     mock_settings = mocker.MagicMock()
     mock_settings.human_in_the_loop_safe_mode = True
+    mock_settings.sensitive_action_safe_mode = False

     mock_udb.get_user_by_id = mocker.AsyncMock(return_value=mock_user)
     mock_gdb.get_graph_settings = mocker.AsyncMock(return_value=mock_settings)

View File

@@ -1,9 +1,10 @@
 -- CreateExtension
 -- Supabase: pgvector must be enabled via Dashboard → Database → Extensions first
--- Create in public schema so vector type is available across all schemas
+-- Creates extension in current schema (determined by search_path from DATABASE_URL ?schema= param)
+-- This ensures vector type is in the same schema as tables, making ::vector work without explicit qualification
 DO $$
 BEGIN
-    CREATE EXTENSION IF NOT EXISTS "vector" WITH SCHEMA "public";
+    CREATE EXTENSION IF NOT EXISTS "vector";
 EXCEPTION WHEN OTHERS THEN
     RAISE NOTICE 'vector extension not available or already exists, skipping';
 END $$;

@@ -19,7 +20,7 @@ CREATE TABLE "UnifiedContentEmbedding" (
     "contentType" "ContentType" NOT NULL,
     "contentId" TEXT NOT NULL,
     "userId" TEXT,
-    "embedding" public.vector(1536) NOT NULL,
+    "embedding" vector(1536) NOT NULL,
     "searchableText" TEXT NOT NULL,
     "metadata" JSONB NOT NULL DEFAULT '{}',

@@ -45,4 +46,4 @@ CREATE UNIQUE INDEX "UnifiedContentEmbedding_contentType_contentId_userId_key" O
 -- Uses cosine distance operator (<=>), which matches the query in hybrid_search.py
 -- Note: Drop first in case Prisma created a btree index (Prisma doesn't support HNSW)
 DROP INDEX IF EXISTS "UnifiedContentEmbedding_embedding_idx";
-CREATE INDEX "UnifiedContentEmbedding_embedding_idx" ON "UnifiedContentEmbedding" USING hnsw ("embedding" public.vector_cosine_ops);
+CREATE INDEX "UnifiedContentEmbedding_embedding_idx" ON "UnifiedContentEmbedding" USING hnsw ("embedding" vector_cosine_ops);

View File

@@ -366,12 +366,12 @@ def generate_block_markdown(
lines.append("") lines.append("")
# What it is (full description) # What it is (full description)
lines.append(f"### What it is") lines.append("### What it is")
lines.append(block.description or "No description available.") lines.append(block.description or "No description available.")
lines.append("") lines.append("")
# How it works (manual section) # How it works (manual section)
lines.append(f"### How it works") lines.append("### How it works")
how_it_works = manual_content.get( how_it_works = manual_content.get(
"how_it_works", "_Add technical explanation here._" "how_it_works", "_Add technical explanation here._"
) )
@@ -383,7 +383,7 @@ def generate_block_markdown(
# Inputs table (auto-generated) # Inputs table (auto-generated)
visible_inputs = [f for f in block.inputs if not f.hidden] visible_inputs = [f for f in block.inputs if not f.hidden]
if visible_inputs: if visible_inputs:
lines.append(f"### Inputs") lines.append("### Inputs")
lines.append("") lines.append("")
lines.append("| Input | Description | Type | Required |") lines.append("| Input | Description | Type | Required |")
lines.append("|-------|-------------|------|----------|") lines.append("|-------|-------------|------|----------|")
@@ -400,7 +400,7 @@ def generate_block_markdown(
# Outputs table (auto-generated) # Outputs table (auto-generated)
visible_outputs = [f for f in block.outputs if not f.hidden] visible_outputs = [f for f in block.outputs if not f.hidden]
if visible_outputs: if visible_outputs:
lines.append(f"### Outputs") lines.append("### Outputs")
lines.append("") lines.append("")
lines.append("| Output | Description | Type |") lines.append("| Output | Description | Type |")
lines.append("|--------|-------------|------|") lines.append("|--------|-------------|------|")
@@ -414,7 +414,7 @@ def generate_block_markdown(
lines.append("") lines.append("")
# Possible use case (manual section) # Possible use case (manual section)
lines.append(f"### Possible use case") lines.append("### Possible use case")
use_case = manual_content.get("use_case", "_Add practical use case examples here._") use_case = manual_content.get("use_case", "_Add practical use case examples here._")
lines.append("<!-- MANUAL: use_case -->") lines.append("<!-- MANUAL: use_case -->")
lines.append(use_case) lines.append(use_case)

View File

@@ -11,6 +11,7 @@
"forked_from_version": null, "forked_from_version": null,
"has_external_trigger": false, "has_external_trigger": false,
"has_human_in_the_loop": false, "has_human_in_the_loop": false,
"has_sensitive_action": false,
"id": "graph-123", "id": "graph-123",
"input_schema": { "input_schema": {
"properties": {}, "properties": {},

View File

@@ -11,6 +11,7 @@
"forked_from_version": null, "forked_from_version": null,
"has_external_trigger": false, "has_external_trigger": false,
"has_human_in_the_loop": false, "has_human_in_the_loop": false,
"has_sensitive_action": false,
"id": "graph-123", "id": "graph-123",
"input_schema": { "input_schema": {
"properties": {}, "properties": {},

View File

@@ -27,6 +27,8 @@
"properties": {} "properties": {}
}, },
"has_external_trigger": false, "has_external_trigger": false,
"has_human_in_the_loop": false,
"has_sensitive_action": false,
"trigger_setup_info": null, "trigger_setup_info": null,
"new_output": false, "new_output": false,
"can_access_graph": true, "can_access_graph": true,
@@ -34,7 +36,8 @@
"is_favorite": false, "is_favorite": false,
"recommended_schedule_cron": null, "recommended_schedule_cron": null,
"settings": { "settings": {
"human_in_the_loop_safe_mode": null "human_in_the_loop_safe_mode": true,
"sensitive_action_safe_mode": false
}, },
"marketplace_listing": null "marketplace_listing": null
}, },
@@ -65,6 +68,8 @@
"properties": {} "properties": {}
}, },
"has_external_trigger": false, "has_external_trigger": false,
"has_human_in_the_loop": false,
"has_sensitive_action": false,
"trigger_setup_info": null, "trigger_setup_info": null,
"new_output": false, "new_output": false,
"can_access_graph": false, "can_access_graph": false,
@@ -72,7 +77,8 @@
"is_favorite": false, "is_favorite": false,
"recommended_schedule_cron": null, "recommended_schedule_cron": null,
"settings": { "settings": {
"human_in_the_loop_safe_mode": null "human_in_the_loop_safe_mode": true,
"sensitive_action_safe_mode": false
}, },
"marketplace_listing": null "marketplace_listing": null
} }

View File

@@ -29,4 +29,4 @@ NEXT_PUBLIC_CLOUDFLARE_TURNSTILE_SITE_KEY=
NEXT_PUBLIC_TURNSTILE=disabled NEXT_PUBLIC_TURNSTILE=disabled
# PR previews # PR previews
NEXT_PUBLIC_PREVIEW_STEALING_DEV= NEXT_PUBLIC_PREVIEW_STEALING_DEV=

View File

@@ -175,6 +175,8 @@ While server components and actions are cool and cutting-edge, they introduce a
 - Prefer [React Query](https://tanstack.com/query/latest/docs/framework/react/overview) for server state, colocated near consumers (see [state colocation](https://kentcdodds.com/blog/state-colocation-will-make-your-react-app-faster))
 - Co-locate UI state inside components/hooks; keep global state minimal
+- Avoid `useMemo` and `useCallback` unless you have a measured performance issue
+- Do not abuse `useEffect`; prefer state colocation and derive values directly when possible

 ### Styling and components
@@ -549,9 +551,48 @@ Files:
 Types:

 - Prefer `interface` for object shapes
-- Component props should be `interface Props { ... }`
+- Component props should be `interface Props { ... }` (not exported)
+- Only use specific exported names (e.g., `export interface MyComponentProps`) when the interface needs to be used outside the component
+- Keep type definitions inline with the component - do not create separate `types.ts` files unless types are shared across multiple files
 - Use precise types; avoid `any` and unsafe casts
+
+**Props naming examples:**
+
+```tsx
+// ✅ Good - internal props, not exported
+interface Props {
+  title: string;
+  onClose: () => void;
+}
+
+export function Modal({ title, onClose }: Props) {
+  // ...
+}
+
+// ✅ Good - exported when needed externally
+export interface ModalProps {
+  title: string;
+  onClose: () => void;
+}
+
+export function Modal({ title, onClose }: ModalProps) {
+  // ...
+}
+
+// ❌ Bad - unnecessarily specific name for internal use
+interface ModalComponentProps {
+  title: string;
+  onClose: () => void;
+}
+
+// ❌ Bad - separate types.ts file for single component
+// types.ts
+export interface ModalProps { ... }
+
+// Modal.tsx
+import type { ModalProps } from './types';
+```

 Parameters:

 - If more than one parameter is needed, pass a single `Args` object for clarity

View File

@@ -16,6 +16,12 @@ export default defineConfig({
client: "react-query", client: "react-query",
httpClient: "fetch", httpClient: "fetch",
indexFiles: false, indexFiles: false,
mock: {
type: "msw",
baseUrl: "http://localhost:3000/api/proxy",
generateEachHttpStatus: true,
delay: 0,
},
override: { override: {
mutator: { mutator: {
path: "./mutators/custom-mutator.ts", path: "./mutators/custom-mutator.ts",

View File

@@ -15,6 +15,8 @@
"types": "tsc --noEmit", "types": "tsc --noEmit",
"test": "NEXT_PUBLIC_PW_TEST=true next build --turbo && playwright test", "test": "NEXT_PUBLIC_PW_TEST=true next build --turbo && playwright test",
"test-ui": "NEXT_PUBLIC_PW_TEST=true next build --turbo && playwright test --ui", "test-ui": "NEXT_PUBLIC_PW_TEST=true next build --turbo && playwright test --ui",
"test:unit": "vitest run",
"test:unit:watch": "vitest",
"test:no-build": "playwright test", "test:no-build": "playwright test",
"gentests": "playwright codegen http://localhost:3000", "gentests": "playwright codegen http://localhost:3000",
"storybook": "storybook dev -p 6006", "storybook": "storybook dev -p 6006",
@@ -118,6 +120,7 @@
}, },
"devDependencies": { "devDependencies": {
"@chromatic-com/storybook": "4.1.2", "@chromatic-com/storybook": "4.1.2",
"happy-dom": "20.3.4",
"@opentelemetry/instrumentation": "0.209.0", "@opentelemetry/instrumentation": "0.209.0",
"@playwright/test": "1.56.1", "@playwright/test": "1.56.1",
"@storybook/addon-a11y": "9.1.5", "@storybook/addon-a11y": "9.1.5",
@@ -127,6 +130,8 @@
"@storybook/nextjs": "9.1.5", "@storybook/nextjs": "9.1.5",
"@tanstack/eslint-plugin-query": "5.91.2", "@tanstack/eslint-plugin-query": "5.91.2",
"@tanstack/react-query-devtools": "5.90.2", "@tanstack/react-query-devtools": "5.90.2",
"@testing-library/dom": "10.4.1",
"@testing-library/react": "16.3.2",
"@types/canvas-confetti": "1.9.0", "@types/canvas-confetti": "1.9.0",
"@types/lodash": "4.17.20", "@types/lodash": "4.17.20",
"@types/negotiator": "0.6.4", "@types/negotiator": "0.6.4",
@@ -135,6 +140,7 @@
"@types/react-dom": "18.3.5", "@types/react-dom": "18.3.5",
"@types/react-modal": "3.16.3", "@types/react-modal": "3.16.3",
"@types/react-window": "1.8.8", "@types/react-window": "1.8.8",
"@vitejs/plugin-react": "5.1.2",
"axe-playwright": "2.2.2", "axe-playwright": "2.2.2",
"chromatic": "13.3.3", "chromatic": "13.3.3",
"concurrently": "9.2.1", "concurrently": "9.2.1",
@@ -153,7 +159,9 @@
"require-in-the-middle": "8.0.1", "require-in-the-middle": "8.0.1",
"storybook": "9.1.5", "storybook": "9.1.5",
"tailwindcss": "3.4.17", "tailwindcss": "3.4.17",
"typescript": "5.9.3" "typescript": "5.9.3",
"vite-tsconfig-paths": "6.0.4",
"vitest": "4.0.17"
}, },
"msw": { "msw": {
"workerDirectory": [ "workerDirectory": [

File diff suppressed because it is too large

13 binary image files added (not shown); sizes range from 374 B to 72 KiB.
View File

@@ -0,0 +1,58 @@
"use client";
import { LoadingSpinner } from "@/components/atoms/LoadingSpinner/LoadingSpinner";
import { Text } from "@/components/atoms/Text/Text";
import { useToast } from "@/components/molecules/Toast/use-toast";
import { useSupabase } from "@/lib/supabase/hooks/useSupabase";
import { useRouter } from "next/navigation";
import { useEffect, useRef } from "react";
const LOGOUT_REDIRECT_DELAY_MS = 400;
function wait(ms: number): Promise<void> {
return new Promise(function resolveAfterDelay(resolve) {
setTimeout(resolve, ms);
});
}
export default function LogoutPage() {
const { logOut } = useSupabase();
const { toast } = useToast();
const router = useRouter();
const hasStartedRef = useRef(false);
useEffect(
function handleLogoutEffect() {
if (hasStartedRef.current) return;
hasStartedRef.current = true;
async function runLogout() {
try {
await logOut();
} catch {
toast({
title: "Failed to log out. Redirecting to login.",
variant: "destructive",
});
} finally {
await wait(LOGOUT_REDIRECT_DELAY_MS);
router.replace("/login");
}
}
void runLogout();
},
[logOut, router, toast],
);
return (
<div className="flex min-h-screen items-center justify-center px-4">
<div className="flex flex-col items-center justify-center gap-4 py-8">
<LoadingSpinner size="large" />
<Text variant="body" className="text-center">
Logging you out...
</Text>
</div>
</div>
);
}

View File

@@ -9,7 +9,7 @@ export async function GET(request: Request) {
   const { searchParams, origin } = new URL(request.url);
   const code = searchParams.get("code");
-  let next = "/marketplace";
+  let next = "/";

   if (code) {
     const supabase = await getServerSupabase();
View File

@@ -5,10 +5,11 @@ import {
   TooltipContent,
   TooltipTrigger,
 } from "@/components/atoms/Tooltip/BaseTooltip";
-import { PlayIcon, StopIcon } from "@phosphor-icons/react";
+import { CircleNotchIcon, PlayIcon, StopIcon } from "@phosphor-icons/react";
 import { useShallow } from "zustand/react/shallow";
 import { RunInputDialog } from "../RunInputDialog/RunInputDialog";
 import { useRunGraph } from "./useRunGraph";
+import { cn } from "@/lib/utils";

 export const RunGraph = ({ flowID }: { flowID: string | null }) => {
   const {

@@ -24,6 +25,31 @@ export const RunGraph = ({ flowID }: { flowID: string | null }) => {
     useShallow((state) => state.isGraphRunning),
   );

+  const isLoading = isExecutingGraph || isTerminatingGraph || isSaving;
+
+  // Determine which icon to show with proper animation
+  const renderIcon = () => {
+    const iconClass = cn(
+      "size-4 transition-transform duration-200 ease-out",
+      !isLoading && "group-hover:scale-110",
+    );
+
+    if (isLoading) {
+      return (
+        <CircleNotchIcon
+          className={cn(iconClass, "animate-spin")}
+          weight="bold"
+        />
+      );
+    }
+
+    if (isGraphRunning) {
+      return <StopIcon className={iconClass} weight="fill" />;
+    }
+
+    return <PlayIcon className={iconClass} weight="fill" />;
+  };
+
   return (
     <>
       <Tooltip>

@@ -33,18 +59,18 @@
             variant={isGraphRunning ? "destructive" : "primary"}
             data-id={isGraphRunning ? "stop-graph-button" : "run-graph-button"}
             onClick={isGraphRunning ? handleStopGraph : handleRunGraph}
-            disabled={!flowID || isExecutingGraph || isTerminatingGraph}
-            loading={isExecutingGraph || isTerminatingGraph || isSaving}
+            disabled={!flowID || isLoading}
+            className="group"
           >
-            {!isGraphRunning ? (
-              <PlayIcon className="size-4" />
-            ) : (
-              <StopIcon className="size-4" />
-            )}
+            {renderIcon()}
           </Button>
         </TooltipTrigger>
         <TooltipContent>
-          {isGraphRunning ? "Stop agent" : "Run agent"}
+          {isLoading
+            ? "Processing..."
+            : isGraphRunning
+              ? "Stop agent"
+              : "Run agent"}
         </TooltipContent>
       </Tooltip>
       <RunInputDialog

View File

@@ -10,6 +10,7 @@ import { useRunInputDialog } from "./useRunInputDialog";
 import { CronSchedulerDialog } from "../CronSchedulerDialog/CronSchedulerDialog";
 import { useTutorialStore } from "@/app/(platform)/build/stores/tutorialStore";
 import { useEffect } from "react";
+import { CredentialsGroupedView } from "@/components/contextual/CredentialsInput/components/CredentialsGroupedView/CredentialsGroupedView";

 export const RunInputDialog = ({
   isOpen,

@@ -23,19 +24,17 @@ export const RunInputDialog = ({
   const hasInputs = useGraphStore((state) => state.hasInputs);
   const hasCredentials = useGraphStore((state) => state.hasCredentials);
   const inputSchema = useGraphStore((state) => state.inputSchema);
-  const credentialsSchema = useGraphStore(
-    (state) => state.credentialsInputSchema,
-  );

   const {
-    credentialsUiSchema,
+    credentialFields,
+    requiredCredentials,
     handleManualRun,
     handleInputChange,
     openCronSchedulerDialog,
     setOpenCronSchedulerDialog,
     inputValues,
     credentialValues,
-    handleCredentialChange,
+    handleCredentialFieldChange,
     isExecutingGraph,
   } = useRunInputDialog({ setIsOpen });
@@ -62,67 +61,67 @@ export const RunInputDialog = ({
         isOpen,
         set: setIsOpen,
       }}
-      styling={{ maxWidth: "600px", minWidth: "600px" }}
+      styling={{ maxWidth: "700px", minWidth: "700px" }}
     >
       <Dialog.Content>
-        <div className="space-y-6 p-1" data-id="run-input-dialog-content">
-          {/* Credentials Section */}
-          {hasCredentials() && (
-            <div data-id="run-input-credentials-section">
-              <div className="mb-4">
-                <Text variant="h4" className="text-gray-900">
-                  Credentials
-                </Text>
-              </div>
-              <div className="px-2" data-id="run-input-credentials-form">
-                <FormRenderer
-                  jsonSchema={credentialsSchema as RJSFSchema}
-                  handleChange={(v) => handleCredentialChange(v.formData)}
-                  uiSchema={credentialsUiSchema}
-                  initialValues={{}}
-                  formContext={{
-                    showHandles: false,
-                    size: "large",
-                    showOptionalToggle: false,
-                  }}
-                />
-              </div>
-            </div>
-          )}
+        <div
+          className="grid grid-cols-[1fr_auto] gap-10 p-1"
+          data-id="run-input-dialog-content"
+        >
+          <div className="space-y-6">
+            {/* Credentials Section */}
+            {hasCredentials() && credentialFields.length > 0 && (
+              <div data-id="run-input-credentials-section">
+                <div className="mb-4">
+                  <Text variant="h4" className="text-gray-900">
+                    Credentials
+                  </Text>
+                </div>
+                <div className="px-2" data-id="run-input-credentials-form">
+                  <CredentialsGroupedView
+                    credentialFields={credentialFields}
+                    requiredCredentials={requiredCredentials}
+                    inputCredentials={credentialValues}
+                    inputValues={inputValues}
+                    onCredentialChange={handleCredentialFieldChange}
+                  />
+                </div>
+              </div>
+            )}

-          {/* Inputs Section */}
-          {hasInputs() && (
-            <div data-id="run-input-inputs-section">
-              <div className="mb-4">
-                <Text variant="h4" className="text-gray-900">
-                  Inputs
-                </Text>
-              </div>
-              <div data-id="run-input-inputs-form">
-                <FormRenderer
-                  jsonSchema={inputSchema as RJSFSchema}
-                  handleChange={(v) => handleInputChange(v.formData)}
-                  uiSchema={uiSchema}
-                  initialValues={{}}
-                  formContext={{
-                    showHandles: false,
-                    size: "large",
-                  }}
-                />
-              </div>
-            </div>
-          )}
+            {/* Inputs Section */}
+            {hasInputs() && (
+              <div data-id="run-input-inputs-section">
+                <div className="mb-4">
+                  <Text variant="h4" className="text-gray-900">
+                    Inputs
+                  </Text>
+                </div>
+                <div data-id="run-input-inputs-form">
+                  <FormRenderer
+                    jsonSchema={inputSchema as RJSFSchema}
+                    handleChange={(v) => handleInputChange(v.formData)}
+                    uiSchema={uiSchema}
+                    initialValues={{}}
+                    formContext={{
+                      showHandles: false,
+                      size: "large",
+                    }}
+                  />
+                </div>
+              </div>
+            )}
+          </div>

-          {/* Action Button */}
           <div
-            className="flex justify-end pt-2"
+            className="flex flex-col items-end justify-start"
             data-id="run-input-actions-section"
           >
             {purpose === "run" && (
               <Button
                 variant="primary"
                 size="large"
-                className="group h-fit min-w-0 gap-2"
+                className="group h-fit min-w-0 gap-2 px-10"
                 onClick={handleManualRun}
                 loading={isExecutingGraph}
                 data-id="run-input-manual-run-button"

@@ -137,7 +136,7 @@
               <Button
                 variant="primary"
                 size="large"
-                className="group h-fit min-w-0 gap-2"
+                className="group h-fit min-w-0 gap-2 px-10"
                 onClick={() => setOpenCronSchedulerDialog(true)}
                 data-id="run-input-schedule-button"
               >

View File

@@ -7,12 +7,11 @@ import {
   GraphExecutionMeta,
 } from "@/lib/autogpt-server-api";
 import { parseAsInteger, parseAsString, useQueryStates } from "nuqs";
-import { useMemo, useState } from "react";
-import { uiSchema } from "../../../FlowEditor/nodes/uiSchema";
-import { isCredentialFieldSchema } from "@/components/renderers/InputRenderer/custom/CredentialField/helpers";
+import { useCallback, useMemo, useState } from "react";
 import { useNodeStore } from "@/app/(platform)/build/stores/nodeStore";
 import { useToast } from "@/components/molecules/Toast/use-toast";
 import { useReactFlow } from "@xyflow/react";
+import type { CredentialField } from "@/components/contextual/CredentialsInput/components/CredentialsGroupedView/helpers";

 export const useRunInputDialog = ({
   setIsOpen,

@@ -120,27 +119,32 @@ export const useRunInputDialog = ({
     },
   });

-  // We are rendering the credentials field differently compared to other fields.
-  // In the node, we have the field name as "credential" - so our library catches it and renders it differently.
-  // But here we have a different name, something like `Firecrawl credentials`, so here we are telling the library that this field is a credential field type.
-  const credentialsUiSchema = useMemo(() => {
-    const dynamicUiSchema: any = { ...uiSchema };
-    if (credentialsSchema?.properties) {
-      Object.keys(credentialsSchema.properties).forEach((fieldName) => {
-        const fieldSchema = credentialsSchema.properties[fieldName];
-        if (isCredentialFieldSchema(fieldSchema)) {
-          dynamicUiSchema[fieldName] = {
-            ...dynamicUiSchema[fieldName],
-            "ui:field": "custom/credential_field",
-          };
-        }
-      });
-    }
-    return dynamicUiSchema;
-  }, [credentialsSchema]);
+  // Convert credentials schema to credential fields array for CredentialsGroupedView
+  const credentialFields: CredentialField[] = useMemo(() => {
+    if (!credentialsSchema?.properties) return [];
+    return Object.entries(credentialsSchema.properties);
+  }, [credentialsSchema]);
+
+  // Get required credentials as a Set
+  const requiredCredentials = useMemo(() => {
+    return new Set<string>(credentialsSchema?.required || []);
+  }, [credentialsSchema]);
+
+  // Handler for individual credential changes
+  const handleCredentialFieldChange = useCallback(
+    (key: string, value?: CredentialsMetaInput) => {
+      setCredentialValues((prev) => {
+        if (value) {
+          return { ...prev, [key]: value };
+        } else {
+          const next = { ...prev };
+          delete next[key];
+          return next;
+        }
+      });
+    },
+    [],
+  );

   const handleManualRun = async () => {
     // Filter out incomplete credentials (those without a valid id)

@@ -173,12 +177,14 @@ export const useRunInputDialog = ({
   };

   return {
-    credentialsUiSchema,
+    credentialFields,
+    requiredCredentials,
     inputValues,
     credentialValues,
     isExecutingGraph,
     handleInputChange,
     handleCredentialChange,
+    handleCredentialFieldChange,
     handleManualRun,
     openCronSchedulerDialog,
     setOpenCronSchedulerDialog,

View File

@@ -18,69 +18,118 @@ interface Props {
   fullWidth?: boolean;
 }

+interface SafeModeButtonProps {
+  isEnabled: boolean;
+  label: string;
+  tooltipEnabled: string;
+  tooltipDisabled: string;
+  onToggle: () => void;
+  isPending: boolean;
+  fullWidth?: boolean;
+}
+
+function SafeModeButton({
+  isEnabled,
+  label,
+  tooltipEnabled,
+  tooltipDisabled,
+  onToggle,
+  isPending,
+  fullWidth = false,
+}: SafeModeButtonProps) {
+  return (
+    <Tooltip delayDuration={100}>
+      <TooltipTrigger asChild>
+        <Button
+          variant={isEnabled ? "primary" : "outline"}
+          size="small"
+          onClick={onToggle}
+          disabled={isPending}
+          className={cn("justify-start", fullWidth ? "w-full" : "")}
+        >
+          {isEnabled ? (
+            <>
+              <ShieldCheckIcon weight="bold" size={16} />
+              <Text variant="body" className="text-zinc-200">
+                {label}: ON
+              </Text>
+            </>
+          ) : (
+            <>
+              <ShieldIcon weight="bold" size={16} />
+              <Text variant="body" className="text-zinc-600">
+                {label}: OFF
+              </Text>
+            </>
+          )}
+        </Button>
+      </TooltipTrigger>
+      <TooltipContent>
+        <div className="text-center">
+          <div className="font-medium">
+            {label}: {isEnabled ? "ON" : "OFF"}
+          </div>
+          <div className="mt-1 text-xs text-muted-foreground">
+            {isEnabled ? tooltipEnabled : tooltipDisabled}
+          </div>
+        </div>
+      </TooltipContent>
+    </Tooltip>
+  );
+}

 export function FloatingSafeModeToggle({
   graph,
   className,
   fullWidth = false,
 }: Props) {
   const {
-    currentSafeMode,
+    currentHITLSafeMode,
+    showHITLToggle,
+    isHITLStateUndetermined,
+    handleHITLToggle,
+    currentSensitiveActionSafeMode,
+    showSensitiveActionToggle,
+    handleSensitiveActionToggle,
     isPending,
     shouldShowToggle,
-    isStateUndetermined,
-    handleToggle,
   } = useAgentSafeMode(graph);

-  if (!shouldShowToggle || isStateUndetermined || isPending) {
+  if (!shouldShowToggle || isPending) {
+    return null;
+  }
+
+  const showHITL = showHITLToggle && !isHITLStateUndetermined;
+  const showSensitive = showSensitiveActionToggle;
+
+  if (!showHITL && !showSensitive) {
     return null;
   }

   return (
-    <div className={cn("fixed z-50", className)}>
-      <Tooltip delayDuration={100}>
-        <TooltipTrigger asChild>
-          <Button
-            variant={currentSafeMode! ? "primary" : "outline"}
-            key={graph.id}
-            size="small"
-            title={
-              currentSafeMode!
-                ? "Safe Mode: ON. Human in the loop blocks require manual review"
-                : "Safe Mode: OFF. Human in the loop blocks proceed automatically"
-            }
-            onClick={handleToggle}
-            className={cn(fullWidth ? "w-full" : "")}
-          >
-            {currentSafeMode! ? (
-              <>
-                <ShieldCheckIcon weight="bold" size={16} />
-                <Text variant="body" className="text-zinc-200">
-                  Safe Mode: ON
-                </Text>
-              </>
-            ) : (
-              <>
-                <ShieldIcon weight="bold" size={16} />
-                <Text variant="body" className="text-zinc-600">
-                  Safe Mode: OFF
-                </Text>
-              </>
-            )}
-          </Button>
-        </TooltipTrigger>
-        <TooltipContent>
-          <div className="text-center">
-            <div className="font-medium">
-              Safe Mode: {currentSafeMode! ? "ON" : "OFF"}
-            </div>
-            <div className="mt-1 text-xs text-muted-foreground">
-              {currentSafeMode!
-                ? "Human in the loop blocks require manual review"
-                : "Human in the loop blocks proceed automatically"}
-            </div>
-          </div>
-        </TooltipContent>
-      </Tooltip>
+    <div className={cn("fixed z-50 flex flex-col gap-2", className)}>
+      {showHITL && (
+        <SafeModeButton
+          isEnabled={currentHITLSafeMode}
+          label="Human in the loop block approval"
+          tooltipEnabled="The agent will pause at human-in-the-loop blocks and wait for your approval"
+          tooltipDisabled="Human in the loop blocks will proceed automatically"
+          onToggle={handleHITLToggle}
+          isPending={isPending}
+          fullWidth={fullWidth}
+        />
+      )}
+      {showSensitive && (
+        <SafeModeButton
+          isEnabled={currentSensitiveActionSafeMode}
+          label="Sensitive actions blocks approval"
+          tooltipEnabled="The agent will pause at sensitive action blocks and wait for your approval"
+          tooltipDisabled="Sensitive action blocks will proceed automatically"
+          onToggle={handleSensitiveActionToggle}
+          isPending={isPending}
+          fullWidth={fullWidth}
+        />
+      )}
     </div>
   );
 }

View File

@@ -53,14 +53,14 @@ export const CustomControls = memo(
     const controls = [
       {
         id: "zoom-in-button",
-        icon: <PlusIcon className="size-4" />,
+        icon: <PlusIcon className="size-3.5 text-zinc-600" />,
         label: "Zoom In",
         onClick: () => zoomIn(),
         className: "h-10 w-10 border-none",
       },
       {
         id: "zoom-out-button",
-        icon: <MinusIcon className="size-4" />,
+        icon: <MinusIcon className="size-3.5 text-zinc-600" />,
         label: "Zoom Out",
         onClick: () => zoomOut(),
         className: "h-10 w-10 border-none",

@@ -68,9 +68,9 @@ export const CustomControls = memo(
       {
         id: "tutorial-button",
         icon: isTutorialLoading ? (
-          <CircleNotchIcon className="size-4 animate-spin" />
+          <CircleNotchIcon className="size-3.5 animate-spin text-zinc-600" />
         ) : (
-          <ChalkboardIcon className="size-4" />
+          <ChalkboardIcon className="size-3.5 text-zinc-600" />
         ),
         label: isTutorialLoading ? "Loading Tutorial..." : "Start Tutorial",
         onClick: handleTutorialClick,

@@ -79,7 +79,7 @@ export const CustomControls = memo(
       },
       {
         id: "fit-view-button",
-        icon: <FrameCornersIcon className="size-4" />,
+        icon: <FrameCornersIcon className="size-3.5 text-zinc-600" />,
         label: "Fit View",
         onClick: () => fitView({ padding: 0.2, duration: 800, maxZoom: 1 }),
         className: "h-10 w-10 border-none",

@@ -87,9 +87,9 @@ export const CustomControls = memo(
       {
         id: "lock-button",
         icon: !isLocked ? (
-          <LockOpenIcon className="size-4" />
+          <LockOpenIcon className="size-3.5 text-zinc-600" />
         ) : (
-          <LockIcon className="size-4" />
+          <LockIcon className="size-3.5 text-zinc-600" />
         ),
         label: "Toggle Lock",
         onClick: () => setIsLocked(!isLocked),

@@ -139,14 +139,6 @@ export const useFlow = () => {
       useNodeStore.getState().setNodes([]);
       useNodeStore.getState().clearResolutionState();
       addNodes(customNodes);
-      // Sync hardcoded values with handle IDs.
-      // If a keyvalue field has a key without a value, the backend omits it from hardcoded values.
-      // But if a handleId exists for that key, it causes inconsistency.
-      // This ensures hardcoded values stay in sync with handle IDs.
-      customNodes.forEach((node) => {
-        useNodeStore.getState().syncHardcodedValuesWithHandleIds(node.id);
-      });
     }
   }, [customNodes, addNodes]);
@@ -158,6 +150,14 @@ export const useFlow = () => {
     }
   }, [graph?.links, addLinks]);

+  useEffect(() => {
+    if (customNodes.length > 0 && graph?.links) {
+      customNodes.forEach((node) => {
+        useNodeStore.getState().syncHardcodedValuesWithHandleIds(node.id);
+      });
+    }
+  }, [customNodes, graph?.links]);
+
   // update node execution status in nodes
   useEffect(() => {
     if (

@@ -19,6 +19,8 @@ export type CustomEdgeData = {
   beadUp?: number;
   beadDown?: number;
   beadData?: Map<string, NodeExecutionResult["status"]>;
+  edgeColorClass?: string;
+  edgeHexColor?: string;
 };

 export type CustomEdge = XYEdge<CustomEdgeData, "custom">;
@@ -36,7 +38,6 @@ const CustomEdge = ({
   selected,
 }: EdgeProps<CustomEdge>) => {
   const removeConnection = useEdgeStore((state) => state.removeEdge);
-  // Subscribe to the brokenEdgeIDs map and check if this edge is broken across any node
   const isBroken = useNodeStore((state) => state.isEdgeBroken(id));
   const [isHovered, setIsHovered] = useState(false);
@@ -52,6 +53,7 @@ const CustomEdge = ({
   const isStatic = data?.isStatic ?? false;
   const beadUp = data?.beadUp ?? 0;
   const beadDown = data?.beadDown ?? 0;
+  const edgeColorClass = data?.edgeColorClass;

   const handleRemoveEdge = () => {
     removeConnection(id);
@@ -70,7 +72,9 @@ const CustomEdge = ({
           ? "!stroke-red-500 !stroke-[2px] [stroke-dasharray:4]"
           : selected
             ? "stroke-zinc-800"
-            : "stroke-zinc-500/50 hover:stroke-zinc-500",
+            : edgeColorClass
+              ? cn(edgeColorClass, "opacity-70 hover:opacity-100")
+              : "stroke-zinc-500/50 hover:stroke-zinc-500",
       )}
     />
     <JSBeads

@@ -8,6 +8,7 @@ import { useCallback } from "react";
 import { useNodeStore } from "../../../stores/nodeStore";
 import { useHistoryStore } from "../../../stores/historyStore";
 import { CustomEdge } from "./CustomEdge";
+import { getEdgeColorFromOutputType } from "../nodes/helpers";

 export const useCustomEdge = () => {
   const edges = useEdgeStore((s) => s.edges);
@@ -34,8 +35,13 @@ export const useCustomEdge = () => {
       if (exists) return;

       const nodes = useNodeStore.getState().nodes;
-      const isStatic = nodes.find((n) => n.id === conn.source)?.data
-        ?.staticOutput;
+      const sourceNode = nodes.find((n) => n.id === conn.source);
+      const isStatic = sourceNode?.data?.staticOutput;
+      const { colorClass, hexColor } = getEdgeColorFromOutputType(
+        sourceNode?.data?.outputSchema,
+        conn.sourceHandle,
+      );

       addEdge({
         source: conn.source,
@@ -44,6 +50,8 @@ export const useCustomEdge = () => {
         targetHandle: conn.targetHandle,
         data: {
           isStatic,
+          edgeColorClass: colorClass,
+          edgeHexColor: hexColor,
         },
       });
     },

@@ -1,22 +1,21 @@
 import { Button } from "@/components/atoms/Button/Button";
 import { Text } from "@/components/atoms/Text/Text";
+import {
+  Accordion,
+  AccordionContent,
+  AccordionItem,
+  AccordionTrigger,
+} from "@/components/molecules/Accordion/Accordion";
 import { beautifyString, cn } from "@/lib/utils";
-import { CaretDownIcon, CopyIcon, CheckIcon } from "@phosphor-icons/react";
+import { CopyIcon, CheckIcon } from "@phosphor-icons/react";
 import { NodeDataViewer } from "./components/NodeDataViewer/NodeDataViewer";
 import { ContentRenderer } from "./components/ContentRenderer";
 import { useNodeOutput } from "./useNodeOutput";
 import { ViewMoreData } from "./components/ViewMoreData";

 export const NodeDataRenderer = ({ nodeId }: { nodeId: string }) => {
-  const {
-    outputData,
-    isExpanded,
-    setIsExpanded,
-    copiedKey,
-    handleCopy,
-    executionResultId,
-    inputData,
-  } = useNodeOutput(nodeId);
+  const { outputData, copiedKey, handleCopy, executionResultId, inputData } =
+    useNodeOutput(nodeId);

   if (Object.keys(outputData).length === 0) {
     return null;
@@ -25,122 +24,117 @@ export const NodeDataRenderer = ({ nodeId }: { nodeId: string }) => {
   return (
     <div
       data-tutorial-id={`node-output`}
-      className="flex flex-col gap-3 rounded-b-xl border-t border-zinc-200 px-4 py-4"
+      className="rounded-b-xl border-t border-zinc-200 px-4 py-2"
     >
-      <div className="flex items-center justify-between">
-        <Text variant="body-medium" className="!font-semibold text-slate-700">
-          Node Output
-        </Text>
-        <Button
-          variant="ghost"
-          size="small"
-          onClick={() => setIsExpanded(!isExpanded)}
-          className="h-fit min-w-0 p-1 text-slate-600 hover:text-slate-900"
-        >
-          <CaretDownIcon
-            size={16}
-            weight="bold"
-            className={`transition-transform ${isExpanded ? "rotate-180" : ""}`}
-          />
-        </Button>
-      </div>
-      {isExpanded && (
-        <>
+      <Accordion type="single" collapsible defaultValue="node-output">
+        <AccordionItem value="node-output" className="border-none">
+          <AccordionTrigger className="py-2 hover:no-underline">
+            <Text
+              variant="body-medium"
+              className="!font-semibold text-slate-700"
+            >
+              Node Output
+            </Text>
+          </AccordionTrigger>
+          <AccordionContent className="pt-2">
           <div className="flex max-w-[350px] flex-col gap-4">
             <div className="space-y-2">
               <Text variant="small-medium">Input</Text>
               <ContentRenderer value={inputData} shortContent={false} />
               <div className="mt-1 flex justify-end gap-1">
                 <NodeDataViewer
                   data={inputData}
                   pinName="Input"
                   execId={executionResultId}
                 />
                 <Button
                   variant="secondary"
                   size="small"
                   onClick={() => handleCopy("input", inputData)}
                   className={cn(
                     "h-fit min-w-0 gap-1.5 border border-zinc-200 p-2 text-black hover:text-slate-900",
                     copiedKey === "input" &&
                       "border-green-400 bg-green-100 hover:border-green-400 hover:bg-green-200",
                   )}
                 >
                   {copiedKey === "input" ? (
                     <CheckIcon size={12} className="text-green-600" />
                   ) : (
                     <CopyIcon size={12} />
                   )}
                 </Button>
               </div>
             </div>
             {Object.entries(outputData)
               .slice(0, 2)
               .map(([key, value]) => (
                 <div key={key} className="flex flex-col gap-2">
                   <div className="flex items-center gap-2">
                     <Text
                       variant="small-medium"
                       className="!font-semibold text-slate-600"
                     >
                       Pin:
                     </Text>
                     <Text variant="small" className="text-slate-700">
                       {beautifyString(key)}
                     </Text>
                   </div>
                   <div className="w-full space-y-2">
                     <Text
                       variant="small"
                       className="!font-semibold text-slate-600"
                     >
                       Data:
                     </Text>
                     <div className="relative space-y-2">
                       {value.map((item, index) => (
                         <div key={index}>
                           <ContentRenderer value={item} shortContent={true} />
                         </div>
                       ))}
                       <div className="mt-1 flex justify-end gap-1">
                         <NodeDataViewer
                           data={value}
                           pinName={key}
                           execId={executionResultId}
                         />
                         <Button
                           variant="secondary"
                           size="small"
                           onClick={() => handleCopy(key, value)}
                           className={cn(
                             "h-fit min-w-0 gap-1.5 border border-zinc-200 p-2 text-black hover:text-slate-900",
                             copiedKey === key &&
                               "border-green-400 bg-green-100 hover:border-green-400 hover:bg-green-200",
                           )}
                         >
                           {copiedKey === key ? (
                             <CheckIcon size={12} className="text-green-600" />
                           ) : (
                             <CopyIcon size={12} />
                           )}
                         </Button>
                       </div>
                     </div>
                   </div>
                 </div>
               ))}
           </div>
           {Object.keys(outputData).length > 2 && (
-            <ViewMoreData outputData={outputData} execId={executionResultId} />
+            <ViewMoreData
+              outputData={outputData}
+              execId={executionResultId}
+            />
           )}
-        </>
-      )}
+          </AccordionContent>
+        </AccordionItem>
+      </Accordion>
     </div>
   );
 };

@@ -4,7 +4,6 @@ import { useShallow } from "zustand/react/shallow";
 import { useState } from "react";

 export const useNodeOutput = (nodeId: string) => {
-  const [isExpanded, setIsExpanded] = useState(true);
   const [copiedKey, setCopiedKey] = useState<string | null>(null);
   const { toast } = useToast();
@@ -37,13 +36,10 @@ export const useNodeOutput = (nodeId: string) => {
     }
   };

   return {
-    outputData: outputData,
-    inputData: inputData,
-    isExpanded: isExpanded,
-    setIsExpanded: setIsExpanded,
-    copiedKey: copiedKey,
-    setCopiedKey: setCopiedKey,
-    handleCopy: handleCopy,
+    outputData,
+    inputData,
+    copiedKey,
+    handleCopy,
     executionResultId: nodeExecutionResult?.node_exec_id,
   };
 };

@@ -187,3 +187,38 @@ export const getTypeDisplayInfo = (schema: any) => {
     hexColor,
   };
 };
+
+export function getEdgeColorFromOutputType(
+  outputSchema: RJSFSchema | undefined,
+  sourceHandle: string,
+): { colorClass: string; hexColor: string } {
+  const defaultColor = {
+    colorClass: "stroke-zinc-500/50",
+    hexColor: "#6b7280",
+  };
+
+  if (!outputSchema?.properties) return defaultColor;
+
+  const properties = outputSchema.properties as Record<string, unknown>;
+  const handleParts = sourceHandle.split("_#_");
+
+  let currentSchema: Record<string, unknown> = properties;
+
+  for (let i = 0; i < handleParts.length; i++) {
+    const part = handleParts[i];
+    const fieldSchema = currentSchema[part] as Record<string, unknown>;
+
+    if (!fieldSchema) return defaultColor;
+
+    if (i === handleParts.length - 1) {
+      const { hexColor, colorClass } = getTypeDisplayInfo(fieldSchema);
+      return { colorClass: colorClass.replace("!text-", "stroke-"), hexColor };
+    }
+
+    if (fieldSchema.properties) {
+      currentSchema = fieldSchema.properties as Record<string, unknown>;
+    } else {
+      return defaultColor;
+    }
+  }
+
+  return defaultColor;
+}
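To illustrate the traversal above: a handle ID uses `_#_` to address nested output fields, and the leaf's type decides the edge color. The schema below is a made-up example, not taken from the codebase.

// Hypothetical output schema, for illustration only.
const exampleSchema = {
  properties: {
    result: {
      properties: {
        name: { type: "string" },
      },
    },
  },
} as RJSFSchema;

// "result_#_name" walks properties.result.properties.name and maps that
// leaf's type info to a stroke color via getTypeDisplayInfo; an unknown
// segment (e.g. "result_#_missing") falls back to the zinc default.
const { colorClass, hexColor } = getEdgeColorFromOutputType(
  exampleSchema,
  "result_#_name",
);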

@@ -1,7 +1,32 @@
-// These are SVG Phosphor icons
+type IconOptions = {
+  size?: number;
+  color?: string;
+};
+
+const DEFAULT_SIZE = 16;
+const DEFAULT_COLOR = "#52525b"; // zinc-600
+
+const iconPaths = {
+  ClickIcon: `M88,24V16a8,8,0,0,1,16,0v8a8,8,0,0,1-16,0ZM16,104h8a8,8,0,0,0,0-16H16a8,8,0,0,0,0,16ZM124.42,39.16a8,8,0,0,0,10.74-3.58l8-16a8,8,0,0,0-14.31-7.16l-8,16A8,8,0,0,0,124.42,39.16Zm-96,81.69-16,8a8,8,0,0,0,7.16,14.31l16-8a8,8,0,1,0-7.16-14.31ZM219.31,184a16,16,0,0,1,0,22.63l-12.68,12.68a16,16,0,0,1-22.63,0L132.7,168,115,214.09c0,.1-.08.21-.13.32a15.83,15.83,0,0,1-14.6,9.59l-.79,0a15.83,15.83,0,0,1-14.41-11L32.8,52.92A16,16,0,0,1,52.92,32.8L213,85.07a16,16,0,0,1,1.41,29.8l-.32.13L168,132.69ZM208,195.31,156.69,144h0a16,16,0,0,1,4.93-26l.32-.14,45.95-17.64L48,48l52.2,159.86,17.65-46c0-.11.08-.22.13-.33a16,16,0,0,1,11.69-9.34,16.72,16.72,0,0,1,3-.28,16,16,0,0,1,11.3,4.69L195.31,208Z`,
+  Keyboard: `M224,48H32A16,16,0,0,0,16,64V192a16,16,0,0,0,16,16H224a16,16,0,0,0,16-16V64A16,16,0,0,0,224,48Zm0,144H32V64H224V192Zm-16-64a8,8,0,0,1-8,8H56a8,8,0,0,1,0-16H200A8,8,0,0,1,208,128Zm0-32a8,8,0,0,1-8,8H56a8,8,0,0,1,0-16H200A8,8,0,0,1,208,96ZM72,160a8,8,0,0,1-8,8H56a8,8,0,0,1,0-16h8A8,8,0,0,1,72,160Zm96,0a8,8,0,0,1-8,8H96a8,8,0,0,1,0-16h64A8,8,0,0,1,168,160Zm40,0a8,8,0,0,1-8,8h-8a8,8,0,0,1,0-16h8A8,8,0,0,1,208,160Z`,
+  Drag: `M188,80a27.79,27.79,0,0,0-13.36,3.4,28,28,0,0,0-46.64-11A28,28,0,0,0,80,92v20H68a28,28,0,0,0-28,28v12a88,88,0,0,0,176,0V108A28,28,0,0,0,188,80Zm12,72a72,72,0,0,1-144,0V140a12,12,0,0,1,12-12H80v24a8,8,0,0,0,16,0V92a12,12,0,0,1,24,0v28a8,8,0,0,0,16,0V92a12,12,0,0,1,24,0v28a8,8,0,0,0,16,0V108a12,12,0,0,1,24,0Z`,
+};
+
+function createIcon(path: string, options: IconOptions = {}): string {
+  const size = options.size ?? DEFAULT_SIZE;
+  const color = options.color ?? DEFAULT_COLOR;
+  return `<svg xmlns="http://www.w3.org/2000/svg" width="${size}" height="${size}" fill="${color}" viewBox="0 0 256 256"><path d="${path}"></path></svg>`;
+}
+
 export const ICONS = {
-  ClickIcon: `<svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" fill="#000000" viewBox="0 0 256 256"><path d="M88,24V16a8,8,0,0,1,16,0v8a8,8,0,0,1-16,0ZM16,104h8a8,8,0,0,0,0-16H16a8,8,0,0,0,0,16ZM124.42,39.16a8,8,0,0,0,10.74-3.58l8-16a8,8,0,0,0-14.31-7.16l-8,16A8,8,0,0,0,124.42,39.16Zm-96,81.69-16,8a8,8,0,0,0,7.16,14.31l16-8a8,8,0,1,0-7.16-14.31ZM219.31,184a16,16,0,0,1,0,22.63l-12.68,12.68a16,16,0,0,1-22.63,0L132.7,168,115,214.09c0,.1-.08.21-.13.32a15.83,15.83,0,0,1-14.6,9.59l-.79,0a15.83,15.83,0,0,1-14.41-11L32.8,52.92A16,16,0,0,1,52.92,32.8L213,85.07a16,16,0,0,1,1.41,29.8l-.32.13L168,132.69ZM208,195.31,156.69,144h0a16,16,0,0,1,4.93-26l.32-.14,45.95-17.64L48,48l52.2,159.86,17.65-46c0-.11.08-.22.13-.33a16,16,0,0,1,11.69-9.34,16.72,16.72,0,0,1,3-.28,16,16,0,0,1,11.3,4.69L195.31,208Z"></path></svg>`,
-  Keyboard: `<svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" fill="#000000" viewBox="0 0 256 256"><path d="M224,48H32A16,16,0,0,0,16,64V192a16,16,0,0,0,16,16H224a16,16,0,0,0,16-16V64A16,16,0,0,0,224,48Zm0,144H32V64H224V192Zm-16-64a8,8,0,0,1-8,8H56a8,8,0,0,1,0-16H200A8,8,0,0,1,208,128Zm0-32a8,8,0,0,1-8,8H56a8,8,0,0,1,0-16H200A8,8,0,0,1,208,96ZM72,160a8,8,0,0,1-8,8H56a8,8,0,0,1,0-16h8A8,8,0,0,1,72,160Zm96,0a8,8,0,0,1-8,8H96a8,8,0,0,1,0-16h64A8,8,0,0,1,168,160Zm40,0a8,8,0,0,1-8,8h-8a8,8,0,0,1,0-16h8A8,8,0,0,1,208,160Z"></path></svg>`,
-  Drag: `<svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" fill="#000000" viewBox="0 0 256 256"><path d="M188,80a27.79,27.79,0,0,0-13.36,3.4,28,28,0,0,0-46.64-11A28,28,0,0,0,80,92v20H68a28,28,0,0,0-28,28v12a88,88,0,0,0,176,0V108A28,28,0,0,0,188,80Zm12,72a72,72,0,0,1-144,0V140a12,12,0,0,1,12-12H80v24a8,8,0,0,0,16,0V92a12,12,0,0,1,24,0v28a8,8,0,0,0,16,0V92a12,12,0,0,1,24,0v28a8,8,0,0,0,16,0V108a12,12,0,0,1,24,0Z"></path></svg>`,
+  ClickIcon: createIcon(iconPaths.ClickIcon),
+  Keyboard: createIcon(iconPaths.Keyboard),
+  Drag: createIcon(iconPaths.Drag),
 };
+
+export function getIcon(
+  name: keyof typeof iconPaths,
+  options?: IconOptions,
+): string {
+  return createIcon(iconPaths[name], options);
+}
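A minimal usage sketch of the refactored helpers above; the override values are illustrative:

// ICONS entries now render with the 16px zinc-600 defaults.
const defaultClickIcon = ICONS.ClickIcon;

// getIcon allows per-call overrides, e.g. restoring the previous 20px black styling.
const largeBlackClickIcon = getIcon("ClickIcon", { size: 20, color: "#000000" });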

@@ -11,6 +11,7 @@ import {
 } from "./helpers";
 import { useNodeStore } from "../../../stores/nodeStore";
 import { useEdgeStore } from "../../../stores/edgeStore";
+import { useTutorialStore } from "../../../stores/tutorialStore";

 let isTutorialLoading = false;
 let tutorialLoadingCallback: ((loading: boolean) => void) | null = null;
@@ -60,12 +61,14 @@ export const startTutorial = async () => {
     handleTutorialComplete();
     removeTutorialStyles();
     clearPrefetchedBlocks();
+    useTutorialStore.getState().setIsTutorialRunning(false);
   });

   tour.on("cancel", () => {
     handleTutorialCancel(tour);
     removeTutorialStyles();
     clearPrefetchedBlocks();
+    useTutorialStore.getState().setIsTutorialRunning(false);
   });

   for (const step of tour.steps) {

@@ -61,12 +61,18 @@ export const convertNodesPlusBlockInfoIntoCustomNodes = (
   return customNode;
 };

+const isToolSourceName = (sourceName: string): boolean =>
+  sourceName.startsWith("tools_^_");
+
+const cleanupSourceName = (sourceName: string): string =>
+  isToolSourceName(sourceName) ? "tools" : sourceName;
+
 export const linkToCustomEdge = (link: Link): CustomEdge => ({
   id: link.id ?? "",
   type: "custom" as const,
   source: link.source_id,
   target: link.sink_id,
-  sourceHandle: link.source_name,
+  sourceHandle: cleanupSourceName(link.source_name),
   targetHandle: link.sink_name,
   data: {
     isStatic: link.is_static,
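To illustrate the normalization above (the handle names here are examples): dynamic tool handles collapse to the shared "tools" pin, while everything else passes through unchanged.

cleanupSourceName("tools_^_run_agent"); // => "tools"
cleanupSourceName("result"); // => "result" (unchanged)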

@@ -1,134 +0,0 @@
"use client";
import { Button } from "@/components/atoms/Button/Button";
import { Text } from "@/components/atoms/Text/Text";
import { cn } from "@/lib/utils";
import { List } from "@phosphor-icons/react";
import React, { useState } from "react";
import { ChatContainer } from "./components/ChatContainer/ChatContainer";
import { ChatErrorState } from "./components/ChatErrorState/ChatErrorState";
import { ChatLoadingState } from "./components/ChatLoadingState/ChatLoadingState";
import { SessionsDrawer } from "./components/SessionsDrawer/SessionsDrawer";
import { useChat } from "./useChat";
export interface ChatProps {
className?: string;
headerTitle?: React.ReactNode;
showHeader?: boolean;
showSessionInfo?: boolean;
showNewChatButton?: boolean;
onNewChat?: () => void;
headerActions?: React.ReactNode;
}
export function Chat({
className,
headerTitle = "AutoGPT Copilot",
showHeader = true,
showSessionInfo = true,
showNewChatButton = true,
onNewChat,
headerActions,
}: ChatProps) {
const {
messages,
isLoading,
isCreating,
error,
sessionId,
createSession,
clearSession,
loadSession,
} = useChat();
const [isSessionsDrawerOpen, setIsSessionsDrawerOpen] = useState(false);
const handleNewChat = () => {
clearSession();
onNewChat?.();
};
const handleSelectSession = async (sessionId: string) => {
try {
await loadSession(sessionId);
} catch (err) {
console.error("Failed to load session:", err);
}
};
return (
<div className={cn("flex h-full flex-col", className)}>
{/* Header */}
{showHeader && (
<header className="shrink-0 border-t border-zinc-200 bg-white p-3">
<div className="flex items-center justify-between">
<div className="flex items-center gap-3">
<button
aria-label="View sessions"
onClick={() => setIsSessionsDrawerOpen(true)}
className="flex size-8 items-center justify-center rounded hover:bg-zinc-100"
>
<List width="1.25rem" height="1.25rem" />
</button>
{typeof headerTitle === "string" ? (
<Text variant="h2" className="text-lg font-semibold">
{headerTitle}
</Text>
) : (
headerTitle
)}
</div>
<div className="flex items-center gap-3">
{showSessionInfo && sessionId && (
<>
{showNewChatButton && (
<Button
variant="outline"
size="small"
onClick={handleNewChat}
>
New Chat
</Button>
)}
</>
)}
{headerActions}
</div>
</div>
</header>
)}
{/* Main Content */}
<main className="flex min-h-0 flex-1 flex-col overflow-hidden">
{/* Loading State - show when explicitly loading/creating OR when we don't have a session yet and no error */}
{(isLoading || isCreating || (!sessionId && !error)) && (
<ChatLoadingState
message={isCreating ? "Creating session..." : "Loading..."}
/>
)}
{/* Error State */}
{error && !isLoading && (
<ChatErrorState error={error} onRetry={createSession} />
)}
{/* Session Content */}
{sessionId && !isLoading && !error && (
<ChatContainer
sessionId={sessionId}
initialMessages={messages}
className="flex-1"
/>
)}
</main>
{/* Sessions Drawer */}
<SessionsDrawer
isOpen={isSessionsDrawerOpen}
onClose={() => setIsSessionsDrawerOpen(false)}
onSelectSession={handleSelectSession}
currentSessionId={sessionId}
/>
</div>
);
}

@@ -1,88 +0,0 @@
import type { SessionDetailResponse } from "@/app/api/__generated__/models/sessionDetailResponse";
import { cn } from "@/lib/utils";
import { useCallback } from "react";
import { usePageContext } from "../../usePageContext";
import { ChatInput } from "../ChatInput/ChatInput";
import { MessageList } from "../MessageList/MessageList";
import { QuickActionsWelcome } from "../QuickActionsWelcome/QuickActionsWelcome";
import { useChatContainer } from "./useChatContainer";
export interface ChatContainerProps {
sessionId: string | null;
initialMessages: SessionDetailResponse["messages"];
className?: string;
}
export function ChatContainer({
sessionId,
initialMessages,
className,
}: ChatContainerProps) {
const { messages, streamingChunks, isStreaming, sendMessage } =
useChatContainer({
sessionId,
initialMessages,
});
const { capturePageContext } = usePageContext();
// Wrap sendMessage to automatically capture page context
const sendMessageWithContext = useCallback(
async (content: string, isUserMessage: boolean = true) => {
const context = capturePageContext();
await sendMessage(content, isUserMessage, context);
},
[sendMessage, capturePageContext],
);
const quickActions = [
"Find agents for social media management",
"Show me agents for content creation",
"Help me automate my business",
"What can you help me with?",
];
return (
<div
className={cn("flex h-full min-h-0 flex-col", className)}
style={{
backgroundColor: "#ffffff",
backgroundImage:
"radial-gradient(#e5e5e5 0.5px, transparent 0.5px), radial-gradient(#e5e5e5 0.5px, #ffffff 0.5px)",
backgroundSize: "20px 20px",
backgroundPosition: "0 0, 10px 10px",
}}
>
{/* Messages or Welcome Screen */}
<div className="flex min-h-0 flex-1 flex-col overflow-hidden pb-24">
{messages.length === 0 ? (
<QuickActionsWelcome
title="Welcome to AutoGPT Copilot"
description="Start a conversation to discover and run AI agents."
actions={quickActions}
onActionClick={sendMessageWithContext}
disabled={isStreaming || !sessionId}
/>
) : (
<MessageList
messages={messages}
streamingChunks={streamingChunks}
isStreaming={isStreaming}
onSendMessage={sendMessageWithContext}
className="flex-1"
/>
)}
</div>
{/* Input - Always visible */}
<div className="fixed bottom-0 left-0 right-0 z-50 border-t border-zinc-200 bg-white p-4">
<ChatInput
onSend={sendMessageWithContext}
disabled={isStreaming || !sessionId}
placeholder={
sessionId ? "Type your message..." : "Creating session..."
}
/>
</div>
</div>
);
}

@@ -1,64 +0,0 @@
import { Input } from "@/components/atoms/Input/Input";
import { cn } from "@/lib/utils";
import { ArrowUpIcon } from "@phosphor-icons/react";
import { useChatInput } from "./useChatInput";
export interface ChatInputProps {
onSend: (message: string) => void;
disabled?: boolean;
placeholder?: string;
className?: string;
}
export function ChatInput({
onSend,
disabled = false,
placeholder = "Type your message...",
className,
}: ChatInputProps) {
const inputId = "chat-input";
const { value, setValue, handleKeyDown, handleSend } = useChatInput({
onSend,
disabled,
maxRows: 5,
inputId,
});
return (
<div className={cn("relative flex-1", className)}>
<Input
id={inputId}
label="Chat message input"
hideLabel
type="textarea"
value={value}
onChange={(e) => setValue(e.target.value)}
onKeyDown={handleKeyDown}
placeholder={placeholder}
disabled={disabled}
rows={1}
wrapperClassName="mb-0 relative"
className="pr-12"
/>
<span id="chat-input-hint" className="sr-only">
Press Enter to send, Shift+Enter for new line
</span>
<button
onClick={handleSend}
disabled={disabled || !value.trim()}
className={cn(
"absolute right-3 top-1/2 flex h-8 w-8 -translate-y-1/2 items-center justify-center rounded-full",
"border border-zinc-800 bg-zinc-800 text-white",
"hover:border-zinc-900 hover:bg-zinc-900",
"disabled:border-zinc-200 disabled:bg-zinc-200 disabled:text-white disabled:opacity-50",
"transition-colors focus-visible:outline-none focus-visible:ring-1 focus-visible:ring-neutral-950",
"disabled:pointer-events-none",
)}
aria-label="Send message"
>
<ArrowUpIcon className="h-3 w-3" weight="bold" />
</button>
</div>
);
}

@@ -1,60 +0,0 @@
import { KeyboardEvent, useCallback, useEffect, useState } from "react";
interface UseChatInputArgs {
onSend: (message: string) => void;
disabled?: boolean;
maxRows?: number;
inputId?: string;
}
export function useChatInput({
onSend,
disabled = false,
maxRows = 5,
inputId = "chat-input",
}: UseChatInputArgs) {
const [value, setValue] = useState("");
useEffect(() => {
const textarea = document.getElementById(inputId) as HTMLTextAreaElement;
if (!textarea) return;
textarea.style.height = "auto";
const lineHeight = parseInt(
window.getComputedStyle(textarea).lineHeight,
10,
);
const maxHeight = lineHeight * maxRows;
const newHeight = Math.min(textarea.scrollHeight, maxHeight);
textarea.style.height = `${newHeight}px`;
textarea.style.overflowY =
textarea.scrollHeight > maxHeight ? "auto" : "hidden";
}, [value, maxRows, inputId]);
const handleSend = useCallback(() => {
if (disabled || !value.trim()) return;
onSend(value.trim());
setValue("");
const textarea = document.getElementById(inputId) as HTMLTextAreaElement;
if (textarea) {
textarea.style.height = "auto";
}
}, [value, onSend, disabled, inputId]);
const handleKeyDown = useCallback(
(event: KeyboardEvent<HTMLInputElement | HTMLTextAreaElement>) => {
if (event.key === "Enter" && !event.shiftKey) {
event.preventDefault();
handleSend();
}
// Shift+Enter allows default behavior (new line) - no need to handle explicitly
},
[handleSend],
);
return {
value,
setValue,
handleKeyDown,
handleSend,
};
}

@@ -1,121 +0,0 @@
"use client";
import { cn } from "@/lib/utils";
import { ChatMessage } from "../ChatMessage/ChatMessage";
import type { ChatMessageData } from "../ChatMessage/useChatMessage";
import { StreamingMessage } from "../StreamingMessage/StreamingMessage";
import { ThinkingMessage } from "../ThinkingMessage/ThinkingMessage";
import { useMessageList } from "./useMessageList";
export interface MessageListProps {
messages: ChatMessageData[];
streamingChunks?: string[];
isStreaming?: boolean;
className?: string;
onStreamComplete?: () => void;
onSendMessage?: (content: string) => void;
}
export function MessageList({
messages,
streamingChunks = [],
isStreaming = false,
className,
onStreamComplete,
onSendMessage,
}: MessageListProps) {
const { messagesEndRef, messagesContainerRef } = useMessageList({
messageCount: messages.length,
isStreaming,
});
return (
<div
ref={messagesContainerRef}
className={cn(
"flex-1 overflow-y-auto",
"scrollbar-thin scrollbar-track-transparent scrollbar-thumb-zinc-300",
className,
)}
>
<div className="mx-auto flex max-w-3xl flex-col py-4">
{/* Render all persisted messages */}
{messages.map((message, index) => {
// Check if current message is an agent_output tool_response
// and if previous message is an assistant message
let agentOutput: ChatMessageData | undefined;
if (message.type === "tool_response" && message.result) {
let parsedResult: Record<string, unknown> | null = null;
try {
parsedResult =
typeof message.result === "string"
? JSON.parse(message.result)
: (message.result as Record<string, unknown>);
} catch {
parsedResult = null;
}
if (parsedResult?.type === "agent_output") {
const prevMessage = messages[index - 1];
if (
prevMessage &&
prevMessage.type === "message" &&
prevMessage.role === "assistant"
) {
// This agent output will be rendered inside the previous assistant message
// Skip rendering this message separately
return null;
}
}
}
// Check if next message is an agent_output tool_response to include in current assistant message
if (message.type === "message" && message.role === "assistant") {
const nextMessage = messages[index + 1];
if (
nextMessage &&
nextMessage.type === "tool_response" &&
nextMessage.result
) {
let parsedResult: Record<string, unknown> | null = null;
try {
parsedResult =
typeof nextMessage.result === "string"
? JSON.parse(nextMessage.result)
: (nextMessage.result as Record<string, unknown>);
} catch {
parsedResult = null;
}
if (parsedResult?.type === "agent_output") {
agentOutput = nextMessage;
}
}
}
return (
<ChatMessage
key={index}
message={message}
onSendMessage={onSendMessage}
agentOutput={agentOutput}
/>
);
})}
{/* Render thinking message when streaming but no chunks yet */}
{isStreaming && streamingChunks.length === 0 && <ThinkingMessage />}
{/* Render streaming message if active */}
{isStreaming && streamingChunks.length > 0 && (
<StreamingMessage
chunks={streamingChunks}
onComplete={onStreamComplete}
/>
)}
{/* Invisible div to scroll to */}
<div ref={messagesEndRef} />
</div>
</div>
);
}

@@ -1,24 +0,0 @@
import { Text } from "@/components/atoms/Text/Text";
import { cn } from "@/lib/utils";
import { WrenchIcon } from "@phosphor-icons/react";
import { getToolActionPhrase } from "../../helpers";
export interface ToolCallMessageProps {
toolName: string;
className?: string;
}
export function ToolCallMessage({ toolName, className }: ToolCallMessageProps) {
return (
<div className={cn("flex items-center justify-center gap-2", className)}>
<WrenchIcon
size={14}
weight="bold"
className="flex-shrink-0 text-neutral-500"
/>
<Text variant="small" className="text-neutral-500">
{getToolActionPhrase(toolName)}...
</Text>
</div>
);
}

@@ -1,260 +0,0 @@
import { Text } from "@/components/atoms/Text/Text";
import "@/components/contextual/OutputRenderers";
import {
globalRegistry,
OutputItem,
} from "@/components/contextual/OutputRenderers";
import { cn } from "@/lib/utils";
import type { ToolResult } from "@/types/chat";
import { WrenchIcon } from "@phosphor-icons/react";
import { getToolActionPhrase } from "../../helpers";
export interface ToolResponseMessageProps {
toolName: string;
result?: ToolResult;
success?: boolean;
className?: string;
}
export function ToolResponseMessage({
toolName,
result,
success: _success = true,
className,
}: ToolResponseMessageProps) {
if (!result) {
return (
<div className={cn("flex items-center justify-center gap-2", className)}>
<WrenchIcon
size={14}
weight="bold"
className="flex-shrink-0 text-neutral-500"
/>
<Text variant="small" className="text-neutral-500">
{getToolActionPhrase(toolName)}...
</Text>
</div>
);
}
let parsedResult: Record<string, unknown> | null = null;
try {
parsedResult =
typeof result === "string"
? JSON.parse(result)
: (result as Record<string, unknown>);
} catch {
parsedResult = null;
}
if (parsedResult && typeof parsedResult === "object") {
const responseType = parsedResult.type as string | undefined;
if (responseType === "agent_output") {
const execution = parsedResult.execution as
| {
outputs?: Record<string, unknown[]>;
}
| null
| undefined;
const outputs = execution?.outputs || {};
const message = parsedResult.message as string | undefined;
return (
<div className={cn("space-y-4 px-4 py-2", className)}>
<div className="flex items-center gap-2">
<WrenchIcon
size={14}
weight="bold"
className="flex-shrink-0 text-neutral-500"
/>
<Text variant="small" className="text-neutral-500">
{getToolActionPhrase(toolName)}
</Text>
</div>
{message && (
<div className="rounded border p-4">
<Text variant="small" className="text-neutral-600">
{message}
</Text>
</div>
)}
{Object.keys(outputs).length > 0 && (
<div className="space-y-4">
{Object.entries(outputs).map(([outputName, values]) =>
values.map((value, index) => {
const renderer = globalRegistry.getRenderer(value);
if (renderer) {
return (
<OutputItem
key={`${outputName}-${index}`}
value={value}
renderer={renderer}
label={outputName}
/>
);
}
return (
<div
key={`${outputName}-${index}`}
className="rounded border p-4"
>
<Text variant="large-medium" className="mb-2 capitalize">
{outputName}
</Text>
<pre className="overflow-auto text-sm">
{JSON.stringify(value, null, 2)}
</pre>
</div>
);
}),
)}
</div>
)}
</div>
);
}
if (responseType === "block_output" && parsedResult.outputs) {
const outputs = parsedResult.outputs as Record<string, unknown[]>;
return (
<div className={cn("space-y-4 px-4 py-2", className)}>
<div className="flex items-center gap-2">
<WrenchIcon
size={14}
weight="bold"
className="flex-shrink-0 text-neutral-500"
/>
<Text variant="small" className="text-neutral-500">
{getToolActionPhrase(toolName)}
</Text>
</div>
<div className="space-y-4">
{Object.entries(outputs).map(([outputName, values]) =>
values.map((value, index) => {
const renderer = globalRegistry.getRenderer(value);
if (renderer) {
return (
<OutputItem
key={`${outputName}-${index}`}
value={value}
renderer={renderer}
label={outputName}
/>
);
}
return (
<div
key={`${outputName}-${index}`}
className="rounded border p-4"
>
<Text variant="large-medium" className="mb-2 capitalize">
{outputName}
</Text>
<pre className="overflow-auto text-sm">
{JSON.stringify(value, null, 2)}
</pre>
</div>
);
}),
)}
</div>
</div>
);
}
// Handle other response types with a message field (e.g., understanding_updated)
if (parsedResult.message && typeof parsedResult.message === "string") {
// Format tool name from snake_case to Title Case
const formattedToolName = toolName
.split("_")
.map((word) => word.charAt(0).toUpperCase() + word.slice(1))
.join(" ");
// Clean up message - remove incomplete user_name references
let cleanedMessage = parsedResult.message;
// Remove "Updated understanding with: user_name" pattern if user_name is just a placeholder
cleanedMessage = cleanedMessage.replace(
/Updated understanding with:\s*user_name\.?\s*/gi,
"",
);
// Remove standalone user_name references
cleanedMessage = cleanedMessage.replace(/\buser_name\b\.?\s*/gi, "");
cleanedMessage = cleanedMessage.trim();
// Only show message if it has content after cleaning
if (!cleanedMessage) {
return (
<div
className={cn(
"flex items-center justify-center gap-2 px-4 py-2",
className,
)}
>
<WrenchIcon
size={14}
weight="bold"
className="flex-shrink-0 text-neutral-500"
/>
<Text variant="small" className="text-neutral-500">
{formattedToolName}
</Text>
</div>
);
}
return (
<div className={cn("space-y-2 px-4 py-2", className)}>
<div className="flex items-center justify-center gap-2">
<WrenchIcon
size={14}
weight="bold"
className="flex-shrink-0 text-neutral-500"
/>
<Text variant="small" className="text-neutral-500">
{formattedToolName}
</Text>
</div>
<div className="rounded border p-4">
<Text variant="small" className="text-neutral-600">
{cleanedMessage}
</Text>
</div>
</div>
);
}
}
const renderer = globalRegistry.getRenderer(result);
if (renderer) {
return (
<div className={cn("px-4 py-2", className)}>
<div className="mb-2 flex items-center gap-2">
<WrenchIcon
size={14}
weight="bold"
className="flex-shrink-0 text-neutral-500"
/>
<Text variant="small" className="text-neutral-500">
{getToolActionPhrase(toolName)}
</Text>
</div>
<OutputItem value={result} renderer={renderer} />
</div>
);
}
return (
<div className={cn("flex items-center justify-center gap-2", className)}>
<WrenchIcon
size={14}
weight="bold"
className="flex-shrink-0 text-neutral-500"
/>
<Text variant="small" className="text-neutral-500">
{getToolActionPhrase(toolName)}...
</Text>
</div>
);
}

@@ -1,66 +0,0 @@
/**
* Maps internal tool names to user-friendly display names with emojis.
* @deprecated Use getToolActionPhrase or getToolCompletionPhrase for status messages
*
* @param toolName - The internal tool name from the backend
* @returns A user-friendly display name with an emoji prefix
*/
export function getToolDisplayName(toolName: string): string {
const toolDisplayNames: Record<string, string> = {
find_agent: "🔍 Search Marketplace",
get_agent_details: "📋 Get Agent Details",
check_credentials: "🔑 Check Credentials",
setup_agent: "⚙️ Setup Agent",
run_agent: "▶️ Run Agent",
get_required_setup_info: "📝 Get Setup Requirements",
};
return toolDisplayNames[toolName] || toolName;
}
/**
* Maps internal tool names to human-friendly action phrases (present continuous).
* Used for tool call messages to indicate what action is currently happening.
*
* @param toolName - The internal tool name from the backend
* @returns A human-friendly action phrase in present continuous tense
*/
export function getToolActionPhrase(toolName: string): string {
const toolActionPhrases: Record<string, string> = {
find_agent: "Looking for agents in the marketplace",
agent_carousel: "Looking for agents in the marketplace",
get_agent_details: "Learning about the agent",
check_credentials: "Checking your credentials",
setup_agent: "Setting up the agent",
execution_started: "Running the agent",
run_agent: "Running the agent",
get_required_setup_info: "Getting setup requirements",
schedule_agent: "Scheduling the agent to run",
};
// Return mapped phrase or generate human-friendly fallback
return toolActionPhrases[toolName] || toolName;
}
/**
* Maps internal tool names to human-friendly completion phrases (past tense).
* Used for tool response messages to indicate what action was completed.
*
* @param toolName - The internal tool name from the backend
* @returns A human-friendly completion phrase in past tense
*/
export function getToolCompletionPhrase(toolName: string): string {
const toolCompletionPhrases: Record<string, string> = {
find_agent: "Finished searching the marketplace",
get_agent_details: "Got agent details",
check_credentials: "Checked credentials",
setup_agent: "Agent setup complete",
run_agent: "Agent execution started",
get_required_setup_info: "Got setup requirements",
};
// Return mapped phrase or generate human-friendly fallback
return (
toolCompletionPhrases[toolName] ||
`Finished ${toolName.replace(/_/g, " ").replace("...", "")}`
);
}

@@ -1,271 +0,0 @@
import {
getGetV2GetSessionQueryKey,
getGetV2GetSessionQueryOptions,
postV2CreateSession,
useGetV2GetSession,
usePatchV2SessionAssignUser,
usePostV2CreateSession,
} from "@/app/api/__generated__/endpoints/chat/chat";
import type { SessionDetailResponse } from "@/app/api/__generated__/models/sessionDetailResponse";
import { okData } from "@/app/api/helpers";
import { isValidUUID } from "@/lib/utils";
import { Key, storage } from "@/services/storage/local-storage";
import { useQueryClient } from "@tanstack/react-query";
import { useCallback, useEffect, useMemo, useRef, useState } from "react";
import { toast } from "sonner";
interface UseChatSessionArgs {
urlSessionId?: string | null;
autoCreate?: boolean;
}
export function useChatSession({
urlSessionId,
autoCreate = false,
}: UseChatSessionArgs = {}) {
const queryClient = useQueryClient();
const [sessionId, setSessionId] = useState<string | null>(null);
const [error, setError] = useState<Error | null>(null);
const justCreatedSessionIdRef = useRef<string | null>(null);
useEffect(() => {
if (urlSessionId) {
if (!isValidUUID(urlSessionId)) {
console.error("Invalid session ID format:", urlSessionId);
toast.error("Invalid session ID", {
description:
"The session ID in the URL is not valid. Starting a new session...",
});
setSessionId(null);
storage.clean(Key.CHAT_SESSION_ID);
return;
}
setSessionId(urlSessionId);
storage.set(Key.CHAT_SESSION_ID, urlSessionId);
} else {
const storedSessionId = storage.get(Key.CHAT_SESSION_ID);
if (storedSessionId) {
if (!isValidUUID(storedSessionId)) {
console.error("Invalid stored session ID:", storedSessionId);
storage.clean(Key.CHAT_SESSION_ID);
setSessionId(null);
} else {
setSessionId(storedSessionId);
}
} else if (autoCreate) {
setSessionId(null);
}
}
}, [urlSessionId, autoCreate]);
const {
mutateAsync: createSessionMutation,
isPending: isCreating,
error: createError,
} = usePostV2CreateSession();
const {
data: sessionData,
isLoading: isLoadingSession,
error: loadError,
refetch,
} = useGetV2GetSession(sessionId || "", {
query: {
enabled: !!sessionId,
select: okData,
staleTime: Infinity, // Never mark as stale
refetchOnMount: false, // Don't refetch on component mount
refetchOnWindowFocus: false, // Don't refetch when window regains focus
refetchOnReconnect: false, // Don't refetch when network reconnects
retry: 1,
},
});
const { mutateAsync: claimSessionMutation } = usePatchV2SessionAssignUser();
const session = useMemo(() => {
if (sessionData) return sessionData;
if (sessionId && justCreatedSessionIdRef.current === sessionId) {
return {
id: sessionId,
user_id: null,
messages: [],
created_at: new Date().toISOString(),
updated_at: new Date().toISOString(),
} as SessionDetailResponse;
}
return null;
}, [sessionData, sessionId]);
const messages = session?.messages || [];
const isLoading = isCreating || isLoadingSession;
useEffect(() => {
if (createError) {
setError(
createError instanceof Error
? createError
: new Error("Failed to create session"),
);
} else if (loadError) {
setError(
loadError instanceof Error
? loadError
: new Error("Failed to load session"),
);
} else {
setError(null);
}
}, [createError, loadError]);
const createSession = useCallback(
async function createSession() {
try {
setError(null);
const response = await postV2CreateSession({
body: JSON.stringify({}),
});
if (response.status !== 200) {
throw new Error("Failed to create session");
}
const newSessionId = response.data.id;
setSessionId(newSessionId);
storage.set(Key.CHAT_SESSION_ID, newSessionId);
justCreatedSessionIdRef.current = newSessionId;
setTimeout(() => {
if (justCreatedSessionIdRef.current === newSessionId) {
justCreatedSessionIdRef.current = null;
}
}, 10000);
return newSessionId;
} catch (err) {
const error =
err instanceof Error ? err : new Error("Failed to create session");
setError(error);
toast.error("Failed to create chat session", {
description: error.message,
});
throw error;
}
},
[createSessionMutation],
);
const loadSession = useCallback(
async function loadSession(id: string) {
try {
setError(null);
// Invalidate the query cache for this session to force a fresh fetch
await queryClient.invalidateQueries({
queryKey: getGetV2GetSessionQueryKey(id),
});
// Set sessionId after invalidation to ensure the hook refetches
setSessionId(id);
storage.set(Key.CHAT_SESSION_ID, id);
// Force fetch with fresh data (bypass cache)
const queryOptions = getGetV2GetSessionQueryOptions(id, {
query: {
staleTime: 0, // Force fresh fetch
retry: 1,
},
});
const result = await queryClient.fetchQuery(queryOptions);
if (!result || ("status" in result && result.status !== 200)) {
console.warn("Session not found on server, clearing local state");
storage.clean(Key.CHAT_SESSION_ID);
setSessionId(null);
throw new Error("Session not found");
}
} catch (err) {
const error =
err instanceof Error ? err : new Error("Failed to load session");
setError(error);
throw error;
}
},
[queryClient],
);
const refreshSession = useCallback(
async function refreshSession() {
if (!sessionId) {
console.log("[refreshSession] Skipping - no session ID");
return;
}
try {
setError(null);
await refetch();
} catch (err) {
const error =
err instanceof Error ? err : new Error("Failed to refresh session");
setError(error);
throw error;
}
},
[sessionId, refetch],
);
const claimSession = useCallback(
async function claimSession(id: string) {
try {
setError(null);
await claimSessionMutation({ sessionId: id });
if (justCreatedSessionIdRef.current === id) {
justCreatedSessionIdRef.current = null;
}
await queryClient.invalidateQueries({
queryKey: getGetV2GetSessionQueryKey(id),
});
await refetch();
toast.success("Session claimed successfully", {
description: "Your chat history has been saved to your account",
});
} catch (err: unknown) {
const error =
err instanceof Error ? err : new Error("Failed to claim session");
const is404 =
(typeof err === "object" &&
err !== null &&
"status" in err &&
err.status === 404) ||
(typeof err === "object" &&
err !== null &&
"response" in err &&
typeof err.response === "object" &&
err.response !== null &&
"status" in err.response &&
err.response.status === 404);
if (!is404) {
setError(error);
toast.error("Failed to claim session", {
description: error.message || "Unable to claim session",
});
}
throw error;
}
},
[claimSessionMutation, queryClient, refetch],
);
const clearSession = useCallback(function clearSession() {
setSessionId(null);
setError(null);
storage.clean(Key.CHAT_SESSION_ID);
justCreatedSessionIdRef.current = null;
}, []);
return {
session,
sessionId,
messages,
isLoading,
isCreating,
error,
createSession,
loadSession,
refreshSession,
claimSession,
clearSession,
};
}

@@ -1,27 +0,0 @@
"use client";
import { Flag, useGetFlag } from "@/services/feature-flags/use-get-flag";
import { useRouter } from "next/navigation";
import { useEffect } from "react";
import { Chat } from "./components/Chat/Chat";
export default function ChatPage() {
const isChatEnabled = useGetFlag(Flag.CHAT);
const router = useRouter();
useEffect(() => {
if (isChatEnabled === false) {
router.push("/marketplace");
}
}, [isChatEnabled, router]);
if (isChatEnabled === null || isChatEnabled === false) {
return null;
}
return (
<div className="flex h-full flex-col">
<Chat className="flex-1" />
</div>
);
}

@@ -0,0 +1,88 @@
"use client";
import { LoadingSpinner } from "@/components/atoms/LoadingSpinner/LoadingSpinner";
import { NAVBAR_HEIGHT_PX } from "@/lib/constants";
import type { ReactNode } from "react";
import { DesktopSidebar } from "./components/DesktopSidebar/DesktopSidebar";
import { LoadingState } from "./components/LoadingState/LoadingState";
import { MobileDrawer } from "./components/MobileDrawer/MobileDrawer";
import { MobileHeader } from "./components/MobileHeader/MobileHeader";
import { useCopilotShell } from "./useCopilotShell";
interface Props {
children: ReactNode;
}
export function CopilotShell({ children }: Props) {
const {
isMobile,
isDrawerOpen,
isLoading,
isLoggedIn,
hasActiveSession,
sessions,
currentSessionId,
handleSelectSession,
handleOpenDrawer,
handleCloseDrawer,
handleDrawerOpenChange,
handleNewChat,
hasNextPage,
isFetchingNextPage,
fetchNextPage,
isReadyToShowContent,
} = useCopilotShell();
if (!isLoggedIn) {
return (
<div className="flex h-full items-center justify-center">
<LoadingSpinner size="large" />
</div>
);
}
return (
<div
className="flex overflow-hidden bg-[#EFEFF0]"
style={{ height: `calc(100vh - ${NAVBAR_HEIGHT_PX}px)` }}
>
{!isMobile && (
<DesktopSidebar
sessions={sessions}
currentSessionId={currentSessionId}
isLoading={isLoading}
hasNextPage={hasNextPage}
isFetchingNextPage={isFetchingNextPage}
onSelectSession={handleSelectSession}
onFetchNextPage={fetchNextPage}
onNewChat={handleNewChat}
hasActiveSession={Boolean(hasActiveSession)}
/>
)}
<div className="relative flex min-h-0 flex-1 flex-col">
{isMobile && <MobileHeader onOpenDrawer={handleOpenDrawer} />}
<div className="flex min-h-0 flex-1 flex-col">
{isReadyToShowContent ? children : <LoadingState />}
</div>
</div>
{isMobile && (
<MobileDrawer
isOpen={isDrawerOpen}
sessions={sessions}
currentSessionId={currentSessionId}
isLoading={isLoading}
hasNextPage={hasNextPage}
isFetchingNextPage={isFetchingNextPage}
onSelectSession={handleSelectSession}
onFetchNextPage={fetchNextPage}
onNewChat={handleNewChat}
onClose={handleCloseDrawer}
onOpenChange={handleDrawerOpenChange}
hasActiveSession={Boolean(hasActiveSession)}
/>
)}
</div>
);
}

@@ -0,0 +1,70 @@
import type { SessionSummaryResponse } from "@/app/api/__generated__/models/sessionSummaryResponse";
import { Button } from "@/components/atoms/Button/Button";
import { Text } from "@/components/atoms/Text/Text";
import { scrollbarStyles } from "@/components/styles/scrollbars";
import { cn } from "@/lib/utils";
import { Plus } from "@phosphor-icons/react";
import { SessionsList } from "../SessionsList/SessionsList";
interface Props {
sessions: SessionSummaryResponse[];
currentSessionId: string | null;
isLoading: boolean;
hasNextPage: boolean;
isFetchingNextPage: boolean;
onSelectSession: (sessionId: string) => void;
onFetchNextPage: () => void;
onNewChat: () => void;
hasActiveSession: boolean;
}
export function DesktopSidebar({
sessions,
currentSessionId,
isLoading,
hasNextPage,
isFetchingNextPage,
onSelectSession,
onFetchNextPage,
onNewChat,
hasActiveSession,
}: Props) {
return (
<aside className="flex h-full w-80 flex-col border-r border-zinc-100 bg-zinc-50">
<div className="shrink-0 px-6 py-4">
<Text variant="h3" size="body-medium">
Your chats
</Text>
</div>
<div
className={cn(
"flex min-h-0 flex-1 flex-col overflow-y-auto px-3 py-3",
scrollbarStyles,
)}
>
<SessionsList
sessions={sessions}
currentSessionId={currentSessionId}
isLoading={isLoading}
hasNextPage={hasNextPage}
isFetchingNextPage={isFetchingNextPage}
onSelectSession={onSelectSession}
onFetchNextPage={onFetchNextPage}
/>
</div>
{hasActiveSession && (
<div className="shrink-0 bg-zinc-50 p-3 shadow-[0_-4px_6px_-1px_rgba(0,0,0,0.05)]">
<Button
variant="primary"
size="small"
onClick={onNewChat}
className="w-full"
leftIcon={<Plus width="1rem" height="1rem" />}
>
New Chat
</Button>
</div>
)}
</aside>
);
}

@@ -0,0 +1,15 @@
import { Text } from "@/components/atoms/Text/Text";
import { ChatLoader } from "@/components/contextual/Chat/components/ChatLoader/ChatLoader";
export function LoadingState() {
return (
<div className="flex flex-1 items-center justify-center">
<div className="flex flex-col items-center gap-4">
<ChatLoader />
<Text variant="body" className="text-zinc-500">
Loading your chats...
</Text>
</div>
</div>
);
}

@@ -0,0 +1,91 @@
import type { SessionSummaryResponse } from "@/app/api/__generated__/models/sessionSummaryResponse";
import { Button } from "@/components/atoms/Button/Button";
import { scrollbarStyles } from "@/components/styles/scrollbars";
import { cn } from "@/lib/utils";
import { PlusIcon, X } from "@phosphor-icons/react";
import { Drawer } from "vaul";
import { SessionsList } from "../SessionsList/SessionsList";
interface Props {
isOpen: boolean;
sessions: SessionSummaryResponse[];
currentSessionId: string | null;
isLoading: boolean;
hasNextPage: boolean;
isFetchingNextPage: boolean;
onSelectSession: (sessionId: string) => void;
onFetchNextPage: () => void;
onNewChat: () => void;
onClose: () => void;
onOpenChange: (open: boolean) => void;
hasActiveSession: boolean;
}
export function MobileDrawer({
isOpen,
sessions,
currentSessionId,
isLoading,
hasNextPage,
isFetchingNextPage,
onSelectSession,
onFetchNextPage,
onNewChat,
onClose,
onOpenChange,
hasActiveSession,
}: Props) {
return (
<Drawer.Root open={isOpen} onOpenChange={onOpenChange} direction="left">
<Drawer.Portal>
<Drawer.Overlay className="fixed inset-0 z-[60] bg-black/10 backdrop-blur-sm" />
<Drawer.Content className="fixed left-0 top-0 z-[70] flex h-full w-80 flex-col border-r border-zinc-200 bg-zinc-50">
<div className="shrink-0 border-b border-zinc-200 p-4">
<div className="flex items-center justify-between">
<Drawer.Title className="text-lg font-semibold text-zinc-800">
Your chats
</Drawer.Title>
<Button
variant="icon"
size="icon"
aria-label="Close sessions"
onClick={onClose}
>
<X width="1.25rem" height="1.25rem" />
</Button>
</div>
</div>
<div
className={cn(
"flex min-h-0 flex-1 flex-col overflow-y-auto px-3 py-3",
scrollbarStyles,
)}
>
<SessionsList
sessions={sessions}
currentSessionId={currentSessionId}
isLoading={isLoading}
hasNextPage={hasNextPage}
isFetchingNextPage={isFetchingNextPage}
onSelectSession={onSelectSession}
onFetchNextPage={onFetchNextPage}
/>
</div>
{hasActiveSession && (
<div className="shrink-0 bg-white p-3 shadow-[0_-4px_6px_-1px_rgba(0,0,0,0.05)]">
<Button
variant="primary"
size="small"
onClick={onNewChat}
className="w-full"
leftIcon={<PlusIcon width="1rem" height="1rem" />}
>
New Chat
</Button>
</div>
)}
</Drawer.Content>
</Drawer.Portal>
</Drawer.Root>
);
}

@@ -0,0 +1,24 @@
import { useState } from "react";
export function useMobileDrawer() {
const [isDrawerOpen, setIsDrawerOpen] = useState(false);
function handleOpenDrawer() {
setIsDrawerOpen(true);
}
function handleCloseDrawer() {
setIsDrawerOpen(false);
}
function handleDrawerOpenChange(open: boolean) {
setIsDrawerOpen(open);
}
return {
isDrawerOpen,
handleOpenDrawer,
handleCloseDrawer,
handleDrawerOpenChange,
};
}

@@ -0,0 +1,22 @@
import { Button } from "@/components/atoms/Button/Button";
import { NAVBAR_HEIGHT_PX } from "@/lib/constants";
import { ListIcon } from "@phosphor-icons/react";
interface Props {
onOpenDrawer: () => void;
}
export function MobileHeader({ onOpenDrawer }: Props) {
return (
<Button
variant="icon"
size="icon"
aria-label="Open sessions"
onClick={onOpenDrawer}
className="fixed z-50 bg-white shadow-md"
style={{ left: "1rem", top: `${NAVBAR_HEIGHT_PX + 20}px` }}
>
<ListIcon width="1.25rem" height="1.25rem" />
</Button>
);
}

@@ -0,0 +1,80 @@
import type { SessionSummaryResponse } from "@/app/api/__generated__/models/sessionSummaryResponse";
import { Skeleton } from "@/components/__legacy__/ui/skeleton";
import { Text } from "@/components/atoms/Text/Text";
import { InfiniteList } from "@/components/molecules/InfiniteList/InfiniteList";
import { cn } from "@/lib/utils";
import { getSessionTitle } from "../../helpers";
interface Props {
  sessions: SessionSummaryResponse[];
  currentSessionId: string | null;
  isLoading: boolean;
  hasNextPage: boolean;
  isFetchingNextPage: boolean;
  onSelectSession: (sessionId: string) => void;
  onFetchNextPage: () => void;
}

export function SessionsList({
  sessions,
  currentSessionId,
  isLoading,
  hasNextPage,
  isFetchingNextPage,
  onSelectSession,
  onFetchNextPage,
}: Props) {
  // Initial load: render skeleton placeholders.
  if (isLoading) {
    return (
      <div className="space-y-1">
        {Array.from({ length: 5 }).map((_, i) => (
          <div key={i} className="rounded-lg px-3 py-2.5">
            <Skeleton className="h-5 w-full" />
          </div>
        ))}
      </div>
    );
  }

  // Empty state.
  if (sessions.length === 0) {
    return (
      <div className="flex h-full items-center justify-center">
        <Text variant="body" className="text-zinc-500">
          You don&apos;t have previous chats
        </Text>
      </div>
    );
  }

  // Paginated list; onEndReached asks the parent for the next page.
  return (
    <InfiniteList
      items={sessions}
      hasMore={hasNextPage}
      isFetchingMore={isFetchingNextPage}
      onEndReached={onFetchNextPage}
      className="space-y-1"
      renderItem={(session) => {
        const isActive = session.id === currentSessionId;
        return (
          <button
            onClick={() => onSelectSession(session.id)}
            className={cn(
              "w-full rounded-lg px-3 py-2.5 text-left transition-colors",
              isActive ? "bg-zinc-100" : "hover:bg-zinc-50",
            )}
          >
            <Text
              variant="body"
              className={cn(
                "font-normal",
                isActive ? "text-zinc-600" : "text-zinc-800",
              )}
            >
              {getSessionTitle(session)}
            </Text>
          </button>
        );
      }}
    />
  );
}
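
`getSessionTitle` is imported from the chat helpers module, which is not shown in this diff. A plausible sketch of such a fallback-title helper — the `title` field is an assumption about the generated `SessionSummaryResponse` model (only `id` is confirmed by the code above), so treat this as illustrative only:

```tsx
// Hypothetical sketch; the real helper lives in ../../helpers and is not
// part of this diff. `title` is an assumed field on the generated model.
import type { SessionSummaryResponse } from "@/app/api/__generated__/models/sessionSummaryResponse";

export function getSessionTitle(session: SessionSummaryResponse): string {
  const title = (session as { title?: string }).title?.trim();
  // Fall back to a short id-based label when the session has no title yet.
  return title || `Chat ${session.id.slice(0, 8)}`;
}
```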

@@ -0,0 +1,92 @@
import { useGetV2ListSessions } from "@/app/api/__generated__/endpoints/chat/chat";
import type { SessionSummaryResponse } from "@/app/api/__generated__/models/sessionSummaryResponse";
import { okData } from "@/app/api/helpers";
import { useEffect, useMemo, useState } from "react";
const PAGE_SIZE = 50;

export interface UseSessionsPaginationArgs {
  enabled: boolean;
}

export function useSessionsPagination({ enabled }: UseSessionsPaginationArgs) {
  const [offset, setOffset] = useState(0);
  const [accumulatedSessions, setAccumulatedSessions] = useState<
    SessionSummaryResponse[]
  >([]);
  const [totalCount, setTotalCount] = useState<number | null>(null);

  const { data, isLoading, isFetching, isError } = useGetV2ListSessions(
    { limit: PAGE_SIZE, offset },
    {
      query: {
        enabled: enabled && offset >= 0,
      },
    },
  );

  // Accumulate pages as they arrive; a fetch at offset 0 replaces the list.
  useEffect(() => {
    const responseData = okData(data);
    if (responseData) {
      const newSessions = responseData.sessions;
      const total = responseData.total;
      setTotalCount(total);
      if (offset === 0) {
        setAccumulatedSessions(newSessions);
      } else {
        setAccumulatedSessions((prev) => [...prev, ...newSessions]);
      }
    } else if (!enabled) {
      setAccumulatedSessions([]);
      setTotalCount(null);
    }
  }, [data, offset, enabled]);

  const hasNextPage = useMemo(() => {
    if (totalCount === null) return false;
    return accumulatedSessions.length < totalCount;
  }, [accumulatedSessions.length, totalCount]);

  const areAllSessionsLoaded = useMemo(() => {
    if (totalCount === null) return false;
    return (
      accumulatedSessions.length >= totalCount && !isFetching && !isLoading
    );
  }, [accumulatedSessions.length, totalCount, isFetching, isLoading]);

  // Eagerly advance the offset while more pages remain, so the full session
  // list loads without waiting for scroll events.
  useEffect(() => {
    if (
      hasNextPage &&
      !isFetching &&
      !isLoading &&
      !isError &&
      totalCount !== null
    ) {
      setOffset((prev) => prev + PAGE_SIZE);
    }
  }, [hasNextPage, isFetching, isLoading, isError, totalCount]);

  // Manual trigger for infinite scroll; a no-op while a fetch is in flight.
  function fetchNextPage() {
    if (hasNextPage && !isFetching) {
      setOffset((prev) => prev + PAGE_SIZE);
    }
  }

  function reset() {
    setOffset(0);
    setAccumulatedSessions([]);
    setTotalCount(null);
  }

  return {
    sessions: accumulatedSessions,
    isLoading,
    isFetching,
    hasNextPage,
    areAllSessionsLoaded,
    totalCount,
    fetchNextPage,
    reset,
  };
}
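
Note that the hook auto-advances `offset` in an effect until `hasNextPage` turns false, so all pages load eagerly; `fetchNextPage` acts as a manual fallback for the infinite-scroll path, and `reset` restarts from the first page. A minimal sketch of a consumer, using only the hook's return shape above — the component name and auth flag are placeholders, not part of this diff:

```tsx
// Hypothetical consumer of useSessionsPagination; only the hook's actual
// return values are used here, everything else is a placeholder.
function SessionsPanel({ isLoggedIn }: { isLoggedIn: boolean }) {
  const { sessions, isLoading, isFetching, hasNextPage, fetchNextPage } =
    useSessionsPagination({ enabled: isLoggedIn });

  return (
    <SessionsList
      sessions={sessions}
      currentSessionId={null}
      isLoading={isLoading && sessions.length === 0}
      hasNextPage={hasNextPage}
      isFetchingNextPage={isFetching}
      onSelectSession={(id) => console.log("open session", id)}
      onFetchNextPage={fetchNextPage}
    />
  );
}
```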

Some files were not shown because too many files have changed in this diff.