Compare commits

..

4 Commits

Author SHA1 Message Date
Zamil Majdy
f9f984a8f4 fix(db): Remove redundant migration and fix pgvector schema handling (#11822)
### Changes 🏗️

This PR includes two database migration fixes:

#### 1. Remove redundant Supabase extensions migration

Removes the `20260112173500_add_supabase_extensions_to_platform_schema`
migration which was attempting to manage Supabase-provided extensions
and schemas.

**What was removed:**
- Migration that created extensions (pgcrypto, uuid-ossp,
pg_stat_statements, pg_net, pgjwt, pg_graphql, pgsodium, supabase_vault)
- Schema creation for these extensions

**Why it was removed:**
- These extensions and schemas are pre-installed and managed by Supabase
automatically
- The migration was redundant and could cause schema drift warnings
- Attempting to manage Supabase-owned resources in our migrations is an
anti-pattern

#### 2. Fix pgvector extension schema handling

Improves the `20260109181714_add_docs_embedding` migration to handle
cases where pgvector exists in the wrong schema.

**Problem:**
- If pgvector was previously installed in `public` schema, `CREATE
EXTENSION IF NOT EXISTS` would succeed but not actually install it in
the `platform` schema
- This causes `type "vector" does not exist` errors because the type
isn't in the search_path

**Solution:**
- Detect if vector extension exists in a different schema than the
current one
- Drop it with CASCADE and reinstall in the correct schema (platform)
- Use dynamic SQL with `EXECUTE format()` to explicitly specify the
target schema
- Split exception handling: catch errors during removal, but let installation fail naturally with clear PostgreSQL errors (see the sketch below)
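
For illustration only, a minimal sketch of the detect-and-reinstall logic described above. The real change is a SQL migration file; the node-postgres client, the connection string, and the exact `DO` block below are assumptions, not the migration's code:

```typescript
// Hedged sketch: detect pgvector installed in the wrong schema and reinstall it
// in "platform" via dynamic SQL. The pg client and DATABASE_URL are assumptions.
import { Client } from "pg";

const FIX_PGVECTOR_SCHEMA = `
DO $$
DECLARE
  ext_schema text;
BEGIN
  -- Find the schema pgvector is currently installed in (if any).
  SELECT n.nspname INTO ext_schema
  FROM pg_extension e
  JOIN pg_namespace n ON n.oid = e.extnamespace
  WHERE e.extname = 'vector';

  IF ext_schema IS NOT NULL AND ext_schema <> 'platform' THEN
    BEGIN
      -- Remove the misplaced extension; failures here are only logged so the
      -- install step below can fail with a clear PostgreSQL error instead.
      EXECUTE 'DROP EXTENSION vector CASCADE';
    EXCEPTION WHEN OTHERS THEN
      RAISE NOTICE 'could not drop vector extension: %', SQLERRM;
    END;
  END IF;

  -- Install pgvector explicitly into the target schema using dynamic SQL.
  EXECUTE format('CREATE EXTENSION IF NOT EXISTS vector SCHEMA %I', 'platform');
END
$$;
`;

async function main() {
  const client = new Client({ connectionString: process.env.DATABASE_URL });
  await client.connect();
  try {
    await client.query(FIX_PGVECTOR_SCHEMA);
  } finally {
    await client.end();
  }
}

main();
```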

**Impact:**
- No functional changes - Supabase continues to provide extensions as
before
- pgvector now correctly installs in the platform schema
- Cleaner migration history
- Prevents schema-related errors

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Verified migrations run successfully without the redundant file
  - [x] Confirmed Supabase extensions are still available
  - [x] Tested pgvector migration handles wrong-schema scenario
  - [x] No schema drift warnings

#### For configuration changes:
- [x] .env.default is updated or already compatible with my changes
- [x] docker-compose.yml is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
  - N/A - No configuration changes required
2026-01-22 12:06:00 +00:00
Abhimanyu Yadav
fc87ed4e34 feat(ci): add integration test job and rename e2e test job (#11820)
### Changes 🏗️

- Renamed the `test` job to `e2e_test` in the CI workflow for better
clarity
- Added a new `integration_test` job to the CI workflow that runs unit
tests using `pnpm test:unit`
- Created a basic integration test for the MainMarketplacePage component
to verify CI functionality

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Verified the CI workflow runs both e2e and integration tests
  - [x] Confirmed the integration test for MainMarketplacePage passes

#### For configuration changes:

- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
2026-01-22 11:14:48 +00:00
Abhimanyu Yadav
b0953654d9 feat(frontend): add integration testing setup with Vitest, MSW, and RTL (#11813)
### Changes 🏗️

- Added Vitest and React Testing Library for frontend unit testing
- Configured MSW (Mock Service Worker) for API mocking in tests
- Created test utilities and setup files for integration tests (an illustrative test sketch follows this list)
- Added comprehensive testing documentation in `AGENTS.md`
- Updated Orval configuration to generate MSW mock handlers
- Added mock server and browser implementations for development testing
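
To make the setup above concrete, here is a hedged sketch of what such an integration test can look like with Vitest, React Testing Library, and MSW. The component import path, the `/api/store/agents` endpoint, and the response shape are illustrative assumptions; the real handlers are generated by Orval per this PR, and `toBeInTheDocument` assumes jest-dom matchers are registered in the shared test setup file:

```typescript
// Hedged sketch of a Vitest + RTL + MSW integration test; names are illustrative.
import { http, HttpResponse } from "msw";
import { setupServer } from "msw/node";
import { render, screen } from "@testing-library/react";
import { afterAll, afterEach, beforeAll, expect, test } from "vitest";

import { MainMarketplacePage } from "./MainMarketplacePage"; // assumed path

// MSW node server with a single mocked endpoint (endpoint and payload assumed).
const server = setupServer(
  http.get("/api/store/agents", () =>
    HttpResponse.json({ agents: [{ id: "1", name: "Demo Agent" }] }),
  ),
);

beforeAll(() => server.listen());
afterEach(() => server.resetHandlers());
afterAll(() => server.close());

test("renders agents returned by the mocked API", async () => {
  render(<MainMarketplacePage />);
  // findByText waits for the component to fetch and render the mocked data.
  expect(await screen.findByText("Demo Agent")).toBeInTheDocument();
});
```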

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run `pnpm test:unit` to verify tests pass
  - [x] Verify MSW mock handlers are generated correctly
  - [x] Check that test utilities work with sample component tests

#### For configuration changes:

- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
2026-01-22 10:10:00 +00:00
Ubbe
c5069ca48f fix(frontend): chat UX improvements (#11804)
### Changes 🏗️

<img width="1920" height="998" alt="Screenshot 2026-01-19 at 22 14 51"
src="https://github.com/user-attachments/assets/ecd1c241-6f77-4702-9774-5e58806b0b64"
/>

This PR lays the groundwork for the new UX of AutoGPT Copilot. 
- moves the Copilot to its own route `/copilot`
- Makes the Copilot the homepage when enabled
- Updates the labelling of the homepage icons
- Makes the Library the homepage when Copilot is disabled (see the sketch after this list)
- Improves Copilot's:
  - session handling
  - styles and UX
  - message parsing
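
A minimal sketch of the flag-driven homepage selection described above. Only the `getHomepageRoute` name and the `/copilot` route come from this PR; the boolean argument and the `/library` fallback path are assumptions:

```typescript
// Hypothetical sketch; the flag plumbing and the /library path are assumptions.
export function getHomepageRoute(isCopilotEnabled: boolean): string {
  // Copilot becomes the landing page when the feature flag is on,
  // otherwise the Library remains the homepage.
  return isCopilotEnabled ? "/copilot" : "/library";
}
```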
  
### Other improvements

- Improve the logout UX by adding a new `/logout` page and using a redirect (a hedged sketch follows below)
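
A hedged sketch of that `/logout` flow, assuming a Next.js App Router client page. The sign-out endpoint, delay, and destination route are assumptions; only the dedicated `/logout` page and the redirect behaviour come from this PR:

```typescript
"use client";

import { useEffect } from "react";
import { useRouter } from "next/navigation";

export default function LogoutPage() {
  const router = useRouter();

  useEffect(() => {
    let cancelled = false;

    async function logout() {
      // Clear the session first (endpoint name is an assumption), then redirect
      // after a short delay so in-flight queries settle before navigation.
      await fetch("/api/auth/logout", { method: "POST" });
      setTimeout(() => {
        if (!cancelled) router.replace("/login");
      }, 500);
    }

    logout();
    return () => {
      cancelled = true;
    };
  }, [router]);

  return <p>Signing you out…</p>;
}
```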

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run locally and test the above

<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> Launches the new Copilot experience and aligns API behavior with the
UI.
> 
> - **Routing/Home**: Add `/copilot` with `CopilotShell` (desktop
sidebar + mobile drawer), make homepage route flag-driven; update
login/signup/error redirects and root page to use `getHomepageRoute`.
> - **Chat UX**: Replace legacy chat with `components/contextual/Chat/*`
(new message list, bubbles, tool call/response formatting, stop button,
initial-prompt handling, refined streaming/error handling); remove old
platform chat components.
> - **Sessions**: Add paginated session list (infinite load),
auto-select/create logic, mobile/desktop navigation, and improved
session fetching/claiming guards.
> - **Auth/Logout**: New `/logout` flow with delayed redirect; gate
various queries on auth state and logout-in-progress.
> - **Backend**: `GET /api/chat/sessions/{id}` returns `null` instead of
404; service saves assistant message on `StreamFinish` to avoid loss and
prevents duplicate saves; OpenAPI updated accordingly.
> - **Misc**: Minor UI polish in library modals, loader styling, docs
(CONTRIBUTING) additions, and small formatting fixes in block docs
generator.
> 
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
1b4776dcf5. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->
2026-01-22 16:43:42 +07:00
151 changed files with 5766 additions and 3525 deletions

View File

@@ -128,7 +128,7 @@ jobs:
          token: ${{ secrets.GITHUB_TOKEN }}
          exitOnceUploaded: true

-  test:
+  e2e_test:
    runs-on: big-boi
    needs: setup
    strategy:
@@ -258,3 +258,39 @@ jobs:
      - name: Print Final Docker Compose logs
        if: always()
        run: docker compose -f ../docker-compose.yml logs
+
+  integration_test:
+    runs-on: ubuntu-latest
+    needs: setup
+    steps:
+      - name: Checkout repository
+        uses: actions/checkout@v4
+        with:
+          submodules: recursive
+      - name: Set up Node.js
+        uses: actions/setup-node@v4
+        with:
+          node-version: "22.18.0"
+      - name: Enable corepack
+        run: corepack enable
+      - name: Restore dependencies cache
+        uses: actions/cache@v4
+        with:
+          path: ~/.pnpm-store
+          key: ${{ needs.setup.outputs.cache-key }}
+          restore-keys: |
+            ${{ runner.os }}-pnpm-${{ hashFiles('autogpt_platform/frontend/pnpm-lock.yaml') }}
+            ${{ runner.os }}-pnpm-
+      - name: Install dependencies
+        run: pnpm install --frozen-lockfile
+      - name: Generate API client
+        run: pnpm generate:api
+      - name: Run Integration Tests
+        run: pnpm test:unit

View File

@@ -16,6 +16,32 @@ See `docs/content/platform/getting-started.md` for setup instructions.
 - Format Python code with `poetry run format`.
 - Format frontend code using `pnpm format`.

+## Frontend guidelines:
+See `/frontend/CONTRIBUTING.md` for complete patterns. Quick reference:
+1. **Pages**: Create in `src/app/(platform)/feature-name/page.tsx`
+   - Add `usePageName.ts` hook for logic
+   - Put sub-components in local `components/` folder
+2. **Components**: Structure as `ComponentName/ComponentName.tsx` + `useComponentName.ts` + `helpers.ts`
+   - Use design system components from `src/components/` (atoms, molecules, organisms)
+   - Never use `src/components/__legacy__/*`
+3. **Data fetching**: Use generated API hooks from `@/app/api/__generated__/endpoints/`
+   - Regenerate with `pnpm generate:api`
+   - Pattern: `use{Method}{Version}{OperationName}`
+4. **Styling**: Tailwind CSS only, use design tokens, Phosphor Icons only
+5. **Testing**: Add Storybook stories for new components, Playwright for E2E
+6. **Code conventions**: Function declarations (not arrow functions) for components/handlers
+   - Component props should be `interface Props { ... }` (not exported) unless the interface needs to be used outside the component
+   - Separate render logic from business logic (component.tsx + useComponent.ts + helpers.ts)
+   - Colocate state when possible and avoid creating large components, use sub-components ( local `/components` folder next to the parent component ) when sensible
+   - Avoid large hooks, abstract logic into `helpers.ts` files when sensible
+   - Use function declarations for components, arrow functions only for callbacks
+   - No barrel files or `index.ts` re-exports
+   - Do not use `useCallback` or `useMemo` unless strictly needed
+   - Avoid comments at all times unless the code is very complex
+
 ## Testing
 - Backend: `poetry run test` (runs pytest with a docker based postgres + prisma).
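
For readers new to the component layout these added guidelines prescribe, a hedged sketch of the `ComponentName.tsx` + `useComponentName.ts` + `helpers.ts` split. All names below are invented for illustration, shown as one snippet with file boundaries marked in comments:

```typescript
import { useState } from "react";

// AgentCard/helpers.ts - pure helpers stay out of the hook and the component.
export function formatRunCount(count: number): string {
  return count === 1 ? "1 run" : `${count} runs`;
}

// AgentCard/useAgentCard.ts - business logic lives in the hook.
export function useAgentCard(runCount: number) {
  const [expanded, setExpanded] = useState(false);

  function toggleExpanded() {
    setExpanded((value) => !value);
  }

  return { expanded, toggleExpanded, runLabel: formatRunCount(runCount) };
}

// AgentCard/AgentCard.tsx - render-only component with a non-exported Props
// interface and a function declaration, per the conventions above.
interface Props {
  name: string;
  runCount: number;
}

export function AgentCard({ name, runCount }: Props) {
  const { expanded, toggleExpanded, runLabel } = useAgentCard(runCount);

  return (
    <div className="rounded-xl border p-4">
      <button onClick={toggleExpanded}>{name}</button>
      {expanded && <p className="text-sm text-zinc-500">{runLabel}</p>}
    </div>
  );
}
```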

View File

@@ -201,7 +201,7 @@ If you get any pushback or hit complex block conditions check the new_blocks gui
 3. Write tests alongside the route file
 4. Run `poetry run test` to verify

-**Frontend feature development:**
+### Frontend guidelines:

 See `/frontend/CONTRIBUTING.md` for complete patterns. Quick reference:
@@ -217,6 +217,14 @@ See `/frontend/CONTRIBUTING.md` for complete patterns. Quick reference:
 4. **Styling**: Tailwind CSS only, use design tokens, Phosphor Icons only
 5. **Testing**: Add Storybook stories for new components, Playwright for E2E
 6. **Code conventions**: Function declarations (not arrow functions) for components/handlers
+   - Component props should be `interface Props { ... }` (not exported) unless the interface needs to be used outside the component
+   - Separate render logic from business logic (component.tsx + useComponent.ts + helpers.ts)
+   - Colocate state when possible and avoid creating large components, use sub-components ( local `/components` folder next to the parent component ) when sensible
+   - Avoid large hooks, abstract logic into `helpers.ts` files when sensible
+   - Use function declarations for components, arrow functions only for callbacks
+   - No barrel files or `index.ts` re-exports
+   - Do not use `useCallback` or `useMemo` unless strictly needed
+   - Avoid comments at all times unless the code is very complex

 ### Security Implementation

View File

@@ -290,6 +290,11 @@ async def _cache_session(session: ChatSession) -> None:
     await async_redis.setex(redis_key, config.session_ttl, session.model_dump_json())


+async def cache_chat_session(session: ChatSession) -> None:
+    """Cache a chat session without persisting to the database."""
+    await _cache_session(session)
+
+
 async def _get_session_from_db(session_id: str) -> ChatSession | None:
     """Get a chat session from the database."""
     prisma_session = await chat_db.get_chat_session(session_id)

View File

@@ -172,12 +172,12 @@ async def get_session(
         user_id: The optional authenticated user ID, or None for anonymous access.

     Returns:
-        SessionDetailResponse: Details for the requested session; raises NotFoundError if not found.
+        SessionDetailResponse: Details for the requested session, or None if not found.
     """
     session = await get_chat_session(session_id, user_id)
     if not session:
-        raise NotFoundError(f"Session {session_id} not found")
+        raise NotFoundError(f"Session {session_id} not found.")

     messages = [message.model_dump() for message in session.messages]
     logger.info(
@@ -222,6 +222,8 @@ async def stream_chat_post(
     session = await _validate_and_get_session(session_id, user_id)

     async def event_generator() -> AsyncGenerator[str, None]:
+        chunk_count = 0
+        first_chunk_type: str | None = None
         async for chunk in chat_service.stream_chat_completion(
             session_id,
             request.message,
@@ -230,7 +232,26 @@
             session=session,  # Pass pre-fetched session to avoid double-fetch
             context=request.context,
         ):
+            if chunk_count < 3:
+                logger.info(
+                    "Chat stream chunk",
+                    extra={
+                        "session_id": session_id,
+                        "chunk_type": str(chunk.type),
+                    },
+                )
+            if not first_chunk_type:
+                first_chunk_type = str(chunk.type)
+            chunk_count += 1
             yield chunk.to_sse()
+        logger.info(
+            "Chat stream completed",
+            extra={
+                "session_id": session_id,
+                "chunk_count": chunk_count,
+                "first_chunk_type": first_chunk_type,
+            },
+        )

         # AI SDK protocol termination
         yield "data: [DONE]\n\n"
@@ -275,6 +296,8 @@ async def stream_chat_get(
     session = await _validate_and_get_session(session_id, user_id)

     async def event_generator() -> AsyncGenerator[str, None]:
+        chunk_count = 0
+        first_chunk_type: str | None = None
         async for chunk in chat_service.stream_chat_completion(
             session_id,
             message,
@@ -282,7 +305,26 @@
             user_id=user_id,
             session=session,  # Pass pre-fetched session to avoid double-fetch
         ):
+            if chunk_count < 3:
+                logger.info(
+                    "Chat stream chunk",
+                    extra={
+                        "session_id": session_id,
+                        "chunk_type": str(chunk.type),
+                    },
+                )
+            if not first_chunk_type:
+                first_chunk_type = str(chunk.type)
+            chunk_count += 1
             yield chunk.to_sse()
+        logger.info(
+            "Chat stream completed",
+            extra={
+                "session_id": session_id,
+                "chunk_count": chunk_count,
+                "first_chunk_type": first_chunk_type,
+            },
+        )

         # AI SDK protocol termination
         yield "data: [DONE]\n\n"

View File

@@ -1,12 +1,20 @@
 import asyncio
 import logging
+import time
+from asyncio import CancelledError
 from collections.abc import AsyncGenerator
 from typing import Any

 import orjson
 from langfuse import get_client, propagate_attributes
 from langfuse.openai import openai  # type: ignore
-from openai import APIConnectionError, APIError, APIStatusError, RateLimitError
+from openai import (
+    APIConnectionError,
+    APIError,
+    APIStatusError,
+    PermissionDeniedError,
+    RateLimitError,
+)
 from openai.types.chat import ChatCompletionChunk, ChatCompletionToolParam

 from backend.data.understanding import (
@@ -21,6 +29,7 @@ from .model import (
     ChatMessage,
     ChatSession,
     Usage,
+    cache_chat_session,
     get_chat_session,
     update_session_title,
     upsert_chat_session,
@@ -296,6 +305,10 @@ async def stream_chat_completion(
         content="",
     )
     accumulated_tool_calls: list[dict[str, Any]] = []
+    has_saved_assistant_message = False
+    has_appended_streaming_message = False
+    last_cache_time = 0.0
+    last_cache_content_len = 0

     # Wrap main logic in try/finally to ensure Langfuse observations are always ended
     has_yielded_end = False
@@ -332,6 +345,23 @@
                     assert assistant_response.content is not None
                     assistant_response.content += delta
                     has_received_text = True
+                    if not has_appended_streaming_message:
+                        session.messages.append(assistant_response)
+                        has_appended_streaming_message = True
+                    current_time = time.monotonic()
+                    content_len = len(assistant_response.content)
+                    if (
+                        current_time - last_cache_time >= 1.0
+                        and content_len > last_cache_content_len
+                    ):
+                        try:
+                            await cache_chat_session(session)
+                        except Exception as e:
+                            logger.warning(
+                                f"Failed to cache partial session {session.session_id}: {e}"
+                            )
+                        last_cache_time = current_time
+                        last_cache_content_len = content_len
                     yield chunk
                 elif isinstance(chunk, StreamTextEnd):
                     # Emit text-end after text completes
@@ -390,10 +420,42 @@
                     if has_received_text and not text_streaming_ended:
                         yield StreamTextEnd(id=text_block_id)
                         text_streaming_ended = True
+                    # Save assistant message before yielding finish to ensure it's persisted
+                    # even if client disconnects immediately after receiving StreamFinish
+                    if not has_saved_assistant_message:
+                        messages_to_save_early: list[ChatMessage] = []
+                        if accumulated_tool_calls:
+                            assistant_response.tool_calls = (
+                                accumulated_tool_calls
+                            )
+                        if not has_appended_streaming_message and (
+                            assistant_response.content
+                            or assistant_response.tool_calls
+                        ):
+                            messages_to_save_early.append(assistant_response)
+                        messages_to_save_early.extend(tool_response_messages)
+                        if messages_to_save_early:
+                            session.messages.extend(messages_to_save_early)
+                        logger.info(
+                            f"Saving assistant message before StreamFinish: "
+                            f"content_len={len(assistant_response.content or '')}, "
+                            f"tool_calls={len(assistant_response.tool_calls or [])}, "
+                            f"tool_responses={len(tool_response_messages)}"
+                        )
+                        if (
+                            messages_to_save_early
+                            or has_appended_streaming_message
+                        ):
+                            await upsert_chat_session(session)
+                        has_saved_assistant_message = True
                     has_yielded_end = True
                     yield chunk
                 elif isinstance(chunk, StreamError):
                     has_yielded_error = True
+                    yield chunk
                 elif isinstance(chunk, StreamUsage):
                     session.usage.append(
                         Usage(
@@ -413,6 +475,27 @@
             langfuse.update_current_trace(output=str(tool_response_messages))
             langfuse.update_current_span(output=str(tool_response_messages))
+    except CancelledError:
+        if not has_saved_assistant_message:
+            if accumulated_tool_calls:
+                assistant_response.tool_calls = accumulated_tool_calls
+            if assistant_response.content:
+                assistant_response.content = (
+                    f"{assistant_response.content}\n\n[interrupted]"
+                )
+            else:
+                assistant_response.content = "[interrupted]"
+            if not has_appended_streaming_message:
+                session.messages.append(assistant_response)
+            if tool_response_messages:
+                session.messages.extend(tool_response_messages)
+            try:
+                await upsert_chat_session(session)
+            except Exception as e:
+                logger.warning(
+                    f"Failed to save interrupted session {session.session_id}: {e}"
+                )
+        raise
     except Exception as e:
         logger.error(f"Error during stream: {e!s}", exc_info=True)
@@ -434,13 +517,18 @@
         # Add assistant message if it has content or tool calls
         if accumulated_tool_calls:
             assistant_response.tool_calls = accumulated_tool_calls
-        if assistant_response.content or assistant_response.tool_calls:
+        if not has_appended_streaming_message and (
+            assistant_response.content or assistant_response.tool_calls
+        ):
             messages_to_save.append(assistant_response)
         # Add tool response messages after assistant message
         messages_to_save.extend(tool_response_messages)
+        if not has_saved_assistant_message:
+            if messages_to_save:
                 session.messages.extend(messages_to_save)
+            if messages_to_save or has_appended_streaming_message:
                 await upsert_chat_session(session)

         if not has_yielded_error:
@@ -472,6 +560,8 @@
                 return  # Exit after retry to avoid double-saving in finally block

     # Normal completion path - save session and handle tool call continuation
+    # Only save if we haven't already saved when StreamFinish was received
+    if not has_saved_assistant_message:
         logger.info(
             f"Normal completion path: session={session.session_id}, "
             f"current message_count={len(session.messages)}"
@@ -486,7 +576,9 @@
         logger.info(
             f"Added {len(accumulated_tool_calls)} tool calls to assistant message"
         )
-        if assistant_response.content or assistant_response.tool_calls:
+        if not has_appended_streaming_message and (
+            assistant_response.content or assistant_response.tool_calls
+        ):
             messages_to_save.append(assistant_response)
             logger.info(
                 f"Saving assistant message with content_len={len(assistant_response.content or '')}, tool_calls={len(assistant_response.tool_calls or [])}"
@@ -499,11 +591,18 @@
             f"total_to_save={len(messages_to_save)}"
         )

+        if messages_to_save:
             session.messages.extend(messages_to_save)
             logger.info(
                 f"Extended session messages, new message_count={len(session.messages)}"
             )
+        if messages_to_save or has_appended_streaming_message:
             await upsert_chat_session(session)
+    else:
+        logger.info(
+            "Assistant message already saved when StreamFinish was received, "
+            "skipping duplicate save"
+        )

     # If we did a tool call, stream the chat completion again to get the next response
     if has_done_tool_call:
@@ -545,6 +644,12 @@ def _is_retryable_error(error: Exception) -> bool:
     return False


+def _is_region_blocked_error(error: Exception) -> bool:
+    if isinstance(error, PermissionDeniedError):
+        return "not available in your region" in str(error).lower()
+    return "not available in your region" in str(error).lower()
+
+
 async def _stream_chat_chunks(
     session: ChatSession,
     tools: list[ChatCompletionToolParam],
@@ -737,7 +842,18 @@
                 f"Error in stream (not retrying): {e!s}",
                 exc_info=True,
             )
-            error_response = StreamError(errorText=str(e))
+            error_code = None
+            error_text = str(e)
+            if _is_region_blocked_error(e):
+                error_code = "MODEL_NOT_AVAILABLE_REGION"
+                error_text = (
+                    "This model is not available in your region. "
+                    "Please connect via VPN and try again."
+                )
+            error_response = StreamError(
+                errorText=error_text,
+                code=error_code,
+            )
             yield error_response
             yield StreamFinish()
             return

View File

@@ -107,13 +107,6 @@ class ReviewItem(BaseModel):
     reviewed_data: SafeJsonData | None = Field(
         None, description="Optional edited data (ignored if approved=False)"
     )
-    auto_approve_future: bool = Field(
-        default=False,
-        description=(
-            "If true and this review is approved, future executions of this same "
-            "block (node) will be automatically approved. This only affects approved reviews."
-        ),
-    )

     @field_validator("reviewed_data")
     @classmethod
@@ -181,9 +174,6 @@
     This request must include ALL pending reviews for a graph execution.
     Each review will be either approved (with optional data modifications)
     or rejected (data ignored). The execution will resume only after ALL reviews are processed.
-
-    Each review item can individually specify whether to auto-approve future executions
-    of the same block via the `auto_approve_future` field on ReviewItem.
     """

     reviews: List[ReviewItem] = Field(

View File

@@ -8,12 +8,6 @@ from prisma.enums import ReviewStatus
 from pytest_snapshot.plugin import Snapshot

 from backend.api.rest_api import handle_internal_http_error
-from backend.data.execution import (
-    ExecutionContext,
-    ExecutionStatus,
-    NodeExecutionResult,
-)
-from backend.data.graph import GraphSettings

 from .model import PendingHumanReviewModel
 from .routes import router
@@ -21,24 +15,20 @@ from .routes import router
 # Using a fixed timestamp for reproducible tests
 FIXED_NOW = datetime.datetime(2023, 1, 1, 0, 0, 0, tzinfo=datetime.timezone.utc)

+app = fastapi.FastAPI()
+app.include_router(router, prefix="/api/review")
+app.add_exception_handler(ValueError, handle_internal_http_error(400))
+client = fastapi.testclient.TestClient(app)

-@pytest.fixture
-def app():
-    """Create FastAPI app for testing"""
-    test_app = fastapi.FastAPI()
-    test_app.include_router(router, prefix="/api/review")
-    test_app.add_exception_handler(ValueError, handle_internal_http_error(400))
-    return test_app

-@pytest.fixture
-def client(app, mock_jwt_user):
-    """Create test client with auth overrides"""
+@pytest.fixture(autouse=True)
+def setup_app_auth(mock_jwt_user):
+    """Setup auth overrides for all tests in this module"""
     from autogpt_libs.auth.jwt_utils import get_jwt_payload

     app.dependency_overrides[get_jwt_payload] = mock_jwt_user["get_jwt_payload"]
-    with fastapi.testclient.TestClient(app) as test_client:
-        yield test_client
+    yield
     app.dependency_overrides.clear()
@@ -65,7 +55,6 @@ def sample_pending_review(test_user_id: str) -> PendingHumanReviewModel:
 def test_get_pending_reviews_empty(
-    client: fastapi.testclient.TestClient,
     mocker: pytest_mock.MockerFixture,
     snapshot: Snapshot,
     test_user_id: str,
@@ -84,7 +73,6 @@ def test_get_pending_reviews_empty(
def test_get_pending_reviews_with_data( def test_get_pending_reviews_with_data(
client: fastapi.testclient.TestClient,
mocker: pytest_mock.MockerFixture, mocker: pytest_mock.MockerFixture,
sample_pending_review: PendingHumanReviewModel, sample_pending_review: PendingHumanReviewModel,
snapshot: Snapshot, snapshot: Snapshot,
@@ -107,7 +95,6 @@ def test_get_pending_reviews_with_data(
def test_get_pending_reviews_for_execution_success( def test_get_pending_reviews_for_execution_success(
client: fastapi.testclient.TestClient,
mocker: pytest_mock.MockerFixture, mocker: pytest_mock.MockerFixture,
sample_pending_review: PendingHumanReviewModel, sample_pending_review: PendingHumanReviewModel,
snapshot: Snapshot, snapshot: Snapshot,
@@ -136,7 +123,6 @@ def test_get_pending_reviews_for_execution_success(
def test_get_pending_reviews_for_execution_not_available( def test_get_pending_reviews_for_execution_not_available(
client: fastapi.testclient.TestClient,
mocker: pytest_mock.MockerFixture, mocker: pytest_mock.MockerFixture,
) -> None: ) -> None:
"""Test access denied when user doesn't own the execution""" """Test access denied when user doesn't own the execution"""
@@ -152,7 +138,6 @@ def test_get_pending_reviews_for_execution_not_available(
def test_process_review_action_approve_success( def test_process_review_action_approve_success(
client: fastapi.testclient.TestClient,
mocker: pytest_mock.MockerFixture, mocker: pytest_mock.MockerFixture,
sample_pending_review: PendingHumanReviewModel, sample_pending_review: PendingHumanReviewModel,
test_user_id: str, test_user_id: str,
@@ -160,12 +145,6 @@ def test_process_review_action_approve_success(
"""Test successful review approval""" """Test successful review approval"""
# Mock the route functions # Mock the route functions
# Mock get_pending_review_by_node_exec_id (called to find the graph_exec_id)
mock_get_reviews_for_user = mocker.patch(
"backend.api.features.executions.review.routes.get_pending_review_by_node_exec_id"
)
mock_get_reviews_for_user.return_value = sample_pending_review
mock_get_reviews_for_execution = mocker.patch( mock_get_reviews_for_execution = mocker.patch(
"backend.api.features.executions.review.routes.get_pending_reviews_for_execution" "backend.api.features.executions.review.routes.get_pending_reviews_for_execution"
) )
@@ -194,14 +173,6 @@ def test_process_review_action_approve_success(
) )
mock_process_all_reviews.return_value = {"test_node_123": approved_review} mock_process_all_reviews.return_value = {"test_node_123": approved_review}
# Mock get_graph_execution_meta to return execution in REVIEW status
mock_get_graph_exec = mocker.patch(
"backend.api.features.executions.review.routes.get_graph_execution_meta"
)
mock_graph_exec_meta = mocker.Mock()
mock_graph_exec_meta.status = ExecutionStatus.REVIEW
mock_get_graph_exec.return_value = mock_graph_exec_meta
mock_has_pending = mocker.patch( mock_has_pending = mocker.patch(
"backend.api.features.executions.review.routes.has_pending_reviews_for_graph_exec" "backend.api.features.executions.review.routes.has_pending_reviews_for_graph_exec"
) )
@@ -231,7 +202,6 @@ def test_process_review_action_approve_success(
def test_process_review_action_reject_success( def test_process_review_action_reject_success(
client: fastapi.testclient.TestClient,
mocker: pytest_mock.MockerFixture, mocker: pytest_mock.MockerFixture,
sample_pending_review: PendingHumanReviewModel, sample_pending_review: PendingHumanReviewModel,
test_user_id: str, test_user_id: str,
@@ -239,20 +209,6 @@ def test_process_review_action_reject_success(
"""Test successful review rejection""" """Test successful review rejection"""
# Mock the route functions # Mock the route functions
# Mock get_pending_review_by_node_exec_id (called to find the graph_exec_id)
mock_get_reviews_for_user = mocker.patch(
"backend.api.features.executions.review.routes.get_pending_review_by_node_exec_id"
)
mock_get_reviews_for_user.return_value = sample_pending_review
# Mock get_graph_execution_meta to return execution in REVIEW status
mock_get_graph_exec = mocker.patch(
"backend.api.features.executions.review.routes.get_graph_execution_meta"
)
mock_graph_exec_meta = mocker.Mock()
mock_graph_exec_meta.status = ExecutionStatus.REVIEW
mock_get_graph_exec.return_value = mock_graph_exec_meta
mock_get_reviews_for_execution = mocker.patch( mock_get_reviews_for_execution = mocker.patch(
"backend.api.features.executions.review.routes.get_pending_reviews_for_execution" "backend.api.features.executions.review.routes.get_pending_reviews_for_execution"
) )
@@ -306,7 +262,6 @@ def test_process_review_action_reject_success(
def test_process_review_action_mixed_success( def test_process_review_action_mixed_success(
client: fastapi.testclient.TestClient,
mocker: pytest_mock.MockerFixture, mocker: pytest_mock.MockerFixture,
sample_pending_review: PendingHumanReviewModel, sample_pending_review: PendingHumanReviewModel,
test_user_id: str, test_user_id: str,
@@ -333,12 +288,6 @@ def test_process_review_action_mixed_success(
# Mock the route functions # Mock the route functions
# Mock get_pending_review_by_node_exec_id (called to find the graph_exec_id)
mock_get_reviews_for_user = mocker.patch(
"backend.api.features.executions.review.routes.get_pending_review_by_node_exec_id"
)
mock_get_reviews_for_user.return_value = sample_pending_review
mock_get_reviews_for_execution = mocker.patch( mock_get_reviews_for_execution = mocker.patch(
"backend.api.features.executions.review.routes.get_pending_reviews_for_execution" "backend.api.features.executions.review.routes.get_pending_reviews_for_execution"
) )
@@ -388,14 +337,6 @@ def test_process_review_action_mixed_success(
"test_node_456": rejected_review, "test_node_456": rejected_review,
} }
# Mock get_graph_execution_meta to return execution in REVIEW status
mock_get_graph_exec = mocker.patch(
"backend.api.features.executions.review.routes.get_graph_execution_meta"
)
mock_graph_exec_meta = mocker.Mock()
mock_graph_exec_meta.status = ExecutionStatus.REVIEW
mock_get_graph_exec.return_value = mock_graph_exec_meta
mock_has_pending = mocker.patch( mock_has_pending = mocker.patch(
"backend.api.features.executions.review.routes.has_pending_reviews_for_graph_exec" "backend.api.features.executions.review.routes.has_pending_reviews_for_graph_exec"
) )
@@ -428,7 +369,6 @@ def test_process_review_action_mixed_success(
def test_process_review_action_empty_request( def test_process_review_action_empty_request(
client: fastapi.testclient.TestClient,
mocker: pytest_mock.MockerFixture, mocker: pytest_mock.MockerFixture,
test_user_id: str, test_user_id: str,
) -> None: ) -> None:
@@ -446,45 +386,10 @@ def test_process_review_action_empty_request(
def test_process_review_action_review_not_found( def test_process_review_action_review_not_found(
client: fastapi.testclient.TestClient,
mocker: pytest_mock.MockerFixture, mocker: pytest_mock.MockerFixture,
sample_pending_review: PendingHumanReviewModel,
test_user_id: str, test_user_id: str,
) -> None: ) -> None:
"""Test error when review is not found""" """Test error when review is not found"""
# Create a review with the nonexistent_node ID so the route can find the graph_exec_id
nonexistent_review = PendingHumanReviewModel(
node_exec_id="nonexistent_node",
user_id=test_user_id,
graph_exec_id="test_graph_exec_456",
graph_id="test_graph_789",
graph_version=1,
payload={"data": "test"},
instructions="Review",
editable=True,
status=ReviewStatus.WAITING,
review_message=None,
was_edited=None,
processed=False,
created_at=FIXED_NOW,
updated_at=None,
reviewed_at=None,
)
# Mock get_pending_review_by_node_exec_id (called to find the graph_exec_id)
mock_get_reviews_for_user = mocker.patch(
"backend.api.features.executions.review.routes.get_pending_review_by_node_exec_id"
)
mock_get_reviews_for_user.return_value = nonexistent_review
# Mock get_graph_execution_meta to return execution in REVIEW status
mock_get_graph_exec = mocker.patch(
"backend.api.features.executions.review.routes.get_graph_execution_meta"
)
mock_graph_exec_meta = mocker.Mock()
mock_graph_exec_meta.status = ExecutionStatus.REVIEW
mock_get_graph_exec.return_value = mock_graph_exec_meta
# Mock the functions that extract graph execution ID from the request # Mock the functions that extract graph execution ID from the request
mock_get_reviews_for_execution = mocker.patch( mock_get_reviews_for_execution = mocker.patch(
"backend.api.features.executions.review.routes.get_pending_reviews_for_execution" "backend.api.features.executions.review.routes.get_pending_reviews_for_execution"
@@ -517,26 +422,11 @@ def test_process_review_action_review_not_found(
def test_process_review_action_partial_failure( def test_process_review_action_partial_failure(
client: fastapi.testclient.TestClient,
mocker: pytest_mock.MockerFixture, mocker: pytest_mock.MockerFixture,
sample_pending_review: PendingHumanReviewModel, sample_pending_review: PendingHumanReviewModel,
test_user_id: str, test_user_id: str,
) -> None: ) -> None:
"""Test handling of partial failures in review processing""" """Test handling of partial failures in review processing"""
# Mock get_pending_review_by_node_exec_id (called to find the graph_exec_id)
mock_get_reviews_for_user = mocker.patch(
"backend.api.features.executions.review.routes.get_pending_review_by_node_exec_id"
)
mock_get_reviews_for_user.return_value = sample_pending_review
# Mock get_graph_execution_meta to return execution in REVIEW status
mock_get_graph_exec = mocker.patch(
"backend.api.features.executions.review.routes.get_graph_execution_meta"
)
mock_graph_exec_meta = mocker.Mock()
mock_graph_exec_meta.status = ExecutionStatus.REVIEW
mock_get_graph_exec.return_value = mock_graph_exec_meta
# Mock the route functions # Mock the route functions
mock_get_reviews_for_execution = mocker.patch( mock_get_reviews_for_execution = mocker.patch(
"backend.api.features.executions.review.routes.get_pending_reviews_for_execution" "backend.api.features.executions.review.routes.get_pending_reviews_for_execution"
@@ -566,50 +456,16 @@ def test_process_review_action_partial_failure(
def test_process_review_action_invalid_node_exec_id( def test_process_review_action_invalid_node_exec_id(
client: fastapi.testclient.TestClient,
mocker: pytest_mock.MockerFixture, mocker: pytest_mock.MockerFixture,
sample_pending_review: PendingHumanReviewModel, sample_pending_review: PendingHumanReviewModel,
test_user_id: str, test_user_id: str,
) -> None: ) -> None:
"""Test failure when trying to process review with invalid node execution ID""" """Test failure when trying to process review with invalid node execution ID"""
# Create a review with the invalid-node-format ID so the route can find the graph_exec_id
invalid_review = PendingHumanReviewModel(
node_exec_id="invalid-node-format",
user_id=test_user_id,
graph_exec_id="test_graph_exec_456",
graph_id="test_graph_789",
graph_version=1,
payload={"data": "test"},
instructions="Review",
editable=True,
status=ReviewStatus.WAITING,
review_message=None,
was_edited=None,
processed=False,
created_at=FIXED_NOW,
updated_at=None,
reviewed_at=None,
)
# Mock get_pending_review_by_node_exec_id (called to find the graph_exec_id)
mock_get_reviews_for_user = mocker.patch(
"backend.api.features.executions.review.routes.get_pending_review_by_node_exec_id"
)
mock_get_reviews_for_user.return_value = invalid_review
# Mock get_graph_execution_meta to return execution in REVIEW status
mock_get_graph_exec = mocker.patch(
"backend.api.features.executions.review.routes.get_graph_execution_meta"
)
mock_graph_exec_meta = mocker.Mock()
mock_graph_exec_meta.status = ExecutionStatus.REVIEW
mock_get_graph_exec.return_value = mock_graph_exec_meta
# Mock the route functions # Mock the route functions
     mock_get_reviews_for_execution = mocker.patch(
         "backend.api.features.executions.review.routes.get_pending_reviews_for_execution"
     )
-    mock_get_reviews_for_execution.return_value = [invalid_review]
+    mock_get_reviews_for_execution.return_value = [sample_pending_review]
# Mock validation failure - this should return 400, not 500 # Mock validation failure - this should return 400, not 500
mock_process_all_reviews = mocker.patch( mock_process_all_reviews = mocker.patch(
@@ -634,571 +490,3 @@ def test_process_review_action_invalid_node_exec_id(
# Should be a 400 Bad Request, not 500 Internal Server Error # Should be a 400 Bad Request, not 500 Internal Server Error
assert response.status_code == 400 assert response.status_code == 400
assert "Invalid node execution ID format" in response.json()["detail"] assert "Invalid node execution ID format" in response.json()["detail"]
def test_process_review_action_auto_approve_creates_auto_approval_records(
client: fastapi.testclient.TestClient,
mocker: pytest_mock.MockerFixture,
sample_pending_review: PendingHumanReviewModel,
test_user_id: str,
) -> None:
"""Test that auto_approve_future_actions flag creates auto-approval records"""
# Mock get_pending_review_by_node_exec_id (called to find the graph_exec_id)
mock_get_reviews_for_user = mocker.patch(
"backend.api.features.executions.review.routes.get_pending_review_by_node_exec_id"
)
mock_get_reviews_for_user.return_value = sample_pending_review
# Mock process_all_reviews
mock_process_all_reviews = mocker.patch(
"backend.api.features.executions.review.routes.process_all_reviews_for_execution"
)
approved_review = PendingHumanReviewModel(
node_exec_id="test_node_123",
user_id=test_user_id,
graph_exec_id="test_graph_exec_456",
graph_id="test_graph_789",
graph_version=1,
payload={"data": "test payload"},
instructions="Please review",
editable=True,
status=ReviewStatus.APPROVED,
review_message="Approved",
was_edited=False,
processed=False,
created_at=FIXED_NOW,
updated_at=FIXED_NOW,
reviewed_at=FIXED_NOW,
)
mock_process_all_reviews.return_value = {"test_node_123": approved_review}
# Mock get_node_execution to return node_id
mock_get_node_execution = mocker.patch(
"backend.api.features.executions.review.routes.get_node_execution"
)
mock_node_exec = mocker.Mock(spec=NodeExecutionResult)
mock_node_exec.node_id = "test_node_def_456"
mock_get_node_execution.return_value = mock_node_exec
# Mock create_auto_approval_record
mock_create_auto_approval = mocker.patch(
"backend.api.features.executions.review.routes.create_auto_approval_record"
)
# Mock get_graph_execution_meta to return execution in REVIEW status
mock_get_graph_exec = mocker.patch(
"backend.api.features.executions.review.routes.get_graph_execution_meta"
)
mock_graph_exec_meta = mocker.Mock()
mock_graph_exec_meta.status = ExecutionStatus.REVIEW
mock_get_graph_exec.return_value = mock_graph_exec_meta
# Mock has_pending_reviews_for_graph_exec
mock_has_pending = mocker.patch(
"backend.api.features.executions.review.routes.has_pending_reviews_for_graph_exec"
)
mock_has_pending.return_value = False
# Mock get_graph_settings to return custom settings
mock_get_settings = mocker.patch(
"backend.api.features.executions.review.routes.get_graph_settings"
)
mock_get_settings.return_value = GraphSettings(
human_in_the_loop_safe_mode=True,
sensitive_action_safe_mode=True,
)
# Mock add_graph_execution
mock_add_execution = mocker.patch(
"backend.api.features.executions.review.routes.add_graph_execution"
)
request_data = {
"reviews": [
{
"node_exec_id": "test_node_123",
"approved": True,
"message": "Approved",
"auto_approve_future": True,
}
],
}
response = client.post("/api/review/action", json=request_data)
assert response.status_code == 200
# Verify process_all_reviews_for_execution was called (without auto_approve param)
mock_process_all_reviews.assert_called_once()
# Verify create_auto_approval_record was called for the approved review
mock_create_auto_approval.assert_called_once_with(
user_id=test_user_id,
graph_exec_id="test_graph_exec_456",
graph_id="test_graph_789",
graph_version=1,
node_id="test_node_def_456",
payload={"data": "test payload"},
)
# Verify get_graph_settings was called with correct parameters
mock_get_settings.assert_called_once_with(
user_id=test_user_id, graph_id="test_graph_789"
)
# Verify add_graph_execution was called with proper ExecutionContext
mock_add_execution.assert_called_once()
call_kwargs = mock_add_execution.call_args.kwargs
execution_context = call_kwargs["execution_context"]
assert isinstance(execution_context, ExecutionContext)
assert execution_context.human_in_the_loop_safe_mode is True
assert execution_context.sensitive_action_safe_mode is True
def test_process_review_action_without_auto_approve_still_loads_settings(
client: fastapi.testclient.TestClient,
mocker: pytest_mock.MockerFixture,
sample_pending_review: PendingHumanReviewModel,
test_user_id: str,
) -> None:
"""Test that execution context is created with settings even without auto-approve"""
# Mock get_pending_review_by_node_exec_id (called to find the graph_exec_id)
mock_get_reviews_for_user = mocker.patch(
"backend.api.features.executions.review.routes.get_pending_review_by_node_exec_id"
)
mock_get_reviews_for_user.return_value = sample_pending_review
# Mock process_all_reviews
mock_process_all_reviews = mocker.patch(
"backend.api.features.executions.review.routes.process_all_reviews_for_execution"
)
approved_review = PendingHumanReviewModel(
node_exec_id="test_node_123",
user_id=test_user_id,
graph_exec_id="test_graph_exec_456",
graph_id="test_graph_789",
graph_version=1,
payload={"data": "test payload"},
instructions="Please review",
editable=True,
status=ReviewStatus.APPROVED,
review_message="Approved",
was_edited=False,
processed=False,
created_at=FIXED_NOW,
updated_at=FIXED_NOW,
reviewed_at=FIXED_NOW,
)
mock_process_all_reviews.return_value = {"test_node_123": approved_review}
# Mock create_auto_approval_record - should NOT be called when auto_approve is False
mock_create_auto_approval = mocker.patch(
"backend.api.features.executions.review.routes.create_auto_approval_record"
)
# Mock get_graph_execution_meta to return execution in REVIEW status
mock_get_graph_exec = mocker.patch(
"backend.api.features.executions.review.routes.get_graph_execution_meta"
)
mock_graph_exec_meta = mocker.Mock()
mock_graph_exec_meta.status = ExecutionStatus.REVIEW
mock_get_graph_exec.return_value = mock_graph_exec_meta
# Mock has_pending_reviews_for_graph_exec
mock_has_pending = mocker.patch(
"backend.api.features.executions.review.routes.has_pending_reviews_for_graph_exec"
)
mock_has_pending.return_value = False
# Mock get_graph_settings with sensitive_action_safe_mode enabled
mock_get_settings = mocker.patch(
"backend.api.features.executions.review.routes.get_graph_settings"
)
mock_get_settings.return_value = GraphSettings(
human_in_the_loop_safe_mode=False,
sensitive_action_safe_mode=True,
)
# Mock add_graph_execution
mock_add_execution = mocker.patch(
"backend.api.features.executions.review.routes.add_graph_execution"
)
# Request WITHOUT auto_approve_future (defaults to False)
request_data = {
"reviews": [
{
"node_exec_id": "test_node_123",
"approved": True,
"message": "Approved",
# auto_approve_future defaults to False
}
],
}
response = client.post("/api/review/action", json=request_data)
assert response.status_code == 200
# Verify process_all_reviews_for_execution was called
mock_process_all_reviews.assert_called_once()
# Verify create_auto_approval_record was NOT called (auto_approve_future=False)
mock_create_auto_approval.assert_not_called()
# Verify settings were loaded
mock_get_settings.assert_called_once()
# Verify ExecutionContext has proper settings
mock_add_execution.assert_called_once()
call_kwargs = mock_add_execution.call_args.kwargs
execution_context = call_kwargs["execution_context"]
assert isinstance(execution_context, ExecutionContext)
assert execution_context.human_in_the_loop_safe_mode is False
assert execution_context.sensitive_action_safe_mode is True
def test_process_review_action_auto_approve_only_applies_to_approved_reviews(
client: fastapi.testclient.TestClient,
mocker: pytest_mock.MockerFixture,
test_user_id: str,
) -> None:
"""Test that auto_approve record is created only for approved reviews"""
# Create two reviews - one approved, one rejected
approved_review = PendingHumanReviewModel(
node_exec_id="node_exec_approved",
user_id=test_user_id,
graph_exec_id="test_graph_exec_456",
graph_id="test_graph_789",
graph_version=1,
payload={"data": "approved"},
instructions="Review",
editable=True,
status=ReviewStatus.APPROVED,
review_message=None,
was_edited=False,
processed=False,
created_at=FIXED_NOW,
updated_at=FIXED_NOW,
reviewed_at=FIXED_NOW,
)
rejected_review = PendingHumanReviewModel(
node_exec_id="node_exec_rejected",
user_id=test_user_id,
graph_exec_id="test_graph_exec_456",
graph_id="test_graph_789",
graph_version=1,
payload={"data": "rejected"},
instructions="Review",
editable=True,
status=ReviewStatus.REJECTED,
review_message="Rejected",
was_edited=False,
processed=False,
created_at=FIXED_NOW,
updated_at=FIXED_NOW,
reviewed_at=FIXED_NOW,
)
# Mock get_pending_review_by_node_exec_id (called to find the graph_exec_id)
mock_get_reviews_for_user = mocker.patch(
"backend.api.features.executions.review.routes.get_pending_review_by_node_exec_id"
)
mock_get_reviews_for_user.return_value = approved_review
# Mock process_all_reviews
mock_process_all_reviews = mocker.patch(
"backend.api.features.executions.review.routes.process_all_reviews_for_execution"
)
mock_process_all_reviews.return_value = {
"node_exec_approved": approved_review,
"node_exec_rejected": rejected_review,
}
# Mock get_node_execution to return node_id (only called for approved review)
mock_get_node_execution = mocker.patch(
"backend.api.features.executions.review.routes.get_node_execution"
)
mock_node_exec = mocker.Mock(spec=NodeExecutionResult)
mock_node_exec.node_id = "test_node_def_approved"
mock_get_node_execution.return_value = mock_node_exec
# Mock create_auto_approval_record
mock_create_auto_approval = mocker.patch(
"backend.api.features.executions.review.routes.create_auto_approval_record"
)
# Mock get_graph_execution_meta to return execution in REVIEW status
mock_get_graph_exec = mocker.patch(
"backend.api.features.executions.review.routes.get_graph_execution_meta"
)
mock_graph_exec_meta = mocker.Mock()
mock_graph_exec_meta.status = ExecutionStatus.REVIEW
mock_get_graph_exec.return_value = mock_graph_exec_meta
# Mock has_pending_reviews_for_graph_exec
mock_has_pending = mocker.patch(
"backend.api.features.executions.review.routes.has_pending_reviews_for_graph_exec"
)
mock_has_pending.return_value = False
# Mock get_graph_settings
mock_get_settings = mocker.patch(
"backend.api.features.executions.review.routes.get_graph_settings"
)
mock_get_settings.return_value = GraphSettings()
# Mock add_graph_execution
mock_add_execution = mocker.patch(
"backend.api.features.executions.review.routes.add_graph_execution"
)
request_data = {
"reviews": [
{
"node_exec_id": "node_exec_approved",
"approved": True,
"auto_approve_future": True,
},
{
"node_exec_id": "node_exec_rejected",
"approved": False,
"auto_approve_future": True, # Should be ignored since rejected
},
],
}
response = client.post("/api/review/action", json=request_data)
assert response.status_code == 200
# Verify process_all_reviews_for_execution was called
mock_process_all_reviews.assert_called_once()
# Verify create_auto_approval_record was called ONLY for the approved review
# (not for the rejected one)
mock_create_auto_approval.assert_called_once_with(
user_id=test_user_id,
graph_exec_id="test_graph_exec_456",
graph_id="test_graph_789",
graph_version=1,
node_id="test_node_def_approved",
payload={"data": "approved"},
)
# Verify get_node_execution was called only for approved review
mock_get_node_execution.assert_called_once_with("node_exec_approved")
# Verify ExecutionContext was created (auto-approval is now DB-based)
call_kwargs = mock_add_execution.call_args.kwargs
execution_context = call_kwargs["execution_context"]
assert isinstance(execution_context, ExecutionContext)
def test_process_review_action_per_review_auto_approve_granularity(
client: fastapi.testclient.TestClient,
mocker: pytest_mock.MockerFixture,
sample_pending_review: PendingHumanReviewModel,
test_user_id: str,
) -> None:
"""Test that auto-approval can be set per-review (granular control)"""
# Mock get_pending_review_by_node_exec_id - return different reviews based on node_exec_id
mock_get_reviews_for_user = mocker.patch(
"backend.api.features.executions.review.routes.get_pending_review_by_node_exec_id"
)
# Create a mapping of node_exec_id to review
review_map = {
"node_1_auto": PendingHumanReviewModel(
node_exec_id="node_1_auto",
user_id=test_user_id,
graph_exec_id="test_graph_exec",
graph_id="test_graph",
graph_version=1,
payload={"data": "node1"},
instructions="Review 1",
editable=True,
status=ReviewStatus.WAITING,
review_message=None,
was_edited=False,
processed=False,
created_at=FIXED_NOW,
),
"node_2_manual": PendingHumanReviewModel(
node_exec_id="node_2_manual",
user_id=test_user_id,
graph_exec_id="test_graph_exec",
graph_id="test_graph",
graph_version=1,
payload={"data": "node2"},
instructions="Review 2",
editable=True,
status=ReviewStatus.WAITING,
review_message=None,
was_edited=False,
processed=False,
created_at=FIXED_NOW,
),
"node_3_auto": PendingHumanReviewModel(
node_exec_id="node_3_auto",
user_id=test_user_id,
graph_exec_id="test_graph_exec",
graph_id="test_graph",
graph_version=1,
payload={"data": "node3"},
instructions="Review 3",
editable=True,
status=ReviewStatus.WAITING,
review_message=None,
was_edited=False,
processed=False,
created_at=FIXED_NOW,
),
}
# Use side_effect to return different reviews based on node_exec_id parameter
def mock_get_review_by_id(node_exec_id: str, _user_id: str):
return review_map.get(node_exec_id)
mock_get_reviews_for_user.side_effect = mock_get_review_by_id
# Mock process_all_reviews - return 3 approved reviews
mock_process_all_reviews = mocker.patch(
"backend.api.features.executions.review.routes.process_all_reviews_for_execution"
)
mock_process_all_reviews.return_value = {
"node_1_auto": PendingHumanReviewModel(
node_exec_id="node_1_auto",
user_id=test_user_id,
graph_exec_id="test_graph_exec",
graph_id="test_graph",
graph_version=1,
payload={"data": "node1"},
instructions="Review 1",
editable=True,
status=ReviewStatus.APPROVED,
review_message=None,
was_edited=False,
processed=False,
created_at=FIXED_NOW,
updated_at=FIXED_NOW,
reviewed_at=FIXED_NOW,
),
"node_2_manual": PendingHumanReviewModel(
node_exec_id="node_2_manual",
user_id=test_user_id,
graph_exec_id="test_graph_exec",
graph_id="test_graph",
graph_version=1,
payload={"data": "node2"},
instructions="Review 2",
editable=True,
status=ReviewStatus.APPROVED,
review_message=None,
was_edited=False,
processed=False,
created_at=FIXED_NOW,
updated_at=FIXED_NOW,
reviewed_at=FIXED_NOW,
),
"node_3_auto": PendingHumanReviewModel(
node_exec_id="node_3_auto",
user_id=test_user_id,
graph_exec_id="test_graph_exec",
graph_id="test_graph",
graph_version=1,
payload={"data": "node3"},
instructions="Review 3",
editable=True,
status=ReviewStatus.APPROVED,
review_message=None,
was_edited=False,
processed=False,
created_at=FIXED_NOW,
updated_at=FIXED_NOW,
reviewed_at=FIXED_NOW,
),
}
# Mock get_node_execution
mock_get_node_execution = mocker.patch(
"backend.api.features.executions.review.routes.get_node_execution"
)
def mock_get_node(node_exec_id: str):
mock_node = mocker.Mock(spec=NodeExecutionResult)
mock_node.node_id = f"node_def_{node_exec_id}"
return mock_node
mock_get_node_execution.side_effect = mock_get_node
# Mock create_auto_approval_record
mock_create_auto_approval = mocker.patch(
"backend.api.features.executions.review.routes.create_auto_approval_record"
)
# Mock get_graph_execution_meta
mock_get_graph_exec = mocker.patch(
"backend.api.features.executions.review.routes.get_graph_execution_meta"
)
mock_graph_exec_meta = mocker.Mock()
mock_graph_exec_meta.status = ExecutionStatus.REVIEW
mock_get_graph_exec.return_value = mock_graph_exec_meta
# Mock has_pending_reviews_for_graph_exec
mock_has_pending = mocker.patch(
"backend.api.features.executions.review.routes.has_pending_reviews_for_graph_exec"
)
mock_has_pending.return_value = False
# Mock settings and execution
mock_get_settings = mocker.patch(
"backend.api.features.executions.review.routes.get_graph_settings"
)
mock_get_settings.return_value = GraphSettings(
human_in_the_loop_safe_mode=False, sensitive_action_safe_mode=False
)
mocker.patch("backend.api.features.executions.review.routes.add_graph_execution")
mocker.patch("backend.api.features.executions.review.routes.get_user_by_id")
# Request with granular auto-approval:
# - node_1_auto: auto_approve_future=True
# - node_2_manual: auto_approve_future=False (explicit)
# - node_3_auto: auto_approve_future=True
request_data = {
"reviews": [
{
"node_exec_id": "node_1_auto",
"approved": True,
"auto_approve_future": True,
},
{
"node_exec_id": "node_2_manual",
"approved": True,
"auto_approve_future": False, # Don't auto-approve this one
},
{
"node_exec_id": "node_3_auto",
"approved": True,
"auto_approve_future": True,
},
],
}
response = client.post("/api/review/action", json=request_data)
assert response.status_code == 200
# Verify create_auto_approval_record was called ONLY for reviews with auto_approve_future=True
assert mock_create_auto_approval.call_count == 2
# Check that it was called for node_1 and node_3, but NOT node_2
call_args_list = [call.kwargs for call in mock_create_auto_approval.call_args_list]
node_ids_with_auto_approval = [args["node_id"] for args in call_args_list]
assert "node_def_node_1_auto" in node_ids_with_auto_approval
assert "node_def_node_3_auto" in node_ids_with_auto_approval
assert "node_def_node_2_manual" not in node_ids_with_auto_approval

View File

@@ -5,23 +5,13 @@ import autogpt_libs.auth as autogpt_auth_lib
 from fastapi import APIRouter, HTTPException, Query, Security, status
 from prisma.enums import ReviewStatus
-from backend.data.execution import (
-    ExecutionContext,
-    ExecutionStatus,
-    get_graph_execution_meta,
-    get_node_execution,
-)
-from backend.data.graph import get_graph_settings
+from backend.data.execution import get_graph_execution_meta
 from backend.data.human_review import (
-    create_auto_approval_record,
-    get_pending_review_by_node_exec_id,
     get_pending_reviews_for_execution,
     get_pending_reviews_for_user,
     has_pending_reviews_for_graph_exec,
     process_all_reviews_for_execution,
 )
-from backend.data.model import USER_TIMEZONE_NOT_SET
-from backend.data.user import get_user_by_id
 from backend.executor.utils import add_graph_execution
 from .model import PendingHumanReviewModel, ReviewRequest, ReviewResponse
@@ -137,64 +127,17 @@ async def process_review_action(
detail="At least one review must be provided", detail="At least one review must be provided",
) )
# Get graph execution ID by directly looking up one of the requested reviews # Build review decisions map
# Use direct lookup to avoid pagination issues (can't miss reviews beyond first page)
matching_review = None
for node_exec_id in all_request_node_ids:
review = await get_pending_review_by_node_exec_id(node_exec_id, user_id)
if review:
matching_review = review
break
if not matching_review:
raise HTTPException(
status_code=status.HTTP_404_NOT_FOUND,
detail="No pending reviews found for the requested node executions",
)
graph_exec_id = matching_review.graph_exec_id
# Validate execution status before processing reviews
graph_exec_meta = await get_graph_execution_meta(
user_id=user_id, execution_id=graph_exec_id
)
if not graph_exec_meta:
raise HTTPException(
status_code=status.HTTP_404_NOT_FOUND,
detail=f"Graph execution #{graph_exec_id} not found",
)
# Only allow processing reviews if execution is paused for review
# or incomplete (partial execution with some reviews already processed)
if graph_exec_meta.status not in (
ExecutionStatus.REVIEW,
ExecutionStatus.INCOMPLETE,
):
raise HTTPException(
status_code=status.HTTP_409_CONFLICT,
detail=f"Cannot process reviews while execution status is {graph_exec_meta.status}. "
f"Reviews can only be processed when execution is paused (REVIEW status). "
f"Current status: {graph_exec_meta.status}",
)
# Build review decisions map and track which reviews requested auto-approval
# Auto-approved reviews use original data (no modifications allowed)
review_decisions = {} review_decisions = {}
auto_approve_requests = {} # Map node_exec_id -> auto_approve_future flag
for review in request.reviews: for review in request.reviews:
review_status = ( review_status = (
ReviewStatus.APPROVED if review.approved else ReviewStatus.REJECTED ReviewStatus.APPROVED if review.approved else ReviewStatus.REJECTED
) )
# If this review requested auto-approval, don't allow data modifications
reviewed_data = None if review.auto_approve_future else review.reviewed_data
review_decisions[review.node_exec_id] = ( review_decisions[review.node_exec_id] = (
review_status, review_status,
reviewed_data, review.reviewed_data,
review.message, review.message,
) )
auto_approve_requests[review.node_exec_id] = review.auto_approve_future
# Process all reviews # Process all reviews
updated_reviews = await process_all_reviews_for_execution( updated_reviews = await process_all_reviews_for_execution(
@@ -202,32 +145,6 @@ async def process_review_action(
         review_decisions=review_decisions,
     )
-    # Create auto-approval records for approved reviews that requested it
-    # Note: Processing sequentially to avoid event loop issues in tests
-    for node_exec_id, review_result in updated_reviews.items():
-        # Only create auto-approval if:
-        # 1. This review was approved
-        # 2. The review requested auto-approval
-        if review_result.status == ReviewStatus.APPROVED and auto_approve_requests.get(
-            node_exec_id, False
-        ):
-            try:
-                node_exec = await get_node_execution(node_exec_id)
-                if node_exec:
-                    await create_auto_approval_record(
-                        user_id=user_id,
-                        graph_exec_id=review_result.graph_exec_id,
-                        graph_id=review_result.graph_id,
-                        graph_version=review_result.graph_version,
-                        node_id=node_exec.node_id,
-                        payload=review_result.payload,
-                    )
-            except Exception as e:
-                logger.error(
-                    f"Failed to create auto-approval record for {node_exec_id}",
-                    exc_info=e,
-                )
     # Count results
     approved_count = sum(
         1
@@ -240,37 +157,22 @@ async def process_review_action(
         if review.status == ReviewStatus.REJECTED
     )
-    # Resume execution only if ALL pending reviews for this execution have been processed
+    # Resume execution if we processed some reviews
     if updated_reviews:
+        # Get graph execution ID from any processed review
+        first_review = next(iter(updated_reviews.values()))
+        graph_exec_id = first_review.graph_exec_id
+        # Check if any pending reviews remain for this execution
         still_has_pending = await has_pending_reviews_for_graph_exec(graph_exec_id)
         if not still_has_pending:
-            # Get the graph_id from any processed review
-            first_review = next(iter(updated_reviews.values()))
+            # Resume execution
             try:
-                # Fetch user and settings to build complete execution context
-                user = await get_user_by_id(user_id)
-                settings = await get_graph_settings(
-                    user_id=user_id, graph_id=first_review.graph_id
-                )
-                # Preserve user's timezone preference when resuming execution
-                user_timezone = (
-                    user.timezone if user.timezone != USER_TIMEZONE_NOT_SET else "UTC"
-                )
-                execution_context = ExecutionContext(
-                    human_in_the_loop_safe_mode=settings.human_in_the_loop_safe_mode,
-                    sensitive_action_safe_mode=settings.sensitive_action_safe_mode,
-                    user_timezone=user_timezone,
-                )
                 await add_graph_execution(
                     graph_id=first_review.graph_id,
                     user_id=user_id,
                     graph_exec_id=graph_exec_id,
-                    execution_context=execution_context,
                 )
                 logger.info(f"Resumed execution {graph_exec_id}")
             except Exception as e:

View File

@@ -6,7 +6,6 @@ Handles generation and storage of OpenAI embeddings for all content types
""" """
import asyncio import asyncio
import contextvars
import logging import logging
import time import time
from typing import Any from typing import Any
@@ -22,11 +21,6 @@ from backend.util.json import dumps
logger = logging.getLogger(__name__) logger = logging.getLogger(__name__)
# Context variable to track errors logged in the current task/operation
# This prevents spamming the same error multiple times when processing batches
_logged_errors: contextvars.ContextVar[set[str]] = contextvars.ContextVar(
"_logged_errors"
)
# OpenAI embedding model configuration # OpenAI embedding model configuration
EMBEDDING_MODEL = "text-embedding-3-small" EMBEDDING_MODEL = "text-embedding-3-small"
@@ -37,42 +31,6 @@ EMBEDDING_DIM = 1536
EMBEDDING_MAX_TOKENS = 8191 EMBEDDING_MAX_TOKENS = 8191
def log_once_per_task(error_key: str, log_fn, message: str, **kwargs) -> bool:
"""
Log an error/warning only once per task/operation to avoid log spam.
Uses contextvars to track what has been logged in the current async context.
Useful when processing batches where the same error might occur for many items.
Args:
error_key: Unique identifier for this error type
log_fn: Logger function to call (e.g., logger.error, logger.warning)
message: Message to log
**kwargs: Additional arguments to pass to log_fn
Returns:
True if the message was logged, False if it was suppressed (already logged)
Example:
log_once_per_task("missing_api_key", logger.error, "API key not set")
"""
# Get current logged errors, or create a new set if this is the first call in this context
logged = _logged_errors.get(None)
if logged is None:
logged = set()
_logged_errors.set(logged)
if error_key in logged:
return False
# Log the message with a note that it will only appear once
log_fn(f"{message} (This message will only be shown once per task.)", **kwargs)
# Mark as logged
logged.add(error_key)
return True
def build_searchable_text( def build_searchable_text(
name: str, name: str,
description: str, description: str,
@@ -115,11 +73,7 @@ async def generate_embedding(text: str) -> list[float] | None:
     try:
         client = get_openai_client()
         if not client:
-            log_once_per_task(
-                "openai_api_key_missing",
-                logger.error,
-                "openai_internal_api_key not set, cannot generate embeddings",
-            )
+            logger.error("openai_internal_api_key not set, cannot generate embedding")
             return None
         # Truncate text to token limit using tiktoken
@@ -336,12 +290,7 @@ async def ensure_embedding(
     # Generate new embedding
     embedding = await generate_embedding(searchable_text)
     if embedding is None:
-        log_once_per_task(
-            "embedding_generation_failed",
-            logger.warning,
-            "Could not generate embeddings (missing API key or service unavailable). "
-            "Embedding generation is disabled for this task.",
-        )
+        logger.warning(f"Could not generate embedding for version {version_id}")
         return False
     # Store the embedding with metadata using new function
@@ -660,11 +609,8 @@ async def ensure_content_embedding(
     # Generate new embedding
     embedding = await generate_embedding(searchable_text)
     if embedding is None:
-        log_once_per_task(
-            "embedding_generation_failed",
-            logger.warning,
-            "Could not generate embeddings (missing API key or service unavailable). "
-            "Embedding generation is disabled for this task.",
-        )
+        logger.warning(
+            f"Could not generate embedding for {content_type}:{content_id}"
+        )
         return False

View File

@@ -116,7 +116,6 @@ class PrintToConsoleBlock(Block):
             input_schema=PrintToConsoleBlock.Input,
             output_schema=PrintToConsoleBlock.Output,
             test_input={"text": "Hello, World!"},
-            is_sensitive_action=True,
             test_output=[
                 ("output", "Hello, World!"),
                 ("status", "printed"),

View File

@@ -9,7 +9,7 @@ from typing import Any, Optional
 from prisma.enums import ReviewStatus
 from pydantic import BaseModel
-from backend.data.execution import ExecutionStatus
+from backend.data.execution import ExecutionContext, ExecutionStatus
 from backend.data.human_review import ReviewResult
 from backend.executor.manager import async_update_node_execution_status
 from backend.util.clients import get_database_manager_async_client
@@ -28,11 +28,6 @@ class ReviewDecision(BaseModel):
 class HITLReviewHelper:
     """Helper class for Human-In-The-Loop review operations."""
-    @staticmethod
-    async def check_approval(**kwargs) -> Optional[ReviewResult]:
-        """Check if there's an existing approval for this node execution."""
-        return await get_database_manager_async_client().check_approval(**kwargs)
     @staticmethod
     async def get_or_create_human_review(**kwargs) -> Optional[ReviewResult]:
         """Create or retrieve a human review from the database."""
@@ -60,11 +55,11 @@ class HITLReviewHelper:
     async def _handle_review_request(
         input_data: Any,
         user_id: str,
-        node_id: str,
         node_exec_id: str,
         graph_exec_id: str,
         graph_id: str,
         graph_version: int,
+        execution_context: ExecutionContext,
         block_name: str = "Block",
         editable: bool = False,
     ) -> Optional[ReviewResult]:
@@ -74,11 +69,11 @@ class HITLReviewHelper:
         Args:
             input_data: The input data to be reviewed
             user_id: ID of the user requesting the review
-            node_id: ID of the node in the graph definition
             node_exec_id: ID of the node execution
             graph_exec_id: ID of the graph execution
             graph_id: ID of the graph
             graph_version: Version of the graph
+            execution_context: Current execution context
             block_name: Name of the block requesting review
             editable: Whether the reviewer can edit the data
@@ -88,40 +83,15 @@ class HITLReviewHelper:
         Raises:
             Exception: If review creation or status update fails
         """
-        # Note: Safe mode checks (human_in_the_loop_safe_mode, sensitive_action_safe_mode)
-        # are handled by the caller:
-        # - HITL blocks check human_in_the_loop_safe_mode in their run() method
-        # - Sensitive action blocks check sensitive_action_safe_mode in is_block_exec_need_review()
-        # This function only handles checking for existing approvals.
-        # Check if this node has already been approved (normal or auto-approval)
-        if approval_result := await HITLReviewHelper.check_approval(
-            node_exec_id=node_exec_id,
-            graph_exec_id=graph_exec_id,
-            node_id=node_id,
-            user_id=user_id,
-        ):
+        # Skip review if safe mode is disabled - return auto-approved result
+        if not execution_context.human_in_the_loop_safe_mode:
             logger.info(
-                f"Block {block_name} skipping review for node {node_exec_id} - "
-                f"found existing approval"
+                f"Block {block_name} skipping review for node {node_exec_id} - safe mode disabled"
             )
-            # Return a new ReviewResult with the current node_exec_id but approved status
-            # For auto-approvals, always use current input_data
-            # For normal approvals, use approval_result.data unless it's None
-            is_auto_approval = approval_result.node_exec_id != node_exec_id
-            approved_data = (
-                input_data
-                if is_auto_approval
-                else (
-                    approval_result.data
-                    if approval_result.data is not None
-                    else input_data
-                )
-            )
             return ReviewResult(
-                data=approved_data,
+                data=input_data,
                 status=ReviewStatus.APPROVED,
-                message=approval_result.message,
+                message="Auto-approved (safe mode disabled)",
                 processed=True,
                 node_exec_id=node_exec_id,
             )
@@ -159,11 +129,11 @@ class HITLReviewHelper:
     async def handle_review_decision(
         input_data: Any,
         user_id: str,
-        node_id: str,
         node_exec_id: str,
         graph_exec_id: str,
         graph_id: str,
         graph_version: int,
+        execution_context: ExecutionContext,
         block_name: str = "Block",
         editable: bool = False,
     ) -> Optional[ReviewDecision]:
@@ -173,11 +143,11 @@ class HITLReviewHelper:
         Args:
             input_data: The input data to be reviewed
             user_id: ID of the user requesting the review
-            node_id: ID of the node in the graph definition
             node_exec_id: ID of the node execution
             graph_exec_id: ID of the graph execution
             graph_id: ID of the graph
             graph_version: Version of the graph
+            execution_context: Current execution context
             block_name: Name of the block requesting review
             editable: Whether the reviewer can edit the data
@@ -188,11 +158,11 @@ class HITLReviewHelper:
         review_result = await HITLReviewHelper._handle_review_request(
             input_data=input_data,
             user_id=user_id,
-            node_id=node_id,
             node_exec_id=node_exec_id,
             graph_exec_id=graph_exec_id,
             graph_id=graph_id,
             graph_version=graph_version,
+            execution_context=execution_context,
             block_name=block_name,
             editable=editable,
         )

View File

@@ -97,7 +97,6 @@ class HumanInTheLoopBlock(Block):
input_data: Input, input_data: Input,
*, *,
user_id: str, user_id: str,
node_id: str,
node_exec_id: str, node_exec_id: str,
graph_exec_id: str, graph_exec_id: str,
graph_id: str, graph_id: str,
@@ -116,11 +115,11 @@ class HumanInTheLoopBlock(Block):
decision = await self.handle_review_decision( decision = await self.handle_review_decision(
input_data=input_data.data, input_data=input_data.data,
user_id=user_id, user_id=user_id,
node_id=node_id,
node_exec_id=node_exec_id, node_exec_id=node_exec_id,
graph_exec_id=graph_exec_id, graph_exec_id=graph_exec_id,
graph_id=graph_id, graph_id=graph_id,
graph_version=graph_version, graph_version=graph_version,
execution_context=execution_context,
block_name=self.name, block_name=self.name,
editable=input_data.editable, editable=input_data.editable,
) )

View File

@@ -441,7 +441,6 @@ class Block(ABC, Generic[BlockSchemaInputType, BlockSchemaOutputType]):
static_output: bool = False, static_output: bool = False,
block_type: BlockType = BlockType.STANDARD, block_type: BlockType = BlockType.STANDARD,
webhook_config: Optional[BlockWebhookConfig | BlockManualWebhookConfig] = None, webhook_config: Optional[BlockWebhookConfig | BlockManualWebhookConfig] = None,
is_sensitive_action: bool = False,
): ):
""" """
Initialize the block with the given schema. Initialize the block with the given schema.
@@ -474,8 +473,8 @@ class Block(ABC, Generic[BlockSchemaInputType, BlockSchemaOutputType]):
self.static_output = static_output self.static_output = static_output
self.block_type = block_type self.block_type = block_type
self.webhook_config = webhook_config self.webhook_config = webhook_config
self.is_sensitive_action = is_sensitive_action
self.execution_stats: NodeExecutionStats = NodeExecutionStats() self.execution_stats: NodeExecutionStats = NodeExecutionStats()
self.is_sensitive_action: bool = False
if self.webhook_config: if self.webhook_config:
if isinstance(self.webhook_config, BlockWebhookConfig): if isinstance(self.webhook_config, BlockWebhookConfig):
@@ -623,7 +622,6 @@ class Block(ABC, Generic[BlockSchemaInputType, BlockSchemaOutputType]):
input_data: BlockInput, input_data: BlockInput,
*, *,
user_id: str, user_id: str,
node_id: str,
node_exec_id: str, node_exec_id: str,
graph_exec_id: str, graph_exec_id: str,
graph_id: str, graph_id: str,
@@ -650,11 +648,11 @@ class Block(ABC, Generic[BlockSchemaInputType, BlockSchemaOutputType]):
decision = await HITLReviewHelper.handle_review_decision( decision = await HITLReviewHelper.handle_review_decision(
input_data=input_data, input_data=input_data,
user_id=user_id, user_id=user_id,
node_id=node_id,
node_exec_id=node_exec_id, node_exec_id=node_exec_id,
graph_exec_id=graph_exec_id, graph_exec_id=graph_exec_id,
graph_id=graph_id, graph_id=graph_id,
graph_version=graph_version, graph_version=graph_version,
execution_context=execution_context,
block_name=self.name, block_name=self.name,
editable=True, editable=True,
) )

View File

@@ -17,7 +17,6 @@ from backend.api.features.executions.review.model import (
PendingHumanReviewModel, PendingHumanReviewModel,
SafeJsonData, SafeJsonData,
) )
from backend.data.execution import get_graph_execution_meta
from backend.util.json import SafeJson from backend.util.json import SafeJson
logger = logging.getLogger(__name__) logger = logging.getLogger(__name__)
@@ -33,117 +32,6 @@ class ReviewResult(BaseModel):
node_exec_id: str node_exec_id: str
def get_auto_approve_key(graph_exec_id: str, node_id: str) -> str:
"""Generate the special nodeExecId key for auto-approval records."""
return f"auto_approve_{graph_exec_id}_{node_id}"
async def check_approval(
node_exec_id: str,
graph_exec_id: str,
node_id: str,
user_id: str,
) -> Optional[ReviewResult]:
"""
Check if there's an existing approval for this node execution.
Checks both:
1. Normal approval by node_exec_id (previous run of the same node execution)
2. Auto-approval by special key pattern "auto_approve_{graph_exec_id}_{node_id}"
Args:
node_exec_id: ID of the node execution
graph_exec_id: ID of the graph execution
node_id: ID of the node definition (not execution)
user_id: ID of the user (for data isolation)
Returns:
ReviewResult if approval found (either normal or auto), None otherwise
"""
auto_approve_key = get_auto_approve_key(graph_exec_id, node_id)
# Check for either normal approval or auto-approval in a single query
existing_review = await PendingHumanReview.prisma().find_first(
where={
"OR": [
{"nodeExecId": node_exec_id},
{"nodeExecId": auto_approve_key},
],
"status": ReviewStatus.APPROVED,
"userId": user_id,
},
)
if existing_review:
is_auto_approval = existing_review.nodeExecId == auto_approve_key
logger.info(
f"Found {'auto-' if is_auto_approval else ''}approval for node {node_id} "
f"(exec: {node_exec_id}) in execution {graph_exec_id}"
)
return ReviewResult(
data=existing_review.payload,
status=ReviewStatus.APPROVED,
message=(
"Auto-approved (user approved all future actions for this node)"
if is_auto_approval
else existing_review.reviewMessage or ""
),
processed=True,
node_exec_id=existing_review.nodeExecId,
)
return None
async def create_auto_approval_record(
user_id: str,
graph_exec_id: str,
graph_id: str,
graph_version: int,
node_id: str,
payload: SafeJsonData,
) -> None:
"""
Create an auto-approval record for a node in this execution.
This is stored as a PendingHumanReview with a special nodeExecId pattern
and status=APPROVED, so future executions of the same node can skip review.
Raises:
ValueError: If the graph execution doesn't belong to the user
"""
# Validate that the graph execution belongs to this user (defense in depth)
graph_exec = await get_graph_execution_meta(
user_id=user_id, execution_id=graph_exec_id
)
if not graph_exec:
raise ValueError(
f"Graph execution {graph_exec_id} not found or doesn't belong to user {user_id}"
)
auto_approve_key = get_auto_approve_key(graph_exec_id, node_id)
await PendingHumanReview.prisma().upsert(
where={"nodeExecId": auto_approve_key},
data={
"create": {
"nodeExecId": auto_approve_key,
"userId": user_id,
"graphExecId": graph_exec_id,
"graphId": graph_id,
"graphVersion": graph_version,
"payload": SafeJson(payload),
"instructions": "Auto-approval record",
"editable": False,
"status": ReviewStatus.APPROVED,
"processed": True,
"reviewedAt": datetime.now(timezone.utc),
},
"update": {}, # Already exists, no update needed
},
)
async def get_or_create_human_review( async def get_or_create_human_review(
user_id: str, user_id: str,
node_exec_id: str, node_exec_id: str,
@@ -220,29 +108,6 @@ async def get_or_create_human_review(
) )
async def get_pending_review_by_node_exec_id(
node_exec_id: str, user_id: str
) -> Optional["PendingHumanReviewModel"]:
"""
Get a pending review by its node execution ID.
Args:
node_exec_id: The node execution ID to look up
user_id: User ID for authorization (only returns if review belongs to this user)
Returns:
The pending review if found and belongs to user, None otherwise
"""
review = await PendingHumanReview.prisma().find_unique(
where={"nodeExecId": node_exec_id}
)
if not review or review.userId != user_id or review.status != ReviewStatus.WAITING:
return None
return PendingHumanReviewModel.from_db(review)
async def has_pending_reviews_for_graph_exec(graph_exec_id: str) -> bool: async def has_pending_reviews_for_graph_exec(graph_exec_id: str) -> bool:
""" """
Check if a graph execution has any pending reviews. Check if a graph execution has any pending reviews.
@@ -391,44 +256,3 @@ async def update_review_processed_status(node_exec_id: str, processed: bool) ->
await PendingHumanReview.prisma().update( await PendingHumanReview.prisma().update(
where={"nodeExecId": node_exec_id}, data={"processed": processed} where={"nodeExecId": node_exec_id}, data={"processed": processed}
) )
async def cancel_pending_reviews_for_execution(graph_exec_id: str, user_id: str) -> int:
"""
Cancel all pending reviews for a graph execution (e.g., when execution is stopped).
Marks all WAITING reviews as REJECTED with a message indicating the execution was stopped.
Args:
graph_exec_id: The graph execution ID
user_id: User ID who owns the execution (for security validation)
Returns:
Number of reviews cancelled
Raises:
ValueError: If the graph execution doesn't belong to the user
"""
# Validate user ownership before cancelling reviews
graph_exec = await get_graph_execution_meta(
user_id=user_id, execution_id=graph_exec_id
)
if not graph_exec:
raise ValueError(
f"Graph execution {graph_exec_id} not found or doesn't belong to user {user_id}"
)
result = await PendingHumanReview.prisma().update_many(
where={
"graphExecId": graph_exec_id,
"userId": user_id,
"status": ReviewStatus.WAITING,
},
data={
"status": ReviewStatus.REJECTED,
"reviewMessage": "Execution was stopped by user",
"processed": True,
"reviewedAt": datetime.now(timezone.utc),
},
)
return result

View File

@@ -46,8 +46,8 @@ async def test_get_or_create_human_review_new(
     sample_db_review.status = ReviewStatus.WAITING
     sample_db_review.processed = False
-    mock_prisma = mocker.patch("backend.data.human_review.PendingHumanReview.prisma")
-    mock_prisma.return_value.upsert = AsyncMock(return_value=sample_db_review)
+    mock_upsert = mocker.patch("backend.data.human_review.PendingHumanReview.prisma")
+    mock_upsert.return_value.upsert = AsyncMock(return_value=sample_db_review)
     result = await get_or_create_human_review(
         user_id="test-user-123",
@@ -75,8 +75,8 @@ async def test_get_or_create_human_review_approved(
     sample_db_review.processed = False
     sample_db_review.reviewMessage = "Looks good"
-    mock_prisma = mocker.patch("backend.data.human_review.PendingHumanReview.prisma")
-    mock_prisma.return_value.upsert = AsyncMock(return_value=sample_db_review)
+    mock_upsert = mocker.patch("backend.data.human_review.PendingHumanReview.prisma")
+    mock_upsert.return_value.upsert = AsyncMock(return_value=sample_db_review)
     result = await get_or_create_human_review(
         user_id="test-user-123",

View File

@@ -50,8 +50,6 @@ from backend.data.graph import (
     validate_graph_execution_permissions,
 )
 from backend.data.human_review import (
-    cancel_pending_reviews_for_execution,
-    check_approval,
     get_or_create_human_review,
     has_pending_reviews_for_graph_exec,
     update_review_processed_status,
@@ -192,8 +190,6 @@ class DatabaseManager(AppService):
     get_user_notification_preference = _(get_user_notification_preference)
     # Human In The Loop
-    cancel_pending_reviews_for_execution = _(cancel_pending_reviews_for_execution)
-    check_approval = _(check_approval)
     get_or_create_human_review = _(get_or_create_human_review)
     has_pending_reviews_for_graph_exec = _(has_pending_reviews_for_graph_exec)
     update_review_processed_status = _(update_review_processed_status)
@@ -317,8 +313,6 @@ class DatabaseManagerAsyncClient(AppServiceClient):
     set_execution_kv_data = d.set_execution_kv_data
     # Human In The Loop
-    cancel_pending_reviews_for_execution = d.cancel_pending_reviews_for_execution
-    check_approval = d.check_approval
     get_or_create_human_review = d.get_or_create_human_review
     update_review_processed_status = d.update_review_processed_status

View File

@@ -10,7 +10,6 @@ from pydantic import BaseModel, JsonValue, ValidationError
 from backend.data import execution as execution_db
 from backend.data import graph as graph_db
-from backend.data import human_review as human_review_db
 from backend.data import onboarding as onboarding_db
 from backend.data import user as user_db
 from backend.data.block import (
@@ -750,27 +749,9 @@ async def stop_graph_execution(
     if graph_exec.status in [
         ExecutionStatus.QUEUED,
         ExecutionStatus.INCOMPLETE,
-        ExecutionStatus.REVIEW,
     ]:
-        # If the graph is queued/incomplete/paused for review, terminate immediately
-        # No need to wait for executor since it's not actively running
-        # If graph is in REVIEW status, clean up pending reviews before terminating
-        if graph_exec.status == ExecutionStatus.REVIEW:
-            # Use human_review_db if Prisma connected, else database manager
-            review_db = (
-                human_review_db
-                if prisma.is_connected()
-                else get_database_manager_async_client()
-            )
-            # Mark all pending reviews as rejected/cancelled
-            cancelled_count = await review_db.cancel_pending_reviews_for_execution(
-                graph_exec_id, user_id
-            )
-            logger.info(
-                f"Cancelled {cancelled_count} pending review(s) for stopped execution {graph_exec_id}"
-            )
+        # If the graph is still on the queue, we can prevent them from being executed
+        # by setting the status to TERMINATED.
         graph_exec.status = ExecutionStatus.TERMINATED
         await asyncio.gather(

View File

@@ -670,232 +670,3 @@ async def test_add_graph_execution_with_nodes_to_skip(mocker: MockerFixture):
# Verify nodes_to_skip was passed to to_graph_execution_entry # Verify nodes_to_skip was passed to to_graph_execution_entry
assert "nodes_to_skip" in captured_kwargs assert "nodes_to_skip" in captured_kwargs
assert captured_kwargs["nodes_to_skip"] == nodes_to_skip assert captured_kwargs["nodes_to_skip"] == nodes_to_skip
@pytest.mark.asyncio
async def test_stop_graph_execution_in_review_status_cancels_pending_reviews(
mocker: MockerFixture,
):
"""Test that stopping an execution in REVIEW status cancels pending reviews."""
from backend.data.execution import ExecutionStatus, GraphExecutionMeta
from backend.executor.utils import stop_graph_execution
user_id = "test-user"
graph_exec_id = "test-exec-123"
# Mock graph execution in REVIEW status
mock_graph_exec = mocker.MagicMock(spec=GraphExecutionMeta)
mock_graph_exec.id = graph_exec_id
mock_graph_exec.status = ExecutionStatus.REVIEW
# Mock dependencies
mock_get_queue = mocker.patch("backend.executor.utils.get_async_execution_queue")
mock_queue_client = mocker.AsyncMock()
mock_get_queue.return_value = mock_queue_client
mock_prisma = mocker.patch("backend.executor.utils.prisma")
mock_prisma.is_connected.return_value = True
mock_human_review_db = mocker.patch("backend.executor.utils.human_review_db")
mock_human_review_db.cancel_pending_reviews_for_execution = mocker.AsyncMock(
return_value=2 # 2 reviews cancelled
)
mock_execution_db = mocker.patch("backend.executor.utils.execution_db")
mock_execution_db.get_graph_execution_meta = mocker.AsyncMock(
return_value=mock_graph_exec
)
mock_execution_db.update_graph_execution_stats = mocker.AsyncMock()
mock_get_event_bus = mocker.patch(
"backend.executor.utils.get_async_execution_event_bus"
)
mock_event_bus = mocker.MagicMock()
mock_event_bus.publish = mocker.AsyncMock()
mock_get_event_bus.return_value = mock_event_bus
mock_get_child_executions = mocker.patch(
"backend.executor.utils._get_child_executions"
)
mock_get_child_executions.return_value = [] # No children
# Call stop_graph_execution with timeout to allow status check
await stop_graph_execution(
user_id=user_id,
graph_exec_id=graph_exec_id,
wait_timeout=1.0, # Wait to allow status check
cascade=True,
)
# Verify pending reviews were cancelled
mock_human_review_db.cancel_pending_reviews_for_execution.assert_called_once_with(
graph_exec_id, user_id
)
# Verify execution status was updated to TERMINATED
mock_execution_db.update_graph_execution_stats.assert_called_once()
call_kwargs = mock_execution_db.update_graph_execution_stats.call_args[1]
assert call_kwargs["graph_exec_id"] == graph_exec_id
assert call_kwargs["status"] == ExecutionStatus.TERMINATED
@pytest.mark.asyncio
async def test_stop_graph_execution_with_database_manager_when_prisma_disconnected(
mocker: MockerFixture,
):
"""Test that stop uses database manager when Prisma is not connected."""
from backend.data.execution import ExecutionStatus, GraphExecutionMeta
from backend.executor.utils import stop_graph_execution
user_id = "test-user"
graph_exec_id = "test-exec-456"
# Mock graph execution in REVIEW status
mock_graph_exec = mocker.MagicMock(spec=GraphExecutionMeta)
mock_graph_exec.id = graph_exec_id
mock_graph_exec.status = ExecutionStatus.REVIEW
# Mock dependencies
mock_get_queue = mocker.patch("backend.executor.utils.get_async_execution_queue")
mock_queue_client = mocker.AsyncMock()
mock_get_queue.return_value = mock_queue_client
# Prisma is NOT connected
mock_prisma = mocker.patch("backend.executor.utils.prisma")
mock_prisma.is_connected.return_value = False
# Mock database manager client
mock_get_db_manager = mocker.patch(
"backend.executor.utils.get_database_manager_async_client"
)
mock_db_manager = mocker.AsyncMock()
mock_db_manager.get_graph_execution_meta = mocker.AsyncMock(
return_value=mock_graph_exec
)
mock_db_manager.cancel_pending_reviews_for_execution = mocker.AsyncMock(
return_value=3 # 3 reviews cancelled
)
mock_db_manager.update_graph_execution_stats = mocker.AsyncMock()
mock_get_db_manager.return_value = mock_db_manager
mock_get_event_bus = mocker.patch(
"backend.executor.utils.get_async_execution_event_bus"
)
mock_event_bus = mocker.MagicMock()
mock_event_bus.publish = mocker.AsyncMock()
mock_get_event_bus.return_value = mock_event_bus
mock_get_child_executions = mocker.patch(
"backend.executor.utils._get_child_executions"
)
mock_get_child_executions.return_value = [] # No children
# Call stop_graph_execution with timeout
await stop_graph_execution(
user_id=user_id,
graph_exec_id=graph_exec_id,
wait_timeout=1.0,
cascade=True,
)
# Verify database manager was used for cancel_pending_reviews
mock_db_manager.cancel_pending_reviews_for_execution.assert_called_once_with(
graph_exec_id, user_id
)
# Verify execution status was updated via database manager
mock_db_manager.update_graph_execution_stats.assert_called_once()
@pytest.mark.asyncio
async def test_stop_graph_execution_cascades_to_child_with_reviews(
mocker: MockerFixture,
):
"""Test that stopping parent execution cascades to children and cancels their reviews."""
from backend.data.execution import ExecutionStatus, GraphExecutionMeta
from backend.executor.utils import stop_graph_execution
user_id = "test-user"
parent_exec_id = "parent-exec"
child_exec_id = "child-exec"
# Mock parent execution in RUNNING status
mock_parent_exec = mocker.MagicMock(spec=GraphExecutionMeta)
mock_parent_exec.id = parent_exec_id
mock_parent_exec.status = ExecutionStatus.RUNNING
# Mock child execution in REVIEW status
mock_child_exec = mocker.MagicMock(spec=GraphExecutionMeta)
mock_child_exec.id = child_exec_id
mock_child_exec.status = ExecutionStatus.REVIEW
# Mock dependencies
mock_get_queue = mocker.patch("backend.executor.utils.get_async_execution_queue")
mock_queue_client = mocker.AsyncMock()
mock_get_queue.return_value = mock_queue_client
mock_prisma = mocker.patch("backend.executor.utils.prisma")
mock_prisma.is_connected.return_value = True
mock_human_review_db = mocker.patch("backend.executor.utils.human_review_db")
mock_human_review_db.cancel_pending_reviews_for_execution = mocker.AsyncMock(
return_value=1 # 1 child review cancelled
)
# Mock execution_db to return different status based on which execution is queried
mock_execution_db = mocker.patch("backend.executor.utils.execution_db")
# Track call count to simulate status transition
call_count = {"count": 0}
async def get_exec_meta_side_effect(execution_id, user_id):
call_count["count"] += 1
if execution_id == parent_exec_id:
# After a few calls (child processing happens), transition parent to TERMINATED
# This simulates the executor service processing the stop request
if call_count["count"] > 3:
mock_parent_exec.status = ExecutionStatus.TERMINATED
return mock_parent_exec
elif execution_id == child_exec_id:
return mock_child_exec
return None
mock_execution_db.get_graph_execution_meta = mocker.AsyncMock(
side_effect=get_exec_meta_side_effect
)
mock_execution_db.update_graph_execution_stats = mocker.AsyncMock()
mock_get_event_bus = mocker.patch(
"backend.executor.utils.get_async_execution_event_bus"
)
mock_event_bus = mocker.MagicMock()
mock_event_bus.publish = mocker.AsyncMock()
mock_get_event_bus.return_value = mock_event_bus
# Mock _get_child_executions to return the child
mock_get_child_executions = mocker.patch(
"backend.executor.utils._get_child_executions"
)
def get_children_side_effect(parent_id):
if parent_id == parent_exec_id:
return [mock_child_exec]
return []
mock_get_child_executions.side_effect = get_children_side_effect
# Call stop_graph_execution on parent with cascade=True
await stop_graph_execution(
user_id=user_id,
graph_exec_id=parent_exec_id,
wait_timeout=1.0,
cascade=True,
)
# Verify child reviews were cancelled
mock_human_review_db.cancel_pending_reviews_for_execution.assert_called_once_with(
child_exec_id, user_id
)
# Verify both parent and child status updates
assert mock_execution_db.update_graph_execution_stats.call_count >= 1

View File

@@ -1,12 +1,37 @@
 -- CreateExtension
 -- Supabase: pgvector must be enabled via Dashboard → Database → Extensions first
--- Creates extension in current schema (determined by search_path from DATABASE_URL ?schema= param)
+-- Ensures vector extension is in the current schema (from DATABASE_URL ?schema= param)
+-- If it exists in a different schema (e.g., public), we drop and recreate it in the current schema
 -- This ensures vector type is in the same schema as tables, making ::vector work without explicit qualification
 DO $$
+DECLARE
+    current_schema_name text;
+    vector_schema text;
 BEGIN
-    CREATE EXTENSION IF NOT EXISTS "vector";
-EXCEPTION WHEN OTHERS THEN
-    RAISE NOTICE 'vector extension not available or already exists, skipping';
+    -- Get the current schema from search_path
+    SELECT current_schema() INTO current_schema_name;
+
+    -- Check if vector extension exists and which schema it's in
+    SELECT n.nspname INTO vector_schema
+    FROM pg_extension e
+    JOIN pg_namespace n ON e.extnamespace = n.oid
+    WHERE e.extname = 'vector';
+
+    -- Handle removal if in wrong schema
+    IF vector_schema IS NOT NULL AND vector_schema != current_schema_name THEN
+        BEGIN
+            -- Vector exists in a different schema, drop it first
+            RAISE WARNING 'pgvector found in schema "%" but need it in "%". Dropping and reinstalling...',
+                vector_schema, current_schema_name;
+            EXECUTE 'DROP EXTENSION IF EXISTS vector CASCADE';
+        EXCEPTION WHEN OTHERS THEN
+            RAISE EXCEPTION 'Failed to drop pgvector from schema "%": %. You may need to drop it manually.',
+                vector_schema, SQLERRM;
+        END;
+    END IF;
+
+    -- Create extension in current schema (let it fail naturally if not available)
+    EXECUTE format('CREATE EXTENSION IF NOT EXISTS vector SCHEMA %I', current_schema_name);
 END $$;
-- CreateEnum -- CreateEnum

View File

@@ -1,71 +0,0 @@
-- Acknowledge Supabase-managed extensions to prevent drift warnings
-- These extensions are pre-installed by Supabase in specific schemas
-- This migration ensures they exist where available (Supabase) or skips gracefully (CI)
-- Create schemas (safe in both CI and Supabase)
CREATE SCHEMA IF NOT EXISTS "extensions";
-- Extensions that exist in both CI and Supabase
DO $$
BEGIN
CREATE EXTENSION IF NOT EXISTS "pgcrypto" WITH SCHEMA "extensions";
EXCEPTION WHEN OTHERS THEN
RAISE NOTICE 'pgcrypto extension not available, skipping';
END $$;
DO $$
BEGIN
CREATE EXTENSION IF NOT EXISTS "uuid-ossp" WITH SCHEMA "extensions";
EXCEPTION WHEN OTHERS THEN
RAISE NOTICE 'uuid-ossp extension not available, skipping';
END $$;
-- Supabase-specific extensions (skip gracefully in CI)
DO $$
BEGIN
CREATE EXTENSION IF NOT EXISTS "pg_stat_statements" WITH SCHEMA "extensions";
EXCEPTION WHEN OTHERS THEN
RAISE NOTICE 'pg_stat_statements extension not available, skipping';
END $$;
DO $$
BEGIN
CREATE EXTENSION IF NOT EXISTS "pg_net" WITH SCHEMA "extensions";
EXCEPTION WHEN OTHERS THEN
RAISE NOTICE 'pg_net extension not available, skipping';
END $$;
DO $$
BEGIN
CREATE EXTENSION IF NOT EXISTS "pgjwt" WITH SCHEMA "extensions";
EXCEPTION WHEN OTHERS THEN
RAISE NOTICE 'pgjwt extension not available, skipping';
END $$;
DO $$
BEGIN
CREATE SCHEMA IF NOT EXISTS "graphql";
CREATE EXTENSION IF NOT EXISTS "pg_graphql" WITH SCHEMA "graphql";
EXCEPTION WHEN OTHERS THEN
RAISE NOTICE 'pg_graphql extension not available, skipping';
END $$;
DO $$
BEGIN
CREATE SCHEMA IF NOT EXISTS "pgsodium";
CREATE EXTENSION IF NOT EXISTS "pgsodium" WITH SCHEMA "pgsodium";
EXCEPTION WHEN OTHERS THEN
RAISE NOTICE 'pgsodium extension not available, skipping';
END $$;
DO $$
BEGIN
CREATE SCHEMA IF NOT EXISTS "vault";
CREATE EXTENSION IF NOT EXISTS "supabase_vault" WITH SCHEMA "vault";
EXCEPTION WHEN OTHERS THEN
RAISE NOTICE 'supabase_vault extension not available, skipping';
END $$;
-- Return to platform
CREATE SCHEMA IF NOT EXISTS "platform";

View File

@@ -1,7 +0,0 @@
-- Remove NodeExecution foreign key from PendingHumanReview
-- The nodeExecId column remains as the primary key, but we remove the FK constraint
-- to AgentNodeExecution since PendingHumanReview records can persist after node
-- execution records are deleted.
-- Drop foreign key constraint that linked PendingHumanReview.nodeExecId to AgentNodeExecution.id
ALTER TABLE "PendingHumanReview" DROP CONSTRAINT IF EXISTS "PendingHumanReview_nodeExecId_fkey";

View File

@@ -517,6 +517,8 @@ model AgentNodeExecution {
   stats Json?
+  PendingHumanReview PendingHumanReview?
   @@index([agentGraphExecutionId, agentNodeId, executionStatus])
   @@index([agentNodeId, executionStatus])
   @@index([addedTime, queuedTime])
@@ -565,7 +567,6 @@ enum ReviewStatus {
 }
 // Pending human reviews for Human-in-the-loop blocks
-// Also stores auto-approval records with special nodeExecId patterns (e.g., "auto_approve_{graph_exec_id}_{node_id}")
 model PendingHumanReview {
   nodeExecId String @id
   userId String
@@ -584,6 +585,7 @@ model PendingHumanReview {
   reviewedAt DateTime?
   User User @relation(fields: [userId], references: [id], onDelete: Cascade)
+  NodeExecution AgentNodeExecution @relation(fields: [nodeExecId], references: [id], onDelete: Cascade)
   GraphExecution AgentGraphExecution @relation(fields: [graphExecId], references: [id], onDelete: Cascade)
   @@unique([nodeExecId]) // One pending review per node execution

View File

@@ -175,6 +175,8 @@ While server components and actions are cool and cutting-edge, they introduce a
- Prefer [React Query](https://tanstack.com/query/latest/docs/framework/react/overview) for server state, colocated near consumers (see [state colocation](https://kentcdodds.com/blog/state-colocation-will-make-your-react-app-faster))
- Co-locate UI state inside components/hooks; keep global state minimal
- Avoid `useMemo` and `useCallback` unless you have a measured performance issue
- Do not abuse `useEffect`; prefer state colocation and derive values directly when possible
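
For illustration only (this sketch is not part of the diff, and the component below is hypothetical), deriving a value during render instead of mirroring it into state with `useEffect`:

```tsx
interface Props {
  items: { id: string; price: number }[];
}

// ❌ Avoid: extra state kept in sync via useEffect
//   const [total, setTotal] = useState(0);
//   useEffect(() => setTotal(items.reduce((sum, i) => sum + i.price, 0)), [items]);

// ✅ Prefer: derive the value directly; no effect, no extra state, no memoization needed
export function CartTotal({ items }: Props) {
  const total = items.reduce((sum, item) => sum + item.price, 0);
  return <p>Total: {total.toFixed(2)}</p>;
}
```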
### Styling and components
@@ -549,9 +551,48 @@ Files:
Types:
- Prefer `interface` for object shapes
- Component props should be `interface Props { ... }` (not exported)
- Only use specific exported names (e.g., `export interface MyComponentProps`) when the interface needs to be used outside the component
- Keep type definitions inline with the component - do not create separate `types.ts` files unless types are shared across multiple files
- Use precise types; avoid `any` and unsafe casts - Use precise types; avoid `any` and unsafe casts
**Props naming examples:**
```tsx
// ✅ Good - internal props, not exported
interface Props {
title: string;
onClose: () => void;
}
export function Modal({ title, onClose }: Props) {
// ...
}
// ✅ Good - exported when needed externally
export interface ModalProps {
title: string;
onClose: () => void;
}
export function Modal({ title, onClose }: ModalProps) {
// ...
}
// ❌ Bad - unnecessarily specific name for internal use
interface ModalComponentProps {
title: string;
onClose: () => void;
}
// ❌ Bad - separate types.ts file for single component
// types.ts
export interface ModalProps { ... }
// Modal.tsx
import type { ModalProps } from './types';
```
Parameters:
- If more than one parameter is needed, pass a single `Args` object for clarity - If more than one parameter is needed, pass a single `Args` object for clarity
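
As a hedged illustration of this convention (the function and fields below are made up, not taken from the codebase):

```tsx
interface CreateSessionArgs {
  userId: string;
  title?: string;
  signal?: AbortSignal;
}

// A single Args object keeps call sites self-documenting and lets new options be
// added later without breaking existing callers or reordering positional parameters.
async function createSession(args: CreateSessionArgs): Promise<string> {
  const { userId, title = "Untitled", signal } = args;
  const response = await fetch("/api/sessions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ userId, title }),
    signal,
  });
  if (!response.ok) throw new Error(`Failed to create session: ${response.status}`);
  const { id } = (await response.json()) as { id: string };
  return id;
}

// Usage: await createSession({ userId: "user-123", title: "My chat" });
```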

View File

@@ -16,6 +16,12 @@ export default defineConfig({
client: "react-query", client: "react-query",
httpClient: "fetch", httpClient: "fetch",
indexFiles: false, indexFiles: false,
mock: {
type: "msw",
baseUrl: "http://localhost:3000/api/proxy",
generateEachHttpStatus: true,
delay: 0,
},
override: { override: {
mutator: { mutator: {
path: "./mutators/custom-mutator.ts", path: "./mutators/custom-mutator.ts",

View File

@@ -15,6 +15,8 @@
"types": "tsc --noEmit", "types": "tsc --noEmit",
"test": "NEXT_PUBLIC_PW_TEST=true next build --turbo && playwright test", "test": "NEXT_PUBLIC_PW_TEST=true next build --turbo && playwright test",
"test-ui": "NEXT_PUBLIC_PW_TEST=true next build --turbo && playwright test --ui", "test-ui": "NEXT_PUBLIC_PW_TEST=true next build --turbo && playwright test --ui",
"test:unit": "vitest run",
"test:unit:watch": "vitest",
"test:no-build": "playwright test", "test:no-build": "playwright test",
"gentests": "playwright codegen http://localhost:3000", "gentests": "playwright codegen http://localhost:3000",
"storybook": "storybook dev -p 6006", "storybook": "storybook dev -p 6006",
@@ -118,6 +120,7 @@
   },
   "devDependencies": {
     "@chromatic-com/storybook": "4.1.2",
+    "happy-dom": "20.3.4",
     "@opentelemetry/instrumentation": "0.209.0",
     "@playwright/test": "1.56.1",
     "@storybook/addon-a11y": "9.1.5",
@@ -127,6 +130,8 @@
"@storybook/nextjs": "9.1.5", "@storybook/nextjs": "9.1.5",
"@tanstack/eslint-plugin-query": "5.91.2", "@tanstack/eslint-plugin-query": "5.91.2",
"@tanstack/react-query-devtools": "5.90.2", "@tanstack/react-query-devtools": "5.90.2",
"@testing-library/dom": "10.4.1",
"@testing-library/react": "16.3.2",
"@types/canvas-confetti": "1.9.0", "@types/canvas-confetti": "1.9.0",
"@types/lodash": "4.17.20", "@types/lodash": "4.17.20",
"@types/negotiator": "0.6.4", "@types/negotiator": "0.6.4",
@@ -135,6 +140,7 @@
"@types/react-dom": "18.3.5", "@types/react-dom": "18.3.5",
"@types/react-modal": "3.16.3", "@types/react-modal": "3.16.3",
"@types/react-window": "1.8.8", "@types/react-window": "1.8.8",
"@vitejs/plugin-react": "5.1.2",
"axe-playwright": "2.2.2", "axe-playwright": "2.2.2",
"chromatic": "13.3.3", "chromatic": "13.3.3",
"concurrently": "9.2.1", "concurrently": "9.2.1",
@@ -153,7 +159,9 @@
"require-in-the-middle": "8.0.1", "require-in-the-middle": "8.0.1",
"storybook": "9.1.5", "storybook": "9.1.5",
"tailwindcss": "3.4.17", "tailwindcss": "3.4.17",
"typescript": "5.9.3" "typescript": "5.9.3",
"vite-tsconfig-paths": "6.0.4",
"vitest": "4.0.17"
}, },
"msw": { "msw": {
"workerDirectory": [ "workerDirectory": [

File diff suppressed because it is too large

View File

@@ -0,0 +1,58 @@
"use client";
import { LoadingSpinner } from "@/components/atoms/LoadingSpinner/LoadingSpinner";
import { Text } from "@/components/atoms/Text/Text";
import { useToast } from "@/components/molecules/Toast/use-toast";
import { useSupabase } from "@/lib/supabase/hooks/useSupabase";
import { useRouter } from "next/navigation";
import { useEffect, useRef } from "react";
const LOGOUT_REDIRECT_DELAY_MS = 400;
function wait(ms: number): Promise<void> {
return new Promise(function resolveAfterDelay(resolve) {
setTimeout(resolve, ms);
});
}
export default function LogoutPage() {
const { logOut } = useSupabase();
const { toast } = useToast();
const router = useRouter();
const hasStartedRef = useRef(false);
useEffect(
function handleLogoutEffect() {
if (hasStartedRef.current) return;
hasStartedRef.current = true;
async function runLogout() {
try {
await logOut();
} catch {
toast({
title: "Failed to log out. Redirecting to login.",
variant: "destructive",
});
} finally {
await wait(LOGOUT_REDIRECT_DELAY_MS);
router.replace("/login");
}
}
void runLogout();
},
[logOut, router, toast],
);
return (
<div className="flex min-h-screen items-center justify-center px-4">
<div className="flex flex-col items-center justify-center gap-4 py-8">
<LoadingSpinner size="large" />
<Text variant="body" className="text-center">
Logging you out...
</Text>
</div>
</div>
);
}

View File

@@ -9,7 +9,7 @@ export async function GET(request: Request) {
   const { searchParams, origin } = new URL(request.url);
   const code = searchParams.get("code");
-  let next = "/marketplace";
+  let next = "/";
   if (code) {
     const supabase = await getServerSupabase();

View File

@@ -86,6 +86,7 @@ export function FloatingSafeModeToggle({
   const {
     currentHITLSafeMode,
     showHITLToggle,
+    isHITLStateUndetermined,
     handleHITLToggle,
     currentSensitiveActionSafeMode,
     showSensitiveActionToggle,
@@ -98,9 +99,16 @@ export function FloatingSafeModeToggle({
     return null;
   }
+  const showHITL = showHITLToggle && !isHITLStateUndetermined;
+  const showSensitive = showSensitiveActionToggle;
+
+  if (!showHITL && !showSensitive) {
+    return null;
+  }
+
   return (
     <div className={cn("fixed z-50 flex flex-col gap-2", className)}>
-      {showHITLToggle && (
+      {showHITL && (
         <SafeModeButton
           isEnabled={currentHITLSafeMode}
           label="Human in the loop block approval"
@@ -111,7 +119,7 @@ export function FloatingSafeModeToggle({
           fullWidth={fullWidth}
         />
       )}
-      {showSensitiveActionToggle && (
+      {showSensitive && (
         <SafeModeButton
           isEnabled={currentSensitiveActionSafeMode}
           label="Sensitive actions blocks approval"

View File

@@ -1,134 +0,0 @@
"use client";
import { Button } from "@/components/atoms/Button/Button";
import { Text } from "@/components/atoms/Text/Text";
import { cn } from "@/lib/utils";
import { List } from "@phosphor-icons/react";
import React, { useState } from "react";
import { ChatContainer } from "./components/ChatContainer/ChatContainer";
import { ChatErrorState } from "./components/ChatErrorState/ChatErrorState";
import { ChatLoadingState } from "./components/ChatLoadingState/ChatLoadingState";
import { SessionsDrawer } from "./components/SessionsDrawer/SessionsDrawer";
import { useChat } from "./useChat";
export interface ChatProps {
className?: string;
headerTitle?: React.ReactNode;
showHeader?: boolean;
showSessionInfo?: boolean;
showNewChatButton?: boolean;
onNewChat?: () => void;
headerActions?: React.ReactNode;
}
export function Chat({
className,
headerTitle = "AutoGPT Copilot",
showHeader = true,
showSessionInfo = true,
showNewChatButton = true,
onNewChat,
headerActions,
}: ChatProps) {
const {
messages,
isLoading,
isCreating,
error,
sessionId,
createSession,
clearSession,
loadSession,
} = useChat();
const [isSessionsDrawerOpen, setIsSessionsDrawerOpen] = useState(false);
const handleNewChat = () => {
clearSession();
onNewChat?.();
};
const handleSelectSession = async (sessionId: string) => {
try {
await loadSession(sessionId);
} catch (err) {
console.error("Failed to load session:", err);
}
};
return (
<div className={cn("flex h-full flex-col", className)}>
{/* Header */}
{showHeader && (
<header className="shrink-0 border-t border-zinc-200 bg-white p-3">
<div className="flex items-center justify-between">
<div className="flex items-center gap-3">
<button
aria-label="View sessions"
onClick={() => setIsSessionsDrawerOpen(true)}
className="flex size-8 items-center justify-center rounded hover:bg-zinc-100"
>
<List width="1.25rem" height="1.25rem" />
</button>
{typeof headerTitle === "string" ? (
<Text variant="h2" className="text-lg font-semibold">
{headerTitle}
</Text>
) : (
headerTitle
)}
</div>
<div className="flex items-center gap-3">
{showSessionInfo && sessionId && (
<>
{showNewChatButton && (
<Button
variant="outline"
size="small"
onClick={handleNewChat}
>
New Chat
</Button>
)}
</>
)}
{headerActions}
</div>
</div>
</header>
)}
{/* Main Content */}
<main className="flex min-h-0 flex-1 flex-col overflow-hidden">
{/* Loading State - show when explicitly loading/creating OR when we don't have a session yet and no error */}
{(isLoading || isCreating || (!sessionId && !error)) && (
<ChatLoadingState
message={isCreating ? "Creating session..." : "Loading..."}
/>
)}
{/* Error State */}
{error && !isLoading && (
<ChatErrorState error={error} onRetry={createSession} />
)}
{/* Session Content */}
{sessionId && !isLoading && !error && (
<ChatContainer
sessionId={sessionId}
initialMessages={messages}
className="flex-1"
/>
)}
</main>
{/* Sessions Drawer */}
<SessionsDrawer
isOpen={isSessionsDrawerOpen}
onClose={() => setIsSessionsDrawerOpen(false)}
onSelectSession={handleSelectSession}
currentSessionId={sessionId}
/>
</div>
);
}

View File

@@ -1,88 +0,0 @@
import type { SessionDetailResponse } from "@/app/api/__generated__/models/sessionDetailResponse";
import { cn } from "@/lib/utils";
import { useCallback } from "react";
import { usePageContext } from "../../usePageContext";
import { ChatInput } from "../ChatInput/ChatInput";
import { MessageList } from "../MessageList/MessageList";
import { QuickActionsWelcome } from "../QuickActionsWelcome/QuickActionsWelcome";
import { useChatContainer } from "./useChatContainer";
export interface ChatContainerProps {
sessionId: string | null;
initialMessages: SessionDetailResponse["messages"];
className?: string;
}
export function ChatContainer({
sessionId,
initialMessages,
className,
}: ChatContainerProps) {
const { messages, streamingChunks, isStreaming, sendMessage } =
useChatContainer({
sessionId,
initialMessages,
});
const { capturePageContext } = usePageContext();
// Wrap sendMessage to automatically capture page context
const sendMessageWithContext = useCallback(
async (content: string, isUserMessage: boolean = true) => {
const context = capturePageContext();
await sendMessage(content, isUserMessage, context);
},
[sendMessage, capturePageContext],
);
const quickActions = [
"Find agents for social media management",
"Show me agents for content creation",
"Help me automate my business",
"What can you help me with?",
];
return (
<div
className={cn("flex h-full min-h-0 flex-col", className)}
style={{
backgroundColor: "#ffffff",
backgroundImage:
"radial-gradient(#e5e5e5 0.5px, transparent 0.5px), radial-gradient(#e5e5e5 0.5px, #ffffff 0.5px)",
backgroundSize: "20px 20px",
backgroundPosition: "0 0, 10px 10px",
}}
>
{/* Messages or Welcome Screen */}
<div className="flex min-h-0 flex-1 flex-col overflow-hidden pb-24">
{messages.length === 0 ? (
<QuickActionsWelcome
title="Welcome to AutoGPT Copilot"
description="Start a conversation to discover and run AI agents."
actions={quickActions}
onActionClick={sendMessageWithContext}
disabled={isStreaming || !sessionId}
/>
) : (
<MessageList
messages={messages}
streamingChunks={streamingChunks}
isStreaming={isStreaming}
onSendMessage={sendMessageWithContext}
className="flex-1"
/>
)}
</div>
{/* Input - Always visible */}
<div className="fixed bottom-0 left-0 right-0 z-50 border-t border-zinc-200 bg-white p-4">
<ChatInput
onSend={sendMessageWithContext}
disabled={isStreaming || !sessionId}
placeholder={
sessionId ? "Type your message..." : "Creating session..."
}
/>
</div>
</div>
);
}

View File

@@ -1,64 +0,0 @@
import { Input } from "@/components/atoms/Input/Input";
import { cn } from "@/lib/utils";
import { ArrowUpIcon } from "@phosphor-icons/react";
import { useChatInput } from "./useChatInput";
export interface ChatInputProps {
onSend: (message: string) => void;
disabled?: boolean;
placeholder?: string;
className?: string;
}
export function ChatInput({
onSend,
disabled = false,
placeholder = "Type your message...",
className,
}: ChatInputProps) {
const inputId = "chat-input";
const { value, setValue, handleKeyDown, handleSend } = useChatInput({
onSend,
disabled,
maxRows: 5,
inputId,
});
return (
<div className={cn("relative flex-1", className)}>
<Input
id={inputId}
label="Chat message input"
hideLabel
type="textarea"
value={value}
onChange={(e) => setValue(e.target.value)}
onKeyDown={handleKeyDown}
placeholder={placeholder}
disabled={disabled}
rows={1}
wrapperClassName="mb-0 relative"
className="pr-12"
/>
<span id="chat-input-hint" className="sr-only">
Press Enter to send, Shift+Enter for new line
</span>
<button
onClick={handleSend}
disabled={disabled || !value.trim()}
className={cn(
"absolute right-3 top-1/2 flex h-8 w-8 -translate-y-1/2 items-center justify-center rounded-full",
"border border-zinc-800 bg-zinc-800 text-white",
"hover:border-zinc-900 hover:bg-zinc-900",
"disabled:border-zinc-200 disabled:bg-zinc-200 disabled:text-white disabled:opacity-50",
"transition-colors focus-visible:outline-none focus-visible:ring-1 focus-visible:ring-neutral-950",
"disabled:pointer-events-none",
)}
aria-label="Send message"
>
<ArrowUpIcon className="h-3 w-3" weight="bold" />
</button>
</div>
);
}

View File

@@ -1,60 +0,0 @@
import { KeyboardEvent, useCallback, useEffect, useState } from "react";
interface UseChatInputArgs {
onSend: (message: string) => void;
disabled?: boolean;
maxRows?: number;
inputId?: string;
}
export function useChatInput({
onSend,
disabled = false,
maxRows = 5,
inputId = "chat-input",
}: UseChatInputArgs) {
const [value, setValue] = useState("");
useEffect(() => {
const textarea = document.getElementById(inputId) as HTMLTextAreaElement;
if (!textarea) return;
textarea.style.height = "auto";
const lineHeight = parseInt(
window.getComputedStyle(textarea).lineHeight,
10,
);
const maxHeight = lineHeight * maxRows;
const newHeight = Math.min(textarea.scrollHeight, maxHeight);
textarea.style.height = `${newHeight}px`;
textarea.style.overflowY =
textarea.scrollHeight > maxHeight ? "auto" : "hidden";
}, [value, maxRows, inputId]);
const handleSend = useCallback(() => {
if (disabled || !value.trim()) return;
onSend(value.trim());
setValue("");
const textarea = document.getElementById(inputId) as HTMLTextAreaElement;
if (textarea) {
textarea.style.height = "auto";
}
}, [value, onSend, disabled, inputId]);
const handleKeyDown = useCallback(
(event: KeyboardEvent<HTMLInputElement | HTMLTextAreaElement>) => {
if (event.key === "Enter" && !event.shiftKey) {
event.preventDefault();
handleSend();
}
// Shift+Enter allows default behavior (new line) - no need to handle explicitly
},
[handleSend],
);
return {
value,
setValue,
handleKeyDown,
handleSend,
};
}

View File

@@ -1,121 +0,0 @@
"use client";
import { cn } from "@/lib/utils";
import { ChatMessage } from "../ChatMessage/ChatMessage";
import type { ChatMessageData } from "../ChatMessage/useChatMessage";
import { StreamingMessage } from "../StreamingMessage/StreamingMessage";
import { ThinkingMessage } from "../ThinkingMessage/ThinkingMessage";
import { useMessageList } from "./useMessageList";
export interface MessageListProps {
messages: ChatMessageData[];
streamingChunks?: string[];
isStreaming?: boolean;
className?: string;
onStreamComplete?: () => void;
onSendMessage?: (content: string) => void;
}
export function MessageList({
messages,
streamingChunks = [],
isStreaming = false,
className,
onStreamComplete,
onSendMessage,
}: MessageListProps) {
const { messagesEndRef, messagesContainerRef } = useMessageList({
messageCount: messages.length,
isStreaming,
});
return (
<div
ref={messagesContainerRef}
className={cn(
"flex-1 overflow-y-auto",
"scrollbar-thin scrollbar-track-transparent scrollbar-thumb-zinc-300",
className,
)}
>
<div className="mx-auto flex max-w-3xl flex-col py-4">
{/* Render all persisted messages */}
{messages.map((message, index) => {
// Check if current message is an agent_output tool_response
// and if previous message is an assistant message
let agentOutput: ChatMessageData | undefined;
if (message.type === "tool_response" && message.result) {
let parsedResult: Record<string, unknown> | null = null;
try {
parsedResult =
typeof message.result === "string"
? JSON.parse(message.result)
: (message.result as Record<string, unknown>);
} catch {
parsedResult = null;
}
if (parsedResult?.type === "agent_output") {
const prevMessage = messages[index - 1];
if (
prevMessage &&
prevMessage.type === "message" &&
prevMessage.role === "assistant"
) {
// This agent output will be rendered inside the previous assistant message
// Skip rendering this message separately
return null;
}
}
}
// Check if next message is an agent_output tool_response to include in current assistant message
if (message.type === "message" && message.role === "assistant") {
const nextMessage = messages[index + 1];
if (
nextMessage &&
nextMessage.type === "tool_response" &&
nextMessage.result
) {
let parsedResult: Record<string, unknown> | null = null;
try {
parsedResult =
typeof nextMessage.result === "string"
? JSON.parse(nextMessage.result)
: (nextMessage.result as Record<string, unknown>);
} catch {
parsedResult = null;
}
if (parsedResult?.type === "agent_output") {
agentOutput = nextMessage;
}
}
}
return (
<ChatMessage
key={index}
message={message}
onSendMessage={onSendMessage}
agentOutput={agentOutput}
/>
);
})}
{/* Render thinking message when streaming but no chunks yet */}
{isStreaming && streamingChunks.length === 0 && <ThinkingMessage />}
{/* Render streaming message if active */}
{isStreaming && streamingChunks.length > 0 && (
<StreamingMessage
chunks={streamingChunks}
onComplete={onStreamComplete}
/>
)}
{/* Invisible div to scroll to */}
<div ref={messagesEndRef} />
</div>
</div>
);
}
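
A minimal illustration (editorial note, not part of the diff) of the pairing rule implemented above: a tool_response whose parsed result has type "agent_output" is folded into the assistant message that precedes it instead of being rendered on its own.

// Hypothetical message sequence, for illustration only.
const example = [
  { type: "message", role: "user", content: "Run my agent" },
  { type: "message", role: "assistant", content: "Done! Here is the output:" },
  {
    type: "tool_response",
    result: JSON.stringify({ type: "agent_output", execution: { outputs: {} } }),
  },
];
// MessageList renders two <ChatMessage> items: the assistant entry receives the
// third entry via its `agentOutput` prop, and the standalone tool_response returns null.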

View File

@@ -1,24 +0,0 @@
import { Text } from "@/components/atoms/Text/Text";
import { cn } from "@/lib/utils";
import { WrenchIcon } from "@phosphor-icons/react";
import { getToolActionPhrase } from "../../helpers";
export interface ToolCallMessageProps {
toolName: string;
className?: string;
}
export function ToolCallMessage({ toolName, className }: ToolCallMessageProps) {
return (
<div className={cn("flex items-center justify-center gap-2", className)}>
<WrenchIcon
size={14}
weight="bold"
className="flex-shrink-0 text-neutral-500"
/>
<Text variant="small" className="text-neutral-500">
{getToolActionPhrase(toolName)}...
</Text>
</div>
);
}

View File

@@ -1,260 +0,0 @@
import { Text } from "@/components/atoms/Text/Text";
import "@/components/contextual/OutputRenderers";
import {
globalRegistry,
OutputItem,
} from "@/components/contextual/OutputRenderers";
import { cn } from "@/lib/utils";
import type { ToolResult } from "@/types/chat";
import { WrenchIcon } from "@phosphor-icons/react";
import { getToolActionPhrase } from "../../helpers";
export interface ToolResponseMessageProps {
toolName: string;
result?: ToolResult;
success?: boolean;
className?: string;
}
export function ToolResponseMessage({
toolName,
result,
success: _success = true,
className,
}: ToolResponseMessageProps) {
if (!result) {
return (
<div className={cn("flex items-center justify-center gap-2", className)}>
<WrenchIcon
size={14}
weight="bold"
className="flex-shrink-0 text-neutral-500"
/>
<Text variant="small" className="text-neutral-500">
{getToolActionPhrase(toolName)}...
</Text>
</div>
);
}
let parsedResult: Record<string, unknown> | null = null;
try {
parsedResult =
typeof result === "string"
? JSON.parse(result)
: (result as Record<string, unknown>);
} catch {
parsedResult = null;
}
if (parsedResult && typeof parsedResult === "object") {
const responseType = parsedResult.type as string | undefined;
if (responseType === "agent_output") {
const execution = parsedResult.execution as
| {
outputs?: Record<string, unknown[]>;
}
| null
| undefined;
const outputs = execution?.outputs || {};
const message = parsedResult.message as string | undefined;
return (
<div className={cn("space-y-4 px-4 py-2", className)}>
<div className="flex items-center gap-2">
<WrenchIcon
size={14}
weight="bold"
className="flex-shrink-0 text-neutral-500"
/>
<Text variant="small" className="text-neutral-500">
{getToolActionPhrase(toolName)}
</Text>
</div>
{message && (
<div className="rounded border p-4">
<Text variant="small" className="text-neutral-600">
{message}
</Text>
</div>
)}
{Object.keys(outputs).length > 0 && (
<div className="space-y-4">
{Object.entries(outputs).map(([outputName, values]) =>
values.map((value, index) => {
const renderer = globalRegistry.getRenderer(value);
if (renderer) {
return (
<OutputItem
key={`${outputName}-${index}`}
value={value}
renderer={renderer}
label={outputName}
/>
);
}
return (
<div
key={`${outputName}-${index}`}
className="rounded border p-4"
>
<Text variant="large-medium" className="mb-2 capitalize">
{outputName}
</Text>
<pre className="overflow-auto text-sm">
{JSON.stringify(value, null, 2)}
</pre>
</div>
);
}),
)}
</div>
)}
</div>
);
}
if (responseType === "block_output" && parsedResult.outputs) {
const outputs = parsedResult.outputs as Record<string, unknown[]>;
return (
<div className={cn("space-y-4 px-4 py-2", className)}>
<div className="flex items-center gap-2">
<WrenchIcon
size={14}
weight="bold"
className="flex-shrink-0 text-neutral-500"
/>
<Text variant="small" className="text-neutral-500">
{getToolActionPhrase(toolName)}
</Text>
</div>
<div className="space-y-4">
{Object.entries(outputs).map(([outputName, values]) =>
values.map((value, index) => {
const renderer = globalRegistry.getRenderer(value);
if (renderer) {
return (
<OutputItem
key={`${outputName}-${index}`}
value={value}
renderer={renderer}
label={outputName}
/>
);
}
return (
<div
key={`${outputName}-${index}`}
className="rounded border p-4"
>
<Text variant="large-medium" className="mb-2 capitalize">
{outputName}
</Text>
<pre className="overflow-auto text-sm">
{JSON.stringify(value, null, 2)}
</pre>
</div>
);
}),
)}
</div>
</div>
);
}
// Handle other response types with a message field (e.g., understanding_updated)
if (parsedResult.message && typeof parsedResult.message === "string") {
// Format tool name from snake_case to Title Case
const formattedToolName = toolName
.split("_")
.map((word) => word.charAt(0).toUpperCase() + word.slice(1))
.join(" ");
// Clean up message - remove incomplete user_name references
let cleanedMessage = parsedResult.message;
// Remove "Updated understanding with: user_name" pattern if user_name is just a placeholder
cleanedMessage = cleanedMessage.replace(
/Updated understanding with:\s*user_name\.?\s*/gi,
"",
);
// Remove standalone user_name references
cleanedMessage = cleanedMessage.replace(/\buser_name\b\.?\s*/gi, "");
cleanedMessage = cleanedMessage.trim();
// Only show message if it has content after cleaning
if (!cleanedMessage) {
return (
<div
className={cn(
"flex items-center justify-center gap-2 px-4 py-2",
className,
)}
>
<WrenchIcon
size={14}
weight="bold"
className="flex-shrink-0 text-neutral-500"
/>
<Text variant="small" className="text-neutral-500">
{formattedToolName}
</Text>
</div>
);
}
return (
<div className={cn("space-y-2 px-4 py-2", className)}>
<div className="flex items-center justify-center gap-2">
<WrenchIcon
size={14}
weight="bold"
className="flex-shrink-0 text-neutral-500"
/>
<Text variant="small" className="text-neutral-500">
{formattedToolName}
</Text>
</div>
<div className="rounded border p-4">
<Text variant="small" className="text-neutral-600">
{cleanedMessage}
</Text>
</div>
</div>
);
}
}
const renderer = globalRegistry.getRenderer(result);
if (renderer) {
return (
<div className={cn("px-4 py-2", className)}>
<div className="mb-2 flex items-center gap-2">
<WrenchIcon
size={14}
weight="bold"
className="flex-shrink-0 text-neutral-500"
/>
<Text variant="small" className="text-neutral-500">
{getToolActionPhrase(toolName)}
</Text>
</div>
<OutputItem value={result} renderer={renderer} />
</div>
);
}
return (
<div className={cn("flex items-center justify-center gap-2", className)}>
<WrenchIcon
size={14}
weight="bold"
className="flex-shrink-0 text-neutral-500"
/>
<Text variant="small" className="text-neutral-500">
{getToolActionPhrase(toolName)}...
</Text>
</div>
);
}
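
For orientation (not part of the commits), these are example payload shapes for the three branches handled above; all field values are made up.

// Hypothetical tool results, for illustration only.
const agentOutput = JSON.stringify({
  type: "agent_output",
  message: "Your report is ready",
  execution: { outputs: { report: ["Weekly summary text"] } },
});
const blockOutput = JSON.stringify({
  type: "block_output",
  outputs: { text: ["Hello world"] },
});
const statusOnly = JSON.stringify({
  type: "understanding_updated",
  message: "Updated understanding with: user_name.",
});
// agentOutput → header, message card, and one rendered item per output value
// blockOutput → header plus rendered output items
// statusOnly  → tool name only, since the placeholder text is stripped to an empty string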

View File

@@ -1,66 +0,0 @@
/**
* Maps internal tool names to user-friendly display names with emojis.
* @deprecated Use getToolActionPhrase or getToolCompletionPhrase for status messages
*
* @param toolName - The internal tool name from the backend
* @returns A user-friendly display name with an emoji prefix
*/
export function getToolDisplayName(toolName: string): string {
const toolDisplayNames: Record<string, string> = {
find_agent: "🔍 Search Marketplace",
get_agent_details: "📋 Get Agent Details",
check_credentials: "🔑 Check Credentials",
setup_agent: "⚙️ Setup Agent",
run_agent: "▶️ Run Agent",
get_required_setup_info: "📝 Get Setup Requirements",
};
return toolDisplayNames[toolName] || toolName;
}
/**
* Maps internal tool names to human-friendly action phrases (present continuous).
* Used for tool call messages to indicate what action is currently happening.
*
* @param toolName - The internal tool name from the backend
* @returns A human-friendly action phrase in present continuous tense
*/
export function getToolActionPhrase(toolName: string): string {
const toolActionPhrases: Record<string, string> = {
find_agent: "Looking for agents in the marketplace",
agent_carousel: "Looking for agents in the marketplace",
get_agent_details: "Learning about the agent",
check_credentials: "Checking your credentials",
setup_agent: "Setting up the agent",
execution_started: "Running the agent",
run_agent: "Running the agent",
get_required_setup_info: "Getting setup requirements",
schedule_agent: "Scheduling the agent to run",
};
// Return mapped phrase or generate human-friendly fallback
return toolActionPhrases[toolName] || toolName;
}
/**
* Maps internal tool names to human-friendly completion phrases (past tense).
* Used for tool response messages to indicate what action was completed.
*
* @param toolName - The internal tool name from the backend
* @returns A human-friendly completion phrase in past tense
*/
export function getToolCompletionPhrase(toolName: string): string {
const toolCompletionPhrases: Record<string, string> = {
find_agent: "Finished searching the marketplace",
get_agent_details: "Got agent details",
check_credentials: "Checked credentials",
setup_agent: "Agent setup complete",
run_agent: "Agent execution started",
get_required_setup_info: "Got setup requirements",
};
// Return mapped phrase or generate human-friendly fallback
return (
toolCompletionPhrases[toolName] ||
`Finished ${toolName.replace(/_/g, " ").replace("...", "")}`
);
}
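
A quick illustrative check (not part of the diff) of how the phrase helpers above fall back for tool names they don't recognize:

// Expected results, assuming the mappings defined above.
getToolActionPhrase("find_agent"); // "Looking for agents in the marketplace"
getToolActionPhrase("unknown_tool"); // falls back to the raw name: "unknown_tool"
getToolCompletionPhrase("setup_agent"); // "Agent setup complete"
getToolCompletionPhrase("unknown_tool"); // "Finished unknown tool"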

View File

@@ -1,271 +0,0 @@
import {
getGetV2GetSessionQueryKey,
getGetV2GetSessionQueryOptions,
postV2CreateSession,
useGetV2GetSession,
usePatchV2SessionAssignUser,
usePostV2CreateSession,
} from "@/app/api/__generated__/endpoints/chat/chat";
import type { SessionDetailResponse } from "@/app/api/__generated__/models/sessionDetailResponse";
import { okData } from "@/app/api/helpers";
import { isValidUUID } from "@/lib/utils";
import { Key, storage } from "@/services/storage/local-storage";
import { useQueryClient } from "@tanstack/react-query";
import { useCallback, useEffect, useMemo, useRef, useState } from "react";
import { toast } from "sonner";
interface UseChatSessionArgs {
urlSessionId?: string | null;
autoCreate?: boolean;
}
export function useChatSession({
urlSessionId,
autoCreate = false,
}: UseChatSessionArgs = {}) {
const queryClient = useQueryClient();
const [sessionId, setSessionId] = useState<string | null>(null);
const [error, setError] = useState<Error | null>(null);
const justCreatedSessionIdRef = useRef<string | null>(null);
useEffect(() => {
if (urlSessionId) {
if (!isValidUUID(urlSessionId)) {
console.error("Invalid session ID format:", urlSessionId);
toast.error("Invalid session ID", {
description:
"The session ID in the URL is not valid. Starting a new session...",
});
setSessionId(null);
storage.clean(Key.CHAT_SESSION_ID);
return;
}
setSessionId(urlSessionId);
storage.set(Key.CHAT_SESSION_ID, urlSessionId);
} else {
const storedSessionId = storage.get(Key.CHAT_SESSION_ID);
if (storedSessionId) {
if (!isValidUUID(storedSessionId)) {
console.error("Invalid stored session ID:", storedSessionId);
storage.clean(Key.CHAT_SESSION_ID);
setSessionId(null);
} else {
setSessionId(storedSessionId);
}
} else if (autoCreate) {
setSessionId(null);
}
}
}, [urlSessionId, autoCreate]);
const {
mutateAsync: createSessionMutation,
isPending: isCreating,
error: createError,
} = usePostV2CreateSession();
const {
data: sessionData,
isLoading: isLoadingSession,
error: loadError,
refetch,
} = useGetV2GetSession(sessionId || "", {
query: {
enabled: !!sessionId,
select: okData,
staleTime: Infinity, // Never mark as stale
refetchOnMount: false, // Don't refetch on component mount
refetchOnWindowFocus: false, // Don't refetch when window regains focus
refetchOnReconnect: false, // Don't refetch when network reconnects
retry: 1,
},
});
const { mutateAsync: claimSessionMutation } = usePatchV2SessionAssignUser();
const session = useMemo(() => {
if (sessionData) return sessionData;
if (sessionId && justCreatedSessionIdRef.current === sessionId) {
return {
id: sessionId,
user_id: null,
messages: [],
created_at: new Date().toISOString(),
updated_at: new Date().toISOString(),
} as SessionDetailResponse;
}
return null;
}, [sessionData, sessionId]);
const messages = session?.messages || [];
const isLoading = isCreating || isLoadingSession;
useEffect(() => {
if (createError) {
setError(
createError instanceof Error
? createError
: new Error("Failed to create session"),
);
} else if (loadError) {
setError(
loadError instanceof Error
? loadError
: new Error("Failed to load session"),
);
} else {
setError(null);
}
}, [createError, loadError]);
const createSession = useCallback(
async function createSession() {
try {
setError(null);
const response = await postV2CreateSession({
body: JSON.stringify({}),
});
if (response.status !== 200) {
throw new Error("Failed to create session");
}
const newSessionId = response.data.id;
setSessionId(newSessionId);
storage.set(Key.CHAT_SESSION_ID, newSessionId);
justCreatedSessionIdRef.current = newSessionId;
setTimeout(() => {
if (justCreatedSessionIdRef.current === newSessionId) {
justCreatedSessionIdRef.current = null;
}
}, 10000);
return newSessionId;
} catch (err) {
const error =
err instanceof Error ? err : new Error("Failed to create session");
setError(error);
toast.error("Failed to create chat session", {
description: error.message,
});
throw error;
}
},
[createSessionMutation],
);
const loadSession = useCallback(
async function loadSession(id: string) {
try {
setError(null);
// Invalidate the query cache for this session to force a fresh fetch
await queryClient.invalidateQueries({
queryKey: getGetV2GetSessionQueryKey(id),
});
// Set sessionId after invalidation to ensure the hook refetches
setSessionId(id);
storage.set(Key.CHAT_SESSION_ID, id);
// Force fetch with fresh data (bypass cache)
const queryOptions = getGetV2GetSessionQueryOptions(id, {
query: {
staleTime: 0, // Force fresh fetch
retry: 1,
},
});
const result = await queryClient.fetchQuery(queryOptions);
if (!result || ("status" in result && result.status !== 200)) {
console.warn("Session not found on server, clearing local state");
storage.clean(Key.CHAT_SESSION_ID);
setSessionId(null);
throw new Error("Session not found");
}
} catch (err) {
const error =
err instanceof Error ? err : new Error("Failed to load session");
setError(error);
throw error;
}
},
[queryClient],
);
const refreshSession = useCallback(
async function refreshSession() {
if (!sessionId) {
console.log("[refreshSession] Skipping - no session ID");
return;
}
try {
setError(null);
await refetch();
} catch (err) {
const error =
err instanceof Error ? err : new Error("Failed to refresh session");
setError(error);
throw error;
}
},
[sessionId, refetch],
);
const claimSession = useCallback(
async function claimSession(id: string) {
try {
setError(null);
await claimSessionMutation({ sessionId: id });
if (justCreatedSessionIdRef.current === id) {
justCreatedSessionIdRef.current = null;
}
await queryClient.invalidateQueries({
queryKey: getGetV2GetSessionQueryKey(id),
});
await refetch();
toast.success("Session claimed successfully", {
description: "Your chat history has been saved to your account",
});
} catch (err: unknown) {
const error =
err instanceof Error ? err : new Error("Failed to claim session");
const is404 =
(typeof err === "object" &&
err !== null &&
"status" in err &&
err.status === 404) ||
(typeof err === "object" &&
err !== null &&
"response" in err &&
typeof err.response === "object" &&
err.response !== null &&
"status" in err.response &&
err.response.status === 404);
if (!is404) {
setError(error);
toast.error("Failed to claim session", {
description: error.message || "Unable to claim session",
});
}
throw error;
}
},
[claimSessionMutation, queryClient, refetch],
);
const clearSession = useCallback(function clearSession() {
setSessionId(null);
setError(null);
storage.clean(Key.CHAT_SESSION_ID);
justCreatedSessionIdRef.current = null;
}, []);
return {
session,
sessionId,
messages,
isLoading,
isCreating,
error,
createSession,
loadSession,
refreshSession,
claimSession,
clearSession,
};
}
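
A small usage sketch (editorial, not from the commits) of the optimistic-session behavior above: right after createSession() resolves, `session` is synthesized locally with empty messages for up to 10 seconds so the UI can render before the server copy is fetched.

// Inside a React component; error handling omitted for brevity.
const { createSession, session, messages } = useChatSession({ autoCreate: true });
async function handleStart() {
  await createSession();
  // `session` now resolves to a locally-built stub ({ messages: [] }) until
  // the session query returns, or the 10-second window expires.
}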

View File

@@ -1,27 +0,0 @@
"use client";
import { Flag, useGetFlag } from "@/services/feature-flags/use-get-flag";
import { useRouter } from "next/navigation";
import { useEffect } from "react";
import { Chat } from "./components/Chat/Chat";
export default function ChatPage() {
const isChatEnabled = useGetFlag(Flag.CHAT);
const router = useRouter();
useEffect(() => {
if (isChatEnabled === false) {
router.push("/marketplace");
}
}, [isChatEnabled, router]);
if (isChatEnabled === null || isChatEnabled === false) {
return null;
}
return (
<div className="flex h-full flex-col">
<Chat className="flex-1" />
</div>
);
}

View File

@@ -0,0 +1,88 @@
"use client";
import { LoadingSpinner } from "@/components/atoms/LoadingSpinner/LoadingSpinner";
import { NAVBAR_HEIGHT_PX } from "@/lib/constants";
import type { ReactNode } from "react";
import { DesktopSidebar } from "./components/DesktopSidebar/DesktopSidebar";
import { LoadingState } from "./components/LoadingState/LoadingState";
import { MobileDrawer } from "./components/MobileDrawer/MobileDrawer";
import { MobileHeader } from "./components/MobileHeader/MobileHeader";
import { useCopilotShell } from "./useCopilotShell";
interface Props {
children: ReactNode;
}
export function CopilotShell({ children }: Props) {
const {
isMobile,
isDrawerOpen,
isLoading,
isLoggedIn,
hasActiveSession,
sessions,
currentSessionId,
handleSelectSession,
handleOpenDrawer,
handleCloseDrawer,
handleDrawerOpenChange,
handleNewChat,
hasNextPage,
isFetchingNextPage,
fetchNextPage,
isReadyToShowContent,
} = useCopilotShell();
if (!isLoggedIn) {
return (
<div className="flex h-full items-center justify-center">
<LoadingSpinner size="large" />
</div>
);
}
return (
<div
className="flex overflow-hidden bg-[#EFEFF0]"
style={{ height: `calc(100vh - ${NAVBAR_HEIGHT_PX}px)` }}
>
{!isMobile && (
<DesktopSidebar
sessions={sessions}
currentSessionId={currentSessionId}
isLoading={isLoading}
hasNextPage={hasNextPage}
isFetchingNextPage={isFetchingNextPage}
onSelectSession={handleSelectSession}
onFetchNextPage={fetchNextPage}
onNewChat={handleNewChat}
hasActiveSession={Boolean(hasActiveSession)}
/>
)}
<div className="relative flex min-h-0 flex-1 flex-col">
{isMobile && <MobileHeader onOpenDrawer={handleOpenDrawer} />}
<div className="flex min-h-0 flex-1 flex-col">
{isReadyToShowContent ? children : <LoadingState />}
</div>
</div>
{isMobile && (
<MobileDrawer
isOpen={isDrawerOpen}
sessions={sessions}
currentSessionId={currentSessionId}
isLoading={isLoading}
hasNextPage={hasNextPage}
isFetchingNextPage={isFetchingNextPage}
onSelectSession={handleSelectSession}
onFetchNextPage={fetchNextPage}
onNewChat={handleNewChat}
onClose={handleCloseDrawer}
onOpenChange={handleDrawerOpenChange}
hasActiveSession={Boolean(hasActiveSession)}
/>
)}
</div>
);
}

View File

@@ -0,0 +1,70 @@
import type { SessionSummaryResponse } from "@/app/api/__generated__/models/sessionSummaryResponse";
import { Button } from "@/components/atoms/Button/Button";
import { Text } from "@/components/atoms/Text/Text";
import { scrollbarStyles } from "@/components/styles/scrollbars";
import { cn } from "@/lib/utils";
import { Plus } from "@phosphor-icons/react";
import { SessionsList } from "../SessionsList/SessionsList";
interface Props {
sessions: SessionSummaryResponse[];
currentSessionId: string | null;
isLoading: boolean;
hasNextPage: boolean;
isFetchingNextPage: boolean;
onSelectSession: (sessionId: string) => void;
onFetchNextPage: () => void;
onNewChat: () => void;
hasActiveSession: boolean;
}
export function DesktopSidebar({
sessions,
currentSessionId,
isLoading,
hasNextPage,
isFetchingNextPage,
onSelectSession,
onFetchNextPage,
onNewChat,
hasActiveSession,
}: Props) {
return (
<aside className="flex h-full w-80 flex-col border-r border-zinc-100 bg-zinc-50">
<div className="shrink-0 px-6 py-4">
<Text variant="h3" size="body-medium">
Your chats
</Text>
</div>
<div
className={cn(
"flex min-h-0 flex-1 flex-col overflow-y-auto px-3 py-3",
scrollbarStyles,
)}
>
<SessionsList
sessions={sessions}
currentSessionId={currentSessionId}
isLoading={isLoading}
hasNextPage={hasNextPage}
isFetchingNextPage={isFetchingNextPage}
onSelectSession={onSelectSession}
onFetchNextPage={onFetchNextPage}
/>
</div>
{hasActiveSession && (
<div className="shrink-0 bg-zinc-50 p-3 shadow-[0_-4px_6px_-1px_rgba(0,0,0,0.05)]">
<Button
variant="primary"
size="small"
onClick={onNewChat}
className="w-full"
leftIcon={<Plus width="1rem" height="1rem" />}
>
New Chat
</Button>
</div>
)}
</aside>
);
}

View File

@@ -0,0 +1,15 @@
import { Text } from "@/components/atoms/Text/Text";
import { ChatLoader } from "@/components/contextual/Chat/components/ChatLoader/ChatLoader";
export function LoadingState() {
return (
<div className="flex flex-1 items-center justify-center">
<div className="flex flex-col items-center gap-4">
<ChatLoader />
<Text variant="body" className="text-zinc-500">
Loading your chats...
</Text>
</div>
</div>
);
}

View File

@@ -0,0 +1,91 @@
import type { SessionSummaryResponse } from "@/app/api/__generated__/models/sessionSummaryResponse";
import { Button } from "@/components/atoms/Button/Button";
import { scrollbarStyles } from "@/components/styles/scrollbars";
import { cn } from "@/lib/utils";
import { PlusIcon, X } from "@phosphor-icons/react";
import { Drawer } from "vaul";
import { SessionsList } from "../SessionsList/SessionsList";
interface Props {
isOpen: boolean;
sessions: SessionSummaryResponse[];
currentSessionId: string | null;
isLoading: boolean;
hasNextPage: boolean;
isFetchingNextPage: boolean;
onSelectSession: (sessionId: string) => void;
onFetchNextPage: () => void;
onNewChat: () => void;
onClose: () => void;
onOpenChange: (open: boolean) => void;
hasActiveSession: boolean;
}
export function MobileDrawer({
isOpen,
sessions,
currentSessionId,
isLoading,
hasNextPage,
isFetchingNextPage,
onSelectSession,
onFetchNextPage,
onNewChat,
onClose,
onOpenChange,
hasActiveSession,
}: Props) {
return (
<Drawer.Root open={isOpen} onOpenChange={onOpenChange} direction="left">
<Drawer.Portal>
<Drawer.Overlay className="fixed inset-0 z-[60] bg-black/10 backdrop-blur-sm" />
<Drawer.Content className="fixed left-0 top-0 z-[70] flex h-full w-80 flex-col border-r border-zinc-200 bg-zinc-50">
<div className="shrink-0 border-b border-zinc-200 p-4">
<div className="flex items-center justify-between">
<Drawer.Title className="text-lg font-semibold text-zinc-800">
Your chats
</Drawer.Title>
<Button
variant="icon"
size="icon"
aria-label="Close sessions"
onClick={onClose}
>
<X width="1.25rem" height="1.25rem" />
</Button>
</div>
</div>
<div
className={cn(
"flex min-h-0 flex-1 flex-col overflow-y-auto px-3 py-3",
scrollbarStyles,
)}
>
<SessionsList
sessions={sessions}
currentSessionId={currentSessionId}
isLoading={isLoading}
hasNextPage={hasNextPage}
isFetchingNextPage={isFetchingNextPage}
onSelectSession={onSelectSession}
onFetchNextPage={onFetchNextPage}
/>
</div>
{hasActiveSession && (
<div className="shrink-0 bg-white p-3 shadow-[0_-4px_6px_-1px_rgba(0,0,0,0.05)]">
<Button
variant="primary"
size="small"
onClick={onNewChat}
className="w-full"
leftIcon={<PlusIcon width="1rem" height="1rem" />}
>
New Chat
</Button>
</div>
)}
</Drawer.Content>
</Drawer.Portal>
</Drawer.Root>
);
}

View File

@@ -0,0 +1,24 @@
import { useState } from "react";
export function useMobileDrawer() {
const [isDrawerOpen, setIsDrawerOpen] = useState(false);
function handleOpenDrawer() {
setIsDrawerOpen(true);
}
function handleCloseDrawer() {
setIsDrawerOpen(false);
}
function handleDrawerOpenChange(open: boolean) {
setIsDrawerOpen(open);
}
return {
isDrawerOpen,
handleOpenDrawer,
handleCloseDrawer,
handleDrawerOpenChange,
};
}

View File

@@ -0,0 +1,22 @@
import { Button } from "@/components/atoms/Button/Button";
import { NAVBAR_HEIGHT_PX } from "@/lib/constants";
import { ListIcon } from "@phosphor-icons/react";
interface Props {
onOpenDrawer: () => void;
}
export function MobileHeader({ onOpenDrawer }: Props) {
return (
<Button
variant="icon"
size="icon"
aria-label="Open sessions"
onClick={onOpenDrawer}
className="fixed z-50 bg-white shadow-md"
style={{ left: "1rem", top: `${NAVBAR_HEIGHT_PX + 20}px` }}
>
<ListIcon width="1.25rem" height="1.25rem" />
</Button>
);
}

View File

@@ -0,0 +1,80 @@
import type { SessionSummaryResponse } from "@/app/api/__generated__/models/sessionSummaryResponse";
import { Skeleton } from "@/components/__legacy__/ui/skeleton";
import { Text } from "@/components/atoms/Text/Text";
import { InfiniteList } from "@/components/molecules/InfiniteList/InfiniteList";
import { cn } from "@/lib/utils";
import { getSessionTitle } from "../../helpers";
interface Props {
sessions: SessionSummaryResponse[];
currentSessionId: string | null;
isLoading: boolean;
hasNextPage: boolean;
isFetchingNextPage: boolean;
onSelectSession: (sessionId: string) => void;
onFetchNextPage: () => void;
}
export function SessionsList({
sessions,
currentSessionId,
isLoading,
hasNextPage,
isFetchingNextPage,
onSelectSession,
onFetchNextPage,
}: Props) {
if (isLoading) {
return (
<div className="space-y-1">
{Array.from({ length: 5 }).map((_, i) => (
<div key={i} className="rounded-lg px-3 py-2.5">
<Skeleton className="h-5 w-full" />
</div>
))}
</div>
);
}
if (sessions.length === 0) {
return (
<div className="flex h-full items-center justify-center">
<Text variant="body" className="text-zinc-500">
You don&apos;t have previous chats
</Text>
</div>
);
}
return (
<InfiniteList
items={sessions}
hasMore={hasNextPage}
isFetchingMore={isFetchingNextPage}
onEndReached={onFetchNextPage}
className="space-y-1"
renderItem={(session) => {
const isActive = session.id === currentSessionId;
return (
<button
onClick={() => onSelectSession(session.id)}
className={cn(
"w-full rounded-lg px-3 py-2.5 text-left transition-colors",
isActive ? "bg-zinc-100" : "hover:bg-zinc-50",
)}
>
<Text
variant="body"
className={cn(
"font-normal",
isActive ? "text-zinc-600" : "text-zinc-800",
)}
>
{getSessionTitle(session)}
</Text>
</button>
);
}}
/>
);
}

View File

@@ -0,0 +1,92 @@
import { useGetV2ListSessions } from "@/app/api/__generated__/endpoints/chat/chat";
import type { SessionSummaryResponse } from "@/app/api/__generated__/models/sessionSummaryResponse";
import { okData } from "@/app/api/helpers";
import { useEffect, useMemo, useState } from "react";
const PAGE_SIZE = 50;
export interface UseSessionsPaginationArgs {
enabled: boolean;
}
export function useSessionsPagination({ enabled }: UseSessionsPaginationArgs) {
const [offset, setOffset] = useState(0);
const [accumulatedSessions, setAccumulatedSessions] = useState<
SessionSummaryResponse[]
>([]);
const [totalCount, setTotalCount] = useState<number | null>(null);
const { data, isLoading, isFetching, isError } = useGetV2ListSessions(
{ limit: PAGE_SIZE, offset },
{
query: {
enabled: enabled && offset >= 0,
},
},
);
useEffect(() => {
const responseData = okData(data);
if (responseData) {
const newSessions = responseData.sessions;
const total = responseData.total;
setTotalCount(total);
if (offset === 0) {
setAccumulatedSessions(newSessions);
} else {
setAccumulatedSessions((prev) => [...prev, ...newSessions]);
}
} else if (!enabled) {
setAccumulatedSessions([]);
setTotalCount(null);
}
}, [data, offset, enabled]);
const hasNextPage = useMemo(() => {
if (totalCount === null) return false;
return accumulatedSessions.length < totalCount;
}, [accumulatedSessions.length, totalCount]);
const areAllSessionsLoaded = useMemo(() => {
if (totalCount === null) return false;
return (
accumulatedSessions.length >= totalCount && !isFetching && !isLoading
);
}, [accumulatedSessions.length, totalCount, isFetching, isLoading]);
useEffect(() => {
if (
hasNextPage &&
!isFetching &&
!isLoading &&
!isError &&
totalCount !== null
) {
setOffset((prev) => prev + PAGE_SIZE);
}
}, [hasNextPage, isFetching, isLoading, isError, totalCount]);
function fetchNextPage() {
if (hasNextPage && !isFetching) {
setOffset((prev) => prev + PAGE_SIZE);
}
}
function reset() {
setOffset(0);
setAccumulatedSessions([]);
setTotalCount(null);
}
return {
sessions: accumulatedSessions,
isLoading,
isFetching,
hasNextPage,
areAllSessionsLoaded,
totalCount,
fetchNextPage,
reset,
};
}
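
Worth noting (editorial remark): the effect above keeps advancing `offset` until every page is fetched, so consumers generally read the accumulated state rather than drive pagination themselves. A minimal consumer sketch:

// Inside a React component; `enabled` gates the first request.
const { sessions, areAllSessionsLoaded, hasNextPage, fetchNextPage } =
  useSessionsPagination({ enabled: true });
// `sessions` grows page by page (PAGE_SIZE = 50) until areAllSessionsLoaded is true;
// fetchNextPage() is available for manual paging but is rarely needed here.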

View File

@@ -0,0 +1,165 @@
import type { SessionDetailResponse } from "@/app/api/__generated__/models/sessionDetailResponse";
import type { SessionSummaryResponse } from "@/app/api/__generated__/models/sessionSummaryResponse";
import { format, formatDistanceToNow, isToday } from "date-fns";
export function convertSessionDetailToSummary(
session: SessionDetailResponse,
): SessionSummaryResponse {
return {
id: session.id,
created_at: session.created_at,
updated_at: session.updated_at,
title: undefined,
};
}
export function filterVisibleSessions(
sessions: SessionSummaryResponse[],
): SessionSummaryResponse[] {
return sessions.filter(
(session) => session.updated_at !== session.created_at,
);
}
export function getSessionTitle(session: SessionSummaryResponse): string {
if (session.title) return session.title;
const isNewSession = session.updated_at === session.created_at;
if (isNewSession) {
const createdDate = new Date(session.created_at);
if (isToday(createdDate)) {
return "Today";
}
return format(createdDate, "MMM d, yyyy");
}
return "Untitled Chat";
}
export function getSessionUpdatedLabel(
session: SessionSummaryResponse,
): string {
if (!session.updated_at) return "";
return formatDistanceToNow(new Date(session.updated_at), { addSuffix: true });
}
export function mergeCurrentSessionIntoList(
accumulatedSessions: SessionSummaryResponse[],
currentSessionId: string | null,
currentSessionData: SessionDetailResponse | null | undefined,
): SessionSummaryResponse[] {
const filteredSessions: SessionSummaryResponse[] = [];
if (accumulatedSessions.length > 0) {
const visibleSessions = filterVisibleSessions(accumulatedSessions);
if (currentSessionId) {
const currentInAll = accumulatedSessions.find(
(s) => s.id === currentSessionId,
);
if (currentInAll) {
const isInVisible = visibleSessions.some(
(s) => s.id === currentSessionId,
);
if (!isInVisible) {
filteredSessions.push(currentInAll);
}
}
}
filteredSessions.push(...visibleSessions);
}
if (currentSessionId && currentSessionData) {
const isCurrentInList = filteredSessions.some(
(s) => s.id === currentSessionId,
);
if (!isCurrentInList) {
const summarySession = convertSessionDetailToSummary(currentSessionData);
filteredSessions.unshift(summarySession);
}
}
return filteredSessions;
}
export function getCurrentSessionId(
searchParams: URLSearchParams,
): string | null {
return searchParams.get("sessionId");
}
export function shouldAutoSelectSession(
areAllSessionsLoaded: boolean,
hasAutoSelectedSession: boolean,
paramSessionId: string | null,
visibleSessions: SessionSummaryResponse[],
accumulatedSessions: SessionSummaryResponse[],
isLoading: boolean,
totalCount: number | null,
): {
shouldSelect: boolean;
sessionIdToSelect: string | null;
shouldCreate: boolean;
} {
if (!areAllSessionsLoaded || hasAutoSelectedSession) {
return {
shouldSelect: false,
sessionIdToSelect: null,
shouldCreate: false,
};
}
if (paramSessionId) {
return {
shouldSelect: false,
sessionIdToSelect: null,
shouldCreate: false,
};
}
if (visibleSessions.length > 0) {
return {
shouldSelect: true,
sessionIdToSelect: visibleSessions[0].id,
shouldCreate: false,
};
}
if (accumulatedSessions.length === 0 && !isLoading && totalCount === 0) {
return { shouldSelect: false, sessionIdToSelect: null, shouldCreate: true };
}
if (totalCount === 0) {
return {
shouldSelect: false,
sessionIdToSelect: null,
shouldCreate: false,
};
}
return { shouldSelect: false, sessionIdToSelect: null, shouldCreate: false };
}
export function checkReadyToShowContent(
areAllSessionsLoaded: boolean,
paramSessionId: string | null,
accumulatedSessions: SessionSummaryResponse[],
isCurrentSessionLoading: boolean,
currentSessionData: SessionDetailResponse | null | undefined,
hasAutoSelectedSession: boolean,
): boolean {
if (!areAllSessionsLoaded) return false;
if (paramSessionId) {
const sessionFound = accumulatedSessions.some(
(s) => s.id === paramSessionId,
);
return (
sessionFound ||
(!isCurrentSessionLoading &&
currentSessionData !== undefined &&
currentSessionData !== null)
);
}
return hasAutoSelectedSession;
}
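
An illustrative example (not part of the diff) of how the session helpers above treat a brand-new, never-updated session versus an active one; the object shapes are abbreviated to just the fields the helpers read.

// Hypothetical summaries, both created today.
const createdAt = new Date().toISOString();
const fresh = { id: "a", created_at: createdAt, updated_at: createdAt, title: undefined };
const active = {
  id: "b",
  created_at: createdAt,
  updated_at: new Date(Date.now() + 60_000).toISOString(),
  title: undefined,
};

getSessionTitle(fresh); // "Today" — never updated and created today
getSessionTitle(active); // "Untitled Chat" — updated, but no title yet
filterVisibleSessions([fresh, active]); // [active] — untouched sessions are hidden from the sidebar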

View File

@@ -0,0 +1,170 @@
"use client";
import {
getGetV2ListSessionsQueryKey,
useGetV2GetSession,
} from "@/app/api/__generated__/endpoints/chat/chat";
import { okData } from "@/app/api/helpers";
import { useBreakpoint } from "@/lib/hooks/useBreakpoint";
import { useSupabase } from "@/lib/supabase/hooks/useSupabase";
import { useQueryClient } from "@tanstack/react-query";
import { usePathname, useRouter, useSearchParams } from "next/navigation";
import { useEffect, useRef, useState } from "react";
import { useMobileDrawer } from "./components/MobileDrawer/useMobileDrawer";
import { useSessionsPagination } from "./components/SessionsList/useSessionsPagination";
import {
checkReadyToShowContent,
filterVisibleSessions,
getCurrentSessionId,
mergeCurrentSessionIntoList,
} from "./helpers";
export function useCopilotShell() {
const router = useRouter();
const pathname = usePathname();
const searchParams = useSearchParams();
const queryClient = useQueryClient();
const breakpoint = useBreakpoint();
const { isLoggedIn } = useSupabase();
const isMobile =
breakpoint === "base" || breakpoint === "sm" || breakpoint === "md";
const isOnHomepage = pathname === "/copilot";
const paramSessionId = searchParams.get("sessionId");
const {
isDrawerOpen,
handleOpenDrawer,
handleCloseDrawer,
handleDrawerOpenChange,
} = useMobileDrawer();
const paginationEnabled = !isMobile || isDrawerOpen || !!paramSessionId;
const {
sessions: accumulatedSessions,
isLoading: isSessionsLoading,
isFetching: isSessionsFetching,
hasNextPage,
areAllSessionsLoaded,
fetchNextPage,
reset: resetPagination,
} = useSessionsPagination({
enabled: paginationEnabled,
});
const currentSessionId = getCurrentSessionId(searchParams);
const { data: currentSessionData, isLoading: isCurrentSessionLoading } =
useGetV2GetSession(currentSessionId || "", {
query: {
enabled: !!currentSessionId,
select: okData,
},
});
const [hasAutoSelectedSession, setHasAutoSelectedSession] = useState(false);
const hasAutoSelectedRef = useRef(false);
// Mark as auto-selected when sessionId is in URL
useEffect(() => {
if (paramSessionId && !hasAutoSelectedRef.current) {
hasAutoSelectedRef.current = true;
setHasAutoSelectedSession(true);
}
}, [paramSessionId]);
// On homepage without sessionId, mark as ready immediately
useEffect(() => {
if (isOnHomepage && !paramSessionId && !hasAutoSelectedRef.current) {
hasAutoSelectedRef.current = true;
setHasAutoSelectedSession(true);
}
}, [isOnHomepage, paramSessionId]);
// Invalidate sessions list when navigating to homepage (to show newly created sessions)
useEffect(() => {
if (isOnHomepage && !paramSessionId) {
queryClient.invalidateQueries({
queryKey: getGetV2ListSessionsQueryKey(),
});
}
}, [isOnHomepage, paramSessionId, queryClient]);
// Reset pagination when query becomes disabled
const prevPaginationEnabledRef = useRef(paginationEnabled);
useEffect(() => {
if (prevPaginationEnabledRef.current && !paginationEnabled) {
resetPagination();
resetAutoSelect();
}
prevPaginationEnabledRef.current = paginationEnabled;
}, [paginationEnabled, resetPagination]);
const sessions = mergeCurrentSessionIntoList(
accumulatedSessions,
currentSessionId,
currentSessionData,
);
const visibleSessions = filterVisibleSessions(sessions);
const sidebarSelectedSessionId =
isOnHomepage && !paramSessionId ? null : currentSessionId;
const isReadyToShowContent = isOnHomepage
? true
: checkReadyToShowContent(
areAllSessionsLoaded,
paramSessionId,
accumulatedSessions,
isCurrentSessionLoading,
currentSessionData,
hasAutoSelectedSession,
);
function handleSelectSession(sessionId: string) {
// Navigate using replaceState to avoid full page reload
window.history.replaceState(null, "", `/copilot?sessionId=${sessionId}`);
// Force a re-render by updating the URL through router
router.replace(`/copilot?sessionId=${sessionId}`);
if (isMobile) handleCloseDrawer();
}
function handleNewChat() {
resetAutoSelect();
resetPagination();
// Invalidate and refetch sessions list to ensure newly created sessions appear
queryClient.invalidateQueries({
queryKey: getGetV2ListSessionsQueryKey(),
});
window.history.replaceState(null, "", "/copilot");
router.replace("/copilot");
if (isMobile) handleCloseDrawer();
}
function resetAutoSelect() {
hasAutoSelectedRef.current = false;
setHasAutoSelectedSession(false);
}
return {
isMobile,
isDrawerOpen,
isLoggedIn,
hasActiveSession:
Boolean(currentSessionId) && (!isOnHomepage || Boolean(paramSessionId)),
isLoading: isSessionsLoading || !areAllSessionsLoaded,
sessions: visibleSessions,
currentSessionId: sidebarSelectedSessionId,
handleSelectSession,
handleOpenDrawer,
handleCloseDrawer,
handleDrawerOpenChange,
handleNewChat,
hasNextPage,
isFetchingNextPage: isSessionsFetching,
fetchNextPage,
isReadyToShowContent,
};
}

View File

@@ -0,0 +1,33 @@
import type { User } from "@supabase/supabase-js";
export function getGreetingName(user?: User | null): string {
if (!user) return "there";
const metadata = user.user_metadata as Record<string, unknown> | undefined;
const fullName = metadata?.full_name;
const name = metadata?.name;
if (typeof fullName === "string" && fullName.trim()) {
return fullName.split(" ")[0];
}
if (typeof name === "string" && name.trim()) {
return name.split(" ")[0];
}
if (user.email) {
return user.email.split("@")[0];
}
return "there";
}
export function buildCopilotChatUrl(prompt: string): string {
const trimmed = prompt.trim();
if (!trimmed) return "/copilot/chat";
const encoded = encodeURIComponent(trimmed);
return `/copilot/chat?prompt=${encoded}`;
}
export function getQuickActions(): string[] {
return [
"Show me what I can automate",
"Design a custom workflow",
"Help me with content creation",
];
}
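
A short illustrative pass (not from the commits) over the helpers above; the partial user object is hypothetical.

// `partialUser` is a hypothetical, abbreviated stand-in for a Supabase User.
const partialUser = { email: "ada@example.com" } as unknown as User;
getGreetingName(null); // "there"
getGreetingName(partialUser); // "ada" — falls back to the email prefix
buildCopilotChatUrl("  summarize my week  "); // "/copilot/chat?prompt=summarize%20my%20week"
buildCopilotChatUrl(""); // "/copilot/chat"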

View File

@@ -0,0 +1,6 @@
import type { ReactNode } from "react";
import { CopilotShell } from "./components/CopilotShell/CopilotShell";
export default function CopilotLayout({ children }: { children: ReactNode }) {
return <CopilotShell>{children}</CopilotShell>;
}

View File

@@ -0,0 +1,228 @@
"use client";
import { postV2CreateSession } from "@/app/api/__generated__/endpoints/chat/chat";
import { Skeleton } from "@/components/__legacy__/ui/skeleton";
import { Button } from "@/components/atoms/Button/Button";
import { LoadingSpinner } from "@/components/atoms/LoadingSpinner/LoadingSpinner";
import { Text } from "@/components/atoms/Text/Text";
import { Chat } from "@/components/contextual/Chat/Chat";
import { ChatInput } from "@/components/contextual/Chat/components/ChatInput/ChatInput";
import { getHomepageRoute } from "@/lib/constants";
import { useSupabase } from "@/lib/supabase/hooks/useSupabase";
import {
Flag,
type FlagValues,
useGetFlag,
} from "@/services/feature-flags/use-get-flag";
import { useFlags } from "launchdarkly-react-client-sdk";
import { useRouter, useSearchParams } from "next/navigation";
import { useEffect, useMemo, useRef, useState } from "react";
import { getGreetingName, getQuickActions } from "./helpers";
type PageState =
| { type: "welcome" }
| { type: "creating"; prompt: string }
| { type: "chat"; sessionId: string; initialPrompt?: string };
export default function CopilotPage() {
const router = useRouter();
const searchParams = useSearchParams();
const { user, isLoggedIn, isUserLoading } = useSupabase();
const isChatEnabled = useGetFlag(Flag.CHAT);
const flags = useFlags<FlagValues>();
const homepageRoute = getHomepageRoute(isChatEnabled);
const envEnabled = process.env.NEXT_PUBLIC_LAUNCHDARKLY_ENABLED === "true";
const clientId = process.env.NEXT_PUBLIC_LAUNCHDARKLY_CLIENT_ID;
const isLaunchDarklyConfigured = envEnabled && Boolean(clientId);
const isFlagReady =
!isLaunchDarklyConfigured || flags[Flag.CHAT] !== undefined;
const [pageState, setPageState] = useState<PageState>({ type: "welcome" });
const initialPromptRef = useRef<Map<string, string>>(new Map());
const urlSessionId = searchParams.get("sessionId");
// Sync with URL sessionId (preserve initialPrompt from ref)
useEffect(
function syncSessionFromUrl() {
if (urlSessionId) {
// If we're already in chat state with this sessionId, don't overwrite
if (pageState.type === "chat" && pageState.sessionId === urlSessionId) {
return;
}
// Get initialPrompt from ref or current state
const storedInitialPrompt = initialPromptRef.current.get(urlSessionId);
const currentInitialPrompt =
storedInitialPrompt ||
(pageState.type === "creating"
? pageState.prompt
: pageState.type === "chat"
? pageState.initialPrompt
: undefined);
if (currentInitialPrompt) {
initialPromptRef.current.set(urlSessionId, currentInitialPrompt);
}
setPageState({
type: "chat",
sessionId: urlSessionId,
initialPrompt: currentInitialPrompt,
});
} else if (pageState.type === "chat") {
setPageState({ type: "welcome" });
}
},
[urlSessionId],
);
useEffect(
function ensureAccess() {
if (!isFlagReady) return;
if (isChatEnabled === false) {
router.replace(homepageRoute);
}
},
[homepageRoute, isChatEnabled, isFlagReady, router],
);
const greetingName = useMemo(
function getName() {
return getGreetingName(user);
},
[user],
);
const quickActions = useMemo(function getActions() {
return getQuickActions();
}, []);
async function startChatWithPrompt(prompt: string) {
if (!prompt?.trim()) return;
if (pageState.type === "creating") return;
const trimmedPrompt = prompt.trim();
setPageState({ type: "creating", prompt: trimmedPrompt });
try {
// Create session
const sessionResponse = await postV2CreateSession({
body: JSON.stringify({}),
});
if (sessionResponse.status !== 200 || !sessionResponse.data?.id) {
throw new Error("Failed to create session");
}
const sessionId = sessionResponse.data.id;
// Store initialPrompt in ref so it persists across re-renders
initialPromptRef.current.set(sessionId, trimmedPrompt);
// Update URL and show Chat with initial prompt
// Chat will handle sending the message and streaming
window.history.replaceState(null, "", `/copilot?sessionId=${sessionId}`);
setPageState({ type: "chat", sessionId, initialPrompt: trimmedPrompt });
} catch (error) {
console.error("[CopilotPage] Failed to start chat:", error);
setPageState({ type: "welcome" });
}
}
function handleQuickAction(action: string) {
startChatWithPrompt(action);
}
function handleSessionNotFound() {
router.replace("/copilot");
}
if (!isFlagReady || isChatEnabled === false || !isLoggedIn) {
return null;
}
// Show Chat when we have an active session
if (pageState.type === "chat") {
return (
<div className="flex h-full flex-col">
<Chat
key={pageState.sessionId ?? "welcome"}
className="flex-1"
urlSessionId={pageState.sessionId}
initialPrompt={pageState.initialPrompt}
onSessionNotFound={handleSessionNotFound}
/>
</div>
);
}
// Show loading state while creating session and sending first message
if (pageState.type === "creating") {
return (
<div className="flex h-full flex-1 flex-col items-center justify-center bg-[#f8f8f9] px-6 py-10">
<LoadingSpinner size="large" />
<Text variant="body" className="mt-4 text-zinc-500">
Starting your chat...
</Text>
</div>
);
}
// Show Welcome screen
const isLoading = isUserLoading;
return (
<div className="flex h-full flex-1 items-center justify-center overflow-y-auto bg-[#f8f8f9] px-6 py-10">
<div className="w-full text-center">
{isLoading ? (
<div className="mx-auto max-w-2xl">
<Skeleton className="mx-auto mb-3 h-8 w-64" />
<Skeleton className="mx-auto mb-8 h-6 w-80" />
<div className="mb-8">
<Skeleton className="mx-auto h-14 w-full rounded-lg" />
</div>
<div className="flex flex-wrap items-center justify-center gap-3">
{Array.from({ length: 4 }).map((_, i) => (
<Skeleton key={i} className="h-9 w-48 rounded-md" />
))}
</div>
</div>
) : (
<>
<div className="mx-auto max-w-2xl">
<Text
variant="h3"
className="mb-3 !text-[1.375rem] text-zinc-700"
>
Hey, <span className="text-violet-600">{greetingName}</span>
</Text>
<Text variant="h3" className="mb-8 !font-normal">
What do you want to automate?
</Text>
<div className="mb-6">
<ChatInput
onSend={startChatWithPrompt}
placeholder='You can search or just ask - e.g. "create a blog post outline"'
/>
</div>
</div>
<div className="flex flex-nowrap items-center justify-center gap-3 overflow-x-auto [-ms-overflow-style:none] [scrollbar-width:none] [&::-webkit-scrollbar]:hidden">
{quickActions.map((action) => (
<Button
key={action}
type="button"
variant="outline"
size="small"
onClick={() => handleQuickAction(action)}
className="h-auto shrink-0 border-zinc-600 !px-4 !py-2 text-[1rem] text-zinc-600"
>
{action}
</Button>
))}
</div>
</>
)}
</div>
</div>
);
}

View File

@@ -1,6 +1,8 @@
 "use client";

 import { ErrorCard } from "@/components/molecules/ErrorCard/ErrorCard";
+import { getHomepageRoute } from "@/lib/constants";
+import { Flag, useGetFlag } from "@/services/feature-flags/use-get-flag";
 import { useSearchParams } from "next/navigation";
 import { Suspense } from "react";
 import { getErrorDetails } from "./helpers";
@@ -9,6 +11,8 @@ function ErrorPageContent() {
   const searchParams = useSearchParams();
   const errorMessage = searchParams.get("message");
   const errorDetails = getErrorDetails(errorMessage);
+  const isChatEnabled = useGetFlag(Flag.CHAT);
+  const homepageRoute = getHomepageRoute(isChatEnabled);

   function handleRetry() {
     // Auth-related errors should redirect to login
@@ -25,8 +29,8 @@ function ErrorPageContent() {
         window.location.reload();
       }, 2000);
     } else {
-      // For server/network errors, go to marketplace
-      window.location.href = "/marketplace";
+      // For server/network errors, go to home
+      window.location.href = homepageRoute;
     }
   }

View File

@@ -14,10 +14,6 @@ import {
import { Dialog } from "@/components/molecules/Dialog/Dialog";
import { useEffect, useRef, useState } from "react";
import { ScheduleAgentModal } from "../ScheduleAgentModal/ScheduleAgentModal";
-import {
-AIAgentSafetyPopup,
-useAIAgentSafetyPopup,
-} from "./components/AIAgentSafetyPopup/AIAgentSafetyPopup";
import { ModalHeader } from "./components/ModalHeader/ModalHeader";
import { ModalRunSection } from "./components/ModalRunSection/ModalRunSection";
import { RunActions } from "./components/RunActions/RunActions";
@@ -87,18 +83,8 @@ export function RunAgentModal({
const [isScheduleModalOpen, setIsScheduleModalOpen] = useState(false);
const [hasOverflow, setHasOverflow] = useState(false);
-const [isSafetyPopupOpen, setIsSafetyPopupOpen] = useState(false);
-const [pendingRunAction, setPendingRunAction] = useState<(() => void) | null>(
-null,
-);
const contentRef = useRef<HTMLDivElement>(null);
-const { shouldShowPopup, dismissPopup } = useAIAgentSafetyPopup(
-agent.id,
-agent.has_sensitive_action,
-agent.has_human_in_the_loop,
-);
const hasAnySetupFields =
Object.keys(agentInputFields || {}).length > 0 ||
Object.keys(agentCredentialsInputFields || {}).length > 0;
@@ -179,24 +165,6 @@ export function RunAgentModal({
onScheduleCreated?.(schedule);
}
-function handleRunWithSafetyCheck() {
-if (shouldShowPopup) {
-setPendingRunAction(() => handleRun);
-setIsSafetyPopupOpen(true);
-} else {
-handleRun();
-}
-}
-function handleSafetyPopupAcknowledge() {
-setIsSafetyPopupOpen(false);
-dismissPopup();
-if (pendingRunAction) {
-pendingRunAction();
-setPendingRunAction(null);
-}
-}
return (
<>
<Dialog
@@ -212,7 +180,7 @@ export function RunAgentModal({
{/* Content */}
{hasAnySetupFields ? (
-<div className="mt-10 pb-32">
+<div className="mt-4 pb-10">
<RunAgentModalContextProvider
value={{
agent,
@@ -280,7 +248,7 @@ export function RunAgentModal({
)}
<RunActions
defaultRunType={defaultRunType}
-onRun={handleRunWithSafetyCheck}
+onRun={handleRun}
isExecuting={isExecuting}
isSettingUpTrigger={isSettingUpTrigger}
isRunReady={allRequiredInputsAreSet}
@@ -298,12 +266,6 @@ export function RunAgentModal({
</div>
</Dialog.Content>
</Dialog>
-<AIAgentSafetyPopup
-agentId={agent.id}
-isOpen={isSafetyPopupOpen}
-onAcknowledge={handleSafetyPopupAcknowledge}
-/>
</>
);
}

View File

@@ -1,108 +0,0 @@
"use client";
import { Button } from "@/components/atoms/Button/Button";
import { Text } from "@/components/atoms/Text/Text";
import { Dialog } from "@/components/molecules/Dialog/Dialog";
import { Key, storage } from "@/services/storage/local-storage";
import { ShieldCheckIcon } from "@phosphor-icons/react";
import { useCallback, useEffect, useState } from "react";
interface Props {
agentId: string;
onAcknowledge: () => void;
isOpen: boolean;
}
export function AIAgentSafetyPopup({ agentId, onAcknowledge, isOpen }: Props) {
function handleAcknowledge() {
// Add this agent to the list of agents for which popup has been shown
const seenAgentsJson = storage.get(Key.AI_AGENT_SAFETY_POPUP_SHOWN);
const seenAgents: string[] = seenAgentsJson
? JSON.parse(seenAgentsJson)
: [];
if (!seenAgents.includes(agentId)) {
seenAgents.push(agentId);
storage.set(Key.AI_AGENT_SAFETY_POPUP_SHOWN, JSON.stringify(seenAgents));
}
onAcknowledge();
}
if (!isOpen) return null;
return (
<Dialog
controlled={{ isOpen, set: () => {} }}
styling={{ maxWidth: "480px" }}
>
<Dialog.Content>
<div className="flex flex-col items-center p-6 text-center">
<div className="mb-6 flex h-16 w-16 items-center justify-center rounded-full bg-blue-50">
<ShieldCheckIcon
weight="fill"
size={32}
className="text-blue-600"
/>
</div>
<Text variant="h3" className="mb-4">
Safety Checks Enabled
</Text>
<Text variant="body" className="mb-2 text-zinc-700">
AI-generated agents may take actions that affect your data or
external systems.
</Text>
<Text variant="body" className="mb-8 text-zinc-700">
AutoGPT includes safety checks so you&apos;ll always have the
opportunity to review and approve sensitive actions before they
happen.
</Text>
<Button
variant="primary"
size="large"
className="w-full"
onClick={handleAcknowledge}
>
Got it
</Button>
</div>
</Dialog.Content>
</Dialog>
);
}
export function useAIAgentSafetyPopup(
agentId: string,
hasSensitiveAction: boolean,
hasHumanInTheLoop: boolean,
) {
const [shouldShowPopup, setShouldShowPopup] = useState(false);
const [hasChecked, setHasChecked] = useState(false);
useEffect(() => {
if (hasChecked) return;
const seenAgentsJson = storage.get(Key.AI_AGENT_SAFETY_POPUP_SHOWN);
const seenAgents: string[] = seenAgentsJson
? JSON.parse(seenAgentsJson)
: [];
const hasSeenPopupForThisAgent = seenAgents.includes(agentId);
const isRelevantAgent = hasSensitiveAction || hasHumanInTheLoop;
setShouldShowPopup(!hasSeenPopupForThisAgent && isRelevantAgent);
setHasChecked(true);
}, [agentId, hasSensitiveAction, hasHumanInTheLoop, hasChecked]);
const dismissPopup = useCallback(() => {
setShouldShowPopup(false);
}, []);
return {
shouldShowPopup,
dismissPopup,
};
}

View File

@@ -29,7 +29,7 @@ export function ModalHeader({ agent }: ModalHeaderProps) {
<ShowMoreText
previewLimit={400}
variant="small"
-className="mt-4 !text-zinc-700"
+className="mb-2 mt-4 !text-zinc-700"
>
{agent.description}
</ShowMoreText>
@@ -40,6 +40,8 @@ export function ModalHeader({ agent }: ModalHeaderProps) {
<Text variant="lead-semibold" className="text-blue-600">
Tip
</Text>
+<div className="h-px w-full bg-blue-100" />
<Text variant="body">
For best results, run this agent{" "}
{humanizeCronExpression(
@@ -50,7 +52,7 @@ export function ModalHeader({ agent }: ModalHeaderProps) {
) : null}
{agent.instructions ? (
-<div className="flex flex-col gap-4 rounded-medium border border-purple-100 bg-[#F1EBFE/5] p-4">
+<div className="mt-4 flex flex-col gap-4 rounded-medium border border-purple-100 bg-[#f1ebfe80] p-4">
<Text variant="lead-semibold" className="text-purple-600">
Instructions
</Text>

View File

@@ -69,6 +69,7 @@ export function SafeModeToggle({ graph, className }: Props) {
const {
currentHITLSafeMode,
showHITLToggle,
+isHITLStateUndetermined,
handleHITLToggle,
currentSensitiveActionSafeMode,
showSensitiveActionToggle,
@@ -77,13 +78,20 @@ export function SafeModeToggle({ graph, className }: Props) {
shouldShowToggle,
} = useAgentSafeMode(graph);
-if (!shouldShowToggle) {
+if (!shouldShowToggle || isHITLStateUndetermined) {
+return null;
+}
+const showHITL = showHITLToggle && !isHITLStateUndetermined;
+const showSensitive = showSensitiveActionToggle;
+if (!showHITL && !showSensitive) {
return null;
}
return (
<div className={cn("flex gap-1", className)}>
-{showHITLToggle && (
+{showHITL && (
<SafeModeIconButton
isEnabled={currentHITLSafeMode}
label="Human-in-the-loop"
@@ -93,7 +101,7 @@ export function SafeModeToggle({ graph, className }: Props) {
isPending={isPending}
/>
)}
-{showSensitiveActionToggle && (
+{showSensitive && (
<SafeModeIconButton
isEnabled={currentSensitiveActionSafeMode}
label="Sensitive actions"

View File

@@ -8,6 +8,8 @@ import { useGetV2GetUserProfile } from "@/app/api/__generated__/endpoints/store/
import { LibraryAgent } from "@/app/api/__generated__/models/libraryAgent";
import { okData } from "@/app/api/helpers";
import { useToast } from "@/components/molecules/Toast/use-toast";
+import { isLogoutInProgress } from "@/lib/autogpt-server-api/helpers";
+import { useSupabase } from "@/lib/supabase/hooks/useSupabase";
import { updateFavoriteInQueries } from "./helpers";
interface Props {
@@ -23,10 +25,14 @@ export function useLibraryAgentCard({ agent }: Props) {
const { toast } = useToast();
const queryClient = getQueryClient();
const { mutateAsync: updateLibraryAgent } = usePatchV2UpdateLibraryAgent();
+const { user, isLoggedIn } = useSupabase();
+const logoutInProgress = isLogoutInProgress();
const { data: profile } = useGetV2GetUserProfile({
query: {
select: okData,
+enabled: isLoggedIn && !!user && !logoutInProgress,
+queryKey: ["/api/store/profile", user?.id],
},
});
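
The same enable/queryKey pattern shows up in a few places in this compare: queries that depend on the signed-in user are disabled during logout and keyed by the user id, so they neither fire with a stale token nor reuse another user's cache entry. A generic sketch of the pattern with plain react-query; the endpoint, `queryFn`, and types below are illustrative placeholders, not the generated API client:

```tsx
import { useQuery } from "@tanstack/react-query";
import { isLogoutInProgress } from "@/lib/autogpt-server-api/helpers";

// Sketch of the gating pattern used above; endpoint and types are illustrative.
export function useUserScopedProfile(
  user: { id: string } | null,
  isLoggedIn: boolean,
) {
  const logoutInProgress = isLogoutInProgress();

  return useQuery({
    // Keying by user id keeps one user's cached profile from leaking to another.
    queryKey: ["/api/store/profile", user?.id],
    queryFn: () => fetch("/api/store/profile").then((res) => res.json()),
    // Never fire while logged out, while the user is unknown, or mid-logout,
    // so sign-out does not race a request that would 401 and surface an error.
    enabled: isLoggedIn && !!user && !logoutInProgress,
  });
}
```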

View File

@@ -1,6 +1,8 @@
import { useToast } from "@/components/molecules/Toast/use-toast";
+import { getHomepageRoute } from "@/lib/constants";
import { useSupabase } from "@/lib/supabase/hooks/useSupabase";
import { environment } from "@/services/environment";
+import { Flag, useGetFlag } from "@/services/feature-flags/use-get-flag";
import { loginFormSchema, LoginProvider } from "@/types/auth";
import { zodResolver } from "@hookform/resolvers/zod";
import { useRouter, useSearchParams } from "next/navigation";
@@ -20,15 +22,17 @@ export function useLoginPage() {
const [isGoogleLoading, setIsGoogleLoading] = useState(false);
const [showNotAllowedModal, setShowNotAllowedModal] = useState(false);
const isCloudEnv = environment.isCloud();
+const isChatEnabled = useGetFlag(Flag.CHAT);
+const homepageRoute = getHomepageRoute(isChatEnabled);
// Get redirect destination from 'next' query parameter
const nextUrl = searchParams.get("next");
useEffect(() => {
if (isLoggedIn && !isLoggingIn) {
-router.push(nextUrl || "/marketplace");
+router.push(nextUrl || homepageRoute);
}
-}, [isLoggedIn, isLoggingIn, nextUrl, router]);
+}, [homepageRoute, isLoggedIn, isLoggingIn, nextUrl, router]);
const form = useForm<z.infer<typeof loginFormSchema>>({
resolver: zodResolver(loginFormSchema),
@@ -98,7 +102,7 @@ export function useLoginPage() {
} else if (result.onboarding) {
router.replace("/onboarding");
} else {
-router.replace("/marketplace");
+router.replace(homepageRoute);
}
} catch (error) {
toast({

View File

@@ -0,0 +1,15 @@
import { expect, test } from "vitest";
import { render, screen } from "@/tests/integrations/test-utils";
import { MainMarkeplacePage } from "../MainMarketplacePage";
import { server } from "@/mocks/mock-server";
import { getDeleteV2DeleteStoreSubmissionMockHandler422 } from "@/app/api/__generated__/endpoints/store/store.msw";
// Only for CI testing purposes; will be removed in a future PR
test("MainMarketplacePage", async () => {
server.use(getDeleteV2DeleteStoreSubmissionMockHandler422());
render(<MainMarkeplacePage />);
expect(
await screen.findByText("Featured agents", { exact: false }),
).toBeDefined();
});
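
The test imports `render` and `screen` from a shared `@/tests/integrations/test-utils` module rather than from Testing Library directly, so the component mounts with the providers it expects. A minimal sketch of what such a wrapper usually contains; the provider list is an assumption about that file, not a copy of it:

```tsx
import { render as rtlRender, screen } from "@testing-library/react";
import { QueryClient, QueryClientProvider } from "@tanstack/react-query";
import type { ReactElement } from "react";

// Sketch of a test-utils wrapper; the real file may add routing, theming, etc.
export function render(ui: ReactElement) {
  const queryClient = new QueryClient({
    // Fail fast in tests instead of retrying against the MSW mock server.
    defaultOptions: { queries: { retry: false } },
  });
  return rtlRender(
    <QueryClientProvider client={queryClient}>{ui}</QueryClientProvider>,
  );
}

export { screen };
```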

View File

@@ -3,12 +3,14 @@
import { useGetV2GetUserProfile } from "@/app/api/__generated__/endpoints/store/store";
import { ProfileInfoForm } from "@/components/__legacy__/ProfileInfoForm";
import { ErrorCard } from "@/components/molecules/ErrorCard/ErrorCard";
+import { isLogoutInProgress } from "@/lib/autogpt-server-api/helpers";
import { ProfileDetails } from "@/lib/autogpt-server-api/types";
import { useSupabase } from "@/lib/supabase/hooks/useSupabase";
import { ProfileLoading } from "./ProfileLoading";
export default function UserProfilePage() {
const { user } = useSupabase();
+const logoutInProgress = isLogoutInProgress();
const {
data: profile,
@@ -18,7 +20,7 @@ export default function UserProfilePage() {
refetch,
} = useGetV2GetUserProfile<ProfileDetails | null>({
query: {
-enabled: !!user,
+enabled: !!user && !logoutInProgress,
select: (res) => {
if (res.status === 200) {
return {

View File

@@ -1,5 +1,6 @@
"use server"; "use server";
import { getHomepageRoute } from "@/lib/constants";
import { getServerSupabase } from "@/lib/supabase/server/getServerSupabase"; import { getServerSupabase } from "@/lib/supabase/server/getServerSupabase";
import { signupFormSchema } from "@/types/auth"; import { signupFormSchema } from "@/types/auth";
import * as Sentry from "@sentry/nextjs"; import * as Sentry from "@sentry/nextjs";
@@ -11,6 +12,7 @@ export async function signup(
password: string, password: string,
confirmPassword: string, confirmPassword: string,
agreeToTerms: boolean, agreeToTerms: boolean,
isChatEnabled: boolean,
) { ) {
try { try {
const parsed = signupFormSchema.safeParse({ const parsed = signupFormSchema.safeParse({
@@ -58,7 +60,9 @@ export async function signup(
} }
const isOnboardingEnabled = await shouldShowOnboarding(); const isOnboardingEnabled = await shouldShowOnboarding();
const next = isOnboardingEnabled ? "/onboarding" : "/"; const next = isOnboardingEnabled
? "/onboarding"
: getHomepageRoute(isChatEnabled);
return { success: true, next }; return { success: true, next };
} catch (err) { } catch (err) {

View File

@@ -1,6 +1,8 @@
import { useToast } from "@/components/molecules/Toast/use-toast";
+import { getHomepageRoute } from "@/lib/constants";
import { useSupabase } from "@/lib/supabase/hooks/useSupabase";
import { environment } from "@/services/environment";
+import { Flag, useGetFlag } from "@/services/feature-flags/use-get-flag";
import { LoginProvider, signupFormSchema } from "@/types/auth";
import { zodResolver } from "@hookform/resolvers/zod";
import { useRouter, useSearchParams } from "next/navigation";
@@ -20,15 +22,17 @@ export function useSignupPage() {
const [isGoogleLoading, setIsGoogleLoading] = useState(false);
const [showNotAllowedModal, setShowNotAllowedModal] = useState(false);
const isCloudEnv = environment.isCloud();
+const isChatEnabled = useGetFlag(Flag.CHAT);
+const homepageRoute = getHomepageRoute(isChatEnabled);
// Get redirect destination from 'next' query parameter
const nextUrl = searchParams.get("next");
useEffect(() => {
if (isLoggedIn && !isSigningUp) {
-router.push(nextUrl || "/marketplace");
+router.push(nextUrl || homepageRoute);
}
-}, [isLoggedIn, isSigningUp, nextUrl, router]);
+}, [homepageRoute, isLoggedIn, isSigningUp, nextUrl, router]);
const form = useForm<z.infer<typeof signupFormSchema>>({
resolver: zodResolver(signupFormSchema),
@@ -104,6 +108,7 @@ export function useSignupPage() {
data.password,
data.confirmPassword,
data.agreeToTerms,
+isChatEnabled === true,
);
setIsLoading(false);
@@ -129,7 +134,7 @@ export function useSignupPage() {
}
// Prefer the URL's next parameter, then result.next (for onboarding), then default
-const redirectTo = nextUrl || result.next || "/";
+const redirectTo = nextUrl || result.next || homepageRoute;
router.replace(redirectTo);
} catch (error) {
setIsLoading(false);

View File

@@ -4,12 +4,12 @@ import {
getServerAuthToken,
} from "@/lib/autogpt-server-api/helpers";
-import { transformDates } from "./date-transformer";
-import { environment } from "@/services/environment";
import {
IMPERSONATION_HEADER_NAME,
IMPERSONATION_STORAGE_KEY,
} from "@/lib/constants";
+import { environment } from "@/services/environment";
+import { transformDates } from "./date-transformer";
const FRONTEND_BASE_URL =
process.env.NEXT_PUBLIC_FRONTEND_BASE_URL || "http://localhost:3000";

View File

@@ -1022,7 +1022,7 @@
"get": { "get": {
"tags": ["v2", "chat", "chat"], "tags": ["v2", "chat", "chat"],
"summary": "Get Session", "summary": "Get Session",
"description": "Retrieve the details of a specific chat session.\n\nLooks up a chat session by ID for the given user (if authenticated) and returns all session data including messages.\n\nArgs:\n session_id: The unique identifier for the desired chat session.\n user_id: The optional authenticated user ID, or None for anonymous access.\n\nReturns:\n SessionDetailResponse: Details for the requested session; raises NotFoundError if not found.", "description": "Retrieve the details of a specific chat session.\n\nLooks up a chat session by ID for the given user (if authenticated) and returns all session data including messages.\n\nArgs:\n session_id: The unique identifier for the desired chat session.\n user_id: The optional authenticated user ID, or None for anonymous access.\n\nReturns:\n SessionDetailResponse: Details for the requested session, or None if not found.",
"operationId": "getV2GetSession", "operationId": "getV2GetSession",
"security": [{ "HTTPBearerJWT": [] }], "security": [{ "HTTPBearerJWT": [] }],
"parameters": [ "parameters": [
@@ -9411,12 +9411,6 @@
], ],
"title": "Reviewed Data", "title": "Reviewed Data",
"description": "Optional edited data (ignored if approved=False)" "description": "Optional edited data (ignored if approved=False)"
},
"auto_approve_future": {
"type": "boolean",
"title": "Auto Approve Future",
"description": "If true and this review is approved, future executions of this same block (node) will be automatically approved. This only affects approved reviews.",
"default": false
} }
}, },
"type": "object", "type": "object",
@@ -9436,7 +9430,7 @@
"type": "object", "type": "object",
"required": ["reviews"], "required": ["reviews"],
"title": "ReviewRequest", "title": "ReviewRequest",
"description": "Request model for processing ALL pending reviews for an execution.\n\nThis request must include ALL pending reviews for a graph execution.\nEach review will be either approved (with optional data modifications)\nor rejected (data ignored). The execution will resume only after ALL reviews are processed.\n\nEach review item can individually specify whether to auto-approve future executions\nof the same block via the `auto_approve_future` field on ReviewItem." "description": "Request model for processing ALL pending reviews for an execution.\n\nThis request must include ALL pending reviews for a graph execution.\nEach review will be either approved (with optional data modifications)\nor rejected (data ignored). The execution will resume only after ALL reviews are processed."
}, },
"ReviewResponse": { "ReviewResponse": {
"properties": { "properties": {

View File

@@ -141,52 +141,6 @@
} }
} }
@keyframes shimmer {
0% {
background-position: -200% 0;
}
100% {
background-position: 200% 0;
}
}
@keyframes l3 {
25% {
background-position:
0 0,
100% 100%,
100% calc(100% - 5px);
}
50% {
background-position:
0 100%,
100% 100%,
0 calc(100% - 5px);
}
75% {
background-position:
0 100%,
100% 0,
100% 5px;
}
}
.loader {
width: 80px;
height: 70px;
border: 5px solid rgb(241 245 249);
padding: 0 8px;
box-sizing: border-box;
background:
linear-gradient(rgb(15 23 42) 0 0) 0 0/8px 20px,
linear-gradient(rgb(15 23 42) 0 0) 100% 0/8px 20px,
radial-gradient(farthest-side, rgb(15 23 42) 90%, #0000) 0 5px/8px 8px
content-box,
transparent;
background-repeat: no-repeat;
animation: l3 2s infinite linear;
}
input[type="number"]::-webkit-outer-spin-button, input[type="number"]::-webkit-outer-spin-button,
input[type="number"]::-webkit-inner-spin-button { input[type="number"]::-webkit-inner-spin-button {
-webkit-appearance: none; -webkit-appearance: none;

View File

@@ -1,5 +1,27 @@
-import { redirect } from "next/navigation";
+"use client";
+import { getHomepageRoute } from "@/lib/constants";
+import { Flag, useGetFlag } from "@/services/feature-flags/use-get-flag";
+import { useRouter } from "next/navigation";
+import { useEffect } from "react";
export default function Page() {
-redirect("/marketplace");
+const isChatEnabled = useGetFlag(Flag.CHAT);
+const router = useRouter();
+const homepageRoute = getHomepageRoute(isChatEnabled);
+const envEnabled = process.env.NEXT_PUBLIC_LAUNCHDARKLY_ENABLED === "true";
+const clientId = process.env.NEXT_PUBLIC_LAUNCHDARKLY_CLIENT_ID;
+const isLaunchDarklyConfigured = envEnabled && Boolean(clientId);
+const isFlagReady =
+!isLaunchDarklyConfigured || typeof isChatEnabled === "boolean";
+useEffect(
+function redirectToHomepage() {
+if (!isFlagReady) return;
+router.replace(homepageRoute);
+},
+[homepageRoute, isFlagReady, router],
+);
+return null;
}
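
The readiness check exists because a LaunchDarkly-backed flag is typically `undefined` until the client has bootstrapped; redirecting before that could briefly send a chat-enabled user to `/marketplace`. A small sketch of the same guard extracted as a helper; the assumed `useGetFlag` return type is not taken from its source:

```ts
// Sketch: treat a flag as resolved only once it is an actual boolean,
// unless LaunchDarkly is not configured at all (then there is nothing to wait for).
export function isFlagResolved(
  flagValue: unknown,
  isLaunchDarklyConfigured: boolean,
): boolean {
  if (!isLaunchDarklyConfigured) return true;
  return typeof flagValue === "boolean";
}
```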

View File

@@ -1,81 +0,0 @@
// import { render, screen } from "@testing-library/react";
// import { describe, expect, it } from "vitest";
// import { Badge } from "./Badge";
// describe("Badge Component", () => {
// it("renders badge with content", () => {
// render(<Badge variant="success">Success</Badge>);
// expect(screen.getByText("Success")).toBeInTheDocument();
// });
// it("applies correct variant styles", () => {
// const { rerender } = render(<Badge variant="success">Success</Badge>);
// let badge = screen.getByText("Success");
// expect(badge).toHaveClass("bg-green-100", "text-green-800");
// rerender(<Badge variant="error">Error</Badge>);
// badge = screen.getByText("Error");
// expect(badge).toHaveClass("bg-red-100", "text-red-800");
// rerender(<Badge variant="info">Info</Badge>);
// badge = screen.getByText("Info");
// expect(badge).toHaveClass("bg-slate-100", "text-slate-800");
// });
// it("applies custom className", () => {
// render(
// <Badge variant="success" className="custom-class">
// Success
// </Badge>,
// );
// const badge = screen.getByText("Success");
// expect(badge).toHaveClass("custom-class");
// });
// it("renders as span element", () => {
// render(<Badge variant="success">Success</Badge>);
// const badge = screen.getByText("Success");
// expect(badge.tagName).toBe("SPAN");
// });
// it("renders children correctly", () => {
// render(
// <Badge variant="success">
// <span>Custom</span> Content
// </Badge>,
// );
// expect(screen.getByText("Custom")).toBeInTheDocument();
// expect(screen.getByText("Content")).toBeInTheDocument();
// });
// it("supports all badge variants", () => {
// const variants = ["success", "error", "info"] as const;
// variants.forEach((variant) => {
// const { unmount } = render(
// <Badge variant={variant} data-testid={`badge-${variant}`}>
// {variant}
// </Badge>,
// );
// expect(screen.getByTestId(`badge-${variant}`)).toBeInTheDocument();
// unmount();
// });
// });
// it("handles long text content", () => {
// render(
// <Badge variant="info">
// Very long text that should be handled properly by the component
// </Badge>,
// );
// const badge = screen.getByText(/Very long text/);
// expect(badge).toBeInTheDocument();
// expect(badge).toHaveClass("overflow-hidden", "text-ellipsis");
// });
// });

View File

@@ -0,0 +1,81 @@
"use client";
import { Text } from "@/components/atoms/Text/Text";
import { cn } from "@/lib/utils";
import { useEffect, useRef } from "react";
import { ChatContainer } from "./components/ChatContainer/ChatContainer";
import { ChatErrorState } from "./components/ChatErrorState/ChatErrorState";
import { ChatLoader } from "./components/ChatLoader/ChatLoader";
import { useChat } from "./useChat";
export interface ChatProps {
className?: string;
urlSessionId?: string | null;
initialPrompt?: string;
onSessionNotFound?: () => void;
}
export function Chat({
className,
urlSessionId,
initialPrompt,
onSessionNotFound,
}: ChatProps) {
const hasHandledNotFoundRef = useRef(false);
const {
messages,
isLoading,
isCreating,
error,
isSessionNotFound,
sessionId,
createSession,
showLoader,
} = useChat({ urlSessionId });
useEffect(
function handleMissingSession() {
if (!onSessionNotFound) return;
if (!urlSessionId) return;
if (!isSessionNotFound || isLoading || isCreating) return;
if (hasHandledNotFoundRef.current) return;
hasHandledNotFoundRef.current = true;
onSessionNotFound();
},
[onSessionNotFound, urlSessionId, isSessionNotFound, isLoading, isCreating],
);
return (
<div className={cn("flex h-full flex-col", className)}>
{/* Main Content */}
<main className="flex min-h-0 w-full flex-1 flex-col overflow-hidden bg-[#f8f8f9]">
{/* Loading State */}
{showLoader && (isLoading || isCreating) && (
<div className="flex flex-1 items-center justify-center">
<div className="flex flex-col items-center gap-4">
<ChatLoader />
<Text variant="body" className="text-zinc-500">
Loading your chats...
</Text>
</div>
</div>
)}
{/* Error State */}
{error && !isLoading && (
<ChatErrorState error={error} onRetry={createSession} />
)}
{/* Session Content */}
{sessionId && !isLoading && !error && (
<ChatContainer
sessionId={sessionId}
initialMessages={messages}
initialPrompt={initialPrompt}
className="flex-1"
/>
)}
</main>
</div>
);
}
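
`Chat` leans entirely on `useChat` for session resolution; the component only decides what to render. A sketch of the contract the hook appears to satisfy, written as a type; the exact field types and the comments are assumptions based on the destructuring above, not the hook's real definitions:

```ts
import type { SessionDetailResponse } from "@/app/api/__generated__/models/sessionDetailResponse";

// Sketch of the shape Chat destructures from useChat; not the hook's real types.
interface UseChatResult {
  messages: SessionDetailResponse["messages"];
  isLoading: boolean;         // loading an existing session from urlSessionId
  isCreating: boolean;        // creating a brand-new session
  error: unknown;             // passed straight to ChatErrorState
  isSessionNotFound: boolean; // urlSessionId did not resolve to a session
  sessionId: string | null;
  createSession: () => void;  // also used as the retry action in the error state
  showLoader: boolean;        // gates the spinner so quick loads do not flash it
}
```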

View File

@@ -0,0 +1,15 @@
import { cn } from "@/lib/utils";
import { ReactNode } from "react";
export interface AIChatBubbleProps {
children: ReactNode;
className?: string;
}
export function AIChatBubble({ children, className }: AIChatBubbleProps) {
return (
<div className={cn("text-left text-[1rem] leading-relaxed", className)}>
{children}
</div>
);
}

View File

@@ -21,7 +21,7 @@ export function AuthPromptWidget({
message,
sessionId,
agentInfo,
-returnUrl = "/chat",
+returnUrl = "/copilot/chat",
className,
}: AuthPromptWidgetProps) {
const router = useRouter();

View File

@@ -0,0 +1,106 @@
import type { SessionDetailResponse } from "@/app/api/__generated__/models/sessionDetailResponse";
import { Button } from "@/components/atoms/Button/Button";
import { Text } from "@/components/atoms/Text/Text";
import { Dialog } from "@/components/molecules/Dialog/Dialog";
import { useBreakpoint } from "@/lib/hooks/useBreakpoint";
import { cn } from "@/lib/utils";
import { ChatInput } from "../ChatInput/ChatInput";
import { MessageList } from "../MessageList/MessageList";
import { useChatContainer } from "./useChatContainer";
export interface ChatContainerProps {
sessionId: string | null;
initialMessages: SessionDetailResponse["messages"];
initialPrompt?: string;
className?: string;
}
export function ChatContainer({
sessionId,
initialMessages,
initialPrompt,
className,
}: ChatContainerProps) {
const {
messages,
streamingChunks,
isStreaming,
stopStreaming,
isRegionBlockedModalOpen,
sendMessageWithContext,
handleRegionModalOpenChange,
handleRegionModalClose,
} = useChatContainer({
sessionId,
initialMessages,
initialPrompt,
});
const breakpoint = useBreakpoint();
const isMobile =
breakpoint === "base" || breakpoint === "sm" || breakpoint === "md";
return (
<div
className={cn(
"mx-auto flex h-full min-h-0 w-full max-w-3xl flex-col bg-[#f8f8f9]",
className,
)}
>
<Dialog
title="Service unavailable"
controlled={{
isOpen: isRegionBlockedModalOpen,
set: handleRegionModalOpenChange,
}}
onClose={handleRegionModalClose}
>
<Dialog.Content>
<div className="flex flex-col gap-4">
<Text variant="body">
This model is not available in your region. Please connect via VPN
and try again.
</Text>
<div className="flex justify-end">
<Button
type="button"
variant="primary"
onClick={handleRegionModalClose}
>
Got it
</Button>
</div>
</div>
</Dialog.Content>
</Dialog>
{/* Messages - Scrollable */}
<div className="relative flex min-h-0 flex-1 flex-col">
<div className="flex min-h-full flex-col justify-end">
<MessageList
messages={messages}
streamingChunks={streamingChunks}
isStreaming={isStreaming}
onSendMessage={sendMessageWithContext}
className="flex-1"
/>
</div>
</div>
{/* Input - Fixed at bottom */}
<div className="relative px-3 pb-6 pt-2">
<div className="pointer-events-none absolute top-[-18px] z-10 h-6 w-full bg-gradient-to-b from-transparent to-[#f8f8f9]" />
<ChatInput
onSend={sendMessageWithContext}
disabled={isStreaming || !sessionId}
isStreaming={isStreaming}
onStop={stopStreaming}
placeholder={
isMobile
? "You can search or just ask"
: 'You can search or just ask — e.g. "create a blog post outline"'
}
/>
</div>
</div>
);
}

View File

@@ -1,6 +1,6 @@
import { toast } from "sonner"; import { toast } from "sonner";
import { StreamChunk } from "../../useChatStream"; import { StreamChunk } from "../../useChatStream";
import type { HandlerDependencies } from "./useChatContainer.handlers"; import type { HandlerDependencies } from "./handlers";
import { import {
handleError, handleError,
handleLoginNeeded, handleLoginNeeded,
@@ -9,12 +9,30 @@ import {
handleTextEnded, handleTextEnded,
handleToolCallStart, handleToolCallStart,
handleToolResponse, handleToolResponse,
} from "./useChatContainer.handlers"; isRegionBlockedError,
} from "./handlers";
export function createStreamEventDispatcher( export function createStreamEventDispatcher(
deps: HandlerDependencies, deps: HandlerDependencies,
): (chunk: StreamChunk) => void { ): (chunk: StreamChunk) => void {
return function dispatchStreamEvent(chunk: StreamChunk): void { return function dispatchStreamEvent(chunk: StreamChunk): void {
if (
chunk.type === "text_chunk" ||
chunk.type === "tool_call_start" ||
chunk.type === "tool_response" ||
chunk.type === "login_needed" ||
chunk.type === "need_login" ||
chunk.type === "error"
) {
if (!deps.hasResponseRef.current) {
console.info("[ChatStream] First response chunk:", {
type: chunk.type,
sessionId: deps.sessionId,
});
}
deps.hasResponseRef.current = true;
}
switch (chunk.type) { switch (chunk.type) {
case "text_chunk": case "text_chunk":
handleTextChunk(chunk, deps); handleTextChunk(chunk, deps);
@@ -38,15 +56,23 @@ export function createStreamEventDispatcher(
break; break;
case "stream_end": case "stream_end":
console.info("[ChatStream] Stream ended:", {
sessionId: deps.sessionId,
hasResponse: deps.hasResponseRef.current,
chunkCount: deps.streamingChunksRef.current.length,
});
handleStreamEnd(chunk, deps); handleStreamEnd(chunk, deps);
break; break;
case "error": case "error":
const isRegionBlocked = isRegionBlockedError(chunk);
handleError(chunk, deps); handleError(chunk, deps);
// Show toast at dispatcher level to avoid circular dependencies // Show toast at dispatcher level to avoid circular dependencies
if (!isRegionBlocked) {
toast.error("Chat Error", { toast.error("Chat Error", {
description: chunk.message || chunk.content || "An error occurred", description: chunk.message || chunk.content || "An error occurred",
}); });
}
break; break;
case "usage": case "usage":

View File

@@ -7,15 +7,30 @@ import {
parseToolResponse, parseToolResponse,
} from "./helpers"; } from "./helpers";
function isToolCallMessage(
message: ChatMessageData,
): message is Extract<ChatMessageData, { type: "tool_call" }> {
return message.type === "tool_call";
}
export interface HandlerDependencies { export interface HandlerDependencies {
setHasTextChunks: Dispatch<SetStateAction<boolean>>; setHasTextChunks: Dispatch<SetStateAction<boolean>>;
setStreamingChunks: Dispatch<SetStateAction<string[]>>; setStreamingChunks: Dispatch<SetStateAction<string[]>>;
streamingChunksRef: MutableRefObject<string[]>; streamingChunksRef: MutableRefObject<string[]>;
hasResponseRef: MutableRefObject<boolean>;
setMessages: Dispatch<SetStateAction<ChatMessageData[]>>; setMessages: Dispatch<SetStateAction<ChatMessageData[]>>;
setIsStreamingInitiated: Dispatch<SetStateAction<boolean>>; setIsStreamingInitiated: Dispatch<SetStateAction<boolean>>;
setIsRegionBlockedModalOpen: Dispatch<SetStateAction<boolean>>;
sessionId: string; sessionId: string;
} }
export function isRegionBlockedError(chunk: StreamChunk): boolean {
if (chunk.code === "MODEL_NOT_AVAILABLE_REGION") return true;
const message = chunk.message || chunk.content;
if (typeof message !== "string") return false;
return message.toLowerCase().includes("not available in your region");
}
export function handleTextChunk(chunk: StreamChunk, deps: HandlerDependencies) { export function handleTextChunk(chunk: StreamChunk, deps: HandlerDependencies) {
if (!chunk.content) return; if (!chunk.content) return;
deps.setHasTextChunks(true); deps.setHasTextChunks(true);
@@ -30,16 +45,17 @@ export function handleTextEnded(
_chunk: StreamChunk, _chunk: StreamChunk,
deps: HandlerDependencies, deps: HandlerDependencies,
) { ) {
console.log("[Text Ended] Saving streamed text as assistant message");
const completedText = deps.streamingChunksRef.current.join(""); const completedText = deps.streamingChunksRef.current.join("");
if (completedText.trim()) { if (completedText.trim()) {
deps.setMessages((prev) => {
const assistantMessage: ChatMessageData = { const assistantMessage: ChatMessageData = {
type: "message", type: "message",
role: "assistant", role: "assistant",
content: completedText, content: completedText,
timestamp: new Date(), timestamp: new Date(),
}; };
deps.setMessages((prev) => [...prev, assistantMessage]); return [...prev, assistantMessage];
});
} }
deps.setStreamingChunks([]); deps.setStreamingChunks([]);
deps.streamingChunksRef.current = []; deps.streamingChunksRef.current = [];
@@ -50,30 +66,45 @@ export function handleToolCallStart(
chunk: StreamChunk, chunk: StreamChunk,
deps: HandlerDependencies, deps: HandlerDependencies,
) { ) {
const toolCallMessage: ChatMessageData = { const toolCallMessage: Extract<ChatMessageData, { type: "tool_call" }> = {
type: "tool_call", type: "tool_call",
toolId: chunk.tool_id || `tool-${Date.now()}-${chunk.idx || 0}`, toolId: chunk.tool_id || `tool-${Date.now()}-${chunk.idx || 0}`,
toolName: chunk.tool_name || "Executing...", toolName: chunk.tool_name || "Executing",
arguments: chunk.arguments || {}, arguments: chunk.arguments || {},
timestamp: new Date(), timestamp: new Date(),
}; };
deps.setMessages((prev) => [...prev, toolCallMessage]);
console.log("[Tool Call Start]", { function updateToolCallMessages(prev: ChatMessageData[]) {
toolId: toolCallMessage.toolId, const existingIndex = prev.findIndex(function findToolCallIndex(msg) {
toolName: toolCallMessage.toolName, return isToolCallMessage(msg) && msg.toolId === toolCallMessage.toolId;
timestamp: new Date().toISOString(),
}); });
if (existingIndex === -1) {
return [...prev, toolCallMessage];
}
const nextMessages = [...prev];
const existing = nextMessages[existingIndex];
if (!isToolCallMessage(existing)) return prev;
const nextArguments =
toolCallMessage.arguments &&
Object.keys(toolCallMessage.arguments).length > 0
? toolCallMessage.arguments
: existing.arguments;
nextMessages[existingIndex] = {
...existing,
toolName: toolCallMessage.toolName || existing.toolName,
arguments: nextArguments,
timestamp: toolCallMessage.timestamp,
};
return nextMessages;
}
deps.setMessages(updateToolCallMessages);
} }
export function handleToolResponse( export function handleToolResponse(
chunk: StreamChunk, chunk: StreamChunk,
deps: HandlerDependencies, deps: HandlerDependencies,
) { ) {
console.log("[Tool Response] Received:", {
toolId: chunk.tool_id,
toolName: chunk.tool_name,
timestamp: new Date().toISOString(),
});
let toolName = chunk.tool_name || "unknown"; let toolName = chunk.tool_name || "unknown";
if (!chunk.tool_name || chunk.tool_name === "unknown") { if (!chunk.tool_name || chunk.tool_name === "unknown") {
deps.setMessages((prev) => { deps.setMessages((prev) => {
@@ -127,22 +158,15 @@ export function handleToolResponse(
const toolCallIndex = prev.findIndex( const toolCallIndex = prev.findIndex(
(msg) => msg.type === "tool_call" && msg.toolId === chunk.tool_id, (msg) => msg.type === "tool_call" && msg.toolId === chunk.tool_id,
); );
const hasResponse = prev.some(
(msg) => msg.type === "tool_response" && msg.toolId === chunk.tool_id,
);
if (hasResponse) return prev;
if (toolCallIndex !== -1) { if (toolCallIndex !== -1) {
const newMessages = [...prev]; const newMessages = [...prev];
newMessages[toolCallIndex] = responseMessage; newMessages.splice(toolCallIndex + 1, 0, responseMessage);
console.log(
"[Tool Response] Replaced tool_call with matching tool_id:",
chunk.tool_id,
"at index:",
toolCallIndex,
);
return newMessages; return newMessages;
} }
console.warn(
"[Tool Response] No tool_call found with tool_id:",
chunk.tool_id,
"appending instead",
);
return [...prev, responseMessage]; return [...prev, responseMessage];
}); });
} }
@@ -167,55 +191,38 @@ export function handleStreamEnd(
deps: HandlerDependencies, deps: HandlerDependencies,
) { ) {
const completedContent = deps.streamingChunksRef.current.join(""); const completedContent = deps.streamingChunksRef.current.join("");
// Only save message if there are uncommitted chunks if (!completedContent.trim() && !deps.hasResponseRef.current) {
// (text_ended already saved if there were tool calls) deps.setMessages((prev) => [
...prev,
{
type: "message",
role: "assistant",
content: "No response received. Please try again.",
timestamp: new Date(),
},
]);
}
if (completedContent.trim()) { if (completedContent.trim()) {
console.log(
"[Stream End] Saving remaining streamed text as assistant message",
);
const assistantMessage: ChatMessageData = { const assistantMessage: ChatMessageData = {
type: "message", type: "message",
role: "assistant", role: "assistant",
content: completedContent, content: completedContent,
timestamp: new Date(), timestamp: new Date(),
}; };
deps.setMessages((prev) => { deps.setMessages((prev) => [...prev, assistantMessage]);
const updated = [...prev, assistantMessage];
console.log("[Stream End] Final state:", {
localMessages: updated.map((m) => ({
type: m.type,
...(m.type === "message" && {
role: m.role,
contentLength: m.content.length,
}),
...(m.type === "tool_call" && {
toolId: m.toolId,
toolName: m.toolName,
}),
...(m.type === "tool_response" && {
toolId: m.toolId,
toolName: m.toolName,
success: m.success,
}),
})),
streamingChunks: deps.streamingChunksRef.current,
timestamp: new Date().toISOString(),
});
return updated;
});
} else {
console.log("[Stream End] No uncommitted chunks, message already saved");
} }
deps.setStreamingChunks([]); deps.setStreamingChunks([]);
deps.streamingChunksRef.current = []; deps.streamingChunksRef.current = [];
deps.setHasTextChunks(false); deps.setHasTextChunks(false);
deps.setIsStreamingInitiated(false); deps.setIsStreamingInitiated(false);
console.log("[Stream End] Stream complete, messages in local state");
} }
export function handleError(chunk: StreamChunk, deps: HandlerDependencies) { export function handleError(chunk: StreamChunk, deps: HandlerDependencies) {
const errorMessage = chunk.message || chunk.content || "An error occurred"; const errorMessage = chunk.message || chunk.content || "An error occurred";
console.error("Stream error:", errorMessage); console.error("Stream error:", errorMessage);
if (isRegionBlockedError(chunk)) {
deps.setIsRegionBlockedModalOpen(true);
}
deps.setIsStreamingInitiated(false); deps.setIsStreamingInitiated(false);
deps.setHasTextChunks(false); deps.setHasTextChunks(false);
deps.setStreamingChunks([]); deps.setStreamingChunks([]);
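
The region check is intentionally lenient: it matches either the explicit error code or the message text, presumably so the modal still opens when only one of the two is populated. A short usage sketch (the chunks below are fabricated examples, not real stream payloads):

```ts
import { isRegionBlockedError } from "./handlers";

// Fabricated chunks for illustration; both match the region-blocked check:
isRegionBlockedError({ type: "error", code: "MODEL_NOT_AVAILABLE_REGION" }); // true
isRegionBlockedError({
  type: "error",
  message: "This model is not available in your region.",
}); // true

// A generic failure falls through to the normal error toast instead:
isRegionBlockedError({ type: "error", message: "Upstream timeout" }); // false
```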

View File

@@ -1,6 +1,33 @@
import { SessionKey, sessionStorage } from "@/services/storage/session-storage";
import type { ToolResult } from "@/types/chat"; import type { ToolResult } from "@/types/chat";
import type { ChatMessageData } from "../ChatMessage/useChatMessage"; import type { ChatMessageData } from "../ChatMessage/useChatMessage";
export function hasSentInitialPrompt(sessionId: string): boolean {
try {
const sent = JSON.parse(
sessionStorage.get(SessionKey.CHAT_SENT_INITIAL_PROMPTS) || "{}",
);
return sent[sessionId] === true;
} catch {
return false;
}
}
export function markInitialPromptSent(sessionId: string): void {
try {
const sent = JSON.parse(
sessionStorage.get(SessionKey.CHAT_SENT_INITIAL_PROMPTS) || "{}",
);
sent[sessionId] = true;
sessionStorage.set(
SessionKey.CHAT_SENT_INITIAL_PROMPTS,
JSON.stringify(sent),
);
} catch {
// Ignore storage errors
}
}
export function removePageContext(content: string): string { export function removePageContext(content: string): string {
// Remove "Page URL: ..." pattern at start of line (case insensitive, handles various formats) // Remove "Page URL: ..." pattern at start of line (case insensitive, handles various formats)
let cleaned = content.replace(/^\s*Page URL:\s*[^\n\r]*/gim, ""); let cleaned = content.replace(/^\s*Page URL:\s*[^\n\r]*/gim, "");
@@ -207,12 +234,22 @@ export function parseToolResponse(
if (responseType === "setup_requirements") { if (responseType === "setup_requirements") {
return null; return null;
} }
if (responseType === "understanding_updated") {
return {
type: "tool_response",
toolId,
toolName,
result: (parsedResult || result) as ToolResult,
success: true,
timestamp: timestamp || new Date(),
};
}
} }
return { return {
type: "tool_response", type: "tool_response",
toolId, toolId,
toolName, toolName,
result, result: parsedResult ? (parsedResult as ToolResult) : result,
success: true, success: true,
timestamp: timestamp || new Date(), timestamp: timestamp || new Date(),
}; };
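
`hasSentInitialPrompt` and `markInitialPromptSent` make the homepage prompt idempotent: the sending effect can re-run (remounts, back navigation, development double-invocation), and without a per-session flag the first message could be sent twice. A usage sketch of the guard, mirroring the effect that appears in `useChatContainer` below; the helper name is illustrative:

```ts
import { hasSentInitialPrompt, markInitialPromptSent } from "./helpers";

// Sketch: returns true the first time a given session asks to send its
// initial prompt, and false on every subsequent (duplicate) attempt.
export function shouldSendInitialPrompt(sessionId: string): boolean {
  if (hasSentInitialPrompt(sessionId)) return false;
  markInitialPromptSent(sessionId); // flip the flag before sending, so re-runs bail out
  return true;
}
```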

View File

@@ -1,14 +1,17 @@
import type { SessionDetailResponse } from "@/app/api/__generated__/models/sessionDetailResponse"; import type { SessionDetailResponse } from "@/app/api/__generated__/models/sessionDetailResponse";
import { useCallback, useMemo, useRef, useState } from "react"; import { useCallback, useEffect, useMemo, useRef, useState } from "react";
import { toast } from "sonner"; import { toast } from "sonner";
import { useChatStream } from "../../useChatStream"; import { useChatStream } from "../../useChatStream";
import { usePageContext } from "../../usePageContext";
import type { ChatMessageData } from "../ChatMessage/useChatMessage"; import type { ChatMessageData } from "../ChatMessage/useChatMessage";
import { createStreamEventDispatcher } from "./createStreamEventDispatcher"; import { createStreamEventDispatcher } from "./createStreamEventDispatcher";
import { import {
createUserMessage, createUserMessage,
filterAuthMessages, filterAuthMessages,
hasSentInitialPrompt,
isToolCallArray, isToolCallArray,
isValidMessage, isValidMessage,
markInitialPromptSent,
parseToolResponse, parseToolResponse,
removePageContext, removePageContext,
} from "./helpers"; } from "./helpers";
@@ -16,20 +19,45 @@ import {
interface Args { interface Args {
sessionId: string | null; sessionId: string | null;
initialMessages: SessionDetailResponse["messages"]; initialMessages: SessionDetailResponse["messages"];
initialPrompt?: string;
} }
export function useChatContainer({ sessionId, initialMessages }: Args) { export function useChatContainer({
sessionId,
initialMessages,
initialPrompt,
}: Args) {
const [messages, setMessages] = useState<ChatMessageData[]>([]); const [messages, setMessages] = useState<ChatMessageData[]>([]);
const [streamingChunks, setStreamingChunks] = useState<string[]>([]); const [streamingChunks, setStreamingChunks] = useState<string[]>([]);
const [hasTextChunks, setHasTextChunks] = useState(false); const [hasTextChunks, setHasTextChunks] = useState(false);
const [isStreamingInitiated, setIsStreamingInitiated] = useState(false); const [isStreamingInitiated, setIsStreamingInitiated] = useState(false);
const [isRegionBlockedModalOpen, setIsRegionBlockedModalOpen] =
useState(false);
const hasResponseRef = useRef(false);
const streamingChunksRef = useRef<string[]>([]); const streamingChunksRef = useRef<string[]>([]);
const { error, sendMessage: sendStreamMessage } = useChatStream(); const previousSessionIdRef = useRef<string | null>(null);
const {
error,
sendMessage: sendStreamMessage,
stopStreaming,
} = useChatStream();
const isStreaming = isStreamingInitiated || hasTextChunks; const isStreaming = isStreamingInitiated || hasTextChunks;
useEffect(() => {
if (sessionId !== previousSessionIdRef.current) {
stopStreaming(previousSessionIdRef.current ?? undefined, true);
previousSessionIdRef.current = sessionId;
setMessages([]);
setStreamingChunks([]);
streamingChunksRef.current = [];
setHasTextChunks(false);
setIsStreamingInitiated(false);
hasResponseRef.current = false;
}
}, [sessionId, stopStreaming]);
const allMessages = useMemo(() => { const allMessages = useMemo(() => {
const processedInitialMessages: ChatMessageData[] = []; const processedInitialMessages: ChatMessageData[] = [];
// Map to track tool calls by their ID so we can look up tool names for tool responses
const toolCallMap = new Map<string, string>(); const toolCallMap = new Map<string, string>();
for (const msg of initialMessages) { for (const msg of initialMessages) {
@@ -45,13 +73,9 @@ export function useChatContainer({ sessionId, initialMessages }: Args) {
? new Date(msg.timestamp as string) ? new Date(msg.timestamp as string)
: undefined; : undefined;
// Remove page context from user messages when loading existing sessions
if (role === "user") { if (role === "user") {
content = removePageContext(content); content = removePageContext(content);
// Skip user messages that become empty after removing page context if (!content.trim()) continue;
if (!content.trim()) {
continue;
}
processedInitialMessages.push({ processedInitialMessages.push({
type: "message", type: "message",
role: "user", role: "user",
@@ -61,19 +85,15 @@ export function useChatContainer({ sessionId, initialMessages }: Args) {
continue; continue;
} }
// Handle assistant messages first (before tool messages) to build tool call map
if (role === "assistant") { if (role === "assistant") {
// Strip <thinking> tags from content
content = content content = content
.replace(/<thinking>[\s\S]*?<\/thinking>/gi, "") .replace(/<thinking>[\s\S]*?<\/thinking>/gi, "")
.trim(); .trim();
// If assistant has tool calls, create tool_call messages for each
if (toolCalls && isToolCallArray(toolCalls) && toolCalls.length > 0) { if (toolCalls && isToolCallArray(toolCalls) && toolCalls.length > 0) {
for (const toolCall of toolCalls) { for (const toolCall of toolCalls) {
const toolName = toolCall.function.name; const toolName = toolCall.function.name;
const toolId = toolCall.id; const toolId = toolCall.id;
// Store tool name for later lookup
toolCallMap.set(toolId, toolName); toolCallMap.set(toolId, toolName);
try { try {
@@ -96,7 +116,6 @@ export function useChatContainer({ sessionId, initialMessages }: Args) {
}); });
} }
} }
// Only add assistant message if there's content after stripping thinking tags
if (content.trim()) { if (content.trim()) {
processedInitialMessages.push({ processedInitialMessages.push({
type: "message", type: "message",
@@ -106,7 +125,6 @@ export function useChatContainer({ sessionId, initialMessages }: Args) {
}); });
} }
} else if (content.trim()) { } else if (content.trim()) {
// Assistant message without tool calls, but with content
processedInitialMessages.push({ processedInitialMessages.push({
type: "message", type: "message",
role: "assistant", role: "assistant",
@@ -117,7 +135,6 @@ export function useChatContainer({ sessionId, initialMessages }: Args) {
continue; continue;
} }
// Handle tool messages - look up tool name from tool call map
if (role === "tool") { if (role === "tool") {
const toolCallId = (msg.tool_call_id as string) || ""; const toolCallId = (msg.tool_call_id as string) || "";
const toolName = toolCallMap.get(toolCallId) || "unknown"; const toolName = toolCallMap.get(toolCallId) || "unknown";
@@ -133,7 +150,6 @@ export function useChatContainer({ sessionId, initialMessages }: Args) {
continue; continue;
} }
// Handle other message types (system, etc.)
if (content.trim()) { if (content.trim()) {
processedInitialMessages.push({ processedInitialMessages.push({
type: "message", type: "message",
@@ -154,9 +170,10 @@ export function useChatContainer({ sessionId, initialMessages }: Args) {
context?: { url: string; content: string }, context?: { url: string; content: string },
) { ) {
if (!sessionId) { if (!sessionId) {
console.error("Cannot send message: no session ID"); console.error("[useChatContainer] Cannot send message: no session ID");
return; return;
} }
setIsRegionBlockedModalOpen(false);
if (isUserMessage) { if (isUserMessage) {
const userMessage = createUserMessage(content); const userMessage = createUserMessage(content);
setMessages((prev) => [...filterAuthMessages(prev), userMessage]); setMessages((prev) => [...filterAuthMessages(prev), userMessage]);
@@ -167,14 +184,19 @@ export function useChatContainer({ sessionId, initialMessages }: Args) {
streamingChunksRef.current = []; streamingChunksRef.current = [];
setHasTextChunks(false); setHasTextChunks(false);
setIsStreamingInitiated(true); setIsStreamingInitiated(true);
hasResponseRef.current = false;
const dispatcher = createStreamEventDispatcher({ const dispatcher = createStreamEventDispatcher({
setHasTextChunks, setHasTextChunks,
setStreamingChunks, setStreamingChunks,
streamingChunksRef, streamingChunksRef,
hasResponseRef,
setMessages, setMessages,
setIsRegionBlockedModalOpen,
sessionId, sessionId,
setIsStreamingInitiated, setIsStreamingInitiated,
}); });
try { try {
await sendStreamMessage( await sendStreamMessage(
sessionId, sessionId,
@@ -184,8 +206,12 @@ export function useChatContainer({ sessionId, initialMessages }: Args) {
context, context,
); );
} catch (err) { } catch (err) {
console.error("Failed to send message:", err); console.error("[useChatContainer] Failed to send message:", err);
setIsStreamingInitiated(false); setIsStreamingInitiated(false);
// Don't show error toast for AbortError (expected during cleanup)
if (err instanceof Error && err.name === "AbortError") return;
const errorMessage = const errorMessage =
err instanceof Error ? err.message : "Failed to send message"; err instanceof Error ? err.message : "Failed to send message";
toast.error("Failed to send message", { toast.error("Failed to send message", {
@@ -196,11 +222,63 @@ export function useChatContainer({ sessionId, initialMessages }: Args) {
[sessionId, sendStreamMessage], [sessionId, sendStreamMessage],
); );
const handleStopStreaming = useCallback(() => {
stopStreaming();
setStreamingChunks([]);
streamingChunksRef.current = [];
setHasTextChunks(false);
setIsStreamingInitiated(false);
}, [stopStreaming]);
const { capturePageContext } = usePageContext();
// Send initial prompt if provided (for new sessions from homepage)
useEffect(
function handleInitialPrompt() {
if (!initialPrompt || !sessionId) return;
if (initialMessages.length > 0) return;
if (hasSentInitialPrompt(sessionId)) return;
markInitialPromptSent(sessionId);
const context = capturePageContext();
sendMessage(initialPrompt, true, context);
},
[
initialPrompt,
sessionId,
initialMessages.length,
sendMessage,
capturePageContext,
],
);
async function sendMessageWithContext(
content: string,
isUserMessage: boolean = true,
) {
const context = capturePageContext();
await sendMessage(content, isUserMessage, context);
}
function handleRegionModalOpenChange(open: boolean) {
setIsRegionBlockedModalOpen(open);
}
function handleRegionModalClose() {
setIsRegionBlockedModalOpen(false);
}
return { return {
messages: allMessages, messages: allMessages,
streamingChunks, streamingChunks,
isStreaming, isStreaming,
error, error,
isRegionBlockedModalOpen,
setIsRegionBlockedModalOpen,
sendMessageWithContext,
handleRegionModalOpenChange,
handleRegionModalClose,
sendMessage, sendMessage,
stopStreaming: handleStopStreaming,
}; };
} }

View File

@@ -0,0 +1,103 @@
import { Button } from "@/components/atoms/Button/Button";
import { cn } from "@/lib/utils";
import { ArrowUpIcon, StopIcon } from "@phosphor-icons/react";
import { useChatInput } from "./useChatInput";
export interface Props {
onSend: (message: string) => void;
disabled?: boolean;
isStreaming?: boolean;
onStop?: () => void;
placeholder?: string;
className?: string;
}
export function ChatInput({
onSend,
disabled = false,
isStreaming = false,
onStop,
placeholder = "Type your message...",
className,
}: Props) {
const inputId = "chat-input";
const { value, setValue, handleKeyDown, handleSend, hasMultipleLines } =
useChatInput({
onSend,
disabled: disabled || isStreaming,
maxRows: 4,
inputId,
});
function handleSubmit(e: React.FormEvent<HTMLFormElement>) {
e.preventDefault();
handleSend();
}
function handleChange(e: React.ChangeEvent<HTMLTextAreaElement>) {
setValue(e.target.value);
}
return (
<form onSubmit={handleSubmit} className={cn("relative flex-1", className)}>
<div className="relative">
<div
id={`${inputId}-wrapper`}
className={cn(
"relative overflow-hidden border border-neutral-200 bg-white shadow-sm",
"focus-within:border-zinc-400 focus-within:ring-1 focus-within:ring-zinc-400",
hasMultipleLines ? "rounded-xlarge" : "rounded-full",
)}
>
<textarea
id={inputId}
aria-label="Chat message input"
value={value}
onChange={handleChange}
onKeyDown={handleKeyDown}
placeholder={placeholder}
disabled={disabled || isStreaming}
rows={1}
className={cn(
"w-full resize-none overflow-y-auto border-0 bg-transparent text-[1rem] leading-6 text-black",
"placeholder:text-zinc-400",
"focus:outline-none focus:ring-0",
"disabled:text-zinc-500",
hasMultipleLines ? "pb-6 pl-4 pr-4 pt-2" : "pb-4 pl-4 pr-14 pt-4",
)}
/>
</div>
<span id="chat-input-hint" className="sr-only">
Press Enter to send, Shift+Enter for new line
</span>
{isStreaming ? (
<Button
type="button"
variant="icon"
size="icon"
aria-label="Stop generating"
onClick={onStop}
className="absolute bottom-[7px] right-2 border-red-600 bg-red-600 text-white hover:border-red-800 hover:bg-red-800"
>
<StopIcon className="h-4 w-4" weight="bold" />
</Button>
) : (
<Button
type="submit"
variant="icon"
size="icon"
aria-label="Send message"
className={cn(
"absolute bottom-[7px] right-2 border-zinc-800 bg-zinc-800 text-white hover:border-zinc-900 hover:bg-zinc-900",
(disabled || !value.trim()) && "opacity-20",
)}
disabled={disabled || !value.trim()}
>
<ArrowUpIcon className="h-4 w-4" weight="bold" />
</Button>
)}
</div>
</form>
);
}

View File

@@ -0,0 +1,115 @@
import { KeyboardEvent, useCallback, useEffect, useState } from "react";
interface UseChatInputArgs {
onSend: (message: string) => void;
disabled?: boolean;
maxRows?: number;
inputId?: string;
}
export function useChatInput({
onSend,
disabled = false,
maxRows = 5,
inputId = "chat-input",
}: UseChatInputArgs) {
const [value, setValue] = useState("");
const [hasMultipleLines, setHasMultipleLines] = useState(false);
useEffect(() => {
const textarea = document.getElementById(inputId) as HTMLTextAreaElement;
const wrapper = document.getElementById(
`${inputId}-wrapper`,
) as HTMLDivElement;
if (!textarea || !wrapper) return;
const isEmpty = !value.trim();
const lines = value.split("\n").length;
const hasExplicitNewlines = lines > 1;
const computedStyle = window.getComputedStyle(textarea);
const lineHeight = parseInt(computedStyle.lineHeight, 10);
const paddingTop = parseInt(computedStyle.paddingTop, 10);
const paddingBottom = parseInt(computedStyle.paddingBottom, 10);
const singleLinePadding = paddingTop + paddingBottom;
textarea.style.height = "auto";
const scrollHeight = textarea.scrollHeight;
const singleLineHeight = lineHeight + singleLinePadding;
const isMultiLine =
hasExplicitNewlines || scrollHeight > singleLineHeight + 2;
setHasMultipleLines(isMultiLine);
if (isEmpty) {
wrapper.style.height = `${singleLineHeight}px`;
wrapper.style.maxHeight = "";
textarea.style.height = `${singleLineHeight}px`;
textarea.style.maxHeight = "";
textarea.style.overflowY = "hidden";
return;
}
if (isMultiLine) {
const wrapperMaxHeight = 196;
const currentMultilinePadding = paddingTop + paddingBottom;
const contentMaxHeight = wrapperMaxHeight - currentMultilinePadding;
const minMultiLineHeight = lineHeight * 2 + currentMultilinePadding;
const contentHeight = scrollHeight;
const targetWrapperHeight = Math.min(
Math.max(contentHeight + currentMultilinePadding, minMultiLineHeight),
wrapperMaxHeight,
);
wrapper.style.height = `${targetWrapperHeight}px`;
wrapper.style.maxHeight = `${wrapperMaxHeight}px`;
textarea.style.height = `${contentHeight}px`;
textarea.style.maxHeight = `${contentMaxHeight}px`;
textarea.style.overflowY =
contentHeight > contentMaxHeight ? "auto" : "hidden";
} else {
wrapper.style.height = `${singleLineHeight}px`;
wrapper.style.maxHeight = "";
textarea.style.height = `${singleLineHeight}px`;
textarea.style.maxHeight = "";
textarea.style.overflowY = "hidden";
}
}, [value, maxRows, inputId]);
const handleSend = useCallback(() => {
if (disabled || !value.trim()) return;
onSend(value.trim());
setValue("");
setHasMultipleLines(false);
const textarea = document.getElementById(inputId) as HTMLTextAreaElement;
const wrapper = document.getElementById(
`${inputId}-wrapper`,
) as HTMLDivElement;
if (textarea) {
textarea.style.height = "auto";
}
if (wrapper) {
wrapper.style.height = "";
wrapper.style.maxHeight = "";
}
}, [value, onSend, disabled, inputId]);
const handleKeyDown = useCallback(
(event: KeyboardEvent<HTMLTextAreaElement>) => {
if (event.key === "Enter" && !event.shiftKey) {
event.preventDefault();
handleSend();
}
},
[handleSend],
);
return {
value,
setValue,
handleKeyDown,
handleSend,
hasMultipleLines,
};
}
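Because the hook reads the textarea and its wrapper straight from the DOM by id, any host component must render both elements with matching ids (`chat-input` and `chat-input-wrapper` by default). A minimal sketch of that contract follows; the BareChatInput name is hypothetical, and the ChatInput component above is the real host.

// Illustrative host for useChatInput — ids must match the hook's inputId and `${inputId}-wrapper`.
function BareChatInput({ onSend }: { onSend: (message: string) => void }) {
  const { value, setValue, handleKeyDown, handleSend } = useChatInput({ onSend });

  return (
    <div id="chat-input-wrapper">
      <textarea
        id="chat-input"
        value={value}
        onChange={(e) => setValue(e.target.value)}
        onKeyDown={handleKeyDown} // Enter sends, Shift+Enter inserts a newline
      />
      <button type="button" onClick={handleSend}>
        Send
      </button>
    </div>
  );
}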

View File

@@ -0,0 +1,12 @@
import { Text } from "@/components/atoms/Text/Text";
export function ChatLoader() {
return (
<Text
variant="small"
className="bg-gradient-to-r from-neutral-600 via-neutral-500 to-neutral-600 bg-[length:200%_100%] bg-clip-text text-xs text-transparent [animation:shimmer_2s_ease-in-out_infinite]"
>
Taking a bit more time...
</Text>
);
}

View File

@@ -1,48 +1,65 @@
 "use client";

-import { useGetV2GetUserProfile } from "@/app/api/__generated__/endpoints/store/store";
-import Avatar, {
-  AvatarFallback,
-  AvatarImage,
-} from "@/components/atoms/Avatar/Avatar";
 import { Button } from "@/components/atoms/Button/Button";
 import { useSupabase } from "@/lib/supabase/hooks/useSupabase";
 import { cn } from "@/lib/utils";
 import {
-  ArrowClockwise,
+  ArrowsClockwiseIcon,
   CheckCircleIcon,
   CheckIcon,
   CopyIcon,
-  RobotIcon,
 } from "@phosphor-icons/react";
 import { useRouter } from "next/navigation";
 import { useCallback, useState } from "react";
-import { getToolActionPhrase } from "../../helpers";
 import { AgentCarouselMessage } from "../AgentCarouselMessage/AgentCarouselMessage";
+import { AIChatBubble } from "../AIChatBubble/AIChatBubble";
 import { AuthPromptWidget } from "../AuthPromptWidget/AuthPromptWidget";
 import { ChatCredentialsSetup } from "../ChatCredentialsSetup/ChatCredentialsSetup";
 import { ExecutionStartedMessage } from "../ExecutionStartedMessage/ExecutionStartedMessage";
 import { MarkdownContent } from "../MarkdownContent/MarkdownContent";
-import { MessageBubble } from "../MessageBubble/MessageBubble";
 import { NoResultsMessage } from "../NoResultsMessage/NoResultsMessage";
 import { ToolCallMessage } from "../ToolCallMessage/ToolCallMessage";
 import { ToolResponseMessage } from "../ToolResponseMessage/ToolResponseMessage";
+import { UserChatBubble } from "../UserChatBubble/UserChatBubble";
 import { useChatMessage, type ChatMessageData } from "./useChatMessage";

+function stripInternalReasoning(content: string): string {
+  const cleaned = content.replace(
+    /<internal_reasoning>[\s\S]*?<\/internal_reasoning>/gi,
+    "",
+  );
+  return cleaned.replace(/\n{3,}/g, "\n\n").trim();
+}
+
+function getDisplayContent(message: ChatMessageData, isUser: boolean): string {
+  if (message.type !== "message") return "";
+  if (isUser) return message.content;
+  return stripInternalReasoning(message.content);
+}
+
 export interface ChatMessageProps {
   message: ChatMessageData;
+  messages?: ChatMessageData[];
+  index?: number;
+  isStreaming?: boolean;
   className?: string;
   onDismissLogin?: () => void;
   onDismissCredentials?: () => void;
   onSendMessage?: (content: string, isUserMessage?: boolean) => void;
   agentOutput?: ChatMessageData;
+  isFinalMessage?: boolean;
 }

 export function ChatMessage({
   message,
+  messages = [],
+  index = -1,
+  isStreaming = false,
   className,
   onDismissCredentials,
   onSendMessage,
   agentOutput,
+  isFinalMessage = true,
 }: ChatMessageProps) {
   const { user } = useSupabase();
   const router = useRouter();
@@ -54,14 +71,7 @@ export function ChatMessage({
     isLoginNeeded,
     isCredentialsNeeded,
   } = useChatMessage(message);
-
-  const { data: profile } = useGetV2GetUserProfile({
-    query: {
-      select: (res) => (res.status === 200 ? res.data : null),
-      enabled: isUser && !!user,
-      queryKey: ["/api/store/profile", user?.id],
-    },
-  });
+  const displayContent = getDisplayContent(message, isUser);

   const handleAllCredentialsComplete = useCallback(
     function handleAllCredentialsComplete() {
@@ -87,17 +97,25 @@ export function ChatMessage({
     }
   }

-  const handleCopy = useCallback(async () => {
-    if (message.type !== "message") return;
-    try {
-      await navigator.clipboard.writeText(message.content);
-      setCopied(true);
-      setTimeout(() => setCopied(false), 2000);
-    } catch (error) {
-      console.error("Failed to copy:", error);
-    }
-  }, [message]);
+  const handleCopy = useCallback(
+    async function handleCopy() {
+      if (message.type !== "message") return;
+      if (!displayContent) return;
+      try {
+        await navigator.clipboard.writeText(displayContent);
+        setCopied(true);
+        setTimeout(() => setCopied(false), 2000);
+      } catch (error) {
+        console.error("Failed to copy:", error);
+      }
+    },
+    [displayContent, message],
+  );
+
+  function isLongResponse(content: string): boolean {
+    return content.split("\n").length > 5;
+  }

   const handleTryAgain = useCallback(() => {
     if (message.type !== "message" || !onSendMessage) return;
@@ -169,9 +187,45 @@ export function ChatMessage({
   // Render tool call messages
   if (isToolCall && message.type === "tool_call") {
+    // Check if this tool call is currently streaming
+    // A tool call is streaming if:
+    // 1. isStreaming is true
+    // 2. This is the last tool_call message
+    // 3. There's no tool_response for this tool call yet
+    const isToolCallStreaming =
+      isStreaming &&
+      index >= 0 &&
+      (() => {
+        // Find the last tool_call index
+        let lastToolCallIndex = -1;
+        for (let i = messages.length - 1; i >= 0; i--) {
+          if (messages[i].type === "tool_call") {
+            lastToolCallIndex = i;
+            break;
+          }
+        }
+
+        // Check if this is the last tool_call and there's no response yet
+        if (index === lastToolCallIndex) {
+          // Check if there's a tool_response for this tool call
+          const hasResponse = messages
+            .slice(index + 1)
+            .some(
+              (msg) =>
+                msg.type === "tool_response" && msg.toolId === message.toolId,
+            );
+          return !hasResponse;
+        }
+
+        return false;
+      })();
+
     return (
       <div className={cn("px-4 py-2", className)}>
-        <ToolCallMessage toolName={message.toolName} />
+        <ToolCallMessage
+          toolId={message.toolId}
+          toolName={message.toolName}
+          arguments={message.arguments}
+          isStreaming={isToolCallStreaming}
+        />
       </div>
     );
   }
@@ -218,27 +272,11 @@ export function ChatMessage({
   // Render tool response messages (but skip agent_output if it's being rendered inside assistant message)
   if (isToolResponse && message.type === "tool_response") {
-    // Check if this is an agent_output that should be rendered inside assistant message
-    if (message.result) {
-      let parsedResult: Record<string, unknown> | null = null;
-      try {
-        parsedResult =
-          typeof message.result === "string"
-            ? JSON.parse(message.result)
-            : (message.result as Record<string, unknown>);
-      } catch {
-        parsedResult = null;
-      }
-
-      if (parsedResult?.type === "agent_output") {
-        // Skip rendering - this will be rendered inside the assistant message
-        return null;
-      }
-    }
-
     return (
       <div className={cn("px-4 py-2", className)}>
         <ToolResponseMessage
-          toolName={getToolActionPhrase(message.toolName)}
+          toolId={message.toolId}
+          toolName={message.toolName}
           result={message.result}
         />
       </div>
@@ -256,40 +294,33 @@
         )}
       >
         <div className="flex w-full max-w-3xl gap-3">
-          {!isUser && (
-            <div className="flex-shrink-0">
-              <div className="flex h-7 w-7 items-center justify-center rounded-lg bg-indigo-500">
-                <RobotIcon className="h-4 w-4 text-indigo-50" />
-              </div>
-            </div>
-          )}
           <div
             className={cn(
               "flex min-w-0 flex-1 flex-col",
               isUser && "items-end",
             )}
           >
-            <MessageBubble variant={isUser ? "user" : "assistant"}>
-              <MarkdownContent content={message.content} />
-              {agentOutput &&
-                agentOutput.type === "tool_response" &&
-                !isUser && (
-                  <div className="mt-4">
-                    <ToolResponseMessage
-                      toolName={
-                        agentOutput.toolName
-                          ? getToolActionPhrase(agentOutput.toolName)
-                          : "Agent Output"
-                      }
-                      result={agentOutput.result}
-                    />
-                  </div>
-                )}
-            </MessageBubble>
+            {isUser ? (
+              <UserChatBubble>
+                <MarkdownContent content={displayContent} />
+              </UserChatBubble>
+            ) : (
+              <AIChatBubble>
+                <MarkdownContent content={displayContent} />
+                {agentOutput && agentOutput.type === "tool_response" && (
+                  <div className="mt-4">
+                    <ToolResponseMessage
+                      toolId={agentOutput.toolId}
+                      toolName={agentOutput.toolName || "Agent Output"}
+                      result={agentOutput.result}
+                    />
+                  </div>
+                )}
+              </AIChatBubble>
+            )}
             <div
               className={cn(
-                "mt-1 flex gap-1",
+                "flex gap-0",
                 isUser ? "justify-end" : "justify-start",
              )}
            >
@@ -300,9 +331,10 @@ export function ChatMessage({
                 onClick={handleTryAgain}
                 aria-label="Try again"
               >
-                <ArrowClockwise className="size-3 text-neutral-500" />
+                <ArrowsClockwiseIcon className="size-4 text-zinc-600" />
               </Button>
             )}
+            {!isUser && isFinalMessage && isLongResponse(displayContent) && (
               <Button
                 variant="ghost"
                 size="icon"
@@ -310,29 +342,16 @@ export function ChatMessage({
                 aria-label="Copy message"
               >
                 {copied ? (
-                  <CheckIcon className="size-3 text-green-600" />
+                  <CheckIcon className="size-4 text-green-600" />
                 ) : (
-                  <CopyIcon className="size-3 text-neutral-500" />
+                  <CopyIcon className="size-4 text-zinc-600" />
                 )}
               </Button>
+            )}
           </div>
         </div>
-        {isUser && (
-          <div className="flex-shrink-0">
-            <Avatar className="h-7 w-7">
-              <AvatarImage
-                src={profile?.avatar_url ?? ""}
-                alt={profile?.username ?? "User"}
-              />
-              <AvatarFallback className="rounded-lg bg-neutral-200 text-neutral-600">
-                {profile?.username?.charAt(0)?.toUpperCase() || "U"}
-              </AvatarFallback>
-            </Avatar>
-          </div>
-        )}
       </div>
     </div>
   );
 }
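To make the new stripInternalReasoning behaviour concrete, here is a small worked example (the input string is invented): the internal_reasoning block is removed and the leftover run of blank lines is collapsed before trimming.

// Illustrative input/output for stripInternalReasoning — values are made up.
const raw =
  "Here is your answer.\n\n<internal_reasoning>scratch work</internal_reasoning>\n\n\nDone.";
stripInternalReasoning(raw);
// => "Here is your answer.\n\nDone."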

View File

@@ -13,10 +13,9 @@ export function MessageBubble({
   className,
 }: MessageBubbleProps) {
   const userTheme = {
-    bg: "bg-slate-900",
-    border: "border-slate-800",
-    gradient: "from-slate-900/30 via-slate-800/20 to-transparent",
-    text: "text-slate-50",
+    bg: "bg-purple-100",
+    border: "border-purple-100",
+    text: "text-slate-900",
   };

   const assistantTheme = {
@@ -40,9 +39,7 @@ export function MessageBubble({
       )}
     >
       {/* Gradient flare background */}
-      <div
-        className={cn("absolute inset-0 bg-gradient-to-br", theme.gradient)}
-      />
+      <div className={cn("absolute inset-0 bg-gradient-to-br")} />

       <div
         className={cn(
           "relative z-10 transition-all duration-500 ease-in-out",

View File

@@ -0,0 +1,119 @@
"use client";
import { cn } from "@/lib/utils";
import type { ChatMessageData } from "../ChatMessage/useChatMessage";
import { StreamingMessage } from "../StreamingMessage/StreamingMessage";
import { ThinkingMessage } from "../ThinkingMessage/ThinkingMessage";
import { LastToolResponse } from "./components/LastToolResponse/LastToolResponse";
import { MessageItem } from "./components/MessageItem/MessageItem";
import { findLastMessageIndex, shouldSkipAgentOutput } from "./helpers";
import { useMessageList } from "./useMessageList";
export interface MessageListProps {
messages: ChatMessageData[];
streamingChunks?: string[];
isStreaming?: boolean;
className?: string;
onStreamComplete?: () => void;
onSendMessage?: (content: string) => void;
}
export function MessageList({
messages,
streamingChunks = [],
isStreaming = false,
className,
onStreamComplete,
onSendMessage,
}: MessageListProps) {
const { messagesEndRef, messagesContainerRef } = useMessageList({
messageCount: messages.length,
isStreaming,
});
/**
* Keeping this here for debugging purposes 💆🏽
*/
console.log(messages);
return (
<div className="relative flex min-h-0 flex-1 flex-col">
{/* Top fade shadow */}
<div className="pointer-events-none absolute top-0 z-10 h-8 w-full bg-gradient-to-b from-[#f8f8f9] to-transparent" />
<div
ref={messagesContainerRef}
className={cn(
"flex-1 overflow-y-auto overflow-x-hidden",
"scrollbar-thin scrollbar-track-transparent scrollbar-thumb-zinc-300",
className,
)}
>
<div className="mx-auto flex min-w-0 flex-col hyphens-auto break-words py-4">
{/* Render all persisted messages */}
{(() => {
const lastAssistantMessageIndex = findLastMessageIndex(
messages,
(msg) => msg.type === "message" && msg.role === "assistant",
);
const lastToolResponseIndex = findLastMessageIndex(
messages,
(msg) => msg.type === "tool_response",
);
return messages.map((message, index) => {
// Skip agent_output tool_responses that should be rendered inside assistant messages
if (shouldSkipAgentOutput(message, messages[index - 1])) {
return null;
}
// Render last tool_response as AIChatBubble
if (
message.type === "tool_response" &&
index === lastToolResponseIndex
) {
return (
<LastToolResponse
key={index}
message={message}
prevMessage={messages[index - 1]}
/>
);
}
return (
<MessageItem
key={index}
message={message}
messages={messages}
index={index}
lastAssistantMessageIndex={lastAssistantMessageIndex}
isStreaming={isStreaming}
onSendMessage={onSendMessage}
/>
);
});
})()}
{/* Render thinking message when streaming but no chunks yet */}
{isStreaming && streamingChunks.length === 0 && <ThinkingMessage />}
{/* Render streaming message if active */}
{isStreaming && streamingChunks.length > 0 && (
<StreamingMessage
chunks={streamingChunks}
onComplete={onStreamComplete}
/>
)}
{/* Invisible div to scroll to */}
<div ref={messagesEndRef} />
</div>
</div>
{/* Bottom fade shadow */}
<div className="pointer-events-none absolute bottom-0 z-10 h-8 w-full bg-gradient-to-t from-[#f8f8f9] to-transparent" />
</div>
);
}

View File

@@ -0,0 +1,30 @@
import { AIChatBubble } from "../../../AIChatBubble/AIChatBubble";
import type { ChatMessageData } from "../../../ChatMessage/useChatMessage";
import { MarkdownContent } from "../../../MarkdownContent/MarkdownContent";
import { formatToolResponse } from "../../../ToolResponseMessage/helpers";
import { shouldSkipAgentOutput } from "../../helpers";
export interface LastToolResponseProps {
message: ChatMessageData;
prevMessage: ChatMessageData | undefined;
}
export function LastToolResponse({
message,
prevMessage,
}: LastToolResponseProps) {
if (message.type !== "tool_response") return null;
// Skip if this is an agent_output that should be rendered inside assistant message
if (shouldSkipAgentOutput(message, prevMessage)) return null;
const formattedText = formatToolResponse(message.result, message.toolName);
return (
<div className="min-w-0 overflow-x-hidden hyphens-auto break-words px-4 py-2">
<AIChatBubble>
<MarkdownContent content={formattedText} />
</AIChatBubble>
</div>
);
}

View File

@@ -0,0 +1,40 @@
import { ChatMessage } from "../../../ChatMessage/ChatMessage";
import type { ChatMessageData } from "../../../ChatMessage/useChatMessage";
import { useMessageItem } from "./useMessageItem";
export interface MessageItemProps {
message: ChatMessageData;
messages: ChatMessageData[];
index: number;
lastAssistantMessageIndex: number;
isStreaming?: boolean;
onSendMessage?: (content: string) => void;
}
export function MessageItem({
message,
messages,
index,
lastAssistantMessageIndex,
isStreaming = false,
onSendMessage,
}: MessageItemProps) {
const { messageToRender, agentOutput, isFinalMessage } = useMessageItem({
message,
messages,
index,
lastAssistantMessageIndex,
});
return (
<ChatMessage
message={messageToRender}
messages={messages}
index={index}
isStreaming={isStreaming}
onSendMessage={onSendMessage}
agentOutput={agentOutput}
isFinalMessage={isFinalMessage}
/>
);
}

View File

@@ -0,0 +1,62 @@
import type { ChatMessageData } from "../../../ChatMessage/useChatMessage";
import { isAgentOutputResult, isToolOutputPattern } from "../../helpers";
export interface UseMessageItemArgs {
message: ChatMessageData;
messages: ChatMessageData[];
index: number;
lastAssistantMessageIndex: number;
}
export function useMessageItem({
message,
messages,
index,
lastAssistantMessageIndex,
}: UseMessageItemArgs) {
let agentOutput: ChatMessageData | undefined;
let messageToRender: ChatMessageData = message;
// Check if assistant message follows a tool_call and looks like a tool output
if (message.type === "message" && message.role === "assistant") {
const prevMessage = messages[index - 1];
// Check if next message is an agent_output tool_response to include in current assistant message
const nextMessage = messages[index + 1];
if (
nextMessage &&
nextMessage.type === "tool_response" &&
nextMessage.result
) {
if (isAgentOutputResult(nextMessage.result)) {
agentOutput = nextMessage;
}
}
// Only convert to tool_response if it follows a tool_call AND looks like a tool output
if (prevMessage && prevMessage.type === "tool_call") {
if (isToolOutputPattern(message.content)) {
// Convert this message to a tool_response format for rendering
messageToRender = {
type: "tool_response",
toolId: prevMessage.toolId,
toolName: prevMessage.toolName,
result: message.content,
success: true,
timestamp: message.timestamp,
} as ChatMessageData;
}
}
}
const isFinalMessage =
messageToRender.type !== "message" ||
messageToRender.role !== "assistant" ||
index === lastAssistantMessageIndex;
return {
messageToRender,
agentOutput,
isFinalMessage,
};
}

View File

@@ -0,0 +1,68 @@
import type { ChatMessageData } from "../ChatMessage/useChatMessage";
export function parseToolResult(
result: unknown,
): Record<string, unknown> | null {
try {
return typeof result === "string"
? JSON.parse(result)
: (result as Record<string, unknown>);
} catch {
return null;
}
}
export function isAgentOutputResult(result: unknown): boolean {
const parsed = parseToolResult(result);
return parsed?.type === "agent_output";
}
export function isToolOutputPattern(content: string): boolean {
const normalizedContent = content.toLowerCase().trim();
return (
normalizedContent.startsWith("no agents found") ||
normalizedContent.startsWith("no results found") ||
normalizedContent.includes("no agents found matching") ||
!!normalizedContent.match(/^no \w+ found/i) ||
(content.length < 150 && normalizedContent.includes("try different")) ||
(content.length < 200 &&
!normalizedContent.includes("i'll") &&
!normalizedContent.includes("let me") &&
!normalizedContent.includes("i can") &&
!normalizedContent.includes("i will"))
);
}
export function formatToolResultValue(result: unknown): string {
return typeof result === "string"
? result
: result
? JSON.stringify(result, null, 2)
: "";
}
export function findLastMessageIndex(
messages: ChatMessageData[],
predicate: (msg: ChatMessageData) => boolean,
): number {
for (let i = messages.length - 1; i >= 0; i--) {
if (predicate(messages[i])) return i;
}
return -1;
}
export function shouldSkipAgentOutput(
message: ChatMessageData,
prevMessage: ChatMessageData | undefined,
): boolean {
if (message.type !== "tool_response" || !message.result) return false;
const isAgentOutput = isAgentOutputResult(message.result);
return (
isAgentOutput &&
!!prevMessage &&
prevMessage.type === "message" &&
prevMessage.role === "assistant"
);
}
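A few quick examples of how these helpers classify content (inputs are invented; only the functions defined above are called):

// Illustrative calls against the helpers above — example values are made up.
isAgentOutputResult('{"type":"agent_output","data":{}}'); // => true
isToolOutputPattern("No agents found matching your search."); // => true (reads like raw tool output)
isToolOutputPattern(
  "I'll search the marketplace for agents that can schedule tweets.",
); // => false (reads like an assistant reply)
findLastMessageIndex([], (msg) => msg.type === "tool_response"); // => -1 when nothing matches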

Some files were not shown because too many files have changed in this diff.