Compare commits


3 Commits

Author SHA1 Message Date
Zamil Majdy
0a48c49902 test(backend): Update test to match removed SET search_path call
The SET search_path call was removed in favor of explicit schema
qualification, so the test now expects only 1 execute_raw call.
2026-01-19 21:31:40 -05:00
Zamil Majdy
1fc1102eb4 fix(backend): Use explicit schema qualification for pgvector types
Fix intermittent "type 'vector' does not exist" errors when using
PgBouncer in transaction mode. The issue was that SET search_path
and the actual query could run on different backend connections.

Changes:
- Add {schema} placeholder for raw schema name (e.g., platform)
- Use explicit type casting: {schema}.vector instead of ::vector
- Use explicit operator: OPERATOR({schema}.<=>) instead of <=> (both
qualification patterns are sketched below)
- Remove set_public_search_path parameter (no longer needed)
- Remove search_path manipulation from DATABASE_URL
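
Illustrative sketch of the new pattern (helper names match the db.py diff
below; the import path, `embedding_data`, and the single-column INSERT are
simplified assumptions, not the real call sites):

```python
# Minimal sketch, assuming the helpers live in backend.data.db.
from backend.data.db import execute_raw_with_schema, query_raw_with_schema

async def store_and_search(embedding_data: str) -> list[dict]:
    # The helpers fill in {schema_prefix} -> '"platform".' and
    # {schema} -> 'platform', so pgvector types and operators resolve
    # without a session-level SET search_path (which PgBouncer in
    # transaction mode may run on a different backend connection than
    # the query itself).
    await execute_raw_with_schema(
        'INSERT INTO {schema_prefix}"Embedding" ("vec") VALUES ($1::{schema}.vector)',
        embedding_data,  # hypothetical parameter
    )
    return await query_raw_with_schema(
        """
        SELECT 1 - (embedding OPERATOR({schema}.<=>) $1::{schema}.vector) AS similarity
        FROM {schema_prefix}"UnifiedContentEmbedding"
        ORDER BY similarity DESC
        """,
        embedding_data,
    )
```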

Tested on both local and dev environments via kubectl exec.

Fixes: AUTOGPT-SERVER-763, AUTOGPT-SERVER-764
2026-01-19 18:14:29 -05:00
Swifty
bc75d70e7d refactor(backend): Improve Langfuse tracing with v3 SDK patterns and @observe decorators (#11803)

This PR improves the Langfuse tracing implementation in the chat feature
by adopting the v3 SDK patterns, resulting in cleaner code and better
observability.

### Changes 🏗️

- **Simplified Langfuse client usage**: Replace manual client
initialization with `langfuse.get_client()` global singleton
- **Use v3 context managers**: Switch to
`start_as_current_observation()` and `propagate_attributes()` for
automatic trace propagation
- **Auto-instrument OpenAI calls**: Use `langfuse.openai` wrapper for
automatic LLM call tracing instead of manual generation tracking
- **Add `@observe` decorators**: All chat tools now have
`@observe(as_type="tool")` decorators for automatic tool execution
tracing (see the sketch after this list):
  - `add_understanding`
  - `view_agent_output` (renamed from `agent_output`)
  - `create_agent`
  - `edit_agent`
  - `find_agent`
  - `find_block`
  - `find_library_agent`
  - `get_doc_page`
  - `run_agent`
  - `run_block`
  - `search_docs`
- **Remove manual trace lifecycle**: Eliminated the verbose `finally`
block that manually ended traces/generations
- **Rename tool**: `agent_output` → `view_agent_output` for clarity
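
For orientation, a condensed, non-authoritative sketch of the new patterns
(all API names are taken from the diff below; the tool body, API key, and
`handle_request` wrapper are placeholders):

```python
# Condensed sketch of the v3 SDK patterns adopted in this PR.
from langfuse import get_client, observe, propagate_attributes
from langfuse.openai import openai  # wrapping openai auto-traces LLM calls

langfuse = get_client()  # global singleton, no manual Langfuse(...) setup
client = openai.AsyncOpenAI(api_key="sk-placeholder")


@observe(as_type="tool", name="find_agent")
async def find_agent(query: str) -> list[str]:
    # Runs as an automatically created "tool" observation, nested under
    # whatever observation is current when it is awaited.
    return []


async def handle_request(session_id: str, user_id: str, message: str) -> None:
    with langfuse.start_as_current_observation(
        as_type="span", name="user-copilot-request", input=message
    ) as span:
        with propagate_attributes(
            session_id=session_id, user_id=user_id, tags=["copilot"]
        ):
            await find_agent(message)  # traced without manual start/end calls
            span.update(output="...")  # no verbose finally block required
```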

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Verified chat feature works with Langfuse tracing enabled
  - [x] Confirmed traces appear correctly in Langfuse dashboard with tool spans
  - [x] Tested tool execution flows show up as nested observations

#### For configuration changes:

- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)

No configuration changes required - uses existing Langfuse environment
variables.
2026-01-19 20:56:51 +00:00
115 changed files with 1137 additions and 2506 deletions

View File

@@ -16,20 +16,6 @@ See `docs/content/platform/getting-started.md` for setup instructions.
- Format Python code with `poetry run format`.
- Format frontend code using `pnpm format`.
## Frontend-specific guidelines
**When working on files in `autogpt_platform/frontend/`, always read and follow the conventions in `autogpt_platform/frontend/CONTRIBUTING.md`.**
Key frontend conventions:
- Component props should be `interface Props { ... }` (not exported) unless the interface needs to be used outside the component
- Separate render logic from business logic (component.tsx + useComponent.ts + helpers.ts)
- Colocate state when possible and avoid creating large components; use sub-components (a local `/components` folder next to the parent component) when sensible
- Avoid large hooks, abstract logic into `helpers.ts` files when sensible
- Use function declarations for components and handlers, arrow functions only for small inline callbacks
- No barrel files or `index.ts` re-exports
See `autogpt_platform/frontend/CONTRIBUTING.md` for complete frontend architecture, patterns, and conventions.
## Testing
- Backend: `poetry run test` (runs pytest with a Docker-based Postgres + Prisma).

View File

@@ -4,14 +4,9 @@ from collections.abc import AsyncGenerator
from typing import Any
import orjson
from langfuse import Langfuse
from openai import (
APIConnectionError,
APIError,
APIStatusError,
AsyncOpenAI,
RateLimitError,
)
from langfuse import get_client, propagate_attributes
from langfuse.openai import openai # type: ignore
from openai import APIConnectionError, APIError, APIStatusError, RateLimitError
from openai.types.chat import ChatCompletionChunk, ChatCompletionToolParam
from backend.data.understanding import (
@@ -21,7 +16,6 @@ from backend.data.understanding import (
from backend.util.exceptions import NotFoundError
from backend.util.settings import Settings
from . import db as chat_db
from .config import ChatConfig
from .model import (
ChatMessage,
@@ -50,10 +44,10 @@ logger = logging.getLogger(__name__)
config = ChatConfig()
settings = Settings()
client = AsyncOpenAI(api_key=config.api_key, base_url=config.base_url)
client = openai.AsyncOpenAI(api_key=config.api_key, base_url=config.base_url)
# Langfuse client (lazy initialization)
_langfuse_client: Langfuse | None = None
langfuse = get_client()
class LangfuseNotConfiguredError(Exception):
@@ -69,65 +63,6 @@ def _is_langfuse_configured() -> bool:
)
def _get_langfuse_client() -> Langfuse:
"""Get or create the Langfuse client for prompt management and tracing."""
global _langfuse_client
if _langfuse_client is None:
if not _is_langfuse_configured():
raise LangfuseNotConfiguredError(
"Langfuse is not configured. The chat feature requires Langfuse for prompt management. "
"Please set the LANGFUSE_PUBLIC_KEY and LANGFUSE_SECRET_KEY environment variables."
)
_langfuse_client = Langfuse(
public_key=settings.secrets.langfuse_public_key,
secret_key=settings.secrets.langfuse_secret_key,
host=settings.secrets.langfuse_host or "https://cloud.langfuse.com",
)
return _langfuse_client
def _get_environment() -> str:
"""Get the current environment name for Langfuse tagging."""
return settings.config.app_env.value
def _get_langfuse_prompt() -> str:
"""Fetch the latest production prompt from Langfuse.
Returns:
The compiled prompt text from Langfuse.
Raises:
Exception: If Langfuse is unavailable or prompt fetch fails.
"""
try:
langfuse = _get_langfuse_client()
# cache_ttl_seconds=0 disables SDK caching to always get the latest prompt
prompt = langfuse.get_prompt(config.langfuse_prompt_name, cache_ttl_seconds=0)
compiled = prompt.compile()
logger.info(
f"Fetched prompt '{config.langfuse_prompt_name}' from Langfuse "
f"(version: {prompt.version})"
)
return compiled
except Exception as e:
logger.error(f"Failed to fetch prompt from Langfuse: {e}")
raise
async def _is_first_session(user_id: str) -> bool:
"""Check if this is the user's first chat session.
Returns True if the user has 1 or fewer sessions (meaning this is their first).
"""
try:
session_count = await chat_db.get_user_session_count(user_id)
return session_count <= 1
except Exception as e:
logger.warning(f"Failed to check session count for user {user_id}: {e}")
return False # Default to non-onboarding if we can't check
async def _build_system_prompt(user_id: str | None) -> tuple[str, Any]:
"""Build the full system prompt including business understanding if available.
@@ -139,8 +74,6 @@ async def _build_system_prompt(user_id: str | None) -> tuple[str, Any]:
Tuple of (compiled prompt string, Langfuse prompt object for tracing)
"""
langfuse = _get_langfuse_client()
# cache_ttl_seconds=0 disables SDK caching to always get the latest prompt
prompt = langfuse.get_prompt(config.langfuse_prompt_name, cache_ttl_seconds=0)
@@ -158,7 +91,7 @@ async def _build_system_prompt(user_id: str | None) -> tuple[str, Any]:
context = "This is the first time you are meeting the user. Greet them and introduce them to the platform"
compiled = prompt.compile(users_information=context)
return compiled, prompt
return compiled, understanding
async def _generate_session_title(message: str) -> str | None:
@@ -217,6 +150,7 @@ async def assign_user_to_session(
async def stream_chat_completion(
session_id: str,
message: str | None = None,
tool_call_response: str | None = None,
is_user_message: bool = True,
user_id: str | None = None,
retry_count: int = 0,
@@ -256,11 +190,6 @@ async def stream_chat_completion(
yield StreamFinish()
return
# Langfuse observations will be created after session is loaded (need messages for input)
# Initialize to None so finally block can safely check and end them
trace = None
generation = None
# Only fetch from Redis if session not provided (initial call)
if session is None:
session = await get_chat_session(session_id, user_id)
@@ -336,297 +265,259 @@ async def stream_chat_completion(
asyncio.create_task(_update_title())
# Build system prompt with business understanding
system_prompt, langfuse_prompt = await _build_system_prompt(user_id)
# Build input messages including system prompt for complete Langfuse logging
trace_input_messages = [{"role": "system", "content": system_prompt}] + [
m.model_dump() for m in session.messages
]
system_prompt, understanding = await _build_system_prompt(user_id)
# Create Langfuse trace for this LLM call (each call gets its own trace, grouped by session_id)
# Using v3 SDK: start_observation creates a root span, update_trace sets trace-level attributes
try:
langfuse = _get_langfuse_client()
env = _get_environment()
trace = langfuse.start_observation(
name="chat_completion",
input={"messages": trace_input_messages},
metadata={
"environment": env,
"model": config.model,
"message_count": len(session.messages),
"prompt_name": langfuse_prompt.name if langfuse_prompt else None,
"prompt_version": langfuse_prompt.version if langfuse_prompt else None,
},
)
# Set trace-level attributes (session_id, user_id, tags)
trace.update_trace(
input = message
if not message and tool_call_response:
input = tool_call_response
langfuse = get_client()
with langfuse.start_as_current_observation(
as_type="span",
name="user-copilot-request",
input=input,
) as span:
with propagate_attributes(
session_id=session_id,
user_id=user_id,
tags=[env, "copilot"],
)
except Exception as e:
logger.warning(f"Failed to create Langfuse trace: {e}")
tags=["copilot"],
metadata={
"users_information": format_understanding_for_prompt(understanding)[
:200
]  # Langfuse only accepts up to 200 chars
},
):
# Initialize variables that will be used in finally block (must be defined before try)
assistant_response = ChatMessage(
role="assistant",
content="",
)
accumulated_tool_calls: list[dict[str, Any]] = []
# Wrap main logic in try/finally to ensure Langfuse observations are always ended
try:
has_yielded_end = False
has_yielded_error = False
has_done_tool_call = False
has_received_text = False
text_streaming_ended = False
tool_response_messages: list[ChatMessage] = []
should_retry = False
# Generate unique IDs for AI SDK protocol
import uuid as uuid_module
message_id = str(uuid_module.uuid4())
text_block_id = str(uuid_module.uuid4())
# Yield message start
yield StreamStart(messageId=message_id)
# Create Langfuse generation for each LLM call, linked to the prompt
# Using v3 SDK: start_observation with as_type="generation"
generation = (
trace.start_observation(
as_type="generation",
name="llm_call",
model=config.model,
input={"messages": trace_input_messages},
prompt=langfuse_prompt,
# Initialize variables that will be used in finally block (must be defined before try)
assistant_response = ChatMessage(
role="assistant",
content="",
)
if trace
else None
)
accumulated_tool_calls: list[dict[str, Any]] = []
try:
async for chunk in _stream_chat_chunks(
session=session,
tools=tools,
system_prompt=system_prompt,
text_block_id=text_block_id,
):
# Wrap main logic in try/finally to ensure Langfuse observations are always ended
has_yielded_end = False
has_yielded_error = False
has_done_tool_call = False
has_received_text = False
text_streaming_ended = False
tool_response_messages: list[ChatMessage] = []
should_retry = False
if isinstance(chunk, StreamTextStart):
# Emit text-start before first text delta
if not has_received_text:
# Generate unique IDs for AI SDK protocol
import uuid as uuid_module
message_id = str(uuid_module.uuid4())
text_block_id = str(uuid_module.uuid4())
# Yield message start
yield StreamStart(messageId=message_id)
try:
async for chunk in _stream_chat_chunks(
session=session,
tools=tools,
system_prompt=system_prompt,
text_block_id=text_block_id,
):
if isinstance(chunk, StreamTextStart):
# Emit text-start before first text delta
if not has_received_text:
yield chunk
elif isinstance(chunk, StreamTextDelta):
delta = chunk.delta or ""
assert assistant_response.content is not None
assistant_response.content += delta
has_received_text = True
yield chunk
elif isinstance(chunk, StreamTextDelta):
delta = chunk.delta or ""
assert assistant_response.content is not None
assistant_response.content += delta
has_received_text = True
yield chunk
elif isinstance(chunk, StreamTextEnd):
# Emit text-end after text completes
if has_received_text and not text_streaming_ended:
text_streaming_ended = True
yield chunk
elif isinstance(chunk, StreamToolInputStart):
# Emit text-end before first tool call, but only if we've received text
if has_received_text and not text_streaming_ended:
yield StreamTextEnd(id=text_block_id)
text_streaming_ended = True
yield chunk
elif isinstance(chunk, StreamToolInputAvailable):
# Accumulate tool calls in OpenAI format
accumulated_tool_calls.append(
{
"id": chunk.toolCallId,
"type": "function",
"function": {
"name": chunk.toolName,
"arguments": orjson.dumps(chunk.input).decode("utf-8"),
},
}
)
elif isinstance(chunk, StreamToolOutputAvailable):
result_content = (
chunk.output
if isinstance(chunk.output, str)
else orjson.dumps(chunk.output).decode("utf-8")
)
tool_response_messages.append(
ChatMessage(
role="tool",
content=result_content,
tool_call_id=chunk.toolCallId,
)
)
has_done_tool_call = True
# Track if any tool execution failed
if not chunk.success:
logger.warning(
f"Tool {chunk.toolName} (ID: {chunk.toolCallId}) execution failed"
)
yield chunk
elif isinstance(chunk, StreamFinish):
if not has_done_tool_call:
# Emit text-end before finish if we received text but haven't closed it
elif isinstance(chunk, StreamTextEnd):
# Emit text-end after text completes
if has_received_text and not text_streaming_ended:
text_streaming_ended = True
if assistant_response.content:
logger.warning(
f"StreamTextEnd: Attempting to set output {assistant_response.content}"
)
span.update_trace(output=assistant_response.content)
span.update(output=assistant_response.content)
yield chunk
elif isinstance(chunk, StreamToolInputStart):
# Emit text-end before first tool call, but only if we've received text
if has_received_text and not text_streaming_ended:
yield StreamTextEnd(id=text_block_id)
text_streaming_ended = True
has_yielded_end = True
yield chunk
elif isinstance(chunk, StreamError):
has_yielded_error = True
elif isinstance(chunk, StreamUsage):
session.usage.append(
Usage(
prompt_tokens=chunk.promptTokens,
completion_tokens=chunk.completionTokens,
total_tokens=chunk.totalTokens,
elif isinstance(chunk, StreamToolInputAvailable):
# Accumulate tool calls in OpenAI format
accumulated_tool_calls.append(
{
"id": chunk.toolCallId,
"type": "function",
"function": {
"name": chunk.toolName,
"arguments": orjson.dumps(chunk.input).decode(
"utf-8"
),
},
}
)
elif isinstance(chunk, StreamToolOutputAvailable):
result_content = (
chunk.output
if isinstance(chunk.output, str)
else orjson.dumps(chunk.output).decode("utf-8")
)
tool_response_messages.append(
ChatMessage(
role="tool",
content=result_content,
tool_call_id=chunk.toolCallId,
)
)
has_done_tool_call = True
# Track if any tool execution failed
if not chunk.success:
logger.warning(
f"Tool {chunk.toolName} (ID: {chunk.toolCallId}) execution failed"
)
yield chunk
elif isinstance(chunk, StreamFinish):
if not has_done_tool_call:
# Emit text-end before finish if we received text but haven't closed it
if has_received_text and not text_streaming_ended:
yield StreamTextEnd(id=text_block_id)
text_streaming_ended = True
has_yielded_end = True
yield chunk
elif isinstance(chunk, StreamError):
has_yielded_error = True
elif isinstance(chunk, StreamUsage):
session.usage.append(
Usage(
prompt_tokens=chunk.promptTokens,
completion_tokens=chunk.completionTokens,
total_tokens=chunk.totalTokens,
)
)
else:
logger.error(
f"Unknown chunk type: {type(chunk)}", exc_info=True
)
if assistant_response.content:
langfuse.update_current_trace(output=assistant_response.content)
langfuse.update_current_span(output=assistant_response.content)
elif tool_response_messages:
langfuse.update_current_trace(output=str(tool_response_messages))
langfuse.update_current_span(output=str(tool_response_messages))
except Exception as e:
logger.error(f"Error during stream: {e!s}", exc_info=True)
# Check if this is a retryable error (JSON parsing, incomplete tool calls, etc.)
is_retryable = isinstance(
e, (orjson.JSONDecodeError, KeyError, TypeError)
)
if is_retryable and retry_count < config.max_retries:
logger.info(
f"Retryable error encountered. Attempt {retry_count + 1}/{config.max_retries}"
)
should_retry = True
else:
logger.error(f"Unknown chunk type: {type(chunk)}", exc_info=True)
except Exception as e:
logger.error(f"Error during stream: {e!s}", exc_info=True)
# Non-retryable error or max retries exceeded
# Save any partial progress before reporting error
messages_to_save: list[ChatMessage] = []
# Check if this is a retryable error (JSON parsing, incomplete tool calls, etc.)
is_retryable = isinstance(e, (orjson.JSONDecodeError, KeyError, TypeError))
# Add assistant message if it has content or tool calls
if accumulated_tool_calls:
assistant_response.tool_calls = accumulated_tool_calls
if assistant_response.content or assistant_response.tool_calls:
messages_to_save.append(assistant_response)
if is_retryable and retry_count < config.max_retries:
# Add tool response messages after assistant message
messages_to_save.extend(tool_response_messages)
session.messages.extend(messages_to_save)
await upsert_chat_session(session)
if not has_yielded_error:
error_message = str(e)
if not is_retryable:
error_message = f"Non-retryable error: {error_message}"
elif retry_count >= config.max_retries:
error_message = f"Max retries ({config.max_retries}) exceeded: {error_message}"
error_response = StreamError(errorText=error_message)
yield error_response
if not has_yielded_end:
yield StreamFinish()
return
# Handle retry outside of exception handler to avoid nesting
if should_retry and retry_count < config.max_retries:
logger.info(
f"Retryable error encountered. Attempt {retry_count + 1}/{config.max_retries}"
f"Retrying stream_chat_completion for session {session_id}, attempt {retry_count + 1}"
)
should_retry = True
else:
# Non-retryable error or max retries exceeded
# Save any partial progress before reporting error
messages_to_save: list[ChatMessage] = []
async for chunk in stream_chat_completion(
session_id=session.session_id,
user_id=user_id,
retry_count=retry_count + 1,
session=session,
context=context,
):
yield chunk
return # Exit after retry to avoid double-saving in finally block
# Add assistant message if it has content or tool calls
if accumulated_tool_calls:
assistant_response.tool_calls = accumulated_tool_calls
if assistant_response.content or assistant_response.tool_calls:
messages_to_save.append(assistant_response)
# Add tool response messages after assistant message
messages_to_save.extend(tool_response_messages)
session.messages.extend(messages_to_save)
await upsert_chat_session(session)
if not has_yielded_error:
error_message = str(e)
if not is_retryable:
error_message = f"Non-retryable error: {error_message}"
elif retry_count >= config.max_retries:
error_message = f"Max retries ({config.max_retries}) exceeded: {error_message}"
error_response = StreamError(errorText=error_message)
yield error_response
if not has_yielded_end:
yield StreamFinish()
return
# Handle retry outside of exception handler to avoid nesting
if should_retry and retry_count < config.max_retries:
# Normal completion path - save session and handle tool call continuation
logger.info(
f"Retrying stream_chat_completion for session {session_id}, attempt {retry_count + 1}"
)
async for chunk in stream_chat_completion(
session_id=session.session_id,
user_id=user_id,
retry_count=retry_count + 1,
session=session,
context=context,
):
yield chunk
return # Exit after retry to avoid double-saving in finally block
# Normal completion path - save session and handle tool call continuation
logger.info(
f"Normal completion path: session={session.session_id}, "
f"current message_count={len(session.messages)}"
)
# Build the messages list in the correct order
messages_to_save: list[ChatMessage] = []
# Add assistant message with tool_calls if any
if accumulated_tool_calls:
assistant_response.tool_calls = accumulated_tool_calls
logger.info(
f"Added {len(accumulated_tool_calls)} tool calls to assistant message"
)
if assistant_response.content or assistant_response.tool_calls:
messages_to_save.append(assistant_response)
logger.info(
f"Saving assistant message with content_len={len(assistant_response.content or '')}, tool_calls={len(assistant_response.tool_calls or [])}"
f"Normal completion path: session={session.session_id}, "
f"current message_count={len(session.messages)}"
)
# Add tool response messages after assistant message
messages_to_save.extend(tool_response_messages)
logger.info(
f"Saving {len(tool_response_messages)} tool response messages, "
f"total_to_save={len(messages_to_save)}"
)
# Build the messages list in the correct order
messages_to_save: list[ChatMessage] = []
session.messages.extend(messages_to_save)
logger.info(
f"Extended session messages, new message_count={len(session.messages)}"
)
await upsert_chat_session(session)
# If we did a tool call, stream the chat completion again to get the next response
if has_done_tool_call:
logger.info(
"Tool call executed, streaming chat completion again to get assistant response"
)
async for chunk in stream_chat_completion(
session_id=session.session_id,
user_id=user_id,
session=session, # Pass session object to avoid Redis refetch
context=context,
):
yield chunk
finally:
# Always end Langfuse observations to prevent resource leaks
# Guard against None and catch errors to avoid masking original exceptions
if generation is not None:
try:
latest_usage = session.usage[-1] if session.usage else None
generation.update(
model=config.model,
output={
"content": assistant_response.content,
"tool_calls": accumulated_tool_calls or None,
},
usage_details=(
{
"input": latest_usage.prompt_tokens,
"output": latest_usage.completion_tokens,
"total": latest_usage.total_tokens,
}
if latest_usage
else None
),
# Add assistant message with tool_calls if any
if accumulated_tool_calls:
assistant_response.tool_calls = accumulated_tool_calls
logger.info(
f"Added {len(accumulated_tool_calls)} tool calls to assistant message"
)
if assistant_response.content or assistant_response.tool_calls:
messages_to_save.append(assistant_response)
logger.info(
f"Saving assistant message with content_len={len(assistant_response.content or '')}, tool_calls={len(assistant_response.tool_calls or [])}"
)
generation.end()
except Exception as e:
logger.warning(f"Failed to end Langfuse generation: {e}")
if trace is not None:
try:
if accumulated_tool_calls:
trace.update_trace(output={"tool_calls": accumulated_tool_calls})
else:
trace.update_trace(output={"response": assistant_response.content})
trace.end()
except Exception as e:
logger.warning(f"Failed to end Langfuse trace: {e}")
# Add tool response messages after assistant message
messages_to_save.extend(tool_response_messages)
logger.info(
f"Saving {len(tool_response_messages)} tool response messages, "
f"total_to_save={len(messages_to_save)}"
)
session.messages.extend(messages_to_save)
logger.info(
f"Extended session messages, new message_count={len(session.messages)}"
)
await upsert_chat_session(session)
# If we did a tool call, stream the chat completion again to get the next response
if has_done_tool_call:
logger.info(
"Tool call executed, streaming chat completion again to get assistant response"
)
async for chunk in stream_chat_completion(
session_id=session.session_id,
user_id=user_id,
session=session, # Pass session object to avoid Redis refetch
context=context,
tool_call_response=str(tool_response_messages),
):
yield chunk
# Retry configuration for OpenAI API calls
@@ -900,5 +791,4 @@ async def _yield_tool_call(
session=session,
)
logger.info(f"Yielding Tool execution response: {tool_execution_response}")
yield tool_execution_response

View File

@@ -30,7 +30,7 @@ TOOL_REGISTRY: dict[str, BaseTool] = {
"find_library_agent": FindLibraryAgentTool(),
"run_agent": RunAgentTool(),
"run_block": RunBlockTool(),
"agent_output": AgentOutputTool(),
"view_agent_output": AgentOutputTool(),
"search_docs": SearchDocsTool(),
"get_doc_page": GetDocPageTool(),
}

View File

@@ -3,6 +3,8 @@
import logging
from typing import Any
from langfuse import observe
from backend.api.features.chat.model import ChatSession
from backend.data.understanding import (
BusinessUnderstandingInput,
@@ -59,6 +61,7 @@ and automations for the user's specific needs."""
"""Requires authentication to store user-specific data."""
return True
@observe(as_type="tool", name="add_understanding")
async def _execute(
self,
user_id: str | None,

View File

@@ -5,6 +5,7 @@ import re
from datetime import datetime, timedelta, timezone
from typing import Any
from langfuse import observe
from pydantic import BaseModel, field_validator
from backend.api.features.chat.model import ChatSession
@@ -103,7 +104,7 @@ class AgentOutputTool(BaseTool):
@property
def name(self) -> str:
return "agent_output"
return "view_agent_output"
@property
def description(self) -> str:
@@ -328,6 +329,7 @@ class AgentOutputTool(BaseTool):
total_executions=len(available_executions) if available_executions else 1,
)
@observe(as_type="tool", name="view_agent_output")
async def _execute(
self,
user_id: str | None,

View File

@@ -3,6 +3,8 @@
import logging
from typing import Any
from langfuse import observe
from backend.api.features.chat.model import ChatSession
from .agent_generator import (
@@ -78,6 +80,7 @@ class CreateAgentTool(BaseTool):
"required": ["description"],
}
@observe(as_type="tool", name="create_agent")
async def _execute(
self,
user_id: str | None,

View File

@@ -3,6 +3,8 @@
import logging
from typing import Any
from langfuse import observe
from backend.api.features.chat.model import ChatSession
from .agent_generator import (
@@ -85,6 +87,7 @@ class EditAgentTool(BaseTool):
"required": ["agent_id", "changes"],
}
@observe(as_type="tool", name="edit_agent")
async def _execute(
self,
user_id: str | None,

View File

@@ -2,6 +2,8 @@
from typing import Any
from langfuse import observe
from backend.api.features.chat.model import ChatSession
from .agent_search import search_agents
@@ -35,6 +37,7 @@ class FindAgentTool(BaseTool):
"required": ["query"],
}
@observe(as_type="tool", name="find_agent")
async def _execute(
self, user_id: str | None, session: ChatSession, **kwargs
) -> ToolResponseBase:

View File

@@ -1,6 +1,7 @@
import logging
from typing import Any
from langfuse import observe
from prisma.enums import ContentType
from backend.api.features.chat.model import ChatSession
@@ -55,6 +56,7 @@ class FindBlockTool(BaseTool):
def requires_auth(self) -> bool:
return True
@observe(as_type="tool", name="find_block")
async def _execute(
self,
user_id: str | None,

View File

@@ -2,6 +2,8 @@
from typing import Any
from langfuse import observe
from backend.api.features.chat.model import ChatSession
from .agent_search import search_agents
@@ -41,6 +43,7 @@ class FindLibraryAgentTool(BaseTool):
def requires_auth(self) -> bool:
return True
@observe(as_type="tool", name="find_library_agent")
async def _execute(
self, user_id: str | None, session: ChatSession, **kwargs
) -> ToolResponseBase:

View File

@@ -4,6 +4,8 @@ import logging
from pathlib import Path
from typing import Any
from langfuse import observe
from backend.api.features.chat.model import ChatSession
from backend.api.features.chat.tools.base import BaseTool
from backend.api.features.chat.tools.models import (
@@ -71,6 +73,7 @@ class GetDocPageTool(BaseTool):
url_path = path.rsplit(".", 1)[0] if "." in path else path
return f"{DOCS_BASE_URL}/{url_path}"
@observe(as_type="tool", name="get_doc_page")
async def _execute(
self,
user_id: str | None,

View File

@@ -3,6 +3,7 @@
import logging
from typing import Any
from langfuse import observe
from pydantic import BaseModel, Field, field_validator
from backend.api.features.chat.config import ChatConfig
@@ -154,6 +155,7 @@ class RunAgentTool(BaseTool):
"""All operations require authentication."""
return True
@observe(as_type="tool", name="run_agent")
async def _execute(
self,
user_id: str | None,

View File

@@ -4,6 +4,8 @@ import logging
from collections import defaultdict
from typing import Any
from langfuse import observe
from backend.api.features.chat.model import ChatSession
from backend.data.block import get_block
from backend.data.execution import ExecutionContext
@@ -127,6 +129,7 @@ class RunBlockTool(BaseTool):
return matched_credentials, missing_credentials
@observe(as_type="tool", name="run_block")
async def _execute(
self,
user_id: str | None,

View File

@@ -3,6 +3,7 @@
import logging
from typing import Any
from langfuse import observe
from prisma.enums import ContentType
from backend.api.features.chat.model import ChatSession
@@ -87,6 +88,7 @@ class SearchDocsTool(BaseTool):
url_path = path.rsplit(".", 1)[0] if "." in path else path
return f"{DOCS_BASE_URL}/{url_path}"
@observe(as_type="tool", name="search_docs")
async def _execute(
self,
user_id: str | None,

View File

@@ -154,15 +154,16 @@ async def store_content_embedding(
# Upsert the embedding
# WHERE clause in DO UPDATE prevents PostgreSQL 15 bug with NULLS NOT DISTINCT
# Use {schema}.vector for explicit pgvector type qualification
await execute_raw_with_schema(
"""
INSERT INTO {schema_prefix}"UnifiedContentEmbedding" (
"id", "contentType", "contentId", "userId", "embedding", "searchableText", "metadata", "createdAt", "updatedAt"
)
VALUES (gen_random_uuid()::text, $1::{schema_prefix}"ContentType", $2, $3, $4::vector, $5, $6::jsonb, NOW(), NOW())
VALUES (gen_random_uuid()::text, $1::{schema_prefix}"ContentType", $2, $3, $4::{schema}.vector, $5, $6::jsonb, NOW(), NOW())
ON CONFLICT ("contentType", "contentId", "userId")
DO UPDATE SET
"embedding" = $4::vector,
"embedding" = $4::{schema}.vector,
"searchableText" = $5,
"metadata" = $6::jsonb,
"updatedAt" = NOW()
@@ -177,7 +178,6 @@ async def store_content_embedding(
searchable_text,
metadata_json,
client=client,
set_public_search_path=True,
)
logger.info(f"Stored embedding for {content_type}:{content_id}")
@@ -236,7 +236,6 @@ async def get_content_embedding(
content_type,
content_id,
user_id,
set_public_search_path=True,
)
if result and len(result) > 0:
@@ -871,31 +870,34 @@ async def semantic_search(
# Add content type parameters and build placeholders dynamically
content_type_start_idx = len(params) + 1
content_type_placeholders = ", ".join(
f'${content_type_start_idx + i}::{{{{schema_prefix}}}}"ContentType"'
'$' + str(content_type_start_idx + i) + '::{schema_prefix}"ContentType"'
for i in range(len(content_types))
)
params.extend([ct.value for ct in content_types])
sql = f"""
# Build min_similarity param index before appending
min_similarity_idx = len(params) + 1
params.append(min_similarity)
# Use regular string (not f-string) for template to preserve {schema_prefix} and {schema} placeholders
# Use OPERATOR({schema}.<=>) for explicit operator schema qualification
sql = """
SELECT
"contentId" as content_id,
"contentType" as content_type,
"searchableText" as searchable_text,
metadata,
1 - (embedding <=> '{embedding_str}'::vector) as similarity
FROM {{{{schema_prefix}}}}"UnifiedContentEmbedding"
WHERE "contentType" IN ({content_type_placeholders})
{user_filter}
AND 1 - (embedding <=> '{embedding_str}'::vector) >= ${len(params) + 1}
1 - (embedding OPERATOR({schema}.<=>) '""" + embedding_str + """'::{schema}.vector) as similarity
FROM {schema_prefix}"UnifiedContentEmbedding"
WHERE "contentType" IN (""" + content_type_placeholders + """)
""" + user_filter + """
AND 1 - (embedding OPERATOR({schema}.<=>) '""" + embedding_str + """'::{schema}.vector) >= $""" + str(min_similarity_idx) + """
ORDER BY similarity DESC
LIMIT $1
"""
params.append(min_similarity)
try:
results = await query_raw_with_schema(
sql, *params, set_public_search_path=True
)
results = await query_raw_with_schema(sql, *params)
return [
{
"content_id": row["content_id"],
@@ -922,31 +924,33 @@ async def semantic_search(
# Add content type parameters and build placeholders dynamically
content_type_start_idx = len(params_lexical) + 1
content_type_placeholders_lexical = ", ".join(
f'${content_type_start_idx + i}::{{{{schema_prefix}}}}"ContentType"'
'$' + str(content_type_start_idx + i) + '::{schema_prefix}"ContentType"'
for i in range(len(content_types))
)
params_lexical.extend([ct.value for ct in content_types])
sql_lexical = f"""
# Build query param index before appending
query_param_idx = len(params_lexical) + 1
params_lexical.append(f"%{query}%")
# Use regular string (not f-string) for template to preserve {schema_prefix} placeholders
sql_lexical = """
SELECT
"contentId" as content_id,
"contentType" as content_type,
"searchableText" as searchable_text,
metadata,
0.0 as similarity
FROM {{{{schema_prefix}}}}"UnifiedContentEmbedding"
WHERE "contentType" IN ({content_type_placeholders_lexical})
{user_filter}
AND "searchableText" ILIKE ${len(params_lexical) + 1}
FROM {schema_prefix}"UnifiedContentEmbedding"
WHERE "contentType" IN (""" + content_type_placeholders_lexical + """)
""" + user_filter + """
AND "searchableText" ILIKE $""" + str(query_param_idx) + """
ORDER BY "updatedAt" DESC
LIMIT $1
"""
params_lexical.append(f"%{query}%")
try:
results = await query_raw_with_schema(
sql_lexical, *params_lexical, set_public_search_path=True
)
results = await query_raw_with_schema(sql_lexical, *params_lexical)
return [
{
"content_id": row["content_id"],

View File

@@ -155,18 +155,14 @@ async def test_store_embedding_success(mocker):
)
assert result is True
# execute_raw is called twice: once for SET search_path, once for INSERT
assert mock_client.execute_raw.call_count == 2
# execute_raw is called once for INSERT (no separate SET search_path needed)
assert mock_client.execute_raw.call_count == 1
# First call: SET search_path
first_call_args = mock_client.execute_raw.call_args_list[0][0]
assert "SET search_path" in first_call_args[0]
# Second call: INSERT query with the actual data
second_call_args = mock_client.execute_raw.call_args_list[1][0]
assert "test-version-id" in second_call_args
assert "[0.1,0.2,0.3]" in second_call_args
assert None in second_call_args # userId should be None for store agents
# Verify the INSERT query with the actual data
call_args = mock_client.execute_raw.call_args_list[0][0]
assert "test-version-id" in call_args
assert "[0.1,0.2,0.3]" in call_args
assert None in call_args # userId should be None for store agents
@pytest.mark.asyncio(loop_scope="session")

View File

@@ -295,7 +295,7 @@ async def unified_hybrid_search(
FROM {{schema_prefix}}"UnifiedContentEmbedding" uce
WHERE uce."contentType" = ANY({content_types_param}::{{schema_prefix}}"ContentType"[])
{user_filter}
ORDER BY uce.embedding <=> {embedding_param}::vector
ORDER BY uce.embedding OPERATOR({{schema}}.<=>) {embedding_param}::{{schema}}.vector
LIMIT 200
)
),
@@ -307,7 +307,7 @@ async def unified_hybrid_search(
uce.metadata,
uce."updatedAt" as updated_at,
-- Semantic score: cosine similarity (1 - distance)
COALESCE(1 - (uce.embedding <=> {embedding_param}::vector), 0) as semantic_score,
COALESCE(1 - (uce.embedding OPERATOR({{schema}}.<=>) {embedding_param}::{{schema}}.vector), 0) as semantic_score,
-- Lexical score: ts_rank_cd
COALESCE(ts_rank_cd(uce.search, plainto_tsquery('english', {query_param})), 0) as lexical_raw,
-- Category match from metadata
@@ -363,9 +363,7 @@ async def unified_hybrid_search(
LIMIT {limit_param} OFFSET {offset_param}
"""
results = await query_raw_with_schema(
sql_query, *params, set_public_search_path=True
)
results = await query_raw_with_schema(sql_query, *params)
total = results[0]["total_count"] if results else 0
# Apply BM25 reranking
@@ -585,7 +583,7 @@ async def hybrid_search(
WHERE uce."contentType" = 'STORE_AGENT'::{{schema_prefix}}"ContentType"
AND uce."userId" IS NULL
AND {where_clause}
ORDER BY uce.embedding <=> {embedding_param}::vector
ORDER BY uce.embedding OPERATOR({{schema}}.<=>) {embedding_param}::{{schema}}.vector
LIMIT 200
) uce
),
@@ -607,7 +605,7 @@ async def hybrid_search(
-- Searchable text for BM25 reranking
COALESCE(sa.agent_name, '') || ' ' || COALESCE(sa.sub_heading, '') || ' ' || COALESCE(sa.description, '') as searchable_text,
-- Semantic score
COALESCE(1 - (uce.embedding <=> {embedding_param}::vector), 0) as semantic_score,
COALESCE(1 - (uce.embedding OPERATOR({{schema}}.<=>) {embedding_param}::{{schema}}.vector), 0) as semantic_score,
-- Lexical score (raw, will normalize)
COALESCE(ts_rank_cd(uce.search, plainto_tsquery('english', {query_param})), 0) as lexical_raw,
-- Category match
@@ -688,9 +686,7 @@ async def hybrid_search(
LIMIT {limit_param} OFFSET {offset_param}
"""
results = await query_raw_with_schema(
sql_query, *params, set_public_search_path=True
)
results = await query_raw_with_schema(sql_query, *params)
total = results[0]["total_count"] if results else 0

View File

@@ -38,20 +38,6 @@ POOL_TIMEOUT = os.getenv("DB_POOL_TIMEOUT")
if POOL_TIMEOUT:
DATABASE_URL = add_param(DATABASE_URL, "pool_timeout", POOL_TIMEOUT)
# Add public schema to search_path for pgvector type access
# The vector extension is in public schema, but search_path is determined by schema parameter
# Extract the schema from DATABASE_URL or default to 'public' (matching get_database_schema())
parsed_url = urlparse(DATABASE_URL)
url_params = dict(parse_qsl(parsed_url.query))
db_schema = url_params.get("schema", "public")
# Build search_path, avoiding duplicates if db_schema is already 'public'
search_path_schemas = list(
dict.fromkeys([db_schema, "public"])
) # Preserves order, removes duplicates
search_path = ",".join(search_path_schemas)
# This allows using ::vector without schema qualification
DATABASE_URL = add_param(DATABASE_URL, "options", f"-c search_path={search_path}")
HTTP_TIMEOUT = int(POOL_TIMEOUT) if POOL_TIMEOUT else None
prisma = Prisma(
@@ -127,38 +113,43 @@ async def _raw_with_schema(
*args,
execute: bool = False,
client: Prisma | None = None,
set_public_search_path: bool = False,
) -> list[dict] | int:
"""Internal: Execute raw SQL with proper schema handling.
Use query_raw_with_schema() or execute_raw_with_schema() instead.
Supports placeholders:
- {schema_prefix}: Table/type prefix (e.g., "platform".)
- {schema}: Raw schema name (e.g., platform) for pgvector types and operators
Args:
query_template: SQL query with {schema_prefix} placeholder
query_template: SQL query with {schema_prefix} and/or {schema} placeholders
*args: Query parameters
execute: If False, executes SELECT query. If True, executes INSERT/UPDATE/DELETE.
client: Optional Prisma client for transactions (only used when execute=True).
set_public_search_path: If True, sets search_path to include public schema.
Needed for pgvector types and other public schema objects.
Returns:
- list[dict] if execute=False (query results)
- int if execute=True (number of affected rows)
Example with vector type:
await execute_raw_with_schema(
'INSERT INTO {schema_prefix}"Embedding" (vec) VALUES ($1::{schema}.vector)',
embedding_data
)
"""
schema = get_database_schema()
schema_prefix = f'"{schema}".' if schema != "public" else ""
formatted_query = query_template.format(schema_prefix=schema_prefix)
formatted_query = query_template.format(
schema_prefix=schema_prefix,
schema=schema,
)
import prisma as prisma_module
db_client = client if client else prisma_module.get_client()
# Set search_path to include public schema if requested
# Prisma doesn't support the 'options' connection parameter, so we set it per-session
# This is idempotent and safe to call multiple times
if set_public_search_path:
await db_client.execute_raw(f"SET search_path = {schema}, public") # type: ignore
if execute:
result = await db_client.execute_raw(formatted_query, *args) # type: ignore
else:
@@ -167,16 +158,12 @@ async def _raw_with_schema(
return result
async def query_raw_with_schema(
query_template: str, *args, set_public_search_path: bool = False
) -> list[dict]:
async def query_raw_with_schema(query_template: str, *args) -> list[dict]:
"""Execute raw SQL SELECT query with proper schema handling.
Args:
query_template: SQL query with {schema_prefix} placeholder
query_template: SQL query with {schema_prefix} and/or {schema} placeholders
*args: Query parameters
set_public_search_path: If True, sets search_path to include public schema.
Needed for pgvector types and other public schema objects.
Returns:
List of result rows as dictionaries
@@ -187,23 +174,20 @@ async def query_raw_with_schema(
user_id
)
"""
return await _raw_with_schema(query_template, *args, execute=False, set_public_search_path=set_public_search_path) # type: ignore
return await _raw_with_schema(query_template, *args, execute=False) # type: ignore
async def execute_raw_with_schema(
query_template: str,
*args,
client: Prisma | None = None,
set_public_search_path: bool = False,
) -> int:
"""Execute raw SQL command (INSERT/UPDATE/DELETE) with proper schema handling.
Args:
query_template: SQL query with {schema_prefix} placeholder
query_template: SQL query with {schema_prefix} and/or {schema} placeholders
*args: Query parameters
client: Optional Prisma client for transactions
set_public_search_path: If True, sets search_path to include public schema.
Needed for pgvector types and other public schema objects.
Returns:
Number of affected rows
@@ -215,7 +199,7 @@ async def execute_raw_with_schema(
client=tx # Optional transaction client
)
"""
return await _raw_with_schema(query_template, *args, execute=True, client=client, set_public_search_path=set_public_search_path) # type: ignore
return await _raw_with_schema(query_template, *args, execute=True, client=client) # type: ignore
class BaseDbModel(BaseModel):
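
As a quick illustration of the substitution performed by `_raw_with_schema`
above (schema name hardcoded here for illustration; not part of the diff):

```python
schema = "platform"  # would come from get_database_schema()
schema_prefix = f'"{schema}".' if schema != "public" else ""

template = 'SELECT $1::{schema}.vector FROM {schema_prefix}"UnifiedContentEmbedding"'
print(template.format(schema_prefix=schema_prefix, schema=schema))
# -> SELECT $1::platform.vector FROM "platform"."UnifiedContentEmbedding"
```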

View File

@@ -328,6 +328,8 @@ async def clear_business_understanding(user_id: str) -> bool:
def format_understanding_for_prompt(understanding: BusinessUnderstanding) -> str:
"""Format business understanding as text for system prompt injection."""
if not understanding:
return ""
sections = []
# User info section

View File

@@ -1,9 +1,10 @@
-- CreateExtension
-- Supabase: pgvector must be enabled via Dashboard → Database → Extensions first
-- Create in public schema so vector type is available across all schemas
-- Creates extension in current schema (determined by search_path)
-- The extension may already exist in a different schema (e.g., Supabase pre-enables it)
DO $$
BEGIN
CREATE EXTENSION IF NOT EXISTS "vector" WITH SCHEMA "public";
CREATE EXTENSION IF NOT EXISTS "vector";
EXCEPTION WHEN OTHERS THEN
RAISE NOTICE 'vector extension not available or already exists, skipping';
END $$;
@@ -12,6 +13,7 @@ END $$;
CREATE TYPE "ContentType" AS ENUM ('STORE_AGENT', 'BLOCK', 'INTEGRATION', 'DOCUMENTATION', 'LIBRARY_AGENT');
-- CreateTable
-- Note: vector type is unqualified - relies on search_path including the schema where pgvector is installed
CREATE TABLE "UnifiedContentEmbedding" (
"id" TEXT NOT NULL,
"createdAt" TIMESTAMP(3) NOT NULL DEFAULT CURRENT_TIMESTAMP,
@@ -19,7 +21,7 @@ CREATE TABLE "UnifiedContentEmbedding" (
"contentType" "ContentType" NOT NULL,
"contentId" TEXT NOT NULL,
"userId" TEXT,
"embedding" public.vector(1536) NOT NULL,
"embedding" vector(1536) NOT NULL,
"searchableText" TEXT NOT NULL,
"metadata" JSONB NOT NULL DEFAULT '{}',
@@ -45,4 +47,4 @@ CREATE UNIQUE INDEX "UnifiedContentEmbedding_contentType_contentId_userId_key" O
-- Uses cosine distance operator (<=>), which matches the query in hybrid_search.py
-- Note: Drop first in case Prisma created a btree index (Prisma doesn't support HNSW)
DROP INDEX IF EXISTS "UnifiedContentEmbedding_embedding_idx";
CREATE INDEX "UnifiedContentEmbedding_embedding_idx" ON "UnifiedContentEmbedding" USING hnsw ("embedding" public.vector_cosine_ops);
CREATE INDEX "UnifiedContentEmbedding_embedding_idx" ON "UnifiedContentEmbedding" USING hnsw ("embedding" vector_cosine_ops);

View File

@@ -549,48 +549,9 @@ Files:
Types:
- Prefer `interface` for object shapes
- Component props should be `interface Props { ... }` (not exported)
- Only use specific exported names (e.g., `export interface MyComponentProps`) when the interface needs to be used outside the component
- Keep type definitions inline with the component - do not create separate `types.ts` files unless types are shared across multiple files
- Component props should be `interface Props { ... }`
- Use precise types; avoid `any` and unsafe casts
**Props naming examples:**
```tsx
// ✅ Good - internal props, not exported
interface Props {
title: string;
onClose: () => void;
}
export function Modal({ title, onClose }: Props) {
// ...
}
// ✅ Good - exported when needed externally
export interface ModalProps {
title: string;
onClose: () => void;
}
export function Modal({ title, onClose }: ModalProps) {
// ...
}
// ❌ Bad - unnecessarily specific name for internal use
interface ModalComponentProps {
title: string;
onClose: () => void;
}
// ❌ Bad - separate types.ts file for single component
// types.ts
export interface ModalProps { ... }
// Modal.tsx
import type { ModalProps } from './types';
```
Parameters:
- If more than one parameter is needed, pass a single `Args` object for clarity

View File

@@ -1,45 +0,0 @@
"use client";
import { LoadingSpinner } from "@/components/atoms/LoadingSpinner/LoadingSpinner";
import { Text } from "@/components/atoms/Text/Text";
import { useSupabase } from "@/lib/supabase/hooks/useSupabase";
import { useRouter } from "next/navigation";
import { useEffect, useRef } from "react";
const LOGOUT_REDIRECT_DELAY_MS = 400;
function wait(ms: number): Promise<void> {
return new Promise(function resolveAfterDelay(resolve) {
setTimeout(resolve, ms);
});
}
export default function LogoutPage() {
const { logOut } = useSupabase();
const router = useRouter();
const hasStartedRef = useRef(false);
useEffect(function handleLogoutEffect() {
if (hasStartedRef.current) return;
hasStartedRef.current = true;
async function runLogout() {
await logOut();
await wait(LOGOUT_REDIRECT_DELAY_MS);
router.replace("/login");
}
void runLogout();
}, []);
return (
<div className="flex min-h-screen items-center justify-center px-4">
<div className="flex flex-col items-center justify-center gap-4 py-8">
<LoadingSpinner size="large" />
<Text variant="body" className="text-center">
Logging you out...
</Text>
</div>
</div>
);
}

View File

@@ -9,7 +9,7 @@ export async function GET(request: Request) {
const { searchParams, origin } = new URL(request.url);
const code = searchParams.get("code");
let next = "/";
let next = "/marketplace";
if (code) {
const supabase = await getServerSupabase();

View File

@@ -3,32 +3,32 @@
import { Button } from "@/components/atoms/Button/Button";
import { Text } from "@/components/atoms/Text/Text";
import { cn } from "@/lib/utils";
import type { ReactNode } from "react";
import { List } from "@phosphor-icons/react";
import React, { useState } from "react";
import { ChatContainer } from "./components/ChatContainer/ChatContainer";
import { ChatErrorState } from "./components/ChatErrorState/ChatErrorState";
import { ChatLoader } from "./components/ChatLoader/ChatLoader";
import { ChatLoadingState } from "./components/ChatLoadingState/ChatLoadingState";
import { SessionsDrawer } from "./components/SessionsDrawer/SessionsDrawer";
import { useChat } from "./useChat";
export interface ChatProps {
className?: string;
headerTitle?: React.ReactNode;
showHeader?: boolean;
showSessionInfo?: boolean;
showNewChatButton?: boolean;
onNewChat?: () => void;
headerActions?: ReactNode;
urlSessionId?: string | null;
initialPrompt?: string | null;
headerActions?: React.ReactNode;
}
export function Chat({
className,
headerTitle = "AutoGPT Copilot",
showHeader = true,
showSessionInfo = true,
showNewChatButton = true,
onNewChat,
headerActions,
urlSessionId,
initialPrompt,
}: ChatProps) {
const {
messages,
@@ -38,20 +38,46 @@ export function Chat({
sessionId,
createSession,
clearSession,
showLoader,
} = useChat({ urlSessionId });
loadSession,
} = useChat();
function handleNewChat() {
const [isSessionsDrawerOpen, setIsSessionsDrawerOpen] = useState(false);
const handleNewChat = () => {
clearSession();
onNewChat?.();
}
};
const handleSelectSession = async (sessionId: string) => {
try {
await loadSession(sessionId);
} catch (err) {
console.error("Failed to load session:", err);
}
};
return (
<div className={cn("flex h-full flex-col", className)}>
{/* Header */}
{showHeader && (
<header className="shrink-0 bg-[#f8f8f9] p-3">
<header className="shrink-0 border-t border-zinc-200 bg-white p-3">
<div className="flex items-center justify-between">
<div className="flex items-center gap-3">
<button
aria-label="View sessions"
onClick={() => setIsSessionsDrawerOpen(true)}
className="flex size-8 items-center justify-center rounded hover:bg-zinc-100"
>
<List width="1.25rem" height="1.25rem" />
</button>
{typeof headerTitle === "string" ? (
<Text variant="h2" className="text-lg font-semibold">
{headerTitle}
</Text>
) : (
headerTitle
)}
</div>
<div className="flex items-center gap-3">
{showSessionInfo && sessionId && (
<>
@@ -73,17 +99,12 @@ export function Chat({
)}
{/* Main Content */}
<main className="flex min-h-0 w-full flex-1 flex-col overflow-hidden">
{/* Loading State - show loader when loading or creating a session (with 300ms delay) */}
{showLoader && (isLoading || isCreating) && (
<div className="flex flex-1 items-center justify-center">
<div className="flex flex-col items-center gap-4">
<ChatLoader />
<Text variant="body" className="text-zinc-500">
Loading your chats...
</Text>
</div>
</div>
<main className="flex min-h-0 flex-1 flex-col overflow-hidden">
{/* Loading State - show when explicitly loading/creating OR when we don't have a session yet and no error */}
{(isLoading || isCreating || (!sessionId && !error)) && (
<ChatLoadingState
message={isCreating ? "Creating session..." : "Loading..."}
/>
)}
{/* Error State */}
@@ -96,11 +117,18 @@ export function Chat({
<ChatContainer
sessionId={sessionId}
initialMessages={messages}
initialPrompt={initialPrompt}
className="flex-1"
/>
)}
</main>
{/* Sessions Drawer */}
<SessionsDrawer
isOpen={isSessionsDrawerOpen}
onClose={() => setIsSessionsDrawerOpen(false)}
onSelectSession={handleSelectSession}
currentSessionId={sessionId}
/>
</div>
);
}

View File

@@ -21,7 +21,7 @@ export function AuthPromptWidget({
message,
sessionId,
agentInfo,
returnUrl = "/copilot/chat",
returnUrl = "/chat",
className,
}: AuthPromptWidgetProps) {
const router = useRouter();

View File

@@ -1,23 +1,22 @@
import type { SessionDetailResponse } from "@/app/api/__generated__/models/sessionDetailResponse";
import { cn } from "@/lib/utils";
import { useCallback, useEffect, useRef } from "react";
import { useCallback } from "react";
import { usePageContext } from "../../usePageContext";
import { ChatInput } from "../ChatInput/ChatInput";
import { MessageList } from "../MessageList/MessageList";
import { QuickActionsWelcome } from "../QuickActionsWelcome/QuickActionsWelcome";
import { useChatContainer } from "./useChatContainer";
export interface ChatContainerProps {
sessionId: string | null;
initialMessages: SessionDetailResponse["messages"];
className?: string;
initialPrompt?: string | null;
}
export function ChatContainer({
sessionId,
initialMessages,
className,
initialPrompt,
}: ChatContainerProps) {
const { messages, streamingChunks, isStreaming, sendMessage } =
useChatContainer({
@@ -25,7 +24,6 @@ export function ChatContainer({
initialMessages,
});
const { capturePageContext } = usePageContext();
const hasSentInitialRef = useRef(false);
// Wrap sendMessage to automatically capture page context
const sendMessageWithContext = useCallback(
@@ -36,28 +34,35 @@ export function ChatContainer({
[sendMessage, capturePageContext],
);
useEffect(
function handleInitialPrompt() {
if (!initialPrompt) return;
if (hasSentInitialRef.current) return;
if (!sessionId) return;
if (messages.length > 0) return;
hasSentInitialRef.current = true;
void sendMessageWithContext(initialPrompt);
},
[initialPrompt, messages.length, sendMessageWithContext, sessionId],
);
const quickActions = [
"Find agents for social media management",
"Show me agents for content creation",
"Help me automate my business",
"What can you help me with?",
];
return (
<div
className={cn(
"mx-auto flex h-full min-h-0 w-full max-w-3xl flex-col bg-[#f8f8f9]",
className,
)}
className={cn("flex h-full min-h-0 flex-col", className)}
style={{
backgroundColor: "#ffffff",
backgroundImage:
"radial-gradient(#e5e5e5 0.5px, transparent 0.5px), radial-gradient(#e5e5e5 0.5px, #ffffff 0.5px)",
backgroundSize: "20px 20px",
backgroundPosition: "0 0, 10px 10px",
}}
>
{/* Messages or Welcome Screen - Scrollable */}
<div className="relative flex min-h-0 flex-1 flex-col overflow-y-auto">
<div className="flex min-h-full flex-col justify-end">
{/* Messages or Welcome Screen */}
<div className="flex min-h-0 flex-1 flex-col overflow-hidden pb-24">
{messages.length === 0 ? (
<QuickActionsWelcome
title="Welcome to AutoGPT Copilot"
description="Start a conversation to discover and run AI agents."
actions={quickActions}
onActionClick={sendMessageWithContext}
disabled={isStreaming || !sessionId}
/>
) : (
<MessageList
messages={messages}
streamingChunks={streamingChunks}
@@ -65,16 +70,17 @@ export function ChatContainer({
onSendMessage={sendMessageWithContext}
className="flex-1"
/>
</div>
)}
</div>
{/* Input - Fixed at bottom */}
<div className="relative pb-4 pt-2">
<div className="pointer-events-none absolute top-[-18px] z-10 h-6 w-full bg-gradient-to-b from-transparent to-[#f8f8f9]" />
{/* Input - Always visible */}
<div className="fixed bottom-0 left-0 right-0 z-50 border-t border-zinc-200 bg-white p-4">
<ChatInput
onSend={sendMessageWithContext}
disabled={isStreaming || !sessionId}
placeholder="You can search or just ask — e.g. “create a blog post outline”"
placeholder={
sessionId ? "Type your message..." : "Creating session..."
}
/>
</div>
</div>

View File

@@ -33,23 +33,13 @@ export function handleTextEnded(
console.log("[Text Ended] Saving streamed text as assistant message");
const completedText = deps.streamingChunksRef.current.join("");
if (completedText.trim()) {
deps.setMessages((prev) => {
const lastMessage = prev[prev.length - 1];
console.log("[Text Ended] Previous message:", {
type: lastMessage?.type,
toolName:
lastMessage?.type === "tool_call" ? lastMessage.toolName : undefined,
content: completedText.substring(0, 200),
});
const assistantMessage: ChatMessageData = {
type: "message",
role: "assistant",
content: completedText,
timestamp: new Date(),
};
return [...prev, assistantMessage];
});
const assistantMessage: ChatMessageData = {
type: "message",
role: "assistant",
content: completedText,
timestamp: new Date(),
};
deps.setMessages((prev) => [...prev, assistantMessage]);
}
deps.setStreamingChunks([]);
deps.streamingChunksRef.current = [];

View File

@@ -3,7 +3,7 @@ import { cn } from "@/lib/utils";
import { ArrowUpIcon } from "@phosphor-icons/react";
import { useChatInput } from "./useChatInput";
export interface Props {
export interface ChatInputProps {
onSend: (message: string) => void;
disabled?: boolean;
placeholder?: string;
@@ -15,7 +15,7 @@ export function ChatInput({
disabled = false,
placeholder = "Type your message...",
className,
}: Props) {
}: ChatInputProps) {
const inputId = "chat-input";
const { value, setValue, handleKeyDown, handleSend } = useChatInput({
onSend,

View File

@@ -1,5 +1,10 @@
"use client";
import { useGetV2GetUserProfile } from "@/app/api/__generated__/endpoints/store/store";
import Avatar, {
AvatarFallback,
AvatarImage,
} from "@/components/atoms/Avatar/Avatar";
import { Button } from "@/components/atoms/Button/Button";
import { useSupabase } from "@/lib/supabase/hooks/useSupabase";
import { cn } from "@/lib/utils";
@@ -8,19 +13,20 @@ import {
CheckCircleIcon,
CheckIcon,
CopyIcon,
RobotIcon,
} from "@phosphor-icons/react";
import { useRouter } from "next/navigation";
import { useCallback, useState } from "react";
import { getToolActionPhrase } from "../../helpers";
import { AgentCarouselMessage } from "../AgentCarouselMessage/AgentCarouselMessage";
import { AIChatBubble } from "../AIChatBubble/AIChatBubble";
import { AuthPromptWidget } from "../AuthPromptWidget/AuthPromptWidget";
import { ChatCredentialsSetup } from "../ChatCredentialsSetup/ChatCredentialsSetup";
import { ExecutionStartedMessage } from "../ExecutionStartedMessage/ExecutionStartedMessage";
import { MarkdownContent } from "../MarkdownContent/MarkdownContent";
import { MessageBubble } from "../MessageBubble/MessageBubble";
import { NoResultsMessage } from "../NoResultsMessage/NoResultsMessage";
import { ToolCallMessage } from "../ToolCallMessage/ToolCallMessage";
import { ToolResponseMessage } from "../ToolResponseMessage/ToolResponseMessage";
import { UserChatBubble } from "../UserChatBubble/UserChatBubble";
import { useChatMessage, type ChatMessageData } from "./useChatMessage";
export interface ChatMessageProps {
message: ChatMessageData;
@@ -29,7 +35,6 @@ export interface ChatMessageProps {
onDismissCredentials?: () => void;
onSendMessage?: (content: string, isUserMessage?: boolean) => void;
agentOutput?: ChatMessageData;
isFinalMessage?: boolean;
}
export function ChatMessage({
@@ -38,7 +43,6 @@ export function ChatMessage({
onDismissCredentials,
onSendMessage,
agentOutput,
isFinalMessage = true,
}: ChatMessageProps) {
const { user } = useSupabase();
const router = useRouter();
@@ -51,6 +55,14 @@ export function ChatMessage({
isCredentialsNeeded,
} = useChatMessage(message);
const { data: profile } = useGetV2GetUserProfile({
query: {
select: (res) => (res.status === 200 ? res.data : null),
enabled: isUser && !!user,
queryKey: ["/api/store/profile", user?.id],
},
});
const handleAllCredentialsComplete = useCallback(
function handleAllCredentialsComplete() {
// Send a user message that explicitly asks to retry the setup
@@ -159,11 +171,7 @@ export function ChatMessage({
if (isToolCall && message.type === "tool_call") {
return (
<div className={cn("px-4 py-2", className)}>
<ToolCallMessage
toolId={message.toolId}
toolName={message.toolName}
arguments={message.arguments}
/>
<ToolCallMessage toolName={message.toolName} />
</div>
);
}
@@ -210,11 +218,27 @@ export function ChatMessage({
// Render tool response messages (but skip agent_output if it's being rendered inside assistant message)
if (isToolResponse && message.type === "tool_response") {
// Check if this is an agent_output that should be rendered inside assistant message
if (message.result) {
let parsedResult: Record<string, unknown> | null = null;
try {
parsedResult =
typeof message.result === "string"
? JSON.parse(message.result)
: (message.result as Record<string, unknown>);
} catch {
parsedResult = null;
}
if (parsedResult?.type === "agent_output") {
// Skip rendering - this will be rendered inside the assistant message
return null;
}
}
return (
<div className={cn("px-4 py-2", className)}>
<ToolResponseMessage
toolId={message.toolId}
toolName={message.toolName}
toolName={getToolActionPhrase(message.toolName)}
result={message.result}
/>
</div>
@@ -232,33 +256,40 @@ export function ChatMessage({
)}
>
<div className="flex w-full max-w-3xl gap-3">
{!isUser && (
<div className="flex-shrink-0">
<div className="flex h-7 w-7 items-center justify-center rounded-lg bg-indigo-500">
<RobotIcon className="h-4 w-4 text-indigo-50" />
</div>
</div>
)}
<div
className={cn(
"flex min-w-0 flex-1 flex-col",
isUser && "items-end",
)}
>
{isUser ? (
<UserChatBubble>
<MarkdownContent content={message.content} />
</UserChatBubble>
) : (
<AIChatBubble>
<MarkdownContent content={message.content} />
{agentOutput && agentOutput.type === "tool_response" && (
<MessageBubble variant={isUser ? "user" : "assistant"}>
<MarkdownContent content={message.content} />
{agentOutput &&
agentOutput.type === "tool_response" &&
!isUser && (
<div className="mt-4">
<ToolResponseMessage
toolId={agentOutput.toolId}
toolName={agentOutput.toolName || "Agent Output"}
toolName={
agentOutput.toolName
? getToolActionPhrase(agentOutput.toolName)
: "Agent Output"
}
result={agentOutput.result}
/>
</div>
)}
</AIChatBubble>
)}
</MessageBubble>
<div
className={cn(
"flex gap-1",
"mt-1 flex gap-1",
isUser ? "justify-end" : "justify-start",
)}
>
@@ -272,22 +303,34 @@ export function ChatMessage({
<ArrowClockwise className="size-3 text-neutral-500" />
</Button>
)}
{(isUser || isFinalMessage) && (
<Button
variant="ghost"
size="icon"
onClick={handleCopy}
aria-label="Copy message"
>
{copied ? (
<CheckIcon className="size-3 text-green-600" />
) : (
<CopyIcon className="size-3 text-neutral-500" />
)}
</Button>
)}
<Button
variant="ghost"
size="icon"
onClick={handleCopy}
aria-label="Copy message"
>
{copied ? (
<CheckIcon className="size-3 text-green-600" />
) : (
<CopyIcon className="size-3 text-neutral-500" />
)}
</Button>
</div>
</div>
{isUser && (
<div className="flex-shrink-0">
<Avatar className="h-7 w-7">
<AvatarImage
src={profile?.avatar_url ?? ""}
alt={profile?.username ?? "User"}
/>
<AvatarFallback className="rounded-lg bg-neutral-200 text-neutral-600">
{profile?.username?.charAt(0)?.toUpperCase() || "U"}
</AvatarFallback>
</Avatar>
</div>
)}
</div>
</div>
);
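Note: the agent_output check above parses the tool result inline, and the same try/catch pattern recurs in MessageList and ToolResponseMessage below. A minimal sketch of a shared helper (hypothetical names parseToolResult and isAgentOutput, not part of this diff), assuming the ToolResult type that ToolResponseMessage imports from "@/types/chat":
import type { ToolResult } from "@/types/chat";
// Hypothetical helper, not part of this diff: centralizes the repeated
// "parse result, tolerate bad JSON, check the type tag" logic.
export function parseToolResult(
  result: ToolResult | string | undefined,
): Record<string, unknown> | null {
  if (!result) return null;
  try {
    return typeof result === "string"
      ? JSON.parse(result)
      : (result as Record<string, unknown>);
  } catch {
    return null;
  }
}
export function isAgentOutput(
  result: ToolResult | string | undefined,
): boolean {
  return parseToolResult(result)?.type === "agent_output";
}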

View File

@@ -13,9 +13,10 @@ export function MessageBubble({
className,
}: MessageBubbleProps) {
const userTheme = {
bg: "bg-purple-100",
border: "border-purple-100",
text: "text-slate-900",
bg: "bg-slate-900",
border: "border-slate-800",
gradient: "from-slate-900/30 via-slate-800/20 to-transparent",
text: "text-slate-50",
};
const assistantTheme = {
@@ -39,7 +40,9 @@ export function MessageBubble({
)}
>
{/* Gradient flare background */}
<div className={cn("absolute inset-0 bg-gradient-to-br")} />
<div
className={cn("absolute inset-0 bg-gradient-to-br", theme.gradient)}
/>
<div
className={cn(
"relative z-10 transition-all duration-500 ease-in-out",

View File

@@ -0,0 +1,121 @@
"use client";
import { cn } from "@/lib/utils";
import { ChatMessage } from "../ChatMessage/ChatMessage";
import type { ChatMessageData } from "../ChatMessage/useChatMessage";
import { StreamingMessage } from "../StreamingMessage/StreamingMessage";
import { ThinkingMessage } from "../ThinkingMessage/ThinkingMessage";
import { useMessageList } from "./useMessageList";
export interface MessageListProps {
messages: ChatMessageData[];
streamingChunks?: string[];
isStreaming?: boolean;
className?: string;
onStreamComplete?: () => void;
onSendMessage?: (content: string) => void;
}
export function MessageList({
messages,
streamingChunks = [],
isStreaming = false,
className,
onStreamComplete,
onSendMessage,
}: MessageListProps) {
const { messagesEndRef, messagesContainerRef } = useMessageList({
messageCount: messages.length,
isStreaming,
});
return (
<div
ref={messagesContainerRef}
className={cn(
"flex-1 overflow-y-auto",
"scrollbar-thin scrollbar-track-transparent scrollbar-thumb-zinc-300",
className,
)}
>
<div className="mx-auto flex max-w-3xl flex-col py-4">
{/* Render all persisted messages */}
{messages.map((message, index) => {
// Check if current message is an agent_output tool_response
// and if previous message is an assistant message
let agentOutput: ChatMessageData | undefined;
if (message.type === "tool_response" && message.result) {
let parsedResult: Record<string, unknown> | null = null;
try {
parsedResult =
typeof message.result === "string"
? JSON.parse(message.result)
: (message.result as Record<string, unknown>);
} catch {
parsedResult = null;
}
if (parsedResult?.type === "agent_output") {
const prevMessage = messages[index - 1];
if (
prevMessage &&
prevMessage.type === "message" &&
prevMessage.role === "assistant"
) {
// This agent output will be rendered inside the previous assistant message
// Skip rendering this message separately
return null;
}
}
}
// Check if next message is an agent_output tool_response to include in current assistant message
if (message.type === "message" && message.role === "assistant") {
const nextMessage = messages[index + 1];
if (
nextMessage &&
nextMessage.type === "tool_response" &&
nextMessage.result
) {
let parsedResult: Record<string, unknown> | null = null;
try {
parsedResult =
typeof nextMessage.result === "string"
? JSON.parse(nextMessage.result)
: (nextMessage.result as Record<string, unknown>);
} catch {
parsedResult = null;
}
if (parsedResult?.type === "agent_output") {
agentOutput = nextMessage;
}
}
}
return (
<ChatMessage
key={index}
message={message}
onSendMessage={onSendMessage}
agentOutput={agentOutput}
/>
);
})}
{/* Render thinking message when streaming but no chunks yet */}
{isStreaming && streamingChunks.length === 0 && <ThinkingMessage />}
{/* Render streaming message if active */}
{isStreaming && streamingChunks.length > 0 && (
<StreamingMessage
chunks={streamingChunks}
onComplete={onStreamComplete}
/>
)}
{/* Invisible div to scroll to */}
<div ref={messagesEndRef} />
</div>
</div>
);
}
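The two inline passes above implement one pairing rule: an agent_output tool_response that immediately follows an assistant message is skipped in the list and handed to that message as agentOutput. The same rule expressed as a standalone helper, a sketch only, reusing the hypothetical isAgentOutput from the note above:
function pairAgentOutput(
  messages: ChatMessageData[],
  index: number,
): ChatMessageData | undefined {
  const current = messages[index];
  if (current.type !== "message" || current.role !== "assistant") {
    return undefined;
  }
  const next = messages[index + 1];
  // Attach only when the very next message is an agent_output tool response.
  if (next?.type === "tool_response" && isAgentOutput(next.result)) {
    return next;
  }
  return undefined;
}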

View File

@@ -1,6 +1,7 @@
import { cn } from "@/lib/utils";
import { AIChatBubble } from "../AIChatBubble/AIChatBubble";
import { RobotIcon } from "@phosphor-icons/react";
import { MarkdownContent } from "../MarkdownContent/MarkdownContent";
import { MessageBubble } from "../MessageBubble/MessageBubble";
import { useStreamingMessage } from "./useStreamingMessage";
export interface StreamingMessageProps {
@@ -24,10 +25,16 @@ export function StreamingMessage({
)}
>
<div className="flex w-full max-w-3xl gap-3">
<div className="flex-shrink-0">
<div className="flex h-7 w-7 items-center justify-center rounded-lg bg-indigo-600">
<RobotIcon className="h-4 w-4 text-indigo-50" />
</div>
</div>
<div className="flex min-w-0 flex-1 flex-col">
<AIChatBubble>
<MessageBubble variant="assistant">
<MarkdownContent content={displayText} />
</AIChatBubble>
</MessageBubble>
</div>
</div>
</div>

View File

@@ -1,6 +1,7 @@
import { cn } from "@/lib/utils";
import { RobotIcon } from "@phosphor-icons/react";
import { useEffect, useRef, useState } from "react";
import { AIChatBubble } from "../AIChatBubble/AIChatBubble";
import { MessageBubble } from "../MessageBubble/MessageBubble";
export interface ThinkingMessageProps {
className?: string;
@@ -33,8 +34,14 @@ export function ThinkingMessage({ className }: ThinkingMessageProps) {
)}
>
<div className="flex w-full max-w-3xl gap-3">
<div className="flex-shrink-0">
<div className="flex h-7 w-7 items-center justify-center rounded-lg bg-indigo-500">
<RobotIcon className="h-4 w-4 text-indigo-50" />
</div>
</div>
<div className="flex min-w-0 flex-1 flex-col">
<AIChatBubble>
<MessageBubble variant="assistant">
<div className="transition-all duration-500 ease-in-out">
{showSlowLoader ? (
<div className="flex flex-col items-center gap-3 py-2">
@@ -55,7 +62,7 @@ export function ThinkingMessage({ className }: ThinkingMessageProps) {
</span>
)}
</div>
</AIChatBubble>
</MessageBubble>
</div>
</div>
</div>

View File

@@ -0,0 +1,24 @@
import { Text } from "@/components/atoms/Text/Text";
import { cn } from "@/lib/utils";
import { WrenchIcon } from "@phosphor-icons/react";
import { getToolActionPhrase } from "../../helpers";
export interface ToolCallMessageProps {
toolName: string;
className?: string;
}
export function ToolCallMessage({ toolName, className }: ToolCallMessageProps) {
return (
<div className={cn("flex items-center justify-center gap-2", className)}>
<WrenchIcon
size={14}
weight="bold"
className="flex-shrink-0 text-neutral-500"
/>
<Text variant="small" className="text-neutral-500">
{getToolActionPhrase(toolName)}...
</Text>
</div>
);
}
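Usage is a single prop; the rendered phrase comes from getToolActionPhrase, whose mapping is not shown in this diff (the "Running agent" text below is an assumption):
// Example usage; actual phrase depends on getToolActionPhrase.
function ExampleToolCall() {
  // Renders a wrench icon plus something like "Running agent..." in muted text.
  return <ToolCallMessage toolName="run_agent" />;
}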

View File

@@ -0,0 +1,260 @@
import { Text } from "@/components/atoms/Text/Text";
import "@/components/contextual/OutputRenderers";
import {
globalRegistry,
OutputItem,
} from "@/components/contextual/OutputRenderers";
import { cn } from "@/lib/utils";
import type { ToolResult } from "@/types/chat";
import { WrenchIcon } from "@phosphor-icons/react";
import { getToolActionPhrase } from "../../helpers";
export interface ToolResponseMessageProps {
toolName: string;
result?: ToolResult;
success?: boolean;
className?: string;
}
export function ToolResponseMessage({
toolName,
result,
success: _success = true,
className,
}: ToolResponseMessageProps) {
if (!result) {
return (
<div className={cn("flex items-center justify-center gap-2", className)}>
<WrenchIcon
size={14}
weight="bold"
className="flex-shrink-0 text-neutral-500"
/>
<Text variant="small" className="text-neutral-500">
{getToolActionPhrase(toolName)}...
</Text>
</div>
);
}
let parsedResult: Record<string, unknown> | null = null;
try {
parsedResult =
typeof result === "string"
? JSON.parse(result)
: (result as Record<string, unknown>);
} catch {
parsedResult = null;
}
if (parsedResult && typeof parsedResult === "object") {
const responseType = parsedResult.type as string | undefined;
if (responseType === "agent_output") {
const execution = parsedResult.execution as
| {
outputs?: Record<string, unknown[]>;
}
| null
| undefined;
const outputs = execution?.outputs || {};
const message = parsedResult.message as string | undefined;
return (
<div className={cn("space-y-4 px-4 py-2", className)}>
<div className="flex items-center gap-2">
<WrenchIcon
size={14}
weight="bold"
className="flex-shrink-0 text-neutral-500"
/>
<Text variant="small" className="text-neutral-500">
{getToolActionPhrase(toolName)}
</Text>
</div>
{message && (
<div className="rounded border p-4">
<Text variant="small" className="text-neutral-600">
{message}
</Text>
</div>
)}
{Object.keys(outputs).length > 0 && (
<div className="space-y-4">
{Object.entries(outputs).map(([outputName, values]) =>
values.map((value, index) => {
const renderer = globalRegistry.getRenderer(value);
if (renderer) {
return (
<OutputItem
key={`${outputName}-${index}`}
value={value}
renderer={renderer}
label={outputName}
/>
);
}
return (
<div
key={`${outputName}-${index}`}
className="rounded border p-4"
>
<Text variant="large-medium" className="mb-2 capitalize">
{outputName}
</Text>
<pre className="overflow-auto text-sm">
{JSON.stringify(value, null, 2)}
</pre>
</div>
);
}),
)}
</div>
)}
</div>
);
}
if (responseType === "block_output" && parsedResult.outputs) {
const outputs = parsedResult.outputs as Record<string, unknown[]>;
return (
<div className={cn("space-y-4 px-4 py-2", className)}>
<div className="flex items-center gap-2">
<WrenchIcon
size={14}
weight="bold"
className="flex-shrink-0 text-neutral-500"
/>
<Text variant="small" className="text-neutral-500">
{getToolActionPhrase(toolName)}
</Text>
</div>
<div className="space-y-4">
{Object.entries(outputs).map(([outputName, values]) =>
values.map((value, index) => {
const renderer = globalRegistry.getRenderer(value);
if (renderer) {
return (
<OutputItem
key={`${outputName}-${index}`}
value={value}
renderer={renderer}
label={outputName}
/>
);
}
return (
<div
key={`${outputName}-${index}`}
className="rounded border p-4"
>
<Text variant="large-medium" className="mb-2 capitalize">
{outputName}
</Text>
<pre className="overflow-auto text-sm">
{JSON.stringify(value, null, 2)}
</pre>
</div>
);
}),
)}
</div>
</div>
);
}
// Handle other response types with a message field (e.g., understanding_updated)
if (parsedResult.message && typeof parsedResult.message === "string") {
// Format tool name from snake_case to Title Case
const formattedToolName = toolName
.split("_")
.map((word) => word.charAt(0).toUpperCase() + word.slice(1))
.join(" ");
// Clean up message - remove incomplete user_name references
let cleanedMessage = parsedResult.message;
// Remove "Updated understanding with: user_name" pattern if user_name is just a placeholder
cleanedMessage = cleanedMessage.replace(
/Updated understanding with:\s*user_name\.?\s*/gi,
"",
);
// Remove standalone user_name references
cleanedMessage = cleanedMessage.replace(/\buser_name\b\.?\s*/gi, "");
cleanedMessage = cleanedMessage.trim();
// Only show message if it has content after cleaning
if (!cleanedMessage) {
return (
<div
className={cn(
"flex items-center justify-center gap-2 px-4 py-2",
className,
)}
>
<WrenchIcon
size={14}
weight="bold"
className="flex-shrink-0 text-neutral-500"
/>
<Text variant="small" className="text-neutral-500">
{formattedToolName}
</Text>
</div>
);
}
return (
<div className={cn("space-y-2 px-4 py-2", className)}>
<div className="flex items-center justify-center gap-2">
<WrenchIcon
size={14}
weight="bold"
className="flex-shrink-0 text-neutral-500"
/>
<Text variant="small" className="text-neutral-500">
{formattedToolName}
</Text>
</div>
<div className="rounded border p-4">
<Text variant="small" className="text-neutral-600">
{cleanedMessage}
</Text>
</div>
</div>
);
}
}
const renderer = globalRegistry.getRenderer(result);
if (renderer) {
return (
<div className={cn("px-4 py-2", className)}>
<div className="mb-2 flex items-center gap-2">
<WrenchIcon
size={14}
weight="bold"
className="flex-shrink-0 text-neutral-500"
/>
<Text variant="small" className="text-neutral-500">
{getToolActionPhrase(toolName)}
</Text>
</div>
<OutputItem value={result} renderer={renderer} />
</div>
);
}
return (
<div className={cn("flex items-center justify-center gap-2", className)}>
<WrenchIcon
size={14}
weight="bold"
className="flex-shrink-0 text-neutral-500"
/>
<Text variant="small" className="text-neutral-500">
{getToolActionPhrase(toolName)}...
</Text>
</div>
);
}
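The agent_output and block_output branches above duplicate the renderer lookup and the JSON fallback. The shared piece as a sketch (globalRegistry, OutputItem, and Text are the real imports at the top of this file; renderOutput is a hypothetical extraction, not part of this diff):
function renderOutput(outputName: string, value: unknown, index: number) {
  const renderer = globalRegistry.getRenderer(value);
  if (renderer) {
    return (
      <OutputItem
        key={`${outputName}-${index}`}
        value={value}
        renderer={renderer}
        label={outputName}
      />
    );
  }
  // Fallback: pretty-print anything the registry has no renderer for.
  return (
    <div key={`${outputName}-${index}`} className="rounded border p-4">
      <Text variant="large-medium" className="mb-2 capitalize">
        {outputName}
      </Text>
      <pre className="overflow-auto text-sm">
        {JSON.stringify(value, null, 2)}
      </pre>
    </div>
  );
}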

View File

@@ -1,20 +1,17 @@
"use client";
import { useSupabase } from "@/lib/supabase/hooks/useSupabase";
import { useEffect, useRef, useState } from "react";
import { useEffect, useRef } from "react";
import { toast } from "sonner";
import { useChatSession } from "./useChatSession";
import { useChatStream } from "./useChatStream";
interface UseChatArgs {
urlSessionId?: string | null;
}
export function useChat({ urlSessionId }: UseChatArgs = {}) {
export function useChat() {
const hasCreatedSessionRef = useRef(false);
const hasClaimedSessionRef = useRef(false);
const { user } = useSupabase();
const { sendMessage: sendStreamMessage } = useChatStream();
const [showLoader, setShowLoader] = useState(false);
const {
session,
sessionId: sessionIdFromHook,
@@ -27,10 +24,22 @@ export function useChat({ urlSessionId }: UseChatArgs = {}) {
clearSession: clearSessionBase,
loadSession,
} = useChatSession({
urlSessionId,
urlSessionId: null,
autoCreate: false,
});
useEffect(
function autoCreateSession() {
if (!hasCreatedSessionRef.current && !isCreating && !sessionIdFromHook) {
hasCreatedSessionRef.current = true;
createSession().catch((_err) => {
hasCreatedSessionRef.current = false;
});
}
},
[isCreating, sessionIdFromHook, createSession],
);
useEffect(
function autoClaimSession() {
if (
@@ -66,17 +75,6 @@ export function useChat({ urlSessionId }: UseChatArgs = {}) {
],
);
useEffect(() => {
if (isLoading || isCreating) {
const timer = setTimeout(() => {
setShowLoader(true);
}, 300);
return () => clearTimeout(timer);
} else {
setShowLoader(false);
}
}, [isLoading, isCreating]);
useEffect(function monitorNetworkStatus() {
function handleOnline() {
toast.success("Connection restored", {
@@ -101,6 +99,7 @@ export function useChat({ urlSessionId }: UseChatArgs = {}) {
function clearSession() {
clearSessionBase();
hasCreatedSessionRef.current = false;
hasClaimedSessionRef.current = false;
}
@@ -114,6 +113,5 @@ export function useChat({ urlSessionId }: UseChatArgs = {}) {
clearSession,
loadSession,
sessionId: sessionIdFromHook,
showLoader,
};
}
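The autoCreateSession effect above is a one-shot guard with rollback: the ref blocks repeat attempts once a request has been started, and is reset on failure so a later render can retry. The pattern in isolation, as a generic sketch (useOneShot is a hypothetical name):
import { useEffect, useRef } from "react";
// Run an async action at most once, but allow a retry if it fails.
function useOneShot(run: () => Promise<unknown>, blocked: boolean) {
  const attemptedRef = useRef(false);
  useEffect(() => {
    if (attemptedRef.current || blocked) return;
    attemptedRef.current = true;
    run().catch(() => {
      attemptedRef.current = false; // clear the guard so a retry can happen
    });
  }, [run, blocked]);
}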

View File

@@ -0,0 +1,27 @@
"use client";
import { Flag, useGetFlag } from "@/services/feature-flags/use-get-flag";
import { useRouter } from "next/navigation";
import { useEffect } from "react";
import { Chat } from "./components/Chat/Chat";
export default function ChatPage() {
const isChatEnabled = useGetFlag(Flag.CHAT);
const router = useRouter();
useEffect(() => {
if (isChatEnabled === false) {
router.push("/marketplace");
}
}, [isChatEnabled, router]);
if (isChatEnabled === null || isChatEnabled === false) {
return null;
}
return (
<div className="flex h-full flex-col">
<Chat className="flex-1" />
</div>
);
}

View File

@@ -1,24 +0,0 @@
"use client";
import { Chat } from "@/components/contextual/Chat/Chat";
import { useCopilotChatPage } from "./useCopilotChatPage";
export default function CopilotChatPage() {
const { isFlagReady, isChatEnabled, sessionId, prompt } =
useCopilotChatPage();
if (!isFlagReady || isChatEnabled === false) {
return null;
}
return (
<div className="flex h-full flex-col">
<Chat
className="flex-1"
urlSessionId={sessionId}
initialPrompt={prompt}
showNewChatButton={false}
/>
</div>
);
}

View File

@@ -1,44 +0,0 @@
"use client";
import { getHomepageRoute } from "@/lib/constants";
import {
Flag,
type FlagValues,
useGetFlag,
} from "@/services/feature-flags/use-get-flag";
import { useFlags } from "launchdarkly-react-client-sdk";
import { useRouter, useSearchParams } from "next/navigation";
import { useEffect } from "react";
export function useCopilotChatPage() {
const router = useRouter();
const searchParams = useSearchParams();
const isChatEnabled = useGetFlag(Flag.CHAT);
const flags = useFlags<FlagValues>();
const homepageRoute = getHomepageRoute(isChatEnabled);
const envEnabled = process.env.NEXT_PUBLIC_LAUNCHDARKLY_ENABLED === "true";
const clientId = process.env.NEXT_PUBLIC_LAUNCHDARKLY_CLIENT_ID;
const isLaunchDarklyConfigured = envEnabled && Boolean(clientId);
const isFlagReady =
!isLaunchDarklyConfigured || flags[Flag.CHAT] !== undefined;
const sessionId = searchParams.get("sessionId");
const prompt = searchParams.get("prompt");
useEffect(
function guardAccess() {
if (!isFlagReady) return;
if (isChatEnabled === false) {
router.replace(homepageRoute);
}
},
[homepageRoute, isChatEnabled, isFlagReady, router],
);
return {
isFlagReady,
isChatEnabled,
sessionId,
prompt,
};
}

View File

@@ -1,75 +0,0 @@
"use client";
import { NAVBAR_HEIGHT_PX } from "@/lib/constants";
import type { ReactNode } from "react";
import { DesktopSidebar } from "./components/DesktopSidebar/DesktopSidebar";
import { LoadingState } from "./components/LoadingState/LoadingState";
import { MobileDrawer } from "./components/MobileDrawer/MobileDrawer";
import { MobileHeader } from "./components/MobileHeader/MobileHeader";
import { useCopilotShell } from "./useCopilotShell";
interface Props {
children: ReactNode;
}
export function CopilotShell({ children }: Props) {
const {
isMobile,
isDrawerOpen,
isLoading,
sessions,
currentSessionId,
handleSelectSession,
handleOpenDrawer,
handleCloseDrawer,
handleDrawerOpenChange,
handleNewChat,
hasNextPage,
isFetchingNextPage,
fetchNextPage,
isReadyToShowContent,
} = useCopilotShell();
return (
<div
className="flex overflow-hidden bg-zinc-50"
style={{ height: `calc(100vh - ${NAVBAR_HEIGHT_PX}px)` }}
>
{!isMobile && (
<DesktopSidebar
sessions={sessions}
currentSessionId={currentSessionId}
isLoading={isLoading}
hasNextPage={hasNextPage}
isFetchingNextPage={isFetchingNextPage}
onSelectSession={handleSelectSession}
onFetchNextPage={fetchNextPage}
onNewChat={handleNewChat}
/>
)}
<div className="flex min-h-0 flex-1 flex-col">
{isMobile && <MobileHeader onOpenDrawer={handleOpenDrawer} />}
<div className="flex min-h-0 flex-1 flex-col">
{isReadyToShowContent ? children : <LoadingState />}
</div>
</div>
{isMobile && (
<MobileDrawer
isOpen={isDrawerOpen}
sessions={sessions}
currentSessionId={currentSessionId}
isLoading={isLoading}
hasNextPage={hasNextPage}
isFetchingNextPage={isFetchingNextPage}
onSelectSession={handleSelectSession}
onFetchNextPage={fetchNextPage}
onNewChat={handleNewChat}
onClose={handleCloseDrawer}
onOpenChange={handleDrawerOpenChange}
/>
)}
</div>
);
}

View File

@@ -1,66 +0,0 @@
import type { SessionSummaryResponse } from "@/app/api/__generated__/models/sessionSummaryResponse";
import { Button } from "@/components/atoms/Button/Button";
import { Text } from "@/components/atoms/Text/Text";
import { scrollbarStyles } from "@/components/styles/scrollbars";
import { cn } from "@/lib/utils";
import { Plus } from "@phosphor-icons/react";
import { SessionsList } from "../SessionsList/SessionsList";
interface Props {
sessions: SessionSummaryResponse[];
currentSessionId: string | null;
isLoading: boolean;
hasNextPage: boolean;
isFetchingNextPage: boolean;
onSelectSession: (sessionId: string) => void;
onFetchNextPage: () => void;
onNewChat: () => void;
}
export function DesktopSidebar({
sessions,
currentSessionId,
isLoading,
hasNextPage,
isFetchingNextPage,
onSelectSession,
onFetchNextPage,
onNewChat,
}: Props) {
return (
<aside className="flex h-full w-80 flex-col border-r border-zinc-100 bg-white">
<div className="shrink-0 px-6 py-4">
<Text variant="h3" size="body-medium">
Your chats
</Text>
</div>
<div
className={cn(
"flex min-h-0 flex-1 flex-col overflow-y-auto px-3 py-3",
scrollbarStyles,
)}
>
<SessionsList
sessions={sessions}
currentSessionId={currentSessionId}
isLoading={isLoading}
hasNextPage={hasNextPage}
isFetchingNextPage={isFetchingNextPage}
onSelectSession={onSelectSession}
onFetchNextPage={onFetchNextPage}
/>
</div>
<div className="shrink-0 bg-white p-3 shadow-[0_-4px_6px_-1px_rgba(0,0,0,0.05)]">
<Button
variant="primary"
size="small"
onClick={onNewChat}
className="w-full"
leftIcon={<Plus width="1rem" height="1rem" />}
>
New Chat
</Button>
</div>
</aside>
);
}

View File

@@ -1,15 +0,0 @@
import { Text } from "@/components/atoms/Text/Text";
import { ChatLoader } from "@/components/contextual/Chat/components/ChatLoader/ChatLoader";
export function LoadingState() {
return (
<div className="flex flex-1 items-center justify-center">
<div className="flex flex-col items-center gap-4">
<ChatLoader />
<Text variant="body" className="text-zinc-500">
Loading your chats...
</Text>
</div>
</div>
);
}

View File

@@ -1,87 +0,0 @@
import type { SessionSummaryResponse } from "@/app/api/__generated__/models/sessionSummaryResponse";
import { Button } from "@/components/atoms/Button/Button";
import { scrollbarStyles } from "@/components/styles/scrollbars";
import { cn } from "@/lib/utils";
import { Plus, X } from "@phosphor-icons/react";
import { Drawer } from "vaul";
import { SessionsList } from "../SessionsList/SessionsList";
interface Props {
isOpen: boolean;
sessions: SessionSummaryResponse[];
currentSessionId: string | null;
isLoading: boolean;
hasNextPage: boolean;
isFetchingNextPage: boolean;
onSelectSession: (sessionId: string) => void;
onFetchNextPage: () => void;
onNewChat: () => void;
onClose: () => void;
onOpenChange: (open: boolean) => void;
}
export function MobileDrawer({
isOpen,
sessions,
currentSessionId,
isLoading,
hasNextPage,
isFetchingNextPage,
onSelectSession,
onFetchNextPage,
onNewChat,
onClose,
onOpenChange,
}: Props) {
return (
<Drawer.Root open={isOpen} onOpenChange={onOpenChange} direction="left">
<Drawer.Portal>
<Drawer.Overlay className="fixed inset-0 z-[60] bg-black/10 backdrop-blur-sm" />
<Drawer.Content className="fixed left-0 top-0 z-[70] flex h-full w-80 flex-col border-r border-zinc-200 bg-white">
<div className="shrink-0 border-b border-zinc-200 p-4">
<div className="flex items-center justify-between">
<Drawer.Title className="text-lg font-semibold text-zinc-800">
Your tasks
</Drawer.Title>
<Button
variant="icon"
size="icon"
aria-label="Close sessions"
onClick={onClose}
>
<X width="1.25rem" height="1.25rem" />
</Button>
</div>
</div>
<div
className={cn(
"flex min-h-0 flex-1 flex-col overflow-y-auto px-3 py-3",
scrollbarStyles,
)}
>
<SessionsList
sessions={sessions}
currentSessionId={currentSessionId}
isLoading={isLoading}
hasNextPage={hasNextPage}
isFetchingNextPage={isFetchingNextPage}
onSelectSession={onSelectSession}
onFetchNextPage={onFetchNextPage}
/>
</div>
<div className="shrink-0 bg-white p-3 shadow-[0_-4px_6px_-1px_rgba(0,0,0,0.05)]">
<Button
variant="primary"
size="small"
onClick={onNewChat}
className="w-full"
leftIcon={<Plus width="1rem" height="1rem" />}
>
New Chat
</Button>
</div>
</Drawer.Content>
</Drawer.Portal>
</Drawer.Root>
);
}

View File

@@ -1,24 +0,0 @@
import { useState } from "react";
export function useMobileDrawer() {
const [isDrawerOpen, setIsDrawerOpen] = useState(false);
function handleOpenDrawer() {
setIsDrawerOpen(true);
}
function handleCloseDrawer() {
setIsDrawerOpen(false);
}
function handleDrawerOpenChange(open: boolean) {
setIsDrawerOpen(open);
}
return {
isDrawerOpen,
handleOpenDrawer,
handleCloseDrawer,
handleDrawerOpenChange,
};
}

View File

@@ -1,21 +0,0 @@
import { Button } from "@/components/atoms/Button/Button";
import { List } from "@phosphor-icons/react";
interface Props {
onOpenDrawer: () => void;
}
export function MobileHeader({ onOpenDrawer }: Props) {
return (
<header className="flex items-center justify-between px-4 py-3">
<Button
variant="icon"
size="icon"
aria-label="Open sessions"
onClick={onOpenDrawer}
>
<List width="1.25rem" height="1.25rem" />
</Button>
</header>
);
}

View File

@@ -1,80 +0,0 @@
import type { SessionSummaryResponse } from "@/app/api/__generated__/models/sessionSummaryResponse";
import { Skeleton } from "@/components/__legacy__/ui/skeleton";
import { Text } from "@/components/atoms/Text/Text";
import { InfiniteList } from "@/components/molecules/InfiniteList/InfiniteList";
import { cn } from "@/lib/utils";
import { getSessionTitle } from "../../helpers";
interface Props {
sessions: SessionSummaryResponse[];
currentSessionId: string | null;
isLoading: boolean;
hasNextPage: boolean;
isFetchingNextPage: boolean;
onSelectSession: (sessionId: string) => void;
onFetchNextPage: () => void;
}
export function SessionsList({
sessions,
currentSessionId,
isLoading,
hasNextPage,
isFetchingNextPage,
onSelectSession,
onFetchNextPage,
}: Props) {
if (isLoading) {
return (
<div className="space-y-1">
{Array.from({ length: 5 }).map((_, i) => (
<div key={i} className="rounded-lg px-3 py-2.5">
<Skeleton className="h-5 w-full" />
</div>
))}
</div>
);
}
if (sessions.length === 0) {
return (
<div className="flex items-center justify-center py-8">
<Text variant="body" className="text-zinc-500">
No sessions found
</Text>
</div>
);
}
return (
<InfiniteList
items={sessions}
hasMore={hasNextPage}
isFetchingMore={isFetchingNextPage}
onEndReached={onFetchNextPage}
className="space-y-1"
renderItem={(session) => {
const isActive = session.id === currentSessionId;
return (
<button
onClick={() => onSelectSession(session.id)}
className={cn(
"w-full rounded-lg px-3 py-2.5 text-left transition-colors",
isActive ? "bg-zinc-100" : "hover:bg-zinc-50",
)}
>
<Text
variant="body"
className={cn(
"font-normal",
isActive ? "text-zinc-600" : "text-zinc-800",
)}
>
{getSessionTitle(session)}
</Text>
</button>
);
}}
/>
);
}

View File

@@ -1,89 +0,0 @@
import { useGetV2ListSessions } from "@/app/api/__generated__/endpoints/chat/chat";
import type { SessionSummaryResponse } from "@/app/api/__generated__/models/sessionSummaryResponse";
import { okData } from "@/app/api/helpers";
import { useEffect, useMemo, useState } from "react";
const PAGE_SIZE = 50;
export interface UseSessionsPaginationArgs {
enabled: boolean;
}
export function useSessionsPagination({ enabled }: UseSessionsPaginationArgs) {
const [offset, setOffset] = useState(0);
const [accumulatedSessions, setAccumulatedSessions] = useState<
SessionSummaryResponse[]
>([]);
const [totalCount, setTotalCount] = useState<number | null>(null);
const { data, isLoading, isFetching, isError } = useGetV2ListSessions(
{ limit: PAGE_SIZE, offset },
{
query: {
enabled: enabled && offset >= 0,
},
},
);
useEffect(() => {
const responseData = okData(data);
if (responseData) {
const newSessions = responseData.sessions;
const total = responseData.total;
setTotalCount(total);
if (offset === 0) {
setAccumulatedSessions(newSessions);
} else {
setAccumulatedSessions((prev) => [...prev, ...newSessions]);
}
}
}, [data, offset]);
const hasNextPage = useMemo(() => {
if (totalCount === null) return false;
return accumulatedSessions.length < totalCount;
}, [accumulatedSessions.length, totalCount]);
const areAllSessionsLoaded = useMemo(() => {
if (totalCount === null) return false;
return (
accumulatedSessions.length >= totalCount && !isFetching && !isLoading
);
}, [accumulatedSessions.length, totalCount, isFetching, isLoading]);
useEffect(() => {
if (
hasNextPage &&
!isFetching &&
!isLoading &&
!isError &&
totalCount !== null
) {
setOffset((prev) => prev + PAGE_SIZE);
}
}, [hasNextPage, isFetching, isLoading, isError, totalCount]);
function fetchNextPage() {
if (hasNextPage && !isFetching) {
setOffset((prev) => prev + PAGE_SIZE);
}
}
function reset() {
setOffset(0);
setAccumulatedSessions([]);
setTotalCount(null);
}
return {
sessions: accumulatedSessions,
isLoading,
isFetching,
hasNextPage,
areAllSessionsLoaded,
totalCount,
fetchNextPage,
reset,
};
}

View File

@@ -1,167 +0,0 @@
import type { SessionDetailResponse } from "@/app/api/__generated__/models/sessionDetailResponse";
import type { SessionSummaryResponse } from "@/app/api/__generated__/models/sessionSummaryResponse";
import { format, formatDistanceToNow, isToday } from "date-fns";
export function convertSessionDetailToSummary(
session: SessionDetailResponse,
): SessionSummaryResponse {
return {
id: session.id,
created_at: session.created_at,
updated_at: session.updated_at,
title: undefined,
};
}
export function filterVisibleSessions(
sessions: SessionSummaryResponse[],
): SessionSummaryResponse[] {
return sessions.filter(
(session) => session.updated_at !== session.created_at,
);
}
export function getSessionTitle(session: SessionSummaryResponse): string {
if (session.title) return session.title;
const isNewSession = session.updated_at === session.created_at;
if (isNewSession) {
const createdDate = new Date(session.created_at);
if (isToday(createdDate)) {
return "Today";
}
return format(createdDate, "MMM d, yyyy");
}
return "Untitled Chat";
}
export function getSessionUpdatedLabel(
session: SessionSummaryResponse,
): string {
if (!session.updated_at) return "";
return formatDistanceToNow(new Date(session.updated_at), { addSuffix: true });
}
export function mergeCurrentSessionIntoList(
accumulatedSessions: SessionSummaryResponse[],
currentSessionId: string | null,
currentSessionData: SessionDetailResponse | undefined,
): SessionSummaryResponse[] {
const filteredSessions: SessionSummaryResponse[] = [];
if (accumulatedSessions.length > 0) {
const visibleSessions = filterVisibleSessions(accumulatedSessions);
if (currentSessionId) {
const currentInAll = accumulatedSessions.find(
(s) => s.id === currentSessionId,
);
if (currentInAll) {
const isInVisible = visibleSessions.some(
(s) => s.id === currentSessionId,
);
if (!isInVisible) {
filteredSessions.push(currentInAll);
}
}
}
filteredSessions.push(...visibleSessions);
}
if (currentSessionId && currentSessionData) {
const isCurrentInList = filteredSessions.some(
(s) => s.id === currentSessionId,
);
if (!isCurrentInList) {
const summarySession = convertSessionDetailToSummary(currentSessionData);
filteredSessions.unshift(summarySession);
}
}
return filteredSessions;
}
export function getCurrentSessionId(
searchParams: URLSearchParams,
storedSessionId: string | null,
): string | null {
const paramSessionId = searchParams.get("sessionId");
if (paramSessionId) return paramSessionId;
if (storedSessionId) return storedSessionId;
return null;
}
export function shouldAutoSelectSession(
areAllSessionsLoaded: boolean,
hasAutoSelectedSession: boolean,
paramSessionId: string | null,
visibleSessions: SessionSummaryResponse[],
accumulatedSessions: SessionSummaryResponse[],
isLoading: boolean,
totalCount: number | null,
): {
shouldSelect: boolean;
sessionIdToSelect: string | null;
shouldCreate: boolean;
} {
if (!areAllSessionsLoaded || hasAutoSelectedSession) {
return {
shouldSelect: false,
sessionIdToSelect: null,
shouldCreate: false,
};
}
if (paramSessionId) {
return {
shouldSelect: false,
sessionIdToSelect: null,
shouldCreate: false,
};
}
if (visibleSessions.length > 0) {
return {
shouldSelect: true,
sessionIdToSelect: visibleSessions[0].id,
shouldCreate: false,
};
}
if (accumulatedSessions.length === 0 && !isLoading && totalCount === 0) {
return { shouldSelect: false, sessionIdToSelect: null, shouldCreate: true };
}
if (totalCount === 0) {
return {
shouldSelect: false,
sessionIdToSelect: null,
shouldCreate: false,
};
}
return { shouldSelect: false, sessionIdToSelect: null, shouldCreate: false };
}
export function checkReadyToShowContent(
areAllSessionsLoaded: boolean,
paramSessionId: string | null,
accumulatedSessions: SessionSummaryResponse[],
isCurrentSessionLoading: boolean,
currentSessionData: SessionDetailResponse | undefined,
hasAutoSelectedSession: boolean,
): boolean {
if (!areAllSessionsLoaded) return false;
if (paramSessionId) {
const sessionFound = accumulatedSessions.some(
(s) => s.id === paramSessionId,
);
return (
sessionFound ||
(!isCurrentSessionLoading && currentSessionData !== undefined)
);
}
return hasAutoSelectedSession;
}

View File

@@ -1,202 +0,0 @@
"use client";
import {
postV2CreateSession,
useGetV2GetSession,
} from "@/app/api/__generated__/endpoints/chat/chat";
import { okData } from "@/app/api/helpers";
import { useBreakpoint } from "@/lib/hooks/useBreakpoint";
import { Key, storage } from "@/services/storage/local-storage";
import { useRouter, useSearchParams } from "next/navigation";
import { useCallback, useEffect, useMemo, useRef, useState } from "react";
import { useMobileDrawer } from "./components/MobileDrawer/useMobileDrawer";
import { useSessionsPagination } from "./components/SessionsList/useSessionsPagination";
import {
checkReadyToShowContent,
filterVisibleSessions,
getCurrentSessionId,
mergeCurrentSessionIntoList,
shouldAutoSelectSession,
} from "./helpers";
export function useCopilotShell() {
const router = useRouter();
const searchParams = useSearchParams();
const breakpoint = useBreakpoint();
const isMobile =
breakpoint === "base" || breakpoint === "sm" || breakpoint === "md";
const {
isDrawerOpen,
handleOpenDrawer,
handleCloseDrawer,
handleDrawerOpenChange,
} = useMobileDrawer();
const paginationEnabled = !isMobile || isDrawerOpen;
const {
sessions: accumulatedSessions,
isLoading: isSessionsLoading,
isFetching: isSessionsFetching,
hasNextPage,
areAllSessionsLoaded,
totalCount,
fetchNextPage,
reset: resetPagination,
} = useSessionsPagination({
enabled: paginationEnabled,
});
const storedSessionId = storage.get(Key.CHAT_SESSION_ID) ?? null;
const currentSessionId = useMemo(
() => getCurrentSessionId(searchParams, storedSessionId),
[searchParams, storedSessionId],
);
const { data: currentSessionData, isLoading: isCurrentSessionLoading } =
useGetV2GetSession(currentSessionId || "", {
query: {
enabled: !!currentSessionId && paginationEnabled,
select: okData,
},
});
const [hasAutoSelectedSession, setHasAutoSelectedSession] = useState(false);
const hasCreatedSessionRef = useRef(false);
const paramSessionId = searchParams.get("sessionId");
const createSessionAndNavigate = useCallback(
function createSessionAndNavigate() {
postV2CreateSession({ body: JSON.stringify({}) })
.then((response) => {
if (response.status === 200 && response.data) {
router.push(`/copilot/chat?sessionId=${response.data.id}`);
setHasAutoSelectedSession(true);
}
})
.catch(() => {
hasCreatedSessionRef.current = false;
});
},
[router],
);
useEffect(() => {
if (!areAllSessionsLoaded || hasAutoSelectedSession) return;
const visibleSessions = filterVisibleSessions(accumulatedSessions);
const autoSelect = shouldAutoSelectSession(
areAllSessionsLoaded,
hasAutoSelectedSession,
paramSessionId,
visibleSessions,
accumulatedSessions,
isSessionsLoading,
totalCount,
);
if (paramSessionId) {
setHasAutoSelectedSession(true);
return;
}
if (autoSelect.shouldSelect && autoSelect.sessionIdToSelect) {
setHasAutoSelectedSession(true);
router.push(`/copilot/chat?sessionId=${autoSelect.sessionIdToSelect}`);
} else if (autoSelect.shouldCreate && !hasCreatedSessionRef.current) {
hasCreatedSessionRef.current = true;
createSessionAndNavigate();
} else if (totalCount === 0) {
setHasAutoSelectedSession(true);
}
}, [
areAllSessionsLoaded,
accumulatedSessions,
paramSessionId,
hasAutoSelectedSession,
router,
isSessionsLoading,
totalCount,
createSessionAndNavigate,
]);
useEffect(() => {
if (paramSessionId) {
setHasAutoSelectedSession(true);
}
}, [paramSessionId]);
function resetAutoSelect() {
setHasAutoSelectedSession(false);
hasCreatedSessionRef.current = false;
}
// Reset pagination and auto-selection when query becomes disabled
useEffect(() => {
if (!paginationEnabled) {
resetPagination();
resetAutoSelect();
}
}, [paginationEnabled, resetPagination]);
const sessions = useMemo(
function getSessions() {
return mergeCurrentSessionIntoList(
accumulatedSessions,
currentSessionId,
currentSessionData,
);
},
[accumulatedSessions, currentSessionId, currentSessionData],
);
function handleSelectSession(sessionId: string) {
router.push(`/copilot/chat?sessionId=${sessionId}`);
if (isMobile) handleCloseDrawer();
}
function handleNewChat() {
storage.clean(Key.CHAT_SESSION_ID);
resetAutoSelect();
createSessionAndNavigate();
if (isMobile) handleCloseDrawer();
}
const isReadyToShowContent = useMemo(
() =>
checkReadyToShowContent(
areAllSessionsLoaded,
paramSessionId,
accumulatedSessions,
isCurrentSessionLoading,
currentSessionData,
hasAutoSelectedSession,
),
[
areAllSessionsLoaded,
paramSessionId,
accumulatedSessions,
isCurrentSessionLoading,
currentSessionData,
hasAutoSelectedSession,
],
);
return {
isMobile,
isDrawerOpen,
isLoading: isSessionsLoading || !areAllSessionsLoaded,
sessions,
currentSessionId,
handleSelectSession,
handleOpenDrawer,
handleCloseDrawer,
handleDrawerOpenChange,
handleNewChat,
hasNextPage,
isFetchingNextPage: isSessionsFetching,
fetchNextPage,
isReadyToShowContent,
};
}

View File

@@ -1,33 +0,0 @@
import type { User } from "@supabase/supabase-js";
export function getGreetingName(user?: User | null): string {
if (!user) return "there";
const metadata = user.user_metadata as Record<string, unknown> | undefined;
const fullName = metadata?.full_name;
const name = metadata?.name;
if (typeof fullName === "string" && fullName.trim()) {
return fullName.split(" ")[0];
}
if (typeof name === "string" && name.trim()) {
return name.split(" ")[0];
}
if (user.email) {
return user.email.split("@")[0];
}
return "there";
}
export function buildCopilotChatUrl(prompt: string): string {
const trimmed = prompt.trim();
if (!trimmed) return "/copilot/chat";
const encoded = encodeURIComponent(trimmed);
return `/copilot/chat?prompt=${encoded}`;
}
export function getQuickActions(): string[] {
return [
"Show me what I can automate",
"Design a custom workflow",
"Help me with content creation",
];
}

View File

@@ -1,6 +0,0 @@
import type { ReactNode } from "react";
import { CopilotShell } from "./components/CopilotShell/CopilotShell";
export default function CopilotLayout({ children }: { children: ReactNode }) {
return <CopilotShell>{children}</CopilotShell>;
}

View File

@@ -1,102 +0,0 @@
"use client";
import { Skeleton } from "@/components/__legacy__/ui/skeleton";
import { Button } from "@/components/atoms/Button/Button";
import { Input } from "@/components/atoms/Input/Input";
import { Text } from "@/components/atoms/Text/Text";
import { useSupabase } from "@/lib/supabase/hooks/useSupabase";
import { ArrowUpIcon } from "@phosphor-icons/react";
import { useCopilotHome } from "./useCopilotHome";
export default function CopilotPage() {
const {
greetingName,
value,
quickActions,
isFlagReady,
isChatEnabled,
handleChange,
handleSubmit,
handleKeyDown,
handleQuickAction,
} = useCopilotHome();
const { isUserLoading } = useSupabase();
if (!isFlagReady || isChatEnabled === false) {
return null;
}
const isLoading = isUserLoading;
return (
<div className="flex h-full flex-1 items-center justify-center overflow-y-auto px-6 py-10">
<div className="w-full max-w-2xl text-center">
{isLoading ? (
<>
<Skeleton className="mx-auto mb-3 h-8 w-64" />
<Skeleton className="mx-auto mb-8 h-6 w-80" />
<div className="mb-8">
<Skeleton className="mx-auto h-14 w-full max-w-2xl rounded-lg" />
</div>
<div className="flex flex-wrap items-center justify-center gap-3">
{Array.from({ length: 4 }).map((_, i) => (
<Skeleton key={i} className="h-9 w-48 rounded-md" />
))}
</div>
</>
) : (
<>
<Text variant="h2" className="mb-3 text-zinc-700">
Hey, <span className="text-violet-600">{greetingName}</span>
</Text>
<Text variant="h3" className="mb-8 text-zinc-900">
What do you want to automate?
</Text>
<form onSubmit={handleSubmit} className="mb-8">
<div className="relative">
<Input
id="copilot-prompt"
label="Copilot prompt"
hideLabel
type="textarea"
value={value}
onChange={handleChange}
onKeyDown={handleKeyDown}
rows={1}
placeholder='You can search or just ask - e.g. "create a blog post outline"'
wrapperClassName="mb-0"
className="min-h-[3.5rem] pr-12 text-base"
/>
<Button
type="submit"
variant="icon"
size="icon"
aria-label="Submit prompt"
className="absolute right-2 top-1/2 -translate-y-1/2 border-zinc-800 bg-zinc-800 text-white hover:border-zinc-900 hover:bg-zinc-900"
disabled={!value.trim()}
>
<ArrowUpIcon className="h-4 w-4" weight="bold" />
</Button>
</div>
</form>
<div className="flex flex-wrap items-center justify-center gap-3">
{quickActions.map((action) => (
<Button
key={action}
variant="outline"
size="small"
onClick={() => handleQuickAction(action)}
className="border-zinc-300 text-zinc-700 hover:border-zinc-400 hover:bg-zinc-50"
>
{action}
</Button>
))}
</div>
</>
)}
</div>
</div>
);
}

View File

@@ -1,90 +0,0 @@
"use client";
import { getHomepageRoute } from "@/lib/constants";
import { useSupabase } from "@/lib/supabase/hooks/useSupabase";
import {
Flag,
type FlagValues,
useGetFlag,
} from "@/services/feature-flags/use-get-flag";
import { useFlags } from "launchdarkly-react-client-sdk";
import { useRouter } from "next/navigation";
import { useEffect, useMemo, useState } from "react";
import {
buildCopilotChatUrl,
getGreetingName,
getQuickActions,
} from "./helpers";
export function useCopilotHome() {
const router = useRouter();
const { user } = useSupabase();
const [value, setValue] = useState("");
const isChatEnabled = useGetFlag(Flag.CHAT);
const flags = useFlags<FlagValues>();
const homepageRoute = getHomepageRoute(isChatEnabled);
const envEnabled = process.env.NEXT_PUBLIC_LAUNCHDARKLY_ENABLED === "true";
const clientId = process.env.NEXT_PUBLIC_LAUNCHDARKLY_CLIENT_ID;
const isLaunchDarklyConfigured = envEnabled && Boolean(clientId);
const isFlagReady =
!isLaunchDarklyConfigured || flags[Flag.CHAT] !== undefined;
const greetingName = useMemo(
function getName() {
return getGreetingName(user);
},
[user],
);
const quickActions = useMemo(function getActions() {
return getQuickActions();
}, []);
useEffect(
function ensureAccess() {
if (!isFlagReady) return;
if (isChatEnabled === false) {
router.replace(homepageRoute);
}
},
[homepageRoute, isChatEnabled, isFlagReady, router],
);
function handleChange(
event: React.ChangeEvent<HTMLInputElement | HTMLTextAreaElement>,
) {
setValue(event.target.value);
}
function handleSubmit(event: React.FormEvent<HTMLFormElement>) {
event.preventDefault();
if (!value.trim()) return;
router.push(buildCopilotChatUrl(value));
}
function handleKeyDown(
event: React.KeyboardEvent<HTMLInputElement | HTMLTextAreaElement>,
) {
if (event.key !== "Enter") return;
if (event.shiftKey) return;
event.preventDefault();
if (!value.trim()) return;
router.push(buildCopilotChatUrl(value));
}
function handleQuickAction(action: string) {
router.push(buildCopilotChatUrl(action));
}
return {
greetingName,
value,
quickActions,
isFlagReady,
isChatEnabled,
handleChange,
handleSubmit,
handleKeyDown,
handleQuickAction,
};
}

View File

@@ -1,8 +1,6 @@
"use client";
import { ErrorCard } from "@/components/molecules/ErrorCard/ErrorCard";
import { getHomepageRoute } from "@/lib/constants";
import { Flag, useGetFlag } from "@/services/feature-flags/use-get-flag";
import { useSearchParams } from "next/navigation";
import { Suspense } from "react";
import { getErrorDetails } from "./helpers";
@@ -11,8 +9,6 @@ function ErrorPageContent() {
const searchParams = useSearchParams();
const errorMessage = searchParams.get("message");
const errorDetails = getErrorDetails(errorMessage);
const isChatEnabled = useGetFlag(Flag.CHAT);
const homepageRoute = getHomepageRoute(isChatEnabled);
function handleRetry() {
// Auth-related errors should redirect to login
@@ -29,8 +25,8 @@ function ErrorPageContent() {
window.location.reload();
}, 2000);
} else {
// For server/network errors, go to home
window.location.href = homepageRoute;
// For server/network errors, go to marketplace
window.location.href = "/marketplace";
}
}

View File

@@ -180,7 +180,7 @@ export function RunAgentModal({
{/* Content */}
{hasAnySetupFields ? (
<div className="mt-4 pb-10">
<div className="mt-10 pb-32">
<RunAgentModalContextProvider
value={{
agent,

View File

@@ -29,7 +29,7 @@ export function ModalHeader({ agent }: ModalHeaderProps) {
<ShowMoreText
previewLimit={400}
variant="small"
className="mb-2 mt-4 !text-zinc-700"
className="mt-4 !text-zinc-700"
>
{agent.description}
</ShowMoreText>
@@ -40,8 +40,6 @@ export function ModalHeader({ agent }: ModalHeaderProps) {
<Text variant="lead-semibold" className="text-blue-600">
Tip
</Text>
<div className="h-px w-full bg-blue-100" />
<Text variant="body">
For best results, run this agent{" "}
{humanizeCronExpression(
@@ -52,7 +50,7 @@ export function ModalHeader({ agent }: ModalHeaderProps) {
) : null}
{agent.instructions ? (
<div className="mt-4 flex flex-col gap-4 rounded-medium border border-purple-100 bg-[#f1ebfe80] p-4">
<div className="flex flex-col gap-4 rounded-medium border border-purple-100 bg-[#F1EBFE]/5 p-4">
<Text variant="lead-semibold" className="text-purple-600">
Instructions
</Text>

View File

@@ -8,8 +8,6 @@ import { useGetV2GetUserProfile } from "@/app/api/__generated__/endpoints/store/
import { LibraryAgent } from "@/app/api/__generated__/models/libraryAgent";
import { okData } from "@/app/api/helpers";
import { useToast } from "@/components/molecules/Toast/use-toast";
import { isLogoutInProgress } from "@/lib/autogpt-server-api/helpers";
import { useSupabase } from "@/lib/supabase/hooks/useSupabase";
import { updateFavoriteInQueries } from "./helpers";
interface Props {
@@ -25,14 +23,10 @@ export function useLibraryAgentCard({ agent }: Props) {
const { toast } = useToast();
const queryClient = getQueryClient();
const { mutateAsync: updateLibraryAgent } = usePatchV2UpdateLibraryAgent();
const { user, isLoggedIn } = useSupabase();
const logoutInProgress = isLogoutInProgress();
const { data: profile } = useGetV2GetUserProfile({
query: {
select: okData,
enabled: isLoggedIn && !!user && !logoutInProgress,
queryKey: ["/api/store/profile", user?.id],
},
});

View File

@@ -1,8 +1,6 @@
import { useToast } from "@/components/molecules/Toast/use-toast";
import { getHomepageRoute } from "@/lib/constants";
import { useSupabase } from "@/lib/supabase/hooks/useSupabase";
import { environment } from "@/services/environment";
import { Flag, useGetFlag } from "@/services/feature-flags/use-get-flag";
import { loginFormSchema, LoginProvider } from "@/types/auth";
import { zodResolver } from "@hookform/resolvers/zod";
import { useRouter, useSearchParams } from "next/navigation";
@@ -22,17 +20,15 @@ export function useLoginPage() {
const [isGoogleLoading, setIsGoogleLoading] = useState(false);
const [showNotAllowedModal, setShowNotAllowedModal] = useState(false);
const isCloudEnv = environment.isCloud();
const isChatEnabled = useGetFlag(Flag.CHAT);
const homepageRoute = getHomepageRoute(isChatEnabled);
// Get redirect destination from 'next' query parameter
const nextUrl = searchParams.get("next");
useEffect(() => {
if (isLoggedIn && !isLoggingIn) {
router.push(nextUrl || homepageRoute);
router.push(nextUrl || "/marketplace");
}
}, [homepageRoute, isLoggedIn, isLoggingIn, nextUrl, router]);
}, [isLoggedIn, isLoggingIn, nextUrl, router]);
const form = useForm<z.infer<typeof loginFormSchema>>({
resolver: zodResolver(loginFormSchema),
@@ -102,7 +98,7 @@ export function useLoginPage() {
} else if (result.onboarding) {
router.replace("/onboarding");
} else {
router.replace(homepageRoute);
router.replace("/marketplace");
}
} catch (error) {
toast({

View File

@@ -3,14 +3,12 @@
import { useGetV2GetUserProfile } from "@/app/api/__generated__/endpoints/store/store";
import { ProfileInfoForm } from "@/components/__legacy__/ProfileInfoForm";
import { ErrorCard } from "@/components/molecules/ErrorCard/ErrorCard";
import { isLogoutInProgress } from "@/lib/autogpt-server-api/helpers";
import { ProfileDetails } from "@/lib/autogpt-server-api/types";
import { useSupabase } from "@/lib/supabase/hooks/useSupabase";
import { ProfileLoading } from "./ProfileLoading";
export default function UserProfilePage() {
const { user } = useSupabase();
const logoutInProgress = isLogoutInProgress();
const {
data: profile,
@@ -20,7 +18,7 @@ export default function UserProfilePage() {
refetch,
} = useGetV2GetUserProfile<ProfileDetails | null>({
query: {
enabled: !!user && !logoutInProgress,
enabled: !!user,
select: (res) => {
if (res.status === 200) {
return {

View File

@@ -1,6 +1,5 @@
"use server";
import { getHomepageRoute } from "@/lib/constants";
import { getServerSupabase } from "@/lib/supabase/server/getServerSupabase";
import { signupFormSchema } from "@/types/auth";
import * as Sentry from "@sentry/nextjs";
@@ -59,7 +58,7 @@ export async function signup(
}
const isOnboardingEnabled = await shouldShowOnboarding();
const next = isOnboardingEnabled ? "/onboarding" : getHomepageRoute();
const next = isOnboardingEnabled ? "/onboarding" : "/";
return { success: true, next };
} catch (err) {

View File

@@ -1,8 +1,6 @@
import { useToast } from "@/components/molecules/Toast/use-toast";
import { getHomepageRoute } from "@/lib/constants";
import { useSupabase } from "@/lib/supabase/hooks/useSupabase";
import { environment } from "@/services/environment";
import { Flag, useGetFlag } from "@/services/feature-flags/use-get-flag";
import { LoginProvider, signupFormSchema } from "@/types/auth";
import { zodResolver } from "@hookform/resolvers/zod";
import { useRouter, useSearchParams } from "next/navigation";
@@ -22,17 +20,15 @@ export function useSignupPage() {
const [isGoogleLoading, setIsGoogleLoading] = useState(false);
const [showNotAllowedModal, setShowNotAllowedModal] = useState(false);
const isCloudEnv = environment.isCloud();
const isChatEnabled = useGetFlag(Flag.CHAT);
const homepageRoute = getHomepageRoute(isChatEnabled);
// Get redirect destination from 'next' query parameter
const nextUrl = searchParams.get("next");
useEffect(() => {
if (isLoggedIn && !isSigningUp) {
router.push(nextUrl || homepageRoute);
router.push(nextUrl || "/marketplace");
}
}, [homepageRoute, isLoggedIn, isSigningUp, nextUrl, router]);
}, [isLoggedIn, isSigningUp, nextUrl, router]);
const form = useForm<z.infer<typeof signupFormSchema>>({
resolver: zodResolver(signupFormSchema),
@@ -133,7 +129,7 @@ export function useSignupPage() {
}
// Prefer the URL's next parameter, then result.next (for onboarding), then default
const redirectTo = nextUrl || result.next || homepageRoute;
const redirectTo = nextUrl || result.next || "/";
router.replace(redirectTo);
} catch (error) {
setIsLoading(false);

View File

@@ -1,33 +1,5 @@
"use client";
import { getHomepageRoute } from "@/lib/constants";
import {
Flag,
type FlagValues,
useGetFlag,
} from "@/services/feature-flags/use-get-flag";
import { useFlags } from "launchdarkly-react-client-sdk";
import { useRouter } from "next/navigation";
import { useEffect } from "react";
import { redirect } from "next/navigation";
export default function Page() {
const isChatEnabled = useGetFlag(Flag.CHAT);
const flags = useFlags<FlagValues>();
const router = useRouter();
const homepageRoute = getHomepageRoute(isChatEnabled);
const envEnabled = process.env.NEXT_PUBLIC_LAUNCHDARKLY_ENABLED === "true";
const clientId = process.env.NEXT_PUBLIC_LAUNCHDARKLY_CLIENT_ID;
const isLaunchDarklyConfigured = envEnabled && Boolean(clientId);
const isFlagReady =
!isLaunchDarklyConfigured || flags[Flag.CHAT] !== undefined;
useEffect(
function redirectToHomepage() {
if (!isFlagReady) return;
router.replace(homepageRoute);
},
[homepageRoute, isFlagReady, router],
);
return null;
redirect("/marketplace");
}

View File

@@ -1,15 +0,0 @@
import { cn } from "@/lib/utils";
import { ReactNode } from "react";
export interface AIChatBubbleProps {
children: ReactNode;
className?: string;
}
export function AIChatBubble({ children, className }: AIChatBubbleProps) {
return (
<div className={cn("text-left text-sm leading-relaxed", className)}>
{children}
</div>
);
}

View File

@@ -1,13 +0,0 @@
.loader {
width: 20px;
aspect-ratio: 1;
border-radius: 50%;
background: #000;
box-shadow: 0 0 0 0 #0004;
animation: l1 1s infinite;
}
@keyframes l1 {
100% {
box-shadow: 0 0 0 30px #0000;
}
}

View File

@@ -1,5 +0,0 @@
import styles from "./ChatLoader.module.css";
export function ChatLoader() {
return <div className={styles.loader} />;
}

View File

@@ -1,105 +0,0 @@
"use client";
import { cn } from "@/lib/utils";
import type { ChatMessageData } from "../ChatMessage/useChatMessage";
import { StreamingMessage } from "../StreamingMessage/StreamingMessage";
import { ThinkingMessage } from "../ThinkingMessage/ThinkingMessage";
import { LastToolResponse } from "./components/LastToolResponse/LastToolResponse";
import { MessageItem } from "./components/MessageItem/MessageItem";
import { findLastMessageIndex, shouldSkipAgentOutput } from "./helpers";
import { useMessageList } from "./useMessageList";
export interface MessageListProps {
messages: ChatMessageData[];
streamingChunks?: string[];
isStreaming?: boolean;
className?: string;
onStreamComplete?: () => void;
onSendMessage?: (content: string) => void;
}
export function MessageList({
messages,
streamingChunks = [],
isStreaming = false,
className,
onStreamComplete,
onSendMessage,
}: MessageListProps) {
const { messagesEndRef, messagesContainerRef } = useMessageList({
messageCount: messages.length,
isStreaming,
});
return (
<div
ref={messagesContainerRef}
className={cn(
"flex-1 overflow-y-auto overflow-x-hidden",
"scrollbar-thin scrollbar-track-transparent scrollbar-thumb-zinc-300",
className,
)}
>
<div className="mx-auto flex min-w-0 flex-col hyphens-auto break-words py-4">
{/* Render all persisted messages */}
{(() => {
const lastAssistantMessageIndex = findLastMessageIndex(
messages,
(msg) => msg.type === "message" && msg.role === "assistant",
);
const lastToolResponseIndex = findLastMessageIndex(
messages,
(msg) => msg.type === "tool_response",
);
return messages.map((message, index) => {
// Skip agent_output tool_responses that should be rendered inside assistant messages
if (shouldSkipAgentOutput(message, messages[index - 1])) {
return null;
}
// Render last tool_response as AIChatBubble
if (
message.type === "tool_response" &&
index === lastToolResponseIndex
) {
return (
<LastToolResponse
key={index}
message={message}
prevMessage={messages[index - 1]}
/>
);
}
return (
<MessageItem
key={index}
message={message}
messages={messages}
index={index}
lastAssistantMessageIndex={lastAssistantMessageIndex}
onSendMessage={onSendMessage}
/>
);
});
})()}
{/* Render thinking message when streaming but no chunks yet */}
{isStreaming && streamingChunks.length === 0 && <ThinkingMessage />}
{/* Render streaming message if active */}
{isStreaming && streamingChunks.length > 0 && (
<StreamingMessage
chunks={streamingChunks}
onComplete={onStreamComplete}
/>
)}
{/* Invisible div to scroll to */}
<div ref={messagesEndRef} />
</div>
</div>
);
}

View File
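The deleted `MessageList` kept persisted and streaming state separate: `messages` render once, while in-flight output arrives as `streamingChunks`, with `isStreaming` toggling the thinking indicator until the first chunk lands. A hypothetical parent wiring, purely to illustrate the prop contract — the container, state handling, and message shapes here are invented, not taken from the codebase:

```tsx
import { useState } from "react";
import { MessageList } from "./MessageList";
import type { ChatMessageData } from "../ChatMessage/useChatMessage";

// Illustrative container — the real chat page wiring is not part of this diff.
export function ChatPanel() {
  const [messages, setMessages] = useState<ChatMessageData[]>([]);
  const [chunks, setChunks] = useState<string[]>([]);
  const [isStreaming, setIsStreaming] = useState(false);

  function handleSend(content: string) {
    // Persist the user message, then start streaming the reply; chunks
    // would be appended via setChunks as they arrive over the wire.
    setMessages((prev) => [
      ...prev,
      { type: "message", role: "user", content } as ChatMessageData,
    ]);
    setChunks([]);
    setIsStreaming(true);
  }

  return (
    <MessageList
      messages={messages}
      streamingChunks={chunks}
      isStreaming={isStreaming}
      // Signalled once the streaming message finishes rendering its chunks.
      onStreamComplete={() => setIsStreaming(false)}
      // Bubbles a user-initiated prompt (e.g. a suggested reply) upward.
      onSendMessage={handleSend}
    />
  );
}
```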

@@ -1,29 +0,0 @@
import { AIChatBubble } from "../../../AIChatBubble/AIChatBubble";
import type { ChatMessageData } from "../../../ChatMessage/useChatMessage";
import { MarkdownContent } from "../../../MarkdownContent/MarkdownContent";
import { formatToolResultValue, shouldSkipAgentOutput } from "../../helpers";
export interface LastToolResponseProps {
message: ChatMessageData;
prevMessage: ChatMessageData | undefined;
}
export function LastToolResponse({
message,
prevMessage,
}: LastToolResponseProps) {
if (message.type !== "tool_response") return null;
// Skip if this is an agent_output that should be rendered inside assistant message
if (shouldSkipAgentOutput(message, prevMessage)) return null;
const resultValue = formatToolResultValue(message.result);
return (
<div className="min-w-0 overflow-x-hidden hyphens-auto break-words px-4 py-2">
<AIChatBubble>
<MarkdownContent content={resultValue} />
</AIChatBubble>
</div>
);
}

View File

@@ -1,35 +0,0 @@
import { ChatMessage } from "../../../ChatMessage/ChatMessage";
import type { ChatMessageData } from "../../../ChatMessage/useChatMessage";
import { useMessageItem } from "./useMessageItem";
export interface MessageItemProps {
message: ChatMessageData;
messages: ChatMessageData[];
index: number;
lastAssistantMessageIndex: number;
onSendMessage?: (content: string) => void;
}
export function MessageItem({
message,
messages,
index,
lastAssistantMessageIndex,
onSendMessage,
}: MessageItemProps) {
const { messageToRender, agentOutput, isFinalMessage } = useMessageItem({
message,
messages,
index,
lastAssistantMessageIndex,
});
return (
<ChatMessage
message={messageToRender}
onSendMessage={onSendMessage}
agentOutput={agentOutput}
isFinalMessage={isFinalMessage}
/>
);
}

View File

@@ -1,83 +0,0 @@
import type { ChatMessageData } from "../../../ChatMessage/useChatMessage";
import { isAgentOutputResult, isToolOutputPattern } from "../../helpers";
export interface UseMessageItemArgs {
message: ChatMessageData;
messages: ChatMessageData[];
index: number;
lastAssistantMessageIndex: number;
}
export function useMessageItem({
message,
messages,
index,
lastAssistantMessageIndex,
}: UseMessageItemArgs) {
let agentOutput: ChatMessageData | undefined;
let messageToRender: ChatMessageData = message;
// Check if assistant message follows a tool_call and looks like a tool output
if (message.type === "message" && message.role === "assistant") {
const prevMessage = messages[index - 1];
// Check if next message is an agent_output tool_response to include in current assistant message
const nextMessage = messages[index + 1];
if (
nextMessage &&
nextMessage.type === "tool_response" &&
nextMessage.result
) {
if (isAgentOutputResult(nextMessage.result)) {
agentOutput = nextMessage;
}
}
// Only convert to tool_response if it follows a tool_call AND looks like a tool output
if (prevMessage && prevMessage.type === "tool_call") {
if (isToolOutputPattern(message.content)) {
// Convert this message to a tool_response format for rendering
messageToRender = {
type: "tool_response",
toolId: prevMessage.toolId,
toolName: prevMessage.toolName,
result: message.content,
success: true,
timestamp: message.timestamp,
} as ChatMessageData;
console.log(
"[MessageItem] Converting assistant message to tool output:",
{
content: message.content.substring(0, 100),
prevToolName: prevMessage.toolName,
},
);
}
}
// Log for debugging
if (message.type === "message" && message.role === "assistant") {
const prevMessageToolName =
prevMessage?.type === "tool_call" ? prevMessage.toolName : undefined;
console.log("[MessageItem] Assistant message:", {
index,
content: message.content.substring(0, 200),
fullContent: message.content,
prevMessageType: prevMessage?.type,
prevMessageToolName,
});
}
}
const isFinalMessage =
messageToRender.type !== "message" ||
messageToRender.role !== "assistant" ||
index === lastAssistantMessageIndex;
return {
messageToRender,
agentOutput,
isFinalMessage,
};
}

View File
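The conversion heuristic in `useMessageItem` is easiest to see with concrete data. Despite its name, the function holds no React state, so it can be exercised directly. In the invented sequence below, a terse "no agents found" reply immediately follows a `find_agent` tool call, so the hook rewrites it into a `tool_response` for rendering:

```ts
// Sample data, invented for illustration; shapes follow ChatMessageData.
const messages = [
  { type: "tool_call", toolId: "call_1", toolName: "find_agent" },
  {
    type: "message",
    role: "assistant",
    content: "No agents found matching your search.",
    timestamp: "2026-01-19T12:00:00Z",
  },
] as ChatMessageData[];

const { messageToRender, isFinalMessage } = useMessageItem({
  message: messages[1],
  messages,
  index: 1,
  lastAssistantMessageIndex: 1,
});

// The content matches isToolOutputPattern and follows a tool_call, so
// messageToRender.type === "tool_response" with toolName "find_agent".
// isFinalMessage === true as well, because the rewrite changed the type
// away from "message" (the first clause of the isFinalMessage check).
```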

@@ -1,68 +0,0 @@
import type { ChatMessageData } from "../ChatMessage/useChatMessage";
export function parseToolResult(
result: unknown,
): Record<string, unknown> | null {
try {
return typeof result === "string"
? JSON.parse(result)
: (result as Record<string, unknown>);
} catch {
return null;
}
}
export function isAgentOutputResult(result: unknown): boolean {
const parsed = parseToolResult(result);
return parsed?.type === "agent_output";
}
export function isToolOutputPattern(content: string): boolean {
const normalizedContent = content.toLowerCase().trim();
return (
normalizedContent.startsWith("no agents found") ||
normalizedContent.startsWith("no results found") ||
normalizedContent.includes("no agents found matching") ||
!!normalizedContent.match(/^no \w+ found/i) ||
(content.length < 150 && normalizedContent.includes("try different")) ||
(content.length < 200 &&
!normalizedContent.includes("i'll") &&
!normalizedContent.includes("let me") &&
!normalizedContent.includes("i can") &&
!normalizedContent.includes("i will"))
);
}
export function formatToolResultValue(result: unknown): string {
return typeof result === "string"
? result
: result
? JSON.stringify(result, null, 2)
: "";
}
export function findLastMessageIndex(
messages: ChatMessageData[],
predicate: (msg: ChatMessageData) => boolean,
): number {
for (let i = messages.length - 1; i >= 0; i--) {
if (predicate(messages[i])) return i;
}
return -1;
}
export function shouldSkipAgentOutput(
message: ChatMessageData,
prevMessage: ChatMessageData | undefined,
): boolean {
if (message.type !== "tool_response" || !message.result) return false;
const isAgentOutput = isAgentOutputResult(message.result);
return (
isAgentOutput &&
!!prevMessage &&
prevMessage.type === "message" &&
prevMessage.role === "assistant"
);
}

View File
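Since the deleted helpers are pure functions, a few concrete calls pin down their behavior (inputs invented for illustration):

```ts
// Short factual strings without first-person framing read as tool output.
isToolOutputPattern("No agents found matching your search."); // true
isToolOutputPattern("I'll search the marketplace for you now."); // false

// isAgentOutputResult accepts JSON strings or already-parsed objects.
isAgentOutputResult('{"type":"agent_output","output":{}}'); // true
isAgentOutputResult({ type: "agent_output" }); // true
isAgentOutputResult("plain text"); // false — JSON.parse throws, yielding null

// findLastMessageIndex scans from the tail; -1 when nothing matches.
findLastMessageIndex([], (msg) => msg.type === "tool_response"); // -1
```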

@@ -1,33 +0,0 @@
import { Text } from "@/components/atoms/Text/Text";
import type { ToolArguments } from "@/types/chat";
import { AIChatBubble } from "../AIChatBubble/AIChatBubble";
export interface ToolCallMessageProps {
toolId?: string;
toolName: string;
arguments?: ToolArguments;
className?: string;
}
export function ToolCallMessage({
toolId,
toolName,
arguments: toolArguments,
className,
}: ToolCallMessageProps) {
const displayKey = toolName || toolId;
const displayData = toolArguments
? JSON.stringify(toolArguments)
: "No arguments";
const displayText = `${displayKey}: ${displayData}`;
return (
<AIChatBubble className={className}>
<Text variant="small" className="text-neutral-500">
{displayText}
</Text>
</AIChatBubble>
);
}

View File

@@ -1,36 +0,0 @@
import { Text } from "@/components/atoms/Text/Text";
import type { ToolResult } from "@/types/chat";
import { AIChatBubble } from "../AIChatBubble/AIChatBubble";
export interface ToolResponseMessageProps {
toolId?: string;
toolName: string;
result?: ToolResult;
success?: boolean;
className?: string;
}
export function ToolResponseMessage({
toolId,
toolName,
result: _result,
success: _success = true,
className,
}: ToolResponseMessageProps) {
const displayKey = toolId || toolName;
const resultValue =
typeof _result === "string"
? _result
: _result
? JSON.stringify(_result)
: toolName;
const displayText = `${displayKey}: ${resultValue}`;
return (
<AIChatBubble className={className}>
<Text variant="small" className="text-neutral-500">
{displayText}
</Text>
</AIChatBubble>
);
}

View File

@@ -1,25 +0,0 @@
import { cn } from "@/lib/utils";
import { ReactNode } from "react";
export interface UserChatBubbleProps {
children: ReactNode;
className?: string;
}
export function UserChatBubble({ children, className }: UserChatBubbleProps) {
return (
<div
className={cn(
"group relative min-w-20 overflow-hidden rounded-xl bg-purple-100 px-3 text-right text-sm leading-relaxed transition-all duration-500 ease-in-out",
className,
)}
style={{
borderBottomRightRadius: 0,
}}
>
<div className="relative z-10 text-slate-900 transition-all duration-500 ease-in-out">
{children}
</div>
</div>
);
}

View File

@@ -1,7 +1,6 @@
import * as React from "react";
import { useGetV2GetMyAgents } from "@/app/api/__generated__/endpoints/store/store";
import { okData } from "@/app/api/helpers";
-import { useSupabase } from "@/lib/supabase/hooks/useSupabase";
export interface Agent {
name: string;
@@ -37,7 +36,6 @@ export function useAgentSelectStep({
const [selectedAgentVersion, setSelectedAgentVersion] = React.useState<
number | null
>(null);
-  const { isLoggedIn } = useSupabase();
const {
data: _myAgents,
@@ -45,7 +43,6 @@ export function useAgentSelectStep({
error,
} = useGetV2GetMyAgents(undefined, {
query: {
-      enabled: isLoggedIn,
select: (res) =>
okData(res)
?.agents.map(

View File
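This hook and `usePublishAgentModal` below both drop the `enabled: isLoggedIn` option. In TanStack Query, `enabled: false` holds a query in an idle state — no request, no retries — until the condition flips, which is how these fetches were previously deferred for logged-out users. A generic sketch of the removed gating pattern (the hook name, fetcher, and endpoint are illustrative, not from this codebase):

```ts
import { useQuery } from "@tanstack/react-query";

// Hypothetical fetcher, standing in for the generated API hooks.
async function fetchMyAgents() {
  const res = await fetch("/api/store/my-agents");
  if (!res.ok) throw new Error(`Request failed: ${res.status}`);
  return res.json();
}

function useMyAgents(isLoggedIn: boolean) {
  return useQuery({
    queryKey: ["my-agents"],
    queryFn: fetchMyAgents,
    // While false, the query stays idle instead of firing and failing
    // with a 401 — this is the guard the diff removes.
    enabled: isLoggedIn,
  });
}
```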

@@ -11,7 +11,6 @@ import {
import { okData } from "@/app/api/helpers";
import type { MyAgent } from "@/app/api/__generated__/models/myAgent";
import { useQueryClient } from "@tanstack/react-query";
-import { useSupabase } from "@/lib/supabase/hooks/useSupabase";
const defaultTargetState: PublishState = {
isOpen: false,
@@ -69,19 +68,10 @@ export function usePublishAgentModal({
const router = useRouter();
const queryClient = useQueryClient();
-  const { isLoggedIn } = useSupabase();
// Fetch agent data for pre-populating form when agent is pre-selected
-  const { data: myAgents } = useGetV2GetMyAgents(undefined, {
-    query: {
-      enabled: isLoggedIn,
-    },
-  });
-  const { data: mySubmissions } = useGetV2ListMySubmissions(undefined, {
-    query: {
-      enabled: isLoggedIn,
-    },
-  });
+  const { data: myAgents } = useGetV2GetMyAgents();
+  const { data: mySubmissions } = useGetV2ListMySubmissions();
// Sync currentState with targetState when it changes from outside
useEffect(() => {

View File

@@ -4,8 +4,6 @@ import { useGetV2GetUserProfile } from "@/app/api/__generated__/endpoints/store/
import { okData } from "@/app/api/helpers";
import { IconAutoGPTLogo, IconType } from "@/components/__legacy__/ui/icons";
import { PreviewBanner } from "@/components/layout/Navbar/components/PreviewBanner/PreviewBanner";
-import { isLogoutInProgress } from "@/lib/autogpt-server-api/helpers";
-import { NAVBAR_HEIGHT_PX } from "@/lib/constants";
import { useBreakpoint } from "@/lib/hooks/useBreakpoint";
import { useSupabase } from "@/lib/supabase/hooks/useSupabase";
import { environment } from "@/services/environment";
@@ -26,13 +24,12 @@ export function Navbar() {
const dynamicMenuItems = getAccountMenuItems(user?.role);
const isChatEnabled = useGetFlag(Flag.CHAT);
const previewBranchName = environment.getPreviewStealingDev();
-  const logoutInProgress = isLogoutInProgress();
const { data: profile, isLoading: isProfileLoading } = useGetV2GetUserProfile(
{
query: {
select: okData,
-        enabled: isLoggedIn && !!user && !logoutInProgress,
+        enabled: isLoggedIn && !!user,
// Include user ID in query key to ensure cache invalidation when user changes
queryKey: ["/api/store/profile", user?.id],
},
@@ -43,13 +40,10 @@ export function Navbar() {
const shouldShowPreviewBanner = Boolean(isLoggedIn && previewBranchName);
-  const homeHref = isChatEnabled === true ? "/copilot" : "/library";
-  const actualLoggedInLinks = [
-    { name: "Home", href: homeHref },
-    ...(isChatEnabled === true ? [{ name: "Tasks", href: "/library" }] : []),
-    ...loggedInLinks,
-  ];
+  const actualLoggedInLinks =
+    isChatEnabled === true
+      ? loggedInLinks.concat([{ name: "Chat", href: "/chat" }])
+      : loggedInLinks;
if (isUserLoading) {
return <NavbarLoading />;
@@ -61,10 +55,7 @@ export function Navbar() {
{shouldShowPreviewBanner && previewBranchName ? (
<PreviewBanner branchName={previewBranchName} />
) : null}
-      <nav
-        className="border-zinc-[#EFEFF0] inline-flex w-full items-center border border-[#EFEFF0] bg-[#F3F4F6]/20 p-3 backdrop-blur-[26px]"
-        style={{ height: NAVBAR_HEIGHT_PX }}
-      >
+      <nav className="border-zinc-[#EFEFF0] inline-flex h-[60px] w-full items-center border border-[#EFEFF0] bg-[#F3F4F6]/20 p-3 backdrop-blur-[26px]">
{/* Left section */}
{!isSmallScreen ? (
<div className="flex flex-1 items-center gap-5">
@@ -126,17 +117,21 @@ export function Navbar() {
groupName: "Navigation",
items: actualLoggedInLinks
.map((link) => {
if (link.name === "Chat" && !isChatEnabled) {
return null;
}
return {
icon:
link.href === "/marketplace"
link.name === "Marketplace"
? IconType.Marketplace
: link.href === "/build"
? IconType.Builder
: link.href === "/copilot"
? IconType.Chat
: link.href === "/library"
? IconType.Library
: link.href === "/monitor"
: link.name === "Library"
? IconType.Library
: link.name === "Build"
? IconType.Builder
: link.name === "Chat"
? IconType.Chat
: link.name === "Monitor"
? IconType.Library
: IconType.LayoutDashboard,
text: link.name,

Some files were not shown because too many files have changed in this diff.