Compare commits

..

11 Commits

Author SHA1 Message Date
Zamil Majdy
0ce734d28d refactor(backend): remove useless logger.debug from execution.py
Revert the early-continue refactor and logger.debug — debug logs aren't
read in production. Keep the simple inline guard style from the original
PR #12548.
2026-03-25 16:25:10 +07:00
Zamil Majdy
1a84a10af0 fix(backend): resolve merge conflict and improve debug log in execution.py
Merged dev into fix/copilot-tool-results-guidance. Resolved conflict in
from_db() by keeping the expanded early-return form with debug logging.
Improved the debug message to log block_id instead of block_type (which
is always OUTPUT at that point).
2026-03-25 14:36:25 +07:00
Ubbe
995dd1b5f3 feat(platform): replace suggestion pills with themed prompt categories (#12515)
## Summary

<img width="700" height="575" alt="Screenshot 2026-03-23 at 21 40 07"
src="https://github.com/user-attachments/assets/f6138c63-dd5e-4bde-a2e4-7434d0d3ec72"
/>

Re-applies #12452 which was reverted as collateral in #12485 (invite
system revert).

Replaces the flat list of suggestion pills in the CoPilot empty session
with themed prompt categories (Learn, Create, Automate, Organize), each
shown as a popover with contextual prompts.

- **Backend**: Adds `suggested_prompts` as a themed `dict[str,
list[str]]` keyed by category. Updates Tally extraction LLM prompt to
generate prompts per theme, and the `/suggested-prompts` API to return
grouped themes. Legacy `list[str]` rows are preserved under a
`"General"` key for backward compatibility.
- **Frontend**: Replaces inline pill buttons with a `SuggestionThemes`
popover component. Each theme button (with icon) opens a dropdown of 5
relevant prompts. Falls back to hardcoded defaults when the API has no
personalized prompts. Normalizes partial API responses by padding
missing themes with defaults. Legacy `"General"` prompts are distributed
round-robin across themes.
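The round-robin distribution of legacy prompts can be sketched as follows (a Python illustration of the TypeScript helper's behavior; the function name here is an assumption, not the actual frontend code):

```python
# Hypothetical sketch: deal legacy "General" prompts across the four
# themes in round-robin order, as the frontend normalization does.
THEMES = ["Learn", "Create", "Automate", "Organize"]

def distribute_general_prompts(general: list[str]) -> dict[str, list[str]]:
    """Assign each legacy prompt to a theme, cycling through THEMES."""
    themed: dict[str, list[str]] = {theme: [] for theme in THEMES}
    for i, prompt in enumerate(general):
        themed[THEMES[i % len(THEMES)]].append(prompt)
    return themed
```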

### Changes 🏗️

- `backend/data/understanding.py`: `suggested_prompts` field added as
`dict[str, list[str]]`; legacy list rows preserved under `"General"` key
via `_json_to_themed_prompts`
- `backend/data/tally.py`: LLM prompt updated to generate themed
prompts; validation now per-theme with blank-string rejection
- `backend/api/features/chat/routes.py`: New `SuggestedTheme` model;
endpoint returns `themes[]`
- `frontend/copilot/components/EmptySession/EmptySession.tsx`: Uses
generated API hooks for suggested prompts
- `frontend/copilot/components/EmptySession/helpers.ts`:
`DEFAULT_THEMES` replaces `DEFAULT_QUICK_ACTIONS`; `getSuggestionThemes`
normalizes partial API responses
- `frontend/copilot/components/EmptySession/components/SuggestionThemes/`: New popover component with theme icons and loading states
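The backward-compatibility rule described above can be condensed into a runnable sketch (a simplified stand-in for `_json_to_themed_prompts`, not the exact implementation):

```python
from typing import Any

def to_themed_prompts(value: Any) -> dict[str, list[str]]:
    """Normalize stored suggested_prompts: a themed dict passes through
    (non-string entries dropped); a legacy flat list is preserved under
    a "General" key; anything else yields an empty dict."""
    if isinstance(value, dict):
        return {
            k: [p for p in v if isinstance(p, str)]
            for k, v in value.items()
            if isinstance(k, str) and isinstance(v, list)
        }
    if isinstance(value, list) and value:
        return {"General": [p for p in value if isinstance(p, str)]}
    return {}
```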

### Checklist 📋

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Verify themed suggestion buttons render on CoPilot empty session
  - [x] Click each theme button and confirm popover opens with prompts
  - [x] Click a prompt and confirm it sends the message
  - [x] Verify fallback to default themes when API returns no custom prompts
  - [x] Verify legacy users' personalized prompts are preserved and visible

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-25 15:32:49 +08:00
Zamil Majdy
ea0b398a57 fix(backend): add logger.debug and test for nameless node exec skipping
Address PR review comments:
- Add logger.debug when skipping OUTPUT node executions without 'name'
  in input_data (e.g. OrchestratorBlock nodes)
- Add test verifying GraphExecution.from_db handles mixed named/unnamed
  node executions without KeyError
2026-03-25 14:15:19 +07:00
Zamil Majdy
336114f217 fix(backend): prevent graph execution stuck + steer SDK away from bash_exec (#12548)
## Summary

Two backend fixes for CoPilot stability:

1. **Steer model away from bash_exec for SDK tool-result files** — When
the SDK returns tool results as file paths, the copilot model was
attempting to use `bash_exec` to read them instead of treating the
content directly. Added system prompt guidance to prevent this.

2. **Guard against missing 'name' in execution input_data** —
`GraphExecution.from_db()` assumed all INPUT/OUTPUT block node
executions have a `name` field in `input_data`. This crashes with
`KeyError: 'name'` when non-standard blocks (e.g., OrchestratorBlock)
produce node executions without this field. Added `"name" in
exec.input_data` guards.

## Why

- The bash_exec issue causes copilot to fail when processing SDK tool
outputs
- The KeyError crashes the `update_graph_execution_stats` endpoint,
causing graph executions to appear stuck (retries 35+ times, never
completes)

## How

- Added system prompt instruction to treat tool result file contents
directly
- Added `"name" in exec.input_data` guard in both input extraction (line
340) and output extraction (line 365) in `execution.py`
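The guard can be reduced to a standalone sketch (plain dicts stand in for the real node-execution models, so names and shapes here are illustrative only):

```python
from collections import defaultdict

def extract_outputs(node_executions: list[dict]) -> dict[str, list]:
    """Collect OUTPUT-block values keyed by name, skipping executions
    whose input_data lacks a 'name' field (e.g. OrchestratorBlock)."""
    outputs: dict[str, list] = defaultdict(list)
    for ne in node_executions:
        input_data = ne["input_data"]
        # Without the '"name" in input_data' check, indexing
        # input_data["name"] raises KeyError for non-standard blocks.
        if ne["block_type"] == "OUTPUT" and "name" in input_data:
            outputs[input_data["name"]].append(input_data.get("value"))
    return dict(outputs)
```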

### Changes
- `backend/copilot/sdk/service.py` — system prompt guidance
- `backend/data/execution.py` — KeyError guard for missing `name` field

### Checklist 📋
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan

#### Test plan:
- [x] OrchestratorBlock graph execution no longer gets stuck
- [x] Standard Agent Input/Output blocks still work correctly
- [x] Copilot SDK tool results are processed without bash_exec
2026-03-25 13:58:24 +07:00
Zamil Majdy
a6b4cd2435 refactor(backend/copilot): deduplicate tool-results guidance to system prompt only
Remove redundant tool-results mentions from Read and bash_exec tool
descriptions — the system prompt storage supplement already covers this
clearly. Avoids wasting tokens on triple-repeated instructions.
2026-03-25 13:37:43 +07:00
Zamil Majdy
9f418807a6 style: inline 'name' guard into existing condition 2026-03-25 13:31:54 +07:00
Zamil Majdy
cbc4bee9d8 fix(backend): guard against missing 'name' in execution input_data
Prevents KeyError when node executions from non-standard blocks
(e.g., OrchestratorBlock) don't include 'name' in input_data.
The from_db fallback assumes all INPUT/OUTPUT block executions have
a 'name' field, but this isn't guaranteed for all block types.
2026-03-25 13:21:39 +07:00
Zamil Majdy
3273b9558d fix(backend/copilot): revert filesystem note — Read shares sandbox too
Read shares the sandbox filesystem for regular files; it just
additionally reads host-side tool-results. Removing it from the list
was misleading. The tool-results section below already covers the
distinction.
2026-03-25 13:06:18 +07:00
Zamil Majdy
d0f558dc68 fix(backend/copilot): clarify Read tool's dual access in filesystem note
Address self-review: the "Shell & filesystem" section listed Read
alongside bash_exec as sharing one filesystem, which is misleading now
that Read also accesses host-side SDK tool-result files.
2026-03-25 12:53:58 +07:00
Zamil Majdy
8bb828804b fix(backend/copilot): steer model away from bash_exec for SDK tool-result files
The model was using bash_exec (cd + cat) to read SDK tool-result files
under ~/.claude/projects/.../tool-results/, which fails in E2B because
the sandbox runs as /home/user and cannot access /root/. The Read tool
handles these correctly by reading from the host filesystem.

Changes:
- Add "CANNOT read SDK tool-result files" warning to bash_exec description
- Add tool-results mention to the Read tool description
- Fix system prompt: replace stale "read_file" reference with "Read",
  explicitly warn that bash_exec cannot access host-side tool-results
2026-03-25 12:51:32 +07:00
21 changed files with 881 additions and 132 deletions

View File

@@ -60,6 +60,7 @@ from backend.copilot.tools.models import (
)
from backend.copilot.tracking import track_user_message
from backend.data.redis_client import get_redis_async
from backend.data.understanding import get_business_understanding
from backend.data.workspace import get_or_create_workspace
from backend.util.exceptions import NotFoundError
@@ -894,6 +895,47 @@ async def session_assign_user(
return {"status": "ok"}
# ========== Suggested Prompts ==========
class SuggestedTheme(BaseModel):
"""A themed group of suggested prompts."""
name: str
prompts: list[str]
class SuggestedPromptsResponse(BaseModel):
"""Response model for user-specific suggested prompts grouped by theme."""
themes: list[SuggestedTheme]
@router.get(
"/suggested-prompts",
dependencies=[Security(auth.requires_user)],
)
async def get_suggested_prompts(
user_id: Annotated[str, Security(auth.get_user_id)],
) -> SuggestedPromptsResponse:
"""
Get LLM-generated suggested prompts grouped by theme.
Returns personalized quick-action prompts based on the user's
business understanding. Returns empty themes list if no custom
prompts are available.
"""
understanding = await get_business_understanding(user_id)
if understanding is None or not understanding.suggested_prompts:
return SuggestedPromptsResponse(themes=[])
themes = [
SuggestedTheme(name=name, prompts=prompts)
for name, prompts in understanding.suggested_prompts.items()
]
return SuggestedPromptsResponse(themes=themes)
# ========== Configuration ==========

View File

@@ -1,7 +1,7 @@
"""Tests for chat API routes: session title update, file attachment validation, usage, and rate limiting."""
from datetime import UTC, datetime, timedelta
-from unittest.mock import AsyncMock
+from unittest.mock import AsyncMock, MagicMock
import fastapi
import fastapi.testclient
@@ -400,3 +400,69 @@ def test_usage_rejects_unauthenticated_request() -> None:
response = unauthenticated_client.get("/usage")
assert response.status_code == 401
# ─── Suggested prompts endpoint ──────────────────────────────────────
def _mock_get_business_understanding(
mocker: pytest_mock.MockerFixture,
*,
return_value=None,
):
"""Mock get_business_understanding."""
return mocker.patch(
"backend.api.features.chat.routes.get_business_understanding",
new_callable=AsyncMock,
return_value=return_value,
)
def test_suggested_prompts_returns_themes(
mocker: pytest_mock.MockerFixture,
test_user_id: str,
) -> None:
"""User with themed prompts gets them back as themes list."""
mock_understanding = MagicMock()
mock_understanding.suggested_prompts = {
"Learn": ["L1", "L2"],
"Create": ["C1"],
}
_mock_get_business_understanding(mocker, return_value=mock_understanding)
response = client.get("/suggested-prompts")
assert response.status_code == 200
data = response.json()
assert "themes" in data
themes_by_name = {t["name"]: t["prompts"] for t in data["themes"]}
assert themes_by_name["Learn"] == ["L1", "L2"]
assert themes_by_name["Create"] == ["C1"]
def test_suggested_prompts_no_understanding(
mocker: pytest_mock.MockerFixture,
test_user_id: str,
) -> None:
"""User with no understanding gets empty themes list."""
_mock_get_business_understanding(mocker, return_value=None)
response = client.get("/suggested-prompts")
assert response.status_code == 200
assert response.json() == {"themes": []}
def test_suggested_prompts_empty_prompts(
mocker: pytest_mock.MockerFixture,
test_user_id: str,
) -> None:
"""User with understanding but empty prompts gets empty themes list."""
mock_understanding = MagicMock()
mock_understanding.suggested_prompts = {}
_mock_get_business_understanding(mocker, return_value=mock_understanding)
response = client.get("/suggested-prompts")
assert response.status_code == 200
assert response.json() == {"themes": []}

View File

@@ -205,9 +205,10 @@ Important files (code, configs, outputs) should be saved to workspace to ensure
### SDK tool-result files
When tool outputs are large, the SDK truncates them and saves the full output to
a local file under `~/.claude/projects/.../tool-results/`. To read these files,
-always use `read_file` or `Read` (NOT `read_workspace_file`).
-`read_workspace_file` reads from cloud workspace storage, where SDK
-tool-results are NOT stored.
+always use `Read` (NOT `bash_exec`, NOT `read_workspace_file`).
+These files are on the host filesystem — `bash_exec` runs in the sandbox and
+CANNOT access them. `read_workspace_file` reads from cloud workspace storage,
+where SDK tool-results are NOT stored.
{_SHARED_TOOL_NOTES}{extra_notes}"""

View File

@@ -342,6 +342,7 @@ class GraphExecution(GraphExecutionMeta):
if (
(block := get_block(exec.block_id))
and block.block_type == BlockType.INPUT
and "name" in exec.input_data
)
}
),
@@ -360,8 +361,10 @@ class GraphExecution(GraphExecutionMeta):
outputs: CompletedBlockOutput = defaultdict(list)
for exec in complete_node_executions:
if (
-block := get_block(exec.block_id)
-) and block.block_type == BlockType.OUTPUT:
+(block := get_block(exec.block_id))
+and block.block_type == BlockType.OUTPUT
+and "name" in exec.input_data
+):
outputs[exec.input_data["name"]].append(exec.input_data.get("value"))
return GraphExecution(

View File

@@ -0,0 +1,121 @@
"""Tests for GraphExecution.from_db — verify node executions without 'name'
in input_data (e.g. OrchestratorBlock) are skipped gracefully."""
from datetime import datetime, timezone
from unittest.mock import MagicMock, patch
from prisma.enums import AgentExecutionStatus
from backend.blocks._base import BlockType
from backend.data.execution import GraphExecution
def _make_node_execution(
*,
exec_id: str,
block_id: str,
input_data: dict | None = None,
output_data: list | None = None,
status: AgentExecutionStatus = AgentExecutionStatus.COMPLETED,
) -> MagicMock:
"""Create a minimal mock AgentNodeExecution for from_db."""
ne = MagicMock()
ne.id = exec_id
ne.agentNodeId = f"node-{exec_id}"
ne.agentGraphExecutionId = "graph-exec-1"
ne.executionStatus = status
ne.addedTime = datetime(2024, 1, 1, tzinfo=timezone.utc)
ne.queuedTime = datetime(2024, 1, 1, tzinfo=timezone.utc)
ne.startedTime = datetime(2024, 1, 1, tzinfo=timezone.utc)
ne.endedTime = datetime(2024, 1, 1, tzinfo=timezone.utc)
ne.stats = None
ne.executionData = input_data or {}
ne.Input = []
ne.Output = output_data or []
node = MagicMock()
node.agentBlockId = block_id
ne.Node = node
ne.GraphExecution = None
return ne
def _make_graph_execution(node_executions: list) -> MagicMock:
"""Create a minimal mock AgentGraphExecution."""
ge = MagicMock()
ge.id = "graph-exec-1"
ge.userId = "user-1"
ge.agentGraphId = "graph-1"
ge.agentGraphVersion = 1
ge.inputs = None # Force fallback to node-based extraction
ge.credentialInputs = None
ge.nodesInputMasks = None
ge.agentPresetId = None
ge.executionStatus = AgentExecutionStatus.COMPLETED
ge.startedAt = datetime(2024, 1, 1, tzinfo=timezone.utc)
ge.endedAt = datetime(2024, 1, 1, tzinfo=timezone.utc)
ge.stats = None
ge.isShared = False
ge.shareToken = None
ge.NodeExecutions = node_executions
return ge
INPUT_BLOCK_ID = "input-block-id"
OUTPUT_BLOCK_ID = "output-block-id"
ORCHESTRATOR_BLOCK_ID = "orchestrator-block-id"
def _mock_get_block(block_id: str):
"""Return a mock block with the right block_type for each ID."""
block = MagicMock()
if block_id == INPUT_BLOCK_ID:
block.block_type = BlockType.INPUT
elif block_id == OUTPUT_BLOCK_ID:
block.block_type = BlockType.OUTPUT
else:
block.block_type = BlockType.STANDARD
return block
@patch("backend.data.execution.get_block", side_effect=_mock_get_block)
def test_from_db_skips_node_executions_without_name(mock_get_block: MagicMock):
"""Node executions without 'name' in input_data (e.g. OrchestratorBlock)
must not cause a KeyError and should be silently skipped."""
named_input = _make_node_execution(
exec_id="ne-input-1",
block_id=INPUT_BLOCK_ID,
input_data={"name": "query", "value": "hello"},
)
unnamed_input = _make_node_execution(
exec_id="ne-orchestrator-1",
block_id=INPUT_BLOCK_ID,
input_data={"value": "no name field here"},
)
named_output = _make_node_execution(
exec_id="ne-output-1",
block_id=OUTPUT_BLOCK_ID,
input_data={"name": "result", "value": "world"},
)
unnamed_output = _make_node_execution(
exec_id="ne-orchestrator-2",
block_id=OUTPUT_BLOCK_ID,
input_data={"value": "no name here either"},
)
standard_node = _make_node_execution(
exec_id="ne-standard-1",
block_id=ORCHESTRATOR_BLOCK_ID,
input_data={"some_key": "some_value"},
)
graph_exec_db = _make_graph_execution(
[named_input, unnamed_input, named_output, unnamed_output, standard_node]
)
result = GraphExecution.from_db(graph_exec_db)
# Named input extracted correctly
assert result.inputs == {"query": "hello"}
# Named output extracted; unnamed output skipped
assert dict(result.outputs) == {"result": ["world"]}

View File

@@ -40,6 +40,9 @@ _MAX_PAGES = 100
# LLM extraction timeout (seconds)
_LLM_TIMEOUT = 30
SUGGESTION_THEMES = ["Learn", "Create", "Automate", "Organize"]
PROMPTS_PER_THEME = 5
def _mask_email(email: str) -> str:
"""Mask an email for safe logging: 'alice@example.com' -> 'a***e@example.com'."""
@@ -332,6 +335,11 @@ Fields:
- current_software (list of strings): software/tools currently used
- existing_automation (list of strings): existing automations
- additional_notes (string): any additional context
- suggested_prompts (object with keys "Learn", "Create", "Automate", "Organize"): for each key, \
provide a list of 5 short action prompts (each under 20 words) that would help this person. \
"Learn" = questions about AutoGPT features; "Create" = content/document generation tasks; \
"Automate" = recurring workflow automation ideas; "Organize" = structuring/prioritizing tasks. \
Should be specific to their industry, role, and pain points; actionable and conversational in tone.
Form data:
"""
@@ -378,6 +386,29 @@ async def extract_business_understanding(
# Filter out null values before constructing
cleaned = {k: v for k, v in data.items() if v is not None}
# Validate suggested_prompts: themed dict, filter >20 words, cap at 5 per theme
raw_prompts = cleaned.get("suggested_prompts", {})
if isinstance(raw_prompts, dict):
themed: dict[str, list[str]] = {}
for theme in SUGGESTION_THEMES:
theme_prompts = raw_prompts.get(theme, [])
if not isinstance(theme_prompts, list):
continue
valid = [
s
for p in theme_prompts
if isinstance(p, str) and (s := p.strip()) and len(s.split()) <= 20
]
if valid:
themed[theme] = valid[:PROMPTS_PER_THEME]
if themed:
cleaned["suggested_prompts"] = themed
else:
cleaned.pop("suggested_prompts", None)
else:
cleaned.pop("suggested_prompts", None)
return BusinessUnderstandingInput(**cleaned)

View File

@@ -284,6 +284,7 @@ async def test_populate_understanding_full_flow():
],
}
mock_input = MagicMock()
mock_input.suggested_prompts = {"Learn": ["P1"], "Create": ["P2"]}
with (
patch(
@@ -397,15 +398,25 @@ def test_extraction_prompt_no_format_placeholders():
@pytest.mark.asyncio
-async def test_extract_business_understanding_success():
-"""Happy path: LLM returns valid JSON that maps to BusinessUnderstandingInput."""
+async def test_extract_business_understanding_themed_prompts():
+"""Happy path: LLM returns themed prompts as dict."""
mock_choice = MagicMock()
mock_choice.message.content = json.dumps(
{
"user_name": "Alice",
"business_name": "Acme Corp",
"industry": "Technology",
"pain_points": ["manual reporting"],
"suggested_prompts": {
"Learn": ["Learn 1", "Learn 2", "Learn 3", "Learn 4", "Learn 5"],
"Create": [
"Create 1",
"Create 2",
"Create 3",
"Create 4",
"Create 5",
],
"Automate": ["Auto 1", "Auto 2", "Auto 3", "Auto 4", "Auto 5"],
"Organize": ["Org 1", "Org 2", "Org 3", "Org 4", "Org 5"],
},
}
)
mock_response = MagicMock()
@@ -418,9 +429,42 @@ async def test_extract_business_understanding_success():
result = await extract_business_understanding("Q: Name?\nA: Alice")
assert result.user_name == "Alice"
assert result.business_name == "Acme Corp"
assert result.industry == "Technology"
assert result.pain_points == ["manual reporting"]
assert result.suggested_prompts is not None
assert len(result.suggested_prompts) == 4
assert len(result.suggested_prompts["Learn"]) == 5
@pytest.mark.asyncio
async def test_extract_themed_prompts_filters_long_and_unknown_keys():
"""Long prompts are filtered, unknown keys are dropped, each theme capped at 5."""
long_prompt = " ".join(["word"] * 21)
mock_choice = MagicMock()
mock_choice.message.content = json.dumps(
{
"user_name": "Alice",
"suggested_prompts": {
"Learn": [long_prompt, "Valid learn 1", "Valid learn 2"],
"UnknownTheme": ["Should be dropped"],
"Automate": ["A1", "A2", "A3", "A4", "A5", "A6"],
},
}
)
mock_response = MagicMock()
mock_response.choices = [mock_choice]
mock_client = AsyncMock()
mock_client.chat.completions.create.return_value = mock_response
with patch("backend.data.tally.AsyncOpenAI", return_value=mock_client):
result = await extract_business_understanding("Q: Name?\nA: Alice")
assert result.suggested_prompts is not None
# Unknown key dropped
assert "UnknownTheme" not in result.suggested_prompts
# Long prompt filtered
assert result.suggested_prompts["Learn"] == ["Valid learn 1", "Valid learn 2"]
# Capped at 5
assert result.suggested_prompts["Automate"] == ["A1", "A2", "A3", "A4", "A5"]
@pytest.mark.asyncio

View File

@@ -49,6 +49,25 @@ def _json_to_list(value: Any) -> list[str]:
return []
def _json_to_themed_prompts(value: Any) -> dict[str, list[str]]:
"""Convert Json field to themed prompts dict.
Handles both the new ``dict[str, list[str]]`` format and the legacy
``list[str]`` format. Legacy rows are placed under a ``"General"`` key so
existing personalised prompts remain readable until a backfill regenerates
them into the proper themed shape.
"""
if isinstance(value, dict):
return {
k: [i for i in v if isinstance(i, str)]
for k, v in value.items()
if isinstance(k, str) and isinstance(v, list)
}
if isinstance(value, list) and value:
return {"General": [str(p) for p in value if isinstance(p, str)]}
return {}
class BusinessUnderstandingInput(pydantic.BaseModel):
"""Input model for updating business understanding - all fields optional for incremental updates."""
@@ -104,6 +123,11 @@ class BusinessUnderstandingInput(pydantic.BaseModel):
None, description="Any additional context"
)
# Suggested prompts (UI-only, not included in system prompt)
suggested_prompts: Optional[dict[str, list[str]]] = pydantic.Field(
None, description="LLM-generated suggested prompts grouped by theme"
)
class BusinessUnderstanding(pydantic.BaseModel):
"""Full business understanding model returned from database."""
@@ -140,6 +164,9 @@ class BusinessUnderstanding(pydantic.BaseModel):
# Additional context
additional_notes: Optional[str] = None
# Suggested prompts (UI-only, not included in system prompt)
suggested_prompts: dict[str, list[str]] = pydantic.Field(default_factory=dict)
@classmethod
def from_db(cls, db_record: CoPilotUnderstanding) -> "BusinessUnderstanding":
"""Convert database record to Pydantic model."""
@@ -167,6 +194,7 @@ class BusinessUnderstanding(pydantic.BaseModel):
current_software=_json_to_list(business.get("current_software")),
existing_automation=_json_to_list(business.get("existing_automation")),
additional_notes=business.get("additional_notes"),
suggested_prompts=_json_to_themed_prompts(data.get("suggested_prompts")),
)
@@ -246,33 +274,22 @@ async def get_business_understanding(
return understanding
async def upsert_business_understanding(
user_id: str,
def merge_business_understanding_data(
existing_data: dict[str, Any],
input_data: BusinessUnderstandingInput,
) -> BusinessUnderstanding:
"""
Create or update business understanding with incremental merge strategy.
) -> dict[str, Any]:
"""Merge new input into existing data dict using incremental strategy.
- String fields: new value overwrites if provided (not None)
- List fields: new items are appended to existing (deduplicated)
- suggested_prompts: fully replaced if provided (not None)
Data is stored as: {name: ..., business: {version: 1, ...}}
Returns the merged data dict (mutates and returns *existing_data*).
"""
# Get existing record for merge
existing = await CoPilotUnderstanding.prisma().find_unique(
where={"userId": user_id}
)
# Get existing data structure or start fresh
existing_data: dict[str, Any] = {}
if existing and isinstance(existing.data, dict):
existing_data = dict(existing.data)
existing_business: dict[str, Any] = {}
if isinstance(existing_data.get("business"), dict):
existing_business = dict(existing_data["business"])
# Business fields (stored inside business object)
business_string_fields = [
"job_title",
"business_name",
@@ -310,16 +327,48 @@ async def upsert_business_understanding(
merged = _merge_lists(existing_list, value)
existing_business[field] = merged
# Suggested prompts - fully replace if provided
if input_data.suggested_prompts is not None:
existing_data["suggested_prompts"] = input_data.suggested_prompts
# Set version and nest business data
existing_business["version"] = 1
existing_data["business"] = existing_business
return existing_data
async def upsert_business_understanding(
user_id: str,
input_data: BusinessUnderstandingInput,
) -> BusinessUnderstanding:
"""
Create or update business understanding with incremental merge strategy.
- String fields: new value overwrites if provided (not None)
- List fields: new items are appended to existing (deduplicated)
- suggested_prompts: fully replaced if provided (not None)
Data is stored as: {name: ..., business: {version: 1, ...}}
"""
# Get existing record for merge
existing = await CoPilotUnderstanding.prisma().find_unique(
where={"userId": user_id}
)
# Get existing data structure or start fresh
existing_data: dict[str, Any] = {}
if existing and isinstance(existing.data, dict):
existing_data = dict(existing.data)
merged_data = merge_business_understanding_data(existing_data, input_data)
# Upsert with the merged data
record = await CoPilotUnderstanding.prisma().upsert(
where={"userId": user_id},
data={
"create": {"userId": user_id, "data": SafeJson(existing_data)},
"update": {"data": SafeJson(existing_data)},
"create": {"userId": user_id, "data": SafeJson(merged_data)},
"update": {"data": SafeJson(merged_data)},
},
)

View File

@@ -0,0 +1,148 @@
"""Tests for business understanding merge and format logic."""
from datetime import datetime, timezone
from typing import Any
from unittest.mock import MagicMock
from backend.data.understanding import (
BusinessUnderstanding,
BusinessUnderstandingInput,
_json_to_themed_prompts,
format_understanding_for_prompt,
merge_business_understanding_data,
)
def _make_input(**kwargs: Any) -> BusinessUnderstandingInput:
"""Create a BusinessUnderstandingInput with only the specified fields."""
return BusinessUnderstandingInput.model_validate(kwargs)
# ─── merge_business_understanding_data: themed prompts ─────────────────
def test_merge_themed_prompts_overwrites_existing():
"""New themed prompts should fully replace existing ones (not merge)."""
existing = {
"name": "Alice",
"business": {"industry": "Tech", "version": 1},
"suggested_prompts": {
"Learn": ["Old learn prompt"],
"Create": ["Old create prompt"],
},
}
new_prompts = {
"Automate": ["Schedule daily reports", "Set up email alerts"],
"Organize": ["Sort inbox by priority"],
}
input_data = _make_input(suggested_prompts=new_prompts)
result = merge_business_understanding_data(existing, input_data)
assert result["suggested_prompts"] == new_prompts
def test_merge_themed_prompts_none_preserves_existing():
"""When input has suggested_prompts=None, existing themed prompts are preserved."""
existing_prompts = {
"Learn": ["How to automate?"],
"Create": ["Build a chatbot"],
}
existing = {
"name": "Alice",
"business": {"industry": "Tech", "version": 1},
"suggested_prompts": existing_prompts,
}
input_data = _make_input(industry="Finance")
result = merge_business_understanding_data(existing, input_data)
assert result["suggested_prompts"] == existing_prompts
assert result["business"]["industry"] == "Finance"
# ─── from_db: themed prompts deserialization ───────────────────────────
def test_from_db_themed_prompts():
"""from_db correctly deserializes a themed dict for suggested_prompts."""
themed = {
"Learn": ["What can I automate?"],
"Create": ["Build a workflow"],
}
db_record = MagicMock()
db_record.id = "test-id"
db_record.userId = "user-1"
db_record.createdAt = datetime.now(tz=timezone.utc)
db_record.updatedAt = datetime.now(tz=timezone.utc)
db_record.data = {
"name": "Alice",
"business": {"industry": "Tech", "version": 1},
"suggested_prompts": themed,
}
result = BusinessUnderstanding.from_db(db_record)
assert result.suggested_prompts == themed
def test_from_db_legacy_list_prompts_preserved_under_general():
"""from_db preserves legacy list[str] prompts under a 'General' key."""
db_record = MagicMock()
db_record.id = "test-id"
db_record.userId = "user-1"
db_record.createdAt = datetime.now(tz=timezone.utc)
db_record.updatedAt = datetime.now(tz=timezone.utc)
db_record.data = {
"name": "Alice",
"business": {"industry": "Tech", "version": 1},
"suggested_prompts": ["Old prompt 1", "Old prompt 2"],
}
result = BusinessUnderstanding.from_db(db_record)
assert result.suggested_prompts == {"General": ["Old prompt 1", "Old prompt 2"]}
# ─── _json_to_themed_prompts helper ───────────────────────────────────
def test_json_to_themed_prompts_with_dict():
value = {"Learn": ["a", "b"], "Create": ["c"]}
assert _json_to_themed_prompts(value) == {"Learn": ["a", "b"], "Create": ["c"]}
def test_json_to_themed_prompts_with_list_returns_general():
assert _json_to_themed_prompts(["a", "b"]) == {"General": ["a", "b"]}
def test_json_to_themed_prompts_with_none_returns_empty():
assert _json_to_themed_prompts(None) == {}
# ─── format_understanding_for_prompt: excludes themed prompts ──────────
def test_format_understanding_excludes_themed_prompts():
"""Themed suggested_prompts are UI-only and must NOT appear in the system prompt."""
understanding = BusinessUnderstanding(
id="test-id",
user_id="user-1",
created_at=datetime.now(tz=timezone.utc),
updated_at=datetime.now(tz=timezone.utc),
user_name="Alice",
industry="Technology",
suggested_prompts={
"Learn": ["Automate reports"],
"Create": ["Set up alerts", "Track KPIs"],
},
)
formatted = format_understanding_for_prompt(understanding)
assert "Alice" in formatted
assert "Technology" in formatted
assert "suggested_prompts" not in formatted
assert "Automate reports" not in formatted
assert "Set up alerts" not in formatted
assert "Track KPIs" not in formatted

View File

@@ -9,7 +9,6 @@ from datetime import datetime, timezone
from typing import Optional
import pydantic
-from prisma.errors import UniqueViolationError
from prisma.models import UserWorkspace, UserWorkspaceFile
from prisma.types import UserWorkspaceFileWhereInput
@@ -76,23 +75,22 @@ async def get_or_create_workspace(user_id: str) -> Workspace:
"""
Get user's workspace, creating one if it doesn't exist.
Uses upsert to handle race conditions when multiple concurrent requests
attempt to create a workspace for the same user.
Args:
user_id: The user's ID
Returns:
Workspace instance
"""
-workspace = await UserWorkspace.prisma().find_unique(where={"userId": user_id})
-if workspace:
-return Workspace.from_db(workspace)
-try:
-workspace = await UserWorkspace.prisma().create(data={"userId": user_id})
-except UniqueViolationError:
-# Concurrent request already created it
-workspace = await UserWorkspace.prisma().find_unique(where={"userId": user_id})
-if workspace is None:
-raise
+workspace = await UserWorkspace.prisma().upsert(
+where={"userId": user_id},
+data={
+"create": {"userId": user_id},
+"update": {},  # No updates needed if exists
+},
+)
return Workspace.from_db(workspace)

View File

@@ -21,31 +21,6 @@ class DiscordChannel(str, Enum):
PRODUCT = "product" # For product alerts (low balance, zero balance, etc.)
-_USER_AUTH_KEYWORDS = [
-"incorrect api key",
-"invalid x-api-key",
-"invalid api key",
-"missing authentication header",
-"invalid api token",
-"authentication_error",
-"bad credentials",
-"unauthorized",
-"insufficient authentication scopes",
-"http 401 error",
-"http 403 error",
-]
-_AMQP_KEYWORDS = [
-"amqpconnection",
-"amqpconnector",
-"connection_forced",
-"channelinvalidstateerror",
-"no active transport",
-]
-_AMQP_INDICATORS = ["aio_pika", "aiormq", "amqp", "pika", "rabbitmq"]
def _before_send(event, hint):
"""Filter out expected/transient errors from Sentry to reduce noise."""
if "exc_info" in hint:
@@ -53,21 +28,36 @@ def _before_send(event, hint):
         exc_msg = str(exc_value).lower() if exc_value else ""

         # AMQP/RabbitMQ transient connection errors — expected during deploys
-        if any(kw in exc_msg for kw in _AMQP_KEYWORDS):
+        amqp_keywords = [
+            "amqpconnection",
+            "amqpconnector",
+            "connection_forced",
+            "channelinvalidstateerror",
+            "no active transport",
+        ]
+        if any(kw in exc_msg for kw in amqp_keywords):
             return None

         # "connection refused" only for AMQP-related exceptions (not other services)
         if "connection refused" in exc_msg:
             exc_module = getattr(exc_type, "__module__", "") or ""
             exc_name = getattr(exc_type, "__name__", "") or ""
+            amqp_indicators = ["aio_pika", "aiormq", "amqp", "pika", "rabbitmq"]
             if any(
                 ind in exc_module.lower() or ind in exc_name.lower()
-                for ind in _AMQP_INDICATORS
-            ) or any(kw in exc_msg for kw in _AMQP_INDICATORS):
+                for ind in amqp_indicators
+            ) or any(kw in exc_msg for kw in ["amqp", "pika", "rabbitmq"]):
                 return None

-        # User-caused credential/auth/integration errors — not platform bugs
-        if any(kw in exc_msg for kw in _USER_AUTH_KEYWORDS):
+        # User-caused credential/auth errors — not platform bugs
+        user_auth_keywords = [
+            "incorrect api key",
+            "invalid x-api-key",
+            "missing authentication header",
+            "invalid api token",
+            "authentication_error",
+        ]
+        if any(kw in exc_msg for kw in user_auth_keywords):
             return None

         # Expected business logic — insufficient balance
@@ -103,18 +93,18 @@ def _before_send(event, hint):
     )
     if event.get("logger") and log_msg:
         msg = log_msg.lower()
-        noisy_log_patterns = [
+        noisy_patterns = [
             "amqpconnection",
             "connection_forced",
             "unclosed client session",
             "unclosed connector",
         ]
-        if any(p in msg for p in noisy_log_patterns):
+        if any(p in msg for p in noisy_patterns):
             return None
-        if "connection refused" in msg and any(ind in msg for ind in _AMQP_INDICATORS):
-            return None
-        # Same auth keywords — errors logged via logger.error() bypass exc_info
-        if any(kw in msg for kw in _USER_AUTH_KEYWORDS):
+        # "connection refused" in logs only when AMQP-related context is present
+        if "connection refused" in msg and any(
+            ind in msg for ind in ("amqp", "pika", "rabbitmq", "aio_pika", "aiormq")
+        ):
             return None
     return event
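
The filter above drops three classes of noise before events reach Sentry: transient AMQP errors, connection refusals with AMQP context, and user-caused auth failures. A condensed sketch of the same keyword-gating logic — the `SentryEvent` shape and the trimmed keyword lists here are illustrative, not the full lists from the diff:

```typescript
// Abbreviated keyword lists from the diff (illustrative subset).
const AMQP_KEYWORDS = ["amqpconnection", "connection_forced", "no active transport"];
const AMQP_INDICATORS = ["amqp", "pika", "rabbitmq"];
const USER_AUTH_KEYWORDS = ["incorrect api key", "invalid api token", "authentication_error"];

type SentryEvent = { message: string };

// Return null to drop the event, or the event to let it through.
function beforeSend(event: SentryEvent): SentryEvent | null {
  const msg = event.message.toLowerCase();
  // Transient broker errors, expected during deploys.
  if (AMQP_KEYWORDS.some((kw) => msg.includes(kw))) return null;
  // "connection refused" only when AMQP context is present, so refusals
  // from other services (e.g. a database) still reach Sentry.
  if (
    msg.includes("connection refused") &&
    AMQP_INDICATORS.some((ind) => msg.includes(ind))
  ) {
    return null;
  }
  // User-caused credential errors, not platform bugs.
  if (USER_AUTH_KEYWORDS.some((kw) => msg.includes(kw))) return null;
  return event;
}
```

The important design point mirrored here is the scoping of "connection refused": gating it on an AMQP indicator keeps genuine outage signals from other services alive.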

View File

@@ -1,5 +1,5 @@
# Base stage for both dev and prod
-FROM node:22.22-alpine3.23 AS base
+FROM node:21-alpine AS base
WORKDIR /app
RUN corepack enable
COPY autogpt_platform/frontend/package.json autogpt_platform/frontend/pnpm-lock.yaml ./
@@ -33,7 +33,7 @@ ENV NEXT_PUBLIC_SOURCEMAPS="false"
RUN if [ "$NEXT_PUBLIC_PW_TEST" = "true" ]; then NEXT_PUBLIC_PW_TEST=true NODE_OPTIONS="--max-old-space-size=8192" pnpm build; else NODE_OPTIONS="--max-old-space-size=8192" pnpm build; fi
# Prod stage - based on NextJS reference Dockerfile https://github.com/vercel/next.js/blob/64271354533ed16da51be5dce85f0dbd15f17517/examples/with-docker/Dockerfile
-FROM node:22.22-alpine3.23 AS prod
+FROM node:21-alpine AS prod
ENV NODE_ENV=production
ENV HOSTNAME=0.0.0.0
WORKDIR /app

View File

@@ -290,12 +290,12 @@ export function ChatSidebar() {
<div className="flex min-h-[30rem] items-center justify-center py-4">
<LoadingSpinner size="small" className="text-neutral-600" />
</div>
-        ) : !sessions?.length ? (
+        ) : sessions.length === 0 ? (
<p className="py-4 text-center text-sm text-neutral-500">
No conversations yet
</p>
) : (
-          sessions?.map((session) => (
+          sessions.map((session) => (
<div
key={session.id}
className={cn(
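
The two guards in this hunk differ only while `sessions` is still `undefined`: `!sessions?.length` treats an unloaded list and an empty list the same, while `sessions.length === 0` assumes the query has already resolved. A one-line sketch (the helper name is hypothetical, not part of the component):

```typescript
// True both before the sessions query resolves (undefined) and when it
// resolves to an empty array; the optional chain absorbs undefined.
const isEmptyOrUnloaded = (sessions?: unknown[]) => !sessions?.length;
```

Dropping the optional chain, as this diff does, is safe only because the empty branch is unreachable while `isLoading` renders the spinner.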

View File

@@ -1,17 +1,18 @@
 "use client";

 import { ChatInput } from "@/app/(platform)/copilot/components/ChatInput/ChatInput";
-import { Button } from "@/components/atoms/Button/Button";
+import { useGetV2GetSuggestedPrompts } from "@/app/api/__generated__/endpoints/chat/chat";
+import { Skeleton } from "@/components/atoms/Skeleton/Skeleton";
 import { Text } from "@/components/atoms/Text/Text";
 import { useSupabase } from "@/lib/supabase/hooks/useSupabase";
-import { SpinnerGapIcon } from "@phosphor-icons/react";
 import { motion } from "framer-motion";
 import { useEffect, useState } from "react";
 import {
   getGreetingName,
   getInputPlaceholder,
-  getQuickActions,
+  getSuggestionThemes,
 } from "./helpers";
+import { SuggestionThemes } from "./components/SuggestionThemes/SuggestionThemes";

 interface Props {
   inputLayoutId: string;

@@ -33,25 +34,35 @@ export function EmptySession({
 }: Props) {
   const { user } = useSupabase();
   const greetingName = getGreetingName(user);
-  const quickActions = getQuickActions();
-  const [loadingAction, setLoadingAction] = useState<string | null>(null);
+  const { data: suggestedPromptsResponse, isLoading: isLoadingPrompts } =
+    useGetV2GetSuggestedPrompts({
+      query: { staleTime: Infinity, gcTime: Infinity, refetchOnMount: false },
+    });
+  const themes = getSuggestionThemes(
+    suggestedPromptsResponse?.status === 200
+      ? suggestedPromptsResponse.data.themes
+      : undefined,
+  );
   const [inputPlaceholder, setInputPlaceholder] = useState(
     getInputPlaceholder(),
   );

   useEffect(() => {
-    setInputPlaceholder(getInputPlaceholder(window.innerWidth));
-  }, [window.innerWidth]);
-
-  async function handleQuickActionClick(action: string) {
-    if (isCreatingSession || loadingAction !== null) return;
-    setLoadingAction(action);
-    try {
-      await onSend(action);
-    } finally {
-      setLoadingAction(null);
+    function handleResize() {
+      setInputPlaceholder(getInputPlaceholder(window.innerWidth));
     }
-  }
+    handleResize();
+    const mql = window.matchMedia("(max-width: 500px)");
+    mql.addEventListener("change", handleResize);
+    const mql2 = window.matchMedia("(max-width: 1080px)");
+    mql2.addEventListener("change", handleResize);
+    return () => {
+      mql.removeEventListener("change", handleResize);
+      mql2.removeEventListener("change", handleResize);
+    };
+  }, []);
return (
<div className="flex h-full flex-1 items-center justify-center overflow-y-auto bg-[#f8f8f9] px-0 py-5 md:px-6 md:py-10">
@@ -89,30 +100,19 @@ export function EmptySession({
             </div>
           </div>

-          <div className="flex flex-wrap items-center justify-center gap-3 overflow-x-auto [-ms-overflow-style:none] [scrollbar-width:none] [&::-webkit-scrollbar]:hidden">
-            {quickActions.map((action) => (
-              <Button
-                key={action}
-                type="button"
-                variant="outline"
-                size="small"
-                onClick={() => void handleQuickActionClick(action)}
-                disabled={isCreatingSession || loadingAction !== null}
-                aria-busy={loadingAction === action}
-                leftIcon={
-                  loadingAction === action ? (
-                    <SpinnerGapIcon
-                      className="h-4 w-4 animate-spin"
-                      weight="bold"
-                    />
-                  ) : null
-                }
-                className="h-auto shrink-0 border-zinc-300 px-3 py-2 text-[.9rem] text-zinc-600"
-              >
-                {action}
-              </Button>
-            ))}
-          </div>
+          {isLoadingPrompts ? (
+            <div className="flex flex-wrap items-center justify-center gap-3">
+              {Array.from({ length: 4 }, (_, i) => (
+                <Skeleton key={i} className="h-10 w-28 shrink-0 rounded-full" />
+              ))}
+            </div>
+          ) : (
+            <SuggestionThemes
+              themes={themes}
+              onSend={onSend}
+              disabled={isCreatingSession}
+            />
+          )}
</motion.div>
</div>
);

View File

@@ -0,0 +1,100 @@
"use client";
import {
Popover,
PopoverContent,
PopoverTrigger,
} from "@/components/molecules/Popover/Popover";
import { Button } from "@/components/atoms/Button/Button";
import {
BookOpenIcon,
PaintBrushIcon,
LightningIcon,
ListChecksIcon,
SpinnerGapIcon,
} from "@phosphor-icons/react";
import { useState } from "react";
import type { SuggestionTheme } from "../../helpers";
const THEME_ICONS: Record<string, typeof BookOpenIcon> = {
Learn: BookOpenIcon,
Create: PaintBrushIcon,
Automate: LightningIcon,
Organize: ListChecksIcon,
};
interface Props {
themes: SuggestionTheme[];
onSend: (prompt: string) => void | Promise<void>;
disabled?: boolean;
}
export function SuggestionThemes({ themes, onSend, disabled }: Props) {
const [openTheme, setOpenTheme] = useState<string | null>(null);
const [loadingPrompt, setLoadingPrompt] = useState<string | null>(null);
async function handlePromptClick(theme: string, prompt: string) {
if (disabled || loadingPrompt) return;
setLoadingPrompt(`${theme}:${prompt}`);
try {
await onSend(prompt);
} finally {
setLoadingPrompt(null);
setOpenTheme(null);
}
}
return (
<div className="flex flex-wrap items-center justify-center gap-3">
{themes.map((theme) => {
const Icon = THEME_ICONS[theme.name];
return (
<Popover
key={theme.name}
open={openTheme === theme.name}
onOpenChange={(open) => setOpenTheme(open ? theme.name : null)}
>
<PopoverTrigger asChild>
<Button
type="button"
variant="outline"
size="small"
disabled={disabled || loadingPrompt !== null}
className="shrink-0 gap-2 border-zinc-300 px-3 py-2 text-[.9rem] text-zinc-600"
>
{Icon && <Icon size={16} weight="regular" />}
{theme.name}
</Button>
</PopoverTrigger>
<PopoverContent align="center" className="w-80 p-2">
<ul className="grid gap-0.5">
{theme.prompts.map((prompt) => (
<li key={prompt}>
<button
type="button"
disabled={disabled || loadingPrompt !== null}
onClick={() => void handlePromptClick(theme.name, prompt)}
className="w-full rounded-md px-3 py-2 text-left text-sm text-zinc-700 transition-colors hover:bg-zinc-100 disabled:opacity-50"
>
{loadingPrompt === `${theme.name}:${prompt}` ? (
<span className="flex items-center gap-2">
<SpinnerGapIcon
className="h-4 w-4 animate-spin"
weight="bold"
/>
{prompt}
</span>
) : (
prompt
)}
</button>
</li>
))}
</ul>
</PopoverContent>
</Popover>
);
})}
</div>
);
}
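
The component keys its spinner on a composite `` `${theme}:${prompt}` `` string, so the same prompt text appearing under two themes can never show two spinners at once. Isolated as a sketch (this helper is hypothetical; the component inlines the template literal):

```typescript
// Composite loading key: disambiguates identical prompt text across themes.
const loadingKey = (theme: string, prompt: string) => `${theme}:${prompt}`;
```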

View File

@@ -12,12 +12,87 @@ export function getInputPlaceholder(width?: number) {
   return "What's your role and what eats up most of your day? e.g. 'I'm a recruiter and I hate...'";
 }

-export function getQuickActions() {
-  return [
-    "I don't know where to start, just ask me stuff",
-    "I do the same thing every week and it's killing me",
-    "Help me find where I'm wasting my time",
-  ];
+export interface SuggestionTheme {
+  name: string;
+  prompts: string[];
 }
+
+export const DEFAULT_THEMES: SuggestionTheme[] = [
+  {
+    name: "Learn",
+    prompts: [
+      "What can AutoGPT do for me?",
+      "Show me how agents work",
+      "What integrations are available?",
+      "How do I schedule an agent?",
+      "What are the most popular agents?",
+    ],
+  },
+  {
+    name: "Create",
+    prompts: [
+      "Draft a weekly status report",
+      "Generate social media posts for my business",
+      "Create a competitive analysis summary",
+      "Write onboarding emails for new hires",
+      "Build a content calendar for next month",
+    ],
+  },
+  {
+    name: "Automate",
+    prompts: [
+      "Monitor relevant websites for changes",
+      "Send me a daily news digest on my industry",
+      "Auto-reply to common customer questions",
+      "Track price changes on products I sell",
+      "Summarize my emails every morning",
+    ],
+  },
+  {
+    name: "Organize",
+    prompts: [
+      "Sort my bookmarks into categories",
+      "Create a project timeline from my notes",
+      "Prioritize my task list by urgency",
+      "Build a decision matrix for vendor selection",
+      "Organize my meeting notes into action items",
+    ],
+  },
+];
+
+export function getSuggestionThemes(
+  apiThemes?: SuggestionTheme[],
+): SuggestionTheme[] {
+  if (!apiThemes?.length) {
+    return DEFAULT_THEMES;
+  }
+  const promptsByTheme = new Map(
+    apiThemes.map((theme) => [theme.name, theme.prompts] as const),
+  );
+  // Legacy users have prompts under "General" — distribute them across themes
+  const generalPrompts = (promptsByTheme.get("General") ?? []).filter(
+    (p) => p.trim().length > 0,
+  );
+  return DEFAULT_THEMES.map((theme, idx) => {
+    const personalized = (promptsByTheme.get(theme.name) ?? []).filter(
+      (p) => p.trim().length > 0,
+    );
+    // Spread legacy "General" prompts round-robin across themes
+    const legacySlice = generalPrompts.filter(
+      (_, i) => i % DEFAULT_THEMES.length === idx,
+    );
+    return {
+      name: theme.name,
+      prompts: Array.from(
+        new Set([...personalized, ...legacySlice, ...theme.prompts]),
+      ).slice(0, theme.prompts.length),
+    };
+  });
+}
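
The round-robin split of legacy `"General"` prompts in `getSuggestionThemes` can be isolated as a small pure function: prompt `i` lands on theme `idx` exactly when `i % themeCount === idx`. A sketch (`distributeRoundRobin` is not part of the actual helpers, just an extraction of the logic):

```typescript
// Deal prompts across themeCount buckets like cards: index 0 to bucket 0,
// index 1 to bucket 1, ..., wrapping around after themeCount.
function distributeRoundRobin(general: string[], themeCount: number): string[][] {
  return Array.from({ length: themeCount }, (_, idx) =>
    general.filter((_, i) => i % themeCount === idx),
  );
}
```

With four themes, five legacy prompts split as 2/1/1/1, so no theme is starved while the first absorbs the overflow.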
export function getGreetingName(user?: User | null) {

View File

@@ -20,7 +20,7 @@ export function UsageLimits() {
},
});
-  if (isLoading || !usage?.daily || !usage?.weekly) return null;
+  if (isLoading || !usage) return null;
if (usage.daily.limit <= 0 && usage.weekly.limit <= 0) return null;
return (

View File

@@ -34,7 +34,7 @@ function CoPilotUsageSection() {
},
});
-  if (isLoading || !usage?.daily || !usage?.weekly) return null;
+  if (isLoading || !usage) return null;
if (usage.daily.limit <= 0 && usage.weekly.limit <= 0) return null;
return (

View File

@@ -0,0 +1,15 @@
/**
* Generated by orval v7.13.0 🍺
* Do not edit manually.
* AutoGPT Agent Server
* This server is used to execute agents that are created by the AutoGPT system.
* OpenAPI spec version: 0.1
*/
import type { SuggestedTheme } from "./suggestedTheme";
/**
* Response model for user-specific suggested prompts grouped by theme.
*/
export interface SuggestedPromptsResponse {
themes: SuggestedTheme[];
}

View File

@@ -0,0 +1,15 @@
/**
* Generated by orval v7.13.0 🍺
* Do not edit manually.
* AutoGPT Agent Server
* This server is used to execute agents that are created by the AutoGPT system.
* OpenAPI spec version: 0.1
*/
/**
* A themed group of suggested prompts.
*/
export interface SuggestedTheme {
name: string;
prompts: string[];
}
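
The orval-generated interfaces above carry no runtime validation, so a hand-written guard can confirm an incoming payload actually matches `SuggestedTheme` before it reaches `getSuggestionThemes`. A sketch under that assumption — `isSuggestedTheme` is not generated code, just an illustrative validator:

```typescript
// Mirror of the generated interface.
interface SuggestedTheme {
  name: string;
  prompts: string[];
}

// Narrowing type guard: checks the shape the OpenAPI schema promises
// (required "name" string and "prompts" array of strings).
function isSuggestedTheme(value: unknown): value is SuggestedTheme {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    typeof v.name === "string" &&
    Array.isArray(v.prompts) &&
    v.prompts.every((p) => typeof p === "string")
  );
}
```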

View File

@@ -1358,6 +1358,30 @@
}
}
},
"/api/chat/suggested-prompts": {
"get": {
"tags": ["v2", "chat", "chat"],
"summary": "Get Suggested Prompts",
"description": "Get LLM-generated suggested prompts grouped by theme.\n\nReturns personalized quick-action prompts based on the user's\nbusiness understanding. Returns empty themes list if no custom\nprompts are available.",
"operationId": "getV2GetSuggestedPrompts",
"responses": {
"200": {
"description": "Successful Response",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/SuggestedPromptsResponse"
}
}
}
},
"401": {
"$ref": "#/components/responses/HTTP401NotAuthenticatedError"
}
},
"security": [{ "HTTPBearerJWT": [] }]
}
},
"/api/chat/usage": {
"get": {
"tags": ["v2", "chat", "chat"],
@@ -12754,6 +12778,33 @@
"title": "SuggestedGoalResponse",
"description": "Response when the goal needs refinement with a suggested alternative."
},
"SuggestedPromptsResponse": {
"properties": {
"themes": {
"items": { "$ref": "#/components/schemas/SuggestedTheme" },
"type": "array",
"title": "Themes"
}
},
"type": "object",
"required": ["themes"],
"title": "SuggestedPromptsResponse",
"description": "Response model for user-specific suggested prompts grouped by theme."
},
"SuggestedTheme": {
"properties": {
"name": { "type": "string", "title": "Name" },
"prompts": {
"items": { "type": "string" },
"type": "array",
"title": "Prompts"
}
},
"type": "object",
"required": ["name", "prompts"],
"title": "SuggestedTheme",
"description": "A themed group of suggested prompts."
},
"SuggestionsResponse": {
"properties": {
"recent_searches": {