Mirror of https://github.com/Significant-Gravitas/AutoGPT.git (synced 2026-04-08)

.claude/skills/orchestrate/SKILL.md (new file, 545 lines)
---
name: orchestrate
description: "Meta-agent supervisor that manages a fleet of Claude Code agents running in tmux windows. Auto-discovers spare worktrees, spawns agents, monitors state, kicks idle agents, approves safe confirmations, and recycles worktrees when done. TRIGGER when user asks to supervise agents, run parallel tasks, manage worktrees, check agent status, or orchestrate parallel work."
user-invocable: true
argument-hint: "any free text — e.g. 'start 3 agents on X Y Z', 'show status', 'add task: implement feature A', 'stop', 'how many are free?'"
metadata:
  author: autogpt-team
  version: "6.0.0"
---

# Orchestrate — Agent Fleet Supervisor

One tmux session, N windows — each window is one agent working in its own worktree. Speak naturally; Claude maps your intent to the right scripts.

## Scripts

```bash
SKILLS_DIR=$(git rev-parse --show-toplevel)/.claude/skills/orchestrate/scripts
STATE_FILE=~/.claude/orchestrator-state.json
```

| Script | Purpose |
|---|---|
| `find-spare.sh [REPO_ROOT]` | List free worktrees — one `PATH BRANCH` per line |
| `spawn-agent.sh SESSION PATH SPARE NEW_BRANCH OBJECTIVE [PR_NUMBER] [STEPS...]` | Create window + checkout branch + launch claude + send task. **Stdout: `SESSION:WIN` only** |
| `recycle-agent.sh WINDOW PATH SPARE_BRANCH` | Kill window + restore spare branch |
| `run-loop.sh` | **Mechanical babysitter** — idle restart + dialog approval + recycle on ORCHESTRATOR:DONE + supervisor health check + all-done notification |
| `verify-complete.sh WINDOW` | Verify PR is done: checkpoints ✓ + 0 unresolved threads + CI green + no fresh CHANGES_REQUESTED. Repo auto-derived from state file `.repo` or git remote. |
| `notify.sh MESSAGE` | Send notification via Discord webhook (env `DISCORD_WEBHOOK_URL` or state `.discord_webhook`), macOS notification center, and stdout |
| `capacity.sh [REPO_ROOT]` | Print available + in-use worktrees |
| `status.sh` | Print fleet status + live pane commands |
| `poll-cycle.sh` | One monitoring cycle — classifies panes, tracks checkpoints, returns JSON action array |
| `classify-pane.sh WINDOW` | Classify one pane state |

## Supervision model

```
Orchestrating Claude (this Claude session — IS the supervisor)
  └── Reads pane output, checks CI, intervenes with targeted guidance

run-loop.sh (separate tmux window, every 30s)
  └── Mechanical only: idle restart, dialog approval, recycle on ORCHESTRATOR:DONE
```

**You (the orchestrating Claude)** are the supervisor. After spawning agents, stay in this conversation and actively monitor: poll each agent's pane every 2-3 minutes, check CI, nudge stalled agents, and verify completions. Do not spawn a separate supervisor Claude window — it loses context, is hard to observe, and compounds context compression problems.

**run-loop.sh** is the mechanical layer — zero tokens, handles things that need no judgment: restart crashed agents, press Enter on dialogs, recycle completed worktrees (only after `verify-complete.sh` passes).

## Checkpoint protocol

Agents output checkpoints as they complete each required step:

```
CHECKPOINT:<step-name>
```

Required steps are passed as args to `spawn-agent.sh` (e.g. `pr-address pr-test`). `run-loop.sh` will not recycle a window until all required checkpoints are found in the pane output. If `verify-complete.sh` fails, the agent is re-briefed automatically.

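The checkpoint scan described above can be sketched as a filter over captured pane output. This is a minimal illustration, not the real implementation in `run-loop.sh`; the `found_checkpoints` name is hypothetical, and in practice the input would come from `tmux capture-pane -t SESSION:WIN -p -S -200`.

```shell
# Hypothetical sketch: extract completed checkpoint names from a pane capture.
# Reads pane text on stdin, prints one unique step name per line.
found_checkpoints() {
  grep -o 'CHECKPOINT:[a-z-]*' | sed 's/^CHECKPOINT://' | sort -u
}
```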
## Worktree lifecycle

```text
spare/N branch → spawn-agent.sh (--session-id UUID) → window + feat/branch + claude running
        ↓
CHECKPOINT:<step> (as steps complete)
        ↓
ORCHESTRATOR:DONE
        ↓
verify-complete.sh: checkpoints ✓ + 0 threads + CI green + no fresh CHANGES_REQUESTED
        ↓
state → "done", notify, window KEPT OPEN
        ↓
user/orchestrator explicitly requests recycle
        ↓
recycle-agent.sh → spare/N (free again)
```

**Windows are never auto-killed.** The worktree stays on its branch, the session stays alive. The agent is done working but the window, git state, and Claude session are all preserved until you choose to recycle.

**To resume a done or crashed session:**

```bash
# Resume by stored session ID (preferred — exact session, full context)
claude --resume SESSION_ID --permission-mode bypassPermissions

# Or resume most recent session in that worktree directory
cd /path/to/worktree && claude --continue --permission-mode bypassPermissions
```

**To manually recycle when ready:**

```bash
bash $SKILLS_DIR/recycle-agent.sh SESSION:WIN WORKTREE_PATH spare/N
# Then update state:
jq --arg w "SESSION:WIN" '.agents |= map(if .window == $w then .state = "recycled" else . end)' \
  ~/.claude/orchestrator-state.json > /tmp/orch.tmp && mv /tmp/orch.tmp ~/.claude/orchestrator-state.json
```

## State file (`~/.claude/orchestrator-state.json`)

Never committed to git. You maintain this file directly using `jq` + atomic writes (`.tmp` → `mv`).

```json
{
  "active": true,
  "tmux_session": "autogpt1",
  "idle_threshold_seconds": 300,
  "loop_window": "autogpt1:5",
  "repo": "Significant-Gravitas/AutoGPT",
  "discord_webhook": "https://discord.com/api/webhooks/...",
  "last_poll_at": 0,
  "agents": [
    {
      "window": "autogpt1:3",
      "worktree": "AutoGPT6",
      "worktree_path": "/path/to/AutoGPT6",
      "spare_branch": "spare/6",
      "branch": "feat/my-feature",
      "objective": "Implement X and open a PR",
      "pr_number": "12345",
      "session_id": "550e8400-e29b-41d4-a716-446655440000",
      "steps": ["pr-address", "pr-test"],
      "checkpoints": ["pr-address"],
      "state": "running",
      "last_output_hash": "",
      "last_seen_at": 0,
      "spawned_at": 0,
      "idle_since": 0,
      "revision_count": 0,
      "last_rebriefed_at": 0
    }
  ]
}
```

Top-level optional fields:
- `repo` — GitHub `owner/repo` for CI/thread checks. Auto-derived from git remote if omitted.
- `discord_webhook` — Discord webhook URL for completion notifications. Also reads the `DISCORD_WEBHOOK_URL` env var.

Per-agent fields:
- `session_id` — UUID passed to `claude --session-id` at spawn; use with `claude --resume UUID` to restore exact session context after a crash or window close.
- `last_rebriefed_at` — Unix timestamp of the last re-brief; enforces a 5-minute cooldown to prevent spam.

Agent states: `running` | `idle` | `stuck` | `waiting_approval` | `complete` | `done` | `escalated`

`done` means verified complete — the window is still open, the session still alive, the worktree still on its task branch. Not recycled yet.

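A quick fleet summary can be read straight out of this file with `jq`. The `state_summary` helper below is a hypothetical convenience, not one of the skill's scripts; it assumes only the state-file shape shown above.

```shell
# Hypothetical helper: count agents by state from a state file passed as $1.
# Prints one "state: count" line per distinct state, sorted by state name.
state_summary() {
  jq -r '.agents | group_by(.state) | map("\(.[0].state): \(length)") | .[]' "$1"
}
```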
## Serial /pr-test rule

`/pr-test` and `/pr-test --fix` run local Docker + integration tests that use shared ports, a shared database, and shared build caches. **Running two `/pr-test` jobs simultaneously will cause port conflicts and database corruption.**

**Rule: only one `/pr-test` runs at a time. The orchestrator serializes them.**

You (the orchestrating Claude) own the test queue:

1. Agents do `pr-review` and `pr-address` in parallel — that's safe (they only push code and reply to GitHub).
2. When a PR needs local testing, add it to your mental queue — don't give agents a `pr-test` step.
3. Run `/pr-test https://github.com/OWNER/REPO/pull/PR_NUMBER --fix` yourself, sequentially.
4. Feed results back to the relevant agent via `tmux send-keys`:

   ```bash
   tmux send-keys -t SESSION:WIN "Local tests for PR #N: <paste failure output or 'all passed'>. Fix any failures and push, then output ORCHESTRATOR:DONE."
   sleep 0.3
   tmux send-keys -t SESSION:WIN Enter
   ```

5. Wait for CI to confirm green before marking the agent done.

If multiple PRs need testing at the same time, pick the one furthest along (fewest pending CI checks) and test it first. Only start the next test after the previous one completes.

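The "fewest pending CI checks first" ordering amounts to a numeric sort. A sketch, under the assumption that pending-check counts have already been gathered (e.g. from `gh pr checks`) into plain `PR COUNT` lines; `pick_next_test` is an illustrative name, not a real script:

```shell
# Hypothetical helper: given "PR_NUMBER PENDING_COUNT" lines on stdin,
# print the PR number with the fewest pending checks.
pick_next_test() {
  sort -k2 -n | head -1 | awk '{print $1}'
}
```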
## Session restore (tested and confirmed)

Agent sessions are saved to disk. To restore a closed or crashed session:

```bash
# If session_id is in state (preferred):
NEW_WIN=$(tmux new-window -t SESSION -n WORKTREE_NAME -P -F '#{window_index}')
tmux send-keys -t "SESSION:${NEW_WIN}" "cd /path/to/worktree && claude --resume SESSION_ID --permission-mode bypassPermissions" Enter

# If no session_id (use --continue for most recent session in that directory):
tmux send-keys -t "SESSION:${NEW_WIN}" "cd /path/to/worktree && claude --continue --permission-mode bypassPermissions" Enter
```

`--continue` restores the full conversation history including all tool calls, file edits, and context. The agent resumes exactly where it left off. After restoring, update the window address in the state file:

```bash
jq --arg old "SESSION:OLD_WIN" --arg new "SESSION:NEW_WIN" \
  '(.agents[] | select(.window == $old)).window = $new' \
  ~/.claude/orchestrator-state.json > /tmp/orch.tmp && mv /tmp/orch.tmp ~/.claude/orchestrator-state.json
```

## Intent → action mapping

Match the user's message to one of these intents:

| The user says something like… | What to do |
|---|---|
| "status", "what's running", "show agents" | Run `status.sh` + `capacity.sh`, show output |
| "how many free", "capacity", "available worktrees" | Run `capacity.sh`, show output |
| "start N agents on X, Y, Z" or "run these tasks: …" | See **Spawning agents** below |
| "add task: …", "add one more agent for …" | See **Adding an agent** below |
| "stop", "shut down", "pause the fleet" | See **When to stop the fleet** below |
| "poll", "check now", "run a cycle" | Run `poll-cycle.sh`, process actions |
| "recycle window X", "free up autogpt3" | Run `recycle-agent.sh` directly |

When the intent is ambiguous, show capacity first and ask what tasks to run.

## Spawning agents

### 1. Resolve tmux session

```bash
tmux list-sessions -F "#{session_name}: #{session_windows} windows" 2>/dev/null
```

Use an existing session. **Never create a tmux session from within Claude** — it becomes a child of Claude's process and dies when the session ends. If no session exists, tell the user to run `tmux new-session -d -s autogpt1` in their terminal first, then re-invoke `/orchestrate`.

### 2. Show available capacity

```bash
bash $SKILLS_DIR/capacity.sh $(git rev-parse --show-toplevel)
```

### 3. Collect tasks from the user

For each task, gather:
- **objective** — what to do (e.g. "implement feature X and open a PR")
- **branch name** — e.g. `feat/my-feature` (derive from objective if not given)
- **pr_number** — GitHub PR number if working on an existing PR (for verification)
- **steps** — required checkpoint names in order (e.g. `pr-address pr-test`) — derive from objective

Ask for `idle_threshold_seconds` only if the user mentions it (default: 300).

Never ask the user to specify a worktree — auto-assign from `find-spare.sh`.

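Deriving a branch name from an objective, as the checklist above suggests, can be done with a small slug filter. This is an illustrative sketch only; the `slugify_branch` name and the `feat/` prefix convention are assumptions, not part of the skill's scripts:

```shell
# Hypothetical helper: turn an objective string into a feat/ branch name.
# Lowercases, collapses non-alphanumeric runs to "-", truncates to 40 chars.
slugify_branch() {
  echo "$1" | tr '[:upper:]' '[:lower:]' \
    | sed 's/[^a-z0-9]\{1,\}/-/g' | cut -c1-40 \
    | sed 's/^-//; s/-$//; s/^/feat\//'
}
```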
### 4. Spawn one agent per task

```bash
# Get ordered list of spare worktrees
SPARE_LIST=$(bash $SKILLS_DIR/find-spare.sh $(git rev-parse --show-toplevel))

# For each task, take the next spare line:
WORKTREE_PATH=$(echo "$SPARE_LINE" | awk '{print $1}')
SPARE_BRANCH=$(echo "$SPARE_LINE" | awk '{print $2}')

# With PR number and required steps:
WINDOW=$(bash $SKILLS_DIR/spawn-agent.sh "$SESSION" "$WORKTREE_PATH" "$SPARE_BRANCH" "$NEW_BRANCH" "$OBJECTIVE" "$PR_NUMBER" "pr-address" "pr-test")

# Without PR (new work):
WINDOW=$(bash $SKILLS_DIR/spawn-agent.sh "$SESSION" "$WORKTREE_PATH" "$SPARE_BRANCH" "$NEW_BRANCH" "$OBJECTIVE")
```

If the state file doesn't exist yet, initialize it before spawning:

```bash
# Derive repo from git remote (used by verify-complete.sh + supervisor)
REPO=$(git remote get-url origin 2>/dev/null | sed 's|.*github\.com[:/]||; s|\.git$||' || echo "")

jq -n \
  --arg session "$SESSION" \
  --arg repo "$REPO" \
  --argjson threshold 300 \
  '{active:true, tmux_session:$session, idle_threshold_seconds:$threshold,
    repo:$repo, loop_window:null, supervisor_window:null, last_poll_at:0, agents:[]}' \
  > ~/.claude/orchestrator-state.json
```

Optionally add a Discord webhook for completion notifications:

```bash
jq --arg hook "$DISCORD_WEBHOOK_URL" '.discord_webhook = $hook' ~/.claude/orchestrator-state.json \
  > /tmp/orch.tmp && mv /tmp/orch.tmp ~/.claude/orchestrator-state.json
```

`spawn-agent.sh` writes the initial agent record (window, worktree_path, branch, objective, state, etc.) to the state file automatically — **do not append the record again after calling it.** The record already exists and `pr_number`/`steps` are patched in by the script itself.

### 5. Start the mechanical babysitter

```bash
LOOP_WIN=$(tmux new-window -t "$SESSION" -n "orchestrator" -P -F '#{window_index}')
LOOP_WINDOW="${SESSION}:${LOOP_WIN}"
tmux send-keys -t "$LOOP_WINDOW" "bash $SKILLS_DIR/run-loop.sh" Enter

jq --arg w "$LOOP_WINDOW" '.loop_window = $w' ~/.claude/orchestrator-state.json \
  > /tmp/orch.tmp && mv /tmp/orch.tmp ~/.claude/orchestrator-state.json
```

### 6. Begin supervising directly in this conversation

You are the supervisor. After spawning, immediately start your first poll loop (see **Supervisor duties** below) and continue every 2-3 minutes. Do NOT spawn a separate supervisor Claude window.

## Adding an agent

Find the next spare worktree, then spawn and update state — same as steps 2–4 above but for a single task. If no spare worktrees are available, tell the user.

## Supervisor duties (YOUR job, every 2-3 min in this conversation)

You are the supervisor. Run this poll loop directly in your Claude session — not in a separate window.

### Poll loop mechanism

You are reactive — you only act when a tool completes or the user sends a message. To create a self-sustaining poll loop without user involvement:

1. Start each poll with `run_in_background: true` + a sleep before the work:

   ```bash
   sleep 120 && tmux capture-pane -t autogpt1:0 -p -S -200 | tail -40
   # + similar for each active window
   ```

2. When the background job notifies you, read the pane output and take action.
3. Immediately schedule the next background poll — this keeps the loop alive.
4. Stop scheduling when all agents are done/escalated.

**Never tell the user "I'll poll every 2-3 minutes"** — that does nothing without a trigger. Start the background job instead.

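The set of windows worth capturing on a given poll can be read from the state file. A sketch under the state-file shape shown earlier; `to_poll` is an illustrative name, not one of the skill's scripts:

```shell
# Hypothetical helper: list windows of agents that still need a pane capture
# (running, stuck, or idle) from the state file passed as $1.
to_poll() {
  jq -r '.agents[]
         | select(.state == "running" or .state == "stuck" or .state == "idle")
         | .window' "$1"
}
```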
### Each poll: what to check

```bash
# 1. Read state
jq '.agents[] | {window, worktree, branch, state, pr_number, checkpoints}' ~/.claude/orchestrator-state.json

# 2. For each running/stuck/idle agent, capture pane
tmux capture-pane -t SESSION:WIN -p -S -200 | tail -60
```

For each agent, decide:

| What you see | Action |
|---|---|
| Spinner / tools running | Do nothing — agent is working |
| Idle `❯` prompt, no `ORCHESTRATOR:DONE` | Stalled — send specific nudge with objective from state |
| Stuck in error loop | Send targeted fix with exact error + solution |
| Waiting for input / question | Answer and unblock via `tmux send-keys` |
| CI red | `gh pr checks PR_NUMBER --repo REPO` → tell agent exactly what's failing |
| Context compacted / agent lost | Send recovery: `cat ~/.claude/orchestrator-state.json \| jq '.agents[] \| select(.window=="WIN")'` + `gh pr view PR_NUMBER --json title,body` |
| `ORCHESTRATOR:DONE` in output | Run `verify-complete.sh` — if it fails, re-brief with specific reason |

### Strict ORCHESTRATOR:DONE gate

`verify-complete.sh` handles the main checks automatically (checkpoints, threads, CI green, spawned_at, and CHANGES_REQUESTED). Run it:

```bash
SKILLS_DIR=$(git rev-parse --show-toplevel)/.claude/skills/orchestrate/scripts
bash $SKILLS_DIR/verify-complete.sh SESSION:WIN
```

**CHANGES_REQUESTED staleness rule**: a `CHANGES_REQUESTED` review only blocks if it was submitted *after* the latest commit. If the latest commit postdates the review, the review is considered stale (feedback already addressed) and does not block. This avoids false negatives when a bot reviewer hasn't re-reviewed after the agent's fixing commits.

If it passes → run-loop.sh will recycle the window automatically. No manual action needed.
If it fails → re-brief the agent with the failure reason. Never manually mark state `done` to bypass this.

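The staleness comparison reduces to comparing ISO-8601 timestamps, which sort lexicographically. A hedged sketch of the idea (`verify-complete.sh` remains authoritative); the `fresh_changes_requested` name is hypothetical, and the input shape assumed here matches `gh pr view PR --json reviews,commits`:

```shell
# Hypothetical sketch: count CHANGES_REQUESTED reviews submitted AFTER the
# latest commit. 0 means every such review is stale and does not block.
fresh_changes_requested() {
  jq '.commits[-1].committedDate as $last
      | [.reviews[] | select(.state == "CHANGES_REQUESTED" and .submittedAt > $last)]
      | length'
}
```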
### Re-brief a stalled agent

```bash
OBJ=$(jq -r --arg w SESSION:WIN '.agents[] | select(.window==$w) | .objective' ~/.claude/orchestrator-state.json)
PR=$(jq -r --arg w SESSION:WIN '.agents[] | select(.window==$w) | .pr_number' ~/.claude/orchestrator-state.json)
tmux send-keys -t SESSION:WIN "You appear stalled. Your objective: $OBJ. Check: gh pr view $PR --json title,body,headRefName to reorient."
sleep 0.3
tmux send-keys -t SESSION:WIN Enter
```

If `image_path` is set on the agent record, include: "Re-read context at IMAGE_PATH with the Read tool."

## Self-recovery protocol (agents)

spawn-agent.sh automatically includes this instruction in every objective:

> If your context compacts and you lose track of what to do, run:
> `cat ~/.claude/orchestrator-state.json | jq '.agents[] | select(.window=="SESSION:WIN")'`
> and `gh pr view PR_NUMBER --json title,body,headRefName` to reorient.
> Output each completed step as `CHECKPOINT:<step-name>` on its own line.

## Passing images and screenshots to agents

`tmux send-keys` is text-only — you cannot paste a raw image into a pane. To give an agent visual context (screenshots, diagrams, mockups):

1. **Save the image to a temp file** with a stable path:

   ```bash
   # If the user drags in a screenshot or you receive a file path:
   IMAGE_PATH="/tmp/orchestrator-context-$(date +%s).png"
   cp "$USER_PROVIDED_PATH" "$IMAGE_PATH"
   ```

2. **Reference the path in the objective string**:

   ```bash
   OBJECTIVE="Implement the layout shown in /tmp/orchestrator-context-1234567890.png. Read that image first with the Read tool to understand the design."
   ```

3. The agent uses its `Read` tool to view the image at startup — Claude Code agents are multimodal and can read image files directly.

**Rule**: always use `/tmp/orchestrator-context-<timestamp>.png` as the naming convention so the supervisor knows what to look for if it needs to re-brief an agent with the same image.

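One payoff of the fixed naming convention: the most recent context image can be located by modification time when re-briefing. A sketch; the `latest_context_image` name is hypothetical:

```shell
# Hypothetical helper: find the most recently written context image,
# relying on the /tmp/orchestrator-context-<timestamp>.png convention.
latest_context_image() {
  ls -t /tmp/orchestrator-context-*.png 2>/dev/null | head -1
}
```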
---

## Orchestrator final evaluation (YOU decide, not the script)

`verify-complete.sh` is a gate — it blocks premature marking. But it cannot tell you if the work is actually good. That is YOUR job.

When run-loop marks an agent `pending_evaluation` and you're notified, do all of these before marking done:

### 1. Run /pr-test (required, serialized, use TodoWrite to queue)

`/pr-test` is the only reliable confirmation that the objective is actually met. Run it yourself, not the agent.

**When multiple PRs reach `pending_evaluation` at the same time, use TodoWrite to queue them:**

```
- [ ] /pr-test PR #12636 — fix copilot retry logic
- [ ] /pr-test PR #12699 — builder chat panel
```

Run one at a time. Check off as you go.

```
/pr-test https://github.com/Significant-Gravitas/AutoGPT/pull/PR_NUMBER
```

**/pr-test can be lazy** — if it gives vague output, re-run with full context:

```
/pr-test https://github.com/OWNER/REPO/pull/PR_NUMBER
Context: This PR implements <objective from state file>. Key files: <list>.
Please verify: <specific behaviors to check>.
```

Only one `/pr-test` at a time — they share ports and DB.

### /pr-test result evaluation

**PARTIAL on any headline feature scenario is an immediate blocker.** Do not approve, do not mark done, do not let the agent output `ORCHESTRATOR:DONE`.

| `/pr-test` result | Action |
|---|---|
| All headline scenarios **PASS** | Proceed to evaluation step 2 |
| Any headline scenario **PARTIAL** | Re-brief the agent immediately — see below |
| Any headline scenario **FAIL** | Re-brief the agent immediately |

**What PARTIAL means**: the feature is only partly working. Example: the Apply button never appeared, or the AI returned no action blocks. The agent addressed part of the objective but not all of it.

**When any headline scenario is PARTIAL or FAIL:**

1. Do NOT mark the agent done or accept `ORCHESTRATOR:DONE`
2. Re-brief the agent with the specific scenario that failed and what was missing:

   ```bash
   tmux send-keys -t SESSION:WIN "PARTIAL result on /pr-test — S5 (Apply button) never appeared. The AI must output JSON action blocks for the Apply button to render. Fix this before re-running /pr-test."
   sleep 0.3
   tmux send-keys -t SESSION:WIN Enter
   ```

3. Set state back to `running`:

   ```bash
   jq --arg w "SESSION:WIN" '(.agents[] | select(.window == $w)).state = "running"' \
     ~/.claude/orchestrator-state.json > /tmp/orch.tmp && mv /tmp/orch.tmp ~/.claude/orchestrator-state.json
   ```

4. Wait for a new `ORCHESTRATOR:DONE`, then re-run `/pr-test` from scratch

**Rule: only ALL-PASS qualifies for approval.** A mix of PASS + PARTIAL is a failure.

> **Why this matters**: PR #12699 was wrongly approved with S5 PARTIAL — the AI never output JSON action blocks so the Apply button never appeared. The fix was already in the agent's reach but slipped through because PARTIAL was not treated as blocking.

### 2. Do your own evaluation

1. **Read the PR diff and objective** — does the code actually implement what was asked? Is anything obviously missing or half-done?
2. **Read the resolved threads** — were comments addressed with real fixes, or just dismissed/resolved without changes?
3. **Check CI run names** — any suspicious retries that shouldn't have passed?
4. **Check the PR description** — title, summary, test plan complete?

### 3. Decide

- `/pr-test` all scenarios PASS + evaluation looks good → mark `done` in state, tell the user the PR is ready, ask if the window should be closed
- `/pr-test` any scenario PARTIAL or FAIL → re-brief the agent with the specific failing scenario, set state back to `running` (see **/pr-test result evaluation** above)
- Evaluation finds gaps even with all PASS → re-brief the agent with the specific gaps, set state back to `running`

**Never mark done based purely on script output.** You hold the full objective context; the script does not.

```bash
# Mark done after your positive evaluation:
jq --arg w "SESSION:WIN" '(.agents[] | select(.window == $w)).state = "done"' \
  ~/.claude/orchestrator-state.json > /tmp/orch.tmp && mv /tmp/orch.tmp ~/.claude/orchestrator-state.json
```

## When to stop the fleet

Stop the fleet (`active = false`) when **all** of the following are true:

| Check | How to verify |
|---|---|
| All agents are `done` or `escalated` | `jq '[.agents[] \| select(.state \| test("running\|stuck\|idle\|waiting_approval"))] \| length' ~/.claude/orchestrator-state.json` == 0 |
| All PRs have 0 unresolved review threads | GraphQL `isResolved` check per PR |
| All PRs have green CI **on a run triggered after the agent's last push** | `gh run list --branch BRANCH --limit 1` timestamp > `spawned_at` in state |
| No fresh CHANGES_REQUESTED (after latest commit) | `verify-complete.sh` checks this — stale pre-commit reviews are ignored |
| No agents are `escalated` without human review | If any are escalated, surface to user first |

**Do NOT stop just because agents output `ORCHESTRATOR:DONE`.** That is a signal to verify, not a signal to stop.

**Do stop** if the user explicitly says "stop", "shut down", or "kill everything", even with agents still running.

```bash
# Graceful stop
jq '.active = false' ~/.claude/orchestrator-state.json > /tmp/orch.tmp \
  && mv /tmp/orch.tmp ~/.claude/orchestrator-state.json

LOOP_WINDOW=$(jq -r '.loop_window // ""' ~/.claude/orchestrator-state.json)
[ -n "$LOOP_WINDOW" ] && tmux kill-window -t "$LOOP_WINDOW" 2>/dev/null || true
```

Stopping does **not** recycle running worktrees — agents may still be mid-task. Run `capacity.sh` to see what's still in progress.

## tmux send-keys pattern

**Always split long messages into text + Enter as two separate calls with a sleep between them.** If sent as one call (`"text" Enter`), Enter can fire before the full string is buffered into Claude's input — leaving the message stuck as `[Pasted text +N lines]`, unsent.

```bash
# CORRECT — text then Enter separately
tmux send-keys -t "$WINDOW" "your long message here"
sleep 0.3
tmux send-keys -t "$WINDOW" Enter

# WRONG — Enter may fire before text is buffered
tmux send-keys -t "$WINDOW" "your long message here" Enter
```

Short single-character sends (`y`, `Down`, an empty Enter for dialog approval) are safe to combine since they have no buffering lag.

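Since this split-send pattern recurs throughout the document, it could be wrapped in a small function. A hypothetical convenience sketch, not part of the skill's scripts; the `send_message` name is illustrative:

```shell
# Hypothetical wrapper encapsulating the split text-then-Enter pattern.
# Usage: send_message TARGET "message"
send_message() {
  tmux send-keys -t "$1" "$2"   # buffer the text first
  sleep 0.3                      # let tmux flush it into the pane
  tmux send-keys -t "$1" Enter   # then submit
}
```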
---

## Protected worktrees

Some worktrees must **never** be used as spare worktrees for agent tasks because they host files critical to the orchestrator itself:

| Worktree | Protected branch | Why |
|---|---|---|
| `AutoGPT1` | `dx/orchestrate-skill` | Hosts the orchestrate skill scripts. `recycle-agent.sh` would check out `spare/1`, wiping `.claude/skills/` and breaking all subsequent `spawn-agent.sh` calls. |

**Rule**: when selecting spare worktrees via `find-spare.sh`, skip any worktree whose CURRENT branch matches a protected branch. If you accidentally spawn an agent in a protected worktree, do not let `recycle-agent.sh` run on it — manually restore the branch after the agent finishes.

When `dx/orchestrate-skill` is merged into `dev`, `AutoGPT1` becomes a normal spare again.

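The skip rule above can be applied as a filter over `find-spare.sh` output (one `PATH BRANCH` pair per line). A sketch; the `filter_protected` name and the space-separated `PROTECTED` list are assumptions for illustration:

```shell
# Hypothetical filter: drop find-spare.sh lines whose branch is protected.
# PROTECTED is a space-separated list of protected branch names.
PROTECTED="dx/orchestrate-skill"
filter_protected() {
  while read -r path branch; do
    case " $PROTECTED " in
      *" $branch "*) ;;               # protected branch: skip this worktree
      *) echo "$path $branch" ;;      # otherwise keep it
    esac
  done
}
```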
---

## Key rules

1. **Scripts do all the heavy lifting** — don't reimplement their logic inline in this file
2. **Never ask the user to pick a worktree** — auto-assign from `find-spare.sh` output
3. **Never restart a running agent** — only restart on `idle` kicks (foreground is a shell)
4. **Auto-dismiss settings dialogs** — if "Enter to confirm" appears, send Down+Enter
5. **Always `--permission-mode bypassPermissions`** on every spawn
6. **Escalate after 3 kicks** — mark `escalated`, surface to user
7. **Atomic state writes** — always write to `.tmp` then `mv`
8. **Never approve destructive commands** outside the worktree scope — when in doubt, escalate
9. **Never recycle without verification** — `verify-complete.sh` must pass before recycling
10. **No TASK.md files** — commit risk; use state file + `gh pr view` for agent context persistence
11. **Re-brief stalled agents** — read objective from state file + `gh pr view`, send via tmux
12. **ORCHESTRATOR:DONE is a signal to verify, not to accept** — always run `verify-complete.sh` and check the CI run timestamp before recycling
13. **Protected worktrees** — never use the worktree hosting the skill scripts as a spare
14. **Images via file path** — save screenshots to `/tmp/orchestrator-context-<ts>.png`, pass the path in the objective; agents read it with the `Read` tool
15. **Split send-keys** — always separate text and Enter, with `sleep 0.3` between the calls, for long strings

.claude/skills/orchestrate/scripts/capacity.sh (new executable file, 43 lines)

```bash
#!/usr/bin/env bash
# capacity.sh — show fleet capacity: available spare worktrees + in-use agents
#
# Usage: capacity.sh [REPO_ROOT]
#   REPO_ROOT defaults to the root worktree of the current git repo.
#
# Reads: ~/.claude/orchestrator-state.json (skipped if missing or corrupt)

set -euo pipefail

SCRIPTS_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
STATE_FILE="${ORCHESTRATOR_STATE_FILE:-$HOME/.claude/orchestrator-state.json}"
REPO_ROOT="${1:-$(git rev-parse --show-toplevel 2>/dev/null || echo "")}"

echo "=== Available (spare) worktrees ==="
if [ -n "$REPO_ROOT" ]; then
  SPARE=$("$SCRIPTS_DIR/find-spare.sh" "$REPO_ROOT" 2>/dev/null || echo "")
else
  SPARE=$("$SCRIPTS_DIR/find-spare.sh" 2>/dev/null || echo "")
fi

if [ -z "$SPARE" ]; then
  echo "  (none)"
else
  while IFS= read -r line; do
    [ -z "$line" ] && continue
    echo "  ✓ $line"
  done <<< "$SPARE"
fi

echo ""
echo "=== In-use worktrees ==="
if [ -f "$STATE_FILE" ] && jq -e '.' "$STATE_FILE" >/dev/null 2>&1; then
  IN_USE=$(jq -r '.agents[] | select(.state != "done") | "  [\(.state)] \(.worktree_path) → \(.branch)"' \
    "$STATE_FILE" 2>/dev/null || echo "")
  if [ -n "$IN_USE" ]; then
    echo "$IN_USE"
  else
    echo "  (none)"
  fi
else
  echo "  (no active state file)"
fi
```

85
.claude/skills/orchestrate/scripts/classify-pane.sh
Executable file
@@ -0,0 +1,85 @@
#!/usr/bin/env bash
# classify-pane.sh — Classify the current state of a tmux pane
#
# Usage: classify-pane.sh <tmux-target>
#   tmux-target: e.g. "work:0", "work:1.0"
#
# Output (stdout): JSON object:
#   { "state": "running|idle|waiting_approval|complete", "reason": "...", "pane_cmd": "..." }
#
# Exit codes: 0=ok, 1=error (invalid target or tmux window not found)

set -euo pipefail

TARGET="${1:-}"

if [ -z "$TARGET" ]; then
  echo '{"state":"error","reason":"no target provided","pane_cmd":""}'
  exit 1
fi

# Validate tmux target format: session:window or session:window.pane
if ! [[ "$TARGET" =~ ^[a-zA-Z0-9_.-]+:[a-zA-Z0-9_.-]+(\.[0-9]+)?$ ]]; then
  echo '{"state":"error","reason":"invalid tmux target format","pane_cmd":""}'
  exit 1
fi

# Check session exists (use %%:* to extract session name from session:window)
if ! tmux list-windows -t "${TARGET%%:*}" >/dev/null 2>&1; then
  echo '{"state":"error","reason":"tmux target not found","pane_cmd":""}'
  exit 1
fi

# Get the current foreground command in the pane
PANE_CMD=$(tmux display-message -t "$TARGET" -p '#{pane_current_command}' 2>/dev/null || echo "unknown")

# Capture and strip ANSI codes (use perl for cross-platform compatibility — BSD sed lacks \x1b support)
RAW=$(tmux capture-pane -t "$TARGET" -p -S -50 2>/dev/null || echo "")
CLEAN=$(echo "$RAW" | perl -pe 's/\x1b\[[0-9;]*[a-zA-Z]//g; s/\x1b\(B//g; s/\x1b\[\?[0-9]*[hl]//g; s/\r//g' \
  | grep -v '^[[:space:]]*$' || true)

# --- Check: explicit completion marker ---
# Must be on its own line (not buried in the objective text sent at spawn time).
if echo "$CLEAN" | grep -qE "^[[:space:]]*ORCHESTRATOR:DONE[[:space:]]*$"; then
  jq -n --arg cmd "$PANE_CMD" '{"state":"complete","reason":"ORCHESTRATOR:DONE marker found","pane_cmd":$cmd}'
  exit 0
fi

# --- Check: Claude Code approval prompt patterns ---
LAST_40=$(echo "$CLEAN" | tail -40)
APPROVAL_PATTERNS=(
  "Do you want to proceed"
  "Do you want to make this"
  "\\[y/n\\]"
  "\\[Y/n\\]"
  "\\[n/Y\\]"
  "Proceed\\?"
  "Allow this command"
  "Run bash command"
  "Allow bash"
  "Would you like"
  "Press enter to continue"
  "Esc to cancel"
)
for pattern in "${APPROVAL_PATTERNS[@]}"; do
  if echo "$LAST_40" | grep -qiE "$pattern"; then
    jq -n --arg pattern "$pattern" --arg cmd "$PANE_CMD" \
      '{"state":"waiting_approval","reason":"approval pattern: \($pattern)","pane_cmd":$cmd}'
    exit 0
  fi
done

# --- Check: shell prompt (claude has exited) ---
# If the foreground process is a shell (not claude/node), the agent has exited
case "$PANE_CMD" in
  zsh|bash|fish|sh|dash|tcsh|ksh)
    jq -n --arg cmd "$PANE_CMD" \
      '{"state":"idle","reason":"agent exited — shell prompt active","pane_cmd":$cmd}'
    exit 0
    ;;
esac

# Agent is still running (claude/node/python is the foreground process)
jq -n --arg cmd "$PANE_CMD" \
  '{"state":"running","reason":"foreground process: \($cmd)","pane_cmd":$cmd}'
exit 0
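The target-validation regex in classify-pane.sh is easy to exercise on its own. A minimal sketch, wrapping the same pattern in a hypothetical `is_valid_target` helper:

```shell
# Hypothetical wrapper around the tmux-target pattern classify-pane.sh validates:
# session:window or session:window.pane, alphanumerics plus . _ - only.
is_valid_target() {
  [[ "$1" =~ ^[a-zA-Z0-9_.-]+:[a-zA-Z0-9_.-]+(\.[0-9]+)?$ ]]
}
```

Rejecting anything with spaces or shell metacharacters is what makes interpolating the value into later `tmux -t "$WINDOW"` calls safe.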
24
.claude/skills/orchestrate/scripts/find-spare.sh
Executable file
@@ -0,0 +1,24 @@
#!/usr/bin/env bash
# find-spare.sh — list worktrees on spare/N branches (free to use)
#
# Usage: find-spare.sh [REPO_ROOT]
#   REPO_ROOT defaults to the root worktree containing the current git repo.
#
# Output (stdout): one line per available worktree: "PATH BRANCH"
#   e.g.: /Users/me/Code/AutoGPT3 spare/3

set -euo pipefail

REPO_ROOT="${1:-$(git rev-parse --show-toplevel 2>/dev/null || echo "")}"
if [ -z "$REPO_ROOT" ]; then
  echo "Error: not inside a git repo and no REPO_ROOT provided" >&2
  exit 1
fi

git -C "$REPO_ROOT" worktree list --porcelain \
  | awk '
      /^worktree / { path = substr($0, 10) }
      /^branch /   { branch = substr($0, 8); print path " " branch }
    ' \
  | { grep -E " refs/heads/spare/[0-9]+$" || true; } \
  | sed 's|refs/heads/||'
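The awk/grep/sed pipeline can be checked against a canned `git worktree list --porcelain` listing; the paths below are fabricated for illustration.

```shell
# Feed a fabricated porcelain listing through the same pipeline find-spare.sh uses.
SAMPLE='worktree /tmp/demo/AutoGPT3
HEAD abc1234
branch refs/heads/spare/3

worktree /tmp/demo/AutoGPT4
HEAD def5678
branch refs/heads/feat/x'

printf '%s\n' "$SAMPLE" \
  | awk '
      /^worktree / { path = substr($0, 10) }
      /^branch /   { branch = substr($0, 8); print path " " branch }
    ' \
  | { grep -E " refs/heads/spare/[0-9]+$" || true; } \
  | sed 's|refs/heads/||'
# → /tmp/demo/AutoGPT3 spare/3
```

Only the worktree on a `spare/N` branch survives the grep; detached-HEAD worktrees never emit a `branch` line, so they drop out in the awk stage.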
40
.claude/skills/orchestrate/scripts/notify.sh
Executable file
@@ -0,0 +1,40 @@
#!/usr/bin/env bash
# notify.sh — send a fleet notification message
#
# Delivery order (first available wins):
#   1. Discord webhook — DISCORD_WEBHOOK_URL env var OR state file .discord_webhook
#   2. macOS notification center — osascript (silent fail if unavailable)
#   3. Stdout only
#
# Usage: notify.sh MESSAGE
# Exit: always 0 (notification failure must not abort the caller)

MESSAGE="${1:-}"
[ -z "$MESSAGE" ] && exit 0

STATE_FILE="${ORCHESTRATOR_STATE_FILE:-$HOME/.claude/orchestrator-state.json}"

# --- Resolve Discord webhook ---
WEBHOOK="${DISCORD_WEBHOOK_URL:-}"
if [ -z "$WEBHOOK" ] && [ -f "$STATE_FILE" ]; then
  WEBHOOK=$(jq -r '.discord_webhook // ""' "$STATE_FILE" 2>/dev/null || echo "")
fi

# --- Discord delivery ---
if [ -n "$WEBHOOK" ]; then
  PAYLOAD=$(jq -n --arg msg "$MESSAGE" '{"content": $msg}')
  curl -s -X POST "$WEBHOOK" \
    -H "Content-Type: application/json" \
    -d "$PAYLOAD" > /dev/null 2>&1 || true
fi

# --- macOS notification center (silent if not macOS or osascript missing) ---
if command -v osascript >/dev/null 2>&1; then
  # Escape backslashes and double quotes for the AppleScript double-quoted string
  SAFE_MSG=$(printf '%s' "$MESSAGE" | sed 's/\\/\\\\/g; s/"/\\"/g')
  osascript -e "display notification \"${SAFE_MSG}\" with title \"Orchestrator\"" 2>/dev/null || true
fi

# Always print to stdout so run-loop.sh logs it
echo "$MESSAGE"
exit 0
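A message embedded inside osascript's double-quoted string literal needs backslashes and double quotes escaped (single quotes are harmless there). A minimal sketch of that escaping:

```shell
# Sketch: make a message safe inside an AppleScript double-quoted string literal.
MESSAGE='report: "done" in C:\path'
SAFE_MSG=$(printf '%s' "$MESSAGE" | sed 's/\\/\\\\/g; s/"/\\"/g')
printf '%s\n' "$SAFE_MSG"
# → report: \"done\" in C:\\path
```

Backslashes are doubled first; doing the substitutions in the other order would re-escape the backslashes introduced for the quotes.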
257
.claude/skills/orchestrate/scripts/poll-cycle.sh
Executable file
@@ -0,0 +1,257 @@
#!/usr/bin/env bash
# poll-cycle.sh — Single orchestrator poll cycle
#
# Reads ~/.claude/orchestrator-state.json, classifies each agent, updates state,
# and outputs a JSON array of actions for Claude to take.
#
# Usage: poll-cycle.sh
# Output (stdout): JSON array of action objects
#   [{ "window": "work:0", "action": "kick|approve|none", "state": "...",
#      "worktree": "...", "objective": "...", "reason": "..." }]
#
# The state file is updated in-place (atomic write via .tmp).

set -euo pipefail

STATE_FILE="${ORCHESTRATOR_STATE_FILE:-$HOME/.claude/orchestrator-state.json}"
SCRIPTS_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
CLASSIFY="$SCRIPTS_DIR/classify-pane.sh"

# Cross-platform md5: always outputs just the hex digest
md5_hash() {
  if command -v md5sum >/dev/null 2>&1; then
    md5sum | awk '{print $1}'
  else
    md5 | awk '{print $NF}'
  fi
}

# Clean up temp file on any exit (avoids stale .tmp if jq write fails)
trap 'rm -f "${STATE_FILE}.tmp"' EXIT

# Ensure state file exists
if [ ! -f "$STATE_FILE" ]; then
  echo '{"active":false,"agents":[]}' > "$STATE_FILE"
fi

# Validate JSON upfront before any jq reads that run under set -e.
# A truncated/corrupt file (e.g. from a SIGKILL mid-write) would otherwise
# abort the script at the ACTIVE read below without emitting any JSON output.
if ! jq -e '.' "$STATE_FILE" >/dev/null 2>&1; then
  echo "State file parse error — check $STATE_FILE" >&2
  echo "[]"
  exit 0
fi

ACTIVE=$(jq -r '.active // false' "$STATE_FILE")
if [ "$ACTIVE" != "true" ]; then
  echo "[]"
  exit 0
fi

NOW=$(date +%s)
IDLE_THRESHOLD=$(jq -r '.idle_threshold_seconds // 300' "$STATE_FILE")

ACTIONS="[]"
UPDATED_AGENTS="[]"

# Read agents as newline-delimited JSON objects.
# jq exits non-zero when .agents[] has no matches on an empty array, which is valid —
# so we suppress that exit code and separately validate the file is well-formed JSON.
if ! AGENTS_JSON=$(jq -e -c '.agents // empty | .[]' "$STATE_FILE" 2>/dev/null); then
  if ! jq -e '.' "$STATE_FILE" > /dev/null 2>&1; then
    echo "State file parse error — check $STATE_FILE" >&2
  fi
  echo "[]"
  exit 0
fi

if [ -z "$AGENTS_JSON" ]; then
  echo "[]"
  exit 0
fi

while IFS= read -r agent; do
  [ -z "$agent" ] && continue

  # Use // "" defaults so a single malformed field doesn't abort the whole cycle
  WINDOW=$(echo "$agent" | jq -r '.window // ""')
  WORKTREE=$(echo "$agent" | jq -r '.worktree // ""')
  OBJECTIVE=$(echo "$agent" | jq -r '.objective // ""')
  STATE=$(echo "$agent" | jq -r '.state // "running"')
  LAST_HASH=$(echo "$agent" | jq -r '.last_output_hash // ""')
  IDLE_SINCE=$(echo "$agent" | jq -r '.idle_since // 0')
  REVISION_COUNT=$(echo "$agent" | jq -r '.revision_count // 0')

  # Validate window format to prevent tmux target injection.
  # Allow session:window (numeric or named) and session:window.pane
  if ! [[ "$WINDOW" =~ ^[a-zA-Z0-9_.-]+:[a-zA-Z0-9_.-]+(\.[0-9]+)?$ ]]; then
    echo "Skipping agent with invalid window value: $WINDOW" >&2
    UPDATED_AGENTS=$(echo "$UPDATED_AGENTS" | jq --argjson a "$agent" '. + [$a]')
    continue
  fi

  # Pass-through terminal-state agents
  if [[ "$STATE" == "done" || "$STATE" == "escalated" || "$STATE" == "complete" || "$STATE" == "pending_evaluation" ]]; then
    UPDATED_AGENTS=$(echo "$UPDATED_AGENTS" | jq --argjson a "$agent" '. + [$a]')
    continue
  fi

  # Classify pane.
  # classify-pane.sh always emits JSON before exit (even on error), so using
  # "|| echo '...'" would concatenate two JSON objects when it exits non-zero.
  # Use "|| true" inside the substitution so set -euo pipefail does not abort
  # the poll cycle when classify exits with a non-zero status code.
  CLASSIFICATION=$("$CLASSIFY" "$WINDOW" 2>/dev/null || true)
  [ -z "$CLASSIFICATION" ] && CLASSIFICATION='{"state":"error","reason":"classify failed","pane_cmd":"unknown"}'

  PANE_STATE=$(echo "$CLASSIFICATION" | jq -r '.state')
  PANE_REASON=$(echo "$CLASSIFICATION" | jq -r '.reason')

  # Capture full pane output once — used for hash (stuck detection) and checkpoint parsing.
  # Use -S -500 to get the last ~500 lines of scrollback so checkpoints aren't missed.
  RAW=$(tmux capture-pane -t "$WINDOW" -p -S -500 2>/dev/null || echo "")

  # --- Checkpoint tracking ---
  # Parse any "CHECKPOINT:<step>" lines the agent has output and merge into state file.
  # The agent writes these as it completes each required step so verify-complete.sh can gate recycling.
  EXISTING_CPS=$(echo "$agent" | jq -c '.checkpoints // []')
  NEW_CHECKPOINTS_JSON="$EXISTING_CPS"
  if [ -n "$RAW" ]; then
    FOUND_CPS=$(echo "$RAW" \
      | grep -oE "CHECKPOINT:[a-zA-Z0-9_-]+" \
      | sed 's/CHECKPOINT://' \
      | sort -u \
      | jq -R . | jq -s . 2>/dev/null || echo "[]")
    NEW_CHECKPOINTS_JSON=$(jq -n \
      --argjson existing "$EXISTING_CPS" \
      --argjson found "$FOUND_CPS" \
      '($existing + $found) | unique' 2>/dev/null || echo "$EXISTING_CPS")
  fi

  # Compute content hash for stuck-detection (only for running agents)
  CURRENT_HASH=""
  if [[ "$PANE_STATE" == "running" ]] && [ -n "$RAW" ]; then
    CURRENT_HASH=$(echo "$RAW" | tail -20 | md5_hash)
  fi

  NEW_STATE="$STATE"
  NEW_IDLE_SINCE="$IDLE_SINCE"
  NEW_REVISION_COUNT="$REVISION_COUNT"
  ACTION="none"
  REASON="$PANE_REASON"

  case "$PANE_STATE" in
    complete)
      # Agent output ORCHESTRATOR:DONE — mark pending_evaluation so orchestrator handles it.
      # run-loop does NOT verify or notify; orchestrator's background poll picks this up.
      NEW_STATE="pending_evaluation"
      ACTION="complete"  # run-loop logs it but takes no action
      ;;
    waiting_approval)
      NEW_STATE="waiting_approval"
      ACTION="approve"
      ;;
    idle)
      # Agent process has exited — needs restart
      NEW_STATE="idle"
      ACTION="kick"
      REASON="agent exited (shell is foreground)"
      NEW_REVISION_COUNT=$(( REVISION_COUNT + 1 ))
      NEW_IDLE_SINCE=$NOW
      if [ "$NEW_REVISION_COUNT" -ge 3 ]; then
        NEW_STATE="escalated"
        ACTION="none"
        REASON="escalated after ${NEW_REVISION_COUNT} kicks — needs human attention"
      fi
      ;;
    running)
      # Clear idle_since only when transitioning from idle (agent was kicked and
      # restarted). Do NOT reset for stuck — idle_since must persist across polls
      # so STUCK_DURATION can accumulate and trigger escalation.
      # Also update the local IDLE_SINCE so the hash-stability check below uses
      # the reset value on this same poll, not the stale kick timestamp.
      if [[ "$STATE" == "idle" ]]; then
        NEW_IDLE_SINCE=0
        IDLE_SINCE=0
      fi
      # Check if hash has been stable (agent may be stuck mid-task)
      if [ -n "$CURRENT_HASH" ] && [ "$CURRENT_HASH" = "$LAST_HASH" ] && [ "$LAST_HASH" != "" ]; then
        if [ "$IDLE_SINCE" = "0" ] || [ "$IDLE_SINCE" = "null" ]; then
          NEW_IDLE_SINCE=$NOW
        else
          STUCK_DURATION=$(( NOW - IDLE_SINCE ))
          if [ "$STUCK_DURATION" -gt "$IDLE_THRESHOLD" ]; then
            NEW_REVISION_COUNT=$(( REVISION_COUNT + 1 ))
            NEW_IDLE_SINCE=$NOW
            if [ "$NEW_REVISION_COUNT" -ge 3 ]; then
              NEW_STATE="escalated"
              ACTION="none"
              REASON="escalated after ${NEW_REVISION_COUNT} kicks — needs human attention"
            else
              NEW_STATE="stuck"
              ACTION="kick"
              REASON="output unchanged for ${STUCK_DURATION}s (threshold: ${IDLE_THRESHOLD}s)"
            fi
          fi
        fi
      else
        # Only reset the idle timer when we have a valid hash comparison (pane
        # capture succeeded). If CURRENT_HASH is empty (tmux capture-pane failed),
        # preserve existing timers so stuck detection is not inadvertently reset.
        if [ -n "$CURRENT_HASH" ]; then
          NEW_STATE="running"
          NEW_IDLE_SINCE=0
        fi
      fi
      ;;
    error)
      REASON="classify error: $PANE_REASON"
      ;;
  esac

  # Build updated agent record (ensure idle_since and revision_count are numeric)
  # Use || true on each jq call so a malformed field skips this agent rather than
  # aborting the entire poll cycle under set -e.
  UPDATED_AGENT=$(echo "$agent" | jq \
    --arg state "$NEW_STATE" \
    --arg hash "$CURRENT_HASH" \
    --argjson now "$NOW" \
    --arg idle_since "$NEW_IDLE_SINCE" \
    --arg revision_count "$NEW_REVISION_COUNT" \
    --argjson checkpoints "$NEW_CHECKPOINTS_JSON" \
    '.state = $state
     | .last_output_hash = (if $hash == "" then .last_output_hash else $hash end)
     | .last_seen_at = $now
     | .idle_since = ($idle_since | tonumber)
     | .revision_count = ($revision_count | tonumber)
     | .checkpoints = $checkpoints' 2>/dev/null) || {
    echo "Warning: failed to build updated agent for window $WINDOW — keeping original" >&2
    UPDATED_AGENTS=$(echo "$UPDATED_AGENTS" | jq --argjson a "$agent" '. + [$a]')
    continue
  }

  UPDATED_AGENTS=$(echo "$UPDATED_AGENTS" | jq --argjson a "$UPDATED_AGENT" '. + [$a]')

  # Add action if needed
  if [ "$ACTION" != "none" ]; then
    ACTION_OBJ=$(jq -n \
      --arg window "$WINDOW" \
      --arg action "$ACTION" \
      --arg state "$NEW_STATE" \
      --arg worktree "$WORKTREE" \
      --arg objective "$OBJECTIVE" \
      --arg reason "$REASON" \
      '{window:$window, action:$action, state:$state, worktree:$worktree, objective:$objective, reason:$reason}')
    ACTIONS=$(echo "$ACTIONS" | jq --argjson a "$ACTION_OBJ" '. + [$a]')
  fi

done <<< "$AGENTS_JSON"

# Atomic state file update
jq --argjson agents "$UPDATED_AGENTS" \
   --argjson now "$NOW" \
   '.agents = $agents | .last_poll_at = $now' \
   "$STATE_FILE" > "${STATE_FILE}.tmp" && mv "${STATE_FILE}.tmp" "$STATE_FILE"

echo "$ACTIONS"
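The checkpoint-extraction step can be exercised on a canned pane capture; the agent output below is fabricated for illustration.

```shell
# Deduplicate CHECKPOINT:<step> markers the same way poll-cycle.sh does.
RAW='working on PR...
CHECKPOINT:pr-address
running tests
CHECKPOINT:pr-test
CHECKPOINT:pr-address'

printf '%s\n' "$RAW" \
  | grep -oE "CHECKPOINT:[a-zA-Z0-9_-]+" \
  | sed 's/CHECKPOINT://' \
  | sort -u
# → pr-address
# → pr-test
```

Repeated markers collapse under `sort -u`, so re-reading the same scrollback on every poll is idempotent.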
32
.claude/skills/orchestrate/scripts/recycle-agent.sh
Executable file
@@ -0,0 +1,32 @@
#!/usr/bin/env bash
# recycle-agent.sh — kill a tmux window and restore the worktree to its spare branch
#
# Usage: recycle-agent.sh WINDOW WORKTREE_PATH SPARE_BRANCH
#   WINDOW        — tmux target, e.g. autogpt1:3
#   WORKTREE_PATH — absolute path to the git worktree
#   SPARE_BRANCH  — branch to restore, e.g. spare/6
#
# Stdout: one status line

set -euo pipefail

if [ $# -lt 3 ]; then
  echo "Usage: recycle-agent.sh WINDOW WORKTREE_PATH SPARE_BRANCH" >&2
  exit 1
fi

WINDOW="$1"
WORKTREE_PATH="$2"
SPARE_BRANCH="$3"

# Kill the tmux window (ignore error — may already be gone)
tmux kill-window -t "$WINDOW" 2>/dev/null || true

# Restore to spare branch: abort any in-progress operation, then clean
git -C "$WORKTREE_PATH" rebase --abort 2>/dev/null || true
git -C "$WORKTREE_PATH" merge --abort 2>/dev/null || true
git -C "$WORKTREE_PATH" reset --hard HEAD 2>/dev/null
git -C "$WORKTREE_PATH" clean -fd 2>/dev/null
git -C "$WORKTREE_PATH" checkout "$SPARE_BRANCH"

echo "Recycled: $(basename "$WORKTREE_PATH") → $SPARE_BRANCH (window $WINDOW closed)"
164
.claude/skills/orchestrate/scripts/run-loop.sh
Executable file
@@ -0,0 +1,164 @@
#!/usr/bin/env bash
# run-loop.sh — Mechanical babysitter for the agent fleet (runs in its own tmux window)
#
# Handles ONLY two things that need no intelligence:
#   idle    → restart claude using --resume SESSION_ID (or --continue) to restore context
#   approve → auto-approve safe dialogs, press Enter on numbered-option dialogs
#
# Everything else — ORCHESTRATOR:DONE, verification, /pr-test, final evaluation,
# marking done, deciding to close windows — is the orchestrating Claude's job.
# poll-cycle.sh sets state to pending_evaluation when ORCHESTRATOR:DONE is detected;
# the orchestrator's background poll loop handles it from there.
#
# Usage: run-loop.sh
# Env: POLL_INTERVAL (default: 30), ORCHESTRATOR_STATE_FILE

set -euo pipefail

# Copy scripts to a stable location outside the repo so they survive branch
# checkouts (e.g. recycle-agent.sh switching spare/N back into this worktree
# would wipe .claude/skills/orchestrate/scripts if the skill only exists on the
# current branch).
_ORIGIN_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
STABLE_SCRIPTS_DIR="$HOME/.claude/orchestrator/scripts"
mkdir -p "$STABLE_SCRIPTS_DIR"
cp "$_ORIGIN_DIR"/*.sh "$STABLE_SCRIPTS_DIR/"
chmod +x "$STABLE_SCRIPTS_DIR"/*.sh
SCRIPTS_DIR="$STABLE_SCRIPTS_DIR"

STATE_FILE="${ORCHESTRATOR_STATE_FILE:-$HOME/.claude/orchestrator-state.json}"
POLL_INTERVAL="${POLL_INTERVAL:-30}"

# ---------------------------------------------------------------------------
# update_state WINDOW FIELD VALUE
# ---------------------------------------------------------------------------
update_state() {
  local window="$1" field="$2" value="$3"
  jq --arg w "$window" --arg f "$field" --arg v "$value" \
    '.agents |= map(if .window == $w then .[$f] = $v else . end)' \
    "$STATE_FILE" > "${STATE_FILE}.tmp" && mv "${STATE_FILE}.tmp" "$STATE_FILE"
}

update_state_int() {
  local window="$1" field="$2" value="$3"
  jq --arg w "$window" --arg f "$field" --argjson v "$value" \
    '.agents |= map(if .window == $w then .[$f] = $v else . end)' \
    "$STATE_FILE" > "${STATE_FILE}.tmp" && mv "${STATE_FILE}.tmp" "$STATE_FILE"
}

agent_field() {
  jq -r --arg w "$1" --arg f "$2" \
    '.agents[] | select(.window == $w) | .[$f] // ""' \
    "$STATE_FILE" 2>/dev/null
}

# ---------------------------------------------------------------------------
# wait_for_prompt WINDOW — wait up to 60s for Claude's ❯ prompt
# ---------------------------------------------------------------------------
wait_for_prompt() {
  local window="$1"
  for i in $(seq 1 60); do
    local cmd pane
    cmd=$(tmux display-message -t "$window" -p '#{pane_current_command}' 2>/dev/null || echo "")
    pane=$(tmux capture-pane -t "$window" -p 2>/dev/null || echo "")
    if echo "$pane" | grep -q "Enter to confirm"; then
      tmux send-keys -t "$window" Down Enter; sleep 2; continue
    fi
    [[ "$cmd" == "node" ]] && echo "$pane" | grep -q "❯" && return 0
    sleep 1
  done
  return 1  # timed out
}

# ---------------------------------------------------------------------------
# handle_kick WINDOW STATE — only for idle (crashed) agents, not stuck
# ---------------------------------------------------------------------------
handle_kick() {
  local window="$1" state="$2"
  [[ "$state" != "idle" ]] && return  # stuck agents handled by supervisor

  local worktree_path session_id
  worktree_path=$(agent_field "$window" "worktree_path")
  session_id=$(agent_field "$window" "session_id")

  echo "[$(date +%H:%M:%S)] KICK restart $window — agent exited, resuming session"

  # Resume the exact session so the agent retains full context — no need to re-send objective
  if [ -n "$session_id" ]; then
    tmux send-keys -t "$window" "cd '${worktree_path}' && claude --resume '${session_id}' --permission-mode bypassPermissions" Enter
  else
    tmux send-keys -t "$window" "cd '${worktree_path}' && claude --continue --permission-mode bypassPermissions" Enter
  fi

  wait_for_prompt "$window" || echo "[$(date +%H:%M:%S)] KICK WARNING $window — timed out waiting for ❯"
}

# ---------------------------------------------------------------------------
# handle_approve WINDOW — auto-approve dialogs that need no judgment
# ---------------------------------------------------------------------------
handle_approve() {
  local window="$1"
  local pane_tail
  pane_tail=$(tmux capture-pane -t "$window" -p 2>/dev/null | tail -3 || echo "")

  # Settings error dialog at startup
  if echo "$pane_tail" | grep -q "Enter to confirm"; then
    echo "[$(date +%H:%M:%S)] APPROVE dialog $window — settings error"
    tmux send-keys -t "$window" Down Enter
    return
  fi

  # Numbered-option dialog (e.g. "Do you want to make this edit?")
  # ❯ is already on option 1 (Yes) — Enter confirms it
  if echo "$pane_tail" | grep -qE "❯\s*1\." || echo "$pane_tail" | grep -q "Esc to cancel"; then
    echo "[$(date +%H:%M:%S)] APPROVE edit $window"
    tmux send-keys -t "$window" "" Enter
    return
  fi

  # y/n prompt for safe operations
  if echo "$pane_tail" | grep -qiE "(^git |^npm |^pnpm |^poetry |^pytest|^docker |^make |^cargo |^pip |^yarn |curl .*(localhost|127\.0\.0\.1))"; then
    echo "[$(date +%H:%M:%S)] APPROVE safe $window"
    tmux send-keys -t "$window" "y" Enter
    return
  fi

  # Anything else — supervisor handles it, just log
  echo "[$(date +%H:%M:%S)] APPROVE skip $window — unknown dialog, supervisor will handle"
}

# ---------------------------------------------------------------------------
# Main loop
# ---------------------------------------------------------------------------
echo "[$(date +%H:%M:%S)] run-loop started (mechanical only, poll every ${POLL_INTERVAL}s)"
echo "[$(date +%H:%M:%S)] Supervisor: orchestrating Claude session (not a separate window)"
echo "---"

while true; do
  if ! jq -e '.active == true' "$STATE_FILE" >/dev/null 2>&1; then
    echo "[$(date +%H:%M:%S)] active=false — exiting."
    exit 0
  fi

  ACTIONS=$("$SCRIPTS_DIR/poll-cycle.sh" 2>/dev/null || echo "[]")
  KICKED=0; DONE=0

  while IFS= read -r action; do
    [ -z "$action" ] && continue
    WINDOW=$(echo "$action" | jq -r '.window // ""')
    ACTION=$(echo "$action" | jq -r '.action // ""')
    STATE=$(echo "$action" | jq -r '.state // ""')

    case "$ACTION" in
      kick)     handle_kick "$WINDOW" "$STATE" || true; KICKED=$(( KICKED + 1 )) ;;
      approve)  handle_approve "$WINDOW" || true ;;
      complete) DONE=$(( DONE + 1 )) ;;  # poll-cycle already set state=pending_evaluation; orchestrator handles
    esac
  done < <(echo "$ACTIONS" | jq -c '.[]' 2>/dev/null || true)

  RUNNING=$(jq '[.agents[] | select(.state | test("running|stuck|waiting_approval|idle"))] | length' \
    "$STATE_FILE" 2>/dev/null || echo 0)

  echo "[$(date +%H:%M:%S)] Poll — ${RUNNING} running ${KICKED} kicked ${DONE} recycled"
  sleep "$POLL_INTERVAL"
done
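The safe-command allowlist in handle_approve is a single grep over the pane tail; a standalone sketch, with `is_safe_cmd` as a hypothetical wrapper name:

```shell
# Hypothetical wrapper around the same allowlist regex run-loop.sh greps for.
# Matches read-mostly tooling prefixes and curl against loopback addresses only.
is_safe_cmd() {
  echo "$1" | grep -qiE "(^git |^npm |^pnpm |^poetry |^pytest|^docker |^make |^cargo |^pip |^yarn |curl .*(localhost|127\.0\.0\.1))"
}
```

Anything the allowlist does not match falls through to the "unknown dialog" branch, leaving the decision to the supervisor.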
122
.claude/skills/orchestrate/scripts/spawn-agent.sh
Executable file
122
.claude/skills/orchestrate/scripts/spawn-agent.sh
Executable file
@@ -0,0 +1,122 @@
|
||||
#!/usr/bin/env bash
|
||||
# spawn-agent.sh — create tmux window, checkout branch, launch claude, send task
|
||||
#
|
||||
# Usage: spawn-agent.sh SESSION WORKTREE_PATH SPARE_BRANCH NEW_BRANCH OBJECTIVE [PR_NUMBER] [STEPS...]
|
||||
# SESSION — tmux session name, e.g. autogpt1
|
||||
# WORKTREE_PATH — absolute path to the git worktree
|
||||
# SPARE_BRANCH — spare branch being replaced, e.g. spare/6 (saved for recycle)
|
||||
# NEW_BRANCH — task branch to create, e.g. feat/my-feature
|
||||
# OBJECTIVE — task description sent to the agent
|
||||
# PR_NUMBER — (optional) GitHub PR number for completion verification
|
||||
# STEPS... — (optional) required checkpoint names, e.g. pr-address pr-test
|
||||
#
|
||||
# Stdout: SESSION:WINDOW_INDEX (nothing else — callers rely on this)
|
||||
# Exit non-zero on failure.
|
||||
|
||||
set -euo pipefail
|
||||
|
||||
if [ $# -lt 5 ]; then
|
||||
echo "Usage: spawn-agent.sh SESSION WORKTREE_PATH SPARE_BRANCH NEW_BRANCH OBJECTIVE [PR_NUMBER] [STEPS...]" >&2
|
||||
exit 1
|
||||
fi
|
||||
|
||||
SESSION="$1"
|
||||
WORKTREE_PATH="$2"
|
||||
SPARE_BRANCH="$3"
|
||||
NEW_BRANCH="$4"
|
||||
OBJECTIVE="$5"
|
||||
PR_NUMBER="${6:-}"
|
||||
STEPS=("${@:7}")
|
||||
WORKTREE_NAME=$(basename "$WORKTREE_PATH")
|
||||
STATE_FILE="${ORCHESTRATOR_STATE_FILE:-$HOME/.claude/orchestrator-state.json}"
|
||||
|
||||
# Generate a stable session ID so this agent's Claude session can always be resumed:
|
||||
# claude --resume $SESSION_ID --permission-mode bypassPermissions
|
||||
SESSION_ID=$(uuidgen 2>/dev/null || python3 -c "import uuid; print(uuid.uuid4())")
|
||||
|
||||
# Create (or switch to) the task branch
|
||||
git -C "$WORKTREE_PATH" checkout -b "$NEW_BRANCH" 2>/dev/null \
|
||||
|| git -C "$WORKTREE_PATH" checkout "$NEW_BRANCH"
|
||||
|
||||
# Open a new named tmux window; capture its numeric index
|
||||
WIN_IDX=$(tmux new-window -t "$SESSION" -n "$WORKTREE_NAME" -P -F '#{window_index}')
|
||||
WINDOW="${SESSION}:${WIN_IDX}"
|
||||
|
||||
# Append the initial agent record to the state file so subsequent jq updates find it.
|
||||
# This must happen before the pr_number/steps update below.
|
||||
if [ -f "$STATE_FILE" ]; then
|
||||
NOW=$(date +%s)
|
||||
jq --arg window "$WINDOW" \
|
||||
--arg worktree "$WORKTREE_NAME" \
|
||||
--arg worktree_path "$WORKTREE_PATH" \
|
||||
--arg spare_branch "$SPARE_BRANCH" \
|
||||
--arg branch "$NEW_BRANCH" \
|
||||
--arg objective "$OBJECTIVE" \
|
||||
--arg session_id "$SESSION_ID" \
|
||||
--argjson now "$NOW" \
|
||||
'.agents += [{
|
||||
"window": $window,
|
||||
"worktree": $worktree,
|
||||
"worktree_path": $worktree_path,
|
||||
"spare_branch": $spare_branch,
|
||||
"branch": $branch,
|
||||
"objective": $objective,
|
||||
"session_id": $session_id,
|
||||
"state": "running",
|
||||
"checkpoints": [],
|
||||
"last_output_hash": "",
|
||||
"last_seen_at": $now,
|
||||
"spawned_at": $now,
|
||||
"idle_since": 0,
|
||||
"revision_count": 0,
|
||||
"last_rebriefed_at": 0
|
||||
}]' "$STATE_FILE" > "${STATE_FILE}.tmp" && mv "${STATE_FILE}.tmp" "$STATE_FILE"
|
||||
fi
|
||||
|
||||
# Store pr_number + steps in state file if provided (enables verify-complete.sh).
|
||||
# The agent record was appended above so the jq select now finds it.
|
||||
if [ -n "$PR_NUMBER" ] && [ -f "$STATE_FILE" ]; then
|
||||
if [ "${#STEPS[@]}" -gt 0 ]; then
|
||||
STEPS_JSON=$(printf '%s\n' "${STEPS[@]}" | jq -R . | jq -s .)
|
||||
else
|
||||
STEPS_JSON='[]'
|
||||
fi
|
||||
jq --arg w "$WINDOW" --arg pr "$PR_NUMBER" --argjson steps "$STEPS_JSON" \
|
||||
'.agents |= map(if .window == $w then . + {pr_number: $pr, steps: $steps, checkpoints: []} else . end)' \
|
||||
"$STATE_FILE" > "${STATE_FILE}.tmp" && mv "${STATE_FILE}.tmp" "$STATE_FILE"
|
||||
fi
|
||||
|
||||
# Launch claude with a stable session ID so it can always be resumed after a crash:
|
||||
# claude --resume SESSION_ID --permission-mode bypassPermissions
|
||||
tmux send-keys -t "$WINDOW" "cd '${WORKTREE_PATH}' && claude --permission-mode bypassPermissions --session-id '${SESSION_ID}'" Enter
|
||||
|
||||
# Wait up to 60s for claude to be fully interactive:
|
||||
# both pane_current_command == 'node' AND the '❯' prompt is visible.
|
||||
PROMPT_FOUND=false
|
||||
for i in $(seq 1 60); do
|
||||
CMD=$(tmux display-message -t "$WINDOW" -p '#{pane_current_command}' 2>/dev/null || echo "")
|
||||
PANE=$(tmux capture-pane -t "$WINDOW" -p 2>/dev/null || echo "")
|
||||
if echo "$PANE" | grep -q "Enter to confirm"; then
|
||||
tmux send-keys -t "$WINDOW" Down Enter
|
||||
sleep 2
|
||||
continue
|
||||
fi
|
||||
if [[ "$CMD" == "node" ]] && echo "$PANE" | grep -q "❯"; then
|
||||
PROMPT_FOUND=true
|
||||
break
|
||||
fi
|
||||
sleep 1
|
||||
done
|
||||
|
||||
if ! $PROMPT_FOUND; then
|
||||
echo "[spawn-agent] WARNING: timed out waiting for ❯ prompt on $WINDOW — sending objective anyway" >&2
|
||||
fi
|
||||
|
||||
# Send the task. Split text and Enter — if combined, Enter can fire before the string
|
||||
# is fully buffered, leaving the message stuck as "[Pasted text +N lines]" unsent.
|
||||
tmux send-keys -t "$WINDOW" "${OBJECTIVE} Output each completed step as CHECKPOINT:<step-name>. When ALL steps are done, output ORCHESTRATOR:DONE on its own line."
|
||||
sleep 0.3
|
||||
tmux send-keys -t "$WINDOW" Enter
|
||||
|
||||
# Only output the window address — nothing else (callers parse this)
|
||||
echo "$WINDOW"
|
||||
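The script ends by printing only the tmux window address, since callers parse that stdout contract (`SESSION:WINDOW`, per the spawn-agent.sh header). A minimal caller-side sketch of that parsing, using a hypothetical helper name and an illustrative address:

```shell
# Hypothetical caller sketch: splits a "SESSION:WINDOW" address as printed by
# spawn-agent.sh. The function name and example address are illustrative only.
parse_window_address() {
  local addr="$1"
  # tmux window targets are SESSION:WINDOW; split on the first ':'
  local session="${addr%%:*}"
  local window="${addr#*:}"
  printf '%s %s\n' "$session" "$window"
}

parse_window_address "agents:3"   # prints: agents 3
```

Keeping stdout to exactly one line is what makes this split safe; any extra output from spawn-agent.sh would corrupt the caller's parse.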
43 .claude/skills/orchestrate/scripts/status.sh Executable file
@@ -0,0 +1,43 @@
#!/usr/bin/env bash
# status.sh — print orchestrator status: state file summary + live tmux pane commands
#
# Usage: status.sh
# Reads: ~/.claude/orchestrator-state.json

set -euo pipefail

STATE_FILE="${ORCHESTRATOR_STATE_FILE:-$HOME/.claude/orchestrator-state.json}"

if [ ! -f "$STATE_FILE" ] || ! jq -e '.' "$STATE_FILE" >/dev/null 2>&1; then
  echo "No orchestrator state found at $STATE_FILE"
  exit 0
fi

# Header: active status, session, thresholds, last poll
jq -r '
  "=== Orchestrator [\(if .active then "RUNNING" else "STOPPED" end)] ===",
  "Session: \(.tmux_session // "unknown") | Idle threshold: \(.idle_threshold_seconds // 300)s",
  "Last poll: \(if (.last_poll_at // 0) == 0 then "never" else (.last_poll_at | strftime("%H:%M:%S")) end)",
  ""
' "$STATE_FILE"

# Each agent: state, window, worktree/branch, truncated objective
AGENT_COUNT=$(jq '.agents | length' "$STATE_FILE")
if [ "$AGENT_COUNT" -eq 0 ]; then
  echo "  (no agents registered)"
else
  jq -r '
    .agents[] |
    "  [\(.state | ascii_upcase)] \(.window)  \(.worktree)/\(.branch)",
    "      \(.objective // "" | .[0:70])"
  ' "$STATE_FILE"
fi

echo ""

# Live pane_current_command for non-done agents
while IFS= read -r WINDOW; do
  [ -z "$WINDOW" ] && continue
  CMD=$(tmux display-message -t "$WINDOW" -p '#{pane_current_command}' 2>/dev/null || echo "unreachable")
  echo "  $WINDOW live: $CMD"
done < <(jq -r '.agents[] | select(.state != "done") | .window' "$STATE_FILE" 2>/dev/null || true)
180 .claude/skills/orchestrate/scripts/verify-complete.sh Normal file
@@ -0,0 +1,180 @@
#!/usr/bin/env bash
# verify-complete.sh — verify a PR task is truly done before marking the agent done
#
# Check order matters:
#   1. Checkpoints — did the agent do all required steps?
#   2. CI complete — no pending (bots post comments AFTER their check runs, must wait)
#   3. CI passing — no failures (agent must fix before done)
#   4. spawned_at — a new CI run was triggered after agent spawned (proves real work)
#   5. Unresolved threads — checked AFTER CI so bot-posted comments are included
#   6. CHANGES_REQUESTED — checked AFTER CI so bot reviews are included
#
# Usage: verify-complete.sh WINDOW
# Exit 0 = verified complete; exit 1 = not complete (stderr has reason)

set -euo pipefail

WINDOW="$1"
STATE_FILE="${ORCHESTRATOR_STATE_FILE:-$HOME/.claude/orchestrator-state.json}"

PR_NUMBER=$(jq -r --arg w "$WINDOW" '.agents[] | select(.window == $w) | .pr_number // ""' "$STATE_FILE" 2>/dev/null)
STEPS=$(jq -r --arg w "$WINDOW" '.agents[] | select(.window == $w) | .steps // [] | .[]' "$STATE_FILE" 2>/dev/null || true)
CHECKPOINTS=$(jq -r --arg w "$WINDOW" '.agents[] | select(.window == $w) | .checkpoints // [] | .[]' "$STATE_FILE" 2>/dev/null || true)
WORKTREE_PATH=$(jq -r --arg w "$WINDOW" '.agents[] | select(.window == $w) | .worktree_path // ""' "$STATE_FILE" 2>/dev/null)
BRANCH=$(jq -r --arg w "$WINDOW" '.agents[] | select(.window == $w) | .branch // ""' "$STATE_FILE" 2>/dev/null)
SPAWNED_AT=$(jq -r --arg w "$WINDOW" '.agents[] | select(.window == $w) | .spawned_at // "0"' "$STATE_FILE" 2>/dev/null || echo "0")

# No PR number = cannot verify
if [ -z "$PR_NUMBER" ]; then
  echo "NOT COMPLETE: no pr_number in state — set pr_number or mark done manually" >&2
  exit 1
fi

# --- Check 1: all required steps are checkpointed ---
MISSING=""
while IFS= read -r step; do
  [ -z "$step" ] && continue
  if ! echo "$CHECKPOINTS" | grep -qFx "$step"; then
    MISSING="$MISSING $step"
  fi
done <<< "$STEPS"

if [ -n "$MISSING" ]; then
  echo "NOT COMPLETE: missing checkpoints:$MISSING on PR #$PR_NUMBER" >&2
  exit 1
fi

# Resolve repo for all GitHub checks below
REPO=$(jq -r '.repo // ""' "$STATE_FILE" 2>/dev/null || echo "")
if [ -z "$REPO" ] && [ -n "$WORKTREE_PATH" ] && [ -d "$WORKTREE_PATH" ]; then
  REPO=$(git -C "$WORKTREE_PATH" remote get-url origin 2>/dev/null \
    | sed 's|.*github\.com[:/]||; s|\.git$||' || echo "")
fi

if [ -z "$REPO" ]; then
  echo "Warning: cannot resolve repo — skipping CI/thread checks" >&2
  echo "VERIFIED: PR #$PR_NUMBER — checkpoints ✓ (CI/thread checks skipped — no repo)"
  exit 0
fi

CI_BUCKETS=$(gh pr checks "$PR_NUMBER" --repo "$REPO" --json bucket 2>/dev/null || echo "[]")

# --- Check 2: CI fully complete — no pending checks ---
# Pending checks MUST finish before we check threads/reviews:
# bots (Seer, Check PR Status, etc.) post comments and CHANGES_REQUESTED AFTER their CI check runs.
PENDING=$(echo "$CI_BUCKETS" | jq '[.[] | select(.bucket == "pending")] | length' 2>/dev/null || echo "0")
if [ "$PENDING" -gt 0 ]; then
  PENDING_NAMES=$(gh pr checks "$PR_NUMBER" --repo "$REPO" --json bucket,name 2>/dev/null \
    | jq -r '[.[] | select(.bucket == "pending") | .name] | join(", ")' 2>/dev/null || echo "unknown")
  echo "NOT COMPLETE: $PENDING CI checks still pending on PR #$PR_NUMBER ($PENDING_NAMES)" >&2
  exit 1
fi

# --- Check 3: CI passing — no failures ---
FAILING=$(echo "$CI_BUCKETS" | jq '[.[] | select(.bucket == "fail")] | length' 2>/dev/null || echo "0")
if [ "$FAILING" -gt 0 ]; then
  FAILING_NAMES=$(gh pr checks "$PR_NUMBER" --repo "$REPO" --json bucket,name 2>/dev/null \
    | jq -r '[.[] | select(.bucket == "fail") | .name] | join(", ")' 2>/dev/null || echo "unknown")
  echo "NOT COMPLETE: $FAILING failing CI checks on PR #$PR_NUMBER ($FAILING_NAMES)" >&2
  exit 1
fi

# --- Check 4: a new CI run was triggered AFTER the agent spawned ---
if [ -n "$BRANCH" ] && [ "${SPAWNED_AT:-0}" -gt 0 ]; then
  LATEST_RUN_AT=$(gh run list --repo "$REPO" --branch "$BRANCH" \
    --json createdAt --limit 1 2>/dev/null | jq -r '.[0].createdAt // ""')
  if [ -n "$LATEST_RUN_AT" ]; then
    if date --version >/dev/null 2>&1; then
      LATEST_RUN_EPOCH=$(date -d "$LATEST_RUN_AT" "+%s" 2>/dev/null || echo "0")
    else
      LATEST_RUN_EPOCH=$(TZ=UTC date -j -f "%Y-%m-%dT%H:%M:%SZ" "$LATEST_RUN_AT" "+%s" 2>/dev/null || echo "0")
    fi
    if [ "$LATEST_RUN_EPOCH" -le "$SPAWNED_AT" ]; then
      echo "NOT COMPLETE: latest CI run on $BRANCH predates agent spawn — agent may not have pushed yet" >&2
      exit 1
    fi
  fi
fi

OWNER=$(echo "$REPO" | cut -d/ -f1)
REPONAME=$(echo "$REPO" | cut -d/ -f2)

# --- Check 5: no unresolved review threads (checked AFTER CI — bots post after their check) ---
UNRESOLVED=$(gh api graphql -f query="
{ repository(owner: \"${OWNER}\", name: \"${REPONAME}\") {
    pullRequest(number: ${PR_NUMBER}) {
      reviewThreads(first: 50) { nodes { isResolved } }
    }
  }
}
" --jq '[.data.repository.pullRequest.reviewThreads.nodes[] | select(.isResolved == false)] | length' 2>/dev/null || echo "0")

if [ "$UNRESOLVED" -gt 0 ]; then
  echo "NOT COMPLETE: $UNRESOLVED unresolved review threads on PR #$PR_NUMBER" >&2
  exit 1
fi

# --- Check 6: no CHANGES_REQUESTED (checked AFTER CI — bots post reviews after their check) ---
# A CHANGES_REQUESTED review is stale if the latest commit was pushed AFTER the review was submitted.
# Stale reviews (pre-dating the fixing commits) should not block verification.
#
# Fetch commits and latestReviews in a single call and fail closed — if gh fails,
# treat that as NOT COMPLETE rather than silently passing.
# Use latestReviews (not reviews) so each reviewer's latest state is used — superseded
# CHANGES_REQUESTED entries are automatically excluded when the reviewer later approved.
# Note: we intentionally use committedDate (not PR updatedAt) because updatedAt changes on any
# PR activity (bot comments, label changes) which would create false negatives.
PR_REVIEW_METADATA=$(gh pr view "$PR_NUMBER" --repo "$REPO" \
  --json commits,latestReviews 2>/dev/null) || {
  echo "NOT COMPLETE: unable to fetch PR review metadata for PR #$PR_NUMBER" >&2
  exit 1
}

LATEST_COMMIT_DATE=$(jq -r '.commits[-1].committedDate // ""' <<< "$PR_REVIEW_METADATA")
CHANGES_REQUESTED_REVIEWS=$(jq '[.latestReviews[]? | select(.state == "CHANGES_REQUESTED")]' <<< "$PR_REVIEW_METADATA")

BLOCKING_CHANGES_REQUESTED=0
BLOCKING_REQUESTERS=""

if [ -n "$LATEST_COMMIT_DATE" ] && [ "$(echo "$CHANGES_REQUESTED_REVIEWS" | jq length)" -gt 0 ]; then
  if date --version >/dev/null 2>&1; then
    LATEST_COMMIT_EPOCH=$(date -d "$LATEST_COMMIT_DATE" "+%s" 2>/dev/null || echo "0")
  else
    LATEST_COMMIT_EPOCH=$(TZ=UTC date -j -f "%Y-%m-%dT%H:%M:%SZ" "$LATEST_COMMIT_DATE" "+%s" 2>/dev/null || echo "0")
  fi

  while IFS= read -r review; do
    [ -z "$review" ] && continue
    REVIEW_DATE=$(echo "$review" | jq -r '.submittedAt // ""')
    REVIEWER=$(echo "$review" | jq -r '.author.login // "unknown"')
    if [ -z "$REVIEW_DATE" ]; then
      # No submission date — treat as fresh (conservative: blocks verification)
      BLOCKING_CHANGES_REQUESTED=$(( BLOCKING_CHANGES_REQUESTED + 1 ))
      BLOCKING_REQUESTERS="${BLOCKING_REQUESTERS:+$BLOCKING_REQUESTERS, }${REVIEWER}"
    else
      if date --version >/dev/null 2>&1; then
        REVIEW_EPOCH=$(date -d "$REVIEW_DATE" "+%s" 2>/dev/null || echo "0")
      else
        REVIEW_EPOCH=$(TZ=UTC date -j -f "%Y-%m-%dT%H:%M:%SZ" "$REVIEW_DATE" "+%s" 2>/dev/null || echo "0")
      fi
      if [ "$REVIEW_EPOCH" -gt "$LATEST_COMMIT_EPOCH" ]; then
        # Review was submitted AFTER latest commit — still fresh, blocks verification
        BLOCKING_CHANGES_REQUESTED=$(( BLOCKING_CHANGES_REQUESTED + 1 ))
        BLOCKING_REQUESTERS="${BLOCKING_REQUESTERS:+$BLOCKING_REQUESTERS, }${REVIEWER}"
      fi
      # Review submitted BEFORE latest commit — stale, skip
    fi
  done <<< "$(echo "$CHANGES_REQUESTED_REVIEWS" | jq -c '.[]')"
else
  # No commit date or no changes_requested — check raw count as fallback
  BLOCKING_CHANGES_REQUESTED=$(echo "$CHANGES_REQUESTED_REVIEWS" | jq length 2>/dev/null || echo "0")
  BLOCKING_REQUESTERS=$(echo "$CHANGES_REQUESTED_REVIEWS" | jq -r '[.[].author.login] | join(", ")' 2>/dev/null || echo "unknown")
fi

if [ "$BLOCKING_CHANGES_REQUESTED" -gt 0 ]; then
  echo "NOT COMPLETE: CHANGES_REQUESTED (after latest commit) from ${BLOCKING_REQUESTERS} on PR #$PR_NUMBER" >&2
  exit 1
fi

echo "VERIFIED: PR #$PR_NUMBER — checkpoints ✓, CI complete + green, 0 unresolved threads, no CHANGES_REQUESTED"
exit 0
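verify-complete.sh repeats the same GNU/BSD `date` branching in three places to convert ISO-8601 timestamps to epoch seconds. A sketch of that branching factored into one helper; this function is an illustration, not part of the script, and assumes UTC "Z" timestamps as the script does:

```shell
# Hypothetical helper: ISO-8601 UTC timestamp -> epoch seconds, portable
# across GNU date (Linux, supports -d) and BSD date (macOS, needs -j -f).
# Echoes "0" on parse failure, matching the fallback used in verify-complete.sh.
to_epoch() {
  local iso="$1"
  if date --version >/dev/null 2>&1; then
    # GNU date
    date -d "$iso" "+%s" 2>/dev/null || echo "0"
  else
    # BSD date: must spell out the input format explicitly
    TZ=UTC date -j -f "%Y-%m-%dT%H:%M:%SZ" "$iso" "+%s" 2>/dev/null || echo "0"
  fi
}

to_epoch "1970-01-02T00:00:00Z"
```

With a helper like this, each comparison site reduces to `EPOCH=$(to_epoch "$TIMESTAMP")`, which keeps the three call sites from drifting apart.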
@@ -90,10 +90,12 @@ Address comments **one at a time**: fix → commit → push → inline reply →
2. Commit and push the fix
3. Reply **inline** (not as a new top-level comment) referencing the fixing commit — this is what resolves the conversation for bot reviewers (coderabbitai, sentry):

Use a **markdown commit link** so GitHub renders it as a clickable reference. Get the full SHA with `git rev-parse HEAD` after committing:

| Comment type | How to reply |
|---|---|
| Inline review (`pulls/{N}/comments`) | `gh api repos/Significant-Gravitas/AutoGPT/pulls/{N}/comments/{ID}/replies -f body="🤖 Fixed in <commit-sha>: <description>"` |
| Conversation (`issues/{N}/comments`) | `gh api repos/Significant-Gravitas/AutoGPT/issues/{N}/comments -f body="🤖 Fixed in <commit-sha>: <description>"` |
| Inline review (`pulls/{N}/comments`) | `gh api repos/Significant-Gravitas/AutoGPT/pulls/{N}/comments/{ID}/replies -f body="🤖 Fixed in [abc1234](https://github.com/Significant-Gravitas/AutoGPT/commit/FULL_SHA): <description>"` |
| Conversation (`issues/{N}/comments`) | `gh api repos/Significant-Gravitas/AutoGPT/issues/{N}/comments -f body="🤖 Fixed in [abc1234](https://github.com/Significant-Gravitas/AutoGPT/commit/FULL_SHA): <description>"` |
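The reply format above can be composed mechanically from the fixing commit. A sketch of building that body string; the SHA and description here are placeholders for illustration (in practice the full SHA comes from `git rev-parse HEAD` after committing):

```shell
# Sketch: compose the inline-reply body with a markdown commit link.
# FULL_SHA is a placeholder; replace with $(git rev-parse HEAD) in real use.
FULL_SHA="0123456789abcdef0123456789abcdef01234567"
SHORT_SHA="${FULL_SHA:0:7}"   # GitHub convention: 7-char short SHA as link text
DESCRIPTION="handle empty input"

BODY="🤖 Fixed in [${SHORT_SHA}](https://github.com/Significant-Gravitas/AutoGPT/commit/${FULL_SHA}): ${DESCRIPTION}"
echo "$BODY"
```

The resulting `$BODY` is what gets passed as `-f body=...` in the `gh api` reply commands above.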
## Codecov coverage

@@ -547,6 +547,8 @@ Upload screenshots to the PR using the GitHub Git API (no local git operations

**This step is MANDATORY. Every test run MUST post a PR comment with screenshots. No exceptions.**

**CRITICAL — NEVER post a bare directory link like `https://github.com/.../tree/...`.** Every screenshot MUST appear as an inline `![alt](url)` image in the PR comment so reviewers can see them without clicking any links. After posting, the verification step below greps the comment for `![` tags and exits 1 if none are found — the test run is considered incomplete until this passes.

```bash
# Upload screenshots via GitHub Git API (creates blobs, tree, commit, and ref remotely)
REPO="Significant-Gravitas/AutoGPT"
@@ -584,15 +586,27 @@ TREE_JSON+=']'

# Step 2: Create tree, commit, and branch ref
TREE_SHA=$(echo "$TREE_JSON" | jq -c '{tree: .}' | gh api "repos/${REPO}/git/trees" --input - --jq '.sha')
COMMIT_SHA=$(gh api "repos/${REPO}/git/commits" \
  -f message="test: add E2E test screenshots for PR #${PR_NUMBER}" \
  -f tree="$TREE_SHA" \
  --jq '.sha')

# Resolve parent commit so screenshots are chained, not orphan root commits
PARENT_SHA=$(gh api "repos/${REPO}/git/refs/heads/${SCREENSHOTS_BRANCH}" --jq '.object.sha' 2>/dev/null || echo "")
if [ -n "$PARENT_SHA" ]; then
  COMMIT_SHA=$(gh api "repos/${REPO}/git/commits" \
    -f message="test: add E2E test screenshots for PR #${PR_NUMBER}" \
    -f tree="$TREE_SHA" \
    -f "parents[]=$PARENT_SHA" \
    --jq '.sha')
else
  COMMIT_SHA=$(gh api "repos/${REPO}/git/commits" \
    -f message="test: add E2E test screenshots for PR #${PR_NUMBER}" \
    -f tree="$TREE_SHA" \
    --jq '.sha')
fi

gh api "repos/${REPO}/git/refs" \
  -f ref="refs/heads/${SCREENSHOTS_BRANCH}" \
  -f sha="$COMMIT_SHA" 2>/dev/null \
  || gh api "repos/${REPO}/git/refs/heads/${SCREENSHOTS_BRANCH}" \
  -X PATCH -f sha="$COMMIT_SHA" -f force=true
  -X PATCH -f sha="$COMMIT_SHA" -F force=true
```
Then post the comment with **inline images AND explanations for each screenshot**:
@@ -658,6 +672,15 @@ INNEREOF

gh api "repos/${REPO}/issues/$PR_NUMBER/comments" -F body=@"$COMMENT_FILE"
rm -f "$COMMENT_FILE"

# Verify the posted comment contains inline images — exit 1 if none found
# Use separate --paginate + jq pipe: --jq applies per-page, not to the full list
LAST_COMMENT=$(gh api "repos/${REPO}/issues/$PR_NUMBER/comments" --paginate 2>/dev/null | jq -r '.[-1].body // ""')
if ! echo "$LAST_COMMENT" | grep -q '!\['; then
  echo "ERROR: Posted comment contains no inline images (![). Bare directory links are not acceptable." >&2
  exit 1
fi
echo "✓ Inline images verified in posted comment"
```

**The PR comment MUST include:**
@@ -667,6 +690,103 @@ rm -f "$COMMENT_FILE"

This approach uses the GitHub Git API to create blobs, trees, commits, and refs entirely server-side. No local `git checkout` or `git push` — safe for worktrees and won't interfere with the PR branch.
## Step 8: Evaluate and post a formal PR review

After the test comment is posted, evaluate whether the run was thorough enough to make a merge decision, then post a formal GitHub review (approve or request changes). **This step is mandatory — every test run MUST end with a formal review decision.**

### Evaluation criteria

Re-read the PR description:
```bash
gh pr view "$PR_NUMBER" --json body --jq '.body' --repo "$REPO"
```

Score the run against each criterion:

| Criterion | Pass condition |
|-----------|---------------|
| **Coverage** | Every feature/change described in the PR has at least one test scenario |
| **All scenarios pass** | No FAIL rows in the results table |
| **Negative tests** | At least one failure-path test per feature (invalid input, unauthorized, edge case) |
| **Before/after evidence** | Every state-changing API call has before/after values logged |
| **Screenshots are meaningful** | Screenshots show the actual state change, not just a loading spinner or blank page |
| **No regressions** | Existing core flows (login, agent create/run) still work |

### Decision logic

```
ALL criteria pass                            → APPROVE
Any scenario FAIL or missing PR feature      → REQUEST_CHANGES (list gaps)
Evidence weak (no before/after, vague shots) → REQUEST_CHANGES (list what's missing)
```
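The decision table above reduces to a small predicate over the evaluation counts. A hedged sketch, with an invented function name; here "gap count" stands in for both missing-coverage and weak-evidence findings, which the skill tracks in `COVERAGE_GAPS`:

```shell
# Illustrative sketch of the decision logic, not part of the skill's scripts.
# fail_count: number of FAIL rows in the results table
# gap_count:  number of coverage/evidence gaps found during evaluation
decide_review() {
  local fail_count="$1" gap_count="$2"
  if [ "$fail_count" -gt 0 ] || [ "$gap_count" -gt 0 ]; then
    echo "REQUEST_CHANGES"
  else
    echo "APPROVE"
  fi
}

decide_review 0 0   # prints: APPROVE
decide_review 2 0   # prints: REQUEST_CHANGES
```

APPROVE is only reachable when both counts are zero, which mirrors the rule that every criterion must pass.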
### Post the review

```bash
REVIEW_FILE=$(mktemp)

# Count results
PASS_COUNT=$(echo "$TEST_RESULTS_TABLE" | grep -c "PASS" || true)
FAIL_COUNT=$(echo "$TEST_RESULTS_TABLE" | grep -c "FAIL" || true)
TOTAL=$(( PASS_COUNT + FAIL_COUNT ))

# List any coverage gaps found during evaluation (populate this array as you assess)
# e.g. COVERAGE_GAPS=("PR claims to add X but no test covers it")
COVERAGE_GAPS=()
```

**If APPROVING** — all criteria met, zero failures, full coverage:

```bash
cat > "$REVIEW_FILE" <<REVIEWEOF
## E2E Test Evaluation — APPROVED

**Results:** ${PASS_COUNT}/${TOTAL} scenarios passed.

**Coverage:** All features described in the PR were exercised.

**Evidence:** Before/after API values logged for all state-changing operations; screenshots show meaningful state transitions.

**Negative tests:** Failure paths tested for each feature.

No regressions observed on core flows.
REVIEWEOF

gh pr review "$PR_NUMBER" --repo "$REPO" --approve --body "$(cat "$REVIEW_FILE")"
echo "✅ PR approved"
```

**If REQUESTING CHANGES** — any failure, coverage gap, or missing evidence:

```bash
FAIL_LIST=$(echo "$TEST_RESULTS_TABLE" | grep "FAIL" | awk -F'|' '{print "- Scenario" $2 "failed"}' || true)

cat > "$REVIEW_FILE" <<REVIEWEOF
## E2E Test Evaluation — Changes Requested

**Results:** ${PASS_COUNT}/${TOTAL} scenarios passed, ${FAIL_COUNT} failed.

### Required before merge

${FAIL_LIST}
$(for gap in "${COVERAGE_GAPS[@]}"; do echo "- $gap"; done)

Please fix the above and re-run the E2E tests.
REVIEWEOF

gh pr review "$PR_NUMBER" --repo "$REPO" --request-changes --body "$(cat "$REVIEW_FILE")"
echo "❌ Changes requested"
```

```bash
rm -f "$REVIEW_FILE"
```

**Rules:**
- In `--fix` mode, fix all failures before posting the review — the review reflects the final state after fixes
- Never approve if any scenario failed, even if it seems like a flake — rerun that scenario first
- Never request changes for issues already fixed in this run

## Fix mode (--fix flag)

When `--fix` is present, the standard is HIGHER. Do not just note issues — FIX them immediately.

224 .claude/skills/write-frontend-tests/SKILL.md Normal file
@@ -0,0 +1,224 @@
---
name: write-frontend-tests
description: "Analyze the current branch diff against dev, plan integration tests for changed frontend pages/components, and write them. TRIGGER when user asks to write frontend tests, add test coverage, or 'write tests for my changes'."
user-invocable: true
args: "[base branch] — defaults to dev. Optionally pass a specific base branch to diff against."
metadata:
  author: autogpt-team
  version: "1.0.0"
---

# Write Frontend Tests

Analyze the current branch's frontend changes, plan integration tests, and write them.

## References

Before writing any tests, read the testing rules and conventions:

- `autogpt_platform/frontend/TESTING.md` — testing strategy, file locations, examples
- `autogpt_platform/frontend/src/tests/AGENTS.md` — detailed testing rules, MSW patterns, decision flowchart
- `autogpt_platform/frontend/src/tests/integrations/test-utils.tsx` — custom render with providers
- `autogpt_platform/frontend/src/tests/integrations/vitest.setup.tsx` — MSW server setup

## Step 1: Identify changed frontend files

```bash
BASE_BRANCH="${ARGUMENTS:-dev}"
cd autogpt_platform/frontend

# Get changed frontend files (excluding generated, config, and test files)
git diff "$BASE_BRANCH"...HEAD --name-only -- src/ \
  | grep -v '__generated__' \
  | grep -v '__tests__' \
  | grep -v '\.test\.' \
  | grep -v '\.stories\.' \
  | grep -v '\.spec\.'
```

Also read the diff to understand what changed:

```bash
git diff "$BASE_BRANCH"...HEAD --stat -- src/
git diff "$BASE_BRANCH"...HEAD -- src/ | head -500
```

## Step 2: Categorize changes and find test targets

For each changed file, determine:

1. **Is it a page?** (`page.tsx`) — these are the primary test targets
2. **Is it a hook?** (`use*.ts`) — test via the page that uses it
3. **Is it a component?** (`.tsx` in `components/`) — test via the parent page unless it's complex enough to warrant isolation
4. **Is it a helper?** (`helpers.ts`, `utils.ts`) — unit test directly if pure logic

**Priority order:**
1. Pages with new/changed data fetching or user interactions
2. Components with complex internal logic (modals, forms, wizards)
3. Hooks with non-trivial business logic
4. Pure helper functions

Skip: styling-only changes, type-only changes, config changes.
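The categorization rules above map cleanly onto a path-based classifier. A sketch in shell, with an invented function name; the patterns follow the four categories listed, checked in priority order so a `page.tsx` inside a `components/` tree still counts as a page:

```shell
# Illustrative classifier, not part of the skill: maps a changed file path to
# its test-target category per the rules above.
classify_frontend_file() {
  local path="$1"
  case "$path" in
    */page.tsx)              echo "page" ;;       # primary test target
    */use*.ts)               echo "hook" ;;       # test via the page that uses it
    */components/*.tsx)      echo "component" ;;  # test via parent page unless complex
    */helpers.ts|*/utils.ts) echo "helper" ;;     # unit test directly if pure logic
    *)                       echo "other" ;;      # styling/type/config-only: skip
  esac
}

classify_frontend_file "src/app/(platform)/library/page.tsx"   # prints: page
```

Feeding the Step 1 file list through a classifier like this gives the per-category buckets that the priority order is applied to.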

## Step 3: Check for existing tests

For each test target, check if tests already exist:

```bash
# For a page at src/app/(platform)/library/page.tsx
ls src/app/\(platform\)/library/__tests__/ 2>/dev/null

# For a component at src/app/(platform)/library/components/AgentCard/AgentCard.tsx
ls src/app/\(platform\)/library/components/AgentCard/__tests__/ 2>/dev/null
```

Note which targets have no tests (need new files) vs which have tests that need updating.

## Step 4: Identify API endpoints used

For each test target, find which API hooks are used:

```bash
# Find generated API hook imports in the changed files
grep -rn 'from.*__generated__/endpoints' src/app/\(platform\)/library/
grep -rn 'use[A-Z].*V[12]' src/app/\(platform\)/library/
```

For each API hook found, locate the corresponding MSW handler:

```bash
# If the page uses useGetV2ListLibraryAgents, find its MSW handlers
grep -rn 'getGetV2ListLibraryAgents.*Handler' src/app/api/__generated__/endpoints/library/library.msw.ts
```

List every MSW handler you will need (200 for happy path, 4xx for error paths).

## Step 5: Write the test plan

Before writing code, output a plan as a numbered list:

```
Test plan for [branch name]:

1. src/app/(platform)/library/__tests__/main.test.tsx (NEW)
   - Renders page with agent list (MSW 200)
   - Shows loading state
   - Shows error state (MSW 422)
   - Handles empty agent list

2. src/app/(platform)/library/__tests__/search.test.tsx (NEW)
   - Filters agents by search query
   - Shows no results message
   - Clears search

3. src/app/(platform)/library/components/AgentCard/__tests__/AgentCard.test.tsx (UPDATE)
   - Add test for new "duplicate" action
```

Present this plan to the user. Wait for confirmation before proceeding. If the user has feedback, adjust the plan.

## Step 6: Write the tests

For each test file in the plan, follow these conventions:

### File structure

```tsx
import { render, screen, waitFor } from "@/tests/integrations/test-utils";
import { server } from "@/mocks/mock-server";
// Import MSW handlers for endpoints the page uses
import {
  getGetV2ListLibraryAgentsMockHandler200,
  getGetV2ListLibraryAgentsMockHandler422,
} from "@/app/api/__generated__/endpoints/library/library.msw";
// Import the component under test
import LibraryPage from "../page";

describe("LibraryPage", () => {
  test("renders agent list from API", async () => {
    server.use(getGetV2ListLibraryAgentsMockHandler200());

    render(<LibraryPage />);

    expect(await screen.findByText(/my agents/i)).toBeDefined();
  });

  test("shows error state on API failure", async () => {
    server.use(getGetV2ListLibraryAgentsMockHandler422());

    render(<LibraryPage />);

    expect(await screen.findByText(/error/i)).toBeDefined();
  });
});
```

### Rules

- Use `render()` from `@/tests/integrations/test-utils` (NOT from `@testing-library/react` directly)
- Use `server.use()` to set up MSW handlers BEFORE rendering
- Use `findBy*` (async) for elements that appear after data fetching — NOT `getBy*`
- Use `getBy*` only for elements that are immediately present in the DOM
- Use `screen` queries — do NOT destructure from `render()`
- Use `waitFor` when asserting side effects or state changes after interactions
- Import `fireEvent` or `userEvent` from the test-utils for interactions
- Do NOT mock internal hooks or functions — mock at the API boundary via MSW
- Do NOT use `act()` manually — `render` and `fireEvent` handle it
- Keep tests focused: one behavior per test
- Use descriptive test names that read like sentences

### Test location

```
# For pages: __tests__/ next to page.tsx
src/app/(platform)/library/__tests__/main.test.tsx

# For complex standalone components: __tests__/ inside component folder
src/app/(platform)/library/components/AgentCard/__tests__/AgentCard.test.tsx

# For pure helpers: co-located .test.ts
src/app/(platform)/library/helpers.test.ts
```

### Custom MSW overrides

When the auto-generated faker data is not enough, override with specific data:

```tsx
import { http, HttpResponse } from "msw";

server.use(
  http.get("http://localhost:3000/api/proxy/api/v2/library/agents", () => {
    return HttpResponse.json({
      agents: [
        { id: "1", name: "Test Agent", description: "A test agent" },
      ],
      pagination: { total_items: 1, total_pages: 1, page: 1, page_size: 10 },
    });
  }),
);
```

Use the proxy URL pattern: `http://localhost:3000/api/proxy/api/v{version}/{path}` — this matches the MSW base URL configured in `orval.config.ts`.
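The proxy URL template above is just string substitution over the version and path. A small sketch of that construction; the helper name is invented and the base URL is the one stated above for the test setup:

```shell
# Illustrative helper: builds the MSW proxy URL from API version and path,
# following the pattern http://localhost:3000/api/proxy/api/v{version}/{path}.
msw_url() {
  local version="$1" path="$2"
  echo "http://localhost:3000/api/proxy/api/v${version}/${path}"
}

msw_url 2 "library/agents"
# prints: http://localhost:3000/api/proxy/api/v2/library/agents
```

Matching this base URL exactly is what makes a custom `http.get(...)` override intercept the same request the generated handlers would.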

## Step 7: Run and verify

After writing all tests:

```bash
cd autogpt_platform/frontend
pnpm test:unit --reporter=verbose
```

If tests fail:
1. Read the error output carefully
2. Fix the test (not the source code, unless there is a genuine bug)
3. Re-run until all pass

Then run the full checks:

```bash
pnpm format
pnpm lint
pnpm types
```
25 .github/workflows/platform-fullstack-ci.yml vendored
@@ -179,21 +179,30 @@ jobs:
|
||||
pip install pyyaml
|
||||
|
||||
# Resolve extends and generate a flat compose file that bake can understand
|
||||
export NEXT_PUBLIC_SOURCEMAPS NEXT_PUBLIC_PW_TEST
|
||||
docker compose -f docker-compose.yml config > docker-compose.resolved.yml
|
||||
|
||||
# Ensure NEXT_PUBLIC_SOURCEMAPS is in resolved compose
|
||||
# (docker compose config on some versions drops this arg)
|
||||
if ! grep -q "NEXT_PUBLIC_SOURCEMAPS" docker-compose.resolved.yml; then
|
||||
echo "Injecting NEXT_PUBLIC_SOURCEMAPS into resolved compose (docker compose config dropped it)"
|
||||
sed -i '/NEXT_PUBLIC_PW_TEST/a\ NEXT_PUBLIC_SOURCEMAPS: "true"' docker-compose.resolved.yml
|
||||
fi
|
||||
|
||||
# Add cache configuration to the resolved compose file
|
||||
          python ../.github/workflows/scripts/docker-ci-fix-compose-build-cache.py \
            --source docker-compose.resolved.yml \
            --cache-from "type=gha" \
            --cache-to "type=gha,mode=max" \
            --backend-hash "${{ hashFiles('autogpt_platform/backend/Dockerfile', 'autogpt_platform/backend/poetry.lock', 'autogpt_platform/backend/backend/**') }}" \
            --frontend-hash "${{ hashFiles('autogpt_platform/frontend/Dockerfile', 'autogpt_platform/frontend/pnpm-lock.yaml', 'autogpt_platform/frontend/src/**') }}" \
            --frontend-hash "${{ hashFiles('autogpt_platform/frontend/Dockerfile', 'autogpt_platform/frontend/pnpm-lock.yaml', 'autogpt_platform/frontend/src/**') }}-sourcemaps" \
            --git-ref "${{ github.ref }}"

          # Build with bake using the resolved compose file (now includes cache config)
          docker buildx bake --allow=fs.read=.. -f docker-compose.resolved.yml --load
        env:
          NEXT_PUBLIC_PW_TEST: true
          NEXT_PUBLIC_SOURCEMAPS: true

      - name: Set up tests - Cache E2E test data
        id: e2e-data-cache
@@ -279,6 +288,11 @@ jobs:
        cache: "pnpm"
        cache-dependency-path: autogpt_platform/frontend/pnpm-lock.yaml

      - name: Copy source maps from Docker for E2E coverage
        run: |
          FRONTEND_CONTAINER=$(docker compose -f ../docker-compose.resolved.yml ps -q frontend)
          docker cp "$FRONTEND_CONTAINER":/app/.next/static .next-static-coverage

      - name: Set up tests - Install dependencies
        run: pnpm install --frozen-lockfile

@@ -289,6 +303,15 @@ jobs:
        run: pnpm test:no-build
        continue-on-error: false

      - name: Upload E2E coverage to Codecov
        if: ${{ !cancelled() }}
        uses: codecov/codecov-action@v5
        with:
          token: ${{ secrets.CODECOV_TOKEN }}
          flags: platform-frontend-e2e
          files: ./autogpt_platform/frontend/coverage/e2e/cobertura-coverage.xml
          disable_search: true

      - name: Upload Playwright report
        if: always()
        uses: actions/upload-artifact@v4

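The `docker-ci-fix-compose-build-cache.py` step above rewrites the resolved compose file to carry buildx cache settings before `docker buildx bake` runs. A rough standalone sketch of that kind of transformation (the function name, scope format, and per-service behavior here are assumptions for illustration, not the actual script):

```python
# Sketch only: inject buildx cache settings into each service's `build`
# section of an already-resolved compose mapping. The content hashes passed
# on the CLI would be folded into the cache scope so unrelated services
# don't evict each other's GHA cache layers.

def inject_build_cache(compose: dict, cache_from: str, cache_to: str, scope: str) -> dict:
    for name, service in compose.get("services", {}).items():
        build = service.get("build")
        if not isinstance(build, dict):
            continue  # services without a build section are pulled, not built
        # Scope the cache per service + content hash (illustrative format).
        build.setdefault("cache_from", []).append(f"{cache_from},scope={name}-{scope}")
        build.setdefault("cache_to", []).append(f"{cache_to},scope={name}-{scope}")
    return compose


compose = {
    "services": {
        "backend": {"build": {"context": "./backend"}},
        "redis": {"image": "redis:7"},  # image-only service stays untouched
    }
}
result = inject_build_cache(compose, "type=gha", "type=gha,mode=max", "abc123")
print(result["services"]["backend"]["build"]["cache_from"])
```

After the injection, `docker buildx bake -f docker-compose.resolved.yml` picks up the `cache_from`/`cache_to` entries per service.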
@@ -30,7 +30,7 @@ See `/frontend/CONTRIBUTING.md` for complete patterns. Quick reference:
   - Regenerate with `pnpm generate:api`
   - Pattern: `use{Method}{Version}{OperationName}`
4. **Styling**: Tailwind CSS only, use design tokens, Phosphor Icons only
5. **Testing**: Add Storybook stories for new components, Playwright for E2E
5. **Testing**: Integration tests (Vitest + RTL + MSW) are the default (~90%, page-level). Playwright for E2E critical flows. Storybook for design system components. See `autogpt_platform/frontend/TESTING.md`
6. **Code conventions**: Function declarations (not arrow functions) for components/handlers

- Component props should be `interface Props { ... }` (not exported) unless the interface needs to be used outside the component
@@ -47,7 +47,9 @@ See `/frontend/CONTRIBUTING.md` for complete patterns. Quick reference:
## Testing

- Backend: `poetry run test` (runs pytest with a docker based postgres + prisma).
- Frontend: `pnpm test` or `pnpm test-ui` for Playwright tests. See `docs/content/platform/contributing/tests.md` for tips.
- Frontend integration tests: `pnpm test:unit` (Vitest + RTL + MSW, primary testing approach).
- Frontend E2E tests: `pnpm test` or `pnpm test-ui` for Playwright tests.
- See `autogpt_platform/frontend/TESTING.md` for the full testing strategy.

Always run the relevant linters and tests before committing.
Use conventional commit messages for all commits (e.g. `feat(backend): add API`).

@@ -16,6 +16,7 @@ from pydantic import BaseModel, ConfigDict, Field, field_validator
from backend.copilot import service as chat_service
from backend.copilot import stream_registry
from backend.copilot.config import ChatConfig, CopilotMode
from backend.copilot.db import get_chat_messages_paginated
from backend.copilot.executor.utils import enqueue_cancel_task, enqueue_copilot_turn
from backend.copilot.model import (
    ChatMessage,
@@ -155,6 +156,8 @@ class SessionDetailResponse(BaseModel):
    user_id: str | None
    messages: list[dict]
    active_stream: ActiveStreamInfo | None = None  # Present if stream is still active
    has_more_messages: bool = False
    oldest_sequence: int | None = None
    total_prompt_tokens: int = 0
    total_completion_tokens: int = 0
    metadata: ChatSessionMetadata = ChatSessionMetadata()
@@ -394,60 +397,78 @@ async def update_session_title_route(
async def get_session(
    session_id: str,
    user_id: Annotated[str, Security(auth.get_user_id)],
    limit: int = Query(default=50, ge=1, le=200),
    before_sequence: int | None = Query(default=None, ge=0),
) -> SessionDetailResponse:
    """
    Retrieve the details of a specific chat session.

    Looks up a chat session by ID for the given user (if authenticated) and returns all session data including messages.
    If there's an active stream for this session, returns active_stream info for reconnection.
    Supports cursor-based pagination via ``limit`` and ``before_sequence``.
    When no pagination params are provided, returns the most recent messages.

    Args:
        session_id: The unique identifier for the desired chat session.
        user_id: The optional authenticated user ID, or None for anonymous access.
        user_id: The authenticated user's ID.
        limit: Maximum number of messages to return (1-200, default 50).
        before_sequence: Return messages with sequence < this value (cursor).

    Returns:
        SessionDetailResponse: Details for the requested session, including active_stream info if applicable.

        SessionDetailResponse: Details for the requested session, including
        active_stream info and pagination metadata.
    """
    session = await get_chat_session(session_id, user_id)
    if not session:
    page = await get_chat_messages_paginated(
        session_id, limit, before_sequence, user_id=user_id
    )
    if page is None:
        raise NotFoundError(f"Session {session_id} not found.")
    messages = [message.model_dump() for message in page.messages]

    messages = [message.model_dump() for message in session.messages]

    # Check if there's an active stream for this session
    # Only check active stream on initial load (not on "load more" requests)
    active_stream_info = None
    active_session, last_message_id = await stream_registry.get_active_session(
        session_id, user_id
    )
    logger.info(
        f"[GET_SESSION] session={session_id}, active_session={active_session is not None}, "
        f"msg_count={len(messages)}, last_role={messages[-1].get('role') if messages else 'none'}"
    )
    if active_session:
        # Keep the assistant message (including tool_calls) so the frontend can
        # render the correct tool UI (e.g. CreateAgent with mini game).
        # convertChatSessionToUiMessages handles isComplete=false by setting
        # tool parts without output to state "input-available".
        active_stream_info = ActiveStreamInfo(
            turn_id=active_session.turn_id,
            last_message_id=last_message_id,
    if before_sequence is None:
        active_session, last_message_id = await stream_registry.get_active_session(
            session_id, user_id
        )
        logger.info(
            f"[GET_SESSION] session={session_id}, active_session={active_session is not None}, "
            f"msg_count={len(messages)}, last_role={messages[-1].get('role') if messages else 'none'}"
        )
        if active_session:
            active_stream_info = ActiveStreamInfo(
                turn_id=active_session.turn_id,
                last_message_id=last_message_id,
            )

    # Skip session metadata on "load more" — frontend only needs messages
    if before_sequence is not None:
        return SessionDetailResponse(
            id=page.session.session_id,
            created_at=page.session.started_at.isoformat(),
            updated_at=page.session.updated_at.isoformat(),
            user_id=page.session.user_id or None,
            messages=messages,
            active_stream=None,
            has_more_messages=page.has_more,
            oldest_sequence=page.oldest_sequence,
            total_prompt_tokens=0,
            total_completion_tokens=0,
        )

    # Sum token usage from session
    total_prompt = sum(u.prompt_tokens for u in session.usage)
    total_completion = sum(u.completion_tokens for u in session.usage)
    total_prompt = sum(u.prompt_tokens for u in page.session.usage)
    total_completion = sum(u.completion_tokens for u in page.session.usage)

    return SessionDetailResponse(
        id=session.session_id,
        created_at=session.started_at.isoformat(),
        updated_at=session.updated_at.isoformat(),
        user_id=session.user_id or None,
        id=page.session.session_id,
        created_at=page.session.started_at.isoformat(),
        updated_at=page.session.updated_at.isoformat(),
        user_id=page.session.user_id or None,
        messages=messages,
        active_stream=active_stream_info,
        has_more_messages=page.has_more,
        oldest_sequence=page.oldest_sequence,
        total_prompt_tokens=total_prompt,
        total_completion_tokens=total_completion,
        metadata=session.metadata,
        metadata=page.session.metadata,
    )

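The pagination contract introduced above (`limit`, `before_sequence`, `has_more_messages`, `oldest_sequence`) supports a simple client loop: fetch the newest page first, then keep passing the returned `oldest_sequence` back as the cursor until `has_more` is false. A self-contained sketch against a fake backend (the helper names are illustrative, not part of the real API):

```python
# Sketch of a client walking the cursor-paginated messages endpoint.
# fetch_page(limit, before_sequence) -> (messages, has_more, oldest_sequence)

def fetch_all_messages(fetch_page, limit: int = 50) -> list[dict]:
    all_messages: list[dict] = []
    cursor = None  # None = start from the most recent messages
    while True:
        messages, has_more, oldest = fetch_page(limit, cursor)
        all_messages = messages + all_messages  # older pages go in front
        if not has_more:
            return all_messages
        cursor = oldest  # next request: messages with sequence < oldest


# Fake backend: 7 messages with sequences 0..6, served newest-first in pages.
STORE = [{"sequence": i, "text": f"m{i}"} for i in range(7)]

def fake_fetch(limit, before_sequence):
    visible = [m for m in STORE if before_sequence is None or m["sequence"] < before_sequence]
    page = visible[-limit:]  # most recent `limit` messages, ascending order
    has_more = len(visible) > limit
    oldest = page[0]["sequence"] if page else None
    return page, has_more, oldest

print(len(fetch_all_messages(fake_fetch, limit=3)))  # → 7
```

This also shows why the route skips `active_stream` and token totals on "load more" requests: only the message pages change as the cursor moves.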
@@ -12,7 +12,7 @@ import fastapi
from autogpt_libs.auth.dependencies import get_user_id, requires_user
from fastapi import Query, UploadFile
from fastapi.responses import Response
from pydantic import BaseModel
from pydantic import BaseModel, Field

from backend.data.workspace import (
    WorkspaceFile,
@@ -131,9 +131,26 @@ class StorageUsageResponse(BaseModel):
    file_count: int


class WorkspaceFileItem(BaseModel):
    id: str
    name: str
    path: str
    mime_type: str
    size_bytes: int
    metadata: dict = Field(default_factory=dict)
    created_at: str


class ListFilesResponse(BaseModel):
    files: list[WorkspaceFileItem]
    offset: int = 0
    has_more: bool = False

@router.get(
    "/files/{file_id}/download",
    summary="Download file by ID",
    operation_id="getWorkspaceDownloadFileById",
)
async def download_file(
    user_id: Annotated[str, fastapi.Security(get_user_id)],
@@ -158,6 +175,7 @@ async def download_file(
@router.delete(
    "/files/{file_id}",
    summary="Delete a workspace file",
    operation_id="deleteWorkspaceFile",
)
async def delete_workspace_file(
    user_id: Annotated[str, fastapi.Security(get_user_id)],
@@ -183,6 +201,7 @@ async def delete_workspace_file(
@router.post(
    "/files/upload",
    summary="Upload file to workspace",
    operation_id="uploadWorkspaceFile",
)
async def upload_file(
    user_id: Annotated[str, fastapi.Security(get_user_id)],
@@ -196,6 +215,9 @@ async def upload_file(
    Files are stored in session-scoped paths when session_id is provided,
    so the agent's session-scoped tools can discover them automatically.
    """
    # Empty-string session_id drops session scoping; normalize to None.
    session_id = session_id or None

    config = Config()

    # Sanitize filename — strip any directory components
@@ -250,16 +272,27 @@ async def upload_file(
    manager = WorkspaceManager(user_id, workspace.id, session_id)
    try:
        workspace_file = await manager.write_file(
            content, filename, overwrite=overwrite
            content, filename, overwrite=overwrite, metadata={"origin": "user-upload"}
        )
    except ValueError as e:
        raise fastapi.HTTPException(status_code=409, detail=str(e)) from e
        # write_file raises ValueError for both path-conflict and size-limit
        # cases; map each to its correct HTTP status.
        message = str(e)
        if message.startswith("File too large"):
            raise fastapi.HTTPException(status_code=413, detail=message) from e
        raise fastapi.HTTPException(status_code=409, detail=message) from e

    # Post-write storage check — eliminates TOCTOU race on the quota.
    # If a concurrent upload pushed us over the limit, undo this write.
    new_total = await get_workspace_total_size(workspace.id)
    if storage_limit_bytes and new_total > storage_limit_bytes:
        await soft_delete_workspace_file(workspace_file.id, workspace.id)
        try:
            await soft_delete_workspace_file(workspace_file.id, workspace.id)
        except Exception as e:
            logger.warning(
                f"Failed to soft-delete over-quota file {workspace_file.id} "
                f"in workspace {workspace.id}: {e}"
            )
        raise fastapi.HTTPException(
            status_code=413,
            detail={
@@ -281,6 +314,7 @@ async def upload_file(
@router.get(
    "/storage/usage",
    summary="Get workspace storage usage",
    operation_id="getWorkspaceStorageUsage",
)
async def get_storage_usage(
    user_id: Annotated[str, fastapi.Security(get_user_id)],
@@ -301,3 +335,57 @@ async def get_storage_usage(
        used_percent=round((used_bytes / limit_bytes) * 100, 1) if limit_bytes else 0,
        file_count=file_count,
    )


@router.get(
    "/files",
    summary="List workspace files",
    operation_id="listWorkspaceFiles",
)
async def list_workspace_files(
    user_id: Annotated[str, fastapi.Security(get_user_id)],
    session_id: str | None = Query(default=None),
    limit: int = Query(default=200, ge=1, le=1000),
    offset: int = Query(default=0, ge=0),
) -> ListFilesResponse:
    """
    List files in the user's workspace.

    When session_id is provided, only files for that session are returned.
    Otherwise, all files across sessions are listed. Results are paginated
    via `limit`/`offset`; `has_more` indicates whether additional pages exist.
    """
    workspace = await get_or_create_workspace(user_id)

    # Treat empty-string session_id the same as omitted — an empty value
    # would otherwise silently list files across every session instead of
    # scoping to one.
    session_id = session_id or None

    manager = WorkspaceManager(user_id, workspace.id, session_id)
    include_all = session_id is None
    # Fetch one extra to compute has_more without a separate count query.
    files = await manager.list_files(
        limit=limit + 1,
        offset=offset,
        include_all_sessions=include_all,
    )
    has_more = len(files) > limit
    page = files[:limit]

    return ListFilesResponse(
        files=[
            WorkspaceFileItem(
                id=f.id,
                name=f.name,
                path=f.path,
                mime_type=f.mime_type,
                size_bytes=f.size_bytes,
                metadata=f.metadata or {},
                created_at=f.created_at.isoformat(),
            )
            for f in page
        ],
        offset=offset,
        has_more=has_more,
    )

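The route above fetches `limit + 1` rows so it can report `has_more` without issuing a second count query. The same pattern in isolation (names are illustrative, not the actual `WorkspaceManager` API):

```python
# Over-fetch by one row: if the extra row exists, there is another page.

def paginate(items: list, limit: int, offset: int) -> dict:
    window = items[offset : offset + limit + 1]  # fetch one extra row
    has_more = len(window) > limit               # extra row present → more pages
    return {"files": window[:limit], "offset": offset, "has_more": has_more}


files = [f"file-{i}" for i in range(5)]
print(paginate(files, limit=2, offset=0))  # first page: 2 items, has_more True
print(paginate(files, limit=2, offset=4))  # last page: 1 item, has_more False
```

The trade-off is one extra row of I/O per page in exchange for dropping an entire `COUNT(*)` round trip, which is why the route asks `list_files` for `limit + 1` and then slices.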
@@ -1,48 +1,28 @@
"""Tests for workspace file upload and download routes."""

import io
from datetime import datetime, timezone
from unittest.mock import AsyncMock, MagicMock, patch

import fastapi
import fastapi.testclient
import pytest
import pytest_mock

from backend.api.features.workspace import routes as workspace_routes
from backend.data.workspace import WorkspaceFile
from backend.api.features.workspace.routes import router
from backend.data.workspace import Workspace, WorkspaceFile

app = fastapi.FastAPI()
app.include_router(workspace_routes.router)
app.include_router(router)


@app.exception_handler(ValueError)
async def _value_error_handler(
    request: fastapi.Request, exc: ValueError
) -> fastapi.responses.JSONResponse:
    """Mirror the production ValueError → 400 mapping from rest_api.py."""
    """Mirror the production ValueError → 400 mapping from the REST app."""
    return fastapi.responses.JSONResponse(status_code=400, content={"detail": str(exc)})


client = fastapi.testclient.TestClient(app)

TEST_USER_ID = "3e53486c-cf57-477e-ba2a-cb02dc828e1a"

MOCK_WORKSPACE = type("W", (), {"id": "ws-1"})()

_NOW = datetime(2023, 1, 1, tzinfo=timezone.utc)

MOCK_FILE = WorkspaceFile(
    id="file-aaa-bbb",
    workspace_id="ws-1",
    created_at=_NOW,
    updated_at=_NOW,
    name="hello.txt",
    path="/session/hello.txt",
    mime_type="text/plain",
    size_bytes=13,
    storage_path="local://hello.txt",
)


@pytest.fixture(autouse=True)
def setup_app_auth(mock_jwt_user):
@@ -53,25 +33,201 @@ def setup_app_auth(mock_jwt_user):
    app.dependency_overrides.clear()


def _make_workspace(user_id: str = "test-user-id") -> Workspace:
    return Workspace(
        id="ws-001",
        user_id=user_id,
        created_at=datetime(2026, 1, 1, tzinfo=timezone.utc),
        updated_at=datetime(2026, 1, 1, tzinfo=timezone.utc),
    )


def _make_file(**overrides) -> WorkspaceFile:
    defaults = {
        "id": "file-001",
        "workspace_id": "ws-001",
        "created_at": datetime(2026, 1, 1, tzinfo=timezone.utc),
        "updated_at": datetime(2026, 1, 1, tzinfo=timezone.utc),
        "name": "test.txt",
        "path": "/test.txt",
        "storage_path": "local://test.txt",
        "mime_type": "text/plain",
        "size_bytes": 100,
        "checksum": None,
        "is_deleted": False,
        "deleted_at": None,
        "metadata": {},
    }
    defaults.update(overrides)
    return WorkspaceFile(**defaults)


def _make_file_mock(**overrides) -> MagicMock:
    """Create a mock WorkspaceFile to simulate DB records with null fields."""
    defaults = {
        "id": "file-001",
        "name": "test.txt",
        "path": "/test.txt",
        "mime_type": "text/plain",
        "size_bytes": 100,
        "metadata": {},
        "created_at": datetime(2026, 1, 1, tzinfo=timezone.utc),
    }
    defaults.update(overrides)
    mock = MagicMock(spec=WorkspaceFile)
    for k, v in defaults.items():
        setattr(mock, k, v)
    return mock


# -- list_workspace_files tests --


@patch("backend.api.features.workspace.routes.get_or_create_workspace")
@patch("backend.api.features.workspace.routes.WorkspaceManager")
def test_list_files_returns_all_when_no_session(mock_manager_cls, mock_get_workspace):
    mock_get_workspace.return_value = _make_workspace()
    files = [
        _make_file(id="f1", name="a.txt", metadata={"origin": "user-upload"}),
        _make_file(id="f2", name="b.csv", metadata={"origin": "agent-created"}),
    ]
    mock_instance = AsyncMock()
    mock_instance.list_files.return_value = files
    mock_manager_cls.return_value = mock_instance

    response = client.get("/files")
    assert response.status_code == 200

    data = response.json()
    assert len(data["files"]) == 2
    assert data["has_more"] is False
    assert data["offset"] == 0
    assert data["files"][0]["id"] == "f1"
    assert data["files"][0]["metadata"] == {"origin": "user-upload"}
    assert data["files"][1]["id"] == "f2"
    mock_instance.list_files.assert_called_once_with(
        limit=201, offset=0, include_all_sessions=True
    )


@patch("backend.api.features.workspace.routes.get_or_create_workspace")
@patch("backend.api.features.workspace.routes.WorkspaceManager")
def test_list_files_scopes_to_session_when_provided(
    mock_manager_cls, mock_get_workspace, test_user_id
):
    mock_get_workspace.return_value = _make_workspace(user_id=test_user_id)
    mock_instance = AsyncMock()
    mock_instance.list_files.return_value = []
    mock_manager_cls.return_value = mock_instance

    response = client.get("/files?session_id=sess-123")
    assert response.status_code == 200

    data = response.json()
    assert data["files"] == []
    assert data["has_more"] is False
    mock_manager_cls.assert_called_once_with(test_user_id, "ws-001", "sess-123")
    mock_instance.list_files.assert_called_once_with(
        limit=201, offset=0, include_all_sessions=False
    )


@patch("backend.api.features.workspace.routes.get_or_create_workspace")
@patch("backend.api.features.workspace.routes.WorkspaceManager")
def test_list_files_null_metadata_coerced_to_empty_dict(
    mock_manager_cls, mock_get_workspace
):
    """Route uses `f.metadata or {}` for pre-existing files with null metadata."""
    mock_get_workspace.return_value = _make_workspace()
    mock_instance = AsyncMock()
    mock_instance.list_files.return_value = [_make_file_mock(metadata=None)]
    mock_manager_cls.return_value = mock_instance

    response = client.get("/files")
    assert response.status_code == 200
    assert response.json()["files"][0]["metadata"] == {}

# -- upload_file metadata tests --


@patch("backend.api.features.workspace.routes.get_or_create_workspace")
@patch("backend.api.features.workspace.routes.get_workspace_total_size")
@patch("backend.api.features.workspace.routes.scan_content_safe")
@patch("backend.api.features.workspace.routes.WorkspaceManager")
def test_upload_passes_user_upload_origin_metadata(
    mock_manager_cls, mock_scan, mock_total_size, mock_get_workspace
):
    mock_get_workspace.return_value = _make_workspace()
    mock_total_size.return_value = 100
    written = _make_file(id="new-file", name="doc.pdf")
    mock_instance = AsyncMock()
    mock_instance.write_file.return_value = written
    mock_manager_cls.return_value = mock_instance

    response = client.post(
        "/files/upload",
        files={"file": ("doc.pdf", b"fake-pdf-content", "application/pdf")},
    )
    assert response.status_code == 200

    mock_instance.write_file.assert_called_once()
    call_kwargs = mock_instance.write_file.call_args
    assert call_kwargs.kwargs.get("metadata") == {"origin": "user-upload"}


@patch("backend.api.features.workspace.routes.get_or_create_workspace")
@patch("backend.api.features.workspace.routes.get_workspace_total_size")
@patch("backend.api.features.workspace.routes.scan_content_safe")
@patch("backend.api.features.workspace.routes.WorkspaceManager")
def test_upload_returns_409_on_file_conflict(
    mock_manager_cls, mock_scan, mock_total_size, mock_get_workspace
):
    mock_get_workspace.return_value = _make_workspace()
    mock_total_size.return_value = 100
    mock_instance = AsyncMock()
    mock_instance.write_file.side_effect = ValueError("File already exists at path")
    mock_manager_cls.return_value = mock_instance

    response = client.post(
        "/files/upload",
        files={"file": ("dup.txt", b"content", "text/plain")},
    )
    assert response.status_code == 409
    assert "already exists" in response.json()["detail"]

# -- Restored upload/download/delete security + invariant tests --


def _upload(
    filename: str = "hello.txt",
    content: bytes = b"Hello, world!",
    content_type: str = "text/plain",
):
    """Helper to POST a file upload."""
    return client.post(
        "/files/upload?session_id=sess-1",
        files={"file": (filename, io.BytesIO(content), content_type)},
    )


# ---- Happy path ----
_MOCK_FILE = WorkspaceFile(
    id="file-aaa-bbb",
    workspace_id="ws-001",
    created_at=datetime(2026, 1, 1, tzinfo=timezone.utc),
    updated_at=datetime(2026, 1, 1, tzinfo=timezone.utc),
    name="hello.txt",
    path="/sessions/sess-1/hello.txt",
    mime_type="text/plain",
    size_bytes=13,
    storage_path="local://hello.txt",
)


def test_upload_happy_path(mocker: pytest_mock.MockFixture):
def test_upload_happy_path(mocker):
    mocker.patch(
        "backend.api.features.workspace.routes.get_or_create_workspace",
        return_value=MOCK_WORKSPACE,
        return_value=_make_workspace(),
    )
    mocker.patch(
        "backend.api.features.workspace.routes.get_workspace_total_size",
@@ -82,7 +238,7 @@ def test_upload_happy_path(mocker: pytest_mock.MockFixture):
        return_value=None,
    )
    mock_manager = mocker.MagicMock()
    mock_manager.write_file = mocker.AsyncMock(return_value=MOCK_FILE)
    mock_manager.write_file = mocker.AsyncMock(return_value=_MOCK_FILE)
    mocker.patch(
        "backend.api.features.workspace.routes.WorkspaceManager",
        return_value=mock_manager,
@@ -96,10 +252,7 @@ def test_upload_happy_path(mocker: pytest_mock.MockFixture):
    assert data["size_bytes"] == 13


# ---- Per-file size limit ----


def test_upload_exceeds_max_file_size(mocker: pytest_mock.MockFixture):
def test_upload_exceeds_max_file_size(mocker):
    """Files larger than max_file_size_mb should be rejected with 413."""
    cfg = mocker.patch("backend.api.features.workspace.routes.Config")
    cfg.return_value.max_file_size_mb = 0  # 0 MB → any content is too big
@@ -109,15 +262,11 @@ def test_upload_exceeds_max_file_size(mocker: pytest_mock.MockFixture):
    assert response.status_code == 413


# ---- Storage quota exceeded ----


def test_upload_storage_quota_exceeded(mocker: pytest_mock.MockFixture):
def test_upload_storage_quota_exceeded(mocker):
    mocker.patch(
        "backend.api.features.workspace.routes.get_or_create_workspace",
        return_value=MOCK_WORKSPACE,
        return_value=_make_workspace(),
    )
    # Current usage already at limit
    mocker.patch(
        "backend.api.features.workspace.routes.get_workspace_total_size",
        return_value=500 * 1024 * 1024,
@@ -128,27 +277,22 @@ def test_upload_storage_quota_exceeded(mocker: pytest_mock.MockFixture):
    assert "Storage limit exceeded" in response.text


# ---- Post-write quota race (B2) ----


def test_upload_post_write_quota_race(mocker: pytest_mock.MockFixture):
    """If a concurrent upload tips the total over the limit after write,
    the file should be soft-deleted and 413 returned."""
def test_upload_post_write_quota_race(mocker):
    """Concurrent upload tipping over limit after write should soft-delete + 413."""
    mocker.patch(
        "backend.api.features.workspace.routes.get_or_create_workspace",
        return_value=MOCK_WORKSPACE,
        return_value=_make_workspace(),
    )
    # Pre-write check passes (under limit), but post-write check fails
    mocker.patch(
        "backend.api.features.workspace.routes.get_workspace_total_size",
        side_effect=[0, 600 * 1024 * 1024],  # first call OK, second over limit
        side_effect=[0, 600 * 1024 * 1024],
    )
    mocker.patch(
        "backend.api.features.workspace.routes.scan_content_safe",
        return_value=None,
    )
    mock_manager = mocker.MagicMock()
    mock_manager.write_file = mocker.AsyncMock(return_value=MOCK_FILE)
    mock_manager.write_file = mocker.AsyncMock(return_value=_MOCK_FILE)
    mocker.patch(
        "backend.api.features.workspace.routes.WorkspaceManager",
        return_value=mock_manager,
@@ -160,17 +304,14 @@ def test_upload_post_write_quota_race(mocker: pytest_mock.MockFixture):

    response = _upload()
    assert response.status_code == 413
    mock_delete.assert_called_once_with("file-aaa-bbb", "ws-1")
    mock_delete.assert_called_once_with("file-aaa-bbb", "ws-001")

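The quota-race test above exercises a write-then-recheck pattern: write optimistically, re-read total usage, and roll back the write if a concurrent upload pushed the workspace over its limit. A minimal in-memory sketch of that idea (all names here are hypothetical, not the production code):

```python
# Illustrative only: the write happens first, the quota is re-checked after,
# and an over-quota write is undone (the analogue of the route's soft-delete).

class QuotaExceeded(Exception):
    pass


def upload(store: dict, name: str, content: bytes, limit_bytes: int) -> None:
    store[name] = content  # optimistic write
    total = sum(len(v) for v in store.values())  # post-write re-check
    if total > limit_bytes:
        store.pop(name, None)  # roll back this write, not the concurrent one
        raise QuotaExceeded(f"{total} > {limit_bytes}")


store = {"a.txt": b"12345"}
try:
    upload(store, "b.txt", b"123456", limit_bytes=10)
except QuotaExceeded:
    pass
print(sorted(store))  # the over-quota file was rolled back
```

Checking after the write closes the TOCTOU window that a check-before-write alone leaves open, which is exactly what the two-valued `get_workspace_total_size` mock in the test simulates.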
# ---- Any extension accepted (no allowlist) ----


def test_upload_any_extension(mocker: pytest_mock.MockFixture):
def test_upload_any_extension(mocker):
    """Any file extension should be accepted — ClamAV is the security layer."""
    mocker.patch(
        "backend.api.features.workspace.routes.get_or_create_workspace",
        return_value=MOCK_WORKSPACE,
        return_value=_make_workspace(),
    )
    mocker.patch(
        "backend.api.features.workspace.routes.get_workspace_total_size",
@@ -181,7 +322,7 @@ def test_upload_any_extension(mocker: pytest_mock.MockFixture):
        return_value=None,
    )
    mock_manager = mocker.MagicMock()
    mock_manager.write_file = mocker.AsyncMock(return_value=MOCK_FILE)
    mock_manager.write_file = mocker.AsyncMock(return_value=_MOCK_FILE)
    mocker.patch(
        "backend.api.features.workspace.routes.WorkspaceManager",
        return_value=mock_manager,
@@ -191,16 +332,13 @@ def test_upload_any_extension(mocker: pytest_mock.MockFixture):
    assert response.status_code == 200


# ---- Virus scan rejection ----


def test_upload_blocked_by_virus_scan(mocker: pytest_mock.MockFixture):
def test_upload_blocked_by_virus_scan(mocker):
    """Files flagged by ClamAV should be rejected and never written to storage."""
    from backend.api.features.store.exceptions import VirusDetectedError

    mocker.patch(
        "backend.api.features.workspace.routes.get_or_create_workspace",
        return_value=MOCK_WORKSPACE,
        return_value=_make_workspace(),
    )
    mocker.patch(
        "backend.api.features.workspace.routes.get_workspace_total_size",
@@ -211,7 +349,7 @@ def test_upload_blocked_by_virus_scan(mocker: pytest_mock.MockFixture):
        side_effect=VirusDetectedError("Eicar-Test-Signature"),
    )
    mock_manager = mocker.MagicMock()
    mock_manager.write_file = mocker.AsyncMock(return_value=MOCK_FILE)
    mock_manager.write_file = mocker.AsyncMock(return_value=_MOCK_FILE)
    mocker.patch(
        "backend.api.features.workspace.routes.WorkspaceManager",
        return_value=mock_manager,
@@ -219,18 +357,14 @@ def test_upload_blocked_by_virus_scan(mocker: pytest_mock.MockFixture):

    response = _upload(filename="evil.exe", content=b"X5O!P%@AP...")
    assert response.status_code == 400
    assert "Virus detected" in response.text
    mock_manager.write_file.assert_not_called()


# ---- No file extension ----


def test_upload_file_without_extension(mocker: pytest_mock.MockFixture):
def test_upload_file_without_extension(mocker):
    """Files without an extension should be accepted and stored as-is."""
    mocker.patch(
        "backend.api.features.workspace.routes.get_or_create_workspace",
        return_value=MOCK_WORKSPACE,
        return_value=_make_workspace(),
    )
    mocker.patch(
        "backend.api.features.workspace.routes.get_workspace_total_size",
@@ -241,7 +375,7 @@ def test_upload_file_without_extension(mocker: pytest_mock.MockFixture):
        return_value=None,
    )
    mock_manager = mocker.MagicMock()
    mock_manager.write_file = mocker.AsyncMock(return_value=MOCK_FILE)
    mock_manager.write_file = mocker.AsyncMock(return_value=_MOCK_FILE)
    mocker.patch(
        "backend.api.features.workspace.routes.WorkspaceManager",
        return_value=mock_manager,
@@ -257,14 +391,11 @@ def test_upload_file_without_extension(mocker: pytest_mock.MockFixture):
    assert mock_manager.write_file.call_args[0][1] == "Makefile"


# ---- Filename sanitization (SF5) ----


def test_upload_strips_path_components(mocker: pytest_mock.MockFixture):
def test_upload_strips_path_components(mocker):
    """Path-traversal filenames should be reduced to their basename."""
    mocker.patch(
        "backend.api.features.workspace.routes.get_or_create_workspace",
        return_value=MOCK_WORKSPACE,
        return_value=_make_workspace(),
    )
    mocker.patch(
        "backend.api.features.workspace.routes.get_workspace_total_size",
@@ -275,28 +406,23 @@ def test_upload_strips_path_components(mocker: pytest_mock.MockFixture):
        return_value=None,
    )
    mock_manager = mocker.MagicMock()
    mock_manager.write_file = mocker.AsyncMock(return_value=MOCK_FILE)
    mock_manager.write_file = mocker.AsyncMock(return_value=_MOCK_FILE)
    mocker.patch(
        "backend.api.features.workspace.routes.WorkspaceManager",
        return_value=mock_manager,
    )

    # Filename with traversal
    _upload(filename="../../etc/passwd.txt")

    # write_file should have been called with just the basename
    mock_manager.write_file.assert_called_once()
    call_args = mock_manager.write_file.call_args
    assert call_args[0][1] == "passwd.txt"


# ---- Download ----


def test_download_file_not_found(mocker: pytest_mock.MockFixture):
def test_download_file_not_found(mocker):
    mocker.patch(
        "backend.api.features.workspace.routes.get_workspace",
        return_value=MOCK_WORKSPACE,
        return_value=_make_workspace(),
    )
    mocker.patch(
        "backend.api.features.workspace.routes.get_workspace_file",
@@ -307,14 +433,11 @@ def test_download_file_not_found(mocker: pytest_mock.MockFixture):
|
||||
assert response.status_code == 404
|
||||
|
||||
|
||||
# ---- Delete ----
|
||||
|
||||
|
||||
def test_delete_file_success(mocker: pytest_mock.MockFixture):
|
||||
def test_delete_file_success(mocker):
|
||||
"""Deleting an existing file should return {"deleted": true}."""
|
||||
mocker.patch(
|
||||
"backend.api.features.workspace.routes.get_workspace",
|
||||
return_value=MOCK_WORKSPACE,
|
||||
return_value=_make_workspace(),
|
||||
)
|
||||
mock_manager = mocker.MagicMock()
|
||||
mock_manager.delete_file = mocker.AsyncMock(return_value=True)
|
||||
@@ -329,11 +452,11 @@ def test_delete_file_success(mocker: pytest_mock.MockFixture):
|
||||
mock_manager.delete_file.assert_called_once_with("file-aaa-bbb")
|
||||
|
||||
|
||||
def test_delete_file_not_found(mocker: pytest_mock.MockFixture):
|
||||
def test_delete_file_not_found(mocker):
|
||||
"""Deleting a non-existent file should return 404."""
|
||||
mocker.patch(
|
||||
"backend.api.features.workspace.routes.get_workspace",
|
||||
return_value=MOCK_WORKSPACE,
|
||||
return_value=_make_workspace(),
|
||||
)
|
||||
mock_manager = mocker.MagicMock()
|
||||
mock_manager.delete_file = mocker.AsyncMock(return_value=False)
|
||||
@@ -347,7 +470,7 @@ def test_delete_file_not_found(mocker: pytest_mock.MockFixture):
|
||||
assert "File not found" in response.text
|
||||
|
||||
|
||||
def test_delete_file_no_workspace(mocker: pytest_mock.MockFixture):
|
||||
def test_delete_file_no_workspace(mocker):
|
||||
"""Deleting when user has no workspace should return 404."""
|
||||
mocker.patch(
|
||||
"backend.api.features.workspace.routes.get_workspace",
|
||||
@@ -357,3 +480,123 @@ def test_delete_file_no_workspace(mocker: pytest_mock.MockFixture):
|
||||
response = client.delete("/files/file-aaa-bbb")
|
||||
assert response.status_code == 404
|
||||
assert "Workspace not found" in response.text
|
||||
|
||||
|
||||
def test_upload_write_file_too_large_returns_413(mocker):
|
||||
"""write_file raises ValueError("File too large: …") → must map to 413."""
|
||||
mocker.patch(
|
||||
"backend.api.features.workspace.routes.get_or_create_workspace",
|
||||
return_value=_make_workspace(),
|
||||
)
|
||||
mocker.patch(
|
||||
"backend.api.features.workspace.routes.get_workspace_total_size",
|
||||
return_value=0,
|
||||
)
|
||||
mocker.patch(
|
||||
"backend.api.features.workspace.routes.scan_content_safe",
|
||||
return_value=None,
|
||||
)
|
||||
mock_manager = mocker.MagicMock()
|
||||
mock_manager.write_file = mocker.AsyncMock(
|
||||
side_effect=ValueError("File too large: 900 bytes exceeds 1MB limit")
|
||||
)
|
||||
mocker.patch(
|
||||
"backend.api.features.workspace.routes.WorkspaceManager",
|
||||
return_value=mock_manager,
|
||||
)
|
||||
|
||||
response = _upload()
|
||||
assert response.status_code == 413
|
||||
assert "File too large" in response.text
|
||||
|
||||
|
||||
def test_upload_write_file_conflict_returns_409(mocker):
|
||||
"""Non-'File too large' ValueErrors from write_file stay as 409."""
|
||||
mocker.patch(
|
||||
"backend.api.features.workspace.routes.get_or_create_workspace",
|
||||
return_value=_make_workspace(),
|
||||
)
|
||||
mocker.patch(
|
||||
"backend.api.features.workspace.routes.get_workspace_total_size",
|
||||
return_value=0,
|
||||
)
|
||||
mocker.patch(
|
||||
"backend.api.features.workspace.routes.scan_content_safe",
|
||||
return_value=None,
|
||||
)
|
||||
mock_manager = mocker.MagicMock()
|
||||
mock_manager.write_file = mocker.AsyncMock(
|
||||
side_effect=ValueError("File already exists at path: /sessions/x/a.txt")
|
||||
)
|
||||
mocker.patch(
|
||||
"backend.api.features.workspace.routes.WorkspaceManager",
|
||||
return_value=mock_manager,
|
||||
)
|
||||
|
||||
response = _upload()
|
||||
assert response.status_code == 409
|
||||
assert "already exists" in response.text
|
||||
|
||||
|
||||
@patch("backend.api.features.workspace.routes.get_or_create_workspace")
|
||||
@patch("backend.api.features.workspace.routes.WorkspaceManager")
|
||||
def test_list_files_has_more_true_when_limit_exceeded(
|
||||
mock_manager_cls, mock_get_workspace
|
||||
):
|
||||
"""The limit+1 fetch trick must flip has_more=True and trim the page."""
|
||||
mock_get_workspace.return_value = _make_workspace()
|
||||
# Backend was asked for limit+1=3, and returned exactly 3 items.
|
||||
files = [
|
||||
_make_file(id="f1", name="a.txt"),
|
||||
_make_file(id="f2", name="b.txt"),
|
||||
_make_file(id="f3", name="c.txt"),
|
||||
]
|
||||
mock_instance = AsyncMock()
|
||||
mock_instance.list_files.return_value = files
|
||||
mock_manager_cls.return_value = mock_instance
|
||||
|
||||
response = client.get("/files?limit=2")
|
||||
assert response.status_code == 200
|
||||
data = response.json()
|
||||
assert data["has_more"] is True
|
||||
assert len(data["files"]) == 2
|
||||
assert data["files"][0]["id"] == "f1"
|
||||
assert data["files"][1]["id"] == "f2"
|
||||
mock_instance.list_files.assert_called_once_with(
|
||||
limit=3, offset=0, include_all_sessions=True
|
||||
)
|
||||
|
||||
|
||||
@patch("backend.api.features.workspace.routes.get_or_create_workspace")
|
||||
@patch("backend.api.features.workspace.routes.WorkspaceManager")
|
||||
def test_list_files_has_more_false_when_exactly_page_size(
|
||||
mock_manager_cls, mock_get_workspace
|
||||
):
|
||||
"""Exactly `limit` rows means we're on the last page — has_more=False."""
|
||||
mock_get_workspace.return_value = _make_workspace()
|
||||
files = [_make_file(id="f1", name="a.txt"), _make_file(id="f2", name="b.txt")]
|
||||
mock_instance = AsyncMock()
|
||||
mock_instance.list_files.return_value = files
|
||||
mock_manager_cls.return_value = mock_instance
|
||||
|
||||
response = client.get("/files?limit=2")
|
||||
assert response.status_code == 200
|
||||
data = response.json()
|
||||
assert data["has_more"] is False
|
||||
assert len(data["files"]) == 2
|
||||
|
||||
|
||||
@patch("backend.api.features.workspace.routes.get_or_create_workspace")
|
||||
@patch("backend.api.features.workspace.routes.WorkspaceManager")
|
||||
def test_list_files_offset_is_echoed_back(mock_manager_cls, mock_get_workspace):
|
||||
mock_get_workspace.return_value = _make_workspace()
|
||||
mock_instance = AsyncMock()
|
||||
mock_instance.list_files.return_value = []
|
||||
mock_manager_cls.return_value = mock_instance
|
||||
|
||||
response = client.get("/files?offset=50&limit=10")
|
||||
assert response.status_code == 200
|
||||
assert response.json()["offset"] == 50
|
||||
mock_instance.list_files.assert_called_once_with(
|
||||
limit=11, offset=50, include_all_sessions=True
|
||||
)
|
||||
|
||||
@@ -7,19 +7,24 @@ shared tool registry as the SDK path.
|
||||
"""
|
||||
|
||||
import asyncio
|
||||
import base64
|
||||
import logging
|
||||
import os
|
||||
import re
|
||||
import shutil
|
||||
import tempfile
|
||||
import uuid
|
||||
from collections.abc import AsyncGenerator, Sequence
|
||||
from dataclasses import dataclass, field
|
||||
from functools import partial
|
||||
from typing import Any, cast
|
||||
from typing import TYPE_CHECKING, Any, cast
|
||||
|
||||
import orjson
|
||||
from langfuse import propagate_attributes
|
||||
from openai.types.chat import ChatCompletionMessageParam, ChatCompletionToolParam
|
||||
|
||||
from backend.copilot.config import CopilotMode
|
||||
from backend.copilot.context import set_execution_context
|
||||
from backend.copilot.context import get_workspace_manager, set_execution_context
|
||||
from backend.copilot.model import (
|
||||
ChatMessage,
|
||||
ChatSession,
|
||||
@@ -75,6 +80,9 @@ from backend.util.tool_call_loop import (
|
||||
tool_call_loop,
|
||||
)
|
||||
|
||||
if TYPE_CHECKING:
|
||||
from backend.copilot.permissions import CopilotPermissions
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
# Set to hold background tasks to prevent garbage collection
|
||||
@@ -83,6 +91,128 @@ _background_tasks: set[asyncio.Task[Any]] = set()
|
||||
# Maximum number of tool-call rounds before forcing a text response.
|
||||
_MAX_TOOL_ROUNDS = 30
|
||||
|
||||
# Max seconds to wait for transcript upload in the finally block before
|
||||
# letting it continue as a background task (tracked in _background_tasks).
|
||||
_TRANSCRIPT_UPLOAD_TIMEOUT_S = 5
|
||||
|
||||
# MIME types that can be embedded as vision content blocks (OpenAI format).
|
||||
_VISION_MIME_TYPES = frozenset({"image/png", "image/jpeg", "image/gif", "image/webp"})
|
||||
|
||||
# Max size for embedding images directly in the user message (20 MiB raw).
|
||||
_MAX_INLINE_IMAGE_BYTES = 20 * 1024 * 1024
|
||||
|
||||
# Matches characters unsafe for filenames.
|
||||
_UNSAFE_FILENAME = re.compile(r"[^\w.\-]")
|
||||
|
||||
|
||||
async def _prepare_baseline_attachments(
|
||||
file_ids: list[str],
|
||||
user_id: str,
|
||||
session_id: str,
|
||||
working_dir: str,
|
||||
) -> tuple[str, list[dict[str, Any]]]:
|
||||
"""Download workspace files and prepare them for the baseline LLM.
|
||||
|
||||
Images become OpenAI-format vision content blocks. Non-image files are
|
||||
saved to *working_dir* so tool handlers can access them.
|
||||
|
||||
Returns ``(hint_text, image_blocks)``.
|
||||
"""
|
||||
if not file_ids or not user_id:
|
||||
return "", []
|
||||
|
||||
try:
|
||||
manager = await get_workspace_manager(user_id, session_id)
|
||||
except Exception:
|
||||
logger.warning(
|
||||
"Failed to create workspace manager for file attachments",
|
||||
exc_info=True,
|
||||
)
|
||||
return "", []
|
||||
|
||||
image_blocks: list[dict[str, Any]] = []
|
||||
file_descriptions: list[str] = []
|
||||
|
||||
for fid in file_ids:
|
||||
try:
|
||||
file_info = await manager.get_file_info(fid)
|
||||
if file_info is None:
|
||||
continue
|
||||
content = await manager.read_file_by_id(fid)
|
||||
mime = (file_info.mime_type or "").split(";")[0].strip().lower()
|
||||
|
||||
if mime in _VISION_MIME_TYPES and len(content) <= _MAX_INLINE_IMAGE_BYTES:
|
||||
b64 = base64.b64encode(content).decode("ascii")
|
||||
image_blocks.append(
|
||||
{
|
||||
"type": "image",
|
||||
"source": {"type": "base64", "media_type": mime, "data": b64},
|
||||
}
|
||||
)
|
||||
file_descriptions.append(
|
||||
f"- {file_info.name} ({mime}, "
|
||||
f"{file_info.size_bytes:,} bytes) [embedded as image]"
|
||||
)
|
||||
else:
|
||||
safe = _UNSAFE_FILENAME.sub("_", file_info.name) or "file"
|
||||
candidate = os.path.join(working_dir, safe)
|
||||
if os.path.exists(candidate):
|
||||
stem, ext = os.path.splitext(safe)
|
||||
idx = 1
|
||||
while os.path.exists(candidate):
|
||||
candidate = os.path.join(working_dir, f"{stem}_{idx}{ext}")
|
||||
idx += 1
|
||||
with open(candidate, "wb") as f:
|
||||
f.write(content)
|
||||
file_descriptions.append(
|
||||
f"- {file_info.name} ({mime}, "
|
||||
f"{file_info.size_bytes:,} bytes) saved to "
|
||||
f"{os.path.basename(candidate)}"
|
||||
)
|
||||
except Exception:
|
||||
logger.warning("Failed to prepare file %s", fid[:12], exc_info=True)
|
||||
|
||||
if not file_descriptions:
|
||||
return "", []
|
||||
|
||||
noun = "file" if len(file_descriptions) == 1 else "files"
|
||||
has_non_images = len(file_descriptions) > len(image_blocks)
|
||||
read_hint = (
|
||||
" Use the read_workspace_file tool to view non-image files."
|
||||
if has_non_images
|
||||
else ""
|
||||
)
|
||||
hint = (
|
||||
f"\n[The user attached {len(file_descriptions)} {noun}.{read_hint}\n"
|
||||
+ "\n".join(file_descriptions)
|
||||
+ "]"
|
||||
)
|
||||
return hint, image_blocks
|
||||
|
||||
|
||||
def _filter_tools_by_permissions(
|
||||
tools: list[ChatCompletionToolParam],
|
||||
permissions: "CopilotPermissions",
|
||||
) -> list[ChatCompletionToolParam]:
|
||||
"""Filter OpenAI-format tools based on CopilotPermissions.
|
||||
|
||||
Uses short tool names (the ``function.name`` field) to compute the
|
||||
effective allowed set, then keeps only matching tools.
|
||||
"""
|
||||
from backend.copilot.permissions import all_known_tool_names
|
||||
|
||||
if permissions.is_empty():
|
||||
return tools
|
||||
|
||||
all_tools = all_known_tool_names()
|
||||
effective = permissions.effective_allowed_tools(all_tools)
|
||||
|
||||
return [
|
||||
t
|
||||
for t in tools
|
||||
if t.get("function", {}).get("name") in effective # type: ignore[union-attr]
|
||||
]
|
||||
|
||||
|
||||
def _resolve_baseline_model(mode: CopilotMode | None) -> str:
|
||||
"""Pick the model for the baseline path based on the per-request mode.
|
||||
@@ -97,6 +227,98 @@ def _resolve_baseline_model(mode: CopilotMode | None) -> str:
|
||||
return config.model
|
||||
|
||||
|
||||
# Tag pairs to strip from baseline streaming output. Different models use
|
||||
# different tag names for their internal reasoning (Claude uses <thinking>,
|
||||
# Gemini uses <internal_reasoning>, etc.).
|
||||
_REASONING_TAG_PAIRS: list[tuple[str, str]] = [
|
||||
("<thinking>", "</thinking>"),
|
||||
("<internal_reasoning>", "</internal_reasoning>"),
|
||||
]
|
||||
|
||||
# Longest opener — used to size the partial-tag buffer.
|
||||
_MAX_OPEN_TAG_LEN = max(len(o) for o, _ in _REASONING_TAG_PAIRS)
|
||||
|
||||
|
||||
class _ThinkingStripper:
|
||||
"""Strip reasoning blocks from a stream of text deltas.
|
||||
|
||||
Handles multiple tag patterns (``<thinking>``, ``<internal_reasoning>``,
|
||||
etc.) so the same stripper works across Claude, Gemini, and other models.
|
||||
|
||||
Buffers just enough characters to detect a tag that may be split
|
||||
across chunks; emits text immediately when no tag is in-flight.
|
||||
Robust to single chunks that open and close a block, multiple
|
||||
blocks per stream, and tags that straddle chunk boundaries.
|
||||
"""
|
||||
|
||||
def __init__(self) -> None:
|
||||
self._buffer: str = ""
|
||||
self._in_thinking: bool = False
|
||||
self._close_tag: str = "" # closing tag for the currently open block
|
||||
|
||||
def _find_open_tag(self) -> tuple[int, str, str]:
|
||||
"""Find the earliest opening tag in the buffer.
|
||||
|
||||
Returns (position, open_tag, close_tag) or (-1, "", "") if none.
|
||||
"""
|
||||
best_pos = -1
|
||||
best_open = ""
|
||||
best_close = ""
|
||||
for open_tag, close_tag in _REASONING_TAG_PAIRS:
|
||||
pos = self._buffer.find(open_tag)
|
||||
if pos != -1 and (best_pos == -1 or pos < best_pos):
|
||||
best_pos = pos
|
||||
best_open = open_tag
|
||||
best_close = close_tag
|
||||
return best_pos, best_open, best_close
|
||||
|
||||
def process(self, chunk: str) -> str:
|
||||
"""Feed a chunk and return the text that is safe to emit now."""
|
||||
self._buffer += chunk
|
||||
out: list[str] = []
|
||||
while self._buffer:
|
||||
if self._in_thinking:
|
||||
end = self._buffer.find(self._close_tag)
|
||||
if end == -1:
|
||||
keep = len(self._close_tag) - 1
|
||||
self._buffer = self._buffer[-keep:] if keep else ""
|
||||
return "".join(out)
|
||||
self._buffer = self._buffer[end + len(self._close_tag) :]
|
||||
self._in_thinking = False
|
||||
self._close_tag = ""
|
||||
else:
|
||||
start, open_tag, close_tag = self._find_open_tag()
|
||||
if start == -1:
|
||||
# No opening tag; emit everything except a tail that
|
||||
# could start a partial opener on the next chunk.
|
||||
safe_end = len(self._buffer)
|
||||
for keep in range(
|
||||
min(_MAX_OPEN_TAG_LEN - 1, len(self._buffer)), 0, -1
|
||||
):
|
||||
tail = self._buffer[-keep:]
|
||||
if any(o[:keep] == tail for o, _ in _REASONING_TAG_PAIRS):
|
||||
safe_end = len(self._buffer) - keep
|
||||
break
|
||||
out.append(self._buffer[:safe_end])
|
||||
self._buffer = self._buffer[safe_end:]
|
||||
return "".join(out)
|
||||
out.append(self._buffer[:start])
|
||||
self._buffer = self._buffer[start + len(open_tag) :]
|
||||
self._in_thinking = True
|
||||
self._close_tag = close_tag
|
||||
return "".join(out)
|
||||
|
||||
def flush(self) -> str:
|
||||
"""Return any remaining emittable text when the stream ends."""
|
||||
if self._in_thinking:
|
||||
# Unclosed thinking block — discard the buffered reasoning.
|
||||
self._buffer = ""
|
||||
return ""
|
||||
out = self._buffer
|
||||
self._buffer = ""
|
||||
return out
|
||||
|
||||
|
||||
@dataclass
|
||||
class _BaselineStreamState:
|
||||
"""Mutable state shared between the tool-call loop callbacks.
|
||||
@@ -112,6 +334,8 @@ class _BaselineStreamState:
|
||||
text_started: bool = False
|
||||
turn_prompt_tokens: int = 0
|
||||
turn_completion_tokens: int = 0
|
||||
thinking_stripper: _ThinkingStripper = field(default_factory=_ThinkingStripper)
|
||||
session_messages: list[ChatMessage] = field(default_factory=list)
|
||||
|
||||
|
||||
async def _baseline_llm_caller(
|
||||
@@ -125,6 +349,9 @@ async def _baseline_llm_caller(
|
||||
Extracted from ``stream_chat_completion_baseline`` for readability.
|
||||
"""
|
||||
state.pending_events.append(StreamStartStep())
|
||||
# Fresh thinking-strip state per round so a malformed unclosed
|
||||
# block in one LLM call cannot silently drop content in the next.
|
||||
state.thinking_stripper = _ThinkingStripper()
|
||||
|
||||
round_text = ""
|
||||
try:
|
||||
@@ -158,13 +385,17 @@ async def _baseline_llm_caller(
|
||||
continue
|
||||
|
||||
if delta.content:
|
||||
if not state.text_started:
|
||||
state.pending_events.append(StreamTextStart(id=state.text_block_id))
|
||||
state.text_started = True
|
||||
round_text += delta.content
|
||||
state.pending_events.append(
|
||||
StreamTextDelta(id=state.text_block_id, delta=delta.content)
|
||||
)
|
||||
emit = state.thinking_stripper.process(delta.content)
|
||||
if emit:
|
||||
if not state.text_started:
|
||||
state.pending_events.append(
|
||||
StreamTextStart(id=state.text_block_id)
|
||||
)
|
||||
state.text_started = True
|
||||
round_text += emit
|
||||
state.pending_events.append(
|
||||
StreamTextDelta(id=state.text_block_id, delta=emit)
|
||||
)
|
||||
|
||||
if delta.tool_calls:
|
||||
for tc in delta.tool_calls:
|
||||
@@ -183,6 +414,16 @@ async def _baseline_llm_caller(
|
||||
if tc.function and tc.function.arguments:
|
||||
entry["arguments"] += tc.function.arguments
|
||||
|
||||
# Flush any buffered text held back by the thinking stripper.
|
||||
tail = state.thinking_stripper.flush()
|
||||
if tail:
|
||||
if not state.text_started:
|
||||
state.pending_events.append(StreamTextStart(id=state.text_block_id))
|
||||
state.text_started = True
|
||||
round_text += tail
|
||||
state.pending_events.append(
|
||||
StreamTextDelta(id=state.text_block_id, delta=tail)
|
||||
)
|
||||
# Close text block
|
||||
if state.text_started:
|
||||
state.pending_events.append(StreamTextEnd(id=state.text_block_id))
|
||||
@@ -404,11 +645,13 @@ def _baseline_conversation_updater(
|
||||
*,
|
||||
transcript_builder: TranscriptBuilder,
|
||||
model: str = "",
|
||||
state: _BaselineStreamState | None = None,
|
||||
) -> None:
|
||||
"""Update OpenAI message list with assistant response + tool results.
|
||||
|
||||
Thin composition of :func:`_mutate_openai_messages` and
|
||||
:func:`_record_turn_to_transcript`.
|
||||
Also records structured ChatMessage entries in ``state.session_messages``
|
||||
so the full tool-call history is persisted to the session (not just the
|
||||
concatenated assistant text).
|
||||
"""
|
||||
_mutate_openai_messages(messages, response, tool_results)
|
||||
_record_turn_to_transcript(
|
||||
@@ -417,6 +660,30 @@ def _baseline_conversation_updater(
|
||||
transcript_builder=transcript_builder,
|
||||
model=model,
|
||||
)
|
||||
# Record structured messages for session persistence so tool calls
|
||||
# and tool results survive across turns and mode switches.
|
||||
if state is not None and tool_results:
|
||||
assistant_msg = ChatMessage(
|
||||
role="assistant",
|
||||
content=response.response_text or "",
|
||||
tool_calls=[
|
||||
{
|
||||
"id": tc.id,
|
||||
"type": "function",
|
||||
"function": {"name": tc.name, "arguments": tc.arguments},
|
||||
}
|
||||
for tc in response.tool_calls
|
||||
],
|
||||
)
|
||||
state.session_messages.append(assistant_msg)
|
||||
for tr in tool_results:
|
||||
state.session_messages.append(
|
||||
ChatMessage(
|
||||
role="tool",
|
||||
content=tr.content,
|
||||
tool_call_id=tr.tool_call_id,
|
||||
)
|
||||
)
|
||||
|
||||
|
||||
async def _update_title_async(
|
||||
@@ -606,7 +873,15 @@ async def _upload_final_transcript(
|
||||
# Bound the wait: a hung storage backend must not block the response
|
||||
# from finishing. The task keeps running in _background_tasks on
|
||||
# timeout and will be cleaned up when it resolves.
|
||||
await asyncio.wait_for(asyncio.shield(upload_task), timeout=30)
|
||||
await asyncio.wait_for(
|
||||
asyncio.shield(upload_task), timeout=_TRANSCRIPT_UPLOAD_TIMEOUT_S
|
||||
)
|
||||
except asyncio.TimeoutError:
|
||||
# Upload is still running in _background_tasks; we just stopped waiting.
|
||||
logger.info(
|
||||
"[Baseline] Transcript upload exceeded %ss wait — continuing as background task",
|
||||
_TRANSCRIPT_UPLOAD_TIMEOUT_S,
|
||||
)
|
||||
except Exception as upload_err:
|
||||
logger.error("[Baseline] Transcript upload failed: %s", upload_err)
|
||||
|
||||
@@ -617,6 +892,9 @@ async def stream_chat_completion_baseline(
|
||||
is_user_message: bool = True,
|
||||
user_id: str | None = None,
|
||||
session: ChatSession | None = None,
|
||||
file_ids: list[str] | None = None,
|
||||
permissions: "CopilotPermissions | None" = None,
|
||||
context: dict[str, str] | None = None,
|
||||
mode: CopilotMode | None = None,
|
||||
**_kwargs: Any,
|
||||
) -> AsyncGenerator[StreamBaseResponse, None]:
|
||||
@@ -650,6 +928,23 @@ async def stream_chat_completion_baseline(
|
||||
# the cheaper/faster model; everything else keeps the default.
|
||||
active_model = _resolve_baseline_model(mode)
|
||||
|
||||
# --- E2B sandbox setup (feature parity with SDK path) ---
|
||||
e2b_sandbox = None
|
||||
e2b_api_key = config.active_e2b_api_key
|
||||
if e2b_api_key:
|
||||
try:
|
||||
from backend.copilot.tools.e2b_sandbox import get_or_create_sandbox
|
||||
|
||||
e2b_sandbox = await get_or_create_sandbox(
|
||||
session_id,
|
||||
api_key=e2b_api_key,
|
||||
template=config.e2b_sandbox_template,
|
||||
timeout=config.e2b_sandbox_timeout,
|
||||
on_timeout=config.e2b_sandbox_on_timeout,
|
||||
)
|
||||
except Exception:
|
||||
logger.warning("[Baseline] E2B sandbox setup failed", exc_info=True)
|
||||
|
||||
# --- Transcript support (feature parity with SDK path) ---
|
||||
transcript_builder = TranscriptBuilder()
|
||||
transcript_covers_prefix = True
|
||||
@@ -735,10 +1030,70 @@ async def stream_chat_completion_baseline(
|
||||
elif msg.role == "user" and msg.content:
|
||||
openai_messages.append({"role": msg.role, "content": msg.content})
|
||||
|
||||
# --- File attachments (feature parity with SDK path) ---
|
||||
working_dir: str | None = None
|
||||
attachment_hint = ""
|
||||
image_blocks: list[dict[str, Any]] = []
|
||||
if file_ids and user_id:
|
||||
working_dir = tempfile.mkdtemp(prefix=f"copilot-baseline-{session_id[:8]}-")
|
||||
attachment_hint, image_blocks = await _prepare_baseline_attachments(
|
||||
file_ids, user_id, session_id, working_dir
|
||||
)
|
||||
|
||||
# --- URL context ---
|
||||
context_hint = ""
|
||||
if context and context.get("url"):
|
||||
url = context["url"]
|
||||
content_text = context.get("content", "")
|
||||
if content_text:
|
||||
context_hint = (
|
||||
f"\n[The user shared a URL: {url}\n" f"Content:\n{content_text[:8000]}]"
|
||||
)
|
||||
else:
|
||||
context_hint = f"\n[The user shared a URL: {url}]"
|
||||
|
||||
# Append attachment + context hints and image blocks to the last user
|
||||
# message in a single reverse scan.
|
||||
extra_hint = attachment_hint + context_hint
|
||||
if extra_hint or image_blocks:
|
||||
for i in range(len(openai_messages) - 1, -1, -1):
|
||||
if openai_messages[i].get("role") == "user":
|
||||
existing = openai_messages[i].get("content", "")
|
||||
if isinstance(existing, str):
|
||||
text = existing + "\n" + extra_hint if extra_hint else existing
|
||||
if image_blocks:
|
||||
parts: list[dict[str, Any]] = [{"type": "text", "text": text}]
|
||||
for img in image_blocks:
|
||||
parts.append(
|
||||
{
|
||||
"type": "image_url",
|
||||
"image_url": {
|
||||
"url": (
|
||||
f"data:{img['source']['media_type']};"
|
||||
f"base64,{img['source']['data']}"
|
||||
)
|
||||
},
|
||||
}
|
||||
)
|
||||
openai_messages[i]["content"] = parts
|
||||
else:
|
||||
openai_messages[i]["content"] = text
|
||||
break
|
||||
|
||||
tools = get_available_tools()
|
||||
|
||||
# --- Permission filtering ---
|
||||
if permissions is not None:
|
||||
tools = _filter_tools_by_permissions(tools, permissions)
|
||||
|
||||
# Propagate execution context so tool handlers can read session-level flags.
|
||||
set_execution_context(user_id, session)
|
||||
set_execution_context(
|
||||
user_id,
|
||||
session,
|
||||
sandbox=e2b_sandbox,
|
||||
sdk_cwd=working_dir,
|
||||
permissions=permissions,
|
||||
)
|
||||
|
||||
yield StreamStart(messageId=message_id, sessionId=session_id)
|
||||
|
||||
@@ -770,6 +1125,7 @@ async def stream_chat_completion_baseline(
|
||||
_baseline_conversation_updater,
|
||||
transcript_builder=transcript_builder,
|
||||
model=active_model,
|
||||
state=state,
|
||||
)
|
||||
|
||||
try:
|
||||
@@ -872,11 +1228,24 @@ async def stream_chat_completion_baseline(
|
||||
log_prefix="[Baseline]",
|
||||
)
|
||||
|
||||
# Persist assistant response
|
||||
if state.assistant_text:
|
||||
session.messages.append(
|
||||
ChatMessage(role="assistant", content=state.assistant_text)
|
||||
# Persist structured tool-call history (assistant + tool messages)
|
||||
# collected by the conversation updater, then the final text response.
|
||||
for msg in state.session_messages:
|
||||
session.messages.append(msg)
|
||||
# Append the final assistant text (from the last LLM call that had
|
||||
# no tool calls, i.e. the natural finish). Only add it if the
|
||||
# conversation updater didn't already record it as part of a
|
||||
# tool-call round (which would have empty response_text).
|
||||
final_text = state.assistant_text
|
||||
if state.session_messages:
|
||||
# Strip text already captured in tool-call round messages
|
||||
recorded = "".join(
|
||||
m.content or "" for m in state.session_messages if m.role == "assistant"
|
||||
)
|
||||
if final_text.startswith(recorded):
|
||||
final_text = final_text[len(recorded) :]
|
||||
if final_text.strip():
|
||||
session.messages.append(ChatMessage(role="assistant", content=final_text))
|
||||
try:
|
||||
await upsert_chat_session(session)
|
||||
except Exception as persist_err:
|
||||
@@ -903,6 +1272,10 @@ async def stream_chat_completion_baseline(
|
||||
session_msg_count=len(session.messages),
|
||||
)
|
||||
|
||||
# Clean up the ephemeral working directory used for file attachments.
|
||||
if working_dir is not None:
|
||||
shutil.rmtree(working_dir, ignore_errors=True)
|
||||
|
||||
# Yield usage and finish AFTER try/finally (not inside finally).
|
||||
# PEP 525 prohibits yielding from finally in async generators during
|
||||
# aclose() — doing so raises RuntimeError on client disconnect.
|
||||
|
||||
@@ -7,11 +7,13 @@ without requiring API keys, database connections, or network access.
|
||||
from unittest.mock import AsyncMock, patch
|
||||
|
||||
import pytest
|
||||
from openai.types.chat import ChatCompletionToolParam
|
||||
|
||||
from backend.copilot.baseline.service import (
|
||||
_baseline_conversation_updater,
|
||||
_BaselineStreamState,
|
||||
_compress_session_messages,
|
||||
_ThinkingStripper,
|
||||
)
|
||||
from backend.copilot.model import ChatMessage
|
||||
from backend.copilot.transcript_builder import TranscriptBuilder
|
||||
@@ -365,3 +367,267 @@ class TestCompressSessionMessagesPreservesToolCalls:
|
||||
assert out[0].tool_calls is not None
|
||||
assert out[0].tool_calls[0]["id"] == "t1"
|
||||
assert out[1].tool_call_id == "t1"
|
||||
|
||||
|
||||
# ---- _ThinkingStripper tests ---- #
|
||||
|
||||
|
||||
def test_thinking_stripper_basic_thinking_tag() -> None:
|
||||
"""<thinking>...</thinking> blocks are fully stripped."""
|
||||
s = _ThinkingStripper()
|
||||
assert s.process("<thinking>internal reasoning here</thinking>Hello!") == "Hello!"
|
||||
|
||||
|
||||
def test_thinking_stripper_internal_reasoning_tag() -> None:
|
||||
"""<internal_reasoning>...</internal_reasoning> blocks (Gemini) are stripped."""
|
||||
s = _ThinkingStripper()
|
||||
assert (
|
||||
s.process("<internal_reasoning>step by step</internal_reasoning>Answer")
|
||||
== "Answer"
|
||||
)
|
||||
|
||||
|
||||
def test_thinking_stripper_split_across_chunks() -> None:
|
||||
"""Tags split across multiple chunks are handled correctly."""
|
||||
s = _ThinkingStripper()
|
||||
out = s.process("Hello <thin")
|
||||
out += s.process("king>secret</thinking> world")
|
||||
assert out == "Hello world"
|
||||
|
||||
|
||||
def test_thinking_stripper_plain_text_preserved() -> None:
|
||||
"""Plain text with the word 'thinking' is not stripped."""
|
||||
s = _ThinkingStripper()
|
||||
assert (
|
||||
s.process("I am thinking about this problem")
|
||||
== "I am thinking about this problem"
|
||||
)
|
||||
|
||||
|
||||
def test_thinking_stripper_multiple_blocks() -> None:
|
||||
"""Multiple reasoning blocks in one stream are all stripped."""
|
||||
s = _ThinkingStripper()
|
||||
result = s.process(
|
||||
"A<thinking>x</thinking>B<internal_reasoning>y</internal_reasoning>C"
|
||||
)
|
||||
assert result == "ABC"
|
||||
|
||||
|
||||
def test_thinking_stripper_flush_discards_unclosed() -> None:
|
||||
"""Unclosed reasoning block is discarded on flush."""
|
||||
s = _ThinkingStripper()
|
||||
s.process("Start<thinking>never closed")
|
||||
flushed = s.flush()
|
||||
assert "never closed" not in flushed
|
||||
|
||||
|
||||
def test_thinking_stripper_empty_block() -> None:
|
||||
"""Empty reasoning blocks are handled gracefully."""
|
||||
s = _ThinkingStripper()
|
||||
assert s.process("Before<thinking></thinking>After") == "BeforeAfter"
|
||||
|
||||
|
||||
# ---- _filter_tools_by_permissions tests ---- #
|
||||
|
||||
|
||||
def _make_tool(name: str) -> ChatCompletionToolParam:
|
||||
"""Build a minimal OpenAI ChatCompletionToolParam."""
|
||||
return ChatCompletionToolParam(
|
||||
type="function",
|
||||
function={"name": name, "parameters": {}},
|
||||
)
|
||||
|
||||
|
||||
class TestFilterToolsByPermissions:
|
||||
"""Tests for _filter_tools_by_permissions."""
|
||||
|
||||
@patch(
|
||||
"backend.copilot.permissions.all_known_tool_names",
|
||||
return_value=frozenset({"run_block", "web_fetch", "bash_exec"}),
|
||||
)
|
||||
def test_empty_permissions_returns_all(self, _mock_names):
|
||||
"""Empty permissions (no filtering) returns every tool unchanged."""
|
||||
from backend.copilot.baseline.service import _filter_tools_by_permissions
|
||||
from backend.copilot.permissions import CopilotPermissions
|
||||
|
||||
tools = [_make_tool("run_block"), _make_tool("web_fetch")]
|
||||
perms = CopilotPermissions()
|
||||
result = _filter_tools_by_permissions(tools, perms)
|
||||
assert result == tools
|
||||
|
||||
@patch(
|
||||
"backend.copilot.permissions.all_known_tool_names",
|
||||
return_value=frozenset({"run_block", "web_fetch", "bash_exec"}),
|
||||
)
|
||||
def test_allowlist_keeps_only_matching(self, _mock_names):
|
||||
"""Explicit allowlist (tools_exclude=False) keeps only listed tools."""
|
||||
from backend.copilot.baseline.service import _filter_tools_by_permissions
|
||||
from backend.copilot.permissions import CopilotPermissions
|
||||
|
||||
tools = [
|
||||
_make_tool("run_block"),
|
||||
_make_tool("web_fetch"),
|
||||
_make_tool("bash_exec"),
|
||||
]
|
||||
perms = CopilotPermissions(tools=["web_fetch"], tools_exclude=False)
|
||||
result = _filter_tools_by_permissions(tools, perms)
|
||||
assert len(result) == 1
|
||||
assert result[0]["function"]["name"] == "web_fetch"
|
||||
|
||||
@patch(
|
||||
"backend.copilot.permissions.all_known_tool_names",
|
||||
return_value=frozenset({"run_block", "web_fetch", "bash_exec"}),
|
||||
)
|
||||
def test_blacklist_excludes_listed(self, _mock_names):
|
||||
"""Blacklist (tools_exclude=True) removes only the listed tools."""
|
||||
from backend.copilot.baseline.service import _filter_tools_by_permissions
|
||||
from backend.copilot.permissions import CopilotPermissions
|
||||
|
||||
tools = [
|
||||
_make_tool("run_block"),
|
||||
_make_tool("web_fetch"),
|
||||
_make_tool("bash_exec"),
|
||||
]
|
||||
perms = CopilotPermissions(tools=["bash_exec"], tools_exclude=True)
|
||||
result = _filter_tools_by_permissions(tools, perms)
|
||||
names = [t["function"]["name"] for t in result]
|
||||
assert "bash_exec" not in names
|
||||
assert "run_block" in names
|
||||
assert "web_fetch" in names
|
||||
assert len(result) == 2
|
||||
|
||||
@patch(
|
||||
"backend.copilot.permissions.all_known_tool_names",
|
||||
return_value=frozenset({"run_block", "web_fetch", "bash_exec"}),
|
||||
)
|
||||
def test_unknown_tool_name_filtered_out(self, _mock_names):
|
||||
"""A tool whose name is not in all_known_tool_names is dropped."""
|
||||
from backend.copilot.baseline.service import _filter_tools_by_permissions
|
||||
from backend.copilot.permissions import CopilotPermissions
|
||||
|
||||
tools = [_make_tool("run_block"), _make_tool("unknown_tool")]
|
||||
perms = CopilotPermissions(tools=["run_block"], tools_exclude=False)
|
||||
result = _filter_tools_by_permissions(tools, perms)
|
||||
names = [t["function"]["name"] for t in result]
|
||||
assert "unknown_tool" not in names
|
||||
assert names == ["run_block"]
|
||||
|
||||
|
||||
# ---- _prepare_baseline_attachments tests ---- #
|
||||
|
||||
|
||||
class TestPrepareBaselineAttachments:
|
||||
"""Tests for _prepare_baseline_attachments."""
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_empty_file_ids(self):
|
||||
"""Empty file_ids returns empty hint and blocks."""
|
||||
from backend.copilot.baseline.service import _prepare_baseline_attachments
|
||||
|
||||
hint, blocks = await _prepare_baseline_attachments([], "user1", "sess1", "/tmp")
|
||||
assert hint == ""
|
||||
assert blocks == []
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_empty_user_id(self):
|
||||
"""Empty user_id returns empty hint and blocks."""
|
||||
from backend.copilot.baseline.service import _prepare_baseline_attachments
|
||||
|
||||
hint, blocks = await _prepare_baseline_attachments(
|
||||
["file1"], "", "sess1", "/tmp"
|
||||
)
|
||||
assert hint == ""
|
||||
assert blocks == []
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_image_file_returns_vision_blocks(self):
|
||||
"""A PNG image within size limits is returned as a base64 vision block."""
|
||||
from backend.copilot.baseline.service import _prepare_baseline_attachments
|
||||
|
||||
fake_info = AsyncMock()
|
||||
fake_info.name = "photo.png"
|
||||
fake_info.mime_type = "image/png"
|
||||
fake_info.size_bytes = 1024
|
||||
|
||||
fake_manager = AsyncMock()
|
||||
fake_manager.get_file_info = AsyncMock(return_value=fake_info)
|
||||
fake_manager.read_file_by_id = AsyncMock(return_value=b"\x89PNG_FAKE_DATA")
|
||||
|
||||
with patch(
|
||||
"backend.copilot.baseline.service.get_workspace_manager",
|
||||
new=AsyncMock(return_value=fake_manager),
|
||||
):
|
||||
hint, blocks = await _prepare_baseline_attachments(
|
||||
["fid1"], "user1", "sess1", "/tmp/workdir"
|
||||
)
|
||||
|
||||
assert len(blocks) == 1
|
||||
assert blocks[0]["type"] == "image"
|
||||
assert blocks[0]["source"]["media_type"] == "image/png"
|
||||
assert blocks[0]["source"]["type"] == "base64"
|
||||
assert "photo.png" in hint
|
||||
assert "embedded as image" in hint
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_non_image_file_saved_to_working_dir(self, tmp_path):
|
||||
"""A non-image file is written to working_dir."""
|
||||
from backend.copilot.baseline.service import _prepare_baseline_attachments
|
||||
|
||||
fake_info = AsyncMock()
|
||||
fake_info.name = "data.csv"
|
||||
fake_info.mime_type = "text/csv"
|
||||
fake_info.size_bytes = 42
|
||||
|
||||
fake_manager = AsyncMock()
|
||||
fake_manager.get_file_info = AsyncMock(return_value=fake_info)
|
||||
fake_manager.read_file_by_id = AsyncMock(return_value=b"col1,col2\na,b")
|
||||
|
||||
with patch(
|
||||
"backend.copilot.baseline.service.get_workspace_manager",
|
||||
new=AsyncMock(return_value=fake_manager),
|
||||
):
|
||||
hint, blocks = await _prepare_baseline_attachments(
|
||||
["fid1"], "user1", "sess1", str(tmp_path)
|
||||
)
|
||||
|
||||
assert blocks == []
|
||||
assert "data.csv" in hint
|
||||
assert "saved to" in hint
|
||||
saved = tmp_path / "data.csv"
|
||||
assert saved.exists()
|
||||
assert saved.read_bytes() == b"col1,col2\na,b"
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_file_not_found_skipped(self):
|
||||
"""When get_file_info returns None the file is silently skipped."""
|
||||
from backend.copilot.baseline.service import _prepare_baseline_attachments
|
||||
|
||||
fake_manager = AsyncMock()
|
||||
fake_manager.get_file_info = AsyncMock(return_value=None)
|
||||
|
||||
with patch(
|
||||
"backend.copilot.baseline.service.get_workspace_manager",
|
||||
new=AsyncMock(return_value=fake_manager),
|
||||
):
|
||||
hint, blocks = await _prepare_baseline_attachments(
|
||||
["missing_id"], "user1", "sess1", "/tmp"
|
||||
)
|
||||
|
||||
assert hint == ""
|
||||
assert blocks == []
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_workspace_manager_error(self):
|
||||
"""When get_workspace_manager raises, returns empty results."""
|
||||
from backend.copilot.baseline.service import _prepare_baseline_attachments
|
||||
|
||||
with patch(
|
||||
"backend.copilot.baseline.service.get_workspace_manager",
|
||||
new=AsyncMock(side_effect=RuntimeError("connection failed")),
|
||||
):
|
||||
hint, blocks = await _prepare_baseline_attachments(
|
||||
["fid1"], "user1", "sess1", "/tmp"
|
||||
)
|
||||
|
||||
assert hint == ""
|
||||
assert blocks == []
|
||||
|
||||
@@ -14,6 +14,7 @@ from prisma.types import (
|
||||
ChatSessionUpdateInput,
|
||||
ChatSessionWhereInput,
|
||||
)
|
||||
from pydantic import BaseModel
|
||||
|
||||
from backend.data import db
|
||||
from backend.util.json import SafeJson, sanitize_string
|
||||
@@ -30,6 +31,15 @@ from .model import get_chat_session as get_chat_session_cached
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
class PaginatedMessages(BaseModel):
|
||||
"""Result of a paginated message query."""
|
||||
|
||||
messages: list[ChatMessage]
|
||||
has_more: bool
|
||||
oldest_sequence: int | None
|
||||
session: ChatSessionInfo
|
||||
|
||||
|
||||
async def get_chat_session(session_id: str) -> ChatSession | None:
|
||||
"""Get a chat session by ID from the database."""
|
||||
session = await PrismaChatSession.prisma().find_unique(
|
||||
@@ -39,6 +49,116 @@ async def get_chat_session(session_id: str) -> ChatSession | None:
|
||||
return ChatSession.from_db(session) if session else None
|
||||
|
||||
|
||||
async def get_chat_session_metadata(session_id: str) -> ChatSessionInfo | None:
|
||||
"""Get chat session metadata (without messages) for ownership validation."""
|
||||
session = await PrismaChatSession.prisma().find_unique(
|
||||
where={"id": session_id},
|
||||
)
|
||||
return ChatSessionInfo.from_db(session) if session else None
|
||||
|
||||
|
||||
async def get_chat_messages_paginated(
|
||||
session_id: str,
|
||||
limit: int = 50,
|
||||
before_sequence: int | None = None,
|
||||
user_id: str | None = None,
|
||||
) -> PaginatedMessages | None:
|
||||
"""Get paginated messages for a session, newest first.
|
||||
|
||||
Verifies session existence (and ownership when ``user_id`` is provided)
|
||||
in parallel with the message query. Returns ``None`` when the session
|
||||
is not found or does not belong to the user.
|
||||
|
||||
Args:
|
||||
session_id: The chat session ID.
|
||||
limit: Max messages to return.
|
||||
before_sequence: Cursor — return messages with sequence < this value.
|
||||
user_id: If provided, filters via ``Session.userId`` so only the
|
||||
session owner's messages are returned (acts as an ownership guard).
|
||||
"""
|
||||
# Build session-existence / ownership check
|
||||
session_where: ChatSessionWhereInput = {"id": session_id}
|
||||
if user_id is not None:
|
||||
session_where["userId"] = user_id
|
||||
|
||||
# Build message include — fetch paginated messages in the same query
|
||||
msg_include: dict[str, Any] = {
|
||||
"order_by": {"sequence": "desc"},
|
||||
"take": limit + 1,
|
||||
}
|
||||
if before_sequence is not None:
|
||||
msg_include["where"] = {"sequence": {"lt": before_sequence}}
|
||||
|
||||
# Single query: session existence/ownership + paginated messages
|
||||
session = await PrismaChatSession.prisma().find_first(
|
||||
where=session_where,
|
||||
include={"Messages": msg_include},
|
||||
)
|
||||
|
||||
if session is None:
|
||||
return None
|
||||
|
||||
session_info = ChatSessionInfo.from_db(session)
|
||||
results = list(session.Messages) if session.Messages else []
|
||||
|
||||
has_more = len(results) > limit
|
||||
results = results[:limit]
|
||||
|
||||
# Reverse to ascending order
|
||||
results.reverse()
|
||||
|
||||
# Tool-call boundary fix: if the oldest message is a tool message,
|
||||
# expand backward to include the preceding assistant message that
|
||||
# owns the tool_calls, so convertChatSessionMessagesToUiMessages
|
||||
# can pair them correctly.
|
||||
_BOUNDARY_SCAN_LIMIT = 10
|
||||
if results and results[0].role == "tool":
|
||||
boundary_where: dict[str, Any] = {
|
||||
"sessionId": session_id,
|
||||
"sequence": {"lt": results[0].sequence},
|
||||
}
|
||||
if user_id is not None:
|
||||
boundary_where["Session"] = {"is": {"userId": user_id}}
|
||||
extra = await PrismaChatMessage.prisma().find_many(
|
||||
where=boundary_where,
|
||||
order={"sequence": "desc"},
|
||||
take=_BOUNDARY_SCAN_LIMIT,
|
||||
)
|
||||
# Find the first non-tool message (should be the assistant)
|
||||
boundary_msgs = []
|
||||
found_owner = False
|
||||
for msg in extra:
|
||||
boundary_msgs.append(msg)
|
||||
if msg.role != "tool":
|
||||
found_owner = True
|
||||
break
|
||||
boundary_msgs.reverse()
|
||||
if not found_owner:
|
||||
logger.warning(
|
||||
"Boundary expansion did not find owning assistant message "
|
||||
"for session=%s before sequence=%s (%d msgs scanned)",
|
||||
session_id,
|
||||
results[0].sequence,
|
||||
len(extra),
|
||||
)
|
||||
if boundary_msgs:
|
||||
results = boundary_msgs + results
|
||||
# Only mark has_more if the expanded boundary isn't the
|
||||
# very start of the conversation (sequence 0).
|
||||
if boundary_msgs[0].sequence > 0:
|
||||
has_more = True
|
||||
|
||||
messages = [ChatMessage.from_db(m) for m in results]
|
||||
oldest_sequence = messages[0].sequence if messages else None
|
||||
|
||||
return PaginatedMessages(
|
||||
messages=messages,
|
||||
has_more=has_more,
|
||||
oldest_sequence=oldest_sequence,
|
||||
session=session_info,
|
||||
)
|
||||
|
||||
|
||||
async def create_chat_session(
|
||||
session_id: str,
|
||||
user_id: str,
|
||||
|
||||
@@ -1,7 +1,341 @@
|
||||
import pytest
|
||||
"""Unit tests for copilot.db — paginated message queries."""
|
||||
|
||||
from .db import set_turn_duration
|
||||
from .model import ChatMessage, ChatSession, get_chat_session, upsert_chat_session
|
||||
from __future__ import annotations
|
||||
|
||||
from datetime import UTC, datetime
|
||||
from typing import Any
|
||||
from unittest.mock import AsyncMock, patch
|
||||
|
||||
import pytest
|
||||
from prisma.models import ChatMessage as PrismaChatMessage
|
||||
from prisma.models import ChatSession as PrismaChatSession
|
||||
|
||||
from backend.copilot.db import (
|
||||
PaginatedMessages,
|
||||
get_chat_messages_paginated,
|
||||
set_turn_duration,
|
||||
)
|
||||
from backend.copilot.model import ChatMessage as CopilotChatMessage
|
||||
from backend.copilot.model import ChatSession, get_chat_session, upsert_chat_session
|
||||
|
||||
|
||||
def _make_msg(
|
||||
sequence: int,
|
||||
role: str = "assistant",
|
||||
content: str | None = "hello",
|
||||
tool_calls: Any = None,
|
||||
) -> PrismaChatMessage:
|
||||
"""Build a minimal PrismaChatMessage for testing."""
|
||||
return PrismaChatMessage(
|
||||
id=f"msg-{sequence}",
|
||||
createdAt=datetime.now(UTC),
|
||||
sessionId="sess-1",
|
||||
role=role,
|
||||
content=content,
|
||||
sequence=sequence,
|
||||
toolCalls=tool_calls,
|
||||
name=None,
|
||||
toolCallId=None,
|
||||
refusal=None,
|
||||
functionCall=None,
|
||||
)
|
||||
|
||||
|
||||
def _make_session(
|
||||
session_id: str = "sess-1",
|
||||
user_id: str = "user-1",
|
||||
messages: list[PrismaChatMessage] | None = None,
|
||||
) -> PrismaChatSession:
|
||||
"""Build a minimal PrismaChatSession for testing."""
|
||||
now = datetime.now(UTC)
|
||||
session = PrismaChatSession.model_construct(
|
||||
id=session_id,
|
||||
createdAt=now,
|
||||
updatedAt=now,
|
||||
userId=user_id,
|
||||
credentials={},
|
||||
successfulAgentRuns={},
|
||||
successfulAgentSchedules={},
|
||||
totalPromptTokens=0,
|
||||
totalCompletionTokens=0,
|
||||
title=None,
|
||||
metadata={},
|
||||
Messages=messages or [],
|
||||
)
|
||||
return session
|
||||
|
||||
|
||||
SESSION_ID = "sess-1"
|
||||
|
||||
|
||||
@pytest.fixture()
|
||||
def mock_db():
|
||||
"""Patch ChatSession.prisma().find_first and ChatMessage.prisma().find_many.
|
||||
|
||||
find_first is used for the main query (session + included messages).
|
||||
find_many is used only for boundary expansion queries.
|
||||
"""
|
||||
with (
|
||||
patch.object(PrismaChatSession, "prisma") as mock_session_prisma,
|
||||
patch.object(PrismaChatMessage, "prisma") as mock_msg_prisma,
|
||||
):
|
||||
find_first = AsyncMock()
|
||||
mock_session_prisma.return_value.find_first = find_first
|
||||
|
||||
find_many = AsyncMock(return_value=[])
|
||||
mock_msg_prisma.return_value.find_many = find_many
|
||||
|
||||
yield find_first, find_many
|
||||
|
||||
|
||||
# ---------- Basic pagination ----------
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_basic_page_returns_messages_ascending(
|
||||
mock_db: tuple[AsyncMock, AsyncMock],
|
||||
):
|
||||
"""Messages are returned in ascending sequence order."""
|
||||
find_first, _ = mock_db
|
||||
find_first.return_value = _make_session(
|
||||
messages=[_make_msg(3), _make_msg(2), _make_msg(1)],
|
||||
)
|
||||
|
||||
page = await get_chat_messages_paginated(SESSION_ID, limit=5)
|
||||
|
||||
assert isinstance(page, PaginatedMessages)
|
||||
assert [m.sequence for m in page.messages] == [1, 2, 3]
|
||||
assert page.has_more is False
|
||||
assert page.oldest_sequence == 1
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_has_more_when_results_exceed_limit(
|
||||
mock_db: tuple[AsyncMock, AsyncMock],
|
||||
):
|
||||
"""has_more is True when DB returns more than limit items."""
|
||||
find_first, _ = mock_db
|
||||
find_first.return_value = _make_session(
|
||||
messages=[_make_msg(3), _make_msg(2), _make_msg(1)],
|
||||
)
|
||||
|
||||
page = await get_chat_messages_paginated(SESSION_ID, limit=2)
|
||||
|
||||
assert page is not None
|
||||
assert page.has_more is True
|
||||
assert len(page.messages) == 2
|
||||
assert [m.sequence for m in page.messages] == [2, 3]
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_empty_session_returns_no_messages(
|
||||
mock_db: tuple[AsyncMock, AsyncMock],
|
||||
):
|
||||
find_first, _ = mock_db
|
||||
find_first.return_value = _make_session(messages=[])
|
||||
|
||||
page = await get_chat_messages_paginated(SESSION_ID, limit=50)
|
||||
|
||||
assert page is not None
|
||||
assert page.messages == []
|
||||
assert page.has_more is False
|
||||
assert page.oldest_sequence is None
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_before_sequence_filters_correctly(
|
||||
mock_db: tuple[AsyncMock, AsyncMock],
|
||||
):
|
||||
"""before_sequence is passed as a where filter inside the Messages include."""
|
||||
find_first, _ = mock_db
|
||||
find_first.return_value = _make_session(
|
||||
messages=[_make_msg(2), _make_msg(1)],
|
||||
)
|
||||
|
||||
await get_chat_messages_paginated(SESSION_ID, limit=50, before_sequence=5)
|
||||
|
||||
call_kwargs = find_first.call_args
|
||||
include = call_kwargs.kwargs.get("include") or call_kwargs[1].get("include")
|
||||
assert include["Messages"]["where"] == {"sequence": {"lt": 5}}
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_no_where_on_messages_without_before_sequence(
|
||||
mock_db: tuple[AsyncMock, AsyncMock],
|
||||
):
|
||||
"""Without before_sequence, the Messages include has no where clause."""
|
||||
find_first, _ = mock_db
|
||||
find_first.return_value = _make_session(messages=[_make_msg(1)])
|
||||
|
||||
await get_chat_messages_paginated(SESSION_ID, limit=50)
|
||||
|
||||
call_kwargs = find_first.call_args
|
||||
include = call_kwargs.kwargs.get("include") or call_kwargs[1].get("include")
|
||||
assert "where" not in include["Messages"]
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_user_id_filter_applied_to_session_where(
|
||||
mock_db: tuple[AsyncMock, AsyncMock],
|
||||
):
|
||||
"""user_id adds a userId filter to the session-level where clause."""
|
||||
find_first, _ = mock_db
|
||||
find_first.return_value = _make_session(messages=[_make_msg(1)])
|
||||
|
||||
await get_chat_messages_paginated(SESSION_ID, limit=50, user_id="user-abc")
|
||||
|
||||
call_kwargs = find_first.call_args
|
||||
where = call_kwargs.kwargs.get("where") or call_kwargs[1].get("where")
|
||||
assert where["userId"] == "user-abc"
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_session_not_found_returns_none(
|
||||
mock_db: tuple[AsyncMock, AsyncMock],
|
||||
):
|
||||
"""Returns None when session doesn't exist or user doesn't own it."""
|
||||
find_first, _ = mock_db
|
||||
find_first.return_value = None
|
||||
|
||||
page = await get_chat_messages_paginated(SESSION_ID, limit=50)
|
||||
|
||||
assert page is None
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_session_info_included_in_result(
|
||||
mock_db: tuple[AsyncMock, AsyncMock],
|
||||
):
|
||||
"""PaginatedMessages includes session metadata."""
|
||||
find_first, _ = mock_db
|
||||
find_first.return_value = _make_session(messages=[_make_msg(1)])
|
||||
|
||||
page = await get_chat_messages_paginated(SESSION_ID, limit=50)
|
||||
|
||||
assert page is not None
|
||||
assert page.session.session_id == SESSION_ID
|
||||
|
||||
|
||||
# ---------- Backward boundary expansion ----------
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_boundary_expansion_includes_assistant(
|
||||
mock_db: tuple[AsyncMock, AsyncMock],
|
||||
):
|
||||
"""When page starts with a tool message, expand backward to include
|
||||
the owning assistant message."""
|
||||
find_first, find_many = mock_db
|
||||
find_first.return_value = _make_session(
|
||||
messages=[_make_msg(5, role="tool"), _make_msg(4, role="tool")],
|
||||
)
|
||||
find_many.return_value = [_make_msg(3, role="assistant")]
|
||||
|
||||
page = await get_chat_messages_paginated(SESSION_ID, limit=5)
|
||||
|
||||
assert page is not None
|
||||
assert [m.sequence for m in page.messages] == [3, 4, 5]
|
||||
assert page.messages[0].role == "assistant"
|
||||
assert page.oldest_sequence == 3
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_boundary_expansion_includes_multiple_tool_msgs(
|
||||
mock_db: tuple[AsyncMock, AsyncMock],
|
||||
):
|
||||
"""Boundary expansion scans past consecutive tool messages to find
|
||||
the owning assistant."""
|
||||
find_first, find_many = mock_db
|
||||
find_first.return_value = _make_session(
|
||||
messages=[_make_msg(7, role="tool")],
|
||||
)
|
||||
find_many.return_value = [
|
||||
_make_msg(6, role="tool"),
|
||||
_make_msg(5, role="tool"),
|
||||
_make_msg(4, role="assistant"),
|
||||
]
|
||||
|
||||
page = await get_chat_messages_paginated(SESSION_ID, limit=5)
|
||||
|
||||
assert page is not None
|
||||
assert [m.sequence for m in page.messages] == [4, 5, 6, 7]
|
||||
assert page.messages[0].role == "assistant"
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_boundary_expansion_sets_has_more_when_not_at_start(
|
||||
mock_db: tuple[AsyncMock, AsyncMock],
|
||||
):
|
||||
"""After boundary expansion, has_more=True if expanded msgs aren't at seq 0."""
|
||||
find_first, find_many = mock_db
|
||||
find_first.return_value = _make_session(
|
||||
messages=[_make_msg(3, role="tool")],
|
||||
)
|
||||
find_many.return_value = [_make_msg(2, role="assistant")]
|
||||
|
||||
page = await get_chat_messages_paginated(SESSION_ID, limit=5)
|
||||
|
||||
assert page is not None
|
||||
assert page.has_more is True
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_boundary_expansion_no_has_more_at_conversation_start(
|
||||
mock_db: tuple[AsyncMock, AsyncMock],
|
||||
):
|
||||
"""has_more stays False when boundary expansion reaches seq 0."""
|
||||
find_first, find_many = mock_db
|
||||
find_first.return_value = _make_session(
|
||||
messages=[_make_msg(1, role="tool")],
|
||||
)
|
||||
find_many.return_value = [_make_msg(0, role="assistant")]
|
||||
|
||||
page = await get_chat_messages_paginated(SESSION_ID, limit=5)
|
||||
|
||||
assert page is not None
|
||||
assert page.has_more is False
|
||||
assert page.oldest_sequence == 0
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_no_boundary_expansion_when_first_msg_not_tool(
|
||||
mock_db: tuple[AsyncMock, AsyncMock],
|
||||
):
|
||||
"""No boundary expansion when the first message is not a tool message."""
|
||||
find_first, find_many = mock_db
|
||||
find_first.return_value = _make_session(
|
||||
messages=[_make_msg(3, role="user"), _make_msg(2, role="assistant")],
|
||||
)
|
||||
|
||||
page = await get_chat_messages_paginated(SESSION_ID, limit=5)
|
||||
|
||||
assert page is not None
|
||||
assert find_many.call_count == 0
|
||||
assert [m.sequence for m in page.messages] == [2, 3]
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_boundary_expansion_warns_when_no_owner_found(
|
||||
mock_db: tuple[AsyncMock, AsyncMock],
|
||||
):
|
||||
"""When boundary scan doesn't find a non-tool message, a warning is logged
|
||||
and the boundary messages are still included."""
|
||||
find_first, find_many = mock_db
|
||||
find_first.return_value = _make_session(
|
||||
messages=[_make_msg(10, role="tool")],
|
||||
)
|
||||
find_many.return_value = [_make_msg(i, role="tool") for i in range(9, -1, -1)]
|
||||
|
||||
with patch("backend.copilot.db.logger") as mock_logger:
|
||||
page = await get_chat_messages_paginated(SESSION_ID, limit=5)
|
||||
mock_logger.warning.assert_called_once()
|
||||
|
||||
assert page is not None
|
||||
assert page.messages[0].role == "tool"
|
||||
assert len(page.messages) > 1
|
||||
|
||||
|
||||
# ---------- Turn duration (integration tests) ----------
|
||||
|
||||
|
||||
@pytest.mark.asyncio(loop_scope="session")
|
||||
@@ -15,8 +349,8 @@ async def test_set_turn_duration_updates_cache_in_place(setup_test_user, test_us
|
||||
"""
|
||||
session = ChatSession.new(user_id=test_user_id, dry_run=False)
|
||||
session.messages = [
|
||||
ChatMessage(role="user", content="hello"),
|
||||
ChatMessage(role="assistant", content="hi there"),
|
||||
CopilotChatMessage(role="user", content="hello"),
|
||||
CopilotChatMessage(role="assistant", content="hi there"),
|
||||
]
|
||||
session = await upsert_chat_session(session)
|
||||
|
||||
@@ -41,7 +375,7 @@ async def test_set_turn_duration_no_assistant_message(setup_test_user, test_user
|
||||
"""set_turn_duration is a no-op when there are no assistant messages."""
|
||||
session = ChatSession.new(user_id=test_user_id, dry_run=False)
|
||||
session.messages = [
|
||||
ChatMessage(role="user", content="hello"),
|
||||
CopilotChatMessage(role="user", content="hello"),
|
||||
]
|
||||
session = await upsert_chat_session(session)
|
||||
|
||||
|
||||
@@ -64,6 +64,7 @@ class ChatMessage(BaseModel):
|
||||
refusal: str | None = None
|
||||
tool_calls: list[dict] | None = None
|
||||
function_call: dict | None = None
|
||||
sequence: int | None = None
|
||||
duration_ms: int | None = None
|
||||
|
||||
@staticmethod
|
||||
@@ -77,6 +78,7 @@ class ChatMessage(BaseModel):
|
||||
refusal=prisma_message.refusal,
|
||||
tool_calls=_parse_json_field(prisma_message.toolCalls),
|
||||
function_call=_parse_json_field(prisma_message.functionCall),
|
||||
sequence=prisma_message.sequence,
|
||||
duration_ms=prisma_message.durationMs,
|
||||
)
|
||||
|
||||
|
||||
@@ -50,7 +50,7 @@ from backend.executor.cluster_lock import AsyncClusterLock
|
||||
from backend.util.exceptions import NotFoundError
|
||||
from backend.util.settings import Settings
|
||||
|
||||
from ..config import ChatConfig
|
||||
from ..config import ChatConfig, CopilotMode
|
||||
from ..constants import (
|
||||
COPILOT_ERROR_PREFIX,
|
||||
COPILOT_RETRYABLE_ERROR_PREFIX,
|
||||
@@ -1677,6 +1677,7 @@ async def stream_chat_completion_sdk(
|
||||
session: ChatSession | None = None,
|
||||
file_ids: list[str] | None = None,
|
||||
permissions: "CopilotPermissions | None" = None,
|
||||
mode: CopilotMode | None = None,
|
||||
**_kwargs: Any,
|
||||
) -> AsyncIterator[StreamBaseResponse]:
|
||||
"""Stream chat completion using Claude Agent SDK.
|
||||
@@ -1685,7 +1686,10 @@ async def stream_chat_completion_sdk(
|
||||
file_ids: Optional workspace file IDs attached to the user's message.
|
||||
Images are embedded as vision content blocks; other files are
|
||||
saved to the SDK working directory for the Read tool.
|
||||
mode: Accepted for signature compatibility with the baseline path.
|
||||
The SDK path does not currently branch on this value.
|
||||
"""
|
||||
_ = mode # SDK path ignores the requested mode.
|
||||
|
||||
if session is None:
|
||||
session = await get_chat_session(session_id, user_id)
|
||||
|
||||
@@ -890,6 +890,12 @@ class AgentFixer:
|
||||
)
|
||||
|
||||
if is_ai_block:
|
||||
# Skip AI blocks that don't expose a "model" input property
|
||||
# (some AI-category blocks have no model selector at all).
|
||||
input_properties = block.get("inputSchema", {}).get("properties", {})
|
||||
if "model" not in input_properties:
|
||||
continue
|
||||
|
||||
node_id = node.get("id")
|
||||
input_default = node.get("input_default", {})
|
||||
current_model = input_default.get("model")
|
||||
@@ -898,9 +904,7 @@ class AgentFixer:
|
||||
# Blocks with a block-specific enum on the model field (e.g.
|
||||
# PerplexityBlock) use their own enum values; others use the
|
||||
# generic set.
|
||||
model_schema = (
|
||||
block.get("inputSchema", {}).get("properties", {}).get("model", {})
|
||||
)
|
||||
model_schema = input_properties.get("model", {})
|
||||
block_model_enum = model_schema.get("enum")
|
||||
|
||||
if block_model_enum:
|
||||
|
||||
@@ -580,6 +580,29 @@ class TestFixAiModelParameter:
|
||||
|
||||
assert result["nodes"][0]["input_default"]["model"] == "perplexity/sonar"
|
||||
|
||||
def test_ai_block_without_model_property_is_skipped(self):
|
||||
"""AI-category blocks that have no 'model' input property should not
|
||||
have a model injected — they simply don't expose a model selector."""
|
||||
fixer = AgentFixer()
|
||||
block_id = generate_uuid()
|
||||
node = _make_node(node_id="n1", block_id=block_id, input_default={})
|
||||
agent = _make_agent(nodes=[node])
|
||||
|
||||
blocks = [
|
||||
{
|
||||
"id": block_id,
|
||||
"name": "SomeAIBlock",
|
||||
"categories": [{"category": "AI"}],
|
||||
"inputSchema": {
|
||||
"properties": {"prompt": {"type": "string"}},
|
||||
},
|
||||
}
|
||||
]
|
||||
|
||||
result = fixer.fix_ai_model_parameter(agent, blocks)
|
||||
|
||||
assert "model" not in result["nodes"][0]["input_default"]
|
||||
|
||||
|
||||
class TestFixAgentExecutorBlocks:
|
||||
"""Tests for fix_agent_executor_blocks."""
|
||||
|
||||
@@ -845,6 +845,7 @@ class WriteWorkspaceFileTool(BaseTool):
|
||||
path=path,
|
||||
mime_type=mime_type,
|
||||
overwrite=overwrite,
|
||||
metadata={"origin": "agent-created"},
|
||||
)
|
||||
|
||||
# Build informative source label and message.
|
||||
|
||||
@@ -1,5 +1,6 @@
|
||||
import contextlib
|
||||
import logging
|
||||
import os
|
||||
from enum import Enum
|
||||
from functools import wraps
|
||||
from typing import Any, Awaitable, Callable, TypeVar
|
||||
@@ -166,6 +167,30 @@ async def get_feature_flag_value(
|
||||
return default
|
||||
|
||||
|
||||
def _env_flag_override(flag_key: Flag) -> bool | None:
|
||||
"""Return a local override for ``flag_key`` from the environment.
|
||||
|
||||
Set ``FORCE_FLAG_<NAME>=true|false`` (``NAME`` = flag value with
|
||||
``-`` → ``_``, upper-cased) to bypass LaunchDarkly for a single
|
||||
flag in local dev or tests. Returns ``None`` when no override
|
||||
is configured so the caller falls through to LaunchDarkly.
|
||||
|
||||
The ``NEXT_PUBLIC_FORCE_FLAG_<NAME>`` prefix is also accepted so a
|
||||
single shared env var can toggle a flag across backend and
|
||||
frontend (the frontend requires the ``NEXT_PUBLIC_`` prefix to
|
||||
expose the value to the browser bundle).
|
||||
|
||||
Example: ``FORCE_FLAG_CHAT_MODE_OPTION=true`` forces
|
||||
``Flag.CHAT_MODE_OPTION`` on regardless of LaunchDarkly.
|
||||
"""
|
||||
suffix = flag_key.value.upper().replace("-", "_")
|
||||
for prefix in ("FORCE_FLAG_", "NEXT_PUBLIC_FORCE_FLAG_"):
|
||||
raw = os.environ.get(prefix + suffix)
|
||||
if raw is not None:
|
||||
return raw.strip().lower() in ("1", "true", "yes", "on")
|
||||
return None
|
||||
|
||||
|
||||
async def is_feature_enabled(
|
||||
flag_key: Flag,
|
||||
user_id: str,
|
||||
@@ -182,6 +207,11 @@ async def is_feature_enabled(
|
||||
Returns:
|
||||
True if feature is enabled, False otherwise
|
||||
"""
|
||||
override = _env_flag_override(flag_key)
|
||||
if override is not None:
|
||||
logger.debug(f"Feature flag {flag_key} overridden by env: {override}")
|
||||
return override
|
||||
|
||||
result = await get_feature_flag_value(flag_key.value, user_id, default)
|
||||
|
||||
# If the result is already a boolean, return it
|
||||
|
||||
@@ -4,6 +4,7 @@ from ldclient import LDClient

from backend.util.feature_flag import (
    Flag,
    _env_flag_override,
    feature_flag,
    is_feature_enabled,
    mock_flag_variation,
@@ -111,3 +112,59 @@ async def test_is_feature_enabled_with_flag_enum(mocker):
    assert result is True
    # Should call with the flag's string value
    mock_get_feature_flag_value.assert_called_once()


class TestEnvFlagOverride:
    def test_force_flag_true(self, monkeypatch: pytest.MonkeyPatch):
        monkeypatch.setenv("FORCE_FLAG_CHAT", "true")
        assert _env_flag_override(Flag.CHAT) is True

    def test_force_flag_false(self, monkeypatch: pytest.MonkeyPatch):
        monkeypatch.setenv("FORCE_FLAG_CHAT", "false")
        assert _env_flag_override(Flag.CHAT) is False

    def test_next_public_prefix_true(self, monkeypatch: pytest.MonkeyPatch):
        monkeypatch.setenv("NEXT_PUBLIC_FORCE_FLAG_CHAT", "true")
        assert _env_flag_override(Flag.CHAT) is True

    def test_unset_returns_none(self, monkeypatch: pytest.MonkeyPatch):
        monkeypatch.delenv("FORCE_FLAG_CHAT", raising=False)
        monkeypatch.delenv("NEXT_PUBLIC_FORCE_FLAG_CHAT", raising=False)
        assert _env_flag_override(Flag.CHAT) is None

    def test_invalid_value_returns_false(self, monkeypatch: pytest.MonkeyPatch):
        monkeypatch.setenv("FORCE_FLAG_CHAT", "notaboolean")
        assert _env_flag_override(Flag.CHAT) is False

    def test_numeric_one_returns_true(self, monkeypatch: pytest.MonkeyPatch):
        monkeypatch.setenv("FORCE_FLAG_CHAT", "1")
        assert _env_flag_override(Flag.CHAT) is True

    def test_yes_returns_true(self, monkeypatch: pytest.MonkeyPatch):
        monkeypatch.setenv("FORCE_FLAG_CHAT", "yes")
        assert _env_flag_override(Flag.CHAT) is True

    def test_on_returns_true(self, monkeypatch: pytest.MonkeyPatch):
        monkeypatch.setenv("FORCE_FLAG_CHAT", "on")
        assert _env_flag_override(Flag.CHAT) is True

    def test_hyphenated_flag_converts_to_underscore(
        self, monkeypatch: pytest.MonkeyPatch
    ):
        monkeypatch.setenv("FORCE_FLAG_CHAT_MODE_OPTION", "true")
        assert _env_flag_override(Flag.CHAT_MODE_OPTION) is True

    def test_force_flag_takes_precedence_over_next_public(
        self, monkeypatch: pytest.MonkeyPatch
    ):
        monkeypatch.setenv("FORCE_FLAG_CHAT", "false")
        monkeypatch.setenv("NEXT_PUBLIC_FORCE_FLAG_CHAT", "true")
        assert _env_flag_override(Flag.CHAT) is False

    def test_whitespace_is_stripped(self, monkeypatch: pytest.MonkeyPatch):
        monkeypatch.setenv("FORCE_FLAG_CHAT", " true ")
        assert _env_flag_override(Flag.CHAT) is True

    def test_case_insensitive_value(self, monkeypatch: pytest.MonkeyPatch):
        monkeypatch.setenv("FORCE_FLAG_CHAT", "TRUE")
        assert _env_flag_override(Flag.CHAT) is True

@@ -155,6 +155,7 @@ class WorkspaceManager:
        path: Optional[str] = None,
        mime_type: Optional[str] = None,
        overwrite: bool = False,
        metadata: Optional[dict] = None,
    ) -> WorkspaceFile:
        """
        Write file to workspace.
@@ -168,6 +169,7 @@ class WorkspaceManager:
            path: Virtual path (defaults to "/(unknown)", session-scoped if session_id set)
            mime_type: MIME type (auto-detected if not provided)
            overwrite: Whether to overwrite existing file at path
            metadata: Optional metadata dict (e.g., origin tracking)

        Returns:
            Created WorkspaceFile instance
@@ -246,6 +248,7 @@ class WorkspaceManager:
                mime_type=mime_type,
                size_bytes=len(content),
                checksum=checksum,
                metadata=metadata,
            )
        except UniqueViolationError:
            if retries > 0:

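The truncated hunk ends at an `except UniqueViolationError` guarded by `if retries > 0`, i.e. a retry-on-path-collision loop around the record creation. A rough sketch of that shape, where the in-memory store, the exception class, and the rename strategy are all hypothetical and not the real `WorkspaceManager`:

```python
import hashlib


class UniqueViolationError(Exception):
    """Hypothetical stand-in for the DB driver's unique-constraint error."""


def write_with_retry(store: dict, path: str, content: bytes, retries: int = 3) -> dict:
    # Record the same fields the hunk shows being persisted: size and checksum.
    checksum = hashlib.sha256(content).hexdigest()
    while True:
        try:
            if path in store:
                # Simulate the unique constraint on the virtual path
                raise UniqueViolationError(path)
            record = {"path": path, "size_bytes": len(content), "checksum": checksum}
            store[path] = record
            return record
        except UniqueViolationError:
            if retries > 0:
                retries -= 1
                path = f"{path}.1"  # hypothetical deduplicating rename
                continue
            raise
```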
@@ -40,6 +40,8 @@ After making **any** code changes in the frontend, you MUST run the following co

Do NOT skip these steps. If any command reports errors, fix them and re-run until clean. Only then may you consider the task complete. If typing keeps failing, stop and ask the user.

4. `pnpm test:unit` — run integration tests; fix any failures

### Code Style

- Fully capitalize acronyms in symbols, e.g. `graphID`, `useBackendAPI`
@@ -62,7 +64,7 @@ Do NOT skip these steps. If any command reports errors, fix them and re-run unti
- **Icons**: Phosphor Icons only
- **Feature Flags**: LaunchDarkly integration
- **Error Handling**: ErrorCard for render errors, toast for mutations, Sentry for exceptions
- **Testing**: Playwright for E2E, Storybook for component development
- **Testing**: Vitest + React Testing Library + MSW for integration tests (primary), Playwright for E2E, Storybook for visual

## Environment Configuration

@@ -84,7 +86,12 @@ See @CONTRIBUTING.md for complete patterns. Quick reference:
   - Regenerate with `pnpm generate:api`
   - Pattern: `use{Method}{Version}{OperationName}`
4. **Styling**: Tailwind CSS only, use design tokens, Phosphor Icons only
5. **Testing**: Add Storybook stories for new components, Playwright for E2E. When fixing a bug, write a failing Playwright test first (use `.fixme` annotation), implement the fix, then remove the annotation.
5. **Testing**: Integration tests are the default (~90%). See `TESTING.md` for full details.
   - **New pages/features**: Write integration tests in `__tests__/` next to `page.tsx` using Vitest + RTL + MSW
   - **API mocking**: Use Orval-generated MSW handlers from `@/app/api/__generated__/endpoints/{tag}/{tag}.msw.ts`
   - **Run**: `pnpm test:unit` (integration/unit), `pnpm test` (Playwright E2E)
   - **Storybook**: For design system components in `src/components/`
   - **TDD**: Write a failing test first, implement, then verify
6. **Code conventions**:
   - Use function declarations (not arrow functions) for components/handlers
   - Do not use `useCallback` or `useMemo` unless asked to optimise a given function

@@ -747,9 +747,65 @@ export function CreateButton() {

---

## 🧪 Testing & Storybook
## 🧪 Testing

- See `TESTING.md` for Playwright setup, E2E data seeding, and Storybook usage.
See `TESTING.md` for full details. Key principles:

### Integration tests are the default (~90% of tests)

We test at the **page level**: render the page with React Testing Library, mock API requests with MSW (auto-generated by Orval), and assert with testing-library queries.

```bash
pnpm test:unit        # run integration/unit tests
pnpm test:unit:watch  # watch mode
```

### Test file location

Tests live in `__tests__/` next to the page or component:

```
app/(platform)/library/
  __tests__/
    main.test.tsx      # main page rendering & interactions
    search.test.tsx    # search-specific behavior
  components/
  page.tsx
  useLibraryPage.ts
```

### Writing a test

1. Render the page using `render()` from `@/tests/integrations/test-utils`
2. Mock API responses using Orval-generated MSW handlers from `@/app/api/__generated__/endpoints/{tag}/{tag}.msw.ts`
3. Assert with `screen.findByText`, `screen.getByRole`, etc.

```tsx
import { render, screen } from "@/tests/integrations/test-utils";
import { server } from "@/mocks/mock-server";
import { getGetV2ListLibraryAgentsMockHandler200 } from "@/app/api/__generated__/endpoints/library/library.msw";
import LibraryPage from "../page";

test("renders agent list", async () => {
  server.use(getGetV2ListLibraryAgentsMockHandler200());
  render(<LibraryPage />);
  expect(await screen.findByText("My Agents")).toBeDefined();
});
```

### When to use each test type

| Type | When |
| ------------------------------------ | --------------------------------------------- |
| **Integration (Vitest + RTL + MSW)** | Default for all new pages and features |
| **E2E (Playwright)** | Auth flows, payments, cross-page navigation |
| **Storybook** | Design system components in `src/components/` |

### TDD workflow

1. Write a failing test (integration test or Playwright with `.fixme`)
2. Implement the fix/feature
3. Remove annotations and run the full suite

---

@@ -763,8 +819,10 @@ Common scripts (see `package.json` for full list):
- `pnpm lint` — ESLint + Prettier check
- `pnpm format` — Format code
- `pnpm types` — Type-check
- `pnpm test:unit` — Run integration/unit tests (Vitest + RTL + MSW)
- `pnpm test:unit:watch` — Watch mode for integration tests
- `pnpm test` — Run Playwright E2E tests
- `pnpm storybook` — Run Storybook
- `pnpm test` — Run Playwright tests

Generated API client:

@@ -780,6 +838,7 @@ Generated API client:
- Logic is separated into `use*.ts` and `helpers.ts` when non-trivial
- Reusable logic extracted to `src/services/` or `src/lib/utils.ts` when appropriate
- Navigation uses the Next.js router
- Integration tests added/updated for new pages and features (`pnpm test:unit`)
- Lint, format, type-check, and tests pass locally
- Stories updated/added if UI changed; verified in Storybook

@@ -12,6 +12,10 @@ COPY autogpt_platform/frontend/ .
# Allow CI to opt-in to Playwright test build-time flags
ARG NEXT_PUBLIC_PW_TEST="false"
ENV NEXT_PUBLIC_PW_TEST=$NEXT_PUBLIC_PW_TEST
# Allow CI to opt-in to browser sourcemaps for coverage path resolution.
# Keep Docker builds defaulting to false to avoid the memory hit.
ARG NEXT_PUBLIC_SOURCEMAPS="false"
ENV NEXT_PUBLIC_SOURCEMAPS=$NEXT_PUBLIC_SOURCEMAPS
ENV NODE_ENV="production"
# Merge env files appropriately based on environment
RUN if [ -f .env.production ]; then \
@@ -25,10 +29,6 @@ RUN if [ -f .env.production ]; then \
        cp .env.default .env; \
    fi
RUN pnpm run generate:api
# Disable source-map generation in Docker builds to halve webpack memory usage.
# Source maps are only useful when SENTRY_AUTH_TOKEN is set (Vercel deploys);
# the Docker image never uploads them, so generating them just wastes RAM.
ENV NEXT_PUBLIC_SOURCEMAPS="false"
# In CI, we want NEXT_PUBLIC_PW_TEST=true during build so Next.js inlines it
RUN if [ "$NEXT_PUBLIC_PW_TEST" = "true" ]; then NEXT_PUBLIC_PW_TEST=true NODE_OPTIONS="--max-old-space-size=8192" pnpm build; else NODE_OPTIONS="--max-old-space-size=8192" pnpm build; fi

@@ -1,57 +1,168 @@
# Frontend Testing 🧪
# Frontend Testing

## Quick Start (local) 🚀
## Testing Strategy

| Type | Tool | Speed | When to use |
| ------------------------- | ------------------------------------ | ------------- | ----------------------------------------------------- |
| **Integration (primary)** | Vitest + React Testing Library + MSW | Fast (~100ms) | ~90% of tests — page-level rendering with mocked API |
| **E2E** | Playwright | Slow (~5s) | Critical flows: auth, payments, cross-page navigation |
| **Visual** | Storybook + Chromatic | N/A | Design system components |

**Integration tests are the default.** Since most of our code is client-only, we test at the page level: render the page with React Testing Library, mock API requests with MSW (handlers auto-generated by Orval), and assert with testing-library queries.

## Integration Tests (Vitest + RTL + MSW)

### Running

```bash
pnpm test:unit        # run all integration/unit tests with coverage
pnpm test:unit:watch  # watch mode for development
```

### File location

Tests live in a `__tests__/` folder next to the page or component they test:

```
app/(platform)/library/
  __tests__/
    main.test.tsx        # tests the main page rendering & interactions
    search.test.tsx      # tests search-specific behavior
  components/
    AgentCard/
      AgentCard.tsx
      __tests__/
        AgentCard.test.tsx  # only when testing the component in isolation
  page.tsx
  useLibraryPage.ts
```

**Naming**: use descriptive names like `main.test.tsx`, `search.test.tsx`, `filters.test.tsx` — not `page.test.tsx` or `index.test.tsx`.

### Writing an integration test

1. **Render the page** using the custom `render()` from `@/tests/integrations/test-utils` (wraps providers)
2. **Mock API responses** using Orval-generated MSW handlers from `@/app/api/__generated__/endpoints/{tag}/{tag}.msw.ts`
3. **Assert** with React Testing Library queries (`screen.findByText`, `screen.getByRole`, etc.)

```tsx
import { render, screen } from "@/tests/integrations/test-utils";
import { server } from "@/mocks/mock-server";
import {
  getGetV2ListLibraryAgentsMockHandler200,
  getGetV2ListLibraryAgentsMockHandler422,
} from "@/app/api/__generated__/endpoints/library/library.msw";
import LibraryPage from "../page";

describe("LibraryPage", () => {
  test("renders agent list from API", async () => {
    server.use(getGetV2ListLibraryAgentsMockHandler200());

    render(<LibraryPage />);

    expect(await screen.findByText("My Agents")).toBeDefined();
  });

  test("shows error state on API failure", async () => {
    server.use(getGetV2ListLibraryAgentsMockHandler422());

    render(<LibraryPage />);

    expect(await screen.findByText(/error/i)).toBeDefined();
  });
});
```

### MSW handlers

Orval generates typed MSW handlers for every endpoint and HTTP status code:

- `getGetV2ListLibraryAgentsMockHandler200()` — success response with faker data
- `getGetV2ListLibraryAgentsMockHandler422()` — validation error response
- `getGetV2ListLibraryAgentsMockHandler401()` — unauthorized response

To override with custom data, pass a resolver:

```tsx
import { http, HttpResponse } from "msw";

server.use(
  http.get("http://localhost:3000/api/proxy/api/library/agents", () => {
    return HttpResponse.json({
      agents: [{ id: "1", name: "My Agent" }],
      pagination: { total: 1 },
    });
  }),
);
```

All handlers are aggregated in `src/mocks/mock-handlers.ts` and the MSW server is set up in `src/mocks/mock-server.ts`.

### Test utilities

- **`@/tests/integrations/test-utils`** — custom `render()` that wraps components with `QueryClientProvider`, `BackendAPIProvider`, `OnboardingProvider`, `NuqsTestingAdapter`, and `TooltipProvider`, so query-state hooks and tooltips work out of the box in page-level tests
- **`@/tests/integrations/setup-nextjs-mocks`** — mocks for `next/navigation`, `next/image`, `next/headers`, `next/link`
- **`@/tests/integrations/mock-supabase-request`** — mocks Supabase auth (returns null user by default)

### What to test at page level

- Page renders with API data (happy path)
- Loading and error states
- User interactions that trigger mutations (clicks, form submissions)
- Conditional rendering based on API responses
- Search, filtering, pagination behavior

### When to test a component in isolation

Only when the component has complex internal logic that is hard to exercise through the page test. Prefer page-level tests as the default.

## E2E Tests (Playwright)

### Running

```bash
pnpm test           # build + run all Playwright tests
pnpm test-ui        # run with Playwright UI
pnpm test:no-build  # run against a running dev server
```

### Setup

1. Start the backend + Supabase stack:
   - From `autogpt_platform`: `docker compose --profile local up deps_backend -d`
   - Or run the full stack: `docker compose up -d`
2. Seed rich E2E data (creates `test123@gmail.com` with library agents):
   - From `autogpt_platform/backend`: `poetry run python test/e2e_test_data.py`
3. Run Playwright:
   - From `autogpt_platform/frontend`: `pnpm test` or `pnpm test-ui`

## How Playwright setup works 🎭
### How Playwright setup works

- Playwright runs from `frontend/playwright.config.ts` with a global setup step.
- The global setup creates a user pool via the real signup UI and stores it in `frontend/.auth/user-pool.json`.
- Most tests call `getTestUser()` (from `src/tests/utils/auth.ts`) which pulls a random user from that pool.
- these users do not contain library agents, it's user that just "signed up" on the platform, hence some tests to make use of users created via script (see below) with more data
- Playwright runs from `frontend/playwright.config.ts` with a global setup step
- Global setup creates a user pool via the real signup UI, stored in `frontend/.auth/user-pool.json`
- `getTestUser()` (from `src/tests/utils/auth.ts`) pulls a random user from the pool
- `getTestUserWithLibraryAgents()` uses the rich user created by the data script

## Test users 👤
### Test users

- **User pool (basic users)**
  Created automatically by the Playwright global setup through `/signup`.
  Used by `getTestUser()` in `src/tests/utils/auth.ts`.
- **User pool (basic users)** — created automatically by Playwright global setup. Used by `getTestUser()`
- **Rich user with library agents** — created by `backend/test/e2e_test_data.py`. Used by `getTestUserWithLibraryAgents()`

- **Rich user with library agents**
  Created by `backend/test/e2e_test_data.py`.
  Accessed via `getTestUserWithLibraryAgents()` in `src/tests/credentials/index.ts`.

Use the rich user when a test needs existing library agents (e.g. `library.spec.ts`).

## Resetting or wiping the DB 🔁
### Resetting the DB

If you reset the Docker DB and logins start failing:

1. Delete `frontend/.auth/user-pool.json` so the pool is regenerated.
2. Re-run the E2E data script to recreate the rich user + library agents:
   - `poetry run python test/e2e_test_data.py`
1. Delete `frontend/.auth/user-pool.json`
2. Re-run `poetry run python test/e2e_test_data.py`

## Storybook 📚
## Storybook

## Flow diagram 🗺️
- `pnpm storybook` — run locally
- `pnpm build-storybook` — build static
- `pnpm test-storybook` — CI runner
- When changing components in `src/components`, update or add stories and verify in Storybook/Chromatic

```mermaid
flowchart TD
    A[Start Docker stack] --> B[Run e2e_test_data.py]
    B --> C[Run Playwright tests]
    C --> D[Global setup creates user pool]
    D --> E{Test needs rich data?}
    E -->|No| F[getTestUser from user pool]
    E -->|Yes| G[getTestUserWithLibraryAgents]
```
## TDD Workflow

- `pnpm storybook` – Run Storybook locally
- `pnpm build-storybook` – Build a static Storybook
- CI runner: `pnpm test-storybook`
- When changing components in `src/components`, update or add stories and verify in Storybook/Chromatic.
When fixing a bug or adding a feature:

1. **Write a failing test first** — for integration tests, write the test and confirm it fails. For Playwright, use `.fixme` annotation
2. **Implement the fix/feature** — write the minimal code to make the test pass
3. **Remove annotations** — once passing, remove `.fixme` and run the full suite

@@ -161,6 +161,7 @@
    "eslint-plugin-storybook": "9.1.5",
    "happy-dom": "20.3.4",
    "import-in-the-middle": "2.0.2",
    "monocart-reporter": "2.10.0",
    "msw": "2.11.6",
    "msw-storybook-addon": "2.0.6",
    "orval": "7.13.0",

@@ -5,10 +5,57 @@ import { defineConfig, devices } from "@playwright/test";
 * https://github.com/motdotla/dotenv
 */
import dotenv from "dotenv";
import fs from "fs";
import path from "path";
dotenv.config({ path: path.resolve(__dirname, ".env") });
dotenv.config({ path: path.resolve(__dirname, "../backend/.env") });

const frontendRoot = __dirname.replaceAll("\\", "/");

// Directory where CI copies .next/static from the Docker container
const staticCoverageDir = path.resolve(__dirname, ".next-static-coverage");

function normalizeCoverageSourcePath(filePath: string) {
  const normalizedFilePath = filePath.replaceAll("\\", "/");
  const withoutWebpackPrefix = normalizedFilePath.replace(
    /^webpack:\/\/_N_E\//,
    "",
  );

  if (withoutWebpackPrefix.startsWith("./")) {
    return withoutWebpackPrefix.slice(2);
  }

  if (withoutWebpackPrefix.startsWith(frontendRoot)) {
    return path.posix.relative(frontendRoot, withoutWebpackPrefix);
  }

  return withoutWebpackPrefix;
}

// Resolve source maps from the copied .next/static directory.
// Cache parsed results to avoid repeated disk reads during report generation.
const sourceMapCache = new Map<string, object | undefined>();

function resolveSourceMap(sourcePath: string) {
  // sourcePath is the sourceMappingURL, e.g.:
  // "http://localhost:3000/_next/static/chunks/abc123.js.map"
  const match = sourcePath.match(/_next\/static\/(.+)$/);
  if (!match) return undefined;

  const mapFile = path.join(staticCoverageDir, match[1]);
  if (sourceMapCache.has(mapFile)) return sourceMapCache.get(mapFile);

  try {
    const result = JSON.parse(fs.readFileSync(mapFile, "utf8")) as object;
    sourceMapCache.set(mapFile, result);
    return result;
  } catch {
    sourceMapCache.set(mapFile, undefined);
    return undefined;
  }
}

export default defineConfig({
  testDir: "./src/tests",
  /* Global setup file that runs before all tests */
@@ -22,7 +69,30 @@ export default defineConfig({
  /* use more workers on CI. */
  workers: process.env.CI ? 4 : undefined,
  /* Reporter to use. See https://playwright.dev/docs/test-reporters */
  reporter: [["list"], ["html", { open: "never" }]],
  reporter: [
    ["list"],
    ["html", { open: "never" }],
    [
      "monocart-reporter",
      {
        name: "E2E Coverage Report",
        outputFile: "./coverage/e2e/report.html",
        coverage: {
          reports: ["cobertura"],
          outputDir: "./coverage/e2e",
          entryFilter: (entry: { url: string }) =>
            entry.url.includes("/_next/static/") &&
            !entry.url.includes("node_modules"),
          sourceFilter: (sourcePath: string) =>
            sourcePath.includes("src/") && !sourcePath.includes("node_modules"),
          sourcePath: (filePath: string) =>
            normalizeCoverageSourcePath(filePath),
          sourceMapResolver: (sourcePath: string) =>
            resolveSourceMap(sourcePath),
        },
      },
    ],
  ],
  /* Shared settings for all the projects below. See https://playwright.dev/docs/api/class-testoptions. */
  use: {
    /* Base URL to use in actions like `await page.goto('/')`. */

autogpt_platform/frontend/pnpm-lock.yaml (generated, 314 changed lines)
@@ -400,6 +400,9 @@ importers:
      import-in-the-middle:
        specifier: 2.0.2
        version: 2.0.2
      monocart-reporter:
        specifier: 2.10.0
        version: 2.10.0
      msw:
        specifier: 2.11.6
        version: 2.11.6(@types/node@24.10.0)(typescript@5.9.3)
@@ -4064,6 +4067,10 @@ packages:
    resolution: {integrity: sha512-h8lQ8tacZYnR3vNQTgibj+tODHI5/+l06Au2Pcriv/Gmet0eaj4TwWH41sO9wnHDiQsEj19q0drzdWdeAHtweg==}
    engines: {node: '>=6.5'}

  accepts@1.3.8:
    resolution: {integrity: sha512-PYAthTa2m2VKxuvSD3DPC/Gy+U+sOA1LAuT8mkmRuvw+NACSaeXEQ+NHcVF7rONl6qcaxV3Uuemwawk+7+SJLw==}
    engines: {node: '>= 0.6'}

  acorn-import-attributes@1.9.5:
    resolution: {integrity: sha512-n02Vykv5uA3eHGM/Z2dQrcD56kL8TyDb2p1+0P83PClMnC/nc+anbQRhIOWnSq4Ke/KvDPrY3C9hDtC/A3eHnQ==}
    peerDependencies:
@@ -4080,6 +4087,14 @@ packages:
    peerDependencies:
      acorn: ^6.0.0 || ^7.0.0 || ^8.0.0

  acorn-loose@8.5.2:
    resolution: {integrity: sha512-PPvV6g8UGMGgjrMu+n/f9E/tCSkNQ2Y97eFvuVdJfG11+xdIeDcLyNdC8SHcrHbRqkfwLASdplyR6B6sKM1U4A==}
    engines: {node: '>=0.4.0'}

  acorn-walk@8.3.5:
    resolution: {integrity: sha512-HEHNfbars9v4pgpW6SO1KSPkfoS0xVOM/9UzkJltjlsHZmJasxg8aXkuZa7SMf8vKGIBhpUsPluQSqhJFCqebw==}
    engines: {node: '>=0.4.0'}

  acorn@8.15.0:
    resolution: {integrity: sha512-NZyJarBfL7nWwIq+FDL6Zp/yHEhePMNnnJ0y3qfieCrmNvYct8uvtiV41UvlSe6apAfk0fY1FbWx+NwfmpvtTg==}
    engines: {node: '>=0.4.0'}
@@ -4610,9 +4625,20 @@ packages:
  console-browserify@1.2.0:
    resolution: {integrity: sha512-ZMkYO/LkF17QvCPqM0gxw8yUzigAOZOSWSHg91FH6orS7vcEj5dVZTidN2fQ14yBSdg97RqhSNwLUXInd52OTA==}

  console-grid@2.2.3:
    resolution: {integrity: sha512-+mecFacaFxGl+1G31IsCx41taUXuW2FxX+4xIE0TIPhgML+Jb9JFcBWGhhWerd1/vhScubdmHqTwOhB0KCUUAg==}

  constants-browserify@1.0.0:
    resolution: {integrity: sha512-xFxOwqIzR/e1k1gLiWEophSCMqXcwVHIH7akf7b/vxcUeGunlj3hvZaaqxwHsTgn+IndtkQJgSztIDWeumWJDQ==}

  content-disposition@1.0.1:
    resolution: {integrity: sha512-oIXISMynqSqm241k6kcQ5UwttDILMK4BiurCfGEREw6+X9jkkpEe5T9FZaApyLGGOnFuyMWZpdolTXMtvEJ08Q==}
    engines: {node: '>=18'}

  content-type@1.0.5:
    resolution: {integrity: sha512-nTjqfcBFEipKdXCv4YDQWCfmcLZKm81ldF0pAopTvyrFGVbcR6P/VAAd5G7N+0tTr8QqiU0tFadD6FK4NtJwOA==}
    engines: {node: '>= 0.6'}

  convert-source-map@1.9.0:
    resolution: {integrity: sha512-ASFBup0Mz1uyiIjANan1jzLQami9z1PoYSZCiiYW2FczPbenXc45FZdBZLzOT+r6+iciuEModtmCti+hjaAk0A==}

@@ -4623,6 +4649,10 @@ packages:
    resolution: {integrity: sha512-9Kr/j4O16ISv8zBBhJoi4bXOYNTkFLOqSL3UDB0njXxCXNezjeyVrJyGOWtgfs/q2km1gwBcfH8q1yEGoMYunA==}
    engines: {node: '>=18'}

  cookies@0.9.1:
    resolution: {integrity: sha512-TG2hpqe4ELx54QER/S3HQ9SRVnQnGBtKUz5bLQWtYAQ+o6GpgMs6sYUvaiJjVxb+UXwhRhAEP3m7LbsIZ77Hmw==}
    engines: {node: '>= 0.8'}

  core-js-compat@3.47.0:
    resolution: {integrity: sha512-IGfuznZ/n7Kp9+nypamBhvwdwLsW6KC8IOaURw2doAK5e98AG3acVLdh0woOnEqCfUtS+Vu882JE4k/DAm3ItQ==}

@@ -4931,6 +4961,9 @@ packages:
    resolution: {integrity: sha512-h5k/5U50IJJFpzfL6nO9jaaumfjO/f2NjK/oYB2Djzm4p9L+3T9qWpZqZ2hAbLPuuYq9wrU08WQyBTL5GbPk5Q==}
    engines: {node: '>=6'}

  deep-equal@1.0.1:
    resolution: {integrity: sha512-bHtC0iYvWhyaTzvV3CZgPeZQqCOBGyGsVV7v4eevpdkLHfiSrXUdBG+qAuSz4RI70sszvjQ1QSZ98An1yNwpSw==}

  deep-is@0.1.4:
    resolution: {integrity: sha512-oIPzksmTg4/MriiaYGO+okXDT7ztn/w3Eptv/+gSIdMdKsJo0u4CfYNFJPy+4SKMuCqGw2wxnA+URMg3t8a/bQ==}

@@ -4957,6 +4990,17 @@ packages:
  delaunator@5.0.1:
    resolution: {integrity: sha512-8nvh+XBe96aCESrGOqMp/84b13H9cdKbG5P2ejQCh4d4sK9RL4371qou9drQjMhvnPmhWl5hnmqbEE0fXr9Xnw==}

  delegates@1.0.0:
    resolution: {integrity: sha512-bd2L678uiWATM6m5Z1VzNCErI3jiGzt6HGY8OVICs40JQq/HALfbyNJmp0UDakEY4pMMaN0Ly5om/B1VI/+xfQ==}

  depd@1.1.2:
    resolution: {integrity: sha512-7emPTl6Dpo6JRXOXjLRxck+FlLRX5847cLKEn00PLAgc3g2hTZZgr+e4c2v6QpSmLeFP3n5yUo7ft6avBK/5jQ==}
    engines: {node: '>= 0.6'}

  depd@2.0.0:
    resolution: {integrity: sha512-g7nH6P6dyDioJogAAGprGpCtVImJhpPk/roCzdb3fIh61/s/nPsfR6onyMwkCAR/OlC3yBC0lESvUoQEAssIrw==}
    engines: {node: '>= 0.8'}

  dependency-graph@0.11.0:
    resolution: {integrity: sha512-JeMq7fEshyepOWDfcfHK06N3MhyPhz++vtqWhMT5O9A3K42rdsEDpfdVqjaqaAhsw6a+ZqeDvQVtD0hFHQWrzg==}
    engines: {node: '>= 0.6.0'}
@@ -4968,6 +5012,10 @@ packages:
  des.js@1.1.0:
    resolution: {integrity: sha512-r17GxjhUCjSRy8aiJpr8/UadFIzMzJGexI3Nmz4ADi9LYSFx4gTBp80+NaX/YsXWWLhpZ7v/v/ubEc/bCNfKwg==}

  destroy@1.2.0:
    resolution: {integrity: sha512-2sJGJTaXIIaR1w4iJSNoN0hnMY7Gpc/n8D4qSCJw8QqFWXf7cuAgnEHxBpweaVcPevC2l3KpjYCx3NypQQgaJg==}
    engines: {node: '>= 0.8', npm: 1.2.8000 || >= 1.4.16}

  detect-libc@2.1.2:
    resolution: {integrity: sha512-Btj2BOOO83o3WyH59e8MgXsxEQVcarkUOpEYrubB0urwnN10yQ364rsiByU11nZlqWYZm05i/of7io4mzihBtQ==}
    engines: {node: '>=8'}
@@ -5049,6 +5097,12 @@ packages:
  eastasianwidth@0.2.0:
    resolution: {integrity: sha512-I88TYZWc9XiYHRQ4/3c5rjjfgkjhLyW2luGIheGERbNQ6OY7yTybanSpDXZa8y7VUP9YmDcYa+eyq4ca7iLqWA==}

  ee-first@1.1.1:
    resolution: {integrity: sha512-WMwm9LhRUo+WUaRN+vRuETqG89IgZphVSNkdFgeb6sS/E4OrDIN7t48CAewSHXc6C8lefD8KKfr5vY61brQlow==}

  eight-colors@1.3.2:
    resolution: {integrity: sha512-qo7BAEbNnadiWn3EgZFD8tk2DWpifEHJE7CVyp09I0FiUJZ6z0YSyCGFmmtopVMi32iaL4hEK6m+/pPkx1iMFA==}

  electron-to-chromium@1.5.267:
    resolution: {integrity: sha512-0Drusm6MVRXSOJpGbaSVgcQsuB4hEkMpHXaVstcPmhu5LIedxs1xNK/nIxmQIU/RPC0+1/o0AVZfBTkTNJOdUw==}

@@ -5081,6 +5135,10 @@ packages:
    resolution: {integrity: sha512-/kyM18EfinwXZbno9FyUGeFh87KC8HRQBQGildHZbEuRyWFOmv1U10o9BBp8XVZDVNNuQKyIGIu5ZYAAXJ0V2Q==}
    engines: {node: '>= 4'}

  encodeurl@2.0.0:
    resolution: {integrity: sha512-Q0n9HRi4m6JuGIV1eFlmvJB7ZEVxu93IrMyiMsGC0lrMJMWzRgx6WGquyfQgZVb31vhGgXnfmPNNXmxnOkRBrg==}
    engines: {node: '>= 0.8'}

  endent@2.1.0:
    resolution: {integrity: sha512-r8VyPX7XL8U01Xgnb1CjZ3XV+z90cXIJ9JPE/R9SEC9vpw2P6CfsRPJmp20DppC5N7ZAMCmjYkJIa744Iyg96w==}

@@ -5180,6 +5238,9 @@ packages:
    resolution: {integrity: sha512-WUj2qlxaQtO4g6Pq5c29GTcWGDyd8itL8zTlipgECz3JesAiiOKotd8JU6otB3PACgG6xkJUyVhboMS+bje/jA==}
    engines: {node: '>=6'}

  escape-html@1.0.3:
    resolution: {integrity: sha512-NiSupZ4OeuGwr68lGIeym/ksIZMJodUGOSCZ/FSnTxcrekbvqrgdUxlJOMpijaKZVjAJrWrGs/6Jy8OMuyj9ow==}

  escape-string-regexp@4.0.0:
    resolution: {integrity: sha512-TtpcNJ3XAzx3Gq8sWRzJaVajRs0uVxA2YAkdb1jm2YkPz4G6egUFAyA3n5vtEIZefPk5Wa4UXbKuS5fKkJWdgA==}
    engines: {node: '>=10'}
@@ -5493,6 +5554,10 @@ packages:
      react-dom:
        optional: true

  fresh@0.5.2:
    resolution: {integrity: sha512-zJ2mQYM18rEFOudeV4GShTGIQ7RbzA7ozbU9I/XBpm7kqgMywgmylMwXHxZJmkVoYkna9d2pVXVXPdYTP9ej8Q==}
    engines: {node: '>= 0.6'}

  fs-extra@10.1.0:
    resolution: {integrity: sha512-oRXApq54ETRj4eMiFzGnHWGy+zo5raudjuxN0b8H7s/RU2oW0Wvsx9O0ACRN/kRq9E8Vu/ReskGB5o3ji+FzHQ==}
    engines: {node: '>=12'}
@@ -5773,6 +5838,18 @@ packages:
  htmlparser2@6.1.0:
    resolution: {integrity: sha512-gyyPk6rgonLFEDGoeRgQNaEUvdJ4ktTmmUh/h2t7s+M8oPpIPxgNACWa+6ESR57kXstwqPiCut0V8NRpcwgU7A==}

  http-assert@1.5.0:
    resolution: {integrity: sha512-uPpH7OKX4H25hBmU6G1jWNaqJGpTXxey+YOUizJUAgu0AjLUeC8D73hTrhvDS5D+GJN1DN1+hhc/eF/wpxtp0w==}
    engines: {node: '>= 0.8'}

  http-errors@1.8.1:
    resolution: {integrity: sha512-Kpk9Sm7NmI+RHhnj6OIWDI1d6fIoFAtFt9RLaTMRlg/8w49juAStsrBgp0Dp4OdxdVbRIeKhtCUvoi/RuAhO4g==}
    engines: {node: '>= 0.6'}

  http-errors@2.0.1:
    resolution: {integrity: sha512-4FbRdAX+bSdmo4AUFuS0WNiPz8NgFt+r8ThgNWmlrjQjt1Q7ZR9+zTlce2859x4KSXrwIsaeTqDoKQmtP8pLmQ==}
    engines: {node: '>= 0.8'}

  http-proxy-agent@7.0.2:
    resolution: {integrity: sha512-T1gkAiYYDWYx3V5Bmyu7HcfcvL7mUrTWiM6yOfa3PIphViJ/gFPbvidQ+veqSOHci/PxBcDabeUNCzpOODJZig==}
    engines: {node: '>= 14'}
@@ -6193,12 +6270,26 @@ packages:
    resolution: {integrity: sha512-YHzO7721WbmAL6Ov1uzN/l5mY5WWWhJBSW+jq4tkfZfsxmo1hu6frS0EOswvjBUnWE6NtjEs48SFn5CQESRLZg==}
    hasBin: true

  keygrip@1.1.0:
    resolution: {integrity: sha512-iYSchDJ+liQ8iwbSI2QqsQOvqv58eJCEanyJPJi+Khyu8smkcKSFUCbPwzFcL7YVtZ6eONjqRX/38caJ7QjRAQ==}
    engines: {node: '>= 0.6'}

  keyv@4.5.4:
    resolution: {integrity: sha512-oxVHkHR/EJf2CNXnWxRLW6mg7JyCCUcG0DtEGmL2ctUo1PNTin1PUil+r/+4r5MpVgC/fn1kjsx7mjSujKqIpw==}

  khroma@2.1.0:
    resolution: {integrity: sha512-Ls993zuzfayK269Svk9hzpeGUKob/sIgZzyHYdjQoAdQetRKpOLj+k/QQQ/6Qi0Yz65mlROrfd+Ev+1+7dz9Kw==}

  koa-compose@4.1.0:
    resolution: {integrity: sha512-8ODW8TrDuMYvXRwra/Kh7/rJo9BtOfPc6qO8eAfC80CnCvSjSl0bkRM24X6/XBBEyj0v1nRUQ1LyOy3dbqOWXw==}

  koa-static-resolver@1.0.6:
    resolution: {integrity: sha512-ZX5RshSzH8nFn05/vUNQzqw32nEigsPa67AVUr6ZuQxuGdnCcTLcdgr4C81+YbJjpgqKHfacMBd7NmJIbj7fXw==}

  koa@3.2.0:
    resolution: {integrity: sha512-TrM4/tnNY7uJ1aW55sIIa+dqBvc4V14WRIAlGcWat9wV5pRS9Wr5Zk2ZTjQP1jtfIHDoHiSbPuV08P0fUZo2pg==}
    engines: {node: '>= 18'}

  langium@3.3.1:
    resolution: {integrity: sha512-QJv/h939gDpvT+9SiLVlY7tZC3xB2qK57v0J04Sh9wpMb6MP1q8gB21L3WIo8T5P1MSMg3Ep14L7KkDCFG3y4w==}
    engines: {node: '>=16.0.0'}
@@ -6351,6 +6442,9 @@ packages:
    resolution: {integrity: sha512-h5bgJWpxJNswbU7qCrV0tIKQCaS3blPDrqKWx+QxzuzL1zGUzij9XCWLrSLsJPu5t+eWA/ycetzYAO5IOMcWAQ==}
    hasBin: true

  lz-utils@2.1.0:
    resolution: {integrity: sha512-CMkfimAypidTtWjNDxY8a1bc1mJdyEh04V2FfEQ5Zh8Nx4v7k850EYa+dOWGn9hKG5xOyHP5MkuduAZCTHRvJw==}

  magic-string@0.30.21:
    resolution: {integrity: sha512-vd2F4YUyEXKGcLHoq+TEyCjxueSeHnFxyyjNp80yg0XV4vUhnDer/lvvlqM/arB5bXQN5K2/3oinyCRyx8T2CQ==}

@@ -6456,6 +6550,10 @@ packages:
  mdurl@2.0.0:
    resolution: {integrity: sha512-Lf+9+2r+Tdp5wXDXC4PcIBjTDtq4UKjCPMQhKIuzpJNW0b96kVqSwW0bT7FhRSfmAiFYgP+SCRvdrDozfh0U5w==}

  media-typer@1.1.0:
    resolution: {integrity: sha512-aisnrDP4GNe06UcKFnV5bfMNPBUw4jsLGaWwWfnH3v02GnBuXX2MCVn5RbrWo0j3pczUilYblq7fQ7Nw2t5XKw==}
    engines: {node: '>= 0.8'}

  memfs@3.5.3:
    resolution: {integrity: sha512-UERzLsxzllchadvbPs5aolHh65ISpKpM+ccLbOJ8/vvpBKmAWf+la7dXFy7Mr0ySHbdHrFv5kGFCUHHe6GFEmw==}
    engines: {node: '>= 4.0.0'}
@@ -6598,10 +6696,18 @@ packages:
    resolution: {integrity: sha512-sPU4uV7dYlvtWJxwwxHD0PuihVNiE7TyAbQ5SWxDCB9mUYvOgroQOwYQQOKPJ8CIbE+1ETVlOoK1UC2nU3gYvg==}
    engines: {node: '>= 0.6'}
|
||||
mime-db@1.54.0:
|
||||
resolution: {integrity: sha512-aU5EJuIN2WDemCcAp2vFBfp/m4EAhWJnUNSSw0ixs7/kXbd6Pg64EmwJkNdFhB8aWt1sH2CTXrLxo/iAGV3oPQ==}
|
||||
engines: {node: '>= 0.6'}
|
||||
|
||||
mime-types@2.1.35:
|
||||
resolution: {integrity: sha512-ZDY+bPm5zTTF+YpCrAU9nK0UgICYPT0QtT1NZWFv4s++TNkcgVaT0g6+4R2uI4MjQjzysHB1zxuWL50hzaeXiw==}
|
||||
engines: {node: '>= 0.6'}
|
||||
|
||||
mime-types@3.0.2:
|
||||
resolution: {integrity: sha512-Lbgzdk0h4juoQ9fCKXW4by0UJqj+nOOrI9MJ1sSj4nI8aI2eo1qmvQEie4VD1glsS250n15LsWsYtCugiStS5A==}
|
||||
engines: {node: '>=18'}
|
||||
|
||||
mimic-fn@2.1.0:
|
||||
resolution: {integrity: sha512-OqbOk5oEQeAZ8WXWydlu9HJjz9WVdEIvamMCcXmuqUYjTknH/sqsWvhQ3vgwKFRR1HpjvNBKQ37nbJgYzGqGcg==}
|
||||
engines: {node: '>=6'}
|
||||
@@ -6640,6 +6746,17 @@ packages:
|
||||
module-details-from-path@1.0.4:
|
||||
resolution: {integrity: sha512-EGWKgxALGMgzvxYF1UyGTy0HXX/2vHLkw6+NvDKW2jypWbHpjQuj4UMcqQWXHERJhVGKikolT06G3bcKe4fi7w==}
|
||||
|
||||
monocart-coverage-reports@2.12.9:
|
||||
resolution: {integrity: sha512-vtFqbC3Egl4nVa1FSIrQvMPO6HZtb9lo+3IW7/crdvrLNW2IH8lUsxaK0TsKNmMO2mhFWwqQywLV2CZelqPgwA==}
|
||||
hasBin: true
|
||||
|
||||
monocart-locator@1.0.2:
|
||||
resolution: {integrity: sha512-v8W5hJLcWMIxLCcSi/MHh+VeefI+ycFmGz23Froer9QzWjrbg4J3gFJBuI/T1VLNoYxF47bVPPxq8ZlNX4gVCw==}
|
||||
|
||||
monocart-reporter@2.10.0:
|
||||
resolution: {integrity: sha512-Q421HL8hCr024HMjQcQylEpOLy69FE6Zli2s/A0zptfFEPW/kaz6B1Ll3CYs8L1j67+egt1HeNC1LTHUsp6W+A==}
|
||||
hasBin: true
|
||||
|
||||
motion-dom@12.24.8:
|
||||
resolution: {integrity: sha512-wX64WITk6gKOhaTqhsFqmIkayLAAx45SVFiMnJIxIrH5uqyrwrxjrfo8WX9Kh8CaUAixjeMn82iH0W0QT9wD5w==}
|
||||
|
||||
@@ -6688,6 +6805,10 @@ packages:
|
||||
natural-compare@1.4.0:
|
||||
resolution: {integrity: sha512-OWND8ei3VtNC9h7V60qff3SVobHr996CTwgxubgyQYEpg290h9J0buyECNNJexkFm5sOajh5G116RYA1c8ZMSw==}
|
||||
|
||||
negotiator@0.6.3:
|
||||
resolution: {integrity: sha512-+EUsqGPLsM+j/zdChZjsnX51g4XrHFOIXwfnCVPGlQk/k5giakcKsuxCObBRu6DSm9opw/O6slWbJdghQM4bBg==}
|
||||
engines: {node: '>= 0.6'}
|
||||
|
||||
neo-async@2.6.2:
|
||||
resolution: {integrity: sha512-Yd3UES5mWCSqR+qNT93S3UoYUkqAZ9lLg8a7g9rimsWmYGK8cVToA4/sF3RrshdyV3sAGMXVUmpMYOw+dLpOuw==}
|
||||
|
||||
@@ -6757,6 +6878,10 @@ packages:
|
||||
node-releases@2.0.27:
|
||||
resolution: {integrity: sha512-nmh3lCkYZ3grZvqcCH+fjmQ7X+H0OeZgP40OierEaAptX4XofMh5kwNbWh7lBduUzCcV/8kZ+NDLCwm2iorIlA==}
|
||||
|
||||
nodemailer@7.0.13:
|
||||
resolution: {integrity: sha512-PNDFSJdP+KFgdsG3ZzMXCgquO7I6McjY2vlqILjtJd0hy8wEvtugS9xKRF2NWlPNGxvLCXlTNIae4serI7dinw==}
|
||||
engines: {node: '>=6.0.0'}
|
||||
|
||||
normalize-path@3.0.0:
|
||||
resolution: {integrity: sha512-6eZs5Ls3WtCisHWp9S2GUy8dqkpGi4BVSz3GaqiE6ezub0512ESztXUwUB6C6IKbQkY2Pnb/mD4WYojCRwcwLA==}
|
||||
engines: {node: '>=0.10.0'}
|
||||
@@ -6851,6 +6976,10 @@ packages:
|
||||
obug@2.1.1:
|
||||
resolution: {integrity: sha512-uTqF9MuPraAQ+IsnPf366RG4cP9RtUi7MLO1N3KEc+wb0a6yKpeL0lmk2IB1jY5KHPAlTc6T/JRdC/YqxHNwkQ==}
|
||||
|
||||
on-finished@2.4.1:
|
||||
resolution: {integrity: sha512-oVlzkg3ENAhCk2zdv7IJwd/QUD4z2RxRwpkcGY8psCVcCYZNq4wYnVWALHM+brtuJjePWiYF/ClmuDr8Ch5+kg==}
|
||||
engines: {node: '>= 0.8'}
|
||||
|
||||
once@1.4.0:
|
||||
resolution: {integrity: sha512-lNaJgI+2Q5URQBkccEKHTQOPaXdUxnZZElQTZY0MFUAuaEqe1E+Nyvgdz/aIyNi6Z9MzO5dv1H8n58/GELp3+w==}
|
||||
|
||||
@@ -6953,6 +7082,10 @@ packages:
|
||||
parse5@8.0.0:
|
||||
resolution: {integrity: sha512-9m4m5GSgXjL4AjumKzq1Fgfp3Z8rsvjRNbnkVwfu2ImRqE5D0LnY2QfDen18FSY9C573YU5XxSapdHZTZ2WolA==}
|
||||
|
||||
parseurl@1.3.3:
|
||||
resolution: {integrity: sha512-CiyeOxFT/JZyN5m0z9PfXw4SCBJ6Sygz1Dpl0wqjlhDEGGBP1GnsUVEL0p63hoG1fcj3fHynXi9NYO4nWOL+qQ==}
|
||||
engines: {node: '>= 0.8'}
|
||||
|
||||
pascal-case@3.1.2:
|
||||
resolution: {integrity: sha512-uWlGT3YSnK9x3BQJaOdcZwrnV6hPpd8jFH1/ucpiLRPh/2zCVJKS19E4GvYHvaCcACn3foXZ0cLB9Wrx1KGe5g==}
|
||||
|
||||
@@ -7751,6 +7884,9 @@ packages:
|
||||
setimmediate@1.0.5:
|
||||
resolution: {integrity: sha512-MATJdZp8sLqDl/68LfQmbP8zKPLQNV6BIZoIgrscFDQ+RsvK/BxeDQOgyxKKoh0y/8h3BqVFnCqQ/gd+reiIXA==}
|
||||
|
||||
setprototypeof@1.2.0:
|
||||
resolution: {integrity: sha512-E5LDX7Wrp85Kil5bhZv46j8jOeboKq5JMmYM3gVGdGH8xFpPWXUMsNrlODCrkoxMEeNi/XZIwuRvY4XNwYMJpw==}
|
||||
|
||||
sha.js@2.4.12:
|
||||
resolution: {integrity: sha512-8LzC5+bvI45BjpfXU8V5fdU2mfeKiQe1D1gIMn7XUlF3OTUrpdJpPPH4EMAnF0DsHHdSZqCdSss5qCmJKuiO3w==}
|
||||
engines: {node: '>= 0.10'}
|
||||
@@ -7872,6 +8008,10 @@ packages:
|
||||
resolution: {integrity: sha512-WjlahMgHmCJpqzU8bIBy4qtsZdU9lRlcZE3Lvyej6t4tuOuv1vk57OW3MBrj6hXBFx/nNoC9MPMTcr5YA7NQbg==}
|
||||
engines: {node: '>=6'}
|
||||
|
||||
statuses@1.5.0:
|
||||
resolution: {integrity: sha512-OpZ3zP+jT1PI7I8nemJX4AKmAX070ZkYPVWV/AaKTJl+tXCTGyVdC1a4SL8RUQYEwk/f34ZX8UTykN68FwrqAA==}
|
||||
engines: {node: '>= 0.6'}
|
||||
|
||||
statuses@2.0.2:
|
||||
resolution: {integrity: sha512-DvEy55V3DB7uknRo+4iOGT5fP1slR8wQohVdknigZPMpMstaKJQWhwiYBACJE3Ul2pTnATihhBYnRhZQHGBiRw==}
|
||||
engines: {node: '>= 0.8'}
|
||||
@@ -8157,6 +8297,10 @@ packages:
|
||||
resolution: {integrity: sha512-65P7iz6X5yEr1cwcgvQxbbIw7Uk3gOy5dIdtZ4rDveLqhrdJP+Li/Hx6tyK0NEb+2GCyneCMJiGqrADCSNk8sQ==}
|
||||
engines: {node: '>=8.0'}
|
||||
|
||||
toidentifier@1.0.1:
|
||||
resolution: {integrity: sha512-o5sSPKEkg/DIQNmH43V0/uerLrpzVedkUh8tGNvaeXpfpuwjKenlSox/2O/BTlZUtEe+JG7s5YhEz608PlAHRA==}
|
||||
engines: {node: '>=0.6'}
|
||||
|
||||
tough-cookie@6.0.0:
|
||||
resolution: {integrity: sha512-kXuRi1mtaKMrsLUxz3sQYvVl37B0Ns6MzfrtV5DvJceE9bPyspOqk9xxv7XbZWcfLWbFmm997vl83qUWVJA64w==}
|
||||
engines: {node: '>=16'}
|
||||
@@ -8228,6 +8372,10 @@ packages:
|
||||
tslib@2.8.1:
|
||||
resolution: {integrity: sha512-oJFu94HQb+KVduSUQL7wnpmqnfmLsOA/nAh6b6EH0wCEoK0/mPeXU6c3wKDV83MkOuHPRHtSXKKU99IBazS/2w==}
|
||||
|
||||
tsscmp@1.0.6:
|
||||
resolution: {integrity: sha512-LxhtAkPDTkVCMQjt2h6eBVY28KCjikZqZfMcC15YBeNjkgUpdCfBu5HoiOTDu86v6smE8yOjyEktJ8hlbANHQA==}
|
||||
engines: {node: '>=0.6.x'}
|
||||
|
||||
tty-browserify@0.0.1:
|
||||
resolution: {integrity: sha512-C3TaO7K81YvjCgQH9Q1S3R3P3BtN3RIM8n+OvX4il1K1zgE8ZhI0op7kClgkxtutIE8hQrcrHBXvIheqKUUCxw==}
|
||||
|
||||
@@ -8257,6 +8405,10 @@ packages:
|
||||
resolution: {integrity: sha512-TeTSQ6H5YHvpqVwBRcnLDCBnDOHWYu7IvGbHT6N8AOymcr9PJGjc1GTtiWZTYg0NCgYwvnYWEkVChQAr9bjfwA==}
|
||||
engines: {node: '>=16'}
|
||||
|
||||
type-is@2.0.1:
|
||||
resolution: {integrity: sha512-OZs6gsjF4vMp32qrCbiVSkrFmXtG/AZhY3t0iAMrMBiAZyV9oALtXO8hsrHbMXF9x6L3grlFuwW2oAz7cav+Gw==}
|
||||
engines: {node: '>= 0.6'}
|
||||
|
||||
typed-array-buffer@1.0.3:
|
||||
resolution: {integrity: sha512-nAYYwfY3qnzX30IkA6AQZjVbtK6duGontcQm1WSG1MD94YLqK0515GNApXkoxKOWMusVssAHWLh9SeaoefYFGw==}
|
||||
engines: {node: '>= 0.4'}
|
||||
@@ -8457,6 +8609,10 @@ packages:
|
||||
resolution: {integrity: sha512-spH26xU080ydGggxRyR1Yhcbgx+j3y5jbNXk/8L+iRvdIEQ4uTRH2Sgf2dokud6Q4oAtsbNvJ1Ft+9xmm6IZcA==}
|
||||
engines: {node: '>= 0.10'}
|
||||
|
||||
vary@1.1.2:
|
||||
resolution: {integrity: sha512-BNGbWLfd0eUPabhkXUVm0j8uuvREyTh5ovRa/dyow/BqAbZJyC+5fU+IzQOzmAKzYqYRAISoRhdQr3eIZ/PXqg==}
|
||||
engines: {node: '>= 0.8'}
|
||||
|
||||
vaul@1.1.2:
|
||||
resolution: {integrity: sha512-ZFkClGpWyI2WUQjdLJ/BaGuV6AVQiJ3uELGk3OYtP+B6yCO7Cmn9vPFXVJkRaGkOJu3m8bQMgtyzNHixULceQA==}
|
||||
peerDependencies:
|
||||
@@ -12911,6 +13067,11 @@ snapshots:
    dependencies:
      event-target-shim: 5.0.1

  accepts@1.3.8:
    dependencies:
      mime-types: 2.1.35
      negotiator: 0.6.3

  acorn-import-attributes@1.9.5(acorn@8.15.0):
    dependencies:
      acorn: 8.15.0
@@ -12923,6 +13084,14 @@ snapshots:
    dependencies:
      acorn: 8.15.0

  acorn-loose@8.5.2:
    dependencies:
      acorn: 8.15.0

  acorn-walk@8.3.5:
    dependencies:
      acorn: 8.15.0

  acorn@8.15.0: {}

  adjust-sourcemap-loader@4.0.0:
@@ -13472,14 +13641,25 @@ snapshots:

  console-browserify@1.2.0: {}

  console-grid@2.2.3: {}

  constants-browserify@1.0.0: {}

  content-disposition@1.0.1: {}

  content-type@1.0.5: {}

  convert-source-map@1.9.0: {}

  convert-source-map@2.0.0: {}

  cookie@1.0.2: {}

  cookies@0.9.1:
    dependencies:
      depd: 2.0.0
      keygrip: 1.1.0

  core-js-compat@3.47.0:
    dependencies:
      browserslist: 4.28.1
@@ -13843,6 +14023,8 @@ snapshots:

  deep-eql@5.0.2: {}

  deep-equal@1.0.1: {}

  deep-is@0.1.4: {}

  deepmerge-ts@7.1.5: {}
@@ -13867,6 +14049,12 @@ snapshots:
    dependencies:
      robust-predicates: 3.0.2

  delegates@1.0.0: {}

  depd@1.1.2: {}

  depd@2.0.0: {}

  dependency-graph@0.11.0: {}

  dequal@2.0.3: {}
@@ -13876,6 +14064,8 @@ snapshots:
      inherits: 2.0.4
      minimalistic-assert: 1.0.1

  destroy@1.2.0: {}

  detect-libc@2.1.2:
    optional: true

@@ -13958,6 +14148,10 @@ snapshots:

  eastasianwidth@0.2.0: {}

  ee-first@1.1.1: {}

  eight-colors@1.3.2: {}

  electron-to-chromium@1.5.267: {}

  elliptic@6.6.1:
@@ -13990,6 +14184,8 @@ snapshots:

  emojis-list@3.0.0: {}

  encodeurl@2.0.0: {}

  endent@2.1.0:
    dependencies:
      dedent: 0.7.0
@@ -14209,6 +14405,8 @@ snapshots:

  escalade@3.2.0: {}

  escape-html@1.0.3: {}

  escape-string-regexp@4.0.0: {}

  escape-string-regexp@5.0.0: {}
@@ -14606,6 +14804,8 @@ snapshots:
      react: 18.3.1
      react-dom: 18.3.1(react@18.3.1)

  fresh@0.5.2: {}

  fs-extra@10.1.0:
    dependencies:
      graceful-fs: 4.2.11
@@ -14994,6 +15194,27 @@ snapshots:
      domutils: 2.8.0
      entities: 2.2.0

  http-assert@1.5.0:
    dependencies:
      deep-equal: 1.0.1
      http-errors: 1.8.1

  http-errors@1.8.1:
    dependencies:
      depd: 1.1.2
      inherits: 2.0.4
      setprototypeof: 1.2.0
      statuses: 1.5.0
      toidentifier: 1.0.1

  http-errors@2.0.1:
    dependencies:
      depd: 2.0.0
      inherits: 2.0.4
      setprototypeof: 1.2.0
      statuses: 2.0.2
      toidentifier: 1.0.1

  http-proxy-agent@7.0.2:
    dependencies:
      agent-base: 7.1.4
@@ -15409,12 +15630,41 @@ snapshots:
    dependencies:
      commander: 8.3.0

  keygrip@1.1.0:
    dependencies:
      tsscmp: 1.0.6

  keyv@4.5.4:
    dependencies:
      json-buffer: 3.0.1

  khroma@2.1.0: {}

  koa-compose@4.1.0: {}

  koa-static-resolver@1.0.6: {}

  koa@3.2.0:
    dependencies:
      accepts: 1.3.8
      content-disposition: 1.0.1
      content-type: 1.0.5
      cookies: 0.9.1
      delegates: 1.0.0
      destroy: 1.2.0
      encodeurl: 2.0.0
      escape-html: 1.0.3
      fresh: 0.5.2
      http-assert: 1.5.0
      http-errors: 2.0.1
      koa-compose: 4.1.0
      mime-types: 3.0.2
      on-finished: 2.4.1
      parseurl: 1.3.3
      statuses: 2.0.2
      type-is: 2.0.1
      vary: 1.1.2

  langium@3.3.1:
    dependencies:
      chevrotain: 11.0.3
@@ -15552,6 +15802,8 @@ snapshots:

  lz-string@1.5.0: {}

  lz-utils@2.1.0: {}

  magic-string@0.30.21:
    dependencies:
      '@jridgewell/sourcemap-codec': 1.5.5
@@ -15771,6 +16023,8 @@ snapshots:

  mdurl@2.0.0: {}

  media-typer@1.1.0: {}

  memfs@3.5.3:
    dependencies:
      fs-monkey: 1.1.0
@@ -16047,10 +16301,16 @@ snapshots:

  mime-db@1.52.0: {}

  mime-db@1.54.0: {}

  mime-types@2.1.35:
    dependencies:
      mime-db: 1.52.0

  mime-types@3.0.2:
    dependencies:
      mime-db: 1.54.0

  mimic-fn@2.1.0: {}

  min-indent@1.0.1: {}
@@ -16084,6 +16344,34 @@ snapshots:

  module-details-from-path@1.0.4: {}

  monocart-coverage-reports@2.12.9:
    dependencies:
      acorn: 8.15.0
      acorn-loose: 8.5.2
      acorn-walk: 8.3.5
      commander: 14.0.2
      console-grid: 2.2.3
      eight-colors: 1.3.2
      foreground-child: 3.3.1
      istanbul-lib-coverage: 3.2.2
      istanbul-lib-report: 3.0.1
      istanbul-reports: 3.2.0
      lz-utils: 2.1.0
      monocart-locator: 1.0.2

  monocart-locator@1.0.2: {}

  monocart-reporter@2.10.0:
    dependencies:
      console-grid: 2.2.3
      eight-colors: 1.3.2
      koa: 3.2.0
      koa-static-resolver: 1.0.6
      lz-utils: 2.1.0
      monocart-coverage-reports: 2.12.9
      monocart-locator: 1.0.2
      nodemailer: 7.0.13

  motion-dom@12.24.8:
    dependencies:
      motion-utils: 12.23.28
@@ -16138,6 +16426,8 @@ snapshots:

  natural-compare@1.4.0: {}

  negotiator@0.6.3: {}

  neo-async@2.6.2: {}

  next-themes@0.4.6(react-dom@18.3.1(react@18.3.1))(react@18.3.1):
@@ -16237,6 +16527,8 @@ snapshots:

  node-releases@2.0.27: {}

  nodemailer@7.0.13: {}

  normalize-path@3.0.0: {}

  npm-run-path@4.0.1:
@@ -16338,6 +16630,10 @@ snapshots:

  obug@2.1.1: {}

  on-finished@2.4.1:
    dependencies:
      ee-first: 1.1.1

  once@1.4.0:
    dependencies:
      wrappy: 1.0.2
@@ -16495,6 +16791,8 @@ snapshots:
      entities: 6.0.1
    optional: true

  parseurl@1.3.3: {}

  pascal-case@3.1.2:
    dependencies:
      no-case: 3.0.4
@@ -17365,6 +17663,8 @@ snapshots:

  setimmediate@1.0.5: {}

  setprototypeof@1.2.0: {}

  sha.js@2.4.12:
    dependencies:
      inherits: 2.0.4
@@ -17526,6 +17826,8 @@ snapshots:
    dependencies:
      type-fest: 0.7.1

  statuses@1.5.0: {}

  statuses@2.0.2: {}

  std-env@3.10.0: {}
@@ -17873,6 +18175,8 @@ snapshots:
    dependencies:
      is-number: 7.0.0

  toidentifier@1.0.1: {}

  tough-cookie@6.0.0:
    dependencies:
      tldts: 7.0.19
@@ -17930,6 +18234,8 @@ snapshots:

  tslib@2.8.1: {}

  tsscmp@1.0.6: {}

  tty-browserify@0.0.1: {}

  twemoji-parser@14.0.0: {}
@@ -17953,6 +18259,12 @@ snapshots:

  type-fest@4.41.0: {}

  type-is@2.0.1:
    dependencies:
      content-type: 1.0.5
      media-typer: 1.1.0
      mime-types: 3.0.2

  typed-array-buffer@1.0.3:
    dependencies:
      call-bound: 1.0.4
@@ -18182,6 +18494,8 @@ snapshots:

  validator@13.15.26: {}

  vary@1.1.2: {}

  vaul@1.1.2(@types/react-dom@18.3.5(@types/react@18.3.17))(@types/react@18.3.17)(react-dom@18.3.1(react@18.3.1))(react@18.3.1):
    dependencies:
      '@radix-ui/react-dialog': 1.1.15(@types/react-dom@18.3.5(@types/react@18.3.17))(@types/react@18.3.17)(react-dom@18.3.1(react@18.3.1))(react@18.3.1)

@@ -66,6 +66,29 @@ describe("useOnboardingWizardStore", () => {
      "no tests",
    ]);
  });

  it("ignores new selections when at the max limit", () => {
    useOnboardingWizardStore.getState().togglePainPoint("a");
    useOnboardingWizardStore.getState().togglePainPoint("b");
    useOnboardingWizardStore.getState().togglePainPoint("c");
    useOnboardingWizardStore.getState().togglePainPoint("d");
    expect(useOnboardingWizardStore.getState().painPoints).toEqual([
      "a",
      "b",
      "c",
    ]);
  });

  it("still allows deselecting when at the max limit", () => {
    useOnboardingWizardStore.getState().togglePainPoint("a");
    useOnboardingWizardStore.getState().togglePainPoint("b");
    useOnboardingWizardStore.getState().togglePainPoint("c");
    useOnboardingWizardStore.getState().togglePainPoint("b");
    expect(useOnboardingWizardStore.getState().painPoints).toEqual([
      "a",
      "c",
    ]);
  });
});
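The selection-cap rule these store tests pin down — add until three are picked, ignore further additions, still allow deselection — can be sketched as a small pure helper. This is a hypothetical illustration only; the real `togglePainPoint` is a Zustand store action in the onboarding wizard store, and the names below (`MAX_PAIN_POINTS`, the standalone function shape) are assumptions for the sketch.

```typescript
// Hypothetical sketch of the toggle rule exercised by the tests above.
// Not the actual store implementation — a pure function for illustration.
const MAX_PAIN_POINTS = 3;

function togglePainPoint(current: string[], id: string): string[] {
  if (current.includes(id)) {
    // Deselecting is always allowed, even at the limit.
    return current.filter((p) => p !== id);
  }
  if (current.length >= MAX_PAIN_POINTS) {
    // At the limit: ignore new selections.
    return current;
  }
  return [...current, id];
}
```

Keeping the rule as a pure function like this is what makes the store tests above cheap to write: each `togglePainPoint` call maps to one state transition with no UI involved.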

  describe("setOtherPainPoint", () => {

@@ -7,9 +7,9 @@ export function ProgressBar({ currentStep, totalSteps }: Props) {
  const percent = (currentStep / totalSteps) * 100;

  return (
    <div className="absolute left-0 top-0 h-[0.625rem] w-full bg-neutral-300">
    <div className="absolute left-0 top-0 h-[3px] w-full bg-neutral-200">
      <div
        className="h-full bg-purple-400 shadow-[0_0_4px_2px_rgba(168,85,247,0.5)] transition-all duration-500 ease-out"
        className="h-full bg-purple-400 transition-all duration-500 ease-out"
        style={{ width: `${percent}%` }}
      />
    </div>

@@ -2,6 +2,7 @@

import { Text } from "@/components/atoms/Text/Text";
import { cn } from "@/lib/utils";
import { Check } from "@phosphor-icons/react";

interface Props {
  icon: React.ReactNode;
@@ -24,13 +25,18 @@ export function SelectableCard({
      onClick={onClick}
      aria-pressed={selected}
      className={cn(
        "flex h-[9rem] w-[10.375rem] shrink-0 flex-col items-center justify-center gap-3 rounded-xl border-2 bg-white px-6 py-5 transition-all hover:shadow-sm md:shrink lg:gap-2 lg:px-10 lg:py-8",
        "relative flex h-[9rem] w-[10.375rem] shrink-0 flex-col items-center justify-center gap-3 rounded-xl border-2 bg-white px-6 py-5 transition-all hover:shadow-sm md:shrink lg:gap-2 lg:px-10 lg:py-8",
        className,
        selected
          ? "border-purple-500 bg-purple-50 shadow-sm"
          : "border-transparent",
      )}
    >
      {selected && (
        <span className="absolute right-2 top-2 flex h-5 w-5 items-center justify-center rounded-full bg-purple-500">
          <Check size={12} weight="bold" className="text-white" />
        </span>
      )}
      <Text
        variant="lead"
        as="span"

@@ -3,6 +3,7 @@
import { Button } from "@/components/atoms/Button/Button";
import { Input } from "@/components/atoms/Input/Input";
import { Text } from "@/components/atoms/Text/Text";
import { cn } from "@/lib/utils";
import { ReactNode } from "react";

import { FadeIn } from "@/components/atoms/FadeIn/FadeIn";
@@ -73,6 +74,8 @@ export function PainPointsStep() {
    togglePainPoint,
    setOtherPainPoint,
    hasSomethingElse,
    atLimit,
    shaking,
    canContinue,
    handleLaunch,
  } = usePainPointsStep();
@@ -90,7 +93,7 @@ export function PainPointsStep() {
          What's eating your time?
        </Text>
        <Text variant="lead" className="!text-zinc-500">
          Pick the tasks you'd love to hand off to Autopilot
          Pick the tasks you'd love to hand off to AutoPilot
        </Text>
      </div>

@@ -107,11 +110,22 @@ export function PainPointsStep() {
            />
          ))}
        </div>
        {!hasSomethingElse ? (
          <Text variant="small" className="!text-zinc-500">
            Pick as many as you want — you can always change later
          </Text>
        ) : null}
        <Text
          variant="small"
          className={cn(
            "transition-colors",
            atLimit && canContinue ? "!text-green-600" : "!text-zinc-500",
            shaking && "animate-shake",
          )}
        >
          {shaking
            ? "You've picked 3 — tap one to swap it out"
            : atLimit && canContinue
              ? "3 selected — you're all set!"
              : atLimit && hasSomethingElse
                ? "Tell us what else takes up your time"
                : "Pick up to 3 to start — AutoPilot can help with anything else later"}
        </Text>
      </div>

      {hasSomethingElse && (
@@ -133,7 +147,7 @@ export function PainPointsStep() {
          disabled={!canContinue}
          className="w-full max-w-xs"
        >
          Launch Autopilot
          Launch AutoPilot
        </Button>
      </div>
    </FadeIn>

@@ -8,6 +8,7 @@ import { FadeIn } from "@/components/atoms/FadeIn/FadeIn";
import { SelectableCard } from "../components/SelectableCard";
import { useOnboardingWizardStore } from "../store";
import { Emoji } from "@/components/atoms/Emoji/Emoji";
import { useEffect, useRef } from "react";

const IMG_SIZE = 42;

@@ -57,12 +58,26 @@ export function RoleStep() {
  const setRole = useOnboardingWizardStore((s) => s.setRole);
  const setOtherRole = useOnboardingWizardStore((s) => s.setOtherRole);
  const nextStep = useOnboardingWizardStore((s) => s.nextStep);
  const autoAdvanceTimer = useRef<ReturnType<typeof setTimeout> | null>(null);

  const isOther = role === "Other";
  const canContinue = role && (!isOther || otherRole.trim());

  function handleContinue() {
    if (canContinue) {
  useEffect(() => {
    return () => {
      if (autoAdvanceTimer.current) clearTimeout(autoAdvanceTimer.current);
    };
  }, []);

  function handleRoleSelect(id: string) {
    if (autoAdvanceTimer.current) clearTimeout(autoAdvanceTimer.current);
    setRole(id);
    if (id !== "Other") {
      autoAdvanceTimer.current = setTimeout(nextStep, 350);
    }
  }

  function handleOtherContinue() {
    if (otherRole.trim()) {
      nextStep();
    }
  }
@@ -78,7 +93,7 @@ export function RoleStep() {
          What best describes you, {name}?
        </Text>
        <Text variant="lead" className="!text-zinc-500">
          Autopilot will tailor automations to your world
          So AutoPilot knows how to help you best
        </Text>
      </div>

@@ -89,33 +104,35 @@ export function RoleStep() {
            icon={r.icon}
            label={r.label}
            selected={role === r.id}
            onClick={() => setRole(r.id)}
            onClick={() => handleRoleSelect(r.id)}
            className="p-8"
          />
        ))}
      </div>

      {isOther && (
        <div className="-mb-5 w-full px-8 md:px-0">
          <Input
            id="other-role"
            label="Other role"
            hideLabel
            placeholder="Describe your role..."
            value={otherRole}
            onChange={(e) => setOtherRole(e.target.value)}
            autoFocus
          />
        </div>
      )}
        <>
          <div className="-mb-5 w-full px-8 md:px-0">
            <Input
              id="other-role"
              label="Other role"
              hideLabel
              placeholder="Describe your role..."
              value={otherRole}
              onChange={(e) => setOtherRole(e.target.value)}
              autoFocus
            />
          </div>

      <Button
        onClick={handleContinue}
        disabled={!canContinue}
        className="w-full max-w-xs"
      >
        Continue
      </Button>
          <Button
            onClick={handleOtherContinue}
            disabled={!otherRole.trim()}
            className="w-full max-w-xs"
          >
            Continue
          </Button>
        </>
      )}
    </div>
  </FadeIn>
  );

@@ -4,13 +4,6 @@ import { AutoGPTLogo } from "@/components/atoms/AutoGPTLogo/AutoGPTLogo";
import { Button } from "@/components/atoms/Button/Button";
import { Input } from "@/components/atoms/Input/Input";
import { Text } from "@/components/atoms/Text/Text";
import {
  Tooltip,
  TooltipContent,
  TooltipProvider,
  TooltipTrigger,
} from "@/components/atoms/Tooltip/BaseTooltip";
import { Question } from "@phosphor-icons/react";
import { FadeIn } from "@/components/atoms/FadeIn/FadeIn";
import { useOnboardingWizardStore } from "../store";

@@ -40,36 +33,16 @@ export function WelcomeStep() {
        <Text variant="h3">Welcome to AutoGPT</Text>
        <Text variant="lead" as="span" className="!text-zinc-500">
          Let's personalize your experience so{" "}
          <span className="relative mr-3 inline-block bg-gradient-to-r from-purple-500 to-indigo-500 bg-clip-text text-transparent">
            Autopilot
            <span className="absolute -right-4 top-0">
              <TooltipProvider delayDuration={400}>
                <Tooltip>
                  <TooltipTrigger asChild>
                    <button
                      type="button"
                      aria-label="What is Autopilot?"
                      className="inline-flex text-purple-500"
                    >
                      <Question size={14} />
                    </button>
                  </TooltipTrigger>
                  <TooltipContent>
                    Autopilot is AutoGPT's AI assistant that watches your
                    connected apps, spots repetitive tasks you do every day
                    and runs them for you automatically.
                  </TooltipContent>
                </Tooltip>
              </TooltipProvider>
            </span>
          </span>
          <span className="bg-gradient-to-r from-purple-500 to-indigo-500 bg-clip-text text-transparent">
            AutoPilot
          </span>{" "}
          can start saving you time right away
          can start saving you time
        </Text>
      </div>

      <Input
        id="first-name"
        label="Your first name"
        label="What should I call you?"
        placeholder="e.g. John"
        value={name}
        onChange={(e) => setName(e.target.value)}

@@ -0,0 +1,154 @@
|
||||
import {
|
||||
render,
|
||||
screen,
|
||||
fireEvent,
|
||||
cleanup,
|
||||
} from "@/tests/integrations/test-utils";
|
||||
import { afterEach, beforeEach, describe, expect, test, vi } from "vitest";
|
||||
import { useOnboardingWizardStore } from "../../store";
|
||||
import { PainPointsStep } from "../PainPointsStep";
|
||||
|
||||
vi.mock("@/components/atoms/Emoji/Emoji", () => ({
|
||||
Emoji: ({ text }: { text: string }) => <span>{text}</span>,
|
||||
}));
|
||||
|
||||
vi.mock("@/components/atoms/FadeIn/FadeIn", () => ({
|
||||
FadeIn: ({ children }: { children: React.ReactNode }) => (
|
||||
<div>{children}</div>
|
||||
),
|
||||
}));
|
||||
|
||||
function getCard(name: RegExp) {
|
||||
return screen.getByRole("button", { name });
|
||||
}
|
||||
|
||||
function clickCard(name: RegExp) {
|
||||
fireEvent.click(getCard(name));
|
||||
}
|
||||
|
||||
function getLaunchButton() {
|
||||
return screen.getByRole("button", { name: /launch autopilot/i });
|
||||
}
|
||||
|
||||
afterEach(cleanup);
|
||||
|
||||
beforeEach(() => {
|
||||
useOnboardingWizardStore.getState().reset();
|
||||
useOnboardingWizardStore.getState().setName("Alice");
|
||||
useOnboardingWizardStore.getState().setRole("Founder/CEO");
|
||||
useOnboardingWizardStore.getState().goToStep(3);
|
||||
});
|
||||
|
||||
describe("PainPointsStep", () => {
|
||||
test("renders all pain point cards", () => {
|
||||
render(<PainPointsStep />);
|
||||
|
||||
expect(getCard(/finding leads/i)).toBeDefined();
|
||||
expect(getCard(/email & outreach/i)).toBeDefined();
|
||||
expect(getCard(/reports & data/i)).toBeDefined();
|
||||
expect(getCard(/customer support/i)).toBeDefined();
|
||||
expect(getCard(/social media/i)).toBeDefined();
|
||||
expect(getCard(/something else/i)).toBeDefined();
|
||||
});
|
||||
|
||||
test("shows default helper text", () => {
|
||||
render(<PainPointsStep />);
|
||||
|
||||
expect(
|
||||
screen.getAllByText(/pick up to 3 to start/i).length,
|
||||
).toBeGreaterThan(0);
|
||||
});
|
||||
|
||||
test("selecting a card marks it as pressed", () => {
|
||||
render(<PainPointsStep />);
|
||||
|
||||
clickCard(/finding leads/i);
|
||||
|
||||
expect(getCard(/finding leads/i).getAttribute("aria-pressed")).toBe("true");
|
||||
});
|
||||
|
||||
test("launch button is disabled when nothing is selected", () => {
|
||||
render(<PainPointsStep />);
|
||||
|
||||
expect(getLaunchButton().hasAttribute("disabled")).toBe(true);
|
||||
});
|
||||
|
||||
  test("launch button is enabled after selecting a pain point", () => {
    render(<PainPointsStep />);

    clickCard(/finding leads/i);

    expect(getLaunchButton().hasAttribute("disabled")).toBe(false);
  });

  test("shows success text when 3 items are selected", () => {
    render(<PainPointsStep />);

    clickCard(/finding leads/i);
    clickCard(/email & outreach/i);
    clickCard(/reports & data/i);

    expect(screen.getAllByText(/3 selected/i).length).toBeGreaterThan(0);
  });

  test("does not select a 4th item when at the limit", () => {
    render(<PainPointsStep />);

    clickCard(/finding leads/i);
    clickCard(/email & outreach/i);
    clickCard(/reports & data/i);
    clickCard(/customer support/i);

    expect(getCard(/customer support/i).getAttribute("aria-pressed")).toBe(
      "false",
    );
  });

  test("can deselect when at the limit and select a different one", () => {
    render(<PainPointsStep />);

    clickCard(/finding leads/i);
    clickCard(/email & outreach/i);
    clickCard(/reports & data/i);

    clickCard(/finding leads/i);
    expect(getCard(/finding leads/i).getAttribute("aria-pressed")).toBe(
      "false",
    );

    clickCard(/customer support/i);
    expect(getCard(/customer support/i).getAttribute("aria-pressed")).toBe(
      "true",
    );
  });

  test("shows input when 'Something else' is selected", () => {
    render(<PainPointsStep />);

    clickCard(/something else/i);

    expect(
      screen.getByPlaceholderText(/what else takes up your time/i),
    ).toBeDefined();
  });

  test("launch button is disabled when 'Something else' selected but input empty", () => {
    render(<PainPointsStep />);

    clickCard(/something else/i);

    expect(getLaunchButton().hasAttribute("disabled")).toBe(true);
  });

  test("launch button is enabled when 'Something else' selected and input filled", () => {
    render(<PainPointsStep />);

    clickCard(/something else/i);
    fireEvent.change(
      screen.getByPlaceholderText(/what else takes up your time/i),
      { target: { value: "Manual invoicing" } },
    );

    expect(getLaunchButton().hasAttribute("disabled")).toBe(false);
  });
});
@@ -0,0 +1,123 @@
import {
  render,
  screen,
  fireEvent,
  cleanup,
} from "@/tests/integrations/test-utils";
import { afterEach, beforeEach, describe, expect, test, vi } from "vitest";
import { useOnboardingWizardStore } from "../../store";
import { RoleStep } from "../RoleStep";

vi.mock("@/components/atoms/Emoji/Emoji", () => ({
  Emoji: ({ text }: { text: string }) => <span>{text}</span>,
}));

vi.mock("@/components/atoms/FadeIn/FadeIn", () => ({
  FadeIn: ({ children }: { children: React.ReactNode }) => (
    <div>{children}</div>
  ),
}));

afterEach(() => {
  cleanup();
  vi.useRealTimers();
});

beforeEach(() => {
  vi.useFakeTimers();
  useOnboardingWizardStore.getState().reset();
  useOnboardingWizardStore.getState().setName("Alice");
  useOnboardingWizardStore.getState().goToStep(2);
});

describe("RoleStep", () => {
  test("renders all role cards", () => {
    render(<RoleStep />);

    expect(screen.getByText("Founder / CEO")).toBeDefined();
    expect(screen.getByText("Operations")).toBeDefined();
    expect(screen.getByText("Sales / BD")).toBeDefined();
    expect(screen.getByText("Marketing")).toBeDefined();
    expect(screen.getByText("Product / PM")).toBeDefined();
    expect(screen.getByText("Engineering")).toBeDefined();
    expect(screen.getByText("HR / People")).toBeDefined();
    expect(screen.getByText("Other")).toBeDefined();
  });

  test("displays the user name in the heading", () => {
    render(<RoleStep />);

    expect(
      screen.getAllByText(/what best describes you, alice/i).length,
    ).toBeGreaterThan(0);
  });

  test("selecting a non-Other role auto-advances after delay", () => {
    render(<RoleStep />);

    fireEvent.click(screen.getByRole("button", { name: /engineering/i }));

    expect(useOnboardingWizardStore.getState().role).toBe("Engineering");
    expect(useOnboardingWizardStore.getState().currentStep).toBe(2);

    vi.advanceTimersByTime(350);

    expect(useOnboardingWizardStore.getState().currentStep).toBe(3);
  });

  test("selecting 'Other' does not auto-advance", () => {
    render(<RoleStep />);

    fireEvent.click(screen.getByRole("button", { name: /\bother\b/i }));

    vi.advanceTimersByTime(500);

    expect(useOnboardingWizardStore.getState().currentStep).toBe(2);
  });

  test("selecting 'Other' shows text input and Continue button", () => {
    render(<RoleStep />);

    fireEvent.click(screen.getByRole("button", { name: /\bother\b/i }));

    expect(screen.getByPlaceholderText(/describe your role/i)).toBeDefined();
    expect(screen.getByRole("button", { name: /continue/i })).toBeDefined();
  });

  test("Continue button is disabled when Other input is empty", () => {
    render(<RoleStep />);

    fireEvent.click(screen.getByRole("button", { name: /\bother\b/i }));

    const continueBtn = screen.getByRole("button", { name: /continue/i });
    expect(continueBtn.hasAttribute("disabled")).toBe(true);
  });

  test("Continue button advances when Other role text is filled", () => {
    render(<RoleStep />);

    fireEvent.click(screen.getByRole("button", { name: /\bother\b/i }));
    fireEvent.change(screen.getByPlaceholderText(/describe your role/i), {
      target: { value: "Designer" },
    });

    const continueBtn = screen.getByRole("button", { name: /continue/i });
    expect(continueBtn.hasAttribute("disabled")).toBe(false);

    fireEvent.click(continueBtn);
    expect(useOnboardingWizardStore.getState().currentStep).toBe(3);
  });

  test("switching from Other to a regular role cancels Other and auto-advances", () => {
    render(<RoleStep />);

    fireEvent.click(screen.getByRole("button", { name: /\bother\b/i }));
    expect(screen.getByPlaceholderText(/describe your role/i)).toBeDefined();

    fireEvent.click(screen.getByRole("button", { name: /marketing/i }));

    expect(useOnboardingWizardStore.getState().role).toBe("Marketing");
    vi.advanceTimersByTime(350);
    expect(useOnboardingWizardStore.getState().currentStep).toBe(3);
  });
});
@@ -1,4 +1,5 @@
import { useOnboardingWizardStore } from "../store";
import { useEffect, useRef, useState } from "react";
import { MAX_PAIN_POINT_SELECTIONS, useOnboardingWizardStore } from "../store";

const ROLE_TOP_PICKS: Record<string, string[]> = {
  "Founder/CEO": [
@@ -23,18 +24,38 @@ export function usePainPointsStep() {
  const role = useOnboardingWizardStore((s) => s.role);
  const painPoints = useOnboardingWizardStore((s) => s.painPoints);
  const otherPainPoint = useOnboardingWizardStore((s) => s.otherPainPoint);
  const togglePainPoint = useOnboardingWizardStore((s) => s.togglePainPoint);
  const storeToggle = useOnboardingWizardStore((s) => s.togglePainPoint);
  const setOtherPainPoint = useOnboardingWizardStore(
    (s) => s.setOtherPainPoint,
  );
  const nextStep = useOnboardingWizardStore((s) => s.nextStep);
  const [shaking, setShaking] = useState(false);
  const shakeTimer = useRef<ReturnType<typeof setTimeout> | null>(null);

  useEffect(() => {
    return () => {
      if (shakeTimer.current) clearTimeout(shakeTimer.current);
    };
  }, []);

  const topIDs = getTopPickIDs(role);
  const hasSomethingElse = painPoints.includes("Something else");
  const atLimit = painPoints.length >= MAX_PAIN_POINT_SELECTIONS;
  const canContinue =
    painPoints.length > 0 &&
    (!hasSomethingElse || Boolean(otherPainPoint.trim()));

  function togglePainPoint(id: string) {
    const alreadySelected = painPoints.includes(id);
    if (!alreadySelected && atLimit) {
      if (shakeTimer.current) clearTimeout(shakeTimer.current);
      setShaking(true);
      shakeTimer.current = setTimeout(() => setShaking(false), 600);
      return;
    }
    storeToggle(id);
  }

  function handleLaunch() {
    if (canContinue) {
      nextStep();
@@ -48,6 +69,8 @@ export function usePainPointsStep() {
    togglePainPoint,
    setOtherPainPoint,
    hasSomethingElse,
    atLimit,
    shaking,
    canContinue,
    handleLaunch,
  };
@@ -1,5 +1,6 @@
import { create } from "zustand";

export const MAX_PAIN_POINT_SELECTIONS = 3;
export type Step = 1 | 2 | 3 | 4;

interface OnboardingWizardState {
@@ -40,6 +41,8 @@ export const useOnboardingWizardStore = create<OnboardingWizardState>(
    togglePainPoint(painPoint) {
      set((state) => {
        const exists = state.painPoints.includes(painPoint);
        if (!exists && state.painPoints.length >= MAX_PAIN_POINT_SELECTIONS)
          return state;
        return {
          painPoints: exists
            ? state.painPoints.filter((p) => p !== painPoint)
@@ -40,14 +40,14 @@ export const ContentRenderer: React.FC<{
    !shortContent
  ) {
    return (
      <div className="overflow-hidden [&>*]:rounded-xlarge [&>*]:!text-xs [&_pre]:whitespace-pre-wrap [&_pre]:break-words">
      <div className="overflow-x-auto [&>*]:rounded-xlarge [&>*]:!text-xs [&_pre]:whitespace-pre-wrap [&_pre]:break-words">
        {renderer?.render(value, metadata)}
      </div>
    );
  }

  return (
    <div className="overflow-hidden [&>*]:rounded-xlarge [&>*]:!text-xs">
    <div className="overflow-x-auto [&>*]:rounded-xlarge [&>*]:!text-xs">
      <TextRenderer value={value} truncateLengthLimit={200} />
    </div>
  );
@@ -8,6 +8,7 @@ import { Flag, useGetFlag } from "@/services/feature-flags/use-get-flag";
import { SidebarProvider } from "@/components/ui/sidebar";
import { cn } from "@/lib/utils";
import { UploadSimple } from "@phosphor-icons/react";
import dynamic from "next/dynamic";
import { useCallback, useEffect, useRef, useState } from "react";
import { ChatContainer } from "./components/ChatContainer/ChatContainer";
import { ChatSidebar } from "./components/ChatSidebar/ChatSidebar";
@@ -20,6 +21,14 @@ import { RateLimitResetDialog } from "./components/RateLimitResetDialog/RateLimi
import { ScaleLoader } from "./components/ScaleLoader/ScaleLoader";
import { useCopilotPage } from "./useCopilotPage";

const ArtifactPanel = dynamic(
  () =>
    import("./components/ArtifactPanel/ArtifactPanel").then(
      (m) => m.ArtifactPanel,
    ),
  { ssr: false },
);

export function CopilotPage() {
  const [isDragging, setIsDragging] = useState(false);
  const [droppedFiles, setDroppedFiles] = useState<File[]>([]);
@@ -80,6 +89,10 @@ export function CopilotPage() {
    isUploadingFiles,
    isUserLoading,
    isLoggedIn,
    // Pagination
    hasMoreMessages,
    isLoadingMore,
    loadMore,
    // Mobile drawer
    isMobile,
    isDrawerOpen,
@@ -116,6 +129,7 @@ export function CopilotPage() {
  const resetCost = usage?.reset_cost;

  const isBillingEnabled = useGetFlag(Flag.ENABLE_PLATFORM_PAYMENT);
  const isArtifactsEnabled = useGetFlag(Flag.ARTIFACTS);
  const { credits, fetchCredits } = useCredits({ fetchInitialCredits: true });
  const hasInsufficientCredits =
    credits !== null && resetCost != null && credits < resetCost;
@@ -150,48 +164,55 @@ export function CopilotPage() {
      className="h-[calc(100vh-72px)] min-h-0"
    >
      {!isMobile && <ChatSidebar />}
      <div
        className="relative flex h-full w-full flex-col overflow-hidden bg-[#f8f8f9] px-0"
        onDragEnter={handleDragEnter}
        onDragOver={handleDragOver}
        onDragLeave={handleDragLeave}
        onDrop={handleDrop}
      >
        {isMobile && <MobileHeader onOpenDrawer={handleOpenDrawer} />}
        <NotificationBanner />
        {/* Drop overlay */}
      <div className="flex h-full w-full flex-row overflow-hidden">
        <div
          className={cn(
            "pointer-events-none absolute inset-0 z-50 flex flex-col items-center justify-center gap-3 rounded-lg border-2 border-dashed border-violet-400 bg-violet-500/10 transition-opacity duration-150",
            isDragging ? "opacity-100" : "opacity-0",
          )}
          className="relative flex min-w-0 flex-1 flex-col overflow-hidden bg-[#f8f8f9] px-0"
          onDragEnter={handleDragEnter}
          onDragOver={handleDragOver}
          onDragLeave={handleDragLeave}
          onDrop={handleDrop}
        >
          <UploadSimple className="h-10 w-10 text-violet-500" weight="bold" />
          <span className="text-lg font-medium text-violet-600">
            Drop files here
          </span>
        </div>
        <div className="flex-1 overflow-hidden">
          <ChatContainer
            messages={messages}
            status={status}
            error={error}
            sessionId={sessionId}
            isLoadingSession={isLoadingSession}
            isSessionError={isSessionError}
            isCreatingSession={isCreatingSession}
            isReconnecting={isReconnecting}
            isSyncing={isSyncing}
            onCreateSession={createSession}
            onSend={onSend}
            onStop={stop}
            isUploadingFiles={isUploadingFiles}
            droppedFiles={droppedFiles}
            onDroppedFilesConsumed={handleDroppedFilesConsumed}
            historicalDurations={historicalDurations}
          />
          {isMobile && <MobileHeader onOpenDrawer={handleOpenDrawer} />}
          <NotificationBanner />
          {/* Drop overlay */}
          <div
            className={cn(
              "pointer-events-none absolute inset-0 z-50 flex flex-col items-center justify-center gap-3 rounded-lg border-2 border-dashed border-violet-400 bg-violet-500/10 transition-opacity duration-150",
              isDragging ? "opacity-100" : "opacity-0",
            )}
          >
            <UploadSimple className="h-10 w-10 text-violet-500" weight="bold" />
            <span className="text-lg font-medium text-violet-600">
              Drop files here
            </span>
          </div>
          <div className="flex-1 overflow-hidden">
            <ChatContainer
              messages={messages}
              status={status}
              error={error}
              sessionId={sessionId}
              isLoadingSession={isLoadingSession}
              isSessionError={isSessionError}
              isCreatingSession={isCreatingSession}
              isReconnecting={isReconnecting}
              isSyncing={isSyncing}
              onCreateSession={createSession}
              onSend={onSend}
              onStop={stop}
              isUploadingFiles={isUploadingFiles}
              hasMoreMessages={hasMoreMessages}
              isLoadingMore={isLoadingMore}
              onLoadMore={loadMore}
              droppedFiles={droppedFiles}
              onDroppedFilesConsumed={handleDroppedFilesConsumed}
              historicalDurations={historicalDurations}
            />
          </div>
        </div>
        {!isMobile && isArtifactsEnabled && <ArtifactPanel />}
      </div>
      {isMobile && isArtifactsEnabled && <ArtifactPanel mobile />}
      {isMobile && (
        <MobileDrawer
          isOpen={isDrawerOpen}
@@ -0,0 +1,114 @@
"use client";

import { toast } from "@/components/molecules/Toast/use-toast";
import { cn } from "@/lib/utils";
import { CaretRight, DownloadSimple } from "@phosphor-icons/react";
import type { ArtifactRef } from "../../store";
import { useCopilotUIStore } from "../../store";
import { downloadArtifact } from "../ArtifactPanel/downloadArtifact";
import { classifyArtifact } from "../ArtifactPanel/helpers";

interface Props {
  artifact: ArtifactRef;
}

function formatSize(bytes?: number): string {
  if (!bytes) return "";
  if (bytes < 1024) return `${bytes} B`;
  if (bytes < 1024 * 1024) return `${(bytes / 1024).toFixed(1)} KB`;
  return `${(bytes / (1024 * 1024)).toFixed(1)} MB`;
}

export function ArtifactCard({ artifact }: Props) {
  const activeID = useCopilotUIStore((s) => s.artifactPanel.activeArtifact?.id);
  const isOpen = useCopilotUIStore((s) => s.artifactPanel.isOpen);
  const openArtifact = useCopilotUIStore((s) => s.openArtifact);

  const isActive = isOpen && activeID === artifact.id;
  const classification = classifyArtifact(
    artifact.mimeType,
    artifact.title,
    artifact.sizeBytes,
  );
  const Icon = classification.icon;

  function handleDownloadOnly() {
    downloadArtifact(artifact).catch(() => {
      toast({
        title: "Download failed",
        description: "Couldn't fetch the file.",
        variant: "destructive",
      });
    });
  }

  if (!classification.openable) {
    return (
      <button
        type="button"
        onClick={handleDownloadOnly}
        className="my-1 flex w-full items-center gap-3 rounded-lg border border-zinc-200 bg-white px-3 py-2.5 text-left transition-colors hover:bg-zinc-50"
      >
        <Icon size={20} className="shrink-0 text-zinc-400" />
        <div className="min-w-0 flex-1">
          <p className="truncate text-sm font-medium text-zinc-900">
            {artifact.title}
          </p>
          <p className="text-xs text-zinc-400">
            {classification.label}
            {artifact.sizeBytes
              ? ` \u2022 ${formatSize(artifact.sizeBytes)}`
              : ""}
          </p>
        </div>
        <DownloadSimple size={16} className="shrink-0 text-zinc-400" />
      </button>
    );
  }

  return (
    <button
      type="button"
      onClick={() => openArtifact(artifact)}
      className={cn(
        "my-1 flex w-full items-center gap-3 rounded-lg border bg-white px-3 py-2.5 text-left transition-colors hover:bg-zinc-50",
        isActive ? "border-violet-300 bg-violet-50/50" : "border-zinc-200",
      )}
    >
      <Icon
        size={20}
        className={cn(
          "shrink-0",
          isActive ? "text-violet-500" : "text-zinc-400",
        )}
      />
      <div className="min-w-0 flex-1">
        <p className="truncate text-sm font-medium text-zinc-900">
          {artifact.title}
        </p>
        <p className="text-xs text-zinc-400">
          <span
            className={cn(
              "inline-block rounded-full px-1.5 py-0.5 text-xs font-medium",
              artifact.origin === "user-upload"
                ? "bg-blue-50 text-blue-500"
                : "bg-violet-50 text-violet-500",
            )}
          >
            {classification.label}
          </span>
          {artifact.sizeBytes
            ? ` \u2022 ${formatSize(artifact.sizeBytes)}`
            : ""}
        </p>
      </div>
      <CaretRight
        size={16}
        className={cn(
          "shrink-0",
          isActive ? "text-violet-400" : "text-zinc-300",
        )}
      />
    </button>
  );
}
@@ -0,0 +1,125 @@
"use client";

import {
  Sheet,
  SheetContent,
  SheetHeader,
  SheetTitle,
} from "@/components/ui/sheet";
import { AnimatePresence, motion } from "framer-motion";
import { ArtifactContent } from "./components/ArtifactContent";
import { ArtifactDragHandle } from "./components/ArtifactDragHandle";
import { ArtifactMinimizedStrip } from "./components/ArtifactMinimizedStrip";
import { ArtifactPanelHeader } from "./components/ArtifactPanelHeader";
import { useArtifactPanel } from "./useArtifactPanel";

interface Props {
  mobile?: boolean;
}

export function ArtifactPanel({ mobile }: Props) {
  const {
    isOpen,
    isMinimized,
    isMaximized,
    activeArtifact,
    history,
    effectiveWidth,
    isSourceView,
    classification,
    setIsSourceView,
    closeArtifactPanel,
    minimizeArtifactPanel,
    maximizeArtifactPanel,
    restoreArtifactPanel,
    setArtifactPanelWidth,
    goBackArtifact,
    canCopy,
    handleCopy,
    handleDownload,
  } = useArtifactPanel();

  if (!activeArtifact || !classification) return null;

  const headerProps = {
    artifact: activeArtifact,
    classification,
    canGoBack: history.length > 0,
    isMaximized,
    isSourceView,
    hasSourceToggle: classification.hasSourceToggle,
    mobile: !!mobile,
    canCopy,
    onBack: goBackArtifact,
    onClose: closeArtifactPanel,
    onMinimize: minimizeArtifactPanel,
    onMaximize: maximizeArtifactPanel,
    onRestore: restoreArtifactPanel,
    onCopy: handleCopy,
    onDownload: handleDownload,
    onSourceToggle: setIsSourceView,
  };

  // Mobile: fullscreen Sheet overlay
  if (mobile) {
    return (
      <Sheet
        open={isOpen}
        onOpenChange={(open) => !open && closeArtifactPanel()}
      >
        <SheetContent
          side="right"
          className="flex w-full flex-col p-0 sm:max-w-full"
        >
          <SheetHeader className="sr-only">
            <SheetTitle>{activeArtifact.title}</SheetTitle>
          </SheetHeader>
          <ArtifactPanelHeader {...headerProps} />
          <ArtifactContent
            artifact={activeArtifact}
            isSourceView={isSourceView}
            classification={classification}
          />
        </SheetContent>
      </Sheet>
    );
  }

  // Minimized strip
  if (isOpen && isMinimized) {
    return (
      <ArtifactMinimizedStrip
        artifact={activeArtifact}
        classification={classification}
        onExpand={restoreArtifactPanel}
      />
    );
  }

  // Keep AnimatePresence mounted across the open→closed transition so the
  // exit animation on the motion.div has a chance to run.
  return (
    <AnimatePresence>
      {isOpen && (
        <motion.div
          key="artifact-panel"
          data-artifact-panel
          initial={{ opacity: 0 }}
          animate={{ opacity: 1 }}
          exit={{ opacity: 0 }}
          transition={{ duration: 0.25, ease: "easeInOut" }}
          className="relative flex h-full flex-col overflow-hidden border-l border-zinc-200 bg-white"
          style={{ width: effectiveWidth }}
        >
          <ArtifactDragHandle onWidthChange={setArtifactPanelWidth} />
          <ArtifactPanelHeader {...headerProps} />
          <ArtifactContent
            artifact={activeArtifact}
            isSourceView={isSourceView}
            classification={classification}
          />
        </motion.div>
      )}
    </AnimatePresence>
  );
}
@@ -0,0 +1,198 @@
"use client";

import { globalRegistry } from "@/components/contextual/OutputRenderers";
import { codeRenderer } from "@/components/contextual/OutputRenderers/renderers/CodeRenderer";
import { Suspense } from "react";
import type { ArtifactRef } from "../../../store";
import type { ArtifactClassification } from "../helpers";
import { ArtifactReactPreview } from "./ArtifactReactPreview";
import { ArtifactSkeleton } from "./ArtifactSkeleton";
import {
  TAILWIND_CDN_URL,
  wrapWithHeadInjection,
} from "@/lib/iframe-sandbox-csp";
import { useArtifactContent } from "./useArtifactContent";

interface Props {
  artifact: ArtifactRef;
  isSourceView: boolean;
  classification: ArtifactClassification;
}

function ArtifactContentLoader({
  artifact,
  isSourceView,
  classification,
}: Props) {
  const { content, pdfUrl, isLoading, error, scrollRef, retry } =
    useArtifactContent(artifact, classification);

  if (isLoading) {
    return <ArtifactSkeleton extraLine />;
  }

  if (error) {
    return (
      <div
        role="alert"
        className="flex flex-col items-center justify-center gap-3 p-8 text-center"
      >
        <p className="text-sm text-zinc-500">Failed to load content</p>
        <p className="text-xs text-zinc-400">{error}</p>
        <button
          type="button"
          onClick={retry}
          className="rounded-md border border-zinc-200 bg-white px-3 py-1.5 text-xs font-medium text-zinc-700 shadow-sm transition-colors hover:bg-zinc-50 focus-visible:outline-none focus-visible:ring-2 focus-visible:ring-violet-400"
        >
          Try again
        </button>
      </div>
    );
  }

  return (
    <div ref={scrollRef} className="flex-1 overflow-y-auto">
      <ArtifactRenderer
        artifact={artifact}
        content={content}
        pdfUrl={pdfUrl}
        isSourceView={isSourceView}
        classification={classification}
      />
    </div>
  );
}

function ArtifactRenderer({
  artifact,
  content,
  pdfUrl,
  isSourceView,
  classification,
}: {
  artifact: ArtifactRef;
  content: string | null;
  pdfUrl: string | null;
  isSourceView: boolean;
  classification: ArtifactClassification;
}) {
  // Image: render directly from URL (no content fetch)
  if (classification.type === "image") {
    return (
      <div className="flex items-center justify-center p-4">
        {/* eslint-disable-next-line @next/next/no-img-element */}
        <img
          src={artifact.sourceUrl}
          alt={artifact.title}
          className="max-h-full max-w-full object-contain"
        />
      </div>
    );
  }

  if (classification.type === "pdf" && pdfUrl) {
    // No sandbox — Chrome/Edge block PDF rendering in sandboxed iframes
    // (Chromium bug #413851). The blob URL has a null origin so it can't
    // access the parent page regardless.
    return (
      <iframe src={pdfUrl} className="h-full w-full" title={artifact.title} />
    );
  }

  if (content === null) return null;

  // Source view: always show raw text
  if (isSourceView) {
    return (
      <pre className="whitespace-pre-wrap break-words p-4 font-mono text-sm text-zinc-800">
        {content}
      </pre>
    );
  }

  if (classification.type === "html") {
    // Inject Tailwind CDN — no CSP (see iframe-sandbox-csp.ts for why)
    const tailwindScript = `<script src="${TAILWIND_CDN_URL}"></script>`;
    const wrapped = wrapWithHeadInjection(content, tailwindScript);
    return (
      <iframe
        sandbox="allow-scripts"
        srcDoc={wrapped}
        className="h-full w-full border-0"
        title={artifact.title}
      />
    );
  }

  if (classification.type === "react") {
    return <ArtifactReactPreview source={content} title={artifact.title} />;
  }

  // Code: pass with explicit type metadata so CodeRenderer matches
  // (prevents higher-priority MarkdownRenderer from claiming it)
  if (classification.type === "code") {
    const ext = artifact.title.split(".").pop() ?? "";
    const codeMeta = {
      mimeType: artifact.mimeType ?? undefined,
      filename: artifact.title,
      type: "code",
      language: ext,
    };
    return <div className="p-4">{codeRenderer.render(content, codeMeta)}</div>;
  }

  // JSON: parse first so the JSONRenderer gets an object, not a string
  // (prevents higher-priority MarkdownRenderer from claiming it)
  if (classification.type === "json") {
    try {
      const parsed = JSON.parse(content);
      const jsonMeta = {
        mimeType: "application/json",
        type: "json",
        filename: artifact.title,
      };
      const jsonRenderer = globalRegistry.getRenderer(parsed, jsonMeta);
      if (jsonRenderer) {
        return (
          <div className="p-4">{jsonRenderer.render(parsed, jsonMeta)}</div>
        );
      }
    } catch {
      // invalid JSON — fall through to plain text
    }
  }

  // CSV: pass with explicit metadata so CSVRenderer matches
  if (classification.type === "csv") {
    const csvMeta = { mimeType: "text/csv", filename: artifact.title };
    const csvRenderer = globalRegistry.getRenderer(content, csvMeta);
    if (csvRenderer) {
      return <div className="p-4">{csvRenderer.render(content, csvMeta)}</div>;
    }
  }

  // Try the global renderer registry
  const metadata = {
    mimeType: artifact.mimeType ?? undefined,
    filename: artifact.title,
  };
  const renderer = globalRegistry.getRenderer(content, metadata);
  if (renderer) {
    return <div className="p-4">{renderer.render(content, metadata)}</div>;
  }

  // Fallback: plain text
  return (
    <pre className="whitespace-pre-wrap break-words p-4 font-mono text-sm text-zinc-800">
      {content}
    </pre>
  );
}

export function ArtifactContent(props: Props) {
  return (
    <Suspense fallback={<ArtifactSkeleton />}>
      <ArtifactContentLoader {...props} />
    </Suspense>
  );
}
@@ -0,0 +1,93 @@
"use client";

import { cn } from "@/lib/utils";
import { useEffect, useRef, useState } from "react";
import { DEFAULT_PANEL_WIDTH } from "../../../store";

interface Props {
  onWidthChange: (width: number) => void;
  minWidth?: number;
  maxWidthPercent?: number;
}

export function ArtifactDragHandle({
  onWidthChange,
  minWidth = 320,
  maxWidthPercent = 85,
}: Props) {
  const [isDragging, setIsDragging] = useState(false);
  const startXRef = useRef(0);
  const startWidthRef = useRef(0);
  // Use refs for the callback + bounds so the drag listeners can read the
  // latest values without having to detach/reattach between re-renders.
  const onWidthChangeRef = useRef(onWidthChange);
  const minWidthRef = useRef(minWidth);
  const maxWidthPercentRef = useRef(maxWidthPercent);
  onWidthChangeRef.current = onWidthChange;
  minWidthRef.current = minWidth;
  maxWidthPercentRef.current = maxWidthPercent;

  // Attach document listeners only while dragging, and always tear them down
  // on unmount — otherwise closing the panel mid-drag leaves listeners bound
  // to a handler that calls setState on the unmounted component.
  useEffect(() => {
    if (!isDragging) return;

    function handlePointerMove(moveEvent: PointerEvent) {
      const delta = startXRef.current - moveEvent.clientX;
      const maxWidth = window.innerWidth * (maxWidthPercentRef.current / 100);
      const newWidth = Math.min(
        maxWidth,
        Math.max(minWidthRef.current, startWidthRef.current + delta),
      );
      onWidthChangeRef.current(newWidth);
    }

    function handlePointerUp() {
      setIsDragging(false);
    }

    document.addEventListener("pointermove", handlePointerMove);
    document.addEventListener("pointerup", handlePointerUp);
    document.addEventListener("pointercancel", handlePointerUp);
    return () => {
      document.removeEventListener("pointermove", handlePointerMove);
      document.removeEventListener("pointerup", handlePointerUp);
      document.removeEventListener("pointercancel", handlePointerUp);
    };
  }, [isDragging]);

  function handlePointerDown(e: React.PointerEvent) {
    e.preventDefault();
    startXRef.current = e.clientX;

    // Get the panel's current width from its parent
    const panel = (e.target as HTMLElement).closest(
      "[data-artifact-panel]",
    ) as HTMLElement | null;
    startWidthRef.current = panel?.offsetWidth ?? DEFAULT_PANEL_WIDTH;

    setIsDragging(true);
  }

  return (
    // 12px transparent hit target with the visible 1px line centered inside
    // (WCAG-compliant, matches ~8-12px conventions of other resizable panels).
    <div
      role="separator"
      aria-orientation="vertical"
      aria-label="Resize panel"
      className={cn(
        "group absolute -left-1.5 top-0 z-10 flex h-full w-3 cursor-col-resize items-stretch justify-center",
      )}
      onPointerDown={handlePointerDown}
    >
      <div
        className={cn(
          "h-full w-px bg-transparent transition-colors group-hover:w-0.5 group-hover:bg-violet-400",
          isDragging && "w-0.5 bg-violet-500",
        )}
      />
    </div>
  );
}
@@ -0,0 +1,47 @@
"use client";

import { ArrowsOutSimple } from "@phosphor-icons/react";
import type { ArtifactRef } from "../../../store";
import type { ArtifactClassification } from "../helpers";

interface Props {
  artifact: ArtifactRef;
  classification: ArtifactClassification;
  onExpand: () => void;
}

export function ArtifactMinimizedStrip({
  artifact,
  classification,
  onExpand,
}: Props) {
  const Icon = classification.icon;

  return (
    <div className="flex h-full w-10 flex-col items-center border-l border-zinc-200 bg-white pt-3">
      <button
        type="button"
        onClick={onExpand}
        className="rounded p-1.5 text-zinc-500 transition-colors hover:bg-zinc-100 hover:text-zinc-700 focus-visible:outline-none focus-visible:ring-2 focus-visible:ring-violet-400"
        title="Expand panel"
      >
        <ArrowsOutSimple size={16} />
      </button>
      <div className="mt-3 text-zinc-400">
        <Icon size={16} />
      </div>
      <span
        className="mt-2 text-xs text-zinc-400"
        style={{
          writingMode: "vertical-rl",
          textOrientation: "mixed",
          maxHeight: "120px",
          overflow: "hidden",
          textOverflow: "ellipsis",
        }}
      >
        {artifact.title}
      </span>
    </div>
  );
}
@@ -0,0 +1,138 @@
"use client";

import { cn } from "@/lib/utils";
import {
  ArrowLeft,
  ArrowsIn,
  ArrowsOut,
  Copy,
  DownloadSimple,
  Minus,
  X,
} from "@phosphor-icons/react";
import type { ArtifactRef } from "../../../store";
import type { ArtifactClassification } from "../helpers";
import { SourceToggle } from "./SourceToggle";

interface Props {
  artifact: ArtifactRef;
  classification: ArtifactClassification;
  canGoBack: boolean;
  isMaximized: boolean;
  isSourceView: boolean;
  hasSourceToggle: boolean;
  mobile?: boolean;
  canCopy?: boolean;
  onBack: () => void;
  onClose: () => void;
  onMinimize: () => void;
  onMaximize: () => void;
  onRestore: () => void;
  onCopy: () => void;
  onDownload: () => void;
  onSourceToggle: (isSource: boolean) => void;
}

function HeaderButton({
  onClick,
  title,
  children,
}: {
  onClick: () => void;
  title: string;
  children: React.ReactNode;
}) {
  return (
    <button
      type="button"
      onClick={onClick}
      title={title}
      aria-label={title}
      className="rounded p-1.5 text-zinc-500 transition-colors hover:bg-zinc-100 hover:text-zinc-700"
    >
      {children}
    </button>
  );
}

export function ArtifactPanelHeader({
  artifact,
  classification,
  canGoBack,
  isMaximized,
  isSourceView,
  hasSourceToggle,
  mobile,
  canCopy = true,
  onBack,
  onClose,
  onMinimize,
  onMaximize,
  onRestore,
  onCopy,
  onDownload,
  onSourceToggle,
}: Props) {
  const Icon = classification.icon;

  return (
    <div className="sticky top-0 z-10 flex items-center gap-2 border-b border-zinc-200 bg-white px-3 py-2">
      {/* Left section */}
      <div className="flex min-w-0 flex-1 items-center gap-2">
        {canGoBack && (
          <HeaderButton onClick={onBack} title="Back">
            <ArrowLeft size={16} />
          </HeaderButton>
        )}
        <Icon size={16} className="shrink-0 text-zinc-400" />
        <span className="truncate text-sm font-medium text-zinc-900">
          {artifact.title}
        </span>
        <span
          className={cn(
            "shrink-0 rounded-full px-2 py-0.5 text-xs font-medium",
            artifact.origin === "user-upload"
              ? "bg-blue-50 text-blue-600"
              : "bg-violet-50 text-violet-600",
          )}
        >
          {classification.label}
        </span>
      </div>

      {/* Right section */}
      <div className="flex items-center gap-1">
        {hasSourceToggle && (
          <SourceToggle isSourceView={isSourceView} onToggle={onSourceToggle} />
        )}
        {canCopy && (
          <HeaderButton onClick={onCopy} title="Copy">
            <Copy size={16} />
          </HeaderButton>
        )}
        <HeaderButton onClick={onDownload} title="Download">
          <DownloadSimple size={16} />
        </HeaderButton>
        {!mobile && (
          <>
            <HeaderButton onClick={onMinimize} title="Minimize">
              <Minus size={16} />
            </HeaderButton>
            {isMaximized ? (
              <HeaderButton onClick={onRestore} title="Restore">
                <ArrowsIn size={16} />
              </HeaderButton>
            ) : (
              <HeaderButton onClick={onMaximize} title="Maximize">
                <ArrowsOut size={16} />
              </HeaderButton>
            )}
          </>
        )}
        <HeaderButton onClick={onClose} title="Close">
          <X size={16} />
        </HeaderButton>
      </div>
    </div>
  );
}
@@ -0,0 +1,72 @@
"use client";

import { useEffect, useState } from "react";
import { ArtifactSkeleton } from "./ArtifactSkeleton";
import {
  buildReactArtifactSrcDoc,
  collectPreviewStyles,
  transpileReactArtifactSource,
} from "./reactArtifactPreview";

interface Props {
  source: string;
  title: string;
}

export function ArtifactReactPreview({ source, title }: Props) {
  const [srcDoc, setSrcDoc] = useState<string | null>(null);
  const [error, setError] = useState<string | null>(null);

  useEffect(() => {
    let cancelled = false;

    setSrcDoc(null);
    setError(null);

    transpileReactArtifactSource(source, title)
      .then((compiledCode) => {
        if (cancelled) return;
        setSrcDoc(
          buildReactArtifactSrcDoc(compiledCode, title, collectPreviewStyles()),
        );
      })
      .catch((nextError: unknown) => {
        if (cancelled) return;
        setError(
          nextError instanceof Error
            ? nextError.message
            : "Failed to build artifact preview",
        );
      });

    return () => {
      cancelled = true;
    };
  }, [source, title]);

  if (error) {
    return (
      <div className="flex flex-col gap-2 p-4">
        <p className="text-sm font-medium text-red-600">
          Failed to render React preview
        </p>
        <pre className="whitespace-pre-wrap break-words rounded-md bg-red-50 p-3 font-mono text-xs text-red-900">
          {error}
        </pre>
      </div>
    );
  }

  if (!srcDoc) {
    return <ArtifactSkeleton />;
  }

  return (
    <iframe
      sandbox="allow-scripts"
      srcDoc={srcDoc}
      className="h-full w-full border-0"
      title={`${title} preview`}
    />
  );
}
@@ -0,0 +1,17 @@
import { Skeleton } from "@/components/ui/skeleton";

interface Props {
  /** Extra line before the 32h block (the variant used while fetching text). */
  extraLine?: boolean;
}

export function ArtifactSkeleton({ extraLine }: Props) {
  return (
    <div className="space-y-3 p-4">
      <Skeleton className="h-4 w-3/4" />
      <Skeleton className="h-4 w-1/2" />
      {extraLine && <Skeleton className="h-4 w-5/6" />}
      <Skeleton className="h-32 w-full" />
    </div>
  );
}
@@ -0,0 +1,41 @@
"use client";

import { cn } from "@/lib/utils";

interface Props {
  isSourceView: boolean;
  onToggle: (isSource: boolean) => void;
}

export function SourceToggle({ isSourceView, onToggle }: Props) {
  return (
    <div className="flex items-center rounded-md border border-zinc-200 bg-zinc-50 p-0.5 text-xs font-medium">
      <button
        type="button"
        aria-pressed={!isSourceView}
        className={cn(
          "rounded px-2 py-1 transition-colors",
          !isSourceView
            ? "bg-white text-zinc-900 shadow-sm"
            : "text-zinc-500 hover:text-zinc-700",
        )}
        onClick={() => onToggle(false)}
      >
        Preview
      </button>
      <button
        type="button"
        aria-pressed={isSourceView}
        className={cn(
          "rounded px-2 py-1 transition-colors",
          isSourceView
            ? "bg-white text-zinc-900 shadow-sm"
            : "text-zinc-500 hover:text-zinc-700",
        )}
        onClick={() => onToggle(true)}
      >
        Source
      </button>
    </div>
  );
}
@@ -0,0 +1,167 @@
import { describe, expect, it, vi, beforeEach, afterEach } from "vitest";
import { renderHook, waitFor, act } from "@testing-library/react";
import {
  useArtifactContent,
  getCachedArtifactContent,
} from "../useArtifactContent";
import type { ArtifactRef } from "../../../../store";
import type { ArtifactClassification } from "../../helpers";

function makeArtifact(overrides?: Partial<ArtifactRef>): ArtifactRef {
  return {
    id: "file-001",
    title: "test.txt",
    mimeType: "text/plain",
    sourceUrl: "/api/proxy/api/workspace/files/file-001/download",
    origin: "agent",
    ...overrides,
  };
}

function makeClassification(
  overrides?: Partial<ArtifactClassification>,
): ArtifactClassification {
  return {
    type: "text",
    icon: vi.fn() as any,
    label: "Text",
    openable: true,
    hasSourceToggle: false,
    ...overrides,
  };
}

describe("useArtifactContent", () => {
  beforeEach(() => {
    vi.stubGlobal(
      "fetch",
      vi.fn().mockResolvedValue({
        ok: true,
        text: () => Promise.resolve("file content here"),
        blob: () => Promise.resolve(new Blob(["pdf bytes"])),
      }),
    );
  });

  afterEach(() => {
    vi.restoreAllMocks();
  });

  it("fetches text content for text artifacts", async () => {
    const artifact = makeArtifact();
    const classification = makeClassification({ type: "text" });

    const { result } = renderHook(() =>
      useArtifactContent(artifact, classification),
    );

    await waitFor(() => {
      expect(result.current.isLoading).toBe(false);
    });

    expect(result.current.content).toBe("file content here");
    expect(result.current.error).toBeNull();
  });

  it("skips fetch for image artifacts", async () => {
    const artifact = makeArtifact({ mimeType: "image/png" });
    const classification = makeClassification({ type: "image" });

    const { result } = renderHook(() =>
      useArtifactContent(artifact, classification),
    );

    expect(result.current.isLoading).toBe(false);
    expect(result.current.content).toBeNull();
    expect(fetch).not.toHaveBeenCalled();
  });

  it("creates blob URL for PDF artifacts", async () => {
    const artifact = makeArtifact({ mimeType: "application/pdf" });
    const classification = makeClassification({ type: "pdf" });

    const { result } = renderHook(() =>
      useArtifactContent(artifact, classification),
    );

    await waitFor(() => {
      expect(result.current.isLoading).toBe(false);
    });

    expect(result.current.pdfUrl).toMatch(/^blob:/);
  });

  it("sets error on fetch failure", async () => {
    vi.stubGlobal(
      "fetch",
      vi.fn().mockResolvedValue({
        ok: false,
        status: 404,
        text: () => Promise.resolve("Not found"),
      }),
    );

    // Use a unique ID to avoid hitting the module-level content cache
    const artifact = makeArtifact({ id: "error-test-unique" });
    const classification = makeClassification({ type: "text" });

    const { result } = renderHook(() =>
      useArtifactContent(artifact, classification),
    );

    await waitFor(() => {
      expect(result.current.error).toBeTruthy();
    });

    expect(result.current.error).toContain("404");
    expect(result.current.content).toBeNull();
  });

  it("caches fetched content and exposes via getCachedArtifactContent", async () => {
    const artifact = makeArtifact({ id: "cache-test" });
    const classification = makeClassification({ type: "text" });

    const { result } = renderHook(() =>
      useArtifactContent(artifact, classification),
    );

    await waitFor(() => {
      expect(result.current.content).toBe("file content here");
    });

    expect(getCachedArtifactContent("cache-test")).toBe("file content here");
  });

  it("retry clears cache and re-fetches", async () => {
    let callCount = 0;
    vi.stubGlobal(
      "fetch",
      vi.fn().mockImplementation(() => {
        callCount++;
        return Promise.resolve({
          ok: true,
          text: () => Promise.resolve(`response ${callCount}`),
        });
      }),
    );

    const artifact = makeArtifact({ id: "retry-test" });
    const classification = makeClassification({ type: "text" });

    const { result } = renderHook(() =>
      useArtifactContent(artifact, classification),
    );

    await waitFor(() => {
      expect(result.current.content).toBe("response 1");
    });

    act(() => {
      result.current.retry();
    });

    await waitFor(() => {
      expect(result.current.content).toBe("response 2");
    });
  });
});
@@ -0,0 +1,88 @@
import { describe, expect, it } from "vitest";
import {
  buildReactArtifactSrcDoc,
  collectPreviewStyles,
  escapeHtml,
} from "./reactArtifactPreview";

describe("escapeHtml", () => {
  it("escapes &, <, >, \", '", () => {
    expect(escapeHtml("a & b")).toBe("a &amp; b");
    expect(escapeHtml("<script>")).toBe("&lt;script&gt;");
    expect(escapeHtml('hello "world"')).toBe("hello &quot;world&quot;");
    expect(escapeHtml("it's")).toBe("it&#39;s");
  });

  it("neutralizes a </title> escape attempt", () => {
    // Used to escape a title that lands inside <title>${safeTitle}</title>
    const out = escapeHtml("</title><script>alert(1)</script>");
    expect(out).not.toContain("</title>");
    expect(out).not.toContain("<script>");
    expect(out).toContain("&lt;/title&gt;");
    expect(out).toContain("&lt;script&gt;");
  });

  it("escapes ampersand first so entities aren't double-escaped in the wrong order", () => {
    // If & were escaped AFTER <, the < → &lt; output would become &amp;lt;.
    // Verify the & substitution ran on the raw input only.
    expect(escapeHtml("A&B<C")).toBe("A&amp;B&lt;C");
  });

  it("is safe on empty / plain strings", () => {
    expect(escapeHtml("")).toBe("");
    expect(escapeHtml("plain text 123")).toBe("plain text 123");
  });
});

describe("buildReactArtifactSrcDoc", () => {
  const STYLES = collectPreviewStyles();

  it("does not contain a CSP meta tag (see iframe-sandbox-csp.ts)", () => {
    const doc = buildReactArtifactSrcDoc("module.exports = {};", "A", STYLES);
    expect(doc).not.toContain("Content-Security-Policy");
  });

  it("includes SRI-pinned React and ReactDOM bundles", () => {
    const doc = buildReactArtifactSrcDoc("module.exports = {};", "A", STYLES);
    expect(doc).toContain(
      'src="https://unpkg.com/react@18.3.1/umd/react.production.min.js"',
    );
    expect(doc).toContain('integrity="sha384-');
    expect(doc).toContain(
      'src="https://unpkg.com/react-dom@18.3.1/umd/react-dom.production.min.js"',
    );
  });

  it("escapes the title into the <title> tag", () => {
    const doc = buildReactArtifactSrcDoc(
      "module.exports = {};",
      "</title><script>alert(1)</script>",
      STYLES,
    );
    expect(doc).not.toMatch(/<title><\/title><script>/);
    expect(doc).toContain("&lt;/title&gt;");
  });

  it("escapes </script> sequences in compiled code so the inline script can't be broken out of", () => {
    // A legitimate artifact may contain the literal string "</script>" inside
    // a JSX template or string; it must be \u003c-escaped before embedding.
    const compiled = 'const x = "</script><script>alert(1)</script>";';
    const doc = buildReactArtifactSrcDoc(compiled, "A", STYLES);
    // The raw compiled string should NOT appear verbatim inside the srcDoc
    // (that would break out of the runtime <script>).
    expect(doc).not.toContain('"</script><script>alert(1)</script>"');
    // Instead, the escaped \u003c/script> form is what we expect.
    expect(doc).toContain("\\u003c/script>");
  });

  it("wires up #root and #error containers", () => {
    const doc = buildReactArtifactSrcDoc("module.exports = {};", "A", STYLES);
    expect(doc).toContain('<div id="root">');
    expect(doc).toContain('<div id="error">');
  });

  it("injects the styles markup supplied by collectPreviewStyles", () => {
    const doc = buildReactArtifactSrcDoc("module.exports = {};", "A", STYLES);
    expect(doc).toContain("box-sizing: border-box");
  });
});
@@ -0,0 +1,318 @@
/**
 * React artifact preview — security model
 *
 * AI-generated TSX source is transpiled (TypeScript) and executed inside a
 * sandboxed iframe (`sandbox="allow-scripts"` without `allow-same-origin`).
 *
 * What's isolated:
 * - No access to parent page cookies, localStorage, or sessionStorage
 * - No form submissions or popups (no allow-forms / allow-popups)
 * - Treated as a unique opaque origin by the browser
 *
 * What's allowed inside the iframe:
 * - Inline script execution (needed to render React components)
 * - `new Function()` is used to evaluate the compiled code (eval-equivalent)
 * - Full DOM access within the iframe
 * - Network requests via fetch/XHR (allowed — only artifact content is
 *   visible inside the sandbox, no secret data to exfiltrate)
 *
 * React is loaded from unpkg with pinned version and SRI integrity hashes.
 */

import { TAILWIND_CDN_URL } from "@/lib/iframe-sandbox-csp";

export { transpileReactArtifactSource } from "./transpileReactArtifact";

export function escapeHtml(value: string): string {
  return value
    .replaceAll("&", "&amp;")
    .replaceAll("<", "&lt;")
    .replaceAll(">", "&gt;")
    .replaceAll('"', "&quot;")
    .replaceAll("'", "&#39;");
}

/** Minimal CSS reset for React artifact previews.
 *
 * Previously this copied ALL host stylesheets (200KB+ Tailwind) into every
 * preview iframe. Now we provide a self-contained reset and let artifacts
 * declare their own styles. This avoids tight coupling between the app's CSS
 * and artifact rendering, and keeps the srcdoc size small.
 */
export function collectPreviewStyles() {
  return `<style>
  *, *::before, *::after { box-sizing: border-box; }
  body { margin: 0; font-family: ui-sans-serif, system-ui, sans-serif; }
</style>`;
}

export function buildReactArtifactSrcDoc(
  compiledCode: string,
  title: string,
  stylesMarkup: string,
) {
  const safeTitle = escapeHtml(title);
  const runtime = JSON.stringify(compiledCode).replace(/</g, "\\u003c");

  return `<!doctype html>
<html lang="en">
  <head>
    <meta charset="utf-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1" />
    <title>${safeTitle}</title>
    ${stylesMarkup}
    <style>
      html, body, #root {
        height: 100%;
        margin: 0;
      }

      body {
        background:
          radial-gradient(circle at top, rgba(148, 163, 184, 0.18), transparent 35%),
          #f8fafc;
        color: #18181b;
        font-family: ui-sans-serif, system-ui, sans-serif;
      }

      #root {
        box-sizing: border-box;
        min-height: 100%;
        isolation: isolate;
      }

      #error {
        display: none;
        box-sizing: border-box;
        margin: 24px;
        padding: 16px;
        border: 1px solid #fecaca;
        border-radius: 16px;
        background: #fff1f2;
        color: #991b1b;
        font-family: ui-monospace, SFMono-Regular, monospace;
        white-space: pre-wrap;
      }
    </style>
    <script src="${TAILWIND_CDN_URL}"></script>
    <script crossorigin="anonymous" src="https://unpkg.com/react@18.3.1/umd/react.production.min.js" integrity="sha384-DGyLxAyjq0f9SPpVevD6IgztCFlnMF6oW/XQGmfe+IsZ8TqEiDrcHkMLKI6fiB/Z"></script>
    <script crossorigin="anonymous" src="https://unpkg.com/react-dom@18.3.1/umd/react-dom.production.min.js" integrity="sha384-gTGxhz21lVGYNMcdJOyq01Edg0jhn/c22nsx0kyqP0TxaV5WVdsSH1fSDUf5YJj1"></script>
  </head>
  <body>
    <div id="root"></div>
    <div id="error"></div>
    <script>
      (function () {
        const compiledCode = ${runtime};
        const rootElement = document.getElementById("root");
        const errorElement = document.getElementById("error");

        function showError(error) {
          rootElement.style.display = "none";
          errorElement.style.display = "block";
          errorElement.textContent =
            error instanceof Error && error.stack
              ? error.stack
              : error instanceof Error
                ? error.message
                : String(error);
        }

        function getModuleExports(module, exports) {
          return {
            ...exports,
            ...(typeof module.exports === "object" ? module.exports : {}),
          };
        }

        function getRenderableCandidate(moduleExports) {
          if (typeof moduleExports.default === "function") {
            return moduleExports.default;
          }

          if (typeof moduleExports.App === "function") {
            return moduleExports.App;
          }

          const namedCandidate = Object.entries(moduleExports).find(
            ([name, value]) =>
              name !== "default" &&
              !name.endsWith("Provider") &&
              /^[A-Z]/.test(name) &&
              typeof value === "function",
          );

          if (namedCandidate) {
            return namedCandidate[1];
          }

          if (typeof App !== "undefined" && typeof App === "function") {
            return App;
          }

          throw new Error(
            "No renderable component found. Export a default component, export App, or export a named component.",
          );
        }

        function wrapWithProviders(Component, moduleExports) {
          const providers = Object.entries(moduleExports)
            .filter(
              ([name, value]) =>
                name !== "default" &&
                name.endsWith("Provider") &&
                typeof value === "function",
            )
            .map(([, value]) => value);

          if (providers.length === 0) {
            return Component;
          }

          return function WrappedArtifactPreview() {
            let tree = React.createElement(Component);

            for (let i = providers.length - 1; i >= 0; i -= 1) {
              tree = React.createElement(providers[i], null, tree);
            }

            return tree;
          };
        }

        function require(name) {
          if (name === "react") {
            return React;
          }

          if (name === "react-dom") {
            return ReactDOM;
          }

          if (name === "react-dom/client") {
            return { createRoot: ReactDOM.createRoot };
          }

          if (name === "react/jsx-runtime" || name === "react/jsx-dev-runtime") {
            // jsx/jsxs signature: (type, config, key) where config.children is
            // the children (single value for jsx, array for jsxs). createElement
            // wants variadic children, so we have to unpack config.children.
            function jsx(type, config, key) {
              var props = {};
              if (config != null) {
                for (var k in config) {
                  if (k !== "children") props[k] = config[k];
                }
              }
              if (key !== undefined) props.key = key;
              var children =
                config != null && "children" in config ? config.children : undefined;
              if (Array.isArray(children)) {
                return React.createElement.apply(
                  null,
                  [type, props].concat(children),
                );
              }
              return children === undefined
                ? React.createElement(type, props)
                : React.createElement(type, props, children);
            }
            return { Fragment: React.Fragment, jsx: jsx, jsxs: jsx };
          }

          throw new Error("Unsupported import in artifact preview: " + name);
        }

        class PreviewErrorBoundary extends React.Component {
          constructor(props) {
            super(props);
            this.state = { error: null };
          }

          static getDerivedStateFromError(error) {
            return { error };
          }

          render() {
            if (this.state.error) {
              return React.createElement(
                "div",
                {
                  style: {
                    margin: "24px",
                    padding: "16px",
                    border: "1px solid #fecaca",
                    borderRadius: "16px",
                    background: "#fff1f2",
                    color: "#991b1b",
                    fontFamily: "ui-monospace, SFMono-Regular, monospace",
                    whiteSpace: "pre-wrap",
                  },
                },
                this.state.error.stack || this.state.error.message || String(this.state.error),
              );
            }

            return this.props.children;
          }
        }

        try {
          const exports = {};
          const module = { exports };
          const factory = new Function(
            "React",
            "ReactDOM",
            "module",
            "exports",
            "require",
            \`
              "use strict";
              \${compiledCode}
              return {
                module,
                exports,
                app: typeof App !== "undefined" ? App : undefined,
              };
            \`,
          );

          const executionResult = factory(
            React,
            ReactDOM,
            module,
            exports,
            require,
          );
          const moduleExports = getModuleExports(
            executionResult.module,
            executionResult.exports,
          );

          if (
            executionResult.app &&
            typeof moduleExports.App !== "function"
          ) {
            moduleExports.App = executionResult.app;
          }

          const Component = wrapWithProviders(
            getRenderableCandidate(moduleExports),
            moduleExports,
          );

          ReactDOM.createRoot(rootElement).render(
            React.createElement(
              PreviewErrorBoundary,
              null,
              React.createElement(Component),
            ),
          );
        } catch (error) {
          showError(error);
        }
      })();
    </script>
  </body>
</html>`;
}
@@ -0,0 +1,51 @@
import { describe, expect, it } from "vitest";
import { transpileReactArtifactSource } from "./transpileReactArtifact";

describe("transpileReactArtifactSource", () => {
  it("transpiles a simple TSX function component", async () => {
    const src =
      'import React from "react";\nexport default function App() { return <div>hi</div>; }';
    const out = await transpileReactArtifactSource(src, "App.tsx");
    // Classic-transform emits React.createElement calls.
    // esModuleInterop emits `react_1.default.createElement(...)` — match either form.
    expect(out).toMatch(/\.createElement\(/);
    expect(out).not.toContain("<div>");
  });

  it("still transpiles when the filename lacks an extension (ensureJsxExtension)", async () => {
    const src = "export default function A() { return <span>x</span>; }";
    // Previously: filename without .tsx caused a JSX syntax error.
    const out = await transpileReactArtifactSource(src, "A");
    expect(out).toMatch(/\.createElement\(/);
  });

  it("still transpiles when the filename ends in .ts (not jsx-aware)", async () => {
    const src = "export default function A() { return <b>x</b>; }";
    const out = await transpileReactArtifactSource(src, "A.ts");
    expect(out).toMatch(/\.createElement\(/);
  });

  it("keeps .tsx extension as-is", async () => {
    const src = "export default function A() { return <i>x</i>; }";
    const out = await transpileReactArtifactSource(src, "Comp.tsx");
    expect(out).toMatch(/\.createElement\(/);
  });

  it("throws with a useful diagnostic on syntax errors", async () => {
    const broken = "export default function A() { return <div><b></div>; }"; // unclosed <b>
    await expect(
      transpileReactArtifactSource(broken, "broken.tsx"),
    ).rejects.toThrow();
  });

  it("transpiles TypeScript type annotations away", async () => {
    const src =
      "function greet(name: string): string { return 'hi ' + name; }\nexport default () => greet('a');";
    const out = await transpileReactArtifactSource(src, "g.tsx");
    expect(out).not.toContain(": string");
    expect(out).toContain("function greet(name)");
  });
});
@@ -0,0 +1,43 @@
function ensureJsxExtension(filename: string): string {
  // TypeScript infers JSX parsing from the file extension; if the artifact
  // title is "component" or "foo.ts", TSX syntax in the source will be
  // treated as a syntax error. Force a .tsx extension for transpilation.
  const lower = filename.toLowerCase();
  if (lower.endsWith(".tsx") || lower.endsWith(".jsx")) return filename;
  return `${filename || "artifact"}.tsx`;
}

export async function transpileReactArtifactSource(
  source: string,
  filename: string,
) {
  const ts = await import("typescript");
  const result = ts.transpileModule(source, {
    compilerOptions: {
      allowJs: true,
      esModuleInterop: true,
      jsx: ts.JsxEmit.React,
      module: ts.ModuleKind.CommonJS,
      target: ts.ScriptTarget.ES2020,
    },
    fileName: ensureJsxExtension(filename),
    reportDiagnostics: true,
  });

  const diagnostics =
    result.diagnostics?.filter(
      (diagnostic) => diagnostic.category === ts.DiagnosticCategory.Error,
    ) ?? [];

  if (diagnostics.length > 0) {
    const message = diagnostics
      .slice(0, 3)
      .map((diagnostic) =>
        ts.flattenDiagnosticMessageText(diagnostic.messageText, "\n"),
      )
      .join("\n\n");
    throw new Error(message);
  }

  return result.outputText;
}
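The extension normalization in `ensureJsxExtension` is simple appending, not replacement: a filename that already carries a non-JSX extension like `.ts` becomes `name.ts.tsx` rather than `name.tsx`, which is harmless since the name only steers the transpiler's parser. A standalone sketch of the rule (re-implemented for illustration, not imported) makes the edge cases visible:

```typescript
// Standalone sketch of the extension-normalization rule: any filename that
// is not already .tsx/.jsx gets a .tsx suffix so the transpiler parses JSX.
function ensureJsxExtensionSketch(filename: string): string {
  const lower = filename.toLowerCase();
  if (lower.endsWith(".tsx") || lower.endsWith(".jsx")) return filename;
  return `${filename || "artifact"}.tsx`;
}

console.log(ensureJsxExtensionSketch("Comp.tsx")); // Comp.tsx  (unchanged)
console.log(ensureJsxExtensionSketch("helper.ts")); // helper.ts.tsx
console.log(ensureJsxExtensionSketch("component")); // component.tsx
console.log(ensureJsxExtensionSketch("")); // artifact.tsx  (empty-title fallback)
```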
@@ -0,0 +1,154 @@
"use client";

import { useEffect, useRef, useState } from "react";
import type { ArtifactRef } from "../../../store";
import type { ArtifactClassification } from "../helpers";

// Cap on cached text artifacts. Long sessions with many large artifacts
// would otherwise hold every opened one in memory.
const CONTENT_CACHE_MAX = 12;

// Module-level LRU keyed by artifact id so a sibling action (e.g. Copy
// in ArtifactPanelHeader) can read what the panel already fetched without
// re-hitting the network.
const contentCache = new Map<string, string>();

export function getCachedArtifactContent(id: string): string | undefined {
  return contentCache.get(id);
}

export function clearContentCache() {
  contentCache.clear();
}

export function useArtifactContent(
  artifact: ArtifactRef,
  classification: ArtifactClassification,
) {
  const [content, setContent] = useState<string | null>(null);
  const [pdfUrl, setPdfUrl] = useState<string | null>(null);
  const [isLoading, setIsLoading] = useState(false);
  const [error, setError] = useState<string | null>(null);
  // Bumped by `retry()` to force the fetch effect to re-run.
  const [retryNonce, setRetryNonce] = useState(0);
  const scrollPositions = useRef(new Map<string, number>());
  const scrollRef = useRef<HTMLDivElement>(null);

  function retry() {
    // Drop any cached failure/content for this id so we actually re-fetch.
    contentCache.delete(artifact.id);
    setRetryNonce((n) => n + 1);
  }

  // Save scroll position when switching artifacts. Only save when the
  // content div has actually been mounted with a nonzero scrollTop, so we
  // don't overwrite a previously-saved position with 0 from a skeleton render.
  useEffect(() => {
    return () => {
      const node = scrollRef.current;
      if (node && node.scrollTop > 0) {
        scrollPositions.current.set(artifact.id, node.scrollTop);
      }
    };
  }, [artifact.id]);

  // Restore scroll position — wait until isLoading flips to false, since
  // the scroll container is replaced by a Skeleton during loading and the
  // real content div would otherwise mount with scrollTop=0.
  useEffect(() => {
    if (isLoading) return;
    const saved = scrollPositions.current.get(artifact.id);
    if (saved != null && scrollRef.current) {
      scrollRef.current.scrollTop = saved;
    }
  }, [artifact.id, isLoading]);

  useEffect(() => {
    if (classification.type === "image") {
      setContent(null);
      setPdfUrl(null);
      setError(null);
      setIsLoading(false);
      return;
    }

    let cancelled = false;
    setIsLoading(true);
    setError(null);

    if (classification.type === "pdf") {
      let objectUrl: string | null = null;
      setContent(null);
      setPdfUrl(null);
      fetch(artifact.sourceUrl)
        .then((res) => {
          if (!res.ok) throw new Error(`Failed to fetch: ${res.status}`);
          return res.blob();
        })
        .then((blob) => {
          objectUrl = URL.createObjectURL(blob);
          if (cancelled) {
            URL.revokeObjectURL(objectUrl);
            objectUrl = null;
            return;
          }
          setPdfUrl(objectUrl);
          setIsLoading(false);
        })
        .catch((err) => {
          if (!cancelled) {
            setError(err.message);
            setIsLoading(false);
          }
        });
      return () => {
        cancelled = true;
        if (objectUrl) URL.revokeObjectURL(objectUrl);
      };
    }

    setPdfUrl(null);
    // LRU touch — re-insert so the most-recently-used entry sits at the
    // tail and the oldest entry falls off the head first.
    const cache = contentCache;
    const cached = cache.get(artifact.id);
    if (cached !== undefined) {
      cache.delete(artifact.id);
      cache.set(artifact.id, cached);
      setContent(cached);
      setIsLoading(false);
      return () => {
        cancelled = true;
      };
    }
    fetch(artifact.sourceUrl)
      .then((res) => {
        if (!res.ok) throw new Error(`Failed to fetch: ${res.status}`);
        return res.text();
      })
      .then((text) => {
        if (!cancelled) {
          if (cache.size >= CONTENT_CACHE_MAX) {
            // Map preserves insertion order — first key is the oldest.
            const oldest = cache.keys().next().value;
            if (oldest !== undefined) cache.delete(oldest);
          }
          cache.set(artifact.id, text);
          setContent(text);
          setIsLoading(false);
        }
      })
      .catch((err) => {
        if (!cancelled) {
          setError(err.message);
          setIsLoading(false);
        }
      });

    return () => {
      cancelled = true;
    };
  }, [artifact.id, artifact.sourceUrl, classification.type, retryNonce]);

  return { content, pdfUrl, isLoading, error, scrollRef, retry };
}
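The content cache above leans on the fact that a JavaScript `Map` iterates keys in insertion order. A minimal standalone sketch of the same touch-on-read / evict-oldest-on-write pattern (the names `touch`, `put`, and `MAX` are illustrative, not from the file):

```typescript
// Minimal LRU built on Map's insertion-order iteration, mirroring the
// delete-then-set "touch" and evict-first-key patterns in useArtifactContent.
const MAX = 3;
const cache = new Map<string, string>();

function touch(id: string): string | undefined {
  const hit = cache.get(id);
  if (hit === undefined) return undefined;
  // Re-insert so this entry moves to the tail (most recently used).
  cache.delete(id);
  cache.set(id, hit);
  return hit;
}

function put(id: string, value: string): void {
  if (cache.has(id)) cache.delete(id);
  if (cache.size >= MAX) {
    // First key in iteration order is the oldest entry.
    const oldest = cache.keys().next().value;
    if (oldest !== undefined) cache.delete(oldest);
  }
  cache.set(id, value);
}

put("a", "1");
put("b", "2");
put("c", "3");
touch("a"); // "a" becomes most recently used
put("d", "4"); // evicts "b", the oldest untouched entry
console.log([...cache.keys()]); // ["c", "a", "d"]
```

The delete-before-set on a cache hit is what makes this an LRU rather than a FIFO: without the re-insert, a frequently read entry would still age out by insertion time alone.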
@@ -0,0 +1,121 @@
import { afterEach, describe, expect, it, vi } from "vitest";
import type { ArtifactRef } from "../../store";
import { downloadArtifact } from "./downloadArtifact";

function makeArtifact(title: string): ArtifactRef {
  return {
    id: "abc",
    title,
    mimeType: "text/plain",
    sourceUrl: "/api/proxy/api/workspace/files/abc/download",
    origin: "agent",
  };
}

afterEach(() => {
  vi.restoreAllMocks();
});

describe("downloadArtifact filename sanitization", () => {
  it("strips path separators and control characters", async () => {
    global.fetch = vi.fn().mockResolvedValue({
      ok: true,
      blob: () => Promise.resolve(new Blob(["x"])),
    });
    const clicks: HTMLAnchorElement[] = [];
    const originalCreate = document.createElement.bind(document);
    vi.spyOn(document, "createElement").mockImplementation((tag: string) => {
      const el = originalCreate(tag);
      if (tag === "a") {
        clicks.push(el as HTMLAnchorElement);
        // Prevent actual navigation in test env.
        (el as HTMLAnchorElement).click = () => {};
      }
      return el;
    });
    global.URL.createObjectURL = vi.fn(() => "blob:mock");
    global.URL.revokeObjectURL = vi.fn();

    await downloadArtifact(makeArtifact("../../etc/passwd"));
    // ..→_ then /→_ gives ____etc_passwd (no leading ..)
    expect(clicks[0]?.download).toBe("____etc_passwd");
  });

  it("replaces Windows-reserved characters", async () => {
    global.fetch = vi.fn().mockResolvedValue({
      ok: true,
      blob: () => Promise.resolve(new Blob(["x"])),
    });
    const clicks: HTMLAnchorElement[] = [];
    const originalCreate = document.createElement.bind(document);
    vi.spyOn(document, "createElement").mockImplementation((tag: string) => {
      const el = originalCreate(tag);
      if (tag === "a") {
        clicks.push(el as HTMLAnchorElement);
        (el as HTMLAnchorElement).click = () => {};
      }
      return el;
    });
    global.URL.createObjectURL = vi.fn(() => "blob:mock");
    global.URL.revokeObjectURL = vi.fn();

    await downloadArtifact(makeArtifact('a<b>c:"d*e?f|g'));
    expect(clicks[0]?.download).toBe("a_b_c__d_e_f_g");
  });

  it("falls back to 'download' when title is empty after sanitization", async () => {
    global.fetch = vi.fn().mockResolvedValue({
      ok: true,
      blob: () => Promise.resolve(new Blob(["x"])),
    });
    const clicks: HTMLAnchorElement[] = [];
    const originalCreate = document.createElement.bind(document);
    vi.spyOn(document, "createElement").mockImplementation((tag: string) => {
      const el = originalCreate(tag);
      if (tag === "a") {
        clicks.push(el as HTMLAnchorElement);
        (el as HTMLAnchorElement).click = () => {};
      }
      return el;
    });
    global.URL.createObjectURL = vi.fn(() => "blob:mock");
    global.URL.revokeObjectURL = vi.fn();

    await downloadArtifact(makeArtifact(""));
    expect(clicks[0]?.download).toBe("download");
  });

  it("keeps normal filenames intact", async () => {
    global.fetch = vi.fn().mockResolvedValue({
      ok: true,
      blob: () => Promise.resolve(new Blob(["x"])),
    });
    const clicks: HTMLAnchorElement[] = [];
    const originalCreate = document.createElement.bind(document);
    vi.spyOn(document, "createElement").mockImplementation((tag: string) => {
      const el = originalCreate(tag);
      if (tag === "a") {
        clicks.push(el as HTMLAnchorElement);
        (el as HTMLAnchorElement).click = () => {};
      }
      return el;
    });
    global.URL.createObjectURL = vi.fn(() => "blob:mock");
    global.URL.revokeObjectURL = vi.fn();

    await downloadArtifact(makeArtifact("report-2024 (final).pdf"));
    expect(clicks[0]?.download).toBe("report-2024 (final).pdf");
  });

  it("rejects when fetch returns non-ok status", async () => {
    global.fetch = vi.fn().mockResolvedValue({ ok: false, status: 404 });
    await expect(downloadArtifact(makeArtifact("x.txt"))).rejects.toThrow(
      /Download failed: 404/,
    );
  });

  it("rejects when fetch itself throws", async () => {
    global.fetch = vi.fn().mockRejectedValue(new Error("network"));
    await expect(downloadArtifact(makeArtifact("x.txt"))).rejects.toThrow();
  });
});
@@ -0,0 +1,35 @@
import type { ArtifactRef } from "../../store";

/**
 * Trigger a file download from an artifact URL.
 *
 * Uses fetch+blob instead of a bare `<a download>` because the browser
 * ignores the `download` attribute on cross-origin responses (GCS signed
 * URLs), and some browsers require the anchor to be attached to the DOM
 * before `.click()` fires the download.
 */
export function downloadArtifact(artifact: ArtifactRef): Promise<void> {
  // Replace path separators, Windows-reserved chars, control chars, and
  // parent-dir sequences so the browser-assigned filename is safe to write
  // anywhere on the user's filesystem.
  const safeName =
    artifact.title
      .replace(/\.\./g, "_")
      .replace(/[\\/:*?"<>|\x00-\x1f]/g, "_")
      .replace(/^\.+/, "") || "download";
  return fetch(artifact.sourceUrl)
    .then((res) => {
      if (!res.ok) throw new Error(`Download failed: ${res.status}`);
      return res.blob();
    })
    .then((blob) => {
      const url = URL.createObjectURL(blob);
      const a = document.createElement("a");
      a.href = url;
      a.download = safeName;
      document.body.appendChild(a);
      a.click();
      a.remove();
      URL.revokeObjectURL(url);
    });
}
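The sanitization chain in `downloadArtifact` is order-sensitive: parent-dir sequences first, then the separator/reserved/control-character class, then leading dots, with `"download"` as the empty-string fallback. Extracted as a standalone helper (the function name `sanitizeFilename` is illustrative; in the diff the chain is inlined):

```typescript
// Same three-step sanitization chain as downloadArtifact's safeName:
// 1. ".." -> "_" (parent-dir sequences)
// 2. path separators, Windows-reserved chars, and control chars -> "_"
// 3. strip any leading dots, falling back to "download" if nothing remains
function sanitizeFilename(title: string): string {
  return (
    title
      .replace(/\.\./g, "_")
      .replace(/[\\/:*?"<>|\x00-\x1f]/g, "_")
      .replace(/^\.+/, "") || "download"
  );
}

console.log(sanitizeFilename("../../etc/passwd")); // "____etc_passwd"
console.log(sanitizeFilename('a<b>c:"d*e?f|g')); // "a_b_c__d_e_f_g"
console.log(sanitizeFilename("")); // "download"
```

Running `..` replacement before the separator class matters: `"../../etc/passwd"` first collapses to `"_/_/etc/passwd"`, and only then do the slashes become underscores, so no traversal sequence can survive recombination.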
@@ -0,0 +1,79 @@
import { describe, expect, it } from "vitest";
import { classifyArtifact } from "./helpers";

describe("classifyArtifact", () => {
  it("routes PDF by extension", () => {
    const c = classifyArtifact(null, "report.pdf");
    expect(c.type).toBe("pdf");
    expect(c.openable).toBe(true);
  });

  it("routes PDF by MIME when no extension matches", () => {
    const c = classifyArtifact("application/pdf", "noextension");
    expect(c.type).toBe("pdf");
  });

  it("routes JSX/TSX as react", () => {
    expect(classifyArtifact(null, "App.tsx").type).toBe("react");
    expect(classifyArtifact(null, "Comp.jsx").type).toBe("react");
  });

  it("routes code extensions to code", () => {
    expect(classifyArtifact(null, "script.py").type).toBe("code");
    expect(classifyArtifact(null, "main.go").type).toBe("code");
    expect(classifyArtifact(null, "Dockerfile.yml").type).toBe("code");
  });

  it("treats images as image (inline rendered)", () => {
    expect(classifyArtifact(null, "photo.png").type).toBe("image");
    expect(classifyArtifact("image/svg+xml", "unknown").type).toBe("image");
  });

  it("treats CSVs as csv with source toggle", () => {
    const c = classifyArtifact(null, "data.csv");
    expect(c.type).toBe("csv");
    expect(c.hasSourceToggle).toBe(true);
  });

  it("treats HTML as html with source toggle", () => {
    expect(classifyArtifact(null, "page.html").type).toBe("html");
    expect(classifyArtifact("text/html", "noext").type).toBe("html");
  });

  it("treats markdown as markdown", () => {
    expect(classifyArtifact(null, "README.md").type).toBe("markdown");
    expect(classifyArtifact("text/markdown", "x").type).toBe("markdown");
  });

  it("gates files > 10MB to download-only", () => {
    const c = classifyArtifact("text/plain", "big.txt", 20 * 1024 * 1024);
    expect(c.openable).toBe(false);
    expect(c.type).toBe("download-only");
  });

  it("treats binary/octet-stream MIME as download-only", () => {
    expect(classifyArtifact("application/zip", "a.zip").openable).toBe(false);
    expect(classifyArtifact("application/octet-stream", "x").openable).toBe(
      false,
    );
    expect(classifyArtifact("video/mp4", "clip.mp4").openable).toBe(false);
  });

  it("defaults unknown extension+MIME to download-only (not text)", () => {
    // Regression: previously dumped binary as <pre>; now refuses to open.
    const c = classifyArtifact(null, "data.bin");
    expect(c.openable).toBe(false);
    expect(c.type).toBe("download-only");
  });

  it("is case-insensitive on extension", () => {
    expect(classifyArtifact(null, "image.PNG").type).toBe("image");
    expect(classifyArtifact(null, "Notes.MD").type).toBe("markdown");
  });

  it("prioritizes extension over MIME", () => {
    // Extension says CSV, MIME says plain text → extension wins.
    const c = classifyArtifact("text/plain", "data.csv");
    expect(c.type).toBe("csv");
  });
});
@@ -0,0 +1,229 @@
import {
  Code,
  File,
  FileHtml,
  FileText,
  Image,
  Table,
} from "@phosphor-icons/react";
import type { Icon } from "@phosphor-icons/react";

export interface ArtifactClassification {
  type:
    | "markdown"
    | "code"
    | "react"
    | "html"
    | "csv"
    | "json"
    | "image"
    | "pdf"
    | "text"
    | "download-only";
  icon: Icon;
  label: string;
  openable: boolean;
  hasSourceToggle: boolean;
}

const TEN_MB = 10 * 1024 * 1024;

// Catalog of classification kinds. Each entry defines the shared output
// shape; extension/MIME → kind mapping is handled by the lookup tables below.
const KIND: Record<string, ArtifactClassification> = {
  image: {
    type: "image",
    icon: Image,
    label: "Image",
    openable: true,
    hasSourceToggle: false,
  },
  pdf: {
    type: "pdf",
    icon: FileText,
    label: "PDF",
    openable: true,
    hasSourceToggle: false,
  },
  csv: {
    type: "csv",
    icon: Table,
    label: "Spreadsheet",
    openable: true,
    hasSourceToggle: true,
  },
  html: {
    type: "html",
    icon: FileHtml,
    label: "HTML",
    openable: true,
    hasSourceToggle: true,
  },
  react: {
    type: "react",
    icon: FileHtml,
    label: "React",
    openable: true,
    hasSourceToggle: true,
  },
  markdown: {
    type: "markdown",
    icon: FileText,
    label: "Document",
    openable: true,
    hasSourceToggle: true,
  },
  json: {
    type: "json",
    icon: Code,
    label: "Data",
    openable: true,
    hasSourceToggle: true,
  },
  code: {
    type: "code",
    icon: Code,
    label: "Code",
    openable: true,
    hasSourceToggle: false,
  },
  text: {
    type: "text",
    icon: FileText,
    label: "Text",
    openable: true,
    hasSourceToggle: false,
  },
  "download-only": {
    type: "download-only",
    icon: File,
    label: "File",
    openable: false,
    hasSourceToggle: false,
  },
};

// Extension → kind. First match wins.
const EXT_KIND: Record<string, string> = {
  ".png": "image",
  ".jpg": "image",
  ".jpeg": "image",
  ".gif": "image",
  ".webp": "image",
  ".svg": "image",
  ".bmp": "image",
  ".ico": "image",
  ".pdf": "pdf",
  ".csv": "csv",
  ".html": "html",
  ".htm": "html",
  ".jsx": "react",
  ".tsx": "react",
  ".md": "markdown",
  ".mdx": "markdown",
  ".json": "json",
  ".txt": "text",
  ".log": "text",
  // code extensions
  ".js": "code",
  ".ts": "code",
  ".py": "code",
  ".rb": "code",
  ".go": "code",
  ".rs": "code",
  ".java": "code",
  ".c": "code",
  ".cpp": "code",
  ".h": "code",
  ".cs": "code",
  ".php": "code",
  ".swift": "code",
  ".kt": "code",
  ".sh": "code",
  ".bash": "code",
  ".zsh": "code",
  ".yml": "code",
  ".yaml": "code",
  ".toml": "code",
  ".ini": "code",
  ".cfg": "code",
  ".sql": "code",
  ".r": "code",
  ".lua": "code",
  ".pl": "code",
  ".scala": "code",
};

// Exact-match MIME → kind (fallback when extension doesn't match).
const MIME_KIND: Record<string, string> = {
  "application/pdf": "pdf",
  "text/csv": "csv",
  "text/html": "html",
  "text/jsx": "react",
  "text/tsx": "react",
  "application/jsx": "react",
  "application/x-typescript-jsx": "react",
  "text/markdown": "markdown",
  "text/x-markdown": "markdown",
  "application/json": "json",
  "application/javascript": "code",
  "text/javascript": "code",
  "application/typescript": "code",
  "text/typescript": "code",
  "application/xml": "code",
  "text/xml": "code",
};

const BINARY_MIMES = new Set([
  "application/zip",
  "application/x-zip-compressed",
  "application/gzip",
  "application/x-tar",
  "application/x-rar-compressed",
  "application/x-7z-compressed",
  "application/octet-stream",
  "application/x-executable",
  "application/x-msdos-program",
  "application/vnd.microsoft.portable-executable",
]);

function getExtension(filename?: string): string {
  if (!filename) return "";
  const lastDot = filename.lastIndexOf(".");
  if (lastDot === -1) return "";
  return filename.slice(lastDot).toLowerCase();
}

export function classifyArtifact(
  mimeType: string | null,
  filename?: string,
  sizeBytes?: number,
): ArtifactClassification {
  // Size gate: >10MB is download-only regardless of type.
  if (sizeBytes && sizeBytes > TEN_MB) return KIND["download-only"];

  // Extension first (more reliable than MIME for AI-generated files).
  const ext = getExtension(filename);
  const extKind = EXT_KIND[ext];
  if (extKind) return KIND[extKind];

  // MIME fallbacks.
  const mime = (mimeType ?? "").toLowerCase();
  if (mime.startsWith("image/")) return KIND.image;
  const mimeKind = MIME_KIND[mime];
  if (mimeKind) return KIND[mimeKind];
  if (mime.startsWith("text/x-")) return KIND.code;
  if (
    BINARY_MIMES.has(mime) ||
    mime.startsWith("audio/") ||
    mime.startsWith("video/")
  ) {
    return KIND["download-only"];
  }
  if (mime.startsWith("text/")) return KIND.text;

  // Unknown extension + unknown MIME: don't open — we can't safely assume
  // this is text, and fetching a binary to dump it into a <pre> wastes
  // bandwidth and shows garbage.
  return KIND["download-only"];
}
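The precedence in `classifyArtifact` is: size gate first, then extension lookup, then MIME fallback, with download-only as the safe default. A trimmed-down sketch of that decision order (the tables here carry only a few entries for illustration; the real module holds the full maps and returns a richer `ArtifactClassification` object, not a string):

```typescript
// Miniature of classifyArtifact's decision order. Assumption: kinds are
// flattened to plain strings here to keep the sketch self-contained.
const TEN_MB = 10 * 1024 * 1024;
const EXT_KIND: Record<string, string> = { ".csv": "csv", ".md": "markdown" };
const MIME_KIND: Record<string, string> = { "text/csv": "csv", "text/html": "html" };

function classify(mime: string | null, filename?: string, size?: number): string {
  // 1. Size gate beats everything.
  if (size && size > TEN_MB) return "download-only";
  // 2. Extension lookup (case-insensitive).
  const dot = filename?.lastIndexOf(".") ?? -1;
  const ext = dot === -1 ? "" : filename!.slice(dot).toLowerCase();
  if (EXT_KIND[ext]) return EXT_KIND[ext];
  // 3. MIME fallback; unknown means refuse to open.
  const m = (mime ?? "").toLowerCase();
  if (m.startsWith("image/")) return "image";
  return MIME_KIND[m] ?? "download-only";
}

console.log(classify("text/plain", "data.csv")); // "csv" — extension wins over MIME
console.log(classify("text/html", "noext")); // "html" — MIME fallback
console.log(classify("text/csv", "big.csv", 20 * TEN_MB)); // "download-only" — size gate
```

Putting the extension check before the MIME check is the load-bearing choice: as the file's own comment notes, MIME types on AI-generated files are often wrong, while the extension the agent chose usually reflects intent.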
@@ -0,0 +1,148 @@
"use client";

import { toast } from "@/components/molecules/Toast/use-toast";
import { useEffect, useState } from "react";
import { useCopilotUIStore } from "../../store";
import { getCachedArtifactContent } from "./components/useArtifactContent";
import { downloadArtifact } from "./downloadArtifact";
import { classifyArtifact } from "./helpers";

// SSR fallback for viewport width before window is available.
const DEFAULT_VIEWPORT_WIDTH = 1280;

export function useArtifactPanel() {
  const artifactPanel = useCopilotUIStore((s) => s.artifactPanel);
  const closeArtifactPanel = useCopilotUIStore((s) => s.closeArtifactPanel);
  const minimizeArtifactPanel = useCopilotUIStore(
    (s) => s.minimizeArtifactPanel,
  );
  const maximizeArtifactPanel = useCopilotUIStore(
    (s) => s.maximizeArtifactPanel,
  );
  const restoreArtifactPanel = useCopilotUIStore((s) => s.restoreArtifactPanel);
  const setArtifactPanelWidth = useCopilotUIStore(
    (s) => s.setArtifactPanelWidth,
  );
  const goBackArtifact = useCopilotUIStore((s) => s.goBackArtifact);

  const [isSourceView, setIsSourceView] = useState(false);

  const { activeArtifact } = artifactPanel;

  const classification = activeArtifact
    ? classifyArtifact(
        activeArtifact.mimeType,
        activeArtifact.title,
        activeArtifact.sizeBytes,
      )
    : null;

  // Reset source view when switching artifacts
  useEffect(() => {
    setIsSourceView(false);
  }, [activeArtifact?.id]);

  // Keyboard: Escape to close
  useEffect(() => {
    if (!artifactPanel.isOpen) return;

    function handleKeyDown(e: KeyboardEvent) {
      if (e.key === "Escape") {
        if (document.querySelector('[role="dialog"], [data-state="open"]'))
          return;
        closeArtifactPanel();
      }
    }

    document.addEventListener("keydown", handleKeyDown);
    return () => document.removeEventListener("keydown", handleKeyDown);
  }, [artifactPanel.isOpen, closeArtifactPanel]);

  // Track viewport width reactively for maximize mode.
  const [viewportWidth, setViewportWidth] = useState(
    typeof window !== "undefined" ? window.innerWidth : DEFAULT_VIEWPORT_WIDTH,
  );
  useEffect(() => {
    // Throttle to ~10Hz: resize fires continuously during drag, but we only
    // need the panel width to follow the viewport within a frame or two.
    let timer: ReturnType<typeof setTimeout> | null = null;
    function handleResize() {
      if (timer) return;
      timer = setTimeout(() => {
        setViewportWidth(window.innerWidth);
        timer = null;
      }, 100);
    }
    window.addEventListener("resize", handleResize);
    return () => {
      window.removeEventListener("resize", handleResize);
      if (timer) clearTimeout(timer);
    };
  }, []);

  const canCopy =
    classification != null &&
    classification.type !== "image" &&
    classification.type !== "download-only" &&
    classification.type !== "pdf";

  function handleCopy() {
    if (!activeArtifact || !canCopy) return;
    // Reuse content already fetched by the preview pane when available —
    // Copy should feel instant, not trigger a second network round-trip.
    const cached = getCachedArtifactContent(activeArtifact.id);
    const textPromise = cached
      ? Promise.resolve(cached)
      : fetch(activeArtifact.sourceUrl).then((res) => {
          if (!res.ok) throw new Error(`Copy failed: ${res.status}`);
          return res.text();
        });
    textPromise
      .then((text) => navigator.clipboard.writeText(text))
      .then(() => {
        toast({ title: "Copied to clipboard" });
      })
      .catch(() => {
        toast({
          title: "Copy failed",
          description: "Couldn't read the file or access the clipboard.",
          variant: "destructive",
        });
      });
  }

  function handleDownload() {
    if (!activeArtifact) return;
    downloadArtifact(activeArtifact).catch(() => {
      toast({
        title: "Download failed",
        description: "Couldn't fetch the file.",
        variant: "destructive",
      });
    });
  }

  // Always clamp against the current viewport so a previously-dragged-wide
  // panel doesn't spill offscreen after the user resizes their window.
  const maxWidth = viewportWidth * 0.85;
  const effectiveWidth = artifactPanel.isMaximized
    ? maxWidth
    : Math.min(artifactPanel.width, maxWidth);

  return {
    ...artifactPanel,
    effectiveWidth,
    isSourceView,
    classification,
    setIsSourceView,
    closeArtifactPanel,
    minimizeArtifactPanel,
    maximizeArtifactPanel,
    restoreArtifactPanel,
    setArtifactPanelWidth,
    goBackArtifact,
    canCopy,
    handleCopy,
    handleDownload,
  };
}
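The resize listener above is a trailing-edge throttle: while a timer is pending, further `resize` events are dropped, and the state update fires once per 100ms window. The same pattern extracted into a reusable helper (the name `makeThrottled` is illustrative; the hook inlines this logic):

```typescript
// Trailing-edge throttle: at most one call to fn per `ms` window, no
// matter how many times the returned function is invoked.
function makeThrottled(fn: () => void, ms: number): () => void {
  let timer: ReturnType<typeof setTimeout> | null = null;
  return () => {
    if (timer) return; // a flush is already scheduled; drop this call
    timer = setTimeout(() => {
      fn();
      timer = null;
    }, ms);
  };
}

let calls = 0;
const onResize = makeThrottled(() => {
  calls++;
}, 100);
onResize();
onResize();
onResize(); // only one flush is scheduled for this burst
```

Compared with a debounce, this fires during a continuous drag (every 100ms) rather than only after the drag ends, which is why the comment in the hook talks about "following the viewport" rather than waiting for it to settle. The cleanup path in the hook also clears the pending timer so no state update lands after unmount.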
@@ -1,11 +1,15 @@
"use client";
import { ChatInput } from "@/app/(platform)/copilot/components/ChatInput/ChatInput";
import { cn } from "@/lib/utils";
import { Flag, useGetFlag } from "@/services/feature-flags/use-get-flag";
import { UIDataTypes, UIMessage, UITools } from "ai";
import { LayoutGroup, motion } from "framer-motion";
import { useCallback } from "react";
import { useCopilotUIStore } from "../../store";
import { ChatMessagesContainer } from "../ChatMessagesContainer/ChatMessagesContainer";
import { CopilotChatActionsProvider } from "../CopilotChatActionsProvider/CopilotChatActionsProvider";
import { EmptySession } from "../EmptySession/EmptySession";
import { useAutoOpenArtifacts } from "./useAutoOpenArtifacts";

export interface ChatContainerProps {
  messages: UIMessage<unknown, UIDataTypes, UITools>[];
@@ -23,6 +27,9 @@ export interface ChatContainerProps {
  onSend: (message: string, files?: File[]) => void | Promise<void>;
  onStop: () => void;
  isUploadingFiles?: boolean;
  hasMoreMessages?: boolean;
  isLoadingMore?: boolean;
  onLoadMore?: () => void;
  /** Files dropped onto the chat window. */
  droppedFiles?: File[];
  /** Called after droppedFiles have been consumed by ChatInput. */
@@ -44,10 +51,23 @@ export const ChatContainer = ({
  onSend,
  onStop,
  isUploadingFiles,
  hasMoreMessages,
  isLoadingMore,
  onLoadMore,
  droppedFiles,
  onDroppedFilesConsumed,
  historicalDurations,
}: ChatContainerProps) => {
  const isArtifactsEnabled = useGetFlag(Flag.ARTIFACTS);
  const isArtifactPanelOpen = useCopilotUIStore((s) => s.artifactPanel.isOpen);
  // When the flag is off we must not auto-open artifacts or let the panel's
  // open state drive layout width; an artifact generated in a stale session
  // state would otherwise shrink the chat column with no panel rendered.
  const isArtifactOpen = isArtifactsEnabled && isArtifactPanelOpen;
  useAutoOpenArtifacts({
    messages: isArtifactsEnabled ? messages : [],
    sessionId,
  });
  const isBusy =
    status === "streaming" ||
    status === "submitted" ||
@@ -76,13 +96,21 @@ export const ChatContainer = ({
    <LayoutGroup id="copilot-2-chat-layout">
      <div className="flex h-full min-h-0 w-full flex-col bg-[#f8f8f9] px-2 lg:px-0">
        {sessionId ? (
          <div className="mx-auto flex h-full min-h-0 w-full max-w-3xl flex-col">
          <div
            className={cn(
              "mx-auto flex h-full min-h-0 w-full flex-col",
              !isArtifactOpen && "max-w-3xl",
            )}
          >
            <ChatMessagesContainer
              messages={messages}
              status={status}
              error={error}
              isLoading={isLoadingSession}
              sessionID={sessionId}
              hasMoreMessages={hasMoreMessages}
              isLoadingMore={isLoadingMore}
              onLoadMore={onLoadMore}
              onRetry={handleRetry}
              historicalDurations={historicalDurations}
            />
@@ -0,0 +1,140 @@
import { act, renderHook } from "@testing-library/react";
import { beforeEach, describe, expect, it } from "vitest";
import { useCopilotUIStore } from "../../store";
import { useAutoOpenArtifacts } from "./useAutoOpenArtifacts";

function assistantMessageWithText(id: string, text: string) {
  return {
    id,
    role: "assistant" as const,
    parts: [{ type: "text" as const, text }],
  };
}

const A_ID = "11111111-0000-0000-0000-000000000000";
const B_ID = "22222222-0000-0000-0000-000000000000";

function resetStore() {
  useCopilotUIStore.setState({
    artifactPanel: {
      isOpen: false,
      isMinimized: false,
      isMaximized: false,
      width: 600,
      activeArtifact: null,
      history: [],
    },
  });
}

describe("useAutoOpenArtifacts", () => {
  beforeEach(resetStore);

  it("does NOT auto-open on the initial hydration of message list (baseline pass)", () => {
    const messages = [
      assistantMessageWithText("m1", `[a](workspace://${A_ID})`),
    ];
    renderHook(() =>
      useAutoOpenArtifacts({ messages: messages as any, sessionId: "s1" }),
    );
    // Initial run just records the baseline fingerprint; nothing opens.
    expect(useCopilotUIStore.getState().artifactPanel.isOpen).toBe(false);
  });

  it("auto-opens when an existing assistant message adds a new artifact", () => {
    // 1st render: baseline with no artifact.
    const initial = [assistantMessageWithText("m1", "thinking...")];
    const { rerender } = renderHook(
      ({ messages, sessionId }) =>
        useAutoOpenArtifacts({ messages: messages as any, sessionId }),
      { initialProps: { messages: initial, sessionId: "s1" } },
    );
    expect(useCopilotUIStore.getState().artifactPanel.isOpen).toBe(false);

    // 2nd render: same message id now contains an artifact link.
    act(() => {
      rerender({
        messages: [
          assistantMessageWithText("m1", `here: [A](workspace://${A_ID})`),
        ],
        sessionId: "s1",
      });
    });
    const s = useCopilotUIStore.getState().artifactPanel;
    expect(s.isOpen).toBe(true);
    expect(s.activeArtifact?.id).toBe(A_ID);
  });

  it("does not re-open when the fingerprint hasn't changed", () => {
    const msg = assistantMessageWithText("m1", `[A](workspace://${A_ID})`);
    const { rerender } = renderHook(
      ({ messages, sessionId }) =>
        useAutoOpenArtifacts({ messages: messages as any, sessionId }),
      { initialProps: { messages: [msg], sessionId: "s1" } },
    );
    // Baseline captured; no open.
    expect(useCopilotUIStore.getState().artifactPanel.isOpen).toBe(false);

    // Rerender identical content: no change in fingerprint → no open.
    act(() => {
      rerender({ messages: [msg], sessionId: "s1" });
    });
    expect(useCopilotUIStore.getState().artifactPanel.isOpen).toBe(false);
  });

  it("auto-opens when a brand-new assistant message arrives after the baseline is established", () => {
    // First render: one message without artifacts → establishes baseline.
    const { rerender } = renderHook(
      ({ messages, sessionId }) =>
        useAutoOpenArtifacts({ messages: messages as any, sessionId }),
      {
        initialProps: {
          messages: [assistantMessageWithText("m1", "plain")] as any,
          sessionId: "s1",
        },
      },
    );
    expect(useCopilotUIStore.getState().artifactPanel.isOpen).toBe(false);

    // Second render: a *new* assistant message with an artifact. Baseline
    // is already set, so this should auto-open.
    act(() => {
      rerender({
        messages: [
          assistantMessageWithText("m1", "plain"),
          assistantMessageWithText("m2", `[B](workspace://${B_ID})`),
        ] as any,
        sessionId: "s1",
      });
    });
    const s = useCopilotUIStore.getState().artifactPanel;
    expect(s.isOpen).toBe(true);
    expect(s.activeArtifact?.id).toBe(B_ID);
  });

  it("resets hydration baseline when sessionId changes", () => {
    const { rerender } = renderHook(
      ({ messages, sessionId }) =>
        useAutoOpenArtifacts({ messages: messages as any, sessionId }),
      {
        initialProps: {
          messages: [
            assistantMessageWithText("m1", `[A](workspace://${A_ID})`),
          ] as any,
          sessionId: "s1",
        },
      },
    );
    // Switch to a new session — the first pass on the new session should
    // NOT auto-open (it's a fresh hydration).
    act(() => {
      rerender({
        messages: [
          assistantMessageWithText("m2", `[B](workspace://${B_ID})`),
        ] as any,
        sessionId: "s2",
      });
    });
    expect(useCopilotUIStore.getState().artifactPanel.isOpen).toBe(false);
  });
});
@@ -0,0 +1,91 @@
"use client";

import { UIDataTypes, UIMessage, UITools } from "ai";
import { useEffect, useRef } from "react";
import type { ArtifactRef } from "../../store";
import { useCopilotUIStore } from "../../store";
import { getMessageArtifacts } from "../ChatMessagesContainer/helpers";

function fingerprintArtifacts(artifacts: ArtifactRef[]): string {
  return artifacts
    .map((a) => `${a.id}:${a.title}:${a.mimeType ?? ""}:${a.sourceUrl}`)
    .join("|");
}

interface UseAutoOpenArtifactsOptions {
  messages: UIMessage<unknown, UIDataTypes, UITools>[];
  sessionId: string | null;
}

export function useAutoOpenArtifacts({
  messages,
  sessionId,
}: UseAutoOpenArtifactsOptions) {
  const openArtifact = useCopilotUIStore((state) => state.openArtifact);
  const messageFingerprintsRef = useRef<Map<string, string>>(new Map());
  const hasInitializedRef = useRef(false);

  useEffect(() => {
    messageFingerprintsRef.current = new Map();
    hasInitializedRef.current = false;
  }, [sessionId]);

  useEffect(() => {
    if (messages.length === 0) {
      messageFingerprintsRef.current = new Map();
      return;
    }

    // Only scan messages whose fingerprint might have changed since the
    // last pass: that's the last assistant message (currently streaming)
    // plus any assistant message whose id isn't in the baseline yet.
    // This keeps the cost O(new+tail), not O(all messages), on every chunk.
    const previous = messageFingerprintsRef.current;
    const nextFingerprints = new Map<string, string>(previous);
    let nextArtifact: ArtifactRef | null = null;
    const lastAssistantIdx = (() => {
      for (let i = messages.length - 1; i >= 0; i--) {
        if (messages[i].role === "assistant") return i;
      }
      return -1;
    })();

    for (let i = 0; i < messages.length; i++) {
      const message = messages[i];
      if (message.role !== "assistant") continue;
      const isTailAssistant = i === lastAssistantIdx;
      const isNewMessage = !previous.has(message.id);
      if (!isTailAssistant && !isNewMessage) continue;

      const artifacts = getMessageArtifacts(message);
      const fingerprint = fingerprintArtifacts(artifacts);
      nextFingerprints.set(message.id, fingerprint);

      if (!hasInitializedRef.current || fingerprint.length === 0) {
        continue;
      }

      const previousFingerprint = previous.get(message.id) ?? "";
      if (previousFingerprint === fingerprint) continue;

      nextArtifact = artifacts[artifacts.length - 1] ?? nextArtifact;
    }

    // Drop entries for messages that no longer exist (e.g. history truncated).
    const liveIds = new Set(messages.map((m) => m.id));
    for (const id of nextFingerprints.keys()) {
      if (!liveIds.has(id)) nextFingerprints.delete(id);
    }

    messageFingerprintsRef.current = nextFingerprints;

    if (!hasInitializedRef.current) {
      hasInitializedRef.current = true;
      return;
    }

    if (nextArtifact) {
      openArtifact(nextArtifact);
    }
  }, [messages, openArtifact]);
}
@@ -61,7 +61,7 @@ export function ChatInput({
          : "Switched to Extended Thinking mode",
        description:
          next === "fast"
            ? "Response quality may differ."
            ? "Optimized for speed — ideal for simpler tasks."
            : "Responses may take longer.",
      });
    }

@@ -0,0 +1,44 @@
import type { Meta, StoryObj } from "@storybook/nextjs";
import { ModeToggleButton } from "./ModeToggleButton";

const meta: Meta<typeof ModeToggleButton> = {
  title: "Copilot/ModeToggleButton",
  component: ModeToggleButton,
  tags: ["autodocs"],
  parameters: {
    layout: "centered",
    docs: {
      description: {
        component:
          "Toggle between Fast and Extended Thinking copilot modes. Disabled while a response is streaming.",
      },
    },
  },
  args: {
    onToggle: () => {},
  },
};

export default meta;
type Story = StoryObj<typeof meta>;

export const FastMode: Story = {
  args: {
    mode: "fast",
    isStreaming: false,
  },
};

export const ExtendedThinkingMode: Story = {
  args: {
    mode: "extended_thinking",
    isStreaming: false,
  },
};

export const DisabledWhileStreaming: Story = {
  args: {
    mode: "fast",
    isStreaming: true,
  },
};
@@ -2,8 +2,7 @@

import { cn } from "@/lib/utils";
import { Brain, Lightning } from "@phosphor-icons/react";

type CopilotMode = "extended_thinking" | "fast";
import type { CopilotMode } from "../../../store";

interface Props {
  mode: CopilotMode;
@@ -22,8 +21,8 @@ export function ModeToggleButton({ mode, isStreaming, onToggle }: Props) {
      className={cn(
        "inline-flex min-h-11 min-w-11 items-center justify-center gap-1 rounded-md px-2 py-1 text-xs font-medium transition-colors",
        isExtended
          ? "bg-purple-100 text-purple-700 hover:bg-purple-200 dark:bg-purple-900/30 dark:text-purple-300"
          : "bg-amber-100 text-amber-700 hover:bg-amber-200 dark:bg-amber-900/30 dark:text-amber-300",
          ? "bg-purple-100 text-purple-900 hover:bg-purple-200"
          : "bg-amber-100 text-amber-900 hover:bg-amber-200",
        isStreaming && "cursor-not-allowed opacity-50",
      )}
      aria-label={

@@ -1,4 +1,4 @@
import { useEffect, useMemo, useRef } from "react";
import { useMemo, useState } from "react";
import {
  Conversation,
  ConversationContent,
@@ -11,6 +11,8 @@ import {
} from "@/components/ai-elements/message";
import { LoadingSpinner } from "@/components/atoms/LoadingSpinner/LoadingSpinner";
import { FileUIPart, UIDataTypes, UIMessage, UITools } from "ai";
import { useEffect, useLayoutEffect, useRef } from "react";
import { useStickToBottomContext } from "use-stick-to-bottom";
import { TOOL_PART_PREFIX } from "../JobStatsBar/constants";
import { TurnStatsBar } from "../JobStatsBar/TurnStatsBar";
import { useElapsedTimer } from "../JobStatsBar/useElapsedTimer";
@@ -37,6 +39,9 @@ interface Props {
  error: Error | undefined;
  isLoading: boolean;
  sessionID?: string | null;
  hasMoreMessages?: boolean;
  isLoadingMore?: boolean;
  onLoadMore?: () => void;
  onRetry?: () => void;
  historicalDurations?: Map<string, number>;
}
@@ -106,15 +111,120 @@ function extractGraphExecId(
  return null;
}

/**
 * Triggers `onLoadMore` when scrolled near the top, and preserves the
 * user's scroll position after older messages are prepended to the DOM.
 *
 * Scroll preservation works by:
 * 1. Capturing `scrollHeight` / `scrollTop` in the observer callback
 *    (synchronous, before React re-renders).
 * 2. Restoring `scrollTop` in a `useLayoutEffect` keyed on
 *    `messageCount` so it only fires when messages actually change
 *    (not on intermediate renders like the loading-spinner toggle).
 */
function LoadMoreSentinel({
  hasMore,
  isLoading,
  messageCount,
  onLoadMore,
}: {
  hasMore: boolean;
  isLoading: boolean;
  messageCount: number;
  onLoadMore: () => void;
}) {
  const sentinelRef = useRef<HTMLDivElement>(null);
  const onLoadMoreRef = useRef(onLoadMore);
  onLoadMoreRef.current = onLoadMore;
  // Pre-mutation scroll snapshot, written synchronously before onLoadMore
  const scrollSnapshotRef = useRef({ scrollHeight: 0, scrollTop: 0 });
  const { scrollRef } = useStickToBottomContext();

  // IntersectionObserver to trigger load when sentinel is near viewport.
  // Only fires when the container is actually scrollable to prevent
  // exhausting all pages when content fits without scrolling.
  useEffect(() => {
    if (!sentinelRef.current || !hasMore || isLoading) return;
    const observer = new IntersectionObserver(
      ([entry]) => {
        if (!entry.isIntersecting) return;
        const scrollParent =
          sentinelRef.current?.closest('[role="log"]') ??
          sentinelRef.current?.parentElement;
        if (
          scrollParent &&
          scrollParent.scrollHeight <= scrollParent.clientHeight
        )
          return;
        // Capture scroll metrics *before* the state update
        const el = scrollRef.current;
        if (el) {
          scrollSnapshotRef.current = {
            scrollHeight: el.scrollHeight,
            scrollTop: el.scrollTop,
          };
        }
        onLoadMoreRef.current();
      },
      { rootMargin: "200px 0px 0px 0px" },
    );
    observer.observe(sentinelRef.current);
    return () => observer.disconnect();
  }, [hasMore, isLoading, scrollRef]);

  // After React commits new DOM nodes (prepended messages), adjust
  // scrollTop so the user stays at the same visual position.
  // Keyed on messageCount so it only fires when messages actually
  // change — NOT on intermediate renders (loading spinner, etc.)
  // that would consume the snapshot too early.
  useLayoutEffect(() => {
    const el = scrollRef.current;
    const { scrollHeight: prevHeight, scrollTop: prevTop } =
      scrollSnapshotRef.current;
    if (!el || prevHeight === 0) return;
    const delta = el.scrollHeight - prevHeight;
    if (delta > 0) {
      el.scrollTop = prevTop + delta;
    }
    scrollSnapshotRef.current = { scrollHeight: 0, scrollTop: 0 };
  }, [messageCount, scrollRef]);

  return (
    <div ref={sentinelRef} className="flex justify-center py-1">
      {isLoading && <LoadingSpinner className="h-5 w-5 text-neutral-400" />}
    </div>
  );
}

export function ChatMessagesContainer({
  messages,
  status,
  error,
  isLoading,
  sessionID,
  hasMoreMessages,
  isLoadingMore,
  onLoadMore,
  onRetry,
  historicalDurations,
}: Props) {
  // Hide the container for one frame when messages first load so
  // StickToBottom can scroll to the bottom before the user sees it.
  const [settled, setSettled] = useState(false);
  const [prevSessionID, setPrevSessionID] = useState(sessionID);
  if (sessionID !== prevSessionID) {
    setPrevSessionID(sessionID);
    if (settled) setSettled(false);
  }
  const messagesReady = messages.length > 0 || !isLoading;
  useEffect(() => {
    if (settled || !messagesReady) return;
    const raf = requestAnimationFrame(() => setSettled(true));
    return () => cancelAnimationFrame(raf);
  }, [settled, messagesReady]);
  // opacity-0 only during the single frame between messages arriving and scroll settling
  const hideForScroll = messagesReady && !settled;

  const lastMessage = messages[messages.length - 1];
  const graphExecId = useMemo(() => extractGraphExecId(messages), [messages]);

@@ -162,13 +272,27 @@ export function ChatMessagesContainer({
  });

  return (
    <Conversation className="min-h-0 flex-1">
      <ConversationContent className="flex flex-1 flex-col gap-6 px-3 py-6">
    <Conversation
      key={sessionID ?? "new"}
      resize={settled ? "smooth" : "instant"}
      className={
        "min-h-0 flex-1 " +
        (hideForScroll
          ? "opacity-0"
          : "opacity-100 transition-opacity duration-100 ease-out")
      }
    >
      <ConversationContent className="flex min-h-full flex-1 flex-col gap-6 px-3 py-6">
        {hasMoreMessages && onLoadMore && (
          <LoadMoreSentinel
            hasMore={hasMoreMessages}
            isLoading={!!isLoadingMore}
            messageCount={messages.length}
            onLoadMore={onLoadMore}
          />
        )}
        {isLoading && messages.length === 0 && (
          <div
            className="flex flex-1 items-center justify-center"
            style={{ minHeight: "calc(100vh - 12rem)" }}
          >
          <div className="flex flex-1 items-center justify-center">
            <LoadingSpinner className="text-neutral-600" />
          </div>
        )}

@@ -2,6 +2,7 @@ import {
  FileText as FileTextIcon,
  DownloadSimple as DownloadIcon,
} from "@phosphor-icons/react";
import { Flag, useGetFlag } from "@/services/feature-flags/use-get-flag";
import type { FileUIPart } from "ai";
import {
  globalRegistry,
@@ -14,6 +15,8 @@ import {
  ContentCardTitle,
  ContentCardSubtitle,
} from "../../ToolAccordion/AccordionContent";
import { ArtifactCard } from "../../ArtifactCard/ArtifactCard";
import { filePartToArtifactRef } from "../helpers";

interface Props {
  files: FileUIPart[];
@@ -39,11 +42,26 @@ function renderFileContent(file: FileUIPart): React.ReactNode | null {
}

export function MessageAttachments({ files, isUser }: Props) {
  const isArtifactsEnabled = useGetFlag(Flag.ARTIFACTS);
  if (files.length === 0) return null;

  return (
    <div className="mt-2 flex flex-col gap-2">
      {files.map((file, i) => {
        if (isArtifactsEnabled) {
          const artifactRef = filePartToArtifactRef(
            file,
            isUser ? "user-upload" : "agent",
          );
          if (artifactRef) {
            return (
              <ArtifactCard
                key={`artifact-${artifactRef.id}-${i}`}
                artifact={artifactRef}
              />
            );
          }
        }
        const rendered = renderFileContent(file);
        return rendered ? (
          <div

@@ -1,7 +1,9 @@
import { MessageResponse } from "@/components/ai-elements/message";
import { ErrorCard } from "@/components/molecules/ErrorCard/ErrorCard";
import { Flag, useGetFlag } from "@/services/feature-flags/use-get-flag";
import { ExclamationMarkIcon } from "@phosphor-icons/react";
import { ToolUIPart, UIDataTypes, UIMessage, UITools } from "ai";
import { ArtifactCard } from "../../ArtifactCard/ArtifactCard";
import { AskQuestionTool } from "../../../tools/AskQuestion/AskQuestion";
import { ConnectIntegrationTool } from "../../../tools/ConnectIntegrationTool/ConnectIntegrationTool";
import { CreateAgentTool } from "../../../tools/CreateAgent/CreateAgent";
@@ -19,7 +21,11 @@ import { RunBlockTool } from "../../../tools/RunBlock/RunBlock";
import { RunMCPToolComponent } from "../../../tools/RunMCPTool/RunMCPTool";
import { SearchDocsTool } from "../../../tools/SearchDocs/SearchDocs";
import { ViewAgentOutputTool } from "../../../tools/ViewAgentOutput/ViewAgentOutput";
import { parseSpecialMarkers, resolveWorkspaceUrls } from "../helpers";
import {
  extractWorkspaceArtifacts,
  parseSpecialMarkers,
  resolveWorkspaceUrls,
} from "../helpers";

/**
 * Custom img component for Streamdown that renders <video> elements
@@ -61,6 +67,27 @@ function WorkspaceMediaImage(props: React.JSX.IntrinsicElements["img"]) {
/** Stable components override for Streamdown (avoids re-creating on every render). */
const STREAMDOWN_COMPONENTS = { img: WorkspaceMediaImage };

function TextWithArtifactCards({ text }: { text: string }) {
  const isArtifactsEnabled = useGetFlag(Flag.ARTIFACTS);
  const artifacts = extractWorkspaceArtifacts(text);
  const resolved = resolveWorkspaceUrls(text);

  return (
    <>
      {isArtifactsEnabled && artifacts.length > 0 && (
        <div className="mb-2 flex flex-col gap-1">
          {artifacts.map((artifact) => (
            <ArtifactCard key={artifact.id} artifact={artifact} />
          ))}
        </div>
      )}
      <MessageResponse components={STREAMDOWN_COMPONENTS}>
        {resolved}
      </MessageResponse>
    </>
  );
}

interface Props {
  part: UIMessage<unknown, UIDataTypes, UITools>["parts"][number];
  messageID: string;
@@ -118,11 +145,7 @@ export function MessagePartRenderer({
      );
    }

      return (
        <MessageResponse key={key} components={STREAMDOWN_COMPONENTS}>
          {resolveWorkspaceUrls(cleanText)}
        </MessageResponse>
      );
      return <TextWithArtifactCards key={key} text={cleanText} />;
    }
    case "tool-ask_question":
      return <AskQuestionTool key={key} part={part as ToolUIPart} />;

@@ -0,0 +1,103 @@
import { describe, expect, it } from "vitest";
import { extractWorkspaceArtifacts, filePartToArtifactRef } from "./helpers";

describe("extractWorkspaceArtifacts", () => {
  it("extracts a single workspace:// link with its markdown title", () => {
    const text =
      "See [the report](workspace://550e8400-e29b-41d4-a716-446655440000) for details.";
    const out = extractWorkspaceArtifacts(text);
    expect(out).toHaveLength(1);
    expect(out[0].id).toBe("550e8400-e29b-41d4-a716-446655440000");
    expect(out[0].title).toBe("the report");
    expect(out[0].origin).toBe("agent");
  });

  it("falls back to a synthetic title when the URI isn't wrapped in link markdown", () => {
    const text = "raw workspace://abc12345-0000-0000-0000-000000000000 link";
    const out = extractWorkspaceArtifacts(text);
    expect(out).toHaveLength(1);
    expect(out[0].title).toBe("File abc12345");
  });

  it("skips URIs inside image markdown so images don't double-render", () => {
    const text =
      "";
    expect(extractWorkspaceArtifacts(text)).toEqual([]);
  });

  it("still extracts non-image links when image links are also present", () => {
    const text =
      " " +
      "and [doc](workspace://bbbbbbbb-0000-0000-0000-000000000000)";
    const out = extractWorkspaceArtifacts(text);
    expect(out).toHaveLength(1);
    expect(out[0].id).toBe("bbbbbbbb-0000-0000-0000-000000000000");
  });

  it("deduplicates repeated references to the same artifact id", () => {
    const text =
      "[A](workspace://11111111-0000-0000-0000-000000000000) and " +
      "[A again](workspace://11111111-0000-0000-0000-000000000000)";
    const out = extractWorkspaceArtifacts(text);
    expect(out).toHaveLength(1);
  });

  it("returns empty when no workspace URIs are present", () => {
    expect(extractWorkspaceArtifacts("plain text, no links")).toEqual([]);
  });

  it("picks up the mime hint from the URI fragment", () => {
    const text =
      " " +
      "[d](workspace://dddddddd-0000-0000-0000-000000000000#application/pdf)";
    const out = extractWorkspaceArtifacts(text);
    expect(out).toHaveLength(1);
    expect(out[0].mimeType).toBe("application/pdf");
  });
});

describe("filePartToArtifactRef", () => {
  it("returns null without a url", () => {
    expect(
      filePartToArtifactRef({ type: "file", url: "", filename: "x" } as any),
    ).toBeNull();
  });

  it("returns null for URLs that don't match the workspace file pattern", () => {
    expect(
      filePartToArtifactRef({
        type: "file",
        url: "https://example.com/file.txt",
        filename: "file.txt",
      } as any),
    ).toBeNull();
  });

  it("extracts id from the workspace proxy URL", () => {
    const ref = filePartToArtifactRef({
      type: "file",
      url: "/api/proxy/api/workspace/files/550e8400-e29b-41d4-a716-446655440000/download",
      filename: "report.pdf",
      mediaType: "application/pdf",
    } as any);
    expect(ref?.id).toBe("550e8400-e29b-41d4-a716-446655440000");
    expect(ref?.title).toBe("report.pdf");
    expect(ref?.mimeType).toBe("application/pdf");
  });

  it("defaults origin to user-upload but accepts an override", () => {
    const url =
      "/api/proxy/api/workspace/files/550e8400-e29b-41d4-a716-446655440000/download";
    const defaulted = filePartToArtifactRef({
      type: "file",
      url,
      filename: "a.txt",
    } as any);
    expect(defaulted?.origin).toBe("user-upload");
    const overridden = filePartToArtifactRef(
      { type: "file", url, filename: "a.txt" } as any,
      "agent",
    );
    expect(overridden?.origin).toBe("agent");
  });
});
@@ -1,6 +1,8 @@
import { getGetWorkspaceDownloadFileByIdUrl } from "@/app/api/__generated__/endpoints/workspace/workspace";
import { ResponseType } from "@/app/api/__generated__/models/responseType";
import { ToolUIPart, UIDataTypes, UIMessage, UITools } from "ai";
import { parseWorkspaceURI } from "@/lib/workspace-uri";
import { FileUIPart, ToolUIPart, UIDataTypes, UIMessage, UITools } from "ai";
import type { ArtifactRef } from "../../store";

export type MessagePart = UIMessage<
  unknown,
@@ -31,6 +33,10 @@ const CUSTOM_TOOL_TYPES = new Set([
  "tool-create_feature_request",
]);

const WORKSPACE_FILE_PATTERN =
  /\/api\/proxy\/api\/workspace\/files\/([a-f0-9-]+)\/download/;
const WORKSPACE_URI_PATTERN = /workspace:\/\/([a-f0-9-]+)(?:#([^\s)\]]+))?/g;

const INTERACTIVE_RESPONSE_TYPES: ReadonlySet<string> = new Set([
  ResponseType.setup_requirements,
  ResponseType.agent_details,
@@ -233,6 +239,84 @@ export function parseSpecialMarkers(text: string): {
  return { markerType: null, markerText: "", cleanText: text };
}

export function filePartToArtifactRef(
  file: FileUIPart,
  origin: ArtifactRef["origin"] = "user-upload",
): ArtifactRef | null {
  if (!file.url) return null;
  const match = file.url.match(WORKSPACE_FILE_PATTERN);
  if (!match) return null;
  return {
    id: match[1],
    title: file.filename || "File",
    mimeType: file.mediaType || null,
    sourceUrl: file.url,
    origin,
  };
}

export function extractWorkspaceArtifacts(text: string): ArtifactRef[] {
  const seen = new Set<string>();
  const artifacts: ArtifactRef[] = [];

  for (const match of text.matchAll(WORKSPACE_URI_PATTERN)) {
    const fullUri = match[0];
    const parsed = parseWorkspaceURI(fullUri);

    if (!parsed || seen.has(parsed.fileID)) continue;

    // Skip URIs inside image markdown (``). Images are
    // rendered inline via resolveWorkspaceUrls — surfacing them as cards too
    // would double-render the same asset.
    const escapedUri = escapeRegExp(fullUri);
    const imagePattern = new RegExp(`!\\[[^\\]]*\\]\\(${escapedUri}\\)`);
    if (imagePattern.test(text)) continue;

    seen.add(parsed.fileID);

    const linkPattern = new RegExp(`\\[([^\\]]+)\\]\\(${escapedUri}\\)`);
    const linkMatch = text.match(linkPattern);
    const title = linkMatch?.[1] ?? `File ${parsed.fileID.slice(0, 8)}`;

    artifacts.push({
      id: parsed.fileID,
      title,
      mimeType: parsed.mimeType,
      sourceUrl: `/api/proxy${getGetWorkspaceDownloadFileByIdUrl(parsed.fileID)}`,
      origin: "agent",
    });
  }

  return artifacts;
}

export function getMessageArtifacts(
  message: UIMessage<unknown, UIDataTypes, UITools>,
): ArtifactRef[] {
  const seen = new Set<string>();
  const artifacts: ArtifactRef[] = [];

  for (const part of message.parts) {
    if (part.type === "text") {
      for (const artifact of extractWorkspaceArtifacts(part.text)) {
        if (seen.has(artifact.id)) continue;
        seen.add(artifact.id);
        artifacts.push(artifact);
      }
    }

    if (part.type === "file") {
      const origin = message.role === "user" ? "user-upload" : "agent";
      const artifact = filePartToArtifactRef(part, origin);
      if (!artifact || seen.has(artifact.id)) continue;
      seen.add(artifact.id);
      artifacts.push(artifact);
    }
  }

  return artifacts;
}

/**
 * Resolve workspace:// URLs in markdown text to proxy download URLs.
 *
@@ -4,6 +4,7 @@ import {
  ORIGINAL_TITLE,
  extractSendMessageText,
  formatNotificationTitle,
  getSendSuppressionReason,
  parseSessionIDs,
  shouldSuppressDuplicateSend,
} from "./helpers";
@@ -100,9 +101,10 @@ describe("extractSendMessageText", () => {
  });
});

let msgCounter = 0;
function makeMsg(role: "user" | "assistant", text: string): UIMessage {
  return {
    id: `msg-${Math.random()}`,
    id: `msg-${msgCounter++}`,
    role,
    parts: [{ type: "text", text }],
  };
@@ -192,3 +194,100 @@ describe("shouldSuppressDuplicateSend", () => {
    ).toBe(false);
  });
});

describe("getSendSuppressionReason", () => {
  it("returns 'reconnecting' when reconnect is scheduled", () => {
    expect(
      getSendSuppressionReason({
        text: "hello",
        isReconnectScheduled: true,
        lastSubmittedText: null,
        messages: [],
      }),
    ).toBe("reconnecting");
  });

  it("returns 'reconnecting' even when text would otherwise be a duplicate", () => {
    const messages = [makeMsg("user", "hello")];
    expect(
      getSendSuppressionReason({
        text: "hello",
        isReconnectScheduled: true,
        lastSubmittedText: "hello",
        messages,
      }),
    ).toBe("reconnecting");
  });

  it("returns 'duplicate' when text matches last submitted AND last user message", () => {
    const messages = [makeMsg("user", "hello"), makeMsg("assistant", "hi")];
    expect(
      getSendSuppressionReason({
        text: "hello",
        isReconnectScheduled: false,
        lastSubmittedText: "hello",
        messages,
      }),
    ).toBe("duplicate");
  });

  it("returns null when text matches last submitted but differs from last user message", () => {
    const messages = [
      makeMsg("user", "different"),
      makeMsg("assistant", "reply"),
    ];
    expect(
      getSendSuppressionReason({
        text: "hello",
        isReconnectScheduled: false,
        lastSubmittedText: "hello",
        messages,
      }),
    ).toBeNull();
  });

  it("returns null when text differs from last submitted", () => {
    const messages = [makeMsg("user", "hello")];
    expect(
      getSendSuppressionReason({
        text: "new message",
        isReconnectScheduled: false,
        lastSubmittedText: "hello",
        messages,
      }),
    ).toBeNull();
  });

  it("returns null when not reconnecting and no prior submission", () => {
    expect(
      getSendSuppressionReason({
        text: "hello",
        isReconnectScheduled: false,
        lastSubmittedText: null,
        messages: [],
      }),
    ).toBeNull();
  });

  it("returns null when text is empty", () => {
    expect(
      getSendSuppressionReason({
        text: "",
        isReconnectScheduled: false,
        lastSubmittedText: "",
        messages: [],
      }),
    ).toBeNull();
  });

  it("returns null when messages array is empty even if text matches lastSubmitted", () => {
    expect(
      getSendSuppressionReason({
        text: "hello",
        isReconnectScheduled: false,
        lastSubmittedText: "hello",
        messages: [],
      }),
    ).toBeNull();
  });
});
@@ -6,6 +6,7 @@ interface SessionChatMessage {
|
||||
content: string | null;
|
||||
tool_call_id: string | null;
|
||||
tool_calls: unknown[] | null;
|
||||
sequence: number | null;
|
||||
duration_ms: number | null;
|
||||
}
|
||||
|
||||
@@ -35,6 +36,7 @@ function coerceSessionChatMessages(
|
||||
? null
|
||||
: String(msg.tool_call_id),
|
||||
tool_calls: Array.isArray(msg.tool_calls) ? msg.tool_calls : null,
|
||||
sequence: typeof msg.sequence === "number" ? msg.sequence : null,
|
||||
duration_ms:
|
||||
typeof msg.duration_ms === "number" ? msg.duration_ms : null,
|
||||
};
|
||||
@@ -101,10 +103,67 @@ function toToolInput(rawArguments: unknown): unknown {
|
||||
return {};
|
||||
}
|
||||
|
||||
/**
|
||||
* Concatenate two UIMessage arrays, merging consecutive assistant messages
|
||||
* at the join point so that reasoning + response parts stay in a single bubble.
|
||||
*
|
||||
* Within each page, `convertChatSessionMessagesToUiMessages` already merges
|
||||
* consecutive assistant DB rows. This handles the boundary between pages
|
||||
* (or between older-pages and the current/streaming page).
|
||||
*/
|
||||
export function concatWithAssistantMerge(
|
||||
a: UIMessage<unknown, UIDataTypes, UITools>[],
|
||||
b: UIMessage<unknown, UIDataTypes, UITools>[],
|
||||
): UIMessage<unknown, UIDataTypes, UITools>[] {
|
||||
if (a.length === 0) return b;
|
||||
if (b.length === 0) return a;
|
||||
const last = a[a.length - 1];
|
||||
const first = b[0];
|
||||
if (last.role === "assistant" && first.role === "assistant") {
|
||||
return [
|
||||
...a.slice(0, -1),
|
||||
{ ...last, parts: [...last.parts, ...first.parts] },
|
||||
...b.slice(1),
|
||||
];
|
||||
}
|
||||
return [...a, ...b];
|
||||
}
|
||||
|
||||
/**
|
||||
* Extract a toolCallId → output map from raw API messages.
|
||||
* Used to provide cross-page tool output context when converting
|
||||
* older pages that may have assistant tool_calls whose results
|
||||
* are in a newer page.
|
||||
*/
|
||||
export function extractToolOutputsFromRaw(
|
||||
rawMessages: unknown[],
|
||||
): Map<string, unknown> {
|
||||
const map = new Map<string, unknown>();
|
||||
for (const raw of rawMessages) {
|
||||
if (!raw || typeof raw !== "object") continue;
|
||||
const msg = raw as Record<string, unknown>;
|
||||
if (
|
||||
msg.role === "tool" &&
|
||||
typeof msg.tool_call_id === "string" &&
|
||||
msg.content != null
|
||||
) {
|
||||
map.set(
|
||||
msg.tool_call_id,
|
||||
typeof msg.content === "string" ? msg.content : String(msg.content),
|
||||
);
|
||||
}
|
||||
}
|
||||
return map;
|
||||
}
|
||||
|
||||
export function convertChatSessionMessagesToUiMessages(
  sessionId: string,
  rawMessages: unknown[],
  options?: { isComplete?: boolean },
  options?: {
    isComplete?: boolean;
    /** Tool outputs from adjacent pages, for cross-page tool_call matching. */
    extraToolOutputs?: Map<string, unknown>;
  },
): {
  messages: UIMessage<unknown, UIDataTypes, UITools>[];
  durations: Map<string, number>;
@@ -112,6 +171,14 @@ export function convertChatSessionMessagesToUiMessages(
  const messages = coerceSessionChatMessages(rawMessages);
  const toolOutputsByCallId = new Map<string, unknown>();

  // Seed with extra tool outputs from adjacent pages first;
  // outputs from this page will override if present in both.
  if (options?.extraToolOutputs) {
    for (const [id, output] of options.extraToolOutputs) {
      toolOutputsByCallId.set(id, output);
    }
  }

  for (const msg of messages) {
    if (msg.role !== "tool") continue;
    if (!msg.tool_call_id) continue;
@@ -122,7 +189,7 @@ export function convertChatSessionMessagesToUiMessages(
  const uiMessages: UIMessage<unknown, UIDataTypes, UITools>[] = [];
  const durations = new Map<string, number>();

  messages.forEach((msg, index) => {
  messages.forEach((msg) => {
    if (msg.role === "tool") return;
    if (msg.role !== "user" && msg.role !== "assistant") return;

@@ -200,7 +267,7 @@ export function convertChatSessionMessagesToUiMessages(
      return;
    }

    const msgId = `${sessionId}-${index}`;
    const msgId = `${sessionId}-seq-${msg.sequence}`;
    uiMessages.push({
      id: msgId,
      role: msg.role,
@@ -0,0 +1,141 @@
import { beforeEach, describe, expect, it } from "vitest";
import type { ArtifactRef } from "./store";
import { useCopilotUIStore } from "./store";

function makeArtifact(id: string, title = `file-${id}`): ArtifactRef {
  return {
    id,
    title,
    mimeType: "text/plain",
    sourceUrl: `/api/proxy/api/workspace/files/${id}/download`,
    origin: "agent",
  };
}

function resetStore() {
  useCopilotUIStore.setState({
    artifactPanel: {
      isOpen: false,
      isMinimized: false,
      isMaximized: false,
      width: 600,
      activeArtifact: null,
      history: [],
    },
  });
}

describe("artifactPanel store actions", () => {
  beforeEach(resetStore);

  it("openArtifact opens the panel and sets the active artifact", () => {
    const a = makeArtifact("a");
    useCopilotUIStore.getState().openArtifact(a);
    const s = useCopilotUIStore.getState().artifactPanel;
    expect(s.isOpen).toBe(true);
    expect(s.isMinimized).toBe(false);
    expect(s.activeArtifact?.id).toBe("a");
    expect(s.history).toEqual([]);
  });

  it("openArtifact pushes the previous artifact onto history", () => {
    const a = makeArtifact("a");
    const b = makeArtifact("b");
    useCopilotUIStore.getState().openArtifact(a);
    useCopilotUIStore.getState().openArtifact(b);
    const s = useCopilotUIStore.getState().artifactPanel;
    expect(s.activeArtifact?.id).toBe("b");
    expect(s.history.map((h) => h.id)).toEqual(["a"]);
  });

  it("openArtifact does NOT push history when re-opening the same artifact", () => {
    const a = makeArtifact("a");
    useCopilotUIStore.getState().openArtifact(a);
    useCopilotUIStore.getState().openArtifact(a);
    expect(useCopilotUIStore.getState().artifactPanel.history).toEqual([]);
  });

  it("openArtifact pops the top of history when returning to it (A→B→A)", () => {
    const a = makeArtifact("a");
    const b = makeArtifact("b");
    useCopilotUIStore.getState().openArtifact(a);
    useCopilotUIStore.getState().openArtifact(b);
    useCopilotUIStore.getState().openArtifact(a); // ping-pong
    const s = useCopilotUIStore.getState().artifactPanel;
    expect(s.activeArtifact?.id).toBe("a");
    // History was [a]; returning to a should pop, not push.
    expect(s.history).toEqual([]);
  });

  it("goBackArtifact pops the last entry and becomes active", () => {
    const a = makeArtifact("a");
    const b = makeArtifact("b");
    useCopilotUIStore.getState().openArtifact(a);
    useCopilotUIStore.getState().openArtifact(b);
    useCopilotUIStore.getState().goBackArtifact();
    const s = useCopilotUIStore.getState().artifactPanel;
    expect(s.activeArtifact?.id).toBe("a");
    expect(s.history).toEqual([]);
  });

  it("goBackArtifact is a no-op when history is empty", () => {
    const a = makeArtifact("a");
    useCopilotUIStore.getState().openArtifact(a);
    useCopilotUIStore.getState().goBackArtifact();
    const s = useCopilotUIStore.getState().artifactPanel;
    expect(s.activeArtifact?.id).toBe("a");
  });

  it("closeArtifactPanel keeps activeArtifact (for exit animation) and clears history", () => {
    const a = makeArtifact("a");
    const b = makeArtifact("b");
    useCopilotUIStore.getState().openArtifact(a);
    useCopilotUIStore.getState().openArtifact(b);
    useCopilotUIStore.getState().closeArtifactPanel();
    const s = useCopilotUIStore.getState().artifactPanel;
    expect(s.isOpen).toBe(false);
    expect(s.isMinimized).toBe(false);
    expect(s.activeArtifact?.id).toBe("b");
    expect(s.history).toEqual([]);
  });

  it("minimize/restore toggles isMinimized without touching activeArtifact", () => {
    const a = makeArtifact("a");
    useCopilotUIStore.getState().openArtifact(a);
    useCopilotUIStore.getState().minimizeArtifactPanel();
    expect(useCopilotUIStore.getState().artifactPanel.isMinimized).toBe(true);
    useCopilotUIStore.getState().restoreArtifactPanel();
    expect(useCopilotUIStore.getState().artifactPanel.isMinimized).toBe(false);
    expect(useCopilotUIStore.getState().artifactPanel.activeArtifact?.id).toBe(
      "a",
    );
  });

  it("maximize sets isMaximized and clears isMinimized", () => {
    const a = makeArtifact("a");
    useCopilotUIStore.getState().openArtifact(a);
    useCopilotUIStore.getState().minimizeArtifactPanel();
    useCopilotUIStore.getState().maximizeArtifactPanel();
    const s = useCopilotUIStore.getState().artifactPanel;
    expect(s.isMaximized).toBe(true);
    expect(s.isMinimized).toBe(false);
  });

  it("restoreArtifactPanel clears both isMinimized and isMaximized", () => {
    const a = makeArtifact("a");
    useCopilotUIStore.getState().openArtifact(a);
    useCopilotUIStore.getState().maximizeArtifactPanel();
    useCopilotUIStore.getState().restoreArtifactPanel();
    const s = useCopilotUIStore.getState().artifactPanel;
    expect(s.isMaximized).toBe(false);
    expect(s.isMinimized).toBe(false);
  });

  it("setArtifactPanelWidth updates width and clears isMaximized", () => {
    useCopilotUIStore.getState().maximizeArtifactPanel();
    useCopilotUIStore.getState().setArtifactPanelWidth(720);
    const s = useCopilotUIStore.getState().artifactPanel;
    expect(s.width).toBe(720);
    expect(s.isMaximized).toBe(false);
  });
});
@@ -1,5 +1,6 @@
import { Key, storage } from "@/services/storage/local-storage";
import { create } from "zustand";
import { clearContentCache } from "./components/ArtifactPanel/components/useArtifactContent";
import { ORIGINAL_TITLE, parseSessionIDs } from "./helpers";

export interface DeleteTarget {
@@ -7,8 +8,77 @@ export interface DeleteTarget {
  title: string | null | undefined;
}

/**
 * A single workspace artifact surfaced in the copilot chat.
 *
 * Rendered by `ArtifactCard` (inline) and `ArtifactPanel` (preview pane).
 * Typically extracted from `workspace://<id>` URIs in assistant text parts
 * or from `FileUIPart` attachments; see `getMessageArtifacts` in
 * `ChatMessagesContainer/helpers.ts`.
 */
export interface ArtifactRef {
  /** Workspace file ID (matches the backend `WorkspaceFile.id`). */
  id: string;
  /** Human-visible filename, used as both title and download filename. */
  title: string;
  /** MIME type if known (from backend metadata or `workspace://id#mime`). */
  mimeType: string | null;
  /**
   * Fully-qualified URL the preview/download code will fetch from. Today
   * this is always the same-origin proxy path
   * `/api/proxy/api/workspace/files/{id}/download`.
   */
  sourceUrl: string;
  /**
   * Who produced the artifact — drives the origin badge color in
   * `ArtifactPanelHeader`. Derived from the emitting message's role.
   */
  origin: "agent" | "user-upload";
  /** Size in bytes if known — used by `classifyArtifact` for size gating. */
  sizeBytes?: number;
}

interface ArtifactPanelState {
  isOpen: boolean;
  isMinimized: boolean;
  isMaximized: boolean;
  width: number;
  activeArtifact: ArtifactRef | null;
  history: ArtifactRef[];
}

export const DEFAULT_PANEL_WIDTH = 600;

/** Autopilot response mode. */
export type CopilotMode = "extended_thinking" | "fast";

const isClient = typeof window !== "undefined";

function getPersistedWidth(): number {
  if (!isClient) return DEFAULT_PANEL_WIDTH;
  const saved = storage.get(Key.COPILOT_ARTIFACT_PANEL_WIDTH);
  if (saved) {
    const parsed = parseInt(saved, 10);
    // Match the drag-handle clamp so a stale/corrupt value can't open the
    // panel wider than 85% of the viewport.
    const maxWidth = window.innerWidth * 0.85;
    if (!isNaN(parsed) && parsed >= 320) {
      return Math.min(parsed, maxWidth);
    }
  }
  return DEFAULT_PANEL_WIDTH;
}

let panelWidthPersistTimer: ReturnType<typeof setTimeout> | null = null;
function schedulePanelWidthPersist(width: number) {
  if (!isClient) return;
  if (panelWidthPersistTimer) clearTimeout(panelWidthPersistTimer);
  panelWidthPersistTimer = setTimeout(() => {
    storage.set(Key.COPILOT_ARTIFACT_PANEL_WIDTH, String(width));
    panelWidthPersistTimer = null;
  }, 200);
}

function persistCompletedSessions(ids: Set<string>) {
  if (!isClient) return;
  try {
@@ -47,9 +117,19 @@ interface CopilotUIState {
  showNotificationDialog: boolean;
  setShowNotificationDialog: (show: boolean) => void;

  // Artifact panel
  artifactPanel: ArtifactPanelState;
  openArtifact: (ref: ArtifactRef) => void;
  closeArtifactPanel: () => void;
  minimizeArtifactPanel: () => void;
  maximizeArtifactPanel: () => void;
  restoreArtifactPanel: () => void;
  setArtifactPanelWidth: (width: number) => void;
  goBackArtifact: () => void;

  /** Autopilot mode: 'extended_thinking' (default) or 'fast'. */
  copilotMode: "extended_thinking" | "fast";
  setCopilotMode: (mode: "extended_thinking" | "fast") => void;
  copilotMode: CopilotMode;
  setCopilotMode: (mode: CopilotMode) => void;

  clearCopilotLocalData: () => void;
}
@@ -108,6 +188,89 @@ export const useCopilotUIStore = create<CopilotUIState>((set) => ({
  showNotificationDialog: false,
  setShowNotificationDialog: (show) => set({ showNotificationDialog: show }),

  // Artifact panel
  artifactPanel: {
    isOpen: false,
    isMinimized: false,
    isMaximized: false,
    width: getPersistedWidth(),
    activeArtifact: null,
    history: [],
  },
  openArtifact: (ref) =>
    set((state) => {
      const { activeArtifact, history: prevHistory } = state.artifactPanel;
      const topOfHistory = prevHistory[prevHistory.length - 1];
      const isReturningToTop = topOfHistory?.id === ref.id;
      const MAX_HISTORY = 25;
      const history = isReturningToTop
        ? prevHistory.slice(0, -1)
        : activeArtifact && activeArtifact.id !== ref.id
          ? [...prevHistory, activeArtifact].slice(-MAX_HISTORY)
          : prevHistory;
      return {
        artifactPanel: {
          ...state.artifactPanel,
          isOpen: true,
          isMinimized: false,
          activeArtifact: ref,
          history,
        },
      };
    }),
  closeArtifactPanel: () =>
    set((state) => ({
      artifactPanel: {
        ...state.artifactPanel,
        isOpen: false,
        isMinimized: false,
        history: [],
      },
    })),
  minimizeArtifactPanel: () =>
    set((state) => ({
      artifactPanel: { ...state.artifactPanel, isMinimized: true },
    })),
  maximizeArtifactPanel: () =>
    set((state) => ({
      artifactPanel: {
        ...state.artifactPanel,
        isMaximized: true,
        isMinimized: false,
      },
    })),
  restoreArtifactPanel: () =>
    set((state) => ({
      artifactPanel: {
        ...state.artifactPanel,
        isMaximized: false,
        isMinimized: false,
      },
    })),
  setArtifactPanelWidth: (width) => {
    schedulePanelWidthPersist(width);
    set((state) => ({
      artifactPanel: {
        ...state.artifactPanel,
        width,
        isMaximized: false,
      },
    }));
  },
  goBackArtifact: () =>
    set((state) => {
      const { history } = state.artifactPanel;
      if (history.length === 0) return state;
      const previous = history[history.length - 1];
      return {
        artifactPanel: {
          ...state.artifactPanel,
          activeArtifact: previous,
          history: history.slice(0, -1),
        },
      };
    }),

  copilotMode:
    isClient && storage.get(Key.COPILOT_MODE) === "fast"
      ? "fast"
@@ -118,16 +281,26 @@ export const useCopilotUIStore = create<CopilotUIState>((set) => ({
  },

  clearCopilotLocalData: () => {
    clearContentCache();
    storage.clean(Key.COPILOT_NOTIFICATIONS_ENABLED);
    storage.clean(Key.COPILOT_SOUND_ENABLED);
    storage.clean(Key.COPILOT_NOTIFICATION_BANNER_DISMISSED);
    storage.clean(Key.COPILOT_NOTIFICATION_DIALOG_DISMISSED);
    storage.clean(Key.COPILOT_ARTIFACT_PANEL_WIDTH);
    storage.clean(Key.COPILOT_MODE);
    storage.clean(Key.COPILOT_COMPLETED_SESSIONS);
    set({
      completedSessionIDs: new Set<string>(),
      isNotificationsEnabled: false,
      isSoundEnabled: true,
      artifactPanel: {
        isOpen: false,
        isMinimized: false,
        isMaximized: false,
        width: DEFAULT_PANEL_WIDTH,
        activeArtifact: null,
        history: [],
      },
      copilotMode: "extended_thinking",
    });
    if (isClient) {
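The `openArtifact` history rule above (push the previous artifact on navigation, pop instead of push on an A→B→A ping-pong, cap at 25 entries) can be sketched as a pure function. This is an illustrative reduction: `ArtifactRef` is shrunk to an id-only record, and `nextHistory` is a hypothetical helper, not part of the store's API.

```typescript
type Ref = { id: string };

const MAX_HISTORY = 25;

// Pure version of the openArtifact history rule: given the currently active
// artifact and the existing history, compute the history after opening `opened`.
function nextHistory(active: Ref | null, history: Ref[], opened: Ref): Ref[] {
  const top = history[history.length - 1];
  if (top?.id === opened.id) {
    // Returning to the previous artifact (A→B→A): pop instead of push.
    return history.slice(0, -1);
  }
  if (active && active.id !== opened.id) {
    // Navigating somewhere new: push the current artifact, capped at MAX_HISTORY.
    return [...history, active].slice(-MAX_HISTORY);
  }
  // Re-opening the same artifact, or nothing was open: unchanged.
  return history;
}

// A→B pushes A onto history; B→A pops it again instead of growing the stack.
const h1 = nextHistory({ id: "a" }, [], { id: "b" });
const h2 = nextHistory({ id: "b" }, h1, { id: "a" });
```

This keeps back-navigation bounded: ping-ponging between two artifacts never grows the history, which is exactly what the store test "openArtifact pops the top of history when returning to it" asserts.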
@@ -15,7 +15,7 @@ export function useChatSession() {
  const [sessionId, setSessionId] = useQueryState("sessionId", parseAsString);
  const queryClient = useQueryClient();

  const sessionQuery = useGetV2GetSession(sessionId ?? "", {
  const sessionQuery = useGetV2GetSession(sessionId ?? "", undefined, {
    query: {
      enabled: !!sessionId,
      staleTime: Infinity, // Manual invalidation on session switch
@@ -57,6 +57,17 @@ export function useChatSession() {
    return !!sessionQuery.data.data.active_stream;
  }, [sessionQuery.data, sessionQuery.isFetching, sessionId]);

  // Pagination metadata from the initial page load
  const hasMoreMessages = useMemo(() => {
    if (sessionQuery.data?.status !== 200) return false;
    return !!sessionQuery.data.data.has_more_messages;
  }, [sessionQuery.data]);

  const oldestSequence = useMemo(() => {
    if (sessionQuery.data?.status !== 200) return null;
    return sessionQuery.data.data.oldest_sequence ?? null;
  }, [sessionQuery.data]);

  // Memoize so the effect in useCopilotPage doesn't infinite-loop on a new
  // array reference every render. Re-derives only when query data changes.
  // When the session is complete (no active stream), mark dangling tool
@@ -127,12 +138,22 @@ export function useChatSession() {
    }
  }

  // Raw messages from the initial page — exposed for cross-page
  // tool output matching by useLoadMoreMessages.
  const rawSessionMessages =
    sessionQuery.data?.status === 200
      ? ((sessionQuery.data.data.messages ?? []) as unknown[])
      : [];

  return {
    sessionId,
    setSessionId,
    hydratedMessages,
    rawSessionMessages,
    historicalDurations,
    hasActiveStream,
    hasMoreMessages,
    oldestSequence,
    isLoadingSession: sessionQuery.isLoading,
    isSessionError: sessionQuery.isError,
    createSession,
@@ -12,10 +12,12 @@ import { useQueryClient } from "@tanstack/react-query";
import type { FileUIPart } from "ai";
import { Flag, useGetFlag } from "@/services/feature-flags/use-get-flag";
import { useEffect, useRef, useState } from "react";
import { concatWithAssistantMerge } from "./helpers/convertChatSessionToUiMessages";
import { useCopilotUIStore } from "./store";
import { useChatSession } from "./useChatSession";
import { useCopilotNotifications } from "./useCopilotNotifications";
import { useCopilotStream } from "./useCopilotStream";
import { useLoadMoreMessages } from "./useLoadMoreMessages";
import { useWorkflowImportAutoSubmit } from "./useWorkflowImportAutoSubmit";

const TITLE_POLL_INTERVAL_MS = 2_000;
@@ -47,8 +49,11 @@ export function useCopilotPage() {
    sessionId,
    setSessionId,
    hydratedMessages,
    rawSessionMessages,
    historicalDurations,
    hasActiveStream,
    hasMoreMessages,
    oldestSequence,
    isLoadingSession,
    isSessionError,
    createSession,
@@ -57,7 +62,7 @@ export function useCopilotPage() {
  } = useChatSession();

  const {
    messages,
    messages: currentMessages,
    sendMessage,
    stop,
    status,
@@ -75,6 +80,19 @@ export function useCopilotPage() {
    copilotMode: isModeToggleEnabled ? copilotMode : undefined,
  });

  const { olderMessages, hasMore, isLoadingMore, loadMore } =
    useLoadMoreMessages({
      sessionId,
      initialOldestSequence: oldestSequence,
      initialHasMore: hasMoreMessages,
      initialPageRawMessages: rawSessionMessages,
    });

  // Combine older (paginated) messages with current page messages,
  // merging consecutive assistant UIMessages at the page boundary so
  // reasoning + response parts stay in a single bubble.
  const messages = concatWithAssistantMerge(olderMessages, currentMessages);

  useCopilotNotifications(sessionId);

  // --- Delete session ---
@@ -371,6 +389,10 @@ export function useCopilotPage() {
    isLoggedIn,
    createSession,
    onSend,
    // Pagination
    hasMoreMessages: hasMore,
    isLoadingMore,
    loadMore,
    // Mobile drawer
    isMobile,
    isDrawerOpen,
@@ -18,6 +18,7 @@ import {
  resolveInProgressTools,
  getSendSuppressionReason,
} from "./helpers";
import type { CopilotMode } from "./store";

const RECONNECT_BASE_DELAY_MS = 1_000;
const RECONNECT_MAX_ATTEMPTS = 3;
@@ -41,7 +42,7 @@ interface UseCopilotStreamArgs {
  hasActiveStream: boolean;
  refetchSession: () => Promise<{ data?: unknown }>;
  /** Autopilot mode to use for requests. `undefined` = let backend decide via feature flags. */
  copilotMode: "extended_thinking" | "fast" | undefined;
  copilotMode: CopilotMode | undefined;
}

export function useCopilotStream({
@@ -0,0 +1,161 @@
import { getV2GetSession } from "@/app/api/__generated__/endpoints/chat/chat";
import type { UIDataTypes, UIMessage, UITools } from "ai";
import { useEffect, useMemo, useRef, useState } from "react";
import {
  convertChatSessionMessagesToUiMessages,
  extractToolOutputsFromRaw,
} from "./helpers/convertChatSessionToUiMessages";

interface UseLoadMoreMessagesArgs {
  sessionId: string | null;
  initialOldestSequence: number | null;
  initialHasMore: boolean;
  /** Raw messages from the initial page, used for cross-page tool output matching. */
  initialPageRawMessages: unknown[];
}

const MAX_CONSECUTIVE_ERRORS = 3;
const MAX_OLDER_MESSAGES = 2000;

export function useLoadMoreMessages({
  sessionId,
  initialOldestSequence,
  initialHasMore,
  initialPageRawMessages,
}: UseLoadMoreMessagesArgs) {
  // Store accumulated raw messages from all older pages (in ascending order).
  // Re-converting them all together ensures tool outputs are matched across
  // inter-page boundaries.
  const [olderRawMessages, setOlderRawMessages] = useState<unknown[]>([]);
  const [oldestSequence, setOldestSequence] = useState<number | null>(
    initialOldestSequence,
  );
  const [hasMore, setHasMore] = useState(initialHasMore);
  const [isLoadingMore, setIsLoadingMore] = useState(false);
  const isLoadingMoreRef = useRef(false);
  const consecutiveErrorsRef = useRef(0);
  // Epoch counter to discard stale loadMore responses after a reset
  const epochRef = useRef(0);

  // Track the sessionId and initial cursor to reset state on change
  const prevSessionIdRef = useRef(sessionId);
  const prevInitialOldestRef = useRef(initialOldestSequence);

  // Sync initial values from parent when they change
  useEffect(() => {
    if (prevSessionIdRef.current !== sessionId) {
      // Session changed — full reset
      prevSessionIdRef.current = sessionId;
      prevInitialOldestRef.current = initialOldestSequence;
      setOlderRawMessages([]);
      setOldestSequence(initialOldestSequence);
      setHasMore(initialHasMore);
      setIsLoadingMore(false);
      isLoadingMoreRef.current = false;
      consecutiveErrorsRef.current = 0;
      epochRef.current += 1;
    } else if (
      prevInitialOldestRef.current !== initialOldestSequence &&
      olderRawMessages.length > 0
    ) {
      // Same session but initial window shifted (e.g. new messages arrived) —
      // clear paged state to avoid gaps/duplicates
      prevInitialOldestRef.current = initialOldestSequence;
      setOlderRawMessages([]);
      setOldestSequence(initialOldestSequence);
      setHasMore(initialHasMore);
      setIsLoadingMore(false);
      isLoadingMoreRef.current = false;
      consecutiveErrorsRef.current = 0;
      epochRef.current += 1;
    } else {
      // Update from parent when initial data changes (e.g. refetch)
      prevInitialOldestRef.current = initialOldestSequence;
      setOldestSequence(initialOldestSequence);
      setHasMore(initialHasMore);
    }
  }, [sessionId, initialOldestSequence, initialHasMore]);

  // Convert all accumulated raw messages in one pass so tool outputs
  // are matched across inter-page boundaries. Initial page tool outputs
  // are included via extraToolOutputs to handle the boundary between
  // the last older page and the initial/streaming page.
  const olderMessages: UIMessage<unknown, UIDataTypes, UITools>[] =
    useMemo(() => {
      if (!sessionId || olderRawMessages.length === 0) return [];
      const extraToolOutputs =
        initialPageRawMessages.length > 0
          ? extractToolOutputsFromRaw(initialPageRawMessages)
          : undefined;
      return convertChatSessionMessagesToUiMessages(
        sessionId,
        olderRawMessages,
        { isComplete: true, extraToolOutputs },
      ).messages;
    }, [sessionId, olderRawMessages, initialPageRawMessages]);

  async function loadMore() {
    if (
      !sessionId ||
      !hasMore ||
      isLoadingMoreRef.current ||
      oldestSequence === null
    )
      return;

    const requestEpoch = epochRef.current;
    isLoadingMoreRef.current = true;
    setIsLoadingMore(true);
    try {
      const response = await getV2GetSession(sessionId, {
        limit: 50,
        before_sequence: oldestSequence,
      });

      // Discard response if session/pagination was reset while awaiting
      if (epochRef.current !== requestEpoch) return;

      if (response.status !== 200) {
        consecutiveErrorsRef.current += 1;
        console.warn(
          `[loadMore] Failed to load messages (status=${response.status}, attempt=${consecutiveErrorsRef.current})`,
        );
        if (consecutiveErrorsRef.current >= MAX_CONSECUTIVE_ERRORS) {
          setHasMore(false);
        }
        return;
      }

      consecutiveErrorsRef.current = 0;

      const newRaw = (response.data.messages ?? []) as unknown[];
      setOlderRawMessages((prev) => {
        const merged = [...newRaw, ...prev];
        if (merged.length > MAX_OLDER_MESSAGES) {
          return merged.slice(merged.length - MAX_OLDER_MESSAGES);
        }
        return merged;
      });
      setOldestSequence(response.data.oldest_sequence ?? null);
      if (newRaw.length + olderRawMessages.length >= MAX_OLDER_MESSAGES) {
        setHasMore(false);
      } else {
        setHasMore(!!response.data.has_more_messages);
      }
    } catch (error) {
      if (epochRef.current !== requestEpoch) return;
      consecutiveErrorsRef.current += 1;
      console.warn("[loadMore] Network error:", error);
      if (consecutiveErrorsRef.current >= MAX_CONSECUTIVE_ERRORS) {
        setHasMore(false);
      }
    } finally {
      if (epochRef.current === requestEpoch) {
        isLoadingMoreRef.current = false;
        setIsLoadingMore(false);
      }
    }
  }

  return { olderMessages, hasMore, isLoadingMore, loadMore };
}
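The epoch counter used in `useLoadMoreMessages` to discard stale responses is a general pattern for racing async work against resets. The framework-free sketch below is illustrative only; the names (`startRequest`, `resetPagination`) are invented for this example and are not part of the hook's API.

```typescript
// Minimal sketch of the epoch-guard pattern: a shared counter is captured
// when a request starts; any reset bumps the counter, and the response
// handler drops its result if the captured value no longer matches.
let epoch = 0;

function startRequest(): { apply: () => boolean } {
  const requestEpoch = epoch; // captured at request time
  return {
    // Returns true if the (simulated) response would be applied, false if stale.
    apply: () => epoch === requestEpoch,
  };
}

function resetPagination(): void {
  epoch += 1; // invalidates every in-flight request
}

const req1 = startRequest();
const appliedBeforeReset = req1.apply(); // no reset happened, so it applies

const req2 = startRequest();
resetPagination(); // e.g. the session changed while the request was in flight
const appliedAfterReset = req2.apply(); // stale, so it is dropped
```

Compared to an `AbortController`, this guard also covers the window after the response has arrived but before state is committed, which is why the hook checks the epoch in its `finally` block too.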
@@ -0,0 +1,223 @@
import { describe, expect, test } from "vitest";
import { render, screen } from "@/tests/integrations/test-utils";
import { server } from "@/mocks/mock-server";
import {
  getGetV2ListLibraryAgentsMockHandler,
  getGetV2ListLibraryAgentsResponseMock,
  getGetV2ListFavoriteLibraryAgentsMockHandler,
  getGetV2ListFavoriteLibraryAgentsResponseMock,
} from "@/app/api/__generated__/endpoints/library/library.msw";
import {
  getGetV2ListLibraryFoldersMockHandler,
  getGetV2ListLibraryFoldersResponseMock,
} from "@/app/api/__generated__/endpoints/folders/folders.msw";
import { getGetV1ListAllExecutionsMockHandler } from "@/app/api/__generated__/endpoints/graphs/graphs.msw";
import { LibraryAgent } from "@/app/api/__generated__/models/libraryAgent";
import LibraryPage from "../page";

function makeAgent(overrides: Partial<LibraryAgent> = {}): LibraryAgent {
  const base = getGetV2ListLibraryAgentsResponseMock().agents[0];
  return { ...base, ...overrides };
}

function setupHandlers({
  agents,
  favorites,
  folders,
  executions,
}: {
  agents?: LibraryAgent[];
  favorites?: LibraryAgent[];
  folders?: Parameters<typeof getGetV2ListLibraryFoldersResponseMock>[0];
  executions?: Parameters<typeof getGetV1ListAllExecutionsMockHandler>[0];
} = {}) {
  const agentList = agents ?? [makeAgent()];
  const favList = favorites ?? [];

  server.use(
    getGetV2ListLibraryAgentsMockHandler({
      ...getGetV2ListLibraryAgentsResponseMock(),
      agents: agentList,
      pagination: {
        total_items: agentList.length,
        total_pages: 1,
        current_page: 1,
        page_size: 20,
      },
    }),
    getGetV2ListFavoriteLibraryAgentsMockHandler({
      ...getGetV2ListFavoriteLibraryAgentsResponseMock(),
      agents: favList,
      pagination: {
        total_items: favList.length,
        total_pages: 1,
        current_page: 1,
        page_size: 10,
      },
    }),
    getGetV2ListLibraryFoldersMockHandler(
      folders
        ? getGetV2ListLibraryFoldersResponseMock(folders)
        : {
            folders: [],
            pagination: {
              total_items: 0,
              total_pages: 1,
              current_page: 1,
              page_size: 20,
            },
          },
    ),
    getGetV1ListAllExecutionsMockHandler(executions ?? []),
  );
}

function waitForAgentsToLoad() {
  return screen.findAllByTestId("library-agent-card-name");
}

describe("LibraryPage", () => {
  test("renders agent cards from API", async () => {
    setupHandlers({ agents: [makeAgent({ name: "Weather Bot" })] });

    render(<LibraryPage />);

    expect(await screen.findByText("Weather Bot")).toBeDefined();
  });

  test("renders multiple agent cards with correct names", async () => {
    setupHandlers({
      agents: [
        makeAgent({ id: "a1", name: "Agent Alpha" }),
        makeAgent({ id: "a2", name: "Agent Beta" }),
        makeAgent({ id: "a3", name: "Agent Gamma" }),
      ],
    });

    render(<LibraryPage />);

    expect(await screen.findByText("Agent Alpha")).toBeDefined();
    expect(screen.getByText("Agent Beta")).toBeDefined();
    expect(screen.getByText("Agent Gamma")).toBeDefined();
  });

  test("renders All and Favorites tabs", async () => {
    setupHandlers();

    render(<LibraryPage />);

    await waitForAgentsToLoad();

    const tabs = screen.getAllByRole("tab");
    const tabNames = tabs.map((t) => t.textContent);
    expect(tabNames.some((n) => n?.match(/all/i))).toBe(true);
    expect(tabNames.some((n) => n?.match(/favorites/i))).toBe(true);
  });

  test("favorites tab is disabled when no favorites exist", async () => {
    setupHandlers();

    render(<LibraryPage />);

    await waitForAgentsToLoad();

    const favoritesTab = screen
      .getAllByRole("tab")
      .find((t) => t.textContent?.match(/favorites/i));
    expect(favoritesTab).toBeDefined();
    expect(favoritesTab!.hasAttribute("data-disabled")).toBe(true);
  });

  test("renders folders alongside agents", async () => {
    setupHandlers({
      folders: {
        folders: [
          {
            id: "f1",
            user_id: "test-user",
            name: "Work Agents",
            agent_count: 3,
            color: null,
            icon: null,
            created_at: new Date(),
            updated_at: new Date(),
          },
          {
            id: "f2",
            user_id: "test-user",
            name: "Personal",
            agent_count: 1,
            color: null,
            icon: null,
            created_at: new Date(),
            updated_at: new Date(),
          },
        ],
      },
    });

    render(<LibraryPage />);

    expect(await screen.findByText("Work Agents")).toBeDefined();
    expect(screen.getByText("Personal")).toBeDefined();
    expect(screen.getAllByTestId("library-folder")).toHaveLength(2);
  });

  test("shows See runs link on agent card", async () => {
    setupHandlers({
      agents: [makeAgent({ name: "Linked Agent", can_access_graph: true })],
    });

    render(<LibraryPage />);

    await screen.findByText("Linked Agent");

    const runLinks = screen.getAllByText("See runs");
    expect(runLinks.length).toBeGreaterThan(0);
  });

  test("renders search bar and import button", async () => {
    setupHandlers();

    render(<LibraryPage />);

    await waitForAgentsToLoad();

    const searchBars = screen.getAllByTestId("library-textbox");
    expect(searchBars.length).toBeGreaterThan(0);

    const importButtons = screen.getAllByTestId("import-button");
    expect(importButtons.length).toBeGreaterThan(0);
  });

  test("renders Jump Back In when there is an active execution", async () => {
    const agent = makeAgent({
      id: "lib-1",
      graph_id: "g-1",
      name: "Running Agent",
    });
    setupHandlers({
      agents: [agent],
      executions: [
        {
          id: "exec-1",
          user_id: "test-user",
          graph_id: "g-1",
          graph_version: 1,
          inputs: {},
          credential_inputs: {},
          nodes_input_masks: {},
          preset_id: null,
          status: "RUNNING",
          started_at: new Date(Date.now() - 60_000),
          ended_at: null,
          stats: null,
        },
      ],
    });

    render(<LibraryPage />);

    expect(await screen.findByText("Jump Back In")).toBeDefined();
  });
});
@@ -2,7 +2,6 @@ import { GraphExecution } from "@/app/api/__generated__/models/graphExecution";
 import { LibraryAgent } from "@/app/api/__generated__/models/libraryAgent";
 import { Button } from "@/components/atoms/Button/Button";
 import { LoadingSpinner } from "@/components/atoms/LoadingSpinner/LoadingSpinner";
-import { Flag, useGetFlag } from "@/services/feature-flags/use-get-flag";
 import {
   ArrowBendLeftUpIcon,
   ArrowBendRightDownIcon,
@@ -47,7 +46,6 @@ export function SelectedRunActions({
     onSelectRun: onSelectRun,
   });
 
-  const shareExecutionResultsEnabled = useGetFlag(Flag.SHARE_EXECUTION_RESULTS);
   const isRunning = run?.status === "RUNNING";
 
   if (!run || !agent) return null;
@@ -104,14 +102,12 @@ export function SelectedRunActions({
           <EyeIcon weight="bold" size={18} className="text-zinc-700" />
         </Button>
       ) : null}
-      {shareExecutionResultsEnabled && (
-        <ShareRunButton
-          graphId={agent.graph_id}
-          executionId={run.id}
-          isShared={run.is_shared}
-          shareToken={run.share_token}
-        />
-      )}
+      <ShareRunButton
+        graphId={agent.graph_id}
+        executionId={run.id}
+        isShared={run.is_shared}
+        shareToken={run.share_token}
+      />
       {canRunManually && (
         <>
           <Button

@@ -1134,7 +1134,7 @@
     "get": {
       "tags": ["v2", "chat", "chat"],
       "summary": "Get Session",
-      "description": "Retrieve the details of a specific chat session.\n\nLooks up a chat session by ID for the given user (if authenticated) and returns all session data including messages.\nIf there's an active stream for this session, returns active_stream info for reconnection.\n\nArgs:\n    session_id: The unique identifier for the desired chat session.\n    user_id: The optional authenticated user ID, or None for anonymous access.\n\nReturns:\n    SessionDetailResponse: Details for the requested session, including active_stream info if applicable.",
+      "description": "Retrieve the details of a specific chat session.\n\nSupports cursor-based pagination via ``limit`` and ``before_sequence``.\nWhen no pagination params are provided, returns the most recent messages.\n\nArgs:\n    session_id: The unique identifier for the desired chat session.\n    user_id: The authenticated user's ID.\n    limit: Maximum number of messages to return (1-200, default 50).\n    before_sequence: Return messages with sequence < this value (cursor).\n\nReturns:\n    SessionDetailResponse: Details for the requested session, including\n    active_stream info and pagination metadata.",
       "operationId": "getV2GetSession",
       "security": [{ "HTTPBearerJWT": [] }],
       "parameters": [
@@ -1143,6 +1143,30 @@
           "in": "path",
           "required": true,
           "schema": { "type": "string", "title": "Session Id" }
         },
+        {
+          "name": "limit",
+          "in": "query",
+          "required": false,
+          "schema": {
+            "type": "integer",
+            "maximum": 200,
+            "minimum": 1,
+            "default": 50,
+            "title": "Limit"
+          }
+        },
+        {
+          "name": "before_sequence",
+          "in": "query",
+          "required": false,
+          "schema": {
+            "anyOf": [
+              { "type": "integer", "minimum": 0 },
+              { "type": "null" }
+            ],
+            "title": "Before Sequence"
+          }
+        }
       ],
       "responses": {
@@ -7041,12 +7065,76 @@
         }
       }
     },
+    "/api/workspace/files": {
+      "get": {
+        "tags": ["workspace"],
+        "summary": "List workspace files",
+        "description": "List files in the user's workspace.\n\nWhen session_id is provided, only files for that session are returned.\nOtherwise, all files across sessions are listed. Results are paginated\nvia `limit`/`offset`; `has_more` indicates whether additional pages exist.",
+        "operationId": "listWorkspaceFiles",
+        "security": [{ "HTTPBearerJWT": [] }],
+        "parameters": [
+          {
+            "name": "session_id",
+            "in": "query",
+            "required": false,
+            "schema": {
+              "anyOf": [{ "type": "string" }, { "type": "null" }],
+              "title": "Session Id"
+            }
+          },
+          {
+            "name": "limit",
+            "in": "query",
+            "required": false,
+            "schema": {
+              "type": "integer",
+              "maximum": 1000,
+              "minimum": 1,
+              "default": 200,
+              "title": "Limit"
+            }
+          },
+          {
+            "name": "offset",
+            "in": "query",
+            "required": false,
+            "schema": {
+              "type": "integer",
+              "minimum": 0,
+              "default": 0,
+              "title": "Offset"
+            }
+          }
+        ],
+        "responses": {
+          "200": {
+            "description": "Successful Response",
+            "content": {
+              "application/json": {
+                "schema": { "$ref": "#/components/schemas/ListFilesResponse" }
+              }
+            }
+          },
+          "401": {
+            "$ref": "#/components/responses/HTTP401NotAuthenticatedError"
+          },
+          "422": {
+            "description": "Validation Error",
+            "content": {
+              "application/json": {
+                "schema": { "$ref": "#/components/schemas/HTTPValidationError" }
+              }
+            }
+          }
+        }
+      }
+    },
     "/api/workspace/files/upload": {
       "post": {
         "tags": ["workspace"],
         "summary": "Upload file to workspace",
         "description": "Upload a file to the user's workspace.\n\nFiles are stored in session-scoped paths when session_id is provided,\nso the agent's session-scoped tools can discover them automatically.",
-        "operationId": "postWorkspaceUpload file to workspace",
+        "operationId": "uploadWorkspaceFile",
         "security": [{ "HTTPBearerJWT": [] }],
         "parameters": [
           {
@@ -7074,7 +7162,7 @@
         "content": {
           "multipart/form-data": {
             "schema": {
-              "$ref": "#/components/schemas/Body_postWorkspaceUpload_file_to_workspace"
+              "$ref": "#/components/schemas/Body_uploadWorkspaceFile"
             }
           }
         }
@@ -7109,7 +7197,7 @@
         "tags": ["workspace"],
         "summary": "Delete a workspace file",
         "description": "Soft-delete a workspace file and attempt to remove it from storage.\n\nUsed when a user clears a file input in the builder.",
-        "operationId": "deleteWorkspaceDelete a workspace file",
+        "operationId": "deleteWorkspaceFile",
         "security": [{ "HTTPBearerJWT": [] }],
         "parameters": [
           {
@@ -7147,7 +7235,7 @@
         "tags": ["workspace"],
         "summary": "Download file by ID",
         "description": "Download a file by its ID.\n\nReturns the file content directly or redirects to a signed URL for GCS.",
-        "operationId": "getWorkspaceDownload file by id",
+        "operationId": "getWorkspaceDownloadFileById",
        "security": [{ "HTTPBearerJWT": [] }],
        "parameters": [
          {
@@ -7181,7 +7269,7 @@
         "tags": ["workspace"],
         "summary": "Get workspace storage usage",
         "description": "Get storage usage information for the user's workspace.",
-        "operationId": "getWorkspaceGet workspace storage usage",
+        "operationId": "getWorkspaceStorageUsage",
         "responses": {
           "200": {
             "description": "Successful Response",
@@ -8499,13 +8587,13 @@
         "required": ["file"],
         "title": "Body_postV2Upload submission media"
       },
-      "Body_postWorkspaceUpload_file_to_workspace": {
+      "Body_uploadWorkspaceFile": {
         "properties": {
           "file": { "type": "string", "format": "binary", "title": "File" }
         },
         "type": "object",
         "required": ["file"],
-        "title": "Body_postWorkspaceUpload file to workspace"
+        "title": "Body_uploadWorkspaceFile"
       },
       "BulkMoveAgentsRequest": {
         "properties": {
@@ -10692,6 +10780,24 @@
         "required": ["source_id", "sink_id", "source_name", "sink_name"],
         "title": "Link"
       },
+      "ListFilesResponse": {
+        "properties": {
+          "files": {
+            "items": { "$ref": "#/components/schemas/WorkspaceFileItem" },
+            "type": "array",
+            "title": "Files"
+          },
+          "offset": { "type": "integer", "title": "Offset", "default": 0 },
+          "has_more": {
+            "type": "boolean",
+            "title": "Has More",
+            "default": false
+          }
+        },
+        "type": "object",
+        "required": ["files"],
+        "title": "ListFilesResponse"
+      },
       "ListSessionsResponse": {
         "properties": {
           "sessions": {
@@ -12362,6 +12468,15 @@
           { "type": "null" }
         ]
       },
+      "has_more_messages": {
+        "type": "boolean",
+        "title": "Has More Messages",
+        "default": false
+      },
+      "oldest_sequence": {
+        "anyOf": [{ "type": "integer" }, { "type": "null" }],
+        "title": "Oldest Sequence"
+      },
       "total_prompt_tokens": {
         "type": "integer",
         "title": "Total Prompt Tokens",
@@ -15219,6 +15334,31 @@
       ],
       "title": "Webhook"
     },
+    "WorkspaceFileItem": {
+      "properties": {
+        "id": { "type": "string", "title": "Id" },
+        "name": { "type": "string", "title": "Name" },
+        "path": { "type": "string", "title": "Path" },
+        "mime_type": { "type": "string", "title": "Mime Type" },
+        "size_bytes": { "type": "integer", "title": "Size Bytes" },
+        "metadata": {
+          "additionalProperties": true,
+          "type": "object",
+          "title": "Metadata"
+        },
+        "created_at": { "type": "string", "title": "Created At" }
+      },
+      "type": "object",
+      "required": [
+        "id",
+        "name",
+        "path",
+        "mime_type",
+        "size_bytes",
+        "created_at"
+      ],
+      "title": "WorkspaceFileItem"
+    },
     "backend__api__features__workspace__routes__UploadFileResponse": {
       "properties": {
         "file_id": { "type": "string", "title": "File Id" },

@@ -1,7 +1,6 @@
 "use client";
 
 import { Button } from "@/components/ui/button";
-import { scrollbarStyles } from "@/components/styles/scrollbars";
 import { cn } from "@/lib/utils";
 import { ArrowDownIcon } from "lucide-react";
 import type { ComponentProps } from "react";
@@ -12,12 +11,8 @@ export type ConversationProps = ComponentProps<typeof StickToBottom>;
 
 export const Conversation = ({ className, ...props }: ConversationProps) => (
   <StickToBottom
-    className={cn(
-      "relative flex-1 overflow-y-hidden",
-      scrollbarStyles,
-      className,
-    )}
-    initial="smooth"
+    className={cn("relative flex-1 overflow-y-hidden", className)}
+    initial="instant"
     resize="smooth"
     role="log"
     {...props}
@@ -30,10 +25,15 @@ export type ConversationContentProps = ComponentProps<
 
 export const ConversationContent = ({
   className,
+  scrollClassName,
   ...props
 }: ConversationContentProps) => (
   <StickToBottom.Content
     className={cn("flex flex-col gap-8 p-4", className)}
+    scrollClassName={cn(
+      "scrollbar-thin scrollbar-track-transparent scrollbar-thumb-zinc-300",
+      scrollClassName,
+    )}
     {...props}
   />
 );
@@ -78,7 +78,7 @@ export function Input({
     "font-normal text-black",
     "placeholder:font-normal placeholder:text-zinc-400",
     // Focus and hover states
-    "focus:border-zinc-400 focus:shadow-none focus:outline-none focus:ring-1 focus:ring-zinc-400 focus:ring-offset-0",
+    "focus:border-purple-400 focus:shadow-none focus:outline-none focus:ring-1 focus:ring-purple-400 focus:ring-offset-0",
     className,
   );

@@ -1,6 +1,8 @@
 import { globalRegistry } from "./types";
 import { textRenderer } from "./renderers/TextRenderer";
 import { codeRenderer } from "./renderers/CodeRenderer";
+import { csvRenderer } from "./renderers/CSVRenderer";
+import { htmlRenderer } from "./renderers/HTMLRenderer";
 import { imageRenderer } from "./renderers/ImageRenderer";
 import { videoRenderer } from "./renderers/VideoRenderer";
 import { audioRenderer } from "./renderers/AudioRenderer";
@@ -13,7 +15,9 @@ import { linkRenderer } from "./renderers/LinkRenderer";
 globalRegistry.register(workspaceFileRenderer);
 globalRegistry.register(videoRenderer);
 globalRegistry.register(audioRenderer);
+globalRegistry.register(htmlRenderer);
 globalRegistry.register(imageRenderer);
+globalRegistry.register(csvRenderer);
 globalRegistry.register(codeRenderer);
 globalRegistry.register(markdownRenderer);
 globalRegistry.register(jsonRenderer);

@@ -0,0 +1,67 @@
+import { describe, expect, it } from "vitest";
+import { csvRenderer } from "./CSVRenderer";
+
+function downloadText(value: string, filename = "t.csv"): string {
+  const dl = csvRenderer.getDownloadContent?.(value, { filename });
+  if (!dl) throw new Error("no download content");
+  return dl.filename;
+}
+
+describe("csvRenderer.canRender", () => {
+  it("matches CSV mime type", () => {
+    expect(csvRenderer.canRender("a,b\n1,2", { mimeType: "text/csv" })).toBe(
+      true,
+    );
+  });
+  it("matches .csv filename case-insensitively", () => {
+    expect(csvRenderer.canRender("a,b", { filename: "data.CSV" })).toBe(true);
+  });
+  it("rejects non-string values", () => {
+    expect(csvRenderer.canRender(42, { mimeType: "text/csv" })).toBe(false);
+  });
+  it("rejects strings without CSV hint", () => {
+    expect(csvRenderer.canRender("a,b,c", {})).toBe(false);
+  });
+});
+
+describe("csvRenderer.getDownloadContent", () => {
+  it("uses filename from metadata", () => {
+    expect(downloadText("a,b\n1,2", "my.csv")).toBe("my.csv");
+  });
+  it("falls back to data.csv", () => {
+    const dl = csvRenderer.getDownloadContent?.("a,b\n1,2");
+    expect(dl?.filename).toBe("data.csv");
+  });
+});
+
+describe("csvRenderer.getCopyContent", () => {
+  it("round-trips content as plain text", () => {
+    const result = csvRenderer.getCopyContent?.("x,y\n1,2");
+    expect(result?.mimeType).toBe("text/plain");
+    expect(result?.data).toBe("x,y\n1,2");
+  });
+});
+
+describe("csvRenderer.render (parse via render output smoke)", () => {
+  // The parser itself isn't exported, so we exercise it through render.
+  // These tests ensure render() doesn't throw on edge-case CSVs.
+  it("handles empty input", () => {
+    expect(() => csvRenderer.render("")).not.toThrow();
+  });
+  it("handles embedded newline inside quoted field", () => {
+    const csv = 'name,bio\n"Alice","line1\nline2"\n"Bob","x"';
+    expect(() => csvRenderer.render(csv)).not.toThrow();
+  });
+  it("strips BOM from first header cell (smoke)", () => {
+    const csv = "\ufefftitle,count\nfoo,1";
+    expect(() => csvRenderer.render(csv)).not.toThrow();
+  });
+  it("handles CRLF line endings", () => {
+    const csv = "a,b\r\n1,2\r\n3,4";
+    expect(() => csvRenderer.render(csv)).not.toThrow();
+  });
+  it("handles escaped double quote inside a quoted field", () => {
+    const csv = 'name\n"She said ""hi"""';
+    expect(() => csvRenderer.render(csv)).not.toThrow();
+  });
+});

@@ -0,0 +1,177 @@
+import React, { useMemo, useState } from "react";
+import {
+  OutputRenderer,
+  OutputMetadata,
+  DownloadContent,
+  CopyContent,
+} from "../types";
+
+function parseCSV(text: string): { headers: string[]; rows: string[][] } {
+  const normalized = text
+    .replace(/\r\n?/g, "\n")
+    .replace(/^\ufeff/, "")
+    .trim();
+  if (normalized.length === 0) return { headers: [], rows: [] };
+
+  // Character-by-character parse so embedded newlines inside "quoted" cells
+  // (allowed by RFC 4180) don't break the row split.
+  const rows: string[][] = [];
+  let current = "";
+  let row: string[] = [];
+  let inQuotes = false;
+  for (let i = 0; i < normalized.length; i++) {
+    const ch = normalized[i];
+    if (inQuotes) {
+      if (ch === '"' && normalized[i + 1] === '"') {
+        current += '"';
+        i++;
+      } else if (ch === '"') {
+        inQuotes = false;
+      } else {
+        current += ch;
+      }
+    } else if (ch === '"') {
+      inQuotes = true;
+    } else if (ch === ",") {
+      row.push(current);
+      current = "";
+    } else if (ch === "\n") {
+      row.push(current);
+      rows.push(row);
+      row = [];
+      current = "";
+    } else {
+      current += ch;
+    }
+  }
+  row.push(current);
+  rows.push(row);
+
+  const headers = rows[0] ?? [];
+  return { headers, rows: rows.slice(1) };
+}
+
+function CSVTable({ value }: { value: string }) {
+  const { headers, rows } = useMemo(() => parseCSV(value), [value]);
+  const [sortCol, setSortCol] = useState<number | null>(null);
+  const [sortAsc, setSortAsc] = useState(true);
+
+  const sortedRows = useMemo(() => {
+    if (sortCol === null) return rows;
+    return [...rows].sort((a, b) => {
+      const aVal = a[sortCol] ?? "";
+      const bVal = b[sortCol] ?? "";
+      const aNum = parseFloat(aVal);
+      const bNum = parseFloat(bVal);
+      if (!isNaN(aNum) && !isNaN(bNum)) {
+        return sortAsc ? aNum - bNum : bNum - aNum;
+      }
+      return sortAsc ? aVal.localeCompare(bVal) : bVal.localeCompare(aVal);
+    });
+  }, [rows, sortCol, sortAsc]);
+
+  function handleSort(col: number) {
+    if (sortCol === col) {
+      setSortAsc(!sortAsc);
+    } else {
+      setSortCol(col);
+      setSortAsc(true);
+    }
+  }
+
+  if (headers.length === 0) {
+    return <p className="p-4 text-sm text-zinc-500">Empty CSV</p>;
+  }
+
+  return (
+    <div className="overflow-x-auto">
+      <table className="w-full border-collapse text-sm">
+        <thead>
+          <tr className="border-b border-zinc-200 bg-zinc-50">
+            {headers.map((header, i) => (
+              <th
+                key={i}
+                className="px-3 py-2 text-left font-medium text-zinc-700"
+              >
+                <button
+                  type="button"
+                  className="flex w-full cursor-pointer select-none items-center gap-1 hover:bg-zinc-100"
+                  onClick={() => handleSort(i)}
+                >
+                  {header}
+                  {sortCol === i && (
+                    <span className="text-xs">
+                      {sortAsc ? "\u25B2" : "\u25BC"}
+                    </span>
+                  )}
+                </button>
+              </th>
+            ))}
+          </tr>
+        </thead>
+        <tbody>
+          {sortedRows.map((row, rowIdx) => (
+            <tr
+              key={rowIdx}
+              className="border-b border-zinc-100 even:bg-zinc-50/50"
+              style={{
+                contentVisibility: "auto",
+                containIntrinsicSize: "0 36px",
+              }}
+            >
+              {row.map((cell, cellIdx) => (
+                <td key={cellIdx} className="px-3 py-1.5 text-zinc-600">
+                  {cell}
+                </td>
+              ))}
+            </tr>
+          ))}
+        </tbody>
+      </table>
+    </div>
+  );
+}
+
+function canRenderCSV(value: unknown, metadata?: OutputMetadata): boolean {
+  if (typeof value !== "string") return false;
+  if (metadata?.mimeType === "text/csv") return true;
+  if (metadata?.filename?.toLowerCase().endsWith(".csv")) return true;
+  return false;
+}
+
+function renderCSV(
+  value: unknown,
+  _metadata?: OutputMetadata,
+): React.ReactNode {
+  return <CSVTable value={String(value)} />;
+}
+
+function getCopyContentCSV(
+  value: unknown,
+  _metadata?: OutputMetadata,
+): CopyContent | null {
+  const text = String(value);
+  return { mimeType: "text/plain", data: text, fallbackText: text };
+}
+
+function getDownloadContentCSV(
+  value: unknown,
+  metadata?: OutputMetadata,
+): DownloadContent | null {
+  const text = String(value);
+  return {
+    data: new Blob([text], { type: "text/csv" }),
+    filename: metadata?.filename || "data.csv",
+    mimeType: "text/csv",
+  };
+}
+
+export const csvRenderer: OutputRenderer = {
+  name: "CSVRenderer",
+  priority: 38,
+  canRender: canRenderCSV,
+  render: renderCSV,
+  getCopyContent: getCopyContentCSV,
+  getDownloadContent: getDownloadContentCSV,
+  isConcatenable: () => false,
+};

@@ -1,4 +1,13 @@
-import React from "react";
+"use client";
+
+import React, { useEffect, useState } from "react";
+import {
+  SHIKI_THEMES,
+  type BundledLanguage,
+  getShikiHighlighter,
+  isLanguageSupported,
+  resolveLanguage,
+} from "@/lib/shiki-highlighter";
 import {
   OutputRenderer,
   OutputMetadata,
@@ -6,6 +15,18 @@ import {
   CopyContent,
 } from "../types";
 
+interface HighlightToken {
+  content: string;
+  color?: string;
+  htmlStyle?: Record<string, string>;
+}
+
+interface HighlightedCodeState {
+  tokens: HighlightToken[][];
+  fg?: string;
+  bg?: string;
+}
+
 function getFileExtension(language: string): string {
   const extensionMap: Record<string, string> = {
     javascript: "js",
@@ -68,24 +89,153 @@ function canRenderCode(value: unknown, metadata?: OutputMetadata): boolean {
   return codeIndicators.some((pattern) => pattern.test(value));
 }
 
+function EditorLineNumber({ index }: { index: number }) {
+  return (
+    <span className="select-none pr-2 text-right font-mono text-xs text-zinc-600">
+      {index + 1}
+    </span>
+  );
+}
+
+function PlainCodeLines({ code }: { code: string }) {
+  return code.split("\n").map((line, index) => (
+    <div key={`${index}-${line}`} className="grid grid-cols-[3rem_1fr] gap-4">
+      <EditorLineNumber index={index} />
+      <span className="whitespace-pre font-mono text-sm text-zinc-100">
+        {line || " "}
+      </span>
+    </div>
+  ));
+}
+
+function HighlightedCodeBlock({
+  code,
+  filename,
+  language,
+}: {
+  code: string;
+  filename?: string;
+  language?: string;
+}) {
+  const [highlighted, setHighlighted] = useState<HighlightedCodeState | null>(
+    null,
+  );
+  const resolvedLanguage = resolveLanguage(language || "text");
+  const supportedLanguage = isLanguageSupported(resolvedLanguage)
+    ? resolvedLanguage
+    : "text";
+
+  useEffect(() => {
+    let cancelled = false;
+    const shikiLanguage = supportedLanguage as BundledLanguage;
+
+    setHighlighted(null);
+
+    getShikiHighlighter()
+      .then(async (highlighter) => {
+        if (
+          supportedLanguage !== "text" &&
+          !highlighter.getLoadedLanguages().includes(supportedLanguage)
+        ) {
+          await highlighter.loadLanguage(shikiLanguage);
+        }
+
+        const shikiResult = highlighter.codeToTokens(code, {
+          lang: shikiLanguage,
+          theme: SHIKI_THEMES[1],
+        });
+
+        if (cancelled) return;
+
+        setHighlighted({
+          tokens: shikiResult.tokens.map((line) =>
+            line.map((token) => ({
+              content: token.content,
+              color: token.color,
+              htmlStyle: token.htmlStyle,
+            })),
+          ),
+          fg: shikiResult.fg,
+          bg: shikiResult.bg,
+        });
+      })
+      .catch(() => {
+        if (cancelled) return;
+        setHighlighted(null);
+      });
+
+    return () => {
+      cancelled = true;
+    };
+  }, [code, supportedLanguage]);
+
+  return (
+    <div className="overflow-hidden rounded-lg border border-zinc-900 bg-[#020617] shadow-sm">
+      <div className="flex items-center justify-between border-b border-zinc-800 bg-[#111827] px-3 py-2">
+        <span className="truncate font-mono text-xs text-zinc-400">
+          {filename || "code"}
+        </span>
+        <span className="rounded bg-zinc-800 px-2 py-0.5 font-mono text-[11px] uppercase tracking-wide text-zinc-300">
+          {supportedLanguage}
+        </span>
+      </div>
+      <div
+        className="overflow-x-auto"
+        style={{
+          backgroundColor: highlighted?.bg || "#020617",
+          color: highlighted?.fg || "#e2e8f0",
+        }}
+      >
+        <pre className="min-w-full p-4">
+          {highlighted ? (
+            highlighted.tokens.map((line, index) => (
+              <div
+                key={`${index}-${line.length}`}
+                className="grid grid-cols-[3rem_1fr] gap-4"
+              >
+                <EditorLineNumber index={index} />
+                <span className="whitespace-pre font-mono text-sm leading-6">
+                  {line.length > 0
+                    ? line.map((token, tokenIndex) => (
+                        <span
+                          key={`${index}-${tokenIndex}-${token.content}`}
+                          style={
+                            token.htmlStyle
+                              ? (token.htmlStyle as React.CSSProperties)
+                              : token.color
+                                ? { color: token.color }
+                                : undefined
+                          }
+                        >
+                          {token.content}
+                        </span>
+                      ))
+                    : " "}
+                </span>
+              </div>
+            ))
+          ) : (
+            <PlainCodeLines code={code} />
+          )}
+        </pre>
+      </div>
+    </div>
+  );
+}
+
 function renderCode(
   value: unknown,
   metadata?: OutputMetadata,
 ): React.ReactNode {
   const codeValue = String(value);
-  const language = metadata?.language || "plaintext";
+  const language = metadata?.language || "text";
 
   return (
-    <div className="group relative">
-      {metadata?.language && (
-        <div className="absolute right-2 top-2 rounded bg-background/80 px-2 py-1 text-xs text-muted-foreground">
-          {language}
-        </div>
-      )}
-      <pre className="overflow-x-auto rounded-md bg-muted p-3">
-        <code className="font-mono text-sm">{codeValue}</code>
-      </pre>
-    </div>
+    <HighlightedCodeBlock
+      code={codeValue}
+      filename={metadata?.filename}
+      language={language}
+    />
   );
 }

@@ -0,0 +1,75 @@
+import React from "react";
+import {
+  TAILWIND_CDN_URL,
+  wrapWithHeadInjection,
+} from "@/lib/iframe-sandbox-csp";
+import {
+  OutputRenderer,
+  OutputMetadata,
+  DownloadContent,
+  CopyContent,
+} from "../types";
+
+function HTMLPreview({ value }: { value: string }) {
+  // Inject Tailwind CDN — no CSP (see iframe-sandbox-csp.ts for why)
+  const tailwindScript = `<script src="${TAILWIND_CDN_URL}"></script>`;
+  const srcDoc = wrapWithHeadInjection(value, tailwindScript);
+  return (
+    <iframe
+      sandbox="allow-scripts"
+      srcDoc={srcDoc}
+      className="h-96 w-full rounded border border-zinc-200"
+      title="HTML preview"
+    />
+  );
+}
+
+function canRenderHTML(value: unknown, metadata?: OutputMetadata): boolean {
+  if (typeof value !== "string") return false;
+  if (metadata?.mimeType === "text/html") return true;
+  const filename = metadata?.filename?.toLowerCase();
+  if (filename?.endsWith(".html") || filename?.endsWith(".htm")) return true;
+  return false;
+}
+
+function renderHTML(
+  value: unknown,
+  _metadata?: OutputMetadata,
+): React.ReactNode {
+  return <HTMLPreview value={String(value)} />;
+}
+
+function getCopyContentHTML(
+  value: unknown,
+  _metadata?: OutputMetadata,
+): CopyContent | null {
+  const text = String(value);
+  return {
+    mimeType: "text/html",
+    data: text,
+    fallbackText: text,
+    alternativeMimeTypes: ["text/plain"],
+  };
+}
+
+function getDownloadContentHTML(
+  value: unknown,
+  metadata?: OutputMetadata,
+): DownloadContent | null {
+  const text = String(value);
+  return {
+    data: new Blob([text], { type: "text/html" }),
+    filename: metadata?.filename || "page.html",
+    mimeType: "text/html",
+  };
+}
+
+export const htmlRenderer: OutputRenderer = {
+  name: "HTMLRenderer",
+  priority: 42,
+  canRender: canRenderHTML,
+  render: renderHTML,
+  getCopyContent: getCopyContentHTML,
+  getDownloadContent: getDownloadContentHTML,
+  isConcatenable: () => false,
+};

@@ -1,4 +1,4 @@
-import { useDeleteWorkspaceDeleteAWorkspaceFile } from "@/app/api/__generated__/endpoints/workspace/workspace";
+import { useDeleteWorkspaceFile } from "@/app/api/__generated__/endpoints/workspace/workspace";
 import { useToast } from "@/components/molecules/Toast/use-toast";
 import { uploadFileDirect } from "@/lib/direct-upload";
 import { parseWorkspaceFileID, buildWorkspaceURI } from "@/lib/workspace-uri";
@@ -6,7 +6,7 @@ import { parseWorkspaceFileID, buildWorkspaceURI } from "@/lib/workspace-uri";
 export function useWorkspaceUpload() {
   const { toast } = useToast();
 
-  const { mutate: deleteMutation } = useDeleteWorkspaceDeleteAWorkspaceFile({
+  const { mutate: deleteMutation } = useDeleteWorkspaceFile({
     mutation: {
       onError: () => {
         toast({

Some files were not shown because too many files have changed in this diff.