fix(backend/copilot): detect prompt-too-long in AssistantMessage content and ResultMessage success subtype (#12642)

## Why

PR #12625 fixed the prompt-too-long retry mechanism for most paths, but
two SDK-specific paths were still broken. The dev session `d2f7cba3`
kept accumulating synthetic "Prompt is too long" error entries on every
turn, growing the transcript from 2.5 MB to 3.2 MB and making recovery
impossible.

Root causes identified from production logs (`[T25]`, `[T28]`):

**Path 1 — AssistantMessage content check:**
When the Claude API rejects a prompt, the SDK surfaces it as
`AssistantMessage(error="invalid_request", content=[TextBlock("Prompt is
too long")])`. Our check only inspected `error_text = str(sdk_error)`
which is `"invalid_request"` — not a prompt-too-long pattern. The
content was then streamed out as `StreamText`, setting `events_yielded =
1`, which blocked retry even when the ResultMessage fired.
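
A minimal sketch of the broken vs. fixed detection on this path (the
message shape is copied from the new test in this commit; the
`error_preview` derivation and the inline matcher are assumptions
standing in for the real helpers in `service.py`):

```python
from claude_agent_sdk import AssistantMessage, TextBlock

# Rejected-prompt shape the SDK surfaces (copied from the new test below).
msg = AssistantMessage(
    content=[TextBlock(text="Prompt is too long")],
    model="<synthetic>",
    error="invalid_request",
)

error_text = str(msg.error)  # "invalid_request": no prompt-too-long pattern
# Assumption: error_preview is derived from the content blocks' text.
error_preview = " ".join(
    block.text for block in msg.content if isinstance(block, TextBlock)
)

def _is_prompt_too_long(exc: BaseException) -> bool:
    # Stand-in for the real helper in service.py (sketched under "How").
    return "prompt is too long" in str(exc).lower()

# The old check used only error_text and missed this shape; the fix also
# inspects the content preview, which is safe because msg.error is set.
if _is_prompt_too_long(Exception(error_text)) or _is_prompt_too_long(
    Exception(error_preview)
):
    raise RuntimeError("Prompt is too long")  # outer retry loop compacts + retries
```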

**Path 2 — ResultMessage success subtype:**
After the SDK auto-compacts internally (via `PreCompact` hook) and the
compacted transcript is _still_ too long, the SDK returns
`ResultMessage(subtype="success", result="Prompt is too long")`. Our
check ran only for `subtype="error"`. With `subtype="success"`, the
stream appeared to complete normally: the synthetic error entry was
appended to the transcript via `transcript_builder` and uploaded to
GCS, so the transcript grew on each failed turn.
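
The shape on this path, with the subtype-independent check (field
values are copied from the new test in this commit; the inline string
match is a stand-in for `_is_prompt_too_long`):

```python
from claude_agent_sdk import ResultMessage

# Post-auto-compact rejection: the process "succeeded", prompt still rejected.
sdk_msg = ResultMessage(
    subtype="success",            # the CLI process itself completed...
    result="Prompt is too long",  # ...but this is the actual rejection text
    duration_ms=100,
    duration_api_ms=0,            # pre-API rejection: the API was never called
    is_error=False,
    num_turns=1,
    session_id="test-session-id",
)

# Fixed check: inspect `result` on every subtype, not just subtype="error".
if "prompt is too long" in (sdk_msg.result or "").lower():
    raise RuntimeError("Prompt is too long")  # retry loop triggers compaction
```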

## What

- **AssistantMessage handler**: when `sdk_error` is set, also check the
content text. `sdk_error` being non-`None` confirms this is an API error
message (not user-generated content), so content inspection is safe.
- **ResultMessage handler**: check `result` for prompt-too-long patterns
regardless of `subtype`, covering the SDK auto-compact path where
`subtype="success"` with `result="Prompt is too long"`.

## How

Two targeted one-line condition expansions in `_run_stream_attempt`,
plus two new integration tests in `retry_scenarios_test.py` that
reproduce each broken path and verify retry fires correctly.
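
`_is_prompt_too_long` itself is not part of this diff; a plausible
minimal form, assuming it is a case-insensitive pattern match over the
exception text (the real helper may match more variants):

```python
import re

# Assumed pattern; the actual helper in service.py is not shown in this diff.
_PTL_PATTERN = re.compile(r"prompt is too long", re.IGNORECASE)

def _is_prompt_too_long(exc: BaseException) -> bool:
    """True when the exception message looks like a context-length rejection."""
    return bool(_PTL_PATTERN.search(str(exc)))
```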

## Changes

- `backend/copilot/sdk/service.py`: fix AssistantMessage content check +
ResultMessage subtype-independent check
- `backend/copilot/sdk/retry_scenarios_test.py`: add 2 integration tests
for the new scenarios

## Checklist

- [x] Tests added for both new scenarios (45 total, all pass)
- [x] Formatted (`poetry run format`)
- [x] No false-positive risk: AssistantMessage check gated behind
`sdk_error is not None`
- [x] Root cause verified from production pod logs
Author: Zamil Majdy
Date: 2026-04-02 00:32:09 +02:00 (committed by GitHub)
Parent: 4ac0ba570a · Commit: b9e29c96bd
3 changed files with 322 additions and 11 deletions

View File

@@ -29,6 +29,7 @@ from backend.copilot.response_model import (
     StreamToolOutputAvailable,
 )
 
+from .compaction import compaction_events
 from .response_adapter import SDKResponseAdapter
 from .tool_adapter import MCP_TOOL_PREFIX
 from .tool_adapter import _pending_tool_outputs as _pto
@@ -689,3 +690,102 @@ def test_already_resolved_tool_skipped_in_user_message():
     assert (
         len(output_events) == 0
     ), "Already-resolved tool should not emit duplicate output"
+
+
+# -- _end_text_if_open before compaction -------------------------------------
+
+
+def test_end_text_if_open_emits_text_end_before_finish_step():
+    """StreamTextEnd must be emitted before StreamFinishStep during compaction.
+
+    When ``emit_end_if_ready`` fires compaction events while a text block is
+    still open, ``_end_text_if_open`` must close it first. If StreamFinishStep
+    arrives before StreamTextEnd, the Vercel AI SDK clears ``activeTextParts``
+    and raises "Received text-end for missing text part".
+    """
+    adapter = _adapter()
+
+    # Open a text block by processing an AssistantMessage with text
+    msg = AssistantMessage(content=[TextBlock(text="partial response")], model="test")
+    adapter.convert_message(msg)
+    assert adapter.has_started_text
+    assert not adapter.has_ended_text
+
+    # Simulate what service.py does before yielding compaction events
+    pre_close: list[StreamBaseResponse] = []
+    adapter._end_text_if_open(pre_close)
+    combined = pre_close + list(compaction_events("Compacted transcript"))
+
+    text_end_idx = next(
+        (i for i, e in enumerate(combined) if isinstance(e, StreamTextEnd)), None
+    )
+    finish_step_idx = next(
+        (i for i, e in enumerate(combined) if isinstance(e, StreamFinishStep)), None
+    )
+    assert text_end_idx is not None, "StreamTextEnd must be present"
+    assert finish_step_idx is not None, "StreamFinishStep must be present"
+    assert text_end_idx < finish_step_idx, (
+        f"StreamTextEnd (idx={text_end_idx}) must precede "
+        f"StreamFinishStep (idx={finish_step_idx}) — otherwise the Vercel AI SDK "
+        "clears activeTextParts before text-end arrives"
+    )
+
+
+def test_step_open_must_reset_after_compaction_finish_step():
+    """Adapter step_open must be reset when compaction emits StreamFinishStep.
+
+    Compaction events bypass the adapter, so service.py must explicitly clear
+    step_open after yielding a StreamFinishStep from compaction. Without this,
+    the next AssistantMessage skips StreamStartStep because the adapter still
+    thinks a step is open.
+    """
+    adapter = _adapter()
+
+    # Open a step + text block via an AssistantMessage
+    msg = AssistantMessage(content=[TextBlock(text="thinking...")], model="test")
+    adapter.convert_message(msg)
+    assert adapter.step_open is True
+
+    # Simulate what service.py does: close text, then check compaction events
+    pre_close: list[StreamBaseResponse] = []
+    adapter._end_text_if_open(pre_close)
+    events = list(compaction_events("Compacted transcript"))
+    if any(isinstance(ev, StreamFinishStep) for ev in events):
+        adapter.step_open = False
+
+    assert (
+        adapter.step_open is False
+    ), "step_open must be False after compaction emits StreamFinishStep"
+
+    # Next AssistantMessage must open a new step
+    msg2 = AssistantMessage(content=[TextBlock(text="continued")], model="test")
+    results = adapter.convert_message(msg2)
+    assert any(
+        isinstance(r, StreamStartStep) for r in results
+    ), "A new StreamStartStep must be emitted after compaction closed the step"
+
+
+def test_end_text_if_open_no_op_when_no_text_open():
+    """_end_text_if_open emits nothing when no text block is open."""
+    adapter = _adapter()
+    results: list[StreamBaseResponse] = []
+    adapter._end_text_if_open(results)
+    assert results == []
+
+
+def test_end_text_if_open_no_op_after_text_already_ended():
+    """_end_text_if_open emits nothing when the text block is already closed."""
+    adapter = _adapter()
+    msg = AssistantMessage(content=[TextBlock(text="hello")], model="test")
+    adapter.convert_message(msg)
+
+    # Close it once
+    first: list[StreamBaseResponse] = []
+    adapter._end_text_if_open(first)
+    assert len(first) == 1
+    assert isinstance(first[0], StreamTextEnd)
+
+    # Second call must be a no-op
+    second: list[StreamBaseResponse] = []
+    adapter._end_text_if_open(second)
+    assert second == []

View File

@@ -1487,3 +1487,188 @@ class TestStreamChatCompletionRetryIntegration:
         errors = [e for e in events if isinstance(e, StreamError)]
         assert not errors, f"Unexpected StreamError: {errors}"
         assert any(isinstance(e, StreamStart) for e in events)
+
+    @pytest.mark.asyncio
+    async def test_result_message_success_subtype_prompt_too_long_triggers_compaction(
+        self,
+    ):
+        """CLI returns ResultMessage(subtype="success") with result="Prompt is too long".
+
+        The SDK internally compacts but the transcript is still too long. It
+        returns subtype="success" (process completed) with result="Prompt is
+        too long" (the actual rejection message). The retry loop must detect
+        this as a context-length error and trigger compaction — the subtype
+        "success" must not fool it into treating this as a real response.
+        """
+        import contextlib
+
+        from claude_agent_sdk import ResultMessage
+
+        from backend.copilot.response_model import StreamError, StreamStart
+        from backend.copilot.sdk.service import stream_chat_completion_sdk
+
+        session = self._make_session()
+        success_result = self._make_result_message()
+        attempt_count = [0]
+
+        error_result = ResultMessage(
+            subtype="success",
+            result="Prompt is too long",
+            duration_ms=100,
+            duration_api_ms=0,
+            is_error=False,
+            num_turns=1,
+            session_id="test-session-id",
+        )
+
+        def _client_factory(*args, **kwargs):
+            attempt_count[0] += 1
+
+            async def _receive_error():
+                yield error_result
+
+            async def _receive_success():
+                yield success_result
+
+            client = MagicMock()
+            client._transport = MagicMock()
+            client._transport.write = AsyncMock()
+            client.query = AsyncMock()
+            if attempt_count[0] == 1:
+                client.receive_response = _receive_error
+            else:
+                client.receive_response = _receive_success
+            cm = AsyncMock()
+            cm.__aenter__.return_value = client
+            cm.__aexit__.return_value = None
+            return cm
+
+        original_transcript = _build_transcript(
+            [("user", "prior question"), ("assistant", "prior answer")]
+        )
+        compacted_transcript = _build_transcript(
+            [("user", "[summary]"), ("assistant", "summary reply")]
+        )
+        patches = _make_sdk_patches(
+            session,
+            original_transcript=original_transcript,
+            compacted_transcript=compacted_transcript,
+            client_side_effect=_client_factory,
+        )
+
+        events = []
+        with contextlib.ExitStack() as stack:
+            for target, kwargs in patches:
+                stack.enter_context(patch(target, **kwargs))
+            async for event in stream_chat_completion_sdk(
+                session_id="test-session-id",
+                message="hello",
+                is_user_message=True,
+                user_id="test-user",
+                session=session,
+            ):
+                events.append(event)
+
+        assert attempt_count[0] == 2, (
+            f"Expected 2 SDK attempts (subtype='success' with 'Prompt is too long' "
+            f"result should trigger compaction retry), got {attempt_count[0]}"
+        )
+        errors = [e for e in events if isinstance(e, StreamError)]
+        assert not errors, f"Unexpected StreamError: {errors}"
+        assert any(isinstance(e, StreamStart) for e in events)
+
+    @pytest.mark.asyncio
+    async def test_assistant_message_error_content_prompt_too_long_triggers_compaction(
+        self,
+    ):
+        """AssistantMessage.error="invalid_request" with content "Prompt is too long".
+
+        The SDK returns error type "invalid_request" but puts the actual
+        rejection message ("Prompt is too long") in the content blocks.
+        The retry loop must detect this via content inspection (sdk_error
+        being set confirms it's an error message, not user content).
+        """
+        import contextlib
+
+        from claude_agent_sdk import AssistantMessage, ResultMessage, TextBlock
+
+        from backend.copilot.response_model import StreamError, StreamStart
+        from backend.copilot.sdk.service import stream_chat_completion_sdk
+
+        session = self._make_session()
+        success_result = self._make_result_message()
+        attempt_count = [0]
+
+        def _client_factory(*args, **kwargs):
+            attempt_count[0] += 1
+
+            async def _receive_error():
+                # SDK returns invalid_request with "Prompt is too long" in content.
+                # ResultMessage.result is a non-PTL value ("done") to isolate
+                # the AssistantMessage content detection path exclusively.
+                yield AssistantMessage(
+                    content=[TextBlock(text="Prompt is too long")],
+                    model="<synthetic>",
+                    error="invalid_request",
+                )
+                yield ResultMessage(
+                    subtype="success",
+                    result="done",
+                    duration_ms=100,
+                    duration_api_ms=0,
+                    is_error=False,
+                    num_turns=1,
+                    session_id="test-session-id",
+                )
+
+            async def _receive_success():
+                yield success_result
+
+            client = MagicMock()
+            client._transport = MagicMock()
+            client._transport.write = AsyncMock()
+            client.query = AsyncMock()
+            if attempt_count[0] == 1:
+                client.receive_response = _receive_error
+            else:
+                client.receive_response = _receive_success
+            cm = AsyncMock()
+            cm.__aenter__.return_value = client
+            cm.__aexit__.return_value = None
+            return cm
+
+        original_transcript = _build_transcript(
+            [("user", "prior question"), ("assistant", "prior answer")]
+        )
+        compacted_transcript = _build_transcript(
+            [("user", "[summary]"), ("assistant", "summary reply")]
+        )
+        patches = _make_sdk_patches(
+            session,
+            original_transcript=original_transcript,
+            compacted_transcript=compacted_transcript,
+            client_side_effect=_client_factory,
+        )
+
+        events = []
+        with contextlib.ExitStack() as stack:
+            for target, kwargs in patches:
+                stack.enter_context(patch(target, **kwargs))
+            async for event in stream_chat_completion_sdk(
+                session_id="test-session-id",
+                message="hello",
+                is_user_message=True,
+                user_id="test-user",
+                session=session,
+            ):
+                events.append(event)
+
+        assert attempt_count[0] == 2, (
+            f"Expected 2 SDK attempts (AssistantMessage error content 'Prompt is "
+            f"too long' should trigger compaction retry), got {attempt_count[0]}"
+        )
+        errors = [e for e in events if isinstance(e, StreamError)]
+        assert not errors, f"Unexpected StreamError: {errors}"
+        assert any(isinstance(e, StreamStart) for e in events)

View File

@@ -1310,10 +1310,16 @@ async def _run_stream_attempt(
                 # AssistantMessage.error (not as a Python exception).
                 # Re-raise so the outer retry loop can compact the
                 # transcript and retry with reduced context.
-                # Only check error_text (the error field), not the
-                # content preview — content may contain arbitrary text
-                # that false-positives the pattern match.
-                if _is_prompt_too_long(Exception(error_text)):
+                # Check both error_text and error_preview: sdk_error
+                # being set confirms this is an error message (not user
+                # content), so checking content is safe. The actual
+                # error description (e.g. "Prompt is too long") may be
+                # in the content, not the error type field
+                # (e.g. error="invalid_request", content="Prompt is
+                # too long").
+                if _is_prompt_too_long(Exception(error_text)) or _is_prompt_too_long(
+                    Exception(error_preview)
+                ):
                     logger.warning(
                         "%s Prompt-too-long detected via AssistantMessage "
                         "error — raising for retry",
@@ -1414,13 +1420,16 @@ async def _run_stream_attempt(
                     ctx.log_prefix,
                     sdk_msg.result or "(no error message provided)",
                 )
-                # If the CLI itself rejected the prompt as too long
-                # (pre-API check, duration_api_ms=0), re-raise as an
-                # exception so the retry loop can trigger compaction.
-                # Without this, the ResultMessage is silently consumed
-                # and the retry/compaction mechanism is never invoked.
-                if _is_prompt_too_long(RuntimeError(sdk_msg.result or "")):
-                    raise RuntimeError("Prompt is too long")
+            # Check for prompt-too-long regardless of subtype — the
+            # SDK may return subtype="success" with result="Prompt is
+            # too long" when the CLI rejects the prompt before calling
+            # the API (cost_usd=0, no tokens consumed). If we only
+            # check the "error" subtype path, the stream appears to
+            # complete normally, the synthetic error text is stored
+            # in the transcript, and the session grows without bound.
+            if _is_prompt_too_long(RuntimeError(sdk_msg.result or "")):
+                raise RuntimeError("Prompt is too long")
 
             # Capture token usage from ResultMessage.
             # Anthropic reports cached tokens separately:
@@ -1453,6 +1462,23 @@ async def _run_stream_attempt(
         # Emit compaction end if SDK finished compacting.
         # Sync TranscriptBuilder with the CLI's active context.
         compact_result = await ctx.compaction.emit_end_if_ready(ctx.session)
+        if compact_result.events:
+            # Compaction events end with StreamFinishStep, which maps to
+            # Vercel AI SDK's "finish-step" — that clears activeTextParts.
+            # Close any open text block BEFORE the compaction events so
+            # the text-end arrives before finish-step, preventing
+            # "text-end for missing text part" errors on the frontend.
+            pre_close: list[StreamBaseResponse] = []
+            state.adapter._end_text_if_open(pre_close)
+            # Compaction events bypass the adapter, so sync step state
+            # when a StreamFinishStep is present — otherwise the adapter
+            # will skip StreamStartStep on the next AssistantMessage.
+            if any(
+                isinstance(ev, StreamFinishStep) for ev in compact_result.events
+            ):
+                state.adapter.step_open = False
+            for r in pre_close:
+                yield r
+            for ev in compact_result.events:
+                yield ev
 
         entries_replaced = False