* fix: gracefully handle oversized tool results causing context overflow
When a subagent reads a very large file or gets a huge tool result (e.g.,
`gh pr diff` on a massive PR), it can exceed the model's context window in
a single prompt. Auto-compaction can't help because there's no older
history to compact — just one giant tool result.
This adds two layers of defense:
1. Pre-emptive: Hard cap on tool result size (400K chars ≈ 100K tokens)
applied in the session tool result guard before persistence. This
prevents extremely large tool results from being stored in full,
regardless of model context window size.
2. Recovery: When context overflow is detected and compaction fails,
scan session messages for oversized tool results relative to the
model's actual context window (30% max share). If found, truncate
them in the session via branching (creating a new branch with
truncated content) and retry the prompt.
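A minimal TypeScript sketch of the layer-1 cap. The 400K-character limit (≈100K tokens at roughly 4 chars/token) comes from the description above; the function name and the notice wording are hypothetical, not the actual implementation:

```typescript
// Hypothetical sketch of the pre-emptive guard applied before persistence.
// Only the 400K-char constant is from the change description; names and
// the appended notice text are illustrative assumptions.
const MAX_TOOL_RESULT_CHARS = 400_000;

function capToolResult(output: string): string {
  if (output.length <= MAX_TOOL_RESULT_CHARS) return output;
  // Keep the beginning of the result, which is usually the most useful
  // part for understanding what was read.
  return (
    output.slice(0, MAX_TOOL_RESULT_CHARS) +
    "\n\n[Tool result truncated before persistence: exceeded 400K characters]"
  );
}
```

Because the cap runs in the guard, the truncated form is what gets stored, regardless of which model (and context window) later reads the session.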
The truncation preserves the beginning of the content (most useful for
understanding what was read) and appends a notice explaining the
truncation and suggesting offset/limit parameters for targeted reads.
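A sketch of what such beginning-preserving, newline-aware truncation might look like in TypeScript. The function name, the keep-at-least-half heuristic, and the exact notice text are assumptions for illustration:

```typescript
// Hypothetical truncation helper: keeps the start of the content, prefers
// cutting on a newline boundary, and appends a notice suggesting targeted
// offset/limit reads. Heuristics and wording are assumptions.
function truncateAtNewline(content: string, maxChars: number): string {
  if (content.length <= maxChars) return content;
  let cut = content.slice(0, maxChars);
  // Back up to the last newline so no partial line is left dangling,
  // unless doing so would discard more than half of the kept content.
  const lastNewline = cut.lastIndexOf("\n");
  if (lastNewline > maxChars / 2) cut = cut.slice(0, lastNewline);
  return (
    cut +
    "\n\n[Result truncated. Re-run the read with offset/limit parameters to view specific sections.]"
  );
}
```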
Includes comprehensive tests for:
- Text truncation with newline-boundary awareness
- Context-window-proportional size calculation
- In-memory message truncation
- Oversized detection heuristics
- Guard-level size capping during persistence
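The context-window-proportional calculation tested above could be sketched as follows. The 30% maximum share is from the description; the chars-per-token estimate and all names are hypothetical:

```typescript
// Hypothetical oversized-result heuristic: a tool result is "oversized"
// when it would consume more than 30% of the model's context window.
// The 4 chars/token conversion is a rough assumption for this sketch.
const MAX_CONTEXT_SHARE = 0.3;
const CHARS_PER_TOKEN = 4;

function maxToolResultChars(contextWindowTokens: number): number {
  return Math.floor(contextWindowTokens * MAX_CONTEXT_SHARE * CHARS_PER_TOKEN);
}

function isOversized(content: string, contextWindowTokens: number): boolean {
  return content.length > maxToolResultChars(contextWindowTokens);
}
```

Scaling the threshold to the actual window means a result that fits comfortably in a 1M-token model can still be flagged and truncated when replayed against a smaller one.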
* fix: prep fixes for tool result truncation PR (#11579) (thanks @tyler6204)