Mirror of https://github.com/Significant-Gravitas/AutoGPT.git, synced 2026-04-08 03:00:28 -04:00
fix(frontend): buffer workspace file downloads to prevent truncation (#12349)
## Summary

- Workspace file downloads (images, CSVs, etc.) were silently truncated (~10 KB lost from the end) when served through the Next.js proxy.
- Root cause: `new NextResponse(response.body)` passes a `ReadableStream` directly, which Next.js / Vercel silently truncates for larger files.
- Fix: fully buffer with `response.arrayBuffer()` before forwarding, and set `Content-Length` from the actual buffer size.
- Keeps the auth proxy intact: no signed URLs (which would be public and expire, breaking chat history).

## Root cause verification

Confirmed locally on session `080f27f9-0379-4085-a67a-ee34cc40cd62`:

- Backend `write_workspace_file` logs **978,831 bytes** written.
- Direct backend download (`curl localhost:8006/api/workspace/files/.../download`): **978,831 bytes** ✅
- Browser download through the Next.js proxy: **truncated** ❌

## Why not signed URLs?

- Signed URLs are effectively public: anyone with the link can download the file (privacy concern).
- Signed URLs expire, but chat history persists; reopening a conversation later would show broken downloads.
- Buffering is fine: workspace files are capped at 100 MB, and Vercel function memory is 1 GB.

## Related

- Discord thread: `#Truncated File Bug` channel
- Related PR #12319 (signed URL approach): this fix is simpler and preserves auth.

## Test plan

- [ ] Download a workspace file (CSV, PNG, any type) through the copilot UI
- [ ] Verify the downloaded file size matches the original
- [ ] Verify PNGs open correctly and CSVs have all rows

cc @Swiftyos @uberdot @AdarshRawat1
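The buffered-forwarding pattern this PR adopts can be sketched with the standard Fetch `Response` (which `NextResponse` extends). This is a minimal illustration, not the PR's exact code; the `forwardBuffered` name is made up here:

```typescript
// Sketch of the buffered proxy-forwarding pattern (illustrative, using the
// standard Fetch Response rather than NextResponse).
async function forwardBuffered(upstream: Response): Promise<Response> {
  // Buffer the entire body up front instead of piping upstream.body through
  // as a ReadableStream, which can be truncated mid-stream by the runtime.
  const buffer = await upstream.arrayBuffer();

  const headers: Record<string, string> = {
    "Content-Type":
      upstream.headers.get("Content-Type") ?? "application/octet-stream",
    // Derive Content-Length from the bytes we actually hold, not from the
    // upstream header, so the advertised size always matches the payload.
    "Content-Length": String(buffer.byteLength),
  };

  const disposition = upstream.headers.get("Content-Disposition");
  if (disposition) headers["Content-Disposition"] = disposition;

  return new Response(buffer, { status: 200, headers });
}
```

Because the whole payload is in memory before the response is constructed, the forwarded `Content-Length` can never disagree with the body, which is the invariant the truncation bug violated.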
```diff
@@ -57,27 +57,25 @@ async function handleWorkspaceDownload(
     );
   }
 
-  // Get the content type from the backend response
+  // Fully buffer the response before forwarding. Passing response.body as a
+  // ReadableStream causes silent truncation in Next.js / Vercel — the last
+  // ~10 KB of larger files are dropped, corrupting PNGs and truncating CSVs.
+  const buffer = await response.arrayBuffer();
+
   const contentType =
     response.headers.get("Content-Type") || "application/octet-stream";
   const contentDisposition = response.headers.get("Content-Disposition");
 
-  // Stream the response body
   const responseHeaders: Record<string, string> = {
     "Content-Type": contentType,
+    "Content-Length": String(buffer.byteLength),
   };
 
   if (contentDisposition) {
     responseHeaders["Content-Disposition"] = contentDisposition;
   }
 
-  const contentLength = response.headers.get("Content-Length");
-  if (contentLength) {
-    responseHeaders["Content-Length"] = contentLength;
-  }
-
-  // Stream the response body directly instead of buffering in memory
-  return new NextResponse(response.body, {
+  return new NextResponse(buffer, {
     status: 200,
     headers: responseHeaders,
   });
```
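The mechanism the fix relies on can be demonstrated in isolation: `Response.arrayBuffer()` drains every chunk a stream produces before resolving, so no tail bytes can be dropped. A minimal sketch, with made-up helper names and an illustrative chunk size:

```typescript
// Produce a body of totalBytes delivered in many small chunks, mimicking a
// large file streamed from the backend. Names and sizes are illustrative.
function chunkedBody(
  totalBytes: number,
  chunkSize: number,
): ReadableStream<Uint8Array> {
  let sent = 0;
  return new ReadableStream({
    pull(controller) {
      if (sent >= totalBytes) {
        controller.close();
        return;
      }
      const n = Math.min(chunkSize, totalBytes - sent);
      controller.enqueue(new Uint8Array(n));
      sent += n;
    },
  });
}

// Buffer a ~978 KB body delivered in 64 KB chunks; arrayBuffer() resolves
// only after the final chunk, so the buffered size equals the full payload.
async function drainedSize(): Promise<number> {
  const body = chunkedBody(978_831, 65_536);
  const buffered = await new Response(body).arrayBuffer();
  return buffered.byteLength;
}
```

This is exactly why buffering sidesteps the proxy truncation: the response is not constructed until the last chunk has been consumed.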