Mirror of https://github.com/Significant-Gravitas/AutoGPT.git, synced 2026-02-11 07:15:08 -05:00
## Summary
- Add thread safety to the `NodeExecutionProgress` class to prevent race conditions between the graph executor and node executor threads
- Fixes potential data corruption and lost outputs during concurrent access to shared output lists
- Uses a single lock per node (one per `NodeExecutionProgress` instance) for minimal performance impact
- Instead of blocking on one node evaluation before queuing another, the executor moves on to the next node, in case another node completes in the meantime

## Changes
- Added a `threading.Lock` to the `NodeExecutionProgress` class
- Protected `add_output()` calls from the node executor thread with the lock
- Protected `pop_output()` calls from the graph executor thread with the lock
- Protected the output checks in `_pop_done_task()` with the lock

## Problem Solved
The `NodeExecutionProgress.output` dictionary was being accessed concurrently:
- `add_output()` is called from the node executor thread (asyncio thread)
- `pop_output()` is called from the graph executor thread (main thread)
- Compound operations on the shared output lists (e.g. checking for an entry and then popping it) are not atomic, so interleaved access can race
- This could cause data corruption, index errors, and lost outputs

## Test Plan
- [x] Existing executor tests pass
- [x] No performance regression (operations are microsecond-level)
- [x] Thread safety verified through code analysis

## Technical Details
- Single `threading.Lock()` per `NodeExecutionProgress` instance (~64 bytes)
- Lock acquisition time (~100-200 ns) is negligible compared to the list operations it protects
- Maintains ordering guarantees for outputs with the same `node_execution_id`
- No GIL contention issues, since the critical sections are very brief

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude <noreply@anthropic.com>
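The locking pattern described above could look roughly like the sketch below. Only the method names (`add_output`, `pop_output`), the `output` dictionary, and the single-lock-per-instance design come from this PR; the internal list layout and FIFO popping are assumptions for illustration, not the actual AutoGPT implementation.

```python
import threading
from collections import defaultdict


class NodeExecutionProgress:
    """Sketch: per-node output tracking guarded by a single lock.

    Internals are assumed; only the locking pattern mirrors the PR.
    """

    def __init__(self) -> None:
        self._lock = threading.Lock()  # one lock per instance (~64 bytes)
        self.output: dict[str, list] = defaultdict(list)

    def add_output(self, node_execution_id: str, value) -> None:
        # Called from the node executor (asyncio) thread.
        with self._lock:
            self.output[node_execution_id].append(value)

    def pop_output(self, node_execution_id: str):
        # Called from the graph executor (main) thread. The check and
        # the pop happen under the same lock, so they form one atomic step.
        with self._lock:
            outputs = self.output.get(node_execution_id)
            if outputs:
                return outputs.pop(0)  # FIFO keeps per-id ordering
            return None
```

Keeping the emptiness check and the `pop` inside one `with self._lock:` block is the key point: taking the lock separately for each step would reintroduce the race.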
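As a self-contained illustration of the problem being solved (this is not AutoGPT code), the producer/consumer below guards the compound check-then-pop on a shared list with a lock. Without the lock, the consumer's emptiness check and its `pop(0)` can interleave with the producer's `append`, losing or corrupting outputs; with it, every item arrives exactly once and in order.

```python
import threading

lock = threading.Lock()
outputs: list[int] = []   # shared list, analogous to one output queue
consumed: list[int] = []

def producer() -> None:
    # Stands in for the node executor thread appending outputs.
    for i in range(1000):
        with lock:
            outputs.append(i)

def consumer() -> None:
    # Stands in for the graph executor thread draining outputs.
    seen = 0
    while seen < 1000:
        with lock:  # check and pop as one atomic step
            if outputs:
                consumed.append(outputs.pop(0))
                seen += 1

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()

assert consumed == list(range(1000))  # nothing lost, FIFO order kept
```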