Fix CodeAct paper link (#1784)
https://arxiv.org/abs/2402.13463 is "RefuteBench: Evaluating Refuting Instruction-Following for Large Language Models".
https://arxiv.org/abs/2402.01030 is "Executable Code Actions Elicit Better LLM Agents".
@@ -8,7 +8,7 @@ sidebar_position: 3

### Description

-This agent implements the CodeAct idea ([paper](https://arxiv.org/abs/2402.13463), [tweet](https://twitter.com/xingyaow_/status/1754556835703751087)) that consolidates LLM agents’ **act**ions into a unified **code** action space for both _simplicity_ and _performance_ (see paper for more details).
+This agent implements the CodeAct idea ([paper](https://arxiv.org/abs/2402.01030), [tweet](https://twitter.com/xingyaow_/status/1754556835703751087)) that consolidates LLM agents’ **act**ions into a unified **code** action space for both _simplicity_ and _performance_ (see paper for more details).

The conceptual idea is illustrated below. At each turn, the agent can:
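
For intuition, here is a minimal sketch of the loop the corrected paper describes, not OpenHands' actual implementation: at each turn the model either converses or emits Python code, the code is executed, and its output is fed back as the next observation. The pluggable `llm` callable and the `<execute>` tag convention are illustrative assumptions, not the real OpenHands API.

```python
import io
import contextlib
from typing import Callable

def run_python(code: str) -> str:
    """Execute a code action and return captured stdout (or the error) as the observation."""
    buffer = io.StringIO()
    try:
        with contextlib.redirect_stdout(buffer):
            exec(code, {})  # the action *is* Python code: one unified action space
    except Exception as exc:  # errors are also observations, so the agent can self-correct
        return f"Error: {exc!r}"
    return buffer.getvalue()

def codeact_loop(task: str, llm: Callable[[list[dict]], str], max_turns: int = 10) -> list[dict]:
    """Alternate LLM turns and code execution until the agent stops emitting code actions."""
    history = [{"role": "user", "content": task}]
    for _ in range(max_turns):
        reply = llm(history)
        history.append({"role": "assistant", "content": reply})
        if "<execute>" in reply:
            # Code action: run it and feed the result back as the next observation.
            code = reply.split("<execute>", 1)[1].split("</execute>", 1)[0]
            history.append({"role": "user", "content": "Observation:\n" + run_python(code)})
        else:
            # No code action: the agent is conversing (e.g. asking a question or giving the answer).
            break
    return history

# Toy usage with a scripted "LLM" that emits one code action and then a final answer.
if __name__ == "__main__":
    replies = iter(["<execute>print(21 * 2)</execute>", "The answer is 42."])
    print(codeact_loop("What is 21 * 2?", lambda history: next(replies)))
```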