👋 Hi there! This PR was automatically generated by Autofix 🤖

This fix was triggered by Toran Bruce Richards. Fixes [AUTOGPT-SERVER-1ZY](https://sentry.io/organizations/significant-gravitas/issues/6386687527/).

The issue was that `llm_call` calculated `max_tokens` without accounting for `input_tokens`, causing OpenRouter API errors when the context window was exceeded.

This PR:

- Implements an `estimate_token_count` function to estimate the number of tokens in a list of messages.
- Calculates the available tokens from the context window, the estimated input tokens, and the user-defined max tokens (see the sketch after this description).
- Adjusts `max_tokens` for LLM calls so requests no longer exceed the context window limit.
- Reduces `max_tokens` by 15% and retries if a token-limit error is encountered during an LLM call.

If you have any questions or feedback for the Sentry team about this fix, please email [autofix@sentry.io](mailto:autofix@sentry.io) with the Run ID: 32838.

---------

Co-authored-by: sentry-autofix[bot] <157164994+sentry-autofix[bot]@users.noreply.github.com>
Co-authored-by: Krzysztof Czerwinski <kpczerwinski@gmail.com>
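For illustration, here is a rough sketch of the token-budgeting logic the bullet list describes. It is not the code merged in this PR: only `estimate_token_count` and `llm_call` are names from the PR, and the 4-characters-per-token heuristic, `clamp_max_tokens`, `call_with_retry`, and the error matching are assumptions made for the example.

```python
def estimate_token_count(messages: list[dict]) -> int:
    """Roughly estimate the token count of a list of chat messages.

    Uses the common ~4 characters-per-token heuristic plus a small
    per-message overhead (an assumption; an exact tokenizer would be
    more precise).
    """
    PER_MESSAGE_OVERHEAD = 4  # approximate tokens for role/markup per message
    total = 0
    for message in messages:
        content = message.get("content") or ""
        total += len(content) // 4 + PER_MESSAGE_OVERHEAD
    return total


def clamp_max_tokens(
    messages: list[dict], context_window: int, user_max_tokens: int | None
) -> int:
    """Cap max_tokens so estimated input + output fits the context window."""
    input_tokens = estimate_token_count(messages)
    available = max(context_window - input_tokens, 1)
    if user_max_tokens is None:
        return available
    return min(user_max_tokens, available)


def call_with_retry(
    llm_call,
    messages: list[dict],
    context_window: int,
    user_max_tokens: int | None,
    max_attempts: int = 3,
):
    """Call the LLM, shrinking max_tokens by 15% on token-limit errors."""
    max_tokens = clamp_max_tokens(messages, context_window, user_max_tokens)
    for attempt in range(max_attempts):
        try:
            return llm_call(messages=messages, max_tokens=max_tokens)
        except Exception as err:
            # The real code matches the provider's specific token-limit error;
            # this string check is a stand-in for the example.
            if "token" not in str(err).lower() or attempt == max_attempts - 1:
                raise
            max_tokens = max(int(max_tokens * 0.85), 1)  # reduce by 15% and retry
```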