- Updated Agent class to emit TaskFailedEvent instead of AgentExecutionErrorEvent when LLM calls are blocked.
- Removed unnecessary LLMCallBlockedError handling from CrewAgentExecutor.
- Enhanced test cases to ensure proper exception handling for blocked LLM calls.
- Improved code clarity and consistency in event handling across agent execution.
- Introduced LLMCallBlockedError to manage blocked LLM calls from before_llm_call hooks.
- Updated LLM class to raise LLMCallBlockedError instead of returning a boolean.
- Enhanced Agent class to emit events and handle LLMCallBlockedError during task execution.
- Added error handling in CrewAgentExecutor and agent utilities to gracefully manage blocked calls.
- Updated tests to verify behavior when LLM calls are blocked.
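Taken together, the two changes above mean a before_llm_call hook can veto a call by raising, and the agent reports the blocked call as a task failure rather than an execution error. A minimal sketch of that flow; everything except the LLMCallBlockedError / TaskFailedEvent names and the before_llm_call concept is an illustrative stand-in, not the actual crewAI internals:

```python
# Illustrative sketch only: hook wiring, event emission, and class internals
# are simplified stand-ins for the real crewAI code.

class LLMCallBlockedError(Exception):
    """Raised by a before_llm_call hook to veto an LLM call."""


class TaskFailedEvent:
    def __init__(self, error: str) -> None:
        self.error = error


def block_sensitive_prompts(messages: list[dict]) -> None:
    # Hypothetical before_llm_call hook: it now raises instead of returning a boolean.
    if any("credit card" in m.get("content", "") for m in messages):
        raise LLMCallBlockedError("prompt contains sensitive data")


def call_llm(messages: list[dict]) -> str:
    return "ok"  # stand-in for the real completion call


def execute_task(messages: list[dict], emit) -> str | None:
    try:
        block_sensitive_prompts(messages)  # runs before the LLM call
        return call_llm(messages)
    except LLMCallBlockedError as exc:
        # The blocked call now surfaces as a task-level failure event.
        emit(TaskFailedEvent(error=str(exc)))
        return None
```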
* fix: give the CREWAI_PLUS_URL env var precedence over the PlusAPI configured value
* feat: bypass TLS certificate verification when calling the platform
* test: fix test
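In practice the precedence rule amounts to checking the environment first. A one-function sketch; the helper name is hypothetical and only the CREWAI_PLUS_URL variable comes from the fix:

```python
import os

def resolve_plus_url(configured_url: str | None) -> str | None:
    """Hypothetical helper: CREWAI_PLUS_URL, when set, wins over the configured value."""
    return os.environ.get("CREWAI_PLUS_URL") or configured_url
```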
- Replace Python representation with JsonSchema for tool arguments
- Remove deprecated PydanticSchemaParser in favor of direct schema generation
- Add handling for VAR_POSITIONAL and VAR_KEYWORD parameters
- Improve tool argument schema collection
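Roughly, this kind of schema collection walks the function signature with the standard inspect module and skips parameters that cannot be expressed as named properties. The helper below and its type mapping are assumptions for illustration, not the actual crewAI implementation:

```python
import inspect
from typing import Any

# Minimal Python-type to JSON-Schema-type mapping (illustrative only).
_TYPE_MAP = {str: "string", int: "integer", float: "number", bool: "boolean"}

def args_schema(func) -> dict[str, Any]:
    """Build a minimal JSON Schema for a tool's arguments (sketch, not crewAI's code)."""
    properties: dict[str, Any] = {}
    required: list[str] = []
    for name, param in inspect.signature(func).parameters.items():
        # *args / **kwargs cannot be represented as named properties, so skip them.
        if param.kind in (param.VAR_POSITIONAL, param.VAR_KEYWORD):
            continue
        properties[name] = {"type": _TYPE_MAP.get(param.annotation, "string")}
        if param.default is param.empty:
            required.append(name)
    return {"type": "object", "properties": properties, "required": required}


def search(query: str, limit: int = 5, *args, **kwargs) -> str:
    return query

# args_schema(search) ->
# {"type": "object",
#  "properties": {"query": {"type": "string"}, "limit": {"type": "integer"}},
#  "required": ["query"]}
```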
* fix: gracefully terminate the future when executing a task asynchronously
* core: add unit test
---------
Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
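"Gracefully terminate the future" can be pictured with concurrent.futures along these lines; the timeout value and cleanup choices here are illustrative, not the executor's actual logic:

```python
from concurrent.futures import ThreadPoolExecutor, TimeoutError as FutureTimeout

def run_task_async(task_fn, timeout: float = 30.0):
    """Run task_fn on a worker thread and clean up if it does not finish in time."""
    executor = ThreadPoolExecutor(max_workers=1)
    future = executor.submit(task_fn)
    try:
        return future.result(timeout=timeout)
    except FutureTimeout:
        # Cancel if still pending; a running thread cannot be killed, so we
        # stop accepting work and let it drain rather than leaking the executor.
        future.cancel()
        raise
    finally:
        executor.shutdown(wait=False, cancel_futures=True)
```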
* support thinking for Anthropic models
* drop comments here
* thinking and tool calling support
* fix: properly mock tool use and text block types in Anthropic tests
- Updated the test for the Anthropic tool use conversation flow to include type attributes for mocked ToolUseBlock and text blocks, ensuring accurate simulation of tool interactions during testing.
* feat: add AnthropicThinkingConfig for enhanced thinking capabilities
This update introduces the AnthropicThinkingConfig class to manage thinking parameters for the Anthropic completion model. The LLM and AnthropicCompletion classes have been updated to utilize this new configuration. Additionally, new test cassettes have been added to validate the functionality of thinking blocks across interactions.
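The field layout of AnthropicThinkingConfig is not spelled out here; a plausible shape, modeled on Anthropic's thinking request parameter (a type plus a token budget), might look like the sketch below, with every field name treated as an assumption rather than the confirmed crewAI attributes:

```python
from dataclasses import dataclass

@dataclass
class AnthropicThinkingConfig:
    # Field names are assumptions modeled on Anthropic's `thinking` request
    # parameter, not the confirmed crewAI attributes.
    type: str = "enabled"
    budget_tokens: int = 1024

    def to_request_param(self) -> dict:
        return {"type": self.type, "budget_tokens": self.budget_tokens}

# Hypothetical usage: passed through to the Anthropic completion call as
# thinking=config.to_request_param().
```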
Adds async support for tools, async execution in the agent executor, and async operations for memory (via aiosqlite). Improves tool decorator typing, preserves _run backward compatibility, updates docs and docstrings, adds tests, and regenerates lockfiles.
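An async tool under these changes could look roughly like the following; the @tool decorator is crewAI's existing helper, but accepting an async function in exactly this form is an assumption of this sketch:

```python
import asyncio
from crewai.tools import tool

@tool("fetch_status")
async def fetch_status(url: str) -> str:
    """Hypothetical async tool: poll a URL without blocking the event loop."""
    await asyncio.sleep(0.1)  # stand-in for an async HTTP call
    return f"{url}: ok"
```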
* refactor: enhance model validation and provider inference in LLM class
- Updated the model validation logic to support pattern matching for new models and "latest" versions, improving flexibility for various providers.
- Refactored the `_validate_model_in_constants` method to first check hardcoded constants and then fall back to pattern matching.
- Introduced `_matches_provider_pattern` to streamline provider-specific model checks.
- Enhanced the `_infer_provider_from_model` method to utilize pattern matching for better provider inference.
This refactor aims to improve the extensibility of the LLM class, allowing it to accommodate new models without requiring constant updates to the hardcoded lists.
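The pattern-matching fallback can be pictured as a table of provider regexes consulted only after the hardcoded constants miss. This standalone sketch is not the actual `_matches_provider_pattern` code, and its patterns are assumptions:

```python
import re

# Illustrative provider patterns only; the real tables live in the LLM constants.
_PROVIDER_PATTERNS = {
    "anthropic": re.compile(r"^claude-[\w.-]+(-latest)?$"),
    "openai": re.compile(r"^(gpt-|o\d)"),
    "gemini": re.compile(r"^gemini-"),
}

def infer_provider(model: str) -> str | None:
    """Fall back to pattern matching when a model is not in the hardcoded constants."""
    for provider, pattern in _PROVIDER_PATTERNS.items():
        if pattern.search(model):
            return provider
    return None

# infer_provider("claude-opus-4-5")  -> "anthropic"
# infer_provider("gpt-4o")           -> "openai"
```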
* feat: add new Anthropic model versions to constants
- Introduced "claude-opus-4-5-20251101" and "claude-opus-4-5" to the AnthropicModels and ANTHROPIC_MODELS lists for enhanced model support.
- Added "anthropic.claude-opus-4-5-20251101-v1:0" to BedrockModels and BEDROCK_MODELS to ensure compatibility with the latest model offerings.
- Updated test cases to ensure proper environment variable handling for model validation, improving robustness in testing scenarios.
* don't infer this way - dropped