crewAI/docs/en/concepts/memory.mdx
João Moura 18d266c8e7 New Unified Memory System (#4420)
Co-authored-by: Copilot Autofix powered by AI <223894421+github-code-quality[bot]@users.noreply.github.com>
Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
2026-02-13 21:34:37 -03:00


---
title: Memory
description: Leveraging the unified memory system in CrewAI to enhance agent capabilities.
icon: database
mode: "wide"
---
## Overview
CrewAI provides a **unified memory system** -- a single `Memory` class that replaces separate short-term, long-term, entity, and external memory types with one intelligent API. Memory uses an LLM to analyze content when saving (inferring scope, categories, and importance) and supports adaptive-depth recall with composite scoring that blends semantic similarity, recency, and importance.

You can use memory four ways: **standalone** (scripts, notebooks), **with Crews**, **with Agents**, or **inside Flows**.
## Quick Start
```python
from crewai import Memory
memory = Memory()
# Store -- the LLM infers scope, categories, and importance
memory.remember("We decided to use PostgreSQL for the user database.")
# Retrieve -- results ranked by composite score (semantic + recency + importance)
matches = memory.recall("What database did we choose?")
for m in matches:
    print(f"[{m.score:.2f}] {m.record.content}")
# Tune scoring for a fast-moving project
memory = Memory(recency_weight=0.5, recency_half_life_days=7)
# Forget
memory.forget(scope="/project/old")
# Explore the self-organized scope tree
print(memory.tree())
print(memory.info("/"))
```
## Four Ways to Use Memory
### Standalone
Use memory in scripts, notebooks, CLI tools, or as a standalone knowledge base -- no agents or crews required.
```python
from crewai import Memory
memory = Memory()
# Build up knowledge
memory.remember("The API rate limit is 1000 requests per minute.")
memory.remember("Our staging environment uses port 8080.")
memory.remember("The team agreed to use feature flags for all new releases.")
# Later, recall what you need
matches = memory.recall("What are our API limits?", limit=5)
for m in matches:
    print(f"[{m.score:.2f}] {m.record.content}")
# Extract atomic facts from a longer text
raw = """Meeting notes: We decided to migrate from MySQL to PostgreSQL
next quarter. The budget is $50k. Sarah will lead the migration."""
facts = memory.extract_memories(raw)
# ["Migration from MySQL to PostgreSQL planned for next quarter",
# "Database migration budget is $50k",
# "Sarah will lead the database migration"]
for fact in facts:
    memory.remember(fact)
```
### With Crews
Pass `memory=True` for default settings, or pass a configured `Memory` instance for custom behavior.
```python
from crewai import Crew, Agent, Task, Process, Memory
# Option 1: Default memory
crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, writing_task],
    process=Process.sequential,
    memory=True,
    verbose=True,
)
# Option 2: Custom memory with tuned scoring
memory = Memory(
    recency_weight=0.4,
    semantic_weight=0.4,
    importance_weight=0.2,
    recency_half_life_days=14,
)
crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, writing_task],
    memory=memory,
)
```
When `memory=True`, the crew creates a default `Memory()` and passes the crew's `embedder` configuration through automatically. All agents in the crew share the crew's memory unless an agent has its own.

After each task, the crew automatically extracts discrete facts from the task output and stores them. Before each task, the agent recalls relevant context from memory and injects it into the task prompt.
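Under the hood this is a recall-then-extract loop around each task. The sketch below mimics that lifecycle with a toy `StubMemory` class -- an invention for illustration only; the real crew delegates both extraction and ranked recall to the LLM-backed `Memory`:

```python
# Toy stand-in for Memory, showing only the order of operations a crew
# performs around each task. Matching and extraction here are deliberately
# naive; the real class delegates both to an LLM.
class StubMemory:
    def __init__(self):
        self.records = []

    def recall(self, query):
        # Real recall ranks by composite score; this just keyword-matches.
        words = query.lower().split()
        return [r for r in self.records if any(w in r for w in words)]

    def extract_memories(self, text):
        # Real extraction asks the LLM for atomic facts; this splits sentences.
        return [s.strip() for s in text.split(".") if s.strip()]

    def remember(self, fact):
        self.records.append(fact.lower())

memory = StubMemory()
memory.remember("The team chose PostgreSQL for the user database")

# Before a task: recall relevant context and inject it into the prompt.
context = memory.recall("postgresql database choice")
prompt = f"Task: design the schema.\nContext: {context}"

# After the task: extract discrete facts from the output and store each one.
task_output = "Schema uses UUID primary keys. Migrations run via Alembic."
for fact in memory.extract_memories(task_output):
    memory.remember(fact)

print(len(memory.records))  # 3
```

The real pipeline adds scope inference, importance scoring, and embedding on top of this loop, but the sequencing is the same.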
### With Agents
Agents can use the crew's shared memory (default) or receive a scoped view for private context.
```python
from crewai import Agent, Memory
memory = Memory()
# Researcher gets a private scope -- only sees /agent/researcher
researcher = Agent(
    role="Researcher",
    goal="Find and analyze information",
    backstory="Expert researcher with attention to detail",
    memory=memory.scope("/agent/researcher"),
)
# Writer uses crew shared memory (no agent-level memory set)
writer = Agent(
    role="Writer",
    goal="Produce clear, well-structured content",
    backstory="Experienced technical writer",
    # memory not set -- uses crew._memory when crew has memory enabled
)
```
This pattern gives the researcher private findings while the writer reads from the shared crew memory.
### With Flows
Every Flow has built-in memory. Use `self.remember()`, `self.recall()`, and `self.extract_memories()` inside any flow method.
```python
from crewai.flow.flow import Flow, listen, start
class ResearchFlow(Flow):
    @start()
    def gather_data(self):
        findings = "PostgreSQL handles 10k concurrent connections. MySQL caps at 5k."
        self.remember(findings, scope="/research/databases")
        return findings

    @listen(gather_data)
    def write_report(self, findings):
        # Recall past research to provide context
        past = self.recall("database performance benchmarks")
        context = "\n".join(f"- {m.record.content}" for m in past)
        return f"Report:\nNew findings: {findings}\nPrevious context:\n{context}"
```
See the [Flows documentation](/concepts/flows) for more on memory in Flows.
## Hierarchical Scopes
### What Scopes Are
Memories are organized into a hierarchical tree of scopes, similar to a filesystem. Each scope is a path like `/`, `/project/alpha`, or `/agent/researcher/findings`.
```
/
  /company
    /company/engineering
    /company/product
  /project
    /project/alpha
    /project/beta
  /agent
    /agent/researcher
    /agent/writer
```
Scopes provide **context-dependent memory** -- when you recall within a scope, you only search that branch of the tree, which improves both precision and performance.
### How Scope Inference Works
When you call `remember()` without specifying a scope, the LLM analyzes the content and the existing scope tree, then suggests the best placement. If no existing scope fits, it creates a new one. Over time, the scope tree grows organically from the content itself -- you don't need to design a schema upfront.
```python
memory = Memory()
# LLM infers scope from content
memory.remember("We chose PostgreSQL for the user database.")
# -> might be placed under /project/decisions or /engineering/database
# You can also specify scope explicitly
memory.remember("Sprint velocity is 42 points", scope="/team/metrics")
```
### Visualizing the Scope Tree
```python
print(memory.tree())
# / (15 records)
#   /project (8 records)
#     /project/alpha (5 records)
#     /project/beta (3 records)
#   /agent (7 records)
#     /agent/researcher (4 records)
#     /agent/writer (3 records)

print(memory.info("/project/alpha"))
# ScopeInfo(path='/project/alpha', record_count=5,
#   categories=['architecture', 'database'],
#   oldest_record=datetime(...), newest_record=datetime(...),
#   child_scopes=[])
```
### MemoryScope: Subtree Views
A `MemoryScope` restricts all operations to a branch of the tree. The agent or code using it can only see and write within that subtree.
```python
memory = Memory()
# Create a scope for a specific agent
agent_memory = memory.scope("/agent/researcher")
# Everything is relative to /agent/researcher
agent_memory.remember("Found three relevant papers on LLM memory.")
# -> stored under /agent/researcher
agent_memory.recall("relevant papers")
# -> searches only under /agent/researcher
# Narrow further with subscope
project_memory = agent_memory.subscope("project-alpha")
# -> /agent/researcher/project-alpha
```
### Best Practices for Scope Design
- **Start flat, let the LLM organize.** Don't over-engineer your scope hierarchy upfront. Begin with `memory.remember(content)` and let the LLM's scope inference create structure as content accumulates.
- **Use `/{entity_type}/{identifier}` patterns.** Natural hierarchies emerge from patterns like `/project/alpha`, `/agent/researcher`, `/company/engineering`, `/customer/acme-corp`.
- **Scope by concern, not by data type.** Use `/project/alpha/decisions` rather than `/decisions/project/alpha`. This keeps related content together.
- **Keep depth shallow (2-3 levels).** Deeply nested scopes become too sparse. `/project/alpha/architecture` is good; `/project/alpha/architecture/decisions/databases/postgresql` is too deep.
- **Use explicit scopes when you know, let the LLM infer when you don't.** If you're storing a known project decision, pass `scope="/project/alpha/decisions"`. If you're storing freeform agent output, omit the scope and let the LLM figure it out.
### Use Case Examples
**Multi-project team:**
```python
memory = Memory()
# Each project gets its own branch
memory.remember("Using microservices architecture", scope="/project/alpha/architecture")
memory.remember("GraphQL API for client apps", scope="/project/beta/api")
# Recall across all projects
memory.recall("API design decisions")
# Or within a specific project
memory.recall("API design", scope="/project/beta")
```
**Per-agent private context with shared knowledge:**
```python
memory = Memory()
# Researcher has private findings
researcher_memory = memory.scope("/agent/researcher")
# Writer can read from both its own scope and shared company knowledge
writer_view = memory.slice(
    scopes=["/agent/writer", "/company/knowledge"],
    read_only=True,
)
```
**Customer support (per-customer context):**
```python
memory = Memory()
# Each customer gets isolated context
memory.remember("Prefers email communication", scope="/customer/acme-corp")
memory.remember("On enterprise plan, 50 seats", scope="/customer/acme-corp")
# Shared product docs are accessible to all agents
memory.remember("Rate limit is 1000 req/min on enterprise plan", scope="/product/docs")
```
## Memory Slices
### What Slices Are
A `MemorySlice` is a view across multiple, possibly disjoint scopes. Unlike a scope (which restricts to one subtree), a slice lets you recall from several branches simultaneously.
### When to Use Slices vs Scopes
- **Scope**: Use when an agent or code block should be restricted to a single subtree. Example: an agent that only sees `/agent/researcher`.
- **Slice**: Use when you need to combine context from multiple branches. Example: an agent that reads from its own scope plus shared company knowledge.
### Read-Only Slices
The most common pattern: give an agent read access to multiple branches without letting it write to shared areas.
```python
memory = Memory()
# Agent can recall from its own scope AND company knowledge,
# but cannot write to company knowledge
agent_view = memory.slice(
    scopes=["/agent/researcher", "/company/knowledge"],
    read_only=True,
)
matches = agent_view.recall("company security policies", limit=5)
# Searches both /agent/researcher and /company/knowledge, merges and ranks results
agent_view.remember("new finding") # Raises PermissionError (read-only)
```
### Read-Write Slices
When read-only is disabled, you can write to any of the included scopes, but you must specify which scope explicitly.
```python
view = memory.slice(scopes=["/team/alpha", "/team/beta"], read_only=False)
# Must specify scope when writing
view.remember("Cross-team decision", scope="/team/alpha", categories=["decisions"])
```
## Composite Scoring
Recall results are ranked by a weighted combination of three signals:
```
composite = semantic_weight * similarity + recency_weight * decay + importance_weight * importance
```
Where:
- **similarity** = `1 / (1 + distance)` from the vector index (0 to 1)
- **decay** = `0.5^(age_days / half_life_days)` -- exponential decay (1.0 for today, 0.5 at half-life)
- **importance** = the record's importance score (0 to 1), set at encoding time
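To make the weighting concrete, here is the arithmetic for a single hypothetical match (the weights and inputs below are example values chosen for illustration, not library defaults):

```python
# One hypothetical match, scored by hand with the formula above.
semantic_weight, recency_weight, importance_weight = 0.5, 0.3, 0.2

distance = 0.25                              # vector distance from the index
similarity = 1 / (1 + distance)              # 0.8
age_days, half_life_days = 7, 7
decay = 0.5 ** (age_days / half_life_days)   # exactly at the half-life: 0.5
importance = 0.9                             # set at encoding time

composite = (
    semantic_weight * similarity
    + recency_weight * decay
    + importance_weight * importance
)
print(f"{composite:.2f}")  # 0.73
```

Under these weights, a week-old record with strong semantic overlap and high importance can still outrank a fresher but weakly related one.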
Configure these directly on the `Memory` constructor:
```python
# Sprint retrospective: favor recent memories, short half-life
memory = Memory(
    recency_weight=0.5,
    semantic_weight=0.3,
    importance_weight=0.2,
    recency_half_life_days=7,
)
# Architecture knowledge base: favor important memories, long half-life
memory = Memory(
    recency_weight=0.1,
    semantic_weight=0.5,
    importance_weight=0.4,
    recency_half_life_days=180,
)
```
Each `MemoryMatch` includes a `match_reasons` list so you can see why a result ranked where it did (e.g. `["semantic", "recency", "importance"]`).
## LLM Analysis Layer
Memory uses the LLM in three ways:
1. **On save** -- When you omit scope, categories, or importance, the LLM analyzes the content and suggests scope, categories, importance, and metadata (entities, dates, topics).
2. **On recall** -- For deep/auto recall, the LLM analyzes the query (keywords, time hints, suggested scopes, complexity) to guide retrieval.
3. **Extract memories** -- `extract_memories(content)` breaks raw text (e.g. task output) into discrete memory statements. Agents use this before calling `remember()` on each statement so that atomic facts are stored instead of one large blob.
All analysis degrades gracefully on LLM failure -- see [Failure Behavior](#failure-behavior).
## RecallFlow (Deep Recall)
`recall()` supports three depths:
- **`depth="shallow"`** -- Direct vector search with composite scoring. Fast; used by default when agents load context.
- **`depth="deep"` or `depth="auto"`** -- Runs a multi-step RecallFlow: query analysis, scope selection, vector search, confidence-based routing, and optional recursive exploration when confidence is low.
```python
# Fast path (default for agent task context)
matches = memory.recall("What did we decide?", limit=10, depth="shallow")
# Intelligent path for complex questions
matches = memory.recall(
    "Summarize all architecture decisions from this quarter",
    limit=10,
    depth="auto",
)
```
The confidence thresholds that control the RecallFlow router are configurable:
```python
memory = Memory(
    confidence_threshold_high=0.9,  # Only synthesize when very confident
    confidence_threshold_low=0.4,   # Explore deeper more aggressively
    exploration_budget=2,           # Allow up to 2 exploration rounds
)
```
## Embedder Configuration
Memory needs an embedding model to convert text into vectors for semantic search. You can configure this in three ways.
### Passing to Memory Directly
```python
from crewai import Memory
# As a config dict
memory = Memory(embedder={"provider": "openai", "config": {"model_name": "text-embedding-3-small"}})
# As a pre-built callable
from crewai.rag.embeddings.factory import build_embedder
embedder = build_embedder({"provider": "ollama", "config": {"model_name": "mxbai-embed-large"}})
memory = Memory(embedder=embedder)
```
### Via Crew Embedder Config
When using `memory=True`, the crew's `embedder` config is passed through:
```python
from crewai import Crew
crew = Crew(
    agents=[...],
    tasks=[...],
    memory=True,
    embedder={"provider": "openai", "config": {"model_name": "text-embedding-3-small"}},
)
```
### Provider Examples
<AccordionGroup>
<Accordion title="OpenAI (default)">
```python
memory = Memory(embedder={
"provider": "openai",
"config": {
"model_name": "text-embedding-3-small",
# "api_key": "sk-...", # or set OPENAI_API_KEY env var
},
})
```
</Accordion>
<Accordion title="Ollama (local, private)">
```python
memory = Memory(embedder={
"provider": "ollama",
"config": {
"model_name": "mxbai-embed-large",
"url": "http://localhost:11434/api/embeddings",
},
})
```
</Accordion>
<Accordion title="Azure OpenAI">
```python
memory = Memory(embedder={
"provider": "azure",
"config": {
"deployment_id": "your-embedding-deployment",
"api_key": "your-azure-api-key",
"api_base": "https://your-resource.openai.azure.com",
"api_version": "2024-02-01",
},
})
```
</Accordion>
<Accordion title="Google AI">
```python
memory = Memory(embedder={
"provider": "google-generativeai",
"config": {
"model_name": "gemini-embedding-001",
# "api_key": "...", # or set GOOGLE_API_KEY env var
},
})
```
</Accordion>
<Accordion title="Google Vertex AI">
```python
memory = Memory(embedder={
"provider": "google-vertex",
"config": {
"model_name": "gemini-embedding-001",
"project_id": "your-gcp-project-id",
"location": "us-central1",
},
})
```
</Accordion>
<Accordion title="Cohere">
```python
memory = Memory(embedder={
"provider": "cohere",
"config": {
"model_name": "embed-english-v3.0",
# "api_key": "...", # or set COHERE_API_KEY env var
},
})
```
</Accordion>
<Accordion title="VoyageAI">
```python
memory = Memory(embedder={
"provider": "voyageai",
"config": {
"model": "voyage-3",
# "api_key": "...", # or set VOYAGE_API_KEY env var
},
})
```
</Accordion>
<Accordion title="AWS Bedrock">
```python
memory = Memory(embedder={
"provider": "amazon-bedrock",
"config": {
"model_name": "amazon.titan-embed-text-v1",
# Uses default AWS credentials (boto3 session)
},
})
```
</Accordion>
<Accordion title="Hugging Face">
```python
memory = Memory(embedder={
"provider": "huggingface",
"config": {
"model_name": "sentence-transformers/all-MiniLM-L6-v2",
},
})
```
</Accordion>
<Accordion title="Jina">
```python
memory = Memory(embedder={
"provider": "jina",
"config": {
"model_name": "jina-embeddings-v2-base-en",
# "api_key": "...", # or set JINA_API_KEY env var
},
})
```
</Accordion>
<Accordion title="IBM WatsonX">
```python
memory = Memory(embedder={
"provider": "watsonx",
"config": {
"model_id": "ibm/slate-30m-english-rtrvr",
"api_key": "your-watsonx-api-key",
"project_id": "your-project-id",
"url": "https://us-south.ml.cloud.ibm.com",
},
})
```
</Accordion>
<Accordion title="Custom Embedder">
```python
# Pass any callable that takes a list of strings and returns a list of vectors
def my_embedder(texts: list[str]) -> list[list[float]]:
    # Your embedding logic here; return one fixed-length vector per input text
    return [[0.0] * 384 for _ in texts]

memory = Memory(embedder=my_embedder)
```
</Accordion>
</AccordionGroup>
### Provider Reference
| Provider | Key | Typical Model | Notes |
| :--- | :--- | :--- | :--- |
| OpenAI | `openai` | `text-embedding-3-small` | Default. Set `OPENAI_API_KEY`. |
| Ollama | `ollama` | `mxbai-embed-large` | Local, no API key needed. |
| Azure OpenAI | `azure` | `text-embedding-ada-002` | Requires `deployment_id`. |
| Google AI | `google-generativeai` | `gemini-embedding-001` | Set `GOOGLE_API_KEY`. |
| Google Vertex | `google-vertex` | `gemini-embedding-001` | Requires `project_id`. |
| Cohere | `cohere` | `embed-english-v3.0` | Strong multilingual support. |
| VoyageAI | `voyageai` | `voyage-3` | Optimized for retrieval. |
| AWS Bedrock | `amazon-bedrock` | `amazon.titan-embed-text-v1` | Uses boto3 credentials. |
| Hugging Face | `huggingface` | `all-MiniLM-L6-v2` | Local sentence-transformers. |
| Jina | `jina` | `jina-embeddings-v2-base-en` | Set `JINA_API_KEY`. |
| IBM WatsonX | `watsonx` | `ibm/slate-30m-english-rtrvr` | Requires `project_id`. |
| Sentence Transformer | `sentence-transformer` | `all-MiniLM-L6-v2` | Local, no API key. |
| Custom | `custom` | -- | Requires `embedding_callable`. |
## Storage Backend
- **Default**: LanceDB, stored under `./.crewai/memory` (or `$CREWAI_STORAGE_DIR/memory` if the env var is set, or the path you pass as `storage="path/to/dir"`).
- **Custom backend**: Implement the `StorageBackend` protocol (see `crewai.memory.storage.backend`) and pass an instance to `Memory(storage=your_backend)`.
## Discovery
Inspect the scope hierarchy, categories, and records:
```python
memory.tree() # Formatted tree of scopes and record counts
memory.tree("/project", max_depth=2) # Subtree view
memory.info("/project") # ScopeInfo: record_count, categories, oldest/newest
memory.list_scopes("/") # Immediate child scopes
memory.list_categories() # Category names and counts
memory.list_records(scope="/project/alpha", limit=20) # Records in a scope, newest first
```
## Failure Behavior
If the LLM fails during analysis (network error, rate limit, invalid response), memory degrades gracefully:
- **Save analysis** -- A warning is logged and the memory is still stored with default scope `/`, empty categories, and importance `0.5`.
- **Extract memories** -- The full content is stored as a single memory so nothing is dropped.
- **Query analysis** -- Recall falls back to simple scope selection and vector search so you still get results.
No exception is raised for these analysis failures; only storage or embedder failures will raise.
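The save-analysis fallback follows a simple pattern, sketched here for illustration (`analyze` stands in for the internal LLM call; this is not the actual implementation):

```python
def analyze_or_default(analyze, content: str) -> dict:
    """Illustrative sketch of the documented graceful degradation."""
    try:
        return analyze(content)
    except Exception:
        # Degrade gracefully: root scope, no categories, default importance.
        return {"scope": "/", "categories": [], "importance": 0.5}
```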
## Privacy Note
Memory content is sent to the configured LLM for analysis (scope/categories/importance on save, query analysis and optional deep recall). For sensitive data, use a local LLM (e.g. Ollama) or ensure your provider meets your compliance requirements.
## Memory Events
All memory operations emit events with `source_type="unified_memory"`. You can listen for timing, errors, and content.
| Event | Description | Key Properties |
| :---- | :---------- | :------------- |
| **MemoryQueryStartedEvent** | Query begins | `query`, `limit` |
| **MemoryQueryCompletedEvent** | Query succeeds | `query`, `results`, `query_time_ms` |
| **MemoryQueryFailedEvent** | Query fails | `query`, `error` |
| **MemorySaveStartedEvent** | Save begins | `value`, `metadata` |
| **MemorySaveCompletedEvent** | Save succeeds | `value`, `save_time_ms` |
| **MemorySaveFailedEvent** | Save fails | `value`, `error` |
| **MemoryRetrievalStartedEvent** | Agent retrieval starts | `task_id` |
| **MemoryRetrievalCompletedEvent** | Agent retrieval done | `task_id`, `memory_content`, `retrieval_time_ms` |
For example, to monitor query time:
```python
from crewai.events import BaseEventListener, MemoryQueryCompletedEvent
class MemoryMonitor(BaseEventListener):
def setup_listeners(self, crewai_event_bus):
@crewai_event_bus.on(MemoryQueryCompletedEvent)
def on_done(source, event):
if getattr(event, "source_type", None) == "unified_memory":
print(f"Query '{event.query}' completed in {event.query_time_ms:.0f}ms")
```
## Troubleshooting
**Memory not persisting?**
- Ensure the storage path is writable (default `./.crewai/memory`). Pass `storage="./your_path"` to use a different directory, or set the `CREWAI_STORAGE_DIR` environment variable.
- When using a crew, confirm `memory=True` or `memory=Memory(...)` is set.
**Slow recall?**
- Use `depth="shallow"` for routine agent context. Reserve `depth="auto"` or `"deep"` for complex queries.
**LLM analysis errors in logs?**
- Memory still saves/recalls with safe defaults. Check API keys, rate limits, and model availability if you want full LLM analysis.
**Browse memory from the terminal:**
```bash
crewai memory # Opens the TUI browser
crewai memory --storage-path ./my_memory # Point to a specific directory
```
**Reset memory (e.g. for tests):**
```python
crew.reset_memories(command_type="memory") # Resets unified memory
# Or on a Memory instance:
memory.reset() # All scopes
memory.reset(scope="/project/old") # Only that subtree
```
## Configuration Reference
All configuration is passed as keyword arguments to `Memory(...)`. Every parameter has a sensible default.
| Parameter | Default | Description |
| :--- | :--- | :--- |
| `llm` | `"gpt-4o-mini"` | LLM for analysis (model name or `BaseLLM` instance). |
| `storage` | `"lancedb"` | Storage backend (`"lancedb"`, a path string, or a `StorageBackend` instance). |
| `embedder` | `None` (OpenAI default) | Embedder (config dict, callable, or `None` for default OpenAI). |
| `recency_weight` | `0.3` | Weight for recency in composite score. |
| `semantic_weight` | `0.5` | Weight for semantic similarity in composite score. |
| `importance_weight` | `0.2` | Weight for importance in composite score. |
| `recency_half_life_days` | `30` | Days for recency score to halve (exponential decay). |
| `consolidation_threshold` | `0.85` | Similarity above which consolidation is triggered on save. Set to `1.0` to disable. |
| `consolidation_limit` | `5` | Max existing records to compare during consolidation. |
| `default_importance` | `0.5` | Importance assigned when not provided and LLM analysis is skipped. |
| `confidence_threshold_high` | `0.8` | Recall confidence above which results are returned directly. |
| `confidence_threshold_low` | `0.5` | Recall confidence below which deeper exploration is triggered. |
| `complex_query_threshold` | `0.7` | For complex queries, explore deeper below this confidence. |
| `exploration_budget` | `1` | Number of LLM-driven exploration rounds during deep recall. |
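The three weights combine with recency decay into a single composite score per record. A plausible sketch of that combination, assuming a weighted sum with exponential decay (the exact formula in the implementation may differ):

```python
def composite_score(semantic: float, importance: float, age_days: float,
                    recency_weight: float = 0.3,
                    semantic_weight: float = 0.5,
                    importance_weight: float = 0.2,
                    half_life_days: float = 30.0) -> float:
    """Illustrative weighted sum; recency halves every `half_life_days`."""
    recency = 0.5 ** (age_days / half_life_days)
    return (recency_weight * recency
            + semantic_weight * semantic
            + importance_weight * importance)
```

With the defaults, a brand-new record with perfect semantic match and importance scores 1.0, while a 30-day-old record loses half of its recency contribution.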