Vector memory revamp (part 1: refactoring) (#4208)
Additional changes:

* Improve typing
* Modularize message history memory & fix/refactor lots of things
* Fix summarization
* Move memory relevance calculation to MemoryItem & improve test
* Fix import warnings in web_selenium.py
* Remove `memory_add` ghost command
* Implement overlap in `split_text`
* Move memory tests into subdirectory
* Remove deprecated `get_ada_embedding()` and helpers
* Fix used token calculation in `chat_with_ai`
* Replace Message TypedDict by dataclass
* Fix AgentManager singleton issues in tests

Co-authored-by: Auto-GPT-Bot <github-bot@agpt.co>
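One of the listed changes, overlap in `split_text`, is easiest to picture with a small sketch. This is not the PR's implementation: the real `split_text` may differ in signature and in how chunk lengths are measured, and the function name and the `max_length`/`overlap` parameters below are illustrative only.

```python
# Minimal sketch of overlapping text splitting (illustrative, not Auto-GPT's split_text).
# Each chunk re-includes the last `overlap` characters of the previous chunk, so
# context that straddles a chunk boundary is not lost.
def split_text_with_overlap(text: str, max_length: int = 200, overlap: int = 50) -> list[str]:
    if overlap >= max_length:
        raise ValueError("overlap must be smaller than max_length")
    chunks = []
    step = max_length - overlap  # advance by less than a full chunk to create overlap
    for start in range(0, len(text), step):
        chunks.append(text[start:start + max_length])
        if start + max_length >= len(text):
            break
    return chunks
```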
Commit bfbe613960 (parent 10489e0df2), committed by GitHub.
@@ -33,7 +33,7 @@ Create your agent fixture.
 
 ```python
 def kubernetes_agent(
-    agent_test_config, memory_local_cache, workspace: Workspace
+    agent_test_config, memory_json_file, workspace: Workspace
 ):
     # Please choose the commands your agent will need to beat the challenges, the full list is available in the main.py
     # (we 're working on a better way to design this, for now you have to look at main.py)
@@ -56,7 +56,7 @@ def kubernetes_agent(
     agent = Agent(
         # We also give the AI a name
         ai_name="Kubernetes-Demo",
-        memory=memory_local_cache,
+        memory=memory_json_file,
         full_message_history=[],
         command_registry=command_registry,
         config=ai_config,
@@ -131,5 +131,3 @@ def test_information_retrieval_challenge_a(kubernetes_agent, monkeypatch) -> None
 
 
 ```
-
-
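The fixture change above works purely through pytest's dependency injection: a fixture receives another fixture by naming it as a parameter, so swapping `memory_local_cache` for `memory_json_file` in the signature changes which memory object is wired into the agent. A minimal, self-contained sketch of that mechanism (not Auto-GPT code; the fixture bodies are stand-ins):

```python
import pytest

@pytest.fixture
def memory_json_file():
    # Stand-in for Auto-GPT's real JSON-file-backed memory fixture.
    return {"backend": "json_file", "items": []}

@pytest.fixture
def kubernetes_agent(memory_json_file):
    # Whatever the memory fixture above returns is injected here by parameter name.
    return {"ai_name": "Kubernetes-Demo", "memory": memory_json_file}

def test_agent_uses_json_file_memory(kubernetes_agent):
    assert kubernetes_agent["memory"]["backend"] == "json_file"
```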
@@ -1,3 +1,9 @@
+!!! warning
+    The Pinecone, Milvus and Weaviate memory backends were rendered incompatible
+    by work on the memory system, and have been removed in `master`.
+    Whether support will be added back in the future is subject to discussion,
+    feel free to pitch in: https://github.com/Significant-Gravitas/Auto-GPT/discussions/4280
+
 ## Setting Your Cache Type
 
 By default, Auto-GPT set up with Docker Compose will use Redis as its memory backend.
@@ -6,7 +12,7 @@ Otherwise, the default is LocalCache (which stores memory in a JSON file).
 To switch to a different backend, change the `MEMORY_BACKEND` in `.env`
 to the value that you want:
 
-* `local` uses a local JSON cache file
+* `json_file` uses a local JSON cache file
 * `pinecone` uses the Pinecone.io account you configured in your ENV settings
 * `redis` will use the redis cache that you configured
 * `milvus` will use the milvus cache that you configured
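As a concrete example of the renamed setting above, selecting the JSON file backend after this change would look like this in `.env` (example value only; the other backends listed in the diff remain valid):

```
# .env (example)
MEMORY_BACKEND=json_file
```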