Compare commits


25 Commits

Author SHA1 Message Date
Reinier van der Leer
d8f5cdbb50 Release v0.3.0 (#3683) 2023-05-02 16:53:43 +02:00
Reinier van der Leer
6e5ddeb015 v0.3.0 2023-05-02 16:32:19 +02:00
Reinier van der Leer
725abbb662 Fix bulletin 2023-05-02 16:30:37 +02:00
Reinier van der Leer
e4129e1a3a Fix CI for stable 2023-05-02 13:35:23 +02:00
Reinier van der Leer
dbd68df40c Merge branch 'stable' into release-v0.3 2023-05-02 13:27:40 +02:00
Reinier van der Leer
3a80e2f399 Revert "Revert "Merge branch 'master' into stable""
This reverts commit 999990b614.
2023-05-02 13:26:30 +02:00
Reinier van der Leer
0e1c0c55f8 Synchronize stable -> master (#3677)
* Revert "Merge branch 'master' into stable"

This reverts commit c4008971f7, reversing
changes made to fe855fef13.

* Fix `validate_json` file error when cwd != project root (#2665)

Co-authored-by: qianchengliang <qianchengliang1@huawei.com>

* Revert "Revert "Merge branch 'master' into stable""

This reverts commit 999990b614.

---------

Co-authored-by: BillSchumacher <34168009+BillSchumacher@users.noreply.github.com>
Co-authored-by: Mick <30898949+mickjagger19@users.noreply.github.com>
Co-authored-by: qianchengliang <qianchengliang1@huawei.com>
2023-05-02 12:17:09 +01:00
gravelBridge
2e9c80a486 Fix MACOS Zip Import Error when compressing plugin (#3629)
Co-authored-by: Nicholas Tindle <nick@ntindle.com>
2023-05-01 22:49:44 -05:00
Reinier van der Leer
1d26f6b697 Add warning for LLM to avoid context overflow (#3646) 2023-05-01 19:48:27 -05:00
kinance
4767fe63d3 Fix the maximum context length issue by chunking (#3222)
Co-authored-by: Reinier van der Leer <github@pwuts.nl>
2023-05-01 20:13:24 +02:00
k-boikov
0ef6f06462 Fix validate_json scheme path (#3631)
Co-authored-by: Reinier van der Leer <github@pwuts.nl>
2023-05-01 20:06:22 +02:00
sidewaysthought
a5f856328d Fix multi-byte character handling in read_file (#3173)
Co-authored-by: Reinier van der Leer <github@pwuts.nl>
2023-05-01 19:50:50 +02:00
non-adjective
7fc6f2abfc update web_selenium.py to use try-with for headers (#2988)
Co-authored-by: Richard Beales <rich@richbeales.net>
2023-05-01 16:45:52 +01:00
Bob
94ec4a4ea5 Fix file operations logger (#3489)
Co-authored-by: Reinier van der Leer <github@pwuts.nl>
2023-05-01 17:37:30 +02:00
Ashutosh Kataria
9c56b1beef Message about Pinecone initializing (#1194)
Co-authored-by: Nicholas Tindle <nick@ntindle.com>
Co-authored-by: Richard Beales <rich@richbeales.net>
2023-05-01 15:31:28 +01:00
AbTrax
34261a1583 Fix side effects on message history (#3619)
Co-authored-by: Reinier van der Leer <github@pwuts.nl>
2023-05-01 15:16:26 +02:00
Reinier van der Leer
d8968ae899 Update documentation URLs to docs.agpt.co (#3621) 2023-05-01 14:01:13 +02:00
Valay Dave
6ae90a3ea2 [bug] list_files api signature change in data_ingestion.py and lo… (#3601) 2023-05-01 06:57:16 +01:00
zyt600
c317cf0e75 fix bug #3455 (#3591)
Co-authored-by: Nicholas Tindle <nick@ntindle.com>
2023-04-30 16:24:07 -05:00
Richard Beales
c1329c92fd rename search_files to list_files (#3595) 2023-04-30 16:14:53 -05:00
Toran Bruce Richards
abd6115aea Add website to README.md 2023-05-01 08:35:42 +12:00
WladBlank
6d2c0c4242 add report method to typewriter_log & load report plugins into logger (#3582)
* add report method to typewriter_log & load report plugins into logger

* more clear log and comment

* isort and black
2023-04-30 09:43:01 -07:00
k-boikov
aab79fdf6d added tests for clone_repository (#3558)
Co-authored-by: Nicholas Tindle <nick@ntindle.com>
Co-authored-by: Richard Beales <rich@richbeales.net>
2023-04-30 10:41:45 +01:00
Mick
91537b0496 Fix validate_json file error when cwd != project root (#2665)
Co-authored-by: qianchengliang <qianchengliang1@huawei.com>
2023-04-21 03:26:28 +02:00
BillSchumacher
999990b614 Revert "Merge branch 'master' into stable"
This reverts commit c4008971f7, reversing
changes made to fe855fef13.
2023-04-20 01:15:46 -05:00
32 changed files with 2356 additions and 286 deletions


@@ -49,6 +49,14 @@ OPENAI_API_KEY=your-openai-api-key
# FAST_TOKEN_LIMIT=4000
# SMART_TOKEN_LIMIT=8000
### EMBEDDINGS
## EMBEDDING_MODEL - Model to use for creating embeddings
## EMBEDDING_TOKENIZER - Tokenizer to use for chunking large inputs
## EMBEDDING_TOKEN_LIMIT - Chunk size limit for large inputs
# EMBEDDING_MODEL=text-embedding-ada-002
# EMBEDDING_TOKENIZER=cl100k_base
# EMBEDDING_TOKEN_LIMIT=8191
################################################################################
### MEMORY
################################################################################


@@ -4,7 +4,7 @@ on:
push:
branches: [ master ]
pull_request:
branches: [ master ]
branches: [ master, stable ]
concurrency:
group: ${{ format('ci-{0}', github.head_ref && format('pr-{0}', github.event.pull_request.number) || github.sha) }}


@@ -4,7 +4,7 @@ on:
push:
branches: [ master ]
pull_request:
branches: [ master ]
branches: [ master, stable ]
concurrency:
group: ${{ format('docker-ci-{0}', github.head_ref && format('pr-{0}', github.event.pull_request.number) || github.sha) }}


@@ -1,9 +1,24 @@
Welcome to Auto-GPT! We'll keep you informed of the latest news and features by printing messages here.
If you don't wish to see this message, you can run Auto-GPT with the --skip-news flag
# Website and Documentation Site 📰📖
Check out *https://agpt.co*, the official news & updates site for Auto-GPT!
The documentation also has a place here, at *https://docs.agpt.co*
# INCLUDED COMMAND 'send_tweet' IS DEPRICATED, AND WILL BE REMOVED IN THE NEXT STABLE RELEASE
Base Twitter functionality (and more) is now covered by plugins: https://github.com/Significant-Gravitas/Auto-GPT-Plugins
# 🚀 v0.3.0 Release 🚀
Over a week and 275 pull requests have passed since v0.2.2, and we are happy to announce
the release of v0.3.0! *From now on, we will be focusing on major improvements* rather
than bugfixes, as we feel stability has reached a reasonable level. Most remaining
issues relate to limitations in prompt generation and the memory system, which will be
the focus of our efforts for the next release.
## Changes to Docker configuration
The workdir has been changed from /home/appuser to /app. Be sure to update any volume mounts accordingly.
Highlights and notable changes in this release:
## Plugin support 🔌
Auto-GPT now has support for plugins! With plugins, you can extend Auto-GPT's abilities,
adding support for third-party services and more.
See https://github.com/Significant-Gravitas/Auto-GPT-Plugins for instructions and available plugins.
## Changes to Docker configuration 🐋
The workdir has been changed from */home/appuser* to */app*.
Be sure to update any volume mounts accordingly!
# ⚠️ Command `send_tweet` is DEPRECATED, and will be removed in v0.4.0 ⚠️
Twitter functionality (and more) is now covered by plugins, see [Plugin support 🔌]


@@ -8,7 +8,7 @@ This document provides guidelines and best practices to help you contribute effe
By participating in this project, you agree to abide by our [Code of Conduct]. Please read it to understand the expectations we have for everyone who contributes to this project.
[Code of Conduct]: https://significant-gravitas.github.io/Auto-GPT/code-of-conduct.md
[Code of Conduct]: https://docs.agpt.co/code-of-conduct/
## 📢 A Quick Word
Right now we will not be accepting any Contributions that add non-essential commands to Auto-GPT.
@@ -101,7 +101,7 @@ https://github.com/Significant-Gravitas/Auto-GPT/pulls?q=is%3Apr+is%3Aopen+-labe
If you add or change code, make sure the updated code is covered by tests.
To increase coverage if necessary, [write tests using pytest].
For more info on running tests, please refer to ["Running tests"](https://significant-gravitas.github.io/Auto-GPT/testing/).
For more info on running tests, please refer to ["Running tests"](https://docs.agpt.co/testing/).
[write tests using pytest]: https://realpython.com/pytest-python-testing/


@@ -1,4 +1,5 @@
# Auto-GPT: An Autonomous GPT-4 Experiment
[![Official Website](https://img.shields.io/badge/Official%20Website-agpt.co-blue?style=flat&logo=world&logoColor=white)](https://agpt.co)
[![Unit Tests](https://img.shields.io/github/actions/workflow/status/Significant-Gravitas/Auto-GPT/ci.yml?label=unit%20tests)](https://github.com/Significant-Gravitas/Auto-GPT/actions/workflows/ci.yml)
[![Discord Follow](https://dcbadge.vercel.app/api/server/autogpt?style=flat)](https://discord.gg/autogpt)
[![GitHub Repo stars](https://img.shields.io/github/stars/Significant-Gravitas/auto-gpt?style=social)](https://github.com/Significant-Gravitas/Auto-GPT/stargazers)
@@ -99,21 +100,21 @@ Your support is greatly appreciated. Development of this free, open-source proje
Please see the [documentation][docs] for full setup instructions and configuration options.
[docs]: https://significant-gravitas.github.io/Auto-GPT/
[docs]: https://docs.agpt.co/
## 📖 Documentation
* [⚙️ Setup][docs/setup]
* [💻 Usage][docs/usage]
* [🔌 Plugins][docs/plugins]
* Configuration
* [🔍 Web Search](https://significant-gravitas.github.io/Auto-GPT/configuration/search/)
* [🧠 Memory](https://significant-gravitas.github.io/Auto-GPT/configuration/memory/)
* [🗣️ Voice (TTS)](https://significant-gravitas.github.io/Auto-GPT/configuration/voice/)
* [🖼️ Image Generation](https://significant-gravitas.github.io/Auto-GPT/configuration/imagegen/)
* [🔍 Web Search](https://docs.agpt.co/configuration/search/)
* [🧠 Memory](https://docs.agpt.co/configuration/memory/)
* [🗣️ Voice (TTS)](https://docs.agpt.co/configuration/voice/)
* [🖼️ Image Generation](https://docs.agpt.co/configuration/imagegen/)
[docs/setup]: https://significant-gravitas.github.io/Auto-GPT/setup/
[docs/usage]: https://significant-gravitas.github.io/Auto-GPT/usage/
[docs/plugins]: https://significant-gravitas.github.io/Auto-GPT/plugins/
[docs/setup]: https://docs.agpt.co/setup/
[docs/usage]: https://docs.agpt.co/usage/
[docs/plugins]: https://docs.agpt.co/plugins/
## ⚠️ Limitations


@@ -5,6 +5,7 @@ from autogpt.config import Config
from autogpt.json_utils.json_fix_llm import fix_json_using_multiple_techniques
from autogpt.json_utils.utilities import LLM_DEFAULT_RESPONSE_FORMAT, validate_json
from autogpt.llm import chat_with_ai, create_chat_completion, create_chat_message
from autogpt.llm.token_counter import count_string_tokens
from autogpt.logs import logger, print_assistant_thoughts
from autogpt.speech import say_text
from autogpt.spinner import Spinner
@@ -233,6 +234,16 @@ class Agent:
)
result = f"Command {command_name} returned: " f"{command_result}"
result_tlength = count_string_tokens(
str(command_result), cfg.fast_llm_model
)
memory_tlength = count_string_tokens(
str(self.summary_memory), cfg.fast_llm_model
)
if result_tlength + memory_tlength + 600 > cfg.fast_token_limit:
result = f"Failure: command {command_name} returned too much output. \
Do not execute this command again with the same arguments."
for plugin in cfg.plugins:
if not plugin.can_handle_post_command():
continue
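
The hunk above is the new guard from #3646: before a command result is kept, its token count plus the running summary's must leave headroom under the fast model's limit. A minimal standalone sketch of the same check, using tiktoken directly in place of autogpt.llm.token_counter (the 4000-token limit and 600-token reserve mirror the defaults in this diff; the model name is an assumption):

import tiktoken

def count_string_tokens(text: str, model: str = "gpt-3.5-turbo") -> int:
    # Encode with the model's tokenizer and count, as token_counter does.
    return len(tiktoken.encoding_for_model(model).encode(text))

def result_fits(command_result: str, summary_memory: str,
                fast_token_limit: int = 4000, reserve: int = 600) -> bool:
    # Mirrors: result_tlength + memory_tlength + 600 > cfg.fast_token_limit
    used = count_string_tokens(command_result) + count_string_tokens(summary_memory)
    return used + reserve <= fast_token_limit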


@@ -1,10 +1,12 @@
"""File operations for AutoGPT"""
from __future__ import annotations
import hashlib
import os
import os.path
from typing import Generator
from typing import Dict, Generator, Literal, Tuple
import charset_normalizer
import requests
from colorama import Back, Fore
from requests.adapters import HTTPAdapter, Retry
@@ -17,31 +19,96 @@ from autogpt.utils import readable_file_size
CFG = Config()
Operation = Literal["write", "append", "delete"]
def check_duplicate_operation(operation: str, filename: str) -> bool:
"""Check if the operation has already been performed on the given file
Args:
operation (str): The operation to check for
filename (str): The name of the file to check for
def text_checksum(text: str) -> str:
"""Get the hex checksum for the given text."""
return hashlib.md5(text.encode("utf-8")).hexdigest()
def operations_from_log(log_path: str) -> Generator[Tuple[Operation, str, str | None]]:
"""Parse the file operations log and return a tuple containing the log entries"""
try:
log = open(log_path, "r", encoding="utf-8")
except FileNotFoundError:
return
for line in log:
line = line.replace("File Operation Logger", "").strip()
if not line:
continue
operation, tail = line.split(": ", maxsplit=1)
operation = operation.strip()
if operation in ("write", "append"):
try:
path, checksum = (x.strip() for x in tail.rsplit(" #", maxsplit=1))
except ValueError:
path, checksum = tail.strip(), None
yield (operation, path, checksum)
elif operation == "delete":
yield (operation, tail.strip(), None)
log.close()
def file_operations_state(log_path: str) -> Dict:
"""Iterates over the operations log and returns the expected state.
Parses a log file at CFG.file_logger_path to construct a dictionary that maps
each file path written or appended to its checksum. Deleted files are removed
from the dictionary.
Returns:
bool: True if the operation has already been performed on the file
A dictionary mapping file paths to their checksums.
Raises:
FileNotFoundError: If CFG.file_logger_path is not found.
ValueError: If the log file content is not in the expected format.
"""
log_content = read_file(CFG.file_logger_path)
log_entry = f"{operation}: {filename}\n"
return log_entry in log_content
state = {}
for operation, path, checksum in operations_from_log(log_path):
if operation in ("write", "append"):
state[path] = checksum
elif operation == "delete":
del state[path]
return state
def log_operation(operation: str, filename: str) -> None:
def is_duplicate_operation(
operation: Operation, filename: str, checksum: str | None = None
) -> bool:
"""Check if the operation has already been performed
Args:
operation: The operation to check for
filename: The name of the file to check for
checksum: The checksum of the contents to be written
Returns:
True if the operation has already been performed on the file
"""
state = file_operations_state(CFG.file_logger_path)
if operation == "delete" and filename not in state:
return True
if operation == "write" and state.get(filename) == checksum:
return True
return False
def log_operation(operation: str, filename: str, checksum: str | None = None) -> None:
"""Log the file operation to the file_logger.txt
Args:
operation (str): The operation to log
filename (str): The name of the file the operation was performed on
operation: The operation to log
filename: The name of the file the operation was performed on
checksum: The checksum of the contents to be written
"""
log_entry = f"{operation}: {filename}\n"
append_to_file(CFG.file_logger_path, log_entry, should_log=False)
log_entry = f"{operation}: {filename}"
if checksum is not None:
log_entry += f" #{checksum}"
logger.debug(f"Logging file operation: {log_entry}")
append_to_file(CFG.file_logger_path, f"{log_entry}\n", should_log=False)
def split_file(
@@ -87,11 +154,12 @@ def read_file(filename: str) -> str:
str: The contents of the file
"""
try:
with open(filename, "r", encoding="utf-8") as f:
content = f.read()
return content
except Exception as e:
return f"Error: {str(e)}"
charset_match = charset_normalizer.from_path(filename).best()
encoding = charset_match.encoding
logger.debug(f"Read file '{filename}' with encoding '{encoding}'")
return str(charset_match)
except Exception as err:
return f"Error: {err}"
def ingest_file(
@@ -124,8 +192,8 @@ def ingest_file(
memory.add(memory_to_add)
logger.info(f"Done ingesting {num_chunks} chunks from {filename}.")
except Exception as e:
logger.info(f"Error while ingesting file '{filename}': {str(e)}")
except Exception as err:
logger.info(f"Error while ingesting file '{filename}': {err}")
@command("write_to_file", "Write to file", '"filename": "<filename>", "text": "<text>"')
@@ -139,17 +207,18 @@ def write_to_file(filename: str, text: str) -> str:
Returns:
str: A message indicating success or failure
"""
if check_duplicate_operation("write", filename):
checksum = text_checksum(text)
if is_duplicate_operation("write", filename, checksum):
return "Error: File has already been updated."
try:
directory = os.path.dirname(filename)
os.makedirs(directory, exist_ok=True)
with open(filename, "w", encoding="utf-8") as f:
f.write(text)
log_operation("write", filename)
log_operation("write", filename, checksum)
return "File written to successfully."
except Exception as e:
return f"Error: {str(e)}"
except Exception as err:
return f"Error: {err}"
@command(
@@ -169,15 +238,17 @@ def append_to_file(filename: str, text: str, should_log: bool = True) -> str:
try:
directory = os.path.dirname(filename)
os.makedirs(directory, exist_ok=True)
with open(filename, "a") as f:
with open(filename, "a", encoding="utf-8") as f:
f.write(text)
if should_log:
log_operation("append", filename)
with open(filename, "r", encoding="utf-8") as f:
checksum = text_checksum(f.read())
log_operation("append", filename, checksum=checksum)
return "Text appended successfully."
except Exception as e:
return f"Error: {str(e)}"
except Exception as err:
return f"Error: {err}"
@command("delete_file", "Delete file", '"filename": "<filename>"')
@@ -190,19 +261,19 @@ def delete_file(filename: str) -> str:
Returns:
str: A message indicating success or failure
"""
if check_duplicate_operation("delete", filename):
if is_duplicate_operation("delete", filename):
return "Error: File has already been deleted."
try:
os.remove(filename)
log_operation("delete", filename)
return "File deleted successfully."
except Exception as e:
return f"Error: {str(e)}"
except Exception as err:
return f"Error: {err}"
@command("search_files", "Search Files", '"directory": "<directory>"')
def search_files(directory: str) -> list[str]:
"""Search for files in a directory
@command("list_files", "List Files in Directory", '"directory": "<directory>"')
def list_files(directory: str) -> list[str]:
"""lists files in a directory recursively
Args:
directory (str): The directory to search in
@@ -266,7 +337,7 @@ def download_file(url, filename):
spinner.update_message(f"{message} {progress}")
return f'Successfully downloaded and locally stored file: "{filename}"! (Size: {readable_file_size(downloaded_size)})'
except requests.HTTPError as e:
return f"Got an HTTP Error whilst trying to download file: {e}"
except Exception as e:
return "Error: " + str(e)
except requests.HTTPError as err:
return f"Got an HTTP Error whilst trying to download file: {err}"
except Exception as err:
return f"Error: {err}"


@@ -175,4 +175,9 @@ def add_header(driver: WebDriver) -> None:
Returns:
None
"""
driver.execute_script(open(f"{FILE_DIR}/js/overlay.js", "r").read())
try:
with open(f"{FILE_DIR}/js/overlay.js", "r") as overlay_file:
overlay_script = overlay_file.read()
driver.execute_script(overlay_script)
except Exception as e:
print(f"Error executing overlay.js: {e}")


@@ -35,6 +35,9 @@ class Config(metaclass=Singleton):
self.smart_llm_model = os.getenv("SMART_LLM_MODEL", "gpt-4")
self.fast_token_limit = int(os.getenv("FAST_TOKEN_LIMIT", 4000))
self.smart_token_limit = int(os.getenv("SMART_TOKEN_LIMIT", 8000))
self.embedding_model = os.getenv("EMBEDDING_MODEL", "text-embedding-ada-002")
self.embedding_tokenizer = os.getenv("EMBEDDING_TOKENIZER", "cl100k_base")
self.embedding_token_limit = int(os.getenv("EMBEDDING_TOKEN_LIMIT", 8191))
self.browse_chunk_max_length = int(os.getenv("BROWSE_CHUNK_MAX_LENGTH", 3000))
self.browse_spacy_language_model = os.getenv(
"BROWSE_SPACY_LANGUAGE_MODEL", "en_core_web_sm"
@@ -216,6 +219,18 @@ class Config(metaclass=Singleton):
"""Set the smart token limit value."""
self.smart_token_limit = value
def set_embedding_model(self, value: str) -> None:
"""Set the model to use for creating embeddings."""
self.embedding_model = value
def set_embedding_tokenizer(self, value: str) -> None:
"""Set the tokenizer to use when creating embeddings."""
self.embedding_tokenizer = value
def set_embedding_token_limit(self, value: int) -> None:
"""Set the token limit for creating embeddings."""
self.embedding_token_limit = value
def set_browse_chunk_max_length(self, value: int) -> None:
"""Set the browse_website command chunk max length value."""
self.browse_chunk_max_length = value


@@ -1,5 +1,6 @@
"""Utilities for the json_fixes package."""
import json
import os.path
import re
from jsonschema import Draft7Validator
@@ -35,7 +36,8 @@ def validate_json(json_object: object, schema_name: str) -> dict | None:
:param schema_name: str
:type json_object: object
"""
with open(f"autogpt/json_utils/{schema_name}.json", "r") as f:
scheme_file = os.path.join(os.path.dirname(__file__), f"{schema_name}.json")
with open(scheme_file, "r") as f:
schema = json.load(f)
validator = Draft7Validator(schema)
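
The change above is the whole fix for #2665 and #3631: the schema file is resolved relative to the module instead of the working directory, so validate_json behaves the same no matter where Auto-GPT is launched from. The idiom in isolation (the helper name is illustrative):

import os.path

def schema_path(schema_name: str) -> str:
    # Anchor the lookup to this module's directory, not os.getcwd().
    return os.path.join(os.path.dirname(__file__), f"{schema_name}.json")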


@@ -11,6 +11,7 @@ from autogpt.llm.base import (
from autogpt.llm.chat import chat_with_ai, create_chat_message, generate_context
from autogpt.llm.llm_utils import (
call_ai_function,
chunked_tokens,
create_chat_completion,
get_ada_embedding,
)
@@ -32,6 +33,7 @@ __all__ = [
"call_ai_function",
"create_chat_completion",
"get_ada_embedding",
"chunked_tokens",
"COSTS",
"count_message_tokens",
"count_string_tokens",


@@ -2,9 +2,12 @@ from __future__ import annotations
import functools
import time
from itertools import islice
from typing import List, Optional
import numpy as np
import openai
import tiktoken
from colorama import Fore, Style
from openai.error import APIError, RateLimitError, Timeout
@@ -30,7 +33,7 @@ def retry_openai_api(
api_key_error_msg = (
f"Please double check that you have setup a "
f"{Fore.CYAN + Style.BRIGHT}PAID{Style.RESET_ALL} OpenAI API Account. You can "
f"read more here: {Fore.CYAN}https://significant-gravitas.github.io/Auto-GPT/setup/#getting-an-api-key{Fore.RESET}"
f"read more here: {Fore.CYAN}https://docs.agpt.co/setup/#getting-an-api-key{Fore.RESET}"
)
backoff_msg = (
f"{Fore.RED}Error: API Bad gateway. Waiting {{backoff}} seconds...{Fore.RESET}"
@@ -174,7 +177,7 @@ def create_chat_completion(
if not warned_user:
logger.double_check(
f"Please double check that you have setup a {Fore.CYAN + Style.BRIGHT}PAID{Style.RESET_ALL} OpenAI API Account. "
+ f"You can read more here: {Fore.CYAN}https://significant-gravitas.github.io/Auto-GPT/setup/#getting-an-api-key{Fore.RESET}"
+ f"You can read more here: {Fore.CYAN}https://docs.agpt.co/setup/#getting-an-api-key{Fore.RESET}"
)
warned_user = True
except (APIError, Timeout) as e:
@@ -207,6 +210,23 @@ def create_chat_completion(
return resp
def batched(iterable, n):
"""Batch data into tuples of length n. The last batch may be shorter."""
# batched('ABCDEFG', 3) --> ABC DEF G
if n < 1:
raise ValueError("n must be at least one")
it = iter(iterable)
while batch := tuple(islice(it, n)):
yield batch
def chunked_tokens(text, tokenizer_name, chunk_length):
tokenizer = tiktoken.get_encoding(tokenizer_name)
tokens = tokenizer.encode(text)
chunks_iterator = batched(tokens, chunk_length)
yield from chunks_iterator
def get_ada_embedding(text: str) -> List[float]:
"""Get an embedding from the ada model.
@@ -217,7 +237,7 @@ def get_ada_embedding(text: str) -> List[float]:
List[float]: The embedding.
"""
cfg = Config()
model = "text-embedding-ada-002"
model = cfg.embedding_model
text = text.replace("\n", " ")
if cfg.use_azure:
@@ -226,13 +246,7 @@ def get_ada_embedding(text: str) -> List[float]:
kwargs = {"model": model}
embedding = create_embedding(text, **kwargs)
api_manager = ApiManager()
api_manager.update_cost(
prompt_tokens=embedding.usage.prompt_tokens,
completion_tokens=0,
model=model,
)
return embedding["data"][0]["embedding"]
return embedding
@retry_openai_api()
@@ -251,8 +265,31 @@ def create_embedding(
openai.Embedding: The embedding object.
"""
cfg = Config()
return openai.Embedding.create(
input=[text],
api_key=cfg.openai_api_key,
**kwargs,
)
chunk_embeddings = []
chunk_lengths = []
for chunk in chunked_tokens(
text,
tokenizer_name=cfg.embedding_tokenizer,
chunk_length=cfg.embedding_token_limit,
):
embedding = openai.Embedding.create(
input=[chunk],
api_key=cfg.openai_api_key,
**kwargs,
)
api_manager = ApiManager()
api_manager.update_cost(
prompt_tokens=embedding.usage.prompt_tokens,
completion_tokens=0,
model=cfg.embedding_model,
)
chunk_embeddings.append(embedding["data"][0]["embedding"])
chunk_lengths.append(len(chunk))
# do weighted avg
chunk_embeddings = np.average(chunk_embeddings, axis=0, weights=chunk_lengths)
chunk_embeddings = chunk_embeddings / np.linalg.norm(
chunk_embeddings
) # normalize the length to one
chunk_embeddings = chunk_embeddings.tolist()
return chunk_embeddings
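
Two things happen in create_embedding now: inputs longer than EMBEDDING_TOKEN_LIMIT are split into token chunks via the batched() recipe, and the per-chunk embeddings are merged by a length-weighted average renormalized to unit length. A standalone sketch of the chunk-and-merge math, with random vectors standing in for the API responses:

from itertools import islice

import numpy as np
import tiktoken

def batched(iterable, n):
    # batched('ABCDEFG', 3) --> ABC DEF G
    it = iter(iterable)
    while batch := tuple(islice(it, n)):
        yield batch

tokens = tiktoken.get_encoding("cl100k_base").encode("some long text " * 5000)
chunks = list(batched(tokens, 8191))  # EMBEDDING_TOKEN_LIMIT-sized slices

rng = np.random.default_rng(0)
chunk_embeddings = [rng.random(1536) for _ in chunks]  # stand-ins for ada-002
chunk_lengths = [len(chunk) for chunk in chunks]

merged = np.average(chunk_embeddings, axis=0, weights=chunk_lengths)
merged /= np.linalg.norm(merged)  # normalize the length to one
embedding = merged.tolist()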


@@ -3,5 +3,8 @@ COSTS = {
"gpt-3.5-turbo-0301": {"prompt": 0.002, "completion": 0.002},
"gpt-4-0314": {"prompt": 0.03, "completion": 0.06},
"gpt-4": {"prompt": 0.03, "completion": 0.06},
"gpt-4-0314": {"prompt": 0.03, "completion": 0.06},
"gpt-4-32k": {"prompt": 0.06, "completion": 0.12},
"gpt-4-32k-0314": {"prompt": 0.06, "completion": 0.12},
"text-embedding-ada-002": {"prompt": 0.0004, "completion": 0.0},
}


@@ -75,6 +75,7 @@ class Logger(metaclass=Singleton):
self.logger.setLevel(logging.DEBUG)
self.speak_mode = False
self.chat_plugins = []
def typewriter_log(
self, title="", title_color="", content="", speak_text=False, level=logging.INFO
@@ -82,6 +83,9 @@ class Logger(metaclass=Singleton):
if speak_text and self.speak_mode:
say_text(f"{title}. {content}")
for plugin in self.chat_plugins:
plugin.report(f"{title}. {content}")
if content:
if isinstance(content, list):
content = " ".join(content)


@@ -3,7 +3,7 @@ import logging
import sys
from pathlib import Path
from colorama import Fore
from colorama import Fore, Style
from autogpt.agent.agent import Agent
from autogpt.commands.command import CommandRegistry
@@ -13,7 +13,11 @@ from autogpt.logs import logger
from autogpt.memory import get_memory
from autogpt.plugins import scan_plugins
from autogpt.prompts.prompt import DEFAULT_TRIGGERING_PROMPT, construct_main_ai_config
from autogpt.utils import get_current_git_branch, get_latest_bulletin
from autogpt.utils import (
get_current_git_branch,
get_latest_bulletin,
markdown_to_ansi_style,
)
from autogpt.workspace import Workspace
from scripts.install_plugin_deps import install_plugin_dependencies
@@ -57,9 +61,19 @@ def run_auto_gpt(
)
if not cfg.skip_news:
motd = get_latest_bulletin()
motd, is_new_motd = get_latest_bulletin()
if motd:
logger.typewriter_log("NEWS: ", Fore.GREEN, motd)
motd = markdown_to_ansi_style(motd)
for motd_line in motd.split("\n"):
logger.info(motd_line, "NEWS:", Fore.GREEN)
if is_new_motd and not cfg.chat_messages_enabled:
input(
Fore.MAGENTA
+ Style.BRIGHT
+ "NEWS: Bulletin was updated! Press Enter to continue..."
+ Style.RESET_ALL
)
git_branch = get_current_git_branch()
if git_branch and git_branch != "stable":
logger.typewriter_log(
@@ -125,6 +139,13 @@ def run_auto_gpt(
full_message_history = []
next_action_count = 0
# add chat plugins capable of report to logger
if cfg.chat_messages_enabled:
for plugin in cfg.plugins:
if hasattr(plugin, "can_handle_report") and plugin.can_handle_report():
logger.info(f"Loaded plugin into logger: {plugin.__class__.__name__}")
logger.chat_plugins.append(plugin)
# Initialize memory and make sure it is empty.
# this is particularly important for indexing and referencing pinecone memory
memory = get_memory(cfg, init=True)


@@ -38,6 +38,9 @@ class PineconeMemory(MemoryProviderSingleton):
exit(1)
if table_name not in pinecone.list_indexes():
logger.typewriter_log(
"Connecting Pinecone. This may take some time...", Fore.MAGENTA, ""
)
pinecone.create_index(
table_name, dimension=dimension, metric=metric, pod_type=pod_type
)


@@ -1,3 +1,4 @@
import copy
import json
from typing import Dict, List, Tuple
@@ -44,7 +45,9 @@ def get_newly_trimmed_messages(
return new_messages_not_in_context, new_index
def update_running_summary(current_memory: str, new_events: List[Dict]) -> str:
def update_running_summary(
current_memory: str, new_events: List[Dict[str, str]]
) -> str:
"""
This function takes a list of dictionaries representing new events and combines them with the current summary,
focusing on key and potentially important information to remember. The updated summary is returned in a message
@@ -61,17 +64,23 @@ def update_running_summary(current_memory: str, new_events: List[Dict]) -> str:
update_running_summary(new_events)
# Returns: "This reminds you of these events from your past: \nI entered the kitchen and found a scrawled note saying 7."
"""
# Create a copy of the new_events list to prevent modifying the original list
new_events = copy.deepcopy(new_events)
# Replace "assistant" with "you". This produces much better first person past tense results.
for event in new_events:
if event["role"].lower() == "assistant":
event["role"] = "you"
# Remove "thoughts" dictionary from "content"
content_dict = json.loads(event["content"])
if "thoughts" in content_dict:
del content_dict["thoughts"]
event["content"] = json.dumps(content_dict)
elif event["role"].lower() == "system":
event["role"] = "your computer"
# Delete all user messages
elif event["role"] == "user":
new_events.remove(event)
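
The copy.deepcopy is the actual fix for #3619: update_running_summary used to mutate the caller's message history while renaming roles and stripping "thoughts". A small sketch of the normalization pass on a copy (the sample events are illustrative; note the iteration over list(events) so removal is safe):

import copy
import json

history = [
    {"role": "assistant",
     "content": json.dumps({"thoughts": {"text": "…"}, "command": {"name": "list_files"}})},
    {"role": "system", "content": "Command list_files returned: []"},
    {"role": "user", "content": "Determine which next command to use"},
]

events = copy.deepcopy(history)  # leave the real history untouched
for event in list(events):       # snapshot, so events.remove() is safe
    if event["role"] == "assistant":
        event["role"] = "you"
        content = json.loads(event["content"])
        content.pop("thoughts", None)  # drop the "thoughts" dictionary
        event["content"] = json.dumps(content)
    elif event["role"] == "system":
        event["role"] = "your computer"
    elif event["role"] == "user":
        events.remove(event)  # user messages are dropped from the summary

assert history[0]["role"] == "assistant"  # no side effect on the original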


@@ -33,7 +33,7 @@ def inspect_zip_for_modules(zip_path: str, debug: bool = False) -> list[str]:
result = []
with zipfile.ZipFile(zip_path, "r") as zfile:
for name in zfile.namelist():
if name.endswith("__init__.py"):
if name.endswith("__init__.py") and not name.startswith("__MACOSX"):
logger.debug(f"Found module '{name}' in the zipfile at: {name}")
result.append(name)
if len(result) == 0:
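
Context for the one-line plugins.py change (#3629): zipping a plugin on macOS adds a parallel __MACOSX/ directory with AppleDouble copies of every file, including __init__.py, so the scanner reported a phantom second module. A minimal version of the filter (the function name and zip path are illustrative):

import zipfile

def modules_in_zip(zip_path: str) -> list[str]:
    # Keep real package markers; skip macOS resource-fork copies.
    with zipfile.ZipFile(zip_path, "r") as zfile:
        return [
            name for name in zfile.namelist()
            if name.endswith("__init__.py") and not name.startswith("__MACOSX")
        ]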


@@ -1,8 +1,9 @@
import os
import re
import requests
import yaml
from colorama import Fore
from colorama import Fore, Style
from git.repo import Repo
from autogpt.logs import logger
@@ -107,15 +108,46 @@ def get_current_git_branch() -> str:
return ""
def get_latest_bulletin() -> str:
def get_latest_bulletin() -> tuple[str, bool]:
exists = os.path.exists("CURRENT_BULLETIN.md")
current_bulletin = ""
if exists:
current_bulletin = open("CURRENT_BULLETIN.md", "r", encoding="utf-8").read()
new_bulletin = get_bulletin_from_web()
is_new_news = new_bulletin != current_bulletin
is_new_news = new_bulletin != "" and new_bulletin != current_bulletin
news_header = Fore.YELLOW + "Welcome to Auto-GPT!\n"
if new_bulletin or current_bulletin:
news_header += (
"Below you'll find the latest Auto-GPT News and updates regarding features!\n"
"If you don't wish to see this message, you "
"can run Auto-GPT with the *--skip-news* flag.\n"
)
if new_bulletin and is_new_news:
open("CURRENT_BULLETIN.md", "w", encoding="utf-8").write(new_bulletin)
return f" {Fore.RED}::UPDATED:: {Fore.CYAN}{new_bulletin}{Fore.RESET}"
return current_bulletin
current_bulletin = f"{Fore.RED}::NEW BULLETIN::{Fore.RESET}\n\n{new_bulletin}"
return f"{news_header}\n{current_bulletin}", is_new_news
def markdown_to_ansi_style(markdown: str):
ansi_lines: list[str] = []
for line in markdown.split("\n"):
line_style = ""
if line.startswith("# "):
line_style += Style.BRIGHT
else:
line = re.sub(
r"(?<!\*)\*(\*?[^*]+\*?)\*(?!\*)",
rf"{Style.BRIGHT}\1{Style.NORMAL}",
line,
)
if re.match(r"^#+ ", line) is not None:
line_style += Fore.CYAN
line = re.sub(r"^#+ ", "", line)
ansi_lines.append(f"{line_style}{line}{Style.RESET_ALL}")
return "\n".join(ansi_lines)


@@ -1,7 +1,7 @@
import argparse
import logging
from autogpt.commands.file_operations import ingest_file, search_files
from autogpt.commands.file_operations import ingest_file, list_files
from autogpt.config import Config
from autogpt.memory import get_memory
@@ -10,12 +10,11 @@ cfg = Config()
def configure_logging():
logging.basicConfig(
filemode="a",
format="%(asctime)s,%(msecs)d %(name)s %(levelname)s %(message)s",
datefmt="%H:%M:%S",
level=logging.DEBUG,
handlers=[
logging.FileHandler(filename="log-ingestion.txt"),
logging.FileHandler(filename="log-ingestion.txt", mode="a"),
logging.StreamHandler(),
],
)
@@ -31,7 +30,7 @@ def ingest_directory(directory, memory, args):
"""
global logger
try:
files = search_files(directory)
files = list_files(directory)
for file in files:
ingest_file(file, memory, args.max_length, args.overlap)
except Exception as e:
@@ -68,7 +67,6 @@ def main() -> None:
help="The max_length of each chunk when ingesting files (default: 4000)",
default=4000,
)
args = parser.parse_args()
# Initialize memory


@@ -1,5 +1,5 @@
# Auto-GPT
Welcome to Auto-GPT. Please follow the [Installation](https://significant-gravitas.github.io/Auto-GPT/setup/) guide to get started.
Welcome to Auto-GPT. Please follow the [Installation](/setup/) guide to get started.
It is recommended to use a virtual machine for tasks that require high security measures to prevent any potential harm to the main computer's system and data.


@@ -1,5 +1,5 @@
site_name: Auto-GPT
site_url: https://significantgravitas.github.io/Auto-GPT/
site_url: https://docs.agpt.co/
repo_url: https://github.com/Significant-Gravitas/Auto-GPT
nav:
- Home: index.md


@@ -4,7 +4,7 @@ build-backend = "hatchling.build"
[project]
name = "agpt"
version = "0.2.2"
version = "0.3.0"
authors = [
{ name="Torantulino", email="support@agpt.co" },
]


@@ -14,13 +14,14 @@ duckduckgo-search
google-api-python-client #(https://developers.google.com/custom-search/v1/overview)
pinecone-client==2.2.1
redis
orjson
orjson==3.8.10
Pillow
selenium==4.1.4
webdriver-manager
jsonschema
tweepy
click
charset-normalizer>=3.1.0
spacy>=3.0.0,<4.0.0
en-core-web-sm @ https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-3.5.0/en_core_web_sm-3.5.0-py3-none-any.whl


@@ -0,0 +1,168 @@
interactions:
- request:
body: '{"input": [[1985]], "model": "text-embedding-ada-002", "encoding_format":
"base64"}'
headers:
Accept:
- '*/*'
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
Content-Length:
- '83'
Content-Type:
- application/json
method: POST
uri: https://api.openai.com/v1/embeddings
response:
body:
string: !!binary |
H4sIAAAAAAAAA1SaSxO6Orvl5++n2LWn9FsiIgl7xl3kkiAgYldXlyAiKHJNgJw6371L/6dOd08c
QAqV5HnWWr/kP/71119/t1ld5NPf//z197sap7//x/fa/Tbd/v7nr//5r7/++uuv//h9/n8jiyYr
7vfqU/6G/25Wn3ux/P3PX/x/X/m/g/756++DcanIKGZXd/X5SZTuBpdQW6+lYQqkfQ5NFN8pSgIw
zBGFBLyF+0x9ZATRzFVcC+xx3aJPfw3cJaBFA2RnibBlW4EuBNKhlLJ5dbH/QRJYDniFELe0Ryw1
zy4l2M8lVX4IVLvccrDkD26WikG50PuxCQFdnkwEiSTcsFJvgcv2m6mB50DfUpu5FLAPV1ZQMZ8b
IvW7azZ//KsHo/R9nYA/OPXa3M0WXkbEI3Dus2z9lNsEjEngUv+4PdWrN6EKdpdVoMba1vqqHZkH
b+fNG4mbTQRmWGsFxIdzQM3Kfkbt1AUcBI3doc1xk9ZLJVoC3ClDSm3yBtl4AC8Bvk47CzunCbFl
aqkGQtGR0A74sT4HY8DDq8puaE3xHC1C9H7BD749CHUSFM03CxuwDd2YQD5eskks+B4idSLUZd1V
n+fOt2FzPT2pkptdNtu6EQD6Cg2sRFSr1/NdKvcmSu74EM8ioDerUyT9vHfIWElpvUwkz0F2c2e0
vXpNNvuXsQe81EdofRp6LfDVTYMNUUx6PtQEjL//y79kHiuu1mRzc3xWsDg6TwQepu6OOhkEae88
TKzIbykanVhZ5SOLI3zWZCmjogx6OCXDhAT9yuoulKYevMWdRl2+GOtu9W6a9CHZBxvBramHc8Up
MNsImCJ0jBgThDiA13A6U8UU5IxJ8keQGG8I2At5RecPahjD4yJBrGZgAl2aDgjGwQth47BVXR5l
eQyPQVnRMISPjMUvNgNJeRm4GC/BsMh0SsDUKRt6PB+LelHlIJCl9SYjYfewsjbe5Rb8MNvDONKd
gclhbEAMywgba6u762TdAkjVIkJrd3Pc9antBBgEgUIEXZPZdDzHL/hCTxEtOGkHtg5XBQiuvsUe
DR76a+oCCDe720rWtGncxT69JHi49BTbp/jo8vxe5mEjlx02733NVuNIb4BDao0PgNnDnOWLBoNH
YlB8ebb6ulyVUS5at8MW8c5DeVb2IdQv8RN7R9F02VEWRxBM2Yka77HVZzqLMbwdpBSJVyNzRymr
rY2zKwusPUQpIrNlFYC+AgOHmmCxbfbkS3mMeR47t0UB/ImJHvBP64C4GV/rebiFCKKw66h78mRA
twSKwOJ5DtvXng4ru62tlPsgx+63Hpep/Sgwm2cXH55jxNZzfL4BkkeYHPqNMVBkCSFkaL8isbq4
gPmml0AnmldsPYNbvYiZBwEcZhcHqV9my/bSWVDZRwOaUSyxWs4WT17k15keClcZmCRTHua8lmGn
2QTR9CDbEtabDUaSqLHoMsfUkgoo+WRNL5+oc6ruJqnc3qeeHIwRldNjBXb81aCHeE7ZOnKrAMs6
JeRZtn00qXIQwho5Cek+J21Y5/NJg547drRQwStaLreWg+M2irB7ZM96em4kDSiX65bsDjAeRv5o
FxDt+JEqUVmzWUkwhD23rthXls0wLRvTAWgnjGhV9/nQ77ZKLpUhxNTYmEs0LU5pyNKTK4gEBjmb
9N1DA/zhpmPNutz0JdZ8A3Kq4OOD8HnUtBma8M99fXOpaxLfHF56VUxFwtozfS10vYc5jhl671zD
7ayTOErIrC4EGB9xmPbVK4aBW2dorrdAXwN9HGExwJCikPu4zMx2BF6OmU6k0DCGuXWjXLKi/EVj
6cPYlD+EGSr704CPt/NnWBRijjBqDzLaXMIuYrvNkgIm5DYRWWXWs7XbOlC0e4/aBlcNzNn0PcyX
4YiA8UmHlRVVDBsOKvQU8Za+RmkqgcwZPIy2xgiWNNzOsBncC9Wt/D6Q7a0RwFsKrlhP44ixuK1z
mJ9QT93w2LqzFJocrO+tRA9G1NdLW18R7N2qIuLuRoalfTwrYMvjTOZvf1qorhBgHHuB6tK+cef9
dXFgBIUnqTt7x2b1/kkgi/oLavuDDKZdJdrSSqWeatz5zOa38EHAy7BPzYDYYAG9ToDs9yrhv/O7
3kQ7Br4WHLD71bdOeCgF5OhVRnJYTWDmzWCG7gXK2MxhmlH95s9gPrsaPvrPul6GduWhpZUdPp5a
N2PcoxThd/6pJrx1sN5EJYGnk8OhrdsLw7AUhgfdNssRF36WqOtJGUDH9I5Ym54LYx6gDjjipiH8
GzRgILEkwqnjJHqA2qGeuYfFwfO6a5Akth82c+RUwlZrZ3o+eHt3GRvZAiS7xPiwOw86I0WqASsq
XiiYHiqYL1AeAa34CTufxHKX61vUwIDHiR6bOM3owxV4MCXdhM3g4ej8RRs8OBlsR80VvrMlLdIW
3nfBgJXJ9tz3tXReEIKAYteXjtHylLwGevfXA59j+VMvt9vZhtKaythJKk0XnCFuwFefscdLJhtP
bPZkJd16JMolO2PGpChw674Q4QXqsDnzm1y69MuduvxWi3jyiRA8bz2Rekfxra/2W4JwSk0DR8G5
1ufr6Qkh+qQ+eX/1tputOYUtvzHJOnU2W80EImmTcBHa8Ks0LPYlTOC2O26o/yjs73qiyR9/5CVW
XM/LPJbgFFUB2QUiZatq6RUsJPVB5u/7FOIks+HhaliIPWSZjUXjJHB95C69U3DRp+zJV0BYbJ6c
0rAG7KffB6kosW/K7bCcmBVAk5Idtk/1h43h6CuSu79cUNTZF0DuYPSAWRkOmWt5dafuPqxwDdID
RuVRdZmcuCvciMlEXSW9slV1Zw74/rPCSF6O7uortgAJ6Z9oEndrNKLXksPMs2qqHhRNHxkALyhe
vC1GZr4b5lrqCYyO1pZaqcTrHXgVAbwcNiN267qsV2z2OZzsV0ldheJh1nrowBRr7/96fxetRvBU
azySmk2Qje1JEeHWbRD1YL9j1FPzFiaf2id9EunZ+pATGy59ZKNdIGLwez5wwnOGjbJ86Gy6lTG0
3vHtT723vJnOcCPGE3a//WnZVbMDY+7IYeWtGtkCWMtDn/VP9OaAoDMuSBLAKamAtdi26+UdVCv0
nxHGB0/P63arZ8p+P8aAjKfdFgxTIvYSSooQqy0NdRbWqgDezcajWno5RLO5X0W407WeOob7yGbX
OzcSnYH01e8SzHSeEyjPUkidgdcH5sqjBEs+7mgx4339HtpVgKfH1FBNSEyXP1qAg+vmiqj9XHYZ
M7o0gWP4qAhI1MWdw9HUYErUiXqqvrB5TIpv/QoT1qT7yBbnagRAfUgmthscg4UF6igp/aTR42ab
DvOYezYYNchh/bW/6qPJm7lUJCeGkjXjs7EWmAIN1/Cxsk+ygcwukCCIA4aNoHtHdMCnAOyPTwfj
G8uHpRh8Afh+XSGiOfXPn/HAj5CGBKa2jKJE4aExFh+qtJH4/X1eCEP9HpE9pYrO25u4grz+vmHn
5o4ZScclgGVVMNTZgLqtsWYljPJUo97xibOlPro3aXEDhUbcbR+tGZgVyNXeh6r5rEbjKh17OJ+P
GrX7gg0lW4sc7PjMoLb9GdksWoIEHTUryV5528OuhwUH6xzX1ASdWn/rPQbVkLwRj/kFLP6wjlB7
4hrJqDfA1m1OhmQJwZlG5XrK/ughamObmvrwZrPjDC0wN35GcQsUIDiLpsHwfD5SDfcO4B2nbqF0
0mqql+sSMRSebHghLwsHXVC79OfHp1ueYPu+J1EHl9oCxFsHqj9Tk9EPM3t4etAGH+73rT7LzeqA
0UjP3/mANT2MPJKm4XbCasRb7uI3fgjuZ/FEHfG0uuNmjGYQcy6HXpL6yKbaPdrwlzd1IWuHpZxp
DrFcINJ882XHrLEBh0tLv/mnGlbFPoYgDcQc65L7za9bQ4GOESJqQU8DW+tc3sCoPj/Y3Yk70FWH
aw4DeTeTfWVe69mSqwamD67E5tfv7OxeKCC+kopsK/uZLbOcIVDySUd21/eoz3TQJCg81itWlxxF
8x4VIQzoWiNxF16GOS9bAukzS4i8v1tgvVw3CkRvTsAq92n1dc3nVN5fNx15g04dhL0g3eDY2io1
Y/kzrKNRxrK6hCV2HU3Jfs8D4oGcsKY9Fn1BtL8BdFYAdeAxc8k3/8GnlgmoV/rRnRtzEX71RK83
q/35j1jKK/tNfdS/wHxvFQu8zWYlUXDW9bksHg7sbo6N3SRt2PrJsARffL9DrDcrdyBp38DPfCoQ
f38r2Swmai5/x1Nlu22jKaDFC7KovRDh5FTunBp3WwqCUKEqunLR3KVlKC+39YX4aUjBKD+NEX7X
P1r22UOfYzbeoDffTtS5uV7GErZW0h+9KQJDX0JBhkDfLQo1N09nmL/5Bhiu5VPjmqjD7tUYLSik
EpA6ZsqwO2VVDlKiT+h1KIaoS4ugBd/5or/1O3qBGcD0yV/oY5tifW0rGoKNadyRYFuBy0javwD4
NMo3f94zFtZH/ud/aELW/pvXRgluwfP0x68vh2tg/PG/1tpHOvXUuP/lETKpQslI5AkatGqgI5k/
3cGiawuE9VPSyfztZ+xZ1go8v8cYeyipavbzb/GjD6minXuXFLrbg7g01m8+7Fxat9oN4g+j1AIX
nH3rrwVj66gYJY9b1L8uLw1WtpqSLd6fM4YsLtj7fJNjJMIzWPL3ywMavozUupb20AfSoYILtDn6
84vELv0S5kpIkbjrpuwPfwmGjYQku8uBcFtvEiSkfRIQHludJa8PhEuyAei1trU708ER4SSHB3zU
W06fV9Ks8BxeD1SP0rSe57UP4VooPFbf1gHM6LUUcvp4GGT51gfbA1mE9f1YYLuRlWi9kgGB4C5O
9GAvtvu5yjoPz5pYY4zugst2+tOBumSH1LxHyJ0jyo9wauua7O3XNVtMO2tg7+V3IontAcxD/Jqh
ubwIvbc6ijrTqCXotpJIFc5FbHYvcw4vnNFj5z5u6mWWIw8qkt/Qg3cLszXnPR7IhVVS1XluMxI/
xBBAKeapW5ApY5301mBNDlck2h8PrMn7JIJFbs7UiJ8kG03B9cDxWdzx8aPVbOVUmUBouguxfbPP
mKjWNxhGYEWQj0/ZrJOa/62fnz7U6zPUUjgEbfdnfuf8VDgwy94HrMe+566L/+bhKPgIl7zI6U8h
mhrw3oHDL78z9s3nIqGvHDvGEjG25ftY+vYDrGsvvp6PKioBMFaMxML29UXfXRTw43nuLaBgvb6R
Bb/8B9FI74f5+RbKXx6jaJtVQ0f3Q7k/12cNB3nc6lRO1QoWAxeixi1BtPiNGUL+teGpbQvD0Pey
IoLp824JTO59NpNJ6SFVFEjawWh1VlbtDF/K3qUG94CMvqdRA6p7fP78l7twyyPYJ3ZlYr+ePjW5
vi0L1iVnozJc02hOHa2Rv36ILOUprpcj/tjw+rY+2Dm/d4xOJ92Tf/xJfTAB0N96JpxrUttQ02ht
xgDJ19gZ8c//rBN7QMDrJkedOVSzMWzPMdyOyoT2xn2pxy9fg21cNtP+vHPcHnGuBiEkFlW2mDHm
OfsZdpYAsMKLhT5eT08OjALpCfzsNLaquO1hcZgcwhwJuIvbXA1gb5wDko4XO1vEArbAvXAyteFh
cNnpLGvQng2dPkprqic+WlJZGe0r2fNmw358FBRJxMii6Zcf38x/+QXndfvSf/4QvJ6XHmvRvhtY
4D0k6W7A5Ntv+2H83oe2m2Y0g9D55rVAg/0oQiKxswB6Im0SiWxMheL9Z2LdMPYV+OWH42mjusTM
ux7MZltgMy0k1ke58gLf+aFq7nhgTjiQ/3nf2mVZh+HKGgIDeTtj7CQkI6qSWFAUP5i8u+d9oD1M
uP03X9JDFdpRb5XXGG4dZ0fN4NG7bMcXJehasqBebD9gXiW1lW9pvKO6ewrAt7+uwB1Ch2rlNAHW
8bsGfip0Ii/8Zjq7bo4N+PJq+s1/YHs+vnK4+YjoD+9dvFeWAF0cAT7l12Egx3qL4ErFnhpxiIf9
GV9FuM5Cg60mr4cF7oIS6s3WJFs327G1GVMEI3nOqbbfpdGojXsR7urHEYlfXracuM4B37z8zW9P
0FOtWIF9D05fvnZnzO65HIIXDfHXr331UirBTn1H1OCEsP7pgXQqh4rI85Vn6+/9LE5ikr3/rIcv
b7Z+/AEf4aq7c+ebKQDGjIkgVLk+7697B2qCpJGBuqCeOHbWwEswEI6mZpuRn14GhG/JJtKd+udX
4I9H+8dmZfP97At/+MzmxuAw4c/Og/sxAdTyzoQxJeBz8PE39pdv7/W3U3UpLDUlxt4NfMD4q9d3
lxT4y1N0iryBQLe95tRNPu9h0TYOAvBS6/T40XS2rHYfSycGKbVhcmRbAroQLgN5YG+gA+sfFkhg
2PYJEoT9AkZwfcdgE79V6pq+r/M6fwqg3qEL9peDEs1uqCtw95JK8tRufs1AUNnAOJxe1Do5mr47
inog/fR9zmPb/fILBRKXn7G3T/WMbu61DS89uyMAETeQjU57sG4yRIZZoe46n68KtNj1g621Z+54
VK0S2l6c4JsalD+9QeCb12mmcChbrdsg/vIZxpxfRayMCg++L0WLoDFCl11lXQDvs5lQnz/J7Mej
gZvUzc+/RuyRqSncZ1eFfv25y3/rWT6cT3eMkiCrd+mIE7BjoMX6+/OqWXWTX/CrF4QT4ZbNh+3m
9ocXWf1QZXMfbC3ocXBPvTPl2Ki3XAo43q7oob/OP55syGopqPSX10lTAQPoUS1SWynf7penoD9+
6dSdrXoXk1SERg71n37UfSY4DfzuT2C9I162Oz1CBxo599O/dlhF5hB4E5cEF8MCAG2WbIR5sb38
/HJEj2pbwN4r7ljV9B2YuQeCkmJtEsRzjxysxZ4UkJ6klSoXdacT64RmcNw8O3pQjIFR7TlDuM2s
FStqumEL3KUV7EygIMsbXwPtD3tJEh6BjP0xnPSfvsBbAkpSfCIuY0HoJ0AXLh2av3x6Fpw1he5B
1bC2zPMwvgXqwaOPY4pbfxvN8f4kyE8zeFFzIyqR8Ov/H1+2qdL2ZOgVWw3+jDfubyXaYWfiwIdc
P9SrpHR4PbeqAdqoGagenHWXLP7E//whPpyTKWNfvyh/88mf/YWV2h4C1s4LsDMgqtP4YSXQo/mZ
BhMph13Udx7cbZMNgS/ryabuo4jwiFyMcXRTsuWbp6BUPg/kQ1EdLQFNGrgxrTs2pE8E+tsaSvAy
te6Pp4AJLoMB9XiDsdtMhb5uu30KN9GxR2y6JMP84rcB1IVzh7oqx9my9ZAnhfvXif78ycKUjQPS
zb7+7S8w8kQ3AkVJawjvbIp6XuUlgbzURmQP19plV9aMsBssHZtA93Xy88O/vK+ViaiP0r0U5K8f
psahcKNZcKQbbFwUY6x2oT6/KBbAb/9HxyWuZ+mZVjByKo/65oV3R/zwNPjjZUyITsMo908e3LVD
S/g4pDWTkOj94Y+XdHOpZ16wbLgRhSOST/CarSMnCWDevp/426/An/xvc+1Av+uVzXIj2dI+Fy5Y
iY7IpZddj8D+qhK0+er7aqR7CAso+kRsmr6epQu7yTW2Ttj/wCaaP5z1gostEqpc0hHQIuo18OXZ
ZJGO/cDsXsjB8T4w0hqqGHV0k3I//4rV6D5n9P1CEjjs9SO27mrGGA6iFGZw80Q722U1WY6cLe2U
LsV2RdRhXvkcwfAyZjTcbnBGfzx4t26m335a9NvvBV8egY+k58A01GYKv/yY+rMj6N3LA9ZPn7AD
d80wbMWbAr/9AvuqoLBp3r1n+M1zROhj7+tPrQpu1BNP3nZaRUtyaEZ4xK+G/Pj1ck2mAv78uTek
gt5utrMo/fTLFHiSrVn/GeUfn4y/ej0XVdjCe5xLGD0kTWeKb8ywAsz4+jPObVM8a/CsSTVG/srp
0zBWJfyUq0Qkv/m4q+ynrz/1AGrPYaszxC8Q5I8EzdrZ0dmLditYrhcfW5X/YqOPDhW8XVf05XOb
aC3sxYITjD2qLM1xYCu1R2guDUFSv9tnP14n9/dbivFxkw5ffbaBwCqLKi/j5tIHbsr9ZVOpWHck
oLPrPnZk8zsT6BNx0dgdzyOA1tum+uodwWrdakn8+3cq4D//9ddf/+t3wqBp78X7ezBgKpbp3/99
VODft/vt3zwv/JsKf04ikPFWFn//81+HEP7uhrbppv89ta/iM/79z1/bP6cN/p7a6fb+fy7/6/td
//mv/wMAAP//AwDOXgQl4SAAAA==
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 7c09bf823fb50b70-AMS
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Mon, 01 May 2023 17:29:41 GMT
Server:
- cloudflare
access-control-allow-origin:
- '*'
alt-svc:
- h3=":443"; ma=86400, h3-29=":443"; ma=86400
openai-organization:
- user-kd1j0bcill5flig1m29wdaof
openai-processing-ms:
- '69'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=15724800; includeSubDomains
x-ratelimit-limit-requests:
- '3000'
x-ratelimit-remaining-requests:
- '2999'
x-ratelimit-reset-requests:
- 20ms
x-request-id:
- 555d4ffdb6ceac9f62f60bb64d87170d
status:
code: 200
message: OK
version: 1


@@ -0,0 +1,42 @@
import pytest
from git.exc import GitCommandError
from git.repo.base import Repo
from autogpt.commands.git_operations import clone_repository
@pytest.fixture
def mock_clone_from(mocker):
return mocker.patch.object(Repo, "clone_from")
def test_clone_auto_gpt_repository(workspace, mock_clone_from, config):
mock_clone_from.return_value = None
repo = "github.com/Significant-Gravitas/Auto-GPT.git"
scheme = "https://"
url = scheme + repo
clone_path = str(workspace.get_path("auto-gpt-repo"))
expected_output = f"Cloned {url} to {clone_path}"
clone_result = clone_repository(url=url, clone_path=clone_path)
assert clone_result == expected_output
mock_clone_from.assert_called_once_with(
url=f"{scheme}{config.github_username}:{config.github_api_key}@{repo}",
to_path=clone_path,
)
def test_clone_repository_error(workspace, mock_clone_from):
url = "https://github.com/this-repository/does-not-exist.git"
clone_path = str(workspace.get_path("does-not-exist"))
mock_clone_from.side_effect = GitCommandError(
"clone", "fatal: repository not found", ""
)
result = clone_repository(url=url, clone_path=clone_path)
assert "Error: " in result


@@ -1,9 +1,14 @@
import string
from unittest.mock import MagicMock
import pytest
from numpy.random import RandomState
from pytest_mock import MockerFixture
from autogpt.llm.llm_utils import get_ada_embedding
from autogpt.config import Config
from autogpt.llm import llm_utils
from autogpt.llm.api_manager import ApiManager
from autogpt.llm.modelsinfo import COSTS
from tests.utils import requires_api_key
@@ -16,10 +21,42 @@ def random_large_string():
return "".join(random.choice(list(string.ascii_lowercase), size=n_characters))
@pytest.mark.xfail(reason="We have no mechanism for embedding large strings.")
@pytest.fixture()
def api_manager(mocker: MockerFixture):
api_manager = ApiManager()
mocker.patch.multiple(
api_manager,
total_prompt_tokens=0,
total_completion_tokens=0,
total_cost=0,
)
yield api_manager
@pytest.fixture()
def spy_create_embedding(mocker: MockerFixture):
return mocker.spy(llm_utils, "create_embedding")
@pytest.mark.vcr
@requires_api_key("OPENAI_API_KEY")
def test_get_ada_embedding(
config: Config, api_manager: ApiManager, spy_create_embedding: MagicMock
):
token_cost = COSTS[config.embedding_model]["prompt"]
llm_utils.get_ada_embedding("test")
spy_create_embedding.assert_called_once_with("test", model=config.embedding_model)
assert (prompt_tokens := api_manager.get_total_prompt_tokens()) == 1
assert api_manager.get_total_completion_tokens() == 0
assert api_manager.get_total_cost() == (prompt_tokens * token_cost) / 1000
@pytest.mark.vcr
@requires_api_key("OPENAI_API_KEY")
def test_get_ada_embedding_large_context(random_large_string):
# This test should be able to mock the openai call after we have a fix. We don't need
# to hit the API to test the logic of the function (so not using vcr). This is a quick
# regression test to document the issue.
get_ada_embedding(random_large_string)
llm_utils.get_ada_embedding(random_large_string)


@@ -56,67 +56,13 @@ def test_readable_file_size():
@patch("requests.get")
def test_get_bulletin_from_web_success(mock_get):
expected_content = "Test bulletin from web"
mock_get.return_value.status_code = 200
mock_get.return_value.text = "Test bulletin"
mock_get.return_value.text = expected_content
bulletin = get_bulletin_from_web()
assert bulletin == "Test bulletin"
@patch("requests.get")
def test_get_bulletin_from_web_failure(mock_get):
mock_get.return_value.status_code = 404
bulletin = get_bulletin_from_web()
print(bulletin)
assert bulletin == ""
@skip_in_ci
def test_get_current_git_branch():
branch_name = get_current_git_branch()
# Assuming that the branch name will be non-empty if the function is working correctly.
assert branch_name != ""
def test_get_latest_bulletin_no_file():
if os.path.exists("CURRENT_BULLETIN.md"):
os.remove("CURRENT_BULLETIN.md")
with patch("autogpt.utils.get_bulletin_from_web", return_value=""):
bulletin = get_latest_bulletin()
assert bulletin == ""
def test_get_latest_bulletin_with_file():
with open("CURRENT_BULLETIN.md", "w", encoding="utf-8") as f:
f.write("Test bulletin")
with patch("autogpt.utils.get_bulletin_from_web", return_value=""):
bulletin = get_latest_bulletin()
assert bulletin == "Test bulletin"
os.remove("CURRENT_BULLETIN.md")
def test_get_latest_bulletin_with_new_bulletin():
with open("CURRENT_BULLETIN.md", "w", encoding="utf-8") as f:
f.write("Old bulletin")
with patch("autogpt.utils.get_bulletin_from_web", return_value="New bulletin"):
bulletin = get_latest_bulletin()
assert "New bulletin" in bulletin
os.remove("CURRENT_BULLETIN.md")
@patch("requests.get")
def test_get_bulletin_from_web_success(mock_get):
mock_get.return_value.status_code = 200
mock_get.return_value.text = "Test bulletin"
bulletin = get_bulletin_from_web()
assert bulletin == "Test bulletin"
assert expected_content in bulletin
mock_get.assert_called_with(
"https://raw.githubusercontent.com/Significant-Gravitas/Auto-GPT/master/BULLETIN.md"
)
@@ -138,6 +84,62 @@ def test_get_bulletin_from_web_exception(mock_get):
assert bulletin == ""
def test_get_latest_bulletin_no_file():
if os.path.exists("CURRENT_BULLETIN.md"):
os.remove("CURRENT_BULLETIN.md")
bulletin, is_new = get_latest_bulletin()
assert is_new
def test_get_latest_bulletin_with_file():
expected_content = "Test bulletin"
with open("CURRENT_BULLETIN.md", "w", encoding="utf-8") as f:
f.write(expected_content)
with patch("autogpt.utils.get_bulletin_from_web", return_value=""):
bulletin, is_new = get_latest_bulletin()
assert expected_content in bulletin
assert is_new == False
os.remove("CURRENT_BULLETIN.md")
def test_get_latest_bulletin_with_new_bulletin():
with open("CURRENT_BULLETIN.md", "w", encoding="utf-8") as f:
f.write("Old bulletin")
expected_content = "New bulletin from web"
with patch("autogpt.utils.get_bulletin_from_web", return_value=expected_content):
bulletin, is_new = get_latest_bulletin()
assert "::NEW BULLETIN::" in bulletin
assert expected_content in bulletin
assert is_new
os.remove("CURRENT_BULLETIN.md")
def test_get_latest_bulletin_new_bulletin_same_as_old_bulletin():
expected_content = "Current bulletin"
with open("CURRENT_BULLETIN.md", "w", encoding="utf-8") as f:
f.write(expected_content)
with patch("autogpt.utils.get_bulletin_from_web", return_value=expected_content):
bulletin, is_new = get_latest_bulletin()
assert expected_content in bulletin
assert is_new == False
os.remove("CURRENT_BULLETIN.md")
@skip_in_ci
def test_get_current_git_branch():
branch_name = get_current_git_branch()
# Assuming that the branch name will be non-empty if the function is working correctly.
assert branch_name != ""
@patch("autogpt.utils.Repo")
def test_get_current_git_branch_success(mock_repo):
mock_repo.return_value.active_branch.name = "test-branch"
@@ -154,47 +156,5 @@ def test_get_current_git_branch_failure(mock_repo):
assert branch_name == ""
def test_get_latest_bulletin_no_file():
if os.path.exists("CURRENT_BULLETIN.md"):
os.remove("CURRENT_BULLETIN.md")
with patch("autogpt.utils.get_bulletin_from_web", return_value=""):
bulletin = get_latest_bulletin()
assert bulletin == ""
def test_get_latest_bulletin_with_file():
with open("CURRENT_BULLETIN.md", "w", encoding="utf-8") as f:
f.write("Test bulletin")
with patch("autogpt.utils.get_bulletin_from_web", return_value=""):
bulletin = get_latest_bulletin()
assert bulletin == "Test bulletin"
os.remove("CURRENT_BULLETIN.md")
def test_get_latest_bulletin_with_new_bulletin():
with open("CURRENT_BULLETIN.md", "w", encoding="utf-8") as f:
f.write("Old bulletin")
with patch("autogpt.utils.get_bulletin_from_web", return_value="New bulletin"):
bulletin = get_latest_bulletin()
assert f" {Fore.RED}::UPDATED:: {Fore.CYAN}New bulletin{Fore.RESET}" in bulletin
os.remove("CURRENT_BULLETIN.md")
def test_get_latest_bulletin_new_bulletin_same_as_old_bulletin():
with open("CURRENT_BULLETIN.md", "w", encoding="utf-8") as f:
f.write("Test bulletin")
with patch("autogpt.utils.get_bulletin_from_web", return_value="Test bulletin"):
bulletin = get_latest_bulletin()
assert bulletin == "Test bulletin"
os.remove("CURRENT_BULLETIN.md")
if __name__ == "__main__":
pytest.main()


@@ -2,25 +2,19 @@
This set of unit tests is designed to test the file operations that autoGPT has access to.
"""
import hashlib
import os
import re
from io import TextIOWrapper
from pathlib import Path
from tempfile import gettempdir
import pytest
from pytest_mock import MockerFixture
from autogpt.commands.file_operations import (
append_to_file,
check_duplicate_operation,
delete_file,
download_file,
log_operation,
read_file,
search_files,
split_file,
write_to_file,
)
import autogpt.commands.file_operations as file_ops
from autogpt.config import Config
from autogpt.utils import readable_file_size
from autogpt.workspace import Workspace
@pytest.fixture()
@@ -29,66 +23,186 @@ def file_content():
@pytest.fixture()
def test_file(workspace, file_content):
test_file = str(workspace.get_path("test_file.txt"))
with open(test_file, "w") as f:
f.write(file_content)
return test_file
def test_file_path(config, workspace: Workspace):
return workspace.get_path("test_file.txt")
@pytest.fixture()
def test_directory(workspace):
return str(workspace.get_path("test_directory"))
def test_file(test_file_path: Path):
file = open(test_file_path, "w")
yield file
if not file.closed:
file.close()
@pytest.fixture()
def test_nested_file(workspace):
return str(workspace.get_path("nested/test_file.txt"))
def test_file_with_content_path(test_file: TextIOWrapper, file_content):
test_file.write(file_content)
test_file.close()
file_ops.log_operation(
"write", test_file.name, file_ops.text_checksum(file_content)
)
return Path(test_file.name)
def test_check_duplicate_operation(config, test_file):
log_operation("write", test_file)
assert check_duplicate_operation("write", test_file) is True
@pytest.fixture()
def test_directory(config, workspace: Workspace):
return workspace.get_path("test_directory")
@pytest.fixture()
def test_nested_file(config, workspace: Workspace):
return workspace.get_path("nested/test_file.txt")
def test_file_operations_log(test_file: TextIOWrapper):
log_file_content = (
"File Operation Logger\n"
"write: path/to/file1.txt #checksum1\n"
"write: path/to/file2.txt #checksum2\n"
"write: path/to/file3.txt #checksum3\n"
"append: path/to/file2.txt #checksum4\n"
"delete: path/to/file3.txt\n"
)
test_file.write(log_file_content)
test_file.close()
expected = [
("write", "path/to/file1.txt", "checksum1"),
("write", "path/to/file2.txt", "checksum2"),
("write", "path/to/file3.txt", "checksum3"),
("append", "path/to/file2.txt", "checksum4"),
("delete", "path/to/file3.txt", None),
]
assert list(file_ops.operations_from_log(test_file.name)) == expected
def test_file_operations_state(test_file: TextIOWrapper):
# Prepare a fake log file
log_file_content = (
"File Operation Logger\n"
"write: path/to/file1.txt #checksum1\n"
"write: path/to/file2.txt #checksum2\n"
"write: path/to/file3.txt #checksum3\n"
"append: path/to/file2.txt #checksum4\n"
"delete: path/to/file3.txt\n"
)
test_file.write(log_file_content)
test_file.close()
# Call the function and check the returned dictionary
expected_state = {
"path/to/file1.txt": "checksum1",
"path/to/file2.txt": "checksum4",
}
assert file_ops.file_operations_state(test_file.name) == expected_state
def test_is_duplicate_operation(config, mocker: MockerFixture):
# Prepare a fake state dictionary for the function to use
state = {
"path/to/file1.txt": "checksum1",
"path/to/file2.txt": "checksum2",
}
mocker.patch.object(file_ops, "file_operations_state", lambda _: state)
# Test cases with write operations
assert (
file_ops.is_duplicate_operation("write", "path/to/file1.txt", "checksum1")
is True
)
assert (
file_ops.is_duplicate_operation("write", "path/to/file1.txt", "checksum2")
is False
)
assert (
file_ops.is_duplicate_operation("write", "path/to/file3.txt", "checksum3")
is False
)
# Test cases with append operations
assert (
file_ops.is_duplicate_operation("append", "path/to/file1.txt", "checksum1")
is False
)
# Test cases with delete operations
assert file_ops.is_duplicate_operation("delete", "path/to/file1.txt") is False
assert file_ops.is_duplicate_operation("delete", "path/to/file3.txt") is True
# Test logging a file operation
def test_log_operation(test_file, config):
file_logger_name = config.file_logger_path
if os.path.exists(file_logger_name):
os.remove(file_logger_name)
log_operation("log_test", test_file)
with open(config.file_logger_path, "r") as f:
def test_log_operation(config: Config):
file_ops.log_operation("log_test", "path/to/test")
with open(config.file_logger_path, "r", encoding="utf-8") as f:
content = f.read()
assert f"log_test: {test_file}" in content
assert f"log_test: path/to/test\n" in content
def test_text_checksum(file_content: str):
checksum = file_ops.text_checksum(file_content)
different_checksum = file_ops.text_checksum("other content")
assert re.match(r"^[a-fA-F0-9]+$", checksum) is not None
assert checksum != different_checksum
def test_log_operation_with_checksum(config: Config):
file_ops.log_operation("log_test", "path/to/test", checksum="ABCDEF")
with open(config.file_logger_path, "r", encoding="utf-8") as f:
content = f.read()
assert f"log_test: path/to/test #ABCDEF\n" in content
# Test splitting a file into chunks
def test_split_file():
content = "abcdefghij"
chunks = list(split_file(content, max_length=4, overlap=1))
chunks = list(file_ops.split_file(content, max_length=4, overlap=1))
expected = ["abcd", "defg", "ghij"]
assert chunks == expected
def test_read_file(test_file, file_content):
content = read_file(test_file)
def test_read_file(test_file_with_content_path: Path, file_content):
content = file_ops.read_file(test_file_with_content_path)
assert content == file_content
def test_write_to_file(test_file_path: Path):
    new_content = "This is new content.\n"
    file_ops.write_to_file(str(test_file_path), new_content)
    with open(test_file_path, "r", encoding="utf-8") as f:
content = f.read()
assert content == new_content
def test_write_file_logs_checksum(config: Config, test_file_path: Path):
new_content = "This is new content.\n"
new_checksum = file_ops.text_checksum(new_content)
file_ops.write_to_file(str(test_file_path), new_content)
with open(config.file_logger_path, "r", encoding="utf-8") as f:
log_entry = f.read()
assert log_entry == f"write: {test_file_path} #{new_checksum}\n"
def test_write_file_fails_if_content_exists(test_file_path: Path):
new_content = "This is new content.\n"
file_ops.log_operation(
"write",
str(test_file_path),
checksum=file_ops.text_checksum(new_content),
)
result = file_ops.write_to_file(str(test_file_path), new_content)
assert result == "Error: File has already been updated."
def test_write_file_succeeds_if_content_different(test_file_with_content_path: Path):
new_content = "This is different content.\n"
result = file_ops.write_to_file(str(test_file_with_content_path), new_content)
assert result == "File written to successfully."
def test_append_to_file(test_nested_file: Path):
append_text = "This is appended text.\n"
file_ops.write_to_file(test_nested_file, append_text)
file_ops.append_to_file(test_nested_file, append_text)
with open(test_nested_file, "r") as f:
content_after = f.read()
assert content_after == append_text + append_text
def test_append_to_file_uses_checksum_from_appended_file(
config: Config, test_file_path: Path
):
append_text = "This is appended text.\n"
file_ops.append_to_file(test_file_path, append_text)
file_ops.append_to_file(test_file_path, append_text)
with open(config.file_logger_path, "r", encoding="utf-8") as f:
log_contents = f.read()
digest = hashlib.md5()
digest.update(append_text.encode("utf-8"))
checksum1 = digest.hexdigest()
digest.update(append_text.encode("utf-8"))
checksum2 = digest.hexdigest()
assert log_contents == (
f"append: {test_file_path} #{checksum1}\n"
f"append: {test_file_path} #{checksum2}\n"
)
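The rolling digest works because each append is logged with the checksum of the entire file after the append, not of the appended fragment alone: updating one md5 object with append_text twice is equivalent to hashing append_text + append_text. An append sketch with that behavior:

def append_to_file(filename, text):
    with open(filename, "a", encoding="utf-8") as f:
        f.write(text)
    with open(filename, "r", encoding="utf-8") as f:
        checksum = text_checksum(f.read())  # whole-file checksum, not just `text`
    log_operation("append", filename, checksum=checksum)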
def test_delete_file(test_file_with_content_path: Path):
result = file_ops.delete_file(str(test_file_with_content_path))
assert result == "File deleted successfully."
assert os.path.exists(test_file_with_content_path) is False
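A delete sketch consistent with both delete tests, reusing the helpers above: the duplicate guard only fires when the state says the file is already gone, and any OS error is folded into the returned message, which is why the missing-file test below can assert on str(err):

def delete_file(filename):
    if is_duplicate_operation("delete", filename):
        return "Error: File has already been deleted."
    try:
        os.remove(filename)
        log_operation("delete", filename)
        return "File deleted successfully."
    except Exception as err:
        return f"Error: {err}"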
def test_delete_missing_file(config):
filename = "path/to/file/which/does/not/exist"
# confuse the log
file_ops.log_operation("write", filename, checksum="fake")
    try:
        os.remove(filename)
    except FileNotFoundError as err:
        assert str(err) in file_ops.delete_file(filename)
        return
    assert False, f"Failed to test delete_file; {filename} not expected to exist"
def test_list_files(workspace: Workspace, test_directory: Path):
    # Case 1: Create files A and B (plus a copy of A in a subdirectory) and
    # ensure the listing returns all of them
file_a = workspace.get_path("file_a.txt")
file_b = workspace.get_path("file_b.txt")
with open(os.path.join(test_directory, file_a.name), "w") as f:
f.write("This is file A in the subdirectory.")
    files = file_ops.list_files(str(workspace.root))
assert file_a.name in files
assert file_b.name in files
assert os.path.join(Path(test_directory).name, file_a.name) in files
    # Case 2: List files and make sure a nonexistent file is not returned (and we don't throw)
non_existent_file = "non_existent_file.txt"
    files = file_ops.list_files("")
assert non_existent_file not in files
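Both cases are satisfied by a recursive walk that returns paths relative to the search root and tolerates a missing or empty root, since os.walk simply yields nothing there. A sketch (hidden-file filtering is an assumption):

import os

def list_files(directory):
    found_files = []
    for root, _, files in os.walk(directory):
        for name in files:
            if name.startswith("."):
                continue  # skip hidden files (assumed)
            relative = os.path.relpath(os.path.join(root, name), directory)
            found_files.append(relative)
    return found_files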
def test_download_file(config, workspace: Workspace):
url = "https://github.com/Significant-Gravitas/Auto-GPT/archive/refs/tags/v0.2.2.tar.gz"
    local_name = workspace.get_path("auto-gpt.tar.gz")
size = 365023
readable_size = readable_file_size(size)
assert (
        file_ops.download_file(url, local_name)
== f'Successfully downloaded and locally stored file: "{local_name}"! (Size: {readable_size})'
)
assert os.path.isfile(local_name) is True
assert os.path.getsize(local_name) == size
url = "https://github.com/Significant-Gravitas/Auto-GPT/archive/refs/tags/v0.0.0.tar.gz"
assert "Got an HTTP Error whilst trying to download file" in download_file(
assert "Got an HTTP Error whilst trying to download file" in file_ops.download_file(
url, local_name
)
url = "https://thiswebsiteiswrong.hmm/v0.0.0.tar.gz"
assert "Failed to establish a new connection:" in download_file(url, local_name)
assert "Failed to establish a new connection:" in file_ops.download_file(
url, local_name
)


import pytest
from openai.error import APIError, RateLimitError
from autogpt.llm import llm_utils
@pytest.fixture(params=[RateLimitError, APIError])
def error(request):
return request.param("Error")
@pytest.fixture
def mock_create_embedding(mocker):
mock_response = mocker.MagicMock()
mock_response.usage.prompt_tokens = 5
mock_response.__getitem__.side_effect = lambda key: [{"embedding": [0.1, 0.2, 0.3]}]
return mocker.patch(
"autogpt.llm.llm_utils.create_embedding", return_value=mock_response
)
def error_factory(error_instance, error_count, retry_count, warn_user=True):
class RaisesError:
def __init__(self):
self.count = 0
        @llm_utils.retry_openai_api(
num_retries=retry_count, backoff_base=0.001, warn_user=warn_user
)
def __call__(self):
def test_retry_open_api_no_error(capsys):
    @llm_utils.retry_openai_api()
def f():
return 1
assert output.out == ""
def test_get_ada_embedding(mock_create_embedding, api_manager):
model = "text-embedding-ada-002"
    embedding = llm_utils.get_ada_embedding("test")
mock_create_embedding.assert_called_once_with(
"test", model="text-embedding-ada-002"
)
assert embedding == [0.1, 0.2, 0.3]
cost = COSTS[model]["prompt"]
assert api_manager.get_total_prompt_tokens() == 5
assert api_manager.get_total_completion_tokens() == 0
assert api_manager.get_total_cost() == (5 * cost) / 1000
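The cost arithmetic assumes COSTS stores a per-1000-token price, so five prompt tokens cost 5 * price / 1000. A worked example with a hypothetical price:

price_per_1k = 0.0004  # hypothetical prompt price for text-embedding-ada-002
expected_cost = (5 * price_per_1k) / 1000  # five tokens -> 2e-06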
def test_chunked_tokens():
text = "Auto-GPT is an experimental open-source application showcasing the capabilities of the GPT-4 language model"
expected_output = [
(
13556,
12279,
2898,
374,
459,
22772,
1825,
31874,
3851,
67908,
279,
17357,
315,
279,
480,
2898,
12,
19,
4221,
1646,
)
]
output = list(llm_utils.chunked_tokens(text, "cl100k_base", 8191))
assert output == expected_output
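The expected output is a single 20-token tuple because the whole sentence fits well inside one 8191-token chunk; "cl100k_base" is the tiktoken encoding used by the GPT-3.5/GPT-4 model family. A sketch of chunked_tokens consistent with this test (the batching helper mirrors the itertools "batched" recipe):

from itertools import islice

import tiktoken

def batched(iterable, n):
    """Batch an iterable into tuples of length n; the last may be shorter."""
    it = iter(iterable)
    while batch := tuple(islice(it, n)):
        yield batch

def chunked_tokens(text, encoding_name, chunk_length):
    encoding = tiktoken.get_encoding(encoding_name)
    tokens = encoding.encode(text)
    yield from batched(tokens, chunk_length)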