AutoGPT/autogpt/scripts/git_log_to_release_notes.py
Commit f107ff8cf0 by Reinier van der Leer, 2024-05-28: Set up unified pre-commit + CI w/ linting + type checking & FIX EVERYTHING (#7171)
- **FIX ALL LINT/TYPE ERRORS IN AUTOGPT, FORGE, AND BENCHMARK**

### Linting
- Clean up linter configs for `autogpt`, `forge`, and `benchmark`
- Add type checking with Pyright
- Create unified pre-commit config
- Create unified linting and type checking CI workflow

### Testing
- Synchronize CI test setups for `autogpt`, `forge`, and `benchmark`
  - Add missing `pytest-cov` to benchmark dependencies
- Mark GCS tests as slow to speed up pre-commit test runs
- Repair `forge` test suite
  - Add `AgentDB.close()` method for test DB teardown in `db_test.py` (sketched below)
  - Use an actual temporary dir instead of `forge/test_workspace/`
- Move left-behind dependencies for moved `forge` code from `autogpt` to `forge`
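
The `AgentDB.close()` addition is essentially a teardown hook so test runs can release their database before the test directory is cleaned up. A minimal sketch of the idea, assuming `AgentDB` wraps a SQLAlchemy engine (attribute names here are illustrative, not the exact forge code):

```python
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker


class AgentDB:
    def __init__(self, database_string: str) -> None:
        self.engine = create_engine(database_string)
        self.Session = sessionmaker(bind=self.engine)

    def close(self) -> None:
        # Release pooled connections so db_test.py can delete the
        # temporary SQLite file during teardown.
        self.engine.dispose()
```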

### Notable type changes
- Replace uses of `ChatModelProvider` with `MultiProvider`
- Remove unnecessary exports from various `__init__.py` files
- Simplify `FileStorage.open_file` signature by removing `IOBase` from return type union
  - Implement `S3BinaryIOWrapper(BinaryIO)` type interposer for `S3FileStorage`

- Expand overloads of `GCSFileStorage.open_file` for improved typing of read and write modes (sketched after this list)

  Had to silence type checking for the extra overloads, because (I think) Pyright is reporting a false-positive:
  https://github.com/microsoft/pyright/issues/8007

- Change `count_tokens`, `get_tokenizer`, `count_message_tokens` methods on `ModelProvider`s from class methods to instance methods

- Move `CompletionModelFunction.schema` method -> helper function `format_function_def_for_openai` in `forge.llm.providers.openai`

- Rename `ModelProvider` -> `BaseModelProvider`
- Rename `ChatModelProvider` -> `BaseChatModelProvider`
- Add type `ChatModelProvider` which is a union of all subclasses of `BaseChatModelProvider`
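
Two of these changes lend themselves to short sketches. First, the expanded `open_file` overloads follow the usual `@overload`-plus-`Literal` pattern, so text-mode opens type as `TextIO` and binary-mode opens as `BinaryIO`; the class and parameter names below are illustrative, not the exact `GCSFileStorage` signatures:

```python
from typing import IO, BinaryIO, Literal, TextIO, overload


class ExampleStorage:  # stand-in for GCSFileStorage, for illustration only
    @overload
    def open_file(
        self, path: str, mode: Literal["r", "w"] = "r", binary: Literal[False] = False
    ) -> TextIO: ...

    @overload
    def open_file(
        self, path: str, mode: Literal["r", "w"] = "r", *, binary: Literal[True]
    ) -> BinaryIO: ...

    def open_file(
        self, path: str, mode: Literal["r", "w"] = "r", binary: bool = False
    ) -> IO:
        # The real implementation opens a blob in the GCS bucket; opening a
        # local file here just keeps the sketch self-contained and runnable.
        return open(path, mode + ("b" if binary else ""))
```

Second, the reintroduced `ChatModelProvider` union has roughly the following shape (the actual union covers whatever concrete providers `forge.llm.providers` defines; the two subclasses here are examples):

```python
class BaseChatModelProvider:
    """Formerly `ChatModelProvider`; the shared base class."""


class OpenAIProvider(BaseChatModelProvider):
    ...


class AnthropicProvider(BaseChatModelProvider):
    ...


# The old name lives on as a union of the concrete subclasses, which keeps
# annotations like `provider: ChatModelProvider` working while giving type
# checkers a closed set of implementations to narrow against.
ChatModelProvider = OpenAIProvider | AnthropicProvider
```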

### Removed rather than fixed
- Remove deprecated and broken `autogpt/agbenchmark_config/benchmarks.py`
- Remove various base classes and properties on base classes in `forge.llm.providers.schema` and `forge.models.providers`

### Fixes for other issues that came to light
- Clean up `forge.agent_protocol.api_router`, `forge.agent_protocol.database`, and `forge.agent.agent`

- Add fallback behavior to `ImageGeneratorComponent`
  - Remove test for deprecated failure behavior

- Fix `agbenchmark.challenges.builtin` challenge exclusion mechanism on Windows

- Fix `_tool_calls_compat_extract_calls` in `forge.llm.providers.openai`

- Add support for `any` (= no type specified) in `JSONSchema.typescript_type`
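
For the `JSONSchema.typescript_type` change, the behaviour amounts to a fallback: a schema node with no `type` maps to TypeScript's `any`. A rough, self-contained sketch of the idea (not the actual forge property, whose enum and attribute names may differ):

```python
import enum
from typing import Optional


class SchemaType(str, enum.Enum):  # hypothetical subset of JSONSchema.Type
    STRING = "string"
    INTEGER = "integer"
    BOOLEAN = "boolean"


def typescript_type(schema_type: Optional[SchemaType]) -> str:
    if schema_type is None:
        # No "type" specified in the schema -> the broadest TypeScript type.
        return "any"
    return {
        SchemaType.STRING: "string",
        SchemaType.INTEGER: "number",
        SchemaType.BOOLEAN: "boolean",
    }[schema_type]
```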

#!/usr/bin/env python3
import logging
from pathlib import Path
from typing import Optional

import click
from forge.llm.providers import ChatMessage, MultiProvider
from forge.llm.providers.anthropic import AnthropicModelName
from git import Repo, TagReference

from autogpt.app.utils import coroutine


@click.command()
@click.option(
"--repo-path",
type=click.Path(file_okay=False, exists=True),
help="Path to the git repository",
)
@coroutine
async def generate_release_notes(repo_path: Optional[Path] = None):
    logger = logging.getLogger(generate_release_notes.name)  # pyright: ignore

    repo = Repo(repo_path, search_parent_directories=True)
    tags = list(repo.tags)
    if not tags:
        click.echo("No tags found in the repository.")
        return

    click.echo("Available tags:")
    for index, tag in enumerate(tags):
        click.echo(f"{index + 1}: {tag.name}")

    last_release_index = (
        click.prompt("Enter the number for the last release tag", type=int) - 1
    )
    if last_release_index >= len(tags) or last_release_index < 0:
        click.echo("Invalid tag number entered.")
        return
    last_release_tag: TagReference = tags[last_release_index]

    new_release_ref = click.prompt(
        "Enter the name of the release branch or git ref",
        default=repo.active_branch.name,
    )
    try:
        new_release_ref = repo.heads[new_release_ref].name
    except IndexError:
        try:
            new_release_ref = repo.tags[new_release_ref].name
        except IndexError:
            new_release_ref = repo.commit(new_release_ref).hexsha
    logger.debug(f"Selected release ref: {new_release_ref}")

    git_log = repo.git.log(
        f"{last_release_tag.name}...{new_release_ref}",
        "autogpt/",
        no_merges=True,
        follow=True,
    )
    logger.debug(f"-------------- GIT LOG --------------\n\n{git_log}\n")

    model_provider = MultiProvider()
    chat_messages = [
        ChatMessage.system(SYSTEM_PROMPT),
        ChatMessage.user(content=git_log),
    ]
    click.echo("Writing release notes ...")
    completion = await model_provider.create_chat_completion(
        model_prompt=chat_messages,
        model_name=AnthropicModelName.CLAUDE3_OPUS_v1,
        # model_name=OpenAIModelName.GPT4_v4,
    )
    click.echo("-------------- LLM RESPONSE --------------\n")
    click.echo(completion.response.content)


EXAMPLE_RELEASE_NOTES = """
First some important notes w.r.t. using the application:
* `run.sh` has been renamed to `autogpt.sh`
* The project has been restructured. The AutoGPT Agent is now located in `autogpt`.
* The application no longer uses a single workspace for all tasks. Instead, every task that you run the agent on creates a new workspace folder. See the [usage guide](https://docs.agpt.co/autogpt/usage/#workspace) for more information.
## New features ✨
* **Agent Protocol 🔌**
Our agent now works with the [Agent Protocol](/#-agent-protocol), a REST API that allows creating tasks and executing the agent's step-by-step process. This allows integration with other applications, and we also use it to connect to the agent through the UI.
* **UI 💻**
With the aforementioned Agent Protocol integration comes the benefit of using our own open-source Agent UI. Easily create, use, and chat with multiple agents from one interface.
When starting the application through the project's new [CLI](/#-cli), it runs with the new frontend by default, with benchmarking capabilities. Running `autogpt.sh serve` in the subproject folder (`autogpt`) will also serve the new frontend, but without benchmarking functionality.
Running the application the "old-fashioned" way, with the terminal interface (let's call it TTY mode), is still possible with `autogpt.sh run`.
* **Resuming agents 🔄️**
In TTY mode, the application will now save the agent's state when quitting, and allows resuming where you left off at a later time!
* **GCS and S3 workspace backends 📦**
To further support running the application as part of a larger system, Google Cloud Storage and S3 workspace backends were added. Configuration options for this can be found in [`.env.template`](/autogpt/.env.template).
* **Documentation Rewrite 📖**
The [documentation](https://docs.agpt.co) has been restructured and mostly rewritten to clarify and simplify the instructions, and also to accommodate the other subprojects that are now in the repo.
* **New Project CLI 🔧**
The project has a new CLI to provide easier usage of all of the components that are now in the repo: different agents, frontend and benchmark. More info can be found [here](/#-cli).
* **Docker dev build 🐳**
In addition to the regular Docker release [images](https://hub.docker.com/r/significantgravitas/auto-gpt/tags) (`latest`, `v0.5.0` in this case), we now also publish a `latest-dev` image that always contains the latest working build from `master`. This allows you to try out the latest bleeding edge version, but be aware that these builds may contain bugs!
## Architecture changes & improvements 👷🏼
* **PromptStrategy**
To make it easier to harness the power of LLMs and use them to fulfil tasks within the application, we adopted the `PromptStrategy` class from `autogpt.core` (AKA re-arch) to encapsulate prompt generation and response parsing throughout the application.
* **Config modularization**
To reduce the complexity of the application's config structure, parts of the monolithic `Config` have been moved into smaller, tightly scoped config objects. Also, the logic for building the configuration from environment variables was decentralized to make it all a lot more maintainable.
This is mostly made possible by the `autogpt.core.configuration` module, which was also expanded with a few new features for it. Most notably, the new `from_env` attribute on the `UserConfigurable` field decorator and corresponding logic in `SystemConfiguration.from_env()` and related functions.
* **Monorepo**
As mentioned, the repo has been restructured to accommodate the AutoGPT Agent, Forge, AGBenchmark and the new Frontend.
* AutoGPT Agent has been moved to `autogpt`
* Forge now lives in `forge`, and the project's new CLI makes it easy to create new Forge-based agents.
* AGBenchmark -> `benchmark`
* Frontend -> `frontend`
See also the [README](/#readme).
""".lstrip() # noqa
SYSTEM_PROMPT = f"""
Please generate release notes based on the user's git log and the example release notes.
Here is an example of what we like our release notes to look and read like:
---------------------------------------------------------------------------
{EXAMPLE_RELEASE_NOTES}
---------------------------------------------------------------------------
NOTE: These example release notes are not related to the git log that you should write release notes for!
Do not mention the changes in the example when writing your release notes!
""".lstrip() # noqa
if __name__ == "__main__":
import dotenv
from forge.logging.config import configure_logging
configure_logging(debug=True)
dotenv.load_dotenv()
generate_release_notes()