Reinier van der Leer 9012ff4db2 refactor(benchmark): Interface & type consolidation, and arch change, to allow adding challenge providers
Squashed commit of the following:

commit 7d6476d329
Author: Reinier van der Leer <pwuts@agpt.co>
Date:   Tue Jan 9 18:10:45 2024 +0100

    refactor(benchmark/challenge): Set up structure to support more challenge providers

    - Move `Challenge`, `ChallengeData`, `load_challenges` to `challenges/builtin.py` and rename to `BuiltinChallenge`, `BuiltinChallengeSpec`, `load_builtin_challenges`
    - Create `BaseChallenge` to serve as interface and base class for different challenge implementations
    - Create `ChallengeInfo` model to serve as universal challenge info object
    - Create `get_challenge_from_source_uri` function in `challenges/__init__.py`
    - Replace `ChallengeData` by `ChallengeInfo` everywhere except in `BuiltinChallenge`
    - Add strong typing to `task_informations` store in app.py
    - Use `call.duration` in `finalize_test_report` and remove `timer` fixture
    - Update docstring on `challenges/__init__.py:get_unique_categories`
    - Add docstring to `generate_test.py`
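    Sketched out, the new provider structure might look like this. Only the class and function names come from the commit; the fields, the `__BUILTIN__` URI scheme, and all method bodies are assumptions made for illustration.

    ```python
    # Minimal sketch of the challenge-provider structure described above.
    # Class and function names come from the commit; the fields, the
    # "__BUILTIN__" URI scheme, and all method bodies are assumptions.
    from abc import ABC, abstractmethod
    from dataclasses import dataclass, field


    @dataclass
    class ChallengeInfo:
        """Universal challenge info object."""
        source_uri: str
        name: str
        categories: list[str] = field(default_factory=list)


    class BaseChallenge(ABC):
        """Interface and base class for challenge implementations."""
        info: ChallengeInfo

        @abstractmethod
        def run(self) -> None:
            ...


    class BuiltinChallenge(BaseChallenge):
        """Challenge loaded from the benchmark's built-in specs (was `Challenge`)."""
        def __init__(self, info: ChallengeInfo) -> None:
            self.info = info

        def run(self) -> None:
            pass  # execute the challenge against the agent


    def get_challenge_from_source_uri(source_uri: str) -> BaseChallenge:
        # Dispatch on the URI prefix; builtin challenges are the only
        # provider sketched here.
        if source_uri.startswith("__BUILTIN__"):
            name = source_uri.rsplit("/", 1)[-1]
            return BuiltinChallenge(ChallengeInfo(source_uri, name))
        raise ValueError(f"Unknown challenge provider for URI: {source_uri}")
    ```

    A second provider would subclass `BaseChallenge` and get its own branch in the dispatch function.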

commit 5df2aa7939
Author: Reinier van der Leer <pwuts@agpt.co>
Date:   Tue Jan 9 16:58:01 2024 +0100

    refactor(benchmark): Refactor & rename functions in agent_interface.py and agent_api_interface.py

    - `copy_artifacts_into_temp_folder` -> `copy_challenge_artifacts_into_workspace`
    - `copy_agent_artifacts_into_folder` -> `download_agent_artifacts_into_folder`
    - Reorder parameters of `run_api_agent`, `copy_challenge_artifacts_into_workspace`; use `Path` instead of `str`
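    For example, the renamed workspace helper might look like this; the `Path`-based signature follows the commit, while the body is an assumption about what the copy step does.

    ```python
    # Sketch of the renamed artifact helper. The Path-based signature follows
    # the commit; the body is an assumption about what the copy step does.
    import shutil
    from pathlib import Path


    def copy_challenge_artifacts_into_workspace(
        artifacts_dir: Path, workspace: Path
    ) -> None:
        """Copy a challenge's artifact files into the agent's workspace."""
        workspace.mkdir(parents=True, exist_ok=True)
        for item in artifacts_dir.iterdir():
            if item.is_file():
                shutil.copy(item, workspace / item.name)
    ```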

commit 6a256fef4c
Author: Reinier van der Leer <pwuts@agpt.co>
Date:   Tue Jan 9 16:02:25 2024 +0100

    refactor(benchmark): Refactor & typefix report generation and handling logic

    - Rename functions in reports.py and ReportManager.py to better reflect what they do
       - `get_previous_test_results` -> `get_and_update_success_history`
       - `generate_single_call_report` -> `initialize_test_report`
       - `finalize_reports` -> `finalize_test_report`
       - `ReportManager.end_info_report` -> `SessionReportManager.finalize_session_report`
    - Modify `pytest_runtest_makereport` hook in conftest.py to finalize the report immediately after the challenge finishes running instead of after teardown
       - Move result processing logic from `initialize_test_report` to `finalize_test_report` in reports.py
    - Use `Test` and `Report` types from report_types.py where possible instead of untyped dicts: reports.py, utils.py, ReportManager.py
    - Differentiate `ReportManager` into `SessionReportManager`, `RegressionTestsTracker`, `SuccessRateTracker`
    - Move filtering of optional challenge categories from challenge.py (`Challenge.skip_optional_categories`) to conftest.py (`pytest_collection_modifyitems`)
    - Remove unused `scores` fixture in conftest.py
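    As an illustration of the split, a minimal `SuccessRateTracker` could look like this. Only the class name comes from the commit; the interface is an assumption.

    ```python
    # Illustrative sketch of one of the split-out report trackers. Only the
    # class name comes from the commit; the interface is an assumption.
    class SuccessRateTracker:
        """Tracks per-challenge success history and computes success rates."""

        def __init__(self) -> None:
            self.history: dict[str, list[bool]] = {}

        def update(self, challenge: str, success: bool) -> None:
            self.history.setdefault(challenge, []).append(success)

        def success_rate(self, challenge: str) -> float:
            runs = self.history.get(challenge, [])
            return sum(runs) / len(runs) if runs else 0.0
    ```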

commit 370d6dbf5d
Author: Reinier van der Leer <pwuts@agpt.co>
Date:   Tue Jan 9 15:16:43 2024 +0100

    refactor(benchmark): Simplify models in report_types.py

    - Removed ForbidOptionalMeta and BaseModelBenchmark classes.
    - Changed model attributes to optional: `Metrics.difficulty`, `Metrics.success`, `Metrics.success_percentage`, `Metrics.run_time`, and `Test.reached_cutoff`.
    - Added validator to `Metrics` model to require `success` and `run_time` fields if `attempted=True`.
    - Added default values to all optional model fields.
    - Removed duplicate imports.
    - Added condition in process_report.py to prevent null lookups if `metrics.difficulty` is not set.
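    The validation rule on `Metrics` can be sketched like this, using a plain dataclass in place of the pydantic model the benchmark actually uses. Field names follow the commit; the types are assumptions.

    ```python
    # Sketch of the Metrics validation rule described above, using a plain
    # dataclass in place of the pydantic model the benchmark actually uses.
    # Field names follow the commit; types are assumptions.
    from dataclasses import dataclass
    from typing import Optional


    @dataclass
    class Metrics:
        attempted: bool
        difficulty: Optional[str] = None
        success: Optional[bool] = None
        success_percentage: Optional[float] = None
        run_time: Optional[str] = None

        def __post_init__(self) -> None:
            # `success` and `run_time` are required when attempted=True.
            if self.attempted and (self.success is None or self.run_time is None):
                raise ValueError(
                    "`success` and `run_time` are required when attempted=True"
                )
    ```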
Squash commit date: 2024-01-18 15:19:06 +01:00

As a user

  1. pip install auto-gpt-benchmarks
  2. Add boilerplate code to run and kill agent
  3. agbenchmark
    • --category challenge_category to run tests in a specific category
    • --mock to only run mock tests if they exist for each test
    • --noreg to skip any tests that have passed in the past. Without this flag, a previously passing challenge that now fails will be removed from the regression tests
  4. We call boilerplate code for your agent
  5. Show pass rate of tests, logs, and any other metrics
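A hypothetical helper showing how the flags above combine into a command line. The flag names come from the list above; the function itself is illustrative and not part of agbenchmark.

```python
# Hypothetical helper: assemble an agbenchmark command line from the
# options listed above. Flag names come from the list; this function is
# illustrative and not part of agbenchmark itself.
from typing import Optional


def build_agbenchmark_command(
    category: Optional[str] = None, mock: bool = False, noreg: bool = False
) -> list[str]:
    cmd = ["agbenchmark"]
    if category:
        cmd += ["--category", category]
    if mock:
        cmd.append("--mock")
    if noreg:
        cmd.append("--noreg")
    return cmd
```

Pass the resulting list to subprocess.run once the package is installed.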

Contributing

Diagrams: https://whimsical.com/agbenchmark-5n4hXBq1ZGzBwRsK4TVY7x

To run the existing mocks

  1. clone the repo auto-gpt-benchmarks
  2. pip install poetry
  3. poetry shell
  4. poetry install
  5. cp .env_example .env
  6. git submodule update --init --remote --recursive
  7. uvicorn server:app --reload
  8. agbenchmark --mock. Keep the config the same and watch the logs :)

To run with mini-agi

  1. Navigate to auto-gpt-benchmarks/agent/mini-agi
  2. pip install -r requirements.txt
  3. cp .env_example .env, set PROMPT_USER=false, and add your OPENAI_API_KEY=. Set MODEL="gpt-3.5-turbo" if you don't have access to gpt-4 yet. Also make sure you have Python 3.10 or higher installed
  4. Set AGENT_NAME=mini-agi in the .env file, along with where you want your REPORT_LOCATION to be
  5. Make sure to follow the commands above, then run agbenchmark without the --mock flag
  • To add a dependency, run poetry add <package>
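Putting the variables from steps 3–4 together, the resulting .env might look like this (shown as one file for brevity; the REPORT_LOCATION path is a hypothetical example):

```
PROMPT_USER=false
# add your key:
OPENAI_API_KEY=
# if you don't have gpt-4 access yet:
MODEL="gpt-3.5-turbo"
AGENT_NAME=mini-agi
# hypothetical path; set to wherever you want reports saved:
REPORT_LOCATION=reports
```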

Feel free to create PRs to merge with main at will (but also feel free to ask for review). If you can't, send a message in the R&D chat for access.

If you push at any point and break things (it'll happen to everyone), fix it ASAP. Step 1 is to revert master to the last working commit.

Let people know what your beautiful code does; document everything well.

Share your progress :)

Dataset

Manually created challenges, existing challenges within Auto-GPT, and https://osu-nlp-group.github.io/Mind2Web/

How do I add new agents to agbenchmark?

Example with smol developer.

1. Create a GitHub branch with your agent following the same pattern as this example:

https://github.com/smol-ai/developer/pull/114/files

2. Create the submodule and the GitHub workflow by following the same pattern as this example:

https://github.com/Significant-Gravitas/Auto-GPT-Benchmarks/pull/48/files

How do I run an agent in different environments?

To just use it as a benchmark for your agent, pip install the package and run agbenchmark.

For internal Auto-GPT CI runs, specify the AGENT_NAME you want to use and set the HOME_ENV. E.g. AGENT_NAME=mini-agi.

To develop an agent alongside the benchmark, specify the AGENT_NAME you want to use and add the agent as a submodule to the repo.