Remove GPT-4 as the default model. (#1072)

* Remove GPT-4 as the default model.

* Updated test_compressible_agent to work around a bug that would otherwise default to gpt-4. Revisit after #1073 is addressed.

* Worked around another bug in test_compressible_agent. It seems the config_list was always empty!

* Reverted changes to compressible agent.

* Noted that GPT-4 is the preferred model in the OAI_CONFIG_LIST_sample and README.

* Fixed failing tests after #1110

* Update OAI_CONFIG_LIST_sample

Co-authored-by: Chi Wang <wang.chi@microsoft.com>

---------

Co-authored-by: Chi Wang <wang.chi@microsoft.com>
Author: afourney
Date: 2024-01-05 06:27:48 -08:00
Committed by: GitHub
Parent: 3f343654bd
Commit: e5ebdb66bf
4 changed files with 32 additions and 7 deletions


@@ -1,5 +1,7 @@
-// Please modify the content, remove these two lines of comment and rename this file to OAI_CONFIG_LIST to run the sample code.
-// if using pyautogen v0.1.x with Azure OpenAI, please replace "base_url" with "api_base" (line 11 and line 18 below). Use "pip list" to check version of pyautogen installed.
+// Please modify the content, remove these four lines of comment and rename this file to OAI_CONFIG_LIST to run the sample code.
+// If using pyautogen v0.1.x with Azure OpenAI, please replace "base_url" with "api_base" (line 13 and line 20 below). Use "pip list" to check version of pyautogen installed.
+//
+// NOTE: This configuration lists GPT-4 as the default model, as this represents our current recommendation, and is known to work well with AutoGen. If you use a model other than GPT-4, you may need to revise various system prompts (especially if using weaker models like GPT-3.5-turbo). Moreover, if you use models other than those hosted by OpenAI or Azure, you may incur additional risks related to alignment and safety. Proceed with caution if updating this default.
 [
     {
         "model": "gpt-4",


@@ -58,6 +58,8 @@ The easiest way to start playing is
 2. Copy OAI_CONFIG_LIST_sample to ./notebook folder, name to OAI_CONFIG_LIST, and set the correct configuration.
 3. Start playing with the notebooks!
 
+*NOTE*: OAI_CONFIG_LIST_sample lists GPT-4 as the default model, as this represents our current recommendation, and is known to work well with AutoGen. If you use a model other than GPT-4, you may need to revise various system prompts (especially if using weaker models like GPT-3.5-turbo). Moreover, if you use models other than those hosted by OpenAI or Azure, you may incur additional risks related to alignment and safety. Proceed with caution if updating this default.
+
 ## Using existing docker image
 Install docker, save your oai key into an environment variable name OPENAI_API_KEY, and then run the following.


@@ -43,9 +43,7 @@ class ConversableAgent(Agent):
     To customize the initial message when a conversation starts, override `generate_init_message` method.
     """
 
-    DEFAULT_CONFIG = {
-        "model": DEFAULT_MODEL,
-    }
+    DEFAULT_CONFIG = {}  # An empty configuration
     MAX_CONSECUTIVE_AUTO_REPLY = 100  # maximum number of consecutive auto replies (subject to future change)
 
     llm_config: Union[Dict, Literal[False]]
@@ -1301,7 +1299,7 @@ class ConversableAgent(Agent):
             is_remove: whether removing the function from llm_config with name 'func_sig'
         """
 
-        if not self.llm_config:
+        if not isinstance(self.llm_config, dict):
             error_msg = "To update a function signature, agent must have an llm_config"
             logger.error(error_msg)
             raise AssertionError(error_msg)
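The guard change above matters because, with the new empty `DEFAULT_CONFIG`, an agent's `llm_config` can legitimately be an empty dict, which is falsy, while `llm_config=False` means LLM use is explicitly disabled. A small self-contained illustration of why a truthiness check and an `isinstance` check disagree on these two cases:

```python
# The two values an unconfigured agent can hold under the new defaults.
empty_config = {}   # new DEFAULT_CONFIG: a valid dict, just not configured yet
disabled = False    # llm_config=False: LLM use explicitly turned off

# The old truthiness guard (`if not self.llm_config`) cannot tell them apart:
assert (not empty_config) and (not disabled)

# The new guard (`if not isinstance(self.llm_config, dict)`) rejects only
# the non-dict case, so an empty-but-valid config is no longer an error:
assert isinstance(empty_config, dict)
assert not isinstance(disabled, dict)
```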


@@ -7,6 +7,14 @@ from pydantic import BaseModel, Field
 from typing_extensions import Annotated
 
 from autogen.agentchat import ConversableAgent, UserProxyAgent
 from conftest import skip_openai
 
+try:
+    import openai
+except ImportError:
+    skip = True
+else:
+    skip = False or skip_openai
+
+
 @pytest.fixture
@@ -610,9 +618,24 @@ def test_register_for_execution():
     assert get_origin(user_proxy_1.function_map) == expected_function_map
 
 
+@pytest.mark.skipif(
+    skip,
+    reason="do not run if skipping openai",
+)
+def test_no_llm_config():
+    # We expect a TypeError when the model isn't specified
+    with pytest.raises(TypeError, match=r".*Missing required arguments.*"):
+        agent1 = ConversableAgent(name="agent1", llm_config=False, human_input_mode="NEVER", default_auto_reply="")
+        agent2 = ConversableAgent(
+            name="agent2", llm_config={"api_key": "Intentionally left blank."}, human_input_mode="NEVER"
+        )
+        agent1.initiate_chat(agent2, message="hi")
+
+
 if __name__ == "__main__":
     # test_trigger()
     # test_context()
     # test_max_consecutive_auto_reply()
     # test_generate_code_execution_reply()
-    test_conversable_agent()
+    # test_conversable_agent()
+    test_no_llm_config()
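The new `test_no_llm_config` expects a `TypeError` mentioning missing required arguments when no model is configured. A dependency-free stand-in for that failure mode (the `create_completion` stub below is hypothetical, not the real openai client API) shows how a keyword-only required parameter produces exactly this kind of error:

```python
def create_completion(*, model, messages):
    # Hypothetical stand-in for a client call that requires `model` to be
    # named explicitly, mimicking the error the test asserts on.
    return {"model": model, "messages": messages}

caught = ""
try:
    # No model configured, mirroring the empty llm_config in the test above.
    create_completion(messages=[{"role": "user", "content": "hi"}])
except TypeError as e:
    caught = str(e)

# Python reports the missing keyword-only argument by name.
assert "model" in caught
```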