Remove GPT-4 as the default model. (#1072)
* Remove GPT-4 as the default model.

* Updated test_compressible_agent to work around a bug that would otherwise default to gpt-4. Revisit after #1073 is addressed.

* Worked around another bug in test_compressible_agent. It seems the config_list was always empty!

* Reverted changes to compressible agent.

* Noted that GPT-4 is the preferred model in the OAI_CONFIG_LIST_sample and README.

* Fixed failing tests after #1110

* Update OAI_CONFIG_LIST_sample

Co-authored-by: Chi Wang <[email protected]>

---------

Co-authored-by: Chi Wang <[email protected]>
afourney and sonichi authored Jan 5, 2024
1 parent 3f34365 commit e5ebdb6
Showing 4 changed files with 32 additions and 7 deletions.
6 changes: 4 additions & 2 deletions OAI_CONFIG_LIST_sample
@@ -1,5 +1,7 @@
-// Please modify the content, remove these two lines of comment and rename this file to OAI_CONFIG_LIST to run the sample code.
-// if using pyautogen v0.1.x with Azure OpenAI, please replace "base_url" with "api_base" (line 11 and line 18 below). Use "pip list" to check version of pyautogen installed.
+// Please modify the content, remove these four lines of comment and rename this file to OAI_CONFIG_LIST to run the sample code.
+// If using pyautogen v0.1.x with Azure OpenAI, please replace "base_url" with "api_base" (line 13 and line 20 below). Use "pip list" to check version of pyautogen installed.
+//
+// NOTE: This configuration lists GPT-4 as the default model, as this represents our current recommendation, and is known to work well with AutoGen. If you use a model other than GPT-4, you may need to revise various system prompts (especially if using weaker models like GPT-3.5-turbo). Moreover, if you use models other than those hosted by OpenAI or Azure, you may incur additional risks related to alignment and safety. Proceed with caution if updating this default.
 [
     {
         "model": "gpt-4",
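The sample file keeps `"model": "gpt-4"` even though the library no longer applies it by default, so the model must now be named explicitly in every config. A minimal Python sketch mirroring OAI_CONFIG_LIST_sample (the API key value is a placeholder, not a real credential):

```python
# Sketch of a config_list matching OAI_CONFIG_LIST_sample. With no
# library-wide default model anymore, the model is named explicitly here.
config_list = [
    {
        "model": "gpt-4",  # recommended by the project, but no longer implied
        "api_key": "<your OpenAI API key here>",  # placeholder
    },
]

# An agent's llm_config would typically wrap this list.
llm_config = {"config_list": config_list}
```

Passing such an `llm_config` to an agent replaces the removed `DEFAULT_CONFIG` behavior.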
2 changes: 2 additions & 0 deletions README.md
@@ -58,6 +58,8 @@ The easiest way to start playing is
 2. Copy OAI_CONFIG_LIST_sample to ./notebook folder, name to OAI_CONFIG_LIST, and set the correct configuration.
 3. Start playing with the notebooks!
 
+*NOTE*: OAI_CONFIG_LIST_sample lists GPT-4 as the default model, as this represents our current recommendation, and is known to work well with AutoGen. If you use a model other than GPT-4, you may need to revise various system prompts (especially if using weaker models like GPT-3.5-turbo). Moreover, if you use models other than those hosted by OpenAI or Azure, you may incur additional risks related to alignment and safety. Proceed with caution if updating this default.
+
 ## Using existing docker image
 Install docker, save your oai key into an environment variable name OPENAI_API_KEY, and then run the following.
 
6 changes: 2 additions & 4 deletions autogen/agentchat/conversable_agent.py
@@ -43,9 +43,7 @@ class ConversableAgent(Agent):
     To customize the initial message when a conversation starts, override `generate_init_message` method.
     """
 
-    DEFAULT_CONFIG = {
-        "model": DEFAULT_MODEL,
-    }
+    DEFAULT_CONFIG = {}  # An empty configuration
     MAX_CONSECUTIVE_AUTO_REPLY = 100  # maximum number of consecutive auto replies (subject to future change)
 
     llm_config: Union[Dict, Literal[False]]
@@ -1301,7 +1299,7 @@ def update_function_signature(self, func_sig: Union[str, Dict], is_remove: None)
             is_remove: whether removing the function from llm_config with name 'func_sig'
         """
 
-        if not self.llm_config:
+        if not isinstance(self.llm_config, dict):
             error_msg = "To update a function signature, agent must have an llm_config"
             logger.error(error_msg)
             raise AssertionError(error_msg)
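The guard change above matters because `not self.llm_config` is also true for an empty dict (the new `DEFAULT_CONFIG`), whereas the `isinstance` check only rejects non-dict values such as `False` or `None`. A standalone sketch of the difference:

```python
# An empty dict is falsy in Python, so the old truthiness guard would have
# rejected the new empty DEFAULT_CONFIG even though it is a valid dict.
empty_config = {}

old_guard_rejects = not empty_config                    # True: empty dict treated as "no config"
new_guard_rejects = not isinstance(empty_config, dict)  # False: empty dict is still a dict

# Non-dict sentinels like False are rejected by both guards.
assert (not False) and (not isinstance(False, dict))
```

This is why the commit could switch to an empty `DEFAULT_CONFIG` without breaking the function-signature check.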
25 changes: 24 additions & 1 deletion test/agentchat/test_conversable_agent.py
@@ -7,6 +7,14 @@
 from typing_extensions import Annotated
 
 from autogen.agentchat import ConversableAgent, UserProxyAgent
+from conftest import skip_openai
+
+try:
+    import openai
+except ImportError:
+    skip = True
+else:
+    skip = False or skip_openai
 
 
 @pytest.fixture
@@ -610,9 +618,24 @@ async def exec_sh(script: Annotated[str, "Valid shell script to execute."]):
     assert get_origin(user_proxy_1.function_map) == expected_function_map
 
 
+@pytest.mark.skipif(
+    skip,
+    reason="do not run if skipping openai",
+)
+def test_no_llm_config():
+    # We expect a TypeError when the model isn't specified
+    with pytest.raises(TypeError, match=r".*Missing required arguments.*"):
+        agent1 = ConversableAgent(name="agent1", llm_config=False, human_input_mode="NEVER", default_auto_reply="")
+        agent2 = ConversableAgent(
+            name="agent2", llm_config={"api_key": "Intentionally left blank."}, human_input_mode="NEVER"
+        )
+        agent1.initiate_chat(agent2, message="hi")
+
+
 if __name__ == "__main__":
     # test_trigger()
     # test_context()
     # test_max_consecutive_auto_reply()
     # test_generate_code_execution_reply()
-    test_conversable_agent()
+    # test_conversable_agent()
+    test_no_llm_config()
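The new test expects a `TypeError` mentioning missing required arguments to surface from the client when no model is configured. A dependency-free sketch of the same assertion pattern, using a hypothetical `create` stand-in rather than the real OpenAI client:

```python
import re

def create(*, model):
    """Hypothetical stand-in for an API call that requires a `model` keyword."""
    return model

# Calling without `model` raises TypeError, analogous to what the test
# expects when llm_config names no model at all.
try:
    create()
    message = None
except TypeError as exc:
    message = str(exc)

assert message is not None and re.search(r"missing .* required", message)
```

The `match` argument in `pytest.raises` applies a regular expression to the exception text in the same way `re.search` does here.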
