Update autogen.md #544

Merged 1 commit on Dec 1, 2023

124 changes: 62 additions & 62 deletions docs/autogen.md

In order to run this example on a local LLM, go to lines 46-66 in [examples/agen…]

For example, if you are using web UI, it will look something like this:
```python
# Non-MemGPT agents will still use local LLMs, but they will use the ChatCompletions endpoint
config_list = [
    {
        "model": "NULL",  # not needed
        "api_base": "http://127.0.0.1:5001/v1",  # notice port 5001 for web UI
        "api_key": "NULL",  # not needed
        "api_type": "open_ai",
    },
]

# MemGPT-powered agents will also use local LLMs, but they need additional setup (also they use the Completions endpoint)
config_list_memgpt = [
    {
        "preset": DEFAULT_PRESET,
        "model": None,  # not required for web UI, only required for Ollama, see: https://memgpt.readthedocs.io/en/latest/ollama/
        "model_wrapper": "airoboros-l2-70b-2.1",  # airoboros is the default wrapper and should work for most models
        "model_endpoint_type": "webui",
        "model_endpoint": "http://localhost:5000",  # notice port 5000 for web UI
        "context_window": 8192,  # the context window of your model (for Mistral 7B-based models, it's likely 8192)
    },
]
```
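
Note that the two agent types talk to different ports: plain AutoGen agents hit the OpenAI-compatible ChatCompletions endpoint on port 5001, while MemGPT agents hit the raw Completions endpoint on port 5000. If agents fail at startup, it is worth confirming both ports are actually listening. A minimal connectivity check (not part of the original example, using only the standard library and the hosts/ports from the config above):

```python
import socket

# Ports taken from the web UI config above: 5001 (ChatCompletions, non-MemGPT
# agents) and 5000 (Completions, MemGPT agents).
for host, port in [("127.0.0.1", 5001), ("localhost", 5000)]:
    try:
        with socket.create_connection((host, port), timeout=3):
            print(f"{host}:{port} is reachable")
    except OSError as err:
        print(f"{host}:{port} is NOT reachable: {err}")
```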

If you are using LM Studio, then you'll need to change the `api_base` in `config_list`, and `model_endpoint_type` + `model_endpoint` in `config_list_memgpt`:
```python
# Non-MemGPT agents will still use local LLMs, but they will use the ChatCompletions endpoint
config_list = [
    {
        "model": "NULL",
        "api_base": "http://127.0.0.1:1234/v1",  # port 1234 for LM Studio
        "api_key": "NULL",
        "api_type": "open_ai",
    },
]

# MemGPT-powered agents will also use local LLMs, but they need additional setup (also they use the Completions endpoint)
config_list_memgpt = [
    {
        "preset": DEFAULT_PRESET,
        "model": None,
        "model_wrapper": "airoboros-l2-70b-2.1",
        "model_endpoint_type": "lmstudio",
        "model_endpoint": "http://localhost:1234",  # port 1234 for LM Studio
        "context_window": 8192,
    },
]
```
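
Since the web UI and LM Studio configs differ only in the ports and the `model_endpoint_type`, you can factor the boilerplate out. The helper below is a hypothetical convenience (not part of MemGPT or its docs) that rebuilds both lists from a backend name, assuming `DEFAULT_PRESET` is imported as in the examples above:

```python
# Hypothetical helper (not part of MemGPT or AutoGen): rebuilds both config
# lists for a chosen local backend, using the same values as the examples above.
LOCAL_BACKENDS = {
    "webui": {"api_port": 5001, "endpoint_port": 5000},
    "lmstudio": {"api_port": 1234, "endpoint_port": 1234},
}

def make_local_configs(backend: str, context_window: int = 8192):
    ports = LOCAL_BACKENDS[backend]
    config_list = [
        {
            "model": "NULL",  # not needed for local backends
            "api_base": f"http://127.0.0.1:{ports['api_port']}/v1",
            "api_key": "NULL",  # not needed
            "api_type": "open_ai",
        },
    ]
    config_list_memgpt = [
        {
            "preset": DEFAULT_PRESET,  # assumes the same import as above
            "model": None,
            "model_wrapper": "airoboros-l2-70b-2.1",
            "model_endpoint_type": backend,
            "model_endpoint": f"http://localhost:{ports['endpoint_port']}",
            "context_window": context_window,
        },
    ]
    return config_list, config_list_memgpt

config_list, config_list_memgpt = make_local_configs("lmstudio")
```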

If you are using the OpenAI API (e.g. using `gpt-4-turbo` via your own OpenAI API account), then the `config_list` for the AutoGen agent and `config_list_memgpt` for the MemGPT AutoGen agent will look different (a lot simpler):
```python
# This config is for autogen agents that are not powered by MemGPT
config_list = [
    {
        "model": "gpt-4-1106-preview",  # gpt-4-turbo (https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turbo)
        "api_key": os.getenv("OPENAI_API_KEY"),
    }
]

# This config is for autogen agents that are powered by MemGPT
config_list_memgpt = [
    {
        "model": "gpt-4-1106-preview",  # gpt-4-turbo (https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turbo)
        "preset": DEFAULT_PRESET,
        "model_wrapper": None,
        "model_endpoint_type": None,
        "model_endpoint": None,
        "context_window": 128000,  # gpt-4-turbo
    },
]
```
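
Once defined, these lists are consumed the same way regardless of backend: `config_list` feeds ordinary AutoGen agents through their `llm_config`, while `config_list_memgpt` is passed to MemGPT's AutoGen adapter. A rough sketch of the wiring, assuming the `create_memgpt_autogen_agent_from_config` helper from `memgpt.autogen.memgpt_agent` (check `examples/agent_groupchat.py` in the MemGPT repo for the exact signature and arguments):

```python
import autogen
from memgpt.autogen.memgpt_agent import create_memgpt_autogen_agent_from_config

# Ordinary AutoGen agent: gets config_list via llm_config
user_proxy = autogen.UserProxyAgent(
    name="User_proxy",
    system_message="A human admin.",
    human_input_mode="TERMINATE",
    code_execution_config=False,  # no code execution in this sketch
)

# MemGPT-powered AutoGen agent: gets config_list_memgpt instead
# (exact keyword arguments may differ; see examples/agent_groupchat.py)
memgpt_agent = create_memgpt_autogen_agent_from_config(
    "MemGPT_agent",
    llm_config={"config_list": config_list_memgpt, "seed": 42},
    system_message="You are a helpful assistant.",
)

user_proxy.initiate_chat(memgpt_agent, message="Introduce yourself.")
```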

!!! warning "Making internal monologue visible to AutoGen"