
CLI Onboard is confusing when using local models #301

Closed
danx0r opened this issue Nov 4, 2023 · 3 comments

Comments

danx0r (Contributor) commented Nov 4, 2023

Describe the bug
Running memgpt run --no_verify and choosing a local model requires the user to respond yes to

Do you want to enable MemGPT with Open AI?

The user is then prompted to enter an OpenAI key, and the program will fail if one is not supplied (even though OpenAI is not being used).

The workaround for now is to type nonsense for the key, but this is very counter-intuitive.
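The "type nonsense" workaround amounts to seeding a placeholder key. A minimal sketch, assuming the prompt accepts any non-empty string and/or reads the standard OPENAI_API_KEY environment variable (an assumption; not verified against the MemGPT source):

```shell
# Assumption: the onboarding prompt accepts any non-empty key and/or reads
# the standard OPENAI_API_KEY variable. This value is a placeholder, not a real key.
export OPENAI_API_KEY="sk-placeholder-not-a-real-key"
# Then start MemGPT as usual (commented out here so the sketch is inert):
# memgpt run --no_verify
```

Since the key is never validated against OpenAI when a local backend is selected, any non-empty value appears to get past the prompt.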

To Reproduce
Steps to reproduce the behavior:
run memgpt from command line with no previous agent

Expected behavior
If you are running a local model, there should be no mention of OpenAI.

Actual behavior
The user has to say yes to enabling OpenAI and supply a key.


How did you install MemGPT?

  • With git clone git@github.com:cpacker/MemGPT.git and pip install -r requirements.txt

Your setup (please complete the following information)

  • Your OS (Linux, MacOS, Windows)
    • Ubuntu 22.04
  • Where you're trying to run MemGPT from
    • Terminal
  • Your python version (run python --version)
    • 3.10.12
  • If you installed from source:
    • commit d7a937

Local LLM details

If you are trying to run MemGPT with local LLMs, please provide the following information:

  • The exact model you're trying to use (link to the HuggingFace page you downloaded it from)
  • The local LLM backend you are using (web UI? LM Studio?)
    • webui
  • Your hardware for the local LLM backend (local computer? operating system? remote RunPod?)
    • aws EC2 g5.xlarge
  • Your hardware for the MemGPT command (same computer as the local LLM backend?)
    • aws EC2 g5.xlarge
  • The full output (or as much as possible) of where the LLM is failing
    • If you can include screenshots, even better!
      (screenshot attached: memgpt301)
danx0r (Contributor, issue author) commented Nov 4, 2023

It seems to work to just hit return when asked for the OpenAI key, but this is still confusing.

Anrock commented Nov 11, 2023

NB: it seems that since 2.1 (possibly 2.0) you now need to answer No to OpenAI if you're using ollama; otherwise it errors out on memgpt run, asking you to unset the ollama env vars since the model isn't set to local.
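The unset step the error reportedly asks for can be sketched as follows. The exact variable name used here (OLLAMA_BASE_URL) is an assumption; check the actual error text for the names it lists:

```shell
# Assumption: OLLAMA_BASE_URL is one of the "ollama env vars" the error
# mentions; substitute whatever names the error message actually prints.
unset OLLAMA_BASE_URL
# Re-run configure and answer "No" to the OpenAI prompt (inert here):
# memgpt configure
```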

cpacker (Collaborator) commented Dec 1, 2023

Should be clearer in the latest configure workflows. Please feel free to reopen / open a new issue if you think configure could use further changes.

cpacker closed this as completed Dec 1, 2023