Document how to use specific LLMs #417
Comments
+1 for ollama support and documentation.
I see GitHub still has the wiki feature. I don't see it on OpenDevin, but IIRC that's because it would need to be enabled for the project. What if we use that for a few model-specific pages? I don't remember if access can be opened up widely, but it's a wiki, so I'd assume so.
That seems reasonable to me
Can you update the supported tags for the .env with commands to connect to open LLMs (OpenAI / Ollama WebUI / Oobabooga, etc.)?
IMO, Ollama can be a nuisance for people who already have GGUF files, because it requires GGUFs to be converted. It also has no GUI, in contrast to koboldcpp and Ooba. I recommend using llama.cpp server, koboldcpp, and Ooba instead. Those are really easy-to-use inference programs.
Agreed. I also made a PR for setting LLM_BASE_URL with make setup-config. For example, it works with KoboldCpp by setting it like this:
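A minimal sketch of that kind of setting, assuming KoboldCpp is running locally with its OpenAI-compatible API on the default port 5001 (the port, model name, and openai/ prefix here are illustrative assumptions, not the exact values from the PR):

```toml
# Hypothetical config.toml entries for a local KoboldCpp server.
# KoboldCpp listens on port 5001 by default; adjust if you launched it with a different --port.
LLM_MODEL="openai/koboldcpp"              # any name works behind an OpenAI-compatible API
LLM_API_KEY="na"                          # local servers usually don't check the key
LLM_BASE_URL="http://localhost:5001/v1"   # KoboldCpp's OpenAI-compatible endpoint
```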
I made a markdown file for Ollama: #615.
I use oobabooga with the OpenAI-compatible API, and am currently trying OpenCodeInterpreter-DS-33B (exl2) as the model first. I'm not sure yet which non-OpenAI/Claude model works best... config.toml: After starting a task I currently get these kinds of errors sometimes during the process: ERROR: ###############
Here is a guide for the oobabooga webui.
Getting a similar error using the following config on macOS.

config.toml:
LLM_MODEL="ollama/deepseek-coder:instruct"
LLM_API_KEY="ollama"
LLM_EMBEDDING_MODEL="local"
LLM_BASE_URL="http://localhost:11434"
WORKSPACE_DIR="./workspace"
Hey, use openai/modelname, with LLM_API_KEY="na". Although I don't think the model name really matters for oobabooga: as long as it starts with openai/ it can be anything, at least for me it works; I can switch models without changing the config and it keeps working.
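Putting that advice together, a minimal sketch of a config.toml for oobabooga's OpenAI-compatible API — the port 5000 and the placeholder model name are assumptions, so use whatever your oobabooga instance actually exposes:

```toml
# Hypothetical config.toml for text-generation-webui (oobabooga) with its OpenAI-compatible API.
# The name after "openai/" is largely cosmetic; the loaded model is whatever oobabooga serves.
LLM_MODEL="openai/anything"
LLM_API_KEY="na"                          # no real key needed for a local server
LLM_EMBEDDING_MODEL="local"
LLM_BASE_URL="http://localhost:5000/v1"   # oobabooga's OpenAI-compatible endpoint (default port 5000)
WORKSPACE_DIR="./workspace"
```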
@Rags100 this isn't really the right place for your question. Please follow the README, and search our issues for related problems (there's an existing one for uvloop--you'll need WSL to make it work). If you continue to have trouble, feel free to file a new issue with the template filled out. Thanks!
Hey all--lots of unrelated comments in this thread. Please try to keep this about LLM model documentation. I'm going to delete the unrelated comments--feel free to open new issues if you're having trouble.
Documentation for Azure: #1035
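For orientation, a hedged sketch of what an Azure OpenAI config can look like using LiteLLM-style model names; the placeholder values and exact variable names are assumptions here, so defer to the docs linked in #1035:

```toml
# Hypothetical config.toml for Azure OpenAI (all values are placeholders).
LLM_MODEL="azure/<your-deployment-name>"                  # LiteLLM's azure/ prefix + your deployment
LLM_API_KEY="<your-azure-openai-key>"
LLM_BASE_URL="https://<your-resource>.openai.azure.com"
# Azure also requires an API version; how it is supplied (env var vs. config key) may differ,
# so check the documentation from #1035.
```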
Added documentation for using Google's Gemini model through AI Studio as well as VertexAI through GCP: #1321
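As a rough sketch of the AI Studio path (LiteLLM's gemini/ prefix with an API key from Google AI Studio); the model name and key handling here are assumptions, so see #1321 for the authoritative steps:

```toml
# Hypothetical config.toml for Gemini via Google AI Studio (placeholders, not taken from #1321).
LLM_MODEL="gemini/gemini-pro"            # LiteLLM's prefix for Google AI Studio models
LLM_API_KEY="<your-ai-studio-api-key>"
# The VertexAI route instead uses a vertex_ai/ model prefix and GCP credentials;
# see the linked documentation for that setup.
```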
We've made a lot of progress on this one, so I'm going to close it. More docs welcome!
What problem or use case are you trying to solve?
Lots of folks are struggling to get OpenDevin working with non-OpenAI models. Local Ollama seems to be particularly hard.
Describe the UX of the solution you'd like
We should have a doc that lists out 3-4 major providers, explains how to get an API key, and how to configure OpenDevin.
Do you have thoughts on the technical implementation?
Just a Models.md that we can link to from README.md.
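A sketch of the pattern such a Models.md could document per provider. The common thread in this issue is that every backend reduces to a provider-prefixed LiteLLM model name, an API key, and (for local or self-hosted backends) a base URL; the values below are illustrative placeholders, not a definitive template:

```toml
# Generic shape of the per-provider config (placeholder values).
LLM_MODEL="<provider>/<model-name>"      # e.g. ollama/..., openai/..., azure/..., gemini/...
LLM_API_KEY="<key-or-dummy-value>"       # local servers typically accept any string
LLM_BASE_URL="<provider-or-local-endpoint>"   # only needed for local/self-hosted backends
```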