Ollama support issue. #635
Comments
EDIT: a guide for Ollama has since been added.

This config was working for me before today's Devin update; it is not working with the latest update, though I don't know whether the config is the cause. LLM_API_KEY="ollama"

Also make sure to start ollama serve after loading the model, and that you are using the correct Ollama server port. If the server is already running, load the model and then kill the server process. I use sudo fuser -k -n tcp 11434 to kill it, but I'm on Ubuntu.

I also tried it on Windows using WSL and wasn't able to get it to work, since WSL uses a virtual network. There is a workaround: create a WSL config file that mirrors your host network. It didn't work for me, though it did for someone else, so you have to try it yourself. Open the WSL config file C:\Users\%username%\.wslconfig (create it if it doesn't exist) and add a [wsl2] section. If your Ollama server is listening on 0.0.0.0:port, also change the Makefile so that the start-backend and start-frontend targets pass --host 0.0.0.0. Finally, if you're doing this from cmd, use a WSL terminal (for example the Anaconda command prompt on Windows) instead.
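Since several of the problems above come down to whether the Ollama server is actually reachable on the expected host and port (especially from inside WSL), a quick connectivity check can save time. The sketch below is an illustrative addition, not part of OpenDevin: it assumes Ollama's default port 11434 and a localhost server, so adjust the host if you are probing the Windows host from inside WSL.

```python
import socket

def ollama_reachable(host: str = "localhost", port: int = 11434) -> bool:
    """Return True if something accepts TCP connections on host:port.

    11434 is Ollama's default port; from inside WSL without mirrored
    networking you may need the Windows host's IP instead of localhost.
    """
    try:
        with socket.create_connection((host, port), timeout=2):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    print("Ollama reachable:", ollama_reachable())
```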
I tested this yesterday and again this morning using Ollama locally with OpenDevin patch-11, and it works. However, it fails when I switch to patch-12 and the main branch.
@imtpalmer An updated guide got merged this morning here |
Describe the bug
When trying to configure OpenDevin to run with Ollama, requests are sent to the Ollama server with the wrong endpoint path.
The POST request should look like this:
"POST /chat/completions HTTP/1.1"
Setup and configuration
Current version:
commit 5c640c99cafb3c718dad60f377f3a725a8bab1de (HEAD -> local-llm-flag, origin/main, origin/HEAD, main)
My config.toml and environment vars (be sure to redact API keys):
My model and agent (you can see these settings in the UI):
Commands I ran to install and run OpenDevin:
Steps to Reproduce:
In opendevin/llm/llm.py, in __init__, replace
self.model = model if model else DEFAULT_MODEL_NAME
with
self.model_name = DEFAULT_MODEL_NAME
(a sketch of this change follows after these steps)
litellm --model ollama/starcoder2:15b --port 8000
make build
then make start-backend
and make start-frontend
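For clarity, the one-line change described in the first step amounts to something like the sketch below. Only the two quoted assignment lines come from this issue; the surrounding class skeleton and the DEFAULT_MODEL_NAME value are assumptions for illustration, not the actual contents of opendevin/llm/llm.py.

```python
DEFAULT_MODEL_NAME = "ollama/starcoder2:15b"  # assumed value for illustration

class LLM:
    def __init__(self, model=None):
        # Original line:
        #   self.model = model if model else DEFAULT_MODEL_NAME
        # Replaced with the following, which forces the default model name
        # regardless of what is passed in:
        self.model_name = DEFAULT_MODEL_NAME
```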
Logs, error messages, and screenshots:
This is a log from the backend server running from make start-backend.
Steps 0-99 all look the same.
Additional Context
LiteLLM for local models expects API calls in the following format:
From:
http://localhost:8000/#/
I know that the problem is that whatever manages the API calls is set to call /api/generate/, because that is the Ollama convention, but the local server does not support that path. I do not know where to look to fix this; any ideas? The server responds when I test it like this:
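Assuming the check was made directly against Ollama's native API (which does accept /api/generate on its default port 11434), a test would look roughly like this sketch. This is an illustrative addition; the model name and prompt are assumptions.

```python
import json
import urllib.request

# Direct call to Ollama's native generate endpoint (not the LiteLLM proxy);
# with "stream": False Ollama returns a single JSON object.
payload = {
    "model": "starcoder2:15b",  # assumed; any locally pulled model works
    "prompt": "Say hello",
    "stream": False,
}
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```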