* doc: Guide for using local LLM with Ollama
# Local LLM Guide with Ollama and LiteLLM

This is a guide to using local LLMs with Ollama and LiteLLM.
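Not covered in the steps below, but assumed throughout: Ollama is installed with at least one model pulled, and LiteLLM is installed in the Python environment you will run it from. A minimal sketch of those prerequisites (the model name `codellama` is only an example):

```
# Install Ollama on Linux (see https://ollama.com for other platforms)
curl -fsSL https://ollama.com/install.sh | sh

# Pull a model to run locally (any model from https://ollama.com/library works)
ollama pull codellama

# Install LiteLLM; some versions need the proxy extra for the `litellm` CLI
pip install litellm
# pip install 'litellm[proxy]'
```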
## 1. Follow the default installation:
```
git clone git@github.com:OpenDevin/OpenDevin.git
```
or
```
git clone git@github.com:<YOUR-USERNAME>/OpenDevin.git
```

then `cd OpenDevin`
## 2. Run setup commands:
```
make build
make setup-config
```
## 3. Modify the config file:

- After running `make setup-config`, you will see a generated file called `config.toml` in `OpenDevin/`.

- Open this file and modify it to your needs based on this template:

```
LLM_API_KEY="0"
LLM_MODEL="ollama/<model_name>"
LLM_BASE_URL="http://localhost:<port_number>"
WORKSPACE_DIR="./workspace"
```
`<port_number>` can be whatever you want; just make sure it is not already in use by anything else.

Ollama model names can be found [here](https://ollama.com/library).

Example:
![alt text](images/ollama.png)

Note: The API key does not matter, and the base URL needs to be `localhost` with the port number you intend to use with LiteLLM. By default this is `11434`.
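As a concrete illustration (not part of the original guide), a filled-in `config.toml` using the default port `11434` and a hypothetical `codellama` model would look like this:

```
LLM_API_KEY="0"
LLM_MODEL="ollama/codellama"
LLM_BASE_URL="http://localhost:11434"
WORKSPACE_DIR="./workspace"
```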
## 4. Run LiteLLM in the CLI:

- There are two options for this:

#### 1. Run LiteLLM in a Linux terminal:
```
conda activate <env_name>
litellm --model ollama/<model_name>
```
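If you want LiteLLM to listen on the specific `<port_number>` you put in `config.toml` rather than its default, the same `--port` flag used in the batch script of option 2 below also works here, for example:

```
conda activate <env_name>
litellm --model ollama/<model_name> --port <port_number>
```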
#### 2. Create a batch script:
- The example below assumes the use of Miniconda3 with default install settings; you will need to change the path to your `conda.sh` file if you use something else.
```
start /B wsl.exe -d <DISTRO_NAME> -e bash -c "source ~/miniconda3/etc/profile.d/conda.sh && conda activate <ENV_NAME> && litellm --model ollama/<MODEL_NAME> --port <PORT>"
```
- The above script will spawn a WSL instance in your cmd terminal, activate your conda environment, and then run the LiteLLM command with your model and port number.
- Make sure you fill in all the `<>` placeholders with the appropriate names.

Either way you do it, you should see something like this to confirm you have started the server:
![alt text](images/example.png)
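If you prefer a text check over the screenshot, one option (not from the original guide) is to send the proxy an OpenAI-style chat completion request, assuming it is listening on the `<port_number>` you chose:

```
curl http://localhost:<port_number>/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "ollama/<model_name>", "messages": [{"role": "user", "content": "Hello"}]}'
```

A JSON response containing a `choices` field indicates that both the LiteLLM server and the underlying Ollama model are reachable.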
## 5. Start OpenDevin:

At this point everything should be set up and working properly.
1. Start by running the LiteLLM server using one of the methods outlined above.
2. Run `make build` in your terminal in `~/OpenDevin/`.
3. Run `make run` in your terminal.
4. If that fails, try running the server and the frontend in separate terminals:
   - In the first terminal, run `make start-backend`.
   - In the second terminal, run `make start-frontend`.
5. You should now be able to connect to `http://localhost:3001/` with your local model running!
Comment on 08a2dfb:
Interesting, I have been running Ollama separately, not using LiteLLM.
Comment on 08a2dfb:
Here is a guide for the Oobabooga Web-UI:

**Local LLM Guide with Oobabooga web-ui and LiteLLM**

1. Follow the default installation:
```
git clone git@github.com:OpenDevin/OpenDevin.git
```
or
```
git clone git@github.com:<YOUR-USERNAME>/OpenDevin.git
```
```
make build
```

2.1 Start the Oobabooga Web-UI, move to the "Session" tab, set the "openai" and "listen" flags, and hit "Apply flags/extensions and restart". In the terminal you should see something like: *(screenshot omitted)*

3. Modify the config file: after running `make setup-config` you will see a generated file called `config.toml` in `OpenDevin/`. Open this file and modify it to your needs based on this template: *(template omitted)*

   Note: The API key does not matter and the base URL needs to be `localhost` with the port number you intend to use with LiteLLM. By default this is `11434`.

4. In case your Oobabooga Web-UI openai API is running on 0.0.0.0

5. Start OpenDevin: at this point everything should be set up and working properly.
   - Run `make build` in your terminal in `~/OpenDevin/`
   - Run `make run` in your terminal (or `make start-backend` and `make start-frontend` in separate terminals)
   - You should now be able to connect to `http://localhost:3001` with your local model running!
Comment on 08a2dfb:
@ajeema Can you share a bit more on what you are running?

I have tried it using `ollama serve` and I get 404 errors. I posted this issue earlier (#635). Are you getting the same results?
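One quick way to rule out the Ollama side when debugging 404s like this (a generic suggestion, not something from the thread) is to ask the local Ollama server which models it actually has via its `/api/tags` endpoint; if the model named in `config.toml` is not in that list, requests routed to it will likely fail:

```
# Check that the local Ollama server is reachable and list the models it has pulled
curl http://localhost:11434/api/tags
```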