Guide for Ollama local LLM (#615)
* doc: Guide for using local LLM with Ollama
JayQuimby authored Apr 3, 2024
1 parent d397a20 commit 08a2dfb
Show file tree
Hide file tree
Showing 3 changed files with 73 additions and 0 deletions.
73 changes: 73 additions & 0 deletions opendevin/llm/LOCAL_LLM_GUIDE.md
@@ -0,0 +1,73 @@
# Local LLM Guide with Ollama and LiteLLM

- This is a guide to using local LLMs with Ollama and LiteLLM.

## 1. Follow the default installation:
```
git clone git@github.com:OpenDevin/OpenDevin.git
```
or
```
git clone git@github.com:<YOUR-USERNAME>/OpenDevin.git
```

then `cd OpenDevin`

## 2. Run setup commands:
```
make build
make setup-config
```

## 3. Modify config file:

- After running `make setup-config`, you will see a generated file called `config.toml` in `OpenDevin/`.

- Open this file and modify it to your needs based on this template:

```
LLM_API_KEY="0"
LLM_MODEL="ollama/<model_name>"
LLM_BASE_URL="http://localhost:<port_number>"
WORKSPACE_DIR="./workspace"
```
`<port_number>` can be whatever you want; just make sure it is not already in use by anything else.

Ollama model names can be found [here](https://ollama.com/library).
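
If the model is not yet on your machine, you can download it first with the Ollama CLI (a quick sketch; `codellama` is only a placeholder model name, not one prescribed by this guide):
```
ollama pull codellama
ollama list
```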

Example:
![alt text](images/ollama.png)

Note: The API key value does not matter, and the base URL needs to be `localhost` with the port number you intend to use with LiteLLM. By default this is `11434`, which is the port Ollama itself serves on.
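
For example, a filled-in `config.toml` might look like this (the `codellama` model name and default port are illustrative; substitute whichever Ollama model and port you are using):
```
LLM_API_KEY="0"
LLM_MODEL="ollama/codellama"
LLM_BASE_URL="http://localhost:11434"
WORKSPACE_DIR="./workspace"
```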

## 4. Run LiteLLM from the CLI:

- There are two options for this:

#### 1. Run LiteLLM in a Linux terminal:
```
conda activate <env_name>
litellm --model ollama/<model_name>
```
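
For example, assuming a conda environment named `opendevin` and the `codellama` model (both names are placeholders, not values from this guide):
```
conda activate opendevin
litellm --model ollama/codellama
```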

#### 2. Create a batch script:
- The example below assumes Miniconda3 with default install settings; if you use something else, change the path to point to your `conda.sh` file.
```
start /B wsl.exe -d <DISTRO_NAME> -e bash -c "source ~/miniconda3/etc/profile.d/conda.sh && conda activate <ENV_NAME> && litellm --model ollama/<MODEL_NAME> --port <PORT>"
```
- The above script spawns a WSL instance in your cmd terminal, activates your conda environment, and then runs the LiteLLM command with your model and port number.
- Make sure you fill in all of the `<>` placeholders with the appropriate names.

Either way, you should see something like this confirming the server has started:
![alt text](images/example.png)
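
As an optional sanity check (not part of the original steps), you can confirm that Ollama itself is up and has your model before starting OpenDevin; `/api/tags` is Ollama's endpoint for listing locally available models:
```
curl http://localhost:11434/api/tags
```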

## 5. Start OpenDevin:

At this point everything should be set up and working properly.
1. Start by running the LiteLLM server using one of the methods outlined above.
2. Run `make build` in your terminal from `~/OpenDevin/`.
3. Run `make run` in your terminal.
4. If that fails, try running the backend and frontend in separate terminals:
   - In the first terminal: `make start-backend`
   - In the second terminal: `make start-frontend`
5. You should now be able to connect to `http://localhost:3001/` with your local model running!
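
Put together, a typical session might look roughly like this (model name and paths are illustrative, and the fallback commands go in separate terminals):
```
# Terminal 1: start the LiteLLM server
litellm --model ollama/codellama

# Terminal 2: build and launch OpenDevin
cd ~/OpenDevin
make build
make run   # or run `make start-backend` and `make start-frontend` separately if this fails
```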
Binary file added opendevin/llm/images/example.png
Binary file added opendevin/llm/images/ollama.png

3 comments on commit 08a2dfb

@ajeema (Contributor) commented on 08a2dfb, Apr 3, 2024:
interesting, I have been running ollama separately, not using litellm

@stratte89 commented on 08a2dfb, Apr 3, 2024:

Here is a guide for the Oobabooga web UI:

# Local LLM Guide with Oobabooga Web UI and LiteLLM

## 1. Follow the default installation:
```
git clone git@github.com:OpenDevin/OpenDevin.git
```
or
```
git clone git@github.com:<YOUR-USERNAME>/OpenDevin.git
```

then `cd OpenDevin`
## 2. Run setup commands:
```
make build
```

### 2.1 Start the Oobabooga web UI, go to the "Session" tab, enable "openai" and "listen", and hit "Apply flags/extensions and restart".

In the terminal you should see something like:
```
18:36:56-533136 INFO     OpenAI-compatible API URL:
                         http://0.0.0.0:5000
```
`http://0.0.0.0:5000` is your `LLM_BASE_URL`.

### 2.2 Run `make setup-config`

## 3. Modify config file:

- After running `make setup-config` you will see a generated file called `config.toml` in `OpenDevin/`.

- Open this file and modify it to your needs based on this template:

```
LLM_API_KEY="na"
LLM_BASE_URL="http://0.0.0.0:5000/v1" # as explained, use the OpenAI-compatible base URL shown in the Oobabooga terminal
LLM_MODEL="openai/alpindale_Mistral-7B-v0.2-hf" # example model (folder name)
LLM_EMBEDDING_MODEL="local"
MAX_ITERATIONS=10000 # maximum number of steps OpenDevin will run
WORKSPACE_DIR="./workspace"
```

Note: The API key does not matter; the base URL needs to match the OpenAI-compatible API URL shown in the Oobabooga terminal (here `http://0.0.0.0:5000/v1`).

## 4. In case your Oobabooga web UI OpenAI API is running on 0.0.0.0

- Open the `Makefile` and replace the old lines with this:
```
# Start backend
start-backend:
	@echo "Starting backend..."
	@python -m pipenv run uvicorn opendevin.server.listen:app --port $(BACKEND_PORT) --host 0.0.0.0

# Start frontend
start-frontend:
	@echo "Starting frontend..."
	@cd frontend && BACKEND_HOST=$(BACKEND_HOST) FRONTEND_PORT=$(FRONTEND_PORT) npm run start -- --host
```
- This adds the `--host 0.0.0.0` and `-- --host` parameters so that `0.0.0.0` is used instead of `http://127.0.0.1:<Port>` / `http://localhost:<Port>`.

## 5. Start OpenDevin:

At this point everything should be set up and working properly.

1. Start by running your model server (Oobabooga with the OpenAI-compatible API enabled) as set up above.
2. Run `make build` in your terminal from `~/OpenDevin/`.
3. Run `make run` in your terminal.
4. If that fails, try running the backend and frontend in separate terminals:
   - In the first terminal: `make start-backend`
   - In the second terminal: `make start-frontend`
5. You should now be able to connect to `http://localhost:3001` with your local model running!

@JayQuimby (Contributor, Author) commented on 08a2dfb, Apr 3, 2024:

> interesting, I have been running ollama separately, not using litellm

@ajeema Can you share a bit more on what you are running?

I have tried it using `ollama serve` and I get 404 errors:

*(screenshot of the 404 errors)*

I posted this issue earlier in #635; are you getting the same results?
