
2.3.7 Satellite: Open Interpreter


Handle: opint URL: -


Open Interpreter lets LLMs run code (Python, JavaScript, Shell, and more) locally. You can chat with Open Interpreter through a ChatGPT-like interface in your terminal.

Starting

Note that Harbor uses the shortened opint service handle. For the CLI, you are free to use either the official interpreter name or the opint alias.
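
If the alias is wired up as described, the two forms below are interchangeable; pick whichever you prefer:

# Both invoke the same Open Interpreter service
harbor interpreter --help
harbor opint --help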

Harbor lets you run interpreter as if it was installed on your local machine. A big disclaimer is that Harbor only supports the interpreter features that are compatible with the Docker runtime. The official Docker Integration guide outlines those nicely.

We'll refer to the service as opint from now on.

# Pre-build the image for convenience
harbor build opint

# opint is only configured to run alongside
# an LLM backend service (ollama, litellm, mistral.rs).
# Check that at least one of them is running,
# otherwise you'll see connection errors
harbor ps

# See official CLI help
harbor opint --help
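
As a quick end-to-end sanity check, a first session might look like the sketch below (the project folder is just a placeholder, and ollama is assumed to be one of your default services):

# Start the backend if it isn't running yet
harbor up ollama

# Launch the chat interface from the folder you want
# Open Interpreter to work with
cd ~/projects/demo
harbor opint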

Configuration

Profiles

See official profiles doc

# See where profiles are located on the host
# Modify the profiles as needed
harbor opint profiles

# Ensure that a specific model is unset
# before setting the profile
harbor opint model ""
harbor opint args --profile <name>

# [Alternative] Set via opint.cmd config
# Note, it resets .model and .args
harbor opint cmd --profile <name>
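
Putting the profile flow together, switching to a hypothetical profile named local (the name is only an example) could look like this:

# Clear any explicitly set model, then point opint at the profile
harbor opint model ""
harbor opint args --profile local

# Start a session with that profile active
harbor opint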
Ollama

opint is pre-configured to run with ollama when it is also running.

# 0. Check your current default services
# ollama should be one of them
# See ollama models you have available
harbor defaults
harbor ollama models

# 1.1 You want to choose as big of a model
# as you can afford for the best experience
harbor opint model codestral

# Execute in the target folder
harbor opint
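
If the model isn't available locally yet, you can likely pull it through the same proxied CLI first (assuming harbor ollama forwards its arguments to the ollama binary inside the container):

# Download the model before pointing opint at it
harbor ollama pull codestral

# Confirm it shows up in the list
harbor ollama models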
vLLM
# [Optional] If running __multiple__ backends
# at a time, you'll need to point opint to one of them
harbor opint backend vllm

# Set opint to use one of the models from
# /v1/models endpoint of the backend
harbor opint model google/gemma-2-2b-it

# Execute in the target folder
harbor opint
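
To double-check which model IDs the backend actually serves before setting one, you can query its OpenAI-compatible endpoint directly (assuming harbor url prints the service URL, as it does for other Harbor services):

# List the models exposed by the vllm backend
curl "$(harbor url vllm)/v1/models"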
Other backends

To check whether a backend is integrated with opint, look up the compose.x.opint.<backend>.yml file in the Harbor workspace.
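
For example, a quick way to list the integrated backends (assuming harbor home prints the workspace path; adjust to your setup if it differs):

# Backends with an opint cross-service compose file
ls "$(harbor home)"/compose.x.opint.*.yml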

The setup is identical to vllm:

  • if running multiple backends, ensure that opint is pointed to one of them
  • ensure that opint is configured to use one of the models from the backend's OpenAI API