Add support for local models (OpenAI compatible) #157

Draft
wants to merge 2 commits into main

Conversation

browningluke

Using any of the current LLMs for translation is expensive, since API calls are billed per token. This PR adds an option to use a "Local OpenAI server" instead.

This should allow locally hosted LLMs to be used via tools such as vLLM or Jan. I have had success translating pages with Llama-3.1-8B-Instruct through Jan. Note that I have not tried vLLM, since I am running an M1 Mac, but it should work; verification from anyone able to test it would be much appreciated.

Addresses issue #150

This PR should be functional, but it is marked as a draft because the README still needs to be updated.
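For context, this is roughly how an OpenAI-compatible local server is addressed with the openai Python client; the base URL, API key, and model name below are illustrative values for a local Jan/vLLM-style endpoint, not values taken from this PR:

from openai import OpenAI

# Sketch only: point the standard OpenAI client at a locally hosted,
# OpenAI-compatible server. URL and model name are placeholders.
client = OpenAI(
    base_url="http://localhost:1337/v1",  # example local endpoint
    api_key="not-needed",                 # local servers typically ignore the key
)

response = client.chat.completions.create(
    model="llama3.1-8b-instruct",
    messages=[
        {"role": "system", "content": "You are a translation assistant."},
        {"role": "user", "content": "Translate this text to English."},
    ],
)
print(response.choices[0].message.content)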

Comment on lines +111 to +114
message = [
{"role": "system", "content": system_prompt},
{"role": "user", "content": user_prompt}
]
Author

This may be a Jan-specific issue, but requests in the format {"type": "text", "text": system_prompt} do not work, since only plain text content is supported (images are not). However, I left the image option toggle and the corresponding request format alone, in case other OpenAI-compatible servers do support images.
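To make the distinction concrete, a rough sketch of the two request shapes (variable names are illustrative):

# Plain-string content: accepted by Jan and other text-only servers.
text_only = {"role": "system", "content": system_prompt}

# Typed content parts: the shape OpenAI uses for multimodal requests;
# in my testing Jan rejects this even when every part is text.
typed_parts = {
    "role": "system",
    "content": [{"type": "text", "text": system_prompt}],
}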

Comment on lines +259 to +268
base_url = None

if 'Local OpenAI' in translator_key:
base_url = self.settings.ui.llm_widgets['local_oai_url_input'].text()

if not base_url and translator_key == "Local OpenAI Server":
raise ValueError(f"Base URL not found for translator: {translator_key}")

return base_url
Author

Added logic to pull the base_url in translator.py and pass it to the get_llm_client() helper, rather than having the helper reach into the settings pane itself.
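In other words, the helper receives the URL as an argument instead of reading settings; a minimal sketch of that shape, assuming an OpenAI-style client (the actual get_llm_client() signature may differ):

def get_llm_client(translator_key, api_key, base_url=None):
    # Sketch only: build the client from arguments so the helper does not
    # need to access the settings pane.
    from openai import OpenAI
    if base_url:
        return OpenAI(api_key=api_key or "not-needed", base_url=base_url)
    return OpenAI(api_key=api_key)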


local_oai_model_input = MLineEdit()
local_oai_model_input.setFixedWidth(400)
local_oai_model_input.setPlaceholderText("llama3.1-8b-instruct")
Author

Using llama3.1-8b-instruct as the default (placeholder) model name.
