Support using ollama as an inline_completion_provider #15968
Comments
Some considerations that come to mind regarding this, keeping in mind the current support for Ollama:

In contrast to something like GitHub Copilot, whose entire purpose is to provide inline completions, some of the most popular models used with Ollama do not support the suffix-based insertion that inline completion requires (see the error example below). For a model that does support it, such as deepseek-coder-v2, we would send a request to Ollama's `/api/generate` endpoint such as:

```json
{
  "model": "deepseek-coder-v2:latest",
  "prompt": "time := time.",
  "suffix": " return time;",
  "options": {
    "temperature": 0
  },
  "keep_alive": -1,
  "stream": false
}
```

And get back a response such as:

```json
{
  "model": "deepseek-coder-v2:latest",
  "created_at": "2024-09-07T14:28:17.013718016Z",
  "response": "Now().Unix()\n", // our inline completion, inserted between prompt and suffix
  "done": true,
  "done_reason": "stop"
  // metadata fields omitted
}
```

For models that do not support inline completions, the same request results in an error, e.g.:

```json
{
  "error": "llama3.1:latest does not support insert"
}
```

However, it would be desirable to use, say, one model for chat in the assistant panel and a different, completion-capable model for inline completions, so the two would need to be configurable separately.

Another consideration is how much sense it would make to support remote Ollama instances for inline completions. For what it's worth, I've got Ollama running both locally on my laptop and on a server on my LAN to get access to more powerful models.
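To make the configuration concern concrete, here is a rough sketch of how separate chat and completion models (and a remote instance) might be expressed in Zed's settings.json. The `language_models.ollama.api_url` key is, if I recall the current settings shape correctly, how the assistant is pointed at an Ollama instance today; the `inline_completion` block is purely hypothetical and only illustrates the kind of separation being asked for:

```json
{
  "language_models": {
    "ollama": {
      // points the assistant at a local or remote Ollama instance
      "api_url": "http://ollama.lan:11434"
    }
  },
  // hypothetical: a separate, completion-capable model for inline completions
  "inline_completion": {
    "provider": "ollama",
    "model": "deepseek-coder-v2:latest"
  }
}
```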
This repo was mentioned in #14134: ollama-copilot (https://github.com/bernardo-bruning/ollama-copilot), a proxy that allows you to use Ollama as a Copilot-like completion provider, similar to GitHub Copilot. Written in Go.
As mentioned in #16030, to address your concern @MatejLach, we should be able to configure an autocomplete model alongside a chat model. Copy-pasting from that issue, here's an example of the config.json in Continue.dev to change the autocomplete model:
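Something along these lines, assuming Continue.dev's documented `tabAutocompleteModel` config key; the title and model name are illustrative:

```json
{
  "tabAutocompleteModel": {
    "title": "DeepSeek Coder V2 (Ollama)",
    "provider": "ollama",
    "model": "deepseek-coder-v2:latest"
  }
}
```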
Check for existing issues
Describe the feature
I am successfully using my local Ollama models via the assistant panel. I would love to be able to use them as an `inline_completion_provider` as well. Currently, only the `none`, `copilot`, and `supermaven` values are supported.

If applicable, add mockups / screenshots to help present your vision of the feature
No response
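For reference, this is roughly where that value lives in settings.json today, using one of the currently supported providers; an `ollama` value here is what this issue asks for and does not exist yet:

```json
{
  "features": {
    // currently accepts "none", "copilot" or "supermaven"
    "inline_completion_provider": "supermaven"
  }
}
```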