A Neovim plugin that provides a floating window interface for interacting with LLMs like Claude, GPT-4, and local models via Ollama.
Warning
This project is for personal use and in active development. Use at your own risk!
The code may also be questionable, as this is my first time dabbling in Lua and plugin development 😅
- Floating window interface with markdown support
- Use a visual selection or the current line as the prompt
- Streaming responses from different LLM providers
- Local model support using Ollama
- Configurable window dimensions and keybindings
Using lazy.nvim:

```lua
return {
  "saibayadon/llm-buffer.nvim",
  dependencies = { "nvim-lua/plenary.nvim" },
  name = "llm-buffer.nvim",
  config = function()
    require("llm-buffer").setup()
  end,
}
```
Ensure you have an API key for your chosen provider set as an environment variable (Ollama runs locally and needs no key):

```sh
export ANTHROPIC_API_KEY=sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
```
```lua
require("llm-buffer").setup({
  window_width = 0.8, -- Width as a percentage of the editor width
  window_height = 0.8, -- Height as a percentage of the editor height
  anthropic_api_key = os.getenv("ANTHROPIC_API_KEY"),
  openai_api_key = os.getenv("OPENAI_API_KEY"),
  ollama_api_host = "http://localhost:11434",
  provider = "anthropic", -- "anthropic", "openai", or "ollama"
  model = "claude-3-5-sonnet-20241022", -- Model to use for the selected provider
  mappings = {
    send_prompt = "<C-l>",
    close_window = "q",
    toggle_window = "<leader>llm",
  },
})
```
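For example, a local-only setup using Ollama might look like the following sketch (the model name is an assumption; use whichever model you have pulled locally):

```lua
-- Minimal sketch of a local-only configuration using Ollama.
-- "llama3.1" is an assumed example model, not something the plugin ships with.
require("llm-buffer").setup({
  provider = "ollama",
  ollama_api_host = "http://localhost:11434", -- default Ollama host
  model = "llama3.1",
})
```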
- `<leader>llm`: Toggle the LLM buffer window.
- `<C-l>`: Send the current line or visual selection as the prompt.
- `q`: Close the buffer window.
- `<Esc>`: Cancel an ongoing request (closing the buffer will do this as well).
You can also use the `:LLMBuffer` command to toggle the window.
To swap providers on the fly, use the `update_options` function:

```lua
require("llm-buffer").update_options({
  provider = "openai",
  model = "gpt-4o-mini",
})
```
You can bind this to a keymap like `<leader>llo` to quickly switch between providers.
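As a sketch, such a binding might look like this (the `<leader>llo` mapping and the model names are illustrative choices, not part of the plugin):

```lua
-- Hypothetical keymap that toggles between Anthropic and OpenAI.
-- Provider/model values are assumptions; substitute your own.
local current = "anthropic"
vim.keymap.set("n", "<leader>llo", function()
  if current == "anthropic" then
    current = "openai"
    require("llm-buffer").update_options({ provider = "openai", model = "gpt-4o-mini" })
  else
    current = "anthropic"
    require("llm-buffer").update_options({ provider = "anthropic", model = "claude-3-5-sonnet-20241022" })
  end
  vim.notify("llm-buffer provider: " .. current)
end, { desc = "Toggle llm-buffer provider" })
```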
MIT