Releases · taketwo/llm-ollama
0.7.1
- Update plugin internals to be compatible with the latest 0.4.0 release of the Ollama Python library
0.7.0
- Add support for text embedding. Example usage: `llm embed -m mxbai-embed-large -i README.md` (a Python API sketch follows this list)
- Do not register embedding-only models (such as `mxbai-embed-large`) for prompting and chatting
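For use beyond the CLI, here is a minimal sketch of the equivalent call through the `llm` Python API, assuming `mxbai-embed-large` has already been pulled into the local Ollama instance:

```python
import llm

# Resolve the embedding model registered by the plugin.
embedding_model = llm.get_embedding_model("mxbai-embed-large")

# embed() returns the embedding vector as a list of floats.
vector = embedding_model.embed("Ollama makes it easy to run models locally.")
print(len(vector))
```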
0.6.0
- Add support for image attachments. Example usage: `llm -m llava "Describe this image" --attachment image.jpg`
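A rough sketch of the same call through the `llm` Python API, assuming an `llm` version with attachment support and that the `llava` model and `image.jpg` are available locally:

```python
import llm

model = llm.get_model("llava")

# Attach a local image file to the prompt; a remote image could be passed
# via llm.Attachment(url=...) instead.
response = model.prompt(
    "Describe this image",
    attachments=[llm.Attachment(path="image.jpg")],
)
print(response.text())
```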
0.5.0
- Add support for forcing the model to reply with a valid JSON object
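A hedged sketch of how this might look from the Python API; the option name `json_object` and the model name are assumptions here, so check the options the plugin actually exposes (e.g. via `llm models --options`):

```python
import llm

model = llm.get_model("llama3")  # assumed model name; any Ollama chat model works

# Options are passed as keyword arguments to prompt(); json_object is assumed
# to be the option that forces the model to return valid JSON.
response = model.prompt(
    "Describe a cat as a JSON object with keys 'name' and 'age'",
    json_object=True,
)
print(response.text())
```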
0.4.3
- Fix the type of the `stop` option. This allows using it through the llm Python API; however, it's not clear how to pass it through the CLI.
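Since the note says the option works through the Python API, a minimal sketch, assuming `llama3` is served by the local Ollama instance and that `stop` takes a list of stop sequences:

```python
import llm

model = llm.get_model("llama3")  # illustrative model name

# Generation halts as soon as any of the stop sequences is produced.
response = model.prompt(
    "Count from one to ten, one number per line",
    stop=["five"],
)
print(response.text())
```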
0.4.2
- Ignore `KeyError` when iterating through response messages in streaming mode
0.4.1
- Prevent a failure to communicate with the Ollama server from breaking the entire `llm` CLI
0.4.0
- Add missing `pydantic` dependency
0.2.0
- Switch to using the official Ollama Python library instead of the raw HTTP API
- Automatically create aliases for identical models with different names