
Releases: taketwo/llm-ollama

0.7.1

22 Nov 19:05
0a03615
  • Update plugin internals to be compatible with the 0.4.0 release of the Ollama Python library

0.7.0

06 Nov 09:07
4674f24
  • Add support for text embedding.
    Example usage: llm embed -m mxbai-embed-large -i README.md (a Python API sketch follows below this list)
  • Do not register embedding-only models (such as mxbai-embed-large) for prompting and chatting
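
A minimal sketch of using the new embedding support through the llm Python API rather than the CLI, assuming the plugin is installed and mxbai-embed-large has already been pulled into Ollama:

    import llm

    # Look up the Ollama-backed embedding model registered by the plugin.
    model = llm.get_embedding_model("mxbai-embed-large")

    # embed() returns the embedding vector as a list of floats.
    vector = model.embed("Ollama runs large language models locally")
    print(len(vector))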

0.6.0

30 Oct 20:22
b4ad6f7
  • Add support for image attachments.
    Example usage: llm -m llava "Describe this image" --attachment image.jpg
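
A minimal sketch of the same thing through the llm Python API, assuming llm 0.17 or later (which introduced attachments) and that the llava model has been pulled into Ollama:

    import llm

    model = llm.get_model("llava")
    # Attach a local image file to the prompt; remote images work via llm.Attachment(url=...).
    response = model.prompt(
        "Describe this image",
        attachments=[llm.Attachment(path="image.jpg")],
    )
    print(response.text())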

0.5.0

31 Jul 07:40
bb3e92b
  • Add support for forcing the model to reply with a valid JSON object
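
A hedged sketch via the llm Python API; the option name json_object below is an assumption about how the plugin exposes this feature, so check llm models --options for the exact name in your installed version:

    import json
    import llm

    model = llm.get_model("llama3")  # any chat-capable Ollama model you have pulled
    # json_object is assumed to be the plugin option that forces valid JSON output;
    # the prompt should still ask for JSON explicitly.
    response = model.prompt(
        "Return a JSON object with keys 'city' and 'country' for Paris",
        json_object=True,
    )
    print(json.loads(response.text()))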

0.4.3

02 Jul 04:49
40c6600
  • Fix the type of the stop option. This allows using it through the llm Python API; however, it's not clear how to pass it through the CLI.
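
A minimal sketch of passing stop through the Python API, assuming it takes a list of stop sequences (as the underlying Ollama API does) and that a chat model such as llama3 is available:

    import llm

    model = llm.get_model("llama3")
    # Generation halts as soon as one of the stop sequences is produced.
    response = model.prompt("Count from 1 to 10, one number per line", stop=["5"])
    print(response.text())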

0.4.2

12 Jun 09:53
e163fc5
  • Ignore KeyError when iterating through response messages in streaming mode

0.4.1

29 May 19:59
55bf578
  • Prevent inability to communicate with Ollama server from failing the entire llm CLI

0.4.0

22 May 07:10
  • Add missing pydantic dependency

0.3.0

07 May 08:19
97359cf

0.2.0

28 Jan 07:36
62ea462
  • Switch to using the official Ollama Python library instead of the raw HTTP API
  • Automatically create aliases for identical models with different names