feat: improve abstraction for external providers #123
Conversation
Great job. Approved with just one question regarding the token usage stats within the LLM response — if not now, we will probably add it later anyway; it's more or less standard per LLM response in any framework.
Right now, Ollama won't work due to ollama/ollama#8095, unless I misconfigured the request. I will try to replicate this behavior by tweaking the schema and we will see.
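To make the suggestion about usage stats concrete, here is a minimal sketch of what a response type carrying token accounting could look like. All names (`UsageStats`, `LLMResponse`, the field names) are hypothetical, not taken from this repository; the prompt/completion/total split simply mirrors the convention most LLM APIs use.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class UsageStats:
    """Token accounting in the common prompt/completion/total split."""
    prompt_tokens: int = 0
    completion_tokens: int = 0
    total_tokens: int = 0

@dataclass
class LLMResponse:
    content: str
    # Optional, since some providers (or misconfigured ones) omit stats.
    usage: Optional[UsageStats] = None

resp = LLMResponse(content="hello", usage=UsageStats(10, 5, 15))
print(resp.usage.total_tokens)  # → 15
```

Keeping `usage` optional means providers that don't report stats can still return a valid response without special-casing.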
Approved with two minor suggestions
…ework into feat/completions-2
I am going to merge it, but let's test it thoroughly tomorrow before cutting a release, to make sure it does work as expected!
Grok is working well!
Slightly better abstraction (but still lightweight) for other providers.
Support for Ollama (though results with local LLMs are rather poor; I would love it if someone could experiment with it).
We may need to tweak prompts on a per-model basis, which I would see as a task for the future.
Fixes #114
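As an illustration of the "lightweight abstraction for other providers" this PR describes, a minimal sketch might look like the following. Everything here is hypothetical (the `Provider` interface, `OllamaProvider`, the stubbed `complete` method) and is not the actual implementation in this PR; the Ollama call is stubbed out so the sketch stays self-contained, though a real version would POST to the Ollama HTTP API.

```python
from abc import ABC, abstractmethod

class Provider(ABC):
    """Minimal interface each external LLM provider implements."""

    @abstractmethod
    def complete(self, prompt: str) -> str:
        ...

class OllamaProvider(Provider):
    def __init__(self, model: str = "llama3",
                 base_url: str = "http://localhost:11434"):
        self.model = model
        self.base_url = base_url

    def complete(self, prompt: str) -> str:
        # A real implementation would POST to {base_url}/api/generate;
        # stubbed here to keep the example runnable offline.
        return f"[{self.model}] echo: {prompt}"

def run(provider: Provider, prompt: str) -> str:
    # Callers depend only on the Provider interface, so swapping
    # Ollama for Grok or another backend needs no caller changes.
    return provider.complete(prompt)

print(run(OllamaProvider(), "ping"))
```

The point of keeping the interface this small is that per-model prompt tweaks (mentioned above as future work) can live inside each provider without leaking into callers.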