
LLM Prompt Adherence/Response Formatting. #32

Open · rtuszik opened this issue Dec 13, 2024 · 2 comments
rtuszik (Contributor) commented Dec 13, 2024

Issues persist when using Ollama due to poor prompt adherence, as brought up by @dxcore35 in #15.

The "format" parameter that is used by ollama is not available using the OpenAI-Node library.

A future overhaul of the API-call logic could make this work.
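For reference, a minimal sketch of what that could look like: going straight to Ollama's native `/api/chat` endpoint (which does accept `format: "json"`) instead of through OpenAI-Node. The function name and wiring here are illustrative, not the plugin's actual code:

```typescript
// Minimal sketch: bypass OpenAI-Node and call Ollama's native
// /api/chat endpoint, which accepts the "format" parameter.
// classifyWithOllama and its wiring are illustrative only.
async function classifyWithOllama(prompt: string): Promise<unknown> {
	const res = await fetch("http://localhost:11434/api/chat", {
		method: "POST",
		headers: { "Content-Type": "application/json" },
		body: JSON.stringify({
			model: "llama3.2",
			messages: [{ role: "user", content: prompt }],
			format: "json", // constrains the model to emit valid JSON
			stream: false,
		}),
	});
	const data = await res.json();
	// Ollama returns the assistant reply at data.message.content
	return JSON.parse(data.message.content);
}
```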

For now, I suggest testing different models and making some improvements to the prompt.

A PR with a suggestion will follow shortly.

rtuszik (Author) commented Dec 13, 2024

As a temporary "fix", the following prompt template works well for me with Ollama and llama3.2:

"""
{{input}}
"""
Answer format is JSON {reliability:0~1, outputs:[tag1,tag2,...]}. 
Even if you are unsure, qualify the reliability and select the best matches.
Respond only with valid JSON. Do not write an introduction or summary.
Output tags must be from these options:
{{reference}}
`;
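Here `{{input}}` and `{{reference}}` are template placeholders. A rough sketch of how the template could be filled in and the reply validated; the helper names and the `TagResult` shape are mine, inferred from the prompt above, not the plugin's actual code:

```typescript
// Sketch only: substitute the template placeholders and parse the
// model's reply. Helper names are illustrative.
interface TagResult {
	reliability: number; // 0..1, as requested in the prompt
	outputs: string[];   // tags drawn from the reference list
}

function buildPrompt(template: string, input: string, reference: string[]): string {
	return template
		.replace("{{input}}", input)
		.replace("{{reference}}", reference.join(", "));
}

function parseReply(raw: string): TagResult | null {
	try {
		return JSON.parse(raw) as TagResult;
	} catch {
		return null; // model ignored the "valid JSON only" instruction
	}
}
```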

dxcore35 commented:

🛑 gemma2:9b
🛑 qwen2.5
