Prompt errors for Llama3.1 #264
As mentioned here, we provide custom code to parse the raw response into JSON responses. Feel free to disable it by removing that code.
That's not what I meant; I'm not just talking about the generated text, so let me rephrase. The functionary-small-v3.2 model's prompt format differs from what it is supposed to be for Llama3.1. If you check your tests/prompt_test_v3-llama3.1.txt, that is how the prompt should look if your base model is Llama3.1. But it seems the base model for your 3.2 version is Llama3 instead, because the prompt matches tests/prompt_test_v3.llama3.txt.
v3.2 is finetuned with the prompt template in tests/prompt_test_v3.llama3.txt, not tests/prompt_test_v3-llama3.1.txt. The base model is Llama3.1, not Llama3. Llama3.1 can be finetuned using a different prompt template.
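For anyone who wants to check this themselves, here is a minimal sketch (not the maintainers' code) for inspecting which chat template a checkpoint actually ships with; the model id is an assumption for illustration:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meetkai/functionary-small-v3.2")

# The raw Jinja chat template stored in tokenizer_config.json; compare it
# against tests/prompt_test_v3.llama3.txt vs tests/prompt_test_v3-llama3.1.txt.
print(tokenizer.chat_template)
```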
Using the default code on the Hugging Face model page (link), the prompt generated for tool calling is incorrect; see the reproduction sketch below.
This v3.2 model is based on Llama3.1, so it should be using this template defined in your repo.
But this is the model output for the default tool and message on HF.
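A minimal sketch of the reproduction, assuming the model id and the example tool/message below (both illustrative, not taken from the repo); the rendered prompt can then be diffed against the expected file in tests/:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meetkai/functionary-small-v3.2")

# Hypothetical tool definition and user message, for illustration only.
tools = [{
    "type": "function",
    "function": {
        "name": "get_current_weather",
        "description": "Get the current weather in a given location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {"type": "string", "description": "City name"},
            },
            "required": ["location"],
        },
    },
}]
messages = [{"role": "user", "content": "What is the weather in Hanoi?"}]

# Render the prompt as plain text (no tokenization) so the template
# actually being applied is visible.
prompt = tokenizer.apply_chat_template(
    messages, tools=tools, tokenize=False, add_generation_prompt=True
)
print(prompt)
```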
Can someone please fix this?
And if you could provide a Hugging Face inference example just like the llama_cpp one, that would be great!
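For reference, a rough sketch of what such a Transformers-based inference example might look like; this is an assumption, not the maintainers' recommended code, and it reuses the `tokenizer`, `tools`, and `messages` from the snippet above:

```python
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "meetkai/functionary-small-v3.2",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Render and tokenize the prompt in one step.
inputs = tokenizer.apply_chat_template(
    messages, tools=tools, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(inputs, max_new_tokens=256)

# Decode only the newly generated tokens (the model's tool call / answer).
print(tokenizer.decode(output_ids[0][inputs.shape[-1]:], skip_special_tokens=True))
```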