Newlines in generation when using grammar #637
Using `llama-cpp-python` w/ the LangChain integration and this PR to support grammars.

Test w/o `grammar_path`: the result is as expected.

Test w/ `grammar_path`: the result has a large number of newlines.

Has anyone seen / resolved similar behavior?
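A minimal sketch of the setup being described (the model path, grammar path, and prompt are assumptions, not the reporter's actual values):

```python
# Hedged repro sketch: LangChain's LlamaCpp wrapper with a grammar_path but no
# stop sequence. With json.gbnf active, generation can trail off into newlines.
from langchain.llms import LlamaCpp

llm = LlamaCpp(
    model_path="./models/llama-2-13b.Q4_K_M.gguf",  # hypothetical path
    grammar_path="./grammars/json.gbnf",            # hypothetical path
    max_tokens=512,
)

result = llm("Return a JSON object with keys 'name' and 'age':")
# Observed shape of the problem: a valid JSON object followed by a long run
# of "\n" characters until max_tokens is exhausted.
print(repr(result))
```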
Comments
@rlancemartin The grammar only specifies the syntax of the output, not necessarily the stopping condition. If the model doesn't generate an EOS token and no other stopping criterion is met, "\n" is the only valid character at the end of the generation. You need to pass something to the `stop` parameter.
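A minimal sketch of that fix using `llama-cpp-python` directly (the model path, prompt, and choice of stop sequence are assumptions):

```python
# Hedged sketch: the grammar constrains syntax, while the stop sequence
# supplies the missing stopping condition.
from llama_cpp import Llama, LlamaGrammar

llm = Llama(model_path="./models/llama-2-13b.Q4_K_M.gguf")  # hypothetical path
grammar = LlamaGrammar.from_file("./grammars/json.gbnf")    # hypothetical path

output = llm(
    "Return a JSON object with keys 'name' and 'age':",
    grammar=grammar,
    stop=["\n\n"],  # a blank line is only reachable as trailing whitespace,
                    # so stopping on it cuts off the newline run
    max_tokens=256,
)
print(output["choices"][0]["text"])
```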
Thanks. Yes, this works.

Prompt w/ STOP token specified:

Result, as expected:

Is there a best practice for this? (I'm just using …)
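A sketch of the working configuration through the LangChain wrapper (the stop string and prompt are assumptions):

```python
# Hedged sketch: pass the stop sequence at call time through LangChain's
# standard stop kwarg rather than baking it into the wrapper.
from langchain.llms import LlamaCpp

llm = LlamaCpp(
    model_path="./models/llama-2-13b.Q4_K_M.gguf",  # hypothetical path
    grammar_path="./grammars/json.gbnf",            # hypothetical path
    max_tokens=256,
)

result = llm("Return a JSON object with keys 'name' and 'age':", stop=["\n\n"])
print(result)
```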
OK, I think I get it a bit further: the problem seems to be with json.gbnf specifically. I'm working on modifying that file.
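For context, the stock json.gbnf that ships with llama.cpp defines `ws ::= ([ \t\n] ws)?` and ends its rules with `ws`, so once the JSON object is complete an unbounded run of "\n" remains grammatically valid. A sketch of the kind of modification being described, assuming the goal is to leave nothing valid after the closing brace (an illustrative rewrite, not the author's final file):

```python
# Hedged sketch: a trimmed JSON grammar with no trailing whitespace in root
# and a ws rule that excludes newlines. Once the object closes, only EOS is
# a valid continuation, so generation stops instead of emitting "\n".
from llama_cpp import LlamaGrammar

JSON_GBNF_NO_TRAILING_NL = r"""
root   ::= object
object ::= "{" ws ( string ":" ws value ("," ws string ":" ws value)* )? ws "}"
value  ::= object | array | string | number | "true" | "false" | "null"
array  ::= "[" ws ( value ("," ws value)* )? ws "]"
string ::= "\"" ( [^"\\] | "\\" ["\\/bfnrt] )* "\""
number ::= "-"? [0-9]+ ("." [0-9]+)?
ws     ::= [ \t]*
"""

grammar = LlamaGrammar.from_string(JSON_GBNF_NO_TRAILING_NL)
```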
Any update?