Contact Details
[email protected]

What happened?
When I use llamafile through the Python API, both models I use keep their end token in the response string, and I have to remove it manually. Is this a problem on my end? Like this:
if self.model_string == "LLaMA_CPP":  # why doesn't llamafile remove the end token?
    self.response_str = self.response_str.replace("<|eot_id|>", "")
elif self.model_string == "gemma-2b-it":
    self.response_str = self.response_str.replace("<end_of_turn>", "")
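A more general form of this workaround (just a sketch; END_TOKENS and strip_end_token are my own names, not part of the llamafile API):

# Sketch of a generalized workaround; the model-name keys and this
# mapping are my own, not something llamafile provides.
END_TOKENS = {
    "LLaMA_CPP": "<|eot_id|>",
    "gemma-2b-it": "<end_of_turn>",
}

def strip_end_token(response: str, model: str) -> str:
    """Strip a trailing end-of-turn token if the model left one in."""
    token = END_TOKENS.get(model)
    if token and response.endswith(token):
        response = response[: -len(token)].rstrip()
    return response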
Version
llamafile v0.8.4
What operating system are you seeing the problem on?
Linux
Relevant log output
model_gemma("I have a head of broccoli, and a cabbage. How many fruits do I have?")
output:
'You have **zero** fruits! 🥦 🥬 \n\nBroccoli and cabbage are both vegetables, not fruits. \n<end_of_turn>'
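Applying the strip_end_token sketch from above to that output removes the trailing token:

# raw is the response string from the log output above
raw = ('You have **zero** fruits! 🥦 🥬 \n\nBroccoli and cabbage are both '
       'vegetables, not fruits. \n<end_of_turn>')
print(strip_end_token(raw, "gemma-2b-it"))
# -> You have **zero** fruits! 🥦 🥬
#
#    Broccoli and cabbage are both vegetables, not fruits.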