The new version is much better: it started working easily, and with Vulkan offloading.

Local mode prints a warning:

[node-llama-cpp] Using this model ("~/.humanifyjs/models/Phi-3.1-mini-4k-instruct-Q4_K_M.gguf") to tokenize text with special tokens and then detokenize it resulted in a different text. There might be an issue with the model or the tokenizer implementation. Using this model may not work as intended

Should I ignore this warning, or is it unexpected and I should investigate what's happening?