Can't get tabby 0.13.1 or 0.14.0 to work following the quick-start guide #2719
Comments
I have the same issue. How can I troubleshoot it?
Same here. Running with: docker run -it --gpus all -p 8080:8080 -v $HOME/.tabby:/data tabbyml/tabby serve --model StarCoder-1B --chat-model Qwen2-1.5B-Instruct --device cuda
GPU info:
Same issue as well.
Thank you for reporting the issues. The changes in https://github.com/TabbyML/tabby/pull/2925/files will be included in the 0.16 release and will provide more detailed information in the logs to assist with debugging.
Describe the bug
Can't get tabby 0.13.1 or 0.14.0 to work following the quick-start guide; it just starts a process with the embedding model:
/opt/tabby/bin/llama-server -m /data/models/TabbyML/Nomic-Embed-Text/ggml/model.gguf --cont-batching --port 30888 -np 1 --log-disable --ctx-size 4096 -ngl 9999 --embedding --ubatch-size 4096
and hangs forever.
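One generic way to narrow down a hang like this is to check whether the spawned llama-server process ever opens its listening port (30888 in the command above). This is a troubleshooting sketch, not part of tabby itself; the host, port, and timeout values are assumptions you would adapt to your setup:

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused or timed out: nothing is listening (yet).
        return False

# Example: probe the embedding server port from the command above.
# port_open("127.0.0.1", 30888)
```

If the port never opens, the llama-server subprocess likely failed during model load, which is the case the logging improvements in the 0.16 release are meant to surface.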
Information about your version
0.14.0 or 0.13.1
Information about your GPU