Expected Behavior

server.cpp should recognise the -tb / --threads-batch parameter (as stated in the README).

Current Behavior

server.cpp doesn't recognise the -tb / --threads-batch parameter. I checked the code, and this option does indeed seem to be missing.

PS: I can attempt to add it, if you agree... it would be a good task for getting started with the code. A sketch of the idea is below.
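For context, here is a minimal, self-contained C++ sketch of the kind of argument-parsing branch that would make the server accept -tb / --threads-batch, modelled on how -t / --threads is typically handled in the parser loop. The struct, the member name `n_threads_batch`, and the `invalid_param` convention are illustrative assumptions, not the actual server.cpp code.

```cpp
// Standalone sketch: parse -t/--threads and the missing -tb/--threads-batch
// the same way the server's argument loop handles its other options.
#include <cstdio>
#include <string>

struct params_sketch {
    int n_threads       = 4;   // threads used for generation
    int n_threads_batch = -1;  // threads for batch/prompt processing (-1 = same as n_threads)
};

int main(int argc, char ** argv) {
    params_sketch params;
    bool invalid_param = false;

    for (int i = 1; i < argc; i++) {
        std::string arg = argv[i];
        if (arg == "-t" || arg == "--threads") {
            if (++i >= argc) { invalid_param = true; break; }
            params.n_threads = std::stoi(argv[i]);
        } else if (arg == "-tb" || arg == "--threads-batch") {  // the branch this issue asks for
            if (++i >= argc) { invalid_param = true; break; }
            params.n_threads_batch = std::stoi(argv[i]);
        }
    }

    if (invalid_param) {
        fprintf(stderr, "error: missing value for the last parameter\n");
        return 1;
    }
    printf("threads: %d, threads-batch: %d\n", params.n_threads, params.n_threads_batch);
    return 0;
}
```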
I have made a fork of this repo with the changes needed to add -tb / --threads-batch to server.cpp. What is the best way to make a PR?
Link