ChatGLM-6B is an open-source model based on GLM, fine-tuned on over 1 trillion tokens of dialogue and with RLHF for chat.
It's quickly becoming one of the most popular local models despite having no good fast CPU inference support (yet).
Official repo: https://github.com/THUDM/ChatGLM-6B/blob/main/README_en.md
Are you aware of any differences between GLM's architecture and GPT-NeoX's? If not, then all we need to do is quantize it.
Also, its license seems to have similar restrictions to LLaMA's. Any idea what format its int4 quantized version is in?
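For reference, a minimal sketch of what block-wise int4 quantization typically looks like (assumptions: symmetric quantization with one float scale per block of 32 weights, similar in spirit to common CPU-inference schemes; this is not necessarily ChatGLM-6B's actual int4 format, which the question above is asking about):

```python
import numpy as np

def quantize_int4(w: np.ndarray, block_size: int = 32):
    """Symmetric block-wise int4 quantization (hypothetical sketch).

    Splits a 1-D float array into blocks, stores one float scale per
    block, and maps each weight to an integer in [-8, 7].
    """
    assert w.size % block_size == 0, "pad weights to a multiple of block_size"
    blocks = w.reshape(-1, block_size)
    # One scale per block: the largest magnitude maps to int value 7.
    amax = np.abs(blocks).max(axis=1, keepdims=True)
    scales = amax / 7.0
    scales[scales == 0] = 1.0  # avoid division by zero for all-zero blocks
    q = np.clip(np.round(blocks / scales), -8, 7).astype(np.int8)
    return q, scales

def dequantize_int4(q: np.ndarray, scales: np.ndarray) -> np.ndarray:
    """Recover approximate float weights from int4 codes and scales."""
    return (q.astype(np.float32) * scales).reshape(-1)
```

Round-trip error per weight is bounded by half a quantization step of its block's scale, which is why per-block scales matter for weight matrices with uneven magnitude.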