generate_quant.py: testing the 13b quantized model gives very poor results, as shown in the screenshot #97

Closed
greatewei opened this issue Apr 20, 2023 · 2 comments

Comments


greatewei commented Apr 20, 2023

After quantizing the model and testing it with the generate_quant.py script, the output quality is very poor, as shown in the screenshot:
[screenshot of the poor-quality model output]
My quantization workflow was as follows:

  1. Merged 13b-lora with llama13b to produce a new model, chinese-v-13b-hf. I tested this merged model and it converses normally (a rough sketch of this merge step follows the list).
  2. Ran python tools/llama_quant.py /data/chat/models/chinese-v-13b-hf ptb --wbits 4 --groupsize 128 --save /data/chat/models/chinese-v-13b-hf/pyllama-4b.pt to quantize the model, which produced the pyllama-4b.pt file.
  3. Ran python tools/generate_quant.py --model_path "/data/chat/models/chinese-v-13b-hf" --quant_path "/data/chat/models/chinese-v-13b-hf/pyllama-4b.pt" --wbits 4 to test generation.
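
For reference, step 1 was done along the lines of the sketch below: merging the 13b LoRA adapter into the base LLaMA-13B weights with the peft library. The checkpoint paths ("llama-13b-hf", "13b-lora") and the use of merge_and_unload() are my assumptions for illustration, not the repository's own merge script.

```python
# Minimal sketch of step 1: fold the 13b LoRA adapter into the base LLaMA-13B
# weights using peft, then save the merged model where llama_quant.py and
# generate_quant.py can load it. Paths are placeholders for local checkpoints.
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer
from peft import PeftModel

base = LlamaForCausalLM.from_pretrained("llama-13b-hf", torch_dtype=torch.float16)
lora = PeftModel.from_pretrained(base, "13b-lora")
merged = lora.merge_and_unload()  # apply the LoRA deltas to the base weights

merged.save_pretrained("/data/chat/models/chinese-v-13b-hf")
LlamaTokenizer.from_pretrained("llama-13b-hf").save_pretrained(
    "/data/chat/models/chinese-v-13b-hf"
)
```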

Did something go wrong in one of these steps?

Originally posted by @greatewei in #46 (comment)

greatewei changed the title from "I ran into a problem: after running quantization with the generate_quant.py script, the results are very poor, as shown in the screenshot" to "generate_quant.py: testing the 13b quantized model gives very poor results, as shown in the screenshot" on Apr 20, 2023
Chuge0335 (Collaborator) commented Apr 20, 2023

This is expected. We use the pyllama quantization scheme, and its 4-bit quantization is considerably worse than 8-bit. We will consider switching to the GPTQ-for-LLaMa method later.

Chuge0335 (Collaborator) commented

The quantization tooling has been updated; see the results here: https://github.com/Facico/Chinese-Vicuna/blob/master/tools/readme_zh.md
