DeepSpeed inference support for int8 parameters on BLOOM? #330

Closed
pai4451 opened this issue Aug 16, 2022 · 6 comments

pai4451 commented Aug 16, 2022

Recently, HuggingFace transformers added a new feature: int8 quantization for all HuggingFace models. This feature can reduce the memory footprint of large models by up to 2x without much loss in performance. Is it possible for DeepSpeed inference to support int8 quantization for BLOOM? According to the DeepSpeed inference tutorial, DeepSpeed inference supports fp32, fp16, and int8 parameters. But when I tried BLOOM with the inference script and changed dtype=torch.int8 on line 194, the following error is raised:

site-packages/deepspeed/runtime/weight_quantizer.py", line 163, in model_quantize
    return quantized_module, torch.cat(all_scales)
RuntimeError: torch.cat(): expected a non-empty list of Tensors

Any chance that DeepSpeed inference could support int8 quantization for BLOOM?
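
For context, the change I tried corresponds roughly to the following (a minimal sketch rather than the actual script; everything except dtype=torch.int8 follows the stock fp16 setup):

    # Sketch of the attempted change (illustrative; model and mp settings mirror the fp16 run).
    import os
    import torch
    import deepspeed
    from transformers import AutoModelForCausalLM

    world_size = int(os.getenv("WORLD_SIZE", "1"))

    model = AutoModelForCausalLM.from_pretrained("bigscience/bloom", torch_dtype=torch.float16)

    # Only this dtype argument was changed from torch.float16 to torch.int8.
    model = deepspeed.init_inference(
        model,
        mp_size=world_size,
        dtype=torch.int8,
        replace_with_kernel_inject=True,
    )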

mayank31398 (Collaborator) commented

@pai4451
https://www.deepspeed.ai/docs/config-json/#weight-quantization
You can't use it that way. Please refer to this config.
Let me know if it works ;)
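
For reference, the weight-quantization section of the DeepSpeed config can also be passed as a plain Python dict; a rough sketch is below (field names are reproduced from memory of the linked page, so please verify them against the docs):

    # Sketch of a ds_config with the weight-quantization section enabled.
    # Field names are assumptions based on the linked docs page; check it for the
    # authoritative schema and defaults.
    ds_config = {
        "train_batch_size": 8,
        "fp16": {"enabled": True},
        "quantize_training": {
            "enabled": True,
            "quantize_groups": 8,
            "quantize_bits": {
                "start_bits": 16,
                "target_bits": 8,
            },
            "quantize_schedule": {
                "quantize_period": 400,
                "schedule_offset": 0,
            },
        },
    }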

mayank31398 (Collaborator) commented Aug 16, 2022

As an alternative, you can use it in HuggingFace too. I haven't tried it either though.
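
If you go the HuggingFace route, loading in 8-bit looks roughly like this (a sketch, untested here; it needs bitsandbytes and accelerate installed, and enough total GPU memory across the visible devices):

    # Sketch: load BLOOM with the transformers/bitsandbytes int8 integration.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "bigscience/bloom"  # or a smaller BLOOM variant for a quick test

    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(
        model_name,
        device_map="auto",   # spread layers across the visible GPUs
        load_in_8bit=True,   # quantize linear weights to int8 at load time
    )

    inputs = tokenizer("DeepSpeed is", return_tensors="pt").to(model.device)
    print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))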

mayank31398 (Collaborator) commented

@pai4451 #328 (comment)
You can use these instructions for quantization.
However, this is a barebones script.
I would encourage you to wait for this PR: #328
I'm planning to add server, CLI inference, and benchmarking support using both accelerate and DS inference. This will also support quantization should you need it.

pai4451 (Author) commented Aug 30, 2022


@mayank31398 I am running my server without internet access, so I can't use snapshot_download from the hub. Also, I am running on two nodes with 16 GPUs, so I need a total of 16 checkpoint shards instead of the 8 shards provided by microsoft/bloom-deepspeed-inference-int8. I can do the conversion myself with the old fp16 weights, but for int8 the following error occurs:
NotImplementedError: Cannot copy out of meta tensors; no data. Any chance to solve that?
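
(For what it's worth, the usual way around the no-internet constraint is to run snapshot_download on a machine that does have access and copy the result over; a rough sketch, with an illustrative cache path:)

    # Sketch: download the pre-quantized int8 checkpoint on a connected machine,
    # then rsync/copy the resulting directory to the offline nodes.
    from huggingface_hub import snapshot_download

    local_path = snapshot_download(
        "microsoft/bloom-deepspeed-inference-int8",
        cache_dir="/data/hf-cache",  # illustrative path; copy this to the offline cluster
    )
    print(local_path)  # point the inference script at this directory on the offline nodes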

mayank31398 (Collaborator) commented

Quantization to int8 requires knowledge distillation and might need significant compute.
Read the ZeroQuant paper.
I would suggest getting internet access on the node if you can.
I don't know how to do the quantization yourself.
Int8 might work on a single node with 8 GPUs for you. Can you give it a shot?
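
For a rough sense of why int8 should fit on one 8-GPU node (weights only; kernels and activations add overhead on top):

    # Back-of-envelope memory check, assuming BLOOM with ~176B parameters.
    params = 176e9
    bytes_per_param = 1          # int8
    gpus = 8
    per_gpu_gib = params * bytes_per_param / gpus / 2**30
    print(f"~{per_gpu_gib:.0f} GiB of weights per GPU")  # roughly 20 GiB, fits on 40/80 GB A100s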

mayank31398 (Collaborator) commented

Also, can you share the DS config you use to run on 16 GPUs?
I don't know how to reshard for pipeline parallelism.
Do you save the resharded weights?
Or do you reshard every time?
