[Bugfix] Ignore GPTQ quantization of Qwen2-VL visual module #10169
Conversation
Signed-off-by: mgoin <[email protected]>
👋 Hi! Thank you for contributing to the vLLM project. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.
@@ -982,7 +984,7 @@ def __init__(self,
         self.visual = Qwen2VisionTransformer(
             config.vision_config,
             norm_eps=getattr(config, "rms_norm_eps", 1e-6),
-            quant_config=quant_config,
+            quant_config=self._maybe_ignore_quant_config(quant_config),
I have a question: what should we do if users run their own quantization and have quantized the visual encoder themselves?
FIX #9832
This is a workaround for the fact that GPTQ configs generated by AutoGPTQ do not carry a list of ignored modules, so there is no way to check whether a given module should be quantized. We hardcode a case where we set `quant_config = None` when passing it to Qwen2-VL's visual module if the config is GPTQ-based. The issue remains that we will need to apply this utility on a case-by-case basis for each model.
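For reference, a minimal sketch of what such a helper could look like (the method name `_maybe_ignore_quant_config` comes from the diff above; the specific config classes checked and their import paths are assumptions based on the GPTQ-based configs vLLM provides):

```python
from typing import Optional

# Assumed import paths for vLLM's quantization configs.
from vllm.model_executor.layers.quantization.base_config import QuantizationConfig
from vllm.model_executor.layers.quantization.gptq import GPTQConfig
from vllm.model_executor.layers.quantization.gptq_marlin import GPTQMarlinConfig


def _maybe_ignore_quant_config(
        self, quant_config: Optional[QuantizationConfig]
) -> Optional[QuantizationConfig]:
    # AutoGPTQ-generated configs carry no ignore list, but the published
    # checkpoints leave the visual encoder unquantized, so drop the
    # quant_config when it is GPTQ-based and let the visual module load
    # plain (unquantized) Linear layers.
    if isinstance(quant_config, (GPTQConfig, GPTQMarlinConfig)):
        return None
    return quant_config
```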
Tested with an evaluation:
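The evaluation output is not reproduced here. As a minimal sanity check (a sketch, assuming the example checkpoint referenced below), loading the GPTQ model with vLLM and running a short generation is enough to confirm that initialization, including the now-unquantized visual encoder, succeeds:

```python
from vllm import LLM, SamplingParams

# Qwen2-VL GPTQ checkpoint used as the example below in this description.
llm = LLM(model="Qwen/Qwen2-VL-2B-Instruct-GPTQ-Int4", max_model_len=4096)

# A text-only prompt is enough to exercise weight loading for the whole
# model, including the visual encoder that must load unquantized weights.
outputs = llm.generate(["Describe what a vision-language model does."],
                       SamplingParams(max_tokens=32))
print(outputs[0].outputs[0].text)
```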
As an example, see this model: https://huggingface.co/Qwen/Qwen2-VL-2B-Instruct-GPTQ-Int4
You can see in its `quantization_config` that there is no mention of ignored modules. However, looking at the model checkpoint, you can see that all of the Linear modules under `model.layers.*` are quantized, while the Linear modules under `visual.blocks.*` are not quantized at all.
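One way to check this from the checkpoint itself is to list the tensor names in the safetensors shards and see which modules carry GPTQ-packed tensors (`qweight`/`qzeros`/`scales`) versus plain `weight` tensors. A sketch, assuming the `huggingface_hub` and `safetensors` packages and the example repo above:

```python
from pathlib import Path

from huggingface_hub import snapshot_download
from safetensors import safe_open

# Download only the weight shards of the example checkpoint.
local_dir = snapshot_download("Qwen/Qwen2-VL-2B-Instruct-GPTQ-Int4",
                              allow_patterns=["*.safetensors"])

quantized, unquantized = set(), set()
for shard in Path(local_dir).glob("*.safetensors"):
    with safe_open(shard, framework="pt") as f:
        for name in f.keys():
            module = name.rsplit(".", 1)[0]
            # GPTQ-packed layers store qweight/qzeros/scales instead of a
            # plain weight tensor.
            if name.endswith((".qweight", ".qzeros", ".scales")):
                quantized.add(module)
            elif name.endswith(".weight"):
                unquantized.add(module)

print("GPTQ modules under model.layers.*: ",
      sum(m.startswith("model.layers.") for m in quantized))
print("GPTQ modules under visual.blocks.*:",
      sum(m.startswith("visual.blocks.") for m in quantized))
```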