[Feature]: Qwen2.5 bitsandbytes support #8941

Closed
hanan9m opened this issue Sep 29, 2024 · 7 comments · Fixed by #9467
Comments

@hanan9m

hanan9m commented Sep 29, 2024

🚀 The feature, motivation and pitch

Description:
Qwen2.5 (32B) is a state-of-the-art model, especially interesting in 4-bit precision (bitsandbytes).

  • I tried integrating it, but the model did not work as expected: the output is just "!!!!!".
  • I created a Colab showing that Qwen2.5 works in the transformers library but fails in vLLM after my modification.
    In the notebook I show the model working with Hugging Face, and how the output becomes gibberish after adding bitsandbytes support.
    I tried adding these lines under the Qwen2ForCausalLM class (a sketch of how such a mapping is typically consumed follows below):
    bitsandbytes_stacked_params_mapping = {
        # shard_name, weight_name, index
        "q_proj": ("qkv_proj", 0),
        "k_proj": ("qkv_proj", 1),
        "v_proj": ("qkv_proj", 2),
        "gate_proj": ("gate_up_proj", 0),
        "up_proj": ("gate_up_proj", 1),
    }
  • There is a similar PR, just merged, that adds bitsandbytes support to Gemma2.

bad output example

Prompt: 'The future of AI is', Generated text: '!!!!!!!!!!!!!!!!'
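For illustration, here is a minimal sketch (not the actual vLLM loader; the map_checkpoint_name helper is hypothetical) of how such a stacked-params mapping is typically consumed when per-shard checkpoint weights are routed into the fused qkv_proj / gate_up_proj layers:

bitsandbytes_stacked_params_mapping = {
    # shard_name: (fused_weight_name, shard_index)
    "q_proj": ("qkv_proj", 0),
    "k_proj": ("qkv_proj", 1),
    "v_proj": ("qkv_proj", 2),
    "gate_proj": ("gate_up_proj", 0),
    "up_proj": ("gate_up_proj", 1),
}

def map_checkpoint_name(ckpt_name):
    """Translate a per-shard checkpoint name into the fused parameter name
    plus the shard slot whose quantization state it should fill."""
    for shard_name, (fused_name, shard_id) in bitsandbytes_stacked_params_mapping.items():
        if shard_name in ckpt_name:
            return ckpt_name.replace(shard_name, fused_name), shard_id
    return ckpt_name, None  # unfused weights (o_proj, down_proj, ...) pass through unchanged

# The bnb quant state of q_proj must end up in slot 0 of the fused qkv_proj:
print(map_checkpoint_name("model.layers.0.self_attn.q_proj.weight"))
# ('model.layers.0.self_attn.qkv_proj.weight', 0)

If the per-shard quantization states are not attached to the right slot of the fused layer, the dequantized weights are garbage, which is consistent with the "!!!!!" output above.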

Alternatives

No response

Additional context

No response

Before submitting a new issue...

  • Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.
@jeejeelee
Contributor

cc @chenqianfzh, could you please look at this issue? Thanks.

@blueyo0
Contributor

blueyo0 commented Oct 17, 2024

😃 Hi, I created a similar PR to support Qwen2.5 and got correct results with the following script. Hope this PR helps.

from vllm import LLM, SamplingParams
import torch  # only needed if dtype is passed explicitly below

bnb_model = "local_ckpt/Qwen2.5-0.5B-Instruct-bnb-4bit"  # pre-quantized bnb-4bit checkpoint saved locally
llm = LLM(model=bnb_model,
          # dtype=torch.bfloat16,
          trust_remote_code=True,
          quantization="bitsandbytes",
          load_format="bitsandbytes",
          enforce_eager=True,
          max_model_len=1024)

prompts = [
    "Hello, my name is",
    "The president of the United States is",
    "The capital of France is",
    "The future of AI is",
]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)
outputs = llm.generate(prompts, sampling_params)

# Print the outputs.
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")

And the corresponding outputs:

Prompt: 'Hello, my name is', Generated text: ' Kiki, and I am a healthy, 44-year-old woman.'
Prompt: 'The president of the United States is', Generated text: ' 2 seconds behind a new construction project and is 200 meters away'
Prompt: 'The capital of France is', Generated text: ' ____.\nA. Paris\nB. Brussels\nC. Nice\nD.'
Prompt: 'The future of AI is', Generated text: ' about the seamless integration of AI with various industries, including but not limited to healthcare'
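For reference, the same flags should also allow in-flight bitsandbytes quantization of the original, unquantized checkpoint once the Qwen2.5 support lands; a minimal sketch, assuming the standard Qwen/Qwen2.5-0.5B-Instruct Hub checkpoint:

from vllm import LLM, SamplingParams

# Quantize the fp16/bf16 checkpoint on the fly instead of loading a
# pre-quantized *-bnb-4bit model.
llm = LLM(model="Qwen/Qwen2.5-0.5B-Instruct",
          quantization="bitsandbytes",
          load_format="bitsandbytes",
          enforce_eager=True,
          max_model_len=1024)

outputs = llm.generate(["The future of AI is"],
                       SamplingParams(temperature=0.8, top_p=0.95))
print(outputs[0].outputs[0].text)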

@JJEccles

May I ask which version of the model you are using in this example? Is it Unsloth/Qwen2.5-0.5B-Instruct-bnb-4bit? I am trying to run inference on the Unsloth/Qwen2.5-7B-bnb-4bit model using your example,

but I'm getting the error: [rank0]: AttributeError: Model Qwen2ForCausalLM does not support BitsAndBytes quantization yet.

By the way, I've tried running inference on models--unsloth--Llama-3.2-1B-Instruct and it works without any issues, so I'm assuming the problem may be specific to Unsloth's Qwen2.5 models.

@blueyo0
Contributor

blueyo0 commented Nov 11, 2024

Hi, updating the vLLM version may work: Llama 3 bnb has been supported since before August, but Qwen2.5 bnb support was only added recently.
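
A quick way to confirm the installed version before retrying (assuming a pip-installed vLLM):

import vllm

# Qwen2.5 bnb support only exists in releases that include the fix above,
# so print the version and upgrade (e.g. pip install -U vllm) if it is older.
print(vllm.__version__)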

@chenqianfzh
Contributor

@blueyo0 is right. bnb support for Qwen2 was only added recently.

@yananchen1989

Hi, could you also add support for BnB quantization for the Phi series, such as microsoft/Phi-3.5-mini-instruct?

@blueyo0
Contributor

blueyo0 commented Nov 13, 2024

Sure, I will take a look at Phi-3.5-mini.
