Quantization represents data with fewer bits, making it a useful technique for reducing memory usage and accelerating inference, especially for large language models (LLMs). There are several ways to quantize a model, including:
- optimizing which model weights are quantized with the AWQ algorithm
- independently quantizing each row of a weight matrix with the GPTQ algorithm
- quantizing to 8-bit and 4-bit precision with the bitsandbytes library
- quantizing to as low as 2-bit precision with the AQLM algorithm
However, a quantized model typically isn't trained further for downstream tasks because the lower precision of the weights and activations can make training unstable. But since PEFT methods only add extra trainable parameters, you can train a quantized model with a PEFT adapter on top! Combining quantization with PEFT is a good strategy for training even the largest models on a single GPU. For example, QLoRA is a method that quantizes a model to 4-bit and then trains it with LoRA. This method makes it possible to finetune a 65B parameter model on a single 48GB GPU!
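A quick back-of-the-envelope calculation shows why 4-bit quantization makes this feasible. The sketch below estimates the memory needed to store the weights alone (it ignores activations, the KV cache, optimizer state for the adapter, and quantization overhead, so real usage is higher):

```python
def weight_memory_gb(n_params: float, bits: int) -> float:
    """Approximate memory (in GB) to store n_params weights at the given bit width."""
    return n_params * bits / 8 / 1e9

n = 65e9  # 65B parameters

fp16_gb = weight_memory_gb(n, 16)  # half precision
nf4_gb = weight_memory_gb(n, 4)    # 4-bit quantized

print(f"fp16: {fp16_gb:.1f} GB, 4-bit: {nf4_gb:.1f} GB")
# → fp16: 130.0 GB, 4-bit: 32.5 GB
```

At 4-bit, the frozen weights of a 65B model fit on a 48GB card with room left for the small LoRA adapter and its optimizer state.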
In this guide, you'll see how to quantize a model to 4-bits and train it with LoRA.
bitsandbytes is a quantization library with a Transformers integration. With this integration, you can quantize a model to 8 or 4-bit and enable many other options by configuring the [`~transformers.BitsAndBytesConfig`] class. For example, you can:

- set `load_in_4bit=True` to quantize the model to 4-bit when you load it
- set `bnb_4bit_quant_type="nf4"` to use a special 4-bit data type for weights initialized from a normal distribution
- set `bnb_4bit_use_double_quant=True` to use a nested quantization scheme to quantize the already quantized weights
- set `bnb_4bit_compute_dtype=torch.bfloat16` to use bfloat16 for faster computation
```py
import torch
from transformers import BitsAndBytesConfig

config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```
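The `nf4` data type works well because its 16 levels are placed (roughly) at quantiles of a standard normal distribution, which matches how trained weights tend to be distributed. A purely illustrative sketch of quantile-based 4-bit levels with round-to-nearest quantization (this is not the exact NF4 codebook, just the idea behind it):

```python
from statistics import NormalDist

# 16 levels at evenly spaced quantiles of N(0, 1), rescaled to [-1, 1]
# (illustrative level placement, not the exact NF4 values)
nd = NormalDist()
quantiles = [nd.inv_cdf((i + 0.5) / 16) for i in range(16)]
levels = [q / max(abs(q) for q in quantiles) for q in quantiles]

def quantize(w, levels):
    """Absmax-scale the weights, then round each one to the nearest level."""
    scale = max(abs(x) for x in w) or 1.0
    idx = [min(range(len(levels)), key=lambda i: abs(x / scale - levels[i])) for x in w]
    return idx, scale

def dequantize(idx, scale, levels):
    """Recover approximate weights from the stored 4-bit indices and the scale."""
    return [levels[i] * scale for i in idx]

weights = [0.4, -1.2, 0.05, 0.9, -0.3]
idx, scale = quantize(weights, levels)
print(dequantize(idx, scale, levels))
```

Because the levels are denser near zero, small weights (where most of the mass is) are represented more precisely than with evenly spaced 4-bit levels.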
Pass the `config` to the [`~transformers.AutoModelForCausalLM.from_pretrained`] method.
```py
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1", quantization_config=config)
```
Next, call the [`~peft.utils.prepare_model_for_kbit_training`] function to preprocess the quantized model for training.
```py
from peft import prepare_model_for_kbit_training

model = prepare_model_for_kbit_training(model)
```
Now that the quantized model is ready, let's set up a configuration. Create a [`LoraConfig`] with the following parameters (or choose your own):
```py
from peft import LoraConfig

config = LoraConfig(
    r=16,
    lora_alpha=8,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
```
Then use the [`get_peft_model`] function to create a [`PeftModel`] from the quantized model and configuration.
```py
from peft import get_peft_model

model = get_peft_model(model, config)
```
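For a sense of scale: LoRA on a `d_out × d_in` linear layer trains only `r * (d_in + d_out)` extra parameters. The sketch below counts the trainable parameters for the config above, using illustrative Mistral-7B-like attention shapes (32 layers, grouped-query attention; these dimensions are assumptions, check your model's own config):

```python
r = 16
layers = 32
# (d_out, d_in) per targeted attention projection; illustrative GQA shapes
shapes = {
    "q_proj": (4096, 4096),
    "k_proj": (1024, 4096),
    "v_proj": (1024, 4096),
    "o_proj": (4096, 4096),
}

# Each LoRA pair adds an r x d_in matrix A and a d_out x r matrix B
lora_params = layers * sum(r * (d_in + d_out) for d_out, d_in in shapes.values())
print(f"trainable LoRA parameters: {lora_params:,}")
# → trainable LoRA parameters: 13,631,488
```

That is well under 1% of a 7B model; on the real model, `model.print_trainable_parameters()` reports the exact count.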
You're all set for training with whichever training method you prefer!
LoftQ initializes LoRA weights such that the quantization error is minimized, and it can improve performance when training quantized models. To get started, follow these instructions.
In general, for LoftQ to work best, it is recommended to target as many layers with LoRA as possible, since layers that are not targeted cannot have LoftQ applied. This means that passing `LoraConfig(..., target_modules="all-linear")` will most likely give the best results. Also, you should use `nf4` as the quant type in your quantization config when using 4-bit quantization, i.e. `BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="nf4")`.
QLoRA adds trainable weights to all the linear layers in the transformer architecture. Since the attribute names for these linear layers can vary across architectures, set `target_modules` to `"all-linear"` to add LoRA to all the linear layers:
```py
config = LoraConfig(target_modules="all-linear", ...)
```
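When `target_modules` is instead a list of names, PEFT matches each entry against the end of every module's dotted path. A simplified sketch of that suffix matching (the module names below are illustrative, and the real implementation handles more cases such as regex patterns):

```python
def matches_target(module_name: str, target_modules) -> bool:
    """Simplified module matching: a module is wrapped with LoRA if its
    dotted name equals, or ends with, one of the target names."""
    return any(
        module_name == t or module_name.endswith("." + t) for t in target_modules
    )

names = [
    "model.layers.0.self_attn.q_proj",
    "model.layers.0.self_attn.o_proj",
    "model.layers.0.mlp.gate_proj",
]
targets = ["q_proj", "v_proj"]
print([n for n in names if matches_target(n, targets)])
# → ['model.layers.0.self_attn.q_proj']
```

This is why listing `"q_proj"` once is enough to cover that projection in every layer, and why `"all-linear"` is the safer choice when you don't know an architecture's attribute names.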
Additive Quantization of Language Models (AQLM) is a compression method for large language models. It quantizes multiple weights together and takes advantage of the interdependencies between them. AQLM represents groups of 8-16 weights as a sum of multiple vector codes, which allows it to compress models down to as low as 2-bit precision with relatively small accuracy losses.
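The idea of additive quantization can be shown with a toy example: a group of weights is approximated by the sum of one code vector from each of several codebooks, chosen here greedily on the residual. Everything below is illustrative; real AQLM learns the codebooks from data and uses beam search rather than greedy selection:

```python
import random

random.seed(0)
GROUP_SIZE = 8  # number of weights quantized together
BOOK_SIZE = 16  # codes per codebook -> log2(16) = 4 bits per stored index
NUM_BOOKS = 2   # here: 2 * 4 bits per group of 8 weights = 1 bit/weight

# Random codebooks of GROUP_SIZE-dim vectors (AQLM learns these instead)
books = [
    [[random.gauss(0, 0.5) for _ in range(GROUP_SIZE)] for _ in range(BOOK_SIZE)]
    for _ in range(NUM_BOOKS)
]

def encode(group):
    """Greedily pick one code per book that best matches the remaining residual."""
    residual = list(group)
    codes = []
    for book in books:
        best = min(
            range(BOOK_SIZE),
            key=lambda i: sum((r - c) ** 2 for r, c in zip(residual, book[i])),
        )
        codes.append(best)
        residual = [r - c for r, c in zip(residual, book[best])]
    return codes

def decode(codes):
    """The group is reconstructed as the sum of the chosen code vectors."""
    out = [0.0] * GROUP_SIZE
    for book, i in zip(books, codes):
        out = [o + c for o, c in zip(out, book[i])]
    return out

group = [random.gauss(0, 0.5) for _ in range(GROUP_SIZE)]
codes = encode(group)
approx = decode(codes)
```

Only the code indices are stored, which is what drives the compression; in AQLM's `1x16` scheme (visible in the model name below), one 16-bit code covers a group of 8 weights, i.e. roughly 2 bits per weight.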
Since the AQLM quantization process is computationally expensive, using prequantized models is recommended. A partial list of available models can be found in the official aqlm repository.
The models support LoRA adapter tuning. To tune a quantized model, you'll need to install the `aqlm` inference library: `pip install aqlm>=1.0.2`. Finetuned LoRA adapters must be saved separately, as merging them with the AQLM quantized weights is not possible.
```py
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

quantized_model = AutoModelForCausalLM.from_pretrained(
    "BlackSamorez/Mixtral-8x7b-AQLM-2Bit-1x16-hf-test-dispatch",
    torch_dtype="auto", device_map="auto", low_cpu_mem_usage=True,
)
peft_config = LoraConfig(...)
quantized_model = get_peft_model(quantized_model, peft_config)
```
You can refer to the Google Colab example for an overview of AQLM+LoRA finetuning.
You can also perform LoRA fine-tuning on EETQ quantized models. The EETQ package offers a simple and efficient way to perform 8-bit quantization, which is claimed to be faster than the `LLM.int8()` algorithm. First, make sure that you have a transformers version that is compatible with EETQ (e.g. by installing it from the latest PyPI or from source).
```py
from transformers import EetqConfig

config = EetqConfig("int8")
```
Pass the `config` to the [`~transformers.AutoModelForCausalLM.from_pretrained`] method.
```py
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1", quantization_config=config)
```
Then create a [`LoraConfig`] and pass it to [`get_peft_model`]:
```py
from peft import LoraConfig, get_peft_model

config = LoraConfig(
    r=16,
    lora_alpha=8,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, config)
```
Models quantized with Half-Quadratic Quantization of Large Machine Learning Models (HQQ) support LoRA adapter tuning. To tune the quantized model, you'll need to install the `hqq` library with: `pip install hqq`.
```py
from hqq.engine.hf import HQQModelForCausalLM
from peft import LoraConfig, get_peft_model

quantized_model = HQQModelForCausalLM.from_quantized(save_dir_or_hfhub, device='cuda')
peft_config = LoraConfig(...)
quantized_model = get_peft_model(quantized_model, peft_config)
```
Alternatively, use a transformers version that is compatible with HQQ (e.g. by installing it from the latest PyPI or from source):
```py
from transformers import HqqConfig, AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

quant_config = HqqConfig(nbits=4, group_size=64)
quantized_model = AutoModelForCausalLM.from_pretrained(save_dir_or_hfhub, device_map=device_map, quantization_config=quant_config)
peft_config = LoraConfig(...)
quantized_model = get_peft_model(quantized_model, peft_config)
```
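The `group_size=64` above means each contiguous block of 64 weights gets its own scale and zero point, which keeps outliers in one group from degrading the precision of others. A generic (not HQQ-specific) group-wise 4-bit asymmetric quantization sketch:

```python
def quantize_group(ws, bits=4):
    """Asymmetric min/max quantization of one weight group:
    map [min, max] onto the integer range [0, 2**bits - 1]."""
    qmax = 2**bits - 1
    lo, hi = min(ws), max(ws)
    scale = (hi - lo) / qmax or 1.0  # guard against an all-equal group
    zero = lo
    q = [round((w - zero) / scale) for w in ws]
    return q, scale, zero

def dequantize_group(q, scale, zero):
    """Recover approximate weights from codes, scale, and zero point."""
    return [v * scale + zero for v in q]

weights = [0.01 * i - 0.3 for i in range(64)]  # one group of 64 weights
q, scale, zero = quantize_group(weights)
restored = dequantize_group(q, scale, zero)
max_err = max(abs(a - b) for a, b in zip(weights, restored))
```

HQQ itself goes further: instead of plain min/max, it optimizes each group's scale and zero point with a half-quadratic solver to reduce the quantization error.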
PEFT supports models quantized with torchao ("ao") for int8 quantization.
```py
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, TorchAoConfig

model_id = ...
quantization_config = TorchAoConfig(quant_type="int8_weight_only")
base_model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=quantization_config)
peft_config = LoraConfig(...)
model = get_peft_model(base_model, peft_config)
```
- Use the most recent versions of torchao (>= v0.4.0) and transformers (> 4.42).
- Only linear layers are currently supported.
- `quant_type = "int4_weight_only"` is currently not supported.
- `NF4` is not implemented in transformers yet and is thus also not supported.
- DoRA only works with `quant_type = "int8_weight_only"` at the moment.
- There is explicit support for torchao when used with LoRA. However, when torchao quantizes a layer, its class does not change, only the type of the underlying tensor. For this reason, PEFT methods other than LoRA will generally also work with torchao, even if not explicitly supported. Be aware, however, that merging only works correctly with LoRA and with `quant_type = "int8_weight_only"`. If you use a different PEFT method or dtype, merging will likely result in an error, and even if it doesn't, the results will still be incorrect.
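The `int8_weight_only` scheme stores each weight as an int8 code plus a floating-point scale, and dequantizes during the forward pass. A minimal symmetric absmax sketch of that idea (illustrative only, not torchao's implementation):

```python
def int8_quantize(ws):
    """Symmetric absmax quantization: int8 codes plus one fp scale."""
    scale = max(abs(w) for w in ws) / 127 or 1.0  # guard against all-zero weights
    q = [max(-127, min(127, round(w / scale))) for w in ws]
    return q, scale

def int8_dequantize(q, scale):
    """Recover approximate weights by rescaling the int8 codes."""
    return [v * scale for v in q]

weights = [0.5, -1.27, 0.003, 0.9]
q, scale = int8_quantize(weights)
restored = int8_dequantize(q, scale)
```

Because only the weights are quantized (activations stay in floating point), the rounding error per weight is bounded by half the scale, which is why 8-bit weight-only quantization usually costs little accuracy.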
Besides LoRA, the following PEFT methods also support quantization:
- VeRA (supports bitsandbytes quantization)
- AdaLoRA (supports both bitsandbytes and GPTQ quantization)
- (IA)³ (supports bitsandbytes quantization)
If you're interested in learning more about quantization, the following may be helpful:
- Learn more details about QLoRA and check out some benchmarks on its impact in the Making LLMs even more accessible with bitsandbytes, 4-bit quantization and QLoRA blog post.
- Read more about different quantization schemes in the Transformers Quantization guide.