fix the torch_dtype and quant_storage_dtype (#1614)
* fix the torch_dtype and quant_storage_dtype

Co-Authored-By: Gabriel Altay <[email protected]>

* quality

---------

Co-authored-by: Gabriel Altay <[email protected]>
pacman100 and galtay authored Apr 4, 2024
1 parent 02b5aed commit 8452d71
Showing 2 changed files with 5 additions and 2 deletions.
examples/sft/train.py (2 changes: 1 addition & 1 deletion)

@@ -40,7 +40,7 @@ class ModelArguments:
         metadata={"help": "Compute dtype for 4bit base models"},
     )
     bnb_4bit_quant_storage_dtype: Optional[str] = field(
-        default="float32",
+        default="uint8",
         metadata={"help": "Quantization storage dtype for 4bit base models"},
     )
     bnb_4bit_quant_type: Optional[str] = field(
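For context, this default change means the packed 4-bit weights are stored as uint8, bitsandbytes' native storage format, unless the user opts into a floating-point storage dtype (as FSDP+QLoRA does with bfloat16). A minimal sketch of how these string arguments are resolved into torch dtypes and passed to BitsAndBytesConfig, assuming a transformers version that exposes bnb_4bit_quant_storage (4.39+); the standalone snippet below is illustrative, not the repository's train.py:

    # Illustrative only: resolve the CLI dtype strings the same way the example's utils.py does.
    import torch
    from transformers import BitsAndBytesConfig

    bnb_4bit_compute_dtype = "bfloat16"      # --bnb_4bit_compute_dtype
    bnb_4bit_quant_storage_dtype = "uint8"   # new default from this commit

    compute_dtype = getattr(torch, bnb_4bit_compute_dtype)              # torch.bfloat16
    quant_storage_dtype = getattr(torch, bnb_4bit_quant_storage_dtype)  # torch.uint8

    bnb_config = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=compute_dtype,
        bnb_4bit_quant_storage=quant_storage_dtype,  # storage dtype of the packed 4-bit weights
    )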
examples/sft/utils.py (5 changes: 4 additions & 1 deletion)

@@ -125,12 +125,15 @@ def create_and_prepare_model(args, data_args, training_args):
             load_in_4bit=args.use_4bit_quantization,
         )
     else:
+        torch_dtype = (
+            quant_storage_dtype if quant_storage_dtype and quant_storage_dtype.is_floating_point else torch.float32
+        )
         model = AutoModelForCausalLM.from_pretrained(
             args.model_name_or_path,
             quantization_config=bnb_config,
             trust_remote_code=True,
             attn_implementation="flash_attention_2" if args.use_flash_attn else "eager",
-            torch_dtype=quant_storage_dtype or torch.float32,
+            torch_dtype=torch_dtype,
         )

     peft_config = None
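The new expression reuses the quant storage dtype as the model's load dtype only when it is a floating-point dtype; with the new uint8 default it falls back to float32. The previous `quant_storage_dtype or torch.float32` always took the left branch once quantization was enabled, because torch.dtype objects are truthy, so torch.uint8 would have been passed as torch_dtype. A small sketch of the selection logic in isolation (the helper name pick_torch_dtype is hypothetical, for illustration):

    import torch

    def pick_torch_dtype(quant_storage_dtype):
        # Hypothetical helper mirroring the expression added in utils.py:
        # use the storage dtype as torch_dtype only when it is floating point,
        # otherwise fall back to float32.
        return (
            quant_storage_dtype
            if quant_storage_dtype and quant_storage_dtype.is_floating_point
            else torch.float32
        )

    print(pick_torch_dtype(torch.bfloat16))  # torch.bfloat16 (FSDP+QLoRA storage dtype)
    print(pick_torch_dtype(torch.uint8))     # torch.float32  (new uint8 default)
    print(pick_torch_dtype(None))            # torch.float32  (no quantization)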
