An officially supported task in the examples folder
My own task or dataset (give details below)
Reproduction
I've come across an error with `model.generate` when used inside a `TrainerCallback` of `SFTTrainer`. It happens only when training with `TrainingArguments(..., bf16=True, ...)`, not with `fp16=True`. Models tested: Mistral and Llama-2-7B.
RuntimeError: expected mat1 and mat2 to have the same dtype, but got: float != c10::BFloat16
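A hedged sketch of the kind of setup described above (the hook, prompt, and class name are illustrative, not the reporter's actual script): `transformers` callback hooks receive the model via kwargs, so a generation callback looks roughly like this.

```python
from transformers import TrainerCallback


class GenerationCallback(TrainerCallback):
    """Illustrative callback that runs a sample generation at each evaluation step."""

    def __init__(self, tokenizer, prompt="Hello"):
        self.tokenizer = tokenizer
        self.prompt = prompt

    def on_evaluate(self, args, state, control, model=None, **kwargs):
        inputs = self.tokenizer(self.prompt, return_tensors="pt").to(model.device)
        # Under TrainingArguments(bf16=True), this is the call that raises:
        # RuntimeError: expected mat1 and mat2 to have the same dtype ...
        out = model.generate(**inputs, max_new_tokens=20)
        print(self.tokenizer.decode(out[0], skip_special_tokens=True))
```

The callback would then be passed to the trainer via its `callbacks` argument.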
The mentioned PR which fixed this was merged a year ago; why is `autocast()` still necessary?
I am facing this issue while training Mistral-7B with `bf16=True`. I went through the code of `peft/tuners/lora/bnb.py`, and it seems like this fix is already added there. Any suggestions on how this error can be fixed?
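As a minimal sketch of the failure mode and a possible workaround (not the reporter's actual fix): a plain `nn.Linear` stands in here for the bf16 model's matmul, and `torch.autocast` casts the float32 input down so the dtypes match. Inside the callback, one would wrap the `model.generate(...)` call in the same context manager (with `device_type="cuda"` on GPU).

```python
import torch
import torch.nn as nn

# A bf16 layer, standing in for the bf16-loaded model's weights.
layer = nn.Linear(4, 4, dtype=torch.bfloat16)

# A float32 activation hitting bf16 weights triggers a dtype-mismatch
# RuntimeError like the one reported above.
x = torch.randn(1, 4)  # float32 by default
try:
    layer(x)
except RuntimeError as e:
    print(f"mismatch: {e}")

# autocast casts eligible ops (linear among them) to the requested dtype,
# so the same call succeeds and produces a bf16 output.
with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    out = layer(x)
print(out.dtype)  # torch.bfloat16
```

The same pattern applies to generation: `with torch.autocast("cuda", dtype=torch.bfloat16): model.generate(...)`.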
System Info
Who can help?
@pacman100 @younesbelkada @saya
Minimal reproducible example:
Full stacktrace
Any idea what's going on? Thank you!
Expected behavior
Both `bf16=True` and `fp16=True` should work.