Fix incorrect accelerator device handling for MPS in `TrainingArguments` (#31812)

* Fix wrong accelerator device setup when using MPS

* More robust TrainingArguments MPS handling

* Update training_args.py

* Cleanup
andstor authored Jul 8, 2024
1 parent 4879ac2 commit ae9dd02
Showing 1 changed file with 3 additions and 0 deletions.
3 changes: 3 additions & 0 deletions src/transformers/training_args.py
```diff
@@ -48,6 +48,7 @@
     is_torch_bf16_cpu_available,
     is_torch_bf16_gpu_available,
     is_torch_mlu_available,
+    is_torch_mps_available,
     is_torch_neuroncore_available,
     is_torch_npu_available,
     is_torch_tf32_available,
@@ -2178,6 +2179,8 @@ def _setup_devices(self) -> "torch.device":
             )
         if self.use_cpu:
             device = torch.device("cpu")
+        elif is_torch_mps_available():
+            device = torch.device("mps")
         elif is_torch_xpu_available():
             if not is_ipex_available() and not is_accelerate_available("0.32.0.dev"):
                 raise ImportError("Using the XPU PyTorch backend requires `accelerate>=0.32.0.dev`")
```
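The change above inserts an MPS branch into the device-selection cascade: an explicit `use_cpu` request wins first, then MPS is preferred when available, before the XPU (and other) checks. A minimal, torch-free sketch of that cascade (the boolean flags here are hypothetical stand-ins for the real `is_torch_*_available()` helpers, and the real method returns a `torch.device` rather than a string):

```python
def select_device(use_cpu: bool, mps_available: bool, xpu_available: bool) -> str:
    """Simplified sketch of the TrainingArguments device cascade.

    Order matters: a user-requested CPU always wins; otherwise the
    MPS branch added by this commit runs before the XPU check, so an
    Apple Silicon machine no longer falls through to the wrong backend.
    """
    if use_cpu:
        return "cpu"
    elif mps_available:      # branch added by this commit
        return "mps"
    elif xpu_available:      # real code also checks accelerate/ipex versions here
        return "xpu"
    return "cpu"             # real code continues with CUDA, NPU, MLU, ...
```

For example, on an Apple Silicon machine (`mps_available=True`) with `use_cpu=False`, the cascade now resolves to `"mps"` instead of falling through to a later branch.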
