Environment
For NVIDIA GPUs, we recommend using CUDA >= 11.8.

Install torch with CUDA support and test it with:

```python
import torch
assert torch.cuda.is_available()
```
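As a quick sanity check against the CUDA >= 11.8 recommendation, you can also print the CUDA version your torch build was compiled against (a minimal sketch; `torch.version.cuda` is `None` on CPU-only builds):

```python
import torch

# CUDA version the installed torch wheel was built against,
# e.g. "11.8" or "12.1"; None for CPU-only builds.
print(torch.version.cuda)
```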
FlashAttention-2 requires sm >= 8.0, and LLM.int8 (8-bit quantization) requires sm >= 7.5. See https://arnon.dk/matching-sm-architectures-arch-and-gencode-for-various-nvidia-cards/ for a mapping between NVIDIA cards and SM architectures.
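To see which of these features your GPU supports, you can query its compute capability directly (a minimal sketch using torch's built-in query; the thresholds mirror the requirements above):

```python
import torch

# (major, minor) compute capability of the current GPU,
# e.g. (8, 6) for an RTX 3090 -> sm_86
major, minor = torch.cuda.get_device_capability()

print(f"sm_{major}{minor}")
print("FlashAttention-2 supported:", (major, minor) >= (8, 0))
print("LLM.int8 supported:", (major, minor) >= (7, 5))
```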
Ascend NPUs require the torch adapter; follow https://github.com/Ascend/pytorch. If you are using DeepSpeed, you are also required to install the DeepSpeed adapter for NPU: https://github.com/Ascend/DeepSpeed.
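A quick availability check on NPU might look like the following (a minimal sketch, assuming the Ascend adapter is installed as the `torch_npu` package and registers the `torch.npu` namespace):

```python
import torch
import torch_npu  # assumed package name for the Ascend adapter; registers torch.npu

# Analogous to torch.cuda.is_available() on NVIDIA GPUs.
assert torch.npu.is_available()
```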
For AMD GPUs, we support the ROCm build of torch.
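On ROCm builds, torch exposes AMD GPUs through the same `torch.cuda` namespace, so the usual availability check applies; `torch.version.hip` distinguishes a ROCm build from a CUDA one (a minimal sketch):

```python
import torch

# ROCm builds expose AMD GPUs through the torch.cuda API.
assert torch.cuda.is_available()

# Set on ROCm builds (e.g. "5.6.x"); None on CUDA builds.
print(torch.version.hip)
```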
For Apple silicon (the MPS backend), install torch and test it with:

```python
import torch
assert torch.backends.mps.is_available()
```
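Once the check passes, tensors and models are placed on the accelerator via the `"mps"` device string, for example:

```python
import torch

x = torch.ones(2, 3, device="mps")  # allocate directly on the Apple GPU
print(x.device)  # mps:0
```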