Add Ascend NPU support #5541
Conversation
Why Huawei? What do NPUs have to do with Huawei? Is this some kind of advertisement?
@Malrama I have just followed the existing behavior for Nvidia/AMD/Apple GPUs. If another NPU device needs to be supported, a new option such as "F) xxx" could be added, which would install the requirements for xxx. Thank you anyway; I will remove "Huawei" to avoid misunderstanding.
I think it's fine to leave Huawei; the code lists chip vendors like AMD, Intel, and Apple, but that doesn't mean it's endorsing those companies. In fact, "Ascend" might be confusing down the line, because that's like listing Apple chips as "M" and Intel as "Arc".
@OKN1212 thanks for your review. I think "Ascend" is better for users and would not cause confusion the way "Apple" might, because all Ascend series chips use the same versions of torch and torch_npu. If different Ascend chips require different torch/torch_npu versions in the future, "Ascend" here may no longer be appropriate.
Hi @oobabooga, thank you for this great work! I see that this project supports multiple backends, and there are people who want to use the Ascend NPU. The Ascend NPU has been officially supported by PyTorch since PyTorch 2.1, and people can use it for LLM training and inference.
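To make the backend-selection idea concrete, here is a minimal sketch of how a loader might prefer an Ascend NPU when available. This is my own illustration, not code from this PR; it assumes the Ascend PyTorch adapter is published as the `torch_npu` package (the name used for PyTorch >= 2.1):

```python
import importlib.util


def pick_backend() -> str:
    """Pick a torch device string, preferring an Ascend NPU.

    Hypothetical helper: assumes the Ascend adapter ships as the
    "torch_npu" package. Falls back to CUDA, then CPU.
    """
    if importlib.util.find_spec("torch_npu") is not None:
        return "npu"
    if importlib.util.find_spec("torch") is not None:
        import torch
        if torch.cuda.is_available():
            return "cuda"
    return "cpu"


print(pick_backend())
```

On a machine without torch_npu or CUDA this simply falls through to `"cpu"`, so the same code path works across backends.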
Any plan to support this? @oobabooga Looks like @Malrama's and @OKN1212's questions have been resolved. When I try to use BTW, I also found
Hi @oobabooga , this PR has been pending for over a month. May I know if adding new backends is supported? |
@oobabooga |
I am not familiar with NPUs and do not have access to one, so I have reverted the various changes to the one-click installer and removed temporary workarounds, keeping the changes minimal. |
Then which one should I select when using one click installer if I have a Huawei Ascend NPU? |
Thanks for your reply! I almost thought this job was going to fail. |
@Touch-Night I've written a wiki page on patching upstream as a temporary alternative. Although it's not fancy, it lets you use the Ascend NPU.
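For readers curious what such a patch typically amounts to: a minimal sketch (my own illustration, not the wiki's actual code) that redirects hard-coded `"cuda"` device strings to `"npu"` when the torch_npu adapter is importable:

```python
import importlib.util


def normalize_device(device: str) -> str:
    """Redirect "cuda" device strings to "npu" when torch_npu is present.

    Illustrative only: the real patch described in the wiki may differ.
    """
    has_npu = importlib.util.find_spec("torch_npu") is not None
    if has_npu and device.startswith("cuda"):
        # e.g. "cuda:0" -> "npu:0"
        return "npu" + device[len("cuda"):]
    return device
```

Wrapping device strings like this keeps the upstream code untouched except at the points where devices are chosen.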
Thanks, but I have already reverted the changes by @oobabooga, turning the Huawei Ascend NPU support into general NPU support in my own fork for localization.
|
nice job |
Checklist:
Description
Ascend NPU has been supported by transformers, deepspeed, and other libraries. Building on that work, I want to use the Ascend NPU for chat, and I also find there are people who want to use the Ascend NPU (Open: #5261).
This PR makes the automatic installation succeed and has passed the tests below:
Verified on Ascend NPU with ChatGLM2-6B:
- Chat
- Default
- Notebook
- deepspeed
Verified on Ascend NPU with opt-1.3b:
- LoRA training