bfloat16 and FP16 support for custom kernels #271

Open
sidnb13 opened this issue Oct 31, 2023 · 3 comments

sidnb13 commented Oct 31, 2023

🚀 The feature, motivation and pitch

The kernels currently appear to be limited to the FP32 data type. It would be immensely helpful if the implementations also supported reduced-precision computation (FP16 and BF16); this would broaden their applicability to NLP workloads, not just graph neural networks.

How involved would enabling mixed-precision computation be? Any pointers for getting a PR started?

Alternatives

No response

Additional context

No response

finndayton commented Nov 2, 2023

Adding on to @sidnb13's comment: segment_matmul takes two Tensor arguments, which are plain torch.Tensors, and torch.Tensor does have native support for torch.bfloat16 / torch.float16 / torch.half. The weird thing is that when you run segment_matmul on two tensors cast to bfloat16, you get this error:

[Screenshot from 2023-11-01 showing the resulting error message]
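
A minimal sketch of the call pattern that triggers this (tensor shapes are illustrative; pyg_lib.ops.segment_matmul takes inputs of shape [N, K], a segment offset vector ptr, and weights of shape [B, K, M]):

```python
import torch

from pyg_lib.ops import segment_matmul

# Two segments of four rows each; shapes are illustrative.
inputs = torch.randn(8, 16, dtype=torch.bfloat16)
ptr = torch.tensor([0, 4, 8])
other = torch.randn(2, 16, 32, dtype=torch.bfloat16)

# Works after upcasting to FP32 ...
out_fp32 = segment_matmul(inputs.float(), ptr, other.float())

# ... but the bfloat16 call raises the dispatch error shown above.
out_bf16 = segment_matmul(inputs, ptr, other)
```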

rusty1s (Member) commented Nov 2, 2023

@DamianSzwichtenberg

DamianSzwichtenberg (Member) commented

(segment|grouped)_matmul had an incomplete set of dispatch types. I've fixed the CPU implementation in pyg-lib#272 (@puririshi98, could you please take a look at the CUDA implementation?). If you find any other custom operation that is lacking bf16 support, you can take a look at @yanbing-j's PRs, e.g. pytorch_scatter#316 and pytorch_scatter#375.
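
Until the CUDA kernels gain the extra dispatch types, one user-side workaround is to upcast around the call. A minimal sketch (the helper name segment_matmul_fp32 is hypothetical, and the extra casts cost memory and bandwidth):

```python
import torch

from pyg_lib.ops import segment_matmul

def segment_matmul_fp32(inputs: torch.Tensor, ptr: torch.Tensor,
                        other: torch.Tensor) -> torch.Tensor:
    # Hypothetical helper: run the kernel in FP32, then cast the
    # result back to the reduced-precision input dtype.
    if inputs.dtype in (torch.float16, torch.bfloat16):
        out = segment_matmul(inputs.float(), ptr, other.float())
        return out.to(inputs.dtype)
    return segment_matmul(inputs, ptr, other)
```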
