Support floating-point exponents for ttnn.pow #13779
Labels: bug, op_cat: eltwise, perf, precision, pytorch-compiler
jdh8 added the precision, op_cat: eltwise, and perf labels and removed the community label on Oct 15, 2024
As we found in tenstorrent/pytorch2.0_ttnn#211, ttnn.pow only supports integer exponents. Even if we only need integral support (as current statistics suggest), we can still optimize the current implementation with binary exponentiation. See tt-metal/tt_metal/hw/ckernels/blackhole/metal/llk_api/llk_sfpu/ckernel_sfpu_power_iterative.h, lines 18 to 26 at commit 021137c.
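The referenced kernel multiplies the base exponent-many times. For illustration, here is a minimal scalar sketch of binary exponentiation (exponentiation by squaring) in plain C++; the function name `pow_by_squaring` is hypothetical, and this is not the SFPU kernel itself (which operates on vector registers), just the technique proposed above:

```cpp
#include <cstdint>

// Hypothetical scalar sketch of exponentiation by squaring;
// not the tt-metal SFPU implementation.
float pow_by_squaring(float base, uint32_t exp) {
    float result = 1.0f;
    // Walk the exponent bit by bit: square the base at each step and
    // fold it into the result whenever the current bit is set.
    while (exp != 0) {
        if (exp & 1u) {
            result *= base;
        }
        base *= base;
        exp >>= 1;
    }
    return result;
}
```

This brings the multiply count down from O(n) to O(log n) for exponent n. Supporting arbitrary floating-point exponents would need a different path, e.g. the identity x^y = exp(y · ln x) for positive bases, with special handling for negative bases and integral exponents.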