[RFC] Denormal floating point values handling #19361
Comments
@pengzhao-intel @mgouicem could you please help to review the proposal? Many thanks!
It would be more convenient to set FTZ to true by default. The only concern is whether it affects training accuracy (presumably to a very limited extent). We have encountered several performance issues with denormal computation in the past, but they only happened in users' debugging runs with randomly generated numbers. Thus, I am not sure whether this issue will happen in real cases. Let's wait a while for input from other members :)
Thanks @grygielski for the proposal. I definitely agree with its premise: most users do not know or care about denormals, and they just get in the way of good performance for some use cases. For ease of use, I would encourage disabling denormals by default and going for option 2 or 4 (so set both FTZ and DAZ), since the users that need denormals for accuracy usually know about denormals in the first place, whereas for general users denormals will likely not make any difference in accuracy but will impact performance. I have no opinion on which one is best for code simplicity/maintenance, though, so I'll let the MXNet contributors comment further on that.
@szha The clip is very necessary; otherwise tons of NaNs come up when very small values are fed into certain OPs.
@xidulu Thanks a lot for your comment on the user experience. In this case, using the daz package:

```python
import daz
import numpy as np  # needed for the examples below

daz.set_ftz()
daz.set_daz()

np.power(np.finfo('float32').eps, 5, dtype=np.float32)
# >>> 2.4074124e-35   (still a normal float32)
np.power(np.finfo('float32').eps, 6, dtype=np.float32)
# >>> 0.0             (the denormal result is flushed to zero)
```
Based on the discussion, I think the combined approach for dealing with denormal floats sounds reasonable. @grygielski thanks for the proposal
Problem statement
Currently MXNet has no mechanism for handling denormal floating point values (Wikipedia) in parameters/inputs/outputs. Such numbers are problematic for computation because adding or multiplying them requires many more CPU cycles than operating on normal floating point numbers. However, they are so close to zero (e.g. ~1e-30) that most of the time they can be rounded to 0 without any loss in the model's accuracy.
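To illustrate the cost, here is a quick benchmark sketch (timings are machine-dependent, and the gap only appears on CPUs that take a slow microcode path for denormals, which includes most x86 parts):

```python
import timeit
import numpy as np

normal = np.full(1_000_000, 1.0, dtype=np.float32)
denorm = np.full(1_000_000, 1e-40, dtype=np.float32)  # 1e-40 is denormal in float32

# Multiplying by 0.5 keeps denormal inputs/outputs denormal, so the second
# loop typically runs several times slower unless FTZ/DAZ is enabled.
print(timeit.timeit(lambda: normal * np.float32(0.5), number=100))
print(timeit.timeit(lambda: denorm * np.float32(0.5), number=100))
```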
Rounding denormals away can be done simply by checking every parameter of the model against some small threshold and setting all parameters below it to 0. This adds overhead to saving/loading parameters, and it is not a complete fix, because denormal values can also be created during inference, in input/output values.
A cleaner solution would be to use hardware features of modern CPUs. Since the SSE2 extension, CPUs have flags that handle denormals automatically: DAZ (denormals-are-zero) and FTZ (flush-to-zero). They can be set inside C++ code using intrinsics such as `_MM_SET_DENORMALS_ZERO_MODE` and `_MM_SET_FLUSH_ZERO_MODE`.
An important point is that denormal values are rather rare, since most modern NN architectures do not operate asymptotically close to 0. However, they can show up in RNN models (because of the sigmoid gate activations) or when using layers like PReLU (#19218).
My question here is: which way of handling such cases does the community prefer? I would love to hear your suggestions and opinions on the proposed solutions.
Proposed solutions
Option 1: Leave denormal handling to users, who can round near-zero parameters to 0 themselves (or use an external library). Example code deleting denormals from the PReLU gamma parameter is sketched below.
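A minimal sketch of such a clean-up, assuming a Gluon model `net` whose PReLU slope parameter has "gamma" in its name (the name filter is illustrative; actual parameter names may differ):

```python
import numpy as np
import mxnet as mx

# net: an initialized mx.gluon Block (assumed to exist)
tiny = np.finfo(np.float32).tiny  # smallest *normal* float32, ~1.18e-38

for name, param in net.collect_params().items():
    if 'gamma' in name:  # e.g. the PReLU slope parameter
        data = param.data()
        # Replace every denormal entry (|x| < tiny) with exact zero.
        cleaned = mx.nd.where(mx.nd.abs(data) < tiny,
                              mx.nd.zeros_like(data), data)
        param.set_data(cleaned)
```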
Pros: simple solution; no change in framework behavior.
Cons: users may not be aware of the denormal slowdown; requires extra code or an external library; not user-friendly.
Option 2: Enable DAZ and FTZ by default inside the engine, with no opt-out (TensorFlow-like solution). Example code used during execution in TensorFlow:
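A sketch of the general pattern (the identifiers are illustrative, not taken from TensorFlow's sources): an RAII guard enables FTZ/DAZ for the duration of a scope and restores the previous mode afterwards.

```cpp
#include <xmmintrin.h>  // _MM_GET/SET_FLUSH_ZERO_MODE
#include <pmmintrin.h>  // _MM_GET/SET_DENORMALS_ZERO_MODE

// Enables FTZ and DAZ for the current thread; the previous mode is
// restored on destruction, so code outside the scope is unaffected.
class ScopedFlushDenormal {
 public:
  ScopedFlushDenormal()
      : ftz_(_MM_GET_FLUSH_ZERO_MODE()),
        daz_(_MM_GET_DENORMALS_ZERO_MODE()) {
    _MM_SET_FLUSH_ZERO_MODE(_MM_FLUSH_ZERO_ON);
    _MM_SET_DENORMALS_ZERO_MODE(_MM_DENORMALS_ZERO_ON);
  }
  ~ScopedFlushDenormal() {
    _MM_SET_FLUSH_ZERO_MODE(ftz_);
    _MM_SET_DENORMALS_ZERO_MODE(daz_);
  }

 private:
  unsigned int ftz_;
  unsigned int daz_;
};
```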
Usage:
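A hypothetical call site, e.g. at the top of each executor thread's loop:

```cpp
void ExecutorThreadLoop() {
  // All kernels dispatched from this thread see FTZ/DAZ enabled.
  ScopedFlushDenormal scoped_flush;
  // ... run operator kernels ...
}
```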
Pros: users do not have to worry about denormal cases; no change in the external API.
Cons: sometimes it may lead to wrong results (?); it cannot be switched off if needed.
Option 3: Create a Python API function enabling the DAZ and FTZ flags. This is a PyTorch-like solution: PyTorch does not handle denormals by default, but the user can invoke a Python function to treat denormals as 0s.
Example from PyTorch documentation:
https://pytorch.org/docs/stable/generated/torch.set_flush_denormal.html
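Its documented behavior looks like this (a float64 denormal such as 1e-323 is flushed to zero once the flag is set):

```python
import torch

torch.set_flush_denormal(True)   # returns True if the CPU supports FTZ/DAZ
print(torch.tensor([1e-323], dtype=torch.float64))  # tensor([0.]) -- flushed

torch.set_flush_denormal(False)
print(torch.tensor([1e-323], dtype=torch.float64))  # denormal value preserved
```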
Pros: users can control the behavior of the framework; simple one-line API.
Cons: users have to be aware that denormals exist; additional functionality in the API.
Option 4: A combination of the two previous options: enable DAZ and FTZ by default and expose a Python API function that disables them.
Pros: user-friendly solution that still allows users to control the framework's behavior if needed.
Cons: the most complex solution in terms of implementation
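Under this option the user-facing switch might look like the following sketch (`mx.set_flush_denormal` is a hypothetical name, not an existing MXNet function):

```python
import mxnet as mx

# Hypothetical API: FTZ/DAZ would be ON by default after import.
mx.set_flush_denormal(False)  # opt out: restore full IEEE-754 denormal handling
mx.set_flush_denormal(True)   # return to the default flush-to-zero behavior
```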