Performance: Convert an FDIV to an FMUL in a hot loop #1030
Description
For the generic/CPU path, we noticed a very hot loop executing a floating-point divide where the divisor is loop-invariant, so the divide can be replaced by a multiply by the precomputed reciprocal. This can provide a speedup on microarchitectures that implement FMUL with lower latency than FDIV.
Added dependencies: none
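For illustration only, here is a minimal sketch of the kind of transformation described above (hypothetical function names, not the actual marian code): the loop-invariant divisor is inverted once outside the loop, and the per-element FDIV becomes an FMUL.

```cpp
#include <cstddef>

// Before: one FDIV per element.
void scale_div(float* out, const float* in, std::size_t n, float divisor) {
    for (std::size_t i = 0; i < n; ++i)
        out[i] = in[i] / divisor;
}

// After: a single FDIV up front (the reciprocal), then one FMUL per element.
void scale_mul(float* out, const float* in, std::size_t n, float divisor) {
    const float inv = 1.0f / divisor;  // hoisted loop-invariant reciprocal
    for (std::size_t i = 0; i < n; ++i)
        out[i] = in[i] * inv;
}
```

Note that compilers generally will not perform this rewrite on their own without fast-math-style flags, because `x / d` and `x * (1.0f / d)` can differ in the last bit; that is why the accuracy verification mentioned below matters.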
How to test
I could not build this repository on my box. We tested with the SPEC CPU candidate drop of marian. The edit is small and contained, and it was verified within our framework without loss of accuracy. We are offering this patch in case it helps the community.
Here is a snippet of coverage from tensor_operations.cpp, where hit counts are listed to the right of the line numbers. In this image, lines 1177 and 1181 are the ones we would edit.
Checklist