[QNN] Optimize lowering for requantize and FixedPointMultiply. #4798
Merged
```diff
@@ -149,6 +149,7 @@ Expr FixedPointMultiplyPerChannel(Expr tensor, std::vector<double> multipliers,
   // 1) Calculating the integer multiplier and integer shift. These are calculated per axis/per
   // channel.
   std::vector<int32_t> fixed_pt_multipliers, lshifts, rshifts;
+  bool is_lshift_required = false;
   for (auto multiplier : multipliers) {
     int32_t fixed_pt_multiplier, shift;
     std::tie(fixed_pt_multiplier, shift) = GetFixedPointMultiplierShift(multiplier);

@@ -157,12 +158,15 @@ Expr FixedPointMultiplyPerChannel(Expr tensor, std::vector<double> multipliers,
     fixed_pt_multipliers.push_back(fixed_pt_multiplier);
     lshifts.push_back(lshift);
     rshifts.push_back(rshift);
+    is_lshift_required |= (lshift != 0);
   }

   // 2) Multiply the integer multiplier. Convert left shifts into expr and multiply.
-  auto lshift_expr = MakeConstantTensor(hp_dtype, {n_channels}, lshifts);
-  auto exp_lshift_expr = ExpandBiasToMatchAxis(lshift_expr, n_dim, {channel_axis});
-  tensor = LeftShift(tensor, exp_lshift_expr);
+  if (is_lshift_required) {
+    auto lshift_expr = MakeConstantTensor(hp_dtype, {n_channels}, lshifts);
+    auto exp_lshift_expr = ExpandBiasToMatchAxis(lshift_expr, n_dim, {channel_axis});
+    tensor = LeftShift(tensor, exp_lshift_expr);
+  }

   // 3) Perform the multiplication in higher precision.
   // The scalar is a fixed point value of int32 where the decimal point is
```
Would you please share the insight here? I looked around, but got a bit lost in the arithmetic. :)
Definitely, happy to explain :)
We approximate the floating point computation here with a fixed point computation. This is done by representing the requantize scale (input_scale/output_scale) as an int32 where the decimal point sits between the 1st and 2nd bits, i.e. it encodes a number between 0.5 and 1. We then multiply this fixed point number with the quantized tensor (another int32 tensor). To keep the precision high, we perform the multiplication in int64. The result is still a fixed point int64 number, and its integer part stays within the int32 range. We then perform a right shift, etc., to discard the fractional bits and recover the result.
So, if the requantize scale is less than 1, we can safely assume that the result will be within int32. (I forgot to add that check, but let me add it as a second commit.)
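The arithmetic described above can be sketched end to end in a few lines: encode the scale as an int32 with 31 fractional bits, multiply in wide precision, then right shift with a rounding nudge. This is a hypothetical helper for illustration (not TVM's actual API), assuming 0 < multiplier < 1 so the result fits in int32.

```python
import math

def fixed_point_requantize(x, multiplier):
    """Approximate round(x * multiplier) using only integer ops.

    Assumes 0 < multiplier < 1, so when x is int32 the result
    stays within int32 (illustrative sketch, not TVM's API).
    """
    mantissa, shift = math.frexp(multiplier)            # mantissa in [0.5, 1)
    fixed_pt_multiplier = round(mantissa * (1 << 31))   # int32, 31 fractional bits
    total_rshift = 31 - shift                           # undo 2**-31 scaling and exponent
    prod = x * fixed_pt_multiplier                      # exact here; int64 in the C++ lowering
    nudge = 1 << (total_rshift - 1)                     # +0.5 ulp for round-to-nearest
    return (prod + nudge) >> total_rshift

# Requantizing with scale input_scale/output_scale = 0.3:
print(fixed_point_requantize(1000, 0.3))  # round(1000 * 0.3) = 300
```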
I see, thank you for the detailed explanation!