[QNN] Optimize lowering for requantize and FixedPointMultiply. #4798
Conversation
src/relay/qnn/util.cc
Outdated
@@ -157,12 +158,15 @@ Expr FixedPointMultiplyPerChannel(Expr tensor, std::vector<double> multipliers,
  fixed_pt_multipliers.push_back(fixed_pt_multiplier);
  lshifts.push_back(lshift);
  rshifts.push_back(rshift);
  is_lshift_required |= (lshift != 0);
Maybe we should save the `|=` style operators for arithmetic, and write the boolean out as it originally was.
src/relay/qnn/op/requantize.cc
Outdated
  if (out_dtype == DataType::Int(32)) {
    return Cast(shifted_int64_t, out_dtype);
  }
Would you please share the insight here? I looked around, but got a bit lost in the arithmetic. :)
Definitely, happy to explain :)
We approximate the floating point computation here with a fixed point computation. This is done by representing the requantize_scale (input_scale/output_scale) as an int32, where the decimal point is between the 1st and 2nd bit, representing a number between 0.5 and 1. We then multiply this fixed point number with the quantized tensor (another int32 tensor). To keep the precision high, we perform the multiplication in int64. But we can safely say that the resulting number is still a fixed point int64 number, where the decimal part of the number is within int32 range. We then perform a right shift, etc., to get the decimal portion.
So, if the requantize scale is less than 1, we can safely assume that the range will be within int32. (I forgot to add that check, but let me add that as a second commit).
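The scheme above can be sketched as standalone C++. This is a simplified illustration, not the actual TVM code: the helper names `GetFixedPointMultiplierShift` and `FixedPointMultiply` are hypothetical, and details such as per-channel handling and saturation are omitted.

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>

// Represent a positive double scale as an int32 fixed-point multiplier
// (significand in [0.5, 1) scaled by 2^31) plus a power-of-two shift.
void GetFixedPointMultiplierShift(double scale, int32_t* multiplier, int* shift) {
  // scale = significand * 2^shift, with significand in [0.5, 1)
  double significand = std::frexp(scale, shift);
  int64_t q = static_cast<int64_t>(std::round(significand * (1LL << 31)));
  if (q == (1LL << 31)) {  // rounding pushed the significand to 1.0; renormalize
    q /= 2;
    ++(*shift);
  }
  *multiplier = static_cast<int32_t>(q);
}

// Multiply an int32 quantized value by the fixed-point scale. The product is
// computed in int64 to keep precision, then rounded and shifted back down.
int32_t FixedPointMultiply(int32_t x, int32_t multiplier, int shift) {
  int64_t prod = static_cast<int64_t>(x) * multiplier;
  int total_shift = 31 - shift;
  int64_t round = 1LL << (total_shift - 1);  // round-to-nearest
  return static_cast<int32_t>((prod + round) >> total_shift);
}
```

For example, a requantize scale of 0.25 applied to 100 yields 25, and 0.3 applied to 100 yields 30, matching the floating point result after rounding. When the scale is less than 1, the shifted result stays within int32 range, which is the check mentioned above.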
I see, thank you for the detailed explanation!
LGTM, is it possible to have a test?
Done. Thanks!
Ping
Thanks @anijain2305 @jackwish this is merged |
…e#4798) * [QNN] Optimize lowering for requantize and FixedPointMultiply. * Add check for requantize scale gt 1. * Added test case.
As Title.
Changes are verified through existing tests.
@jackwish @FrozenGene @yzhliu @vinx13