Quick summary
A standalone BatchNormalization node with certain settings (see the .onnx file in the attached bn_model.zip) changes its functional behavior when transformed with the BatchNormToAffine transformation.
Steps to Reproduce
1. Run the BatchNormToAffine transformation on the ONNX file.
2. Execute the model before and after the transformation with random floating-point input (x = gen_finn_dt_tensor(DataType["FLOAT32"], (1, 64, 64, 64)); inp_dict = {"global_in": x}).
3. Compare the outputs of the model before and after the transformation.
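The before/after comparison can be mimicked with plain NumPy, independent of qonnx. This is an illustrative sketch assuming standard ONNX BatchNormalization inference semantics, using randomly generated parameters rather than the ones in bn_model.zip:

```python
import numpy as np

rng = np.random.default_rng(0)
C = 64
x = rng.standard_normal((1, C, 64, 64)).astype(np.float32)

# per-channel BatchNormalization parameters (illustrative, random)
scale = rng.standard_normal(C).astype(np.float32)
bias = rng.standard_normal(C).astype(np.float32)
mean = rng.standard_normal(C).astype(np.float32)
var = rng.random(C).astype(np.float32) + np.float32(0.5)
eps = np.float32(1e-5)

def bn(x):
    # "before": standard BatchNormalization inference formula (NCHW layout)
    shp = (1, C, 1, 1)
    return (scale.reshape(shp) * (x - mean.reshape(shp))
            / np.sqrt(var.reshape(shp) + eps) + bias.reshape(shp))

# "after": fold the node into an affine y = A*x + B, as BatchNormToAffine does
A = scale / np.sqrt(eps + var)
B = bias - mean * A
y_affine = A.reshape(1, C, 1, 1) * x + B.reshape(1, C, 1, 1)

# the two forms are mathematically identical, but float32 rounding in the
# precomputed A and B leaves a small residual difference
err = float(np.max(np.abs(bn(x) - y_affine)))
print("max abs difference:", err)
```

Whether such a residual trips the before/after comparison depends on the tolerance it uses; an exact equality check will generally fail.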
Expected behavior
The functional behavior should not change due to the transformation.
Actual behavior
The outputs before and after do not match.
Environment
main branch of qonnx (commit hash: 12c96a3ded06beacab08e0f554e4ed014476c0aa).
Possible fix
It appears to be a rounding error originating from this calculation: A = scale / np.sqrt(epsilon + variance)
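A is computed once per channel, so any rounding in np.sqrt(epsilon + variance) is baked into the folded parameters and then multiplied into every activation. A minimal sketch (illustrative random values, not the parameters from bn_model.zip) comparing the all-float32 computation against a float64 reference:

```python
import numpy as np

rng = np.random.default_rng(42)
scale = rng.standard_normal(64).astype(np.float32)
variance = rng.random(64).astype(np.float32) + np.float32(0.5)
epsilon = np.float32(1e-5)

# A as the transformation computes it, entirely in float32
A32 = scale / np.sqrt(epsilon + variance)

# the same expression evaluated with float64 intermediates
A64 = (scale.astype(np.float64)
       / np.sqrt(np.float64(epsilon) + variance.astype(np.float64)))

# the float32 result carries a few ULPs of rounding error per channel
max_rel_err = float(np.max(np.abs(A32 - A64) / np.abs(A64)))
print("max relative error:", max_rel_err)
```

One possible mitigation would be to compute A and B with float64 intermediates and cast the final initializers back to the model's data type; whether that eliminates the mismatch observed for bn_model.zip is untested.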