System information
tf-nightly

Describe the problem.
This PR, merged to solve an issue with EfficientNet normalisation, hardcoded a set of float values into the EfficientNet architecture. These magic numbers are fixed to float32 when running in graph mode, so the network raises an exception as soon as it is called on float16 or bfloat16 inputs; bfloat16 inputs are the recommended dtype when training on TPUs.

Describe the current behavior.
An exception is raised whenever a Keras EfficientNet is trained on a non-float32 input.

Describe the expected behavior.
No exception is raised when a Keras EfficientNet is trained on a non-float32 input.

Do you want to contribute a PR? (yes/no): yes
Briefly describe your candidate solution (if contributing): Replace the hardcoded graph constants with:
x = layers.Rescaling(1. / tf.math.sqrt(IMAGENET_STDDEV_RGB))(x)
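A minimal sketch of why a dtype-agnostic rescaling avoids the crash, using NumPy as a stand-in for `layers.Rescaling` (the per-channel stddev values below are assumed for illustration; a `Rescaling` layer similarly casts its scale constant to the input's compute dtype rather than pinning it to float32):

```python
import numpy as np

# Assumed ImageNet per-channel stddevs, for illustration only.
IMAGENET_STDDEV_RGB = np.array([0.229, 0.224, 0.225])

def rescale(x):
    """Divide each channel by sqrt(stddev), computed in x's own dtype.

    Because the scale constant is cast to x.dtype before the multiply,
    float16/bfloat16 inputs never collide with a float32 graph constant.
    """
    scale = (1.0 / np.sqrt(IMAGENET_STDDEV_RGB)).astype(x.dtype)
    return x * scale

x16 = np.ones((1, 2, 2, 3), dtype=np.float16)
x32 = np.ones((1, 2, 2, 3), dtype=np.float32)
assert rescale(x16).dtype == np.float16  # dtype preserved, no mismatch
assert rescale(x32).dtype == np.float32
```

The design point is that the scale is derived at call time from the input's dtype instead of being baked into the graph as a float32 constant.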
Standalone code to reproduce the issue.
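A hedged sketch of a standalone reproduction (not the reporter's original snippet): it sets the `mixed_bfloat16` policy, builds an unweighted EfficientNetB0, and calls it on a bfloat16 batch, which on affected builds should trip the float32 graph constants. The import is guarded so the sketch degrades gracefully when TensorFlow is absent.

```python
def reproduce():
    """Attempt the failing call; return the raised exception, or None."""
    try:
        import tensorflow as tf
    except ImportError:
        return None  # tensorflow unavailable; nothing to reproduce

    # bfloat16 compute dtype, as recommended for TPU training.
    tf.keras.mixed_precision.set_global_policy("mixed_bfloat16")
    model = tf.keras.applications.EfficientNetB0(weights=None)
    x = tf.zeros((1, 224, 224, 3), dtype=tf.bfloat16)
    try:
        model(x)  # raises on affected builds: float32 constant vs bfloat16 input
    except Exception as exc:
        return exc
    return None

print(reproduce())  # an exception on affected builds, otherwise None
```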