387_YuNetV2 TFLite integer models cannot allocate tensors #352
if (kernel_type == kGenericOptimized || kernel_type == kReference) {
if (input->type == kTfLiteUInt8) {
TF_LITE_ENSURE(context, output->params.scale == 1. / 256);
LUTPopulate<uint8_t>(
input->params.scale, input->params.zero_point, output->params.scale,
output->params.zero_point,
[](float value) { return 1.0f / (1.0f + std::exp(-value)); },
data->lut_uint8);
} else if (input->type == kTfLiteInt8) {
TF_LITE_ENSURE(context, output->params.scale == 1. / 256);
LUTPopulate<int8_t>(
input->params.scale, input->params.zero_point, output->params.scale,
output->params.zero_point,
[](float value) { return 1.0f / (1.0f + std::exp(-value)); },
data->lut_int8);
} else if (input->type == kTfLiteInt16) {
TF_LITE_ENSURE(context, output->params.scale == 1. / 32768);
TF_LITE_ENSURE(context, output->params.zero_point == 0);
}
}

import numpy as np
import tensorflow as tf
from pprint import pprint

# Load the integer-quantized model (path assumed to be in the working directory).
interpreter = tf.lite.Interpreter(
    model_path='face_detection_yunet_2023mar_integer_quant.tflite')
print('')
pprint(interpreter._get_op_details(41))
print('')
pprint(interpreter.get_tensor_details()[159])
print(f"TFLite quant param: {interpreter.get_tensor_details()[159]['quantization_parameters']['scales']}")
print('')
pprint(interpreter.get_tensor_details()[160])
print(f"TFLite quant param: {interpreter.get_tensor_details()[160]['quantization_parameters']['scales']}")
print('')
pprint(interpreter.get_tensor_details()[161])
print(f"TFLite quant param: {interpreter.get_tensor_details()[161]['quantization_parameters']['scales']}")
print('')
print(f"1. / 256: {np.asarray([1./256.], dtype=np.float32)}")
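For reference, 1/256 is a power of two and therefore exactly representable in float32, so the kernel's `TF_LITE_ENSURE(context, output->params.scale == 1. / 256)` is an exact equality check: any drift in the exported output scale fails it and tensor allocation aborts. A minimal sketch of the comparison (the drifted value below is hypothetical, not taken from the model):

```python
import numpy as np

# 1/256 == 2**-8 is exactly representable in float32, so a scale stored as
# exactly that value survives the kernel's exact-equality check:
exact_scale = np.float32(1.0 / 256.0)
print(float(exact_scale) == 1.0 / 256.0)  # True: exact power of two

# A scale that drifted slightly during quantization (hypothetical value)
# does not compare equal, so TF_LITE_ENSURE fires during Prepare:
drifted_scale = np.float32(0.00390630)
print(float(drifted_scale) == 1.0 / 256.0)  # False
```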
Fix: TensorFlow v2.17.0
Issue Type: Bug
OS: Ubuntu, Other
OS architecture: x86_64
Programming Language: Python
Framework: TensorFlowLite
Model name and Weights/Checkpoints URL:
face_detection_yunet_2023mar_float32.tflite
face_detection_yunet_2023mar_integer_quant.tflite
https://s3.ap-northeast-2.wasabisys.com/pinto-model-zoo/387_YuNetV2/resources.tar.gz
Description
When using the integer-quantized model of 387_YuNetV2, TensorFlow Lite cannot allocate the tensors, while the float32 TFLite model allocates them without error. With 144_YuNet, both the float32 and INT8-quantized models work.
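For context, the `LUTPopulate<int8_t>` call in the kernel excerpt above only runs after the output-scale check passes; roughly, it builds a 256-entry lookup table by dequantizing each possible int8 input, applying sigmoid, and requantizing the result. A rough Python sketch of that idea (not the actual TFLite implementation; rounding details may differ, and the input scale below is illustrative):

```python
import numpy as np

def lut_populate_int8(in_scale, in_zp, out_scale, out_zp, fn):
    # For each of the 256 possible int8 inputs: dequantize, apply fn,
    # then requantize and clamp back into the int8 range.
    lut = np.empty(256, dtype=np.int8)
    for i, q in enumerate(range(-128, 128)):
        x = in_scale * (q - in_zp)
        y = fn(x)
        lut[i] = int(np.clip(round(y / out_scale) + out_zp, -128, 127))
    return lut

sigmoid = lambda v: 1.0 / (1.0 + np.exp(-v))
# Illustrative parameters: input scale 0.1, zero point 0; output quantization
# as the kernel enforces it for int8 sigmoid: scale 1/256, zero point -128.
lut = lut_populate_int8(0.1, 0, 1.0 / 256.0, -128, sigmoid)
print(lut[0], lut[-1])  # saturates toward the int8 extremes
```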
Target Machine:
Arch Linux
Python 3.11
TensorFlow Lite 2.13.0
Relevant Log Output
URL or source code for simple inference testing code