Is it possible to infer at 513 input size with a quantization-aware trained DeepLab? #5

Open
khkim0127 opened this issue Mar 25, 2019 · 6 comments

@khkim0127

I trained DeepLab (MobileNetV2) with the quantization-aware training method, exported the quantized .pb file (cropped as described here), and converted the .pb file to a .tflite file. So now I have my quantized .tflite file.

Up to crop size 321 the segmentation result is good (the green mask is overlaid). But above crop size 321 the segmentation is not good (the green mask is not overlaid).

Could you tell me why this problem happens?
Sorry for my bad English, and thank you.
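To pin down where it breaks, the model can first be exercised at both sizes on the desktop. A minimal sketch with TensorFlow's Python interpreter; the file name deeplab_mnv2_quant.tflite is a placeholder for the exported model, and resizing the input only succeeds if the exported graph tolerates the new shape:

import numpy as np
import tensorflow as tf

# Run the same quantized model at both crop sizes and compare the outputs.
for size in (321, 513):
    interpreter = tf.lite.Interpreter(model_path="deeplab_mnv2_quant.tflite")
    detail = interpreter.get_input_details()[0]
    interpreter.resize_tensor_input(detail["index"], [1, size, size, 3])
    interpreter.allocate_tensors()
    detail = interpreter.get_input_details()[0]  # refresh shape after resize
    interpreter.set_tensor(detail["index"],
                           np.zeros(detail["shape"], dtype=detail["dtype"]))
    interpreter.invoke()
    out_detail = interpreter.get_output_details()[0]
    out = interpreter.get_tensor(out_detail["index"])
    print(size, out.shape, np.unique(out))  # a blank mask collapses to one label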

@tantara
Owner

tantara commented Apr 8, 2019

It might depend on the training configuration or the dataset. Could you provide your implementation details?

@rapotekhin

rapotekhin commented Sep 2, 2019

Dear @tantara, I have the same question. I'm trying to run your application with my DeepLab model, which differs from yours only in input size (1, 513, 513, 3).
I used export_model.py with its default input parameters (with small changes from the original, as recommended in tensorflow/tensorflow#23747 (comment), but that didn't help) to convert the model to .pb format, and tflite_convert to convert it to .tflite.

tflite_convert --output_file=mobilenet_v2_deeplab_v3_513.tflite \
    --graph_def_file=mobilenet_v2_deeplab_v3_513.pb \
    --input_arrays=ImageTensor \
    --output_arrays=SemanticPredictions \
    --input_shapes=1,513,513,3 \
    --inference_input_type=QUANTIZED_UINT8 \
    --inference_type=FLOAT \
    --mean_values=128 \
    --std_dev_values=128 \
    --post_training_quantize

But I get this error:

java.lang.IllegalArgumentException: ByteBuffer is not a valid flatbuffer model

I think the problem is in the input parameters (of tflite_convert or export_model.py), or maybe you changed the model's architecture. Could you please help me?

deeplabv3_mnv2_pascal_train_aug_2018_01_29.tar.gz - original model
deeplab_models.zip - models and export_model.py
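
A quick first check for this error, as a sketch using TensorFlow's Python API and the file name from the command above: a valid .tflite file carries the FlatBuffer file identifier "TFL3" at byte offset 4, and the Python interpreter rejects a malformed buffer the same way the Android ByteBuffer check does.

import tensorflow as tf

path = "mobilenet_v2_deeplab_v3_513.tflite"

# A valid TFLite FlatBuffer has the file identifier "TFL3" at byte offset 4.
with open(path, "rb") as f:
    header = f.read(8)
print("flatbuffer identifier:", header[4:8])  # expect b'TFL3'

# Stronger check: the Python interpreter raises on a malformed buffer,
# mirroring "ByteBuffer is not a valid flatbuffer model" on Android.
interpreter = tf.lite.Interpreter(model_path=path)
interpreter.allocate_tensors()
print(interpreter.get_input_details())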

@tantara
Owner

tantara commented Sep 2, 2019

@PotekhinRoman I haven't used mixed types like these:

  --inference_input_type=QUANTIZED_UINT8 \
  --inference_type=FLOAT \

Try setting inference_input_type and inference_type to the same type!
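
As a sketch, here is the command above rewritten through the TF 1.x Python converter with both types set to the quantized one. It assumes the .pb came from quantization-aware training, so fake-quant min/max ranges are already recorded in the graph:

import tensorflow as tf

# Same conversion as the failing tflite_convert command, but with matching
# quantized types for the input and for inference.
converter = tf.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file="mobilenet_v2_deeplab_v3_513.pb",
    input_arrays=["ImageTensor"],
    output_arrays=["SemanticPredictions"],
    input_shapes={"ImageTensor": [1, 513, 513, 3]})
converter.inference_type = tf.uint8        # was FLOAT in the failing command
converter.inference_input_type = tf.uint8
converter.quantized_input_stats = {"ImageTensor": (128.0, 128.0)}  # (mean, std)
with open("mobilenet_v2_deeplab_v3_513.tflite", "wb") as f:
    f.write(converter.convert())

With a graph that was not quantization-aware trained, this fully quantized path will complain about missing min/max ranges unless converter.default_ranges_stats is also set.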

@Roopesh-Nallakshyam

Roopesh-Nallakshyam commented Sep 23, 2019

tflite_convert --output_file=/home/roopesh/Desktop/projects/intuision/DeepLab/deeplab_camvid_lowlight_quant_new_2.tflite \
    --graph_def_file=/home/roopesh/Desktop/projects/intuision/DeepLab/deeplab_camvid_lowlight_quant_new.pb \
    --inference_input_type=QUANTIZED_UINT8 \
    --inference_type=QUANTIZED_UINT8 \
    --input_arrays=ImageTensor \
    --output_arrays=SemanticPredictions \
    --mean_values=128 \
    --std_dev_values=128 \
    --input_shapes=1,320,320,3 \
    --default_ranges_min=0 --default_ranges_max=255

@tantara I set both inference types to the same value, as you said above, but the model still fails to convert to tflite with the following message:

Traceback (most recent call last):
File "/home/roopesh/venv3_tf_01/bin/tflite_convert", line 10, in
sys.exit(main())
File "/home/roopesh/venv3_tf_01/lib/python3.6/site-packages/tensorflow/lite/python/tflite_convert.py", line 442, in main
app.run(main=run_main, argv=sys.argv[:1])
File "/home/roopesh/venv3_tf_01/lib/python3.6/site-packages/tensorflow/python/platform/app.py", line 125, in run
_sys.exit(main(argv))
File "/home/roopesh/venv3_tf_01/lib/python3.6/site-packages/tensorflow/lite/python/tflite_convert.py", line 438, in run_main
_convert_model(tflite_flags)
File "/home/roopesh/venv3_tf_01/lib/python3.6/site-packages/tensorflow/lite/python/tflite_convert.py", line 191, in _convert_model
output_data = converter.convert()
File "/home/roopesh/venv3_tf_01/lib/python3.6/site-packages/tensorflow/lite/python/lite.py", line 455, in convert
**converter_kwargs)
File "/home/roopesh/venv3_tf_01/lib/python3.6/site-packages/tensorflow/lite/python/convert.py", line 442, in toco_convert_impl
input_data.SerializeToString())
File "/home/roopesh/venv3_tf_01/lib/python3.6/site-packages/tensorflow/lite/python/convert.py", line 205, in toco_convert_protos
"TOCO failed. See console for info.\n%s\n%s\n" % (stdout, stderr))
tensorflow.lite.python.convert.ConverterError: TOCO failed. See console for info.
2019-09-23 12:47:50.891492: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before Removing unused ops: 803 operators, 1213 arrays (0 quantized)
2019-09-23 12:47:50.910663: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] After Removing unused ops pass 1: 791 operators, 1191 arrays (0 quantized)
2019-09-23 12:47:50.932456: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before general graph transformations: 791 operators, 1191 arrays (0 quantized)
2019-09-23 12:47:50.947810: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] After general graph transformations pass 1: 141 operators, 341 arrays (0 quantized)
2019-09-23 12:47:50.950008: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] After general graph transformations pass 2: 141 operators, 341 arrays (0 quantized)
2019-09-23 12:47:50.952030: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] After general graph transformations pass 3: 129 operators, 320 arrays (0 quantized)
2019-09-23 12:47:50.953945: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before pre-quantization graph transformations: 129 operators, 320 arrays (0 quantized)
2019-09-23 12:47:50.955456: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before default min-max range propagation graph transformations: 129 operators, 320 arrays (0 quantized)
2019-09-23 12:47:50.956873: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] After default min-max range propagation graph transformations pass 1: 129 operators, 320 arrays (0 quantized)
2019-09-23 12:47:50.958550: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before quantization graph transformations: 129 operators, 320 arrays (0 quantized)
2019-09-23 12:47:50.958556: F tensorflow/lite/toco/graph_transformations/quantize.cc:491] Unimplemented: this graph contains an operator of type Cast for which the quantized form is not yet implemented. Sorry, and patches welcome (that's a relatively fun patch to write, mostly providing the actual quantized arithmetic code for this op).
Aborted (core dumped)

Please respond, thanks!
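
The op TOCO rejects here is the Cast in DeepLab's pre/post-processing: ImageTensor is uint8 and is cast to float before the network, and the prediction path casts again around ArgMax, and TOCO has no quantized kernel for Cast. A commonly reported workaround (see tensorflow/tensorflow#23747) is to cut the graph inside the network so the Cast ops are excluded. A sketch; the node names sub_7 and ArgMax come from the public MobileNetV2 DeepLab export graph and must be verified against your own .pb, for example with Netron:

import tensorflow as tf

# Convert only the network between the preprocessing and postprocessing Cast
# ops. "sub_7" is the mean/std-normalized input; "ArgMax" is the raw label map.
converter = tf.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file="deeplab_camvid_lowlight_quant_new.pb",
    input_arrays=["sub_7"],
    output_arrays=["ArgMax"],
    input_shapes={"sub_7": [1, 320, 320, 3]})
converter.inference_type = tf.uint8
converter.inference_input_type = tf.uint8
converter.quantized_input_stats = {"sub_7": (128.0, 128.0)}  # (mean, std)
# Only needed where activations lack recorded min/max ranges:
converter.default_ranges_stats = (0, 6)
with open("deeplab_camvid_lowlight_quant_new_2.tflite", "wb") as f:
    f.write(converter.convert())

The application then has to apply the input preprocessing and the final resize itself, since those stages are no longer inside the model.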

@Roopesh-Nallakshyam

@PotekhinRoman Did you find a solution for this error?
java.lang.IllegalArgumentException: ByteBuffer is not a valid flatbuffer model

@rapotekhin

@tantara, @Roopesh-Nallakshyam, so far I have not been able to solve this problem; I plan to return to looking for a solution in a month.
My guess is that we are not converting the model to .pb format correctly. By analogy with mobilenet_ssd, we may need a dedicated export script to produce a tflite-compatible .pb from the original model (https://github.com/tensorflow/models/blob/master/research/object_detection/export_tflite_ssd_graph.py).
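
Until then, one way to test that hypothesis is to inspect what the exported .pb actually contains before adjusting converter flags. A sketch, using the file name from the attachments above:

import tensorflow as tf

# List node names to pick tflite-friendly cut points instead of guessing
# input/output arrays. Placeholders are candidate inputs; Cast nodes mark the
# pre/post-processing that TOCO cannot quantize; ArgMax is the usual
# tflite-friendly output.
graph_def = tf.compat.v1.GraphDef()
with tf.io.gfile.GFile("mobilenet_v2_deeplab_v3_513.pb", "rb") as f:
    graph_def.ParseFromString(f.read())
for node in graph_def.node:
    if node.op in ("Placeholder", "Cast", "ArgMax"):
        print(node.op, node.name)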
