[Bug] Failure to convert ssdlite_mobiledet_cpu_320x320_coco_2020_05_19 #9461

Closed
CarlosML opened this issue Dec 28, 2021 · 3 comments
CarlosML commented Dec 28, 2021

I am trying to convert "ssdlite_mobiledet_cpu_320x320_coco_2020_05_19" (http://download.tensorflow.org/models/object_detection/ssdlite_mobiledet_cpu_320x320_coco_2020_05_19.tar.gz) using openvino/workbench:2021.4.2 and while doing so I get this error:

Model Optimizer arguments:
Common parameters:
- Path to the Input Model: 	/home/workbench/.workbench/models/9/original/tflite_graph.pb
- Path for generated IR: 	/home/workbench/.workbench/models/10/original
- IR output name: 	ssdlite_mobiledet_cpu_320x320_coco_2020_05_19
- Log level: 	ERROR
- Batch: 	Not specified, inherited from the model
- Input layers: 	normalized_input_image_tensor
- Output layers: 	Not specified, inherited from the model
- Input shapes: 	Not specified, inherited from the model
- Mean values: 	Not specified
- Scale values: 	Not specified
- Scale factor: 	Not specified
- Precision of IR: 	FP16
- Enable fusing: 	True
- Enable grouped convolutions fusing: 	True
- Move mean values to preprocess section: 	None
- Reverse input channels: 	True
TensorFlow specific parameters:
- Input model in text protobuf format: 	False
- Path to model dump for TensorBoard: 	None
- List of shared libraries with TensorFlow custom layers implementation: 	None
- Update the configuration file with input/output node names: 	None
- Use configuration file used to generate the model with Object Detection API: 	/home/workbench/.workbench/models/9/original/pipeline.config
- Use the config file: 	None
- Inference Engine found in: 	/opt/intel/openvino/python/python3.8/openvino
Inference Engine version: 	2021.4.2-3974-e2a469a3450-releases/2021/4
Model Optimizer version: 	2021.4.2-3974-e2a469a3450-releases/2021/4
2021-12-28 02:17:58.553474: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: :/opt/intel/openvino_2021.4.752/python/python3/cv2/../../../opencv/bin:/opt/intel/openvino/opencv/lib:/opt/intel/openvino/deployment_tools/ngraph/lib:/opt/intel/openvino/deployment_tools/inference_engine/external/tbb/lib::/opt/intel/openvino/deployment_tools/inference_engine/external/hddl/lib:/opt/intel/openvino/deployment_tools/inference_engine/external/omp/lib:/opt/intel/openvino/deployment_tools/inference_engine/external/gna/lib:/opt/intel/openvino/deployment_tools/inference_engine/external/mkltiny_lnx/lib:/opt/intel/openvino/deployment_tools/inference_engine/lib/intel64
2021-12-28 02:17:58.553489: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
Progress: [                    ]   0.29% done
[... progress output truncated ...]
Progress: [........            ]  43.80% done
2021-12-28 02:18:03.465949: I tensorflow/compiler/jit/xla_cpu_device.cc:41] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-12-28 02:18:03.466078: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: :/opt/intel/openvino_2021.4.752/python/python3/cv2/../../../opencv/bin:/opt/intel/openvino/opencv/lib:/opt/intel/openvino/deployment_tools/ngraph/lib:/opt/intel/openvino/deployment_tools/inference_engine/external/tbb/lib::/opt/intel/openvino/deployment_tools/inference_engine/external/hddl/lib:/opt/intel/openvino/deployment_tools/inference_engine/external/omp/lib:/opt/intel/openvino/deployment_tools/inference_engine/external/gna/lib:/opt/intel/openvino/deployment_tools/inference_engine/external/mkltiny_lnx/lib:/opt/intel/openvino/deployment_tools/inference_engine/lib/intel64
2021-12-28 02:18:03.466084: W tensorflow/stream_executor/cuda/cuda_driver.cc:326] failed call to cuInit: UNKNOWN ERROR (303)
2021-12-28 02:18:03.466093: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (workbench): /proc/driver/nvidia/version does not exist
2021-12-28 02:18:03.466194: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2021-12-28 02:18:03.467292: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
/home/workbench/.workbench/environments/1/lib/python3.8/site-packages/tensorflow/python/autograph/impl/api.py:22: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
import imp
Cannot infer shapes or values for node "TFLite_Detection_PostProcess".
Op type not registered 'TFLite_Detection_PostProcess' in binary running on workbench. Make sure the Op and Kernel are registered in the binary running in this process. Note that if you are loading a saved graph which used ops from tf.contrib, accessing (e.g.) `tf.contrib.resampler` should be done before importing the graph, as contrib ops are lazily registered when the module is first accessed.
It can happen due to bug in custom shape infer function <function tf_native_tf_node_infer at 0x7f4a0e412dc0>.
Or because the node inputs have incorrect values/shapes.
Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
Run Model Optimizer with --log_level=DEBUG for more information.
Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.middle.PartialInfer.PartialInfer'>): Stopped shape/value propagation at "TFLite_Detection_PostProcess" node. 
For more information please refer to Model Optimizer FAQ, question #38. (https://docs.openvinotoolkit.org/latest/openvino_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html?question=38#question-38)

I have already tried all of the conversion configuration files starting with "ssd", and the same thing happens.

Thank you very much for any indication.

maxlytkin commented Dec 28, 2021

@CarlosML
The final operation in the model is "TFLite_Detection_PostProcess", which is not currently supported. I managed to convert the rest of the model with the following command line outside of the DL Workbench environment. Please try this as well.

python3 /opt/intel/openvino_2021.4.752/deployment_tools/model_optimizer/mo.py \
    --framework=tf \
    --data_type=FP32 \
    --output_dir=. \
    --model_name=ssdlite_mobiledet \
    --reverse_input_channels \
    --input_shape='[1,320,320,3]' \
    --tensorflow_object_detection_api_pipeline_config=./ssdlite_mobiledet_cpu_320x320_coco_2020_05_19/pipeline.config \
    --input_model=./ssdlite_mobiledet_cpu_320x320_coco_2020_05_19/tflite_graph.pb \
    --output=raw_outputs/box_encodings,convert_scores

Model Optimizer arguments:
Common parameters:
- Path to the Input Model: /opt/intel/openvino_2021.4.752/deployment_tools/open_model_zoo/tools/downloader/./ssdlite_mobiledet_cpu_320x320_coco_2020_05_19/tflite_graph.pb
- Path for generated IR: /opt/intel/openvino_2021.4.752/deployment_tools/open_model_zoo/tools/downloader/.
- IR output name: ssdlite_mobiledet
- Log level: ERROR
- Batch: Not specified, inherited from the model
- Input layers: Not specified, inherited from the model
- Output layers: raw_outputs/box_encodings,convert_scores
- Input shapes: [1,320,320,3]
- Mean values: Not specified
- Scale values: Not specified
- Scale factor: Not specified
- Precision of IR: FP32
- Enable fusing: True
- Enable grouped convolutions fusing: True
- Move mean values to preprocess section: None
- Reverse input channels: True
TensorFlow specific parameters:
- Input model in text protobuf format: False
- Path to model dump for TensorBoard: None
- List of shared libraries with TensorFlow custom layers implementation: None
- Update the configuration file with input/output node names: None
- Use configuration file used to generate the model with Object Detection API: /opt/intel/openvino_2021.4.752/deployment_tools/open_model_zoo/tools/downloader/./ssdlite_mobiledet_cpu_320x320_coco_2020_05_19/pipeline.config
- Use the config file: None
- Inference Engine found in: /opt/intel/openvino_2021.4.752/python/python3.6/openvino
Inference Engine version: 2021.4.2-3974-e2a469a3450-releases/2021/4
Model Optimizer version: 2021.4.2-3974-e2a469a3450-releases/2021/4
2021-12-28 03:44:56.861888: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: :/opt/intel/openvino_2021.4.752/deployment_tools/model_optimizer/mo/utils/../../../inference_engine/lib/intel64:/opt/intel/openvino_2021.4.752/deployment_tools/model_optimizer/mo/utils/../../../inference_engine/external/tbb/lib:/opt/intel/openvino_2021.4.752/deployment_tools/model_optimizer/mo/utils/../../../ngraph/lib
2021-12-28 03:44:56.861911: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
/home/pse/.local/lib/python3.6/site-packages/tensorflow/python/autograph/impl/api.py:22: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
import imp
[ SUCCESS ] Generated IR version 10 model.
[ SUCCESS ] XML file: /opt/intel/openvino_2021.4.752/deployment_tools/open_model_zoo/tools/downloader/ssdlite_mobiledet.xml
[ SUCCESS ] BIN file: /opt/intel/openvino_2021.4.752/deployment_tools/open_model_zoo/tools/downloader/ssdlite_mobiledet.bin
[ SUCCESS ] Total execution time: 29.74 seconds.
[ SUCCESS ] Memory consumed: 432 MB.
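
Note that because TFLite_Detection_PostProcess is cut off, the generated IR exposes only the raw box encodings and per-class scores, so anchor decoding and NMS have to be implemented on the application side. Below is a minimal sketch of loading the resulting IR with the 2021.4 Inference Engine Python API; the input image name is a placeholder, and the exact output tensor names in the IR are not verified against this model.

import cv2
import numpy as np
from openvino.inference_engine import IECore

# Load the IR produced by the Model Optimizer command above.
ie = IECore()
net = ie.read_network(model="ssdlite_mobiledet.xml", weights="ssdlite_mobiledet.bin")
exec_net = ie.load_network(network=net, device_name="CPU")

input_blob = next(iter(net.input_info))

# "frame.jpg" is a placeholder; resize to the 320x320 network input and
# reorder from HWC to NCHW, which is the layout the IR expects.
image = cv2.imread("frame.jpg")
blob = cv2.resize(image, (320, 320)).transpose(2, 0, 1)[np.newaxis, ...].astype(np.float32)

results = exec_net.infer({input_blob: blob})
for name, tensor in results.items():
    # Expected outputs correspond to the layers requested via --output
    # (raw box encodings and class scores). Decoding them against the SSD
    # anchors and running NMS must be done manually, since
    # TFLite_Detection_PostProcess was removed from the graph.
    print(name, tensor.shape)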

avitial commented Jan 5, 2022

Closing this; I hope the previous response was sufficient to help you proceed with the model conversion. Feel free to reopen and ask any additional questions related to this topic.

vishniakov-nikolai commented

Hello @CarlosML!
I converted your model with the parameters that @maxlytkin provided, and the conversion finished successfully. I have attached a screenshot of the DL Workbench conversion form parameters below.
[Screenshot: FireShot Capture 002 - DL Workbench - localhost]
