
ValueError: Could not infer attribute explicit_paddings type from empty iterator #2262

Closed
trung-nguyen-code opened this issue Oct 28, 2023 · 14 comments
Labels
bug An unexpected problem or unintended behavior pending on user response Waiting for more information or validation from user

Comments

@trung-nguyen-code

trung-nguyen-code commented Oct 28, 2023

Describe the bug

Hi, I got this error when trying to convert my TensorFlow model to ONNX. I tried every opset value, but none of them worked.
Urgent
I need this resolved as soon as possible, before November 8th, so please help me.
To Reproduce

I'm building a model to classify emotions with Conv2D layers and the Sequential API.
Here is the link to the Colab notebook:
https://colab.research.google.com/drive/1VdcubIhxcjQU79qFgqy8AObnp6dwZ59h?usp=sharing
Note that you need to upload a kaggle.json file to download the dataset and reproduce the results. Here:
kaggle.json

Screenshots

Screenshot from 2023-10-28 14-43-27
Screenshot from 2023-10-28 14-43-17

@trung-nguyen-code trung-nguyen-code added the bug An unexpected problem or unintended behavior label Oct 28, 2023
@aponte411

aponte411 commented Oct 28, 2023

I am also experiencing the same issue:

(tf2onnx310) bambam@LAPTOP-I50CAR6D:/mnt/c/ASG/PersonalizedSpeechEnhancement$ python -m tf2onnx.convert --saved-model ../ModelWeights/tf2_v23_SavedModelCPU/ --output ../ModelWeights/tf2_v23_onnx/model.onnx
2023-10-27 02:17:34.154023: I tensorflow/core/util/port.cc:111] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable TF_ENABLE_ONEDNN_OPTS=0.
2023-10-27 02:17:34.156314: I tensorflow/tsl/cuda/cudart_stub.cc:28] Could not find cuda drivers on your machine, GPU will not be used.
2023-10-27 02:17:34.186523: E tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc:9342] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2023-10-27 02:17:34.186581: E tensorflow/compiler/xla/stream_executor/cuda/cuda_fft.cc:609] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2023-10-27 02:17:34.186606: E tensorflow/compiler/xla/stream_executor/cuda/cuda_blas.cc:1518] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2023-10-27 02:17:34.192854: I tensorflow/tsl/cuda/cudart_stub.cc:28] Could not find cuda drivers on your machine, GPU will not be used.
2023-10-27 02:17:34.193132: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-10-27 02:17:34.932402: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
/home/bambam/.pyenv/versions/3.10.1/lib/python3.10/runpy.py:126: RuntimeWarning: 'tf2onnx.convert' found in sys.modules after import of package 'tf2onnx', but prior to execution of 'tf2onnx.convert'; this may result in unpredictable behaviour
warn(RuntimeWarning(msg))
2023-10-27 02:17:35,560 - WARNING - '--tag' not specified for saved_model. Using --tag serve
2023-10-27 02:17:41,762 - INFO - Signatures found in model: [serving_default].
2023-10-27 02:17:41,762 - WARNING - '--signature_def' not specified, using first signature: serving_default
2023-10-27 02:17:41,763 - INFO - Output names: ['out']
2023-10-27 02:17:41,763 - WARNING - Could not search for non-variable resources. Concrete function internal representation may have changed.
2023-10-27 02:17:41.780672: I tensorflow/core/grappler/devices.cc:66] Number of eligible GPUs (core count >= 8, compute capability >= 0.0): 0
2023-10-27 02:17:41.780901: I tensorflow/core/grappler/clusters/single_machine.cc:361] Starting new session
2023-10-27 02:17:44.850728: I tensorflow/core/grappler/devices.cc:66] Number of eligible GPUs (core count >= 8, compute capability >= 0.0): 0
2023-10-27 02:17:44.851146: I tensorflow/core/grappler/clusters/single_machine.cc:361] Starting new session
2023-10-27 02:17:46,566 - INFO - Using tensorflow=2.14.0, onnx=1.15.0, tf2onnx=1.15.1/37820d
2023-10-27 02:17:46,568 - INFO - Using opset <onnx, 15>
2023-10-27 02:17:46,717 - ERROR - pass1 convert failed for name: "StatefulPartitionedCall/model_2/enc00/conv2d_108/Conv2D"
op: "Conv2D"
input: "StatefulPartitionedCall/model_2/enc00/Pad"
input: "StatefulPartitionedCall/model_2/enc00/conv2d_108/Conv2D/ReadVariableOp"
attr {
key: "T"
value {
type: DT_FLOAT
}
}
attr {
key: "data_format"
value {
s: "NHWC"
}
}
attr {
key: "dilations"
value {
list {
i: 1
i: 1
i: 1
i: 1
}
}
}
attr {
key: "explicit_paddings"
value {
list {
}
}
}
attr {
key: "padding"
value {
s: "VALID"
}
}
attr {
key: "strides"
value {
list {
i: 1
i: 1
i: 1
i: 1
}
}
}
attr {
key: "use_cudnn_on_gpu"
value {
b: true
}
}
, ex=Could not infer attribute explicit_paddings type from empty iterator
Traceback (most recent call last):
File "/home/bambam/.pyenv/versions/3.10.1/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/home/bambam/.pyenv/versions/3.10.1/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/home/bambam/.pyenv/versions/tf2onnx310/lib/python3.10/site-packages/tf2onnx/convert.py", line 714, in
main()
File "/home/bambam/.pyenv/versions/tf2onnx310/lib/python3.10/site-packages/tf2onnx/convert.py", line 273, in main
model_proto, _ = _convert_common(
File "/home/bambam/.pyenv/versions/tf2onnx310/lib/python3.10/site-packages/tf2onnx/convert.py", line 168, in _convert_common
g = process_tf_graph(tf_graph, const_node_values=const_node_values,
File "/home/bambam/.pyenv/versions/tf2onnx310/lib/python3.10/site-packages/tf2onnx/tfonnx.py", line 459, in process_tf_graph
main_g, subgraphs = graphs_from_tf(tf_graph, input_names, output_names, shape_override, const_node_values,
File "/home/bambam/.pyenv/versions/tf2onnx310/lib/python3.10/site-packages/tf2onnx/tfonnx.py", line 474, in graphs_from_tf
ordered_func = resolve_functions(tf_graph)
File "/home/bambam/.pyenv/versions/tf2onnx310/lib/python3.10/site-packages/tf2onnx/tf_loader.py", line 764, in resolve_functions
_, _, _, _, _, functions = tflist_to_onnx(tf_graph, {})
File "/home/bambam/.pyenv/versions/tf2onnx310/lib/python3.10/site-packages/tf2onnx/tf_utils.py", line 462, in tflist_to_onnx
onnx_node = helper.make_node(node_type, input_names, output_names, name=node.name, **attr)
File "/home/bambam/.pyenv/versions/tf2onnx310/lib/python3.10/site-packages/onnx/helper.py", line 164, in make_node
node.attribute.extend(
File "/home/bambam/.pyenv/versions/tf2onnx310/lib/python3.10/site-packages/onnx/helper.py", line 165, in
make_attribute(key, value)
File "/home/bambam/.pyenv/versions/tf2onnx310/lib/python3.10/site-packages/onnx/helper.py", line 876, in make_attribute
raise ValueError(
ValueError: Could not infer attribute explicit_paddings type from empty iterator

@Sawaiz8

Sawaiz8 commented Oct 28, 2023

Getting the same error:

Traceback (most recent call last):
File "/home/sawaiz/anaconda3/envs/object/lib/python3.9/runpy.py", line 197, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/home/sawaiz/anaconda3/envs/object/lib/python3.9/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/home/sawaiz/anaconda3/envs/object/lib/python3.9/site-packages/tf2onnx/convert.py", line 714, in
main()
File "/home/sawaiz/anaconda3/envs/object/lib/python3.9/site-packages/tf2onnx/convert.py", line 273, in main
model_proto, _ = _convert_common(
File "/home/sawaiz/anaconda3/envs/object/lib/python3.9/site-packages/tf2onnx/convert.py", line 168, in _convert_common
g = process_tf_graph(tf_graph, const_node_values=const_node_values,
File "/home/sawaiz/anaconda3/envs/object/lib/python3.9/site-packages/tf2onnx/tfonnx.py", line 459, in process_tf_graph
main_g, subgraphs = graphs_from_tf(tf_graph, input_names, output_names, shape_override, const_node_values,
File "/home/sawaiz/anaconda3/envs/object/lib/python3.9/site-packages/tf2onnx/tfonnx.py", line 474, in graphs_from_tf
ordered_func = resolve_functions(tf_graph)
File "/home/sawaiz/anaconda3/envs/object/lib/python3.9/site-packages/tf2onnx/tf_loader.py", line 764, in resolve_functions
_, _, _, _, _, functions = tflist_to_onnx(tf_graph, {})
File "/home/sawaiz/anaconda3/envs/object/lib/python3.9/site-packages/tf2onnx/tf_utils.py", line 462, in tflist_to_onnx
onnx_node = helper.make_node(node_type, input_names, output_names, name=node.name, **attr)
File "/home/sawaiz/anaconda3/envs/object/lib/python3.9/site-packages/onnx/helper.py", line 164, in make_node
node.attribute.extend(
File "/home/sawaiz/anaconda3/envs/object/lib/python3.9/site-packages/onnx/helper.py", line 165, in
make_attribute(key, value)
File "/home/sawaiz/anaconda3/envs/object/lib/python3.9/site-packages/onnx/helper.py", line 876, in make_attribute
raise ValueError(
ValueError: Could not infer attribute explicit_paddings type from empty iterator

Is this a version issue?

@hariharan-tech

  • I faced the same issue when I tried exporting the base pretrained (ImageNet weights) Xception model to the ONNX format for inference.
  • As mentioned in this repo's README, tf2onnx doesn't currently support TensorFlow 2.14, so I tried building and exporting the same simple model with TensorFlow 2.12, yet I faced the same issue.
  • Environment used:
    • Google Colab
    • Tried TF versions 2.14 and 2.12

@lrmillennium

Same issue. I tried to export an image classification model from Hugging Face to ONNX; I will try TensorFlow.js next.

2023-10-29 21:02:04.427028: E tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc:9342] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2023-10-29 21:02:04.427120: E tensorflow/compiler/xla/stream_executor/cuda/cuda_fft.cc:609] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2023-10-29 21:02:04.427187: E tensorflow/compiler/xla/stream_executor/cuda/cuda_blas.cc:1518] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2023-10-29 21:02:08.444768: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
Framework not specified. Using tf to export to ONNX.
All model checkpoint layers were used when initializing TFViTForImageClassification.

All the layers of TFViTForImageClassification were initialized from the model checkpoint at ericrong888/logo_classifier.
If your task is similar to the task the model of the checkpoint was trained on, you can already use TFViTForImageClassification for predictions without further training.
Automatic task detection to image-classification.
/usr/local/lib/python3.10/dist-packages/transformers/models/vit/feature_extraction_vit.py:28: FutureWarning: The class ViTFeatureExtractor is deprecated and will be removed in version 5 of Transformers. Please use ViTImageProcessor instead.
warnings.warn(
Using the export variant default. Available variants are:
- default: The default ONNX variant.
input_shapes argument is not supported by the Tensorflow ONNX export and will be ignored.
Using framework TensorFlow: 2.14.0
Could not search for non-variable resources. Concrete function internal representation may have changed.
pass1 convert failed for name: "tf_vi_t_for_image_classification/vit/embeddings/patch_embeddings/projection/Conv2D"
op: "Conv2D"
input: "tf_vi_t_for_image_classification/vit/embeddings/patch_embeddings/transpose"
input: "tf_vi_t_for_image_classification/vit/embeddings/patch_embeddings/projection/Conv2D/ReadVariableOp"
attr {
key: "T"
value {
type: DT_FLOAT
}
}
attr {
key: "data_format"
value {
s: "NHWC"
}
}
attr {
key: "dilations"
value {
list {
i: 1
i: 1
i: 1
i: 1
}
}
}
attr {
key: "explicit_paddings"
value {
list {
}
}
}
attr {
key: "padding"
value {
s: "VALID"
}
}
attr {
key: "strides"
value {
list {
i: 1
i: 16
i: 16
i: 1
}
}
}
attr {
key: "use_cudnn_on_gpu"
value {
b: true
}
}
, ex=Could not infer attribute explicit_paddings type from empty iterator
Traceback (most recent call last):
File "/usr/local/bin/optimum-cli", line 8, in
sys.exit(main())
File "/usr/local/lib/python3.10/dist-packages/optimum/commands/optimum_cli.py", line 163, in main
service.run()
File "/usr/local/lib/python3.10/dist-packages/optimum/commands/export/onnx.py", line 232, in run
main_export(
File "/usr/local/lib/python3.10/dist-packages/optimum/exporters/onnx/main.py", line 486, in main_export
_, onnx_outputs = export_models(
File "/usr/local/lib/python3.10/dist-packages/optimum/exporters/onnx/convert.py", line 752, in export_models
export(
File "/usr/local/lib/python3.10/dist-packages/optimum/exporters/onnx/convert.py", line 875, in export
export_output = export_tensorflow(model, config, opset, output)
File "/usr/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/usr/local/lib/python3.10/dist-packages/optimum/exporters/onnx/convert.py", line 685, in export_tensorflow
onnx_model, _ = tf2onnx.convert.from_keras(model, input_signature, opset=opset)
File "/usr/local/lib/python3.10/dist-packages/tf2onnx/convert.py", line 502, in from_keras
model_proto, external_tensor_storage = _convert_common(
File "/usr/local/lib/python3.10/dist-packages/tf2onnx/convert.py", line 168, in _convert_common
g = process_tf_graph(tf_graph, const_node_values=const_node_values,
File "/usr/local/lib/python3.10/dist-packages/tf2onnx/tfonnx.py", line 459, in process_tf_graph
main_g, subgraphs = graphs_from_tf(tf_graph, input_names, output_names, shape_override, const_node_values,
File "/usr/local/lib/python3.10/dist-packages/tf2onnx/tfonnx.py", line 474, in graphs_from_tf
ordered_func = resolve_functions(tf_graph)
File "/usr/local/lib/python3.10/dist-packages/tf2onnx/tf_loader.py", line 764, in resolve_functions
_, _, _, _, _, functions = tflist_to_onnx(tf_graph, {})
File "/usr/local/lib/python3.10/dist-packages/tf2onnx/tf_utils.py", line 462, in tflist_to_onnx
onnx_node = helper.make_node(node_type, input_names, output_names, name=node.name, **attr)
File "/usr/local/lib/python3.10/dist-packages/onnx/helper.py", line 164, in make_node
node.attribute.extend(
File "/usr/local/lib/python3.10/dist-packages/onnx/helper.py", line 165, in
make_attribute(key, value)
File "/usr/local/lib/python3.10/dist-packages/onnx/helper.py", line 876, in make_attribute
raise ValueError(
ValueError: Could not infer attribute explicit_paddings type from empty iterator

@fatcat-z
Collaborator

Please don't use ONNX 1.15 and try a lower version.

@fatcat-z
Collaborator

  • I faced the same issue when I tried exporting the base pretrained (ImageNet weights) Xception model to the ONNX format for inference.

  • As mentioned in this repo's README, tf2onnx doesn't currently support TensorFlow 2.14, so I tried building and exporting the same simple model with TensorFlow 2.12, yet I faced the same issue.

  • Environment used:

    • Google Colab
    • Tried TF versions 2.14 and 2.12

Please don't use ONNX 1.15 and try a lower version.

@fatcat-z fatcat-z added the pending on user response Waiting for more information or validation from user label Oct 30, 2023
@felixdittrich92

@fatcat-z Would it be possible to release a patch with the empty iterator fix for onnx 1.15 ? #2252

@trung-nguyen-code
Author

  • I faced the same issue when I tried exporting the base pretrained (ImageNet weights) Xception model to the ONNX format for inference.

  • As mentioned in this repo's README, tf2onnx doesn't currently support TensorFlow 2.14, so I tried building and exporting the same simple model with TensorFlow 2.12, yet I faced the same issue.

  • Environment used:

    • Google Colab
    • Tried TF versions 2.14 and 2.12

Please don't use ONNX 1.15 and try a lower version.

How? Please give me the command to install a lower version.

@SergheiDinu

SergheiDinu commented Oct 30, 2023

  • I faced the same issue when I tried exporting the base pretrained (ImageNet weights) Xception model to the ONNX format for inference.

  • As mentioned in this repo's README, tf2onnx doesn't currently support TensorFlow 2.14, so I tried building and exporting the same simple model with TensorFlow 2.12, yet I faced the same issue.

  • Environment used:

    • Google Colab
    • Tried TF versions 2.14 and 2.12

Please don't use ONNX 1.15 and try a lower version.

How? Please give me the command to install a lower version.

pip install onnx==1.14.1 worked

@Jensssen

Installing tf2onnx via pip will pull in onnx 1.15.0; you can verify that by running pip list | grep onnx.
To downgrade your onnx dependency, just run pip install onnx==1.14.1.
With onnx 1.14.1, it should work.

@hariharan-tech

  • I faced the same issue when I tried exporting the base pretrained (ImageNet weights) Xception model to the ONNX format for inference.

  • As mentioned in this repo's README, tf2onnx doesn't currently support TensorFlow 2.14, so I tried building and exporting the same simple model with TensorFlow 2.12, yet I faced the same issue.

  • Environment used:

    • Google Colab
    • Tried TF versions 2.14 and 2.12

Please don't use ONNX 1.15 and try a lower version.

Yes, I used TensorFlow 2.11 with tf2onnx 1.14.0 and the export works!
Thanks!

@trung-nguyen-code
Author

Thanks all! I did it!

@raulcarlomagno

Installing tf2onnx via pip will install onnx 1.15.0. You can verify that by running pip list | grep onnx. In order to downgrade the version of your onnx dependency, just run pip install onnx==1.14.1. With onnx 1.14.1, it should work.

It does work.

@fatcat-z
Collaborator

fatcat-z commented Dec 18, 2023

@fatcat-z Would it be possible to release a patch with the empty iterator fix for onnx 1.15 ? #2252

The PR will be merged soon; a new patch release will be prepared to officially support ONNX 1.15.


10 participants