[ParSeq] [Error] convert error mismatch after saved_model output complete #436
Cannot be reproduced. I think it is a problem specific to your environment. It looks like a bug in the runtime for OSX. I do not investigate environment-specific issues.

```
onnx2tf -i parseq-tiny.onnx -cotof -rtpo Erf -coion
```

```python
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="saved_model/parseq-tiny_float32.tflite")
tf_lite_model = interpreter.get_signature_runner()
inputs = {
    'input': np.ones([1, 32, 128, 3], dtype=np.float32),
}
tf_lite_output = tf_lite_model(**inputs)
print("[TFLite] Model Predictions:", tf_lite_output)
```
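Since the issue is about outputs diverging after conversion, a small helper along these lines can quantify how far the TFLite outputs drift from the ONNX Runtime outputs when both are fed the same `np.ones` input. This is a sketch, not part of the thread; the tolerance value is an assumption:

```python
import numpy as np

def max_abs_diff(expected: dict, actual: dict) -> dict:
    """Report the largest absolute difference per output tensor name."""
    return {
        key: float(np.max(np.abs(np.asarray(expected[key]) - np.asarray(actual[key]))))
        for key in expected
    }

def outputs_match(expected: dict, actual: dict, atol: float = 1e-4) -> bool:
    """True if every output tensor agrees within the tolerance (atol is an assumed value)."""
    return all(diff <= atol for diff in max_abs_diff(expected, actual).values())
```

Feeding the ONNX Runtime result dict as `expected` and the TFLite result dict as `actual` makes it easy to see whether the mismatch is a small numerical drift or a genuinely wrong tensor.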
I ran the command you posted on a fresh Ubuntu 20.04 with all the packages installed as described in the issue, and still got the exact same error. Can you share more about the package versions you use?
I see. I reproduced it. It appears to be a very specific error pattern.

```
docker run --rm -it \
  -v `pwd`:/workdir \
  -w /workdir \
  ghcr.io/pinto0309/onnx2tf:1.15.7
```

```
pip show tensorflow
Name: tensorflow
Version: 2.13.0
Summary: TensorFlow is an open source machine learning framework for everyone.
Home-page: https://www.tensorflow.org/
Author: Google Inc.
Author-email: [email protected]
License: Apache 2.0
Location: /usr/local/lib/python3.8/dist-packages
Requires: absl-py, astunparse, flatbuffers, gast, google-pasta, grpcio, h5py, keras, libclang, numpy, opt-einsum, packaging, protobuf, setuptools, six, tensorboard, tensorflow-estimator, tensorflow-io-gcs-filesystem, termcolor, typing-extensions, wrapt
Required-by:
```

```
onnx2tf -i parseq-tiny.onnx -cotof -rtpo Erf -coion
```
There seems to be a bug in TensorFlow v2.13.0. Maybe this is a bug that should be reported as an issue to the official TensorFlow repository.
Thank you for the prompt response. I was able to generate the model after downgrading TensorFlow. I apologize for not trying with Docker first; it's my third week at work and I am learning as much as I can about Docker.
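Since the thread pins the regression to TensorFlow 2.13.0, one way to fail fast before running the conversion is a small version check. This is a sketch of my own; treating 2.13.0 and anything newer as suspect is an assumption, since the thread only confirms that downgrading helped:

```python
def tf_version_ok(ver: str) -> bool:
    """Return True if this TensorFlow version predates the 2.13.0
    regression reported in this thread (cutoff is an assumption)."""
    major, minor, patch = (int(part) for part in ver.split(".")[:3])
    return (major, minor, patch) < (2, 13, 0)

# Typical use before calling onnx2tf:
# import tensorflow as tf
# assert tf_version_ok(tf.__version__), "downgrade TensorFlow first"
```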
Issue Type: Others
OS: OSX
onnx2tf version number: 1.15.4
onnx version number: 1.14.0
onnxruntime version number: 1.15.1
onnxsim (onnx_simplifier) version number: 0.4.33
tensorflow version number: 2.13.0
Download URL for ONNX: https://drive.google.com/file/d/1qC1EOT5hyiHSjVwFrYFh1318Dbu0ERNN/view?usp=sharing
Parameter Replacement JSON: N/A
Description
I was attempting to convert a transformer-based text recognition model from ONNX to a TensorFlow GraphDef and TFLite. The conversion completed (the mapping from the original ONNX operations to TensorFlow operations was generated), but I got a shape-mismatch error.

```
onnx2tf -i parseq-tiny.onnx
```
It appears the problem occurs at reshape_30 and reshape_100. I looked at the output during the conversion, but could only find an op using reshape_30 as an input, not the op that produced it.
The closest reshape I can find is the block producing reshape_28; reshape_30 only appears at the end of the table.
I checked reshape_100 and found the same issue.
3. How:
I tried various release versions, but this bug persisted. Because my code actually ran through the conversion step without error, it doesn't seem similar to any of the problems I saw in closed issues. I can't locate which module to debug because reshape_30 and reshape_100 don't seem to exist in the Netron visualization of my ONNX model.
4. Why:
I need this problem solved to incorporate this text recognition model into our product. The current model is in PyTorch, and the TorchScript/ONNX conversions went smoothly, but targeting the TensorFlow family won't be possible without a conversion from ONNX to TensorFlow.