[Conv-TasNet] Facing issue in converting Conv-TasNet model #447
It must be a bug in PyTorch. To begin with, you cannot run inference with the standard onnxruntime. Have you checked?

```
sit4onnx -if conv_tasnet.onnx -oep cpu

INFO: file: conv_tasnet.onnx
INFO: providers: ['CPUExecutionProvider']
INFO: input_name.1: onnx::Unsqueeze_0 shape: [256, 20] dtype: float32
Traceback (most recent call last):
  File "/home/b920405/.local/bin/sit4onnx", line 8, in <module>
    sys.exit(main())
  File "/home/b920405/.local/lib/python3.10/site-packages/sit4onnx/onnx_inference_test.py", line 506, in main
    final_results = inference(
  File "/home/b920405/.local/lib/python3.10/site-packages/sit4onnx/onnx_inference_test.py", line 357, in inference
    results = onnx_session.run(
  File "/home/b920405/.local/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 217, in run
    return self._sess.run(output_names, input_feed, run_options)
onnxruntime.capi.onnxruntime_pybind11_state.InvalidArgument: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Non-zero status code returned while running ScatterElements node.
Name: '/decoder/ScatterElements'
Status Message: Indices and updates must have the same rank
```
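For context, the ONNX `ScatterElements` operator requires `data`, `indices`, and `updates` to all have the same rank. A minimal PyTorch sketch (illustrative, not taken from the issue) reproduces the analogous constraint through `torch.Tensor.scatter`, which the exporter typically lowers to `ScatterElements`:

```python
import torch

data = torch.zeros(3, 4)
index = torch.tensor([[0, 1, 2, 0]])      # rank 2, same as data
updates = torch.tensor([1., 2., 3., 4.])  # rank 1: one rank lower than data

# scatter requires data, index, and src to share the same rank, mirroring
# the ScatterElements error reported by onnxruntime above.
try:
    data.scatter(0, index, updates)
except RuntimeError as e:
    print(e)
```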
Thanks for the info.
Subject: Facing NaN values in the TFLite output

```python
import onnx
import torch
import torch.nn as nn
import torch.nn.init as init
from conv_tasnet import ConvTasNet
import onnxruntime
import numpy as np


def convertoOnnx():
    device = torch.device('cpu')
    # Create the Conv-TasNet model using the model definition above.
    model = ConvTasNet(256, 20, 256, 512, 3, 8, 4,
                       2, norm_type="gLN", causal=0,
                       mask_nonlinear="softmax")
    model.eval()
    model.to(device)
    dummy_input = torch.ones(256, 20).to(device)

    # Export the model
    torch.onnx.export(model,                      # model being run
                      dummy_input,                # model input (or a tuple for multiple inputs)
                      "conv_tasnet_39nx_7.onnx",  # where to save the model (file or file-like object)
                      export_params=True,         # store the trained parameter weights inside the model file
                      opset_version=12,           # the ONNX opset version to export to
                      do_constant_folding=True,   # whether to execute constant folding for optimization
                      input_names=['input'],      # the model's input names
                      output_names=['output'],    # the model's output names
                      # dynamic_axes={'input': {0: 'batch_size'},    # variable-length axes
                      #               'output': {0: 'batch_size'}}
                      )

    ort_session = onnxruntime.InferenceSession("conv_tasnet_39nx_7.onnx")

    def to_numpy(tensor):
        return tensor.detach().cpu().numpy() if tensor.requires_grad else tensor.cpu().numpy()

    x = torch.randn(256, 20).to(device)
    torch_out = model(x)

    # Compute the ONNX Runtime output prediction
    ort_inputs = {ort_session.get_inputs()[0].name: to_numpy(x)}
    ort_outs = ort_session.run(None, ort_inputs)

    # Compare ONNX Runtime and PyTorch results
    np.testing.assert_allclose(to_numpy(torch_out), ort_outs[0], rtol=1e-03, atol=1e-05)
    print("torch out")
    print(to_numpy(torch_out))
    print("onnx out")
    print(ort_outs[0])
    print("Exported model has been tested with ONNXRuntime, and the result looks good!")


def main():
    convertoOnnx()


if __name__ == "__main__":
    main()
```

I used the ONNX model produced by the script above for the conversion with onnx2tf, and I also have the TFLite model after conversion. There were no errors during the conversion (a typical invocation is sketched just below, followed by the script I used to run the TFLite model).
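For illustration, a basic onnx2tf invocation (a sketch, assuming the default `saved_model` output directory, which matches the TFLite path in the script below) would be:

```
onnx2tf -i conv_tasnet_39nx_7.onnx
```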
The script below runs the converted TFLite model:

```python
import tensorflow as tf
import time

# Load the TFLite model
interpreter = tf.lite.Interpreter(model_path="./saved_model/conv_tasnet_39nx_7_float32.tflite")
interpreter.allocate_tensors()

tensor_shape = (256, 20)
input_data = {'waveform': tf.ones(tensor_shape, dtype=tf.float32)}

# Inspect the expected input shape
input_details = interpreter.get_input_details()
input_shape = input_details[0]['shape']
print(input_shape)

# Run inference
interpreter.set_tensor(input_details[0]['index'], input_data["waveform"])
separate_time = time.time()
interpreter.invoke()
print("Done! {:.3f} s".format(time.time() - separate_time))

# Collect all output tensors
output_details = interpreter.get_output_details()
output_data = []
for output_detail in output_details:
    output_data.append(interpreter.get_tensor(output_detail['index']))
print(output_data)
```
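As a quick sanity check (illustrative, not part of the original report), the reported NaNs can be quantified directly on `output_data` from the script above:

```python
import numpy as np

# Measure what fraction of each TFLite output tensor is NaN
# (`output_data` comes from the script above).
for i, out in enumerate(output_data):
    out = np.asarray(out)
    print(f"output {i}: shape={out.shape}, NaN ratio={np.isnan(out).mean():.3f}")
```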
Do as much research on your own as you can. I am not a handyman.

```python
import tensorflow as tf
import time
import numpy as np

np.random.seed(0)

# Load the TFLite model
interpreter = tf.lite.Interpreter(model_path="./saved_model/conv_tasnet_39nx_7_float32.tflite")
interpreter.allocate_tensors()

tensor_shape = (256, 20)
input_data = {'waveform': np.random.randn(*tensor_shape).astype(np.float32)}

# Inspect the expected input shape
input_details = interpreter.get_input_details()
input_shape = input_details[0]['shape']
print(input_shape)

# Run inference
interpreter.set_tensor(input_details[0]['index'], input_data["waveform"])
separate_time = time.time()
interpreter.invoke()
print("Done! {:.3f} s".format(time.time() - separate_time))

# Collect all output tensors
output_details = interpreter.get_output_details()
output_data = []
for output_detail in output_details:
    output_data.append(interpreter.get_tensor(output_detail['index']))
print(output_data)
```
There must be a bug in the PyTorch model itself.
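onnx2tf also provides a per-op accuracy validation option that compares every layer's output between the source ONNX model and the converted TensorFlow model, which helps localize where NaNs first appear; a sketch of a typical invocation (the flag is from onnx2tf's documented options):

```
onnx2tf -i conv_tasnet_39nx_7.onnx -cotof
```

Note that this validation runs the ONNX model under onnxruntime, which the `sit4onnx` test above shows currently fails on the `ScatterElements` node.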
I decided to add to the README the numerous workarounds I have implemented in this tool, as they do not seem to be understood by most engineers.
Thanks.
Issue Type: Others
OS: Linux
onnx2tf version number: 1.15.7
onnx version number: 1.14.0
onnxruntime version number: 1.15.1
onnxsim (onnx_simplifier) version number: 0.4.33
tensorflow version number: 2.13.0
Download URL for ONNX: https://drive.google.com/file/d/189UHTs9OvDiNBc6BiZDG5zde2zSyTe6E/view?usp=sharing
Parameter Replacement JSON:
Description:
Error: (shown above)
Command that I tried:
ONNX conversion script: (shown above)