
convert model from ONNX #343

Open
braincxx opened this issue Oct 17, 2024 · 1 comment
Comments

braincxx commented Oct 17, 2024

I'm trying to convert a model (CNN+LSTM) from ONNX to RKNN for the RK3588.

My code:
import numpy as np
from rknn.api import RKNN

# 5-D input: (batch, 7 channels, 3, 608, 184)
shape = (1, 7, 3, 608, 184)
img_means = (np.array((0.19007764876619865, 0.15170388157131237, 0.10659445665650864)) * 255).tolist()
img_stds = (np.array((0.2610784009469139, 0.25729316928935814, 0.25163823815039915)) * 255).tolist()

rknn = RKNN(verbose=True)
rknn.config(mean_values=img_means + [0, 0, 0, 0], std_values=img_stds + [0, 0, 0, 0], target_platform='rk3588')

# ret = rknn.load_pytorch(model=path2torch_converted_model, input_size_list=[list(shape)])
ret = rknn.load_onnx(model=path2onnx_model, input_size_list=[list(shape)])
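For reference, the normalization lists this snippet hands to rknn.config can be reproduced with plain NumPy (no RKNN install needed). Note that the four appended std entries are 0, and that the list length 7 matches the second dimension of the 5-D input shape:

```python
import numpy as np

# Rebuild the lists exactly as the snippet above does.
img_means = (np.array((0.19007764876619865, 0.15170388157131237,
                       0.10659445665650864)) * 255).tolist()
img_stds = (np.array((0.2610784009469139, 0.25729316928935814,
                      0.25163823815039915)) * 255).tolist()

shape = (1, 7, 3, 608, 184)
mean_values = img_means + [0, 0, 0, 0]  # 3 image channels + 4 padded entries
std_values = img_stds + [0, 0, 0, 0]    # the 4 padded stds are zero

print(len(mean_values), len(std_values), shape[1])  # 7 7 7
print(std_values[3:])                               # [0, 0, 0, 0]
```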

My output:
I rknn-toolkit2 version: 2.2.0
I Loading : 100%|█████████████████████████████████████████████████| 26/26 [00:00<00:00, 1054.44it/s]
D base_optimize ...
D base_optimize done.
D
D fold_constant ...
/root/miniconda3/envs/rknn/lib/python3.10/site-packages/rknn/api/rknn.py:192: RuntimeWarning: divide by zero encountered in divide
return self.rknn_base.build(do_quantization=do_quantization, dataset=dataset, expand_batch_size=rknn_batch_size)
/root/miniconda3/envs/rknn/lib/python3.10/site-packages/rknn/api/rknn.py:192: RuntimeWarning: invalid value encountered in divide
return self.rknn_base.build(do_quantization=do_quantization, dataset=dataset, expand_batch_size=rknn_batch_size)
D fold_constant done.
D fold_constant remove nodes = ['/rnn/Expand_3', '/rnn/Concat_3', 'Unsqueeze_104', '/rnn/Gather_3', '/rnn/Shape_3', '/rnn/Expand_2', '/rnn/Concat_2', 'Unsqueeze_95', '/rnn/Gather_2', '/rnn/Shape_2', '/rnn/Expand_1', '/rnn/Concat_1', 'Unsqueeze_81', '/rnn/Gather_1', '/rnn/Shape_1', '/rnn/Expand', '/rnn/Concat', 'Unsqueeze_72', '/rnn/Gather', '/rnn/Shape']
D Fixed the shape information of some tensor!
D
D correct_ops ...
D correct_ops done.
D
D fuse_ops ...
D fuse_ops results:
D fuse_reshape_transpose: remove node = ['/rnn/Transpose']
D squeeze_to_4d_slice: remove node = [], add node = ['input_rs', '/Slice_output_0-rs']
D squeeze_to_4d_slice: remove node = [], add node = ['input_rs#1', '/Slice_1_output_0-rs']
D squeeze_to_4d_concat: remove node = [], add node = ['/Slice_output_0_rs', '/Slice_1_output_0_rs', '/Concat_output_0-rs']
D convert_squeeze_to_reshape: remove node = ['/rnn/Squeeze'], add node = ['/rnn/Squeeze_2rs']
D convert_squeeze_to_reshape: remove node = ['/rnn/Squeeze_1'], add node = ['/rnn/Squeeze_1_2rs']
D unsqueeze_to_4d_transpose: remove node = [], add node = ['/rnn/Squeeze_1_output_0_rs', '/rnn/Transpose_1_output_0-rs']
D convert_matmul_to_exmatmul: remove node = ['/linear/MatMul'], add node = ['/rnn/Transpose_1_output_0_tp', '/rnn/Transpose_1_output_0_tp_rs', '/linear/MatMul', '/linear/MatMul_output_0_mm_tp', '/linear/MatMul_output_0_mm_tp_rs']
D unsqueeze_to_4d_add: remove node = [], add node = ['/linear/MatMul_output_0_rs', 'output-rs']
D fuse_lstm_transpose_reshape: remove node = ['/rnn/Squeeze_2rs', '/rnn/LSTM'], add node = ['/rnn/LSTM']
D fuse_lstm_transpose_reshape: remove node = ['/rnn/Squeeze_1_2rs', '/rnn/LSTM_1'], add node = ['/rnn/LSTM_1']
D unsqueeze_to_4d_transpose: remove node = [], add node = ['/rnn/Transpose_1_output_0_rs', '/rnn/Transpose_1_output_0_tp-rs']
D input_align_4D_add: remove node = ['/linear/Add'], add node = ['/linear/Add']
D bypass_two_reshape: remove node = ['/Slice_output_0_rs', '/Slice_output_0-rs', '/Slice_1_output_0_rs', '/Slice_1_output_0-rs', '/Reshape', '/Concat_output_0-rs']
D fuse_reshape_transpose: remove node = ['/rnn/Transpose_1']
D fuse_two_reshape: remove node = ['/rnn/Transpose_1_output_0-rs']
D bypass_two_reshape: remove node = ['/rnn/Transpose_1_output_0_tp_rs', '/rnn/Transpose_1_output_0_tp-rs', '/linear/MatMul_output_0_rs', '/linear/MatMul_output_0_mm_tp_rs']
D fuse_two_reshape: remove node = ['/rnn/Squeeze_1_output_0_rs']
D swap_transpose_add: remove node = ['/linear/MatMul_output_0_mm_tp', '/linear/Add'], add node = ['/linear/Add', '/linear/MatMul_output_0_mm_tp']
D fuse_exmatmul_add: remove node = ['/linear/Add', '/linear/MatMul'], add node = ['/linear/MatMul']
D convert_exmatmul_to_conv: remove node = ['/linear/MatMul'], add node = ['/linear/MatMul']
D fold_constant ...
D fold_constant done.
D fuse_ops done.
D
D sparse_weight ...
D sparse_weight done.
D
I rknn building ...
I RKNN: [17:16:46.556] compress = 0, conv_eltwise_activation_fuse = 1, global_fuse = 1, multi-core-model-mode = 7, output_optimize = 1, layout_match = 1, enable_argb_group = 0, pipeline_fuse = 0, enable_flash_attention = 0
I RKNN: librknnc version: 2.2.0 (c195366594@2024-09-14T12:24:14)
D RKNN: [17:16:47.381] RKNN is invoked
W RKNN: [17:16:48.557] Model initializer tensor data is empty, name: empty_placeholder_0
D RKNN: [17:16:48.559] >>>>>> start: rknn::RKNNExtractCustomOpAttrs
D RKNN: [17:16:48.559] <<<<<<<< end: rknn::RKNNExtractCustomOpAttrs
D RKNN: [17:16:48.559] >>>>>> start: rknn::RKNNSetOpTargetPass
D RKNN: [17:16:48.559] <<<<<<<< end: rknn::RKNNSetOpTargetPass
D RKNN: [17:16:48.559] >>>>>> start: rknn::RKNNBindNorm
D RKNN: [17:16:48.559] <<<<<<<< end: rknn::RKNNBindNorm
D RKNN: [17:16:48.559] >>>>>> start: rknn::RKNNEliminateQATDataConvert
D RKNN: [17:16:48.559] <<<<<<<< end: rknn::RKNNEliminateQATDataConvert
D RKNN: [17:16:48.559] >>>>>> start: rknn::RKNNTileGroupConv
D RKNN: [17:16:48.559] <<<<<<<< end: rknn::RKNNTileGroupConv
D RKNN: [17:16:48.559] >>>>>> start: rknn::RKNNAddConvBias
D RKNN: [17:16:48.559] <<<<<<<< end: rknn::RKNNAddConvBias
D RKNN: [17:16:48.559] >>>>>> start: rknn::RKNNTileChannel
D RKNN: [17:16:48.559] <<<<<<<< end: rknn::RKNNTileChannel
D RKNN: [17:16:48.559] >>>>>> start: rknn::RKNNPerChannelPrep
D RKNN: [17:16:48.559] <<<<<<<< end: rknn::RKNNPerChannelPrep
D RKNN: [17:16:48.559] >>>>>> start: rknn::RKNNBnQuant
D RKNN: [17:16:48.559] <<<<<<<< end: rknn::RKNNBnQuant
D RKNN: [17:16:48.559] >>>>>> start: rknn::RKNNFuseOptimizerPass
D RKNN: [17:16:48.560] <<<<<<<< end: rknn::RKNNFuseOptimizerPass
D RKNN: [17:16:48.560] >>>>>> start: rknn::RKNNTurnAutoPad
D RKNN: [17:16:48.560] <<<<<<<< end: rknn::RKNNTurnAutoPad
D RKNN: [17:16:48.560] >>>>>> start: rknn::RKNNInitRNNConst
D RKNN: [17:16:48.562] <<<<<<<< end: rknn::RKNNInitRNNConst
D RKNN: [17:16:48.562] >>>>>> start: rknn::RKNNInitCastConst
D RKNN: [17:16:48.562] <<<<<<<< end: rknn::RKNNInitCastConst
D RKNN: [17:16:48.562] >>>>>> start: rknn::RKNNMultiSurfacePass
D RKNN: [17:16:48.562] <<<<<<<< end: rknn::RKNNMultiSurfacePass
D RKNN: [17:16:48.562] >>>>>> start: rknn::RKNNReplaceConstantTensorPass
D RKNN: [17:16:48.562] <<<<<<<< end: rknn::RKNNReplaceConstantTensorPass
D RKNN: [17:16:48.562] >>>>>> start: rknn::RKNNSubgraphManager
D RKNN: [17:16:48.562] <<<<<<<< end: rknn::RKNNSubgraphManager
D RKNN: [17:16:48.562] >>>>>> start: OpEmit
D RKNN: [17:16:48.564] <<<<<<<< end: OpEmit
D RKNN: [17:16:48.564] >>>>>> start: rknn::RKNNAddFirstConv
D RKNN: [17:16:48.564] <<<<<<<< end: rknn::RKNNAddFirstConv
D RKNN: [17:16:48.564] >>>>>> start: rknn::RKNNTilingPass
D RKNN: [17:16:48.570] <<<<<<<< end: rknn::RKNNTilingPass
D RKNN: [17:16:48.570] >>>>>> start: rknn::RKNNLayoutMatchPass
D RKNN: [17:16:48.570] <<<<<<<< end: rknn::RKNNLayoutMatchPass
D RKNN: [17:16:48.570] >>>>>> start: rknn::RKNNAddSecondaryNode
D RKNN: [17:16:48.570] <<<<<<<< end: rknn::RKNNAddSecondaryNode
D RKNN: [17:16:48.570] >>>>>> start: rknn::RKNNAllocateConvCachePass
D RKNN: [17:16:48.570] <<<<<<<< end: rknn::RKNNAllocateConvCachePass
D RKNN: [17:16:48.570] >>>>>> start: OpEmit
E RKNN: [17:16:52.762] buffer overflow!!!
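One observation (an assumption, not confirmed by the toolkit authors): the two RuntimeWarning lines about divide by zero during fold_constant are consistent with the zero entries in std_values above, since per-channel input normalization computes (x - mean) / std. A minimal NumPy sketch of that effect, using approximate stand-in values for the means and stds:

```python
import numpy as np

# Hypothetical per-channel normalization as (x - mean) / std.
x = np.full(7, 128.0)                            # one sample value per channel
mean = np.array([48.5, 38.7, 27.2, 0, 0, 0, 0])  # approx. img_means + zeros
std = np.array([66.6, 65.6, 64.2, 0, 0, 0, 0])   # zero stds, as in the issue

with np.errstate(divide='ignore', invalid='ignore'):
    y = (x - mean) / std
print(np.isinf(y[3:]).all())              # True: a zero std produces inf

std_fixed = np.where(std == 0, 1.0, std)  # use 1 for pass-through channels
y_fixed = (x - mean) / std_fixed
print(np.isfinite(y_fixed).all())         # True
```

Whether that also explains the final "buffer overflow!!!" during OpEmit is unclear; the warnings at least suggest trying std_values of 1 (not 0) for the four padded channels.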

@Eurekaer

@braincxx Hi, I recently encountered the same issue as you. May I ask if your input is also 5-dimensional? Have you found a solution to this problem? When my input is [1,3,16,640,640], I ran into the following issue:
ValueError: Traceback (most recent call last):
File "rknn/api/rknn_log.py", line 309, in rknn.api.rknn_log.error_catch_decorator.error_catch_wrapper
File "rknn/api/rknn_base.py", line 1945, in rknn.api.rknn_base.RKNNBase.build
File "rknn/api/rknn_base.py", line 176, in rknn.api.rknn_base.RKNNBase._quantize
File "rknn/api/quantizer.py", line 1397, in rknn.api.quantizer.Quantizer.run
File "rknn/api/quantizer.py", line 899, in rknn.api.quantizer.Quantizer._get_layer_range
File "rknn/api/rknn_utils.py", line 274, in rknn.api.rknn_utils.get_input_img
File "rknn/api/rknn_log.py", line 95, in rknn.api.rknn_log.RKNNLog.e
ValueError: The height_width of r_shape [16, 640, 640] is invalid!

Have you encountered a similar issue? I'm looking forward to your response.
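Not a confirmed fix, but a common workaround for this class of error ("The height_width of r_shape [16, 640, 640] is invalid!") is to keep the graph input 4-D, folding the extra (e.g. temporal) dimension into batch or channels before export. A NumPy sketch of the two usual foldings for a [1, 3, 16, 640, 640] input:

```python
import numpy as np

# The 5-D input from the comment: (batch, channels, frames, height, width).
x = np.zeros((1, 3, 16, 640, 640), dtype=np.float32)

# Option 1: fold the 16 frames into the batch dimension -> (16, 3, 640, 640).
as_batch = np.transpose(x, (0, 2, 1, 3, 4)).reshape(16, 3, 640, 640)

# Option 2: fold the frames into channels -> (1, 48, 640, 640).
as_channels = x.reshape(1, 3 * 16, 640, 640)

print(as_batch.shape, as_channels.shape)
```

The same reshape would have to be applied inside the model (or at ONNX export time), not just to the calibration images, so that the exported graph itself takes a 4-D input.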
