
Error Code 4: Internal Error (Network must have at least one output) #160

alicera opened this issue Oct 28, 2022 · 3 comments
alicera commented Oct 28, 2022

https://github.com/linghu8812/tensorrt_inference/tree/master/project/yolov7#2export-onnx-model


work/AI/YOLOv7/tensorrt/tensorrt_inference/project/build# ../../bin/tensorrt_inference yolov7 ../../configs/yolov7/config.yaml ../../samples/detection_segmentation/bus.jpg 
read file error: ../configs/labels/coco.names
[10/28/2022-13:28:08] [I] [TRT] [MemUsageChange] Init CUDA: CPU +196, GPU +0, now: CPU 203, GPU 8098 (MiB)
[10/28/2022-13:28:09] [I] [TRT] [MemUsageChange] Init builder kernel library: CPU +6, GPU +2, now: CPU 228, GPU 8100 (MiB)
[10/28/2022-13:28:09] [I] [TRT] ----------------------------------------------------------------
[10/28/2022-13:28:09] [I] [TRT] Input filename:   ../weights/yolov7.onnx
[10/28/2022-13:28:09] [I] [TRT] ONNX IR version:  0.0.6
[10/28/2022-13:28:09] [I] [TRT] Opset version:    12
[10/28/2022-13:28:09] [I] [TRT] Producer name:    pytorch
[10/28/2022-13:28:09] [I] [TRT] Producer version: 1.10
[10/28/2022-13:28:09] [I] [TRT] Domain:           
[10/28/2022-13:28:09] [I] [TRT] Model version:    0
[10/28/2022-13:28:09] [I] [TRT] Doc string:       
[10/28/2022-13:28:09] [I] [TRT] ----------------------------------------------------------------
[10/28/2022-13:28:09] [W] [TRT] parsers/onnx/onnx2trt_utils.cpp:367: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[10/28/2022-13:28:09] [E] [TRT] [graphShapeAnalyzer.cpp::analyzeShapes::1294] Error Code 4: Miscellaneous (IElementWiseLayer Mul_322: broadcast dimensions must be conformable)
[10/28/2022-13:28:09] [E] [TRT] parsers/onnx/ModelImporter.cpp:773: While parsing node number 322 [Mul -> "528"]:
[10/28/2022-13:28:09] [E] [TRT] parsers/onnx/ModelImporter.cpp:774: --- Begin node ---
[10/28/2022-13:28:09] [E] [TRT] parsers/onnx/ModelImporter.cpp:775: input: "525"
input: "657"
output: "528"
name: "Mul_322"
op_type: "Mul"

[10/28/2022-13:28:09] [E] [TRT] parsers/onnx/ModelImporter.cpp:776: --- End node ---
[10/28/2022-13:28:09] [E] [TRT] parsers/onnx/ModelImporter.cpp:778: ERROR: parsers/onnx/ModelImporter.cpp:180 In function parseGraph:
[6] Invalid Node - Mul_322
[graphShapeAnalyzer.cpp::analyzeShapes::1294] Error Code 4: Miscellaneous (IElementWiseLayer Mul_322: broadcast dimensions must be conformable)
[10/28/2022-13:28:09] [E] Failure while parsing ONNX file
start building engine
[10/28/2022-13:28:09] [E] [TRT] 4: [network.cpp::validate::2671] Error Code 4: Internal Error (Network must have at least one output)
build engine done
tensorrt_inference: /work/AI/YOLOv7/tensorrt/tensorrt_inference/code/src/model.cpp:46: void Model::OnnxToTRTModel(): Assertion `engine' failed.
Aborted (core dumped)
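The root failure in the log is the parse error on `Mul_322`: TensorRT reports "broadcast dimensions must be conformable", meaning the two inputs to that elementwise multiply have shapes that cannot be broadcast together, so the graph is rejected, the network ends up with no outputs, and the later "Network must have at least one output" error and the failed `engine` assertion are just downstream symptoms. ONNX elementwise ops follow NumPy-style broadcasting: shapes are compared right-to-left, and each dimension pair must be equal or one of them must be 1. A minimal sketch of that rule (the example shapes are illustrative YOLO-style grid/anchor shapes, not read from this particular model):

```python
def broadcastable(a, b):
    """Check NumPy/ONNX-style broadcast conformability of two shapes.

    Shapes are compared right-to-left; each pair of dimensions must be
    equal, or one of them must be 1. Missing leading dimensions are
    treated as 1, so only the overlapping tail needs checking.
    """
    for x, y in zip(reversed(a), reversed(b)):
        if x != y and x != 1 and y != 1:
            return False
    return True


# A feature-map tensor times a per-anchor constant broadcasts fine:
print(broadcastable((1, 3, 80, 80, 2), (1, 3, 1, 1, 2)))   # True
# But a constant baked in for a different grid size does not,
# which is the kind of mismatch TensorRT rejects as non-conformable:
print(broadcastable((1, 3, 80, 80, 2), (1, 3, 40, 40, 2)))  # False
```

In YOLO exports this mismatch typically appears when the ONNX file was exported with baked-in grid constants for one input resolution and the model is then built or fed at another, which is why re-exporting the ONNX model (or using a known-good one) resolves it.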

@nuriakiin

I have the same issue. Did you solve it?

@linghu8812
Owner

> I have the same issue. Did you solve it?

@nuriakiin try to download ONNX models from here:
https://pan.baidu.com/s/1Ff_SA9Q66DUnZjSipPa74Q, code: opkp

@nuriakiin

@linghu8812 thank you. I solved it by exporting with the original repo; I had been exporting with the Export.py in your forked repo.
