Where is the inference part? (推理部分在哪里?) #6

Open
SwEngine opened this issue May 10, 2023 · 2 comments

SwEngine commented May 10, 2023

Where is inference.cu? I ran the code and the output is shown below. When I look at the output images, there are no bounding boxes, so it does not appear to do inference. I therefore think the inference part was not completed. If you have completed it, could you share it?

/content/yolov7-face-tensorrt
mkdir: cannot create directory ‘build’: File exists
/content/yolov7-face-tensorrt/build
-- Configuring done
-- Generating done
-- Build files have been written to: /content/yolov7-face-tensorrt/build
[ 14%] Building CXX object CMakeFiles/yolov7_face.dir/yolov7-face.cpp.o
[ 28%] Linking CXX executable yolov7_face
[ 71%] Built target yolov7_face
[100%] Built target onnx2trt
/root
/
/content/yolov7-face-tensorrt
/content/yolov7-face-tensorrt/build
[05/10/2023-10:01:31] [W] [TRT] CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage and speed up TensorRT initialization. See "Lazy Loading" section of CUDA documentation https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#lazy-loading
[05/10/2023-10:01:31] [W] [TRT] onnx2trt_utils.cpp:374: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[05/10/2023-10:01:31] [W] [TRT] onnx2trt_utils.cpp:400: One or more weights outside the range of INT32 was clamped
[05/10/2023-10:11:39] [W] [TRT] TensorRT encountered issues when converting weights between types and that could affect accuracy.
[05/10/2023-10:11:39] [W] [TRT] If this is not the desired behavior, please modify the weights or retrain with regularization to adjust the magnitude of the weights.
[05/10/2023-10:11:39] [W] [TRT] Check verbose logs for the list of affected weights.
[05/10/2023-10:11:39] [W] [TRT] - 76 weights are affected by this issue: Detected subnormal FP16 values.
convert seccuss!
/content/yolov7-face-tensorrt
/content
/content/yolov7-face-tensorrt
/content/yolov7-face-tensorrt/build
[05/10/2023-10:33:32] [W] [TRT] CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage and speed up TensorRT initialization. See "Lazy Loading" section of CUDA documentation https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#lazy-loading
../images/22_Picnic_Picnic_22_290.jpg   time_gap: 6.8426ms 
../images/22_Picnic_Picnic_22_152.jpg   time_gap: 6.5877ms 
../images/22_Picnic_Picnic_22_140.jpg   time_gap: 6.52203ms 
../images/22_Picnic_Picnic_22_208.jpg   time_gap: 6.57545ms 
../images/test.jpg   time_gap: 7.97395ms 
../images/22_Picnic_Picnic_22_308.jpg   time_gap: 6.65615ms 
../images/22_Picnic_Picnic_22_241.jpg   time_gap: 6.53058ms 
../images/22_Picnic_Picnic_22_10.jpg   time_gap: 6.50339ms 
../images/22_Picnic_Picnic_22_36.jpg   time_gap: 6.5744ms 
../images/22_Picnic_Picnic_22_928.jpg   time_gap: 6.5951ms 
averageTime:6.72431ms
/content/yolov7-face-tensorrt
/content
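For reference, the post-processing step the reporter expected (drawing each detected face box onto the image before saving it) typically looks like the sketch below. This is a minimal, hypothetical example using OpenCV; the `Detection` struct and `drawAndSave` function are illustrative names and are not the repository's actual code.

```cpp
#include <opencv2/opencv.hpp>
#include <string>
#include <vector>

// Hypothetical detection record; the real repository may use a different layout.
struct Detection {
    cv::Rect box;     // face bounding box in image coordinates
    float confidence; // detection score after NMS
};

// Draw each surviving detection onto the image and save the annotated result.
void drawAndSave(cv::Mat& image,
                 const std::vector<Detection>& detections,
                 const std::string& outPath) {
    for (const auto& det : detections) {
        cv::rectangle(image, det.box, cv::Scalar(0, 255, 0), 2);
        cv::putText(image, cv::format("%.2f", det.confidence), det.box.tl(),
                    cv::FONT_HERSHEY_SIMPLEX, 0.5, cv::Scalar(0, 255, 0), 1);
    }
    cv::imwrite(outPath, image);
}

int main() {
    // Use one of the images from the log above; the box here is a dummy value.
    cv::Mat img = cv::imread("../images/test.jpg");
    if (img.empty()) return 1;
    std::vector<Detection> dets = { { cv::Rect(100, 100, 80, 80), 0.9f } };
    drawAndSave(img, dets, "test_out.jpg");
    return 0;
}
```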
SwEngine (Author) commented:

Where is inference.cu? I ran the code and the output is as shown above. When I look at the output images, there are no bounding boxes; it does not do inference. So I think you did not complete the inference part. If you have completed it, could you share it?

SwEngine changed the title from "Where is the inference part?" to "Where is the inference part? (推理部分在哪里?)" on May 12, 2023
FLAWLESSJade commented:

You may need to look at the yolov7-face.cpp file, which plays the role of inference.
After the CMake build, the executable file yolov7_face will be generated.
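
For context, a TensorRT-based detector of this kind usually deserializes the engine and runs inference roughly as in the sketch below. This is a minimal example against the TensorRT 8.x C++ API, not the actual contents of yolov7-face.cpp; the engine file name, binding order, and tensor sizes are assumptions, and pre/post-processing is omitted.

```cpp
#include <NvInfer.h>
#include <cuda_runtime_api.h>
#include <cstdio>
#include <fstream>
#include <iterator>
#include <vector>

// Minimal logger required by the TensorRT runtime.
class Logger : public nvinfer1::ILogger {
    void log(Severity severity, const char* msg) noexcept override {
        if (severity <= Severity::kWARNING) std::printf("[TRT] %s\n", msg);
    }
};

int main() {
    Logger logger;

    // Load the serialized engine produced by the onnx2trt step
    // ("yolov7-face.engine" is a hypothetical file name).
    std::ifstream file("yolov7-face.engine", std::ios::binary);
    std::vector<char> engineData((std::istreambuf_iterator<char>(file)),
                                 std::istreambuf_iterator<char>());

    auto* runtime = nvinfer1::createInferRuntime(logger);
    auto* engine  = runtime->deserializeCudaEngine(engineData.data(), engineData.size());
    auto* context = engine->createExecutionContext();

    // Hypothetical tensor sizes: a 1x3x640x640 input and a flat output buffer.
    const size_t inputCount  = 1 * 3 * 640 * 640;
    const size_t outputCount = 25200 * 16;      // depends on the exported head
    std::vector<float> hostInput(inputCount);   // fill with the preprocessed image
    std::vector<float> hostOutput(outputCount); // raw predictions land here

    // Binding 0 = input, binding 1 = output is assumed here.
    void* buffers[2];
    cudaMalloc(&buffers[0], inputCount * sizeof(float));
    cudaMalloc(&buffers[1], outputCount * sizeof(float));

    cudaStream_t stream;
    cudaStreamCreate(&stream);

    // Copy input to the device, run inference, copy the output back.
    cudaMemcpyAsync(buffers[0], hostInput.data(), inputCount * sizeof(float),
                    cudaMemcpyHostToDevice, stream);
    context->enqueueV2(buffers, stream, nullptr);
    cudaMemcpyAsync(hostOutput.data(), buffers[1], outputCount * sizeof(float),
                    cudaMemcpyDeviceToHost, stream);
    cudaStreamSynchronize(stream);

    // Decoding hostOutput into boxes/landmarks, applying NMS, and drawing the
    // results is the step that produces the annotated images.
    return 0;
}
```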
