Where is the inference.cu? I ran the code and the outputs are shown below. When I look at the output images, there are no bounding boxes, so inference does not seem to happen. I therefore thought the inference part was not completed. If it is completed, could you share it?
/content/yolov7-face-tensorrt
mkdir: cannot create directory ‘build’: File exists
/content/yolov7-face-tensorrt/build
-- Configuring done
-- Generating done
-- Build files have been written to: /content/yolov7-face-tensorrt/build
[ 14%] Building CXX object CMakeFiles/yolov7_face.dir/yolov7-face.cpp.o
[ 28%] Linking CXX executable yolov7_face
[ 71%] Built target yolov7_face
[100%] Built target onnx2trt
/root
/
/content/yolov7-face-tensorrt
/content/yolov7-face-tensorrt/build
[05/10/2023-10:01:31] [W] [TRT] CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage and speed up TensorRT initialization. See "Lazy Loading" section of CUDA documentation https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#lazy-loading
[05/10/2023-10:01:31] [W] [TRT] onnx2trt_utils.cpp:374: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[05/10/2023-10:01:31] [W] [TRT] onnx2trt_utils.cpp:400: One or more weights outside the range of INT32 was clamped
[05/10/2023-10:11:39] [W] [TRT] TensorRT encountered issues when converting weights between types and that could affect accuracy.
[05/10/2023-10:11:39] [W] [TRT] If this is not the desired behavior, please modify the weights or retrain with regularization to adjust the magnitude of the weights.
[05/10/2023-10:11:39] [W] [TRT] Check verbose logs for the list of affected weights.
[05/10/2023-10:11:39] [W] [TRT] - 76 weights are affected by this issue: Detected subnormal FP16 values.
convert seccuss!
/content/yolov7-face-tensorrt
/content
/content/yolov7-face-tensorrt
/content/yolov7-face-tensorrt/build
[05/10/2023-10:33:32] [W] [TRT] CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage and speed up TensorRT initialization. See "Lazy Loading" section of CUDA documentation https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#lazy-loading
../images/22_Picnic_Picnic_22_290.jpg time_gap: 6.8426ms
../images/22_Picnic_Picnic_22_152.jpg time_gap: 6.5877ms
../images/22_Picnic_Picnic_22_140.jpg time_gap: 6.52203ms
../images/22_Picnic_Picnic_22_208.jpg time_gap: 6.57545ms
../images/test.jpg time_gap: 7.97395ms
../images/22_Picnic_Picnic_22_308.jpg time_gap: 6.65615ms
../images/22_Picnic_Picnic_22_241.jpg time_gap: 6.53058ms
../images/22_Picnic_Picnic_22_10.jpg time_gap: 6.50339ms
../images/22_Picnic_Picnic_22_36.jpg time_gap: 6.5744ms
../images/22_Picnic_Picnic_22_928.jpg time_gap: 6.5951ms
averageTime:6.72431ms
/content/yolov7-face-tensorrt
/content
You may need to look at the yolov7-face.cpp file, which plays the role of inference.
After the CMake build, the executable yolov7_face is generated.
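For reference, the post-processing/drawing stage of such a detector usually looks like the sketch below; if the confidence threshold is too high, or the engine outputs are decoded incorrectly, the output images will contain no boxes even though the timing log looks normal. This is only an illustrative assumption of how yolov7-face.cpp might handle it; the `FaceBox` struct, function name, and threshold are hypothetical, not the repository's actual identifiers.

```cpp
// Hypothetical sketch of the drawing step in a TensorRT YOLOv7-face pipeline.
// Assumes detections were already decoded from the engine output into pixel
// coordinates; none of these names come from the repository itself.
#include <opencv2/opencv.hpp>
#include <string>
#include <vector>

struct FaceBox {
    float x, y, w, h;   // top-left corner and size, in pixels
    float conf;         // detection confidence score
};

// Draw every box whose confidence clears the threshold and save the
// annotated image next to the input with an "_out" suffix.
void drawDetections(const std::string& imagePath,
                    const std::vector<FaceBox>& boxes,
                    float confThreshold = 0.25f) {
    cv::Mat img = cv::imread(imagePath);
    if (img.empty()) return;

    for (const FaceBox& b : boxes) {
        if (b.conf < confThreshold) continue;  // low-score boxes are skipped
        cv::rectangle(img,
                      cv::Rect(static_cast<int>(b.x), static_cast<int>(b.y),
                               static_cast<int>(b.w), static_cast<int>(b.h)),
                      cv::Scalar(0, 255, 0), 2);
    }

    cv::imwrite(imagePath.substr(0, imagePath.find_last_of('.')) + "_out.jpg",
                img);
}
```

If the output images stay empty, checking the confidence threshold and the output-decoding logic in yolov7-face.cpp is a reasonable first step.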