Error while parsing quantized model using detect.py/val.py #9950
👋 Hello! Thanks for asking about YOLOv5 🚀 benchmarks. YOLOv5 inference is officially supported in 11 formats, and all formats are benchmarked for identical accuracy and to compare speed every 24 hours by the YOLOv5 CI. 💡 ProTip: Export to ONNX or OpenVINO for up to 3x CPU speedup. See CPU Benchmarks.
Benchmarks
Benchmarks below run on a Colab Pro with the YOLOv5 tutorial notebook. To reproduce: python utils/benchmarks.py --weights yolov5s.pt --imgsz 640 --device 0
[Benchmark tables omitted: Colab Pro V100 GPU results and Colab Pro CPU results]
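As a concrete illustration of the ProTip above, exporting and then running the exported model might look like the following sketch (standard YOLOv5 export.py/detect.py/val.py entry points; image size, source, and data yaml are the repository defaults, so adjust them to your setup):

```bash
# Export yolov5s.pt to ONNX and OpenVINO (sketch using the standard YOLOv5 export script)
python export.py --weights yolov5s.pt --include onnx openvino --imgsz 640

# Run inference and validation with the exported ONNX model
python detect.py --weights yolov5s.onnx --source data/images --imgsz 640
python val.py --weights yolov5s.onnx --data data/coco128.yaml --imgsz 640
```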
Good luck 🍀 and let us know if you have any other questions!
👋 Hello, this issue has been automatically marked as stale because it has not had recent activity. Please note it will be closed if no further activity occurs.
Access additional YOLOv5 🚀 and Ultralytics ⚡ resources.
Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed! Thank you for your contributions to YOLOv5 🚀 and Vision AI ⭐!
Search before asking
Question
Hi @glenn-jocher,
I have tested my custom FP32 models with detect.py/val.py; the results were quite promising and there were no issues.
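For reference, a minimal sketch of typical FP32 runs with detect.py/val.py (the weights and data paths here are placeholders, not the actual files from this report):

```bash
# Sketch: FP32 inference and validation with custom-trained weights (placeholder paths)
python detect.py --weights runs/train/exp/weights/best.pt --source data/images --imgsz 640
python val.py --weights runs/train/exp/weights/best.pt --data custom_data.yaml --imgsz 640
```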
However, when I try to load the quantized model through detect.py/val.py, I get the following error:
=================================
File "detect.py", line 73, in mainfunc
model_LPD = DetectMultiBackend(weights_yolov5_LPD, device=device, dnn=dnn_yolov5, data=data_yolov5,
fp16=half_yolov5)
File "/yolov5/models/common.py", line 340, in init
model = attempt_load(weights if isinstance(weights, list) else w, device=device, inplace=True, fuse=fuse)
File "/yolov5/models/experimental.py", line 80, in attempt_load
ckpt = (ckpt.get('ema') or ckpt['model']).to(device).float() # FP32 model
AttributeError: 'collections.OrderedDict' object has no attribute 'to'
=================================
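For context, here is a minimal sketch of what appears to trigger this AttributeError, assuming the quantized checkpoint stores only a state_dict (an OrderedDict of tensors) rather than a pickled nn.Module; the file name and stand-in model below are hypothetical:

```python
# Sketch: why attempt_load fails when a checkpoint holds a state_dict instead of a model object
import torch
import torch.nn as nn

model = nn.Linear(10, 2)                                    # stand-in for a YOLOv5 model (hypothetical)
torch.save({'model': model.state_dict()}, 'quantized.pt')   # stores an OrderedDict of tensors, not an nn.Module

ckpt = torch.load('quantized.pt', map_location='cpu')
obj = ckpt.get('ema') or ckpt['model']                      # same lookup as models/experimental.attempt_load
print(type(obj))                                            # <class 'collections.OrderedDict'>
obj.to('cpu')                                               # AttributeError: 'collections.OrderedDict' object has no attribute 'to'
```

If this matches the failure, re-saving the checkpoint with the full model object (rather than only its state_dict) would let attempt_load call .to(device) on it as expected.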
Could you please advise how to resolve this and how to run inference with quantized models?
Thanks and regards
Additional
No response