Convert YOLOv5 to IR Model in OpenVINO #5533
Comments
@aseprohman what is an IR model? |
Hi, @aseprohman I've done the same work! How can I help? |
@glenn-jocher IR is short for Intermediate Representation, the OpenVINO model format for running inference on target devices. |
Hi @besmaGuesmi, can you share how you convert a .pt or ONNX file to an IR model? How do I pass parameters like input_shape, etc.? I trained the yolov5m.pt model with parameters like this: |
Hi @aseprohman, first you have to convert your model from PyTorch to the ONNX format, then to the IR format.
If it doesn't work, don't hesitate to send me an email ([email protected]) or ask here. |
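For reference, a minimal sketch of that two-step conversion (file names are placeholders; the Python conversion API assumes OpenVINO 2022.1 or newer, while older releases use the `mo`/`mo_onnx.py` command line shown later in this thread):

```python
# Step 1: export the PyTorch weights to ONNX with YOLOv5's own exporter, e.g.
#   python export.py --weights yolov5m.pt --include onnx --img 640
# Step 2: convert the ONNX file to OpenVINO IR (.xml topology + .bin weights).
from openvino.tools.mo import convert_model   # Model Optimizer as a Python function (OpenVINO >= 2022.1)
from openvino.runtime import serialize        # writes the IR pair to disk

ov_model = convert_model("yolov5m.onnx")           # ONNX -> in-memory OpenVINO model
serialize(ov_model, "yolov5m.xml", "yolov5m.bin")  # save IR: network topology + weights
```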
Thanks @besmaGuesmi, I will try it. Have you ever compared the inference performance of the YOLOv5 IR model executed by the OpenVINO framework against YOLOv5 executed by the PyTorch framework? |
Hi @aseprohman, yes, of course. I compared the inference time as well as the throughput (FPS) of the model. You can use the DL benchmark to convert the model from FP16 to INT8, but unfortunately you can't use the INT8 model with a VPU device. Otherwise, I highly recommend the MYRIAD X, which gave me roughly 10x better results than the CPU: https://www.intel.com/content/www/us/en/products/details/processors/movidius-vpu/movidius-myriad-x.html Good luck. |
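A rough sketch of how such a CPU vs. MYRIAD X (NCS2/VPU) comparison can be timed with the OpenVINO 2022+ Python runtime; the IR path, input size, and iteration count are placeholders, OpenVINO's benchmark_app tool is the more thorough option, and the MYRIAD plugin only works when an NCS2/VPU device and its drivers are present:

```python
import time
import numpy as np
from openvino.runtime import Core  # OpenVINO 2022+ Python runtime

core = Core()
model = core.read_model("yolov5m.xml")  # placeholder path to the exported IR

for device in ("CPU", "MYRIAD"):  # "MYRIAD" targets an Intel NCS2 / Movidius VPU if attached
    compiled = core.compile_model(model, device)
    dummy = np.zeros((1, 3, 640, 640), dtype=np.float32)  # NCHW dummy input at the export size
    compiled([dummy])  # warm-up run
    n = 50
    t0 = time.perf_counter()
    for _ in range(n):
        compiled([dummy])
    print(f"{device}: {n / (time.perf_counter() - t0):.1f} FPS")
```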
Thanks a lot @besmaGuesmi for your explanation. |
This message appears when I try to export the .pt model to an ONNX model:
My export command is python3 export.py --weights weights/yolov5m.pt --include onnx --device cpu --batch-size 1 --img 1024. I trained with --img 416, but I want to run inference at 1024 because I get better accuracy at the larger size. Am I wrong? |
@glenn-jocher or @besmaGuesmi, do you have any suggestions? |
@aseprohman you can ignore warnings |
Hi @aseprohman, sorry for the late reply! First, I don't agree with increasing the image size at inference. Why did you use OpenVINO? To speed up inference, right? When the image size increases, the inference time increases (read this: https://www.researchgate.net/figure/The-impact-of-image-size-on-the-inference-speed-on-an-edge-device_fig9_323867606). In addition, to obtain good bounding boxes I highly recommend using the same image size for training and inference (for the YOLOv5m model, use image size = 640). Here is exactly what you have to do after training the model with a 640 image size (YOLOv5m): |
Another modification you may have to make is in export.py: |
Hello! I use this command line to convert ONNX files to IR format: python mo_onnx.py --input_model=yolov5s.onnx --model_name=test -s 255 --reverse_input_channels --output Conv_416,Conv_482,Conv_350 --data_type=FP16 (layer names already viewed with Netron), but the following error is returned: [ ERROR ] Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.front.user_data_repack.UserDataRepack'>): No node with name Conv_416 |
@zzff-sys we don't have an OpenVINO export workflow, so I can't provide support there. The ONNX model you have is correct; it's one of our supported export workflows. What's an example of a correctly working OpenVINO export workflow? |
@zzff-sys FYI the output of a YOLOv5 ONNX model is just 'output'. I don't know where you got your output values from in your command, but that can't be right. |
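To double-check which output names an exported graph actually exposes (rather than guessing Conv_* names from Netron), the onnx package can list them directly; the filename below is a placeholder:

```python
import onnx

model = onnx.load("yolov5s.onnx")  # placeholder path to the exported ONNX file
print("graph outputs:", [o.name for o in model.graph.output])      # a stock YOLOv5 export reports a single 'output'
print("last few nodes:", [n.name for n in model.graph.node][-5:])  # internal node names such as Conv_* live here
```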
Hi @zzff-sys! In your command, you have to put the last three Conv names, as shown in the screenshot below. |
@besmaGuesmi if you have experience with OpenVINO exports, do you think you could create a PR to add this format to export.py? That would be useful for helping future users like @zzff-sys. Thanks! Please see our ✅ Contributing Guide to get started. |
Thank you very much. According to your tips, I have solved the problem. @glenn-jocher @besmaGuesmi |
Hi @glenn-jocher, |
@besmaGuesmi @zzff-sys @aseprohman I've created a PR for YOLOv5 OpenVINO export support in #6057. This isn't working yet though; I get a non-zero exit code on the export command. Do you know what the problem might be? Can you help me debug this? Thanks!!

```python
!git clone https://github.com/ultralytics/yolov5 -b export/openvino  # clone
%cd yolov5
%pip install -qr requirements.txt onnx openvino-dev  # install INCLUDING onnx and openvino-dev

import torch
from yolov5 import utils
display = utils.notebook_init()  # checks

# Export OpenVINO
!python export.py --include openvino
```
|
@besmaGuesmi @zzff-sys @aseprohman the problem was that OpenVINO export seems to require ONNX opset <= 12. I've enforced this constraint now and everything seems to be working well :) EDIT: converted to directory output since OpenVINO creates 3 files. Export directory is i.e. |
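For anyone exporting ONNX outside of export.py, that constraint corresponds to the opset_version argument of torch.onnx.export (export.py exposes a corresponding --opset flag in current versions); a toy module stands in for the real model in this sketch:

```python
import torch

# Any torch.nn.Module works here; YOLOv5's export.py prepares the real detection model itself.
model = torch.nn.Conv2d(3, 16, kernel_size=3)
dummy = torch.zeros(1, 3, 640, 640)  # NCHW dummy input at the export image size
torch.onnx.export(model, dummy, "toy.onnx", opset_version=12)  # opset <= 12 for the OpenVINO converter of that era
```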
@besmaGuesmi @zzff-sys @aseprohman good news 😃! Your original issue may now be fixed ✅ in PR #6057. This PR adds native YOLOv5 OpenVINO export:
To receive this update:
Thank you for spotting this issue and informing us of the problem. Please let us know if this update resolves the issue for you, and feel free to inform us of any other issues you discover or feature requests that come to mind. Happy trainings with YOLOv5 🚀! |
Great work! Thank you! I will try again @glenn-jocher |
@besmaGuesmi do you think you could help us with OpenVINO inference now that export is complete? We need to add OpenVINO fields to DetectMultiBackend() for this purpose. I've never used OpenVINO though, so I don't have a good inference example to start from: Lines 277 to 437 in db6ec66
|
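Not the DetectMultiBackend implementation itself, just a minimal standalone sketch of loading an exported IR and running one forward pass with the OpenVINO 2022+ Python runtime (2021.x releases use openvino.inference_engine.IECore instead); the path and dummy input are illustrative:

```python
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("yolov5s_openvino_model/yolov5s.xml")  # the .bin weights are found next to the .xml
compiled = core.compile_model(model, "CPU")                    # or "MYRIAD" for an NCS2, "GPU", "AUTO", ...

im = np.zeros((1, 3, 640, 640), dtype=np.float32)              # a letterboxed, normalized NCHW image goes here
result = compiled([im])                                        # single synchronous inference
pred = result[compiled.output(0)]                              # raw prediction tensor, e.g. shape (1, 25200, 85)
print(pred.shape)
```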
@besmaGuesmi good news 😃! Your original issue may now be fixed ✅ in PR #6179. This PR brings native OpenVINO export and inference:

```bash
!python export.py --weights yolov5s.pt --include openvino  # export
!python detect.py --weights yolov5s_openvino_model/yolov5s.xml  # inference
!python val.py --weights yolov5s_openvino_model/yolov5s.xml --data ...  # validation
```

To receive this update: |
Thank you for spotting this issue and informing us of the problem. Please let us know if this update resolves the issue for you, and feel free to inform us of any other issues you discover or feature requests that come to mind. Happy trainings with YOLOv5 🚀! |
He knows but this guy never helped anyone |
Hi guys, it seems that we can now export directly to OpenVINO format via the following command line: I still can't get the bounding box results, though. Instead, I am getting this: <InferRequest: |
@arduinitavares OpenVINO Usage examples are clearly displayed after export: |
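If all you have is the raw tensor (roughly shape (1, 25200, 85): xywh box, objectness, 80 class scores per row), you still need confidence filtering and NMS to get boxes. A rough NumPy/torchvision sketch of that post-processing, not the actual detect.py code; the thresholds and the random stand-in tensor are placeholders:

```python
import numpy as np
import torch
import torchvision

pred = np.random.rand(1, 25200, 85).astype(np.float32)  # stand-in for the OpenVINO output tensor
p = pred[0]                                              # (25200, 85): xywh, objectness, class scores

scores = p[:, 4] * p[:, 5:].max(axis=1)                  # objectness * best class score
keep = scores > 0.25                                     # confidence threshold
p, scores = p[keep], scores[keep]
classes = p[:, 5:].argmax(axis=1)

xy, wh = p[:, :2], p[:, 2:4]                             # box centers and sizes in (letterboxed) pixels
boxes = np.concatenate([xy - wh / 2, xy + wh / 2], axis=1)  # xywh -> xyxy corners
idx = torchvision.ops.nms(torch.from_numpy(boxes), torch.from_numpy(scores), iou_threshold=0.45)
for i in idx.tolist():
    print(boxes[i], scores[i], classes[i])               # coordinates still need rescaling to the original image
```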
Thanks for the quick answer. That's part of the script:
|
Fusing layers...
PyTorch: starting from laotie.pt with output shape (1, 25200, 85) (14.8 MB)
ONNX: starting export with onnx 1.11.0...
OpenVINO: starting export with openvino 2.1.2020.4.0-359-21e092122f4-releases/2020/4...
OpenVINO: export failure: Command 'mo --input_model laotie.onnx --output_dir laotie_openvino_model/' returned non-zero exit status 1. |
@plmmyyds it appears you may have environment problems. Please ensure you meet all dependency requirements if you are attempting to run YOLOv5 locally. If in doubt, create a new virtual Python 3.9 environment, clone the latest repo (code changes daily), and reinstall the requirements below.

💡 ProTip! Try one of our verified environments below if you are having trouble with your local environment.

Requirements
Python>=3.7.0 with all requirements.txt installed including PyTorch>=1.7. To get started:

```bash
git clone https://github.com/ultralytics/yolov5  # clone
cd yolov5
pip install -r requirements.txt  # install
```

Models and datasets download automatically from the latest YOLOv5 release when first requested.

Environments
YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

Status
If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training (train.py), validation (val.py), inference (detect.py) and export (export.py) on MacOS, Windows, and Ubuntu every 24 hours and on every commit. |
@Guemann-ui Hi, I have a problem when converting
I'm confused by |
Hello @WorstCodeWay-T, it appears that
Also, be aware that PyTorch 2.0.1 doesn't officially support ONNX 1.14 and OpenVINO 2023.0.0 yet. It's recommended to use earlier versions to make sure this error is not caused by unsupported version combinations. For example, PyTorch 1.9.0, ONNX 1.8.1, and OpenVINO 2021.3 should work together properly. |
@glenn-jocher Thanks for the quick reply. Now I understand, and I will try the PyTorch and OpenVINO versions you mentioned.
@glenn-jocher Thanks! I'll try your suggestions and see if that resolves the issue. I appreciate your help in troubleshooting this error. |
Sorry for the late reply! Did you solve it? |
Hello @Guemann-ui, I noticed that you asked @WorstCodeWay if they were able to solve their issue with the YOLOv5 conversion to OpenVINO. If they haven't replied yet, I suggest following up with them to see if they have made any progress. If you have a similar issue, feel free to share your problem and any error messages you receive. We'll do our best to assist you in resolving it. Best regards. |
I encountered the same problem (OpenVINO 2023.0.1); after removing the --data_type FP32 parameter, it can be successfully exported. |
Hi @yao-xiaofei, I encountered the same problem with OpenVINO 2023.0.1. However, after removing the --data_type FP32 parameter, the export succeeded. Thank you for providing the solution. |
Search before asking
Question
Hello @glenn-jocher et al.,
Has anyone ever converted YOLOv5 models to IR models in OpenVINO? Maybe there is a tutorial I can learn from? I want to try deploying YOLOv5 on Intel NCS2/VPU hardware devices.
Thanks
Additional
No response