Make the pre-processing consistent with yolov5 #274

Closed
1 of 2 tasks
zhiqwang opened this issue Jan 12, 2022 · 2 comments · Fixed by #293
Assignees: zhiqwang
Labels: bug / fix, enhancement

Comments

zhiqwang (Owner) commented Jan 12, 2022

🚀 The feature and motivation

As mentioned in #265 (comment), the YOLOv5.load_from_yolov5() strategy introduces minor differences from ultralytics. There are now two options to solve this problem:

  • Make the pre-processing behave the same as yolov5 (see the sketch after this list). The advantage of this approach is that there is no need to write code on the C++ backend.
  • Just use YOLO.load_from_yolov5() and implement the pre-processing in the C++ backend. This requires modifying the example we provide. The main difference between this strategy and the official yolov5 is that we don't need to write the post-processing (nms) module on the C++ backend.

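As a rough illustration of the first option, here is a minimal sketch (not the actual yolort code) of a yolov5-style letterbox written only with traceable torch ops; the name `letterbox_tensor` and its defaults are hypothetical:

```python
import torch
import torch.nn.functional as F

def letterbox_tensor(image: torch.Tensor, new_size: int = 640, fill_value: float = 114.0):
    """Resize a CHW float tensor so its longer side equals `new_size`,
    then pad to a square, similar to what yolov5's letterbox does."""
    _, h, w = image.shape
    scale = new_size / max(h, w)
    new_h, new_w = int(round(h * scale)), int(round(w * scale))
    # F.interpolate expects a 4D float tensor (N, C, H, W)
    image = F.interpolate(image[None], size=(new_h, new_w), mode="bilinear", align_corners=False)[0]
    pad_h, pad_w = new_size - new_h, new_size - new_w
    # pad (left, right, top, bottom) with the gray value yolov5 uses
    image = F.pad(image, (pad_w // 2, pad_w - pad_w // 2, pad_h // 2, pad_h - pad_h // 2), value=fill_value)
    return image, scale
```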
In fact, the main op needed for pre-processing is interpolation: yolov5 uses the cv2.resize operator, while yolort uses torch.nn.functional.interpolate. (The ops in cv2 can't be traced or scripted.) The implementation of the interpolation operator is not entirely consistent across frameworks.

UPDATED: Fortunately, the interpolation operator of PyTorch is aligned with that of OpenCV for the float type. However, PyTorch's interpolate does not currently support integer types (pytorch/pytorch#5580), so a small loss of precision will occur here.
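A quick, illustrative check of this claim (assuming opencv-python, numpy, and torch are installed; the image shape and target size are arbitrary): bilinear resizing of a float image gives near-identical results in OpenCV and PyTorch, while an integer image has to be cast to float first because F.interpolate rejects integer tensors.

```python
import cv2
import numpy as np
import torch
import torch.nn.functional as F

img_u8 = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
img_f32 = img_u8.astype(np.float32)  # cast first: F.interpolate does not accept integer tensors

# OpenCV works on HWC arrays; dsize is (width, height)
ref = cv2.resize(img_f32, (320, 256), interpolation=cv2.INTER_LINEAR)

# PyTorch works on NCHW tensors; size is (height, width)
ten = torch.from_numpy(img_f32).permute(2, 0, 1)[None]
out = F.interpolate(ten, size=(256, 320), mode="bilinear", align_corners=False)
out = out[0].permute(1, 2, 0).numpy()

print(np.abs(ref - out).max())  # expected to be close to zero on the float path
```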

https://github.com/zhiqwang/yolov5-rt-stack/blob/949fa0b08dd336c21dc051594941af457a312b21/yolort/v5/utils/augmentations.py#L126

https://github.com/zhiqwang/yolov5-rt-stack/blob/949fa0b08dd336c21dc051594941af457a312b21/yolort/models/transform.py#L286-L293

Additional context

https://pytorch.org/blog/introducing-torchvision-new-multi-weight-support-api/#associated-meta-data---preprocessing-transforms

Originally posted by @zhiqwang in #271 (comment), cc @mattpopovich.

@zhiqwang zhiqwang added the enhancement New feature or request label Jan 12, 2022
@zhiqwang zhiqwang self-assigned this Jan 26, 2022
@zhiqwang zhiqwang added the bug / fix Something isn't working label Jan 28, 2022
zhiqwang (Owner, Author) commented Feb 4, 2022

Hi @mattpopovich,

Here is some GOOD news! I think we've finally resolved the issue in #293, and I'll update the comparison documentation in a follow-up PR.

I haven't done a complete test on the speed, but since we are using basically the same operators before and after the change, it's likely that the code will not be slower. We will pay more attention to the performance part in subsequent updates, and we would welcome additional testing on different devices here.

As such, I'm closing this thread. We'd be happy to hear more from you about yolort, and feel free to create another ticket if you have more questions.

zhiqwang (Owner, Author) commented Feb 5, 2022

FYI, we've added a doc comparing yolov5 and yolort:
https://zhiqwang.com/yolov5-rt-stack/notebooks/comparison-between-yolort-vs-yolov5.html
