Releases: zhiqwang/yolort
Support upstream ultralytics v4.0 release
This is a minor release supporting the upstream ultralytics/yolov5 v4.0 release stack.
TVM and Lightning inference support
Highlights
Support TVM backend inference
This release supports TVM backend inference (#63, #53, #51, #50). Here is a rough comparison of inference times measured in a Jupyter notebook (IPython) (#54).
- On the ONNXRuntime backend: `CPU times: user 2.04 s, sys: 0 ns, total: 2.04 s; Wall time: 55.8 ms`
- On the TorchScript backend: `CPU times: user 2.03 s, sys: 32 ms, total: 2.06 s; Wall time: 60.5 ms`
- On the PyTorch backend: `CPU times: user 3.87 s, sys: 60 ms, total: 3.93 s; Wall time: 116 ms`
- On the TVM backend: `CPU times: user 528 ms, sys: 364 ms, total: 892 ms; Wall time: 22.3 ms`
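The wall-time figures above come from IPython's `%time` magic. Outside a notebook, a similar measurement can be sketched with the standard library; the `run_once` callable below is a stand-in for a real backend call such as `model.predict('bus.jpg')`:

```python
import time

def average_wall_time(run_once, warmup=3, iters=10):
    """Average wall-clock seconds per call, roughly matching %time's 'Wall time'."""
    for _ in range(warmup):          # discard warm-up runs (JIT compilation, caches)
        run_once()
    start = time.perf_counter()
    for _ in range(iters):
        run_once()
    return (time.perf_counter() - start) / iters

# Stand-in workload; replace with an actual backend call to reproduce the comparison
avg_seconds = average_wall_time(lambda: sum(i * i for i in range(100_000)))
```

Warm-up iterations matter here: the first call on the TVM or TorchScript backend typically includes one-time compilation cost that would skew a single `%time` reading.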
Inherited from Lightning
The yolort models now inherit from LightningModule (#62, #48, #46, #43). To read one or more images and detect their objects, run the following:
```python
from yolort.models import yolov5s

# Load model
model = yolov5s(pretrained=True, score_thresh=0.45)
model.eval()

# Perform inference on an image file
predictions = model.predict('bus.jpg')
# Perform inference on a list of image files
predictions = model.predict(['bus.jpg', 'zidane.jpg'])
```
Backwards Incompatible Changes
New Features
- Add flake8 lint unittest (#58)
- Create codeql-analysis.yml (#37)
- Refactor darknet backbone as separate modules (#33)
- Add ISSUE_TEMPLATE and codecov workflows (#24)
- Refactor model unittest (#23)
- Replacing all torch.jit.annotations with typing (#22)
- Add automatic rebase (#11)
- Add torchscript export and model unittest (#8)
- Support yolov5m and yolov5l models (#7)
- Add essential scripts for training (#5)
- Enable unittest in pytorch nightly version (#4)
- Add half precision inference in libtorch (#1)
Bug Fixes
- Rescale to original scale after post-processor (#47)
- Suppress cpp language analysis (#40)
- Fix inference inconsistency compared with ultralytics (#31)
- Fix loss computation (#25)
- Add missing Detection struct (#27)
- Fix ops missing from upstream (#21)
- Fix ORT segfault in nightly version (#18)
- Fix loading with coco and voc datasets (#12)
- Correct incorrect types (#3)
Documents
- Update CODE_OF_CONDUCT.md (#61)
- Cleanup requirements.txt (#57)
- Update readme and setup instructions [skip ci] (#56, #19, #13)
- Add ultralytics like model loading notebooks (#52)
- Create CONTRIBUTING.md (#36) and CODE_OF_CONDUCT.md (#35), thanks @BobinMathew!
- Fix badge links of GH actions (#32)
- Add statement of stable branch (#30)
- Make it more readable (#20)
- Update checkpoints updating information (#15)
- Add design principle (#14)
- Update Namespace arguments in updating checkpoints (#10)
- Use yolov5s, yolov5m and yolov5l directly (#9)
Model Checkpoints for r3.1 and r4.0
Last modified date: 2021-02-22
This release reserves space to store the model checkpoints.
NOTE: All checkpoints here are identified by an MD5 hash.
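The MD5 convention can be checked locally with the standard library; this is a minimal sketch, not yolort's own download helper, and the filename comparison at the end is illustrative:

```python
import hashlib

def file_md5(path, chunk_size=8192):
    """Stream a checkpoint file through MD5 without loading it all into memory."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# e.g. compare file_md5('yolov5s.pt')[:8] against the hash recorded for the checkpoint
```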
Support yolov5m and yolov5l models
New Features
- Support `yolov5m` and `yolov5l` models (#7)
Add graph visualization tools
New Features
- Add `libtorch` and `onnx` inference and export notebooks. Nov. 21, 2020.
- Add graph visualization tools and notebooks. Nov. 21, 2020.
Refactor the BackboneWithPAN module
New Features
- Refactor the `Backbone` modules. Nov. 16, 2020.
- Support exporting to `onnx`, and doing inference using `onnxruntime`. Nov. 17, 2020.
Refactor the YOLOHead and AnchorGenerator modules
New Features
- Add `TorchScript` cpp interface example. Nov. 4, 2020.
- Refactor the `YoloHead` and `AnchorGenerator` modules. Nov. 11, 2020.
yolort initial release
New Features
- Support exporting to `TorchScript` model, and doing inference using the Python interface. Oct. 8, 2020.
- Support doing inference using the `libtorch` cpp interface. Oct. 10, 2020.