diff --git a/README.md b/README.md
index af1092a5..75096494 100644
--- a/README.md
+++ b/README.md
@@ -27,13 +27,14 @@

 ## 🆕 What's New

-- Add `TVM` compile and inference notebooks. *Feb. 5, 2021*.
-- Add graph visualization tools. *Nov. 21, 2020*.
-- Support exporting to `ONNX`, and inferring with `ONNXRuntime` interface. *Nov. 17, 2020*.
-- Refactor YOLO modules and support *dynamic shape/batch* inference. *Nov. 16, 2020*.
-- Add `TorchScript` C++ inference example. *Nov. 4, 2020*.
-- Support inferring with `LibTorch` C++ interface. *Oct. 10, 2020*.
-- Support exporting to `TorchScript` model. *Oct. 8, 2020*.
+- *Sep. 24, 2021*. Add `ONNXRuntime` C++ interface example. Thanks to [itsnine](https://github.com/itsnine).
+- *Feb. 5, 2021*. Add `TVM` compile and inference notebooks.
+- *Nov. 21, 2020*. Add graph visualization tools.
+- *Nov. 17, 2020*. Support exporting to `ONNX`, and inferencing with `ONNXRuntime` Python interface.
+- *Nov. 16, 2020*. Refactor YOLO modules and support *dynamic shape/batch* inference.
+- *Nov. 4, 2020*. Add `TorchScript` C++ inference example.
+- *Oct. 10, 2020*. Support inferencing with `LibTorch` C++ interface.
+- *Oct. 8, 2020*. Support exporting to `TorchScript` model.

 ## 🛠️ Usage

diff --git a/deployment/onnxruntime/README.md b/deployment/onnxruntime/README.md
index bc4c3d6e..8ff123cc 100644
--- a/deployment/onnxruntime/README.md
+++ b/deployment/onnxruntime/README.md
@@ -34,7 +34,7 @@ The ONNXRuntime inference for `yolort`, both GPU and CPU are supported.

 ```bash
 ./yolort_onnx [--image ../../../test/assets/zidane.jpg]
-              [--model_path ../../../test/tracing/yolov5s.onnx]
-              [--class_names ../../../notebooks/assets/coco.names]
-              [--gpu] # GPU switch, which is optional, and set False as default
- ```
\ No newline at end of file
+              [--model_path ../../../notebooks/yolov5s.onnx]
+              [--class_names ../../../notebooks/assets/coco.names]
+              [--gpu] # GPU switch, which is optional, and set False as default
+ ```