Add Shiquan as co-authors for great contributions on TRT (#258)
* Update README in deployment/tensorrt

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Add Shiquan as co-authors for great contributions on TRT

* Update TensorRT statement

* Update badges

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
zhiqwang and pre-commit-ci[bot] authored Dec 27, 2021
1 parent 7d08c9c commit b6b05e4
Showing 2 changed files with 65 additions and 13 deletions.
16 changes: 11 additions & 5 deletions README.md
@@ -24,9 +24,9 @@ ______________________________________________________________________
[![pre-commit.ci status](https://results.pre-commit.ci/badge/github/zhiqwang/yolov5-rt-stack/main.svg)](https://results.pre-commit.ci/latest/github/zhiqwang/yolov5-rt-stack/main)

[![codecov](https://codecov.io/gh/zhiqwang/yolov5-rt-stack/branch/main/graph/badge.svg?token=1GX96EA72Y)](https://codecov.io/gh/zhiqwang/yolov5-rt-stack)
-[![license](https://img.shields.io/github/license/zhiqwang/yolov5-rt-stack?color=brightgreen)](LICENSE)
-[![Slack](https://img.shields.io/badge/slack-chat-green.svg?logo=slack)](https://join.slack.com/t/yolort/shared_invite/zt-mqwc7235-940aAh8IaKYeWclrJx10SA)
-[![PRs Welcome](https://img.shields.io/badge/PRs-welcome-brightgreen.svg)](https://github.com/zhiqwang/yolov5-rt-stack/issues?q=is%3Aopen+is%3Aissue+label%3A%22help+wanted%22)
+[![license](https://img.shields.io/github/license/zhiqwang/yolov5-rt-stack?color=dfd)](LICENSE)
+[![Slack](https://img.shields.io/badge/slack-chat-aff.svg?logo=slack)](https://join.slack.com/t/yolort/shared_invite/zt-mqwc7235-940aAh8IaKYeWclrJx10SA)
+[![PRs Welcome](https://img.shields.io/badge/PRs-welcome-pink.svg)](https://github.com/zhiqwang/yolov5-rt-stack/issues?q=is%3Aopen+is%3Aissue+label%3A%22help+wanted%22)

______________________________________________________________________

@@ -46,7 +46,9 @@ ______________________________________________________________________

## 🆕 What's New

-- *Sep. 24, 2021*. Add `ONNXRuntime` C++ interface example. Thanks to [itsnine](https://github.com/itsnine).
+- *Dec. 27, 2021*. Add `TensorRT` C++ interface example. Thanks to [Shiquan](https://github.com/ShiquanYu).
+- *Dec. 25, 2021*. Support exporting to `TensorRT`, and inferencing with `TensorRT` Python interface.
+- *Sep. 24, 2021*. Add `ONNXRuntime` C++ interface example. Thanks to [Fidan](https://github.com/itsnine).
- *Feb. 5, 2021*. Add `TVM` compile and inference notebooks.
- *Nov. 21, 2020*. Add graph visualization tools.
- *Nov. 17, 2020*. Support exporting to `ONNX`, and inferencing with `ONNXRuntime` Python interface.
@@ -135,6 +137,10 @@ We provide a [notebook](notebooks/inference-pytorch-export-libtorch.ipynb) to de

On the `ONNXRuntime` front you can use the [C++ example](deployment/onnxruntime), and we also provide a tutorial [export-onnx-inference-onnxruntime](notebooks/export-onnx-inference-onnxruntime.ipynb) for using `ONNXRuntime`.

+### Inference on TensorRT backend
+
+On the `TensorRT` front you can use the [C++ example](deployment/tensorrt), and we also provide a tutorial [onnx-graphsurgeon-inference-tensorrt](notebooks/onnx-graphsurgeon-inference-tensorrt.ipynb) for using `TensorRT`.

## 🎨 Model Graph Visualization

Now, `yolort` can draw the model graph directly, checkout our [model-graph-visualization](notebooks/model-graph-visualization.ipynb) notebook to see how to use and visualize the model graph.
@@ -152,7 +158,7 @@ If you use yolort in your publication, please cite it by using the following Bib

```bibtex
@Misc{yolort2021,
-author = {Zhiqiang Wang, Fidan Kharrasov},
+author = {Zhiqiang Wang, Shiquan Yu, Fidan Kharrasov},
title = {yolort: A runtime stack for object detection on specialized accelerators},
howpublished = {\url{https://github.com/zhiqwang/yolov5-rt-stack}},
year = {2021}
}
```
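The new "Inference on TensorRT backend" section above mirrors the existing ONNXRuntime one. As a quick orientation, here is a minimal sketch of running a yolort-exported ONNX model through the `ONNXRuntime` Python API. The model path, the fixed 1x3x640x640 float32 input, and the CPU provider are assumptions for illustration; a real export may use dynamic shapes and different names.

```python
import numpy as np
import onnxruntime as ort

# Assumed path for illustration; use your own exported model.
onnx_path = "yolov5n6.onnx"
session = ort.InferenceSession(onnx_path, providers=["CPUExecutionProvider"])

# A dummy 1x3x640x640 float32 image batch stands in for a real picture.
image = np.random.rand(1, 3, 640, 640).astype(np.float32)
input_name = session.get_inputs()[0].name
outputs = session.run(None, {input_name: image})
print([o.shape for o in outputs])
```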
62 changes: 54 additions & 8 deletions deployment/tensorrt/README.md
@@ -1,33 +1,79 @@
# TensorRT Inference

-The TensorRT inference for `yolort`, support GPU only.
+The TensorRT inference for `yolort` supports CUDA only.

## Dependencies

-- TensorRT 8.x
+- TensorRT 8.0+

## Usage

1. Create the build directory and configure CMake.

```bash
mkdir -p build/ && cd build/
-cmake .. -DTENSORRT_DIR=${your_trt_install_director}
+cmake .. -DTENSORRT_DIR={path/to/your/trt/install/directory}
```

1. Build the project.

```bash
-make
+cmake --build . -j4
```

-1. Export your custom model to ONNX(see [onnx-graphsurgeon-inference-tensorrt](https://github.com/zhiqwang/yolov5-rt-stack/blob/main/notebooks/onnx-graphsurgeon-inference-tensorrt.ipynb)).
+1. Export your custom model to ONNX.

Here is a small demo to surgeon the YOLOv5 ONNX model and then export it as a TensorRT engine; two quick sanity-check sketches follow these steps. For details, see our [tutorial for deploying yolort on TensorRT](https://zhiqwang.com/yolov5-rt-stack/notebooks/onnx-graphsurgeon-inference-tensorrt.html).

- Set the parameters

```python
model_path = "https://github.com/ultralytics/yolov5/releases/download/v6.0/yolov5n6.pt"
checkpoint_path = attempt_download(model_path)
onnx_path = "yolov5n6.onnx"
engine_path = "yolov5n6.engine"

score_thresh = 0.4
iou_thresh = 0.45
detections_per_img = 100
```

- Surgeon the YOLOv5 ONNX model

```python
from yolort.runtime.yolo_graphsurgeon import YOLOGraphSurgeon

yolo_gs = YOLOGraphSurgeon(
    checkpoint_path,
    version="r6.0",
    enable_dynamic=False,
)

yolo_gs.register_nms(
    score_thresh=score_thresh,
    nms_thresh=iou_thresh,
    detections_per_img=detections_per_img,
)

# Export the ONNX model
yolo_gs.save(onnx_path)
```

- Build the TensorRT engine

```python
from yolort.runtime.trt_helper import EngineBuilder

engine_builder = EngineBuilder()
engine_builder.create_network(onnx_path)
engine_builder.create_engine(engine_path, precision="fp32")
```

1. Now you can run inference on your own images.

```bash
./yolort_trt [--image ../../../test/assets/zidane.jpg]
-[--model_path ../../../notebooks/yolov5s.onnx]
-[--class_names ../../../notebooks/assets/coco.names]
-[--fp16] # Enable it if your GPU support fp16 inference
+             [--model_path ../../../notebooks/yolov5s.onnx]
+             [--class_names ../../../notebooks/assets/coco.names]
+             [--fp16] # Enable it if your GPU supports fp16 inference
```
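As referenced above, a quick structural check of the surgeon'd ONNX graph before building the engine can catch export problems early. This is a minimal sketch using the `onnx` package, assuming the `yolov5n6.onnx` path from the parameter snippet:

```python
import onnx

# Load and structurally validate the surgeon'd model.
model = onnx.load("yolov5n6.onnx")
onnx.checker.check_model(model)

# List the graph outputs (register_nms adds the detection outputs).
print([output.name for output in model.graph.output])
```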
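And a minimal sketch to confirm the serialized engine deserializes under the TensorRT Python runtime, assuming TensorRT 8.x bindings and the `yolov5n6.engine` path built above:

```python
import tensorrt as trt

# Deserialize the engine built by EngineBuilder to confirm it is usable.
logger = trt.Logger(trt.Logger.WARNING)
with open("yolov5n6.engine", "rb") as f, trt.Runtime(logger) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())

assert engine is not None
print(f"Engine has {engine.num_bindings} bindings")
```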
