This project is no longer maintained, since better alternatives for engine building already exist: you can use TensorRT's Python API, or use the trtexec/polygraphy tools to build an engine quickly.

For any issue with TensorRT itself, you can file an issue at https://github.com/NVIDIA/TensorRT/issues

tiny-tensorrt

An easy-to-use NVIDIA TensorRT wrapper for ONNX models, with C++ and Python APIs. You can deploy your model with tiny-tensorrt in just a few lines of code!

Trt* net = new Trt();
net->SetFP16();
net->BuildEngine(onnxModel, engineFile);
net->CopyFromHostToDevice(input, inputBindIndex);
net->Forward();
net->CopyFromDeviceToHost(output, outputBindIndex);
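
Fleshed out slightly, the same flow looks like the sketch below. This is illustrative only: the file names, tensor sizes, and binding indices are assumptions for a hypothetical image classifier, not something the library prescribes; query your own model for the real values.

#include <string>
#include <vector>
#include "Trt.h"

int main() {
    // Placeholder paths -- point these at your own model and engine file.
    std::string onnxModel = "model.onnx";
    std::string engineFile = "model.engine";

    Trt* net = new Trt();
    net->SetFP16();                           // optional: build with FP16 precision
    net->BuildEngine(onnxModel, engineFile);  // parse the ONNX model and build/load the engine

    // Assumed shapes for a hypothetical 1x3x224x224 -> 1x1000 classifier.
    std::vector<float> input(1 * 3 * 224 * 224, 0.0f);
    std::vector<float> output(1 * 1000);
    int inputBindIndex = 0;   // assumption: input is binding 0
    int outputBindIndex = 1;  // assumption: output is binding 1

    net->CopyFromHostToDevice(input, inputBindIndex);    // upload input
    net->Forward();                                      // run inference
    net->CopyFromDeviceToHost(output, outputBindIndex);  // download output

    delete net;
    return 0;
}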

Install

tiny-tensorrt relies on CUDA, cuDNN, and TensorRT. Make sure you have installed those dependencies already. For a quick start, you can use an official NVIDIA Docker image, for example as shown below.
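
One way to do this is to pull a TensorRT container from NVIDIA's NGC registry. The tag below is only an assumption for illustration; pick one from the NGC catalog that matches your CUDA and TensorRT versions.

## tag is illustrative -- choose one matching your CUDA/TensorRT versions
docker pull nvcr.io/nvidia/tensorrt:22.04-py3
docker run --gpus all -it --rm nvcr.io/nvidia/tensorrt:22.04-py3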

Supported CUDA versions: 10.2, 11.0, 11.1, 11.2, 11.3, 11.4

Supported TensorRT versions: 7.0, 7.1, 7.2, 8.0, 8.2, 8.4

To build tiny-tensorrt, you also need some extra packages.

sudo apt-get update -y
sudo apt-get install -y git cmake zlib1g-dev

## this is for python binding
sudo apt-get install python3 python3-pip
pip3 install numpy

## clone project and submodule
git clone --recurse-submodules -j8 https://github.com/zerollzeng/tiny-tensorrt.git

cd tiny-tensorrt
mkdir build && cd build

cmake .. && make

Then you can integrate it into your own project with libtinytrt.so and Trt.h; for the Python module, you get pytrt.so.
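
As a sketch of the C++ integration, a compile line might look like the following. The include and library paths are assumptions based on the default build layout above, and the exact TensorRT/CUDA link flags depend on your installation.

## paths are assumptions -- adjust to where you cloned and built tiny-tensorrt
g++ demo.cpp -o demo \
    -I/path/to/tiny-tensorrt \
    -L/path/to/tiny-tensorrt/build -ltinytrt \
    -L/usr/local/cuda/lib64 -lcudart \
    -lnvinfer -lnvonnxparser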

Docs

Please refer to the Wiki

License

For the 3rd-party modules and TensorRT, you need to follow their licenses.

For the parts I wrote, you can do anything you want.