The PyTorch implementation is DBNet.
- generate .wts

  Download code and model from DBNet and config your environments.

  Go to file `tools/predict.py`, set `--save_wts` as `True`, then run; `DBNet.wts` will be generated. An onnx model can also be exported by setting `--onnx` as `True`. The layout of the generated `.wts` file is sketched after this list.
- cmake and make

  ```
  mkdir build
  cd build
  cmake ..
  make
  cp /your_wts_path/DBNet.wts .
  sudo ./dbnet -s              // serialize model to plan file, i.e. 'DBNet.engine'
  sudo ./dbnet -d ./test_imgs  // deserialize plan file and run inference; all images in the test_imgs folder will be processed
  ```
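For reference, the generated `DBNet.wts` is a plain-text weight file. Assuming it follows the usual tensorrtx-style layout (a tensor count on the first line, then one line per tensor with its name, element count, and hex-encoded float32 values), a minimal loader sketch on the C++ side looks like this:

```cpp
#include <cassert>
#include <fstream>
#include <map>
#include <string>
#include "NvInfer.h"

using namespace nvinfer1;

// Load weights from a tensorrtx-style .wts file (assumed layout):
//   line 1: <count>
//   then per tensor: <name> <size> <hex float32> <hex float32> ...
std::map<std::string, Weights> loadWeights(const std::string& file) {
    std::map<std::string, Weights> weightMap;
    std::ifstream input(file);
    assert(input.is_open() && "Unable to open weight file.");

    int32_t count;
    input >> count;
    assert(count > 0 && "Invalid weight map file.");

    while (count--) {
        Weights wt{DataType::kFLOAT, nullptr, 0};
        std::string name;
        uint32_t size;
        input >> name >> std::dec >> size;

        // Each value is the hex bit pattern of a float32.
        uint32_t* val = new uint32_t[size];
        for (uint32_t i = 0; i < size; ++i) {
            input >> std::hex >> val[i];
        }
        wt.values = val;
        wt.count = size;
        weightMap[name] = wt;
    }
    return weightMap;
}
```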
https://github.com/BaofengZan/DBNet-TensorRT
Todo:

1. In `common.hpp`, the following two functions can be merged (a possible merged form is sketched after this list):

   ```
   ILayer* convBnLeaky(INetworkDefinition *network, std::map<std::string, Weights>& weightMap, ITensor& input, int outch, int ksize, int s, int g, std::string lname, bool bias = true)
   ILayer* convBnLeaky2(INetworkDefinition *network, std::map<std::string, Weights>& weightMap, ITensor& input, int outch, int ksize, int s, int g, std::string lname, bool bias = true)
   ```
2. The postprocess method here should be optimized; it differs slightly from the pytorch side (a baseline sketch of the reference stages follows this list).
3. The input image here is resized to `640 x 640` directly, while the pytorch side uses the `letterbox` method, i.e. an aspect-ratio-preserving resize with padding (see the last sketch after this list).
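For item 1, one possible shape for the merged helper. This is a sketch only: it assumes the two variants differ in something that can become a parameter (the LeakyReLU slope is used here as a stand-in), assumes the usual tensorrtx-style `addBatchNorm2d` helper from `common.hpp`, and guesses the weight-map key names; check the actual bodies of `convBnLeaky`/`convBnLeaky2` before merging.

```cpp
#include <cassert>
#include <map>
#include <string>
#include "NvInfer.h"

using namespace nvinfer1;

// Assumed to exist in common.hpp (standard tensorrtx helper): builds an
// IScaleLayer implementing y = gamma * (x - mean) / sqrt(var + eps) + beta.
IScaleLayer* addBatchNorm2d(INetworkDefinition* network,
                            std::map<std::string, Weights>& weightMap,
                            ITensor& input, std::string lname, float eps);

// Hypothetical merged version of convBnLeaky/convBnLeaky2: conv + batchnorm +
// LeakyReLU. The extra 'slope' parameter is a guess at what distinguishes the
// two originals; replace it with whatever actually differs in common.hpp.
ILayer* convBnLeaky(INetworkDefinition* network, std::map<std::string, Weights>& weightMap,
                    ITensor& input, int outch, int ksize, int s, int g,
                    std::string lname, bool bias = true, float slope = 0.1f) {
    Weights emptywts{DataType::kFLOAT, nullptr, 0};
    int p = ksize / 2;  // "same" padding for odd kernel sizes

    // Weight-map keys are illustrative; match them to the .wts naming scheme.
    IConvolutionLayer* conv = network->addConvolutionNd(
        input, outch, DimsHW{ksize, ksize},
        weightMap[lname + ".conv.weight"],
        bias ? weightMap[lname + ".conv.bias"] : emptywts);
    assert(conv);
    conv->setStrideNd(DimsHW{s, s});
    conv->setPaddingNd(DimsHW{p, p});
    conv->setNbGroups(g);

    IScaleLayer* bn = addBatchNorm2d(network, weightMap, *conv->getOutput(0),
                                     lname + ".bn", 1e-5f);

    IActivationLayer* leaky = network->addActivation(*bn->getOutput(0),
                                                     ActivationType::kLEAKY_RELU);
    assert(leaky);
    leaky->setAlpha(slope);
    return leaky;
}
```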
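For item 2, as a baseline for comparing the two sides: the reference DBNet postprocess binarizes the probability map, extracts contours, scores each candidate region, and then unclips (expands) it. A compressed OpenCV sketch of those stages; names, thresholds, and the omitted unclip step are illustrative, not the repo's actual code:

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Illustrative DBNet-style postprocess skeleton: binarize the probability map,
// find contours, and keep boxes whose mean score passes a threshold. The
// unclip (polygon expansion) step is only indicated; the reference
// implementation uses Vatti clipping for it.
std::vector<cv::RotatedRect> dbPostprocess(const cv::Mat& prob,  // CV_32F, HxW in [0,1]
                                           float binThresh = 0.3f,
                                           float boxThresh = 0.7f) {
    cv::Mat bin = prob > binThresh;  // CV_8U mask, 255 where above threshold

    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(bin, contours, cv::RETR_LIST, cv::CHAIN_APPROX_SIMPLE);

    std::vector<cv::RotatedRect> boxes;
    for (const auto& c : contours) {
        if (c.size() < 4) continue;
        cv::RotatedRect rect = cv::minAreaRect(c);

        // Score the candidate by the mean probability inside its contour.
        cv::Mat mask = cv::Mat::zeros(prob.size(), CV_8U);
        cv::drawContours(mask, std::vector<std::vector<cv::Point>>{c}, -1, 255, cv::FILLED);
        if (cv::mean(prob, mask)[0] < boxThresh) continue;

        // TODO: unclip/expand the box as in the reference implementation.
        boxes.push_back(rect);
    }
    return boxes;
}
```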
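For item 3, `letterbox` preprocessing scales the image to fit the target size without distortion and pads the rest. A minimal sketch, assuming a centered pad with a constant fill value (the pytorch side's exact pad color and placement should be checked):

```cpp
#include <algorithm>
#include <opencv2/opencv.hpp>

// Aspect-ratio-preserving resize with padding ("letterbox"): the image is
// scaled to fit inside dstW x dstH, and the remaining area is filled with a
// constant color (114 here, a common choice; the pytorch side may differ).
cv::Mat letterbox(const cv::Mat& img, int dstW, int dstH) {
    float r = std::min(dstW / static_cast<float>(img.cols),
                       dstH / static_cast<float>(img.rows));
    int newW = static_cast<int>(img.cols * r);
    int newH = static_cast<int>(img.rows * r);

    cv::Mat resized;
    cv::resize(img, resized, cv::Size(newW, newH));

    // Center the resized image on a padded canvas.
    cv::Mat out(dstH, dstW, img.type(), cv::Scalar(114, 114, 114));
    resized.copyTo(out(cv::Rect((dstW - newW) / 2, (dstH - newH) / 2, newW, newH)));
    return out;
}
```

Note that boxes detected on the letterboxed image must be mapped back to the original image through the same scale factor and pad offsets.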