[Doc] Add C++ comments for external models (PaddlePaddle#394)
* first commit for yolov7

* pybind for yolov7

* CPP README.md

* CPP README.md

* modified yolov7.cc

* README.md

* python file modify

* delete license in fastdeploy/

* repush the conflict part

* README.md modified

* README.md modified

* file path modified

* file path modified

* file path modified

* file path modified

* file path modified

* README modified

* README modified

* move some helpers to private

* add examples for yolov7

* api.md modified

* api.md modified

* api.md modified

* YOLOv7

* yolov7 release link

* yolov7 release link

* yolov7 release link

* copyright

* change some helpers to private

* change variables to const and fix documents.

* gitignore

* Transfer some functions to private members of class

* Transfer some functions to private members of class

* Merge from develop (#9)

* Fix compile problem in different python version (#26)

* fix some usage problem in linux

* Fix compile problem

Co-authored-by: root <[email protected]>

* Add PaddleDetection/PPYOLOE model support (#22)

* add ppdet/ppyoloe

* Add demo code and documents

* add convert processor to vision (#27)

* update .gitignore

* Added checking for cmake include dir

* fixed missing trt_backend option bug when init from trt

* remove unneeded data layout and add pre-check for dtype

* changed RGB2BRG to BGR2RGB in ppcls model

* add model_zoo yolov6 c++/python demo

* fixed CMakeLists.txt typos

* update yolov6 cpp/README.md

* add yolox c++/pybind and model_zoo demo

* move some helpers to private

* fixed CMakeLists.txt typos

* add normalize with alpha and beta

* add version notes for yolov5/yolov6/yolox

* add copyright to yolov5.cc

* revert normalize

* fixed some bugs in yolox

* fixed examples/CMakeLists.txt to avoid conflicts

* add convert processor to vision

* format examples/CMakeLists summary

* Fix bug while the inference result is empty with YOLOv5 (#29)

* Add multi-label function for yolov5

* Update README.md

Update doc

* Update fastdeploy_runtime.cc

fix variable option.trt_max_shape wrong name

* Update runtime_option.md

Update resnet model dynamic shape setting name from images to x

* Fix bug when inference result boxes are empty

* Delete detection.py

Co-authored-by: Jason <[email protected]>
Co-authored-by: root <[email protected]>
Co-authored-by: DefTruth <[email protected]>
Co-authored-by: huangjianhui <[email protected]>

* first commit for yolor

* for merge

* Develop (#11)


* Yolor (#16)

* Develop (#11) (#12)


* Develop (#13)


* documents

* documents

* documents

* documents

* documents

* documents

* documents

* documents

* documents

* documents

* documents

* documents

* Develop (#14)


* add is_dynamic for YOLO series (#22)

* modify ppmatting backend and docs

* modify ppmatting docs

* fix the PPMatting size problem

* fix LimitShort's log

* retrigger ci

* modify PPMatting docs

* modify the way of dealing with LimitShort

* add C++ comments for external models

* add comments for external models

* add comments for ocr, seg, ppyoloe and yolov5cls

* modify copyright to pass the code style check

Co-authored-by: Jason <[email protected]>
Co-authored-by: root <[email protected]>
Co-authored-by: DefTruth <[email protected]>
Co-authored-by: huangjianhui <[email protected]>
Co-authored-by: Jason <[email protected]>
6 people authored Oct 21, 2022
1 parent 542d42a commit 640dfcf
Showing 29 changed files with 566 additions and 203 deletions.
2 changes: 1 addition & 1 deletion fastdeploy/vision/classification/contrib/yolov5cls.h
@@ -44,7 +44,7 @@ class FASTDEPLOY_DECL YOLOv5Cls : public FastDeployModel {

/** \brief Predict the classification result for an input image
*
* \param[in] im The input image data, comes from cv::imread()
* \param[in] im The input image data, comes from cv::imread(), is a 3-D array with layout HWC, BGR format
* \param[in] result The output classification result will be written to this structure
* \param[in] topk Returns the topk classification result with the highest predicted probability, the default is 1
* \return true if the prediction succeeded, otherwise false
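A minimal C++ usage sketch for the classification interface documented above. The class path and the Predict/topk semantics come from this header; the umbrella include, Initialized(), Str(), and the model/image file names are assumptions made for illustration.

```cpp
#include <iostream>

#include "fastdeploy/vision.h"  // assumed umbrella header exposing YOLOv5Cls and ClassifyResult

int main() {
  namespace cls = fastdeploy::vision::classification;
  // Placeholder ONNX model exported by YOLOv5's classification branch.
  cls::YOLOv5Cls model("yolov5n-cls.onnx");
  if (!model.Initialized()) {
    std::cerr << "Failed to initialize the model." << std::endl;
    return -1;
  }

  cv::Mat im = cv::imread("test.jpg");  // 3-D HWC array in BGR, as the comment above requires
  fastdeploy::vision::ClassifyResult result;
  // topk = 5 keeps the five classes with the highest predicted probability.
  if (!model.Predict(&im, &result, 5)) {
    std::cerr << "Prediction failed." << std::endl;
    return -1;
  }
  std::cout << result.Str() << std::endl;
  return 0;
}
```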
39 changes: 28 additions & 11 deletions fastdeploy/vision/detection/contrib/nanodet_plus.h
@@ -23,34 +23,51 @@ namespace fastdeploy {
namespace vision {

namespace detection {

/*! @brief NanoDetPlus model object used to load a NanoDetPlus model exported by NanoDet.
*/
class FASTDEPLOY_DECL NanoDetPlus : public FastDeployModel {
public:
/** \brief Set path of model file and the configuration of runtime.
*
* \param[in] model_file Path of model file, e.g ./nanodet_plus_320.onnx
* \param[in] params_file Path of parameter file, e.g ppyoloe/model.pdiparams, if the model format is ONNX, this parameter will be ignored
* \param[in] custom_option RuntimeOption for inference, the default will use cpu, and choose the backend defined in "valid_cpu_backends"
* \param[in] model_format Model format of the loaded model, default is ONNX format
*/
NanoDetPlus(const std::string& model_file,
const std::string& params_file = "",
const RuntimeOption& custom_option = RuntimeOption(),
const ModelFormat& model_format = ModelFormat::ONNX);

/// Get model's name
std::string ModelName() const { return "nanodet"; }


/** \brief Predict the detection result for an input image
*
* \param[in] im The input image data, comes from cv::imread(), is a 3-D array with layout HWC, BGR format
* \param[in] result The output detection result will be written to this structure
* \param[in] conf_threshold confidence threshold for postprocessing, default is 0.35
* \param[in] nms_iou_threshold IoU threshold for NMS, default is 0.5
* \return true if the prediction succeeded, otherwise false
*/
virtual bool Predict(cv::Mat* im, DetectionResult* result,
float conf_threshold = 0.35f,
float nms_iou_threshold = 0.5f);

// tuple of input size (width, height), e.g (320, 320)
/// tuple of input size (width, height), e.g (320, 320)
std::vector<int> size;
// padding value, size should be same with Channels
/// padding value, size should be the same as channels
std::vector<float> padding_value;
// keep aspect ratio or not when perform resize operation.
// This option is set as `false` by default in NanoDet-Plus.
/*! @brief
keep aspect ratio or not when perform resize operation. This option is set as `false` by default in NanoDet-Plus
*/
bool keep_ratio;
// downsample strides for NanoDet-Plus to generate anchors, will
// take (8, 16, 32, 64) as default values.
/*! @brief
downsample strides for NanoDet-Plus to generate anchors, will take (8, 16, 32, 64) as default values
*/
std::vector<int> downsample_strides;
// for offsetting the boxes by classes when using NMS, default 4096.
/// for offsetting the boxes by classes when using NMS, default 4096
float max_wh;
// reg_max for GFL regression, default 7
/// reg_max for GFL regression, default 7
int reg_max;

private:
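A sketch of the detection interface documented above, passing the documented default thresholds explicitly. The constructor and Predict signatures follow this header; the include path, Initialized(), Str(), and file names are assumptions.

```cpp
#include <iostream>

#include "fastdeploy/vision.h"  // assumed umbrella header

int main() {
  namespace det = fastdeploy::vision::detection;
  // For an ONNX model the params_file argument is ignored, so only the model path is given.
  det::NanoDetPlus model("nanodet_plus_320.onnx");
  if (!model.Initialized()) {
    std::cerr << "Failed to initialize the model." << std::endl;
    return -1;
  }

  cv::Mat im = cv::imread("test.jpg");
  fastdeploy::vision::DetectionResult result;
  // Pass the documented defaults explicitly: conf_threshold = 0.35, nms_iou_threshold = 0.5.
  if (!model.Predict(&im, &result, 0.35f, 0.5f)) {
    std::cerr << "Prediction failed." << std::endl;
    return -1;
  }
  std::cout << result.Str() << std::endl;
  return 0;
}
```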
42 changes: 30 additions & 12 deletions fastdeploy/vision/detection/contrib/scaledyolov4.h
@@ -20,35 +20,53 @@
namespace fastdeploy {
namespace vision {
namespace detection {

/*! @brief ScaledYOLOv4 model object used to load a ScaledYOLOv4 model exported by ScaledYOLOv4.
*/
class FASTDEPLOY_DECL ScaledYOLOv4 : public FastDeployModel {
public:
/** \brief Set path of model file and the configuration of runtime.
*
* \param[in] model_file Path of model file, e.g ./scaled_yolov4.onnx
* \param[in] params_file Path of parameter file, e.g ppyoloe/model.pdiparams, if the model format is ONNX, this parameter will be ignored
* \param[in] custom_option RuntimeOption for inference, the default will use cpu, and choose the backend defined in "valid_cpu_backends"
* \param[in] model_format Model format of the loaded model, default is ONNX format
*/

ScaledYOLOv4(const std::string& model_file,
const std::string& params_file = "",
const RuntimeOption& custom_option = RuntimeOption(),
const ModelFormat& model_format = ModelFormat::ONNX);

virtual std::string ModelName() const { return "ScaledYOLOv4"; }

/** \brief Predict the detection result for an input image
*
* \param[in] im The input image data, comes from cv::imread(), is a 3-D array with layout HWC, BGR format
* \param[in] result The output detection result will be written to this structure
* \param[in] conf_threshold confidence threshold for postprocessing, default is 0.25
* \param[in] nms_iou_threshold IoU threshold for NMS, default is 0.5
* \return true if the prediction succeeded, otherwise false
*/
virtual bool Predict(cv::Mat* im, DetectionResult* result,
float conf_threshold = 0.25,
float nms_iou_threshold = 0.5);

// tuple of (width, height)
/// tuple of (width, height)
std::vector<int> size;
// padding value, size should be same with Channels
/// padding value, size should be the same as channels
std::vector<float> padding_value;
// only pad to the minimum rectangle whose height and width are multiples of stride
/// only pad to the minimum rectangle whose height and width are multiples of stride
bool is_mini_pad;
// while is_mini_pad = false and is_no_pad = true, will resize the image to
// the set size
/*! @brief
while is_mini_pad = false and is_no_pad = true, will resize the image to the set size
*/
bool is_no_pad;
// if is_scale_up is false, the input image can only be zoomed out; the maximum
// resize scale cannot exceed 1.0
/*! @brief
if is_scale_up is false, the input image can only be zoomed out; the maximum resize scale cannot exceed 1.0
*/
bool is_scale_up;
// padding stride, for is_mini_pad
/// padding stride, for is_mini_pad
int stride;
// for offsetting the boxes by classes when using NMS
/// for offsetting the boxes by classes when using NMS
float max_wh;

private:
@@ -70,7 +88,7 @@ class FASTDEPLOY_DECL ScaledYOLOv4 : public FastDeployModel {
// or not.)
// while is_dynamic_shape if 'false', is_mini_pad will force 'false'. This
// value will
// auto check by fastdeploy after the internal Runtime already initialized.
// auto check by fastdeploy after the internal Runtime already initialized
bool is_dynamic_input_;
};
} // namespace detection
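The constructor above accepts a RuntimeOption, so backend and device selection happens before the model is built. The sketch below assumes RuntimeOption exposes UseGpu()/UseCpu()/SetCpuThreadNum() as in other FastDeploy examples; none of those calls appear in this diff.

```cpp
#include <iostream>

#include "fastdeploy/vision.h"

int main() {
  // Configure the runtime first; these option methods are assumed from other
  // FastDeploy examples and are not part of this diff.
  fastdeploy::RuntimeOption option;
  option.UseGpu(0);  // or: option.UseCpu(); option.SetCpuThreadNum(8);

  namespace det = fastdeploy::vision::detection;
  det::ScaledYOLOv4 model("scaled_yolov4.onnx", "", option);  // params_file empty for ONNX
  if (!model.Initialized()) {
    std::cerr << "Failed to initialize the model." << std::endl;
    return -1;
  }

  cv::Mat im = cv::imread("test.jpg");
  fastdeploy::vision::DetectionResult result;
  if (!model.Predict(&im, &result)) {  // documented defaults: 0.25 / 0.5
    std::cerr << "Prediction failed." << std::endl;
    return -1;
  }
  std::cout << result.Str() << std::endl;
  return 0;
}
```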
42 changes: 30 additions & 12 deletions fastdeploy/vision/detection/contrib/yolor.h
@@ -1,4 +1,4 @@
// Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
// Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved. //NOLINT
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
Expand All @@ -20,34 +20,51 @@
namespace fastdeploy {
namespace vision {
namespace detection {

/*! @brief YOLOR model object used to load a YOLOR model exported by YOLOR.
*/
class FASTDEPLOY_DECL YOLOR : public FastDeployModel {
public:
/** \brief Set path of model file and the configuration of runtime.
*
* \param[in] model_file Path of model file, e.g ./yolor.onnx
* \param[in] params_file Path of parameter file, e.g ppyoloe/model.pdiparams, if the model format is ONNX, this parameter will be ignored
* \param[in] custom_option RuntimeOption for inference, the default will use cpu, and choose the backend defined in "valid_cpu_backends"
* \param[in] model_format Model format of the loaded model, default is ONNX format
*/
YOLOR(const std::string& model_file, const std::string& params_file = "",
const RuntimeOption& custom_option = RuntimeOption(),
const ModelFormat& model_format = ModelFormat::ONNX);

virtual std::string ModelName() const { return "YOLOR"; }

/** \brief Predict the detection result for an input image
*
* \param[in] im The input image data, comes from cv::imread(), is a 3-D array with layout HWC, BGR format
* \param[in] result The output detection result will be written to this structure
* \param[in] conf_threshold confidence threshold for postprocessing, default is 0.25
* \param[in] nms_iou_threshold IoU threshold for NMS, default is 0.5
* \return true if the prediction succeeded, otherwise false
*/
virtual bool Predict(cv::Mat* im, DetectionResult* result,
float conf_threshold = 0.25,
float nms_iou_threshold = 0.5);

// tuple of (width, height)
/// tuple of (width, height)
std::vector<int> size;
// padding value, size should be same with Channels
/// padding value, size should be the same as channels
std::vector<float> padding_value;
// only pad to the minimum rectangle whose height and width are multiples of stride
/// only pad to the minimum rectangle whose height and width are multiples of stride
bool is_mini_pad;
// while is_mini_pad = false and is_no_pad = true, will resize the image to
// the set size
/*! @brief
while is_mini_pad = false and is_no_pad = true, will resize the image to the set size
*/
bool is_no_pad;
// if is_scale_up is false, the input image can only be zoomed out; the maximum
// resize scale cannot exceed 1.0
/*! @brief
if is_scale_up is false, the input image can only be zoomed out; the maximum resize scale cannot exceed 1.0
*/
bool is_scale_up;
// padding stride, for is_mini_pad
/// padding stride, for is_mini_pad
int stride;
// for offsetting the boxes by classes when using NMS
/// for offsetting the boxes by classes when using NMS
float max_wh;

private:
@@ -72,6 +89,7 @@ class FASTDEPLOY_DECL YOLOR : public FastDeployModel {
// auto check by fastdeploy after the internal Runtime already initialized.
bool is_dynamic_input_;
};

} // namespace detection
} // namespace vision
} // namespace fastdeploy
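Because the letterbox-related members documented above (size, padding_value, is_no_pad, is_scale_up) are public, they can in principle be tuned before calling Predict. The values below are illustrative only; the constructor defaults are not shown in this diff.

```cpp
#include <iostream>

#include "fastdeploy/vision.h"

int main() {
  namespace det = fastdeploy::vision::detection;
  det::YOLOR model("yolor.onnx");
  if (!model.Initialized()) {
    std::cerr << "Failed to initialize the model." << std::endl;
    return -1;
  }

  // Illustrative tweaks to the public preprocessing members documented above.
  model.size = {1280, 1280};                       // network input (width, height)
  model.is_no_pad = false;                         // keep letterbox padding
  model.is_scale_up = false;                       // never enlarge the input image
  model.padding_value = {114.0f, 114.0f, 114.0f};  // one value per channel

  cv::Mat im = cv::imread("test.jpg");
  fastdeploy::vision::DetectionResult result;
  if (!model.Predict(&im, &result)) {
    std::cerr << "Prediction failed." << std::endl;
    return -1;
  }
  std::cout << result.Str() << std::endl;
  return 0;
}
```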
43 changes: 30 additions & 13 deletions fastdeploy/vision/detection/contrib/yolov5.h
@@ -1,4 +1,4 @@
// Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
// Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved. //NOLINT
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
@@ -21,17 +21,32 @@
namespace fastdeploy {
namespace vision {
namespace detection {

/*! @brief YOLOv5 model object used to load a YOLOv5 model exported by YOLOv5.
*/
class FASTDEPLOY_DECL YOLOv5 : public FastDeployModel {
public:
/** \brief Set path of model file and the configuration of runtime.
*
* \param[in] model_file Path of model file, e.g ./yolov5.onnx
* \param[in] params_file Path of parameter file, e.g ppyoloe/model.pdiparams, if the model format is ONNX, this parameter will be ignored
* \param[in] custom_option RuntimeOption for inference, the default will use cpu, and choose the backend defined in "valid_cpu_backends"
* \param[in] model_format Model format of the loaded model, default is ONNX format
*/
YOLOv5(const std::string& model_file, const std::string& params_file = "",
const RuntimeOption& custom_option = RuntimeOption(),
const ModelFormat& model_format = ModelFormat::ONNX);

~YOLOv5();

std::string ModelName() const { return "yolov5"; }

/** \brief Predict the detection result for an input image
*
* \param[in] im The input image data, comes from cv::imread(), is a 3-D array with layout HWC, BGR format
* \param[in] result The output detection result will be written to this structure
* \param[in] conf_threshold confidence threshold for postprocessing, default is 0.25
* \param[in] nms_iou_threshold IoU threshold for NMS, default is 0.5
* \return true if the prediction succeeded, otherwise false
*/
virtual bool Predict(cv::Mat* im, DetectionResult* result,
float conf_threshold = 0.25,
float nms_iou_threshold = 0.5);
@@ -62,23 +77,25 @@ class FASTDEPLOY_DECL YOLOv5 : public FastDeployModel {
float conf_threshold, float nms_iou_threshold, bool multi_label,
float max_wh = 7680.0);

// tuple of (width, height)
/// tuple of (width, height)
std::vector<int> size_;
// padding value, size should be same with Channels
/// padding value, size should be the same as channels
std::vector<float> padding_value_;
// only pad to the minimum rectangle whose height and width are multiples of stride
/// only pad to the minimum rectangle whose height and width are multiples of stride
bool is_mini_pad_;
// while is_mini_pad = false and is_no_pad = true, will resize the image to
// the set size
/*! @brief
while is_mini_pad = false and is_no_pad = true, will resize the image to the set size
*/
bool is_no_pad_;
// if is_scale_up is false, the input image can only be zoomed out; the maximum
// resize scale cannot exceed 1.0
/*! @brief
if is_scale_up is false, the input image can only be zoomed out; the maximum resize scale cannot exceed 1.0
*/
bool is_scale_up_;
// padding stride, for is_mini_pad
/// padding stride, for is_mini_pad
int stride_;
// for offsetting the boxes by classes when using NMS
/// for offsetting the boxes by classes when using NMS
float max_wh_;
// for different strategies to get boxes when postprocessing
/// for different strategies to get boxes when postprocessing
bool multi_label_;

private:
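A sketch that tightens the documented default thresholds and walks the detection result. The boxes/scores/label_ids field names are assumed from DetectionResult's usual layout and are not shown in this diff.

```cpp
#include <iostream>

#include "fastdeploy/vision.h"

int main() {
  namespace det = fastdeploy::vision::detection;
  det::YOLOv5 model("yolov5s.onnx");
  if (!model.Initialized()) {
    std::cerr << "Failed to initialize the model." << std::endl;
    return -1;
  }

  cv::Mat im = cv::imread("test.jpg");
  fastdeploy::vision::DetectionResult result;
  // Stricter than the documented defaults (conf 0.25, NMS IoU 0.5).
  if (!model.Predict(&im, &result, 0.4f, 0.45f)) {
    std::cerr << "Prediction failed." << std::endl;
    return -1;
  }

  // Assumed DetectionResult fields: boxes (xmin, ymin, xmax, ymax), scores, label_ids.
  for (size_t i = 0; i < result.boxes.size(); ++i) {
    std::cout << "label " << result.label_ids[i]
              << " score " << result.scores[i]
              << " box [" << result.boxes[i][0] << ", " << result.boxes[i][1] << ", "
              << result.boxes[i][2] << ", " << result.boxes[i][3] << "]" << std::endl;
  }
  return 0;
}
```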