
Modify PPMatting backend and docs #182

Merged
merged 82 commits · Sep 7, 2022
Changes from 77 commits
1684b05
first commit for yolov7
ziqi-jin Jul 13, 2022
71c00d9
pybind for yolov7
ziqi-jin Jul 14, 2022
21ab2f9
CPP README.md
ziqi-jin Jul 14, 2022
d63e862
CPP README.md
ziqi-jin Jul 14, 2022
7b3b0e2
modified yolov7.cc
ziqi-jin Jul 14, 2022
d039e80
README.md
ziqi-jin Jul 15, 2022
a34a815
python file modify
ziqi-jin Jul 18, 2022
eb010a8
merge test
ziqi-jin Jul 18, 2022
39f64f2
delete license in fastdeploy/
ziqi-jin Jul 18, 2022
d071b37
repush the conflict part
ziqi-jin Jul 18, 2022
d5026ca
README.md modified
ziqi-jin Jul 18, 2022
fb376ad
README.md modified
ziqi-jin Jul 18, 2022
4b8737c
file path modified
ziqi-jin Jul 18, 2022
ce922a0
file path modified
ziqi-jin Jul 18, 2022
6e00b82
file path modified
ziqi-jin Jul 18, 2022
8c359fb
file path modified
ziqi-jin Jul 18, 2022
906c730
file path modified
ziqi-jin Jul 18, 2022
80c1223
README modified
ziqi-jin Jul 18, 2022
6072757
README modified
ziqi-jin Jul 18, 2022
2c6e6a4
move some helpers to private
ziqi-jin Jul 18, 2022
48136f0
add examples for yolov7
ziqi-jin Jul 18, 2022
6feca92
api.md modified
ziqi-jin Jul 18, 2022
ae70d4f
api.md modified
ziqi-jin Jul 18, 2022
f591b85
api.md modified
ziqi-jin Jul 18, 2022
f0def41
YOLOv7
ziqi-jin Jul 18, 2022
15b9160
yolov7 release link
ziqi-jin Jul 18, 2022
4706e8c
yolov7 release link
ziqi-jin Jul 18, 2022
dc83584
yolov7 release link
ziqi-jin Jul 18, 2022
086debd
copyright
ziqi-jin Jul 18, 2022
4f980b9
change some helpers to private
ziqi-jin Jul 18, 2022
2e61c95
Merge branch 'develop' into develop
ziqi-jin Jul 19, 2022
80beadf
change variables to const and fix documents.
ziqi-jin Jul 19, 2022
8103772
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Jul 19, 2022
f5f7a86
gitignore
ziqi-jin Jul 19, 2022
e6cec25
Transfer some funtions to private member of class
ziqi-jin Jul 19, 2022
e25e4f2
Transfer some funtions to private member of class
ziqi-jin Jul 19, 2022
e8a8439
Merge from develop (#9)
ziqi-jin Jul 20, 2022
a182893
first commit for yolor
ziqi-jin Jul 20, 2022
3aa015f
for merge
ziqi-jin Jul 20, 2022
d6b98aa
Develop (#11)
ziqi-jin Jul 20, 2022
871cfc6
Merge branch 'yolor' into develop
ziqi-jin Jul 20, 2022
013921a
Yolor (#16)
ziqi-jin Jul 21, 2022
7a5a6d9
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Jul 21, 2022
c996117
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Jul 22, 2022
0aefe32
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Jul 26, 2022
2330414
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Jul 26, 2022
4660161
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Jul 27, 2022
033c18e
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Jul 28, 2022
6c94d65
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Jul 28, 2022
85fb256
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Jul 29, 2022
90ca4cb
add is_dynamic for YOLO series (#22)
ziqi-jin Jul 29, 2022
f6a4ed2
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Aug 1, 2022
3682091
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Aug 3, 2022
ca1e110
Merge remote-tracking branch 'upstream/develop' into develop
ziqi-jin Aug 8, 2022
93ba6a6
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Aug 9, 2022
767842e
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Aug 10, 2022
cc32733
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Aug 10, 2022
2771a3b
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Aug 11, 2022
a1e29ac
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Aug 11, 2022
5ecc6fe
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Aug 11, 2022
2780588
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Aug 12, 2022
c00be81
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Aug 15, 2022
9082178
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Aug 15, 2022
4b14f56
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Aug 15, 2022
4876b82
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Aug 16, 2022
9cebb1f
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Aug 18, 2022
d1e3b29
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Aug 19, 2022
69cf0d2
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Aug 22, 2022
2ff10e1
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Aug 23, 2022
a673a2c
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Aug 25, 2022
832d777
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Aug 25, 2022
e513eac
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Aug 29, 2022
ded2054
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Sep 1, 2022
19db925
modify ppmatting backend and docs
ziqi-jin Sep 1, 2022
15be4a6
modify ppmatting docs
ziqi-jin Sep 1, 2022
3a5b93a
fix the PPMatting size problem
ziqi-jin Sep 3, 2022
f765853
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Sep 3, 2022
c2332b0
fix LimitShort's log
ziqi-jin Sep 3, 2022
950f948
retrigger ci
ziqi-jin Sep 4, 2022
64a13c9
modify PPMatting docs
ziqi-jin Sep 4, 2022
09c073d
modify the way for dealing with LimitShort
ziqi-jin Sep 6, 2022
99969b6
Merge branch 'develop' into develop
jiangjiajun Sep 6, 2022
78 changes: 78 additions & 0 deletions csrc/fastdeploy/vision/common/processors/resize_by_long.cc
@@ -0,0 +1,78 @@
// Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

#include "fastdeploy/vision/common/processors/resize_by_long.h"

namespace fastdeploy {
namespace vision {

bool ResizeByLong::CpuRun(Mat* mat) {
cv::Mat* im = mat->GetCpuMat();
int origin_w = im->cols;
int origin_h = im->rows;
double scale = GenerateScale(origin_w, origin_h);
if (use_scale_) {
cv::resize(*im, *im, cv::Size(), scale, scale, interp_);
} else {
int width = static_cast<int>(round(scale * im->cols));
int height = static_cast<int>(round(scale * im->rows));
cv::resize(*im, *im, cv::Size(width, height), 0, 0, interp_);
}
mat->SetWidth(im->cols);
mat->SetHeight(im->rows);
return true;
}

#ifdef ENABLE_OPENCV_CUDA
bool ResizeByLong::GpuRun(Mat* mat) {
cv::cuda::GpuMat* im = mat->GetGpuMat();
int origin_w = im->cols;
int origin_h = im->rows;
double scale = GenerateScale(origin_w, origin_h);
im->convertTo(*im, CV_32FC(im->channels()));
if (use_scale_) {
cv::cuda::resize(*im, *im, cv::Size(), scale, scale, interp_);
} else {
int width = static_cast<int>(round(scale * im->cols));
int height = static_cast<int>(round(scale * im->rows));
cv::cuda::resize(*im, *im, cv::Size(width, height), 0, 0, interp_);
}
mat->SetWidth(im->cols);
mat->SetHeight(im->rows);
return true;
}
#endif

double ResizeByLong::GenerateScale(const int origin_w, const int origin_h) {
int im_size_max = std::max(origin_w, origin_h);
int im_size_min = std::min(origin_w, origin_h);
double scale = 1.0f;
if (target_size_ == -1) {
if (im_size_max > max_size_) {
scale = static_cast<double>(max_size_) / static_cast<double>(im_size_max);
}
} else {
scale =
static_cast<double>(target_size_) / static_cast<double>(im_size_max);
}
return scale;
}

bool ResizeByLong::Run(Mat* mat, int target_size, int interp, bool use_scale,
int max_size, ProcLib lib) {
auto r = ResizeByLong(target_size, interp, use_scale, max_size);
return r(mat, lib);
}
} // namespace vision
} // namespace fastdeploy
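For reference, the long-side scaling rule implemented by `ResizeByLong::GenerateScale` above can be sketched in Python — the helper name and defaults are illustrative, not part of the PR:

```python
def generate_scale(origin_w, origin_h, target_size, max_size=-1):
    # Mirror of ResizeByLong::GenerateScale: scale so the *long* side
    # of the image reaches target_size. When target_size == -1, only
    # shrink images whose long side exceeds max_size; otherwise keep
    # them unchanged.
    im_size_max = max(origin_w, origin_h)
    if target_size == -1:
        if max_size > 0 and im_size_max > max_size:
            return max_size / im_size_max
        return 1.0
    return target_size / im_size_max
```

For example, a 1024×512 image with `target_size=512` yields a scale of 0.5, so both sides are halved and the long side lands exactly on 512.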
49 changes: 49 additions & 0 deletions csrc/fastdeploy/vision/common/processors/resize_by_long.h
@@ -0,0 +1,49 @@
// Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

#pragma once

#include "fastdeploy/vision/common/processors/base.h"

namespace fastdeploy {
namespace vision {

class ResizeByLong : public Processor {
public:
ResizeByLong(int target_size, int interp = 1, bool use_scale = true,
int max_size = -1) {
target_size_ = target_size;
max_size_ = max_size;
interp_ = interp;
use_scale_ = use_scale;
}
bool CpuRun(Mat* mat);
#ifdef ENABLE_OPENCV_CUDA
bool GpuRun(Mat* mat);
#endif
std::string Name() { return "ResizeByLong"; }

static bool Run(Mat* mat, int target_size, int interp = 1,
bool use_scale = true, int max_size = -1,
ProcLib lib = ProcLib::OPENCV_CPU);

private:
double GenerateScale(const int origin_w, const int origin_h);
Collaborator comment: These are plain pass-by-value parameters; the `const` qualifier isn't needed.

int target_size_;
int max_size_;
int interp_;
bool use_scale_;
};
} // namespace vision
} // namespace fastdeploy
1 change: 1 addition & 0 deletions csrc/fastdeploy/vision/common/processors/transform.h
@@ -24,6 +24,7 @@
#include "fastdeploy/vision/common/processors/pad.h"
#include "fastdeploy/vision/common/processors/pad_to_size.h"
#include "fastdeploy/vision/common/processors/resize.h"
#include "fastdeploy/vision/common/processors/resize_by_long.h"
#include "fastdeploy/vision/common/processors/resize_by_short.h"
#include "fastdeploy/vision/common/processors/resize_to_int_mult.h"
#include "fastdeploy/vision/common/processors/stride_pad.h"
46 changes: 44 additions & 2 deletions csrc/fastdeploy/vision/matting/ppmatting/ppmatting.cc
@@ -27,7 +27,7 @@ PPMatting::PPMatting(const std::string& model_file,
const Frontend& model_format) {
config_file_ = config_file;
valid_cpu_backends = {Backend::PDINFER, Backend::ORT};
valid_gpu_backends = {Backend::PDINFER, Backend::ORT, Backend::TRT};
valid_gpu_backends = {Backend::PDINFER, Backend::TRT};
runtime_option = custom_option;
runtime_option.model_format = model_format;
runtime_option.model_file = model_file;
@@ -74,6 +74,10 @@ bool PPMatting::BuildPreprocessPipelineFromConfig() {
if (op["min_short"]) {
min_short = op["min_short"].as<int>();
}
std::cout << "If LimintShort in yaml file, you may transfer PPMatting "
"model by yourself, please make sure your input image's "
"width==hight and not smaller than "
<< max_short << std::endl;
Collaborator comment: Change the log output to FDINFO. Also, `LimitShort` and `height` are misspelled in the message. Suggested replacement: `Detected LimitShort processing step in yaml file, if the model is exported from PaddleSeg, please make sure the input of your model is fixed with a square shape, and equal to " << max_short << "." << std::endl;`

processors_.push_back(
std::make_shared<LimitShort>(max_short, min_short));
} else if (op["type"].as<std::string>() == "ResizeToIntMult") {
@@ -92,6 +96,22 @@ bool PPMatting::BuildPreprocessPipelineFromConfig() {
std = op["std"].as<std::vector<float>>();
}
processors_.push_back(std::make_shared<Normalize>(mean, std));
} else if (op["type"].as<std::string>() == "ResizeByLong") {
int target_size = 512;
Collaborator comment: Why give this a default value?

if (op["target_size"]) {
target_size = op["target_size"].as<int>();
}
processors_.push_back(std::make_shared<ResizeByLong>(target_size));
} else if (op["type"].as<std::string>() == "Pad") {
// size: (w, h)
auto size = op["size"].as<std::vector<int>>();
std::vector<float> value = {114, 114, 114};
Collaborator comment: Why 114? Is this default fill value the one PaddleSeg uses?

if (op["fill_value"]) {
value = op["fill_value"].as<std::vector<float>>();
}
processors_.push_back(std::make_shared<Cast>("float"));
processors_.push_back(
std::make_shared<PadToSize>(size[1], size[0], value));
}
}
processors_.push_back(std::make_shared<HWC2CHW>());
@@ -107,6 +127,14 @@ bool PPMatting::Preprocess(Mat* mat, FDTensor* output,
<< "." << std::endl;
return false;
}
if (processors_[i]->Name().compare("PadToSize") == 0) {
(*im_info)["pad_to_size"] = {static_cast<int>(mat->Height()),
static_cast<int>(mat->Width())};
}
if (processors_[i]->Name().compare("ResizeByLong") == 0) {
(*im_info)["resize_by_long"] = {static_cast<int>(mat->Height()),
static_cast<int>(mat->Width())};
}
}

// Record output shape of preprocessed image
@@ -135,6 +163,8 @@ bool PPMatting::Postprocess(
// First fetch the alpha matte and resize it (using OpenCV)
auto iter_ipt = im_info.find("input_shape");
auto iter_out = im_info.find("output_shape");
auto pad_to_size = im_info.find("pad_to_size");
auto resize_by_long = im_info.find("resize_by_long");
FDASSERT(iter_out != im_info.end() && iter_ipt != im_info.end(),
"Cannot find input_shape or output_shape from im_info.");
int out_h = iter_out->second[0];
@@ -145,7 +175,19 @@ bool PPMatting::Postprocess(
// TODO: switch to FDTensor or Mat operations; this currently depends on cv::Mat
float* alpha_ptr = static_cast<float*>(alpha_tensor.Data());
cv::Mat alpha_zero_copy_ref(out_h, out_w, CV_32FC1, alpha_ptr);
Mat alpha_resized(alpha_zero_copy_ref); // ref-only, zero copy.
cv::Mat cropped_alpha;
if (pad_to_size != im_info.end() && resize_by_long != im_info.end()) {
int resize_h = resize_by_long->second[0];
int resize_w = resize_by_long->second[1];
int pad_h = pad_to_size->second[0];
Collaborator comment: `pad_h` and `pad_w` are retrieved here but never used — what exactly are these two values for?

int pad_w = pad_to_size->second[1];
alpha_zero_copy_ref(cv::Rect(0, 0, resize_w, resize_h))
.copyTo(cropped_alpha);
} else {
cropped_alpha = alpha_zero_copy_ref;
}
Mat alpha_resized(cropped_alpha); // ref-only, zero copy.

if ((out_h != ipt_h) || (out_w != ipt_w)) {
// already allocated a new continuous memory after resize.
// cv::resize(alpha_resized, alpha_resized, cv::Size(ipt_w, ipt_h));
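The Postprocess change above crops the padded model output back to the pre-pad (`ResizeByLong`) region before resizing the alpha matte. A minimal NumPy sketch of that cropping step — the function name is illustrative, and it assumes `im_info` stores `[height, width]` pairs as the C++ code records them:

```python
import numpy as np

def crop_alpha(alpha, im_info):
    """Crop a padded alpha matte back to its pre-pad size.

    When both PadToSize and ResizeByLong ran during preprocessing,
    the network output covers the padded canvas, so only the
    top-left (resize_h, resize_w) region is valid alpha data.
    """
    if "pad_to_size" in im_info and "resize_by_long" in im_info:
        resize_h, resize_w = im_info["resize_by_long"]
        return alpha[:resize_h, :resize_w]
    return alpha
```

This mirrors the `cv::Rect(0, 0, resize_w, resize_h)` crop in the diff; a final resize back to the original input shape would follow, as in the existing code path.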
2 changes: 1 addition & 1 deletion csrc/fastdeploy/vision/matting/ppmatting/ppmatting.h
@@ -27,7 +27,7 @@ class FASTDEPLOY_DECL PPMatting : public FastDeployModel {
const RuntimeOption& custom_option = RuntimeOption(),
const Frontend& model_format = Frontend::PADDLE);

std::string ModelName() const { return "PaddleMat"; }
std::string ModelName() const { return "PaddleMatting"; }

virtual bool Predict(cv::Mat* im, MattingResult* result);

1 change: 1 addition & 0 deletions examples/vision/matting/modnet/python/README.md
@@ -33,6 +33,7 @@ python infer.py --model modnet_photographic_portrait_matting.onnx --image mattin
<img width="200" height="200" float="left" src="https://user-images.githubusercontent.com/67993288/186852116-cf91445b-3a67-45d9-a675-c69fe77c383a.jpg">
<img width="200" height="200" float="left" src="https://user-images.githubusercontent.com/67993288/186851964-4c9086b9-3490-4fcb-82f9-2106c63aa4f3.jpg">
</div>

## MODNet Python接口

```python
2 changes: 1 addition & 1 deletion examples/vision/matting/ppmatting/cpp/README.md
@@ -27,7 +27,7 @@ wget https://bj.bcebos.com/paddlehub/fastdeploy/matting_bgr.jpg

# CPU inference
./infer_demo PP-Matting-512 matting_input.jpg matting_bgr.jpg 0
# GPU inference (TODO: ORT-GPU inference reports an error)
# GPU inference
./infer_demo PP-Matting-512 matting_input.jpg matting_bgr.jpg 1
Collaborator comment: Keep GpuInfer here, but set the backend to Paddle.

# TensorRT inference on GPU
./infer_demo PP-Matting-512 matting_input.jpg matting_bgr.jpg 2
5 changes: 4 additions & 1 deletion examples/vision/matting/ppmatting/cpp/infer.cc
@@ -25,8 +25,10 @@ void CpuInfer(const std::string& model_dir, const std::string& image_file,
auto model_file = model_dir + sep + "model.pdmodel";
auto params_file = model_dir + sep + "model.pdiparams";
auto config_file = model_dir + sep + "deploy.yaml";
auto option = fastdeploy::RuntimeOption();
option.UseOrtBackend();
auto model = fastdeploy::vision::matting::PPMatting(model_file, params_file,
config_file);
config_file, option);
if (!model.Initialized()) {
std::cerr << "Failed to initialize." << std::endl;
return;
@@ -58,6 +60,7 @@ void GpuInfer(const std::string& model_dir, const std::string& image_file,

auto option = fastdeploy::RuntimeOption();
option.UseGpu();
option.UsePaddleBackend();
auto model = fastdeploy::vision::matting::PPMatting(model_file, params_file,
config_file, option);
if (!model.Initialized()) {
2 changes: 1 addition & 1 deletion examples/vision/matting/ppmatting/python/README.md
@@ -19,7 +19,7 @@ wget https://bj.bcebos.com/paddlehub/fastdeploy/matting_input.jpg
wget https://bj.bcebos.com/paddlehub/fastdeploy/matting_bgr.jpg
# CPU inference
python infer.py --model PP-Matting-512 --image matting_input.jpg --bg matting_bgr.jpg --device cpu
# GPU inference (TODO: ORT-GPU inference reports an error)
# GPU inference
python infer.py --model PP-Matting-512 --image matting_input.jpg --bg matting_bgr.jpg --device gpu
Collaborator comment: Same as above.

# TensorRT inference on GPU (note: the first TensorRT run serializes the model, which takes some time; please wait patiently)
python infer.py --model PP-Matting-512 --image matting_input.jpg --bg matting_bgr.jpg --device gpu --use_trt True
3 changes: 2 additions & 1 deletion examples/vision/matting/ppmatting/python/infer.py
@@ -32,9 +32,10 @@ def parse_arguments():

def build_option(args):
option = fd.RuntimeOption()

option.use_ort_backend()
if args.device.lower() == "gpu":
option.use_gpu()
option.use_paddle_backend()

if args.use_trt:
option.use_trt_backend()
Expand Down
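After this change, the effective backend choice in `build_option` can be summarized with a small illustrative helper (not part of the PR; it assumes the later backend call simply overrides the earlier one, and that `--use_trt` takes precedence when set):

```python
def select_backend(device, use_trt):
    # Summarize the new selection logic in infer.py:
    # ONNX Runtime by default on CPU, Paddle Inference on GPU,
    # TensorRT when explicitly requested.
    backend = "ort"
    if device.lower() == "gpu":
        backend = "paddle"
    if use_trt:
        backend = "trt"
    return backend
```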