Modify PPMatting backend and docs #182
Conversation
@@ -50,39 +50,6 @@ void CpuInfer(const std::string& model_dir, const std::string& image_file,
          << std::endl;
}

void GpuInfer(const std::string& model_dir, const std::string& image_file,
Keep GpuInfer here, but set the backend to Paddle:
option.UsePaddleBackend()
@@ -131,8 +98,6 @@ int main(int argc, char* argv[]) {
}
if (std::atoi(argv[4]) == 0) {
Same as above: keep the GPU inference example.
@@ -19,8 +19,6 @@ wget https://bj.bcebos.com/paddlehub/fastdeploy/matting_input.jpg
wget https://bj.bcebos.com/paddlehub/fastdeploy/matting_bgr.jpg
# CPU inference
python infer.py --model PP-Matting-512 --image matting_input.jpg --bg matting_bgr.jpg --device cpu
# GPU inference (TODO: ORT-GPU inference reports an error)
python infer.py --model PP-Matting-512 --image matting_input.jpg --bg matting_bgr.jpg --device gpu
Same as above.
@@ -27,8 +27,6 @@ wget https://bj.bcebos.com/paddlehub/fastdeploy/matting_bgr.jpg

# CPU inference
./infer_demo PP-Matting-512 matting_input.jpg matting_bgr.jpg 0
# GPU inference (TODO: ORT-GPU inference reports an error)
./infer_demo PP-Matting-512 matting_input.jpg matting_bgr.jpg 1
Keep GpuInfer here, but set the backend to Paddle.
@@ -92,6 +96,22 @@ bool PPMatting::BuildPreprocessPipelineFromConfig() {
    std = op["std"].as<std::vector<float>>();
  }
  processors_.push_back(std::make_shared<Normalize>(mean, std));
} else if (op["type"].as<std::string>() == "ResizeByLong") {
  int target_size = 512;
Why give this a default value?
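For context on what the `ResizeByLong` step with `target_size` computes: a resize-by-long-side transform scales the image so its longer edge matches the target, preserving aspect ratio. A minimal Python sketch of that arithmetic (the helper name and signature are hypothetical, not FastDeploy's API):

```python
def resize_by_long(width, height, target_size=512):
    """Scale (width, height) so the longer side equals target_size,
    keeping the aspect ratio; returns the new (width, height)."""
    scale = target_size / max(width, height)
    return round(width * scale), round(height * scale)
```

For a 1024x512 input with the default target of 512, this yields 512x256; a square input stays square, which matters for the square-shape assumption discussed below.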
std::cout << "If LimintShort in yaml file, you may transfer PPMatting "
             "model by yourself, please make sure your input image's "
             "width==hight and not smaller than "
          << max_short << std::endl;
Change this log output to FDINFO.
LimitShort and height are misspelled.
Change it to: Detected LimitShort processing step in yaml file, if the model is exported from PaddleSeg, please make sure the input of your model is fixed with a square shape, and equal to " << max_short << "." << std::endl;
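For reference, a `LimitShort`-style transform caps the image's shorter side at `max_short`, scaling the whole image down when it exceeds that bound. A hypothetical sketch of that semantics (assumed from the `max_short` usage in this PR, not taken from FastDeploy's implementation):

```python
def limit_short(width, height, max_short):
    """If the shorter side exceeds max_short, scale the image down
    so the shorter side equals max_short; otherwise leave it alone."""
    short = min(width, height)
    if short <= max_short:
        return width, height
    scale = max_short / short
    return round(width * scale), round(height * scale)
```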
} else if (op["type"].as<std::string>() == "Pad") {
  // size: (w, h)
  auto size = op["size"].as<std::vector<int>>();
  std::vector<float> value = {114, 114, 114};
Why 114? Is this 114 default the one PaddleSeg uses?
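Whether 114 is PaddleSeg's default is the reviewer's open question; (114, 114, 114) is a gray fill commonly seen in letterbox-style padding in other vision pipelines. A minimal stdlib-only sketch of a pad-to-size step with a configurable fill value (hypothetical helper, padding on the right and bottom only):

```python
def pad_to_size(img, target_w, target_h, value=(114, 114, 114)):
    """Pad a row-major image (list of rows of pixel tuples) on the
    right and bottom so it reaches target_w x target_h."""
    h = len(img)
    w = len(img[0]) if h else 0
    padded = [row + [value] * (target_w - w) for row in img]
    padded += [[value] * target_w for _ in range(target_h - h)]
    return padded
```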
if (processors_[i]->Name().compare("LimitShort") == 0) {
  int input_h = static_cast<int>(mat->Height());
  int input_w = static_cast<int>(mat->Width());
  FDASSERT(input_h == input_w,
FDASSERT is used here, so if the user's input image isn't square it will fail outright, right? Also, input_h/input_w are taken from the input mat, not from the model's input shape, yet the error message says the "model"'s input_shape must be square.
if (option.backend == Backend::PDINFER) {
  if (option.device == Device::CPU) {
    FDWARN << "" << std::endl;
    // some ops
  }
} else {
  FDWARN << "" << std::endl;
  // resize op
}
if (pad_to_size != im_info.end() && resize_by_long != im_info.end()) {
  int resize_h = resize_by_long->second[0];
  int resize_w = resize_by_long->second[1];
  int pad_h = pad_to_size->second[0];
pad_h and pad_w are fetched here but never used afterwards; what exactly are these two values for?
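The reviewer's point can be made concrete: after resize-by-long plus pad, postprocessing only needs resize_h/resize_w to recover the valid region from the padded output, since the padding extent is implied by the output tensor's own shape. A hypothetical sketch of that crop-back step:

```python
def crop_valid_region(alpha, resize_h, resize_w):
    """Crop the top-left resize_h x resize_w valid region out of a
    padded alpha map (row-major list of rows); the padded rows and
    columns beyond the resized extent are discarded."""
    return [row[:resize_w] for row in alpha[:resize_h]]
```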
           "sure the input of your model is fixed with a square shape");
  auto processor = dynamic_cast<LimitShort*>(processors_[i].get());
  int max_short = processor->GetMaxShort();
  FDASSERT(input_h >= max_short && input_w >= max_short,
Same as above: the input_h and input_w obtained here belong to the input image, not to the model.
if args.device.lower() == "gpu":
    option.use_gpu()
    if args.use_trt:
This Paddle GPU check shouldn't add an extra indentation level; it can be written like this (on GPU, if TRT isn't specified, default to Paddle inference):
def build_option(args):
    option = fd.RuntimeOption()
    if args.device.lower() == "gpu":
        option.use_gpu()
        option.use_paddle_backend()
        if args.use_trt:
            option.use_trt_backend()
            option.set_trt_input_shape("img", [1, 3, 512, 512])
    return option
              ProcLib lib = ProcLib::OPENCV_CPU);

 private:
  double GenerateScale(const int origin_w, const int origin_h);
Plain pass-by-value parameters don't need the const qualifier.