diff --git a/doc/cpp_server/ABTEST_IN_PADDLE_SERVING_CN.md b/doc/C++Serving/ABTest_CN.md
old mode 100644
new mode 100755
similarity index 92%
rename from doc/cpp_server/ABTEST_IN_PADDLE_SERVING_CN.md
rename to doc/C++Serving/ABTest_CN.md
index 34d1525b7..c64d57ea0
--- a/doc/cpp_server/ABTEST_IN_PADDLE_SERVING_CN.md
+++ b/doc/C++Serving/ABTest_CN.md
@@ -1,10 +1,10 @@
# How to use Paddle Serving for A/B testing
-(简体中文|[English](./ABTEST_IN_PADDLE_SERVING.md))
+(简体中文|[English](./ABTest_EN.md))
This document uses a text classification task on the IMDB dataset as an example to show how to build an A/B test framework with Paddle Serving. The client-side and server-side structure used in the example is shown in the figure below.
-
+
Note that A/B testing applies only to RPC mode, not to web mode.
@@ -24,13 +24,13 @@ pip install Shapely
````
You can run the following command directly to process the data.
-[python abtest_get_data.py](../python/examples/imdb/abtest_get_data.py)
+[python abtest_get_data.py](../../examples/C++/imdb/abtest_get_data.py)
The Python code in this file processes the data in `test_data/part-0` and writes the processed data to the `processed.data` file.
### Start the server
-Here, the server-side service is started [in Docker](RUN_IN_DOCKER_CN.md).
+Here, the server-side service is started [in Docker](../RUN_IN_DOCKER_CN.md).
First, start the BOW server, which listens on port `8000`:
@@ -62,7 +62,7 @@ exit
You can run the following command directly to make A/B test predictions.
-[python abtest_client.py](../python/examples/imdb/abtest_client.py)
+[python abtest_client.py](../../examples/C++/imdb/abtest_client.py)
```python
from paddle_serving_client import Client
diff --git a/doc/cpp_server/ABTEST_IN_PADDLE_SERVING.md b/doc/C++Serving/ABTest_EN.md
old mode 100644
new mode 100755
similarity index 93%
rename from doc/cpp_server/ABTEST_IN_PADDLE_SERVING.md
rename to doc/C++Serving/ABTest_EN.md
index f250f1a17..edbdbc091
--- a/doc/cpp_server/ABTEST_IN_PADDLE_SERVING.md
+++ b/doc/C++Serving/ABTest_EN.md
@@ -1,10 +1,10 @@
# ABTEST in Paddle Serving
-([简体中文](./ABTEST_IN_PADDLE_SERVING_CN.md)|English)
+([简体中文](./ABTest_CN.md)|English)
This document will use an example of a text classification task based on the IMDB dataset to show how to build an A/B Test framework using Paddle Serving. The structural relationship between the client and servers in the example is shown in the figure below.
-
+
Note that A/B Test is only applicable to RPC mode, not web mode.
@@ -25,13 +25,13 @@ pip install Shapely
You can directly run the following command to process the data.
-[python abtest_get_data.py](../python/examples/imdb/abtest_get_data.py)
+[python abtest_get_data.py](../../examples/C++/imdb/abtest_get_data.py)
The Python code in the file will process the data in `test_data/part-0` and write the results to the `processed.data` file.
### Start Server
-Here, we [use docker](RUN_IN_DOCKER.md) to start the server-side service.
+Here, we [use docker](../RUN_IN_DOCKER.md) to start the server-side service.
First, start the BOW server, which enables the `8000` port:
@@ -63,7 +63,7 @@ Before running, use `pip install paddle-serving-client` to install the paddle-se
You can directly use the following command to make A/B test predictions.
-[python abtest_client.py](../python/examples/imdb/abtest_client.py)
+[python abtest_client.py](../../examples/C++/imdb/abtest_client.py)
[//file]:#abtest_client.py
``` python
diff --git a/doc/C++Serving/Benchmark_CN.md b/doc/C++Serving/Benchmark_CN.md
new file mode 100755
index 000000000..c42219119
--- /dev/null
+++ b/doc/C++Serving/Benchmark_CN.md
@@ -0,0 +1,53 @@
+# C++ Serving vs. TensorFlow Serving performance comparison
+# 1. Test environment and notes
+1) GPU: Tesla P4 (7611 MiB)
+2) CUDA version: 11.0
+3) Model: ResNet_v2_50
+4) To test the effect of asynchronous batch merging, all requests in the test data use batch = 1
+5) [Test code and dataset used](../../examples/C++/PaddleClas/resnet_v2_50)
+6) In the figures below, blue is C++ Serving and gray is TF-Serving
+7) The line chart shows QPS; the higher the value, the more requests handled per second and the better the performance
+8) The bar chart shows mean latency; the higher the value, the longer a single request takes and the worse the performance
+
+# 2. Synchronous mode
+Both services run in synchronous mode with default parameter settings.
+
+
+As can be seen, with default settings in synchronous mode, C++ Serving beats TF-Serving on both QPS and mean latency.
+
+
+
+
+
+
+
+| client_num | model_name | qps (samples/s) | mean latency (ms) | model_name | qps (samples/s) | mean latency (ms) |
+| --- | --- | --- | --- | --- | --- | --- |
+| 10 | pd-serving | 111.336 | 89.787 | tf-serving | 84.632 | 118.13 |
+| 30 | pd-serving | 165.928 | 180.761 | tf-serving | 106.572 | 281.473 |
+| 50 | pd-serving | 207.244 | 241.211 | tf-serving | 80.002 | 624.959 |
+| 70 | pd-serving | 214.769 | 325.894 | tf-serving | 105.17 | 665.561 |
+| 100 | pd-serving | 235.405 | 424.759 | tf-serving | 93.664 | 1067.619 |
+| 150 | pd-serving | 239.114 | 627.279 | tf-serving | 86.312 | 1737.848 |
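+
+As a quick worked check of the synchronous-mode table above, the QPS advantage of C++ Serving over TF-Serving at each concurrency level can be computed directly from the table's values:
+
+```python
+# QPS pairs (C++ Serving, TF-Serving) copied from the synchronous-mode table.
+sync_qps = {10: (111.336, 84.632), 30: (165.928, 106.572), 50: (207.244, 80.002),
+            70: (214.769, 105.17), 100: (235.405, 93.664), 150: (239.114, 86.312)}
+for clients, (pd, tf) in sync_qps.items():
+    print(f"{clients} clients: C++ Serving delivers {pd / tf:.2f}x the QPS of TF-Serving")
+```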
+
+# 3. Asynchronous mode
+Both services run in asynchronous mode with max batch = 32 and 2 asynchronous threads.
+
+
+As can be seen, the two perform similarly in asynchronous mode, but once client-side concurrency reaches 70, TF-Serving simply times out while C++ Serving still returns results normally.
+
+Comparing the synchronous and asynchronous modes also shows that, when request batches are small, merging batches in asynchronous mode effectively improves both QPS and mean latency.
+
+
+
+
+
+
+| client_num | model_name | qps (samples/s) | mean latency (ms) | model_name | qps (samples/s) | mean latency (ms) |
+| --- | --- | --- | --- | --- | --- | --- |
+| 10 | pd-serving | 130.631 | 76.502 | tf-serving | 172.64 | 57.916 |
+| 30 | pd-serving | 201.062 | 149.168 | tf-serving | 241.669 | 124.128 |
+| 50 | pd-serving | 286.01 | 174.764 | tf-serving | 278.744 | 179.367 |
+| 70 | pd-serving | 313.58 | 223.187 | tf-serving | 298.241 | 234.7 |
+| 100 | pd-serving | 323.369 | 309.208 | tf-serving | 0 | ∞ |
+| 150 | pd-serving | 328.248 | 456.933 | tf-serving | 0 | ∞ |
diff --git a/doc/cpp_server/CLIENT_CONFIGURE.md b/doc/C++Serving/Client_Configure_CN.md
old mode 100644
new mode 100755
similarity index 100%
rename from doc/cpp_server/CLIENT_CONFIGURE.md
rename to doc/C++Serving/Client_Configure_CN.md
diff --git a/doc/cpp_server/CREATING.md b/doc/C++Serving/Creat_C++Serving_CN.md
old mode 100644
new mode 100755
similarity index 98%
rename from doc/cpp_server/CREATING.md
rename to doc/C++Serving/Creat_C++Serving_CN.md
index 8442efc79..933385921
--- a/doc/cpp_server/CREATING.md
+++ b/doc/C++Serving/Creat_C++Serving_CN.md
@@ -75,9 +75,9 @@ service ImageClassifyService {
#### 2.2.2 Example configuration
-For details on server-side configuration, see [Serving-side configuration](SERVING_CONFIGURE.md)
+For details on server-side configuration, see [Serving-side configuration](../SERVING_CONFIGURE_CN.md)
-The following configuration file chains ReaderOP, ClassifyOP, and WriteJsonOP into one workflow (for concepts such as OP/workflow, see the [design document](C++DESIGN_CN.md))
+The following configuration file chains ReaderOP, ClassifyOP, and WriteJsonOP into one workflow (for concepts such as OP/workflow, see the [OP introduction](OP_CN.md) and the [DAG introduction](DAG_CN.md))
- Example configuration file:
@@ -310,7 +310,7 @@ api.thrd_finalize();
api.destroy();
```
-For a concrete implementation, see the example sdk-cpp/demo/ximage.cpp provided by Paddle Serving
+For a concrete implementation, see the example provided by C++ Serving: sdk-cpp/demo/ximage.cpp
### 3.3 Linking
@@ -392,4 +392,4 @@ predictors {
}
}
```
-For detailed client-side configuration options, see [CLIENT CONFIGURATION](CLIENT_CONFIGURE.md)
+For detailed client-side configuration options, see [CLIENT CONFIGURATION](Client_Configure_CN.md)
diff --git a/doc/cpp_server/SERVER_DAG_CN.md b/doc/C++Serving/DAG_CN.md
old mode 100644
new mode 100755
similarity index 89%
rename from doc/cpp_server/SERVER_DAG_CN.md
rename to doc/C++Serving/DAG_CN.md
index 8f073c635..fc13ff986
--- a/doc/cpp_server/SERVER_DAG_CN.md
+++ b/doc/C++Serving/DAG_CN.md
@@ -1,6 +1,6 @@
# Computation graph on the server side
-(简体中文|[English](./SERVER_DAG.md))
+(简体中文|[English](DAG_EN.md))
This document introduces the concept of the computation graph on the server side: how to define a computation graph with Paddle Serving's built-in operators, with some examples of sequential execution logic.
@@ -9,7 +9,7 @@
Deep neural networks usually have preprocessing steps on the input data and postprocessing steps on the model's inference scores. Since deep learning frameworks are now very flexible, preprocessing and postprocessing can be done outside the training computation graph. If you want to preprocess input data and postprocess inference results on the server side, the corresponding computation logic must be added on the server. Also, if a user wants to run inference on multiple models with the same input, the best approach is to run them concurrently on the server side for a single client request, saving some network overhead. For these two reasons, a directed acyclic graph (DAG) is the natural main computation method for server-side inference. An example DAG is as follows:
-
+
## How to define a node
@@ -18,7 +18,7 @@
Paddle Serving has some predefined computation nodes in the framework. A very commonly used computation graph is the simple reader-infer-response pattern, which covers most single-model inference scenarios. An example graph and the corresponding DAG definition code are as follows.
-
+
``` python
@@ -47,10 +47,10 @@ python -m paddle_serving_server.serve --model uci_housing_model --thread 10 --po
### Nodes with multiple inputs
-An example with multiple input nodes is given in the [model ensemble in Paddle Serving](./deprecated/MODEL_ENSEMBLE_IN_PADDLE_SERVING_CN.md) document; the diagram and code are as follows.
+An example with multiple input nodes is given in the [model ensemble in Paddle Serving](Model_Ensemble_CN.md) document; the diagram and code are as follows.
-
+
```python
diff --git a/doc/cpp_server/SERVER_DAG.md b/doc/C++Serving/DAG_EN.md
old mode 100644
new mode 100755
similarity index 89%
rename from doc/cpp_server/SERVER_DAG.md
rename to doc/C++Serving/DAG_EN.md
index ae181798c..90b7e0e53
--- a/doc/cpp_server/SERVER_DAG.md
+++ b/doc/C++Serving/DAG_EN.md
@@ -1,6 +1,6 @@
# Computation Graph On Server
-([简体中文](./SERVER_DAG_CN.md)|English)
+([简体中文](./DAG_CN.md)|English)
This document shows the concept of the computation graph on the server, how to define a computation graph with PaddleServing built-in operators, and some examples of sequential execution logic.
@@ -9,7 +9,7 @@ This document shows the concept of computation graph on server. How to define co
Deep neural nets often have some preprocessing steps on input data, and postprocessing steps on model inference scores. Since deep learning frameworks are now very flexible, it is possible to do preprocessing and postprocessing outside the training computation graph. If we want to do input data preprocessing and inference result postprocessing on the server side, we have to add the corresponding computation logics on the server. Moreover, if a user wants to do inference with the same inputs on more than one model, the best way is to do the inference concurrently on the server side for a single client request, so that we can save some network computation overhead. For the above two reasons, it is natural to think of a Directed Acyclic Graph (DAG) as the main computation method for server inference. One example of a DAG is as follows:
-
+
## How to define Node
@@ -19,7 +19,7 @@ Deep neural nets often have some preprocessing steps on input data, and postproc
PaddleServing has some predefined Computation Nodes in the framework. A very commonly used computation graph is the simple reader-inference-response pattern that can cover most single-model inference scenarios. An example graph and the corresponding DAG definition code are as follows.
-
+
``` python
@@ -48,10 +48,10 @@ python -m paddle_serving_server.serve --model uci_housing_model --thread 10 --po
### Nodes with multiple inputs
-An example containing multiple input nodes is given in [MODEL_ENSEMBLE_IN_PADDLE_SERVING](./deprecated/MODEL_ENSEMBLE_IN_PADDLE_SERVING.md). An example graph and the corresponding DAG definition code are as follows.
+An example containing multiple input nodes is given in [Model_Ensemble](Model_Ensemble_EN.md). An example graph and the corresponding DAG definition code are as follows.
-
+
```python
diff --git a/doc/cpp_server/ENCRYPTION_CN.md b/doc/C++Serving/Encryption_CN.md
old mode 100644
new mode 100755
similarity index 86%
rename from doc/cpp_server/ENCRYPTION_CN.md
rename to doc/C++Serving/Encryption_CN.md
index 41713e8aa..77459a21a
--- a/doc/cpp_server/ENCRYPTION_CN.md
+++ b/doc/C++Serving/Encryption_CN.md
@@ -1,6 +1,6 @@
# Encrypted model inference
-(简体中文|[English](ENCRYPTION.md))
+(简体中文|[English](Encryption_EN.md))
Paddle Serving provides encrypted model inference; this document describes the details.
@@ -12,7 +12,7 @@ Padle Serving提供了模型加密预测功能,本文档显示了详细信息
A normal model and its parameters can be viewed as a string; by applying an encryption algorithm to them (keyed with your secret key), they become an encrypted model and parameters.
-We provide a simple demo for encrypting a model; see [`python/examples/encryption/encrypt.py`](../python/examples/encryption/encrypt.py).
+We provide a simple demo for encrypting a model; see [examples/C++/encryption/encrypt.py](../../examples/C++/encryption/encrypt.py).
### Start the encryption service
@@ -40,5 +40,4 @@ python -m paddle_serving_server.serve --model encrypt_server/ --port 9300 --use_
### Example of encrypted model inference
-For an example of encrypted model inference, see [`/python/examples/encryption/`](../python/examples/encryption/).
-
+For an example of encrypted model inference, see [examples/C++/encryption/](../../examples/C++/encryption/).
diff --git a/doc/cpp_server/ENCRYPTION.md b/doc/C++Serving/Encryption_EN.md
old mode 100644
new mode 100755
similarity index 83%
rename from doc/cpp_server/ENCRYPTION.md
rename to doc/C++Serving/Encryption_EN.md
index 89b2c5f8e..3b6274519
--- a/doc/cpp_server/ENCRYPTION.md
+++ b/doc/C++Serving/Encryption_EN.md
@@ -1,6 +1,6 @@
# MODEL ENCRYPTION INFERENCE
-([简体中文](ENCRYPTION_CN.md)|English)
+([简体中文](Encryption_CN.md)|English)
Paddle Serving provides model encryption inference. This document shows the details.
@@ -12,7 +12,7 @@ We use symmetric encryption algorithm to encrypt the model. Symmetric encryption
A normal model and its parameters can be understood as a string; by applying the encryption algorithm (keyed with your key) to them, they become an encrypted model and parameters.
-We provide a simple demo to encrypt the model. See [python/examples/encryption/encrypt.py](../python/examples/encryption/encrypt.py).
+We provide a simple demo to encrypt the model. See [examples/C++/encryption/encrypt.py](../../examples/C++/encryption/encrypt.py).
### Start Encryption Service
@@ -40,5 +40,4 @@ Once the server gets the key, it uses the key to parse the model and starts the
### Example of Model Encryption Inference
-For an example of model encryption inference, see [`/python/examples/encryption/`](../python/examples/encryption/).
-
+For an example of model encryption inference, see [examples/C++/encryption/](../../examples/C++/encryption/).
diff --git a/doc/C++Serving/Frame_Performance_CN.md b/doc/C++Serving/Frame_Performance_CN.md
new file mode 100755
index 000000000..427f1d312
--- /dev/null
+++ b/doc/C++Serving/Frame_Performance_CN.md
@@ -0,0 +1,461 @@
+# C++ Serving framework performance test
+This document builds a Serving prediction service around a text classification task and reports performance data for the Serving framework:
+
+1) Net overhead of the Serving framework
+
+2) Single-thread response time, QPS, accuracy, and other metrics of the prediction service under different models, compared with standalone mode
+
+3) Scalability of Serving under different models
+
+
+# 1. Time breakdown of a single Serving request
+
+The figure below gives a partial breakdown of the stages a serving request spends time in. For brpc, only the cost of creating and starting bthreads is listed.
+
+![](../images/serving-timings.png)
+
+(Right-click to open the full-size image in a new window)
+
+Compare this with standalone mode, which consists of:
+
+1) Filling PaddleTensor from raw samples (a few us to tens of us)
+
+2) Filling LoDTensor from PaddleTensor (a few us to tens of us)
+
+3) Inference (tens of us to hundreds of ms)
+
+4) Filling PaddleTensor from LoDTensor (a few us to tens of us)
+
+5) Reading prediction results from PaddleTensor (a few us to tens of us)
+
+Compared with standalone mode, serving mode adds:
+
+1) Protobuf construction, serialization, and deserialization (a few us to tens of us)
+
+2) Network communication (a dozen or so us on the same machine; 500 us to tens of ms remotely)
+
+3) bthread creation and scheduling (a dozen or so us)
+
+From the client side (total time T2 in the figure), the ratio of the extra time added by serving mode to the inference time largely determines the system throughput observed by the client:
+
+1) When inference takes 10+ ms to several hundred ms (e.g., the CNN text classification model) while serving mode adds only a few ms, the throughput observed by the client is almost the same as in standalone mode
+
+2) When inference takes only a few us to tens of us (e.g., the BOW text classification model) while serving mode adds a few ms, the client-observed throughput drops to 20% of standalone mode or even lower (see the sketch below)
+
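+A back-of-the-envelope sketch of this effect (the timings below are illustrative assumptions consistent with the ranges above, not measurements):
+
+```python
+def client_throughput_ratio(infer_ms: float, serving_overhead_ms: float) -> float:
+    """Fraction of standalone throughput still visible to the client."""
+    return infer_ms / (infer_ms + serving_overhead_ms)
+
+# CNN-like model: inference dominates, so serving overhead barely matters.
+print(client_throughput_ratio(40.0, 2.0))  # ~0.95
+# BOW-like model: overhead dominates, so throughput collapses to ~20% or less.
+print(client_throughput_ratio(0.5, 2.0))   # 0.2
+```
+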
+**To verify this hypothesis, the serving-mode tests for the text classification task are run on several different models, recording in each case how client-side throughput changes in serving mode.**
+
+# 2. Test task and environment
+
+## 2.1 Test task
+
+Two common text classification models: BOW and CNN
+
+**Batch size: all requests in this experiment use a batch size of 50**
+
+## 2.2 Test environment
+
+
+| | CPU model and core count | Memory |
+| --- | --- | --- |
+| Serving machine | Intel(R) Xeon(R) CPU E5-2650 v3 @ 2.30GHz, 40 cores | 128 GB |
+| Client machine | Intel(R) Xeon(R) CPU E5-2650 v3 @ 2.30GHz, 40 cores | 128 GB |
+
+Communication latency between the serving and client machines: 0.102 ms
+
+# 3. Net overhead test
+
+This test characterizes the time consumed per query by the Serving framework itself, measured with the serving side running idle.
+
+"Idle" means the serving side skips the actual prediction computation but keeps the request-unpacking and response-packing logic.
+
+| Model | Net overhead (ms) |
+| --- | --- |
+| BOW | 1 |
+| CNN | 1 |
+
+In C++ Serving mode, the time overhead introduced by the framework is small, about 1 ms.
+
+# 4. Single-thread response time, QPS, and accuracy of the prediction service vs. standalone mode
+
+This test checks whether Serving's accuracy, QPS, and response time show any obvious anomaly compared with standalone mode.
+
+
+| Model | Serving QPS | Serving latency (ms) | Serving accuracy | Standalone QPS | Standalone latency (ms) | Standalone accuracy |
+| --- | --- | --- | --- | --- | --- | --- |
+| BOW | 265.393 | 3 | 0.84348 | 715.973366 | 1.396700 | 0.843480 |
+| CNN | 23.3002 | 42 | 0.8962 | 25.372693 | 39.412450 | 0.896200 |
+
+(Serving: client and serving on the same machine; Standalone: local inference)
+
+Accuracy: prediction accuracy in Serving mode matches standalone mode.
+
+QPS: this depends on the model. For the BOW model, whose prediction time is very short, the framework's fixed overhead and the network communication time dominate each request, so QPS in Serving mode drops sharply relative to standalone mode. For the CNN model, whose prediction time is longer, those costs are a small share of each request, so Serving-mode QPS is close to standalone mode. This confirms the expectation in Section 1.
+
+# 5. Serving scalability
+
+The scalability test proceeds as follows, on each model:
+
+1) Fix the number of system threads used by brpc on the serving side
+
+2) Keep increasing the number of concurrent client requests
+
+3) After running for a while, record on the client side the QPS, mean response time, and per-percentile response times under the current settings
+
+4) Serving and client run on different machines, with a communication latency of 0.102 ms between them
+
+
+## 5.1 Test conclusions
+1) When the model is complex and its prediction time is long (> 10 ms, e.g. the CNN model above), Paddle Serving scales close to linearly.
+2) When the model is simple and its prediction time is short (< 10 ms, e.g. the BOW model above), QPS grows erratically as serving-side threads increase, with no clear linear trend. Presumably, because prediction time is short, thread switching and framework overhead dominate, so although QPS still grows with the thread count, it drops again once concurrency becomes large.
+3) The serving-side thread count N should be chosen from three factors: the maximum concurrent request volume, the number of machine cores, and the prediction time.
+4) When testing models on GPU with short prediction times, do not use too many serving-side threads (threads = 1-4x the core count); otherwise the thread-switching overhead becomes non-negligible.
+5) When testing models on GPU with long prediction times, use somewhat more serving-side threads (threads = 4-20x the core count). Model prediction is a blocking operation from the CPU's point of view: the current thread blocks there (much like a sleep), and if every thread is blocked in prediction, no idle thread remains to run brpc's coroutine workers.
+6) If the machine allows, the serving-side thread count should equal or be slightly below the maximum concurrency. A small helper encoding these rules of thumb is sketched below.
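+
+A minimal sketch of those rules of thumb; the thresholds and multipliers are just the assumptions stated above, not values read from any Serving API:
+
+```python
+def suggest_thread_count(cores: int, predict_ms: float, max_concurrency: int) -> int:
+    """Heuristic serving-side thread count based on the conclusions above."""
+    if predict_ms < 10:           # short predictions: 1x-4x the core count
+        candidate = 4 * cores
+    else:                         # long predictions: 4x-20x the core count
+        candidate = 8 * cores     # an illustrative point inside the 4x-20x range
+    return min(candidate, max_concurrency)  # never exceed expected concurrency
+
+print(suggest_thread_count(cores=40, predict_ms=42.0, max_concurrency=200))  # 200
+```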
+
+
+## 5.2 Test data: BOW model
+
+### Serving with 4 threads
+
+| Concurrency | QPS | Total time (ms) | Mean latency (ms) | P80 latency (ms) | P90 latency (ms) | P99 latency (ms) | P99.9 latency (ms) |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| 4 | 561.325 | 3563 | 7.1265 | 9 | 11 | 23 | 62 |
+| 8 | 807.428 | 4954 | 9.9085 | 7 | 10 | 24 | 31 |
+| 12 | 894.721 | 6706 | 13.4123 | 18 | 22 | 41 | 61 |
+| 16 | 993.542 | 8052 | 16.1057 | 22 | 28 | 47 | 75 |
+| 20 | 834.725 | 11980 | 23.9615 | 32 | 40 | 64 | 81 |
+| 24 | 649.316 | 18481 | 36.962 | 50 | 67 | 149 | 455 |
+| 28 | 709.975 | 19719 | 39.438 | 53 | 76 | 159 | 293 |
+| 32 | 661.868 | 24174 | 48.3495 | 62 | 90 | 294 | 560 |
+| 36 | 551.234 | 32654 | 65.3081 | 83 | 129 | 406 | 508 |
+| 40 | 525.155 | 38084 | 76.1687 | 99 | 143 | 464 | 567 |
+
+### Serving with 8 threads
+
+| Concurrency | QPS | Total time (ms) | Mean latency (ms) | P80 latency (ms) | P90 latency (ms) | P99 latency (ms) | P99.9 latency (ms) |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| 4 | 397.693 | 5029 | 10.0585 | 11 | 15 | 75 | 323 |
+| 8 | 501.567 | 7975 | 15.9515 | 18 | 25 | 113 | 327 |
+| 12 | 598.027 | 10033 | 20.0663 | 24 | 33 | 125 | 390 |
+| 16 | 691.384 | 11571 | 23.1427 | 31 | 42 | 105 | 348 |
+| 20 | 468.099 | 21363 | 42.7272 | 53 | 74 | 232 | 444 |
+| 24 | 424.553 | 28265 | 56.5315 | 67 | 102 | 353 | 448 |
+| 28 | 587.692 | 23822 | 47.6457 | 61 | 83 | 287 | 494 |
+| 32 | 692.911 | 23091 | 46.1833 | 66 | 94 | 184 | 389 |
+| 36 | 809.753 | 22229 | 44.4581 | 59 | 76 | 256 | 556 |
+| 40 | 762.108 | 26243 | 52.4869 | 74 | 98 | 290 | 475 |
+
+### Serving with 12 threads
+
+| Concurrency | QPS | Total time (ms) | Mean latency (ms) | P80 latency (ms) | P90 latency (ms) | P99 latency (ms) | P99.9 latency (ms) |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| 4 | 442.478 | 4520 | 9.0405 | 12 | 15 | 31 | 46 |
+| 8 | 497.884 | 8034 | 16.0688 | 19 | 25 | 130 | 330 |
+| 12 | 797.13 | 7527 | 15.0552 | 16 | 22 | 162 | 326 |
+| 16 | 674.707 | 11857 | 23.7154 | 30 | 42 | 229 | 455 |
+| 20 | 489.956 | 20410 | 40.8209 | 49 | 68 | 304 | 437 |
+| 24 | 452.335 | 26529 | 53.0582 | 66 | 85 | 341 | 414 |
+| 28 | 753.093 | 18590 | 37.1812 | 50 | 65 | 184 | 421 |
+| 32 | 932.498 | 18278 | 36.5578 | 48 | 62 | 109 | 337 |
+| 36 | 932.498 | 19303 | 38.6066 | 54 | 70 | 110 | 164 |
+| 40 | 921.532 | 21703 | 43.4066 | 59 | 75 | 125 | 451 |
+
+### Serving with 16 threads
+
+| Concurrency | QPS | Total time (ms) | Mean latency (ms) | P80 latency (ms) | P90 latency (ms) | P99 latency (ms) | P99.9 latency (ms) |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| 4 | 559.597 | 3574 | 7.1485 | 9 | 11 | 24 | 56 |
+| 8 | 896.66 | 4461 | 8.9225 | 12 | 15 | 23 | 42 |
+| 12 | 1014.37 | 5915 | 11.8305 | 16 | 20 | 34 | 63 |
+| 16 | 1046.98 | 7641 | 15.2837 | 21 | 28 | 48 | 64 |
+| 20 | 1188.64 | 8413 | 16.8276 | 23 | 31 | 55 | 71 |
+| 24 | 1013.43 | 11841 | 23.6833 | 34 | 41 | 63 | 86 |
+| 28 | 933.769 | 14993 | 29.9871 | 41 | 52 | 91 | 149 |
+| 32 | 930.665 | 17192 | 34.3844 | 48 | 60 | 97 | 137 |
+| 36 | 880.153 | 20451 | 40.9023 | 57 | 72 | 118 | 142 |
+| 40 | 939.144 | 21296 | 42.5938 | 59 | 75 | 126 | 163 |
+
+### Serving with 20 threads
+
+| Concurrency | QPS | Total time (ms) | Mean latency (ms) | P80 latency (ms) | P90 latency (ms) | P99 latency (ms) | P99.9 latency (ms) |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| 4 | 686.813 | 2912 | 5.825 | 7 | 9 | 18 | 54 |
+| 8 | 1016.26 | 3936 | 7.87375 | 10 | 13 | 24 | 33 |
+| 12 | 1282.87 | 4677 | 9.35483 | 12 | 15 | 35 | 73 |
+| 16 | 1253.13 | 6384 | 12.7686 | 17 | 23 | 40 | 54 |
+| 20 | 1276.49 | 7834 | 15.6696 | 22 | 28 | 53 | 90 |
+| 24 | 1273.34 | 9424 | 18.8497 | 26 | 35 | 66 | 93 |
+| 28 | 1258.31 | 11126 | 22.2535 | 31 | 41 | 71 | 133 |
+| 32 | 1027.95 | 15565 | 31.1308 | 43 | 54 | 81 | 103 |
+| 36 | 912.316 | 19730 | 39.4612 | 52 | 66 | 106 | 131 |
+| 40 | 808.865 | 24726 | 49.4539 | 64 | 79 | 144 | 196 |
+
+### Serving with 24 threads
+
+| Concurrency | QPS | Total time (ms) | Mean latency (ms) | P80 latency (ms) | P90 latency (ms) | P99 latency (ms) | P99.9 latency (ms) |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| 4 | 635.728 | 3146 | 6.292 | 7 | 10 | 22 | 48 |
+| 8 | 1089.03 | 3673 | 7.346 | 9 | 11 | 21 | 40 |
+| 12 | 1087.55 | 5056 | 10.1135 | 13 | 17 | 41 | 51 |
+| 16 | 1251.17 | 6394 | 12.7898 | 17 | 24 | 39 | 54 |
+| 20 | 1241.31 | 8056 | 16.1136 | 21 | 29 | 51 | 72 |
+| 24 | 1327.29 | 9041 | 18.0837 | 24 | 33 | 59 | 77 |
+| 28 | 1066.02 | 13133 | 26.2664 | 37 | 47 | 84 | 109 |
+| 32 | 1034.33 | 15469 | 30.9384 | 41 | 51 | 94 | 115 |
+| 36 | 896.191 | 20085 | 40.1708 | 55 | 68 | 110 | 168 |
+| 40 | 701.508 | 28510 | 57.0208 | 74 | 88 | 142 | 199 |
+
+### Serving with 28 threads
+
+| Concurrency | QPS | Total time (ms) | Mean latency (ms) | P80 latency (ms) | P90 latency (ms) | P99 latency (ms) | P99.9 latency (ms) |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| 4 | 592.944 | 3373 | 6.746 | 8 | 10 | 21 | 56 |
+| 8 | 1050.14 | 3809 | 7.619 | 9 | 12 | 22 | 41 |
+| 12 | 1220.75 | 4915 | 9.83133 | 13 | 16 | 26 | 51 |
+| 16 | 1178.38 | 6789 | 13.579 | 19 | 24 | 41 | 65 |
+| 20 | 1184.97 | 8439 | 16.8789 | 23 | 30 | 51 | 72 |
+| 24 | 1234.95 | 9717 | 19.4341 | 26 | 34 | 53 | 94 |
+| 28 | 1162.31 | 12045 | 24.0908 | 33 | 40 | 70 | 208 |
+| 32 | 1160.35 | 13789 | 27.5784 | 39 | 47 | 75 | 97 |
+| 36 | 991.79 | 18149 | 36.2987 | 50 | 61 | 91 | 110 |
+| 40 | 952.336 | 21001 | 42.0024 | 58 | 69 | 105 | 136 |
+
+### Serving with 32 threads
+
+| Concurrency | QPS | Total time (ms) | Mean latency (ms) | P80 latency (ms) | P90 latency (ms) | P99 latency (ms) | P99.9 latency (ms) |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| 4 | 654.879 | 3054 | 6.109 | 7 | 9 | 18 | 39 |
+| 8 | 959.463 | 4169 | 8.33925 | 11 | 13 | 24 | 39 |
+| 12 | 1222.99 | 4906 | 9.81367 | 13 | 16 | 30 | 39 |
+| 16 | 1314.71 | 6085 | 12.1704 | 16 | 20 | 35 | 42 |
+| 20 | 1390.63 | 7191 | 14.3837 | 19 | 24 | 40 | 69 |
+| 24 | 1370.8 | 8754 | 17.5096 | 24 | 30 | 45 | 62 |
+| 28 | 1213.8 | 11534 | 23.0696 | 31 | 37 | 60 | 79 |
+| 32 | 1178.2 | 13580 | 27.1601 | 38 | 45 | 68 | 82 |
+| 36 | 1167.69 | 15415 | 30.8312 | 42 | 51 | 77 | 92 |
+| 40 | 950.841 | 21034 | 42.0692 | 55 | 65 | 96 | 137 |
+
+### Serving with 36 threads
+
+| Concurrency | QPS | Total time (ms) | Mean latency (ms) | P80 latency (ms) | P90 latency (ms) | P99 latency (ms) | P99.9 latency (ms) |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| 4 | 611.06 | 3273 | 6.546 | 7 | 10 | 23 | 63 |
+| 8 | 948.992 | 4215 | 8.43 | 10 | 13 | 38 | 87 |
+| 12 | 1081.47 | 5548 | 11.0972 | 15 | 18 | 31 | 37 |
+| 16 | 1319.7 | 6062 | 12.1241 | 16 | 21 | 35 | 64 |
+| 20 | 1246.73 | 8021 | 16.0434 | 22 | 28 | 41 | 47 |
+| 24 | 1210.04 | 9917 | 19.8354 | 28 | 34 | 54 | 70 |
+| 28 | 1013.46 | 13814 | 27.6296 | 37 | 47 | 83 | 125 |
+| 32 | 1104.44 | 14487 | 28.9756 | 41 | 49 | 72 | 88 |
+| 36 | 1089.32 | 16524 | 33.0495 | 45 | 55 | 83 | 107 |
+| 40 | 940.115 | 21274 | 42.5481 | 58 | 68 | 101 | 138 |
+
+### Serving with 40 threads
+
+| Concurrency | QPS | Total time (ms) | Mean latency (ms) | P80 latency (ms) | P90 latency (ms) | P99 latency (ms) | P99.9 latency (ms) |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| 4 | 610.314 | 3277 | 6.555 | 8 | 11 | 20 | 57 |
+| 8 | 1065.34 | 4001 | 8.0035 | 10 | 12 | 23 | 29 |
+| 12 | 1177.86 | 5632 | 11.2645 | 14 | 18 | 33 | 310 |
+| 16 | 1252.74 | 6386 | 12.7723 | 17 | 22 | 40 | 63 |
+| 20 | 1290.16 | 7751 | 15.5036 | 21 | 27 | 47 | 66 |
+| 24 | 1153.07 | 10407 | 20.8159 | 28 | 36 | 64 | 81 |
+| 28 | 1300.39 | 10766 | 21.5326 | 30 | 37 | 60 | 78 |
+| 32 | 1222.4 | 13089 | 26.1786 | 36 | 45 | 75 | 99 |
+| 36 | 1141.55 | 15768 | 31.5374 | 43 | 52 | 83 | 121 |
+| 40 | 1125.24 | 17774 | 35.5489 | 48 | 57 | 93 | 190 |
+
+The figure below shows how Paddle Serving's QPS on the BOW model changes as the serving-side thread count grows. With few threads (4/8/12), QPS varies erratically; with more threads, the QPS curves largely coincide, with essentially no linear growth.
+
+![](../images/qps-threads-bow.png)
+
+(Right-click to open the full-size image in a new window)
+
+## 5.3 Test data: CNN model
+
+### Serving with 4 threads
+
+| Concurrency | QPS | Total time (ms) | Mean latency (ms) | P80 latency (ms) | P90 latency (ms) | P99 latency (ms) | P99.9 latency (ms) |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| 4 | 81.9437 | 24407 | 47 | 55 | 64 | 80 | 91 |
+| 8 | 142.486 | 28073 | 53 | 65 | 71 | 86 | 106 |
+| 12 | 173.732 | 34536 | 66 | 79 | 86 | 105 | 126 |
+| 16 | 174.894 | 45742 | 89 | 101 | 109 | 131 | 151 |
+| 20 | 172.58 | 57944 | 113 | 129 | 138 | 159 | 187 |
+| 24 | 178.216 | 67334 | 132 | 147 | 158 | 189 | 283 |
+| 28 | 171.315 | 81721 | 160 | 180 | 192 | 223 | 291 |
+| 32 | 178.17 | 89802 | 176 | 195 | 208 | 251 | 288 |
+| 36 | 173.762 | 103590 | 204 | 227 | 241 | 278 | 309 |
+| 40 | 177.335 | 112781 | 223 | 246 | 262 | 296 | 315 |
+
+### Serving with 8 threads
+
+| Concurrency | QPS | Total time (ms) | Mean latency (ms) | P80 latency (ms) | P90 latency (ms) | P99 latency (ms) | P99.9 latency (ms) |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| 4 | 86.2999 | 23175 | 44 | 50 | 54 | 72 | 92 |
+| 8 | 143.73 | 27830 | 53 | 65 | 71 | 83 | 91 |
+| 12 | 178.471 | 33619 | 65 | 77 | 85 | 106 | 144 |
+| 16 | 180.485 | 44325 | 86 | 99 | 108 | 131 | 149 |
+| 20 | 180.466 | 55412 | 108 | 122 | 131 | 153 | 170 |
+| 24 | 174.452 | 68787 | 134 | 151 | 162 | 189 | 214 |
+| 28 | 174.158 | 80387 | 157 | 175 | 186 | 214 | 236 |
+| 32 | 172.857 | 92562 | 182 | 202 | 214 | 244 | 277 |
+| 36 | 172.171 | 104547 | 206 | 228 | 241 | 275 | 304 |
+| 40 | 174.435 | 114656 | 226 | 248 | 262 | 306 | 338 |
+
+### Serving with 12 threads
+
+| Concurrency | QPS | Total time (ms) | Mean latency (ms) | P80 latency (ms) | P90 latency (ms) | P99 latency (ms) | P99.9 latency (ms) |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| 4 | 85.6274 | 23357 | 45 | 50 | 55 | 75 | 105 |
+| 8 | 137.632 | 29063 | 55 | 67 | 73 | 88 | 134 |
+| 12 | 187.793 | 31950 | 61 | 73 | 79 | 94 | 123 |
+| 16 | 211.512 | 37823 | 73 | 87 | 94 | 113 | 134 |
+| 20 | 206.624 | 48397 | 93 | 109 | 118 | 145 | 217 |
+| 24 | 209.933 | 57161 | 111 | 128 | 137 | 157 | 190 |
+| 28 | 198.689 | 70462 | 137 | 154 | 162 | 186 | 205 |
+| 32 | 214.024 | 74758 | 146 | 165 | 176 | 204 | 228 |
+| 36 | 223.947 | 80376 | 158 | 177 | 189 | 222 | 282 |
+| 40 | 226.045 | 88478 | 174 | 193 | 204 | 236 | 277 |
+
+### Serving with 16 threads
+
+| Concurrency | QPS | Total time (ms) | Mean latency (ms) | P80 latency (ms) | P90 latency (ms) | P99 latency (ms) | P99.9 latency (ms) |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| 4 | 82.9119 | 24122 | 45 | 52 | 60 | 79 | 99 |
+| 8 | 145.82 | 27431 | 51 | 63 | 69 | 85 | 114 |
+| 12 | 193.287 | 31042 | 59 | 71 | 77 | 92 | 139 |
+| 16 | 240.428 | 33274 | 63 | 76 | 82 | 99 | 127 |
+| 20 | 249.457 | 40087 | 77 | 91 | 99 | 127 | 168 |
+| 24 | 263.673 | 45511 | 87 | 102 | 110 | 136 | 186 |
+| 28 | 272.729 | 51333 | 99 | 115 | 123 | 147 | 189 |
+| 32 | 269.515 | 59366 | 115 | 132 | 140 | 165 | 192 |
+| 36 | 267.4 | 67315 | 131 | 148 | 157 | 184 | 220 |
+| 40 | 264.939 | 75489 | 147 | 164 | 173 | 200 | 235 |
+
+### Serving with 20 threads
+
+| Concurrency | QPS | Total time (ms) | Mean latency (ms) | P80 latency (ms) | P90 latency (ms) | P99 latency (ms) | P99.9 latency (ms) |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| 4 | 85.5615 | 23375 | 44 | 49 | 55 | 73 | 101 |
+| 8 | 148.765 | 26888 | 50 | 61 | 69 | 84 | 97 |
+| 12 | 196.11 | 30595 | 57 | 70 | 75 | 88 | 108 |
+| 16 | 241.087 | 33183 | 63 | 76 | 82 | 98 | 115 |
+| 20 | 291.24 | 34336 | 65 | 66 | 78 | 99 | 114 |
+| 24 | 301.515 | 39799 | 76 | 90 | 97 | 122 | 194 |
+| 28 | 314.303 | 44543 | 86 | 101 | 109 | 132 | 173 |
+| 32 | 327.486 | 48857 | 94 | 109 | 118 | 143 | 196 |
+| 36 | 320.422 | 56176 | 109 | 125 | 133 | 157 | 190 |
+| 40 | 325.399 | 61463 | 120 | 137 | 145 | 174 | 216 |
+
+### Serving with 24 threads
+
+| Concurrency | QPS | Total time (ms) | Mean latency (ms) | P80 latency (ms) | P90 latency (ms) | P99 latency (ms) | P99.9 latency (ms) |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| 4 | 85.6568 | 23349 | 45 | 50 | 57 | 72 | 110 |
+| 8 | 154.919 | 25820 | 48 | 57 | 66 | 81 | 95 |
+| 12 | 221.992 | 27028 | 51 | 61 | 69 | 85 | 100 |
+| 16 | 272.889 | 29316 | 55 | 68 | 74 | 89 | 101 |
+| 20 | 300.906 | 33233 | 63 | 75 | 81 | 95 | 108 |
+| 24 | 326.735 | 36727 | 69 | 82 | 87 | 102 | 114 |
+| 28 | 339.057 | 41291 | 78 | 92 | 99 | 119 | 137 |
+| 32 | 346.868 | 46127 | 88 | 103 | 110 | 130 | 155 |
+| 36 | 338.429 | 53187 | 102 | 117 | 124 | 146 | 170 |
+| 40 | 320.919 | 62321 | 119 | 135 | 144 | 176 | 226 |
+
+### Serving with 28 threads
+
+| Concurrency | QPS | Total time (ms) | Mean latency (ms) | P80 latency (ms) | P90 latency (ms) | P99 latency (ms) | P99.9 latency (ms) |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| 4 | 87.8773 | 22759 | 43 | 48 | 52 | 76 | 112 |
+| 8 | 154.524 | 25886 | 49 | 58 | 66 | 82 | 100 |
+| 12 | 192.709 | 31135 | 59 | 72 | 78 | 93 | 112 |
+| 16 | 253.59 | 31547 | 59 | 72 | 79 | 95 | 129 |
+| 20 | 288.367 | 34678 | 65 | 78 | 84 | 100 | 122 |
+| 24 | 307.653 | 39005 | 73 | 84 | 92 | 116 | 313 |
+| 28 | 334.105 | 41903 | 78 | 90 | 97 | 119 | 140 |
+| 32 | 348.25 | 45944 | 86 | 99 | 107 | 132 | 164 |
+| 36 | 355.661 | 50610 | 96 | 110 | 118 | 143 | 166 |
+| 40 | 350.957 | 56987 | 109 | 124 | 133 | 165 | 221 |
+
+### Serving with 32 threads
+
+| Concurrency | QPS | Total time (ms) | Mean latency (ms) | P80 latency (ms) | P90 latency (ms) | P99 latency (ms) | P99.9 latency (ms) |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| 4 | 87.4088 | 22881 | 43 | 48 | 52 | 70 | 86 |
+| 8 | 150.733 | 26537 | 50 | 60 | 68 | 85 | 102 |
+| 12 | 197.433 | 30390 | 57 | 70 | 75 | 90 | 106 |
+| 16 | 250.917 | 31883 | 60 | 73 | 78 | 94 | 121 |
+| 20 | 286.369 | 34920 | 66 | 78 | 84 | 102 | 131 |
+| 24 | 306.029 | 39212 | 74 | 85 | 92 | 110 | 134 |
+| 28 | 323.902 | 43223 | 81 | 93 | 100 | 122 | 143 |
+| 32 | 341.559 | 46844 | 89 | 102 | 111 | 136 | 161 |
+| 36 | 341.077 | 52774 | 98 | 113 | 124 | 158 | 193 |
+| 40 | 357.814 | 55895 | 107 | 122 | 133 | 166 | 196 |
+
+### Serving with 36 threads
+
+| Concurrency | QPS | Total time (ms) | Mean latency (ms) | P80 latency (ms) | P90 latency (ms) | P99 latency (ms) | P99.9 latency (ms) |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| 4 | 86.9036 | 23014 | 44 | 49 | 53 | 72 | 112 |
+| 8 | 158.964 | 25163 | 48 | 55 | 63 | 79 | 91 |
+| 12 | 205.086 | 29256 | 55 | 68 | 75 | 91 | 168 |
+| 16 | 238.173 | 33589 | 61 | 73 | 79 | 100 | 158 |
+| 20 | 279.705 | 35752 | 67 | 79 | 86 | 106 | 129 |
+| 24 | 318.294 | 37701 | 71 | 82 | 89 | 108 | 129 |
+| 28 | 336.296 | 41630 | 78 | 89 | 97 | 119 | 194 |
+| 32 | 360.295 | 44408 | 84 | 97 | 105 | 130 | 154 |
+| 36 | 353.08 | 50980 | 96 | 113 | 123 | 152 | 179 |
+| 40 | 362.286 | 55205 | 105 | 122 | 134 | 171 | 247 |
+
+### Serving with 40 threads
+
+| Concurrency | QPS | Total time (ms) | Mean latency (ms) | P80 latency (ms) | P90 latency (ms) | P99 latency (ms) | P99.9 latency (ms) |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| 4 | 87.7347 | 22796 | 44 | 48 | 54 | 73 | 114 |
+| 8 | 150.483 | 26581 | 50 | 59 | 67 | 85 | 149 |
+| 12 | 202.088 | 29690 | 56 | 69 | 75 | 90 | 102 |
+| 16 | 250.485 | 31938 | 60 | 74 | 79 | 93 | 113 |
+| 20 | 289.62 | 34528 | 65 | 77 | 83 | 102 | 132 |
+| 24 | 314.408 | 38167 | 72 | 83 | 90 | 110 | 125 |
+| 28 | 321.728 | 43515 | 83 | 95 | 104 | 132 | 159 |
+| 32 | 335.022 | 47758 | 90 | 104 | 114 | 141 | 166 |
+| 36 | 341.452 | 52716 | 101 | 117 | 129 | 170 | 231 |
+| 40 | 347.953 | 57479 | 109 | 130 | 143 | 182 | 216 |
+
+The figure below shows how Paddle Serving's QPS on the CNN model changes as the serving-side thread count grows. QPS grows roughly linearly with the thread count. Read the chart like this: with 16 threads, QPS peaks at about 20 concurrent clients and stays roughly flat under more pressure; with 24 threads, it peaks at about 28 concurrent clients and likewise stays roughly flat beyond that.
+
+![](../images/qps-threads-cnn.png)
+
+(Right-click to open the full-size image in a new window)
diff --git a/doc/cpp_server/HOT_LOADING_IN_SERVING_CN.md b/doc/C++Serving/Hot_Loading_CN.md
old mode 100644
new mode 100755
similarity index 99%
rename from doc/cpp_server/HOT_LOADING_IN_SERVING_CN.md
rename to doc/C++Serving/Hot_Loading_CN.md
index 97a2272cf..66e47a19f
--- a/doc/cpp_server/HOT_LOADING_IN_SERVING_CN.md
+++ b/doc/C++Serving/Hot_Loading_CN.md
@@ -1,6 +1,6 @@
# Hot model loading in Paddle Serving
-(简体中文|[English](HOT_LOADING_IN_SERVING.md))
+(简体中文|[English](Hot_Loading_EN.md))
## Background
diff --git a/doc/cpp_server/HOT_LOADING_IN_SERVING.md b/doc/C++Serving/Hot_Loading_EN.md
old mode 100644
new mode 100755
similarity index 99%
rename from doc/cpp_server/HOT_LOADING_IN_SERVING.md
rename to doc/C++Serving/Hot_Loading_EN.md
index 94575ca51..9f95ca558
--- a/doc/cpp_server/HOT_LOADING_IN_SERVING.md
+++ b/doc/C++Serving/Hot_Loading_EN.md
@@ -1,6 +1,6 @@
# Hot Loading in Paddle Serving
-([简体中文](HOT_LOADING_IN_SERVING_CN.md)|English)
+([简体中文](Hot_Loading_CN.md)|English)
## Background
diff --git a/doc/HTTP_SERVICE_CN.md b/doc/C++Serving/Http_Service_CN.md
old mode 100644
new mode 100755
similarity index 97%
rename from doc/HTTP_SERVICE_CN.md
rename to doc/C++Serving/Http_Service_CN.md
index 040656ea1..96f9e841e
--- a/doc/HTTP_SERVICE_CN.md
+++ b/doc/C++Serving/Http_Service_CN.md
@@ -14,12 +14,10 @@ BRPC-Server会尝试去JSON字符串中再去反序列化出Proto格式的数据
All major languages support ProtoBuf. If you are familiar with it, you can serialize your data with ProtoBuf first, put the serialized bytes into the HTTP request body, and set Content-Type: application/proto, so the service is accessed with an http/h2+protobuf binary payload.
In our tests, as the data volume grows, both the payload size and the deserialization time of JSON-over-HTTP increase sharply. When your data volume is large, we recommend the HTTP+protobuf approach, which is currently supported in the Java and Python clients.
-**In theory, serialization/deserialization performance ranks, from fastest to slowest: protobuf > http/h2+protobuf > http**
-
## Example
-We take python/examples/fit_a_line as an example to explain how to access the server over HTTP.
+We take examples/C++/fit_a_line as an example to explain how to access the server over HTTP.
### Get the model
diff --git a/doc/C++Serving/Introduction_CN.md b/doc/C++Serving/Introduction_CN.md
new file mode 100755
index 000000000..573e3cebb
--- /dev/null
+++ b/doc/C++Serving/Introduction_CN.md
@@ -0,0 +1,89 @@
+# A brief introduction to C++ Serving
+## When to use it
+C++ Serving is built for performance. If you want an enterprise-grade, high-performance online inference service with requirements on high concurrency and low latency, the C++ Serving framework is likely the better fit. Whether in synchronous or asynchronous mode, [C++ Serving outperforms TensorFlow Serving](Benchmark_CN.md).
+
+C++ Serving uses brpc as its network framework. Its core execution engine is written in C/C++ and provides strong industrial-grade capabilities, including hot model loading, encrypted model deployment, A/B testing, multi-model composition, synchronous/asynchronous modes, and multi-language, multi-protocol clients.
+
+## 1. Network framework (BRPC)
+C++ Serving uses the [brpc framework](https://github.com/apache/incubator-brpc) for client/server communication. brpc is an RPC framework open-sourced by Baidu, featuring high concurrency and low latency; it already backs over a million online prediction instances and thousands of online prediction services at Baidu, and is stable and reliable. Compared with gRPC, it offers lower latency and higher concurrency, and natively supports multiple protocols including **brpc/grpc/http+json/http+proto**; its weakness is limited cross-platform portability. See the [C++ Serving framework performance test](Frame_Performance_CN.md) for detailed framework overhead.
+
+## 2. Core execution engine
+The core execution engine of C++ Serving is a directed acyclic graph (a [DAG](DAG_CN.md)). Each node in the DAG (borrowing the operator concept from models, a node is also called an [OP](OP_CN.md) in Paddle Serving) represents one stage of the prediction service. OPs can be combined in series and in parallel, so one service can integrate predictions from multiple models into a final result. The overall architecture, shown below, splits into a Client Side and a Server Side.
+
+
+
+
+
+
+### 2.1 Client Side
+As shown in the figure, the client serializes a Request via the Pybind API according to the ProtoBuf protocol and sends it to the server through the BRPC client. It then waits for the server's reply, deserializes it into normal data, and returns the result to the caller.
+
+### 2.2 Server Side
+After receiving a serialized Request, the server deserializes it and hands it to the graph execution engine, which runs each OP stage of the configured DAG (each stage is user-defined: it can be pure data processing, or a call into the prediction engine that runs different models on the input). When all OP stages in the DAG have finished, the result is serialized and returned to the client.
+
+### 2.3 Communication data format: ProtoBuf
+Protocol Buffers (Protobuf) is Google's serialization framework: language-independent, platform-independent, and highly extensible. Like any serialization framework, it can be used for data storage and communication protocols. Protobuf generates code for Java, Python, C++, Go, JavaNano, Ruby, and C#. Its serialized output is much smaller than XML or JSON, and much faster to produce and parse.
+
+C++ Serving defines the ProtoBuf messages exchanged between the Client Side and the Server Side; see [Introduction to the C++ Serving ProtoBuf](Inference_Protocols_CN.md) for field details.
+
+## 3. Server-side features
+### 3.1 Starting the server
+The core of the server side is a binary executable named serving, built from the project code. Starting serving requires a number of user-specified parameters (**e.g., network IP and port, brpc thread count, which GPU to use, model file path, whether to enable TensorRT or XPU inference, model precision, and so on**). Some are passed directly on the command line; others are written in a designated configuration file.
+
+To help users start the C++ Serving server quickly, besides editing the configuration file and running the serving binary with command-line arguments yourself, we also provide a Python-script launcher. The Python script still ultimately runs the serving binary, but it automates two things: 1. generating the configuration file; 2. assembling the command line from the requested parameters and running the serving binary with it. A minimal sketch follows.
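+
+As a minimal illustration (the model path, port, and workdir are placeholders), a Python launch can be sketched with the same `paddle_serving_server` API that the model-ensemble example in these docs uses:
+
+```python
+from paddle_serving_server import OpMaker, OpGraphMaker, Server
+
+# Build the common reader -> infer -> response DAG.
+op_maker = OpMaker()
+read_op = op_maker.create('general_reader')
+infer_op = op_maker.create('general_infer', inputs=[read_op])
+response_op = op_maker.create('general_response', inputs=[infer_op])
+
+op_graph_maker = OpGraphMaker()
+for op in (read_op, infer_op, response_op):
+    op_graph_maker.add_op(op)
+
+server = Server()
+server.set_op_graph(op_graph_maker.get_op_graph())
+server.load_model_config({infer_op: 'serving_model'})  # placeholder model path
+server.prepare_server(workdir="work_dir", port=9393, device="cpu")
+server.run_server()
+```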
+
+For more details and examples, see the [detailed guide to C++ Serving parameters and startup](../SERVING_CONFIGURE_CN.md).
+
+### 3.2 Synchronous/asynchronous mode
+Synchronous mode is simple and direct; it suits models with short prediction times whose individual Requests already carry a fairly large batch.
+In synchronous mode, serving-side threads N = prediction engine instances N = concurrently handled Requests N; excess Requests wait until a current thread finishes before being handled.
+
+
+
+Asynchronous mode mainly suits models that support large batches (the maximum batch size M is configurable), where each Request carries a small batch (batch << M) and a single prediction takes long.
+In asynchronous mode, the N serving-side threads only receive Requests; the prediction engine is invoked from the asynchronous framework's own threads, whose count is configurable. For intuition, suppose every Request has batch = 1 (a sketch of the merge/split logic follows): the asynchronous framework takes as many Requests as possible, n (n <= M), from the request pool, assembles them into one Request with batch = n, calls the prediction engine once, gets one Response with batch = n, and splits it back into n Responses as the returned results.
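+
+A minimal Python sketch of that merge/split logic under the batch = 1 assumption (an illustration of the idea only, not the actual C++ scheduler):
+
+```python
+# Illustrative only: merge up to M batch-1 requests into one engine call,
+# then split the batched response back into per-request responses.
+def handle_pool(request_pool, engine, M=32):
+    n = min(M, len(request_pool))
+    requests = [request_pool.pop(0) for _ in range(n)]
+    merged = [sample for req in requests for sample in req]  # one request, batch = n
+    batched_response = engine(merged)                        # a single engine call
+    return [batched_response[i:i + 1] for i in range(n)]     # n batch-1 responses
+
+pool = [["req-%d" % i] for i in range(5)]  # five batch-1 requests in the pool
+print(handle_pool(pool, engine=lambda batch: ["out-" + s for s in batch]))
+```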
+
+
+
+
+For more on mode parameters and performance tuning, see [C++ Serving performance tuning](Performance_Tuning_CN.md).
+
+### 3.3 Multi-model composition
+When several models must be combined behind a single exposed service interface, the usual solution is a two-layer service: the inner layer runs model prediction, and the outer layer handles chaining and pre/post-processing. With small payloads the overhead is minor, but with large outputs the network-transfer cost is significant (in our tests, one RPC carrying 40 MB of data took 160-170 ms).
+
+
+
+
+
+
+
+The C++ Serving framework supports expressing serial/parallel combinations of multiple models via [custom DAGs](Model_Ensemble_CN.md), and lets users [develop custom OP nodes in C++](OP_CN.md). Compared with the two-layer approach, handling multiple models inside one service saves one RPC transfer, which improves performance, especially when the RPC payload is large.
+
+### 3.4 Model management and hot loading
+The C++ Serving engine supports model management, covering multiple models and multiple model versions. To keep the inference service available while models are swapped, models must be hot-loaded without interrupting the service. C++ Serving supports this, and provides a tool that watches for newly produced models and updates the local model; see [Hot model loading in C++ Serving](Hot_Loading_CN.md) for a concrete example.
+
+### 3.5 Model encryption and decryption
+C++ Serving encrypts models with a symmetric algorithm and decrypts them in memory while the service loads them. This provides baseline model security rather than an absolute guarantee; users can build on our design to reach a higher security level. See [Encrypted model inference in C++ Serving](Encryption_CN.md).
+
+## 4. Client-side features
+### 4.1 A/B Test
+After thorough offline evaluation, a model usually goes through online A/B testing before a full rollout. The figure below shows the basic structure of A/B testing with Paddle Serving: once the client is configured accordingly, it automatically splits traffic across different servers, completing the A/B test. See [How to use Paddle Serving for A/B testing](ABTest_CN.md) for a concrete example; a minimal client-side sketch follows.
+
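+A minimal client-side sketch, assuming the `add_variant(tag, endpoints, weight)` call shown in the A/B test document (the addresses, tags, and weights here are placeholders):
+
+```python
+from paddle_serving_client import Client
+
+client = Client()
+client.load_client_config('serving_client_conf/serving_client_conf.prototxt')
+# Send ~30% of the traffic to the "bow" servers and ~70% to the "cnn" servers.
+client.add_variant("bow", ["127.0.0.1:8000"], 30)
+client.add_variant("cnn", ["127.0.0.1:9000"], 70)
+client.connect()
+```
+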
+
+
+
+
+
+
+### 4.2 Multi-language, multi-protocol clients
+The BRPC network framework supports [multiple underlying communication protocols](#1-network-framework-brpc). With the current C++ Serving server, clients written in any language, or even plain curl, can be served, as long as the data is packed and sent according to one of these protocols (see the [brpc site](https://github.com/apache/incubator-brpc) for the full list of supported protocols).
+
+For the supported protocols we provide some Client SDK examples for reference. Users can also develop new Client SDKs for their own needs, and we welcome contributions of Client SDKs for other languages/protocols (e.g., GRPC-Go, GRPC-C++, HTTP2-Go, HTTP2-Java) to our repository for other developers to build on.
+
+| Protocol | Speed | Supported | Client SDK provided |
+|-------------|-----|---------|-------------------|
+| BRPC | Fastest | Yes | [C++](../../core/general-client/README_CN.md), [Python (via Pybind)](../../examples/C++/fit_a_line/README_CN.md) |
+| HTTP2+Proto | Fast | Yes | coming soon |
+| GRPC | Fast | Yes | [Java](../../java/README_CN.md), [Python](../../examples/C++/fit_a_line/README_CN.md) |
+| HTTP1+Proto | Moderate | Yes | [Java](../../java/README_CN.md), [Python](../../examples/C++/fit_a_line/README_CN.md) |
+| HTTP1+Json | Slow | Yes | [Java](../../java/README_CN.md), [Python](../../examples/C++/fit_a_line/README_CN.md), [Curl](Http_Service_CN.md) |
diff --git a/doc/C++Serving/Model_Ensemble_CN.md b/doc/C++Serving/Model_Ensemble_CN.md
new file mode 100755
index 000000000..5517a2505
--- /dev/null
+++ b/doc/C++Serving/Model_Ensemble_CN.md
@@ -0,0 +1,121 @@
+# Model ensemble in Paddle Serving
+
+(简体中文|[English](Model_Ensemble_EN.md))
+
+In some scenarios, several models that take the same input may be ensembled in parallel to obtain better prediction results. Paddle Serving provides this feature.
+
+Below, a text classification task demonstrates Paddle Serving's ensemble prediction (for now the models still run serially; we will support parallel execution as soon as possible).
+
+## Ensemble prediction example
+
+In this example (see the figure below), the server runs the BOW and CNN models on the same input inside one service; the client fetches both models' predictions and post-processes them into the final result.
+
+![simple example](../images/model_ensemble_example.png)
+
+Note that, for now, only multiple models with identical input and output formats are supported within one service. In this example, the CNN and BOW models share the same input/output format.
+
+The code used in this example lives under `examples/C++/imdb`:
+
+```shell
+.
+├── get_data.sh
+├── imdb_reader.py
+├── test_ensemble_client.py
+└── test_ensemble_server.py
+```
+
+### Prepare the data
+
+Download the pre-trained CNN and BOW models with the following commands (or simply run the `get_data.sh` script):
+
+```shell
+wget --no-check-certificate https://fleet.bj.bcebos.com/text_classification_data.tar.gz
+wget --no-check-certificate https://paddle-serving.bj.bcebos.com/imdb-demo/imdb_model.tar.gz
+tar -zxvf text_classification_data.tar.gz
+tar -zxvf imdb_model.tar.gz
+```
+
+### Start the server
+
+Start the server with the following Python code (or simply run the `test_ensemble_server.py` script):
+
+```python
+from paddle_serving_server import OpMaker
+from paddle_serving_server import OpGraphMaker
+from paddle_serving_server import Server
+
+op_maker = OpMaker()
+read_op = op_maker.create('general_reader')
+cnn_infer_op = op_maker.create(
+ 'general_infer', engine_name='cnn', inputs=[read_op])
+bow_infer_op = op_maker.create(
+ 'general_infer', engine_name='bow', inputs=[read_op])
+response_op = op_maker.create(
+ 'general_response', inputs=[cnn_infer_op, bow_infer_op])
+
+op_graph_maker = OpGraphMaker()
+op_graph_maker.add_op(read_op)
+op_graph_maker.add_op(cnn_infer_op)
+op_graph_maker.add_op(bow_infer_op)
+op_graph_maker.add_op(response_op)
+
+server = Server()
+server.set_op_graph(op_graph_maker.get_op_graph())
+model_config = {cnn_infer_op: 'imdb_cnn_model', bow_infer_op: 'imdb_bow_model'}
+server.load_model_config(model_config)
+server.prepare_server(workdir="work_dir1", port=9393, device="cpu")
+server.run_server()
+```
+
+Unlike an ordinary prediction service, here the server-side execution logic is described with a DAG.
+
+When creating an Op, you must specify its predecessors (here, the predecessor of both `cnn_infer_op` and `bow_infer_op` is `read_op`, and the predecessors of `response_op` are `cnn_infer_op` and `bow_infer_op`). For an inference Op, you should also define the prediction engine name `engine_name` (the default works, but setting it explicitly makes it easier for the client to pick out each model's results).
+
+When configuring model paths, build a model-configuration dictionary keyed by inference Op, with the corresponding model path as the value, to tell Serving which model each inference Op uses.
+
+### Start the client
+
+Run the client with the following Python code (or simply run the `test_ensemble_client.py` script):
+
+```python
+from paddle_serving_client import Client
+from imdb_reader import IMDBDataset
+
+client = Client()
+# If you have more than one model, make sure that the input
+# and output of more than one model are the same.
+client.load_client_config('imdb_bow_client_conf/serving_client_conf.prototxt')
+client.connect(["127.0.0.1:9393"])
+
+# you can define any english sentence or dataset here
+# This example reuses imdb reader in training, you
+# can define your own data preprocessing easily.
+imdb_dataset = IMDBDataset()
+imdb_dataset.load_resource('imdb.vocab')
+
+for i in range(3):
+ line = 'i am very sad | 0'
+ word_ids, label = imdb_dataset.get_words_and_label(line)
+ feed = {"words": word_ids}
+ fetch = ["acc", "cost", "prediction"]
+ fetch_maps = client.predict(feed=feed, fetch=fetch)
+ if len(fetch_maps) == 1:
+ print("step: {}, res: {}".format(i, fetch_maps['prediction'][0][1]))
+ else:
+ for model, fetch_map in fetch_maps.items():
+ print("step: {}, model: {}, res: {}".format(i, model, fetch_map[
+ 'prediction'][0][1]))
+```
+
+The client changes little relative to an ordinary prediction service. When multiple models are used, the service returns a dictionary whose keys are the engine names (`engine_name`, defined on the server side) and whose values are the corresponding models' prediction results.
+
+### Expected output
+
+```txt
+step: 0, model: cnn, res: 0.560272455215
+step: 0, model: bow, res: 0.633530199528
+step: 1, model: cnn, res: 0.560272455215
+step: 1, model: bow, res: 0.633530199528
+step: 2, model: cnn, res: 0.560272455215
+step: 2, model: bow, res: 0.633530199528
+```
diff --git a/doc/C++Serving/Model_Ensemble_EN.md b/doc/C++Serving/Model_Ensemble_EN.md
new file mode 100755
index 000000000..071e77731
--- /dev/null
+++ b/doc/C++Serving/Model_Ensemble_EN.md
@@ -0,0 +1,121 @@
+# Model Ensemble in Paddle Serving
+
+([简体中文](Model_Ensemble_CN.md)|English)
+
+In some scenarios, several models that take the same input may be run in parallel and their predictions combined for a better result. Paddle Serving also supports this feature.
+
+Next, we will take the text classification task as an example to show model ensembles in Paddle Serving (for the time being this feature still runs the models serially; we will support parallel prediction as soon as possible).
+
+## Simple example
+
+In this example (see the figure below), the server side predicts with the BOW and CNN models on the same input within one service, in parallel; the client side fetches the prediction results of the two models and post-processes them to get the final result.
+
+![simple example](../images/model_ensemble_example.png)
+
+Note that at present, only multiple models with the same input and output formats are supported within one service. In this example, the input and output formats of the CNN and BOW models are the same.
+
+The code used in the example is saved in the `examples/C++/imdb` path:
+
+```shell
+.
+├── get_data.sh
+├── imdb_reader.py
+├── test_ensemble_client.py
+└── test_ensemble_server.py
+```
+
+### Prepare data
+
+Get the pre-trained CNN and BOW models by the following command (you can also run the `get_data.sh` script):
+
+```shell
+wget --no-check-certificate https://fleet.bj.bcebos.com/text_classification_data.tar.gz
+wget --no-check-certificate https://paddle-serving.bj.bcebos.com/imdb-demo/imdb_model.tar.gz
+tar -zxvf text_classification_data.tar.gz
+tar -zxvf imdb_model.tar.gz
+```
+
+### Start server
+
+Start server by the following Python code (you can also run the `test_ensemble_server.py` script):
+
+```python
+from paddle_serving_server import OpMaker
+from paddle_serving_server import OpGraphMaker
+from paddle_serving_server import Server
+
+op_maker = OpMaker()
+read_op = op_maker.create('general_reader')
+cnn_infer_op = op_maker.create(
+ 'general_infer', engine_name='cnn', inputs=[read_op])
+bow_infer_op = op_maker.create(
+ 'general_infer', engine_name='bow', inputs=[read_op])
+response_op = op_maker.create(
+ 'general_response', inputs=[cnn_infer_op, bow_infer_op])
+
+op_graph_maker = OpGraphMaker()
+op_graph_maker.add_op(read_op)
+op_graph_maker.add_op(cnn_infer_op)
+op_graph_maker.add_op(bow_infer_op)
+op_graph_maker.add_op(response_op)
+
+server = Server()
+server.set_op_graph(op_graph_maker.get_op_graph())
+model_config = {cnn_infer_op: 'imdb_cnn_model', bow_infer_op: 'imdb_bow_model'}
+server.load_model_config(model_config)
+server.prepare_server(workdir="work_dir1", port=9393, device="cpu")
+server.run_server()
+```
+
+Unlike a normal prediction service, here we use a DAG to describe the server-side logic.
+
+When creating an Op, you need to specify the predecessors of the current Op (in this example, the predecessor of `cnn_infer_op` and `bow_infer_op` is `read_op`, and the predecessors of `response_op` are `cnn_infer_op` and `bow_infer_op`). For the infer Op `infer_op`, you also need to define the prediction engine name `engine_name` (you can use the default value, but setting it explicitly makes it easier for the client side to pick out each model's prediction results).
+
+At the same time, when configuring the model path, you need to create a model configuration dictionary with the infer Op as the key and the corresponding model path as value to inform Serving which model each infer OP uses.
+
+### Start client
+
+Start client by the following Python code (you can also run the `test_ensemble_client.py` script):
+
+```python
+from paddle_serving_client import Client
+from imdb_reader import IMDBDataset
+
+client = Client()
+# If you have more than one model, make sure that the input
+# and output of more than one model are the same.
+client.load_client_config('imdb_bow_client_conf/serving_client_conf.prototxt')
+client.connect(["127.0.0.1:9393"])
+
+# you can define any english sentence or dataset here
+# This example reuses imdb reader in training, you
+# can define your own data preprocessing easily.
+imdb_dataset = IMDBDataset()
+imdb_dataset.load_resource('imdb.vocab')
+
+for i in range(3):
+ line = 'i am very sad | 0'
+ word_ids, label = imdb_dataset.get_words_and_label(line)
+ feed = {"words": word_ids}
+ fetch = ["acc", "cost", "prediction"]
+ fetch_maps = client.predict(feed=feed, fetch=fetch)
+ if len(fetch_maps) == 1:
+ print("step: {}, res: {}".format(i, fetch_maps['prediction'][0][1]))
+ else:
+ for model, fetch_map in fetch_maps.items():
+ print("step: {}, model: {}, res: {}".format(i, model, fetch_map[
+ 'prediction'][0][1]))
+```
+
+Compared with the normal prediction service, the client side has not changed much. When multiple models are used for prediction, the prediction service returns a dictionary whose keys are the engine names `engine_name` (defined on the server side) and whose values are the corresponding models' prediction results.
+
+### Expected result
+
+```shell
+step: 0, model: cnn, res: 0.560272455215
+step: 0, model: bow, res: 0.633530199528
+step: 1, model: cnn, res: 0.560272455215
+step: 1, model: bow, res: 0.633530199528
+step: 2, model: cnn, res: 0.560272455215
+step: 2, model: bow, res: 0.633530199528
+```
diff --git a/doc/cpp_server/NEW_OPERATOR_CN.md b/doc/C++Serving/OP_CN.md
old mode 100644
new mode 100755
similarity index 99%
rename from doc/cpp_server/NEW_OPERATOR_CN.md
rename to doc/C++Serving/OP_CN.md
index d659b5f32..f0e579525
--- a/doc/cpp_server/NEW_OPERATOR_CN.md
+++ b/doc/C++Serving/OP_CN.md
@@ -1,6 +1,6 @@
# How to develop a new general Op?
-(简体中文|[English](./NEW_OPERATOR.md))
+(简体中文|[English](OP_EN.md))
In this document, we focus on how to develop a new server-side operator for Paddle Serving. Before writing a new operator, let's look at some sample code to get the basic idea of writing one; we assume you already know the basic server-side computation logic of Paddle Serving. The code below can be found in the `core/general-server/op` directory of the Serving repo.
diff --git a/doc/cpp_server/NEW_OPERATOR.md b/doc/C++Serving/OP_EN.md
old mode 100644
new mode 100755
similarity index 99%
rename from doc/cpp_server/NEW_OPERATOR.md
rename to doc/C++Serving/OP_EN.md
index ab1ff42ad..96ed7ca4b
--- a/doc/cpp_server/NEW_OPERATOR.md
+++ b/doc/C++Serving/OP_EN.md
@@ -1,6 +1,6 @@
# How to write a general operator?
-([简体中文](./NEW_OPERATOR_CN.md)|English)
+([简体中文](OP_CN.md)|English)
In this document, we mainly focus on how to develop a new server-side operator for PaddleServing. Before we start to write a new operator, let's look at some sample code to get the basic idea of writing a new operator for the server. We assume you already know the basic computation logic on the server side of PaddleServing; please refer to []() if you do not know much about it. The following code can be found at `core/general-server/op` of the Serving repo.
diff --git a/doc/C++Serving/Performance_Tuning_CN.md b/doc/C++Serving/Performance_Tuning_CN.md
new file mode 100755
index 000000000..fe8ef992c
--- /dev/null
+++ b/doc/C++Serving/Performance_Tuning_CN.md
@@ -0,0 +1 @@
+To be filled in!
diff --git a/doc/cpp_server/C++DESIGN.md b/doc/cpp_server/C++DESIGN.md
deleted file mode 100644
index e45fe4392..000000000
--- a/doc/cpp_server/C++DESIGN.md
+++ /dev/null
@@ -1,378 +0,0 @@
-# C++ Serving Design
-
-([简体中文](./C++DESIGN_CN.md)|English)
-
-## 1. Background
-
-PaddlePaddle is Baidu's open source machine learning framework, which supports a wide range of customized development of deep learning models; Paddle Serving is the online prediction framework of Paddle, which seamlessly connects with Paddle model training and provides cloud services for machine learning prediction. This article describes the Paddle Serving design from the bottom up, across the model, service, and access levels.
-
-1. The model is the core of Paddle Serving prediction, covering the management of model data and inference calculations;
-2. The prediction framework encapsulates the model for inference calculations, providing an external RPC interface to connect different upstreams;
-3. The prediction service SDK provides a set of access frameworks
-
-The result is a complete serving solution.
-
-## 2. Terms explanation
-
-- **baidu-rpc**: Baidu's official open source RPC framework, supports multiple common communication protocols, and provides a custom interface experience based on protobuf
-- **Variant**: Paddle Serving architecture is an abstraction of a minimal prediction cluster, which is characterized by all internal instances (replicas) being completely homogeneous and logically corresponding to a fixed version of a model
-- **Endpoint**: Multiple Variants form an Endpoint. Logically, Endpoint represents a model, and Variants within the Endpoint represent different versions.
-**OP**: In PaddlePaddle, an OP encapsulates a numerical calculation operator; in Paddle Serving, it represents a basic business operation operator whose core interface is inference. An OP configures its dependent upstream OPs, connecting multiple OPs into a workflow
-- **Channel**: An abstraction of all request-level intermediate data of the OP; data exchange between OPs through Channels
-**Bus**: manages all channels in a thread, and schedules the access relationship between the OP and Channel sets according to the DAG dependency graph
-**Stage**: in a Workflow, following the topology described by the DAG, a collection of OPs that belong to the same level and can be executed in parallel
-- **Node**: An OP operator instance composed of an OP operator class combined with parameter configuration, which is also an execution unit in Workflow
-- **Workflow**: executes the inference interface of each OP in order according to the topology described by DAG
-- **DAG/Workflow**: consists of several interdependent Nodes. Each Node can obtain the Request object through a specific interface. The node Op obtains the output object of its pre-op through the dependency relationship. The output of the last Node is the Response object by default.
-**Service**: encapsulates a PV request; several Workflows can be configured, sharing the current PV's Request object and executing in parallel/serial, with each finally writing its Response to the corresponding output slot; one Paddle Serving process can configure multiple Service interfaces, and the upstream selects the Service interface to access by ServiceName.
-
-## 3. Python Interface Design
-
-### 3.1 Core Targets:
-
-A set of Paddle Serving dynamic libraries that support remote prediction services for common models saved with Paddle, calling the underlying functions of PaddleServing through the Python interface.
-
-### 3.2 General Model:
-
-Models that can be served with the Paddle Inference Library: models saved during training, including their Feed Variables and Fetch Variables
-
-### 3.3 Overall design:
-
-- The user starts the Client and Server through the Python Client. The Python API has a function to check whether the interconnection and the models to be accessed match.
-- The Python API calls the pybind bindings of the client and server functions implemented by Paddle Serving; the information is transmitted through RPC.
-- The Client Python API currently has two simple functions, load_inference_conf and predict, which are used to perform loading of the model to be predicted and prediction, respectively.
-- The Server Python API is mainly responsible for loading the inference model and generating various configurations required by Paddle Serving, including engines, workflow, resources, etc.
-
-### 3.4 Server Interface
-
-![Server Interface](images/server_interface.png)
-
-### 3.5 Client Interface
-
-
-
-### 3.6 Client IO used during training
-
-PaddleServing provides a model-saving interface that can be used during training. It is basically the same as the Paddle save-inference-model interface; through feed_var_dict and fetch_var_dict
-you can alias the input and output variables. The configuration that Serving reads at startup is saved in the client and server storage directories.
-
-``` python
-def save_model(server_model_folder,
-               client_config_folder,
-               feed_var_dict,
-               fetch_var_dict,
-               main_program=None)
-```
-
-## 4. Paddle Serving Underlying Framework
-
-![Paddle-Serving Overall Architecture](images/framework.png)
-
-**Model Management Framework**: Connects model files of multiple machine learning platforms and provides a unified inference interface
-**Business Scheduling Framework**: Abstracts the calculation logic of various different inference models, provides a general DAG scheduling framework, and connects different operators through DAG diagrams to complete a prediction service together. This abstract model allows users to conveniently implement their own calculation logic, and at the same time facilitates operator sharing. (Users build their own forecasting services. A large part of their work is to build DAGs and provide operators.)
-**Predict Service**: Encapsulation of the externally provided prediction service interface. Define communication fields with the client through protobuf.
-
-### 4.1 Model Management Framework
-
-The model management framework is responsible for managing the models trained by the machine learning framework. It can be abstracted into three levels: model loading, model data, and model inference.
-
-#### Model Loading
-
-Load model from disk to memory, support multi-version, hot-load, incremental update, etc.
-
-#### Model data
-
-The model's data structure in memory, integrating the fluid inference lib
-
-#### Inferencer
-
-Provides a unified inference interface for upper layers
-
-```C++
-class FluidFamilyCore {
-  virtual bool Run(const void* in_data, void* out_data);
-  virtual int create(const std::string& data_path);
-  virtual int clone(void* origin_core);
-};
-```
-
-### 4.2 Business Scheduling Framework
-
-#### 4.2.1 Inference Service
-
-With reference to the abstract idea of model calculation in the TensorFlow framework, the business logic is abstracted into a DAG diagram, driven by configuration, generating a workflow and avoiding C++ code recompilation. Each specific step of the service corresponds to a specific OP. An OP can configure the upstream OPs that it depends on. Unified message passing between OPs is achieved by the thread-level bus and channel mechanisms. For example, the service process of a simple prediction service can be abstracted into 3 steps, reading request data -> calling the prediction interface -> writing back the prediction result, correspondingly implemented as 3 OPs: ReaderOp -> ClassifyOp -> WriteOp
-
-![Infer Service](images/predict-service.png)
-
-Regarding the dependencies between OPs, and the establishment of workflows through OPs, you can refer to [Creating a prediction service from scratch](CREATING.md) (Simplified Chinese)
-
-Server instance perspective
-
-![Server instance perspective](images/server-side.png)
-
-
-#### 4.2.2 Paddle Serving Multi-Service Mechanism
-
-![Paddle Serving multi-service](images/multi-service.png)
-
-Paddle Serving instances can load multiple models at the same time, and each model uses a Service (and its configured workflow) to undertake services. You can refer to [service configuration file in Demo example](../tools/cpp_examples/demo-serving/conf/service.prototxt) to learn how to configure multiple services for the serving instance
-
-#### 4.2.3 Hierarchical relationship of business scheduling
-
-From the client's perspective, a Paddle Serving service can be divided into three levels: Service, Endpoint, and Variant from top to bottom.
-
-![Call hierarchy relationship](images/multi-variants.png)
-
-One Service corresponds to one inference model, and there is one endpoint under the model. Different versions of the model are implemented through multiple variant concepts under endpoint:
-The same model prediction service can configure multiple variants, and each variant has its own downstream IP list. The client code can configure relative weights for each variant to achieve the relationship of adjusting the traffic ratio (refer to the description of variant_weight_list in [Client Configuration](CLIENT_CONFIGURE.md) section 3.2).
-
-![Client-side proxy function](images/client-side-proxy.png)
-
-## 5. User Interface
-
-As long as certain interface specifications are met, the service framework places no restrictions on user data fields, so as to accommodate the different business interfaces of various prediction services. Baidu-rpc inherits the Protobuf service interface, and the user describes the Request and Response business interfaces according to the Protobuf syntax specification. Paddle Serving is built on the Baidu-rpc framework and supports this feature by default.
-
-No matter how the communication protocol changes, the framework only needs to keep the communication protocol and the business data format synchronized between client and server to guarantee normal communication. This information can be broken down as follows:
-
-- Protocol: Header information agreed in advance between Server and Client to ensure mutual recognition of data format. Paddle Serving uses Protobuf as the basic communication format
-- Data: Used to describe the interface of Request and Response, such as the sample data to be predicted, and the score returned by the prediction. This includes:
-  - Data fields: Field definitions included in the two data structures of Request and Response.
-  - Description interface: similar to the protocol interface, it supports Protobuf by default
-
-### 5.1 Data Compression Method
-
-Baidu-rpc has built-in data compression methods such as snappy, gzip, and zlib, which can be configured in the configuration file (refer to the introduction to compress_type in [Client Configuration](CLIENT_CONFIGURE.md), section 3.1)
-
-### 5.2 C++ SDK API Interface
-
-```C++
-class PredictorApi {
- public:
-  int create(const char* path, const char* file);
-  int thrd_initialize();
-  int thrd_clear();
-  int thrd_finalize();
-  void destroy();
-
-  Predictor* fetch_predictor(std::string ep_name);
-  int free_predictor(Predictor* predictor);
-};
-
-class Predictor {
- public:
-  // synchronous interface
-  virtual int inference(google::protobuf::Message* req,
-                        google::protobuf::Message* res) = 0;
-
-  // asynchronous interface
-  virtual int inference(google::protobuf::Message* req,
-                        google::protobuf::Message* res,
-                        DoneType done,
-                        brpc::CallId* cid = NULL) = 0;
-
-  // synchronous interface
-  virtual int debug(google::protobuf::Message* req,
-                    google::protobuf::Message* res,
-                    butil::IOBufBuilder* debug_os) = 0;
-};
-
-```
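-
-A minimal usage sketch of this SDK follows. `Request` and `Response` stand for the protobuf messages of your business interface, and the configuration path, file name, and endpoint name are placeholders, not the framework's actual defaults:
-
-```C++
-int predict_once() {
-  PredictorApi api;
-  if (api.create("./conf", "predictors.prototxt") != 0) {
-    return -1;  // SDK configuration failed to load
-  }
-  api.thrd_initialize();  // per-thread initialization
-
-  Request req;   // fill in the business fields defined in your proto
-  Response res;
-  Predictor* predictor = api.fetch_predictor("text_classification");
-  if (predictor != NULL) {
-    predictor->inference(&req, &res);  // synchronous call
-    api.free_predictor(predictor);     // hand the predictor back
-  }
-
-  api.thrd_clear();     // clear per-call state
-  api.thrd_finalize();  // once, before the thread exits
-  api.destroy();
-  return 0;
-}
-```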
-
-### 5.3 Interfaces related to Op
-
-```C++
-class Op {
-  // ------Getters for Channel/Data/Message of dependent OP-----
-
-  // Get the Channel object of dependent OP
-  Channel* mutable_depend_channel(const std::string& op);
-
-  // Get the Channel object of dependent OP
-  const Channel* get_depend_channel(const std::string& op) const;
-
-  template <typename T>
-  T* mutable_depend_argument(const std::string& op);
-
-  template <typename T>
-  const T* get_depend_argument(const std::string& op) const;
-
-  // -----Getters for Channel/Data/Message of current OP----
-
-  // Get pointer to the protobuf message of current OP
-  google::protobuf::Message* mutable_message();
-
-  // Get pointer to the protobuf message of current OP
-  const google::protobuf::Message* get_message() const;
-
-  // Get the template class data object of current OP
-  template <typename T>
-  T* mutable_data();
-
-  // Get the template class data object of current OP
-  template <typename T>
-  const T* get_data() const;
-
-  // ---------------- Other base class members ----------------
-
-  int init(Bus* bus,
-           Dag* dag,
-           uint32_t id,
-           const std::string& name,
-           const std::string& type,
-           void* conf);
-
-  int deinit();
-
-  int process(bool debug);
-
-  // Get the input object
-  const google::protobuf::Message* get_request_message();
-
-  const std::string& type() const;
-
-  uint32_t id() const;
-
-  // ------------------ OP Interface -------------------
-
-  // Get the derived Channel object of current OP
-  virtual Channel* mutable_channel() = 0;
-
-  // Get the derived Channel object of current OP
-  virtual const Channel* get_channel() const = 0;
-
-  // Release the derived Channel object of current OP
-  virtual int release_channel() = 0;
-
-  // Inference interface
-  virtual int inference() = 0;
-
-  // ------------------ Conf Interface -------------------
-  virtual void* create_config(const configure::DAGNode& conf) { return NULL; }
-
-  virtual void delete_config(void* conf) {}
-
-  virtual void set_config(void* conf) { return; }
-
-  // ------------------ Metric Interface -------------------
-  virtual void regist_metric() { return; }
-};
-
-```
-
-
-### 5.4 Interfaces related to the framework
-
-Service
-
-```C++
-class InferService {
- public:
-  static const char* tag() { return "service"; }
-  int init(const configure::InferService& conf);
-  int deinit() { return 0; }
-  int reload();
-  const std::string& name() const;
-  const std::string& full_name() const { return _infer_service_format; }
-
-  // Execute each workflow serially
-  virtual int inference(const google::protobuf::Message* request,
-                        google::protobuf::Message* response,
-                        butil::IOBufBuilder* debug_os = NULL);
-
-  int debug(const google::protobuf::Message* request,
-            google::protobuf::Message* response,
-            butil::IOBufBuilder* debug_os);
-};
-
-class ParallelInferService : public InferService {
- public:
-  // Execute workflows in parallel
-  int inference(const google::protobuf::Message* request,
-                google::protobuf::Message* response,
-                butil::IOBufBuilder* debug_os) {
-    return 0;
-  }
-};
-```
-ServerManager
-
-```C++
-class ServerManager {
- public:
-  typedef google::protobuf::Service Service;
-  ServerManager();
-
-  static ServerManager& instance() {
-    static ServerManager server;
-    return server;
-  }
-  static bool reload_starting() { return _s_reload_starting; }
-  static void stop_reloader() { _s_reload_starting = false; }
-  int add_service_by_format(const std::string& format);
-  int start_and_wait();
-};
-```
-
-DAG
-
-```C++
-class Dag {
- public:
-  EdgeMode parse_mode(std::string& mode);  // NOLINT
-
-  int init(const char* path, const char* file, const std::string& name);
-
-  int init(const configure::Workflow& conf, const std::string& name);
-
-  int deinit();
-
-  uint32_t nodes_size();
-
-  const DagNode* node_by_id(uint32_t id);
-
-  const DagNode* node_by_id(uint32_t id) const;
-
-  const DagNode* node_by_name(std::string& name);  // NOLINT
-
-  const DagNode* node_by_name(const std::string& name) const;
-
-  uint32_t stage_size();
-
-  const DagStage* stage_by_index(uint32_t index);
-
-  const std::string& name() const { return _dag_name; }
-
-  const std::string& full_name() const { return _dag_name; }
-
-  void regist_metric(const std::string& service_name);
-};
-```
-
-Workflow
-
-```C++
-class Workflow {
- public:
-  Workflow() {}
-  static const char* tag() { return "workflow"; }
-
-  // Each workflow object corresponds to an independent
-  // configure file, so you can share the object between
-  // different apps.
-  int init(const configure::Workflow& conf);
-
-  DagView* fetch_dag_view(const std::string& service_name);
-
-  int deinit() { return 0; }
-
-  void return_dag_view(DagView* view);
-
-  int reload();
-
-  const std::string& name() { return _name; }
-
-  const std::string& full_name() { return _name; }
-};
-```
diff --git a/doc/cpp_server/C++DESIGN_CN.md b/doc/cpp_server/C++DESIGN_CN.md
deleted file mode 100644
index 383666959..000000000
--- a/doc/cpp_server/C++DESIGN_CN.md
+++ /dev/null
@@ -1,379 +0,0 @@
-# C++ Serving Design
-
-(简体中文|[English](./C++DESIGN.md))
-
-Note: the content of this page is outdated. Please refer to the [design document](DESIGN_DOC_CN.md) instead.
-
-## 1. Project Background
-
-PaddlePaddle is Baidu's open-source machine learning framework, with broad support for customized development of all kinds of deep learning models; Paddle Serving is Paddle's online prediction component. It connects seamlessly with Paddle model training and provides machine learning prediction as a cloud service. This document describes the Paddle Serving design bottom-up, from the model level through the service and access levels.
-
-1. The model is the core of Paddle Serving prediction, covering the management of model data and inference computation;
-2. The prediction framework encapsulates the model inference computation and exposes an RPC interface to connect with different upstreams;
-3. The prediction service SDK provides a set of access frameworks.
-
-Together these form a complete serving solution.
-
-## 2. Terminology
-
-- **baidu-rpc**: Baidu's official open-source RPC framework, supporting several common communication protocols and providing a protobuf-based custom interface experience
-- **Variant**: Paddle Serving's abstraction of a minimal prediction cluster; all instances (replicas) inside it are completely homogeneous and logically correspond to one fixed version of one model
-- **Endpoint**: multiple Variants form an Endpoint; logically, an Endpoint represents one model, and the Variants inside it represent different versions
-- **OP**: PaddlePaddle uses OP to encapsulate a numerical computation operator; Paddle Serving uses it to represent a basic business operator, whose core interface is inference. An OP configures the upstream OPs it depends on, so that multiple OPs can be chained into a workflow
-- **Channel**: the abstraction of all request-level intermediate data of an OP; OPs exchange data through Channels
-- **Bus**: manages all Channels within a thread and schedules the access relationship between the OP set and the Channel set according to the DAG dependency graph
-- **Stage**: a set of OPs in the DAG-described topology of a Workflow that belong to the same phase and can be executed in parallel
-- **Node**: an OP operator instance composed of an OP operator class and its parameter configuration; also an execution unit in a Workflow
-- **Workflow**: executes the inference interface of each OP in order, following the topology described by the DAG
-- **DAG/Workflow**: composed of several interdependent Nodes. Each Node can obtain the Request object through a specific interface; a node OP obtains the output objects of its predecessor OPs through its dependencies; the output of the last Node is by default the Response object
-- **Service**: encapsulates one request (PV). Several Workflows can be configured; they share the current PV's Request object, execute in parallel/serial, and finally the Response is written into the corresponding output slot. One Paddle Serving process can configure multiple Service interfaces, and the upstream selects the Service interface to access by ServiceName
-
-## 3. Python Interface Design
-
-### 3.1 Core goal:
-
-Deliver a complete set of Paddle Serving dynamic libraries that support remote prediction services for general models saved by Paddle, with the various underlying capabilities of PaddleServing invoked through the Python Interface.
-
-### 3.2 General model:
-
-A model that the Paddle Inference Library can use for prediction, saved during training, and containing Feed Variables and Fetch Variables.
-
-### 3.3 Overall design:
-
-- The user starts the Client and Server through the Python Client; the Python API can check whether the interconnection works and whether the models to be accessed match
-- Behind the Python API are the pybind bindings of the corresponding client and server functionality implemented by Paddle Serving; the information exchanged between them is transferred via RPC
-- The Client Python API currently has two simple functions, load_inference_conf and predict, which load the model to be used for prediction and run prediction, respectively
-- The Server Python API is mainly responsible for loading the inference model and generating the various configurations Paddle Serving needs, including engines, workflow, resource, etc.
-
-### 3.4 Server Interface
-
-![Server Interface](images/server_interface.png)
-
-### 3.5 Client Interface
-
-
-
-### 3.6 Client io used during training
-
-PaddleServing provides a save-model interface that can be used during training, basically identical to the interface Paddle uses to save an inference model. feed_var_dict and fetch_var_dict
-can assign aliases to the input and output variables, and the configuration that serving needs to read at startup is saved in both the client-side and server-side save directories.
-
-``` python
-def save_model(server_model_folder,
-               client_config_folder,
-               feed_var_dict,
-               fetch_var_dict,
-               main_program=None)
-```
-
-## 4. Paddle Serving Underlying Framework
-
-![Paddle Serving overall framework](images/framework.png)
-
-**Model Management Framework**: connects the model files of multiple machine learning platforms and provides a unified inference interface upward
-**Business Scheduling Framework**: abstracts the computation logic of the various inference models and provides a general DAG scheduling framework; different operators are connected through a DAG graph to jointly complete one prediction service. This abstraction lets users conveniently implement their own computation logic and makes operators easy to share. (When users build their own prediction services, a large part of the work is building the DAG and providing operator implementations.)
-**PredictService**: encapsulation of the externally provided prediction service interface. Communication fields with the client are defined via protobuf.
-
-### 4.1 Model Management Framework
-
-The model management framework is responsible for managing the models trained by machine learning frameworks. It can be abstracted into three levels: model loading, model data, and model inference.
-
-#### Model loading
-
-Loads the model from disk into memory, supporting multi-version, hot loading, incremental update, etc.
-
-#### Model data
-
-The model's in-memory data structure, integrating the fluid inference lib.
-
-#### Inferencer
-
-Provides a unified inference interface upward for the prediction service.
-
-```C++
-class FluidFamilyCore {
-  // Run inference once; in_data/out_data are opaque pointers to the
-  // inference lib's input and output tensors.
-  virtual bool Run(const void* in_data, void* out_data);
-  // Load the model from the given path into memory.
-  virtual int create(const std::string& data_path);
-  // Share the model weights already loaded by another core.
-  virtual int clone(void* origin_core);
-};
-```
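-
-As a concrete illustration (not part of the original interface), the sketch below shows how an engine might specialize `FluidFamilyCore` on top of the old fluid inference API. The class name is hypothetical and error handling is reduced to return codes; treat it as a sketch, not the framework's actual implementation:
-
-```C++
-#include <memory>
-#include <vector>
-#include "paddle_inference_api.h"  // fluid inference lib, assumed available
-
-// Hypothetical CPU engine: create() loads the model from disk, clone()
-// shares the weights of an already-loaded core so that each worker
-// thread gets its own lightweight handle.
-class FluidCpuNativeCore : public FluidFamilyCore {
- public:
-  int create(const std::string& data_path) {
-    paddle::NativeConfig config;
-    config.model_dir = data_path;
-    _core = paddle::CreatePaddlePredictor<paddle::NativeConfig>(config);
-    return _core ? 0 : -1;
-  }
-
-  bool Run(const void* in_data, void* out_data) {
-    // The opaque pointers are cast back to the inference lib's tensors.
-    auto* in = static_cast<const std::vector<paddle::PaddleTensor>*>(in_data);
-    auto* out = static_cast<std::vector<paddle::PaddleTensor>*>(out_data);
-    return _core->Run(*in, out);
-  }
-
-  int clone(void* origin_core) {
-    auto* src = static_cast<paddle::PaddlePredictor*>(origin_core);
-    _core = src->Clone();
-    return _core ? 0 : -1;
-  }
-
- private:
-  std::unique_ptr<paddle::PaddlePredictor> _core;
-};
-```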
-
-### 4.2 Business Scheduling Framework
-
-#### 4.2.1 Inference Service
-
-Borrowing the abstract model-computation idea of the TF framework, the business logic is abstracted into a DAG, driven by configuration to generate a workflow, skipping C++ code compilation. Each concrete step of the business corresponds to a concrete OP, and an OP can configure the upstream OPs it depends on. Message passing between OPs is uniformly implemented by the thread-level Bus and Channel mechanisms. For example, the process of a simple prediction service can be abstracted into three steps, reading request data -> calling the prediction interface -> writing back the prediction result, implemented as three OPs: ReaderOp -> ClassifyOp -> WriteOp
-
-![Inference Service](images/predict-service.png)
-
-For the dependencies between OPs and how to build a workflow from OPs, see the relevant chapters of [从零开始写一个预测服务](CREATING.md)
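-
-To make the OP abstraction concrete, here is a hedged sketch of what the middle OP of that chain could look like. `Samples` and `Scores` are hypothetical channel data types, `"reader_op"` is the assumed name of the upstream node, and the accessors come from the `Op` base class listed in section 5.3 (the Channel boilerplate such as `mutable_channel()` is omitted):
-
-```C++
-class ClassifyOp : public Op {
- public:
-  int inference() {
-    // Read the output that the upstream ReaderOp published to its channel.
-    const Samples* samples = get_depend_argument<Samples>("reader_op");
-    if (samples == NULL) {
-      return -1;  // upstream data missing
-    }
-    // Write this OP's result into its own channel for WriteOp to consume.
-    Scores* scores = mutable_data<Scores>();
-    // ... run the model on `samples` and fill `scores` ...
-    return 0;
-  }
-};
-```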
-
-Server instance perspective
-
-![Server instance perspective](images/server-side.png)
-
-
-#### 4.2.2 Paddle Serving Multi-Service Mechanism
-
-![Paddle Serving multi-service](images/multi-service.png)
-
-A Paddle Serving instance can load multiple models at the same time; each model serves through one Service (and its configured workflow). Refer to the [service configuration file in the demo example](../tools/cpp_examples/demo-serving/conf/service.prototxt) to learn how to configure multiple services for a serving instance
-
-#### 4.2.3 Hierarchy of Business Scheduling
-
-From the client's perspective, a Paddle Serving service can be divided into three levels, Service, Endpoint, and Variant, from top to bottom
-
-![Call hierarchy](images/multi-variants.png)
-
-One Service corresponds to one inference model, and there is one endpoint under the model. Different versions of the model are implemented through multiple variants under the endpoint:
-the same model prediction service can configure multiple variants, and each variant has its own downstream IP list. The client code can assign relative weights to the variants to adjust the traffic ratio (see the description of variant_weight_list in section 3.2 of [Client Configuration](CLIENT_CONFIGURE.md)).
-
-![Client-side proxy function](images/client-side-proxy.png)
-
-## 5. User Interface
-
-As long as certain interface specifications are met, the service framework places no constraints on user data fields, so as to accommodate the different business interfaces of various prediction services. Baidu-rpc inherits the Protobuf service interface, and users describe the Request and Response business interfaces following the Protobuf syntax specification. Paddle Serving is built on the Baidu-rpc framework and supports this feature by default.
-
-No matter how the communication protocol changes, the framework only needs to keep the communication protocol and the business data format synchronized between Client and Server to guarantee normal communication. This information can be broken down as follows:
-
-- Protocol: header information agreed in advance between Server and Client to ensure mutual recognition of the data format. Paddle Serving uses Protobuf as the basic communication format
-- Data: describes the Request and Response interfaces, e.g. the sample data to be predicted and the scores returned by prediction. This includes:
-  - Data fields: the field definitions contained in the Request and Response data structures
-  - Description interface: similar to the protocol interface; Protobuf is supported by default
-
-### 5.1 Data Compression Methods
-
-Baidu-rpc has built-in data compression methods such as snappy, gzip and zlib, which can be configured in the configuration file (see the introduction to compress_type in section 3.1 of [Client Configuration](CLIENT_CONFIGURE.md))
-
-### 5.2 C++ SDK API Interface
-
-```C++
-class PredictorApi {
- public:
-  int create(const char* path, const char* file);
-  int thrd_initialize();
-  int thrd_clear();
-  int thrd_finalize();
-  void destroy();
-
-  Predictor* fetch_predictor(std::string ep_name);
-  int free_predictor(Predictor* predictor);
-};
-
-class Predictor {
- public:
-  // synchronous interface
-  virtual int inference(google::protobuf::Message* req,
-                        google::protobuf::Message* res) = 0;
-
-  // asynchronous interface
-  virtual int inference(google::protobuf::Message* req,
-                        google::protobuf::Message* res,
-                        DoneType done,
-                        brpc::CallId* cid = NULL) = 0;
-
-  // synchronous interface
-  virtual int debug(google::protobuf::Message* req,
-                    google::protobuf::Message* res,
-                    butil::IOBufBuilder* debug_os) = 0;
-};
-
-```
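-
-A minimal usage sketch of this SDK follows. `Request` and `Response` stand for the protobuf messages of your business interface, and the configuration path, file name, and endpoint name are placeholders, not the framework's actual defaults:
-
-```C++
-int predict_once() {
-  PredictorApi api;
-  if (api.create("./conf", "predictors.prototxt") != 0) {
-    return -1;  // SDK configuration failed to load
-  }
-  api.thrd_initialize();  // per-thread initialization
-
-  Request req;   // fill in the business fields defined in your proto
-  Response res;
-  Predictor* predictor = api.fetch_predictor("text_classification");
-  if (predictor != NULL) {
-    predictor->inference(&req, &res);  // synchronous call
-    api.free_predictor(predictor);     // hand the predictor back
-  }
-
-  api.thrd_clear();     // clear per-call state
-  api.thrd_finalize();  // once, before the thread exits
-  api.destroy();
-  return 0;
-}
-```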
-
-### 5.3 OP-related Interfaces
-
-```C++
-class Op {
-  // ------Getters for Channel/Data/Message of dependent OP-----
-
-  // Get the Channel object of dependent OP
-  Channel* mutable_depend_channel(const std::string& op);
-
-  // Get the Channel object of dependent OP
-  const Channel* get_depend_channel(const std::string& op) const;
-
-  template <typename T>
-  T* mutable_depend_argument(const std::string& op);
-
-  template <typename T>
-  const T* get_depend_argument(const std::string& op) const;
-
-  // -----Getters for Channel/Data/Message of current OP----
-
-  // Get pointer to the protobuf message of current OP
-  google::protobuf::Message* mutable_message();
-
-  // Get pointer to the protobuf message of current OP
-  const google::protobuf::Message* get_message() const;
-
-  // Get the template class data object of current OP
-  template <typename T>
-  T* mutable_data();
-
-  // Get the template class data object of current OP
-  template <typename T>
-  const T* get_data() const;
-
-  // ---------------- Other base class members ----------------
-
-  int init(Bus* bus,
-           Dag* dag,
-           uint32_t id,
-           const std::string& name,
-           const std::string& type,
-           void* conf);
-
-  int deinit();
-
-  int process(bool debug);
-
-  // Get the input object
-  const google::protobuf::Message* get_request_message();
-
-  const std::string& type() const;
-
-  uint32_t id() const;
-
-  // ------------------ OP Interface -------------------
-
-  // Get the derived Channel object of current OP
-  virtual Channel* mutable_channel() = 0;
-
-  // Get the derived Channel object of current OP
-  virtual const Channel* get_channel() const = 0;
-
-  // Release the derived Channel object of current OP
-  virtual int release_channel() = 0;
-
-  // Inference interface
-  virtual int inference() = 0;
-
-  // ------------------ Conf Interface -------------------
-  virtual void* create_config(const configure::DAGNode& conf) { return NULL; }
-
-  virtual void delete_config(void* conf) {}
-
-  virtual void set_config(void* conf) { return; }
-
-  // ------------------ Metric Interface -------------------
-  virtual void regist_metric() { return; }
-};
-
-```
-
-### 5.4 Framework-related Interfaces
-
-Service
-
-```C++
-class InferService {
- public:
-  static const char* tag() { return "service"; }
-  int init(const configure::InferService& conf);
-  int deinit() { return 0; }
-  int reload();
-  const std::string& name() const;
-  const std::string& full_name() const { return _infer_service_format; }
-
-  // Execute each workflow serially
-  virtual int inference(const google::protobuf::Message* request,
-                        google::protobuf::Message* response,
-                        butil::IOBufBuilder* debug_os = NULL);
-
-  int debug(const google::protobuf::Message* request,
-            google::protobuf::Message* response,
-            butil::IOBufBuilder* debug_os);
-};
-
-class ParallelInferService : public InferService {
- public:
-  // Execute workflows in parallel
-  int inference(const google::protobuf::Message* request,
-                google::protobuf::Message* response,
-                butil::IOBufBuilder* debug_os) {
-    return 0;
-  }
-};
-```
-ServerManager
-
-```C++
-class ServerManager {
- public:
-  typedef google::protobuf::Service Service;
-  ServerManager();
-
-  static ServerManager& instance() {
-    static ServerManager server;
-    return server;
-  }
-  static bool reload_starting() { return _s_reload_starting; }
-  static void stop_reloader() { _s_reload_starting = false; }
-  int add_service_by_format(const std::string& format);
-  int start_and_wait();
-};
-```
-
-DAG
-
-```C++
-class Dag {
- public:
-  EdgeMode parse_mode(std::string& mode);  // NOLINT
-
-  int init(const char* path, const char* file, const std::string& name);
-
-  int init(const configure::Workflow& conf, const std::string& name);
-
-  int deinit();
-
-  uint32_t nodes_size();
-
-  const DagNode* node_by_id(uint32_t id);
-
-  const DagNode* node_by_id(uint32_t id) const;
-
-  const DagNode* node_by_name(std::string& name);  // NOLINT
-
-  const DagNode* node_by_name(const std::string& name) const;
-
-  uint32_t stage_size();
-
-  const DagStage* stage_by_index(uint32_t index);
-
-  const std::string& name() const { return _dag_name; }
-
-  const std::string& full_name() const { return _dag_name; }
-
-  void regist_metric(const std::string& service_name);
-};
-```
-
-Workflow
-
-```C++
-class Workflow {
- public:
-  Workflow() {}
-  static const char* tag() { return "workflow"; }
-
-  // Each workflow object corresponds to an independent
-  // configure file, so you can share the object between
-  // different apps.
-  int init(const configure::Workflow& conf);
-
-  DagView* fetch_dag_view(const std::string& service_name);
-
-  int deinit() { return 0; }
-
-  void return_dag_view(DagView* view);
-
-  int reload();
-
-  const std::string& name() { return _name; }
-
-  const std::string& full_name() { return _name; }
-};
-```
diff --git a/doc/cpp_server/NEW_WEB_SERVICE.md b/doc/cpp_server/NEW_WEB_SERVICE.md
deleted file mode 100644
index 86e53b843..000000000
--- a/doc/cpp_server/NEW_WEB_SERVICE.md
+++ /dev/null
@@ -1,152 +0,0 @@
-# How to develop a new Web service?
-
-
-([简体中文](NEW_WEB_SERVICE_CN.md)|English)
-
-This document will take Uci service as an example to introduce how to develop a new Web Service. You can check out the complete code [here](../python/examples/pipeline/simple_web_service/web_service.py).
-
-## Op base class
-
-In some services, a single model may not meet business needs, and multiple models have to be chained in series or run in parallel to complete the whole service. We call the operation of a single model an Op, and provide a simple set of interfaces to implement the complex logic of chaining Ops in series or running them in parallel.
-
-Data between Ops is passed as dictionaries; Ops can be started as threads or processes, and the number of concurrent instances of each Op is configurable.
-
-Typically, you need to inherit the Op base class and override its `init_op`, `preprocess` and `postprocess` methods, whose default implementations are as follows:
-
-```python
-class Op(object):
-    def init_op(self):
-        pass
-    def preprocess(self, input_dicts):
-        # multiple previous Op
-        if len(input_dicts) != 1:
-            _LOGGER.critical(
-                "Failed to run preprocess: this Op has multiple previous "
-                "inputs. Please override this func.")
-            os._exit(-1)
-        (_, input_dict), = input_dicts.items()
-        return input_dict
-    def postprocess(self, input_dicts, fetch_dict):
-        return fetch_dict
-```
-
-### init_op
-
-This method is used to load user-defined resources such as dictionaries. A separator is loaded in the [UciOp](../python/examples/pipeline/simple_web_service/web_service.py).
-
-**Note**: If the Op is launched in thread mode and runs with multiple concurrent instances, the different threads of the same Op execute `init_op` only once and share the resources it loads.
-
-### preprocess
-
-This method is used to preprocess the data before model prediction. It has an `input_dicts` parameter: a dictionary whose keys are the `name`s of the preceding Ops and whose values are the data passed from the corresponding preceding Op (the data is also in dictionary format).
-
-The `preprocess` method needs to process the data into an ndarray dictionary (keys are feed variable names, values are the corresponding ndarray values). The Op takes the return value as the input of model prediction and passes the output to the `postprocess` method.
-
-**Note**: if the Op does not have a model configuration file, the return value of `preprocess` is passed directly to `postprocess`.
-
-### postprocess
-
-This method is used for data post-processing after model prediction. It has two parameters, `input_dicts` and `fetch_dict`.
-
-The `input_dicts` parameter is the same as in the `preprocess` method, and `fetch_dict` is the output of model prediction (keys are fetch variable names, values are the corresponding ndarray values). The Op takes the return value of `postprocess` as the input of the subsequent Op's `preprocess`.
-
-**Note**: if Op does not have a model configuration file, `fetch_dict` will be the return value of `preprocess`.
-
-
-
-Here is the op of the UCI example:
-
-```python
-class UciOp(Op):
-    def init_op(self):
-        self.separator = ","
-
-    def preprocess(self, input_dicts):
-        (_, input_dict), = input_dicts.items()
-        x_value = input_dict["x"]
-        if isinstance(x_value, (str, unicode)):
-            input_dict["x"] = np.array(
-                [float(x.strip()) for x in x_value.split(self.separator)])
-        return input_dict
-
-    def postprocess(self, input_dicts, fetch_dict):
-        fetch_dict["price"] = str(fetch_dict["price"][0][0])
-        return fetch_dict
-```
-
-
-
-## WebService base class
-
-Paddle Serving implements the [WebService](https://github.com/PaddlePaddle/Serving/blob/develop/python/paddle_serving_server/web_service.py#L23) base class. You need to override its `get_pipeline_response` method to define the topological relationships between Ops and return the Op that produces the Response. The default implementation is as follows:
-
-```python
-class WebService(object):
-    def get_pipeline_response(self, read_op):
-        return None
-```
-
-Here `read_op` serves as the entry point of the whole service's topology (that is, the first user-defined Op takes `read_op` as its predecessor).
-
-For a single-Op service (single model), take the Uci service as an example (there is only one Uci prediction model in the whole service):
-
-```python
-class UciService(WebService):
-    def get_pipeline_response(self, read_op):
-        uci_op = UciOp(name="uci", input_ops=[read_op])
-        return uci_op
-```
-
-For multi-Op services (multiple models), take the Ocr service as an example (the whole service is completed by the Det model and the Rec model in series):
-
-```python
-class OcrService(WebService):
-    def get_pipeline_response(self, read_op):
-        det_op = DetOp(name="det", input_ops=[read_op])
-        rec_op = RecOp(name="rec", input_ops=[det_op])
-        return rec_op
-```
-
-
-
-A WebService object needs to load a yaml configuration file through `prepare_pipeline_config` to configure each Op and the service as a whole. The simplest configuration file is as follows (Uci example):
-
-```yaml
-http_port: 18080
-op:
-  uci:
-    local_service_conf:
-      model_config: uci_housing_model # path
-```
-
-All the fields of the yaml file are listed below:
-
-```yaml
-rpc_port: 18080 # gRPC port
-build_dag_each_worker: false # Whether to use the process-version Servicer. The default is false
-worker_num: 1 # gRPC thread pool size (the number of processes in the process-version Servicer). The default is 1
-http_port: 0 # HTTP service port. The HTTP service is not started when the value is less than or equal to 0. The default is 0
-dag:
-  is_thread_op: true # Whether to use the thread version of Op. The default is true
-  client_type: brpc # Use brpc or grpc client. The default is brpc
-  retry: 1 # The number of times the DAG executor retries after failure. The default is 1, i.e. no retry
-  use_profile: false # Whether to print the log on the server side. The default is false
-  tracer:
-    interval_s: -1 # Monitoring interval of Tracer, in seconds. Monitoring is not started when the value is less than 1. The default is -1
-op:
-  : # op name, corresponding to the one defined in the program
-    concurrency: 1 # op concurrency, the default is 1
-    timeout: -1 # prediction timeout in milliseconds. The default is -1, i.e. no timeout
-    retry: 1 # timeout retransmissions. The default is 1, i.e. no retry
-    batch_size: 1 # batch_size for auto-batching; if this field is set, the Op merges multiple request outputs into a single batch
-    auto_batching_timeout: -1 # auto-batching timeout in milliseconds. The default is -1, i.e. no timeout
-    local_service_conf:
-      model_config: # the path of the corresponding model file. There is no default value (None). If this item is not configured, the model file will not be loaded
-      workdir: "" # working directory of the corresponding model
-      thread_num: 2 # the number of threads the corresponding model is started with
-      devices: "" # the device the model runs on. You can specify GPU card numbers (such as "0,1,2"); CPU by default
-      mem_optim: true # memory optimization option. The default is true
-      ir_optim: false # IR optimization option. The default is false
-```
-
-All the fields of an Op can also be defined when the Op is created in the program (these override the yaml fields).
diff --git a/doc/cpp_server/NEW_WEB_SERVICE_CN.md b/doc/cpp_server/NEW_WEB_SERVICE_CN.md
deleted file mode 100644
index af6730a89..000000000
--- a/doc/cpp_server/NEW_WEB_SERVICE_CN.md
+++ /dev/null
@@ -1,152 +0,0 @@
-# How to develop a new Web Service?
-
-
-(简体中文|[English](NEW_WEB_SERVICE.md))
-
-This document takes the Uci house-price prediction service as an example to introduce how to develop a new Web Service. You can find the complete code [here](../python/examples/pipeline/simple_web_service/web_service.py).
-
-## Op base class
-
-In some services, a single model may not meet business needs, and multiple models have to be chained in series or run in parallel to complete the whole service. We call the operation of a single model an Op, and provide a simple set of interfaces to implement the complex logic of chaining Ops in series or running them in parallel.
-
-Data between Ops is passed as dictionaries; Ops can be started as threads or processes, and the number of concurrent instances of each Op is configurable.
-
-Usually you need to inherit the Op base class and override its `init_op`, `preprocess` and `postprocess` methods; the default implementations are as follows:
-
-```python
-class Op(object):
-    def init_op(self):
-        pass
-    def preprocess(self, input_dicts):
-        # multiple previous Op
-        if len(input_dicts) != 1:
-            _LOGGER.critical(
-                "Failed to run preprocess: this Op has multiple previous "
-                "inputs. Please override this func.")
-            os._exit(-1)
-        (_, input_dict), = input_dicts.items()
-        return input_dict
-    def postprocess(self, input_dicts, fetch_dict):
-        return fetch_dict
-```
-
-### The init_op method
-
-This method is used to load user-defined resources (such as dictionaries). In [UciOp](../python/examples/pipeline/simple_web_service/web_service.py) a separator is loaded.
-
-**Note**: If the Op is loaded in thread mode and runs with multiple concurrent instances, the different threads of the same Op execute `init_op` only once and share the resources it loads.
-
-### The preprocess method
-
-This method is used to preprocess the data before model prediction. It has an `input_dicts` parameter: a dictionary whose keys are the `name`s of the preceding Ops and whose values are the data passed from the corresponding preceding Op (the data is also in dictionary format).
-
-The `preprocess` method needs to process the data into an ndarray dictionary (keys are feed variable names, values are the corresponding ndarray values). The Op takes the return value as the input of model prediction and passes the output to the `postprocess` method.
-
-**Note**: if the Op does not have a model configured, the return value of `preprocess` is passed directly to `postprocess`.
-
-### The postprocess method
-
-This method is used to post-process the data after model prediction. It has two parameters, `input_dicts` and `fetch_dict`.
-
-`input_dicts` is the same as the parameter in `preprocess`, and `fetch_dict` is the output of model prediction (keys are fetch variable names, values are the corresponding ndarray values). The Op takes the return value of `postprocess` as the input of the subsequent Op's `preprocess`.
-
-**Note**: if the Op does not have a model configured, `fetch_dict` will be the return value of `preprocess`.
-
-
-
-Below is the Op of the Uci example:
-
-```python
-class UciOp(Op):
-    def init_op(self):
-        self.separator = ","
-
-    def preprocess(self, input_dicts):
-        (_, input_dict), = input_dicts.items()
-        x_value = input_dict["x"]
-        if isinstance(x_value, (str, unicode)):
-            input_dict["x"] = np.array(
-                [float(x.strip()) for x in x_value.split(self.separator)])
-        return input_dict
-
-    def postprocess(self, input_dicts, fetch_dict):
-        fetch_dict["price"] = str(fetch_dict["price"][0][0])
-        return fetch_dict
-```
-
-
-
-## WebService base class
-
-Paddle Serving implements the [WebService](https://github.com/PaddlePaddle/Serving/blob/develop/python/paddle_serving_server/web_service.py#L28) base class. You need to override its `get_pipeline_response` method to define the topological relationships between Ops and return the Op that produces the Response. The default implementation is as follows:
-
-```python
-class WebService(object):
-    def get_pipeline_response(self, read_op):
-        return None
-```
-
-Here `read_op` serves as the entry point of the whole service's topology (that is, the first user-defined Op takes `read_op` as its predecessor).
-
-For a single-Op service (single model), take the Uci service as an example (there is only one Uci house-price prediction model in the whole service):
-
-```python
-class UciService(WebService):
-    def get_pipeline_response(self, read_op):
-        uci_op = UciOp(name="uci", input_ops=[read_op])
-        return uci_op
-```
-
-For multi-Op services (multiple models), take the Ocr service as an example (the whole service is completed by the Det model and the Rec model in series):
-
-```python
-class OcrService(WebService):
-    def get_pipeline_response(self, read_op):
-        det_op = DetOp(name="det", input_ops=[read_op])
-        rec_op = RecOp(name="rec", input_ops=[det_op])
-        return rec_op
-```
-
-
-
-A WebService object needs to load a yaml configuration file through `prepare_pipeline_config` to configure each Op and the service as a whole. The simplest configuration file is as follows (Uci example):
-
-```yaml
-http_port: 18080
-op:
-  uci:
-    local_service_conf:
-      model_config: uci_housing_model # path
-```
-
-All the fields of the yaml file are listed below:
-
-```yaml
-rpc_port: 18080 # gRPC port
-build_dag_each_worker: false # Whether to use the process-version Servicer. The default is false
-worker_num: 1 # gRPC thread pool size (the number of processes in the process-version Servicer). The default is 1
-http_port: 0 # HTTP service port. The HTTP service is not started when the value is less than or equal to 0. The default is 0
-dag:
-  is_thread_op: true # Whether to use the thread version of Op. The default is true
-  client_type: brpc # Use brpc or grpc client. The default is brpc
-  retry: 1 # The number of times the DAG executor retries after failure. The default is 1, i.e. no retry
-  use_profile: false # Whether to print the log on the server side. The default is false
-  tracer:
-    interval_s: -1 # Monitoring interval of Tracer, in seconds. Monitoring is not started when the value is less than 1. The default is -1
-op:
-  : # op name, corresponding to the one defined in the program
-    concurrency: 1 # op concurrency, the default is 1
-    timeout: -1 # prediction timeout in milliseconds. The default is -1, i.e. no timeout
-    retry: 1 # timeout retransmissions. The default is 1, i.e. no retry
-    batch_size: 1 # batch_size for auto-batching; if this field is set, the Op merges multiple request outputs into a single batch
-    auto_batching_timeout: -1 # auto-batching timeout in milliseconds. The default is -1, i.e. no timeout
-    local_service_conf:
-      model_config: # the path of the corresponding model file. There is no default value (None). If this item is not configured, the model file will not be loaded
-      workdir: "" # working directory of the corresponding model
-      thread_num: 2 # the number of threads the corresponding model is started with
-      devices: "" # the device the model runs on. You can specify GPU card numbers (such as "0,1,2"); CPU by default
-      mem_optim: true # memory optimization option. The default is true
-      ir_optim: false # IR optimization option. The default is false
-```
-
-All the fields of an Op can also be defined when the Op is created in the program (these override the yaml fields).
diff --git a/doc/images/asyn_benchmark.png b/doc/images/asyn_benchmark.png
new file mode 100644
index 000000000..13b1f356e
Binary files /dev/null and b/doc/images/asyn_benchmark.png differ
diff --git a/doc/images/asyn_mode.png b/doc/images/asyn_mode.png
new file mode 100644
index 000000000..711f3a242
Binary files /dev/null and b/doc/images/asyn_mode.png differ
diff --git a/doc/images/multi_model.png b/doc/images/multi_model.png
new file mode 100644
index 000000000..369410dcb
Binary files /dev/null and b/doc/images/multi_model.png differ
diff --git a/doc/images/qps-threads-bow.png b/doc/images/qps-threads-bow.png
new file mode 100755
index 000000000..8123b71ee
Binary files /dev/null and b/doc/images/qps-threads-bow.png differ
diff --git a/doc/images/qps-threads-cnn.png b/doc/images/qps-threads-cnn.png
new file mode 100755
index 000000000..f983d4882
Binary files /dev/null and b/doc/images/qps-threads-cnn.png differ
diff --git a/doc/images/serving-timings.png b/doc/images/serving-timings.png
new file mode 100755
index 000000000..32bab31a5
Binary files /dev/null and b/doc/images/serving-timings.png differ
diff --git a/doc/images/syn_benchmark.png b/doc/images/syn_benchmark.png
new file mode 100644
index 000000000..ad42e187e
Binary files /dev/null and b/doc/images/syn_benchmark.png differ
diff --git a/doc/images/syn_mode.png b/doc/images/syn_mode.png
new file mode 100644
index 000000000..9bae50d01
Binary files /dev/null and b/doc/images/syn_mode.png differ
diff --git a/examples/Cpp/PaddleClas/imagenet/README.md b/examples/C++/PaddleClas/imagenet/README.md
similarity index 100%
rename from examples/Cpp/PaddleClas/imagenet/README.md
rename to examples/C++/PaddleClas/imagenet/README.md
diff --git a/examples/Cpp/PaddleClas/imagenet/README_CN.md b/examples/C++/PaddleClas/imagenet/README_CN.md
similarity index 100%
rename from examples/Cpp/PaddleClas/imagenet/README_CN.md
rename to examples/C++/PaddleClas/imagenet/README_CN.md
diff --git a/examples/Cpp/PaddleClas/imagenet/benchmark.py b/examples/C++/PaddleClas/imagenet/benchmark.py
similarity index 100%
rename from examples/Cpp/PaddleClas/imagenet/benchmark.py
rename to examples/C++/PaddleClas/imagenet/benchmark.py
diff --git a/examples/Cpp/PaddleClas/imagenet/benchmark.sh b/examples/C++/PaddleClas/imagenet/benchmark.sh
similarity index 100%
rename from examples/Cpp/PaddleClas/imagenet/benchmark.sh
rename to examples/C++/PaddleClas/imagenet/benchmark.sh
diff --git a/examples/Cpp/PaddleClas/imagenet/daisy.jpg b/examples/C++/PaddleClas/imagenet/daisy.jpg
similarity index 100%
rename from examples/Cpp/PaddleClas/imagenet/daisy.jpg
rename to examples/C++/PaddleClas/imagenet/daisy.jpg
diff --git a/examples/Cpp/PaddleClas/imagenet/data/n01440764_10026.JPEG b/examples/C++/PaddleClas/imagenet/data/n01440764_10026.JPEG
similarity index 100%
rename from examples/Cpp/PaddleClas/imagenet/data/n01440764_10026.JPEG
rename to examples/C++/PaddleClas/imagenet/data/n01440764_10026.JPEG
diff --git a/examples/Cpp/PaddleClas/imagenet/flower.jpg b/examples/C++/PaddleClas/imagenet/flower.jpg
similarity index 100%
rename from examples/Cpp/PaddleClas/imagenet/flower.jpg
rename to examples/C++/PaddleClas/imagenet/flower.jpg
diff --git a/examples/Cpp/PaddleClas/imagenet/get_model.sh b/examples/C++/PaddleClas/imagenet/get_model.sh
similarity index 100%
rename from examples/Cpp/PaddleClas/imagenet/get_model.sh
rename to examples/C++/PaddleClas/imagenet/get_model.sh
diff --git a/examples/Cpp/PaddleClas/imagenet/imagenet.label b/examples/C++/PaddleClas/imagenet/imagenet.label
similarity index 100%
rename from examples/Cpp/PaddleClas/imagenet/imagenet.label
rename to examples/C++/PaddleClas/imagenet/imagenet.label
diff --git a/examples/Cpp/PaddleClas/imagenet/resnet50_http_client.py b/examples/C++/PaddleClas/imagenet/resnet50_http_client.py
similarity index 100%
rename from examples/Cpp/PaddleClas/imagenet/resnet50_http_client.py
rename to examples/C++/PaddleClas/imagenet/resnet50_http_client.py
diff --git a/examples/Cpp/PaddleClas/imagenet/resnet50_rpc_client.py b/examples/C++/PaddleClas/imagenet/resnet50_rpc_client.py
similarity index 100%
rename from examples/Cpp/PaddleClas/imagenet/resnet50_rpc_client.py
rename to examples/C++/PaddleClas/imagenet/resnet50_rpc_client.py
diff --git a/examples/Cpp/PaddleClas/imagenet/test_image_reader.py b/examples/C++/PaddleClas/imagenet/test_image_reader.py
similarity index 100%
rename from examples/Cpp/PaddleClas/imagenet/test_image_reader.py
rename to examples/C++/PaddleClas/imagenet/test_image_reader.py
diff --git a/examples/Cpp/PaddleClas/mobilenet/README.md b/examples/C++/PaddleClas/mobilenet/README.md
similarity index 100%
rename from examples/Cpp/PaddleClas/mobilenet/README.md
rename to examples/C++/PaddleClas/mobilenet/README.md
diff --git a/examples/Cpp/PaddleClas/mobilenet/README_CN.md b/examples/C++/PaddleClas/mobilenet/README_CN.md
similarity index 100%
rename from examples/Cpp/PaddleClas/mobilenet/README_CN.md
rename to examples/C++/PaddleClas/mobilenet/README_CN.md
diff --git a/examples/Cpp/PaddleClas/mobilenet/daisy.jpg b/examples/C++/PaddleClas/mobilenet/daisy.jpg
similarity index 100%
rename from examples/Cpp/PaddleClas/mobilenet/daisy.jpg
rename to examples/C++/PaddleClas/mobilenet/daisy.jpg
diff --git a/examples/Cpp/PaddleClas/mobilenet/mobilenet_tutorial.py b/examples/C++/PaddleClas/mobilenet/mobilenet_tutorial.py
similarity index 100%
rename from examples/Cpp/PaddleClas/mobilenet/mobilenet_tutorial.py
rename to examples/C++/PaddleClas/mobilenet/mobilenet_tutorial.py
diff --git a/examples/Cpp/PaddleClas/resnet_v2_50/README.md b/examples/C++/PaddleClas/resnet_v2_50/README.md
similarity index 100%
rename from examples/Cpp/PaddleClas/resnet_v2_50/README.md
rename to examples/C++/PaddleClas/resnet_v2_50/README.md
diff --git a/examples/Cpp/PaddleClas/resnet_v2_50/README_CN.md b/examples/C++/PaddleClas/resnet_v2_50/README_CN.md
similarity index 100%
rename from examples/Cpp/PaddleClas/resnet_v2_50/README_CN.md
rename to examples/C++/PaddleClas/resnet_v2_50/README_CN.md
diff --git a/examples/Cpp/PaddleClas/resnet_v2_50/benchmark.py b/examples/C++/PaddleClas/resnet_v2_50/benchmark.py
similarity index 100%
rename from examples/Cpp/PaddleClas/resnet_v2_50/benchmark.py
rename to examples/C++/PaddleClas/resnet_v2_50/benchmark.py
diff --git a/examples/Cpp/PaddleClas/resnet_v2_50/benchmark.sh b/examples/C++/PaddleClas/resnet_v2_50/benchmark.sh
similarity index 100%
rename from examples/Cpp/PaddleClas/resnet_v2_50/benchmark.sh
rename to examples/C++/PaddleClas/resnet_v2_50/benchmark.sh
diff --git a/examples/Cpp/PaddleClas/resnet_v2_50/daisy.jpg b/examples/C++/PaddleClas/resnet_v2_50/daisy.jpg
similarity index 100%
rename from examples/Cpp/PaddleClas/resnet_v2_50/daisy.jpg
rename to examples/C++/PaddleClas/resnet_v2_50/daisy.jpg
diff --git a/examples/Cpp/PaddleClas/resnet_v2_50/resnet50_debug.py b/examples/C++/PaddleClas/resnet_v2_50/resnet50_debug.py
similarity index 100%
rename from examples/Cpp/PaddleClas/resnet_v2_50/resnet50_debug.py
rename to examples/C++/PaddleClas/resnet_v2_50/resnet50_debug.py
diff --git a/examples/Cpp/PaddleClas/resnet_v2_50/resnet50_v2_tutorial.py b/examples/C++/PaddleClas/resnet_v2_50/resnet50_v2_tutorial.py
similarity index 100%
rename from examples/Cpp/PaddleClas/resnet_v2_50/resnet50_v2_tutorial.py
rename to examples/C++/PaddleClas/resnet_v2_50/resnet50_v2_tutorial.py
diff --git a/examples/Cpp/PaddleClas/resnet_v2_50/run_benchmark.sh b/examples/C++/PaddleClas/resnet_v2_50/run_benchmark.sh
similarity index 100%
rename from examples/Cpp/PaddleClas/resnet_v2_50/run_benchmark.sh
rename to examples/C++/PaddleClas/resnet_v2_50/run_benchmark.sh
diff --git a/examples/Cpp/PaddleDetection/README.md b/examples/C++/PaddleDetection/README.md
similarity index 100%
rename from examples/Cpp/PaddleDetection/README.md
rename to examples/C++/PaddleDetection/README.md
diff --git a/examples/Cpp/PaddleDetection/README_CN.md b/examples/C++/PaddleDetection/README_CN.md
similarity index 100%
rename from examples/Cpp/PaddleDetection/README_CN.md
rename to examples/C++/PaddleDetection/README_CN.md
diff --git a/examples/Cpp/PaddleDetection/blazeface/README.md b/examples/C++/PaddleDetection/blazeface/README.md
similarity index 100%
rename from examples/Cpp/PaddleDetection/blazeface/README.md
rename to examples/C++/PaddleDetection/blazeface/README.md
diff --git a/examples/Cpp/PaddleDetection/blazeface/test_client.py b/examples/C++/PaddleDetection/blazeface/test_client.py
similarity index 100%
rename from examples/Cpp/PaddleDetection/blazeface/test_client.py
rename to examples/C++/PaddleDetection/blazeface/test_client.py
diff --git a/examples/Cpp/PaddleDetection/cascade_rcnn/000000570688.jpg b/examples/C++/PaddleDetection/cascade_rcnn/000000570688.jpg
similarity index 100%
rename from examples/Cpp/PaddleDetection/cascade_rcnn/000000570688.jpg
rename to examples/C++/PaddleDetection/cascade_rcnn/000000570688.jpg
diff --git a/examples/Cpp/PaddleDetection/cascade_rcnn/README.md b/examples/C++/PaddleDetection/cascade_rcnn/README.md
similarity index 100%
rename from examples/Cpp/PaddleDetection/cascade_rcnn/README.md
rename to examples/C++/PaddleDetection/cascade_rcnn/README.md
diff --git a/examples/Cpp/PaddleDetection/cascade_rcnn/README_CN.md b/examples/C++/PaddleDetection/cascade_rcnn/README_CN.md
similarity index 100%
rename from examples/Cpp/PaddleDetection/cascade_rcnn/README_CN.md
rename to examples/C++/PaddleDetection/cascade_rcnn/README_CN.md
diff --git a/examples/Cpp/PaddleDetection/cascade_rcnn/get_data.sh b/examples/C++/PaddleDetection/cascade_rcnn/get_data.sh
similarity index 100%
rename from examples/Cpp/PaddleDetection/cascade_rcnn/get_data.sh
rename to examples/C++/PaddleDetection/cascade_rcnn/get_data.sh
diff --git a/examples/Cpp/PaddleDetection/cascade_rcnn/label_list.txt b/examples/C++/PaddleDetection/cascade_rcnn/label_list.txt
similarity index 100%
rename from examples/Cpp/PaddleDetection/cascade_rcnn/label_list.txt
rename to examples/C++/PaddleDetection/cascade_rcnn/label_list.txt
diff --git a/examples/Cpp/PaddleDetection/cascade_rcnn/test_client.py b/examples/C++/PaddleDetection/cascade_rcnn/test_client.py
similarity index 84%
rename from examples/Cpp/PaddleDetection/cascade_rcnn/test_client.py
rename to examples/C++/PaddleDetection/cascade_rcnn/test_client.py
index aac9f6721..6ddb8a79e 100644
--- a/examples/Cpp/PaddleDetection/cascade_rcnn/test_client.py
+++ b/examples/C++/PaddleDetection/cascade_rcnn/test_client.py
@@ -19,11 +19,10 @@
 import cv2
 
 preprocess = DetectionSequential([
-    DetectionFile2Image(),
-    DetectionResize((800, 1333), True, interpolation=2),
-    DetectionNormalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225], True),
-    DetectionTranspose((2,0,1)),
-    DetectionPadStride(32)
+    DetectionFile2Image(), DetectionResize(
+        (800, 1333), True, interpolation=2),
+    DetectionNormalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225], True),
+    DetectionTranspose((2, 0, 1)), DetectionPadStride(32)
 ])
 
 postprocess = RCNNPostprocess("label_list.txt", "output")
diff --git a/examples/Cpp/PaddleDetection/faster_rcnn_hrnetv2p_w18_1x/000000570688.jpg b/examples/C++/PaddleDetection/faster_rcnn_hrnetv2p_w18_1x/000000570688.jpg
similarity index 100%
rename from examples/Cpp/PaddleDetection/faster_rcnn_hrnetv2p_w18_1x/000000570688.jpg
rename to examples/C++/PaddleDetection/faster_rcnn_hrnetv2p_w18_1x/000000570688.jpg
diff --git a/examples/Cpp/PaddleDetection/faster_rcnn_hrnetv2p_w18_1x/README.md b/examples/C++/PaddleDetection/faster_rcnn_hrnetv2p_w18_1x/README.md
similarity index 100%
rename from examples/Cpp/PaddleDetection/faster_rcnn_hrnetv2p_w18_1x/README.md
rename to examples/C++/PaddleDetection/faster_rcnn_hrnetv2p_w18_1x/README.md
diff --git a/examples/Cpp/PaddleDetection/faster_rcnn_hrnetv2p_w18_1x/README_CN.md b/examples/C++/PaddleDetection/faster_rcnn_hrnetv2p_w18_1x/README_CN.md
similarity index 100%
rename from examples/Cpp/PaddleDetection/faster_rcnn_hrnetv2p_w18_1x/README_CN.md
rename to examples/C++/PaddleDetection/faster_rcnn_hrnetv2p_w18_1x/README_CN.md
diff --git a/examples/Cpp/PaddleDetection/faster_rcnn_hrnetv2p_w18_1x/label_list.txt b/examples/C++/PaddleDetection/faster_rcnn_hrnetv2p_w18_1x/label_list.txt
similarity index 100%
rename from examples/Cpp/PaddleDetection/faster_rcnn_hrnetv2p_w18_1x/label_list.txt
rename to examples/C++/PaddleDetection/faster_rcnn_hrnetv2p_w18_1x/label_list.txt
diff --git a/examples/Cpp/PaddleDetection/ssd_vgg16_300_240e_voc/test_client.py b/examples/C++/PaddleDetection/faster_rcnn_hrnetv2p_w18_1x/test_client.py
similarity index 85%
rename from examples/Cpp/PaddleDetection/ssd_vgg16_300_240e_voc/test_client.py
rename to examples/C++/PaddleDetection/faster_rcnn_hrnetv2p_w18_1x/test_client.py
index 1df635c89..e93b93a41 100644
--- a/examples/Cpp/PaddleDetection/ssd_vgg16_300_240e_voc/test_client.py
+++ b/examples/C++/PaddleDetection/faster_rcnn_hrnetv2p_w18_1x/test_client.py
@@ -19,11 +19,10 @@
 import cv2
 
 preprocess = DetectionSequential([
-    DetectionFile2Image(),
-    DetectionResize(
-        (300, 300), False, interpolation=cv2.INTER_LINEAR),
-    DetectionNormalize([104.0, 117.0, 123.0], [1.0, 1.0, 1.0], False),
-    DetectionTranspose((2,0,1)),
+    DetectionFile2Image(), DetectionResize(
+        (800, 1333), True, interpolation=2),
+    DetectionNormalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225], True),
+    DetectionTranspose((2, 0, 1)), DetectionPadStride(32)
 ])
 
 postprocess = RCNNPostprocess("label_list.txt", "output")
diff --git a/examples/Cpp/PaddleDetection/faster_rcnn_r50_fpn_1x_coco/000000570688.jpg b/examples/C++/PaddleDetection/faster_rcnn_r50_fpn_1x_coco/000000570688.jpg
similarity index 100%
rename from examples/Cpp/PaddleDetection/faster_rcnn_r50_fpn_1x_coco/000000570688.jpg
rename to examples/C++/PaddleDetection/faster_rcnn_r50_fpn_1x_coco/000000570688.jpg
diff --git a/examples/Cpp/PaddleDetection/faster_rcnn_r50_fpn_1x_coco/README.md b/examples/C++/PaddleDetection/faster_rcnn_r50_fpn_1x_coco/README.md
similarity index 100%
rename from examples/Cpp/PaddleDetection/faster_rcnn_r50_fpn_1x_coco/README.md
rename to examples/C++/PaddleDetection/faster_rcnn_r50_fpn_1x_coco/README.md
diff --git a/examples/Cpp/PaddleDetection/faster_rcnn_r50_fpn_1x_coco/README_CN.md b/examples/C++/PaddleDetection/faster_rcnn_r50_fpn_1x_coco/README_CN.md
similarity index 100%
rename from examples/Cpp/PaddleDetection/faster_rcnn_r50_fpn_1x_coco/README_CN.md
rename to examples/C++/PaddleDetection/faster_rcnn_r50_fpn_1x_coco/README_CN.md
diff --git a/examples/Cpp/PaddleDetection/faster_rcnn_r50_fpn_1x_coco/label_list.txt b/examples/C++/PaddleDetection/faster_rcnn_r50_fpn_1x_coco/label_list.txt
similarity index 100%
rename from examples/Cpp/PaddleDetection/faster_rcnn_r50_fpn_1x_coco/label_list.txt
rename to examples/C++/PaddleDetection/faster_rcnn_r50_fpn_1x_coco/label_list.txt
diff --git a/examples/C++/PaddleDetection/faster_rcnn_r50_fpn_1x_coco/test_client.py b/examples/C++/PaddleDetection/faster_rcnn_r50_fpn_1x_coco/test_client.py
new file mode 100644
index 000000000..b56a1a5bb
--- /dev/null
+++ b/examples/C++/PaddleDetection/faster_rcnn_r50_fpn_1x_coco/test_client.py
@@ -0,0 +1,45 @@
+# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import sys
+import numpy as np
+from paddle_serving_client import Client
+from paddle_serving_app.reader import *
+import cv2
+
+preprocess = DetectionSequential([
+    DetectionFile2Image(),
+    DetectionNormalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225], True),
+    DetectionResize(
+        (800, 1333), True, interpolation=cv2.INTER_LINEAR), DetectionTranspose(
+            (2, 0, 1)), DetectionPadStride(128)
+])
+
+postprocess = RCNNPostprocess("label_list.txt", "output")
+client = Client()
+
+client.load_client_config("serving_client/serving_client_conf.prototxt")
+client.connect(['127.0.0.1:9494'])
+
+im, im_info = preprocess(sys.argv[1])
+fetch_map = client.predict(
+    feed={
+        "image": im,
+        "im_shape": np.array(list(im.shape[1:])).reshape(-1),
+        "scale_factor": im_info['scale_factor'],
+    },
+    fetch=["save_infer_model/scale_0.tmp_1"],
+    batch=False)
+fetch_map["image"] = sys.argv[1]
+postprocess(fetch_map)
diff --git a/examples/Cpp/PaddleDetection/fcos_dcn_r50_fpn_1x_coco/000000014439.jpg b/examples/C++/PaddleDetection/fcos_dcn_r50_fpn_1x_coco/000000014439.jpg
similarity index 100%
rename from examples/Cpp/PaddleDetection/fcos_dcn_r50_fpn_1x_coco/000000014439.jpg
rename to examples/C++/PaddleDetection/fcos_dcn_r50_fpn_1x_coco/000000014439.jpg
diff --git a/examples/Cpp/PaddleDetection/fcos_dcn_r50_fpn_1x_coco/README.md b/examples/C++/PaddleDetection/fcos_dcn_r50_fpn_1x_coco/README.md
similarity index 100%
rename from examples/Cpp/PaddleDetection/fcos_dcn_r50_fpn_1x_coco/README.md
rename to examples/C++/PaddleDetection/fcos_dcn_r50_fpn_1x_coco/README.md
diff --git a/examples/Cpp/PaddleDetection/fcos_dcn_r50_fpn_1x_coco/README_CN.md b/examples/C++/PaddleDetection/fcos_dcn_r50_fpn_1x_coco/README_CN.md
similarity index 100%
rename from examples/Cpp/PaddleDetection/fcos_dcn_r50_fpn_1x_coco/README_CN.md
rename to examples/C++/PaddleDetection/fcos_dcn_r50_fpn_1x_coco/README_CN.md
diff --git a/examples/Cpp/PaddleDetection/fcos_dcn_r50_fpn_1x_coco/label_list.txt b/examples/C++/PaddleDetection/fcos_dcn_r50_fpn_1x_coco/label_list.txt
similarity index 100%
rename from examples/Cpp/PaddleDetection/fcos_dcn_r50_fpn_1x_coco/label_list.txt
rename to examples/C++/PaddleDetection/fcos_dcn_r50_fpn_1x_coco/label_list.txt
diff --git a/examples/Cpp/PaddleDetection/fcos_dcn_r50_fpn_1x_coco/test_client.py b/examples/C++/PaddleDetection/fcos_dcn_r50_fpn_1x_coco/test_client.py
similarity index 82%
rename from examples/Cpp/PaddleDetection/fcos_dcn_r50_fpn_1x_coco/test_client.py
rename to examples/C++/PaddleDetection/fcos_dcn_r50_fpn_1x_coco/test_client.py
index 7ad59d75b..0cacceb86 100644
--- a/examples/Cpp/PaddleDetection/fcos_dcn_r50_fpn_1x_coco/test_client.py
+++ b/examples/C++/PaddleDetection/fcos_dcn_r50_fpn_1x_coco/test_client.py
@@ -19,12 +19,11 @@
 import cv2
 
 preprocess = DetectionSequential([
-    DetectionFile2Image(),
-    DetectionNormalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225], True),
-    DetectionResize(
-        (800, 1333), True, interpolation=cv2.INTER_LINEAR),
-    DetectionTranspose((2,0,1)),
-    DetectionPadStride(128)
+    DetectionFile2Image(),
+    DetectionNormalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225], True),
+    DetectionResize(
+        (800, 1333), True, interpolation=cv2.INTER_LINEAR), DetectionTranspose(
+            (2, 0, 1)), DetectionPadStride(128)
 ])
 
 postprocess = RCNNPostprocess("label_list.txt", "output")
diff --git a/examples/Cpp/PaddleDetection/ppyolo_r50vd_dcn_1x_coco/000000570688.jpg b/examples/C++/PaddleDetection/ppyolo_r50vd_dcn_1x_coco/000000570688.jpg
similarity index 100%
rename from examples/Cpp/PaddleDetection/ppyolo_r50vd_dcn_1x_coco/000000570688.jpg
rename to examples/C++/PaddleDetection/ppyolo_r50vd_dcn_1x_coco/000000570688.jpg
diff --git a/examples/Cpp/PaddleDetection/ppyolo_r50vd_dcn_1x_coco/README.md b/examples/C++/PaddleDetection/ppyolo_r50vd_dcn_1x_coco/README.md
similarity index 100%
rename from examples/Cpp/PaddleDetection/ppyolo_r50vd_dcn_1x_coco/README.md
rename to examples/C++/PaddleDetection/ppyolo_r50vd_dcn_1x_coco/README.md
diff --git a/examples/Cpp/PaddleDetection/ppyolo_r50vd_dcn_1x_coco/README_CN.md b/examples/C++/PaddleDetection/ppyolo_r50vd_dcn_1x_coco/README_CN.md
similarity index 100%
rename from examples/Cpp/PaddleDetection/ppyolo_r50vd_dcn_1x_coco/README_CN.md
rename to examples/C++/PaddleDetection/ppyolo_r50vd_dcn_1x_coco/README_CN.md
diff --git a/examples/Cpp/PaddleDetection/ppyolo_r50vd_dcn_1x_coco/label_list.txt b/examples/C++/PaddleDetection/ppyolo_r50vd_dcn_1x_coco/label_list.txt
similarity index 100%
rename from examples/Cpp/PaddleDetection/ppyolo_r50vd_dcn_1x_coco/label_list.txt
rename to examples/C++/PaddleDetection/ppyolo_r50vd_dcn_1x_coco/label_list.txt
diff --git a/examples/Cpp/PaddleDetection/ppyolo_r50vd_dcn_1x_coco/test_client.py b/examples/C++/PaddleDetection/ppyolo_r50vd_dcn_1x_coco/test_client.py
similarity index 85%
rename from examples/Cpp/PaddleDetection/ppyolo_r50vd_dcn_1x_coco/test_client.py
rename to examples/C++/PaddleDetection/ppyolo_r50vd_dcn_1x_coco/test_client.py
index f40f2d5c8..6e6c5dd65 100644
--- a/examples/Cpp/PaddleDetection/ppyolo_r50vd_dcn_1x_coco/test_client.py
+++ b/examples/C++/PaddleDetection/ppyolo_r50vd_dcn_1x_coco/test_client.py
@@ -19,11 +19,10 @@
 import cv2
 
 preprocess = DetectionSequential([
-    DetectionFile2Image(),
-    DetectionNormalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225], True),
-    DetectionResize(
-        (608, 608), False, interpolation=2),
-    DetectionTranspose((2,0,1))
+    DetectionFile2Image(),
+    DetectionNormalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225], True),
+    DetectionResize(
+        (608, 608), False, interpolation=2), DetectionTranspose((2, 0, 1))
 ])
 
 postprocess = RCNNPostprocess("label_list.txt", "output")
diff --git a/examples/Cpp/PaddleDetection/ssd_vgg16_300_240e_voc/000000014439.jpg b/examples/C++/PaddleDetection/ssd_vgg16_300_240e_voc/000000014439.jpg
similarity index 100%
rename from examples/Cpp/PaddleDetection/ssd_vgg16_300_240e_voc/000000014439.jpg
rename to examples/C++/PaddleDetection/ssd_vgg16_300_240e_voc/000000014439.jpg
diff --git a/examples/Cpp/PaddleDetection/ssd_vgg16_300_240e_voc/README.md b/examples/C++/PaddleDetection/ssd_vgg16_300_240e_voc/README.md
similarity index 100%
rename from examples/Cpp/PaddleDetection/ssd_vgg16_300_240e_voc/README.md
rename to examples/C++/PaddleDetection/ssd_vgg16_300_240e_voc/README.md
diff --git a/examples/Cpp/PaddleDetection/ssd_vgg16_300_240e_voc/README_CN.md b/examples/C++/PaddleDetection/ssd_vgg16_300_240e_voc/README_CN.md
similarity index 100%
rename from examples/Cpp/PaddleDetection/ssd_vgg16_300_240e_voc/README_CN.md
rename to examples/C++/PaddleDetection/ssd_vgg16_300_240e_voc/README_CN.md
diff --git a/examples/Cpp/PaddleDetection/ssd_vgg16_300_240e_voc/label_list.txt b/examples/C++/PaddleDetection/ssd_vgg16_300_240e_voc/label_list.txt
similarity index 100%
rename from examples/Cpp/PaddleDetection/ssd_vgg16_300_240e_voc/label_list.txt
rename to examples/C++/PaddleDetection/ssd_vgg16_300_240e_voc/label_list.txt
diff --git a/examples/Cpp/PaddleDetection/faster_rcnn_hrnetv2p_w18_1x/test_client.py b/examples/C++/PaddleDetection/ssd_vgg16_300_240e_voc/test_client.py
similarity index 84%
rename from examples/Cpp/PaddleDetection/faster_rcnn_hrnetv2p_w18_1x/test_client.py
rename to examples/C++/PaddleDetection/ssd_vgg16_300_240e_voc/test_client.py
index 329f6effb..47e134a25 100644
--- a/examples/Cpp/PaddleDetection/faster_rcnn_hrnetv2p_w18_1x/test_client.py
+++ b/examples/C++/PaddleDetection/ssd_vgg16_300_240e_voc/test_client.py
@@ -19,11 +19,11 @@
 import cv2
 
 preprocess = DetectionSequential([
-    DetectionFile2Image(),
-    DetectionResize((800, 1333), True, interpolation=2),
-    DetectionNormalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225], True),
-    DetectionTranspose((2,0,1)),
-    DetectionPadStride(32)
+    DetectionFile2Image(),
+    DetectionResize(
+        (300, 300), False, interpolation=cv2.INTER_LINEAR),
+    DetectionNormalize([104.0, 117.0, 123.0], [1.0, 1.0, 1.0], False),
+    DetectionTranspose((2, 0, 1)),
 ])
 
 postprocess = RCNNPostprocess("label_list.txt", "output")
diff --git a/examples/Cpp/PaddleDetection/ttfnet_darknet53_1x_coco/000000570688.jpg b/examples/C++/PaddleDetection/ttfnet_darknet53_1x_coco/000000570688.jpg
similarity index 100%
rename from examples/Cpp/PaddleDetection/ttfnet_darknet53_1x_coco/000000570688.jpg
rename to examples/C++/PaddleDetection/ttfnet_darknet53_1x_coco/000000570688.jpg
diff --git a/examples/Cpp/PaddleDetection/ttfnet_darknet53_1x_coco/README.md b/examples/C++/PaddleDetection/ttfnet_darknet53_1x_coco/README.md
similarity index 100%
rename from examples/Cpp/PaddleDetection/ttfnet_darknet53_1x_coco/README.md
rename to examples/C++/PaddleDetection/ttfnet_darknet53_1x_coco/README.md
diff --git a/examples/Cpp/PaddleDetection/ttfnet_darknet53_1x_coco/README_CN.md b/examples/C++/PaddleDetection/ttfnet_darknet53_1x_coco/README_CN.md
similarity index 100%
rename from examples/Cpp/PaddleDetection/ttfnet_darknet53_1x_coco/README_CN.md
rename to examples/C++/PaddleDetection/ttfnet_darknet53_1x_coco/README_CN.md
diff --git a/examples/Cpp/PaddleDetection/ttfnet_darknet53_1x_coco/label_list.txt b/examples/C++/PaddleDetection/ttfnet_darknet53_1x_coco/label_list.txt
similarity index 100%
rename from examples/Cpp/PaddleDetection/ttfnet_darknet53_1x_coco/label_list.txt
rename to examples/C++/PaddleDetection/ttfnet_darknet53_1x_coco/label_list.txt
diff --git a/examples/Cpp/PaddleDetection/ttfnet_darknet53_1x_coco/test_client.py b/examples/C++/PaddleDetection/ttfnet_darknet53_1x_coco/test_client.py
similarity index 83%
rename from examples/Cpp/PaddleDetection/ttfnet_darknet53_1x_coco/test_client.py
rename to examples/C++/PaddleDetection/ttfnet_darknet53_1x_coco/test_client.py
index f735c01bc..3fbcdc0aa 100644
--- a/examples/Cpp/PaddleDetection/ttfnet_darknet53_1x_coco/test_client.py
+++ b/examples/C++/PaddleDetection/ttfnet_darknet53_1x_coco/test_client.py
@@ -18,11 +18,10 @@
 import cv2
 
 preprocess = DetectionSequential([
-    DetectionFile2Image(),
-    DetectionResize(
-        (512, 512), False, interpolation=cv2.INTER_LINEAR),
-    DetectionNormalize([123.675, 116.28, 103.53], [58.395, 57.12, 57.375], False),
-    DetectionTranspose((2,0,1))
+    DetectionFile2Image(), DetectionResize(
+        (512, 512), False, interpolation=cv2.INTER_LINEAR), DetectionNormalize(
+            [123.675, 116.28, 103.53], [58.395, 57.12, 57.375], False),
+    DetectionTranspose((2, 0, 1))
 ])
 
 postprocess = RCNNPostprocess("label_list.txt", "output")
@@ -33,7 +32,6 @@
 im, im_info = preprocess(sys.argv[1])
-
 fetch_map = client.predict(
     feed={
         "image": im,
diff --git a/examples/Cpp/PaddleDetection/yolov3_darknet53_270e_coco/000000570688.jpg b/examples/C++/PaddleDetection/yolov3_darknet53_270e_coco/000000570688.jpg
similarity index 100%
rename from examples/Cpp/PaddleDetection/yolov3_darknet53_270e_coco/000000570688.jpg
rename to examples/C++/PaddleDetection/yolov3_darknet53_270e_coco/000000570688.jpg
diff --git a/examples/Cpp/PaddleDetection/yolov3_darknet53_270e_coco/README.md b/examples/C++/PaddleDetection/yolov3_darknet53_270e_coco/README.md
similarity index 100%
rename from examples/Cpp/PaddleDetection/yolov3_darknet53_270e_coco/README.md
rename to examples/C++/PaddleDetection/yolov3_darknet53_270e_coco/README.md
diff --git a/examples/Cpp/PaddleDetection/yolov3_darknet53_270e_coco/README_CN.md b/examples/C++/PaddleDetection/yolov3_darknet53_270e_coco/README_CN.md
similarity index 100%
rename from examples/Cpp/PaddleDetection/yolov3_darknet53_270e_coco/README_CN.md
rename to examples/C++/PaddleDetection/yolov3_darknet53_270e_coco/README_CN.md
diff --git a/examples/Cpp/PaddleDetection/yolov3_darknet53_270e_coco/label_list.txt b/examples/C++/PaddleDetection/yolov3_darknet53_270e_coco/label_list.txt
similarity index 100%
rename from examples/Cpp/PaddleDetection/yolov3_darknet53_270e_coco/label_list.txt
rename to examples/C++/PaddleDetection/yolov3_darknet53_270e_coco/label_list.txt
diff --git a/examples/Cpp/PaddleDetection/yolov3_darknet53_270e_coco/test_client.py b/examples/C++/PaddleDetection/yolov3_darknet53_270e_coco/test_client.py
similarity index 85%
rename from examples/Cpp/PaddleDetection/yolov3_darknet53_270e_coco/test_client.py
rename to examples/C++/PaddleDetection/yolov3_darknet53_270e_coco/test_client.py
index 04f21b32a..a54333399 100644
--- a/examples/Cpp/PaddleDetection/yolov3_darknet53_270e_coco/test_client.py
+++ b/examples/C++/PaddleDetection/yolov3_darknet53_270e_coco/test_client.py
@@ -19,11 +19,11 @@
import cv2
preprocess = DetectionSequential([
- DetectionFile2Image(),
- DetectionResize(
- (608, 608), False, interpolation=2),
- DetectionNormalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225], True),
- DetectionTranspose((2,0,1)),
+ DetectionFile2Image(),
+ DetectionResize(
+ (608, 608), False, interpolation=2),
+ DetectionNormalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225], True),
+ DetectionTranspose((2, 0, 1)),
])
postprocess = RCNNPostprocess("label_list.txt", "output")
diff --git a/examples/Cpp/PaddleDetection/yolov4/000000570688.jpg b/examples/C++/PaddleDetection/yolov4/000000570688.jpg
similarity index 100%
rename from examples/Cpp/PaddleDetection/yolov4/000000570688.jpg
rename to examples/C++/PaddleDetection/yolov4/000000570688.jpg
diff --git a/examples/Cpp/PaddleDetection/yolov4/README.md b/examples/C++/PaddleDetection/yolov4/README.md
similarity index 100%
rename from examples/Cpp/PaddleDetection/yolov4/README.md
rename to examples/C++/PaddleDetection/yolov4/README.md
diff --git a/examples/Cpp/PaddleDetection/yolov4/README_CN.md b/examples/C++/PaddleDetection/yolov4/README_CN.md
similarity index 100%
rename from examples/Cpp/PaddleDetection/yolov4/README_CN.md
rename to examples/C++/PaddleDetection/yolov4/README_CN.md
diff --git a/examples/Cpp/PaddleDetection/yolov4/label_list.txt b/examples/C++/PaddleDetection/yolov4/label_list.txt
similarity index 100%
rename from examples/Cpp/PaddleDetection/yolov4/label_list.txt
rename to examples/C++/PaddleDetection/yolov4/label_list.txt
diff --git a/examples/Cpp/PaddleDetection/yolov4/test_client.py b/examples/C++/PaddleDetection/yolov4/test_client.py
similarity index 100%
rename from examples/Cpp/PaddleDetection/yolov4/test_client.py
rename to examples/C++/PaddleDetection/yolov4/test_client.py
diff --git a/examples/Cpp/PaddleNLP/bert/README.md b/examples/C++/PaddleNLP/bert/README.md
similarity index 100%
rename from examples/Cpp/PaddleNLP/bert/README.md
rename to examples/C++/PaddleNLP/bert/README.md
diff --git a/examples/Cpp/PaddleNLP/bert/README_CN.md b/examples/C++/PaddleNLP/bert/README_CN.md
similarity index 100%
rename from examples/Cpp/PaddleNLP/bert/README_CN.md
rename to examples/C++/PaddleNLP/bert/README_CN.md
diff --git a/examples/Cpp/PaddleNLP/bert/batching.py b/examples/C++/PaddleNLP/bert/batching.py
similarity index 100%
rename from examples/Cpp/PaddleNLP/bert/batching.py
rename to examples/C++/PaddleNLP/bert/batching.py
diff --git a/examples/Cpp/PaddleNLP/bert/benchmark.py b/examples/C++/PaddleNLP/bert/benchmark.py
similarity index 100%
rename from examples/Cpp/PaddleNLP/bert/benchmark.py
rename to examples/C++/PaddleNLP/bert/benchmark.py
diff --git a/examples/Cpp/PaddleNLP/bert/benchmark.sh b/examples/C++/PaddleNLP/bert/benchmark.sh
similarity index 100%
rename from examples/Cpp/PaddleNLP/bert/benchmark.sh
rename to examples/C++/PaddleNLP/bert/benchmark.sh
diff --git a/examples/Cpp/PaddleNLP/bert/benchmark_with_profile.sh b/examples/C++/PaddleNLP/bert/benchmark_with_profile.sh
similarity index 100%
rename from examples/Cpp/PaddleNLP/bert/benchmark_with_profile.sh
rename to examples/C++/PaddleNLP/bert/benchmark_with_profile.sh
diff --git a/examples/Cpp/PaddleNLP/bert/bert_client.py b/examples/C++/PaddleNLP/bert/bert_client.py
similarity index 100%
rename from examples/Cpp/PaddleNLP/bert/bert_client.py
rename to examples/C++/PaddleNLP/bert/bert_client.py
diff --git a/examples/Cpp/PaddleNLP/bert/bert_gpu_server.py b/examples/C++/PaddleNLP/bert/bert_gpu_server.py
similarity index 100%
rename from examples/Cpp/PaddleNLP/bert/bert_gpu_server.py
rename to examples/C++/PaddleNLP/bert/bert_gpu_server.py
diff --git a/examples/Cpp/PaddleNLP/bert/bert_httpclient.py b/examples/C++/PaddleNLP/bert/bert_httpclient.py
similarity index 100%
rename from examples/Cpp/PaddleNLP/bert/bert_httpclient.py
rename to examples/C++/PaddleNLP/bert/bert_httpclient.py
diff --git a/examples/Cpp/PaddleNLP/bert/bert_reader.py b/examples/C++/PaddleNLP/bert/bert_reader.py
similarity index 100%
rename from examples/Cpp/PaddleNLP/bert/bert_reader.py
rename to examples/C++/PaddleNLP/bert/bert_reader.py
diff --git a/examples/Cpp/PaddleNLP/bert/bert_server.py b/examples/C++/PaddleNLP/bert/bert_server.py
similarity index 100%
rename from examples/Cpp/PaddleNLP/bert/bert_server.py
rename to examples/C++/PaddleNLP/bert/bert_server.py
diff --git a/examples/Cpp/PaddleNLP/bert/get_data.sh b/examples/C++/PaddleNLP/bert/get_data.sh
similarity index 100%
rename from examples/Cpp/PaddleNLP/bert/get_data.sh
rename to examples/C++/PaddleNLP/bert/get_data.sh
diff --git a/examples/Cpp/PaddleNLP/bert/prepare_model.py b/examples/C++/PaddleNLP/bert/prepare_model.py
similarity index 100%
rename from examples/Cpp/PaddleNLP/bert/prepare_model.py
rename to examples/C++/PaddleNLP/bert/prepare_model.py
diff --git a/examples/Cpp/PaddleNLP/bert/test_multi_fetch_client.py b/examples/C++/PaddleNLP/bert/test_multi_fetch_client.py
similarity index 100%
rename from examples/Cpp/PaddleNLP/bert/test_multi_fetch_client.py
rename to examples/C++/PaddleNLP/bert/test_multi_fetch_client.py
diff --git a/examples/Cpp/PaddleNLP/bert/tokenization.py b/examples/C++/PaddleNLP/bert/tokenization.py
similarity index 100%
rename from examples/Cpp/PaddleNLP/bert/tokenization.py
rename to examples/C++/PaddleNLP/bert/tokenization.py
diff --git a/examples/Cpp/PaddleNLP/lac/README.md b/examples/C++/PaddleNLP/lac/README.md
similarity index 100%
rename from examples/Cpp/PaddleNLP/lac/README.md
rename to examples/C++/PaddleNLP/lac/README.md
diff --git a/examples/Cpp/PaddleNLP/lac/README_CN.md b/examples/C++/PaddleNLP/lac/README_CN.md
similarity index 100%
rename from examples/Cpp/PaddleNLP/lac/README_CN.md
rename to examples/C++/PaddleNLP/lac/README_CN.md
diff --git a/examples/Cpp/PaddleNLP/lac/benchmark.py b/examples/C++/PaddleNLP/lac/benchmark.py
similarity index 100%
rename from examples/Cpp/PaddleNLP/lac/benchmark.py
rename to examples/C++/PaddleNLP/lac/benchmark.py
diff --git a/examples/Cpp/PaddleNLP/lac/lac_client.py b/examples/C++/PaddleNLP/lac/lac_client.py
similarity index 100%
rename from examples/Cpp/PaddleNLP/lac/lac_client.py
rename to examples/C++/PaddleNLP/lac/lac_client.py
diff --git a/examples/Cpp/PaddleNLP/lac/lac_http_client.py b/examples/C++/PaddleNLP/lac/lac_http_client.py
similarity index 100%
rename from examples/Cpp/PaddleNLP/lac/lac_http_client.py
rename to examples/C++/PaddleNLP/lac/lac_http_client.py
diff --git a/examples/Cpp/PaddleNLP/lac/lac_reader.py b/examples/C++/PaddleNLP/lac/lac_reader.py
similarity index 100%
rename from examples/Cpp/PaddleNLP/lac/lac_reader.py
rename to examples/C++/PaddleNLP/lac/lac_reader.py
diff --git a/examples/Cpp/PaddleNLP/lac/utils.py b/examples/C++/PaddleNLP/lac/utils.py
similarity index 100%
rename from examples/Cpp/PaddleNLP/lac/utils.py
rename to examples/C++/PaddleNLP/lac/utils.py
diff --git a/examples/Cpp/PaddleNLP/senta/README.md b/examples/C++/PaddleNLP/senta/README.md
similarity index 100%
rename from examples/Cpp/PaddleNLP/senta/README.md
rename to examples/C++/PaddleNLP/senta/README.md
diff --git a/examples/Cpp/PaddleNLP/senta/README_CN.md b/examples/C++/PaddleNLP/senta/README_CN.md
similarity index 100%
rename from examples/Cpp/PaddleNLP/senta/README_CN.md
rename to examples/C++/PaddleNLP/senta/README_CN.md
diff --git a/examples/Cpp/PaddleNLP/senta/get_data.sh b/examples/C++/PaddleNLP/senta/get_data.sh
similarity index 100%
rename from examples/Cpp/PaddleNLP/senta/get_data.sh
rename to examples/C++/PaddleNLP/senta/get_data.sh
diff --git a/examples/Cpp/PaddleNLP/senta/senta_web_service.py b/examples/C++/PaddleNLP/senta/senta_web_service.py
similarity index 100%
rename from examples/Cpp/PaddleNLP/senta/senta_web_service.py
rename to examples/C++/PaddleNLP/senta/senta_web_service.py
diff --git a/examples/Cpp/PaddleOCR/ocr/README.md b/examples/C++/PaddleOCR/ocr/README.md
similarity index 100%
rename from examples/Cpp/PaddleOCR/ocr/README.md
rename to examples/C++/PaddleOCR/ocr/README.md
diff --git a/examples/Cpp/PaddleOCR/ocr/README_CN.md b/examples/C++/PaddleOCR/ocr/README_CN.md
similarity index 100%
rename from examples/Cpp/PaddleOCR/ocr/README_CN.md
rename to examples/C++/PaddleOCR/ocr/README_CN.md
diff --git a/examples/Cpp/PaddleOCR/ocr/det_debugger_server.py b/examples/C++/PaddleOCR/ocr/det_debugger_server.py
similarity index 100%
rename from examples/Cpp/PaddleOCR/ocr/det_debugger_server.py
rename to examples/C++/PaddleOCR/ocr/det_debugger_server.py
diff --git a/examples/Cpp/PaddleOCR/ocr/det_web_server.py b/examples/C++/PaddleOCR/ocr/det_web_server.py
similarity index 100%
rename from examples/Cpp/PaddleOCR/ocr/det_web_server.py
rename to examples/C++/PaddleOCR/ocr/det_web_server.py
diff --git a/examples/Cpp/PaddleOCR/ocr/imgs/1.jpg b/examples/C++/PaddleOCR/ocr/imgs/1.jpg
similarity index 100%
rename from examples/Cpp/PaddleOCR/ocr/imgs/1.jpg
rename to examples/C++/PaddleOCR/ocr/imgs/1.jpg
diff --git a/examples/Cpp/PaddleOCR/ocr/ocr_cpp_client.py b/examples/C++/PaddleOCR/ocr/ocr_cpp_client.py
similarity index 88%
rename from examples/Cpp/PaddleOCR/ocr/ocr_cpp_client.py
rename to examples/C++/PaddleOCR/ocr/ocr_cpp_client.py
index fa9209aab..aba8f7bbf 100644
--- a/examples/Cpp/PaddleOCR/ocr/ocr_cpp_client.py
+++ b/examples/C++/PaddleOCR/ocr/ocr_cpp_client.py
@@ -31,14 +31,18 @@
import paddle
test_img_dir = "imgs/"
+
def cv2_to_base64(image):
- return base64.b64encode(image) #data.tostring()).decode('utf8')
+ return base64.b64encode(image) #data.tostring()).decode('utf8')
+
for img_file in os.listdir(test_img_dir):
with open(os.path.join(test_img_dir, img_file), 'rb') as file:
image_data = file.read()
image = cv2_to_base64(image_data)
fetch_map = client.predict(
- feed={"image": image}, fetch = ["ctc_greedy_decoder_0.tmp_0", "softmax_0.tmp_0"], batch=True)
+ feed={"image": image},
+ fetch=["ctc_greedy_decoder_0.tmp_0", "softmax_0.tmp_0"],
+ batch=True)
#print("{} {}".format(fetch_map["price"][0], data[0][1][0]))
print(fetch_map)
diff --git a/examples/Cpp/PaddleOCR/ocr/ocr_debugger_server.py b/examples/C++/PaddleOCR/ocr/ocr_debugger_server.py
similarity index 100%
rename from examples/Cpp/PaddleOCR/ocr/ocr_debugger_server.py
rename to examples/C++/PaddleOCR/ocr/ocr_debugger_server.py
diff --git a/examples/Cpp/PaddleOCR/ocr/ocr_web_client.py b/examples/C++/PaddleOCR/ocr/ocr_web_client.py
similarity index 100%
rename from examples/Cpp/PaddleOCR/ocr/ocr_web_client.py
rename to examples/C++/PaddleOCR/ocr/ocr_web_client.py
diff --git a/examples/Cpp/PaddleOCR/ocr/ocr_web_server.py b/examples/C++/PaddleOCR/ocr/ocr_web_server.py
similarity index 100%
rename from examples/Cpp/PaddleOCR/ocr/ocr_web_server.py
rename to examples/C++/PaddleOCR/ocr/ocr_web_server.py
diff --git a/examples/Cpp/PaddleOCR/ocr/rec_debugger_server.py b/examples/C++/PaddleOCR/ocr/rec_debugger_server.py
similarity index 100%
rename from examples/Cpp/PaddleOCR/ocr/rec_debugger_server.py
rename to examples/C++/PaddleOCR/ocr/rec_debugger_server.py
diff --git a/examples/Cpp/PaddleOCR/ocr/rec_img/ch_doc3.jpg b/examples/C++/PaddleOCR/ocr/rec_img/ch_doc3.jpg
similarity index 100%
rename from examples/Cpp/PaddleOCR/ocr/rec_img/ch_doc3.jpg
rename to examples/C++/PaddleOCR/ocr/rec_img/ch_doc3.jpg
diff --git a/examples/Cpp/PaddleOCR/ocr/rec_web_client.py b/examples/C++/PaddleOCR/ocr/rec_web_client.py
similarity index 100%
rename from examples/Cpp/PaddleOCR/ocr/rec_web_client.py
rename to examples/C++/PaddleOCR/ocr/rec_web_client.py
diff --git a/examples/Cpp/PaddleOCR/ocr/rec_web_server.py b/examples/C++/PaddleOCR/ocr/rec_web_server.py
similarity index 100%
rename from examples/Cpp/PaddleOCR/ocr/rec_web_server.py
rename to examples/C++/PaddleOCR/ocr/rec_web_server.py
diff --git a/examples/Cpp/PaddleRec/criteo_ctr/README.md b/examples/C++/PaddleRec/criteo_ctr/README.md
similarity index 100%
rename from examples/Cpp/PaddleRec/criteo_ctr/README.md
rename to examples/C++/PaddleRec/criteo_ctr/README.md
diff --git a/examples/Cpp/PaddleRec/criteo_ctr/README_CN.md b/examples/C++/PaddleRec/criteo_ctr/README_CN.md
similarity index 100%
rename from examples/Cpp/PaddleRec/criteo_ctr/README_CN.md
rename to examples/C++/PaddleRec/criteo_ctr/README_CN.md
diff --git a/examples/Cpp/PaddleRec/criteo_ctr/args.py b/examples/C++/PaddleRec/criteo_ctr/args.py
similarity index 100%
rename from examples/Cpp/PaddleRec/criteo_ctr/args.py
rename to examples/C++/PaddleRec/criteo_ctr/args.py
diff --git a/examples/Cpp/PaddleRec/criteo_ctr/benchmark.py b/examples/C++/PaddleRec/criteo_ctr/benchmark.py
similarity index 100%
rename from examples/Cpp/PaddleRec/criteo_ctr/benchmark.py
rename to examples/C++/PaddleRec/criteo_ctr/benchmark.py
diff --git a/examples/Cpp/PaddleRec/criteo_ctr/benchmark.sh b/examples/C++/PaddleRec/criteo_ctr/benchmark.sh
similarity index 100%
rename from examples/Cpp/PaddleRec/criteo_ctr/benchmark.sh
rename to examples/C++/PaddleRec/criteo_ctr/benchmark.sh
diff --git a/examples/Cpp/PaddleRec/criteo_ctr/benchmark_batch.py b/examples/C++/PaddleRec/criteo_ctr/benchmark_batch.py
similarity index 100%
rename from examples/Cpp/PaddleRec/criteo_ctr/benchmark_batch.py
rename to examples/C++/PaddleRec/criteo_ctr/benchmark_batch.py
diff --git a/examples/Cpp/PaddleRec/criteo_ctr/benchmark_batch.sh b/examples/C++/PaddleRec/criteo_ctr/benchmark_batch.sh
similarity index 100%
rename from examples/Cpp/PaddleRec/criteo_ctr/benchmark_batch.sh
rename to examples/C++/PaddleRec/criteo_ctr/benchmark_batch.sh
diff --git a/examples/Cpp/PaddleRec/criteo_ctr/clean.sh b/examples/C++/PaddleRec/criteo_ctr/clean.sh
similarity index 100%
rename from examples/Cpp/PaddleRec/criteo_ctr/clean.sh
rename to examples/C++/PaddleRec/criteo_ctr/clean.sh
diff --git a/examples/Cpp/PaddleRec/criteo_ctr/get_data.sh b/examples/C++/PaddleRec/criteo_ctr/get_data.sh
similarity index 100%
rename from examples/Cpp/PaddleRec/criteo_ctr/get_data.sh
rename to examples/C++/PaddleRec/criteo_ctr/get_data.sh
diff --git a/examples/Cpp/PaddleRec/criteo_ctr/local_train.py b/examples/C++/PaddleRec/criteo_ctr/local_train.py
similarity index 100%
rename from examples/Cpp/PaddleRec/criteo_ctr/local_train.py
rename to examples/C++/PaddleRec/criteo_ctr/local_train.py
diff --git a/examples/Cpp/PaddleRec/criteo_ctr/network_conf.py b/examples/C++/PaddleRec/criteo_ctr/network_conf.py
similarity index 100%
rename from examples/Cpp/PaddleRec/criteo_ctr/network_conf.py
rename to examples/C++/PaddleRec/criteo_ctr/network_conf.py
diff --git a/examples/Cpp/PaddleRec/criteo_ctr/test_client.py b/examples/C++/PaddleRec/criteo_ctr/test_client.py
similarity index 96%
rename from examples/Cpp/PaddleRec/criteo_ctr/test_client.py
rename to examples/C++/PaddleRec/criteo_ctr/test_client.py
index fd6c6e031..c1c1ea685 100644
--- a/examples/Cpp/PaddleRec/criteo_ctr/test_client.py
+++ b/examples/C++/PaddleRec/criteo_ctr/test_client.py
@@ -21,6 +21,7 @@
import numpy as np
import sys
+
class CriteoReader(object):
def __init__(self, sparse_feature_dim):
self.cont_min_ = [0, -3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
@@ -52,6 +53,7 @@ def process_line(self, line):
return sparse_feature
+
py_version = sys.version_info[0]
client = Client()
@@ -68,8 +70,8 @@ def process_line(self, line):
data = reader.process_line(f.readline())
feed_dict = {}
for i in range(1, 27):
- feed_dict["sparse_{}".format(i - 1)] = np.array(data[i-1]).reshape(-1)
- feed_dict["sparse_{}.lod".format(i - 1)] = [0, len(data[i-1])]
+ feed_dict["sparse_{}".format(i - 1)] = np.array(data[i - 1]).reshape(-1)
+ feed_dict["sparse_{}.lod".format(i - 1)] = [0, len(data[i - 1])]
fetch_map = client.predict(feed=feed_dict, fetch=["prob"])
print(fetch_map)
end = time.time()
diff --git a/examples/Cpp/PaddleRec/criteo_ctr/test_server.py b/examples/C++/PaddleRec/criteo_ctr/test_server.py
similarity index 100%
rename from examples/Cpp/PaddleRec/criteo_ctr/test_server.py
rename to examples/C++/PaddleRec/criteo_ctr/test_server.py
diff --git a/examples/Cpp/PaddleRec/criteo_ctr_with_cube/README.md b/examples/C++/PaddleRec/criteo_ctr_with_cube/README.md
similarity index 100%
rename from examples/Cpp/PaddleRec/criteo_ctr_with_cube/README.md
rename to examples/C++/PaddleRec/criteo_ctr_with_cube/README.md
diff --git a/examples/Cpp/PaddleRec/criteo_ctr_with_cube/README_CN.md b/examples/C++/PaddleRec/criteo_ctr_with_cube/README_CN.md
similarity index 100%
rename from examples/Cpp/PaddleRec/criteo_ctr_with_cube/README_CN.md
rename to examples/C++/PaddleRec/criteo_ctr_with_cube/README_CN.md
diff --git a/examples/Cpp/PaddleRec/criteo_ctr_with_cube/criteo_reader.py b/examples/C++/PaddleRec/criteo_ctr_with_cube/criteo_reader.py
similarity index 100%
rename from examples/Cpp/PaddleRec/criteo_ctr_with_cube/criteo_reader.py
rename to examples/C++/PaddleRec/criteo_ctr_with_cube/criteo_reader.py
diff --git a/examples/Cpp/PaddleRec/criteo_ctr_with_cube/cube/conf/cube.conf b/examples/C++/PaddleRec/criteo_ctr_with_cube/cube/conf/cube.conf
similarity index 100%
rename from examples/Cpp/PaddleRec/criteo_ctr_with_cube/cube/conf/cube.conf
rename to examples/C++/PaddleRec/criteo_ctr_with_cube/cube/conf/cube.conf
diff --git a/examples/Cpp/PaddleRec/criteo_ctr_with_cube/cube/conf/gflags.conf b/examples/C++/PaddleRec/criteo_ctr_with_cube/cube/conf/gflags.conf
similarity index 100%
rename from examples/Cpp/PaddleRec/criteo_ctr_with_cube/cube/conf/gflags.conf
rename to examples/C++/PaddleRec/criteo_ctr_with_cube/cube/conf/gflags.conf
diff --git a/examples/Cpp/PaddleRec/criteo_ctr_with_cube/cube/keys b/examples/C++/PaddleRec/criteo_ctr_with_cube/cube/keys
similarity index 100%
rename from examples/Cpp/PaddleRec/criteo_ctr_with_cube/cube/keys
rename to examples/C++/PaddleRec/criteo_ctr_with_cube/cube/keys
diff --git a/examples/Cpp/PaddleRec/criteo_ctr_with_cube/cube_prepare.sh b/examples/C++/PaddleRec/criteo_ctr_with_cube/cube_prepare.sh
similarity index 100%
rename from examples/Cpp/PaddleRec/criteo_ctr_with_cube/cube_prepare.sh
rename to examples/C++/PaddleRec/criteo_ctr_with_cube/cube_prepare.sh
diff --git a/examples/Cpp/PaddleRec/criteo_ctr_with_cube/get_data.sh b/examples/C++/PaddleRec/criteo_ctr_with_cube/get_data.sh
similarity index 100%
rename from examples/Cpp/PaddleRec/criteo_ctr_with_cube/get_data.sh
rename to examples/C++/PaddleRec/criteo_ctr_with_cube/get_data.sh
diff --git a/examples/Cpp/PaddleRec/criteo_ctr_with_cube/local_train.py b/examples/C++/PaddleRec/criteo_ctr_with_cube/local_train.py
similarity index 99%
rename from examples/Cpp/PaddleRec/criteo_ctr_with_cube/local_train.py
rename to examples/C++/PaddleRec/criteo_ctr_with_cube/local_train.py
index 555e2e929..27ed67852 100755
--- a/examples/Cpp/PaddleRec/criteo_ctr_with_cube/local_train.py
+++ b/examples/C++/PaddleRec/criteo_ctr_with_cube/local_train.py
@@ -25,6 +25,8 @@
dense_feature_dim = 13
paddle.enable_static()
+
+
def train():
args = parse_args()
sparse_only = args.sparse_only
diff --git a/examples/Cpp/PaddleRec/criteo_ctr_with_cube/network_conf.py b/examples/C++/PaddleRec/criteo_ctr_with_cube/network_conf.py
similarity index 100%
rename from examples/Cpp/PaddleRec/criteo_ctr_with_cube/network_conf.py
rename to examples/C++/PaddleRec/criteo_ctr_with_cube/network_conf.py
diff --git a/examples/Cpp/PaddleRec/criteo_ctr_with_cube/test_client.py b/examples/C++/PaddleRec/criteo_ctr_with_cube/test_client.py
similarity index 93%
rename from examples/Cpp/PaddleRec/criteo_ctr_with_cube/test_client.py
rename to examples/C++/PaddleRec/criteo_ctr_with_cube/test_client.py
index f12d727a3..e9d517e0d 100755
--- a/examples/Cpp/PaddleRec/criteo_ctr_with_cube/test_client.py
+++ b/examples/C++/PaddleRec/criteo_ctr_with_cube/test_client.py
@@ -44,14 +44,13 @@
feed_dict['dense_input'] = np.array(data[0][0]).reshape(1, len(data[0][0]))
for i in range(1, 27):
- feed_dict["embedding_{}.tmp_0".format(i - 1)] = np.array(data[0][i]).reshape(len(data[0][i]))
+ feed_dict["embedding_{}.tmp_0".format(i - 1)] = np.array(data[0][
+ i]).reshape(len(data[0][i]))
feed_dict["embedding_{}.tmp_0.lod".format(i - 1)] = [0, len(data[0][i])]
- fetch_map = client.predict(feed=feed_dict, fetch=["prob"],batch=True)
+ fetch_map = client.predict(feed=feed_dict, fetch=["prob"], batch=True)
print(fetch_map)
prob_list.append(fetch_map['prob'][0][1])
label_list.append(data[0][-1][0])
-
end = time.time()
print(end - start)
-
diff --git a/examples/Cpp/PaddleRec/criteo_ctr_with_cube/test_server.py b/examples/C++/PaddleRec/criteo_ctr_with_cube/test_server.py
similarity index 100%
rename from examples/Cpp/PaddleRec/criteo_ctr_with_cube/test_server.py
rename to examples/C++/PaddleRec/criteo_ctr_with_cube/test_server.py
diff --git a/examples/Cpp/PaddleSeg/deeplabv3/N0060.jpg b/examples/C++/PaddleSeg/deeplabv3/N0060.jpg
similarity index 100%
rename from examples/Cpp/PaddleSeg/deeplabv3/N0060.jpg
rename to examples/C++/PaddleSeg/deeplabv3/N0060.jpg
diff --git a/examples/Cpp/PaddleSeg/deeplabv3/README.md b/examples/C++/PaddleSeg/deeplabv3/README.md
similarity index 100%
rename from examples/Cpp/PaddleSeg/deeplabv3/README.md
rename to examples/C++/PaddleSeg/deeplabv3/README.md
diff --git a/examples/Cpp/PaddleSeg/deeplabv3/README_CN.md b/examples/C++/PaddleSeg/deeplabv3/README_CN.md
similarity index 100%
rename from examples/Cpp/PaddleSeg/deeplabv3/README_CN.md
rename to examples/C++/PaddleSeg/deeplabv3/README_CN.md
diff --git a/examples/Cpp/PaddleSeg/deeplabv3/deeplabv3_client.py b/examples/C++/PaddleSeg/deeplabv3/deeplabv3_client.py
similarity index 100%
rename from examples/Cpp/PaddleSeg/deeplabv3/deeplabv3_client.py
rename to examples/C++/PaddleSeg/deeplabv3/deeplabv3_client.py
diff --git a/examples/Cpp/PaddleSeg/unet_for_image_seg/N0060.jpg b/examples/C++/PaddleSeg/unet_for_image_seg/N0060.jpg
similarity index 100%
rename from examples/Cpp/PaddleSeg/unet_for_image_seg/N0060.jpg
rename to examples/C++/PaddleSeg/unet_for_image_seg/N0060.jpg
diff --git a/examples/Cpp/PaddleSeg/unet_for_image_seg/README.md b/examples/C++/PaddleSeg/unet_for_image_seg/README.md
similarity index 100%
rename from examples/Cpp/PaddleSeg/unet_for_image_seg/README.md
rename to examples/C++/PaddleSeg/unet_for_image_seg/README.md
diff --git a/examples/Cpp/PaddleSeg/unet_for_image_seg/README_CN.md b/examples/C++/PaddleSeg/unet_for_image_seg/README_CN.md
similarity index 100%
rename from examples/Cpp/PaddleSeg/unet_for_image_seg/README_CN.md
rename to examples/C++/PaddleSeg/unet_for_image_seg/README_CN.md
diff --git a/examples/Cpp/PaddleSeg/unet_for_image_seg/seg_client.py b/examples/C++/PaddleSeg/unet_for_image_seg/seg_client.py
similarity index 100%
rename from examples/Cpp/PaddleSeg/unet_for_image_seg/seg_client.py
rename to examples/C++/PaddleSeg/unet_for_image_seg/seg_client.py
diff --git a/examples/Cpp/PaddleSeg/unet_for_image_seg/unet_benchmark/README.md b/examples/C++/PaddleSeg/unet_for_image_seg/unet_benchmark/README.md
similarity index 100%
rename from examples/Cpp/PaddleSeg/unet_for_image_seg/unet_benchmark/README.md
rename to examples/C++/PaddleSeg/unet_for_image_seg/unet_benchmark/README.md
diff --git a/examples/Cpp/PaddleSeg/unet_for_image_seg/unet_benchmark/img_data/N0060.jpg b/examples/C++/PaddleSeg/unet_for_image_seg/unet_benchmark/img_data/N0060.jpg
similarity index 100%
rename from examples/Cpp/PaddleSeg/unet_for_image_seg/unet_benchmark/img_data/N0060.jpg
rename to examples/C++/PaddleSeg/unet_for_image_seg/unet_benchmark/img_data/N0060.jpg
diff --git a/examples/Cpp/PaddleSeg/unet_for_image_seg/unet_benchmark/launch_benckmark.sh b/examples/C++/PaddleSeg/unet_for_image_seg/unet_benchmark/launch_benckmark.sh
similarity index 100%
rename from examples/Cpp/PaddleSeg/unet_for_image_seg/unet_benchmark/launch_benckmark.sh
rename to examples/C++/PaddleSeg/unet_for_image_seg/unet_benchmark/launch_benckmark.sh
diff --git a/examples/Cpp/PaddleSeg/unet_for_image_seg/unet_benchmark/unet_benchmark.py b/examples/C++/PaddleSeg/unet_for_image_seg/unet_benchmark/unet_benchmark.py
similarity index 100%
rename from examples/Cpp/PaddleSeg/unet_for_image_seg/unet_benchmark/unet_benchmark.py
rename to examples/C++/PaddleSeg/unet_for_image_seg/unet_benchmark/unet_benchmark.py
diff --git a/examples/Cpp/encryption/README.md b/examples/C++/encryption/README.md
similarity index 100%
rename from examples/Cpp/encryption/README.md
rename to examples/C++/encryption/README.md
diff --git a/examples/Cpp/encryption/README_CN.md b/examples/C++/encryption/README_CN.md
similarity index 100%
rename from examples/Cpp/encryption/README_CN.md
rename to examples/C++/encryption/README_CN.md
diff --git a/examples/Cpp/encryption/encrypt.py b/examples/C++/encryption/encrypt.py
similarity index 100%
rename from examples/Cpp/encryption/encrypt.py
rename to examples/C++/encryption/encrypt.py
diff --git a/examples/Cpp/encryption/get_data.sh b/examples/C++/encryption/get_data.sh
similarity index 100%
rename from examples/Cpp/encryption/get_data.sh
rename to examples/C++/encryption/get_data.sh
diff --git a/examples/Cpp/encryption/test_client.py b/examples/C++/encryption/test_client.py
similarity index 100%
rename from examples/Cpp/encryption/test_client.py
rename to examples/C++/encryption/test_client.py
diff --git a/examples/Cpp/fit_a_line/README.md b/examples/C++/fit_a_line/README.md
similarity index 100%
rename from examples/Cpp/fit_a_line/README.md
rename to examples/C++/fit_a_line/README.md
diff --git a/examples/Cpp/fit_a_line/README_CN.md b/examples/C++/fit_a_line/README_CN.md
similarity index 100%
rename from examples/Cpp/fit_a_line/README_CN.md
rename to examples/C++/fit_a_line/README_CN.md
diff --git a/examples/Cpp/fit_a_line/benchmark.py b/examples/C++/fit_a_line/benchmark.py
similarity index 100%
rename from examples/Cpp/fit_a_line/benchmark.py
rename to examples/C++/fit_a_line/benchmark.py
diff --git a/examples/Cpp/fit_a_line/benchmark.sh b/examples/C++/fit_a_line/benchmark.sh
similarity index 100%
rename from examples/Cpp/fit_a_line/benchmark.sh
rename to examples/C++/fit_a_line/benchmark.sh
diff --git a/examples/Cpp/fit_a_line/get_data.sh b/examples/C++/fit_a_line/get_data.sh
similarity index 100%
rename from examples/Cpp/fit_a_line/get_data.sh
rename to examples/C++/fit_a_line/get_data.sh
diff --git a/examples/Cpp/fit_a_line/local_train.py b/examples/C++/fit_a_line/local_train.py
similarity index 100%
rename from examples/Cpp/fit_a_line/local_train.py
rename to examples/C++/fit_a_line/local_train.py
diff --git a/examples/Cpp/fit_a_line/test_client.py b/examples/C++/fit_a_line/test_client.py
similarity index 100%
rename from examples/Cpp/fit_a_line/test_client.py
rename to examples/C++/fit_a_line/test_client.py
diff --git a/examples/Cpp/fit_a_line/test_httpclient.py b/examples/C++/fit_a_line/test_httpclient.py
similarity index 100%
rename from examples/Cpp/fit_a_line/test_httpclient.py
rename to examples/C++/fit_a_line/test_httpclient.py
diff --git a/examples/Cpp/fit_a_line/test_multi_process_client.py b/examples/C++/fit_a_line/test_multi_process_client.py
similarity index 100%
rename from examples/Cpp/fit_a_line/test_multi_process_client.py
rename to examples/C++/fit_a_line/test_multi_process_client.py
diff --git a/examples/Cpp/fit_a_line/test_server.py b/examples/C++/fit_a_line/test_server.py
similarity index 100%
rename from examples/Cpp/fit_a_line/test_server.py
rename to examples/C++/fit_a_line/test_server.py
diff --git a/examples/Cpp/imdb/README.md b/examples/C++/imdb/README.md
similarity index 100%
rename from examples/Cpp/imdb/README.md
rename to examples/C++/imdb/README.md
diff --git a/examples/Cpp/imdb/README_CN.md b/examples/C++/imdb/README_CN.md
similarity index 100%
rename from examples/Cpp/imdb/README_CN.md
rename to examples/C++/imdb/README_CN.md
diff --git a/examples/Cpp/imdb/abtest_client.py b/examples/C++/imdb/abtest_client.py
similarity index 82%
rename from examples/Cpp/imdb/abtest_client.py
rename to examples/C++/imdb/abtest_client.py
index f5f721b67..1a14c87c3 100644
--- a/examples/Cpp/imdb/abtest_client.py
+++ b/examples/C++/imdb/abtest_client.py
@@ -1,4 +1,3 @@
-
# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
@@ -34,10 +33,13 @@
"words.lod": [0, word_len]
}
fetch = ["acc", "cost", "prediction"]
- [fetch_map, tag] = client.predict(feed=feed, fetch=fetch, need_variant_tag=True,batch=True)
- if (float(fetch_map["prediction"][0][1]) - 0.5) * (float(label[0]) - 0.5) > 0:
+ [fetch_map, tag] = client.predict(
+ feed=feed, fetch=fetch, need_variant_tag=True, batch=True)
+    if (float(fetch_map["prediction"][0][1]) - 0.5) * (
+            float(label[0]) - 0.5) > 0:
cnt[tag]['acc'] += 1
cnt[tag]['total'] += 1
for tag, data in cnt.items():
- print('[{}](total: {}) acc: {}'.format(tag, data['total'], float(data['acc'])/float(data['total']) ))
+    print('[{}](total: {}) acc: {}'.format(
+        tag, data['total'], float(data['acc']) / float(data['total'])))
diff --git a/examples/Cpp/imdb/abtest_get_data.py b/examples/C++/imdb/abtest_get_data.py
similarity index 93%
rename from examples/Cpp/imdb/abtest_get_data.py
rename to examples/C++/imdb/abtest_get_data.py
index c6bd7ea57..904d23ae0 100644
--- a/examples/Cpp/imdb/abtest_get_data.py
+++ b/examples/C++/imdb/abtest_get_data.py
@@ -20,4 +20,5 @@
with open('processed.data', 'w') as fout:
for line in fin:
word_ids, label = imdb_dataset.get_words_and_label(line)
- fout.write("{};{}\n".format(','.join([str(x) for x in word_ids]), label[0]))
+ fout.write("{};{}\n".format(','.join([str(x) for x in word_ids]),
+ label[0]))
diff --git a/examples/Cpp/imdb/benchmark.py b/examples/C++/imdb/benchmark.py
similarity index 100%
rename from examples/Cpp/imdb/benchmark.py
rename to examples/C++/imdb/benchmark.py
diff --git a/examples/Cpp/imdb/benchmark.sh b/examples/C++/imdb/benchmark.sh
similarity index 100%
rename from examples/Cpp/imdb/benchmark.sh
rename to examples/C++/imdb/benchmark.sh
diff --git a/examples/Cpp/imdb/clean_data.sh b/examples/C++/imdb/clean_data.sh
similarity index 100%
rename from examples/Cpp/imdb/clean_data.sh
rename to examples/C++/imdb/clean_data.sh
diff --git a/examples/Cpp/imdb/get_data.sh b/examples/C++/imdb/get_data.sh
similarity index 100%
rename from examples/Cpp/imdb/get_data.sh
rename to examples/C++/imdb/get_data.sh
diff --git a/examples/Cpp/imdb/imdb_reader.py b/examples/C++/imdb/imdb_reader.py
similarity index 100%
rename from examples/Cpp/imdb/imdb_reader.py
rename to examples/C++/imdb/imdb_reader.py
diff --git a/examples/Cpp/imdb/local_train.py b/examples/C++/imdb/local_train.py
similarity index 99%
rename from examples/Cpp/imdb/local_train.py
rename to examples/C++/imdb/local_train.py
index 98333e4e3..42c867abc 100644
--- a/examples/Cpp/imdb/local_train.py
+++ b/examples/C++/imdb/local_train.py
@@ -23,6 +23,7 @@
logger.setLevel(logging.INFO)
paddle.enable_static()
+
def load_vocab(filename):
vocab = {}
with open(filename) as f:
diff --git a/examples/Cpp/imdb/nets.py b/examples/C++/imdb/nets.py
similarity index 100%
rename from examples/Cpp/imdb/nets.py
rename to examples/C++/imdb/nets.py
diff --git a/examples/Cpp/imdb/test_client.py b/examples/C++/imdb/test_client.py
similarity index 100%
rename from examples/Cpp/imdb/test_client.py
rename to examples/C++/imdb/test_client.py
diff --git a/examples/Cpp/imdb/test_http_client.py b/examples/C++/imdb/test_http_client.py
similarity index 100%
rename from examples/Cpp/imdb/test_http_client.py
rename to examples/C++/imdb/test_http_client.py
diff --git a/examples/Cpp/low_precision/resnet50/README.md b/examples/C++/low_precision/resnet50/README.md
similarity index 100%
rename from examples/Cpp/low_precision/resnet50/README.md
rename to examples/C++/low_precision/resnet50/README.md
diff --git a/examples/Cpp/low_precision/resnet50/README_CN.md b/examples/C++/low_precision/resnet50/README_CN.md
similarity index 100%
rename from examples/Cpp/low_precision/resnet50/README_CN.md
rename to examples/C++/low_precision/resnet50/README_CN.md
diff --git a/examples/Cpp/low_precision/resnet50/daisy.jpg b/examples/C++/low_precision/resnet50/daisy.jpg
similarity index 100%
rename from examples/Cpp/low_precision/resnet50/daisy.jpg
rename to examples/C++/low_precision/resnet50/daisy.jpg
diff --git a/examples/Cpp/low_precision/resnet50/resnet50_client.py b/examples/C++/low_precision/resnet50/resnet50_client.py
similarity index 87%
rename from examples/Cpp/low_precision/resnet50/resnet50_client.py
rename to examples/C++/low_precision/resnet50/resnet50_client.py
index 5d7b31241..1599600df 100644
--- a/examples/Cpp/low_precision/resnet50/resnet50_client.py
+++ b/examples/C++/low_precision/resnet50/resnet50_client.py
@@ -17,8 +17,7 @@
from paddle_serving_app.reader import RGB2BGR, Transpose, Div, Normalize
client = Client()
-client.load_client_config(
- "serving_client/serving_client_conf.prototxt")
+client.load_client_config("serving_client/serving_client_conf.prototxt")
client.connect(["127.0.0.1:9393"])
seq = Sequential([
@@ -28,5 +27,6 @@
image_file = "daisy.jpg"
img = seq(image_file)
-fetch_map = client.predict(feed={"image": img}, fetch=["save_infer_model/scale_0.tmp_0"])
+fetch_map = client.predict(
+ feed={"image": img}, fetch=["save_infer_model/scale_0.tmp_0"])
print(fetch_map["save_infer_model/scale_0.tmp_0"].reshape(-1))
diff --git a/examples/Cpp/util/README.md b/examples/C++/util/README.md
similarity index 100%
rename from examples/Cpp/util/README.md
rename to examples/C++/util/README.md
diff --git a/examples/Cpp/util/README_CN.md b/examples/C++/util/README_CN.md
similarity index 100%
rename from examples/Cpp/util/README_CN.md
rename to examples/C++/util/README_CN.md
diff --git a/examples/Cpp/util/get_acc.py b/examples/C++/util/get_acc.py
similarity index 100%
rename from examples/Cpp/util/get_acc.py
rename to examples/C++/util/get_acc.py
diff --git a/examples/Cpp/util/show_profile.py b/examples/C++/util/show_profile.py
similarity index 100%
rename from examples/Cpp/util/show_profile.py
rename to examples/C++/util/show_profile.py
diff --git a/examples/Cpp/util/timeline_trace.py b/examples/C++/util/timeline_trace.py
similarity index 100%
rename from examples/Cpp/util/timeline_trace.py
rename to examples/C++/util/timeline_trace.py
diff --git a/examples/Cpp/xpu/bert/README.md b/examples/C++/xpu/bert/README.md
similarity index 100%
rename from examples/Cpp/xpu/bert/README.md
rename to examples/C++/xpu/bert/README.md
diff --git a/examples/Cpp/xpu/bert/bert_client.py b/examples/C++/xpu/bert/bert_client.py
similarity index 100%
rename from examples/Cpp/xpu/bert/bert_client.py
rename to examples/C++/xpu/bert/bert_client.py
diff --git a/examples/Cpp/xpu/bert/chinese_bert_reader.py b/examples/C++/xpu/bert/chinese_bert_reader.py
similarity index 96%
rename from examples/Cpp/xpu/bert/chinese_bert_reader.py
rename to examples/C++/xpu/bert/chinese_bert_reader.py
index 133cc0889..1b5cc06e7 100644
--- a/examples/Cpp/xpu/bert/chinese_bert_reader.py
+++ b/examples/C++/xpu/bert/chinese_bert_reader.py
@@ -49,9 +49,7 @@ def __init__(self, args={}):
self.cls_id = self.vocab["[CLS]"]
self.sep_id = self.vocab["[SEP]"]
self.mask_id = self.vocab["[MASK]"]
- self.feed_keys = [
- "input_ids", "token_type_ids"
- ]
+ self.feed_keys = ["input_ids", "token_type_ids"]
"""
inner function
@@ -90,7 +88,7 @@ def _pad_batch(self, token_ids, text_type_ids):
batch_text_type_ids,
max_seq_len=self.max_seq_len,
pad_idx=self.pad_id)
- return padded_token_ids, padded_text_type_ids
+ return padded_token_ids, padded_text_type_ids
"""
process function deals with a raw Chinese string as a sentence
diff --git a/examples/Cpp/xpu/bert/get_data.sh b/examples/C++/xpu/bert/get_data.sh
similarity index 100%
rename from examples/Cpp/xpu/bert/get_data.sh
rename to examples/C++/xpu/bert/get_data.sh
diff --git a/examples/Cpp/xpu/ernie/README.md b/examples/C++/xpu/ernie/README.md
similarity index 100%
rename from examples/Cpp/xpu/ernie/README.md
rename to examples/C++/xpu/ernie/README.md
diff --git a/examples/Cpp/xpu/ernie/chinese_ernie_reader.py b/examples/C++/xpu/ernie/chinese_ernie_reader.py
similarity index 100%
rename from examples/Cpp/xpu/ernie/chinese_ernie_reader.py
rename to examples/C++/xpu/ernie/chinese_ernie_reader.py
diff --git a/examples/Cpp/xpu/ernie/ernie_client.py b/examples/C++/xpu/ernie/ernie_client.py
similarity index 98%
rename from examples/Cpp/xpu/ernie/ernie_client.py
rename to examples/C++/xpu/ernie/ernie_client.py
index b02c9d0aa..69b094dff 100644
--- a/examples/Cpp/xpu/ernie/ernie_client.py
+++ b/examples/C++/xpu/ernie/ernie_client.py
@@ -32,6 +32,6 @@
feed_dict = reader.process(line)
for key in feed_dict.keys():
feed_dict[key] = np.array(feed_dict[key]).reshape((128, 1))
- # print(feed_dict)
+# print(feed_dict)
result = client.predict(feed=feed_dict, fetch=fetch, batch=False)
print(result)
diff --git a/examples/Cpp/xpu/ernie/get_data.sh b/examples/C++/xpu/ernie/get_data.sh
similarity index 100%
rename from examples/Cpp/xpu/ernie/get_data.sh
rename to examples/C++/xpu/ernie/get_data.sh
diff --git a/examples/Cpp/xpu/fit_a_line_xpu/README.md b/examples/C++/xpu/fit_a_line_xpu/README.md
similarity index 100%
rename from examples/Cpp/xpu/fit_a_line_xpu/README.md
rename to examples/C++/xpu/fit_a_line_xpu/README.md
diff --git a/examples/Cpp/xpu/fit_a_line_xpu/README_CN.md b/examples/C++/xpu/fit_a_line_xpu/README_CN.md
similarity index 100%
rename from examples/Cpp/xpu/fit_a_line_xpu/README_CN.md
rename to examples/C++/xpu/fit_a_line_xpu/README_CN.md
diff --git a/examples/Cpp/xpu/fit_a_line_xpu/benchmark.py b/examples/C++/xpu/fit_a_line_xpu/benchmark.py
similarity index 100%
rename from examples/Cpp/xpu/fit_a_line_xpu/benchmark.py
rename to examples/C++/xpu/fit_a_line_xpu/benchmark.py
diff --git a/examples/Cpp/xpu/fit_a_line_xpu/get_data.sh b/examples/C++/xpu/fit_a_line_xpu/get_data.sh
similarity index 100%
rename from examples/Cpp/xpu/fit_a_line_xpu/get_data.sh
rename to examples/C++/xpu/fit_a_line_xpu/get_data.sh
diff --git a/examples/Cpp/xpu/fit_a_line_xpu/local_train.py b/examples/C++/xpu/fit_a_line_xpu/local_train.py
similarity index 100%
rename from examples/Cpp/xpu/fit_a_line_xpu/local_train.py
rename to examples/C++/xpu/fit_a_line_xpu/local_train.py
diff --git a/examples/Cpp/xpu/fit_a_line_xpu/test_client.py b/examples/C++/xpu/fit_a_line_xpu/test_client.py
similarity index 100%
rename from examples/Cpp/xpu/fit_a_line_xpu/test_client.py
rename to examples/C++/xpu/fit_a_line_xpu/test_client.py
diff --git a/examples/Cpp/xpu/resnet_v2_50_xpu/README.md b/examples/C++/xpu/resnet_v2_50_xpu/README.md
similarity index 100%
rename from examples/Cpp/xpu/resnet_v2_50_xpu/README.md
rename to examples/C++/xpu/resnet_v2_50_xpu/README.md
diff --git a/examples/Cpp/xpu/resnet_v2_50_xpu/README_CN.md b/examples/C++/xpu/resnet_v2_50_xpu/README_CN.md
similarity index 100%
rename from examples/Cpp/xpu/resnet_v2_50_xpu/README_CN.md
rename to examples/C++/xpu/resnet_v2_50_xpu/README_CN.md
diff --git a/examples/Cpp/xpu/resnet_v2_50_xpu/daisy.jpg b/examples/C++/xpu/resnet_v2_50_xpu/daisy.jpg
similarity index 100%
rename from examples/Cpp/xpu/resnet_v2_50_xpu/daisy.jpg
rename to examples/C++/xpu/resnet_v2_50_xpu/daisy.jpg
diff --git a/examples/Cpp/xpu/resnet_v2_50_xpu/localpredict.py b/examples/C++/xpu/resnet_v2_50_xpu/localpredict.py
similarity index 93%
rename from examples/Cpp/xpu/resnet_v2_50_xpu/localpredict.py
rename to examples/C++/xpu/resnet_v2_50_xpu/localpredict.py
index 2e76098e9..904232835 100644
--- a/examples/Cpp/xpu/resnet_v2_50_xpu/localpredict.py
+++ b/examples/C++/xpu/resnet_v2_50_xpu/localpredict.py
@@ -18,7 +18,8 @@
import sys
predictor = LocalPredictor()
-predictor.load_model_config(sys.argv[1], use_lite=True, use_xpu=True, ir_optim=True)
+predictor.load_model_config(
+ sys.argv[1], use_lite=True, use_xpu=True, ir_optim=True)
seq = Sequential([
File2Image(), Resize(256), CenterCrop(224), RGB2BGR(), Transpose((2, 0, 1)),
diff --git a/examples/Cpp/xpu/resnet_v2_50_xpu/resnet50_client.py b/examples/C++/xpu/resnet_v2_50_xpu/resnet50_client.py
similarity index 100%
rename from examples/Cpp/xpu/resnet_v2_50_xpu/resnet50_client.py
rename to examples/C++/xpu/resnet_v2_50_xpu/resnet50_client.py
diff --git a/examples/Cpp/xpu/vgg19/README.md b/examples/C++/xpu/vgg19/README.md
similarity index 100%
rename from examples/Cpp/xpu/vgg19/README.md
rename to examples/C++/xpu/vgg19/README.md
diff --git a/examples/Cpp/xpu/vgg19/daisy.jpg b/examples/C++/xpu/vgg19/daisy.jpg
similarity index 100%
rename from examples/Cpp/xpu/vgg19/daisy.jpg
rename to examples/C++/xpu/vgg19/daisy.jpg
diff --git a/examples/Cpp/xpu/vgg19/vgg19_client.py b/examples/C++/xpu/vgg19/vgg19_client.py
similarity index 87%
rename from examples/Cpp/xpu/vgg19/vgg19_client.py
rename to examples/C++/xpu/vgg19/vgg19_client.py
index 65d0dd912..913800e1f 100644
--- a/examples/Cpp/xpu/vgg19/vgg19_client.py
+++ b/examples/C++/xpu/vgg19/vgg19_client.py
@@ -17,8 +17,7 @@
from paddle_serving_app.reader import RGB2BGR, Transpose, Div, Normalize
client = Client()
-client.load_client_config(
- "serving_client/serving_client_conf.prototxt")
+client.load_client_config("serving_client/serving_client_conf.prototxt")
client.connect(["127.0.0.1:7702"])
seq = Sequential([
@@ -28,6 +27,7 @@
image_file = "daisy.jpg"
img = seq(image_file)
-fetch_map = client.predict(feed={"image": img}, fetch=["save_infer_model/scale_0"])
+fetch_map = client.predict(
+ feed={"image": img}, fetch=["save_infer_model/scale_0"])
#print(fetch_map)
print(fetch_map["save_infer_model/scale_0"].reshape(-1))
diff --git a/examples/Cpp/PaddleDetection/faster_rcnn_r50_fpn_1x_coco/test_client.py b/examples/Cpp/PaddleDetection/faster_rcnn_r50_fpn_1x_coco/test_client.py
deleted file mode 100644
index b6b2c534b..000000000
--- a/examples/Cpp/PaddleDetection/faster_rcnn_r50_fpn_1x_coco/test_client.py
+++ /dev/null
@@ -1,46 +0,0 @@
-# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import sys
-import numpy as np
-from paddle_serving_client import Client
-from paddle_serving_app.reader import *
-import cv2
-
-preprocess = DetectionSequential([
- DetectionFile2Image(),
- DetectionNormalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225], True),
- DetectionResize(
- (800, 1333), True, interpolation=cv2.INTER_LINEAR),
- DetectionTranspose((2,0,1)),
- DetectionPadStride(128)
-])
-
-postprocess = RCNNPostprocess("label_list.txt", "output")
-client = Client()
-
-client.load_client_config("serving_client/serving_client_conf.prototxt")
-client.connect(['127.0.0.1:9494'])
-
-im, im_info = preprocess(sys.argv[1])
-fetch_map = client.predict(
- feed={
- "image": im,
- "im_shape": np.array(list(im.shape[1:])).reshape(-1),
- "scale_factor": im_info['scale_factor'],
- },
- fetch=["save_infer_model/scale_0.tmp_1"],
- batch=False)
-fetch_map["image"] = sys.argv[1]
-postprocess(fetch_map)