
Merge pull request #394 from wangjiawei04/develop
FIX Criteo ctr doc
MRXLT authored Apr 3, 2020
2 parents 4a82a8f + a4de909 commit 1aef5af
Showing 5 changed files with 59 additions and 23 deletions.
22 changes: 11 additions & 11 deletions doc/TRAIN_TO_SERVICE.md
@@ -228,7 +228,7 @@ if __name__ == "__main__":

</details>

! [Training process](./ imdb_loss.png) As can be seen from the above figure, the loss of the model starts to converge after the 65th round. We save the model and configuration file after the 65th round of training is completed. The saved files are divided into imdb_cnn_client_conf and imdb_cnn_model folders. The former contains client-side configuration files, and the latter contains server-side configuration files and saved model files.
![Training process](./imdb_loss.png) As can be seen from the above figure, the loss of the model starts to converge after the 65th round. We save the model and configuration file after the 65th round of training is completed. The saved files are divided into imdb_cnn_client_conf and imdb_cnn_model folders. The former contains client-side configuration files, and the latter contains server-side configuration files and saved model files.
The parameter list of the save_model function is as follows:

| Parameter | Meaning |
@@ -243,10 +243,10 @@ The parameter list of the save_model function is as follows:
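The table rows are collapsed in this diff view. As a point of reference, here is a minimal sketch of a save_model call that would produce the folders named above; the data and prediction variables are assumptions standing in for the feed and fetch variables built by the training script:

```python
import paddle.fluid as fluid
from paddle_serving_client.io import save_model

# "data" and "prediction" are assumed names for the network's input and
# output variables -- substitute the variables from your own training script.
save_model("imdb_cnn_model",             # server-side model and config folder
           "imdb_cnn_client_conf",       # client-side config folder
           {"words": data},              # feed var dict: input name -> variable
           {"prediction": prediction},   # fetch var dict: output name -> variable
           fluid.default_main_program())
```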

The Paddle Serving framework supports two types of prediction services: one communicates over RPC, the other over HTTP. The deployment and use of the RPC prediction service is introduced first; the HTTP prediction service is covered in Step 8.

`` `shell
```shell
python -m paddle_serving_server.serve --model imdb_cnn_model/ --port 9292 # CPU prediction service
python -m paddle_serving_server_gpu.serve --model imdb_cnn_model/ --port 9292 --gpu_ids 0 # GPU prediction service
`` `
```

The --model parameter specifies the directory of the server-side model and configuration files saved earlier, and --port specifies the port of the prediction service. When deploying the GPU prediction service with the GPU version, --gpu_ids specifies which GPU card to use.
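On a machine with several cards, --gpu_ids may also accept a comma-separated list; a sketch (port and card ids here are illustrative, and the accepted format should be confirmed with python -m paddle_serving_server_gpu.serve --help):

```shell
python -m paddle_serving_server_gpu.serve --model imdb_cnn_model/ --port 9393 --gpu_ids 0,1 # two-card deployment (assumes comma-separated ids are supported)
```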

@@ -287,13 +287,13 @@ The script receives data from standard input and prints out the probability that

As an example, use the client implemented in the previous step to exercise the prediction service. The usage is as follows:

`` `shell
```shell
cat test_data/part-0 | python test_client.py imdb_lstm_client_conf/serving_client_conf.prototxt imdb.vocab
`` `
```
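The test_client.py script itself is collapsed in this diff. For orientation, a minimal sketch of such an RPC client with the paddle_serving_client API; the imdb.vocab preprocessing that converts a sentence into word ids is elided, and to_word_ids is a hypothetical stand-in for it, not part of the API:

```python
import sys
from paddle_serving_client import Client

client = Client()
client.load_client_config(sys.argv[1])  # e.g. imdb_lstm_client_conf/serving_client_conf.prototxt
client.connect(["127.0.0.1:9292"])

for line in sys.stdin:
    # to_word_ids is a hypothetical helper standing in for the imdb.vocab lookup
    word_ids = to_word_ids(line)
    fetch_map = client.predict(feed={"words": word_ids}, fetch=["prediction"])
    print(fetch_map["prediction"])
```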

Testing with the 2084 samples in the test_data/part-0 file, the model prediction accuracy is 88.19%.

** Note **: The effect of each model training may be slightly different, and the accuracy of predictions using the trained model will be close to the examples but may not be exactly the same.
**Note**: The effect of each model training may be slightly different, and the accuracy of predictions using the trained model will be close to the examples but may not be exactly the same.

## Step8: Deploy HTTP Prediction Service

@@ -349,13 +349,13 @@ In the above command, the first parameter is the saved server-side model and con
## Step9: Call the prediction service with plaintext data
After starting the HTTP prediction service, you can make a prediction with a single command:

`` `
```
curl -H "Content-Type: application/json" -X POST -d '{"words": "i am very sad | 0", "fetch": ["prediction"]}' http://127.0.0.1:9292/imdb/prediction
`` `
```
If the inference process runs normally, the predicted probabilities are returned, as shown below.

`` `
```
{"prediction": [0.5592559576034546,0.44074398279190063]}
`` `
```
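The same request can also be issued from Python; a sketch using the third-party requests library (assuming it is installed):

```python
import requests

# Same payload and endpoint as the curl command above.
resp = requests.post(
    "http://127.0.0.1:9292/imdb/prediction",
    json={"words": "i am very sad | 0", "fetch": ["prediction"]},
)
print(resp.json())  # e.g. {"prediction": [0.56, 0.44]}
```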

** Note **: The effect of each model training may be slightly different, and the inferred probability value using the trained model may not be consistent with the example.
**Note**: The effect of each model training may be slightly different, and the inferred probability value using the trained model may not be consistent with the example.
25 changes: 15 additions & 10 deletions python/examples/criteo_ctr/README.md
@@ -1,26 +1,31 @@
## CTR预测服务
## CTR Prediction Service

### 获取样例数据
([简体中文](./README_CN.md)|English)

### Download Criteo Dataset
```
sh get_data.sh
```

### 保存模型和配置文件
### Download Inference Model
```
python local_train.py
wget https://paddle-serving.bj.bcebos.com/criteo_ctr_example/criteo_ctr_demo_model.tar.gz
tar xf criteo_ctr_demo_model.tar.gz
mv models/ctr_client_conf .
mv models/ctr_serving_model .
```
执行脚本后会在当前目录生成serving_server_model和serving_client_config文件夹。
The ctr_client_conf and ctr_serving_model directories will appear in the current directory.

### 启动RPC预测服务
### Start RPC Inference Service

```
python -m paddle_serving_server.serve --model ctr_serving_model/ --port 9292 #启动CPU预测服务
python -m paddle_serving_server_gpu.serve --model ctr_serving_model/ --port 9292 --gpu_ids 0 #在GPU 0上启动预测服务
python -m paddle_serving_server.serve --model ctr_serving_model/ --port 9292 # CPU RPC service
python -m paddle_serving_server_gpu.serve --model ctr_serving_model/ --port 9292 --gpu_ids 0 # RPC service on GPU 0
```

### 执行预测
### Run RPC Inference

```
python test_client.py ctr_client_conf/serving_client_conf.prototxt raw_data/
```
预测完毕会输出预测过程的耗时。
The latency of the prediction process is printed at the end.
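To cross-check the reported latency, the same command can be wrapped in the standard time utility:

```
time python test_client.py ctr_client_conf/serving_client_conf.prototxt raw_data/
```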
31 changes: 31 additions & 0 deletions python/examples/criteo_ctr/README_CN.md
@@ -0,0 +1,31 @@
## CTR预测服务

(简体中文|[English](./README.md))

### 获取样例数据
```
sh get_data.sh
```

### 下载模型
```
wget https://paddle-serving.bj.bcebos.com/criteo_ctr_example/criteo_ctr_demo_model.tar.gz
tar xf criteo_ctr_demo_model.tar.gz
mv models/ctr_client_conf .
mv models/ctr_serving_model .
```
会在当前目录出现ctr_client_conf和ctr_serving_model文件夹。

### 启动RPC预测服务

```
python -m paddle_serving_server.serve --model ctr_serving_model/ --port 9292 #启动CPU预测服务
python -m paddle_serving_server_gpu.serve --model ctr_serving_model/ --port 9292 --gpu_ids 0 #在GPU 0上启动预测服务
```

### 执行预测

```
python test_client.py ctr_client_conf/serving_client_conf.prototxt raw_data/
```
预测完毕会输出预测过程的耗时。
2 changes: 1 addition & 1 deletion python/examples/criteo_ctr_with_cube/README.md
@@ -7,7 +7,7 @@ in the root directory of this git project
```
mkdir build_server
cd build_server
cmake -DPYTHON_INCLUDE_DIR=$PYTHONROOT/include/python2.7/ -DPYTHON_LIBRARIES=$PYTHONROOT/lib/libpython2.7.so -DPYTHON_EXECUTABLE=$PYTHONROOT/bin/python -DCLIENT_ONLY=OFF ..
cmake -DPYTHON_INCLUDE_DIR=$PYTHONROOT/include/python2.7/ -DPYTHON_LIBRARIES=$PYTHONROOT/lib64/libpython2.7.so -DPYTHON_EXECUTABLE=$PYTHONROOT/bin/python -DSERVER=ON ..
make -j10
make install -j10
```
2 changes: 1 addition & 1 deletion python/examples/criteo_ctr_with_cube/README_CN.md
@@ -6,7 +6,7 @@
```
mkdir build_server
cd build_server
cmake -DPYTHON_INCLUDE_DIR=$PYTHONROOT/include/python2.7/ -DPYTHON_LIBRARIES=$PYTHONROOT/lib/libpython2.7.so -DPYTHON_EXECUTABLE=$PYTHONROOT/bin/python -DCLIENT_ONLY=OFF ..
cmake -DPYTHON_INCLUDE_DIR=$PYTHONROOT/include/python2.7/ -DPYTHON_LIBRARIES=$PYTHONROOT/lib64/libpython2.7.so -DPYTHON_EXECUTABLE=$PYTHONROOT/bin/python -DSERVER=ON ..
make -j10
make install -j10
```
