# pick Readme changes to v1.0.0 #79

Merged: 4 commits, May 23, 2020

## Overview of PaddleFL


<img src='images/FL-framework.png' width = "1000" height = "320" align="middle"/>

In PaddleFL, horizontal and vertical federated learning strategies will be implemented according to the categorization given in [4]. Application demonstrations in natural language processing, computer vision and recommendation will be provided in PaddleFL.

#### A. Federated Learning Strategy

- **Vertical Federated Learning**: Logistic Regression with PrivC[5], Neural Network with MPC [11]

- **Horizontal Federated Learning**: Federated Averaging [2], Differential Privacy [6], Secure Aggregation


- **Active Learning**

There are mainly two components in PaddleFL: **Data Parallel** and **Federated Learning with MPC (PFM)**.

With Data Parallel, distributed data holders can finish their Federated Learning tasks based on common horizontal federated strategies, such as FedAvg, DPSGD, etc.
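
The idea behind FedAvg can be illustrated in a few lines of plain Python. This is only a conceptual sketch with made-up arrays and function names, not the paddle_fl API: each data holder trains locally, and only the resulting model weights (never the raw data) are averaged, weighted by local sample counts.

```python
import numpy as np

def fed_avg(local_weights, sample_counts):
    """Aggregate locally trained weights into a new global model (FedAvg)."""
    total = float(sum(sample_counts))
    aggregated = np.zeros_like(local_weights[0])
    for weights, count in zip(local_weights, sample_counts):
        aggregated += weights * (count / total)  # weight each update by its data size
    return aggregated

# Three simulated data holders that never exchange raw data, only weights.
local_weights = [np.array([0.10, 0.20]), np.array([0.30, 0.10]), np.array([0.20, 0.40])]
sample_counts = [100, 300, 600]
print(fed_avg(local_weights, sample_counts))  # -> [0.22 0.29]
```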

PFM, in turn, is implemented based on secure multi-party computation (MPC) to enable secure training and prediction. As a key product of PaddleFL, PFM intrinsically supports federated learning well, including horizontal, vertical and transfer learning scenarios. Users with little cryptography expertise can also train models or conduct prediction on encrypted data.

## Installation

### Docker Installation
We **highly recommend** running PaddleFL in Docker.

```sh
#Pull and run the docker
docker run --name <docker_name> --net=host -it -v $PWD:/root <image id> /bin/bash

#Install paddle_fl
pip install paddle_fl
```

If you want to compile and install from source code, please follow the instructions [here](./docs/source/md/compile_and_install.md).

We also provide a stable Redis package for you to download and install, which will be used in tasks with MPC.

```sh
wget --no-check-certificate https://paddlefl.bj.bcebos.com/redis-stable.tar
tar -xf redis-stable.tar
cd redis-stable && make
```

## Easy deployment with Kubernetes

### Data Parallel
```sh
kubectl apply -f ./python/paddle_fl/paddle_fl/examples/k8s_deployment/master.yaml

```
Please refer to the [K8S deployment example](./python/paddle_fl/paddle_fl/examples/k8s_deployment/README.md) for details.

You can also refer to [K8S cluster application and kubectl installation](./python/paddle_fl/paddle_fl/examples/k8s_deployment/deploy_instruction.md) to deploy your K8S cluster.

### Federated Learning with MPC

To be added.

## Framework design of PaddleFL

### Data Parallel

<img src='images/FL-training.png' width = "1000" height = "400" align="middle"/>

In Data Parallel, components for defining a federated learning task and training a federated learning job are as follows:

#### A. Compile Time

- **FL-Strategy**: a user defines federated learning strategies with FL-Strategy, such as Fed-Avg [2].

- **User-Defined-Program**: a PaddlePaddle program that defines the machine learning model structure and training strategies, such as multi-task learning.

- **Distributed-Config**: defines the distributed training node information, since a federated learning system needs to be deployed in distributed settings.

- **FL-Job-Generator**: given an FL-Strategy, a User-Defined-Program and a Distributed-Config, FL-Jobs for the federated server and workers are generated and sent to organizations and the federated parameter server for run-time execution.

#### B. Run Time

- **FL-Server**: federated parameter server that usually runs in the cloud or third-party clusters.

- **FL-Worker**: Each organization participating in federated learning will have one or more federated workers that communicate with the federated parameter server.

- **FL-scheduler**: Decides which set of trainers can join the training before each update cycle (a toy sketch of one update cycle is shown below).
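
For intuition only, the interaction between these run-time components in one FedAvg update cycle can be mimicked with plain Python. All class and method names below are illustrative stand-ins, not paddle_fl modules; see the examples directory for the real job definition and launch scripts.

```python
import random
import numpy as np

class Scheduler:
    """Stand-in for FL-Scheduler: picks the trainers for the next update cycle."""
    def sample_workers(self, workers, k):
        return random.sample(workers, k)

class Server:
    """Stand-in for FL-Server: a parameter server holding the global model."""
    def __init__(self, dim):
        self.weights = np.zeros(dim)
    def aggregate(self, updates):
        self.weights = np.mean(updates, axis=0)  # plain FedAvg over received weights

class Worker:
    """Stand-in for FL-Worker: trains on private local data (noise plays the gradient)."""
    def local_train(self, global_weights):
        return global_weights - 0.01 * np.random.randn(*global_weights.shape)

server = Server(dim=4)
workers = [Worker() for _ in range(5)]
selected = Scheduler().sample_workers(workers, k=3)   # only 3 of 5 workers join this cycle
server.aggregate([w.local_train(server.weights) for w in selected])
print(server.weights)
```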

For more instructions, please refer to the [examples](./python/paddle_fl/paddle_fl/examples)

### Federated Learning with MPC

<img src='images/PFM-overview.png' width = "1000" height = "446" align="middle"/>

Paddle FL MPC implements secure training and inference tasks based on underlying MPC protocols such as ABY3 [11], a highly efficient three-party computing model.

In ABY3, participants can be classified into the roles of Input Party (IP), Computing Party (CP) and Result Party (RP). Input Parties (e.g., the training data/model owners) encrypt and distribute data or models to Computing Parties. Computing Parties (e.g., VMs on the cloud) conduct training or inference tasks based on specific MPC protocols; they are restricted to seeing only the encrypted data or models, which guarantees data privacy. When the computation is completed, one or more Result Parties (e.g., data owners or a specified third party) receive the encrypted results from the Computing Parties and reconstruct the plaintext results. Roles can overlap, e.g., a data owner can also act as a computing party.

A full training or inference process in PFM consists of three main phases: data preparation, training/inference, and result reconstruction.

#### A. Data preparation

- **Private data alignment**: PFM enables data owners (IPs) to find out records with identical keys (like UUID) without revealing private data to each other. This is especially useful in vertical learning cases, where segmented features with the same keys need to be identified and aligned from all owners in a private manner before training.

- **Encryption and distribution**: In PFM, data and models from IPs will be encrypted using Secret-Sharing [10] and then sent to CPs, via direct transmission or distributed storage like HDFS. Each CP can only obtain one share of each piece of data, and thus is unable to recover the original value under the semi-honest model (see the toy sharing sketch below).
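
To make the sharing step concrete, here is a toy 3-party additive secret-sharing scheme in plain Python. It is only a sketch of the idea: the real PFM/ABY3 sharing works on fixed-point encodings and replicated shares, and none of the names below belong to `paddle_fl.mpc`.

```python
import secrets

RING = 2 ** 64  # toy ring size; real ABY3 shares live in a similar modular ring

def share(value, n_parties=3):
    """Input Party: split an integer into n additive shares that sum to value mod RING."""
    shares = [secrets.randbelow(RING) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % RING)
    return shares

def reconstruct(shares):
    """Result Party: recover the plaintext by summing all shares."""
    return sum(shares) % RING

secret = 42
shares = share(secret)       # one share goes to each Computing Party
# Any single share is a uniformly random ring element, so no CP learns the secret alone.
print(shares)
print(reconstruct(shares))   # -> 42, only once all shares are combined
```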

#### B. Training/inference

<img src='images/PFM-design.png' width = "1000" height = "622" align="middle"/>
A PFM program is exactly a PaddlePaddle program, and will be executed as a normal PaddlePaddle program. Before training/inference, the user needs to choose an MPC protocol and define the machine learning model and its training strategies. Typical machine learning operators over encrypted data are provided in `paddle_fl.mpc`; their instances are created and run in order by the Executor at run time.

For more information about the training/inference phase, please refer to the following [doc](./docs/source/md/mpc_train.md).

#### C. Result reconstruction

Upon completion of the secure training (or inference) job, the models (or prediction results) will be output by CPs in encrypted form. Result Parties can collect the encrypted results, decrypt them using the tools in PFM, and deliver the plaintext results to users.

For more instructions, please refer to [mpc examples](./python/paddle_fl/mpc/examples)

## Benchmark task

### Data Parallel

Gru4Rec [9] introduces a recurrent neural network model for session-based recommendation. PaddlePaddle's Gru4Rec implementation is available at https://github.com/PaddlePaddle/models/tree/develop/PaddleRec/gru4rec. An example is given in [Gru4Rec in Federated Learning](https://paddlefl.readthedocs.io/en/latest/examples/gru4rec_examples.html).

### Federated Learning with MPC

We conduct tests on PFM using the Boston house price dataset, and the implementation is given in [uci_demo](./python/paddle_fl/mpc/examples/uci_demo).

## Ongoing and Future Work

- Vertical Federated Learning will support more algorithms.

- Add K8S deployment scheme for PFM.

- FL mobile simulator will be open sourced in upcoming versions.

## Reference
