Releases: IBM/federated-learning-lib
IBMFL 2.0.1
IBMFL 2.0.0
Release 2.0.0
IBMFL 2.0.0 greatly simplifies setting up the running environment, and it also includes a new quorum check, better Docker support, and bug fixes.
Breaking changes:
- New `quickstart.md`, `setup.md`, and `setup_crypto.md` for setting up the virtual environment
- New setup instructions to support Mac M1/M2 users; refer to `setup.md`
- Removed `requirements.txt`
Improvements
- New quorum-checking mechanism to reduce aggregator wait time; see our updated tutorial
- New CLI examples for training with fully homomorphic encryption
- Docker support for fully homomorphic encryption
- New FHE examples for running with OpenShift
- Upgraded scikit-learn support to 1.0.2
- Improved README files
- Improved tutorials
Bug fixes
- Minor typo fixes for logs, examples, and notebooks
IBMFL 1.1.0
Release 1.1.0
IBMFL 1.1.0 includes new cryptographic functionality, new fusion algorithms, improvements, and bug fixes.
New Functionality
- Integration with fully homomorphic encryption (FHE)
- New robust fusion algorithm Adaptive Federated Averaging
- New tutorial on how to utilize a cryptographic technique (FHE) in IBM FL
- New Jupyter notebooks to illustrate using FHE in IBM FL for TensorFlow, PyTorch and Scikit-learn
Changes
- Support for newer versions of the Skorch, PyTorch, and RL libraries
- Improved README files
- Improved tutorials
Bug fixes
- Minor fix for Federated Averaging fusion algorithm to work with data generators
- Fix computation errors in the negative root mean squared error (nrmse) and negative mean squared error (nmse) metrics
- Fix OpenShift-related issues
IBMFL 1.0.7
Release 1.0.7
IBMFL 1.0.7 includes new functionality, improvements, and bug fixes.
Changes
- Reorganization of examples to help end users
- New tutorial on how to add a new fusion algorithm
- Improved README files
- Internal change to how PyTorch models are specified. The change enables new optimizers; see this tutorial for more details
New Functionality
- New fusion algorithm to train Doc2Vec models.
- New robust fusion algorithm (Comparative Elimination) to train robust neural network models.
- New connection type PubSub, replacing the RabbitMQ-based connection type supported by previous releases.
- New Jupyter notebook to illustrate quorum and rejoin support
- New Jupyter notebook to illustrate how to train PyTorch models
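As a rough illustration of the Comparative Elimination idea behind the new robust fusion algorithm, the sketch below is a hand-rolled toy, not IBM FL's actual code: it sorts party updates by Euclidean distance from the current global model, discards an assumed number of Byzantine (farthest) updates, and averages the rest.

```python
import math

def comparative_elimination(current_model, party_updates, num_byzantine):
    """Toy Comparative-Elimination-style robust fusion step.

    Sorts party updates (flat weight vectors) by Euclidean distance from
    the current global model, drops the `num_byzantine` farthest ones,
    and averages the remainder. Illustrative only.
    """
    def dist(update):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(update, current_model)))

    # Keep the updates closest to the current model.
    kept = sorted(party_updates, key=dist)[: len(party_updates) - num_byzantine]
    n = len(kept)
    # Coordinate-wise average of the surviving updates.
    return [sum(vals) / n for vals in zip(*kept)]
```

With one obvious outlier among three parties, the outlier is eliminated and the two benign updates are averaged.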
Bug fixes
- Minor fix for the iterative averaging fusion algorithm.
- Fix a memory leak on the aggregator side. Memory usage now stays stable as the number of global rounds increases.
IBMFL 1.0.6
Release 1.0.6
IBMFL 1.0.6 includes several pieces of new functionality, along with improvements and bug fixes.
Changes
- Support for Python 3.8.
- Enhanced quorum support for the PFNM fusion algorithm.
- Enhancement for FedAvgPlus fusion algorithm (previously named Fed+).
- Refined library dependencies.
- Refactored the Python notebook dashboard for the Experiment Manager.
- Experiment manager dashboard now supports custom datasets.
- No breaking changes.
New Functionality
- Multi-cloud cluster support for the OpenShift orchestrator; see the new `openshift_fl` folder.
- New Shuffle fusion algorithm
- Two variations of the Fed+ fusion algorithm, including Coordinate-median+ and Geometric-median+ (see new examples)
- New Fed+ examples for PyTorch models and CIFAR10 dataset
- Support to load and split custom datasets in `csv` format via `generate_data.py -d <custom_dataset_name>`
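For intuition, the coordinate-wise median aggregation that underlies the Coordinate-median+ variant can be sketched in a few lines. This toy operates on flat weight vectors and omits the Fed+ penalty term that blends the fused model with each party's local model; it is not IBM FL's implementation.

```python
def coordinate_median(party_weights):
    """Coordinate-wise median across party weight vectors (flat lists).

    Robust to a minority of outlier parties in each coordinate.
    Illustrative sketch only.
    """
    def median(vals):
        s = sorted(vals)
        n = len(s)
        mid = n // 2
        # Average the two middle values when the count is even.
        return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2

    # zip(*...) groups the i-th coordinate of every party together.
    return [median(vals) for vals in zip(*party_weights)]
```

Unlike plain averaging, a single extreme party value (e.g. 9 vs. 1 and 2) does not drag the fused coordinate toward it.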
Bug fixes
- Fix fairness datasets loading issues on aggregator and party sides
- Correct Fed+ hyperparameter `rho` initialization
- Fix default data path for the id3_dt on Adult dataset example
- Other minor fixes
IBMFL 1.0.5
Release 1.0.5
IBMFL 1.0.5 includes improvements and a new experiment runner to help orchestrate experiments easily.
Changes
- Support for Python 3.7
- New Runner module to orchestrate IBMFL experiments on local and remote machines.
- Improved examples to support a greater number of fusion-model-dataset combinations
- Updated MNIST examples to download and save the original MNIST dataset after normalization. All built-in data handlers for MNIST now assume the provided dataset is already normalized.
- No breaking changes.
New Functionality
- Improved Keras model APIs to support collecting pre-train and post-train metrics.
- Improved metrics-handling capabilities on the party side.
- New Runner module to orchestrate experiments on remote machines via CLI or from within Python scripts. Examples can be found under Experiment Manager.
Bug fixes
- Fix Ray library dependency issue for Windows and Linux machines.
- Fix PyTorch model bug (issue 59)
IBMFL 1.0.4
Release 1.0.4
Changes
- Added multiple bias mitigation methods for training fair FL models.
- Bug fixes in the TensorFlow 2.1 model wrapper.
- Early termination support with multiple metrics.
- No API changes.
New Functionality
- Support for bias mitigation methods (Abay et al.):
- Local Reweighing
- Global Reweighing with Differential Privacy
- Federated Prejudice Removal
- New FL algorithm, FedProx (Tian Li et al.).
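The Local Reweighing method applies the classic Kamiran-Calders reweighing formula to each party's own data, without sharing statistics. A minimal sketch, assuming categorical sensitive attributes and labels; this is illustrative only, not IBM FL's implementation:

```python
from collections import Counter

def local_reweighing(sensitive, labels):
    """Per-sample fairness weights computed locally at one party.

    Uses the Kamiran-Calders reweighing formula
        w(a, y) = P(A=a) * P(Y=y) / P(A=a, Y=y)
    so that over-represented (attribute, label) combinations are
    down-weighted and under-represented ones up-weighted.
    """
    n = len(labels)
    count_a = Counter(sensitive)            # marginal counts of the sensitive attribute
    count_y = Counter(labels)               # marginal counts of the label
    count_ay = Counter(zip(sensitive, labels))  # joint counts
    return [
        (count_a[a] / n) * (count_y[y] / n) / (count_ay[(a, y)] / n)
        for a, y in zip(sensitive, labels)
    ]
```

When every (attribute, label) combination appears with exactly the frequency independence would predict, all weights come out as 1.0.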
IBMFL 1.0.3
Release 1.0.3
Major Features
- Support for TensorFlow 2.1.0.
- Support for PyTorch 1.4.0.
- GPU training support for all neural network machine learning libraries.
- New fusion algorithm: Fed+ (Yu et al.).
Highlights
- Support for two popular machine learning libraries: PyTorch 1.4.0 and TensorFlow 2.1.0.
- New tutorials and examples (PyTorch, TensorFlow) to demonstrate how to use PyTorch and TF models for training.
IBMFL 1.0.2
Release 1.0.2
IBMFL 1.0.2 is a minor release with improvements and new features such as configuring an aggregation quorum, allowing parties to rejoin, and enabling Keras training with GPU.
Changes
- Improved logging with clearer messages.
- TensorFlow version requirement changed to `1.15.0`.
- scikit-learn version requirement changed to `0.23.1`.
- No API changes.
- No breaking changes.
New functionality
- Quorum: specify a quorum percentage and maximum timeout in the aggregator config file, providing flexibility for parties with potential connectivity failures.
- Rejoin: dropped parties can rejoin the training process.
- GPU support: training with GPU(s) enabled for `KerasFLModel` (see our new tutorial).
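A toy aggregator-side wait loop can illustrate how a quorum percentage and maximum timeout interact. The function name and polling interface below are hypothetical, not IBM FL's API; real aggregators receive replies asynchronously rather than polling.

```python
import math
import time

def wait_for_quorum(poll_replies, num_parties, quorum_pct, max_timeout_s,
                    poll_interval_s=0.01):
    """Block until a quorum of party replies arrives or a timeout elapses.

    poll_replies:  zero-arg callable returning the replies seen so far.
    quorum_pct:    fraction of registered parties required (e.g. 0.75).
    max_timeout_s: give up after this many seconds.
    """
    needed = math.ceil(quorum_pct * num_parties)
    deadline = time.monotonic() + max_timeout_s
    replies = poll_replies()
    while len(replies) < needed and time.monotonic() < deadline:
        time.sleep(poll_interval_s)
        replies = poll_replies()
    if len(replies) < needed:
        raise TimeoutError(f"quorum not met: {len(replies)}/{needed} replies")
    return replies
```

With 4 registered parties and a 75% quorum, the round can proceed once any 3 parties reply, so one dropped party no longer stalls training indefinitely.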
Bug fixes
- Fix possible feature dimension mismatch when training scikit-learn multi-class classification models on non-IID party data distributions.
- Fix save/load inconsistency caused by `pickle` and `joblib` for scikit-learn models.
IBMFL 1.0.1
Release 1.0.1
Major Features
- Support for training various types of machine learning models in a federated learning fashion.
- Multiple state-of-the-art fusion algorithms that fit a variety of federated learning use cases.
- Flask-based connection module for easy communication between parties and the aggregator.
- Powerful configuration capability for easy testing and switching fusion handlers.
- Extensive examples to demonstrate how to use IBM FL.
- Step-by-step quickstart tutorial for IBM FL.
Highlights
- Supported machine learning model types:
- Neural networks (any neural network topology supported by Keras)
- Decision Tree ID3
- Linear classifiers/regressions (with regularizer): logistic regression, linear SVM, ridge regression, KMeans, and Naïve Bayes
- Deep Reinforcement Learning algorithms including DQN, DDPG, PPO and more
- Supported fusion algorithms:
- Iterative Average
- FedAvg (McMahan et al.)
- Gradient Average
- Probabilistic Federated Neural Matching (PFNM) (Yurochkin et al.)
- Krum (Blanchard et al.)
- Coordinate-wise median (Yin et al.)
- Zeno (Xie et al.)
- Statistical Parameter Aggregation via Heterogeneous Matching (SPAHM) (Yurochkin et al.)
- ID3 fusion for decision trees (Quinlan)
- Naive Bayes fusion with differential privacy
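Of the fusion algorithms above, FedAvg is the simplest to sketch: a sample-size-weighted average of party model weights. The snippet below is an illustrative toy over flat weight vectors, not IBM FL's implementation, which averages layer by layer.

```python
def fed_avg(party_weights, party_sizes):
    """FedAvg-style fusion (McMahan et al.): weighted average of weights.

    party_weights: list of flat weight vectors, one per party.
    party_sizes:   number of local training samples at each party,
                   used as the averaging weights.
    """
    total = sum(party_sizes)
    dim = len(party_weights[0])
    return [
        sum(w[i] * n for w, n in zip(party_weights, party_sizes)) / total
        for i in range(dim)
    ]
```

A party that trained on three times as many samples pulls the fused model three times as strongly toward its local weights.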