FedCCFA: Classifier Clustering and Feature Alignment for Federated Learning under Distributed Concept Drift
This is the implementation of our paper: Classifier Clustering and Feature Alignment for Federated Learning under Distributed Concept Drift (NeurIPS 2024). In this paper, we propose a federated learning framework with classifier clustering and feature alignment (FedCCFA).
This repository also provides the implementations of the following methods:
- FedAvg: Communication-Efficient Learning of Deep Networks from Decentralized Data (AISTATS 2017)
- FedProx: Federated Optimization in Heterogeneous Networks (MLSys 2020)
- SCAFFOLD: SCAFFOLD: Stochastic Controlled Averaging for Federated Learning (ICML 2020)
- FedFM: FedFM: Anchor-Based Feature Matching for Data Heterogeneity in Federated Learning (IEEE TSP 2023)
- pFedMe: Personalized Federated Learning with Moreau Envelopes (NeurIPS 2020)
- Ditto: Ditto: Fair and Robust Federated Learning Through Personalization (ICML 2021)
- FedRep: Exploiting Shared Representations for Personalized Federated Learning (ICML 2021)
- FedBABU: FedBABU: Toward Enhanced Representation for Federated Image Classification (ICLR 2022)
- FedPAC: Personalized Federated Learning with Feature Alignment and Classifier Collaboration (ICLR 2023)
- IFCA: An Efficient Framework for Clustered Federated Learning (NeurIPS 2020)
- Adaptive-FedAvg: Adaptive Federated Learning in Presence of Concept Drift (IJCNN 2021)
- FedDrift: Federated Learning under Distributed Concept Drift (AISTATS 2023)
- Flash: Flash: Concept Drift Adaptation in Federated Learning (ICML 2023)
- FedCCFA: Classifier Clustering and Feature Alignment for Federated Learning under Distributed Concept Drift (NeurIPS 2024)
All hyperparameters can be set in `./configs/*.yaml`. Common hyperparameters in our experiments are listed as follows:
| Hyperparameter | Values | Description |
|---|---|---|
| client_num | 20 or 100 | 20 clients with full participation, or 100 clients with 20% participation |
| sample_ratio | 1 or 0.2 | full participation or 20% participation |
| dataset | Fashion-MNIST, CIFAR-10 or CINIC-10 | the three datasets in our experiments |
| alpha | 0.1 or 0.5 | concentration parameter of the Dirichlet distribution |
| drift_pattern | false / sudden / incremental / recurrent | concept drift pattern |
The above numerical values (for client_num, sample_ratio and alpha) are the ones used in our experiments; you can select any value you want.
For descriptions of the other hyperparameters, please refer to our paper.
Three datasets (Fashion-MNIST, CIFAR-10 and CINIC-10) are used in our experiments. You can download these datasets and put them in `./data`.
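For example, Fashion-MNIST and CIFAR-10 can be fetched via torchvision (a minimal sketch; CINIC-10 is not in torchvision and must be downloaded manually, and the exact directory layout the code expects may differ):

```python
# Sketch: fetch Fashion-MNIST and CIFAR-10 into ./data via torchvision.
# CINIC-10 is not available in torchvision; download it manually.
from torchvision import datasets

for train in (True, False):
    datasets.FashionMNIST(root="./data", train=train, download=True)
    datasets.CIFAR10(root="./data", train=train, download=True)
```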
For other datasets, you can download them and implement the data partitioning in `./utils/gen_dataset.py`.
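For intuition, label-skewed partitioning with a Dirichlet prior (the alpha hyperparameter above) typically looks like the following hypothetical sketch; it is for illustration only, not the code in `./utils/gen_dataset.py`:

```python
import numpy as np

def dirichlet_partition(labels, client_num, alpha, seed=0):
    """Split sample indices across clients with a per-class Dirichlet prior.

    Smaller alpha -> more skewed (heterogeneous) label distributions.
    Hypothetical helper for illustration, not the repository's actual code.
    """
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    client_indices = [[] for _ in range(client_num)]
    for cls in np.unique(labels):
        cls_idx = rng.permutation(np.where(labels == cls)[0])
        # Proportion of this class assigned to each client.
        proportions = rng.dirichlet(alpha * np.ones(client_num))
        splits = (np.cumsum(proportions)[:-1] * len(cls_idx)).astype(int)
        for client_id, part in enumerate(np.split(cls_idx, splits)):
            client_indices[client_id].extend(part.tolist())
    return client_indices
```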
Edit `configs/FedCCFA.yaml` (e.g., set `drift_pattern` to `sudden`).
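For example, a config might look like this (key names follow the table above; the values and exact file structure are illustrative):

```yaml
# Illustrative config; the shipped configs/FedCCFA.yaml may contain more fields.
client_num: 20          # 20 clients with full participation
sample_ratio: 1         # fraction of clients sampled per round
dataset: CIFAR-10
alpha: 0.5              # Dirichlet concentration parameter
drift_pattern: sudden   # false / sudden / incremental / recurrent
```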
Then run:

```bash
cd methods
export PYTHONPATH=../
python3 FedCCFA.py
```
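The other methods listed above can be run the same way, e.g. `python3 FedAvg.py` (presumably with its matching config in `./configs`).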
You can easily develop other FL algorithms on top of this repository.
First, create a new Python script `entities/XXX.py`:
```python
from entities.base import Client, Server


class XXXClient(Client):
    def __init__(self, client_id, args, train_set, test_set, global_test_id):
        super().__init__(client_id, args, train_set, test_set, global_test_id)

    def train(self):
        """New local training method, if needed."""
        pass


class XXXServer(Server):
    def __init__(self, args):
        super().__init__(args)

    def aggregate_by_params(self, clients):
        """New aggregation method, if needed."""
        pass
```
Then, create a new Python script `methods/XXX.py`. Please refer to `methods/FedAvg.py` to implement the new FL process.
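A rough sketch of what such a script might look like; the constructor arguments, config fields and training loop below are placeholders, so follow `methods/FedAvg.py` for the actual API:

```python
# Hypothetical sketch of methods/XXX.py; names and arguments are placeholders.
import random

from entities.XXX import XXXClient, XXXServer

def run(args, train_sets, test_sets, global_test_id):
    server = XXXServer(args)
    clients = [
        XXXClient(cid, args, train_sets[cid], test_sets[cid], global_test_id)
        for cid in range(args.client_num)
    ]
    for _ in range(args.rounds):
        # Sample a fraction of clients each round (sample_ratio).
        k = max(1, int(args.sample_ratio * len(clients)))
        selected = random.sample(clients, k)
        for client in selected:
            client.train()                    # local update
        server.aggregate_by_params(selected)  # aggregate into the global model
```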
If you have any questions related to the code or the paper, feel free to email Junbao ([email protected]). If you encounter any problems when using the code, or want to report a bug, you can open an issue.
If you find FedCCFA useful for your research, please consider citing our paper:
```bibtex
@inproceedings{chen2024classifier,
  title={Classifier Clustering and Feature Alignment for Federated Learning under Distributed Concept Drift},
  author={Chen, Junbao and Xue, Jingfeng and Wang, Yong and Liu, Zhenyan and Huang, Lu},
  booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
  year={2024}
}
```