Official repository of:
- E. Fanì, M. Ciccone, B. Caputo. FedDrive v2: an Analysis of the Impact of Label Skewness in Federated Semantic Segmentation for Autonomous Driving. 5th Italian Conference on Robotics and Intelligent Machines (I-RIM), 2023.
- L. Fantauzzo*, E. Fanì*, D. Caldarola, A. Tavera, F. Cermelli¹, M. Ciccone, B. Caputo. FedDrive: Generalizing Federated Learning to Semantic Segmentation in Autonomous Driving. IEEE/RSJ International Conference on Intelligent Robots and Systems, 2022.
Corresponding author: [email protected].
All the authors are supported by Politecnico di Torino, Turin, Italy.
*Equal contribution. ¹Fabio Cermelli is with the Italian Institute of Technology, Genoa, Italy.
Official website: https://feddrive.github.io/
If you find our work relevant to your research or use our code, please cite our papers:
@inproceedings{feddrive2023,
title={FedDrive v2: an Analysis of the Impact of Label Skewness in Federated Semantic Segmentation for Autonomous Driving},
author={Fanì, Eros and Ciccone, Marco and Caputo, Barbara},
booktitle={5th Italian Conference on Robotics and Intelligent Machines (I-RIM)},
year={2023}
}
@inproceedings{feddrive2022,
title={FedDrive: Generalizing Federated Learning to Semantic Segmentation in Autonomous Driving},
author={Fantauzzo, Lidia and Fanì, Eros and Caldarola, Debora and Tavera, Antonio and Cermelli, Fabio and Ciccone, Marco and Caputo, Barbara},
booktitle={Proceedings of the 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems},
year={2022}
}
FedDrive is a new benchmark for the Semantic Segmentation task in a Federated Learning scenario for autonomous driving.
It consists of 12 distinct scenarios, capturing the real-world challenges of statistical heterogeneity and domain generalization. FedDrive incorporates algorithms and style transfer methods from the Federated Learning, Domain Generalization, and Domain Adaptation literature. Its main goal is to enhance model generalization and robustness against statistical heterogeneity.
We show the importance of using the correct clients’ statistics when dealing with different domains and label skewness, and how style transfer techniques can improve performance on unseen domains, making FedDrive a solid baseline for future research in federated semantic segmentation.
| Dataset | Setting | Distribution | # Clients | # img/client | Test clients |
|---|---|---|---|---|---|
| Cityscapes | - | Uniform, Heterogeneous, Class Imbalance | 146 | 10-45 | unseen cities |
| IDDA | Country | Uniform, Heterogeneous, Class Imbalance | 90 | 48 | seen + unseen (country) domains |
| IDDA | Rainy | Uniform, Heterogeneous, Class Imbalance | 69 | 48 | seen + unseen (rainy) domains |
| IDDA | Bus | Uniform, Heterogeneous, Class Imbalance | 83 | 48 | seen + unseen (bus) domains |
Please visit the FedDrive official website for the results.
- Clone this repository.
- Move to the root path of your local copy of the repository.
- Create the `feddrive` conda virtual environment and activate it:
  - `conda env create -f environment.yml`
  - `conda activate feddrive`
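  As an optional check (a sketch, not a step required by the repository), you can verify that the environment was created:

  ```bash
  # The "feddrive" environment should appear in the list of conda environments
  conda env list
  ```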
- Download the Cityscapes dataset from here. You may need to create an account if you do not have one yet. Download the `gtFine_trainvaltest.zip` and `leftImg8bit_trainvaltest.zip` archives.
- Extract the archives and move the `gtFine` and `leftImg8bit` folders to `[local_repo_path]/data/cityscapes/data/`, for example as sketched below.
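  A possible way to do this from the directory containing the downloaded archives (a sketch; `[local_repo_path]` is a placeholder for your local copy of this repository):

  ```bash
  # Extract the Cityscapes archives
  unzip gtFine_trainvaltest.zip
  unzip leftImg8bit_trainvaltest.zip
  # Move the extracted folders where the code expects them
  mkdir -p [local_repo_path]/data/cityscapes/data
  mv gtFine leftImg8bit [local_repo_path]/data/cityscapes/data/
  ```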
- Ask for the `IDDA V3` version of IDDA, available here.
- Extract the archive and move the `IDDAsmall` folder to `[local_repo_path]/data/idda/data/`, as sketched below.
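  A sketch of this step, assuming the archive has been extracted in the current directory (`[local_repo_path]` is again a placeholder):

  ```bash
  # Place the IDDA data where the code expects it
  mkdir -p [local_repo_path]/data/idda/data
  mv IDDAsmall [local_repo_path]/data/idda/data/
  # After this step and the Cityscapes one, the data layout should resemble:
  #   [local_repo_path]/data/cityscapes/data/{gtFine,leftImg8bit}
  #   [local_repo_path]/data/idda/data/IDDAsmall
  ```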
- Make a new wandb account if you do not have one yet, and create a new wandb project.
- In the `configs` folder, you can find example config files for some of the experiments, which replicate the results of the paper. Run one of the example configs or a custom one:
  - `./run.sh [path/to/config]`
N.B. Change the `wandb_entity` argument to the entity name of your wandb project.
N.B. Always leave a blank new line at the end of the config file; otherwise, your last argument will be ignored.
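For reference, here is a hypothetical sketch of a config file, assuming it lists argparse-style `--flag value` arguments one per line (only `wandb_entity` is taken from the notes above; the bracketed line is a placeholder, and the real argument names and format can be checked in the files under `configs`):

```text
--wandb_entity my_entity
[other arguments, one per line]

```

The trailing blank line reflects the second N.B. above.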
The script `plot_samples.py` is designed to save and optionally visualize sets of (image, CFSI(image), LAB(image), target, model(image)) from samples in the test set(s) associated with a dataset, given a checkpoint and the indices of the images to show.
To use this script:
- Download the checkpoint of the desired run from wandb.
- Copy the `[run_args]` from the info of the same run on wandb.
- Customize the `load_path`, `indices`, `path_to_save_folder`, and `plot` variables.
- Set the `CUDA_VISIBLE_DEVICES` environment variable to select one single desired GPU.
- Move to the root directory of this repository and run the following command:
  - `python src/plot_samples.py [run_args]`
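A possible invocation, as a sketch (`[run_args]` stands for the arguments copied from the wandb run info, and GPU index 0 is an arbitrary choice):

```bash
# Restrict the script to a single GPU, then run it with the arguments copied from wandb
export CUDA_VISIBLE_DEVICES=0
python src/plot_samples.py [run_args]
```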