A Comprehensive Federated Learning Framework for Diabetic Retinopathy Grading and Lesion Segmentation
- Create conda environment and install dependencies:

  ```bash
  conda create -y -n TJDR-FL python=3.8
  conda activate TJDR-FL
  conda install -y pytorch==1.10.2 torchvision==0.11.3 torchaudio==0.10.2 cudatoolkit=11.3 -c pytorch
  pip install -U pip
  pip install -r requirements.txt
  ```
- Prepare dataset:

  `dataset_dir="../TJDR-FL/task/datas/"`

  - For the `IDRiD`, `DDR-seg`, `DDR-cls`, and `APTOS2019` datasets, download the official dataset into the corresponding directory and then run `../TJDR-FL/task/experiments/{dataset_name}.py` (see the sketch after this list).
  - For `TJDR`, download the dataset from the link we provide and move all files into `../TJDR-FL/task/datas/TJDR`.
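As a concrete sketch of the first case, preparing `IDRiD` might look like the commands below; the sub-directory name and the script name `IDRiD.py` are assumptions based on the `{dataset_name}.py` pattern above:

```bash
cd task

# Place the officially downloaded IDRiD release under the dataset dir
# (sub-directory name assumed to match the dataset name).
mkdir -p datas/IDRiD
# ... unzip/copy the official IDRiD files into datas/IDRiD ...

# Run the corresponding preprocessing script
# (name assumed from the {dataset_name}.py pattern).
python experiments/IDRiD.py
```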
The code runs according to a `*.yaml` config file. We provide two ways to run it:

1. Train from `configs/base_config.yaml`:

   ```bash
   cd task
   python run.py
   ```

   Note: Please make sure `configs/base_config.yaml` is prepared as you expect, since the code will be executed based on it in this case.

2. For convenience, we provide base config templates for training under `configs/base_configs/{classification|segmentation}/{dataset}.yaml`. You can copy a template over `configs/base_config.yaml` and then run the code as above (see the example after this list).
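For example, training a classification model on `APTOS2019` from a template could look roughly like the following; the exact template filename is an assumption following the `{classification|segmentation}/{dataset}.yaml` pattern:

```bash
cd task

# Copy a provided template over the active base config
# (template path assumed from the {classification|segmentation}/{dataset}.yaml pattern).
cp configs/base_configs/classification/APTOS2019.yaml configs/base_config.yaml

# Launch training with the overridden base config.
python run.py
```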
Args:

- `-b, --base_config_path`: path of the base config; default is `base_config.yaml`. Optional.
- `-g, --gpu`: GPU to run on; default is specified by the config file. Optional.
- `-n, --network`: enable network parallel computing; default is `false`.
- `--all_gpu`: enable all GPUs for parallel computing; default is `false`.
- `--host`: cloud host; default is initialized by `configs/base_config.yaml`. Optional.
- `--port`: cloud port; default is initialized by `configs/base_config.yaml`. Optional.
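Typical invocations combining these flags might look like the following; the config path, GPU index, host, and port values are illustrative:

```bash
cd task

# Run with an explicit base config and a specific GPU (values illustrative).
python run.py -b configs/base_config.yaml -g 0

# Override the cloud host/port from the command line rather than relying on
# the defaults from configs/base_config.yaml (address and port illustrative).
python run.py --host 127.0.0.1 --port 8080
```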
Our TJDR dataset is available here.
We train our model on 6 NVIDIA GeForce RTX 3090 GPUs, each with 24 GB of memory. Testing is conducted on the same machines.