Large Scale Unsupervised Brain MRI Image Registration (LUMIR)

This official repository houses baseline methods, training scripts, and pretrained models for the LUMIR challenge at Learn2Reg 2024.
The challenge is dedicated to unsupervised brain MRI image registration and offers a comprehensive dataset of over 4000 preprocessed T1-weighted 3D brain MRI images, available for training, testing, and validation purposes.

Please visit learn2reg.grand-challenge.org for more information.

$${\color{red}New!}$$ - 10/07/2024 - Test phase ranking is available in this section, congrats to the winners!
08/14/2024 - Test phase submission is available, see this section!

Dataset:

  • Download Training Dataset: Access the training dataset via Google Drive (~52GB).
  • Sanity Check: Since LUMIR focuses on unsupervised image registration, segmentation labels and landmarks for both the training and validation datasets are kept private. However, we provide a small subset to enable participants to perform sanity checks before submitting their results to the Grand Challenge.
    • Segmentation labels for 5 images in the training dataset (download) (Note that these labels are provided solely for sanity-check purposes and should not be used for training. The segmentations used for the test images may differ from the ones provided here.)
  • Preprocessing: The OpenBHB dataset underwent initial preprocessing by its creators, which included skull stripping and affine registration. For comprehensive details, refer to the OpenBHB GitHub page and their article. Subsequently, we performed N4 bias correction with ITK and intensity normalization using a pre-existing tool.
  • Annotation: We conducted segmentation of the anatomical structures using automated software. To enhance the dataset for evaluation purposes, an experienced radiologist and neurologist contributed manual landmark annotations to a subset of the images.
  • Image size: The dimensions of each image are 160 x 224 x 192.
  • Normalization: Intensity values for each image volume have been normalized to fall within the range [0,255].
  • Dataset structure:
    LUMIR/imagesTr/------
            LUMIRMRI_0000_0000.nii.gz   <--- a single brain T1 MR image
            LUMIRMRI_0001_0000.nii.gz
            LUMIRMRI_0002_0000.nii.gz
            .......
    LUMIR/imagesVal/------
            LUMIRMRI_3454_0000.nii.gz
            LUMIRMRI_3455_0000.nii.gz
  • Dataset json file: LUMIR_dataset.json
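
For a quick start, the snippet below sketches how a training image listed in LUMIR_dataset.json can be loaded with nibabel. The key names ('training', 'image') are assumed from the usual Learn2Reg dataset-description layout and should be verified against the actual json file.

```python
# Minimal loading sketch (assumed key names, not taken from this repository).
import json
import nibabel as nib

with open('LUMIR_dataset.json') as f:
    meta = json.load(f)

entry = meta['training'][0]            # assumed key; check the actual json file
img = nib.load(entry['image'])         # e.g. a path like imagesTr/LUMIRMRI_0000_0000.nii.gz
vol = img.get_fdata()                  # intensities already normalized to [0, 255]
print(vol.shape)                       # expected: (160, 224, 192)
```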

Baseline methods:

Learning-based models:

Learning-based foundation models:

Optimization-based methods:

Validation dataset results for baseline methods

Model Dice↑ TRE↓ (mm) NDV↓ (%) HdDist95↓
VFA 0.7726 ± 0.0286 2.4949 0.0788 3.2127
TransMorph 0.7594 ± 0.0319 2.4225 0.3509 3.5074
uniGradICON (w/ IO) 0.7512 ± 0.0366 2.4514 0.0001 3.5080
uniGradICON (w/o IO) 0.7369 ± 0.0412 2.5733 0.0000 3.6102
SynthMorph 0.7243 ± 0.0294 2.6099 0.0000 3.5730
VoxelMorph 0.7186 ± 0.0340 3.1545 1.1836 3.9821
SyN (ATNs) 0.6988 ± 0.0561 2.6497 0.0000 3.7048
deedsBCV 0.6977 ± 0.0274 2.2230 0.0001 3.9540
Initial 0.5657 ± 0.0263 4.3543 0.0000 4.7876

Test phase results:

Team                      TRE↓ (mm)       Dice↑           HdDist95↓       NDV↓ (%)        Score   Rank   GitHub
honkamj                   3.0878 ± 4.17   0.7851 ± 0.11   3.0352 ± 2.41   0.0025 ± 0.00   0.814   1      GitHub
hnuzyx_next-gen-nn        3.1245 ± 4.19   0.7773 ± 0.12   3.2781 ± 2.55   0.0001 ± 0.00   0.781   2      -
lieweaver                 3.0714 ± 4.22   0.7779 ± 0.12   3.2850 ± 2.64   0.0121 ± 0.00   0.737   3      -
zhuoyuanw210              3.1435 ± 4.20   0.7726 ± 0.12   3.2331 ± 2.53   0.0045 ± 0.00   0.723   4      -
LYU-zhouhu                3.1324 ± 4.21   0.7776 ± 0.12   3.2464 ± 2.53   0.0150 ± 0.00   0.722   5      -
Tsubasa025                3.1144 ± 4.16   0.7701 ± 0.12   3.2555 ± 2.56   0.0030 ± 0.00   0.702   6
uniGradICON (w/ ISO 50)   3.1350 ± 4.18   0.7596 ± 0.13   3.4010 ± 2.63   0.0002 ± 0.00   0.668   7      GitHub
VFA                       3.1377 ± 4.21   0.7767 ± 0.11   3.1505 ± 2.47   0.0704 ± 0.05   0.667   8      GitHub
lukasf                    3.1440 ± 4.20   0.7639 ± 0.12   3.4217 ± 2.60   0.2761 ± 0.08   0.561   9      -
Bailiang                  3.1559 ± 4.16   0.7735 ± 0.12   3.3287 ± 2.57   0.0222 ± 0.01   0.526   10     GitHub
TransMorph                3.1420 ± 4.22   0.7624 ± 0.12   3.4617 ± 2.67   0.3621 ± 0.09   0.518   11     GitHub
TimH                      3.1926 ± 4.17   0.7303 ± 0.13   3.5695 ± 2.61   0.0000 ± 0.00   0.487   12     -
deedsBCV                  3.1042 ± 4.20   0.6958 ± 0.14   3.9446 ± 2.71   0.0002 ± 0.00   0.423   13     GitHub
uniGradICON               3.2400 ± 4.21   0.7422 ± 0.13   3.5747 ± 2.66   0.0001 ± 0.00   0.402   14     GitHub
kimjin2510                3.2354 ± 4.28   0.7355 ± 0.14   3.7328 ± 2.69   0.0033 ± 0.01   0.384   15     -
HongyuLyu                 3.1962 ± 4.26   0.7596 ± 0.12   3.5511 ± 2.58   1.1646 ± 0.29   0.379   16     -
SynthMorph                3.2276 ± 4.24   0.7216 ± 0.14   3.6136 ± 2.61   0.0000 ± 0.00   0.361   17     GitHub
TS_UKE                    3.2250 ± 4.23   0.7603 ± 0.12   3.6297 ± 2.78   0.0475 ± 0.03   0.351   18     -
ANTsSyN                   3.4845 ± 4.24   0.7025 ± 0.14   3.6877 ± 2.60   0.0000 ± 0.00   0.265   19     GitHub
VoxelMorph                3.5282 ± 4.32   0.7144 ± 0.14   4.0718 ± 2.79   1.2167 ± 0.27   0.157   20     GitHub
ZeroDisplacement          4.3841 ± 4.33   0.5549 ± 0.17   4.9148 ± 2.50   0.0000 ± 0.00   0.157   20     -

Note: ISO stands for Instance-specific Optimization

Evaluation metrics:

  1. TRE (Code)
  2. Dice (Code)
  3. HD95 (Code)
  4. Non-diffeomorphic volumes (NDV) (Code); see this article published in IJCV and its associated GitHub page
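
For orientation only, the sketch below computes a label-wise mean Dice between a fixed and a warped segmentation; it is not the challenge's evaluation code, which is linked above.

```python
# Illustrative sketch of a mean Dice score between a fixed segmentation and a
# warped moving segmentation, both integer-labeled numpy arrays of equal shape.
import numpy as np

def mean_dice(seg_fixed: np.ndarray, seg_warped: np.ndarray) -> float:
    labels = np.unique(seg_fixed)
    labels = labels[labels != 0]                       # skip background
    scores = []
    for lab in labels:
        a, b = seg_fixed == lab, seg_warped == lab
        denom = a.sum() + b.sum()
        if denom > 0:
            scores.append(2.0 * np.logical_and(a, b).sum() / denom)
    return float(np.mean(scores))
```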

Test Phase Submission Guidelines:

The test set consists of 590 images, making it impractical to distribute and collect the deformation fields. As a result, the test set will not be made available to challenge participants. Instead, participants are required to containerize their methods with Docker and submit their Docker containers for evaluation. Your code will not be shared and will only be used internally by the Learn2Reg organizers.

Docker allows for running an algorithm in an isolated environment called a container. In particular, this container will locally replicate your pipeline requirements and execute your inference script.

Detailed instructions on how to build your Docker container are available at learn2reg.grand-challenge.org/lumir-test-phase-submission/
We have provided examples and templates for creating a Docker image for submission on our GitHub. You may find it helpful to start with the example Docker submission we created for TransMorph (available here), or you can start from a blank template (available here).

Your submission should be a single .zip file with the following structure:

LUMIR_[your Grand Challenge username]_TestPhase.zip
└── [your docker image name].tar.gz <------ #Your Docker container
└── README.txt                      <------ #A description of the requirements for running your model, including the number of CPUs, amount of RAM, and the estimated computation time per subject.
└── validation_predictions.zip      <------ #A .zip file containing the predicted displacement fields for the validation dataset, ensuring the format adheres to the one outlined on this page.
    ├── disp_3455_3454.nii.gz
    ├── disp_3456_3455.nii.gz
    ├── disp_3457_3456.nii.gz
    ├── disp_3458_3457.nii.gz
    ├── ...
    └── ...
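
The snippet below is one possible way to assemble such a zip with Python; the file names are placeholders (replace them with your own Docker image name and Grand Challenge username).

```python
# Hypothetical packaging sketch: bundle the Docker image, README.txt, and the
# validation predictions into a single submission zip matching the layout above.
import zipfile

files = ['my_docker_image.tar.gz', 'README.txt', 'validation_predictions.zip']
with zipfile.ZipFile('LUMIR_myusername_TestPhase.zip', 'w', zipfile.ZIP_DEFLATED) as zf:
    for name in files:
        zf.write(name)
```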

You will need to submit by 31 August 2024.

Please choose ONE of the following:

  • EITHER Email the download link for your .zip file to jchen245 [at] jhmi.edu
  • OR Upload your .zip file here.

Validation Submission Guidelines:

We expect displacement fields to be provided for all registrations. The file naming format should be disp_PatID1_PatID2, where PatID1 and PatID2 are the subject IDs of the fixed and moving images, respectively. The evaluation process requires the files to be organized in the following structure:

folder.zip
└── folder
    ├── disp_3455_3454.nii.gz
    ├── disp_3456_3455.nii.gz
    ├── disp_3457_3456.nii.gz
    ├── disp_3458_3457.nii.gz
    ├── ...
    └── ...

Submissions must be uploaded as a single .zip file containing displacement fields (displacements only) for all validation pairs for all tasks (even when participating in only a subset of the tasks; in that case, submit deformation fields of zeros for the remaining tasks). You can find the validation pairs in LUMIR_dataset.json. The convention used for the displacement fields follows scipy's map_coordinates() function, expecting displacement fields in the format [X, Y, Z, [x, y, z]] or [[x, y, z], X, Y, Z], where X, Y, Z are the image dimensions and x, y, z are the voxel displacements along each axis. The evaluation script expects .nii.gz files in full-precision format with shape 160x224x192x3 (matching the image dimensions above). Further information can be found here.
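
To make the convention concrete, the sketch below (an illustration, not the official evaluation script) warps a moving image with a submitted displacement field by adding it to an identity voxel grid and resampling with map_coordinates:

```python
# Illustrative sketch: apply a displacement field stored in the convention above
# (shape (X, Y, Z, 3), voxel units) to a moving image.
import numpy as np
import nibabel as nib
from scipy.ndimage import map_coordinates

disp = nib.load('disp_3455_3454.nii.gz').get_fdata()                  # (160, 224, 192, 3)
moving = nib.load('imagesVal/LUMIRMRI_3454_0000.nii.gz').get_fdata()  # (160, 224, 192)

X, Y, Z = moving.shape
identity = np.mgrid[0:X, 0:Y, 0:Z].astype(np.float64)                 # identity grid, (3, X, Y, Z)
coords = identity + np.moveaxis(disp, -1, 0)                          # sampling coordinates in the moving image
warped = map_coordinates(moving, coords, order=1)                     # warped moving image, (X, Y, Z)
```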

Note for PyTorch users: When using PyTorch as your deep learning framework, you are most likely transforming your images with the grid_sample() routine. Please be aware that this function uses a different convention from ours, expecting coordinates in the format [X, Y, Z, [x, y, z]] normalized to the range [-1, 1]. Prior to submission, you should therefore convert your displacement fields to match our convention.
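
One possible conversion is sketched below. It assumes your model outputs a displacement field of shape (1, 3, X, Y, Z) with channels ordered (x, y, z) as used by grid_sample with align_corners=True; adapt it to how your network actually parameterizes the deformation.

```python
# Hypothetical conversion from grid_sample-style normalized displacements to the
# voxel-unit, channel-last convention expected for submission.
import numpy as np
import nibabel as nib
import torch

def to_voxel_displacement(disp_norm: torch.Tensor) -> np.ndarray:
    """disp_norm: (1, 3, X, Y, Z), channels (x, y, z), normalized to [-1, 1]."""
    disp = disp_norm[0].detach().cpu().numpy()          # (3, X, Y, Z)
    disp = disp[::-1].copy()                            # reorder channels to array-axis order (X, Y, Z)
    for axis, size in enumerate(disp.shape[1:]):
        disp[axis] *= (size - 1) / 2.0                  # undo [-1, 1] normalization (align_corners=True)
    return np.moveaxis(disp, 0, -1).astype(np.float32)  # (X, Y, Z, 3)

# Example: save a field under the expected naming scheme (fixed 3455, moving 3454).
disp_vox = to_voxel_displacement(torch.zeros(1, 3, 160, 224, 192))
nib.save(nib.Nifti1Image(disp_vox, np.eye(4)), 'disp_3455_3454.nii.gz')
```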

Citations for dataset usage:

@article{dufumier2022openbhb,
title={OpenBHB: a large-scale multi-site brain MRI data-set for age prediction and debiasing},
author={Dufumier, Benoit and Grigis, Antoine and Victor, Julie and Ambroise, Corentin and Frouin, Vincent and Duchesnay, Edouard},
journal={NeuroImage},
volume={263},
pages={119637},
year={2022},
publisher={Elsevier}
}

@article{taha2023magnetic,
title={Magnetic resonance imaging datasets with anatomical fiducials for quality control and registration},
author={Taha, Alaa and Gilmore, Greydon and Abbass, Mohamad and Kai, Jason and Kuehn, Tristan and Demarco, John and Gupta, Geetika and Zajner, Chris and Cao, Daniel and Chevalier, Ryan and others},
journal={Scientific Data},
volume={10},
number={1},
pages={449},
year={2023},
publisher={Nature Publishing Group UK London}
}

@article{marcus2007open,
title={Open Access Series of Imaging Studies (OASIS): cross-sectional MRI data in young, middle aged, nondemented, and demented older adults},
author={Marcus, Daniel S and Wang, Tracy H and Parker, Jamie and Csernansky, John G and Morris, John C and Buckner, Randy L},
journal={Journal of cognitive neuroscience},
volume={19},
number={9},
pages={1498--1507},
year={2007},
publisher={MIT Press}
}

If you have used Non-diffeomorphic volumes in the evaluation of the deformation regularity, please cite the following:

@article{liu2024finite,
  title={On finite difference jacobian computation in deformable image registration},
  author={Liu, Yihao and Chen, Junyu and Wei, Shuwen and Carass, Aaron and Prince, Jerry},
  journal={International Journal of Computer Vision},
  pages={1--11},
  year={2024},
  publisher={Springer}
}
