---
title: "DDMR: Deep Deformation Map Registration of CT/MRIs"
colorFrom: indigo
colorTo: indigo
sdk: docker
app_port: 7860
emoji: 🧠
pinned: false
license: mit
app_file: demo/app.py
---
Learning deep abdominal CT registration through adaptive loss weighting and synthetic data generation
DDMR was developed by SINTEF Health Research. The corresponding manuscript describing the framework has been published in PLOS ONE and is openly available [here](https://doi.org/10.1371/journal.pone.0282110).
- Setup virtual environment:
  ```
  virtualenv -ppython3 venv --clear
  source venv/bin/activate
  ```
- Install requirements:
  ```
  pip install git+https://github.com/jpdefrutos/DDMR
  ```
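After installation, you can verify that the `ddmr` console entry point is available (it is the command used in all the examples below):

```
ddmr --help
```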
Use the following CLI command to register images:

```
ddmr --fixed path/to/fixed_image.nii.gz --moving path/to/moving_image.nii.gz --outputdir path/to/output/dir -a <anatomy> --model <model> --gpu <gpu-number> --original-resolution
```
where:

- `anatomy`: the type of anatomy you want to register: B (brain) or L (liver)
- `model`: the model you want to use (the loss abbreviations are sketched in code further below):
  - BL-N (baseline with NCC)
  - BL-NS (baseline with NCC and SSIM)
  - SG-ND (segmentation guided with NCC and DSC)
  - SG-NSD (segmentation guided with NCC, SSIM, and DSC)
  - UW-NSD (uncertainty weighted with NCC, SSIM, and DSC)
  - UW-NSDH (uncertainty weighted with NCC, SSIM, DSC, and HD)
- `gpu`: the number of the GPU you want the model to run on, if you have multiple GPUs and want to use only one
- `original-resolution`: (flag) whether to upsample the registered image to the fixed image resolution (disabled if the flag is not present)
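For example, a sample invocation registering two brain images with the uncertainty-weighted model on GPU 0 (the file paths are placeholders):

```
ddmr --fixed fixed_brain.nii.gz --moving moving_brain.nii.gz --outputdir results/ -a B --model UW-NSD --gpu 0 --original-resolution
```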
Run `ddmr --help` to see additional options, such as using precomputed segmentations to crop the images to the desired ROI, or debugging.
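For readers unfamiliar with the loss abbreviations used in the model names, here is a minimal NumPy sketch of two of them, NCC and DSC; this is an illustration of the standard formulas, not the implementation used in this repository:

```python
import numpy as np

def ncc(fixed: np.ndarray, moving: np.ndarray) -> float:
    """Global normalized cross-correlation (NCC) between two images."""
    f = fixed - fixed.mean()
    m = moving - moving.mean()
    denom = np.sqrt((f ** 2).sum() * (m ** 2).sum())
    return float((f * m).sum() / (denom + 1e-8))

def dsc(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient (DSC) between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    intersection = np.logical_and(a, b).sum()
    return float(2.0 * intersection / (a.sum() + b.sum() + 1e-8))
```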
A live demo to easily test the best performing pretrained models was developed with Gradio and is deployed on Hugging Face. To access it, click on the Hugging Face badge above. Below is a snapshot of the current state of the demo app.
To develop the Gradio app locally, you can use either Python or Docker.

Using Python, run the app with:

```
python demo/app.py --cwd ./ --share 0
```

Then open http://127.0.0.1:7860 in your favourite internet browser to view the demo.
Alternatively, you can use Docker:

```
docker build -t ddmr .
docker run -it -p 7860:7860 ddmr
```

Then open http://127.0.0.1:7860 in your favourite internet browser to view the demo.
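If you want the container to see your GPUs, Docker's `--gpus` flag can be forwarded; this assumes the NVIDIA Container Toolkit is installed on the host and that the image ships a GPU-enabled runtime, which this sketch does not verify:

```
docker run -it --gpus all -p 7860:7860 ddmr
```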
Use the "MultiTrain" scripts to launch the trainings, providing the neccesary parameters. Those in the COMET folder accepts a .ini
configuration file (see COMET/train_config_files/
for example configurations).
For instance:
python TrainingScripts/Train_3d.py
Use the Evaluate_network scripts to test the trained models. In the Brain folder, use `Evaluate_network__test_fixed.py` instead.

For instance:

```
python EvaluationScripts/evaluation.py
```
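As a quick sanity check on a registration result, you can compare the warped output against the fixed image grid. A minimal sketch using nibabel, assuming hypothetical file names for the output (the actual names depend on your CLI inputs):

```python
import numpy as np
import nibabel as nib  # pip install nibabel

# Hypothetical file names; adjust to your actual inputs and outputs.
fixed = nib.load("fixed_image.nii.gz")
warped = nib.load("output/registered_moving.nii.gz")

# With --original-resolution, the warped image should share the
# fixed image's voxel grid and affine.
print("shapes:", fixed.shape, warped.shape)
print("affines match:", np.allclose(fixed.affine, warped.affine))
```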
Please consider citing our paper if you find the work useful:
```
@article{perezdefrutos2022ddmr,
    title = {Learning deep abdominal CT registration through adaptive loss weighting and synthetic data generation},
    author = {Pérez de Frutos, Javier AND Pedersen, André AND Pelanis, Egidijus AND Bouget, David AND Survarachakan, Shanmugapriya AND Langø, Thomas AND Elle, Ole-Jakob AND Lindseth, Frank},
    journal = {PLOS ONE},
    publisher = {Public Library of Science},
    year = {2023},
    month = {02},
    volume = {18},
    number = {2},
    pages = {1-14},
    doi = {10.1371/journal.pone.0282110},
    url = {https://doi.org/10.1371/journal.pone.0282110}
}
```
This project is based on the VoxelMorph library and its related publication:
```
@article{balakrishnan2019voxelmorph,
    title={VoxelMorph: A Learning Framework for Deformable Medical Image Registration},
    author={Balakrishnan, Guha and Zhao, Amy and Sabuncu, Mert R. and Guttag, John and Dalca, Adrian V.},
    journal={IEEE Transactions on Medical Imaging},
    year={2019},
    volume={38},
    number={8},
    pages={1788-1800},
    doi={10.1109/TMI.2019.2897538}
}
```