❗ These implementations are based on the original fastMRI repository by Facebook Research. The original repository can be found [here](https://github.com/facebookresearch/fastMRI).
A deep learning-based (DL) reconstruction framework for Magnetic Resonance Cholangiopancreatography (MRCP) imaging. We use ResNet-based DL models for supervised (SV) and self-supervised training. The models are trained on six-fold retrospectively undersampled 3T MRCP and evaluated on six-fold retrospective and prospective undersampling acquired at 3T and 0.55T.
- Input: SENSE-based synthesized k-space data from six-fold retrospective undersampling instead of zero-filled k-space data
- Target: GRAPPA reconstruction of two-fold accelerated MRCP (clinical standard)
- Sensitivity maps: Precomputed using the ESPIRiT algorithm
- Unrolled network: the Variational Network model (a toy sketch of one unrolled cascade follows this list)
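The network code lives in the `dlrecon` package; as a rough, single-coil illustration of what one unrolled variational-network cascade does (a learned ResNet regularizer combined with a soft data-consistency step), consider the sketch below. The names `ResNetBlock` and `varnet_iteration` are hypothetical, and this toy version omits the coil sensitivity maps used by the real model.

```python
import torch
import torch.nn as nn


class ResNetBlock(nn.Module):
    """Hypothetical residual block standing in for the learned regularizer."""

    def __init__(self, channels: int = 2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, channels, kernel_size=3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.body(x)


def varnet_iteration(kspace_pred, kspace_meas, mask, regularizer, lam):
    """One unrolled cascade: soft data consistency plus a learned image-space prior."""
    # Data consistency: pull sampled k-space locations back toward the measurements.
    dc = mask * (kspace_pred - kspace_meas)
    # Learned prior: go to image space, apply the ResNet on real/imag channels, go back.
    image = torch.fft.ifft2(kspace_pred)
    image_ch = torch.cat([image.real, image.imag], dim=1)  # (B, 2, H, W)
    reg = regularizer(image_ch)
    reg_k = torch.fft.fft2(torch.complex(reg[:, 0:1], reg[:, 1:2]))
    return kspace_pred - lam * dc - reg_k


# Toy usage: 6x Cartesian mask, random "measurements", 8 unrolled cascades.
mask = torch.zeros(1, 1, 64, 64)
mask[..., ::6] = 1
kspace_meas = torch.randn(1, 1, 64, 64, dtype=torch.complex64) * mask
kspace_pred = kspace_meas.clone()
regularizer = ResNetBlock(channels=2)
for _ in range(8):
    kspace_pred = varnet_iteration(kspace_pred, kspace_meas, mask, regularizer, lam=1.0)
```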
- Clone the repository, then navigate to the `MRCP_DLRecon` root directory.
  ```bash
  git clone git@github.com:JinhoKim46/MRCP_DLRecon.git
  cd MRCP_DLRecon
  ```
- Create a new conda environment:
  ```bash
  conda env create -f environment.yml
  ```
- Activate the conda environment:
  ```bash
  conda activate mrcp_dlrecon
  ```
- Install the `dlrecon` package:
  ```bash
  pip install -e .
  ```
- MRI data are stored in the `HDF5` format containing the following structures (a minimal `h5py` check is sketched after this list):
  - Datasets:
    - `grappa`: target data (y $\times$ x $\times$ Slice)
    - `kdata_raw`: raw $k$-space data (2x) (nCoil $\times$ PE $\times$ RO $\times$ Slice)
    - `kdata_fs`: fully-sampled $k$-space data computed from `kdata_raw` using GRAPPA (nCoil $\times$ PE $\times$ RO $\times$ Slice)
    - `sm_espirit`: ESPIRiT-based sensitivity maps (nCoil $\times$ y $\times$ x $\times$ Slice)
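If you want to sanity-check a converted file against this layout, a minimal `h5py` inspection could look like the following sketch. The file name is a placeholder; the dataset names and axis orders are taken from the list above.

```python
import h5py

# Placeholder path: point this at one of your HDF5 files under data_path.
with h5py.File("sample_mrcp.h5", "r") as f:
    for name in ("grappa", "kdata_raw", "kdata_fs", "sm_espirit"):
        if name in f:
            dset = f[name]
            print(f"{name}: shape={dset.shape}, dtype={dset.dtype}")
        else:
            print(f"{name}: missing")
    # kdata_raw/kdata_fs are expected as (nCoil, PE, RO, Slice),
    # grappa as (y, x, Slice), and sm_espirit as (nCoil, y, x, Slice).
```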
- The `sample_data` directory contains sample MRCP data for training, validation, and testing. We provide two two-fold (2x) accelerated 3D MRCP scans and one six-fold (6x) accelerated 3D MRCP scan. In `sample_data/dataset.csv`, the two 2x MRCP scans are assigned to training and validation, and the 6x MRCP scan is marked for testing. You can find the data here. Additional information, such as header information, is omitted from the sample data.
- Data splitting for training, validation, and testing is defined by the `sample_data/dataset.csv` file.
- Replace `data_path` in the `configs/paths.yaml` file with the actual path to the data (a quick path check is sketched after this list).
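After editing `configs/paths.yaml`, a quick way to confirm the paths resolve is to load the file and check its entries. This sketch assumes `data_path` and `log_path` (both mentioned in this README) sit at the top level of the YAML file.

```python
from pathlib import Path

import yaml

# Assumption: data_path and log_path are top-level keys in configs/paths.yaml.
with open("configs/paths.yaml") as f:
    paths = yaml.safe_load(f)

for key in ("data_path", "log_path"):
    value = paths.get(key)
    if value is None:
        print(f"{key}: not found at the top level of paths.yaml")
    else:
        p = Path(value)
        print(f"{key}: {p} (exists: {p.exists()})")
```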
- Define the training configurations in the `configs/dlrecon.yaml` file.
- Run `main.py` with:
  ```bash
  python main.py fit --config configs/dlrecon.yaml
  ```
- You can set the run name by adding the `--name` argument at runtime. If you do not set it, the run name defaults to `%Y%m%d_%H%M%S_{training_manner}` with the current date and time (see the sketch after this item).
  ```bash
  python main.py fit --config configs/dlrecon.yaml --name test_run
  ```
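For reference, a default run name in the `%Y%m%d_%H%M%S_{training_manner}` pattern is composed as in the snippet below; this is only an illustration, not the repository's code.

```python
from datetime import datetime

training_manner = "ssv"  # e.g. "ssv", as used in the override example below
run_name = f"{datetime.now().strftime('%Y%m%d_%H%M%S')}_{training_manner}"
print(run_name)  # e.g. 20250131_142501_ssv
```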
- You can override the configurations in the `configs/dlrecon.yaml` file by adding arguments at runtime:
  ```bash
  python main.py fit --config configs/dlrecon.yaml --model.training_manner ssv
  ```
- Log files, including `checkpoints/`, `lightning_logs/`, and `script_dump/`, are stored in `log_path/run_name`. `log_path` is defined in the `configs/paths.yaml` file.
- You can resume training by passing `run_name` to the `fit` command. A `*.ckpt` file must be placed in `run_name/checkpoints/` to resume the model.
  ```bash
  python main.py fit --config configs/dlrecon.yaml --name run_name
  ```
- Run `main.py` with `run_name`:
  ```bash
  python main.py test --config configs/dlrecon.yaml --name run_name
  ```
- A `*.ckpt` file must be placed in `run_name/checkpoints/` to test the model.
- The output files are saved in `log_path/run_name/npys/FILENAME` (a loading sketch follows this list).
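Once testing finishes, the saved arrays can be inspected with NumPy. The path below is a placeholder; substitute your actual `log_path`, `run_name`, and output file name.

```python
import numpy as np
import matplotlib.pyplot as plt

# Placeholder path: substitute your actual log_path, run_name, and file name.
recon = np.load("log_path/run_name/npys/your_output.npy")
print(recon.shape, recon.dtype)

# Show the middle slice if the array is 3D; otherwise show the array directly.
img = recon[..., recon.shape[-1] // 2] if recon.ndim == 3 else recon
plt.imshow(np.abs(img), cmap="gray")
plt.axis("off")
plt.show()
```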