
SuperFormer: Volumetric Transformer Architectures for MRI Super-Resolution

This repository provides a PyTorch implementation of the paper SuperFormer: Volumetric Transformer Architectures for MRI Super-Resolution, presented at the 7th Simulation and Synthesis in Medical Imaging (SASHIMI) workshop at MICCAI 2022. SuperFormer is a novel volumetric vision transformer for MRI super-resolution. Our method leverages the 3D and multi-domain information from volume and feature embeddings to reconstruct high-resolution (HR) MRIs using a local self-attention mechanism.
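For intuition, the volume-embedding plus local (windowed) self-attention idea can be sketched as follows. This is an illustrative simplification, not the repository's actual code: the names `VolumeEmbedding` and `window_partition_3d` and all dimensions are hypothetical.

```python
import torch
import torch.nn as nn

# Illustrative sketch only -- names and shapes are hypothetical and
# simplified relative to the actual SuperFormer implementation.
class VolumeEmbedding(nn.Module):
    """Project a 3D MRI volume into a sequence of patch tokens."""
    def __init__(self, in_ch=1, embed_dim=96, patch=2):
        super().__init__()
        self.proj = nn.Conv3d(in_ch, embed_dim, kernel_size=patch, stride=patch)

    def forward(self, x):                      # x: (B, C, D, H, W)
        x = self.proj(x)                       # (B, E, D/p, H/p, W/p)
        return x.flatten(2).transpose(1, 2)    # (B, N, E) token sequence

def window_partition_3d(x, ws):
    """Split a (B, D, H, W, E) token grid into non-overlapping ws^3 windows,
    so self-attention can be computed locally inside each window."""
    B, D, H, W, E = x.shape
    x = x.view(B, D // ws, ws, H // ws, ws, W // ws, ws, E)
    return x.permute(0, 1, 3, 5, 2, 4, 6, 7).reshape(-1, ws ** 3, E)

vol = torch.randn(1, 1, 16, 16, 16)            # toy 16^3 single-channel volume
tokens = VolumeEmbedding()(vol)                # (1, 512, 96): 8^3 patches of dim 96
grid = tokens.reshape(1, 8, 8, 8, 96)          # back to a spatial token grid
windows = window_partition_3d(grid, 4)         # (8, 64, 96): eight 4^3 windows
```

Attention is then applied within each window independently, which keeps the cost linear in the number of windows rather than quadratic in the full volume.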

Paper

SuperFormer: Volumetric Transformer Architectures for MRI Super-Resolution
Cristhian Forigua¹, María Escobar¹ and Pablo Arbeláez¹
¹Center for Research and Formation in Artificial Intelligence (CINFONIA), Universidad de Los Andes, Bogotá, Colombia.

Overview

Figure: SuperFormer architecture overview.

Dependencies and installation

  1. Clone the repo
git clone https://github.com/BCV-Uniandes/SuperFormer.git
cd SuperFormer
  2. Create the environment from the .yml file
conda env create -f environment.yml
conda activate superformer

Human Connectome Project Dataset

Please refer to the Human Connectome Project to download the dataset. Move the files from ./split into the ./HCP folder before running the code.

Low-resolution MRI generation

To generate the low-resolution MRIs, we use the code in ./data/kspace.m. The factor_truncate variable controls the degree of subsampling in the frequency domain. Please change the "rootdir" path to the path where you downloaded the data. Running this code requires MATLAB's "NIfTI_20140122" package.
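As a rough illustration of what kspace.m does, here is a NumPy sketch of the same k-space truncation idea: transform to the frequency domain, keep only a central region, and transform back. The function name and the exact masking details are assumptions; only the MATLAB script reproduces the paper's preprocessing.

```python
import numpy as np

# Illustrative NumPy re-implementation of the k-space truncation idea.
# The repo's actual preprocessing lives in ./data/kspace.m (MATLAB);
# the masking details here are assumptions, not the paper's exact recipe.
def truncate_kspace(volume, factor_truncate):
    """Low-pass a volume by keeping the central 1/factor of k-space."""
    k = np.fft.fftshift(np.fft.fftn(volume))       # centered k-space
    mask = np.zeros_like(k)
    centers = [s // 2 for s in volume.shape]
    halves = [max(1, s // (2 * factor_truncate)) for s in volume.shape]
    region = tuple(slice(c - h, c + h) for c, h in zip(centers, halves))
    mask[region] = 1                               # keep central frequencies
    lr = np.fft.ifftn(np.fft.ifftshift(k * mask))  # back to image space
    return np.abs(lr)

hr = np.random.rand(32, 32, 32)
lr = truncate_kspace(hr, 2)   # same shape as hr, but low-pass filtered
```

Larger factor_truncate values keep a smaller central region of k-space, i.e. a blurrier low-resolution volume.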


Train

Training command:

sh train.sh

Make sure to change the paths to the HCP folder inside the options files: update the "dataroot_H" and "dataroot_L" parameters.
By default, train.sh trains SuperFormer from scratch. To train the 2D SwinIR approach, 3D RRDBNet, or 3D EDSR instead, change the path of the options file inside train.sh. See the ./options/train folder.
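The options files follow KAIR-style JSON. A minimal sketch of the two dataset keys mentioned above (the paths are placeholders, and the surrounding structure is an assumption based on KAIR's option format):

```json
{
  "datasets": {
    "train": {
      "dataroot_H": "/path/to/HCP/HR",
      "dataroot_L": "/path/to/HCP/LR"
    }
  }
}
```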

Pre-trained Model and Test

You can find our pre-trained models here.
Before testing, make sure you change the paths of the pre-trained models inside the ./options/test files: update the "pretrained_netG" attribute. Also, change the path to the HCP data.
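A sketch of the attribute to edit, in the same KAIR-style JSON (the path is a placeholder and the enclosing "path" block is an assumption based on KAIR's option format):

```json
{
  "path": {
    "pretrained_netG": "/path/to/pretrained/superformer.pth"
  }
}
```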
Test 3D command:

sh test.sh

Test 2D command (point the options file inside the script to the 2D configuration):

sh test.sh

License and Acknowledgement

This project borrows heavily from KAIR; we thank the authors for their contributions to the community.
See LICENSE for license details.

Contact

If you have any questions, please email [email protected]
