Benchmarking preprocessing choices with validation data #121
Labels: effort: high, impact: high, validation / evaluation
Proposed Pre-Processing Multiverse:
https://docs.mitk.org/2014.10/org_mitk_views_fiberfoxview.html
Generate simulated DWI phantoms with combinations of the following (a sketch enumerating the resulting parameter grid follows the list):
(1) High-direction / low-direction / split directions (i.e. in multiple runs)
(2) High b-value / low b-value / multi-shell / hemispheric
(3) Noise (Rician and Chi-Square distributed noise)
(4) Spikes (signal spikes added to the k-space data, resulting in stripe artifacts across the corresponding image slice)
(5) Aliasing (low vs. high % FOV shrinkage)
(6) N/2 Ghosts (low vs. high offset between successive lines in k-space)
(7) Susceptibility distortions using the artificial "fieldmap generator"
(8) Motion artifacts (high vs. low, translation vs. rotation)
(9) Eddy Currents (high vs. low)
(10) Gibbs Ringing (high vs. low)
(11) Native DWI space vs. T1w Co-registered
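To get a sense of the size of this design before any pruning, here is a minimal Python sketch that enumerates the full factorial grid. The factor names and levels are stand-ins for the actual Fiberfox settings above, not real parameter values:

```python
from itertools import product

# Hypothetical factor levels standing in for the Fiberfox settings listed above;
# the real values would come from the Fiberfox parameter files.
factors = {
    "directions": ["high", "low", "split"],
    "b_scheme": ["high-b", "low-b", "multi-shell", "hemispheric"],
    "noise": ["rician", "chi-square"],
    "spikes": ["off", "on"],
    "aliasing": ["low", "high"],
    "ghosts": ["low", "high"],
    "susceptibility": ["off", "on"],
    "motion": ["low-translation", "high-translation", "low-rotation", "high-rotation"],
    "eddy": ["low", "high"],
    "gibbs": ["low", "high"],
    "space": ["native", "t1w-coreg"],
}

names = list(factors)
grid = [dict(zip(names, combo)) for combo in product(*factors.values())]
print(f"{len(grid)} phantom configurations in the full factorial design")
```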
Proposed Post-Processing Multiverse:
-Using various multi-compartment reconstructions (CSD, CSA, ball-and-stick, SFM), along with local tracking / RecoBundles in DIPY (a sketch of one branch follows the link below). After the tractograms are generated, run:
https://github.com/scilus/ismrm_2015_tractography_challenge_scoring/tree/master/challenge_scoring
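As a rough illustration, here is a minimal DIPY sketch of just one branch (CSD + deterministic local tracking). The phantom file names, mask, and parameter values are placeholders; the other models (CSA, ball-and-stick, SFM) and RecoBundles would slot in the same way before handing the tractograms to the scoring script above:

```python
from dipy.core.gradients import gradient_table
from dipy.data import default_sphere
from dipy.direction import peaks_from_model
from dipy.io.gradients import read_bvals_bvecs
from dipy.io.image import load_nifti
from dipy.io.stateful_tractogram import Space, StatefulTractogram
from dipy.io.streamline import save_trk
from dipy.reconst.csdeconv import (ConstrainedSphericalDeconvModel,
                                   auto_response_ssst)
from dipy.tracking import utils
from dipy.tracking.local_tracking import LocalTracking
from dipy.tracking.stopping_criterion import BinaryStoppingCriterion
from dipy.tracking.streamline import Streamlines

# Placeholder file names -- these would be the Fiberfox phantom outputs.
data, affine, img = load_nifti("phantom_dwi.nii.gz", return_img=True)
bvals, bvecs = read_bvals_bvecs("phantom.bval", "phantom.bvec")
gtab = gradient_table(bvals, bvecs)
mask, _ = load_nifti("phantom_mask.nii.gz")
mask = mask.astype(bool)

# Fit a CSD model (one branch of the reconstruction multiverse).
response, _ = auto_response_ssst(gtab, data, roi_radii=10, fa_thr=0.7)
csd_model = ConstrainedSphericalDeconvModel(gtab, response, sh_order=6)
csd_peaks = peaks_from_model(csd_model, data, default_sphere,
                             relative_peak_threshold=0.5,
                             min_separation_angle=25, mask=mask)

# Deterministic local tracking seeded inside the mask.
seeds = utils.seeds_from_mask(mask, affine, density=1)
stopping = BinaryStoppingCriterion(mask)
streamlines = Streamlines(LocalTracking(csd_peaks, stopping, seeds,
                                        affine, step_size=0.5))

# Save for the ISMRM 2015 challenge scoring script linked above.
save_trk(StatefulTractogram(streamlines, img, Space.RASMM), "csd_det.trk")
```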
-Test-retest analysis of structural connectome features using the HNU1 dataset.
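For the test-retest piece, a rough sketch of one possible reliability metric, edge-wise ICC(2,1) across repeated HNU1 sessions; the `connectomes` array and its file name are hypothetical and would come from whatever connectome pipeline we settle on:

```python
import numpy as np

def icc_2_1(measurements):
    """Two-way random-effects, absolute-agreement, single-measure ICC.

    measurements : array of shape (n_subjects, n_sessions)
    """
    n, k = measurements.shape
    grand_mean = measurements.mean()
    ss_rows = k * ((measurements.mean(axis=1) - grand_mean) ** 2).sum()
    ss_cols = n * ((measurements.mean(axis=0) - grand_mean) ** 2).sum()
    ss_total = ((measurements - grand_mean) ** 2).sum()
    ms_rows = ss_rows / (n - 1)   # between-subject mean square
    ms_cols = ss_cols / (k - 1)   # between-session mean square
    ms_error = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_rows - ms_error) / (
        ms_rows + (k - 1) * ms_error + k * (ms_cols - ms_error) / n)

# Hypothetical array of shape (n_subjects, n_sessions, n_edges) holding
# vectorized upper-triangle connectome weights per HNU1 session.
connectomes = np.load("hnu1_connectome_edges.npy")
edge_icc = np.array([icc_2_1(connectomes[:, :, e])
                     for e in range(connectomes.shape[2])])
print(f"median edge-wise ICC: {np.median(edge_icc):.2f}")
```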
We'll probably have to reduce this search space to only the most essential combinations to avoid a combinatorial explosion, but I figured we'd start by brainstorming as exhaustive a list as possible as our ideal and go from there. We'll also need to figure out the best compute infrastructure for running these evaluations.
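One simple way to prune would be to benchmark a reproducible random subset of the grid rather than hand-picking cells; a sketch assuming the `grid` list from the first snippet (the fraction and seed are arbitrary placeholders, not recommendations):

```python
import random

def sample_multiverse(grid, fraction=0.05, seed=121):
    """Draw a reproducible random subset of the full factorial grid.

    `grid` is the list of configuration dicts built in the earlier sketch;
    `fraction` and `seed` are placeholder choices.
    """
    rng = random.Random(seed)
    n_keep = max(1, int(len(grid) * fraction))
    return rng.sample(grid, n_keep)
```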
@jelleveraart @mattcieslak @arokem @oesteban