Benchmarking preprocessing choices with validation data #121

Open
dPys opened this issue Sep 29, 2020 · 0 comments
Labels
effort: high (This task will probably require many person-hours) · impact: high (This contribution/idea will help a lot of people) · validation / evaluation

dPys (Collaborator) commented Sep 29, 2020

Proposed Pre-Processing Multiverse:
FiberFox phantom simulator: https://docs.mitk.org/2014.10/org_mitk_views_fiberfoxview.html
Generate simulated DWI phantoms with combinations of the following:
(1) High-direction / low-direction / split directions (i.e. in multiple runs)
(2) High b-value / low b-value / multi-shell / hemispheric
(3) Noise (Rician- and chi-square-distributed noise; see the sketch after this list)
(4) Spikes (signal spikes added to the k-space data, resulting in stripe artifacts across the corresponding image slice)
(5) Aliasing (low vs. high % FOV shrinkage)
(6) N/2 Ghosts (low vs. high offset between successive lines in k-space)
(7) Susceptibility distortions using the artificial "fieldmap generator"
(8) Motion artifacts (high vs. low, translation vs. rotation)
(9) Eddy Currents (high vs. low)
(10) Gibbs Ringing (high vs. low)
(11) Native DWI space vs. T1w Co-registered
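
To make the noise arm of item (3) concrete, here is a minimal NumPy sketch of corrupting a noise-free simulated signal with Rician noise. The volume shape, SNR levels, and b=0 intensity below are placeholders, not values taken from FiberFox.

```python
import numpy as np

def add_rician_noise(signal, snr, s0=1.0, rng=None):
    """Corrupt a noise-free DWI signal with Rician-distributed noise.

    Rician noise arises from taking the magnitude of a complex signal whose
    real and imaginary channels carry independent Gaussian noise; `snr` is
    defined relative to the b=0 intensity `s0`.
    """
    rng = np.random.default_rng(rng)
    sigma = s0 / snr
    real = signal + rng.normal(0.0, sigma, size=signal.shape)
    imag = rng.normal(0.0, sigma, size=signal.shape)
    return np.sqrt(real ** 2 + imag ** 2)

# Toy 10x10x10 volume with 30 gradient directions (placeholder shape).
clean = np.ones((10, 10, 10, 30))
noisy_high = add_rician_noise(clean, snr=10)  # "high noise" arm
noisy_low = add_rician_noise(clean, snr=40)   # "low noise" arm
```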

Proposed Post-Processing Multiverse:
- Using various reconstruction models (CSD, CSA, ball-and-stick, SFM), along with local tracking / RecoBundles in DIPY (a sketch of one such cell follows below). After the tractograms are generated, run the ISMRM 2015 tractography challenge scoring:
https://github.com/scilus/ismrm_2015_tractography_challenge_scoring/tree/master/challenge_scoring
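
For the reconstruction/tracking arm, here is a minimal sketch of one multiverse cell (CSD fit followed by deterministic local tracking) with DIPY. The file paths are placeholders, and the function names (e.g. auto_response_ssst, the sh_order keyword) assume a DIPY >= 1.2 API, so they may need adjusting to whatever version we pin; the resulting streamlines would then be saved and passed to the scoring script above.

```python
import nibabel as nib
from dipy.core.gradients import gradient_table
from dipy.data import default_sphere
from dipy.direction import peaks_from_model
from dipy.io.gradients import read_bvals_bvecs
from dipy.reconst.csdeconv import (ConstrainedSphericalDeconvModel,
                                   auto_response_ssst)
from dipy.tracking.local_tracking import LocalTracking
from dipy.tracking.stopping_criterion import ThresholdStoppingCriterion
from dipy.tracking.streamline import Streamlines
from dipy.tracking.utils import seeds_from_mask

# Placeholder paths for one simulated phantom from the multiverse.
img = nib.load("phantom_dwi.nii.gz")
data, affine = img.get_fdata(), img.affine
bvals, bvecs = read_bvals_bvecs("phantom.bval", "phantom.bvec")
gtab = gradient_table(bvals, bvecs)
mask = data[..., 0] > 0  # crude phantom mask from the first b=0 volume

# Estimate the single-fiber response, fit CSD, and extract ODF peaks.
response, _ = auto_response_ssst(gtab, data, roi_radii=10, fa_thr=0.7)
csd_model = ConstrainedSphericalDeconvModel(gtab, response, sh_order=6)
peaks = peaks_from_model(csd_model, data, default_sphere,
                         relative_peak_threshold=0.5,
                         min_separation_angle=25, mask=mask)

# Deterministic local tracking seeded throughout the mask.
stopping = ThresholdStoppingCriterion(peaks.gfa, 0.25)
seeds = seeds_from_mask(mask, affine, density=1)
streamlines = Streamlines(
    LocalTracking(peaks, stopping, seeds, affine, step_size=0.5))
```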

- Test-retest analysis of structural connectome features using the HNU1 dataset (see the ICC sketch below).
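
As one way to operationalize the test-retest analysis, here is a hedged sketch of edge-wise ICC(2,1) (Shrout-Fleiss two-way random effects, absolute agreement, single measurement) over a subjects x sessions array. The toy data only mimics HNU1's 30-subject, 10-session layout; the real analysis would loop this over connectome edges or graph metrics.

```python
import numpy as np

def icc_2_1(measurements):
    """ICC(2,1) for an (n_subjects, n_sessions) array of one scalar feature."""
    y = np.asarray(measurements, dtype=float)
    n, k = y.shape
    grand = y.mean()
    subj_means = y.mean(axis=1)
    sess_means = y.mean(axis=0)

    msr = k * np.sum((subj_means - grand) ** 2) / (n - 1)   # between subjects
    msc = n * np.sum((sess_means - grand) ** 2) / (k - 1)   # between sessions
    sse = np.sum((y - subj_means[:, None] - sess_means[None, :] + grand) ** 2)
    mse = sse / ((n - 1) * (k - 1))                          # residual

    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Toy example: one edge weight measured in 30 subjects across 10 sessions.
rng = np.random.default_rng(0)
true_edge = rng.normal(1.0, 0.3, size=(30, 1))
observed = true_edge + rng.normal(0.0, 0.1, size=(30, 10))
print(f"ICC(2,1) = {icc_2_1(observed):.2f}")
```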

We'll probably have to reduce this search space to only the most essential factors to avoid a combinatorial explosion, but I figured we'd start by brainstorming as exhaustive a list as possible as our ideal, and go from there. We'll also need to figure out the best compute infrastructure for running these evaluations.
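
To make the combinatorial-explosion point concrete, a small sketch of enumerating a pruned factor grid with itertools; the factor names and levels are illustrative placeholders, not a proposed final design.

```python
from itertools import product

# Illustrative subset of the factors listed above, two levels each.
factors = {
    "b_scheme": ["single-shell", "multi-shell"],
    "noise": ["rician-low", "rician-high"],
    "motion": ["low", "high"],
    "eddy": ["low", "high"],
    "space": ["native-dwi", "t1w-coreg"],
}

cells = [dict(zip(factors, levels)) for levels in product(*factors.values())]
print(f"{len(cells)} phantom configurations from {len(factors)} factors")
# Every factor in the full list above at 2-3 levels each would push this well
# into the thousands of cells, hence the need to prune to the essentials.
```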

@jelleveraart @mattcieslak @arokem @oesteban

@oesteban added the effort: high, impact: high, and validation / evaluation labels on May 19, 2021