
WIP: Generate subject fiducials #6693

Closed · wants to merge 6 commits

Conversation

@jhouck (Contributor) commented Aug 21, 2019

What does this implement/fix?

This PR would add a little helper function get_mni_fiducials to estimate the locations of a subject's fiducial points by transforming the fsaverage fiducials to the subject's MRI coordinate space (RAS). It also adds an example plot_auto_alignment that applies the results.

While the helper function is not exactly earth-shattering on its own, if the fiducials are saved to the default location, they can be used in the mne coreg gui to perform an automated co-registration. In my data (and in the sample dataset) the results are usually well below the 2 mm coreg threshold. The parameters in the example probably need to be tweaked a little, but you get the gist.
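
For illustration, here's a minimal usage sketch of the proposed helper (the exact signature, return type, and the bem/<subject>-fiducials.fif default location are assumptions, not a final API):

import os.path as op
import mne
from mne.coreg import get_mni_fiducials  # proposed in this PR; signature assumed
from mne.io.constants import FIFF

subjects_dir = op.join(mne.datasets.sample.data_path(), 'subjects')
subject = 'sample'

# Estimate the subject's fiducials from the fsaverage ones (MRI surface RAS coords)
fids = get_mni_fiducials(subject, subjects_dir=subjects_dir)

# Saving to the default location lets the coreg GUI pick them up automatically
fname_fids = op.join(subjects_dir, subject, 'bem', '%s-fiducials.fif' % subject)
mne.io.write_fiducials(fname_fids, fids, coord_frame=FIFF.FIFFV_COORD_MRI)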

@larsoner (Member):

@jhouck do you want to iterate on this based on comments or would you rather I push commits?

For example, you have a hard-coded path in there... :)

codecov bot commented Aug 21, 2019

Codecov Report

Merging #6693 into master will increase coverage by <.01%.
The diff coverage is n/a.

@@            Coverage Diff             @@
##           master    #6693      +/-   ##
==========================================
+ Coverage   89.41%   89.42%   +<.01%     
==========================================
  Files         416      417       +1     
  Lines       75253    75406     +153     
  Branches    12355    12369      +14     
==========================================
+ Hits        67289    67430     +141     
- Misses       5148     5157       +9     
- Partials     2816     2819       +3

@jhouck (Contributor, Author) commented Aug 21, 2019

@larsoner I won't quit my day job just yet. ;) Either is fine; happy to iterate with comments, or if you've got the time, feel free to hop in and push commits.

@jhouck (Contributor, Author) commented Aug 21, 2019

For what it's worth, I also looked into getting more precise landmarks using images produced from the outer skin surface with tools like opencv & scikit-image. The nasion might be doable, since it seems to be a pretty commonly used keypoint. I stopped pursuing it when it started to look like the fsaverage fiducials would work.

lgtm-com bot commented Aug 22, 2019

This pull request introduces 2 alerts when merging fad3f12 into d3a7142 - view on LGTM.com

new alerts:

  • 2 for Unreachable code

@agramfort (Member) commented Aug 22, 2019 via email

@massich self-requested a review August 22, 2019 11:13
@jhouck (Contributor, Author) commented Aug 22, 2019

If the pipeline looks OK, we could put a new function in coreg that runs all the private functions from gui. The private functions all come from gui, but none of them is actually graphical, so it wouldn't necessarily make sense for them to live there. The example would then be maybe 5 lines of code. :)

One small related issue is that there seem to be three independent ways to calculate the coreg error: _get_point_distance, fit_matched_points (which is what fit_fiducials and fit_icp use), and dig_mri_distances. It might be good to make it possible to preserve the coreg error result from the ICP.
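
Of the three, dig_mri_distances is the public one; here's a small sketch of using it to sanity-check a coreg result, assuming sample-dataset paths:

import os.path as op
import numpy as np
import mne

data_path = mne.datasets.sample.data_path()
raw = mne.io.read_raw_fif(op.join(data_path, 'MEG', 'sample', 'sample_audvis_raw.fif'))
trans = op.join(data_path, 'MEG', 'sample', 'sample_audvis_raw-trans.fif')

# Distance (in meters) from each digitized head point to the MRI scalp surface
dists = mne.dig_mri_distances(raw.info, trans, 'sample',
                              subjects_dir=op.join(data_path, 'subjects'))
print('median head point -> scalp distance: %0.1f mm' % (np.median(dists) * 1000,))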

mne/coreg.py Outdated
fids_default, coord_frame = read_fiducials(fname_fids_fs)
xfm_tal = np.matrix(_read_fs_xfm(fname_tal)[0])

# Get Freesurfer vox2ras and vox2ras-tkr as matrices
Review comment (Member):

This comment just restates what we can already read in the code above.

Instead of this, it would be nice if you could give some rationale / background about the different coordinate systems. When reading the code, that's what I usually struggle to understand, as these are not so well documented.

@jhouck (Contributor, Author):

I think I used https://surfer.nmr.mgh.harvard.edu/fswiki/CoordinateSystems as my reference for the transformation stuff. There's also good content on this in the plot_source_alignment tutorial. Would a pointer to these be useful, or are you thinking that it would be more helpful to include a condensed version of some of that content?
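
If a condensed version would help, here's a sketch of the chain this PR uses, i.e. mapping a point given in MNI Talairach space (which is effectively what fsaverage surface RAS is) into a subject's Freesurfer surface RAS. The paths are placeholders, and the private _read_fs_xfm helper is the one already shown in the diff above (its home module has moved between versions):

import os.path as op
import numpy as np
import nibabel as nib
from mne.transforms import apply_trans
from mne.transforms import _read_fs_xfm  # private helper; module may differ by version

subjects_dir, subject = '/path/to/subjects_dir', 'some_subject'  # placeholders

t1 = nib.load(op.join(subjects_dir, subject, 'mri', 'T1.mgz'))
vox2ras = t1.header.get_vox2ras()          # voxel -> scanner RAS
vox2ras_tkr = t1.header.get_vox2ras_tkr()  # voxel -> surface RAS ("tkr")

# talairach.xfm maps subject scanner RAS -> MNI Talairach (4x4, mm)
fname_tal = op.join(subjects_dir, subject, 'mri', 'transforms', 'talairach.xfm')
xfm_tal = _read_fs_xfm(fname_tal)[0]

# MNI -> subject scanner RAS -> voxel -> subject surface RAS
mni_to_tkr = vox2ras_tkr @ np.linalg.inv(vox2ras) @ np.linalg.inv(xfm_tal)

fid_mni = np.array([0., 84., -43.])  # an illustrative MNI point (mm), not the shipped fsaverage value
fid_subject_tkr = apply_trans(mni_to_tkr, fid_mni)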

Review comment (Member):

We do need to overhaul the coord frame docs; there is an issue about it. It's particularly critical in two places: 1) vol source space, and 2) vol morphing. So I'd rather tackle it in a general overhaul than here.

But I will say that we ship the fs fids with MNE so you should just use that and not rely on them being in the subjects_dir that the user passes:

https://github.com/mne-tools/mne-python/tree/master/mne/data/fsaverage
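
A short sketch of reading those shipped fiducials straight from the installed package (path assembled from the module location, as suggested above):

import os.path as op
import mne
from mne.io import read_fiducials

fname_fids_fs = op.join(op.dirname(mne.__file__), 'data', 'fsaverage',
                        'fsaverage-fiducials.fif')
fids_default, coord_frame = read_fiducials(fname_fids_fs)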

@jhouck (Contributor, Author):

I can include a link to the transformation details for the curious, and leave out the specifics for now.

I think the fsaverage fids are currently coming from the testing data, not from subjects_dir (i.e., from .io.tests import data_dir as fsaverage_dir and fname_fids_fs = op.join(fsaverage_dir, 'fsaverage-fiducials.fif')); in the function, subjects_dir is only used to get the subject's transformations. It would definitely make more sense to get them from the location you provided rather than from the testing data.

@larsoner (Member):

Regarding "private" code -- I think your PR basically shows what we'd need to use for it. I thought about how we could make these things all functions, but a class is probably justified here. The coregistration process really does have a state associated with it (things to omit, current head<->MRI trans, etc.). So maybe it actually makes sense to just make the CoregModel a proper public class in MNE-Python.
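
Purely as a strawman, a skeleton of what such a stateful public class could look like (all names, signatures, and defaults below are hypothetical, not an agreed API):

class Coregistration(object):  # hypothetical name
    """Hold the state of one head<->MRI coregistration."""

    def __init__(self, info, subject, subjects_dir, fiducials='auto'):
        self._info = info
        self._subject = subject
        self._subjects_dir = subjects_dir
        self._fiducials = fiducials
        self._omitted = []   # head-shape points excluded from the fit
        self.trans = None    # current head<->MRI transform

    def fit_fiducials(self):
        """Do the initial alignment from the three fiducial pairs."""

    def omit_head_shape_points(self, distance):
        """Drop head-shape points farther than `distance` from the scalp."""

    def fit_icp(self, n_iterations=20, nasion_weight=5.):
        """Refine the transform with ICP."""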

@larsoner (Member):

(but regarding the code dup you point out, we should ideally reduce it somehow, though it's possible/likely they are all doing slightly different things, so maybe one function with some options is the way to go)

@jhouck (Contributor, Author) commented Aug 23, 2019

(but regarding the code dup you point out, we should ideally reduce it somehow, though it's possible/likely they are all doing slightly different things, so maybe one function with some options is the way to go)

Yes, they're all doing slightly different things. Maybe add an option for fit_matched_points to return the error,

err = np.sqrt(np.sum((est_pts - tgt_pts) ** 2, axis=1))

since that's the function fit_icp is using?

est = fit_matched_points(mri_pts, head_pts, scale=n_scale_params,
                         x0=est, out='params', weights=weights)

That would make QA a little easier.
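
In the meantime, a small sketch of recovering that same per-point error outside the fitter, using out='trans' and synthetic points (the option proposed above would make this step unnecessary):

import numpy as np
from mne.coreg import fit_matched_points
from mne.transforms import apply_trans

rng = np.random.RandomState(0)
tgt_pts = rng.randn(10, 3)                    # e.g. digitized head points
src_pts = tgt_pts + 0.01 * rng.randn(10, 3)   # e.g. corresponding MRI points

trans = fit_matched_points(src_pts, tgt_pts, out='trans')
err = np.sqrt(np.sum((apply_trans(trans, src_pts) - tgt_pts) ** 2, axis=1))
print('mean error: %0.4f' % err.mean())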

lgtm-com bot commented Aug 23, 2019

This pull request introduces 2 alerts when merging 6dee5a7 into 3769993 - view on LGTM.com

new alerts:

  • 2 for Unreachable code

lgtm-com bot commented Aug 23, 2019

This pull request introduces 1 alert when merging 1f27976 into 3769993 - view on LGTM.com

new alerts:

  • 1 for Unreachable code

@agramfort (Member):

I won't have much time to look at this PR these days. To have a real look I'd like to play with some more datasets, e.g. cam-can, to see how much we can advertise an automated approach for coreg. Maybe someone wants to beat me to it...

@jhouck (Contributor, Author) commented Aug 26, 2019

@agramfort That was my first testing dataset, back before the fiducials thing was working well. If I'd noticed sooner that the coreg files had been released, I would have added a section to the reliability paper. Which is a long way of saying that this will not take me very long to do...

@jhouck (Contributor, Author) commented Aug 27, 2019

For cam-can, the Neuroimage paper earlier this year reported the coreg error as "on the order of 0.5 cm." Using the transformation files from https://github.com/tbardouille/camcan_coreg, I'm getting a mean coreg error of 2.358 mm (SD=0.523). The mean coreg error from the auto-coreg is 1.776 mm (SD=0.514). This is unexpected, as in my other sample manual coregistration performed a little better than automatic. There might be some differences in the surfaces, since everyone is running Freesurfer independently with the camcan MRIs.

In the cam-can sample, the auto coreg improves a bit if I allow the parameters to vary by subject (only 10% of subjects have been run that way since it's a somewhat slower process, but so far it gives on average about a 0.3 mm further reduction in error relative to the value reported above). https://gist.github.com/jhouck/c3a2a3ceae1c5a27af6f568f6cd9641c

So far, the auto coreg seems to be working pretty well. I can update again tomorrow after the other 500 subjects have run through the updated version.

@jhouck (Contributor, Author) commented Aug 28, 2019

@agramfort In the new, improved version from the above gist, mean auto-coreg error for cam-can (computed by _get_point_distance, because that function takes into account the dropped outlier points) is 1.625 mm (SD 0.588). This remains significantly better than what I get for the manual coreg files described above.

The coreg error reported by dig_mri_distances was on average about 0.72 mm greater than what _get_point_distance returned (p < .001). From perusing a few of these, the higher coreg error seemed to be associated with subjects whose de-faced MRI did not include the nasion or for whom the HPI receiver may have moved (e.g., they had a set of at least 50 points close to the scalp surface, and a 'halo' of much more distant points).

In terms of the fit parameters, for the initial ICP fit after aligning fiducials and before dropping outliers, most subjects (53.3%) had their best fit with 5 or fewer iterations; 17.1% did best with 20 or more. For the final ICP fit, the vast majority of subjects (88.6%) did best with a nasion weight of 5. At that same fit, most subjects (63.0%) achieved their best fit with 10 iterations; for the remaining subjects it was 20 (12.4%), 30 (9.0%), 40 (7.0%), or 50 (8.7%) iterations.

For the auto-coreg, coreg error was weakly correlated with the percentage of head points retained in the final ICP fit (r = -0.244, p < .001). This effect was stronger for the manual cam-can coreg (r = -0.540, p < .001), although again these are the points not dropped as outliers by omit_hsp_points, which may or may not have any relationship with what would have been dropped or retained by the people doing the manual coreg.

Happy to attach the output (tab-delimited text) if there's a way to do that and anyone wants to see it.

@agramfort (Member) commented Sep 1, 2019 via email

@larsoner (Member) commented Sep 1, 2019

See also #6728, but even with this I agree visualization should be optional.

@larsoner changed the title from "Generate subject fiducials" to "WIP: Generate subject fiducials" Sep 6, 2019
@massich (Contributor) commented Sep 9, 2019

#6733 closes this one? @larsoner

@massich closed this Sep 9, 2019
@massich reopened this Sep 9, 2019
@larsoner (Member) commented Sep 9, 2019

This has an example we should adapt somehow.

@jhouck (Contributor, Author) commented Oct 15, 2019

Thoughts on what to do with the example?

@agramfort (Member):

This is not a simple issue. The example makes public a lot of code that was never meant to be public. Suddenly we would need to maintain the API of this code, while it was just internal to the GUI. This API was never discussed in PRs, as it was just GUI code.

What I would do is propose a public API that does not depend on traits / VTK to do the job. It means some work to decouple the GUI code from the model.

@larsoner (Member) commented Mar 4, 2020

Closing, as this example is noted in the roadmap.rst of #6262, which should be merged before 0.20.
