-
Notifications
You must be signed in to change notification settings - Fork 27
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
Draft MEDIC dynamic distortion correction method (second attempt) #438
base: master
Are you sure you want to change the base?
Conversation
I cannot figure out how to get the FieldmapWrangler to find MEDIC-style setups (i.e., complex-valued, multi-echo BOLD scans). 😕 |
@tsalo Around L508 in your current medic_entities = {**base_entities, **{'part': 'mag'}}
has_magnitude = tuple()
with suppress(ValueError):
has_magnitude = layout.get(
suffix='bold',
**medic_entities,
)
for mag_img in has_magnitude:
phase_img = layout.get(**{**mag_img.get_entities(), **{'part': 'phase'}})
if not phase_img:
continue
phase_img = phase_img[
try:
e = fm.FieldmapEstimation(
[
fm.FieldmapFile(mag_img.path, metadata=mag_img.get_metadata()),
fm.FieldmapFile(phase_img.path, metadata=phase_img.get_metadata()),
]
)
except (ValueError, TypeError) as err:
_log_debug_estimator_fail(
logger, "potential MEDIC fieldmap", [mag_img, phase_img], layout.root, str(err)
)
else:
_log_debug_estimation(logger, e, layout.root)
estimators.append(e) |
Codecov ReportAttention: Patch coverage is
Additional details and impacted files@@ Coverage Diff @@
## master #438 +/- ##
==========================================
- Coverage 83.75% 74.95% -8.80%
==========================================
Files 32 33 +1
Lines 2831 2943 +112
Branches 381 294 -87
==========================================
- Hits 2371 2206 -165
- Misses 390 679 +289
+ Partials 70 58 -12 ☔ View full report in Codecov by Sentry. |
Thanks @effigies! |
Co-authored-by: Chris Markiewicz <[email protected]>
I created a MEDIC-compliant test dataset (dsD), but it seems like the tests in test_wrangler use skeletons. Should I drop the test dataset and just generate a skeleton in the test file? |
import sdcflows.config as sc | ||
|
||
# Reload is necessary to clean-up the layout config between parameterized runs | ||
reload(sc) | ||
|
||
path = (tmp_path / test_id).absolute() | ||
generate_bids_skeleton(path, config) | ||
with pytest.raises(SystemExit) as wrapped_exit: | ||
# This was set to raise a SystemExit, but was only raising an ImageFileError | ||
with pytest.raises(ImageFileError) as wrapped_exit: |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
This was probably expecting valid headers in the images.
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
I reverted the changes I made to this test since other PRs don't seem to have an issue. I'm not sure why the problem is happening in this PR only, but I've tried to remove any extraneous changes in case those were causing the problem.
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
All I can think is that maybe adding Julia
and warpkit
to the dependencies is affecting which versions of other dependencies are installed (e.g., a newer version of nibabel?).
@effigies I finally got the tests passing. Do you know what my next steps should be? |
def test_cli_finder_wrapper(tmp_path, capsys, test_id, config, estimator_id): | ||
def _test_cli_finder_wrapper(tmp_path, capsys, test_id, config, estimator_id): |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Renamed to skip the test since it was failing.
@effigies I was thinking I could create a reduced version of one of the runs in https://openneuro.org/datasets/ds005250 (maybe like 20 volumes?), but I don't have access to |
I pushed a single run with 50 volumes to https://gin.g-node.org/tsalo/ds005250-sdcflows. We can transfer ownership or make a fork in the nipreps-data organization. The dataset is 724 MB. Is that reasonable for a test dataset? I can reduce it further if necessary. EDIT: I dropped them down to 10 volumes and also created a duplicate second session with different metadata. |
Invited you to nipreps-data. |
Thanks! I'll create a GitHub repo for the datalad dataset and incorporate that into the tests. |
# ds005250 | ||
datalad install -r https://gin.g-node.org/tsalo/ds005250-sdcflows.git | ||
datalad update -r --merge -d ds005250-sdcflows/ | ||
datalad get -r -J 2 -d ds005250-sdcflows/ ds005250-sdcflows/sub-04/ses-2/ |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
I set up ses-2 with IntendedFor and ses-1 with B0Field fields. The B0Fields aren't working.
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Also, it looks like BIDSLayout.get_B0FieldIdentifiers
doesn't work at all.
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
are your B0Field* lists? see bids-standard/pybids#684
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
They are! I didn't realize that was a known issue. I can modify my test dataset to just have strings for B0FieldIdentifier.
Closes #36. An alternative to #435 that installs MEDIC as a dependency instead of implementing the whole tool as a series of interfaces and workflows.
Changes proposed: