ENH: PET uptake model #112
Conversation
Force-pushed from 8c8db8c to 75bb3ce.
Making progress. @mnoergaard, I'm using the first subject of https://doi.org/10.18112/openneuro.ds004230.v2.3.1 - I don't know if it is a good example.
Okay, I have managed to "simulate" a dataset with this model. The only remaining problems:
Okay, since the data split is not yet outsourced from the DWI object, this model can be tested this way:

```python
import json
from pathlib import Path

from eddymotion import model
from eddymotion.data import pet

datapath = Path("/data/datasets/ds004230/")

# Load the data and the BIDS sidecar with the frame timings
metadata = json.loads(
    (datapath / "sub-PS19/ses-baselinebrain/pet/sub-PS19_ses-baselinebrain_rec-DynTOF_pet.json").read_text()
)
data = pet.load(
    datapath / "sub-PS19/ses-baselinebrain/pet/sub-PS19_ses-baselinebrain_rec-DynTOF_pet.nii.gz",
    frame_time=metadata["FrameTimesStart"],
    frame_duration=metadata["FrameDuration"],
)

# Initialize and fit the model
petmodel = model.PETModel(timepoints=data.frame_time, xlim=data.total_duration)
petmodel.fit(data.dataobj, n_jobs=16)

# Predict the fifth frame
predicted = petmodel.predict(data.frame_time[4])
```
Force-pushed from 7f3172c to f07a04c.
Hi @oesteban! This is great - thanks for pushing this forward! ds004230 should be an ideal candidate, with "only" a size of about 160 MB per PET file. The largest PET files I have come across are around 1 GB (high-resolution scanners).

For ds004230, you can also find the current motion correction results for comparison in /derivatives/petsurfer/logs/. The generated confounds file is located at /derivatives/petsurfer/subXX/sesXX/pet/subXX-sesXX_desc-confounds_timeseries.tsv. Furthermore, motion-free PET data are also generated/synthesized and can be found at /derivatives/petsurfer/subXX/sesXX/pet/nopvc/yhat.nii.gz.

This code has been used to perform motion correction for these data (https://github.com/mnoergaard/hmcpet/blob/master/init_pet_hmc_wf.py); it includes both a selection of frames (only frames after 2 minutes, because there is no signal before this) and a smoothing step (10 mm) to increase SNR. Could these ideally be included in the eddymotion workflow? Let me know if we should touch base over Zoom soon.
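For reference, a minimal sketch of that frame selection and smoothing, assuming a 4D NIfTI and the BIDS `FrameTimesStart` metadata (hypothetical paths; this is not eddymotion API):

```python
import json
import numpy as np
import nibabel as nb
from scipy.ndimage import gaussian_filter

# Hypothetical file names; frame timings come from the BIDS sidecar.
img = nb.load("sub-PS19_ses-baselinebrain_rec-DynTOF_pet.nii.gz")
metadata = json.loads(open("sub-PS19_ses-baselinebrain_rec-DynTOF_pet.json").read())
frame_start = np.array(metadata["FrameTimesStart"])  # in seconds

# 1. Frame selection: keep only frames starting at/after the 2-minute mark.
keep = frame_start >= 120.0
data = np.asanyarray(img.dataobj)[..., keep]

# 2. Smoothing: 10 mm FWHM Gaussian applied frame by frame, to increase SNR.
#    sigma (voxels) = FWHM / (2 * sqrt(2 * ln 2)) / voxel size (mm)
sigma_vox = 10.0 / 2.3548 / np.array(img.header.get_zooms()[:3])
smoothed = np.stack(
    [gaussian_filter(data[..., t], sigma=sigma_vox) for t in range(data.shape[-1])],
    axis=-1,
)
```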
Hi Martin - responding quickly from my phone.

One question about the bigger size of datasets: I assume the larger size corresponds mostly to a larger number of voxels in each volume, rather than substantially more frames. If it were the latter, I don't think that would be a worry, but it seems to be the former, so the model fit will take longer. One possibility would be to fit a single uptake function on data from high-SNR voxels, and one scalar parameter of that curve for each voxel. I believe a coarse brain mask would be necessary to (1) fit fewer voxels and (2) avoid the structured background, which ANTs will love to align. I think a quick call next week would be useful if you're not on holidays already. I'll follow up via email.
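To make the idea concrete, a rough sketch (all names are illustrative, and the top-decile "high SNR" criterion is just a placeholder assumption):

```python
import numpy as np

# data: 4D array (x, y, z, t); mask: 3D boolean coarse brain mask.
def fit_scaled_uptake(data, mask):
    timeseries = data[mask]  # (n_voxels, n_frames)
    # Pick "high-SNR" voxels, e.g., the top decile by mean signal (an assumption).
    mean_signal = timeseries.mean(axis=1)
    high_snr = timeseries[mean_signal >= np.percentile(mean_signal, 90)]
    # One shared uptake curve: the average over high-SNR voxels.
    curve = high_snr.mean(axis=0)  # (n_frames,)
    # One scalar per voxel: least-squares projection onto the shared curve.
    scale = timeseries @ curve / (curve @ curve)  # (n_voxels,)
    return curve, scale

# The predicted signal for voxel i at frame t is then scale[i] * curve[t].
```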
Force-pushed from f07a04c to 0255249.
Thanks for pushing these efforts forward @oesteban - I think this looks pretty good so far!
Resolves: #66.
Co-authored-by: Ariel Rokem <[email protected]>
Force-pushed from 0086345 to 4361da7.
Co-authored-by: Martin Norgaard <[email protected]>
Force-pushed from 4361da7 to c046fba.
I think I will merge this as a base, and then we can address the frame selection by allowing weights (and fitting with WLS).
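For the record, the weighted fit could look roughly like this (a NumPy sketch, not committed API; the zero weights for early frames mirror the 2-minute cutoff discussed above):

```python
import numpy as np

# B: (n_frames, n_coeff) B-spline design matrix evaluated at the frame times.
# y: (n_frames,) one voxel's time series.
# w: (n_frames,) per-frame weights, e.g., w = (frame_start >= 120.0).astype(float)
def fit_wls(B, y, w):
    # Weighted least squares: solve (B^T W B) c = B^T W y for the coefficients c.
    BtW = B.T * w  # scales each frame's column of B^T by its weight
    coeff, *_ = np.linalg.lstsq(BtW @ B, BtW @ y, rcond=None)
    return coeff
```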
I still need to test on some data (ha!), but these are the barebones of a PET uptake model.
The idea is a voxelwise fit of a low number of B-Splines (i.e., a smooth function) on the data passed into .fit(), which can be the full dataset (if initialized with FullPETModel) or all frames but the one being aligned (leave-one-volume-out; see the sketch below). I will try to test on the dataset Martin shared over this week. If that works, I think we could draft some abstract for OHBM with this and some further validation, WDYT?
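Schematically, the leave-one-volume-out loop would look like this (a sketch reusing the `PETModel` calls from the snippet above; how the split is expressed in the final API is still open):

```python
import numpy as np

# `data` is the PET dataset loaded as in the earlier snippet.
n_frames = len(data.frame_time)
for t in range(n_frames):
    train = np.setdiff1d(np.arange(n_frames), [t])
    petmodel = model.PETModel(
        timepoints=np.asarray(data.frame_time)[train], xlim=data.total_duration
    )
    petmodel.fit(data.dataobj[..., train], n_jobs=16)
    # The prediction at the held-out frame time becomes the registration target.
    predicted = petmodel.predict(data.frame_time[t])
```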
cc/ @effigies and @mgxd, who will love to see B-Splines in one more place (it's ironic, but their feedback is always very much appreciated).
I'm attaching @mnoergaard and @arokem as reviewers, but I don't expect feedback until I have tested that this really works on data and removed the "draft" marking from the PR. Feedback before that will of course be welcome nonetheless.
How to use (edited):
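This is the same recipe as the test snippet earlier in the thread:

```python
import json
from pathlib import Path

from eddymotion import model
from eddymotion.data import pet

datapath = Path("/data/datasets/ds004230/")

# Load the data and the BIDS sidecar with the frame timings
metadata = json.loads(
    (datapath / "sub-PS19/ses-baselinebrain/pet/sub-PS19_ses-baselinebrain_rec-DynTOF_pet.json").read_text()
)
data = pet.load(
    datapath / "sub-PS19/ses-baselinebrain/pet/sub-PS19_ses-baselinebrain_rec-DynTOF_pet.nii.gz",
    frame_time=metadata["FrameTimesStart"],
    frame_duration=metadata["FrameDuration"],
)

# Initialize and fit the model
petmodel = model.PETModel(timepoints=data.frame_time, xlim=data.total_duration)
petmodel.fit(data.dataobj, n_jobs=16)

# Predict the fifth frame
predicted = petmodel.predict(data.frame_time[4])
```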