ENH: PET uptake model #112

Merged: 9 commits into main from enh/pet-model-2 on Dec 16, 2022

Conversation

@oesteban (Member) commented Dec 10, 2022

I still need to test on some data (ha!), but these are the barebones of a PET uptake model.

The idea is a voxelwise fit of a small number of B-Splines (i.e., a smooth function) to the data passed into .fit(), which can be the full dataset (if initialized with FullPETModel) or all frames but the one being aligned (leave-one-volume-out).

I will try to test on the dataset Martin shared earlier this week. If that works, I think we could draft an abstract for OHBM with this and some further validation, WDYT?

cc/ @effigies and @mgxd, who will love to see B-Splines in one more place (I'm being ironic, but their feedback is always very much appreciated).

I'm adding @mnoergaard and @arokem as reviewers, but I don't expect feedback until I have tested that this really works on data and removed the "draft" marking from the PR. Feedback before that is of course welcome nonetheless.

How to use (edited):

from eddymotion import estimator
from eddymotion.data import pet

data = pet.load("path/to/pet.nii.gz", frame_duration=[20, 20, 20, 60, 60, 60, 120, 120, 120, 300, 300, 600, 600, 600, 600, 600, 600, 600, 600])

# Run one initialization step fitting on the full dataset and a second step
# with a leave-one-volume-out fit/predict loop.
estimator.EddyMotionEstimator.fit(data, models=("FullPET", "PET"), n_jobs=4)

@oesteban force-pushed the enh/pet-model-2 branch 2 times, most recently from 8c8db8c to 75bb3ce on December 10, 2022 15:45
@oesteban linked an issue on Dec 10, 2022 that may be closed by this pull request
@oesteban (Member, Author) commented:

Making progress. @mnoergaard, I'm using the first subject of https://doi.org/10.18112/openneuro.ds004230.v2.3.1 - I don't know if it is a good example.

@oesteban (Member, Author) commented:

Okay, I have managed to "simulate" a dataset with this model. The only remaining problems:

  • The new PET data structure will need the leave-one-volume-out splitter to be implemented as a function that operates on the data object (see the sketch after this list).
  • Fitting the full volume is very slow. Most likely, we don't want to fit one model per voxel.
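
A splitter along those lines could be as simple as masking the frame axis. This is only a sketch of the idea; the lovo_split name is hypothetical, and the dataobj/frame_time attributes are assumed from the snippets in this thread rather than a settled API:

import numpy as np

def lovo_split(dataset, index):
    """Leave-one-volume-out split of a 4D PET dataset."""
    nframes = dataset.dataobj.shape[-1]
    mask = np.ones(nframes, dtype=bool)
    mask[index] = False

    frame_times = np.asarray(dataset.frame_time)
    train = (dataset.dataobj[..., mask], frame_times[mask])
    test = (dataset.dataobj[..., index], frame_times[index])
    return train, test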

@arokem (Collaborator) left a comment:

Looks promising! A couple of random thoughts. In particular, cross-check with #98 for work that @teresamg has been doing to work out memory consumption in the DWI case.

Review comments (all resolved): src/eddymotion/data/pet.py (2), src/eddymotion/model.py (4).
@oesteban (Member, Author) commented:

Okay, since the data split has not yet been factored out of the DWI object, this model can be tested this way:

import json
from pathlib import Path
from eddymotion import model
from eddymotion.data import pet

datapath = Path("/data/datasets/ds004230/")

# Load data
metadata = json.loads((datapath / "sub-PS19/ses-baselinebrain/pet/sub-PS19_ses-baselinebrain_rec-DynTOF_pet.json").read_text())
data = pet.load(datapath / "sub-PS19/ses-baselinebrain/pet/sub-PS19_ses-baselinebrain_rec-DynTOF_pet.nii.gz", frame_time=metadata["FrameTimesStart"], frame_duration=metadata["FrameDuration"])

# Initialize and fit model
petmodel = model.PETModel(timepoints=data.frame_time, xlim=data.total_duration)
petmodel.fit(data.dataobj, n_jobs=16)

# Predict fifth frame
predicted = petmodel.predict(data.frame_time[4])

@mnoergaard (Collaborator) commented:

> Making progress. @mnoergaard, I'm using the first subject of https://doi.org/10.18112/openneuro.ds004230.v2.3.1 - I don't know if it is a good example.

Hi @oesteban! This is great - thanks for pushing this forward! ds004230 should be an ideal candidate with "only" a size of about 160MB per PET file. The largest PET files I have come across are around 1GB (high-resolution scanners). For ds004230, you can also find the current motion correction results for comparison in /derivatives/petsurfer/logs/. The generated confounds file is located in /derivatives/petsurfer/subXX/sesXX/pet/subXX-sesXX_desc-confounds_timeseries.tsv. Furthermore, motion-free PET data has also been generated/synthesized and can be found in /derivatives/petsurfer/subXX/sesXX/pet/nopvc/yhat.nii.gz.

This code has been used to perform motion correction for these data (https://github.com/mnoergaard/hmcpet/blob/master/init_pet_hmc_wf.py); it includes both a selection of frames (only frames after 2 minutes, because there is no signal before this) and a smoothing step (10 mm) to increase SNR. Could these ideally be included in the eddymotion workflow?
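
For reference, a minimal sketch of those two preprocessing steps (keeping only frames that start after 2 minutes, then 10 mm FWHM smoothing) could look like this with plain nibabel/scipy; the filenames and the metadata dict are placeholders, and this is not the hmcpet implementation:

import json
from pathlib import Path
import numpy as np
import nibabel as nb
from scipy.ndimage import gaussian_filter

img = nb.load("sub-XX_ses-XX_pet.nii.gz")
data = np.asarray(img.dataobj, dtype="float32")
metadata = json.loads(Path("sub-XX_ses-XX_pet.json").read_text())
frame_times = np.asarray(metadata["FrameTimesStart"], dtype="float32")

# Keep only frames starting after 2 minutes
keep = frame_times >= 120
data, frame_times = data[..., keep], frame_times[keep]

# Smooth each retained frame with a 10 mm FWHM Gaussian kernel (in voxel units)
sigma_vox = (10.0 / np.sqrt(8 * np.log(2))) / np.asarray(img.header.get_zooms()[:3])
smoothed = np.stack(
    [gaussian_filter(data[..., t], sigma=sigma_vox) for t in range(data.shape[-1])],
    axis=-1,
)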

Let me know if we should touch base over zoom soon.

@oesteban (Member, Author) commented:

Hi Martin - responding quickly from my phone.

One question about the larger datasets: I assume the larger size comes mostly from a larger number of voxels in each volume, rather than from substantially more frames. If it were the latter, I don't think there would be anything to worry about, but it seems to be the former, so the model fit will take longer.

One possibility would be to fit a single uptake function to data from high-SNR voxels, plus one scaling parameter of that curve for each voxel.
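
For the record, a minimal numpy sketch of that idea; the high-SNR selection below is just a placeholder (in the actual model the global curve would come from the B-Spline fit rather than a plain average):

import numpy as np

# data: (nvox, ntime) array of time-activity curves within a coarse mask
# 1. One global (smooth) uptake curve, fitted once on high-SNR voxels
high_snr = data.mean(axis=1) > np.percentile(data.mean(axis=1), 90)
global_curve = data[high_snr].mean(axis=0)                     # (ntime,)

# 2. One multiplicative scale per voxel (closed-form least squares)
scales = data @ global_curve / (global_curve @ global_curve)   # (nvox,)
predicted = np.outer(scales, global_curve)                     # (nvox, ntime)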

I believe a coarse brain mask would be necessary to (1) fit fewer voxels and (2) avoid the structured background which ANTs will love to align.
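
A coarse mask along those lines could be as crude as thresholding the time-averaged PET; this is only a sketch, and the 80th-percentile threshold is an arbitrary placeholder:

import numpy as np
from scipy.ndimage import binary_dilation, binary_fill_holes

# data: 4D PET array; average over time and keep the brightest voxels
mean_pet = data.mean(axis=-1)
mask = mean_pet > np.percentile(mean_pet, 80)
mask = binary_fill_holes(binary_dilation(mask, iterations=2))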

I think a quick call next week would be useful if you're not on holidays already. I'll follow up via email.

@mnoergaard (Collaborator) left a comment:

Thanks for pushing these efforts forward @oesteban - I think this looks pretty good so far!

@oesteban force-pushed the enh/pet-model-2 branch 2 times, most recently from 0086345 to 4361da7 on December 16, 2022 08:45
Co-authored-by: Martin Norgaard <[email protected]>
@oesteban (Member, Author) commented:

I think I will merge this as a base, and then we can address the frame selection by allowing per-frame weights (and fitting with weighted least squares, WLS).
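
A possible shape for that weighting, once the model accepts per-frame weights; X, y, and frame_times below are stand-ins for the model's internals (the B-Spline design matrix, one voxel's time-activity curve, and the frame start times):

import numpy as np

# Zero (or small) weights for the early, low-signal frames instead of dropping them
w = (frame_times >= 120).astype(float)

# Weighted least squares via the usual sqrt-weight trick
sw = np.sqrt(w)
beta, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)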

@oesteban marked this pull request as ready for review on December 16, 2022 15:59
@oesteban merged commit 962ff19 into main on Dec 16, 2022
@oesteban deleted the enh/pet-model-2 branch on December 16, 2022 16:00
Successfully merging this pull request may close this issue: PET uptake model.
3 participants