PET uptake model #66
Comments
As discussed on the prep call, this can be more general than PET: for starters, this could be a time-series interpolation model that is agnostic to the specifics of the time series, assuming only that neighboring time points are similar to each other. It could take a variety of forms: linear interpolation, Gaussian processes (GP), spline interpolation. I suggest starting with linear interpolation, because it is fast and simple. At a second (?) stage, we could go more PET-specific, by using the PET-surfer model as a basis for volume and/or slice predictions. Both real and simulated data would be good for benchmarking and testing.
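To make the starting point concrete, here is a minimal sketch of leave-one-out linear interpolation on a single voxel's time-activity curve. The function name `loo_predict_linear` is illustrative, not an existing eddymotion API, and the curve values are synthetic:

```python
import numpy as np

def loo_predict_linear(times, signal, i):
    """Predict the signal at left-out time point `i` by linear
    interpolation over the remaining points (illustrative sketch)."""
    keep = np.arange(len(times)) != i
    return np.interp(times[i], times[keep], signal[keep])

# Synthetic time-activity curve: neighboring points are similar
t = np.array([0.0, 10.0, 20.0, 40.0, 70.0])
s = np.array([0.0, 5.0, 8.0, 9.0, 9.5])
pred = loo_predict_linear(t, s, 2)  # interpolates between t=10 and t=40
```

A GP or spline would slot in at the same place as `np.interp`, which is one reason to start with the simplest interpolant and keep the leave-one-out scaffolding fixed.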
When generalizing to time series, a good place to start would be to think about …
Thanks, Ariel! Post-meeting thoughts: I have to think about this a little more, but an ideal model for PET needs to take into account the time between volumes (frames), as acquisitions often follow a sequence of short frames in the beginning and longer frames at the end (an example is [20, 20, 20, 60, 60, 60, 120, 120, 120, 300, 300, 600, 600, 600, 600, 600, 600, 600, 600] seconds). In this regard, the tracer used will be important (11C, 18F), as it provides information about the decay/half-life, and thereby what type of decay/signal (and SNR) is expected when leaving out a volume and trying to predict it from the remaining frames. This will be particularly important for the late frames. You suggested Gaussian processes, Ariel, and in that case I assume we should be able to add weights to the model fitting to account for the time between frames and signal decay.
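As a sketch of what such weighting could look like, the snippet below derives frame mid-times from the example durations above and computes decay-aware weights. The weighting rule (duration times remaining activity) is only one plausible SNR proxy, not an established eddymotion formula; the 11C half-life of roughly 20.4 minutes is a physical constant:

```python
import numpy as np

# Frame durations (s) from the example acquisition scheme above
durations = np.array([20, 20, 20, 60, 60, 60, 120, 120, 120,
                      300, 300, 600, 600, 600, 600, 600, 600, 600, 600],
                     dtype=float)
starts = np.concatenate([[0.0], np.cumsum(durations)[:-1]])
mid_times = starts + durations / 2.0  # frame mid-times (s)

# Decay constant for an 11C tracer (half-life ~20.4 min); 18F would be ~109.8 min
half_life_s = 20.4 * 60.0
lam = np.log(2.0) / half_life_s

# Heuristic weights: longer frames and less-decayed signal -> more reliable
weights = durations * np.exp(-lam * mid_times)
weights /= weights.sum()
```

These weights could then be passed to a weighted GP or weighted least-squares fit, down-weighting the late, heavily decayed frames relative to equally long earlier ones.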
Create one model that uses eddymotion's leave-one-volume-out framework to generate a motion-less target by interpolating from the "train" set of volumes, and then register the left-out volume to it.
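The leave-one-volume-out loop might be sketched as below. This is a hypothetical outline, not the eddymotion implementation: `lovo_targets` is an invented name, the interpolation is plain per-voxel `np.interp` (which clamps at the edges), and the registration step is left as a comment since it would be delegated to the actual framework:

```python
import numpy as np

def lovo_targets(volumes, times):
    """For each frame, build a motion-less target by leave-one-out
    linear interpolation over the remaining frames (hypothetical sketch).

    volumes: array of shape (n_frames, ...spatial...)
    times:   1-D array of frame mid-times
    """
    n = len(times)
    targets = np.empty_like(volumes, dtype=float)
    for i in range(n):
        keep = np.arange(n) != i
        flat = volumes[keep].reshape(n - 1, -1)
        # Per-voxel linear interpolation at the left-out time point;
        # np.interp clamps to the nearest kept frame at the edges
        targets[i] = np.array(
            [np.interp(times[i], times[keep], flat[:, v])
             for v in range(flat.shape[1])]
        ).reshape(volumes.shape[1:])
        # In the real pipeline, volumes[i] would now be registered to targets[i]
    return targets
```

Swapping the interpolant (GP, spline, PET-surfer-based prediction) only changes the inner prediction step; the leave-one-volume-out scaffolding stays the same.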
cc/ @mnoergaard