GSoC, ENH: Improve Time Frequency Analysis in Source Space #6290
I asked @DiGyt to start a conversation here to get more opinions. We already have a few SourceEstimate classes, such as VolSourceEstimate, MixedSourceEstimate, VectorSourceEstimate, and VolVectorSourceEstimate, and I have doubts about adding even more of these just to have time-frequency content in source space. Does anyone have an opinion on this?
Just to instigate the discussion a bit, I have some questions: can you post a usage example? What is the "kernel trick" in the proposal of the Google Doc? Is it like a LinearOperator? Do you foresee the apply_inverse method also accepting TFR objects?
Hey @jasmainak, thanks for the reply!

```python
import numpy as np

import mne
from mne.datasets import sample
from mne.time_frequency import tfr_multitaper, tfr_stockwell
from mne.minimum_norm import read_inverse_operator, apply_inverse

data_path = sample.data_path()
ave_fname = data_path + '/MEG/sample/sample_audvis-ave.fif'
inv_fname = (data_path +
             '/MEG/sample/sample_audvis-meg-eeg-oct-6-meg-eeg-inv.fif')

evoked = mne.read_evokeds(ave_fname, baseline=(None, 0), condition=0)
inv = read_inverse_operator(inv_fname)
stc = apply_inverse(evoked, inv, pick_ori="vector")

freqs = np.arange(5., 100., 3.)
n_cycles = freqs / 2.
t_bandwidth = 8

src_multi = tfr_multitaper(stc, freqs=freqs, n_cycles=n_cycles,
                           time_bandwidth=t_bandwidth)
src_stockw = tfr_stockwell(stc, fmin=5, fmax=100)
```

and then go on to work with the resulting objects. As said, the idea is just making everything from `mne.time_frequency` usable on source estimates. Reading 'kernel trick' data into the new class is another part of the proposal.
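To make the "kernel trick" question above concrete: for the linear part of a TFR (the complex convolution, before taking power), the transform commutes with the imaging kernel, so one can transform the small sensor-space array and project afterwards. A minimal NumPy sketch (all names and shapes here are invented for illustration, this is not MNE code):

```python
import numpy as np

rng = np.random.default_rng(0)
n_src, n_sens, n_times = 20, 5, 200

K = rng.standard_normal((n_src, n_sens))      # imaging kernel (sources x sensors)
sensor_data = rng.standard_normal((n_sens, n_times))
wavelet = np.exp(2j * np.pi * 10 * np.arange(50) / 100)  # toy complex wavelet

def tfr(data, w):
    # complex convolution along time, one row per channel/source
    return np.array([np.convolve(row, w, mode='same') for row in data])

# expensive route: project to source space first, then transform every source
direct = tfr(K @ sensor_data, wavelet)

# kernel trick: transform the small sensor array, then project
tricked = K @ tfr(sensor_data, wavelet)

assert np.allclose(direct, tricked)  # linearity makes the two routes equal
# power (|.|**2) is nonlinear, so it must be taken *after* the projection
```

This is why the proposal can defer the expensive source-space computation: only the nonlinear step (taking power) forces the full-size array to materialize.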
Hmm, okay... wouldn't it be prohibitively time-consuming to do TFR on the sources? Could you do it in sensor space and then do apply_inverse? What about epochs? How will you handle them? I know they are really slow to compute even in sensor space.
Also, how does the 5D visualization work? Is it going to live as a method of some class, or just a function? Will it work both for surface and volume source estimates?
Well yes, although I'd say that (in my environment) it's not prohibitively time-consuming, but it's surely the slower solution. Handling of epochs is another open question. Since multiple stcs are also stored as lists of stcs, and TFR data multiplies the size of these stcs, I would say that giving out 'epochs' of SourceTFR as lists of SourceTFR would be the primary solution, while a dedicated epochs container could come later.
I'd say the best thing for surface data is certainly to go the same way as the existing stc plotting.
Maybe a simpler thing to start with would be to change a bit how stcs are used to store and display frequency-resolved source-space representations (that is, before addressing time-frequency). Currently it is a bit hacky, as frequencies end up being stored along the stc's time axis.
This sounds like we might end up recreating Xarray :)
@larsoner I considered this option too: only two classes, SourceEstimate | VolSourceEstimate, each with a dims attribute, a tuple that can contain 'vert', 'freq', 'time', 'orientation'.
If we're going to get rid of the Vector/non-vector and time/freq class-based distinctions, we might as well get rid of the surface/volume/mixed class-based distinctions, too. We don't have great support for mixed source spaces, and trying to have a general class could help fix this problem. I propose we come up with a new class for any type of source-level data we want to work with. I'd call it SourceData, because even the idea of it being an "estimate" is a misnomer (e.g., when simulating you're actually giving ground truth). Here is a first pass at a draft spec for what a SourceData instance could look like.
For methods, I think this could be made to work for all variants of source estimates. Incidentally, this could simplify other parts of the codebase, too.
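The core idea above (one container holding a data array plus a `dims` tuple naming each axis) could be sketched roughly as below. This is a rough guess at a minimal shape, not the actual draft spec, and all names are hypothetical:

```python
import numpy as np

class SourceData:
    """Rough sketch of the proposed single source-level container.

    ``dims`` names each axis of ``data``; the allowed names follow the
    tuple suggested above ('vert', 'freq', 'time', 'orientation'),
    plus a hypothetical 'epoch' axis.
    """
    _allowed = ('vert', 'orientation', 'epoch', 'freq', 'time')

    def __init__(self, data, dims, vertices, tmin=0., tstep=1.):
        data = np.asarray(data)
        if data.ndim != len(dims):
            raise ValueError('data has %d axes but %d dims were given'
                             % (data.ndim, len(dims)))
        if any(d not in self._allowed for d in dims):
            raise ValueError('unknown dim in %r' % (dims,))
        self.data, self.dims = data, tuple(dims)
        self.vertices, self.tmin, self.tstep = vertices, tmin, tstep

    @property
    def shape(self):
        # map each named axis to its length
        return dict(zip(self.dims, self.data.shape))

# a vector source estimate would be: vertices x orientations x times
sd = SourceData(np.zeros((8196, 3, 100)), ('vert', 'orientation', 'time'),
                vertices=[np.arange(4098), np.arange(4098)])
print(sd.shape)  # {'vert': 8196, 'orientation': 3, 'time': 100}
```

With this shape, a plain SourceEstimate is just `dims=('vert', 'time')` and a SourceTFR is `dims=('vert', 'freq', 'time')`, so no new class per combination is needed.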
I like that!
How backward compatible is this proposal going to be? Isn't a lot of existing code relying on the current classes?
We'll have to decide how backward compatible we want it to be. The way I would start is by looking at how far we can get by supporting "upsampling" our five existing classes to the new SourceData class.
Maybe we deprecate them with a long cycle, maybe not. But I'm optimistic it's a solvable problem. Based on the API I outlined above, I think it's worth the necessary work to get this more compact-yet-complete interface working. It would simplify a lot of code I've written locally to handle the various SourceEstimate classes, as well as things internally.
Okay, I see. Thin wrappers sound like a good idea, and +1 to a long deprecation cycle. I have neuroscientists complaining to me that the API changes too fast in Python / MNE and that things break when they update versions.
This looks like a great solution to me. So this would mean the linear operations stay separate and are only applied on demand?
Yes, exactly: preserve and utilize linearity as long as possible. We have functions like compute_source_psd_epochs that take advantage of linearity for speed under the hood. This class would standardize that practice and keep it in one place.

One other thing to consider is whether we can extend this behavior to things like label time course extraction, too. That is sometimes one more linear step. But even if it's nonlinear, it reduces the dimensions. One maybe-easy way to do this is to move away from "vertex" as the first dimension and let it be "location". A "location" could be a single vertex (analogous to SourceEstimate) or more than one vertex (analogous to what you get after extract_label_time_course). So in the end, get_data() could give a standard source estimate array of 8196 vertices by 1000 time points, or something as complicated as a 72 labels by 20 epochs by 50 freqs by 40 time points array (internally stored efficiently as a list of sensor-data epoch FFTs, an imaging kernel, a label-vertex mapping, and an averaging mode). On top of the above, I think this would "only" require a few additions.
This can be a follow-up PR, but it's good to think of such an additional transformation being tacked on. We could think about where morphing could fit in, too: a SourceMorph class could be an attribute that takes the data (eventually) to a different subject's space. I think that would cover most of the source-space transformations we tend to do anyway. Or maybe what we want, instead of a single monolithic class, is a set of modular transformations that can be chained. Maybe I'll see if I can come up with proposals for the monolithic and modular designs, with advantages and disadvantages.
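The "keep it linear, apply lazily, allow chaining" idea discussed in the last two comments could look roughly like this: store the sensor-space data together with a chain of linear operators (imaging kernel, label-averaging matrix, ...) and only multiply them out in get_data(). A toy NumPy sketch under invented names, not a real design:

```python
import numpy as np

class LazySourceData:
    """Toy sketch: sensor data plus a chain of linear operators,
    multiplied out only on demand (all names here are hypothetical)."""

    def __init__(self, sensor_data, ops=()):
        self.sensor_data = sensor_data   # (n_sensors, n_times)
        self.ops = list(ops)             # linear operators, applied in order

    def transformed(self, op):
        # chaining stays cheap: no data is touched yet
        return LazySourceData(self.sensor_data, self.ops + [op])

    def get_data(self):
        # collapse the operator chain only when the array is requested
        out = self.sensor_data
        for op in self.ops:
            out = op @ out
        return out

rng = np.random.default_rng(1)
n_sens, n_src, n_labels, n_times = 10, 50, 4, 100
kernel = rng.standard_normal((n_src, n_sens))   # inverse imaging kernel
labels = np.zeros((n_labels, n_src))            # label-averaging matrix
for i in range(n_labels):
    idx = np.arange(i, n_src, n_labels)
    labels[i, idx] = 1.0 / len(idx)

sensor = rng.standard_normal((n_sens, n_times))
stc = LazySourceData(sensor, ops=[kernel])      # source-estimate analogue
ltc = stc.transformed(labels)                   # label time-course analogue

assert ltc.get_data().shape == (n_labels, n_times)
```

A SourceMorph step would just be one more operator appended to `ops`, which is what makes the modular "chain of transformations" variant attractive.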
I think we'll have to think on this. Maybe the first thing to start with for GSoC is to assume for now that you have some source-level TFR data to work with, and build from there.
Hmm, but there's a lot of stuff from the project that would depend on this (e.g., not needing to read lists of stcs). Is there maybe some kind of middle way? @larsoner @agramfort
I agree with @larsoner. Maybe one option is to draft a SourceData-like class for the time-frequency estimates in source space and make the plotting code work (with and without the kernel trick). Once this object is agreed upon in terms of API, we can see how to make it a first-class citizen in mne-python.
Sounds good! I'll do that.
Hey everyone, after the Google Summer of Code project I switched universities and quickly found myself having to finance my full-time degree by working a 50% position on the side. Therefore I never found much time to finish the PRs associated with this issue. In the upcoming months I would have the opportunity to work on these PRs again and finally complete them. Before doing that, however, I would first want to know whether there is even a demand to have this problem solved, since this issue does not seem to have seen much traffic, and it might be smarter to allocate the time to something that's needed more. So if anyone is still interested in a simpler MNE workflow for time-frequency analysis in source space, please add some kind of reaction to this thread, so I'll know whether this is still an issue.
I agree there hasn't been much traffic on the issues / PRs, so it doesn't seem to be a critical priority for people, at least. We can always resurrect the code/commits later if someone is motivated by a specific use case. Are there other things you're interested in working on in MNE?
Yes! I'm currently looking into noise detection, and I could imagine that Artifact Subspace Reconstruction (ASR) is a method that might gain wider popularity in the future, so it would probably be a good idea to have an MNE implementation of it. At the moment I'm checking whether it's feasible to carry on the work started in #7479 and add (r)ASR to MNE (possibly reducing the number of external dependencies).
What is the fundamental difference between ASR and SSP? It seems like linear projections and least squares, like SSP, no?
I would say that fundamentally they do a similar thing, with both of them removing noise subcomponents from the data (I'm sure ASR somehow follows the SSP idea, and that SSP could be adapted to do the same job as ASR). The main difference, in my understanding, is how they do this: SSP takes the characteristics of a specific noise component and removes them (requiring a pre-marked subset of corresponding noise). ASR takes the characteristics of clean data and removes components with deviating properties. I think the practicability of ASR and its EEGLAB implementation might be a result of their simple "plug-and-play" approach, not requiring many parameters to make it work. Yet I haven't looked into it enough to say whether it actually does better on SNR than e.g. AutoReject (maybe a systematic comparison would be a nice idea).
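To make the SSP side of this comparison concrete: an SSP-style projector estimates the artifact subspace from pre-marked artifact segments (e.g. via SVD/PCA) and then projects it out of the data. A bare-bones NumPy sketch of the principle, not MNE's or EEGLAB's implementation, with all data simulated:

```python
import numpy as np

rng = np.random.default_rng(2)
n_ch, n_times = 8, 1000

artifact_topo = rng.standard_normal((n_ch, 1))    # fixed artifact field pattern
clean = 0.1 * rng.standard_normal((n_ch, n_times))
artifact_course = rng.standard_normal((1, n_times))
data = clean + artifact_topo @ artifact_course     # contaminated recording

# SSP: estimate the artifact subspace from (here: simulated) artifact segments
artifact_segments = artifact_topo @ rng.standard_normal((1, 500))
U, s, _ = np.linalg.svd(artifact_segments, full_matrices=False)
u = U[:, :1]                                       # dominant artifact component
P = np.eye(n_ch) - u @ u.T                         # project onto orthogonal subspace

cleaned = P @ data
# the artifact direction is now (numerically) absent from the data
assert np.allclose(u.T @ cleaned, 0, atol=1e-8)
```

ASR inverts the roles, as described above: it fits the statistics of clean segments (per sliding window) and rejects components that deviate from them, but the projection machinery at the end is of the same linear kind.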
OK, got it.

ASR: give me good segments to estimate the signal subspace.
SSP: give me artifact segments to estimate the artifact subspace; the signal is the orthogonal subspace.

This suggests that the existing SSP machinery could be used.
Exactly. Though there are some details of the ASR algorithm (like the data being fitted in a sliding window, or the Riemannian variant using principal geodesic analysis instead of PCA) that create a somewhat different set of functions. @nbara's code is extremely close to the original implementation, which is nice for testing equivalence.
Closing this issue in favor of smaller, more targeted work like #11985 for now. There is good stuff in here and in the linked PRs that we should keep in mind moving forward, though!
Hey everyone, I'm going to work on improving Time Frequency Analysis in Source Space as my Google Summer of Code project this year!
I opened this issue so we can discuss different aspects of the project, e.g. what you would like to have implemented and what not. To get an idea of the project, you can take a look at this Google Doc, which is a copy of my GSoC proposal. Feel free to comment on the Google Doc; anyhow, it would probably be better if we keep the general discussion within this issue thread.
Problem:

For TFR analysis in source space, it would be nice to be able to apply MNE's standard repertoire of TFR functions from `mne.time_frequency` on `SourceEstimate` data, instead of being restricted to the `source_induced_power` and `source_band_induced_power` functions.

Solution / alternatives:

There are some possible solutions listed in the proposal. The basic idea was to enable `mne.time_frequency` functions to take `VectorSourceEstimate` and `VolVectorSourceEstimate` objects as data input and return one or more `SourceTFR` objects, which contain time-frequency transformed source-level data.

Anyhow, feel free to express your opinion on any solution or alternative (that's what this issue was created for).