FIX: wrong dimension in sample_mean computation #33
Conversation
Hmm, do the tests still pass? I thought I had checked against the C code.
It was still wrong; sample_mean has to be computed over epochs. This is still not the right solution, as we have to compute a sample_mean for each event type (see MNE manual p. 89). Epochs currently can't handle multiple different event IDs (unless all events are used). Can we extend Epochs s.t. event_id can be a list of integers (in the test code we would use event_id = [1, 2, 3, 4])? The test currently fails (and was failing before the last commit).
I have to admit I am a bit confused, but you can use mne.merge_events. Let me know how it goes.
Unless I am confused too, this won't do the right thing: when using merge_events, the Epochs object will just have the (single) ID of the merged event in it, so it won't be possible to tell the different event types apart when computing the covariance.
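A minimal sketch of the problem (the events array and IDs here are made up for illustration): merge_events replaces the original event codes with one new code, so the per-type grouping needed for the covariance is lost.

```python
import numpy as np
import mne

# Hypothetical events array in MNE layout: [sample, previous value, event id]
events = np.array([[1000, 0, 1],
                   [2000, 0, 2],
                   [3000, 0, 3],
                   [4000, 0, 4]])

# Collapse event types 1-4 into a single new id (99 is arbitrary)
merged = mne.merge_events(events, [1, 2, 3, 4], 99)
print(merged[:, 2])  # [99 99 99 99] -- the original event types can no longer be told apart
```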
I am sure of what keep_sample_mean should do now. Can you check with Matti?
I briefly talked to him yesterday, but I will double check that I understand correctly. I now think it would be a better idea to have compute_covariance() accept a list of Epochs instead of modifying Epochs s.t. it can have multiple event IDs. That way we can better replicate what MNE does with the .cov file, where the epochs for different events can have different lengths and baselines. The reason it was passing the test before is that "np.dot(sample_mean, sample_mean.T)" is small and the error tolerance in the test is too large; it passes the test even without subtracting "np.dot(..)".
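A rough NumPy sketch of the computation being described, with one sample mean per event type (the function name and the epochs_list structure are hypothetical, not the actual mne-python API; the normalization is simplified):

```python
import numpy as np

def compute_cov_sketch(epochs_list):
    """Sketch: noise covariance pooled over event types,
    with the per-type sample mean (evoked response) removed.

    epochs_list: list of 3-D arrays, one per event type,
    each shaped (n_epochs, n_channels, n_times).
    """
    n_channels = epochs_list[0].shape[1]
    cov = np.zeros((n_channels, n_channels))
    n_samples = 0
    for data in epochs_list:
        n_epochs, _, n_times = data.shape
        # accumulate sum_t x_t x_t^T over all epochs of this event type
        for epoch in data:
            cov += np.dot(epoch, epoch.T)
        # sample mean over epochs for this event type: shape (n_channels, n_times)
        sample_mean = data.mean(axis=0)
        # subtract the contribution of the mean (evoked) response
        cov -= n_epochs * np.dot(sample_mean, sample_mean.T)
        n_samples += n_epochs * n_times
    # simplified normalization; a full implementation would adjust the
    # degrees of freedom to account for the subtracted sample means
    return cov / (n_samples - 1)
```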
Cov computation should now be correct for multiple event types when keep_sample_mean=False. The test passes now with a tolerance of 0.005 (before the tolerance was 0.06).
Looks good. I'm merging. Add a note to what's new too.
FIX: wrong dimension in sample_mean computation
There was a nasty bug in the covariance computation: we were computing the sample_mean across sensors instead of over time. Unfortunately the sample_mean was a 1-D array, so np.dot(sample_mean, sample_mean.T) was a scalar and Python didn't complain.
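A small NumPy illustration of how the 1-D sample_mean slipped through (array sizes are arbitrary; per the discussion above, the intended sample mean is taken over epochs):

```python
import numpy as np

data = np.random.randn(10, 32, 100)  # (n_epochs, n_channels, n_times), sizes made up

# Buggy: averaging a single epoch over the sensor axis gives a 1-D array,
# so np.dot(sample_mean, sample_mean.T) silently collapses to a scalar.
bad_mean = data[0].mean(axis=0)              # shape (100,)
print(np.dot(bad_mean, bad_mean.T).shape)    # () -> a scalar, no error raised

# Intended: average over epochs, keeping a 2-D array,
# so the correction term has the shape of a covariance matrix.
good_mean = data.mean(axis=0)                # shape (32, 100)
print(np.dot(good_mean, good_mean.T).shape)  # (32, 32)
```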