CSD with working example and tests
added some tests

WIP CSD MVP example

working with example

weird cherry-picking error again

Update examples/preprocessing/plot_csd.py

Co-Authored-By: Alexandre Gramfort <[email protected]>

Update examples/preprocessing/plot_csd.py

Co-Authored-By: Alexandre Gramfort <[email protected]>

Update examples/preprocessing/plot_csd.py

Co-Authored-By: Alexandre Gramfort <[email protected]>

Update mne/preprocessing/_csd.py

Co-Authored-By: Alexandre Gramfort <[email protected]>

Update examples/preprocessing/plot_csd.py

Co-Authored-By: Alexandre Gramfort <[email protected]>

typos/small fixes

incorporated Eric comments

more Eric comments

refactoring core comp per Eric

realized I did that wrong

Eric comments 3

plot fix

Fix julian date conversions (mne-tools#6911)

* Fix julian date conversions

* remove tabs

* use UTC and private functions.

closes mne-tools#6901 [skip travis][skip azp] (mne-tools#6914)

ENH: Add source space SNR estimate (mne-tools#6908)

* source-snr-edit

* FIX move estimate_snr to be a method

* ENH: name change

* Make it work

* SNR works, now clean up code

* fixed unnecessary stc copy, returns stc object

* added plot source space SNR tutorial file

* updated author names

* fixed period for pydocstyle and suggested documentation edit

* edits suggested from @drammock

* made edits suggested by reviewers

* made edits suggested by reviewers

* N -> n_channels

* Fixed formatting of Reference in comment

* fixed formatting in doc and in code

* fixed snr_stc creation

* further minor edits

* fixed formatting issues and colormap choice in eg

* removed colon after Reference

* Corrected SNR calculation to take unwhitened sensor signal b instead of whitened, updated tutorial file

* Minor changes to formatting

* FIX: transparent

* FIX: Rename

* FIX?

Added dSPM (mne-tools#6913)

* Added dSPM

* removed term signal covariance

* Minor modification of dSPM definition

* Corrected according to Matti's suggestions

[MRG]: read *.dat electrode position files (mne-tools#6909)

* ADD: channels.read_dig_dat()

* DOC

* DOC

* TEST

* DOC, __all__

* DOC: intro

* DOC

* pep & py3.5 compatibility

* py3.5

MRG: allow users to annotate epochs with colors (mne-tools#6855)

* ENH: copy over diff from autoreject

* Make it work

* More fixes

* More fixes

* Make horizontal patch

* Cleanup

* TST: add tests

* Address Eric's comments

* Address alex comments

* DOC: update whats new

* FIX travis + circle

* FIX For ICA plot

Fix infomax (mne-tools#6921)

* fix extended infomax

* update whats new

* cleaner way, using setdefault

DOC: Document authors better [ci skip]

FIX: Comments and order

umlauts

MRG, MAINT: Simplify code for vol ravel (mne-tools#6920)

* MAINT: Simplify code for vol ravel

* STY: Flake

ENH: Convert NIRS raw data to optical density (mne-tools#6827)

* Initial framework for fnirs raw to optical density conversion

* Add optical density to docs

* Update mne/preprocessing/tests/test_optical_density.py

Co-Authored-By: Eric Larson <[email protected]>

* Update mne/preprocessing/tests/test_optical_density.py

Co-Authored-By: Eric Larson <[email protected]>

* Import testing

* Put optical density docs in preprocessing section

* Implement optical density conversion

* Fix optical density code style issues

* Add plotting of optical density data

* Add defaults for optical density plotting

* Update mne/preprocessing/optical_density.py

Co-Authored-By: Eric Larson <[email protected]>

* Handle negative intensities in optical density conversion

* Add in -1* that was removed in previous commit

* Add tests and fix imports for optical density

* Add tests for optical density conversion

* Use assert_allclose for testing optical density

* Update test_optical_density.py

* Update mne/preprocessing/_optical_density.py

Co-Authored-By: Alexandre Gramfort <[email protected]>

* Do not operate in place for optical density

* Force data to load and modify test to include preload=False condition

import fix

tidying up

weird file conflict errors
Alex Rockhill committed Oct 29, 2019
1 parent 0a45ec5 commit 76f7817
Showing 14 changed files with 464 additions and 14 deletions.
2 changes: 2 additions & 0 deletions doc/changes/latest.inc
@@ -61,6 +61,8 @@ Bug

- Fix bug in :func:`mne.viz.plot_bem` where some sources were not plotted by `Jean-Remi King`_ and `Eric Larson`_

- Fix bug in :class:`~mne.preprocessing.ICA` where requesting extended infomax via ``fit_params={'extended': True}`` was overridden, by `Daniel McCloy`_.

- Fix TAL channel parsing (annotations) for EDF-D files by `Clemens Brunner`_

- Fix bug with :func:`mne.viz.plot_dipole_locations` when plotting in head coordinates by `Eric Larson`_
1 change: 1 addition & 0 deletions doc/python_reference.rst
@@ -343,6 +343,7 @@ Projections:

ICA
Xdawn
compute_current_source_density
compute_proj_ecg
compute_proj_eog
create_ecg_epochs
1 change: 1 addition & 0 deletions examples/inverse/plot_mixed_norm_inverse.py
@@ -7,6 +7,7 @@
L0.5/L2 is done with irMxNE, which allows for sparser
source estimates with less amplitude bias due to the non-convexity
of the L0.5/L2 mixed norm penalty.
"""
# Author: Alexandre Gramfort <[email protected]>
# Daniel Strohmeier <[email protected]>
99 changes: 99 additions & 0 deletions examples/preprocessing/plot_csd.py
@@ -0,0 +1,99 @@
# -*- coding: utf-8 -*-
"""
.. _ex-: plot_csd
=================================================================
Analyze Data Using Current Source Density (CSD)
=================================================================
This script shows an example of how to use CSD. CSD
takes the spatial Laplacian of the sensor signal (derivative in both
x and y). It does what a planar gradiometer does in MEG. Spatial derivative
reduces point spread. CSD transformed data have a sharper or more distinct topography,
reducing the negative impact of volume conduction.
"""
# Authors: Alex Rockhill <[email protected]>
#
# https://gist.github.com/sherdim/28b069c8ebca17073f24322e8e721e1f

# License: BSD (3-clause)
import matplotlib.pyplot as plt

import numpy as np
import mne
from mne.datasets import sample

from mne.preprocessing import compute_current_source_density

print(__doc__)

###############################################################################
# Load sample subject data
data_path = sample.data_path()

raw = mne.io.read_raw_fif(data_path + '/MEG/sample/sample_audvis_raw.fif',
preload=True)
raw = raw.pick_types(meg=False, eeg=True, eog=True, ecg=True, stim=True,
exclude=raw.info['bads'])
event_id = {'auditory/left': 1, 'auditory/right': 2, 'visual/left': 3,
'visual/right': 4, 'smiley': 5, 'button': 32}
events = mne.find_events(raw)
epochs = mne.Epochs(raw, events, event_id=event_id, tmin=-0.2, tmax=.5,
preload=True)
evo = epochs.average()

###############################################################################
# Let's look at the topography of CSD compared to the average reference

fig, axes = plt.subplots(1, 6)
times = np.array([-0.1, 0., 0.05, 0.1, 0.15])
evo.plot_topomap(times=times, cmap='Spectral_r', axes=axes[:5],
outlines='skirt', contours=4, time_unit='s',
colorbar=True, show=False, title='Average Reference')
fig, axes = plt.subplots(1, 6)
csd_evo = compute_current_source_density(evo, copy=True)
csd_evo.plot_topomap(times=times, axes=axes[:5], cmap='Spectral_r',
outlines='skirt', contours=4, time_unit='s',
colorbar=True, title='Current Source Density')

###############################################################################
# Let's add the evoked plot

evo.plot_joint(title='Average Reference', show=False)
csd_evo.plot_joint(title='Current Source Density')

###############################################################################
# Let's look at the effect of smoothing and spline flexibility

fig, ax = plt.subplots(4, 4)
fig.set_size_inches(10, 10)
for i, lambda2 in enumerate([1e-7, 1e-5, 1e-3, 0]):
for j, m in enumerate([5, 4, 3, 2]):
this_csd_evo = compute_current_source_density(evo, stiffness=m,
lambda2=lambda2,
copy=True)
this_csd_evo.plot_topomap(0.1, axes=ax[i, j],
outlines='skirt', contours=4, time_unit='s',
colorbar=False, show=False)
ax[i, j].set_title('m=%i, lambda=%s' % (m, lambda2))

plt.show()

# References
# ----------
#
# Perrin et al. (1989, 1990)
#
# \MATLAB\eeglab14_1_1b\functions\popfunc\eeg_laplac.m
# Juan Sebastian Gonzalez, DFI / Universidad de Chile
#
# MATLAB\eeglab14_1_1b\plugins\PrepPipeline0.55.3\utilities\private\spherical_interpolate.m
# Copyright 2009- by Jason D.R. Farquhar ([email protected])
#
# Wang K, Begleiter, 1999
#
# MATLAB\eeglab14_1_1b\plugins\erplab6.1.3\functions\csd\current_source_density.m
# Copyright (C) 2003 by Jürgen Kayser (Email: [email protected])
# (published in appendix of Kayser J, Tenke CE,
# Clin Neurophysiol 2006;117(2):348-368)
#
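The example above works on the evoked response, but compute_current_source_density also accepts Raw, Epochs, or Evoked instances (see the _validate_type call in mne/preprocessing/_csd.py below). A minimal sketch of transforming the epochs object built in this example, reusing the imports above; the parameter values are illustrative, not recommendations:

# Sketch only: reuses the epochs object created in the example above.
csd_epochs = compute_current_source_density(epochs, lambda2=1e-5,
                                            stiffness=4, copy=True)

With copy=True the original epochs are left untouched; copy=False overwrites the instance data in place, as described in the function's docstring below.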
19 changes: 19 additions & 0 deletions mne/channels/interpolation.py
@@ -14,6 +14,25 @@
from ..forward import _map_meg_channels


def _calc_h(cosang, stiffness=4, num_lterms=50):
"""Calculate spherical spline h function between points on a sphere.
Parameters
----------
cosang : array-like | float
cosine of angles between pairs of points on a spherical surface. This
is equivalent to the dot product of unit vectors.
stiffness : float
stiffnes of the spline. Also referred to as `m`.
num_lterms : int
number of Legendre terms to evaluate.
"""
factors = [(2 * n + 1) /
(n ** (stiffness - 1) * (n + 1) ** (stiffness - 1) * 4 * np.pi)
for n in range(1, num_lterms + 1)]
return legval(cosang, [0] + factors)


def _calc_g(cosang, stiffness=4, num_lterms=50):
"""Calculate spherical spline g function between points on a sphere.
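For reference, the factors list in _calc_h above implements the spherical spline kernel of Perrin et al. (1989). A sketch of the series it evaluates, with m the stiffness and N the number of Legendre terms, matching the code:

h(\cos\theta) = \frac{1}{4\pi} \sum_{n=1}^{N}
    \frac{2n + 1}{\left[ n\,(n + 1) \right]^{m - 1}} \, P_n(\cos\theta)

Here P_n are Legendre polynomials, evaluated in the code via numpy.polynomial.legendre.legval. The existing _calc_g (whose body is truncated above) uses the analogous series with exponent m rather than m - 1; that comparison is based on the MNE source and is not part of this diff.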
3 changes: 3 additions & 0 deletions mne/io/meas_info.py
@@ -2028,6 +2028,9 @@ def _bad_chans_comp(info, ch_names):
# should this be thought of as a bug?
return False, []

if 'csd' in info['comps']: # added for csd 0.20
return True, []

# only include compensation channels that would affect selected channels
ch_names_s = set(ch_names)
comp_names = []
23 changes: 12 additions & 11 deletions mne/io/pick.py
@@ -477,17 +477,18 @@ def pick_info(info, sel=(), copy=True, verbose=None):
info['bads'] = [ch for ch in info['bads'] if ch in info['ch_names']]

if 'comps' in info:
comps = deepcopy(info['comps'])
for c in comps:
row_idx = [k for k, n in enumerate(c['data']['row_names'])
if n in info['ch_names']]
row_names = [c['data']['row_names'][i] for i in row_idx]
rowcals = c['rowcals'][row_idx]
c['rowcals'] = rowcals
c['data']['nrow'] = len(row_names)
c['data']['row_names'] = row_names
c['data']['data'] = c['data']['data'][row_idx]
info['comps'] = comps
if 'csd' not in info['comps']: # added for csd 0.20
comps = deepcopy(info['comps'])
for c in comps:
row_idx = [k for k, n in enumerate(c['data']['row_names'])
if n in info['ch_names']]
row_names = [c['data']['row_names'][i] for i in row_idx]
rowcals = c['rowcals'][row_idx]
c['rowcals'] = rowcals
c['data']['nrow'] = len(row_names)
c['data']['row_names'] = row_names
c['data']['data'] = c['data']['data'][row_idx]
info['comps'] = comps
info._check_consistency()
return info

1 change: 1 addition & 0 deletions mne/preprocessing/__init__.py
@@ -21,4 +21,5 @@
from .stim import fix_stim_artifact
from .maxwell import maxwell_filter
from .xdawn import Xdawn
from ._csd import compute_current_source_density
from ._optical_density import optical_density
169 changes: 169 additions & 0 deletions mne/preprocessing/_csd.py
@@ -0,0 +1,169 @@
# Copyright 2003-2010 Jürgen Kayser <[email protected]>
# Copyright 2017 Federico Raimondo <[email protected]> and
# Denis A. Engemann <[email protected]>
#
#
# The original CSD Toolbox can be found at
# http://psychophysiology.cpmc.columbia.edu/Software/CSDtoolbox/

# Authors: Denis A. Engemann <[email protected]>
# Alex Rockhill <[email protected]>
#
# License: Relicensed under BSD (3-clause) and adapted with
# permission from authors of original GPL code

import numpy as np

from scipy.linalg import inv

from mne import pick_types, pick_info
from mne.utils import _validate_type
from mne.io import BaseRaw
from mne.epochs import BaseEpochs
from mne.evoked import Evoked
from mne.channels.interpolation import _calc_g, _calc_h


def _prepare_G(G, lambda2):
G.flat[::len(G) + 1] += lambda2
# compute the CSD
Gi = inv(G)

TC = Gi.sum(0)
sgi = np.sum(TC) # compute sum total

return Gi, TC, sgi


def _compute_csd(data, G_precomputed, H, radius):
"""compute the CSD"""
n_channels, n_times = data.shape
mu = data.mean(0)[None]
Z = data - mu
X = np.zeros_like(data)
radius **= 2

Gi, TC, sgi = G_precomputed

Cp2 = np.dot(Gi, Z)
c02 = np.sum(Cp2, axis=0) / sgi
C2 = Cp2 - np.dot(TC[:, None], c02[None, :])
X = np.dot(C2.T, H).T / radius
return X


def compute_current_source_density(inst, lambda2=1e-5, stiffness=4,
n_legendre_terms=50,
sphere=(0., 0., 0., 0.095),
copy=True):
"""Current Source Density (CSD) transformation
Transformation based on spherical spline surface Laplacian.
.. note:: This function applies an average reference to the data.
Do not transform CSD data to source space.
Parameters
----------
inst : instance of Raw, Epochs or Evoked
The data to be transformed.
lambda2 : float
Regularization parameter, produces smoothnes. Defaults to 1e-5.
stiffnes : float
stiffnes of the spline. Also referred to as `m`
(if g_matrix or h_matrix is provided this will be ignored)
n_legendre_terms : int
number of Legendre terms to evaluate (if g_matrix or h_matrix
is provided this will be ignored)
sphere : tuple, (float, float, float, float)
The sphere, head-model of the form (x, y, z, r) where x, y, z
is the center of the sphere and r is the radius
copy : bool
Whether to overwrite instance data or create a copy.
Returns
-------
inst_csd : instance of Epochs or Evoked
The transformed data. Output type will match input type.
"""

_validate_type(inst, (BaseEpochs, BaseRaw, Evoked), 'inst')

if 'csd' in inst.info['comps']:
raise ValueError('CSD already applied, should not be reapplied')

inst = inst.copy() if copy else inst

picks = pick_types(inst.info, meg=False, eeg=True, exclude='bads')

if len(picks) == 0:
raise ValueError('No EEG channels found.')

if lambda2 is None:
lambda2 = 1e-5

_validate_type(lambda2, (float, int), 'lambda2')
if 0 > lambda2 or lambda2 > 1:
raise ValueError('lambda2 must be between 0 and 1, got %s' % lambda2)

_validate_type(stiffness, (float, int), 'stiffness')
if stiffness < 0:
        raise ValueError('stiffness must be non-negative, got %s' % stiffness)

_validate_type(n_legendre_terms, (int), 'n_legendre_terms')
if n_legendre_terms < 1:
raise ValueError('n_legendre_terms must be greater than 0, '
'got %s' % n_legendre_terms)

_validate_type(sphere, tuple, 'sphere')
x, y, z, radius = sphere
_validate_type(x, float, 'x')
_validate_type(y, float, 'y')
_validate_type(z, float, 'z')
_validate_type(radius, float, 'radius')
if radius <= 0:
raise ValueError('sphere radius must be greater than 0, '
'got %s' % radius)

_validate_type(copy, (bool), 'copy')

pos = np.array([inst.info['chs'][pick]['loc'][:3] for pick in picks])
for this_pos, pick in zip(pos, picks):
if this_pos.sum() == 0.:
raise ValueError('Zero position found for '
'%s' % inst.ch_names[pick])
if any([not np.isfinite(coord) for coord in this_pos]):
raise ValueError('Non-finite position found for '
'%s' % inst.ch_names[pick])
pos -= (x, y, z)

G = _calc_g(np.dot(pos, pos.T), stiffness=stiffness,
num_lterms=n_legendre_terms)
H = _calc_h(np.dot(pos, pos.T), stiffness=stiffness,
num_lterms=n_legendre_terms)

G_precomputed = _prepare_G(G, lambda2)

trans_csd = _compute_csd(np.eye(len(picks)),
G_precomputed=G_precomputed,
H=H, radius=radius)

if isinstance(inst, BaseEpochs):
for epo in inst._data[:, picks]:
epo[picks] = np.dot(trans_csd, epo[picks])
else:
inst._data = np.dot(trans_csd, inst._data[picks])

pick_info(inst.info, picks, copy=False)
inst.info['comps'].append('csd')
return inst

# References
# ----------
#
# [1] Perrin et al. (1989, 1990), published in the appendix of Kayser J,
#     Tenke CE, Clin Neurophysiol 2006;117(2):348-368.
#
# [2] Perrin, Pernier, Bertrand, and Echallier. Electroenceph Clin Neurophysiol
# 1989;72(2):184-187, and Corrigenda EEG 02274 in Electroenceph Clin
# Neurophysiol 1990;76:565.
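A compact sketch of what _prepare_G and _compute_csd above compute, writing G_lambda = G + lambda2 * I for the regularized spline matrix, r for the sphere radius, and Z for the average-referenced data (this is a reading of the code above, not an independent derivation):

C = G_\lambda^{-1} Z
c_0 = \frac{\mathbf{1}^\top C}{\mathbf{1}^\top G_\lambda^{-1} \mathbf{1}}
\mathrm{CSD} = \frac{H \left( C - G_\lambda^{-1} \mathbf{1}\, c_0 \right)}{r^2}

Because compute_current_source_density passes the identity matrix as data, the result is a channel-by-channel transformation matrix (trans_csd) that is then applied to the instance data with np.dot.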
1 change: 1 addition & 0 deletions mne/preprocessing/_optical_density.py
@@ -35,6 +35,7 @@ def optical_density(raw):
# Set all negative values to abs(x), this also has the benefit of ensuring
# that the means are all greater than zero for the division below.
if np.any(raw._data[picks] <= 0):

warn("Negative intensities encountered. Setting to abs(x)")
raw._data[picks] = np.abs(raw._data[picks])

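The hunk above only shows the guard against non-positive intensities; per the commit messages ("Implement optical density conversion"), the conversion divides each channel by its mean intensity and takes the negative log. A minimal NumPy illustration of that idea (a sketch, not the exact mne.preprocessing.optical_density implementation):

import numpy as np

def od_sketch(intensity):
    """Illustrative optical-density conversion: -log(I / mean(I)) per channel."""
    intensity = np.abs(intensity)                  # mirror the abs(x) guard above
    means = intensity.mean(axis=1, keepdims=True)  # per-channel mean intensity
    return -np.log(intensity / means)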