
Merge 0.4.5 into master #270

Merged: 13 commits from 0.4.5 into master, Feb 22, 2022
2 changes: 1 addition & 1 deletion .github/workflows/test-devel.yml
@@ -27,7 +27,7 @@ jobs:
python-version: ${{ matrix.python-version }}

- name: Install Poetry
uses: snok/install-poetry@v1.1.6
uses: snok/install-poetry@v1
with:
virtualenvs-create: true
virtualenvs-in-project: true
12 changes: 9 additions & 3 deletions .github/workflows/test.yml
@@ -26,10 +26,8 @@ jobs:
with:
python-version: ${{ matrix.python-version }}

- uses: pre-commit/[email protected]

- name: Install Poetry
uses: snok/install-poetry@v1.1.6
uses: snok/install-poetry@v1
with:
virtualenvs-create: true
virtualenvs-in-project: true
@@ -62,3 +60,11 @@ jobs:
run: |
source $VENV
poetry run python -m moabb.run --pipelines=./moabb/tests/test_pipelines/ --verbose

- name: Upload Coverage to Codecov
uses: codecov/codecov-action@v2
if: success()
with:
verbose: true
directory: /home/runner/work/moabb/moabb
files: ./.coverage
4 changes: 2 additions & 2 deletions README.md
@@ -129,7 +129,7 @@ can upgrade your pip version using: `pip install -U pip` before installing `moabb`
## Supported datasets

The list of supported datasets can be found here:
http://moabb.neurotechx.com/docs/datasets.html
https://neurotechx.github.io/moabb/datasets.html

### Submit a new dataset

@@ -256,6 +256,6 @@ BCI algorithms applied on an extensive list of freely available EEG datasets.
[link_sylvain]: https://sylvchev.github.io/
[link_neurotechx_signup]: https://neurotechx.com/
[link_gitter]: https://gitter.im/moabb_dev/community
[link_moabb_docs]: http://moabb.neurotechx.com/docs/index.html
[link_moabb_docs]: https://neurotechx.github.io/moabb/
[link_arxiv]: https://arxiv.org/abs/1805.06427
[link_jne]: http://iopscience.iop.org/article/10.1088/1741-2552/aadea0/meta
5 changes: 2 additions & 3 deletions docs/source/README.md
@@ -128,8 +128,7 @@ can upgrade your pip version using: `pip install -U pip` before installing `moabb`

## Supported datasets

The list of supported datasets can be found here :
http://moabb.neurotechx.com/docs/datasets.html
The list of supported datasets can be found here: https://neurotechx.github.io/moabb/

### Submit a new dataset

@@ -258,6 +257,6 @@ BCI algorithms applied on an extensive list of freely available EEG datasets.
[link_sylvain]: https://sylvchev.github.io/
[link_neurotechx_signup]: https://neurotechx.com/
[link_gitter]: https://gitter.im/moabb_dev/community
[link_moabb_docs]: http://moabb.neurotechx.com/docs/index.html
[link_moabb_docs]: https://neurotechx.github.io/moabb/
[link_arxiv]: https://arxiv.org/abs/1805.06427
[link_jne]: http://iopscience.iop.org/article/10.1088/1741-2552/aadea0/meta
24 changes: 23 additions & 1 deletion docs/source/whats_new.rst
@@ -31,7 +31,29 @@ API changes
- None


Version - 0.4.4 (Stable - PyPi)
Version - 0.4.5 (Stable - PyPi)
---------------------------------

Enhancements
~~~~~~~~~~~~

- Progress bars, pooch, tqdm (:gh:`258` by `Divyesh Narayanan`_ and `Sylvain Chevallier`_)
- Adding test and example for set_download_dir (:gh:`249` by `Divyesh Narayanan`_)
- Update to newer version of Schirrmeister2017 dataset (:gh:`265` by `Robin Schirrmeister`_)
- Adding Huebner2017 and Huebner2018 P300 datasets (:gh:`260` by `Jan Sosulski`_)
- Adding Sosulski2019 auditory P300 datasets (:gh:`266` by `Jan Sosulski`_)
- New script to visualize ERP on all datasets, as a sanity check (:gh:`261` by `Jan Sosulski`_)

Bugs
~~~~

- Removing dependency on mne method for PhysionetMI data downloading, renaming runs (:gh:`257` by `Divyesh Narayanan`_)
- Correcting events management in Schirrmeister2017, renaming session and run (:gh:`255` by `Pierre Guetschel`_ and `Sylvain Chevallier`_)
- Switch session and runs in MAMEM1, 2 and 3 to avoid error in WithinSessionEvaluation (:gh:`256` by `Sylvain Chevallier`_)
- Correct docstrings for the documentation, including Lee2017 (:gh:`256` by `Sylvain Chevallier`_)


Version - 0.4.4
---------------

Enhancements
12 changes: 6 additions & 6 deletions examples/advanced_examples/plot_filterbank_csp_vs_csp.py
@@ -3,7 +3,7 @@
FilterBank CSP versus CSP
=========================

This Example show a comparison of CSP versus FilterBank CSP on the
This example shows a comparison of CSP versus FilterBank CSP on the
very popular dataset 2a from the BCI competition IV.
"""
# Authors: Alexandre Barachant <[email protected]>
@@ -27,7 +27,7 @@
moabb.set_log_level("info")

##############################################################################
# Create pipelines
# Create Pipelines
# ----------------
#
# The CSP implementation from MNE is used. We selected 8 CSP components, as
@@ -51,7 +51,7 @@
# ----------
#
# Since two different preprocessing steps will be applied, we have two different
# paradigm objects. We have to make sure their filter matchs so the comparison
# paradigm objects. We have to make sure their filters match so the comparison
# will be fair.
#
# The first one is a standard `LeftRightImagery` with a 8 to 35 Hz broadband
@@ -75,7 +75,7 @@
)
results = evaluation.process(pipelines)
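# For reference, the broadband paradigm evaluated above is built along these
# lines (a sketch of the call that sits in the collapsed code; fmin and fmax
# are in Hz):
paradigm = LeftRightImagery(fmin=8, fmax=35)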

# bank of 6 filter, by 4 Hz increment
# Bank of 6 filters, by 4 Hz increment
filters = [[8, 12], [12, 16], [16, 20], [20, 24], [24, 28], [28, 35]]
paradigm = FilterBankLeftRightImagery(filters=filters)
evaluation = CrossSessionEvaluation(
@@ -93,10 +93,10 @@
# Plot Results
# ----------------
#
# Here we plot the results via normal methods. We the first plot is a pointplot
# Here we plot the results via seaborn. We first display a pointplot
# with the average performance of each pipeline across sessions and subjects.
# The second plot is a paired scatter plot. Each point represents the score
# of a single session. An algorithm will outperforms another is most of the
# of a single session. An algorithm will outperform another if most of the
# points are in its quadrant.

fig, axes = plt.subplots(1, 2, figsize=[8, 4], sharey=True)
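# A sketch of how the pointplot described above can be drawn on the first axis
# (the column names "score" and "pipeline" are assumed from the MOABB results
# format; the actual plotting code is in the collapsed remainder of the file):
import seaborn as sns

sns.pointplot(data=results, x="pipeline", y="score", ax=axes[0])
axes[0].set_ylabel("ROC AUC")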
37 changes: 19 additions & 18 deletions examples/advanced_examples/plot_mne_and_scikit_estimators.py
@@ -1,16 +1,16 @@
"""
=========================
MNE Epochs-based piplines
=========================
==========================
MNE Epochs-based pipelines
==========================

This example shows how to use a machine learning pipeline based on MNE Epochs
instead of numpy arrays. This is useful to make the most of the MNE code base
instead of Numpy arrays. This is useful to make the most of the MNE code base
and to embed EEG specific code inside sklearn pipelines.

We will compare compare different pipelines for P300:
- Logistic Regression, based on MNE Epochs
We will compare different pipelines for P300:
- Logistic regression, based on MNE Epochs
- XDAWN and Logistic Regression (LR), based on MNE Epochs
- XDAWN extended covariance and LR on tangent space, based on numpy
- XDAWN extended covariance and LR on tangent space, based on Numpy

"""
# Authors: Sylvain Chevallier
@@ -47,7 +47,7 @@
moabb.set_log_level("info")

###############################################################################
# Loading dataset
# Loading Dataset
# ---------------
#
# Load 2 subjects of BNCI 2014-009 dataset, with 3 session each
@@ -58,15 +58,15 @@
paradigm = P300()

##############################################################################
# Get data (optional)
# Get Data (optional)
# -------------------
#
# To get access to the EEG signals downloaded from the dataset, you could
# use ``dataset.get_data([subject_id])`` to obtain the EEG as MNE Epochs, stored
# in a dictionary of sessions and runs.
# The ``paradigm.get_data(dataset=dataset, subjects=[subject_id])`` allows one to
# obtain the preprocessed EEG data, the labels and the meta information. By
# default, the EEG is return as a numpy array. With ``return_epochs=True``, MNE
# default, the EEG is returned as a Numpy array. With ``return_epochs=True``, MNE
# Epochs are returned.

subject_list = [1]
@@ -77,14 +77,14 @@
)
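# For completeness, the raw access path mentioned above has roughly this shape
# (a sketch; the variable name ``dataset`` and the subject -> session -> run
# nesting are assumptions based on the MOABB dataset API):
raw_data = dataset.get_data(subjects=subject_list)
sessions = raw_data[subject_list[0]]  # dict of sessions for the first subject
print(list(sessions.keys()))          # available session keys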

##############################################################################
# A simple MNE pipeline
# A Simple MNE Pipeline
# ---------------------
#
# Using ``return_epochs=True`` in the evaluation, it is possible to design a
# pipeline based on MNE Epochs input. Let's create a simple one, that
# reshapes the input data from epochs, rescales the data and uses a logistic
# regression to classify the data. We will need to write a basic Transformer
# estimator, that comply with
# estimator, that complies with
# `sklearn convention <https://scikit-learn.org/stable/developers/develop.html>`_.
# This transformer will extract the data from an input Epochs object and
# reshape it into a 2D array.
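# A minimal sketch of such a transformer (illustrative only; the class actually
# used in this example is defined in the collapsed code below and may differ):
from sklearn.base import BaseEstimator, TransformerMixin


class EpochsVectorizer(BaseEstimator, TransformerMixin):
    """Extract the data array from MNE Epochs and flatten it to 2D."""

    def fit(self, X, y=None):
        # Stateless: nothing to learn from the data.
        return self

    def transform(self, X, y=None):
        # X is an mne.Epochs object; get_data() yields an array of shape
        # (n_epochs, n_channels, n_times), flattened here to 2D.
        arr = X.get_data()
        return arr.reshape(len(arr), -1)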
@@ -124,13 +124,13 @@ def transform(self, X, y=None):
mne_res = mne_eval.process(mne_ppl)

##############################################################################
# Advanced MNE pipeline
# Advanced MNE Pipeline
# ---------------------
#
# In some cases, the MNE pipeline should have access to the original labels from
# the dataset. This is the case for the XDAWN code of MNE. One could pass
# `mne_labels` to evaluation in order to keep this label.
# As an example, we will define a pipeline that compute an XDAWN filter, rescale,
# As an example, we will define a pipeline that computes an XDAWN filter,
# rescales the data, then applies a logistic regression.

mne_adv = {}
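# A sketch of one such pipeline (the dict key, the parameter values and the
# exact steps are illustrative; the actual definition sits in the collapsed
# code below):
from mne.decoding import Vectorizer
from mne.preprocessing import Xdawn
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

mne_adv["XDAWN+LR"] = make_pipeline(
    Xdawn(n_components=4, reg="ledoit_wolf"),  # spatial filtering on MNE Epochs
    Vectorizer(),  # flatten the filtered epochs to a 2D array
    StandardScaler(),  # rescale features
    LogisticRegression(penalty="l2"),
)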
@@ -151,10 +151,10 @@ def transform(self, X, y=None):
adv_res = mne_eval.process(mne_adv)

###############################################################################
# Numpy-based pipeline
# Numpy-based Pipeline
# --------------------
#
# For the comparison, we will define a numpy-based pipeline that relies on
# For the comparison, we will define a Numpy-based pipeline that relies on
# pyriemann to estimate XDAWN-extended covariance matrices that are projected
# on the tangent space and classified with a logistic regression.

@@ -173,11 +173,12 @@ def transform(self, X, y=None):
sk_res = sk_eval.process(sk_ppl)

###############################################################################
# Combining results
# Combining Results
# -----------------
#
# Even if the results have been obtained by different evaluation processes, it
# possible to combine the resulting dataframes to analyze and plot the results.
# is possible to combine the resulting DataFrames to analyze and plot the
# results.

all_res = pd.concat([mne_res, adv_res, sk_res])

8 changes: 4 additions & 4 deletions examples/advanced_examples/plot_select_electrodes_resample.py
@@ -1,6 +1,6 @@
"""
================================
Select electrodes and resampling
Select Electrodes and Resampling
================================

Within a paradigm, it is possible to restrict the analysis to only a subset of
@@ -30,7 +30,7 @@
# Datasets
# --------
#
# Load 2 subjects of BNCI 2014-004 and Zhou2016 datasets, with 2 session each
# Load 2 subjects of BNCI 2014-004 and Zhou2016 datasets, with 2 sessions each

subj = [1, 2]
datasets = [Zhou2016(), BNCI2014001()]
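# Keep only the chosen subjects in each dataset (a sketch; the collapsed code
# below may do this differently):
for d in datasets:
    d.subject_list = subj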
@@ -63,7 +63,7 @@
print(results.head())

##############################################################################
# Electrode selection
# Electrode Selection
# -------------------
#
# It is possible to select the electrodes that are shared by all datasets
@@ -79,7 +79,7 @@
print(results.head())

##############################################################################
# Plot results
# Plot Results
# ------------
#
# Compare the obtained results with the two pipelines, CSP+LDA and logistic
25 changes: 13 additions & 12 deletions examples/advanced_examples/plot_statistical_analysis.py
@@ -1,4 +1,5 @@
"""=======================
"""
=======================
Statistical Analysis
=======================

@@ -40,20 +41,20 @@
# ---------------------
#
# First we need to set up a paradigm, dataset list, and some pipelines to
# test. This is explored more in the examples -- we choose a left vs right
# test. This is explored more in the examples -- we choose left vs right
# imagery paradigm with a single bandpass. There is only one dataset here but
# any number can be added without changing this workflow.
#
# Create pipelines
# Create Pipelines
# ----------------
#
# Pipelines must be a dict of sklearn pipeline transformers.
#
# The csp implementation from MNE is used. We selected 8 CSP components, as
# usually done in the litterature.
# The CSP implementation from MNE is used. We selected 8 CSP components, as
# usually done in the literature.
#
# The riemannian geometry pipeline consists in covariance estimation, tangent
# space mapping and finaly a logistic regression for the classification.
# The Riemannian geometry pipeline consists of covariance estimation, tangent
# space mapping and finally a logistic regression for the classification.

pipelines = {}
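# A sketch of the two pipelines described above (the keys match the plotting
# calls later in this example, while the parameter values and the classifier
# of the Riemannian pipeline are illustrative; the actual definitions sit in
# the collapsed code below):
from mne.decoding import CSP
from pyriemann.estimation import Covariances
from pyriemann.tangentspace import TangentSpace
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA
from sklearn.pipeline import make_pipeline

pipelines["CSP+LDA"] = make_pipeline(CSP(n_components=8), LDA())
pipelines["RG+LDA"] = make_pipeline(
    Covariances(estimator="oas"), TangentSpace(), LDA()
)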

@@ -70,7 +71,7 @@
# ----------
#
# We define the paradigm (LeftRightImagery) and the dataset (BNCI2014001).
# The evaluation will return a dataframe containing a single AUC score for
# The evaluation will return a DataFrame containing a single AUC score for
# each subject / session of the dataset, and for each pipeline.
#
# Results are saved into the database, so that if you add a new pipeline, it
@@ -89,7 +90,7 @@
results = evaluation.process(pipelines)
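# Quick sanity check on the scores (column names are assumed from the MOABB
# results format):
print(results[["subject", "session", "pipeline", "score"]].head())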

##############################################################################
# MOABB plotting
# MOABB Plotting
# ----------------
#
# Here we plot the results using some of the convenience methods within the
@@ -109,7 +110,7 @@
plt.show()

###############################################################################
# Statistical testing and further plots
# Statistical Testing and Further Plots
# ----------------------------------------
#
# If the statistical significance of results is of interest, the method
@@ -124,13 +125,13 @@
###############################################################################
# The meta-analysis style plot shows the standardized mean difference within
# each tested dataset for the two algorithms in question, in addition to a
# meta-effect and significances both per-dataset and overall.
# meta-effect and significance both per-dataset and overall.
fig = moabb_plt.meta_analysis_plot(stats, "CSP+LDA", "RG+LDA")
plt.show()

###############################################################################
# The summary plot shows the effect and significance related to the hypothesis
# that the algorithm on the y-axis significantly out-performed the algorithm on
# that the algorithm on the y-axis significantly outperformed the algorithm on
# the x-axis over all datasets.
moabb_plt.summary_plot(P, T)
plt.show()