dmalt/meg_speech_decoding

Quickstart

We tested these instructions on Ubuntu Linux, but in principle they should work on any platform, although Windows and macOS will require a different conda environment setup (see the note in step 3).

Installation prerequisites:

TL;DR (Linux)

Installation:

git clone --recurse-submodules https://github.com/dmalt/meg_speech_decoding.git && \
pip install dvc dvc[gdrive] && \
cd meg_speech_decoding/speech_meg && \
dvc pull -r test --glob "rawdata/derivatives/*/sub-test.dvc" && \
cd .. && \
conda env create -f environment_freeze.yml && \
conda activate speechdl3.9 && \
pip install --no-deps -e neural_data_preprocessing && \
pip install --no-deps -e speech_meg

Launch:

python regression_speech.py +experiment=test

Installation

  1. Clone this project with submodules:

    git clone --recurse-submodules https://github.com/dmalt/meg_speech_decoding.git
  2. Load the test data

    Use DVC to load the data stored on GDrive. Install dvc and gdrive extension:

    pip install dvc dvc[gdrive]

    From the meg_speech_decoding/speech_meg folder run:

    dvc pull -r test --glob "rawdata/derivatives/*/sub-test.dvc"

    N.B.

    The data download should start after Gmail account authentication. You'll see some warnings about "some cache files not existing locally nor on remote". This is normal behaviour: the test remote stores no subjects other than the test one and doesn't contain other files that dvc expects to find.

    More info on the loaded data structure here

  3. Setup and activate conda virtual env

    From the meg_speech_decoding folder run:

    conda env create -f environment_freeze.yml
    conda activate speechdl3.9

    N.B.

    The frozen environment file is for Linux, since conda packages are not cross-platform. On Windows or macOS use conda env create -f environment.yml, which will solve the environment for you. Note that solving the environment with conda might take a very long time (not tested). If conda freezes, we recommend trying mamba instead.

  4. Install the submodules

    From the meg_speech_decoding folder run:

    pip install --no-deps -e neural_data_preprocessing
    pip install --no-deps -e speech_meg
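
A note on the --glob flag used in step 2: the pattern rawdata/derivatives/*/sub-test.dvc selects only the test subject's .dvc pointer files across the derivative folders. As a rough stdlib illustration of the matching (the folder names below are hypothetical examples, and dvc's glob rules may differ in corner cases):

```python
from fnmatch import fnmatch

# Hypothetical .dvc paths; the actual derivative folder names may differ
paths = [
    "rawdata/derivatives/megprocessed/sub-test.dvc",
    "rawdata/derivatives/megprocessed/sub-01.dvc",
    "rawdata/derivatives/audio/sub-test.dvc",
]
pattern = "rawdata/derivatives/*/sub-test.dvc"
selected = [p for p in paths if fnmatch(p, pattern)]
print(selected)  # only the sub-test entries match
```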

Launch

Again, make sure meg_speech_decoding is the current working directory.

Launch training for regression with:

python regression_speech.py +experiment=test

Launch training for classification with:

python classification_overtcovert.py +experiment=test

The script will save the model dump, TensorBoard stats, logs, etc. in outputs/ under unique date and time subfolders.
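
Hydra's default layout nests each run under outputs/&lt;date&gt;/&lt;time&gt;. A small helper to locate the most recent run directory (a sketch assuming that default layout; the helper name is ours, not part of this repo):

```python
from pathlib import Path

def latest_run(outputs="outputs"):
    """Return the newest outputs/<date>/<time> run directory, or None.

    Assumes Hydra's default outputs/YYYY-MM-DD/HH-MM-SS layout, where
    lexicographic order of the paths matches chronological order.
    """
    runs = sorted(p for p in Path(outputs).glob("*/*") if p.is_dir())
    return runs[-1] if runs else None
```

TensorBoard can then be pointed at a single run with tensorboard --logdir on that directory, or at outputs/ as a whole to compare runs.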

Configuration

  • Main configuration file for regression: configs/regression_speech_config.yaml
  • Main configuration file for classification: configs/classification_overtcovert_config.yaml

Configuration files are available at configs/.

The main configuration file for each script determines how Hydra assembles the final configuration from the files in configs/. From this final configuration Hydra generates the CLI parameters that can be passed to the launch scripts.
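
As an illustration of the mechanism only (not this repo's code), Hydra-style dotted overrides such as train.lr=0.001 are merged into a nested configuration tree; the + prefix, which Hydra uses to add a new key, is simply stripped in this minimal stdlib sketch:

```python
def apply_overrides(config, overrides):
    """Merge Hydra-style "a.b.c=value" strings into a nested dict (illustrative only)."""
    for item in overrides:
        dotted_key, _, value = item.lstrip("+").partition("=")
        *parents, leaf = dotted_key.split(".")
        node = config
        for key in parents:
            node = node.setdefault(key, {})  # descend, creating nodes as needed
        node[leaf] = value
    return config

# Hypothetical keys, for demonstration only
cfg = apply_overrides({"train": {"lr": "0.01"}}, ["train.lr=0.001", "+experiment=test"])
print(cfg)  # {'train': {'lr': '0.001'}, 'experiment': 'test'}
```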
