Merge remote-tracking branch 'upstream/main' into ica
* upstream/main:
  change default for info to use for inverse mne-tools#905 (mne-tools#919)
  Improve documentation and config validation of `loose` and `depth` parameters; drop support for `loose=None` (mne-tools#915)
  enhance documentation of caching, continuation of mne-tools#914 (mne-tools#918)
  [pre-commit.ci] pre-commit autoupdate (mne-tools#917)
  Restructure configuration options documentation sections (mne-tools#914)
  Try to fix documentation deployment (mne-tools#913)
  Do not show `Annotated` types in configuration options documentation (mne-tools#911)
  Add number of subjects to grand-average report (cont'd) (mne-tools#910)
  MAINT: Ensure input changes cause output changes (mne-tools#904)
  Render type annotations in the documentation again (mne-tools#909)
larsoner committed Apr 16, 2024
2 parents 8438dc0 + cbeeb98 commit 56befac
Showing 23 changed files with 418 additions and 201 deletions.
8 changes: 8 additions & 0 deletions .circleci/config.yml
@@ -1030,6 +1030,10 @@ jobs:
- attach_workspace:
at: ~/
- bash_env
+      - run:
+          name: Install dependencies
+          command: |
+            pip install -ve .[docs]
- run:
name: Build documentation
command: |
@@ -1058,6 +1062,10 @@ jobs:
at: ~/
- bash_env
- gitconfig
+      - run:
+          name: Install dependencies
+          command: |
+            pip install -ve .[docs]
- run:
# This is a bit computationally inefficient, but it should be much
# faster to "cp" directly on the machine rather than persist
2 changes: 1 addition & 1 deletion .pre-commit-config.yaml
@@ -6,7 +6,7 @@ files: ^(.*\.(py|yaml))$
exclude: ^(\.[^/]*cache/.*|.*/freesurfer/contrib/.*)$
repos:
- repo: https://github.com/astral-sh/ruff-pre-commit
-    rev: v0.3.4
+    rev: v0.3.5
hooks:
- id: ruff
args: ["--fix"]
9 changes: 8 additions & 1 deletion docs/mkdocs.yml
@@ -103,7 +103,11 @@ nav:
- Source space & forward solution: settings/source/forward.md
- Inverse solution: settings/source/inverse.md
- Report generation: settings/reports/report_generation.md
-      - Execution: settings/execution.md
+      - Caching: settings/caching.md
+      - Parallelization: settings/parallelization.md
+      - Logging: settings/logging.md
+      - Error handling: settings/error_handling.md

- Examples:
- Examples Gallery: examples/examples.md
- examples/ds003392.md
@@ -152,6 +156,9 @@ plugins:
show_root_toc_entry: false
show_root_full_path: false
separate_signature: true
+          show_signature_annotations: true
+          unwrap_annotated: true
+          signature_crossrefs: true
line_length: 88 # Black's default
show_bases: false
docstring_style: numpy
13 changes: 11 additions & 2 deletions docs/source/settings/gen_settings.py
@@ -46,7 +46,13 @@
"reports": "reports",
"report generation": "report_generation",
# root file
"execution": "execution",
"caching": "caching",
# root file
"parallelization": "parallelization",
# root file
"logging": "logging",
# root file
"error handling": "error_handling",
}
# TODO: Make sure these are consistent, autogenerate some based on section names,
# and/or autogenerate based on inputs/outputs of actual functions.
@@ -76,7 +82,10 @@
"inverse solution": ("inverse-solution",),
"reports": (),
"report generation": ("report",),
"execution": (),
"caching": ("cache",),
"parallelization": ("paralleliation", "dask", "out-of-core"),
"logging": ("logging", "error-handling"),
"error handling": ("error-handling",),
}

extra_headers = {
27 changes: 19 additions & 8 deletions docs/source/v1.9.md.inc
@@ -2,26 +2,37 @@

### :new: New features & enhancements

-- Added number of subject to `sub-average` report (#902 by @SophieHerbst)
+- Added number of subjects to `sub-average` report (#902, #910 by @SophieHerbst)
- The type annotations in the default configuration file are now easier to read: We
-  replaced `Union[X, Y]` with `X | Y` and `Optional[X]` with `X | None`. (#908 by @hoechenberger)
-
-[//]: # (- Whatever (#000 by @whoever))
+  replaced `Union[X, Y]` with `X | Y` and `Optional[X]` with `X | None`. (#908, #911 by @hoechenberger)

### :warning: Behavior changes

- All ICA HTML reports have been consolidated in the standard subject `*_report.html`
file instead of producing separate files (#899 by @larsoner).
+- Changed default for `source_info_path_update` to `None`. In `_04_make_forward.py`
+  and `_05_make_inverse.py`, we retrieve the info from the file from which
+  the `noise_cov` is computed (#919 by @SophieHerbst)
+- The [`depth`][mne_bids_pipeline._config.depth] parameter doesn't accept `None`
+  anymore. Please use `0` instead. (#915 by @hoechenberger)
### :package: Requirements
-- We dropped support for Python 3.9. You now need Python 3.10 or newer.
+- We dropped support for Python 3.9. You now need Python 3.10 or newer. (#908 by @hoechenberger)
### :book: Documentation
-[//]: # (- Whatever (#000 by @whoever))
+- We removed the `Execution` section from the configuration options documentation and
+  replaced it with new, more explicit sections (namely Caching, Parallelization,
+  Logging, and Error handling), and enhanced the documentation. (#914 by @hoechenberger, #916 by @SophieHerbst)
-[//]: # (### :bug: Bug fixes)
+### :bug: Bug fixes
-[//]: # (- Whatever (#000 by @whoever))
+- When running the pipeline with [`find_bad_channels_meg`][mne_bids_pipeline._config.find_bad_channels_meg] enabled,
+  then disabling it and running the pipeline again, the pipeline would incorrectly still use automatically detected
+  bad channels from the first pipeline run. Now, we ensure that the original bad channels are used and the
+  related section is removed from the report in this case. (#902 by @larsoner)
+- Fixed a bug where group-average decoding statistics were not updated in some cases, even if relevant
+  configuration options had been changed. (#902 by @larsoner)
### :medical_symbol: Code health and infrastructure
145 changes: 94 additions & 51 deletions mne_bids_pipeline/_config.py
@@ -1989,24 +1989,33 @@ def mri_landmarks_kind(bids_path):

# ## Inverse solution

-loose: float | Literal["auto"] = 0.2
+loose: Annotated[float, Interval(ge=0, le=1)] | Literal["auto"] = 0.2
"""
-Value that weights the source variances of the dipole components
-that are parallel (tangential) to the cortical surface. If `0`, then the
-inverse solution is computed with **fixed orientation.**
-If `1`, it corresponds to **free orientation.**
-The default value, `'auto'`, is set to `0.2` for surface-oriented source
-spaces, and to `1.0` for volumetric, discrete, or mixed source spaces,
-unless `fixed is True` in which case the value 0. is used.
+A value between 0 and 1 that weights the source variances of the dipole components
+that are parallel (tangential) to the cortical surface.
+If `0`, then the inverse solution is computed with **fixed orientation**, i.e.,
+only dipole components perpendicular to the cortical surface are considered.
+If `1`, it corresponds to **free orientation**, i.e., dipole components with any
+orientation are considered.
+The default value, `0.2`, is suitable for surface-oriented source spaces.
+For volume or mixed source spaces, choose `1.0`.
+
+!!! info
+    Support for modeling volume and mixed source spaces will be added in a future
+    version of MNE-BIDS-Pipeline.
"""

-depth: float | dict | None = 0.8
+depth: Annotated[float, Interval(ge=0, le=1)] | dict = 0.8
"""
-If float (default 0.8), it acts as the depth weighting exponent (`exp`)
-to use (must be between 0 and 1). None is equivalent to 0, meaning no
-depth weighting is performed. Can also be a `dict` containing additional
-keyword arguments to pass to :func:`mne.forward.compute_depth_prior`
-(see docstring for details and defaults).
+If a number, it acts as the depth weighting exponent to use
+(must be between `0` and `1`), with `0` meaning no depth weighting is performed.
+Can also be a dictionary containing additional keyword arguments to pass to
+`mne.forward.compute_depth_prior` (see docstring for details and defaults).
"""

inverse_method: Literal["MNE", "dSPM", "sLORETA", "eLORETA"] = "dSPM"
@@ -2099,22 +2108,35 @@ def noise_cov(bids_path):
of `mne.compute_covariance` for details.
"""

-source_info_path_update: dict[str, str] | None = dict(suffix="ave")
+source_info_path_update: dict[str, str] | None = None
"""
-When computing the forward and inverse solutions, by default the pipeline
-retrieves the `mne.Info` object from the cleaned evoked data. However, in
-certain situations you may wish to use a different `Info`.
+When computing the forward and inverse solutions, it is important to
+provide the `mne.Info` object from the data on which the noise covariance was
+computed, to avoid problems resulting from mismatching ranks.
+This parameter allows you to explicitly specify from which file to retrieve the
+`mne.Info` object. Use this parameter to supply a dictionary to
+`BIDSPath.update()` during the forward and inverse processing steps.
+If set to `None` (default), the info will be retrieved either from the raw
+file specified in `noise_cov`, or from the cleaned evoked data
+(if `noise_cov` is `None` or `'ad-hoc'`).
+???+ example "Example"
+    Use the `Info` object stored in the cleaned epochs:
+    ```python
+    source_info_path_update = {'processing': 'clean',
+                               'suffix': 'epo'}
+    ```
+    Use the `Info` object stored in a raw file (e.g., resting state):
+    ```python
+    source_info_path_update = {'processing': 'clean',
+                               'suffix': 'raw',
+                               'task': 'rest'}
+    ```
+    If you set `noise_cov = 'rest'` and `source_info_path_update = None`,
+    then the behavior is identical to that above
+    (it will automatically use the resting-state data).
"""

inverse_targets: list[Literal["evoked"]] = ["evoked"]
@@ -2182,10 +2204,48 @@ def noise_cov(bids_path):
"""

# %%
-# # Execution
+# # Caching
 #
-# These options control how the pipeline is executed but should not affect
-# what outputs get produced.
+# By default, the pipeline output is cached (temporarily stored),
+# to avoid unnecessary reruns of previously computed steps.
+# Yet, for consistency, changes in configuration parameters trigger
+# automatic reruns of previous steps.
+#
+# !!! info
+#     To force rerunning a given step, run the pipeline with the `--no-cache` option.

+memory_location: PathLike | bool | None = True
+"""
+If not None (or False), caching will be enabled and the cache files will be
+stored in the given directory. The default (True) will use a
+`"_cache"` subdirectory (name configurable via the
+[`memory_subdir`][mne_bids_pipeline._config.memory_subdir]
+variable) in the BIDS derivative root of the dataset.
+"""
+
+memory_subdir: str = "_cache"
+"""
+The caching directory name to use if `memory_location` is `True`.
+"""
+
+memory_file_method: Literal["mtime", "hash"] = "mtime"
+"""
+The method to use for cache invalidation (i.e., detecting changes). Using the
+"modified time" reported by the filesystem (`'mtime'`, default) is very fast
+but requires that the filesystem supports proper mtime reporting. Using file
+hashes (`'hash'`) is slower and requires reading all input files but should
+work on any filesystem.
+"""
+
+memory_verbose: int = 0
+"""
+The verbosity to use when using memory. The default (0) does not print, while
+1 will print the function calls that will be cached. See the documentation for
+the joblib.Memory class for more information."""
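
Put together, a cache setup in a user configuration might look like this (a sketch with illustrative values; `"hash"` is chosen here only to demonstrate the alternative to the `"mtime"` default):

```python
# Hypothetical caching configuration.
memory_location = True       # cache inside the BIDS derivatives root
memory_subdir = "_cache"     # the default cache directory name
memory_file_method = "hash"  # safer than "mtime" on filesystems with coarse mtimes
memory_verbose = 1           # print which function calls get cached
```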

+# %%
+# # Parallelization
+#
+# These options control parallel processing (e.g., multiple subjects at once).

n_jobs: int = 1
"""
@@ -2225,6 +2285,11 @@ def noise_cov(bids_path):
The maximum amount of RAM per Dask worker.
"""

+# %%
+# # Logging
+#
+# These options control how much logging output is produced.

log_level: Literal["info", "error"] = "info"
"""
Set the pipeline logging verbosity.
@@ -2235,6 +2300,13 @@
Set the MNE-Python logging verbosity.
"""


+# %%
+# # Error handling
+#
+# These options control how errors while processing the data or the configuration file
+# are handled.

on_error: Literal["continue", "abort", "debug"] = "abort"
"""
Whether to abort processing as soon as an error occurs, continue with all other
@@ -2245,35 +2317,6 @@
Enabling debug mode deactivates parallel processing.
"""

-memory_location: PathLike | bool | None = True
-"""
-If not None (or False), caching will be enabled and the cache files will be
-stored in the given directory. The default (True) will use a
-`"_cache"` subdirectory (name configurable via the
-[`memory_subdir`][mne_bids_pipeline._config.memory_subdir]
-variable) in the BIDS derivative root of the dataset.
-"""
-
-memory_subdir: str = "_cache"
-"""
-The caching directory name to use if `memory_location` is `True`.
-"""
-
-memory_file_method: Literal["mtime", "hash"] = "mtime"
-"""
-The method to use for cache invalidation (i.e., detecting changes). Using the
-"modified time" reported by the filesystem (`'mtime'`, default) is very fast
-but requires that the filesystem supports proper mtime reporting. Using file
-hashes (`'hash'`) is slower and requires reading all input files but should
-work on any filesystem.
-"""
-
-memory_verbose: int = 0
-"""
-The verbosity to use when using memory. The default (0) does not print, while
-1 will print the function calls that will be cached. See the documentation for
-the joblib.Memory class for more information."""

config_validation: Literal["raise", "warn", "ignore"] = "raise"
"""
How strictly to validate the configuration. Errors are always raised for
1 change: 0 additions & 1 deletion mne_bids_pipeline/_config_import.py
@@ -344,7 +344,6 @@ def _default_factory(key, val):
allowlist = [
{"n_mag": 1, "n_grad": 1, "n_eeg": 1}, # n_proj_*
{"custom": (8, 24.0, 40)}, # decoding_csp_freqs
{"suffix": "ave"}, # source_info_path_update
["evoked"], # inverse_targets
[4, 8, 16], # autoreject_n_interpolate
]
7 changes: 4 additions & 3 deletions mne_bids_pipeline/_import_data.py
@@ -816,12 +816,13 @@ def _import_data_kwargs(*, config: SimpleNamespace, subject: str) -> dict:
)


-def _get_run_type(
+def _read_raw_msg(
     bids_path_in: BIDSPath,
     run: str | None,
     task: str | None,
-) -> str:
+) -> tuple[str, str]:
     if run is None and task in ("noise", "rest"):
         run_type = dict(rest="resting-state", noise="empty-room")[task]
     else:
         run_type = "experimental"
-    return run_type
+    return f"Reading {run_type} recording: {bids_path_in.basename}", run_type
1 change: 1 addition & 0 deletions mne_bids_pipeline/_logging.py
@@ -117,6 +117,7 @@ def gen_log_kwargs(
emoji: str = "⏳️",
) -> LogKwargsT:
# Try to figure these out
+    assert isinstance(message, str), type(message)
stack = inspect.stack()
up_locals = stack[1].frame.f_locals
if subject is None:
4 changes: 2 additions & 2 deletions mne_bids_pipeline/_report.py
@@ -209,7 +209,7 @@ def _plot_mean_cv_score(x, **kwargs):
g.set_xlabels("")

fig = g.fig
-    return fig, caption
+    return fig, caption, data


def _plot_time_by_time_decoding_scores(
Expand Down Expand Up @@ -603,7 +603,7 @@ def add_csp_grand_average(
import matplotlib.pyplot as plt # nested import to help joblib

# First, plot decoding scores across frequency bins (entire epochs).
section = "Decoding: CSP"
section = f"Decoding: CSP, N = {len(cfg.subjects)}"
freq_name_to_bins_map, _ = _handle_csp_args(
cfg.decoding_csp_times,
cfg.decoding_csp_freqs,