
improve docs + changelog
gagnonanthony committed Dec 23, 2024
1 parent 84441d3 commit 97176fc
Showing 3 changed files with 82 additions and 9 deletions.
11 changes: 11 additions & 0 deletions CHANGELOG.md
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/)
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

## [Unreleased] - [2024-12-23]

### `Added`

- Update minimal `nextflow` version to 24.10.0.
- Add subject and global-level QC using the MultiQC module.
- If `-profile freesurfer` is used, do not run T1 preprocessing.
- Optional DWI preprocessing using parameters.
- New docker containers for fastsurfer, atlases, and QC.
- Transform computation in `segmentation/fastsurfer`.

## [Unreleased] - [2024-11-21]

### `Added`
77 changes: 71 additions & 6 deletions README.md

![nf-pediatric-schema](/assets/nf-pediatric-schema.svg)

A detailed description of the minimal outputs can be found [here](/docs/output.md).

## Usage

> [!NOTE]
> If you are new to Nextflow and nf-core, please refer to [this page](https://nf-co.re/docs/usage/installation) on how to set up Nextflow.
`nf-pediatric` core functionalities are accessed and selected using profiles. This means users can select which parts of the pipeline they want to run depending on their specific aims and the current state of their data (already preprocessed or not). Depending on your selection, the mandatory inputs will change; please see the [inputs](/docs/usage.md) documentation for a comprehensive overview of each profile. As of now, here is a list of the available profiles and a short description of their processing steps:

**Processing profiles**:

1. `-profile freesurfer`: By selecting this profile, [FreeSurfer `recon-all`](https://surfer.nmr.mgh.harvard.edu/) or [FastSurfer](https://deep-mi.org/research/fastsurfer/) will be used to process the T1w images, and the Brainnetome Child Atlas ([Li et al., 2022](https://doi.org/10.1093/cercor/bhac415)) will be registered to the native subject space using surface-based methods.
1. `-profile tracking`: This is the core profile behind `nf-pediatric`. By selecting it, DWI data will be preprocessed (denoised, corrected for distortion, normalized, resampled, ...). In parallel, the T1w image will be preprocessed (if `-profile freesurfer` is not selected), registered into diffusion space, and segmented to extract tissue masks. The preprocessed DWI data will then be used to fit both the DTI and fODF models. As the final step, whole-brain tractography will be performed using either local tracking or particle filter tracking (PFT).
1. `-profile connectomics`: By selecting this profile, the whole-brain tractogram will be filtered to remove false-positive streamlines, and labels will be registered into diffusion space and used to segment the tractogram into individual connections. Following segmentation, connectivity matrices will be computed for a variety of metrics and output as NumPy arrays usable for further statistical analysis.
1. `-profile infant`: As opposed to the other profiles, the `infant` profile does not enable a specific block of processing steps; instead, it changes various configs and parameters to adapt the existing profiles to infant data (<2 years). This profile is meant to be used in conjunction with the others (with the exception of `-profile freesurfer`, which is unavailable for infant data for now).

**Configuration profiles**:

1. `-profile docker`: Each process will be run using docker containers.
1. `-profile apptainer` or `-profile singularity`: Each process will be run using apptainer/singularity images.
1. `-profile arm`: Made to be used on computers with an ARM architecture.
1. `-profile no_symlink`: By default, the results directory contains symlinks to files within the `work` directory. By selecting this profile, results will be copied from the work directory without the use of symlinks.
1. `-profile slurm`: If selected, the SLURM job scheduler will be used to dispatch jobs.

**Using either `-profile docker` or `-profile apptainer` is highly recommended, as it controls the software versions used and avoids having to install all the required software manually.**

For example, to perform the end-to-end connectomics pipeline, users should select `-profile tracking,freesurfer,connectomics` for pediatric data and `-profile infant,tracking,connectomics` for infant data. Once you have selected your profiles, you can check which input files are mandatory [here](/docs/usage.md). In addition to profile selection, users can change default parameters using command-line arguments at runtime. To view a list of the parameters that can be customized, use the `--help` argument as follows:

```bash
nextflow run scilus/nf-pediatric -r main --help
```

### Input specification

The pipeline's required input is a samplesheet (`.csv` file) containing the paths to all your input files for all your subjects (for more details regarding which files are mandatory for each profile, see [here](/docs/usage.md)). For the most basic usage (`-profile tracking`), your input samplesheet should look like this:

`samplesheet.csv`:

```csv
subject,t1,t2,dwi,bval,bvec,rev_b0,labels,wmparc,trk,peaks,fodf,mat,warp,metrics
sub-1000,/input/sub-1000/t1.nii.gz,,/input/sub-1000/dwi.nii.gz,/input/sub-1000/dwi.bval,/input/sub-1000/dwi.bvec,/input/sub-1000/rev_b0.nii.gz
```
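If your data already follow a consistent per-subject folder layout, such a samplesheet can be generated with a few lines of shell. The layout below (`/tmp/demo/sub-*` with fixed file names) and the reduced column set are hypothetical stand-ins for this sketch; match the columns to the header your selected profiles actually require:

```shell
#!/usr/bin/env bash
# Sketch: build a minimal tracking-profile samplesheet from a per-subject
# folder layout. The demo layout and file names below are hypothetical.
set -euo pipefail

root=/tmp/demo
mkdir -p "$root/sub-1000"   # stand-in for your real data root
touch "$root/sub-1000/t1.nii.gz" "$root/sub-1000/dwi.nii.gz" \
      "$root/sub-1000/dwi.bval" "$root/sub-1000/dwi.bvec" \
      "$root/sub-1000/rev_b0.nii.gz"

out=samplesheet.csv
echo "subject,t1,dwi,bval,bvec,rev_b0" > "$out"   # reduced header for this sketch
for dir in "$root"/sub-*; do
    sub=$(basename "$dir")
    echo "$sub,$dir/t1.nii.gz,$dir/dwi.nii.gz,$dir/dwi.bval,$dir/dwi.bvec,$dir/rev_b0.nii.gz" >> "$out"
done
cat "$out"
```

Empty fields (e.g. a missing `t2`) are left as empty CSV cells so that every row keeps the same column order as the header.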

Each row represents a subject, and each column represents a specific file that can be passed as an input. The pipeline has only two required parameters that need to be supplied at runtime: `--input` and `--outdir`. Now, you can run the pipeline using:

```bash
nextflow run scilus/nf-pediatric \
    -r main \
    -profile tracking,docker \
    --input samplesheet.csv \
    --outdir results/
```

With this command, you will run the `tracking` profile for non-infant data.
> [!WARNING]
> Please provide pipeline parameters via the CLI or Nextflow `-params-file` option. Custom config files including those provided by the `-c` Nextflow option can be used to provide any configuration _**except for parameters**_; see [docs](https://nf-co.re/docs/usage/getting_started/configuration#custom-configuration-files).
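As a sketch of that recommendation, the two required parameters can live in a small YAML file passed via `-params-file` (the file name and values here are placeholders):

```shell
# Sketch: supply the required parameters through a params file instead of the CLI.
# File name and values are placeholders.
cat > params.yaml <<'EOF'
input: samplesheet.csv
outdir: results
EOF

# The pipeline would then be launched with (shown as a comment; requires Nextflow):
# nextflow run scilus/nf-pediatric -r main -profile tracking,docker -params-file params.yaml
```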

## Using `nf-pediatric` on compute nodes without internet access

Some compute nodes do not have internet access at runtime. Since the pipeline pulls containers from a remote registry during execution, it will not work if the nodes cannot reach the internet. Fortunately, containers can be downloaded prior to the pipeline execution and fetched locally at runtime. Using `nf-core` tools (for a detailed installation guide, see the [nf-core documentation](https://nf-co.re/docs/nf-core-tools/installation)), you can use the `nf-core pipelines download` command. To view the options before downloading, run `nf-core pipelines download -h`. To use the interactive prompts, simply run `nf-core pipelines download` as follows (*downloading all containers takes ~15 minutes*):

```bash
$ nf-core pipelines download -l docker.io


,--./,-.
___ __ __ __ ___ /,-._.--~\
|\ | |__ __ / ` / \ |__) |__ } {
| \| | \__, \__/ | \ |___ \`-._,-`-,
`._,._,'
nf-core/tools version 3.1.1 - https://nf-co.re
Specify the name of a nf-core pipeline or a GitHub repository name (user/repo).
? Pipeline name: scilus/nf-pediatric
WARNING Could not find GitHub authentication token. Some API requests may fail.
? Select release / branch: main [branch]
If you are working on the same system where you will run Nextflow, you can amend the downloaded images to the ones in the $NXF_SINGULARITY_CACHEDIR folder,
Nextflow will automatically find them. However if you will transfer the downloaded files to a different system then they should be copied to the target
folder.
? Copy singularity images from $NXF_SINGULARITY_CACHEDIR to the target folder or amend new images to the cache? copy
If transferring the downloaded files to another system, it can be convenient to have everything compressed in a single file.
This is not recommended when downloading Singularity images, as it can take a long time and saves very little space.
? Choose compression type: none
INFO Saving 'scilus/nf-pediatric'
Pipeline revision: 'main'
Use containers: 'singularity'
Container library: 'docker.io'
Using $NXF_SINGULARITY_CACHEDIR: '/home/gagnona/test-download'
Output directory: 'scilus-nf-pediatric_main'
Include default institutional configuration: 'False'
INFO Downloading workflow files from GitHub
```
**Once all images are downloaded, you need to set `NXF_SINGULARITY_CACHEDIR` to the directory in which you downloaded the images. You can either include it in your `.bashrc` or export it prior to launching the pipeline.**
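A minimal sketch of that setup, assuming the cache path used in the download example above (`~/test-download`); adjust the path to wherever you downloaded the images:

```shell
# Sketch: point Nextflow at the pre-downloaded images for an offline run.
# The cache path matches the download example above; adjust to your own.
export NXF_SINGULARITY_CACHEDIR="$HOME/test-download"
mkdir -p "$NXF_SINGULARITY_CACHEDIR"   # already exists after a real download

# Offline launch (shown as a comment; requires Nextflow and the downloaded workflow):
# nextflow run <path-to-downloaded-workflow> \
#     -profile tracking,singularity --input samplesheet.csv --outdir results/
```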

## Credits

nf-pediatric was originally written by Anthony Gagnon.
3 changes: 0 additions & 3 deletions docs/usage.md
They are loaded in sequence, so later profiles can overwrite earlier profiles.

If `-profile` is not specified, the pipeline will run locally and expect all software to be installed and available on the `PATH`. This is _not_ recommended, since it can lead to different results on different machines depending on the computer environment.

- `test`
- A profile with a complete configuration for automated testing
- Includes links to test data so needs no other parameters
- `docker`
- A generic configuration profile to be used with [Docker](https://docker.com/)
- `singularity`
