Release review comments #52

Merged · 16 commits · Jun 7, 2024
Changes from 13 commits
10 changes: 1 addition & 9 deletions CHANGELOG.md
@@ -3,7 +3,7 @@
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/)
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

## [v1.0.0](https://github.com/nf-core/reportho/releases/tag/1.0.0) - Magnificent Mainsail - [2024-06-06]
## [v1.0.0](https://github.com/nf-core/reportho/releases/tag/1.0.0) - Magnificent Mainsail - [2024-06-07]

Although its location and design may vary greatly, the mainsail is always a key source of propulsion for a ship.

@@ -30,10 +30,6 @@ The pipeline was created. In particular, it has the following features:
- basic downstream analysis of the obtained ortholog list
- generation of a human-readable report

### `Fixed`

Nothing yet.

### `Dependencies`

The pipeline has the following notable dependencies:
@@ -60,7 +56,3 @@ At release date, the following database versions were current and used for testing
| PANTHER | 18 |
| OrthoInspector | Eukaryota2023 |
| EggNOG | 5.0 |

### `Deprecated`

Nothing.
4 changes: 2 additions & 2 deletions conf/test_fasta.config
@@ -11,8 +11,8 @@
*/

params {
config_profile_name = 'Test profile'
config_profile_description = 'Minimal test dataset to check pipeline function'
config_profile_name = 'Test profile with FASTA input'
config_profile_description = 'Minimal test dataset to check pipeline function with FASTA input'

// Limit resources so that this can run on GitHub Actions
max_cpus = 2
4 changes: 2 additions & 2 deletions conf/test_offline.config
@@ -11,8 +11,8 @@
*/

params {
config_profile_name = 'Test profile'
config_profile_description = 'Minimal test dataset to check pipeline function'
config_profile_name = 'Test profile with offline databases'
config_profile_description = 'Minimal test dataset to check pipeline function with offline databases'

// Limit resources so that this can run on GitHub Actions
max_cpus = 2
Binary file removed docs/images/nf-core-reportho_tube_map_beta.png
2 changes: 1 addition & 1 deletion docs/images/reportho_tube_map.svg
(SVG image diff not rendered)
1 change: 1 addition & 0 deletions modules/local/dump_params.nf
@@ -2,6 +2,7 @@ process DUMP_PARAMS {
tag "$meta.id"
label 'process_single'

conda "conda-forge::coreutils=9.5"
container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
'https://depot.galaxyproject.org/singularity/ubuntu:20.04' :
'nf-core/ubuntu:20.04' }"
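The added `conda` directive brings this module in line with the usual nf-core pattern of declaring both a conda environment and a container, so the same process runs under whichever profile is active. A minimal sketch of that pattern, with a hypothetical process name, inputs, and script standing in for the real module:

```nextflow
process EXAMPLE_SINGLE {
    tag "$meta.id"
    label 'process_single'

    // Used when the pipeline runs with a conda/mamba profile...
    conda "conda-forge::coreutils=9.5"
    // ...otherwise Singularity pulls the depot image and Docker the nf-core one.
    container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
        'https://depot.galaxyproject.org/singularity/ubuntu:20.04' :
        'nf-core/ubuntu:20.04' }"

    input:
    tuple val(meta), val(text)

    output:
    tuple val(meta), path("*.txt"), emit: txt

    script:
    """
    echo '${text}' > ${meta.id}.txt
    """
}
```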
2 changes: 1 addition & 1 deletion modules/local/fetch_panther_group_online.nf
@@ -11,7 +11,7 @@ process FETCH_PANTHER_GROUP_ONLINE {
tuple val(meta), path(uniprot_id), path(taxid), path(exact)

output:
tuple val(meta), path("*_panther_group.csv"), emit:panther_group
tuple val(meta), path("*_panther_group.csv"), emit: panther_group
path "versions.yml" , emit: versions

when:
2 changes: 1 addition & 1 deletion modules/local/plot_orthologs.nf
@@ -14,7 +14,7 @@ process PLOT_ORTHOLOGS {
tuple val(meta), path("*_supports_light.png"), path("*_supports_dark.png"), emit: supports
tuple val(meta), path("*_venn_light.png"), path("*_venn_dark.png") , emit: venn
tuple val(meta), path("*_jaccard_light.png"), path("*_jaccard_dark.png") , emit: jaccard
path "versions.yml" , emit: versions
path "versions.yml" , emit: versions

when:
task.ext.when == null || task.ext.when
2 changes: 1 addition & 1 deletion modules/local/plot_tree.nf
@@ -13,7 +13,7 @@ process PLOT_TREE {

output:
tuple val(meta), path("*_light.png"), path("*_dark.png") , emit: plot
path "versions.yml" , emit: versions
path "versions.yml" , emit: versions

when:
task.ext.when == null || task.ext.when
2 changes: 1 addition & 1 deletion nextflow_schema.json
@@ -65,7 +65,7 @@
"local_databases": {
"type": "boolean",
"default": "false",
"description": "Use local databases for the analysis. If use_all is set to `true`, online databases might still be used.",
"description": "Use local databases for the analysis.",
"help_text": "If set to `true`, the pipeline will use local databases for the analysis.",
"fa_icon": "fas fa-database"
},
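The shorter description drops the note about `use_all`; the flag itself just tells the pipeline to read from locally provided database files. For context, a hypothetical configuration pairing `local_databases` with the `*_path` parameters that appear elsewhere in this PR (file locations are placeholders, not real paths):

```nextflow
// Illustrative config only; the paths below are placeholders.
params {
    local_databases   = true
    oma_path          = '/path/to/oma_groups'
    oma_uniprot_path  = '/path/to/oma_uniprot_map'
    oma_ensembl_path  = '/path/to/oma_ensembl_map'
    oma_refseq_path   = '/path/to/oma_refseq_map'
    panther_path      = '/path/to/panther_orthologs'
    eggnog_path       = '/path/to/eggnog_groups'
    eggnog_idmap_path = '/path/to/eggnog_idmap'
}
```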
22 changes: 0 additions & 22 deletions subworkflows/local/fetch_sequences.nf

This file was deleted.

19 changes: 0 additions & 19 deletions subworkflows/local/fetch_structures.nf

This file was deleted.

16 changes: 7 additions & 9 deletions subworkflows/local/get_orthologs.nf
@@ -23,19 +23,18 @@ workflow GET_ORTHOLOGS {
take:
ch_samplesheet_query
ch_samplesheet_fasta
ch_oma_groups
ch_oma_uniprot
ch_oma_ensembl
ch_oma_refseq
ch_panther
ch_eggnog
ch_eggnog_idmap

main:
ch_versions = Channel.empty()
ch_orthogroups = Channel.empty()

ch_oma_groups = params.oma_path ? Channel.value(file(params.oma_path)) : Channel.empty()
ch_oma_uniprot = params.oma_uniprot_path ? Channel.value(file(params.oma_uniprot_path)) : Channel.empty()
ch_oma_ensembl = params.oma_ensembl_path ? Channel.value(file(params.oma_ensembl_path)) : Channel.empty()
ch_oma_refseq = params.oma_refseq_path ? Channel.value(file(params.oma_refseq_path)) : Channel.empty()
ch_panther = params.panther_path ? Channel.value(file(params.panther_path)) : Channel.empty()
ch_eggnog = params.eggnog_path ? Channel.value(file(params.eggnog_path)) : Channel.empty()
ch_eggnog_idmap = params.eggnog_idmap_path ? Channel.value(file(params.eggnog_idmap_path)) : Channel.empty()

ch_samplesheet_fasta.map {
if (params.offline_run) {
error "Tried to use FASTA input in an offline run. Aborting pipeline for user safety."
@@ -53,7 +52,6 @@ workflow GET_ORTHOLOGS {
ch_fasta
)

ch_query = IDENTIFY_SEQ_ONLINE.out.seqinfo
ch_versions = ch_versions.mix(IDENTIFY_SEQ_ONLINE.out.versions)

WRITE_SEQINFO (
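The net effect of this hunk is that `GET_ORTHOLOGS` no longer builds channels from `params` itself: it receives them through `take:`, and the caller decides how optional database paths become channels. A self-contained sketch of that split, using hypothetical names (`EXAMPLE_ORTHOLOGS`, `params.database_path`) rather than the real ones:

```nextflow
workflow EXAMPLE_ORTHOLOGS {
    take:
    ch_queries    // queue channel of [ meta, id ] tuples
    ch_database   // value channel holding an optional local database file

    main:
    ch_versions = Channel.empty()
    // ...processes consuming ch_queries and ch_database would go here...

    emit:
    versions = ch_versions
}

workflow {
    // The caller, not the subworkflow, turns optional params into channels.
    ch_queries  = Channel.of( [ [id: 'query1'], 'P12345' ] )
    ch_database = params.database_path
        ? Channel.value( file(params.database_path) )
        : Channel.empty()

    EXAMPLE_ORTHOLOGS ( ch_queries, ch_database )
}
```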
45 changes: 31 additions & 14 deletions workflows/reportho.nf
@@ -11,12 +11,13 @@ include { softwareVersionsToYAML } from '../subworkflows/nf-core/utils_nfcore_pi
include { methodsDescriptionText } from '../subworkflows/local/utils_nfcore_reportho_pipeline'

include { GET_ORTHOLOGS } from '../subworkflows/local/get_orthologs'
include { FETCH_SEQUENCES } from '../subworkflows/local/fetch_sequences'
include { FETCH_STRUCTURES } from '../subworkflows/local/fetch_structures'
include { ALIGN } from '../subworkflows/local/align'
include { MAKE_TREES } from '../subworkflows/local/make_trees'
include { REPORT } from '../subworkflows/local/report'

include { FETCH_SEQUENCES_ONLINE } from '../subworkflows/local/fetch_sequences_online'
include { FETCH_AFDB_STRUCTURES } from '../subworkflows/local/fetch_afdb_structures'

/*
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
RUN MAIN WORKFLOW
@@ -35,9 +36,24 @@ workflow REPORTHO {
ch_multiqc_files = Channel.empty()
ch_fasta_query = ch_samplesheet_query.map { [it[0], []] }.mix(ch_samplesheet_fasta.map { [it[0], file(it[1])] })

ch_oma_groups = params.oma_path ? Channel.value(file(params.oma_path)) : Channel.empty()
ch_oma_uniprot = params.oma_uniprot_path ? Channel.value(file(params.oma_uniprot_path)) : Channel.empty()
ch_oma_ensembl = params.oma_ensembl_path ? Channel.value(file(params.oma_ensembl_path)) : Channel.empty()
ch_oma_refseq = params.oma_refseq_path ? Channel.value(file(params.oma_refseq_path)) : Channel.empty()
ch_panther = params.panther_path ? Channel.value(file(params.panther_path)) : Channel.empty()
ch_eggnog = params.eggnog_path ? Channel.value(file(params.eggnog_path)) : Channel.empty()
ch_eggnog_idmap = params.eggnog_idmap_path ? Channel.value(file(params.eggnog_idmap_path)) : Channel.empty()

GET_ORTHOLOGS (
ch_samplesheet_query,
ch_samplesheet_fasta
ch_samplesheet_fasta,
ch_oma_groups,
ch_oma_uniprot,
ch_oma_ensembl,
ch_oma_refseq,
ch_panther,
ch_eggnog,
ch_eggnog_idmap
)

ch_versions = ch_versions.mix(GET_ORTHOLOGS.out.versions)
@@ -55,30 +71,31 @@
ch_fastme = ch_samplesheet.map { [it[0], []] }

if (!params.skip_downstream) {
FETCH_SEQUENCES (
GET_ORTHOLOGS.out.orthologs,
ch_fasta_query
ch_sequences_input = GET_ORTHOLOGS.out.orthologs.join(ch_fasta_query)

FETCH_SEQUENCES_ONLINE (
ch_sequences_input
)

ch_seqhits = FETCH_SEQUENCES.out.hits
ch_seqhits = FETCH_SEQUENCES_ONLINE.out.hits

ch_seqmisses = FETCH_SEQUENCES.out.misses
ch_seqmisses = FETCH_SEQUENCES_ONLINE.out.misses

ch_versions = ch_versions.mix(FETCH_SEQUENCES.out.versions)
ch_versions = ch_versions.mix(FETCH_SEQUENCES_ONLINE.out.versions)

if (params.use_structures) {
FETCH_STRUCTURES (
FETCH_AFDB_STRUCTURES (
GET_ORTHOLOGS.out.orthologs
)

ch_strhits = FETCH_STRUCTURES.out.hits
ch_strhits = FETCH_AFDB_STRUCTURES.out.hits

ch_strmisses = FETCH_STRUCTURES.out.misses
ch_strmisses = FETCH_AFDB_STRUCTURES.out.misses

ch_versions = ch_versions.mix(FETCH_STRUCTURES.out.versions)
ch_versions = ch_versions.mix(FETCH_AFDB_STRUCTURES.out.versions)
}

ch_structures = params.use_structures ? FETCH_STRUCTURES.out.structures : Channel.empty()
ch_structures = params.use_structures ? FETCH_AFDB_STRUCTURES.out.structures : Channel.empty()

ALIGN (
FETCH_SEQUENCES.out.sequences,
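One more note on the rewiring above: `GET_ORTHOLOGS.out.orthologs.join(ch_fasta_query)` pairs the ortholog lists with the optional query FASTA by their shared `meta` key before `FETCH_SEQUENCES_ONLINE` consumes them. A small illustration of the `join` operator with made-up sample values:

```nextflow
workflow {
    ch_orthologs = Channel.of(
        [ [id: 'sampleA'], 'sampleA_orthologs.csv' ],
        [ [id: 'sampleB'], 'sampleB_orthologs.csv' ]
    )
    ch_fasta_query = Channel.of(
        [ [id: 'sampleA'], [] ],               // query given as an accession, no FASTA
        [ [id: 'sampleB'], 'sampleB.fasta' ]   // query given as a FASTA file
    )

    // join matches tuples on their first element (the meta map),
    // emitting [ meta, orthologs, fasta ] for each sample.
    ch_orthologs.join(ch_fasta_query).view()
}
```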