Merge remote-tracking branch 'upstream/dev' into fixes
JoseEspinosa committed May 28, 2024
2 parents 6cbe19d + 2da7da1 commit a512e7a
Showing 14 changed files with 47 additions and 30 deletions.
2 changes: 2 additions & 0 deletions .nf-core.yml
@@ -1,2 +1,4 @@
repository_type: pipeline
nf_core_version: "2.14.1"
lint:
files_exist: conf/igenomes.config
4 changes: 0 additions & 4 deletions .vscode/settings.json

This file was deleted.

14 changes: 8 additions & 6 deletions README.md
@@ -19,13 +19,13 @@

## Introduction

**nf-core/reportho** is a bioinformatics pipeline that compares and assembles orthology predictions for a query protein. It fetches ortholog lists for a query (or its closest annotated homolog) from public sources, calculates pairwise and global agreement, and generates a consensus list with the desired level of confidence. Optionally, it offers common analysis on the consensus orthologs, such as MSA and phylogeny reconstruction. Additionally, it generates a clean, human-readable report of the results.
**nf-core/reportho** is a bioinformatics pipeline that compares and summarizes orthology predictions for one or more query proteins. For each query (or its closest annotated homolog), it fetches ortholog lists from public databases, calculates the agreement of the obtained predictions (pairwise and global), and finally generates a consensus list of orthologs with the desired level of confidence. Optionally, it offers common analyses of the consensus orthologs, such as MSA and phylogeny reconstruction. Additionally, it generates a clean, human-readable report of the results.

<!-- Tube map -->

![nf-core-reportho tube map](docs/images/reportho_tube_map.svg?raw=true "nf-core-reportho tube map")

1. **Obtain Query Information**: (depends on provided input) identification of Uniprot ID and taxon ID for the query or its closest homolog.
1. **Obtain Query Information**: identification of the UniProt ID and taxon ID for the query (or its closest homolog if a FASTA file is provided as input instead of the UniProt ID).
2. **Fetch Orthologs**: fetching of ortholog predictions from public databases, either through API or from local snapshot.
3. **Compare and Assemble**: calculation of agreement statistics, creation of ortholog lists, selection of the consensus list.

@@ -47,13 +47,15 @@ First, prepare a samplesheet with your input data that looks as follows:
```csv title="samplesheet_fasta.csv"
id,fasta
BicD2,data/bicd2.fasta
HBB,data/hbb.fasta
```

or, if you know the UniProt ID of the protein, you can provide it directly:

```csv title="samplesheet.csv"
id,query
BicD2,Q8TD16
HBB,P68871
```
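
With a samplesheet like either of the above, the pipeline is launched following the standard nf-core pattern. A minimal sketch (the `--input`/`--outdir` parameter names and the `docker` profile are the usual nf-core conventions; check the pipeline documentation for the full set of options):

```bash
# Minimal launch sketch — assumes the standard nf-core parameter names
nextflow run nf-core/reportho \
    -profile docker \
    --input samplesheet.csv \
    --outdir results
```

Replace `docker` with `singularity` or `conda` to match the software environment available on your system.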

> [!NOTE]
@@ -82,13 +84,13 @@ For more details about the output files and reports, please refer to the

## Credits

nf-core/reportho was originally written by Igor Trujnara (@itrujnara).
nf-core/reportho was originally written by Igor Trujnara ([@itrujnara](https://github.com/itrujnara)).

We thank the following people for their extensive assistance in the development of this pipeline:

- Luisa Santus (@lsantus)
- Alessio Vignoli (@avignoli)
- Jose Espinosa-Carrasco (@JoseEspinosa)
- Luisa Santus ([@luisas](https://github.com/luisas))
- Alessio Vignoli ([@alessiovignoli](https://github.com/alessiovignoli))
- Jose Espinosa-Carrasco ([@JoseEspinosa](https://github.com/JoseEspinosa))

## Contributions and Support

8 changes: 8 additions & 0 deletions bin/fetch_oma_by_sequence.py
@@ -9,6 +9,14 @@
from Bio import SeqIO
from utils import fetch_seq

# Script overview:
# Fetches the OMA entry for a given protein sequence
# The sequence is passed as a FASTA file
# If the sequence is not found, the script exits with an error
# It outputs 3 files:
# 1. The canonical ID of the sequence
# 2. The taxonomy ID of the species
# 3. A boolean indicating if the sequence was an exact match

def main() -> None:
if len(sys.argv) < 5:
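
Taken together with the argument check in `main()` (at least four arguments), the overview comments imply a command-line interface with a FASTA input and output paths. A hypothetical invocation, for illustration only (argument names and order are assumptions, not taken from the script itself):

```bash
# Hypothetical usage sketch — argument order and output file names are assumed
fetch_oma_by_sequence.py query.fasta query_id.txt query_taxid.txt query_exact.txt

# Per the overview comments, the three outputs would contain:
#   query_id.txt     the canonical ID of the matched OMA entry
#   query_taxid.txt  the taxonomy ID of the species
#   query_exact.txt  a boolean flag for an exact sequence match
```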
2 changes: 2 additions & 0 deletions docs/usage.md
@@ -25,13 +25,15 @@ A final samplesheet file may look something like the one below:
```csv title="samplesheet.csv"
id,query
BicD2,Q8TD16
HBB,P68871
```

or the one below, if you provide the sequence of the protein in FASTA format:

```csv title="samplesheet.csv"
id,fasta
BicD2,/home/myuser/data/bicd2.fa
HBB,/home/myuser/data/hbb.fa
```

| Column | Description |
4 changes: 2 additions & 2 deletions main.nf
@@ -46,8 +46,8 @@ workflow NFCORE_REPORTHO {
samplesheet_fasta,
)

// emit:
// multiqc_report = REPORTHO.out.multiqc_report // channel: /path/to/multiqc_report.html
emit:
multiqc_report = REPORTHO.out.multiqc_report // channel: /path/to/multiqc_report.html

}
/*
2 changes: 1 addition & 1 deletion modules.json
@@ -12,7 +12,7 @@
},
"csvtk/join": {
"branch": "master",
"git_sha": "5e0c5677ea33b3d4c3793244035a191bd03e6736",
"git_sha": "614abbf126f287a3068dc86997b2e1b6a93abe20",
"installed_by": ["modules"]
},
"fastme": {
1 change: 1 addition & 0 deletions modules/local/create_tcoffeetemplate.nf
@@ -11,6 +11,7 @@ process CREATE_TCOFFEETEMPLATE {

output:
tuple val (meta), path("*_template.txt"), emit: template
path("versions.yml"), emit: versions

when:
task.ext.when == null || task.ext.when
1 change: 1 addition & 0 deletions modules/local/dump_params.nf
@@ -17,6 +17,7 @@ process DUMP_PARAMS {

output:
tuple val(meta), path("params.yml"), emit: params
path("versions.yml"), emit: versions

when:
task.ext.when == null || task.ext.when
8 changes: 4 additions & 4 deletions modules/local/fetch_eggnog_group_local.nf
@@ -2,10 +2,8 @@ process FETCH_EGGNOG_GROUP_LOCAL {
tag "$meta.id"
label 'process_single'

conda "conda-forge::python=3.11.0 conda-forge::biopython=1.83.0 conda-forge::requests=2.31.0"
container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
'https://depot.galaxyproject.org/singularity/mulled-v2-bc54124b36864a4af42a9db48b90a404b5869e7e:5258b8e5ba20587b7cbf3e942e973af5045a1e59-0' :
'biocontainers/mulled-v2-bc54124b36864a4af42a9db48b90a404b5869e7e:5258b8e5ba20587b7cbf3e942e973af5045a1e59-0' }"
conda "conda-forge::python=3.12.3 conda-forge::ripgrep=14.1.0"
container "community.wave.seqera.io/library/python_ripgrep:324b372792aae9ce"

input:
tuple val(meta), path(uniprot_id), path(taxid), path(exact)
@@ -34,6 +32,7 @@ process FETCH_EGGNOG_GROUP_LOCAL {
cat <<- END_VERSIONS > versions.yml
"${task.process}":
Python: \$(python --version | cut -f2)
ripgrep: \$(rg --version | head -n1 | cut -d' ' -f2)
END_VERSIONS
"""

@@ -46,6 +45,7 @@ process FETCH_EGGNOG_GROUP_LOCAL {
cat <<- END_VERSIONS > versions.yml
"${task.process}":
Python: \$(python --version | cut -f2)
ripgrep: \$(rg --version | head -n1 | cut -d' ' -f2)
END_VERSIONS
"""
}
17 changes: 12 additions & 5 deletions modules/local/fetch_oma_group_local.nf
@@ -2,10 +2,8 @@ process FETCH_OMA_GROUP_LOCAL {
tag "$meta.id"
label 'process_single'

conda "conda-forge::python=3.11.0 conda-forge::biopython=1.83.0 conda-forge::requests=2.31.0"
container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
'https://depot.galaxyproject.org/singularity/mulled-v2-bc54124b36864a4af42a9db48b90a404b5869e7e:5258b8e5ba20587b7cbf3e942e973af5045a1e59-0' :
'biocontainers/mulled-v2-bc54124b36864a4af42a9db48b90a404b5869e7e:5258b8e5ba20587b7cbf3e942e973af5045a1e59-0' }"
conda "conda-forge::python=3.12.3 conda-forge::ripgrep=14.1.0"
container "community.wave.seqera.io/library/python_ripgrep:324b372792aae9ce"

input:
tuple val(meta), path(uniprot_id), path(taxid), path(exact)
@@ -24,15 +22,23 @@ process FETCH_OMA_GROUP_LOCAL {
script:
prefix = task.ext.prefix ?: meta.id
"""
# Obtain the OMA ID for the given Uniprot ID of the query protein
omaid=\$(uniprot2oma_local.py $uniprot_idmap $uniprot_id)
zcat $db | grep \$omaid | head -1 | cut -f3- | awk '{gsub(/\\t/,"\\n"); print}' > ${prefix}_oma_group_oma.txt || test -f ${prefix}_oma_group_oma.txt
# Perform the database search for the given query in OMA
zcat $db | rg \$omaid | head -1 | cut -f3- | awk '{gsub(/\\t/,"\\n"); print}' > ${prefix}_oma_group_oma.txt || test -f ${prefix}_oma_group_oma.txt
# Convert the OMA ids to Uniprot, Ensembl and RefSeq ids
oma2uniprot_local.py $uniprot_idmap ${prefix}_oma_group_oma.txt > ${prefix}_oma_group_raw.txt
uniprotize_oma_local.py ${prefix}_oma_group_raw.txt $ensembl_idmap $refseq_idmap > ${prefix}_oma_group.txt
# Add the OMA column to the csv file
csv_adorn.py ${prefix}_oma_group.txt OMA > ${prefix}_oma_group.csv
cat <<- END_VERSIONS > versions.yml
"${task.process}":
Python: \$(python --version | cut -f2)
ripgrep: \$(rg --version | head -n1 | cut -d' ' -f2)
END_VERSIONS
"""

@@ -44,6 +50,7 @@ process FETCH_OMA_GROUP_LOCAL {
cat <<- END_VERSIONS > versions.yml
"${task.process}":
Python: \$(python --version | cut -f2)
ripgrep: \$(rg --version | head -n1 | cut -d' ' -f2)
END_VERSIONS
"""
}
10 changes: 5 additions & 5 deletions modules/local/fetch_panther_group_local.nf
@@ -2,10 +2,8 @@ process FETCH_PANTHER_GROUP_LOCAL {
tag "$meta.id"
label 'process_single'

conda "conda-forge::python=3.11.0 conda-forge::biopython=1.83.0 conda-forge::requests=2.31.0"
container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
'https://depot.galaxyproject.org/singularity/mulled-v2-bc54124b36864a4af42a9db48b90a404b5869e7e:5258b8e5ba20587b7cbf3e942e973af5045a1e59-0' :
'biocontainers/mulled-v2-bc54124b36864a4af42a9db48b90a404b5869e7e:5258b8e5ba20587b7cbf3e942e973af5045a1e59-0' }"
conda "conda-forge::python=3.12.3 conda-forge::ripgrep=14.1.0"
container "community.wave.seqera.io/library/python_ripgrep:324b372792aae9ce"

input:
tuple val(meta), path(uniprot_id), path(taxid), path(exact)
@@ -22,12 +20,13 @@ process FETCH_PANTHER_GROUP_LOCAL {
prefix = task.ext.prefix ?: meta.id
"""
id=\$(cat ${uniprot_id})
grep \$id $panther_db | tr '|' ' ' | tr '\\t' ' ' | cut -d' ' -f3,6 | awk -v id="\$id" -F'UniProtKB=' '{ for(i=0;i<=NF;i++) { if(\$i !~ id) s=s ? s OFS \$i : \$i } print s; s="" }' > ${prefix}_panther_group_raw.txt || test -f ${prefix}_panther_group_raw.txt
rg \$id $panther_db | tr '|' ' ' | tr '\\t' ' ' | cut -d' ' -f3,6 | awk -v id="\$id" -F'UniProtKB=' '{ for(i=0;i<=NF;i++) { if(\$i !~ id) s=s ? s OFS \$i : \$i } print s; s="" }' > ${prefix}_panther_group_raw.txt || test -f ${prefix}_panther_group_raw.txt
csv_adorn.py ${prefix}_panther_group_raw.txt PANTHER > ${prefix}_panther_group.csv
cat <<- END_VERSIONS > versions.yml
"${task.process}":
Python: \$(python --version | cut -f2)
ripgrep: \$(rg --version | head -n1 | cut -d' ' -f2)
END_VERSIONS
"""

@@ -39,6 +38,7 @@ process FETCH_PANTHER_GROUP_LOCAL {
cat <<- END_VERSIONS > versions.yml
"${task.process}":
Python: \$(python --version | cut -f2)
ripgrep: \$(rg --version | head -n1 | cut -d' ' -f2)
END_VERSIONS
"""
}
1 change: 0 additions & 1 deletion modules/nf-core/csvtk/join/tests/main.nf.test


3 changes: 1 addition & 2 deletions subworkflows/local/get_orthologs.nf
@@ -50,8 +50,6 @@ workflow GET_ORTHOLOGS {
.map { it -> [it[0], file(it[1])] }
.set { ch_fasta }

ch_fasta.view()

IDENTIFY_SEQ_ONLINE (
ch_fasta
)
@@ -135,6 +133,7 @@ workflow GET_ORTHOLOGS {

ch_versions = ch_versions.mix(FETCH_INSPECTOR_GROUP_ONLINE.out.versions)

// EggNOG
FETCH_EGGNOG_GROUP_LOCAL (
ch_query,
ch_eggnog,
