
Profile cleanup and singularity implementation #65

Merged · 6 commits · Apr 13, 2020
Conversation

replikation (Owner)

Backend cleanup

Profile changes

  • WtP now requires the use of -profile
  • in its current state we have local (tested) and lsf (untested); I hope @hoelzer can check that
  • and the container engines docker (tested) and singularity (tested)
  • usage is like -profile local,docker or -profile lsf,singularity etc.
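As a rough illustration of how such composable profiles can be declared in nextflow.config (a minimal sketch only; the profile names follow the PR description, but the individual settings are assumptions, not the actual WtP config):

```nextflow
// Minimal sketch of composable profiles: one group for the executor,
// one for the container engine. Settings here are assumed examples.
profiles {
    local {
        executor.name = "local"
        params.cloudProcess = false
    }
    lsf {
        executor.name = "lsf"
        params.cloudProcess = true
    }
    docker {
        docker.enabled = true
    }
    singularity {
        singularity.enabled = true
        singularity.autoMounts = true
    }
}
```

Nextflow merges comma-separated profiles from left to right, so -profile lsf,singularity applies both blocks in one run.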

@mult1fractal (Collaborator) left a comment

Good job as always!
There is one comment asking for a basic explanation of what you did with the config files.

Review thread on configs/local.config (resolved)
@hoelzer (Collaborator)

hoelzer commented Apr 13, 2020

I really like the merged profile structure you came up with @replikation ! Looks really clean. I will also go through the code now.

Some general things I observed:

Running locally with only docker specified

nextflow run phage.nf --fasta some.fasta -profile docker

This will run locally but throws a warning because params.cloudProcess is not defined, and of course the workdir etc. is then not set appropriately. Maybe force the user to set one of the following profiles: local or lsf (and then we can also add slurm...)
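One possible way to enforce this, sketched under the assumption that a guard sits at the top of phage.nf (the lists and the error message are hypothetical, not existing WtP code):

```nextflow
// Hypothetical guard: abort unless -profile combines one executor
// profile with one container-engine profile.
def executors = ['local', 'lsf']          // 'slurm' could be added later
def engines   = ['docker', 'singularity']
def chosen    = workflow.profile.tokenize(',')
if (!chosen.any { it in executors } || !chosen.any { it in engines }) {
    exit 1, "Please run with -profile <executor>,<engine>, e.g. -profile local,docker"
}
```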

By the way: as an idea, is it also possible to have something like

nextflow run phage.nf --fasta some.fasta --executor lsf --engine docker

and then generate the profile input in the nextflow.config like

workflow.profile = "${params.executor},${params.engine}"

I think it is easier to have a clear separation between

  • the system a workflow is running on (local, lsf, slurm, ...)
    and
  • the engine/technology that is used (docker, singularity, conda, ...)

Also, when it is defined like this, it is clear to the user that both have to be specified, and input like mine above (only -profile docker) is not possible.
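A sketch of what such a check could look like with explicit flags (the default values, valid lists, and messages are assumptions; whether workflow.profile can actually be assigned from nextflow.config as proposed above would need testing):

```nextflow
// Hypothetical validation enabled by explicit --executor/--engine flags;
// checking two small lists is simpler than parsing a -profile string.
params.executor = 'local'   // local, lsf, slurm, ...
params.engine   = 'docker'  // docker, singularity, conda, ...

def validExecutors = ['local', 'lsf', 'slurm']
def validEngines   = ['docker', 'singularity', 'conda']
if (!(params.executor in validExecutors))
    exit 1, "--executor must be one of ${validExecutors}"
if (!(params.engine in validEngines))
    exit 1, "--engine must be one of ${validEngines}"
```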

Running on an LSF w/ Singularity

nextflow run phage.nf --fasta test-data/T7_draft.fa --databases /homes/$USER/data/nextflow-databases/ --workdir /hps/nobackup2/production/metagenomics/$USER/nextflow-work-$USER --cachedir /hps/nobackup2/singularity/$USER -profile lsf,singularity

works! nice job.

The EBI profile needs to be adjusted to this code snippet to be working again using -profile ebi,singularity:

       ebi {
              executor {
                     name = "lsf"
                     queueSize = 200
              }
              process.cache = "lenient"
              params.cloudProcess = true
              params.workdir = "/hps/nobackup2/production/metagenomics/$USER/nextflow-work-$USER"
              workDir = params.workdir
              params.databases = "/homes/$USER/data/nextflow-databases/"
              params.cachedir = "/hps/nobackup2/singularity/$USER"	
              includeConfig 'configs/node.config'
       }

Review threads on nextflow.config, configs/container.config, and phage.nf (all resolved)
@replikation (Owner, Author)

@martin I like the
workflow.profile = "${params.executor},${params.engine}"
idea with --engine etc. I'll check whether it works correctly.

With this we can control the user input more easily, and it's clearer. I'll add that to this PR, and if it works I'll do the merge.

@hoelzer (Collaborator)

hoelzer commented Apr 13, 2020

> @martin I like the
> workflow.profile = "${params.executor},${params.engine}"
> idea with --engine etc. I'll check whether it works correctly.
>
> With this we can control the user input more easily, and it's clearer. I'll add that to this PR, and if it works I'll do the merge.

Great! Yeah, I was also thinking about this setup again and I only see advantages. It will also be easier to write error messages and to control user input.

@replikation replikation merged commit 12950e0 into master Apr 13, 2020
@replikation replikation deleted the singularity branch April 13, 2020 17:52

Successfully merging this pull request may close these issues.

Build own virsorter docker Singularity containers
3 participants