Singularity containers #61
Thanks for the suggestion. Actually, depending on the configuration of your HPC, you can already use WtP with Singularity support. I was able to run the pipeline on an LSF system with Singularity: Nextflow handles the conversion of images from Dockerhub into Singularity images automatically. Currently, we only have a profile for LSF that works with Singularity; it needs some additional parameters depending on your HPC structure:

```bash
nextflow run phage.nf --fasta your.fasta --workdir ${WORK} --databases ${DB} --cachedir ${SINGULARITY} -profile lsf
```
Depending on your HPC scheduler (LSF, SLURM, ...), you can use the LSF profile as a starting point and adjust it.
In the future we will use Nextflow's functionality of merging different profiles to make this easier, and likely also add Singularity support directly.
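For context, here is a minimal sketch of what such a Singularity-enabled profile typically looks like in a Nextflow config. The scope and option names (`process.executor`, `singularity.enabled`, `autoMounts`, `cacheDir`) are standard Nextflow configuration; the actual WtP profile may contain more settings:

```groovy
// Hypothetical sketch of a Singularity-enabled profile in nextflow.config;
// the real WtP config may differ.
profiles {
    lsf {
        process.executor = 'lsf'          // submit each task as an LSF job
        singularity {
            enabled    = true             // run containers via Singularity
            autoMounts = true             // bind host paths automatically
            cacheDir   = params.cachedir  // where converted images are stored
        }
    }
}
```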
Thanks! Seems like a good alternative.
Apparently there is some error with SLURM binding. Any idea what it could be?
Looks to me like the LSF configuration is still being used. You could copy it with `cp configs/lsf.config configs/slurm.config`, change the executor from `lsf` to `slurm`, then add the new configuration to the profiles in the Nextflow config and try again with `-profile slurm`.
I also added this code to a new `slurm` configuration in the repository. Unfortunately, I cannot test this because I don't have access to a SLURM system with Singularity at the moment, so I would highly appreciate it if you could report back whether this works.
Thank you, @hoelzer! It seems to work with the new configuration on SLURM; however, I get the following now:
Although I've seen "Convert SIF to Sandbox" messages in the info output before, it seems that Singularity still fails when trying to run the image.
@stefanches7 okay, great, and thanks for the feedback! Can you run

```bash
singularity shell /s/project/phagehost/c/nanozoo-basics-1.0--962b907.img
```

without any problems? You should see an output like:

What Singularity version is installed on your SLURM cluster? `singularity --version`
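For reference, and as an assumption rather than output quoted from this thread: with Singularity 3.x a working `singularity shell` drops you into an interactive prompt like `Singularity>`, while 2.x versions show a prompt containing the image name:

```
$ singularity shell nanozoo-basics-1.0--962b907.img
Singularity> echo "inside the container"
inside the container
```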
@hoelzer thanks for the tips on where to begin. Getting the configuration right (e.g. enabling the relevant Singularity options) helped. However, there is some bad news regarding the original issue:
I am not yet sure what the cause is, but it could be explained by a so-called "concurrent pull", which is currently not supported by Singularity: https://github.com/sylabs/singularity/issues/5020.
@stefanches7 Okay, we are getting there. Are these variables set for you, or are they empty?

```bash
export SINGULARITY_LOCALCACHEDIR=/gpfs/scratch/[username]
export SINGULARITY_CACHEDIR=/gpfs/scratch/[username]
export SINGULARITY_TMPDIR=/gpfs/scratch/[username]
```

In my experience, it can help to set these variables according to your HPC configuration so they point to directories where you have write permission and enough disk space. Maybe then just try the command outside of the Nextflow environment and see if this works first:

```bash
singularity pull --name multifractal-deepvirfinder-0.1.img docker://multifractal/deepvirfinder:0.1
```

This was working for me on LSF with Singularity v2.6.0-dist. For example, my configuration is:

```bash
(base) [mhoelzer@noah-login-01 ~]$ echo $SINGULARITY_CACHEDIR
/hps/nobackup2/production/metagenomics/mhoelzer
(base) [mhoelzer@noah-login-01 ~]$ echo $SINGULARITY_LOCALCACHEDIR
/scratch
```

So the Singularity image is then stored at `/hps/nobackup2/production/metagenomics/mhoelzer/multifractal-deepvirfinder-0.1.img`.
@hoelzer yeah, I've also specified the Singularity environment variables, and a standalone pull does work.
Yeah, that might be a workaround for now: pulling the images manually and then running the pipeline with `--cachedir` pointing to them. I don't think the concurrent pulls can easily be switched off via the configuration, though.
If you want, you can easily test your hypothesis by setting everything to run one task at a time instead of in parallel; see the sketch after this comment.
What you can also try: do not start the pipeline from a login node. Start an interactive session on a compute node and then execute the pipeline from there, like so (I think):

```bash
srun -N 1 --ntasks-per-node=2 --pty bash
```
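One way to serialize the pulls, as a sketch: Nextflow's `-qs` (queue size) run option limits how many tasks execute in parallel, so with a queue size of 1 the image conversions should happen one at a time, assuming they are triggered per task. The profile name and parameters here just mirror the command earlier in this thread:

```bash
# Hypothetical test: limit Nextflow to one task at a time so Singularity
# never performs two pulls/conversions concurrently.
nextflow run phage.nf --fasta your.fasta --workdir ${WORK} \
    --databases ${DB} --cachedir ${SINGULARITY} -profile slurm -qs 1
```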
@hoelzer I think I would try the first way. Where can I find the locations of the tools' Docker containers?
Okay, you can see all Docker images the pipeline is using in the config files. If you are able to generate and store Singularity images from these Docker images via `singularity pull`, pointing `--cachedir` to where they are stored should work.
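As a sketch of that manual pre-pull: the loop below only lists the two images actually mentioned in this thread, and the naming scheme (slashes and colons replaced by dashes) matches the `.img` file names quoted above, but check what your Nextflow version expects:

```bash
#!/usr/bin/env bash
# Hypothetical pre-pull script: convert Docker images to Singularity images
# in the cache directory the pipeline is pointed to via --cachedir.
CACHEDIR=/path/to/singularity-cache   # assumption: adjust to your HPC
mkdir -p "$CACHEDIR" && cd "$CACHEDIR"

# Only the two images mentioned in this thread; extend with the rest
# of the pipeline's containers.
for img in multifractal/deepvirfinder:0.1 nanozoo/basics:1.0--962b907; do
    name="$(echo "$img" | tr '/:' '--').img"  # e.g. multifractal-deepvirfinder-0.1.img
    singularity pull --name "$name" "docker://$img"
done
```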
As clusters in academia often ban Docker (since it effectively grants users root rights), maybe it is possible to use the current workflow with Singularity? Maybe just convert the existing Docker steps to Singularity containers (e.g. with https://github.com/singularityhub/docker2singularity)?
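For illustration, and as an assumption based on the docker2singularity README rather than on this thread, the conversion runs roughly like this on a machine where Docker is available (paths and image name are placeholders):

```bash
# Hypothetical docker2singularity invocation converting one of the
# pipeline's images; /output/dir is a placeholder for a host directory.
docker run -v /var/run/docker.sock:/var/run/docker.sock \
    -v /output/dir:/output \
    --privileged -t --rm \
    quay.io/singularity/docker2singularity \
    multifractal/deepvirfinder:0.1
```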