
tRNAscan-SE slow with multiple threads #289

Open
ohickl opened this issue May 21, 2024 · 3 comments
Labels
enhancement New feature or request

Comments

ohickl (Contributor) commented May 21, 2024

Hi, I'm not sure where best to report this, so I'll start here and ask whether anyone has experienced something similar, since I use tRNAscan-SE through Bakta.
I noticed that tRNAscan-SE runs much slower the more threads I give Bakta: with 128 threads it had not finished after ~1 hour on a test metagenomic assembly with ca. 6k contigs, whereas with 1 thread it took less than ~30 s.
I am running bakta (conda version 1.9.3) like this:

bakta --db <db> \
      --verbose \
      --debug \
      --prefix <prefix> \
      --keep-contig-headers \
      --meta \
      --force \
      --tmp-dir <tmp> \
      --output <bakta_dir> \
      --threads <threads> \
      <assembly>

To 'fix' the problem I have set the number of threads in t_rna.py to 1 for now (see the sketch below).
I also ran tRNAscan-SE manually and tried different file systems to rule out other possible causes, but the results were identical, i.e. slow with many threads, fast with few.
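
For reference, here is a minimal sketch of the kind of change I mean. It is modelled loosely on how Bakta shells out to tRNAscan-SE; the helper name, flags and paths below are assumptions for illustration, not Bakta's actual code:

import subprocess
from pathlib import Path

def run_trnascan_single_threaded(contigs: Path, output: Path) -> None:
    # Hypothetical helper: build the tRNAscan-SE call with the thread count
    # pinned to 1 instead of Bakta's configured thread count.
    cmd = [
        'tRNAscan-SE',
        '-B',              # bacterial models (assumed default; I use the general model instead)
        '--output', str(output),
        '--thread', '1',   # pinned to 1; this avoids the slowdown for me
        str(contigs),
    ]
    subprocess.run(cmd, check=True)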

I will also ask in their repo if it turns out this is only happening to me.

Thanks for the great tool!

Best

Oskar

BTW, are you planning to include more parameter changes for the meta mode? I also switched to the general tRNA model, since I will run bakta exclusively on metagenomes.

ohickl added the bug (Something isn't working) label on May 21, 2024
oschwengers added the enhancement (New feature or request) label and removed the bug (Something isn't working) label on May 28, 2024
oschwengers (Owner) commented:

Thanks a lot @ohickl for checking and reporting!
I just ran a quick test on our E. coli test genome with 8 and with 1 thread. In that test, running Bakta with only 1 core for tRNAscan-SE was actually ~10 s slower than using 8.

It might be that running Bakta with such a large number of threads on a multi-user system triggers too many I/O requests (network traffic between the machine and possibly network-attached storage). Can you repeat these benchmarks using local storage only?

ohickl (Contributor, Author) commented May 30, 2024

Hey, I just finished a Bakta prototype that splits the assembly into chunks and runs cmscan etc. on each chunk in parallel (a rough sketch of the idea is below). While I saw a slight speedup with more cores on my personal machine, on the cluster the processing speed still decreased with a large number of cores, even when the Bakta database was loaded into the node's memory (both tests were with unmodified Bakta).

With parallel chunk processing, the runtime for a sample with ~400K contigs and ~5 Gbp total length went from not finishing within a day to less than 2 hours for everything except writing the GenBank and EMBL output. Writing these files takes extremely long for large outputs, so I added an option to skip it.
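
For context, here is a rough, self-contained sketch of the chunking pattern (not the actual code in my fork; the chunk size, file layout and cmscan arguments are placeholders, and Biopython is assumed for FASTA handling):

import subprocess
from concurrent.futures import ProcessPoolExecutor
from pathlib import Path

from Bio import SeqIO  # assumption: Biopython is available


def write_chunks(assembly: Path, out_dir: Path, chunk_size: int = 500) -> list[Path]:
    # Split the assembly FASTA into chunks of `chunk_size` contigs each.
    records = list(SeqIO.parse(str(assembly), 'fasta'))
    chunk_paths = []
    for i in range(0, len(records), chunk_size):
        chunk_path = out_dir / f'chunk_{i // chunk_size:05d}.fasta'
        SeqIO.write(records[i:i + chunk_size], str(chunk_path), 'fasta')
        chunk_paths.append(chunk_path)
    return chunk_paths


def run_cmscan(chunk: Path, cm_db: Path) -> Path:
    # One single-threaded cmscan process per chunk; parallelism comes from the pool.
    tblout = chunk.with_suffix('.tblout')
    subprocess.run(
        ['cmscan', '--cpu', '1', '--tblout', str(tblout), str(cm_db), str(chunk)],
        check=True, capture_output=True,
    )
    return tblout


def scan_in_parallel(assembly: Path, cm_db: Path, out_dir: Path, workers: int = 8) -> list[Path]:
    chunks = write_chunks(assembly, out_dir)
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(run_cmscan, chunks, [cm_db] * len(chunks)))

In my fork the splitting is integrated into Bakta itself; the sketch only shows the general pattern of many single-threaded cmscan processes instead of one multi-threaded run.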

Now, I run this version with the database loaded into node memory and skipping EMBL/GenBank file writing, and it seems to run quite quickly even on large samples.

I'm not entirely sure how much potential network/latency issues on the cluster contribute to the original problem, but since the database is in node memory, I assume they can't be too great a factor.

If you have time, feel free to check my fork and let me know what you think in principle (it's pretty quick and dirty for now).

oschwengers (Owner) commented:

Thank you very much! I'll take a look at it.
