Salmon Beta v0.3.0

@rob-p released this 16 Feb 06:05

What is Salmon?

It's a type of fish! But, in the context of RNA-seq, it's the
successor to Sailfish (a rapid "alignment-free" method for transcript
quantification from RNA-seq reads).

Why use Salmon?

Well, Salmon is designed with a lot of what we learned from Sailfish
in mind. We're still adding to and improving Salmon, but it already
has a number of benefits over Sailfish; for example:

  • Salmon can make better use of paired-end information than Sailfish. The
    algorithm used in Salmon considers both ends of a paired-end read
    simultaneously, rather than independently as Sailfish does, so it can
    exploit any additional specificity that the pairing provides (a small
    scoring sketch follows this list).
  • Salmon has a smaller memory footprint than Sailfish. While the
    quantification phase of Sailfish already has a fairly compact memory
    footprint, Salmon's is even smaller. Additionally, building the Salmon index
    (which is only required if you don't use the alignment-based mode) requires
    substantially less memory, and can be faster, than building the Sailfish
    index.
  • Salmon can use (but doesn't require) pre-computed alignments. If you want,
    you can use Salmon much like Sailfish, by building an index and feeding it
    raw reads. However, if you already have reads from your favorite, super-fast
    aligner (cough cough maybe STAR), you can use them with Salmon to quantify
    the transcript abundances.
  • Salmon is fast --- in all of our testing so far, it's actually faster than
    Sailfish (though the same relationship doesn't hold between the actual fish).
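
To make that intuition concrete, here is a small scoring sketch (in C++, since that is what salmon itself is written in). It is not Salmon's actual code: the function names, the toy fragment-length distribution, and the per-mate log-probabilities are all hypothetical. The point is just that a joint score couples the two mates through the fragment length they imply, which an independent treatment ignores.

```cpp
#include <cmath>
#include <cstdio>
#include <limits>
#include <vector>

// Hypothetical empirical fragment-length distribution over log-probabilities;
// in practice such a distribution would be learned from the data.
struct FragmentLengthDist {
    std::vector<double> logProb; // logProb[len] = log P(fragment length == len)
    double operator()(int len) const {
        return (len >= 0 && len < static_cast<int>(logProb.size()))
                   ? logProb[len]
                   : -std::numeric_limits<double>::infinity();
    }
};

// Mates treated independently: the fragment length they imply plays no role.
double independentScore(double logProbMate1, double logProbMate2) {
    return logProbMate1 + logProbMate2;
}

// Mates treated jointly: both alignments must be consistent with a plausible
// fragment, so the implied fragment length contributes to the score.
double jointScore(double logProbMate1, double logProbMate2,
                  int fragmentLength, const FragmentLengthDist& fld) {
    return logProbMate1 + logProbMate2 + fld(fragmentLength);
}

int main() {
    // Toy distribution concentrated around a fragment length of 3.
    FragmentLengthDist fld{{-10.0, -5.0, -2.0, -0.5, -2.0, -5.0, -10.0}};
    double m1 = -1.0, m2 = -1.2; // hypothetical per-mate alignment log-probs
    std::printf("independent        : %.2f\n", independentScore(m1, m2));
    std::printf("joint, typical len : %.2f\n", jointScore(m1, m2, 3, fld));
    std::printf("joint, unusual len : %.2f\n", jointScore(m1, m2, 6, fld));
    return 0;
}
```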

Further details are contained in the online documentation.

What's New

Version 0.3.0

fixes a number of small but important bugs that popped up in previous versions. Specifically, when using the sampling option in alignment-based salmon, a small number of reads could be ignored, so that not all of the reads had their alignments sampled in the output. Also, a bug in one of the concurrent queue implementations used by salmon could cause a segfault on certain data sets under heavy contention when the mapping cache was in use. While a full solution to this bug is underway, it can be avoided by disabling thread-local storage in the queue (which has been done in this release and eliminates the bug).

Also, version 0.3.0 adds a new and substantially improved alignment model. It incorporates spatially-varying features of alignments while still allowing alignments of unrestricted length and any number of gaps. While this model is still officially experimental (it must be enabled with --useErrorModel), it has been fairly well-tested and can improve accuracy measurably in certain cases (and doesn't hurt accuracy in any case we've encountered).
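
The note above doesn't spell out the model's internals, so the following is only a rough sketch of one common way to structure a spatially-varying alignment error model: positions along the read fall into bins, each bin keeps its own operation frequencies, and an alignment of any length or gap count is scored as the sum of per-operation log-probabilities. Only the --useErrorModel flag comes from the release note; every name in the code is illustrative, not salmon's.

```cpp
#include <array>
#include <cmath>
#include <cstddef>
#include <cstdio>
#include <utility>
#include <vector>

// Alignment operations scored by the model (illustrative; a real model would
// be driven by the CIGAR/MD information attached to each alignment record).
enum class AlnOp : std::size_t { Match = 0, Mismatch = 1, Insertion = 2, Deletion = 3 };
constexpr std::size_t kNumOps = 4;

// A spatially-varying error model: each positional bin along the read keeps
// its own operation frequencies, so the error profile can change from the
// 5' end of the read to the 3' end.
class PositionalErrorModel {
public:
    explicit PositionalErrorModel(std::size_t numBins) : counts_(numBins) {
        for (auto& bin : counts_) { bin.fill(1.0); } // pseudo-counts
    }

    // Record an observed operation at a relative read position in [0, 1).
    void observe(double relPos, AlnOp op, double weight = 1.0) {
        counts_[binFor(relPos)][static_cast<std::size_t>(op)] += weight;
    }

    // Log-probability of an operation at a relative read position.
    double logProb(double relPos, AlnOp op) const {
        const auto& bin = counts_[binFor(relPos)];
        double total = 0.0;
        for (double c : bin) { total += c; }
        return std::log(bin[static_cast<std::size_t>(op)] / total);
    }

    // Score a whole alignment as the sum of per-operation log-probabilities;
    // nothing here restricts the alignment's length or its number of gaps.
    double score(const std::vector<std::pair<double, AlnOp>>& ops) const {
        double s = 0.0;
        for (const auto& [relPos, op] : ops) { s += logProb(relPos, op); }
        return s;
    }

private:
    std::size_t binFor(double relPos) const {
        auto b = static_cast<std::size_t>(relPos * counts_.size());
        return (b < counts_.size()) ? b : counts_.size() - 1;
    }
    std::vector<std::array<double, kNumOps>> counts_;
};

int main() {
    PositionalErrorModel model(10);
    model.observe(0.95, AlnOp::Mismatch, 50.0); // e.g. errors pile up near the 3' end
    std::vector<std::pair<double, AlnOp>> aln = {
        {0.1, AlnOp::Match}, {0.5, AlnOp::Match}, {0.95, AlnOp::Mismatch}};
    std::printf("alignment log-score: %.3f\n", model.score(aln));
    return 0;
}
```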

Finally, this release replaces Shark's implementation of PCA with a simple custom implementation built using the header-only library Eigen (now included with the source). This finally eliminates the dependency on the Shark library and thus allows salmon to be compiled with versions of Boost newer than 1.55.
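
For the curious, a PCA of this kind really does need very little beyond Eigen itself. The sketch below is not the implementation that ships with salmon, just a minimal illustration of the standard approach: center the data, form the covariance matrix, and take its eigendecomposition with Eigen's SelfAdjointEigenSolver.

```cpp
#include <Eigen/Dense>
#include <iostream>

// Minimal PCA on top of Eigen alone: rows of X are samples, columns are
// features. Returns the data projected onto the top k principal components.
Eigen::MatrixXd pcaProject(const Eigen::MatrixXd& X, int k) {
    // Center each feature (column) at zero.
    Eigen::MatrixXd centered = X.rowwise() - X.colwise().mean();
    // Sample covariance matrix of the features.
    Eigen::MatrixXd cov =
        (centered.adjoint() * centered) / static_cast<double>(X.rows() - 1);
    // The covariance matrix is symmetric, so the self-adjoint solver applies;
    // it returns eigenvalues (and matching eigenvectors) in increasing order.
    Eigen::SelfAdjointEigenSolver<Eigen::MatrixXd> eig(cov);
    // Keep the eigenvectors belonging to the k largest eigenvalues.
    Eigen::MatrixXd components = eig.eigenvectors().rightCols(k);
    return centered * components;
}

int main() {
    Eigen::MatrixXd X(4, 3);
    X << 1.0, 2.0, 0.5,
         2.0, 4.1, 0.4,
         3.0, 6.2, 0.6,
         4.0, 7.9, 0.5;
    std::cout << pcaProject(X, 2) << std::endl;
    return 0;
}
```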

Version 0.2.7

adds an in-memory cache for alignment-based salmon, just like the cache for read-based salmon that was added in v0.2.5. This significantly speeds up inference for BAM files where the number of mapping reads is less than the in-memory alignment cache size (which can be set with --mappingCacheMemoryLimit). This version also fixes a bug where, for small files, the fragment length distribution might not have been taken into account when computing alignment likelihoods.
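
The release note doesn't describe the cache's internals, so the snippet below is only a hedged sketch of the general idea behind a limit like --mappingCacheMemoryLimit: parsed records are kept in memory up to a configurable capacity and reused on later passes; if the input exceeds that capacity, the cache is abandoned and processing falls back to streaming from disk. All type and function names here are illustrative, not salmon's.

```cpp
#include <cstddef>
#include <cstdio>
#include <optional>
#include <vector>

// Illustrative stand-in for a parsed alignment record.
struct AlignmentRecord { /* fields omitted */ };

// Keeps parsed records in memory up to a fixed capacity; if the input exceeds
// that capacity, the cache is abandoned and callers must stream from disk.
class AlignmentCache {
public:
    explicit AlignmentCache(std::size_t maxRecords) : maxRecords_(maxRecords) {}

    // Returns false once the capacity is exceeded (the cache becomes unusable).
    bool add(const AlignmentRecord& rec) {
        if (!usable_) { return false; }
        if (records_.size() >= maxRecords_) {
            usable_ = false;
            records_.clear();         // release the memory; fall back to streaming
            records_.shrink_to_fit();
            return false;
        }
        records_.push_back(rec);
        return true;
    }

    // Later optimization rounds iterate over the cached records only when
    // every record fit within the limit.
    std::optional<const std::vector<AlignmentRecord>*> cached() const {
        if (usable_) { return &records_; }
        return std::nullopt;
    }

private:
    std::size_t maxRecords_;
    bool usable_ = true;
    std::vector<AlignmentRecord> records_;
};

int main() {
    AlignmentCache cache(1000);      // illustrative record limit, not bytes
    cache.add(AlignmentRecord{});    // records would normally come from a BAM parser
    if (auto recs = cache.cached()) {
        std::printf("cached records: %zu\n", (*recs)->size());
    }
    return 0;
}
```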

Version 0.2.6

fixes a significant bug in 0.2.5 which prevented the proper parsing of single-end read libraries.

Version 0.2.5

significantly speeds up read-based salmon on small datasets by keeping the mapping cache in memory. It also fixes a concurrency-related bug in alignment-based salmon that could result in some input alignments being skipped (again, this issue was most prevalent on relatively small datasets). There are also further general stability, speed, and error-reporting improvements.

Version 0.2.3

fixes a very rare but potentially frustrating bug in v0.2.2, where read-based salmon could encounter a race condition when using the mapping cache and dealing with very small data sets.