
Reproduce Results


Overview

The results presented in the paper can be broken down into the following parts:

  1. miRNA benchmark
  2. Multi-species benchmark
  3. Conserved fragments

We cover two of these parts: the miRNA benchmark and the conserved fragments. The multi-species time benchmark is not reproducible because runtimes depend on the system it is run on.

miRNA benchmark

Purpose: The miRNA benchmark shows that peak calling can accurately call start and end positions, and that the pipeline as a whole works with reasonable accuracy. As noted in the paper, the miscalled peaks arise from "false peaks" that do not correspond to the miRBase annotation.
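To make the boundary comparison concrete, the check amounts to matching each called peak's start/end against the annotation. Below is a minimal sketch in R; the file names and column names are hypothetical placeholders (the actual comparison lives in miRNA_bench.R).

```r
# Hypothetical inputs: one table of called peaks, one of miRBase annotations,
# each with columns: mirna, start, end.
called <- read.csv("called_peaks.csv")
truth  <- read.csv("mirbase_annotation.csv")

# Pair each called peak with its annotated counterpart by miRNA name.
m <- merge(called, truth, by = "mirna", suffixes = c(".called", ".true"))

# A peak is counted correct when both boundaries match the annotation exactly.
exact <- m$start.called == m$start.true & m$end.called == m$end.true
cat(sprintf("Exact boundary calls: %d / %d (%.1f%%)\n",
            sum(exact), nrow(m), 100 * mean(exact)))
```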

  1. The prepared miRNA annotation is included in the "pub_fig" directory.
  2. Copy and paste the config parameters. This is necessary because some config files changed after the benchmarking (the algorithm itself did not change).
  3. Run the pipeline with this config.
  4. Open the R script, miRNA_bench.R, in RStudio.
  5. Based on the commands in the script, organize a working directory so that it can read the specified files (most of which are included in the pub_fig directory); see the sketch after this list.
  6. Some metrics were calculated manually, but the figures are produced by the script.
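For step 5, a quick preflight check can confirm the working directory is laid out correctly before sourcing the script. The sketch below is a hypothetical example; the directory path and the list of expected inputs are assumptions you should replace with the files miRNA_bench.R actually reads.

```r
# Adjust to wherever you assembled the benchmark inputs.
setwd("path/to/analysis_dir")

# Placeholder list: fill in with the input files referenced in miRNA_bench.R
# (most ship in the pub_fig directory of the repository).
expected <- c("miRNA_bench.R")
missing  <- expected[!file.exists(expected)]
if (length(missing) > 0) {
  stop("Missing inputs: ", paste(missing, collapse = ", "))
}

# With all inputs in place, run the script to regenerate the figures.
source("miRNA_bench.R")
```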

Conserved Fragments
