Run PPG benchmark directly on the Capnobase download (no need to unpack data first).
JanCBrammer committed Sep 22, 2020
1 parent 89fe7c8 commit 41d7a6f
Showing 3 changed files with 58 additions and 43 deletions.
40 changes: 0 additions & 40 deletions biopeaks/benchmarks/benchmark_PPG.py

This file was deleted.

44 changes: 44 additions & 0 deletions biopeaks/benchmarks/benchmark_PPG_local.py
@@ -0,0 +1,44 @@
# -*- coding: utf-8 -*-

import h5py
import numpy as np
from pathlib import Path
from biopeaks.heart import ppg_peaks
from wfdb.processing import compare_annotations


data_dir = Path(".../TBME2013-PPGRR-Benchmark_R3/data") # replace with your local "data" directory once you've downloaded the database

sfreq = 300
tolerance = int(np.rint(.05 * sfreq)) # in samples; 50 milliseconds in accordance with Elgendi et al., 2013, doi:10.1371/journal.pone.0076585
print(f"Setting tolerance for match between algorithmic and manual annotation"
f" to {tolerance} samples, corresponding to 50 milliseconds at a sampling rate of {sfreq}.")

sensitivity = []
precision = []

for subject in data_dir.iterdir():

    with h5py.File(subject, "r") as f:    # one record of the Capnobase download
        record = np.ravel(f["signal"]["pleth"]["y"])    # PPG signal
        annotation = np.ravel(f["labels"]["pleth"]["peak"]["x"])    # manually annotated peaks (in samples)

    peaks = ppg_peaks(record, sfreq)    # algorithmically detected peaks (in samples)

    comparitor = compare_annotations(peaks, annotation, tolerance)
    tp = comparitor.tp    # true positives
    fp = comparitor.fp    # false positives
    fn = comparitor.fn    # false negatives

    sensitivity.append(float(tp) / (tp + fn))
    precision.append(float(tp) / (tp + fp))

    print(f"\nResults {subject}")
    print("-" * len(str(subject)))
    print(f"sensitivity = {sensitivity[-1]}")
    print(f"precision = {precision[-1]}")

print(f"\nAverage results over {len(precision)} records")
print("-" * 31)
print(f"sensitivity: mean = {np.mean(sensitivity)}, std = {np.std(sensitivity)}")
print(f"precision: mean = {np.mean(precision)}, std = {np.std(precision)}")
17 changes: 14 additions & 3 deletions docs/tests.md
@@ -17,15 +17,26 @@ pytest -v


## Extrema detection benchmarks

### ECG
To validate the performance of the ECG peak detector `heart.ecg_peaks()`, please install the [wfdb](https://github.com/MIT-LCP/wfdb-python) and [aiohttp](https://github.com/aio-libs/aiohttp) packages:
```
conda install -c conda-forge wfdb
conda install -c conda-forge aiohttp
```

You can then run the `benchmark_ECG_stream` script in the `benchmarks` folder. The script streams ECG and annotation files from the [Glasgow University Database (GUDB)](http://researchdata.gla.ac.uk/716/).
You can select an experiment, ECG channel, and annotation file.
You can select an experiment, ECG channel, and annotation file (for details, see the docstrings of `BenchmarkDetectorGUDB.benchmark_records()` in `benchmarks/benchmark_utils`).

Alternatively, you can download the GUDB and run the `benchmark_ECG_local` script in the `benchmarks` folder. In the script, replace `data_dir` with your local directory (see the comments in the script and the sketch below).
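For instance, assuming the ECG script follows the same `data_dir = Path(...)` pattern as the PPG script in this commit, the assignment at the top of the script would become something like this (the path is hypothetical):

```python
from pathlib import Path

# Hypothetical location of the downloaded GUDB; adjust to your machine.
data_dir = Path("/home/you/GUDB")
```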

### PPG

To validate the performance of the PPG peak detector `heart.ppg_peaks()`
please download the [Capnobase IEEE TBME benchmark dataset](http://www.capnobase.org/index.php?id=857).
After extracting the PPG signals and peak annotations you can run the `benchmark_PPG` script in the `benchmarks` folder.
please download the [Capnobase IEEE TBME benchmark dataset](http://www.capnobase.org/index.php?id=857) and install [wfdb](https://github.com/MIT-LCP/wfdb-python) and [h5py](https://www.h5py.org/):
```
conda install -c conda-forge wfdb
conda install -c conda-forge h5py
```

You can then run the `benchmark_PPG_local` script in the `benchmarks` folder. In the script, replace `data_dir` with your local directory (see the comments in the script). A quick way to check a downloaded record is sketched below.
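Before running the benchmark, you can verify that a downloaded record contains the HDF5 datasets the script reads (`signal/pleth/y` and `labels/pleth/peak/x`). A minimal sketch, assuming the layout used by `benchmark_PPG_local.py`; the record's file name is hypothetical:

```python
import h5py
from pathlib import Path

# Hypothetical record of the extracted Capnobase download; replace with a real file.
record = Path(".../TBME2013-PPGRR-Benchmark_R3/data/0009_8min.mat")

with h5py.File(record, "r") as f:
    f.visit(print)    # list every group and dataset in the file
    print(f["signal"]["pleth"]["y"].shape)    # PPG signal
    print(f["labels"]["pleth"]["peak"]["x"].shape)    # manual peak annotations
```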
