Question/suggestions about benchmark files #14

Closed · sappelhoff opened this issue Sep 5, 2020 · 3 comments

Labels: JOSS review (https://github.com/openjournals/joss-reviews/issues/2621)

@sappelhoff (Contributor) commented Sep 5, 2020

Related to the JOSS review (openjournals/joss-reviews#2621).

In the tests section of the docs you explain nicely how one benchmark from biopeaks/benchmarks can be run.

For the other benchmarks, I feel some documentation would come in handy (related to #11 (comment)).

For example, I assume that I, as a user, should point these two lines in benchmark_PPG.py to the paths where I downloaded the CapnoBase IEEE TBME benchmark dataset?

```python
record_dir = r"C:\Users\JohnDoe\surfdrive\Beta\example_data\PPG\signal"
annotation_dir = r"C:\Users\JohnDoe\surfdrive\Beta\example_data\PPG\annotations"
records = os.listdir(record_dir)
```

And I assume that benchmark_ECG_local.py is meant to be used once the GUDB data has been downloaded, instead of streaming it via benchmark_ECG_stream.py? --> all of this should be cleared up with module-level docstrings (see link to doc comment above) and smaller inline comments.
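For example, a module-level docstring along these lines would already clear things up (just a sketch, the exact wording and download instructions are of course up to you):

```python
"""Benchmark biopeaks' PPG peak detection on the CapnoBase IEEE TBME dataset.

Download TBME2013-PPGRR-Benchmark_R3.zip from
http://www.capnobase.org/index.php?id=857, extract it, and point
`record_dir` and `annotation_dir` below to the local signal and
annotation directories before running this script.
"""
```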

Another note: commented-out code in scripts like here ...

```python
# plt.figure()
# plt.plot(ecg)
# plt.scatter(manupeaks, ecg[manupeaks], c="m")
# plt.scatter(algopeaks, ecg[algopeaks], c='g', marker='X', s=150)
```

... could be improved with an if/else clause and a simple boolean setting at the top of the script. That style of coding is often easier to understand and leads to fewer errors.
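Something like this (a minimal sketch; `show_plots` and the trailing `plt.show()` are my additions, the rest mirrors your commented-out lines):

```python
import matplotlib.pyplot as plt

show_plots = False  # flip to True to visually inspect the detections

# ... run the benchmark; `ecg`, `manupeaks`, and `algopeaks` are the
# variables already defined in the script ...

if show_plots:
    plt.figure()
    plt.plot(ecg)
    plt.scatter(manupeaks, ecg[manupeaks], c="m")
    plt.scatter(algopeaks, ecg[algopeaks], c="g", marker="X", s=150)
    plt.show()
```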


Suggestion in case you didn't know: documenting code with Sphinx and sphinx-gallery allows you to integrate such "benchmarks" as examples that get built automatically whenever your documentation is built --> that way, users can inspect the code and all of its results online in their browser, instead of having to run it themselves.

You can see that in action here: https://mne.tools/mne-bids/stable/auto_examples/convert_eeg_to_bids.html#sphx-glr-auto-examples-convert-eeg-to-bids-py
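If you want to try it, the setup is mostly a few lines in the Sphinx conf.py (sketch below; the `examples_dirs`/`gallery_dirs` values are assumptions about your repository layout, adjust as needed):

```python
# docs/conf.py -- sketch of a sphinx-gallery configuration
extensions = [
    "sphinx_gallery.gen_gallery",
]

sphinx_gallery_conf = {
    "examples_dirs": "../benchmarks",   # scripts to render as examples
    "gallery_dirs": "auto_benchmarks",  # output dir for the built pages
}
```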

@sappelhoff (Contributor, Author) commented
One more question about benchmark_PPG.py --> according to the docs, I should first download the data from http://www.capnobase.org/index.php?id=857

I did that (specifically: TBME2013-PPGRR-Benchmark_R3.zip), and the data looks like the following. I cannot find the "signal" and "annotations" directories required by benchmark_PPG.py. Where am I going wrong?

```
TBME2013-PPGRR-Benchmark_R3
├── data
│   ├── 0009_8min.mat
│   ├── 0015_8min.mat
│   ├── 0016_8min.mat
│   ├── 0018_8min.mat
│   ├── 0023_8min.mat
│   ├── 0028_8min.mat
│   ├── 0029_8min.mat
│   ├── 0030_8min.mat
│   ├── 0031_8min.mat
│   ├── 0032_8min.mat
│   ├── 0035_8min.mat
│   ├── 0038_8min.mat
│   ├── 0103_8min.mat
│   ├── 0104_8min.mat
│   ├── 0105_8min.mat
│   ├── 0115_8min.mat
│   ├── 0121_8min.mat
│   ├── 0122_8min.mat
│   ├── 0123_8min.mat
│   ├── 0125_8min.mat
│   ├── 0127_8min.mat
│   ├── 0128_8min.mat
│   ├── 0133_8min.mat
│   ├── 0134_8min.mat
│   ├── 0142_8min.mat
│   ├── 0147_8min.mat
│   ├── 0148_8min.mat
│   ├── 0149_8min.mat
│   ├── 0150_8min.mat
│   ├── 0309_8min.mat
│   ├── 0311_8min.mat
│   ├── 0312_8min.mat
│   ├── 0313_8min.mat
│   ├── 0322_8min.mat
│   ├── 0325_8min.mat
│   ├── 0328_8min.mat
│   ├── 0329_8min.mat
│   ├── 0330_8min.mat
│   ├── 0331_8min.mat
│   ├── 0332_8min.mat
│   ├── 0333_8min.mat
│   └── 0370_8min.mat
└── README.txt

1 directory, 43 files
```

@JanCBrammer (Owner) commented
> I cannot find the "signal" and "annotations" directories required by benchmark_PPG.py. Where am I going wrong?

@sappelhoff, you're right. I didn't document that "signal" and "annotations" need to be extracted from the .mat files before running benchmark_PPG. I updated the script so that the PPG benchmark can now be run directly on TBME2013-PPGRR-Benchmark_R3/data (the extraction of "signal" and "annotations" is now taken care of inside the script). See 41d7a6f.
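In essence, the script now does something like this (a simplified sketch, not verbatim from 41d7a6f; the HDF5 key names follow the CapnoBase layout as I understand it):

```python
import h5py
import numpy as np

# CapnoBase .mat files are in MATLAB v7.3 format, i.e. HDF5,
# so they can be read directly with h5py.
with h5py.File("TBME2013-PPGRR-Benchmark_R3/data/0009_8min.mat", "r") as f:
    signal = np.ravel(f["signal/pleth/y"])            # raw PPG trace
    annotations = np.ravel(f["labels/pleth/peak/x"])  # expert peak indices
```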

I removed the commented-out code from benchmark_ECG_local (I had only used it for debugging).

Will address the module-level docstrings once I get around to #11.

@sappelhoff (Contributor, Author) commented
Great, I was able to run all benchmarks now. Thanks a bunch.

The performance details you describe in the paper approximately match what I can reproduce on my machine (slight rounding deviations are to be expected, I guess).

@JanCBrammer added the JOSS review (https://github.com/openjournals/joss-reviews/issues/2621) label Sep 29, 2020