Question/suggestions about benchmark files #14
Related to the JOSS review (openjournals/joss-reviews#2621).
In the tests section of the docs you explain nicely how one benchmark from biopeaks/benchmarks can be run. For the other benchmarks, I feel like some documentation would come in handy (related to #11 (comment)).
For example, I think that I as a user should set these two lines from benchmark_PPG.py to the path where I downloaded the Capnobase IEEE TBME benchmark dataset? Something like the sketch below is what I have in mind.
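(A minimal sketch, not the script's actual code; CAPNOBASE_DIR and the subdirectory names are made up:)

```python
from pathlib import Path

# Hypothetical example: point this at your local copy of the
# Capnobase IEEE TBME benchmark dataset before running the script.
CAPNOBASE_DIR = Path("/home/user/data/capnobase")

# The two paths the script would read from (subdirectory names are made up).
signal_dir = CAPNOBASE_DIR / "signals"
annotation_dir = CAPNOBASE_DIR / "annotations"
```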
And I assume that benchmark_ECG_local.py can be used when the GUDB data has been downloaded, instead of streaming it via benchmark_ECG_stream.py? --> All of this should be cleared up using module-level docstrings (see the link to the doc comment above) and smaller inline comments; a sketch of such a docstring follows below.
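For instance (hypothetical wording; DATA_DIR is a made-up name):

```python
"""Benchmark biopeaks' ECG peak detection on a local copy of the GUDB dataset.

Download the GUDB dataset first and set DATA_DIR below to the download
location. To stream the records directly instead of working from a local
copy, use benchmark_ECG_stream.py.
"""
```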
Another note: code that has to be manually (un)commented, like here ...

biopeaks/biopeaks/benchmarks/benchmark_ECG_local.py, lines 55 to 58 (at 89fe7c8)

... could be improved with an if/else clause and a simple boolean value set at the top of the script (see the sketch below). That way of coding is often easier to understand and leads to fewer errors.
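Something like this, for example (a sketch; the flag name and the printed messages are made up):

```python
# Set once at the top of the script instead of (un)commenting blocks of code.
USE_ALTERNATIVE_SETTING = False  # hypothetical flag name

# ... later in the script, where the commented-out lines used to be:
if USE_ALTERNATIVE_SETTING:
    print("running the alternative configuration")  # previously commented out
else:
    print("running the default configuration")      # previously active
```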
Suggestion, in case you didn't know: documenting code with Sphinx and sphinx-gallery allows you to integrate such "benchmarks" as examples that get built automatically when your documentation is built --> that way users can inspect the code and all of its results online in their browser, instead of having to run the benchmarks themselves.
You can see that in action here: https://mne.tools/mne-bids/stable/auto_examples/convert_eeg_to_bids.html#sphx-glr-auto-examples-convert-eeg-to-bids-py
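The basic sphinx-gallery setup is only a few lines in Sphinx's conf.py (a minimal sketch; the directory paths are placeholders for wherever the scripts and the generated pages should live):

```python
# conf.py
extensions = [
    "sphinx_gallery.gen_gallery",
]

sphinx_gallery_conf = {
    # Placeholder: where the benchmark/example scripts live ...
    "examples_dirs": "../biopeaks/benchmarks",
    # ... and where the generated gallery pages should be written.
    "gallery_dirs": "auto_examples",
}
```

sphinx-gallery also expects each script to start with a module-level docstring, which becomes the rendered page's title and text, so this dovetails with the docstring point above.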
Comments

One more question about … I did that (specifically: …)

@sappelhoff, you're right. I didn't document that the "signal" and "annotations" need to be extracted from the .mat files before running the benchmark. I removed the commented code. Will address the module-level docstrings once I get around to #11.

Great, I was able to run all benchmarks now. Thanks a bunch. The performance details you describe in the paper match approximately what I can reproduce on my machine (slight rounding deviations are to be expected, I guess).
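For anyone reproducing this, the extraction step mentioned above could look roughly like the following (a sketch only; it assumes the .mat files open with scipy.io.loadmat, while MATLAB v7.3 files would need h5py instead, and everything except the quoted "signal" and "annotations" keys is made up):

```python
from scipy.io import loadmat
import numpy as np

record = loadmat("capnobase/record_0009.mat")  # made-up file name

signal = np.ravel(record["signal"])            # "signal", as quoted above
annotations = np.ravel(record["annotations"])  # "annotations", as quoted above

# Made-up output format: save as plain text for later use.
np.savetxt("record_0009_signal.txt", signal)
np.savetxt("record_0009_annotations.txt", annotations)
```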