dsp tests #310
Conversation
gipert commented Jul 6, 2022 (edited)
- added a first set of tests
- fix/add type hints
- fix docstrings
- set overwrite mode to false in build_raw and build_dsp
Codecov Report
@@            Coverage Diff             @@
##             main     #310      +/-   ##
==========================================
+ Coverage   44.88%   49.42%   +4.53%
==========================================
  Files          74       74
  Lines        6372     6396      +24
==========================================
+ Hits         2860     3161     +301
+ Misses       3512     3235     -277
Continue to review full report at Codecov.
@iguinn can you review this when you get a chance?
As someone fairly inexperienced in writing tests for python, I want to make sure that I understand the goals for each test:
test_processing_chain.py: this seems to essentially contain integration tests to make sure that it can run the kinds of processors we want it to run (like running a scipy processor), and unit tests to make sure that the parsing of different cases works
test_build_dsp.py: this seems to be a functional test that's designed to make sure that we can process a real file
test_waveform_browser.py: this seems to be a functional test that draws waveforms
For test_build_dsp, as I commented above, do we want this to use a full-fledged LEGEND DSP config file, or do we want to pare it down to more of a minimal working file? My (possibly flawed) understanding is that we would prefer to use unit testing for the individual processors and design this test to be simpler?
For test_waveform_browser.py, this involves actually drawing waveforms. How does this work for automatic testing? Is it more or less just going to look for if any errors are thrown, or is this more meant for someone to run on their own to visually confirm that waveforms are drawn correctly?
from pathlib import Path

from pygama.lgdo.lh5_store import LH5Store, ls
from pygama.raw import build_raw

config_dir = Path(__file__).parent / 'configs'
Are we sure this is how we want to handle paths to JSON config files for testing? Another option would be to add the files to our setup.cfg (https://setuptools.pypa.io/en/latest/userguide/datafiles.html) and use importlib.resources
(https://docs.python.org/3/library/importlib.html#module-importlib.resources) to access them in cases like this. This could also be useful if we wanted to have a default config file that we use for tutorials and such.
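For reference, a minimal sketch of the importlib.resources approach; the pygama.configs subpackage and the file name below are hypothetical, and resources.files() needs Python >= 3.9:

```python
# Minimal sketch of the importlib.resources approach, assuming the JSON
# configs are declared as package data of a hypothetical "pygama.configs"
# subpackage in setup.cfg. Requires Python >= 3.9 for resources.files().
import json
from importlib import resources


def load_packaged_config(name: str) -> dict:
    # files() returns a Traversable rooted at the packaged data directory
    cfg_text = resources.files("pygama.configs").joinpath(name).read_text()
    return json.loads(cfg_text)


# usage (hypothetical file name):
# dsp_config = load_packaged_config("dsp_config.json")
```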
I don't think it's useful to install test data/configs. By design, the test suite is meant to be run by developers on the pygama source (not the installed code!).
For tutorials/docs the story is different. I agree that we should include them as installed data files.
def test_build_dsp_basics(lgnd_test_data):
    build_dsp(lgnd_test_data.get_path('lh5/LDQTA_r117_20200110T105115Z_cal_geds_raw.lh5'),
I think I'm missing something here... What kind of object is lgnd_test_data, and how is it getting passed here? I assume this is pointing to the test data repository?
It's defined here as a pytest fixture. Fixtures are useful for sharing objects between tests.
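For illustration, a minimal conftest.py sketch of such a fixture; the legend_testdata package name, the LegendTestData class, and its checkout() method are assumptions inferred from the get_path() call above, not the verified API:

```python
# conftest.py -- minimal sketch, assuming a hypothetical legend_testdata
# package that downloads/caches the LEGEND test-data repository.
import pytest

from legend_testdata import LegendTestData  # assumed helper, not verified


@pytest.fixture(scope="session")
def lgnd_test_data():
    # session scope: the data is fetched once and shared by all tests
    ldata = LegendTestData()
    ldata.checkout("main")  # assumed API for pinning a data revision
    return ldata


# any test declaring an "lgnd_test_data" argument receives this object, e.g.
# def test_build_dsp_basics(lgnd_test_data):
#     raw_file = lgnd_test_data.get_path("lh5/...")
```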
@@ -0,0 +1,327 @@
{
Are we sure we want to use a full-fledged DSP file like this, rather than a more minimal example? It seems like the job of testing the individual processors should go to unit_tests, and for testing build_dsp and processing_chain, we may want a more minimal file
I agree that we want unit tests for the processors folder. My plan (not part of this PR) is to have a test for each single processor. Still, testing something similar to what we have in the L200 data production seems useful to me...
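As an illustration of what such a per-processor unit test could look like; the pole_zero import path and the in-place (w_in, tau, w_out) signature are assumptions, not the verified pygama API:

```python
# Sketch of a single-processor unit test; pole_zero and its signature are
# assumed here and may differ from the actual pygama processor.
import numpy as np


def test_pole_zero_flattens_exponential():
    from pygama.dsp.processors import pole_zero  # assumed import path

    # a pure exponential decay, which pole-zero correction should flatten
    tau = 500.0
    w_in = np.exp(-np.arange(1000, dtype=np.float32) / tau)
    w_out = np.zeros_like(w_in)

    pole_zero(w_in, tau, w_out)

    # after correction the waveform should be constant at its initial value
    assert np.allclose(w_out, w_in[0], rtol=1e-3)
```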
@iguinn can you check these conflicts when you get a chance?
Correct.
Correct, see my comment above.
Yes, it is more or less just looking for whether any errors are thrown. Testing image outputs is fairly complicated, obviously, and I'm not sure we'll ever have time to work on this...
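In practice this usually means forcing a non-interactive matplotlib backend and checking that drawing completes; a rough sketch, in which the WaveformBrowser import path, constructor arguments, group name, and draw_next() call are assumptions about the pygama API:

```python
# Smoke-test sketch: only checks that drawing raises no exception.
# The WaveformBrowser arguments and method names below are assumed.
import matplotlib

matplotlib.use("Agg")  # non-interactive backend, nothing opens on screen


def test_waveform_browser_draws_without_errors(lgnd_test_data):
    from pygama.vis import WaveformBrowser  # assumed import path

    wb = WaveformBrowser(
        lgnd_test_data.get_path("lh5/LDQTA_r117_20200110T105115Z_cal_geds_raw.lh5"),
        "geds/raw",  # assumed LH5 group holding the waveforms
    )
    wb.draw_next()  # test passes as long as no exception is raised
```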
@jasondet the Sphinx bug has been fixed: sphinx-doc/sphinx#10701. I'm going to merge this; I'll continue working on tests in future PRs.