Unit Tests for PointSourceResponse #243
base: develop
Conversation
Thanks @augustus-thomas! This looks good to me. One suggestion: currently you are basically testing whether various methods run without crashing, which is good. It would be even better to also test that the output is what's expected. For example, you can add a check (an assert) that the total expectation equals (or approximately equals) what we're currently getting, so that if a future version of the code breaks this it won't go unnoticed. Please let me know if you have time and are willing to do this. I understand if you don't, in which case I would merge this anyway since it's nonetheless progress!
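As an illustration, the suggested regression check might look like the sketch below. This is only a sketch: `expectation` stands for the histogram the existing test already obtains from `get_expectation()`, and `REFERENCE_TOTAL` is a placeholder for whatever total the current, known-good code actually produces (it would be recorded once from a trusted run).

```python
import pytest

# Placeholder: the total expected counts produced by the current,
# known-good code, recorded once from a trusted run. Not a real value.
REFERENCE_TOTAL = 123.45

def assert_total_expectation_unchanged(expectation):
    # `expectation` is assumed to be the histogram returned by
    # get_expectation() in the existing test (histpy-like, with a
    # `.contents` array). Pinning its sum to a recorded reference makes
    # silent changes in future code versions fail the test.
    total = float(expectation.contents.sum())
    assert total == pytest.approx(REFERENCE_TOTAL, rel=1e-4)
```

Using `pytest.approx` with a relative tolerance avoids spurious failures from floating-point noise while still catching real changes in the output.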
Hey! Thanks for the feedback here @israelmcmc. I would like to ensure that the expectations are correct, and I do have the time to issue a new commit for that. I will follow the example in
Sounds good, thanks @augustus-thomas. You can see an example of how to initialize a model and get the expectation here: https://cositools.github.io/cosipy/tutorials/response/DetectorResponse.html
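For readers of this thread, the flow in that tutorial is roughly the sketch below: open the response file, define a spectral model with astromodels, and convolve it with the response at a sky position. The file name, coordinates, and spectral parameters are placeholders, and the exact cosipy signatures may differ between versions.

```python
import astropy.units as u
from astropy.coordinates import SkyCoord
from astromodels import Powerlaw
from scoords import SpacecraftFrame

from cosipy.response import FullDetectorResponse

# Placeholder spectral model; any astromodels spectral function works here.
spectrum = Powerlaw()
spectrum.index.value = -2.2
spectrum.K.value = 1e-3  # placeholder normalization, 1 / (cm2 s keV)

with FullDetectorResponse.open("test_full_detector_response.h5") as response:
    # Interpolate the response at a placeholder direction in the
    # spacecraft frame, then convolve it with the spectrum to get the
    # expected counts histogram.
    coord = SkyCoord(lon=0 * u.deg, lat=90 * u.deg, frame=SpacecraftFrame())
    drm = response.get_interp_response(coord)
    expectation = drm.get_expectation(spectrum)

print(expectation.contents.sum())  # total expected counts
```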
This is now updated based on our conversations. The force push was because I accidentally included some other commits.
I'll close and reopen this PR to kick the unit tests, which seem stuck. |
@augustus-thomas Thanks for the changes. It seems something is off with the unit tests 🤔. Please take a look at the "Checks" tab. I tried to reproduce this on my system, and they also fail at that point, but for a different reason:
🤔 Have you seen this? |
Partially addresses response testing for Issue #226.

`PointSourceResponse.py` now has its own associated test file called `test_point_source_response.py`. The tests include a `.get_expectation` method call, which should ensure full coverage of `FullDetectorResponse.get_point_source_response()`. The `FullDetectorResponse` used was generated from the `test_full_detector_response.h5` file.
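For context, the overall shape of such a test might resemble the sketch below. This is only an illustration of the flow described above, not the actual contents of `test_point_source_response.py`; the source position, the placeholder spectrum, and the exact arguments to `get_point_source_response()` (which differ between cosipy versions) are assumptions.

```python
import astropy.units as u
from astropy.coordinates import SkyCoord
from astromodels import Powerlaw
from scoords import SpacecraftFrame

from cosipy.response import FullDetectorResponse

def test_point_source_response_expectation():
    with FullDetectorResponse.open("test_full_detector_response.h5") as response:
        # Assumed call pattern: a point source fixed in the spacecraft
        # frame. Depending on the cosipy version, this method may instead
        # expect an exposure map or a spacecraft attitude map.
        psr = response.get_point_source_response(
            coord=SkyCoord(lon=0 * u.deg, lat=90 * u.deg,
                           frame=SpacecraftFrame()))

        expectation = psr.get_expectation(Powerlaw())  # placeholder spectrum

        # Minimal sanity check; a pinned reference value, as suggested
        # earlier in this thread, would make this a true regression test.
        assert float(expectation.contents.sum()) >= 0
```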
Let me know if this PR is helpful! I am very excited to contribute something to this team :)