Bug fix/conditionally compute tau cirrus after met downselection #87
Conversation
Thank you Tharun, this is great!
I made a commit straight to main while your PR was open ... can you rebase off main and then re-push?
```python
compute_tau_cirrus: bool | str = "auto",
shift_radiation_time: np.timedelta64 | None = None,
```
Does it make sense to remove the default values for `compute_tau_cirrus` (and for `shift_radiation_time`)? If we only ever pass model parameters into this function, then we may be less likely to create future bugs if these parameters don't take defaults. But if we're calling `process_met_datasets` in places where we don't specify `compute_tau_cirrus` or `shift_radiation_time`, leave as is.
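For illustration, a minimal sketch of the suggestion, assuming the two parameters from the diff are the ones made required; the rest of the signature is elided, and making them keyword-only is an assumption, not the project's actual API:

```python
import numpy as np

def process_met_datasets(
    *datasets,  # remaining parameters elided; not the real signature
    compute_tau_cirrus: bool | str,               # default removed
    shift_radiation_time: np.timedelta64 | None,  # default removed
):
    """Sketch: callers must pass both parameters explicitly."""
    ...
```

Call sites would then spell the choice out, e.g. `process_met_datasets(met, rad, compute_tau_cirrus="auto", shift_radiation_time=None)`, so a forgotten model parameter raises a TypeError instead of silently taking a default.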
The only other place we call `process_met_datasets` is in tests/benchmark/north-atlantic-study/validate.py. Is this benchmark test still used? I agree that it would be safer not to include defaults here.
We've been meaning to run the benchmark tests occasionally in the GitHub CI. That hasn't happened yet, so I think there are many slightly broken things in the benchmark directory at this point. No need to worry about that!
Changes
Breaking changes
None
Fixes
Tests
- `make test`
  - GCP tests in test_cache.py fail locally, presumably because of auth issues?
  - Let me know the best way to run these if necessary! (A possible workaround is sketched below.)
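One possible workaround for local runs, assuming the suite runs under pytest and that the failing module lives at tests/test_cache.py (both the runner and the path are assumptions):

```python
# Hypothetical workaround: run the test suite while skipping the
# GCP-backed cache tests that need cloud credentials.
import pytest

pytest.main(["tests", "--ignore=tests/test_cache.py"])
```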
Reviewer