autograd gradient fixes and miscellaneous improvements #1891
Conversation
Lots of fixes! Looks good, only some minor comments.
# field components numerical is 3x higher
intensity = anp.nan_to_num(anp.sum(sim_data.get_intensity(mnt_data.monitor.name).values))
value += intensity
# intensity numerical is 4.79x higher
value += anp.sum(mnt_data.flux.values)
# flux is 18.4x lower
Are these the factors for the grads?
yes.
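(For context: the factor here is the ratio between the gradient computed by autograd and a numerical finite-difference gradient of the same objective. A minimal sketch of such a check, assuming autograd's numpy wrapper and a hypothetical scalar objective f in place of the real simulation post-processing:)

import autograd.numpy as anp
from autograd import grad

def f(x):
    # hypothetical stand-in for an objective built from simulation data
    return anp.sum(x ** 2)

def finite_difference_grad(fn, x, eps=1e-6):
    # central differences, one component at a time
    g = anp.zeros_like(x)
    for i in range(x.size):
        dx = anp.zeros_like(x)
        dx[i] = eps
        g[i] = (fn(x + dx) - fn(x - dx)) / (2 * eps)
    return g

x0 = anp.linspace(0.5, 1.5, 5)
print(grad(f)(x0) / finite_difference_grad(f, x0))  # the "factor"; ~1.0 when they agree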
@pytest.mark.parametrize("structure_key, monitor_key", (("medium", "mode"),))
def _test_autograd_numerical(structure_key, monitor_key):
Is this test not supposed to be run?
it needs to call the actual web API to run sims, so yeah, it's not supposed to be run in the automated tests. In the future I will remove the leading underscore for numerical testing.
Any suggestions on better ways to handle this?
Yeah, we should probably introduce pytest marks (slow, web, etc.) to mark test sets and then selectively run them, with some sane default sets.
We could then run the full test suite only before a release, for example, or on a big merge.
That would be good, although the numerical testing will still take extremely long if we test every combination of structure and monitor. So I'm not sure we want to do it in an automated way; instead we could leave it for individual feature testing.
not parametrization, just custom marks. they are basically labels for tests, and then you can selectively run specific labels (e.g. all the tests that are labelled as "slow")
i agree with not running every possible combination
it's just an alternative to the "underscore method" 😄
ah ok, so here's what I did
RUN_NUMERICAL = False

...

@pytest.mark.skipif(not RUN_NUMERICAL, reason="Numerical gradient tests run through the web API.")
def test_numerical():
    ...

That way it will be skipped by default, and we can change this behavior by setting RUN_NUMERICAL. I didn't see a way to skip a test by default and trigger it by a command line argument. How does this sound?
yeah this is fine for now. what i mean by marks is this:

@pytest.mark.slow  # or for example @pytest.mark.numerical
def test_XX(): ...

where "slow" or "numerical" are custom marks registered with pytest that we define (in pytest.ini or pyproject.toml), and then you can run pytest -v -m slow, which will run only the tests marked "slow".
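(For illustration, registering such marks could look like the following in pyproject.toml; the mark names and descriptions are placeholders:)

# pyproject.toml (a sketch; mark names and wording are hypothetical)
[tool.pytest.ini_options]
markers = [
    "slow: long-running tests, run with 'pytest -m slow'",
    "numerical: numerical gradient tests that call the web API",
]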
i think this is something to tackle in a larger test refactor though, so i'll just add it to the pile
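(Side note: pytest's own documentation describes a pattern for exactly this, skipping marked tests by default and enabling them with a command line flag. A minimal conftest.py sketch, assuming a hypothetical --run-numerical flag and the "numerical" mark above:)

# conftest.py (a sketch of the pattern from the pytest docs, adapted to a
# hypothetical --run-numerical flag)
import pytest

def pytest_addoption(parser):
    parser.addoption(
        "--run-numerical",
        action="store_true",
        default=False,
        help="run numerical gradient tests that call the web API",
    )

def pytest_collection_modifyitems(config, items):
    if config.getoption("--run-numerical"):
        return  # flag given: run all collected tests
    skip_numerical = pytest.mark.skip(reason="need --run-numerical option to run")
    for item in items:
        if "numerical" in item.keywords:
            item.add_marker(skip_numerical)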