
autograd gradient fixes and miscellaneous improvements #1891

Merged: 4 commits merged from tyler/autograd_/numerical_test into develop on Aug 22, 2024

Conversation

tylerflex (Collaborator)
No description provided.

@tylerflex tylerflex added the 2.7 (will go into version 2.7.*) and .3 labels Aug 7, 2024
@tylerflex tylerflex marked this pull request as draft August 7, 2024 20:18
@tylerflex tylerflex marked this pull request as ready for review August 16, 2024 15:58
@tylerflex tylerflex force-pushed the tyler/autograd_/numerical_test branch 2 times, most recently from 48aa49f to 49c49b0 on August 16, 2024 16:24
@tylerflex tylerflex changed the title autograd testing autograd gradient fixes and miscellaneous improvements Aug 16, 2024
@tylerflex tylerflex force-pushed the tyler/autograd_/numerical_test branch from 49c49b0 to 207d41d on August 21, 2024 19:38
@yaugenst-flex (Contributor) left a comment

Lots of fixes! Looks good, only some minor comments.

tests/test_components/test_autograd.py: 3 outdated review comments (resolved)
Comment on lines +444 to +449
# field components numerical is 3x higher
intensity = anp.nan_to_num(anp.sum(sim_data.get_intensity(mnt_data.monitor.name).values))
value += intensity
# intensity numerical is 4.79x higher
value += anp.sum(mnt_data.flux.values)
# flux is 18.4x lower
Contributor

Are these the factors for the grads?

Collaborator Author

yes.
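
To make the comparison concrete, here is a minimal, self-contained sketch of the kind of check being discussed: an autograd gradient versus a central finite-difference estimate, whose ratio gives factors like the ones quoted in the comments above. The objective function here is an illustrative stand-in, not the simulation-based objective from the actual test.

import autograd.numpy as anp
from autograd import grad


def objective(x):
    # stand-in objective; the real test builds this from simulation data
    return anp.sum(anp.sin(x) ** 2)


def finite_difference_grad(fn, x, step=1e-5):
    # central finite differences, one parameter at a time
    g = anp.zeros_like(x)
    for i in range(x.size):
        e = anp.zeros_like(x)
        e[i] = step
        g[i] = (fn(x + e) - fn(x - e)) / (2 * step)
    return g


x0 = anp.linspace(0.1, 1.0, 5)
grad_ad = grad(objective)(x0)                    # autograd gradient
grad_fd = finite_difference_grad(objective, x0)  # numerical gradient
print(grad_ad / grad_fd)                         # factors like "3x higher" compare these two quantities

For a correct gradient implementation the elementwise ratio should be close to 1 (up to finite-difference error).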



@pytest.mark.parametrize("structure_key, monitor_key", (("medium", "mode"),))
def _test_autograd_numerical(structure_key, monitor_key):
Contributor

Is this test not supposed to be run?

Collaborator Author

it needs to call the actual web API to run sims, so yeah, it's not supposed to be run in the automated tests. In the future I will remove the leading underscore for numerical testing.

Any suggestions on better ways to handle this?

Contributor

Yeah, we should probably introduce pytest marks (slow, web, etc.) to mark test sets and then selectively run them, with some sane default sets.

Contributor

We could run the full test suite only before a release, for example, or on a big merge.

Collaborator Author

That would be good, although the numerical testing will still take extremely long if we test every combination of structure and monitor, so I'm not sure we want to do it in an automated way. Instead we could leave it for individual feature testing.

@yaugenst (Contributor), Aug 22, 2024

not parametrization, just custom marks. they are basically labels for tests, and then you can selectively run specific labels (e.g. all the tests that are labelled as "slow")
i agree with not running every possible combination

it's just an alternative to the "underscore method" 😄

Collaborator Author

ah ok, so here's what I did:

RUN_NUMERICAL = False
...
@pytest.mark.skipif(not RUN_NUMERICAL, reason="Numerical gradient tests run through the web API.")
...
def test_numerical():
    ...

That way it is skipped by default, and we can change this behavior by setting RUN_NUMERICAL.

I didn't see a way to skip a test by default and trigger it by a command line argument. How does this sound?
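
For reference, pytest does support skipping a set of tests by default and opting in with a command-line flag; a minimal sketch, assuming a hypothetical numerical mark and a conftest.py at the test root (the flag and mark names here are illustrative, not part of this PR):

# conftest.py -- skip numerical tests unless --run-numerical is given
import pytest


def pytest_addoption(parser):
    parser.addoption(
        "--run-numerical",
        action="store_true",
        default=False,
        help="run numerical gradient tests that call the web API",
    )


def pytest_collection_modifyitems(config, items):
    if config.getoption("--run-numerical"):
        return  # flag given: run everything, including numerical tests
    skip_numerical = pytest.mark.skip(reason="need --run-numerical option to run")
    for item in items:
        if "numerical" in item.keywords:
            item.add_marker(skip_numerical)

With this, tests decorated with @pytest.mark.numerical are skipped by default and run only when pytest is invoked with --run-numerical.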

Contributor

yeah this is fine for now. what i mean by marks is this:

@pytest.mark.slow  # or for example @pytest.mark.numerical
def test_XX(): ...

where slow or numerical are custom marks registered with pytest that we define (in pytest.ini or pyproject.toml), and then you can run pytest -v -m slow, which will run only the tests marked slow.
i think this is something to tackle in a larger test refactor though, so i'll just add it to the pile
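
For completeness, a minimal sketch of registering such a custom mark without an ini file, using pytest's standard pytest_configure hook in a conftest.py (the numerical mark name is again illustrative):

# conftest.py -- register the custom mark so pytest doesn't warn about unknown marks
def pytest_configure(config):
    config.addinivalue_line(
        "markers", "numerical: gradient tests that run simulations through the web API"
    )

With the mark registered, pytest -m numerical runs only the marked tests, and pytest -m "not numerical" excludes them.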

tidy3d/components/grid/grid_spec.py: outdated review comment (resolved)
@tylerflex tylerflex force-pushed the tyler/autograd_/numerical_test branch from a8567bf to b37133a on August 22, 2024 13:36
@tylerflex tylerflex force-pushed the tyler/autograd_/numerical_test branch from 0432890 to 617f9a2 on August 22, 2024 17:34
@tylerflex tylerflex merged commit ca71944 into develop Aug 22, 2024
15 checks passed
@tylerflex tylerflex deleted the tyler/autograd_/numerical_test branch August 22, 2024 17:59
Labels: 2.7 (will go into version 2.7.*), .3
Projects: None yet
3 participants