skip viewer.add_data if already in viewer #1724
Conversation
Codecov Report — Base: 87.27% // Head: 87.27% // No change in project coverage.
Additional details and impacted files:

```
@@           Coverage Diff            @@
##             main    #1724    +/-  ##
========================================
  Coverage   87.27%   87.27%
========================================
  Files          95       95
  Lines       10082    10084       +2
========================================
+ Hits         8799     8801       +2
  Misses       1283     1283
```

☔ View full report at Codecov.
Would this be in conflict with #1420?
That's just an issue/request (not a PR), but no, I see no reason why this would eventually cause issues with implementing that. This is part of an ongoing effort to optimize and profile the data menu, but I split it out to get the bugfix in separately. I'll open it for review once I get regression testing implemented.
Yes, just an issue. Just want to make sure this won't make fixing that other issue harder. Thanks for the clarification!
(force-pushed from 940d4c6 to 7862c6f)
```python
# regression test for https://github.com/spacetelescope/jdaviz/issues/1723
# test to make sure plot options (including percentile) stick when toggling visibility
selected_data_items = app._viewer_item_by_id('imviz-0')['selected_data_items']
```
Huh, so this use case cannot be tested with a more public API? I can't say I understand what is really going on here. It would be better if we could test it as described in the original ticket: "Load one image, set the stretch percentile to something that is not 95%, load another image. The stretch gets reset to 95%."
Also, you don't really need WCS to test this. You can load the same plain number array twice. What matters is the layer percentile setting.
Maybe something like this:
```python
import numpy as np
from numpy.testing import assert_allclose


def test_visibility_toggle(imviz_helper):
    a = np.arange(4).reshape((2, 2))
    imviz_helper.load_data(a, data_label='image_1')
    imviz_helper.default_viewer.cuts = '90%'
    # 5th/95th percentiles of [0, 1, 2, 3]
    assert_allclose(imviz_helper.default_viewer.cuts,
                    (0.15000000000000002, 2.8499999999999996))
    imviz_helper.load_data(a, data_label='image_2')
    # Goes back to the first image (you need to double check if this is needed)
    imviz_helper.default_viewer.blink_once()
    assert_allclose(imviz_helper.default_viewer.cuts,
                    (0.15000000000000002, 2.8499999999999996))
```
This code currently fails on main like this:

```
AssertionError:
Not equal to tolerance rtol=1e-07, atol=0

Mismatched elements: 2 / 2 (100%)
Max absolute difference: 0.075
Max relative difference: 0.5
 x: array([0.075, 2.925])
 y: array([0.15, 2.85])
```
The existing test triggers the code where the problem actually existed (literally checking the boxes; we confirmed offline that the report was for toggling the visibility, not calling `load_data` from the API). Blinking currently does trigger the same bug, but the implementation for blinking may change in the future, so for the sake of regression testing, I think it makes more sense to use non-public API here. As far as using astrowidgets vs plot options to confirm they have not reverted, I don't see any advantage to one over the other 🤷
I did remove the WCS as that definitely isn't needed - I had just copied that from another test since I'm not too familiar with all the imviz test fixtures.
From a user perspective, I would be more interested in testing that the user workflow won't be broken in future changes. I don't care as much whether the `app._viewer_item_by_id('imviz-0')['selected_data_items']` behavior changes or not, as long as the user workflow still works as expected. I guess it comes down to the philosophy of what this test is really testing.
this does test the more direct user workflow of checking the boxes in the data menu UI (there just isn't a nice user-friendly API for that yet). Blinking or changing layer visibility manually would still trigger this same logic, but relies on observers on the visibility state, and that logic flow could change in the future. I'd still like to keep this test since it's the most direct way to trigger the code-block that had the bug, but if you think either of those other two workflows (blinking, manually setting layer visibility) warrants separate tests, we can do that.
Yeah, por que no los dos? So...
- Add my suggestion as a second test.
- Add comments in your test so someone not you understands what it is supposed to be doing.
Sounds good?
I'm trying to avoid too much redundancy. The previous test (`test_data_menu_toggles`) already tests that a change to `layer.visible` updates the data menu. This current test then tests that changes in the data menu will not reset the percentile, as a regression test (I think that's an important distinction - this isn't something that would normally be tested if it weren't for the fact that there used to be a bug in the logic that caused `add_data_to_viewer` to be re-called - it's not anything fancy, just an overlooked if-statement).
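For context, the overlooked-if-statement fix described above can be sketched roughly as follows. This is a toy illustration with made-up names (`add_data_to_viewer`, `viewer_layers`), not the actual jdaviz internals:

```python
# Toy sketch (hypothetical names, not actual jdaviz code) of the kind of
# guard this PR adds: skip the add_data call when the data is already in
# the viewer, so layer defaults are not reapplied.

def add_data_to_viewer(viewer_layers, data_label, add_data):
    """Only call add_data when data_label is not already shown in the viewer."""
    if data_label in viewer_layers:
        return False  # already in viewer: nothing to do
    add_data(data_label)
    viewer_layers.append(data_label)
    return True

calls = []
layers = []
add_data_to_viewer(layers, 'image_1', calls.append)
add_data_to_viewer(layers, 'image_1', calls.append)  # second call is skipped
print(calls)  # ['image_1'] -- add_data ran only once
```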
I'm not sure of the best way to write a blink test that decouples it from the data-menu. I tried migrating your suggestion into `test_tools.test_blink`, but I think that just really clutters up that test and adds confusion. The existing blink test effectively already ensures that at least the top layer has been updated accordingly. Currently we know this is done by setting the visibility state, which is a case already covered in the other existing tests.
I did modify the comment in the test to explain that it is designed to test changes to the checkbox in the data menu being handled correctly (and that points back to the issue and here, so I'm not too worried about a lack of bookkeeping).
While I'm just about to mark my approval since this test does test the bug, I just wanted to note that I'm generally in favor of @pllim's thought of why not both? Fair, it's redundant, but I don't necessarily see redundancy in tests as a huge issue, especially in examples like this where it shows different entrypoints to the same behavior. For instance, we test `app.load_data` lots of times via all our configs independently testing their `viz.load_data` methods, but since they have different entrypoints, it checks not only the lower-level methods, but also the different pathways to get there.
Granted this is a more proactive approach to testing, rather than need-based, so I'm happy to get this in and not have it hold up the review
#1742 introduces a nicer API (almost public, but not technically public since it's at the app level) and updates the tests to make use of it.
(force-pushed from 4263ca3 to 95ee39e)
Ok, since you feel much more strongly about the test than I do, I'll let it go. LGTM.
The https://mast.stsci.edu/api/v0.1/Download/file/?uri=mast:jwst/product/jw01529-c1002_t002_miri_p750l_s2d.fits download failure seems to be unrelated, but it is also blocking CI. 😿 If you put that in a browser, it says …
Tested in Imviz Example Notebook and fixes on my side. Thanks @kecnry!
* to avoid extra cost and resetting default percentile/color
(force-pushed from 95ee39e to ff1582a)
skip viewer.add_data if already in viewer (cherry picked from commit 5c8cea2)
Description

This pull request avoids re-calling `viewer.add_data` if the data is already loaded in the viewer. This both saves time and avoids resetting some plot options that are passed as defaults (percentile and color from the cycler). Note that unloading and re-loading data from the viewer (either through the API or with the "x" button) will still reset plot options, as that entirely destroys the layer.

Fixes #1723
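The user-visible effect can be demonstrated with a small stand-in viewer class. This is a hypothetical sketch, not jdaviz's actual implementation: re-adding data reapplies the default percentile, so the loader returns early when the data is already present.

```python
# Stand-in viewer (hypothetical, not jdaviz API) demonstrating why
# re-calling add_data resets plot options, and how skipping it helps.

class FakeViewer:
    def __init__(self):
        self.layers = []

    def add_data(self, label, percentile=95):
        # (Re-)adding applies the default stretch percentile.
        if label not in self.layers:
            self.layers.append(label)
        self.percentile = percentile

def load(viewer, label):
    # The fix: skip add_data entirely if the data is already in the viewer.
    if label in viewer.layers:
        return
    viewer.add_data(label)

viewer = FakeViewer()
load(viewer, 'image_1')
viewer.percentile = 90      # user changes the stretch
load(viewer, 'image_1')     # no-op: already loaded
print(viewer.percentile)  # 90 -- the user's setting survives
```

Without the guard, the second `load` would call `add_data` again and reset the percentile back to its default of 95.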
Change log entry

- Is a change log needed? If yes, is it added to `CHANGES.rst`? If you want to avoid merge conflicts, list the proposed change log here for review and add it to `CHANGES.rst` before merge. If no, maintainer should add a `no-changelog-entry-needed` label.

Checklist for package maintainer(s)

This checklist is meant to remind the package maintainer(s) who will review this pull request of some common things to look for. This list is not exhaustive.

- If the changes are trivial, maintainer should add a `trivial` label.