
skip viewer.add_data if already in viewer #1724

Merged: 2 commits merged into spacetelescope:main on Oct 14, 2022

Conversation

kecnry (Member) commented Oct 12, 2022

Description

This pull request avoids re-calling viewer.add_data if the data is already loaded in the viewer. This both saves time and avoids resetting some plot options that are passed as default (percentile and color from the cycler). Note that unloading and re-loading data from the viewer (either through the API or with the "x" button) will still reset plot options as that entirely destroys the layer.

Fixes #1723
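The gist of the change can be sketched with a toy example. Note that `FakeViewer` and `add_data_to_viewer` below are hypothetical stand-ins to illustrate the guard, not the actual jdaviz/glue implementation:

```python
# Illustrative sketch only -- FakeViewer and add_data_to_viewer are
# hypothetical stand-ins, not the actual jdaviz code in app.py.

class FakeViewer:
    """Minimal stand-in for a viewer: add_data resets per-layer plot
    options to their defaults, which is the symptom reported in #1723."""
    def __init__(self):
        self.layers = {}  # data_label -> plot-options dict

    def add_data(self, data_label):
        # (Re-)adding data resets options to defaults (e.g. 95% percentile).
        self.layers[data_label] = {'percentile': 95}

def add_data_to_viewer(viewer, data_label):
    # The fix: only call add_data when the data is not already loaded,
    # so user-set options (stretch percentile, color) are preserved.
    if data_label not in viewer.layers:
        viewer.add_data(data_label)

viewer = FakeViewer()
add_data_to_viewer(viewer, 'image_1')
viewer.layers['image_1']['percentile'] = 90   # user changes the stretch
add_data_to_viewer(viewer, 'image_1')         # e.g. re-toggled in the data menu
print(viewer.layers['image_1']['percentile'])  # 90 -- no longer reset to 95
```

Without the `if` guard, the second call would reset the percentile back to 95, which is exactly the bug the PR fixes.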

Change log entry

  • Is a change log needed? If yes, is it added to CHANGES.rst? If you want to avoid merge conflicts,
    list the proposed change log here for review and add to CHANGES.rst before merge. If no, maintainer
    should add a no-changelog-entry-needed label.

Checklist for package maintainer(s)

This checklist is meant to remind the package maintainer(s) who will review this pull request of some common things to look for. This list is not exhaustive.

  • Are two approvals required? Branch protection rule does not check for the second approval. If a second approval is not necessary, please apply the trivial label.
  • Do the proposed changes actually accomplish desired goals? Also manually run the affected example notebooks, if necessary.
  • Do the proposed changes follow the STScI Style Guides?
  • Are tests added/updated as required? If so, do they follow the STScI Style Guides?
  • Are docs added/updated as required? If so, do they follow the STScI Style Guides?
  • Did the CI pass? If not, are the failures related?
  • Is a milestone set?
  • After merge, does any internal documentation need updating (e.g., JIRA, Innerspace)?

@kecnry kecnry added the bug Something isn't working label Oct 12, 2022
@kecnry kecnry added this to the 3.0.2 milestone Oct 12, 2022
codecov bot commented Oct 12, 2022

Codecov Report

Base: 87.27% // Head: 87.27% // No change in project coverage (+0.00%) 🎉

Coverage data is based on head (ff1582a) compared to base (22559cf).
Patch coverage: 100.00% of modified lines in pull request are covered.

Additional details and impacted files
@@           Coverage Diff           @@
##             main    #1724   +/-   ##
=======================================
  Coverage   87.27%   87.27%           
=======================================
  Files          95       95           
  Lines       10082    10084    +2     
=======================================
+ Hits         8799     8801    +2     
  Misses       1283     1283           
Impacted Files Coverage Δ
jdaviz/app.py 94.04% <100.00%> (+0.01%) ⬆️


pllim (Contributor) commented Oct 12, 2022

Would this be in conflict with #1420 ?

kecnry (Member, Author) commented Oct 12, 2022

That's just an issue/request (not a PR), but no, I see no reason why this would eventually cause issues with implementing that.

This is part of an ongoing effort to optimize and profile the data menu, but I split it out to get the bugfix in separately. I'll open it for review once I get regression testing implemented.

pllim (Contributor) commented Oct 12, 2022

That's just an issue/request (not a PR)

Yes, just an issue. Just want to make sure this won't make fixing that other issue harder. Thanks for the clarification!

@kecnry kecnry force-pushed the fix-add-data-reset-percentile branch from 940d4c6 to 7862c6f Compare October 12, 2022 17:50
@kecnry kecnry marked this pull request as ready for review October 12, 2022 17:59

# regression test for https://github.com/spacetelescope/jdaviz/issues/1723
# test to make sure plot options (including percentile) stick when toggling visibility
selected_data_items = app._viewer_item_by_id('imviz-0')['selected_data_items']
Contributor:
Huh, so this use case cannot be tested with a more public API? I can't say I understand what is really going on here. It would be better if we can test it as described in the original ticket, "Load one image, set the stretch percentile to something that is not 95%, load another image. The stretch gets reset to 95%."

Also, you don't really need WCS to test this. You can load the same plain number array twice. What matters is the layer percentile setting.

Contributor:

Maybe something like this:

import numpy as np
from numpy.testing import assert_allclose

from jdaviz import Imviz

def test_visibility_toggle(imviz_helper):
    a = np.arange(4).reshape((2, 2))
    imviz_helper.load_data(a, data_label='image_1')
    imviz_helper.default_viewer.cuts = '90%'
    assert_allclose(imviz_helper.default_viewer.cuts, (0.15000000000000002, 2.8499999999999996))

    imviz_helper.load_data(a, data_label='image_2')
    imviz_helper.default_viewer.blink_once()  # Goes back to the first image (you need to double check if this is needed)
    assert_allclose(imviz_helper.default_viewer.cuts, (0.15000000000000002, 2.8499999999999996))

This code currently fails on main like this:

AssertionError: 
Not equal to tolerance rtol=1e-07, atol=0

Mismatched elements: 2 / 2 (100%)
Max absolute difference: 0.075
Max relative difference: 0.5
 x: array([0.075, 2.925])
 y: array([0.15, 2.85])

Member Author:

The existing test triggers the code where the problem actually existed (literally checking the boxes, we confirmed offline that the report was for toggling the visibility not calling load_data from the API). Blinking currently does trigger the same bug, but the implementation for blinking may change in the future, so for the sake of regression testing, I think it makes more sense to use non-public API here. As far as using astrowidgets vs plot options to confirm they have not reverted, I don't see any advantage to one over the other 🤷

I did remove the WCS as that definitely isn't needed - I had just copied that from another test since I'm not too familiar with all the imviz test fixtures.

Contributor:

From a user perspective, I would be more interested in testing that the user workflow won't be broken by future changes. I don't care as much whether the app._viewer_item_by_id('imviz-0')['selected_data_items'] behavior changes, as long as the user workflow still works as expected. I guess it comes down to the philosophy of what this test is really testing.

Member Author:

This does test the more direct user workflow of checking the boxes in the data menu UI (there just isn't a nice user-friendly API for that yet). Blinking or changing layer visibility manually would still trigger this same logic, but that relies on observers on the visibility state, and that logic flow could change in the future. I'd still like to keep this test since it's the most direct way to trigger the code block that had the bug, but if you think either of those other two workflows (blinking, manually setting layer visibility) warrants separate tests, we can do that.

Contributor:

Yeah, ¿por qué no los dos? (why not both?) So...

  • Add my suggestion as a second test.
  • Add comments in your test so someone not you understands what it is supposed to be doing.

Sounds good?

Member Author:

I'm trying to avoid too much redundancy. The previous test (test_data_menu_toggles) already tests that a change to layer.visible updates the data menu. This current test then tests that changes in the data menu will not reset the percentile as a regression test (I think that's an important distinction - this isn't something that would normally be tested if it wasn't for the fact that there used to be a bug in the logic that caused the add_data_to_viewer to be re-called - it's not anything fancy, just an overlooked if-statement).

I'm not sure the best way to write a blink test that decouples it from the data-menu. I tried migrating your suggestion into test_tools.test_blink, but I think that just really clutters up that test and adds confusion. The existing blink test effectively already ensures that at least the top layer has been updated accordingly. Currently we know this is done by setting the visibility state, which is a case already covered in the other existing tests.

I did modify the comment in the test to explain that it is designed to test changes to the checkbox in data menu being handled correctly (and that points back to the issue and here, so I'm not too worried about a lack of bookkeeping).

Collaborator:

While I'm just about to mark my approval since this test does test the bug, I wanted to note that I'm generally in favor of @pllim's thought of "why not both?" Fair, it's redundant, but I don't see redundancy in tests as a huge issue, especially in examples like this where it shows different entrypoints to the same behavior. For instance, we test app.load_data many times via all our configs independently testing their viz.load_data methods; since they have different entrypoints, that checks not only the lower-level methods but also the different pathways to get there.

Granted, this is a more proactive approach to testing rather than a need-based one, so I'm happy to get this in and not have it hold up the review.

Member Author:

#1742 introduces a nicer API (almost public, but not technically public since it's at the app level) and updates the tests to make use of it.

@kecnry kecnry force-pushed the fix-add-data-reset-percentile branch 3 times, most recently from 4263ca3 to 95ee39e Compare October 13, 2022 15:56
pllim (Contributor) left a review comment:

Ok, since you feel so much more strongly about the test than I do, I'll let it go. LGTM.

pllim (Contributor) commented Oct 13, 2022

The https://mast.stsci.edu/api/v0.1/Download/file/?uri=mast:jwst/product/jw01529-c1002_t002_miri_p750l_s2d.fits download failure seems to be unrelated but it is also blocking CI. 😿

If you put that in browser, it says {"detail":"An internal error occurred looking up the file"}.

duytnguyendtn (Collaborator) left a review comment:

Tested in the Imviz Example Notebook and the fix works on my side. Thanks @kecnry!



* to avoid extra cost and resetting default percentile/color
@kecnry kecnry force-pushed the fix-add-data-reset-percentile branch from 95ee39e to ff1582a Compare October 14, 2022 15:02
@kecnry kecnry merged commit 5c8cea2 into spacetelescope:main Oct 14, 2022
@kecnry kecnry deleted the fix-add-data-reset-percentile branch October 14, 2022 15:22
rosteen pushed a commit that referenced this pull request Oct 18, 2022
skip viewer.add_data if already in viewer

(cherry picked from commit 5c8cea2)
Labels: bug (Something isn't working), Ready for final review
Projects: none yet

Successfully merging this pull request may close these issues:

  • Imviz: stretch percentile resets when adding data

3 participants