
Use v3.1 imagehash.hex_to_hash #2240

Merged 2 commits into SciTools:v1.11.x on Nov 14, 2016

Conversation

@bjlittle bjlittle commented Nov 11, 2016

Now that SciTools/conda-recipes-scitools#210 has been merged (thanks to @ocefpaf in SciTools/conda-recipes-scitools#211), we can use the shiny new v3.1 of imagehash, which has the fix that imagehash.hex_to_hash needs to handle variable hash_sizes.
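
For context, a minimal sketch of the round-trip this enables (not Iris code), assuming imagehash v3.1+ and Pillow are installed and using a local 'test.png' as a stand-in image; in the v3.1-era API the target hash_size is passed to hex_to_hash explicitly, while later releases infer it from the string length:

from PIL import Image

import imagehash

# Hash an image at a non-default size (the default is 8x8).
phash = imagehash.phash(Image.open('test.png'), hash_size=16)
hexstr = str(phash)  # hex string, e.g. stored as a test-image filename

# Pre-v3.1, hex_to_hash mishandled anything other than the default 8x8
# hash; from v3.1 the original hash is reconstructed correctly.
restored = imagehash.hex_to_hash(hexstr, hash_size=16)
assert restored == phash
print(phash - restored)  # Hamming distance between the two hashes: 0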

@bjlittle bjlittle added this to the v1.11 milestone Nov 11, 2016
@bjlittle
Member Author

We've got some looooong-running tests that are starting to kill us ... something reeks within.

@@ -776,7 +761,9 @@ def _hex_to_hash(hexstr, hash_size=_HASH_SIZE):
     else:
         uris = repo[unique_id]
     # Create the expected perceptual image hashes from the uris.
-    expected = [_hex_to_hash(os.path.splitext(os.path.basename(uri))[0])
+    to_hash = imagehash.hex_to_hash
Member

Bit odd... does this keep the next line <80char in length?

Member Author

Yes
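
(For clarity, a hypothetical before/after of what that alias buys, not the exact Iris source: binding the function to a short name keeps the list comprehension within 80 characters.)

import os

import imagehash

# Hypothetical URI; the real ones name test images by their phash hex string.
uris = ['https://example.invalid/test_data/a5b2c3d4e5f60718.png']

# Aliasing imagehash.hex_to_hash keeps the comprehension below 80 characters.
to_hash = imagehash.hex_to_hash
expected = [to_hash(os.path.splitext(os.path.basename(uri))[0])
            for uri in uris]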

def _calculate_hit(uris, phash, action):
     # Create the expected perceptual image hashes from the uris.
-    expected = [_hex_to_hash(os.path.splitext(os.path.basename(uri))[0])
+    to_hash = imagehash.hex_to_hash
Member

Again

Member Author

Yes, again 😉

@@ -107,7 +107,7 @@ install:

 script:
 - if [[ $TEST_TARGET == 'default' ]]; then
-    python -m iris.tests.runner --default-tests --system-tests --print-failed-images;
+    travis_wait python -m iris.tests.runner --default-tests --system-tests --print-failed-images;
Member

Does this change belong here..?

Member Author

Just testing this out ...

@QuLogic QuLogic commented Nov 12, 2016

Should just strip those last two...

@@ -163,7 +163,7 @@ def run(self):

 args = ['', None, '--processes=%s' % n_processors,
         '--verbosity=2', regexp_pat,
-        '--process-timeout=600']
+        '--process-timeout=180']
Member Author
@bjlittle bjlittle Nov 12, 2016

@dkillick This change brought the TEST_TARGET=default matrix runners from 18m+ down to 6m17s for py27 and 7m48s for py34, whoop! ... that's at least 10 minutes quicker than I've seen for a while ... 🎉

Admittedly, it was me that bumped it up that high due to the recent graphics testing changes 😱 ... it looks like nose uses the full timeout per multi-process runner regardless. Originally I thought this was used as a timeout high-water mark ... clearly not?! If you watch the travis tests running live, you'll notice that the last test finishes, but there is a period of delay before the test runner terminates. The process timeout also appears to be used by nose as a process sample time.

Clearly there is a sweet-spot for this process-timeout for our tests, which we may have to tweak, but I'm totally 👍 for this change!

This change also resolves the following carnage that other PRs are experiencing in travis at the moment:

No output has been received in the last 10m0s, this potentially indicates a stalled build or something wrong with the build itself.
Check the details on how to adjust your build configuration on: https://docs.travis-ci.com/user/common-build-problems/#Build-times-out-because-no-output-was-received

10m0s or 600s is the magic number that's causing this problem ... what were the odds!
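
For reference, a rough sketch (not the actual Iris runner) of how a --process-timeout like this reaches nose's multiprocess plugin; the 'iris.tests' target here is a placeholder for whatever the real runner passes:

import multiprocessing

import nose

n_processors = max(multiprocessing.cpu_count() - 1, 1)
args = ['', 'iris.tests',
        '--processes=%s' % n_processors,
        '--verbosity=2',
        # Seconds allowed per worker process; at 600s this lined up exactly
        # with Travis's 10-minute no-output limit, hence the stalled builds.
        '--process-timeout=180']
nose.run(argv=args)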

Member

This number sounds suspiciously like the default image tolerance from the image tests... Still, we can change the value as we need, and always put some effort into fixing it in a year or two 😉

And it's great to get the travis tests running faster! Good stuff @bjlittle 👍

@bjlittle bjlittle mentioned this pull request Nov 12, 2016
@DPeterK DPeterK merged commit b05bf53 into SciTools:v1.11.x Nov 14, 2016
@pp-mo pp-mo mentioned this pull request Apr 4, 2017
@bjlittle bjlittle deleted the update-imagehash branch November 3, 2017 14:33