
pytest's tests should work without hypothesis #4977

Closed
blueyed opened this issue Mar 22, 2019 · 23 comments
Labels
type: bug problem that needs to be addressed type: infrastructure improvement to development/releases/CI structure

@blueyed
Contributor

blueyed commented Mar 22, 2019

Tests should run without hypothesis being installed, i.e. those tests should be skipped.

Currently it fails like this:

platform linux -- Python 3.8.0a2+, pytest-4.3.2.dev103+g15d60886.d20190322, py-1.8.0, pluggy-0.9.0
rootdir: …/Vcs/pytest, inifile: tox.ini
collected 2278 items / 1 errors / 2277 selected

===== ERRORS =====
_____ ERROR collecting testing/python/metafunc.py _____
ImportError while importing test module '…/Vcs/pytest/testing/python/metafunc.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
testing/python/metafunc.py:7: in <module>
    import hypothesis
E   ModuleNotFoundError: No module named 'hypothesis'
===== short test summary info =====
FAILED testing/python/metafunc.py
!!!!! Interrupted: 1 errors during collection !!!!!
===== 1 error in 4.31 seconds =====

This is an old issue, since #1470 I guess.
/cc @ceridwen
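
For context, the usual way to make such a module skip cleanly instead of erroring at collection is pytest.importorskip (e.g. hypothesis = pytest.importorskip("hypothesis") at the top of the module). A minimal sketch of the underlying "is this optional module importable?" check, with a hypothetical helper name:

```python
import importlib.util

def optional_module_available(name: str) -> bool:
    # Hypothetical helper: True if a top-level module can be imported.
    # pytest.importorskip() performs this kind of check and raises a
    # Skipped exception instead of returning False.
    return importlib.util.find_spec(name) is not None

# A stdlib module is importable; a made-up name is not.
print(optional_module_available("json"))         # True
print(optional_module_available("no_such_mod"))  # False
```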

@blueyed blueyed added type: bug problem that needs to be addressed type: infrastructure improvement to development/releases/CI structure labels Mar 22, 2019
@blueyed
Contributor Author

blueyed commented Mar 22, 2019

I think it does not make sense to have hypothesis just for this one test, and given that it has not found anything new since 2016 (though its state is not saved on CI - should it be, if we kept it?), I'd rather remove it than conditionally skip this test.

/cc @The-Compiler via #1470 (comment)

@The-Compiler
Member

I guess it would be good to use hypothesis in more places - what's the problem with requiring hypothesis for pytest's testsuite?

@blueyed
Contributor Author

blueyed commented Mar 24, 2019

What is the benefit?
Wouldn't it only become useful when used over a longer period? (i.e. it does not make sense on CI without a cache, for example)

I guess it would be good to use hypothesis in more places

Yes - I think the current single use does not really warrant it; but I do not have much experience with it either.

@The-Compiler
Member

FWIW I've found many issues in qutebrowser on the CI without caching the database.

@nicoddemus
Member

I agree with @The-Compiler that we might not be using Hypothesis to its full potential, while I also see @blueyed's point that we have used it only for a single test since 2016.

How about we compromise: let's make a group effort to use Hypothesis more. After some time (say, right after pytest 5.0 gets released) we reevaluate our usage: if we are still in the same situation, we can discuss dropping it again. How does that sound?

@blueyed
Contributor Author

blueyed commented Mar 26, 2019

@nicoddemus
Sounds good, except that I would drop it already for now - the patch for this is simple, and can easily be reverted then.

@nicoddemus
Member

Sounds good, except that I would drop it already for now - the patch for this is simple, and can easily be reverted then.

If we drop it now, we will create some friction when we need to use it again. I would rather keep it, and follow with the plan of reviewing this after 5.0.

For my part, I don't see much of a problem with just pip install hypothesis in a venv if I want to run pytest manually, or using tox.

@nicoddemus nicoddemus added this to the 5.0 milestone Mar 26, 2019
@blueyed
Contributor Author

blueyed commented Mar 26, 2019

What is the friction?
Reverting a commit?
Isn't that very similar to pip install hypothesis? :)

(I am using pip install -e '.[testing]' usually, but ran into missing hypothesis often enough to report it)

@nicoddemus
Member

Well, we can argue it either way, to be honest.

I think we should treat this as a "wake-up call" for us to use Hypothesis more. Starting by removing the dependency doesn't seem like the right way to approach this.

@blueyed
Contributor Author

blueyed commented Mar 27, 2019

OK, closing this issue then - we should open a new one about using hypothesis more.

@blueyed blueyed closed this as completed Mar 27, 2019
@blueyed
Contributor Author

blueyed commented May 14, 2019

FWIW: it is causing the py38-dev build to fail: https://travis-ci.org/pytest-dev/pytest/jobs/532103482#L288

@The-Compiler
Member

FWIW: it is causing the py38-dev build to fail: https://travis-ci.org/pytest-dev/pytest/jobs/532103482#L288

Hmm, according to HypothesisWorks/hypothesis#1959 this should be fixed with a more current Python 3.8 Alpha - maybe Travis CI is still shipping an older one or so?

@blueyed
Contributor Author

blueyed commented May 14, 2019

Great, thanks for the info!

@blueyed
Contributor Author

blueyed commented May 29, 2019

Causes py38-dev to fail again: https://travis-ci.org/pytest-dev/pytest/jobs/538556026
Looks similar to the previous issue?!

@blueyed
Contributor Author

blueyed commented Oct 15, 2019

follow with the plan of reviewing this after 5.0

Re-opening then for now.

@blueyed
Contributor Author

blueyed commented Oct 29, 2019

For reference: current usage appears to be only in:

    @hypothesis.given(strategies.text() | strategies.binary())
    @hypothesis.settings(
        deadline=400.0
    )  # very close to std deadline and CI boxes are not reliable in CPU power
    def test_idval_hypothesis(self, value):
        from _pytest.python import _idval

        escaped = _idval(value, "a", 6, None, item=None, config=None)
        assert isinstance(escaped, str)
        escaped.encode("ascii")

@blueyed
Contributor Author

blueyed commented Oct 29, 2019

Causing flaky build failures recently:

=================================== FAILURES ===================================
______________________ TestMetafunc.test_idval_hypothesis ______________________
[gw0] darwin -- Python 3.7.0 /Users/travis/build/pytest-dev/pytest/.tox/py37-xdist/bin/python
self = <metafunc.TestMetafunc object at 0x1066d0400>
    @hypothesis.given(strategies.text() | strategies.binary())
>   @hypothesis.settings(
        deadline=400.0
    )  # very close to std deadline and CI boxes are not reliable in CPU power
    def test_idval_hypothesis(self, value):
E   hypothesis.errors.FailedHealthCheck: Data generation is extremely slow: Only produced 7 valid examples in 1.12 seconds (0 invalid ones and 2 exceeded maximum size). Try decreasing size of the data you're generating (with e.g.max_size or max_leaves parameters).
E   See https://hypothesis.readthedocs.io/en/latest/healthchecks.html for more information about this. If you want to disable just this health check, add HealthCheck.too_slow to the suppress_health_check settings for this test.
testing/python/metafunc.py:195: FailedHealthCheck
---------------------------------- Hypothesis ----------------------------------
You can add @seed(35364130883208350154606057602597663902) to this test or run pytest with --hypothesis-seed=35364130883208350154606057602597663902 to reproduce this failure.
=============================== warnings summary ===============================

blueyed added a commit to blueyed/pytest that referenced this issue Oct 30, 2019
It is not used really, and starts to cause flaky failures on CI even
(hypothesis.errors.FailedHealthCheck).
Ref: pytest-dev#4977
blueyed added a commit to blueyed/pytest that referenced this issue Oct 31, 2019
It is not used really, and starts to cause flaky failures on CI even
(hypothesis.errors.FailedHealthCheck).
Ref: pytest-dev#4977
blueyed added a commit to blueyed/pytest that referenced this issue Oct 31, 2019
It is not used really, and starts to cause flaky failures on CI even
(hypothesis.errors.FailedHealthCheck).
Ref: pytest-dev#4977

* ci: Travis: tox -vv also with install

* setup.py: use_scm_version: git_describe_command (new default)

* ci: Travis: python: 3.7.4

(workaround DeprecationWarning with imp via distutils.spawn)
@blueyed
Contributor Author

blueyed commented Nov 1, 2019

JFI: flaky failures due to hypothesis are confusing new contributors: #6107 (comment)

@nicoddemus
Member

@Zac-HD any hints on avoiding the flakiness issue with Hypothesis that @blueyed is mentioning above?

We might want to revisit this. While I like Hypothesis and use it in many other projects, it might not bring much to the table in pytest's case, or at least contributors don't seem to use it and core contributors (myself included of course) fail to suggest valid cases where Hypothesis could be used in new tests.

@Zac-HD
Member

Zac-HD commented Nov 7, 2019

I can't see what was flaky, so it's hard to give specific advice! The derandomize setting, or suppress_health_check=[HealthCheck.too_slow] might be useful though?

Otherwise, if the project isn't finding Hypothesis tests useful and nobody wants to write or maintain them, it might just not be a good fit at the moment. Nothing wrong with that (though we'd love to hear how we could help). If you do want to write more property-based tests, what's stopping you?
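
Applied to the single test above, those suggestions would look roughly like this (a sketch; the deadline value comes from the existing test, the rest is illustrative, not a committed change):

```python
import hypothesis
from hypothesis import HealthCheck, strategies

@hypothesis.given(strategies.text() | strategies.binary())
@hypothesis.settings(
    deadline=400.0,
    derandomize=True,  # fixed example order: reproducible, less flaky on CI
    suppress_health_check=[HealthCheck.too_slow],  # tolerate slow CI boxes
)
def test_idval_hypothesis(value):
    ...
```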

@blueyed
Contributor Author

blueyed commented Nov 7, 2019

@Zac-HD
The failure is #4977 (comment)

@Zac-HD
Member

Zac-HD commented Nov 7, 2019

Yeah, for timing flakiness I just use the suppress_health_check=[HealthCheck.too_slow] setting. Not super elegant, but it has the substantial advantage of working. HypothesisWorks/hypothesis#2108 might help too.

If it would resolve your problems with Hypothesis I'd be happy to write a PR sorting out the configuration?
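
Such a PR would presumably centralize this in a Hypothesis settings profile in the test suite's conftest.py rather than per-test decorators - a sketch with a made-up profile name and illustrative values:

```python
from hypothesis import HealthCheck, settings

# Hypothetical conftest.py fragment: one CI-friendly profile instead of
# per-test suppress_health_check workarounds.
settings.register_profile(
    "pytest-selftest",  # made-up profile name
    deadline=None,      # disable the per-example deadline entirely
    suppress_health_check=[HealthCheck.too_slow],
)
settings.load_profile("pytest-selftest")
```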

@blueyed
Contributor Author

blueyed commented Nov 7, 2019

If it would resolve your problems with Hypothesis I'd be happy to write a PR sorting out the configuration?

Yeah, please go ahead.
FWIW I'm still in the camp of not having it for just one test.
But at the very least we should make sure it does not confuse new contributors or cause flaky CI failures.
