pytest's tests should work without hypothesis #4977
Comments
I think it does not make sense to have hypothesis just for this test. Given that it has not found anything new since 2016 (although its state is not saved on CI; should it be, if we keep it?), I'd rather remove it than conditionally skip this test. /cc @The-Compiler via #1470 (comment)
I guess it would be good to use hypothesis in more places - what's the problem with requiring hypothesis for pytest's test suite?
What is the benefit?
Yes - I think the current single use does not really warrant it; but I don't have much experience with it.
FWIW I've found many issues in qutebrowser on CI without caching the database.
I agree with @The-Compiler that we might not be using Hypothesis to its full potential, while I also see @blueyed's point that we have used it only for a single test since 2016. How about we compromise: let's make a group effort to use Hypothesis more. After some time (say, right after pytest 5.0 gets released) we reevaluate our usage: if we are still in the same situation, we can discuss dropping it again. How does that sound?
@nicoddemus
If we drop it now, we will create some friction when we need to use it again. I would rather keep it and follow the plan of reviewing this after 5.0. For my part, I don't see too much problem with just requiring it for the test suite.
What is the friction? (I am using …)
Well, we can argue it either way, to be honest. I think we should treat this as a wake-up call for us to use hypothesis more. Starting by removing the dependency doesn't seem like the right way to approach this.
Ok, closing this issue then - we should open a new one about using hypothesis more.
FWIW: it is causing the py38-dev build to fail: https://travis-ci.org/pytest-dev/pytest/jobs/532103482#L288 |
Hmm, according to HypothesisWorks/hypothesis#1959 this should be fixed with a more current Python 3.8 alpha - maybe Travis CI is still shipping an older one?
Great, thanks for the info! |
Causes py38-dev to fail again: https://travis-ci.org/pytest-dev/pytest/jobs/538556026 |
Re-opening then for now. |
For reference: current usage appears to be only in pytest/testing/python/metafunc.py, lines 194 to 203 (at d39f956).
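For readers unfamiliar with that test, the sketch below shows the general shape of such a property-based check. The function and test names are illustrative stand-ins, not the actual pytest internals at lines 194 to 203:

```python
# Sketch of the style of property-based test referenced above.
# make_test_id is a hypothetical stand-in for pytest's ID-generation helper.
from hypothesis import given, strategies as st


def make_test_id(value: str) -> str:
    # Escape arbitrary text into an ASCII-safe parametrize ID.
    return value.encode("unicode_escape").decode("ascii")


@given(st.text())
def test_generated_ids_are_ascii(value):
    # Property: any text input yields an ID that round-trips through ASCII.
    make_test_id(value).encode("ascii")
```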
Causing flaky build failures recently.
It is not really used, and has started to cause flaky failures on CI (hypothesis.errors.FailedHealthCheck). Ref: pytest-dev#4977
* ci: Travis: tox -vv also with install
* setup.py: use_scm_version: git_describe_command (new default)
* ci: Travis: python: 3.7.4 (workaround DeprecationWarning with imp via distutils.spawn)
JFI: flaky failures due to hypothesis are confusing new contributors: #6107 (comment)
@Zac-HD any hints on avoiding the flakiness issue with Hypothesis that @blueyed is mentioning above? We might want to revisit this. While I like Hypothesis and use it in many other projects, it might not bring much to the table in pytest's case - or at least contributors don't seem to use it, and core contributors (myself included, of course) fail to suggest valid cases where Hypothesis could be used in new tests.
I can't see what was flaky, so it's hard to give specific advice! Otherwise, if the project isn't finding Hypothesis tests useful and nobody wants to write or maintain them, it might just not be a good fit at the moment. Nothing wrong with that (though we'd love to hear how we could help). If you do want to write more property-based tests, what's stopping you?
@Zac-HD
Yeah, for timing flakiness I just adjust the settings. If it would resolve your problems with Hypothesis, I'd be happy to write a PR sorting out the configuration?
Yeah, please go ahead. |
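For context, a fix for this kind of flakiness would typically be a Hypothesis settings profile registered in the test suite's conftest.py. The sketch below is an assumption about what such a configuration could look like; the profile name and the CI environment variable check are illustrative, not taken from the actual PR:

```python
# conftest.py -- sketch of a Hypothesis settings profile for noisy CI machines.
import os

from hypothesis import HealthCheck, settings

# Register a forgiving profile: no per-example deadline, and don't fail the
# run just because examples are slow (a common source of FailedHealthCheck).
settings.register_profile(
    "ci",
    deadline=None,
    suppress_health_check=[HealthCheck.too_slow],
)

# Only load it when running on CI (environment variable name is an assumption).
if os.environ.get("CI"):
    settings.load_profile("ci")
```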
Tests should run without hypothesis being installed, i.e. those tests should be skipped. Currently they fail instead of being skipped.
This is an old issue, since #1470 I guess.
/cc @ceridwen
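A minimal sketch of how the requested skip could look, using pytest's own importorskip so the whole module is skipped when hypothesis is missing. The module and test names are placeholders, not the real test in metafunc.py:

```python
# testing/example_hypothesis_test.py -- hypothetical module name.
import pytest

# Skip every test in this module if hypothesis is not installed.
hypothesis = pytest.importorskip("hypothesis")
from hypothesis import given, strategies as st  # noqa: E402


@given(st.text())
def test_accepts_arbitrary_text(value):
    # Placeholder property; the real test would exercise pytest's ID generation.
    assert isinstance(value, str)
```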