pytest fixture warning - more harm than good? #2370
See also pytest-dev/pytest#6888
I can see where you're coming from, but in general it's either safe to reuse a fixture (in which case it doesn't need to be [...]). Of course there really are cases where this warning triggers for programs which are actually OK! However, I'm also concerned by the inverse problem, where users incorrectly believe that cross-invocation persistence isn't a problem but [...]
Our judgement when we merged it was that deliberately breaking some working setups was an unfortunate but acceptable cost to convert a (we hope) larger set of subtle problems into explicit warnings and eventually errors. I don't think there's a 'clearly correct' response, but personally I favor designing for beginner and intermediate users - while inelegant, there are workarounds for expert users, but if beginners get stuck or frustrated or generally have a bad experience we might as well not exist at all for them 😕
Which is a non-solution in my eyes: after my Hypothesis test is done, I want whatever I changed via monkeypatch to be undone properly. I don't want that to happen only at the end of the module (or test run).
I disagree, see my example about monkeypatch above. If I have some fixture which should do some kind of cleanup after some action, I'd prefer it to do this cleanup as soon as possible, to affect as few unrelated tests as possible.
Which is why I think it should just be a normal warning (to make the user aware of the consequences of their actions), not a deprecation.
You mean scenarios where users deliberately turn the warning off, run into issues, and you get to debug them? I don't think this will be a big issue to be honest - and it's likely not something you could prevent. If this turned into an error, I'd probably just disable the plugin entirely via [...]
I fully agree, modulo the "eventually errors" part - the warning in itself is probably a good idea (and will help people realize what's going on). I just don't think it makes sense to make things more difficult for expert users in scenarios where things would be fine, for almost no gain (other than preventing novice users from deliberately ignoring the warning).
to prevent a large number of warnings. It's not clear at the moment how best to avoid these, and open issues may result in changes: HypothesisWorks/hypothesis#2370
See HypothesisWorks/hypothesis#2370 (cherry picked from commit f33e15a)
I just discovered #377/#2356 and wonder if the warning is really appropriate. Let me share some (simplified) scenarios from my test suite where I feel like it's doing more harm than good:
monkeypatch
Here, I'm using the builtin monkeypatch fixture to patch something away. In pretty much any scenario I can imagine, this should work just fine with hypothesis calling the test function multiple times. With the warning, I don't see a way to make the same thing work again...
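A minimal stdlib-only sketch of the reuse semantics in question (MiniMonkeyPatch and DEMO_FLAG are hypothetical stand-ins for pytest's real monkeypatch fixture, just to illustrate why multiple calls within one setup/teardown cycle are harmless here):

```python
import os

class MiniMonkeyPatch:
    """Tiny illustrative stand-in for pytest's monkeypatch fixture."""
    def __init__(self):
        self._undo = []

    def setenv(self, name, value):
        # Remember the old value so teardown can restore it.
        self._undo.append((name, os.environ.get(name)))
        os.environ[name] = value

    def undo(self):
        # Restore everything in reverse order of patching.
        for name, old in reversed(self._undo):
            if old is None:
                os.environ.pop(name, None)
            else:
                os.environ[name] = old
        self._undo.clear()

# One function-scoped fixture setup...
mp = MiniMonkeyPatch()
mp.setenv("DEMO_FLAG", "1")  # DEMO_FLAG is a made-up variable name

# ...then Hypothesis calls the test body several times. Each call sees
# the same patched environment, which is harmless for a patch like this.
results = [os.environ["DEMO_FLAG"] for _ in range(3)]

# Teardown runs once, after all examples, and restores the environment.
mp.undo()
```

The patch is identical for every generated example and fully undone afterwards, which is why the warning feels like a false positive in this scenario.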
Providing some object
Here things probably aren't as clear-cut. I am sure whatever thing I do inside my fixture in the hypothesis test can be repeated, yet for other tests I'd like the additional isolation between tests. There are ways to solve this one, but it'd mean code duplication between two identical fixtures.
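A sketch of why the warning exists for object-providing fixtures (Client is a made-up class): with one function-scoped setup, Hypothesis reuses the same instance for every generated example, so state mutated by one example is visible to the next.

```python
class Client:
    """Hypothetical object a fixture might provide to a test."""
    def __init__(self):
        self.calls = []

    def request(self, payload):
        # Mutates instance state: each call is remembered.
        self.calls.append(payload)
        return len(self.calls)

# One fixture setup, then three "examples" run against the same object.
client = Client()
seen = [client.request(n) for n in (10, 20, 30)]
# The return value grows across examples because state accumulates -
# exactly the cross-invocation persistence the warning is about.
```

Whether this matters depends entirely on whether the object's state is actually mutated by the test body, which is why neither "always warn" nor "never warn" fits every case.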
I can see why the warning is there: There certainly are situations where it's good that developers are alerted to things maybe not behaving like they'd naively expect.
Yet, I think it's too strict: Why is it a DeprecationWarning? Is this behavior really something which will turn into an error? If so, are there good reasons for turning it into an error (e.g. some possible simplification inside hypothesis I'm not aware of), or would it just deliberately break situations which worked just fine?
I think:
a hypothesis_fixture_whitelist setting (or so) would give a simpler way to say "hey, this scenario is okay" than via ignoring warnings
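For context, the existing workaround such a setting would replace looks roughly like this (a sketch only - the exact warning class depends on the Hypothesis version, so check what your version actually emits):

```ini
# pytest.ini - today's coarse workaround: silence the warning globally.
# Assumes the warning is a hypothesis.errors.HypothesisDeprecationWarning.
[pytest]
filterwarnings =
    ignore::hypothesis.errors.HypothesisDeprecationWarning
```

This silences the warning for the whole test suite, which is exactly the bluntness a per-fixture whitelist would avoid.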
What do you think?