
Deprecating pytest-stepwise in favour for --last-failed -x #14

Closed
nip3o opened this issue Aug 20, 2018 · 17 comments

@nip3o
Owner

nip3o commented Aug 20, 2018

Once upon a time, quite a few pytest versions ago, pytest-cache was an external plugin that provided the --last-failed flag, which ran only the failing tests from the last test run. The pytest -x flag also existed back then, which exits the run after the first failing test. Combining these didn't work very well, which led to the birth of pytest-stepwise.
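For anyone unfamiliar with the flags, today's equivalent invocation looks roughly like this (standard pytest options; the one-line description of the behaviour is mine):

$ pytest --last-failed -x
re-runs only the tests that failed in the previous run, and exits at the first failure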

Since then the cache plugin has become integrated into pytest, and as of a few releases ago (I haven't been able to find an issue or PR for this), the combination of these two flags seems to work the same way Stepwise does - i.e. it lets you run the same command multiple times and catch one failing test at a time. I have tested it out myself over the last few weeks - not in detail, but it works a lot better than I remember it working.

I think that means that this plugin is not needed anymore, unless someone has a compelling use-case that is not handled by pytest itself (and then that probably means that it should be integrated into pytest rather than here).

Any thoughts or examples of use-cases where the behaviour differs?

@davidszotten
Contributor

An example where, at least in my view, stepwise is superior to --lf -x:

$ cat test_lf.py

def test_ok():
    assert True

def test_1():
    assert False

def test_2():
    assert False

$ py.test --lf -x
runs test_ok, test_1, fails and stops

fix test 1

$ py.test --lf -x
runs just test_1, passes and stops

To get to the next failing test I have to run test_ok (the entire test suite) again.

With stepwise, the next run starts at test_1 (which now passes) and continues until the next failing test.
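For illustration, the same session with the plugin would go roughly like this (assuming its --sw flag; the run descriptions are paraphrased, not captured output):

$ py.test --sw
runs test_ok, test_1, fails on test_1 and stops, remembering where it stopped

fix test 1

$ py.test --sw
resumes at test_1 (which now passes) and continues until test_2 fails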

@nip3o
Owner Author

nip3o commented Aug 21, 2018

Hmm, this was the exact use-case that I thought had changed in pytest when I tested it a few weeks ago (the question came up on Twitter, so I started looking into it). Now when I try this again, however, I get the same results as you do.

@pelme

pelme commented Aug 21, 2018

Hmm, what about opening a pytest issue to discuss this? Maybe --lf -x could be changed to work fully like --sw? In its current form it is just confusing and not very useful.

@davidszotten
Contributor

Well, I think it does sort of work as advertised. I think the intended use-case is to first run the entire test suite and then re-run with --lf -x, in which case it works fine. Stepwise just enables a different workflow.

@nip3o
Owner Author

nip3o commented Aug 21, 2018

Ohh, that was probably what I was doing when I tested it out the previous time. I agree that it does work as advertised at the moment, even though it would be awesome to get the stepwise workflow into pytest somehow.

@davidszotten
Contributor

What's your objection to it living as a plugin? Maintenance? Reach?

@nip3o
Owner Author

nip3o commented Aug 21, 2018

I would be fine with it continuing to live as a plugin as long as it offers something more than pytest itself does, but mostly I think it's a really useful workflow that more people should know about. I also believe it is so close to --lf -x that most people can't tell the difference beforehand, and therefore don't see the point in using a plugin.
So yes, reach basically. #7 would surely also help, but as said I don't think people can tell the difference.

The tweet where this started: https://twitter.com/cameronmaske/status/1023913232011087872

@nicoddemus

Indeed the workflow is slightly different; I did not realize the difference myself when I changed the behavior of the cache mechanism to support --lf -x the way I wanted. 😁

I think it should be simple to add a new --sw option to the core, implemented inside the current LFPlugin or as a separate class, because it is an absolutely brilliant way to fix a test suite.

@pelme

pelme commented Aug 22, 2018

Awesome 👍 I agree that this workflow is too good to not be in core.

So it is a matter of transforming pytest-stepwise into a pytest PR? pytest-stepwise already has extensive pytester tests that should be helpful when moving it over. :)

@nicoddemus

Awesome 👍 I agree that this workflow is too good to not be in core.
So it is a matter of transforming pytest-stepwise into a pytest PR?

IMHO yes. @RonnyPfannschmidt any objections?

@RonnyPfannschmidt

I'm confused as to what the pytest behaviour is now - if the combination of --lf -x forgets the rest of the failures when a single test passes, that is a major bug in my understanding.

@RonnyPfannschmidt

Am I understanding this workflow correctly as "ignore -x when the pool of runnable known failed tests is empty"?

@nip3o
Owner Author

nip3o commented Aug 23, 2018

Am I understanding this workflow correctly as "ignore -x when the pool of runnable known failed tests is empty"?

The way stepwise is implemented, it does not care about any known failed tests: it just runs all tests, stops as soon as it encounters a failure, and saves the position where it failed so it can continue from there. The approach of --lf is to remember failed tests, which in this case has the drawback that they have to fail in the first place.
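To make that concrete, here is a stripped-down conftest.py sketch of that idea built on pytest's cache. It is illustration only, not pytest-stepwise's real source; the class name, cache key and plugin name are all invented:

import pytest

class StepwiseSketch:
    def __init__(self, config):
        self.config = config
        # Position where the previous run stopped (None on a fresh start).
        self.last_failed = config.cache.get("stepwise_sketch/lastfailed", None)

    def pytest_collection_modifyitems(self, items):
        if not self.last_failed:
            return
        # Drop every test before the one that failed last time; those
        # already passed, so the run resumes from the failure point.
        for index, item in enumerate(items):
            if item.nodeid == self.last_failed:
                del items[:index]
                break

    def pytest_runtest_logreport(self, report):
        if report.when == "call" and report.failed:
            # Remember the failing test and stop the session right away.
            self.config.cache.set("stepwise_sketch/lastfailed", report.nodeid)
            pytest.exit("stepwise sketch: stopping after first failure")

    def pytest_sessionfinish(self, session, exitstatus):
        if exitstatus == 0:
            # Everything from the saved position onwards passed; start over next time.
            self.config.cache.set("stepwise_sketch/lastfailed", None)

def pytest_configure(config):
    config.pluginmanager.register(StepwiseSketch(config), "stepwise-sketch")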

@nicoddemus

Yep, the difference is subtle at first glance, but #14 (comment) exemplifies it well. 👍

@davidszotten
Contributor

Lest we forget, one pretty handy feature of pytest-stepwise is also the --skip option (which I guess we might want to rename to something a bit more specific if bringing this plugin into core).
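For reference, my reading of the plugin's README (paraphrased, so worth double-checking) is that it is used together with --sw:

$ py.test --sw --skip
ignores the currently remembered failing test and continues until the next failure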

@nip3o
Owner Author

nip3o commented Nov 15, 2018

🎉

@nip3o nip3o closed this as completed Nov 15, 2018
@nicoddemus

Awesome, thanks @nip3o!
