Flaky reruns tests with failing teardown but test suite is failed #159

Open
dimaqq opened this issue Aug 9, 2019 · 3 comments

dimaqq commented Aug 9, 2019

flaky re-runs tests that fail in teardown, but the test suite result is always ERROR. 🤔

Meanwhile, if a test fails in the test body itself, it is rerun and the test suite result is success. 👌

With this example, flaky does the right thing:

# test_a.py
import random
from flaky import flaky


@flaky(max_runs=3)
def test_foo():
    assert random.random() < 0.3

Result:

> pytest test_a.py -v
========================== test session starts ==========================
platform darwin -- Python 3.7.3, pytest-4.6.5, py-1.8.0, pluggy-0.12.0 -- /.../bin/python
cachedir: .pytest_cache
rootdir: /...
plugins: asyncio-0.10.0, cov-2.7.1, flaky-3.6.1, freezegun-0.3.0.post1
collected 1 item

test_a.py::test_foo PASSED                                        [100%]
===Flaky Test Report===

test_foo failed (2 runs remaining out of 3).
	<class 'AssertionError'>
	assert 0.6181257110745261 < 0.3
 +  where 0.6181257110745261 = <built-in method random of Random object at 0x7fea1e00aa18>()
 +    where <built-in method random of Random object at 0x7fea1e00aa18> = random.random
	[<TracebackEntry /.../test_a.py:7>]
test_foo failed (1 runs remaining out of 3).
	<class 'AssertionError'>
	assert 0.6575059088357618 < 0.3
 +  where 0.6575059088357618 = <built-in method random of Random object at 0x7fea1e00aa18>()
 +    where <built-in method random of Random object at 0x7fea1e00aa18> = random.random
	[<TracebackEntry /.../test_a.py:7>]
test_foo passed 1 out of the required 1 times. Success!

===End Flaky Test Report===

======================= 1 passed in 0.04 seconds ========================

And with this example, the result is weird:

# test_b.py
import random
import pytest
from flaky import flaky


@pytest.fixture
def foobar():
    # no setup
    yield 42
    assert random.random() < 0.3


@flaky(max_runs=3)
def test_foo(foobar):
    pass

Result:

> pytest test_b.py
========================== test session starts ==========================
platform darwin -- Python 3.7.3, pytest-4.6.5, py-1.8.0, pluggy-0.12.0
rootdir: /...
plugins: asyncio-0.10.0, cov-2.7.1, flaky-3.6.1, freezegun-0.3.0.post1
collected 1 item

test_b.py .E                                                      [100%]

================================ ERRORS =================================
_____________________ ERROR at teardown of test_foo _____________________

    @pytest.fixture
    def foobar():
        # no setup
        yield 42
>       assert random.random() < 0.3
E       assert 0.7628703616086037 < 0.3
E        +  where 0.7628703616086037 = <built-in method random of Random object at 0x7fda9300aa18>()
E        +    where <built-in method random of Random object at 0x7fda9300aa18> = random.random

test_b.py:10: AssertionError
===Flaky Test Report===

test_foo passed 1 out of the required 1 times. Success!

===End Flaky Test Report===
=================== 1 passed, 1 error in 0.06 seconds ===================

Might be related to #135, but this report is more specific and concise.

dimaqq commented Aug 13, 2019

Perhaps the behaviour is deliberate, judging by these comments in flaky's source:

# Start flaky modifications
# only retry on call, not setup or teardown
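
For illustration only (this is not flaky's code), a minimal conftest.py hook shows the distinction that comment describes: pytest reports a fixture-finalizer failure with call.when == "teardown", so a plugin that only reacts to the "call" phase never considers it for a rerun.

# conftest.py -- illustrative sketch, not flaky's implementation
import pytest


@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    report = outcome.get_result()
    if report.failed:
        if call.when == "call":
            # the only phase a call-only retry policy would act on
            print(f"{item.name}: failed in the test body (eligible for rerun)")
        else:
            # fixture setup/teardown failures land here and are never retried
            print(f"{item.name}: failed during {call.when} (not retried)")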

@zdelagrange

Just wondering if anyone has worked around this or has a working fork for rerunning tests that fail in setup and/or teardown.

dimaqq commented Dec 5, 2020

We've stopped using flaky and just re-run pytest --lf a few times to catch flaky tests.
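
In case it helps anyone landing here, that workaround can be wrapped in a small driver script. A sketch, assuming plain pytest with the built-in --lf (last-failed) option and an arbitrary retry count of 2 (the script name is made up):

# rerun_flaky.py -- sketch of the "re-run pytest --lf a few times" workaround
import subprocess
import sys

# Full run first; subsequent runs repeat only the previously failed tests.
result = subprocess.run([sys.executable, "-m", "pytest"])
for _ in range(2):  # retry count is arbitrary
    if result.returncode == 0:
        break
    result = subprocess.run([sys.executable, "-m", "pytest", "--lf"])
sys.exit(result.returncode)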
