
Feature proposal: Capability to mark tests as xfail #122

Open · Alex-Izquierdo opened this issue Feb 14, 2023 · 7 comments

Comments

@Alex-Izquierdo

I had to write multiple assertions in the same test and this plugin is awesome. Just as you can capture one error and continue executing assertions, it would be great if, when an assertion fails, you could mark it as xfail, the same way you can for the whole test or a param.
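For context, a minimal sketch of the usage pattern being described here, assuming pytest-check's check fixture used as a soft-assertion context manager (device is a made-up stand-in, and the xfail marking mentioned in the comments is the requested feature, not existing API):

from types import SimpleNamespace

device = SimpleNamespace(power="on", firmware="2.0")  # stand-in test data

def test_device_status(check):
    with check:
        assert device.power == "on"      # a failure here is recorded and the test keeps going
    with check:
        assert device.firmware == "2.1"  # known to fail; the request is to mark just this
                                         # block as xfail, the way @pytest.mark.xfail
                                         # works for a whole test or parametrized case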

@okken
Owner

okken commented Mar 22, 2023

I'm glad you are finding the plugin useful.
This is an interesting request.
I'll think about it.
Are you using the check assert helper functions such as check.equal(), check.less(), etc?

With the functions, I could imagine an extra xfail parameter for the reason, for example:

def test_foo(check):
    check.equal(1, 2, xfail="they're not equal")

or more explicitly, perhaps:

def test_foo(check):
    check.equal(1, 2, xfail_reason="they're not equal")

For the context manager case, I'm not sure how that could be expressed. Perhaps also a parameter:

def test_foo(check):
    with check(xfail_reason="they're not equal"):
        assert 1 == 2

Something like this might work.
Although I don't think I could support XPASS with this.
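As a stopgap while the API is undecided, a user-side helper can approximate per-check xfail semantics without touching the plugin. A minimal sketch, using pytest-check only for the ordinary checks (soft_xfail is a hypothetical name, not part of pytest-check):

import warnings

def soft_xfail(condition, reason):
    """Hypothetical helper: report an expected failure as a warning instead of failing."""
    if not condition:
        warnings.warn(f"expected failure: {reason}")

def test_foo(check):
    check.equal(1, 1)                        # ordinary soft assertion via pytest-check
    soft_xfail(1 == 2, "they're not equal")  # expected to fail; does not fail the test

Like the parameter idea above, this cannot report XPASS: if the condition unexpectedly holds, nothing is flagged.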

@Alex-Izquierdo
Author

I'm using only the context manager, but I didn't know that I could define an xfail_reason with the helpers, so maybe I can use it for these cases. I previously discarded the helpers because I lose the introspection of pytest assertions.

Thanks for your answer.

@okken
Owner

okken commented Mar 22, 2023

Oh. You can't use xfail_reason, yet.
I'm just kicking around an API design idea.

@harmin-parra

I'd prefer xfail="they're not equal" over xfail_reason="they're not equal".

@harmin-parra

harmin-parra commented Aug 21, 2023

But actually, wouldn't it be better to have the possibility to choose between failed and xfailed as the outcome status for failed checks?

By default, the plugin gives failed checks the failed status.

Perhaps we could have the option to choose between failed and xfailed, globally and/or per test?

Something like this?

  1. Global configuration, with a pytest.ini option (see the sketch after this list):

     check_outcome_status=failed or xfailed

  2. Per-test configuration with decorators:

     • If we want fail (current behavior):

       @check.mark.fail
       def test_foo(check):
           check.equal(1, 2)

     • If we want xfail (requested feature):

       @check.mark.xfail(reason="my reason")
       def test_foo(check):
           check.equal(1, 2)
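If the global option route were taken, registering it would presumably look something like the following in the plugin's hooks. A sketch only, since check_outcome_status is just a proposal in this thread:

def pytest_addoption(parser):
    parser.addini(
        "check_outcome_status",
        help="Outcome to assign to failed checks: 'failed' (default) or 'xfailed'",
        default="failed",
    )

def pytest_configure(config):
    # the plugin would read the option here and report failed checks accordingly
    config.check_outcome_status = config.getini("check_outcome_status")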

@jessie-github

For me, I am using pytest-check especially because I have a lot of "expected fails" and I don't want them to stop the test.
So I will have little use for a general @check.mark.xfail attached to the test, but I would be happy to have an extra xfail parameter for check.
(And the status of the test should be xpass if at least one of the expected fails passes, but I suppose others may disagree.)

@skrawcz

skrawcz commented May 20, 2024

Would this proposal be useful in the following context? I'm trying to understand it more (and whether this library is something I want to invest in).

  1. For a test, I want to run all asserts, i.e. checks.
  2. But I only want to mark a few asserts as having the ability to fail a test. The ones marked with xfail could fail or pass without impacting the test overall.
  3. I'd still like to keep track of the results of all asserts, so that I can compare things over different runs.

Cheers!
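Reading the per-check parameter proposal above against that scenario, the test would look roughly like this (the xfail argument is the proposed API, not something that works today, and result is made-up data):

from types import SimpleNamespace

result = SimpleNamespace(status="ok", rows=98, score=0.95)  # stand-in pipeline output

def test_pipeline(check):
    check.equal(result.status, "ok")                        # must pass for the test to pass
    check.equal(result.rows, 100, xfail="flaky row count")  # proposed: may fail without failing the test
    check.greater(result.score, 0.9)                        # must pass for the test to pass
    # every check's outcome would still be recorded, so runs can be compared over time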
