matchers verify_true fails when expr is false #44

Open

mmaypumphrey opened this issue May 14, 2013 · 12 comments

@mmaypumphrey

This toy test...

@pytest.marks('matchers')
def test_verify_true_with_true_condition(self):
    self.matchers.verify_true(1==1,"Expected to pass because expr true")
    self.matchers.verify_true(1==2,"Expected to fail because expr false")
    self.matchers.assert_true(3==3,"Expected to pass because expr pass")
    log_info("Finished!")

fails in two ways:

  1. It produces a stack trace instead of outputting our msg for the second verify_true call.
  2. It does NOT proceed on to the assert_true call as a soft assert should.

adobe-MacBookPro:WD mmaypump$ pysaunter -m matchers -v
================================================================================ test session starts =================================================================================
platform darwin -- Python 2.7.2 -- pytest-2.3.4 -- /Library/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python
plugins: marks, xdist
collected 186 items
version = 66
version = 66

scripts/Footer.py:13: CheckFooter.test_verify_true_with_true_condition FAILED

====================================================================================== FAILURES ======================================================================================
__________________________________________________________________ CheckFooter.test_verify_true_with_true_condition __________________________________________________________________
Traceback (most recent call last):
File "/Users/mmaypump/Desktop/conftest.py", line 26, in pytest_runtest_makereport
assert([] == item.parent.obj.verificationErrors)
AssertionError: assert [] == ['Expected to fail because e...assert bool(False) is True']
Right contains more items, first extra item: 'Expected to fail because expr false:\nassert bool(False) is True'
---------------------------------------------------------------------------------- Captured stderr -----------------------------------------------------------------------------------
10:42:58 INFO: Starting new HTTPS connection (1): secure.stage.echosign.com
10:42:58 DEBUG: "GET /version HTTP/1.1" 200 77
10:43:01 INFO: Finished!
---------------------------------------------- generated xml file: /Users/mmaypump/Desktop/qa/PYSAUNTER/WD/logs/2013-05-14-10-42-56.xml ----------------------------------------------
====================================================================== 185 tests deselected by "-m 'matchers'" =======================================================================
====================================================================== 1 failed, 185 deselected in 4.44 seconds ======================================================================
adobe-MacBookPro:WD

@dakotasmith

I'm not sure you can use the matchers.assert_ methods together with the matchers.verify_ methods, or at least I am uncertain of a clear expectation. I have always just used one or the other, because I found that when an assert would fail, it would stop the rest of the test execution, which is the opposite of soft asserts.

https://github.com/Element-34/py.saunter/blob/master/saunter/matchers.py#L67

Checking the source, it seems that any attempt to use matchers.assert_ is going to assert something. If that fails, you get a stack trace because of the pytest_runtest_call() method in your conftest.py file. Mine has something like this:

def pytest_runtest_call(item, __multicall__):
    try:
        __multicall__.execute()
    except Exception as e:
        if hasattr(item.parent.obj, 'driver') or hasattr(item.parent.obj, 'selenium'):
            item.parent.obj.take_named_screenshot('exception')
        raise

However, matchers.verify_ doesn't allow the exception to be raised. When an assert fails, it adds the user message or a generated message to self.verification_errors, which is then evaluated in pytest_runtest_makereport() in the same conftest.py file.

def pytest_runtest_makereport(__multicall__, item, call):
    if call.when == "call":
        try:
            assert([] == item.parent.obj.verificationErrors)
        except AssertionError:
            call.excinfo = py.code.ExceptionInfo()
    [...]

Since matchers.verify_true() is just a try/except around matchers.assert_true(), the "exception" that is written into the list of verification_errors is the message you are seeing.
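Roughly, the wrapper pattern looks like this (a simplified sketch of what's described above, not the exact py.saunter source; names may differ slightly):

class Matchers(object):
    # Simplified sketch of the assert_/verify_ split described above;
    # the real py.saunter Matchers class differs in detail.
    def __init__(self):
        self.verification_errors = []

    def assert_true(self, expr, msg=None):
        # hard assert: raises AssertionError and stops the test
        assert bool(expr) is True, msg

    def verify_true(self, expr, msg=None):
        # soft assert: swallow the AssertionError and record a message instead
        try:
            self.assert_true(expr, msg)
        except AssertionError as e:
            if msg:
                self.verification_errors.append("%s:\n%s" % (msg, str(e)))
            else:
                self.verification_errors.append(str(e))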

In your example, if the assert is first, it always runs. I know it's a trivial example, but is there any reason the assert_true couldn't be an additional verify_true?

@mmaypumphrey
Author

So, the calls to assert_* are supposed to stop the test and the calls to verify_* are supposed to NOT stop the test. Mixing them within a single test is highly desirable IMO.

The issue above is that the verify_* is stopping the test when it should NOT do that. Secondly, the msg param I specified is not getting output.

The problem is with my second verify_true, not the assert_true at the end.

@dakotasmith

Just trying to help out and investigate.

So, the calls to assert_* are supposed to stop the test and the calls to verify_* are supposed to NOT stop the test. Mixing them within a single test is highly desirable IMO.

I had never thought of it that way. For me it was a hard rule: calls to assert_* are to be used when a test should stop if it throws an exception; calls to verify_* collect messages into a list when an exception is thrown. Though mixing them may be desirable, I never saw it as a feature. In fact, as I've said, I take the opposite tack, which is to not use both verify_ and assert_ methods in one test.

The issue above is that the verify_* is stopping the test when it should NOT do that. Secondly, the msg param I specified is not getting output.

The difference between how assert_ and verify_ process failures comes down to how py.test, via conftest.py, "realizes" the failure and displays it to you. If your message is "Expected to fail because expr false", then I do see it being output, along with the stack trace that "caused" the failure, though that may not be what you expect.

Reference the pytest_runtest_makereport() method in conftest.py again. Since verification errors are appended to a list when using verify_, pytest asserts [] == self.verificationErrors, i.e. that an empty list is equal to the list of verification errors on the current test. Since the only exception this assert will throw is about the contents of the lists not being equal, it can only show you the messages you put in the list, not the entire stack trace, which is why I mentioned on the Google Group thread using very explicit messages.
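For example (a made-up check, just to illustrate why an explicit message matters; footer_link_visible is hypothetical):

# vague: the collected message alone tells you almost nothing
self.matchers.verify_true(footer_link_visible, "failed")

# explicit: the collected message identifies the exact check that failed
self.matchers.verify_true(footer_link_visible, "footer copyright link should be visible on the dashboard page")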

The problem is with my second verify_true, not the assert_true at the end.

Here is my attempt at understanding the problem as you have described it:

  • You expect the output for verify_ methods to be the same as assert_ methods, where each collected verification error displays the stack trace which led to the underlying assert failure.
  • When writing tests which I don't want to halt upon an expectation mismatch, I use verify_ the whole way through.
  • You would like to mix and match verify_ and assert_ to allow for some evaluations that DO halt the execution of your tests, and some evaluations that will not halt upon mismatch. That also sounds like an awesome feature.
  • You have a preference towards the stack trace style of reporting used in assert_. Admittedly, I like it too, but given how pytest is reporting and what verify_* does, I wasn't sure it would be possible.

Here is the unexpected behavior we both are observing:

While I'm not sure what to expect when mixing matchers.a* and matchers.v*, exceptions are silenced once you use matchers.verify_. I have created an example of that here.

https://gist.github.com/dakotasmith/7bbf7eab1df8d0ff5db6#file-verify_eats_exceptions-py

@dakotasmith

Have you tried just using assert, instead of the matchers.assert wrappers?

assert value == expectedvalue

Should still cause py.test to fail the test in the way you and py.saunter expect.
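For instance, the toy test above could keep its soft checks on matchers and use a bare assert for the hard check (a hypothetical sketch, not tested):

@pytest.marks('matchers')
def test_mixed_checks(self):
    # soft checks: failures are collected and execution continues
    self.matchers.verify_true(1 == 1, "Expected to pass because expr true")
    self.matchers.verify_true(1 == 2, "Expected to fail because expr false")
    # hard check: a bare assert raises immediately and halts the test
    assert 3 == 3
    log_info("Finished!")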

@mmaypumphrey
Author

The Python assert doesn't have a verify equivalent. It also doesn't support a msg argument from the test dev.

Your four bullet points summarizing the problem are not quite right. I do NOT want a stack trace--I want my own msg="" string displayed. What I'm getting (and complaining about here!) is a stack trace.

@dakotasmith

I understand Python assert doesn't have a verify equivalent. I was suggesting that when you are using matchers.verify_ and want to assert something in a way that throws an exception and halts your test, you use a plain assert instead of matchers.assert_.

@dakotasmith

As for the stack trace, I think the work here would be the creation of a VerificationError exception, and in conftest.py's pytest_runtest_makereport, ensure that

call.excinfo = py.code.ExceptionInfo() 

is inside a try/except block that raises a VerificationError, nested within the existing except AssertionError block. The message of VerificationError would be set to accept a list of msgs. But to be fair, I don't know what py.code.ExceptionInfo does when it is nested that way.

I'm pretty sure that as long as the except block containing the above line catches an AssertionError, using verify will return the stack trace you see, which does include the message, just with a bit of extra stuff around it. Change the exception, and the stack trace will change.
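Something like this is what I have in mind (an untested sketch; VerificationError is hypothetical and does not exist in py.saunter today):

class VerificationError(Exception):
    # hypothetical exception whose message is just the collected msgs
    def __init__(self, messages):
        super(VerificationError, self).__init__("\n".join(messages))

def pytest_runtest_makereport(__multicall__, item, call):
    if call.when == "call":
        try:
            if item.parent.obj.verificationErrors:
                # raising something other than a bare AssertionError changes what
                # py.test shows: only the collected messages, not the assert internals
                raise VerificationError(item.parent.obj.verificationErrors)
        except VerificationError:
            call.excinfo = py.code.ExceptionInfo()
    # rest of the hook unchanged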

I'll take a stab at something, but I'm not certain I'll solve it.

@adamgoucher
Member

In conftest.py, if you switch out the existing function for this one, does it display the way you want? I think it's better than what's currently done, since apparently py.test does a bit of monkeypatching of assert.

def pytest_runtest_makereport(__multicall__, item, call):
    if call.when == "call":
        try:
            if len(item.parent.obj.verificationErrors) != 0:
                raise AssertionError(item.parent.obj.verificationErrors)
        except AssertionError:
            call.excinfo = py.code.ExceptionInfo()

    report = __multicall__.execute()

    item.outcome = report.outcome

    if call.when == "call":
        if hasattr(item.parent.obj, 'config') and item.parent.obj.config.getboolean('SauceLabs', 'ondemand'):
            s = saunter.saucelabs.SauceLabs(item)

    return report

@adamgoucher
Member

Actually, here is a better one. It appears to work when:

  • verifications passed, assert failed
  • verifications failed, assert passed
  • verifications failed, assert failed

def pytest_runtest_makereport(__multicall__, item, call):
    if call.when == "call":
        try:
            if len(item.parent.obj.verificationErrors) != 0:
                if call.excinfo:
                    raise AssertionError((call.excinfo.exconly(), item.parent.obj.verificationErrors))
                else:
                    raise AssertionError(item.parent.obj.verificationErrors)
        except AssertionError:
            call.excinfo = py.code.ExceptionInfo()

    report = __multicall__.execute()

    item.outcome = report.outcome

    if call.when == "call":
        if hasattr(item.parent.obj, 'config') and item.parent.obj.config.getboolean('SauceLabs', 'ondemand'):
            s = saunter.saucelabs.SauceLabs(item)

    return report

@adamgoucher
Member

And if you want to get fancy:

                    raise AssertionError({"assert": call.excinfo.exconly(), "verifications": item.parent.obj.verificationErrors})

@mmaypumphrey
Author

Adam: Your conftest.py code (2 comments up) still doesn't work on pysaunter 0.54.

@ghost

ghost commented Sep 11, 2013

I have worked with mmaypumphrey recently and have also tried to implement the soft asserts in version 0.64 without success.
