add example scripts for issue #519 #2207
Conversation
Hmm, seems like an interesting idea, it certainly makes it easy to contribute new tests and facilitates debugging like you mention. Should those tests/acceptance tests always pass? I feel that would be too limiting; plenty of times we want to ascertain a different outcome (a failure or a skip). Perhaps an additional file which describes the expectation of the test? It could even be easily implemented in terms of what we already have. Consider the test:

```python
def test_1():
    pass

def test_2():
    assert 0
```

We could associate the expected outcome to that test file (using the same filename perhaps):

```yaml
- passed: 1
- failed: 1
```

Or:

```yaml
- fnmatch_lines:
  - '*1 passed, 1 failed*'
```

And so on. There's a risk of creating an entire meta-language for what is already relatively simple in Python, because some tests require more complex logic. Just some quick brainstorming, would like to hear if you have any other ideas.
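For illustration, here is a rough sketch (not something this PR adds) of how such an expectation could be checked with the existing `testdir` fixture from the `pytester` plugin; the module name and the inline `expected` dict are assumptions standing in for a real expectation file:

```python
# Sketch only: relies on pytest's existing pytester plugin ("testdir" fixture).
# Outside pytest's own test suite the plugin has to be enabled explicitly,
# e.g. with `pytest_plugins = ["pytester"]` in a conftest.py.

def test_example_script_matches_expectation(testdir):
    # Hypothetical verbatim example script; in the proposal above this would
    # be copied from a file in the examples directory.
    testdir.makepyfile(
        test_example="""
        def test_1():
            pass

        def test_2():
            assert 0
        """
    )
    # Stand-in for reading an adjacent expectation file (e.g. test_example.yml).
    expected = {"passed": 1, "failed": 1}

    result = testdir.runpytest()
    # assert_outcomes() and stdout.fnmatch_lines() already exist on the result
    # object, so no new meta-language is strictly required.
    result.assert_outcomes(passed=expected["passed"], failed=expected["failed"])
    result.stdout.fnmatch_lines(["*1 failed*"])
```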
if they are expected to fail, that should be in the filename; we should use them from tests in the real test folders, or be able to run them directly, and their folder should be in collect_ignore so we can only run them explicitly
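As a sketch of the collect_ignore idea, assuming the verbatim scripts live in a folder called `example_scripts/` (the name is an assumption), a conftest.py next to that folder could keep them out of normal collection:

```python
# conftest.py in the directory that contains the example scripts folder.
# The folder name "example_scripts" is an assumption for illustration;
# anything listed in collect_ignore is skipped during normal collection,
# so the scripts only run when py.test is pointed at them explicitly.
collect_ignore = ["example_scripts"]
```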
Force-pushed from 03a54be to 3410c8e.
I'm writing a pytest plugin to print JSON for each test report and I'm very interested in an examples directory! I was thinking of having at least these tests:
About the running of acceptance tests, I'm contributing to the Mercurial project, where we have an internal testing tool that lets us write tests like this:
We can then run the test-runner with the -i option so we can accept the differences:
The test file then is:
Any backward-incompatible change will change the output and make the test-runner mark the test file as failed. We use this test-runner extensively for testing Mercurial, and there has been some interest in extracting this test-runner from Mercurial into an independent project. I think it could be a good fit for pytest acceptance tests, what do you think?
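As a rough illustration of that record-or-compare workflow, here is a standard-library sketch (the real Mercurial test-runner and its -i option work differently; the file names and the accept flag below are assumptions):

```python
# Minimal sketch of a golden-output acceptance check: run a command, compare
# its output to a recorded file, and optionally re-record it (roughly the
# "accept the differences" workflow described above).
import subprocess
import sys
from pathlib import Path


def check_against_golden(cmd, golden_path, accept=False):
    actual = subprocess.run(cmd, capture_output=True, text=True).stdout
    golden = Path(golden_path)
    if accept or not golden.exists():
        # Record (or re-record) the expected output instead of failing.
        golden.write_text(actual)
        return
    expected = golden.read_text()
    # In practice the output would need normalization (timings, paths)
    # before comparing, otherwise the check would be flaky.
    assert actual == expected, (
        "output changed for %r; rerun with accept=True to update %s" % (cmd, golden)
    )


if __name__ == "__main__":
    # Hypothetical usage: compare the output of running py.test on one of the
    # example scripts against a stored .out file next to it.
    check_against_golden(
        [sys.executable, "-m", "pytest", "example_scripts/test_example.py"],
        "example_scripts/test_example.out",
        accept="--accept" in sys.argv,
    )
```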
@Lothiraldan i am fairly interested in something like the mercurial acceptance test framework; before, i just didn't have the time to reimplement it, and there is the licensing issue with the GPL
@RonnyPfannschmidt is there still interest in this?
@nicoddemus yes, but i need to start it in a different manner, i think
@nicoddemus i'd like to put the scripts in simply to have them there and do something with them
Can you elaborate on what you mean by "something"? If you are thinking of only having a set of verbatim test files that should pass, I think this would be a nice start. If one needs something more elaborate (to check for a specific message, or if a certain test should fail instead) then we can always fall back to using
@nicoddemus mainly i want to start adding a set of test files that demonstrate more elaborate/strange issues; in future i'd then like to use those in actual tests of pytest using the normal mechanisms, but right now i want to have them available in the repo for direct usage
@RonnyPfannschmidt got it. The test is currently failing, so it seems it still needs some work, as we can't have failing tests in the suite.
@nicoddemus the test is a known failure and shouldn't execute at all
Force-pushed from 4b52035 to ea90605.
LGTM, please just add a After that feel free to merge it! 👍
this pr simply adds a script that demonstrates issue #519
as far as i can tell by now it shows that the issue is no longer present
this is also a starting point for adding verbatim acceptance tests to the folder tree, in order to execute py.test directly on those (to get direct feedback and better debugging)
opinions by @nicoddemus and @The-Compiler appreciated