Add TAP tests support #2923
Comments
Hmm... I kind of like this idea. I have an external test suite that runs a number of individual tests, and this protocol would work well for communicating the results, assuming that it can express XFAIL (expected failures). We could also make the 30-second timeout function as a per-test timeout if there were a way for test suites to communicate with the test harness like this.
This overlaps with #186
@bredelings XFAIL corresponds to the TODO directive: the test is run and fails, but because of the TODO directive it is reported in the XFAIL section of the results; if such a test passes, it is reported as XPASS instead. TAP is a streaming protocol, so there should be no problem with a 30-second delay before a particular test result is sent. Additional diagnostic information on a test failure can be sent in a YAMLish block: https://testanything.org/tap-version-13-specification.html#yaml-blocks At present there is no spec for sub-tests; I think one of the JavaScript frameworks came up with a good solution, though I can't recall if it was this one: http://www.node-tap.org/subtests/ TAP also has a large ecosystem of tools such as pretty printers.
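To illustrate the directives mentioned above, here is a small hand-written TAP 13 stream (the test names are invented): a failing test marked # TODO counts as an expected failure (XFAIL), a passing one would count as XPASS, # SKIP marks a skipped test, and the indented YAML block carries extra diagnostics for the plain failure.

```
TAP version 13
1..4
ok 1 - basic arithmetic
not ok 2 - unicode file names # TODO not implemented yet
ok 3 - remote fetch # SKIP requires network
not ok 4 - parser handles nested lists
  ---
  message: 'expected 3 items, got 2'
  severity: fail
  ...
```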
I have been using https://github.com/endlessm/webhelper/blob/master/test/tap.py to integrate a test harness that outputs TAP (not version 13, though) with Meson. However, Meson only supports reporting the results on a per-file level, not per-test. I'd really like to see per-test results, since I find the per-test output in Autotools useful. This is currently the only blocker for switching one of my projects to Meson.
FWIW this is how I get a list of tests from my test runner script and pass it to the Meson test harness. Each Meson …
So my … This approach has led me to wonder whether Meson … I've also noted that changing a test's description isn't currently reflected in the test harness; I assume it is cached by Meson.
This provides initial support for parsing TAP output. It detects failures and skipped tests without relying on the exit code, as well as early termination of the test due to an error or a crash. For now, subtests are not recorded in the TestRun object; however, because the TAP output goes to stdout, it is printed by --print-errorlogs when a test does not behave as expected. Handling subtests as TestRuns, and serializing them to JSON, can be added later. The parser was written specifically for Meson and comes with its own test suite. Fixes mesonbuild#2923.
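As a rough sketch of the kind of line-oriented parsing such a parser has to do (this is not Meson's actual code; the function name and regexes are my own), each line of the stream can be classified independently:

```python
import re

# Matches lines such as "not ok 3 - frobnicates widgets # TODO later".
TEST_LINE = re.compile(
    r"^(not )?ok\b(?: (\d+))?(?:[ -]*([^#]*))?(?:# *(TODO|SKIP)\b.*)?$",
    re.IGNORECASE,
)
PLAN_LINE = re.compile(r"^1\.\.(\d+)")

def classify(line):
    """Classify a single TAP line as a (kind, detail) pair."""
    line = line.rstrip()
    if m := PLAN_LINE.match(line):
        return ("plan", int(m.group(1)))      # number of planned tests
    if line.startswith("Bail out!"):
        return ("error", line)                # suite aborted early
    if m := TEST_LINE.match(line):
        passed = m.group(1) is None
        directive = (m.group(4) or "").upper()
        name = (m.group(3) or "").strip()
        if directive == "SKIP":
            return ("skip", name)
        if directive == "TODO":
            return ("xpass" if passed else "xfail", name)
        return ("pass" if passed else "fail", name)
    return ("diagnostic", line)               # comments, YAML blocks, anything else
```

For example, classify("not ok 2 - foo # TODO later") comes back as ("xfail", "foo").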
@jpakkane I believe this is not fully addressed by #4958, as Meson does not really integrate the tests the way it does with "native" tests. Currently Meson just records success or failure for the whole TAP suite, depending on whether any of its tests failed, instead of listing each subtest result individually the way I would expect. It also does not print the diagnostics properly, but just dumps the full TAP output.
It seems TODO is currently not handled correctly either, since a TODO directive indicates the test is not currently expected to succeed.
Another issue is #6810
Each test is handled as a single Meson test, because it's normal for a single TAP output to produce hundreds, or even thousands, of output lines. If for example you're using gtest, the way TAP was implemented means you have the same …
I don't really see it being useful at all in the current form. What's the point in TAP reporting detailed test output if Meson does not properly report it either? I compare the behavior to how Autotools does it, which is similar to what I described.
If basic success/fail parsing of a TAP test, rather than detailed output, is a needed feature for some cases, we could probably add a new kwarg for that? But a lot of people who used TAP with Autotools probably want more detailed output, not just a raw TAP dump on failure.
The detailed output is available if you use …
Yes, but I believe the current behavior is not how it should work. It's OK if you disagree, as people have different use cases, and thanks a lot for the initial support; I just think it is not integrated well enough yet and does not work out of the box the way one would expect.
Maybe it would make more sense to be able to specify that as testsuite() or something, though, so it's clear that it will be multiple tests rather than one test.
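For reference, my understanding is that the support that eventually landed is selected per test() via a protocol keyword rather than a new function; a minimal sketch, with invented target and file names:

```meson
# Hypothetical meson.build fragment; 'suite_runner.c' is an invented source name.
runner = executable('suite-runner', 'suite_runner.c')

# protocol : 'tap' asks the harness to parse the program's stdout as TAP
# instead of judging the result only by the exit code.
test('full suite', runner, protocol : 'tap')
```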
Personally, I prefer to see grouped results because the group leads me to which executable to run in order to reproduce the failure. So what I do is run "meson test -v", which lets me see which subtests failed and, ideally, cut-and-paste the command line into a shell (see #5025). But I can see how people can feel differently; these are things that affect the workflow quite directly.
In other words, I would keep the current overall behavior, but add support to …
That sounds good. My main issue currently is that, on failure, it just dumps the whole raw TAP output, which can be hard to read when there are a lot of tests. It would be nicer to have it show just the subtests that actually failed.
I too would like to see an explicit and pretty list of the failed tests. I can understand not listing individual TAP lines when things go as expected, but it would be good to see the counts in a summary, counting each TAP test line rather than only each executable. Seeing the counts of XFAIL, SKIP, etc. separately from just OK and FAIL would be really good, especially the SKIP count: if a few cases inside one test executable skip, that might be unexpected.
@ppaalanen, @ePirat, see #7830 for the plan around further TAP improvements.
It would be very useful if Meson supported the TAP protocol for tests; that would make it easy to write tests that do not have to be split up into individual binaries.
I could not find anything about that in the documentation, so I am quite sure this is not supported currently.
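As a hedged illustration of what this would enable (the script and test names are invented), a single script can report many test results over one TAP stream instead of being split into separate binaries:

```python
#!/usr/bin/env python3
"""Minimal hand-rolled TAP producer: one process, many test results."""
import sys

CHECKS = [
    # Each entry: (description, callable returning True on success).
    ("math still works", lambda: 1 + 1 == 2),
    ("strings concatenate", lambda: "a" + "b" == "ab"),
]

def main():
    print(f"1..{len(CHECKS)}")                # TAP plan line
    failures = 0
    for number, (name, check) in enumerate(CHECKS, start=1):
        try:
            ok = bool(check())
        except Exception as exc:              # report the error, keep the stream going
            print(f"# {exc}")
            ok = False
        print(f"{'ok' if ok else 'not ok'} {number} - {name}")
        failures += not ok
    return 1 if failures else 0

if __name__ == "__main__":
    sys.exit(main())
```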