Feature request: report results in terms of passing tests (not subtests) #3740
Comments
This is something that I've often wanted as well, since I often want to know which areas have the most tests with some failure, and subtest counts are not a great guide to that. I think a …
As part of this, I think it's also a great opportunity to inline the test statuses so you don't have to click through, similar to what I suggested for reftests previously: #421
@jcscottiii has implemented something for this now and there's an RFC up at web-platform-tests/rfcs#190. @mathiasbynens can you try it out?
The staging URLs: … solve our use case perfectly. Thanks so much for prototyping this!
My use case: Throughout the year, I’d like to easily track the progress towards my team’s goal of increasing the test pass rate for a specific set of tests, for the specific browser we are working on.
Illustrative example: For the specific scenario I'm talking about, I've created the chromium-bidi-2023 label, which lets us view the results of the desired set of tests: https://wpt.fyi/results/webdriver/tests/bidi?q=label:chromium-bidi-2023

However, what's missing is an easy and stable way to interpret the test pass rate. The UI currently says things like "Subtest Total: 2667 / 2911", while the blue box at the top says "Showing 174 tests (3232 subtests)". As a user, it's unclear whether 2911 or 3232 is the total number of subtests. As it turns out, neither number is guaranteed to be correct: even the larger number (in the blue box) only counts subtests that have run at all (across any of the browsers), so it would still miss any subtests that time out on all browsers, for example. This makes it hard to use these metrics for progress tracking.
One possible solution could be to show total tests instead of subtests, e.g. “145 / 174”, which would be more stable — but I’m open to other ideas.