Inline WPT test information into specs #1116

Open · 6 tasks done · tabatkins opened this issue Oct 20, 2017 · 30 comments

@tabatkins commented Oct 20, 2017:

Background

Talking with fantasai, I learned that a major hurdle she has with testing is figuring out what is untested or under-tested. WPT has a view that at least sorts tests by ToC section, but that's not enough - you still have to manually read all of the tests, map them back to testable statements in the spec yourself, annotate your personal copy, and then look at what you have and what wasn't touched. Better would be to have test information inline in the spec, where after each paragraph (or even sentence!) you list the tests associated with that bit of text. This gives you an immediate view in the source of what's untested, and makes it easy to keep the tests in the right place when you move stuff around in the spec.

Benefits / Drawbacks

  • This is slightly more ongoing work for the spec author, as they have to keep the test listings in their spec up to date. It's substantially less work for the test-writer/maintainer, as they have an immediate view of what things are tested and what aren't. (In many cases these two people are the same, so it's a net win, I think.)
  • Technically duplicates existing data somewhat, as tests currently link to things they're testing. However, most such links are very coarse-grained; useful enough for reviewing the test, but of minimal usefulness for reviewing the test suite. They can also drift out of sync if the spec moves text around, with no mechanism for keeping them in sync; a longer-term goal here could be to replace the link-to-spec with automatically-extracted information from the spec itself, displayed in the test harness.
  • This produces "noise" in the spec, interrupting the flow of thought in the source. It's useful noise, and often won't be too much; I should investigate common editors' code-folding behavior and make sure the planned syntax is compatible with it.
  • WPT and the spec can now be kept in perfect sync, and with minimal adjustment needed for changes on either side. Moving a WPT folder just means updating the single metadata line in the spec declaring the path prefix (and in the meantime, the spec will throw a thousand errors, or maybe I'll detect that specially and throw a single one). Adding new tests to WPT will alert the spec author next time they build the spec, making it easy to review and insert them where needed.
  • There's an existing large backlog of specs with tests in WPT but no inline annotations, requiring a large update to add them all. On the other hand, it's a one-time cost, with clear benefits, and can be done relatively mindlessly by anyone in a PR, not just the spec author.

Why Hasn't This Been Done Before?

Very few people have ever worked on the spec-authoring side of the toolchain before; in recent memory, only Wattsi (very HTML-specific) and ReSpec (can't handle large quantities of data, like the test db) have competed with Bikeshed. Most people working on test-tooling, then, operate from the assumption that the specs are unchangeable, and direct their engineering efforts accordingly to work around specs and link into them, rather than having the specs link out. As such, I think it's simply been a matter of no one ever trying to work on it from this side.

Rough Design

  • Find and keep up-to-date a list of all the test paths that WPT knows about. (Done - using wpt manifest to get the list of paths. Stored in spec-data/wpt-tests.txt.)
  • Add a <wpt>foo/bar.html</wpt> element to Bikeshed. Takes test names/paths as text content, one test per line. Does not show up in the final document by default. (Done.)
  • Add a test-path-prefix metadata, so you can omit the common parts of your test paths. Probably add an attribute to <wpt> to let you override that, if for whatever reason you need to. The prefix is concatenated with the test name (with an intervening / if necessary) to locate the actual test. (Append to the prefix on a per-section/container basis, to let you specialize to a subfolder?) (Done, via WPT Path Prefix metadata, or the pathprefix attribute on <wpt>; see the sketch after this list.)
  • For each listed test, verify that it actually exists in WPT. Fatal error if any are wrong. Do a Levenshtein-distance search for possible typos? (Done, but typo-correction not done yet.)
  • For the specified prefix, verify that every test in WPT is listed in the spec. Fatal error if any are missing. (Multiple versions of a spec share the same folder; level N doesn't want to get errors for tests for level N+1 features. Maybe encourage grouping features by subfolder? Think about this more.) (Done; we'll see if this causes problems.)
  • Collect each test's title and asserts, which aren't currently collected by the manifest.
  • Controlled by some metadata, display the test information directly in the output. @fantasai’s draft HTML was <a title="test's <title> and assert" href="link to test runner for test">test name</a> <a href="link to test source">...</a>. (Done - if WPT Display: inline is set, <wpt> becomes a Tests block that lists the tests with several useful related links.)
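
For concreteness, here's a minimal sketch of how these pieces fit together in a Bikeshed source file (the spec text and test filenames are invented for illustration):

  <pre class=metadata>
  Title: Example Spec
  WPT Path Prefix: css/css-example/
  WPT Display: inline
  </pre>

  The 'example' property must do the thing.
  <wpt>
    example-001.html
    example-002.html
  </wpt>

  <wpt pathprefix="css/css-other/">
    borrowed-001.html
  </wpt>

Bikeshed resolves example-001.html to css/css-example/example-001.html, checks that it exists in the WPT manifest, and (because of WPT Display: inline) renders a Tests block at that point in the output.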
@tabatkins commented:

Issue: Sometimes we do combinatorial testing of a feature, which can result in huge numbers of tests for a single line. If these use a predictable naming scheme, maybe I can have <tests some-arg>foo-{arg}.html</tests>-style templating, which Bikeshed auto-expands for you? Need both sets of values and ranges, I think.
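
Sketching what that templating might look like (hypothetical syntax; the attribute names and filenames are invented here):

  <tests dir="ltr, rtl" size="1, 2, 3">
    feature-{dir}-{size}.html
  </tests>

which Bikeshed would expand to the cross product: feature-ltr-1.html, feature-ltr-2.html, and so on through feature-rtl-3.html (six tests).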

@astearns commented:

I don't think that automated typo-fixing would be that useful - probably better to require the actual test name so that there isn't any confusion or need to reverse-engineer the code to locate a test.

@astearns commented:

Multiple (versioned) specs will share the same test folder. So it shouldn't be an error that a level 3 draft isn't including a level 4 test in its unversioned test folder.

@tabatkins commented:

Oh, definitely not typo-fixing. Just typo-finding, so I can report suggestions in the error, like I do today for biblio link typos.

@domenic commented Oct 20, 2017:

This sounds pretty great!!

One thing I don't see addressed here is the cost to the spec author to continually monitor the appropriate directory and add appropriate annotations to their source spec as new tests flow in. This burden seems unavoidable on someone, and placing it on the spec author is IMO better than placing it on the test writer. But I think it's worth keeping in mind that the burden exists.

@astearns commented:

So the convention is to add the annotation after the assertion. Perhaps we could have a bulk-import option to automatically add these to the ends of sections based on the test metadata links (if present)? This could come with an additional note (automated-entry) so the spec editor could move it to a more specific location within the section. This could be used the first time you added the annotations to the spec, and/or each time a large number of new tests needed to be accounted for.

@tabatkins commented:

> One thing I don't see addressed here is the cost to the spec author to continually monitor the appropriate directory and add appropriate annotations to their source spec as new tests flow in.

Check the fifth bullet-point in the task-list - Bikeshed knows the current state of WPT (if you've run bikeshed update recently), and can tell you if new tests have been added that you're not currently tracking, so there's no need for the spec author to proactively monitor a separate location; they just need to add stuff when Bikeshed complains at them (and gives them links and advice to make this as easy as possible).

> So the convention is to add the annotation after the assertion. Perhaps we could have a bulk-import option to automatically add these to the ends of sections based on the test metadata links (if present)? This could come with an additional note (automated-entry) so the spec editor could move it to a more specific location within the section. This could be used the first time you added the annotations to the spec, and/or each time a large number of new tests needed to be accounted for.

Might be reasonable. I can still throw a warning while automated-placement tests exist, reminding you to move them to the correct place, but at least letting you opt into the prefix-watching immediately.

@gsnedders commented:

> One thing I don't see addressed here is the cost to the spec author to continually monitor the appropriate directory and add appropriate annotations to their source spec as new tests flow in. This burden seems unavoidable on someone, and placing it on the spec author is IMO better than placing it on the test writer. But I think it's worth keeping in mind that the burden exists.

This seems like it'll be particularly burdensome whenever there's a large number of new tests all being written more or less at once (either because of mass-upstreaming, or during initial implementation).

A few other thoughts:

  • There's plenty of cases where there's a single test file for a section with lots of subtests. That doesn't work with just linking to the file.

  • Given talk about replacing the current CSS test harness inline result boxes with something pulling data from https://wpt.fyi (and therefore more generally up-to-date as well as covering all of WPT), how should we do this? There was a fair bit of talk about doing this in JS, given you aren't then limited to when the draft was last built, and we probably want to include tests we only have a vague idea of where they apply to, and we probably don't want a box for every normative requirement (given that'd be way too many!).

  • Can you see any sensible way to keep the metadata in sync between the test and in the spec? Do we want a warning if the test links to a different anchor or section to where the spec links to it?

  • How do we determine what sections a test is testing? The previous suggestion here was to infer it off of directory structure (with some helper file) or <link rel=help>, but if the metadata then also lives in the spec we need to also get the data from somewhere that gets it from the Bikeshed source file?

@tabatkins commented:

> There's plenty of cases where there's a single test file for a section with lots of subtests. That doesn't work with just linking to the file.

That's definitely a hard problem, and I'm not sure what to do about it. I'm wondering if part of the reason for big single-page tests is because it's annoying to do extra tracking and reviewing for multiple tests? Maybe this (and further improvements) can shift the rewards here.

> Given talk about replacing the current CSS test harness inline result boxes with something pulling data from wpt.fyi

This issue would at least give us better targeting of the boxes. Haven't given any thought to how it would interact, tho. We could run with the same basic design as the CanIUse panels that Bikeshed currently adds, which are as up-to-date as the last time you ran bikeshed update. That means that a spec not generated via CI wouldn't get the most up-to-date stuff, but it results in better perf when viewing the spec.

> Can you see any sensible way to keep the metadata in sync between the test and in the spec? Do we want a warning if the test links to a different anchor or section to where the spec links to it?

Yes, I think so. The plan already has me gathering the title/assert from the test file; I can grab more metadata as needed and do whatever linting is useful based on it. (I might, btw, be putting a bug on y'all to track this info in the manifest file instead, so I don't have to parse everything on my own, and others can get the same info.)

> How do we determine what sections a test is testing? The previous suggestion here was to infer it off of directory structure (with some helper file) or <link rel=help>, but if the metadata then also lives in the spec we need to also get the data from somewhere that gets it from the Bikeshed source file?

Right, Shepherd (or another service that knows about the set of Bikeshed specs) can parse the HTML of the with-tests output file, and automagically get the data out. This doesn't seem particularly hard.

@foolip commented Oct 26, 2017:

This is very exciting! @rwaldron, given your work on inline annotations in https://rwaldron.github.io/webrtc-pc/, WDYT about this approach?

Taking https://fullscreen.spec.whatwg.org/ as an example, there is something I imagine I'd quickly want, namely to associate https://fullscreen.spec.whatwg.org/#dom-element-requestfullscreen with fullscreen/api/element-request-fullscreen-*.html (wildcard) in wpt.

As for presenting the tests, it would be pretty great to have fresh results from wpt.fyi show up. @lukebjerring, you added an endpoint for the latest test results recently, can you point us to that?

@tabatkins commented:

> namely to associate fullscreen.spec.whatwg.org/#dom-element-requestfullscreen with fullscreen/api/element-request-fullscreen-*.html (wildcard) in wpt.

Explicitly a non-goal. The point is to put the tests next to the text they're testing. Wildcarding like that is basically equivalent to the existing Test Suite metadata (aka almost nil value for reviewing test-suite coverage).

@lukebjerring commented:

Currently https://wpt.fyi/latest/{platform}/[test path] redirects to JSON for the test run.

This may change with web-platform-tests/results-collection#161 and web-platform-tests/results-collection#172 being resolved.

@tabatkins commented Oct 27, 2017:

Desirable syntaxes and outputs, from @fantasai:

Compact

<tests>test1, test2, ...</tests>
=>
block holding compact list of the tests+info

Annotations

<tests>
  free md text...
  <t>test1</t>, or some autolinking-ish shorthand
</tests>
=>
block containing the formatted text, 
with the tests formatted in the same way as above, but inline

Table of related tests

<tests var-1="foo, bar" var-2="baz, qux">
 test-{1}-{2}
</tests>
=>
table of tests generated according to the template
(remember that there may be more than 2 dimensions)
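
For clarity, the last template would presumably expand to the cross product of the variable values:

  test-foo-baz, test-foo-qux,
  test-bar-baz, test-bar-qux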

@gsnedders commented:

> That's definitely a hard problem, and I'm not sure what to do about it. I'm wondering if part of the reason for big single-page tests is because it's annoying to do extra tracking and reviewing for multiple tests? Maybe this (and further improvements) can shift the rewards here.

Even something relatively small like https://github.com/w3c/web-platform-tests/blob/master/dom/nodes/Node-isEqualNode.html could be problematic. It seems overkill to me to split it up into separate files for each different case in https://dom.spec.whatwg.org/#concept-node-equals (and IMO would be much less maintainable as a testsuite like that given you'll constantly be opening and closing new files looking at very similar but very slightly different tests).

Do we know how non-CSS spec authors feel about this? Do they want to move over to having such fine-grained metadata at all, or does the current directory-structure approach suffice? It's only the CSS WG that seems to have much problem with looking at test coverage, and I don't know if that means the CSS WG cares far more about test coverage than any other group (which seems unlikely given the relatively small test suites) or if there's something else different.

@tabatkins commented:

> It's only the CSS WG that seems to have much problem with looking at test coverage,

I... strongly doubt that. There are no automated coverage trackers for specs, so I'm nearly certain that "doesn't have a problem with" actually means "doesn't really know what their test coverage is". The current method for determining coverage is annoying and labor-intensive, so I doubt that's done basically ever; at best, people are relying on "I wrote a pretty comprehensive suite at Time N, and since then I've required tests for every new addition"—a "dead reckoning" style of coverage tracking vs navigating based on actual information.

@tabatkins commented:

> Even something relatively small like https://github.com/w3c/web-platform-tests/blob/master/dom/nodes/Node-isEqualNode.html could be problematic. It seems overkill to me to split it up into separate files for each different case in https://dom.spec.whatwg.org/#concept-node-equals (and IMO would be much less maintainable as a testsuite like that given you'll constantly be opening and closing new files looking at very similar but very slightly different tests).

That's fine; it's a single small dl, so put all of its tests together. Or split them up per dt/dd pair; that's easy too - it's like five cases, and you won't have to remember to add a new test if you add a new case (because the lack of a test will be obvious).

@foolip commented Oct 31, 2017:

> > namely to associate fullscreen.spec.whatwg.org/#dom-element-requestfullscreen with fullscreen/api/element-request-fullscreen-*.html (wildcard) in wpt.

> Explicitly a non-goal. The point is to put the tests next to the text they're testing. Wildcarding like that is basically equivalent to the existing Test Suite metadata (aka almost nil value for reviewing test-suite coverage).

Is Test Suite metadata something that's generated from <link rel=help> for CSS specs? In any case, let me give more examples:

The requestFullscreen() algorithm is big and some granularity could be useful, but here I can't see myself spending time splitting tests across a few lines of spec text, where the list of tests would get longer than the spec text itself.

Does that make sense?

@rwaldron commented Nov 1, 2017:

@foolip sorry, somehow I missed your mention. This definitely sounds interesting to me and I'd like to "test drive" the idea on some existing spec content to see how the authoring flow works.

@marcoscaceres commented:

Simple implementation on the ReSpec side (look for "tests: 1"):
https://w3c.github.io/payment-request/#abort-method

It just uses:

<foo data-tests="name-of-test.html">

And it uses the test suite URL from the config. Will see what we need to do to align with this proposal.
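
For context, a minimal sketch of the markup and config involved (the section content and test filename here are illustrative):

  <script class="remove">
    var respecConfig = {
      // data-tests paths are resolved against this base URL
      testSuiteURI: "https://w3c-test.org/payment-request/",
    };
  </script>

  <section data-tests="payment-request-abort.https.html">
    <h2>abort() method</h2>
    ...
  </section>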

@gregwhitworth commented:

I'm super excited about this - adding myself to this thread to keep track of changes.

@tabatkins commented Jun 8, 2018:

Some feature suggestions for v2:

  • Inline test annotations (a <test>/path/to/test</test> element?)
  • Test-block summaries (a <wpt><summary /></wpt> element?)
  • A freeform annotation block that allows test listings at arbitrary points (<wpt-notes>...html...<wpt>/path</wpt>...html...</wpt-notes>?)
  • If WPT Path Prefix is specified, link to the wpt.fyi results summary for that prefix
  • Add a WPT Display option that renders the test block with collapsed <details> (lighter-weight visually and more suitable for general consumption)
  • Collect each test's title and asserts, which aren't currently collected by the manifest. (punted from v1)
  • Show current test results inline - something more compact than wpt.fyi, just the browser icons against a green or red background.
  • If a spec is using WPT at all, fire WARNINGs for sections that don't have tests and aren't marked informative.
  • Display a "Test Index" on all specs with tests, regardless of WPT Display value.
  • Test-name templating

@fantasai as the source of some of these suggestions, since they wanted to be subscribed to the thread ^_^

@gregwhitworth commented Jun 8, 2018:

Whoa, how did I miss this?!? By the looks of it, there's a new prop at the top of the bs file pointing to the central w3 test suite (e.g. http://w3c-test.org/css/css-flexbox/), and then it grabs all of the test info?

I agree with @fantasai that I want inline testing, as that lets you know whether you do or don't have a test for a specific line of text. The above accomplishes similar capabilities to the script I wrote (although having it in the build process is good), which gives you a rough idea of the counts but won't let you know if a specific line is tested (unless I'm missing something in the code - which is possible ;) )

@tabatkins commented:

@gregwhitworth I recommend reading the documentation at https://tabatkins.github.io/bikeshed/#testing; by "inline" I mean as in "inline element in the middle of a sentence". Putting the tests in particular parts of the spec is already supported (and explicitly encouraged); it just makes a block element. ^_^

@gregwhitworth commented:

@tabatkins beautiful - thanks!

@plehegar commented Jun 8, 2018:

Side note here: w3c-test.org isn't meant to be a stable server, since it's running wptserve. If we link to it more from specs, we should make sure we're OK with the server crashing from time to time. This would contradict w3c/specberus#758, btw.

@zcorpan commented Jun 20, 2018:

I think it would be great if it was possible to determine test coverage for specs using this feature. I'm not sure what is needed to make that possible, but maybe a way for tests to be explicitly associated with a piece of spec text (e.g. instead of a <wpt> element, a global wpt="" attribute that you can use on <p>, <li> etc), and a way to annotate elements/sections as not needing any tests.
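
A rough sketch of what that could look like (entirely hypothetical syntax; the wpt="" and wpt-no-tests attribute names and filenames are invented here):

  <p wpt="feature/green-001.html feature/green-002.html">
    The flag must be green.
  </p>

  <p wpt-no-tests>
    This is a statement of fact that needs no tests.
  </p>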

@marcoscaceres commented:

Well, everything in a spec is normative, unless it's a note, issue, figure, etc. or explicitly marked as informative. Thus, it's theoretically possible to identify untested sections - particularly algorithms - in a spec (of course, YMMV).

@zcorpan commented Jun 21, 2018:

You can also have statements of fact and definitions, which may not need any tests. But maybe it's possible to detect those (check for lack of RFC 2119 keywords, check for <dfn>).

@tabatkins commented:

Detail: once I have subtest-addressing, how should I react when you link to some of the subtests but not all?

It was pointed out in the ecosystem-infra call that there shouldn't be a semantic difference between subtests-in-one-file and tests-in-one-folder; which you use is just an implementation difference based on which is easier in a given case. The two cases should be handled in parallel.

So, since I think it's reasonable to be able to say "all of the subtests in this file are for this particular feature" (and have Bikeshed automatically expand it to all the subtests), it also sounds reasonable to be able to say "all of the tests in this folder are for this particular feature" (and have Bikeshed automatically expand that too). This'll be similar to <wpt-rest> handling, where anything explicitly listed elsewhere is excluded. Probably a path feature like foo/* for folders, or foo.html#* for subfiles.

(This potentially suffers from the "later people put more tests in this folder/file that aren't meant to show up at this spot in the spec", but I figure it's useful enough that it overrides this downside.)
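
A sketch of the proposed path features (not implemented at the time of this comment; the folder and file names are invented):

  <wpt>
    flex-algorithm/*
    flex-basis.html#*
  </wpt>

Here the first entry would expand to every test in the flex-algorithm/ folder, and the second to every subtest in flex-basis.html, in both cases excluding anything already listed explicitly elsewhere in the spec.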

sideshowbarker added a commit to whatwg/wattsi that referenced this issue Aug 26, 2018

This change adds initial support for the `<wpt>` element, as documented
at https://tabatkins.github.io/bikeshed/#wpt-element and discussed at
speced/bikeshed#1116.

This change causes lists of tests in `<wpt>` elements from the source to
generate TESTS sections in the spec output, with links to corresponding
https://github.com/web-platform-tests/wpt, http://web-platform-tests.live,
and https://wpt.fyi URLs for the listed tests.

The change doesn’t provide the following `<wpt>`-related features:

  - Doesn’t yet verify that each test listed in a `<wpt>` element
    actually exists in https://github.com/web-platform-tests/wpt/

  - Doesn’t yet verify that every single test file that exists in the
    https://github.com/web-platform-tests/wpt/tree/master/html tree is
    listed in a `<wpt>` element somewhere in the spec source.

Fixes #87
sideshowbarker added a commit to whatwg/wattsi that referenced this issue Aug 30, 2018

This change adds support for generating output from the `<wpt>` element,
as documented at https://tabatkins.github.io/bikeshed/#wpt-element and
discussed at speced/bikeshed#1116.

Specifically, this change causes lists of tests in `<wpt>` elements from
the source to generate TESTS sections in the spec output, with links to
corresponding https://github.com/web-platform-tests/wpt, https://wpt.fyi
and https://web-platform-tests.live URLs for the listed tests.

The change intentionally doesn’t provide the following `<wpt>`-related
features:

  - Doesn’t verify that each test listed in a `<wpt>` element
    actually exists in https://github.com/web-platform-tests/wpt/

  - Doesn’t verify that every single test file that exists in the
    https://github.com/web-platform-tests/wpt/tree/master/html tree is
    listed in a `<wpt>` element somewhere in the spec source.

Fixes #87
domenic pushed a commit to whatwg/wattsi that referenced this issue Oct 21, 2018

This adds support for generating output from the `<wpt>` element, as documented at https://tabatkins.github.io/bikeshed/#wpt-element and discussed at speced/bikeshed#1116.

Specifically, this change causes lists of tests in `<wpt>` elements from the source to generate TESTS sections in the spec output, with links to corresponding https://github.com/web-platform-tests/wpt, https://wpt.fyi and https://web-platform-tests.live URLs for the listed tests.

The change intentionally doesn’t provide the following `<wpt>`-related features:

- Doesn’t verify that each test listed in a `<wpt>` element actually exists in https://github.com/web-platform-tests/wpt/

- Doesn’t verify that every single test file that exists in the https://github.com/web-platform-tests/wpt/tree/master/html tree is listed in a `<wpt>` element somewhere in the spec source.

Fixes #87.
@svgeesus commented:

Unclear why this issue is still open; the proposed feature is implemented and in widespread use. Close?
