
Backpressure exposure for asynchronous send() #158

Open
agoode opened this issue Feb 10, 2016 · 85 comments
Labels: category: new feature · Needs Discussion (The issue needs more discussion before it can be fixed.) · Priority: Soon

Comments

@agoode

agoode commented Feb 10, 2016

We have so far been able to use synchronous send but this provides no mechanism for backpressure.

The current implementation in Chrome uses large internal buffers to hide this issue, but this is wasteful and results in the browser silently dropping messages when the buffer is overrun. This condition is not common with normal MIDI messages, but sysex messages can be of arbitrary length and can be slow to send, so Chrome puts limits on those as well.

In order to allow implementations to avoid these buffers and arbitrary limits, we need a mechanism for non-blocking send.

@yutakahirano

Is the Streams API useful for this use case? cc: @domenic @tyoshino

@domenic

domenic commented Feb 10, 2016

This does sound like a pretty good fit for a writable stream... In fact, MIDIOutput looks pretty WritableStream-like in general. WritableStream is specifically designed to take care of this queuing and backpressure-signaling concern for you, so that other specs can build on top of it and reuse the architecture.

I'm not sure about MIDIInput, as I'm not familiar enough with MIDI. The choice would be between the current spec's no-backpressure, event-based model (where if you aren't subscribed you miss the event), vs. a readable stream model with the data buffered for future reading, and potential backpressure when it's not read.

Unfortunately the spec's current setup where MIDIInput and MIDIOutput both derive from MIDIPort doesn't seem very easy to fit with the stream API. I'm not sure how we'd do this without just creating a parallel WritableMIDIPort type (and maybe ReadableMIDIPort).

There's also the issue that writable streams need some spec love, but if this is an urgent use case I can turn my attention to them and get that straightened out quickly.

@agoode
Author

agoode commented Feb 11, 2016

The event-based model for MIDIInput is fine, since MIDI is essentially a multicast system, with no backpressure possible at the protocol level. MIDI doesn't guarantee reliable delivery.

Can WritableStream handle framing? We do parse the bytes and require that they are valid MIDI packets (1, 2, or 3 bytes in the common case, or arbitrary length in the case of sysex). Is this implementable as chunks in the streams standard? Can a stream reject an invalid chunk?

@domenic

domenic commented Feb 11, 2016

Can WritableStream handle framing? We do parse the bytes and require that they are valid MIDI packets (1, 2, or 3 bytes in the common case, or arbitrary length in the case of sysex). Is this implementable as chunks in the streams standard? Can a stream reject an invalid chunk?

Yes, for sure. The underlying sink's write hook (and optional writev hook, once we get that working) can process the data in arbitrary ways, returning a promise that settles when the processing is finished; that promise can be rejected to indicate that the input was invalid.
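For illustration, a minimal sketch of such a sink using the current WritableStream constructor shape; validateMidiMessage() and sendToDevice() are hypothetical stand-ins for the UA-internal framing check and the hardware write:

// Sketch only: validateMidiMessage() and sendToDevice() are hypothetical.
// Chunks are assumed to be { data: Uint8Array, timestamp: number }.
const midiSink = new WritableStream({
  write(chunk) {
    if (!validateMidiMessage(chunk.data)) {
      // Rejecting errors the corresponding write() promise, telling the
      // producer the chunk was not a valid MIDI message.
      return Promise.reject(new TypeError("invalid MIDI message"));
    }
    // The returned promise resolves once the device accepts the bytes;
    // until then the stream's queue fills up and backpressure is applied.
    return sendToDevice(chunk.data, chunk.timestamp);
  }
});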

@jussi-kalliokoski
Member

👍 for using streams from me.

We could add a new method to MIDIPort, Promise<Stream> openAsStream(), to preserve backward compatibility.

However, how do we solve sending timed messages?

@bome

bome commented Feb 11, 2016

I'm somewhat reluctant to add streams; it seems like it would mainly add clutter to the Web MIDI API and implementation while providing very little new functionality. Can't we just add a function sendAsync(data, timestamp) that returns a Promise, or define a MIDIOutput listener which fires whenever a MIDI message is delivered?

results in the browser silently dropping messages

That's really bad. In that case, the send() function should either block, or throw an exception of some sort (which would need to be defined in the spec).
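For illustration, a rough sketch of how such a promise-returning sendAsync could be used; sendAsync is only the suggestion above, not part of the spec:

// Hypothetical sendAsync(data, timestamp): resolves once the message has
// been handed off to the device, rejects if it was refused or dropped.
async function sendPatchDump(output, sysexMessages) {
  for (const msg of sysexMessages) {
    await output.sendAsync(msg);  // waits instead of overflowing a hidden buffer
  }
}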

@cwilso
Contributor

cwilso commented Feb 11, 2016

I'm REALLY uncomfortable rebasing on top of Streams, since it's highly unlikely you would be piping any ReadableStream into a MIDI output device. I think the best way to solve this is to have send() return a Promise (that resolves when it's been sent to the hardware) and expose the "current available output message size" (i.e. the maximum chunk size you can write right now). This would enable things that currently work to keep working; it would also enable developers who want to send large amounts of sysex data to do so.
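A rough sketch of how that shape could look in use; both the promise-returning send() and the availableMessageSize attribute are hypothetical names for the proposal above:

// Hypothetical: send() returns a Promise that resolves when the bytes
// have been handed to the hardware, and availableMessageSize is the
// largest chunk the port can accept right now.
async function sendWhenRoom(output, message) {
  while (output.availableMessageSize < message.data.length) {
    await new Promise(r => setTimeout(r, 10)); // crude poll; a real API would expose an event
  }
  return output.send(message.data, message.timestamp);
}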


@jussi-kalliokoski
Member

I think the best way to solve this is to have send() return a Promise (that resolves when it's been sent to the hardware) and expose the "current available output message size"

This would just be duplicating stream functionality, except in a more awkward and less interoperable way. I can see plenty of use cases for MIDI messages in Streams. For example:

  • Developers can use general-purpose stream transform utilities (such as buffering with different strategies like FIFO to ensure all messages get delivered in order, or another strategy that makes sure certain messages take priority).
  • Pipe messages from one client's MIDIInput to another's MIDIOutput via P2P communications or WebSockets (see the sketch below).
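For the second case, a rough sketch assuming the openAsStream() idea above and a WebSocketStream-style transport (both are illustrative, not settled API):

// Pipe MIDI input chunks to a remote peer; backpressure propagates
// from the socket back through the pipe to the MIDI source.
async function forwardInput(midiAccess, url) {
  const input = [...midiAccess.inputs.values()][0];
  const readable = await input.openAsStream();            // proposed, not shipped
  const { writable } = await new WebSocketStream(url).opened;
  // Serialize each { data, timestamp } chunk before it hits the socket.
  const toBytes = new TransformStream({
    transform(chunk, controller) { controller.enqueue(chunk.data); }
  });
  await readable.pipeThrough(toBytes).pipeTo(writable);
}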

@domenic

domenic commented Feb 11, 2016

I agree that such a design is just duplicating streams and will need to reinvent the queuing mechanisms and so forth that they are specifically designed to let other specs reuse. The "current available output message size" is further evidence of that kind of duplication (it duplicates the desiredSize property of a writable stream, once whatwg/streams#318 is implemented). It's similar to reinventing an IDBRequest-style object when promises are available, based on the speculation that nobody would ever need to reuse the return value inside a promise callback.

However, how do we solve sending timed messages?

I'd assume each chunk would be of the form { timestamp, data } or similar.

@jussi-kalliokoski
Member

I'd assume each chunk would be of the form { timestamp, data } or similar.

That sounds reasonable.

@cwilso
Contributor

cwilso commented Feb 14, 2016

Can someone who understands streams put together a sketch of an API based on it, and examples of simple and complex usage?

@jussi-kalliokoski
Member

Maybe something like this:

interface MIDIOutput {
  // ...
  Promise<WritableStream> openAsStream({ size, highWaterMark });
}

interface MIDIInput {
  // ...
  Promise<ReadableStream> openAsStream({ size, highWaterMark });
}

Here are two usage examples: the first is just a dumb, non-synced sequencer writing to the first available MIDIOutput; the second takes advantage of piping streams to pipe from the first MIDIInput to the first MIDIOutput, filtering out messages that aren't note on or note off, pushing all messages to channel 3, pitch shifting by 2 octaves, and buffering based on the number of messages with a high water mark of 1000.

EDIT: Note that the advanced example uses TransformStream, which is not in the Streams spec yet.
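For illustration, a minimal sketch of the first case, assuming the openAsStream() shape above and chunks of the form { data, timestamp }:

// Dumb, non-synced sequencer: write a fixed list of events to the first
// available MIDIOutput, relying on await to respect backpressure.
// openAsStream() is the proposal above, not a shipped API.
async function playNotes() {
  const access = await navigator.requestMIDIAccess();
  const output = [...access.outputs.values()][0];
  const stream = await output.openAsStream({ highWaterMark: 16 });
  const writer = stream.getWriter();
  const events = [
    { data: [0x90, 60, 0x7f] },  // note on, middle C
    { data: [0x80, 60, 0x40] },  // note off
  ];
  for (const event of events) {
    await writer.write(event);   // resolves when the chunk is processed
  }
  await writer.close();
}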

@toyoshim
Contributor

Replying to Adam's initial description:
send is not actually synchronous. If it were, we could just return a boolean, but we cannot. Also, generally speaking, we should avoid synchronous APIs on the main thread as much as possible. So what we want to do in this thread is add a reliable send-like method.

Maybe using Streams would be the right approach, but I feel it's a little complicated, as Chris said. Also, I feel Web MIDI should be aligned with other similar modern APIs; for instance, Web Bluetooth and WebUSB return Promises.

So my preference is something like bome's sendAsync. But I feel sendAsync is a confusing name since, as I said, the existing send is also async.

Here is my proposal.
send: a user can use it for short messages and sysex, but the internal implementation may drop a message if the user tries to send a lot of data in a short time. Also, very long sysex messages would always fail to send.
sendLong: returns a Promise and cannot be called again before the previous call finishes. The internal implementation never drops a message because of buffer limits, but it does not guarantee that the message reaches the device, because the MIDI transport layer has no fail-safe mechanisms and the message could be dropped on an abrupt browser or OS shutdown. We probably need to allow sending long sysex messages as fragments.
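A rough sketch of how the proposed sendLong might be used; sendLong is hypothetical, and the point is that awaiting the returned Promise serializes the calls:

// Hypothetical sendLong(): must not be called again until the previous
// call settles, so each complete sysex message is awaited in turn.
async function sendSysexDump(output, sysexMessages) {
  for (const msg of sysexMessages) {
    try {
      await output.sendLong(msg);   // buffered without arbitrary limits
    } catch (e) {
      // A failed in-flight message stops the dump so later messages
      // cannot arrive ahead of it out of order.
      console.error("sysex send failed:", e);
      break;
    }
  }
}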

@domenic

domenic commented Feb 16, 2016

The sketch in #158 (comment) is pretty reasonable, although we'd update it to use the writer design. I'll add some explicit demonstrations of backpressure support:

destination.openAsStream().then(s => s.getWriter()).then(async (writer) => {
  console.log(writer.desiredSize); // how many bytes (or other units) we "should" write

  // note: 0 means "please stop sending stuff", not "the buffer is full and
  // we will start discarding data". So, desiredSize can go negative.

  writer.write({ data: data1, timestamp: ts1 }); // disregard backpressure

  // wait for successful processing before writing more
  await writer.write({ data: data2, timestamp: ts2 });

  await writer.waitForDesiredSize(100); // wait until desiredSize goes >= 100

  writer.write({ data: oneHundredTwentyBytes, timestamp: ts3 });

  await writer.waitForDesiredSize(); // wait until desiredSize goes >= high water mark

  // also, maybe default timestamp to performance.now()?
});

I might also collapse @jussi-kalliokoski's .openAsStream(new CountQueuingStrategy({ highWaterMark: 1000 })) into .openAsStream({ highWaterMark: 1000 }).

@toyoshim
Contributor

After thinking about incremental writing of a large sysex, I noticed that it would allow a malicious attacker to lock all output ports exclusively; e.g., just sending a "sysex start" byte would lock the port forever. So we should keep the restriction that a user cannot send an incomplete or fragmented message.

So even if we have backpressure, the sysex size will be limited to the maximum size of an ArrayBuffer.

@toyoshim
Contributor

@domenic Have you ever talked with the Web Bluetooth and WebUSB folks before?
It would be great if all these APIs, including Web MIDI, were consistent in terms of API design. So if there has been a discussion, I'd like to hear the opinions discussed there.

@domenic

domenic commented Feb 16, 2016

In an offline thread @cwilso mentioned that @jussi-kalliokoski's examples are too complex and he'd like a three-liner. Here you go (slightly changed from the above since I am not sure why @jussi-kalliokoski made stream acquisition async):

const writer = midiOutput.asStream().getWriter();

await writer.write({ data: d1, timestamp: ts1 });
await writer.write({ data: d2, timestamp: ts2 });

@bome

bome commented Feb 16, 2016

@toyoshim you can intersperse short messages during an ongoing sysex message, and the implementation could also use a timeout to abandon stalled sysex messages after a while.

Silently dropping MIDI messages is always bad; maybe the current send() function could return a bool: false when the buffer is full.

I agree that sendAsync is not a good name, but sendLong is also misleading. Also, why wouldn't it be possible to call your proposed sendLong function again before the previous one has finished? I would welcome "unlimited" buffering there! So, maybe sendBuffered? Or spelled out: sendWithPromise?

@domenic

domenic commented Feb 16, 2016

Have you ever talked with the Web Bluetooth and WebUSB folks before? It would be great if all these APIs, including Web MIDI, were consistent in terms of API design. So if there has been a discussion, I'd like to hear the opinions discussed there.

I previously talked with Jeffrey about Web ... Bluetooth? ... and the conclusion was that since there was no backpressure support it wasn't urgent and we could wait on adding streams until later.

@agoode
Author

agoode commented Feb 16, 2016

Stream acquisition is async since you could block for an arbitrary amount of time with an OS-level open() call.

@domenic

domenic commented Feb 16, 2016

Stream acquisition is async since you could block for an arbitrary amount of time with an OS-level open() call.

That's fine; that just means that the first write() won't succeed (or fail) until the open comes back. The stream object can still be used in the usual way in these scenarios.

@agoode
Author

agoode commented Feb 16, 2016

Is there a way to force the open to complete, to ensure we don't have to wait until the write to determine success?

@domenic

domenic commented Feb 16, 2016

I don't understand what "force the open to complete" would mean. You can see an example implementation here if it helps: https://streams.spec.whatwg.org/#example-ws-backpressure (unfortunately the usage example still does not have writers and uses the coarse-grained "waiting" signal instead of the fine-grained desiredSize discussed above). The file descriptor gets opened immediately upon stream creation, but the stream machinery takes care (both in specs and in implementations!) of making sure not to write to the descriptor until that successfully completes.

@agoode
Author

agoode commented Feb 16, 2016

So, we currently have the notion of pending in Web MIDI. This is the state where open has not yet fully completed. In streams, I think pending would mean desiredSize = 0 and open would mean desiredSize != 0?

It's good to know when the MIDI port is fully open, since we can signal in a UI that "preflight" is complete and all devices are fully ready to send/receive.

@toyoshim
Contributor

@bome The Web MIDI backend needs to multiplex all outgoing messages from multiple MIDIAccess instances, and once one of them contains a 'sysex start' byte and it has been sent to an actual device, we cannot abandon the stalled sysex in any way, right?

Making send return a boolean sounds possible, but it would never mean the data was sent to the device, only that it was successfully buffered.

As for the second question, about allowing only one request at a time: we need to preserve message order in cases where an in-flight message fails partway through. Imagine a case where the second request fails asynchronously and the user has already sent a third request that may succeed. That would cause something the user does not expect.

@agoode
Author

agoode commented Feb 16, 2016

Let me introduce a classic MIDI use case, and folks can weigh in on various ideas for making it work.

An SMF file contains a stream of <timestamp><event> pairs (basically the same as your proposed chunk description above), which we want to stream to an underlying MIDI sink at exactly the correct time. MIDI itself has no concept of timestamps, so something has to schedule this. SMF files can trivially take arbitrary amounts of wall clock time to play, with arbitrary gaps between sounds.

Right now, Web MIDI clients have to schedule themselves to submit some number of events with timestamps, and use setInterval to remind themselves to send some more. We should be able to do better than this.

If we want to get fancy, allow for user-supplied tempo changes which take effect immediately, while the SMF is streaming.
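To make this concrete, a rough sketch of SMF playback on top of a backpressured writer, assuming the { data, timestamp } chunk shape and the openAsStream() idea from earlier in the thread; smfEvents is a hypothetical pre-parsed, timestamp-sorted event list:

// Let the stream's queue and backpressure do the pacing instead of a
// setInterval refill loop; the UA delivers each chunk at its timestamp.
async function playSmf(output, smfEvents) {
  const stream = await output.openAsStream({ highWaterMark: 1000 });
  const writer = stream.getWriter();
  for (const event of smfEvents) {
    await writer.ready;            // wait while the UA's queue is full
    writer.write(event);           // { data, timestamp }
  }
  await writer.close();
}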

@toyoshim
Contributor

@domenic Hmm... it probably makes sense that Web Bluetooth does not need Streams at this point. Bluetooth defines write acks at the protocol level, and OS-level APIs seem to expose this model directly. So mapping write and write-ack to a write method that returns a Promise sounds like a straightforward, reasonable solution.

But I believe WebUSB will need Streams more than Web MIDI does.

@toyoshim
Contributor

Here is my answer on async vs. sync asStream.

We should probably make MIDIOutput explicitly require that MIDIOutput.open has completed before asStream is called. Since buffer allocation happens on the renderer side, the remaining tasks for asStream could finish synchronously.

@toyoshim
Contributor

For SMF playback, I'd prefer to use requestAnimationFrame(timestamp => { ... }) even though the task isn't related to graphics. We can calculate the delta time in the callback and send the upcoming messages that, based on that delta, would be due before the next callback fires.
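For illustration, a rough sketch of that approach against today's send(); events is assumed to be a timestamp-sorted array of { data, timestamp } pairs:

// Each frame, estimate the rAF period from the previous frame and send
// every event whose timestamp falls before the next expected callback.
function pumpWithRaf(output, events) {
  let i = 0;
  let lastFrame = performance.now();
  function onFrame(now) {
    const horizon = now + (now - lastFrame);   // estimated next-frame time
    lastFrame = now;
    while (i < events.length && events[i].timestamp <= horizon) {
      output.send(events[i].data, events[i].timestamp);
      i++;
    }
    if (i < events.length) requestAnimationFrame(onFrame);
  }
  requestAnimationFrame(onFrame);
}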

@yutakahirano

How about the close operation? I mean, what is the relation between WritableStreamWriter.close and MidiOutputPort.close?

@ghost

ghost commented Feb 26, 2021

I built a Web MIDI API sequencer that can play back, play live, and record more or less simultaneously. But playback lags because I use timeouts; I would like to replace the timeout solution with buffered playback. Will that remove the lag?
Can you direct me to an example of how to implement buffered playback? I don't quite get what it is, since I just use timeouts/wait states between notes.

Very few examples show how buffered playback works.

https://jonasth.github.io/

@cwilso
Contributor

cwilso commented Feb 26, 2021

Hey @JonasTH, I wrote an article a long time ago about Web Audio clocks that also applies to Web MIDI, in particular why using setTimeout on its own is not a good idea: https://www.html5rocks.com/en/tutorials/audio/scheduling/.

The issue you're having is unrelated to this one (this issue is essentially about sending massive amounts of data over a slow MIDI link).
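For reference, the scheduling pattern from that article applied to MIDI looks roughly like this (a sketch; the lookahead and interval values are arbitrary, and events is assumed to be a timestamp-sorted array of { data, timestamp } pairs):

// A coarse setTimeout loop hands the UA everything due within the next
// `lookahead` ms; the timestamp argument of send() does the precise timing.
function scheduleWithLookahead(output, events, lookahead = 100, interval = 25) {
  let i = 0;
  (function tick() {
    const horizon = performance.now() + lookahead;
    while (i < events.length && events[i].timestamp <= horizon) {
      output.send(events[i].data, events[i].timestamp);
      i++;
    }
    if (i < events.length) setTimeout(tick, interval);
  })();
}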

@ghost

ghost commented Feb 27, 2021 via email

@ghost

ghost commented Feb 27, 2021 via email

@rianhunter

rianhunter commented Mar 17, 2021

What is the status of this issue? Without any way to detect backpressure, I have to artificially slow down sending bursts of messages to account for older MIDI devices that don't use USB.

For what it's worth I agree with @bome:

I'm somewhat reluctant to add streams; it seems like it would mainly add clutter to the Web MIDI API and implementation while providing very little new functionality. Can't we just add a function sendAsync(data, timestamp) that returns a Promise, or define a MIDIOutput listener which fires whenever a MIDI message is delivered?

A method like his proposed sendAsync would be sufficient, extensible, and relatively easy to spec.
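For reference, the kind of workaround this forces today looks roughly like the sketch below; the pacing rate is a guess at the DIN-MIDI wire speed, which is exactly the sort of thing the application should not have to know:

// Pace sends against the ~3125 bytes/s of a 31250-baud DIN MIDI link,
// since there is no way to observe how full the real buffer is.
async function sendThrottled(output, messages, bytesPerSecond = 3125) {
  for (const data of messages) {
    output.send(data);
    const ms = (data.length / bytesPerSecond) * 1000;
    await new Promise(resolve => setTimeout(resolve, ms));
  }
}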

@gudsoft

gudsoft commented Sep 4, 2021

I second the other requests for a solution to this issue. It is important, for many of the reasons already discussed in this thread.

First, I would like to commend all the work done on Web MIDI API so far... it is truly exciting (to me) what browser-based MIDI means! At the same time, and at this stage of the development of the API, it is quite surprising this foreseeable and basic reality (the need for SysEx throttling) has not yet been accommodated by the API (and is the subject of so much debate and "needed explanation"). The need for throttling is simply a reality of the mature MIDI ecosystem and its legacy, much of which is just as relevant today as it ever was. Pointing fingers at old or new devices, drivers, etc. (or wherever the need for throttling may arise in a particular system) does very little to advance all the good that the Web MIDI API otherwise brings.

Every piece of MIDI software of consequence created in the last few decades that deals with SysEx has recognized that providing an ability to throttle SysEx transmission is a necessity, either to accommodate limitations of the receiving device or transmitting system (OS, drivers, interfaces, etc. -- and now potentially even arbitrary internal browser limitations!). Indeed, the computer, with its infinite flexibility for adaptation and reprogramming, is in the best possible position to accommodate "harder" limitations found elsewhere in a MIDI system (within "set-in-stone" old hardware, or "hard-to-update" firmware, etc.). As a component of that flexibility, the Web MIDI API very clearly has a role to play here, IMO.

The Web MIDI API should provide (or at least enable) a solution for throttling SysEx transmission, period. If it does not, whole classes of "imperfect", yet still valuable and viable, MIDI devices and software (synthesizers, interfaces, and even apps, etc.) will be needlessly cut out of the benefits of browser-based MIDI.

An arguably important "modern" case in point: Many of the cheap "commodity" USB MIDI cables available today (which usually have full-speed SysEx transmission bugs) could benefit (read, be made to "actually work") via a simple adaptation of the software running on the computer (to throttle SysEx). Wouldn't it actually further the goals of the Web MIDI API to enable as many MIDI devices (and users!) as possible, such as these commodity interfaces, despite their flaws?

The Web MIDI API's current lack of any means for SysEx throttling, as well as its insistence that whole SysEx messages be provided for transmission, are, IMO, arbitrary and harmful limitations of the API, and prohibit certain valid and creative uses of MIDI. For example, why can't my browser-based MIDI app "stream" SysEx (generating it over time), or "open" a bidirectional SysEx connection using a single 0xF0 on each side, and then communicate asynchronously entirely with an "exclusive" protocol of 7-bit messages? Both things are trivially possible with MIDI itself, but currently not with the Web MIDI API. (It should be pointed out that some of the earliest -- and perfectly valid -- SysEx implementations in Casio and Roland devices are effectively asynchronous 7-bit protocols, reliant on intra-SysEx handshaking, something the Web MIDI API is currently incompatible with, for no good reason, IMO.)

MIDI has grown to its popularity and ubiquity today, yes, based on the efforts of many to adhere to a robust, simple, and dare we say "perfect" common specification, but also in no small part due to the creative (and arguably "necessarily obvious") accommodations that have been made, especially in software, aimed at making sure MIDI works as well as possible even for some of the most "imperfect" members of the ecosystem, and that unforeseen innovations are not precluded by assumed "valid uses" of features like System Exclusive.

It would be awesome if the Web MIDI API wholeheartedly recognized and continued in this tradition.

@gudsoft

gudsoft commented Sep 4, 2021

As a separate note, I would like to suggest that the sendAsync method proposed earlier in this thread should not only allow an individual SysEx message to be sent asynchronously, with completion signaled via a Promise or other mechanism, but should also allow a partial SysEx message to be sent (nominally to be followed by the sending of subsequent part(s) until the whole message is complete).

Adding these two basic abilities... asynchrony and divisibility... is enough to "re-enable" everything that is natively possible with SysEx over MIDI itself, and would provide the means for the Web MIDI API to fully accommodate past and future innovations, and other vital use-cases that have been with us since MIDI's birth (see below). A suitable sendAsync would accomplish this, as would some kind of streaming interface, but the important thing is that the API expose both abilities together.

It should also be pointed out that SysEx throttling isn't the only issue of importance here... the ability for apps to freely interject MIDI realtime messages (such as Timing Clock) during a long-running SysEx message (something the MIDI protocol has always been designed to handle) is an important use-case for playback and synchronization. Again, fine control over the divisibility of SysEx data from the perspective of the API is required.

I applaud any and all principled efforts to keep the Web MIDI API simple and easy-to-use (achieving this for any API is no small feat), but for all the reasons above, a synchronous, atomic send method is, in fact, too simple, and precludes much that the API could and should enable for browser-based MIDI apps.

@hoch
Member

hoch commented Sep 11, 2023

2023 TPAC Audio WG Discussion:
The WG is still undecided about whether this feature will be included in V1 or the next version of the specification. It is likely to be added later, after developer surveys, prototyping, and field experiments have been conducted.

@hoch hoch added the Agenda+ label Sep 11, 2023
@hoch hoch removed this from the V1 milestone Sep 11, 2023
@mjwilson-google mjwilson-google added the category: enhancement label Sep 13, 2023
@mjwilson-google mjwilson-google added this to the CR milestone Sep 25, 2023
@mjwilson-google
Contributor

I will schedule this for CR / V1 until we have a chance to discuss it in the Audio Working Group, but we will likely push it to future work.

@mjwilson-google
Contributor

Audio Working Group 2023-10-05 meeting conclusions:

  • This looks like it will be a significant amount of spec and implementation work
  • It also looks like it is important
  • The Working Group doesn't have enough information to make a final decision on how to handle this, so we will conduct a developer survey

@mjwilson-google mjwilson-google added the Needs External Dependency Resolution label and removed the Agenda+ label Oct 5, 2023
@rianhunter

Thanks @mjwilson-google, looking forward to seeing progress on this issue!

@mjwilson-google mjwilson-google added the Needs Discussion label and removed the Needs External Dependency Resolution label Sep 24, 2024
@mjwilson-google
Contributor

TPAC 2024 notes:

This is actually somewhat covered by MIDI 2.0, which is out of scope for version 1 of the Web MIDI specification.

I propose moving this to future work, and not fixing it in version 1.0 of the specification.

@agoode
Author

agoode commented Sep 25, 2024

Does MIDI 2.0 contain some kind of flow control within the protocol? If so, that is different than this proposal, which is meant to reflect the backpressure that the existing MIDI 1.0 APIs already provide (by blocking at the OS level).

@starfishmod

MIDI 2.0 UMP does not contain low-level flow control. There are some flow-control mechanisms within MIDI-CI.
Maybe @mjwilson-google is referring to the fact that UMP messages (including SysEx7) are packetized now, so an application is not sending large buffers of SysEx all at once.
However, an application may send a large number of UMPs at one time, and a similar buffering issue may occur?

@cwilso
Contributor

cwilso commented Sep 25, 2024

This shouldn't be related to MIDI 2.0; this was a problem reported from real-world use cases (typically when sending sysex).

@mjwilson-google
Contributor

I did not check the details in the MIDI 2.0 spec. I am seeing strong objections to pushing this out, so we'll keep it scheduled for the current CR milestone. But I still don't have a good idea of how to specify this, or how to conduct a valid survey.

I think I will try to fix the other CR issues first. If this becomes the last blocker, it will be easier to focus on. I am aware it may take some time to resolve this issue, which is motivation for trying to get things moving sooner, but it also seems possible to make changes during CR review, and it should be easier to get more eyes on the spec during that process. We may end up drafting a change that we have low confidence in, and use the wide review to help verify whether it's sufficient.

Thanks for the quick responses, and more feedback is always welcome.

@cwilso
Contributor

cwilso commented Sep 25, 2024

Happy to brainstorm a solution with you, @mjwilson-google

@mjwilson-google mjwilson-google added the category: new feature and Priority: Soon labels and removed the category: enhancement label Nov 27, 2024