Integration between event-loop of AudioWorkletGlobalScope and rendering loop #2008
Seeing this was discussed in #1511. To me, although that previous issue was closed, the spec in its current state is still not clear. I think an improvement was made by describing how microtasks are run, but that still leaves the handling of proper tasks from the event-loop of the `AudioWorkletGlobalScope`. From what I understand from the previous thread, it seems important not to interrupt Step 3, "Process a render quantum.", of the rendering loop, hence the microtask handling is moved to Step 4 (versus interleaving microtask checkpoints with calls to `process`). So we would still have to spec when tasks are handled. On another point, I think we can also assume that the actual "audio backend" could be running on another thread, or even another process? I read something (sorry, no link at hand) about Chromium moving audio to a dedicated Mojo service. That would mean the rendering thread communicates the "render result" over IPC to the audio service. In that case, could it perhaps be realistic to spec the rendering loop in the following way:
The reasoning behind this list: when you get to step 5, you communicate with the audio backend. That means the backend will have to process the result, and then communicate back via a control message to potentially ask for more data. So while the backend is handling the result, it's a good time to do other work. The first thing you need to do is perform a microtask checkpoint, to resolve promises resulting from calls made inside `process`. Then you go back to step 2, and that's where you either have received a new control message, or not. The important thing is that by putting the task handling at step 3, the UA already knows whether it has received a control message or not, and what the content of that message was, which can help it decide whether to run a step of the event-loop or skip it and render another quantum. Note that this means the processor can't rely on getting a steady flow of messages on its port; however, it doesn't prevent it from sending messages on the port as part of `process`. So one can wonder which use-case is most important for developers: having fast and steady calls to `process`? Also, the idea is really that you're not continuously running the event-loop of the worklet global scope; that is something that is explicitly "driven" by step 3 of the rendering loop, precluding any race condition between a potential `onmessage` handler and a call to `process`. Also, it's somewhat not spec compliant, since HTML says that "An event loop must continually run through the following steps for as long as it exists" (https://html.spec.whatwg.org/multipage/#event-loop-processing-model). That's maybe a good point to add to the HTML spec: "unless it is a worklet event-loop, in which case the UA can decide on a per-case basis when to run steps of the event-loop" (cc @annevk re whatwg/html#4213).
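To make the `onmessage`/`process` interplay concrete, here is a minimal sketch in plain JavaScript. An ordinary object stands in for the real `AudioWorkletProcessor`/`MessagePort` machinery, so all names are illustrative: a processor whose `process` keeps outputting silence until an "init" message has been delivered on its port. When exactly the UA delivers that message, relative to render quanta, is precisely what is under-specified.

```javascript
// Illustrative stand-in for an AudioWorkletProcessor; not a real
// registered worklet processor.
class GatedProcessor {
  constructor() {
    this.initialized = false;
    // In a real worklet this would be `this.port.onmessage = ...`;
    // when the UA runs this task is the open question in this issue.
    this.onmessage = (event) => {
      if (event.data.type === 'init') this.initialized = true;
    };
  }

  // Called once per render quantum by the rendering loop.
  process(inputs, outputs) {
    if (!this.initialized) return true; // output stays silent
    outputs[0].fill(1); // placeholder for real DSP
    return true;
  }
}

// Two render quanta: one before and one after the init message.
const proc = new GatedProcessor();
const output = [new Float32Array(4)];
proc.process([], output); // still silent
proc.onmessage({ data: { type: 'init' } });
proc.process([], output); // now produces samples
```

If the loop never runs a step of the worklet's event loop between quanta, the "init" message is starved and `process` stays a no-op forever.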
In general I agree with your two messages; this is something we're missing.
The audio system code is generally running in another process in modern browsers, yes (at least Firefox, for now just on Linux, but this will change soon; and Chrome everywhere, I believe, even before Mojo). It never runs on the same thread as anything else, and it often runs at a higher priority.
Regarding the sentence I put in bold: it's preferable to do the former, otherwise it would break the fundamental rule of real-time audio programming: there would be a risk of delaying the audio callback (see http://www.rossbencina.com/code/real-time-audio-programming-101-time-waits-for-nothing for a more in-depth explanation). Things should be defined so that it's clear, but a program that uses Promises or a lot of events as part of its audio processing is buggy. It's only possible because they cannot be disabled; tc39/ecma262#1120 has the background for this. The right thing to do is to use something else. The list of steps should be roughly:
We could even spec this as a normal event loop, if 2 and 4 are events, and 1 is an event that synchronously waits for a signal from the implementation that there is more audio to render.
There would be value in allowing the implementation to process control messages and events while waiting for a signal from the system that more audio should be processed.
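As a sketch of what "not using Promises or events on the audio thread" can look like in practice (my own illustration, not anything from the spec or the elided list above): the control side publishes a parameter through a `SharedArrayBuffer`, and the render side reads it wait-free each quantum, so no task or microtask ever has to be delivered on the rendering thread for the value to arrive.

```javascript
// Shared state between the control thread and the rendering thread.
// Gain is stored as fixed-point (1000 = 1.0) so Atomics can be used
// on an integer view.
const shared = new SharedArrayBuffer(4);
const sharedGain = new Int32Array(shared);
Atomics.store(sharedGain, 0, 1000);

// Control-thread side: a plain atomic store; never blocks, never
// delays the audio callback.
function setGain(value) {
  Atomics.store(sharedGain, 0, Math.round(value * 1000));
}

// Rendering-thread side: one wait-free load per render quantum, no
// event-loop machinery involved.
function renderQuantum(output) {
  const gain = Atomics.load(sharedGain, 0) / 1000;
  for (let i = 0; i < output.length; i++) output[i] *= gain;
}

setGain(0.5);
const block = new Float32Array([1, 1, 1, 1]);
renderQuantum(block); // each sample scaled by 0.5
```

The trade-off is that the render side only ever sees the latest value; if you need every message in order, a lock-free ring buffer in the shared memory is the usual next step.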
Thanks, very interesting article and I think a good foundation on which a UA could make potential decisions with regards to prioritizing things.
In the context of the HTML processing model, a UA can actually throttle tasks of a given task-queue, or prioritize one task-queue over all others, as long as ordering is preserved within each task-queue. However, with promises it's a bit different, since they resolve within a microtask, and in the current HTML semantics, if you run a task, you must run a microtask checkpoint after it. So the question becomes: semantically, what is a call to `process`? And if it's a task, could you spec a specific "worklet event-loop" that would not perform microtask checkpoints after running each task? In theory it could be possible to queue all microtasks on a dedicated task-queue, run them as regular tasks, and run all tasks without a dedicated "microtask checkpoint". That could then be a way to "ignore" microtasks, and fully prioritize a potential "rendering" task-queue (where a task would call `process`). Also, in practice you wouldn't have to queue a task for the rendering, or introduce a "rendering" task-queue; it would purely be a way to express the computation.
It could be more robust to spec this as a formal event-loop simply running one task at a time, with the various steps above being instead expressed via different task-queues that the UA could prioritize. Something like:
You could express that by giving the UA full flexibility in prioritizing task-sources, as opposed to an imperative list of steps in the rendering loop. In practice, it could result in a sequence like:
Now of course, in practice you could just immediately run the "process a render quantum" steps when receiving the control message from the backend; you wouldn't have to queue a task and then select among the various task-queues. Semantically, what would allow you to immediately run the "process a render quantum" steps when receiving the control message from the backend is the fact that the control message would be a task from the "control-task-source", and the handling of that task would itself queue a task on the "render-a-quantum-task-source". Since you could fully prioritize the "render-a-quantum-task-source" (and you know it's currently empty, since the only way to queue a task on it is via handling of a task from the "control-task-source"), you could immediately process a render quantum. In practice you would effectively do what the spec currently says, without queuing additional tasks, but the spec language would be more robust to differences in implementations. Currently it reads a bit like you'd have to change the spec if you decided to re-order some steps to get better performance. And the current processing model actually comes with a lot of flexibility already:
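A sketch of that flexibility (queue names as used above; the scheduling policy is just one example a UA could pick): per-source FIFO queues, one task per loop iteration, with the "render-a-quantum" queue always drained first. Handling a control message queues a render task, which then outranks a pending port message.

```javascript
// One FIFO queue per task-source; ordering within a source is
// guaranteed, ordering across sources is the implementation's choice.
const queues = {
  'render-a-quantum-task-source': [],
  'control-task-source': [],
  'post-message-task-source': [],
};
const priority = Object.keys(queues); // highest priority first

const queueTask = (source, task) => queues[source].push(task);

// One iteration of the loop: run the oldest task from the
// highest-priority non-empty queue.
function runOneTask() {
  for (const source of priority) {
    if (queues[source].length > 0) return queues[source].shift()();
  }
}

const log = [];
queueTask('post-message-task-source', () => log.push('onmessage'));
queueTask('control-task-source', () =>
  queueTask('render-a-quantum-task-source', () => log.push('render')));

runOneTask(); // control message: queues a render task
runOneTask(); // render task runs before the pending port message
runOneTask(); // only now is the port message delivered
// log is ['render', 'onmessage']
```

The observable behavior matches "immediately render on receiving the control message", yet nothing in the model forbids a different UA from interleaving the port message first.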
I think what worklets need is just a change at Step 8 that would read "If this is not a worklet event-loop, perform a microtask checkpoint." (in the case of a worklet event-loop, the microtask queue is treated like any other task-queue), similar to how Step 11 reads: "Update the rendering: if this is a window event loop, then:"
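That tweak could be modeled like this (a toy model, not spec text; `pick` stands in for whatever scheduling policy the UA uses): the only difference between a window-style loop and the hypothetical worklet loop is whether a microtask checkpoint runs after every task.

```javascript
function makeLoop(isWorkletEventLoop) {
  return {
    isWorkletEventLoop,
    // In the worklet variant, 'microtasks' is just another queue the
    // policy can pick from; it gets no special treatment.
    queues: { tasks: [], microtasks: [] },
  };
}

function microtaskCheckpoint(loop) {
  const q = loop.queues.microtasks;
  while (q.length > 0) q.shift()();
}

// One iteration of HTML's processing model, with the proposed
// "if this is not a worklet event-loop" condition at Step 8.
function runOneIteration(loop, pick) {
  const q = loop.queues[pick(loop)];
  if (q.length > 0) q.shift()();
  if (!loop.isWorkletEventLoop) microtaskCheckpoint(loop); // Step 8
}

// In a worklet loop, a microtask queued by a task stays pending until
// the policy selects the microtask queue, instead of running
// immediately after the task.
const log = [];
const worklet = makeLoop(true);
worklet.queues.tasks.push(() => {
  log.push('task');
  worklet.queues.microtasks.push(() => log.push('microtask'));
});
runOneIteration(worklet, () => 'tasks');
// log is ['task']: the promise reaction has not run yet
```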
Yes, I agree, and note again the problem that speccing something as a task means that in theory it should be followed by a microtask checkpoint. Hence the need, I think, to introduce some flexibility for a dedicated "worklet event-loop" to not run microtask checkpoints after every task, instead treating the microtask queue as just another task-queue. Re "1 is an event that synchronously waits for a signal from the implementation that there is more audio to render": you might want to merge that into 2.
By the way, sorry for writing looong posts on this; hoping it's still readable, mainly trying to bounce off some ideas. On second thought, I came to realize my suggestion above would basically turn the "rendering loop" into the event-loop of the worklet global scope, which might be taking things a bit too far in one direction. It might actually make sense to spec a "rendering loop" that isn't using HTML event-loop semantics, and rather reflects the realities of dealing with real-time audio (as is done now). The problem with the current spec is that it doesn't say when you should run the "tasks" (not the microtasks, which are mentioned) of the worklet global scope, for example for incoming messages on the message port of a processor. So a more "direct" solution to that problem, versus re-writing the entire loop, is what I initially proposed in my first post: step 4 could process all task-queues of the event-loop of the AudioWorkletGlobalScope, with microtask checkpoints interleaved (as opposed to only running microtasks, as is specced now). And then we'd have to accept the consequences for when those tasks run relative to `process`. It would probably be important for Step 4 to first do a microtask checkpoint, handling any microtasks enqueued as part of calls to `process`, before running ordinary tasks. That could turn into quite a big "step 4", and it would be the responsibility of the developer to ensure it doesn't fill up with too many tasks...
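The proposed "bigger step 4" could look roughly like this (a sketch of the idea, not spec text, with both queues modeled as plain arrays): a checkpoint up front for microtasks queued during `process`, then every remaining task followed by its own checkpoint.

```javascript
const taskQueue = [];      // e.g. port messages, addModule follow-ups
const microtaskQueue = []; // e.g. promise reactions

function microtaskCheckpoint() {
  while (microtaskQueue.length > 0) microtaskQueue.shift()();
}

// Step 4 of the rendering loop, as proposed above.
function step4() {
  // First resolve microtasks queued during "process a render quantum".
  microtaskCheckpoint();
  // Then drain the ordinary task-queues, each task followed by its
  // own checkpoint, as HTML requires. A page that floods these queues
  // can make this step arbitrarily long, which is the caveat above.
  while (taskQueue.length > 0) {
    taskQueue.shift()();
    microtaskCheckpoint();
  }
}

const log = [];
microtaskQueue.push(() => log.push('m-from-process'));
taskQueue.push(() => {
  log.push('task');
  microtaskQueue.push(() => log.push('m-from-task'));
});
step4();
// log is ['m-from-process', 'task', 'm-from-task']
```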
F2F summary:
Have you considered defining a new "render-an-audio-quantum" task-source? Then it could be said that the event-loop on the rendering thread has three task-sources (I think):
Then the UA could pick a runnable task from any task-queue at each iteration of the loop, and each task would be followed by a microtask checkpoint, without having to define anything special in the audio spec. It also reads like the control-message-queue is defined as a special-purpose shared queue between the control and rendering threads; that definition could perhaps be replaced with a task-source of the rendering thread (and perhaps also of the control thread, if two-way communication is required), since a task-source gives you all the atomicity guarantees you need, I think. Also, the way Step 2, "Process the control message queue", is defined seems to specify that the rendering loop must handle all messages that are currently enqueued, which is somewhat not compliant with the concept of running a normal event-loop, since that requires allowing the UA to choose one runnable task to run at each iteration, so long as ordering per task-source is preserved. In practice, if the post-message-queue were defined using a task-source, a UA could still prioritize the control message queue until it was empty, and you wouldn't have to define this behavior specifically in the audio spec (although if it's really important to do so, you could add a note to that effect).
And this statement, for example, could be replaced by queuing a task using the "render-an-audio-quantum" task-source from the audio callback.
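For instance (a sketch with made-up names): the real-time audio callback would not render inline; it would just queue a task on the "render-an-audio-quantum" task-source and return immediately.

```javascript
const renderQueue = []; // the "render-an-audio-quantum" task-queue

let framesRendered = 0;
function renderQuantum(frames) {
  framesRendered += frames; // placeholder for actual rendering work
}

// Under this model the system audio callback does the bare minimum:
// queue a task and return, keeping the callback itself trivial.
function audioCallback(framesNeeded) {
  renderQueue.push(() => renderQuantum(framesNeeded));
}

audioCallback(128);    // backend signals it needs more audio
renderQueue.shift()(); // the event loop later runs the queued task
// framesRendered is now 128
```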
I don't think this reproduction scenario adds anything new to the conversation, but here it is for posterity. It's maybe helpful since it describes a concrete way that this issue affects developers out in the wild: https://jackschaedler.github.io/offline-audio-worklet-repro/
Describe the issue
Web Audio describes a control-thread, and a rendering-thread, where the control-thread runs a "traditional" event-loop as described in HTML, and the rendering thread is running a custom "rendering loop", described at https://webaudio.github.io/web-audio-api/#rendering-loop.
Step 3 of the rendering loop, "Process a render quantum.", ends up, in the case of an `AudioWorkletNode`, calling into the `process` method of the corresponding `AudioWorkletProcessor`.

It should be noted that the `AudioWorkletProcessor` runs in an `AudioWorkletGlobalScope`, which is a sub-class of `WorkletGlobalScope`, which should have its own distinct event-loop as described in https://drafts.css-houdini.org/worklets/#the-event-loop

A further complication is that each `AudioWorkletProcessor` also has a shipped `MessagePort`, whose `post-message-queue` should be treated as a first-class task-queue for the "event-loop" on which it happens to be, in this case the event-loop of the `AudioWorkletGlobalScope`.

So the problem is that it's not clear how the rendering loop integrates with the event-loop of the `AudioWorkletGlobalScope`. This seems to imply a kind of integration where the rendering loop would run a microtask checkpoint for the `AudioWorkletGlobalScope`, as is found in Step 4 of the rendering loop. However, that doesn't cover messages received on the port, since those aren't microtasks.
If we take this example:
https://github.com/GoogleChromeLabs/web-audio-samples/blob/master/audio-worklet/design-pattern/shared-buffer/shared-buffer-worklet-processor.js
We see an interplay between the `onmessage` handler on the port and the `process` method, where `process` essentially does nothing until an "initialization" message has been received on the port.

In practice, how does a UA interleave running tasks on the `AudioWorkletGlobalScope`, for example when a new message is received on the port, with running the rendering loop, which itself calls into `process` of the processor running in the context of the `AudioWorkletGlobalScope`?

And what about tasks enqueued on the event-loop of the `AudioWorkletGlobalScope` as a result of the call to https://drafts.css-houdini.org/worklets/#dom-worklet-addmodule ?

I can imagine two ways to do it in practice with the current spec:

- The task-queues of the `AudioWorkletGlobalScope` could be plugged into the "control-message-queue" of the rendering thread?
- Run the event-loop of the `AudioWorkletGlobalScope` in parallel to the render-loop, but somehow "stop" it when the render-loop wants to call `process`?

It appears pretty clear that the goal is not running the event-loop of the `AudioWorkletGlobalScope` fully in parallel to the render loop, since that would create potential race-conditions between the `onmessage` handler of a processor and its `process` method. Yet I cannot find anything in the spec that integrates both event-loops so as to interleave both sequentially in some way.

A solution could be rewording https://webaudio.github.io/web-audio-api/#rendering-loop, where step 4 would process all task-queues of the event-loop of the `AudioWorkletGlobalScope`, with microtask checkpoints interleaved?

Where Is It
https://webaudio.github.io/web-audio-api/#rendering-thread
Additional Information
Could be relevant for worklets in general, see whatwg/html#4213