Reconcile the processing algorithm with the worklet event loop #1511

Closed
padenot opened this issue Feb 27, 2018 · 24 comments


padenot commented Feb 27, 2018

For now, the AudioWorklet text uses things that need an event loop to work (postMessage, onmessage, new Promises, that kind of thing), but it's not defined how it all works.

It must be defined, otherwise it can't be implemented properly.

For example, does onmessage execute before or after the call to process? On which thread? When do resolved promises execute?

This was discussed but forgotten about in #1193 (comment).

padenot added the Architectural/Fundamental (Breaking change) and Needs Discussion labels on Feb 27, 2018

hoch commented Feb 27, 2018

Let's start with these:

  • AudioWorkletGlobalScope runs on the render thread.
  • AudioWorkletGlobalScope has an event loop.

My thoughts:

  1. node.port.postMessage() queues a task to the command queue in the render thread. Later, processor.port.onmessage fires when the queued task gets executed. (There's a sketch of points 1 and 2 after this list.)
  2. processor.port.postMessage() queues a task to the main thread's event loop. Later node.port.onmessage fires when the queued task gets executed.
  3. The only Promise that is relevant to the AudioWorklet operation is the one from worklet.addModule(). Its resolution should happen on the main thread so I don't think it is any different from context.resume() or context.suspend().
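
A minimal sketch of points 1 and 2, assuming a module defining a processor named 'port-demo' has already been loaded with addModule() (the processor name and message shapes are illustrative, not from the spec):

// In the AudioWorkletGlobalScope, on the render thread:
class PortDemo extends AudioWorkletProcessor {
  constructor() {
    super();
    // Point 1: node.port.postMessage() queued a task on the render
    // thread; this handler runs when that task is executed.
    this.port.onmessage = (e) => {
      // Point 2: this queues a task on the main thread's event loop;
      // node.port.onmessage fires when it runs.
      this.port.postMessage({ ack: e.data });
    };
  }
  process(inputs, outputs, parameters) {
    return true; // keep the processor alive
  }
}
registerProcessor('port-demo', PortDemo);

// On the main thread:
const node = new AudioWorkletNode(context, 'port-demo');
node.port.onmessage = (e) => console.log('from render thread:', e.data);
node.port.postMessage({ gain: 0.5 });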

@padenot: so your last comment only applies to point 1:

does onmessage execute before or after the call to process?

I think the order of queued tasks should follow their scheduled order. Do you mean we need to prioritize and change the order of queued tasks in the command queue somehow?

Other than that, I think we just need to update the processing model to reflect points 1 and 2.


karlt commented Feb 27, 2018

I assume the rendering thread would or at least could check for available
messages between rendering quanta, but not between process() calls for a
single rendering quantum.

The model of queuing messages from the main thread on the rendering thread
immediately is simple. However, it would not provide a mechanism for
a client to atomically make changes to two different AudioWorkletNode
processors.

Support for atomic changes to the graph can be provided by queuing messages
from the main thread on the main thread until a stable state, at which point
all of the messages are queued in a batch on the rendering thread. (This is
sometimes called "tail dispatch".)

Similarly, batching messages from the rendering thread to the main thread would
permit a rendering-time-consistent view of rendering thread state from the main thread
(except during onmessage handling).
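
A rough sketch of the batching idea on the main-thread side (illustrative only; queueBatchOnRenderThread stands in for a hypothetical internal primitive, not a real API):

// Instead of forwarding each message to the rendering thread
// immediately, hold messages until a stable state is reached.
const pendingMessages = [];
let flushScheduled = false;

function sendToProcessor(processorId, message) {
  pendingMessages.push({ processorId, message });
  if (!flushScheduled) {
    flushScheduled = true;
    // Approximating "await a stable state" with a microtask: once the
    // current task finishes, the whole batch is dispatched at once, so
    // changes to two different processors land atomically.
    queueMicrotask(() => {
      flushScheduled = false;
      queueBatchOnRenderThread(pendingMessages.splice(0)); // hypothetical
    });
  }
}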

rtoy assigned hoch on Mar 1, 2018

padenot commented Mar 2, 2018

We had a productive call yesterday about this. The proposed solution is to call the process() method off of the AudioWorklet event loop. Here are a couple scenarios and what MUST happen:

Say we're using a promise in an onmessage handler. The result of the promise MUST execute right after the onmessage handler returns. It is wrong to have the process() method executed before the result of the promise executes.

Say we're using a promise in the process() method, that gets resolved during this process() call. Same deal, the result of the promise must be executed right after the process() method returns, nothing else can be executed in between (no onmessage, for example).

Those scenarios are just examples. Normative description of what MUST happen is in the event loop section of the HTML standard.
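
For instance, a minimal sketch of the first scenario, inside a processor's constructor on the render thread (the logging exists only to show the required ordering):

// In an AudioWorkletProcessor constructor:
this.port.onmessage = (event) => {
  console.log('onmessage');
  Promise.resolve().then(() => console.log('promise reaction'));
  // Required order: "onmessage", then "promise reaction" (at the
  // microtask checkpoint when this handler returns), and only then
  // any subsequent process() call.
};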

There should not be a prioritization between onmessage and process; my sentence was merely pointing out that we can't answer this question in the current state of things. That was a bit clumsy on my part.

Other than that, I think we just need to update the processing model to reflect points 1 and 2.

In addition to points 1 and 2, we want to make process fit somewhere in the event loop. Since Promises are an ECMAScript concept, and we can't really opt out of them (see this ECMAScript issue for a related discussion), and because authors can simply do new Promise to have a promise in an AudioWorkletGlobalScope, I don't think point 3 is correct.


annevk commented Mar 2, 2018

It sounds like you want to use "queue a task" and have that task run process(). The rest should fall out automatically. And I guess you want to make sure that the task source used for that task is different from the one used for the message event so the browser can decide which to run next.


hoch commented Mar 2, 2018

@padenot process() is a part of the graph rendering chain. Let's say you have a graph and an AudioWorkletProcessor is a part of it. Once the rendering chain of the graph is invoked, there is no way to stop it until the graph rendering is over. In this sense, the task being queued on the render thread side is more of a renderGraph() than an individual process() call. I would like to clarify this first.

If you agree on this, the rest should fall out automatically, like @annevk said. All in all, any task on the render thread should be executed by the task scheduler (i.e. the command queue in our processing model), and that is how Chrome's AudioWorklet behavior on the render thread is implemented.


hoch commented Mar 2, 2018

I would like to add a few more points here. I think this issue is closely related to the recent comment from the TAG.

  1. We need to revise the processing model section with the notion of two layers (AudioNode and its processor) and the event loop in the render thread. More specifically the "Rendering an Audio Graph" section should be changed. This "rendering loop" routine must be atomic and that's what I meant by renderGraph() in the comment above.
  2. Then AudioWorkletProcessor naturally becomes a part of the rendering process along with other processors for AudioNodes in the graph. So I don't think any special treatment for AudioWorklet is required here.
  3. I am not sure if we should make a special case for onmessage or Promise for the rendering thread (or AudioWorkletGlobalScope), because the event loop takes care of them. We do have to define the task source for the Web Audio's rendering loop task.

I think we should create another issue to track the processing model change and mark this one as "blocked" by that change.


padenot commented Mar 2, 2018

@padenot process() is a part of the graph rendering chain. Let's say you have a graph and an AudioWorkletProcessor is a part of it. Once the rendering chain of the graph is invoked, there is no way to stop it until the graph rendering is over. In this sense, the task being queued on the render thread side is more of a renderGraph() than an individual process() call. I would like to clarify this first.

The whole question here is where and how many event loops we have. You seem to imply in this comment (sorry if I misunderstood) that we have an event loop per AudioWorkletGlobalScope. If this is the case, then a rendering quantum would be something along these lines:

  1. For each AudioNodeProcessor to process this render quantum
    1. If this AudioNodeProcessor is a native node
      1. ... up/down mix the buffers ... process the audio ...
    2. Else if this AudioNodeProcessor is an AudioWorkletProcessor
      1. Fire an event named "process" at this AudioWorkletProcessor
      2. Spin the event loop of this AudioWorkletProcessor's AudioWorkletGlobalScope until it is empty

If we decide on a model where we have a single event called renderGraph() (as you seem to imply in this comment), it would look something like this, run off an onrenderGraph task or something:

  1. For each AudioNodeProcessor to process this render quantum
    1. ... up/down mix the buffers ... process the audio ... maybe synchronously calling some process() functions for some AudioWorkletProcessor.

I think it's rather frowned upon to call into script not from its own task, but I don't remember exactly whether this is true, whether I'm confusing it with something else, or the reason.

Now, to try to be a bit more practical, say I have two AudioWorkletProcessor:

class A extends AudioWorkletProcessor {
    process(inputs, outputs, parameters) {
        console.log("A");
        var p = new Promise(function(resolve, reject) {
            // compute
            resolve("promise A");
        });
        p.then(function(str) {
            console.log(str);
        });
        return true; // keep the processor alive
    }
}

class B extends AudioWorkletProcessor {
    process(inputs, outputs, parameters) {
        console.log("B");
        var p = new Promise(function(resolve, reject) {
            // compute
            resolve("promise B");
        });
        p.then(function(str) {
            console.log(str);
        });
        return true; // keep the processor alive
    }
}

and then this document:

var ac = new AudioContext();
// (assuming ac.audioWorklet.addModule() has already registered 'A' and 'B')
let a = new AudioWorkletNode(ac, 'A');
let b = new AudioWorkletNode(ac, 'B');
a.connect(ac.destination);
b.connect(ac.destination);

If we ignore the issue of the processing order (that is an issue in itself, but let's ignore it for now) and say that A is always processed before B, then in the first model we have:

A
promise A
B
promise B

in the second model we have:

A
B
promise A
promise B

Trying to synthesize the issues here:

  • Do we have multiple event loops, one for each AudioWorkletGlobalScope, spun by the rendering thread?
  • Do we have a task per process() as Anne says? If this is the case, we should see the first result: right after the process() call, we have a microtask checkpoint, and we should see the promise settling print its message. Practically speaking, putting this in the context of a real-time audio callback, the microtask checkpoints are running during the audio callback.
  • Do we have a task per render quantum as with your renderGraph() function? If this is the case, the process() call is a synchronous call from a single task, we have a single event loop, and we don't return to the event loop until all process() calls have been made. In this case, we see the second result. Practically speaking, putting this in the context of a real-time audio callback, the microtask checkpoints are running before or after the real-time audio callback (possibly even in between audio callbacks).

We have to decide what behaviour we want here; it is clearly possible to define two implementations that each do something different: just in this thread we have two or three possible definitions that clearly imply different results for the same script.

It's unclear to me yet which one of the two is preferable (I'm still thinking about it), but it's clear that we can't reach interoperability until this is resolved. I think I'm leaning towards the first solution, but I don't know whether spinning event loops like this has issues.


hoch commented Mar 2, 2018

That's why I asked for clarification about process() above: queueing a single process() call does not really make sense to me.

I am really against making the graph rendering task non-atomic. What if your promise resolution involves any state change in another processor? For this reason, Chrome's implementation has a graph mutex to protect the graph during the rendering task. For optimum rendering performance, we have to ensure the rendering task is atomic. I strongly prefer the second model.

Do we have multiple event loops, one for each AudioWorkletGlobalScope, spun by the rendering thread?

Do we have any precedent for multiple event loops in a single thread/scope?


padenot commented Mar 5, 2018

I am really against making the graph rendering task non-atomic. What if your promise resolution involves any state change in another processor? For this reason, Chrome's implementation has a graph mutex to protect the graph during the rendering task. For optimum rendering performance, we have to ensure the rendering task is atomic. I strongly prefer the second model.

The promise resolution we're talking about happens inside the AudioWorkletGlobalScope, on the rendering thread. I don't see how this would possibly mutate the rendering graph or break atomicity.

Also, I've been trying to find prose about the creation process of an AudioWorkletGlobalScope and normative text about its cardinality, but I can only find mention of the fact that there can be multiple ones or a single one in a BaseAudioContext. It's not defined why new ones would be created. Are all the AudioWorkletProcessors for a BaseAudioContext running in the same AudioWorkletGlobalScope?

  • If yes, I suppose it's being created when doing the first registerProcessor (and this should be specced), but why is there prose about having multiple ones?
  • If no, when and why do we create a new one? In this case, we have multiple AudioWorkletGlobalScopes, and multiple event loops (regardless of what thread they are running on). Having text that allows multiple AudioWorkletGlobalScopes without saying much about them means that it's going to be technically impossible to have compatible implementations: user scripts don't know if they can, say, set an expando in one process() method invocation and find it in another processor's process() invocation.

Chrome currently has one AudioWorkletGlobalScope per BaseAudioContext, and I think this is what the spec wants to say as well (pardon me if this is incorrect, I had to dig in a bit in Chromium sources).

This means we would have a single event loop on which all the process() methods would be called, one after the other, sequentially. In addition, microtask checkpoints would be occurring in between those process() calls, naturally. onmessage could be happening in between those process() calls as well, because they would be fired on regular tasks. This is good for interactivity under high load. Everything in this paragraph happens on the rendering thread.

Anyways, this should be in the spec.

Atomicity is orthogonal to the issue at hand here: we're only ever dealing with one thread, the rendering thread, and as you mention, it's clearly out of the question to stop it or pause it or whatnot, as we all know.

Do we have any precedent for multiple event loops in a single thread/scope?

That I don't know, @annevk, do you have an answer?


annevk commented Mar 6, 2018

There's no precedent for that. That sounds a little weird, though maybe it's actually different from a single loop with multiple task sources, which we do have?


hoch commented Mar 6, 2018

I don't see how this would possibly mutate the rendering graph or break atomicity.

We will have to think about the mutation, but it certainly breaks the atomic nature. Any random task can be queued in the event loop between process() calls ordered by the algorithm. Perhaps we are seeing the term "atomic" differently.

Also, I've been trying to find prose about the creation process of an AudioWorkletGlobalScope and normative text about its cardinality, but I can only find mention of the fact that there can be multiple ones or a single one in a BaseAudioContext.

I think this has more context. But I also don't think allowing more than one AWGS per BAC is really useful. I remember @joeberkovitz wanted to loosen the text for the future expansion.

If yes, I suppose it's being created when doing the first registerProcessor (and this should be specced)

Technically it is created and ready when addModule() gets resolved. This is defined in Worklet API. Not sure if we need to add more stuff on top of it since the algorithm looks well-defined.

Chrome currently has one AudioWorkletGlobalScope per BaseAudioContext, and I think this is what the spec wants to say as well (pardon me if this is incorrect, I had to dig in a bit in Chromium sources).

That's correct. You can think of it as one possible implementation based on the current spec: "BAC may have at least one AWGS".

Atomicity is orthogonal to the issue at hand here

In Chrome's implementation, the graph render task is not supposed to be interrupted, to ensure optimum rendering performance. If we have to wrap each process() call to insert it into the task runner (i.e. the event loop), there will be a price to pay. I will have to profile it to get the actual numbers, but this is a significant change that will affect users for sure.

I also want to emphasize that atomicity is not just an AudioWorklet problem anymore. This change applies to all the processors (as in, the counterparts of AudioNodes) and will have real-world impact if it forces drastic changes in existing implementations. To avoid confusing the discussion, I think we should start a new issue on the revision of the processing model.


padenot commented Mar 6, 2018

We will have to think about the mutation, but it certainly breaks the atomic nature. Any random task can be queued in the event loop between process() calls ordered by the algorithm. Perhaps we are seeing the term "atomic" differently.

Either you have to run the event loop tasks in between process() calls, or you have to run them before or after all process() calls have executed. In any case, no graph mutation can occur. There is simply no access to the API necessary to mutate the rendering graph.

I think this has more context. But I also don't think allowing more than one AWGS per BAC is really useful. I remember @joeberkovitz wanted to loosen the text for the future expansion.

I agree with you. Again, it is not possible with the current text to write implementations that are interoperable; we should mandate that there is a single scope or define how those AudioWorkletGlobalScopes are created.

Technically it is created and ready when addModule() gets resolved. This is defined in Worklet API. Not sure if we need to add more stuff on top of it since the algorithm looks well-defined.

If we decide to have a single one, I think we're good indeed, but we have to decide that. Again I'm in favor of having a single scope.

In Chrome's implementation, the graph render task is not supposed to be interrupted, to ensure optimum rendering performance. If we have to wrap each process() call to insert it into the task runner (i.e. the event loop), there will be a price to pay. I will have to profile it to get the actual numbers, but this is a significant change that will affect users for sure.

Of course, this is the case for any implementation. I don't see that being a problem in practice, but it's an implementation concern, and I don't know whether your event loop infrastructure in Chrome suits that.

Also, keep in mind that we are not talking about interrupting anything here, we're talking about, for example, putting an event in the event loop (a "process" event) and then synchronously spinning the event loop for some time (at an implementation level), until it's empty (and maybe we've processed a "message" event doing so), then proceeding with the rendering of the other AudioNodes.

I also want to emphasize that atomicity is not just an AudioWorklet problem anymore. This change applies to all the processors (as in, the counterparts of AudioNodes) and will have real-world impact if it forces drastic changes in existing implementations. To avoid confusing the discussion, I think we should start a new issue on the revision of the processing model.

Running script in between the rendering of nodes, considering it can't modify the graph or block or whatever, is akin to running the script for an AudioWorkletProcessor in between the rendering of two native audio node processors, which is not an issue.

Yes, anybody will be able to write very bad code and overload the audio callback, but they can do so in the process method, in the handler for onmessage, or in the resolution or rejection of a Promise.


hoch commented Mar 6, 2018

In any case, no graph mutation can occur. There is simply no access to the API necessary to mutate the rendering graph.

AWP.process() can touch anything in the AWGS, including the state/properties of another processor. If the render task is atomic, touching things in the AWGS is acceptable because the order of process() calls is deterministic. My intention is to make the render process as predictable as possible.

The other reason is to keep the rendering performance optimal. What if your app pushes tons of onmessage tasks into the queue? Then you will end up with a queue containing a messy mixture of process() and onmessage tasks in a random order, and it eventually slows down the render task. Ideally, the render task must be finished ASAP so it can deliver the result to the system audio callback. Interleaving onmessage and process() does not really help. Wouldn't you agree?

Lastly, what is the real-world benefit of breaking the render task down to an individual process()? Being able to fire onmessage as soon as possible? I argue that the resolution of a cross-thread async call should be the lowest priority in the queue. As long as the order of tasks in the queue is honored, firing onmessage a bit later does not really hurt.

I agree with you. Again, it is not possible with the current text to write implementations that are interoperable; we should mandate that there is a single scope or define how those AudioWorkletGlobalScopes are created.

Do you think we have "additional processes" on top of what is defined in the Worklet spec? I believe the only thing we're missing is a clear one-to-one association between AudioWorklet and AudioWorkletGlobalScope. (cf. CSSPaintWorklet can have 2 PWGS)

If we decide to have a single one, I think we're good indeed, but we have to decide that. Again I'm in favor of having a single scope.

Great! I am also in favor of having a single scope.


padenot commented Mar 7, 2018

AWP.process() can touch anything in the AWGS, including the state/properties of another processor. If the render task is atomic, touching things in the AWGS is acceptable because the order of process() calls is deterministic. My intention is to make the render process as predictable as possible.

The order of process() calls is not deterministic, regardless of atomicity; I've hinted at it in a previous message, but that's for another issue. It has to be deterministic, because it is trivially observable. Implementations cannot be interoperable until the processing order is deterministic.

This is the case because we are going in the direction of having a single scope, and one can hook arbitrary data onto the scope (via expandos), and it is visible to other processors (this is a feature: it allows data sharing between processors, which in turn allows, for example, sharing large assets across processors, saving tons of memory and CPU).
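
A sketch of that data-sharing feature, assuming a single AudioWorkletGlobalScope per context (the sharedAssets name and the table size are illustrative, and this assumes a 'sampler' node is constructed before a 'granulator' node):

// processors.js — both classes are registered in the same scope.
var sharedAssets = {}; // an expando on the AudioWorkletGlobalScope

class Sampler extends AudioWorkletProcessor {
  constructor() {
    super();
    // Allocate (or receive via the port) one large asset, once.
    if (!sharedAssets.wavetable) {
      sharedAssets.wavetable = new Float32Array(1 << 20);
    }
    this.table = sharedAssets.wavetable; // shared, not copied
  }
  process() { return true; }
}

class Granulator extends AudioWorkletProcessor {
  constructor() {
    super();
    this.table = sharedAssets.wavetable; // the same buffer as the Sampler's
  }
  process() { return true; }
}

registerProcessor('sampler', Sampler);
registerProcessor('granulator', Granulator);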

The other reason is to keep the rendering performance optimal. What if your app pushes tons of onmessage tasks into the queue? Then you will end up with a queue containing a messy mixture of process() and onmessage tasks in a random order, and it eventually slows down the render task. Ideally, the render task must be finished ASAP so it can deliver the result to the system audio callback. Interleaving onmessage and process() does not really help. Wouldn't you agree?

Authors can write arbitrary scripts and destroy the performance, so this is not a very strong argument. If you want to have a high-performance app, obviously all the messages will be batched into a single postMessage call (considering SharedArrayBuffer is unavailable for now).
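
For example, a high-performance app might batch on the main thread along these lines (names are illustrative):

// Accumulate parameter updates and flush them with one postMessage
// per animation frame: one task on the render thread per frame,
// instead of one task per update.
const updates = [];
function queueUpdate(name, value) {
  updates.push([name, value]);
}
function flushUpdates(node) {
  if (updates.length > 0) {
    node.port.postMessage(updates.splice(0));
  }
  requestAnimationFrame(() => flushUpdates(node));
}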

We have to have a defined ordering of onmessage, Promise resolution and process(). They cannot run concurrently, since they share the same data, and data races are not possible on the web platform.

If you really don't want to have a break in between process calls, it's rather straightforward as well: we just spec that a full iteration of the processing model algorithm runs off a task (i.e. the rendering of a render quantum in web audio terms). In this case, onmessage will run either after or before this "iteration". Promise resolution will happen exactly after the iteration, in between the "render" task and any onmessage task that has been put in the render thread event loop while the rendering was happening. I think this is compatible with what Chrome is doing (you just have to spin the event loop before the iteration, then do a microtask checkpoint, do the rendering, do another microtask checkpoint).
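
In pseudocode (illustrative only, every identifier here is made up), the render thread would do roughly:

// One task per render quantum on the render thread's event loop.
function renderQuantumTask() {
  // Queued "message" tasks run before or after this task, never inside it.
  performMicrotaskCheckpoint();   // promises settled before the iteration
  for (const processor of processorsInProcessingOrder) {
    processor.process(inputs, outputs, parameters); // plain synchronous calls
  }
  performMicrotaskCheckpoint();   // promises created during process() calls
  deliverQuantumToSystemAudioCallback();
}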

Lastly, what is the real-world benefit of breaking the render task down to an individual process()? Being able to fire onmessage as soon as possible? I argue that the resolution of a cross-thread async call should be the lowest priority in the queue. As long as the order of tasks in the queue is honored, firing onmessage a bit later does not really hurt.

Yes, this gates minimal input latency under high load. If we don't do that, we'll have a minimal input latency for a 44100 Hz context of, say, 128/44100 s ≈ 2.9 ms, and an average and maximum latency much higher than that.

It's rather important to not assume use cases here. It might be that this event is going to fire in reaction to a web midi NoteOn event, because a keyboard player hit a key on a MIDI keyboard. The latency here can be absolutely critical.

Do you think we have "additional processes" on top of what is defined in the Worklet spec? I believe the only thing we're missing is a clear one-to-one association between AudioWorklet and AudioWorkletGlobalScope. (cf. CSSPaintWorklet can have 2 PWGS)

Right, but we have to formally decide this.

Great! I am also in favor of having a single scope.

Does that sound good to others? @joeberkovitz @mdjp @svgeesus


rtoy commented Mar 7, 2018

I am in favor of a single global scope for each context. I also find it much easier to think about things if graph rendering happens all at once and the messages are handled after rendering (or before).


hoch commented Mar 7, 2018

Authors can write arbitrary scripts and destroy the performance, so this is not a very strong argument. If you want to have a high-performance app, obviously all the messages will be batched into a single postMessage call (considering SharedArrayBuffer is unavailable for now).

I disagree. Finishing the render task (or iteration, I like the term btw) as fast as we can is important for audio stream stability. No matter the size or burden of the messages being passed, allowing them to interfere with the render task can't be good.

The API will be abused and we won't be able to stop the glitches no matter how we spec it. But we should not design for those abusive cases. IMHO, we should care about developers who care. Real-time audio has been a primary use case for the Web Audio API, so I believe maintaining certain constraints to keep that quality is necessary.

It's rather important to not assume use cases here. It might be that this event is going to fire in reaction to a web midi NoteOn event, because a keyboard player hit a key on a MIDI keyboard. The latency here can be absolutely critical.

The MIDI input latency from this scheduling should be less than ~3 ms (one 128-frame quantum at 44100 Hz is 128/44100 ≈ 2.9 ms), if an iteration finishes within the deadline. Having a minuscule delay on a MIDI event is much better than glitches in the stream. Also, the MIDI data received in the current render quantum would not make a difference until the rendering of the next quantum begins. So I don't think it really matters.


padenot commented Mar 8, 2018

I disagree. Finishing the render task (or iteration, I like the term btw) as fast as we can is important for audio stream stability. No matter the size or burden of the messages being passed, allowing them to interfere with the render task can't be good.

You mention that you're taking a lock when you're rendering from the audio thread. This must be because you're executing the script for onmessage or other event loop things from another thread. If this is the case, and because the memory is shared, then you must also take the lock before processing the onmessage event or any Promise. If onmessage is taking too long, it will be impossible for the rendering to go forward in your implementation, so you have a similar issue, just in a slightly different way.

It's in fact strictly worse, because the audio thread (which often has a higher priority) will be stalled, instead of simply getting a few more scheduling slices from the system because it's set to be non-preemptible (this is possible on macOS, and is what browsers do in practice). There is no priority inheritance on concurrency primitives on consumer, non-real-time, modern OSes, so the priority bump will not occur across threads when a high-priority thread is waiting on a mutex that is currently held by a lower-priority thread. This is a classic priority inversion issue, well documented in the literature.

You can also decide to do a trylock before trying to call the process() method of the worklet. If it fails, it's possible to render silence and continue with the rendering of the rest of the graph, effectively trading a glitch for silence.

The API will abused and we won't be able to stop the glitch no matter how we spec it. But we should not target for those abusive cases. IMHO, we should care about developers who care. The real-time audio has been a primary use case for Web Audio API, so I believe maintaining certain constraints to keep that quality is necessary.

I strongly agree, but the constraints are both on the implementation and on the authors.


hoch commented Mar 8, 2018

You mention that you're taking a lock when you're rendering from the audio thread. This must be because you're executing the script for onmessage or other event loop things from another thread.

Perhaps I misled you somehow. In Chrome, all the operations on the audio thread are carefully written not to use locks, or to use trylock if unavoidable. It has nothing to do with AWP.onmessage. Those operations happen on the same thread as the AWGS, so no lock should be needed there.


karlt commented Mar 14, 2018

This testcase may be useful to highlight what I assume are the two contending
options for handling microtasks from process() calls.

<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8">
  <title>Microtask test</title>
  <style>
.outer {
  background: green;
  padding: 50px;
  float: left;
}
.inner {
  background: blue;
  padding: 100px;
}
  </style>
</head>
<body>
  <div class="outer">
    <div class="inner"></div>
  </div>
</body>
<script>
function onClick() {
  console.log('click');

  setTimeout(function() {
    console.log('timeout');
  }, 0);

  Promise.resolve().then(function() {
    console.log('promise');
  });
}

// Four handlers in total: two on .inner and two on .outer (the click
// bubbles from .inner to .outer), so each click logs four times.
document.querySelector('.inner').addEventListener('click', onClick);
document.querySelector('.inner').onclick = onClick;
document.querySelector('.outer').addEventListener('click', onClick);
document.querySelector('.outer').addEventListener('click', function() { onClick(); });

// Synthetic click; compare its output with a real user click.
document.querySelector('.inner').click();
</script>
</html>

In the testcase, the behavior in response to click() is different from the
behavior in response to user events. If there is a sensible reason for this
difference then perhaps that reason also applies here. Or perhaps the
pre-existence of two different behaviors permits Web Audio to choose and still
remain consistent with an existing expectation.

click() generates

click
click
click
click
promise
promise
promise
promise
timeout
timeout
timeout
timeout

but a user event generates

click
promise
click
promise
click
promise
click
promise
timeout
timeout
timeout
timeout


hoch commented Mar 15, 2018

What we have agreed upon so far:

  1. Introduce the notion of "AudioProcessor" in the spec. We're still not sure about its form (normative or non-normative?).
  2. Revise the "processing model" based on the event loop, using the notion of AudioProcessor in the text.
  3. Make the graph render process atomic. As @padenot dubbed it, "iteration" can be an apt name for the atomic render task.

I believe the next step for this issue is to write up a PR. Or do we have something else to discuss here?


padenot commented Mar 15, 2018

Introduce the notion of "AudioProcessor" in the spec. We're still not sure about its form (normative or non-normative?).

Considering that we'll be using this new construct to describe normative behaviour (on your second point), I don't think it's possible to have it non-normative.

I believe the next step for this issue is to write up a PR. Or do we have something else to discuss here?

I think you're right and we need to write the text.


hoch commented Mar 15, 2018

I am dragging you into this so we can work on the write-up together at the upcoming F2F. :)

joeberkovitz commented:

Just surveying the thread prior to the call: I agree with @hoch's previous comment on what seems agreed, but would add the following suggested points (distilled from the conversation above):

  • AudioProcessor is a normative concept (agreeing with @padenot)
  • Mandate a single AWGS per graph.
  • Specify that in between iterations the UA must resolve all outstanding Promises (first) and then dispatch all outstanding messages (second).


rtoy commented Mar 15, 2018

@hoch and @padenot have agreed to work on this.

Removing WG review and adding Ready for Editing.

rtoy added the Needs Edits label on Mar 15, 2018