WebXR Device API #403
Hi TAG members! We saw that one of the questions that came up while reviewing this API is what relationship it has with WebVR. That's an excellent question, and one that we felt justified answering in our explainer. We just added a new section towards the end to cover the topic, copied here for convenience. (The short version is "WebXR is a replacement for WebVR, developed by the same group.")
We also wanted to ask the TAG to weigh in on a technical issue we've encountered with how WebXR interacts with Feature Policy. The full issue is detailed in WebXR Issue 768, but that's a long read and assumes some prior contextual knowledge, so I'll simplify it here.

On some devices (such as phones) WebXR surfaces motion data that is effectively a re-packaging of the data exposed by deviceorientation events or the generic sensors APIs. (In fact, the polyfill relies on deviceorientation to function on mobile devices.) It's not exactly the same, as WebXR applies various motion prediction and skeletal modeling algorithms to the data to better serve the API's purpose, but they're close enough that a motivated developer could use WebXR as a deviceorientation alternative if needed. (Please note that this does not apply to devices such as tethered headsets connected to a PC, as they would not have their motion data exposed through deviceorientation/generic sensors.)

The question, then, is: if a developer has specified through Feature Policy that WebXR is allowed but one of the sensor APIs which surface related data is blocked, should WebXR also avoid surfacing that data? This would result in WebXR reporting that it is unable to support VR content on mobile devices, while allowing desktop devices in the same circumstances, which seems difficult for developers to predict and test. On the other hand, if we allow WebXR to surface data similar to that of blocked APIs, it may be possible for developers to use WebXR to polyfill the other sensor APIs, subverting the presumably intentional blocking of those features via Feature Policy.

Given that this seems to be a novel situation for the web platform, with the potential of setting precedent for how other APIs interact with Feature Policy in the future, we wanted to get the TAG's opinion before finalizing how WebXR will handle this situation. Any insight you may have is appreciated!
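For concreteness, a sketch of the kind of conflicting configuration described above, expressed as a Feature Policy header (the `xr-spatial-tracking` feature identifier is an assumption based on what the WebXR spec later adopted; `accelerometer` and `gyroscope` come from the generic sensor specs):

```http
Feature-Policy: xr-spatial-tracking 'self'; accelerometer 'none'; gyroscope 'none'
```

Under a policy like this, a page on a phone could still derive orientation data from WebXR poses even though the generic sensor features are blocked, which is exactly the tension described above.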
Thanks for raising this review! I had a read through the spec (and, as discussed offline, sent a PR attempting to make some aspects of the explainer more concise and readable) and I came up with some questions/thoughts:

Accessibility

Obviously I'd like to see the question of accessibility addressed sooner rather than later. I am looking forward to the session at TPAC dedicated to this question, but I noted that the Goals section lists only "Display imagery on the XR device at the appropriate frame rate" alongside "Poll the XR device and associated input device state". That seems overly narrow even leaving the question of accessibility aside, given that many existing immersive experiences include sound and haptic feedback. In particular, though, for innovation in XR accessibility to be possible, authors will need the ability to control different modalities for conveying information about the virtual space which is being presented. Could the Web Audio, Vibration and Gamepad APIs make use of […]?

For users who require magnification, might it make sense to have an option on the viewport to perform appropriate scaling automatically?

There are also some interesting use cases around accessibility mentioned in the research document linked above, which might make good motivating examples: […]
Explainer/API questions
Thank you for your feedback! I'll answer what I can below, with some tasks broken out into separate issues/PRs as indicated. Focusing on the Explainer/API questions first, since those can generally be answered more concisely:
Thank you for demonstrating an effective way to do this in your explainer PR. If we don't merge that PR directly we'll be sure to add a TOC ourselves soon.
We would very much like to see immersive playback in the `<video>` element eventually. Additionally, there is not yet consensus on the video/audio formats and projection techniques that are optimal for these use cases. (This is a similar problem to map projection, in that there's no "perfect" way to lay out the surface of a sphere on a flat plane.) Similarly, we've seen on the 2D web that various video players are not satisfied with the default video controls and will frequently provide their own. It's reasonable to expect that trend to continue with immersive video, and it is not yet clear what the appropriate mechanism is for providing custom controls in that environment, whereas in WebXR it's implicitly the application's responsibility to render them. By starting with an imperative API we give developers a lot more flexibility in how they store, transmit, display, and control their content, which ideally will help inform future discussions around what knobs and levers are necessary to add to the `<video>` element.
I've opened an issue for further discussion on this topic, since it's one of the few potentially breaking changes you've brought up. It seems to me, though, like our usage here is in line with other similar methods that return a Promise.
This was actually left in the explainer erroneously. There is no default mode, which is reflected in the rest of the explainer and spec IDL. (PR to fix) Historically it was the default because it was the mode which required the least user consent.
We intend to introduce an `immersive-ar` session mode for this purpose in the AR module. An earlier draft of the API expressed the same idea with a dictionary of booleans instead:

```js
// Not compatible with the current spec!
navigator.xr.requestSession({
  immersive: true,
  ar: true
}).then(/*...*/);
```

The primary issue this introduced was that it implied that a non-immersive AR mode was a possibility, when we had no intent of ever supporting it. Plus, every new mode that is added would then have to reason about how it interacted with each of those booleans even if they weren't necessarily applicable. The use of enums was eventually deemed to be a cleaner approach.
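For contrast, here is roughly what the enum-based approach that was adopted looks like (a minimal sketch using the core spec's `immersive-vr` mode string):

```js
// The session's fundamental nature is expressed as a single enum
// value rather than a combination of booleans.
navigator.xr.requestSession('immersive-vr').then((session) => {
  // Set up rendering for the immersive session here.
});
```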
Issue filed to ensure we demonstrate handling context loss. More generally, there are two routes to ensuring context compatibility. If the context is created with the `xrCompatible` flag set, compatibility with the XR device is guaranteed up front; otherwise the page can call `makeXRCompatible()` on the context afterwards, which may itself trigger a context loss if the context has to move to the GPU the XR device is attached to.
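A sketch of both routes (`canvas` is a stand-in for however the page obtains its canvas element):

```js
// Route 1: request an XR-compatible context at creation time.
const gl = canvas.getContext('webgl', { xrCompatible: true });

// Route 2: upgrade an existing context later. This returns a promise
// and may trigger a context loss if the context has to move to the
// GPU the XR device is attached to.
await gl.makeXRCompatible();
```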
Issue filed to add a […]
I'm not sure exactly what this is asking for? Deep link from where?
Issue filed to add more code samples for […]
A layer is simply an image that will be displayed on the XR hardware somehow. Right now it's pretty minimal, with only a WebGL layer being exposed initially and only one active layer being allowed at a time. But we have known features that we'd like to implement in the future that would expand the types of layers that could be used and give more flexibility to how they're presented. For example, when WebGPU ships we would introduce a new layer type that allows a WebGPU context to render to the headset, and shorter term we'd like to add a layer type that takes better advantage of WebGL 2 features. Other examples of how we may use layers in the future:
Slightly oversimplifying here, but a […]

It's worth noting that previously in WebVR we effectively used the […]
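To make the layer discussion concrete, a minimal sketch of setting up the one layer type currently exposed, `XRWebGLLayer` (the `canvas` handling is an assumption):

```js
// Create an XR-compatible WebGL context, then wrap it in a WebGL
// layer and make that layer the session's base layer.
const canvas = document.createElement('canvas');
const gl = canvas.getContext('webgl', { xrCompatible: true });

const session = await navigator.xr.requestSession('immersive-vr');
session.updateRenderState({ baseLayer: new XRWebGLLayer(session, gl) });
```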
Having a concrete example in the explainer of when this state might apply would be a good idea. A quick visual aid, showing Oculus' dashboard system: [image]

Not all platforms support this type of interaction, especially if power is limited, and in those cases we would expect the session to only toggle between `visible` and `hidden`.
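As an illustration, a sketch of how a page might respond to the session's `visibilitychange` event (the states come from the spec's visibility enum; exactly what to pause is up to the app):

```js
// 'visible-blurred' roughly corresponds to the dashboard case above:
// the scene is still shown, but input is being captured elsewhere.
session.addEventListener('visibilitychange', () => {
  switch (session.visibilityState) {
    case 'visible-blurred':
      // Keep rendering, but consider muting audio and pausing gameplay.
      break;
    case 'hidden':
      // Frame callbacks stop arriving; suspend non-essential work.
      break;
    case 'visible':
      // Resume normal operation.
      break;
  }
});
```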
We definitely understand the importance of accessibility, and also want to ensure that immersive web content does not unnecessarily exclude users due to faulty assumptions on the part of developers about the user's abilities. This is a large topic, however, and one that has seen more discussion recently, so I think it would be more productive for us to outline our current thinking about accessibility in a separate doc, which we'll link here. Needless to say, it's a complicated problem, made more difficult by the imperative nature of the rendering APIs we rely on, the relative newness of the VR ecosystem, and the type of content the device capabilities encourage. It seems likely that our accessibility story will span API enhancements, working with tool and content developers to take advantage of existing accessibility features when appropriate, encouraging best practices around use of audio and haptics, and detailing UA-level accessibility features that can apply to all content.
Hi, @alice, @dbaron, @plinss, and I talked about this a bit today at our Cupertino F2F. @NellWaliczek wrote, in a comment on immersive-web/webxr#818: […]
I think a variant of (ii) is best. The variation being that I don’t think “nearing CR” is the trigger, it’s “this is being implemented in a browser engine.” (This is essentially what @dbaron said in two comments on w3ctag/design-principles#99: (1, 2))
I believe there was a workshop on XR accessibility recently. Were there any documents produced in that context which might be relevant here?
Apologies: I meant the first paragraph of the Viewer tracking section, which links to the Spatial Tracking Explainer. Could this instead be a deep link to the relevant section?
My proposed edits included an "Important concepts" section encompassing the concepts I had to draw out as I was reading the explainer and my best guesses as to how to explain them. It would be helpful to have an explanation about layers in your explainer, as well as in this issue thread.
The example you gave here would work well! It doesn't seem to have been worked back into the explainer yet.

One other thing: re-reading the explainer, I was confused by this sentence (emphasis added): […]
What does that last clause mean? i.e. what does it mean for the page to continue processing new frames, if it's not writing to the framebuffer?
It looks like we marked this as […]
It seems like this has largely settled, so I'm going to propose closing. We're generally happy with this direction, particularly since the […].

Please comment here in the next week or so if you don't want us to close it; otherwise, you can comment or file a new issue after it's closed if you want more feedback. Thanks!
I agree, and am fine with seeing this closed. As always, we're happy to reach out to the TAG if we have additional questions in the future or for reviews of additional modules that we develop. Thank you!
Hello TAG!
I'm requesting a TAG review of:
@NellWaliczek, editor
@toji, editor
@cwilso, WG co-chair
@AdaRoseCannon, WG co-chair
@TrevorFSmith, CG chair
Further details:
The WebXR Device API has recently reached the point where it is considered a feature-complete replacement for the deprecated WebVR API. We have also switched the work mode to be based around modules, where the current "vr complete" WebXR Device API acts as a core with other modules, such as 'webxr-ar-module' and 'webxr-gamepad-module', building on it. We are not requesting a review for these modules yet.
We are also working on a polyfill for the WebXR device API, https://github.com/immersive-web/webxr-polyfill/
In addition, there are multiple browser vendors working on implementations in their browsers.