Detail magic window support #255
Conversation
Ping for comments. This is one of the critical pieces we identified for TAG readiness, so I'd definitely like some additional feedback on it. In particular:
I like this approach a lot! A few minor nitpicky things and some clarification questions :)
@@ -462,7 +511,7 @@ function setupWebGLLayer() {
  });
  ```

The second scaling mechanism is to request a scaled viewport into the `VRWebGLLayer.framebuffer`. For example, under times of heavy load the developer may choose to temporarily render fewer pixels. To do so, developers should call `VRWebGLLayer.requestViewportScaling()` and supply a value between 0.0 and 1.0. The UA may then respond by changing the `VRWebGLLayer.framebuffer` and/or the `VRViewport` values in future VR rendering frames. It is worth noting that the UA may change the viewports for reasons other than developer request; as such, developers must always query the viewport values on each VR rendering frame.
The second scaling mechanism is to request a scaled viewport into the `VRWebGLLayer.framebuffer`. For example, under times of heavy load the developer may choose to temporarily render fewer pixels. To do so, developers should call `VRWebGLLayer.requestViewportScaling()` and supply a value between 0.0 and 1.0. The UA may then respond by changing the `VRWebGLLayer.framebuffer` and/or the `VRViewport` values in future VR rendering frames. It is worth noting that the UA may change the viewports for reasons other than developer request, and that not all UAs will respect requested viewport changes; as such, developers must always query the viewport values on each VR rendering frame.
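A sketch of how this mechanism might be exercised, following the API shape described above. `getViewport` is a hypothetical per-view accessor used for illustration, and the 0.5 scale value is an arbitrary choice:

```javascript
// Dynamic viewport scaling, per the text above. The layer/session shapes
// follow this proposal; `getViewport` is a hypothetical accessor and the
// 0.5 scale factor is arbitrary, not part of the spec text.
function onLoadChanged(vrSession, loadIsHigh) {
  // Ask the UA to render fewer pixels under heavy load. Values must be
  // between 0.0 and 1.0; the UA is free to ignore or adjust the request.
  vrSession.baseLayer.requestViewportScaling(loadIsHigh ? 0.5 : 1.0);
}

function drawVRFrame(vrFrame, vrSession, gl) {
  // The UA may alter viewports for its own reasons (not just in response
  // to requestViewportScaling), so query them on every VR rendering frame.
  for (const view of vrFrame.views) {
    const vp = vrSession.baseLayer.getViewport(view);
    gl.viewport(vp.x, vp.y, vp.width, vp.height);
    // ... render the scene for this view ...
  }
}
```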
Good catch
explainer.md
Outdated
### VRRenderingContext

Any time WebVR content is presented on the page it's done through a `VRRenderingContext`. Like a `WebGLRenderingContext`, developers acquire a `VRRenderingContext` by calling `HTMLCanvasElement.getContext()` or `OffscreenCanvas.getContext()` with the context id of "vr". (Editor's note: Naming is hard. Please _please_ *please* suggest something better!) The returned `VRRenderingContext` is permanently bound to the canvas.
What about a context id of "vrmirror"?
The thought had crossed my mind too. I worry that's unintuitive for the magic window case, though. I also toyed with "vroutput", but it just didn't sit well with me.
Hmm... what about "vrpresent" or "vrpresentation" paired with a "VRPresentationContext"? I worry that a "Rendering" context sounds like something that you should be producing imagery with rather than a mechanism for displaying it. "Presentation" carries a bit better implications in that regard.
explainer.md
Outdated
There are a couple of scenarios in which developers may want to present content rendered with the WebVR API on the page instead of (or in addition to) a headset.

### VRRenderingContext
Nit: This is maybe just a readability thing, but it took me a few seconds to realize this wasn't one of the cases, but instead the solution. I'm not sure if it shouldn't be a separate "###" section or if we just need to rephrase stuff?
Reading this back I definitely see what you mean. I'll take another stab at it.
explainer.md
Outdated
In order to mirror WebVR content to the page, developers provide a `VRRenderingContext` as the `outputContext` in the `VRSessionCreateParameters` of an exclusive session. Once the session has started, any content displayed on the headset will then be mirrored into the canvas associated with the `outputContext`.

When mirroring, only one eye's content will be shown, and it should be shown undistorted. The UA may choose to crop the image shown, display it at a lower resolution than originally rendered, and the mirror may be multiple frames behind the image shown in the headset. The mirror may include or exclude elements added by the underlying VR system (such as visualizations of room boundaries) at the UA's discretion. Pages should not rely on a particular timing or presentation of mirrored content; it's really just for the benefit of bystanders or demo operators.
Just to make sure we're saying the same thing... what do you mean by "it should be shown undistorted"?
I meant without any post processing to correct for optical distortion. I'll try to make this clearer.
explainer.md
Outdated
document.body.appendChild(mirrorCanvas);

vrDevice.requestSession({ outputContext: mirrorCtx })
    .then(OnSessionStarted);
Since we're likely to see developers copy/paste, should we put the exception catching in the sample here as well?
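Following that copy/paste concern, the sample might include rejection handling along these lines — a sketch assuming the `requestSession` shape shown above, with the handler names as placeholders:

```javascript
// Mirroring sample with explicit rejection handling, per the comment above.
// The handler parameter names are placeholders, not part of the proposal.
function startMirroredSession(vrDevice, mirrorCtx, onSessionStarted, onError) {
  return vrDevice.requestSession({ outputContext: mirrorCtx })
    .then(onSessionStarted)
    .catch((err) => {
      // Session creation can reject (device busy, permission denied, etc.).
      // Surface that instead of letting copy/pasted code fail silently.
      onError(err);
    });
}
```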
explainer.md
Outdated
There are several scenarios where it's beneficial to render a scene on the page whose view is controlled by device tracking. For example:

- Using phone rotation to view panoramic content.
I can imagine someone asking why this wouldn't just be orientation APIs, so maybe expand this to say "Using phone rotation to view panoramic content that can easily be upgraded to an exclusive VR experience on the same device" Or... something?
Good point. I'll try to improve that.
explainer.md
Outdated
- Making use of head-tracking features for devices like [zSpace](http://zspace.com/) systems.

These scenarios can make use of non-exclusive sessions to render tracked content to the page. Similar to mirroring, to make use of this mode a `VRRenderingContext` is provided as the `outputContext` at session creation time, as well as the `exclusive: false` flag. At that point content rendered to the `VRSession.baseLayer` will be rendered to the canvas associated with the `outputContext`. In the future, if multiple `VRLayers` are used their composited result will be what is displayed in the `outputContext`. Given that the session is tied to a specific `outputContext`, it's permissible to have multiple non-exclusive sessions active at once to handle multiple output surfaces.
> At that point content rendered to the `VRSession.baseLayer` will be rendered to the canvas associated with the `outputContext`. In the future, if multiple `VRLayers` are used their composited result will be what is displayed in the `outputContext`.
Can we reword this to be slightly less restrictive so the VR compositor can add content (such as room boundaries) in the final pixels being drawn into the outputContext?
I do have language like that for mirroring, but you're asking for the option to do boundary rendering for magic window? I'm not sure if I'm clear on the benefit of that.
It was more that I wanted to leave us the option of having the VR system merge in its own pixels (for whatever reason) rather than just act as layer composition.
I would love to have a more concrete example of what you have in mind for this, but I suspect that if you could have shared one you would have already. ;) In any case, I don't object to the idea of the UA being able to push additional pixels so I've added a line to that effect.
explainer.md
Outdated
- Taking advantage of 6DoF tracking on devices (like [Tango](https://get.google.com/tango/) phones) with no associated headset.
- Making use of head-tracking features for devices like [zSpace](http://zspace.com/) systems.

These scenarios can make use of non-exclusive sessions to render tracked content to the page. Similar to mirroring, to make use of this mode a `VRRenderingContext` is provided as the `outputContext` at session creation time, as well as the `exclusive: false` flag. At that point content rendered to the `VRSession.baseLayer` will be rendered to the canvas associated with the `outputContext`. In the future, if multiple `VRLayers` are used their composited result will be what is displayed in the `outputContext`. Given that the session is tied to a specific `outputContext`, it's permissible to have multiple non-exclusive sessions active at once to handle multiple output surfaces.
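A minimal creation sketch for this mode, assuming the session parameters described above (the context argument stands in for a rendering context obtained from a canvas; the names are illustrative only):

```javascript
// Requesting a non-exclusive ("magic window") session, per the text above.
// `magicWindowCtx` stands in for a context acquired from a canvas via
// getContext(); the function and variable names are placeholders.
function requestMagicWindowSession(vrDevice, magicWindowCtx) {
  return vrDevice.requestSession({
    exclusive: false,              // don't take over the headset
    outputContext: magicWindowCtx  // tracked content renders to this canvas
  });
}
```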
> Given that the session is tied to a specific `outputContext` it's permissible to have multiple non-exclusive sessions active at once to handle multiple output surfaces.
Not sure this sentence is strictly necessary since we've removed other references to the notion of multiple non-exclusive sessions for the time being. However, what is the expected behavior if a non-exclusive session and an exclusive session are using the same outputContext? I'm guessing your inclination will be to say the exclusive session will win (if for no other reason than that the non-exclusive session will be suspended). Just for the purposes of the thought experiment, what would you expect to happen if two non-exclusive sessions were provided the same outputContext?
Good question. I guess I was thinking that outputContext would be tied to the session for the session's lifetime, regardless of exclusivity. I guess it could be a convenience to re-use a context to switch between magic window and mirroring, but I don't see that as a huge concern.
In the meantime, I'll take out the reference to multiple sessions.
explainer.md
Outdated
  readonly attribute HTMLCanvasElement canvas;
};

dictionary VRRenderingContextSettings {
What's this for? (did I miss something?)
Ah, I never mention that anywhere, do I? `getContext` allows a dictionary of context creation args for any context type, and I was just trying to think of what properties might be applicable here. Honestly I'm not even sure this is necessary, though. I can drop it for now.
explainer.md
Outdated
}
```

### Non-exclusive sessions ("Magic Windows")
Should we be considering requiring an outputContext for non-exclusive sessions?
Given that the concept of a tracking-only context has proven controversial, yes. :)
Described how magic window mode works with the `outputCanvas` proposal. Also, because they’re natural variants of the same mechanism, described mirroring and tracking only behavior. Follows the proposal in #224. Also changed VRPresentationFrame -> VRFrame because if we moved forward with the tracking only portion of this there would be scenarios in which the VRPresentationFrames had no Presentation component.
It’s the least important and yet most contentious part of this proposal AFAICT, so I don’t mind dropping it to help move things along.
… type On @RafaelCintron’s suggestion.
Okay, updated the PR to take into account @NellWaliczek's feedback. (Thanks!)
Seems solid. There are a couple of wording tweaks that seem like they'd be helpful, though we should probably be doing a scrub for that sort of stuff over the whole explainer, tbh. The only substantive question I had was about whether we want to allow the outputContext to be changeable after session creation. Also... I'm still iffy on the naming, but not so much that the PR shouldn't go in for now ;)
explainer.md
Outdated
@@ -90,7 +90,7 @@ Sessions can be created with one of two levels of access:

If a `VRDevice` is available and able to create an exclusive session, the application will usually want to add some UI to trigger activation of "VR Presentation Mode", where the application can begin sending imagery to the device. Testing to see if the device supports the capabilities the application needs is done via the `supportsSession` call, which takes a dictionary of the desired functionality and returns a promise which resolves if the device can create a session supporting those properties and rejects otherwise. Querying for support this way is necessary because it allows the application to detect what VR features are available without actually engaging the sensors or beginning presentation, which can incur significant power or performance overhead on some systems and may have side effects such as launching a VR status tray or storefront.

In the following example we ask if the `VRDevice` supports sessions with `exclusive` access, since we want the ability to display imagery on the headset.
In the following examples we will focus on using exclusive sessions, and cover non-exclusive session use in the [`Advanced Functionality`](#non-exclusive-sessions-magic-windows) secion. With that in mind, we ask here if the `VRDevice` supports sessions with `exclusive` access, since we want the ability to display imagery on the headset.
secion->section
explainer.md
Outdated
@@ -90,7 +90,7 @@ Sessions can be created with one of two levels of access:

If a `VRDevice` is available and able to create an exclusive session, the application will usually want to add some UI to trigger activation of "VR Presentation Mode", where the application can begin sending imagery to the device. Testing to see if the device supports the capabilities the application needs is done via the `supportsSession` call, which takes a dictionary of the desired functionality and returns a promise which resolves if the device can create a session supporting those properties and rejects otherwise. Querying for support this way is necessary because it allows the application to detect what VR features are available without actually engaging the sensors or beginning presentation, which can incur significant power or performance overhead on some systems and may have side effects such as launching a VR status tray or storefront.

In the following example we ask if the `VRDevice` supports sessions with `exclusive` access, since we want the ability to display imagery on the headset.
In the following examples we will focus on using exclusive sessions, and cover non-exclusive session use in the [`Advanced Functionality`](#non-exclusive-sessions-magic-windows) secion. With that in mind, we ask here if the `VRDevice` supports sessions with `exclusive` access, since we want the ability to display imagery on the headset.
Should we remove the parameter to supportsSession in the example as well? I'm sort of torn on that one.
Yeah, I am too, but I think consistency is important. I'll remove it, but I'm also going to add an example in the magic window section of using `supportsSession` to test for magic window support.
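That support check might look like the following sketch, assuming `supportsSession` resolves on support and rejects otherwise, as described earlier in the thread:

```javascript
// Feature-detecting magic window support via supportsSession, as mentioned
// above. Resolves to a boolean rather than rejecting, for easy branching.
function supportsMagicWindow(vrDevice) {
  return vrDevice.supportsSession({ exclusive: false })
    .then(() => true, () => false);
}
```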
@@ -313,6 +301,67 @@ vrSession.addEventListener('ended', OnSessionEnded);

If the UA needs to halt use of a session temporarily the session should be suspended instead of ended. (See previous section.)

## Rendering to the Page

There are a couple of scenarios in which developers may want to present content rendered with the WebVR API on the page instead of (or in addition to) a headset: Mirroring and "Magic Window". Both methods display WebVR content on the page via a Canvas element with a `VRPresentationContext`. Like a `WebGLRenderingContext`, developers acquire a `VRPresentationContext` by calling `HTMLCanvasElement.getContext()` or `OffscreenCanvas.getContext()` with the context id of "vrpresent". The returned `VRPresentationContext` is permanently bound to the canvas.
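Acquisition might be sketched as follows. "vrpresent" is the context id proposed in this PR and may change; the null-on-unsupported fallback is an assumption borrowed from how `getContext('webgl')` behaves, not spec text:

```javascript
// Acquiring a VRPresentationContext, per the paragraph above. The
// "vrpresent" context id is the name proposed in this PR and may change.
function getVRPresentationContext(canvas) {
  const ctx = canvas.getContext('vrpresent');
  if (!ctx) {
    // Assume null means the context type is unsupported, mirroring
    // getContext('webgl') behavior (an assumption, not spec text).
    throw new Error('VRPresentationContext not supported');
  }
  return ctx; // Permanently bound to this canvas, per the proposal.
}
```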
Not totally thrilled with "vrpresent" :) but I'm good with not worrying too much about naming until the structure is solid (could even be a subsequent PR if needed)
> The returned `VRPresentationContext` is permanently bound to the canvas.
Should we call out that supplying an invalid (already used) VRPresentationContext should not cause session creation to fail? I assume we'd just want the mirroring part to fail?
explainer.md
Outdated
### Non-exclusive sessions ("Magic Windows")

There are several scenarios where it's beneficial to render a scene on the page whose view is controlled by device tracking. For example:
nit: "to render a scene on the page" -> ", within a 2D page, to render a scene"?
explainer.md
Outdated
Similar to mirroring, to make use of this mode a `VRPresentationContext` is provided as the `outputContext` at session creation time, as well as the `exclusive: false` flag. At that point content rendered to the `VRSession.baseLayer` will be rendered to the canvas associated with the `outputContext`. In the future, if multiple `VRLayers` are used their composited result will be what is displayed in the `outputContext`. Requests to create a non-exclusive session without an output context will be rejected.

Exclusive and non-exclusive sessions can use the same render loop, but there are some differences in behavior to be aware of. The sessions may run their render loops at different rates. During exclusive sessions the UA runs the rendering loop at the `VRDevice`'s native refresh rate. During non-exclusive sessions the UA runs the rendering loop at the refresh rate of the page (aligned with `window.requestAnimationFrame`). Also, when rendering with an exclusive session the projection matrices provided each frame are based on the device optics, while non-exclusive sessions provide projection matrices based on the output canvas dimensions and possibly on the position of the user's head in relation to the canvas if that can be determined.
> Also, when rendering with an exclusive session the projection matrices provided each frame are based on the device optics, while non-exclusive sessions provide projection matrices based on the output canvas dimensions and possibly on the position of the user's head in relation to the canvas if that can be determined.
I'm a bit nervous that this may give folks the impression that there are circumstances under which they don't need to check the projection matrices each frame.
The projection matrices are also based on depthNear and depthFar, so maybe there's a way to word this a little less specifically?
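A render loop along the lines discussed might re-read the projection data on every frame rather than caching it — a sketch, where `views`/`projectionMatrix` follow this proposal's frame shape and `drawView` is a placeholder callback:

```javascript
// A render loop usable for both exclusive and non-exclusive sessions.
// Projection matrices are re-read every frame (never cached), since the UA
// may change them based on device optics, canvas size, head position, or
// the session's depthNear/depthFar values. `drawView` is a placeholder.
function processVRFrame(vrFrame, drawView) {
  for (const view of vrFrame.views) {
    // One view for magic window, typically two for a headset.
    drawView(view.projectionMatrix, view);
  }
}
```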
@@ -557,10 +606,12 @@ interface VRDevice : EventTarget {

dictionary VRSessionCreateParametersInit {
  boolean exclusive = true;
  VRPresentationContext outputContext = null;
Do we want to allow this value to be reset? Or should it only be allowed to be set at session creation time? My inclination is to start as you have it (not resettable) and see if there are use cases that would drive adding the feature.
That's my feeling too. It's easier to add the ability to reset it in the future than it is to revoke it if it proves problematic.
Thanks again @NellWaliczek! I've made some verbiage tweaks to account for your feedback, though I've left the names alone for now. I agree that the context ID name especially is not my favorite, but that's easy to change as soon as someone suggests a better one. Describing the behavior is more important for now. I think I'm going to merge this soon so that we've got the basics in place for the TAG review. Obviously we can continue to iterate on it after that, but I'm feeling pretty good about the general shape of this part of the API.
Merged!
Described how magic window and mirroring work with a canvas `outputContext`. Follows the proposal in #224.
This is a continuation of #237, which GitHub helpfully closed without notifying me after some branch rebasing shenanigans and now won't let me re-open. Ugh. 😠
Anyway, to follow up on @RafaelCintron's comments from the closed PR: after giving it a lot of thought and having some conversations on the topic with various people, I agree that we probably want to create our own `CanvasRenderingContext` type. The primary driver for this in my mind is that on some devices (like zSpace displays) we'll want to allow for stereo imagery to be shown in the canvas, and trying to hack images into an `ImageBitmap` is not something I care to do. Instead, we'll want a purpose-built set of test-centric APIs and disallow imagery from coming from anywhere but the `VRSession`, so we don't have to deal with issues like a canvas that's been showing stereo imagery suddenly switching to mono when an `ImageBitmap` is pushed to it and then back again on the next VR frame.

I've updated the pull request to reflect that, and am really eager to have somebody suggest better names for the new context type. 😝 Also, this PR is avoiding attempting to add the testing-centric APIs mentioned by Rafael to the context, as I figure that could get to be a lengthy discussion for a follow-up patch.
(Also, sorry about the delayed reply! Between the US holiday and this simply being a tough issue I've been really slow to gather my thoughts on it.)