VRWebGLLayer and unified n-view rendering loop #218
Conversation
- Removes VRCanvasLayer in favor of VRWebGLLayer, which will provide developers with a dedicated framebuffer for rendering the layer
- Unifies the 'magic window' and 'exclusive' rendering paths as much as is possible
- Enables each VR subsystem to maximize and optimize rendering performance by giving the subsystem control of render target creation/size as well as the optimal viewport size for each view (aka eye). Developers are also able to request these values be scaled for their own performance reasons, and the VR subsystem will comply to the best of its ability
- Makes clear to developers the timing of when different rendering-specific properties (views, viewports, projection matrices, etc.) can change, by isolating these properties into a cohesive set of objects that is unique to each rendering frame
- Moves to an N-view system rather than implicitly assuming all 'exclusive' sessions are stereoscopic and 'magic window' sessions are monoscopic
Thanks @NellWaliczek and co! I feel like the high-level goals behind this change are worth pursuing and will help make the API more robust and future-proof. I do have several questions and comments about the specific syntax in this proposal.

First I should note that there are several bits of cleanup here that aren't really feature-specific, and that I've made some similar touchups in my variant of the proposal. I've pulled them out and committed them separately (eea6cef) for the sake of keeping this PR focused on functionality changes. And in doing so I have introduced conflicts with your patch... sorry! 😅

As for the feature itself, I think the number one question I have is what's the intent behind the

Agreed that we don't need a per-layer commit anymore (which is nice!) and I don't mind the name

I'm a bit concerned about the impact the proposed changes to the device pose would have on the amount of garbage created. We're already kind of biting the bullet for garbage creation by returning a new value, so I feel like we should be careful about making the returned object too complex or deep. That said, I recognize that this allows the

So those are my high-level thoughts on this pull request. I have my own variant of the feature that I'd like to put up for discussion as well, at which point I'm sure we can come to a happy midpoint between them. Not sure how best to do that without forking the conversation into two different pull requests, though. Hm... I'll think about it and post a follow-up soon.
Okay, for the sake of keeping this conversation in one place, I've pushed my variant of the proposal as a branch. You can see the diff here: 9462be0 and read the full explainer here: https://github.com/w3c/webvr/blob/framebuffers/explainer.md

As Nell alluded to, I think it could be beneficial to have a session "bound" to a canvas or offscreen canvas at creation time, probably with syntax like this:

```js
vrDevice.requestSession({ canvas: glCanvas, exclusive: true }).then(OnSessionStarted);
```

At which point all

As far as buffer scaling goes, I feel pretty strongly that this is something we need to preserve. The basic reasoning is that some apps will want to render with high quality, while others will prefer to render at a lower resolution for the sake of performance. On mobile devices, especially, rendering at a lower res than the screen is capable of supporting is almost always recommended unless you're doing something like videos, an image viewer, or text. If we don't have some basic control over the size of the allocated buffers then implementations would have to allocate a "full sized" 1:1 buffer all the time, even if we recommend that users render to a smaller viewport, on the off chance that they might need the higher quality. This has performance implications on devices like Daydream, where we may end up copying the full buffer regardless of the viewports used, burning precious fillrate for the sake of an overbroad abstraction.

To address this, I suggest that on layer creation we allow the developer to specify a framebuffer scale hint:

```js
let vrLayer = new VRWebGLLayer(vrSession, { framebufferScaleHint: 1.0 }); // Full resolution buffer
```

This couldn't be changed after the layer was created, and the layer may adjust or ignore the scalar to satisfy implementation limitations. (The actual scale used would be reported back as an attribute on the layer.) There's still a need for viewport scaling, though, since the page may want to dynamically adjust how much rendering work it's doing frame to frame, and we don't want to encourage resizing the framebuffer. Thus the need for another, more lightweight viewport scaling hint on the layer.

Something to point out: One interesting future use case for this could be a scenario where the developer wants to show a very crisp text or HUD layer over a lower-res background. That implies that not all layers would use the same framebuffer size, which affects the API shape. Additionally, I think we may want to allow for the possibility that even framebuffers within a single layer may not be the same size. Consider a hypothetical ultra-wide FoV HMD which renders the periphery with a large, very low res screen and the central view with a small high res screen.

You can see my proposed IDLs in the explainer branch linked at the beginning of this comment, but for easy reference the render loop for this proposal looks something like this:

```js
function OnDrawVRFrame() {
  let pose = vrSession.getDevicePose(vrFrameOfRef);
  if (pose) {
    // Maybe worth having a vrSession.viewCount?
    for (let i = 0; i < pose.views.length; ++i) {
      let view = pose.views[i];
      let viewport = vrLayer.viewports[i];
      gl.bindFramebuffer(gl.FRAMEBUFFER, viewport.framebuffer);
      gl.viewport(viewport.x, viewport.y, viewport.width, viewport.height);
      drawScene(view.projectionMatrix, view.viewMatrix, view.eye);
    }
  }
  vrSession.requestVRFrame().then(OnDrawVRFrame);
}
```

Probably the biggest thing that annoys me about this approach is the loose coupling of the pose views to layer viewports by index, which your proposal made more explicit, but at the cost of some complexity. I'm on the fence about it.

Would definitely love feedback from the wider community about both variants of the feature proposal! I expect we'll end up with something that's a mix of both.
Hey @toji! Those are some prolific comments ^_^ Hopefully my responses didn't miss anything!

- Cleanup text
- baseLayerTarget
- Multiview extension support
- Promise vs. Callback
- Garbage collection
- Passing a canvas to session creation
- Buffer scaling
- Differences in framebuffer resolution

Hopefully that clarifies some things, and thanks for the fast feedback!

P.S. Something seems fishy about the migration link you posted... When I clicked on it I saw instructions for 1.0->1.1 rather than 1.1->2.0.
Okay, replying on buffer scaling then ;-)

The scaling hint seems odd. We have two ways for a developer to control quality now: via initial framebuffer creation, and then later via scaling their viewport, which is how we recommend people do dynamic resolution currently. We just had to fix all of our bugs related to this (well, Artem did ;-) so it's clearly in use. It seems like buffer scaling is being munged in with a multitude of things that lead to the way the viewports are later set. I'd suggest continuing to let the developer specify the size of the buffer, but give them a method to retrieve the suggested viewport size the same as we do today; have it take the suggested scaling hint in this API call and simply return the suggested size that way (see the sketch after this comment).

Note I'm clumping this into buffer sizing because the whole "wrap around a GL layer and then get an arbitrary number of views from it" is one of those things that is maybe being munged. Specifically, I'm not a fan of trying to resolve exceptionally complex devices in our current system. We should be willing to instead describe the devices more clearly and how we suggest people use them. There are "mono" devices like cameras, and there are "stereo" devices like current headsets. You can even specify the "recommended" set of displays that should be rendered to, or give them a priority so the developer knows which ones they should try to render to. But note that in most scenarios, if you ask an app to render a third version of the view, you are setting it up for failure. In fact, in most scenarios, if we even increase the FPS from 60 to 72, apps begin to fail and skip. So it's better to run these at 60fps. We have to realize that content is pretty highly tuned, and making an API that treats scaling as automatic rather than explicit will actually result in a sub-par experience.

We've discussed before that an attached viewpoint camera can simply be another device. As could multiple perspectives within a CAVE. But the only experience that would automatically run on a CAVE would be an experience that chose to enumerate and display to all of those devices. We all know that responsive web pages are better than ones that are forced to render in a way they weren't designed for. Asking a stereo experience to render into a CAVE by "tricking" it is definitely not going to lead us into the pit of success.

There are a bunch of other interesting arguments against having too many automatic views from a single device call. Namely, that you can no longer handle these views with different priorities. A third-person camera view could likely be updated at 10fps and not be that noticeable, while an HMD stereo view requires 60fps updates.
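As a minimal sketch of that suggestion, assuming `VRSourceProperties` exposes `width`/`height` attributes (the method shape mirrors the `getSourceProperties(optional double scale)` signature that appears in the IDL excerpt later in this thread):

```js
// Hypothetical usage: ask the UA for its suggested source size at a given
// scale hint, then size the buffer explicitly. The width/height attributes
// on VRSourceProperties are assumed here for illustration.
let props = vrSession.getSourceProperties(0.5); // suggested size at a 0.5 scale hint
webglCanvas.width = props.width;
webglCanvas.height = props.height;
```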
The note on Garbage Collection also sounds a bit confounding. A framebuffer is a "platform object", which means its backing storage is opaque to the calling page. What this means is that if you want to change things in the framebuffer, you can do so without reallocating a new JavaScript object; platform objects aren't immutable. I don't have the full context here, but even if you changed the framebuffer every frame, it doesn't mean this would be detectable at all by the JavaScript, and cached values would still work. You'd have to instead convince yourself that there is value in completely reallocating the framebuffer, including the JavaScript object and not just its backing GL object. Personally, I don't think there is.

That aside, we should still consider what is const data versus what is platform object data in these structures that will be allocated every frame. We should also consider field access versus property access, and note that if you want to elevate something to a field then you are going to allocate it even if the user never uses it, unless you have sophisticated deferred-init code.
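A minimal sketch of the caching point above, assuming the layer shape proposed in this thread:

```js
// Because vrLayer.framebuffer is a platform object, the UA can swap its
// backing GL storage between frames without reallocating the JS wrapper,
// so caching the wrapper once produces no per-frame garbage.
const cachedFramebuffer = vrLayer.framebuffer;

function onDrawFrame() {
  gl.bindFramebuffer(gl.FRAMEBUFFER, cachedFramebuffer); // still valid each frame
  // ... draw ...
  vrSession.requestVRFrame().then(onDrawFrame);
}
```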
@NellWaliczek: Derp. Didn't intend to link to the migration doc. (Which is thoroughly obsolete at this point.) Updated that comment to the right link, which is for the explainer from that branch. 🙄
**On the topic of Garbage Collection:**

**More on buffer scaling:** If we switch to a system where the framebuffers are allocated internally, I feel like we still need a way to preserve that control. We could allow users to provide an exact size, but that would fail if the buffers should ideally be different sizes, and it would require us to still offer something like the

**Wherefore art thou, viewport scale?:** But there's still a good case to be made for adjusting the viewport frame-to-frame to handle dynamic performance bumps without the high overhead of a buffer reallocation. Again, a simple scalar value gives us 90% of the previous functionality in a simpler package. The only thing we lose is the ability to overlap viewports for mono rendering, which I understand is not an option for WinMR anyway. (And if we DO want to support mono rendering, that's probably worth having an explicit layer option for.) But again, I find the ability to control buffer resolution to be far more important. It's an easier thing to tweak, with a bigger impact for most developers. The ability to fiddle with the viewport frame-to-frame is a much more advanced technique that requires the ability to track rendering performance carefully, which the web is not good at. If needed, I could see an argument for deferring viewport scaling control to a later iteration of the API.

**N > 2 viewports:**
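To make the frame-to-frame technique concrete, here's a rough sketch; the `requestViewportScaling` name matches the method used later in this thread, while the timing heuristic and thresholds are purely illustrative assumptions:

```js
let viewportScale = 1.0;
let lastFrameStart = performance.now();

function onDrawFrame() {
  const now = performance.now();
  const frameMs = now - lastFrameStart;
  lastFrameStart = now;

  // Back off when a ~11ms (90Hz) budget is blown; recover slowly otherwise.
  if (frameMs > 11 && viewportScale > 0.5) {
    viewportScale -= 0.05;
  } else if (frameMs < 9 && viewportScale < 1.0) {
    viewportScale += 0.01;
  }
  vrLayer.requestViewportScaling(viewportScale);

  // ... render each view as in the earlier examples ...
  vrSession.requestVRFrame().then(onDrawFrame);
}
```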
Been giving some consideration to the pattern that @NellWaliczek mentioned of trying to logically group values together. I feel like the Microsoft proposal is somewhat difficult to parse based on how many different interrelated types are at play, but I do like the spirit of the change and (as I said) feel my proposal with index-related values is a bit loose. So here's yet another variant to consider:

```webidl
//=========
// Partial IDL
//=========
partial interface VRSession : EventTarget {
  VRDevicePose getDevicePose(VRCoordinateSystem coordinateSystem);
};

interface VRDevicePose {
  readonly attribute Float32Array poseModelMatrix;
  Float32Array getViewMatrix(VRView view);
};

enum VREye {
  "left",
  "right"
};

interface VRView {
  readonly attribute VREye eye;
  readonly attribute Float32Array projectionMatrix;
};

interface VRWebGLLayerView : VRView {
  readonly attribute WebGLFramebuffer framebuffer;
  readonly attribute long framebufferWidth;
  readonly attribute long framebufferHeight;
  readonly attribute long viewportX;
  readonly attribute long viewportY;
  readonly attribute long viewportWidth;
  readonly attribute long viewportHeight;
};

dictionary VRWebGLLayerInit {
  double framebufferScale = 0.0;
  boolean alpha = true;
  boolean depth = true;
  boolean stencil = false;
  boolean antialias = true;
};

[Constructor(VRSession session, VRWebGLLayerInit layerInitDict)]
interface VRWebGLLayer : VRLayer {
  attribute FrozenArray<VRWebGLLayerView> views;
  readonly attribute double framebufferScale;
  readonly attribute boolean alpha;
  readonly attribute boolean depth;
  readonly attribute boolean stencil;
  readonly attribute boolean antialias;
};
```

```js
//==============
// Code Example
//==============
function OnDrawVRFrame() {
  let pose = vrSession.getDevicePose(vrFrameOfRef);
  for (let view of vrLayer.views) {
    gl.bindFramebuffer(gl.FRAMEBUFFER, view.framebuffer);
    gl.viewport(view.viewportX, view.viewportY, view.viewportWidth, view.viewportHeight);
    drawScene(view.projectionMatrix, pose.getViewMatrix(view), view.eye);
  }
  vrSession.requestVRFrame().then(OnDrawVRFrame);
}
```

Core ideas here: The layer is really what is going to determine the views being used. Maybe we make

On the

I'm still not 100% satisfied with this approach. I don't like forcing repeated calls to

Maybe this represents a decent mid-point between the other two proposals?
And in case you're not sick of hearing from me yet, I've also been giving some thought to how this can extend to take advantage of a multiview extension. In my opinion using said extension needs to be a very explicit step taken by the developer and not something that we attempt to abstract away too much. As a result, it seems like maybe a new layer type is appropriate?

```webidl
interface VRView {
  readonly attribute VREye eye;
  readonly attribute Float32Array projectionMatrix;
};

[Constructor(VRSession session, VRWebGLLayerInit layerInitDict)]
interface VRWebGLMultiviewLayer : VRLayer {
  attribute FrozenArray<VRView> views;
  readonly attribute WebGLFramebuffer framebuffer;
  readonly attribute long framebufferWidth;
  readonly attribute long framebufferHeight;
  readonly attribute double framebufferScale;
  readonly attribute long viewportX;
  readonly attribute long viewportY;
  readonly attribute long viewportWidth;
  readonly attribute long viewportHeight;
  readonly attribute boolean alpha;
  readonly attribute boolean depth;
  readonly attribute boolean stencil;
  readonly attribute boolean antialias;
};
```

Things to note: The viewport has been moved out of the

If the session doesn't support multiview then trying to construct one of these layers would throw an exception.

I don't suggest that we include this as a core part of the 2.0 spec, but I do think it's useful to visualize what it could look like in order to ensure we're enabling enough flexibility.
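For illustration only, a render loop against this hypothetical layer type might collapse to a single framebuffer bind and one multiview draw per frame (`drawSceneMultiview` is an assumed helper that uploads the per-view matrices as uniform arrays):

```js
function OnDrawVRFrame() {
  let pose = vrSession.getDevicePose(vrFrameOfRef);
  if (pose) {
    gl.bindFramebuffer(gl.FRAMEBUFFER, vrLayer.framebuffer);
    gl.viewport(vrLayer.viewportX, vrLayer.viewportY,
                vrLayer.viewportWidth, vrLayer.viewportHeight);
    // A single draw covers every view; the shader selects the per-view
    // matrices by view index (e.g. gl_ViewID_OVR).
    drawSceneMultiview(vrLayer.views, pose);
  }
  vrSession.requestVRFrame().then(OnDrawVRFrame);
}
```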
Thanks for all the awesome feedback, folks! I'm working on incorporating suggestions and should have an update to the PR available shortly! Please stand by!
- Added an optional bufferScalingFactor to the VRWebGLLayerInit dictionary that allows developers to request a different size buffer from the default supplied by each UA
- Added alpha as an option to VRWebGLLayerInit
- Addressed concerns about GC overhead
- Clarified the use of the multiview extension for side-by-side rendering
…czek/FramebufferAndNViewRenderLoop
Thanks for the update! Compiling some feedback on it now. One quick note for others who may want to review: for some reason the links to the new commits from my email all hit an error page, but if you view the "changes" tab directly on the issue page it shows up fine. Yay GitHub. 😒
Latest update still exposes only one framebuffer for the layer, so I assume that means Microsoft is OK with side-by-side being the default? I'm good with it too if that's the case. It simplifies the basic render loop and leaves the added complexity to devs doing optimizations.

I'm a bit sceptical of a side-by-side multiview extension, but I think it's something that I'm better off discussing with the individuals actually working on the extension. Regardless, it does seem like we probably want to expose a way to get the framebuffer back as an actual texture array if supported, even if it's treated as an optional feature. Probably just:

```webidl
dictionary VRWebGLLayerInit {
  // Other stuff
  boolean multiview = false; // or textureArray?
};
```

And the layer will indicate if it was actually created that way. I'm fairly sure Microsoft suggested this approach on our last call, but it didn't make it into the GH issue? For the record this is pretty much how Daydream does it anyway. Given that approach, I'd even be fine if this was a WebGL2-only feature for simplicity's sake. (One more reason for content to upgrade when available.)
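A sketch of the implied usage, assuming the layer reports the result via a `multiview` attribute mirroring the init option:

```js
// Request a texture-array layer, but be prepared for the UA to decline.
let vrLayer = new VRWebGLLayer(vrSession, { multiview: true });

if (vrLayer.multiview) {
  // Single-pass multiview rendering path.
} else {
  // Fall back to the regular per-view render loop.
}
```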
One thing I'm still lukewarm on is the idea of passing the coord system for both the pose matrix and view matrices separately. I think I can see what the aim is here, but I'm concerned about scalability. If we determine, for instance, that the various velocities and accelerations from 1.1 are something we want to add back in, it starts to look a bit silly:

```webidl
interface VRDevicePose {
  readonly attribute FrozenArray<VRView> views;
  Float32Array? getPoseModelMatrix(VRCoordinateSystem coordinateSystem);
  Float32Array? getLinearVelocity(VRCoordinateSystem coordinateSystem);
  Float32Array? getLinearAcceleration(VRCoordinateSystem coordinateSystem);
  Float32Array? getAngularVelocity(VRCoordinateSystem coordinateSystem);
  Float32Array? getAngularAcceleration(VRCoordinateSystem coordinateSystem);
  FrozenArray<WebGLViewport> getLayerViewports(VRWebGLLayer layer);
};
```

That's part of why I suggested the structure I did. I also continue to like the idea of possibly allowing a different number of views per layer, but that's pretty feature request-y and probably not worth bending over backwards to support.

It also seems a bit odd to have a WebGL viewport returned by the generic pose. We really ought to be pushing to keep everything tied to a specific graphics API local to the layer. Fortunately I think that just means moving

Finally, I think I'm coming around to the idea of a

So given that, here's a slightly different proposal:

```webidl
//=============
// Partial IDL
//=============
partial interface VRSession {
  Promise<VRFrame> requestVRFrame(/* Possible per-frame options here? */);
};

interface VRFrame {
  readonly attribute FrozenArray<VRView> views;
  VRDevicePose? getDevicePose(VRCoordinateSystem coordinateSystem);
};

interface VRView {
  readonly attribute VREye eye;
  readonly attribute Float32Array projectionMatrix;
};

interface VRDevicePose {
  readonly attribute Float32Array poseModelMatrix;
  // Future site of velocity and acceleration attributes?
  Float32Array getViewMatrix(VRView view);
};

partial interface VRWebGLLayer {
  VRWebGLViewport getViewport(VRView view);
};
```

```js
//==============
// Code Example
//==============
function OnDrawVRFrame(vrFrame) {
  let pose = vrFrame.getDevicePose(vrFrameOfRef);
  gl.bindFramebuffer(gl.FRAMEBUFFER, vrLayer.framebuffer);
  for (let view of vrFrame.views) {
    let viewport = vrLayer.getViewport(view);
    gl.viewport(viewport.x, viewport.y, viewport.width, viewport.height);
    drawScene(view.projectionMatrix, pose.getViewMatrix(view), view.eye);
  }
  vrSession.requestVRFrame().then(OnDrawVRFrame);
}
```
One minor update: After talking it over with another Googler, I've had second thoughts on passing a canvas as an arg for session creation. An example to demonstrate why: if I'm trying to build any sort of tooling that injects, say, a perf HUD layer, it's far easier to maintain a separate context than it is to try and manage state in someone else's without breaking it. So instead of doing what I had originally suggested:

```js
vrDevice.requestSession({ canvas: glCanvas }).then(session => {
  vrLayer = new VRWebGLLayer(session);
});
```

I'm onboard with what Microsoft proposed:

```js
vrDevice.requestSession().then(session => {
  vrLayer = new VRWebGLLayer(session, glCanvas);
});
```
Re: Multiview and side-by-side: It is possible to implement a side-by-side multiview extension using a single framebuffer with multiple viewports into that framebuffer. On the CPU side, the browser instances the web developer's geometry across all of the viewports, passing the viewports to RSSetViewports. The instance data are the viewport ids. In the vertex shader, the web developer checks the gl_ViewID and takes appropriate action, same as today's multiview extension. In the geometry shader, SV_ViewportArrayIndex is available to direct primitives to viewports, which the browser would do based on the instance id. D3D also has a capability check you can perform that allows you to use SV_ViewportArrayIndex in the vertex shader for better performance. Directing vertices to viewports in the vertex shader is something the browser would do as part of GLSL->HLSL shader generation. (See the shader sketch after this comment.)

Re: "This is pretty much how Daydream does it anyways": Are you saying that Daydream prefers texture arrays for VR? I was under the impression that Daydream preferred side-by-side. Or am I confusing that with IPC limitations for Chrome's multiprocess architecture?

Re: requestViewportScaling: Viewport scaling is a mandatory thing apps have to do in Windows Mixed Reality (WMR). With WebVR v1.1, we scale down the web developer's full-resolution content behind their back when it kicks in, only to have WMR scale it back up when it draws to the HMD. For WebVR v2.0, we would like the web developer to be in full control of viewport scaling for best performance. We're open to better suggestions for API ergonomics.

Re: Multiple getXXXX functions in VRDevicePose: I think your suggestion of VRFrame has merit.

Re: WebGL viewport: I agree, in principle, that we should not tie a specific graphics API to the layer. However, I fear that putting getViewport on VRWebGLLayer means web developers will fail to check it every frame, leading to problems with viewport scaling in WMR. That is why the proposal puts it on the "frame" object for better clarity. Is there an alternate "rectangle" web object we can use instead of WebGLViewport?

Re: Specifying a Canvas with requestSession: I am fine with having Canvas resource domains be attached to layers instead of sessions. However, having the VRWebGLLayer constructor be the place that fires context lost because you're on the wrong adapter seems suboptimal. Moving forward, if Canvas resource domains are tied to layers, then specifying layer definitions on requestSession seems like a better approach. This is probably something we should discuss in the multi-gpu #223 issue. Specifying the Canvas in the constructor is something we left from the original explainer to avoid putting too many things in the pull request.
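For reference, a minimal sketch of the kind of view-indexed vertex shader being described, written against the proposed OVR_multiview GLSL; the uniform layout here is an assumption, only `num_views` and `gl_ViewID_OVR` come from the extension:

```js
const multiviewVS = `#version 300 es
#extension GL_OVR_multiview : require
layout(num_views = 2) in;
in vec4 position;
// One view-projection matrix per view; selected by the built-in view id.
uniform mat4 viewProjection[2];
void main() {
  gl_Position = viewProjection[gl_ViewID_OVR] * position;
}`;
```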
Feeling under the weather today, so I'm just focusing on some clarifying comments here.
Sorry, I made this more confusing than intended: Daydream doesn't really care about side-by-side vs. multi-texture, but it does have a special option to allocate a texture array for explicit use with OVR_multiview. The side-by-side preference is a Chrome-specific thing that comes from the Android IPC mechanism for textures being a real pain to work with and properly sync.
To make sure I have a clear idea of what's going on here: I'm under the impression that the WinMR system tweaks the viewports used frame-to-frame (for accuracy/performance?) and that the user can also request a scale factor on top of that for further performance control, which appears to be the only way for developers to control the target output resolution? (IE: buffer sizes are fixed by the system.)
It was my hope that making it so you had to have a VRView in order to query a viewport would imply the relationship between the frame and the viewports. And honestly, no matter where we put these, it's not going to stop an over-eager developer from trying to be clever about caching them. I do feel getting the viewports from the layer localizes the data better and doesn't make any assumptions about future graphics APIs. (Maybe WebGLNext will use UV-space viewports for some stupid reason.) For the sake of nudging people towards the right pattern, I'm totally up for a console warning:
😄
Brandon, I hope you get better soon if you haven't already.
In WinMR, the developer does not have control over the target output texture size. However, they can request viewport scaling into the target texture. The resulting viewport is influenced by both the developer's request and the whims of the system. Failure to do as you're told will result in your content getting cut off when it is scaled to the HMD's output resolution. That's why it is important that the viewport information be close to any "frame data" object. But I hear what you're saying about spewing dire warnings to the dev console.

Nell gets back from vacation on Monday. Let's discuss with her upon her return.
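A sketch of that constraint in practice, using the `getViewport` shape from earlier in the thread; wherever the method finally lives, the key point is that it is re-queried every frame:

```js
function onDrawFrame(vrFrame) {
  let pose = vrFrame.getDevicePose(vrFrameOfRef);
  gl.bindFramebuffer(gl.FRAMEBUFFER, vrLayer.framebuffer);
  for (let view of vrFrame.views) {
    // Do not cache this across frames: the system may shrink the viewport
    // at any time, and content rendered to stale values gets cut off.
    let viewport = vrLayer.getViewport(view);
    gl.viewport(viewport.x, viewport.y, viewport.width, viewport.height);
    drawScene(view.projectionMatrix, pose.getViewMatrix(view), view.eye);
  }
  vrSession.requestVRFrame().then(onDrawFrame);
}
```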
First up, @toji had the good idea to divide the data that changes each presentation frame from the data that changes each presentation frame and is also dependent on the coordinate system. Thanks!

Second, in an attempt to scope down the changes to the smallest tightly-coupled cohesive set necessary to merge, I've reverted back to the Promise pattern rather than the RAF pattern. Once we've gotten closure on the rest of this design, I'll open a separate issue to discuss this specific topic, since I'm still on the fence.

Third, after poking at it a bit more, there may actually be a path to using the currently proposed multiview extension (https://www.khronos.org/registry/webgl/extensions/proposals/WEBGL_multiview/) rather than a custom side-by-side one. That said, there are a few things to note:

- The current multiview extension is designed such that developers attach textures to a framebuffer to create a multiview-aware framebuffer. In WebVR 2.0, the UA should be providing the optimal framebuffer for the hardware it is targeting rather than the other way around. As a result, we can achieve the desired outcome by adding an optional "multiview" attribute to the VRWebGLLayerInit which will construct the appropriate framebuffer for multiview rendering. This framebuffer is then passed into `bindFramebuffer()` as usual and it behaves as if it was constructed using the multiview extension.
- Because the array vs. side-by-side decision is abstracted from the web developer, we open up the option for the UA to pick the optimal texture format. Yay!
- The main pain point with this design is that there is no easy way to restrict rendering to a subset of the views (say, just the right eye) because the framebuffer isn't populated by the web developer. In order for this design to be feasible, that will need to be addressed somehow. :-/

@RafaelCintron has started a discussion with Olli (who is proposing the WEBGL_multiview extension in question) and we'll see where it goes.
Lastly, it's seeming like the use of dedicated framebuffers and a multiview extension both have support from the group. Double Yay!

There are still some remaining important issues left to discuss, such as GC optimizations, further unifying the rendering loop, etc. However, I'd like to know if folks are comfortable with using separate GitHub issues for them so the core of this structural change can be merged. What do you think?
This is looking good, thanks! I have multiple inline comments, but I feel like we're getting pretty close on this.
> `VRCanvasLayer.commit` returns a promise, which resolves when the VR compositor is ready to draw a new frame. This enables it to act as an analog for requestAnimationFrame which runs at the device's refresh rate. If the layer's source is not dirty when `commit()` is called no new imagery is sent to the compositor but a valid promise is still returned. If `commit()` is called on a canvas layer multiple times in the course of a single frame only the image from the first commit is used, and the subsequent ones are discarded but a valid promise is still returned.
>
> During exclusive sessions:
>
> - The UA runs a rendering loop at the device's native refresh rate
> - `VRWebGLLayer.framebuffer` is a custom framebuffer similar to a canvas's default frame buffer. Using framebufferTexture2D, framebufferRenderbuffer, getFramebufferAttachmentParameter, and getRenderbufferParameter will all flag INVALID_OPERATION. Additionally, attempting to render to this framebuffer outside of the `requestVRPresentationFrame()` callback will flag INVALID_OPERATION.
> attempting to render to this framebuffer outside of the `requestVRPresentationFrame()` callback will flag INVALID_OPERATION.

Would it be cleaner/easier to make the error happen when attempting to bind instead of draw? (Granted, we could probably lean on the incomplete framebuffer language for a drawing restriction.)

Also, I feel like any bound VRWebGLLayer framebuffers should be unbound when the `requestVRPresentationFrame` callback exits. Is there an argument against that?
We could add an error in bind. However, that is not going to solve the case where the developer first binds+draws successfully during the promise callback but subsequently tries to draw in a non-WebVR callback such as an onload handler. Here the bind was previously successful; it's the draw that is incorrect.

I would be fine with communicating the error using INVALID_FRAMEBUFFER_OPERATION instead of a generic INVALID_OPERATION.
While this conversation is still in progress, I haven't made any changes.
explainer.md (Outdated)

```webidl
boolean stencil = false;
boolean alpha = true;
boolean multiview = false;
double bufferScaleFactor;
```
Perhaps `framebufferScaleFactor` instead, for consistency?
explainer.md (Outdated)

```webidl
readonly attribute boolean alpha;
readonly attribute boolean multiview;

readonly attribute WebGLFrameBuffer frameBuffer;
```
We'll want to report the framebuffer's width and height here, and probably the `framebufferScaleFactor` that was ultimately settled on as well. OpenGL ES (and thus WebGL) doesn't actually have the ability to query `GL_TEXTURE_WIDTH/HEIGHT`, and even if it did there are complications with querying the size of an arbitrary framebuffer attachment.
I went back and forth on including the framebufferScaleFactor. Ultimately, I couldn't figure out what developers would use the information for, so I left it off. If you have a suggestion for how this would be used, I'd be happy to add it in.
```webidl
readonly attribute Float32Array leftViewMatrix;
VRDevicePose? getDevicePose(VRCoordinateSystem coordinateSystem);

VRWebGLViewport getViewport(VRWebGLLayer layer, VRView? view);
```
Still a bit nervous about this, though I understand the desire to tightly couple the viewports with the frame. In the future I guess we just overload this with `VRWebGLNextViewport getViewport(VRWebGLNextLayer ...);`? Still feels like concepts that belong on the various layer types.
@travisleithead suggested that the overload model was applicable here. Though an alternative could be (type name feedback welcome!):

```webidl
interface VRLayerViewTarget {
};

interface VRWebGLLayerViewTarget : VRLayerViewTarget {
  readonly attribute long viewportX;
  readonly attribute long viewportY;
  readonly attribute long viewportWidth;
  readonly attribute long viewportHeight;
};

partial interface VRPresentationFrame {
  VRLayerViewTarget getLayerViewTarget(VRLayer layer, VRView? view);
};
```
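For illustration, how that alternative might be consumed in a render loop (names taken from the IDL above; the rest follows the earlier examples):

```js
function onDrawFrame(vrFrame) {
  let pose = vrFrame.getDevicePose(vrFrameOfRef);
  gl.bindFramebuffer(gl.FRAMEBUFFER, vrLayer.framebuffer);
  for (let view of vrFrame.views) {
    // For a VRWebGLLayer this would return a VRWebGLLayerViewTarget.
    let target = vrFrame.getLayerViewTarget(vrLayer, view);
    gl.viewport(target.viewportX, target.viewportY,
                target.viewportWidth, target.viewportHeight);
    drawScene(view.projectionMatrix, pose.getViewMatrix(view), view.eye);
  }
  vrSession.requestVRFrame().then(onDrawFrame);
}
```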
I think I prefer the overloaded version to introducing more object hierarchy. For the sake of this PR I'd say just go with it as is, and we can have a more targeted discussion in a separate issue if needed.
Sounds good!
explainer.md (Outdated)

```js
let pose = vrFrame.getDevicePose(vrFrameOfRef);
gl.bindFramebuffer(vrSession.baseLayer.framebuffer);

if (vrSession.baseLayer.multiview) {
```
Given that this is opt-in, can we move this code path and the explanation around it down to the advanced section?
Sure!
explainer.md (Outdated)

> Once the session is started some setup must be done to prepare for rendering. The content to present to the device is defined by a [`VRLayer`](https://w3c.github.io/webvr/#interface-vrlayer). In the initial version of the spec only one layer type, `VRCanvasLayer`, is defined and only one layer can be used at a time. This is set via the `VRSession.baseLayer` attribute. (`baseLayer` because future versions of the spec will likely enable multiple layers, at which point this would act like the `firstChild` attribute of a DOM element.)
>
> Developers may optionally take advantage of the [WEBGL_multiview extension](https://www.khronos.org/registry/webgl/extensions/proposals/WEBGL_multiview/) to both WebGL 1.0 and WebGL 2.0 for optimized multiview rendering. If the UA is unable to support the multiview option (e.g. the supplied context does not support this extension), the `VRWebGLLayer` constructor will throw an exception to ensure developers do not try to use incompatible shaders. Additionally, developers can query the `VRWebGLLayer.multiview` attribute to determine the framebuffer's configuration.
I'm not sure I'm onboard with the idea of the constructor throwing because of an option like this. We wouldn't, for example, expect `antialias: true` to throw, even though it's similarly not supported everywhere. In reality any apps that do choose to make use of the multiview capabilities will still have a fallback path (as is demonstrated in the code example below), so allowing `multiview: true` to act as a hint which can be ignored or overridden when necessary seems like a more productive path.
Good call.
explainer.md (Outdated)

```diff
  // Start the render loop
- vrSession.commit().then(OnFirstVRFrame);
+ vrSession.requestVRPresentationFrame().then(onDrawFrame);
```
Not a big deal, but is there a reason for preferring `VRPresentationFrame` instead of just `VRFrame`? I'm not overtly concerned about verbosity, but it's nice to cut down on it when reasonable.
Yeah it is definitely longer :-/ I was trying to avoid confusion between VRFrameOfReference and VRFrame. Thoughts?
Ah, I see. I'm personally not too concerned about there being much confusion between the two but I'll leave this at your discretion for now. Maybe TAG will have something to say about it?
I left it for now. Let's see what the TAG says :)
explainer.md (Outdated)

```diff
@@ -532,26 +533,40 @@ interface VRSession : EventTarget {
   attribute EventHandler onfocus;
   attribute EventHandler onresetpose;

-  VRSourceProperties getSourceProperties(optional double scale);
+  readonly attribute VRDevicePose pose;
```
Is this a typo?
Erk. merge failure
And to address @NellWaliczek's previous comment more directly: I'm personally fine with earmarking certain topics (like promise vs. rAF-style render loops) as separate issues instead of piling onto this already fairly large change. For example, there's already some language here that would be replaced by concepts in #224, but those are better handled as part of that issue, since it's not part of the core of what this PR is trying to address.
Ok, let me see if I can get an update ready before we get yet another merge conflict. wompwompwomp :)
Yes, let's try and refrain from pushing any other changes to the explainer on the master branch till we can land this. I'd really like to get it in ASAP, since a lot of other changes will want to build off of this pull request.
- Changed the VRWebGLLayer constructor to fall back to non-multiview when not supported instead of throwing
- Moved multiview explanation and sample to the advanced section
- Cleaned up spurious pose attribute
- Renamed VRWebGLLayerInit value from bufferScaleFactor to framebufferScaleFactor
- Added readonly attributes to VRWebGLLayer for the framebufferWidth and framebufferHeight
Clean up a few merge conflicts and typos
Hey @toji, I think I've addressed nearly all of the issues you brought up (with the one exception of the conversation you and @RafaelCintron are still in the middle of). In addition, the following topics were not fully addressed as part of this pull request and may need new issues filed to track them once it has been merged:
If you need any other changes, let me know and I'll try to get to them ASAP!
explainer.md (Outdated)

```js
  }
}

function drawMultiviewScene(sequence<VRView> views, VRPose pose) {
```
As a lover of typed languages, I feel for you here. 😉 But... this IS JavaScript, so let's drop the arg types.
goshdarnit >_<
explainer.md (Outdated)

```js
  // Draw Scene
}

function drawScene(VRView view, VRPose pose) {
```
This isn't actually different from the above definition, right? If so, let's just omit it here. If someone wants to know what it does they'll search the text for `drawScene` and end up at the right place.
Good call.
```js
layer.requestViewportScaling(0.5);

// Register for next frame callback
vrSession.requestVRPresentationFrame().then(onDrawFrame);
```
This looks a little weird, calling back into a function that does nothing but set the viewport scaling. I'd be fine with the code sample just being the `requestViewportScaling` line and the associated comment, or changing the name of this function to `setViewportScale` or something similar.
By putting this call in the frame's promise handler, I was trying to illustrate that this is something that's OK (and somewhat encouraged) to change regularly. What about this:

```js
function onDrawFrame() {
  // Draw the current frame

  // In response to a performance dip, request the viewport be restricted
  // to a percentage (ex: 50%) of the layer's actual buffer. This change
  // will apply to subsequent rendering frames
  layer.requestViewportScaling(0.5);

  // Register for next frame callback
  vrSession.requestVRPresentationFrame().then(onDrawFrame);
}
```
Better
There are definitely some followup conversations I want to have about a few points, but I'd be comfortable merging this so that we can establish the most important parts and go after the details in more targeted issues/PRs. I'll allow some time for anyone else who wants to comment to do so before merging.

GitHub says there's a rebase conflict here, though I don't know what it would be. :( Since you'll have to resolve that anyway, I've left a few minor nit comments that you may want to tackle at the same time. None of them are about the functionality, though.
- Removes function parameter types
- Adds a comment to the requestViewportScaling sample to clarify usage
- Removes duplicate of onDrawScene
Ok, merged! I'll file issues for the following topics shortly
Thanks for all the help, everyone!!
Awesome! Thanks! This is a major step towards getting the API to a stable place.
Hey folks,
@RafaelCintron, other Microsoft folks, and I have been batting around some ideas about unifying the rendering loop and performance optimizations. After much discussion, we’d like to propose the following:
Doing this design exploration has also led us to the following questions…
Thanks for being willing to entertain such a significant change!
^_^
Nell