
Determine presentation types that are in scope for 2.0 #222

Closed · toji opened this issue Apr 19, 2017 · 3 comments

toji (Member) commented Apr 19, 2017

This has come up a few times during spec discussions, so I'd like to formalize it a bit by outlining the potential presentation modes for WebVR and the requirements for each, and using that to inform which ones we want to focus on for the initial release. It's entirely possible I've overlooked something, so please respond with your own modes if you feel anything is missing.

Note that having a mode listed below does not imply that it will be supported in WebVR 2.0, simply that it should be taken into consideration.

Immersive Presentation

Description: What most people think of as the standard "VR mode". Page renders the entire viewport, generating every pixel the user sees. Rendering is responsive to head tracking to create the impression of a fully immersive scene that completely surrounds the user.

Requirements: See explainer :) This is a pretty well-understood problem, so I'm not going to repeat all the requirements here.
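For reference, here's roughly what that looks like with a WebVR 1.1-style API (the 2.0 API may well diverge from this shape); `webglCanvas` and `drawScene` are hypothetical app-side names:

```js
// Minimal immersive presentation loop, WebVR 1.1 style.
// webglCanvas is the presenting canvas; drawScene(view, proj, viewport)
// is a stand-in for the app's own rendering.
const frameData = new VRFrameData();

navigator.getVRDisplays().then((displays) => {
  const vrDisplay = displays[0];
  // requestPresent must generally be called from a user gesture.
  vrDisplay.requestPresent([{ source: webglCanvas }]).then(() => {
    function onVRFrame() {
      vrDisplay.requestAnimationFrame(onVRFrame);
      vrDisplay.getFrameData(frameData); // latest head-tracked matrices

      // Render side-by-side stereo: left eye, then right eye.
      const w = webglCanvas.width * 0.5;
      const h = webglCanvas.height;
      drawScene(frameData.leftViewMatrix, frameData.leftProjectionMatrix, [0, 0, w, h]);
      drawScene(frameData.rightViewMatrix, frameData.rightProjectionMatrix, [w, 0, w, h]);

      vrDisplay.submitFrame(); // hand the frame to the VR compositor
    }
    vrDisplay.requestAnimationFrame(onVRFrame);
  });
});
```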

Mirroring

Description: If the VR device has an external display (such as a desktop monitor) it's frequently desirable to show, in some capacity, what the user is seeing in the headset.

Requirements: For simple, single-layer WebGL scenes, mirroring can be as simple as ensuring the rendered content can be copied manually to the canvas' default backbuffer, or that the default backbuffer isn't cleared when it is also the mechanism for drawing to the VR display. For multi-layer content, we would want the API to provide the composited VR content back to the page, either as a texture the developer draws manually or by having the API automatically draw the mirrored content into a given canvas. Some native APIs (such as OpenVR) provide a way to retrieve the composited mirror texture; others may not, which could force VR compositing back into the browser instead of using the native API mechanisms for accurate mirroring.

Quality and accuracy are negotiable when mirroring, though, since all that's really required is a general sense of what the user in VR is seeing. Mirrored content could be higher latency, drop frames, show the content for a single eye or both eyes, be cropped to show a more natural field of view, or render at a lower resolution for better performance.
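For the single-layer case, a low-effort mirror is just a copy from the presenting canvas into a visible one after the frame is submitted. A sketch, assuming the backbuffer is still readable at that point (exactly the guarantee the requirements above would need to pin down); `mirror` is a hypothetical page canvas:

```js
// Single-eye mirror: copy the left half of the presenting WebGL canvas
// into a visible 2D canvas. Quality is negotiable, so cropping to one eye
// and lagging a frame behind is acceptable here.
const mirrorCtx = document.getElementById('mirror').getContext('2d');

function drawMirror() {
  const w = webglCanvas.width * 0.5;
  const h = webglCanvas.height;
  // Assumes the backbuffer hasn't been cleared since submitFrame();
  // without that guarantee this drawImage() would produce a blank copy.
  mirrorCtx.drawImage(webglCanvas, 0, 0, w, h,
                      0, 0, mirrorCtx.canvas.width, mirrorCtx.canvas.height);
}
```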

Third Person Spectator

Description: Also on VR devices with an external display, it's sometimes more appropriate to show a third-person view of the scene that the VR user is experiencing in first person, often with a visualization of the VR device's position and orientation. This can be useful for giving controls to someone outside the headset who is driving a demonstration for the user in VR, which is why it's sometimes referred to as a "Demoer" or "Dungeon Master" view.

Requirements: The app needs to be able to poll the device pose, but otherwise the API doesn't need to provide much, as this is almost entirely in the developer's hands. The API does need to avoid interfering with rendering to a canvas' default backbuffer using the same GL context that renders the VR scene.
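A sketch of what that leaves to the developer, assuming a WebVR 1.1-style pose-polling API and treating `drawScene`, `drawHeadsetModel`, and the spectator camera matrices as app code:

```js
// Third-person spectator: render the scene from a fixed page-side camera
// into the default backbuffer, with a proxy model drawn at the headset pose.
function onSpectatorFrame() {
  window.requestAnimationFrame(onSpectatorFrame);
  vrDisplay.getFrameData(frameData);
  const pose = frameData.pose;

  gl.bindFramebuffer(gl.FRAMEBUFFER, null); // canvas' default backbuffer
  drawScene(spectatorViewMatrix, spectatorProjectionMatrix,
            [0, 0, gl.drawingBufferWidth, gl.drawingBufferHeight]);

  // Visualize the VR device's position and orientation in the scene.
  if (pose.position && pose.orientation) {
    drawHeadsetModel(pose.position, pose.orientation);
  }
}
```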

Magic Window (Simple)

Description: Application renders a mono view of the scene to a canvas on a page in a standard 2D browser, using the device pose to transform the view. The most common variant uses a phone's accelerometer to rotate the view of a scene. It may also refer to the user holding and moving a desktop headset to transform the mono view in a canvas, which is handy for debugging but less generally useful for average users.

Requirements: As with the third-person view, apps need to be able to poll the device pose, but that's about it. The API can make this easier by providing appropriate view and projection matrices, which should be based on the canvas dimensions rather than the device optics, but that's strictly a convenience. In the most user-friendly scenario this could be handled with the same rendering path as immersive mode. If multi-layer rendering is to be exposed to this form of magic window, the VR compositor needs to be able to provide the composited frame to the page compositor, which runs into some of the same issues as mirroring, but with stricter accuracy requirements.
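A sketch of that "strictly a convenience" math done by hand, using the pose orientation and the canvas aspect ratio (gl-matrix assumed for `mat4`/`quat`; `vrDisplay`, `frameData`, and `drawScene` as in the earlier sketches):

```js
// Simple magic window: mono view rotated by the device orientation,
// with the projection derived from the canvas, not the HMD optics.
const projection = mat4.create();
const view = mat4.create();

function onMagicWindowFrame() {
  window.requestAnimationFrame(onMagicWindowFrame);
  vrDisplay.getFrameData(frameData);

  const aspect = canvas.width / canvas.height;
  mat4.perspective(projection, Math.PI / 3, aspect, 0.1, 1000.0);

  // The view matrix is the inverse of the pose orientation.
  const orientation = frameData.pose.orientation || quat.create();
  mat4.fromQuat(view, orientation);
  mat4.invert(view, view);

  drawScene(view, projection, [0, 0, canvas.width, canvas.height]);
}
```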

Magic Window (Punch-through)

Description: I'm not sure how to better describe this mode. Application renders a mono or stereo view of the scene to a canvas on a 2D page in an environment where the user's viewpoint can be tracked relative to the page. Examples include a 2D page viewed from within VR or a ZSpace-like device. Given appropriate projection and view matrices, the app could render a scene that responds to head movement and appears to exist behind the page, viewed through a more literal "magic window" element.

Requirements: The API needs to know where the output canvas element is relative to the user's head in order to compute the appropriate frustums and viewing angles, which can then be provided to the developer as projection and view matrices. In the viewed-within-VR scenario, the API should ideally also know the size at which the output canvas will be rendered and adjust the target viewports accordingly to avoid rendering more content than necessary.
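The frustum in question is the classic off-axis (asymmetric) projection. A sketch of how it could be computed, assuming the head position is already expressed in the window's coordinate space (window centered at the origin in the z=0 plane, physical dimensions in meters, gl-matrix for the math; all names here are illustrative):

```js
// Off-axis frustum for a punch-through window. head is [x, y, z] relative
// to the window center; windowWidth/windowHeight are physical sizes.
function punchThroughProjection(out, head, windowWidth, windowHeight, near, far) {
  const scale = near / head[2]; // project window edges onto the near plane
  const left   = (-windowWidth  * 0.5 - head[0]) * scale;
  const right  = ( windowWidth  * 0.5 - head[0]) * scale;
  const bottom = (-windowHeight * 0.5 - head[1]) * scale;
  const top    = ( windowHeight * 0.5 - head[1]) * scale;
  return mat4.frustum(out, left, right, bottom, top, near, far);
}

// The matching view matrix just moves the eye to the tracked head position.
const view = mat4.create();
mat4.fromTranslation(view, headPosition);
mat4.invert(view, view);
```

With this pair, geometry placed at negative z appears to sit behind the page, and it stays locked in place as the head moves, which is what sells the "window" illusion.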

@toji toji added this to the 2.0 milestone Apr 19, 2017
@toji toji closed this as completed Apr 19, 2017
@toji toji reopened this Apr 19, 2017
delapuente commented Jul 24, 2017

I would like to suggest "Head Tracking" as a replacement for the "Magic Window (Punch-through)" item.

Edit: Or "Inverse Head Tracking", since the device is not actually tracking the head but inferring its position from the device's positional differences.

Martin-Pitt commented Jul 24, 2017

I think a "Magic Window (Punch-through)" is better known as a Portal.

Portals are common in sci-fi and fantasy settings, in movies and across games (Valve's Portal 2, for example; even the name implies it). (Portal as in a magic window into another setting, but attached/linked/positioned statically in the world, using positional differences to observe the other location.)

I think the mechanics of a portal are well enough understood that the name explains this mode well. E.g. moving your head around to look through the screen into the other location, rather than having to move the screen to look (as is the case in the magic window). Alternatively, as in Portal 2, being able to stick an object through a portal and see it poking out the other side. (E.g. not necessarily a flat surface to look into, but something that can extend outward.)

toji (Member, Author) commented May 16, 2018

Not sure this needs to be an open "issue" any more.

@toji toji closed this as completed May 16, 2018
@cwilso cwilso modified the milestones: Spec-Complete for 1.0, 1.0 Apr 30, 2019