
Representing hands as controller inputs #48

Closed · cwilso opened this issue Apr 2, 2019 · 10 comments

@cwilso (Member) commented Apr 2, 2019

We should track the need for representing hands as controllers, as this may be substantially different from gamepad exposure.

@AlbertoElias

+1 to this. One idea would be a higher-level API that assumes all inputs are hands, with a variety of controllers mapped to a hand depending on their capabilities. Similar to what Oculus does with Touch gestures.
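
A rough sketch of what that could look like (every name below is invented for illustration, not a proposal):

```ts
// Hypothetical sketch of a hands-only input model (all names invented).
// Every input surfaces as a hand; physical controllers synthesize hand
// state from whatever capabilities they actually have.
interface SynthesizedHandState {
  pinching: boolean; // e.g. derived from a Touch controller's trigger
  gripping: boolean; // e.g. derived from the grip button
}

interface HandsOnlyInputSource {
  readonly handedness: "left" | "right" | "none";
  readonly hand: SynthesizedHandState; // always present in this model
}
```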

@toji (Member) commented Apr 2, 2019

I agree with @AlbertoElias. When Nell and I have talked about this in the past we've broadly imagined that you'd have a nullable hand attribute on the XRInputSource that somehow represents the hand's pose and other relevant details. It's important to note that this value could be present in addition to (or in the absence of) the gamepad attribute, because the hand might be tracked several different ways. For optical/glove tracking you'd likely have just the hand with no gamepad, but for something like Oculus Touch/Valve's Knuckles (or should I say the Valve Index Controller now?) the gamepad would communicate the raw device state while the hand would give an interpretation of the pose of the hand holding the controller.
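
A rough sketch of that shape, with placeholder types rather than spec text:

```ts
// Sketch only: `XRHand` here is a placeholder for whatever pose
// representation we end up with, not a defined interface.
interface XRHand {
  // some representation of the hand's pose and other relevant details
}

interface XRInputSource {
  readonly gamepad: Gamepad | null; // raw device state, when a controller is held
  readonly hand: XRHand | null;     // interpreted hand pose, when tracked
}

// Optical/glove tracking:    hand !== null, gamepad === null
// Touch / Index controllers: both non-null; gamepad carries buttons and
//                            axes, hand the estimated pose of the hand
//                            holding the controller
```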

What exactly that pose LOOKS like, I have no idea. Hands are complicated, yo. Lotsa wiggly bits.

@Manishearth

I'm starting to look into this. Perhaps we should set up an incubation repo?

I'm imagining something where each hand contains a list of joints, and the device provides optional pose information for each one. We can either hardcode all possible joints and assume some won't be supported by some devices, or allow the device to specify the skeleton somehow.

There's a possibility that this too will need some kind of profiles support. I'm not sure this will be strictly necessary -- the number of joints in a hand is ultimately fixed, so we don't have to fear Oculus announcing Hands 2.0 at Connect 2025. However, different devices may orient the joints differently (e.g. one may have the x axis of the distal thumb phalange pointing along the bone, another the y axis). We can of course fix that in the specification.

However, it's also possible for devices with less hand support to expose "pseudojoints": for example, there is no real "palm" joint, but the non-thumb metacarpals don't move much relative to each other, so you may still want to expose a "palm" joint synthesized from them. Devices which don't support full hand tracking may also choose to average out the proximal/intermediate phalangeal joints to create a pseudojoint. Ideally we can get a first pass that doesn't require all this, and see what folks think.
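
As a strawman, that could look something like this (everything below is invented for discussion, not a proposal):

```ts
// Strawman sketch of the joint-list idea; the joint names, the "palm"
// pseudojoint, and getJointPose() are all made up for discussion.
interface XRSpace {} // stand-in for the core spec's XRSpace
interface XRPose {}  // stand-in for the core spec's XRPose

type HandJoint =
  | "wrist"
  | "thumb-phalanx-proximal" | "thumb-phalanx-distal" | "thumb-tip"
  | "index-phalanx-proximal" | "index-phalanx-intermediate"
  | "index-phalanx-distal" | "index-tip"
  // ...middle, ring, and little fingers elided...
  | "palm"; // pseudojoint synthesized from the mostly-rigid non-thumb metacarpals

interface Hand {
  // Fixed, fully enumerated joint set: a device returns null for joints
  // it can't track (or can only approximate), rather than defining its
  // own skeleton layout.
  getJointPose(joint: HandJoint, baseSpace: XRSpace): XRPose | null;
}
```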

cc @thetuvix

@Artyom17 commented Dec 3, 2019

Yes, please! CC @rcabanier

@fordacious

The problems described here are similar to those we are currently solving in OpenXR. We should attempt to align and not solve these problems in two different ways.

@Manishearth

Yeah, I've been looking at some of the ideas in the OpenXR space, and I'm pretty fond of the design. It would be good if we could get an incubation repo where we can start filing issues to discuss individual aspects of this.

cc @TrevorFSmith @cwilso: is there something that needs to be done to make an incubation repo?

@TrevorFSmith (Contributor)

@Manishearth I think there's sufficient momentum on the problem of representing hands in WebXR sessions to merit an incubation repo.

To create the repo I'll need at least one feature champion (ideally two, from different orgs) whose job it is to help drive the conversation. This doesn't necessarily need to be someone editing spec text, just someone who's energizing the work and doing things like representing it in calls and F2F meetings.

I'll also need general consensus on a name for the repo. I suggest "webxr-hand-input" since that constrains the work to that specific API and usage.

@TrevorFSmith (Contributor)

Also, I'm assuming that the repo will originally be hosted by the CG, since it's a feature that is currently undefined, but let me know if anyone feels it should start in the WG.

@Manishearth

I'd be willing to champion. I currently don't have the time to write a full explainer, but I am aware of some of the design tradeoffs we'll have to figure out, so I could file lots of issues and perhaps write a skeleton explainer. Maybe @thetuvix can be the second?

I was just going to call it "hands", but "webxr-hands-input" is probably what the spec would be called if it got to that point.

I had assumed that it would start in the CG; I didn't know incubations could start in the WG.

@TrevorFSmith (Contributor)

I have created the webxr-hands-input feature incubation repo, so I will close this issue.

Please continue the conversation in the Issues section of the new repo.
