Hand keypoint extraction #64
Comments
@AndrGolubkov Have you tried this?
@ajinkyapuar Yes, I noticed that the models are available. But the question here is precisely about obtaining an array of coordinates instead of the rendered output.
@AndrGolubkov It can't output the precise coordinates directly. Take a careful look at the graph below:
If you look at the hand tracking graph as @MedlarTea mentioned, you can see that the landmarks are output from the HandLandmarkSubgraph at the stream "LANDMARKS:hand_landmarks".
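For anyone looking for a concrete starting point, here is a minimal sketch of tapping that stream from the Android example. It assumes `processor` is the `FrameProcessor` created in the demo's MainActivity, that the graph's landmark output stream is named "hand_landmarks", and that the packet carries a single `NormalizedLandmarkList` proto; the stream name, packet type, and `PacketGetter` helpers can differ between MediaPipe versions, so check the graph's .pbtxt first.

```java
import android.util.Log;

import com.google.mediapipe.formats.proto.LandmarkProto;
import com.google.mediapipe.framework.PacketGetter;
import com.google.protobuf.InvalidProtocolBufferException;

// ... inside MainActivity, after the FrameProcessor has been created:

// Attach a callback to the landmark output stream instead of (or in addition to)
// the rendered video output. Assumes the stream is named "hand_landmarks".
processor.addPacketCallback(
    "hand_landmarks",
    (packet) -> {
      try {
        // The packet is expected to hold a NormalizedLandmarkList proto.
        byte[] landmarksRaw = PacketGetter.getProtoBytes(packet);
        LandmarkProto.NormalizedLandmarkList landmarks =
            LandmarkProto.NormalizedLandmarkList.parseFrom(landmarksRaw);
        // Coordinates are normalized to [0, 1] relative to the image width/height.
        for (LandmarkProto.NormalizedLandmark lm : landmarks.getLandmarkList()) {
          Log.d("HandLandmarks", "x=" + lm.getX() + " y=" + lm.getY() + " z=" + lm.getZ());
        }
      } catch (InvalidProtocolBufferException e) {
        Log.e("HandLandmarks", "Could not parse NormalizedLandmarkList", e);
      }
    });
```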
@AndrGolubkov
@chuoling I ran the code below, because "LANDMARKS:hand_landmarks" is a vector of protos, but it failed:

```java
processor.getGraph().addPacketCallback("hand_landmarks", new PacketCallback() {
  @Override
  public void process(Packet packet) {
    PacketGetter.getVectorOfPackets(packet);
  }
});
```

I also think a function to get the type of a packet is necessary.
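One likely reason that call fails is that, in the single-hand graph, the "hand_landmarks" packet holds a single `NormalizedLandmarkList` proto rather than a vector of packets, so it can be read with `PacketGetter.getProtoBytes` as sketched above. For a stream that really does carry a vector of landmark protos, a sketch along these lines may work, assuming your MediaPipe version's `PacketGetter` provides `getProtoVector`; note that "multi_hand_landmarks" below is only an illustrative stream name, not necessarily one defined in your graph.

```java
import java.util.List;

import android.util.Log;

import com.google.mediapipe.formats.proto.LandmarkProto;
import com.google.mediapipe.framework.Packet;
import com.google.mediapipe.framework.PacketCallback;
import com.google.mediapipe.framework.PacketGetter;

// ... inside MainActivity, after the FrameProcessor has been created:

// Illustrative only: "multi_hand_landmarks" is a hypothetical stream name here;
// use whatever vector-of-proto output stream your graph actually defines.
processor.getGraph().addPacketCallback("multi_hand_landmarks", new PacketCallback() {
  @Override
  public void process(Packet packet) {
    // Unpack a std::vector<NormalizedLandmarkList> packet into a Java List.
    List<LandmarkProto.NormalizedLandmarkList> hands =
        PacketGetter.getProtoVector(packet, LandmarkProto.NormalizedLandmarkList.parser());
    Log.d("HandLandmarks", "Detected " + hands.size() + " hand(s)");
  }
});
```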
@chuoling I am interested in using this and getting the key points on iOS.
I have the same question. But I'm wondering: if the input is a 2D image, it seems very hard to extract 3D coordinates, unless the input is a depth image containing depth data.
@MedlarTea
Hi @oishi89,
@AndrGolubkov @astinmiura
@chuoling Thank you very much, that would be great.
@chuoling Thank you, we were hoping for such an API when we first read about this project. We all would appreciate this.
@Hemanth715 @AndrGolubkov @astinmiura Before such an API is available, we have an intermediate solution in C++. See issue #200, where we have an example of NormalizedLandmark protos.
@AndrGolubkov @astinmiura Fixed in v0.6.6. Please check it out and let us know.
Is there any way to extract the keypoints in Python so that I can use them in a VR project? Thank you :)
Is it possible, using the Hand Tracking (GPU) example, to extract not a video but an array of keypoints? Perhaps I didn't read the documentation and the example carefully enough; I apologize in advance.