Move hand horizontally in relaxed handshake position to control mouse pointer #3
I can definitely think about it, thanks for the suggestion. It's true that the default configuration (i.e. laptop camera and up/down gestures) is a bit tiring, but as I mentioned in your other issue, I had not anticipated the use of USB cameras. I can probably just add an option to switch between these two modes. :) In the meantime, you can already give it a try by creating your own data with hamoco-data and training your own model with hamoco-train. Default values for the network should be OK, and I used ~200 snapshots per hand-pose to create the dataset for the current model. All in all, it is only a matter of minutes to get everything working. I did not put a lot of time into the documentation, though, so I can definitely add that to my to-do list.
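The retraining workflow described above boils down to two console commands. The `hamoco-data` and `hamoco-train` entry points are named in the comment, but their exact options are assumptions here; run each with `--help` for the real interface:

```shell
# Hedged sketch of the retraining workflow (option names are assumptions;
# check `hamoco-data --help` and `hamoco-train --help` for the real interface).
pip install hamoco   # install the package if it is not available yet
hamoco-data --help   # shows how to record ~200 snapshots per hand-pose
hamoco-train --help  # shows how to train a model on the recorded snapshots
```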
If you know what you're doing! ;)
That's perfect!
@jorisparet Are you planning to implement this anytime soon? Are you planning to further develop hamoco at all? (I think it would really be a pity if you dropped this project.)
Hi @omlins! Sorry that it hasn't been done yet. It's been in the back of my head for weeks, but I'm currently in the process of moving to another city to start a new position, etc. Also, I need to find a USB camera to train the appropriate model, or find another way to do it as cleanly as possible without one. Anyway, I hope to find some time to do it soon, hopefully by the end of February (at the latest). So no, I won't drop the project anytime soon: I actually have a few ideas for some new features. :)
I see... I am glad to hear that the project will continue and can't wait to test this new feature!
Hi @omlins, as promised, I created a second model for a top-down view. At the moment, it is only visible on the newly created branch dev, so you can give it a try and let me know if it works as expected before I merge it into the main branch in the near future. This means you have to install the package locally from that branch in order to test it (switch to branch dev first). Essentially, you can switch between the original front view (i.e. laptop camera) and the top-down view (i.e. camera looking down, with the hand hovering over the table) using the corresponding option.
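Installing from the dev branch can be done with a standard pip-from-git install (a sketch; the repository path is inferred from this thread and assumed correct):

```shell
# Install hamoco directly from the dev branch to try the top-down model.
pip install git+https://github.com/jorisparet/hamoco.git@dev
```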
I expect it to be less accurate than the front view, because it is more difficult for mediapipe to infer the positions of the fingers when the back of the hand is facing the camera, since they are hidden. Also, I created one model for each view, but I'm not sure if this is the right way to go. At some point, I may train a single, more robust model that works for both views, so that we don't have to specify the view at all. Let me know what you think. ;)
Moving the (open) hand up and down in front of the camera is naturally quite tiring for longer use. Thus, would it be possible to create a mode where one can move the hand in a relaxed handshake position horizontally over the desk to control the mouse pointer? I mean just the same way as you would do with an ergonomic vertical mouse, but without holding a mouse! That would be the most ergonomic "mouse" ever made! The camera would of course have to be pointed downwards at the desk.
As a first step, it would already be absolutely awesome to only be able to move the mouse pointer that way, without being able to do other gestures. (I would like to use it in any case in combination with a voice control software that I am developing, where I can control mouse buttons by voice; see https://github.com/omlins/JustSayIt.jl)