Implement object tracking with tracking.js #3

Open
JohnMcLear opened this issue Jun 30, 2013 · 2 comments

@JohnMcLear
Contributor

Lots of limitations and issues here. See the tracking.js issues.

@JohnMcLear
Contributor Author

Here is my brain dump.

If we can detect the magnetic strip, we can skip one step.

  1. We will need a bunch of positive magnetic strip images (these will be hard to find) and crop them down to just the magnetic strip. See the guide here: http://note.sonots.com/SciSoftware/haartraining.html
  2. We will also need a bunch of negative images; we can probably use these: http://tutorial-haartraining.googlecode.com/svn/trunk/data/negatives/
  3. Generate the XML cascade files.
  4. Convert the XML to JSON --> http://www.freeformatter.com/xml-to-json-converter.html
  5. Create a tracking option for this Haar cascade file in tracking.js.
  6. Apply tracking.js to a file and, on success, find the detection's width/height; if the aspect ratio is within an expected range (so the card isn't skewed), continue (see the sketch after this list).
  7. Take an image. We should then have the width of the strip, and it shouldn't be skewed.
  8. This would be a good place to leave version 1.

We shouldn't feed these images back into the training set, as that would increase the risk of overfitting, which would be a bad thing.
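
For steps 5-7, here is a minimal sketch of how it might look in tracking.js. It assumes the converted cascade from steps 3-4 is held in a variable magstripeClassifierData and registered under a made-up name 'magstripe', that captureFrame is a hypothetical helper, and that the aspect-ratio bounds are placeholders to tune against real strips; none of this is tested.

    // Register the converted cascade data under a made-up name, the same way
    // tracking.js exposes its bundled classifiers.
    tracking.ViolaJones.classifiers.magstripe = magstripeClassifierData; // output of steps 3-4

    var tracker = new tracking.ObjectTracker('magstripe');

    tracker.on('track', function(event) {
      event.data.forEach(function(rect) {
        // Step 6: only accept detections whose width/height ratio looks like a
        // real magnetic strip, i.e. the card is roughly parallel to the camera.
        var aspect = rect.width / rect.height;
        if (aspect > 6 && aspect < 10) { // placeholder bounds, tune on real strips
          // Step 7: grab a frame; rect.width is the strip's pixel width.
          captureFrame(rect); // hypothetical helper
        }
      });
    });

    tracking.track('#camera', tracker, { camera: true });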

Version 2

Version 2 should include some form of object detection on a per-finger basis.

Doing this would be pretty tough. We would have to collect images and crop out many separate fingers, training each set against the other four as negatives. It'd be nicer if we could do some sort of peak/trough analysis and find the dip after each of the five peaks; each hand should have a profile like this:

Right hand facing the camera, rightmost part being the thumb:

  #
 ###
####
#####

Left hand facing the camera, leftmost part being the thumb:

  #
 ###
 ####
#####

It should also be obvious which way round the hand is, which would be helpful for displaying the SVG overlay.
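
As a sketch of the peak/trough idea (this is not tracking.js; it assumes we already have a binary hand mask from some earlier segmentation step, stored as a flat array of width * height values where 1 = hand pixel):

    // Sum each column of the mask so fingers show up as tall columns and the
    // gaps between them as dips.
    function columnProfile(mask, width, height) {
      var profile = new Array(width).fill(0);
      for (var y = 0; y < height; y++) {
        for (var x = 0; x < width; x++) {
          profile[x] += mask[y * width + x];
        }
      }
      return profile;
    }

    // Very naive peak finder: a column is a peak if it is higher than both
    // neighbours. A real version would smooth the profile and require a
    // minimum prominence so at most five peaks survive per hand.
    function findPeaks(profile) {
      var peaks = [];
      for (var x = 1; x < profile.length - 1; x++) {
        if (profile[x] > profile[x - 1] && profile[x] >= profile[x + 1]) {
          peaks.push(x);
        }
      }
      return peaks;
    }

The short, outermost peak would be the thumb, so which side it falls on gives the orientation hint mentioned above.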

Gamification

There should be a way to create a game to improve the learning here, but I can't think what it is. We already have access to hand-tracking positives, so collecting more hand images probably won't be that fruitful. What we'd actually need is cropped images of individual fingers, which is very unlikely.

Actually getting the ring finger's width

So once we can detect each finger, the next step would be to somehow measure its width.

This looks interesting:

Release of HAAR classifier trained for hand gesture recognition - http://www.andol.info/hci/2059.htm
[image: finger 1]
It shows a potential method for getting the ring size for fingers 2/3, or 4/5, depending on how you look at it. What matters about this example is that the software seems able to highlight specific points on a hand; I have no idea how this is done and should investigate further.

It appears the logic for this is handled in https://github.com/yandol/GstHanddetect/blob/master/src/gsthanddetect.c
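
This is not from that code, but as a sketch of what "getting the width" could mean in practice: assuming version 1 gives us the strip's pixel width, and if we treat the strip as spanning the card's full standard 85.6 mm width, a finger's pixel width at the ring line converts to millimetres like this (the numbers in the example are made up):

    // Convert a finger's pixel width to millimetres using the detected strip
    // as the scale reference. Treats the visible finger width as roughly the
    // inner ring diameter, which assumes a near-circular finger cross-section.
    function ringDiameterMm(fingerWidthPx, stripWidthPx, stripWidthMm) {
      var pxPerMm = stripWidthPx / stripWidthMm;
      return fingerWidthPx / pxPerMm;
    }

    // Made-up example: strip detected at 620 px and taken as 85.6 mm wide,
    // finger measured at 130 px -> roughly 18 mm, which then maps to a ring
    // size via a standard diameter-to-size chart.
    var diameterMm = ringDiameterMm(130, 620, 85.6);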

@JohnMcLear
Contributor Author

@YandOl wrote the source above, which gets each finger's location/size. It is possible he can modify his code to get the correct ring size per finger. If he were to do that, I would suggest the points of interest would be those represented by black circles on this image:
[image: hand with the suggested points of interest marked]

I will email him to see if he can assist :)
