Use the original image to get landmarks and descriptors. #294
Huh, I did this because I thought it would be a lot slower, since the image would be bigger? To be honest, I never tested it :) If there is no perf impact, of course this is the better approach!
The slow part is the discovery of faces, so analysis time and memory usage remain linear in the image area. Maybe I should repeat https://github.com/matiasdelellis/facerecognition/wiki/Comparative-analysis-of-the-models to confirm memory usage, but the times definitely do not change.
[Benchmark output: git master]
[Benchmark output: proposed original-image landmarking]
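To make the approach concrete, here is a minimal sketch of the idea under discussion, using the dlib Python API purely for illustration (the app itself drives dlib through PDlib from PHP; the scale factor and function names here are hypothetical): detection runs on a downscaled copy, and the rectangle is mapped back so that landmarks and descriptors come from the original, full-resolution pixels.

```python
import dlib

detector = dlib.get_frontal_face_detector()
sp = dlib.shape_predictor("shape_predictor_5_face_landmarks.dat")
facerec = dlib.face_recognition_model_v1("dlib_face_recognition_resnet_model_v1.dat")

def analyze(original, small, scale):
    """Detect faces on the downscaled image (the slow part), but compute
    landmarks and descriptors from the original, full-resolution pixels."""
    descriptors = []
    for rect in detector(small, 1):
        # Map the rectangle back to original-image coordinates.
        big = dlib.rectangle(int(rect.left() * scale), int(rect.top() * scale),
                             int(rect.right() * scale), int(rect.bottom() * scale))
        shape = sp(original, big)
        descriptors.append(facerec.compute_face_descriptor(original, shape))
    return descriptors
```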
Out of curiosity, are you still using the application? 😅
My idea is to add another command to migrate between models: reuse the faces from one model, and just calculate the landmarks and descriptors for the new model. It should be relatively fast..
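A minimal sketch of that migration idea, again with the dlib Python API for illustration (it assumes the face rectangles from the previous model are already stored; the model file names are the standard dlib downloads):

```python
import dlib

# Models for the *new* backend; only these run during migration.
sp = dlib.shape_predictor("shape_predictor_5_face_landmarks.dat")
facerec = dlib.face_recognition_model_v1("dlib_face_recognition_resnet_model_v1.dat")

def migrate_face(img, rect):
    """Reuse a face rectangle found by the old model: only the landmarks and
    the descriptor are recomputed, skipping the expensive face detection."""
    shape = sp(img, rect)  # landmarks inside the already-known box
    return facerec.compute_face_descriptor(img, shape)

# img = dlib.load_rgb_image("photo.jpg")
# rect = dlib.rectangle(left, top, right, bottom)  # stored by the old model
# descriptor = migrate_face(img, rect)
```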
Taking advantage of the latest code, here is a new little experiment..
As you can see, the results of model 2 are much worse, and therefore we will discourage its use. 😞 The usage is something like this..
As you can see, the first analysis took 2.4 hours and the migration just 17 minutes. This is only 11% of the time. 😬 P.S.: I suspect the best model to analyze with would be HOG, since it will not return so many profile faces, which are the problem. 🙈
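For reference, a sketch of the two detector flavours in the dlib Python API (the frontal detector is the HOG-based one; the CNN detector is the one that also returns the problematic profile and rotated faces; the model file name is the standard dlib download):

```python
import dlib

hog_detector = dlib.get_frontal_face_detector()  # HOG + linear SVM, mostly frontal faces
cnn_detector = dlib.cnn_face_detection_model_v1("mmod_human_face_detector.dat")  # CNN (MMOD)

img = dlib.load_rgb_image("photo.jpg")
hog_faces = hog_detector(img, 1)                     # list of dlib.rectangle
cnn_faces = [d.rect for d in cnn_detector(img, 1)]   # detections carry .rect and .confidence
```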
Not at the moment. I changed how I use Nextcloud (from an ordinary app to a QNAP package... don't ask...), so I haven't compiled it yet (ABI compatibility is a PITA on QNAP: you need to nail the exact OS you are cross-compiling for, to pin both CXXABI and GLIBCXX and be able to compile dlib).
Oh, good idea! Obviously it will not work in the generic case, but it should work nicely for the models you currently have, I guess.
Houston, we have a problem! 🙈 haha. I deleted absolutely everything and ran the same analysis with all my photos. 😄 Well, after 5 hours of thinking and reviewing the code, I discovered that most of the photos are rotated via EXIF. 😞
I am sorry for your "loss", but this is super funny. I know I shouldn't, but I can't stop laughing :D This is a super cool "bug" :)
haha.. Don't worry, in the end this was a "nice experiment". 😅 As a curiosity, this was one of the responses to the little survey I made, and it certainly is true. We are still defining many things, and trying to improve others. 😉
It's okay... For my part, I continue to use it with few images; I still have the test with all the family photos pending. 😬
Well.. We definitely have to discard this whole branch. I've been thinking about it, and it's impossible to do it right.
In summary, it is better to analyze the image with the rotation applied, as the user would see it (which is presumably how the faces are correctly aligned), and simply work with the points relative to that image. Dlib does not have support for opening the rotated image (davisking/dlib#1706). It is a pity, but for now I think there is nothing better to do. 😞
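A minimal sketch of working around this by applying the EXIF orientation before handing the pixels to dlib, using Pillow (an illustration, not the app's actual code path):

```python
import numpy as np
from PIL import Image, ImageOps
import dlib

def load_oriented(path):
    """Load an image with its EXIF orientation applied, so the pixel data
    matches what the user sees, then hand it to dlib as an RGB array."""
    img = ImageOps.exif_transpose(Image.open(path))
    return np.asarray(img.convert("RGB"))

detector = dlib.get_frontal_face_detector()
faces = detector(load_oriented("rotated_photo.jpg"), 1)
```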
🙈
Well.. I think with this we have reached the best we can offer. 😃 I finally added 10,000 photos of family, vacations, and friends' events to test. So, after 4 days of analysis (with the latest commits this is 6% slower), with this branch (with the face size restriction too) and default parameters, I get:
Looking at the wrong groups, they are mainly profile faces, badly rotated images, and some babies. At this point, the only way to improve would be to train at least one new landmark model; for now, that is out of any plan. 😅 Well, doing the math, this is just a 2.13% error. I think that is acceptable.
(force-pushed from f91d8cb to 90eba1f)
Rebased on master to test the minimum face size option. So, my latest conclusions:
1: The minimum face size seems to be quite useless. 😅 Deactivating the option (min_size = 0), I get just a 3.7% error with model 1, and using the default value (min_size = 125) it only improves to 3.23%, but increases the number of clusters by 10%. The small benefit does not seem justified.. 😞
On the other hand, I wrote some documentation and already began to doubt it there, because the smaller images look great. However, I still think the smallest faces should be ignored, at least those under 60px, so I will remove the frontend option and leave it as an advanced option.
2: It is evident that model 2 should be discouraged. After 4 days I finished analyzing my most complete collection of photos, set up model 2, and migrated the faces from model 1.
14.59% error, which reduces to 7.09% using the default minimum face size. Very disappointing. 😞
Side note: if the dlib author recommends the 5-point model, use the 5-point model!! 😅
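For context, swapping between the two landmark models in the dlib Python API is only a matter of which standard model file is loaded:

```python
import dlib

# 5-point model: the one the dlib author recommends for face alignment.
sp5 = dlib.shape_predictor("shape_predictor_5_face_landmarks.dat")
# 68-point model: more landmarks, but not better for recognition alignment.
sp68 = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")
```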
Ok. More conclusions:
3: This branch also seems quite useless.. haha. I am doing more tests:
Well, theoretically, the result of model 3, which was migrated analyzing the original images, should be the same or better. I have many theories, but I have to do more tests. 🤦 P.S.: I am reluctant due to the excessive cost, but I think I should buy a graphics accelerator with CUDA.. 😞
Yet another little experiment.. 😅 As I said in the last comment, when analyzing the faces on master and migrating them to model 3, the result turned out a little worse. Just a little, and inconclusive, but considering that we hoped to improve the result, it contradicts the whole objective of this PR. The experiment is this: change the detected face rectangle by randomly adding or subtracting a pixel.
...and the result is this..
You can see that just by changing one pixel on one of the sides, the clustering error increases, and it almost triples when changing all 4 sides. Well, the first clue to this was a dlib issue ("Having difficulties on selecting bound box for shape_predictor", davisking/dlib#2093), and in particular this comment: davisking/dlib#2093 (comment). In summary, the landmark model is very sensitive to the training data, and in particular to the size of the face box, so changing a single pixel can already result in a bad prediction of the landmarks. In this branch, we obtain the rectangle on the resized image and try to get the landmarks from the normalized rectangle. Since the rectangle is in pixels, we round the numbers to get integers, and here we reach the point that we cannot control, and it fails. So, I'm going to cancel this PR again.. 😓 😅
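A sketch of what that jitter experiment could look like with the dlib Python API (the ±1 px perturbation of the detected box is the point; the helper names are hypothetical and the model files are the standard dlib downloads):

```python
import random
import dlib

sp = dlib.shape_predictor("shape_predictor_5_face_landmarks.dat")
facerec = dlib.face_recognition_model_v1("dlib_face_recognition_resnet_model_v1.dat")

def jitter_rect(rect, sides=4):
    """Randomly add or subtract one pixel on `sides` sides of the face box,
    mimicking the rounding error introduced by rescaling the rectangle."""
    deltas = [random.choice((-1, 1)) if i < sides else 0 for i in range(4)]
    return dlib.rectangle(rect.left() + deltas[0], rect.top() + deltas[1],
                          rect.right() + deltas[2], rect.bottom() + deltas[3])

def descriptor(img, rect):
    # A box that differs by a single pixel can yield noticeably different
    # landmarks, hence a different descriptor and worse clustering.
    return facerec.compute_face_descriptor(img, sp(img, rect))
```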
Closed in favour of #309.
Well,
I have already insisted many times about the degradation of the images when resizing, etc. 😅
Note that when dlib computes the descriptor, it already generates a crop of the face, resizing it to 150px. This is yet another degradation, and therefore it is better to pass the original image and let dlib do its magic. 😃
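That 150px crop is visible in the dlib Python API: `compute_face_descriptor` works from an aligned 150×150 face chip, which can also be extracted explicitly. A minimal sketch (standard dlib model files assumed):

```python
import dlib

sp = dlib.shape_predictor("shape_predictor_5_face_landmarks.dat")
facerec = dlib.face_recognition_model_v1("dlib_face_recognition_resnet_model_v1.dat")

img = dlib.load_rgb_image("photo.jpg")
rect = dlib.get_frontal_face_detector()(img, 1)[0]
shape = sp(img, rect)

# dlib aligns and resizes the face to a 150x150 chip before running the
# recognition network; passing the original full-resolution image means
# this is the only downscale the face goes through.
chip = dlib.get_face_chip(img, shape, size=150)
descriptor = facerec.compute_face_descriptor(img, shape)
```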
Well, a little test with our The Big Bang Theory test suite.. 😅
It is not a substantial improvement, but an improvement in the end. 😅
I tested it several times, and I always get the same difference.