
Problem with recognition #48

Open
ramvdixit opened this issue Sep 25, 2019 · 14 comments

Comments

@ramvdixit commented Sep 25, 2019

Hello,

So after my previous query, everything is sorted and works, except for one glitch. Please note that the official Play Store app also has this problem. I am not sure whether the cause is the same, since I can only speak from my own logs.

Here is my scenario:

  1. I am required to take a video feed, extract the frames, train on each frame bitmap, and then recognize against another video feed, where a single frame (or a couple of frames) is matched against the database of 50 images. The reason for a video feed instead of a static picture is to increase "liveness" (I will bring in face detection and movement along pitch and yaw later).
  2. I have modified the code into two activities - one to train and one to recognize. I have used Eigenfaces only.
  3. The device will contain the images of only one person - the device owner - about 50 images processed from 50 frames of the live video feed.
  4. The training completes and the database shows the image Base64 strings and other parameters.
  5. Please note, in order to track things better and execute the AsyncTasks in a linear manner, I have used custom interfaces as event listeners - just a single interface method that is called when a frame is processed, the distance is measured and training is complete, so the next frame in the array can be taken up (see the sketch after this list).
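
For context, a minimal sketch of the listener-driven chaining described in point 5, assuming plain AsyncTasks; `FrameTrainedListener`, `TrainFrameTask` and `TrainingQueue` are hypothetical names for illustration, not the attached code:

```java
import android.graphics.Bitmap;
import android.os.AsyncTask;
import java.util.List;

// A single-method listener fired once a frame has been fully processed.
interface FrameTrainedListener {
    void onFrameTrained();
}

// Trains on one bitmap in the background, then notifies the listener.
class TrainFrameTask extends AsyncTask<Bitmap, Void, Void> {
    private final FrameTrainedListener listener;

    TrainFrameTask(FrameTrainedListener listener) {
        this.listener = listener;
    }

    @Override
    protected Void doInBackground(Bitmap... bitmaps) {
        // Train on a single frame here (e.g. a native call into face-lib).
        return null;
    }

    @Override
    protected void onPostExecute(Void result) {
        // Runs on the main thread; signal that the next frame can be taken up.
        listener.onFrameTrained();
    }
}

// Walks through the ~50 frames strictly one at a time.
class TrainingQueue implements FrameTrainedListener {
    private final List<Bitmap> frames;
    private int current = 0;

    TrainingQueue(List<Bitmap> frames) {
        this.frames = frames;
    }

    void start() {
        trainNext();
    }

    private void trainNext() {
        if (current < frames.size()) {
            new TrainFrameTask(this).execute(frames.get(current));
        }
    }

    @Override
    public void onFrameTrained() {
        current++;
        trainNext(); // the next frame starts only after the previous one finished
    }
}
```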

The problem:

  1. The Recognition activity always reports that a face was recognized, irrespective of what the camera sees - a face, someone else's face, a wall, a whiteboard, anything at all.
  2. I have tried adjusting the thresholds anywhere from 0.045f to 0.5f (see the threshold sketch after this list).
  3. I tried to reason this out by looking at your latest source and found no apparent mistakes in my code.
  4. I tried the app from the Play Store and it has similar problems. The only difference is that it sometimes reports no recognition when shown an object such as a book or the computer monitor (correctly, as expected). However, another button click reports the person's name for the same object. Also, like my implementation, it reports the same name for different human faces.
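
For reference, a minimal sketch of the kind of threshold gate the second point is tuning, assuming a plain nearest-neighbour comparison over the stored face vectors; the class, the method names and the Euclidean metric are illustrative, not this library's actual API:

```java
import java.util.Map;

// Without the "distance <= threshold" check at the end, the nearest stored
// class is always returned, so every input (a wall, a book, another person)
// appears to "match" - which is exactly the symptom described above.
final class NearestFace {

    static String recognize(float[] query, Map<String, float[]> knownFaces, float threshold) {
        String bestLabel = null;
        float bestDistance = Float.MAX_VALUE;
        for (Map.Entry<String, float[]> entry : knownFaces.entrySet()) {
            float d = euclidean(query, entry.getValue());
            if (d < bestDistance) {
                bestDistance = d;
                bestLabel = entry.getKey();
            }
        }
        // Reject anything that is not close enough to a known face.
        return bestDistance <= threshold ? bestLabel : null;
    }

    private static float euclidean(float[] a, float[] b) {
        float sum = 0f;
        for (int i = 0; i < a.length; i++) {
            float diff = a[i] - b[i];
            sum += diff * diff;
        }
        return (float) Math.sqrt(sum);
    }
}
```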

FYI: I am using another camera library [https://github.com/natario1/CameraView] for this, as I need it for other purposes. It also lets me process frames directly instead of extracting them from a video file. Please note, it offers both camera1 and camera2 support; I am using the camera2 API support.

I have attached four files:

  1. MainActivity - Training
  2. RecognitionActivity - Recognition
  3. face-lib.cpp (I modified only the log tag and the method signatures to match my package name - nothing else)
  4. The SharedPreferences XML file

Sorry about the .txt extensions, .java files are not allowed as attachments.

Can you please take a look and tell me where I am going wrong? I hope I am the one at fault, because your library is awesome and I would hate for there to be an inherent problem with it.

Thanks and Cheers!
MainActivity.txt
RecognitionActivity.txt
face-lib.txt
shared_preferences.txt

@UwaisWisitech

Hi @ramvdixit

Did you fix the issue?
I am also about to start on recognition in my app.

@ramvdixit (Author) commented Nov 18, 2019

@UwaisWisitech

Nope. I moved on to implementing it a different way: I combined OpenCV, SVM with KNN, and my own camera implementation derived from the OtaliaStudios CameraView library. I am still working on it, so I will abstain from giving you advice yet.

But I would definitely suggest trying other engines - TensorFlow with SVM, KNN, Caffe, etc. Eigenfaces did not really work well or give the accuracy that I needed.

@UwaisWisitech

Thanks @ramvdixit

@MustafaDar

> @UwaisWisitech
>
> Nope. I moved on to implementing it a different way: I combined OpenCV, SVM with KNN, and my own camera implementation derived from the OtaliaStudios CameraView library. I am still working on it, so I will abstain from giving you advice yet.
>
> But I would definitely suggest trying other engines - TensorFlow with SVM, KNN, Caffe, etc. Eigenfaces did not really work well or give the accuracy that I needed.

Did you get better accuracy with the different ML algorithms?

@ramvdixit (Author)

My best bet would be TensorFlow with SVM/KNN. It's quite satisfactory - I get a positive result almost all the time. Keep in mind that there are a bunch of variables you have to take into account: distance of the camera from the face, angle of the camera, available light, etc. I would also suggest you use frames from a live video feed rather than a single snap from the camera - run a bunch of frames through a loop and break out of the loop when you get a positive result (see the sketch below).
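
A minimal sketch of that frame-loop idea; `FaceEmbedder` and `FaceClassifier` are hypothetical stand-ins for whatever produces the embeddings (e.g. a TensorFlow model) and classifies them (e.g. SVM/KNN), not part of this library:

```java
import java.util.List;

// Loop over frames from a live feed and stop at the first confident match.
final class FrameLoopRecognizer {

    interface FaceEmbedder {
        float[] embed(byte[] frame);
    }

    interface FaceClassifier {
        /** Returns a label, or null when the frame does not match anyone. */
        String classify(float[] embedding);
    }

    static String recognize(List<byte[]> frames, FaceEmbedder embedder, FaceClassifier classifier) {
        for (byte[] frame : frames) {
            String label = classifier.classify(embedder.embed(frame));
            if (label != null) {
                return label; // break out as soon as one frame gives a positive result
            }
        }
        return null; // no frame matched: treat as unknown
    }
}
```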

@MustafaDar commented Dec 9, 2019

> My best bet would be TensorFlow with SVM/KNN. It's quite satisfactory - I get a positive result almost all the time. Keep in mind that there are a bunch of variables you have to take into account: distance of the camera from the face, angle of the camera, available light, etc. I would also suggest you use frames from a live video feed rather than a single snap from the camera - run a bunch of frames through a loop and break out of the loop when you get a positive result.

Okay, got it!
One random question: can the images in the training dataset be sent to a localhost server?

@ramvdixit (Author)

You mean the processed files? Sure, you can. But why localhost? You would either store them on a remote server or maintain them on the local device.

@MustafaDar

Yes, I mean the images (along with their labels) stored in the training dataset.
I just want to check on localhost first, then try it on the remote server.

@ramvdixit (Author)

If you are opting to use TensorFlow with SVM, you don't need the images after the training is completed. Just the training dataset containing the vector classifications is enough, which you can store on a server or maintain locally (see the sketch below). My personal need was to have face recognition capability on the local device with no dependency on a server when there is no connectivity. So, yeah, sure, you can store it anywhere you want.
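
A minimal sketch of what storing only the labelled vectors could look like on Android; the table and column names are made up, and the `SQLiteDatabase` is assumed to come from your own `SQLiteOpenHelper` (or SQLCipher equivalent):

```java
import android.content.ContentValues;
import android.database.sqlite.SQLiteDatabase;
import java.nio.ByteBuffer;

// Persists labelled embedding vectors so the raw images can be discarded
// after training. Schema and names are illustrative only.
final class EmbeddingStore {

    static final String CREATE_TABLE =
            "CREATE TABLE IF NOT EXISTS embeddings (" +
            "  id INTEGER PRIMARY KEY AUTOINCREMENT," +
            "  label TEXT NOT NULL," +
            "  vector BLOB NOT NULL)";

    static void save(SQLiteDatabase db, String label, float[] vector) {
        // Pack the float[] into a BLOB (4 bytes per float).
        ByteBuffer buffer = ByteBuffer.allocate(vector.length * 4);
        for (float v : vector) {
            buffer.putFloat(v);
        }
        ContentValues values = new ContentValues();
        values.put("label", label);
        values.put("vector", buffer.array());
        db.insert("embeddings", null, values);
    }
}
```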

@MustafaDar

> If you are opting to use TensorFlow with SVM, you don't need the images after the training is completed. Just the training dataset containing the vector classifications is enough, which you can store on a server or maintain locally. My personal need was to have face recognition capability on the local device with no dependency on a server when there is no connectivity. So, yeah, sure, you can store it anywhere you want.

In this project, is the training data being saved in an SQLite database?

@ramvdixit (Author)

Yes, I have a SQLCipher database that stores the class data.

@kukokp commented Apr 6, 2020

@ramvdixit sir, did you get it to work?

Could you please guide me? I have done the same as you did.

I moved both parts of the code into separate activities/fragments.

When I train the dataset it works fine, but when I provide some other person's image for recognition, it fails. I have spent a lot of time on this task.

Could you please give me a reference link to implement it as you did?

Thanks,
Zala

@shahimtiyaj commented Mar 7, 2021

> If you are opting to use TensorFlow with SVM, you don't need the images after the training is completed. Just the training dataset containing the vector classifications is enough, which you can store on a server or maintain locally. My personal need was to have face recognition capability on the local device with no dependency on a server when there is no connectivity. So, yeah, sure, you can store it anywhere you want.
>
> In this project, is the training data being saved in an SQLite database?

Please share your source code with us

@venkakat83

Hi,

I have been struggling for a few days to recognize faces using FaceNet in an Android application.

Here are the steps I have followed:

  1. Generated the face embeddings in Python using FaceNet.
  2. Integrated the face embeddings into the app and added the FaceNet TensorFlow model.
  3. On launch of the camera, I extract the embeddings and find the closest match among the previously saved embeddings using L2 normalization (see the sketch below), but it is not giving accurate results.

Kindly help.
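
For what it's worth, here is a minimal sketch of the comparison step in point 3, assuming L2-normalized embeddings compared by Euclidean distance with a reject threshold; the names and the placeholder threshold are illustrative, not part of FaceNet:

```java
import java.util.Map;

// Matches an L2-normalized query embedding against previously saved
// embeddings. The threshold has to be tuned on your own data; anything
// farther than it is treated as "unknown".
final class EmbeddingMatcher {

    static float[] l2Normalize(float[] v) {
        float norm = 0f;
        for (float x : v) norm += x * x;
        norm = (float) Math.sqrt(norm);
        float[] out = new float[v.length];
        for (int i = 0; i < v.length; i++) out[i] = v[i] / norm;
        return out;
    }

    /** Returns the closest label, or null if nothing is within the threshold. */
    static String closest(float[] query, Map<String, float[]> saved, float threshold) {
        float[] q = l2Normalize(query);
        String best = null;
        float bestDist = Float.MAX_VALUE;
        for (Map.Entry<String, float[]> e : saved.entrySet()) {
            float[] s = l2Normalize(e.getValue());
            float dist = 0f;
            for (int i = 0; i < q.length; i++) {
                float d = q[i] - s[i];
                dist += d * d;
            }
            dist = (float) Math.sqrt(dist);
            if (dist < bestDist) {
                bestDist = dist;
                best = e.getKey();
            }
        }
        return bestDist <= threshold ? best : null;
    }
}
```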
