# Project Changes
- Modified `settings.gradle` to use the new plugin management system.
- The conversion of a `Bitmap` to an NV21-formatted `ByteArray` (YUV 420) is now a suspending function, to avoid blocking the UI thread when a large number of images are being processed.
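As an illustrative sketch (not the app's exact code), the CPU-heavy part of that conversion looks roughly like the loop below; in the app, this kind of work runs inside a suspending function on a background dispatcher:

```kotlin
// Sketch: converting ARGB pixels to an NV21 (YUV 4:2:0) byte array.
// Function name and layout are illustrative, not the app's actual code.
fun argbToNv21(pixels: IntArray, width: Int, height: Int): ByteArray {
    val out = ByteArray(width * height * 3 / 2)
    var uvIndex = width * height
    for (y in 0 until height) {
        for (x in 0 until width) {
            val p = pixels[y * width + x]
            val r = (p shr 16) and 0xFF
            val g = (p shr 8) and 0xFF
            val b = p and 0xFF
            // Standard BT.601 RGB -> YUV conversion, integer approximation.
            val yVal = ((66 * r + 129 * g + 25 * b + 128) shr 8) + 16
            out[y * width + x] = yVal.coerceIn(0, 255).toByte()
            // NV21 stores one interleaved V/U pair per 2x2 block of pixels.
            if (y % 2 == 0 && x % 2 == 0) {
                val v = ((112 * r - 94 * g - 18 * b + 128) shr 8) + 128
                val u = ((-38 * r - 74 * g + 112 * b + 128) shr 8) + 128
                out[uvIndex++] = v.coerceIn(0, 255).toByte()
                out[uvIndex++] = u.coerceIn(0, 255).toByte()
            }
        }
    }
    return out
}
```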
- Users can now control the use of `GpuDelegate` and `XNNPack` using `useGpu` and `useXNNPack` in `MainActivity.kt`:

```kotlin
// Use the device's GPU to perform faster computations.
// Refer https://www.tensorflow.org/lite/performance/gpu
private val useGpu = true

// Use XNNPack to accelerate inference.
// Refer https://blog.tensorflow.org/2020/07/accelerating-tensorflow-lite-xnnpack-integration.html
private val useXNNPack = true
```
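For context, a rough sketch of how flags like these are typically applied when building a TensorFlow Lite interpreter (standard TFLite Java API; this is not necessarily the app's exact wiring):

```kotlin
import org.tensorflow.lite.Interpreter
import org.tensorflow.lite.gpu.GpuDelegate

// Sketch: wiring useGpu / useXNNPack into TFLite interpreter options.
fun buildInterpreterOptions(useGpu: Boolean, useXNNPack: Boolean): Interpreter.Options {
    val options = Interpreter.Options()
    if (useGpu) {
        // Delegate supported ops to the device's GPU.
        options.addDelegate(GpuDelegate())
    }
    // Enable or disable the XNNPack CPU acceleration path.
    options.setUseXNNPACK(useXNNPack)
    return options
}
```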
- The app now has a face-mask detection feature, with models obtained from the achen353/Face-Mask-Detector repo. You may turn it off by setting `isMaskDetectionOn` in `FrameAnalyser.kt` to `false`.
- The source of the FaceNet model is now Sefik Ilkin Serengil's DeepFace, a lightweight framework for face recognition and facial attribute analysis. Users can now choose between two models, `FaceNet` and `FaceNet512`; int-8 quantized versions of both models are also available. See the following line in `MainActivity.kt`:

```kotlin
private val modelInfo = Models.FACENET
```

You may use different configurations from the `Models` class.
- The app will now classify users whose images were not scanned from the `images` folder as `UNKNOWN`. The app uses thresholds on both the L2 norm and the cosine similarity to achieve this.
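A minimal sketch of the idea (the function names and threshold values here are hypothetical placeholders, not the app's actual code): a face is compared to a stored embedding with both metrics, and labelled `UNKNOWN` only if it fails both checks.

```kotlin
import kotlin.math.sqrt

// L2 (Euclidean) distance between two embeddings: lower means more similar.
fun l2Distance(a: FloatArray, b: FloatArray): Float {
    var sum = 0f
    for (i in a.indices) sum += (a[i] - b[i]) * (a[i] - b[i])
    return sqrt(sum)
}

// Cosine similarity between two embeddings: higher means more similar.
fun cosineSimilarity(a: FloatArray, b: FloatArray): Float {
    var dot = 0f; var normA = 0f; var normB = 0f
    for (i in a.indices) {
        dot += a[i] * b[i]
        normA += a[i] * a[i]
        normB += b[i] * b[i]
    }
    return dot / (sqrt(normA) * sqrt(normB))
}

// Hypothetical thresholds; the app's actual values may differ.
fun isUnknown(
    a: FloatArray, b: FloatArray,
    l2Threshold: Float = 10f, cosineThreshold: Float = 0.4f
): Boolean = l2Distance(a, b) > l2Threshold && cosineSimilarity(a, b) < cosineThreshold
```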
- For requesting the `CAMERA` permission and access to the `images` folder, the request code is now handled by the system itself. See Request app permissions.
- We'll now use `PreviewView` from CameraX instead of directly using a `TextureView`. See the official Android documentation for `PreviewView`.
- As of Android 10, apps can't access the root of the internal storage directly, so we've implemented Scoped Storage, where the user has to grant the app access to the contents of a particular directory. In our case, users now have to choose the `images/` directory manually. See Grant access to a directory's contents.
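As a rough sketch of that flow (assuming an AndroidX activity; class and function names here are illustrative, not the app's actual code), the system directory picker can be launched with the `OpenDocumentTree` activity-result contract:

```kotlin
import android.content.Intent
import android.net.Uri
import androidx.activity.result.contract.ActivityResultContracts
import androidx.appcompat.app.AppCompatActivity

// Sketch: letting the user pick the images/ directory under Scoped Storage.
class PickerActivity : AppCompatActivity() {

    // Launches the system directory picker and receives a tree Uri (or null).
    private val pickDirectory =
        registerForActivityResult(ActivityResultContracts.OpenDocumentTree()) { treeUri: Uri? ->
            treeUri?.let {
                // Persist read access across device restarts.
                contentResolver.takePersistableUriPermission(
                    it, Intent.FLAG_GRANT_READ_URI_PERMISSION
                )
                // ... enumerate images, e.g. via DocumentFile.fromTreeUri(this, it)
            }
        }

    fun chooseImagesDirectory() = pickDirectory.launch(null)
}
```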
- The feature request #11 for serializing the image data has now been implemented. The app won't load the images every time, ensuring a faster start.
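A minimal sketch of the idea behind such a cache, assuming embeddings are stored as a map from user name to embedding (the app's actual file format and helper names may differ): compute embeddings once, write them to disk, and read them back on subsequent launches.

```kotlin
import java.io.File
import java.io.ObjectInputStream
import java.io.ObjectOutputStream

// Sketch: cache computed embeddings so images need not be re-scanned at startup.
fun saveEmbeddings(file: File, data: HashMap<String, FloatArray>) {
    ObjectOutputStream(file.outputStream()).use { it.writeObject(data) }
}

@Suppress("UNCHECKED_CAST")
fun loadEmbeddings(file: File): HashMap<String, FloatArray> =
    ObjectInputStream(file.inputStream()).use { it.readObject() as HashMap<String, FloatArray> }
```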
- The feature request #6 has also been addressed. With the use of `PreviewView`, the app can now be used in landscape orientation.
- The project is now backwards compatible to API level 25. For other details, see the `build.gradle` file.
- The lens facing has been changed to `FRONT`, and users won't be able to change it. The app will open the device's front camera by default.
- The source of the FaceNet Keras model -> nyoki-mtl/keras-facenet
- The image normalization step is now included in the TFLite model itself, using a custom layer. We only need to cast images to `float32` using the `CastOp` from the TFLite Support Library.
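A hedged sketch of that cast, using the TFLite Support Library's image-processing pipeline (not necessarily the app's exact code):

```kotlin
import org.tensorflow.lite.DataType
import org.tensorflow.lite.support.common.ops.CastOp
import org.tensorflow.lite.support.image.ImageProcessor
import org.tensorflow.lite.support.image.TensorImage

// Sketch: normalization lives inside the model, so preprocessing reduces
// to a single cast of the input image to float32.
val imageProcessor: ImageProcessor = ImageProcessor.Builder()
    .add(CastOp(DataType.FLOAT32))
    .build()

fun preprocess(image: TensorImage): TensorImage = imageProcessor.process(image)
```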
- A `TextView` has been added to the screen, which logs important information like the number of images scanned, the similarity scores for users, etc.
- The source of the FaceNet model has been changed. We'll now use the FaceNet model from sirius-ai/MobileFaceNet_TF.
- The project is now backwards compatible to API level 23 (Android Marshmallow): `minSdkVersion 23`
- The lens facing of the camera can now be changed; a button is provided on the main screen itself.
- For multiple images of a single user, we compute the score for each image, then an average score for each group. The group with the best score is chosen as the output. See `FrameAnalyser.kt`.

```
images ->
    Rahul ->
        image_rahul_1.png -> score = 0.6 --|--- average = 0.55 ---|
        image_rahul_2.png -> score = 0.5 --|                      |---> output -> "Rahul"
    Neeta ->                                                      |
        image_neeta_1.png -> score = 0.4 --|--- average = 0.35 ---|
        image_neeta_2.png -> score = 0.3 --|
```
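The grouping logic above can be sketched in plain Kotlin (illustrative only; the app's actual implementation lives in `FrameAnalyser.kt`):

```kotlin
// Sketch: average the per-image scores within each user's group and
// return the name of the group with the highest average (null if empty).
fun bestMatch(scoresByUser: Map<String, List<Float>>): String? =
    scoresByUser
        .mapValues { (_, scores) -> scores.average() }
        .maxByOrNull { it.value }
        ?.key
```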
- Cosine similarity can be used alongside the L2 norm. See the `metricToBeUsed` variable in `FrameAnalyser.kt`.
- A new parameter has been added in `MainActivity.kt`. The `cropWithBBoxes` argument allows you to run the Firebase ML Kit module on the images provided. If you are already providing cropped images in the `images/` folder, set this argument to `false`. On setting the value to `true`, Firebase ML Kit will crop faces from the images and then run the FaceNet model on them.