This is a high-level implementation of the face recognizer described in the paper "FaceNet: A Unified Embedding for Face Recognition and Clustering", using the pre-trained model by davidsandberg. The code was initially written by nyoki-mtl and forked from his repository keras-facenet.
- Follow the Pretrained model instructions below to download the FaceNet model
- Run Jupyter Notebook using either Docker Compose or your local environment.
- For Docker Compose, execute:
docker-compose up
- For the local environment (Python 3), execute:
run.sh
- A sample dataset is provided in
data/images
but it can be replaced with your own.
- Preprocess the dataset with pre-process-dataset.ipynb
- Train the SVM model with svm-classification.ipynb (a minimal sketch of this step follows this list)
- Run a webcam demo using the trained model with demo-webcam.ipynb
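As a rough illustration of the SVM classification step, the sketch below fits a scikit-learn SVM on pre-computed FaceNet embeddings. The file names data/embeddings.npy and data/labels.npy are hypothetical placeholders for whatever the pre-processing step actually produces.

```python
import numpy as np
from sklearn.preprocessing import LabelEncoder, Normalizer
from sklearn.svm import SVC

# Hypothetical outputs of the pre-processing / embedding step:
# `embeddings` has shape (n_samples, embedding_dim), `labels` holds person names.
embeddings = np.load('data/embeddings.npy')
labels = np.load('data/labels.npy')

# L2-normalise the embeddings so distances behave like cosine similarity,
# then train a linear SVM on top of them.
embeddings = Normalizer(norm='l2').transform(embeddings)
targets = LabelEncoder().fit_transform(labels)

classifier = SVC(kernel='linear', probability=True)
classifier.fit(embeddings, targets)

# Classify a face by its embedding.
print(classifier.predict(embeddings[:1]))
```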
You can quickly start FaceNet with the pretrained Keras model (trained on the MS-Celeb-1M dataset).
- Download the model from here and save it in
model/keras/
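Once the model file is in place, loading it and computing an embedding for a pre-processed face crop could look like the sketch below. The filename facenet_keras.h5 is an assumption; use whatever name the downloaded file has.

```python
import numpy as np
from keras.models import load_model

# Assumed filename for the downloaded Keras model.
model = load_model('model/keras/facenet_keras.h5')

# `face` stands in for a 160x160 RGB crop produced by the pre-processing step.
face = np.random.rand(160, 160, 3).astype('float32')

# Standardise the crop, then map it to its embedding vector.
face = (face - face.mean()) / face.std()
embedding = model.predict(np.expand_dims(face, axis=0))[0]
print(embedding.shape)  # 128 or 512 dimensions, depending on the model version
```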
You can also convert the TensorFlow model to Keras from the pretrained models:
- Download the model from here and save it in
model/tf/
(keep the model version for further reference)
- Open tf-to-keras.ipynb to convert the model to Keras
- Update the variable
model_version
- Execute the Notebook.
Note: the latest versions of the pre-trained models have an output layer of 512 dimensions instead of 128. If you are converting the model from TensorFlow to Keras, you should specify this when initializing the InceptionResNetV1 class:
model = InceptionResNetV1(classes=512)
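After the conversion, loading the converted weights into the 512-dimensional architecture might look like the sketch below; the import path and the weights filename are assumptions, so adjust them to wherever this repository defines InceptionResNetV1 and wherever the notebook saves the converted weights.

```python
# Adjust the import to wherever this repository defines InceptionResNetV1.
from inception_resnet_v1 import InceptionResNetV1

# Build the architecture with the 512-dimensional embedding layer and load
# the weights produced by tf-to-keras.ipynb (the path is an assumption).
model = InceptionResNetV1(classes=512)
model.load_weights('model/keras/facenet_keras_weights.h5')
print(model.output_shape)  # expected: (None, 512)
```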
For evaluation purposes, code to test the AWS Rekognition service is also included. Take into account that using this service may incur costs.
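The included notebook drives the service, but as a rough illustration, a single face comparison with boto3 could look like the sketch below. The image filenames are placeholders, and AWS credentials must already be configured.

```python
import boto3

# Placeholder image files; Rekognition compares the largest face in the
# source image against every face it finds in the target image.
with open('face1.jpg', 'rb') as source, open('face2.jpg', 'rb') as target:
    response = boto3.client('rekognition').compare_faces(
        SourceImage={'Bytes': source.read()},
        TargetImage={'Bytes': target.read()},
        SimilarityThreshold=80,
    )

for match in response['FaceMatches']:
    print('Similarity: %.1f%%' % match['Similarity'])
```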