This repo is based on https://github.com/biubug6/Pytorch_Retinaface, extended with the ability to convert the model backbone into an ONNX model, which allows real-time inference on CPU (TL;DR: it's fast when running on CPU).
- Use conda to create a virtual environment:
conda create --name py36 python=3.6
conda activate py36
- Install the required Python libraries:
pip install -r requirements.txt
- Download the weight file here and put it in:
/retinaface_onnx/module/face_detector/retinaface/weights/
- Run the scripts
- Change the settings in config.py according to your needs
- Script to run real-time face detection:
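The exact settings in config.py depend on the repo; a typical set of knobs for a detector like this might look as follows (every key name and path below is an illustrative assumption, not the repo's actual config):

```python
# Illustrative config sketch only -- key names and values are assumptions,
# not the repo's actual config.py contents.
ONNX_MODEL_PATH = "module/face_detector/retinaface/weights/mobilenet.onnx"  # hypothetical path
CONFIDENCE_THRESHOLD = 0.6   # drop detections scoring below this
NMS_THRESHOLD = 0.4          # IoU threshold for non-maximum suppression
INPUT_SIZE = (640, 640)      # width, height fed to the network
CAMERA_INDEX = 0             # webcam device id for real-time inference
```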
python infer.py
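A minimal sketch of what a real-time loop like infer.py typically does, assuming onnxruntime for inference and the BGR mean subtraction (104, 117, 123) used by the original Pytorch_Retinaface preprocessing; the model filename and loop structure are illustrative, not the repo's exact code:

```python
import numpy as np

# RetinaFace-style preprocessing: float32, per-channel BGR mean
# subtraction, then HWC -> NCHW with a leading batch dimension.
def preprocess(frame_bgr):
    img = np.float32(frame_bgr)
    img -= (104.0, 117.0, 123.0)   # per-channel BGR means
    img = img.transpose(2, 0, 1)   # HWC -> CHW
    return img[np.newaxis, ...]    # add batch dim -> (1, 3, H, W)

# Hypothetical real-time loop (requires opencv-python and onnxruntime):
# import cv2, onnxruntime as ort
# sess = ort.InferenceSession("retinaface.onnx", providers=["CPUExecutionProvider"])
# cap = cv2.VideoCapture(0)
# while True:
#     ok, frame = cap.read()
#     if not ok:
#         break
#     outputs = sess.run(None, {sess.get_inputs()[0].name: preprocess(frame)})
#     # ...decode boxes/landmarks, apply NMS, draw, cv2.imshow...
```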
- Script to convert the PyTorch model into an ONNX model:
python convert_onnx.py
On my laptop (CPU: AMD Ryzen 5 3500U, 12 GB RAM), the smallest backbone (MobileNet) achieves a latency of about 30 ms/image.
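A per-image latency figure like the one above can be reproduced with a simple timing loop; here detect is a stand-in for one forward pass of the ONNX model:

```python
import time

def detect(frame):
    # Stand-in for a single ONNX inference call on one frame.
    return sum(frame)

frame = list(range(1000))
n_runs = 50

start = time.perf_counter()
for _ in range(n_runs):
    detect(frame)
# Average wall-clock time per call, in milliseconds.
elapsed_ms = (time.perf_counter() - start) * 1000 / n_runs
print(f"average latency: {elapsed_ms:.2f} ms/image")
```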