tldr: A file storage web app (basically Google Drive) that uses facial recognition as its main form of authentication. Users create accounts with their face as the key to logging in. For now, users can upload and view images in the web app. Facial recognition, along with file upload and download, is handled on the backend in Python using the Django framework. The frontend is a NextJS app (a meta-framework built on top of React). To run the full stack, the backend server is hosted locally on port 8000 while the frontend runs on port 3000.
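To make the backend's role concrete, here is a minimal sketch of what a Django image-upload endpoint could look like. This is an assumption for illustration, not the project's actual view code; the URL, the `file` field name, and the function name are all hypothetical.

```python
# views.py — hypothetical sketch of an image upload endpoint
from django.http import JsonResponse
from django.views.decorators.csrf import csrf_exempt


@csrf_exempt
def upload_image(request):
    """Accept an image POSTed from the NextJS frontend as multipart/form-data."""
    if request.method != "POST" or "file" not in request.FILES:
        return JsonResponse({"error": "expected a POST with a 'file' field"}, status=400)

    uploaded = request.FILES["file"]
    # The real app would persist this (e.g. via a model with a FileField);
    # here we just echo back what was received.
    return JsonResponse({"name": uploaded.name, "size": uploaded.size})
```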
Our facial recognition functionality is built on the face_recognition Python library, which itself is built on top of Dlib, a machine learning library written in C++. The Django endpoint that handles face recognition goes through a two-step process (see the sketch after these steps):
- It calls the face_locations method to detect human faces in the photo, then uses that location data to encode each face into a 128-dimension face encoding, essentially a numeric fingerprint of the face (idk the deeper math either, I just follow the documentation).
- We run step 1 on both the face image stored in our DB and the photo taken at login by a potential user. We then compare the two face encodings, and if their similarity meets a certain threshold, the user is logged in to their account.
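A minimal sketch of that two-step flow using the face_recognition library is below. The file names are placeholders (in the real app the reference image comes from the DB and the login photo from the request), and the default tolerance of 0.6 stands in for whatever threshold the endpoint actually uses.

```python
import face_recognition

# Load the stored reference photo and the photo taken at login (paths are placeholders)
stored_image = face_recognition.load_image_file("stored_face.jpg")
login_image = face_recognition.load_image_file("login_face.jpg")

# Step 1: locate faces, then build 128-dimension encodings from those locations
stored_locations = face_recognition.face_locations(stored_image)
login_locations = face_recognition.face_locations(login_image)
stored_encodings = face_recognition.face_encodings(stored_image, stored_locations)
login_encodings = face_recognition.face_encodings(login_image, login_locations)

# Step 2: compare the two encodings; compare_faces matches when the face distance
# is within the tolerance (0.6 by default, lower is stricter)
if stored_encodings and login_encodings:
    match = face_recognition.compare_faces(
        [stored_encodings[0]], login_encodings[0], tolerance=0.6
    )[0]
else:
    match = False  # no face found in one of the photos

print("Login allowed" if match else "Login denied")
```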
- Frontend: NextJS, TailwindCSS, JS Fetch API
- Backend: Django (Python), face_recognition
- Database: SQLite