Vision RPS

Live Preview!

A Rock-Paper-Scissors game using machine learning and MediaPipe's hand tracking.

  • Designed and implemented a Rock-Paper-Scissors game that recognizes hand gestures with machine learning and MediaPipe's hand tracking (see the sketch after this list).
  • Collected a dataset of 1,329 samples using OpenCV and preprocessed it to the project's requirements.
  • Trained a classification model to a validation accuracy of 99.32% using TensorFlow and deployed it on the website with TensorFlow.js.
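
At the core of the pipeline, MediaPipe detects 21 hand landmarks per frame, and those coordinates, not raw pixels, are what the classifier sees. The repository's actual preprocessing lives in collect_data_manual.py and train.ipynb; the snippet below is only a minimal sketch of the idea, and the helper name and feature layout (21 landmarks flattened to 63 values) are illustrative assumptions.

import cv2
import mediapipe as mp
import numpy as np

# MediaPipe's Hands solution returns 21 (x, y, z) landmarks per detected hand.
hands = mp.solutions.hands.Hands(static_image_mode=False, max_num_hands=1)

def landmarks_to_features(frame_bgr):
    """Return a flat (63,) landmark vector for the first hand, or None."""
    results = hands.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
    if not results.multi_hand_landmarks:
        return None
    points = results.multi_hand_landmarks[0].landmark
    return np.array([[p.x, p.y, p.z] for p in points], dtype=np.float32).ravel()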

Contents

  1. Running the project locally
  2. How to collect data
  3. How to train a new model
  4. How to convert the model to TensorFlow.js
  5. Results

Running the project locally

Clone the git repository

git clone https://github.com/dev-DTECH/vision-rps.git
cd ./vision-rps

Run an HTTP server

python -m http.server 8080

The project should be live at http://localhost:8080 (python -m http.server serves plain HTTP; browsers still allow camera access here because localhost is treated as a secure context)


How to collect data

Install the dependencies

pip install -r requirements.txt

Run the manual data-collection script

python collect_data_manual.py

While showing your hand to the camera, press a key from 0 to 9 to label the current sample. The script builds up a dataset and stores it in data.csv
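
collect_data_manual.py is the authoritative script; the loop below is a rough sketch of the same idea, reusing the hypothetical landmarks_to_features helper from the earlier sketch. The CSV layout (label first, then 63 landmark values) is an assumption, not the script's documented format.

import csv
import cv2

cap = cv2.VideoCapture(0)
with open("data.csv", "a", newline="") as f:
    writer = csv.writer(f)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        cv2.imshow("collect", frame)
        key = cv2.waitKey(1) & 0xFF
        if key == ord("q"):  # quit
            break
        if ord("0") <= key <= ord("9"):  # digit keys act as class labels
            features = landmarks_to_features(frame)  # hypothetical helper from above
            if features is not None:
                writer.writerow([key - ord("0"), *features])
cap.release()
cv2.destroyAllWindows()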


How to train a new model

Open train.ipynb in Jupyter Notebook and run all the cells in order
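
The notebook defines the actual architecture and hyperparameters; for orientation only, a landmark classifier of this kind can be as small as the sketch below. The layer sizes, epoch count, and CSV layout are assumptions, not the notebook's settings.

import os
import numpy as np
import tensorflow as tf

# Assumed CSV layout from the collection step: label, then 63 landmark values.
data = np.loadtxt("data.csv", delimiter=",", dtype=np.float32)
labels, features = data[:, 0].astype(np.int32), data[:, 1:]

# Small dense classifier over landmark features (sizes are illustrative).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(features.shape[1],)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),  # rock / paper / scissors
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(features, labels, epochs=50, validation_split=0.2)

os.makedirs("models", exist_ok=True)
model.save("models/v1.h5")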


How to convert the model to TensorFlow.js

Install the tensorflowjs package (it provides the converter)

pip install tensorflowjs

Run the converter script

tensorflowjs_converter \
    --input_format=keras \
    /models/v1.h5 \
    /models/v1_tfjs_model
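
If you prefer to stay in Python, the tensorflowjs package exposes the same conversion as a function call; the paths below mirror the CLI example above.

import tensorflow as tf
import tensorflowjs as tfjs

# Load the trained Keras model and write the TensorFlow.js model artifacts.
model = tf.keras.models.load_model("models/v1.h5")
tfjs.converters.save_keras_model(model, "models/v1_tfjs_model")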

Results

Validation accuracy = 0.9932279909706546 (≈ 99.32%)

Confusion Matrix

[Confusion matrix of the trained classifier]
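
For reference, a matrix like this can be recomputed from the trained model with scikit-learn. The sketch below scores all of data.csv, whereas the reported accuracy is for the validation split, so treat the output as illustrative; the CSV layout is the same assumption as in the earlier sketches.

import numpy as np
import tensorflow as tf
from sklearn.metrics import confusion_matrix

# Assumed CSV layout: label, then 63 landmark values per row.
data = np.loadtxt("data.csv", delimiter=",", dtype=np.float32)
labels, features = data[:, 0].astype(np.int32), data[:, 1:]

model = tf.keras.models.load_model("models/v1.h5")
predictions = np.argmax(model.predict(features), axis=1)
print(confusion_matrix(labels, predictions))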