
This is the code for sign language recognition using an LSTM model.


aaronsam07/Sign-Language-Recognition


Realtime Sign Language Detection Using LSTM Model

The Realtime Sign Language Detection Using LSTM Model is a deep learning project that recognizes and interprets sign language gestures in real time. It uses a Long Short-Term Memory (LSTM) neural network to learn and classify gesture sequences captured from a video feed. Users perform sign language gestures in front of a camera, and the system detects and interprets them instantly, making it useful as an assistive technology for individuals with hearing impairments.

Key features include real-time gesture detection, high recognition accuracy, and the ability to add and train new sign language gestures. The system is built with Python, TensorFlow, OpenCV, and NumPy, making it accessible and easy to customize. With this project, we aim to bridge the communication gap and empower individuals with hearing impairments.
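
To give a rough picture of the kind of architecture described above, the sketch below defines a small stacked-LSTM classifier in Keras. The sequence length, per-frame feature size, and gesture labels are illustrative assumptions, not values taken from this repository.

```python
# Minimal sketch of a stacked-LSTM gesture classifier (assumed shapes/labels).
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

SEQUENCE_LENGTH = 30   # assumed number of frames per gesture clip
NUM_FEATURES = 1662    # assumed feature-vector size per frame
GESTURES = ["hello", "thanks", "iloveyou"]  # hypothetical label set

model = Sequential([
    LSTM(64, return_sequences=True, activation="relu",
         input_shape=(SEQUENCE_LENGTH, NUM_FEATURES)),
    LSTM(128, return_sequences=True, activation="relu"),
    LSTM(64, return_sequences=False, activation="relu"),
    Dense(64, activation="relu"),
    Dense(32, activation="relu"),
    Dense(len(GESTURES), activation="softmax"),
])
model.compile(optimizer="Adam",
              loss="categorical_crossentropy",
              metrics=["categorical_accuracy"])
model.summary()
```

The stack of LSTM layers lets the network model how keypoints move across a clip rather than classifying single frames; the final softmax layer maps each sequence to one of the gesture labels.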

Demo

This section showcases a demonstration of the Realtime Sign Language Detection Using LSTM Model project.

label.1.1.mp4

The demo allows viewers to see how the system accurately interprets sign language gestures and provides real-time results.
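
For readers curious how the real-time side might fit together, here is a hedged sketch of a webcam inference loop built on OpenCV. It reuses the model, GESTURES, and SEQUENCE_LENGTH names from the sketch above, and extract_keypoints() is a hypothetical stand-in for whatever per-frame feature extraction the project actually performs.

```python
# Sketch of a real-time detection loop; builds on the model sketch above.
import cv2
import numpy as np

sequence = []  # rolling buffer of per-frame feature vectors

cap = cv2.VideoCapture(0)  # default webcam
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break

    keypoints = extract_keypoints(frame)  # hypothetical feature extractor
    sequence.append(keypoints)
    sequence = sequence[-SEQUENCE_LENGTH:]  # keep only the last N frames

    if len(sequence) == SEQUENCE_LENGTH:
        probs = model.predict(np.expand_dims(sequence, axis=0), verbose=0)[0]
        label = GESTURES[int(np.argmax(probs))]
        cv2.putText(frame, label, (10, 30),
                    cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)

    cv2.imshow("Sign Language Detection", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press 'q' to quit
        break

cap.release()
cv2.destroyAllWindows()
```

The rolling buffer is what makes the detection feel real-time: the model is re-run on the most recent window of frames as each new frame arrives, so the predicted label updates continuously.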
