Synapse AI

Main presentation

Shruti-Drishti: Bridging the Communication Gap for the Deaf Community in India 🌉🇮🇳

Introduction 🙌

Shruti-Drishti is an innovative project aimed at addressing the communication gap between the deaf and non-deaf communities in South Asia, particularly in India. By leveraging deep learning models and state-of-the-art techniques, we strive to facilitate seamless communication and promote inclusivity for individuals with hearing impairments. 🌟

Demo Video: ISL-based Sign Language Detection

Key Features ✨

  1. Sign Language to Text Conversion 🖐️➡️📝: Our custom Transformer-based Multi-Headed Attention Encoder, built on keypoints from Google's MediaPipe, accurately converts sign language videos into text, overcoming challenges related to dynamic sign similarity.

  2. Text to Sign Language Generation 📝➡️🖐️: Using an agentic LLM framework, Shruti-Drishti converts text into masked-keypoint sign language videos tailored specifically to Indian Sign Language.

Text2Sign pipeline

  3. Multilingual Support 🌐: The app uses IndicTrans2 to support all 22 scheduled Indian languages. Accessibility is our top priority, and we make sure everyone is included.

  4. Content Accessibility 📰🎥: Shruti-Drishti enables news channels and content creators to expand their audience by making their content accessible and inclusive through embedded sign language video layouts.
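Both directions of the pipeline operate on MediaPipe keypoints rather than raw pixels. As a rough illustration of that representation (a minimal numpy sketch, assuming 3D coordinates per landmark and MediaPipe Holistic's landmark counts of 33 for the pose and 21 per hand; the exact feature layout in our models may differ):

```python
import numpy as np

# Assumed layout: flatten per-frame MediaPipe-style landmarks into one
# fixed-length feature vector per frame. MediaPipe Holistic yields 33 pose
# landmarks and 21 landmarks per hand; we take (x, y, z) for each.
POSE_LANDMARKS, HAND_LANDMARKS, DIMS = 33, 21, 3

def frame_features(pose, left_hand, right_hand):
    """Concatenate landmark arrays; an undetected hand becomes a zero vector."""
    def flat(arr, n):
        if arr is None:  # hand not detected in this frame
            return np.zeros(n * DIMS)
        return np.asarray(arr, dtype=float).reshape(-1)
    return np.concatenate([
        flat(pose, POSE_LANDMARKS),
        flat(left_hand, HAND_LANDMARKS),
        flat(right_hand, HAND_LANDMARKS),
    ])

# A video then becomes a (num_frames, feature_dim) sequence fed to the
# LSTM or Transformer model.
pose = np.random.rand(POSE_LANDMARKS, DIMS)
vec = frame_features(pose, None, np.random.rand(HAND_LANDMARKS, DIMS))
print(vec.shape)  # (225,) = (33 + 21 + 21) * 3
```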

Dataset Details 📊

Link to the Dataset: INCLUDE Dataset

The INCLUDE dataset, sourced from AI4Bharat, forms the foundation of our project. It consists of 4,292 videos, with 3,475 videos used for training and 817 videos for testing. Each video captures a single Indian Sign Language (ISL) sign performed by deaf students from St. Louis School for the Deaf, Adyar, Chennai.

Model Architecture 🧠

Shruti-Drishti employs two distinct models for real-time Sign Language Detection:

  1. LSTM-based Model 📈: Operating on pose keypoints extracted with MediaPipe, this recurrent model uses Long Short-Term Memory (LSTM) cells to classify sign sequences.

    • Time-distributed layers: Extract features from each frame's MediaPipe keypoints, capturing spatial relationships between joints and movement patterns.
    • Sequential (LSTM) layers: Exploit the temporal structure of the pose data, leading to more accurate recognition across a video sequence.
  2. Transformer-based Model 🔄: Trained through extensive experimentation and hyperparameter tuning, this model offers enhanced performance and adaptability.

    • Training Strategies:
      1. Warmup: Gradually increases the learning rate from a very low value to the main training rate, helping the model settle into a good region of the parameter space before training at higher learning rates.
      2. AdamW: An optimizer that decouples weight decay from the gradient update, addressing a shortcoming of the traditional Adam optimizer and often leading to faster convergence and better generalization.
      3. ReduceLROnPlateau: Monitors a chosen metric during training and reduces the learning rate when the metric stops improving for a set number of epochs, helping the model refine its parameters without overfitting.
      4. Finetuned VideoMAE: Uses the pre-trained VideoMAE weights as a strong starting point and lets the model specialize in recognizing human poses in sign videos.
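The warmup and ReduceLROnPlateau strategies above can be sketched in plain Python (illustrative hyperparameter values, not the ones used in training):

```python
# Illustrative values only; the project's actual base LR, warmup length,
# decay factor, and patience may differ.
BASE_LR, WARMUP_STEPS = 1e-3, 100

def warmup_lr(step):
    """Linearly ramp the learning rate up to BASE_LR over WARMUP_STEPS."""
    return BASE_LR * min(1.0, (step + 1) / WARMUP_STEPS)

class ReduceLROnPlateau:
    """Cut the LR by `factor` when the monitored loss stops improving
    for more than `patience` consecutive epochs."""
    def __init__(self, lr, factor=0.5, patience=3):
        self.lr, self.factor, self.patience = lr, factor, patience
        self.best, self.bad_epochs = float("inf"), 0

    def step(self, val_loss):
        if val_loss < self.best:
            self.best, self.bad_epochs = val_loss, 0
        else:
            self.bad_epochs += 1
            if self.bad_epochs > self.patience:
                self.lr *= self.factor
                self.bad_epochs = 0
        return self.lr

print(warmup_lr(0), warmup_lr(99))  # tiny initial LR, full BASE_LR at step 99
```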

We have also implemented the VideoMAE model, proposed in the paper "VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training." We explored fine-tuning techniques including QLoRA, PEFT, joint head-and-backbone fine-tuning, and head-only fine-tuning, with head-only fine-tuning proving the most successful.
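Head-only fine-tuning can be sketched framework-agnostically: freeze every backbone parameter and update only the classification head. A minimal numpy illustration (the parameter names and the stand-in parameter dictionary are hypothetical, not VideoMAE's real structure):

```python
import numpy as np

# Hypothetical parameter dictionary standing in for a pre-trained model:
# everything except the "head." parameters is frozen during fine-tuning.
params = {
    "backbone.block0.weight": np.ones((4, 4)),
    "backbone.block1.weight": np.ones((4, 4)),
    "head.weight": np.zeros((4, 2)),
}
trainable = {name for name in params if name.startswith("head.")}

def sgd_step(grads, lr=0.1):
    """Apply gradients only to the trainable (head) parameters."""
    for name, grad in grads.items():
        if name in trainable:
            params[name] -= lr * grad

grads = {name: np.ones_like(p) for name, p in params.items()}
sgd_step(grads)
print(params["backbone.block0.weight"][0, 0], params["head.weight"][0, 0])
# backbone entries stay at 1.0; head entries move to -0.1
```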

Solution Approach 🎯

Shruti-Drishti tackles the communication gap through a two-fold approach:

  1. Sign Language to Text: A custom Transformer-based Multi-Headed Attention Encoder, operating on Google's MediaPipe keypoints, converts sign language videos into text while addressing challenges related to dynamic sign similarity.

  2. Text to Sign Language: Using an agentic LLM framework, Shruti-Drishti converts text into masked-keypoint sign language videos tailored specifically to Indian Sign Language.
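The core operation of the attention encoder in step 1 is scaled dot-product attention over the per-frame keypoint features. A single-head numpy sketch (toy shapes; the real model is multi-headed with learned projections):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                   # (frames, frames) similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # row-wise softmax
    return weights @ V, weights

# Toy sequence: 5 frames of 8-dim keypoint features (random stand-ins).
rng = np.random.default_rng(0)
x = rng.normal(size=(5, 8))
out, attn = scaled_dot_product_attention(x, x, x)     # self-attention over frames
print(out.shape, attn.shape)  # (5, 8) (5, 5)
```

Each output frame is a weighted mix of all frames, which is what lets the encoder compare dynamically similar signs across the whole clip rather than frame by frame.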

Action Plans 📋

  1. Pose-to-Text Implementation: Develop and implement a Pose-to-Text model based on the referenced paper for the Indian Sign Language dataset, using an agentic LangChain-based state flow as the decoder stage for text-to-gloss conversion and for merging masked-keypoint videos.

  2. Custom Transformer Model Evaluation: Assess the effectiveness of our custom Transformer/LSTM model on the Sign Language Dataset, focusing on accuracy and adaptability to dynamic signs.

  3. Multilingual App Development: Create a user-friendly multilingual app serving as an interface for our Sign Language Translation services, ensuring easy interaction and adoption by both deaf and non-deaf users.

Progress So Far ✅

  • Basic deep learning LSTM model for sign language recognition (Done)
  • Custom multi-headed attention-based encoder for recognizing dynamic signs (Done)
  • Testing the attention model on the full Indian dataset (Done)
  • Pose-to-text implemented with an agentic framework (LangGraph) (Done)
  • Multilingual app built (Done)
  • Demo built and repo updated (Done)

Results 📈

Transformers

Results Image

For detailed results and insights, please refer to our presentation slides.

LSTM

(TODO)

Other Links 🔗

Project Contributors 👥