Sign Language Detection for Dumb and Deaf People Using Machine Learning
- Create Date 3 December 2025
- Last Updated 3 December 2025
Sonali Nilesh Kelapure1, Dr. Parminder Kaur Dhingra2
1 Ph.D. Scholar, Department of CSE, JNEC, MGM University.
2 Professor and Director, Department of CSE, JNEC, MGM University.
Abstract:
Communication is a fundamental human need, yet millions of individuals who are deaf or mute face significant barriers in expressing themselves due to the lack of understanding of sign language by the general population. This project presents a machine learning-based system designed to bridge this communication gap by detecting and translating sign language gestures into readable text and audible speech in real time. The proposed solution leverages computer vision and deep learning—specifically, a hybrid Convolution Neural Network (CNN) and Long Short-Term Memory (LSTM) model—to recognize static and dynamic hand gestures captured via webcam. A custom data-set of hand signs is used for training, and real-time prepossessing techniques such as background filtering and hand segmentation are applied for improved accuracy. Once a gesture is detected, it is translated into a corresponding word or sentence, and converted into voice output using a text-to-speech module. This system not only enhances communication for the deaf and mute but also paves the way for more inclusive human-computer Interaction systems.
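To make the background-filtering step in the abstract concrete, the following is a minimal sketch of frame-differencing hand segmentation, assuming grayscale frames represented as NumPy arrays. The function name `segment_hand` and the threshold value are illustrative choices, not the paper's actual implementation, which may use different preprocessing.

```python
import numpy as np

def segment_hand(frame, background, threshold=30):
    """Isolate the moving hand region by differencing the current
    grayscale frame against a static background estimate.

    Returns a binary mask (1 = foreground/hand, 0 = background).
    """
    # Cast to a signed type so the subtraction cannot wrap around.
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return (diff > threshold).astype(np.uint8)

# Synthetic 8x8 "frames": a uniform background with a bright 3x3 patch
# standing in for a hand entering the scene.
background = np.full((8, 8), 50, dtype=np.uint8)
frame = background.copy()
frame[2:5, 2:5] = 200  # the "hand"

mask = segment_hand(frame, background)
print(mask.sum())  # 9 foreground pixels detected
```

In a real pipeline the mask would be cleaned with morphological operations and used to crop the hand region before it is fed to the CNN-LSTM recognizer.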
Keywords: Deaf and Mute Communication, Gesture Recognition, Machine Learning, Real-time Detection, Assistive Technology, Text-to-Speech.