Real time interface for deaf-hearing communication
- Create Date 18 March 2025
- Last Updated 18 March 2025
Authors:
Mrs. Jangam Bhargavi*1, Chitikala Sairam*2, Donga Hemanth*3,
Kandula Surya Ganesh*4
*1Assistant Professor, Dept. of CSE (AI & ML), ACE Engineering College, Hyderabad, India.
*2,3,4 Students, Dept. of CSE (AI & ML), ACE Engineering College, Hyderabad, India.
Abstract: This work bridges the communication gap between the deaf and hearing communities using AI by integrating two key modules: Speech-to-Sign Language Translation and Real-Time Sign Gesture Detection. The first module translates spoken English into American Sign Language (ASL) animations through three sub-modules: speech-to-text conversion using Python's speech recognition module, English-text-to-ASL-gloss translation using an NLP model, and ASL-gloss-to-animated-video generation, where DWPose pose estimation and an avatar provide the visual representation. The second module focuses on real-time sign gesture detection: a dataset is assembled from the WLASL and MS-ASL datasets, hand gestures are annotated with a labeling tool, and a YOLO-based model is trained for hand pose detection to enable real-time recognition. The system aims to enhance accessibility and interaction between deaf and hearing users through an efficient, automated translation and recognition pipeline.
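The text-to-gloss step described in the abstract uses an NLP model; as a minimal illustrative sketch only, the sketch below shows the kind of transformation such a model performs, using a toy rule (drop common English function words, uppercase the rest, which mimics ASL gloss notation). The stop-word list and function name are assumptions for illustration, not the authors' actual model.

```python
# Toy English-to-ASL-gloss sketch: a real system would use a trained NLP
# model, as described in the abstract; this rule-based version only mimics
# the surface form of gloss notation (content words, uppercased).
STOP_WORDS = {"a", "an", "the", "is", "are", "am", "to", "of", "in", "be", "was", "were"}

def english_to_gloss(sentence: str) -> str:
    # Strip punctuation, uppercase each token, and drop function words.
    tokens = [w.strip(".,!?").upper() for w in sentence.split()]
    return " ".join(t for t in tokens if t and t.lower() not in STOP_WORDS)

# Example: english_to_gloss("I am going to the store") -> "I GOING STORE"
```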
Keywords: Speech-to-sign translation, real-time sign language recognition, ASL gloss, YOLO hand pose detection, AI for accessibility, deep learning for sign language, gesture recognition, DWpose Pose Estimation, NLP, dataset labeling, real-time gesture recognition.
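For the real-time recognition module, a YOLO-based detector emits class indices and confidence scores per frame; a small post-processing step then maps these to gesture labels. The sketch below illustrates that step under stated assumptions: the gesture class list, threshold, and function name are hypothetical, not taken from the paper.

```python
# Hypothetical post-processing for YOLO gesture detections: filter by
# confidence and map class indices to human-readable gesture labels.
# The class names and default threshold are illustrative assumptions.
GESTURE_CLASSES = ["HELLO", "THANK-YOU", "YES", "NO"]

def decode_detections(detections, conf_threshold=0.5):
    """detections: list of (class_id, confidence) pairs from a YOLO model.

    Returns (label, confidence) pairs for detections above the threshold.
    """
    return [
        (GESTURE_CLASSES[cls], conf)
        for cls, conf in detections
        if conf >= conf_threshold
    ]
```

In a live pipeline this would run on every frame's model output, with the surviving label overlaid on the video feed for the hearing user.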