Signtalk: Unlocking Communication for Real-Time Sign Language and Audio Translation with Emotion Awareness
1Dinesh Kumar. S, 2Diwakar. S, 3Dr. V. Ramesh Babu, 4Dr. G. Victo Sudha George, 5Dr. Rehkha K.K.
1,2UG Student, 3,4,5Professor, Department of CSE
Dr. M.G.R. Educational and Research Institute, Chennai-95
Email: 1dineshselvakumar0312@gmail.com, 2diwakar2653@gmail.com, 3rameshbabu.cse@drmgrdu.ac.in, 4victosudhageorge@drmgrdu.ac.in
Abstract: SignTalk is a real-time, bi-directional communication system designed to bridge the communication gap between deaf or hard-of-hearing individuals and non-sign-language users. The system translates sign language gestures into text or speech and converts spoken language into sign language representations while preserving emotional context. Computer vision techniques capture hand gestures and facial expressions, which are processed with MediaPipe for accurate landmark extraction. A hybrid Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) model recognizes continuous American Sign Language (ASL) gestures. Speech-to-Text (STT) and Text-to-Speech (TTS) APIs handle audio-to-text conversion, while Natural Language Processing (NLP) techniques enable linguistic mapping. Experimental evaluation with real-time webcam input demonstrates stable performance with low latency and reliable gesture recognition. The proposed system enhances accessibility and inclusivity in education, healthcare, and public-service environments.
Index Terms: Sign Language Translation, Computer Vision, CNN–LSTM, Emotion Awareness, Speech-to-Text, Text-to-Speech
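As an illustrative sketch only (not the paper's implementation), the pipeline described above can be pictured as follows: MediaPipe Hands yields 21 landmarks per hand per frame, each with (x, y, z) coordinates, and a CNN–LSTM consumes a fixed-length window of such frames. The window length `SEQ_LEN` and the helper `frames_to_sequence` below are assumptions for demonstration, not values taken from the paper.

```python
import numpy as np

# Illustrative shapes (assumptions, not the paper's exact configuration):
NUM_LANDMARKS = 21   # hand landmarks per frame, as produced by MediaPipe Hands
COORDS = 3           # (x, y, z) per landmark
SEQ_LEN = 30         # assumed frames per gesture window for the CNN-LSTM input

def frames_to_sequence(landmark_frames):
    """Stack per-frame (21, 3) landmark arrays into one (SEQ_LEN, 63)
    sequence, zero-padding when fewer than SEQ_LEN frames are available."""
    seq = np.zeros((SEQ_LEN, NUM_LANDMARKS * COORDS), dtype=np.float32)
    for i, frame in enumerate(landmark_frames[:SEQ_LEN]):
        seq[i] = np.asarray(frame, dtype=np.float32).reshape(-1)
    return seq

# Toy usage: 25 frames of random landmarks padded to a 30-frame window.
frames = [np.random.rand(NUM_LANDMARKS, COORDS) for _ in range(25)]
sequence = frames_to_sequence(frames)
print(sequence.shape)  # (30, 63)
```

A tensor of this shape is a typical input for a recurrent gesture classifier: per-frame features feed the convolutional stage, and the LSTM models the temporal evolution across the window.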