Sign Language Detection Using CNN Model
- Version
- Download 8
- File Size 436.44 KB
- File Count 1
- Create Date 10 July 2025
- Last Updated 16 July 2025
MUPPALA NAGA KEERTHI, PRATHAPARAO KALAHYNDAVI
Assistant Professor; MCA Final Semester,
Master of Computer Applications,
Sanketika Vidya Parishad Engineering College, Vishakhapatnam, Andhra Pradesh, India
Abstract
Developing successful sign language recognition, generation, and translation systems requires expertise from a wide range of fields, including computer vision, computer graphics, natural language processing, human-computer interaction, linguistics, and Deaf culture. The speech- and hearing-impaired community uses sign language as its medium of communication, yet most people unfamiliar with sign language find it difficult to communicate with signers without an interpreter. Sign language recognition involves tracking and recognizing meaningful gestures made with the head, arms, hands, and fingers. The technique implemented here translates sign language gestures into a spoken language that is easily understood by listeners; the translated gestures include alphabets and words recognized from static images. This is especially important when a person who relies entirely on gestural sign language tries to communicate with someone who does not understand it. Many systems currently in use struggle to recognize gestures across different skin tones; by introducing a preprocessing filter, the proposed system identifies the symbols irrespective of skin tone. The aim is to learn discriminative features with a convolutional neural network (CNN), which contains four types of layers: convolution layers, pooling/subsampling layers, nonlinear layers, and fully connected layers.
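The abstract names the four CNN layer types (convolution, pooling/subsampling, nonlinear, fully connected) but does not show how they compose. The following is a minimal NumPy sketch of a single forward pass through each layer type, not the authors' implementation; the image size (8×8), kernel size (3×3), and 26-class output (one per alphabet letter) are illustrative assumptions.

```python
import numpy as np

def conv2d(image, kernel):
    # Convolution layer: slide the kernel over the image ("valid" mode,
    # implemented as cross-correlation, as in standard CNN libraries).
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    # Nonlinear layer: elementwise rectified linear unit.
    return np.maximum(0, x)

def max_pool(x, size=2):
    # Pooling/subsampling layer: keep the max of each size x size block.
    h2, w2 = x.shape[0] // size, x.shape[1] // size
    return x[:h2 * size, :w2 * size].reshape(h2, size, w2, size).max(axis=(1, 3))

def fully_connected(x, weights, bias):
    # Fully connected layer: flatten the feature map and apply a linear map.
    return x.ravel() @ weights + bias

rng = np.random.default_rng(0)
image = rng.random((8, 8))                 # stand-in for a grayscale gesture image
kernel = rng.standard_normal((3, 3))
feat = max_pool(relu(conv2d(image, kernel)))   # conv -> nonlinearity -> pooling
W = rng.standard_normal((feat.size, 26))       # 26 classes, one per letter
logits = fully_connected(feat, W, np.zeros(26))
probs = np.exp(logits - logits.max())
probs /= probs.sum()                           # softmax over the 26 classes
```

A trained system would stack several such conv/ReLU/pool blocks and learn `kernel` and `W` by backpropagation; this sketch only demonstrates how one example flows through the four layer types.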
Index Terms: Convolutional Neural Networks (CNN), Hand Gesture Recognition, Image Segmentation, OpenCV, Region of Interest, Real-Time Prediction, Feature Extraction, Deep Learning.
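The index terms mention a region of interest and image segmentation; the abstract's skin-tone-invariant filter is plausibly a crop-and-threshold preprocessing step of this kind. Below is a hedged NumPy-only sketch (the ROI position, size, and threshold value are illustrative assumptions, and the binarization here is a simple global threshold rather than the paper's exact filter).

```python
import numpy as np

def extract_roi(frame, top, left, size):
    # Crop a square region of interest where the hand is expected to appear.
    return frame[top:top + size, left:left + size]

def to_binary(gray, thresh=128):
    # Binarize the grayscale ROI: the hand silhouette becomes 1, background 0,
    # reducing sensitivity to the absolute brightness of the skin tone.
    return (gray > thresh).astype(np.uint8)

# Synthetic 120x160 grayscale frame with a bright 40x40 "hand" blob.
frame = np.zeros((120, 160), dtype=np.uint8)
frame[30:70, 40:80] = 200

roi = extract_roi(frame, top=20, left=30, size=64)
mask = to_binary(roi)   # binary silhouette fed to the classifier
```

In a real pipeline the frame would come from a webcam (e.g. via OpenCV's `cv2.VideoCapture`), and the binary mask, rather than the raw pixels, would be passed to the CNN for prediction.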