Predicting Behaviour Change in Students with Special Educational Needs using Multimodal Learning Analytics
Cuddapah Hameed1
Department of Computer Science and Engineering
Sri Venkateswara College of Engineering, Karkambadi
Tirupati, India, 517501
mailhameed5@gmail.com
Saritha A2
Department of Computer Science and Engineering
Sri Venkateswara College of Engineering, Karkambadi
Tirupati, India, 517501
saritha.a@svcolleges.edu.in
Abstract— Facial emotion recognition (FER) is an important application of artificial intelligence, enabling machines to recognize and respond to human emotions. This paper presents the design and implementation of a real-time facial emotion recognition system that combines computer vision and deep learning techniques to identify and categorize human emotions from live video input. The system detects faces using Haar cascade classifiers and classifies facial expressions into seven categories: Angry, Disgust, Fear, Happy, Neutral, Sad, and Surprise. A Convolutional Neural Network (CNN) architecture was designed and trained on normalized 48x48-pixel facial images. To improve model generalization and reduce overfitting, data preparation techniques such as rescaling and image augmentation (rotation, shifting, and horizontal flipping) were applied. The network uses multiple convolutional, pooling, and dropout layers to extract hierarchical spatial features, followed by fully connected layers that perform multi-class emotion classification through a softmax activation function. The model was optimized with the categorical cross-entropy loss function and the Adam optimizer to ensure effective training and accurate prediction. For real-time deployment, the trained model was integrated with OpenCV for face detection and Tkinter for the graphical user interface. The application captures live webcam footage, detects facial regions, preprocesses the region of interest, and predicts emotions instantly, displaying confidence scores on screen. Experimental results show that the proposed system delivers reliable performance under controlled conditions and demonstrates a practical application of deep learning in affective computing.
In addition to highlighting the promise of emotion detection systems in fields such as behavioural analytics, education, healthcare, and human-computer interaction, this study identifies directions for future work on multimodal integration and robustness.
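The CNN described above (stacked convolutional, pooling, and dropout layers over 48x48 grayscale inputs, a softmax output over the seven emotion classes, and Adam with categorical cross-entropy) can be sketched in Keras as follows. The specific filter counts, dropout rates, and dense-layer width here are illustrative assumptions, not the authors' exact configuration:

```python
# Hypothetical sketch of the described FER CNN; layer sizes are assumptions.
from tensorflow.keras import layers, models

EMOTIONS = ["Angry", "Disgust", "Fear", "Happy", "Neutral", "Sad", "Surprise"]

def build_fer_cnn(num_classes=len(EMOTIONS)):
    model = models.Sequential([
        layers.Input(shape=(48, 48, 1)),  # normalized 48x48 grayscale face crop
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Dropout(0.25),             # dropout to reduce overfitting
        layers.Conv2D(128, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Dropout(0.25),
        layers.Flatten(),
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.5),
        # softmax over the seven emotion categories
        layers.Dense(num_classes, activation="softmax"),
    ])
    # Adam optimizer with categorical cross-entropy, as stated in the abstract
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

In practice such a model would be trained with rescaled and augmented images (rotation, shifting, horizontal flipping) before being deployed in the real-time pipeline.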
Keywords— Facial Emotion Recognition (FER), Deep Learning, Computer Vision, Convolutional Neural Network (CNN).
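The real-time stage described in the abstract preprocesses each detected face region and reports the predicted emotion with a confidence score. A minimal NumPy sketch of those two steps is shown below; the function names and the assumption that the face crop has already been resized to 48x48 are illustrative, not taken from the paper:

```python
import numpy as np

EMOTIONS = ["Angry", "Disgust", "Fear", "Happy", "Neutral", "Sad", "Surprise"]

def preprocess_face(face_crop):
    """Rescale a 48x48 uint8 grayscale face crop to [0, 1] and add
    batch and channel axes, matching a (1, 48, 48, 1) network input."""
    x = face_crop.astype("float32") / 255.0
    return x.reshape(1, 48, 48, 1)

def top_emotion(probs, labels=EMOTIONS):
    """Return the predicted label and its confidence from a softmax output."""
    i = int(np.argmax(probs))
    return labels[i], float(probs[i])
```

In the full application, the crop would come from OpenCV's Haar cascade face detector on each webcam frame, and the label plus confidence would be drawn onto the Tkinter display.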