PROTECTING USERS FROM ONLINE HARASSMENT THROUGH AUTOMATED DETECTION SYSTEMS
- Create Date 12 June 2025
- Last Updated 12 June 2025
Authors:
1 Mr. R. RAMAKRISHNAN, 2 S. VARSHA
1 Associate Professor, Department of Computer Applications, Sri Manakula Vinayagar Engineering College (Autonomous), Puducherry 605008, India
ramakrishnanmca@smvec.ac.in
2 Post Graduate student, Department of Computer Applications, Sri Manakula Vinayagar Engineering College (Autonomous), Puducherry 605008, India
varsha86681@gmail.com
ABSTRACT: Cyberbullying is the harassment or harming of individuals through electronic media such as social media, messaging apps, and online games. It can be particularly damaging emotionally because once private or harmful content is released or made public, it may persist permanently, harming not only the victim's behavior but also their reputation and image. Cyberbullying can take the form of online insults, hostility, or the implicit or explicit release of personal information or images. Traditional machine learning (ML) and natural language processing (NLP) approaches are inadequate for cyberbullying detection because they fail to capture the contextual cues and semantics essential for recognizing subtle forms of bullying, such as veiled insults. With these considerations, this project proposes a hybrid model that combines Long Short-Term Memory (LSTM) networks with deep Convolutional Neural Networks (CNNs) to detect and classify instances of cyberbullying in electronic communication. Word2Vec was used to train custom word embeddings that represent the contextual relations between words. The LSTM component models the sequential nature of the text, learning patterns whose relevant components are spread over time or across a sequence of words, even when they are separated by intervening tokens. The CNN component deepens the analysis by extracting important and relevant local features from longer spans of text input. The experimental results show that the proposed LSTM-CNN model outperforms the baseline methods in terms of both accuracy and efficiency.
With this hybrid model, we present a viable tool for detecting harmful cyberbullying content, ultimately improving the safety of users of digital communication platforms.
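The hybrid architecture described above can be sketched as follows, here using `tf.keras`. The layer sizes, vocabulary size, and sequence length are illustrative assumptions rather than the paper's exact configuration, and the embedding layer would in practice be initialized from the custom Word2Vec vectors:

```python
# Minimal sketch of an LSTM-CNN hybrid text classifier (assumed hyperparameters).
import tensorflow as tf

VOCAB_SIZE = 20000   # assumed vocabulary size
SEQ_LEN = 100        # assumed maximum tokens per message
EMBED_DIM = 100      # typical Word2Vec dimensionality

def build_lstm_cnn(num_classes=2):
    inputs = tf.keras.Input(shape=(SEQ_LEN,), dtype="int32")
    # Embedding layer; in the paper this would be initialized with
    # the custom Word2Vec embeddings.
    x = tf.keras.layers.Embedding(VOCAB_SIZE, EMBED_DIM)(inputs)
    # CNN branch: extracts local n-gram features from the text.
    c = tf.keras.layers.Conv1D(128, 5, activation="relu")(x)
    c = tf.keras.layers.GlobalMaxPooling1D()(c)
    # LSTM branch: captures long-range sequential dependencies,
    # including patterns spread across separated words.
    l = tf.keras.layers.LSTM(64)(x)
    # Merge both views and classify (e.g., bullying vs. non-bullying).
    merged = tf.keras.layers.concatenate([c, l])
    outputs = tf.keras.layers.Dense(num_classes, activation="softmax")(merged)
    return tf.keras.Model(inputs, outputs)

model = build_lstm_cnn()
```

Concatenating the pooled CNN features with the final LSTM state is one common way to combine local and sequential representations; other variants stack the layers instead.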
KEYWORDS: Cyberbullying Detection, Deep Learning, Text Classification, Natural Language Processing (NLP), Online Safety, Sequential Data Processing, Harmful Content Detection, Toxicity Classification.