DRIVEN FRAMEWORK FOR IDENTIFYING MISINFORMATION USING DEEP LEARNING
The pervasive spread of fake news poses significant challenges to information integrity and public trust. Existing systems built on transformer models such as RoBERTa and BERT have demonstrated commendable performance in detecting fake news through advanced natural language processing techniques. However, there remains a critical need for models that not only achieve high accuracy but also provide interpretability in their predictions. This paper proposes an approach to fake news detection that integrates XLNet, a state-of-the-art transformer model, with Explainable AI (XAI) techniques, specifically SHAP (SHapley Additive exPlanations). Additionally, we incorporate a hybrid model combining FastText for efficient word representation with a Convolutional Neural Network (CNN) for feature extraction, further enhancing the model's ability to understand and classify complex news content. The proposed system aims not only to improve detection accuracy but also to offer transparent insight into the decision-making process, thereby fostering trust and facilitating the identification of misinformation. Extensive experiments on benchmark datasets demonstrate the superiority of our approach in both performance and interpretability, making it a robust tool for combating the proliferation of fake news.
Keywords: Fake News Detection, XLNet, Explainable AI, SHAP, FastText, Hybrid Deep Learning, Natural Language Processing, RoBERTa, BERT, Convolutional Neural Network, Word Representation, Misinformation, Interpretability, Transparency, Model-Agnostic Explanations.
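The hybrid branch described in the abstract, word embeddings feeding a convolutional feature extractor whose pooled features drive a fake/real decision, can be sketched in minimal NumPy. All names and hyperparameters here (the toy vocabulary, `EMB_DIM`, `FILTER_WIDTH`, `N_FILTERS`, and the random, untrained weights standing in for pretrained FastText vectors and learned CNN filters) are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative hyperparameters (assumptions, not from the paper).
VOCAB = {"breaking": 0, "news": 1, "official": 2, "hoax": 3, "claims": 4}
EMB_DIM, FILTER_WIDTH, N_FILTERS = 8, 3, 4

# Embedding table standing in for pretrained FastText word vectors.
embeddings = rng.normal(size=(len(VOCAB), EMB_DIM))
# One bank of convolutional filters over windows of FILTER_WIDTH tokens.
filters = rng.normal(size=(N_FILTERS, FILTER_WIDTH * EMB_DIM))
weights = rng.normal(size=N_FILTERS)  # final linear classification layer

def cnn_features(tokens):
    """Embed tokens, slide the filters over them, max-pool each filter."""
    mat = embeddings[[VOCAB[t] for t in tokens]]          # (seq, emb)
    windows = [mat[i:i + FILTER_WIDTH].ravel()
               for i in range(len(tokens) - FILTER_WIDTH + 1)]
    acts = np.maximum(0, np.array(windows) @ filters.T)   # ReLU, (windows, filters)
    return acts.max(axis=0)                               # max-pool -> (filters,)

def predict_fake_probability(tokens):
    """Sigmoid over pooled CNN features: P(article is fake)."""
    score = cnn_features(tokens) @ weights
    return 1.0 / (1.0 + np.exp(-score))

p = predict_fake_probability(["breaking", "news", "hoax", "claims"])
print(p)  # a probability in (0, 1); the weights here are untrained
```

In the full system the abstract describes, the embedding table and filters would be trained (or FastText vectors loaded) rather than random, and a tool such as SHAP would then attribute the resulting score back to input tokens.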