TOWARDS TRANSPARENT ARTIFICIAL INTELLIGENCE: A COMPARATIVE STUDY OF EXPLAINABLE AI MODELS FOR DECISION-MAKING IN FINANCIAL RISK ASSESSMENT
Deepak Kumar Patel
Research Scholar
Artificial Intelligence, Kalinga University, Naya Raipur
Introduction
In recent years, the financial industry has witnessed a transformative shift driven by the integration of Artificial Intelligence (AI) into risk management systems. Machine learning algorithms now power a broad spectrum of financial applications, including credit scoring, fraud detection, and loan default prediction. These systems offer unprecedented speed and predictive accuracy, enabling institutions to process complex datasets and uncover risk patterns that traditional statistical methods might miss (Khandani, Kim & Lo, 2010; Liu et al., 2022).

However, the rising reliance on AI-based decision-making has simultaneously introduced significant challenges, chief among them the issue of transparency. Many high-performing AI models, particularly ensemble methods and deep neural networks, function as black-box systems with limited interpretability (Doshi-Velez & Kim, 2017). In domains like finance, where decisions have legal, ethical, and economic consequences, the inability to understand or audit the rationale behind model outputs poses a substantial barrier to trust and regulatory compliance (Samek et al., 2019; Barredo Arrieta et al., 2020). This growing concern has catalyzed research into Explainable Artificial Intelligence (XAI), a subfield of AI aimed at developing methods and tools that render AI decisions understandable to humans without compromising performance (Adadi & Berrada, 2018). The need for explainability is further underscored by global regulatory mandates, such as the General Data Protection Regulation (GDPR), which enshrine the "right to explanation" for automated decisions affecting individuals (Goodman & Flaxman, 2017).

Despite the proliferation of XAI methods, ranging from model-agnostic techniques like SHAP (Lundberg & Lee, 2017) and LIME (Ribeiro, Singh & Guestrin, 2016) to inherently interpretable models such as decision trees and rule-based learners, there remains a critical gap in domain-specific evaluations. Current literature often lacks empirical, comparative analysis tailored to the unique constraints and requirements of financial risk assessment, where both interpretability and predictive performance are vital (Chen et al., 2023; Du et al., 2021).

This study aims to address that gap by evaluating and comparing several prominent XAI techniques in the context of financial risk decision-making. We investigate how different models balance the trade-offs between transparency, accuracy, and computational efficiency when applied to real-world financial datasets. The primary contributions of this research are twofold: (1) a systematic comparison of selected XAI models applied to financial risk assessment tasks, and (2) practical insights into the strengths, limitations, and suitability of these models for adoption in regulatory-sensitive financial environments.
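To make the model-agnostic techniques cited above concrete, the sketch below applies SHAP to a gradient-boosted credit-default classifier. The synthetic dataset, the choice of GradientBoostingClassifier, and the printed feature names are illustrative placeholders assumed for this example, not the experimental setup or results of this study.

```python
# Minimal sketch: SHAP explanations for a credit-risk classifier.
# The synthetic dataset and model choice are illustrative stand-ins,
# not the datasets or models evaluated in this paper.
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Stand-in for a loan-default dataset with anonymous borrower features.
X, y = make_classification(n_samples=2000, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A typical "black-box" ensemble model for default prediction.
model = GradientBoostingClassifier().fit(X_train, y_train)

# TreeExplainer attributes each individual prediction to the input
# features (local explanations, one row of SHAP values per instance).
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)

# Averaging absolute SHAP values per feature yields a global importance
# ranking that an analyst or regulator can inspect and audit.
global_importance = np.abs(shap_values).mean(axis=0)
for i, importance in enumerate(global_importance):
    print(f"feature_{i}: {importance:.4f}")
```

A comparable local explanation could be produced with LIME's LimeTabularExplainer for a single applicant; the comparative evaluation of such techniques is the subject of the sections that follow.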