Interpretable Deep Neural Networks using SHAP and LIME for Decision Making in Smart Home Automation
Authors:
Mrs. M. P. Nisha¹, D. Sonam², B. Swathi³, K. Vamshi⁴
¹Assistant Professor, Department of CSE (AI&ML), ACE Engineering College
²,³,⁴Students, Department of CSE (AI&ML), ACE Engineering College
Abstract - Deep Neural Networks (DNNs) are increasingly used in smart home automation for intelligent decision-making based on IoT sensor data. This project develops an interpretable deep neural network model for decision-making in smart home automation using SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations). The focus is on enhancing transparency in AI-driven automation systems by providing clear explanations for model predictions. The approach involves collecting IoT sensor data from smart home environments, training a deep learning model to recognize patterns and make automation decisions, and applying SHAP and LIME to interpret its outputs. These explanations help homeowners and system developers understand why particular actions are triggered, increasing trust in and the reliability of AI-powered automation. A minimal illustrative sketch of this workflow follows the keywords below.
Key Words: Interpretable AI, Deep Learning, SHAP, LIME, Smart Home Automation, IoT, Explainability, Sensor Data, Decision Support Systems.
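The sketch below illustrates the kind of pipeline the abstract describes, not the paper's actual implementation: it assumes synthetic tabular sensor features (temperature, humidity, motion, light) and a small Keras classifier standing in for the paper's DNN and dataset, and it uses only the public shap and lime APIs. All feature names, labels, and data here are hypothetical.

```python
# Minimal sketch: train a small DNN on stand-in smart home sensor data,
# then explain its decisions with SHAP and LIME.
import numpy as np
import shap
import tensorflow as tf
from lime.lime_tabular import LimeTabularExplainer

feature_names = ["temperature", "humidity", "motion", "light"]  # illustrative sensor features
rng = np.random.default_rng(0)
X = rng.random((500, len(feature_names))).astype("float32")     # stand-in sensor readings
y = (X[:, 2] > 0.5).astype("int32")                             # toy rule: motion drives the action

# Small dense network standing in for the paper's DNN
model = tf.keras.Sequential([
    tf.keras.Input(shape=(X.shape[1],)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(X, y, epochs=5, verbose=0)

# --- SHAP: per-feature attributions via the model-agnostic KernelExplainer ---
background = shap.sample(X, 50)                                  # background sample for expectations
shap_explainer = shap.KernelExplainer(lambda d: model.predict(d, verbose=0), background)
shap_values = shap_explainer.shap_values(X[:5])                  # contributions of each sensor feature

# --- LIME: local explanation for a single automation decision ---
def predict_proba(data):
    p = model.predict(data, verbose=0).reshape(-1, 1)
    return np.hstack([1 - p, p])                                 # two-column probabilities for LIME

lime_explainer = LimeTabularExplainer(
    X,
    feature_names=feature_names,
    class_names=["no action", "trigger"],
    mode="classification",
)
explanation = lime_explainer.explain_instance(X[0], predict_proba, num_features=4)
print(explanation.as_list())                                     # (feature condition, weight) pairs
```

In a real deployment, X and y would come from the collected IoT sensor logs, and the SHAP and LIME outputs would be surfaced to homeowners and developers as per-decision explanations of why an automation action was triggered.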