XAI-Increment: A Novel Approach Leveraging LIME Explanations for Improved Incremental Learning

11/02/2022
by Arnab Neelim Mazumder, et al.

Explainability of neural network predictions is essential for understanding feature importance and gaining interpretable insight into model performance. In this work, model explanations are fed back into feed-forward training to help the model generalize better. To this end, a custom weighted loss is proposed, where the weights are generated from the Euclidean distances between true LIME (Local Interpretable Model-Agnostic Explanations) explanations and model-predicted LIME explanations. Further, since all training data is rarely available at once in practical training scenarios, it is imperative to develop a solution that lets the model learn sequentially without losing information about previous data distributions. The proposed framework, XAI-Increment, therefore combines the custom weighted loss with elastic weight consolidation (EWC) to maintain performance across sequential test sets. Finally, the training procedure involving the custom weighted loss shows around a 1% accuracy improvement over traditional loss-based training for the keyword spotting task on the Google Speech Commands dataset, and also shows low loss of information when coupled with EWC in the incremental learning setup.
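The abstract states only that the loss weights are generated from Euclidean distances between true and predicted LIME explanations, not the exact functional form. A minimal sketch, assuming a hypothetical `1 + distance` mapping onto a per-sample cross-entropy weight (both the mapping and the function names are illustrative assumptions, not the paper's implementation):

```python
import math

def explanation_weight(true_expl, pred_expl):
    # Euclidean distance between the LIME explanation vector of the true
    # label and that of the model's prediction; larger disagreement yields
    # a larger weight. The 1 + distance mapping is an assumption.
    dist = math.sqrt(sum((t - p) ** 2 for t, p in zip(true_expl, pred_expl)))
    return 1.0 + dist

def weighted_cross_entropy(probs, target_idx, weight):
    # Per-sample cross-entropy scaled by the explanation-based weight.
    return -weight * math.log(probs[target_idx] + 1e-12)

# Toy example: identical explanations give weight 1 (plain cross-entropy);
# diverging explanations up-weight the sample's loss.
probs = [0.7, 0.2, 0.1]
w_same = explanation_weight([0.5, -0.2], [0.5, -0.2])
w_diff = explanation_weight([0.5, -0.2], [-0.3, 0.4])
loss_same = weighted_cross_entropy(probs, 0, w_same)
loss_diff = weighted_cross_entropy(probs, 0, w_diff)
```

Under this mapping, samples whose explanations disagree with the ground-truth explanation contribute a strictly larger loss, steering training toward predictions whose reasoning matches the true explanation.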
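For the incremental-learning part, the abstract pairs the weighted loss with elastic weight consolidation. EWC's standard formulation adds a quadratic penalty anchoring parameters that were important to previous tasks (importance estimated by the diagonal Fisher information) to their old values; a minimal sketch over flat parameter lists (the `lam` value and list-based layout are illustrative assumptions):

```python
def ewc_penalty(params, old_params, fisher, lam=0.4):
    # Standard EWC regularizer: 0.5 * lam * sum_i F_i * (theta_i - theta*_i)^2
    # where theta*_i are the parameters learned on the previous task and
    # F_i is the diagonal Fisher information estimate for parameter i.
    total = 0.0
    for p, p_old, f in zip(params, old_params, fisher):
        total += f * (p - p_old) ** 2
    return 0.5 * lam * total

# Toy example: only the second parameter matters to the old task (F = 1),
# so moving it away from its old value is penalized.
penalty = ewc_penalty(params=[1.0, 2.0], old_params=[1.0, 0.0], fisher=[0.0, 1.0])
```

In a setup like XAI-Increment, this penalty would be added to the explanation-weighted task loss during each new training stage, so the model can fit new data while retaining performance on earlier distributions.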

