Simplifying the explanation of deep neural networks with sufficient and necessary feature-sets: case of text classification

During the last decade, deep neural networks (DNNs) have demonstrated impressive performance on a wide range of problems in domains such as medicine, finance, and law. Despite this, they have long been considered black-box systems that provide good results without being able to explain them. The inability to explain a decision, however, poses a serious risk in critical domains such as medicine, where people's lives are at stake. Considerable work has been done to uncover the inner reasoning of deep neural networks. Saliency methods explain model decisions by assigning weights to input features that reflect their contribution to the classifier's decision. However, not all features are necessary to explain a given decision: in practice, a classifier may rely strongly on a subset of features that is by itself sufficient to explain a particular decision. The aim of this article is to propose a method for simplifying the prediction explanations of one-dimensional (1D) convolutional neural networks (CNNs) by identifying sufficient and necessary feature-sets. We also propose an adaptation of Layer-wise Relevance Propagation (LRP) to 1D-CNNs. Experiments on multiple datasets show that the resulting distribution of relevance among features is similar to that obtained with a well-known state-of-the-art model. Moreover, the extracted sufficient and necessary feature-sets appear convincing to human evaluators.
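The abstract does not detail the authors' LRP adaptation, but the general idea of Layer-wise Relevance Propagation through a 1D convolution can be sketched with the standard LRP epsilon rule: each output position's relevance is redistributed to the inputs in its receptive field in proportion to their contribution to the pre-activation. The function name and this NumPy implementation are illustrative assumptions, not the authors' code.

```python
import numpy as np

def lrp_conv1d(x, w, b, R_out, eps=1e-6):
    """LRP epsilon rule for a single 1D convolution (stride 1, no padding).

    x:     (T,)       input sequence               [illustrative sketch]
    w:     (K,)       convolution kernel
    b:     scalar     bias
    R_out: (T-K+1,)   relevance at each output position
    Returns R_in: (T,) relevance redistributed to the input.
    """
    T, K = len(x), len(w)
    # Forward pass: pre-activation at each output position.
    z = np.array([x[t:t + K] @ w + b for t in range(T - K + 1)])
    # Stabilized ratio (epsilon rule avoids division by near-zero activations).
    s = R_out / (z + eps * np.where(z >= 0, 1.0, -1.0))
    R_in = np.zeros(T)
    # Redistribute each output's relevance to the inputs in its receptive field,
    # proportionally to each input's contribution x_i * w_i.
    for t in range(T - K + 1):
        R_in[t:t + K] += x[t:t + K] * w * s[t]
    return R_in
```

With a zero bias and small epsilon, the rule is approximately conservative: the total relevance assigned to the input matches the total relevance at the output, which is the property LRP-based saliency methods rely on when comparing feature contributions.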
