Ada-SISE: Adaptive Semantic Input Sampling for Efficient Explanation of Convolutional Neural Networks

02/15/2021
by Mahesh Sudhakar, et al.

Explainable AI (XAI) is an active research area that aims to interpret a neural network's decisions, ensuring transparency and trust in task-specific learned models. Recently, perturbation-based model analysis has shown better interpretation, but backpropagation techniques still prevail because of their computational efficiency. In this work, we combine both approaches into a hybrid visual explanation algorithm and propose an efficient interpretation method for convolutional neural networks. Our method adaptively selects the most critical features that contribute most toward a prediction and probes the model with these activated features. Experimental results show that the proposed method can reduce the execution time by up to 30% while achieving competitive interpretability, without compromising the quality of the generated explanations.
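To make the hybrid idea concrete, the following is a minimal PyTorch sketch, not the authors' exact Ada-SISE implementation. It assumes a pretrained torchvision ResNet-50; the gradient-weighted map scoring and the sign-based "adaptive" selection of feature maps are illustrative assumptions standing in for the paper's own criteria.

```python
# Sketch: backpropagation scores feature maps, the positively scoring maps
# are kept adaptively, and each kept map perturbs the input to probe the model.
import torch
import torch.nn.functional as F
from torchvision.models import resnet50, ResNet50_Weights

model = resnet50(weights=ResNet50_Weights.DEFAULT).eval()

# Capture the last conv block's activations and retain their gradients.
feats = {}
def hook(module, inputs, output):
    if output.requires_grad:
        output.retain_grad()
    feats["maps"] = output
model.layer4.register_forward_hook(hook)

def explain(x, class_idx=None):
    """Return a saliency map for input x of shape (1, 3, H, W)."""
    # Backpropagation step: score every feature map by its gradient energy.
    logits = model(x)
    if class_idx is None:
        class_idx = logits[0].argmax().item()
    model.zero_grad()
    logits[0, class_idx].backward()
    maps = feats["maps"][0].detach()              # (C, h, w) activations
    grads = feats["maps"].grad[0]                 # matching gradients
    scores = (grads * maps).flatten(1).sum(dim=1) # per-map importance

    # Adaptive selection (assumption): keep only positively scoring maps
    # rather than a fixed number of them.
    keep = (scores > 0).nonzero().squeeze(1)

    # Perturbation step: upsample each selected map into a soft input mask,
    # probe the model with the masked input, and weight by class confidence.
    saliency = torch.zeros(x.shape[-2:])
    with torch.no_grad():
        for c in keep:
            m = F.interpolate(maps[c][None, None], size=x.shape[-2:],
                              mode="bilinear", align_corners=False)
            m = (m - m.min()) / (m.max() - m.min() + 1e-8)
            out = model(x * m)
            saliency += out.softmax(dim=1)[0, class_idx].item() * m[0, 0]
    return saliency / (saliency.max() + 1e-8)
```

Hypothetical usage: pass a normalized 1x3x224x224 image tensor to explain() and overlay the returned map on the input. Restricting the probing loop to the adaptively selected maps, instead of perturbing every feature map, is what yields the runtime savings the abstract describes.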

Related research

07/26/2022 · From Interpretable Filters to Predictions of Convolutional Neural Networks with Explainable Artificial Intelligence
Convolutional neural networks (CNN) are known for their excellent featur...

06/01/2023 · Discriminative Deep Feature Visualization for Explainable Face Recognition
Despite the huge success of deep convolutional neural networks in face r...

04/16/2022 · Semantic interpretation for convolutional neural networks: What makes a cat a cat?
The interpretability of deep neural networks has attracted increasing at...

05/17/2023 · FICNN: A Framework for the Interpretation of Deep Convolutional Neural Networks
With the continued development of Convolutional Neural Networks (CNNs), t...

12/18/2018 · Explaining Neural Networks Semantically and Quantitatively
This paper presents a method to explain the knowledge encoded in a convo...

02/07/2019 · CHIP: Channel-wise Disentangled Interpretation of Deep Convolutional Neural Networks
With the widespread applications of deep convolutional neural networks (...

01/27/2018 · Towards an Understanding of Neural Networks in Natural-Image Spaces
Two major uncertainties, dataset bias and perturbation, prevail in state...
