Ada-SISE: Adaptive Semantic Input Sampling for Efficient Explanation of Convolutional Neural Networks

02/15/2021
by Mahesh Sudhakar, et al.

Explainable AI (XAI) is an active research area that aims to interpret a neural network's decisions and to ensure transparency and trust in task-specific learned models. Perturbation-based model analysis has recently been shown to produce better interpretations, but backpropagation-based techniques still prevail because of their computational efficiency. In this work, we combine both approaches into a hybrid visual explanation algorithm and propose an efficient interpretation method for convolutional neural networks. Our method adaptively selects the most critical features, i.e., those that contribute most to a prediction, and probes the model with the corresponding activated feature maps. Experimental results show that the proposed method can reduce execution time by up to 30% while achieving competitive interpretability, without compromising the quality of the generated explanations.
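
The abstract only sketches the mechanism, so the following is a minimal, hypothetical PyTorch illustration of the general idea: score the feature maps of an intermediate layer with a backpropagation-based rule, adaptively keep only the maps whose scores clear a data-driven threshold, and then use those maps as perturbation masks to probe the model. The layer choice, scoring rule, threshold, and aggregation below are assumptions made for illustration, not the paper's exact procedure.

    # Hypothetical sketch of the hybrid (backpropagation + perturbation) idea,
    # assuming a PyTorch ResNet-50 and an ImageNet-style input of shape
    # (1, 3, 224, 224). Layer choice, scoring rule, and threshold are assumed.
    import torch
    import torch.nn.functional as F
    from torchvision import models

    model = models.resnet50(weights="IMAGENET1K_V1").eval()

    def explain(image, target_class, layer=model.layer4):
        acts, grads = [], []
        h_f = layer.register_forward_hook(lambda m, i, o: acts.append(o))
        h_b = layer.register_full_backward_hook(lambda m, gi, go: grads.append(go[0]))

        logits = model(image)                       # forward pass records feature maps
        logits[0, target_class].backward()          # backward pass records their gradients
        h_f.remove(); h_b.remove()
        A, G = acts[0].detach(), grads[0].detach()  # both (1, C, h, w)

        # Backpropagation step: score each feature map by gradient-weighted energy.
        scores = (A * G).clamp(min=0).sum(dim=(2, 3)).squeeze(0)    # (C,)

        # Adaptive selection: keep only the maps whose score clears a data-driven
        # threshold (here an assumed fraction of the best score).
        keep = scores > 0.5 * scores.max()
        masks = F.interpolate(A[:, keep], size=image.shape[-2:],
                              mode="bilinear", align_corners=False)
        low = masks.amin(dim=(2, 3), keepdim=True)
        high = masks.amax(dim=(2, 3), keepdim=True)
        masks = (masks - low) / (high - low + 1e-8)  # scale each mask to [0, 1]

        # Perturbation step: probe the model with each selected mask and weight
        # the mask by the target-class confidence it preserves.
        saliency = torch.zeros(image.shape[-2:])
        with torch.no_grad():
            for m in masks[0]:
                conf = torch.softmax(model(image * m), dim=1)[0, target_class]
                saliency += conf * m
        return saliency / (saliency.max() + 1e-8)

A fuller pipeline would batch the masked forward passes and could fuse maps from several layers rather than a single one, in the spirit of the block-wise feature aggregation paper listed under Related Research below.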


Related Research

06/10/2021 · Explainable AI, but explainable to whom?
Advances in AI technologies have resulted in superior levels of AI-based...

01/27/2022 · Human Interpretation of Saliency-based Explanation Over Text
While a lot of research in explainable AI focuses on producing effective...

04/16/2022 · Semantic interpretation for convolutional neural networks: What makes a cat a cat?
The interpretability of deep neural networks has attracted increasing at...

12/18/2018 · Explaining Neural Networks Semantically and Quantitatively
This paper presents a method to explain the knowledge encoded in a convo...

02/07/2019 · CHIP: Channel-wise Disentangled Interpretation of Deep Convolutional Neural Networks
With the widespread applications of deep convolutional neural networks (...

10/01/2020 · Explaining Convolutional Neural Networks through Attribution-Based Input Sampling and Block-Wise Feature Aggregation
As an emerging field in Machine Learning, Explainable AI (XAI) has been ...

01/27/2018 · Towards an Understanding of Neural Networks in Natural-Image Spaces
Two major uncertainties, dataset bias and perturbation, prevail in state...