FICNN: A Framework for the Interpretation of Deep Convolutional Neural Networks

05/17/2023
by Hamed Behzadi-Khormouji, et al.

With the continued development of Convolutional Neural Networks (CNNs), there is growing concern regarding the representations they encode internally. Analyzing these internal representations is referred to as model interpretation. While the task of model explanation, i.e., justifying the predictions of such models, has been studied extensively, the task of model interpretation has received less attention. The aim of this paper is to propose a framework for the study of interpretation methods designed for CNN models trained on visual data. More specifically, we first specify the difference between the interpretation and explanation tasks, which are often conflated in the literature. Second, we define a set of six specific factors that can be used to characterize interpretation methods. Third, based on these factors, we propose a framework for positioning interpretation methods. Our framework highlights that only a small fraction of the proposed factors, and combinations thereof, have actually been studied, leaving significant areas unexplored. Following the proposed framework, we discuss existing interpretation methods and examine the evaluation protocols followed to validate them. Finally, the paper highlights the capabilities of these methods in producing feedback that enables interpretation and proposes possible research problems arising from the framework.
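The abstract defines six characterization factors but does not enumerate them, so the sketch below uses hypothetical placeholder factors and values; it is a minimal illustration, not the paper's actual framework, of how positioning methods along such factors can expose unstudied factor combinations:

# Minimal sketch of positioning interpretation methods along a set of
# characterization factors. The factor names and values below are
# hypothetical placeholders: the abstract states that six factors are
# defined but does not name them.
from dataclasses import dataclass
from itertools import product

# Hypothetical factors, each with a small set of possible values.
FACTORS = {
    "scope": ["single-unit", "layer-wise", "whole-model"],
    "feedback": ["visual", "textual", "numeric"],
    "supervision": ["annotation-based", "annotation-free"],
    "timing": ["post-hoc", "built-in"],
    "specificity": ["model-specific", "model-agnostic"],
    "evaluation": ["qualitative", "quantitative"],
}

@dataclass
class InterpretationMethod:
    name: str
    position: dict  # maps each factor name to one of its values

def uncovered_combinations(methods):
    """Yield factor-value combinations not occupied by any surveyed method,
    illustrating how such a framework can highlight unexplored areas."""
    occupied = {tuple(m.position[f] for f in FACTORS) for m in methods}
    for combo in product(*FACTORS.values()):
        if combo not in occupied:
            yield dict(zip(FACTORS, combo))

# Example: position one (hypothetical) method, then count open combinations.
methods = [
    InterpretationMethod(
        name="example-method",
        position={
            "scope": "layer-wise",
            "feedback": "visual",
            "supervision": "annotation-free",
            "timing": "post-hoc",
            "specificity": "model-specific",
            "evaluation": "qualitative",
        },
    )
]
print(sum(1 for _ in uncovered_combinations(methods)), "combinations unexplored")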


