Hybrid CNN-Interpreter: Interpret local and global contexts for CNN-based Models

10/31/2022
by Wenli Yang, et al.

Convolutional neural network (CNN) models have achieved strong performance across many domains, but their lack of interpretability remains a major barrier to assurance and regulation during operation, and hence to the acceptance and deployment of AI-assisted applications. Many existing works on interpretability analyze input-output relations, yet the internal logic of the models is left unclarified by current mainstream methods. In this study, we propose a novel hybrid CNN-interpreter that provides: (1) an original forward propagation mechanism that examines layer-specific prediction results for local interpretability; and (2) a new form of global interpretability that captures feature correlation and filter importance effects. By combining local and global interpretability, the hybrid CNN-interpreter enables a solid understanding and monitoring of the model's context throughout the learning process, with detailed and consistent representations. Finally, the proposed interpretability methods are demonstrated to adapt to a variety of CNN-based model structures.
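Since the abstract describes the method only at a high level, the following is a minimal sketch of how its two ingredients could be probed in practice: capturing layer-specific outputs for local inspection, and deriving a simple feature-correlation and filter-importance signal for a global view. This assumes PyTorch and torchvision; the ResNet-18 model, the layer name, and the importance proxy are illustrative assumptions, not the authors' implementation.

    # Minimal sketch (assumptions: PyTorch + torchvision; ResNet-18 and the
    # layer name "layer1.0.conv1" are illustrative, not the paper's setup).
    import torch
    import torch.nn as nn
    import torchvision.models as models

    model = models.resnet18(weights=None).eval()

    # Local view: forward hooks expose each conv layer's output so
    # layer-specific results can be examined during forward propagation.
    activations = {}

    def make_hook(name):
        def hook(module, inputs, output):
            activations[name] = output.detach()
        return hook

    for name, module in model.named_modules():
        if isinstance(module, nn.Conv2d):
            module.register_forward_hook(make_hook(name))

    x = torch.randn(8, 3, 224, 224)  # dummy batch of 8 images
    with torch.no_grad():
        _ = model(x)

    # Global view: a crude proxy for feature correlation and filter
    # importance in one layer, from spatially pooled filter responses.
    feat = activations["layer1.0.conv1"]   # shape (N, C, H, W)
    pooled = feat.mean(dim=(2, 3))         # (N, C): one response per filter
    corr = torch.corrcoef(pooled.T)        # (C, C) filter-filter correlation
    importance = pooled.abs().mean(dim=0)  # (C,) mean activation magnitude
    print(corr.shape, importance.topk(5).indices)

Here the correlation matrix stands in for the paper's feature-correlation analysis and the mean activation magnitude for filter importance; the paper's actual definitions may differ.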

Related research

05/07/2019 · Convolutional Neural Networks Considering Local and Global features for Image Enhancement
In this paper, we propose a novel convolutional neural network (CNN) arc...

11/21/2020 · Densely connected multidilated convolutional networks for dense prediction tasks
Tasks that involve high-resolution dense prediction require a modeling o...

08/08/2017 · Prune the Convolutional Neural Networks with Sparse Shrink
Nowadays, it is still difficult to adapt Convolutional Neural Network (C...

11/07/2019 · Analysis of CNN-based remote-PPG to understand limitations and sensitivities
Deep learning based on convolutional neural network (CNN) has shown prom...

02/03/2023 · A Simple Approach for Local and Global Variable Importance in Nonlinear Regression Models
The ability to interpret machine learning models has become increasingly...

11/04/2021 · How Do Neural Sequence Models Generalize? Local and Global Context Cues for Out-of-Distribution Prediction
After a neural sequence model encounters an unexpected token, can its be...

10/19/2020 · A Framework to Learn with Interpretation
With increasingly widespread use of deep neural networks in critical dec...
