Understanding of Kernels in CNN Models by Suppressing Irrelevant Visual Features in Images

08/25/2021
by Jia-Xin Zhuang, et al.

Deep learning models have shown superior performance in various vision tasks. However, the difficulty of precisely interpreting kernels in convolutional neural networks (CNNs) has become a major obstacle to the wide application of deep learning models in real-world scenarios. Although existing interpretation methods may find certain visual patterns associated with the activation of a specific kernel, those patterns may not be specific or comprehensive enough to interpret the activation of the kernel of interest. In this paper, a simple yet effective optimization method is proposed to interpret the activation of any kernel of interest in CNN models. The basic idea is to simultaneously preserve the activation of the specific kernel and suppress the activations of all other kernels at the same layer. In this way, only the visual information relevant to the activation of the specific kernel is retained in the input. Consistent visual information across multiple modified inputs helps users understand what kind of features are specifically associated with the kernel. Comprehensive evaluation shows that the proposed method helps interpret the activation of specific kernels better than widely used methods, even when two kernels have very similar activation regions for the same input image.
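Concretely, the optimization can be sketched in a few lines of PyTorch: starting from a natural input image, the image is updated so that the chosen kernel's activation stays close to its original value while the activations of all other kernels in the same layer are pushed toward zero. The snippet below is a minimal illustration of this idea rather than the authors' released code; the VGG16 backbone, layer index, kernel index, loss weighting, learning rate, and step count are all assumed for the example.

```python
import torch
import torchvision.models as models

# Minimal sketch of the preserve-and-suppress optimization described above.
# The backbone, layer, kernel index, and all hyperparameters are assumptions
# made for illustration, not the authors' settings.
model = models.vgg16(weights="IMAGENET1K_V1").eval()
for p in model.parameters():
    p.requires_grad_(False)

layer = model.features[10]   # an arbitrary conv layer of interest
kernel_idx = 42              # an arbitrary kernel (output channel) of interest

feats = {}
layer.register_forward_hook(lambda m, i, o: feats.update(feat=o))

original = torch.rand(1, 3, 224, 224)   # stand-in for a real input image
model(original)
target_ref = feats["feat"][:, kernel_idx].detach()  # activation to preserve

image = original.clone().requires_grad_(True)       # the modified input
optimizer = torch.optim.Adam([image], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    model(image)
    feat = feats["feat"]                             # shape (1, C, H, W)
    # Keep the target kernel's activation close to its original value ...
    keep = (feat[:, kernel_idx] - target_ref).pow(2).mean()
    # ... while suppressing every other kernel at the same layer.
    others = torch.cat(
        [feat[:, :kernel_idx], feat[:, kernel_idx + 1:]], dim=1
    )
    loss = keep + 0.1 * others.pow(2).mean()         # 0.1 is an assumed weight
    loss.backward()
    optimizer.step()
```

Running this procedure from several different input images and comparing the visual content that survives in each modified input is what, per the abstract, reveals the features the kernel specifically responds to.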


Related research

Neural Generalization of Multiple Kernel Learning (02/26/2021)
Multiple Kernel Learning is a conventional way to learn the kernel funct...

Feature Activation Map: Visual Explanation of Deep Learning Models for Image Classification (07/11/2023)
Decisions made by convolutional neural networks (CNN) can be understood a...

Adversarial Attacks on the Interpretation of Neuron Activation Maximization (06/12/2023)
The internal functional behavior of trained Deep Neural Networks is noto...

Targeted Kernel Networks: Faster Convolutions with Attentive Regularization (06/01/2018)
We propose Attentive Regularization (AR), a method to constrain the acti...

Brand > Logo: Visual Analysis of Fashion Brands (10/23/2018)
While lots of people may think branding begins and ends with a logo, fas...

Neural Activation Patterns (NAPs): Visual Explainability of Learned Concepts (06/20/2022)
A key to deciphering the inner workings of neural networks is understand...
