Intentional Attention Mask Transformation for Robust CNN Classification

05/07/2019
by Masanari Kimura, et al.

Convolutional Neural Networks (CNNs) have achieved impressive results on a variety of tasks, but interpreting their internal mechanisms remains a challenging problem. To tackle this problem, we exploit a multi-channel attention mechanism in feature space. Our network architecture provides an attention mask for each feature, whereas existing CNN visualization methods provide only a single attention mask shared by all features. We apply the proposed multi-channel attention mechanism to a multi-attribute recognition task and obtain a different attention mask for each feature and for each attribute. These analyses give deeper insight into the feature space of CNNs. Furthermore, the proposed attention mechanism naturally leads to a method for improving the robustness of CNNs: based on observations of the feature space through the proposed attention masks, we show that intentionally emphasizing the features that are important for an attribute yields more robust CNNs. Experimental results on a benchmark dataset show that the proposed method offers high human interpretability while accurately capturing the attributes of the data, and improves network robustness.
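The abstract gives no implementation details, so the following is only a minimal sketch, assuming a PyTorch-style setup, of how a per-channel attention mask in feature space might be attached to a CNN backbone for multi-attribute recognition. The module names (MultiChannelAttention, MultiAttributeClassifier) and the 1x1-convolution mask predictor are illustrative assumptions, not the authors' architecture.

```python
# Hypothetical sketch of multi-channel feature-space attention (not the paper's code).
import torch
import torch.nn as nn


class MultiChannelAttention(nn.Module):
    """Predicts one spatial attention mask per feature channel."""

    def __init__(self, in_channels: int):
        super().__init__()
        # Assumption: a 1x1 convolution maps each channel to its own mask.
        self.mask_conv = nn.Conv2d(in_channels, in_channels, kernel_size=1)

    def forward(self, features: torch.Tensor):
        # features: (batch, C, H, W); masks has the same shape, one mask per channel.
        masks = torch.sigmoid(self.mask_conv(features))
        attended = features * masks  # emphasize the features deemed important
        return attended, masks


class MultiAttributeClassifier(nn.Module):
    """CNN backbone + per-channel attention + one binary logit per attribute."""

    def __init__(self, backbone: nn.Module, feat_channels: int, num_attributes: int):
        super().__init__()
        self.backbone = backbone  # any CNN feature extractor returning (B, C, H, W)
        self.attention = MultiChannelAttention(feat_channels)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.head = nn.Linear(feat_channels, num_attributes)

    def forward(self, x: torch.Tensor):
        feats = self.backbone(x)
        attended, masks = self.attention(feats)
        logits = self.head(self.pool(attended).flatten(1))
        # The returned masks can be inspected channel by channel and per attribute.
        return logits, masks
```

In this reading, "intentionally emphasizing features that are important for attributes" would amount to scaling or boosting the channels whose masks correlate with a target attribute before the pooled features reach the classifier head; the exact emphasis rule is not specified in the abstract.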

