Embedding Human Knowledge in Deep Neural Network via Attention Map

by Masahiro Mitsuhara et al.

Human-in-the-loop (HITL), which introduces human knowledge into machine learning, has been used in fine-grained recognition to estimate categories from differences in local features. The conventional HITL approach has been successfully applied to non-deep machine learning, but it is difficult to use with deep learning because of the enormous number of model parameters. To tackle this problem, we propose using the Attention Branch Network (ABN), a visual explanation model that applies the attention map used for visual explanation as an attention mechanism. First, we manually modify the attention map obtained from ABN on the basis of human knowledge. Then, we feed the modified attention map into the attention mechanism, which enables ABN to adjust the recognition score. Second, to apply HITL to deep learning, we propose a fine-tuning approach that uses the modified attention map. Our fine-tuning updates the attention and perception branches of ABN using a training loss calculated from the attention map output by ABN together with the modified attention map. This fine-tuning enables ABN to output an attention map corresponding to human knowledge. Additionally, we use the updated attention map, with its embedded human knowledge, in the attention mechanism and in inference at the perception branch, which improves the performance of ABN. Experimental results on the ImageNet, CUB-200-2010, and IDRiD datasets demonstrate that our approach sharpens the attention map in terms of visual explanation and improves classification performance.
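The fine-tuning described above combines the usual classification loss with a term that pulls the ABN's attention map toward the human-modified map. The sketch below is a minimal, framework-free illustration of that idea in NumPy; the exact loss form, the `weight` balancing term, and the function names are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over class logits.
    e = np.exp(logits - logits.max())
    return e / e.sum()

def abn_finetune_loss(logits, label, attention_map, modified_map, weight=1.0):
    """Hypothetical combined loss for HITL fine-tuning:
    cross-entropy on the perception-branch logits plus an L2 term
    between the ABN attention map and the human-modified map."""
    probs = softmax(logits)
    ce = -np.log(probs[label])                         # classification loss
    l2 = np.mean((attention_map - modified_map) ** 2)  # attention-map loss
    return ce + weight * l2

# Toy example: a human highlights a central region of a 7x7 attention map.
logits = np.array([2.0, 0.5, -1.0])
att = np.full((7, 7), 0.5)          # attention map output by the model
mod = np.zeros((7, 7))
mod[2:5, 2:5] = 1.0                 # region the human marks as important
loss = abn_finetune_loss(logits, 0, att, mod)
```

Minimizing this loss jointly updates the network so that its predictions stay correct while its attention map moves toward the human-edited one; when the two maps agree, the loss reduces to the ordinary classification term.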



