Learnable Visual Words for Interpretable Image Recognition

05/22/2022
by Wenxiao Xiao et al.

To interpret the predictions of deep models, attention-based visual cues are widely used to explain why a model makes a particular prediction. Beyond that, the research community has become increasingly interested in reasoning about how deep models arrive at their predictions, and some prototype-based methods employ interpretable representations with corresponding visual cues to reveal the black-box mechanisms behind model behavior. However, these pioneering attempts either learn category-specific prototypes, which weakens their generalization capacity, or demonstrate only a few illustrative examples without any quantitative evaluation of visual interpretability, further limiting their practical use. In this paper, we revisit the concept of visual words and propose Learnable Visual Words (LVW) to interpret model prediction behavior with two novel modules: semantic visual words learning and dual fidelity preservation. Semantic visual words learning relaxes the category-specific constraint, enabling general visual words that are shared across different categories. Beyond employing the visual words for prediction so that they align with the base model, dual fidelity preservation also includes an attention-guided semantic alignment that encourages the learned visual words to focus on the same conceptual regions as the base model when predicting. Experiments on six visual benchmarks demonstrate that our proposed LVW outperforms state-of-the-art methods in both accuracy and model interpretability. Moreover, we present various in-depth analyses to further explore the learned visual words and the generalizability of our method to unseen categories.
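To make the two modules concrete, below is a minimal PyTorch sketch of how such a design could look, based only on the abstract's description. All names (VisualWords, dual_fidelity_loss, the similarity-based word activations) and the specific loss choices (KL divergence for prediction fidelity, MSE for attention alignment) are illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical sketch of the two LVW modules described above.
# Names and loss choices are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class VisualWords(nn.Module):
    """A bank of K visual words shared across all categories,
    i.e., without the category-specific constraint of earlier
    prototype-based methods."""

    def __init__(self, num_words: int, dim: int, num_classes: int):
        super().__init__()
        # Learnable visual-word embeddings, shared across categories.
        self.words = nn.Parameter(torch.randn(num_words, dim))
        # Predictions are made from visual-word activations, so the
        # intermediate representation stays interpretable.
        self.classifier = nn.Linear(num_words, num_classes)

    def forward(self, features: torch.Tensor):
        # features: (B, D) pooled embeddings from the base model.
        # Cosine similarity of each image to each visual word serves
        # as an interpretable activation vector of shape (B, K).
        sims = F.normalize(features, dim=-1) @ F.normalize(self.words, dim=-1).T
        logits = self.classifier(sims)
        return logits, sims


def dual_fidelity_loss(word_logits, base_logits, word_attn, base_attn, alpha=1.0):
    """Dual fidelity preservation: (1) match the base model's
    predictions, and (2) attention-guided semantic alignment so the
    visual words attend to the same conceptual regions."""
    pred_fidelity = F.kl_div(
        F.log_softmax(word_logits, dim=-1),
        F.softmax(base_logits, dim=-1),
        reduction="batchmean",
    )
    attn_alignment = F.mse_loss(word_attn, base_attn)
    return pred_fidelity + alpha * attn_alignment
```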


