
Towards Visually Explaining Variational Autoencoders

11/18/2019
by Wenqian Liu, et al.
Northeastern University
University of California, Riverside
Rensselaer Polytechnic Institute

Recent advances in Convolutional Neural Network (CNN) model interpretability have led to impressive progress in visualizing and understanding model predictions. In particular, gradient-based visual attention methods have driven much recent effort in using visual attention maps as a means for visual explanations. A key problem, however, is that these methods are designed for classification and categorization tasks, and their extension to explaining generative models, e.g., variational autoencoders (VAEs), is not trivial. In this work, we take a step towards bridging this crucial gap, proposing the first technique to visually explain VAEs by means of gradient-based attention. We present methods to generate visual attention from the learned latent space, and also demonstrate that such attention explanations serve more than just explaining VAE predictions. We show how these attention maps can be used to localize anomalies in images, demonstrating state-of-the-art performance on the MVTec-AD dataset. We also show how they can be infused into model training, helping bootstrap the VAE into learning improved latent space disentanglement, demonstrated on the dSprites dataset.
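The core idea described in the abstract, backpropagating the learned latent code onto an encoder feature map to obtain a Grad-CAM-style attention map, can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration, not the authors' implementation: the encoder architecture, the names SmallVAEEncoder and latent_attention_map, the layer sizes, the choice of summing over latent dimensions, and the 0.5 anomaly threshold are all assumptions made here for demonstration.

# Minimal sketch of gradient-based attention from a VAE latent space.
# All names, layer sizes, and thresholds are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallVAEEncoder(nn.Module):
    """Toy convolutional encoder producing mu, logvar, and the conv feature map."""
    def __init__(self, latent_dim=16):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.fc_mu = nn.Linear(64 * 16 * 16, latent_dim)
        self.fc_logvar = nn.Linear(64 * 16 * 16, latent_dim)

    def forward(self, x):
        feat = self.features(x)            # keep the last conv feature map
        flat = feat.flatten(1)
        return self.fc_mu(flat), self.fc_logvar(flat), feat

def latent_attention_map(encoder, x):
    """Grad-CAM-style attention: backprop the latent mean onto the last conv
    feature map and weight channels by their spatially averaged gradients."""
    x = x.requires_grad_(True)
    mu, _, feat = encoder(x)
    feat.retain_grad()
    # Sum over latent dimensions so one backward pass collects gradients
    # from the whole latent code (one simple choice among several).
    mu.sum().backward()
    weights = feat.grad.mean(dim=(2, 3), keepdim=True)   # per-channel weights
    cam = F.relu((weights * feat).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear",
                        align_corners=False)
    cam = cam / (cam.amax(dim=(2, 3), keepdim=True) + 1e-8)
    return cam.detach()                                   # (B, 1, H, W) in [0, 1]

if __name__ == "__main__":
    enc = SmallVAEEncoder()
    img = torch.rand(1, 1, 64, 64)          # stand-in for a real image
    attn = latent_attention_map(enc, img)
    # Rough anomaly localization in the spirit of the MVTec-AD experiments:
    # threshold the normalized attention map (0.5 is an arbitrary choice).
    anomaly_mask = (attn > 0.5).float()
    print(attn.shape, anomaly_mask.sum().item())

Thresholding the normalized map, as in the last lines above, is only one simple way to turn such attention into an anomaly localization mask; the paper's actual procedure should be taken from the full text.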



08/13/2020

Towards Visually Explaining Similarity Models

We consider the problem of visually explaining similarity models, i.e., ...
07/19/2022

Multi-view hierarchical Variational AutoEncoders with Factor Analysis latent space

Real-world databases are complex, they usually present redundancy and sh...
11/01/2021

Gradient Frequency Modulation for Visually Explaining Video Understanding Models

In many applications, it is essential to understand why a machine learni...
11/19/2018

Reducing Visual Confusion with Discriminative Attention

Recent developments in gradient-based attention modeling have led to imp...
09/02/2021

GAM: Explainable Visual Similarity and Classification via Gradient Activation Maps

We present Gradient Activation Maps (GAM) - a machinery for explaining p...
11/18/2019

Learning Similarity Attention

We consider the problem of learning similarity functions. While there ha...
04/10/2022

Explaining Deep Convolutional Neural Networks via Latent Visual-Semantic Filter Attention

Interpretability is an important property for visual models as it helps ...
