Concept Saliency Maps to Visualize Relevant Features in Deep Generative Models

10/29/2019
by   Lennart Brocki, et al.

Evaluating, explaining, and visualizing high-level concepts in generative models, such as variational autoencoders (VAEs), is challenging in part due to the lack of known prediction classes that are required to generate saliency maps in supervised learning. While saliency maps may help identify relevant features (e.g., pixels) in the input for classification tasks of deep neural networks, similar frameworks are understudied in unsupervised learning. Therefore, we introduce a new method of obtaining saliency maps for latent representations of known or novel high-level concepts, often called concept vectors, in generative models. Concept scores, analogous to class scores in classification tasks, are defined as dot products between concept vectors and encoded input data, which can be readily used to compute the gradients. The resulting concept saliency maps are shown to highlight input features deemed important for high-level concepts. Our method is applied to the latent space of a VAE trained on the CelebA dataset, in which known attributes such as "smiles" and "hats" are used to elucidate relevant facial features. Furthermore, our application to spatial transcriptomic (ST) data of a mouse olfactory bulb demonstrates the potential of latent representations of morphological layers and molecular features in advancing our understanding of complex biological systems. By extending the popular method of saliency maps to generative models, the proposed concept saliency maps help improve the interpretability of latent variable models in deep learning. Code to reproduce and implement concept saliency maps is available at: https://github.com/lenbrocki/concept-saliency-maps
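The core construction described above — a concept score defined as the dot product between a concept vector and the encoded input, differentiated with respect to the input to obtain a saliency map — can be sketched as follows. This is a minimal illustration, not the authors' implementation: it uses a toy linear encoder so the gradient has a closed form, and it assumes the concept vector is taken as the difference of mean encodings of inputs with and without the attribute; in practice the encoder is a trained VAE and the gradient is computed by automatic differentiation.

```python
import numpy as np

# Toy linear "encoder" z = W x + b (stand-in for a trained VAE encoder).
rng = np.random.default_rng(0)
d_in, d_latent = 16, 4                      # hypothetical toy dimensions
W = rng.standard_normal((d_latent, d_in))
b = rng.standard_normal(d_latent)

def encode(x):
    """Map an input to its latent representation."""
    return W @ x + b

# Concept vector: difference of mean encodings of examples with and
# without the attribute (one common choice; an assumption here).
with_attr = rng.standard_normal((10, d_in)) + 1.0
without_attr = rng.standard_normal((10, d_in))
v = encode(with_attr.mean(axis=0)) - encode(without_attr.mean(axis=0))

# Concept score: dot product of the concept vector and the encoded input.
x = rng.standard_normal(d_in)
score = v @ encode(x)

# Concept saliency map: gradient of the concept score w.r.t. the input.
# For the linear encoder this gradient is W^T v; with a deep encoder,
# autodiff (e.g. torch.autograd.grad) would compute the same quantity.
saliency = W.T @ v
print(saliency.shape)  # one saliency value per input feature
```

With an image encoder, `saliency` would have one entry per pixel and could be overlaid on the input to highlight the features driving the concept, as in the CelebA "smiles" and "hats" examples.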


