The Hessian Penalty: A Weak Prior for Unsupervised Disentanglement

08/24/2020
by William Peebles, et al.

Existing disentanglement methods for deep generative models rely on hand-picked priors and complex encoder-based architectures. In this paper, we propose the Hessian Penalty, a simple regularization term that encourages the Hessian of a generative model with respect to its input to be diagonal. We introduce a model-agnostic, unbiased stochastic approximation of this term based on Hutchinson's estimator to compute it efficiently during training. Our method can be applied to a wide range of deep generators with just a few lines of code. We show that training with the Hessian Penalty often causes axis-aligned disentanglement to emerge in latent space when applied to ProGAN on several datasets. Additionally, we use our regularization term to identify interpretable directions in BigGAN's latent space in an unsupervised fashion. Finally, we provide empirical evidence that the Hessian Penalty encourages substantial shrinkage when applied to over-parameterized latent spaces.
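The abstract's estimator admits a compact implementation. Below is a minimal PyTorch sketch of a Hutchinson-style stochastic Hessian Penalty, not the authors' released code: the generator handle G, the probe count k, the finite-difference step eps, and the loss weight lambda_hp are illustrative assumptions, and the aggregation over output elements is a plain mean rather than the paper's exact reduction.

```python
import torch

def hessian_penalty(G, z, k=2, eps=0.1):
    """Stochastic estimate of the off-diagonal Hessian mass of G's output w.r.t. z.

    Hutchinson-style probing: the variance of the second directional derivative
    v^T H v over random Rademacher vectors v is proportional to the sum of
    squared off-diagonal Hessian entries, so minimizing it pushes H toward diagonal.
    """
    center = G(z)                                  # G(z), reused for every probe
    second_dirs = []
    for _ in range(k):
        v = torch.randint_like(z, 0, 2) * 2 - 1    # Rademacher +/-1 probe vector
        # Central finite difference: (G(z + eps*v) - 2 G(z) + G(z - eps*v)) / eps^2 ~ v^T H v
        d2 = (G(z + eps * v) - 2.0 * center + G(z - eps * v)) / (eps ** 2)
        second_dirs.append(d2)
    second_dirs = torch.stack(second_dirs, dim=0)  # shape: (k, *output_shape)
    # Unbiased variance across the k probes, averaged over output elements
    return second_dirs.var(dim=0, unbiased=True).mean()
```

In use, the penalty would simply be added to the generator loss with a weighting hyperparameter, e.g. `loss = g_loss + lambda_hp * hessian_penalty(G, z)`, which matches the abstract's claim that the method can be dropped into existing generators with a few lines of code.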


