Disentangling Disentanglement

12/06/2018
by Emile Mathieu, et al.

We develop a generalised notion of disentanglement in Variational Auto-Encoders (VAEs) by casting it as a decomposition of the latent representation, characterised by i) enforcing an appropriate level of overlap in the latent encodings of the data, and ii) regularising the average encoding towards a desired structure, represented through the prior. We motivate this by showing that a) the β-VAE disentangles purely through regularisation of the overlap in latent encodings, and through its average (Gaussian) encoder variance, and b) disentanglement, as independence between latents, can be cast as regularisation of the aggregate posterior towards a prior with specific characteristics. We validate this characterisation by showing that simple manipulations of these factors, such as using rotationally variant priors, can help improve disentanglement, and we discuss how this characterisation provides a more general framework for incorporating notions of decomposition beyond independence between the latents.
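For reference, the two quantities the abstract builds on can be written in standard VAE notation (this is the usual formulation from the literature, not notation taken from the paper itself): the β-VAE objective, whose KL term controls the overlap between latent encodings, and the aggregate posterior, whose fit to the prior is the target of the second regularisation.

% β-VAE objective: β > 1 strengthens the KL penalty, which controls the
% overlap between the latent encodings q_φ(z|x) of different data points.
\mathcal{L}_{\beta}(x) = \mathbb{E}_{q_{\phi}(z \mid x)}\big[\log p_{\theta}(x \mid z)\big] - \beta\, \mathrm{KL}\big(q_{\phi}(z \mid x) \,\|\, p(z)\big)

% Aggregate posterior: the average encoding over the data distribution,
% whose regularisation towards the prior p(z) the abstract describes.
q_{\phi}(z) = \mathbb{E}_{p(x)}\big[q_{\phi}(z \mid x)\big]

Read this way, a single KL term simultaneously affects the per-datapoint overlap and the match of q_φ(z) to the prior; the decomposition in the abstract separates these into components i) and ii).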
