PCAAE: Principal Component Analysis Autoencoder for organising the latent space of generative networks

06/14/2020
by Chi-Hieu Pham, et al.

Autoencoders and generative models produce some of the most spectacular deep learning results to date. However, understanding and controlling the latent space of these models presents a considerable challenge. Drawing inspiration from principal component analysis and autoencoders, we propose the Principal Component Analysis Autoencoder (PCAAE). This is a novel autoencoder whose latent space satisfies two properties. Firstly, the dimensions are organised in decreasing importance with respect to the data at hand. Secondly, the components of the latent space are statistically independent. We achieve this by progressively increasing the latent space during training, and with a covariance loss applied to the latent codes. The resulting autoencoder produces a latent space which separates the intrinsic attributes of the data into different components of the latent space, in a completely unsupervised manner. We also describe an extension of our approach to the case of powerful, pre-trained GANs. We show results both on synthetic examples of shapes and on a state-of-the-art GAN. For example, we are able to separate the colour shade of hair and skin, the pose of faces, and gender in the CelebA dataset, without accessing any labels. We compare the PCAAE with other state-of-the-art approaches, in particular with respect to the ability to disentangle attributes in the latent space. We hope that this approach will contribute to a better understanding of the intrinsic latent spaces of powerful deep generative models.
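The abstract only sketches the two ingredients of the method: a latent space that is grown progressively during training, and a covariance penalty that decorrelates the latent components. The snippet below is a minimal PyTorch sketch of what such a covariance loss could look like; the names `covariance_loss`, `training_step`, `encoder`, `decoder` and `lambda_cov` are illustrative assumptions and are not taken from the paper's implementation.

```python
import torch
import torch.nn as nn

def covariance_loss(z: torch.Tensor) -> torch.Tensor:
    """Sum of squared off-diagonal entries of the batch covariance of the
    latent codes z (shape: batch_size x latent_dim). Driving this term to
    zero pushes the latent components towards statistical decorrelation."""
    z_centred = z - z.mean(dim=0, keepdim=True)
    cov = (z_centred.T @ z_centred) / (z.shape[0] - 1)
    off_diag = cov - torch.diag(torch.diag(cov))
    return (off_diag ** 2).sum()

# Hypothetical training step combining a reconstruction term with the
# covariance penalty; encoder/decoder are placeholder modules.
def training_step(encoder, decoder, x, lambda_cov=1.0):
    z = encoder(x)
    x_hat = decoder(z)
    recon = nn.functional.mse_loss(x_hat, x)
    return recon + lambda_cov * covariance_loss(z)
```

In the progressive scheme described in the abstract, one latent component would be trained at a time while previously learned components are kept fixed, so that the first component captures the most important factor of variation and later components capture progressively finer ones.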


Related research

04/02/2019 · A PCA-like Autoencoder
An autoencoder is a neural network that projects data to and from a low...

05/19/2020 · Symbolic Pregression: Discovering Physical Laws from Raw Distorted Video
We present a method for unsupervised learning of equations of motion for...

08/09/2023 · Vector quantization loss analysis in VQGANs: a single-GPU ablation study for image-to-image synthesis
This study performs an ablation analysis of Vector Quantized Generative ...

10/10/2019 · Rate-Distortion Optimization Guided Autoencoder for Generative Approach with quantitatively measurable latent space
In the generative model approach of machine learning, it is essential to...

12/15/2022 · Reliable Measures of Spread in High Dimensional Latent Spaces
Understanding geometric properties of natural language processing models...

01/16/2023 · Simplex Autoencoders
Synthetic data generation is increasingly important due to privacy conce...

04/30/2021 · Latent Factor Decomposition Model: Applications for Questionnaire Data
The analysis of clinical questionnaire data comes with many inherent cha...
