A PCA-like Autoencoder

04/02/2019
by Said Ladjal, et al.

An autoencoder is a neural network that projects data to and from a lower-dimensional latent space, where the data is easier to understand and model. The autoencoder consists of two sub-networks, the encoder and the decoder, which carry out these transformations. The network is trained so that its output is as close as possible to its input, the data having passed through an information bottleneck: the latent space. This tool bears a significant resemblance to Principal Component Analysis (PCA), with two main differences. Firstly, the autoencoder is a non-linear transformation, contrary to PCA, which makes it more flexible and powerful. Secondly, the axes found by PCA are orthogonal and are ordered by the amount of variability the data presents along them; this makes PCA far more interpretable than the autoencoder, which has neither property. Ideally, then, we would like an autoencoder whose latent space consists of independent components, ordered by decreasing importance to the data. In this paper, we propose an algorithm to create such a network. Firstly, we design an iterative algorithm that progressively increases the size of the latent space, learning a new dimension at each step. Secondly, we propose a covariance loss term to add to the standard autoencoder loss function, as well as a normalisation layer just before the latent space, which together encourage the latent space components to be statistically independent. We demonstrate the results of this autoencoder on simple geometric shapes and find that the algorithm indeed discovers a meaningful latent representation, so that subsequent interpolation in the latent space is meaningful with respect to the geometric properties of the images.
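The two ingredients mentioned above, a covariance penalty on the latent codes and a normalisation layer just before the latent space, can be sketched in a few lines. This is a minimal NumPy illustration under assumptions, not the authors' implementation: the function names are hypothetical, and the penalty is written here as the sum of squared off-diagonal entries of the empirical latent covariance matrix, one common way to encourage decorrelated components.

```python
import numpy as np

def covariance_loss(z):
    """Sum of squared off-diagonal entries of the empirical covariance
    of the latent codes z, shape (batch, k). Zero iff the components
    are empirically uncorrelated. Hypothetical form, for illustration."""
    zc = z - z.mean(axis=0, keepdims=True)      # centre each component
    cov = zc.T @ zc / z.shape[0]                # empirical covariance, (k, k)
    off_diag = cov - np.diag(np.diag(cov))      # keep only cross terms
    return float((off_diag ** 2).sum())

def normalise(z, eps=1e-8):
    """Normalisation layer: rescale each latent component to zero mean
    and unit variance over the batch, so no single component can trivially
    shrink the covariance penalty by collapsing its scale."""
    return (z - z.mean(axis=0)) / (z.std(axis=0) + eps)
```

In training, a term `lambda * covariance_loss(normalise(z))` would be added to the usual reconstruction loss; the iterative part of the algorithm then grows `k` one dimension at a time, learning each new component against the already-trained ones.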


