
From Principal Subspaces to Principal Components with Linear Autoencoders

by Elad Plaut et al.

The autoencoder is an effective unsupervised learning model widely used in deep learning. It is well known that an autoencoder with a single fully connected hidden layer, a linear activation function, and a squared error cost function learns weights that span the same subspace as the principal component loading vectors, but the weights themselves are not identical to the loading vectors. In this paper, we show how to recover the loading vectors from the autoencoder weights.
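The subspace property the abstract cites can be checked numerically. The sketch below trains a linear autoencoder by plain full-batch gradient descent on synthetic data and compares the span of the decoder weights with the span of the PCA loading vectors via their orthogonal projectors; the data, learning rate, and iteration count are illustrative choices, not from the paper, and the paper's actual recovery procedure is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Centered synthetic data: 200 samples, 5 features, decaying variances.
X = rng.normal(size=(200, 5)) * np.array([3.0, 2.0, 1.0, 0.3, 0.1])
X -= X.mean(axis=0)

# PCA loading vectors: top-k right singular vectors of the data matrix.
k = 2
_, _, Vt = np.linalg.svd(X, full_matrices=False)
U_pca = Vt[:k].T                          # (5, k), orthonormal columns

# Linear autoencoder x_hat = W2 @ W1 @ x with squared-error loss,
# trained by full-batch gradient descent (illustrative hyperparameters).
W1 = 0.1 * rng.normal(size=(k, 5))        # encoder
W2 = 0.1 * rng.normal(size=(5, k))        # decoder
lr, n = 1e-2, len(X)
for _ in range(3000):
    R = X @ W1.T @ W2.T - X               # reconstruction residual (n, 5)
    gW2 = R.T @ X @ W1.T / n
    gW1 = W2.T @ R.T @ X / n
    W1 -= lr * gW1
    W2 -= lr * gW2

# The decoder columns span the principal subspace, even though W2 itself
# generally differs from U_pca: compare the orthogonal projectors.
P_pca = U_pca @ U_pca.T
Q, _ = np.linalg.qr(W2)                   # orthonormal basis for span(W2)
P_ae = Q @ Q.T
print("max projector difference:", np.abs(P_pca - P_ae).max())  # near zero
```

Comparing projectors rather than the weight matrices directly sidesteps the ambiguity the abstract mentions: any invertible mixing of the loading vectors spans the same subspace and achieves the same reconstruction loss.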




Related Research

On the effect of the activation function on the distribution of hidden nodes in a deep network

We analyze the joint probability distribution on the lengths of the vect...

High-dimensional Asymptotics of Denoising Autoencoders

We address the problem of denoising data from a Gaussian mixture using a...

Widely Linear Complex-valued Autoencoder: Dealing with Noncircularity in Generative-Discriminative Models

We propose a new structure for the complex-valued autoencoder by introdu...

Complex-Valued Autoencoders

Autoencoders are unsupervised machine learning circuits whose learning g...

FrequentNet : A New Deep Learning Baseline for Image Classification

In this paper, we generalize the idea from the method called "PCANet" to...

Autoencoder Node Saliency: Selecting Relevant Latent Representations

The autoencoder is an artificial neural network model that learns hidden...

The Importance of Suppressing Complete Reconstruction in Autoencoders for Unsupervised Outlier Detection

Autoencoders are widely used in outlier detection due to their superiori...

Code Repositories


Principal component analysis using a linear autoencoder
