From Principal Subspaces to Principal Components with Linear Autoencoders

04/26/2018
by Elad Plaut

The autoencoder is an effective unsupervised learning model that is widely used in deep learning. It is well known that an autoencoder with a single fully connected hidden layer, a linear activation function, and a squared error cost function learns weights that span the same subspace as the principal component loading vectors, but that the weights themselves are not identical to the loading vectors. In this paper, we show how to recover the loading vectors from the autoencoder weights.
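The distinction in the abstract is easy to demonstrate numerically. Below is a minimal sketch in NumPy (the synthetic data, network sizes, learning rate, and iteration count are all illustrative assumptions, not values from the paper): it trains a linear autoencoder by plain gradient descent, then recovers the loading vectors from the learned subspace by diagonalizing the data covariance restricted to that subspace. That data-based rotation is a standard recovery route used here only for illustration; the paper's own procedure recovers the loading vectors from the autoencoder weights themselves, for which see the full text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic centered data: n samples in p dimensions, latent size m.
# All sizes are illustrative, not taken from the paper.
n, p, m = 2000, 10, 3
X = rng.normal(size=(n, p)) @ rng.normal(size=(p, p))
X -= X.mean(axis=0)
S = X.T @ X / n                        # sample covariance

# Linear autoencoder x_hat = x @ W1 @ W2 with squared-error loss,
# trained by full-batch gradient descent from a small random init.
W1 = 0.01 * rng.normal(size=(p, m))    # encoder weights
W2 = 0.01 * rng.normal(size=(m, p))    # decoder weights
lr = 1e-3
for _ in range(50000):
    E = S @ W1 @ W2 - S                # shared gradient term (factor 2 absorbed in lr)
    W1 -= lr * (E @ W2.T)              # gradient step on the encoder
    W2 -= lr * (W1.T @ E)              # gradient step on the decoder

# The rows of W2 span the principal subspace, but they are mixed by an
# arbitrary invertible m x m matrix, so they are not the loading vectors.
# Recover the loading vectors by diagonalizing the covariance restricted
# to the learned subspace (a standard data-based route, used here for
# illustration in place of the paper's weights-based recovery).
Q, _ = np.linalg.qr(W2.T)              # orthonormal basis of the span, p x m
lam, P = np.linalg.eigh(Q.T @ S @ Q)   # covariance restricted to the subspace
order = np.argsort(lam)[::-1]          # sort directions by decreasing variance
loadings = Q @ P[:, order]             # recovered loading vectors, p x m

# Reference loading vectors from PCA of the covariance itself.
w, V = np.linalg.eigh(S)
V_pca = V[:, np.argsort(w)[::-1][:m]]

# Agreement up to sign: absolute cosine similarities per column.
print(np.abs(np.sum(loadings * V_pca, axis=0)))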

Related research

01/07/2019 · On the effect of the activation function on the distribution of hidden nodes in a deep network
We analyze the joint probability distribution on the lengths of the vect...

05/18/2023 · High-dimensional Asymptotics of Denoising Autoencoders
We address the problem of denoising data from a Gaussian mixture using a...

03/05/2019 · Widely Linear Complex-valued Autoencoder: Dealing with Noncircularity in Generative-Discriminative Models
We propose a new structure for the complex-valued autoencoder by introdu...

08/20/2011 · Complex-Valued Autoencoders
Autoencoders are unsupervised machine learning circuits whose learning g...

01/04/2020 · FrequentNet: A New Deep Learning Baseline for Image Classification
In this paper, we generalize the idea from the method called "PCANet" to...

11/21/2017 · Autoencoder Node Saliency: Selecting Relevant Latent Representations
The autoencoder is an artificial neural network model that learns hidden...

11/06/2022 · The Importance of Suppressing Complete Reconstruction in Autoencoders for Unsupervised Outlier Detection
Autoencoders are widely used in outlier detection due to their superiori...
