Loss Landscapes of Regularized Linear Autoencoders

01/23/2019
by Daniel Kunin, et al.

Autoencoders are deep learning models for representation learning. When trained to minimize the Euclidean distance between the data and its reconstruction, linear autoencoders (LAEs) learn the subspace spanned by the top principal directions but cannot learn the principal directions themselves. In this paper, we prove that L_2-regularized LAEs learn the principal directions as the left singular vectors of the decoder, providing an extremely simple and scalable algorithm for rank-k SVD. More generally, we consider LAEs with (i) no regularization, (ii) regularization of the composition of the encoder and decoder, and (iii) regularization of the encoder and decoder separately. We relate the minimum of (iii) to the MAP estimate of probabilistic PCA and show that for all critical points the encoder and decoder are transposes. Building on topological intuition, we smoothly parameterize the critical manifolds for all three losses via a novel unified framework and illustrate these results empirically. Overall, this work clarifies the relationship between autoencoders and Bayesian models and between regularization and orthogonality.
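The rank-k SVD recipe the abstract describes can be sketched in a few lines: train a linear autoencoder with separate L_2 penalties on the encoder and decoder (loss (iii)), then take the left singular vectors of the trained decoder. The NumPy sketch below is illustrative rather than the paper's reference implementation; the synthetic data, full-batch gradient descent, and hyperparameters (lam, lr, steps) are all assumptions.

```python
# Minimal sketch of the rank-k SVD recipe (assumptions: synthetic Gaussian
# data, full-batch gradient descent, illustrative hyperparameters).
import numpy as np

rng = np.random.default_rng(0)

# Centered data matrix with samples as columns: m features, n samples.
m, n, k = 20, 1000, 5
X = rng.standard_normal((m, m)) @ rng.standard_normal((m, n))
X -= X.mean(axis=1, keepdims=True)

W1 = 0.1 * rng.standard_normal((k, m))   # encoder
W2 = 0.1 * rng.standard_normal((m, k))   # decoder
lam, lr, steps = 1e-2, 1e-3, 20000

for _ in range(steps):
    # Loss (iii): (1/n)||X - W2 W1 X||_F^2 + lam(||W1||_F^2 + ||W2||_F^2)
    E = W2 @ (W1 @ X) - X                          # reconstruction residual
    gW2 = (2 / n) * E @ (W1 @ X).T + 2 * lam * W2  # gradient w.r.t. decoder
    gW1 = (2 / n) * W2.T @ E @ X.T + 2 * lam * W1  # gradient w.r.t. encoder
    W1 -= lr * gW1
    W2 -= lr * gW2

# Left singular vectors of the trained decoder approximate the top-k
# principal directions of X.
U_dec = np.linalg.svd(W2, full_matrices=False)[0]

# Check alignment with PCA computed directly from the data.
U_pca = np.linalg.svd(X, full_matrices=False)[0][:, :k]
print(np.abs(np.diag(U_dec.T @ U_pca)))            # entries near 1 = aligned
```

With distinct top eigenvalues, the printed entries should be close to 1, reflecting the paper's claim that the regularized decoder's left singular vectors recover the principal directions themselves, not merely their span.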

Related research

08/31/2021
A manifold learning perspective on representation learning: Learning decoder and representations without an encoder
Autoencoders are commonly used in representation learning. They consist ...

07/13/2020
Regularized linear autoencoders recover the principal components, eventually
Our understanding of learning input-output relationships with neural net...

05/10/2020
A Simple and Scalable Shape Representation for 3D Reconstruction
Deep learning applied to the reconstruction of 3D shapes has seen growin...

05/19/2022
Closing the gap: Exact maximum likelihood training of generative autoencoders using invertible layers
In this work, we provide an exact likelihood alternative to the variatio...

04/03/2022
Fitting an immersed submanifold to data via Sussmann's orbit theorem
This paper describes an approach for fitting an immersed submanifold of ...

10/23/2022
Principal Component Classification
We propose to directly compute classification estimates by learning feat...

05/22/2023
It's Enough: Relaxing Diagonal Constraints in Linear Autoencoders for Recommendation
Linear autoencoder models learn an item-to-item weight matrix via convex...
