Regularized linear autoencoders recover the principal components, eventually

07/13/2020
by Xuchan Bao et al.

Our understanding of learning input-output relationships with neural nets has improved rapidly in recent years, but little is known about the convergence of the underlying representations, even in the simple case of linear autoencoders (LAEs). We show that when trained with proper regularization, LAEs can directly learn the optimal representation: ordered, axis-aligned principal components. We analyze two such regularization schemes: non-uniform ℓ_2 regularization and a deterministic variant of nested dropout [Rippel et al., ICML 2014]. Though both regularization schemes converge to the optimal representation, we show that this convergence is slow due to ill-conditioning that worsens with increasing latent dimension. We show that the inefficiency of learning the optimal representation is not inevitable: we present a simple modification to the gradient descent update that greatly speeds up convergence empirically.

Related research:

- Loss Landscapes of Regularized Linear Autoencoders (01/23/2019). Autoencoders are a deep learning model for representation learning. When...
- The dynamics of representation learning in shallow, non-linear autoencoders (01/06/2022). Autoencoders are the simplest neural network for unsupervised learning, ...
- Learning Compact Convolutional Neural Networks with Nested Dropout (12/22/2014). Recently, nested dropout was proposed as a method for ordering represent...
- On the Regularization of Autoencoders (10/21/2021). While much work has been devoted to understanding the implicit (and expl...
- Dropout Regularization Versus ℓ_2-Penalization in the Linear Model (06/18/2023). We investigate the statistical behavior of gradient descent iterates wit...
- Convergence Analysis and Implicit Regularization of Feedback Alignment for Deep Linear Networks (10/20/2021). We theoretically analyze the Feedback Alignment (FA) algorithm, an effic...
- Infinite All-Layers Simple Foldability (01/24/2019). We study the problem of deciding whether a crease pattern can be folded ...