Non-linear, Sparse Dimensionality Reduction via Path Lasso Penalized Autoencoders

02/22/2021
by Oskar Allerbo, et al.

High-dimensional data sets are often analyzed and explored via the construction of a latent, low-dimensional space, which enables convenient visualization, efficient predictive modeling, and clustering. For complex data structures, linear dimensionality reduction techniques such as PCA may not be flexible enough to yield a faithful low-dimensional representation. Non-linear techniques, such as kernel PCA and autoencoders, suffer from a loss of interpretability, since each latent variable depends on all input dimensions. To address this limitation, here we present path lasso penalized autoencoders. This structured regularization enhances interpretability by penalizing each path through the encoder from an input to a latent variable, thus restricting how many input variables are represented in each latent dimension. Our algorithm uses a group lasso penalty and non-negative matrix factorization to construct a sparse, non-linear latent representation. We compare the path lasso regularized autoencoder to PCA, sparse PCA, autoencoders, and sparse autoencoders on real and simulated data sets. We show that the algorithm exhibits much lower reconstruction errors than sparse PCA and parameter-wise lasso regularized autoencoders for low-dimensional representations. Moreover, path lasso representations better preserve relative distances between objects in the original and reconstructed spaces.
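As a rough sketch of the path-penalty idea (all names and shapes below are hypothetical, and the paper's actual penalty, which combines a group lasso with non-negative matrix factorization, differs in its details): the total strength of all encoder paths from input i to latent variable j can be summarized by the (i, j) entry of the product of the layers' element-wise absolute weight matrices. Summing those entries yields a penalty whose minimization can remove every path between a given input and a given latent dimension at once.

```python
import numpy as np

def path_penalty(encoder_weights):
    """Illustrative path-lasso-style penalty (a sketch, not the
    authors' exact formulation).

    The combined strength of all paths from input i to latent
    variable j is summarized by the (i, j) entry of the product of
    the element-wise absolute weight matrices of the encoder layers.
    Summing these entries gives a penalty: driving an entry to zero
    severs every path between that input and that latent dimension.
    """
    path_strength = np.abs(encoder_weights[0])
    for w in encoder_weights[1:]:
        path_strength = path_strength @ np.abs(w)
    return path_strength.sum()

# Hypothetical two-layer encoder: 4 inputs -> 3 hidden units -> 2 latent dims.
rng = np.random.default_rng(0)
w1 = rng.normal(size=(4, 3))
w2 = rng.normal(size=(3, 2))
penalty = path_penalty([w1, w2])

# Zeroing the outgoing weights of input 0 removes all of its paths,
# so the penalty strictly decreases.
w1_sparse = w1.copy()
w1_sparse[0, :] = 0.0
assert path_penalty([w1_sparse, w2]) < penalty
```

In training, a term like this would be added to the reconstruction loss, so that the optimizer trades reconstruction accuracy against the number of active input-to-latent paths.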

