Neural PCA for Flow-Based Representation Learning

08/23/2022
by Shen Li, et al.

Of particular interest is discovering useful representations solely from observations in an unsupervised generative manner. However, despite the strong ability of existing normalizing flows for sample generation and density estimation, the question of whether they provide effective representations for downstream tasks remains mostly unanswered. This paper investigates this problem for the family of generative models that admits exact invertibility. We propose Neural Principal Component Analysis (Neural-PCA), which operates in full dimensionality while capturing principal components in descending order of explained variance. Without exploiting any label information, the recovered principal components store the most informative elements in their leading dimensions and leave the negligible ones in the trailing dimensions, allowing for clear performance improvements of 5%-10% in downstream tasks. These improvements are empirically found to be consistent irrespective of the number of trailing latent dimensions dropped. Our work suggests that necessary inductive bias should be introduced into generative modelling when representation quality is of interest.
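The ordering property the abstract describes, leading dimensions carrying the informative structure so that trailing ones can be dropped cheaply, is the defining behaviour of classical PCA. The sketch below is not the paper's Neural-PCA implementation; it is a minimal classical-PCA illustration (synthetic data, SVD-based projection) of the variance-descending ordering and trailing-dimension truncation that Neural-PCA aims to recover in a flow's latent space.

```python
import numpy as np

# Hypothetical synthetic data with strongly anisotropic variance:
# a few directions dominate, the rest are near-negligible.
rng = np.random.default_rng(0)
scales = np.array([10, 5, 2, 1, 0.5, 0.2, 0.1, 0.05, 0.02, 0.01])
X = rng.normal(size=(500, 10)) * scales

# Center the data and compute principal directions via SVD.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt.T  # projections onto components, in descending-variance order

# Leading dimensions store the most informative elements;
# trailing dimensions carry negligible variance and can be dropped.
var = Z.var(axis=0)
assert np.all(np.diff(var) <= 1e-9)  # variance is non-increasing

retained = var[:5].sum() / var.sum()
print(f"variance retained by leading 5 of 10 dims: {retained:.3f}")
```

Under this construction, keeping only the leading half of the dimensions preserves nearly all of the variance, which mirrors the paper's finding that downstream performance stays consistent regardless of how many trailing latent dimensions are dropped.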


