Deep equilibrium models as estimators for continuous latent variables

11/11/2022
by Russell Tsuchida et al.

Principal Component Analysis (PCA) and its exponential family extensions have three components: observations, latents, and parameters of a linear transformation. We consider a generalised setting where the canonical parameters of the exponential family are a nonlinear transformation of the latents. We show explicit relationships between particular neural network architectures and the corresponding statistical models. We find that deep equilibrium models, a recently introduced class of implicit neural networks, compute maximum a posteriori (MAP) estimates of the latents and the parameters of the transformation. Our analysis provides a systematic way to relate activation functions, dropout, and layer structure to statistical assumptions about the observations, thus providing foundational principles for unsupervised DEQs. For hierarchical latents, individual neurons can be interpreted as nodes in a deep graphical model. Our DEQ feature maps are end-to-end differentiable, enabling fine-tuning for downstream tasks.
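To make the DEQ-as-estimator connection concrete, below is a minimal sketch, not the authors' implementation, of a deep equilibrium layer solved by naive fixed-point iteration. The weight matrices W and U, the tanh activation, and the stopping rule are illustrative assumptions; under the paper's framing, the fixed point z* plays the role of a MAP estimate of the continuous latents given an observation x.

```python
# Minimal DEQ sketch (illustrative, not the paper's code): iterate
# z <- phi(W @ z + U @ x) until convergence. The fixed point z* can be
# read as a MAP estimate of continuous latents for observation x.
import numpy as np

def deq_fixed_point(x, W, U, phi=np.tanh, tol=1e-6, max_iter=500):
    """Solve z = phi(W z + U x) by naive fixed-point iteration."""
    z = np.zeros(W.shape[0])
    for _ in range(max_iter):
        z_next = phi(W @ z + U @ x)
        if np.linalg.norm(z_next - z) < tol:
            break
        z = z_next
    return z

rng = np.random.default_rng(0)
d_latent, d_obs = 8, 16
W = rng.standard_normal((d_latent, d_latent))
W *= 0.9 / np.linalg.norm(W, 2)  # spectral norm < 1, so the tanh map contracts
U = rng.standard_normal((d_latent, d_obs)) / np.sqrt(d_obs)
x = rng.standard_normal(d_obs)

z_star = deq_fixed_point(x, W, U)  # approximate fixed point / latent estimate
print(z_star)
```

In practice, DEQ implementations typically replace the naive iteration with a faster root-finding solver (e.g. Anderson acceleration or Broyden's method) and differentiate through the fixed point via the implicit function theorem rather than unrolling the iterations, which is what makes the feature maps end-to-end differentiable.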
