RENs: Relevance Encoding Networks

05/25/2022
by Krithika Iyer, et al.

The manifold assumption for high-dimensional data posits that the data are generated by varying a set of parameters obtained from a low-dimensional latent space. Deep generative models (DGMs) are widely used to learn data representations in an unsupervised way. DGMs parameterize the underlying low-dimensional manifold in the data space using bottleneck architectures such as variational autoencoders (VAEs). The bottleneck dimension of a VAE is treated as a hyperparameter that depends on the dataset and is fixed at design time after extensive tuning. Since the intrinsic dimensionality of most real-world datasets is unknown, there is often a mismatch between the intrinsic dimensionality and the latent dimensionality chosen as a hyperparameter. This mismatch can degrade model performance on representation learning and sample generation tasks. This paper proposes relevance encoding networks (RENs): a novel probabilistic VAE-based framework that uses the automatic relevance determination (ARD) prior in the latent space to learn the data-specific bottleneck dimensionality. The relevance of each latent dimension is learned directly from the data, along with the other model parameters, using stochastic gradient descent and a reparameterization trick adapted to non-Gaussian priors. We leverage the concept of DeepSets to capture permutation-invariant statistical properties in both the data and latent spaces for relevance determination. The proposed framework is general and flexible and can be used with state-of-the-art VAE models that leverage regularizers to impose specific characteristics on the latent space (e.g., disentanglement). Through extensive experiments on synthetic and public image datasets, we show that the proposed model learns the relevant latent bottleneck dimensionality without compromising the representation and generation quality of the samples.
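To give intuition for the core idea, the following is a minimal toy sketch (not the paper's implementation) of how an ARD-style relevance vector can gate latent dimensions in a VAE bottleneck: each dimension carries a learned relevance score, and dimensions whose relevance collapses toward zero are effectively pruned, revealing the data-specific bottleneck dimensionality. The relevance logits here are hand-picked for illustration; in RENs they would be learned jointly with the model parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example: a fixed relevance vector standing in for learned
# ARD relevances. In the real model these logits are trained end-to-end.
latent_dim = 8
relevance_logits = np.array([4.0, 3.5, -6.0, 2.8, -5.0, -7.0, 3.2, -6.5])
relevance = 1.0 / (1.0 + np.exp(-relevance_logits))  # sigmoid gate in [0, 1]

z = rng.standard_normal(latent_dim)  # sample from an approximate posterior
z_gated = relevance * z              # irrelevant dims are shrunk toward zero

# The effective bottleneck dimensionality is the number of "active" dims.
active = relevance > 0.5
print(int(active.sum()))  # prints 4
```

In this toy setup only four of the eight dimensions survive the gate, so the decoder sees an effective bottleneck of dimension four even though the nominal latent size is eight; this is the mismatch the ARD prior is meant to resolve automatically.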

Related Research

05/28/2015 · Automatic Relevance Determination For Deep Generative Models
A recurring problem when building probabilistic latent variable models i...

12/05/2019 · Representing Closed Transformation Paths in Encoded Network Latent Space
Deep generative networks have been widely used for learning mappings fro...

06/18/2012 · Manifold Relevance Determination
In this paper we present a fully Bayesian latent variable model which ex...

12/02/2019 · Rodent: Relevance determination in ODE
From a set of observed trajectories of a partially observed system, we a...

02/29/2020 · Determination of Latent Dimensionality in International Trade Flow
Currently, high-dimensional data is ubiquitous in data science, which ne...

07/16/2021 · ScRAE: Deterministic Regularized Autoencoders with Flexible Priors for Clustering Single-cell Gene Expression Data
Clustering single-cell RNA sequence (scRNA-seq) data poses statistical a...

10/07/2019 · Increasing Expressivity of a Hyperspherical VAE
Learning suitable latent representations for observed, high-dimensional ...
