
Probabilistic Autoencoder using Fisher Information

10/28/2021
by Johannes Zacherl, et al.

Neural networks play a growing role in many scientific disciplines, including physics. Variational Autoencoders (VAEs) are neural networks that can represent the essential information of a high-dimensional data set in a low-dimensional latent space, which has a probabilistic interpretation. In particular, the so-called encoder network, the first part of the VAE, which maps its input onto a position in latent space, additionally provides uncertainty information in terms of a variance around this position. In this work, an extension of the Autoencoder architecture is introduced, the FisherNet. In this architecture, the latent space uncertainty is not generated via an additional information channel in the encoder, but is instead derived from the decoder by means of the Fisher information metric. This architecture has theoretical advantages: it provides an uncertainty quantification derived directly from the model and also accounts for uncertainty cross-correlations. We show experimentally that the FisherNet produces more accurate data reconstructions than a comparable VAE, and its learning performance also appears to scale better with the number of latent space dimensions.
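The core idea above can be illustrated with a toy sketch. For a Gaussian likelihood p(x|z) = N(decoder(z), σ²I), the Fisher information metric in latent space is F(z) = JᵀJ/σ², where J is the decoder Jacobian at z, and its inverse gives a full latent covariance, including the cross-correlations that a VAE's diagonal encoder variance misses. The code below is a minimal illustration of this metric for a linear decoder (where the Jacobian is just the weight matrix), not the paper's actual implementation; all names and shapes are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch (not the paper's code): latent uncertainty from the
# decoder via the Fisher information metric, F(z) = J^T J / sigma^2,
# assuming a Gaussian likelihood p(x|z) = N(decoder(z), sigma^2 I).

rng = np.random.default_rng(0)

latent_dim, data_dim, sigma = 2, 5, 0.1
W = rng.normal(size=(data_dim, latent_dim))  # toy linear decoder weights

def decoder(z):
    """Toy linear decoder x = W z; its Jacobian is simply W."""
    return W @ z

z = rng.normal(size=latent_dim)   # a position in latent space
J = W                             # decoder Jacobian at z (constant here)

F = J.T @ J / sigma**2            # Fisher information metric at z
latent_cov = np.linalg.inv(F)     # latent uncertainty, incl. cross-correlations

# Unlike a VAE's diagonal encoder variance, latent_cov is a full symmetric
# matrix whose off-diagonal entries capture correlated uncertainties.
print(latent_cov.shape)
```

For a nonlinear decoder, J would be obtained by automatic differentiation at each latent position, so the metric (and hence the uncertainty) varies over the latent space.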

