
Pulling back information geometry

06/09/2021
by Georgios Arvanitidis et al.

Latent space geometry has proven to be a rich and rigorous framework for interacting with the latent variables of deep generative models. The existing theory, however, relies on the decoder being a Gaussian distribution, as its simple reparametrization allows us to interpret the generating process as a random projection of a deterministic manifold. Consequently, this approach breaks down for decoders that cannot be as easily reparametrized. We propose instead to use the Fisher-Rao metric associated with the space of decoder distributions as a reference metric, which we pull back to the latent space. We show that we can achieve meaningful latent geometries for a wide range of decoder distributions to which the previous theory did not apply, opening the door to "black box" latent geometries.
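The pullback construction is compact: for a decoder h mapping a latent code z to the parameters of a likelihood, the latent metric is M(z) = J_h(z)^T I(h(z)) J_h(z), where J_h is the Jacobian of the decoder and I is the Fisher information matrix of the likelihood family. The sketch below illustrates this for an independent-Bernoulli decoder using JAX autodiff; the toy network `decode`, its weights `W` and `V`, and the latent point are illustrative assumptions, not the paper's architecture.

```python
import jax
import jax.numpy as jnp

# Hypothetical decoder weights for a 2-D latent space and 64 "pixels".
key_w, key_v = jax.random.split(jax.random.PRNGKey(0))
W = 0.5 * jax.random.normal(key_w, (2, 16))
V = 0.5 * jax.random.normal(key_v, (16, 64))

def decode(z):
    """Toy Bernoulli decoder: maps a latent code to pixel probabilities."""
    return jax.nn.sigmoid(jnp.tanh(z @ W) @ V)

def pullback_metric(z):
    """Pullback of the Fisher-Rao metric: M(z) = J(z)^T I(p(z)) J(z).

    For an independent-Bernoulli likelihood the Fisher information in the
    mean parameters is diagonal, I(p) = diag(1 / (p * (1 - p))).
    """
    p = decode(z)                                # (64,) pixel probabilities
    J = jax.jacobian(decode)(z)                  # (64, 2) decoder Jacobian
    fisher_diag = 1.0 / (p * (1.0 - p))          # Bernoulli Fisher info
    return J.T @ (fisher_diag[:, None] * J)      # (2, 2) latent metric

M = pullback_metric(jnp.array([0.3, -0.7]))
print(M.shape)  # (2, 2): a Riemannian metric tensor on the latent space
```

Curve lengths and geodesics computed under M(z) then respect the information geometry of the decoder distributions rather than Euclidean distances in the data space; the Bernoulli family here is only one instance, since any likelihood with a computable Fisher information fits the same recipe.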

