
Pulling back information geometry

by   Georgios Arvanitidis, et al.

Latent space geometry provides a rich and rigorous framework for interacting with the latent variables of deep generative models. The existing theory, however, relies on the decoder being a Gaussian distribution, as its simple reparametrization allows the generative process to be interpreted as a random projection of a deterministic manifold. Consequently, this approach breaks down for decoders that are not as easily reparametrized. We here propose to use the Fisher-Rao metric associated with the space of decoder distributions as a reference metric, which we pull back to the latent space. We show that this yields meaningful latent geometries for a wide range of decoder distributions to which the previous theory did not apply, opening the door to 'black box' latent geometries.
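The pullback construction the abstract describes can be sketched concretely. If a decoder maps a latent point z to the parameters of an observation distribution, the Fisher-Rao metric on the space of those distributions pulls back to the latent space as M(z) = J(z)^T F(z) J(z), where J is the decoder's Jacobian and F the Fisher information matrix. The snippet below is a minimal illustration, not the paper's implementation: the toy decoder, the finite-difference Jacobian, and the fixed-variance Gaussian observation model (whose Fisher matrix w.r.t. the mean is I/sigma^2) are all assumptions made for the sketch.

```python
import numpy as np

SIGMA2 = 0.1  # fixed observation variance (assumption for this sketch)

def decoder_mean(z):
    # Hypothetical toy decoder: 2-D latent -> mean of N(mu(z), SIGMA2 * I) in 3-D.
    return np.array([np.sin(z[0]), np.cos(z[1]), z[0] * z[1]])

def decoder_jacobian(z, eps=1e-6):
    # Central finite-difference Jacobian of the decoder mean, shape (3, 2).
    J = np.zeros((3, 2))
    for i in range(2):
        dz = np.zeros(2)
        dz[i] = eps
        J[:, i] = (decoder_mean(z + dz) - decoder_mean(z - dz)) / (2 * eps)
    return J

def pullback_metric(z):
    # Fisher-Rao metric of N(mu, SIGMA2 * I) with respect to mu is I / SIGMA2;
    # pulling it back through the decoder gives M(z) = J^T F J.
    J = decoder_jacobian(z)
    F = np.eye(3) / SIGMA2
    return J.T @ F @ J

M = pullback_metric(np.array([0.3, -0.7]))
print(M.shape)              # (2, 2)
print(np.allclose(M, M.T))  # True: the metric is symmetric
```

Because M is J^T F J with F positive definite, the resulting latent metric is symmetric positive semidefinite, which is what lets it define lengths and geodesics in the latent space. Swapping in another observation family only changes F, which is the point of the paper's 'black box' framing.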




Latent Space Oddity: on the Curvature of Deep Generative Models

Deep generative models provide a systematic way to learn nonlinear data ...

Identifying latent distances with Finslerian geometry

Riemannian geometry provides powerful tools to explore the latent space ...

Geometry-Aware Hamiltonian Variational Auto-Encoder

Variational auto-encoders (VAEs) have proven to be a well suited tool fo...

Toroidal AutoEncoder

Enforcing distributions of latent variables in neural networks is an act...

Mario Plays on a Manifold: Generating Functional Content in Latent Space through Differential Geometry

Deep generative models can automatically create content of diverse types...

On Latent Distributions Without Finite Mean in Generative Models

We investigate the properties of multidimensional probability distributi...

Variational Autoencoders (VAE): Theoretical Foundations and Applications

VAEs are probabilistic graphical models based on neural networks that al...