On the Fisher-Rao Gradient of the Evidence Lower Bound

07/20/2023
by   Nihat Ay, et al.

This article studies the Fisher-Rao gradient, also referred to as the natural gradient, of the evidence lower bound (ELBO), which plays a crucial role in the theory of the Variational Autoencoder, the Helmholtz Machine, and the Free Energy Principle. The natural gradient of the ELBO is related to the natural gradient of the Kullback-Leibler divergence from a target distribution, the primary objective function of learning. Based on invariance properties of gradients within information geometry, conditions on the underlying model are provided that ensure the equivalence of minimising the primary objective function and maximising the ELBO.
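To make the relationship between the two objectives concrete, here is the standard textbook decomposition of the ELBO, which is assumed (not quoted from the paper) to match its setting: for a latent-variable model $p(x,z)$ and a variational distribution $q(z)$,

```latex
\mathcal{L}(q)
  = \mathbb{E}_{q(z)}\!\left[\log p(x,z)\right]
    - \mathbb{E}_{q(z)}\!\left[\log q(z)\right]
  = \log p(x) - D_{\mathrm{KL}}\!\left(q(z)\,\middle\|\,p(z \mid x)\right).
```

Since $\log p(x)$ does not depend on $q$, maximising the ELBO over $q$ is equivalent to minimising the Kullback-Leibler divergence to the posterior. For a parametrised family $q_\theta$, the Fisher-Rao (natural) gradient preconditions the Euclidean gradient with the inverse Fisher information matrix, $\widetilde{\nabla}_\theta \mathcal{L} = G(\theta)^{-1} \nabla_\theta \mathcal{L}$; the article's contribution concerns when this gradient agrees with the natural gradient of the primary KL objective.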


