The f-Divergence Expectation Iteration Scheme

09/26/2019
by Kamélia Daudel, et al.

This paper introduces the f-EI(ϕ) algorithm, a novel iterative scheme which operates on measures and performs f-divergence minimisation in a Bayesian framework. We prove that, for a rich family of values of (f, ϕ), each step of the algorithm yields a systematic decrease in the f-divergence, and we show that an optimum is achieved. In the particular case of a weighted sum of Dirac measures combined with the α-divergence, the calculations involved in the f-EI(ϕ) algorithm reduce to gradient-based computations. Empirical results support the claim that the f-EI(ϕ) algorithm serves as a powerful tool to assist variational methods.
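The abstract does not spell out the f-EI(ϕ) update itself, but the special case it highlights, a weighted sum of Dirac measures paired with the α-divergence, can be sketched with gradient-based computations. Below is a minimal, hypothetical Python illustration (NumPy only): it estimates the α-divergence, in the convention D_α(q‖p) = (1 − ∫ q^α p^(1−α) dx) / (α(1−α)), between a kernel mixture q_λ = Σ_j λ_j k_h(·; θ_j) and an unnormalised target, and updates the mixture weights λ with an exponentiated-gradient step. The Gaussian kernel, the target, the step size and the multiplicative update are illustrative assumptions, not the authors' f-EI(ϕ) scheme.

import numpy as np

# Hypothetical illustration only -- NOT the authors' f-EI(phi) update.
# Setting: q_lambda(x) = sum_j lambda_j * k_h(x; theta_j), fixed particles theta_j;
# we take exponentiated-gradient steps on lambda to decrease a Monte Carlo estimate
# of the alpha-divergence against an unnormalised target gamma.

rng = np.random.default_rng(0)

def log_target(x):
    # log gamma(x), unnormalised (assumption: standard Gaussian target).
    return -0.5 * np.sum(x**2, axis=-1)

def log_kernel(x, theta, h=1.0):
    # log k_h(x; theta_j) for a Gaussian kernel of bandwidth h, all j at once.
    d = x[:, None, :] - theta[None, :, :]                        # (n, J, dim)
    return (-0.5 * np.sum(d**2, axis=-1) / h**2
            - 0.5 * x.shape[-1] * np.log(2 * np.pi * h**2))      # (n, J)

def alpha_div_and_grad(lmbda, theta, alpha=0.5, n_samples=2000, h=1.0):
    # Monte Carlo estimate of D_alpha(q_lambda || gamma) and its gradient in lambda,
    # with D_alpha(q||p) = (1 - int q^alpha p^(1-alpha)) / (alpha (1 - alpha)), 0 < alpha < 1.
    J, dim = theta.shape
    idx = rng.choice(J, size=n_samples, p=lmbda)                 # pick mixture components
    x = theta[idx] + h * rng.standard_normal((n_samples, dim))   # sample from q_lambda
    log_k = log_kernel(x, theta, h)
    log_q = np.logaddexp.reduce(log_k + np.log(lmbda), axis=1)
    g = np.exp((1.0 - alpha) * (log_target(x) - log_q))          # (gamma/q)^(1-alpha)
    resp = np.exp(log_k - log_q[:, None])                        # k_j(x)/q(x)
    # d/d lambda_j of int q^alpha gamma^(1-alpha) dx = alpha * E_q[(k_j/q)(gamma/q)^(1-alpha)],
    # so the gradient of D_alpha in lambda_j is E_q[(k_j/q)(gamma/q)^(1-alpha)] / (alpha - 1).
    grad = np.mean(g[:, None] * resp, axis=0) / (alpha - 1.0)
    div = (1.0 - np.mean(g)) / (alpha * (1.0 - alpha))
    return div, grad

def exp_grad_step(lmbda, grad, lr=0.05):
    # Exponentiated-gradient (mirror-descent) step; keeps lambda on the simplex.
    new = lmbda * np.exp(-lr * grad)
    return new / new.sum()

# Usage: 10 fixed particles in 2D, uniform initial weights.
theta = 2.0 * rng.standard_normal((10, 2))
lmbda = np.full(10, 1.0 / 10)
for _ in range(50):
    div, grad = alpha_div_and_grad(lmbda, theta, alpha=0.5)
    lmbda = exp_grad_step(lmbda, grad)
print("estimated alpha-divergence:", div, "weights:", np.round(lmbda, 3))

In this sketch only the weights λ are adapted and the particle locations θ_j stay fixed, mirroring the "weighted sum of Dirac measures" setting mentioned in the abstract; the actual f-EI(ϕ) updates and their monotonicity guarantees are given in the full paper.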


