Maximizing the Bregman divergence from a Bregman family

01/23/2020, by Johannes Rauh, et al.

The problem of maximizing the information divergence from an exponential family is generalized to the setting of Bregman divergences and suitably defined Bregman families.
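To make the abstract's central object concrete, here is a minimal sketch (not from the paper) of a generic Bregman divergence D_F(p, q) = F(p) - F(q) - ⟨∇F(q), p - q⟩, showing how the squared Euclidean distance and the Kullback-Leibler (information) divergence arise as special cases of the generator F; the function names are illustrative assumptions, not the authors' notation.

```python
import numpy as np

def bregman_divergence(F, grad_F, p, q):
    """Bregman divergence D_F(p, q) = F(p) - F(q) - <grad F(q), p - q>."""
    return F(p) - F(q) - np.dot(grad_F(q), p - q)

# Generator F(x) = ||x||^2 recovers the squared Euclidean distance.
F_sq = lambda x: np.dot(x, x)
grad_sq = lambda x: 2 * x

# Generator F(x) = sum_i x_i log x_i (negative Shannon entropy) recovers
# the Kullback-Leibler divergence on the probability simplex.
F_ent = lambda x: np.sum(x * np.log(x))
grad_ent = lambda x: np.log(x) + 1

p = np.array([0.2, 0.3, 0.5])
q = np.array([0.4, 0.4, 0.2])

d_sq = bregman_divergence(F_sq, grad_sq, p, q)   # equals ||p - q||^2
d_kl = bregman_divergence(F_ent, grad_ent, p, q) # equals KL(p || q)
```

Maximizing D_F(p, q) over q in a Bregman family generalizes the well-studied problem of maximizing KL divergence from an exponential family, which corresponds to the entropy generator above.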


Related research

- 07/26/2021: Tsallis and Rényi deformations linked via a new λ-duality. Tsallis and Rényi entropies, which are monotone transformations of such ...
- 02/22/2022: The duo Fenchel-Young divergence. By calculating the Kullback-Leibler divergence between two probability m...
- 01/18/2019: Gambling and Rényi Divergence. For gambling on horses, a one-parameter family of utility functions is p...
- 01/07/2022: Bregman divergence based em algorithm and its application to classical and quantum rate distortion theory. We formulate em algorithm in the framework of Bregman divergence, which ...
- 11/07/2020: When Optimizing f-divergence is Robust with Label Noise. We show when maximizing a properly defined f-divergence measure with res...
- 10/09/2014: Distributed Estimation, Information Loss and Exponential Families. Distributed learning of probabilistic models from multiple data reposito...
- 08/11/2020: Conditions for the existence of a generalization of Rényi divergence. We give necessary and sufficient conditions for the existence of a gener...
