
Fisher Auto-Encoders

by Khalil Elkhalil, et al.
University of Minnesota · Duke University

It has been conjectured that the Fisher divergence is more robust to model uncertainty than the conventional Kullback-Leibler (KL) divergence. This motivates the design of a new class of robust generative auto-encoders (AEs) referred to as Fisher auto-encoders. Our approach is to design Fisher AEs by minimizing the Fisher divergence between the intractable joint distribution of observed data and latent variables and the postulated/modeled joint distribution. In contrast to KL-based variational AEs (VAEs), the Fisher AE can exactly quantify the distance between the true and the model-based posterior distributions. Qualitative and quantitative results on the MNIST and CelebA datasets demonstrate the competitive robustness of Fisher AEs compared to other AEs such as VAEs and Wasserstein AEs.
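To make the objective concrete: the Fisher divergence between densities p and q is F(p‖q) = E_{x∼p}[‖∇_x log p(x) − ∇_x log q(x)‖²], i.e. the expected squared difference of their score functions. The sketch below (not from the paper; a minimal illustration assuming 1-D Gaussians, where the scores are available in closed form) estimates it by Monte Carlo:

```python
import numpy as np

def gaussian_score(x, mu, sigma):
    # Score of a 1-D Gaussian: d/dx log N(x; mu, sigma^2) = -(x - mu) / sigma^2
    return -(x - mu) / sigma**2

def fisher_divergence_mc(mu_p, sigma_p, mu_q, sigma_q, n=200_000, seed=0):
    # Monte Carlo estimate of F(p||q) = E_{x~p}[(score_p(x) - score_q(x))^2]
    rng = np.random.default_rng(seed)
    x = rng.normal(mu_p, sigma_p, size=n)
    diff = gaussian_score(x, mu_p, sigma_p) - gaussian_score(x, mu_q, sigma_q)
    return np.mean(diff**2)

# Sanity check: for equal variances sigma, the score difference is the constant
# (mu_p - mu_q) / sigma^2, so F(p||q) = ((mu_p - mu_q) / sigma^2)^2.
est = fisher_divergence_mc(mu_p=0.0, sigma_p=1.0, mu_q=1.0, sigma_q=1.0)
```

In the paper's setting the same quantity is taken between the true joint p(x, z) and the modeled joint, whose scores are tractable even when the densities themselves are not, which is what makes the Fisher divergence attractive here.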



