The Role of Mutual Information in Variational Classifiers

10/22/2020
by Matías Vera et al.

Overfitting is a well-known phenomenon in which a model mimics a particular instance of the data too closely (or exactly) and may therefore fail to predict future observations reliably. In practice, this behaviour is controlled by various, sometimes heuristic, regularization techniques, which are motivated by the development of upper bounds on the generalization error. In this work, we study the generalization error of classifiers that rely on stochastic encodings and are trained on the cross-entropy loss, which is widely used in deep learning for classification problems. We derive bounds on the generalization error showing that there exists a regime in which the generalization error is bounded by the mutual information between the input features and the corresponding representations in the latent space, which are randomly generated according to the encoding distribution. Our bounds provide an information-theoretic understanding of generalization in the so-called class of variational classifiers, which are regularized by a Kullback-Leibler (KL) divergence term. These results give theoretical grounds for the highly popular KL term in variational inference methods, which has already been recognized to act effectively as a regularization penalty. We further observe connections with well-studied notions such as Variational Autoencoders, Information Dropout, the Information Bottleneck and Boltzmann Machines. Finally, we perform numerical experiments on the MNIST and CIFAR datasets and show that mutual information is indeed highly representative of the behaviour of the generalization error.
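The abstract describes classifiers built from a stochastic encoder and trained on a cross-entropy loss regularized by a KL divergence term. The following is a minimal PyTorch sketch of such a variational classifier; it is not the authors' code, and the architecture sizes, the standard Gaussian prior and the value of beta are illustrative assumptions. The per-input KL term KL(q(z|x) || p(z)), averaged over inputs, is a standard upper bound on the mutual information I(X; Z) between inputs and latent representations that the abstract relates to the generalization error.

```python
# Minimal sketch of a variational classifier: stochastic encoder q(z|x),
# classifier head on the sampled representation z, and a loss of the form
# cross-entropy + beta * KL(q(z|x) || N(0, I)).
import torch
import torch.nn as nn
import torch.nn.functional as F

class VariationalClassifier(nn.Module):
    def __init__(self, in_dim=784, latent_dim=32, num_classes=10):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU())
        self.mu = nn.Linear(256, latent_dim)       # mean of q(z|x)
        self.logvar = nn.Linear(256, latent_dim)   # log-variance of q(z|x)
        self.classifier = nn.Linear(latent_dim, num_classes)

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: z ~ q(z|x), differentiable in mu and logvar.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.classifier(z), mu, logvar

def loss_fn(logits, labels, mu, logvar, beta=1e-3):
    # Cross-entropy fit term plus the KL regularizer; the average KL to the
    # prior upper-bounds the mutual information I(X; Z) discussed above.
    ce = F.cross_entropy(logits, labels)
    kl = 0.5 * torch.sum(mu.pow(2) + logvar.exp() - 1.0 - logvar, dim=1).mean()
    return ce + beta * kl
```

In this sketch the coefficient beta trades off the fit to the training labels against the amount of information the latent representation retains about the input, which is the quantity the paper connects to generalization.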

Related research

Information-theoretic analysis of generalization capability of learning algorithms (05/22/2017)
We derive upper bounds on the generalization error of a learning algorit...

Upper Bounds on the Generalization Error of Private Algorithms (05/12/2020)
In this work, we study the generalization capability of algorithms from ...

Information Dropout: Learning Optimal Representations Through Noisy Computation (11/04/2016)
The cross-entropy loss commonly used in deep learning is closely related...

Quadratic mutual information regularization in real-time deep CNN models (08/26/2021)
In this paper, regularized lightweight deep convolutional neural network...

The Role of Information Complexity and Randomization in Representation Learning (02/14/2018)
A grand challenge in representation learning is to learn the different e...

On-Average KL-Privacy and its equivalence to Generalization for Max-Entropy Mechanisms (05/08/2016)
We define On-Average KL-Privacy and present its properties and connectio...

Understanding the Behaviour of the Empirical Cross-Entropy Beyond the Training Distribution (05/28/2019)
Machine learning theory has mostly focused on generalization to samples ...
