REVE: Regularizing Deep Learning with Variational Entropy Bound

10/15/2019
by Antoine Saporta, et al.

Studies of the generalization performance of machine learning algorithms through the lens of information theory suggest that compressed representations can guarantee good generalization, inspiring many compression-based regularization methods. In this paper, we introduce REVE, a new regularization scheme. Noting that compressing the representation itself can be sub-optimal, our first contribution is to identify the variable that is directly responsible for the final prediction. Our method aims at compressing the class-conditional entropy of this variable. Second, we introduce a variational upper bound on this conditional entropy term. Finally, we propose a scheme to instantiate a tractable loss that is integrated within the training procedure of the neural network, and we demonstrate its efficiency on different neural networks and datasets.
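The variational step in the abstract rests on a standard inequality: for any variational distribution q, the conditional entropy H(Z|Y) is upper-bounded by the cross entropy E[-log q(Z|Y)], so minimizing the bound concentrates the class-conditional representation. Below is a minimal numerical sketch of this bound for a 1-D representation, assuming a per-class Gaussian variational family; the function name and the toy data are illustrative and not taken from the paper.

```python
import math
from collections import defaultdict

def variational_cond_entropy_bound(z, y):
    """Upper-bound H(Z|Y) by the cross entropy E[-log q(Z|Y)],
    where q(z|y) is a Gaussian fit to the samples of each class (1-D sketch)."""
    groups = defaultdict(list)
    for zi, yi in zip(z, y):
        groups[yi].append(zi)
    n = len(z)
    bound = 0.0
    for zs in groups.values():
        mu = sum(zs) / len(zs)
        var = sum((v - mu) ** 2 for v in zs) / len(zs) + 1e-6  # avoid log(0)
        # Gaussian negative log-likelihood: 0.5*log(2*pi*var) + (z-mu)^2/(2*var)
        for v in zs:
            bound += (0.5 * math.log(2 * math.pi * var)
                      + (v - mu) ** 2 / (2 * var)) / n
    return bound

# Representations that are concentrated within each class yield a smaller
# bound than spread-out ones, which is what the regularizer rewards.
tight = variational_cond_entropy_bound([0.0, 0.1, 5.0, 5.1], [0, 0, 1, 1])
loose = variational_cond_entropy_bound([0.0, 2.0, 3.0, 9.0], [0, 0, 1, 1])
```

In the training scheme described in the abstract, a term of this kind would be added to the usual classification loss, so that gradient descent trades prediction accuracy against class-conditional compression of the latent variable.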

Related research:
- SHADE: Information-Based Regularization for Deep Learning (04/29/2018)
- Variational Characterizations of Local Entropy and Heat Regularization in Deep Learning (01/29/2019)
- SHARE: Regularization for Deep Learning (04/29/2018)
- Neural Joint Entropy Estimation (12/21/2020)
- Compression based bound for non-compressed network: unified generalization error analysis of large compressible deep neural network (09/25/2019)
- Do Compressed Representations Generalize Better? (09/20/2019)
- Class-Conditional Compression and Disentanglement: Bridging the Gap between Neural Networks and Naive Bayes Classifiers (06/06/2019)
