The Role of Information Complexity and Randomization in Representation Learning

02/14/2018
by Matías Vera, et al.

A grand challenge in representation learning is to learn the different explanatory factors of variation behind high-dimensional data. Encoder models are typically trained to optimize performance on the training data, when the real objective is to generalize well to unseen data. Although there is ample numerical evidence suggesting that noise injection at the representation level during training can improve the generalization ability of encoders, an information-theoretic understanding of this principle remains elusive. This paper presents a sample-dependent bound on the generalization gap of the cross-entropy loss that scales with the information complexity (IC) of the representations, i.e., the mutual information between the inputs and their representations. The IC is investigated empirically for standard multi-layer neural networks trained with SGD on the MNIST and CIFAR-10 datasets; the behaviour of the gap and of the IC appear to be directly correlated, suggesting that SGD selects encoders that implicitly minimize the IC. We specialize the IC to study the role of Dropout in the generalization capacity of deep encoders, and show that it is directly related to the encoder capacity, a measure of how distinguishable samples are from their representations. Our results support some recent regularization methods.
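To make the central quantity concrete: writing X for the input, U for its learned representation, and n for the number of training samples (notation assumed here for illustration; the paper's exact statement and constants are not reproduced in the abstract), the information complexity is

\[
\mathrm{IC} \;=\; I(X;U) \;=\; \mathbb{E}_{p(x,u)}\!\left[\log \frac{p(u \mid x)}{p(u)}\right],
\]

and bounds of this family typically control the gap at a rate of the form

\[
\underbrace{\mathcal{L}_{\mathrm{test}} - \mathcal{L}_{\mathrm{train}}}_{\text{generalization gap}} \;\lesssim\; \sqrt{\frac{I(X;U)}{n}},
\]

up to constants and logarithmic factors. This is a generic sketch of how an IC term enters such bounds, not the paper's precise result.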

