Learnability for the Information Bottleneck

07/17/2019
by Tailin Wu, et al.

The Information Bottleneck (IB) method (Tishby et al., 2000) provides an insightful and principled approach to balancing compression and prediction in representation learning. The IB objective I(X;Z) - β I(Y;Z) employs a Lagrange multiplier β to tune this trade-off. In practice, however, not only is β chosen empirically without theoretical guidance; there is also a lack of theoretical understanding of the relationship between β, learnability, the intrinsic nature of the dataset, and model capacity. In this paper, we show that if β is improperly chosen, learning cannot happen: the trivial representation P(Z|X)=P(Z) becomes the global minimum of the IB objective. We show how this can be avoided by identifying a sharp phase transition between the unlearnable and the learnable regimes as β is varied. This phase transition defines the concept of IB-Learnability. We prove several sufficient conditions for IB-Learnability, which provide theoretical guidance for choosing a good β. We further show that IB-Learnability is determined by the largest confident, typical, and imbalanced subset of the examples (the conspicuous subset), and we discuss its relation to model capacity. We give practical algorithms to estimate the minimum β for a given dataset. We also empirically demonstrate our theoretical conditions with analyses of synthetic datasets, MNIST, and CIFAR10.
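The phase transition described above can be observed numerically even in a tiny discrete problem. The sketch below is our own illustration, not code from the paper: it brute-force scans binary-Z encoders P(Z|X) for a small assumed joint distribution p(x, y) and reports the minimum of the IB objective L = I(X;Z) - β I(Y;Z) at several β values. The trivial encoder always attains L = 0, so a strictly negative minimum signals that a nontrivial representation wins, i.e., that the problem is IB-learnable at that β. The distribution, grid resolution, and β values are all illustrative assumptions.

```python
# Toy numerical illustration (not from the paper) of the IB learnability
# phase transition: below a critical beta, the trivial encoder
# P(Z|X) = P(Z) globally minimizes L = I(X;Z) - beta * I(Y;Z).
import itertools
import numpy as np

# Assumed joint distribution p(x, y): 3 inputs, 2 labels.
p_xy = np.array([[0.35, 0.05],
                 [0.05, 0.35],
                 [0.10, 0.10]])
p_x = p_xy.sum(axis=1)

def mutual_info(p_ab):
    """Mutual information (in nats) between the two axes of a joint distribution."""
    p_a = p_ab.sum(axis=1, keepdims=True)
    p_b = p_ab.sum(axis=0, keepdims=True)
    mask = p_ab > 0
    return float((p_ab[mask] * np.log(p_ab[mask] / (p_a * p_b)[mask])).sum())

def ib_objective(p_z1_given_x, beta):
    """L = I(X;Z) - beta * I(Y;Z) for a binary-Z stochastic encoder p(z=1|x)."""
    p_z_given_x = np.stack([1.0 - p_z1_given_x, p_z1_given_x], axis=1)  # shape (3, 2)
    p_xz = p_x[:, None] * p_z_given_x        # joint p(x, z)
    p_yz = p_xy.T @ p_z_given_x              # p(y, z) via the Markov chain Z - X - Y
    return mutual_info(p_xz) - beta * mutual_info(p_yz)

# Brute-force scan over encoders; the trivial encoder always scores L = 0,
# so a strictly negative minimum means a nontrivial representation wins.
grid = np.linspace(0.0, 1.0, 21)
for beta in [1.0, 2.0, 4.0, 6.0]:
    best = min(ib_objective(np.array(enc), beta)
               for enc in itertools.product(grid, repeat=3))
    print(f"beta = {beta:3.1f}   min L = {best:+.4f}   learnable: {best < -1e-9}")
```

For β ≤ 1 the trivial encoder is always optimal (by the data processing inequality, I(Y;Z) ≤ I(X;Z), so L ≥ (1-β)I(X;Z) ≥ 0), which is why the transition must occur at some β > 1; for this assumed distribution the scan finds a nontrivial optimum by β = 4. The paper's contribution is to characterize the critical β analytically, via the conspicuous subset, rather than by brute-force search.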

Related research

01/07/2020  Phase Transitions for the Information Bottleneck in Representation Learning
In the Information Bottleneck (IB), when tuning the relative strength be...

09/29/2021  PAC-Bayes Information Bottleneck
Information bottleneck (IB) depicts a trade-off between the accuracy and...

06/26/2018  Phase transition in the knapsack problem
We examine the phase transition phenomenon for the Knapsack problem from...

08/05/2021  Applying the Information Bottleneck Principle to Prosodic Representation Learning
This paper describes a novel design of a neural network-based speech gen...

03/21/2023  A Tale of Two Circuits: Grokking as Competition of Sparse and Dense Subnetworks
Grokking is a phenomenon where a model trained on an algorithmic task fi...

11/22/2021  A Free Lunch from the Noise: Provable and Practical Exploration for Representation Learning
Representation learning lies at the heart of the empirical success of de...

09/03/2018  Minimum Description Length codes are critical
Learning from the data, in Minimum Description Length (MDL), is equivale...
