
Learnability for the Information Bottleneck

07/17/2019
by Tailin Wu, et al.
MIT

The Information Bottleneck (IB) method (Tishby et al., 2000) provides an insightful and principled approach to balancing compression and prediction in representation learning. The IB objective I(X;Z) - β I(Y;Z) employs a Lagrange multiplier β to tune this trade-off. In practice, however, not only is β chosen empirically without theoretical guidance, but there is also little theoretical understanding of the relationship between β, learnability, the intrinsic nature of the dataset, and model capacity. In this paper, we show that if β is improperly chosen, learning cannot happen: the trivial representation P(Z|X)=P(Z) becomes the global minimum of the IB objective. We show how this can be avoided by identifying a sharp phase transition between the unlearnable and the learnable regimes as β is varied. This phase transition defines the concept of IB-Learnability. We prove several sufficient conditions for IB-Learnability, which provide theoretical guidance for choosing a good β. We further show that IB-Learnability is determined by the largest confident, typical, and imbalanced subset of the examples (the conspicuous subset), and discuss its relation to model capacity. We give practical algorithms to estimate the minimum β for a given dataset. Finally, we empirically demonstrate our theoretical conditions with analyses of synthetic datasets, MNIST, and CIFAR-10.
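To make the phase transition concrete, here is a minimal numerical sketch (not the authors' algorithm) for a discrete toy problem. It evaluates the IB objective I(X;Z) - β I(Y;Z) for an explicit encoder P(Z|X), uses the fact that the trivial encoder P(Z|X)=P(Z) scores exactly 0 for every β, and scans β to find where a slightly perturbed encoder first beats the trivial one. The joint distribution, the perturbation size eps, and the β grid are illustrative assumptions, not values from the paper.

```python
import numpy as np

def mutual_info(p_joint):
    """Mutual information (in nats) between the two axes of a 2-D joint distribution."""
    p_a = p_joint.sum(axis=1, keepdims=True)
    p_b = p_joint.sum(axis=0, keepdims=True)
    mask = p_joint > 0
    return float((p_joint[mask] * np.log(p_joint[mask] / (p_a @ p_b)[mask])).sum())

def ib_objective(p_x, p_y_given_x, p_z_given_x, beta):
    """I(X;Z) - beta * I(Y;Z) under the IB Markov chain Z <- X -> Y."""
    p_xz = p_x[:, None] * p_z_given_x                     # joint p(x, z)
    # p(y, z) = sum_x p(x) p(y|x) p(z|x), since Z depends on Y only through X
    p_yz = (p_x[:, None, None] * p_y_given_x[:, :, None]
            * p_z_given_x[:, None, :]).sum(axis=0)
    return mutual_info(p_xz) - beta * mutual_info(p_yz)

# Toy binary problem (illustrative assumption): Y is X flipped with probability 0.1.
p_x = np.array([0.5, 0.5])
p_y_given_x = np.array([[0.9, 0.1],
                        [0.1, 0.9]])

trivial = np.full((2, 2), 0.5)   # P(Z|X) = P(Z): objective is exactly 0 for any beta
identity = np.eye(2)             # fully informative encoder Z = X
eps = 0.1                        # small step from the trivial toward the informative encoder
perturbed = (1 - eps) * trivial + eps * identity

for beta in [0.5, 1.0, 1.5, 2.0, 3.0]:
    val = ib_objective(p_x, p_y_given_x, perturbed, beta)
    print(f"beta = {beta:.1f}: trivial = 0.00000, perturbed = {val:+.5f}")
```

In this run the perturbed objective changes sign between β = 1.5 and β = 2.0: below the transition the trivial representation remains the global minimum, while above it even a slightly informative encoder already does better. For this symmetric channel the small-eps threshold works out to 1/(1 - 2·0.1)² ≈ 1.56. On real data, the paper's algorithms estimate the minimum β from the conspicuous subset rather than by such a brute-force scan.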

Related research

Phase Transitions for the Information Bottleneck in Representation Learning (01/07/2020)
In the Information Bottleneck (IB), when tuning the relative strength be...

PAC-Bayes Information Bottleneck (09/29/2021)
Information bottleneck (IB) depicts a trade-off between the accuracy and...

Phase transition in the knapsack problem (06/26/2018)
We examine the phase transition phenomenon for the Knapsack problem from...

Applying the Information Bottleneck Principle to Prosodic Representation Learning (08/05/2021)
This paper describes a novel design of a neural network-based speech gen...

A Tale of Two Circuits: Grokking as Competition of Sparse and Dense Subnetworks (03/21/2023)
Grokking is a phenomenon where a model trained on an algorithmic task fi...

A Provably Convergent Information Bottleneck Solution via ADMM (02/09/2021)
The Information bottleneck (IB) method enables optimizing over the trade...