
Perturbation Theory for the Information Bottleneck

by   Vudtiwat Ngampruetikorn, et al.

Extracting relevant information from data is crucial for all forms of learning. The information bottleneck (IB) method formalizes this, offering a mathematically precise and conceptually appealing framework for understanding learning phenomena. However, the nonlinearity of the IB problem makes it computationally expensive and analytically intractable in general. Here we derive a perturbation theory for the IB method and report the first complete characterization of the learning onset, the limit of maximum relevant information per bit extracted from data. We test our results on synthetic probability distributions, finding good agreement with the exact numerical solution near the onset of learning. We explore the differences and subtleties between our derivation and previous attempts at deriving a perturbation theory for the learning onset, and attribute the discrepancy to a flawed assumption. Our work also provides a fresh perspective on the intimate relationship between the IB method and the strong data processing inequality.
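For context, the exact numerical solution the abstract compares against is typically obtained with the standard self-consistent iterative IB algorithm (Tishby, Pereira & Bialek, 1999), not the paper's perturbation theory. The sketch below is a minimal, assumed implementation of that iteration for a small synthetic joint distribution `p(x, y)`; the function name, trade-off parameter `beta`, and cluster count are illustrative choices, not taken from the paper.

```python
import numpy as np

def information_bottleneck(p_xy, n_clusters, beta, n_iter=200, seed=0):
    """Self-consistent iterative IB algorithm (a minimal sketch).

    p_xy : joint distribution p(x, y), shape (|X|, |Y|)
    beta : trade-off parameter; near the learning onset (small beta)
           the encoder stays trivial, larger beta extracts more
           relevant information.
    Returns the soft encoder q(t|x), shape (n_clusters, |X|).
    """
    rng = np.random.default_rng(seed)
    n_x = p_xy.shape[0]
    p_x = p_xy.sum(axis=1)                      # marginal p(x)
    p_y_given_x = p_xy / p_x[:, None]           # conditional p(y|x)

    # random soft initialization of the encoder q(t|x), normalized over t
    q_t_given_x = rng.random((n_clusters, n_x))
    q_t_given_x /= q_t_given_x.sum(axis=0)

    for _ in range(n_iter):
        q_t = q_t_given_x @ p_x                 # marginal q(t)
        # decoder: q(y|t) = sum_x p(y|x) p(x) q(t|x) / q(t)
        q_y_given_t = (q_t_given_x * p_x) @ p_y_given_x / q_t[:, None]
        # KL divergence D[p(y|x) || q(y|t)] for every (t, x) pair
        log_ratio = np.log(p_y_given_x[None, :, :] + 1e-12) \
                  - np.log(q_y_given_t[:, None, :] + 1e-12)
        kl = (p_y_given_x[None, :, :] * log_ratio).sum(axis=2)
        # self-consistent encoder update: q(t|x) ∝ q(t) exp(-beta * KL)
        q_t_given_x = q_t[:, None] * np.exp(-beta * kl)
        q_t_given_x /= q_t_given_x.sum(axis=0)

    return q_t_given_x
```

Scanning `beta` upward from zero with this kind of iteration is how one would locate the learning onset numerically: below a critical `beta`, the encoder collapses to a single effective cluster, and relevant information per extracted bit is maximized at the transition.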
