
Perturbation Theory for the Information Bottleneck

05/28/2021
by Vudtiwat Ngampruetikorn, et al.

Extracting relevant information from data is crucial for all forms of learning. The information bottleneck (IB) method formalizes this, offering a mathematically precise and conceptually appealing framework for understanding learning phenomena. However, the nonlinearity of the IB problem makes it computationally expensive and analytically intractable in general. Here we derive a perturbation theory for the IB method and report the first complete characterization of the learning onset, the limit of maximum relevant information per bit extracted from data. We test our results on synthetic probability distributions, finding good agreement with the exact numerical solution near the onset of learning. We explore the differences and subtleties between our derivation and previous attempts at deriving a perturbation theory for the learning onset, and attribute the discrepancy to a flawed assumption. Our work also provides a fresh perspective on the intimate relationship between the IB method and the strong data processing inequality.
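
The "exact numerical solution" that the abstract compares against is commonly obtained by iterating the IB self-consistent equations (p(t|x) ∝ p(t) exp(-β D_KL[p(y|x) ‖ p(y|t)]), with p(t) and p(y|t) recomputed from the encoder). The sketch below shows one way such a reference solution can be computed on a small synthetic joint distribution; it is a generic iterative IB solver, not the paper's perturbative method, and all names here (ib_solve, p_xy, n_t, beta) are illustrative assumptions rather than code from the paper.

    # Minimal sketch of the standard iterative IB algorithm on a synthetic p(x, y).
    # Assumes a strictly positive joint distribution so the KL terms stay finite.
    import numpy as np

    def ib_solve(p_xy, n_t, beta, n_iter=500, seed=0):
        """Iterate the IB self-consistent equations; returns the encoder p(t|x)."""
        rng = np.random.default_rng(seed)
        n_x, n_y = p_xy.shape
        p_x = p_xy.sum(axis=1)                       # marginal p(x)
        p_y_given_x = p_xy / p_x[:, None]            # conditional p(y|x)

        # random initial encoder p(t|x), each row normalized over t
        p_t_given_x = rng.random((n_x, n_t))
        p_t_given_x /= p_t_given_x.sum(axis=1, keepdims=True)

        for _ in range(n_iter):
            p_t = p_t_given_x.T @ p_x                # p(t) = sum_x p(t|x) p(x)
            p_t = np.clip(p_t, 1e-12, None)          # guard against empty clusters
            # decoder p(y|t) = sum_x p(y|x) p(x) p(t|x) / p(t)
            p_y_given_t = (p_t_given_x * p_x[:, None]).T @ p_y_given_x
            p_y_given_t /= p_y_given_t.sum(axis=1, keepdims=True)
            # D_KL[p(y|x) || p(y|t)] for every (x, t) pair
            kl = np.einsum('xy,xty->xt', p_y_given_x,
                           np.log(p_y_given_x[:, None, :] / p_y_given_t[None, :, :]))
            # encoder update: p(t|x) ∝ p(t) exp(-beta * KL), normalized over t
            logits = np.log(p_t)[None, :] - beta * kl
            logits -= logits.max(axis=1, keepdims=True)   # numerical stability
            p_t_given_x = np.exp(logits)
            p_t_given_x /= p_t_given_x.sum(axis=1, keepdims=True)

        return p_t_given_x

Sweeping beta through small values and tracking the mutual informations I(X;T) and I(T;Y) of the returned encoder traces out the low-compression end of the IB curve, where the learning onset characterized in the paper occurs (the encoder stays trivial until beta exceeds a critical value).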

