On the Relevance-Complexity Region of Scalable Information Bottleneck

11/02/2020
by   Mohammad Mahdi Mahvari, et al.

The Information Bottleneck method is a learning technique that seeks the right balance between accuracy and generalization capability through a suitable tradeoff between compression complexity, measured by minimum description length, and distortion, evaluated under a logarithmic loss measure. In this paper, we study a variation of the problem, called the scalable information bottleneck, in which the encoder outputs multiple descriptions of the observation with increasingly richer features. The problem at hand is motivated by application scenarios that require varying levels of accuracy depending on the allowed level of generalization. First, we establish explicit (analytic) characterizations of the relevance-complexity region for memoryless Gaussian sources and memoryless binary sources. Then, we derive a Blahut-Arimoto-type algorithm that allows us to compute (an approximation of) the region for general discrete sources. Finally, an application example in the pattern classification problem is provided along with numerical results.
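As background for the Blahut-Arimoto-type algorithm mentioned in the abstract, the sketch below implements the classical (single-description) information bottleneck via alternating Blahut-Arimoto updates. This is not the paper's scalable algorithm; it is a minimal illustration of the alternating-minimization idea, with the function name, tolerance constants, and toy distribution chosen for this example.

```python
import numpy as np

def ib_blahut_arimoto(p_xy, beta, n_t, n_iter=200, seed=0):
    """Blahut-Arimoto-style iterations for the single-layer information
    bottleneck: given a joint p(x, y) and a tradeoff parameter beta,
    iterate toward an encoder p(t|x) balancing I(X;T) against I(T;Y).

    Returns the encoder as an array of shape (|X|, n_t)."""
    rng = np.random.default_rng(seed)
    eps = 1e-12                                   # numerical floor
    p_x = p_xy.sum(axis=1)                        # marginal p(x)
    p_y_given_x = p_xy / p_x[:, None]             # rows: p(y|x)

    # random row-stochastic initial encoder p(t|x)
    q_t_given_x = rng.random((p_xy.shape[0], n_t))
    q_t_given_x /= q_t_given_x.sum(axis=1, keepdims=True)

    for _ in range(n_iter):
        q_t = p_x @ q_t_given_x                   # marginal p(t)
        # decoder p(y|t) via Bayes: p(y|t) = sum_x p(y|x) p(x, t) / p(t)
        q_xt = q_t_given_x * p_x[:, None]         # joint p(x, t)
        q_y_given_t = (q_xt.T @ p_y_given_x) / (q_t[:, None] + eps)
        # KL(p(y|x) || p(y|t)) for every (x, t) pair
        log_ratio = (np.log(p_y_given_x[:, None, :] + eps)
                     - np.log(q_y_given_t[None, :, :] + eps))
        kl = (p_y_given_x[:, None, :] * log_ratio).sum(axis=2)
        # encoder update: p(t|x) proportional to p(t) * exp(-beta * KL)
        q_t_given_x = q_t[None, :] * np.exp(-beta * kl)
        q_t_given_x /= q_t_given_x.sum(axis=1, keepdims=True) + eps
    return q_t_given_x
```

Sweeping `beta` and recording the resulting mutual-information pairs traces out (an approximation of) a relevance-complexity curve; the paper's scalable variant instead optimizes several nested encoders jointly.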

