The Variational Bandwidth Bottleneck: Stochastic Evaluation on an Information Budget

04/24/2020
by Anirudh Goyal, et al.

In many applications, it is desirable to extract only the relevant information from complex input data, which requires deciding which input features are relevant. The information bottleneck method formalizes this as an information-theoretic optimization problem: find the optimal tradeoff between compression (throwing away irrelevant input information) and prediction of the target. In many problem settings, including the reinforcement learning problems we consider in this work, we might prefer to compress only part of the input. This is typically the case when we have a standard conditioning input, such as a state observation, and a "privileged" input, which might correspond to the goal of a task, the output of a costly planning algorithm, or communication with another agent. In such cases, we might prefer to compress the privileged input, either to achieve better generalization (e.g., with respect to goals) or to minimize access to costly information (e.g., in the case of communication). Practical implementations of the information bottleneck based on variational inference require access to the privileged input in order to compute the bottleneck variable, so although they perform compression, the compression operation itself needs unrestricted, lossless access to the privileged input. In this work, we propose the variational bandwidth bottleneck, which estimates the value of the privileged information for each example before seeing it, i.e., based only on the standard input, and then stochastically decides whether or not to access the privileged input accordingly. We formulate a tractable approximation to this framework and demonstrate in a series of reinforcement learning experiments that it can improve generalization and reduce access to computationally costly information.
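The decision process described above can be sketched in a few lines of PyTorch. The snippet below is a simplified illustration, not the authors' implementation: the module names, the hard Bernoulli gate, and the unit-Gaussian prior are assumptions made for clarity, and in a real system the encoder would be skipped entirely whenever the gate decides not to access the privileged input.

    import torch
    import torch.nn as nn

    class VariationalBandwidthBottleneck(nn.Module):
        # Sketch of a bandwidth-limited bottleneck: a gate computed from the
        # standard input decides, before the privileged input is seen, whether
        # to pay the cost of encoding it.
        def __init__(self, state_dim, priv_dim, z_dim):
            super().__init__()
            # Gate: probability of accessing the privileged input, from the state alone.
            self.gate = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                                      nn.Linear(64, 1), nn.Sigmoid())
            # Encoder q(z | privileged input), producing mean and log-variance.
            self.encoder = nn.Linear(priv_dim, 2 * z_dim)

        def forward(self, state, privileged):
            p_access = self.gate(state)                 # per-example access probability
            access = torch.bernoulli(p_access)          # stochastic decision, made before encoding
            mu, logvar = self.encoder(privileged).chunk(2, dim=-1)
            std = torch.exp(0.5 * logvar)
            z_post = mu + std * torch.randn_like(std)   # sample from the posterior
            z_prior = torch.randn_like(z_post)          # sample from the N(0, I) prior
            # When the gate is closed, fall back to the prior sample (no privileged access).
            z = access * z_post + (1.0 - access) * z_prior
            # The KL (information) cost is only paid when the privileged input is read.
            kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1, keepdim=True)
            return z, access * kl

    # Hypothetical usage: z conditions the downstream policy, and the mean
    # information cost is added to the training loss as a penalty.
    vbb = VariationalBandwidthBottleneck(state_dim=8, priv_dim=4, z_dim=2)
    z, info_cost = vbb(torch.randn(32, 8), torch.randn(32, 4))

Note that the hard Bernoulli sample above is not differentiable; it stands in only for the conceptual access decision, which the paper handles through a tractable approximation suitable for training.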

