The Conditional Entropy Bottleneck

02/13/2020
by Ian Fischer, et al.

Much of the field of Machine Learning exhibits a prominent set of failure modes, including vulnerability to adversarial examples, poor out-of-distribution (OoD) detection, miscalibration, and willingness to memorize random labelings of datasets. We characterize these as failures of robust generalization, which extends the traditional measure of generalization as accuracy or related metrics on a held-out set. We hypothesize that these failures to robustly generalize are due to the learning systems retaining too much information about the training data. To test this hypothesis, we propose the Minimum Necessary Information (MNI) criterion for evaluating the quality of a model. In order to train models that perform well with respect to the MNI criterion, we present a new objective function, the Conditional Entropy Bottleneck (CEB), which is closely related to the Information Bottleneck (IB). We experimentally test our hypothesis by comparing the performance of CEB models with deterministic models and Variational Information Bottleneck (VIB) models on a variety of different datasets and robustness challenges. We find strong empirical evidence supporting our hypothesis that MNI models improve on these problems of robust generalization.
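For readers who want the abstract's objectives in symbols, the following is a minimal sketch from the standard information-theoretic definitions, assuming the usual Markov chain Z ← X ↔ Y (the representation Z is computed only from X); the paper's exact hyperparameterization may differ. The MNI criterion asks that Z capture exactly the information X shares with the target Y:

\[ \text{MNI:}\qquad I(X;Z) \;=\; I(Y;Z) \;=\; I(X;Y) \]

CEB targets this point by minimizing the residual information \(I(X;Z\mid Y)\), which under the Markov chain satisfies \(I(X;Z\mid Y) = I(X;Z) - I(Y;Z)\) (since \(I(Y;Z\mid X) = 0\)):

\[ \text{CEB:}\qquad \min_{Z}\; I(X;Z\mid Y) \;=\; \min_{Z}\;\big[\, I(X;Z) - I(Y;Z) \,\big] \]

\[ \text{IB (for comparison):}\qquad \min_{Z}\;\big[\, I(X;Z) - \beta\, I(Y;Z) \,\big] \]

Intuitively, IB trades compression of X against prediction of Y via the multiplier \(\beta\), while CEB penalizes only the information in Z that is about X but not about Y, so its optimum sits at the MNI point.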

Related Research

Revisiting Hilbert-Schmidt Information Bottleneck for Adversarial Robustness (06/04/2021)
We investigate the HSIC (Hilbert-Schmidt independence criterion) bottlen...

Entropy Penalty: Towards Generalization Beyond the IID Assumption (10/01/2019)
It has been shown that instead of learning actual object features, deep ...

Improving the Adversarial Robustness of NLP Models by Information Bottleneck (06/11/2022)
Existing studies have demonstrated that adversarial examples can be dire...

Gated Information Bottleneck for Generalization in Sequential Environments (10/12/2021)
Deep neural networks suffer from poor generalization to unseen environme...

Lagrangian Objective Function Leads to Improved Unforeseen Attack Generalization in Adversarial Training (03/29/2021)
Recent improvements in deep learning models and their practical applicat...

Multiparty Dynamics and Failure Modes for Machine Learning and Artificial Intelligence (10/16/2018)
Overoptimization failures in machine learning and artificial intelligenc...

Guess First to Enable Better Compression and Adversarial Robustness (01/10/2020)
Machine learning models are generally vulnerable to adversarial examples...