Gated Information Bottleneck for Generalization in Sequential Environments

10/12/2021
by Francesco Alesiani, et al.

Deep neural networks generalize poorly to unseen environments when the underlying data distribution differs from that of the training set. By learning minimum sufficient representations from the training data, the information bottleneck (IB) approach has proven effective at improving generalization across different AI applications. In this work, we propose a new neural network-based IB approach, termed the gated information bottleneck (GIB), that dynamically drops spurious correlations and progressively selects the most task-relevant features across different environments via a trainable soft mask on the raw features. GIB enjoys a simple and tractable objective, with no variational approximation or distributional assumption. We empirically demonstrate the superiority of GIB over other popular neural network-based IB approaches in adversarial robustness and out-of-distribution (OOD) detection. We also establish the connection between IB theory and invariant causal representation learning, and observe that GIB performs well when different environments arrive sequentially, a more practical scenario in which invariant risk minimization (IRM) fails. Code for GIB is available at https://github.com/falesiani/GIB.
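The gating mechanism at the heart of GIB, a trainable soft mask applied elementwise to the raw features, can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the sigmoid parameterization, the `gate_logits` name, and the example values are assumptions for exposition only.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gated_features(x, gate_logits):
    """Apply a soft mask to raw features.

    x           : (batch, d) array of raw input features.
    gate_logits : (d,) trainable parameters; sigmoid(gate_logits)
                  yields a per-feature gate in (0, 1). Gates near 0
                  suppress (drop) a feature, gates near 1 pass it.
    Returns the masked features and the gate values.
    """
    gates = sigmoid(gate_logits)
    return x * gates, gates

# Example: three features; training would push the gate of a
# spuriously correlated feature (here the middle one) toward 0.
x = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])
gate_logits = np.array([5.0, -5.0, 5.0])  # learned in practice
masked, gates = gated_features(x, gate_logits)
```

In training, the logits would be optimized jointly with the network under the IB objective, so features carrying only spurious correlations are progressively gated out across environments.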


