Gated Information Bottleneck for Generalization in Sequential Environments

by Francesco Alesiani et al.

Deep neural networks suffer from poor generalization to unseen environments when the underlying data distribution differs from that of the training set. By learning minimum sufficient representations from training data, the information bottleneck (IB) approach has demonstrated its effectiveness in improving generalization across different AI applications. In this work, we propose a new neural network-based IB approach, termed gated information bottleneck (GIB), that dynamically drops spurious correlations and progressively selects the most task-relevant features across different environments via a trainable soft mask (on raw features). GIB enjoys a simple and tractable objective, without any variational approximation or distributional assumption. We empirically demonstrate the superiority of GIB over other popular neural network-based IB approaches in adversarial robustness and out-of-distribution (OOD) detection. We also establish a connection between IB theory and invariant causal representation learning, and observe that GIB achieves appealing performance when different environments arrive sequentially, a more practical scenario where invariant risk minimization (IRM) fails. Code for GIB is available at
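The core mechanism described above, a trainable soft mask applied element-wise to raw features so that spurious ones can be driven toward zero, can be sketched as follows. This is a hypothetical illustration of soft feature gating in general, not the authors' GIB implementation; the class name, the sigmoid parameterization, and the L1-style sparsity penalty are all assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class SoftFeatureGate:
    """Illustrative element-wise soft mask over raw input features.

    Each feature x_j is scaled by a gate g_j = sigmoid(theta_j) in (0, 1).
    Adding the sparsity penalty to the task loss pushes gate logits down,
    so features that do not help the task can be suppressed, mimicking
    how a trainable mask could drop spurious inputs.
    (Hypothetical sketch; not the paper's actual GIB objective.)
    """

    def __init__(self, num_features, init_logit=2.0):
        # Start with gates near 1 so all features pass through initially.
        self.logits = np.full(num_features, init_logit)

    def gates(self):
        # Gate values, always strictly between 0 and 1.
        return sigmoid(self.logits)

    def forward(self, x):
        # Element-wise soft masking of the raw features.
        return x * self.gates()

    def sparsity_penalty(self):
        # Sum of gate values; minimizing it encourages dropping features.
        return float(self.gates().sum())
```

Because the gates are differentiable, the mask can be trained jointly with the downstream predictor by ordinary gradient descent, which is consistent with the abstract's claim of a simple objective requiring no variational approximation.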


