
Entropy Penalty: Towards Generalization Beyond the IID Assumption

10/01/2019
by Devansh Arpit, et al.

It has been shown that, instead of learning actual object features, deep networks tend to exploit non-robust (spurious) discriminative features that are shared between training and test sets. Consequently, while they achieve state-of-the-art performance on such test sets, they generalize poorly to out-of-distribution (OOD) samples, where the IID (independent and identically distributed) assumption breaks down and the distribution of non-robust features shifts. Through theoretical and empirical analysis, we show that this happens because maximum likelihood training (without appropriate regularization) leads the model to depend on all the correlations, including spurious ones, present between inputs and targets in the dataset. We then show evidence that the information bottleneck (IB) principle can address this problem. To do so, we propose a regularization approach based on IB, called Entropy Penalty, that reduces the model's dependence on spurious features, i.e., features corresponding to such spurious correlations. This allows deep networks trained with Entropy Penalty to generalize well even under distribution shift of the spurious features. As a controlled test-bed for evaluating our claim, we train deep networks with Entropy Penalty on a colored MNIST (C-MNIST) dataset and show that the resulting models generalize well to the vanilla MNIST, MNIST-M, and SVHN datasets, in addition to an OOD version of C-MNIST itself. The baseline regularization methods we compare against fail to generalize on this test-bed. Our code is available at https://github.com/salesforce/EntropyPenalty.
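The abstract does not spell out the exact form of the Entropy Penalty regularizer. As a rough illustration of the underlying idea only, the PyTorch sketch below adds one plausible IB-inspired term, a diagonal-Gaussian estimate of the per-class entropy of the penultimate-layer features, to the standard cross-entropy loss. The network architecture, the feature dimension, the coefficient beta, and the C-MNIST input shape are all assumptions made for this sketch; the authors' actual formulation is in the linked repository.

```python
# Hypothetical sketch (not the paper's exact formulation): cross-entropy loss plus
# a penalty on a Gaussian estimate of the entropy of the penultimate representation,
# computed per class so that within-class (potentially spurious) variation is suppressed.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallNet(nn.Module):
    def __init__(self, num_classes=10, feat_dim=128):
        super().__init__()
        # Assumed 3x28x28 inputs (e.g., a colored-MNIST-style dataset).
        self.features = nn.Sequential(
            nn.Flatten(), nn.Linear(3 * 28 * 28, 256), nn.ReLU(),
            nn.Linear(256, feat_dim), nn.ReLU())
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, x):
        z = self.features(x)              # penultimate representation
        return self.classifier(z), z

def gaussian_entropy(z, eps=1e-6):
    # Differential entropy of a diagonal Gaussian fit to the batch of features:
    # 0.5 * sum_d log(2 * pi * e * var_d)
    var = z.var(dim=0, unbiased=False) + eps
    return 0.5 * torch.sum(torch.log(2 * torch.pi * torch.e * var))

def entropy_penalty_loss(logits, z, y, beta=0.1):
    ce = F.cross_entropy(logits, y)
    # Penalize the entropy of the representation within each class present in the batch.
    ent = z.new_zeros(())
    for c in y.unique():
        zc = z[y == c]
        if zc.size(0) > 1:
            ent = ent + gaussian_entropy(zc)
    return ce + beta * ent

# Usage (hypothetical):
# model = SmallNet()
# logits, z = model(images)             # images: (B, 3, 28, 28) batch
# loss = entropy_penalty_loss(logits, z, labels, beta=0.1)
# loss.backward()
```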


Code Repositories

corr_based_prediction

This repo provides code used in the paper "Predicting with High Correlation Features" (https://arxiv.org/abs/1910.00164).
