Take Me Home: Reversing Distribution Shifts using Reinforcement Learning

02/20/2023
by   Vivian Lin, et al.

Deep neural networks have repeatedly been shown to be non-robust to the uncertainties of the real world. Even subtle adversarial attacks and naturally occurring distribution shifts wreak havoc on systems relying on deep neural networks. In response, current state-of-the-art techniques use data augmentation to enrich the training distribution of the model and consequently improve robustness to natural distribution shifts. We propose an alternative approach that allows the system to recover from distribution shifts online. Specifically, our method applies a sequence of semantic-preserving transformations to bring the shifted data closer in distribution to the training set, as measured by the Wasserstein distance. We formulate the problem of sequence selection as a Markov decision process (MDP), which we solve using reinforcement learning. To aid our estimates of the Wasserstein distance, we employ dimensionality reduction through orthonormal projection. We provide both theoretical and empirical evidence that orthonormal projection preserves characteristics of the data at the distributional level. Finally, we apply our distribution shift recovery approach to the ImageNet-C benchmark for distribution shifts, targeting shifts due to additive noise and image histogram modifications. We demonstrate an improvement in average accuracy of up to 14.21% on state-of-the-art ImageNet classifiers.
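To make the core mechanism concrete, the following is a minimal illustrative sketch, not the authors' implementation, of the distance measurement the abstract describes: a batch of shifted samples and a batch of training samples are pushed through a matrix with orthonormal columns, and the 1-D Wasserstein distance is averaged over the projected coordinates. The function names, the QR-based random projection, and the use of scipy.stats.wasserstein_distance are assumptions made for illustration.

# Illustrative sketch only: orthonormal projection + per-coordinate
# Wasserstein distance between a training batch and a shifted batch.
# Names and library choices are assumptions, not the paper's code.
import numpy as np
from scipy.stats import wasserstein_distance

def orthonormal_projection(dim_in: int, dim_out: int, seed: int = 0) -> np.ndarray:
    """Return a dim_in x dim_out matrix with orthonormal columns (via reduced QR)."""
    rng = np.random.default_rng(seed)
    gaussian = rng.standard_normal((dim_in, dim_out))
    q, _ = np.linalg.qr(gaussian)  # columns of q are orthonormal
    return q

def projected_wasserstein(train_batch: np.ndarray,
                          shifted_batch: np.ndarray,
                          dim_out: int = 32) -> float:
    """Average 1-D Wasserstein distance over the projected coordinates."""
    proj = orthonormal_projection(train_batch.shape[1], dim_out)
    a, b = train_batch @ proj, shifted_batch @ proj
    return float(np.mean([wasserstein_distance(a[:, j], b[:, j])
                          for j in range(dim_out)]))

if __name__ == "__main__":
    # Synthetic example: a candidate correction (here a simple mean/variance
    # renormalisation, standing in for a semantic-preserving transformation)
    # is judged by whether it moves the shifted batch closer to training data.
    rng = np.random.default_rng(1)
    train = rng.standard_normal((512, 256))
    shifted = train * 1.5 + 0.8            # synthetic "distribution shift"
    corrected = (shifted - shifted.mean(0)) / shifted.std(0)
    print("distance before correction:", projected_wasserstein(train, shifted))
    print("distance after correction: ", projected_wasserstein(train, corrected))

In the paper's framing, a distance estimate of this kind would plausibly serve as the signal guiding the reinforcement-learning agent's choice of the next semantic-preserving transformation in the sequence.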
