
Take Me Home: Reversing Distribution Shifts using Reinforcement Learning

02/20/2023
by Vivian Lin, et al.

Deep neural networks have repeatedly been shown to be non-robust to the uncertainties of the real world. Even subtle adversarial attacks and naturally occurring distribution shifts wreak havoc on systems relying on deep neural networks. In response, current state-of-the-art techniques use data augmentation to enrich the training distribution of the model and consequently improve robustness to natural distribution shifts. We propose an alternative approach that allows the system to recover from distribution shifts online. Specifically, our method applies a sequence of semantic-preserving transformations to bring the shifted data closer in distribution to the training set, as measured by the Wasserstein distance. We formulate the problem of sequence selection as a Markov decision process (MDP), which we solve using reinforcement learning. To aid in our estimates of the Wasserstein distance, we employ dimensionality reduction through orthonormal projection. We provide both theoretical and empirical evidence that orthonormal projection preserves characteristics of the data at the distributional level. Finally, we apply our distribution shift recovery approach to the ImageNet-C benchmark for distribution shifts, targeting shifts due to additive noise and image histogram modifications. We demonstrate an improvement in average accuracy of up to 14.21% across a variety of state-of-the-art ImageNet classifiers.
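The abstract's recovery loop has three concrete ingredients: an orthonormal projection that reduces dimensionality, a Wasserstein distance between the projected shifted data and the projected training data, and a sequential choice of semantic-preserving transformations. The following is a minimal illustrative Python sketch of that loop, not the authors' implementation: it assumes a random QR-based orthonormal projection, averages per-coordinate 1-D Wasserstein-1 distances as the distance estimate, and substitutes a toy greedy selection for the learned RL policy; the transformation set, feature dimensions, and data here are all hypothetical.

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)

def orthonormal_projection(d_in, d_out):
    # Random orthonormal projection via QR: columns of Q satisfy Q^T Q = I.
    q, _ = np.linalg.qr(rng.standard_normal((d_in, d_out)))
    return q

def projected_w1(x, y, proj):
    # Project both sample sets, then average 1-D W1 distances per coordinate.
    xp, yp = x @ proj, y @ proj
    return float(np.mean([wasserstein_distance(xp[:, j], yp[:, j])
                          for j in range(proj.shape[1])]))

# Hypothetical semantic-preserving transformations on feature vectors.
TRANSFORMS = {
    "identity": lambda x: x,
    "shift_down": lambda x: x - 0.1,                    # counteract additive shift
    "renormalize": lambda x: (x - x.mean()) / x.std(),  # counteract histogram change
}

def greedy_recovery(shifted, train, proj, steps=4):
    # Toy stand-in for the RL policy: at each step, apply whichever
    # transformation most reduces the projected Wasserstein distance.
    x = shifted
    for _ in range(steps):
        scores = {n: projected_w1(f(x), train, proj) for n, f in TRANSFORMS.items()}
        best = min(scores, key=scores.get)
        x = TRANSFORMS[best](x)
        print(f"applied {best:12s} projected W1 = {scores[best]:.4f}")
    return x

d, k = 64, 8
proj = orthonormal_projection(d, k)
train = rng.standard_normal((500, d))   # stand-in for training-set features
shifted = train + 0.3                   # a simple additive distribution shift
greedy_recovery(shifted, train, proj)
```

In the paper, the greedy step is replaced by an agent trained over the MDP of transformation sequences; this sketch only illustrates how the projected Wasserstein distance can serve as the signal that such an agent optimizes.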


Related research

07/01/2020

Measuring Robustness to Natural Distribution Shifts in Image Classification

We study how robust current ImageNet models are to distribution shifts a...
01/28/2022

Certifying Model Accuracy under Distribution Shifts

Certified robustness in machine learning has primarily focused on advers...
03/28/2019

Model Vulnerability to Distributional Shifts over Image Transformation Sets

We are concerned with the vulnerability of computer vision models to dis...
09/29/2022

Generalizability of Adversarial Robustness Under Distribution Shifts

Recent progress in empirical and certified robustness promises to delive...
09/08/2022

Black-Box Audits for Group Distribution Shifts

When a model informs decisions about people, distribution shifts can cre...
06/30/2022

Exposing and addressing the fragility of neural networks in digital pathology

Neural networks have achieved impressive results in many medical imaging...
03/08/2021

Contemplating real-world object classification

Deep object recognition models have been very successful over benchmark ...