Certifying Model Accuracy under Distribution Shifts

01/28/2022
by Aounon Kumar et al.

Certified robustness in machine learning has primarily focused on adversarial perturbations of the input with a fixed attack budget for each point in the data distribution. In this work, we present provable robustness guarantees on the accuracy of a model under bounded Wasserstein shifts of the data distribution. We show that a simple procedure that randomizes the input of the model within a transformation space is provably robust to distributional shifts under the transformation. Our framework allows the datum-specific perturbation size to vary across different points in the input distribution and is general enough to include fixed-sized perturbations as well. Our certificates produce guaranteed lower bounds on the performance of the model for any (natural or adversarial) shift of the input distribution within a Wasserstein ball around the original distribution. We apply our technique to: (i) certify robustness against natural (non-adversarial) transformations of images such as color shifts, hue shifts and changes in brightness and saturation, (ii) certify robustness against adversarial shifts of the input distribution, and (iii) show provable lower bounds (hardness results) on the performance of models trained on so-called "unlearnable" datasets that have been poisoned to interfere with model training.
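To make the randomization idea concrete, below is a minimal sketch of input randomization over a one-dimensional transformation space (brightness offsets). The function names, the Gaussian noise model, and the majority-vote aggregation are illustrative assumptions for this sketch, not the paper's exact certified procedure.

```python
import numpy as np

def smoothed_predict(base_classifier, x, num_samples=1000, sigma=0.1, rng=None):
    """Randomize the input within a transformation space (here: scalar
    brightness offsets drawn from N(0, sigma^2)) and return the
    majority-vote class of the base classifier.

    `base_classifier` is assumed to map a single image (float array with
    values in [0, 1]) to an integer class label.
    """
    rng = np.random.default_rng() if rng is None else rng
    votes = {}
    for _ in range(num_samples):
        delta = rng.normal(0.0, sigma)        # random brightness shift
        x_t = np.clip(x + delta, 0.0, 1.0)    # transformed input
        y = base_classifier(x_t)
        votes[y] = votes.get(y, 0) + 1
    return max(votes, key=votes.get)
```

Intuitively, because every prediction of the smoothed model averages over the same transformation noise, moving the distribution of transformation parameters by a small Wasserstein distance can only change the smoothed model's accuracy gradually, which is the kind of guarantee the certificates formalize.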
