Robust Machine Learning via Privacy/Rate-Distortion Theory

07/22/2020
by Ye Wang, et al.

Robust machine learning formulations have emerged to address the prevalent vulnerability of deep neural networks to adversarial examples. Our work draws a connection between optimal robust learning and the privacy-utility tradeoff problem, which is a generalization of the rate-distortion problem. The saddle point of the game between a robust classifier and an adversarial perturbation can be found via the solution of a maximum conditional entropy problem. This information-theoretic perspective sheds light on the fundamental tradeoff between robustness and clean-data performance, which ultimately arises from the geometric structure of the underlying data distribution and the perturbation constraints. Further, we show that, under mild conditions, the worst-case adversarial distribution with Wasserstein-ball constraints on the perturbation admits a fixed-point characterization, obtained from the first-order necessary conditions for optimality of the derived maximum conditional entropy problem. This characterization exposes the interplay between the geometry of the ground cost in the Wasserstein-ball constraint, the worst-case adversarial distribution, and the given reference data distribution.
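The abstract frames robust learning as a min-max game between a classifier and a perturbation. As a minimal illustration of that game (not the paper's information-theoretic method), the sketch below trains a linear classifier against an L-infinity-bounded adversary, for which the inner maximization has a well-known closed form: the worst-case margin of a score w·x is y(w·x) − ε‖w‖₁. All names, data, and the budget `eps` here are assumptions for the toy example.

```python
import numpy as np

# Illustrative sketch only: instantiate the robust-learning min-max game for a
# linear model, where the adversary's inner maximization is solvable in closed
# form (delta = -eps * y * sign(w) within the L-infinity ball of radius eps).
rng = np.random.default_rng(0)

# Toy, essentially separable 2-D data with labels y in {-1, +1}.
n = 200
shift = np.where(rng.random(n) < 0.5, 2.0, -2.0)[:, None]
X = rng.normal(size=(n, 2)) + shift
y = np.sign(X @ np.ones(2))

def robust_loss(w, X, y, eps):
    """Worst-case logistic loss: the worst-case margin is y*(w.x) - eps*||w||_1."""
    m = y * (X @ w) - eps * np.abs(w).sum()
    return np.log1p(np.exp(-m)).mean()

def robust_grad(w, X, y, eps):
    m = y * (X @ w) - eps * np.abs(w).sum()
    s = -1.0 / (1.0 + np.exp(m))               # d/dm of log1p(exp(-m))
    dm_dw = y[:, None] * X - eps * np.sign(w)  # d m_i / d w
    return (s[:, None] * dm_dw).mean(axis=0)

eps = 0.3        # adversary's perturbation budget (assumed for illustration)
w = np.zeros(2)
for _ in range(1000):                          # outer minimization by GD
    w -= 0.1 * robust_grad(w, X, y, eps)

clean_acc = np.mean(np.sign(X @ w) == y)
robust_acc = np.mean(y * (X @ w) - eps * np.abs(w).sum() > 0)
print(clean_acc, robust_acc)
```

Because a robustly correct point (positive worst-case margin) is also cleanly correct, robust accuracy can never exceed clean accuracy here, which is the tradeoff the abstract attributes to the geometry of the data distribution and the perturbation constraint.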

