Distributional Robustness with IPMs and links to Regularization and GANs

06/08/2020
by Hisham Husain, et al.

Robustness to adversarial attacks is an important concern, owing to the fragility of deep neural networks to small perturbations, and has received an abundance of attention in recent years. Distributionally Robust Optimization (DRO), a particularly promising way of addressing this challenge, studies robustness via divergence-based uncertainty sets and has provided valuable insights into robustification strategies such as regularization. In the context of machine learning, most existing results choose f-divergences, Wasserstein distances and, more recently, the Maximum Mean Discrepancy (MMD) to construct uncertainty sets. We extend this line of work, with the aim of understanding robustness via regularization, by studying uncertainty sets constructed with Integral Probability Metrics (IPMs) - a large family of divergences that includes the MMD, Total Variation and Wasserstein distances. Our main result shows that DRO under any choice of IPM corresponds to a family of regularization penalties, which recover and improve upon existing results in the MMD and Wasserstein settings. Owing to the generality of this result, we show that other choices of IPM correspond to other penalties commonly used in machine learning. Furthermore, we extend our results to shed light on adversarial generative modelling via f-GANs, constituting the first study of distributional robustness for the f-GAN objective. Our results unveil the inductive properties of the discriminator set with regard to robustness, allowing us to comment positively on several penalty-based GAN methods such as Wasserstein-, MMD- and Sobolev-GANs. In summary, our results intimately link GANs to distributional robustness, extend previous results on DRO and contribute to our understanding of the link between regularization and robustness at large.
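The abstract's central claim - that DRO over a divergence-based uncertainty set reduces to a regularization penalty - can be made concrete in its best-known special case. For a linear model with hinge loss, 1-Wasserstein DRO with radius eps is known to be equivalent to the empirical loss plus eps times the L2 norm of the weights, since that norm is the Lipschitz constant of the loss in the input. A minimal NumPy sketch of this correspondence (the function names are illustrative, not from the paper):

```python
import numpy as np

def hinge_loss(w, X, y):
    """Average hinge loss of the linear classifier x -> sign(w @ x)."""
    return np.mean(np.maximum(0.0, 1.0 - y * (X @ w)))

def dro_upper_bound(w, X, y, eps):
    """Worst-case hinge loss over all Q with W1(P_n, Q) <= eps.

    The map x -> max(0, 1 - y * w @ x) is ||w||_2-Lipschitz in x, so
    the worst-case (1-Wasserstein) loss equals the empirical loss plus
    eps * ||w||_2: robustness appears as an L2 regularization penalty.
    """
    return hinge_loss(w, X, y) + eps * np.linalg.norm(w)

# Synthetic linearly separable data for illustration.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
w_true = np.ones(5)
y = np.sign(X @ w_true)

w = 0.9 * w_true
plain = hinge_loss(w, X, y)
robust = dro_upper_bound(w, X, y, eps=0.1)
# The robust objective exceeds the empirical one by exactly eps * ||w||_2.
assert np.isclose(robust - plain, 0.1 * np.linalg.norm(w))
```

The paper's result generalizes this pattern: swapping the Wasserstein ball for a ball in any other IPM swaps the L2-norm penalty for a different, IPM-specific penalty.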


