Understanding the Intrinsic Robustness of Image Distributions using Conditional Generative Models

03/01/2020
by   Xiao Zhang, et al.

Starting with Gilmer et al. (2018), several works have demonstrated the inevitability of adversarial examples under various assumptions about the underlying input probability space. It remains unclear, however, whether these results apply to natural image distributions. In this work, we assume the underlying data distribution is captured by some conditional generative model, and prove intrinsic robustness bounds for a general class of classifiers, solving an open problem posed in Fawzi et al. (2018). Building on state-of-the-art conditional generative models, we study the intrinsic robustness of two common image benchmarks under ℓ_2 perturbations, and show a large gap between the robustness limits implied by our theory and the adversarial robustness achieved by current state-of-the-art robust models. Code for all our experiments is available at https://github.com/xiaozhanguva/Intrinsic-Rob.
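To make the flavor of such intrinsic robustness limits concrete, the following is a minimal sketch of the Gaussian-isoperimetry argument in the style of Fawzi et al. (2018), not the paper's own conditional bound: assuming the data distribution is the image of a standard Gaussian latent under an L-Lipschitz generator, any classifier with in-distribution error alpha has adversarial risk at least Φ(Φ⁻¹(alpha) + ε/L) under ℓ_2 perturbations of size ε, so its robust accuracy is at most one minus that quantity. The function name and the numbers below are illustrative, not from the paper.

```python
from statistics import NormalDist

def intrinsic_robustness_bound(alpha: float, eps: float, lipschitz: float) -> float:
    """Upper bound on achievable robust accuracy (illustrative sketch).

    Assumes data = g(z) with z ~ N(0, I) and g being `lipschitz`-Lipschitz.
    By the Gaussian isoperimetric inequality, a classifier with clean error
    `alpha` has adversarial risk >= Phi(Phi^{-1}(alpha) + eps / lipschitz)
    against ell_2 perturbations of size `eps`, hence robust accuracy is at
    most 1 minus that risk.
    """
    std_normal = NormalDist()  # standard Gaussian Phi and Phi^{-1}
    adv_risk = std_normal.cdf(std_normal.inv_cdf(alpha) + eps / lipschitz)
    return 1.0 - adv_risk

# Illustrative numbers: 5% clean error, eps = 1.0, Lipschitz constant 10.
# With eps = 0 the bound reduces to clean accuracy (0.95) and it shrinks
# as eps grows or as the generator's Lipschitz constant decreases.
print(intrinsic_robustness_bound(0.05, 1.0, 10.0))
```

The gap the paper reports is between bounds of this kind, instantiated with conditional generative models, and the empirical robust accuracy of current defenses.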

Related research

- Empirically Measuring Concentration: Fundamental Limits on Intrinsic Robustness (05/29/2019)
- Limitations of adversarial robustness: strong No Free Lunch Theorem (10/08/2018)
- advertorch v0.1: An Adversarial Robustness Toolbox based on PyTorch (02/20/2019)
- Improved Estimation of Concentration Under ℓ_p-Norm Distance Metrics Using Half Spaces (03/24/2021)
- Conditional WaveGAN (09/27/2018)
- A Unified View of cGANs with and without Classifiers (11/01/2021)
- GREAT Score: Global Robustness Evaluation of Adversarial Perturbation using Generative Models (04/19/2023)
