Distributionally Adversarial Attack

08/16/2018
by Tianhang Zheng, et al.

Recent work on adversarial attacks has shown that the Projected Gradient Descent (PGD) adversary is a universal first-order adversary, and that a classifier adversarially trained with PGD is robust against a wide range of first-order attacks. However, it is worth noting that the objective of an attack/defense model relies on a data distribution, typically in the form of risk maximization/minimization: E_{p(x)}[L(x)], where p(x) is the data distribution and L(·) is a loss function. Because PGD generates attack samples independently for each data point, the procedure does not necessarily lead to good generalization in terms of risk maximization. In this paper, we achieve this goal by proposing distributionally adversarial attack (DAA), a framework that solves for an optimal adversarial data distribution: a perturbed distribution that is close to the original data distribution but maximally increases the generalization risk. Algorithmically, DAA performs optimization on the space of probability measures, which introduces direct dependency among all data points when generating adversarial samples. DAA is evaluated by attacking state-of-the-art defense models, including the adversarially trained models provided by MadryLab. Notably, DAA outperforms all the attack algorithms listed in MadryLab's white-box leaderboard, reducing the accuracy of their secret MNIST model to 88.79% (with l_∞ perturbations of ϵ = 0.3) and the accuracy of their secret CIFAR model to 44.73% (with l_∞ perturbations of ϵ = 8.0). Code for the experiments is released at https://github.com/tianzheng4/Distributionally-Adversarial-Attack
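
Concretely, the attack objective can be read as maximizing E_{p'(x)}[L(x)] over perturbed distributions p' that stay close to the clean data distribution p (the precise closeness constraint is specified in the paper). The per-example PGD baseline that DAA generalizes can be sketched as follows; this is a minimal PyTorch-style sketch assuming a classifier model, inputs x in [0, 1], and labels y, with typical MNIST attack parameters, not the authors' exact implementation. DAA's distribution-level update, which couples the samples in a batch, is only indicated schematically in the closing comment.

import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=0.3, step_size=0.01, num_steps=40):
    # Standard l_inf PGD: each example ascends its own loss gradient,
    # independently of the rest of the batch, and is projected back
    # into the eps-ball around its clean input.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0.0, 1.0)
    for _ in range(num_steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + step_size * grad.sign()                 # independent ascent step
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)   # project onto the eps-ball
            x_adv = x_adv.clamp(0.0, 1.0)                           # keep valid pixel range
    return x_adv.detach()

# DAA, by contrast, optimizes over the perturbed data distribution, so the
# ascent direction for each sample also depends on the other samples in the
# batch (e.g., through a term coupling nearby data points); see the paper
# and the released code for the exact update rule.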

Related Research
10/10/2018

Is PGD-Adversarial Training Necessary? Alternative Training via a Soft-Quantization Network with Noisy-Natural Samples Only

Recent work on adversarial attack and defense suggests that PGD is a uni...
03/15/2020

Output Diversified Initialization for Adversarial Attacks

Adversarial examples are often constructed by iteratively refining a ran...
08/20/2022

Adversarial contamination of networks in the setting of vertex nomination: a new trimming method

As graph data becomes more ubiquitous, the need for robust inferential g...
10/21/2019

An Alternative Surrogate Loss for PGD-based Adversarial Testing

Adversarial testing methods based on Projected Gradient Descent (PGD) ar...
06/23/2022

A Framework for Understanding Model Extraction Attack and Defense

The privacy of machine learning models has become a significant concern ...
02/15/2021

How to Learn when Data Reacts to Your Model: Performative Gradient Descent

Performative distribution shift captures the setting where the choice of...
12/13/2021

How to Learn when Data Gradually Reacts to Your Model

A recent line of work has focused on training machine learning (ML) mode...

Code Repositories

Distributionally-Adversarial-Attack

https://github.com/tianzheng4/Distributionally-Adversarial-Attack