Are Adversarial Examples Created Equal? A Learnable Weighted Minimax Risk for Robustness under Non-uniform Attacks

10/24/2020
by Huimin Zeng, et al.

Adversarial training has proven to be an effective defense against adversarial examples, and is one of the few defenses that withstands strong attacks. However, traditional defense mechanisms assume a uniform attack over the examples according to the underlying data distribution, which is unrealistic: an attacker could instead concentrate on the more vulnerable examples. We present a weighted minimax risk optimization that defends against such non-uniform attacks, achieving robustness against adversarial examples under perturbed test data distributions. Our modified risk assigns importance weights to different adversarial examples and adaptively focuses on harder examples, i.e., those that are misclassified or at higher risk of misclassification. The designed risk allows the training process to learn a strong defense by optimizing the importance weights. Experiments show that our model significantly improves state-of-the-art adversarial accuracy under non-uniform attacks without a significant drop under uniform attacks.
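The full text spells out the exact objective and weight-learning scheme; read loosely, the abstract describes a risk of the form min over model parameters θ of max over importance weights w (on the probability simplex) of Σ_i w_i · loss(f_θ(x_i^adv), y_i). Below is a minimal, hypothetical PyTorch sketch of that idea, not the paper's algorithm: `pgd_attack` is an ordinary PGD helper, the exponentiated-gradient weight update is one simple way to adaptively upweight harder examples, and weights are kept batch-local for brevity, whereas per-example weights would normally persist across the whole dataset.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Standard L-inf PGD attack (generic helper, not from the paper)."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        # Project back into the eps-ball around x and the valid pixel range.
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

def weighted_minimax_step(model, optimizer, x, y, w, eta=0.1):
    """One step of a weighted minimax risk: first push the weights w
    (one per example, summing to 1) toward high-loss examples via an
    exponentiated-gradient ascent step, then descend on the weighted loss."""
    x_adv = pgd_attack(model, x, y)
    per_example = F.cross_entropy(model(x_adv), y, reduction="none")

    # Inner maximization over weights: upweight harder examples.
    with torch.no_grad():
        w = w * torch.exp(eta * per_example)
        w = w / w.sum()  # project back onto the simplex

    # Outer minimization over model parameters on the weighted risk.
    loss = (w * per_example).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return w, loss.item()

# Usage: w = torch.full((batch_size,), 1.0 / batch_size), then call
# weighted_minimax_step(model, optimizer, x, y, w) each iteration.
```

The exponentiated-gradient update is just one way to realize "learnable" importance weights; it recovers uniform adversarial training as eta goes to 0 and concentrates the risk on the hardest examples as eta grows, which matches the adaptive-focusing behavior the abstract describes.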


Related research

02/24/2021 · Adversarial Robustness with Non-uniform Perturbations
Robustness of machine learning models is critical for security related a...

08/01/2023 · Doubly Robust Instance-Reweighted Adversarial Training
Assigning importance weights to adversarial data has achieved great succ...

11/12/2017 · Machine vs Machine: Minimax-Optimal Defense Against Adversarial Examples
Recently, researchers have discovered that the state-of-the-art object c...

04/09/2016 · A General Retraining Framework for Scalable Adversarial Classification
Traditional classification algorithms assume that training and test data...

07/20/2023 · Shared Adversarial Unlearning: Backdoor Mitigation by Unlearning Shared Adversarial Examples
Backdoor attacks are serious security threats to machine learning models...

12/29/2020 · With False Friends Like These, Who Can Have Self-Knowledge?
Adversarial examples arise from excessive sensitivity of a model. Common...

09/23/2020 · Adversarial robustness via stochastic regularization of neural activation sensitivity
Recent works have shown that the input domain of any machine learning cl...
