Playing the Game of Universal Adversarial Perturbations

09/20/2018
by Julien Perolat et al.

We study the problem of learning classifiers that are robust to universal adversarial perturbations. While prior work approaches this problem via robust optimization, adversarial training, or input transformation, we instead phrase it as a two-player zero-sum game. In this new formulation, both players play the same game simultaneously: one player chooses a classifier that minimizes a classification loss, whilst the other crafts a universal adversarial perturbation that increases the same loss when applied to every sample in the training set. By observing that choosing a classifier (respectively, crafting an adversarial perturbation) is a best response to the other player's current strategy, we propose a novel extension of a game-theoretic algorithm, namely fictitious play, to the domain of training robust classifiers. Finally, we empirically demonstrate the robustness and versatility of our approach in two defence scenarios where universal attacks are performed on several image classification datasets: CIFAR10, CIFAR100 and ImageNet.
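
Fictitious play alternates best responses to the opponent's empirical mixture of past strategies: at each round the defender trains a classifier against the perturbations found so far, and the attacker crafts a new universal perturbation against the classifiers found so far. Below is a minimal sketch of such a loop, assuming PyTorch; make_model, the optimisers, the step counts and the L-infinity budget epsilon are illustrative placeholders rather than the authors' exact procedure.

```python
# Minimal sketch of a fictitious-play defence loop against universal perturbations.
# Assumes PyTorch; all architectures and hyperparameters are illustrative placeholders.
import random
import torch
import torch.nn.functional as F

def make_model(num_classes=10):
    # Tiny placeholder classifier; the paper evaluates on standard image classifiers.
    return torch.nn.Sequential(
        torch.nn.Conv2d(3, 32, 3, padding=1), torch.nn.ReLU(),
        torch.nn.AdaptiveAvgPool2d(1), torch.nn.Flatten(),
        torch.nn.Linear(32, num_classes))

def best_response_classifier(perturbations, loader, device, epochs=1):
    """Train a fresh classifier against the uniform mixture of past perturbations."""
    model = make_model().to(device)
    opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
    for _ in range(epochs):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            delta = random.choice(perturbations)          # sample one past attack
            loss = F.cross_entropy(model(torch.clamp(x + delta, 0, 1)), y)
            opt.zero_grad(); loss.backward(); opt.step()
    return model

def best_response_perturbation(classifiers, loader, device,
                               epsilon=8 / 255, steps=200, lr=0.01):
    """Craft one universal perturbation raising the average loss of past classifiers."""
    x0, _ = next(iter(loader))
    delta = torch.zeros_like(x0[:1], device=device, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    batches = iter(loader)
    for _ in range(steps):
        try:
            x, y = next(batches)
        except StopIteration:
            batches = iter(loader)
            x, y = next(batches)
        x, y = x.to(device), y.to(device)
        # Gradient *ascent* on the classification loss, averaged over the classifier pool.
        loss = -sum(F.cross_entropy(m(torch.clamp(x + delta, 0, 1)), y)
                    for m in classifiers) / len(classifiers)
        opt.zero_grad(); loss.backward(); opt.step()
        with torch.no_grad():
            delta.clamp_(-epsilon, epsilon)               # stay inside the L-infinity ball
    return delta.detach()

def fictitious_play(loader, device, rounds=10):
    x0, _ = next(iter(loader))
    classifiers = [make_model().to(device)]                     # undefended starting classifier
    perturbations = [torch.zeros_like(x0[:1], device=device)]   # start from the zero perturbation
    for _ in range(rounds):
        # Each player best-responds to the empirical mixture of the opponent's past strategies.
        perturbations.append(best_response_perturbation(classifiers, loader, device))
        classifiers.append(best_response_classifier(perturbations, loader, device))
    return classifiers, perturbations
```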

Related research

Defending against Universal Perturbations with Shared Adversarial Training (12/10/2018)
Classifiers such as deep neural networks have been shown to be vulnerabl...

Universal Adversarial Training (11/27/2018)
Standard adversarial attacks change the predicted class label of an imag...

Adversarial Images through Stega Glasses (10/15/2020)
This paper explores the connection between steganography and adversarial...

Adversaries in Online Learning Revisited: with applications in Robust Optimization and Adversarial training (01/27/2021)
We revisit the concept of "adversary" in online learning, motivated by s...

Jacobian Regularization for Mitigating Universal Adversarial Perturbations (04/21/2021)
Universal Adversarial Perturbations (UAPs) are input perturbations that ...

Robust Attacks against Multiple Classifiers (06/06/2019)
We address the challenge of designing optimal adversarial noise algorith...

Machine Learning's Dropout Training is Distributionally Robust Optimal (09/13/2020)
This paper shows that dropout training in Generalized Linear Models is t...
