When Should You Defend Your Classifier – A Game-theoretical Analysis of Countermeasures against Adversarial Examples

08/17/2021
by Maximilian Samsinger, et al.

Adversarial machine learning, i.e., increasing the robustness of machine learning algorithms against so-called adversarial examples, is now an established field. Yet, newly proposed methods are evaluated and compared under unrealistic scenarios: the costs incurred by adversary and defender are ignored, and either all samples or none are adversarially perturbed. We scrutinize these assumptions and propose the advanced adversarial classification game, which incorporates all relevant parameters of an adversary and a defender. In particular, we account for economic factors on both sides and for the fact that all countermeasures against adversarial examples proposed so far reduce accuracy on benign samples. Analyzing in detail the scenario in which both players have two pure strategies, we identify all best responses and conclude that in practical settings, the most influential factor might be the maximum number of adversarial examples.
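The kind of analysis the abstract describes can be illustrated with a toy two-player game. The sketch below is a minimal, hypothetical example: the action names and all payoff values are illustrative placeholders chosen to reflect the abstract's premises (attacking is costly to an undefended classifier, while any defense sacrifices some benign accuracy), not the actual game or parameters from the paper.

```python
# Hypothetical 2x2 adversarial classification game (illustrative only).
# Players: adversary (attack / abstain) and defender (defend / none).
# Payoffs are placeholder values, not taken from the paper.

import itertools

# payoffs[(adv_action, def_action)] = (adversary_utility, defender_utility)
payoffs = {
    ("attack", "defend"):  (-1.0, -0.5),  # attack mostly blocked; defense costs benign accuracy
    ("attack", "none"):    ( 2.0, -2.0),  # attack succeeds against the undefended classifier
    ("abstain", "defend"): ( 0.0, -0.5),  # defense still reduces accuracy on benign samples
    ("abstain", "none"):   ( 0.0,  0.0),  # status quo
}

adv_actions = ["attack", "abstain"]
def_actions = ["defend", "none"]

def pure_equilibria():
    """Return profiles where both players play a best response."""
    equilibria = []
    for a, d in itertools.product(adv_actions, def_actions):
        adv_ok = all(payoffs[(a, d)][0] >= payoffs[(a2, d)][0] for a2 in adv_actions)
        def_ok = all(payoffs[(a, d)][1] >= payoffs[(a, d2)][1] for d2 in def_actions)
        if adv_ok and def_ok:
            equilibria.append((a, d))
    return equilibria

print(pure_equilibria())  # → [] for these payoffs
```

Note that under these placeholder payoffs no pure-strategy equilibrium exists (the players cycle, as in matching pennies), which is one reason a full best-response analysis over all parameter regimes, as the paper performs, is needed.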


