Achieve Optimal Adversarial Accuracy for Adversarial Deep Learning using Stackelberg Game

07/17/2022
by   Xiao-Shan Gao, et al.

Adversarial deep learning aims to train DNNs that are robust against adversarial attacks, and it is one of the major research focuses of deep learning. Game theory has been used to answer some of the basic questions about adversarial deep learning, such as the existence of a classifier with optimal robustness and the existence of optimal adversarial samples for a given class of classifiers. In most previous work, adversarial deep learning was formulated as a simultaneous game, and the strategy spaces were assumed to be certain probability distributions so that a Nash equilibrium exists; this assumption does not hold in practical settings. In this paper, we answer these basic questions for the practical case where the classifiers are DNNs with a given structure, by formulating adversarial deep learning as a sequential game. The existence of Stackelberg equilibria for these games is proved. Furthermore, it is shown that the equilibrium DNN has the largest adversarial accuracy among all DNNs with the same structure when the Carlini-Wagner margin loss is used. The trade-off between robustness and accuracy in adversarial deep learning is also studied from a game-theoretic perspective.
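The sequential-game setup in the abstract can be illustrated with a minimal PyTorch sketch: the classifier acts as the leader, the attacker as the follower who best-responds to the current network, and a Carlini-Wagner style margin loss serves as the payoff. The PGD inner attacker, the l_inf budget eps, the step size alpha, and the margin truncation below are assumptions made for illustration only; they are not details taken from the paper, and this alternating loop only approximates the Stackelberg equilibrium analyzed there.

```python
# Illustrative sketch (not the paper's exact algorithm): leader = DNN classifier,
# follower = perturbation attacker best-responding to the current network.
import torch

def cw_margin_loss(logits, labels):
    """Carlini-Wagner style margin: runner-up logit minus the correct-class logit.
    The attacker maximizes it, the defender minimizes it; > 0 means misclassified."""
    correct = logits.gather(1, labels.unsqueeze(1)).squeeze(1)
    masked = logits.clone()
    masked.scatter_(1, labels.unsqueeze(1), float('-inf'))  # mask out the true class
    runner_up = masked.max(dim=1).values
    return runner_up - correct

def pgd_best_response(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Follower's approximate best response: maximize the margin loss
    within an l_inf ball of radius eps around x (PGD; assumed attack)."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = cw_margin_loss(model(x + delta), y).mean()
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach()
        delta.requires_grad_(True)
    return (x + delta).detach()

def leader_step(model, optimizer, x, y):
    """Leader's move: update the DNN against the follower's best response,
    using a truncated margin loss (truncation at -1 is an assumed choice)."""
    x_adv = pgd_best_response(model, x, y)
    loss = cw_margin_loss(model(x_adv), y).clamp(min=-1.0).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this sketch the order of play matters: the attacker sees the fixed network before choosing its perturbation, which is what distinguishes the Stackelberg (sequential) formulation from the simultaneous game discussed in earlier work.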


Related research

07/19/2017 - Regular Potential Games
A fundamental problem with the Nash equilibrium concept is the existence...

02/26/2020 - Randomization matters. How to defend against strong adversarial attacks
Is there a classifier that ensures optimal robustness against all advers...

03/22/2019 - Deep Fictitious Play for Stochastic Differential Games
In this paper, we apply the idea of fictitious play to design deep neura...

06/01/2023 - Score-Based Equilibrium Learning in Multi-Player Finite Games with Imperfect Information
Real-world games, which concern imperfect information, multiple players,...

06/28/2021 - Scalable Optimal Classifiers for Adversarial Settings under Uncertainty
We consider the problem of finding optimal classifiers in an adversarial...

06/06/2019 - Robust Attacks against Multiple Classifiers
We address the challenge of designing optimal adversarial noise algorith...

03/12/2021 - Game-theoretic Understanding of Adversarially Learned Features
This paper aims to understand adversarial attacks and defense from a new...
