Towards Optimal Randomized Strategies in Adversarial Example Game

06/29/2023
by Jiahao Xie, et al.

The vulnerability of deep neural network models to adversarial example attacks is a practical challenge in many artificial intelligence applications. A recent line of work shows that the use of randomization in adversarial training is the key to finding optimal strategies against adversarial example attacks. However, in a fully randomized setting where both the defender and the attacker can use randomized strategies, there is no efficient algorithm for finding such an optimal strategy. To fill this gap, we propose the first algorithm of its kind, called FRAT, which models the problem with a new infinite-dimensional continuous-time flow on probability distribution spaces. FRAT maintains a lightweight mixture of models for the defender, with the flexibility to efficiently update mixing weights and model parameters at each iteration. Furthermore, FRAT utilizes lightweight sampling subroutines to construct a randomized strategy for the attacker. We prove that the continuous-time limit of FRAT converges to a mixed Nash equilibrium of the zero-sum game formed by the defender and the attacker. Experimental results also demonstrate the efficiency of FRAT on the CIFAR-10 and CIFAR-100 datasets.
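The abstract outlines FRAT's structure: the defender keeps a weighted mixture of models and updates both the mixing weights and the model parameters each iteration, while the attacker draws randomized perturbations from a sampling subroutine. The snippet below is a minimal, hypothetical PyTorch sketch of such a fully randomized training loop; the multiplicative-weights update, the noisy sign-gradient attack sampler, the toy architecture, and all hyperparameters are illustrative assumptions and do not reproduce the paper's exact FRAT updates or its continuous-time flow formulation.

```python
# Hypothetical sketch of a fully randomized adversarial training loop:
# a mixture-of-models defender vs. a sampled (randomized) attacker.
# All update rules below are illustrative stand-ins, not FRAT itself.
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_model():
    # Tiny CNN classifier; stands in for any defender architecture.
    return nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 10),
    )

n_models, eps, attack_steps, eta_w = 3, 8 / 255, 5, 0.1   # assumed hyperparameters
models = [make_model() for _ in range(n_models)]
log_weights = torch.zeros(n_models)                        # defender's mixing weights (log-domain)
opts = [torch.optim.SGD(m.parameters(), lr=0.01) for m in models]

def sample_attack(x, y, weights):
    # Attacker's randomized strategy: noisy sign-gradient ascent against the
    # defender's mixture loss (a Langevin-style stand-in for a sampler).
    delta = (torch.rand_like(x) * 2 - 1) * eps
    for _ in range(attack_steps):
        delta.requires_grad_(True)
        loss = sum(w * F.cross_entropy(m(x + delta), y)
                   for w, m in zip(weights, models))
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta = (delta + eps * grad.sign()
                     + 0.01 * torch.randn_like(delta)).clamp(-eps, eps)
    return delta.detach()

def training_step(x, y):
    weights = torch.softmax(log_weights, dim=0)
    delta = sample_attack(x, y, weights)
    # Each mixture component has its own loss on the sampled attack.
    losses = [F.cross_entropy(m(x + delta), y) for m in models]
    # Update model parameters against the sampled attack.
    for opt, loss in zip(opts, losses):
        opt.zero_grad()
        loss.backward()
        opt.step()
    # Multiplicative-weights-style step: upweight components with lower loss.
    log_weights.sub_(eta_w * torch.tensor([l.item() for l in losses]))

# Usage with a random batch standing in for CIFAR-10 images:
x, y = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
training_step(x, y)
```

In this sketch, the finite mixture and the per-iteration weight/parameter updates mirror the defender described in the abstract, and the noisy gradient steps mimic a lightweight sampling subroutine for the attacker; the paper's actual dynamics are defined on probability distribution spaces and analyzed in the continuous-time limit.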


