Game Theory for Adversarial Attacks and Defenses

10/08/2021
by   Shorya Sharma, et al.

Adversarial attacks generate adversarial inputs by applying small but intentionally worst-case perturbations to samples from the dataset, causing even state-of-the-art deep neural networks to output incorrect answers with high confidence. In response, adversarial defense techniques have been developed to improve the security and robustness of models. Over time, a game-like competition between attackers and defenders has emerged, in which each player plays their best strategy against the other while maximizing their own payoff. To solve the game, each player chooses an optimal strategy based on a prediction of the opponent's strategy choice. In this work, we take the defender's side and apply game-theoretic approaches to defending against attacks. We use two randomization methods, random initialization and stochastic activation pruning, to create diversity among networks. In addition, we use one denoising technique, super resolution, to improve models' robustness by preprocessing images before they are attacked. Our experimental results indicate that these three methods can effectively improve the robustness of deep neural networks.
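Of the three defenses named above, stochastic activation pruning is the most mechanical: activations in a layer are randomly dropped, with small-magnitude units more likely to be pruned, and the survivors are rescaled so the layer's expected output is preserved (as in inverted dropout). The sketch below illustrates that idea in NumPy; the `keep_fraction` parameter and the exact sampling scheme are illustrative assumptions, not the formulation used in the paper.

```python
import numpy as np

def stochastic_activation_pruning(activations, keep_fraction=0.5, rng=None):
    """Simplified sketch of stochastic activation pruning.

    Each unit survives with probability that grows with its magnitude;
    survivors are rescaled by the inverse of their survival probability
    so the layer's output is unbiased in expectation. The sampling
    scheme here (k independent magnitude-weighted draws) is an
    illustrative assumption, not the paper's exact procedure.
    """
    rng = np.random.default_rng() if rng is None else rng
    a = np.asarray(activations, dtype=float)
    flat = a.ravel()
    mag = np.abs(flat)
    total = mag.sum()
    if total == 0.0:
        return a.copy()  # nothing to prune in an all-zero layer
    p = mag / total                          # sampling distribution over units
    k = max(1, int(keep_fraction * flat.size))
    # probability that unit i is picked in at least one of k draws
    keep_prob = 1.0 - (1.0 - p) ** k
    mask = rng.random(flat.size) < keep_prob
    pruned = np.where(mask, flat / np.maximum(keep_prob, 1e-12), 0.0)
    return pruned.reshape(a.shape)
```

Because the pruning mask is resampled on every forward pass, an attacker computing gradients sees a different effective network each time, which is precisely the source of randomness the defense relies on.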


Related research

- Stochastic Activation Pruning for Robust Adversarial Defense (03/05/2018)
- Evaluating Robustness of Deep Image Super-Resolution against Adversarial Attacks (04/12/2019)
- Understanding the Logit Distributions of Adversarially-Trained Deep Neural Networks (08/26/2021)
- Game Theoretic Mixed Experts for Combinational Adversarial Machine Learning (11/26/2022)
- Super-Efficient Super Resolution for Fast Adversarial Defense at the Edge (12/29/2021)
- Block Switching: A Stochastic Approach for Deep Learning Security (02/18/2020)
- Towards Achieving Adversarial Robustness by Enforcing Feature Consistency Across Bit Planes (04/01/2020)
