Adaptive Modeling Against Adversarial Attacks

12/23/2021
by Zhiwen Yan, et al.

Adversarial training, the process of training a deep learning model on adversarial examples, is one of the most successful adversarial defense methods for deep learning models. We find that the white-box robustness of an adversarially trained model can be further improved by fine-tuning the model at inference time to adapt to the adversarial input, exploiting the extra information that input carries. We introduce an algorithm that "post-trains" the model at inference time on existing training data drawn from the original output class and a "neighbor" class. With this algorithm, the accuracy of a pre-trained Fast-FGSM CIFAR10 classifier base model against a white-box projected gradient descent (PGD) attack can be significantly improved from its baseline of 46.8%.
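The abstract only sketches the method at a high level. Below is a minimal PyTorch sketch of what such inference-time post-training could look like, assuming an image classifier and a labeled training set. The function name post_train, the hyperparameters, and the use of plain cross-entropy fine-tuning (standing in for whatever exact post-training loss the paper uses) are illustrative assumptions, not the authors' implementation.

    # Sketch only: adapt a throwaway copy of the model to an adversarial input
    # by fine-tuning on existing training data from the model's top-2 classes.
    import copy
    import torch
    import torch.nn.functional as F
    from torch.utils.data import DataLoader, Subset

    def post_train(model, x_adv, train_set, steps=20, lr=1e-3,
                   batch_size=64, device="cpu"):
        model.eval()
        with torch.no_grad():
            logits = model(x_adv.unsqueeze(0).to(device))
            # Original output class plus its nearest "neighbor" class.
            top2 = logits.topk(2, dim=1).indices.squeeze(0).tolist()

        # Select training examples from the two candidate classes.
        # (torchvision's CIFAR10 exposes integer labels via .targets.)
        labels = getattr(train_set, "targets", [y for _, y in train_set])
        idx = [i for i, y in enumerate(labels) if y in top2]
        loader = DataLoader(Subset(train_set, idx),
                            batch_size=batch_size, shuffle=True)

        adapted = copy.deepcopy(model)   # never touch the deployed weights
        adapted.train()
        opt = torch.optim.SGD(adapted.parameters(), lr=lr)

        it = iter(loader)
        for _ in range(steps):
            try:
                xb, yb = next(it)
            except StopIteration:
                it = iter(loader)
                xb, yb = next(it)
            opt.zero_grad()
            loss = F.cross_entropy(adapted(xb.to(device)), yb.to(device))
            loss.backward()
            opt.step()

        # Re-classify the adversarial input with the adapted copy.
        adapted.eval()
        with torch.no_grad():
            return adapted(x_adv.unsqueeze(0).to(device)).argmax(dim=1).item()

Because the deployed weights are never modified, each adversarial input gets its own short-lived adapted copy; the cost is a handful of gradient steps per query, traded for the robustness gain the abstract reports.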


Related research

11/07/2020 · Bridging the Performance Gap between FGSM and PGD Adversarial Training
Deep learning achieves state-of-the-art performance in many tasks but ex...

08/27/2020 · Adversarial Eigen Attack on Black-Box Models
Black-box adversarial attack has attracted a lot of research interests f...

03/21/2018 · Adversarial Defense based on Structure-to-Signal Autoencoders
Adversarial attack methods have demonstrated the fragility of deep neura...

05/04/2022 · CE-based white-box adversarial attacks will not work using super-fitting
Deep neural networks are widely used in various fields because of their ...

09/19/2023 · Language Guided Adversarial Purification
Adversarial purification using generative models demonstrates strong adv...

02/05/2018 · Robust Pre-Processing: A Robust Defense Method Against Adversary Attack
Deep learning algorithms and networks are vulnerable to perturbed inputs...

06/13/2021 · ATRAS: Adversarially Trained Robust Architecture Search
In this paper, we explore the effect of architecture completeness on adv...
