A Multi-strength Adversarial Training Method to Mitigate Adversarial Attacks

05/27/2017
by Chang Song, et al.

Recent works have revealed that deep neural networks (DNNs) are vulnerable to so-called adversarial attacks, in which input examples are intentionally perturbed to fool DNNs. In this work, we revisit adversarial training, the DNN training process that incorporates adversarial examples into the training dataset so as to improve a DNN's resilience to adversarial attacks. Our experiments show that different adversarial strengths, i.e., perturbation levels of adversarial examples, have different working zones in which they resist the attack. Based on this observation, we propose a multi-strength adversarial training method (MAT) that combines adversarial training examples of different strengths to defend against adversarial attacks. Two training structures, mixed MAT and parallel MAT, are developed to facilitate the tradeoff between training time and memory occupation. Our results show that MAT can substantially reduce the accuracy degradation of deep learning systems under adversarial attacks on MNIST, CIFAR-10, CIFAR-100, and SVHN.
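The abstract does not spell out the training loop, so the following is a minimal sketch of what one mixed-MAT training step could look like, assuming FGSM is used to craft the adversarial examples and that "mixed MAT" means examples generated at several perturbation strengths are blended into every batch. The function names (fgsm, mat_train_step) and the strength schedule epsilons are illustrative, not taken from the paper.

    # Minimal sketch of a mixed multi-strength adversarial training (MAT) step.
    # Assumptions (not from the abstract above): FGSM generates the adversarial
    # examples, and "mixed" means all strengths share one training batch.
    import torch
    import torch.nn.functional as F

    def fgsm(model, x, y, eps):
        """Fast Gradient Sign Method attack at perturbation strength eps."""
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        # One signed-gradient step, clipped to the valid input range [0, 1].
        return (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

    def mat_train_step(model, optimizer, x, y, epsilons=(0.0, 0.1, 0.2, 0.3)):
        """One step on clean (eps = 0) plus multi-strength adversarial examples."""
        batches = [x if eps == 0.0 else fgsm(model, x, y, eps) for eps in epsilons]
        inputs = torch.cat(batches)        # mix all strengths into one batch
        targets = y.repeat(len(epsilons))  # labels are unchanged by the attack
        optimizer.zero_grad()              # clear gradients left by the attacks
        loss = F.cross_entropy(model(inputs), targets)
        loss.backward()
        optimizer.step()
        return loss.item()

One plausible reading of the two structures: the mixed form above keeps a single model in memory at the cost of a larger effective batch per step, while a parallel variant would dedicate a separate branch or model copy to each strength, trading extra memory for shorter per-batch training time.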


Related research

06/08/2022 | Latent Boundary-guided Adversarial Training
Deep Neural Networks (DNNs) have recently achieved great success in many...

03/10/2023 | Do we need entire training data for adversarial training?
Deep Neural Networks (DNNs) are being used to solve a wide range of prob...

03/11/2022 | Learning from Attacks: Attacking Variational Autoencoder for Improving Image Classification
Adversarial attacks are often considered as threats to the robustness of...

02/23/2018 | DeepDefense: Training Deep Neural Networks with Improved Robustness
Despite the efficacy on a variety of computer vision tasks, deep neural...

10/24/2020 | ATRO: Adversarial Training with a Rejection Option
This paper proposes a classification framework with a rejection option t...

08/17/2022 | Two Heads are Better than One: Robust Learning Meets Multi-branch Models
Deep neural networks (DNNs) are vulnerable to adversarial examples, in w...

02/14/2018 | Security Analysis and Enhancement of Model Compressed Deep Learning Systems under Adversarial Attacks
DNN is presenting human-level performance for many complex intelligent t...
