Enhancing the Robustness of Deep Neural Networks by Boundary Conditional GAN

02/28/2019
by Ke Sun, et al.

Deep neural networks have been widely deployed in various machine learning tasks. However, recent works have demonstrated that they are vulnerable to adversarial examples: carefully crafted small perturbations that cause the network to misclassify its input. In this work, we propose a novel defense mechanism called Boundary Conditional GAN to enhance the robustness of deep neural networks against adversarial examples. Boundary Conditional GAN, a modified version of the Conditional GAN, can generate boundary samples with true labels near the decision boundary of a pre-trained classifier. These boundary samples are fed back to the pre-trained classifier as data augmentation to make the decision boundary more robust. We empirically show that the model improved by our approach consistently and successfully defends against various types of adversarial attacks. We also provide a quantitative investigation of the improvement in robustness and visualizations of the decision boundaries to justify the effectiveness of our strategy. This new defense mechanism, which uses boundary samples to enhance the robustness of networks, opens up a new way to defend against adversarial attacks consistently.
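For a concrete picture of the augmentation loop described above, the following is a minimal PyTorch sketch, not the authors' implementation. The generator signature generator(z, y), the latent dimension, and the top-1/top-2 margin threshold tau are all illustrative assumptions. The paper's Boundary Conditional GAN modifies the Conditional GAN objective so the generator targets the decision boundary directly; this sketch approximates that behavior by rejection-sampling from an ordinary conditional generator and keeping only low-margin samples.

    import torch
    import torch.nn.functional as F

    def select_boundary_samples(generator, classifier, n, num_classes,
                                latent_dim=100, tau=0.1, device="cpu"):
        """Sample from a conditional generator and keep only samples whose
        top-1 vs. top-2 classifier margin is below tau, i.e. samples that
        lie close to the classifier's decision boundary. (Assumed generator
        signature: generator(z, y) -> images conditioned on labels y.)"""
        z = torch.randn(n, latent_dim, device=device)
        y = torch.randint(0, num_classes, (n,), device=device)
        with torch.no_grad():
            x = generator(z, y)                      # samples labeled with y
            probs = F.softmax(classifier(x), dim=1)
            top2 = probs.topk(2, dim=1).values
            margin = top2[:, 0] - top2[:, 1]         # small margin => near boundary
        keep = margin < tau
        return x[keep], y[keep]

    def augment_classifier(classifier, generator, real_loader, num_classes,
                           epochs=1, lr=1e-4, device="cpu"):
        """Fine-tune the pre-trained classifier on a mix of real data and
        boundary samples, using the labels the generator was conditioned on."""
        opt = torch.optim.Adam(classifier.parameters(), lr=lr)
        for _ in range(epochs):
            for x_real, y_real in real_loader:
                x_real, y_real = x_real.to(device), y_real.to(device)
                x_b, y_b = select_boundary_samples(
                    generator, classifier, n=x_real.size(0),
                    num_classes=num_classes, device=device)
                x = torch.cat([x_real, x_b])
                y = torch.cat([y_real, y_b])
                loss = F.cross_entropy(classifier(x), y)
                opt.zero_grad()
                loss.backward()
                opt.step()
        return classifier

Mixing boundary samples with the original training data, rather than training on boundary samples alone, reflects the data-augmentation framing of the abstract: the goal is to smooth the decision boundary without degrading accuracy on clean inputs.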

