A Learning and Masking Approach to Secure Learning

09/13/2017
by Linh Nguyen, et al.

Deep Neural Networks (DNNs) have been shown to be vulnerable to adversarial examples, which are data points cleverly constructed to fool the classifier. Such attacks can be devastating in practice, especially as DNNs are applied to increasingly critical tasks such as image recognition in autonomous driving. In this paper, we introduce a new perspective on the problem. We do so by first defining robustness of a classifier to adversarial exploitation. Next, we show that the problem of adversarial example generation can be posed as a learning problem. We also categorize attacks in the literature into high and low perturbation attacks; well-known attacks like the fast-gradient sign method (FGSM) and our attack produce higher-perturbation adversarial examples, while the more potent but computationally inefficient Carlini-Wagner (CW) attack produces low-perturbation ones. Next, we show that the dual of the attack learning problem can be used as a defensive technique that is effective against high perturbation attacks. Finally, we show that a classifier masking method, achieved by adding noise to a neural network's logit output, protects against low perturbation attacks such as the CW attack. We also show that both our learning and masking defenses can work simultaneously to protect against multiple attacks. We demonstrate the efficacy of our techniques by experimenting with the MNIST and CIFAR-10 datasets.
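To make the two ingredients named in the abstract concrete, here is a minimal sketch in PyTorch of (a) an FGSM-style high perturbation attack and (b) a wrapper that masks a classifier by adding noise to its logit output at inference time. The noise scale `sigma`, the perturbation budget `epsilon`, and the `MaskedClassifier` name are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                epsilon: float = 0.1) -> torch.Tensor:
    """Fast-gradient sign method (FGSM): one gradient-sign step on the loss,
    an example of the 'high perturbation' attacks discussed above."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, then clip to a valid pixel range.
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()


class MaskedClassifier(nn.Module):
    """Illustrative masking defense: add zero-mean Gaussian noise to the logit
    output of a trained network. The noise scale sigma is an assumed
    hyperparameter; the abstract does not specify how the noise is drawn."""

    def __init__(self, base_model: nn.Module, sigma: float = 1.0):
        super().__init__()
        self.base_model = base_model
        self.sigma = sigma

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        logits = self.base_model(x)
        # Noisy logits obscure the gradient and margin information that
        # optimization-based, low perturbation attacks such as CW rely on.
        return logits + self.sigma * torch.randn_like(logits)
```

In use, a trained MNIST or CIFAR-10 classifier would be wrapped as `MaskedClassifier(model, sigma=...)` before evaluation; with large logit margins the argmax prediction is usually unchanged, while the attacker's optimization signal becomes noisy.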

