Defense against adversarial attacks on deep convolutional neural networks through nonlocal denoising

06/25/2022
by   Sandhya Aneja, et al.

Despite substantial advances in network-architecture performance, susceptibility to adversarial attacks makes deep learning challenging to deploy in safety-critical applications. This paper proposes a data-centric approach to this problem. A nonlocal denoising method with different luminance values was used to generate adversarial examples from the Modified National Institute of Standards and Technology (MNIST) and Canadian Institute for Advanced Research (CIFAR-10) data sets. Under perturbation, the method provided absolute accuracy improvements of up to 9.3% on the MNIST data set and 13% on the CIFAR-10 data set; transformed images with higher luminance values increase the robustness of the classifier. We have shown that transfer learning is disadvantageous for adversarial machine learning. The results indicate that simple adversarial examples can improve resilience and make deep learning easier to apply in various applications.
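The abstract describes generating robust training examples via nonlocal denoising. As an illustration of the underlying filter only (not the paper's exact pipeline), the following is a minimal pure-NumPy sketch of nonlocal-means denoising: each pixel is replaced by a weighted average of pixels whose surrounding patches look similar. The function name, patch/window sizes, and the filter-strength parameter `h` are illustrative assumptions.

```python
import numpy as np

def nl_means_denoise(img, patch=3, window=7, h=0.2):
    """Sketch of nonlocal-means denoising for a 2-D grayscale image.

    patch  -- side length of the square patch compared between pixels
    window -- side length of the search window around each pixel
    h      -- filter strength; larger h averages more aggressively
    (All defaults are illustrative, not taken from the paper.)
    """
    pad = patch // 2
    half = window // 2
    # Reflect-pad so patches at the border are well defined.
    padded = np.pad(img, pad, mode="reflect")
    rows, cols = img.shape
    out = np.zeros_like(img, dtype=float)
    for i in range(rows):
        for j in range(cols):
            ref = padded[i:i + patch, j:j + patch]
            weights, total = 0.0, 0.0
            # Compare the reference patch with every patch in the window.
            for di in range(max(0, i - half), min(rows, i + half + 1)):
                for dj in range(max(0, j - half), min(cols, j + half + 1)):
                    cand = padded[di:di + patch, dj:dj + patch]
                    d2 = np.mean((ref - cand) ** 2)
                    w = np.exp(-d2 / (h * h))  # similar patches get high weight
                    weights += w
                    total += w * img[di, dj]
            out[i, j] = total / weights
    return out
```

Variants at different luminance levels, as the abstract mentions, could then be produced by rescaling the denoised output (e.g. `np.clip(out * 1.25, 0, 1)`); the exact luminance transform used by the authors is not specified here.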

