QuSecNets: Quantization-based Defense Mechanism for Securing Deep Neural Network against Adversarial Attacks

11/04/2018
by Hassan Ali, et al.

Deep Neural Networks (DNNs) have recently been shown to be vulnerable to adversarial attacks, in which input examples are perturbed to fool a DNN into confidence reduction and (targeted or random) misclassification. In this paper, we demonstrate how an efficient quantization technique can be leveraged to increase the robustness of a given DNN against adversarial attacks. We present two quantization-based defense mechanisms, namely Constant Quantization (CQ) and Variable Quantization (VQ), applied at the input to increase the robustness of DNNs. In CQ, the intensity of each input pixel is quantized according to a fixed number of quantization levels, while in VQ the quantization levels are recursively updated during the training phase, thereby providing a stronger defense mechanism. We apply our techniques to Convolutional Neural Networks (CNNs, a particular type of DNN heavily used in vision-based applications) against adversarial attacks from the open-source Cleverhans library. Our experimental results show a 1%-25% increase in the adversarial accuracy for MNIST and a 0%-9% increase for CIFAR-10.
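As a concrete illustration of the Constant Quantization idea described above, the sketch below snaps input pixel intensities to a fixed set of evenly spaced levels before the image reaches the classifier. The function name, level count, and use of NumPy are assumptions made for illustration; this is not the authors' released implementation.

```python
import numpy as np

def constant_quantize(x, num_levels=4):
    """Illustrative Constant Quantization (CQ): quantize pixel intensities
    in [0, 1] to num_levels evenly spaced values.

    Snapping each pixel to the nearest quantization level discards small
    adversarial perturbations that fall within a quantization bin.
    """
    x = np.clip(x, 0.0, 1.0)
    levels = num_levels - 1
    # Map each pixel to the nearest level index, then back to [0, 1].
    return np.round(x * levels) / levels

# Example usage on a batch of MNIST-like images (values in [0, 1]):
batch = np.random.rand(8, 28, 28, 1)
quantized = constant_quantize(batch, num_levels=2)  # 2 levels = binarization
```

Variable Quantization would instead treat the quantization thresholds as trainable parameters (for example, inside a differentiable, sigmoid-based quantizer) that are updated alongside the network weights during training; the exact formulation is given in the paper.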


