QUANOS: Adversarial Noise Sensitivity Driven Hybrid Quantization of Neural Networks

04/22/2020
by Priyadarshini Panda, et al.

Deep Neural Networks (DNNs) have been shown to be vulnerable to adversarial attacks, wherein a model is fooled by slight perturbations applied to its input. With the advent of the Internet of Things and the need to enable intelligence in embedded devices, low-power and secure hardware implementations of DNNs are vital. In this paper, we investigate the use of quantization to potentially resist adversarial attacks. Several recent studies have reported remarkable results in reducing the energy requirement of a DNN through quantization. However, no prior work has considered the relationship between the adversarial sensitivity of a DNN and its effect on quantization. We propose QUANOS, a framework that performs layer-specific hybrid quantization based on Adversarial Noise Sensitivity (ANS). We identify a novel noise stability metric (ANS) for DNNs, i.e., the sensitivity of each layer's computation to adversarial noise. ANS allows for a principled way of determining the optimal bit-width per layer that yields adversarial robustness as well as energy efficiency with minimal loss in accuracy. Essentially, QUANOS assigns each layer a significance based on its contribution to adversarial perturbation and scales the layer's precision accordingly. A key advantage of QUANOS is that it does not rely on a pre-trained model and can be applied in the initial stages of training. We evaluate the benefits of QUANOS on precision-scalable Multiply and Accumulate (MAC) hardware architectures with data gating and subword parallelism capabilities. Our experiments on the CIFAR10 and CIFAR100 datasets show that QUANOS outperforms a homogeneously quantized 8-bit precision baseline in terms of adversarial robustness (3%-4% higher) while yielding improved compression (>5x) and energy savings (>2x) at iso-accuracy.
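
To make the mechanism concrete, below is a minimal sketch (not the authors' released code) of how a layer-wise ANS score might be computed and then mapped to per-layer bit-widths. The FGSM attack, the relative-activation-change formalization of ANS, the linear ANS-to-bit-width mapping, and the mapping's direction (more sensitive layers keep more bits here) are all illustrative assumptions.

```python
# Hypothetical sketch of ANS-driven bit-width assignment; PyTorch chosen for
# illustration. Attack, metric, and mapping below are assumptions, not the
# paper's exact formulation.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, eps=8 / 255):
    """Craft an FGSM adversarial example (one common attack choice)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

def layer_activations(model, x, layers):
    """Record each named layer's output with forward hooks."""
    acts, hooks = {}, []
    for name, mod in layers.items():
        hooks.append(mod.register_forward_hook(
            lambda m, inp, out, n=name: acts.__setitem__(n, out.detach())))
    model(x)
    for h in hooks:
        h.remove()
    return acts

def adversarial_noise_sensitivity(model, x, y, layers):
    """ANS per layer: relative change in activations under adversarial noise
    (an assumed formalization of the paper's sensitivity metric)."""
    clean = layer_activations(model, x, layers)
    adv = layer_activations(model, fgsm_perturb(model, x, y), layers)
    return {n: ((adv[n] - clean[n]).norm() / clean[n].norm()).item()
            for n in layers}

def assign_bit_widths(ans, low=2, high=8):
    """Linearly map ANS scores to bit-widths; here more sensitive layers keep
    more bits (invert the mapping if lower precision should mask the noise)."""
    lo, hi = min(ans.values()), max(ans.values())
    return {n: round(low + (high - low) * (s - lo) / (hi - lo + 1e-12))
            for n, s in ans.items()}

# Example usage with hypothetical layer names:
#   layers = {"conv1": model.conv1, "conv2": model.conv2}
#   bits = assign_bit_widths(adversarial_noise_sensitivity(model, x, y, layers))
```

Since these scores only require a forward and backward pass on a small batch, they can be recomputed cheaply during early epochs, consistent with the abstract's point that QUANOS need not start from a pre-trained model.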

Related research

05/09/2021  Efficiency-driven Hardware Optimization for Adversarially Robust Neural Networks
With a growing need to enable intelligence in embedded devices in the In...

08/25/2020  Rethinking Non-idealities in Memristive Crossbars for Adversarial Robustness in Neural Networks
Deep Neural Networks (DNNs) have been shown to be prone to adversarial a...

08/09/2023  SAfER: Layer-Level Sensitivity Assessment for Efficient and Robust Neural Network Inference
Deep neural networks (DNNs) demonstrate outstanding performance across m...

07/24/2023  An Estimator for the Sensitivity to Perturbations of Deep Neural Networks
For Deep Neural Networks (DNNs) to become useful in safety-critical appl...

08/31/2020  An Integrated Approach to Produce Robust Models with High Efficiency
Deep Neural Networks (DNNs) need to be both efficient and robust for pr...

08/03/2020  Hardware Accelerator for Adversarial Attacks on Deep Learning Neural Networks
Recent studies identify that Deep learning Neural Networks (DNNs) are vu...

04/08/2022  Characterizing and Understanding the Behavior of Quantized Models for Reliable Deployment
Deep Neural Networks (DNNs) have gained considerable attention in the pa...
