Defensive Quantization: When Efficiency Meets Robustness

04/17/2019
by Ji Lin, et al.

Neural network quantization is becoming an industry standard for efficiently deploying deep learning models on hardware platforms such as CPUs, GPUs, TPUs, and FPGAs. However, we observe that conventional quantization approaches are vulnerable to adversarial attacks. This paper aims to raise awareness of the security of quantized models, and we design a novel quantization methodology that jointly optimizes the efficiency and robustness of deep learning models. We first conduct an empirical study showing that vanilla quantization suffers more from adversarial attacks. We observe that this inferior robustness comes from an error amplification effect, in which the quantization operation further enlarges the perturbation-induced distance between activations as it propagates through the network. We then propose a novel Defensive Quantization (DQ) method that controls the Lipschitz constant of the network during quantization, so that the magnitude of the adversarial noise remains non-expansive during inference. Extensive experiments on the CIFAR-10 and SVHN datasets demonstrate that our new quantization method can defend neural networks against adversarial examples, and even achieves robustness superior to that of their full-precision counterparts, while maintaining the same hardware efficiency as vanilla quantization approaches. As a by-product, DQ also improves the accuracy of quantized models in the absence of adversarial attacks.
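As a rough illustration (not the authors' implementation), the two ingredients the abstract describes could be sketched as follows: a symmetric uniform quantizer, and an orthogonality-style penalty on a weight matrix. Keeping a layer's rows near-orthonormal is one common way to hold its spectral norm (its Lipschitz constant) near 1, which is the non-expansiveness property DQ targets; the function names and the choice of penalty here are illustrative assumptions.

```python
import numpy as np

def uniform_quantize(w, bits=4):
    """Symmetric uniform quantization of an array to the given bit-width.

    Values are snapped to a grid of step `scale`, so the per-element
    rounding error is at most scale / 2.
    """
    scale = np.max(np.abs(w)) / (2 ** (bits - 1) - 1)
    if scale == 0:
        return w.copy()
    return np.round(w / scale) * scale

def lipschitz_regularizer(w):
    """Orthogonality penalty ||W W^T - I||_F^2 for a 2-D weight matrix.

    Driving this term to zero pushes the rows of W toward an orthonormal
    set, bounding the layer's spectral norm (Lipschitz constant) near 1,
    so adversarial perturbations are not amplified by the layer.
    """
    wwt = w @ w.T
    return float(np.sum((wwt - np.eye(w.shape[0])) ** 2))
```

In a training loop, the regularizer would be added to the task loss for every quantized layer; an orthogonal matrix (e.g. the identity) incurs zero penalty, while ill-conditioned weights are penalized.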
