Qu-ANTI-zation: Exploiting Quantization Artifacts for Achieving Adversarial Outcomes

10/26/2021
by Sanghyun Hong et al.

Quantization is a popular technique that transforms the parameter representation of a neural network from floating-point numbers into lower-precision ones (e.g., 8-bit integers). It reduces the memory footprint and the computational cost at inference, facilitating the deployment of resource-hungry models. However, the parameter perturbations caused by this transformation result in behavioral disparities between the model before and after quantization. For example, a quantized model can misclassify some test-time samples that are otherwise classified correctly. It is not known whether such disparities lead to a new security vulnerability. We hypothesize that an adversary may control this disparity to introduce specific behaviors that activate upon quantization. To study this hypothesis, we weaponize quantization-aware training and propose a new training framework to implement adversarial quantization outcomes. Following this framework, we present three attacks carried out via quantization: (i) an indiscriminate attack for significant accuracy loss; (ii) a targeted attack against specific samples; and (iii) a backdoor attack for controlling the model with an input trigger. We further show that a single compromised model defeats multiple quantization schemes, including robust quantization techniques. Moreover, in a federated learning scenario, we demonstrate that a set of malicious participants who conspire can inject our quantization-activated backdoor. Lastly, we discuss potential countermeasures and show that only re-training consistently removes the attack artifacts. Our code is available at https://github.com/Secure-AI-Systems-Group/Qu-ANTI-zation
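To make the core idea concrete, below is a minimal sketch (not the authors' implementation; see the linked repository for that) of how quantization-aware training could be turned against the model: the full-precision weights are optimized so the float32 model stays accurate while the simulated 8-bit model maps trigger-stamped inputs to an attacker-chosen class. The names `TinyNet`, `quantize_weights`, and `attacker_loss`, the symmetric per-tensor quantizer, and the straight-through estimator are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch: train float weights so that malicious behavior
# appears only after the weights are quantized to 8 bits.
import torch
import torch.nn as nn
import torch.nn.functional as F


def quantize_weights(w: torch.Tensor, num_bits: int = 8) -> torch.Tensor:
    """Simulated symmetric uniform quantization with a straight-through estimator."""
    qmax = 2 ** (num_bits - 1) - 1
    scale = w.detach().abs().max() / qmax
    w_q = torch.clamp(torch.round(w / scale), -qmax - 1, qmax) * scale
    # Forward pass uses the quantized weights; gradients flow to the float weights.
    return w + (w_q - w).detach()


class TinyNet(nn.Module):
    """Two-layer MLP that can run with either float or simulated 8-bit weights."""

    def __init__(self, in_dim: int = 784, hidden: int = 128, classes: int = 10):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden)
        self.fc2 = nn.Linear(hidden, classes)

    def forward(self, x: torch.Tensor, quantized: bool = False) -> torch.Tensor:
        w1, w2 = self.fc1.weight, self.fc2.weight
        if quantized:
            w1, w2 = quantize_weights(w1), quantize_weights(w2)
        h = F.relu(F.linear(x, w1, self.fc1.bias))
        return F.linear(h, w2, self.fc2.bias)


def attacker_loss(model, x, y, trigger_x, target_y, lam: float = 1.0):
    """Keep the float32 model accurate on clean data, while pushing the
    quantized model to assign triggered inputs the attacker's target class."""
    clean = F.cross_entropy(model(x), y)
    malicious = F.cross_entropy(model(trigger_x, quantized=True), target_y)
    return clean + lam * malicious


if __name__ == "__main__":
    # One illustrative optimization step on random data.
    model = TinyNet()
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
    trigger_x = x + 0.5                                  # stand-in for trigger-stamped inputs
    target_y = torch.full((32,), 7, dtype=torch.long)    # attacker-chosen label
    loss = attacker_loss(model, x, y, trigger_x, target_y)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Under these assumptions, deploying the resulting float model looks benign; the backdoor-style behavior would only surface once a downstream user applies a weight quantizer similar to the one simulated during training.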


Related research

Understanding and Overcoming the Challenges of Efficient Transformer Quantization (09/27/2021)
Transformer-based architectures have become the de-facto standard models...

Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights (02/10/2017)
This paper presents incremental network quantization (INQ), a novel meth...

Sharpness-aware Quantization for Deep Neural Networks (11/24/2021)
Network quantization is an effective compression method to reduce the mo...

Trained Uniform Quantization for Accurate and Efficient Neural Network Inference on Fixed-Point Hardware (03/19/2019)
We propose a method of training quantization clipping thresholds for uni...

CEG4N: Counter-Example Guided Neural Network Quantization Refinement (07/09/2022)
Neural networks are essential components of learning-based software syst...

Q-SpiNN: A Framework for Quantizing Spiking Neural Networks (07/05/2021)
A prominent technique for reducing the memory footprint of Spiking Neura...

Rethinking Data-Free Quantization as a Zero-Sum Game (02/19/2023)
Data-free quantization (DFQ) recovers the performance of quantized netwo...
