Quantization-aware Interval Bound Propagation for Training Certifiably Robust Quantized Neural Networks

11/29/2022
by Mathias Lechner, et al.

We study the problem of training and certifying adversarially robust quantized neural networks (QNNs). Quantization is a technique for making neural networks more efficient by running them using low-bit integer arithmetic and is therefore commonly adopted in industry. Recent work has shown that floating-point neural networks that have been verified to be robust can become vulnerable to adversarial attacks after quantization, and certification of the quantized representation is necessary to guarantee robustness. In this work, we present quantization-aware interval bound propagation (QA-IBP), a novel method for training robust QNNs. Inspired by advances in robust learning of non-quantized networks, our training algorithm computes the gradient of an abstract representation of the actual network. Unlike existing approaches, our method can handle the discrete semantics of QNNs. Based on QA-IBP, we also develop a complete verification procedure for verifying the adversarial robustness of QNNs, which is guaranteed to terminate and produce a correct answer. Compared to existing approaches, the key advantage of our verification procedure is that it runs entirely on GPU or other accelerator devices. We demonstrate experimentally that our approach significantly outperforms existing methods and establish the new state-of-the-art for training and certifying the robustness of QNNs.
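The core idea of interval bound propagation is to push an axis-aligned box of inputs through the network layer by layer, and the abstract's key observation is that quantization must be handled inside this propagation rather than applied afterwards. The sketch below is a minimal illustration, not the paper's actual QA-IBP algorithm: it propagates an interval through an affine layer in center/radius form, and through a uniform quantizer by exploiting the fact that rounding and clamping are monotone, so the quantizer can be applied element-wise to the lower and upper bounds. All function names, the `scale` parameter, and the 8-bit symmetric quantization scheme are illustrative assumptions.

```python
import numpy as np

def interval_affine(l, u, W, b):
    """Propagate the interval [l, u] through an affine layer y = W @ x + b.

    Standard IBP form: the center maps through the layer exactly, and the
    radius is scaled by the element-wise absolute value of W.
    """
    c, r = (u + l) / 2.0, (u - l) / 2.0
    c_out = W @ c + b
    r_out = np.abs(W) @ r
    return c_out - r_out, c_out + r_out

def interval_quantize(l, u, scale, n_bits=8):
    """Propagate the interval [l, u] through a uniform quantizer.

    Rounding and clamping are monotone functions, so applying the
    quantizer directly to the bounds yields sound output bounds.
    (Illustrative symmetric n-bit scheme; not the paper's exact setup.)
    """
    qmax = 2 ** (n_bits - 1) - 1
    q = lambda x: np.clip(np.round(x / scale), -qmax - 1, qmax) * scale
    return q(l), q(u)

# Example: bound the output of one quantized affine layer.
l_in = np.array([-1.0, 0.0])
u_in = np.array([1.0, 2.0])
W = np.array([[1.0, -1.0]])
b = np.array([0.5])

l_mid, u_mid = interval_affine(l_in, u_in, W, b)   # exact for one affine layer
l_out, u_out = interval_quantize(l_mid, u_mid, scale=0.5)
```

Because both steps are differentiable almost everywhere (the rounding step is typically handled with a straight-through estimator during training), such bounds can be used directly in a robust training loss, which is the spirit of computing "the gradient of an abstract representation of the actual network".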

Related research

10/23/2021
A Layer-wise Adversarial-aware Quantization Optimization for Improving Robustness
Neural networks are getting better accuracy with higher energy and compu...

12/15/2020
Scalable Verification of Quantized Neural Networks (Technical Report)
Formal verification of neural networks is an active topic of research, a...

10/17/2022
ODG-Q: Robust Quantization via Online Domain Generalization
Quantizing neural networks to low-bitwidth is important for model deploy...

12/29/2020
Improving Adversarial Robustness in Weight-quantized Neural Networks
Neural networks are getting deeper and more computation-intensive nowada...

12/10/2022
QVIP: An ILP-based Formal Verification Approach for Quantized Neural Networks
Deep learning has become a promising programming paradigm in software de...

03/30/2020
Improved Gradient based Adversarial Attacks for Quantized Networks
Neural network quantization has become increasingly popular due to effic...

08/04/2023
RobustMQ: Benchmarking Robustness of Quantized Models
Quantization has emerged as an essential technique for deploying deep ne...
