Impact of Low-bitwidth Quantization on the Adversarial Robustness for Embedded Neural Networks

09/27/2019
by Rémi Bernhard, et al.

As the demand to deploy neural network models on embedded systems grows, and given the associated memory footprint and energy consumption constraints, lighter ways to store neural networks, such as weight quantization, and more efficient inference methods have become major research topics. In parallel, adversarial machine learning has recently attracted significant attention by unveiling critical flaws of machine learning models, especially neural networks. In particular, perturbed inputs called adversarial examples have been shown to fool a model into making incorrect predictions. In this article, we investigate the adversarial robustness of quantized neural networks under different threat models for a classical supervised image classification task. We show that quantization does not offer robust protection and results in a severe form of gradient masking, and we advance some hypotheses to explain this behavior. However, we experimentally observe poor transferability of adversarial examples, which we explain by a quantization value shift phenomenon and gradient misalignment, and we explore how these results can be exploited in an ensemble-based defense.
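To make the setting concrete, the following minimal sketch (not the authors' code) shows post-training uniform k-bit weight quantization and a one-step FGSM adversarial example in PyTorch, then evaluates the example crafted on the full-precision model against its quantized copy to probe transferability. The toy model, random data, bitwidth and epsilon are placeholder assumptions for illustration only.

# Illustrative sketch: uniform k-bit weight quantization + FGSM transfer check.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

def quantize_weights(model, n_bits=2):
    """Return a copy of `model` with each weight tensor uniformly quantized
    to 2**n_bits levels over its own [min, max] range (post-training, per-tensor)."""
    q_model = copy.deepcopy(model)
    levels = 2 ** n_bits - 1
    with torch.no_grad():
        for p in q_model.parameters():
            lo, hi = p.min(), p.max()
            scale = (hi - lo) / levels if hi > lo else torch.tensor(1.0)
            p.copy_(torch.round((p - lo) / scale) * scale + lo)
    return q_model

def fgsm(model, x, y, eps=0.03):
    """Fast Gradient Sign Method: one-step L-infinity perturbation."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()

# Toy usage with a hypothetical small classifier and random data.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 64), nn.ReLU(), nn.Linear(64, 10))
x, y = torch.rand(8, 1, 28, 28), torch.randint(0, 10, (8,))
q_model = quantize_weights(model, n_bits=2)

x_adv = fgsm(model, x, y)  # adversarial examples crafted on the float model
print("float model accuracy on x_adv:", (model(x_adv).argmax(1) == y).float().mean().item())
print("quantized model accuracy on x_adv:", (q_model(x_adv).argmax(1) == y).float().mean().item())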

