
DiverGet: A Search-Based Software Testing Approach for Deep Neural Network Quantization Assessment

07/13/2022
by   Ahmed Haj Yahmed, et al.
Corporation de l'École Polytechnique de Montréal

Quantization is one of the most widely applied Deep Neural Network (DNN) compression strategies when deploying a trained DNN model on an embedded system or a cell phone. This is owing to its simplicity and adaptability to a wide range of applications and circumstances, as opposed to specific Artificial Intelligence (AI) accelerators and compilers that are often designed only for certain specific hardware (e.g., Google Coral Edge TPU). With the growing demand for quantization, ensuring the reliability of this strategy is becoming a critical challenge. Traditional testing methods, which gather more and more genuine data for better assessment, are often not practical because of the large size of the input space and the high similarity between the original DNN and its quantized counterpart. As a result, advanced assessment strategies have become of paramount importance. In this paper, we present DiverGet, a search-based testing framework for quantization assessment. DiverGet defines a space of metamorphic relations that simulate naturally-occurring distortions on the inputs. Then, it optimally explores these relations to reveal the disagreements among DNNs of different arithmetic precision. We evaluate the performance of DiverGet on state-of-the-art DNNs applied to hyperspectral remote sensing images. We chose remote sensing DNNs because they are being increasingly deployed at the edge (e.g., high-lift drones) in critical domains like climate change research and astronomy. Our results show that DiverGet successfully challenges the robustness of established quantization techniques against naturally-occurring shifted data, and outperforms its closest recent competitor, DiffChaser, with a success rate that is (on average) four times higher.
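To make the idea concrete, the sketch below illustrates the kind of disagreement testing the abstract describes: a full-precision model and its quantized counterpart are queried on inputs transformed by a metamorphic relation, and a search looks for an input on which their predictions diverge. This is a minimal illustrative sketch, not DiverGet's implementation: the toy linear "DNN", the Gaussian-noise distortion, and the random search are all stand-ins chosen for brevity (DiverGet uses real remote-sensing DNNs, a richer space of natural distortions, and an optimized search).

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "DNN": a tiny linear classifier (hypothetical weights,
# not one of the paper's hyperspectral models).
W = rng.normal(size=(16, 3))

def predict_fp32(x):
    return int(np.argmax(x @ W))

# Post-training symmetric 8-bit quantization of the weights.
scale = np.abs(W).max() / 127.0
W_q = np.round(W / scale).astype(np.int8)

def predict_int8(x):
    # Dequantize on the fly to emulate quantized inference.
    return int(np.argmax(x @ (W_q.astype(np.float32) * scale)))

def distort(x, severity):
    # One metamorphic relation: additive Gaussian noise, a crude
    # stand-in for the naturally-occurring distortions DiverGet models.
    return x + rng.normal(scale=severity, size=x.shape)

def search_disagreement(x, n_trials=2000, severity=0.5):
    """Random search (a simple stand-in for DiverGet's optimized
    exploration) for a distorted input on which the two arithmetic
    precisions disagree; returns None if no disagreement is found."""
    for _ in range(n_trials):
        x_d = distort(x, severity)
        if predict_fp32(x_d) != predict_int8(x_d):
            return x_d
    return None

x0 = rng.normal(size=16)
adv = search_disagreement(x0)
print("disagreement found:", adv is not None)
```

Because the quantized weights differ from the originals by at most half a quantization step, disagreements only surface for inputs near a decision boundary, which is exactly why a guided search over distortions is more effective than collecting more genuine data.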


09/05/2019

Training High-Performance and Large-Scale Deep Neural Networks with Full 8-bit Integers

Deep neural network (DNN) quantization converting floating-point (FP) da...
12/23/2021

Training Quantized Deep Neural Networks via Cooperative Coevolution

This work considers a challenging Deep Neural Network (DNN) quantization...
08/23/2021

On the Acceleration of Deep Neural Network Inference using Quantized Compressed Sensing

Accelerating deep neural network (DNN) inference on resource-limited dev...
05/27/2019

Differentiable Quantization of Deep Neural Networks

We propose differentiable quantization (DQ) for efficient deep neural ne...
12/18/2019

Neural Networks Weights Quantization: Target None-retraining Ternary (TNT)

Quantization of weights of deep neural networks (DNN) has proven to be a...
02/10/2023

On Achieving Privacy-Preserving State-of-the-Art Edge Intelligence

Deep Neural Network (DNN) Inference in Edge Computing, often called Edge...
03/01/2019

TamperNN: Efficient Tampering Detection of Deployed Neural Nets

Neural networks are powering the deployment of embedded devices and Inte...