Robustness of Neural Networks to Parameter Quantization

03/26/2019
by   Abhishek Murthy, et al.

Quantization, a commonly used technique for reducing the memory footprint of a neural network for edge computing, entails reducing the precision of the floating-point representation used for the parameters of the network. The impact of such round-off errors on the overall performance of the network is typically estimated through testing, which is not exhaustive and therefore cannot guarantee the safety of the model. We present a framework based on Satisfiability Modulo Theories (SMT) solvers to quantify the robustness of neural networks to parameter perturbation. To this end, we introduce notions of local and global robustness that capture the deviation in the confidence of class assignments due to parameter quantization. The robustness notions are then cast as instances of SMT problems and solved automatically using solvers such as dReal. We demonstrate our framework on two simple Multi-Layer Perceptrons (MLPs) that perform binary classification on a two-dimensional input. In addition to quantifying the robustness, we also show that Rectified Linear Unit (ReLU) activations result in higher robustness than linear activations for our MLPs.
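To make the idea of casting a robustness notion as an SMT problem concrete, the sketch below encodes a local parameter-robustness query for a tiny 2-2-1 ReLU MLP as a satisfiability check. It is an illustrative assumption rather than the paper's exact formulation: Z3 is used in place of dReal, the deviation is measured on the network's raw output (logit) rather than the sigmoid confidence (sigmoids involve transcendental arithmetic, which is why a delta-decision solver such as dReal is the natural choice in the paper), and the weights, input point, and bounds eps and delta are made-up values.

# Hypothetical sketch of the SMT encoding: can any quantization-sized
# perturbation of the parameters change the output of a tiny 2-2-1 ReLU MLP
# by more than a tolerance delta at a fixed input? Z3 stands in for dReal,
# and all numeric values (weights, input, eps, delta) are illustrative.
from z3 import Real, If, And, Or, Solver, sat

def relu_smt(z):
    # ReLU encoded as an SMT if-then-else term
    return If(z >= 0, z, 0)

def relu_num(z):
    # Plain-Python ReLU for evaluating the nominal network
    return z if z > 0 else 0.0

def mlp_logit(x1, x2, w, act):
    # 2-2-1 MLP: two hidden units feeding a linear output
    h1 = act(w["w11"] * x1 + w["w12"] * x2 + w["b1"])
    h2 = act(w["w21"] * x1 + w["w22"] * x2 + w["b2"])
    return w["v1"] * h1 + w["v2"] * h2 + w["c"]

# Nominal full-precision parameters (made-up values)
nominal = {"w11": 1.0, "w12": -2.0, "b1": 0.5,
           "w21": -1.5, "w22": 1.0, "b2": 0.0,
           "v1": 2.0, "v2": -1.0, "c": 0.1}

eps = 0.01                   # assumed bound on per-parameter quantization error
delta = 0.05                 # assumed tolerance on the output deviation
x1_val, x2_val = 0.3, -0.7   # input point for the local robustness check

# Perturbed parameters are symbolic, constrained to an eps-box around nominal
perturbed = {name: Real(name) for name in nominal}
in_box = And(*[And(p >= nominal[n] - eps, p <= nominal[n] + eps)
               for n, p in perturbed.items()])

out_nominal = mlp_logit(x1_val, x2_val, nominal, relu_num)
out_perturbed = mlp_logit(x1_val, x2_val, perturbed, relu_smt)
deviation = out_perturbed - out_nominal

# Ask the solver for a counterexample: a perturbation inside the eps-box that
# moves the output by more than delta. UNSAT means the network is locally robust.
solver = Solver()
solver.add(in_box, Or(deviation > delta, deviation < -delta))
if solver.check() == sat:
    print("counterexample:", solver.model())
else:
    print("locally robust: no eps-perturbation shifts the output by more than delta")

In the paper's setting, the analogous query would presumably be posed to dReal directly over the confidence values, and the global notion of robustness would range over the input domain rather than a single point.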
