An Automata-Theoretic Approach to Synthesizing Binarized Neural Networks

07/29/2023
by Ye Tao, et al.

Deep neural networks (DNNs) have been widely and successfully applied to a variety of tasks. However, their high computing and storage costs make deployment on resource-constrained devices a significant concern. Quantization has emerged as an effective way to reduce these costs with little accuracy degradation by mapping floating-point numbers to low-bit-width fixed-point representations, giving rise to quantized neural networks (QNNs), of which binarized neural networks (BNNs), restricted to binary values, are a special case. Another concern about neural networks is their vulnerability and lack of interpretability. Despite active research on the trustworthiness of DNNs, few approaches have been proposed for QNNs. To this end, this paper presents an automata-theoretic approach to synthesizing BNNs that meet designated properties. More specifically, we define a temporal logic, called BLTL, as the specification language. We show that each BLTL formula can be transformed into an automaton on finite words. To deal with the state-explosion problem, we provide a tableau-based approach in the actual implementation. In the synthesis procedure, we use SMT solvers to detect the existence of a model (i.e., a BNN) during the construction process. Notably, synthesis provides a way to determine the hyper-parameters of the network before training. Moreover, we experimentally evaluate our approach and demonstrate its effectiveness in improving the individual fairness and local robustness of BNNs while maintaining accuracy to a great extent.
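To give a concrete flavor of the kind of SMT-based existence check the abstract alludes to, the following is a minimal sketch, assuming the z3-solver Python package. It is not the paper's BLTL/automata pipeline; the network shape, inputs, and property are illustrative assumptions. It simply asks the solver whether a single-neuron BNN with weights in {-1, +1} exists that outputs +1 on a reference input and remains +1 after flipping one input bit (a toy local-robustness-style constraint).

```python
# A minimal sketch, assuming the z3-solver package; this is NOT the paper's
# method, only an illustration of checking whether binary weights exist that
# satisfy a simple input-output property.
from z3 import Int, Solver, Or, If, Sum, sat

n = 4
# Binary weights of a single-neuron BNN, encoded as integers in {-1, +1}.
w = [Int(f"w{i}") for i in range(n)]

s = Solver()
for wi in w:
    s.add(Or(wi == -1, wi == 1))

def bnn_out(x):
    # Sign of the weighted sum: +1 if non-negative, else -1.
    return If(Sum([w[i] * x[i] for i in range(n)]) >= 0, 1, -1)

# Illustrative inputs: x1 flips one bit of x0 (a "local" perturbation).
x0 = [1, 1, -1, 1]
x1 = [1, 1, -1, -1]

# Desired property: the network outputs +1 on x0 and stays +1 under the flip.
s.add(bnn_out(x0) == 1, bnn_out(x1) == 1)

if s.check() == sat:
    print("A satisfying weight assignment exists:", s.model())
else:
    print("No single-neuron BNN satisfies the property.")
```

If the solver reports sat, the returned model fixes a concrete weight assignment; an unsat answer would show that no network of this (tiny) shape meets the property, which is the sense in which such checks can inform hyper-parameter choices before training.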


