Incremental Verification of Fixed-Point Implementations of Neural Networks

12/21/2020
by Luiz Sena et al.

Implementations of artificial neural networks (ANNs) may lead to failures that are hard to predict in the design phase, since ANNs are highly parallel and their parameters are barely interpretable. Here, we develop and evaluate a novel symbolic verification framework using incremental bounded model checking (BMC), satisfiability modulo theories (SMT), and invariant inference to obtain adversarial cases and validate coverage methods in a multi-layer perceptron (MLP). We exploit incremental BMC based on interval analysis to compute bounds on each neuron's input; these bounds are then propagated to bound the neuron's output, which in turn serves as the input to the next layer. This paper describes the first bit-precise symbolic verification framework to reason over actual implementations of ANNs in CUDA, based on invariant inference, thereby providing further guarantees about finite-precision arithmetic and its rounding errors, which are routinely ignored in the existing literature. We have implemented the proposed approach on top of the efficient SMT-based bounded model checker (ESBMC); our experimental results show that it can successfully verify safety properties in actual implementations of ANNs and generate real adversarial cases in MLPs. Our approach was able to verify and produce adversarial examples for 85.8% of the tested input images and 100% of the considered properties. Although our verification time is higher than that of existing approaches, our methodology can account for fixed-point implementation aspects that are disregarded by state-of-the-art verification methodologies.
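The interval analysis described above can be illustrated with a minimal sketch: an input interval is pushed through one fully connected ReLU layer, yielding bounds on each neuron's pre-activation and output, which would then feed the next layer. This is not the paper's CUDA implementation; the weights and helper names here are hypothetical, and real fixed-point rounding effects are not modeled.

```python
# Illustrative sketch only: interval-bound propagation through one
# fully connected layer y = W x + b followed by ReLU. All weights,
# biases, and function names are hypothetical examples.

def affine_bounds(lo, hi, W, b):
    """Bound each neuron's pre-activation given elementwise input bounds.

    For y_i = b_i + sum_j W[i][j] * x_j, the lower bound uses lo[j] for
    positive weights and hi[j] for negative weights (and vice versa for
    the upper bound).
    """
    out_lo, out_hi = [], []
    for row, bias in zip(W, b):
        l = bias + sum(w * (lo[j] if w >= 0 else hi[j]) for j, w in enumerate(row))
        h = bias + sum(w * (hi[j] if w >= 0 else lo[j]) for j, w in enumerate(row))
        out_lo.append(l)
        out_hi.append(h)
    return out_lo, out_hi

def relu_bounds(lo, hi):
    """ReLU is monotone, so it is applied directly to both bounds."""
    return [max(0.0, v) for v in lo], [max(0.0, v) for v in hi]

# Hypothetical 2-input, 2-neuron layer with inputs in [0, 1] x [0, 1].
W = [[1.0, -2.0], [0.5, 0.5]]
b = [0.0, -1.0]
lo, hi = [0.0, 0.0], [1.0, 1.0]

pre_lo, pre_hi = affine_bounds(lo, hi, W, b)
post_lo, post_hi = relu_bounds(pre_lo, pre_hi)
print(pre_lo, pre_hi)    # [-2.0, -1.0] [1.0, 0.0]
print(post_lo, post_hi)  # [0.0, 0.0] [1.0, 0.0]
```

In the verification setting, bounds like these constrain each neuron's value range, which shrinks the search space the SMT solver must explore when checking safety properties or searching for adversarial inputs.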
