Probabilistic Verification and Reachability Analysis of Neural Networks via Semidefinite Programming

10/09/2019
by Mahyar Fazlyab, et al.

Quantifying the robustness of neural networks or verifying their safety properties against input uncertainties or adversarial attacks has become an important research area in learning-enabled systems. Most existing results concentrate on the worst-case setting, in which the input of the neural network is perturbed within a norm-bounded uncertainty set. In this paper, we consider a probabilistic setting in which the uncertainty is random with known first two moments. In this context, we address two related problems: (i) probabilistic safety verification, in which the goal is to find an upper bound on the probability of violating a safety specification; and (ii) confidence ellipsoid estimation, in which, given a confidence ellipsoid for the input of the neural network, the goal is to compute a confidence ellipsoid for the output. Due to the presence of nonlinear activation functions, these two problems are very difficult to solve exactly. To simplify the analysis, our main idea is to abstract the nonlinear activation functions by the affine and quadratic constraints they impose on their input-output pairs. We then show that the safety of the abstracted network, which is sufficient for the safety of the original network, can be analyzed using semidefinite programming. We illustrate the performance of our approach with numerical experiments.
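
As a concrete illustration of the abstraction described above, the sketch below numerically checks, on random samples, the affine and quadratic constraints that a ReLU activation imposes on its input-output pairs (y >= 0, y >= z, and y(y - z) = 0 for y = max(z, 0)), and forms a distribution-free Chebyshev confidence ellipsoid for the input from its first two moments. This is a minimal sketch, not the paper's implementation: the layer sizes, weights, moments, the Gaussian used only for sampling, and the confidence level eps are illustrative assumptions, and the semidefinite program that certifies safety of the abstracted network is not shown.

```python
# Minimal illustrative sketch (not the paper's code): the affine/quadratic
# abstraction of ReLU and a moment-based confidence ellipsoid for the input.
import numpy as np

rng = np.random.default_rng(0)

# A small one-layer network z = W x + b, y = ReLU(z); sizes and weights are
# arbitrary choices made for illustration only.
n_in, n_hid = 3, 5
W = rng.standard_normal((n_hid, n_in))
b = rng.standard_normal(n_hid)

# Input uncertainty described only by its first two moments (mean, covariance).
mu = np.array([0.5, -1.0, 0.2])
A = rng.standard_normal((n_in, n_in))
Sigma = A @ A.T + 0.1 * np.eye(n_in)   # positive definite covariance

# Draw samples consistent with these moments (a Gaussian is used only for
# sampling; the moment-based analysis itself is distribution-free).
x = rng.multivariate_normal(mu, Sigma, size=10_000)

z = x @ W.T + b                # pre-activation
y = np.maximum(z, 0.0)         # ReLU output

# Every ReLU input-output pair satisfies these affine/quadratic constraints,
# which is the abstraction the semidefinite program reasons about:
#   y >= 0,   y >= z,   y * (y - z) == 0.
assert np.all(y >= 0)
assert np.all(y >= z)
assert np.allclose(y * (y - z), 0.0)

# Chebyshev-type confidence ellipsoid for the input: for any distribution with
# mean mu and covariance Sigma,
#   P[(x - mu)^T Sigma^{-1} (x - mu) <= n/eps] >= 1 - eps.
eps = 0.05
level = n_in / eps
d2 = np.einsum("ij,jk,ik->i", x - mu, np.linalg.inv(Sigma), x - mu)
print(f"empirical coverage of the {1 - eps:.2f} ellipsoid: {np.mean(d2 <= level):.3f}")
```

In the full method, these constraints replace the exact activation relation, and safety of the resulting abstracted network, which implies safety of the original network, is certified by solving a semidefinite program.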

Related research:

- Safety Verification and Robustness Analysis of Neural Networks via Quadratic Constraints and Semidefinite Programming (03/04/2019): Analyzing the robustness of neural networks against norm-bounded uncerta...
- Reach-SDP: Reachability Analysis of Closed-Loop Systems with Neural Network Controllers via Semidefinite Programming (04/16/2020): There has been an increasing interest in using neural networks in closed...
- Probabilistic Verification of ReLU Neural Networks via Characteristic Functions (12/03/2022): Verifying the input-output relationships of a neural network so as to ac...
- Certifying Neural Network Robustness to Random Input Noise from Samples (10/15/2020): Methods to certify the robustness of neural networks in the presence of ...
- Specification-Guided Safety Verification for Feedforward Neural Networks (12/14/2018): This paper presents a specification-guided safety verification method fo...
- Neural network training under semidefinite constraints (01/03/2022): This paper is concerned with the training of neural networks (NNs) under...
- Probabilistic Safety for Bayesian Neural Networks (04/21/2020): We study probabilistic safety for Bayesian Neural Networks (BNNs) under ...
