Generate and Verify: Semantically Meaningful Formal Analysis of Neural Network Perception Systems

12/16/2020
by Chris R. Serrano, et al.

Testing remains the primary method for evaluating the accuracy of neural network perception systems. Prior work on the formal verification of neural network perception models has been limited to notions of local adversarial robustness for classification with respect to individual image inputs. In this work, we propose a notion of global correctness for neural network perception models performing regression with respect to a generative neural network with a semantically meaningful latent space. That is, for the infinite set of images produced by a generative model over an interval of its latent space, we employ neural network verification to prove that the perception model always produces estimates within some error bound of the ground truth. Where the perception model fails, we obtain semantically meaningful counter-examples that carry information about concrete states of the system of interest and can be used programmatically, without human inspection of the corresponding generated images. Our approach, Generate and Verify, provides a new technique both to gather insight into the failure cases of neural network perception systems and to give meaningful guarantees of correct behavior in safety-critical applications.
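
To make the verified property concrete, the following is a minimal sketch, assuming hypothetical PyTorch models for the generator and perception network, an illustrative latent-to-state ground-truth map, and a simple sampling-based falsification loop standing in for a formal neural network verifier (a verifier would prove the error bound for all latent points in the interval, whereas this loop can only search for violations). All names, dimensions, and thresholds below are assumptions for illustration, not artifacts from the paper.

```python
# Sketch of the "Generate and Verify" property over a latent interval.
# Hypothetical models and bounds; a real verifier would replace falsify().
import torch
import torch.nn as nn

LATENT_DIM = 2    # assumed: each latent coordinate encodes a semantic state variable
EPSILON = 0.05    # assumed error bound on the regression estimate

# Hypothetical generator G: latent state -> image, perception F: image -> state estimate.
generator = nn.Sequential(nn.Linear(LATENT_DIM, 128), nn.ReLU(), nn.Linear(128, 32 * 32))
perception = nn.Sequential(nn.Linear(32 * 32, 128), nn.ReLU(), nn.Linear(128, LATENT_DIM))

def state_from_latent(z: torch.Tensor) -> torch.Tensor:
    """Ground truth associated with a latent point; here the latent space is
    assumed to directly encode the physical state of interest."""
    return z

def falsify(z_lo: torch.Tensor, z_hi: torch.Tensor, n_samples: int = 10_000):
    """Search the latent interval [z_lo, z_hi] for a latent point where
    |F(G(z)) - ground_truth(z)| exceeds EPSILON. Unlike formal verification,
    sampling cannot prove the bound holds everywhere; it can only find violations."""
    z = z_lo + (z_hi - z_lo) * torch.rand(n_samples, LATENT_DIM)
    with torch.no_grad():
        estimates = perception(generator(z))
    errors = (estimates - state_from_latent(z)).abs().max(dim=1).values
    worst = int(errors.argmax())
    if errors[worst] > EPSILON:
        # The latent point itself is the counter-example: it names the concrete
        # system state on which perception fails, with no image inspection needed.
        return z[worst], float(errors[worst])
    return None, float(errors[worst])

if __name__ == "__main__":
    cex, worst_err = falsify(torch.tensor([-1.0, -1.0]), torch.tensor([1.0, 1.0]))
    if cex is not None:
        print(f"counter-example latent state: {cex.tolist()}, error {worst_err:.4f}")
    else:
        print(f"no violation found among samples; worst sampled error {worst_err:.4f}")
```

In the setting described in the abstract, the composed network perception(generator(z)), the interval constraint on z, and the error bound would instead be handed to a neural network verifier; any counter-example z it returns is itself a semantically meaningful system state that can be acted on programmatically.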

Related research

05/14/2021 - Verification of Image-based Neural Network Controllers Using Generative Models
Neural networks are often used to process information from image-based s...

03/01/2021 - Generating Probabilistic Safety Guarantees for Neural Network Controllers
Neural networks serve as effective controllers in a variety of complex s...

06/03/2019 - Correctness Verification of Neural Networks
We present the first verification that a neural network produces a corre...

11/25/2019 - CAMUS: A Framework to Build Formal Specifications for Deep Perception Systems Using Simulators
The topic of provable deep neural network robustness has raised consider...

09/25/2021 - Auditing AI models for Verified Deployment under Semantic Specifications
Auditing trained deep learning (DL) models prior to deployment is vital ...

04/30/2020 - Robustness Certification of Generative Models
Generative neural networks can be used to specify continuous transformat...

01/09/2023 - 3D Shape Perception Integrates Intuitive Physics and Analysis-by-Synthesis
Many surface cues support three-dimensional shape perception, but people...
