
Correctness Verification of Neural Networks

by Yichen Yang, et al.

We present the first verification that a neural network produces a correct output within a specified tolerance for every input of interest. We define correctness relative to a specification that identifies (1) a state space consisting of all relevant states of the world and (2) an observation process that produces neural network inputs from those states. We tile the state and input spaces with a finite number of tiles, obtain ground truth bounds from the state tiles and network output bounds from the input tiles, and compare the two sets of bounds to derive an upper bound on the network output error for any input of interest. Results from a case study show that our technique delivers tight error bounds for all inputs of interest and illustrate how the error bounds vary over the state and input spaces.
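The tiling argument above can be sketched in a few lines. The example below is a minimal illustration, not the paper's actual case study: it assumes a one-dimensional state (an angle), a ground truth of sin(s), an identity observation process, and a tiny hand-written one-layer ReLU network whose output bounds are computed by naive interval propagation. All weights, tile counts, and the tolerance are made up for illustration.

```python
import math

# Toy one-hidden-layer network: y = sum_j w2[j] * relu(w1[j]*x + b1[j]) + b2.
# Weights are arbitrary illustrative values, not a trained model.
W1 = [1.2, -0.8]
B1 = [0.0, 1.0]
W2 = [0.7, -0.1]
B2 = 0.05

def ground_truth_bounds(lo, hi):
    # Ground truth is sin(s); on [0, pi/2] it is monotone increasing,
    # so the tile endpoints bound it (a real system would need a
    # sound bound for whatever ground truth function it uses).
    return math.sin(lo), math.sin(hi)

def network_output_bounds(lo, hi):
    # Naive interval bound propagation through the affine + ReLU layers.
    out_lo, out_hi = B2, B2
    for w1, b1, w2 in zip(W1, B1, W2):
        a, b = w1 * lo + b1, w1 * hi + b1          # pre-activation interval
        h_lo, h_hi = max(0.0, min(a, b)), max(0.0, max(a, b))  # ReLU
        if w2 >= 0:                                # sign decides which end
            out_lo += w2 * h_lo
            out_hi += w2 * h_hi
        else:
            out_lo += w2 * h_hi
            out_hi += w2 * h_lo
    return out_lo, out_hi

def error_bound(lo, hi):
    # Worst-case |network output - ground truth| over one tile:
    # compare the two intervals end to end.
    g_lo, g_hi = ground_truth_bounds(lo, hi)
    n_lo, n_hi = network_output_bounds(lo, hi)
    return max(n_hi - g_lo, g_hi - n_lo)

def verify(n_tiles=100, tolerance=1.0):
    # Tile the state space [0, pi/2] and take the max per-tile bound;
    # if it is below the tolerance, the property holds for ALL inputs
    # of interest, not just tested ones.
    step = (math.pi / 2) / n_tiles
    worst = max(error_bound(i * step, (i + 1) * step)
                for i in range(n_tiles))
    return worst, worst <= tolerance

worst, ok = verify()
```

Because both bounds are sound over-approximations on every tile, the maximum per-tile error bound is a guaranteed upper bound on the true error everywhere; finer tilings tighten it at the cost of more tiles to check.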

