Certifying Neural Network Robustness to Random Input Noise from Samples

10/15/2020
by Brendon G. Anderson, et al.

Methods to certify the robustness of neural networks in the presence of input uncertainty are vital in safety-critical settings. Most certification methods in the literature are designed for adversarial input uncertainty, but researchers have recently shown a need for methods that consider random uncertainty. In this paper, we propose a novel robustness certification method that upper bounds the probability of misclassification when the input noise follows an arbitrary probability distribution. This bound is cast as a chance-constrained optimization problem, which is then reformulated using input-output samples to replace the optimization constraints. The resulting optimization reduces to a linear program with an analytical solution. Furthermore, we develop a sufficient condition on the number of samples needed to make the misclassification bound hold with overwhelming probability. Our case studies on MNIST classifiers show that this method is able to certify a uniform infinity-norm uncertainty region with a radius nearly 50 times larger than what the current state-of-the-art method can certify.
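To illustrate the general idea of sample-based probabilistic certification (not the paper's specific chance-constrained linear-program formulation), the sketch below draws i.i.d. samples of uniform infinity-norm input noise, estimates the empirical misclassification rate of a toy classifier, and inflates it with a Hoeffding confidence term so the bound holds with probability at least 1 - delta. The linear classifier, radius, and sample count are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear classifier standing in for a neural network:
# predicted class = argmax(W x + b).
W = np.array([[2.0, -1.0], [-1.0, 2.0]])
b = np.array([0.1, -0.1])

def classify(x):
    return int(np.argmax(W @ x + b))

def misclassification_bound(x_nominal, radius, n_samples, delta):
    """Upper bound on P(misclassification) under uniform infinity-norm
    noise of the given radius, via Hoeffding's inequality:
    p_hat + sqrt(log(1/delta) / (2 n)) holds with prob. >= 1 - delta."""
    nominal_label = classify(x_nominal)
    # Uniform noise in the infinity-norm ball of the given radius.
    noise = rng.uniform(-radius, radius, size=(n_samples, x_nominal.size))
    errors = sum(classify(x_nominal + eps) != nominal_label for eps in noise)
    p_hat = errors / n_samples
    return p_hat + np.sqrt(np.log(1.0 / delta) / (2.0 * n_samples))

bound = misclassification_bound(np.array([1.0, 0.0]), radius=0.2,
                                n_samples=2000, delta=1e-3)
print(f"Misclassification probability bound: {bound:.4f}")
```

A union bound over the noise samples, as opposed to the simple Hoeffding term used here, is closer in spirit to the scenario-based sufficient condition on sample size that the paper develops.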


Related research

10/02/2020
Data-Driven Assessment of Deep Neural Networks with Random Input Uncertainty
When using deep neural networks to operate safety-critical systems, asse...

10/26/2021
Improving Robustness of Deep Neural Networks for Aerial Navigation by Incorporating Input Uncertainty
Uncertainty quantification methods are required in autonomous systems th...

10/09/2019
Probabilistic Verification and Reachability Analysis of Neural Networks via Semidefinite Programming
Quantifying the robustness of neural networks or verifying their safety ...

08/08/2022
Robust Training and Verification of Implicit Neural Networks: A Non-Euclidean Contractive Approach
This paper proposes a theoretical and computational framework for traini...

01/24/2022
Constrained Policy Optimization via Bayesian World Models
Improving sample-efficiency and safety are crucial challenges when deplo...

03/09/2020
An Empirical Evaluation on Robustness and Uncertainty of Regularization Methods
Despite apparent human-level performances of deep neural networks (DNN),...

04/21/2020
Probabilistic Safety for Bayesian Neural Networks
We study probabilistic safety for Bayesian Neural Networks (BNNs) under ...
