PROVEN: Certifying Robustness of Neural Networks with a Probabilistic Approach

12/18/2018
by   Tsui-Wei Weng, et al.

With deep neural networks providing state-of-the-art models for numerous machine learning tasks, quantifying the robustness of these models has become an important area of research. However, most of the literature focuses on the worst-case setting, where the input of the neural network is perturbed by noise constrained within an ℓ_p ball, and several algorithms have been proposed to compute certified lower bounds on the minimum adversarial distortion based on such worst-case analysis. In this paper, we address these limitations and extend worst-case certification to a probabilistic setting where the additive noise follows a given distributional characterization. We propose PROVEN, a novel probabilistic framework to PRObabilistically VErify Neural networks with statistical guarantees: PROVEN certifies the probability that the classifier's top-1 prediction cannot be altered under any constrained ℓ_p norm perturbation of a given input. Importantly, we show that closed-form probabilistic certificates can be derived from current state-of-the-art neural network robustness verification frameworks. Hence, the probabilistic certificates provided by PROVEN come naturally, with almost no overhead, when obtaining worst-case certified lower bounds from existing methods such as Fast-Lin, CROWN, and CNN-Cert. Experiments on small and large MNIST and CIFAR neural network models demonstrate that our probabilistic approach achieves up to roughly 75% improvement in robustness certification, with at least 99.99% confidence, over the worst-case robustness certificates delivered by CROWN.
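To illustrate the core idea, here is a minimal sketch of how a linear relaxation of the network (as produced by tools like Fast-Lin or CROWN) can yield a closed-form probabilistic certificate. The function name, the choice of Gaussian noise, and the assumption that the top-1 margin admits a linear lower bound m(δ) = w·δ + b over the perturbation δ are all illustrative assumptions, not the paper's exact construction:

```python
import numpy as np
from math import erf, sqrt

def gaussian_margin_certificate(w, b, sigma):
    """Sketch of a closed-form probabilistic certificate.

    Assumes the top-1 classification margin is lower-bounded by the
    linear function m(delta) = w @ delta + b over perturbations delta,
    and that delta ~ N(0, sigma^2 I).  Then w @ delta is Gaussian with
    standard deviation sigma * ||w||_2, so the probability that the
    margin stays positive (i.e., the prediction is unchanged) is
    Phi(b / (sigma * ||w||_2)), where Phi is the standard normal CDF.
    """
    w = np.asarray(w, dtype=float)
    s = sigma * np.linalg.norm(w)
    if s == 0.0:
        # No randomness in the bound: certificate is all-or-nothing.
        return 1.0 if b > 0 else 0.0
    z = b / s
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))
```

For bounded (e.g., ℓ_∞-ball) noise distributions, a similar certificate can be obtained with concentration inequalities instead of the exact Gaussian CDF; the key point is that only the quantities w and b from the existing linear-bound verifier are needed, which is why the probabilistic certificate adds almost no overhead.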


Related research

- 02/15/2019, Robustness of Neural Networks: A Probabilistic and Practical Approach. Neural networks are becoming increasingly prevalent in software, and it ...
- 08/19/2022, A Novel Plug-and-Play Approach for Adversarially Robust Generalization. In this work, we propose a robust framework that employs adversarially r...
- 10/10/2022, Certified Training: Small Boxes are All You Need. We propose the novel certified training method, SABR, which outperforms ...
- 06/19/2020, Learning Optimal Power Flow: Worst-Case Guarantees for Neural Networks. This paper introduces for the first time a framework to obtain provable ...
- 02/18/2021, Reduced-Order Neural Network Synthesis with Robustness Guarantees. In the wake of the explosive growth in smartphones and cyberphysical sys...
- 12/06/2018, Verification of deep probabilistic models. Probabilistic models are a critical part of the modern deep learning too...
- 02/03/2023, Asymmetric Certified Robustness via Feature-Convex Neural Networks. Recent works have introduced input-convex neural networks (ICNNs) as lea...
