Probabilistic Safety for Bayesian Neural Networks

04/21/2020
by Matthew Wicker, et al.

We study probabilistic safety for Bayesian Neural Networks (BNNs) under adversarial input perturbations. Given a compact set of input points, T ⊆ R^m, we study the probability w.r.t. the BNN posterior that all the points in T are mapped to a given region S in the output space. In particular, this can be used to evaluate the probability that a network sampled from the BNN is vulnerable to adversarial attacks. We rely on relaxation techniques from non-convex optimization to develop a method for computing a lower bound on probabilistic safety for BNNs, deriving explicit procedures for the cases of interval and linear function propagation. We apply our methods to BNNs trained on a regression task, on airborne collision avoidance, and on MNIST, and show empirically that our approach can certify probabilistic safety for BNNs with thousands of neurons.
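
To make the quantity concrete, the sketch below estimates probabilistic safety by Monte Carlo: draw networks from the BNN posterior and, for each draw, use interval bound propagation (IBP) to certify that the whole input box T is mapped inside the output box S. This is a minimal illustration under stated assumptions, not the authors' implementation: the posterior sampler, the box representations of T and S, and the fully-connected ReLU architecture are all hypothetical, and the paper derives sound lower bounds via relaxation techniques rather than this plain sampling estimate.

```python
import numpy as np

def ibp_forward(weights, biases, x_lo, x_hi):
    """Propagate the input box [x_lo, x_hi] through a fully-connected
    ReLU network using interval bound propagation (IBP)."""
    lo, hi = x_lo, x_hi
    for i, (W, b) in enumerate(zip(weights, biases)):
        W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
        # Sound affine bounds: positive weights take the matching bound,
        # negative weights take the opposite one.
        new_lo = W_pos @ lo + W_neg @ hi + b
        new_hi = W_pos @ hi + W_neg @ lo + b
        lo, hi = new_lo, new_hi
        if i < len(weights) - 1:  # ReLU on hidden layers only
            lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)
    return lo, hi

def probabilistic_safety(posterior_sampler, x_lo, x_hi, s_lo, s_hi, n_samples=1000):
    """Monte Carlo estimate of the probability, over the BNN posterior,
    that every input in T = [x_lo, x_hi] lands in S = [s_lo, s_hi]."""
    safe = 0
    for _ in range(n_samples):
        weights, biases = posterior_sampler()  # one network drawn from the posterior
        out_lo, out_hi = ibp_forward(weights, biases, x_lo, x_hi)
        # IBP bounds are sound: if they sit inside S, this sample is certified safe.
        if np.all(out_lo >= s_lo) and np.all(out_hi <= s_hi):
            safe += 1
    return safe / n_samples

# Illustrative usage: a tiny 2-2-1 ReLU network with a made-up
# factorised-Gaussian posterior over its weights.
rng = np.random.default_rng(0)
def sample_posterior():
    scale = 0.05  # hypothetical posterior standard deviation
    W1, b1 = 0.5 + scale * rng.standard_normal((2, 2)), scale * rng.standard_normal(2)
    W2, b2 = 0.5 + scale * rng.standard_normal((1, 2)), scale * rng.standard_normal(1)
    return [W1, W2], [b1, b2]

p_safe = probabilistic_safety(sample_posterior,
                              x_lo=np.array([0.4, 0.4]), x_hi=np.array([0.6, 0.6]),
                              s_lo=np.array([0.0]), s_hi=np.array([2.0]),
                              n_samples=500)
print(f"Estimated probabilistic safety: {p_safe:.3f}")
```

Replacing the IBP step with linear function propagation tightens the per-sample check but follows the same pattern: any sound bound-propagation scheme yields a conservative (lower) estimate of the fraction of safe posterior samples.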

Related research

06/23/2023  Adversarial Robustness Certification for Bayesian Neural Networks
We study the problem of certifying the robustness of Bayesian neural net...

02/10/2021  Bayesian Inference with Certifiable Adversarial Robustness
We consider adversarial training of deep neural networks through the len...

11/05/2020  Sampled Nonlocal Gradients for Stronger Adversarial Attacks
The vulnerability of deep neural networks to small and even imperceptibl...

06/01/2020  Second-Order Provable Defenses against Adversarial Attacks
A robustness certificate is the minimum distance of a given input to the...

04/29/2021  Grid-Free Computation of Probabilistic Safety with Malliavin Calculus
We work with continuous-time, continuous-space stochastic dynamical syst...

10/09/2019  Probabilistic Verification and Reachability Analysis of Neural Networks via Semidefinite Programming
Quantifying the robustness of neural networks or verifying their safety ...

10/15/2020  Certifying Neural Network Robustness to Random Input Noise from Samples
Methods to certify the robustness of neural networks in the presence of ...
