The Effect of Prior Lipschitz Continuity on the Adversarial Robustness of Bayesian Neural Networks

01/07/2021
by Arno Blaas, et al.

It is desirable, and often a necessity, for machine learning models to be robust against adversarial attacks. This is particularly true for Bayesian models, as they are well-suited for safety-critical applications, in which adversarial attacks can have catastrophic outcomes. In this work, we take a deeper look at the adversarial robustness of Bayesian Neural Networks (BNNs). Specifically, we consider whether the adversarial robustness of a BNN can be increased by model choices, in particular by the Lipschitz continuity induced by the prior. Conducting an in-depth analysis of the case of i.i.d., zero-mean Gaussian priors and posteriors approximated via mean-field variational inference, we find evidence that adversarial robustness is indeed sensitive to the prior variance.
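As a minimal sketch of why the prior variance plausibly matters (an illustration assumed here, not code from the paper): for a linear layer y = Wx, the Lipschitz constant with respect to the L2 norm is the spectral norm of W. Sampling i.i.d. zero-mean Gaussian weights with standard deviation sigma scales the sampled matrix, and hence the layer's Lipschitz constant, proportionally to sigma:

```python
import numpy as np

# Illustrative sketch (not from the paper): with i.i.d. weights
# W_ij ~ N(0, sigma^2), the spectral norm of W -- the Lipschitz
# constant of the layer x -> Wx -- scales linearly with sigma.

rng = np.random.default_rng(0)
base = rng.standard_normal((256, 256))  # one standard-normal draw, reused

def layer_lipschitz(sigma: float) -> float:
    """Spectral norm (largest singular value) of a weight matrix
    sampled with prior standard deviation sigma."""
    return float(np.linalg.norm(sigma * base, ord=2))

small = layer_lipschitz(0.1)
large = layer_lipschitz(1.0)
print(small, large)  # the larger prior variance gives the larger Lipschitz bound
```

A smaller Lipschitz constant limits how much a bounded input perturbation can move the layer's output, which is the intuition linking the prior variance to adversarial robustness in the abstract above.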


research
02/11/2020

Robustness of Bayesian Neural Networks to Gradient-Based Attacks

Vulnerability to adversarial attacks is one of the principal hurdles to ...
research
03/02/2021

Smoothness Analysis of Loss Functions of Adversarial Training

Deep neural networks are vulnerable to adversarial attacks. Recent studi...
research
07/13/2022

On the Robustness of Bayesian Neural Networks to Adversarial Attacks

Vulnerability to adversarial attacks is one of the principal hurdles to ...
research
06/17/2021

Evaluating the Robustness of Bayesian Neural Networks Against Different Types of Attacks

To evaluate the robustness gain of Bayesian neural networks on image cla...
research
03/30/2016

Robustness of Bayesian Pool-based Active Learning Against Prior Misspecification

We study the robustness of active learning (AL) algorithms against prior...
research
12/24/2022

A Bayesian Robust Regression Method for Corrupted Data Reconstruction

Because of the widespread existence of noise and data corruption, recove...
research
06/21/2020

Network Moments: Extensions and Sparse-Smooth Attacks

The impressive performance of deep neural networks (DNNs) has immensely ...
