How Wrong Am I? - Studying Adversarial Examples and their Impact on Uncertainty in Gaussian Process Machine Learning Models

11/17/2017
by Kathrin Grosse, et al.

Machine learning models are vulnerable to adversarial examples: minor, in many cases imperceptible, perturbations to classification inputs. Among other suspected causes, adversarial examples exploit ML models that offer no well-defined indication of how well a particular prediction is supported by training data, yet are forced to confidently extrapolate predictions in areas of high entropy. In contrast, Bayesian ML models, such as Gaussian Processes (GP), inherently model the uncertainty accompanying a prediction in the well-studied framework of Bayesian inference. This paper is the first to explore adversarial examples and their impact on uncertainty estimates for Gaussian Processes. To this end, we first present three novel attacks on Gaussian Processes: GPJM and GPFGS exploit forward derivatives in GP latent functions, and Latent Space Approximation Networks mimic the latent space representation in unsupervised GP models to facilitate attacks. Further, we show that these new attacks compute adversarial examples that transfer to non-GP classification models, and vice versa. Finally, we show that GP uncertainty estimates not only differ between adversarial examples and benign data, but also between adversarial examples computed by different algorithms.
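To make the forward-derivative idea behind a GPFGS-style attack concrete, the sketch below perturbs an input by a fixed step in the sign of the gradient of a GP's latent mean, pushing the latent value toward the opposite class. This is a minimal illustration under assumed details (RBF kernel, GP regression on ±1 labels as a stand-in for the latent function, toy synthetic data, hypothetical names like `gpfgs` and `latent_mean`), not the paper's exact algorithm.

```python
import numpy as np

def rbf(a, b, ls=1.0):
    # squared-exponential (RBF) kernel matrix between row sets a and b
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

# toy binary-labelled training data with labels in {-1, +1}
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 2))
y = np.sign(X[:, 0] + 0.1 * rng.normal(size=20))

ls, noise = 1.0, 1e-2
K = rbf(X, X, ls) + noise * np.eye(len(X))
alpha = np.linalg.solve(K, y)  # GP regression weights: m(x) = k(x, X) @ alpha

def latent_mean(x):
    # GP predictive (latent) mean at a single point x
    return rbf(x[None, :], X, ls)[0] @ alpha

def latent_grad(x):
    # closed-form gradient of the latent mean for the RBF kernel:
    # d/dx sum_i alpha_i k(x, x_i) = sum_i alpha_i k(x, x_i) (x_i - x) / ls^2
    k = rbf(x[None, :], X, ls)[0]
    return ((X - x) / ls**2 * (alpha * k)[:, None]).sum(0)

def gpfgs(x, eps=0.1):
    # FGS-style step: move the latent mean toward the opposite class
    direction = -np.sign(latent_mean(x))
    return x + eps * direction * np.sign(latent_grad(x))

x = X[0]
x_adv = gpfgs(x)
```

The step size `eps` trades off perturbation visibility against attack strength, exactly as in the original fast gradient sign method for neural networks; here the gradient is available in closed form from the kernel rather than via backpropagation.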


