Adversarial Vulnerability Bounds for Gaussian Process Classification

09/19/2019
by Michael Thomas Smith, et al.

Machine learning (ML) classification is increasingly used in safety-critical systems, so protecting ML classifiers from adversarial examples is crucial. We propose that the main threat is an attacker perturbing a confidently classified input to produce a confident misclassification. To protect against this, we devise an adversarial bound (AB) for a Gaussian process classifier that holds over the entire input domain, bounding the potential for any future adversarial method to cause such a misclassification. This is a formal guarantee of robustness, not just an empirically derived result. We investigate how to configure the classifier to maximise the bound, including through the use of a sparse approximation, yielding a practical, useful and provably robust classifier, which we test on a variety of datasets.
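
To give a feel for the kind of domain-wide guarantee involved, below is a minimal sketch, not the authors' AB itself (whose derivation is in the full text), of a crude bound for an RBF-kernel GP posterior mean: each kernel term has a bounded gradient, so the latent mean is Lipschitz everywhere, which limits how far a perturbation of norm eps can shift it. All names, parameters and the toy data are illustrative assumptions.

import numpy as np

def rbf_kernel(A, B, lengthscale=1.0, variance=1.0):
    # Squared-exponential kernel matrix between rows of A and rows of B.
    sq = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2.0 * A @ B.T
    return variance * np.exp(-0.5 * sq / lengthscale**2)

def gp_mean_lipschitz_bound(X, y, lengthscale=1.0, variance=1.0, noise=1e-2):
    # Posterior latent mean: mu(x) = sum_i alpha_i * k(x, x_i),
    # with alpha = (K + noise*I)^{-1} y.
    # Each RBF term's gradient norm is at most variance / (lengthscale * sqrt(e)),
    # so mu is Lipschitz with constant
    #     L = variance / (lengthscale * sqrt(e)) * sum_i |alpha_i|,
    # giving |mu(x + d) - mu(x)| <= L * ||d|| for every x in the input domain.
    K = rbf_kernel(X, X, lengthscale, variance)
    alpha = np.linalg.solve(K + noise * np.eye(len(X)), y)
    return variance / (lengthscale * np.sqrt(np.e)) * np.sum(np.abs(alpha))

# Toy usage: bound the latent-mean shift any perturbation of norm <= eps can cause.
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 2))
y = np.sign(X[:, 0])          # synthetic +/-1 labels
L = gp_mean_lipschitz_bound(X, y, lengthscale=1.5)
eps = 0.1
print(f"latent mean can move by at most {L * eps:.3f} for ||delta|| <= {eps}")

This sort of Lipschitz argument is loose because it sums |alpha_i| over all training points; the paper instead investigates configurations (e.g. a sparse approximation) that make a provable domain-wide bound tight enough to be practically useful.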


