Killing Three Birds with one Gaussian Process: Analyzing Attack Vectors on Classification

06/06/2018
by Kathrin Grosse, et al.

The wide usage of Machine Learning (ML) has led to research on the attack vectors and vulnerabilities of these systems. Defenses in this area, however, remain an open problem and often lead to an arms race. We define a naive classifier that is secure at test time and show that a Gaussian Process (GP) is an instance of this classifier under two assumptions: one concerns the distances in the training data, the other rejection at test time. Using these assumptions, we show that a classifier is either secure, or generalizes and thus learns. Our analysis also points to another factor influencing robustness: the curvature of the classifier. This connection is already known for linear models, but GPs offer an ideal framework for studying the relationship for nonlinear classifiers. We evaluate on five security and two computer vision datasets, applying test-time and training-time attacks as well as membership inference. We show that we merely change which attacks succeed, rather than alleviating the threat. Only for membership inference is there a setting in which attacks are unsuccessful (<10% above random guessing). Given these results, we define a voting-based classification scheme, ParGP, which lets us choose how many points vote and how large the agreement on a class has to be. This ensures a classification output only when there is evidence for a decision, where evidence is parametrized. We evaluate this scheme and obtain promising results.
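To make the voting idea concrete, below is a minimal sketch of a ParGP-style rejecting classifier. It is an illustration, not the paper's implementation: the names pargp_predict and rbf_kernel, the kernel-based voter selection, and the parameters n_vote (how many training points vote) and agreement (how large the agreement on a class has to be) are assumptions chosen to mirror the two knobs the abstract describes.

```python
# Sketch of a ParGP-style voting classifier with rejection.
# Assumption: the paper's exact ParGP rule may differ; n_vote and
# agreement stand in for the two parameters named in the abstract.
import numpy as np

def rbf_kernel(a, b, lengthscale=1.0):
    """RBF similarity between the rows of a and the rows of b."""
    d2 = (np.sum(a**2, axis=1)[:, None]
          + np.sum(b**2, axis=1)[None, :]
          - 2.0 * a @ b.T)
    return np.exp(-0.5 * d2 / lengthscale**2)

def pargp_predict(X_train, y_train, X_test,
                  n_vote=10, agreement=0.8,
                  lengthscale=1.0, reject_label=-1):
    """Let the n_vote most similar training points vote on each test
    point; emit a class only if at least `agreement` of the voters
    concur, otherwise reject (return reject_label)."""
    K = rbf_kernel(X_test, X_train, lengthscale)   # (n_test, n_train)
    preds = []
    for sims in K:
        voters = np.argsort(sims)[-n_vote:]        # most similar points
        classes, counts = np.unique(y_train[voters], return_counts=True)
        best = np.argmax(counts)
        if counts[best] / n_vote >= agreement:
            preds.append(classes[best])            # enough evidence
        else:
            preds.append(reject_label)             # abstain
    return np.array(preds)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Two well-separated Gaussian blobs as toy training data.
    X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
    y = np.array([0] * 50 + [1] * 50)
    # The midpoint between the blobs lacks agreement and is rejected.
    X_new = np.array([[0.0, 0.0], [2.5, 2.5], [5.0, 5.0]])
    print(pargp_predict(X, y, X_new))              # e.g. [ 0 -1  1]
```

The rejection behavior is the point: an output is produced only when the parametrized evidence threshold is met, which is how the scheme trades coverage for confidence.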


