Planting Undetectable Backdoors in Machine Learning Models

04/14/2022
by Shafi Goldwasser, et al.

Given the computational cost and technical expertise required to train machine learning models, users may delegate the task of learning to a service provider. We show how a malicious learner can plant an undetectable backdoor into a classifier. On the surface, such a backdoored classifier behaves normally, but in reality the learner maintains a mechanism for changing the classification of any input with only a slight perturbation. Importantly, without the appropriate "backdoor key", the mechanism is hidden and cannot be detected by any computationally-bounded observer.

We demonstrate two frameworks for planting undetectable backdoors, with incomparable guarantees. First, we show how to plant a backdoor in any model, using digital signature schemes. The construction guarantees that given black-box access to the original model and the backdoored version, it is computationally infeasible to find even a single input where they differ. This property implies that the backdoored model has generalization error comparable to that of the original model.

Second, we demonstrate how to insert undetectable backdoors in models trained using the Random Fourier Features (RFF) learning paradigm or in Random ReLU networks. In this construction, undetectability holds against powerful white-box distinguishers: given a complete description of the network and the training data, no efficient distinguisher can guess whether the model is "clean" or contains a backdoor.

Our construction of undetectable backdoors also sheds light on the related issue of robustness to adversarial examples. In particular, our construction can produce a classifier that is indistinguishable from an "adversarially robust" classifier, but where every input has an adversarial example! In summary, the existence of undetectable backdoors represents a significant theoretical roadblock to certifying adversarial robustness.
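To make the first (black-box) construction concrete, here is a minimal Python sketch of the signature-based idea. All names here (plant_backdoor, activate, SIG_BITS, and the crude ±eps encoding of signature bits into a reserved tail of coordinates) are illustrative assumptions, not the paper's actual embedding, which hides the signature far less conspicuously. The point is the logic: the backdoored classifier deviates from the original only on inputs carrying a valid signature, so finding a disagreeing input without the signing key amounts to forging a signature.

```python
# pip install numpy cryptography  -- a minimal sketch, not the paper's construction
import numpy as np
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

SIG_BITS = 512  # an Ed25519 signature is 64 bytes = 512 bits

signing_key = Ed25519PrivateKey.generate()  # the "backdoor key", kept by the trainer
verify_key = signing_key.public_key()       # baked into the delivered model

def plant_backdoor(model):
    """Wrap `model`: if the last SIG_BITS coordinates of x encode (via their
    signs) a valid signature of the remaining coordinates, flip the label;
    otherwise answer exactly as `model` does."""
    def backdoored(x: np.ndarray) -> int:
        payload, tail = x[:-SIG_BITS], x[-SIG_BITS:]
        sig = np.packbits((tail > 0).astype(np.uint8)).tobytes()
        try:
            verify_key.verify(sig, payload.tobytes())
            return 1 - model(x)    # backdoor fires: flip the classification
        except InvalidSignature:
            return model(x)        # otherwise: identical to the clean model
    return backdoored

def activate(x: np.ndarray, eps: float = 1e-3) -> np.ndarray:
    """Perturb x so the backdoor fires on it (requires the secret signing_key)."""
    sig = signing_key.sign(x[:-SIG_BITS].tobytes())
    bits = np.unpackbits(np.frombuffer(sig, dtype=np.uint8))
    x_adv = x.copy()
    x_adv[-SIG_BITS:] = eps * (2.0 * bits - 1.0)  # write signature bits into the tail
    return x_adv

# usage: the wrapped model agrees with the original on any input we can
# feasibly find, yet the key-holder can flip the label of any input
clean = lambda x: int(x[:-SIG_BITS].sum() > 0)   # stand-in for the real model
f = plant_backdoor(clean)
x = np.random.default_rng(1).normal(size=1024 + SIG_BITS)
assert f(x) == clean(x)
assert f(activate(x)) == 1 - clean(x)
```

Under standard unforgeability of the signature scheme, a random or adversarially chosen input verifies only with negligible probability, which is why black-box access cannot surface a single point of disagreement.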

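For context on the second (white-box) construction, the sketch below shows only the honest Random Fourier Features pipeline that it targets: random Gaussian frequencies W and uniform phases b are drawn, and only a linear layer is trained over the features phi(x) = sqrt(2/m) * cos(Wx + b). The paper's attack swaps the Gaussian sampler for a computationally indistinguishable backdoored one, whose undetectability rests on a cryptographic hardness assumption; that sampler is not reproduced here, and all variable names below are illustrative.

```python
# A minimal sketch of the clean RFF pipeline (the object being backdoored),
# under illustrative choices of dimensions, data, and training procedure.
import numpy as np

def rff_features(X, W, b):
    # phi(x) = sqrt(2/m) * cos(Wx + b), the standard RFF map for the Gaussian kernel
    return np.sqrt(2.0 / W.shape[0]) * np.cos(X @ W.T + b)

rng = np.random.default_rng(0)
d, m = 10, 256                        # input dimension, number of random features
W = rng.normal(size=(m, d))           # honest sampler: Gaussian frequencies
b = rng.uniform(0.0, 2.0 * np.pi, m)  # honest sampler: uniform phases

# toy data: labels given by a linear threshold on the raw inputs
X = rng.normal(size=(500, d))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)

# only the top linear layer is trained (least squares here for brevity)
Phi = rff_features(X, W, b)
theta, *_ = np.linalg.lstsq(Phi, 2.0 * y - 1.0, rcond=None)
pred = (Phi @ theta > 0).astype(float)
print("train accuracy:", (pred == y).mean())
```

Because the tampering lives entirely in how W and b are sampled, even a distinguisher holding the full network and the training data faces the problem of telling backdoored randomness from honest randomness, which is where the hardness assumption does its work.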

