RAB: Provable Robustness Against Backdoor Attacks

03/19/2020
by Maurice Weber et al.

Recent studies have shown that deep neural networks (DNNs) are vulnerable to various attacks, including evasion attacks and poisoning attacks. On the defense side, there has been intensive interest in provable robustness against evasion attacks. In this paper, we focus on improving model robustness against more diverse threat models. Specifically, we provide the first unified framework using a smoothing functional to certify model robustness against general adversarial attacks. In particular, we propose the first robust training process, RAB, to certify robustness against backdoor attacks. We theoretically prove the robustness bound for machine learning models trained with RAB, analyze the tightness of the bound, and propose different smoothing noise distributions, such as the Gaussian and uniform distributions. Moreover, we evaluate the certified robustness of a family of "smoothed" DNNs that are trained in a differentially private fashion. In addition, we show theoretically that for simpler models such as K-nearest-neighbor (KNN) models, it is possible to train robust smoothed models efficiently; for K=1, we propose an exact algorithm that smooths the training process and eliminates the need to sample from a noise distribution. Empirically, we conduct comprehensive experiments with different machine learning models, including DNNs, differentially private DNNs, and KNN models, on the MNIST, CIFAR-10, and ImageNet datasets to provide the first benchmark for certified robustness against backdoor attacks. We also evaluate KNN models on the spambase tabular dataset to demonstrate the advantages of the exact algorithm. Both the theoretical analysis of certified robustness against arbitrary backdoors and the comprehensive benchmark on diverse ML models and datasets shed light on further robust learning strategies against training-time or even general adversarial attacks on ML models.
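At a high level, smoothing-based certification of this kind trains an ensemble of classifiers on independently noise-perturbed copies of the (possibly backdoored) training set and predicts by majority vote; the margin between the top two vote counts then yields a certified bound on how much an adversary could have perturbed the training data without changing the prediction. The following is a minimal sketch of such a smoothed training loop with Gaussian noise; the function names, the generic `train_fn`, and the scikit-learn-style `predict` interface are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def rab_train_smoothed(train_fn, X_train, y_train, n_models=1000, sigma=1.0, seed=0):
    """Train an ensemble of base models, each on an independently
    noise-perturbed copy of the (possibly backdoored) training set."""
    rng = np.random.default_rng(seed)
    models = []
    for _ in range(n_models):
        noise = rng.normal(0.0, sigma, size=X_train.shape)  # Gaussian smoothing noise
        models.append(train_fn(X_train + noise, y_train))
    return models

def rab_predict(models, x, n_classes):
    """Predict by majority vote over the smoothed ensemble; the gap between
    the top two vote counts is what a certified bound would be derived from."""
    votes = np.zeros(n_classes, dtype=int)
    for model in models:
        votes[model.predict(x[None, :])[0]] += 1
    return int(votes.argmax()), votes
```

For instance, `train_fn` could be `lambda X, y: LogisticRegression().fit(X, y)` from scikit-learn. The exact K=1 algorithm described in the abstract would replace this sampling loop with a direct computation of the vote probabilities, avoiding Monte Carlo estimation altogether.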
