Provable Robust Learning Based on Transformation-Specific Smoothing

02/27/2020
by Linyi Li, et al.

As machine learning systems become pervasive, safeguarding their security is critical. Recent work has demonstrated that motivated adversaries can manipulate test data to mislead ML systems into making arbitrary mistakes. So far, most research has focused on providing provable robustness guarantees against adversarial perturbations bounded in a specific ℓ_p norm. In practice, however, there are many other realistic, semantically meaningful adversarial transformations that need to be analyzed and ideally certified. In this paper we aim to provide a unified framework for certifying ML model robustness against general adversarial transformations. First, we leverage a function-smoothing strategy to certify robustness against a series of adversarial transformations such as rotation, translation, and Gaussian blur. We then provide sufficient conditions and strategies for certifying certain transformations. For instance, we propose a novel sampling-based interpolation approach with an estimated Lipschitz upper bound to certify robustness against rotation. In addition, we theoretically optimize the smoothing strategies for certifying robustness against different transformations; for instance, we show that smoothing by sampling from an exponential distribution provides a tighter robustness bound than sampling from a Gaussian. We also prove two generalization gaps for the proposed framework, characterizing its theoretical barriers. Extensive experiments show that our unified framework significantly outperforms state-of-the-art certified robustness approaches on several datasets, including ImageNet.
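To make the smoothing idea concrete, below is a minimal, hypothetical Python sketch (not the authors' implementation) of certifying a rotation-smoothed classifier in the style of Cohen et al.'s randomized smoothing, applied in the transformation-parameter space. The names `base_classifier` and `rotate` are assumed placeholders supplied by the user, and the sketch deliberately ignores the rotation interpolation error that the paper handles with its sampling-based Lipschitz bound.

```python
# Hypothetical sketch of transformation-specific smoothing for rotation.
# Assumed placeholders: base_classifier(x) -> class id, rotate(x, deg) -> image.
import numpy as np
from scipy.stats import beta, norm

def certify_rotation(x, base_classifier, rotate, num_classes,
                     sigma=30.0, n=1000, alpha=0.001):
    """Majority-vote prediction under random rotations, plus a certified
    angular radius (degrees), or None when the vote is too weak to certify."""
    counts = np.zeros(num_classes, dtype=int)
    for _ in range(n):
        angle = np.random.normal(0.0, sigma)   # smooth over the rotation angle
        counts[base_classifier(rotate(x, angle))] += 1
    top = int(counts.argmax())
    # Clopper-Pearson lower confidence bound on the top-class probability.
    k = counts[top]
    p_lower = beta.ppf(alpha, k, n - k + 1)
    if p_lower <= 0.5:
        return top, None                       # abstain from certification
    # Cohen-style radius in parameter space: any rotation within this many
    # degrees keeps the smoothed prediction unchanged, *assuming* rotations
    # compose additively; real rotations do not, which is why the paper adds
    # a sampling-based interpolation/Lipschitz analysis omitted here.
    return top, sigma * norm.ppf(p_lower)
```

For other transformations the sampling distribution changes; e.g., for Gaussian blur the paper's result suggests that replacing the Gaussian sampling line with draws from an exponential distribution yields a tighter certified bound.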

Related research

01/30/2022 · TPC: Transformation-Specific Smoothing for Point Cloud Models
Point cloud models with neural network architectures have achieved great...

06/09/2022 · GSmooth: Certified Robustness against Semantic Transformations via Generalized Randomized Smoothing
Certified defenses such as randomized smoothing have shown promise towar...

03/19/2020 · RAB: Provable Robustness Against Backdoor Attacks
Recent studies have shown that deep neural networks (DNNs) are vulnerabl...

02/27/2020 · Certification of Semantic Perturbations via Randomized Smoothing
We introduce a novel certification method for parametrized perturbations...

07/01/2020 · Robust Learning against Logical Adversaries
Test-time adversarial attacks have posed serious challenges to the robus...

06/27/2019 · Adversarial Robustness via Adversarial Label-Smoothing
We study Label-Smoothing as a means for improving adversarial robustness...

05/10/2022 · Sibylvariant Transformations for Robust Text Classification
The vast majority of text transformation techniques in NLP are inherentl...
