Relaxing Local Robustness

06/11/2021
by Klas Leino, et al.

Certifiable local robustness, which rigorously precludes small-norm adversarial examples, has received significant attention as a means of addressing security concerns in deep learning. However, for some classification problems, local robustness is not a natural objective, even in the presence of adversaries; for example, if an image contains two classes of subjects, the correct label for the image may be considered arbitrary between the two, and thus enforcing strict separation between them is unnecessary. In this work, we introduce two relaxed safety properties for classifiers that address this observation: (1) relaxed top-k robustness, which serves as the analogue of top-k accuracy; and (2) affinity robustness, which specifies which sets of labels must be separated by a robustness margin, and which can be ϵ-close in ℓ_p space. We show how to construct models that can be efficiently certified against each relaxed robustness property, and trained with very little overhead relative to standard gradient descent. Finally, we demonstrate experimentally that these relaxed variants of robustness are well-suited to several significant classification problems, leading to lower rejection rates and higher certified accuracies than can be obtained when certifying "standard" local robustness.
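To make the relaxed top-k property concrete, the sketch below shows a toy Lipschitz-based certification check. This is an illustrative assumption, not the paper's actual construction: it assumes a classifier with a known global Lipschitz constant, so the predicted class's logit can fall by at most ϵ·L and any other logit can rise by at most ϵ·L under an ϵ-bounded perturbation. The prediction is certifiably in the top-k whenever at most k−1 other classes could overtake it in this worst case; with k = 1 this reduces to a standard local-robustness certificate.

```python
import numpy as np

def certify_relaxed_topk(logits, k, eps, lipschitz):
    """Toy check: can the predicted class be certified to stay in the top-k
    under any perturbation of l_p norm at most eps, given a global
    Lipschitz bound on each logit? (Illustrative sketch only.)"""
    j = int(np.argmax(logits))
    # Worst case for the prediction: its own logit drops by eps * L.
    worst_pred = logits[j] - eps * lipschitz
    # Count classes whose logit could rise enough to overtake it.
    overtakers = sum(
        1 for i, z in enumerate(logits)
        if i != j and z + eps * lipschitz >= worst_pred
    )
    # Certified iff at most k-1 classes can overtake: j stays in the top-k.
    return overtakers <= k - 1
```

For example, with logits (5.0, 4.9, 1.0), ϵ = 0.1, and L = 1, the runner-up class can close the 0.1 margin, so top-1 robustness cannot be certified; but relaxed top-2 robustness can, since only one class can overtake the prediction. This is exactly the kind of input the abstract motivates: two plausible labels that need not be strictly separated.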

