Calibration and Consistency of Adversarial Surrogate Losses

04/19/2021
by Pranjal Awasthi, et al.

Adversarial robustness is an increasingly critical property of classifiers in applications. The design of robust algorithms relies on surrogate losses since the optimization of the adversarial loss with most hypothesis sets is NP-hard. But which surrogate losses should be used and when do they benefit from theoretical guarantees? We present an extensive study of this question, including a detailed analysis of the H-calibration and H-consistency of adversarial surrogate losses. We show that, under some general assumptions, convex loss functions, or the supremum-based convex losses often used in applications, are not H-calibrated for important hypothesis sets such as generalized linear models or one-layer neural networks. We then give a characterization of H-calibration and prove that some surrogate losses are indeed H-calibrated for the adversarial loss, with these hypothesis sets. Next, we show that H-calibration is not sufficient to guarantee consistency and prove that, in the absence of any distributional assumption, no continuous surrogate loss is consistent in the adversarial setting. This, in particular, proves that a claim presented in a COLT 2020 publication is inaccurate. (Calibration results there are correct modulo subtle definition differences, but the consistency claim does not hold.) Next, we identify natural conditions under which some surrogate losses that we describe in detail are H-consistent for hypothesis sets such as generalized linear models and one-layer neural networks. We also report a series of empirical results with simulated data, which show that many H-calibrated surrogate losses are indeed not H-consistent, and validate our theoretical assumptions.
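To make the objects discussed in the abstract concrete, the following is a minimal sketch of the two losses being compared, written in standard notation from the adversarial-robustness literature. The perturbation radius gamma, the hypothesis h, the margin-based loss Phi, and the norm defining the perturbation set are notational assumptions for illustration, not definitions copied from the paper.

% Adversarial (robust) zero-one loss: a point counts as an error if any
% perturbation within radius gamma reaches or crosses the decision boundary.
\[
\widetilde{\ell}_{\gamma}(h, x, y) \;=\; \sup_{x' \,:\, \|x' - x\| \le \gamma} \mathbb{1}_{\, y\, h(x') \,\le\, 0}
\]

% Supremum-based surrogate built from a margin-based loss Phi,
% e.g. the hinge loss Phi(t) = max(0, 1 - t):
\[
\widetilde{\Phi}_{\gamma}(h, x, y) \;=\; \sup_{x' \,:\, \|x' - x\| \le \gamma} \Phi\bigl(y\, h(x')\bigr)
\]

In these terms, the negative results above say that, under general assumptions, such supremum-based convex surrogates fail to be H-calibrated for hypothesis sets like generalized linear models or one-layer neural networks, while the positive results characterize H-calibration and give conditions under which specific surrogates are H-consistent.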


