
A Finer Calibration Analysis for Adversarial Robustness

by Pranjal Awasthi, et al.

We present a more general analysis of H-calibration for adversarially robust classification. By adopting a finer definition of calibration, we can cover settings beyond the restricted hypothesis sets studied in previous work. In particular, our results hold for most common hypothesis sets used in machine learning. We both fix some previous calibration results (Bao et al., 2020) and generalize others (Awasthi et al., 2021). Moreover, our calibration results, combined with the previous study of consistency by Awasthi et al. (2021), also lead to more general H-consistency results covering common hypothesis sets.
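The supremum-based adversarial surrogate losses studied in this line of work (Awasthi et al., 2021) admit a closed form for linear hypotheses, which makes the robust zero-one loss and its surrogates directly computable. A minimal sketch under that linear setting (the function names and the ℓ∞ perturbation model are illustrative choices, not taken from the paper):

```python
import numpy as np

# Illustrative sketch: for a linear classifier h(x) = w.x, the worst-case
# margin under an l_inf perturbation ball of radius gamma has the closed form
#   inf_{||x' - x||_inf <= gamma} y * w.x' = y * w.x - gamma * ||w||_1,
# so the adversarial 0/1 loss and a sup-based hinge surrogate are exact.

def robust_margin(w, x, y, gamma):
    """Worst-case margin of a linear classifier under an l_inf attack."""
    return y * np.dot(w, x) - gamma * np.linalg.norm(w, 1)

def robust_zero_one(w, x, y, gamma):
    """Adversarial 0/1 loss: 1 iff some admissible perturbation flips the sign."""
    return float(robust_margin(w, x, y, gamma) <= 0)

def robust_hinge(w, x, y, gamma):
    """Supremum-based hinge surrogate: sup over the ball of the hinge loss."""
    return max(0.0, 1.0 - robust_margin(w, x, y, gamma))

w = np.array([1.0, -2.0])
x = np.array([0.5, -0.25])
y = 1
print(robust_margin(w, x, y, 0.1))    # approximately 0.7
print(robust_zero_one(w, x, y, 0.1))  # 0.0 (the margin stays positive)
print(robust_hinge(w, x, y, 0.1))     # approximately 0.3
```

Calibration asks whether minimizing such a surrogate conditionally (per input) forces the robust zero-one loss toward its conditional minimum over the hypothesis set H; the paper's finer definition lets this question be answered for hypothesis sets like the linear one above.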



