A Finer Calibration Analysis for Adversarial Robustness

05/04/2021
by Pranjal Awasthi et al.

We present a more general analysis of H-calibration for adversarially robust classification. By adopting a finer definition of calibration, we can cover settings beyond the restricted hypothesis sets studied in previous work. In particular, our results hold for most common hypothesis sets used in machine learning. We both fix some previous calibration results (Bao et al., 2020) and generalize others (Awasthi et al., 2021). Moreover, our calibration results, combined with the previous study of consistency by Awasthi et al. (2021), also lead to more general H-consistency results covering common hypothesis sets.
