Cost-Sensitive Robustness against Adversarial Examples

10/22/2018
by   Xiao Zhang, et al.

Several recent works have developed methods for training classifiers that are certifiably robust against norm-bounded adversarial perturbations. However, these methods assume that all adversarial transformations are equally valuable to an adversary, which is seldom the case in real-world applications. We advocate for cost-sensitive robustness as the criterion for measuring a classifier's performance on specific tasks. We encode the potential harm of each adversarial transformation in a cost matrix, and propose a general objective function that adapts the robust training method of Wong & Kolter (2018) to optimize for cost-sensitive robustness. Our experiments on simple MNIST and CIFAR10 models with a variety of cost matrices show that the proposed approach can produce models with substantially reduced cost-sensitive robust error, while maintaining classification accuracy.
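To make the evaluation criterion concrete, here is a minimal numpy sketch of computing a cost-sensitive robust error from a cost matrix. The `vulnerable` flags are a hypothetical stand-in for the output of a certification procedure such as Wong & Kolter's (True meaning the example is not certified robust against that target class); the exact definition and normalization used in the paper may differ.

```python
import numpy as np

def cost_sensitive_robust_error(vulnerable, labels, cost):
    # vulnerable[i, j]: True if example i is NOT certified robust against
    #   being adversarially transformed into target class j (assumed input)
    # labels[i]: true class index of example i
    # cost[y, j]: potential harm of transforming true class y into class j
    per_example = cost[labels] * vulnerable  # (n, m) incurred costs
    return per_example.sum() / len(labels)   # average cost per example

# Toy binary task: turning class 1 into class 0 is 10x more harmful
cost = np.array([[0.0, 1.0],
                 [10.0, 0.0]])
labels = np.array([0, 1])
vulnerable = np.array([[False, True],
                       [True, False]])
err = cost_sensitive_robust_error(vulnerable, labels, cost)
# err == (1 + 10) / 2 == 5.5
```

A training objective can weight per-target robust loss terms by the same matrix, so that the optimizer spends capacity protecting only the costly transformations.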


Related research

03/19/2020 — Breaking certified defenses: Semantic adversarial examples with spoofed robustness certificates
To deflect adversarial attacks, a range of "certified" classifiers have ...

10/08/2019 — Directional Adversarial Training for Cost Sensitive Deep Learning Classification Applications
In many real-world applications of Machine Learning it is of paramount i...

02/02/2023 — On the Robustness of Randomized Ensembles to Adversarial Perturbations
Randomized ensemble classifiers (RECs), where one classifier is randomly...

02/20/2023 — Seasoning Model Soups for Robustness to Adversarial and Natural Distribution Shifts
Adversarial training is widely used to make classifiers robust to a spec...

11/02/2017 — Provable defenses against adversarial examples via the convex outer adversarial polytope
We propose a method to learn deep ReLU-based classifiers that are provab...

10/30/2018 — On the Effectiveness of Interval Bound Propagation for Training Verifiably Robust Models
Recent works have shown that it is possible to train models that are ver...

05/19/2021 — Balancing Robustness and Sensitivity using Feature Contrastive Learning
It is generally believed that robust training of extremely large network...
