Adversarial Classification via Distributional Robustness with Wasserstein Ambiguity

05/28/2020
by Nam Ho-Nguyen, et al.

We study a model for adversarial classification based on distributionally robust chance constraints. We show that under Wasserstein ambiguity, the model aims to minimize the conditional value-at-risk of the distance to misclassification, and we explore links to previous adversarial classification models and maximum margin classifiers. We also provide a reformulation of the distributionally robust model for linear classifiers, and show it is equivalent to minimizing a regularized ramp loss. Numerical experiments show that, despite the nonconvexity, standard descent methods appear to converge to the global minimizer for this problem. Inspired by this observation, we show that, for a certain benign distribution, the regularized ramp loss minimization problem has a single stationary point, at the global minimizer.
