Achieving robustness in classification using optimal transport with hinge regularization

06/11/2020
by Mathieu Serrurier, et al.

We propose a new framework for robust binary classification with deep neural networks, based on a hinge regularization of the Kantorovich-Rubinstein dual formulation used to estimate the Wasserstein distance. The robustness of the approach is guaranteed by the strict Lipschitz constraint imposed on the functions in the optimization problem and by the direct interpretation of the loss in terms of adversarial robustness. We prove that this classification formulation has a solution and that it remains the dual formulation of an optimal transport problem, and we establish the geometrical properties of this optimal solution. We summarize state-of-the-art methods for enforcing Lipschitz constraints on neural networks and propose new ones for convolutional networks (accompanied by an open-source library for this purpose). Experiments show that the approach provides the expected robustness guarantees without any significant drop in accuracy. The results also suggest that adversarial attacks on the proposed models visibly and meaningfully change the input, and can thus serve as explanations for the classification.
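The objective the abstract describes can be sketched concretely. Below is an illustrative PyTorch implementation of a hinge-regularized Kantorovich-Rubinstein (hKR) loss of the form E_{P-}[f(x)] - E_{P+}[f(x)] + α E[(m - y f(x))_+] with labels y in {-1, +1}. The function name hkr_loss, the default values of α and m, and the choice of PyTorch are assumptions made for illustration; this is not the API of the paper's released library.

```python
import torch

def hkr_loss(scores: torch.Tensor, labels: torch.Tensor,
             alpha: float = 10.0, margin: float = 1.0) -> torch.Tensor:
    """Illustrative sketch of a hinge-regularized Kantorovich-Rubinstein loss.

    scores : raw outputs f(x) of a 1-Lipschitz network, shape (batch,)
    labels : binary labels in {-1, +1}, shape (batch,)
    Assumes both classes appear in the batch (hypothetical helper, not the paper's code).
    """
    pos, neg = scores[labels > 0], scores[labels < 0]
    # Negated Kantorovich-Rubinstein term: minimizing it maximizes
    # E_{P+}[f] - E_{P-}[f], the dual estimate of the Wasserstein-1 distance
    # between the two class-conditional distributions.
    kr_term = neg.mean() - pos.mean()
    # Hinge regularization: penalizes samples that do not reach at least
    # `margin` on the correct side of the decision boundary f(x) = 0.
    hinge_term = torch.relu(margin - labels * scores).mean()
    return kr_term + alpha * hinge_term
```

Minimizing such a loss is only meaningful when f is constrained to be 1-Lipschitz (for instance via spectral normalization or orthogonalization of the dense and convolutional layers); without that constraint the Kantorovich-Rubinstein term is unbounded below.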

Related research

04/11/2021 - The Many Faces of 1-Lipschitz Neural Networks
Lipschitz constrained models have been used to solve specifics deep lear...

04/27/2022 - The Multimarginal Optimal Transport Formulation of Adversarial Multiclass Classification
We study a family of adversarial multiclass classification problems and ...

11/19/2018 - Optimal Transport Classifier: Defending Against Adversarial Attacks by Regularized Deep Embedding
Recent studies have demonstrated the vulnerability of deep convolutional...

06/12/2021 - Adversarial Robustness via Fisher-Rao Regularization
Adversarial robustness has become a topic of growing interest in machine...

06/14/2022 - When adversarial attacks become interpretable counterfactual explanations
We argue that, when learning a 1-Lipschitz neural network with the dual ...

05/23/2017 - Formal Guarantees on the Robustness of a Classifier against Adversarial Manipulation
Recent work has shown that state-of-the-art classifiers are quite brittl...

03/20/2022 - Distributionally robust risk evaluation with causality constraint and structural information
This work studies distributionally robust evaluation of expected functio...
