
Generalised Lipschitz Regularisation Equals Distributional Robustness

by Zac Cranko et al.

The problem of adversarial examples has highlighted the need for a theory of regularisation that is general enough to apply to exotic function classes, such as universal approximators. In response, we give a very general equality result regarding the relationship between distributional robustness and regularisation, as defined with a transportation cost uncertainty set. The theory allows us to (tightly) certify the robustness properties of a Lipschitz-regularised model with very mild assumptions. As a theoretical application we show a new result explicating the connection between adversarial learning and distributional robustness. We then give new results for how to achieve Lipschitz regularisation of kernel classifiers, which are demonstrated experimentally.
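The abstract's final claim, that Lipschitz regularisation of kernel classifiers yields certifiable robustness, can be illustrated with a standard fact: for a Gaussian-kernel model, the RKHS norm bounds the Lipschitz constant. The sketch below is illustrative only and is not the paper's algorithm; the function names, the ridge penalty, and the bandwidth choice are assumptions made for the example.

```python
import numpy as np

# Illustrative sketch (not the paper's method): for a Gaussian kernel
# k(x, y) = exp(-||x - y||^2 / (2 sigma^2)), the feature map is
# (1/sigma)-Lipschitz, so a model f(x) = sum_i alpha_i k(x_i, x) satisfies
# |f(x) - f(y)| <= (||f||_H / sigma) ||x - y||, where
# ||f||_H = sqrt(alpha' K alpha). Penalising alpha' K alpha therefore
# acts as a form of Lipschitz regularisation with a certified bound.

def gaussian_kernel(X, Y, sigma=1.0):
    # Pairwise squared distances, then the RBF kernel matrix.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def fit_regularised(X, y, lam=0.1, sigma=1.0):
    K = gaussian_kernel(X, X, sigma)
    # Kernel ridge: minimise ||K a - y||^2 + lam * a' K a,
    # whose closed-form solution is a = (K + lam I)^{-1} y.
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)
    rkhs_norm = np.sqrt(alpha @ K @ alpha)
    lip_bound = rkhs_norm / sigma  # certified Lipschitz upper bound
    return alpha, lip_bound

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 2))
y = np.sign(X[:, 0])
_, L_small = fit_regularised(X, y, lam=0.1)
_, L_large = fit_regularised(X, y, lam=10.0)
print(f"Lipschitz bound, weak penalty:   {L_small:.3f}")
print(f"Lipschitz bound, strong penalty: {L_large:.3f}")
```

Increasing the penalty `lam` shrinks the RKHS norm and hence the certified Lipschitz bound, trading fit for robustness, which is the regularisation-robustness trade-off the abstract refers to.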



Lipschitz Networks and Distributional Robustness

Robust risk minimisation has several advantages: it has been studied wit...

Lightweight Lipschitz Margin Training for Certified Defense against Adversarial Examples

How can we make machine learning provably robust against adversarial exa...

Evaluating the Robustness of Neural Networks: An Extreme Value Theory Approach

The robustness of neural networks to adversarial examples has received g...

Adversarial Robustness Curves

The existence of adversarial examples has led to considerable uncertaint...

A Unified Wasserstein Distributional Robustness Framework for Adversarial Training

It is well-known that deep neural networks (DNNs) are susceptible to adv...

Robustness of Bayesian Pool-based Active Learning Against Prior Misspecification

We study the robustness of active learning (AL) algorithms against prior...

Universal Lipschitz Approximation in Bounded Depth Neural Networks

Adversarial attacks against machine learning models are a rather hefty o...