Lipschitz Networks and Distributional Robustness

09/04/2018
by   Zac Cranko, et al.

Robust risk minimisation has several advantages: it has been studied in connection with improving the generalisation properties of models and with robustness to adversarial perturbation. We bound the distributionally robust risk for a model class rich enough to include deep neural networks by a regularised empirical risk involving the Lipschitz constant of the model. This allows us to interpret and quantify the robustness properties of a deep neural network. As an application, we show that the distributionally robust risk upper-bounds the adversarial training risk.
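As a rough illustration (not the paper's own construction), the Lipschitz constant of a feed-forward ReLU network can be upper-bounded by the product of the spectral norms of its weight matrices, since ReLU is 1-Lipschitz; this product is the kind of quantity a Lipschitz-regularised empirical risk penalises. The weights below are random placeholders, not from any trained model:

```python
import numpy as np

# Hypothetical two-layer network weights (illustrative only).
rng = np.random.default_rng(0)
W1 = rng.normal(size=(16, 8))   # first layer: 8 -> 16
W2 = rng.normal(size=(1, 16))   # second layer: 16 -> 1

def lipschitz_upper_bound(weights):
    """Product of per-layer spectral norms: an upper bound on the
    Lipschitz constant of a ReLU network built from these layers."""
    return float(np.prod([np.linalg.norm(W, ord=2) for W in weights]))

bound = lipschitz_upper_bound([W1, W2])
print(bound)
```

The bound is generally loose (it ignores how activations interact across layers), but it is cheap to compute and differentiable via the largest singular values, which is why spectral-norm penalties are a common surrogate for Lipschitz regularisation.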

Related research

- 02/11/2020, "Generalised Lipschitz Regularisation Equals Distributional Robustness": The problem of adversarial examples has highlighted the need for a theor...
- 05/30/2023, "It begins with a boundary: A geometric view on probabilistically robust learning": Although deep neural networks have achieved super-human performance on m...
- 03/24/2022, "A Manifold View of Adversarial Risk": The adversarial risk of a machine learning model has been widely studied...
- 12/10/2019, "Statistically Robust Neural Network Classification": Recently there has been much interest in quantifying the robustness of n...
- 06/01/2022, "The robust way to stack and bag: the local Lipschitz way": Recent research has established that the local Lipschitz constant of a n...
- 02/16/2021, "A Law of Robustness for Weight-bounded Neural Networks": Robustness of deep neural networks against adversarial perturbations is ...
- 02/26/2022, "Adversarial robustness of sparse local Lipschitz predictors": This work studies the adversarial robustness of parametric functions com...
