
Lipschitz Networks and Distributional Robustness

09/04/2018
by   Zac Cranko, et al.

Robust risk minimisation has several advantages: it has been studied both as a means of improving the generalisation properties of models and as a defence against adversarial perturbation. We bound the distributionally robust risk for a model class rich enough to include deep neural networks by a regularised empirical risk involving the Lipschitz constant of the model. This allows us to interpret and quantify the robustness properties of a deep neural network. As an application, we show that the distributionally robust risk upper-bounds the adversarial training risk.
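The regulariser in the abstract involves the Lipschitz constant of the network. For a ReLU feed-forward network, a standard upper bound on this constant is the product of the spectral norms of the weight matrices (each linear layer is Lipschitz with constant equal to its largest singular value, and ReLU is 1-Lipschitz). The sketch below illustrates this bound and a regularised empirical risk of the general form the paper studies; the function names and the penalty weight `lam` are illustrative, not the paper's notation.

```python
import numpy as np

def lipschitz_upper_bound(weights):
    """Upper-bound the Lipschitz constant of a ReLU MLP by the product
    of the spectral norms (largest singular values) of its weight
    matrices. ReLU is 1-Lipschitz, so it contributes a factor of 1."""
    bound = 1.0
    for W in weights:
        bound *= np.linalg.norm(W, 2)  # spectral norm of the layer
    return bound

def regularised_empirical_risk(empirical_risk, weights, lam=0.1):
    """Illustrative objective: empirical risk plus a penalty on the
    Lipschitz bound, of the general form R_emp + lam * Lip(f)."""
    return empirical_risk + lam * lipschitz_upper_bound(weights)

# Example: a small random two-layer network.
rng = np.random.default_rng(0)
weights = [rng.normal(size=(16, 8)), rng.normal(size=(8, 4))]
print(lipschitz_upper_bound(weights))
print(regularised_empirical_risk(0.25, weights, lam=0.01))
```

Note that this product bound is generally loose; tighter estimates require analysing layer interactions, but the product form is cheap to compute and differentiable, which is why it is a common choice for regularisation.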

Related research:

02/11/2020 — Generalised Lipschitz Regularisation Equals Distributional Robustness: The problem of adversarial examples has highlighted the need for a theor...

03/24/2022 — A Manifold View of Adversarial Risk: The adversarial risk of a machine learning model has been widely studied...

12/10/2019 — Statistically Robust Neural Network Classification: Recently there has been much interest in quantifying the robustness of n...

06/01/2022 — The robust way to stack and bag: the local Lipschitz way: Recent research has established that the local Lipschitz constant of a n...

02/16/2021 — A Law of Robustness for Weight-bounded Neural Networks: Robustness of deep neural networks against adversarial perturbations is ...

02/26/2022 — Adversarial robustness of sparse local Lipschitz predictors: This work studies the adversarial robustness of parametric functions com...

04/28/2017 — Parseval Networks: Improving Robustness to Adversarial Examples: We introduce Parseval networks, a form of deep neural networks in which ...