
Lipschitz Networks and Distributional Robustness
Robust risk minimisation has several advantages: it has been studied both as a route to improving the generalisation properties of models and as a defence against adversarial perturbation. We bound the distributionally robust risk, for a model class rich enough to include deep neural networks, by a regularised empirical risk involving the Lipschitz constant of the model. This allows us to interpret and quantify the robustness properties of a deep neural network. As an application, we show that the distributionally robust risk upper-bounds the adversarial training risk.
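To make the regularised empirical risk concrete, here is a minimal sketch, not the paper's exact construction: for a ReLU feedforward network, the product of the layers' spectral norms is a standard upper bound on the network's Lipschitz constant, and adding a multiple of that bound to the empirical risk gives a regularised objective of the kind the abstract describes. The function names, the toy weights, and the penalty weight `lam` are all illustrative assumptions.

```python
import numpy as np

def spectral_norm(W):
    # Largest singular value of a weight matrix.
    return np.linalg.svd(W, compute_uv=False)[0]

def lipschitz_upper_bound(weights):
    # For a ReLU network, the product of per-layer spectral norms
    # upper-bounds the Lipschitz constant of the whole network.
    return float(np.prod([spectral_norm(W) for W in weights]))

def regularised_empirical_risk(losses, weights, lam):
    # Empirical risk plus a Lipschitz penalty:
    # (1/n) * sum_i loss_i + lam * Lip(f).
    return float(np.mean(losses)) + lam * lipschitz_upper_bound(weights)

# Toy two-layer network (illustrative values, not from the paper).
W1 = np.array([[2.0, 0.0], [0.0, 1.0]])  # spectral norm 2
W2 = np.array([[0.5, 0.0], [0.0, 0.5]])  # spectral norm 0.5
losses = np.array([0.2, 0.4, 0.6])       # per-example losses

risk = regularised_empirical_risk(losses, [W1, W2], lam=0.1)
print(risk)  # 0.4 + 0.1 * (2 * 0.5) = 0.5
```

In the paper's setting, such a regularised objective upper-bounds the distributionally robust risk, which in turn upper-bounds the adversarial training risk; the spectral-norm product used here is only one convenient (and generally loose) Lipschitz estimate.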