Lightweight Lipschitz Margin Training for Certified Defense against Adversarial Examples

11/20/2018
by   Hajime Ono, et al.

How can we make machine learning provably robust against adversarial examples in a scalable way? Certified defense methods, which guarantee ϵ-robustness, consume huge resources, so in practice they can achieve only a small degree of robustness. Lipschitz margin training (LMT) is a scalable certified defense, but it too achieves only small robustness because of over-regularization. How can we make certified defense more efficient? We present LC-LMT, a lightweight Lipschitz margin training that addresses this problem. Our method has the following properties: (a) efficient: it achieves ϵ-robustness at an early epoch, and (b) robust: it has the potential to reach higher robustness than LMT. In the evaluation, we demonstrate the benefits of the proposed method. LC-LMT achieves the required robustness more than 30 epochs earlier than LMT on MNIST, and shows more than 90% accuracy on both legitimate and adversarial inputs.
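For intuition, Lipschitz margin training (Tsuzuku et al., 2018), which LC-LMT builds on, can be understood as inflating every non-true-class logit by √2·L·ϵ before the usual cross-entropy loss, where L is an upper bound on the network's Lipschitz constant and ϵ the target robustness radius; minimizing the loss then forces a certified margin. The sketch below illustrates only this logit adjustment in NumPy; the function name and interface are illustrative, not taken from the paper's code.

```python
import numpy as np

def lmt_adjusted_logits(logits, labels, lipschitz_bound, eps):
    """Sketch of the LMT logit adjustment (not the authors' implementation).

    Adds sqrt(2) * L * eps to every non-true-class logit, so that
    cross-entropy training must open a margin of at least sqrt(2) * L * eps
    between the true class and the runner-up, which certifies
    eps-robustness under the Lipschitz bound L.
    """
    logits = np.asarray(logits, dtype=float).copy()
    bump = np.sqrt(2.0) * lipschitz_bound * eps
    for i, y in enumerate(labels):
        mask = np.ones(logits.shape[1], dtype=bool)
        mask[y] = False          # leave the true-class logit untouched
        logits[i, mask] += bump  # inflate all other logits
    return logits

# Example: with L = 1 and eps = 0.5, non-true logits gain sqrt(2)/2.
adjusted = lmt_adjusted_logits([[3.0, 1.0, 0.0]], [0],
                               lipschitz_bound=1.0, eps=0.5)
```

At inference time the same quantity works in reverse: if the trained network's margin between the top two logits exceeds √2·L·ϵ, the prediction is certifiably unchanged under any perturbation of ℓ₂-norm at most ϵ.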


