Certified Robust Models with Slack Control and Large Lipschitz Constants

09/12/2023
by Max Losch, et al.

Despite recent success, state-of-the-art learning-based models remain highly vulnerable to input changes such as adversarial examples. To obtain certifiable robustness against such perturbations, recent work considers Lipschitz-based regularizers or constraints while simultaneously increasing the prediction margin. Unfortunately, this comes at the cost of significantly decreased accuracy. In this paper, we propose a Calibrated Lipschitz-Margin Loss (CLL) that addresses this issue and improves certified robustness by tackling two problems. Firstly, commonly used margin losses do not adjust their penalties to the shrinking output distribution caused by minimizing the Lipschitz constant K. Secondly, and most importantly, we observe that minimization of K can lead to overly smooth decision functions, which limits model complexity and thus reduces accuracy. Our CLL addresses these issues by explicitly calibrating the loss w.r.t. margin and Lipschitz constant, thereby establishing full control over slack and improving robustness certificates even with larger Lipschitz constants. On CIFAR-10, CIFAR-100 and Tiny-ImageNet, our models consistently outperform losses that leave the constant unattended. On CIFAR-100 and Tiny-ImageNet, CLL improves upon state-of-the-art deterministic L_2 robust accuracies. In contrast to current trends, we unlock the potential of much smaller models without K=1 constraints.
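The robustness certificate that Lipschitz-margin methods build on can be stated concisely: for a network with global L2 Lipschitz constant K, a prediction cannot flip within an L2 ball whose radius is proportional to the logit margin divided by K. The sketch below shows this standard certified-radius computation (in the form popularized by Lipschitz-margin training); it is a minimal illustration, not the CLL loss itself, and the example logits and K value are assumed for demonstration.

```python
import math

def certified_radius(logits, K):
    """Standard L2 certificate for a K-Lipschitz classifier:
    the prediction is provably unchanged within an L2 ball of
    radius (top logit - runner-up logit) / (sqrt(2) * K).
    Logits and K here are illustrative, not from the paper."""
    s = sorted(logits)
    margin = s[-1] - s[-2]          # gap between top and runner-up class
    return margin / (math.sqrt(2) * K)

# Example: margin 2.5, K = 2.0 -> radius 2.5 / (2 * sqrt(2)) ~= 0.884
print(certified_radius([3.0, 0.5, -1.0], K=2.0))
```

The formula makes the paper's trade-off visible: shrinking K inflates every certificate but also compresses the output distribution, which is why a loss that is not recalibrated to K (as CLL is) penalizes margins inconsistently.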


01/29/2023

Scaling in Depth: Unlocking Robustness Certification on ImageNet

Notwithstanding the promise of Lipschitz-based approaches to determinist...
11/20/2018

Lightweight Lipschitz Margin Training for Certified Defense against Adversarial Examples

How can we make machine learning provably robust against adversarial exa...
06/05/2020

Lipschitz Bounds and Provably Robust Training by Laplacian Smoothing

In this work we propose a graph-based learning framework to train models...
06/01/2022

The robust way to stack and bag: the local Lipschitz way

Recent research has established that the local Lipschitz constant of a n...
10/15/2019

Notes on Lipschitz Margin, Lipschitz Margin Training, and Lipschitz Margin p-Values for Deep Neural Network Classifiers

We provide a local class purity theorem for Lipschitz continuous, half-r...
07/13/2022

Lipschitz Continuity Retained Binary Neural Network

Relying on the premise that the performance of a binary neural network c...
11/29/2019

On the Benefits of Attributional Robustness

Interpretability is an emerging area of research in trustworthy machine ...
