Scaling in Depth: Unlocking Robustness Certification on ImageNet

01/29/2023
by Kai Hu et al.

Notwithstanding the promise of Lipschitz-based approaches to deterministically train and certify robust deep networks, state-of-the-art results have only made successful use of feed-forward Convolutional Networks (ConvNets) on low-dimensional data, e.g., CIFAR-10. Because ConvNets often suffer from vanishing gradients when going deep, large-scale datasets with many classes, e.g., ImageNet, have remained out of practical reach. This paper investigates ways to scale up certifiably robust training to Residual Networks (ResNets). First, we introduce the Linear ResNet (LiResNet) architecture, which utilizes a new residual block designed to admit tighter Lipschitz bounds than a conventional residual block. Second, we introduce Efficient Margin MAximization (EMMA), a loss function that stabilizes robust training by simultaneously penalizing worst-case adversarial examples from all classes. Combining LiResNet and EMMA, we achieve new state-of-the-art robust accuracy on CIFAR-10/100 and Tiny-ImageNet under ℓ_2-norm-bounded perturbations. Moreover, for the first time, we are able to scale up deterministic robustness guarantees to ImageNet, opening the door to applying deterministic certification in real-world settings.
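To illustrate why a linear residual block admits a tighter Lipschitz bound, here is a toy NumPy sketch (not the paper's implementation) using a dense matrix `W` as a stand-in for the block's convolution. A conventional residual block y = x + g(x) with a 1-Lipschitz branch g is generically bounded by 1 + ||W||₂ via sub-additivity, whereas a linear block y = x + Wx is itself a linear map, so its exact Lipschitz constant is the spectral norm ||I + W||₂, which is never larger:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16
# Stand-in for the block's convolution weights (assumed, for illustration only).
W = 0.5 * rng.standard_normal((d, d)) / np.sqrt(d)

# Conventional residual block y = x + g(x), g a 1-Lipschitz branch built on W:
# the generic bound is Lip <= 1 + ||W||_2 (triangle inequality / sub-additivity).
loose = 1.0 + np.linalg.norm(W, 2)

# Linear residual block (LiResNet-style) y = x + W x = (I + W) x:
# the whole block is linear, so its exact Lipschitz constant is ||I + W||_2,
# computed here by power iteration on A^T A.
A = np.eye(d) + W
v = rng.standard_normal(d)
for _ in range(100):
    v = A.T @ (A @ v)
    v /= np.linalg.norm(v)
tight = np.linalg.norm(A @ v)  # top singular value of A (v is unit-norm)

print(f"loose bound: {loose:.4f}, tight bound: {tight:.4f}")
assert tight <= loose + 1e-9
```

The gap between the two bounds compounds multiplicatively across depth, which is one intuition for why the linear block helps certification scale to deeper networks.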

Related research:

- 09/12/2023 · Certified Robust Models with Slack Control and Large Lipschitz Constants. Despite recent success, state-of-the-art learning-based models remain hi...
- 10/25/2021 · Scalable Lipschitz Residual Networks with Convex Potential Flows. The Lipschitz constant of neural networks has been established as a key ...
- 07/04/2019 · Adversarial Robustness through Local Linearization. Adversarial training is an effective methodology for training deep neura...
- 11/06/2018 · MixTrain: Scalable Training of Verifiably Robust Neural Networks. Making neural networks robust against adversarial inputs has resulted in...
- 02/22/2021 · On the robustness of randomized classifiers to adversarial examples. This paper investigates the theory of robustness against adversarial att...
- 11/22/2017 · BlockDrop: Dynamic Inference Paths in Residual Networks. Very deep convolutional neural networks offer excellent recognition resu...
- 03/11/2021 · Preprint: Norm Loss: An efficient yet effective regularization method for deep neural networks. Convolutional neural network training can suffer from diverse issues lik...
