Householder Activations for Provable Robustness against Adversarial Attacks

08/05/2021
by Sahil Singla, et al.
Training convolutional neural networks (CNNs) with a strict Lipschitz constraint under the l_2 norm is useful for provable adversarial robustness, interpretable gradients and stable training. While 1-Lipschitz CNNs can be designed by enforcing a 1-Lipschitz constraint on each layer, training such networks requires each layer to have an orthogonal Jacobian matrix (for all inputs) to prevent gradients from vanishing during backpropagation. A layer with this property is said to be Gradient Norm Preserving (GNP). To construct expressive GNP activation functions, we first prove that the Jacobian of any GNP piecewise-linear function can only change via Householder transformations if the function is to remain continuous. Building on this result, we introduce a class of nonlinear GNP activations with learnable Householder transformations, called Householder (HH) activations. An HH activation parameterized by the vector šÆ outputs (šˆ - 2šÆšÆ^T)š³ for its input š³ if šÆ^Tš³ ≤ 0; otherwise it outputs š³. Existing GNP activations such as MaxMin can be viewed as special cases of HH activations for certain settings of these transformations; thus, networks with HH activations have higher expressive power than those with MaxMin activations. Although networks with HH activations already achieve nontrivial provable robustness against adversarial attacks, we further boost their robustness by (i) introducing a certificate regularization and (ii) relaxing orthogonalization of the last layer of the network. Our experiments on CIFAR-10 and CIFAR-100 show that our regularized networks with HH activations yield significant improvements in both standard and provable robust accuracy over prior works (gains of 3.65% and 4.46%, respectively, on CIFAR-100).
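
The HH activation formula above maps directly to a small module. Below is a minimal PyTorch sketch of a Householder activation as described in the abstract; the class name, the normalization of šÆ, and the choice to apply it to pairs of channels (mirroring MaxMin) are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HouseholderActivation(nn.Module):
    """Householder (HH) activation: outputs (I - 2 v v^T) z if v^T z <= 0, else z.

    Both branches are orthogonal maps, so the activation is Gradient Norm
    Preserving (GNP).
    """
    def __init__(self, dim: int):
        super().__init__()
        # Learnable direction; normalized in forward() so (I - 2 v v^T) is a true reflection.
        self.v = nn.Parameter(torch.randn(dim))

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # z has shape (..., dim); compute v^T z for every input vector.
        v = F.normalize(self.v, dim=0)
        proj = z @ v                                  # shape (...,)
        reflected = z - 2.0 * proj.unsqueeze(-1) * v  # (I - 2 v v^T) z
        return torch.where(proj.unsqueeze(-1) <= 0, reflected, z)

# Example usage: applied to pairs of channels (dim=2), analogous to MaxMin.
act = HouseholderActivation(dim=2)
out = act(torch.randn(8, 2))
```

Note that when šÆ is fixed to the direction that swaps two coordinates, the reflection sorts the pair, which is how MaxMin arises as a special case.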
