L2-Nonexpansive Neural Networks

02/22/2018
by Haifeng Qian, et al.

This paper proposes a class of well-conditioned neural networks in which a unit amount of change in the inputs causes at most a unit amount of change in the outputs or in any internal layer. We develop the known methodology of controlling Lipschitz constants to realize its full potential in maximizing robustness: our linear and convolution layers subsume those of the previously proposed Parseval networks as a special case and allow greater degrees of freedom; aggregation, pooling, splitting and other operators are adapted in new ways, and a new loss function is proposed, all for the purpose of improving robustness. With MNIST and CIFAR-10 classifiers, we demonstrate a number of advantages. Without needing any adversarial training, the proposed classifiers exceed the state of the art in robustness against white-box L2-bounded adversarial attacks. Their outputs are quantitatively more meaningful than those of ordinary networks and indicate levels of confidence. They are also free of exploding gradients, among other desirable properties.
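To make the nonexpansiveness property concrete, the sketch below illustrates one simple way to obtain a 1-Lipschitz (in L2) linear map: estimate the weight matrix's largest singular value by power iteration and rescale so the operator norm is at most 1. This is only an illustration of the defining condition, not the paper's actual layer construction, which generalizes the orthogonality constraints of Parseval networks; the function names and iteration count are assumptions for the example.

```python
import numpy as np

def spectral_norm(W, n_iter=50):
    """Estimate the largest singular value of W by power iteration."""
    v = np.random.randn(W.shape[1])
    v /= np.linalg.norm(v)
    for _ in range(n_iter):
        u = W @ v
        u /= np.linalg.norm(u)
        v = W.T @ u
        v /= np.linalg.norm(v)
    return float(u @ W @ v)

def make_nonexpansive(W):
    """Rescale W so its L2 operator norm is at most 1 (nonexpansive map)."""
    s = spectral_norm(W)
    return W / max(s, 1.0)

# A nonexpansive linear map satisfies ||W x1 - W x2||_2 <= ||x1 - x2||_2,
# so a unit change in the input causes at most a unit change in the output.
rng = np.random.default_rng(0)
W = make_nonexpansive(rng.standard_normal((64, 128)))
x1, x2 = rng.standard_normal(128), rng.standard_normal(128)
assert np.linalg.norm(W @ x1 - W @ x2) <= np.linalg.norm(x1 - x2) + 1e-9
```

Composing such layers with nonexpansive activations and pooling keeps the end-to-end Lipschitz constant at most 1, which is what bounds the effect of L2-bounded input perturbations on the logits.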

