
Robustness against Adversarial Attacks in Neural Networks using Incremental Dissipativity

by   Bernardo Aquino, et al.
University of Michigan
University of Notre Dame

Adversarial examples can easily degrade the classification performance of neural networks. Empirical methods for promoting robustness to such examples have been proposed, but they often lack both analytical insights and formal guarantees. Recently, some robustness certificates based on system-theoretic notions have appeared in the literature. This work proposes an incremental dissipativity-based robustness certificate for neural networks, expressed as a linear matrix inequality for each layer. We also propose an equivalent spectral norm bound for this certificate, which scales to neural networks with multiple layers. We demonstrate improved performance against adversarial attacks on a feed-forward neural network trained on MNIST and an AlexNet trained on CIFAR-10.
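The spectral norm bound mentioned in the abstract connects to a standard fact: for a feed-forward network with 1-Lipschitz activations (e.g. ReLU), the product of the per-layer weight spectral norms upper-bounds the network's global Lipschitz constant, which in turn bounds how much an adversarial perturbation can move the output. The sketch below is not the paper's certificate; it is a minimal illustration of that standard bound, using power iteration on hypothetical weight matrices.

```python
import numpy as np

def spectral_norm(W, n_iters=100):
    """Estimate the largest singular value of W via power iteration."""
    v = np.random.default_rng(0).standard_normal(W.shape[1])
    v /= np.linalg.norm(v)
    for _ in range(n_iters):
        u = W @ v
        u /= np.linalg.norm(u)
        v = W.T @ u
        v /= np.linalg.norm(v)
    return float(u @ W @ v)

# Hypothetical weights of a small two-layer feed-forward network
# (784 -> 64 -> 10, MNIST-sized input for illustration only).
rng = np.random.default_rng(42)
weights = [rng.standard_normal((64, 784)), rng.standard_normal((10, 64))]

# With 1-Lipschitz activations, the product of per-layer spectral
# norms upper-bounds the network's global Lipschitz constant.
per_layer = [spectral_norm(W) for W in weights]
lipschitz_bound = float(np.prod(per_layer))
print(lipschitz_bound)
```

Constraining each factor in this product during training (e.g. by penalizing or clipping per-layer spectral norms) is one common way such bounds are turned into a robustness-promoting procedure; the paper's dissipativity-based LMI condition is a more refined, layer-wise certificate of this flavor.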



