A Push-Pull Layer Improves Robustness of Convolutional Neural Networks

01/29/2019
by   Nicola Strisciuglio, et al.

We propose a new layer in Convolutional Neural Networks (CNNs) to increase their robustness to several types of noise perturbation of the input images. We call this a push-pull layer and compute its response as the combination of two half-wave rectified convolutions with kernels of opposite polarity. It is based on a biologically-motivated non-linear model of certain neurons in the visual system that exhibit a response-suppression phenomenon known as push-pull inhibition. We validate our method by substituting the first convolutional layer of the LeNet-5 and WideResNet architectures with our push-pull layer. We train the networks on non-perturbed training images from the MNIST, CIFAR-10 and CIFAR-100 data sets, and test on images perturbed by noise that is unseen during training. We demonstrate that our push-pull layer contributes to a considerable improvement in the robustness of classification of noise-perturbed images, while maintaining state-of-the-art performance on the original image classification task.
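The core computation described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: it combines two half-wave rectified (ReLU) convolutions with kernels of opposite polarity, where the rectified "pull" response suppresses the "push" response. The inhibition weight `alpha` is a hypothetical parameter name, and details from the paper such as the enlarged support of the inhibition kernel are omitted.

```python
import numpy as np

def conv2d_valid(img, kernel):
    # Naive 'valid' 2D cross-correlation (no padding, stride 1).
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def push_pull(img, kernel, alpha=1.0):
    # Push: half-wave rectified response to the excitatory kernel.
    push = np.maximum(conv2d_valid(img, kernel), 0.0)
    # Pull: half-wave rectified response to the opposite-polarity kernel.
    pull = np.maximum(conv2d_valid(img, -kernel), 0.0)
    # The pull response inhibits (suppresses) the push response;
    # alpha controls the inhibition strength (illustrative parameter).
    return push - alpha * pull
```

For a zero-mean kernel on a flat image, both rectified responses vanish and the output is zero; on an edge matching the kernel's polarity, only the push term fires, so the layer behaves like an ordinary rectified convolution there.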


Related research

- 06/05/2018: Perturbative Neural Networks
- 09/17/2022: A study on the deviations in performance of FNNs and CNNs in the realm of grayscale adversarial images
- 11/26/2018: Brain-inspired robust delineation operator
- 03/21/2021: Natural Perturbed Training for General Robustness of Neural Network Classifiers
- 10/08/2018: Diagnosing Convolutional Neural Networks using their Spectral Response
- 07/20/2021: Built-in Elastic Transformations for Improved Robustness
- 11/18/2021: Wiggling Weights to Improve the Robustness of Classifiers
