Suppressing the Unusual: towards Robust CNNs using Symmetric Activation Functions

03/16/2016
by   Qiyang Zhao, et al.

Many deep Convolutional Neural Networks (CNNs) make incorrect predictions on adversarial samples obtained by imperceptibly perturbing clean samples. We hypothesize that this is caused by a failure to suppress unusual signals within the network layers. As a remedy, we propose the use of Symmetric Activation Functions (SAFs) in non-linear signal transducer units; these units suppress signals of exceptional magnitude. We prove that SAF networks can perform classification tasks to arbitrary precision in a simplified setting. In practice, rather than using SAFs alone, we add them to CNNs to improve their robustness. The modified CNNs can be trained easily using popular strategies with a moderate training load. Our experiments on MNIST and CIFAR-10 show that the modified CNNs perform similarly to plain ones on clean samples, while being remarkably more robust against adversarial and nonsense samples.
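The abstract does not give the exact functional form of the SAF, so the sketch below is only illustrative. It assumes a Gaussian-shaped bump activation, exp(-x^2 / (2*sigma^2)), which responds strongly near zero and decays for inputs of exceptional magnitude, and shows how such a unit might be dropped into an ordinary PyTorch CNN alongside ReLU layers, as the abstract suggests. The class names, the sigma value, and the network layout are hypothetical, not the authors' implementation.

```python
# Minimal sketch, assuming a Gaussian-shaped SAF; the paper's exact form may differ.
import torch
import torch.nn as nn

class SymmetricActivation(nn.Module):
    """Bump-shaped activation: signals of exceptional magnitude are squashed towards zero."""
    def __init__(self, sigma: float = 1.0):
        super().__init__()
        self.sigma = sigma

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Symmetric in x; peaks at x = 0 and decays as |x| grows.
        return torch.exp(-x.pow(2) / (2.0 * self.sigma ** 2))

class SmallSAFNet(nn.Module):
    """Hypothetical small CNN mixing ReLU layers with one SAF layer (MNIST-sized input)."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), SymmetricActivation(sigma=2.0),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(16 * 14 * 14, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

if __name__ == "__main__":
    net = SmallSAFNet()
    logits = net(torch.randn(4, 1, 28, 28))  # batch of 4 MNIST-sized images
    print(logits.shape)  # torch.Size([4, 10])
```

Because the bump decays on both sides, a perturbation that pushes a feature far outside its usual range drives the unit's response towards zero rather than amplifying it, which is the suppression behaviour the abstract describes.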
