Defective Convolutional Layers Learn Robust CNNs

11/19/2019
by Tiange Luo et al.

The robustness of convolutional neural networks (CNNs) has recently been highlighted by adversarial examples: inputs perturbed in ways that are imperceptible to humans yet cause the network to produce incorrect outputs. Recent research suggests that the noise in adversarial examples disrupts textural structure, which ultimately leads to wrong predictions. To make a CNN rely less on textural information, we propose defective convolutional layers, which contain defective neurons whose activations are fixed to a constant. Because defective neurons carry no information and differ sharply from the standard neurons in their spatial neighborhoods, textural features can no longer be extracted accurately, and the model must turn to other cues, such as shape, for classification. We first show that predictions made by a defective CNN depend less on texture and more on shape, and further find that adversarial examples generated against a defective CNN tend to have semantically meaningful shapes. Experimental results demonstrate that the defective CNN defends better than a standard CNN against various types of attacks. In particular, it achieves state-of-the-art performance against transfer-based attacks without any adversarial training.
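
The mechanism described above lends itself to a compact implementation. Below is a minimal PyTorch sketch, not the authors' released code: it assumes the defective neurons are realized by multiplying each layer's output with a fixed binary mask, with zero as the constant activation. The class name DefectiveConv2d and the defect_rate and feature_size parameters are illustrative assumptions; the exact defect pattern (per-position vs. per-channel, defect ratio, which layers are made defective) would follow the paper's own design.

```python
import torch
import torch.nn as nn


class DefectiveConv2d(nn.Module):
    """A convolution whose output is multiplied by a fixed binary mask,
    so masked ("defective") neurons always emit the constant 0."""

    def __init__(self, in_channels, out_channels, kernel_size,
                 feature_size, defect_rate=0.5, **conv_kwargs):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size,
                              **conv_kwargs)
        # Sample the defect pattern once; stored as a buffer, it is
        # never updated during training. A 0 marks a defective neuron.
        mask = (torch.rand(1, out_channels, feature_size, feature_size)
                >= defect_rate).float()
        self.register_buffer("mask", mask)

    def forward(self, x):
        # Defective neurons contribute no information: their activation
        # is the constant 0 regardless of the input.
        return self.conv(x) * self.mask


# Usage: a 3x3 convolution on 32x32 feature maps with roughly half of
# the output neurons made defective.
layer = DefectiveConv2d(3, 16, 3, feature_size=32, defect_rate=0.5,
                        padding=1)
out = layer(torch.randn(8, 3, 32, 32))  # shape: (8, 16, 32, 32)
```

Storing the mask as a buffer rather than a parameter means it receives no gradient, moves with the module under .to(device), and is saved in the state dict, which keeps the defect pattern fixed for the lifetime of the model.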

Related research

07/27/2020 · RANDOM MASK: Towards Robust Convolutional Neural Networks
Robustness of neural networks has recently been highlighted by the adver...

07/12/2022 · Exploring Adversarial Examples and Adversarial Robustness of Convolutional Neural Networks by Mutual Information
A counter-intuitive property of convolutional neural networks (CNNs) is ...

05/10/2020 · Class-Aware Domain Adaptation for Improving Adversarial Robustness
Recent works have demonstrated convolutional neural networks are vulnera...

10/09/2018 · Analyzing the Noise Robustness of Deep Neural Networks
Deep neural networks (DNNs) are vulnerable to maliciously generated adve...

04/28/2017 · Parseval Networks: Improving Robustness to Adversarial Examples
We introduce Parseval networks, a form of deep neural networks in which ...

03/11/2023 · Improving the Robustness of Deep Convolutional Neural Networks Through Feature Learning
Deep convolutional neural network (DCNN for short) models are vulnerable...

12/14/2016 · Beam Search for Learning a Deep Convolutional Neural Network of 3D Shapes
This paper addresses 3D shape recognition. Recent work typically represe...
