RANDOM MASK: Towards Robust Convolutional Neural Networks

07/27/2020
by Tiange Luo, et al.

The robustness of neural networks has recently been highlighted by adversarial examples: inputs carrying well-designed perturbations that are imperceptible to humans yet cause the network to produce incorrect outputs. In this paper, we design a new CNN architecture that is robust by itself. We introduce a simple but powerful technique, Random Mask, for modifying existing CNN structures. We show that CNNs with Random Mask achieve state-of-the-art performance against black-box adversarial attacks without any adversarial training. We then investigate the adversarial examples that 'fool' a CNN with Random Mask. Surprisingly, we find that these adversarial examples often 'fool' humans as well. This raises fundamental questions about how to properly define adversarial examples and robustness.
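The abstract does not spell out how Random Mask modifies a CNN. One plausible reading, and the basis for the sketch below, is a binary mask over feature-map locations that is sampled once at initialization and then kept fixed through training and inference (unlike dropout, which resamples every forward pass). The PyTorch class below is an illustrative sketch under that assumption; the name RandomMask, the per-location masking granularity, the drop_ratio parameter, and the placement after an early convolution are choices of this example, not details confirmed by the abstract.

    import torch
    import torch.nn as nn

    class RandomMask(nn.Module):
        # Zero out a fixed random subset of feature-map locations. The mask
        # is sampled once at construction and reused unchanged, unlike
        # dropout, which resamples on every forward pass.
        def __init__(self, channels, height, width, drop_ratio=0.5):
            super().__init__()
            keep = torch.rand(1, channels, height, width) >= drop_ratio
            # Buffers travel with the model's state_dict but get no gradients.
            self.register_buffer("mask", keep.float())

        def forward(self, x):
            return x * self.mask

    # Usage: mask the output of an early convolution on 32x32 inputs (e.g., CIFAR-10).
    block = nn.Sequential(
        nn.Conv2d(3, 64, kernel_size=3, padding=1),   # 32x32 -> 32x32
        RandomMask(channels=64, height=32, width=32, drop_ratio=0.5),
        nn.ReLU(),
    )
    out = block(torch.randn(8, 3, 32, 32))            # shape: (8, 64, 32, 32)

Because the mask is registered as a buffer rather than a parameter, it is stored with the model but never updated by the optimizer, which keeps the masked structure identical at training and test time.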


Related research:

11/19/2019 - Defective Convolutional Layers Learn Robust CNNs
    Robustness of convolutional neural networks has recently been highlighte...

11/28/2020 - Generalized Adversarial Examples: Attacks and Defenses
    Most of the works follow such definition of adversarial example that is ...

02/26/2018 - Retrieval-Augmented Convolutional Neural Networks for Improved Robustness against Adversarial Examples
    We propose a retrieval-augmented convolutional network and propose to tr...

06/14/2018 - Copycat CNN: Stealing Knowledge by Persuading Confession with Random Non-Labeled Data
    In the past few years, Convolutional Neural Networks (CNNs) have been ac...

06/28/2023 - Does Saliency-Based Training bring Robustness for Deep Neural Networks in Image Classification?
    Deep Neural Networks are powerful tools to understand complex patterns a...

03/14/2018 - Defensive Collaborative Multi-task Training - Defending against Adversarial Attack towards Deep Neural Networks
    Deep neural networks (DNNs) have shown impressive performance on hard perc...

09/11/2019 - Towards Noise-Robust Neural Networks via Progressive Adversarial Training
    Adversarial examples, intentionally designed inputs tending to mislead d...
