Detecting Adversarial Examples in Convolutional Neural Networks

12/08/2018
by Stefanos Pertigkiozoglou, et al.

The great success of convolutional neural networks has led to their widespread use in a large variety of computer vision applications. However, these models are vulnerable to certain inputs, known as adversarial examples, which, although barely perceptible to humans, can cause a neural network to produce faulty results. This paper focuses on detecting adversarial examples crafted against convolutional neural networks that perform image classification. We propose three methods for detecting possible adversarial examples, analyze and compare their performance, and then combine their best aspects into an even more robust approach. The first method is based on regularizing the feature vector that the neural network produces as output. The second method detects adversarial examples using histograms built from the outputs of the network's hidden layers; these histograms form a feature vector that is fed to an SVM classifier, which labels the original input as either adversarial or genuine. For the third method, we introduce the concept of the residual image, which captures the parts of the input pattern that the neural network ignores; this method aims to detect possible adversarial examples by using the residual image to reinforce those ignored parts of the input. Each of these methods offers its own novelties, and combining them further improves the detection results. For the proposed methods and their combination, we present results on detecting adversarial examples on the MNIST dataset. The combination of the proposed methods offers improvements over similar state-of-the-art approaches.
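To make the second detection idea more concrete, the following is a minimal sketch of a histogram-based detector: per-layer activation histograms are concatenated into a feature vector and an SVM is trained to separate clean from adversarial inputs. The helper names, bin counts, and value range are illustrative assumptions, not the authors' implementation.

```python
# Sketch of an activation-histogram + SVM detector (assumed setup, not the paper's exact code).
import numpy as np
from sklearn.svm import SVC

def activation_histograms(hidden_outputs, bins=32, value_range=(0.0, 10.0)):
    """Concatenate normalized histograms of each hidden layer's activations."""
    feats = []
    for layer_out in hidden_outputs:             # one array per hidden layer
        hist, _ = np.histogram(layer_out.ravel(), bins=bins, range=value_range)
        feats.append(hist / max(hist.sum(), 1))  # normalize so layers are comparable
    return np.concatenate(feats)

def fit_detector(model_activations, images, labels):
    """
    model_activations(x) is assumed to return a list of hidden-layer outputs for input x.
    images: iterable of inputs; labels: 1 for adversarial, 0 for clean.
    """
    X = np.stack([activation_histograms(model_activations(x)) for x in images])
    return SVC(kernel="rbf").fit(X, labels)

# Usage (hypothetical): flag a new input as adversarial if the SVM predicts 1.
# is_adv = detector.predict([activation_histograms(model_activations(x_new))])[0]
```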


Related research

01/10/2019 · Image Transformation can make Neural Networks more robust against Adversarial Examples
11/18/2020 · Adversarial Profiles: Detecting Out-Distribution Adversarial Samples in Pre-trained CNNs
12/22/2016 · Adversarial Examples Detection in Deep Networks with Convolutional Filter Statistics
08/26/2019 · A Statistical Defense Approach for Detecting Adversarial Examples
06/15/2018 · Random depthwise signed convolutional neural networks
06/17/2020 · Adversarial Examples Detection and Analysis with Layer-wise Autoencoders
12/11/2019 · Detecting and Correcting Adversarial Images Using Image Processing Operations and Convolutional Neural Networks
