Image Transformation can make Neural Networks more robust against Adversarial Examples

01/10/2019
by Dang Duy Thang, et al.

Neural networks are being applied to many IoT-related tasks with encouraging results. For example, neural networks can precisely detect humans, objects, and animals in surveillance camera footage for security purposes. However, neural networks have recently been found vulnerable to well-designed input samples called adversarial examples. These perturbations, imperceptible to humans, cause neural networks to misclassify their inputs. We found that rotating an adversarial example image can defeat the attack. Using MNIST digit images as the originals, we first generated adversarial examples against a neural network recognizer, which was completely fooled by the forged examples. We then rotated the adversarial images and fed them back to the recognizer, which regained correct recognition. Thus, we empirically confirmed that rotating images can protect neural-network-based pattern recognizers from adversarial example attacks.
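The pipeline described above (forge an adversarial example, then rotate it before recognition) can be sketched in a few dozen lines. The sketch below is illustrative only and makes several assumptions not stated in the abstract: it uses the Fast Gradient Sign Method (one standard way to generate adversarial examples; the paper does not name its attack), a toy linear-softmax model on synthetic 8x8 images in place of the authors' MNIST neural network, and an arbitrary 15-degree rotation implemented with nearest-neighbour sampling.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear-softmax "recognizer" standing in for the paper's MNIST network
# (hypothetical: the authors' actual recognizer is a neural network).
n_classes, n_pixels = 10, 64          # 8x8 images, flattened
W = rng.normal(size=(n_classes, n_pixels))
b = np.zeros(n_classes)

def predict(x):
    return int(np.argmax(W @ x + b))

def xent(x, label):
    """Cross-entropy loss of the recognizer on image x for the given label."""
    logits = W @ x + b
    m = logits.max()
    return float(m + np.log(np.exp(logits - m).sum()) - logits[label])

def fgsm(x, label, eps=0.25):
    """Fast Gradient Sign Method: step each pixel along the sign of dL/dx."""
    logits = W @ x + b
    p = np.exp(logits - logits.max()); p /= p.sum()
    grad_logits = p.copy(); grad_logits[label] -= 1.0   # dL/dlogits for softmax CE
    grad_x = W.T @ grad_logits                          # dL/dx, since logits = Wx + b
    return np.clip(x + eps * np.sign(grad_x), 0.0, 1.0)

def rotate(img, degrees):
    """Rotate a square image about its centre (nearest-neighbour sampling)."""
    n = img.shape[0]
    c = (n - 1) / 2.0
    t = np.deg2rad(degrees)
    out = np.zeros_like(img)
    for i in range(n):
        for j in range(n):
            # inverse-map each output pixel back into the source image
            y = c + (i - c) * np.cos(t) + (j - c) * np.sin(t)
            xs = c - (i - c) * np.sin(t) + (j - c) * np.cos(t)
            yi, xi = int(round(y)), int(round(xs))
            if 0 <= yi < n and 0 <= xi < n:
                out[i, j] = img[yi, xi]
    return out

# Attack: perturb an image so the recognizer's loss on its own label rises.
x = rng.uniform(0.0, 1.0, size=n_pixels)
y = predict(x)
x_adv = fgsm(x, y)

# Defence from the text: rotate the adversarial image before recognition.
x_def = rotate(x_adv.reshape(8, 8), 15).reshape(-1)
print(predict(x), predict(x_adv), predict(x_def))
```

For the linear model the FGSM step provably does not decrease the loss (by convexity, even after clipping to [0, 1]); whether the rotated image is then recognized correctly is exactly the empirical question the paper answers on real MNIST data.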


Related research:

- 08/05/2019: Automated Detection System for Adversarial Examples with High-Frequency Noises Sieve
- 12/08/2018: Detecting Adversarial Examples in Convolutional Neural Networks
- 10/13/2016: Assessing Threat of Adversarial Examples on Deep Neural Networks
- 12/24/2020: Exploring Adversarial Examples via Invertible Neural Networks
- 03/28/2017: Adversarial Transformation Networks: Learning to Generate Adversarial Examples
- 03/29/2018: Weakening the Detecting Capability of CNN-based Steganalysis
- 10/19/2020: Verifying the Causes of Adversarial Examples
