The Vulnerability of the Neural Networks Against Adversarial Examples in Deep Learning Algorithms

11/02/2020
by Rui Zhao, et al.

With further development in fields such as computer vision, network security, and natural language processing, deep learning technology has gradually exposed certain security risks. Existing deep learning algorithms cannot effectively capture the essential characteristics of data, leaving them unable to produce correct results in the face of malicious input. Starting from the security threats currently facing deep learning, this paper introduces the problem of adversarial examples, surveys the existing black-box and white-box attack and defense methods, and classifies them. It briefly describes recent applications of adversarial examples in different scenarios, compares several defense techniques against adversarial examples, and finally summarizes the open problems in this research field and the prospects for its future development. The paper introduces the common white-box attack methods in detail and compares the similarities and differences between black-box and white-box attacks. Correspondingly, the authors also introduce the defense methods and analyze how these methods perform against black-box and white-box attacks.
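To illustrate the kind of white-box attack the survey covers, the sketch below implements the classic fast gradient sign method (FGSM) on a toy logistic-regression model. This is a generic, minimal illustration, not code from the paper: the model, weights, and `fgsm_perturb` helper are all hypothetical, and the key idea is simply that a white-box attacker with access to gradients adds `eps * sign(dLoss/dx)` to the input.

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps):
    """One-step FGSM against a logistic-regression classifier.

    For binary cross-entropy loss, the gradient of the loss with
    respect to the input x is (sigmoid(w.x + b) - y) * w, so the
    attack nudges each input feature by eps in the direction that
    increases the loss.
    """
    p = 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))  # model probability of class 1
    grad_x = (p - y) * w                           # dLoss/dx for cross-entropy
    return x + eps * np.sign(grad_x)               # signed perturbation of size eps

# Toy example: a point the model correctly scores as class 1
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])                 # w.x + b = 1.5 > 0  ->  class 1
x_adv = fgsm_perturb(x, w, b, y=1.0, eps=1.0)
adv_score = np.dot(w, x_adv) + b         # now -1.5 < 0  ->  misclassified as class 0
```

Even this two-feature example shows the core weakness the survey describes: a small, gradient-aligned perturbation flips the model's decision, because the learned decision boundary does not reflect the essential characteristics of the data.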

