Security Matters: A Survey on Adversarial Machine Learning

10/16/2018
by Guofu Li, et al.

Adversarial machine learning is a fast-growing research area that considers scenarios in which machine learning systems face adversarial attackers who intentionally synthesize input data to make a well-trained model produce mistakes. It always involves a defending side, usually a classifier, and an attacking side that aims to cause incorrect output. The earliest studies of adversarial learning originate in the information security area, which considers a wide variety of possible attacks. However, the recent research focus popularized by the deep learning community places strong emphasis on how "imperceptible" perturbations of normal inputs can cause dramatic mistakes in deep learning models with supposedly super-human accuracy. This paper gives a comprehensive introduction to a wide range of aspects of adversarial deep learning, including its foundations, typical attacking and defending strategies, and some extended studies. We also share our view on the root cause of its existence and possible future directions of this research field.
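The perturbation-based attack described in the abstract can be illustrated with a minimal sketch of the fast gradient sign method (FGSM), one of the best-known attacks of this kind. This is an illustrative assumption for context, not the survey's own method; the model, input, label, and perturbation budget epsilon below are all hypothetical.

# Minimal FGSM-style sketch (illustrative; not taken from the survey).
# Assumes a trained PyTorch classifier `model`, an input batch `x`,
# integer labels `y`, and a small L-infinity budget `epsilon`.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.03):
    """Return an adversarial copy of x within an L-infinity ball of radius epsilon."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss: one sign step per pixel.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    # Keep pixel values in a valid range (assumes inputs normalized to [0, 1]).
    return x_adv.clamp(0.0, 1.0).detach()

Because the per-pixel change is bounded by epsilon, the perturbed input typically looks identical to a human observer while the classifier's prediction flips, which is the "imperceptible perturbation" phenomenon the survey emphasizes.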
