Techniques for Adversarial Examples Threatening the Safety of Artificial Intelligence Based Systems

09/29/2019 ∙ by Utku Kose, et al.

Artificial Intelligence is widely regarded as the most influential technological field shaping the future of the world, and intelligent systems are already in intense use across all areas of life. Although the advantages of Artificial Intelligence are widely observed, there is also a dark side: efforts to design hacking-oriented techniques against Artificial Intelligence. With such techniques, it is possible to trick intelligent systems into producing deliberately wrong outputs. This is also critical for the cyber wars of the future, since it is predicted that those wars will be fought by unmanned, autonomous intelligent systems. Accordingly, the objective of this study is to provide information on adversarial examples threatening Artificial Intelligence and to focus on the details of some techniques used for creating them. Adversarial examples are known as training data that can trick a Machine Learning technique into learning the target problem incorrectly, resulting in an unsuccessful or maliciously directed intelligent system. The study enables readers to learn in detail about recent techniques for creating adversarial examples.
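To make the idea of tricking a model concrete, the following is a minimal sketch of one well-known technique for creating adversarial examples, the Fast Gradient Sign Method (FGSM), which perturbs an input along the sign of the loss gradient. The tiny logistic-regression model, its weights, and the epsilon value are illustrative assumptions for this sketch, not details taken from the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """FGSM sketch: push input x toward misclassification by a
    logistic-regression model (w, b), stepping along the sign of the
    input-gradient of the cross-entropy loss."""
    p = sigmoid(np.dot(w, x) + b)      # model's predicted probability of class 1
    grad_x = (p - y) * w               # d(loss)/dx for cross-entropy loss
    return x + eps * np.sign(grad_x)   # FGSM step of size eps per dimension

# Toy example (assumed values): a correctly classified point is pushed
# across the decision boundary by a small, structured perturbation.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([0.3, 0.1])               # model scores this as class 1
y = 1.0
x_adv = fgsm_perturb(x, y, w, b, eps=0.5)
print(sigmoid(np.dot(w, x) + b) > 0.5)       # original input: classified as class 1
print(sigmoid(np.dot(w, x_adv) + b) > 0.5)   # adversarial input: no longer class 1
```

In practice the same one-step idea is applied to deep networks, where the input gradient is obtained by backpropagation rather than in closed form.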
