Adversarial Examples on Object Recognition: A Comprehensive Survey

08/07/2020
by Alex Serban, et al.

Deep neural networks are at the forefront of machine learning research. However, despite achieving impressive performance on complex tasks, they can be very sensitive: small perturbations of the inputs can be sufficient to induce incorrect behavior. Such perturbations, called adversarial examples, are intentionally designed to test the network's sensitivity to distribution drifts. Given their surprisingly small size, a wide body of literature conjectures about why they exist and how the phenomenon can be mitigated. In this article we discuss the impact of adversarial examples on the security, safety, and robustness of neural networks. We start by introducing the hypotheses behind their existence, the methods used to construct or to protect against them, and the capacity of adversarial examples to transfer between different machine learning models. Altogether, the goal is to provide a comprehensive and self-contained survey of this growing field of research.
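To make the construction of such perturbations concrete, below is a minimal sketch of the fast gradient sign method (FGSM), one of the simplest attacks in the family this survey covers. The toy classifier, the random input, and its label are illustrative assumptions, not material from the survey.

```python
# Minimal FGSM sketch: one signed-gradient step that maximizes the loss
# under an L-infinity perturbation budget of epsilon.
# The model and data below are placeholders for illustration only.
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, epsilon=0.03):
    """Perturb x by epsilon in the direction of the loss gradient's sign."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # keep pixels in a valid range

# Illustrative usage with a toy linear classifier (a stand-in network).
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(1, 1, 28, 28)   # a fake "image"
y = torch.tensor([3])          # its (assumed) true label
x_adv = fgsm_attack(model, x, y)
print((x_adv - x).abs().max()) # the perturbation stays within epsilon
```

Despite the attack's simplicity, a step this small is often imperceptible to humans yet sufficient to flip the prediction of an undefended network.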

Related research

10/02/2018: Adversarial Examples - A Complete Characterisation of the Phenomenon
We provide a complete characterisation of the phenomenon of adversarial ...

11/01/2018: Reversible Adversarial Examples
Deep Neural Networks have recently led to significant improvement in man...

12/01/2016: Towards Robust Deep Neural Networks with BANG
Machine learning models, including state-of-the-art deep neural networks...

01/30/2019: A Simple Explanation for the Existence of Adversarial Examples with Small Hamming Distance
The existence of adversarial examples in which an imperceptible change i...

09/27/2019: Impact of Low-bitwidth Quantization on the Adversarial Robustness for Embedded Neural Networks
As the will to deploy neural networks models on embedded systems grows, ...

06/09/2022: Meet You Halfway: Explaining Deep Learning Mysteries
Deep neural networks perform exceptionally well on various learning task...

01/29/2019: Adversarial Examples Are a Natural Consequence of Test Error in Noise
Over the last few years, the phenomenon of adversarial examples --- mali...
