Is Deep Learning Safe for Robot Vision? Adversarial Examples against the iCub Humanoid

08/23/2017
by Marco Melis, et al.

Deep neural networks have been widely adopted in recent years, exhibiting impressive performance in several application domains. It has however been shown that they can be fooled by adversarial examples, i.e., images altered by a barely-perceivable adversarial noise, carefully crafted to mislead classification. In this work, we aim to evaluate the extent to which robot-vision systems embodying deep-learning algorithms are vulnerable to adversarial examples, and propose a computationally efficient countermeasure to mitigate this threat, based on rejecting classification of anomalous inputs. We then provide a clearer understanding of the safety properties of deep networks through an intuitive empirical analysis, showing that the mapping learned by such networks essentially violates the smoothness assumption of learning algorithms. We finally discuss the main limitations of this work, including the creation of real-world adversarial examples, and sketch promising research directions.
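The rejection-based countermeasure described above can be illustrated with a minimal sketch: classify an input by its nearest class centroid in some feature space, but reject it as anomalous when it lies too far from every class. This is an illustrative simplification, not the paper's exact method; the function names, the centroid-based distance, and the threshold are all assumptions made for the example.

```python
import numpy as np

def fit_centroids(features, labels):
    """Compute the mean feature vector (centroid) of each class."""
    classes = np.unique(labels)
    return {c: features[labels == c].mean(axis=0) for c in classes}

def classify_with_reject(x, centroids, threshold):
    """Assign x to the class of the nearest centroid, or return None
    (reject) when even the nearest centroid is farther than `threshold`,
    treating the input as anomalous (e.g., possibly adversarial)."""
    dists = {c: np.linalg.norm(x - mu) for c, mu in centroids.items()}
    best = min(dists, key=dists.get)
    if dists[best] > threshold:
        return None  # reject classification of anomalous input
    return best
```

Inputs near a learned class cluster are classified normally, while points far from all clusters (as adversarial examples often are in feature space) are rejected rather than forced into a class; the threshold trades off false rejections of legitimate inputs against missed detections.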


Related research:

- Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning (12/08/2017)
- Assessing Threat of Adversarial Examples on Deep Neural Networks (10/13/2016)
- On Robustness to Adversarial Examples and Polynomial Optimization (11/12/2019)
- Defending from adversarial examples with a two-stream architecture (12/30/2019)
- Deep Neural Rejection against Adversarial Examples (10/01/2019)
- Detecting Adversarial Examples through Nonlinear Dimensionality Reduction (04/30/2019)
- On the Similarity of Deep Learning Representations Across Didactic and Adversarial Examples (02/17/2020)
