Increasing the Confidence of Deep Neural Networks by Coverage Analysis

01/28/2021
by Giulio Rossolini, et al.

The strong performance of machine learning algorithms and deep neural networks in several perception and control tasks is pushing the industry to adopt such technologies in safety-critical applications, such as autonomous robots and self-driving vehicles. At present, however, several issues need to be solved to make deep learning methods more trustworthy, predictable, safe, and secure against adversarial attacks. Although several methods have been proposed to improve the trustworthiness of deep neural networks, most of them are tailored to specific classes of adversarial examples and therefore fail to detect other corner cases or unsafe inputs that deviate heavily from the training samples. This paper presents a lightweight monitoring architecture based on coverage paradigms to enhance model robustness against different unsafe inputs. In particular, four coverage analysis methods are proposed and tested in the architecture to evaluate multiple detection logics. Experimental results show that the proposed approach is effective in detecting both powerful adversarial examples and out-of-distribution inputs, while introducing limited extra execution time and memory requirements.
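To make the coverage idea concrete, the sketch below shows one simple form of activation-coverage monitoring in PyTorch: per-neuron activation ranges are recorded on trusted training data, and at inference time an input is flagged as suspicious when a large fraction of neurons fall outside those calibrated ranges. This is an illustrative assumption about how a coverage-based runtime monitor can be wired up, not the four coverage methods proposed in the paper; the class name ActivationRangeMonitor and the scoring rule are hypothetical.

```python
# Minimal sketch of an activation-range coverage monitor (illustrative only,
# not the paper's exact detection logic).
import torch
import torch.nn as nn


class ActivationRangeMonitor:
    """Records per-neuron activation ranges of a chosen layer on trusted data
    and scores new inputs by how many neurons fall outside those ranges."""

    def __init__(self, model: nn.Module, layer: nn.Module):
        self.model = model
        self.low = None    # per-neuron minimum seen during calibration
        self.high = None   # per-neuron maximum seen during calibration
        self._acts = None
        layer.register_forward_hook(self._hook)

    def _hook(self, module, inputs, output):
        # Flatten to (batch, neurons) so the monitor works for conv or fc layers.
        self._acts = output.detach().flatten(start_dim=1)

    @torch.no_grad()
    def calibrate(self, loader):
        # Pass trusted (training) data through the model and track activation ranges.
        for x, _ in loader:
            self.model(x)
            batch_low = self._acts.min(dim=0).values
            batch_high = self._acts.max(dim=0).values
            self.low = batch_low if self.low is None else torch.minimum(self.low, batch_low)
            self.high = batch_high if self.high is None else torch.maximum(self.high, batch_high)

    @torch.no_grad()
    def out_of_range_fraction(self, x):
        # Fraction of neurons outside the calibrated range; higher means more suspicious.
        self.model(x)
        outside = (self._acts < self.low) | (self._acts > self.high)
        return outside.float().mean(dim=1)  # one score per input in the batch
```

In practice such a monitor would be calibrated once on the training set (e.g. monitor.calibrate(train_loader)), and a threshold on out_of_range_fraction would be tuned on held-out data to trade false alarms against missed detections of adversarial or out-of-distribution inputs.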

Related research

Adversarial Examples: Attacks and Defenses for Deep Learning (12/19/2017)
  With rapid progress and great successes in a wide spectrum of applicatio...

Detecting Adversarial Examples in Batches – a geometrical approach (06/17/2022)
  Many deep learning methods have successfully solved complex tasks in com...

Towards Evaluating the Robustness of Neural Networks (08/16/2016)
  Neural networks provide state-of-the-art results for most machine learni...

Out-of-Distribution Example Detection in Deep Neural Networks using Distance to Modelled Embedding (08/24/2021)
  Adoption of deep learning in safety-critical systems raise the need for ...

Identifying Untrustworthy Predictions in Neural Networks by Geometric Gradient Analysis (02/24/2021)
  The susceptibility of deep neural networks to untrustworthy predictions,...

p-DkNN: Out-of-Distribution Detection Through Statistical Testing of Deep Representations (07/25/2022)
  The lack of well-calibrated confidence estimates makes neural networks i...

Sardino: Ultra-Fast Dynamic Ensemble for Secure Visual Sensing at Mobile Edge (04/18/2022)
  Adversarial example attack endangers the mobile edge systems such as veh...
