Gradients as a Measure of Uncertainty in Neural Networks

08/18/2020 ∙ by Jinsol Lee, et al.

Despite the tremendous success of modern neural networks, they are known to be overconfident even when they encounter inputs with unfamiliar conditions. Detecting such inputs is vital to preventing models from making naive predictions that may jeopardize real-world applications of neural networks. In this paper, we address the challenging problem of devising a simple yet effective measure of uncertainty in deep neural networks. Specifically, we propose to utilize backpropagated gradients to quantify the uncertainty of trained models. Gradients capture the amount of change a model would require to properly represent a given input, thus providing valuable insight into how familiar and certain the model is regarding that input. We demonstrate the effectiveness of gradients as a measure of model uncertainty in detecting unfamiliar inputs, including out-of-distribution and corrupted samples. We show that our gradient-based method outperforms state-of-the-art methods by up to 4.8% in corrupted input detection.
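The abstract describes the core idea at a high level: the magnitude of backpropagated gradients serves as a proxy for how much a trained model would have to change to accommodate an input. The snippet below is a minimal sketch of one way such a score could be computed in PyTorch; it is not the authors' exact formulation. The resnet18 backbone, the uniform reference target used to form the loss, and the aggregation of squared gradient norms over all parameters are illustrative assumptions.

```python
# Minimal sketch of a gradient-based uncertainty score (not the authors' exact
# method): backpropagate a loss against a reference target through a trained
# model and use the norm of the resulting parameter gradients as an
# unfamiliarity score. Larger gradients suggest the model would need larger
# updates to represent the input, hinting at lower familiarity/certainty.
import torch
import torch.nn.functional as F
import torchvision.models as models

def gradient_uncertainty(model, x):
    """Return a scalar gradient-norm score for a single input batch `x`."""
    model.zero_grad()
    logits = model(x)
    # Assumed reference target: a uniform distribution over classes, so the
    # loss measures how far the model's prediction is from maximum uncertainty.
    uniform = torch.full_like(logits, 1.0 / logits.shape[-1])
    loss = F.kl_div(F.log_softmax(logits, dim=-1), uniform, reduction="batchmean")
    loss.backward()
    # Aggregate squared L2 norms of the gradients across all parameters.
    score = sum((p.grad.detach() ** 2).sum()
                for p in model.parameters() if p.grad is not None)
    return score.item()

if __name__ == "__main__":
    model = models.resnet18(weights=None)  # placeholder backbone
    model.eval()
    x = torch.randn(1, 3, 224, 224)        # stand-in for a test image
    print("gradient uncertainty score:", gradient_uncertainty(model, x))
```

As one of the comments below notes, a score of this kind requires a backward pass per input, so it is noticeably more expensive than a plain forward-pass confidence score.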


Comments


Intellijoule

Good idea! Great for using AI to analyze the market. More, please.

Vaclav Kosar

Listen to the paper here: https://www.youtube.com/watch?v=cIh89oBMGZ0. It shows how to detect weird, damaged, unfamiliar, or out-of-sample images with gradients. But be warned: calculating gradients is expensive! 💰 (Gradients as a Measure of Uncertainty in Neural Networks)

