
Detecting OODs as datapoints with High Uncertainty

by Ramneet Kaur, et al.
University of Pennsylvania, SRI International

Deep neural networks (DNNs) are known to produce incorrect predictions with very high confidence on out-of-distribution inputs (OODs). This limitation is one of the key challenges in the adoption of DNNs in high-assurance systems such as autonomous driving, air traffic management, and medical diagnosis. This challenge has received significant attention recently, and several techniques have been developed to detect inputs where the model's prediction cannot be trusted. These techniques detect OODs as datapoints with either high epistemic uncertainty or high aleatoric uncertainty. We demonstrate the difference in the detection ability of these techniques and propose an ensemble approach for detection of OODs as datapoints with high uncertainty (epistemic or aleatoric). We perform experiments on vision datasets with multiple DNN architectures, achieving state-of-the-art results in most cases.
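The ensemble idea described above can be illustrated with a minimal sketch. The abstract does not specify which detectors are combined, so this example assumes one common choice for each uncertainty type: Monte-Carlo dropout samples, decomposing predictive entropy into an aleatoric part (expected per-pass entropy) and an epistemic part (mutual information, as in BALD), and flagging an input as OOD if either score exceeds its threshold. The function names and thresholds are illustrative, not the authors' implementation.

```python
import numpy as np

def softmax(logits, axis=-1):
    # Numerically stable softmax over the class axis.
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def uncertainty_scores(mc_logits):
    """Decompose predictive uncertainty from stochastic forward passes.

    mc_logits: array of shape (T, N, C) -- T MC-dropout passes,
    N inputs, C classes. Returns (aleatoric, epistemic), each (N,).
    """
    probs = softmax(mc_logits)                                     # (T, N, C)
    mean_p = probs.mean(axis=0)                                    # (N, C)
    # Total (predictive) entropy of the averaged distribution.
    total = -(mean_p * np.log(mean_p + 1e-12)).sum(axis=-1)
    # Aleatoric: expected entropy of the individual passes.
    aleatoric = -(probs * np.log(probs + 1e-12)).sum(axis=-1).mean(axis=0)
    # Epistemic: mutual information = total minus aleatoric (BALD).
    epistemic = total - aleatoric
    return aleatoric, epistemic

def ensemble_ood_flag(mc_logits, tau_aleatoric, tau_epistemic):
    """Flag an input as OOD if EITHER uncertainty exceeds its threshold."""
    alea, epi = uncertainty_scores(mc_logits)
    return (alea > tau_aleatoric) | (epi > tau_epistemic)
```

A datapoint whose stochastic passes agree on a confident prediction yields low scores on both axes, while passes that disagree drive the epistemic term up even when each individual prediction is confident, which is why the disjunctive ensemble catches inputs that a single detector would miss.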
