Why have a Unified Predictive Uncertainty? Disentangling it using Deep Split Ensembles

09/25/2020
by Utkarsh Sarawgi, et al.

Understanding and quantifying uncertainty in black-box Neural Networks (NNs) is critical when they are deployed in real-world settings such as healthcare. Recent works using Bayesian and non-Bayesian methods have shown how a unified predictive uncertainty can be modelled for NNs. Decomposing this uncertainty to disentangle the granular sources of heteroscedasticity in the data provides rich information about its underlying causes. We propose a conceptually simple non-Bayesian approach, deep split ensemble, to disentangle the predictive uncertainties using a multivariate Gaussian mixture model. The NNs are trained with clusters of input features, yielding uncertainty estimates per cluster. We evaluate our approach on a series of benchmark regression datasets and compare it with unified uncertainty methods. Extensive analyses using dataset shifts and the empirical rule highlight our inherently well-calibrated models. Our work further demonstrates its applicability in a multi-modal setting using a benchmark Alzheimer's dataset, and also shows how deep split ensembles can highlight hidden modality-specific biases. The minimal changes required to NNs and the training procedure, together with the high flexibility to group features into clusters, make the approach readily deployable and useful. The source code is available at https://github.com/wazeerzulfikar/deep-split-ensembles.
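The abstract outlines the core recipe: cluster the input features, train one heteroscedastic regression head per cluster, and read the per-cluster Gaussians as disentangled uncertainty estimates. The sketch below is a minimal illustration of that idea, assuming PyTorch and SciPy; the names (GaussianHead, cluster_features, train_split_ensemble) and the correlation-based feature clustering are assumptions of this sketch, not the authors' implementation (see the linked repository for that).

```python
import numpy as np
import torch
import torch.nn as nn
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform


class GaussianHead(nn.Module):
    """Small regressor that predicts a mean and a log-variance for one feature cluster."""

    def __init__(self, in_dim, hidden=32):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, 1)
        self.log_var = nn.Linear(hidden, 1)

    def forward(self, x):
        h = self.body(x)
        return self.mu(h), self.log_var(h)


def cluster_features(X, n_clusters=3):
    """Group correlated input features so that each cluster gets its own network."""
    corr = np.corrcoef(X, rowvar=False)
    dist = 1.0 - np.abs(corr)  # highly correlated features end up close together
    Z = linkage(squareform(dist, checks=False), method="average")
    labels = fcluster(Z, n_clusters, criterion="maxclust")
    return [np.where(labels == c)[0].tolist() for c in np.unique(labels)]


def gaussian_nll(y, mu, log_var):
    """Heteroscedastic Gaussian negative log-likelihood (up to an additive constant)."""
    return 0.5 * (log_var + (y - mu) ** 2 / log_var.exp()).mean()


def train_split_ensemble(X, y, clusters, epochs=200, lr=1e-3):
    """Train one Gaussian head per feature cluster and return the trained heads."""
    X_t = torch.tensor(X, dtype=torch.float32)
    y_t = torch.tensor(y, dtype=torch.float32).view(-1, 1)
    heads = [GaussianHead(len(idx)) for idx in clusters]
    opt = torch.optim.Adam([p for h in heads for p in h.parameters()], lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = sum(gaussian_nll(y_t, *h(X_t[:, idx])) for h, idx in zip(heads, clusters))
        loss.backward()
        opt.step()
    return heads


# Usage on synthetic data: three feature clusters, each with its own uncertainty estimate.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(256, 9))
    y = X[:, :3].sum(axis=1) + 0.1 * rng.normal(size=256)
    clusters = cluster_features(X, n_clusters=3)
    heads = train_split_ensemble(X, y, clusters)
    for idx, head in zip(clusters, heads):
        mu, log_var = head(torch.tensor(X, dtype=torch.float32)[:, idx])
        print(f"cluster {idx}: mean predicted variance = {log_var.exp().mean().item():.3f}")
```

At test time, each head contributes a cluster-specific mean and variance; these can be inspected individually to attribute heteroscedastic uncertainty to feature groups, or combined as a Gaussian mixture for an overall predictive distribution.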


Related research

10/03/2020 · Uncertainty-Aware Multi-Modal Ensembling for Severity Prediction of Alzheimer's Dementia
Reliability in Neural Networks (NNs) is crucial in safety-critical appli...

11/19/2020 · Robustness to Missing Features using Hierarchical Clustering with Split Neural Networks
The problem of missing data has been persistent for a long time and pose...

06/21/2020 · Learned Uncertainty-Aware (LUNA) Bases for Bayesian Regression using Multi-Headed Auxiliary Networks
Neural Linear Models (NLM) are deep models that produce predictive uncer...

04/21/2021 · Uncertainty-Aware Boosted Ensembling in Multi-Modal Settings
Reliability of machine learning (ML) systems is crucial in safety-critic...

04/08/2023 · Deep Anti-Regularized Ensembles provide reliable out-of-distribution uncertainty quantification
We consider the problem of uncertainty quantification in high dimensiona...

05/29/2021 · Greedy Bayesian Posterior Approximation with Deep Ensembles
Ensembles of independently trained neural networks are a state-of-the-ar...

10/18/2022 · Disentangling the Predictive Variance of Deep Ensembles through the Neural Tangent Kernel
Identifying unfamiliar inputs, also known as out-of-distribution (OOD) d...
