Deep Kernel Learning for Mortality Prediction in the Face of Temporal Shift

12/01/2022
by Miguel Rios, et al.

Neural models, with their ability to provide novel representations, have shown promising results in prediction tasks in healthcare. However, patient demographics, medical technology, and quality of care change over time. This often leads to a drop in the performance of neural models for prospective patients, especially in terms of their calibration. The deep kernel learning (DKL) framework may be robust to such changes, as it combines neural models with Gaussian processes, which are aware of prediction uncertainty. Our hypothesis is that out-of-distribution test points will result in probabilities closer to the global mean, thus preventing overconfident predictions. This, we hypothesise, will in turn result in better calibration on prospective data. This paper investigates DKL's behaviour when facing a temporal shift, which was naturally introduced when an information system that feeds a cohort database was changed. We compare DKL's performance to that of a neural baseline based on recurrent neural networks. We show that DKL indeed produced better-calibrated predictions. We also confirm that DKL's predictions were indeed less sharp. Moreover, DKL's discrimination ability even improved: its AUC was 0.746 (± 0.014 std), compared to 0.739 (± 0.028 std) for the baseline. The paper demonstrates the importance of including uncertainty in neural models, especially for their prospective use.
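
As an illustrative sketch only (not the authors' code), the DKL setup can be expressed with GPyTorch: a neural feature extractor feeds a sparse variational GP with a Bernoulli likelihood, so the kernel operates on learned features and inputs far from the training data revert towards the GP prior mean rather than receiving overconfident probabilities. The MLP encoder, layer sizes, and inducing-point count below are assumptions made for brevity; the paper pairs the GP with a recurrent encoder over patient time series.

import torch
import gpytorch


class FeatureExtractor(torch.nn.Sequential):
    # Stand-in for the paper's recurrent encoder: maps a patient
    # representation to a low-dimensional feature space for the kernel.
    def __init__(self, input_dim, feature_dim=8):
        super().__init__(
            torch.nn.Linear(input_dim, 64),
            torch.nn.ReLU(),
            torch.nn.Linear(64, feature_dim),
        )


class GPLayer(gpytorch.models.ApproximateGP):
    # Sparse variational GP defined over the learned features.
    def __init__(self, inducing_points):
        variational_distribution = gpytorch.variational.CholeskyVariationalDistribution(
            inducing_points.size(0))
        variational_strategy = gpytorch.variational.VariationalStrategy(
            self, inducing_points, variational_distribution,
            learn_inducing_locations=True)
        super().__init__(variational_strategy)
        self.mean_module = gpytorch.means.ConstantMean()
        self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())

    def forward(self, z):
        return gpytorch.distributions.MultivariateNormal(
            self.mean_module(z), self.covar_module(z))


class DKLClassifier(gpytorch.Module):
    # Deep kernel learning for binary mortality prediction: the GP kernel
    # acts on neural features, and a Bernoulli likelihood maps the latent
    # GP output to a mortality probability.
    def __init__(self, input_dim, feature_dim=8, num_inducing=64):
        super().__init__()
        self.feature_extractor = FeatureExtractor(input_dim, feature_dim)
        self.gp_layer = GPLayer(torch.randn(num_inducing, feature_dim))
        self.likelihood = gpytorch.likelihoods.BernoulliLikelihood()

    def forward(self, x):
        # The GP sees learned features rather than raw inputs.
        return self.gp_layer(self.feature_extractor(x))

Training would jointly optimise the encoder and GP (hyper)parameters by maximising a variational ELBO (gpytorch.mlls.VariationalELBO), and predicted mortality probabilities would come from passing the latent GP output through the Bernoulli likelihood.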
