Two-stage Modeling for Prediction with Confidence

09/19/2022
by Dangxing Chen, et al.

The use of neural networks has been very successful in a wide variety of applications. However, it has recently been observed that neural networks generalize poorly under distributional shift. Several efforts have been made to identify potential out-of-distribution inputs. Although the existing literature has made significant progress on image and textual data, finance has been overlooked. The aim of this paper is to investigate distribution shift in the credit scoring problem, one of the most important applications in finance. To address the potential distribution shift, we propose a novel two-stage model. Using an out-of-distribution detection method, the data are first separated into confident and unconfident sets. In the second stage, we utilize domain knowledge with a mean-variance optimization to provide reliable bounds for the unconfident samples. Empirical results demonstrate that our model offers reliable predictions for the vast majority of the dataset. Only a small portion of the dataset is inherently difficult to judge, and we leave those samples to human judgment. With the two-stage model, highly confident predictions are made and the potential risks associated with the model are significantly reduced.
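To make the two-stage idea concrete, here is a minimal sketch in Python. It assumes a simple linear scorer, a Mahalanobis-style distance as the out-of-distribution detector, and a neighborhood-based mean-variance interval for unconfident samples; all of these choices (names such as `ood_score`, `mean_var_bounds`, and the threshold `tau`) are illustrative assumptions, not the authors' actual implementation.

```python
# Hedged sketch of a two-stage "predict with confidence" pipeline.
import numpy as np

rng = np.random.default_rng(0)

# A toy credit-scoring model: a fixed linear scorer (stand-in for a trained network).
w = np.array([0.8, -0.5, 0.3])
def predict(X):
    return X @ w

# Stage 1: split samples into confident / unconfident sets with an OOD score.
# Here we use the Mahalanobis distance to the training distribution;
# the paper's detector may differ.
X_train = rng.normal(size=(500, 3))
mu = X_train.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(X_train, rowvar=False))

def ood_score(X):
    d = X - mu
    return np.einsum("ij,jk,ik->i", d, cov_inv, d)

# Stage 2: for unconfident samples, return an interval built from the mean and
# variance of predictions over in-distribution neighbours -- a hedged stand-in
# for the paper's domain-knowledge-based mean-variance optimization.
def mean_var_bounds(x, k=50, alpha=2.0):
    dists = np.linalg.norm(X_train - x, axis=1)
    preds = predict(X_train[np.argsort(dists)[:k]])
    return preds.mean() - alpha * preds.std(), preds.mean() + alpha * preds.std()

def two_stage(X, tau=7.81):  # tau ~ 95% chi-square quantile for 3 features (assumption)
    results = []
    for x, s in zip(X, ood_score(X)):
        if s <= tau:
            results.append(("confident", predict(x)))       # point prediction
        else:
            results.append(("unconfident", mean_var_bounds(x)))  # reliable bounds
    return results

# In-distribution and shifted test points.
X_test = np.vstack([rng.normal(size=(3, 3)), rng.normal(loc=5.0, size=(2, 3))])
for tag, value in two_stage(X_test):
    print(tag, value)
```

In this sketch the shifted points fail the OOD check and receive interval outputs rather than point scores; samples whose intervals are too wide to act on would be the ones deferred to human judgment.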


