Remember to correct the bias when using deep learning for regression!

03/30/2022
by Christian Igel, et al.

When training deep learning models for least-squares regression, we cannot expect the training error residuals of the final model, selected after a fixed training time or based on performance on a hold-out data set, to sum to zero. This can introduce a systematic error that accumulates when we are interested in the total aggregated performance over many data points. We suggest adjusting the bias of the machine learning model after training as a default postprocessing step, which efficiently solves the problem. The severity of the error accumulation and the effectiveness of the bias correction are demonstrated in exemplary experiments.
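The correction described above can be sketched in a few lines: after training, shift the model's output bias by the mean training residual so that the residuals sum to zero on the training set. The following is a minimal illustration, where the linear predictor, its weights, and the data are hypothetical stand-ins for a trained deep network (the paper itself does not prescribe this exact code):

```python
import numpy as np

# Illustrative data: targets generated by a linear model plus noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.5, -2.0, 0.5]) + 3.0 + rng.normal(scale=0.1, size=200)

# Stand-ins for parameters after early stopping / model selection;
# the learned bias b is slightly off, so residuals do not sum to zero.
w = np.array([1.4, -1.9, 0.6])
b = 2.0

def predict(X, w, b):
    return X @ w + b

# Bias correction as a postprocessing step: add the mean training
# residual to the output bias.
residuals = y - predict(X, w, b)
b_corrected = b + residuals.mean()

# After the correction, the training residuals sum to (numerically) zero,
# so the systematic error no longer accumulates over many predictions.
corrected_residuals = y - predict(X, w, b_corrected)
print(abs(corrected_residuals.sum()))
```

For a deep network, the same idea applies by adding the mean training residual to the bias of the final output unit, which leaves all other parameters untouched and costs only one pass over the training data.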

