When No-Rejection Learning is Optimal for Regression with Rejection

07/06/2023
by Xiaocheng Li, et al.

Learning with rejection is a prototypical model for studying the interaction between humans and AI on prediction tasks. The model has two components, a predictor and a rejector. Upon the arrival of a sample, the rejector first decides whether to accept it; if accepted, the predictor fulfills the prediction task, and if rejected, the prediction is deferred to humans. The learning problem requires learning the predictor and the rejector simultaneously, which changes the structure of the conventional loss function and often causes non-convexity and inconsistency issues. For the classification with rejection problem, several works develop surrogate losses for the joint learning with provable consistency guarantees; in contrast, there has been less work on the regression counterpart. We study the regression with rejection (RwR) problem and investigate the no-rejection learning strategy, which treats the RwR problem as a standard regression task to learn the predictor. We establish that the suboptimality of no-rejection learning observed in the literature can be mitigated by enlarging the function class of the predictor. We then introduce the truncated loss to single out the learning of the predictor, and we show that a consistent surrogate property can be established for the predictor alone more easily than for the predictor and the rejector jointly. Our findings advocate a two-step learning procedure that first uses all the data to learn the predictor and then calibrates the prediction loss for the rejector. This procedure is better aligned with the common intuition that more data samples lead to a better predictor, and it calls for further effort on the design of calibration algorithms for learning the rejector. While our discussion mainly focuses on the regression problem, the theoretical results and insights generalize to the classification problem as well.
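
To make the two-step procedure concrete, below is a minimal sketch in Python. It assumes a squared prediction loss and a fixed deferral cost c, generates synthetic data, and calibrates the rejector by regressing the predictor's squared residuals on the input; the cost value, the synthetic data, the scikit-learn models, and names such as predict_with_rejection are all illustrative assumptions, not the paper's algorithm.

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
c = 0.5  # deferral cost paid whenever a sample is passed to humans (assumed value)

# Synthetic data whose noise level depends on x, so rejection is genuinely useful.
X = rng.uniform(-2.0, 2.0, size=(2000, 1))
noise_scale = 0.1 + 0.6 * (X[:, 0] > 1.0)
y = np.sin(2.0 * X[:, 0]) + rng.normal(0.0, noise_scale)

# Step 1: no-rejection learning -- fit the predictor on ALL samples,
# treating RwR as a standard regression task.
predictor = GradientBoostingRegressor().fit(X, y)

# Step 2: calibrate the rejector against the predictor's realized losses.
# Here we estimate the conditional expected squared loss by regressing the
# squared residuals on x (a held-out split would be preferable in practice)
# and defer whenever that estimate exceeds the deferral cost c.
squared_residuals = (y - predictor.predict(X)) ** 2
loss_model = GradientBoostingRegressor().fit(X, squared_residuals)

def predict_with_rejection(x):
    # Returns (predictions, accept_mask); rejected samples defer to humans.
    accept = loss_model.predict(x) <= c
    return predictor.predict(x), accept

X_test = rng.uniform(-2.0, 2.0, size=(500, 1))
preds, accepted = predict_with_rejection(X_test)
print(f"accepted {accepted.mean():.0%} of test samples")

In this illustrative setup, the ideal rejector defers exactly when the conditional expected prediction loss exceeds the cost c, which the residual-regression step above approximates.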

