Statistical Learning from Biased Training Samples

06/28/2019
by Pierre Laforgue, et al.

With the deluge of digitized information in the Big Data era, massive datasets are increasingly available for learning predictive models. In many situations, however, poorly controlled data acquisition processes can jeopardize the outputs of machine learning algorithms, and selection bias issues have attracted much attention in the literature. The purpose of the present article is to investigate how to extend Empirical Risk Minimization (ERM), the central paradigm of statistical learning, when the training observations are generated from biased models, i.e., from distributions different from that of the data at the test/prediction stage. Specifically, following in the footsteps of the approach originally proposed in Vardi et al. (1985), we show how to build a "nearly debiased" training statistical population from the biased samples and the corresponding biasing functions, and we study, from a non-asymptotic perspective, the performance of minimizers of an empirical version of the risk computed from the statistical population thus constructed. Remarkably, the learning rate achieved by this procedure is of the same order as that attained in the absence of any selection bias. Beyond these theoretical guarantees, illustrative experimental results supporting the relevance of the algorithmic approach promoted in this paper are also presented.
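To give a flavor of the debiasing idea, here is a minimal sketch for the simplest special case: a single biased sample whose biasing function is known exactly. The general setting of the paper (several samples, biasing functions with unknown normalizations estimated à la Vardi) is more involved; everything below, including the toy density `omega(x) = 2x` and the function names, is illustrative only.

```python
import numpy as np

def debiased_weights(x, omega):
    """Importance weights proportional to 1/omega(x), normalized to sum to one.

    If the biased sample is drawn from a density omega(x) * f(x) (up to
    normalization), reweighting each point by 1/omega(x) recovers, in
    expectation, averages under the target density f.
    """
    w = 1.0 / omega(x)
    return w / w.sum()

def weighted_erm_mean(y, w):
    """Minimizer of the weighted squared-error empirical risk: the weighted mean."""
    return float(np.dot(w, y))

# Toy example: the target distribution is uniform on [0, 1], but the
# observations are length-biased, drawn from density omega(x) = 2x.
rng = np.random.default_rng(0)
u = rng.random(100_000)
x = np.sqrt(u)  # inverse-CDF sampling from the biased density 2x on [0, 1]

w = debiased_weights(x, lambda t: 2.0 * t)
biased_mean = float(x.mean())            # concentrates around 2/3 (biased)
debiased_mean = weighted_erm_mean(x, w)  # concentrates around 1/2 (target)
```

In this simplified setting the weighted mean is just a self-normalized importance-sampling estimator; the article's contribution concerns the harder multi-sample case and the non-asymptotic risk bounds for minimizers of the reweighted empirical risk.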
