Debiasing Credit Scoring using Evolutionary Algorithms

10/25/2021
by   Nigel Kingsman, et al.

This paper investigates the application of machine learning when training a credit decision model over real, publicly available data whilst accounting for "bias objectives". We use the term "bias objective" to describe the requirement that a trained model's discriminatory bias against a given group of individuals does not exceed a prescribed level, where that level can be zero. This research presents an empirical study examining the tension between competing model training objectives, which in all cases include one or more bias objectives. The work is motivated by the observation that the parties associated with creditworthiness models have requirements that cannot, in general, all be fully met simultaneously. The research herein seeks to highlight the impracticality of satisfying all parties' objectives, demonstrating the need for "trade-offs" to be made. The results and conclusions presented by this paper are of particular importance for all stakeholders within the credit scoring industry who rely upon artificial intelligence (AI) models as part of the decision-making process when determining the creditworthiness of individuals. This paper provides an exposition of the difficulty of training AI models that are able to simultaneously satisfy multiple bias objectives whilst maintaining acceptable levels of accuracy. Stakeholders should be aware of this difficulty and should acknowledge that some degree of discriminatory bias, across a number of protected characteristics and formulations of bias, cannot be avoided.
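To make the notion of a "bias objective" concrete, here is a minimal sketch of how such a constraint might be checked. The metric used (demographic parity difference between two groups' approval rates) and the threshold are illustrative assumptions, not the paper's specific formulation of bias:

```python
def demographic_parity_difference(decisions, group):
    """Absolute difference in approval rates between two groups.

    decisions: list of 0/1 credit decisions (1 = approved).
    group: list of 0/1 protected-group labels, aligned with decisions.
    """
    rates = []
    for g in (0, 1):
        members = [d for d, m in zip(decisions, group) if m == g]
        rates.append(sum(members) / len(members))
    return abs(rates[0] - rates[1])


def satisfies_bias_objective(decisions, group, max_bias):
    """A bias objective: measured bias must not exceed a prescribed
    level (which may be zero)."""
    return demographic_parity_difference(decisions, group) <= max_bias


# Hypothetical decisions: group 0 approved at 3/4, group 1 at 1/4.
decisions = [1, 1, 0, 1, 0, 0, 1, 0]
group     = [0, 0, 0, 0, 1, 1, 1, 1]
bias = demographic_parity_difference(decisions, group)  # 0.5
```

A model trainer would typically evaluate several such objectives at once (one per protected characteristic and bias formulation), which is where the tensions the paper describes arise.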
