fairml: A Statistician's Take on Fair Machine Learning Modelling
The adoption of machine learning in applications where it is crucial to ensure fairness and accountability has led to a large number of model proposals in the literature, largely formulated as optimisation problems with constraints that reduce or eliminate the effect of sensitive attributes on the response. While this approach is very flexible from a theoretical perspective, the resulting models are somewhat black-box in nature: very little can be said about their statistical properties, about best practices in their applied use, or about how they can be extended to problems other than those they were originally designed for. Furthermore, the estimation of each model requires a bespoke implementation involving an appropriate solver, which is less than desirable from a software engineering perspective. In this paper, we describe the fairml R package, which implements our previous work (Scutari, Panero, and Proissl 2022) and related models from the literature. fairml is designed around classical statistical models (generalised linear models) and penalised regression results (ridge regression) to produce fair models that are interpretable and whose properties are well understood. The constraint used to enforce fairness is orthogonal to model estimation, making it possible to mix and match the desired model family and fairness definition for each application. Furthermore, fairml provides facilities for model estimation, model selection and validation, including diagnostic plots.
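To make the mix-and-match design concrete, the sketch below fits a fair ridge regression with the package's frrm() function, whose interface takes the response, the predictors and the sensitive attributes as separate arguments together with an unfairness bound. The simulated data, the chosen bound of 0.05 and the specific argument usage are our own illustrative assumptions, not examples taken from the paper.

    ## a minimal sketch, assuming fairml's frrm(response, predictors,
    ## sensitive, unfairness) interface; the data are simulated purely
    ## for illustration.
    library(fairml)

    n <- 500
    # sensitive attribute(s) and predictors, as data frames
    s <- data.frame(sex = factor(sample(c("F", "M"), n, replace = TRUE)))
    p <- data.frame(x1 = rnorm(n), x2 = rnorm(n))
    # a response that depends on the predictors and the sensitive attribute
    y <- 2 * p$x1 - p$x2 + (s$sex == "M") + rnorm(n)

    # fit a fair ridge regression, bounding how much of the response the
    # sensitive attribute is allowed to explain (the "unfairness" budget)
    model <- frrm(response = y, predictors = p, sensitive = s,
                  unfairness = 0.05)
    coef(model)

Under the same interface, the generalised-linear-model variant fgrrm() should allow the same example to be adapted to non-Gaussian responses, with the unfairness argument acting as the constraint bound that keeps the fairness definition separate from model estimation.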