Mitigating Discrimination in Insurance with Wasserstein Barycenters

06/22/2023
by Arthur Charpentier, et al.

The insurance industry relies heavily on risk predictions based on the characteristics of potential customers. Although such models are widely used, researchers have long pointed out that these practices perpetuate discrimination based on sensitive features such as gender or race. Because this discrimination can often be traced to biases in historical data, eliminating, or at least mitigating, it is desirable. With the shift from traditional models to machine-learning-based predictions, calls for mitigation have grown anew, since simply excluding sensitive variables from the pricing process can be shown to be ineffective. In this article, we first investigate why predictions are a necessity for the industry and why correcting biases is not as straightforward as simply identifying a sensitive variable. We then propose to mitigate these biases using Wasserstein barycenters rather than simple scaling. To demonstrate the effects and effectiveness of the approach, we apply it to real data and discuss its implications.
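In the univariate case, the barycentric adjustment alluded to in the abstract amounts to mapping each prediction through its group's empirical CDF and then through the population-weighted mixture of group quantile functions. The sketch below is a minimal illustration of that idea, not the authors' implementation; the function name, the simulated premiums, and the group labels are purely hypothetical.

```python
# Minimal sketch of a 1-D Wasserstein-barycenter fairness adjustment.
# Each score is sent through its own group's empirical CDF, then through
# the weighted average of the groups' quantile functions (the barycenter).
import numpy as np

def barycenter_adjust(scores, groups):
    """Return fairness-adjusted scores via the 1-D Wasserstein barycenter.

    scores : array of model predictions (e.g. pure premiums)
    groups : array of sensitive-attribute labels, same length as scores
    """
    scores = np.asarray(scores, dtype=float)
    groups = np.asarray(groups)
    labels, counts = np.unique(groups, return_counts=True)
    weights = counts / counts.sum()          # group proportions p_s

    # Sorted scores per group define the empirical quantile functions
    sorted_by_group = {g: np.sort(scores[groups == g]) for g in labels}

    adjusted = np.empty_like(scores)
    for g in labels:
        mask = groups == g
        x = scores[mask]
        # Empirical CDF level F_g(x) of each score within its own group
        u = np.searchsorted(sorted_by_group[g], x, side="right") / mask.sum()
        u = np.clip(u, 0.0, 1.0)
        # Barycentric score: weighted average of group quantiles at level u
        adjusted[mask] = sum(
            w * np.quantile(sorted_by_group[h], u)
            for h, w in zip(labels, weights)
        )
    return adjusted

# Hypothetical usage: two groups with shifted premium distributions
rng = np.random.default_rng(0)
s = np.r_[rng.gamma(2.0, 200.0, 5000), rng.gamma(2.0, 260.0, 5000)]
g = np.r_[np.zeros(5000, int), np.ones(5000, int)]
fair = barycenter_adjust(s, g)
print(fair[g == 0].mean(), fair[g == 1].mean())  # group means now close
```

Because the map is monotone within each group, the ranking of policyholders inside a group is preserved, while the adjusted score distributions of the groups (approximately) coincide, which is the property that distinguishes this construction from a simple rescaling.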

research 12/21/2017
Interventions over Predictions: Reframing the Ethical Debate for Actuarial Risk Assessment
Actuarial risk assessments might be unduly perceived as a neutral way to...

research 06/13/2022
A Machine Learning Model for Predicting, Diagnosing, and Mitigating Health Disparities in Hospital Readmission
The management of hyperglycemia in hospitalized patients has a significa...

research 04/01/2021
fairmodels: A Flexible Tool For Bias Detection, Visualization, And Mitigation
Machine learning decision systems are getting omnipresent in our lives. ...

research 10/18/2017
Themis-ml: A Fairness-aware Machine Learning Interface for End-to-end Discrimination Discovery and Mitigation
As more industries integrate machine learning into socially sensitive de...

research 04/29/2020
Demographics Should Not Be the Reason of Toxicity: Mitigating Discrimination in Text Classifications with Instance Weighting
With the recent proliferation of the use of text classifications, resear...

research 06/07/2023
M^3Fair: Mitigating Bias in Healthcare Data through Multi-Level and Multi-Sensitive-Attribute Reweighting Method
In the data-driven artificial intelligence paradigm, models heavily rely...

research 07/25/2023
AI and ethics in insurance: a new solution to mitigate proxy discrimination in risk modeling
The development of Machine Learning is experiencing growing interest fro...
