When Personalization Harms: Reconsidering the Use of Group Attributes in Prediction

06/04/2022
by Vinith M. Suriyakumar, et al.

The standard approach to personalization in machine learning consists of training a model with group attributes like sex, age group, and blood type. In this work, we show that this approach to personalization fails to improve performance for all groups who provide personal data. We discuss how this effect inflicts harm in applications where models assign predictions on the basis of group membership. We propose collective preference guarantees to ensure the fair use of group attributes in prediction, and characterize how common approaches to personalization violate fair use due to failures in model development and deployment. We conduct a comprehensive empirical study of personalization in clinical prediction models. Our results highlight the prevalence of fair use violations, demonstrate actionable interventions to mitigate harm, and underscore the need to measure the gains of personalization for all groups who provide personal data.
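To make the core check concrete, here is a minimal sketch of how one might measure the per-group gains of personalization: fit one model without the group attribute and one with it, then compare performance within each group. This is an illustration under stated assumptions, not the paper's actual protocol; the function names (`check_personalization_gains`, `per_group_auc`), the logistic regression learner, and the AUC metric are all hypothetical choices.

```python
# Sketch (illustrative, not the paper's method): compare per-group test AUC
# of a model trained without vs. with a group attribute as a feature.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split


def per_group_auc(model, X, y, groups):
    """Test AUC of `model` computed separately within each group.

    Assumes every group's slice contains both outcome classes."""
    scores = model.predict_proba(X)[:, 1]
    return {g: roc_auc_score(y[groups == g], scores[groups == g])
            for g in np.unique(groups)}


def check_personalization_gains(X, y, groups, seed=0):
    """Return, per group, the AUC gain from adding the group attribute."""
    X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
        X, y, groups, test_size=0.3, random_state=seed, stratify=groups)

    vocab = np.unique(groups)

    def with_group(X_part, g_part):
        # Append one-hot group indicators as extra features.
        onehot = (g_part[:, None] == vocab[None, :]).astype(float)
        return np.hstack([X_part, onehot])

    # Generic model: ignores the group attribute entirely.
    generic = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    # Personalized model: same learner, group attribute included.
    personalized = LogisticRegression(max_iter=1000).fit(
        with_group(X_tr, g_tr), y_tr)

    auc_generic = per_group_auc(generic, X_te, y_te, g_te)
    auc_personal = per_group_auc(
        personalized, with_group(X_te, g_te), y_te, g_te)

    # Non-positive gains flag groups that provide personal data
    # without receiving better predictions in return.
    return {g: auc_personal[g] - auc_generic[g] for g in vocab}
```

Under this reading, a non-positive entry in the returned dictionary corresponds to the kind of fair use violation described above: a group supplies a personal attribute yet gains no predictive benefit from it.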


