Group fairness without demographics using social networks

05/19/2023
by David Liu, et al.

Group fairness is a popular approach to prevent unfavorable treatment of individuals based on sensitive attributes such as race, gender, and disability. However, the reliance of group fairness on access to discrete group information introduces several limitations and concerns, especially with regard to privacy, intersectionality, and unforeseen biases. In this work, we propose a "group-free" measure of fairness that does not rely on sensitive attributes and, instead, is based on homophily in social networks, i.e., the common property that individuals sharing similar attributes are more likely to be connected. Our measure is group-free as it avoids recovering any form of group membership and uses only pairwise similarities between individuals to define inequality in outcomes relative to the homophily structure in the network. We theoretically justify our measure by showing it is commensurate with the notion of additive decomposability in the economic inequality literature, and we also bound the impact of non-sensitive confounding attributes. Furthermore, we apply our measure to develop fair algorithms for classification, information-access maximization, and recommendation. Our experimental results show that the proposed approach can reduce inequality among protected classes without knowledge of sensitive attribute labels. We conclude with a discussion of the limitations of our approach when applied in real-world settings.
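
The abstract does not state the measure in closed form; the sketch below is only one plausible reading of a homophily-based, pairwise inequality score and is not the authors' definition. The function names, the edge-versus-random-pair comparison, and the networkx/numpy toy example are all illustrative assumptions.

import numpy as np
import networkx as nx


def mean_gap(pairs, outcomes):
    """Mean absolute outcome gap over a collection of node pairs."""
    gaps = [abs(outcomes[u] - outcomes[v]) for u, v in pairs]
    return float(np.mean(gaps)) if gaps else 0.0


def group_free_inequality(graph, outcomes, n_samples=10_000, seed=0):
    """Compare outcome gaps along homophilous ties to gaps over random pairs.

    Under homophily, connected individuals tend to share sensitive attributes,
    so outcomes that cluster along edges (edge gap much smaller than the
    random-pair gap) suggest that outcomes track latent group structure.
    Values near 1 suggest outcomes vary independently of the network.
    """
    rng = np.random.default_rng(seed)
    nodes = list(graph.nodes())
    random_pairs = [
        (nodes[i], nodes[j])
        for i, j in rng.integers(0, len(nodes), size=(n_samples, 2))
        if i != j
    ]
    edge_gap = mean_gap(graph.edges(), outcomes)
    baseline_gap = mean_gap(random_pairs, outcomes)
    return edge_gap / baseline_gap if baseline_gap > 0 else 1.0


if __name__ == "__main__":
    # Toy example: two homophilous communities with systematically different outcomes.
    g = nx.planted_partition_graph(2, 50, p_in=0.2, p_out=0.01, seed=0)
    rng = np.random.default_rng(1)
    scores = {v: (0.9 if v < 50 else 0.2) + 0.05 * rng.standard_normal() for v in g.nodes()}
    print(round(group_free_inequality(g, scores), 3))  # well below 1: disparity signal

As a usage note, the ratio only needs the network and the per-individual outcomes, never the sensitive labels themselves, which is the property the paper's group-free framing emphasizes.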

research
07/10/2022

On Graph Neural Network Fairness in the Presence of Heterophilous Neighborhoods

We study the task of node classification for graph neural networks (GNNs...
research
07/22/2018

An Intersectional Definition of Fairness

We introduce a measure of fairness for algorithms and data with regard t...
research
05/24/2021

MultiFair: Multi-Group Fairness in Machine Learning

Algorithmic fairness is becoming increasingly important in data mining a...
research
03/18/2019

Multi-Differential Fairness Auditor for Black Box Classifiers

Machine learning algorithms are increasingly involved in sensitive decis...
research
11/28/2018

Racial categories in machine learning

Controversies around race and machine learning have sparked debate among...
research
05/05/2020

Can gender inequality be created without inter-group discrimination?

Understanding human societies requires knowing how they develop gender h...
research
10/14/2020

Causal Multi-Level Fairness

Algorithmic systems are known to impact marginalized groups severely, an...
