FairMod: Fair Link Prediction and Recommendation via Graph Modification

01/27/2022
by   Sean Current, et al.

As machine learning becomes more widely adopted across domains, it is critical that researchers and ML engineers consider the inherent biases in the data that a model may perpetuate. Recently, many studies have shown that Graph Neural Network (GNN) models also absorb such biases when the input graph is biased. In this work, we aim to mitigate the bias learned by GNNs by modifying the input graph. To that end, we propose FairMod, a Fair Graph Modification methodology with three formulations: Global Fairness Optimization (GFO), Community Fairness Optimization (CFO), and Fair Edge Weighting (FEW). Our proposed models perform either microscopic or macroscopic edits to the input graph while training GNNs, learning node embeddings that are both accurate and fair in the context of link recommendation. We demonstrate the effectiveness of our approach on four real-world datasets and show that we can improve recommendation fairness by several factors at negligible cost to link prediction accuracy.
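The abstract does not spell out how the Fair Edge Weighting (FEW) formulation works, so the following is only an illustrative sketch of the general idea it names: replacing hard 0/1 edges with continuous weights and adjusting those weights to balance how information flows between demographic groups during message passing. The toy graph, group labels, and scaling factor `lam` below are all invented for the example, not taken from the paper.

```python
import numpy as np

# Toy graph: 6 nodes with a binary sensitive attribute (group 0 vs group 1).
group = np.array([0, 0, 0, 1, 1, 1])

# Symmetric adjacency matrix; only one edge, (0, 3), crosses the groups.
A = np.array([
    [0, 1, 1, 1, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 0, 0, 0],
    [1, 0, 0, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
], dtype=float)

def inter_group_mass(W, group):
    """Fraction of total edge weight that crosses the group boundary."""
    cross = group[:, None] != group[None, :]
    return W[cross].sum() / W.sum()

before = inter_group_mass(A, group)

# FEW-style edit (illustrative): scale inter-group edge weights up by a
# factor lam so message passing mixes information across groups more evenly.
lam = 3.0
cross = group[:, None] != group[None, :]
W = A.copy()
W[cross & (A > 0)] *= lam

after = inter_group_mass(W, group)
print(before, after)  # inter-group edge mass increases after reweighting
```

In the paper this kind of reweighting would be learned jointly with the GNN against a fairness objective rather than applied with a fixed factor; the sketch only shows why continuous edge weights give a differentiable knob that graph edits (adding or deleting whole edges) do not.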


Related research

Graph Learning with Localized Neighborhood Fairness (12/22/2022)
Learning fair graph representations for downstream applications is becom...

GFairHint: Improving Individual Fairness for Graph Neural Networks via Fairness Hint (05/25/2023)
Given the growing concerns about fairness in machine learning and the im...

Drop Edges and Adapt: a Fairness Enforcing Fine-tuning for Graph Neural Networks (02/22/2023)
The rise of graph representation learning as the primary solution for ma...

Fair Representation Learning for Heterogeneous Information Networks (04/18/2021)
Recently, much attention has been paid to the societal impact of AI, esp...

Analyzing the Effect of Sampling in GNNs on Individual Fairness (09/08/2022)
Graph neural network (GNN) based methods have saturated the field of rec...

Promoting Fairness in GNNs: A Characterization of Stability (09/07/2023)
The Lipschitz bound, a technique from robust statistics, can limit the m...

FairEdit: Preserving Fairness in Graph Neural Networks through Greedy Graph Editing (01/10/2022)
Graph Neural Networks (GNNs) have proven to excel in predictive modeling...
