Promoting Fairness in GNNs: A Characterization of Stability

09/07/2023
by   Yaning Jia, et al.

The Lipschitz bound, a technique from robust statistics, limits the maximum change in a model's output with respect to its input, including changes induced by irrelevant biased factors. It is an efficient and provable method for examining the output stability of machine learning models without incurring additional computational cost. Graph Neural Networks (GNNs), which operate on non-Euclidean data, have recently gained significant attention; however, no previous research has investigated GNN Lipschitz bounds as a means of stabilizing model outputs, especially on non-Euclidean data with inherent biases. Given the biases inherent in the graph data commonly used for GNN training, constraining the output perturbations induced by input biases, and thereby safeguarding fairness during training, poses a serious challenge. Although the Lipschitz constant has been used to control the stability of Euclidean neural networks, computing a precise Lipschitz constant remains elusive for non-Euclidean neural networks such as GNNs, especially in fairness contexts. To narrow this gap, we begin with general GNNs operating on an attributed graph and formulate a Lipschitz bound that limits changes in the output with respect to biases associated with the input. We then theoretically analyze how the Lipschitz constant of a GNN model can constrain the output perturbations induced by biases learned from data, enabling fairness-aware training. We experimentally validate the Lipschitz bound's effectiveness in limiting biases in the model output. Finally, from a training-dynamics perspective, we demonstrate why the theoretical Lipschitz bound can effectively guide GNN training toward a better trade-off between accuracy and fairness.
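To make the core idea concrete, here is a minimal sketch (not the paper's method) of how a Lipschitz upper bound can be computed for a single GCN-style layer h(X) = σ(ÂXW) with a 1-Lipschitz activation such as ReLU: the product of the spectral norms ‖Â‖₂·‖W‖₂ bounds how much the output can change per unit change in the node features. All function names and the toy graph below are illustrative assumptions.

```python
import numpy as np

def normalized_adjacency(A):
    """Symmetrically normalized adjacency with self-loops: D^-1/2 (A+I) D^-1/2."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def gcn_layer_lipschitz_upper_bound(A, W):
    """Spectral-norm product bound for one GCN layer with a 1-Lipschitz activation.

    Since ||relu(A_norm X1 W) - relu(A_norm X2 W)||_F <= ||A_norm||_2 ||X1 - X2||_F ||W||_2,
    the returned value upper-bounds the layer's Lipschitz constant w.r.t. X.
    """
    A_norm = normalized_adjacency(A)
    return np.linalg.norm(A_norm, 2) * np.linalg.norm(W, 2)

# Toy example: a triangle graph with a random weight matrix (illustrative only).
rng = np.random.default_rng(0)
A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]], dtype=float)
W = rng.normal(size=(4, 2))
L = gcn_layer_lipschitz_upper_bound(A, W)
```

Controlling such a bound during training (e.g., by constraining ‖W‖₂) is one way, in the spirit of the abstract, to cap how strongly input biases can perturb the model's output.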


Related research:

- Subgroup Generalization and Fairness of Graph Neural Networks (06/29/2021)
- Stability Properties of Graph Neural Networks (05/11/2019)
- FairMod: Fair Link Prediction and Recommendation via Graph Modification (01/27/2022)
- Robust Graph Neural Networks via Probabilistic Lipschitz Constraints (12/14/2021)
- Towards a Unified Framework for Fair and Stable Graph Representation Learning (02/25/2021)
- On the Robustness of Post-hoc GNN Explainers to Label Noise (09/04/2023)
- On the sensitivity of pose estimation neural networks: rotation parameterizations, Lipschitz constants, and provable bounds (03/16/2022)
