Fair Representation Learning for Heterogeneous Information Networks

04/18/2021
by   Ziqian Zeng, et al.

Recently, much attention has been paid to the societal impact of AI, especially concerns regarding its fairness. A growing body of research has identified unfair AI systems and proposed methods to debias them, yet many challenges remain. Representation learning for Heterogeneous Information Networks (HINs), a fundamental building block of complex network mining, has socially consequential applications such as automated career counseling, but there have been few attempts to ensure that it does not encode or amplify harmful biases, e.g., sexism in the job market. To address this gap, in this paper we propose a comprehensive set of debiasing methods for fair HIN representation learning, including sampling-based, projection-based, and graph neural network (GNN)-based techniques. We systematically study the behavior of these algorithms, especially their capability to balance the trade-off between fairness and prediction accuracy. We evaluate the proposed methods in an automated career counseling application, where we mitigate gender bias in career recommendation. Based on the evaluation results on two datasets, we identify the most effective fair HIN representation learning techniques under different conditions.
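The abstract names three families of debiasing techniques but gives no code. As a minimal, hypothetical sketch of the projection-based family (not the authors' implementation), the idea is to remove from each node embedding its component along a direction that encodes the sensitive attribute, e.g., a gender axis estimated from the data; the function and variable names here are illustrative assumptions:

```python
import numpy as np

def debias_by_projection(embeddings, bias_direction):
    """Project embeddings onto the orthogonal complement of a bias axis.

    embeddings: (n, d) array of node embeddings
    bias_direction: (d,) vector capturing the sensitive attribute
    (e.g., a gender direction); it is normalized internally.
    """
    v = bias_direction / np.linalg.norm(bias_direction)
    # Subtract each embedding's scalar projection onto the bias axis.
    return embeddings - np.outer(embeddings @ v, v)

# Toy example: embeddings whose first coordinate correlates with gender.
emb = np.array([[2.0, 1.0],
                [-3.0, 0.5]])
gender_axis = np.array([1.0, 0.0])  # assumed bias direction
debiased = debias_by_projection(emb, gender_axis)
# After projection, the embeddings no longer vary along gender_axis,
# while the remaining coordinates are untouched.
```

After this transformation, a downstream recommender trained on `debiased` cannot linearly recover the removed attribute from the embeddings, which is the fairness-accuracy trade-off the abstract refers to: some task-relevant signal correlated with the bias axis is discarded along with the bias.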


Related research

- 11/17/2022 · FairMILE: A Multi-Level Framework for Fair and Scalable Graph Representation Learning — "Graph representation learning models have been deployed for making decis..."
- 01/27/2022 · FairMod: Fair Link Prediction and Recommendation via Graph Modification — "As machine learning becomes more widely adopted across domains, it is cr..."
- 07/10/2023 · Improving Fairness of Graph Neural Networks: A Graph Counterfactual Perspective — "Graph neural networks (GNNs) have shown great ability in representation ..."
- 06/25/2021 · Projection-wise Disentangling for Fair and Interpretable Representation Learning: Application to 3D Facial Shape Analysis — "Confounding bias is a crucial problem when applying machine learning to ..."
- 01/17/2022 · Fair Interpretable Learning via Correction Vectors — "Neural network architectures have been extensively employed in the fair ..."
- 09/20/2022 · Closing the Gender Wage Gap: Adversarial Fairness in Job Recommendation — "The goal of this work is to help mitigate the already existing gender wa..."
- 02/07/2022 · Fair Interpretable Representation Learning with Correction Vectors — "Neural network architectures have been extensively employed in the fair ..."
