
FairEdit: Preserving Fairness in Graph Neural Networks through Greedy Graph Editing

01/10/2022
by Donald Loveland, et al.

Graph Neural Networks (GNNs) have proven to excel in predictive modeling tasks where the underlying data is a graph. However, as GNNs are extensively used in human-centered applications, the issue of fairness has arisen. While edge deletion is a common method used to promote fairness in GNNs, it fails to consider when data is inherently missing fair connections. In this work we consider the unexplored method of edge addition, accompanied by deletion, to promote fairness. We propose two model-agnostic algorithms to perform edge editing: a brute force approach and a continuous approximation approach, FairEdit. FairEdit performs efficient edge editing by leveraging gradient information of a fairness loss to find edges that improve fairness. We find that FairEdit outperforms standard training for many data sets and GNN methods, while performing comparably to many state-of-the-art methods, demonstrating FairEdit's ability to improve fairness across many domains and models.


1 Introduction

Over the past decade, deep learning algorithms have become widely used in automated decision-making tasks. Despite neural networks' improved training speed and performance, understanding how neural networks behave when adopted in potentially biased settings remains an open topic. Discriminatory bias has the potential to appear in many human-centered applications of neural networks, such as social networks, financial networks, or recommendation systems, where data has been historically generated unfairly. One example of how biased data can manifest comes from mortgage lending in Chicago and Milwaukee over the last five decades. Specifically, redlining (the process of dividing up regions of neighborhoods and restricting access to various services, such as insurance or loans) in racially segregated parts of these cities has caused financial records to disproportionately under-represent various racial and ethnic groups. As a byproduct, some financial institutions have continued to perpetuate these biases by making "data-driven decisions" for lending while failing to recognize the historical context of the data. To this day, data generated through these processes are fed into a myriad of decision-making algorithms that have the propensity to amplify any signals present, even if unethical.

A survey on fairness in machine learning [1] identified two reasons for algorithmic unfairness: a) bias in the data and b) an algorithm's susceptibility to bias. The authors evaluated the COMPAS software, which measures the risk of an offender recommitting a crime [1], and found heavy racial bias. They cite the unwillingness of the software's authors to open-source their proprietary data as one of the driving issues. Another example of demonstrated unfairness is the recent adoption of facial recognition systems (FRSs). The FRVT 2002 [15], an evaluation of FRSs, showed that the performance of many algorithms significantly degraded across genders. Furthermore, several racial biases were also found [5], highlighting the serious social ramifications that can occur when these systems are used in sensitive scenarios [10], such as criminal suspect identification. Based on these sources of unfairness, two possible solutions can be proposed. One solution would focus on facilitating the analysis of bias in data by AI ethics researchers and domain experts. Unfortunately, this can be an arduous and inefficient process. Another route argues for promoting fairness in the decision-making process by using algorithms that are more robust against unfairness. We focus on the latter in this work and consider how one might change the training strategy of a model to promote fairness.

Separately, one new direction of research is developing deep learning algorithms that can operate on graphs. Graphs have been used to represent many real-world systems, such as social networks, transaction networks, and molecular structures. While fairness has been studied for traditional machine learning algorithms and even deep neural networks, graph neural networks (GNNs), neural networks that operate directly on graphs, have only received minor attention. In this work, we propose to edit the graph adjacency matrix to promote fairness. Intuitively, many cases of historic discrimination have caused data sets to be incomplete due to structural biases that restrict various behaviors. For example, in the context of redlining, different racial and ethnic groups were stratified into distinct communities, causing networks built on this data to be homophily-dominant (i.e., nodes in a local neighborhood all belong to the same sensitive class) with minimal connections across nodes of differing sensitive attributes. Thus, the key assumption that drives our method is that graph data generated through discriminatory means is either a) missing edges that would have been present in fairer settings, or b) over-representing homophilous edges due to social stratification. While existing literature proposes different techniques to promote fairness, such as re-weighting existing connections or adversarial-like training, general graph editing methods (including edge addition and deletion) have yet to be explored. Graph editing is an important next step, as networks that are locally homophilous cannot simply be re-weighted to be made fair when no connections to nodes with differing sensitive attributes exist; instead, new connections must be added to learn fairer representations. Due to the lack of explainability in GNN models, learning fairer representations can be one step toward improving their applicability in sensitive applications. Likewise, we also intend to learn how different GNN architectures interplay with fairness, helping to elucidate how structure impacts decision making.

In summary, our contributions in this work are as follows:

  • We perform a large set of empirical evaluations measuring how fair training mechanisms and models impact various predictive tasks and fairness metrics, elucidating GNN design choices which improve fairness.

  • We propose a new method to introduce fairness into GNN models by editing the graph data during training. The edits either generate new connections, which would have otherwise been there if not for discriminatory factors, or delete old connections which saturate the model.

  • We propose a second variant of this method, FairEdit, that relaxes the discrete optimization required to search the graph edit space into a continuous optimization problem. Given a set of edits, this approach is able to determine which edit is most impactful to fairness in constant time through gradient approximations.

2 Related Work

Graph Representation Learning

Graph neural networks (GNNs) have been developed as a generalization of convolutional neural networks (CNNs) to accommodate graph-structured data. Various GNN architectures have been proposed, each introducing a different mechanism to promote expressivity.

GCN (Graph Convolutional Network) [11] proposed a graph representation and propagation rule for convolutional neural networks to be applied to graph-structured data. Specifically, the work takes the average of all neighbors of a node to update its hidden representations. This can hurt representation learning as proximity usually dominates over useful node features.

GraphSAGE (SAmple and aggreGatE) [8] worked to generalize GCN to more applications by proposing new aggregation and embedding functions. GraphSAGE operates by concatenating the aggregated neighborhood embeddings with the hidden representation of the node being updated, allowing each node to learn unique representations. APPNP (personalized propagation of neural predictions) [12] recognized the issue of oversmoothing (nodes learning the same representations in deeper GNNs) present in GCN and GraphSAGE, which forces models to remain shallow. In order to access higher-order neighborhoods, the authors adopt a propagation scheme based on personalized PageRank, where the model performs a weighted sum between the aggregated neighborhood representations and the representations of the node being updated.
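To make the differences between these aggregation schemes concrete, the following dense-matrix sketch (plain PyTorch; the function names, the normalized adjacency `A_hat`, and the weight shapes are our own illustrative assumptions, not the exact implementations of the cited papers) contrasts mean aggregation, GraphSAGE-style concatenation, and APPNP-style personalized-PageRank propagation.

```python
import torch

def mean_aggregation_layer(A, H, W):
    # GCN-style update: average the neighbors' embeddings, then apply a shared linear map.
    deg = A.sum(dim=1, keepdim=True).clamp(min=1)
    return torch.relu(((A @ H) / deg) @ W)

def sage_layer(A, H, W):
    # GraphSAGE-style update: concatenate a node's own embedding with the aggregated
    # neighborhood embedding, so self and neighbor information stay distinguishable.
    deg = A.sum(dim=1, keepdim=True).clamp(min=1)
    neigh = (A @ H) / deg
    return torch.relu(torch.cat([H, neigh], dim=1) @ W)   # W has shape (2*d, d_out)

def appnp_propagate(A_hat, H, alpha=0.1, K=10):
    # APPNP-style propagation: repeatedly smooth predictions H over the normalized
    # adjacency A_hat while teleporting back to H, which mitigates oversmoothing.
    Z = H
    for _ in range(K):
        Z = (1 - alpha) * (A_hat @ Z) + alpha * H
    return Z
```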

Fairness in Machine Learning Various notions of fairness in machine learning have been developed in the past decade. The notions of Individual Fairness [4] and Disparate Treatment [19] emphasize that individuals from different sensitive groups should have similar outcomes if they have similar non-sensitive attributes. [13] introduced counterfactual fairness, capturing the intuition that a decision is fair if it is the same in the actual world and in the counterfactual world where the individual belongs to a different sensitive group. In contrast, Group Fairness concentrates on statistical parity, including demographic parity (or disparate impact) [19] and equality of opportunity [9]. While the previous notions focus on prediction outcomes, Fairness Through Unawareness [6] focuses on process fairness, which requires that sensitive features are not explicitly included in the decision-making process. Another attempt outside parity in treatment or outcome is a preference-based notion of fairness proposed by [18], securing parity while guaranteeing high accuracy.

Fairness in Graph Representation Learning The study of fairness in graph-structured data is a very new topic and many domain-specific issues remain open. One major source of bias in graph learning is homophily, meaning that similar nodes in graphs tend to interact with each other. FairDrop [17] proposes to create a random copy of the adjacency matrix biased towards a decrease in homophily, reducing the predictability of sensitive attributes. Nifty [2] adopts an adversarial-like training paradigm that perturbs the graph towards a more fair objective. However, this work fails to consider adding an edge when perturbing the graph's structure. FairGNN [3] considers non-i.i.d. data and tries to leverage graph structure with limited sensitive information while maintaining high computational efficiency. Node2vec [7] is an algorithmic framework that learns a mapping of nodes to a low-dimensional vector space with a random walk procedure. FairWalk [16] extended the node2vec algorithm by grouping neighbors based on sensitive attributes and forcing possible steps to have equal probability with respect to these groups. Our method, FairEdit, adds new connections between vertices that promote fairness, extending upon the work of Nifty and FairDrop. Though [14] claims that "adding fictitious links might mislead the directions of message passing, and further corrupt the representation learning", various works have shown that adding edges can improve various tasks without compromising accuracy [20]. An illustration is presented in Figure 1 to demonstrate what the editing process may look like between two communities of differing sensitive attributes.

Figure 1: Red dashed edges indicate edges between nodes with the same sensitive attribute that have been dropped, and the green dashed edge indicates an edge added between nodes with different sensitive attributes.

3 Notations and Mathematical Formulations

$G = (V, E)$ is used to denote a graph, represented by a set of nodes $V$ and a set of edges $E$. We consider $n$ as the number of nodes of graph $G$, mathematically given by $n = |V|$. $A \in \{0, 1\}^{n \times n}$ is used to denote the adjacency matrix of $G$, where $A_{ij} = 1$ indicates an edge between node $i$ and node $j$. Each node $i$ in the graph has a set of features $x_i$, with the full set of node features denoted as $X$. The tilde notation (e.g., $\tilde{G}$, $\tilde{A}$) is used to denote an object that has been edited. The specific editing mechanism will be explained when introduced for the first time.

Algorithm and hyper-parameter notations: $F$ denotes the fairness function and $F_{CF}$ is used to denote counterfactual fairness (explained below). $\mathcal{L}$ is used to denote the loss function of the model, the binary cross-entropy loss. The sensitive attribute for node $i$ in a graph is given by $s_i$ and the model parameters are denoted by $\theta$. $k$ is a hyper-parameter of the model which is used to denote the number of edits per iteration, and the total number of epochs is given by $T$.

Graph neural networks learn through a message passing framework where node representations are updated based on messages from their neighbors. A simple update for node $v$ at iteration $l$ can be formulated as $h_v^{(l)} = f^{(l)}\big(\mathrm{AGG}\big(\{h_u^{(l-1)} : u \in \mathcal{N}(v)\}\big)\big)$, where $f^{(l)}$ is some embedding function at layer $l$ (e.g. an MLP), $\mathrm{AGG}$ is an order-agnostic aggregation function (e.g. average), and $\mathcal{N}(v)$ is the set of nodes in the neighborhood of node $v$. Additional learnable functions are sometimes added to improve the expressivity of the model, such as an LSTM over the series of learned representations. If the graph has self-loops, the original node is also included in the neighborhood. At every layer, each set of node features is embedded with the same learned function $f^{(l)}$, introducing a weight-sharing mechanism. At the last layer $L$, the learned embeddings are passed to a final classifier or regressor (also usually an MLP) to perform the node prediction task. If the goal is to predict over an entire graph, an additional aggregator is applied to all of the node embeddings before the prediction, resulting in one final output.
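As a minimal sketch of this update rule (our own illustrative code, not tied to any specific architecture), the representation of a single node can be computed from its neighbors' previous-layer embeddings as follows; `f_l` stands in for the learned embedding function and the mean for the order-agnostic aggregator.

```python
import torch

def update_node(v, H_prev, neighbor_idx, f_l):
    """One message-passing step for node v.

    H_prev:       (n, d) tensor of layer (l-1) embeddings for all nodes.
    neighbor_idx: list of node indices in N(v) (include v itself to model self-loops).
    f_l:          learned embedding function for layer l, e.g. a small MLP.
    """
    messages = H_prev[neighbor_idx]      # gather messages from the neighborhood
    aggregated = messages.mean(dim=0)    # order-agnostic aggregation (average)
    return f_l(aggregated)               # new representation h_v^{(l)}
```

Stacking such updates for every node and layer, and feeding the final embeddings to a classifier, yields the node-prediction pipeline described above.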

4 Methods

As mentioned in Section 2, edge re-weighting and edge deletion are common methods used to promote fairness in GNNs. In this work, we consider how one might also include edge addition to introduce relationships otherwise unavailable in the data. We propose two novel approaches that edit a graph's adjacency matrix to determine additions and deletions which improve fairness. The first method searches the set of possible edits within a graph's adjacency matrix during each training step and greedily edits based on the edge that provides the largest fairness gain. The second method relaxes the discrete assumption and instead utilizes edge gradients as a signal of importance to fairness. Below, each method is formalized along with its training procedure and a comparison of time and space complexity.

4.1 Brute Force Edit Method

The brute force approach tweaks the underlying graph structure to improve fairness. For a graph $G$ we first consider the set of all possible edge edits on the graph. Let the set of all possible edges of $G$ be $E_{full}$. Thus a graph of $n$ nodes has $|E_{full}| = n(n-1)/2$.

At the start of the iteration we make the following change for each candidate edge $e_{ij} \in E_{full}$,

$\tilde{A}_{ij} = \tilde{A}_{ji} = 1 - A_{ij}$   (1)

i.e., the candidate edge is added if absent and deleted if present. Based on $\tilde{E}$, the updated edge set in $\tilde{G}$, the corresponding counterfactual fairness score $F_{CF}(\tilde{G})$ (formalized in the evaluation section) is calculated. The procedure is repeated for all $e_{ij} \in E_{full}$. The final update is made based on the edit that produces the best fairness score. Mathematically, the optimal edited graph is $\tilde{G}^{*} = \arg\max_{\tilde{G}} F_{CF}(\tilde{G})$, i.e. the graph with the edited edge set that maximizes counterfactual fairness. Due to the possibility of creating a graph that is too different from the original graph, we choose $k$ to be small. This small number of edits is generally sufficient while minimizing the computational overhead.

A major drawback of this approach is its high computational complexity. The run-time complexity of each iteration in this mode is $O(|E_{full}|) = O(n^2)$ due to the size of $E_{full}$, causing issues in the case of large graphs. Due to this drawback, we introduce FairEdit to reduce the run-time complexity.

Input: Training Graph $G$, sensitive index $s$
Set: Number of edits $k$, edit starting epoch $t_{start}$, number of epochs $T$, optimizer $\mathcal{O}$, model $f_\theta$

for $t = 1$ to $T$ do
     Train one step using optimizer $\mathcal{O}$ for model parameters $\theta$ on graph $G$
     if $t \geq t_{start}$ then
          for $e_{ij} \in E_{full}$ do
               Update $A$ using equation 1 to obtain $\tilde{G}$
               Calculate $F_{CF}(\tilde{G})$
          end for
          Find optimal edited graph: $\tilde{G}^{*} = \arg\max_{\tilde{G}} F_{CF}(\tilde{G})$
          $G \leftarrow \tilde{G}^{*}$
     end if
end for

Output: Optimized model parameters $\theta$ with improved fairness

Algorithm 1 Brute-force Edit
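The following is a minimal sketch of a single brute-force edit step, assuming a dense adjacency matrix, a trained `model(X, A)`, and a user-supplied scoring function `cf_fairness(...)` that returns a higher value for fairer graphs; these names and the undirected-graph handling are our own illustrative assumptions, not the exact implementation.

```python
import itertools
import torch

def brute_force_edit(model, X, A, s, cf_fairness):
    """Try every possible single-edge flip (Equation 1) and commit the one
    that maximizes the counterfactual fairness score."""
    n = A.shape[0]
    best_score, best_edge = float("-inf"), None
    for i, j in itertools.combinations(range(n), 2):        # all n(n-1)/2 candidate edits
        A_tilde = A.clone()
        A_tilde[i, j] = A_tilde[j, i] = 1 - A_tilde[i, j]    # add if absent, delete if present
        score = cf_fairness(model, X, A_tilde, s)
        if score > best_score:
            best_score, best_edge = score, (i, j)
    i, j = best_edge
    A[i, j] = A[j, i] = 1 - A[i, j]                          # greedily apply the best edit
    return A
```

Each call performs on the order of $n^2$ fairness evaluations, which is exactly the cost that motivates FairEdit below.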

4.2 FairEdit

FairEdit retains the same goal as our brute force method of tweaking the underlying graph structure to improve fairness. FairEdit first proposes a set of counterfactual edges to be edited by a) adding edges between nodes with different sensitive attributes with a probability $p_{add}$ and b) removing edges between nodes with the same sensitive attribute with a probability $p_{drop}$, creating a new perturbed graph $\tilde{G}$ with adjacency matrix $\tilde{A}$. $\tilde{G}$ is passed through the GNN model and the new predictions are compared against the original graph predictions to measure fairness. Given that counterfactual fairness is not differentiable, we propose a differentiable approximation of counterfactual fairness through

$\mathcal{L}_{CF} = \left\lVert f_\theta(X, A) - f_\theta(X, \tilde{A}) \right\rVert$   (2)

where a large norm indicates a strong sensitivity to the counterfactual edit. We determine the (approximately) optimal edit by taking the gradient of equation 2 with respect to $A$ and $\tilde{A}$. The magnitude of the gradients can be seen as a relaxation of the fairness increase determined in the brute force method. To choose the final edit, with $\nabla_{A} = \partial \mathcal{L}_{CF} / \partial A$ and $\nabla_{\tilde{A}} = \partial \mathcal{L}_{CF} / \partial \tilde{A}$, we greedily solve:

$e^{*} = \arg\max_{(i,j)} \; \max\left( \left| \nabla_{A, ij} \right|, \left| \nabla_{\tilde{A}, ij} \right| \right)$   (3)

Intuitively, when an edge is added, we can determine how impactful that change was by looking at the gradient with respect to $\tilde{A}$. In the deletion case, impactful edges can be determined by looking at $A$. Note that deleted edges will have gradients equal to 0 in $\tilde{A}$, and added edges will have gradients equal to 0 in $A$. Since the adjacency matrices are discrete, we are unable to directly attribute gradients to each edge. Instead, we learn a scoring matrix $M$ that is able to determine how important an edge is to the loss function. The mask is injected by $\tilde{A} \odot \sigma(M)$, where $\sigma$ is the sigmoid function and $\odot$ indicates element-wise multiplication. $M$ is discretized into a binary edit mask by thresholding. This process is performed for five iterations, updating the mask at each step to learn an appropriate score.

By approximating the importance through gradients, the run-time complexity of each iteration in this mode is constant in the number of candidate edits, given the need to simply run two forward passes of the model and one backpropagation step. However, this method does require additional space complexity proportional to the number of proposed edge additions, as the model requires storage of the additional edges. That said, this space complexity is often much smaller given the probability of addition is usually small.

Input: Training Graph $G$, sensitive index $s$
Set: Number of edits $k$, edit starting epoch $t_{start}$, number of epochs $T$

for $t = 1$ to $T$ do
     Train one step using optimizer for model parameters $\theta$ on graph $G$
     if $t \geq t_{start}$ then
          Generate new graph $\tilde{G}$ by perturbing $G$ with probabilities $p_{add}$ and $p_{drop}$
          Calculate $f_\theta(X, A)$ and $f_\theta(X, \tilde{A})$
          Compute approximate gradients for $A$ and $\tilde{A}$ from equation 2
          Solve equation 3 to obtain the optimal edit $e^{*}$
          Perform edit $e^{*}$ on $G$ to get $\tilde{G}^{*}$
          $G \leftarrow \tilde{G}^{*}$
     end if
end for

Output: Optimized model parameters $\theta$ with improved fairness

Algorithm 2 FairEdit
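Below is a hedged sketch of one FairEdit editing step with dense adjacency matrices. The perturbation probabilities, the single mask update (rather than the five iterations described above), and all helper names are our own simplifications of the procedure, not the authors' implementation.

```python
import torch

def fairedit_step(model, X, A, s, p_add=0.01, p_drop=0.01):
    """One gradient-guided edit: propose counterfactual edges, score them with the
    gradient of a differentiable fairness proxy (Eq. 2), and commit the best one (Eq. 3)."""
    cross_group = (s.unsqueeze(0) != s.unsqueeze(1)).float()

    # a) propose edits: add cross-group edges, drop same-group edges
    add = (torch.rand_like(A) < p_add).float() * cross_group * (1 - A)
    drop = (torch.rand_like(A) < p_drop).float() * (1 - cross_group) * A
    A_tilde = (A + add - drop).clamp(0, 1)

    # b) inject a learnable mask so the discrete edges receive gradients
    M = torch.zeros_like(A, requires_grad=True)
    A_masked = A_tilde * torch.sigmoid(M)

    # c) differentiable proxy for counterfactual fairness: shift in predictions
    loss = torch.norm(model(X, A) - model(X, A_masked))
    loss.backward()

    # d) pick the proposed edit with the largest gradient magnitude and commit it
    candidate = (add + drop) > 0
    scores = M.grad.abs() * candidate.float()
    i, j = divmod(int(torch.argmax(scores)), A.shape[0])
    A[i, j] = A[j, i] = A_tilde[i, j]
    return A
```

In the full algorithm this step is interleaved with training epochs and the mask is refined over several iterations, but the core cost remains two forward passes and one backward pass per edit.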

5 Evaluation

5.1 Dataset

We use three data sets proposed by [2] to evaluate our GNN models. For each dataset, we identify the size of the graph, the prediction task, and the sensitive attribute we analyze for fairness. Each of our tasks focuses on node classification. All of the data sets are freely available and have been proposed as a standard to benchmark fairness in GNN models. These data sets are useful as they have a well-defined sensitive attribute that can be probed for fairness. The datasets are: (a) the Recidivism dataset, which comprises 18,876 nodes, where each node represents a defendant who was released on bail in the U.S. state court system during 1990-2009. Each node contains 18 attributes. Connections are based on similarity of crimes and past convictions. The classification task is to determine whether a defendant would receive bail (i.e., unlikely to commit a violent crime if released) or not (i.e., likely to commit a violent crime), and race is the sensitive attribute. (b) The Credit Default dataset comprises 30,000 nodes, where each node represents an individual using some form of credit. Each node contains 13 attributes. Individuals are connected by their spending and payment behavior. The classification task is to determine whether an individual will default on their credit card payment. Age is used as the sensitive attribute. (c) The German Credit dataset comprises 1,000 nodes, where each node represents an individual who uses a specific German bank. Each node contains 27 attributes. Each connection identifies a similarity in credit accounts. The classification task is to classify individuals into those who have high versus low credit risk. The individual's gender is used as the sensitive attribute.

5.2 Measurement metrics

The models are evaluated on the three data sets described in the previous section. Predictive performance is evaluated through the F1-score to mitigate any discrepancies in class distribution, while fairness is determined by statistical parity, equal opportunity, counterfactual fairness, and model stability. Further details of these metrics are below:

Standard Metrics The standard metrics for binary classification are based on the notions of true positive (TP), true negative (TN), false positive (FP) and false negative (FN). The F1-score is defined as the harmonic mean of precision and recall, where precision is the fraction of positive predictions that are truly positive and recall is the fraction of truly positive samples that are predicted positive. In mathematical terms, precision is $\mathrm{TP}/(\mathrm{TP}+\mathrm{FP})$, recall is $\mathrm{TP}/(\mathrm{TP}+\mathrm{FN})$, and F1-score $= 2 \cdot \mathrm{precision} \cdot \mathrm{recall} / (\mathrm{precision} + \mathrm{recall})$.

Fairness Metrics Counterfactual fairness is measured by calculating the proportion of test nodes whose predicted labels change when the node's sensitive attribute is flipped. Model stability is measured by calculating the proportion of test nodes whose predicted labels change when test node features are perturbed by a small amount of noise. Statistical parity (SP), also known as group fairness, suggests that the predictor is unbiased if the prediction $\hat{y}$ is independent of the protected sensitive attribute $S$, and can be computed by $\Delta_{SP} = |P(\hat{y}=1 \mid S=0) - P(\hat{y}=1 \mid S=1)|$. Equal opportunity (EO) requires that nodes in the positive class with different sensitive attributes have similar outcomes. This is computed as $\Delta_{EO} = |P(\hat{y}=1 \mid y=1, S=0) - P(\hat{y}=1 \mid y=1, S=1)|$.
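For concreteness, these metrics can be computed as follows (a minimal sketch with our own variable names; `y_hat` are binary predictions on the original graph, `y_hat_cf` predictions on the sensitive-attribute-flipped graph, `y` true labels, and `s` the binary sensitive attribute).

```python
import torch

def statistical_parity_diff(y_hat, s):
    # |P(y_hat = 1 | s = 0) - P(y_hat = 1 | s = 1)|
    return torch.abs(y_hat[s == 0].float().mean() - y_hat[s == 1].float().mean())

def equal_opportunity_diff(y_hat, y, s):
    # |P(y_hat = 1 | y = 1, s = 0) - P(y_hat = 1 | y = 1, s = 1)|
    pos = y == 1
    return torch.abs(y_hat[pos & (s == 0)].float().mean() -
                     y_hat[pos & (s == 1)].float().mean())

def counterfactual_unfairness(y_hat, y_hat_cf):
    # proportion of nodes whose prediction flips when the sensitive attribute is flipped
    return (y_hat != y_hat_cf).float().mean()
```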

5.3 Evaluation Process and Results

We evaluate our proposed fairness-training framework on a variety of modern graph representation learning and GNN architectures, namely GCN [11], GraphSAGE [8] and APPNP [12]. Each model is also trained with other state-of-the-art fair training frameworks, FairGNN [3] and Nifty [2], to compare against. The evaluation metrics described in Section 5.2 are calculated for the three data sets identified in Section 5.1. We perform hyper-parameter searches for each dataset, model, and training method, tuning the learning rate, hidden size, and model depth. Additionally, we compare against FairWalk [16], a non-GNN fair baseline that operates on graphs. Given that FairWalk only produces node embeddings, a random forest model is used for the actual classification.

The results for the proposed evaluation metrics are shown in Table 1. Results for the APPNP model with the FairGNN training method are not provided given that the APPNP model does not adopt the architecture required by FairGNN. Likewise, results for the brute force method are not provided for the Credit defaulter dataset due to its exceedingly high computational cost. These issues motivate methods such as FairEdit, which are model agnostic and whose cost does not scale with the graph size. For previously proposed methods, such as FairGNN and Nifty, we re-implement their code and repeat the authors' analysis. Unfortunately, even with the reported hyperparameters, we are unable to match the results presented in the original papers. Thus, in our results table and discussion, we report the results of our re-implementation.

For many datasets and models, FairEdit outperforms standard training, usually improving both F1-score and fairness. Notably, FairEdit applied to the Recidivism dataset outperforms the standard training baseline on every model and often outperforms FairGNN and Nifty in both predictive performance and fairness. That said, for the German credit and Credit defaulter datasets, FairGNN, Nifty, and FairEdit each demonstrate varying performance depending on the model choice. This insight is important as studies comparing FairGNN and Nifty are nonexistent, and their variability indicates more work needs to be done to create more stable fairness methods. In addition to FairEdit's performance, we are also able to better understand how model architecture impacts performance. In the Recidivism and Credit datasets, GraphSAGE seems to perform best in both F1-score and fairness, suggesting that the concatenation between node features and neighbor features may introduce a mechanism that promotes fairness. Furthermore, despite FairEdit only approximately solving for the best edge to edit, comparisons between the brute force method and FairEdit indicate this approximation is sufficient to improve fairness while maintaining performance.

Despite its apparent success in a handful of scenarios, FairEdit has instances where it does not perform well, particularly on the German credit and Credit defaulter datasets. Interestingly, when we look into the percentage of nodes with each sensitive attribute value across the datasets analyzed, we find that they are evenly distributed across the two classes. In other words, the datasets are inherently fairly balanced to begin with and do not display any proportionality issues. This matters because we initially assumed the data exhibited some bias between the sensitive attribute and class distribution. This issue speaks broadly to the lack of realistic datasets which mimic the biased behavior seen in the real world and may partially explain our mixed performance.

Dataset Method F1-score (↑) Unfairness (↓) Instability (↓) ΔSP (↓) ΔEO (↓)
German credit graph Fairwalk 0.790 0.400 0.360 0.019 0.009
GCN 0.646 0.148 0.160 0.406 0.379
FairGCN 0.810 0.024 0.024 0.109 0.023
Nifty-GCN 0.740 0.016 0.092 0.448 0.368
GCN-BF 0.646 0.156 0.168 0.428 0.409
GCN-FairEdit 0.692 0.052 0.168 0.448 0.463
SAGE 0.749 0.224 0.304 0.355 0.303
FairSAGE 0.748 <0.001 <0.001 0.003 0.055
Nifty-SAGE 0.737 0.036 0.132 0.659 0.554
SAGE-BF 0.749 0.188 0.316 0.416 0.363
SAGE-FairEdit 0.684 0.219 0.324 0.347 0.319
APPNP 0.772 0.092 0.436 0.241 0.180
Nifty-APPNP 0.762 0.038 0.142 0.542 0.523
APPNP-BF 0.772 0.088 0.436 0.241 0.180
APPNP-FairEdit 0.734 0.184 0.124 0.343 0.288
Recidivism graph Fairwalk 0.419 0.483 0.482 0.009 0.004
GCN 0.773 0.085 0.226 0.099 0.046
FairGCN 0.727 0.064 0.153 0.071 0.067
Nifty-GCN 0.663 0.015 0.123 0.025 0.028
GCN-BF 0.779 0.091 0.427 0.004 0.018
GCN-FairEdit 0.813 0.004 0.175 0.078 0.014
SAGE 0.778 0.087 0.430 0.002 0.018
FairSAGE 0.810 0.056 0.472 0.012 0.018
Nifty-SAGE 0.834 0.004 0.233 0.058 0.034
SAGE-BF 0.779 0.092 0.427 0.004 0.018
SAGE-FairEdit 0.823 0.053 0.475 0.015 0.032
APPNP 0.747 0.015 0.168 0.080 0.057
Nifty-APPNP 0.752 0.011 0.174 0.094 0.042
APPNP-BF 0.749 0.015 0.168 0.083 0.062
APPNP-FairEdit 0.759 0.011 0.135 0.063 0.034
Credit defaulter graph Fairwalk 0.722 0.412 0.540 0.008 0.001
GCN 0.793 0.162 0.282 0.137 0.136
FairGCN 0.739 0.052 0.245 0.045 0.051
Nifty-GCN 0.799 <0.001 0.136 0.110 0.093
GCN-FairEdit 0.797 0.123 0.195 0.122 0.125
SAGE 0.820 0.130 0.378 0.092 0.094
FairSAGE 0.825 0.061 0.382 0.125 0.097
Nifty-SAGE 0.833 0.006 0.134 0.112 0.090
SAGE-FairEdit 0.796 0.173 0.326 0.088 0.094
APPNP 0.824 0.065 0.141 0.113 0.109
Nifty-APPNP 0.084 0.032 0.013 0.012 0.011
APPNP-FairEdit 0.823 0.063 0.136 0.110 0.104
Table 1: Fair Training Results. Arrows (↑, ↓) indicate the direction of better performance. BF indicates the brute-force algorithm.

6 Conclusion

In this work we propose a novel training method to promote fairness in GNNs through graph editing, i.e., adding or deleting edges of a graph. Given the complexities involved with solving the originally proposed optimization problem, we also develop a relaxed version with significantly improved run-time. The brute force technique showed positive results for the smaller data sets on which it was implemented, but failed to scale to larger graphs. FairEdit, the relaxed variant, proved able to compete with state-of-the-art fairness techniques such as FairGNN and Nifty on many experiments, while maintaining model agnosticism. Furthermore, our experiments were able to help us elucidate GNN architectures that intrinsically promote fairness; in a significant number of experiments, GraphSAGE was able to outperform other models independent of the training method.

Our results indicate that empowering graph editing, specifically with the inclusion of edge additions, can improve GNN fairness without compromising accuracy. In the future, we hope to improve upon this work at both the experimental and algorithmic design level. On the data side, we would like to find new datasets which demonstrate significant bias and disparity, similar to what can be found in real-world applications, in order to sufficiently test both our models and others. We hypothesize that other models, which do not specifically handle our assumptions regarding missing and over-represented edges, may not perform as well in these scenarios. As for method development, we believe more work can be done to improve the quick and efficient optimization of graph edits. Currently, the gradient-based method experiences a significant amount of noise, as seen in fields such as explainable AI, meaning the edge gradients are often not optimal. Together, these two directions could further demonstrate the power of the FairEdit algorithm while improving convergence and expressivity.

References

  • [1] N. Mehrabi, F. Morstatter, N. Saxena, K. Lerman, and A. Galstyan (2021) A survey on bias and fairness in machine learning. ACM Computing Surveys (CSUR) 54 (6), pp. 1–35. Cited by: §1.
  • [2] C. Agarwal, H. Lakkaraju, and M. Zitnik (2021) Towards a unified framework for fair and stable graph representation learning. arXiv preprint arXiv:2102.13186. Cited by: §2, §5.1, §5.3.
  • [3] E. Dai and S. Wang (2021) Say no to the discrimination: learning fair graph neural networks with limited sensitive attribute information. In Proceedings of the 14th ACM International Conference on Web Search and Data Mining, pp. 680–688. Cited by: §2, §5.3.
  • [4] C. Dwork, M. Hardt, T. Pitassi, O. Reingold, and R. Zemel (2012) Fairness through awareness. In Proceedings of the 3rd innovations in theoretical computer science conference, pp. 214–226. Cited by: §2.
  • [5] G. Givens, J. R. Beveridge, B. A. Draper, and D. Bolme (2003) A statistical assessment of subject factors in the pca recognition of human faces. In 2003 Conference on Computer Vision and Pattern Recognition Workshop, Vol. 8, pp. 96–96. Cited by: §1.
  • [6] N. Grgic-Hlaca, M. B. Zafar, K. P. Gummadi, and A. Weller (2016) The case for process fairness in learning: feature selection for fair decision making. In NIPS Symposium on Machine Learning and the Law, Vol. 1, pp. 2. Cited by: §2.
  • [7] A. Grover and J. Leskovec (2016) Node2vec: scalable feature learning for networks. In Proceedings of the 22nd ACM SIGKDD international conference on Knowledge discovery and data mining, pp. 855–864. Cited by: §2.
  • [8] W. Hamilton, Z. Ying, and J. Leskovec (2017) Inductive representation learning on large graphs. Advances in neural information processing systems 30. Cited by: §2, §5.3.
  • [9] M. Hardt, E. Price, and N. Srebro (2016) Equality of opportunity in supervised learning. Advances in neural information processing systems 29, pp. 3315–3323. Cited by: §2.
  • [10] L. D. Introna (2005) Disclosive ethics and information technology: disclosing facial recognition systems. Ethics and Information Technology 7 (2), pp. 75–86. Cited by: §1.
  • [11] T. N. Kipf and M. Welling (2016) Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907. Cited by: §2, §5.3.
  • [12] J. Klicpera, A. Bojchevski, and S. Günnemann (2018) Predict then propagate: graph neural networks meet personalized pagerank. arXiv preprint arXiv:1810.05997. Cited by: §2, §5.3.
  • [13] M. J. Kusner, J. R. Loftus, C. Russell, and R. Silva (2017) Counterfactual fairness. arXiv preprint arXiv:1703.06856. Cited by: §2.
  • [14] P. Li, Y. Wang, H. Zhao, P. Hong, and H. Liu (2020) On dyadic fairness: exploring and mitigating bias in graph connections. In International Conference on Learning Representations, Cited by: §2.
  • [15] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone (2003) Face recognition vendor test 2002. In 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443), pp. 44. Cited by: §1.
  • [16] T. Rahman, B. Surma, M. Backes, and Y. Zhang (2019) Fairwalk: towards fair graph embedding. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI-19. Cited by: §2, §5.3.
  • [17] I. Spinelli, S. Scardapane, A. Hussain, and A. Uncini (2021) Biased edge dropout for enhancing fairness in graph representation learning. arXiv preprint arXiv:2104.14210. Cited by: §2.
  • [18] M. B. Zafar, I. Valera, M. G. Rodriguez, K. P. Gummadi, and A. Weller (2017) From parity to preference-based notions of fairness in classification. arXiv preprint arXiv:1707.00010. Cited by: §2.
  • [19] M. B. Zafar, I. Valera, M. G. Rogriguez, and K. P. Gummadi (2017) Fairness constraints: mechanisms for fair classification. In Artificial Intelligence and Statistics, pp. 962–970. Cited by: §2.
  • [20] D. Zügner, A. Akbarnejad, and S. Günnemann (2018) Adversarial attacks on neural networks for graph data. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. Cited by: §2.