Should We Rely on Entity Mentions for Relation Extraction? Debiasing Relation Extraction with Counterfactual Analysis

05/08/2022
by   Yiwei Wang, et al.

Recent literature focuses on utilizing entity information in sentence-level relation extraction (RE), but this risks leaking superficial and spurious clues about relations. As a result, RE still suffers from unintended entity bias, i.e., the spurious correlation between entity mentions (names) and relations. Entity bias can mislead RE models into extracting relations that do not exist in the text. To combat this issue, some previous work masks the entity mentions to prevent RE models from overfitting them. However, this strategy degrades RE performance because it loses the semantic information of entities. In this paper, we propose CoRE (Counterfactual Analysis based Relation Extraction), a debiasing method that guides RE models to focus on the main effects of the textual context without losing the entity information. We first construct a causal graph for RE, which models the dependencies between variables in RE models. Then, we conduct counterfactual analysis on our causal graph to distill and mitigate the entity bias, which captures the causal effects of specific entity mentions in each instance. Note that our CoRE method is model-agnostic: it debiases existing RE systems during inference without changing their training processes. Extensive experimental results demonstrate that CoRE yields significant gains in both effectiveness and generalization for RE. The source code is provided at: https://github.com/vanoracai/CoRE.


1 Introduction

Sentence-level relation extraction (RE) is an important step toward obtaining a structural perception of unstructured text distiawan2019neural by extracting the relations between entity mentions (names) from the textual context. From the human perspective, the textual context should be the main source of information that determines the ground-truth relations between entities. Consider the sentence "Mary gave birth to Jerry." (We use an underline and a wavy line to denote the subject and object respectively by default.) Even if we change the entity mentions 'Jerry' and 'Mary' to other people's names, the relation 'parents' still holds between the subject and object, as described by the textual context "gave birth to".

Recently, some work aims to utilize entity mentions for RE yamada-etal-2020-luke; zhou2021improved, which, however, leaks superficial and spurious clues about the relations zhang2018graph. In our work, we observe that entity information can lead to biased relation predictions by misleading RE models into extracting relations that do not exist in the text. Figure 1 visualizes a relation prediction from a state-of-the-art RE model alt2020tacred (see more examples in Table 7). Although the context describes no relation between the highlighted entity pair, the model extracts the relation "countries_of_residence". Such an erroneous result can come from the spurious correlation between entity mentions and relations, or entity bias for short. For example, if the model sees the relation "countries_of_residence" far more often than other relations when the object entity is Switzerland during training, it can associate this relation with Switzerland during inference even though the relation does not exist in the text.

To combat this issue, some work zhang2017position; zhang2018graph proposes masking entities to prevent the RE models from over-fitting entity mentions. On the other hand, some other work peng2020learning; zhou2021improved finds that this strategy degrades the performance of RE because it loses the semantic information of entities.

Figure 1: (left) An example of RE produced by LUKE yamada-etal-2020-luke. In the input sentence, the subject is in blue and the object is in yellow. The ground-truth relation between the subject and object is "no_relation", since no relation is reflected by the textual context. (right) Our proposed counterfactual analysis for RE, which compares the original prediction (upper) with the counterfactual one (lower) to mitigate the entity bias.

For both machines and humans, RE requires a combined understanding of textual context and entity mentions peng2020learning. Humans can avoid the entity bias and make unbiased decisions by correctly referring to the textual context that describes the relation. The underlying mechanism is causality-based van2015cognitive: humans identify the relations by pursuing the main causal effect of the textual context instead of the side-effect of entity mentions. In contrast, RE models are usually likelihood-based: the prediction is analogous to looking up the entity mentions and textual context in a huge likelihood table, interpolated by training tang2020unbiased. In this paper, our idea is to teach RE models to distinguish between the effects of the textual context and entity mentions through counterfactual analysis pearl2018causal:

Counterfactual analysis: If I had not seen the textual context, would I still extract the same relation?

Counterfactual analysis essentially grants humans the ability to hypothesize: to make decisions collectively based on the textual context and entity mentions, and to introspect whether a decision has been deceived (see Figure 1). Specifically, we compare the original instance with a counterfactual instance in which only the textual context is wiped out while the entity mentions are kept untouched. By doing so, we can focus on the main effects of the textual context without losing the entity information.

In our work, we propose a novel model-agnostic paradigm for debiasing RE, namely CoRE (Counterfactual Analysis based Relation Extraction), which adopts counterfactual analysis to mitigate the spurious influence of entity mentions. Specifically, CoRE does not touch the training of RE models, i.e., it allows a model to be exposed to biases on the original training set. We then construct a causal graph for RE to analyze the dependencies between variables in RE models, which acts as a "roadmap" for capturing the causal effects of the textual context and entity mentions. To rectify potentially biased predictions on test instances, CoRE "imagines" the counterfactual counterparts on our causal graph during inference to distill the biases. Last but not least, CoRE performs a bias mitigation operation with adaptive weights to produce a debiased decision for RE.

We highlight that CoRE is a flexible debiasing method that is applicable to popular RE models without changing their training processes. To evaluate the effectiveness of CoRE, we perform extensive experiments on public benchmark datasets. The results demonstrate that our proposed method can significantly improve the effectiveness and generalization of the popular RE models by mitigating the biases in an entity-aware manner.

2 Related Work

Sentence-level relation extraction. Early research efforts nguyen-grishman-2015-relation; wang-etal-2016-relation; zhang2017position train RE models from scratch based on lexicon-level features. Recent RE work fine-tunes pretrained language models (PLMs; devlin-etal-2019-bert; liu2019roberta). For example, K-Adapter wang2020k fixes the parameters of the PLM and uses feature adapters to infuse factual and linguistic knowledge. Recent work focuses on utilizing the entity information for RE zhou2021improved; yamada-etal-2020-luke, but this leaks superficial and spurious clues about the relations zhang2018graph. Despite the biases in existing RE models, scarce work has discussed the spurious correlation between entity mentions and relations that causes such biases. Our work investigates this issue and proposes CoRE to debias RE models for higher effectiveness.

Debiasing for Natural Language Processing. Debiasing is a fundamental problem in machine learning torralba2011unbiased. For natural language processing (NLP), some work performs data re-sampling to prevent models from capturing the unintended bias during training dixon2018measuring; geng2007boosting; kang2016noise; rayhan2017cusboost; nguyen2011borderline. Alternatively, wei2019eda and qian2020enhancing develop data augmentation for debiasing. Some recent work debiases NLP models based on causal inference qian2021counterfactual; nan2021uncovering. In RE, how to deal with the entity bias is also an important problem. For example, PA-LSTM zhang2017position masks the entity mentions with special tokens to prevent RE models from over-fitting entity names, a strategy also adopted by C-GCN zhang2018graph and SpanBERT joshi-etal-2020-spanbert. However, masking entities loses the semantic information of entities and leads to performance degradation. Different from these methods, our CoRE tackles the entity bias based on structured causal models. In this way, we debias RE models to focus on the textual context without losing the entity information.

Figure 2: The causal graph of RE models.

3 Methodology

Sentence-level relation extraction (RE) aims to extract the relation between a pair of entity mentions in a sentence. We propose CoRE (Counterfactual Analysis based Relation Extraction) as a model-agnostic technique that endows existing RE models with unbiased decisions during inference. CoRE follows the regular training process of existing work, regardless of the bias from the entity mentions. During inference, CoRE post-adjusts the biased prediction according to the effects of the bias. CoRE can be flexibly incorporated into popular RE models to improve their effectiveness and generalization based on counterfactual analysis without re-training the model.

In this section, we first formulate existing RE models in the form of a causal graph. Then, we introduce our bias distillation method, which distills the entity bias with our designed counterfactual analysis, and conduct an empirical analysis of how heavily existing RE models rely on entity mentions to make decisions. Finally, we mitigate the distilled bias from the predictions of RE models to improve their effectiveness.

3.1 Causality of Relation Extraction

In order to perform causal intervention, we first formulate the causal graph pearl2016causal; pearl2018book, a.k.a. the structural causal model, of RE models as in Figure 2, which sheds light on how the textual context and entity mentions affect the RE predictions. The causal graph is a directed acyclic graph indicating how a set of variables interact with each other through the causal relations behind the data and how variables obtain their values, e.g., how the input text X and the entity mentions E jointly determine the relation prediction Y in Figure 2. Before we conduct the counterfactual analysis that deliberately manipulates the values of nodes and prunes the causal graph, we first revisit conventional RE systems in this graphical view.

The causal graph in Figure 2 is applicable to a variety of RE models and imposes no constraints on the detailed implementations. Node X is the input text. On the edge X → E, we obtain the spans of the subject and object entities as node E through NER or human annotations zhang2017position. For example, in the aforementioned sentence "Mary gave birth to Jerry.", the entities are ['Mary', 'Jerry'].

On the edges (X, E) → Y, existing RE models take different designs. For example, C-GCN zhang2018graph obtains the relation prediction Y by encoding the entity mentions E on the pruned dependency tree of X using a graph convolutional network. IRE zhou2021improved uses PLMs as the encoder for X and marks the entity information of E with special tokens to utilize the entity information.
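As a concrete illustration of the edges (X, E) → Y, the sketch below turns a sentence and its entity spans into a marker-based model input; the marker strings and the span format are illustrative assumptions rather than the exact preprocessing of any particular system.

```python
# Minimal sketch of marker-based input construction, (X, E) -> model input.
# The marker strings and the (start, end) span format are illustrative assumptions.
def mark_entities(tokens, subj_span, obj_span):
    """Insert subject/object markers around the given inclusive token spans."""
    marked = []
    for i, tok in enumerate(tokens):
        if i == subj_span[0]:
            marked.append("[SUBJ]")
        if i == obj_span[0]:
            marked.append("[OBJ]")
        marked.append(tok)
        if i == subj_span[1]:
            marked.append("[/SUBJ]")
        if i == obj_span[1]:
            marked.append("[/OBJ]")
    return marked

tokens = ["Mary", "gave", "birth", "to", "Jerry", "."]
print(mark_entities(tokens, subj_span=(0, 0), obj_span=(4, 4)))
# ['[SUBJ]', 'Mary', '[/SUBJ]', 'gave', 'birth', 'to', '[OBJ]', 'Jerry', '[/OBJ]', '.']
```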

3.2 Bias Distillation

Based on our causal graph in Figure 2, we diagnose how the entity bias affects inference. After training, the causal dependencies among the variables are learned in terms of the model parameters. The entity bias can mislead the model into making wrong predictions while ignoring the actual relation-describing textual context in X, i.e., the prediction is biased towards the causal dependency E → Y.

Figure 3: The original causal graph of RE models (left) together with its two counterfactual alternates for the entity bias (middle) and label bias (right). The shading indicates the mask of corresponding variables.

The conventional biased prediction can only see the output of the entire graph given a sentence x, ignoring how specific entity mentions affect the relation prediction. However, causal inference encourages us to think outside this black box. From the graphical point of view, we are no longer required to execute the entire causal graph as a whole. Instead, we can directly manipulate the nodes and observe the output. This operation is termed intervention in causal inference, which we denote as do(·). It wipes out all the incoming links of a node and demands that the node take a certain value.

We distill the entity bias via intervention and its induced counterfactual. Counterfactual means "counter to the facts", and it takes one further step that assigns a hypothetical combination of values to the variables. For example, we can remove the input textual context by masking X, but maintain E as the original entity mentions, as if the original text still existed.

We use the input text X as the control variable on which the intervention is conducted, aiming to assess its effects, since there would not be any valid relation between the entities in E if the input text were empty. We denote the output logits after an intervention do(X = x̃), which sets the input text to a value x̃, as follows:

Y_{x̃, e} = Y(do(X = x̃), E = e),   (1)

where e denotes the original entity mentions. Following this notation, the original prediction, i.e., Y(X = x, E = e) for the factual input text x, can be re-written as Y_{x, e}.

To distill the entity bias, we conduct the intervention on X while keeping the variable E as the original entity mentions e, as if the original input text x had existed. Specifically, we mask the tokens in x to produce x̃ but keep the entity mentions as they are, so that the textual context is removed while the entity information is maintained. Accordingly, the counterfactual prediction is denoted as Y_{x̃, e} (see Figure 3). In this case, since the model cannot see any textual context in the factual input after the intervention do(X = x̃), but still has access to the original entity mentions as inputs, the prediction purely reflects the influence of E. In other words, Y_{x̃, e} refers to the output, i.e., a probability distribution or a logit vector, when only the entity mentions are given as input without the textual context.
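The sketch below illustrates how such counterfactual inputs can be built at the token level; the mask token and the data layout are illustrative assumptions, not the exact implementation of the released code.

```python
# Illustrative construction of counterfactual inputs (assumed mask token "[MASK]").
def make_counterfactual(tokens, entity_positions, keep_entities=True):
    """Mask context tokens; optionally keep the entity tokens to isolate the entity bias."""
    entity_positions = set(entity_positions)
    return [
        tok if (keep_entities and i in entity_positions) else "[MASK]"
        for i, tok in enumerate(tokens)
    ]

tokens = ["Mary", "gave", "birth", "to", "Jerry", "."]
entity_positions = [0, 4]  # token indices of the subject and object mentions

x_tilde = make_counterfactual(tokens, entity_positions, keep_entities=True)
# entity-bias counterfactual: ['Mary', '[MASK]', '[MASK]', '[MASK]', 'Jerry', '[MASK]']
x_both_masked = make_counterfactual(tokens, entity_positions, keep_entities=False)
# label-bias counterfactual (Figure 3, right): every token masked
```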

Figure 4: Hit (y-axis) is the fraction of test instances whose original relation prediction Y_{x, e} is ranked among the top K most confident relations of the counterfactual prediction Y_{x̃, e}. We report Hit of the model IRE on the test instances whose original relation prediction is title, employee_of, or origin.

To investigate how heavily state-of-the-art models rely on the entity mentions for RE, we conduct an empirical study that compares the original prediction Y_{x, e} with the counterfactual one Y_{x̃, e}. Specifically, we calculate the fraction of test instances (y-axis) whose original relation prediction is ranked among the top K most confident relations of the counterfactual prediction Y_{x̃, e}. We term this fraction Hit.
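For reference, a minimal sketch of this Hit computation from two logit matrices is given below; the variable names and toy numbers are purely illustrative.

```python
import numpy as np

def hit_at_k(original_logits, counterfactual_logits, k):
    """Fraction of instances whose original argmax relation lies among the
    top-k relations of the counterfactual (entity-only) prediction."""
    original_pred = original_logits.argmax(axis=1)                # (n,)
    topk = np.argsort(-counterfactual_logits, axis=1)[:, :k]      # (n, k)
    return float(np.mean([p in row for p, row in zip(original_pred, topk)]))

# toy example with 3 instances and 4 relation classes
orig = np.array([[0.1, 0.7, 0.1, 0.1], [0.6, 0.2, 0.1, 0.1], [0.2, 0.2, 0.5, 0.1]])
cf   = np.array([[0.2, 0.5, 0.2, 0.1], [0.1, 0.6, 0.2, 0.1], [0.3, 0.1, 0.4, 0.2]])
print(hit_at_k(orig, cf, k=1))  # 0.666..., i.e., 2 of the 3 instances agree
```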

We present Hit for IRE zhou2021improved, a state-of-the-art RE model, in Figure 4 on the test instances whose original relation prediction is title, employee_of, or origin. A higher Hit means that, for more instances, the model infers the same relation given only the entity mentions regardless of whether the textual context is provided, which implies stronger causal effects from the entity mentions E, i.e., the model relies more heavily on the entity mentions for RE.

We observe that when K = 1, Hit is more than 50%, which implies that the model extracts the same relation even without textual context on more than half of the instances. For larger K, Hit increases significantly and reaches more than 80%. These observations imply a surprising and embarrassing result: the state-of-the-art model relies on the entity bias for RE on many instances. The entity bias reflected by Y_{x̃, e} can lead to wrong extractions when the relation implied by the entity mentions does not exist in the input text. This poses a challenge to the generalization of RE models, as validated by our experimental results (Section 4.3).

In addition to Y_{x̃, e}, which reflects the causal effects of the entity mentions, there is another kind of bias that is not conditioned on the entity mentions E but reflects the general bias in the whole dataset: Y_{x̃, ẽ}. It corresponds to the counterfactual input where both the textual context and the entity mentions are removed. In this case, since the model cannot access any information from the input after this removal, Y_{x̃, ẽ} naturally reflects the label bias that the model acquires from biased training. The causal graphical views of the original prediction Y_{x, e}, the counterfactual Y_{x̃, e} for the entity bias, and Y_{x̃, ẽ} for the label bias are visualized in Figure 3.

3.3 Bias Mitigation

As discussed in Section 1, instead of the static likelihood that tends to be biased, the unbiased relation prediction lies in the difference between the observed outcome Y_{x, e} and its counterfactual predictions Y_{x̃, e} and Y_{x̃, ẽ}. The latter two are the biases that we want to mitigate from the relation prediction.

Intuitively, the unbiased prediction that we seek corresponds to the linguistic stimulus of moving from a blank input to the observed textual context with its specific relation descriptions, rather than merely the entity bias. The context-specific clues of the relations are key to informative unbiased predictions: even if the overall prediction is biased towards the relation "schools_attended" because of an object entity like "Duke University", the textual context "work at" indicates the relation "employee_of" rather than "schools_attended".

Our final goal is to use the direct effect of the textual context from X to Y for the debiased prediction, mitigating (denoted as \) the label bias and the entity bias from the prediction, i.e., Y_{x, e} \ Y_{x̃, e} \ Y_{x̃, ẽ}, so as to block the spread of the biases from training to inference. The debiased prediction via bias mitigation can be formulated via the conceptually simple but empirically effective element-wise subtraction operation:

Y_final = Y_{x, e} − λ1 · Y_{x̃, e} − λ2 · Y_{x̃, ẽ},   (2)

where λ1 and λ2 are two independent hyper-parameters balancing the terms for mitigating the entity and label biases respectively. Note that the bias mitigation in Equation 2 for the entity and label biases corresponds to the Total Direct Effect (TDE) and the Total Effect (TE) in causal inference tang2020unbiased; vanderweele2015explanation; pearl2009causality, respectively. We adaptively set the values of λ1 and λ2 for different datasets based on grid beam search hokamp2017lexically in a scoped two-dimensional space:

(λ1*, λ2*) = argmax_{λ1, λ2 ∈ [α, β]} F(λ1, λ2),   (3)

where F is a metric function (e.g., the F1 score) for evaluation and α, β are the boundaries of the search range. We search the values of λ1 and λ2 once on the validation set and use the fixed values for inference on all testing instances. Since entity types can restrict the candidate relations lyu2021relation, we use the entity type information, if available, to restrict the candidate relations during inference, which strengthens the effects of entity types for relation extraction.
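A minimal sketch of this inference-time debiasing is given below, assuming precomputed logit matrices and a generic validation metric; the grid range and step are illustrative choices, not the values used in our experiments.

```python
import numpy as np
from itertools import product

def debiased_logits(y_orig, y_entity_cf, y_label_cf, lam1, lam2):
    """Equation (2): subtract the distilled entity and label biases from the original logits."""
    return y_orig - lam1 * y_entity_cf - lam2 * y_label_cf

def grid_search(y_orig, y_entity_cf, y_label_cf, labels, metric,
                low=-2.0, high=2.0, step=0.5):
    """Equation (3): pick (lam1, lam2) that maximizes the validation metric."""
    grid = np.arange(low, high + step, step)
    best, best_score = (0.0, 0.0), -np.inf
    for lam1, lam2 in product(grid, grid):
        preds = debiased_logits(y_orig, y_entity_cf, y_label_cf, lam1, lam2).argmax(axis=1)
        score = metric(labels, preds)
        if score > best_score:
            best, best_score = (lam1, lam2), score
    return best

# e.g., metric = lambda y, p: f1_score(y, p, average="macro") with sklearn.metrics.f1_score
```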

Overall, the proposed CoRE replaces the conventional one-time prediction Y_{x, e} with the debiased prediction Y_final, which essentially "thinks" twice: once for the original observation Y_{x, e}, and once for the hypothesized counterfactuals Y_{x̃, e} and Y_{x̃, ẽ}.


Dataset Train Dev Test Classes
TACRED 68,124 22,631 15,509 42
SemEval 6,507 1,493 2,717 19
Re-TACRED 58,465 19,584 13,418 40
TACRED-Revisit 68,124 22,631 15,509 42
Table 1: Statistics of datasets.

4 Experiments

In this section, we evaluate the performance of our CoRE method when applied to RE models. We compare our method against a variety of strong baselines on the task of sentence-level RE. Our experimental settings closely follow those of previous work zhang2017position; zhou2021improved; nan2021uncovering to ensure a fair comparison.

4.1 Experimental Settings

Datasets. We use four widely-used RE benchmarks: TACRED zhang2017position, SemEval hendrickx2019semeval, TACRED-Revisit alt2020tacred, and Re-TACRED stoica2021re for evaluation. TACRED contains over 106k mention pairs drawn from the yearly TAC KBP challenge. alt2020tacred relabeled the development and test sets of TACRED. Re-TACRED is a further relabeled version of TACRED after refining its label definitions. The statistics of these datasets are shown in Table 1.

Method TACRED TACRED-Revisit Re-TACRED SemEval
C-SGC wu2019simplifying 52.1 62.8 69.8 71.3
SpanBERT joshi-etal-2020-spanbert 55.7 65.1 74.1 74.9
CP peng2020learning 56.8 67.1 78.1 79.6
RECENT lyu2021relation 63.3 70.5 81.1 74.6
KnowPrompt chen2021knowprompt 57.6 68.7 79.0 81.8
IRE zhou2021improved 59.2 68.4 78.6 79.1
LUKE yamada-etal-2020-luke 58.8 67.5 80.2 82.1
LUKE + Resample burnaev2015influence 59.3 68.2 80.5 82.5
LUKE + Focal lin2017focal 59.1 67.7 80.3 82.4
LUKE + CFIE nan2021uncovering 59.8 68.0 80.4 82.2
LUKE + Entity Mask zhang2017position 57.9 67.0 79.5 82.0
LUKE + CoRE 61.7 70.2 81.6 83.6
IRE zhou2021improved 63.1 70.6 81.5 81.4
IRE + Resample burnaev2015influence 63.3 71.0 81.9 81.6
IRE + Focal lin2017focal 62.9 70.7 81.2 81.1
IRE + CFIE nan2021uncovering 63.3 70.9 81.6 81.7
IRE + Entity Mask zhang2017position 61.4 69.3 79.6 81.2
IRE + CoRE 64.4 71.8 82.8 82.3
Table 2: F1-macro scores (%) of RE on the test sets of TACRED, TACRED-Revisit, Re-TACRED, and SemEval. The best results in each column are highlighted in bold font.

We use the widely-used F1-macro score as the main evaluation metric nan2021uncovering, which is the balanced harmonic mean of precision and recall, as well as F1-micro for a more comprehensive evaluation. F1-macro is more suitable than F1-micro for reflecting the extent of biases, especially in highly-skewed cases, since F1-macro is evenly influenced by the performance on each category, i.e., it is category-sensitive, whereas F1-micro simply gives equal weight to all instances kim2019small.
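For concreteness, the two scores can be computed as below; the toy labels are purely illustrative.

```python
from sklearn.metrics import f1_score

# Toy predictions over 4 relation classes; class 0 dominates the data.
gold = [0, 0, 0, 0, 0, 0, 1, 2, 3, 3]
pred = [0, 0, 0, 0, 0, 0, 0, 0, 3, 0]

print(f1_score(gold, pred, average="macro", zero_division=0))  # low: missed rare classes hurt
print(f1_score(gold, pred, average="micro", zero_division=0))  # higher: dominated by the frequent class
```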

Method TACRED TACRED-Revisit Re-TACRED SemEval
LUKE yamada-etal-2020-luke 72.7 80.6 90.3 87.8
LUKE + Resample burnaev2015influence 73.1 80.9 90.5 87.9
LUKE + Focal lin2017focal 72.9 80.7 90.4 87.6
LUKE + CFIE nan2021uncovering 73.3 80.8 90.5 88.0
LUKE + Entity Mask zhang2017position 72.3 80.4 90.1 87.5
LUKE + CoRE 74.6 81.4 90.9 88.7
Table 3: F1-micro scores (%) of RE on the test sets of TACRED, TACRED-Revisit, Re-TACRED, and SemEval. The best results in each column are highlighted in bold font.

Method TACRED Re-TACRED
LUKE yamada-etal-2020-luke 51.9 65.3
w/ Resample burnaev2015influence 53.2 66.7
w/ Focal lin2017focal 52.4 65.9
w/ CFIE nan2021uncovering 52.1 65.6
w/ Entity Mask zhang2017position 54.5 67.1
w/ CoRE (ours) 69.3 83.1
IRE zhou2021improved 56.4 68.1
w/ Resample burnaev2015influence 58.1 70.3
w/ Focal lin2017focal 56.8 68.7
w/ CFIE nan2021uncovering 57.1 68.4
w/ Entity Mask zhang2017position 57.3 68.9
w/ CoRE (ours) 73.6 85.4

Table 4: F1-macro scores (%) of RE on the challenging test sets of TACRED and Re-TACRED, in which the relations implied by the entity mentions do not exist in the textual context. 'w/' denotes 'with'. The best results in each column are highlighted in bold font.
Method TACRED TACRED-Revisit Re-TACRED SemEval
IRE zhou2021improved 61.2 59.3 57.5 54.1
IRE + Resample burnaev2015influence 60.5 58.4 56.8 53.5
IRE + Focal lin2017focal 60.9 58.9 57.1 53.7
IRE + CFIE nan2021uncovering 60.1 57.8 56.2 52.9
IRE + Entity Mask zhang2017position 61.5 60.1 57.3 54.2
IRE + CoRE 57.3 55.6 54.3 50.8
Table 5: Experimental results (unfairness; %) of Relation Extraction on the test sets of TACRED, TACRED-Revisit, Re-TACRED, and SemEval (lower is better). The best results in each column are highlighted in bold font.

Compared methods. We take the following RE models into comparison. (1) C-SGC wu2019simplifying simplifies GCN and combines it with LSTM, leading to improved performance over each method alone. (2) SpanBERT joshi-etal-2020-spanbert extends BERT with a new pretraining objective of continuous span prediction. (3) CP peng2020learning is an entity-masked contrastive pre-training framework for RE. (4) RECENT lyu2021relation restricts the candidate relations based on the entity types. (5) KnowPrompt chen2021knowprompt is a knowledge-aware prompt-tuning approach. (6) LUKE yamada-etal-2020-luke pretrains the language model on both large text corpora and knowledge graphs and further proposes an entity-aware self-attention mechanism. (7) IRE zhou2021improved proposes an improved entity representation technique in the data preprocessing.

Among the above RE models, we apply our CoRE on LUKE and IRE. To demonstrate the effectiveness of debiased inference, we also compare with the following debiasing techniques that are applied to the same two RE models. (1) Focal lin2017focal adaptively reweights the losses of different instances so as to focus on the hard ones. (2) Resample burnaev2015influence up-samples rare categories by the inversed sample fraction during training. (3) Entity Mask zhang2017position: masks the entity mentions with special tokens to reduce the over-fitting on entities. (4) CFIE nan2021uncovering is also a causal inference method. In contrast to our method, CFIE strengthens the causal effects of entities by masking entity-centric information in the counterfactual predictions.

Model configuration. For the hyper-parameters of the considered baseline methods, e.g., the batch size, the number of hidden units, the optimizer, and the learning rate, we set them as suggested by their authors. For the hyper-parameters λ1 and λ2 of our CoRE method, we fix the search range [α, β] and the search step of Equation 3 in advance and select the values on the validation set, as described in Section 3.3. For all experiments, we report the median scores of five runs of training using different random seeds.

4.2 Overall Performance

We implement our CoRE with LUKE and IRE. Table 2 reports the RE results on the TACRED, TACRED-Revisit, Re-TACRED, and SemEval datasets. Our CoRE method improves the F1-macro scores of LUKE by 4.9% on TACRED, 4.0% on TACRED-Revisit, 1.7% on Re-TACRED, and 1.7% on SemEval, and improves IRE by 1.2% on TACRED, 1.4% on TACRED-Revisit, 0.9% on Re-TACRED, and 1.8% on SemEval. As a result, our CoRE achieves substantial improvements for LUKE and IRE, and enables them to outperform the baseline methods. Additionally, we report the F1-micro scores in Table 3, showing that CoRE improves LUKE by 2.6% on TACRED, 1.0% on TACRED-Revisit, 0.7% on Re-TACRED, and 1.0% on SemEval. Overall, our CoRE method significantly improves the effectiveness of RE in terms of both F1-macro and F1-micro scores. These experimental results validate the effectiveness and generalization of our proposed method.

Among the baseline debiasing methods, Resample, Focal, and CFIE cannot distill the entity bias in an entity-aware manner like ours. Entity Mask leads to a loss of information, while our CoRE enables RE models to focus on the main effects of the textual context without losing the entity information. The superiority of CoRE highlights the importance of causal-inference-based entity bias analysis for debiasing RE, which compares traditional likelihood-based predictions with hypothesized counterfactual ones to produce debiased predictions. Besides, the proposed CoRE works at inference time and thus can be employed on previously trained models. In this way, CoRE serves as a model-agnostic approach to enhance RE models without changing their training process.

Method F1-macro Δ
LUKE + CoRE 61.7 –
w/o CoRE 58.8 2.9
w/o EBM 59.5 2.2
w/o LBM 60.8 0.9
w/o BSH 60.1 1.6
IRE + CoRE 64.4 –
w/o CoRE 63.1 1.3
w/o EBM 63.4 1.0
w/o LBM 63.9 0.5
w/o BSH 63.8 0.6

Table 6: Ablation study based on the TACRED dataset. The analyzed model components include the entity bias mitigation operation (EBM), the label bias mitigation operation (LBM), and the beam search for hyper-parameters (BSH). 'w/o' denotes 'without'. Δ denotes the performance drop in terms of F1-macro scores.

4.3 Analysis on Entity Bias

Some work argues that RE models may rely on the entity mentions instead of the textual context to make relation predictions zhang2018graph; joshi-etal-2020-spanbert. The empirical results in Figure 4 validate this argument: on many instances, the baseline RE model makes the same prediction given only the entity mentions, regardless of whether the textual context is present. The entity bias can mislead RE models into making wrong predictions when the relation implied by the entity mentions does not exist in the textual context.

Input sentence Original Debiased Counterfactual
More than 1,100 miles (1,770 kilometers) away, Alan Gross passes his days in a Cuban military hospital, watching baseball on a small television or jamming with his jailers on a stringed instrument they gave him. origin ✗ countries_of_residence ✓ origin
He said that according to his investigation, Bibi drew the ire of fellow farmhands after a dispute in June 2009, when they refused to drink water she collected and she refused their demands that she convert to Islam. religion ✗ no_relation ✓ religion
ShopperTrak also estimates foot traffic in the U.S. was 11.2 percent below what it would have been Sunday if the blizzard had not occurred and 13.9 percent below what it could have been Monday. country_of_headquarters ✗ no_relation ✓ country_of_headquarters
Table 7: A case study for IRE and our CoRE on the relation extraction dataset TACRED. Underlines and wavy lines highlight the subject and object entities respectively. We report the original prediction, the corresponding counterfactual prediction and the debiased prediction.

To evaluate whether RE models can generalize well to particularly challenging instances where the relations implied by the entity mentions do not exist in the textual context, we propose a filtered evaluation setting in which we keep only the test instances whose entity bias differs from their ground-truth relations. In this setting, RE models cannot overly rely on the entity mentions for RE, since the entity mentions no longer provide superficial and spurious clues for the ground-truth relations.
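A minimal sketch of this filtering step is shown below, assuming entity-only (counterfactual) predictions are available for every test instance; the data layout is an illustrative assumption.

```python
import numpy as np

def build_filtered_test_set(instances, counterfactual_logits, gold_labels):
    """Keep the instances whose entity-only (counterfactual) prediction disagrees
    with the ground-truth relation, so that entity mentions alone cannot solve them."""
    entity_only_pred = np.asarray(counterfactual_logits).argmax(axis=1)
    keep = entity_only_pred != np.asarray(gold_labels)
    return [inst for inst, flag in zip(instances, keep) if flag]
```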

We present the evaluation results on the filtered test set in Table 4. Our CoRE method consistently and substantially improves the effectiveness of LUKE and IRE on the filtered test set and outperforms the baseline methods by a significant margin, which validates the effectiveness and generalization of our method to mitigate the entity bias in the challenging cases.

4.4 Evaluation on Fairness

According to sweeney2019transparent, the more imbalanced/skewed a prediction produced by a trained model is, the more unfair opportunities it gives over predefined categories, and the more unfairly discriminative the trained model is. We thus follow previous work xiang2020learning; sweeney2019transparent; qian2021counterfactual and use the imbalance divergence metric to evaluate how imbalanced/skewed/unfair a prediction Y is:

D(Y) = JS(Y, U),   (4)

where D(Y) is defined as the distance between the prediction Y and the uniform distribution U. Specifically, we use the JS divergence as the distance metric, since it is symmetric (i.e., JS(Y, U) = JS(U, Y)) and strictly scoped fuglede2004jensen. Based on this, to evaluate the entity bias of a trained RE model, we average the following relative entity mention imbalance (REI) measure over all the testing instances containing whichever entity mentions:

REI = (1/|E|) Σ_{e ∈ E} (1/|T_e|) Σ_{t ∈ T_e} D(Y_t),   (5)

where t is an input instance, T is the testing set, T_e ⊆ T is the subset of testing instances containing the entity mention e, Y_t is the prediction output for t, and E is the corpus of entity mentions. This metric captures the distance between all predictions and the fair uniform distribution U.
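For illustration, the per-prediction imbalance term D(·) can be computed as below; averaging it over testing instances grouped by entity mention gives an REI-style score, and the grouping details here are an assumption.

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

def imbalance_divergence(probs):
    """JS divergence between a predicted distribution and the uniform one."""
    probs = np.asarray(probs, dtype=float)
    uniform = np.full_like(probs, 1.0 / len(probs))
    return jensenshannon(probs, uniform) ** 2  # squared JS distance = JS divergence

print(imbalance_divergence([0.25, 0.25, 0.25, 0.25]))  # 0.0: perfectly balanced
print(imbalance_divergence([0.97, 0.01, 0.01, 0.01]))  # larger: skewed toward one class
```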

We follow the experimental settings in Section 4.2 and report the fairness test in Table 5. The results show that our CoRE method significantly and consistently reduces the imbalance metric (lower is better) when employed on IRE, indicating that it helps mitigate the entity bias.

4.5 Ablation and Case Study

We conduct ablation studies on CoRE to empirically examine the contribution of its main technical components, including the entity bias mitigation operation (EBM), the label bias mitigation operation (LBM), and the beam search for hyper-parameters (BSH).

We report the results of the ablation study in Table 6. We observe that removing our CoRE causes serious performance degradation, which provides evidence that using our counterfactual framework for RE can explicitly mitigate biases and generalize better to unseen examples. Moreover, we observe that mitigating the two types of biases is consistently helpful for RE. The key reason is that the distilled label bias provides an instance-agnostic offset and the distilled entity bias provides an entity-aware one in the prediction space, which makes the RE models focus on extracting relations from the textual context without losing the entity information. Meanwhile, the beam search for hyper-parameters effectively finds two scaling factors that amplify or shrink the two biases, so that the biases are mitigated properly and adaptively.

Table 7 gives a qualitative comparison between CoRE and IRE on TACRED. The results show that the state-of-the-art RE model IRE returns relations that do not exist in the textual context between the considered entities. For example, given "Bibi drew the ire of fellow farmhands after a dispute in June 2009, when they refused to drink water she collected and she refused their demands that she convert to Islam.", no relation between Bibi and Islam exists in the text, but the baseline model believes that the relation between them is "religion". The counterfactual prediction accounts for this disappointing result: given only the entity mentions Bibi and Islam, without any textual context, the RE model still returns the relation "religion". This implies that the model makes the prediction for the original input by relying on the entity mentions, which leads to the wrong RE prediction. Our CoRE method distills the biases through counterfactual predictions and mitigates them to isolate the main effects of the textual context, which leads to the correct predictions shown in Table 7.

Last but not least, we conduct experiments on the fairness of different models, and present respective results in the appendix.

5 Conclusion

We have designed a counterfactual analysis based method named CoRE to debias RE. We distill the entity bias and mitigate the distilled biases with the help of our causal graph for RE, which serves as a roadmap for analyzing RE models. Based on the counterfactual analysis, we can analyze the side-effects of entity mentions in RE and debias the models in an entity-aware manner. Extensive experiments demonstrate that our method improves the effectiveness and generalization of RE. Future work includes analyzing the effects of other factors that can cause bias in natural language processing.

Acknowledgement

The authors would like to thank the anonymous reviewers for their discussion and feedback.

Muhao Chen and Wenxuan Zhou are supported by the United States National Science Foundation Grant IIS 2105329, and by the DARPA MCS program under Contract No. N660011924033 with the United States Office of Naval Research. Except for Muhao Chen and Wenxuan Zhou, this paper is supported by NUS ODPRT Grant R252-000-A81-133 and the Singapore Ministry of Education Academic Research Fund Tier 3 under MOE's official grant number MOE2017-T3-1-007.

References