Poisoning Knowledge Graph Embeddings via Relation Inference Patterns

by   Peru Bhardwaj, et al.
ADAPT Centre

We study the problem of generating data poisoning attacks against Knowledge Graph Embedding (KGE) models for the task of link prediction in knowledge graphs. To poison KGE models, we propose to exploit their inductive abilities which are captured through the relationship patterns like symmetry, inversion and composition in the knowledge graph. Specifically, to degrade the model's prediction confidence on target facts, we propose to improve the model's prediction confidence on a set of decoy facts. Thus, we craft adversarial additions that can improve the model's prediction confidence on decoy facts through different inference patterns. Our experiments demonstrate that the proposed poisoning attacks outperform state-of-art baselines on four KGE models for two publicly available datasets. We also find that the symmetry pattern based attacks generalize across all model-dataset combinations which indicates the sensitivity of KGE models to this pattern.




1 Introduction

Knowledge graph embeddings (KGE) are increasingly deployed in domains with high-stakes decision making like healthcare and finance (noy2019knowledgegraphs), where it is critical to identify the potential security vulnerabilities that might cause failure. But research on the adversarial vulnerabilities of KGE models has received little attention. We study the adversarial vulnerabilities of KGE models through data poisoning attacks. These attacks craft input perturbations at training time that aim to subvert the learned model's predictions at test time.

Poisoning attacks have been proposed for models that learn from other graph modalities (xu2020advgraphsurvey), but they cannot be applied directly to KGE models. This is because they rely on gradients of all possible entries in a dense adjacency matrix and thus do not scale to large knowledge graphs with multiple relations. The main challenge in designing poisoning attacks for KGE models is the large combinatorial search space of candidate perturbations, which is of the order of millions for benchmark knowledge graphs with thousands of nodes. Two recent studies (zhang2019kgeattack; pezeshkpour2019criage) attempt to address this problem through random sampling of candidate perturbations (zhang2019kgeattack) or through a vanilla auto-encoder that reconstructs discrete entities and relations from latent space (pezeshkpour2019criage). However, random sampling depends on the number of candidates being sampled, and the auto-encoder proposed in (pezeshkpour2019criage) is only applicable to multiplicative KGE models.

Figure 1: Composition based adversarial attack on fraud detection. The knowledge graph consists of two types of entities - Person and BankAccount. The original KGE model predicts the target triple as True. But a malicious attacker adds adversarial triples (in purple) that connect the target entity with a non-suspicious person through the composition pattern. Now, the KGE model predicts the target triple as False.

In this work, we propose to exploit the inductive abilities of KGE models to craft poisoned examples against the model. The inductive abilities of KGE models are expressed through different connectivity patterns like symmetry, inversion and composition between relations in the knowledge graph. We refer to these as inference patterns. We focus on the task of link prediction using KGE models and consider the adversarial goal of degrading the predicted rank of target missing facts. To degrade the ranks of target facts, we propose to carefully select a set of decoy facts and exploit the inference patterns to improve performance on this decoy set. Figure 1 shows an example of the use of composition pattern to degrade KGE model’s performance.

We explore a collection of heuristic approaches to select the decoy triples and craft adversarial perturbations that use different inference patterns to improve the model’s predictive performance on these decoy triples. Our solution addresses the challenge of large candidate space by breaking down the search space into smaller steps - (i) determining adversarial relations; (ii) determining the decoy entities that most likely violate an inference pattern; and (iii) determining remaining adversarial entities in the inference pattern that are most likely to improve the rank of decoy triples.

We evaluate the proposed attacks on four state-of-art KGE models with varied inductive abilities - DistMult, ComplEx, ConvE and TransE. We use two publicly available benchmark datasets for link prediction - WN18RR and FB15k-237. Comparison against the state-of-art poisoning attacks for KGE models shows that our proposed attacks outperform them in

all cases. We find that the attacks based on symmetry pattern perform the best and generalize across all model-dataset combinations.

Thus, the main contribution of our research is an effective method to generate data poisoning attacks, which is based on inference patterns captured by KGE models. Through a novel reformulation of the problem of poisoning KGE models, we overcome the existing challenge in the scalability of poisoning attacks for KGE models. Furthermore, the extent of effectiveness of the attack relying on an inference pattern indicates the KGE model’s sensitivity to that pattern. Thus, our proposed poisoning attacks help in understanding the KGE models.

2 Problem Formulation

For a set of entities E and a set of relations R, a knowledge graph is a collection of triples represented as {(s, r, o)} ⊆ E × R × E, where s, r and o represent the subject, relation and object in a triple. A Knowledge Graph Embedding (KGE) model encodes entities and relations to a low-dimensional continuous vector space R^k, where k is the embedding dimension. To do so, it uses a scoring function f(s, r, o) which depends on the entity and relation embeddings to assign a score to each triple. Table 1 shows the scoring functions of state-of-art KGE models studied in this research. The embeddings are learned such that the scores for true (existing) triples in the knowledge graph are higher than the scores for false (non-existing) triples in the knowledge graph.

Model      Scoring Function
DistMult   ⟨e_s, e_r, e_o⟩
ComplEx    Re(⟨e_s, e_r, ē_o⟩)
ConvE      σ(vec(σ([ē_s; ē_r] * ω)) W) e_o
TransE     −‖e_s + e_r − e_o‖_p

Table 1: Scoring functions of the KGE models used in this research. For ComplEx, e ∈ C^k; for the remaining models, e ∈ R^k. Here, ⟨·⟩ denotes the tri-linear dot product; σ denotes the sigmoid activation function; * denotes 2D convolution; ¯ denotes the conjugate for complex vectors, and 2D reshaping for real vectors in the ConvE model; ‖·‖_p denotes the l-p norm.

Multiplicative vs Additive Interactions: The scoring functions of KGE models exhibit multiplicative or additive interactions (chandrahas2018towards). The multiplicative models score triples through multiplicative interactions of the subject, relation and object embeddings. The scoring function for these models can be expressed as f(s, r, o) = e_r^T F(e_s, e_o), where the function F measures the compatibility between the subject and object embeddings and varies across the different models within this family. DistMult, ComplEx and ConvE have such interactions. On the other hand, additive models score triples through additive interactions of the subject, relation and object embeddings. The scoring function for such models can be expressed as f(s, r, o) = −‖M_r e_s + e_r − M_r e_o‖_p, where e_s, e_o ∈ R^{k_e}, e_r ∈ R^{k_r}, and M_r is the projection matrix from entity space R^{k_e} to relation space R^{k_r}. TransE has additive interactions.
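As an illustration, the scoring functions from Table 1 can be sketched in NumPy. This is a minimal sketch, not the authors' implementation; the toy embeddings are stand-ins for trained ones, and ConvE is omitted since it requires a convolution stack.

```python
# Sketch of the Table 1 scoring functions; toy embeddings, not trained ones.
import numpy as np

def score_distmult(e_s, e_r, e_o):
    # tri-linear dot product <e_s, e_r, e_o> (multiplicative)
    return np.sum(e_s * e_r * e_o)

def score_complex(e_s, e_r, e_o):
    # Re(<e_s, e_r, conj(e_o)>) with complex-valued embeddings
    return np.real(np.sum(e_s * e_r * np.conj(e_o)))

def score_transe(e_s, e_r, e_o, p=1):
    # -||e_s + e_r - e_o||_p (additive / translational)
    return -np.linalg.norm(e_s + e_r - e_o, ord=p)

e_s, e_r, e_o = np.array([1.0, 2.0]), np.array([0.5, -1.0]), np.array([2.0, 0.0])
print(score_distmult(e_s, e_r, e_o))      # identical if e_s and e_o are swapped
print(score_transe(e_s, e_r, e_s + e_r))  # a perfect translation scores 0.0
```

Note that score_distmult(e_s, e_r, e_o) equals score_distmult(e_o, e_r, e_s) for any embeddings, which is exactly the symmetric behaviour that the symmetry attacks later exploit.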

Inductive Capacity of KGE models: The general intuition behind the design of the scoring functions of KGE models is to capture logical properties between relations from the observed facts in the knowledge graph. These logical properties or inference patterns can then be used to make downstream inferences about entities and relations. For example, if the relation r1 is the inverse of the relation r2, then when the fact r2(a, b) is true, the fact r1(b, a) is also true and vice versa. A model that can capture the inversion pattern can thus predict missing facts about r1 based on observed facts about r2. The most studied inference patterns in the current literature are symmetry, inversion and composition, since they occur very frequently in real-world knowledge graphs. In this work, we use these patterns to investigate the adversarial vulnerability of KGE models.

Link Prediction: Since most of the existing knowledge graphs are incomplete, a standard use case of KGE models is to predict missing triples in the knowledge graph. This task is evaluated by an entity ranking procedure. Given a test triple (s, r, o), the subject entity is replaced by each entity from E in turn. These replacements are referred to as synthetic negatives. The KGE model's scoring function is used to predict scores of these negative triples. The scores are then sorted in descending order and the rank of the correct entity is determined. These steps are repeated for the object entity of the triple.

The state-of-art evaluation metrics for this task are (i) MR, which is the mean of the predicted ranks; (ii) MRR, which is the mean of the reciprocals of the predicted ranks; and (iii) Hits@n, which counts the proportion of correct entities ranked in the top-n. In the filtered setting (bordes2013transe), negative triples that already exist in the training, validation or test set are filtered out; that is, their scores are ignored while computing the ranks. Depending on the domain of use, either the subject rank, the object rank, or both ranks of the test triple are used to determine the model's confidence in predicting a missing link. (Note that KGE models do not provide model uncertainty estimates.)
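The filtered ranking protocol can be sketched as follows. This is a toy sketch: the scoring function and the triple store are stand-ins for a trained KGE model and a real knowledge graph, and only one side of the ranking is shown.

```python
# Minimal sketch of the filtered entity-ranking protocol; score_fn and the
# triple sets are toy stand-ins, not a trained KGE model.
def filtered_rank(score_fn, test_triple, entities, known_triples, side="object"):
    s, r, o = test_triple
    correct = o if side == "object" else s
    target_score = score_fn(s, r, o)
    rank = 1
    for e in entities:
        if e == correct:
            continue
        cand = (s, r, e) if side == "object" else (e, r, o)
        if cand in known_triples:      # filtered setting: skip existing triples
            continue
        if score_fn(*cand) > target_score:
            rank += 1
    return rank

# Toy example: higher score = more plausible.
scores = {("a", "r", "b"): 0.9, ("a", "r", "c"): 0.5, ("a", "r", "d"): 0.95}
score_fn = lambda s, r, o: scores.get((s, r, o), 0.0)
known = {("a", "r", "d")}              # an existing triple, filtered out
print(filtered_rank(score_fn, ("a", "r", "b"), ["b", "c", "d"], known))  # 1
```

Without the filter, the higher-scoring existing triple ("a", "r", "d") would push the correct object down to rank 2; MRR and Hits@n are then aggregates of these per-triple ranks.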

Poisoning Attacks on KGE models:

We study poisoning attacks for the task of link prediction using KGE models. We focus on targeted attacks where the attacker targets a specific set of missing triples instead of the overall model performance. We use the notation (s, r, o) for the target triple; in this case, s and o are the target entities and r is the target relation. The goal of an adversarial attacker is to degrade the ranks of missing triples which are predicted highly plausible by the model. The rank of a highly plausible target triple can be degraded by improving the rank of less plausible decoy triples. For a target triple (s, r, o), the decoy triple for degrading the rank on the object side would be (s, r, o') and the decoy triple for degrading the rank on the subject side would be (s', r, o). Thus, the aim of the adversarial attacker is to select decoy triples from the set of valid synthetic negatives and craft adversarial edits to improve their ranks. The attacker does not add the decoy triple itself as an adversarial edit; rather, they choose the adversarial edits that would improve the rank of a missing decoy triple through an inference pattern.

Threat Model:

To ensure reliable vulnerability analysis, we use a white-box attack setting where the attacker has full knowledge of the target KGE model (joseph_nelson_rubinstein_tygar_2019). They cannot manipulate the model architecture or learned embeddings directly, but only through the addition of triples to the training data. We focus on adversarial additions, which are more challenging to design than adversarial deletions for sparse knowledge graphs. (For every target triple, the number of possible adversarial additions in the neighbourhood of each entity is of the order of |E| × |R|. For the benchmark dataset FB15k-237, this is of the order of millions, whereas the maximum number of candidates for adversarial deletion is of the order of thousands.)

As in prior studies (pezeshkpour2019criage; zhang2019kgeattack), the attacker is restricted to making edits only in the neighbourhood of the target entities. They are also restricted to 1 decoy triple for each entity of the target triple. Furthermore, because of the use of the filtered setting for KGE evaluation, the attacker cannot add the decoy triple itself to the training data (which intuitively would be a way to improve the decoy triple's rank).

3 Poisoning Knowledge Graph Embeddings through Relation Inference Patterns

Since the inference patterns on the knowledge graph specify a logical property between the relations, they can be expressed as Horn clauses, which are a subset of FOL formulae. For example, a property represented in the form r1(a, b) ⇒ r2(b, a) means that two entities linked by relation r1 are also likely to be linked by the inverse relation r2. In this expression, the right hand side of the implication is referred to as the head and the left hand side as the body of the clause. Using such expressions, we define the three inference patterns used in our research.

Definition 3.1.

The symmetry pattern is expressed as r(a, b) ⇒ r(b, a). Here, the relation r is a symmetric relation.

Definition 3.2.

The inversion pattern is expressed as r1(a, b) ⇒ r2(b, a). Here, the relations r1 and r2 are inverses of each other.

Definition 3.3.

The composition pattern is expressed as r1(a, b) ∧ r2(b, c) ⇒ r(a, c). Here, the relation r is a composition of r1 and r2; and ∧ is the conjunction operator from relational logic.

The mapping of variables in the above expressions to entities is called a grounding. For example, we can map the logic expression r1(a, b) ⇒ r2(b, a) to the grounding r1(e1, e2) ⇒ r2(e2, e1) for entities e1 and e2 in the graph. Thus, a KGE model that captures the inversion pattern will assign a high prediction confidence to the head atom when the body of the clause exists in the graph.
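As a crisp-logic illustration, the three patterns can be checked over a toy triple set. All entity and relation names below are invented for the example; real KGE models capture these patterns only softly, through embedding scores.

```python
# Toy, crisp-logic illustration of the three inference patterns; entity and
# relation names are invented for the example.
def symmetry_holds(triples, r):
    # r(a, b) => r(b, a)
    return all((o, r, s) in triples for (s, rel, o) in triples if rel == r)

def inversion_holds(triples, r1, r2):
    # r1(a, b) => r2(b, a)
    return all((o, r2, s) in triples for (s, rel, o) in triples if rel == r1)

def composition_holds(triples, r1, r2, r):
    # r1(a, b) ^ r2(b, c) => r(a, c)
    heads = {(s, o) for (s, rel, o) in triples if rel == r}
    bodies = [(a, b) for (a, rel, b) in triples if rel == r1]
    steps = [(b, c) for (b, rel, c) in triples if rel == r2]
    return all((a, c) in heads
               for (a, b) in bodies for (b2, c) in steps if b2 == b)

kg = {("alice", "married_to", "bob"), ("bob", "married_to", "alice"),
      ("alice", "mother_of", "carol"), ("carol", "child_of", "alice"),
      ("bob", "lives_in", "dublin"), ("alice", "lives_in", "dublin")}
print(symmetry_holds(kg, "married_to"))                             # True
print(inversion_holds(kg, "mother_of", "child_of"))                 # True
print(composition_holds(kg, "married_to", "lives_in", "lives_in"))  # True
```

The attacks below exploit exactly these regularities: if a model has internalized a pattern, adding the body of a grounding raises its confidence in the head.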

In the above expressions, the decoy triple becomes the head atom and adversarial edits are the triples in the body of the expression. Since the decoy triple is an object or subject side negative of the target triple, the attacker already knows the relation in the head atom. They now want to determine (i) the adversarial relations in the body of the expression; (ii) the decoy entities which will most likely violate the inference pattern for the chosen relations and; (iii) the remaining entities in the body of the expression which will improve the prediction on the chosen decoy triple. Notice that the attacker needs all three steps for composition pattern only; for inversion pattern, only the first two steps are needed; and for symmetry pattern, only the second step is needed. Below we describe each step in detail. A computational complexity analysis of all the steps is available in Appendix A.

3.1 Step1: Determine Adversarial Relations

Expressing the relation patterns as logic expressions is based on relational logic and assumes that the relations are constants. Thus, we use an algebraic approach to determine the relations in the head and body of a clause. Given the target relation r, we determine the adversarial relations using an algebraic model of inference (yang2015distmult).

Inversion: If an atom r(a, b) holds true, then for the learned embeddings in multiplicative models, we can assume e_a ∘ e_r ≈ e_b, where ∘ denotes the Hadamard (element-wise) product. If the atom r'(b, a) holds true as well, then we can also assume e_b ∘ e_{r'} ≈ e_a. Thus, e_r ∘ e_{r'} ≈ 1 for inverse relations r and r' when embeddings are learned from multiplicative models. We obtain a similar expression, e_r ≈ −e_{r'}, when embeddings are learned from additive models.

Thus, to determine adversarial relations for the inversion pattern, we use the pre-trained embeddings to select the relation r' that minimizes ‖e_r ∘ e_{r'} − 1‖ for multiplicative models; and that minimizes ‖e_r + e_{r'}‖ for additive models.

Composition: If two atoms r1(a, b) and r2(b, c) hold true, then for multiplicative models, e_a ∘ e_{r1} ≈ e_b and e_b ∘ e_{r2} ≈ e_c. Therefore, e_a ∘ (e_{r1} ∘ e_{r2}) ≈ e_c. Hence, the relation r is a composition of r1 and r2 if e_r ≈ e_{r1} ∘ e_{r2}. Similarly, for embeddings from additive models, we can model composition as e_r ≈ e_{r1} + e_{r2}.

Thus, to determine adversarial relations for the composition pattern, we use the pre-trained embeddings to obtain all possible compositions of relation pairs (r1, r2). For multiplicative models, we use e_{r1} ∘ e_{r2}; for additive models, we use e_{r1} + e_{r2}. From these, we choose the relation pair for which the Euclidean distance between the composed relation embedding and the target relation embedding e_r is minimum.
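For multiplicative models, this step can be sketched as follows. The relation embeddings here are hand-crafted stand-ins (not trained ones), chosen so that r2 is the exact elementwise reciprocal of r1 and r4 = r1 ∘ r3.

```python
# Sketch of Step 1 for multiplicative models: relations are selected so that
# e_r o e_r' ~ 1 (inversion) or e_r1 o e_r2 ~ e_r (composition).
# The relation embeddings are hand-crafted stand-ins, not trained ones.
import numpy as np

def inverse_relation(target, rel_emb):
    # argmin over candidates of || e_target o e_i - 1 ||
    t = rel_emb[target]
    ones = np.ones_like(t)
    return min(rel_emb, key=lambda r: np.linalg.norm(t * rel_emb[r] - ones))

def composition_pair(target, rel_emb):
    # argmin over pairs of || e_i o e_j - e_target ||
    t = rel_emb[target]
    pairs = [(ri, rj) for ri in rel_emb for rj in rel_emb]
    return min(pairs,
               key=lambda p: np.linalg.norm(rel_emb[p[0]] * rel_emb[p[1]] - t))

rel_emb = {"r1": np.array([2.0, 4.0]), "r2": np.array([0.5, 0.25]),
           "r3": np.array([3.0, 3.0]), "r4": np.array([6.0, 12.0])}
print(inverse_relation("r1", rel_emb))   # r2: elementwise reciprocal of r1
print(composition_pair("r4", rel_emb))   # r1 and r3, since r1 o r3 = r4
```

The additive variant is analogous, with e_{r1} + e_{r2} in place of the Hadamard product and ‖e_r + e_{r'}‖ as the inversion criterion.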

3.2 Step2: Determine Decoy Entities

We consider three different heuristic approaches to select the decoy entity - soft truth score, ranks predicted by the KGE model and cosine distance.

Soft Logical Modelling of Inference Patterns

Once the adversarial relations are determined, we can express the groundings of the symmetry, inversion and composition patterns for the decoy triples. We discuss only the object side decoy triple (s, r, o') for brevity.

If the model captures the symmetry, inversion or composition pattern to assign a high rank to the target triple, then the head atom of a grounding that violates this pattern is a suitable decoy triple. Adding the body of this grounding to the knowledge graph would improve the model's performance on the decoy triple through that same pattern.

To determine the decoy triple this way, we need a measure of the degree to which a grounding satisfies an inference pattern. We call this measure the soft truth score - it provides the truth value of a logic expression indicating the degree to which the expression is true. We model the soft truth score of grounded patterns using t-norm based fuzzy logics (hajek1998tnormfuzzylogics).

The score f(s, r, o) of an individual atom (i.e. triple) is computed using the KGE model's scoring function. We use the sigmoid function σ(x) = 1 / (1 + e^{−x}) to map this score to a continuous truth value in the range (0, 1). Hence, the soft truth score for an individual atom a = (s, r, o) is π(a) = σ(f(s, r, o)). The soft truth score for the grounding of a pattern can then be expressed through logical composition (e.g. ∧ and ⇒) of the scores of individual atoms in the grounding. We follow (guo2016kale; guo2018ruge) and define the following compositions for logical conjunction (∧), disjunction (∨), and negation (¬):

π(a ∧ b) = π(a) · π(b)
π(a ∨ b) = π(a) + π(b) − π(a) · π(b)
π(¬a) = 1 − π(a)

Here, a and b are two logical expressions, which can either be single triples or be constructed by combining triples with logical connectives. If a is a single triple (s, r, o), we have π(a) = σ(f(s, r, o)). Given these compositions, the truth value of any logical expression can be calculated recursively (guo2016kale; guo2018ruge).

Thus, rewriting a ⇒ b as ¬a ∨ b gives π(a ⇒ b) = π(a) · π(b) − π(a) + 1, and we obtain the following soft truth scores for the groundings of the symmetry, inversion and composition patterns for the object side decoy triple (s, r, o') -

ψ_sym = π(r(o', s) ⇒ r(s, o'))
ψ_inv = π(r'(o', s) ⇒ r(s, o'))
ψ_com = π(r1(s, x) ∧ r2(x, o') ⇒ r(s, o'))

To select the decoy triple for symmetry and inversion, we score all possible groundings using ψ_sym and ψ_inv. The head atom of the grounding with the minimum score is chosen as the decoy triple.

For the composition pattern, the soft truth score for candidate decoy triples contains two entities to be identified. Thus, we use a greedy approach to select the decoy entity o'. We use the pre-trained embeddings to group the entities into clusters using K-means clustering and determine a decoy entity with the minimum soft truth score for each cluster. We then select the decoy entity o' with the minimum score across the clusters.
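The t-norm operators can be sketched as follows. This is a minimal sketch: `atom_score` stands in for σ(f(s, r, o)) from a trained model, and the implication operator is derived by rewriting a ⇒ b as ¬a ∨ b.

```python
# Sketch of the soft truth operators; the KGE scores fed to atom_score are
# stand-in values, not ones produced by a trained model.
import math

def atom_score(f):                 # pi(a) = sigmoid of the KGE score f(s, r, o)
    return 1.0 / (1.0 + math.exp(-f))

def t_and(a, b): return a * b
def t_or(a, b):  return a + b - a * b
def t_not(a):    return 1.0 - a

def t_implies(a, b):               # a => b rewritten as (not a) or b
    return t_or(t_not(a), b)       # simplifies to a*b - a + 1

# Symmetry grounding r(o', s) => r(s, o'): a low soft truth marks a violated
# grounding, whose head atom (s, r, o') is a good decoy triple.
body, head = atom_score(2.0), atom_score(-3.0)
print(round(t_implies(body, head), 3))   # low: plausible body, implausible head
```

A grounding with a confidently-true body and a confidently-false head gets a score near 0, which is exactly the "most violated" case the decoy selection looks for.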

KGE Ranks:

We use the ranking protocol from KGE evaluation to rank the target triple against the valid subject and object side negatives (s', r, o) and (s, r, o'). For each side, we select the negative triple that is ranked just below the target triple. These are suitable as decoys because their predicted scores are likely not very different from the target triple's score. Thus, the model's prediction confidence for these triples might be effectively manipulated through adversarial additions. This is in contrast to using very low ranked triples as decoys, where the model has likely learnt a low score with high confidence.

Cosine Distance:

A high rank for the target triple (s, r, o) against the queries (s', r, o) and (s, r, o') indicates that the embeddings e_s and e_o are similar to the embeddings of other subjects and objects related by r in the training data. Thus, a suitable heuristic for selecting the decoy entities s' and o' is to choose ones whose embeddings are dissimilar to e_s and e_o. Since these entities are unlikely to occur in the neighbourhood of s and o, they will act adversarially to reduce the rank of the target triple. Thus, we select decoy entities s' and o' that have maximum cosine distance from the target entities s and o respectively.
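The cosine-distance heuristic can be sketched as follows; the entity embeddings are hand-picked stand-ins rather than trained ones.

```python
# Sketch of the cosine-distance heuristic for decoy selection;
# the entity embeddings are hand-picked stand-ins, not trained ones.
import numpy as np

def cosine_distance(u, v):
    return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def select_decoy(target_entity, ent_emb):
    # choose the entity whose embedding is farthest from the target's
    t = ent_emb[target_entity]
    candidates = {e: cosine_distance(t, v) for e, v in ent_emb.items()
                  if e != target_entity}
    return max(candidates, key=candidates.get)

ent_emb = {"s": np.array([1.0, 0.0]),
           "e1": np.array([0.9, 0.1]),     # similar to s
           "e2": np.array([-1.0, 0.05])}   # nearly opposite to s
print(select_decoy("s", ent_emb))          # e2: maximum cosine distance
```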

3.3 Step3: Determine Adversarial Entities

This step is only needed for the composition pattern because the body of this pattern has two adversarial triples. Given the decoy triple in the head of the composition expression, we select the body of the expression that would maximize the rank of the decoy triple. We use the soft-logical model defined in Step 2 for selecting decoy triples. For the decoy triple (s, r, o'), the body of the composition grounding is r1(s, x) ∧ r2(x, o'), with soft truth score π(r1(s, x)) · π(r2(x, o')). We select the entity x with the maximum score because this entity satisfies the composition pattern for the decoy triple and is thus likely to improve the decoy triple's rank on addition to the knowledge graph.
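This greedy selection can be sketched as follows, assuming the entity is chosen by the product t-norm conjunction of the two body atoms; `kge_score` and all names are toy stand-ins for a trained model.

```python
# Sketch of Step 3: given a decoy triple (s, r, o') and adversarial relations
# r1, r2, pick the entity x maximizing the soft truth of the grounding's body
# r1(s, x) ^ r2(x, o'). kge_score is a toy stand-in for a trained model.
import math

def sigmoid(f):
    return 1.0 / (1.0 + math.exp(-f))

def select_adversarial_entity(s, o_decoy, r1, r2, entities, kge_score):
    def body_truth(x):
        # product t-norm conjunction of the two body atoms
        return sigmoid(kge_score(s, r1, x)) * sigmoid(kge_score(x, r2, o_decoy))
    return max(entities, key=body_truth)

scores = {("s", "r1", "x1"): 4.0, ("x1", "r2", "o2"): 4.0,
          ("s", "r1", "x2"): 4.0, ("x2", "r2", "o2"): -4.0}
kge_score = lambda s, r, o: scores.get((s, r, o), -8.0)
print(select_adversarial_entity("s", "o2", "r1", "r2", ["x1", "x2"], kge_score))
```

Here x1 wins because both of its body atoms score highly, so adding the two body triples plausibly triggers the model's composition bias towards the decoy head.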

Adversarial Attack Step            Sym    Inv    Com
Determine Adversarial Relations    n/a    Alg    Alg
Determine Decoy Entities           Sft    Sft    Sft
                                   Rnk    Rnk    Rnk
                                   Cos    Cos    Cos
Determine Adversarial Entities     n/a    n/a    Sft
Table 2: A summary of heuristic approaches used for different steps of the adversarial attack with symmetry (Sym), inversion (Inv) and composition (Com) pattern. Alg denotes the algebraic model for inference patterns; Sft denotes the soft truth score; Rnk denotes the KGE ranks; and Cos denotes the cosine distance.

4 Evaluation

The aim of our evaluation is to assess the effectiveness of the proposed attacks in degrading the predictive performance of KGE models on missing triples that are predicted true. We use the state-of-art evaluation protocol for data poisoning attacks (xu2020advgraphsurvey). We train a clean model on the original data; then generate the adversarial edits and add them to the dataset; and finally retrain a new model on this poisoned data. All hyperparameters for training on original and poisoned data remain the same.

We evaluate four models with varying inductive abilities - DistMult, ComplEx, ConvE and TransE - on two publicly available benchmark datasets for link prediction - WN18RR and FB15k-237 (available at https://github.com/TimDettmers/ConvE). We filter out triples from the validation and test set that contain unseen entities. To assess the attack effectiveness in degrading performance on triples predicted as true, we need a set of triples that are predicted as true by the model. Thus, we select as target triples a subset of the original test set where each triple is ranked ≤ 10 by the original model. Table 3 provides an overview of dataset statistics and the number of target triples selected.

                  WN18RR    FB15k-237
Entities          40,559       14,505
Relations             11          237
Training          86,835      272,115
Validation         2,824       17,526
Test               2,924       20,438
Target triples
  DistMult         1,315        3,342
  ComplEx          1,369        3,930
  ConvE            1,247        4,711
  TransE           1,195        5,359
Table 3: Statistics for the datasets WN18RR and FB15k-237. We removed triples from the validation and test set that contained unseen entities to ensure that we do not add new entities as adversarial edits. The numbers above (including the number of entities) reflect this filtering.


We compare the proposed methods against the following baselines -

Random_n: Random edits in the neighbourhood of each entity of the target triple.

Random_g1: Global random edits in the knowledge graph which are not restricted to the neighbourhood of entities in the target triple and have 1 edit per decoy triple (like symmetry and inversion).

Random_g2: Global random edits in the knowledge graph which are not restricted to the neighbourhood of entities in the target triple and have 2 edits per decoy triple (like composition).

Zhang et al.: Poisoning attack from (zhang2019kgeattack) for edits in the neighbourhood of the subject of the target triple. We extend it to both the subject and object to match our evaluation protocol. Further implementation details are available in Appendix B.2.

CRIAGE: Poisoning attack from (pezeshkpour2019criage). We use the publicly available implementation and the default attack settings (https://github.com/pouyapez/criage). The method was proposed for edits in the neighbourhood of the object of the target triple. We extend it to both entities to match our evaluation protocol and to ensure fair evaluation.


For every attack, we filter out adversarial edit candidates that already exist in the graph. We also remove duplicate adversarial edits for different targets before adding them to the original dataset. For Step 2 of the composition attack with the soft truth score, we use the elbow method to determine the number of clusters for each model-dataset combination. Further details on KGE model training, computing resources and the number of clusters are available in Appendix B. The source code to reproduce our experiments is available on GitHub (https://github.com/PeruBhardwaj/InferenceAttack).

4.1 Results

                DistMult            ComplEx             ConvE               TransE
                MRR      Hits@1     MRR      Hits@1     MRR      Hits@1     MRR      Hits@1
Original        0.90     0.85       0.89     0.84       0.92     0.89       0.36     0.03

Baseline Attacks
Random_n        0.86 (-4%)   0.83   0.84 (-6%)   0.80   0.90 (-2%)   0.88   0.28 (-20%)  0.01
Random_g1       0.88     0.83       0.88     0.83       0.92     0.89       0.35     0.02
Random_g2       0.88     0.83       0.88     0.83       0.91     0.89       0.34     0.02

State-of-art Attacks
Zhang et al.    0.82 (-8%)   0.81   0.76 (-14%)  0.74   0.90 (-2%)   0.87   0.24 (-33%)  0.01
CRIAGE          0.87     0.84       -        -          0.90     0.88       -        -

Proposed Attacks
Sym_truth       0.66     0.40       0.56 (-33%)  0.24   0.61 (-34%)  0.28   0.57     0.36
Sym_rank        0.61     0.32       0.56 (-33%)  0.24   0.62     0.31       0.25     0.02
Sym_cos         0.57 (-36%)  0.32   0.62     0.43       0.67     0.44       0.24 (-33%)  0.01
Inv_truth       0.87     0.83       0.86     0.80       0.90     0.87       0.34     0.03
Inv_rank        0.86     0.83       0.85     0.80       0.89 (-4%)   0.85   0.25     0.02
Inv_cos         0.83 (-8%)   0.82   0.80 (-10%)  0.79   0.90     0.88       0.25 (-30%)  0.01
Com_truth       0.86     0.83       0.86     0.81       0.89     0.86       0.53 (+49%)  0.27
Com_rank        0.85 (-5%)   0.80   0.83     0.77       0.89     0.84       0.57     0.32
Com_cos         0.86     0.77       0.82 (-8%)   0.70   0.88 (-4%)   0.83   0.53 (+49%)  0.27
Table 4: Reduction in MRR and Hits@1 due to different attacks on the target split of WN18RR. The first block of rows contains the baseline attacks with random edits; the second block contains the state-of-art attacks; the remaining rows are the proposed attacks. For each block, we report the best relative percentage difference from the original MRR, computed as ((poisoned MRR − original MRR) / original MRR) × 100. Lower values indicate better results; best results for each model are in bold. Statistics on the target split are in Table 3.

Table 4 and 5 show the reduction in MRR and Hits@1 due to different attacks on the WN18RR and FB15k-237 datasets. We observe that the proposed adversarial attacks outperform the random baselines and the state-of-art poisoning attacks for all KGE models on both datasets.

We see that the attacks based on the symmetry inference pattern perform the best across all model-dataset combinations. This indicates the sensitivity of KGE models to the symmetry pattern. For DistMult, ComplEx and ConvE, this sensitivity can be explained by the symmetric nature of the scoring functions of these models. That is, the models assign either equal or similar scores to triples that are symmetric opposites of each other. In the case of TransE, the model's sensitivity to the symmetry pattern is explained by the translation operation in the scoring function. The score of the target triple (s, r, o) is a translation from the subject to the object embedding through the relation embedding. The symmetry attack adds the adversarial triple (o', r, s), where the relation is the same as the target relation but the target subject is the object of the adversarial triple. Now, the model learns the embedding of s as a translation from o' through relation r. This adversarially modifies the embedding of s and, in turn, the score of (s, r, o).

We see that the inversion and composition attacks also perform better than the baselines in most cases, but not as well as symmetry. This is particularly true for FB15k-237, where the performance of these patterns is similar to the random baselines. For the composition pattern, it is likely that the model has a stronger bias for shorter and simpler patterns like symmetry and inversion than for composition. This makes it harder to deceive the model through composition than through symmetry or inversion. Furthermore, FB15k-237 has high connectivity (dettmers2018conve), which means that a KGE model relies on a high number of triples to learn the target triples' ranks. Thus, poisoning KGE models for FB15k-237 will likely require more adversarial triples per target triple than the one considered in this research.

The inversion pattern is likely ineffective on the benchmark datasets because these datasets do not have any inverse relations (dettmers2018conve; toutanova2015observed). This implies that our attacks cannot identify the inverse of the target triple’s relation in Step 1. We investigate this hypothesis further in Appendix D, and evaluate the attacks on WN18 dataset where the inverse relations have not been filtered out. This means that the KGE model can learn the inversion pattern and the inversion attacks can identify the inverse of the target relation. In this setting, we find that the inversion attacks outperform other attacks against ComplEx on WN18, indicating the sensitivity of ComplEx to the inversion pattern when the dataset contains inverse relations.

An exception in the results is the composition pattern on TransE where the model performance improves instead of degrading on the target triples. This is likely due to the model’s sensitivity to composition pattern such that adding this pattern improves the performance on all triples, including target triples. To verify this, we checked the change in ranks of decoy triples and found that composition attacks on TransE improve these ranks too. Results for this experiment are available in Appendix C. This behaviour of composition also indicates that the selection of adversarial entities in Step 3 of the composition attacks can be improved. It also explains why the increase is more significant for WN18RR than FB15k-237 - WN18RR does not have any composition relations but FB15k-237 does; so adding these to WN18RR shows significant improvement in performance. We aim to investigate these and more hypotheses about the proposed attacks in future work.

                DistMult            ComplEx             ConvE               TransE
                MRR      Hits@1     MRR      Hits@1     MRR      Hits@1     MRR      Hits@1
Original        0.61     0.38       0.61     0.45       0.61     0.45       0.63     0.48

Baseline Attacks
Random_n        0.54 (-11%)  0.40   0.54 (-12%)  0.40   0.56 (-8%)   0.41   0.60 (-4%)   0.45
Random_g1       0.54     0.40       0.55     0.41       0.57     0.43       0.62     0.46
Random_g2       0.55     0.41       0.55     0.40       0.57     0.42       0.61     0.46

State-of-art Attacks
Zhang et al.    0.53 (-13%)  0.39   0.51 (-16%)  0.38   0.54 (-11%)  0.39   0.57 (-10%)  0.42
CRIAGE          0.54     0.41       -        -          0.56     0.41       -        -

Proposed Attacks
Sym_truth       0.51     0.36       0.56     0.41       0.51 (-17%)  0.34   0.62     0.48
Sym_rank        0.53     0.39       0.53     0.38       0.55     0.38       0.53 (-16%)  0.36
Sym_cos         0.46 (-25%)  0.31   0.51 (-17%)  0.38   0.52     0.37       0.55     0.40
Inv_truth       0.55     0.41       0.54     0.40       0.56     0.41       0.62     0.46
Inv_rank        0.56     0.43       0.55     0.40       0.55 (-9%)   0.40   0.58 (-8%)   0.42
Inv_cos         0.54 (-11%)  0.40   0.53 (-14%)  0.39   0.56     0.42       0.59     0.44
Com_truth       0.56     0.42       0.55     0.41       0.57     0.43       0.65     0.51
Com_rank        0.56 (-8%)   0.42   0.55 (-11%)  0.40   0.56 (-8%)   0.41   0.69     0.48
Com_cos         0.56 (-8%)   0.43   0.56     0.42       0.56     0.42       0.63 (0%)    0.49
Table 5: Reduction in MRR and Hits@1 due to different attacks on the target split of FB15k-237. For each block of rows, we report the best relative percentage difference from the original MRR, computed as ((poisoned MRR − original MRR) / original MRR) × 100. Lower values indicate better results; best results for each model are in bold. Statistics on the target split are in Table 3.

5 Related Work

KGE models can be categorized into tensor factorization models like DistMult (yang2015distmult) and ComplEx (trouillon2016complex), neural architectures like ConvE (dettmers2018conve), and translational models like TransE (bordes2013transe). We refer the reader to (cai2018comprehensive) for a comprehensive survey. Due to the black-box nature of KGE models, there is an emerging literature on understanding these models. (pezeshkpour2019criage) and (zhang2019kgeattack) are most closely related to our work as they propose other data poisoning attacks for KGE models.

minervini2017adversarialsets and cai2018kbgan use adversarial regularization in latent space and adversarial training to improve predictive performance on link prediction. But these adversarial samples are not in the input domain and aim to improve rather than degrade model performance. Poisoning attacks have also been proposed for models of undirected and single-relational graph data, like Graph Neural Networks (zugner2018nettack; dai2018adversarialgcn) and Network Embedding models (bojchevski2019adversarialnetworkembedding). A survey of poisoning attacks for graph data is available in (xu2020advgraphsurvey). But the attacks for these models cannot be applied directly to KGE models because they require gradients of a dense adjacency matrix.

Besides adversarial attacks, lawrence2020gradientrollback, nandwani2020oxkbc and zhang2019interaction generate post-hoc explanations to understand KGE model predictions. trouillon2019inductive study the inductive abilities of KGE models as binary relation properties for controlled inference tasks on synthetic datasets. allen2021interpreting interpret the structure of knowledge graph embeddings by comparison with word embeddings. On the theoretical side, wang2018multi study the expressiveness of various bilinear KGE models and basulto2018ontologyembedding study the ability of KGE models to learn hard rules expressed as ontological knowledge.

The soft-logical model of inference patterns in this work is inspired by the literature on injecting logical rules into KGE models. guo2016kale and guo2018ruge enforce soft logical rules by modelling the triples and rules in a unified framework and jointly learning embeddings from them. Additionally, our algebraic model of inference patterns, which is used to select adversarial relations, is related to approaches for graph traversal in latent vector space discussed in yang2015distmult; guu2015traversing; arakelyan2021complexqueryanswering.

6 Conclusion

We propose data poisoning attacks against KGE models based on inference patterns like symmetry, inversion and composition. Our experiments show that the proposed attacks outperform state-of-the-art attacks. Since the attacks rely on relation inference patterns, they can also be used to understand the KGE models: if a KGE model is sensitive to a given inference pattern, then that pattern should make an effective adversarial attack. We observe that attacks based on the symmetry pattern generalize across all KGE models, which indicates their sensitivity to this pattern.

In the future, we aim to investigate hypotheses about the effect of input graph connectivity and existence of specific inference patterns in datasets. We note that such investigation of inference pattern attacks will likely be influenced by the choice of datasets. In this paper, we have used benchmark datasets for link prediction. While there are intuitive assumptions about the inference patterns on these datasets, there is no study that formally measures and characterizes the existence of these patterns. This makes it challenging to verify the claims made about the inductive abilities of KGE models, not only by our proposed attacks but also by new KGE models proposed in the literature.

Thus, a promising step in understanding knowledge graph embeddings is to propose datasets and evaluation tasks that test varying degrees of specific inductive abilities. These will help evaluate new models and serve as a testbed for poisoning attacks. Furthermore, specifications of model performance on datasets with different inference patterns will improve the usability of KGE models in high-stake domains like healthcare and finance.

In addition to understanding model behaviour, the sensitivity of state-of-the-art KGE models to simple inference patterns indicates that these models can introduce security vulnerabilities in pipelines that use knowledge graph embeddings. Thus, another promising direction for future work is mitigating these vulnerabilities. Preliminary ideas include adversarial training, training an ensemble of different KGE scoring functions, or training an ensemble on subsets of the training dataset. Since our experiments show that state-of-the-art KGE models are sensitive to the symmetry pattern, we call for future research into neural architectures that generalize beyond symmetry, even if their predictive performance for link prediction on benchmark datasets is not the best.


Acknowledgements

This research was conducted with the financial support of Accenture Labs and Science Foundation Ireland (SFI) at the ADAPT SFI Research Centre at Trinity College Dublin. The ADAPT SFI Centre for Digital Content Technology is funded by Science Foundation Ireland through the SFI Research Centres Programme and is co-funded under the European Regional Development Fund (ERDF) through Grant No. 13/RC/2106_P2.

Broader Impact

We study the problem of generating data poisoning attacks on KGE models. Data poisoning attacks identify the vulnerabilities in learning algorithms that could be exploited by an adversary to manipulate the model's behaviour (joseph_nelson_rubinstein_tygar_2019; biggio2018wild). Such manipulation can lead to unintended model behaviour and failure. Identifying these vulnerabilities for KGE models is critical because of their increasing use in domains that require high-stakes decision making, like healthcare (bendsten2019astrazeneca) and finance (hogan2020knowledgegraphs; noy2019knowledgegraphs). In this way, our research is directed towards minimizing the negative consequences of deploying state-of-the-art KGE models in our society. This honours the ACM Code of Ethics of contributing to societal well-being and acknowledging that all people are stakeholders in computing. At the same time, we aim to safeguard KGE models against potential harm from adversaries, and thus honour the ACM code of avoiding harm due to computing systems.

Arguably, because we study vulnerabilities by attacking the KGE models, the proposed attacks can be used by an actual adversary to manipulate the model behaviour of deployed systems. This paradox of an arms race is universal across security research (biggio2018wild). For our research, we have followed the principle of proactive security as recommended by joseph_nelson_rubinstein_tygar_2019 and biggio2018wild. As opposed to reactive security measures where learning system designers develop countermeasures after the system is attacked, a proactive approach anticipates such attacks, simulates them and designs countermeasures before the systems are deployed. Thus, by revealing the vulnerabilities of KGE models, our research provides an opportunity to fix them.

Besides the use case of security, our research can be used in understanding the inductive abilities of KGE models, which are black-box and hard to interpret. We design attacks that rely on the inductive assumptions of a model to be able to deceive that model. Thus, theoretically, the effectiveness of attacks based on one inference pattern over another indicates the model’s reliance on one inference pattern over another. However, as we discussed in our paper, realistically, it is challenging to make such claims about the inductive abilities of KGE models because the inference patterns in benchmark datasets are not well defined.

Thus, we would encourage further work to evaluate our proposed attacks by designing benchmark tasks and datasets that measure specific inductive abilities of models. This will be useful not only for evaluating the attacks proposed here, but also for understanding the inductive abilities of existing KGE models. This, in turn, can guide the community to design better models. In this direction, we encourage researchers proposing new KGE models to evaluate not only predictive performance on benchmark datasets, but also the claims made about the inductive abilities of these models and their robustness to violations of these implicit assumptions.



Appendix A Computational Complexity Analysis

Let E denote the set of entities and R the set of relations. The number of target triples to attack is n, and a specific target triple is (s, r, o). Here, we discuss the computational complexity of the three steps of the proposed attacks:

Determine Adversarial Relations:

In this step, we determine the inverse relation or the composition relation of a target triple. Selecting the inverse relation needs O(|R|) computations for every target triple; selecting the composition relation requires the composition operation O(|R|^2) times per target triple. To avoid repetition, we pre-compute the inverse and composition relations for all target triples. This gives O(|R|) complexity for the inverse relation. For the composition relation, we compute compositions of all relation pairs and then select the adversarial pair by comparison with the target relation, giving O(|R|^2) complexity for composition.
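The pairwise composition search above can be sketched as follows, assuming (purely for illustration) a DistMult-style algebraic composition in which two relation embeddings compose by elementwise product; the function name and the L2 selection criterion are our own, not the paper's:

```python
import numpy as np

def select_composition_pair(rel_emb, target_rel):
    """Find the relation pair (r1, r2) whose composition is closest to the
    target relation. Assumes a DistMult-style algebraic composition where
    embeddings compose by elementwise product (illustrative assumption)."""
    # Pre-compute compositions of all relation pairs: O(|R|^2) pairs.
    comp = rel_emb[:, None, :] * rel_emb[None, :, :]          # (|R|, |R|, dim)
    # Select the pair whose composition is nearest (L2) to the target relation.
    dists = np.linalg.norm(comp - rel_emb[target_rel], axis=-1)
    r1, r2 = np.unravel_index(np.argmin(dists), dists.shape)
    return int(r1), int(r2)
```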

Determine Decoy Entity:

The three heuristics to compute the decoy entity are the soft-truth score, KGE ranks and cosine distance. For symmetry and inversion, the soft-truth score requires 2 forward calls to the model for one decoy entity. For composition, if the number of clusters is k, the soft-truth score requires O(k) forward calls to the model. To select decoy entities based on KGE ranks, we require one forward call for each decoy entity. For cosine distance, we compute the similarity of the subject and object embeddings to all entity embeddings via two calls to PyTorch's cosine-similarity function. Once the heuristic scores are computed, there is an additional O(|E|) complexity to select the entity with the minimum score. Thus, the complexity of decoy selection is O(|E|) for all heuristics except the soft-truth score on composition, where it is O(k|E|).
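A minimal numpy sketch of the cosine-distance heuristic, assuming the decoy is the entity with the minimum summed similarity to the subject and object embeddings (the paper's implementation uses PyTorch on the learned embeddings; all names here are illustrative):

```python
import numpy as np

def decoy_by_cosine(ent_emb, s_idx, o_idx, forbidden=frozenset()):
    """Select the decoy entity with the lowest summed cosine similarity to the
    subject and object embeddings (illustrative sketch of the heuristic)."""
    norms = np.linalg.norm(ent_emb, axis=1, keepdims=True)
    unit = ent_emb / np.clip(norms, 1e-12, None)
    # Two similarity computations: one against the subject, one against the object.
    score = unit @ unit[s_idx] + unit @ unit[o_idx]
    for idx in set(forbidden) | {s_idx, o_idx}:
        score[idx] = np.inf          # never pick the target entities themselves
    return int(np.argmin(score))     # O(|E|) scan for the minimum score
```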

Determine Adversarial Entity:

This step requires three forward calls to the KGE model because the ground-truth score needs to be computed as well. Thus, the complexity of this step is O(|E|).

Based on the discussion above, the overall computational complexity per target triple is O(|E|) for symmetry attacks and O(|R| + |E|) for inversion attacks. For composition attacks, it is O(|R|^2 + k|E|) for the soft-truth score and O(|R|^2 + |E|) for KGE ranks and cosine distance.

Appendix B Implementation Details

b.1 Training KGE models

Our codebase (https://github.com/PeruBhardwaj/InferenceAttack) for KGE model training is based on the codebase from dettmers2018conve (https://github.com/TimDettmers/ConvE). We use the 1-K training protocol but without reciprocal relations. Each training step alternates through batches of (s, r) and (o, r) pairs and their labels. The model implementation uses an if-statement for the forward pass conditioned on the input batch mode.

For the TransE scoring function, we use the L2 norm and a margin value of 9.0. The loss function used for all models is PyTorch's BCEWithLogitsLoss. For regularization, we use label smoothing and L2 regularization for TransE, and input dropout with label smoothing for the remaining models. We also use hidden dropout and feature dropout for ConvE.
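The labelling and loss can be sketched in numpy (the actual training uses PyTorch's BCEWithLogitsLoss; the exact smoothing formula below is a common variant and an assumption on our part):

```python
import numpy as np

def bce_with_logits(logits, labels, label_smoothing=0.1):
    """Binary cross-entropy on raw logits with label smoothing; a numpy sketch
    mirroring PyTorch's BCEWithLogitsLoss. The smoothing rule
    y' = (1 - eps) * y + eps / num_labels is an illustrative assumption."""
    eps = label_smoothing
    labels = (1.0 - eps) * labels + eps / labels.shape[-1]
    x, z = logits, labels
    # Numerically stable form: max(x, 0) - x*z + log(1 + exp(-|x|)).
    return float(np.mean(np.maximum(x, 0) - x * z + np.log1p(np.exp(-np.abs(x)))))
```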

We do not use early stopping, to ensure the same hyperparameters for the original and poisoned KGE models. We use an embedding size of 200 for all models on both datasets. For ComplEx, this becomes an effective embedding size of 400 because of the real and imaginary parts of the embeddings. All hyperparameters are tuned manually based on suggestions from state-of-the-art implementations of KGE models (ruffinelli2020olddognewtricks; dettmers2018conve). The hyperparameter values for all model-dataset combinations are available in the codebase. Table 6 shows the MRR and Hits@1 for the original KGE models on WN18RR and FB15k-237.

For re-training the model on poisoned dataset, we use the same hyperparameters as the original model. We run all model training, adversarial attacks and evaluation on a shared HPC cluster with Nvidia RTX 2080ti, Tesla K40 and V100 GPUs.

          WN18RR           FB15k-237
          MRR    Hits@1    MRR    Hits@1
DistMult  0.42   0.39      0.27   0.19
ComplEx   0.43   0.40      0.24   0.20
ConvE     0.43   0.40      0.32   0.23
TransE    0.19   0.02      0.34   0.25
Table 6: MRR and Hits@1 results for the original KGE models on WN18RR and FB15k-237.
Figure 2: Mean relative increase in the MRR of object-side and subject-side decoy triples due to the proposed attacks on WN18RR and FB15k-237. The increase is computed relative to the original MRR of the decoy triples as 100 * (poisoned MRR - original MRR) / (original MRR). The y-axis uses a symmetric log scale. Higher values are better, as they show the effectiveness of the attack in improving the decoy triples' ranks relative to their original ranks.

b.2 Baseline Implementation Details

One of the baselines in our evaluation is the attack from zhang2019kgeattack, which proposes edits in the neighbourhood of the subject of the target triple. We extend it to both subject and object to match our evaluation protocol. Since no public implementation is available, we implement our own.

The attack is based on computing a perturbation score for all possible candidate additions. Since the search space of candidate additions is large, the attack uses random downsampling to filter the candidates. The percentage of triples downsampled is not reported in the original paper and the implementation is not available. So, in this paper, we pick a high and a low value of the downsampling percentage and generate adversarial edits for both. The high and low percentages used to select candidate adversarial additions for WN18RR are DistMult: (20.0, 5.0); ComplEx: (20.0, 5.0); ConvE: (2.0, 0.1); TransE: (20.0, 5.0). For FB15k-237, these values are DistMult: (20.0, 5.0); ComplEx: (15.0, 5.0); ConvE: (0.3, 0.1); TransE: (20.0, 5.0).
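A minimal sketch of the downsampling step, assuming uniform sampling over the candidate (entity, relation) pairs; the original paper does not specify its sampling scheme, so the function and its parameters are illustrative:

```python
import itertools
import random

def downsample_candidates(entities, relations, pct, seed=0):
    """Keep a random `pct` percent of the candidate (entity, relation) pairs
    for one side of a target triple (illustrative sketch; the exact scheme
    used by zhang2019kgeattack is unspecified)."""
    rng = random.Random(seed)                    # fixed seed for reproducibility
    candidates = list(itertools.product(entities, relations))
    k = max(1, int(len(candidates) * pct / 100.0))
    return rng.sample(candidates, k)             # sampling without replacement
```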

          WN18RR                  FB15k-237
          Original  High   Low    Original  High   Low
DistMult  0.90      0.82   0.83   0.61      0.55   0.53
ComplEx   0.89      0.76   0.79   0.61      0.51   0.52
ConvE     0.92      0.90   0.90   0.61      0.54   0.54
TransE    0.36      0.25   0.24   0.63      0.57   0.57
Table 7: MRR of KGE models trained on the original datasets and on the datasets poisoned by the attack in zhang2019kgeattack. High and Low indicate the high and low percentages of candidates used for the attack.

Thus, we generate two poisoned datasets from the attack: one that uses a high number of candidates and one that uses a low number. We train two separate KGE models on these datasets to assess attack performance. Table 7 shows the MRR of the original model and of the poisoned KGE models from the attack with high and low downsampling percentages. The results reported for this attack in Section 4.1 are the better of the two (i.e. those showing more degradation in performance) for each model-dataset combination.

b.3 Attack Implementation Details

Our proposed attacks involve three steps to generate the adversarial additions for all target triples. In Step 1, the selection of adversarial relations, we pre-compute the inversion and composition relations for all target triples. Steps 2 and 3 are computed for each target triple in a loop. These steps involve forward calls to the KGE model to score the adversarial candidates; for this, we use a vectorized implementation similar to the KGE evaluation protocol. We also filter out adversarial candidates that already exist in the training set, and further filter out any duplicates from the set of adversarial triples generated for all target triples.
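The filtering described above can be sketched as:

```python
def filter_adversarial(candidates, train_triples):
    """Drop candidate adversarial triples that already exist in the training
    set, then remove duplicates across all target triples, preserving order."""
    train = set(train_triples)
    seen, kept = set(), []
    for triple in candidates:
        if triple not in train and triple not in seen:
            seen.add(triple)
            kept.append(triple)
    return kept
```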

For the composition attacks with the soft-truth score, we use a standard KMeans clustering implementation. We use the elbow method on the grid [5, 20, 50, 100, 150, 200, 250, 300, 350, 400, 450, 500] to select the number of clusters. The numbers of clusters selected for WN18RR are DistMult: 300, ComplEx: 100, ConvE: 300, TransE: 50. For FB15k-237, the numbers are DistMult: 200, ComplEx: 300, ConvE: 300, TransE: 100.
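Since the exact elbow criterion is not specified in the paper, the following pure-Python sketch uses a common variant: pick the k whose inertia point lies farthest from the straight line joining the curve's endpoints (with scikit-learn, the inertias would come from KMeans(n_clusters=k).fit(X).inertia_):

```python
import math

def elbow_k(grid, inertias):
    """Pick the elbow of a (k, inertia) curve as the point farthest from the
    line joining the first and last points; a common heuristic, assumed here."""
    (x0, y0), (x1, y1) = (grid[0], inertias[0]), (grid[-1], inertias[-1])
    denom = math.hypot(x1 - x0, y1 - y0)
    best_k, best_d = grid[0], -1.0
    for x, y in zip(grid, inertias):
        # Perpendicular distance from (x, y) to the end-point line.
        d = abs((y1 - y0) * x - (x1 - x0) * y + x1 * y0 - y1 * x0) / denom
        if d > best_d:
            best_k, best_d = x, d
    return best_k
```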

DistMult ComplEx ConvE TransE
MRR Hits@1 MRR Hits@1 MRR Hits@1 MRR Hits@1
Original 0.82 0.67 0.99 0.99 0.80 0.63 0.65 0.45
Baseline Attacks
Random_n 0.80 (-2%) 0.63 0.99 (0%) 0.98 0.79 (-2%) 0.61 0.46 (-29%) 0.18
Random_g1 0.82 0.66 0.99 0.98 0.80 0.62 0.57 0.33
Random_g2 0.81 0.65 0.99 0.98 0.79 0.62 0.50 0.22
Zhang et al. 0.77 (-6%) 0.59 0.97 (-3%) 0.95 0.77 (-3%) 0.61 0.43 (-33%) 0.16
CRIAGE 0.78 0.61 - - 0.78 0.63 - -
Proposed Attacks
Sym_truth 0.62 0.30 0.90 0.82 0.58 (-17%) 0.27 0.74 0.60
Sym_rank 0.59 0.27 0.89 (-10%) 0.79 0.62 0.33 0.52 0.34
Sym_cos 0.50 (-38%) 0.17 0.92 0.85 0.60 0.35 0.41 (-37%) 0.13
Inv_truth 0.81 0.66 0.86 0.74 0.78 (-3%) 0.61 0.59 0.34
Inv_rank 0.82 0.66 0.84 (-16%) 0.68 0.79 0.61 0.55 0.34
Inv_cos 0.79 (-3%) 0.64 0.87 0.75 0.80 0.63 0.51 (-22%) 0.25
Com_truth 0.79 0.62 0.98 0.97 0.77 0.62 0.53 (-18%) 0.25
Com_rank 0.80 0.64 0.98 0.96 0.75 (-6%) 0.58 0.67 0.47
Com_cos 0.78 (-5%) 0.61 0.97 (-2%) 0.95 0.77 0.62 0.58 0.32
Table 8: Reduction in MRR and Hits@1 due to different attacks on the target split of WN18. For each block of rows, we report the best relative percentage difference from the original MRR, computed as 100 * (poisoned MRR - original MRR) / (original MRR). Lower values indicate better attack performance; the best results for each model are in bold.

Appendix C Analysis on Decoy Triples

The proposed attacks are designed to generate adversarial triples that improve the KGE model's performance on the decoy triples (s, r, o') and (s', r, o). In this section, we analyze whether the performance of KGE models improves or degrades on the decoy triples after poisoning. For the decoy triples on the object side (s, r, o'), we compute the change in object-side MRR relative to the original object-side MRR of these triples. Similarly, for the decoy triples on the subject side (s', r, o), we compute the change in subject-side MRR relative to the original subject-side MRR. Figure 2 shows plots of the mean change in MRR of the object-side and subject-side decoy triples.

We observed in Section 4.1 that composition attacks against TransE on WN18RR improved the performance on target triples instead of degrading it. In Figure 2, we notice that composition attacks against TransE are effective in improving the ranks of decoy triples on both WN18RR and FB15k-237. This evidence supports the argument made in the main paper: it is likely that the composition attack does not work against TransE on WN18RR because the original dataset does not contain any composition relations; thus, because of the sensitivity of TransE to the composition pattern, adding this pattern improves the model's performance on all triples rather than just the target triples.

Appendix D Analysis on WN18

The inversion attacks identify the relation that the KGE model might have learned as the inverse of the target triple's relation. But the benchmark datasets WN18RR and FB15k-237 do not contain inverse relations, so a KGE model trained on these clean datasets would not be vulnerable to inversion attacks. Thus, we perform an additional evaluation on the WN18 dataset, where triples with inverse relations have not been removed. Table 8 shows the results for different adversarial attacks on WN18.

We see that the symmetry based attack is most effective for DistMult, ConvE and TransE. This indicates the sensitivity of these models to the symmetry pattern even when inverse relations are present in the dataset. For DistMult and ConvE, this is likely due to the symmetric nature of their scoring functions; and for TransE, this is likely because of the translation operation as discussed in Section 4.1. On the ComplEx model, we see that though the symmetry attacks are more effective than random baselines, the inversion attacks are the most effective. This indicates that the ComplEx model is most sensitive to the inversion pattern when the input dataset contains inverse relations.

Appendix E Analysis of Runtime Efficiency

In this section, we compare the runtime efficiency of the baseline and proposed attacks. Table 9 shows the time taken (in seconds) to select the adversarial triples using different attack strategies for all models on WN18 dataset. Similar patterns were observed for attack execution on other datasets.

DistMult ComplEx ConvE TransE
Random_n 10.08 10.69 8.76 7.83
Random_g1 8.28 8.16 7.64 6.49
Random_g2 16.01 15.82 18.72 13.33
Zhang et al. 94.48 255.53 666.85 81.96
CRIAGE 21.77 - 21.96 -
Sym_truth 19.63 35.40 22.76 31.59
Sym_rank 23.47 27.25 25.82 25.03
Sym_cos 22.52 28.62 25.69 23.13
Inv_truth 11.43 15.69 24.13 31.89
Inv_rank 15.27 18.14 30.99 21.82
Inv_cos 14.96 20.47 23.02 20.63
Com_truth 2749.60 1574.44 6069.79 470.34
Com_rank 22.04 31.53 37.81 20.88
Com_cos 34.78 68.06 32.37 19.86
Table 9: Time taken in seconds to generate adversarial triples using baseline and proposed attacks on WN18

For CRIAGE, the reported time does not include the time taken to train the auto-encoder model. Similarly, for soft-truth based composition attacks, the reported time does not include the time taken to pre-compute the clusters. We observe that the proposed attacks are more efficient than the baseline attack of Zhang et al., which requires a combinatorial search over the candidate adversarial triples, and have efficiency comparable to CRIAGE. Among the proposed attacks, composition attacks based on the soft-truth score take more time than the others because they select the decoy entity by computing the soft-truth score for multiple clusters.