Convolutional Hypercomplex Embeddings for Link Prediction

06/29/2021 ∙ by Caglar Demir, et al. ∙ Universität Paderborn

Knowledge graph embedding research has mainly focused on the two smallest normed division algebras, ℝ and ℂ. Recent results suggest that trilinear products of quaternion-valued embeddings can be a more effective means to tackle link prediction. In addition, models based on convolutions on real-valued embeddings often yield state-of-the-art results for link prediction. In this paper, we investigate a composition of convolution operations with hypercomplex multiplications. We propose the four approaches QMult, OMult, ConvQ and ConvO to tackle the link prediction problem. QMult and OMult can be considered as quaternion and octonion extensions of previous state-of-the-art approaches, including DistMult and ComplEx. ConvQ and ConvO build upon QMult and OMult by including convolution operations in a way inspired by the residual learning framework. We evaluated our approaches on seven link prediction datasets, including WN18RR, FB15K-237 and YAGO3-10. Experimental results suggest that the benefits of learning hypercomplex-valued vector representations become more apparent as the size and complexity of the knowledge graph grow. ConvO outperforms state-of-the-art approaches on FB15K-237 in MRR, Hit@1 and Hit@3, while QMult, OMult, ConvQ and ConvO outperform state-of-the-art approaches on YAGO3-10 in all metrics. Results also suggest that link prediction performances can be further improved via prediction averaging. To foster reproducible research, we provide an open-source implementation of our approaches, including training and evaluation scripts as well as pretrained models.


1 Introduction

Knowledge graphs represent structured collections of facts describing the world in the form of typed relationships between entities (hogan2020knowledge). These collections of facts have been used in a wide range of applications, including web search, question answering, recommender systems, cancer research, machine translation, and even entertainment (eder2012knowledge; bordes2014question; zhang2016collaborative; saleem2014big; moussallem2019; malyshev2018getting). However, most knowledge graphs on the web are far from complete (nickel2015review). The task of identifying missing links in knowledge graphs is referred to as link prediction. Knowledge Graph Embedding (KGE) models have been particularly successful at tackling the link prediction task, among many others (nickel2015review).

KGE research has mainly focused on the two smallest normed division algebras—the real numbers (ℝ) and the complex numbers (ℂ)—neglecting the benefits of the larger normed division algebras—the quaternions (ℍ) and the octonions (𝕆). While yang2015embedding introduced the trilinear product of real-valued embeddings of triples (h, r, t) as a scoring function for link prediction, trouillon2016complex showed the usefulness of the Hermitian product of complex-valued embeddings: in contrast to real-valued embeddings, this product is not symmetric and can be used to model antisymmetric relations, since Re(⟨h, r, t̄⟩) ≠ Re(⟨t, r, h̄⟩) in general. To further increase the expressivity, zhang2019quaternion proposed learning quaternion-valued embeddings due to their benefits over complex-valued embeddings. Recently, zhang2021beyond showed that replacing a fully-connected layer with a hypercomplex multiplication layer in a neural network leads to significant parameter efficiency without degrading predictive performance in many tasks, including natural language inference, machine translation and text style transfer.

nguyen2017novel; dettmers2018convolutional; balavzevic2019hypernetwork; demir2021convolutional showed that convolutions are another effective means to increase the expressivity: the sparse connectivity property of the convolution operator endows models with parameter efficiency, unlike simply increasing the embedding size, which does not scale to large knowledge graphs (dettmers2018convolutional). Different configurations of the number of feature maps and the shape of kernels in the convolution operation are often explored to find the best ratio between expressiveness and parameter space size.

We investigate the use of convolutions on hypercomplex embeddings by proposing four models: QMult and OMult can be considered hypercomplex extensions of DistMult (yang2015embedding) in ℍ and 𝕆, respectively. In contrast to the state of the art (zhang2019quaternion), we address the scaling effect of multiplication in ℍ and 𝕆 by applying the batch normalization technique. Through batch normalization, QMult and OMult are allowed to control the rate of normalization and benefit from its implicit regularization effect (ioffe2015batch). Importantly, lu2020dense suggest that using solely unit quaternion-based rotations between head entity and relation limits the capacity to model various types of relations. ConvQ and ConvO build upon QMult and OMult by including the convolution operator in a way inspired by the residual learning framework (he2016deep). ConvQ and ConvO forge QMult and OMult, respectively, with a 2D convolution operation and an affine transformation via the Hadamard product. By virtue of this architecture, we show that ConvQ can degenerate into QMult, ComplEx or DistMult, if such degeneration is necessary to further minimize the training loss (see Equations 6 and 10).

Experiments suggest that our models often achieve state-of-the-art performance on seven benchmark datasets (WN18, FB15K, WN18RR, FB15K-237, YAGO3-10, Kinship and UMLS). The superiority of our models over the state of the art increases with the size and complexity of the knowledge graph. Our results also indicate that the generalization performance of our models can be further increased by applying ensemble learning.

2 Related Work

In the last decade, a plethora of KGE approaches have been successfully applied to tackle various tasks (nickel2015review; cai2018comprehensive; ji2020survey). In this section, we give a brief chronological overview of selected KGE approaches. RESCAL computes a three-way factorization of a third-order adjacency tensor representing the input knowledge graph to compute triple scores (nickel2011three). RESCAL captures various types of relations in the input KG but is limited in its scalability, as it has quadratic complexity in the factorization rank (trouillon2017knowledge). DistMult can be regarded as an efficient extension of RESCAL with a diagonal matrix per relation to reduce the complexity of RESCAL (yang2015embedding). DistMult performs poorly on antisymmetric relations while performing well on symmetric relations (trouillon2016complex). ComplEx extends DistMult by learning representations in a complex vector space (trouillon2016complex). ComplEx is able to infer both symmetric and antisymmetric relations via a Hermitian inner product of embeddings that involves the conjugate-transpose of one of the two input vectors. lacroix2018canonical design two novel regularizers along with a data augmentation technique and propose ComplEx-N3, which can be seen as ComplEx with N3 regularization. ConvE applies a 2D convolution operation to model the interactions between entities and relations (dettmers2018convolutional). ConvKB extends ConvE by omitting the reshaping operation in the encoding of representations in the convolution operation (nguyen2017novel). Similarly, HypER extends ConvE by applying relation-specific convolution filters as opposed to applying filters from concatenated subject and relation vectors (balavzevic2019hypernetwork). TuckER employs the Tucker decomposition on the binary tensor representing the input knowledge graph triples (balavzevic2019tucker). RotatE models predicates as rotations from subjects to objects in the complex space via the element-wise Hadamard product (sun2019rotate). By these means, RotatE performs well on composition relations, where many other approaches perform poorly. QuatE applies the quaternion multiplication followed by an inner product to compute triple scores (zhang2019quaternion).

3 Link Prediction & Hypercomplex Numbers

Link Prediction.

Let ℰ and ℛ represent the sets of entities and relations. A Knowledge Graph (KG) can then be formalised as a set of triples 𝒢 ⊆ ℰ × ℛ × ℰ, where each triple (h, r, t) contains two entities h, t ∈ ℰ and a relation r ∈ ℛ. The link prediction problem is formalised as learning a scoring function φ: ℰ × ℛ × ℰ → ℝ, ideally characterized by φ(h, r, t) > φ(x, y, z) if (h, r, t) is true and (x, y, z) is not (dettmers2018convolutional).

Hypercomplex Numbers.

The quaternions are a 4-dimensional normed division algebra (hamilton1844lxxviii; baez2002octonions). A quaternion number is defined as Q = a + b i + c j + d k, where a, b, c, d are real numbers and i, j, k are imaginary units satisfying Hamilton's rule i² = j² = k² = i j k = −1. Let Q₁ = a₁ + b₁ i + c₁ j + d₁ k and Q₂ = a₂ + b₂ i + c₂ j + d₂ k be two quaternions; their inner product is defined as

Q₁ · Q₂ = a₁a₂ + b₁b₂ + c₁c₂ + d₁d₂.

The quaternion multiplication of Q₁ and Q₂ is defined as

Q₁ ⊗ Q₂ = (a₁a₂ − b₁b₂ − c₁c₂ − d₁d₂) + (a₁b₂ + b₁a₂ + c₁d₂ − d₁c₂) i + (a₁c₂ − b₁d₂ + c₁a₂ + d₁b₂) j + (a₁d₂ + b₁c₂ − c₁b₂ + d₁a₂) k.

The quaternion multiplication is also known as the Hamilton product (zhang2021beyond). For a k-dimensional quaternion vector Q = a + b i + c j + d k with a, b, c, d ∈ ℝ^k, the inner product and multiplication are defined accordingly in an element-wise fashion. The octonions are an 8-dimensional algebra where an octonion number is defined as O = x₀ + x₁ e₁ + x₂ e₂ + ⋯ + x₇ e₇, where x₀, …, x₇ are real numbers and e₁, …, e₇ are imaginary units (baez2002octonions). Their multiplication (⋆), inner product (·) and vector operations are defined analogously to quaternions.

The quaternion multiplication subsumes real-valued multiplication and enjoys a parameter saving of up to a factor of four as compared to the real-valued matrix multiplication (parcollet2018quaternion; parcollet2019quaternion; zhang2021beyond). Leveraging such properties of quaternions in neural networks has shown promising results in numerous tasks (zhang2021beyond; zhang2019quaternion; chen2020quaternion). In turn, the octonion multiplication in neural networks and learning octonion-valued knowledge graph embeddings have not yet been fully explored.
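To make these definitions concrete, the following minimal NumPy sketch (our own illustration, not the released implementation) computes the inner product and the Hamilton product of quaternion vectors represented by their four real component arrays:

```python
import numpy as np

def quat_inner(q1, q2):
    """Inner product of two quaternion vectors given as (a, b, c, d) component arrays."""
    a1, b1, c1, d1 = q1
    a2, b2, c2, d2 = q2
    return np.sum(a1 * a2 + b1 * b2 + c1 * c2 + d1 * d2)

def quat_mul(q1, q2):
    """Element-wise Hamilton product of two quaternion vectors."""
    a1, b1, c1, d1 = q1
    a2, b2, c2, d2 = q2
    return (a1 * a2 - b1 * b2 - c1 * c2 - d1 * d2,   # real part
            a1 * b2 + b1 * a2 + c1 * d2 - d1 * c2,   # i part
            a1 * c2 - b1 * d2 + c1 * a2 + d1 * b2,   # j part
            a1 * d2 + b1 * c2 - c1 * b2 + d1 * a2)   # k part

# sanity check of Hamilton's rule: i * j = k
i = (np.zeros(1), np.ones(1), np.zeros(1), np.zeros(1))
j = (np.zeros(1), np.zeros(1), np.ones(1), np.zeros(1))
print(quat_mul(i, j))   # -> (0, 0, 0, 1), i.e. k
```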

4 Convolutional Hypercomplex Embeddings

Motivation.

dettmers2018convolutional suggest that indegree and PageRank can be used to quantify the difficulty of predicting missing links in a KG. Their results indicate that the superiority of ConvE over DistMult and ComplEx becomes more apparent as the complexity of the knowledge graph increases, i.e., as the indegree and PageRank of the KG increase (see Table 6 in dettmers2018convolutional). In turn, zhang2019quaternion show that learning quaternion-valued embeddings via multiplicative interactions can be a more effective means of predicting missing links than learning real- and complex-valued embeddings. Although learning quaternion-valued embeddings through multiplicative interactions yields promising results, the only way to further increase the expressiveness of such models is to increase the embedding dimensionality. This does not scale to larger knowledge graphs (dettmers2018convolutional). Increasing parameter efficiency while retaining effectiveness is a desired property in many applications (zhang2021beyond; trouillon2016complex; trouillon2017knowledge).

Motivated by findings of aforementioned works, we investigate the composition of convolution operations with hypercomplex multiplications. The rationale behind this composition is to increase the expressiveness without increasing the number of parameters. This nontrivial endeavor is the keystone of embedding models (trouillon2016complex). The sparse connectivity property of the convolution operation endows models with parameter efficiency which helps to scale to larger knowledge graphs. Additionally, different configurations of the number of kernels and their shapes can be explored to find the best ratio between expressiveness and the number of parameters. Although increasing the number of feature maps results in increasing the number of parameters, we are able to benefit from the parameter sharing property of convolutions (goodfellow2016deep).

Approaches.

Inspired by the early works DistMult and ConvE, we dub our approaches QMult, OMult, ConvQ and ConvO, where "Q" denotes the quaternion variant and "O" the octonion variant. Given a triple (h, r, t), QMult computes a triple score through the quaternion multiplication of the head entity embedding and the relation embedding, followed by the inner product with the tail entity embedding:

QMult(h, r, t) = Q_h ⊗ Q_r · Q_t, (1)

where Q_h, Q_r, Q_t ∈ ℍ^k. Similarly, OMult performs the octonion multiplication followed by the inner product:

OMult(h, r, t) = O_h ⋆ O_r · O_t, (2)

where O_h, O_r, O_t ∈ 𝕆^k. Computing triple scores in this setting can be illustrated in two consecutive steps: (1) rotating Q_h through Q_r by applying the quaternion (octonion) multiplication and (2) squishing (Q_h ⊗ Q_r) and Q_t onto the real number line by taking their inner product. During training, the angle between (Q_h ⊗ Q_r) and Q_t is minimized provided that (h, r, t) is true.
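A minimal sketch of the QMult scoring function in Equation 1, reusing the quat_mul and quat_inner helpers from the sketch in Section 3 (variable names are ours; the released implementation may differ):

```python
def qmult_score(q_head, q_rel, q_tail):
    """QMult (Eq. 1): rotate the head embedding by the relation embedding via the
    Hamilton product, then squish onto the real line via the quaternion inner product."""
    rotated = quat_mul(q_head, q_rel)    # step (1): quaternion rotation/scaling
    return quat_inner(rotated, q_tail)   # step (2): inner product with the tail

# toy usage with k = 3 dimensional quaternion embeddings
rng = np.random.default_rng(1)
Q_h, Q_r, Q_t = [tuple(rng.normal(size=3) for _ in range(4)) for _ in range(3)]
print(qmult_score(Q_h, Q_r, Q_t))
```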

Motivated by the response of John T. Graves to W. R. Hamilton ("If with your alchemy you can make three pounds of gold, why should you stop there?" (baez2002octonions)), we combine convolution operations with QMult and OMult as defined in Equations 3 and 4:

ConvQ(h, r, t) = conv(e_h, e_r) ∘ (Q_h ⊗ Q_r) · Q_t, (3)

ConvO(h, r, t) = conv(e_h, e_r) ∘ (O_h ⋆ O_r) · O_t, (4)

where ∘ denotes the Hadamard product and conv(·, ·): ℝ^{4k} × ℝ^{4k} → ℝ^{4k} (respectively ℝ^{8k} × ℝ^{8k} → ℝ^{8k}) is defined as

conv(e_h, e_r) = f(vec(f([e_h, e_r] ∗ ω)) · W + b), (5)

where f(·), vec(·), ∗, ω and (W, b) denote the rectified linear unit function, a flattening operation, the convolution operation, the kernels in the convolution and an affine transformation, respectively.
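A possible PyTorch realisation of Equation 5 is sketched below; the number of feature maps, the kernel size and the reshaping of [e_h, e_r] into a 2 × 4k grid are illustrative choices of ours and may differ from the released implementation. The output of this block is what gets combined via the Hadamard product in Equations 3 and 4.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvBlock(nn.Module):
    """Sketch of conv(e_h, e_r): stack the head and relation embedding vectors,
    apply a 2D convolution and ReLU (f), flatten (vec), and map back to the
    embedding width with an affine transformation (W, b)."""
    def __init__(self, k, num_maps=8, kernel=3):
        super().__init__()
        self.conv = nn.Conv2d(1, num_maps, kernel_size=kernel, padding=kernel // 2)
        self.fc = nn.Linear(num_maps * 2 * 4 * k, 4 * k)

    def forward(self, e_h, e_r):                          # e_h, e_r: (batch, 4k)
        x = torch.stack([e_h, e_r], dim=1).unsqueeze(1)   # (batch, 1, 2, 4k)
        x = F.relu(self.conv(x))                          # f([e_h, e_r] * omega)
        x = x.view(x.size(0), -1)                         # vec(.)
        return F.relu(self.fc(x))                         # f(vec(.) W + b)
```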

Connection to ComplEx and DistMult.

During training, conv(e_h, e_r) can reduce its range to a vector of ones if such a reduction is necessary to further decrease the training loss. In Equations 6–10, we elucidate the reduction of ConvQ into QMult and ComplEx, writing Q_h = a_h + b_h i + c_h j + d_h k (and analogously Q_r and Q_t) with a_h, b_h, c_h, d_h ∈ ℝ^k:

ConvQ(h, r, t) = conv(e_h, e_r) ∘ ((a_h + b_h i + c_h j + d_h k) ⊗ (a_r + b_r i + c_r j + d_r k)) · (a_t + b_t i + c_t j + d_t k). (6)

Equation 6 corresponds to QMult provided that conv(e_h, e_r) = 1 (the all-ones vector). ConvQ can be further reduced into ComplEx by setting the imaginary parts j and k of Q_h, Q_r and Q_t to zero:

ConvQ(h, r, t) = conv(e_h, e_r) ∘ ((a_h + b_h i) ⊗ (a_r + b_r i)) · (a_t + b_t i). (7)

Computing the quaternion multiplication of two quaternion-valued vectors corresponds to Equation 8:

ConvQ(h, r, t) = conv(e_h, e_r) ∘ ((a_h ∘ a_r − b_h ∘ b_r) + (a_h ∘ b_r + b_h ∘ a_r) i) · (a_t + b_t i). (8)

The resulting quaternion-valued vector is scaled with conv(e_h, e_r):

ConvQ(h, r, t) = ((conv(e_h, e_r) ∘ (a_h ∘ a_r − b_h ∘ b_r)) + (conv(e_h, e_r) ∘ (a_h ∘ b_r + b_h ∘ a_r)) i) · (a_t + b_t i). (9)

Through taking the inner product of the former vector with (a_t + b_t i), we obtain

ConvQ(h, r, t) = ⟨conv(e_h, e_r), a_h ∘ a_r − b_h ∘ b_r, a_t⟩ + ⟨conv(e_h, e_r), a_h ∘ b_r + b_h ∘ a_r, b_t⟩, (10)

where ⟨·, ·, ·⟩ corresponds to the multi-linear inner product. Equation 10 corresponds to ComplEx provided that conv(e_h, e_r) = 1. In the same way, ConvQ can be reduced into DistMult by setting all imaginary parts i, j, k of Q_h, Q_r and Q_t to zero, yielding

ConvQ(h, r, t) = ⟨conv(e_h, e_r), a_h ∘ a_r, a_t⟩. (11)
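This degeneration can be checked numerically. The following sketch reuses quat_mul, quat_inner, qmult_score and the toy embeddings Q_h, Q_r, Q_t from the earlier sketches and, for simplicity, applies a single k-dimensional gate to all four quaternion components; when the gate is a vector of ones, the ConvQ score coincides with the QMult score:

```python
def convq_score(gate, q_head, q_rel, q_tail):
    """ConvQ sketch: Hadamard-gate the rotated head embedding before the inner product."""
    rotated = quat_mul(q_head, q_rel)
    gated = tuple(gate * component for component in rotated)
    return quat_inner(gated, q_tail)

ones = np.ones(3)
assert np.isclose(convq_score(ones, Q_h, Q_r, Q_t), qmult_score(Q_h, Q_r, Q_t))
```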

Connection to residual learning.

The residual learning framework facilitates the training of deep neural networks. A simple residual learning block consists of two weight layers denoted by F(x) and an identity mapping of the input x (see Figure 2 in he2016deep). Increasing the depth of a neural model via stacking residual learning blocks has led to significant improvements in many domains. In our setting, F(x) and the identity mapping correspond to conv(e_h, e_r) and Q_h ⊗ Q_r, respectively: we replaced the identity mapping of the input with the hypercomplex multiplication and, to scale the output, we replaced the element-wise vector addition with the Hadamard product. By virtue of this inclusion, ConvQ and ConvO are endowed with the ability to control the impact of conv(e_h, e_r) on the predicted scores, as shown in Equation 10. Ergo, the gradients of the loss (see Equation 12) w.r.t. the head entity and relation embeddings can be propagated in two ways, namely via conv(e_h, e_r) or via the hypercomplex multiplication. Moreover, the number of feature maps and the shape of the kernels can be used to find the best ratio between expressiveness and the number of parameters. Hence, the expressiveness of the models can be adjusted without necessarily increasing the embedding size. Although increasing the number of feature maps increases the number of parameters in the model, we are able to benefit from the parameter sharing property of convolutions.

5 Experimental Setup

5.1 Datasets

We used seven datasets: WN18RR, FB15K-237, YAGO3-10, FB15K, WN18, UMLS and Kinship. An overview of the datasets is provided in Table 1. The latter four datasets are included for the sake of the completeness of our evaluation. dettmers2018convolutional suggest that indegree and PageRank can be used to indicate the difficulty of performing link prediction on an input KG. In our experiments, we are particularly interested in link prediction results on complex KGs. As commonly done, we augment the datasets by adding reciprocal triples (t, r⁻¹, h) (dettmers2018convolutional; balavzevic2019hypernetwork; balavzevic2019tucker); a minimal sketch of this augmentation is given after Table 1. For the link prediction experiments based on only tail entity rankings (see Table 5), we omit the data augmentation on the test set, as similarly done by bansal2019a2n.

Dataset    |E|      |R|    Degree     |Train|    |Validation|  |Test|
YAGO3-10   123,182  37     9.6±8.7    1,079,040  5,000         5,000
FB15K      14,951   1,345  32.5±69.5  483,142    50,000        59,071
WN18       40,943   18     3.5±7.7    141,442    5,000         5,000
FB15K-237  14,541   237    19.7±30    272,115    17,535        20,466
WN18RR     40,943   11     2.2±3.6    86,835     3,034         3,134
KINSHIP    104      25     82.2±3.5   8,544      1,068         1,074
UMLS       135      46     38.6±32.5  5,216      652           661

Table 1: Overview of datasets in terms of the number of entities, number of relations, average node degree plus/minus standard deviation, and the number of training, validation and test triples.
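The reciprocal augmentation mentioned above amounts to the following (a minimal sketch with an illustrative naming scheme for inverse relations):

```python
def add_reciprocals(triples):
    """For every (h, r, t), additionally add the reciprocal triple (t, r_inverse, h)."""
    augmented = list(triples)
    for h, r, t in triples:
        augmented.append((t, r + "_inverse", h))
    return augmented

print(add_reciprocals([("Paderborn", "locatedIn", "Germany")]))
# [('Paderborn', 'locatedIn', 'Germany'), ('Germany', 'locatedIn_inverse', 'Paderborn')]
```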

5.2 Training and optimization

We apply the same training strategy as dettmers2018convolutional: Following the KvsAll training procedure (here, we follow the terminology of ruffinelli2019you), for a given pair (h, r) we compute scores φ(h, r, x) for all x ∈ ℰ and apply the logistic sigmoid function σ(φ(h, r, x)). Models are trained to minimize the binary cross entropy loss

L = −(1/|ℰ|) Σᵢ (yᵢ log(ŷᵢ) + (1 − yᵢ) log(1 − ŷᵢ)), (12)

where ŷ and y denote the vector of predicted scores and the binary label vector, respectively.

We employ the Adam optimizer (kingma2014adam), dropout (srivastava2014dropout), label smoothing and batch normalization (ioffe2015batch), as similarly done in the literature (balavzevic2019hypernetwork; balavzevic2019tucker; dettmers2018convolutional; demir2021convolutional). Moreover, we selected the hyperparameters of our approaches by random search based on validation set performance (balavzevic2019tucker). Notably, we did not search for a good random seed; we fixed the seed of the random number generator to 1 throughout our experiments.
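One KvsAll training step can be sketched as follows (PyTorch; `model` is assumed to return one raw score per entity for each (h, r) pair, the label smoothing follows the spirit of dettmers2018convolutional, and all names are illustrative):

```python
import torch.nn.functional as F

def kvsall_step(model, optimizer, hr_batch, labels, label_smoothing=0.1):
    """One KvsAll step: score every entity for each (h, r) pair and minimise the
    binary cross entropy (Eq. 12) against the smoothed multi-label target vector."""
    optimizer.zero_grad()
    logits = model(hr_batch)                                        # (batch, |E|)
    targets = (1.0 - label_smoothing) * labels + label_smoothing / labels.size(1)
    loss = F.binary_cross_entropy_with_logits(logits, targets)
    loss.backward()
    optimizer.step()
    return loss.item()
```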

5.3 Evaluation

We employ the standard metrics filtered Mean Reciprocal Rank (MRR) and hits at N (H@N) for link prediction (dettmers2018convolutional; balavzevic2019hypernetwork). For each test triple (h, r, t), we construct its reciprocal (t, r⁻¹, h) and add it to the test set, which is a common technique to decrease the computational cost during testing (dettmers2018convolutional). Then, for each test triple (h, r, t), we compute the scores of the triples (h, r, x) for all x ∈ ℰ and calculate the filtered rank of the triple having t, i.e., all other triples known to be true are removed before ranking. The MRR is the mean of the reciprocals of these filtered ranks over all test triples. Consequently, given a test triple, we compute the ranks of missing entities based on the ranks of both head and tail entities, as similarly done in balavzevic2019hypernetwork; balavzevic2019tucker; dettmers2018convolutional. For the sake of completeness, we also report link prediction performances based on only tail rankings, i.e., without including triples with reciprocal relations in the test data, as similarly done by bansal2019a2n.
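A sketch of the filtered ranking and MRR computation (NumPy; `scores` holds one score per candidate entity for a given (h, r) query and `known_idx` all entities already known to complete that query in the train/validation/test sets; names are ours):

```python
import numpy as np

def filtered_rank(scores, true_idx, known_idx):
    """Rank of the true entity after masking all other known correct entities."""
    scores = scores.copy()
    other_true = [i for i in known_idx if i != true_idx]
    scores[other_true] = -np.inf                      # filtered setting
    return int(np.sum(scores > scores[true_idx])) + 1

def mean_reciprocal_rank(ranks):
    return float(np.mean([1.0 / r for r in ranks]))
```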

5.4 Implementation Details and Reproducibility

We implemented and evaluated our approach in the framework provided by balavzevic2019tucker; balazevic2019multi. To alleviate the hardware requirements for the reproducibility, we provide hyperparameter optimization, training and evaluation scripts along with pretrained models at the project page. Experiments were conducted on a single NVIDIA GeForce RTX 3090.

6 Results

Table 2 reports link prediction results on the WN18RR, FB15K-237 and YAGO3-10 datasets. Overall, the superior performance of our approaches becomes more apparent as the size and complexity of the knowledge graph grows. On the smallest benchmark dataset (WN18RR), QMult, OMult, ConvQ and ConvO outperform many approaches, including DistMult, ConvE and ComplEx, in all metrics; however, QuatE, TuckER and RotatE yield the best performances. On the second-largest benchmark dataset (FB15K-237), ConvO outperforms all state-of-the-art approaches in 3 out of 4 metrics. Additionally, QMult and ConvQ outperform all state-of-the-art approaches except for TuckER in terms of MRR, H@1 and H@3. On the largest benchmark dataset (YAGO3-10), QMult, OMult and ConvQ outperform all other approaches in all metrics. Surprisingly, QMult and OMult reach the best and second-best performances there, whereas ConvO does not perform particularly well compared to our other approaches. Across the three datasets, ConvO outperforms QMult, OMult and ConvQ in 8 out of 12 metrics, whereas QMult yields better performance on YAGO3-10. Overall, these results suggest that the superiority of learning hypercomplex embeddings becomes more apparent as the size and complexity of the input knowledge graph increase, as measured by indegree (see Table 1) and PageRank (see Table 6 in dettmers2018convolutional). In Table 3, we compare some of the best-performing approaches on WN18RR, FB15K-237 and YAGO3-10 in terms of the number of trainable parameters. The results indicate that our approaches yield competitive (if not better) performances on all benchmark datasets.

WN18RR FB15K-237 YAGO3-10
MRR @1 @3 @10 MRR @1 @3 @10 MRR @1 @3 @10
TransE (ruffinelli2019you) .228 .053 .368 .520 .313 .221 .347 .497 - - - -
ConvE (ruffinelli2019you) .442 .411 .451 .504 .339 .248 .359 .521 - - - -
TuckER (balavzevic2019tucker) .470 .443 .482 .526 .358 .266 .394 .544 - - - -
A2N (bansal2019a2n) .450 .420 .460 .510 .317 .232 .348 .486 - - - -
QuatE (zhang2019quaternion) .482 .436 .499 .572 .311 .221 .342 .495 - - - -
HypER (balavzevic2019hypernetwork) .465 .436 .477 .522 .341 .252 .376 .520 .533 .455 .580 .678
DistMult (dettmers2018convolutional) .430 .390 .440 .490 .240 .160 .260 .420 .340 .240 .380 .540
ConvE (dettmers2018convolutional) .430 .400 .440 .520 .335 .237 .356 .501 .440 .350 .490 .620
ComplEx (dettmers2018convolutional) .440 .410 .460 .510 .247 .158 .275 .428 .360 .260 .400 .550
REFE (chami2020low) .455 .419 .470 .521 .302 .216 .330 .474 .370 .289 .403 .527
ROTE (chami2020low) .463 .426 .477 .529 .307 .220 .337 .482 .381 .295 .417 .548
ATTE (chami2020low) .456 .419 .471 .526 .311 .223 .339 .488 .374 .290 .410 .538
ComplEx-N3 (chami2020low) .420 .390 .420 .460 .294 .211 .322 .463 .336 .259 .367 .484
MuRE (chami2020low) .458 .421 .471 .525 .313 .226 .340 .489 .283 .187 .317 .478
RotatE (sun2019rotate) .476 .428 .492 .571 .338 .241 .375 .533 .495 .402 .550 .670
QMult .438 .393 .449 .537 .346 .252 .383 .535 .555 .475 .602 .698
OMult .449 .406 .467 .539 .347 .253 .383 .534 .543 .461 .592 .692
ConvQ .457 .424 .470 .525 .343 .251 .376 .528 .539 .459 .587 .687
ConvO .458 .427 .473 .521 .366 .271 .403 .543 .489 .395 .546 .664
Table 2: Link prediction results on WN18RR, FB15K-237 and YAGO3-10. Results are obtained from the corresponding papers. Bold and underlined entries denote best and second-best results. The dash (-) denotes values missing in the papers.
WN18RR FB15K-237 YAGO3-10
QuatE (zhang2019quaternion) 16.38M 5.82M -
RotatE (sun2019rotate) 40.95M 29.32M 123.22M
QMult 16.38M 6.01M 49.30M
OMult 16.38M 6.01M 49.30M
ConvQ 21.51M 11.13M 54.42M
ConvO 21.51M 11.13M 54.42M
Table 3: Comparison of the number of trainable parameters on the WN18RR, FB15K-237 and YAGO3-10 datasets. The dash (-) denotes values missing in the papers.

6.1 Ensemble Learning

WN18RR FB15K-237 YAGO3-10
MRR @1 @3 @10 MRR @1 @3 @10 MRR @1 @3 @10
Q-OMult .444 .399 .458 .544 .356 .260 .393 .545 .557 .478 .601 .700
QMult-ConvQ .446 .406 .455 .538 .357 .263 .392 .546 .561 .483 .606 .703
QMult-ConvO .449 .410 .459 .536 .372 .275 .411 .564 .543 .460 .594 .693
OMult-ConvQ .444 .403 .453 .537 .357 .262 .391 .547 .558 .478 .602 .700
OMult-ConvO .462 .425 .475 .539 .372 .277 .411 .564 .535 .450 .588 .692
ConvQ-O-OMult .463 .425 .475 .539 .372 .275 .411 .567 .552 .470 .599 .702
Table 4: Link prediction results via ensembling models.

Table 4 reports link prediction results of the ensembled models on the benchmark datasets. Averaging the predicted scores of models improved the performances by circa 1–2% absolute in MRR. These results suggest that performances may be further improved by optimizing the impact of each model in the ensemble.
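Prediction averaging as used here simply averages the per-entity score vectors of the individual models before ranking (uniform weights in this sketch; the per-model weights could in principle be tuned on the validation set):

```python
import numpy as np

def ensemble_scores(score_vectors):
    """Average the predicted score vectors of several models for the same (h, r) query."""
    return np.mean(np.stack(score_vectors, axis=0), axis=0)
```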

6.2 Impact of Tail Entity Rankings

During our experiments, we observed that models often perform more accurately in predicting missing tail entities than in predicting missing head entities, which was also observed by bansal2019a2n. Table 5 indicates that MRR performances based on only tail entity rankings are on average considerably higher (in absolute terms) than MRR results based on head and tail entity rankings on FB15K-237, while such a difference was not observed on WN18RR.

WN18RR FB15K-237 YAGO3-10
MRR @1 @3 @10 MRR @1 @3 @10 MRR @1 @3 @10
DistMult .430 .410 .440 .480 .370 .275 .417 .568 - - - -
ComplEx .420 .380 .430 .480 .394 .303 .434 .572 - - - -
ConvE .440 .400 .450 .520 .410 .313 .457 .600 - - - -
MINERVA .450 .410 .460 .510 .293 .217 .329 .456 - - - -
A2N .490 .450 .500 .550 .422 .328 .464 .608 - - - -
QMult .451 .403 .472 .553 .439 .341 .485 .636 .692 .626 .736 .799
OMult .461 .414 .482 .559 .440 .340 .486 .636 .689 .623 .733 .801
ConvQ .470 .437 .482 .538 .441 .344 .482 .632 .674 .612 .715 .783
ConvO .473 .442 .491 .535 .465 .367 .512 .654 .622 .535 .682 .774
Q-OMult .457 .401 .480 .559 .448 .347 .495 .644 .696 .632 .734 .804
QMult-ConvQ .466 .424 .480 .557 .451 .353 .496 .646 .697 .635 .737 .803
QMult-ConvO .474 .435 .487 .558 .467 .367 .516 .662 .680 .610 .727 .800
OMult-ConvQ .471 .430 .487 .560 .452 .354 .495 .647 .696 .632 .735 .803
OMult-ConvO .476 .436 .488 .559 .466 .366 .515 .662 .676 .602 .724 .803
ConvQ-O .477 .442 .494 .548 .468 .370 .515 .661 .675 .603 .724 .795
Table 5: Link prediction results based on only tail entity rankings.
Relation Name Rel. Type RotatE QMult ConvQ OMult ConvO Ensemble
hypernym S .15 .10 .14 .11 .13 .13
instance_hypernym S .32 .35 .37 .36 .37 .39
member_meronym C .23 .22 .20 .23 .20 .23
synset_domain_topic_of C .34 .31 .31 .32 .33 .34
has_part C .18 .19 .17 .18 .18 .19
member_of_domain_usage C .32 .29 .28 .27 .33 .29
member_of_domain_region C .20 .25 .38 .30 .37 .38
derivationally_related_form R .95 .98 .98 .98 .98 .98
also_see R .59 .67 .65 .66 .66 .66
verb_group R .94 1.0 1.0 1.0 1.0 1.0
similar_to R 1.0 1.0 1.0 1.0 1.0 1.0
Table 6: MRR link prediction results per relation on WN18RR. Ensemble refers to averaging predictions of ConvQ-ConvO-OMult.
QMult ConvQ OMult ConvO Ensemble
(h, r, x)
hypernym .12 .18 .13 .18 .17
instance_hypernym .53 .56 .53 .57 .58
member_meronym .17 .09 .16 .08 .14
synset_domain_topic_of .49 .47 .51 .52 .52
has_part .15 .12 .15 .14 .14
member_of_domain_usage .04 .02 .07 .08 .05
member_of_domain_region .05 .06 .05 .06 .05
derivationally_related_form .98 .98 .98 .98 .98
also_see .67 .63 .65 .63 .63
verb_group 1.0 1.0 1.0 1.0 1.0
similar_to 1.0 1.0 1.0 1.0 1.0
(x, r, t)
hypernym .07 .09 .09 .08 .10
instance_hypernym .17 .18 .19 .17 .19
member_meronym .27 .31 .29 .33 .32
synset_domain_topic_of .13 .14 .13 .15 .15
has_part .22 .22 .22 .22 .24
member_of_domain_usage .53 .54 .47 .59 .54
member_of_domain_region .45 .69 .55 .67 .70
derivationally_related_form .98 .98 .98 .99 .98
also_see .68 .67 .66 .69 .68
verb_group 1.0 1.0 1.0 1.0 1.0
similar_to 1.0 1.0 1.0 1.0 1.0
Table 7: Link prediction results depending on the direction of prediction (head vs. tail prediction) on WN18RR. Ensemble refers to averaging predictions of ConvQ-ConvO-OMult.

6.3 Link Prediction Per Relation and Direction

We reevaluate the link prediction performances of some of the best-performing models from Table 2 in Tables 6 and 7. allen2021interpreting distinguish three types of relations: type S relations are specialization relations such as hypernym, type C relations denote so-called generalized context-shifts and include has_part, and type R relations comprise so-called highly-related relations such as similar_to. Our results show that our approaches accurately rank missing tail and head entities for type R relations. For instance, our approaches perfectly rank (1.0 MRR) the missing entities of the symmetric relations verb_group and similar_to. However, the direction of entity prediction has a significant impact on the results for non-symmetric type C relations. For instance, the MRR performances of QMult, ConvQ, OMult and ConvO vary by up to 0.63 absolute for the relation member_of_domain_region. The low performance on hypernym (type S) may stem from the fact that there are 184 triples in the test split of WN18RR in which hypernym occurs with entities of which at least one did not occur in the training split. Models often perform poorly on type C relations but considerably better on type R relations, corroborating the findings of allen2021interpreting.

6.4 Batch vs. Unit Normalization

We investigate the effect of using batch normalization instead of the unit normalization previously proposed by zhang2019quaternion. Table 8 indicates that the scaling effect of hypercomplex multiplications can be effectively alleviated by using the batch normalization technique. Replacing unit normalization with batch normalization allows benefiting (1) from its regularization effect and (2) from its numerical stability. Through batch normalization, our models are able to control the rate of normalization and benefit from its implicit regularization effect (ioffe2015batch).
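The two strategies compared in Table 8 can be contrasted as follows (a hedged PyTorch sketch: the unit variant divides each quaternion by its norm as in QuatE, while the batch variant applies BatchNorm1d to each of the four components; this is our illustration, not the released code):

```python
import torch
import torch.nn as nn

def unit_normalize(q, eps=1e-12):
    """QuatE-style unit normalization of a quaternion vector given as (a, b, c, d)."""
    a, b, c, d = q
    norm = torch.sqrt(a * a + b * b + c * c + d * d + eps)
    return (a / norm, b / norm, c / norm, d / norm)

class QuaternionBatchNorm(nn.Module):
    """Batch normalization applied separately to the four quaternion components."""
    def __init__(self, k):
        super().__init__()
        self.bns = nn.ModuleList([nn.BatchNorm1d(k) for _ in range(4)])

    def forward(self, q):                  # q: tuple of four (batch, k) tensors
        return tuple(bn(component) for bn, component in zip(self.bns, q))
```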

Kinship UMLS
MRR H@1 @3 @10 MRR @1 @3 @10
GNTP-Standard minervini2019differentiable .72 .59 .82 .96 .80 .70 .88 .95
GNTP-Attention minervini2019differentiable .76 .64 .85 .96 .86 .76 .95 .98
NTP minervini2019differentiable .35 .24 .37 .57 .80 .70 .88 .95
NeuralLP minervini2019differentiable .62 .48 .71 .91 .78 .64 .87 .96
MINERVA minervini2019differentiable .72 .60 .81 .92 .83 .73 .90 .97
ConvE dettmers2018convolutional .83 .74 .92 .98 .94 .92 .96 .99
QMult (batch) .88 .81 .94 .99 .96 .93 .98 1.0
QMult (unit) .69 .58 .78 .90 .77 .69 .82 .93
ConvQ (batch) .86 .77 .93 .98 .92 .86 .98 1.0
ConvQ (unit) .61 .49 .68 .85 .55 .45 .59 .75
OMult (batch) .87 .80 .94 .99 .95 .91 .98 1.0
OMult (unit) .69 .57 .77 .89 .76 .66 .82 .93
ConvO (batch) .86 .77 .93 .98 .90 .82 .98 1.0
ConvO (unit) .65 .53 .72 .86 .56 .46 .61 .78
Table 8: Batch normalization vs. unit normalization for link prediction.

6.5 Convergence on YAGO3-10

Figure 1 indicates that the incurred binary cross entropy losses decrease significantly within the first 100 epochs. Thereafter, ConvQ and ConvO appear to converge, as their losses no longer fluctuate, whereas the training losses of QMult and OMult continue to fluctuate.

Figure 1: Convergence on the training set.
WN18 FB15K
Model Param. MRR Hit@10 Hit@3 Hit@1 Param. MRR Hit@10 Hit@3 Hit@1
TransE - .495 .943 .888 .113 - .463 .749 .578 .297
TransR - .605 .940 .876 .335 - .346 .582 .404 .218
ER-MLP - .712 .863 .775 .626 - .288 .501 .317 .173
RESCAL - .890 .928 .904 .842 - .354 .587 .409 .235
HolE - .938 .949 .945 .930 - .524 .739 .613 .402
SimplE - .942 .947 .944 .939 - .727 .838 .773 .660
TorusE - .947 .954 .950 .943 - .733 .832 .771 .674
RotatE - .947 .961 .953 .938 - .699 .872 .788 .585
QuatE - .949 .960 .954 .941 - .770 .878 .821 .700
QuatE - .950 .962 .954 .944 - .833 .900 .859 .800
QMult 16.39M .975 .980 .976 .972 7.05M .755 .896 .819 .668
OMult 16.39M .975 .981 .976 .972 7.05M .748 .889 .813 .660
ConvQ 21.51M .976 .980 .977 .973 12.17M .813 .923 .868 .743
ConvO 21.51M .976 .980 .977 .973 12.17M .810 .923 .865 .739
Table 9: Link prediction results on WN18 and FB15K.

6.6 Link Prediction Results on Previous Benchmark Datasets

Table 9 reports results on WN18 and FB15K, showing that our approaches ConvQ and ConvO outperform state-of-the-art approaches in 6 out of 8 metrics on these datasets.

7 Discussion

Our approaches often outperform many state-of-the-art approaches on all datasets. QMult and OMult outperform many state-of-the-art approaches, including DistMult and ComplEx. These results indicate that scoring functions based on hypercomplex multiplications are more effective than scoring functions based on real- and complex-valued multiplications. This observation corroborates the findings of zhang2019quaternion. ConvO often performs slightly better than ConvQ, while QMult and OMult perform particularly well on YAGO3-10. These results may stem from the fact that ConvQ and ConvO benefit from initializing parameters with the correct variance, as highlighted by hanin2018start. Overall, the superior performance of our models stems from (1) the hypercomplex embeddings and (2) the inclusion of convolution operations. Our models are allowed to degenerate into ComplEx or DistMult if necessary (see Section 4). The inclusion of a convolution operation followed by an affine transformation permits finding a good ratio between expressiveness and the number of parameters.

8 Conclusion

In this study, we presented effective compositions of convolution operations with hypercomplex multiplications in the quaternion and octonion algebras to address the link prediction problem. Experimental results showed that QMult and OMult, which perform hypercomplex multiplications on hypercomplex-valued embeddings of entities and relations, are effective methods to tackle the link prediction problem. ConvQ and ConvO forge QMult and OMult with convolution operations followed by an affine transformation. By virtue of this novel composition, ConvQ and ConvO facilitate finding a good ratio between expressiveness and the number of parameters. Our experiments suggest that (1) generalizing real- and complex-valued models such as DistMult and ComplEx to the hypercomplex space is beneficial, particularly for larger knowledge graphs, (2) the scaling effect of hypercomplex multiplication can be tackled more effectively with batch normalization than with unit normalization, and (3) ensembling can be used to further increase generalization performance.

In future work, we plan to investigate the generalization of our approaches to temporal knowledge graphs as well as translation-based models in hypercomplex vector spaces.

References