
Jointly Embedding Relations and Mentions for Knowledge Population

by   Miao Fan, et al.

This paper contributes a joint embedding model for predicting relations between a pair of entities in the scenario of relation inference. It differs from most stand-alone approaches, which operate separately on either knowledge bases or free texts. The proposed model simultaneously learns low-dimensional vector representations both for triplets in knowledge repositories and for the mentions of relations in free texts, so that we can leverage the evidence from both resources to make more accurate predictions. We use NELL to evaluate the performance of our approach against cutting-edge methods. Results of extensive experiments show that our model achieves significant improvement on relation extraction.




1 Introduction

Relation extraction [Bach and Badaskar2007, Grishman1997, Sarawagi2008], which aims at discovering the relationships between a pair of entities, is a significant research direction for adding more beliefs to knowledge bases. Most stand-alone approaches, however, either use local graph patterns in knowledge repositories or extract features from text mentions to predict relations between two entities on their own. This heterogeneity creates a gap between structured repositories and unstructured free texts, which frustrates the goal of sharing evidence between knowledge and natural language.

For decades, scientists have either benchmarked their methods on public text datasets such as ACE [GuoDong et al.2005] and MUC [Zelenko et al.2003], or looked for effective approaches [Gardner et al.2013, Lao et al.2011] to improving the accuracy of link prediction within knowledge bases such as NELL [Carlson et al.2010] and Freebase [Bollacker et al.2007]. Thanks to research on distantly supervised relation extraction [Fan et al.2014a, Mintz et al.2009], which replaces manual annotation by automatically aligning beliefs with the relation mentions in free texts, NELL can not only extract triplets, i.e. <head entity, relation, tail entity>, but also collect the text between the two entities as evidence of the relation mention. For example, a belief in NELL records a head entity, a tail entity, the relation between them, and the text mention that appears between the two entities to indicate that relation.

Fortunately, embedding techniques [Fan et al.2014b, Mikolov et al.2013] allow us to break through the limitation of heterogeneous resources and to establish a connection between a relation and its corresponding mention by learning a specific vector representation for each element: the entities and relations in triplets, and the words in mentions. More specifically, we propose a joint relation mention embedding (JRME) model, which simultaneously learns low-dimensional vector representations for entities and relations in knowledge repositories while also training a dedicated embedding for each word in the relation mentions. This model lets us combine the benefits of the two resources to make more accurate predictions. We use two different datasets extracted from NELL to evaluate the performance of JRME against cutting-edge methods. It turns out that our model achieves significant improvement on relation extraction.

2 Related Work

Figure 1: Given a belief in NELL, (a) shows the distributed representations of a triplet in the knowledge space, and (b) illustrates word embeddings in the text space.

We group some recent work on relation extraction into two categories, i.e. text-based approaches and knowledge-based methods. Generally speaking, both camps seek better evidence to make more accurate predictions. The text-based community focuses on linguistic features, such as words combined with POS tags, that indicate the relations, while the other side conducts relation inference from the local connecting patterns between entity pairs learnt from the knowledge graph established by the beliefs.

2.1 Text-based Approaches

It is believed that the text between two recognized entities in a sentence indicates their relationship to some extent. To implement a relation extraction system guided by supervised learning, a key step is to annotate the training data. Two branches have therefore emerged:

  • Relation extraction with manually annotated corpora: Traditional approaches benchmark their performance on public text datasets annotated by experts, such as ACE and MUC. They choose different features extracted from the texts, like kernel features [Zelenko et al.2003] or semantic parser features [GuoDong et al.2005]; a comprehensive survey [Sarawagi2008] gives more details about this branch.

  • Relation extraction with distant supervision: Due to the limited scale and tedious labor of manual annotation, scientists have explored an alternative way to automatically generate large-scale annotated corpora, known as distant supervision [Mintz et al.2009]. Even though this cutting-edge technique solves the shortage of annotated corpora, we still suffer from noisy and sparse features [Fan et al.2014a].

2.2 Knowledge-based Methods

Knowledge bases contain millions of entries, usually represented as triplets, i.e. <head entity, relation, tail entity>, which intuitively inspires us to regard the whole repository as a graph in which entities are nodes and relations are edges. One research community therefore aims to predict unknown relations that may exist between two entities by learning the linking patterns, while another promising research group tries to learn structured embeddings of knowledge bases.

  • Relation prediction with graph patterns: Some canonical studies [Gardner et al.2013, Lao et al.2011] adopt a data-driven random walk model, which follows the paths from the head entity to the tail entity on the local graph structure to generate non-linear feature combinations representing relations, and then uses logistic regression to select the significant features that help classify other entity pairs holding the given relation.

  • Relation prediction with embedding representations: Bordes et al. [Bordes et al.2013, Bordes2011] propose an alternative: embedding the whole knowledge graph by learning a specific low-dimensional vector for each entity and relation, so that relations can be predicted with simple vector calculations instead.

Our model (JRME) draws most on the latest state-of-the-art embedding approaches, TransE [Bordes et al.2013] and IIKE [Fan et al.2015a]. Therefore, we re-implement them as the rival methods and conduct extensive comparisons in the subsequent experiments.

3 Model

The heterogeneity between free texts and knowledge bases poses a challenge: we can hardly use the features uniformly, since they live in different spaces and have varying dimensions. Thankfully, embedding techniques [Fan et al.2014b, Mikolov et al.2013, Fan et al.2015b, Fan et al.] suggest that almost all the elements, including words, entities, and relations, can be assigned learnt distributed representations; the task that remains is to jointly learn embeddings for entities, relations, and words in the same feature space.

We arrange the subsequent content as follows: Sections 3.1 and 3.2 describe how to model the knowledge and the texts individually, and Section 3.3 presents the proposed joint embedding model.

3.1 Knowledge Relation Embedding

Inspired by TransE [Bordes et al.2013], we regard the relation r between a pair of entities h and t as a translation, due to the hierarchical structure of knowledge graphs. Therefore, we use the score below to denote the plausibility of a triplet, as illustrated by Figure 1(a):

f(h, r, t) = ||h + r - t||,    (1)

where the closer h + r is to t, the more likely the triplet exists. The bold symbols indicate the vector representations, e.g. the embedding of the head entity h is h ∈ R^d, where d is the dimension.
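The translation-based score of Equation (1) can be sketched in a few lines of Python; the toy vectors and names below are illustrative assumptions, not the authors' code.

```python
import numpy as np

d = 50                          # embedding dimension (illustrative)
rng = np.random.default_rng(0)
h = rng.normal(size=d)          # head entity embedding
r = rng.normal(size=d)          # relation embedding
t = rng.normal(size=d)          # tail entity embedding

def f(h, r, t):
    """Triplet dissimilarity: the closer h + r is to t, the lower the score."""
    return np.linalg.norm(h + r - t)

score = f(h, r, t)              # lower score = more plausible triplet
```

A perfectly fitting triplet, where t exactly equals h + r, scores zero.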

Let R be the set of relations. Given a correct triplet (h, r, t), we aim to push away all the possible corrupt triplets (h, r', t) with wrong relations r' ∈ R \ {r}. Therefore, we adopt a margin-based ranking loss with a margin γ_K that separates all the negative triplets in the corrupted set Δ' from all the positives in the correct knowledge base Δ:

L_K = Σ_{(h,r,t) ∈ Δ} Σ_{(h,r',t) ∈ Δ'} [γ_K + f(h, r, t) - f(h, r', t)]_+,    (2)

in which [x]_+ = max(0, x) is the hinge loss function.
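A minimal sketch of this margin-based ranking loss for one correct triplet and its corrupted variants; the function names are assumptions for illustration.

```python
import numpy as np

def f(h, r, t):
    """Triplet dissimilarity from Equation (1)."""
    return np.linalg.norm(h + r - t)

def hinge(x):
    """[x]_+ = max(0, x)."""
    return max(0.0, x)

def kre_loss(h, t, r_correct, wrong_relations, gamma_k):
    """Margin-based ranking loss (Equation 2): require every corrupted
    triplet (h, r', t) to score at least gamma_k worse than (h, r, t)."""
    pos = f(h, r_correct, t)
    return sum(hinge(gamma_k + pos - f(h, r_wrong, t))
               for r_wrong in wrong_relations)
```

When a corrupted relation is already further than the margin, its hinge term contributes zero, so training focuses on the violating negatives.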

3.2 Text Mention Embedding

Similar to the Knowledge Relation Embedding (KRE), we can also measure the closeness between a mention and its corresponding relation in Text Mention Embedding (TME). To obtain the embedding m of a mention m, we sum the embeddings of all the words in the mention, as shown by Equation (3). Because all the words and relations are represented as vectors with the same dimension, as demonstrated by Figure 1(b), we can adopt the inner product in Equation (4) to calculate their similarity:

m = Σ_{w ∈ m} w,    (3)

g(m, r) = m · r.    (4)
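Equations (3) and (4) amount to a bag-of-words sum followed by a dot product. The toy two-word vocabulary below is an illustrative assumption.

```python
import numpy as np

d = 4  # embedding dimension (illustrative)
word_vec = {                                   # toy word embeddings
    "plays": np.array([1.0, 0.0, 0.0, 0.0]),
    "for":   np.array([0.0, 1.0, 0.0, 0.0]),
}

def mention_embedding(words):
    """Equation (3): a mention embedding is the sum of its word embeddings."""
    return np.sum([word_vec[w] for w in words], axis=0)

def g(m_vec, r_vec):
    """Equation (4): inner-product similarity; higher means a better fit."""
    return float(np.dot(m_vec, r_vec))
```

Because mentions and relations share one d-dimensional space, no extra projection is needed before comparing them.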
Before learning with the margin-based ranking loss, we need to construct the negative set Ω' for each relation-mention pair (r, m) that appears in the correct training set Ω. To generate the negative pairs (r', m), we keep the mention but iteratively substitute other relations r' from the set of relations R. Formula (5) then discriminates between the two opposing sets with a margin γ_T:

L_T = Σ_{(r,m) ∈ Ω} Σ_{(r',m) ∈ Ω'} [γ_T - g(m, r) + g(m, r')]_+.    (5)
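The TME loss mirrors the KRE loss, but since g is a similarity rather than a distance, the sign inside the hinge flips: the correct relation should be more similar to the mention by at least the margin. A sketch under that assumption:

```python
import numpy as np

def hinge(x):
    """[x]_+ = max(0, x)."""
    return max(0.0, x)

def tme_loss(m_vec, r_correct, wrong_relations, gamma_t):
    """Formula (5) for one mention: require the correct relation to be at
    least gamma_t more similar to the mention than every corrupted one."""
    pos = float(np.dot(m_vec, r_correct))
    return sum(hinge(gamma_t - pos + float(np.dot(m_vec, r_wrong)))
               for r_wrong in wrong_relations)
```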
3.3 Joint Relation Mention Embedding

Due to the uniform modeling standard of KRE and TME, we can jointly embed the relations and their corresponding mentions (JRME) with Equation (6),

L = L_K + L_T,    (6)

in which each belief (h, r, t, m) in the training set contains two entities, the relation, and its corresponding mention.

Once we have learnt the embeddings for all the entities, relations, and words in mentions, we can simply use Equation (7) to measure the plausibility of a relation r appearing between a pair of entities (h, t) with the evidence of the mention m:

S(h, r, t, m) = f(h, r, t) - g(m, r),    (7)

where a lower score indicates a more plausible relation.
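Prediction then reduces to scoring and sorting candidate relations. The sketch below assumes the combined score subtracts the mention-relation similarity from the triplet distance, so candidates are ranked in ascending order; the names are illustrative.

```python
import numpy as np

def predict(h, t, m_vec, candidate_relations):
    """Rank (name, vector) candidates by the assumed combined score
    S = ||h + r - t|| - m . r; lower scores rank first."""
    def score(r_vec):
        return np.linalg.norm(h + r_vec - t) - float(np.dot(m_vec, r_vec))
    return sorted(candidate_relations, key=lambda nv: score(nv[1]))

# Illustrative usage: the relation matching t - h ranks first.
ranking = predict(np.zeros(2), np.array([1.0, 0.0]), np.zeros(2),
                  [("bad", np.array([0.0, 1.0])),
                   ("good", np.array([1.0, 0.0]))])
```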
4 Experiments

We set up three objectives for evaluating the effectiveness of JRME, which are:

  • testing the effectiveness of JRME in terms of different evaluation protocols/metrics;

  • comparing the performances of JRME with other cutting-edge approaches;

  • judging the robustness of the proposed model by using a larger but noisy dataset.

Sections 4.1 and 4.2 present the datasets and the protocols we use to measure performance against several state-of-the-art approaches, i.e. TransE [Bordes et al.2013] and IIKE [Fan et al.2015a]. Section 4.3 details the hyperparameters, and Section 4.4 reports the results of the extensive experiments.

4.1 Datasets

We prepare two datasets with different statistical characteristics. As illustrated by Table 1, both are generated by NELL [Carlson et al.2010], a Never-Ending Language Learner that automatically extracts beliefs from the Web. NELL-50K is a medium-sized dataset in which each belief, containing the head entity, the tail entity, the relation between them, and the mention indicating the relation, is validated by experts. NELL-5M, however, is a much larger one, with five million uncertain training examples automatically learnt from the Web by NELL.

                    NELL-50K    NELL-5M
#(ENTITIES)         29,904      177,635
#(RELATIONS)        233         236
#(TRAINING EX.)     57,356      5,000,000
#(VALIDATING EX.)   10,710      47,335
#(TESTING EX.)      10,711      47,335
Table 1: Statistics of the datasets used for the relation prediction task.

4.2 Protocols

The experimental scenario is as follows: given a pair of entities, a short text/mention indicating the correct relation, and a set of candidate relations, we compare the performance of our models against other state-of-the-art approaches with the following metrics:

  • Average Rank: Each candidate relation gains a score calculated by Equation (7). We sort the candidates in ascending order and compare against the corresponding ground-truth belief. For each belief in the testing set, we record the rank of the correct relation. The average rank is an aggregate indicator that, to some extent, judges the overall relation extraction performance of an approach.

  • Hit@10: Besides the average rank, practitioners in industry care more about the accuracy of extraction when selecting the top-10 relations. This metric shows the proportion of beliefs for which the correct relation is ranked in the top 10.

  • Hit@1: This is a stricter metric suited to fully automatic systems, since it demonstrates the accuracy when just picking the first relation in the sorted list.
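Given the 1-indexed rank of the correct relation for each test belief, the three protocols can be computed directly; the rank values below are illustrative, not taken from the experiments.

```python
def average_rank(ranks):
    """Mean position of the correct relation across test beliefs."""
    return sum(ranks) / len(ranks)

def hit_at(ranks, k):
    """Fraction of beliefs whose correct relation lands in the top k."""
    return sum(1 for r in ranks if r <= k) / len(ranks)

ranks = [1, 3, 12, 2, 1]       # illustrative ranks for five test beliefs
avg = average_rank(ranks)      # 3.8
hit10 = hit_at(ranks, 10)      # 0.8
hit1 = hit_at(ranks, 1)        # 0.4
```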

4.3 Hyperparameters

Before presenting the evaluation results, we elaborate on the hyperparameters we tried and the best combination we chose. Another advantage of embedding-based models is that few hyperparameters need tuning. Our model only needs four: the uniform dimension d of entities, relations, and the words in mentions; the margin γ_K of KRE; the margin γ_T of TME; and the margin of JRME. To decide the ideal setting, we use the validation set to pick the best combination from a grid of candidate values, and we train the embeddings with the combination that performs best on the validation set.

4.4 Performance

Tables 2 and 3 show the results of the experiments on NELL-50K and NELL-5M, respectively. Both show that JRME performs best among all the approaches we implemented. We can also see that text mentions contribute substantially to predicting the correct relations. Moreover, Table 3 demonstrates that not only is IIKE robust to the noise in the NELL-5M dataset, which is consistent with the characteristics emphasized by Fan et al. [Fan et al.2015a], but TME and JRME share this robustness as well. Overall, JRME improves the average rank of relation prediction by about 20% compared with the state-of-the-art IIKE.

         Avg. Rank   Hit@10   Hit@1
TransE   131.8       16.3%    3.0%
KRE      29.1        44.3%    14.4%
TME      11.5        80.0%    56.0%
IIKE     7.5         81.8%    56.8%
JRME     6.2         87.8%    60.2%
Table 2: Performance of TransE, KRE, IIKE, TME and JRME on the metrics of Average Rank, Hit@10 and Hit@1 on the NELL-50K dataset.
         Avg. Rank   Hit@10   Hit@1
TransE   77.1        5.4%     0.7%
KRE      57.5        17.9%    2.5%
TME      3.6         96.3%    63.6%
IIKE     4.5         82.6%    53.2%
JRME     3.0         96.7%    68.0%
Table 3: Performance of TransE, KRE, IIKE, TME and JRME on the metrics of Average Rank, Hit@10 and Hit@1 on the NELL-5M dataset.

5 Conclusion

We bridge the gap between unstructured free texts and structured knowledge bases by proposing a joint embedding model that predicts more accurate relations between any given entity pair for knowledge population. The results of extensive experiments with various evaluation protocols on both a medium and a large NELL dataset demonstrate that our model (JRME) outperforms other state-of-the-art approaches. Because entities, relations, and even words share uniform low-dimensional vector representations, the evidence for prediction is compressed into embeddings that facilitate information exchange and computation, which leads to a substantial improvement in relation extraction.

Several open questions remain in this promising research direction, such as exploring better ways to embed whole beliefs or mentions without losing too many of the regularities of knowledge and language.


The first author conducted this research while he was a joint-supervision Ph.D. student at New York University. This paper is dedicated to all the members of the Proteus Project.


  • [Bach and Badaskar2007] Nguyen Bach and Sameer Badaskar. 2007. A review of relation extraction. Literature review for Language and Statistics II.
  • [Bollacker et al.2007] Kurt Bollacker, Robert Cook, and Patrick Tufts. 2007. Freebase: A shared database of structured general human knowledge. In AAAI, volume 7, pages 1962–1963.
  • [Bordes et al.2013] Antoine Bordes, Nicolas Usunier, Alberto Garcia-Duran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multi-relational data. In Advances in Neural Information Processing Systems, pages 2787–2795.
  • [Carlson et al.2010] Andrew Carlson, Justin Betteridge, Bryan Kisiel, Burr Settles, Estevam R. Hruschka Jr., and Tom M. Mitchell. 2010. Toward an architecture for never-ending language learning. In Proceedings of the Twenty-Fourth Conference on Artificial Intelligence (AAAI 2010).
  • [Fan et al.] Miao Fan, Qiang Zhou, Andrew Abel, and Thomas Fang Zheng. Probabilistic belief embedding for large-scale knowledge population.
  • [Fan et al.2014a] Miao Fan, Deli Zhao, Qiang Zhou, Zhiyuan Liu, Thomas Fang Zheng, and Edward Y. Chang. 2014a. Distant supervision for relation extraction with matrix completion. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 839–849, Baltimore, Maryland, June. Association for Computational Linguistics.
  • [Fan et al.2014b] Miao Fan, Qiang Zhou, Emily Chang, and Thomas Fang Zheng. 2014b. Transition-based knowledge graph embedding with relational mapping properties. In Proceedings of the 28th Pacific Asia Conference on Language, Information, and Computation, pages 328–337, Phuket,Thailand, December. Department of Linguistics, Chulalongkorn University.
  • [Fan et al.2015a] Miao Fan, Qiang Zhou, and Thomas Fang Zheng. 2015a. Learning embedding representations for knowledge inference on imperfect and incomplete repositories. arXiv preprint arXiv:1503.08155.
  • [Fan et al.2015b] Miao Fan, Qiang Zhou, Thomas Fang Zheng, and Ralph Grishman. 2015b. Probabilistic belief embedding for knowledge base completion. arXiv preprint arXiv:1505.02433.
  • [Gardner et al.2013] Matt Gardner, Partha Pratim Talukdar, Bryan Kisiel, and Tom M. Mitchell. 2013. Improving learning and inference in a large knowledge-base using latent syntactic cues. In EMNLP, pages 833–838. ACL.
  • [Grishman1997] Ralph Grishman. 1997. Information extraction: Techniques and challenges. In International Summer School on Information Extraction: A Multidisciplinary Approach to an Emerging Information Technology, SCIE ’97, pages 10–27, London, UK, UK. Springer-Verlag.
  • [GuoDong et al.2005] Zhou GuoDong, Su Jian, Zhang Jie, and Zhang Min. 2005. Exploring various knowledge in relation extraction. In Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics, ACL ’05, pages 427–434, Stroudsburg, PA, USA. Association for Computational Linguistics.
  • [Lao et al.2011] Ni Lao, Tom Mitchell, and William W. Cohen. 2011. Random walk inference and learning in a large scale knowledge base. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 529–539, Edinburgh, Scotland, UK, July. Association for Computational Linguistics.
  • [Mikolov et al.2013] Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.
  • [Mintz et al.2009] Mike Mintz, Steven Bills, Rion Snow, and Dan Jurafsky. 2009. Distant supervision for relation extraction without labeled data. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 2-Volume 2, pages 1003–1011. Association for Computational Linguistics.
  • [Sarawagi2008] Sunita Sarawagi. 2008. Information extraction. Foundations and trends in databases, 1(3):261–377.
  • [Zelenko et al.2003] Dmitry Zelenko, Chinatsu Aone, and Anthony Richardella. 2003. Kernel methods for relation extraction. The Journal of Machine Learning Research, 3:1083–1106.