Namesakes: Ambiguously Named Entities from Wikipedia and News

by   Oleg Vasilyev, et al.

We present Namesakes, a dataset of ambiguously named entities obtained from English-language Wikipedia and news articles. It consists of 58862 mentions of 4148 unique entities and their namesakes: 1000 mentions from news, 28843 from Wikipedia articles about the entity, and 29019 Wikipedia backlink mentions. Namesakes should be helpful in establishing challenging benchmarks for the task of named entity linking (NEL).






1 Introduction

Recent advances have made it possible to incorporate knowledge into distributed neural representations Min et al. (2020); Nooralahzadeh and Øvrelid (2018). A fundamental component of such systems is named entity linking (NEL) Yang and Chang (2015); Sorokin and Gurevych (2018); Kolitsas et al. (2018); Li et al. (2020); Sevgili et al. (2021). Given a text, the task is to correctly identify mentions of named entities by linking to the correct reference entities in a knowledge base, e.g. Wikipedia. As the world evolves, new entities and new information about existing entities must be tracked with a dynamic knowledge base.

If every entity had a unique name like a bar code, NEL would be easy. But we live in a world populated by "Michael Jordan", a name shared by a renowned computer scientist, a famous basketball player, a famous actor, and many others. There are more than 20 entities with the surface form "Michael Jackson" in Wikipedia, and the Wikipedia disambiguation pages for some names include hundreds of unique entities.

In some definitions the NEL task includes named entity recognition (NER), the initial tagging of named entity mentions in the target text Yang and Chang (2015); Sorokin and Gurevych (2018); Kolitsas et al. (2018); Li et al. (2020); Sevgili et al. (2021). Here we focus on the narrower sense of NEL, which assumes the initial tagging of named entities is already done Rao et al. (2012); Wu et al. (2020); Logeswaran et al. (2019). The mentions of entities in a text corpus are generally provided as spans that capture various surface forms; for example, for the entity Michael Jordan, mentions include "M. Jordan", "Jordan", and "Michael Jordan".

Most existing NEL-related datasets do not focus on highly ambiguous names, e.g. WikiDisamb30 Ferragina and Scaiella (2012), ACE and MSNBC Ratinov et al. (2011), WNED-CWEB and WNED-WIKI Guo and Barbosa (2018), CoNLL-YAGO Hoffart et al. (2011), and the TAC KBP Entity Discovery and Linking dataset Ji et al. (2017). The recently introduced Ambiguous Entity Retrieval (AmbER) dataset by Chen et al. (2021) is an exception, including subsets of identically named entities for the purpose of fact checking, slot filling, and question-answering tasks. However, AmbER is limited to Wikipedia text and was automatically generated.

The unique contribution of Namesakes Vasilyev et al. (2021) is its diversity (it includes news mentions) and its high quality, ensured by manual data labeling. The importance of manual labeling will become clear in the data description that follows. In this paper we describe in detail the data selection, filtering, and composition of Namesakes. We also define and quantify the ambiguity of the entity mentions in Namesakes. The aim of Namesakes is to help researchers distinguish the performance of NEL systems with highly challenging, realistic data.

2 Dataset composition

2.1 Data motivation

The primary motivation for this work is to create a more challenging NEL dataset where:

  1. The names of most entities must be ambiguous, i.e. with Wikipedia disambiguation pages linking to multiple Wikipedia articles.

  2. Most entities must belong to one of three categories - person, location, organization - preferably in balanced proportions.

  3. Entity contexts are captured by three distinct components:

    1. Entities: Wikipedia articles describing named entities.

    2. Backlinks: Wikipedia articles that reference members of Entities.

    3. News: News articles containing mentions from Entities. The News mentions may be true references to members of Entities or have name collisions with them.

  4. All three components should have reasonably clean text chunks (at least in the neighborhood of the entities of interest), filtered for obscure reference sections, business reports, TV listings, etc.

The initial entity names we selected are listed in Appendix A. Only names with a Wikipedia disambiguation page are included, resulting in an initial count of 7626 Wikipedia entities.

2.2 Entities: selection for labeling

We filtered the preliminary 7626 Wikipedia entities by requiring that each text satisfies several reasonable conditions:

  1. The text must contain at least 5 "good sentences" after sentence tokenization with NLTK. The empirical conditions defining a "good sentence" are listed in Appendix B.

  2. The first named entity identified in the text by an NER model must be a person (PER), organization (ORG) or location (LOC), not miscellaneous (MISC). We performed NER using the "bert-large-cased-finetuned-conll03-english" model from the Hugging Face model hub.

  3. The text must contain at least 3 mentions that are similar to the Wikipedia entity's name; these mentions are picked up only from the "good sentences". Similarity here means that the mention contains at least one word of length >= 3 characters from the entity name. The motivation is to exclude Wikipedia entities with abnormally short texts, or at least to ensure the text contains mentions that can be confused with the "main" entity.
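The name-similarity condition above can be sketched in a few lines of Python (a simplified illustration of the filter, assuming whitespace tokenization; the function name is ours, not from the authors' code):

```python
def shares_long_word(mention: str, entity_name: str, min_len: int = 3) -> bool:
    """True if the mention contains at least one word of length >= min_len
    that also occurs in the entity name (the similarity condition)."""
    name_words = {w for w in entity_name.split() if len(w) >= min_len}
    return any(w in name_words for w in mention.split())

# A page passes the filter only if at least 3 mentions in its
# "good sentences" are similar to the page's entity name.
print(shares_long_word("M. Jordan", "Michael Jordan"))  # True
print(shares_long_word("he said", "Michael Jordan"))    # False
```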

With the highly ambiguous entities we started with, and after the above filtering, the resulting Wikipedia pages had a high frequency of potentially confusing entities. Most of the ambiguity derives from having similar or identical names shared among multiple Wikipedia entities. Additional confusion comes from a Wikipedia entity text frequently containing not only the "main" entity (the main focus of the article), but additional entities with the same or similar names described in the article. These additional entities are often relatives of a "main" person entity, or locations and organizations of the same or similar name.

The goal of our labeling was to identify in each text the mentions of the "main" entity and of the "other" entities. In our final selection for labeling, we selected the top entity names by the number of Wikipedia pages in which such names occurred; specifically, the top 100 names of each type: person, organization, location. Our resulting dataset for labeling contains 4148 Wikipedia entities. For each entity text, all the mentions with names similar to the name of the entity were tagged as "categorize", requesting annotators to replace the tag by either "Same" or "Other". The tag "Same" means that the mention refers to the entity to which the text is devoted; the tag "Other" means that, despite the same or similar name, the tagged mention refers to a different entity.

2.3 Entities data

The labeled Entities data is the output of our manual labeling process, undertaken by an annotation team at Odetta using the annotation software tool Datasaur. The team consisted of 6 annotators who were experienced with NLP projects and passed a trial training task for this project. Only the most reliable tagged entity mentions were kept:

  1. The mentions to which all six annotators assigned the "Same" tag.

  2. The mentions on which only one annotator disagreed with the rest; such mentions were confirmed via reconciliation.

The final result was 21426 "Same" mentions and 7417 "Other" mentions for the 4148 Wikipedia entities.

2.4 News data

The News component of our dataset was created by querying Primer's proprietary news corpus for the ambiguous entity names from the Entities component. The "ambiguous" entities are those whose names (aliases, or last names of people) could be mixed up with at least three other entities. Once the news articles were obtained, they were filtered (see Appendix C) by excluding articles not satisfying the requirements of text quality and manageable labeling:

  1. The final number of news texts is 1000.

  2. Each text is between 500 and 3000 characters.

  3. Each text has a named entity mention found by the name query; this mention has to be labeled. The list of suggested labels must contain at least 3 but not more than 10 Wikipedia entities with which the mention can be confused, including the entity (if it exists) to which it belongs. Fewer than 3 Wikipedia entities fails to provide enough ambiguity, while more than 10 would be too time-consuming to label.

The goal of the labeling was to assign to each mention its correct Wikipedia entity (if it exists in the Entities dataset), from the list of 3-10 provided Wikipedia entities. The labeling was done using the annotation tool Datasaur, by 3 Odetta annotators, with subsequent reconciliation of all the mentions that caused a disagreement. As with the labeling of the Entities, the annotators were experienced in NLP projects and went through a trial task.

The resulting News data consists of 1000 texts, each text with one annotated mention. Of these mentions, 276 do exist in the Entities data, and 724 do not exist in the Entities data (but can be easily confused with many entities from there).

2.5 Backlinks data

The Backlinks dataset was built from a Wikipedia dump from July 1, 2021 by collecting (and then thoroughly filtering for quality) the Wikipedia pages that link to entities in the Entities dataset. For example, if the person described in the "John Muir" Wikipedia article were a member of the Entities dataset, then the "National Park" Wikipedia page would be one of our candidate backlinks, because its text includes a hyperlink with anchor text "John Muir" that links to the "John Muir" page.

Many links happen to occur in reference-like sections of Wikipedia rather than in normal text; this requires careful filtering. Our filtering and cleaning included (for details see Appendix D):

  1. Removing Wikipedia pages with titles that start with certain words, such as "List", "MediaWiki", etc.

  2. Removing the bottom part of the page text, starting from certain sections named like "Notes", "References", etc.

  3. Only mentions from "good" parts of the Wikipedia page are kept. If a page loses all its mentions, the page is removed.

  4. Any page text is cut 1000 characters after the last mention (the one occurring lowest in the text).

The resulting Backlinks dataset contains 26903 text chunks from Wikipedia pages, and 29019 linked mentions in the pages.

2.6 Resulting dataset

The resulting dataset Namesakes consists of three closely related datasets: Entities, News, and Backlinks. The structure of the dataset is shown in Figure 1, the details of the figure will be explained in the following subsections and in Section 3. The Entities and Backlinks consist of Wikipedia text chunks. The News consists of random news chunks. The Entities and News are human-labeled, resolving the mentions of the entities. The Backlinks are not labeled, but have mentions already linked by Wikipedia. In this section we summarize the structure of the data.

Figure 1: Structure of Namesakes. An attempt to link a mention to the KB (Entities with "Same" mentions) has a potential confusion - Ambiguity (discussed in Section 3 and defined in Appendix E). Each count shown is the number of mentions. Some mentions referring to KB entities coexist with mentions not existing in the KB; the percent of such mentions is indicated as "New". Some mentions that should not belong to the KB ("out-KB" News and "Other" Entities) nevertheless have namesakes among the "Same" mentions, as shown by the bifurcated arrows. This figure shows only one possible linking scenario, see Section 4.

2.6.1 Entities

The Entities dataset consists of 4148 Wikipedia text chunks containing human-tagged mentions of entities. Each mention is tagged either as "Same" (meaning that the mention is of this Wikipedia page entity), or "Other" (meaning that the mention is of some other entity, just having the same or similar name). The Entities dataset is a jsonl list, each item is a dictionary with the following keys and values:

  1. Key "pagename": page name of the Wikipedia page.

  2. Key "pageid": page id of the Wikipedia page.

  3. Key "title": title of the Wikipedia page.

  4. Key "url": URL of the Wikipedia page.

  5. Key "entities": list of the mentions in the page text, each mention represented by a dictionary with the keys:

    1. Key "text": the mention as a string from the page text.

    2. Key "start": start character position of the mention in the text.

    3. Key "end": end (one-past-last) character position of the mention in the text.

    4. Key "tag": the annotation tag ("Same" or "Other") given to the mention.

  6. Key "text": the text chunk.

The texts contain 21426 mentions tagged "Same", and 7417 mentions tagged "Other".
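A minimal sketch of reading the Entities jsonl and tallying the tags (the file name is hypothetical; the keys are as listed above):

```python
import json
from collections import Counter

def load_entities(path: str):
    """Load the Entities jsonl; return the records and a tally of
    "Same"/"Other" tags over all mentions."""
    records, tags = [], Counter()
    with open(path, encoding="utf-8") as f:
        for line in f:
            rec = json.loads(line)
            records.append(rec)
            for mention in rec["entities"]:
                tags[mention["tag"]] += 1
    return records, tags

# records, tags = load_entities("entities.jsonl")
# Per the counts above: tags["Same"] == 21426, tags["Other"] == 7417
```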

2.6.2 News

The News dataset consists of 1000 news text chunks, each with a single annotated entity mention. The annotation either points to the corresponding entity from the Entities dataset (if the mention is of that entity), or indicates that the mention entity does not belong to the Entities dataset. The News dataset is a jsonl list, each item is a dictionary with the following keys and values:

  1. Key "id_text": id of the document (0, 1, 2, 3, …).

  2. Key "entity": a dictionary describing the annotated entity mention in the text:

    1. Key "text": the mention as a string found by an NER model in the text.

    2. Key "start": start character position of the mention in the text.

    3. Key "end": end (one-past-last) character position of the mention in the text.

    4. Key "tag": this key exists only if the mentioned entity is annotated as belonging to the Entities dataset; if so, the value is a dictionary identifying the Wikipedia page assigned by annotators to the mentioned entity:

      1. Key "pageid": Wikipedia page id.

      2. Key "pagetitle": page title.

      3. Key "url": page URL.

  3. Key "urls": list of URLs of Wikipedia entities suggested to labelers for identification of the entity mentioned in the text.

  4. Key "text": the text chunk.

Of the 1000 mentions, 276 do exist in the Entities dataset, and the rest do not (but have names that could be easily confused with one or more entities from there).
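The character offsets make the annotation easy to verify programmatically; a sketch (file name hypothetical) that checks each span and counts to-KB vs out-KB mentions by the presence of the "tag" key:

```python
import json

def check_news_spans(path: str):
    """Verify that (start, end) reproduces the mention string in each News
    record, and count to-KB vs out-KB mentions."""
    to_kb = out_kb = 0
    with open(path, encoding="utf-8") as f:
        for line in f:
            rec = json.loads(line)
            ent = rec["entity"]
            assert rec["text"][ent["start"]:ent["end"]] == ent["text"]
            if "tag" in ent:   # annotated as an entity from the Entities dataset
                to_kb += 1
            else:              # out-KB mention
                out_kb += 1
    return to_kb, out_kb

# On the full News dataset this should return (276, 724).
```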

2.6.3 Backlinks

The Backlinks dataset consists of two parts: dictionary Entity-to-Backlinks and Backlinks documents. The dictionary points to backlinks for each entity of the Entity dataset (if any backlinks exist for the entity). The Backlinks documents are the backlinks Wikipedia text chunks with identified mentions of the entities from the Entities dataset.

Each mention is marked by surrounding double square brackets, e.g. "Muir built a small cabin along [[Yosemite Creek]].". However, if the mention differs from the exact entity name, the double square brackets wrap both the exact name on the left and, separated by '|', the mention string on the right, for example: "Muir also spent time with photographer [[Carleton E. Watkins | Carleton Watkins]] and studied his photographs of Yosemite.".
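This markup is easy to parse; a sketch with a regular expression (the pattern is our assumption about the format described above, handling both the plain and the '|'-separated form):

```python
import re

# "[[Entity Name]]" or "[[Entity Name | surface form]]"
MENTION_RE = re.compile(r"\[\[([^\]|]+?)(?:\s*\|\s*([^\]]+?))?\s*\]\]")

def parse_mentions(content: str):
    """Return (entity_name, surface_form) pairs for all bracketed mentions."""
    pairs = []
    for m in MENTION_RE.finditer(content):
        name = m.group(1).strip()
        surface = (m.group(2) or name).strip()
        pairs.append((name, surface))
    return pairs

text = ("Muir built a small cabin along [[Yosemite Creek]]. Muir also spent time "
        "with photographer [[Carleton E. Watkins | Carleton Watkins]].")
print(parse_mentions(text))
# [('Yosemite Creek', 'Yosemite Creek'), ('Carleton E. Watkins', 'Carleton Watkins')]
```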

The Entity-to-Backlinks is a jsonl with 1527 items, each item is a tuple:

  1. Entity name.

  2. Entity Wikipedia page id.

  3. Backlinks ids: a list of pageids of backlink documents.

The Backlinks documents file is a jsonl with 26903 items, each item is a dictionary:

  1. Key "pageid": id of the Wikipedia page.

  2. Key "title": title of the Wikipedia page.

  3. Key "content": text chunk from the Wikipedia page, with all mentions in double brackets; the text is cut 1000 characters after the last mention, and the cut is denoted as "…[CUT]".

  4. Key "mentions": list of the mentions from the text, for convenience. Each mention is a tuple:

    1. Entity name.

    2. Entity Wikipedia page id.

    3. Sorted list of all character indexes at which the mention occurrences start in the text.

3 Dataset features

3.1 Entities dataset

The Entities dataset by itself is simple: its 4148 Wikipedia text chunks contain 28843 annotated mentions, of which 21426 are "Same" and 7417 are "Other". The 21426 "Same" mentions are occurrences of 2909 unique mentions as strings. The 7417 "Other" mentions are occurrences of 3754 unique mentions as strings. The distribution of the entities by the number of the mentions they have is shown in Figures 2 and 3.

Figure 2: In Entities: Distribution of entities by the number of the mentions in them. Area of (X,Y) marker is proportional to the number of entities having X "Same" and Y "Other" mentions.
Figure 3: In Entities: Histogram of entities by the number of mentions per entity. Two entities (having 23 mentions and 27 mentions) are cut from the histogram. The average number of "Same" mentions per entity is 5.2.

The 4148 entities of the Entities dataset have high overlap of their names. For example, if we define an "overlap" as two names having at least one common word of at least 4 characters, then 4132 of the 4148 entities overlap with at least one other. The most overlapping entity is "John William Smith (legal writer)", which overlaps with 1241 other entities. In our definition of the overlap we excluded all the bracketed categorizations, like "(legal writer)", because such words are generic and not really parts of the name. We can characterize the confusing potential of a dataset by a names overlap, which we define as the number of entities with which an average entity overlaps by at least one non-generic word of length >= 4 characters. For the Entities dataset the names overlap is 379.5.
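The names overlap described above can be computed directly; a sketch (function names are ours; we assume whitespace tokenization after stripping bracketed categorizations):

```python
import re
from itertools import combinations

def name_words(name: str, min_len: int = 4):
    """Non-generic words of a name: drop bracketed categorizations such as
    "(legal writer)", keep words of length >= min_len."""
    name = re.sub(r"\([^)]*\)", " ", name)
    return {w for w in name.split() if len(w) >= min_len}

def names_overlap(names):
    """Average number of other names sharing at least one long word."""
    words = {n: name_words(n) for n in names}
    counts = dict.fromkeys(names, 0)
    for a, b in combinations(names, 2):
        if words[a] & words[b]:
            counts[a] += 1
            counts[b] += 1
    return sum(counts.values()) / len(names)

toy = ["John William Smith (legal writer)", "John Smith", "Jane Doe"]
print(names_overlap(toy))  # the two John Smiths overlap: (1 + 1 + 0) / 3
```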

The real problem for NEL arises when the same mention is legitimately used for more than one entity. We can characterize the potential for confusing NEL directly in terms of the mentions: how many entities may an average mention refer to? More specifically, what is the average number of Wikipedia entities that had at least one "Same" mention coinciding with the considered mention? This characteristic, the mention-entities ambiguity, equals 181.3 for the "Same" mentions and 20.9 for the "Other" mentions (see Appendix E.1). This creates a strong potential for confusing NEL. The distribution of individual ambiguities is shown in Figure 4.

Figure 4: In Entities: cumulative distribution of mentions by the number of entities using the mention as "Same" (see Appendix E.1). On the right the total number of mentions reaches 21426 for "Same", and 7417 for "Other". The average ambiguity is 181.3 for "Same" and 20.9 for "Other".
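The mention-entities ambiguity can be sketched as follows (our reading of the definition above and Appendix E.1: for each mention occurrence, count the entities having that mention string as a "Same" mention, then average over all occurrences; names are hypothetical):

```python
from collections import defaultdict

def mention_entities_ambiguity(same_mentions):
    """`same_mentions` maps an entity id to its list of "Same" mention strings.
    Returns the average, over all mention occurrences, of the number of
    entities that use the same mention string as a "Same" mention."""
    users = defaultdict(set)  # mention string -> ids of entities using it
    for eid, mentions in same_mentions.items():
        for m in mentions:
            users[m].add(eid)
    occurrences = [len(users[m])
                   for mentions in same_mentions.values() for m in mentions]
    return sum(occurrences) / len(occurrences)

toy = {1: ["Michael Jordan", "Jordan"], 2: ["Jordan"]}
print(mention_entities_ambiguity(toy))  # (1 + 2 + 2) / 3
```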

3.2 News - Entities

The News dataset contains 1000 text chunks, each with one mention annotated either as belonging to the Entities dataset (276 cases) or not (724 cases). We will call these mentions "to-KB" and "out-KB" mentions, respectively. The 276 mentions recognized as belonging to the Entities are occurrences of 24 unique mentions. The 724 mentions recognized as not belonging to the Entities are occurrences of 54 unique mentions.

A strong possibility of confusion for NEL comes from the fact that some unique mentions from News coincide exactly with "Same" mentions from more than one entity in the Entities. Even stronger confusion comes from the News mentions that do not belong to the Entities but nevertheless coincide exactly with "Same" mentions from the Entities. Of the 54 unique mentions recognized as not belonging to the Entities, 37 coincide with at least one "Same" mention from the Entities; these account for 44% of the out-KB mention occurrences. The remaining 56%, while having some names overlap, do not exactly coincide with any "Same" mentions or with exact entity names, as depicted in Figure 1.

The mention-entities ambiguity is 1.9 for the 276 "to-KB" News mentions, and it is 1.6 for the 724 "out-KB" News mentions (see Appendix E.2). The distribution of individual ambiguities is shown in Figure 5.

Figure 5: News to Entities: cumulative distribution of mentions by the number of entities using the mention as "Same" (see Appendix E.2). On the right the total number of mentions reaches 276 for "to-KB", and 724 for "out-KB". The corresponding average ambiguities are 1.9 and 1.6.

3.3 Backlinks - Entities

The relation Backlinks-Entities is simpler than the relation News-Entities, because all the mentions in Backlinks do belong to the Entities (so no annotation was needed). All 29019 mention occurrences in Backlinks refer to 1527 entities of the Entities dataset. Since the mention "surface forms" do not always coincide with the entity names, there are 2399 (rather than 1527) unique mentions in the Backlinks dataset. As depicted in Figure 1, 10% of the mention occurrences do not coincide with any "Same" mentions or entity names of the Entities.

The mention-entities ambiguity equals 7.6 for Backlinks mentions (the ambiguity is with respect to the Entities, see Appendix E.2). The distribution of individual ambiguities is shown in Figure 6.

Figure 6: Backlinks to Entities: cumulative distribution of Backlinks mentions by the number of entities using the mention as "Same" in Entities. On the right of the plot the total number of mentions reaches 28991 (the remaining 28 mentions are cut from the plot, their number of entities is in the range 100 - 600). The average mention-entities ambiguity is 7.6.

4 Discussion

The Namesakes dataset is helpful to us for evaluating NEL models, and we hope it will be useful for the community as well. High ambiguity makes it easier to distinguish between otherwise similar levels of performance. There are several scenarios for using Namesakes for NEL evaluation, depending on what role is played by a knowledge base (KB) and by the external mentions (EM) that must be linked to the KB.

NEL for an EM mention may either link to a KB entity, or not link to the KB at all. The first outcome is incorrect in two cases: the mention refers to an entity which is not in KB, or the linking was done to an incorrect KB entity. The second outcome is incorrect if the mention refers to an entity existing in the KB (we will call such a mention a to-KB mention, and we will call any other mention an out-KB mention).

Problems for an NEL model may come not only from the ambiguity of the EM mentions with respect to the KB, but also from the difference in style of the EM texts versus KB texts. In Namesakes, Entities texts are written in clear Wikipedia style; News texts are more varied and may have much less context relevant to the mention; and Backlinks texts are in between: still in the Wikipedia style, but with context not centered around the mention.

A few simple scenarios of splitting Namesakes into EM and KB:

  1. News to Entities

  2. Backlinks to Entities

  3. News to Entities+Backlinks

  4. Entities to Entities

  5. All to All

The News-to-Entities scenario is relevant for evaluating NEL for a situation where the KB was created from knowledge-friendly texts (in our case Entities is from Wikipedia), and EM come from accidental encounters with named entities in varied sources (in our case News texts). As in most scenarios, the evaluation must include three numbers:

  1. For to-KB mentions:

    1. Percent of mentions linked to a wrong KB entity.

    2. Percent of mentions not linked to the KB.

  2. For out-KB mentions: percent of mentions linked to the KB.

Arguably, it is most important to have the third listed percent low, almost as important to have the first one low, and less important but desirable to have the second percent low.
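A sketch of computing these three percentages (our formulation; we assume gold and predicted links are represented as KB entity ids, with None meaning "not in KB" or "not linked"):

```python
def nel_scores(predictions):
    """`predictions` is a list of (gold, pred) pairs; each element is a KB
    entity id, or None (gold None = out-KB mention, pred None = not linked).
    Returns the three percentages described above."""
    to_kb = [(g, p) for g, p in predictions if g is not None]
    out_kb = [(g, p) for g, p in predictions if g is None]
    wrong_link = 100 * sum(p is not None and p != g for g, p in to_kb) / len(to_kb)
    not_linked = 100 * sum(p is None for _, p in to_kb) / len(to_kb)
    false_link = 100 * sum(p is not None for _, p in out_kb) / len(out_kb)
    return wrong_link, not_linked, false_link

# One to-KB mention linked correctly, one linked wrongly, one missed;
# one out-KB mention falsely linked, one correctly left unlinked.
preds = [("A", "A"), ("A", "B"), ("A", None), (None, "C"), (None, None)]
print(nel_scores(preds))
```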

The Backlinks-to-Entities scenario still provides some difference between the EM texts style and the KB texts style, because a backlink text mentions the entity of interest only incidentally. The Backlinks dataset has only to-KB mentions, so for simulating out-KB mentions it is necessary to exclude some entities from the Entities.

The News-to-Entities+Backlinks scenario assumes that the KB is created from both the Entities and the Backlinks, and thus should be better suited for linking from varied texts like News. An NEL system would be expected to perform better in this scenario than in News-to-Entities.

The Entities-to-Entities scenario addresses a situation where the EM texts are similar to the KB texts, and provides very high ambiguity of the entities. However, it is necessary to carefully split the Entities into mentions and entities for the EM and mentions and entities for the KB.

The All-to-All scenario assumes 'All = Entities + Backlinks + News', i.e. the whole Namesakes dataset must be split into EM and KB. This allows insights into how different kinds of mentions can be useful for creating KB entities, and how easy it is to link different kinds of EM mentions.

5 Conclusion

Advances in named entity linking (NEL) require a sensitive test of performance to distinguish good from great. To that end, we created Namesakes Vasilyev et al. (2021), a dataset of entities with highly ambiguous names. In this paper we outline our motivation and method for data selection, filtering, and composition. We also define and describe in detail the metrics of ambiguity for these entities.


We thank Spencer Braun for review of the paper and valuable feedback; we thank the Odetta annotators and reconciliators, especially Nusaiba Khubaib, for high dedication to managing the labeling and verifying the data.


  • A. Chen, P. Gudipati, S. Longpre, X. Ling, and S. Singh (2021) Evaluating entity disambiguation and the role of popularity in retrieval-based nlp. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, Online, pp. 4472–4485. External Links: Link, Document Cited by: §1.
  • P. Ferragina and U. Scaiella (2012) Fast and accurate annotation of short texts with wikipedia pages. IEEE Software 29, pp. 70–75. External Links: Document Cited by: §1.
  • Z. Guo and D. Barbosa (2018) Robust named entity disambiguation with random walks. Semantic Web 9, pp. 459–479. External Links: Document Cited by: §1.
  • J. Hoffart, M. A. Yosef, I. Bordino, H. Fürstenau, M. Pinkal, M. Spaniol, B. Taneva, S. Thater, and G. Weikum (2011) Robust disambiguation of named entities in text. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, Edinburgh, Scotland, UK., pp. 782–792. External Links: Link Cited by: §1.
  • H. Ji, X. Pan, B. Zhang, J. Nothman, J. Mayfield, P. McNamee, and C. Costello (2017) Overview of tac-kbp2017 13 languages entity discovery and linking.. In Text Analysis Conference TAC 2017, External Links: Link Cited by: §1.
  • N. Kolitsas, O. Ganea, and T. Hofmann (2018) End-to-end neural entity linking. In Proceedings of the 22nd Conference on Computational Natural Language Learning, Brussels, Belgium, pp. 519–529. External Links: Link, Document Cited by: §1, §1.
  • B. Z. Li, S. Min, S. Iyer, Y. Mehdad, and W. Yih (2020) Efficient one-pass end-to-end entity linking for questions. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), Online, pp. 6433–6441. External Links: Link Cited by: §1, §1.
  • L. Logeswaran, M. Chang, K. Lee, K. Toutanova, J. Devlin, and H. Lee (2019) Zero-shot entity linking by reading entity descriptions. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Florence, Italy, pp. 3449–3460. External Links: Link, Document Cited by: §1.
  • S. Min, D. Chen, L. Zettlemoyer, and H. Hajishirzi (2020) Knowledge guided text retrieval and reading for open domain question answering. arXiv arXiv:1911.03868v2. External Links: Link Cited by: §1.
  • F. Nooralahzadeh and L. Øvrelid (2018) SIRIUS-LTG: an entity linking approach to fact extraction and verification. In Proceedings of the First Workshop on Fact Extraction and VERification (FEVER), Brussels, Belgium, pp. 119–123. External Links: Link, Document Cited by: §1.
  • D. Rao, P. McNamee, and M. Dredze (2012) Entity linking: finding extracted entities in a knowledge base. Multi-source, Multilingual Information Extraction and Summarization, pp. 93–115. External Links: Document Cited by: §1.
  • L. Ratinov, D. Roth, D. Downey, and M. Anderson (2011) Local and global algorithms for disambiguation to Wikipedia. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, Portland, Oregon, USA, pp. 1375–1384. External Links: Link Cited by: §1.
  • O. Sevgili, A. Shelmanov, M. Arkhipov, A. Panchenko, and C. Biemann (2021) Neural entity linking: a survey of models based on deep learning. arXiv arXiv:2006.00575v3. External Links: Link Cited by: §1, §1.
  • D. Sorokin and I. Gurevych (2018) Mixing context granularities for improved entity linking on question answering data across entity categories. In Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics, New Orleans, Louisiana, pp. 65–75. External Links: Link, Document Cited by: §1, §1.
  • O. Vasilyev, A. Altun, N. Vyas, V. Dharnidharka, E. Lampert, and J. Bohannon (2021) Namesakes. figshare. Dataset 10.6084/m9.figshare.17009105.v1. External Links: Link Cited by: §1, §5.
  • L. Wu, F. Petroni, M. Josifoski, S. Riedel, and L. Zettlemoyer (2020) Scalable zero-shot entity linking with dense entity retrieval. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), Online, pp. 6397–6407. External Links: Link, Document Cited by: §1.
  • Y. Yang and M. Chang (2015) S-MART: novel tree-based structured learning algorithms applied to tweet entity linking. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), Beijing, China, pp. 504–513. External Links: Link, Document Cited by: §1, §1.

Appendix A Ambiguous Names

Table 1 contains the list of common person names that we used. The names were taken from the tops of Wikipedia's most-common-names lists (e.g. List_of_most_common_surnames_in_North_America#United_States_(American)).

male female surname
James Mary Smith
Robert Patricia Johnson
John Jennifer Williams
Michael Linda Brown
William Elizabeth Jones
David Barbara Miller
Richard Susan Davis
Joseph Jessica Garcia
Thomas Sarah Rodriguez
Charles Karen Wilson
Table 1: Common person names: the 10 most common male first names (column 1), female first names (column 2), and last names (column 3).

Combining first and last names from the table gives 200 person names.
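As a sanity check, the combination can be sketched in a few lines of Python; the name lists are copied from Table 1:

```python
from itertools import product

# The 20 most common first names (male and female) and 10 surnames from Table 1.
first_names = [
    "James", "Mary", "Robert", "Patricia", "John", "Jennifer",
    "Michael", "Linda", "William", "Elizabeth", "David", "Barbara",
    "Richard", "Susan", "Joseph", "Jessica", "Thomas", "Sarah",
    "Charles", "Karen",
]
surnames = [
    "Smith", "Johnson", "Williams", "Brown", "Jones",
    "Miller", "Davis", "Garcia", "Rodriguez", "Wilson",
]

# Cartesian product: 20 first names x 10 surnames = 200 person names.
person_names = [f"{first} {last}" for first, last in product(first_names, surnames)]
```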

List of the 35 common location names that we used (from the most common US locations, Wikipedia: List_of_the_most_common_U.S._place_names): ’Washington’, ’Franklin’, ’Arlington’, ’Centerville’, ’Lebanon’, ’Clinton’, ’Springfield’, ’Georgetown’, ’Fairview’, ’Greenville’, ’Bristol’, ’Chester’, ’Dayton’, ’Dover’, ’Madison’, ’Salem’, ’Oakland’, ’Milton’, ’Newport’, ’Riverside’, ’Ashland’, ’Bloomington’, ’Manchester’, ’Oxford’, ’Winchester’, ’Burlington’, ’Jackson’, ’Milford’, ’Clayton’, ’Mount Vernon’, ’Auburn’, ’Kingston’, ’Lexington’, ’Cleveland’, ’Hudson’.

The initial list of organization names (subject to a further check of the requirement to have multiple Wikipedia entities) was combined from the several preliminary lists below, with entities as of May 2021.

Top 40 companies by revenues (Wikipedia: List_of_largest_companies_by_revenue): ’Walmart’, ’Sinopec Group’, ’Amazon’, ’State Grid’, ’China National Petroleum’, ’Royal Dutch Shell’, ’Saudi Aramco’, ’Volkswagen’, ’BP’, ’Toyota’, ’Apple’, ’ExxonMobil’, ’CVS Health’, ’Berkshire Hathaway’, ’UnitedHealth’, ’McKesson’, ’Glencore’, ’China State Construction’, ’Samsung Electronics’, ’Daimler’, ’Ping An Insurance’, ’Alphabet’, ’AT&T’, ’AmerisourceBergen’, ’ICBC’, ’Total’, ’Foxconn’, ’Trafigura’, ’Exor’, ’China Construction Bank’, ’Ford’, ’Cigna’, ’Costco’, ’AXA’, ’Agricultural Bank of China’, ’Chevron’, ’Cardinal Health’, ’Microsoft’, ’JPMorgan Chase’, ’Honda’.

Top 14 largest employers (Wikipedia: List_of_largest_employers): ’U.S. Department of Defense’, "People’s Liberation Army", ’Walmart’, ’Russian Armed Forces’, "McDonald’s", ’National Health Service’, ’China National Petroleum Corporation’, ’State Grid Corporation of China’, ’Indian Railways’, ’Indian Armed Forces’, "Korean People’s Army", ’Foxconn’, ’French Ministry of National Education’, ’Amazon’.

Top 10 wealthiest charitable foundations (Wikipedia: List_of_wealthiest_charitable_foundations): ’Novo Nordisk Foundation’, ’Bill & Melinda Gates Foundation’, ’Stichting INGKA Foundation’, ’Wellcome Trust’, ’Howard Hughes Medical Institute’, ’Azim Premji Foundation’, ’Garfield Weston Foundation’, ’Lilly Endowment’, ’Ford Foundation’, ’Silicon Valley Community Foundation’.

Top 10 largest political parties (Wikipedia: List_of_largest_political_parties): ’Bharatiya Janata Party’, ’Communist Party of China’, ’Democratic Party’, ’Republican Party’, ’Justice and Development Party’, ’Aam Aadmi Party’, ’Pakistan Tehreek-e-Insaf’, ’Chama Cha Mapinduzi’, ’United Socialist Party of Venezuela’, "Cambodian People’s Party".

And some of their acronyms: ’BJP’, ’CPC’, ’CCP’, ’AKP’, ’AAP’, ’TPI’, ’CCM’, ’CPP’.

A few well-known universities: ’Harvard University’, ’Stanford University’, ’University of Cambridge’, ’Massachusetts Institute of Technology’, ’University of California, Berkeley’, ’Princeton University’, ’Columbia University’, ’California Institute of Technology’, ’University of Oxford’, ’University of Chicago’.

And some of their short names and acronyms: ’Harvard’, ’Stanford’, ’Cambridge’, ’Berkeley’, ’Princeton’, ’Columbia’, ’Oxford’, ’Chicago’, ’Cal’, ’U of C’.

Obviously, many of these names did not pass the ambiguity requirement of having multiple Wikipedia entities.

Appendix B Good sentences

In Section 2.2 we used the notion of a ’good sentence’ for filtering the Entities dataset. In this context, a ’good sentence’ is a sentence satisfying all of the following requirements:

  1. The sentence does not contain the string ’==’ or any newline characters.

  2. The sentence contains at least 4 words, and its length is between 20 and 1000 characters.

  3. The first word of the sentence is ’alpha’, i.e. it consists of alphabetical characters only.

  4. The fraction of ’alpha’ words in the sentence is at least 0.6.
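The criteria above can be sketched as a single check; this is a minimal sketch assuming whitespace tokenization, since the paper's exact tokenizer is not specified:

```python
def is_good_sentence(sentence: str) -> bool:
    """Heuristic 'good sentence' check following the four criteria above."""
    # 1. No section markers or newlines.
    if "==" in sentence or "\n" in sentence:
        return False
    # 2. At least 4 words; between 20 and 1000 characters.
    words = sentence.split()
    if len(words) < 4 or not (20 <= len(sentence) <= 1000):
        return False
    # 3. The first word consists of alphabetical characters only.
    if not words[0].isalpha():
        return False
    # 4. At least 60% of the words are purely alphabetical.
    return sum(w.isalpha() for w in words) / len(words) >= 0.6
```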

Appendix C Filtering of news

The news texts obtained by the initial search (as explained in Section 2.4) went through the initial filtering as follows:

  1. The name of the mention in the text must strongly overlap with at least one entity from the Entities dataset.

  2. The text must contain at most 2 of the characters ‘*’, ‘#’, ‘&’. The reason is that these characters are often used for bullets, and the fraction of varied listings (sport, TV, business) in random news is high.

  3. The fraction of non-alpha characters in the text must be at most 0.3. The reason, similar to the above, is to remove business documents and texts filled with sport scores.

This still left more than half a million news texts (each with a mention of interest). Additional filtering was done for the sake of manageable labeling: (1) keeping only texts with a length between 500 and 3000 characters, and (2) keeping only texts with mentions that can be confused with only 3-10 Wikipedia entities. This step still left more than 17K news texts. Finally, the number was reduced to the 1000 ’best quality’ news texts by the following filtering:

  1. Removing texts containing any of the strings ’wedding’, ’service will be held’, ’leaves to cherish’, ’died peacefully’, ’marriage’, ’annual’, ’passed away’, ’survived by’, ’preceded in death’. The goal is to reduce the fraction of names that do not exist in the Entities dataset but have the same or similar names. Having such samples is important, but we also want a good fraction of entities from random news that do belong to our Entities dataset.

  2. Removing texts whose average sentence length, after NLTK sentence tokenization, is less than 60 characters. This removes texts containing long listings; a normal average news sentence is not that short.

  3. Removing texts in which the fraction of newline characters is higher than 1%.

  4. Removing texts containing any of the strings ’http’, ’www.’, ’.com’, ’**’, ’[’, ’{’ or ’#’. This filters out texts with advertisements, and texts on obscure subjects from obscure sources.

  5. Sorting the remaining (fewer than 2000) texts by the count of non-alpha and non-digit symbols and selecting the 1000 cleanest.
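The character-level parts of the initial filtering can be sketched as below. The name-overlap check against the Entities dataset is omitted, since it depends on the dataset itself; whether whitespace counts toward the non-alpha fraction is an assumption here, as the paper does not specify it:

```python
def passes_initial_news_filters(text: str) -> bool:
    """Character-level heuristics from the initial news filtering steps."""
    if not text:
        return False
    # At most 2 bullet-like characters; listings are common in sport/TV/business news.
    if sum(text.count(c) for c in "*#&") > 2:
        return False
    # At most 30% non-alphabetical characters (whitespace counted as non-alpha here,
    # which is an assumption); filters scores and business tables.
    non_alpha = sum(not c.isalpha() for c in text)
    return non_alpha / len(text) <= 0.3
```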

Appendix D Filtering of backlinks

The filtering and cleaning of backlink texts (Section 2.5) is done as follows.

  1. Removing Wikipedia texts whose titles start with: ’Book’, ’Category’, ’Draft’, ’File’, ’Help’, ’List’, ’MediaWiki’, ’Portal’, ’Special’, ’Talk’, ’Template’, ’User’, ’WikiProject’, ’Wikipedia’.

  2. Truncating any Wikipedia text at the start of any section named: ’Bibliography’, ’Discography’, ’External Links’, ’Filmography’, ’Footnotes’, ’Further Reading’, ’Notes’, ’References’, ’See Also’.

  3. Removing all mentions occurring in ’bad’ text locations. A location is ’bad’ if the piece of text spanning from 100 characters before to 100 characters after the mention does not satisfy all of the following conditions:

    1. The fraction of digits does not exceed 0.05 of all the characters.

    2. The fraction of alphabetical characters is not below 0.8.

    3. The fraction of newline characters does not exceed 0.01.

  4. Truncating the text 1000 characters below the last occurring mention.

’Removing’ a mention means that the mention loses its link notation and becomes a plain word (or words) in the text.

Appendix E Mention-Entities Ambiguity

E.1 Entities

In order to characterize the ambiguity of the named entities in a dataset, we suggested and described a ’mention-entities overlap’ in Section 3. For clarity, we define the overlap here.

For a single dataset, for example Entities, as in Section 3.1, each entity has its own text. Each text has a list of the mentions that were tagged "Same", and a list of the mentions that were tagged "Other". In each "Same" list we also include the ’official’ name of the entity, i.e. the title of the corresponding Wikipedia page. Some "Other" lists can, of course, be empty.

For example, our entity "Milton, Indiana" has three mentions in its text: "Milton" (Same), "Milton" (Other), "Milton" (Same). Its "Same" list is ["Milton, Indiana", "Milton", "Milton"], and its "Other" list is ["Milton"]. The entity "David Jones (footballer, born 1940)" has eight mentions; the first is "David Willmott Llewellyn Jones", and the other seven are all "Jones". Since all these mentions are tagged "Same", they all (together with the title) go into the "Same" list, and the "Other" list is empty.

We define the mention-entities ambiguity for the "Same" mentions by averaging over the entities: the normalization is the number of all the entities that had at least one mention (i.e. at least one surface-form string occurrence) tagged "Same". The mention-entities ambiguity for the "Other" mentions is defined similarly.

Generally, for any dataset that has known correct mentions of entities (in our Entities dataset they are tagged "Same"), and for any kind of mentions of interest, we can define the ambiguity by Equation 5.

In our Entities dataset, the mention-entities ambiguity is 181.3 for "Same", and 20.9 for "Other".

E.2 News and Backlinks to Entities

In Appendix E.1 we considered the ambiguity of mentions from the same dataset (e.g. Entities). Here we consider the ambiguity of mentions in one dataset (in our case News or Backlinks) with respect to the entities of another dataset (in our case Entities).

We can apply the same definition as in Equation 5, with the understanding that the "Same" lists come from the first dataset (Entities), while the summation runs over the mentions of the second dataset (News or Backlinks). The normalization is simply the number of the mentions of interest in the second dataset.
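One plausible implementation of this cross-dataset ambiguity, consistent with the description above: for each mention of interest, count the entities whose "Same" list contains that exact surface form, then average over the mentions. The function and argument names are illustrative, not from the paper:

```python
from collections import defaultdict

def mention_entities_ambiguity(mentions, same_lists):
    """Average, over the mentions of interest, of the number of entities
    whose "Same" list contains the mention's exact surface form.

    mentions:   list of mention strings from the second dataset (e.g. News).
    same_lists: dict mapping entity id -> list of "Same" mention strings.
    """
    # Index: surface form -> set of entities that use it as a "Same" mention.
    entities_by_form = defaultdict(set)
    for entity_id, forms in same_lists.items():
        for form in forms:
            entities_by_form[form].add(entity_id)
    if not mentions:
        return 0.0
    return sum(len(entities_by_form[m]) for m in mentions) / len(mentions)
```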

For the 724 News mentions that were not tagged as one of the entities of Entities, there is a caveat: while the averaging in Equation 6 is done over all 724 mentions, the ambiguity is actually created by the 77% of these mentions (555 mentions) that do occur in one or more "Same" lists. The remaining 23%, while having similar names, do not exactly match the mentions of the Entities texts.

For the 276 News mentions that were tagged as entities of Entities, we have to add each mention to the "Same" list of the entity to which the mention was linked by the annotators. The ambiguity changes by only about 2% whether or not this is taken into account.

For the 29019 Backlinks mentions, the ambiguity is computed in the same way.