Candidate generation [15, 28] is the task of retrieving a short list of plausible candidate entities from a knowledge base (KB) for a given mention. This module is usually a fundamental part of a more elaborate pipeline, such as entity linking, coreference resolution or question answering. With few exceptions, it is present in almost all existing entity linking techniques: the subsequent entity disambiguation module only has to evaluate a small set of entities from the KB instead of all of them. Therefore, in addition to achieving high recall—the fraction of mentions for which the true entity is present in the set of candidates—a desired characteristic of a candidate generator is to add little computational overhead to the full pipeline. Cross-lingual entity linking [20, 17] associates mentions in non-English documents with entities from a KB—typically the English Wikipedia. This poses several challenges that can often be attributed to the scarcity of resources in the target language. Candidate generation is deemed trivial in the English entity linking literature, as there exist resources (i.e., indexes/lookup tables) capable of providing high candidate recall. However, this is not the case in the cross-lingual setting. This is especially true when dealing with low-resource languages (e.g., Karelian or Turkmen), for which, moreover, the number of works that tackle this important step is very scarce. Some recent approaches [14, 28] are based on the idea of performing transfer learning from a pivot language that is lexically similar to the target language.
Contributions. In this paper we propose a lightweight solution to the candidate generation problem, called pivot token indexes (pti), with a special focus on low-resource languages. pti combines prior and posterior probabilities learned by aggregating information from the pivot and target languages. These probabilities are defined over the tokens into which mentions can be decomposed. This simple solution retains the efficiency of standard indexes—thereby possessing the desired property of a candidate generator—while matching the sophistication of more complex solutions. Most importantly, this work presents an in-depth analysis of the candidate generation problem in the context of entity linking, to which we make several contributions: i) we establish that the (inherited) practice of ignoring mentions that are not linkable to the English Wikipedia leads to an unrealistic evaluation; ii) we perform a realistic evaluation and propose an efficient trick to increase the ceiling recall of any candidate generator; iii) we introduce three query types, graded by difficulty based on the available training data; iv) we compare the performance of the different methods and establish differences across them based on the amount of resources of the pivot language and the query type. Moreover, our experimental section includes experiments on 9 datasets and two different learning settings (without and with some supervision from the target language). We also empirically analyze the complexity of pti and conclude that it takes up much less memory, and requires significantly less training and inference time, than the current state of the art.
Other applications of candidate generators. In general, any task that leverages the knowledge contained in KBs may benefit from better candidate generators. Examples of such tasks are coreference resolution or question answering, to name but a few. These works retrieve facts from a KB that are related to a given mention. As in entity linking, candidate generation can alleviate the computational cost of the subsequent modules by keeping only a reduced set of facts.
Let $m$ be a mention observed in a document, and let $e$ be the ground-truth entity associated with that mention. The task of a candidate generator is to retrieve a list of possible candidate entities $C(m)$ such that $e \in C(m)$. The candidate generator is trained with mention-entity pairs $(m, e)$ observed in a training corpus $\mathcal{D}_{lang}$, where the mentions are written in a certain language lang. At test time, candidate generators assign a score to every entity in the candidate space and find the $K$ entities with the largest scores—a linear operation (done with a max-heap structure) with respect to the size of the candidate space.
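The retrieval step above can be sketched in a few lines. This is an illustrative sketch, not the paper's implementation: the function and parameter names are ours, and the scoring function is left abstract so any generator (index-based or embedding-based) can plug in.

```python
import heapq
from typing import Callable, Iterable, List

def top_k_candidates(mention: str,
                     candidate_space: Iterable[str],
                     score: Callable[[str, str], float],
                     k: int = 30) -> List[str]:
    # heapq.nlargest maintains a bounded heap internally, so one pass
    # over the candidate space costs O(|space| * log k), i.e., linear
    # in the size of the candidate space as described in the text.
    return heapq.nlargest(k, candidate_space, key=lambda e: score(mention, e))
```

As a toy usage, the score could be a crude character-overlap function; any real generator would substitute its own scoring rule.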
A candidate generator must have good recall, and it must also be much more computationally efficient than the more complex downstream model; a detailed discussion can be found in prior work. We use en, pl and tl to denote English, the pivot language and the target language, respectively.
3 Related Work
While candidate generation in monolingual (English) entity linking is deemed a trivial task given the existence of high-quality mention-entity lookup tables, most other languages—especially low-resource languages—do not benefit from lookup tables of similar quality. Therefore, contrary to (English) entity linking, candidate generation constitutes a challenge in the cross-lingual entity linking literature.
3.1 Wikipedia-based Indexes
An index of prior probabilities is constructed from a training corpus. While this solution has been shown to provide good recall for languages where the training corpus is large, for low-resource languages the training corpus may offer very poor coverage in terms of both mentions and entities.
3.2 Pivoting-based Embedding Approaches
Methods within this category leverage information from a pivot language that is lexically related to the target language: the candidate generator is trained with data from the pivot language, but it is evaluated on the target language. These methods rely on a certain lexical overlap between pairs of related languages (e.g., Italian and Corsican), which allows transferring knowledge from the pivot—which is supposed to have more resources—to the target language. This is opposed to other techniques that rely on the existence of accurate machine translation models to map all the resources into a common language. Unfortunately, the performance of translation models is also largely affected by the scarce data available for low-resource languages. Similar ideas have been applied to other problems such as POS tagging or machine translation.
The learning objective consists of maximizing a similarity function $\phi$ (the cosine similarity) between mention and entity embedding pairs:

$$\max \sum_{(m, e) \in \mathcal{D}_{pl}} \phi\big(\mathbf{m}, \mathbf{e}^{+}\big) - \phi\big(\mathbf{m}, \mathbf{e}^{-}\big), \qquad \mathbf{m} = \mathrm{Enc}(\mathrm{tknr}(m)),$$

where $\mathcal{D}_{pl}$ is the collection of training mention-entity pairs in the pivot language, and $\mathbf{m}$ is the mention embedding obtained by applying an encoding function Enc to the output of a tokenizer tknr. A similar architecture is used to obtain the entity embedding $\mathbf{e}$ from its English name. The superscript $+$ or $-$ indicates whether the entity is the target one or a randomly sampled one.
Previous works have explored different tokenizers (e.g., character-level or character n-grams) and encoding functions (e.g., LSTM- or CNN-based architectures). The best setup consists of a character n-gram tokenizer, followed by an encoder that maps each token to an embedding (via an embedding lookup table), sums them all up, and applies a nonlinear activation function to the aggregated representation. This approach, called Charagram, has also been shown to outperform other types of approaches based on indexes and translation.
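The Charagram-style encoder described above can be sketched as follows. This is a minimal illustration under our reading of the setup, not the original implementation: the boundary markers, n-gram range, initialization scheme and choice of tanh as the nonlinearity are assumptions, and in the real model the lookup table is trained rather than randomly initialized.

```python
import numpy as np

def char_ngrams(s: str, n_min: int = 2, n_max: int = 4):
    # Boundary markers ('#') are an assumed convention.
    s = "#" + s + "#"
    return [s[i:i + n] for n in range(n_min, n_max + 1)
            for i in range(len(s) - n + 1)]

class CharagramEncoder:
    """Tokenize into character n-grams, look up one embedding per n-gram,
    sum them, and apply a nonlinearity to the aggregate."""

    def __init__(self, dim: int = 300, seed: int = 0):
        self.dim = dim
        self.rng = np.random.default_rng(seed)
        self.table = {}  # n-gram -> embedding lookup table (untrained here)

    def _embed(self, token: str) -> np.ndarray:
        if token not in self.table:
            self.table[token] = self.rng.standard_normal(self.dim) * 0.1
        return self.table[token]

    def encode(self, s: str) -> np.ndarray:
        vecs = [self._embed(t) for t in char_ngrams(s)]
        return np.tanh(np.sum(vecs, axis=0))

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
```

Even without training, strings that share many n-grams (e.g., a mention and a lexically similar entity name) receive correlated embeddings, which is the inductive bias the pivoting approaches exploit.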
Additional Related Work. There are related tasks such as transliteration and bilingual lexicon induction. The former is a harder task, as it consists of generating, rather than retrieving, English entity names. The latter relies to a great extent on the existence of large monolingual corpora in both languages.
4 Pivot Token Indexes
Intuitively, Charagram learns to associate tokens that co-occur between mentions and English entity names. The learned associations between tokens determine the similarity (score) between entity and mention embeddings. This approach has been shown to provide good recall, but both its memory requirements and its training/inference times are considerable (an empirical analysis is in Section 6.1). Thus, it is arguable whether Charagram fulfills a desired characteristic of a candidate generator, namely low computational overhead. Similarly to Charagram, our approach also learns associations between mentions and entities. Different from Charagram, the associations learned by pti are meant to indicate the attachment strength between tokens and entities, not equivalences between tokens. pti consists of two indexes—thus being highly efficient—that are linearly combined to score entities. The indexes correspond to the prior probability that associates tokens to entities, and the posterior probability that associates entities to tokens. The goal of the posterior probability is to counterbalance the (low) score that unpopular entities obtain for popular tokens from the prior index. pti has the following steps:
We apply a tokenizer tknr to the collection of mentions observed in the training corpus $\mathcal{D}_{pl}$ of the pivot language. Each mention $m$, linked to a certain entity, is decomposed into a set of tokens $\mathrm{tknr}(m)$.
We construct prior and posterior token-based probabilities. Optionally, we can remove low probabilities by thresholding the indexes (Section 6.1) without impacting performance. The thresholding significantly reduces the memory requirements of pti.
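The two construction steps above can be sketched as follows. This is our hedged reading of the paper, with illustrative names: co-occurrence counts over (mention, entity) training pairs are normalized into a prior index P(entity | token) and a posterior index P(token | entity), and entries below a threshold are optionally dropped to shrink the indexes.

```python
from collections import Counter, defaultdict

def build_indexes(pairs, tokenizer, threshold=0.0):
    """pairs: iterable of (mention, entity); tokenizer: str -> tokens."""
    te_counts = Counter()                       # (token, entity) co-occurrences
    token_totals, entity_totals = Counter(), Counter()
    for mention, entity in pairs:
        for tok in set(tokenizer(mention)):     # each token counted once per mention
            te_counts[(tok, entity)] += 1
            token_totals[tok] += 1
            entity_totals[entity] += 1
    prior = defaultdict(dict)                   # prior[token][entity]   = P(entity | token)
    posterior = defaultdict(dict)               # posterior[entity][token] = P(token | entity)
    for (tok, ent), c in te_counts.items():
        p = c / token_totals[tok]
        if p > threshold:                       # thresholding prunes small entries
            prior[tok][ent] = p
        q = c / entity_totals[ent]
        if q > threshold:
            posterior[ent][tok] = q
    return prior, posterior
```

A simple word-level tokenizer is used in the test below; the paper uses character n-grams.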
At test time, given a mention $m$, each entity $e$ obtains a score as follows:

$$\mathrm{score}(e \mid m) = \sum_{t \in \mathrm{tknr}(m)} P(e \mid t) + \lambda\, P(t \mid e), \qquad (2)$$

where $\lambda$ is a weighting term. As argued, the posterior probability term may counterbalance the bias introduced by the prior probability towards popular entities: a large value of $\lambda$ will favor unpopular entities. While these scores do not have a probabilistic interpretation, one can easily normalize them so that they sum to 1. As opposed to Charagram, pti performs a search that only includes the entities with a non-zero score, thus contributing to its efficiency.
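The scoring rule above can be sketched directly over the two indexes. This is an illustrative sketch under our reading of Eq. (2) (names are ours): only entities with at least one non-zero prior entry for the mention's tokens are ever touched, which is the efficiency property noted in the text.

```python
def pti_score(mention, prior, posterior, tokenizer, lam=0.5):
    """Score entities for a mention: sum over tokens of P(e|t) + lam * P(t|e)."""
    scores = {}
    for tok in set(tokenizer(mention)):
        # Only entities indexed under this token get a score (sparse search).
        for ent, p in prior.get(tok, {}).items():
            scores[ent] = (scores.get(ent, 0.0)
                           + p
                           + lam * posterior.get(ent, {}).get(tok, 0.0))
    # Rank entities by decreasing score.
    return sorted(scores.items(), key=lambda kv: -kv[1])
```

The test below illustrates the counterbalancing role of the posterior: with a large lam, an unpopular entity strongly attached to the token overtakes the popular one.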
We will see in Section 6 that this simple approach has the following advantages as compared to Charagram: fewer parameters, less training time, less inference time, and better performance in almost all settings.
4.1 Joint learning with the target language
In practice, there is often some training data available for the target language. To incorporate this data into the construction of the prior and posterior probabilities, we modify step 2 by weighting the counts over the pivot-language corpus with a factor $\alpha$. Thus, the index of priors is computed as

$$P(e \mid t) = \frac{\alpha\, \#_{pl}(e, t) + \#_{tl}(e, t)}{\sum_{e' \in \mathcal{E}} \alpha\, \#_{pl}(e', t) + \#_{tl}(e', t)},$$

where $\#_{c}(\cdot)$ is a shortcut that refers to the number of times its argument is observed in the corpus $\mathcal{D}_{c}$, and $\mathcal{E}$ corresponds to the set of entities observed in the corpora. The same weighting is applied to compute the posterior probabilities. At test time, entities are scored following Eq. (2).
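The weighted merging of pivot- and target-language counts can be sketched as follows (a minimal illustration with our own names; the same scheme would apply to the posterior index):

```python
from collections import Counter

def joint_prior(pivot_pairs, target_pairs, tokenizer, alpha=0.1):
    """Prior P(entity | token) with pivot counts down-weighted by alpha."""
    def counts(pairs):
        c = Counter()
        for mention, entity in pairs:
            for tok in set(tokenizer(mention)):
                c[(tok, entity)] += 1
        return c

    c_pl, c_tl = counts(pivot_pairs), counts(target_pairs)
    weighted = Counter()
    for key, n in c_pl.items():
        weighted[key] += alpha * n      # pivot counts weighted by alpha
    for key, n in c_tl.items():
        weighted[key] += n              # target counts at full weight
    totals = Counter()
    for (tok, _), w in weighted.items():
        totals[tok] += w
    return {(tok, ent): w / totals[tok] for (tok, ent), w in weighted.items()}
```

With alpha = 0 the target language alone determines the prior; with a large alpha the abundant pivot data dominates.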
More complex versions of our approach pti are in Appendix A.5. While these versions led to minor improvements, their complexity also increased.
5 Experimental Setup
All the resources required to reproduce the experiments are available at https://github.com/epfl-dlab/pti-candgen.
In order to comprehensively understand the performance of pti, we run experiments on a variety of publicly available datasets. The benchmarked methods are compared in terms of accuracy and complexity (number of parameters and training/inference time). Accuracy is measured by recall, defined as the proportion of retrieved candidate lists containing the correct entity. More formally, recall@$K$ is defined as

$$\mathrm{recall@}K = \frac{1}{|\mathcal{D}|} \sum_{(m, e) \in \mathcal{D}} \mathbb{1}\big[e \in C_K(m)\big],$$

where $\mathcal{D}$ is the set of mention-entity pairs in an evaluation set, and $\mathbb{1}[\cdot]$ returns 1 if its argument is true, and 0 otherwise. As in previous work, we set $K = 30$ for our main set of experiments. However, for a subset of the experiments we include a comparative performance for other values of $K$ in Appendix A.4.
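The metric above is straightforward to compute; a minimal sketch (names are ours, `retrieve` stands for any candidate generator returning a ranked list):

```python
def recall_at_k(eval_pairs, retrieve, k=30):
    """Fraction of (mention, gold entity) pairs whose gold entity
    appears among the top-k retrieved candidates."""
    hits = sum(1 for mention, gold in eval_pairs
               if gold in retrieve(mention)[:k])
    return hits / len(eval_pairs)
```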
We compare the performance of the different methods in the two following learning settings:
Zero-shot. We use zero resources—no training data—from the target language. Therefore, the candidate generator must fully rely on the available information for the pivot language. While it is possible to obtain a (small) training set of mention-entity pairs for some target languages, in other cases the target language is so poorly resourced that this is the only possible learning setting.
Joint learning with the target language. Even for low-resource languages, there is often some training data for the target language that can be used in conjunction with the data that is available for the pivot language. We sometimes refer to this setting as supervised.
We note that the aim of this section is to establish a detailed comparison of different candidate generation methods. As any progress in the candidate generator is independent of any improvement in the entity disambiguation module, the integration of the candidate generator into the full entity linking pipeline is out of the scope of this work.
5.1 Query Types
As discussed in Section 5, despite the scarce resources available for the target languages, it is sometimes possible to obtain some training data. This training data determines the difficulty of the queries contained in the test set. Given a mention query $m$ that links to an entity $e$, we classify it according to our proposed categorization (see Figure 1 for a visual explanation):
Easy. The mention $m$ is observed with the entity $e$ in the training data of the target language.
Medium. The entity $e$ is observed in the training data of the target language, but with mentions other than $m$.
Hard. The entity $e$ is never observed in the training data of the target language. To successfully link these mentions we fully rely on the pivot language. In the zero-shot setting, all mention queries belong to this type.
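The three-way typology above can be sketched as a small classifier (an illustrative sketch; names are ours):

```python
def query_type(mention, entity, target_train_pairs):
    """Classify a test (mention, entity) query against the
    target-language training pairs."""
    seen_pairs = set(target_train_pairs)
    seen_entities = {e for _, e in target_train_pairs}
    if (mention, entity) in seen_pairs:
        return "easy"    # mention observed with this entity in training
    if entity in seen_entities:
        return "medium"  # entity observed, but with other mentions
    return "hard"        # entity never observed in target training data
```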
5.2 Benchmarked Methods
WikiPriors. Prior probabilities are estimated using a training corpus obtained from both the target and the pivot language, and indexes mapping mentions to their co-occurring entities are constructed. This simple procedure resembles the approach followed in previous work, where a lookup table was built for entities that have an English Wikipedia page. The strength of a mention-entity association is given by the estimated prior probabilities. In the supervised setting, it first generates candidates using the index constructed on the target language. More details are provided in Appendix A.2.
Charagram. This is the state of the art in candidate generation for (low-resource) cross-lingual settings. It has been shown to significantly outperform other architectures, as well as translation-based methods, among others. For the supervised setting, we extend Charagram to also incorporate training data from the target language by optimizing a weighted combination of the pivot- and target-language losses. More details are provided in Section 3.2.
5.3 Datasets
As in previous works [14, 28], we create our datasets from Wikipedia. We choose Wikipedia because, contrary to other datasets (e.g., DARPA LORELEI), it can be accessed without any restriction, and, importantly, its crowdsourcing spirit guarantees a realistic setting.
We select 9 pairs of languages, where the target language is always (extremely) low-resourced, and the pivot language can be categorized as either high-, medium-, or low-resourced. This distinction regarding the amount of resources of the pivot language has never been made in previous works, but it helps analyze the performance of the different methods. The 9 language pairs (target-pivot) are Corsican-Italian (co-it), Limburgish-Dutch (li-nl), Bavarian-German (bar-de), Karelian-Finnish (olo-fi), Macedonian-Bulgarian (mk-bg), Javanese-Indonesian (jv-id), Turkmen-Azerbaijani (tk-az), Marathi-Hindi (mr-hi) and Faroese-Icelandic (fo-is). Appendix A.1 contains the dataset statistics (Table 4), additional preprocessing details and detailed information about the selection of the pivot languages.
[Table 1: micro recall@30 for the zero-shot and joint-learning settings, with pivot languages grouped by resource level: high (co-it, li-nl, bar-de), medium (olo-fi, mk-bg, jv-id) and low (tk-az, mr-hi, fo-is). Numeric values not recovered from the source.]
Except for the cases in which zero-shot is the only possible learning setting (co-it, olo-fi and tk-az), we create the validation and test sets by sampling a maximum of 1,000 unique queries of each query type—in total a maximum of 3,000 queries—and the training data is adapted accordingly. The same validation and test sets are used in the zero-shot setting. For target languages that can only be evaluated in the zero-shot setting, we equally distribute the available mention-entity pairs into validation and test sets. These sets never contain duplicates.
5.4 Towards a Realistic Evaluation
A common practice in the (English) entity linking literature [25, 4] consists of ignoring mentions whose correct entity does not exist in the considered KB—typically the English Wikipedia, which contains over 5 million entities. This is not a major problem for these models, as this practice has been shown to remove only a small percentage (typically less than 5%) of the data. Previous works [14, 28] on candidate generation for cross-lingual entity linking have adopted the same practice. Indeed, cross-lingual entity linking is defined as the task of grounding mentions in non-English documents to entries in the English Wikipedia [20, 21]. However, we will show in Section 6.1 that this practice is problematic in the cross-lingual setting, as we would have to ignore a very large percentage of the data.
For some methods, such as Charagram, it is easy to expand the candidate space by considering entities in Wikipedias other than the English one. While one might evaluate all entities (more than 20 million), we found a much less costly alternative: we experimentally observe that expanding the candidate space with the entities observed in the training data of the pivot language significantly improves the coverage of the entities observed in the test data of the target language (more in Section 6.1). With the goal of performing a realistic evaluation, we simply evaluate all mentions.
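The expansion trick and the ceiling it lifts can be sketched as follows (an illustrative sketch with our own names; the ceiling recall is the upper bound any generator can reach given a candidate space):

```python
def expand_candidate_space(en_entities, pivot_train_pairs, target_train_pairs=()):
    """Start from the English-Wikipedia entity set and add every entity
    observed in the pivot (and, if available, target) training data."""
    space = set(en_entities)
    space.update(e for _, e in pivot_train_pairs)
    space.update(e for _, e in target_train_pairs)
    return space

def ceiling_recall(eval_pairs, space):
    # Upper bound on any generator's recall given this candidate space:
    # a gold entity outside the space can never be retrieved.
    return sum(1 for _, e in eval_pairs if e in space) / len(eval_pairs)
```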
5.5 Training Details
The simplicity of our approach pti is also reflected in its number of hyperparameters. The hyperparameter $\alpha$, which controls the contribution of the pivot language to the construction of the prior and posterior probabilities, and the hyperparameter $\lambda$, which controls the importance of the posterior probability, are both selected on the validation set. As in Charagram, the tokenizer tknr applied to mentions returns character n-grams. The tokenizer counts each symbol as a character regardless of the script of the language, although more sophisticated mechanisms are also possible (e.g., converting strings to International Phonetic Alphabet (IPA) symbols). For Charagram we replicate the exact same setup as reported in the original work; the weighting hyperparameter of its supervised loss is likewise selected on the validation set. More details are provided in Appendix A.2. The hyperparameter $\alpha$ (for pti) and the supervised loss weight (for Charagram) are required only in the supervised setting. In all cases, the validation metric is micro recall@30.
6 Results
Table 1 depicts a comparison of the methods in terms of micro recall@30. Similar conclusions are drawn for both learning settings. Overall, our approach pti clearly outperforms Charagram for those target languages whose pivot language is high-resourced. For medium-resourced pivot languages, while pti is the best-performing technique, Charagram performs very similarly in one target language (mk). The tendency is not as clear when the pivot language is low-resourced, where Charagram shows clearly superior performance for the target language mr in the zero-shot learning setting. All models largely benefit from the existence of training data in the target language.
By the definition of the query types introduced in Section 5.1, all queries are of type Hard in the zero-shot setting. However, this is not the case when the models are also trained with data from the target language. We show the performance breakdown in Table 2. The conclusions are clear: pti always outperforms Charagram for queries of type Easy and Medium, and it is only beaten on queries of type Hard when the pivot language has low resources, with the exception of az. We note that while in this work we have created the evaluation splits so that all query types are equally important, the distribution of the query types depends on the target evaluation data. For instance, splitting Wikipedia articles into training, validation and test in a proportion of 70%, 15% and 15%, respectively, leads to a distribution of query types largely dominated by Easy ones.
The performance of WikiPriors is also dependent on the amount of resources of the pivot language, showing a performance that is comparable to or even better than Charagram when the pivot language is high- or medium-resourced (see Table 1). The more detailed analysis depicted in Table 2 shows that WikiPriors has a very heterogeneous performance across the different query types in the supervised setting. As expected, it obtains perfect performance for Easy queries, but its performance tends to be poor, with a few exceptions, for Medium and Hard queries.
Entity Coverage As argued in Section 5.4, it is unrealistic to simply ignore those mentions that are not in the considered KB, especially if this translates into a performance that does not resemble that of the realistic evaluation we propose. We proceed to analyze the ceiling recall—which depends on the entity coverage—of the methods in the realistic setting, where no mention is excluded.
Figure 2 shows the ceiling recall of pti and Charagram for the different target languages and query types. The ceiling recall of Charagram is lower than 30% in almost all cases when the candidate space is given by the English Wikipedia (EN) considered in previous work [14, 28]. This showcases that previous work was limiting, to a great extent, the mention queries that were evaluated. This is not only unrealistic, but also a notable limiting factor in performance. The ceiling recall notably increases when the candidate space also includes the entities observed in the training data of the pivot language. We hypothesize that this is because the pivot language is not only lexically similar to the target language, but their respective Wikipedias also exhibit similar topical interests. Moreover, the ceiling for queries of type Easy and Medium is further increased when the candidate space also includes the entities observed in the training data of the target language. The latter corresponds to the supervised setting; and surprisingly, as opposed to pti, Charagram does not reach a ceiling recall of 1 for queries of type Easy and Medium. The reason is that a fair number of entities in the respective Wikipedias do not have an English name (e.g., Q61068957 or Q15842755). This is the strength and weakness of Charagram: its candidate space can include any entity as long as its English title is available. On the other hand, pti is agnostic to the language in which an entity is represented, but its entity coverage is limited to the entities observed in the pivot language (PL), or in both the pivot and target languages (PL+TL). It is for this reason that pti is beaten on queries of type Hard when the pivot language is low-resourced: it fully relies on the entity coverage of the pivot language, which is low, to successfully answer these queries.
This is illustrated in Figures 1(h) and 1(i), where Charagram shows a higher ceiling recall for Hard mentions due to the additional entity coverage provided by the English (EN) Wikipedia. Future work should address this limitation of pti (e.g., by incorporating more pivot languages). However, in all other situations, where the ceiling recall is higher than or comparable to that of Charagram, pti shows the best performance.
Impact of the amount of training data We train both pti and Charagram with increasing amounts of training data, and compute micro recall@30 for several target languages. This is shown in Figure 3. The performance of pti increases with more data up to a point—around 10 million pairs. One reason is that the entity coverage of pti is expected to increase with the amount of training data; another is that the constructed indexes become more accurate. On the other hand, Charagram barely improves beyond 80,000 data points (the authors of Charagram confirmed by email that they used, at most, this amount of data for training). Charagram needs relatively little data to learn equivalences between n-grams co-occurring in mentions and English entity names, but it plateaus very quickly and does not leverage all the statistical information contained in the training corpora.
Runtime and Memory The goal of a candidate generator is to reduce the computational cost of a more complex subsequent model. Therefore, besides recall, complexity must also be a factor that drives the design of a candidate generator. We perform a detailed analysis of the complexity in Appendix A.3. The conclusion of this analysis is that, on average, pti requires 20 times less memory than Charagram, and is 300 and 7 times faster than Charagram in terms of training and inference time, respectively.
6.2 Can pti also become the de-facto choice for high-resource languages?
We analyze whether pti can compete with WikiPriors in the most standard candidate generation setting: candidates have to be generated for mentions of a high-resource language by exploiting only its own training corpus. This experiment is motivated by the findings of Fu et al., who showed that even high-resource languages, such as Spanish, are limited by their mention coverage. This limitation relates to the queries of type Medium, which Fu et al. partially circumvent with information coming from search engine query logs.
Following the same protocol as in Section 5.3, we create 1,000 queries of type Easy and Medium for the three high-resource languages used as pivot languages in Section 5: Italian (it), Dutch (nl) and German (de). We do not include queries of type Hard, as they cannot be successfully completed. The results, depicted in Table 3, indicate that pti always obtains the best performance for queries of type Medium, followed by Charagram. On the other hand, for queries of type Easy, standard lookup tables (i.e., WikiPriors) exhibit the best performance. We conclude that in the standard setting, WikiPriors should be used as the primary technique, falling back to pti only if WikiPriors fails to retrieve any candidate.
7 Conclusions
We perform an in-depth analysis of the candidate generation problem. We show that the inherited practice of ignoring mentions whose target entity is not in the English Wikipedia is unrealistic in a cross-lingual setting. We alleviate this problem with an efficient solution that consists of using the pivot and target languages to expand the candidate space. We also contribute a categorization of mention queries according to their difficulty. Finally, we propose a lightweight approach, called pti, that outperforms the current state of the art in almost all target languages and query types.
This project was partly funded by the Swiss National Science Foundation (grant 200021_185043), the European Union (TAILOR, grant 952215), and the Microsoft Swiss Joint Research Center. We also gratefully acknowledge generous gifts from Facebook and Google supporting West’s lab.
8 Bibliographical References
- Constraint-based question answering with knowledge graph. In COLING.
- (2017) Model transfer for tagging low-resource languages using a bilingual dictionary. In ACL.
- (2020) Design challenges for low-resource cross-lingual entity linking. In EMNLP.
- (2017) Deep joint entity disambiguation with local neural attention. In EMNLP.
- (2019) Learning dense representations for entity retrieval. In CoNLL.
- (2001) Improving cross language retrieval with triangulated translation. In SIGIR, pp. 90–95.
- (2013) Improving candidate generation for entity linking. In International Conference on Application of Natural Language to Information Systems, pp. 225–236.
- Learning bilingual lexicons from monolingual corpora. In ACL.
- URIEL and lang2vec: representing languages as typological, geographical, and phylogenetic vectors. In EACL.
- (2015) Design challenges for entity linking. TACL 3, pp. 315–328.
- (2019) Attention-based joint entity linking with entity embedding. Information 10, pp. 46.
- (2004) Smoothing with mixed model software. Journal of Statistical Software 9 (1).
- (2017) Cross-lingual name tagging and linking for 282 languages. In ACL.
- (2019) Zero-shot neural transfer for cross-lingual entity linking. In AAAI.
- (2014) Entity linking with a knowledge base: issues, techniques, and solutions. IEEE Transactions on Knowledge and Data Engineering 27 (2), pp. 443–460.
- (2016) One for all: towards language independent named entity linking. In ACL.
- (2018) Neural cross-lingual entity linking. In AAAI.
- (2012) A cross-lingual dictionary for English Wikipedia concepts. In LREC.
- (2016) LORELEI language packs: data, tools, and resources for technology development in low resource languages. In LREC.
- (2016) Cross-lingual wikification using multilingual embeddings. In HLT-NAACL.
- (2018) Joint multilingual supervision for cross-lingual entity linking. In EMNLP.
- (2018) Bootstrapping transliteration with constrained discovery for low-resource languages. In EMNLP.
- (2020) Are all good word vector spaces isomorphic? In EMNLP.
- Charagram: embedding words and sentences via character n-grams. In EMNLP.
- Learning distributed representations of texts and entities from knowledge base. TACL 5, pp. 397–411.
- (2019) Knowledge-aware pronoun coreference resolution. In ACL.
- (2015) Character-level convolutional networks for text classification. In NIPS.
- (2020) Improving candidate generation for low-resource cross-lingual entity linking. TACL 8, pp. 109–124.
- Transfer learning for low-resource neural machine translation. In EMNLP.
Appendix A Supplementary Material
a.1 Dataset Information
We preprocess the Wikipedia dumps for each of the target and pivot languages. We apply possible redirects to each Wikipedia entity, and map the (redirected) Wikipedia entity to the corresponding Wikidata entity (this last step is not strictly necessary, but we prefer a language-agnostic representation of the entities); e.g., Q21 is the Wikidata identifier for the Wikipedia entity England. We do not use aliases to enrich the data; nevertheless, their inclusion would improve the performance of any technique.
[Table 4: Wikipedia size and number of mention-entity pairs per language; values not recovered from the source.]
The transfer learning paradigm of applying a model, learned in a pivot language, to a target language has been explored in different natural language problems. Nevertheless, such works have never reached a consensus on how to select a pivot language that is closely related to a target language. For those target languages that have been used in previous work (e.g. jv or mr) we select the same pivot languages as in these works. For all others we rely on lang2vec  to choose appropriate pivot languages.
A.2 More Details about WikiPriors and Charagram
WikiPriors is based on the candidate generator used in previous work [20, 21]. It falls back to the pivot language when the index of the target language does not retrieve candidates. In the zero-shot setting, by contrast, it relies only on the index constructed from the pivot language. Furthermore, for both the zero-shot and supervised settings, it constructs indexes that associate the constituent words of mention strings with entities using prior probabilities. These indexes are used to retrieve candidates when the indexes on full mention strings provide fewer than the required number of candidates. Previous work has shown that it is sometimes competitive with Charagram.
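The prior-probability index and the pivot-language fallback can be sketched as below. All data structures are toy stand-ins (the real indexes are built from Wikipedia mention-entity statistics), and the helper names are ours, not WikiPriors' actual API.

```python
from collections import defaultdict

def build_prior_index(mention_entity_pairs):
    """Map each mention string to entities ranked by the prior P(e | m)."""
    counts = defaultdict(lambda: defaultdict(int))
    for mention, entity in mention_entity_pairs:
        counts[mention][entity] += 1
    index = {}
    for mention, ent_counts in counts.items():
        total = sum(ent_counts.values())
        index[mention] = sorted(
            ((e, c / total) for e, c in ent_counts.items()),
            key=lambda x: -x[1],
        )
    return index

def candidates(mention, target_index, pivot_index, k=30):
    """Query the target-language index; fall back to the pivot index."""
    cands = target_index.get(mention, [])
    if not cands:
        cands = pivot_index.get(mention, [])
    return [e for e, _ in cands[:k]]

# Toy example: "london" is unseen in the target index, so the pivot is used.
target = build_prior_index([("londra", "Q84")])
pivot = build_prior_index([("london", "Q84"), ("london", "Q84"),
                           ("london", "Q92561")])
candidates("london", target, pivot)  # returns ["Q84", "Q92561"]
```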
We replicate the exact same setup for Charagram as reported in the referenced work: the tokenizer tknr, which is applied to mentions and (English) entity names, returns character n-grams with the same range of n as in that setup. The embedding size is 300. We train the model with stochastic gradient descent (SGD) with batch size 64 and a learning rate of 0.1. We stop training if the micro recall@30 on the validation set does not increase for 50 epochs, and the maximum number of training epochs is set to 200.
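Character n-gram tokenization of the kind used by Charagram can be sketched as follows. The n-gram range (2 to 3 here) and the boundary padding are illustrative assumptions, not the exact configuration of the referenced setup.

```python
def char_ngrams(text, n_min=2, n_max=3):
    """Return all character n-grams of `text` for n in [n_min, n_max]."""
    # Pad with a boundary marker so prefixes/suffixes form distinct n-grams.
    padded = "#" + text.lower() + "#"
    grams = []
    for n in range(n_min, n_max + 1):
        for i in range(len(padded) - n + 1):
            grams.append(padded[i:i + n])
    return grams

char_ngrams("Oslo")
# returns ['#o', 'os', 'sl', 'lo', 'o#', '#os', 'osl', 'slo', 'lo#']
```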
| Target - Pivot | Threshold 0 | Threshold 1e-2 | Threshold 0.1 |
|---|---|---|---|
| co - it | 63 (0%) | 62 (72%) | 48 (98%) |
| li - nl | 84 (0%) | 83 (65%) | 73 (98%) |
| bar - de | 88 (0%) | 87 (75%) | 70 (99%) |
| olo - fi | 72 (0%) | 71 (68%) | 55 (97%) |
| mk - bg | 71 (0%) | 69 (66%) | 55 (95%) |
| jv - id | 87 (0%) | 87 (64%) | 79 (96%) |
| tk - az | 69 (0%) | 69 (56%) | 64 (92%) |
| mr - hi | 70 (0%) | 69 (62%) | 58 (94%) |
| fo - is | 79 (0%) | 79 (53%) | 73 (90%) |
A.3 Complexity of pti and Charagram
All experiments were run using code written in Python on an Intel(R) Xeon(R) E5-2680 24-core machine with a 2.50GHz CPU and 256 GB RAM, running Ubuntu Linux 16.04. Experiments for Charagram are run on a TitanX GPU. We do not make use of any parallelism in the experiments.
As previously discussed, the candidate generator must be computationally efficient. For this reason, we compare the memory and time complexity of pti to that of Charagram. We experimentally observe that the number of non-zero prior and posterior probabilities in pti can be reduced significantly by thresholding (see Table 5), with very little impact on performance. These experiments correspond to the best-performing learning setting. We further analyze pti with a thresholding value of 0.01, and compare it to Charagram (trained with 80,000 data points) in terms of number of parameters, training time, and inference time. The inference time corresponds to the time needed to evaluate all mention queries (3,000 queries for almost all target languages). All target languages are evaluated in the best-performing learning setting. A comparison is shown in Table 6. The findings are summarized below:
Number of parameters. While the number of parameters remains roughly constant for Charagram, the number of parameters of pti is proportional to the amount of resources of the pivot language.
Training time. The training time of pti corresponds to the time it takes to construct the prior and posterior probabilities, which is proportional to the amount of resources of the pivot language. The optimization of the learning objective performed by Charagram is significantly slower.
Inference time. pti uses sparse matrices to represent the prior and posterior probabilities. At test time, scores are computed simply by aggregating rows of these matrices. Moreover, the search performed by pti for the highest-scoring candidates only considers those with non-zero scores. These characteristics often make pti more efficient than Charagram during inference.
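The thresholding and the sparse inference described above can be sketched as follows, using a dict-of-dicts in place of actual sparse matrices; the data and function names are toy illustrations, not pti's implementation.

```python
from collections import defaultdict
from heapq import nlargest

def prune(rows, threshold):
    """Drop probabilities below `threshold` (as in the Table 5 experiments)."""
    return {tok: {e: p for e, p in ents.items() if p >= threshold}
            for tok, ents in rows.items()}

def score_candidates(tokens, rows, k=30):
    """Aggregate the rows of the mention's tokens and rank nonzero scores."""
    scores = defaultdict(float)
    for tok in tokens:
        for entity, p in rows.get(tok, {}).items():
            scores[entity] += p
    # The top-k search only visits entities with a nonzero aggregated score.
    return nlargest(k, scores.items(), key=lambda x: x[1])

# Toy token-to-entity probability rows; Q11 falls below the 0.01 threshold.
rows = prune({"new": {"Q60": 0.6, "Q1384": 0.3, "Q11": 0.004},
              "york": {"Q60": 0.8, "Q1384": 0.5}}, threshold=0.01)
score_candidates(["new", "york"], rows, k=2)  # ranks Q60 first
```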
A.4 Recall@k for different values of k
Previous works on candidate generation have set k to 30. However, when the candidate generator is followed by an entity disambiguation technique, other values are typically used by these methods, such as (approximately) 10 and 20. Figure 4 shows the recall of pti and Charagram for these other values. Except for CO-IT, OLO-FI, and TK-AZ, where zero-shot is the only possible learning setting, all other metrics are evaluated in the supervised setting. The comparative performance between the methods is similar for all values of k.
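The recall@k metric used throughout can be computed as below; `queries` is a hypothetical list of (gold entity, ranked candidate list) pairs.

```python
def recall_at_k(queries, k):
    """Fraction of queries whose gold entity is among the top-k candidates."""
    hits = sum(1 for gold, cands in queries if gold in cands[:k])
    return hits / len(queries)

# Toy example with two mention queries:
queries = [("Q21", ["Q21", "Q145"]),
           ("Q60", ["Q84", "Q60", "Q90"])]
recall_at_k(queries, 1)  # returns 0.5 (only the first query is a hit)
recall_at_k(queries, 2)  # returns 1.0
```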
A.5 Other versions of pti
Wildcard Tokens. The tokenizer extends the tokens returned for a given mention with wildcard tokens, which include placeholders represented by an asterisk that can be interpreted as any character. We observed minor improvements in recall for some target languages, but this notably increased the number of tokens and, consequently, the memory requirements of the approach.
Linear Combination. For the supervised learning setting, we explored another approach to integrating the information from the target and pivot languages. Independent prior and posterior probabilities are computed for both languages and linearly combined with a weighting factor; finally, the combined probabilities are normalized. We did not observe any improvement in performance with this fusion technique.
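This fusion variant can be sketched as below; the weighting factor `alpha` and the function name are illustrative, and the inputs are toy per-entity probability dicts.

```python
def fuse(p_target, p_pivot, alpha=0.5):
    """Linearly combine target/pivot probabilities, then renormalize."""
    entities = set(p_target) | set(p_pivot)
    combined = {e: alpha * p_target.get(e, 0.0)
                   + (1 - alpha) * p_pivot.get(e, 0.0)
                for e in entities}
    total = sum(combined.values())
    # Normalize so the fused values form a probability distribution.
    return {e: p / total for e, p in combined.items()} if total else combined

fuse({"Q1": 1.0}, {"Q1": 0.5, "Q2": 0.5}, alpha=0.5)
# returns {'Q1': 0.75, 'Q2': 0.25}
```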
Probability Smoothing. We perform one additional step on top of the previous version: we use additive smoothing (also known as Laplace smoothing) when computing the prior and posterior probabilities for the pivot language. These smoothed probabilities account for possible discrepancies between the target and pivot languages. We sometimes observed a minor improvement for some query types, but it comes at the cost of validating one more hyperparameter (the smoothing factor).
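Additive smoothing of one such probability can be sketched as follows; the smoothing factor `alpha`, the `vocab_size` parameter (number of possible entities), and the toy counts are illustrative assumptions.

```python
def smoothed_prior(counts, vocab_size, alpha=1.0):
    """Add-alpha smoothing of P(e | token) from raw co-occurrence counts."""
    # Every entity receives alpha pseudo-counts, so unseen entities would
    # still get nonzero mass under the smoothed distribution.
    total = sum(counts.values()) + alpha * vocab_size
    return {e: (c + alpha) / total for e, c in counts.items()}

smoothed_prior({"Q1": 3, "Q2": 1}, vocab_size=2, alpha=1.0)
# returns {'Q1': 4/6, 'Q2': 2/6}
```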