
WISER: A Semantic Approach for Expert Finding in Academia based on Entity Linking

by Paolo Cifariello, et al.
University of Pisa

We present WISER, a new semantic search engine for expert finding in academia. Our system is unsupervised and jointly combines classical language modeling techniques, based on textual evidence, with the Wikipedia Knowledge Graph, via entity linking. WISER indexes each academic author through a novel profiling technique which models her expertise with a small, labeled and weighted graph drawn from Wikipedia. Nodes in this graph are the Wikipedia entities mentioned in the author's publications, whereas the weighted edges express the semantic relatedness among these entities, computed via textual and graph-based relatedness functions. Every node is also labeled with a relevance score, which models the pertinence of the corresponding entity to the author's expertise and is computed by means of a proper random-walk calculation over that graph, and with a latent vector representation, which is learned via entity and other kinds of structural embeddings derived from Wikipedia. At query time, experts are retrieved by combining classic document-centric approaches, which exploit the occurrences of query terms in the author's documents, with a novel set of profile-centric scoring strategies, which compute the semantic relatedness between the author's expertise and the query topic via the above graph-based profiles. The effectiveness of our system is established through a large-scale experimental test on a standard dataset for this task. We show that WISER achieves better performance than all its competitors, thus proving the effectiveness of modelling an author's profile via our "semantic" graph of entities. Finally, we comment on the use of WISER for indexing and profiling the whole research community of the University of Pisa, and on its application to technology transfer in our University.





1. Introduction

Searching for human expertise has recently attracted considerable attention in the information retrieval (IR) community. This is a computationally challenging task because human expertise is hard to formalize. Expertise has commonly been referred to as "tacit knowledge" (Baumard, 1999): namely, the knowledge that people carry in their minds and that is, therefore, difficult to access. As a consequence, an expert finding system has only one way to assess and access the expertise of a person: through artifacts of the so-called "explicit knowledge", namely, something that has already been captured, documented and stored, e.g. in documents. Applications of these systems include the recognition of qualified experts to supervise new researchers, the assignment of papers or projects to reviewers (Roberts, 2016), the search for relevant experts in social networks (Neshati et al., 2014) or, more importantly for modern academia, the establishment of links with industry for technology-transfer initiatives.

Research on how to actually share expertise can be traced back at least to the 1960s (Brittain, 1975). In more recent years, the explosion of digital information has revamped the scientific interest in this problem and led researchers to study and design software systems that, given a topic, could support the automatic search for candidates with the relevant expertise. Initial approaches were mainly technical and focused on how to unify disparate and dissimilar document collections and databases into a single data warehouse that could easily be mined. They employed some heuristics or even required people to self-judge their skills against a predefined set of keywords (Ackerman et al., 2002; Yimam and Kobsa, 2000). Subsequent approaches exploited techniques proper of document retrieval, applied to the documents written by or associated with each expert candidate as the main evidence of her expertise (Balog et al., 2012). However, classical search engines return documents, not people or topics (Heath et al., 2006).

Today there exist advanced systems which may be classified into two main categories: expert finding systems, which help to find who is an expert on some topic, and expert profiling systems, which help to find of which topics a person is an expert. Balog et al. (Balog et al., 2012) summarize the general frameworks that have been used to solve these two tasks (see also Section 2), and look at them as two sides of the same coin: an author retrieved as an expert on a topic should contain that topic in her profile. However, as pointed out in (Van Gysel et al., 2016b, a), known systems still fall short of addressing three key challenges which, in turn, limit their efficiency and applicability (Li et al., 2014; Balog et al., 2012): (1) queries and documents use different representations, so that maximum-likelihood language models are often inappropriate and there is the need to exploit semantic similarities between words; (2) the acceleration of data availability calls for the further development of unsupervised methods; (3) in some approaches, a language model is constructed for every document in the collection, thus requiring each query term to be matched against every document.

In this paper we focus on the task of expert finding in the academic domain: namely, we wish to retrieve academic authors whose expertise is defined through the publications they wrote and is relevant to a user query.

In this context, the best system to date is the one recently proposed by Van Gysel et al. (Van Gysel et al., 2016b). It has a strong emphasis on unsupervised profile construction, efficient query capabilities and semantic matching between query terms and candidate profiles. Van Gysel et al. have shown that their unsupervised approach improves the retrieval performance of vector-space and generative language models, mainly thanks to its ability to learn a profile-centric latent representation of academic experts from their publications. Their key idea is to deploy an embedding representation of words (such as the one proposed in (Mikolov et al., 2013)) to map conceptually similar phrases into geometrically close vectors (e.g. "nyt" is mapped to a vector close to the one of "New York Times"). At query time, their system first maps the user query into the same latent space as the experts' profiles and then retrieves the experts with the highest dot-product between the embeddings of their profile and that of the query. This way the system can efficiently address the mismatch problem between the "language" of the user query and the "language" of the authors' documents: i.e., an expert can be identified even if her documents do not contain any term of the input query (Li et al., 2014).
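The latent-space matching described above can be sketched in a few lines. The following is a toy illustration, not the authors' implementation: the word and expert vectors are made-up placeholders for vectors that the real system learns from data.

```python
import numpy as np

# Toy vocabulary embeddings (in the real system these are learned vectors).
word_vecs = {
    "neural":   np.array([0.9, 0.1, 0.0]),
    "networks": np.array([0.8, 0.2, 0.1]),
    "parsing":  np.array([0.1, 0.9, 0.2]),
}

# Latent profiles of two hypothetical experts, in the same vector space.
expert_vecs = {
    "alice": np.array([0.85, 0.15, 0.05]),  # works on neural networks
    "bob":   np.array([0.10, 0.95, 0.10]),  # works on parsing
}

def rank_experts(query, word_vecs, expert_vecs):
    """Map the query into the latent space and rank experts by dot product."""
    q = np.mean([word_vecs[w] for w in query.split() if w in word_vecs], axis=0)
    scores = {name: float(np.dot(q, v)) for name, v in expert_vecs.items()}
    return sorted(scores.items(), key=lambda kv: -kv[1])

ranking = rank_experts("neural networks", word_vecs, expert_vecs)
```

Because the match happens in the latent space, "alice" is retrieved even if her documents never contain the literal query terms.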

Despite these recent improvements, however, the semantic matching implemented by Van Gysel et al. (Van Gysel et al., 2016b) is still limited to the use of latent concepts, namely ones that cannot be explicitly defined and thus cannot explain why an expert profile matches a user query. In this paper we propose a novel approach for expert finding which is still unsupervised but, unlike (Van Gysel et al., 2016b), takes advantage of the recent IR trend of deploying Knowledge Graphs (Weikum et al., 2016; Dietz et al., 2017), which allow modern search engines and IR tools to semantically match queries to documents more effectively and to explicitly represent the concepts occurring in those documents as well-defined nodes in these graphs. More specifically, our approach models the academic expertise of a researcher both syntactically and semantically by orchestrating a document-centric approach, which deploys an open-source search engine (namely ElasticSearch), and a profile-centric approach, which models the individual expert's knowledge in an innovative way: not just as a list of words or a vector of latent concepts (as in (Van Gysel et al., 2016b)), but as a small labeled and weighted graph derived from Wikipedia, the best known open Knowledge Graph to date. This graph consists of labeled nodes, which are the entities mentioned in the author's publications (detected via TagMe (Ferragina and Scaiella, 2012), one of the most effective entity linking systems to date), and of edges weighted by means of proper entity-relatedness scores (computed via an advanced framework (Ponza et al., 2017a)). Moreover, every node is labeled with a relevance score, which models the pertinence of the corresponding entity to the author's expertise and is computed by means of a proper random-walk calculation over the author's graph, and with a latent vector representation, which is learned via entity and other kinds of structural embeddings that are derived from Wikipedia and differ from the ones proposed in (Van Gysel et al., 2016b). This enriched graph yields a finer, explicit and more sophisticated model of the author's expertise, which is then used at query time to search and rank experts based on the semantic relations that exist between the words/entities occurring in the user query and the ones occurring in the author's graph.

This novel modelling and querying approach has been implemented in a system called Wiser, which has been tested on the largest available dataset for benchmarking academic expert finding systems, i.e. the TU dataset (Berendsen et al., 2013). This dataset consists of 31,209 documents, authored by 977 researchers, and 1,266 test queries with a human-assessed ground-truth that assigns to each query a ranking of its best academic experts. Wiser shows statistically significant improvements over different ranking metrics and configurations. More precisely, our document-centric approach improves the profile-centric Log-linear model proposed by (Van Gysel et al., 2016b) by +7.6%, +7.4% and +7% in MAP, MRR and NDCG@100 scores, whereas our profile-centric approach based on entity linking improves that Log-linear model by +2.4% in MAP and achieves comparable results on the other metrics. We then show that a proper combination of our document- and profile-centric approaches achieves a further improvement over the Log-linear model of +9.7%, +12.6% and +9.1% in MAP, MRR and NDCG@100; furthermore, it also improves upon the sophisticated Ensemble method of (Van Gysel et al., 2016b), which is currently the state-of-the-art, on the MAP, MRR and NDCG@100 metrics. This means that Wiser improves upon both the best single model and the best combined model available today, thus establishing the new state-of-the-art for the expert finding problem in academia.

A publicly available version of Wiser allows testing its expert finding and expert profiling functionalities over the researchers of the University of Pisa.

The next sections review the main literature about expert finding solutions (Section 2), in order to contextualize our problem and contributions; describe the design of Wiser, by detailing its constituent modules and their underlying algorithmic motivations (Section 4); and finally present a systematic and large set of experiments conducted on Wiser and the state-of-the-art systems, in order to show our achievements (Section 5) and identify new directions for future research (Section 6).

2. Related Work

We first discuss prior work on expert finding by describing the main challenges of this task and its differences from classic document retrieval. We then describe how our work differs from known expert finding (and profiling) approaches by commenting on its novel use of entity linking, relatedness measures and word/entity embeddings: techniques that we also describe in the next paragraphs. Finally, in the last part of this Section, we concentrate on the main differences between Wiser and the state-of-the-art system proposed by Van Gysel et al. (Van Gysel et al., 2016b), which is also the most similar to ours.

Expert Finding (and Profiling). Expert finding systems differ from classic search engines (Chakrabarti, 2002; Manning et al., 2008) in that they address the problem of finding the right person (as opposed to the right document) with the skills and knowledge specified via a user query. Preliminary attempts at adapting classic search engines to this task yielded poor results (Balog et al., 2012). The key issue is how to represent the individual expert's knowledge (Macdonald and Ounis, 2006; Balog et al., 2006, 2009; Van Gysel et al., 2016b). Among the several attempts, the ones that received most attention and success were the profile-centric models (Balog et al., 2006; Van Gysel et al., 2016b) and the document-centric models (Cao et al., 2005; Macdonald and Ounis, 2006; Balog et al., 2012). The former create a profile for each candidate according to the documents they are associated with, and then rank experts by matching the input query against their profiles. The latter first retrieve documents which are relevant to the input query and then rank experts according to the relevance scores of their matching documents. The joint combination of these two approaches has recently been shown to further improve the achievable performance (Balog and De Rijke, 2008; Van Gysel et al., 2016b), as we will discuss further below.

Most of the solutions in the literature are unsupervised (Cao et al., 2005; Macdonald and Ounis, 2006; Balog et al., 2006, 2009; Van Gysel et al., 2016b), since they do not need any training data for the deployment of their models. Supervised approaches (Macdonald and Ounis, 2011; Moreira et al., 2015) have also been proposed, but their application has usually been confined to data collections in which query-expert pairs are available for training (Sorg and Cimiano, 2011; Fang et al., 2010). This is clearly a limitation, which has indeed led researchers to concentrate mainly on unsupervised approaches.

The focus of our work is on the design of unsupervised academic expert finding solutions, which aim at retrieving experts (i.e. academic authors) whose expertise is properly defined through the publications they wrote. Among the most popular academic expert finding solutions is ArnetMiner (Tang et al., 2008), a system for mining academic social networks which automatically crawls and indexes research papers from the Web. Its technology relies on a probabilistic framework based on topic modeling for addressing both author ambiguity and expert ranking. Unfortunately, the implementation of the system is not publicly available and it has not been tested on publicly available datasets. Similar comments hold true for the Scival system by Elsevier.

Among the publicly available systems for academic expert finding, the state-of-the-art is the one recently proposed by Van Gysel et al. (Van Gysel et al., 2016b). It adapts a collection of unsupervised neural retrieval algorithms (Van Gysel et al., 2017), originally deployed for product search (Van Gysel et al., 2016a), to the expert finding context via a log-linear model which learns a profile-centric latent representation of academic experts from the dataset at hand. At query time, experts are retrieved by first mapping the user query into the same latent space as the experts' profiles and then retrieving the experts with the highest dot-product between their profile and the query.

Before discussing the differences between this approach and ours, we need to recall a few technicalities regarding the main modules we will use in our solution.

Entity Linking. All the expert finding approaches mentioned above (as well as typical IR solutions to indexing, clustering and classification) are commonly based on the bag-of-words paradigm. In recent years, research has gone beyond this paradigm (Weikum et al., 2016; Dietz et al., 2017; Ferragina and Scaiella, 2012) with the goal of improving the search experience on unstructured or semi-structured textual data (Blanco et al., 2015; Bovi et al., 2015; Ponza et al., 2017b; Cornolti et al., 2016; Nguyen et al., 2017). The key idea is to identify sequences of terms (also called spots or mentions) in the input text and to annotate them with unambiguous entities drawn from a Knowledge Graph, such as Wikipedia (Cucerzan, 2007; Kulkarni et al., 2009), YAGO (Suchanek et al., 2007), Freebase (Bollacker et al., 2008) or BabelNet (Navigli and Ponzetto, 2012). Documents are then retrieved, classified or clustered based on this novel representation, which consists of a bag of entities together with a semantic relatedness function (Gabrilovich and Markovitch, 2007; Milne and Witten, 2008; Ponza et al., 2017a) that condenses into a floating-point number how semantically close two entities are to each other. This representation has recently allowed researchers to design new algorithms that significantly boost the performance of known approaches in several IR applications, such as query understanding, document clustering and classification, text mining, etc. (Weikum et al., 2016; Dietz et al., 2017; Scaiella et al., 2012a; Ni et al., 2016a; Cornolti et al., 2016; Ponza et al., 2017a).

Entity Embeddings. Word embedding (Mikolov et al., 2013) is a recent Natural Language Processing (NLP) technique that maps words or phrases to low-dimensional numerical vectors that are faster to manipulate and offer interesting distributional properties for comparing and retrieving "similar" words or phrases (Mikolov et al., 2013). This latent representation has recently been extended (Ni et al., 2016b; Perozzi et al., 2014) to learn two different kinds of representations of Wikipedia entities (Ponza et al., 2017a): (1) Entity2Vec (Ni et al., 2016b) learns the latent representation of entities by working at the textual level over the content of Wikipedia pages, and (2) DeepWalk (Perozzi et al., 2014) learns the latent representation of entities by working on the hyper-link structure of the Wikipedia graph via the execution of random walks that start from a focus node (i.e. the entity to be embedded) and visit other nearby nodes (which provide its contextual knowledge). The former approach tends to declare similar two entities that co-occur within similar textual contexts, even if their textual mentions are different; the latter tends to declare similar two entities that are close in the Knowledge Graph. These novel forms of semantic embeddings have proven particularly effective in detecting entity relatedness (Ponza et al., 2017a).
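The corpus-generation step of DeepWalk can be sketched as follows. This is a minimal illustration on a made-up four-entity graph: it only generates the truncated random walks; a real pipeline would then feed these "sentences" of entities to a skip-gram model to learn the vectors.

```python
import random

# A tiny undirected "Wikipedia-like" graph: entity -> neighbors.
graph = {
    "Machine_learning": ["Artificial_intelligence", "Statistics"],
    "Artificial_intelligence": ["Machine_learning", "Computer_science"],
    "Statistics": ["Machine_learning"],
    "Computer_science": ["Artificial_intelligence"],
}

def random_walks(graph, walks_per_node=10, walk_length=5, seed=42):
    """Generate DeepWalk-style truncated random walks from each node.
    Each walk is a 'sentence' of entities; training a skip-gram model on
    these sentences yields one embedding vector per entity."""
    rng = random.Random(seed)
    walks = []
    for start in graph:
        for _ in range(walks_per_node):
            walk = [start]
            for _ in range(walk_length - 1):
                walk.append(rng.choice(graph[walk[-1]]))
            walks.append(walk)
    return walks

walks = random_walks(graph)
```

Entities that are close in the graph will often co-occur in the same walk, which is exactly why the learned vectors end up declaring them similar.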

To the best of our knowledge, we are the first to design an expert finding system for the academic domain based on entity linking and embedding techniques built upon the Wikipedia Knowledge Graph (Van Gysel et al., 2016b; Balog et al., 2012). The key feature of our system Wiser is a novel profile model for academic experts, called the Wikipedia Expertise Model, that deploys those advanced techniques to build a small labeled and weighted graph for each academic author. This graph describes her individual "explicit" knowledge in terms of the Wikipedia entities occurring in her publications and of their relatedness scores, computed by means of Wikipedia-based interconnections and embeddings. This graph representation is then used at query time to efficiently search and rank academic experts based on the "semantic" relation that exists between their graph model and the words and entities occurring in the user query.

3. Notation and Terminology

A dataset for the expert finding problem is a pair consisting of a set of documents D and a set of authors (candidate experts) A. We indicate with D(a) the set of documents written by author a.

In our context an entity e is a Wikipedia page. Entities are annotated in texts (both documents and queries) through the entity linker TagMe (Ferragina and Scaiella, 2012), which also provides a confidence score ρ expressing the semantic coherence between the entity and its surrounding text in the input document. Since an entity e can be mentioned many times in the documents of D(a), with possibly different values of ρ, we denote by ρ(a, e) the maximum confidence score among all occurrences of e in a's documents. We use E(a) to denote the set of all entities annotated in the documents of author a.

Given an entity e, we use A(e) to denote the set of authors who mention e in one of their documents, D(e) to denote the subset of documents that mention e, and D(a, e) to denote the subset of documents written by author a which mention e.

A generic input query is indicated with q; E(q) denotes the set of entities annotated in q by TagMe, and D(q) denotes the subset of documents which are (syntactically or semantically) matched by the input query q.

4. Wiser: Our New Proposal

In this Section we describe Wiser, whose name stands for Wikipedia Expertise Ranking. It is a system for academic expert finding, built on top of three main tools:

  • Elasticsearch, an open-source software library for the full-text indexing of large data collections;

  • TagMe (Ferragina and Scaiella, 2012), a state-of-the-art entity linker for annotating Wikipedia pages mentioned in an input text;

  • WikipediaRelatedness (Ponza et al., 2017a), a framework for the computation of several relatedness measures between Wikipedia entities.

By properly orchestrating and enriching the results returned by the above three tools, Wiser offers both document-centric and profile-centric strategies for solving the expert finding problem, thus taking advantage of the positive features of both approaches. More specifically, Wiser first builds a document-centric model of the explicit knowledge of academic experts via classic document indexing (by means of Elasticsearch) and entity annotation (by means of TagMe) of the authors' publications. Then, it derives a novel profile-centric model for each author that consists of a small, labeled and weighted graph drawn from Wikipedia. Nodes in this graph are the entities mentioned in the author's publications, whereas the weighted edges express the semantic relatedness among these entities, computed via WikipediaRelatedness. Every node is labeled with a relevance score, which models the pertinence of the corresponding entity to the author's expertise and is computed by means of a proper random-walk calculation over that graph, and with a latent vector representation, which is learned via entity and other kinds of structural embeddings derived from Wikipedia. This graph-based model is called the Wikipedia Expertise Model of an academic author (details in Section 4.1).

At query time, Wiser uses proper data fusion techniques (Macdonald and Ounis, 2006) to combine several authors' rankings: the one derived from the documents' ranking provided by Elasticsearch, and others derived by means of properly defined "semantic matchings" between the query and the Wikipedia Expertise Model of each author. In this way, it obtains a unique ranking of the academic experts that captures, syntactically and semantically, the searched expertise within the "explicit knowledge" of the authors (details in Section 4.2).

The following sections detail our novel Wikipedia Expertise Model, together with its construction and use in the two phases above.

4.1. Data indexing and experts modeling

This is an off-line phase which consists of two main sub-phases whose goal is to construct the novel Wikipedia Expertise Model for each academic author to be indexed. A pictorial description of this phase is provided in Figure 1.

Figure 1. The construction for a given author of the Wikipedia Expertise Model in Wiser.

Data Acquisition. In this first sub-phase, Wiser indexes the authors' publications by means of Elasticsearch and annotates them with Wikipedia entities by means of TagMe. For each input document, Elasticsearch stores information about its author and its textual content, whereas TagMe extracts the Wikipedia entities that are mentioned in the document, together with their ρ-score which, we recall, captures the coherence between the annotated entity and the surrounding textual context in which it is mentioned. Given that the annotated documents are scientific publications, they are well written and formatted, so TagMe is very effective in its task of extracting relevant Wikipedia entities. Subsequently, Wiser filters out the entities whose ρ-score falls below the threshold suggested by TagMe's documentation, since those entities are usually noisy or not coherent with the topics mentioned in the annotated document. Eventually, all this information is stored in a MongoDB database.
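The confidence-based filtering step amounts to a simple threshold test. A minimal sketch follows; the annotations and the 0.15 cut-off are illustrative assumptions (the paper defers the actual value to TagMe's documentation):

```python
# Hypothetical annotations as returned by an entity linker: (entity, confidence).
annotations = [
    ("Machine_learning", 0.45),
    ("Rome", 0.05),          # spurious, low-coherence annotation
    ("Graph_theory", 0.30),
]

RHO_THRESHOLD = 0.15  # assumed cut-off; the paper defers to TagMe's docs

def filter_entities(annotations, threshold=RHO_THRESHOLD):
    """Keep only annotations whose confidence score clears the threshold."""
    return [(e, rho) for e, rho in annotations if rho >= threshold]

kept = filter_entities(annotations)
```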

Wikipedia Expertise Model (abb. WEM). In this second sub-phase, Wiser creates an innovative profile for each academic author, consisting of a graph whose nodes are labeled with the Wikipedia entities found in the author's documents and whose edges are weighted by deploying entity embeddings and the structure of the Wikipedia graph, by means of the WikipediaRelatedness framework. More precisely, the expertise of each author a is modeled as a labeled and weighted graph in which each node is a Wikipedia entity annotated by TagMe in at least one of a's documents, and each edge between two entities is weighted with their relatedness. In our context, this weight is the Milne&Witten relatedness measure between the two entities, computed with the WikipediaRelatedness framework. This measure has shown its robustness and effectiveness in different domains (Scaiella et al., 2012b; Ferragina and Scaiella, 2012; Ponza et al., 2017a); we leave the use of the more sophisticated relatedness measures available in WikipediaRelatedness (Ponza et al., 2017a) to future work.
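The Milne&Witten measure is computed from the sets of Wikipedia pages that link to each of the two entities. A minimal sketch of the set-based formulation commonly used in the literature (function and variable names are ours):

```python
import math

def milne_witten(inlinks_a, inlinks_b, num_entities):
    """Milne&Witten link-based relatedness between two Wikipedia entities,
    computed from their sets of in-linking pages. Returns a value in [0, 1],
    where higher means more related (0 when the in-link sets are disjoint)."""
    a, b = set(inlinks_a), set(inlinks_b)
    common = a & b
    if not common:
        return 0.0
    distance = (math.log(max(len(a), len(b))) - math.log(len(common))) / \
               (math.log(num_entities) - math.log(min(len(a), len(b))))
    return max(0.0, 1.0 - distance)

# Toy example: two entities sharing most of their in-links are highly related.
score = milne_witten({"p1", "p2", "p3"}, {"p2", "p3", "p4"}, num_entities=1000)
```

Intuitively, two entities cited by largely the same Wikipedia pages get a score close to 1, while unrelated entities get 0.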

The graph is further refined through an outlier-elimination process performed via a graph clustering algorithm that recognizes and removes those entities that do not belong to any cluster and can thus be considered off-topic for the author a. For this task, Wiser deploys HDBSCAN (McInnes and Healy, 2017), a density-based hierarchical clustering method based on the classic DBSCAN (Manning et al., 2008). The choice of HDBSCAN is motivated by its efficiency and by its higher clustering quality compared with other popular algorithms (e.g. K-Means) (McInnes and Healy, 2017). As with any clustering algorithm, the input parameters of HDBSCAN strongly depend on the input graph and its expected output. In our experiments we observed that sometimes the entities labeled as outliers are not really off-topic (false positives), while in other cases no outliers are detected although they do exist upon human inspection (false negatives). Wiser deals with these issues by adopting a conservative approach: if more than 20% of the nodes in the graph are marked as outliers, we consider the output provided by HDBSCAN invalid and keep all nodes as valid topics for the examined author a.
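The conservative fallback around the clustering output can be sketched as follows. The function names are ours; the `-1` label mirrors the convention used by HDBSCAN implementations for points assigned to no cluster:

```python
def apply_outlier_filter(nodes, outlier_labels, max_fraction=0.2):
    """Conservative fallback around a clustering-based outlier detector.
    `outlier_labels` marks nodes flagged as outliers (e.g. HDBSCAN's -1 label).
    If more than `max_fraction` of the nodes are flagged, the clustering output
    is considered unreliable and all nodes are kept."""
    outliers = {n for n, lab in zip(nodes, outlier_labels) if lab == -1}
    if len(outliers) > max_fraction * len(nodes):
        return list(nodes)            # reject the clustering output
    return [n for n in nodes if n not in outliers]

# 1 outlier out of 6 nodes (about 17%) -> below 20%, so the outlier is removed.
kept = apply_outlier_filter(["e1", "e2", "e3", "e4", "e5", "e6"],
                            [0, 0, 1, 1, -1, 0])
```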

Figure 2. Experts retrieval in Wiser via the combination of a document-centric strategy and a profile-centric strategy through proper data fusion techniques that are described in the text.

After the application of the outlier-elimination process, Wiser computes two attributes for each node (hence, entity) of the graph. The first one is the relevance score of an entity e mentioned by the author a. This score is computed by running the Personalized PageRank algorithm (Haveliwala, 2002) over the graph, with the PageRank damping factor set as commonly chosen in the literature (Brin and Page, 1998). Moreover, the starting and teleporting distributions over the graph's nodes are defined to reflect the number of times author a mentions the entity assigned to each node, scaled by the ρ-score that evaluates how reliable that entity is as one of a's research topics according to TagMe, and divided by a normalization factor that turns this formula into a probability distribution over the entities labeling the nodes. This definition gives the more frequent and coherent entities a higher chance of re-starting a random walk, so that their nodes will likely obtain a higher steady-state probability (i.e. relevance score) via the Personalized PageRank computation (Haveliwala, 2002). In this computation a significant role is played by the weighted edges of the graph, which explicitly model the semantic relatedness among the entities mentioned by a.
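A minimal power-iteration sketch of Personalized PageRank on a toy author graph follows. The damping factor 0.85 is the value commonly chosen in the literature, used here only as an illustrative assumption, and the teleport vector plays the role of the frequency-times-coherence distribution described above:

```python
import numpy as np

def personalized_pagerank(adj, teleport, alpha=0.85, iters=100):
    """Personalized PageRank via power iteration. `adj` is a (weighted)
    adjacency matrix, `teleport` the restart distribution (here: proportional
    to how often and how coherently the author mentions each entity)."""
    adj = np.asarray(adj, dtype=float)
    out = adj.sum(axis=1, keepdims=True)
    out[out == 0] = 1.0                      # avoid division by zero for sinks
    P = adj / out                            # row-stochastic transition matrix
    v = np.asarray(teleport, dtype=float)
    v = v / v.sum()                          # normalize to a distribution
    r = v.copy()
    for _ in range(iters):
        r = alpha * (r @ P) + (1 - alpha) * v
    return r

# Three entities; the first is mentioned most often and most coherently.
adj = [[0, 1, 1],
       [1, 0, 1],
       [1, 1, 0]]
teleport = [0.6, 0.3, 0.1]  # frequency x confidence, before normalization
scores = personalized_pagerank(adj, teleport)
```

On this symmetric toy graph the steady-state scores simply follow the teleport bias; with weighted edges, well-connected entities are boosted further.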

The second attribute associated to each node is a vector of floating-point numbers computed through the DeepWalk model for entity embeddings (see Section 2). This technique is inspired by the approach adopted in (Van Gysel et al., 2017), where the expertise of each author is modeled with an embedding vector. But, unlike (Van Gysel et al., 2017), where vectors are learned via a bag-of-words paradigm directly from the dataset, our embedding vectors are more "powerful" in that they embed the latent knowledge learned from the content and the structure of Wikipedia and, additionally, they "combine" the relevance score described above and associated to each entity (node) in the graph. Eventually, we compute for every author one single embedding vector, obtained by summing up the DeepWalk embedding vectors of her top-k entities, ranked according to the relevance score described above. (In the experiments of Section 5 we investigate the impact of the choice of k.) This embedding vector incorporates the expertise of each author into a fixed number of components (see Section 5); thus it is fast to manage in the subsequent query operations, when we will need to compute the semantic matches between authors' topics and query topics.
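The aggregation of entity vectors into a single author vector can be sketched as follows; the entity vectors and relevance scores are made-up placeholders for the DeepWalk embeddings and PageRank scores computed earlier:

```python
import numpy as np

def author_embedding(entity_vecs, relevance, k=2):
    """Build one embedding per author by summing the (DeepWalk-style) vectors
    of the k entities with the highest relevance score in the author's graph."""
    top = sorted(relevance, key=relevance.get, reverse=True)[:k]
    return np.sum([entity_vecs[e] for e in top], axis=0)

entity_vecs = {
    "Machine_learning": np.array([1.0, 0.0]),
    "Graph_theory":     np.array([0.0, 1.0]),
    "Rome":             np.array([5.0, 5.0]),  # low relevance, ignored for k=2
}
relevance = {"Machine_learning": 0.5, "Graph_theory": 0.4, "Rome": 0.1}
vec = author_embedding(entity_vecs, relevance)
```

Restricting the sum to the top-k entities keeps low-relevance (possibly noisy) entities from polluting the author's latent representation.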

Name — Equation — Description
meank — (1/k) · Σ_{i=1..k} s_i(a, q) — Average of the top-k scores of a's documents.
max — max_d s(d, q) — Maximum of the scores of a's documents.
rr — Σ_d 1/rank(d, q) — Reciprocal Rank (Macdonald and Ounis, 2006) of the ranks of a's documents; rank(d, q) is the ranking position of document d.
combnz — (Σ_d s(d, q)) / |D(a)| — Documents' scores of author a, normalized by the number of documents associated to a.
Table 1. Document-scoring techniques used by Wiser within its document-centric strategy. We denote by s_i(a, q) the score assigned to the i-th ranked document of author a, computed via several techniques; sums and maxima range over the documents of a retrieved for query q.

Summarizing, Wiser computes for every author a WEM profile consisting of a graph and an embedding vector. In this way the WEM profile models the explicit knowledge of the author by identifying the explicit concepts (via entities and their relations) and the latent concepts (via an embedding vector) occurring in her documents. The graph is crucial in many respects: it captures the entities mentioned in the author's documents and their Milne&Witten relatedness scores, and it also allows us to select the top-k entities that best describe the author's expertise, according to a relevance score derived by means of a Personalized PageRank calculation over it. The DeepWalk vectors of these top-k entities are then summed to get the author's embedding vector, which describes the best latent concepts of her expertise.

4.2. Finding the Experts

At query time, Wiser identifies the expertise areas mentioned in the input query and then retrieves a set of candidate experts, to each of which it assigns an expertise score. This score is eventually used for generating the final ranking of experts that are returned as the result of query .

Since our system relies on both document-centric and profile-centric strategies, we have organized this section into three main paragraphs, which respectively describe each of those strategies and the method used to combine them via proper data-fusion techniques. Figure 2 reports a graphical representation of the query processing phase.

Document-Centric Strategy. It relies on the use of Elasticsearch. The query is forwarded to Elasticsearch in order to retrieve a ranked list of documents, i.e. a list where is the score computed for document given the query . In our experiments we will test several ranking scores: tf-idf (Manning et al., 2008), BM25 (Robertson et al., 2009), and Language Modeling with either Dirichlet or Jelinek-Mercer smoothing ranking techniques (Zhai and Lafferty, 2017).

The ranked list of documents is then turned into a ranked list of authors by means of several well-known techniques (Macdonald and Ounis, 2006; Shaw et al., 1994), which we have adapted to our context; they are described in Table 1 and tested in Section 5.
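The document-to-author aggregation can be sketched as follows. This is a toy re-implementation of the Table 1 aggregators under assumed formulas (the exact equations are not reproduced in this extraction); names and signatures are illustrative:

```python
def author_scores(doc_ranking, doc_authors, method="rr", k=5):
    """Aggregate a ranked list of (doc, score) pairs into author scores.

    doc_ranking: list of (doc_id, score), best document first
    doc_authors: dict doc_id -> list of author ids
    """
    per_author = {}                       # author -> list of (rank, score)
    for rank, (doc, score) in enumerate(doc_ranking, start=1):
        for a in doc_authors.get(doc, []):
            per_author.setdefault(a, []).append((rank, score))

    agg = {}
    for a, pairs in per_author.items():
        scores = [s for _, s in pairs]
        if method == "max":               # maximum of the author's doc scores
            agg[a] = max(scores)
        elif method == "meank":           # average of the top-k doc scores
            agg[a] = sum(sorted(scores, reverse=True)[:k]) / min(k, len(scores))
        elif method == "rr":              # sum of reciprocal ranks
            agg[a] = sum(1.0 / r for r, _ in pairs)
        elif method == "combnz":          # sum normalized by number of docs
            agg[a] = sum(scores) / len(scores)
        else:
            raise ValueError(method)
    return agg
```

An author appearing in several highly ranked documents thus accumulates a large rr score, which is the behavior that makes rr the best aggregator in the experiments of Section 5.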

Name Equation Description
iaf Inverse Author Frequency, namely the smoothing factor used for modeling the importance of entity  in the dataset at hand. This score is used only in combination with other techniques (see ec-iaf and ef-iaf).
ec-iaf Frequency of an entity smoothed by means of its coherence with ’s documents (i.e. ) and the iaf scores.
ef-iaf Scaling down ec-iaf by means of the "productivity" of author , measured as the number of authored documents.
rec-iaf Extending the ec-iaf equation with the relevance score of the entity within the graph .  is a scaling function described in the experiments.
max Maximum exact-match score computed for a given author and for each .  is either ec-iaf, ef-iaf or rec-iaf.
mean Average exact-match score computed for a given author and for each .  is either ec-iaf, ef-iaf or rec-iaf.
Table 2. Author-scoring techniques based on Exact-Match of entities, used by Wiser within its profile-centric strategy. The function  can be linear, a sigmoid or a square function. The equations ec-iaf, ef-iaf and rec-iaf are computed for a given author  and entity , whereas max and mean aggregate the scores computed for multiple entities into a single one.
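Since the equations of Table 2 are not reproduced in this extraction, the following is only a plausible sketch of ec-iaf and ef-iaf, assuming a multiplicative combination of entity frequency, coherence and an inverse-author-frequency factor, as the table descriptions suggest; all names and the exact iaf form are hypothetical:

```python
import math

def ec_iaf(author, entity, freq, coherence, author_entities):
    """Hypothetical ec-iaf: the frequency of `entity` in the author's
    documents, smoothed by its coherence with those documents and by an
    inverse-author-frequency (iaf) factor that rewards rare entities."""
    n_authors = len(author_entities)
    n_with_e = sum(1 for ents in author_entities.values() if entity in ents)
    iaf = math.log(1 + n_authors / n_with_e)      # assumed iaf smoothing form
    return freq[(author, entity)] * coherence[(author, entity)] * iaf

def ef_iaf(author, entity, freq, coherence, author_entities, n_docs):
    """Hypothetical ef-iaf: ec-iaf scaled down by the author's
    "productivity", i.e. the number of documents she has authored."""
    return ec_iaf(author, entity, freq, coherence, author_entities) / n_docs[author]
```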
Name Equation Description
aer Author Entity Relatedness score between the top- entities of  and the entities in .
raer Ranked Author Entity Relatedness score that extends aer with the entities’ relevance score.  is a scaling function described in the experiments.
aes Author Entity Similarity, which computes the cosine similarity between the embedding of entity  and the embedding of author .
Table 3. Author-scoring techniques based on Related-Match of entities, used by Wiser within its profile-centric strategy. The top- entities of author  are the ones with the highest relevance score in . In the experiments we have set , thus taking the top  entities mentioned in ’s documents.
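The aes score of Table 3 is a plain cosine similarity and can be written directly (a minimal sketch; the vectors are assumed to be the DeepWalk embeddings described in Section 4.1):

```python
import numpy as np

def aes(entity_vec, author_vec):
    """aes: cosine similarity between a query-entity embedding and the
    author's aggregated DeepWalk embedding."""
    num = float(np.dot(entity_vec, author_vec))
    den = float(np.linalg.norm(entity_vec) * np.linalg.norm(author_vec))
    return num / den
```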
Name Equation Description
combsum The final score is the sum of the scores.
combmin The final score is the minimum of the scores.
combmax The final score is the maximum of the scores.
rrm The final score is the product of the reciprocal ranking scores.
rrs The final score is the inverse of the sum of the ranking scores.
Table 4. Data-fusion techniques used by Wiser to combine the (document-centric and profile-centric) scores of an author  into a unique value that reflects the pertinence of ’s expertise to the user query .
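The data-fusion techniques of Table 4 can be sketched as follows. This is an illustrative re-implementation from the table descriptions (the handling of authors missing from a ranking is an assumption of this sketch):

```python
def fuse(rankings, method="rrm"):
    """Combine several per-author score dicts (one per strategy) into a
    single score per author, following the spirit of Table 4.

    rankings: list of dicts author -> score (higher is better)
    """
    # Convert each score dict into 1-based ranks (best score -> rank 1).
    ranks = []
    for r in rankings:
        ordered = sorted(r, key=r.get, reverse=True)
        ranks.append({a: i + 1 for i, a in enumerate(ordered)})

    authors = set().union(*[set(r) for r in rankings])
    fused = {}
    for a in authors:
        scores = [r.get(a, 0.0) for r in rankings]
        if method == "combsum":
            fused[a] = sum(scores)
        elif method == "combmin":
            fused[a] = min(scores)
        elif method == "combmax":
            fused[a] = max(scores)
        elif method == "rrm":    # product of reciprocal ranks
            fused[a] = 1.0
            for rk in ranks:     # missing authors get a rank past the end
                fused[a] *= 1.0 / rk.get(a, len(rk) + 1)
        elif method == "rrs":    # inverse of the sum of ranks
            fused[a] = 1.0 / sum(rk.get(a, len(rk) + 1) for rk in ranks)
        else:
            raise ValueError(method)
    return fused
```

Rank-based fusions such as rrm and rrs are scale-free, which is why they can mix BM25 scores with profile-centric scores that live on completely different numeric ranges.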

Profile-Centric Strategy. This is a novel set of scoring strategies that we have specifically designed to rank experts according to our new WEM profile. Authors are scored via a computation that consists of three main steps. First, Wiser runs TagMe over the input query  and annotates it with a set of pertinent Wikipedia entities, denoted by . Second, it retrieves as candidate experts the authors whose WEM profile contains at least one of the entities in . Third, the authors in  are ranked according to two novel entity-scoring methods, which we call Exact and Related, based on properly defined exact- or related-scoring functions computed between  and their WEM profiles. These scoring functions will be experimentally tested in Section 5.

Exact-Match Scoring. This collection of methods measures the relevance of an author  with respect to the query  as a function of the frequency of ’s entities occurring in ’s documents. More precisely, an author is first retrieved as a candidate expert for  if her WEM profile contains at least one of the entities annotated in ; she is then ranked by means of one of the techniques reported in Table 2, which take into account only the frequency of the entities explicitly occurring in her WEM profile.

Related-Match Scoring. This approach aims at implementing a semantic scoring of the authors in , by evaluating the pertinence of the expertise of an author  according to the relatedness between the entities in her WEM profile and the entities in  (as opposed to the frequency used by the previous scoring functions). Table 3 reports the list of techniques used to design this kind of semantic scores. They exploit either the structure of the graph  (i.e. aer and raer) or the cosine similarity between the embedding vectors of the compared entities (i.e. aes).
Combining Document-Centric and Profile-Centric Strategies. The document-centric and profile-centric strategies are eventually combined via proper data-fusion techniques, listed in Table 4. We designed those techniques as adaptations of the proposals in (Shaw et al., 1994; Macdonald and Ounis, 2006) to the expert-finding problem.

4.3. Optimization and Efficiency Details

Wiser implements three main algorithmic techniques that speed up the retrieval of experts, thus making the query experience user-friendly.

Double Index. Wiser’s index is implemented with two different data structures, namely two inverted lists that store the author-to-entities and entity-to-authors associations. This allows Wiser to efficiently retrieve at query time all the information needed for ranking authors with the profile-centric strategies.
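Building the two inverted maps is straightforward; the following is a minimal sketch with illustrative names:

```python
from collections import defaultdict

def build_double_index(wem_profiles):
    """Build the two maps Wiser keeps: author -> entities and
    entity -> authors.

    wem_profiles: dict author -> iterable of entities in her WEM profile
    """
    author_to_entities = {a: set(ents) for a, ents in wem_profiles.items()}
    entity_to_authors = defaultdict(set)
    for a, ents in author_to_entities.items():
        for e in ents:
            entity_to_authors[e].add(a)   # invert the association
    return author_to_entities, dict(entity_to_authors)
```

With the entity-to-authors map, the candidate experts for a query are obtained by a union of the postings of the entities annotated in the query.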

Ordered Entities by Relevance Score. Some profile-centric strategies, namely aer and raer, need to retrieve the top- most related entities of an author with respect to , but this latter set of entities is known only at query time. This could be slow when dealing with many authors and many entities, so Wiser pre-computes and stores for each author the list of her entities sorted by their relevance score, computed by means of a Personalized PageRank over  (i.e. ’s WEM profile). The computation of the top- entities in  with respect to  then boils down to a fast list intersection.
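The intersection step can be sketched as a single scan of the pre-sorted list (an illustrative sketch, assuming the author's entities are stored in decreasing order of relevance):

```python
def top_k_related(sorted_entities, query_entities, k):
    """Return the first k entities of the author's relevance-sorted list
    that also appear in the query annotation: a list intersection instead
    of a per-query relatedness computation.

    sorted_entities: author's entities, highest relevance first
    query_entities:  entities annotated in the query by TagMe
    """
    query_entities = set(query_entities)
    result = []
    for e in sorted_entities:             # already sorted at indexing time
        if e in query_entities:
            result.append(e)
            if len(result) == k:
                break
    return result
```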

Relatedness Cache. The indexing phase of Wiser needs to compute the graph  for every author  of the input dataset. This could be a very slow process in the presence of many authors and many entities, because  is a graph of up to  edges, whose weights have to be obtained by querying the RESTful service underlying the WikipediaRelatedness framework. In order to speed up this computation, Wiser caches the edge weights as soon as they are computed. This way, if two entities occur in many subsequent graphs, their relatedness is obtained by accessing the cached value rather than recomputing it.
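The cache is a plain memoization of a symmetric pairwise function; a minimal sketch (the `fetch` callable stands in for the call to the WikipediaRelatedness service, whose API is not detailed here):

```python
class RelatednessCache:
    """Memoize pairwise entity-relatedness lookups so that edge weights
    shared by many author graphs are fetched from the backend only once."""

    def __init__(self, fetch):
        self._fetch = fetch       # e.g. a call to the relatedness service
        self._cache = {}

    def relatedness(self, e1, e2):
        # Relatedness is symmetric, so normalize the key ordering.
        key = (e1, e2) if e1 <= e2 else (e2, e1)
        if key not in self._cache:
            self._cache[key] = self._fetch(*key)
        return self._cache[key]
```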

5. Validation

In order to evaluate the efficacy of Wiser we have set up a sophisticated experimental framework that systematically tests the various document-centric and profile-centric strategies described in Tables 1–3 and the data-fusion techniques described in Table 4 over the publicly available TU dataset (Berendsen et al., 2013). From these experiments we will derive the best combination of techniques, which will then be used to compare the resulting Wiser against the state-of-the-art systems currently known to solve the expert finding problem.

5.1. Dataset

The TU dataset (Berendsen et al., 2013) is an updated version of the UvT dataset, developed at Tilburg University (TU). (We thank Christophe Van Gysel for providing us with the dataset.) It is currently the largest dataset available for benchmarking academic expert-finding solutions, containing both Dutch and English documents. The TU dataset comes with five different (human-assessed) ground-truths, named GT1 to GT5. In our experiments we have decided to use GT5 because it is considered the most recent and complete ground-truth (see (Berendsen et al., 2013) for details) and because it is the one used in the experiments of (Van Gysel et al., 2016b). Table 5 offers a high-level overview of the dataset, while Table 6 offers a finer description.

Resource Count
Documents 31,209
Author candidates 977
Queries (GT5) 1,266
Document-candidate associations 36,566
Documents with at least one associated candidate 27,834
Associations per document
Associations per candidate
Table 5. Overview of the TU dataset (Van Gysel et al., 2016b).
Resource Documents with Documents with Total num.
at least one author no authors documents
Profile pages (UK)
Profile pages (NL)
Course pages
Total documents
Table 6. Document composition for the TU dataset.
Resource Space
Raw Documents 25 MB
Elasticsearch Index 40 MB
WEM Profiles (total) 94 MB
WEM Profiles (average per author) 100 KB
Table 7. Space occupancy of Wiser’s index built on the TU dataset.

Indexing TU with Wiser. Since the TU dataset contains both Dutch and English documents, we normalize the collection by translating Dutch documents into English via the Translate Shell tool (an open-source command-line translator built on the Google Translate APIs). Then, the dataset is indexed with Wiser, as described in Section 4.1. Table 7 reports the memory occupancy of the final indexes.

5.2. Evaluation Metrics

In our experiments we use the following ranking metrics, which are available in the trec_eval tool and are commonly used to evaluate expert-finding systems.

Precision at k (P@k). It is the fraction of the retrieved authors that are relevant for a given query q with respect to a given cut-off k, which considers only the topmost k results returned by the evaluated system:

P@k = (number of relevant authors among the top-k results) / k

Mean Average Precision (MAP). Precision and recall are set-based measures, thus they are computed on unordered lists of authors. For systems that return ranked results, as the ones solving the expert-finding problem, it is desirable to consider the order in which the authors are returned. The following score computes the average of P@k over the relevant retrieved authors:

AveP(q) = (1/R) · Σ_{k=1..n} P@k · rel(k)

where n is the number of retrieved authors, R is the number of authors relevant for q, and rel(k) is a function which equals 1 if the item at rank k is a relevant author for q, and 0 otherwise.

The following score averages AveP over all queries in Q:

MAP = (1/|Q|) · Σ_{q∈Q} AveP(q)

Mean Reciprocal Rank (MRR). The reciprocal rank of a query response is the inverse of the rank of the first correct answer (i.e. the first relevant author for q), namely RR(q) = 1/rank_q, where rank_q is the position of the first relevant author in the ranked list returned for q. The following score averages the reciprocal rank over all queries in Q:

MRR = (1/|Q|) · Σ_{q∈Q} 1/rank_q

Normalized Discounted Cumulative Gain (NDCG). Assuming to have a relevance score for each author, given a query q we wish to have measures that give more value to the relevant authors appearing high in the ranked list of results returned for q (Järvelin and Kekäläinen, 2002). Discounted Cumulative Gain is a measure that penalizes highly relevant authors appearing lower in the result list for q. This is obtained by reducing their relevance value (i.e. rel(i), see above) by the logarithm of their position in that list:

DCG@p = Σ_{i=1..p} rel(i) / log2(i + 1)

The final measure we introduce for our evaluation purposes, NDCG, is among the most famous ones adopted for classic search engines (Manning et al., 2008). It is computed by normalizing DCG with respect to the best possible ranking for a given query q. More precisely, for a position p, the Ideal Discounted Cumulative Gain (IDCG@p) is obtained by computing DCG@p on the list of authors sorted by their relevance score with respect to q. Then the NDCG@p measure is obtained as the ratio between DCG@p and IDCG@p:

NDCG@p = DCG@p / IDCG@p
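These standard metrics can be re-implemented in a few lines of Python; the following is a minimal illustration (trec_eval remains the reference implementation used in our evaluation):

```python
import math

def precision_at_k(ranked, relevant, k):
    """P@k: fraction of the top-k retrieved authors that are relevant."""
    return sum(1 for a in ranked[:k] if a in relevant) / k

def average_precision(ranked, relevant):
    """AveP: mean of P@k over the ranks where a relevant author appears."""
    hits, total = 0, 0.0
    for i, a in enumerate(ranked, start=1):
        if a in relevant:
            hits += 1
            total += hits / i          # this is P@i at a relevant rank
    return total / len(relevant) if relevant else 0.0

def reciprocal_rank(ranked, relevant):
    """RR: inverse rank of the first relevant author (0 if none found)."""
    for i, a in enumerate(ranked, start=1):
        if a in relevant:
            return 1.0 / i
    return 0.0

def ndcg_at_k(ranked, gains, k):
    """NDCG@k with log2 position discount; `gains` maps author -> relevance."""
    def dcg(items):
        return sum(gains.get(a, 0) / math.log2(i + 1)
                   for i, a in enumerate(items, start=1))
    ideal = sorted(gains, key=gains.get, reverse=True)[:k]
    return dcg(ranked[:k]) / dcg(ideal) if dcg(ideal) > 0 else 0.0
```

MAP and MRR are then just the means of average_precision and reciprocal_rank over the query set.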
5.3. Experiments

Section 4 has described several possible techniques that Wiser can use to implement its document-centric and profile-centric strategies. In this section we experiment with all these proposals, varying also the parameters involved. More precisely, for the document-centric strategies we experiment with different document rankings and investigate several data-fusion techniques that allow us to assign one single score to each candidate expert given all of her documents pertinent to the input query (see Tables 1 and 4). For the profile-centric strategies, we experiment with the exact- and related-match scoring methods summarized in Tables 2 and 3. At the end, from all these experiments we derive the best possible configurations of Wiser, and then compare them against the state-of-the-art approaches (Van Gysel et al., 2016b). This comparison will eventually allow us to design and implement a state-of-the-art version of Wiser that further improves the best known results, by means of a proper orchestration of document-centric, profile-centric and data-fusion strategies.

Evaluation of the Document-Centric Strategies. We configure Wiser to first rank documents via various scoring functions, i.e. tf-idf, BM25, or Language Modeling with Dirichlet or Jelinek-Mercer smoothing. Then, we compute a score for each author by combining two or more of the previous rankings via one of the data-fusion techniques described in Section 4.2 and summarized in Table 4. As far as the smoothing configurations for the Dirichlet and Jelinek-Mercer approaches are concerned, we set  and , as suggested by the documentation of Elasticsearch.
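For concreteness, the two smoothed language-model scores can be sketched as follows. The smoothing values used here (mu=2000, lam=0.1) are common defaults and only stand in for the actual parameters, which are not shown in this extraction:

```python
import math

def lm_dirichlet(query_terms, doc_tf, doc_len, coll_prob, mu=2000.0):
    """Query log-likelihood with Dirichlet smoothing:
    p(t|d) = (tf(t,d) + mu * p(t|C)) / (|d| + mu)."""
    return sum(math.log((doc_tf.get(t, 0) + mu * coll_prob[t]) / (doc_len + mu))
               for t in query_terms)

def lm_jelinek_mercer(query_terms, doc_tf, doc_len, coll_prob, lam=0.1):
    """Jelinek-Mercer smoothing:
    p(t|d) = (1 - lam) * tf(t,d)/|d| + lam * p(t|C)."""
    return sum(math.log((1 - lam) * doc_tf.get(t, 0) / doc_len
                        + lam * coll_prob[t])
               for t in query_terms)
```

Both smoothers interpolate the document's term statistics with collection-level probabilities, so that a document never receives a zero probability for an unseen query term.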

Method MAP MRR P@5 P@10 NDCG@100
tf-idf (rr) 0.284 0.347 0.120 0.082 0.420
BM25 (rr) 0.363 0.437 0.157 0.099 0.495
LM (Dirichlet, rr) 0.341 0.410 0.145 0.096 0.473
LM (Jelinek-Mercer, rr) 0.346 0.414 0.151 0.098 0.481
Table 8. Comparison among different configurations of document-centric strategies with normalized reciprocal rank (rr) as data-fusion technique.

Figure 3 reports the performance of Wiser by varying: (i) the document ranking, (ii) the data-fusion method, and (iii) the evaluation metric. Looking at the histograms, it is very clear that each strategy achieves its best performance when the reciprocal rank (rr in the figures) is used as data-fusion method. So we have set rr in the following experiments and explored the best performance of all other combinations. Results are reported in Table 8 below. We notice that, although all strategies have values of  and  very close to each other, a difference is present on ,  and . As far as the document rankings are concerned, we note that tf-idf is the worst approach, whereas both LM strategies have good performance and, undoubtedly, BM25 is the clear winner, with + on , + on  and + on  with respect to tf-idf, and + on , + on  and + on  with respect to any LM. So the winner among the document-centric strategies is BM25 with rr as data-fusion method.

Figure 3. Expert finding performance of Wiser with different configurations of document-centric strategies and data-fusion methods.

Evaluation of the Profile-Centric Strategies. We experimented with the two configurations of Wiser that deploy either the exact- or the related-match score for evaluating the pertinence of the WEM profile of an author with respect to the entities of a given query, as described in Section 4.2. To ease the reading of the following experimental results, we first comment on their individual use and then illustrate some combinations.

Figure 4. Performance of Wiser by profile-centric strategies based on entity count: ec-iaf and ef-iaf.
Figure 5. Performance of Wiser by rec-iaf as profile-centric strategy with different scaling functions for the relevance score .
Match Method MAP MRR P@5 P@10 NDCG@100
Exact ec-iaf (mean) 0.289 0.353 0.125 0.081 0.394
ef-iaf (mean) 0.204 0.236 0.084 0.064 0.320
rec-iaf (sqrt-mean) 0.311 0.372 0.134 0.086 0.413
Related aer 0.187 0.226 0.081 0.058 0.332
raer (sigmoid) 0.185 0.224 0.081 0.058 0.331
aes (dw-cbow-30) 0.214 0.255 0.092 0.067 0.365
Table 9. Comparison between the best configuration of the profile-centric approaches (with both exact and related match scoring methods) implemented by Wiser.

Exact-Match Scoring. Figure 4 reports the performance of Wiser configured to rank authors either with ec-iaf or ef-iaf (both methods based on entity frequency), deploying the max and mean methods for combining multiple scores into a single one. It is evident that ec-iaf scoring with mean outperforms ef-iaf.

Figure 6. Expert finding performance of Wiser by deploying aer and different configurations of raer.
Figure 7. Expert finding performance of Wiser by varying the number of top- entities used for generating the unique embedding vector of each author with the DeepWalk-CBOW model.

Figure 5 shows the performance of Wiser with different configurations of the rec-iaf scoring, which extends ec-iaf with the entities’ relevance score (computed by means of a Personalized PageRank executed over the author’s graph ). Since rec-iaf depends on , we experimented with various settings for  that we report on the top of Figure 5, i.e. the identity function, the sigmoid function, the square-root function, and the square function. Looking at the plots, it is evident that the best configuration for rec-iaf is achieved when  is the square-root function, and that it improves over both ec-iaf and ef-iaf.

Related-Match Scoring. Figure 6 shows the performance of the aer and raer profile-centric strategies. Since raer depends on , we investigated the same set of scaling functions experimented for the rec-iaf method. Despite the fact that the raer method works slightly better when configured with the sigmoid function, the simpler aer method is equivalent or slightly better on all metrics.

Figure 7 reports the performance of Wiser when ranking authors according to the DeepWalk embedding models, learned via the CBOW algorithm and by fixing the size of the vectors to . In these experiments we have also evaluated the impact of varying the number of top- entities selected per author. As the plots show, ranking experts with the DeepWalk embedding achieves better performance on different metrics and is more robust with respect to the  parameter. In the following experiments we have set . For the sake of completeness, we mention that we have also investigated the DeepWalk Skip-gram and Entity2Vec (Ni et al., 2016a) (both CBOW and Skip-gram) models, but for ease of explanation we do not report them, since their performance is lower than DeepWalk-CBOW.

Method MAP MRR P@5 P@10 NDCG@100
Model 2 (jm) (Balog et al., 2006) 0.253 0.302 0.108 0.081 0.394
Log-linear (Van Gysel et al., 2016b) 0.287 0.363 0.134 0.092 0.425
BM25 (rr) 0.363 0.437 0.157 0.099 0.495
rec-iaf (sqrt-mean) 0.311 0.372 0.134 0.086 0.413
aes (dw-cbow-30) 0.214 0.255 0.092 0.067 0.365
Table 10. Comparison between the best approaches reported in the literature (top) and Wiser’s variants (bottom). Statistical significance of BM25 (rr) is computed using a two-tailed paired t-test with respect to rec-iaf (sqrt-mean) and indicated with  when .

Final Discussion. Table 9 reports the best configuration found for each profile-centric method, as derived from the previous figures. Generally speaking, methods based on exact-match perform better than the ones based on related-match on the TU dataset, with rec-iaf achieving a peak of + on  with respect to the aes method. It is crucial to stress at this point the role of the profile of an author  in achieving these results. In fact, the best methods, namely rec-iaf for the exact-match and aes for the related-match, are precisely the ones that strongly deploy the weighted graph  to derive, via a Personalized PageRank computation, the relevance scores for the entities mentioned within ’s documents and the corresponding top- entities.

Wiser versus the State-of-the-Art. In this last paragraph we compare the best configurations of Wiser, based on document- and profile-centric methods, against the best known approaches in the literature, i.e. Log-linear (Van Gysel et al., 2016b) and Model 2 (jm) (Balog et al., 2006).

Table 10 shows that both the BM25 and rec-iaf methods outperform Log-linear and Model 2 (jm) over different metrics. Specifically, rec-iaf achieves competitive performance with an improvement of + on  and + on  with respect to Log-linear, whereas BM25 improves all known methods over all metrics: + on , + on , + on , + on  and + on , thus emerging as the clear winner and showing that, on the TU dataset, the document-centric strategy of Wiser is better than its profile-centric strategy.

Figure 8. Performance of Wiser configured to combine the document-centric strategy (i.e. BM25 (rr)) and a profile-centric strategy (i.e. rec-iaf (sqrt-mean)) by means of several data-fusion techniques.
Figure 9. Performance of Wiser configured to combine the document-centric strategy (i.e. BM25 (rr)), the best exact profile-centric strategy (i.e. rec-iaf (sqrt-mean)) and the best related profile-centric strategy (i.e. aes (dw-cbow-30)) by means of several data-fusion techniques.
Method MAP MRR P@5 P@10 NDCG@100
Model 2 (jm) (Balog et al., 2006) 0.253 0.302 0.108 0.081 0.394
Log-linear (Van Gysel et al., 2016b) 0.287 0.363 0.134 0.092 0.425
BM25 (rr) 0.363 0.437 0.157 0.099 0.495
rec-iaf (sqrt-mean) 0.311 0.372 0.134 0.086 0.413
aes (dw-cbow-30) 0.214 0.255 0.092 0.067 0.365
Ensemble (Van Gysel et al., 2016b) 0.331 0.402 0.156 0.105 0.477
rrm(BM25 (rr), rec-iaf (sqrt-mean)) 0.385 0.459 0.163 0.104 0.516
rrm(BM25 (rr), rec-iaf (sqrt-mean), aes (dw-cbow-30)) 0.381 0.449 0.163 0.105 0.513
Table 11. Comparison between single methods (top) and different ensemble techniques whose rankings are combined via the rrm data-fusion method. Statistical significance is computed using a one-tailed paired t-test with respect to BM25 (rr) (the best method of Table 10) and indicated with  for  and  for .

Given these numbers, we set up a final experiment aimed at evaluating the best performance achievable by combining these methods via data-fusion techniques. Specifically, we designed a version of Wiser that combines the best document-centric strategy, i.e. BM25 (rr), with the two best profile-centric strategies, i.e. rec-iaf and aes. Figures 8 and 9 report the performance of these combinations. The best performance is always reached when the methods at hand are combined with the rrm data-fusion method (purple bar).

Table 11 reports the performance achieved by the best known approaches and the new ones proposed in this paper. For the sake of comparison, we also report the Ensemble method developed by (Van Gysel et al., 2016b), which combines via reciprocal rank (i.e. rr) the Log-linear model with Model 2 (jm). It is evident from the table that

  • the BM25 (rr) method implemented by Wiser outperforms the Ensemble method of (Van Gysel et al., 2016b), which is currently the state-of-the-art, by ,  and  in MAP, MRR and NDCG@100, and

  • with a proper combination of this document-centric strategy with the two best profile-centric algorithms of Wiser, we are able to achieve a further improvement over Ensemble on ,  and  of ,  and , respectively.

Therefore, Wiser turns out to be the new state-of-the-art solution for the expert finding problem in the academia domain.

6. Conclusions

We presented Wiser, a novel search engine for expert finding in academia, whose novelty relies on the deployment of entity linking, entity relatedness and entity embeddings for the creation of the novel WEM profile for academic experts. This profile is based on a weighted and labeled graph which models the explicit knowledge of an author by means of the explicit (i.e. Wikipedia entities) and latent (i.e. embedding vectors) concepts occurring in her documents, together with their "semantic" relations.

In the experiments we have shown that ranking authors according to the "semantic" relation between the user query and their WEM profile achieves state-of-the-art performance, thus making Wiser the best publicly available software for academic-expert finding to date.

An implementation of Wiser running on the Faculty of the University of Pisa is accessible at . We have indexed a total of 1,430 authors and 83,509 paper abstracts, with a total of 30,984 distinct entities. Each author has published an average of 58 papers, and each WEM profile consists of an average of 21 unique entities. The GUI allows a user to search for a topic or for an author’s name: the former returns a list of candidate experts for the queried topic, the latter returns the list of topics characterizing the WEM profile, with an estimate of their relevance. The user can browse the topics, find the papers from which they have been extracted by Wiser, and thus get a glimpse of the expertise of the author and an understanding of how her research topics have evolved over the years. This tool is adopted internally to find colleagues for joint projects, and externally to allow companies and other research organizations to easily access the expertise offered by our Faculty.

As future work we foresee the exploration of: (i) other entity-relatedness measures, in order to better model the edge weights in the WEM graph; (ii) other centrality measures and clustering algorithms to estimate the relevance of an entity within an author’s profile and to discard non-pertinent entities; (iii) other scoring methods for the profile-centric approaches, which turned out to be less effective than we expected, possibly because of the noise present in the TU dataset; (iv) related to the previous point, building a new dataset for expert finding in academia, or cleaning TU by dropping some inconsistencies we discovered in the querying process; (v) extending the use of Wiser to other universities and possibly exploring its application to non-academic settings.


We thank Marco Cornolti for the preliminary processing of the dataset and for his invaluable suggestions in the deployment of Elasticsearch. Part of this work has been supported by the EU grant for the Research Infrastructure “SoBigData: Social Mining & Big Data Ecosystem” (INFRAIA-1-2014-2015, agreement #654024) and by MIUR project FFO 2016 (DM 6 July 2016, n. 552, art. 11).


  • Ackerman et al. (2002) Mark S. Ackerman, Volker Wulf, and Volkmar Pipek. 2002. Sharing Expertise: Beyond Knowledge Management. MIT Press, Cambridge, MA, USA.
  • Balog et al. (2006) Krisztian Balog, Leif Azzopardi, and Maarten de Rijke. 2006. Formal Models for Expert Finding in Enterprise Corpora. In Proceedings of the 29th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR ’06). ACM, New York, NY, USA, 43–50.
  • Balog et al. (2009) Krisztian Balog, Leif Azzopardi, and Maarten de Rijke. 2009. A Language Modeling Framework for Expert Finding. Inf. Process. Manage. 45, 1 (Jan. 2009), 1–19.
  • Balog and De Rijke (2008) Krisztian Balog and Maarten De Rijke. 2008. Combining candidate and document models for expert search. Technical Report. University of Amsterdam, The Netherlands.
  • Balog et al. (2012) Krisztian Balog, Yi Fang, Maarten de Rijke, Pavel Serdyukov, and Luo Si. 2012. Expertise Retrieval. Found. Trends Inf. Retr. 6 (feb 2012), 127–256.
  • Baumard (1999) Philippe Baumard. 1999. Tacit knowledge in organizations. Sage.
  • Berendsen et al. (2013) Richard Berendsen, Krisztian Balog, Toine Bogers, Antal van den Bosch, and Maarten de Rijke. 2013. On the Assessment of Expertise Profiles. In Proceedings of the 13th Dutch-Belgian Workshop on Information Retrieval, Delft, The Netherlands, April 26th, 2013. 24–27.
  • Blanco et al. (2015) Roi Blanco, Giuseppe Ottaviano, and Edgar Meij. 2015. Fast and space-efficient entity linking for queries. In Proceedings of the Eighth ACM International Conference on Web Search and Data Mining. ACM, 179–188.
  • Bollacker et al. (2008) Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: a collaboratively created graph database for structuring human knowledge. In Proceedings of the 2008 ACM SIGMOD international conference on Management of data. ACM, 1247–1250.
  • Bovi et al. (2015) Claudio Delli Bovi, Luca Telesca, and Roberto Navigli. 2015. Large-scale information extraction from textual definitions through deep syntactic and semantic analysis. Transactions of the Association for Computational Linguistics 3 (2015), 529–543.
  • Brin and Page (1998) S. Brin and L. Page. 1998. The Anatomy of a Large-Scale Hypertextual Web Search Engine. In Seventh International World-Wide Web Conference (WWW 1998).
  • Brittain (1975) J. Michael Brittain. 1975. Information Needs and Application of the Results of User Studies. Springer Netherlands, Dordrecht, 425–447.
  • Cao et al. (2005) Yunbo Cao, Jingjing Liu, Shenghua Bao, and Hang Li. 2005. Research on Expert Search at Enterprise Track of TREC 2005.. In TREC.
  • Chakrabarti (2002) Soumen Chakrabarti. 2002. Mining the Web: Discovering knowledge from hypertext data. Elsevier.
  • Cornolti et al. (2016) Marco Cornolti, Paolo Ferragina, Massimiliano Ciaramita, Stefan Rüd, and Hinrich Schütze. 2016. A Piggyback System for Joint Entity Mention Detection and Linking in Web Queries. In Proceedings of the 25th International Conference on World Wide Web (WWW ’16). International World Wide Web Conferences Steering Committee, Republic and Canton of Geneva, Switzerland, 567–578.
  • Cucerzan (2007) Silviu Cucerzan. 2007. Large-Scale Named Entity Disambiguation Based on Wikipedia Data. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL 2007). 708–716.
  • Dietz et al. (2017) Laura Dietz, Chenyan Xiong, and Edgar Meij. 2017. Overview of The First Workshop on Knowledge Graphs and Semantics for Text Retrieval and Analysis (KG4IR). SIGIR Forum 51, 3 (2017), 139–144.
  • Fang et al. (2010) Yi Fang, Luo Si, and Aditya P Mathur. 2010. Discriminative models of integrating document evidence and document-candidate associations for expert search. In Proceedings of the 33rd international ACM SIGIR conference on Research and development in information retrieval. ACM, 683–690.
  • Ferragina and Scaiella (2012) Paolo Ferragina and Ugo Scaiella. 2012. Fast and accurate annotation of short texts with wikipedia pages. IEEE software 29, 1 (2012), 70–75.
  • Gabrilovich and Markovitch (2007) Evgeniy Gabrilovich and Shaul Markovitch. 2007. Computing semantic relatedness using Wikipedia-based explicit semantic analysis. In IJCAI, Vol. 7. 1606–1611.
  • Haveliwala (2002) Taher H Haveliwala. 2002. Topic-sensitive pagerank. In Proceedings of the 11th international conference on World Wide Web. ACM, 517–526.
  • Heath et al. (2006) Thomas Heath, Enrico Motta, and Marian Petre. 2006. Person to person trust factors in word of mouth recommendation. In Conference on Human Factors in Computing Systems (CHI’06).
  • Järvelin and Kekäläinen (2002) Kalervo Järvelin and Jaana Kekäläinen. 2002. Cumulated Gain-based Evaluation of IR Techniques. ACM Trans. Inf. Syst. 20, 4 (Oct. 2002), 422–446.
  • Kulkarni et al. (2009) Sayali Kulkarni, Amit Singh, Ganesh Ramakrishnan, and Soumen Chakrabarti. 2009. Collective annotation of Wikipedia entities in web text. In Proceedings of the 15th ACM SIGKDD international conference on Knowledge discovery and data mining (KDD ’09). ACM, New York, NY, USA, 457–466.
  • Li et al. (2014) Hang Li, Jun Xu, et al. 2014. Semantic matching in search. Foundations and Trends® in Information Retrieval 7, 5 (2014), 343–469.
  • Macdonald and Ounis (2006) Craig Macdonald and Iadh Ounis. 2006. Voting for Candidates: Adapting Data Fusion Techniques for an Expert Search Task. In Proceedings of the 15th ACM International Conference on Information and Knowledge Management (CIKM ’06). ACM, New York, NY, USA, 387–396.
  • Macdonald and Ounis (2011) Craig Macdonald and Iadh Ounis. 2011. Learning models for ranking aggregates. In European Conference on Information Retrieval. Springer, 517–529.
  • Manning et al. (2008) Christopher D. Manning, Prabhakar Raghavan, and Hinrich Schütze. 2008. Introduction to Information Retrieval. Cambridge University Press, New York, NY, USA.
  • McInnes and Healy (2017) Leland McInnes and John Healy. 2017. Accelerated Hierarchical Density Based Clustering. In Data Mining Workshops (ICDMW), 2017 IEEE International Conference on. IEEE, 33–42.
  • Mikolov et al. (2013) Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S. Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems. 3111–3119.
  • Milne and Witten (2008) David Milne and Ian H. Witten. 2008. An effective, low-cost measure of semantic relatedness obtained from Wikipedia links. In AAAI 2008.
  • Moreira et al. (2015) Catarina Moreira, Pável Calado, and Bruno Martins. 2015. Learning to rank academic experts in the DBLP dataset. Expert Systems 32, 4 (2015), 477–493.
  • Navigli and Ponzetto (2012) Roberto Navigli and Simone Paolo Ponzetto. 2012. BabelNet: The automatic construction, evaluation and application of a wide-coverage multilingual semantic network. Artificial Intelligence 193 (2012), 217–250.
  • Neshati et al. (2014) Mahmood Neshati, Djoerd Hiemstra, Ehsaneddin Asgari, and Hamid Beigy. 2014. Integration of Scientific and Social Networks. World Wide Web 17, 5 (Sept. 2014), 1051–1079.
  • Nguyen et al. (2017) Dat Ba Nguyen, Abdalghani Abujabal, Nam Khanh Tran, Martin Theobald, and Gerhard Weikum. 2017. Query-driven on-the-fly knowledge base construction. Proceedings of the VLDB Endowment 11, 1 (2017), 66–79.
  • Ni et al. (2016a) Yuan Ni, Qiong Kai Xu, Feng Cao, Yosi Mass, Dafna Sheinwald, Hui Jia Zhu, and Shao Sheng Cao. 2016a. Semantic documents relatedness using concept graph representation. In Proceedings of the Ninth ACM International Conference on Web Search and Data Mining. ACM, 635–644.
  • Perozzi et al. (2014) Bryan Perozzi, Rami Al-Rfou, and Steven Skiena. 2014. DeepWalk: online learning of social representations. In The 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’14, New York, NY, USA - August 24 - 27, 2014. 701–710.
  • Ponza et al. (2017a) Marco Ponza, Paolo Ferragina, and Soumen Chakrabarti. 2017a. A Two-Stage Framework for Computing Entity Relatedness in Wikipedia. In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management. ACM, 1867–1876.
  • Ponza et al. (2017b) Marco Ponza, Paolo Ferragina, and Francesco Piccinno. 2017b. Document Aboutness via Sophisticated Syntactic and Semantic Features. In Natural Language Processing and Information Systems - 22nd International Conference on Applications of Natural Language to Information Systems, NLDB 2017, Liège, Belgium, June 21-23, 2017, Proceedings. 441–453.
  • Roberts (2016) Ros Roberts. 2016. Understanding the validity of data: a knowledge-based network underlying research expertise in scientific disciplines. Higher Education 72, 5 (01 Nov 2016), 651–668.
  • Robertson et al. (2009) Stephen Robertson, Hugo Zaragoza, et al. 2009. The probabilistic relevance framework: BM25 and beyond. Foundations and Trends® in Information Retrieval 3, 4 (2009), 333–389.
  • Scaiella et al. (2012a) Ugo Scaiella, Paolo Ferragina, Andrea Marino, and Massimiliano Ciaramita. 2012a. Topical clustering of search results. In Proceedings of the fifth ACM international conference on Web search and data mining. ACM, 223–232.
  • Shaw and Fox (1994) Joseph A. Shaw and Edward A. Fox. 1994. Combination of Multiple Searches. In The Second Text REtrieval Conference (TREC-2). 243–252.
  • Sorg and Cimiano (2011) Philipp Sorg and Philipp Cimiano. 2011. Finding the right expert: Discriminative models for expert retrieval. In Proceedings of the International Conference on Knowledge Discovery and Information Retrieval (KDIR 2011).
  • Suchanek et al. (2007) Fabian M. Suchanek, Gjergji Kasneci, and Gerhard Weikum. 2007. Yago: a core of semantic knowledge. In Proceedings of the 16th International Conference on World Wide Web. ACM, 697–706.
  • Tang et al. (2008) Jie Tang, Jing Zhang, Limin Yao, Juanzi Li, Li Zhang, and Zhong Su. 2008. ArnetMiner: Extraction and Mining of Academic Social Networks. In Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD ’08). ACM, New York, NY, USA, 990–998.
  • Van Gysel et al. (2016a) Christophe Van Gysel, Maarten de Rijke, and Evangelos Kanoulas. 2016a. Learning Latent Vector Spaces for Product Search. In Proceedings of the 25th ACM International Conference on Information and Knowledge Management (CIKM ’16). ACM, 165–174.
  • Van Gysel et al. (2017) Christophe Van Gysel, Maarten de Rijke, and Evangelos Kanoulas. 2017. Semantic Entity Retrieval Toolkit. In SIGIR 2017 Workshop on Neural Information Retrieval (Neu-IR’17).
  • Van Gysel et al. (2016b) Christophe Van Gysel, Maarten de Rijke, and Marcel Worring. 2016b. Unsupervised, Efficient and Semantic Expertise Retrieval. In Proceedings of the 25th International Conference on World Wide Web (WWW ’16). International World Wide Web Conferences Steering Committee, Republic and Canton of Geneva, Switzerland, 1069–1079.
  • Weikum et al. (2016) Gerhard Weikum, Johannes Hoffart, and Fabian M. Suchanek. 2016. Ten Years of Knowledge Harvesting: Lessons and Challenges. IEEE Data Eng. Bull. 39, 3 (2016), 41–50.
  • Yimam and Kobsa (2000) Dawit Yimam and Alfred Kobsa. 2000. DEMOIR: A Hybrid Architecture for Expertise Modeling and Recommender Systems. In 9th IEEE International Workshops on Enabling Technologies: Infrastructure for Collaborative Enterprises (WETICE 2000), 4-16 June 2000, Gaithersburg, MD, USA. 67–74.
  • Zhai and Lafferty (2017) Chengxiang Zhai and John Lafferty. 2017. A study of smoothing methods for language models applied to ad hoc information retrieval. In ACM SIGIR Forum, Vol. 51. ACM, 268–276.