Entity Set Search of Scientific Literature: An Unsupervised Ranking Approach

Literature search is critical for any scientific research. Unlike Web or general-domain search, a large portion of queries in scientific literature search are entity-set queries, that is, queries consisting of multiple entities of possibly different types. Entity-set queries reflect users' need for finding documents that contain multiple entities and reveal inter-entity relationships, and thus pose non-trivial challenges to existing search algorithms that model each entity separately. Moreover, entity-set queries are usually sparse (i.e., not so repetitive), which renders ineffective many supervised ranking models that rely heavily on associated click history. To address these challenges, we introduce SetRank, an unsupervised ranking framework that models inter-entity relationships and captures entity type information. Furthermore, we develop a novel unsupervised model selection algorithm, based on the technique of weighted rank aggregation, to automatically choose the parameter settings in SetRank without resorting to a labeled validation set. We evaluate our proposed unsupervised approach using datasets from the TREC Genomics Tracks and Semantic Scholar's query log. The experiments demonstrate that SetRank significantly outperforms the baseline unsupervised models, especially on entity-set queries, and that our model selection algorithm effectively chooses suitable parameter settings.


1. Introduction

Literature search helps a researcher identify relevant papers and summarize essential claims about a topic, forming a critical step in any scientific research. With the fast-growing volume of scientific publications, a good literature search engine is essential to researchers, especially in domains such as computer science and biomedical science, where the literature collections are so massive, diverse, and rapidly evolving that few people can master the state of the art comprehensively and in depth.

A large set of literature search queries contain multiple entities, which can be either concrete instances (e.g., GABP (a gene)) or abstract concepts (e.g., clustering). We refer to these queries as entity-set queries. For example, a computer scientist may want to find out how knowledge bases can be used for document retrieval and thus issues the query “knowledge base for document retrieval”, an entity-set query containing two entities. Similarly, a biologist may want to survey how the genes GABP, TERT, and CD11b are associated with cancer and submits the query “GABP TERT CD11b cancer”, another entity-set query with one disease and three gene entities.

Compared with typical short keyword queries, a distinctive characteristic of entity-set queries is that they reflect a user's need for finding documents containing inter-entity relations. For example, among 50 queries collected from biologists in 2005 as part of the TREC Genomics Track (Hersh et al., 2005), 40 are explicitly formulated as finding relations among at least two entities. In most cases, a user who submits an entity-set query expects a ranked list of documents that are most relevant to the whole entity set. Therefore, as in the previous examples, returning a paper about only knowledge bases or only the single gene GABP is unsatisfactory.

Metrics ESQs non-ESQs Overall
NDCG@5 0.3622 0.6291 0.5223
NDCG@10 0.3653 0.6286 0.5233
NDCG@15 0.3840 0.6221 0.5269
NDCG@20 0.4011 0.6247 0.5353
Table 1. Ranking performance on 100 benchmark queries of the S2 production system. Entity-set queries (ESQs) perform much worse than non-ESQs.

Entity-set queries pose non-trivial challenges to existing search platforms. For example, among the 100 queries (http://data.allenai.org/esr/Queries/) released by Semantic Scholar (S2), 40 are entity-set queries, and S2's production ranking system performs poorly on them, as shown in Table 1. The difficulties of handling entity-set queries mainly come from two aspects. First, entity relations within entity sets have not been modeled effectively. The association or co-occurrence of multiple entities has not gained adequate attention from existing ranking models. As a result, those models will rank papers in which a single distinct entity appears multiple times higher than papers containing many distinct entities. Second, entity-set queries are particularly challenging for supervised ranking models. As manual labeling of document relevance in academic search requires domain expertise, it is too expensive to train a ranking model based purely on manual labels. Most systems will first apply an off-the-shelf unsupervised ranking model during their cold-start process and then collect user interaction data (e.g., click information). Unfortunately, entity-set queries are usually sparse (i.e., not so repetitive) and have less associated click information. Furthermore, many off-the-shelf unsupervised models cannot return reasonably good candidate documents for entity-set queries within the top-20 positions. Many highly relevant documents will thus never be presented to users, which further compromises the usefulness of click information.

This paper tackles a new challenge, improving the search quality of scientific literature on entity-set queries, and proposes an unsupervised ranking approach. We introduce SetRank, an unsupervised ranking framework that explicitly models inter-entity relations and captures entity type information. SetRank first links entity mentions in queries and documents to an external knowledge base. Then, each document is represented with both bag-of-words and bag-of-entities representations (Xiong et al., 2016, 2017a), and two language models are fit, one for each representation. On the query side, a novel heterogeneous graph representation is proposed to model complex entity information (e.g., entity type) and entity relations within the set. This heterogeneous query graph represents all the information need in that query. Finally, query-document matching is defined as a graph covering process and each document is ranked based on the information need it covers in the query graph.

Although SetRank is an unsupervised ranking framework, it still has some parameters that need to be appropriately learned using a labeled validation set. To further automate the process of ranking model development, we develop a novel unsupervised model selection algorithm based on the technique of weighted rank aggregation. Given a set of queries with no labeled documents and a set of candidate parameter settings, this algorithm automatically learns the most suitable parameter settings for that set of queries.

The significance of our proposed unsupervised ranking approach is two-fold. First, SetRank itself, as an unsupervised ranking model, boosts the literature search performance on entity-set queries. Second, SetRank can be adopted during the cold-start process of a search system, which enables the collection of high-quality click data for training a subsequent supervised ranking model. Our experiments on S2's benchmark dataset and the TREC 2004 & 2005 Genomics Tracks (Hersh et al., 2004, 2005) demonstrate the usefulness of our unsupervised model selection algorithm and the effectiveness of SetRank for searching scientific literature, especially on entity-set queries.

In summary, this work makes the following contributions:

  1. A new research problem, effective entity-set search of scientific literature, is studied.

  2. SetRank, an unsupervised ranking framework, is proposed, which models inter-entity relations and captures entity type information.

  3. A novel unsupervised model selection algorithm is developed, which automatically selects SetRank's parameter settings without resorting to a labeled validation set.

  4. Extensive experiments are conducted in two scientific domains, demonstrating the effectiveness of SetRank and our unsupervised model selection algorithm.

The remainder of the paper is organized as follows. Section 2 discusses related work. Section 3 presents our ranking framework SetRank. Section 4 presents the unsupervised model selection algorithm. Section 5 reports and analyzes the experimental results on two benchmark datasets and shows a case study of SetRank for biomedical literature search. Finally, Section 6 concludes this work with a discussion of future directions.

2. Related Work

We examine related work in three aspects: academic search, entity-aware ranking model, and automatic ranking model selection.

2.1. Academic Search

The practical importance of finding highly relevant papers in the scientific literature has motivated the development of many academic search systems. Google Scholar is arguably the most widely used system due to its large coverage. However, the ranking results of Google Scholar are still far from satisfactory because of its bias toward highly cited papers (Beel and Gipp, 2009). As a result, researchers may choose other academic search platforms, such as CiteSeerX (Wu et al., 2014), AMiner (Tang et al., 2008), PubMed (Lu, 2011), Microsoft Academic Search (Sinha et al., 2015), and Semantic Scholar (Xiong et al., 2017b). Research efforts of many such systems focus on analytical tasks over scholarly data, such as author name disambiguation (Tang et al., 2008), paper importance modeling (Shen et al., 2016), and entity-based distinctive summarization (Ren et al., 2017). In contrast, this work focuses on ad-hoc document retrieval and ranking in academic search. The most relevant work to ours is (Xiong et al., 2017b), in which entity embeddings are used to obtain a "soft match" feature for each query-document pair. However, (Xiong et al., 2017b) requires training data to combine word-based and entity-based relevance scores and to select parameter settings, which is rather different from our unsupervised approach.

2.2. Entity-aware Ranking Model

Entities, such as people, locations, or abstract concepts, are natural units for organizing and retrieving information (Garigliotti and Balog, 2017). Previous studies found that over 70% of Bing's queries and more than 50% of the traffic in Semantic Scholar are related to entities (Guo et al., 2009; Xiong et al., 2017b). The recent availability of large-scale knowledge repositories and accurate entity linking tools has further motivated a growing body of work on entity-aware ranking models. These models can be roughly categorized into three classes: expansion-based, projection-based, and representation-based.

The expansion-based methods use entity descriptions from knowledge repositories to enhance the query representation. Xu et al. (Xu et al., 2009) use entity descriptions in Wikipedia as a pseudo relevance feedback corpus to obtain cleaner expansion terms; Xiong and Callan (Xiong and Callan, 2015b) utilize the descriptions of Freebase entities related to the query for query expansion; Dalton et al. (Dalton et al., 2014) expand a query using the text fields of the attributes of the query-related entities and generate richer learning-to-rank features based on the expanded texts.

The projection-based methods try to project both the query and the documents onto an entity space for comparison. Liu and Fang (Liu and Fang, 2015) use entities from a query and its related documents to construct a latent entity space and then connect the query and documents based on the descriptions of the latent entities. Xiong and Callan (Xiong and Callan, 2015a) use the textual features among query, entities, and documents to model the query-entity and entity-document connections. These additional connections between query and document are then utilized in a learning-to-rank model. A fundamental difference of our work from the above methods is that we do not represent a query or a document using external terms/entities that they do not contain. This avoids adding noisy expansion terms/entities that may not reflect the information need in the original user query.

The representation-based methods, a recent trend for utilizing entity information, aim to build entity-enhanced text representations and combine them with traditional word-based representations (Xiong et al., 2017a). Xiong et al. (Xiong et al., 2016) propose a bag-of-entities representation and demonstrate its effectiveness for vector space models. Raviv et al. (Raviv et al., 2016) leverage the surface names of entities to build an entity-based language model. Many supervised ranking models apply learning-to-rank methods to combine entity-based signals with word-based signals. For example, ESR (Xiong et al., 2017b) uses entity embeddings to compute an entity-based query-document matching score and then combines it with the word-based score using RankSVM. Following the same spirit, Xiong et al. (Xiong et al., 2017a) propose a word-entity duet framework that simultaneously models the entity annotation uncertainty and trains the ranking model. Compared with the above methods, we also use the bag-of-entities representation but combine it with the word-based representation in an unsupervised way. Also, to the best of our knowledge, we are the first to capture entity relation and type information in an unsupervised entity-aware ranking model.

2.3. Automatic Ranking Model Selection

Most ranking models require manually setting many parameter values. To automate the process of selecting parameter settings, some AutoML methods (Feurer et al., 2015; Brazdil and Giraud-Carrier, 2017) have been proposed. Nevertheless, these methods still require a validation set that contains queries with labeled documents. In this paper, we develop an unsupervised model selection algorithm, based on rank aggregation, to automatically choose parameter settings without resorting to a labeled validation set. Rank aggregation aims to combine multiple existing rankings into a joint ranking. Fox and Shaw (Fox and Shaw, 1993) propose deterministic functions to combine rankings heuristically. Klementiev et al. (Klementiev et al., 2007, 2008) propose an unsupervised learning algorithm for rank aggregation based on a linear combination of ranking functions. Another related line of work models rankings using a statistical model (e.g., the Plackett-Luce model) and aggregates them based on statistical inference (Guiver and Snelson, 2009; Maystre and Grossglauser, 2015; Zhao et al., 2016). Lately, Bhowmik and Ghosh (Bhowmik and Ghosh, 2017) propose to use object attributes to augment standard rank aggregation frameworks. Compared with these methods, our proposed algorithm goes beyond combining multiple rankings and uses the aggregated ranking to guide the selection of parameter settings.

3. Ranking Framework

Figure 1. An illustrative example showing one document comprised of two fields (i.e., title, abstract) with their corresponding bag-of-words and bag-of-entities representations.

This section presents our unsupervised ranking framework for leveraging entity (set) information in search. Our framework provides a principled way to rank a set of candidate documents for a given query. In this framework, we represent each document using standard bag-of-words and bag-of-entities representations (Xiong et al., 2016, 2017a) (Section 3.1) and represent the query using a novel heterogeneous graph (Section 3.2) that naturally models the entity set information. Finally, we model the query-document matching as a "graph covering" process, as described in Section 3.3.

3.1. Document Representation

We represent each document using both word and entity information. For words, we use the standard bag-of-words representation and treat each unigram as a word. For entities, we adopt an entity linking tool (details described in Section 5.2) that utilizes a knowledge base/graph (e.g., Wikidata or Freebase) where entities have unique IDs. Given an input text, this tool will find the entity mentions (i.e., entity surface names) in the text and link each of them to a disambiguated entity in the knowledge base/graph. For example, given the input document title "Training linear SVMs in linear time", this tool will link the entity mention "SVMs" to the entity "Support Vector Machine" with Freebase id '/m/0hc2f'. Previous studies (Xiong et al., 2016; Raviv et al., 2016) show that when the entity linking error is within a reasonable range, the returned entity annotations, though noisy, can still improve the overall search performance, partially due to the following:

  1. Polysemy resolution. Different entities with the same surface name will be resolved by the entity linker. For example, the fruit "Apple" (with id '/m/014j1m') will be disambiguated from the company "Apple" (with id '/m/0k8z').

  2. Synonymy resolution. Different entity surface names corresponding to the same entity will be identified and merged. For example, the entity "United States of America" (with id '/m/09c7w0') can have different surface names including "USA", "United States", and "U.S." (Raviv et al., 2016). The entity linker can map all these surface names to the same entity.

After linking all the entity mentions in a document to entities in the knowledge base, we can obtain the bag-of-entities representation of this document. Then, we fit two language models (LMs) for this document: one being word-based (i.e., traditional unigram LM) and the other being entity-based. Notice that in the literature search scenario, documents (i.e., papers) usually contain multiple fields, such as title, abstract, and full text. We model each document field using a separate bag-of-words representation and a separate bag-of-entities representation, as shown in Figure 1.

To exploit such intra-document structures, we generally assume a document d has multiple fields and thus the document collection D can be separated into corresponding parts, with d_F and D_F denoting the content of field F in the document and in the collection, respectively. Following (Ogilvie and Callan, 2003), we assign each field F a weight w_F and formulate the generation process of a token t given the document d as follows:

p(t \mid d) \;=\; \sum_{F} w_F \; p(t \mid d_F) \qquad (1)

Note that a token t can be either a unigram w or an entity e, and the field weight w_F can be either manually set based on prior knowledge or automatically learned using the mechanism described in Section 4. The token generation probability under each document field, p(t \mid d_F), can be obtained from the maximum likelihood estimate with Dirichlet prior smoothing (Zhai and Lafferty, 2001) as follows:

p(t \mid d_F) \;=\; \frac{c(t, d_F) + \mu_F \, p(t \mid D_F)}{|d_F| + \mu_F} \qquad (2)

where c(t, d_F) and |d_F| represent the number of occurrences of token t in d_F and the length of d_F, respectively. Similarly, we can define c(t, D_F) and |D_F| over the collection part D_F, which give the collection language model p(t \mid D_F). Finally, \mu_F is a scale parameter of the Dirichlet distribution for field F. A concrete example is shown in Figure 1.
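To make Equations (1)-(2) concrete, the following is a minimal sketch of the field-weighted, Dirichlet-smoothed token probability in Python. The field names, toy counts, and parameter values (field weights, Dirichlet scales) are illustrative assumptions rather than values from the paper, and the field weights are normalized inside the function so that Equation (1) remains a proper mixture.

```python
from collections import Counter

def field_prob(token, field_counts, field_len, coll_counts, coll_len, mu):
    """Dirichlet-smoothed token probability under one document field (cf. Eq. 2)."""
    p_coll = coll_counts.get(token, 0) / max(coll_len, 1)  # collection LM for this field
    return (field_counts.get(token, 0) + mu * p_coll) / (field_len + mu)

def doc_prob(token, doc_fields, collection_fields, field_weights, mus):
    """Field-weighted token generation probability (cf. Eq. 1)."""
    total_w = sum(field_weights.values())
    p = 0.0
    for name, counts in doc_fields.items():
        coll_counts, coll_len = collection_fields[name]
        p += (field_weights[name] / total_w) * field_prob(
            token, counts, sum(counts.values()), coll_counts, coll_len, mus[name])
    return p

# Toy usage with two fields (title, abstract) and word tokens only; the same code
# applies unchanged to the bag-of-entities representation.
doc = {"title": Counter("training linear svms in linear time".split()),
       "abstract": Counter("we present a linear time training algorithm".split())}
collection = {"title": (Counter({"linear": 40, "svms": 5, "time": 30}), 10_000),
              "abstract": (Counter({"linear": 300, "training": 120, "svm": 80}), 200_000)}
weights = {"title": 20.0, "abstract": 5.0}   # title weighted higher (assumed values)
mus = {"title": 1000.0, "abstract": 1000.0}  # Dirichlet scale per field (assumed values)
print(doc_prob("linear", doc, collection, weights, mus))
```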

3.2. Query Representation

Given an input query, we first apply the same entity linker used for document representation to extract all the entity information in the query. Then, we design a novel heterogeneous graph to represent the query. Such a graph representation captures both the word and entity information in the query and models the entity relations. A concrete example is shown in Figure 2.

Node representation. In this heterogeneous query graph, each node represents a query token. As a token can be either a word or an entity, there are two different types of nodes in this graph.

Edge representation. We use an edge to represent a latent relation between two query tokens. In this work, we consider two types of latent relations: word-word relations and entity-entity relations. For a word-word relation, we add an edge for each pair of adjacent word tokens with equal weight 1. For instance, given the query "Atari video games", we will add two edges, one between the word pair (Atari, video) and the other between (video, games). On the entity side, we aim to emphasize all possible entity-entity relations, and thus add an edge between each pair of entity tokens.

Modeling entity type. The type information of each query entity can further reveal the user’s information need. Therefore, we assign the weight of each entity-entity relation based on these two entities’ type information. Intuitively, if the types of two entities are distant from each other in a type hierarchy, then the relation between these two entities should have a larger weight. A similar idea is exploited in (Garigliotti and Balog, 2017) and found useful for type-aware entity retrieval.

Mathematically, for each query entity we look up its type in a given type hierarchy (a tree), and we measure the distance between two types by the length of the path connecting them through their Lowest Common Ancestor (LCA). In Figure 2, for example, the entity tokens '/m/0hjlw' and '/m/0xwj', corresponding to "reinforcement learning" and "Atari", have types 'education.field_of_study' and 'computer.game', respectively. The Lowest Common Ancestor of these two types in the type hierarchy is 'Thing'. Finally, we define the relation strength between two entities based on this type distance as follows:

(3)
(4)

Our proposed heterogeneous query graph representation is general and can be extended. For example, we can apply dependency parsing to verbose queries and only add an edge between two word tokens that have a direct dependency relation. Also, if the importance of each entity-entity relation is given, we can set the edge weights accordingly. We leave these extensions for future work.

Figure 2. An illustrative example showing the heterogeneous graph representation of one query. Word-word relations are marked with dashed lines and entity-entity relations with solid lines. Different solid line colors represent different relation strengths based on the two entities' types.
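A minimal sketch of building the query graph described above: word nodes connected to adjacent words with weight 1, and entity nodes fully connected with weights that grow with the type distance through the LCA. Since Equations (3)-(4) are not reproduced in this text, the weight `1 + type_distance` is only a stand-in with the same monotonic behavior; the toy type hierarchy and the example query are illustrative assumptions.

```python
import itertools

# Hypothetical toy type hierarchy (child -> parent), rooted at "Thing".
TYPE_PARENT = {"education.field_of_study": "Thing", "computer.game": "Thing"}

def path_to_root(t):
    path = [t]
    while path[-1] in TYPE_PARENT:
        path.append(TYPE_PARENT[path[-1]])
    return path

def type_distance(t1, t2):
    """Length of the path between two types through their Lowest Common Ancestor."""
    p1, p2 = path_to_root(t1), path_to_root(t2)
    lca = next(a for a in p1 if a in set(p2))
    return p1.index(lca) + p2.index(lca)

def build_query_graph(words, entities):
    """Nodes: word and entity tokens.  Edges: adjacent word pairs (weight 1) and all
    entity pairs (weight increasing with type distance, standing in for Eqs. 3-4)."""
    nodes = [("W", w) for w in words] + [("E", e) for e, _ in entities]
    edges = {}
    for w1, w2 in zip(words, words[1:]):                             # word-word relations
        edges[(("W", w1), ("W", w2))] = 1.0
    for (e1, t1), (e2, t2) in itertools.combinations(entities, 2):   # entity-entity relations
        edges[(("E", e1), ("E", e2))] = 1.0 + type_distance(t1, t2)
    return nodes, edges

# A query in the spirit of Figure 2, with the two linked entities mentioned there.
words = "reinforcement learning for atari games".split()
entities = [("/m/0hjlw", "education.field_of_study"), ("/m/0xwj", "computer.game")]
print(build_query_graph(words, entities))
```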

3.3. Document Ranking using Query Graph

Our proposed heterogeneous query graph represents all the information need in the user-issued query. Such need can be either to find documents discussing one particular entity or to identify papers studying an important inter-entity relation. Intuitively, a document that satisfies more of this information need should be ranked at a higher position. To quantify the information need that is explained by a document, we define the following graph covering process.

Query graph covering. If a query token exists in a document, we say the document covers the node in the query graph that corresponds to this token. Similarly, if a pair of query tokens both exist in the document, we say the document covers the edge in the query graph that corresponds to the relation of this token pair. The subgraph of the query graph that is covered by the document represents the information need in the query that is explained by the document.

Furthermore, we follow the same spirit of (Metzler and Croft, 2005) and view the subgraph as a Markov Network, based on which we define the joint probability of the document and the query as follows:

P(q, d) \;=\; \frac{1}{Z} \prod_{c \,\in\, \mathcal{C}(G_q^d)} \phi(c) \qquad (5)

where Z is a normalization factor, c indexes the cliques \mathcal{C}(G_q^d) of the covered subgraph G_q^d, and \phi(c) is the non-negative potential defined on clique c. Notice that if the subgraph covered by one document is a subgraph of that covered by another document, meaning the first document covers less information than the second does, the joint probability of the first document should not exceed that of the second. Therefore, we design the potential function to satisfy the constraint \phi(c) \geq 1.

In this work, we focus on modeling each single entity and pairwise relations between two entities. Therefore, each clique can be either a node or an edge in the graph. Modeling higher-order relations among more than two entities (i.e., cliques with size larger than 2) is left for future work. We define the potential functions for a single node and an edge as follows:

Node potential. The node potential quantifies the information need contained in a single node, which can be either a word token or an entity token. To balance the relative weights of word tokens and entity tokens, we introduce an entity token weight parameter and define the node potential function as follows:

(6)

where the activation function transforms a raw token generation probability into a node potential. Here, the activation function is chosen to amplify the generation probability, which typically has a relatively small value.

Edge potential. The edge potential quantifies the information need contained in an edge, which can be either a word-word (W-W) relation or an entity-entity (E-E) relation. In our query graph representation, all word-word relations have an equal weight of 1, and the weight of each entity-entity relation is defined by Equation (3). Finally, we calculate the edge potential as follows:

(7)
(8)

where the edge weight measures the edge importance and the activation function is the same as defined above. To simplify the calculation, we assume that the two tokens of an edge are conditionally independent given a document, so that their joint generation probability can be replaced by the product of their individual generation probabilities and substituted into Equation (7).

Putting it all together. After defining the node and edge potentials, we can calculate the joint probability of each document and query using Equation (5) as follows:

(9)

As shown in the above equation, SetRank explicitly rewards papers that capture inter-entity relations and cover more unique entities. Also, it uses the entity token weight to balance word-based relevance with entity-based relevance, and it models entity type information in the entity-entity edge weights.
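The following sketch puts Sections 3.1-3.3 together as a log-space graph-covering score: covered nodes and edges of the query graph each contribute a non-negative term. The log(1 + scale * p) activation, the way the entity token weight mixes word-based and entity-based probabilities, and the form of the edge term are assumptions standing in for Equations (6)-(9), which are not reproduced in this text; the conditional-independence step follows the assumption stated above.

```python
import math

def activation(p, scale=1e4):
    """Assumed activation: maps a small probability to a non-negative log-potential
    (i.e., a potential >= 1), so covering more of the query graph never lowers the score."""
    return math.log(1.0 + scale * p)

def setrank_score(query_nodes, query_edges, doc_words, doc_entities,
                  p_word, p_entity, entity_weight=0.5):
    """Graph-covering score of one document for one query graph (cf. Eqs. 5-9).
    query_nodes/query_edges come from a builder like build_query_graph above;
    doc_words/doc_entities are the document's token sets (coverage test);
    p_word/p_entity return smoothed generation probabilities (cf. Eqs. 1-2)."""
    def covered(node):
        kind, tok = node
        return tok in (doc_entities if kind == "E" else doc_words)

    def prob(node):  # entity token weight balancing word vs. entity evidence (assumed form)
        kind, tok = node
        return entity_weight * p_entity(tok) if kind == "E" else (1 - entity_weight) * p_word(tok)

    score = 0.0
    for node in query_nodes:                       # node potentials (cf. Eq. 6)
        if covered(node):
            score += activation(prob(node))
    for (u, v), weight in query_edges.items():     # edge potentials (cf. Eqs. 7-8)
        if covered(u) and covered(v):              # both endpoints present => edge covered
            score += weight * activation(prob(u) * prob(v))   # conditional independence
    return score
```

Documents would then be ranked by this score in descending order.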

4. Unsupervised Model Selection

Although SetRank is an unsupervised ranking framework, it still has some parameters that need to be appropriately set by ranking model designers, including the weights of the title and abstract fields and the relative importance of entity tokens. A previous study (Zhai and Lafferty, 2001) shows that such model parameters have a significant influence on ranking performance, and thus we need to choose them carefully. Typically, these parameters are chosen to optimize the performance over a validation set that is manually constructed and contains the relevance label of each query-document pair. Though useful, such a validation set is not always available, especially for applications (e.g., literature search) where labeling documents requires domain expertise.

To address the above problem, we propose an unsupervised model selection algorithm which automatically chooses the parameter settings without resorting to a manually labeled validation set. The key philosophy is that although people who design the ranking model (i.e., ranking model designers) do not know the exact “optimal” parameter settings, they do have prior knowledge about the reasonable range for each of them. For example, the title field weight should be set larger than the abstract field weight, and the entity token weight should be set small if the returned entity linking results are noisy. Our model selection algorithm leverages such prior knowledge by letting the ranking model designer input the search range of each parameter’s value. It will then return the best value for each parameter within its corresponding search range. We first describe our notations and formulate our problem in Section 4.1. Then, we present our model selection algorithm in Section 4.2.

4.1. Notations and Problem Formulation

Notations. We consider rankings over a pool of candidate documents. A complete ranking assigns a position to every document in the pool; equivalently, it can be described by the index of the document placed at each position. We also use incomplete rankings, which include only some of the documents in the pool: a document that occurs in the list is assigned its rank as its position, while a document that does not occur has no position in that list. Finally, for each (possibly incomplete) ranking list we keep track of the set of indices of the documents that appear in it.
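As a small illustration of these conventions, the following shows one way to represent a complete ranking and an incomplete top-k ranking in Python; using 0 as the sentinel position for missing documents is an assumption of this sketch, since the exact convention is elided in the text above.

```python
# Complete ranking over a pool of 4 documents, identified by index 0..3.
ranked_docs = [2, 0, 3, 1]                # document index at each position (best first)
position = {d: r for r, d in enumerate(ranked_docs, start=1)}  # position of each document

# Incomplete (top-2) ranking: missing documents get the sentinel position 0.
partial_position = {2: 1, 0: 2, 3: 0, 1: 0}
appearing = [d for d, pos in partial_position.items() if pos > 0]  # documents in the list
print(position, appearing)
```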

Problem Formulation. Given a parameterized ranking model (whose parameters are, e.g., the BM25 parameters or the smoothing factor of a query likelihood model with Dirichlet prior smoothing), we want to find the parameter settings under which the ranking model achieves the best ranking performance over the space of all queries. In practice, however, the space of all possible parameter values can be infinite, and we cannot access all queries. Therefore, we assume ranking model designers will input a finite number of candidate parameter settings and a finite subset of queries. Finally, we formulate our problem of unsupervised model selection as follows:

Definition 1 (Problem Formulation). Given a parameterized ranking model, a set of candidate parameter settings, and an unlabeled query subset, we aim to find the parameter setting whose induced ranking model achieves the best ranking performance over the query subset.

4.2. Model Selection Algorithm

Input: A parameterized ranking model, K candidate parameter settings, and an unlabeled query subset.
Output: The ranking model induced by the parameter setting with the largest accumulated confidence score.
initialize the accumulated confidence score of every candidate setting to 0;
for each query in the query subset do
    obtain the top-N rank list returned by each candidate setting and pool their documents;
    initialize the confidence score of every rank list to 1/K;
    while not converged do
        // Weighted Rank Aggregation
        for each document in the pool do
            sum, over every rank list containing the document, a position-based reward
            weighted by that rank list's confidence score;
        sort the pooled documents by these aggregated scores to obtain the aggregated rank list;
        // Confidence Score Adjustment
        for each rank list do
            update its confidence score based on its distance to the aggregated rank list;
        // Convergence Check
        if the confidence scores changed less than a small threshold then break;
    for each rank list do
        add its converged confidence score to the accumulated score of its parameter setting;
return the ranking model whose parameter setting has the largest accumulated confidence score;
Algorithm 1 Unsupervised Model Selection.
Figure 3. An illustrative example showing the process of weighted rank aggregation and the calculation of the two ranking distances (i.e., the extended and the position-aware Kendall Tau distances).

Our framework measures the goodness of each parameter setting based on its induced ranking model. The key challenge is how to evaluate the ranking performance of each induced model over a query that has no labeled documents. To address this challenge, we first leverage a weighted rank aggregation technique to obtain an aggregated rank list and then evaluate the quality of each model based on the agreement between its generated rank list and the aggregated rank list. The key intuition is that high-quality ranking models will rank documents based on a similar distribution, while low-quality ranking models will rank documents in a more uniformly random fashion. Therefore, the agreement between each rank list and the aggregated rank list serves as a good signal of its quality.

At a high level, our model selection method is an iterative algorithm that repeatedly aggregates multiple rankings (with their corresponding weights) and uses the aggregated rank list to estimate the quality of each of them. Given a query, we first construct one ranking model for each candidate parameter setting and obtain its returned top-N rank list over the document collection. Then, we construct a unified document pool from these rank lists. After that, we assign each ranking model a confidence score and initialize all of them with equal values. During each iteration, we first aggregate the rank lists, weighted by their confidence scores, to obtain the aggregated rank list, which is a complete ranking over the document pool. Then, we adjust the confidence score of each ranking model based on the distance between its rank list and the aggregated rank list.

Weighted Rank Aggregation. We aggregate multiple rank lists using a variant of the Borda counting method (Coppersmith et al., 2006) that considers the relative weight of each rank list. We calculate the score of each document based on its position in each rank list as follows:

(10)

where the contribution of each rank list depends on its length, its confidence weight, and an indicator function that equals 1 when the document appears in that rank list and 0 otherwise. The above equation gives a larger score to a document ranked at a higher position (i.e., a smaller position index) in a high-confidence rank list (i.e., one with a larger weight). Finally, we obtain the aggregated ranking of the documents based on their scores. A concrete example is shown in Figure 3.
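A minimal sketch of this weighted Borda-style aggregation: each list contributes, for every document it contains, a reward that grows with the list's confidence weight and with the document's position in the list. The per-list reward `alpha * (L - position + 1)` is an assumed instantiation of Equation (10), which is not reproduced in this text.

```python
def weighted_borda(rank_lists, alphas):
    """Aggregate several top-k rank lists (lists of document ids, best first) into one
    complete ranking over the pooled documents, weighting each list by its confidence."""
    pool = {d for lst in rank_lists for d in lst}
    scores = {d: 0.0 for d in pool}
    for lst, alpha in zip(rank_lists, alphas):
        L = len(lst)
        for pos, d in enumerate(lst, start=1):   # pos = 1 is the top of the list
            scores[d] += alpha * (L - pos + 1)   # higher position => larger reward
    return sorted(pool, key=lambda d: -scores[d])   # aggregated ranking, best first

# Toy usage: three rank lists with equal initial confidence 1/3.
lists = [["d1", "d2", "d3"], ["d2", "d1", "d4"], ["d4", "d3", "d1"]]
print(weighted_borda(lists, [1/3, 1/3, 1/3]))   # -> ['d1', 'd2', 'd4', 'd3']
```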

Confidence Score Adjustment. After we obtain the aggregated rank list, we adjust the confidence score of each ranking model based on the distance between its rank list and the aggregated rank list. In order to compare an incomplete rank list with a complete rank list, we extend the classical Kendall Tau distance (Kendall, 1955) and define it as follows:

(11)

The above distance counts the number of pairwise disagreements between the incomplete rank list and the aggregated rank list. One limitation of this distance is that it does not differentiate the importance of different ranking positions. Usually, switching two documents in the top part of a rank list should be penalized more than switching two documents in the bottom part. To model this intuition, we propose a position-aware Kendall Tau distance, defined as follows:

(12)

With the distance between two rankings defined, we can adjust the confidence score as follows:

(13)

where the distance can be either the extended Kendall Tau distance or its position-aware variant; we study how this choice influences the model selection results in Section 5.4. The key idea of the above equation is to promote ranking models that return rank lists better aligned with the aggregated rank list.
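Below is a sketch of the two distances and the confidence adjustment. The extended Kendall Tau counts pairwise disagreements between an incomplete list and the complete aggregated ranking; the position-aware variant penalizes disagreements near the top of the aggregated ranking more heavily. The 1/log2(rank + 1) discount and the softmax-style re-weighting are assumed choices standing in for Equations (12) and (13), which are not reproduced in this text.

```python
import itertools, math

def kendall_tau_incomplete(partial, aggregated):
    """Pairwise disagreements between an incomplete rank list and the complete
    aggregated ranking (a sketch of Eq. 11)."""
    agg_pos = {d: r for r, d in enumerate(aggregated, start=1)}
    par_pos = {d: r for r, d in enumerate(partial, start=1)}
    return sum(1 for d1, d2 in itertools.combinations(partial, 2)
               if (par_pos[d1] < par_pos[d2]) != (agg_pos[d1] < agg_pos[d2]))

def position_aware_tau(partial, aggregated):
    """Like the above, but a disagreement near the top of the aggregated ranking
    costs more than one near the bottom (a sketch of Eq. 12)."""
    agg_pos = {d: r for r, d in enumerate(aggregated, start=1)}
    par_pos = {d: r for r, d in enumerate(partial, start=1)}
    return sum(1.0 / math.log2(min(agg_pos[d1], agg_pos[d2]) + 1)
               for d1, d2 in itertools.combinations(partial, 2)
               if (par_pos[d1] < par_pos[d2]) != (agg_pos[d1] < agg_pos[d2]))

def adjust_confidence(rank_lists, aggregated, distance_fn):
    """Re-weight each rank list so that lists closer to the aggregated ranking receive
    larger confidence scores (a sketch of Eq. 13)."""
    dists = [distance_fn(lst, aggregated) for lst in rank_lists]
    exps = [math.exp(-d) for d in dists]
    total = sum(exps)
    return [e / total for e in exps]
```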

Putting it all together. Algorithm 1 summarizes our unsupervised model selection process. Given a query, we iteratively apply weighted rank aggregation and confidence score adjustment until the confidence scores converge, and we record the converged confidence score of each ranking model on that query. Given a set of queries, we run this procedure for each query and sum each ranking model's converged confidence scores over all queries into an accumulated confidence score. Finally, we return the ranking model with the largest accumulated confidence score.
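The following sketch ties the previous pieces together along the lines of Algorithm 1; it assumes the weighted_borda, position_aware_tau, and adjust_confidence helpers sketched above are in scope, and the convergence tolerance and iteration cap are arbitrary choices.

```python
def converge_confidences(rank_lists, tol=1e-4, max_iter=100):
    """Inner loop of Algorithm 1 for one query: alternate weighted aggregation and
    confidence adjustment until the confidence scores stabilize."""
    alphas = [1.0 / len(rank_lists)] * len(rank_lists)
    for _ in range(max_iter):
        aggregated = weighted_borda(rank_lists, alphas)
        new_alphas = adjust_confidence(rank_lists, aggregated, position_aware_tau)
        if max(abs(a - b) for a, b in zip(alphas, new_alphas)) < tol:
            return new_alphas
        alphas = new_alphas
    return alphas

def select_model(per_query_rank_lists):
    """Outer loop of Algorithm 1: per_query_rank_lists[q][k] is the top-N list produced
    by parameter setting k on query q; returns the index of the setting with the
    largest accumulated converged confidence."""
    num_settings = len(per_query_rank_lists[0])
    totals = [0.0] * num_settings
    for rank_lists in per_query_rank_lists:
        for k, alpha in enumerate(converge_confidences(rank_lists)):
            totals[k] += alpha
    return max(range(num_settings), key=lambda k: totals[k])
```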

5. Experiments

In this section, we evaluate our proposed SetRank framework as well as unsupervised model selection algorithm on two datasets from two scientific domains.

5.1. Datasets

We use two benchmark datasets for the experiments (both publicly available at https://github.com/mickeystroller/SetRank): Semantic Scholar (Xiong et al., 2017b) in computer science (S2-CS) and the TREC 2004 & 2005 Genomics Track in biomedical science (TREC-BIO).

S2-CS contains 100 queries sampled from Semantic Scholar's query log, in which 40 queries are entity-set queries and the maximum number of entities in a query is 5. Candidate documents are generated by pooling from variations of Semantic Scholar's online production system, and all of them are manually labeled on a 5-level scale. Entities in both queries and documents are linked to Freebase using CMNS (Hasibi et al., 2015). As the original dataset does not contain entity type information, we enhance it by retrieving each entity's most notable type in the latest Freebase dump (https://developers.google.com/freebase/) based on its Freebase ID. These types are organized by the Freebase type hierarchy.

TREC-BIO includes 100 queries designed by biologists and the candidate document pool is constructed based on the top results of all submissions at that time. All candidate documents are labeled on a 3-level scale. In these 100 queries, 86 of them are entity-set queries and the maximum number of entities in a query is 11. The original dataset contains no entity information and therefore we apply PubTator (Wei et al., 2013), the state-of-the-art biomedical entity linking tool, to obtain 5 types of entities (i.e., Gene, Disease, Chemical, Mutation, and Species) in both queries and documents. We build a simple type hierarchy with root node named ‘Thing’ and each first-level node corresponds to one of the above 5 types.

5.2. Entity Linking Performance

We evaluate the query entity linking using precision and recall at the query level. Specifically, an entity annotation is considered correct if it appears in the gold labeled data (i.e., the strict evaluation in (Carmel et al., 2014)). The original S2-CS dataset provides such gold labeled data. For the TREC-BIO dataset, we asked two Master's-level students with a biomedical science background to label all the linked entities as well as the entities that they could identify in the queries. We also report the entity linking performance on general domain queries (ClueWeb09 and ClueWeb12) for reference (Xiong et al., 2016). As shown in Table 2, the overall linking performance on academic queries is better than that on general domain queries, probably because academic queries are less ambiguous. Also, the recall of entity linking on the TREC-BIO dataset is very high. A possible reason is that biomedical entities have very distinctive tokens (e.g., "narcolepsy" is a specific disease related to sleep and is seldom used in other contexts), making them relatively easy to recognize.

5.3. Ranking Performance

5.3.1. Experimental Setup

Evaluation metrics. Since documents in both datasets have multi-level graded relevance, we use NDCG@{5,10,15,20} as our main evaluation metrics. All evaluation is performed using the standard pytrec_eval tool (Van Gysel and de Rijke, 2018). Statistical significance is tested using a two-tailed t-test with p-value < 0.05.

Baselines. We compare SetRank with 4 baseline ranking models: BM25 (Robertson and Zaragoza, 2009), the Query Likelihood Model with Dirichlet prior smoothing (LM-DIR) or with Jelinek-Mercer smoothing (LM-JM) (Zhai and Lafferty, 2001), and the Information-Based model (IB) (Clinchant and Gaussier, 2010). All models are applied to the papers' title and abstract fields. Here, we do not compete with Semantic Scholar's production system or the ESR model (Xiong et al., 2017b) because they are supervised models trained over users' click information, which is not available in our setting.

The parameters of all models, including the field weights, are set using 5-fold cross validation over the queries in each benchmark dataset, following the same paradigm as (Raviv et al., 2016). For each hold-out fold, the other four folds serve as a validation set. A grid search is applied to choose the parameter settings that maximize NDCG@20 on the validation set. Specifically, the title and abstract field weights are selected from {1, 5, 10, 15, 20, 50}; the Dirichlet smoothing parameter and the Jelinek-Mercer smoothing parameter are chosen from {500, 1000, 1500, 2000, 2500, 3000} and {0.1, 0.2, …, 0.9}, respectively; and the relative weight of entity tokens used in SetRank is selected from {0, 0.1, …, 1}. The best performing parameter settings are then saved for the hold-out evaluation.

S2-CS TREC-BIO ClueWeb09 ClueWeb12
Precision 0.680 0.678 0.577 0.485
Recall 0.680 0.727 0.596 0.575
Table 2. Entity linking performance on scientific domain queries (S2-CS, TREC-BIO) and general domain queries (ClueWeb09, ClueWeb12).

5.3.2. Effectiveness of Leveraging Entity Information

As mentioned before, the entity linking process is not perfect and it generates some noisy entity annotations. Therefore, we first study how different ranking models, including our proposed SetRank, can leverage such noisy entity information to improve the ranking performance. We evaluate three variations of each model – one using only word information, one using only entity information, and one using both pieces of information.

Results are shown in Table 3. We notice that the usefulness of entity information is inconclusive for the baseline models. On the S2-CS dataset, adding entity information improves their ranking performance, while on the TREC-BIO dataset it drags down the performance of all baseline methods. This resonates with previous findings in (Karimi et al., 2012) that simply adding entities into queries and posting them to existing ranking models does not work for biomedical literature retrieval. Compared with the baseline methods, SetRank successfully combines the word and entity information and effectively leverages the noisy entity information to improve ranking performance. Furthermore, SetRank can better utilize each single information source, either words or entities, than the baseline models, thanks to our proposed query graph representation. Overall, SetRank significantly outperforms all variations of the baseline models.

Dataset Metric | BM25: Word, Entity, Both | LM-DIR: Word, Entity, Both | LM-JM: Word, Entity, Both | IB: Word, Entity, Both | SetRank: Word, Entity, Both
S2-CS
NDCG@5 0.3476 0.3319 0.3675 0.3447 0.3460 0.3563 0.3626 0.3394 0.3625 0.3759 0.3420 0.3729 0.3890 0.3761 0.4207
NDCG@10 0.3785 0.3520 0.4039 0.3623 0.3579 0.3901 0.3774 0.3519 0.3962 0.3903 0.3557 0.4009 0.4168 0.3885 0.4431
NDCG@15 0.4001 0.3616 0.4160 0.3781 0.3673 0.4077 0.4051 0.3666 0.4174 0.4113 0.3699 0.4272 0.4411 0.4054 0.4762
NDCG@20 0.4126 0.3752 0.4333 0.4012 0.3816 0.4205 0.4182 0.3804 0.4362 0.4295 0.3855 0.4421 0.4674 0.4229 0.4950
TREC-BIO NDCG@5 0.3189 0.1542 0.2613 0.3053 0.1755 0.2669 0.2957 0.1656 0.2826 0.3045 0.1842 0.2770 0.3417 0.2111 0.3744
NDCG@10 0.2968 0.1488 0.2472 0.2958 0.1601 0.2571 0.2742 0.1588 0.2572 0.2918 0.1715 0.2633 0.3165 0.1976 0.3522
NDCG@15 0.2833 0.1424 0.2395 0.2852 0.1579 0.2591 0.2642 0.1575 0.2437 0.2835 0.1664 0.2541 0.3017 0.1931 0.3363
NDCG@20 0.2739 0.1419 0.2337 0.2781 0.1558 0.2547 0.2560 0.1534 0.2362 0.2722 0.1628 0.2406 0.2900 0.1885 0.3246
Table 3. Effectiveness of leveraging (noisy) entity information for ranking. Each method has three variations; in the original table, the best variation of each method is bolded, and a superscript marks results that significantly outperform the best variation of all 4 baseline methods (p-value < 0.05).

5.3.3. Ranking Performance on Entity-Set Queries

We further study each model's ranking performance on entity-set queries. There are 40 and 86 entity-set queries in S2-CS and TREC-BIO, respectively. We denote these subsets of entity-set queries as S2-CS-ESQ and TREC-BIO-ESQ. As shown in Table 4, SetRank significantly outperforms the best variation of all baseline methods on S2-CS-ESQ and TREC-BIO-ESQ by at least 25% and 14%, respectively, in terms of NDCG@5. Also, the advantages of SetRank over the baselines on entity-set queries are larger than those on general queries. This further demonstrates SetRank's effectiveness in modeling entity set information.

Dataset Metric BM25 LM-DIR LM-JM IB SetRank
S2-CS -ESQ NDCG@5 0.3994 0.3522 0.3812 0.3956 0.4983
NDCG@10 0.4364 0.3973 0.4241 0.4209 0.5130
NDCG@15 0.4454 0.4160 0.4431 0.4496 0.5450
NDCG@20 0.4609 0.4264 0.4618 0.4664 0.5629
TREC-BIO -ESQ NDCG@5 0.3185 0.2934 0.2940 0.3011 0.3639
NDCG@10 0.2968 0.2834 0.2746 0.2896 0.3406
NDCG@15 0.2812 0.2711 0.2636 0.2832 0.3251
NDCG@20 0.2718 0.2644 0.2553 0.2708 0.3132
Table 4. Ranking performance on entity-set queries. The best variation of each baseline method is selected. In the original table, a superscript marks results that significantly outperform all 4 baseline methods (p-value < 0.05).

5.3.4. Effectiveness of Modeling Entity Relation and Entity Type

To study how the inter-entity relation and entity type information contribute to document ranking, we compare SetRank with two of its ablated variants: the first models entity relations among the set but ignores the entity type information, and the second neglects both entity relations and types.

Results are shown in Table 5. First, comparing the variant that models entity relations (without types) with the variant that ignores both shows that modeling the entity relations in entity sets significantly improves the ranking results. Such improvement is especially obvious on the entity-set query sets S2-CS-ESQ and TREC-BIO-ESQ. Also, comparing the full SetRank with the relation-only variant shows that adding entity type information further improves ranking performance. In addition, we present a concrete case study for one entity-set query in Table 6. The top-2 papers returned by the variant without relation modeling focus on video games without discussing their relation to reinforcement learning. In comparison, SetRank considers the entity relations and returns papers mentioning both entities.

Dataset Metric | SetRank (no relations, no types) | SetRank (relations only) | SetRank (full)
S2-CS
NDCG@5 0.3847 0.4157 0.4207
NDCG@10 0.4095 0.4423 0.4431
NDCG@15 0.4256 0.4655 0.4762
NDCG@20 0.4443 0.4813 0.4950
TREC-BIO
NDCG@5 0.3414 0.3705 0.3744
NDCG@10 0.3257 0.3500 0.3522
NDCG@15 0.3140 0.3335 0.3363
NDCG@20 0.3058 0.3217 0.3246
S2-CS -ESQ NDCG@5 0.4059 0.4800 0.4983
NDCG@10 0.4311 0.5004 0.5130
NDCG@15 0.4469 0.5266 0.5450
NDCG@20 0.4683 0.5378 0.5629
TREC-BIO -ESQ NDCG@5 0.3257 0.3594 0.3639
NDCG@10 0.3100 0.3380 0.3406
NDCG@15 0.2994 0.3219 0.3251
NDCG@20 0.2903 0.3100 0.3132
Table 5. Ranking performance of different variations of SetRank. Best results are marked bold. In the original table, a superscript marks results that significantly outperform the ablated variant (p-value < 0.05).
Query reinforcement learning for video game
Method | SetRank (without entity relations) | SetRank
1 The effects of video game playing on attention, memory, and executive control A video game description language for model-based or interactive learning
2 Can training in a real-time strategy video game attenuate cognitive decline in older adults? Playing Atari with Deep Reinforcement Learning
3 A video game description language for model-based or interactive learning Real-time neuroevolution in the NERO video game
Table 6. A case study comparing SetRank without entity-relation modeling (left) against SetRank (right) on one entity-set query in S2-CS. Note: Atari is a video game platform.

5.3.5. Analysis of Entity Token Weight

We introduce the entity token weight in Eq. (6) to combine the entity-based and word-based relevance scores. In all previous experiments, we choose its value using cross validation. Here, we study how this parameter influences the ranking performance by constructing multiple SetRank models with different entity token weights and directly reporting their performance on all 100 queries.

Figure 4. Sensitivity of the entity token weight in the S2-CS and TREC-BIO datasets.

As shown in Figure 4, for the S2-CS dataset, SetRank's ranking performance first increases as the entity token weight increases, peaks around 0.7, and then starts to decrease as the weight increases further. However, for the TREC-BIO dataset, the optimal value is around 0.3, and if the weight increases beyond 0.6, the ranking performance drops quickly.

Dataset Method Selected parameter settings NDCG@5 NDCG@10 NDCG@15 NDCG@20
S2-CS SetRank- 20 5 0.7 1000 1000 0.4207 0.4431 0.4762 0.4950
AutoSetRank- 20 7 0.7 1500 2000 0.4174 0.4427 0.4730 0.4929
AutoSetRank- 20 5 0.7 1500 1500 0.4173 0.4436 0.4731 0.4923
Mean (± Std) 0.3898 (± 0.0112) 0.4128 (± 0.0106) 0.4411 (± 0.0161) 0.4543 (± 0.0163)
TREC-BIO SetRank- 20 5 0.2 1000 1000 0.3744 0.3522 0.3363 0.3246
AutoSetRank- 20 5 0.2 1500 1000 0.3692 0.3472 0.3305 0.3173
AutoSetRank- 20 7 0.2 1000 1000 0.3748 0.3564 0.3367 0.3253
Mean (± Std) 0.3479 (± 0.0103) 0.3238 (± 0.0079) 0.3199 (± 0.0079) 0.3036 (± 0.0093)
Table 7. Effectiveness of ranking model selection. SetRank: parameters are tuned using 5-fold cross validation. AutoSetRank: parameters are obtained by our unsupervised model selection algorithm, using either the extended or the position-aware Kendall Tau distance as the ranking distance (one row each). Mean (± Std): the average performance of all candidate ranking models, with the standard deviation shown.

5.4. Effectiveness of Model Selection

5.4.1. Experimental Setup

In this experiment, we apply our unsupervised model selection algorithm to choose the best parameter settings of SetRank without using a validation set. We select the entity token weight, title field weight, abstract field weight, and Dirichlet smoothing factors for the two fields from {0.2, 0.3, …, 0.8}, {5, 10, 15, 20}, {1, 3, 5, 10}, and {500, 1000, 1500, 2000}, respectively. This generates 7 × 4 × 4 × 4 × 4 = 1,792 possible parameter settings in total, and for each of them we can construct a ranking model. We first apply our unsupervised model selection algorithm (with either the extended or the position-aware Kendall Tau distance as the ranking distance) and obtain the most confident parameter settings returned by it. Then, we plug these parameter settings into SetRank and denote the resulting model AutoSetRank. For reference, we also calculate the average performance of all 1,792 ranking models.

5.4.2. Experimental Result and Analysis

Table 7 shows the results, including SetRank's performance when a labeled validation set is given. First, we notice that for the S2-CS dataset, although the parameter settings tuned over the validation set do perform better than the ones returned by our unsupervised model selection algorithm, the difference is not significant. For the TREC-BIO dataset, it is surprising to find that AutoSetRank with one of the two ranking distances can slightly outperform SetRank tuned on the validation set. Furthermore, the performance of AutoSetRank is higher than the average performance of all candidate ranking models by about 2 standard deviations, which demonstrates the effectiveness of our unsupervised model selection algorithm.

5.5. Use Case Study: Bio-Literature Search

Query APP APOE4 PSEN1 SORL1 PSEN2 ACE CLU BDNF IL1B MAPT
Method Rank Paper Title
PubMed
1 Apathy and APOE4 are associated with reduced BDNF levels in Alzheimer’s disease
2 ApoE4 and Aβ Oligomers Reduce BDNF Expression via HDAC Nuclear Translocation
3 Cognitive deficits and disruption of neurogenesis in a mouse model of apolipoprotein E4 domain interaction
4 APOE-epsilon4 and aging of medial temporal lobe gray matter in healthy adults older than 50 years
5 Influence of BDNF Val66Met on the relationship between physical activity and brain volume
SetRank
1 Investigating the role of rare coding variability in Mendelian dementia genes (APP, PSEN1, PSEN2, GRN, MAPT, and PRNP) in late-onset Alzheimer’s disease
2 Rare Genetic Variant in SORL1 May Increase Penetrance of Alzheimer's Disease in a Family with Several Generations of APOE-ε4 Homozygosity
3 APP, PSEN1, and PSEN2 mutations in early-onset Alzheimer disease: A genetic screening study of familial and sporadic cases
4 Identification and description of three families with familial Alzheimer disease that segregate variants in the SORL1 gene
5 The PSEN1, p.E318G variant increases the risk of Alzheimer's disease in APOE-ε4 carriers
Table 8. A real-world use case comparing SetRank with PubMed. The input query contains a set of 10 genes and reflects user’s information need of finding an association between this gene set and an unknown disease. Entity mentions in returned paper titles are highlighted in brown and the entity mentions of Alzheimer’s disease, which are used to judge paper relevance, are marked in red.

In this section, we demonstrate the effectiveness of SetRank in a biomedical use case. As preparation, we build a biomedical literature search engine based on over 27 million papers retrieved from PubMed. Entities in all papers are extracted and typed using PubTator. This search system is cold-started with our proposed SetRank model, and we show how SetRank helps the system accommodate a given entity-set query and return a high-quality ranked list of papers relevant to the query. A comparison with PubMed, a widely used search engine for biomedical literature, is also discussed.

A biomedical case. Consider the following case of a biomedical information need. Genomics studies often identify sets of genes as having important roles to play in the processes or conditions under investigation, and the investigators seek to understand better what biological insights such a list of genes might provide. Suppose such a study, having examined brain gene expression patterns in old mice, identifies ten genes as being of potential interest. The investigator forms a query with these 10 genes, submits it to a literature search engine, and examines the top ten returned papers to look for an association between this gene set and a disease. The query consists of symbols of the 10 genes: “APP, APOE, PSEN1, SORL1, PSEN2, ACE, CLU, BDNF, IL1B, MAPT”.

Relevance criterion. We choose the above ten genes for our illustration because these are actually top genes associated with Alzheimer’s disease according to DisGeNET (Piñero et al., 2016), and it is unlikely that there is another completely different (and unknown) commonality among them. Therefore, a retrieved paper is relevant if and only if it discusses at least one of the query genes in the context of Alzheimer’s disease. Furthermore, among all relevant papers, we prefer those covering more unique genes.

Result analysis. The top-5 papers returned by PubMed and by our system are shown in Table 8. (Querying PubMed with the exact same query returns 0 documents; to get reasonable results, PubMed users have to insert an OR between every pair of genes and change the default "sort by most recent" to "sort by best match".) We see that Alzheimer's disease is explicitly mentioned in the titles of all five papers returned by our system, and the top two papers cover 6 unique genes out of the total 10. All five papers returned by SetRank are highly relevant, since they all focus on the association between a subset of our query genes and Alzheimer's disease. In contrast, the top-5 papers retrieved by PubMed are dominated by two genes (i.e., APOE4 and BDNF) and contain none of the remaining eight. Only the 1st of the five papers is highly relevant; it focuses on the association between Alzheimer's disease (mentioned explicitly in the title) and our query gene set. Three other papers (ranked 2nd to 4th) are marginally relevant, in the sense that Alzheimer's disease is the context but not the focus of their studies. The paper ranked 5th is irrelevant. Therefore, users will prefer SetRank since it returns papers covering a large portion of an entity-set query and helps them find the association between this entity set and Alzheimer's disease.

6. Conclusions and Future Work

In this paper, we study the problem of searching scientific literature using entity-set queries. A distinctive characteristic of entity-set queries is that they reflect users' interest in inter-entity relations. To capture such information need, we propose SetRank, an unsupervised ranking framework which explicitly models entity relations among the entity set. Second, we develop a novel unsupervised model selection algorithm based on weighted rank aggregation to select SetRank's parameters without relying on a labeled validation set. Experimental results on two benchmark datasets corroborate the effectiveness of SetRank and the usefulness of our model selection algorithm. We further illustrate the power of SetRank with a real-world use case of biomedical literature search.

As a future direction, we would like to explore how to go beyond pairwise entity relations and integrate higher-order entity relations into the current SetRank framework. Besides, it would be interesting to explore whether SetRank can effectively model domain experts' prior knowledge about the relative importance of entity relations. Furthermore, the incorporation of user interaction and the extension of the current SetRank framework to weakly-supervised settings are also interesting research problems.

Acknowledgements

This research is sponsored in part by U.S. Army Research Lab. under Cooperative Agreement No. W911NF-09-2-0053 (NSCTA), DARPA under Agreement No. W911NF-17-C-0099, National Science Foundation IIS 16-18481, IIS 17-04532, and IIS-17-41317, DTRA HDTRA11810026, and grant 1U54GM114838 awarded by NIGMS through funds provided by the trans-NIH Big Data to Knowledge (BD2K) initiative (www.bd2k.nih.gov).

References

  • Beel and Gipp (2009) Joran Beel and Bela Gipp. 2009. Google Scholar’s ranking algorithm: An Introductory Overview. In ISSI.
  • Bhowmik and Ghosh (2017) Avradeep Bhowmik and Joydeep Ghosh. 2017. LETOR Methods for Unsupervised Rank Aggregation. In WWW.
  • Brazdil and Giraud-Carrier (2017) Pavel Brazdil and Christophe Giraud-Carrier. 2017. Metalearning and Algorithm Selection: progress, state of the art and introduction to the 2018 Special Issue. Machine Learning (2017).
  • Carmel et al. (2014) David Carmel, Ming-Wei Chang, Evgeniy Gabrilovich, Bo-June Paul Hsu, and Kuansan Wang. 2014. ERD’14: entity recognition and disambiguation challenge. SIGIR Forum 48 (2014), 63–77.
  • Clinchant and Gaussier (2010) Stéphane Clinchant and Éric Gaussier. 2010. Information-based models for ad hoc IR. In SIGIR.
  • Coppersmith et al. (2006) Don Coppersmith, Lisa Fleischer, and Atri Rudra. 2006. Ordering by weighted number of wins gives a good ranking for weighted tournaments. In Proceedings of the seventeenth annual ACM-SIAM symposium on Discrete algorithm. Society for Industrial and Applied Mathematics, 776–782.
  • Dalton et al. (2014) Jeff Dalton, Laura Dietz, and James Allan. 2014. Entity query feature expansion using knowledge base links. In SIGIR.
  • Feurer et al. (2015) Matthias Feurer, Aaron Klein, Katharina Eggensperger, Jost Tobias Springenberg, Manuel Blum, and Frank Hutter. 2015. Efficient and Robust Automated Machine Learning. In NIPS.
  • Fox and Shaw (1993) Edward A. Fox and Joseph A. Shaw. 1993. Combination of Multiple Searches. In TREC.
  • Garigliotti and Balog (2017) Darío Garigliotti and Krisztian Balog. 2017. On Type-Aware Entity Retrieval. In ICTIR.
  • Guiver and Snelson (2009) John Guiver and Edward Snelson. 2009. Bayesian inference for Plackett-Luce ranking models. In ICML.
  • Guo et al. (2009) Jiafeng Guo, Gu Xu, Xueqi Cheng, and Hang Li. 2009. Named entity recognition in query. In SIGIR.
  • Hasibi et al. (2015) Faegheh Hasibi, Krisztian Balog, and Svein Erik Bratsberg. 2015. Entity linking in queries: Tasks and evaluation. In Proceedings of the 2015 International Conference on The Theory of Information Retrieval. ACM, 171–180.
  • Hersh et al. (2004) William R. Hersh, Ravi Teja Bhupatiraju, L. Ross, Aaron M. Cohen, Dale Kraemer, and Phoebe Johnson. 2004. TREC 2004 Genomics Track Overview. In TREC.
  • Hersh et al. (2005) William R. Hersh, Aaron Cohen, Jianji Yang, Ravi Teja Bhupatiraju, Phoebe Roberts, and Marti Hearst. 2005. TREC 2005 Genomics Track Overview. In TREC.
  • Karimi et al. (2012) Sarvnaz Karimi, Justin Zobel, and Falk Scholer. 2012. Quantifying the impact of concept recognition on biomedical information retrieval. Information Processing & Management 48, 1 (2012), 94–106.
  • Kendall (1955) Maurice G Kendall. 1955. Rank correlation methods. (1955).
  • Klementiev et al. (2007) Alexandre Klementiev, Dan Roth, and Kevin Small. 2007. An Unsupervised Learning Algorithm for Rank Aggregation. In ECML.
  • Klementiev et al. (2008) Alexandre Klementiev, Dan Roth, and Kevin Small. 2008. A Framework for Unsupervised Rank Aggregation. In SIGIR LR4IR Workshop.
  • Liu and Fang (2015) Xitong Liu and Hui Fang. 2015. Latent entity space: a novel retrieval approach for entity-bearing queries. Information Retrieval Journal 18 (2015), 473–503.
  • Lu (2011) Zhiyong Lu. 2011. PubMed and beyond: a survey of web tools for searching biomedical literature. In Database.
  • Maystre and Grossglauser (2015) Lucas Maystre and Matthias Grossglauser. 2015. Fast and Accurate Inference of Plackett-Luce Models. In NIPS.
  • Metzler and Croft (2005) Donald Metzler and W Bruce Croft. 2005. A Markov random field model for term dependencies. In SIGIR.
  • Ogilvie and Callan (2003) Paul Ogilvie and James P. Callan. 2003. Combining document representations for known-item search. In SIGIR.
  • Piñero et al. (2016) Janet Piñero, Àlex Bravo, Núria Queralt-Rosinach, Alba Gutiérrez-Sacristán, Jordi Deu-Pons, Emilio Centeno, Javier García-García, Ferran Sanz, and Laura I Furlong. 2016. DisGeNET: a comprehensive platform integrating information on human disease-associated genes and variants. Nucleic acids research (2016).
  • Raviv et al. (2016) Hadas Raviv, Oren Kurland, and David Carmel. 2016. Document Retrieval Using Entity-Based Language Models. In SIGIR.
  • Ren et al. (2017) Xiang Ren, Jiaming Shen, Meng Qu, Xuan Wang, Zeqiu Wu, Qi Zhu, Meng Jiang, Fangbo Tao, Saurabh Sinha, David Liem, Peipei Ping, Richard M. Weinshilboum, and Jiawei Han. 2017. Life-iNet: A Structured Network-Based Knowledge Exploration and Analytics System for Life Sciences. In ACL.
  • Robertson and Zaragoza (2009) Stephen E. Robertson and Hugo Zaragoza. 2009. The Probabilistic Relevance Framework: BM25 and Beyond. Foundations and Trends in Information Retrieval (2009).
  • Shen et al. (2016) Jiaming Shen, Zhenyu Song, Shitao Li, Zhaowei Tan, Yuning Mao, Luoyi Fu, Li Song, and Xinbing Wang. 2016. Modeling Topic-Level Academic Influence in Scientific Literatures. In AAAI Workshop: Scholarly Big Data.
  • Sinha et al. (2015) Arnab Sinha, Zhihong Shen, Yang Song, Hao Ma, Darrin Eide, Bo-June Paul Hsu, and Kuansan Wang. 2015. An Overview of Microsoft Academic Service (MAS) and Applications. In WWW.
  • Tang et al. (2008) Jie Tang, Jing Zhang, Limin Yao, Juan-Zi Li, Li Zhang, and Zhong Su. 2008. ArnetMiner: extraction and mining of academic social networks. In KDD.
  • Van Gysel and de Rijke (2018) Christophe Van Gysel and Maarten de Rijke. 2018. Pytrec_eval: An Extremely Fast Python Interface to trec_eval. In SIGIR. ACM.
  • Wei et al. (2013) Chih-Hsuan Wei, Hung-Yu Kao, and Zhiyong Lu. 2013. PubTator: a web-based text mining tool for assisting biocuration. In Nucleic Acids Research.
  • Wu et al. (2014) Jian Wu, Kyle Williams, Hung-Hsuan Chen, Madian Khabsa, Cornelia Caragea, Alexander Ororbia, Douglas Jordan, and C. Lee Giles. 2014. CiteSeerX: AI in a Digital Library Search Engine. AI Magazine 36 (2014), 35–48.
  • Xiong and Callan (2015a) Chenyan Xiong and James P. Callan. 2015a. EsdRank: Connecting Query and Documents through External Semi-Structured Data. In CIKM.
  • Xiong and Callan (2015b) Chenyan Xiong and James P. Callan. 2015b. Query Expansion with Freebase. In ICTIR.
  • Xiong et al. (2016) Chenyan Xiong, James P. Callan, and Tie-Yan Liu. 2016. Bag-of-Entities Representation for Ranking. In ICTIR.
  • Xiong et al. (2017a) Chenyan Xiong, James P. Callan, and Tie-Yan Liu. 2017a. Word-Entity Duet Representations for Document Ranking. In SIGIR.
  • Xiong et al. (2017b) Chenyan Xiong, Russell Power, and James P. Callan. 2017b. Explicit Semantic Ranking for Academic Search via Knowledge Graph Embedding. In WWW.
  • Xu et al. (2009) Yang Xu, Gareth J. F. Jones, and Bin Wang. 2009. Query dependent pseudo-relevance feedback based on wikipedia. In SIGIR.
  • Zhai and Lafferty (2001) ChengXiang Zhai and John D. Lafferty. 2001. A Study of Smoothing Methods for Language Models Applied to Ad Hoc Information Retrieval. SIGIR Forum 51 (2001), 268–276.
  • Zhao et al. (2016) Zhibing Zhao, Peter Piech, and Lirong Xia. 2016. Learning Mixtures of Plackett-Luce Models. In ICML.