AUEB at BioASQ 6: Document and Snippet Retrieval

Georgios-Ioannis Brokos, et al. · 09/15/2018

We present AUEB's submissions to the BioASQ 6 document and snippet retrieval tasks (parts of Task 6b, Phase A). Our models use novel extensions to deep learning architectures that operate solely over the text of the query and candidate document/snippets. Our systems scored at the top or near the top for all batches of the challenge, highlighting the effectiveness of deep learning for these tasks.

1 Introduction

BioASQ Tsatsaronis et al. (2015) is a biomedical document classification, document retrieval, and question answering competition, currently in its sixth year (http://bioasq.org/). We provide an overview of AUEB's submissions to the document and snippet retrieval tasks (parts of Task 6b, Phase A) of BioASQ 6; for further information on the BioASQ 6 tasks, see http://bioasq.org/participate/challenges. In these tasks, systems are provided with English biomedical questions and are required to retrieve relevant documents and document snippets from a collection of Medline/PubMed articles (http://www.ncbi.nlm.nih.gov/pubmed/).

We used deep learning models for both document and snippet retrieval. For document retrieval, we focus on extensions to the Position-Aware Convolutional Recurrent Relevance (pacrr) model of Hui et al. (2017) and, mostly, the Deep Relevance Matching Model (drmm) of Guo et al. (2016), whereas for snippet retrieval we based our work on the Basic Bi-CNN (bcnn) model of Yin et al. (2016). Little task-specific pre-processing is employed and the models operate solely over the text of the query and candidate document/snippets.

Overall, our systems scored at the top or near the top for all batches of the challenge. In previous years of the BioASQ challenge, the top scoring systems used primarily traditional ir techniques Jin et al. (2017). Thus, our work highlights that end-to-end deep learning models are an effective approach for retrieval in the biomedical domain.

2 Document Retrieval

For document retrieval, we investigate new deep learning architectures focusing on term-based interaction models, where query terms (q-terms for brevity) are scored relative to a document’s terms (d-terms) and their scores are aggregated to produce a relevance score for the document. All models use pre-trained embeddings for all q-terms and d-terms. Details on data resources and data pre-processing are given in Section 5.1.

2.1 PACRR-based Models

The first model we investigate is pacrr Hui et al. (2017). In this model, a query-document term similarity matrix $sim$ is first computed (Fig. 1, left). Each cell $(i, j)$ of $sim$ contains the cosine similarity between the embeddings of a q-term $q_i$ and a d-term $d_j$. To keep the dimensions of $sim$ fixed across queries and documents of varying lengths, queries are padded to the maximum number of q-terms $l_q$, and only the first $l_d$ terms of each document are retained. (We use pacrr-firstk, which Hui et al. (2017) recommend when documents fit in memory, as in our experiments.) Then, convolutions with kernels of different sizes $n \times n$ ($n = 2, \ldots, l_g$) are applied to $sim$ to capture $n$-gram query-document similarities. For each size $n$, multiple kernels (filters) are used. Max pooling is then applied along the dimension of the filters (max value over all filters of the same size), followed by row-wise $k$-max pooling along the dimension of the d-terms to capture the strongest signals between each q-term and all the d-terms. The resulting matrices (one per kernel size) are concatenated into a single matrix whose rows are document-aware q-term encodings (Fig. 1); the idf of each q-term is also appended to its row, normalized by applying a softmax across the idfs of all the q-terms. Following Hui et al. (2018), we concatenate the rows of the resulting matrix into a single vector, which is passed to an mlp that produces a query-document relevance score. (Hui et al. (2017) used an additional lstm, which was later replaced by the final concatenation Hui et al. (2018).)

Figure 1: pacrr Hui et al. (2017) and term-pacrr. In pacrr, an mlp is applied to the concatenation of the document-aware q-term encodings to produce the relevance score. In term-pacrr, the mlp is applied separately to each document-aware q-term encoding; the resulting scores are combined by a linear layer.
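To make the interaction step concrete, the following is a minimal PyTorch sketch of how pacrr-style document-aware q-term encodings could be assembled. The tensor shapes, the kernel sizes, and the fact that the convolution filters are created inline (in a real model they are learned parameters of the network) are illustrative assumptions, not the exact implementation used in our submissions.

```python
import torch
import torch.nn.functional as F

def pacrr_qterm_encodings(q_emb, d_emb, q_idf, kernel_sizes=(2, 3), n_filters=16, k=2):
    """q_emb: (l_q, dim) padded query embeddings; d_emb: (l_d, dim) document embeddings;
    q_idf: (l_q,) raw idf values. Returns one document-aware encoding per q-term."""
    # Query-document cosine similarity matrix sim, shape (l_q, l_d).
    sim = F.cosine_similarity(q_emb.unsqueeze(1), d_emb.unsqueeze(0), dim=-1)
    rows = [sim.topk(k, dim=-1).values]               # k-max pooling on sim itself (unigram signals)
    x = sim.unsqueeze(0).unsqueeze(0)                 # (1, 1, l_q, l_d) for 2-D convolutions
    for n in kernel_sizes:                            # n-gram similarity convolutions
        conv = torch.nn.Conv2d(1, n_filters, kernel_size=n, padding=n - 1)  # learned in a real model
        fmap = conv(x).max(dim=1).values.squeeze(0)   # max over the filters of the same size
        fmap = fmap[: sim.size(0)]                    # keep one row per q-term
        rows.append(fmap.topk(k, dim=-1).values)      # row-wise k-max pooling over d-terms
    idf = torch.softmax(q_idf, dim=0).unsqueeze(-1)   # idf normalized with a softmax over q-terms
    return torch.cat(rows + [idf], dim=-1)            # (l_q, (len(kernel_sizes) + 1) * k + 1)
```

Each row of the returned tensor corresponds to one q-term and concatenates its strongest unigram and n-gram similarity signals with its normalized idf.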

Instead of using an mlp to score the concatenation of all the (document-aware) q-term encodings, a simple extension that we found effective is to use an mlp to independently score each q-term encoding (the same mlp for all q-terms, Fig. 1); the resulting scores are then aggregated by a linear layer. This version, term-pacrr, performs better than pacrr with the same number of hidden layers in the mlps. This is likely due to the fewer parameters of term-pacrr's mlp, which is shared across the q-term representations and operates on shorter input vectors. Indeed, in our early experiments term-pacrr was less prone to over-fitting. (In the related publication of McDonald et al. (2018), term-pacrr is identical to the pacrr-drmm model.)
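A sketch of this term-pacrr scoring head is shown below, assuming document-aware q-term encodings like those produced above; the two-layer mlp with relu activations and 7-dimensional hidden layers mirrors Section 5.2, while the class and argument names are ours.

```python
import torch.nn as nn

class TermPacrrScorer(nn.Module):
    """Scores each document-aware q-term encoding with a shared MLP, then combines
    the per-term scores with a linear layer (term-pacrr, right part of Fig. 1)."""
    def __init__(self, enc_dim, l_q, hidden=7):
        super().__init__()
        self.term_mlp = nn.Sequential(            # shared across all q-terms
            nn.Linear(enc_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1))
        self.combine = nn.Linear(l_q, 1)          # aggregates the l_q per-term scores

    def forward(self, qterm_encodings):           # (batch, l_q, enc_dim)
        scores = self.term_mlp(qterm_encodings).squeeze(-1)   # (batch, l_q) per-term scores
        return self.combine(scores).squeeze(-1)               # (batch,) relevance scores
```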

2.2 DRMM-based Models

The second model we investigate is drmm Guo et al. (2016) (Fig. 2). The original drmm uses pre-trained word embeddings for q-terms and d-terms, and (bucketed) cosine similarity histograms (the outputs of the histogram nodes in Fig. 2). Each histogram captures the similarity of a q-term to all the d-terms of a particular document. The histograms, which in this model serve as the document-aware q-term encodings, are fed to an mlp (dense layers of Fig. 2) that produces the (document-aware) score of each q-term. Each q-term score is then weighted using a gating mechanism (topmost box nodes in Fig. 2) that examines properties of the q-term to assess its importance for ranking (e.g., common words are less important). The sum of the weighted q-term scores is the relevance score of the document.

Figure 2: Illustration of drmm Guo et al. (2016) for a query of three q-terms. The histogram nodes produce (bucketed) cosine similarity histograms, each capturing the similarity between a q-term and all the d-terms of the document.

For gating (topmost box nodes of Fig. 2), Guo et al. (2016) use a linear self-attention:

$$g_i = \mathrm{softmax}_i \left( w_g^\top \, \phi(q_i) \right)$$

where $\phi(q_i)$ is the embedding $e(q_i)$ of the $i$-th q-term, or its idf, $\mathrm{idf}(q_i)$, and $w_g$ is a weights vector. We found that $\phi(q_i) = [e(q_i); \mathrm{idf}(q_i)]$, where ';' denotes concatenation, was optimal for all drmm-based models.
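As a minimal sketch, the gating of the concatenated variant $\phi(q_i) = [e(q_i); \mathrm{idf}(q_i)]$ amounts to a single learned vector $w_g$ and a softmax over the q-terms; the function below assumes unbatched tensors.

```python
import torch

def qterm_gates(q_emb, q_idf, w_g):
    """Linear self-attention gating over q-terms: phi(q_i) = [e(q_i); idf(q_i)],
    g_i = softmax_i(w_g^T phi(q_i)). q_emb: (l_q, dim), q_idf: (l_q,), w_g: (dim + 1,)."""
    phi = torch.cat([q_emb, q_idf.unsqueeze(-1)], dim=-1)  # (l_q, dim + 1)
    return torch.softmax(phi @ w_g, dim=0)                 # (l_q,) one gate per q-term
```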

2.2.1 Abel-Drmm

The original drmm Guo et al. (2016) has two shortcomings. First, it entirely ignores the contexts in which the terms occur, in contrast to position-aware models such as pacrr (Section 2.1) or models based on recurrent representations Palangi et al. (2016). Second, the histogram representation of the document-aware q-term encodings is not differentiable, so it is not possible to train the network end-to-end, if one wishes to backpropagate all the way to the word embeddings.

To address the first shortcoming, we add an encoder (Fig. 3) to produce the context-sensitive encoding of each q-term or d-term from the pre-trained embeddings of the previous, current, and next term in a particular query or document. A single dense layer with residuals is used, in effect a one-layer Temporal Convolutional Network (tcn) Bai et al. (2018) without pooling or dilation. The number of convolutional filters equals the dimensions of the pre-trained embedding, for residuals to be summed without transformation.

Specifically, let $e(t_i)$ be the pre-trained embedding of a q-term or d-term $t_i$. We compute the context-sensitive encoding $c(t_i)$ of $t_i$ as:

$$c(t_i) = e(t_i) + f\big( W \, [e(t_{i-1}); e(t_i); e(t_{i+1})] + b \big) \quad (1)$$

where $W$ and $b$ are the weights matrix and bias vector of the dense layer, $f$ is the activation function, and $t_{i-1}$, $t_{i+1}$ are the tokens surrounding $t_i$ in the query or document. This is an orthogonal way to incorporate context into the model, relative to pacrr. pacrr creates a query-document similarity matrix and computes $n$-gram convolutions over the matrix. Here we incorporate context directly into the term encodings; hence similarities in this space are already context-sensitive. One way to view this difference is the point at which context enters the model: directly during term encoding (Fig. 3), or after term similarity scores have been computed (pacrr, Fig. 1).

Figure 3: Producing context-sensitive term encodings.
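The encoder of Eq. 1 and Fig. 3 is only a few tensor operations; below is a small sketch, assuming simple edge padding at the sequence boundaries and the leaky-relu activation of Section 5.2 ($W$ and $b$ would be learned).

```python
import torch
import torch.nn.functional as F

def context_sensitive(emb, W, b, f=F.leaky_relu):
    """Eq. 1: c(t_i) = e(t_i) + f(W [e(t_{i-1}); e(t_i); e(t_{i+1})] + b).
    emb: (seq_len, dim); W: (dim, 3 * dim); b: (dim,)."""
    padded = torch.cat([emb[:1], emb, emb[-1:]], dim=0)               # edge padding (an assumption)
    tri = torch.cat([padded[:-2], padded[1:-1], padded[2:]], dim=-1)  # (seq_len, 3 * dim) trigram windows
    return emb + f(tri @ W.t() + b)                                   # residual connection
```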

To make drmm trainable end-to-end, we replace its histogram-based document-aware q-term encodings (the histogram nodes of Fig. 2) with q-term encodings that consider the d-terms via an attention mechanism. Figure 4 shows the new sub-network that computes the document-aware encoding of a q-term $q_i$, given a document $d$ of d-terms $d_1, \ldots, d_{l_d}$. We first compute a dot-product attention score $a_{i,j}$ for each $d_j$ relative to $q_i$:

$$a_{i,j} = \mathrm{softmax}_j \left( c(q_i)^\top \, c(d_j) \right) \quad (2)$$

where $c(t)$ is the context-sensitive encoding of term $t$ (Eq. 1). We then sum the context-sensitive encodings of the d-terms, weighted by their attention scores, to produce an attention-based representation $\hat{d}_i$ of document $d$ from the viewpoint of $q_i$:

$$\hat{d}_i = \sum_{j=1}^{l_d} a_{i,j} \, c(d_j) \quad (3)$$

The Hadamard product (element-wise multiplication, $\odot$) between the document representation $\hat{d}_i$ and the q-term encoding $c(q_i)$ is then computed and used as the fixed-dimension document-aware encoding $\phi_H(q_i)$ of $q_i$ (Fig. 4):

$$\phi_H(q_i) = \hat{d}_i \odot c(q_i) \quad (4)$$

The histogram nodes and lower parts of the drmm network of Fig. 2 are now replaced by (multiple copies of) the sub-network of Fig. 4 (one copy per q-term), with the $\phi_H(q_i)$ nodes replacing the histogram nodes. We call the resulting model Attention-Based Element-wise drmm (abel-drmm).

Figure 4: abel-drmm sub-net. From context-aware q-term and d-term encodings (Fig. 3), it generates fixed-dimension document-aware q-term encodings to be used in drmm (Fig. 2, replacing the histogram nodes).

Intuitively, if the document contains one or more terms that are similar to $q_i$, the attention mechanism will have emphasized mostly those terms and, hence, $\hat{d}_i$ will be similar to $c(q_i)$; otherwise not. This similarity could have been measured by the cosine similarity between $\hat{d}_i$ and $c(q_i)$, but the cosine similarity assigns the same weight to all the dimensions, i.e., to all the element-wise products in $\phi_H(q_i)$. By using the Hadamard product, we pass on to the upper layers of drmm (the dense layers of Fig. 2), which score each q-term with respect to the document, all the element-wise products of $\phi_H(q_i)$, allowing the upper layers to learn which element-wise products (or combinations of them) are important when matching a q-term to the document.
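In code, the sub-network of Fig. 4 (Eqs. 2–4) reduces to a dot-product attention followed by a Hadamard product; a minimal unbatched sketch:

```python
import torch

def abel_drmm_encoding(c_q, c_d):
    """c_q: (dim,) context-sensitive encoding of q-term q_i; c_d: (l_d, dim) encodings
    of the d-terms. Returns phi_H(q_i), the document-aware q-term encoding of Eq. 4."""
    att = torch.softmax(c_d @ c_q, dim=0)         # Eq. 2: dot-product attention over d-terms
    d_hat = (att.unsqueeze(-1) * c_d).sum(dim=0)  # Eq. 3: attention-weighted document summary
    return d_hat * c_q                            # Eq. 4: Hadamard product with the q-term encoding
```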

2.2.2 ABEL-DRMM extensions

We experimented with two extensions to abel-drmm. The first is a density-based extension that considers all windows of $w$ consecutive tokens of the document and computes an abel-drmm relevance score per window. The final relevance score of a document is the sum of the original abel-drmm score, computed over the entire document, plus the maximum abel-drmm score over all the document's windows. The intuition is to reward not only documents that match the query, but also those that match it within a dense window.

The second extension is to compute a confidence score per document and return only documents with scores above a threshold. We apply a softmax over the abel-drmm scores of the top $N$ documents and return only documents from the top $N$ with normalized scores exceeding a threshold $t$. While this always hurts metrics like Mean Average Precision (map) when evaluating document retrieval, it has the potential to improve the precision of downstream components, in our case snippet retrieval, which is in fact what we observe.
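A sketch of the confidence filter follows; the values of $N$ and $t$ below are illustrative only.

```python
import torch

def confident_documents(doc_ids, scores, N=10, t=0.2):
    """Keep only documents among the top N (by ABEL-DRMM score) whose softmax-normalized
    score exceeds the threshold t. N=10 and t=0.2 are illustrative values."""
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:N]
    probs = torch.softmax(torch.tensor([scores[i] for i in order]), dim=0)
    return [doc_ids[i] for i, p in zip(order, probs.tolist()) if p > t]
```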

3 Snippet Retrieval

For the snippet retrieval task, we used the ‘basic cnn’ (bcnn) network of the broader abcnn model Yin et al. (2016), combined with a post-processing stage, as discussed below. The input to snippet retrieval is an English question and text snippets (e.g., sentences) from the documents that the document retrieval component returned as relevant to the question. The goal is to rank the snippets so that snippets selected by the human experts as relevant to the question are ranked higher than the others. In BioASQ, human experts are instructed to select relevant snippets consisting of one or more consecutive sentences. (This was not actually the case in BioASQ year 1; hence, some of our training data do not adhere to this rule.) For training purposes, we split the relevant documents into sentences and consider sentences that overlap the gold snippets (the ones selected by the human experts) as relevant snippets, and the remaining ones as irrelevant. At inference time, the documents returned as relevant by the document retrieval model are split into sentences, and these sentences are ranked by the system. For details on sentence splitting, tokenization, etc., see Section 5.1.

3.1 BCNN Model

bcnn receives as input two sequences of terms (tokens), in our case a question (query) and a sentence from a document. All terms are represented by pre-trained embeddings (Section 5.1). Snippet sequences are truncated (or zero-padded) to a uniform length. A convolution layer with multiple filters, each of the same width $w$, is applied to each of the two input sequences, followed by a windowed-average pooling layer over the same filter width $w$ to produce a feature map (per filter) of the same dimensionality as the input of the convolution layer. (The same filters are applied to both queries and snippets.) Consequently, an arbitrary number of convolution/pooling blocks can be stacked in order to extract increasingly abstract features.

An average pooling layer is then applied to the entire output of the last convolution/pooling block (Fig. 5) to obtain a feature vector of the query and of the snippet, respectively. When multiple convolution filters are used (Fig. 5 illustrates only one), we obtain a different feature vector from each filter (for the query and the snippet, respectively), and the feature vectors from the different filters are concatenated, again yielding a single feature vector for the query and for the snippet, respectively. Similarity scores are then computed from the query and snippet feature vectors, and these are fed into a linear logistic regression layer. One critical implementation detail from the original bcnn paper is that, when computing the query-snippet similarity scores, average pooling is actually applied to the output of each convolution/pooling block, i.e., we obtain a different query and snippet feature vector from the output of each block. Different similarity scores are computed from the query and snippet feature vectors obtained from the output of each block, and all the similarity scores are passed to the final layer. Thus the number of inputs to the final layer is proportional to the number of blocks.

Figure 5: bcnn Yin et al. (2016) scoring snippets relative to a query. The example illustrates a query of 5 terms, a snippet of 7 terms, and a single convolution filter of width $w$. Zero-padding is shown as empty boxes. In each convolution/pooling block, the convolution layer is followed by a windowed-average pooling of the same width, to preserve the dimensionality of the input of the block. Thus convolution/pooling blocks can be repeated, making the model arbitrarily deep.
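One convolution/pooling block of this kind can be sketched as follows; the tanh activation is an assumption carried over from the original bcnn paper, and the filter weights would normally be learned parameters shared between the query and the snippet.

```python
import torch
import torch.nn.functional as F

def bcnn_block(x, filters, w=4):
    """One BCNN convolution/pooling block. x: (batch, in_channels, seq_len);
    filters: (out_channels, in_channels, w). A 'wide' convolution (output length
    seq_len + w - 1) followed by average pooling with window w restores seq_len,
    so blocks can be stacked."""
    wide = F.conv1d(F.pad(x, (w - 1, w - 1)), filters)  # wide convolution
    wide = torch.tanh(wide)                             # activation (assumed tanh, as in the original bcnn)
    return F.avg_pool1d(wide, kernel_size=w, stride=1)  # windowed-average pooling
```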

3.2 Post-processing

A technique that seems to improve our snippet retrieval results is to retain only the snippets with the best bcnn scores for each query, and then re-rank the retained snippets by the relevance scores of the documents they came from; snippets from the same document are subsequently ranked by their bcnn scores. This is a proxy for more sophisticated models that would consider document and snippet retrieval jointly. It is important because the snippet retrieval model is trained under the assumption that it only sees relevant documents; accounting for the rank/score of the source document therefore helps to correctly bias the snippet model.
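This post-processing is a simple two-stage sort; a sketch with hypothetical field names:

```python
def rerank_snippets(snippets, top_k=10):
    """snippets: list of dicts with 'bcnn_score' and 'doc_score' (the relevance score of
    the document the snippet came from). Keep the top_k snippets by bcnn score, then order
    them by document score, breaking ties by bcnn score. Field names are illustrative."""
    kept = sorted(snippets, key=lambda s: s['bcnn_score'], reverse=True)[:top_k]
    return sorted(kept, key=lambda s: (s['doc_score'], s['bcnn_score']), reverse=True)
```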

4 Overall System Architecture

Figure 6: Overall architecture of document and snippet retrieval systems.

Figure 6 outlines the general architecture that we used to piece together the various components. It consists of: retrieving the top $N$ documents per query using bm25 Robertson et al. (1995); re-ranking these $N$ documents using one of the document retrieval models (Section 2) and retaining (up to) the top 10; scoring all candidate snippets of the retained documents with the snippet retrieval model (bcnn, Section 3.1) and retaining (up to) the top 10 snippets; and re-ranking the retained snippets by the relevance scores of the documents they came from (Section 3.2). (The last step was used only in batches 3–5.)

We set the number of returned documents and snippets to 10 each, as dictated by the BioASQ challenge. We set $N$ so that bm25 returned the majority of the relevant documents of the training/development data sets; setting $N$ to larger values had no impact on the final results. The reason for using a pre-retrieval model based on bm25 is that the deep document retrieval models we use here are computationally expensive. Running them on every document in the index for every query is therefore prohibitive, whereas running them on the top $N$ documents of a pre-retrieval system is easily achieved.
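The whole pipeline of Fig. 6 can be sketched as follows; bm25_index.search, doc_model.score, snip_model.score and d.sentences are hypothetical interfaces, and the bm25 cut-off n_pre is an assumed placeholder for $N$.

```python
def retrieve(question, bm25_index, doc_model, snip_model, n_pre=100, top_docs=10, top_snips=10):
    """End-to-end sketch of the retrieval pipeline of Fig. 6 (interfaces are hypothetical)."""
    candidates = bm25_index.search(question, n_pre)                    # step 1: bm25 pre-retrieval
    scored = [(doc_model.score(question, d), d) for d in candidates]   # step 2: deep re-ranking
    docs = [d for _, d in sorted(scored, key=lambda x: x[0], reverse=True)[:top_docs]]
    snips = [(snip_model.score(question, s), doc_model.score(question, d), s)
             for d in docs for s in d.sentences]                       # step 3: snippet scoring
    snips = sorted(snips, key=lambda x: x[0], reverse=True)[:top_snips]
    snips = sorted(snips, key=lambda x: (x[1], x[0]), reverse=True)    # step 4: re-rank by doc score
    return docs, [s for _, _, s in snips]
```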

5 Experiments

All retrieval components (pacrr-, drmm-, and bcnn-based) were augmented to combine the score of the corresponding deep model with a number of traditional ir features, which is a common technique Severyn and Moschitti (2015). In term-pacrr, the additional features are fed to the linear layer that combines the q-term scores (Fig. 1). In abel-drmm, an additional linear layer concatenates the deep learning document relevance score with the traditional ir features. In bcnn, the additional features are included in the final linear layer (Fig. 5). The additional features we used were: the bm25 score of the document (or of the document the snippet came from, in snippet retrieval); word overlap (binary and idf-weighted) between the query and the document or snippet; and bigram overlap between the query and the document or snippet. The latter features were taken from Mohan et al. (2017). The additional features improved the performance of all models.
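The extra features can be computed from the tokenized query and document/snippet; the sketch below covers the four features named above, but the exact normalizations are our assumptions.

```python
def extra_ir_features(q_tokens, d_tokens, bm25_score, idf):
    """Hand-crafted features appended to each model's final linear layer: bm25 score,
    binary and idf-weighted unigram overlap, and bigram overlap (normalizations assumed)."""
    q, d = set(q_tokens), set(d_tokens)
    overlap = q & d
    binary_overlap = len(overlap) / max(len(q), 1)
    idf_overlap = (sum(idf.get(t, 0.0) for t in overlap)
                   / max(sum(idf.get(t, 0.0) for t in q), 1e-9))
    q_bi, d_bi = set(zip(q_tokens, q_tokens[1:])), set(zip(d_tokens, d_tokens[1:]))
    bigram_overlap = len(q_bi & d_bi) / max(len(q_bi), 1)
    return [bm25_score, binary_overlap, idf_overlap, bigram_overlap]
```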

5.1 Data Resources and Pre-processing

The document collection consists of approx. 28M ‘articles’ (titles and abstracts only) from the ‘Medline/PubMed Baseline 2018’ collection, available from https://www.nlm.nih.gov/databases/download/pubmed_medline.html. We discarded the approx. 10M articles that contained only titles, since very few of these were annotated as relevant. For each of the remaining 18M articles, the corresponding document was the concatenation of its title and abstract. These documents were then indexed with Galago (version 3.10, http://www.lemurproject.org/galago.php), removing stop words and applying Krovetz’s stemmer Krovetz (1993). This served as our pre-retrieval model.

Word embeddings were pre-trained by applying word2vec Mikolov et al. (2013) to the 28M ‘articles’ of the Medline/PubMed collection. idf values were computed over the 18M articles that contained both titles and abstracts. We used the Gensim implementation of word2vec (skip-gram model, v. 3.3.0, https://radimrehurek.com/gensim/models/word2vec.html) with negative sampling, window size 5, and default values for the other hyper-parameters, to produce word embeddings of 200 dimensions. The word embeddings and the code of our experiments are available at https://github.com/nlpaueb/aueb-bioasq6. The word embeddings were not updated when training the document relevance ranking models. For tokenization, we used the ‘bioclean’ tool provided by BioASQ, which accompanies an older set of embeddings provided by BioASQ (http://participants-area.bioasq.org/tools/BioASQword2vec/). In snippet retrieval, we used nltk’s English sentence splitter (nltk v3.2.3, https://www.nltk.org/api/nltk.tokenize.html).
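For reference, the embedding training step corresponds to a Gensim call along the following lines; the toy corpus, the workers count, and the output path are placeholders (the real corpus is the 28M tokenized titles and abstracts).

```python
from gensim.models import Word2Vec

# Toy corpus standing in for the 28M tokenized Medline/PubMed titles and abstracts.
sentences = [["acute", "myeloid", "leukemia", "treatment"],
             ["braf", "mutations", "in", "melanoma", "treatment"]]

model = Word2Vec(sentences,
                 sg=1,          # skip-gram
                 negative=5,    # negative sampling (Gensim's default count)
                 window=5,      # window size 5, as in Section 5.1
                 size=200,      # 200-dimensional vectors ('size' in Gensim 3.3; 'vector_size' in 4.x)
                 min_count=1,   # only needed for this toy corpus; defaults were kept for the real collection
                 workers=4)     # assumed; not specified in the paper
model.wv.save_word2vec_format("pubmed_w2v_200.bin", binary=True)  # hypothetical output path
```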

To train and tune the models we used years 1–5 of the BioASQ data, using batch 5 of year 5 as the development set for the final submitted models, specifically when selecting the optimal training epoch. We report test results (f1, map, gmap) on batches 1–5 of year 6, taken from the official results table (http://participants-area.bioasq.org/results/6b/phaseA/). The names of our systems had been modified for the blind review.

Details on the three evaluation metrics are provided by Tsatsaronis et al. (2015). They are standard, with the exception that map here always assumes 10 relevant documents/snippets, which is the maximum number of documents/snippets the participating systems were allowed to return per query.

5.2 Hyperparameters

All drmm-based models were trained with Adam Kingma and Ba (2014), with a learning rate of 0.01 and a batch size of 32. We used a hinge loss with a margin of 1.0 over pairs consisting of a single positive and a single negative document of the same query. All models used a two-layer mlp to score q-terms (dense layers of Fig. 2), with leaky-relu activation functions and 8 dimensions per hidden layer. For the context-sensitive term encodings (Fig. 3), a single layer was used, again with leaky-relu activations. For the density-based extension of abel-drmm (Section 2.2.2), a fixed window size $w$ was used; for the confidence extension, fixed values of $N$ and $t$.

term-pacrr was also trained with Adam, with a learning rate of 0.001 and a batch size of 32. Following Hui et al. (2018), we used a binary log-loss over pairs of a single positive and a single negative document of the same query. The maximum query length was set to 30 and the maximum document length to 300. We used 16 filters per kernel size, followed by row-wise $k$-max pooling (Section 2.1). term-pacrr used a two-layer mlp with relu activations and 7-dimensional hidden layers to independently score each document-aware q-term encoding.

bcnn was trained using a binary log-loss and AdaGrad Duchi et al. (2011), with a learning rate of 0.08 and regularization. We used 50 convolution kernels (filters) of width $w = 4$ in each convolution layer, and two convolution/pooling blocks. Batch sizes were set to 200. Snippets were truncated to 40 tokens; questions were never truncated.

5.3 Official Submissions

We submitted 5 different systems to the BioASQ challenge, all of which consist of components described above.

  • aueb-nlp-1: Combo of 10 runs of term-pacrr (Section 2.1) for document retrieval, followed by bcnn (Section 3) for snippet retrieval.

  • aueb-nlp-2: Combo of 10 runs of abel-drmm (Section 2.2) for document retrieval, followed by bcnn for snippet retrieval.

  • aueb-nlp-3: Combo of 10 runs of term-pacrr and 10 runs of abel-drmm for document retrieval, followed by bcnn for snippet retrieval.

  • aueb-nlp-4: abel-drmm with the density extension (Section 2.2.2) for document retrieval, followed by bcnn for snippet retrieval.

  • aueb-nlp-5: abel-drmm with both the density and the confidence extensions (Section 2.2.2) for document retrieval, followed by bcnn for snippet retrieval. This system was submitted for batches 2–5 only.

In combination (combo) systems, we obtained 10 versions of the corresponding model by retraining it 10 times with different random seeds, and then used a simple voting scheme: a document ranked at position 1 by a model receives 10 votes, a document at position 2 receives 9 votes, and so on down to position 10, which receives 1 vote. Votes are then aggregated over all models in the combination. While voting did not improve upon the best single model, it made the results more stable across different runs.
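A sketch of this voting scheme over ranked document-id lists (one list per retrained model):

```python
from collections import defaultdict

def combine_runs(runs, top_k=10):
    """runs: one ranked list of document ids per retrained model (10 runs in the combo systems).
    A document at rank r (1-based) receives top_k - r + 1 votes; votes are summed across runs
    and documents are re-ranked by total votes."""
    votes = defaultdict(int)
    for run in runs:
        for rank, doc_id in enumerate(run[:top_k], start=1):
            votes[doc_id] += top_k - rank + 1
    return sorted(votes, key=votes.get, reverse=True)[:top_k]
```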

5.4 Results

Results are given in Table 1. There are a number of things to note. First, for document retrieval, there is very little difference between our submitted models. Both the pacrr- and drmm-based models perform well (usually at or near the top), with less than 1 map point separating them. These systems were all competitive, and for 4 of the 5 batches one of them was the top scoring system in the competition. On average, the experimental abel-drmm system (aueb-nlp-4) scored best among AUEB's submissions and in aggregate over all submissions, but by a small margin (0.1053 average map versus 0.1016 for term-pacrr). The exception was the high-precision system (aueb-nlp-5), which did worse on all metrics except f1, where it was easily the best system in the 4 batches in which it participated. This is not particularly surprising, but it impacted snippet selection, as we will see.

For snippet selection, all systems (aueb-nlp-[1-4]) did well, and it is hard to discern a pattern indicating that one base document retrieval model's results are more conducive to snippet selection than another's. The exception is the high-precision document retrieval model of aueb-nlp-5, which had by far the best snippet scores among AUEB's submissions and in the challenge as a whole. The main reason for this is that the snippet retrieval component was trained assuming only relevant documents as input. Thus, if we fed it all 10 documents, even when some were not relevant, it could still rank a snippet from an irrelevant document high, since it was not trained to avoid this. By sending the snippet retrieval model only high-precision document sets, it focused on finding good snippets at the expense of potentially missing some relevant documents.

DOCUMENT RETRIEVAL

System            F1      MAP     GMAP
Batch 1
  aueb-nlp-1      0.2546  0.1246  0.0282
  aueb-nlp-2      0.2462  0.1229  0.0293
  aueb-nlp-3      0.2564  0.1271  0.0280
  aueb-nlp-4      0.2515  0.1255  0.0235
  Top Competitor  0.2216  0.1058  0.0113
Batch 2
  aueb-nlp-1      0.2264  0.1096  0.0148
  aueb-nlp-2      0.2473  0.1207  0.0200
  aueb-nlp-3      0.2364  0.1178  0.0161
  aueb-nlp-4      0.2350  0.1182  0.0161
  aueb-nlp-5      0.3609  0.1014  0.0112
  Top Competitor  0.2265  0.1201  0.0183
Batch 3
  aueb-nlp-1      0.2345  0.1122  0.0101
  aueb-nlp-2      0.2345  0.1147  0.0108
  aueb-nlp-3      0.2350  0.1135  0.0109
  aueb-nlp-4      0.2345  0.1137  0.0106
  aueb-nlp-5      0.4093  0.0973  0.0062
  Top Competitor  0.2186  0.1281  0.0113
Batch 4
  aueb-nlp-1      0.2136  0.0971  0.0070
  aueb-nlp-2      0.2148  0.0996  0.0069
  aueb-nlp-3      0.2134  0.1000  0.0068
  aueb-nlp-4      0.2094  0.0995  0.0064
  aueb-nlp-5      0.3509  0.0875  0.0044
  Top Competitor  0.2044  0.0967  0.0073
Batch 5
  aueb-nlp-1      0.1541  0.0646  0.0009
  aueb-nlp-2      0.1522  0.0678  0.0013
  aueb-nlp-3      0.1513  0.0663  0.0010
  aueb-nlp-4      0.1590  0.0695  0.0012
  aueb-nlp-5      0.1780  0.0594  0.0008
  Top Competitor  0.1513  0.0680  0.0009

SNIPPET RETRIEVAL

System            F1      MAP     GMAP
Batch 1
  aueb-nlp-1      0.1296  0.0687  0.0029
  aueb-nlp-2      0.1347  0.0665  0.0026
  aueb-nlp-3      0.1329  0.0661  0.0028
  aueb-nlp-4      0.1297  0.0694  0.0024
  Top Competitor  0.1028  0.0710  0.0002
Batch 2
  aueb-nlp-1      0.1329  0.0717  0.0034
  aueb-nlp-2      0.1434  0.0750  0.0044
  aueb-nlp-3      0.1355  0.0734  0.0033
  aueb-nlp-4      0.1397  0.0713  0.0037
  aueb-nlp-5      0.1939  0.1368  0.0045
  Top Competitor  0.1416  0.0938  0.0011
Batch 3
  aueb-nlp-1      0.1563  0.1331  0.0046
  aueb-nlp-2      0.1494  0.1262  0.0034
  aueb-nlp-3      0.1526  0.1294  0.0038
  aueb-nlp-4      0.1519  0.1293  0.0038
  aueb-nlp-5      0.2744  0.2314  0.0068
  Top Competitor  0.1877  0.1344  0.0014
Batch 4
  aueb-nlp-1      0.1211  0.0716  0.0009
  aueb-nlp-2      0.1307  0.0821  0.0011
  aueb-nlp-3      0.1251  0.0747  0.0009
  aueb-nlp-4      0.1180  0.0750  0.0009
  aueb-nlp-5      0.1940  0.1425  0.0017
  Top Competitor  0.1306  0.0980  0.0006
Batch 5
  aueb-nlp-1      0.0768  0.0357  0.0003
  aueb-nlp-2      0.0728  0.0405  0.0004
  aueb-nlp-3      0.0747  0.0377  0.0004
  aueb-nlp-4      0.0790  0.0403  0.0004
  aueb-nlp-5      0.0778  0.0526  0.0003
  Top Competitor  0.0542  0.0475  0.0001
Table 1: Performance on BioASQ Task 6b, Phase A (batches 1–5) for document and snippet retrieval (upper and lower table, respectively). The systems are described in Section 5.3. Top Competitor is the top scoring entry, by map (the official metric of BioASQ), that is not among AUEB's submissions; the top AUEB entry per batch can likewise be identified by map.

6 Related Work

Document ranking has been studied since the dawn of ir; classic term-weighting schemes were designed for this problem Sparck Jones (1972); Robertson and Sparck Jones (1976). With the advent of statistical nlp and statistical ir, probabilistic language and topic modeling were explored Zhai and Lafferty (2001); Wei and Croft (2006), followed recently by deep learning ir methods Lu and Li (2013); Hu et al. (2014); Palangi et al. (2016); Guo et al. (2016); Hui et al. (2017).

Most document relevance ranking methods fall within two categories: representation-based, e.g., Palangi et al. (2016), or interaction-based, e.g., Lu and Li (2013). In the former, representations of the query and document are generated independently. Interaction between the two only happens at the final stage, where a score is generated indicating relevance. End-to-end learning and backpropagation through the network tie the two representations together. In the interaction-based paradigm, which is where the models studied here fall, explicit encodings of pairs of queries and documents are induced. This allows direct modeling of exact- or near-matching terms (e.g., synonyms), which is crucial for relevance ranking. Indeed, Guo et al. (2016) showed that the interaction-based drmm outperforms previous representation-based methods. On the other hand, interaction-based models are less efficient, since one cannot index a document representation independently of the query. This is less important, though, when relevance ranking methods re-rank the top documents returned by a conventional ir engine, which is the scenario we consider here.

In terms of biomedical document and snippet retrieval, several methods have been proposed for BioASQ Tsatsaronis et al. (2015), mostly based on traditional ir and ml techniques. For example, the system of Jin et al. (2017), which was the top scoring one in previous incarnations of BioASQ (ustb team), uses an underlying graphical model for scoring, coupled with a number of traditional ir techniques like pseudo-relevance feedback.

The most related work from the biomedical domain is that of Mohan et al. (2017), who use a deep learning architecture for document ranking. Like our systems, they use interaction-based models to score and aggregate q-term matches relative to a document, though with different document-aware q-term representations, namely best-match d-term distance scores. Also unlike our work, they focus on user click data as a supervision signal, and they use context-insensitive representations of document-query term interactions.

There are several studies on deep learning systems for snippet selection, which aim to improve the classification and ranking of snippets extracted from a document given a specific query. Wang and Nyberg (2015) use a stacked bidirectional lstm (bilstm); their system takes as input a question and a sentence, concatenates them into a single string, and forwards that string to the input layer of the bilstm. Rao et al. (2016) employ a neural architecture to produce representations of (question, sentence) pairs and learn to rank pairs of the form (question, relevant sentence) higher than pairs of the form (question, irrelevant sentence) using Noise-Contrastive Estimation. Finally, Amiri et al. (2016) use autoencoders to learn to encode input texts and use the resulting encodings to compute the similarity between text pairs. This is similar in nature to bcnn, the main difference being the encoding mechanism.

7 Conclusions

We presented the models, experimental set-up, and results of AUEB’s submissions to the document and snippet retrieval tasks of the sixth year of the BioASQ challenge. Our results show that deep learning models are not only competitive in both tasks, but in aggregate were the top scoring systems. This is in contrast to previous years where traditional ir systems tended to dominate. In future years, as deep ranking models improve and training data sets get larger, we expect to see bigger gains from deep learning models.

References

  • Amiri et al. (2016) Hadi Amiri, Philip Resnik, Jordan Boyd-Graber, and Hal Daumé III. 2016. Learning text pair similarity with context-sensitive autoencoders. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1882–1892, Berlin, Germany.
  • Bai et al. (2018) Shaojie Bai, J Zico Kolter, and Vladlen Koltun. 2018. An empirical evaluation of generic convolutional and recurrent networks for sequence modeling. arXiv preprint arXiv:1803.01271.
  • Duchi et al. (2011) John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12:2121–2159.
  • Guo et al. (2016) Jiafeng Guo, Yixing Fan, Qingyao Ai, and W. Bruce Croft. 2016. A deep relevance matching model for ad-hoc retrieval. In Proceedings of the 25th ACM International Conference on Information and Knowledge Management, pages 55–64, Indianapolis, IN.
  • Hu et al. (2014) Baotian Hu, Zhengdong Lu, Hang Li, and Qingcai Chen. 2014. Convolutional neural network architectures for matching natural language sentences. In Advances in Neural Information Processing Systems 27, pages 2042–2050.
  • Hui et al. (2017) Kai Hui, Andrew Yates, Klaus Berberich, and Gerard de Melo. 2017. PACRR: A position-aware neural IR model for relevance matching. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 1049–1058, Copenhagen, Denmark.
  • Hui et al. (2018) Kai Hui, Andrew Yates, Klaus Berberich, and Gerard de Melo. 2018. Co-PACRR: A context-aware neural IR model for ad-hoc retrieval. In Proceedings of the 11th ACM International Conference on Web Search and Data Mining, pages 279–287, Marina Del Rey, CA.
  • Jin et al. (2017) Zan-Xia Jin, Bo-Wen Zhang, Fan Fang, Le-Le Zhang, and Xu-Cheng Yin. 2017. A multi-strategy query processing approach for biomedical question answering: Ustb_prir at BioASQ 2017 Task 5B. In BioNLP 2017, pages 373–380.
  • Kingma and Ba (2014) Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
  • Krovetz (1993) Robert Krovetz. 1993. Viewing morphology as an inference process. In Proceedings of the 16th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 191–202, Pittsburgh, PA.
  • Lu and Li (2013) Zhengdong Lu and Hang Li. 2013. A deep architecture for matching short texts. In Proceedings of the 26th International Conference on Neural Information Processing Systems - Volume 1, pages 1367–1375, Lake Tahoe, NV.
  • McDonald et al. (2018) Ryan McDonald, Georgios-Ioannis Brokos, and Ion Androutsopoulos. 2018. Deep relevance ranking using enhanced document-query interactions. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium.
  • Mikolov et al. (2013) Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Proceedings of the 26th International Conference on Neural Information Processing Systems - Volume 2, pages 3111–3119, Lake Tahoe, Nevada.
  • Mohan et al. (2017) Sunil Mohan, Nicolas Fiorini, Sun Kim, and Zhiyong Lu. 2017. Deep learning for biomedical information retrieval: Learning textual relevance from click logs. In BioNLP 2017, pages 222–231.
  • Palangi et al. (2016) Hamid Palangi, Li Deng, Yelong Shen, Jianfeng Gao, Xiaodong He, Jianshu Chen, Xinying Song, and Rabab Ward. 2016. Deep sentence embedding using long short-term memory networks: Analysis and application to information retrieval. IEEE/ACM Transactions on Audio, Speech and Language Processing, 24(4):694–707.
  • Rao et al. (2016) Jinfeng Rao, Hua He, and Jimmy J. Lin. 2016. Noise-contrastive estimation for answer selection with deep neural networks. In CIKM, pages 1913–1916, Indianapolis, IN.
  • Robertson et al. (1995) Stephen Robertson, S. Walker, S. Jones, M. M. Hancock-Beaulieu, and M. Gatford. 1995. Okapi at TREC–3. In Overview of the Third Text Retrieval Conference, pages 109–126.
  • Robertson and Sparck Jones (1976) Stephen E. Robertson and Karen Sparck Jones. 1976. Relevance weighting of search terms. Journal of the Association for Information Science and Technology, 27(3):129–146.
  • Severyn and Moschitti (2015) Aliaksei Severyn and Alessandro Moschitti. 2015. Learning to rank short text pairs with convolutional deep neural networks. In Proceedings of the 38th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 373–382, Santiago, Chile.
  • Sparck Jones (1972) Karen Sparck Jones. 1972. A statistical interpretation of term specificity and its application in retrieval. Journal of documentation, 28(1):11–21.
  • Tsatsaronis et al. (2015) George Tsatsaronis, Georgios Balikas, Prodromos Malakasiotis, Ioannis Partalas, Matthias Zschunke, Michael R. Alvers, Dirk Weissenborn, Anastasia Krithara, Sergios Petridis, Dimitris Polychronopoulos, Yannis Almirantis, John Pavlopoulos, Nicolas Baskiotis, Patrick Gallinari, Thierry Artiéres, Axel-Cyrille Ngonga Ngomo, Norman Heino, Eric Gaussier, Liliana Barrio-Alvers, Michael Schroeder, Ion Androutsopoulos, and Georgios Paliouras. 2015. An overview of the BioASQ large-scale biomedical semantic indexing and question answering competition. BMC Bioinformatics, 16(1):138.
  • Wang and Nyberg (2015) Di Wang and Eric Nyberg. 2015. A long short-term memory model for answer sentence selection in question answering. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 707–712, Beijing, China.
  • Wei and Croft (2006) Xing Wei and W Bruce Croft. 2006. LDA-based document models for ad-hoc retrieval. In Proceedings of the 29th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 178–185, Seattle, WA.
  • Yin et al. (2016) Wenpeng Yin, Hinrich Schütze, Bing Xiang, and Bowen Zhou. 2016. ABCNN: Attention-based convolutional neural network for modeling sentence pairs. Transactions of the Association for Computational Linguistics, 4:259–272.
  • Zhai and Lafferty (2001) Chengxiang Zhai and John Lafferty. 2001. A study of smoothing methods for language models applied to ad hoc information retrieval. In Proceedings of the 24th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 334–342, New Orleans, LA.