With the rising popularity of information access through devices with small screens, e.g., smartphones, and voice-only interfaces, e.g., Amazon’s Alexa and Google Home, there is a growing need for retrieval models that satisfy user information needs with sentence-level and passage-level answers. This has motivated researchers to study answer sentence and passage retrieval, in particular in response to non-factoid questions (Cohen and Croft, 2016; Yulianti et al., 2018). Non-factoid questions are open-ended questions that require complex answers, such as descriptions, opinions, or explanations, which are mostly passage-level texts. Questions such as “what is the reason for life?” are categorized as non-factoid questions. We believe this type of question plays a pivotal role in the overall quality of question answering systems, since the technologies for answering them are not as mature as those for factoid questions, which seek precise facts, e.g., “At what age did Rossini stop writing opera?”.
Despite the widely-known importance of studying answer passage retrieval for non-factoid questions (Cohen and Croft, 2016, 2018; Keikha et al., 2014; Yulianti et al., 2018), research progress on this task is limited by the availability of high-quality public data. Some existing collections, e.g., (Keikha et al., 2014; Shah and Pomerantz, 2010), consist of too few queries to train sophisticated machine learning models for the task. Others, e.g., (Cohen and Croft, 2016), significantly suffer from incomplete judgments. Most recently, Cohen et al. (2018) developed a publicly available collection for non-factoid question answering with a few thousand questions, called WikiPassageQA. Although WikiPassageQA is an invaluable contribution to the community, it does not cover all aspects of the non-factoid question answering task and has the following limitations:
it only contains an average of 1.7 relevant passages per question and does not cover questions whose multiple aspects are answered by multiple passages;
it was created from the Wikipedia website, containing only formal text;
more importantly, the questions in the WikiPassageQA dataset were generated by crowdworkers, which is different from the questions that users ask in real-world systems;
the relevant passages in WikiPassageQA contain the answer to the question in addition to some surrounding text. Therefore, some parts of a relevant passage may not answer any aspects of the question;
it only provides binary relevance judgments.
To address these shortcomings, in this paper, we create a novel dataset for non-factoid question answering research, called ANTIQUE (short for “answering non-factoid questions”), with a total of 2,626 questions. In more detail, we focus on non-factoid questions asked by users of Yahoo! Answers, a community question answering (CQA) service. Non-factoid CQA data without relevance annotation was previously used by Cohen and Croft (2016); however, as the authors note, it significantly suffers from incomplete judgments (more information on existing collections is provided in Section 2). We collected a set of four-level relevance judgments through a careful crowdsourcing procedure involving multiple iterations and several automatic and manual quality checks. We paid particular attention to collecting reliable and comprehensive relevance judgments for the test set; therefore, we annotated the test answers after conducting result pooling among several term-matching and neural retrieval models. In summary, ANTIQUE provides annotations for 34,011 question-answer pairs, which is significantly larger than many comparable datasets.
We further provide a brief analysis to uncover the characteristics of ANTIQUE. Moreover, we conduct extensive experiments with ANTIQUE to present benchmark results of various methods, including classical and neural IR models, demonstrating the unique challenges ANTIQUE introduces to the community. To foster research in this area, we release ANTIQUE for research purposes at https://ciir.cs.umass.edu/downloads/Antique/.
2. Existing Related Collections
Factoid QA Datasets. TREC QA (Wang et al., 2007) and WikiQA (Yang et al., 2015) are examples of factoid QA datasets, whose answers are typically brief and concise facts, such as named entities and numbers. InsuranceQA (Feng et al., 2015) is another factoid dataset, in the domain of insurance. ANTIQUE, on the other hand, consists of open-domain non-factoid questions that require explanatory answers. The answers to these questions are often passage-level, in contrast to the factoid QA datasets.
Non-Factoid QA Datasets. There have been efforts to develop non-factoid question answering datasets (Habernal et al., 2016; Keikha et al., 2014; Yang et al., 2016b). Keikha et al. (2014) introduced the WebAP dataset, a non-factoid QA dataset with 82 queries. The questions and answers in WebAP were not generated by real users. There exist a number of datasets that partially contain non-factoid questions and were collected from CQA websites, such as Yahoo! Webscope L6, Qatar Living (Nakov et al., 2017), and StackExchange. These datasets are often restricted to a specific domain, suffer from incomplete judgments, and/or do not contain sufficient non-factoid questions for training sophisticated machine learning models. The nfL6 dataset (Cohen and Croft, 2016) is a collection of non-factoid questions extracted from the Yahoo! Webscope L6. Its main drawback is the absence of complete relevance annotation: previous work assumes that the only relevant answer is the one the question writer marked as correct, which is far from realistic. That is why we aim to collect a complete set of relevance annotations. WikiPassageQA is another non-factoid QA dataset, recently created by Cohen et al. (2018). As mentioned in Section 1, despite its great potential, it has a number of limitations, which ANTIQUE addresses to provide a complementary benchmark for non-factoid question answering. More recently, Microsoft released the MS MARCO V2.1 passage re-ranking dataset (Nguyen et al., 2016), containing a large number of queries sampled from the Bing search engine. In addition to not being specific to non-factoid QA, it significantly suffers from incomplete judgments. In contrast, ANTIQUE provides a reliable collection with complete relevance annotations for evaluating non-factoid QA models.
Machine Reading Comprehension (MRC) Datasets. MRC has recently attracted a great deal of attention in the NLP community. The MRC task is often defined as selecting a specific short text span within a sentence, selecting the answer from predefined choices, or predicting a blanked-out word of a sentence. There exist a number of datasets for MRC, such as SQuAD (Rajpurkar et al., 2016), BAbI (Weston et al., 2015), and MS MARCO v1 (Nguyen et al., 2016). In this paper, we study retrieval-based QA tasks, thus MRC is out of the scope of the paper.
3. Data Collection
In this section, we describe how we collected ANTIQUE. Following Cohen and Croft (2016), we used the publicly available dataset of non-factoid questions collected from the Yahoo! Webscope L6, called nfL6 (available at https://ciir.cs.umass.edu/downloads/nfL6/).
Pre-processing & Filtering. We conducted the following steps for pre-processing and question sampling:
questions with fewer than 3 terms were omitted (excluding punctuation marks);
questions with no best answer were removed;
duplicate or near-duplicate questions were removed. We calculated term overlap between questions and, among questions with more than 90% term overlap, randomly kept only one;
we omitted the questions under the categories of “Yahoo! Products” and “Computers & Internet” since they are beyond the knowledge and expertise of most workers;
From the remaining data, we randomly sampled 2,626 questions (out of 66,634).
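The filtering steps above can be sketched as follows. This is an illustrative reconstruction: the question fields, the tokenizer, and the denominator used for term overlap are our assumptions, not the paper's exact implementation.

```python
import string

def tokenize(text):
    """Lowercase and split on whitespace, stripping punctuation marks."""
    table = str.maketrans("", "", string.punctuation)
    return [t for t in text.translate(table).lower().split() if t]

def term_overlap(q1, q2):
    """Fraction of shared terms, relative to the shorter question (assumed)."""
    s1, s2 = set(tokenize(q1)), set(tokenize(q2))
    if not s1 or not s2:
        return 0.0
    return len(s1 & s2) / min(len(s1), len(s2))

EXCLUDED = frozenset({"Yahoo! Products", "Computers & Internet"})

def filter_questions(questions):
    """Apply the paper's pre-processing steps to a list of question dicts."""
    kept = []
    for q in questions:
        if len(tokenize(q["text"])) < 3:
            continue  # fewer than 3 terms
        if not q.get("best_answer"):
            continue  # no best answer
        if q.get("category") in EXCLUDED:
            continue  # excluded categories
        # near-duplicate removal: keep only one of any pair with >90% overlap
        if any(term_overlap(q["text"], k["text"]) > 0.9 for k in kept):
            continue
        kept.append(q)
    return kept
```

A final random sample of 2,626 questions would then be drawn from the output of `filter_questions`.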
Each question in nfL6 corresponds to a list of answers called “nbest answers.” For every question, one answer is marked by the question author on the community website as the best answer. It is important to note that, as different people have different information needs, this answer is not necessarily the best answer to the question; moreover, many relevant answers may have been added after the user chose the best answer. Nevertheless, in this work, we respect the user’s explicit feedback, assuming that the answer selected by the actual user is relevant to the question. Therefore, we do not collect relevance assessments for those answers.
3.1. Relevance Assessment
We created a Human Intelligence Task (HIT) on Amazon Mechanical Turk (http://www.mturk.com/), in which we presented workers with a question-answer pair and instructed them to annotate the answer with a label between 1 and 4. The instructions started with a short introduction to the task and its motivations, followed by detailed annotation guidelines. Since workers needed background knowledge to answer the majority of the questions (e.g., “Can someone explain the theory of …?”), we also included the question’s best answer in the instructions and called it a “possibly correct answer.” Because we observed that, in some cases, the question was very subjective and could have multiple correct answers, we made it clear in the instructions that other answers could differ from the provided answer and still be correct. Figure 1 shows the labeling interface, where we provided a question and its “possibly correct answer,” asking workers to judge the relevance of a given answer to the question.
Label Definitions. To facilitate the labeling procedure, we described the definition of labels to workers in the form of a flowchart. Our aim was to preserve the notion of relevance in question answering systems, as distinct from the typical topical relevance definition in ad-hoc retrieval tasks. The labels are defined as follows:
Label 4: It looks reasonable and convincing. Its quality is on par with or better than the “Possibly Correct Answer”. Note that it does not have to provide the same answer as the “Possibly Correct Answer”.
Label 3: It can be an answer to the question; however, it is not sufficiently convincing. One would expect an answer of much better quality for the question.
Label 2: It does not answer the question, or provides an unreasonable answer; however, it is not out of context. Therefore, you cannot accept it as an answer to the question.
Label 1: It is completely out of context or does not make any sense.
Finally, we included 15 diverse examples of QA pairs with their annotations and explanation of why and how the annotations were done.
Overall, we launched 7 assignment batches, appointing 3 workers to each QA pair. When the workers agreed on a label (i.e., by majority vote), we considered that label the ground truth. We then added all QA pairs with no agreement to a new batch and performed a second round of annotation. It is interesting to note that the ratio of pairs with no agreement was nearly the same across the 7 batches (~13%). In the rare cases of no agreement after two rounds of annotation (776 pairs), an expert annotator decided on the final label. To allow further analysis, we added a flag to the dataset identifying the answers annotated by the expert annotator. In total, the annotation task cost 2,400 USD.
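The two-round aggregation described above can be sketched as follows; the data structures are hypothetical, since the paper's annotation pipeline itself is not released.

```python
from collections import Counter

def aggregate(labels):
    """Return the majority label among 3 worker votes, or None if all differ."""
    top, count = Counter(labels).most_common(1)[0]
    return top if count >= 2 else None

def run_round(pair_labels):
    """Split QA pairs into resolved ones (majority vote reached) and
    disputed ones, which go into a new batch for re-annotation."""
    resolved, disputed = {}, []
    for pair_id, labels in pair_labels.items():
        label = aggregate(labels)
        if label is None:
            disputed.append(pair_id)   # no agreement: re-annotate
        else:
            resolved[pair_id] = label  # majority vote is the ground truth
    return resolved, disputed
```

Pairs still disputed after a second round would be escalated to the expert annotator.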
Quality Check. To ensure the quality of the data, we limited the HIT to workers with an over 98% approval rate who had completed at least 5,000 assignments (for annotating the test set, we increased this limit to 10,000). 3% of QA pairs were drawn from a set of quality-check questions with obviously objective labels, enabling us to identify workers who did not provide high-quality labels. Moreover, we recorded the click log of the workers to detect any abnormal behavior (e.g., employing automatic labeling scripts) that would affect the quality of the data. Finally, we constantly performed manual quality checks by reading QA pairs and their respective labels. The manual inspection covered 20% of each worker’s submissions as well as the QA pairs with no agreement.
Table 1. Statistics of ANTIQUE.

# training (test) questions    2,426 (200)
# training (test) answers      27,422 (6,589)
# label 4                      13,067
# label 3                      9,276
# label 2                      8,754
# label 1                      2,914
# total workers                577
# total judgments              148,252
# rejected judgments           17,460
% of rejections                12%
3.2. Data Splits
Table 2. Benchmark results of retrieval models on ANTIQUE (metrics described in Section 5).

DRMM-TKS (Guo et al., 2016)   0.2315  0.5774  0.4337  0.3827  0.3005  0.4949  0.4626  0.4531
aNMM (Yang et al., 2016a)     0.2563  0.6250  0.4847  0.4388  0.3306  0.5289  0.5127  0.4904
BERT (Devlin et al., 2018)    0.3771  0.7968  0.7092  0.6071  0.4791  0.7126  0.6570  0.6423
Training Set. For the training set, we annotate the “nbest answers” list (see Section 3) for each question and assume that, for each question, the answers to other questions are irrelevant. As we removed similar questions from the dataset, this assumption is fair. To test it, we sampled 100 questions from the filtered version of nfL6 and annotated the top 10 results retrieved by BM25 using the same crowdsourcing procedure. Only 13.7% of those documents were annotated as relevant (label 3 or 4). This error rate can be tolerated in training, as it enables us to collect a significantly larger amount of training labels. For the test set, on the other hand, we performed pooling to label all possibly relevant answers. In total, ANTIQUE’s training set contains 27,422 answer annotations, as shown in Table 1, i.e., 11.3 annotated candidate answers per training question, which is significantly more than comparable datasets, e.g., WikiPassageQA (Cohen et al., 2018).
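The training-set assumption above (answers to other questions serve as negatives) can be illustrated with the following sketch; the field names and the number of sampled negatives are our assumptions, not the released dataset's format.

```python
import random

def build_training_pairs(questions, n_negatives=10, seed=42):
    """For each question, pair its 'nbest answers' as positives and randomly
    sampled answers from OTHER questions as assumed negatives. Per the paper's
    estimate, roughly 13.7% of such negatives are actually relevant (label noise)."""
    rng = random.Random(seed)
    all_answers = [(q["id"], a) for q in questions for a in q["nbest"]]
    pairs = []
    for q in questions:
        for a in q["nbest"]:
            pairs.append((q["id"], a, 1))          # positive
        negatives = [a for qid, a in all_answers if qid != q["id"]]
        for a in rng.sample(negatives, min(n_negatives, len(negatives))):
            pairs.append((q["id"], a, 0))          # assumed negative
    return pairs
```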
Test Set. The test set in ANTIQUE consists of 200 questions randomly sampled from nfL6 after pre-processing and filtering. Statistics of the test set can be found in Table 1. The set of candidate answers for annotation was selected by performing depth-k pooling: we took the union of the top-k results of various retrieval models, including term-matching and neural models (listed in Table 2), together with the “nbest answers” set, for annotation.
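A minimal sketch of the depth-k pooling described above (the pool depth k is a parameter; the specific value used by the authors is not restated here):

```python
def depth_k_pool(rankings, nbest, k=10):
    """Union of the top-k answers from each retrieval model's ranked list,
    plus the question's 'nbest answers', as the candidate set to annotate."""
    pool = set(nbest)
    for ranked_list in rankings:
        pool.update(ranked_list[:k])
    return pool
```

Every answer in the pool would then be sent through the crowdsourced annotation procedure of Section 3.1.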
4. Data Analysis
In this section, we present a brief analysis of ANTIQUE to highlight its characteristics.
Statistics of ANTIQUE. Table 1 lists general statistics of ANTIQUE. As shown, ANTIQUE consists of 2,426 non-factoid questions for training plus 200 questions as a test set. Furthermore, ANTIQUE contains 27.4k and 6.5k annotations (judged answers) for the training and test sets, respectively. We also report the total number of answers with each label.
Workers Performance. Overall, we launched 7 different crowdsourcing batches to collect ANTIQUE. This allowed us to identify and ban less effective workers. As we see in Table 1, a total number of 577 workers made over 148k annotations (257 per worker), out of which we rejected 12% because they failed to satisfy the quality criteria.
Questions Distribution. Figure 2 shows how questions are distributed in ANTIQUE by reporting the top 40 starting trigrams of the questions. As shown in the figure, the majority of the questions start with “how” and “why,” constituting 38% and 36% of the questions, respectively. Notably, a considerable number of questions start with “how do you,” “how can you,” “what do you,” and “why do you,” suggesting that their corresponding answers are highly subjective and opinion-based. We also see that a major fraction of questions start with “how can I” and “how do I,” indicating the importance and dominance of personal questions.
Answers Distribution. Finally, in Figure 3, we plot the distribution of the number of “nbest answers” per question. The majority of questions (54%) have 9 or fewer nbest answers, and 82% have 14 or fewer. The distribution, however, has a long tail that is not shown in the figure.
5. Benchmark Results
In this section, we provide benchmark results on the ANTIQUE dataset. To this end, we report results for a wide range of retrieval models (mostly neural models) using standard retrieval metrics, from precision-oriented to recall-oriented ones (see Table 2). Note that for the metrics that require binary labels (i.e., MAP, MRR, and P@k), we consider labels 3 and 4 relevant and labels 1 and 2 non-relevant. Given the definition of our labels (see Section 3), we recommend this setting for future work. For nDCG, we use the four-level relevance annotations, mapping the 1-to-4 labels to gains of 0 to 3.
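The evaluation setting above can be sketched as follows. This is an illustrative implementation of the binarization and label-to-gain mapping, not the authors' evaluation script; each `labels` list holds the 1-to-4 judgments of a ranked result list, in rank order.

```python
import math

def binarize(label):
    """Labels 3 and 4 are relevant; labels 1 and 2 are non-relevant."""
    return 1 if label >= 3 else 0

def precision_at_k(labels, k):
    """P@k over binarized labels of a ranked list."""
    return sum(binarize(l) for l in labels[:k]) / k

def reciprocal_rank(labels):
    """Reciprocal rank of the first relevant (binarized) answer."""
    for rank, l in enumerate(labels, start=1):
        if binarize(l):
            return 1.0 / rank
    return 0.0

def ndcg_at_k(labels, k):
    """nDCG@k with the 1-to-4 labels mapped to gains 0 to 3."""
    gains = [l - 1 for l in labels]
    dcg = sum(g / math.log2(i + 2) for i, g in enumerate(gains[:k]))
    ideal = sorted(gains, reverse=True)
    idcg = sum(g / math.log2(i + 2) for i, g in enumerate(ideal[:k]))
    return dcg / idcg if idcg > 0 else 0.0
```

MAP follows the same binarization, averaging precision at each relevant rank over all judged relevant answers.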
As shown in the table, the neural models significantly outperform BM25, an effective term-matching retrieval model. Among all models, BERT (Devlin et al., 2018) provides the best performance; recent work on passage retrieval has made similar observations (Nogueira and Cho, 2019; Padigela et al., 2019). Since MAP is a recall-oriented metric, the results suggest that all the models still fail to retrieve all relevant answers. There remains large room for improvement, in terms of both precision- and recall-oriented metrics.
6. Conclusions
In this paper, we introduced ANTIQUE, a non-factoid community question answering dataset. The questions in ANTIQUE were sampled from a wide range of categories on Yahoo! Answers, a community question answering service. We collected four-level relevance annotations through a multi-stage crowdsourcing procedure as well as expert annotation. In summary, ANTIQUE consists of 34,011 QA-pair relevance annotations for 2,426 training and 200 test questions. Additionally, we reported benchmark results for a set of retrieval models, ranging from term-matching to recent neural ranking models, on ANTIQUE. Our data analysis and retrieval experiments demonstrated that ANTIQUE introduces unique challenges while fostering research in the domain of non-factoid question answering.
References
- Cohen and Croft (2016) D. Cohen and W. B. Croft. 2016. End to End Long Short Term Memory Networks for Non-Factoid Question Answering. In ICTIR ’16. 143–146.
- Cohen and Croft (2018) D. Cohen and W. B. Croft. 2018. A Hybrid Embedding Approach to Noisy Answer Passage Retrieval. In ECIR ’18.
- Cohen et al. (2018) D. Cohen, L. Yang, and W. B. Croft. 2018. WikiPassageQA: A Benchmark Collection for Research on Non-factoid Answer Passage Retrieval. In SIGIR ’18.
- Devlin et al. (2018) J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova. 2018. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. CoRR (2018).
- Feng et al. (2015) M. Feng, B. Xiang, M. R. Glass, L. Wang, and B. Zhou. 2015. Applying Deep Learning to Answer Selection: A Study and An Open Task. CoRR (2015).
- Guo et al. (2016) J. Guo, Y. Fan, Q. Ai, and W. B. Croft. 2016. A Deep Relevance Matching Model for Ad-hoc Retrieval. In CIKM ’16.
- Habernal et al. (2016) I. Habernal, M. Sukhareva, F. Raiber, A. Shtok, O. Kurland, H. Ronen, J. Bar-Ilan, and I. Gurevych. 2016. New Collection Announcement: Focused Retrieval Over the Web. In SIGIR ’16.
- Keikha et al. (2014) M. Keikha, J. Park, and W. B. Croft. 2014. Evaluating Answer Passages using Summarization Measures. In SIGIR ’14. 963–966.
- Nakov et al. (2017) P. Nakov, D. Hoogeveen, L. Màrquez, A. Moschitti, H. Mubarak, T. Baldwin, and K. Verspoor. 2017. SemEval-2017 Task 3: Community Question Answering. In SemEval ’17. 27–48.
- Nguyen et al. (2016) T. Nguyen, M. Rosenberg, X. Song, J. Gao, S. Tiwary, R. Majumder, and L. Deng. 2016. MS MARCO: A Human Generated MAchine Reading COmprehension Dataset. CoRR abs/1611.09268 (2016).
- Nogueira and Cho (2019) R. Nogueira and K. Cho. 2019. Passage Re-ranking with BERT. CoRR abs/1901.04085 (2019).
- Padigela et al. (2019) H. Padigela, H. Zamani, and W. B. Croft. 2019. Investigating the Successes and Failures of BERT for Passage Re-Ranking. CoRR abs/1903.06902 (2019).
- Rajpurkar et al. (2016) P. Rajpurkar, J. Zhang, K. Lopyrev, and P. Liang. 2016. SQuAD: 100,000+ Questions for Machine Comprehension of Text. CoRR (2016).
- Shah and Pomerantz (2010) C. Shah and J. Pomerantz. 2010. Evaluating and Predicting Answer Quality in Community QA. In SIGIR ’10.
- Wang et al. (2007) M. Wang, N. A. Smith, and T. Mitamura. 2007. What is the Jeopardy Model? A Quasi-Synchronous Grammar for QA. In EMNLP ’07.
- Weston et al. (2015) J. Weston, A. Bordes, S. Chopra, and T. Mikolov. 2015. Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks. CoRR (2015).
- Yang et al. (2016a) L. Yang, Q. Ai, J. Guo, and W. B. Croft. 2016a. aNMM: Ranking Short Answer Texts with Attention-Based Neural Matching Model. In CIKM ’16. 287–296.
- Yang et al. (2016b) L. Yang, Q. Ai, D. Spina, R.-C. Chen, L. Pang, W. B. Croft, J. Guo, and F. Scholer. 2016b. Beyond Factoid QA: Effective Methods for Non-factoid Answer Sentence Retrieval. In ECIR ’16.
- Yang et al. (2015) Y. Yang, S. W. Yih, and C. Meek. 2015. WikiQA: A Challenge Dataset for Open-Domain Question Answering. ACL ’15.
- Yulianti et al. (2018) E. Yulianti, R. Chen, F. Scholer, W. B. Croft, and M. Sanderson. 2018. Document Summarization for Answering Non-Factoid Queries. TKDE (2018).