Question Answering over Curated and Open Web Sources

04/24/2020 ∙ by Rishiraj Saha Roy, et al. ∙ L3S Research Center

The last few years have seen an explosion of research on automated question answering (QA), spanning the communities of information retrieval, natural language processing, and artificial intelligence. This tutorial covers the highlights of this very active period of growth for QA, to give the audience a grasp of the families of algorithms currently in use. We partition research contributions by the underlying source from which answers are retrieved: curated knowledge graphs, unstructured text, or hybrid corpora. We choose this dimension of partitioning as it is the most discriminative when it comes to algorithm design. Other key dimensions are covered within each sub-topic, such as the complexity of the questions addressed, and the degrees of explainability and interactivity introduced in the systems. We conclude the tutorial with the most promising emerging trends in QA, which should help new entrants to the field make informed decisions and take the community forward. Much has changed in the community since the last QA tutorial at SIGIR 2016, and we believe that this timely overview will benefit a large number of conference participants.

1. Motivation

1.1. Background

Over several decades, the field of question answering (QA) grew steadily from early prototypes like BASEBALL (Green Jr et al., 1961), through IBM Watson (Ferrucci et al., 2010), all the way to present-day integration in virtually all personal assistants such as Siri, Cortana, Alexa, and the Google Assistant. In the last few years, however, research on QA has exploded, with top conferences regularly creating submission tracks and presentation sessions dedicated to the topic. This tutorial highlights key contributions to automated QA systems over the last three to four years from the perspectives of information retrieval (IR) and natural language processing (NLP) (Wu et al., 2020; Clark and Gardner, 2018; Christmann et al., 2019; Chen et al., 2017; Lu et al., 2019; Guo et al., 2018; Sun et al., 2018; Vakulenko et al., 2019; Qiu et al., 2020; Dehghani et al., 2019; Rajpurkar et al., 2016).

1.2. Perspectives

In information retrieval, QA was traditionally treated as a special use case of search (Voorhees, 1999): providing crisp, direct answers to certain classes of queries, as an alternative to ranked lists of documents that users would have to sift through. Such queries with objective answers are often referred to as factoid questions (Clarke and Terra, 2003; Cucerzan and Agichtein, 2005), a term whose definition has evolved over the years. Factoid QA became very popular with the emergence of large curated knowledge graphs (KGs) like YAGO (Suchanek et al., 2007), DBpedia (Auer et al., 2007), Freebase (Bollacker et al., 2008), and Wikidata (Vrandečić and Krötzsch, 2014): powerful resources that enable such crisp question answering at scale. Question answering over knowledge graphs, or equivalently knowledge bases (KG-QA or KB-QA), became a field of its own that produces an increasing number of research contributions year over year (Vakulenko et al., 2019; Wu et al., 2020; Qiu et al., 2020; Christmann et al., 2019; Shen et al., 2019; Ding et al., 2019; Bhutani et al., 2019). Effort has also been directed at answering questions over Web tables (Pasupat and Liang, 2015; Iyyer et al., 2017), which can be considered canonicalized versions of the challenges in QA over structured KGs.

In contrast, QA in natural language processing (in one of the major senses we know it today) started with the AI goal of testing whether machines can comprehend simple passages (Rajpurkar et al., 2016; Yang et al., 2018b; Chen et al., 2017; Clark and Gardner, 2018), so as to be able to answer questions posed on the contents of these passages. Over time, this machine reading comprehension (MRC) task became coupled with a retrieval pipeline, resulting in the so-called paradigm of open-domain QA (Dehghani et al., 2019; Wang et al., 2019; Chen et al., 2017), a term that is overloaded with other senses as well (Abujabal et al., 2018; Elgohary et al., 2018). This introduction of the retrieval pipeline led to a revival of text-QA, which had increasingly focused on non-factoid QA (Cohen et al., 2018; Yang et al., 2018a) after the rise of structured KGs. It has also helped bridge the gap between text-QA and KG-QA, with the latter family gradually incorporating supplementary textual sources to boost recall (Sun et al., 2018, 2019; Savenkov and Agichtein, 2016). Considering such heterogeneous sources may often be the right choice, since KGs, while capturing an impressive amount of objective world knowledge, are inherently incomplete.

Terminology. In this tutorial, we refer to knowledge graphs and Web tables as the curated Web, and all unstructured text available online as the open Web.

2. Objectives

As mentioned at the beginning, the importance of QA has been fuelled to a large extent by the ubiquity of personal assistants, which have also helped bring these seemingly independent research directions together under one umbrella through a unified interface. One of the goals of this tutorial is to give the audience a feel for these commonalities, which can go a long way toward overcoming the severely fragmented view of the QA community.

What do we not cover? QA over relational databases is closely related to the independent field of NLIDB (natural language interfaces to databases) (Li and Jagadish, 2014) and is out of the scope of this tutorial. Associated directions like Cloze-style QA (Lewis et al., 2019) and specialized application domains like biomedical QA (Pampari et al., 2018) will only be mentioned in passing. Approaches for visual and multimodal QA (Guo et al., 2019) are out of scope, as is community question answering (CQA), where the primary goal is to match experts with pertinent questions: the answering itself is done not by the machine but by humans.

3. Relevance to the IR Community

Related tutorials. A tutorial on QA is not entirely new to SIGIR: the previous one was presented in 2016 by Wen-tau Yih and Hao Ma (Yih and Ma, 2016b) (and also at NAACL 2016 (Yih and Ma, 2016a) by the same authors). Text-based QA tutorials appeared as early as NAACL 2001 (Sanda Harabagiu and Dan Moldovan) (Harabagiu and Moldovan, 2001) and EACL 2003 (Jimmy Lin and Boris Katz) (Lin and Katz, 2003). Tutorials on IBM Watson (Fan, 2012; Fan and Barker, 2015) and entity recommendation (Ma and Ke, 2015) have also touched upon QA in the past. Recent workshops on various aspects of QA have been organized at top-tier conferences: MRQA (EMNLP-IJCNLP 2019), RCQA (AAAI 2019), HQA (WWW 2018), and OKBQA (COLING 2016).

Need for a new one. The unparalleled growth of QA warrants a new tutorial covering recent advances in the field. Once dominated primarily by template-based approaches (Unger et al., 2012; Abujabal et al., 2018), QA now features a large number of neural methods (Huang et al., 2019; Chen et al., 2019, 2017; Clark and Gardner, 2018), graph-based methods (Lu et al., 2019; Luo et al., 2018), and even a handful that explore reinforcement learning (Qiu et al., 2020; Das et al., 2018; Pan et al., 2019). Across sub-fields, more complex questions are being handled, with complexity defined in terms of the entities and relationships present (Lu et al., 2019; Vakulenko et al., 2019; Yang et al., 2018b). Systems are moving from static designs to more interactive ones: an increasing number include scope for user feedback (Abujabal et al., 2018; Zhang et al., 2019; Kratzwald and Feuerriegel, 2019) and operate in a multi-turn, conversational setting (Shen et al., 2019; Christmann et al., 2019; Pan et al., 2019; Reddy et al., 2019). Interpretability or explainability of presented answers is yet another area of significance (Wu et al., 2020; Abujabal et al., 2017a; Saeidi et al., 2018; Sydorova et al., 2019), as the role of such explanations in system improvement and user satisfaction is being recognized for developers and end-users alike. The tutorial will emphasize each of these key facets of QA. In addition, a summary of the available benchmarks (Berant et al., 2013; Rajpurkar et al., 2016; Yang et al., 2018b; Christmann et al., 2019; Talmor and Berant, 2018; Choi et al., 2018; Reddy et al., 2019; Abujabal et al., 2019) in each of the QA sub-fields will be provided, which should be valuable for new entrants getting started with their problem of choice.

4. Topics

4.1. QA over Knowledge Graphs

The advent of large knowledge graphs like Freebase (Bollacker et al., 2008), YAGO (Suchanek et al., 2007), DBpedia (Auer et al., 2007), and Wikidata (Vrandečić and Krötzsch, 2014) gave rise to QA over KGs (KG-QA), which typically provides answers as single entities or lists of entities from the KG. KG-QA has become an important research direction, where the goal is to translate a natural language question into a structured query, typically in the Semantic Web language SPARQL or an equivalent logical form, that directly operates on the entities and predicates of the underlying KG (Wu et al., 2020; Qiu et al., 2020; Vakulenko et al., 2019; Bhutani et al., 2019; Christmann et al., 2019). KG-QA involves challenges of entity disambiguation and, most strikingly, the need to bridge the vocabulary gap between the phrases in a question and the terminology of the KG. Early work on KG-QA built on paraphrase-based mappings and query templates that involve a single entity predicate (Berant et al., 2013; Unger et al., 2012; Yahya et al., 2013). This line was further advanced by (Bast and Haussmann, 2015; Bao et al., 2016; Abujabal et al., 2017b; Hu et al., 2017), including the learning of templates from graph patterns in the KG. However, reliance on templates prevents such approaches from robustly coping with arbitrary syntactic formulations. This has motivated deep learning methods with CNNs and LSTMs, and especially key-value memory networks (Xu et al., 2019b, 2016; Tan et al., 2018; Huang et al., 2019; Chen et al., 2019).
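
To make the target representation concrete, here is a minimal, hand-written sketch (using the `requests` library against the public Wikidata SPARQL endpoint) of what a KG-QA system aims to produce for a single factoid question; the identifiers are, to the best of our knowledge, the standard Wikidata IDs for the film Inception (Q25188) and the director property (P57), and a real system would of course construct such a query automatically.

```python
# A hand-written SPARQL translation of "Who directed Inception?", executed
# against the public Wikidata endpoint. A KG-QA system would build this
# query automatically from the question; the mapping here is done by hand
# for illustration, and Q25188 / P57 are (to our knowledge) the Wikidata
# identifiers for the film Inception and the "director" property.
import requests

question = "Who directed Inception?"
sparql = """
SELECT ?directorLabel WHERE {
  wd:Q25188 wdt:P57 ?director .            # Inception --director--> ?director
  ?director rdfs:label ?directorLabel .
  FILTER(LANG(?directorLabel) = "en")
}
"""

response = requests.get(
    "https://query.wikidata.org/sparql",
    params={"query": sparql, "format": "json"},
    headers={"User-Agent": "qa-tutorial-sketch/0.1"},
)
for binding in response.json()["results"]["bindings"]:
    print(binding["directorLabel"]["value"])   # expected: "Christopher Nolan"
```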

A significant amount of time in this section of the tutorial will be devoted to answering complex questions with multiple entities and predicates. This is one of the key focus areas in KG-QA today (Lu et al., 2019; Bhutani et al., 2019; Qiu et al., 2020; Ding et al., 2019; Vakulenko et al., 2019; Hu et al., 2018; Jia et al., 2018), where the overriding principle is often the identification of frequent query substructures. Web tables represent a key aspect of the curated Web and contain a substantial volume of structured information (Dong et al., 2014). QA over such tables contains canonicalized representatives of several challenges faced in large-scale KG-QA, and we will touch upon a few key works in this area (Pasupat and Liang, 2015; Iyyer et al., 2017; Sun et al., 2016).
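
As a toy illustration of the table setting (not drawn from any of the cited systems), the following sketch answers a question over a small Web-style table by matching the question to a row via its entity and to a column via its header:

```python
# A toy Web-table QA lookup: match the question to a row via its entity and
# to a column via its header, then return the cell at their intersection.
# The table, the question, and the string-matching heuristic are illustrative.
table = {
    "header": ["Country", "Capital", "Population (millions)"],
    "rows": [
        ["France", "Paris", "67"],
        ["Germany", "Berlin", "83"],
    ],
}

def answer_from_table(question: str) -> str:
    q = question.lower()
    row = next(r for r in table["rows"] if r[0].lower() in q)      # entity match
    col = next(i for i, h in enumerate(table["header"])
               if h.split()[0].lower() in q)                        # header match
    return row[col]

print(answer_from_table("What is the capital of France?"))  # -> "Paris"
```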

4.2. QA over Text

4.2.1. Early efforts

Question answering originally considered textual document collections as its underlying source. Classical approaches (Ravichandran and Hovy, 2002; Voorhees, 1999) extracted answers from passages and short text units that matched most cue words from the question, followed by statistical scoring. This passage-retrieval model makes intensive use of IR techniques for statistically scoring sentences or passages and aggregating evidence for answer candidates. TREC ran a QA benchmarking series from 1999 to 2007, and more recently revived it as the LiveQA (Agichtein et al., 2015) and Complex Answer Retrieval (CAR) (Dietz et al., 2017) tracks. IBM Watson (Ferrucci et al., 2010) extended this paradigm by combining it with learned models for special question types.
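
The essence of this classical pipeline can be conveyed with a toy cue-word scorer, a stand-in for the statistical scoring used in early systems rather than a reimplementation of any of them:

```python
# A toy version of classical cue-word matching and scoring (not any
# particular published system): rank passages by how many non-stopword
# question terms they contain.
from collections import Counter

STOPWORDS = {"who", "what", "when", "where", "which", "the", "a", "an",
             "of", "is", "was", "did", "in"}

def cue_words(question: str) -> set:
    return {w.strip("?.,").lower() for w in question.split()} - STOPWORDS

def passage_score(passage: str, cues: set) -> int:
    tokens = Counter(w.strip("?.,").lower() for w in passage.split())
    return sum(tokens[c] for c in cues)

question = "Who wrote Hamlet?"
passages = [
    "Hamlet is a tragedy written by William Shakespeare around 1600.",
    "The Globe Theatre was built in 1599 in London.",
]
cues = cue_words(question)
best = max(passages, key=lambda p: passage_score(p, cues))
print(best)  # the Shakespeare passage, which contains the cue word "hamlet"
```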

4.2.2. Machine reading comprehension (MRC)

This is a QA variation where a question needs to be answered with a short span of words from a given text paragraph (Rajpurkar et al., 2016; Yang et al., 2018b), and is different from the typical fact-centric answer-finding task in IR. Exemplary approaches that extended the original single-passage setting to a multi-document one can be found in DrQA (Chen et al., 2017) and DocumentQA (Clark and Gardner, 2018), among many others. Traditional fact-centric QA over text and multi-document MRC have recently been converging into a joint topic referred to as open-domain QA (Lin et al., 2018; Dehghani et al., 2019; Wang et al., 2019).
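
A quick way to see the MRC setting in action is an off-the-shelf extractive reader; the sketch below assumes the Hugging Face transformers library and its default question-answering model purely as an example, with any SQuAD-style reader being substitutable:

```python
# The MRC setting in a few lines: extract a short answer span from a given
# paragraph. The Hugging Face `transformers` question-answering pipeline
# with its default model serves purely as an example extractive reader.
from transformers import pipeline

reader = pipeline("question-answering")

paragraph = (
    "The Amazon rainforest covers much of the Amazon basin of South America. "
    "The majority of the forest is contained within Brazil."
)
result = reader(
    question="Which country contains most of the Amazon rainforest?",
    context=paragraph,
)
print(result["answer"], result["score"])  # expected span: "Brazil", plus a confidence score
```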

4.2.3. Open-domain QA

In NLP, open-domain question answering is now a benchmark task in natural language understanding (NLU) and can potentially drive the progress of methods in this area (Kwiatkowski et al., 2019). The recent revival of this task was jump-started by QA benchmarks like SQuAD (Rajpurkar et al., 2016) and HotpotQA (Yang et al., 2018b), which were proposed for MRC. Consequently, a majority of the approaches in NLP focus on MRC-style question answering with varying task complexities (Kwiatkowski et al., 2019; Dasigi et al., 2019; Talmor and Berant, 2019; Dua et al., 2019). This has led to the common practice of treating open-domain QA as a retrieve-and-re-rank task. In this tutorial, we will introduce the modern foundations of open-domain QA using a similar retrieve-and-re-rank framework. Note that our focus will not be on architectural engineering but rather on design decisions, task complexity, and the roles and opportunities for IR.
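
The retrieve-and-re-rank framing can be summarized in a few lines of illustrative code: a retriever selects candidate documents for the question and a reader extracts the answer from the retrieved text; the toy word-overlap scorer below merely stands in for the BM25 or dense retrievers and neural readers used in practice:

```python
# A toy retrieve-and-re-rank pipeline for open-domain QA: a retriever picks
# the top documents for the question, and a "reader" (here just a sentence
# re-ranker) extracts the answer from the retrieved text. The corpus and
# scoring are illustrative only.

def tokens(text: str) -> set:
    return {w.strip(".,?").lower() for w in text.split()}

def overlap(a: str, b: str) -> int:
    return len(tokens(a) & tokens(b))

corpus = [
    "Marie Curie was born in Warsaw. She won the Nobel Prize in Physics in 1903.",
    "The Nobel Prizes are awarded annually in Stockholm and Oslo.",
]

def answer(question: str, k: int = 1) -> str:
    # Stage 1 (retrieve): keep the k documents with the highest term overlap.
    top_docs = sorted(corpus, key=lambda d: overlap(question, d), reverse=True)[:k]
    # Stage 2 (re-rank / read): score sentences of the retrieved text, return the best.
    sentences = [s.strip() for d in top_docs for s in d.split(".") if s.strip()]
    return max(sentences, key=lambda s: overlap(question, s))

print(answer("Where was Marie Curie born?"))  # -> "Marie Curie was born in Warsaw"
```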

4.3. QA over Heterogeneous Sources

Limitations of QA over KGs have recently led to a revival of considering textual sources in combination with KGs (Savenkov and Agichtein, 2016; Xu et al., 2016; Sun et al., 2018, 2019). Early methods like PARALEX (Fader et al., 2013) and OQA (Fader et al., 2014) supported noisy KGs in the form of triple spaces compiled via Open IE (Mausam, 2016) on Wikipedia articles or Web corpora. TupleInf (Khot et al., 2017) extended and generalized the Open-IE-based PARALEX approach to complex questions, and is geared towards multiple-choice answer options. TAQA (Yin et al., 2015) is another generalization of Open-IE-based QA that constructs a KG of n-tuples from Wikipedia full-text and question-specific search results. (Talmor and Berant, 2018) addressed complex questions by decomposing them into a sequence of simple questions, relying on crowdsourced training data. Some methods for hybrid QA start with KGs as a source of candidate answers and use text corpora like Wikipedia or ClueWeb as additional evidence (Xu et al., 2016; Das et al., 2017; Sun et al., 2018, 2019; Sydorova et al., 2019; Xiong et al., 2019), while others start with answer sentences from text corpora and combine these with KGs to give crisp entity answers (Sun et al., 2015; Savenkov and Agichtein, 2016).
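
A toy sketch of this hybrid idea, with made-up triples and weights rather than the machinery of any cited system, gathers candidate answers from a curated triple store and from noisy Open-IE-style triples extracted from text, and merges their evidence:

```python
# A toy illustration of hybrid QA over heterogeneous sources: answer
# candidates come from a small curated triple store and from noisy
# Open-IE-style triples, and their evidence scores are merged. Triples,
# weights, and the matching heuristic are illustrative only; real systems
# would also canonicalize answer strings across sources.

curated_triples = [
    ("Albert Einstein", "born in", "Ulm"),
]
openie_triples = [
    ("Einstein", "was born in", "Ulm , Germany"),
    ("Einstein", "developed", "the theory of relativity"),
]

def match(question: str, triples, weight: float) -> dict:
    q = set(question.lower().replace("?", "").split())
    scores = {}
    for s, p, o in triples:
        hit = len(q & set((s + " " + p).lower().split()))
        if hit:
            scores[o] = scores.get(o, 0.0) + weight * hit
    return scores

def answer(question: str) -> str:
    evidence = match(question, curated_triples, weight=1.0)                 # curated KG
    for obj, score in match(question, openie_triples, weight=0.5).items():  # noisy text triples
        evidence[obj] = evidence.get(obj, 0.0) + score
    return max(evidence, key=evidence.get)

print(answer("Where was Einstein born?"))  # -> "Ulm"
```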

4.4. New Horizons in QA

4.4.1. Conversational QA

Conversational QA involves a sequence of questions and answers that appear as a natural dialogue between the system and the user. The aim of such sequential, multi-turn QA is to understand the context left implicit by users and to effectively answer incomplete and ad hoc follow-up questions. Towards this, various recent benchmarks have been proposed that expect answers in the form of boolean responses (Saeidi et al., 2018), extractive spans (Choi et al., 2018), free-form responses (Reddy et al., 2019), entities (Christmann et al., 2019), passages (Kaiser et al., 2020), and chit-chat (Zhou et al., 2018). Leaderboards of the QuAC (Choi et al., 2018) and CoQA (Reddy et al., 2019) datasets point to many recent approaches in the text domain. Recently, the TREC CAsT track (Dalton et al., 2019) and the Dagstuhl Seminar on Conversational Search (Anand et al., 2020) have tried to address such challenges in conversational search. For KG-QA, notable efforts include (Saha et al., 2018; Guo et al., 2018; Christmann et al., 2019; Shen et al., 2019). We will focus on the modelling complexity of these tasks and a classification of the approaches involved.
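
The incomplete-follow-up problem can be illustrated with a deliberately naive rewrite heuristic that fills in the implicit entity from the previous turn; real conversational QA systems model this context far more carefully:

```python
# The incomplete follow-up in a nutshell: the second question leaves its
# entity implicit, and a deliberately naive heuristic rewrites it using the
# topic of the previous turn. Real systems detect the topic via entity
# linking over the full history; here it is taken as the last word of the
# previous question, purely for illustration.
history = [("Who directed Inception?", "Christopher Nolan")]
followup = "When was it released?"

def rewrite(question: str, topic: str) -> str:
    # Replace the pronoun with the conversation topic.
    return question.replace(" it ", f" {topic} ")

topic = history[-1][0].split()[-1].rstrip("?")   # naive topic detection
print(rewrite(followup, topic))  # -> "When was Inception released?"
```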

4.4.2. Feedback and interpretability

Static learning systems for QA are gradually paving the way for systems that incorporate user feedback. These mostly adopt a continuous learning setup, where an explicit user feedback mechanism is built on top of an existing QA system. For example, the NEQA (Abujabal et al., 2018), QApedia (Kratzwald and Feuerriegel, 2019), and IMPROVE-QA (Zhang et al., 2019) systems primarily operate on the core systems of QUINT (Abujabal et al., 2017b), DrQA (Chen et al., 2017), and gAnswer (Hu et al., 2017), respectively. A direction closely coupled with effective feedback is the interpretability of QA models, which is also essential for improving trust and satisfaction (Wu et al., 2020; Abujabal et al., 2017a). This section is aimed at experts, and we will discuss potential limitations and open challenges.

4.4.3. Clarification questions

A key aspect of mixed-initiative systems (Radlinski and Craswell, 2017) is the ability to ask clarifications. This ability is essential for QA systems, especially for handling ambiguity and facilitating better question understanding. Most of the work in this domain has been driven by tasks extracted from open QA forums (Braslavski et al., 2017; Rao and Daumé III, 2018; Xu et al., 2019a).

5. Format and Support

A detailed schedule for our proposed half-day tutorial (three hours plus breaks), designed to deliver a high-quality presentation within the chosen time period, is as follows:

  • 9:00 - 10:30 Part I (1.5 hours)

    • Introduction and Definitions of QA systems (20 minutes)

    • QA over Knowledge Graphs (35 minutes)

    • QA over Heterogeneous Sources (35 minutes)

  • 10:30 - 11:00 Coffee break

  • 11:00 - 12:30 Part II (1.5 hours)

    • QA over Text (30 minutes)

    • Open-domain QA (30 minutes)

    • New Horizons in QA (30 minutes)

Support for attendees. We will provide the attendees with a link to the tutorial slides and preparatory reading material. Upon acceptance, we will prepare a webpage with all updated information and the necessary reading material, well in advance of the conference.

References

  • A. Abujabal, R. Saha Roy, M. Yahya, and G. Weikum (2017a) QUINT: Interpretable Question Answering over Knowledge Bases. In EMNLP, Cited by: §3, §4.4.2.
  • A. Abujabal, R. Saha Roy, M. Yahya, and G. Weikum (2018) Never-ending learning for open-domain question answering over knowledge bases. In WWW, Cited by: §1.2, §3, §4.4.2.
  • A. Abujabal, R. Saha Roy, M. Yahya, and G. Weikum (2019) ComQA: A Community-sourced Dataset for Complex Factoid Question Answering with Paraphrase Clusters. In NAACL-HLT ’19, Cited by: §3.
  • A. Abujabal, M. Yahya, M. Riedewald, and G. Weikum (2017b) Automated template generation for question answering over knowledge graphs. In WWW, Cited by: §4.1, §4.4.2.
  • E. Agichtein, D. Carmel, D. Pelleg, Y. Pinter, and D. Harman (2015) Overview of the TREC 2015 LiveQA Track. In TREC, Cited by: §4.2.1.
  • A. Anand, L. Cavedon, H. Joho, M. Sanderson, and B. Stein (2020) Conversational Search (Dagstuhl Seminar 19461). Dagstuhl Reports 9 (11). Cited by: §4.4.1.
  • S. Auer, C. Bizer, G. Kobilarov, J. Lehmann, R. Cyganiak, and Z. Ives (2007) DBpedia: A nucleus for a Web of open data. Cited by: §1.2, §4.1.
  • J. Bao, N. Duan, Z. Yan, M. Zhou, and T. Zhao (2016) Constraint-based question answering with knowledge graph. In COLING, Cited by: §4.1.
  • H. Bast and E. Haussmann (2015) More accurate question answering on Freebase. In CIKM, Cited by: §4.1.
  • J. Berant, A. Chou, R. Frostig, and P. Liang (2013) Semantic parsing on Freebase from question-answer pairs. In EMNLP, Cited by: §3, §4.1.
  • N. Bhutani, X. Zheng, and H. Jagadish (2019) Learning to answer complex questions over knowledge bases with query composition. In CIKM, Cited by: §1.2, §4.1, §4.1.
  • K. Bollacker, C. Evans, P. Paritosh, T. Sturge, and J. Taylor (2008) Freebase: A collaboratively created graph database for structuring human knowledge. In SIGMOD, Cited by: §1.2, §4.1.
  • P. Braslavski, D. Savenkov, E. Agichtein, and A. Dubatovka (2017) What do you mean exactly? Analyzing clarification questions in CQA. In CHIIR, Cited by: §4.4.3.
  • D. Chen, A. Fisch, J. Weston, and A. Bordes (2017) Reading wikipedia to answer open-domain questions. In ACL, Cited by: §1.1, §1.2, §3, §4.2.2, §4.4.2.
  • Y. Chen, L. Wu, and M. J. Zaki (2019) Bidirectional attentive memory networks for question answering over knowledge bases. In NAACL-HLT, Cited by: §3, §4.1.
  • E. Choi, H. He, M. Iyyer, M. Yatskar, W. Yih, Y. Choi, P. Liang, and L. Zettlemoyer (2018) QuAC: Question answering in context. In EMNLP, Cited by: §3, §4.4.1.
  • P. Christmann, R. Saha Roy, A. Abujabal, J. Singh, and G. Weikum (2019) Look before you hop: conversational question answering over knowledge graphs using judicious context expansion. In CIKM, Cited by: §1.1, §1.2, §3, §4.1, §4.4.1.
  • C. Clark and M. Gardner (2018) Simple and effective multi-paragraph reading comprehension. In ACL, Cited by: §1.1, §1.2, §3, §4.2.2.
  • C. L. A. Clarke and E. L. Terra (2003) Passage retrieval vs. document retrieval for factoid question answering. In SIGIR, Cited by: §1.2.
  • D. Cohen, L. Yang, and W. B. Croft (2018) WikiPassageQA: A benchmark collection for research on non-factoid answer passage retrieval. In SIGIR, Cited by: §1.2.
  • S. Cucerzan and E. Agichtein (2005) Factoid question answering over unstructured and structured web content.. In TREC, Cited by: §1.2.
  • J. Dalton, C. Xiong, and J. Callan (2019) CAsT 2019: The conversational assistance track overview. In TREC, Cited by: §4.4.1.
  • R. Das, S. Dhuliawala, M. Zaheer, L. Vilnis, I. Durugkar, A. Krishnamurthy, A. Smola, and A. McCallum (2018) Go for a walk and arrive at the answer: Reasoning over paths in knowledge bases using reinforcement learning. In ICLR, Cited by: §3.
  • R. Das, M. Zaheer, S. Reddy, and A. McCallum (2017) Question answering on knowledge bases and text using universal schema and memory networks. In ACL, Cited by: §4.3.
  • P. Dasigi, N. F. Liu, A. Marasovic, N. A. Smith, and M. Gardner (2019) Quoref: A reading comprehension dataset with questions requiring coreferential reasoning. In EMNLP-IJCNLP, Cited by: §4.2.3.
  • M. Dehghani, H. Azarbonyad, J. Kamps, and M. de Rijke (2019) Learning to transform, combine, and reason in open-domain question answering. In WSDM, Cited by: §1.1, §1.2, §4.2.2.
  • L. Dietz, M. Verma, F. Radlinski, and N. Craswell (2017) TREC Complex Answer Retrieval Overview. In TREC, Cited by: §4.2.1.
  • J. Ding, W. Hu, Q. Xu, and Y. Qu (2019) Leveraging frequent query substructures to generate formal queries for complex question answering. In EMNLP-IJCNLP, Cited by: §1.2, §4.1.
  • X. Dong, E. Gabrilovich, G. Heitz, W. Horn, N. Lao, K. Murphy, T. Strohmann, S. Sun, and W. Zhang (2014) Knowledge vault: A web-scale approach to probabilistic knowledge fusion. In KDD, Cited by: §4.1.
  • D. Dua, Y. Wang, P. Dasigi, G. Stanovsky, S. Singh, and M. Gardner (2019) DROP: A reading comprehension benchmark requiring discrete reasoning over paragraphs. In NAACL-HLT, Cited by: §4.2.3.
  • A. Elgohary, C. Zhao, and J. Boyd-Graber (2018) A dataset and baselines for sequential open-domain question answering. In EMNLP, Cited by: §1.2.
  • A. Fader, L. Zettlemoyer, and O. Etzioni (2013) Paraphrase-driven learning for open question answering. In ACL, Cited by: §4.3.
  • A. Fader, L. Zettlemoyer, and O. Etzioni (2014) Open question answering over curated and extracted knowledge bases. In KDD, Cited by: §4.3.
  • J. Fan and K. Barker (2015) Natural language processing in Watson. In AAAI, Cited by: §3.
  • J. Fan (2012) Natural language processing in Watson. In NAACL-HLT, Cited by: §3.
  • D. Ferrucci, E. Brown, J. Chu-Carroll, J. Fan, D. Gondek, A. A. Kalyanpur, A. Lally, J. W. Murdock, E. Nyberg, J. Prager, N. Schlaefer, and C. Welty (2010) Building Watson: An overview of the DeepQA project. AI magazine 31 (3). Cited by: §1.1, §4.2.1.
  • B. F. Green Jr, A. K. Wolf, C. Chomsky, and K. Laughery (1961) Baseball: An automatic question-answerer. In Western joint IRE-AIEE-ACM computer conference, Cited by: §1.1.
  • D. Guo, D. Tang, N. Duan, M. Zhou, and J. Yin (2018) Dialog-to-action: Conversational question answering over a large-scale knowledge base. In NeurIPS, Cited by: §1.1, §4.4.1.
  • Y. Guo, Z. Cheng, L. Nie, Y. Liu, Y. Wang, and M. Kankanhalli (2019) Quantifying and alleviating the language prior problem in visual question answering. In SIGIR, Cited by: §2.
  • S. Harabagiu and D. Moldovan (2001) Open-domain textual question answering. In NAACL-HLT, Cited by: §3.
  • S. Hu, L. Zou, J. X. Yu, H. Wang, and D. Zhao (2017) Answering natural language questions by subgraph matching over knowledge graphs. TKDE 30 (5). Cited by: §4.1, §4.4.2.
  • S. Hu, L. Zou, and X. Zhang (2018) A state-transition framework to answer complex questions over knowledge base. In EMNLP, Cited by: §4.1.
  • X. Huang, J. Zhang, D. Li, and P. Li (2019) Knowledge graph embedding based question answering. In WSDM, Cited by: §3, §4.1.
  • M. Iyyer, W. Yih, and M. Chang (2017) Search-based neural structured learning for sequential question answering. In ACL, Cited by: §1.2, §4.1.
  • Z. Jia, A. Abujabal, R. Saha Roy, J. Strötgen, and G. Weikum (2018) TEQUILA: Temporal Question Answering over Knowledge Bases. In CIKM, Cited by: §4.1.
  • M. Kaiser, R. Saha Roy, and G. Weikum (2020) Conversational question answering over passages by leveraging word proximity networks. In SIGIR, Cited by: §4.4.1.
  • T. Khot, A. Sabharwal, and P. Clark (2017) Answering complex questions using open information extraction. In ACL, Cited by: §4.3.
  • B. Kratzwald and S. Feuerriegel (2019) Learning from on-line user feedback in neural question answering on the web. In WWW, Cited by: §3, §4.4.2.
  • T. Kwiatkowski, J. Palomaki, O. Redfield, M. Collins, A. Parikh, C. Alberti, D. Epstein, I. Polosukhin, J. Devlin, and K. Lee (2019) Natural questions: A benchmark for question answering research. TACL 7. Cited by: §4.2.3.
  • P. Lewis, L. Denoyer, and S. Riedel (2019) Unsupervised question answering by Cloze translation. In ACL, Cited by: §2.
  • F. Li and H. Jagadish (2014) Constructing an interactive natural language interface for relational databases. In VLDB, Cited by: §2.
  • J. Lin and B. Katz (2003) Question answering techniques for the world wide web. In EACL, Cited by: §3.
  • Y. Lin, H. Ji, Z. Liu, and M. Sun (2018) Denoising distantly supervised open-domain question answering. In ACL, Cited by: §4.2.2.
  • X. Lu, S. Pramanik, R. Saha Roy, A. Abujabal, Y. Wang, and G. Weikum (2019) Answering complex questions by joining multi-document evidence with quasi knowledge graphs. In SIGIR, Cited by: §1.1, §3, §4.1.
  • K. Luo, F. Lin, X. Luo, and K. Zhu (2018) Knowledge base question answering via encoding of complex query graphs. In EMNLP, Cited by: §3.
  • H. Ma and Y. Ke (2015) An introduction to entity recommendation and understanding. In WWW, Cited by: §3.
  • Mausam (2016) Open information extraction systems and downstream applications. In IJCAI, Cited by: §4.3.
  • A. Pampari, P. Raghavan, J. Liang, and J. Peng (2018) emrQA: A large corpus for question answering on electronic medical records. In EMNLP, Cited by: §2.
  • B. Pan, H. Li, Z. Yao, D. Cai, and H. Sun (2019) Reinforced dynamic reasoning for conversational question generation. In ACL, Cited by: §3.
  • P. Pasupat and P. Liang (2015) Compositional semantic parsing on semi-structured tables. In ACL, Cited by: §1.2, §4.1.
  • Y. Qiu, Y. Wang, X. Jin, and K. Zhang (2020) Stepwise reasoning for multi-relation question answering over knowledge graph with weak supervision. In WSDM, Cited by: §1.1, §1.2, §3, §4.1, §4.1.
  • F. Radlinski and N. Craswell (2017) A theoretical framework for conversational search. In CHIIR, Cited by: §4.4.3.
  • P. Rajpurkar, J. Zhang, K. Lopyrev, and P. Liang (2016) SQuAD: 100,000+ Questions for machine comprehension of text. In EMNLP, Cited by: §1.1, §1.2, §3, §4.2.2, §4.2.3.
  • S. Rao and H. Daumé III (2018) Learning to ask good questions: Ranking clarification questions using neural expected value of perfect information. In ACL, Cited by: §4.4.3.
  • D. Ravichandran and E. Hovy (2002) Learning surface text patterns for a question answering system. In ACL, Cited by: §4.2.1.
  • S. Reddy, D. Chen, and C. Manning (2019) CoQA: A conversational question answering challenge. TACL 7. Cited by: §3, §4.4.1.
  • M. Saeidi, M. Bartolo, P. Lewis, S. Singh, T. Rocktäschel, M. Sheldon, G. Bouchard, and S. Riedel (2018) Interpretation of natural language rules in conversational machine reading. In EMNLP, Cited by: §3, §4.4.1.
  • A. Saha, V. Pahuja, M. Khapra, K. Sankaranarayanan, and S. Chandar (2018) Complex sequential question answering: Towards learning to converse over linked question answer pairs with a knowledge graph. In AAAI, Cited by: §4.4.1.
  • D. Savenkov and E. Agichtein (2016) When a knowledge base is not enough: Question answering over knowledge bases with external text data. In SIGIR, Cited by: §1.2, §4.3.
  • T. Shen, X. Geng, Q. Tao, D. Guo, D. Tang, N. Duan, G. Long, and D. Jiang (2019) Multi-task learning for conversational question answering over a large-scale knowledge base. In EMNLP-IJCNLP, Cited by: §1.2, §3, §4.4.1.
  • F. Suchanek, G. Kasneci, and G. Weikum (2007) YAGO: A core of semantic knowledge. In WWW, Cited by: §1.2, §4.1.
  • H. Sun, H. Ma, W. Yih, C. Tsai, J. Liu, and M. Chang (2015) Open domain question answering via semantic enrichment. In WWW, Cited by: §4.3.
  • H. Sun, T. Bedrax-Weiss, and W. Cohen (2019) PullNet: Open domain question answering with iterative retrieval on knowledge bases and text. In EMNLP-IJCNLP, Cited by: §1.2, §4.3.
  • H. Sun, B. Dhingra, M. Zaheer, K. Mazaitis, R. Salakhutdinov, and W. Cohen (2018) Open domain question answering using early fusion of knowledge bases and text. In EMNLP, Cited by: §1.1, §1.2, §4.3.
  • H. Sun, H. Ma, X. He, W. Yih, Y. Su, and X. Yan (2016) Table cell search for question answering. In WWW, Cited by: §4.1.
  • A. Sydorova, N. Poerner, and B. Roth (2019) Interpretable question answering on knowledge bases and text. In ACL, Cited by: §3, §4.3.
  • A. Talmor and J. Berant (2018) The Web as a knowledge-base for answering complex questions. In NAACL-HLT, Cited by: §3, §4.3.
  • A. Talmor and J. Berant (2019) MultiQA: An empirical investigation of generalization and transfer in reading comprehension. In ACL, Cited by: §4.2.3.
  • C. Tan, F. Wei, Q. Zhou, N. Yang, B. Du, W. Lv, and M. Zhou (2018) Context-aware answer sentence selection with hierarchical gated recurrent neural networks. IEEE/ACM Trans. Audio, Speech & Language Processing 26 (3). Cited by: §4.1.
  • C. Unger, L. Bühmann, J. Lehmann, A. Ngonga Ngomo, D. Gerber, and P. Cimiano (2012) Template-based question answering over RDF data. In WWW, Cited by: §3, §4.1.
  • S. Vakulenko, J. D. Fernandez Garcia, A. Polleres, M. de Rijke, and M. Cochez (2019) Message passing for complex question answering over knowledge graphs. In CIKM, Cited by: §1.1, §1.2, §3, §4.1, §4.1.
  • E. M. Voorhees (1999) The TREC-8 question answering track report. In TREC, Cited by: §1.2, §4.2.1.
  • D. Vrandečić and M. Krötzsch (2014) Wikidata: A free collaborative knowledge base. CACM 57 (10). Cited by: §1.2, §4.1.
  • B. Wang, T. Yao, Q. Zhang, J. Xu, Z. Tian, K. Liu, and J. Zhao (2019) Document gated reader for open-domain question answering. In SIGIR, Cited by: §1.2, §4.2.2.
  • Z. Wu, B. Kao, T. Wu, P. Yin, and Q. Liu (2020) PERQ: Predicting, Explaining, and Rectifying Failed Questions in KB-QA Systems. In WSDM, Cited by: §1.1, §1.2, §3, §4.1, §4.4.2.
  • W. Xiong, M. Yu, S. Chang, X. Guo, and W. Y. Wang (2019) Improving question answering over incomplete KBs with knowledge-aware reader. In ACL, Cited by: §4.3.
  • J. Xu, Y. Wang, D. Tang, N. Duan, P. Yang, Q. Zeng, M. Zhou, and X. Sun (2019a) Asking clarification questions in knowledge-based question answering. In EMNLP-IJCNLP, Cited by: §4.4.3.
  • K. Xu, Y. Lai, Y. Feng, and Z. Wang (2019b) Enhancing key-value memory neural networks for knowledge based question answering. In NAACL-HLT, Cited by: §4.1.
  • K. Xu, S. Reddy, Y. Feng, S. Huang, and D. Zhao (2016) Question answering on freebase via relation extraction and textual evidence. In ACL, Cited by: §4.1, §4.3.
  • M. Yahya, K. Berberich, S. Elbassuoni, and G. Weikum (2013) Robust question answering over the web of linked data. In CIKM, Cited by: §4.1.
  • Y. Yang, Y. Gong, and X. Chen (2018a) Query Tracking for E-commerce Conversational Search: A Machine Comprehension Perspective. In CIKM, Cited by: §1.2.
  • Z. Yang, P. Qi, S. Zhang, Y. Bengio, W. Cohen, R. Salakhutdinov, and C. D. Manning (2018b) HotpotQA: A Dataset for Diverse, Explainable Multi-hop Question Answering. In EMNLP, Cited by: §1.2, §3, §4.2.2, §4.2.3.
  • S. W. Yih and H. Ma (2016a) Question answering with knowledge base, Web and beyond. In NAACL-HLT, Cited by: §3.
  • W. Yih and H. Ma (2016b) Question answering with knowledge base, Web and beyond. In SIGIR, Cited by: §3.
  • P. Yin, N. Duan, B. Kao, J. Bao, and M. Zhou (2015) Answering questions with complex semantic constraints on open knowledge bases. In CIKM, Cited by: §4.3.
  • X. Zhang, L. Zou, and S. Hu (2019) An interactive mechanism to improve question answering systems via feedback. In CIKM, Cited by: §3, §4.4.2.
  • K. Zhou, S. Prabhumoye, and A. W. Black (2018) A dataset for document grounded conversations. In EMNLP, Cited by: §4.4.1.