- Break It Down: A Question Understanding Benchmark
- Answering Complicated Question Intents Expressed in Decomposed Question Sequences
- Querent Intent in Multi-Sentence Questions
- Domain-Relevant Embeddings for Medical Question Similarity
- Full-Time Supervision based Bidirectional RNN for Factoid Question Answering
- Neural Compositional Denotational Semantics for Question Answering
- HHH: An Online Medical Chatbot System based on Knowledge Graph and Hierarchical Bi-Directional Attention
Towards Understanding and Answering Multi-Sentence Recommendation Questions on Tourism
We introduce the first system for the novel task of answering complex multi-sentence recommendation questions in the tourism domain. Our solution uses a pipeline of two modules: question understanding and answering. For question understanding, we define an SQL-like query language that captures the semantic intent of a question; it supports operators such as subset, negation, preference, and similarity, which are often found in recommendation questions. We train and compare traditional CRFs as well as bidirectional LSTM-based models for converting a question into its semantic representation. We extend these models to a semi-supervised setting with partially labeled sequences gathered through crowdsourcing. We find that our best model is a BiDiLSTM+CRF trained semi-supervised with hand-designed features and CCM (Chang et al., 2007) constraints. Finally, in an end-to-end QA system, our answering component converts our question representation into queries fired on the underlying knowledge sources. Our experiments on two different answer corpora demonstrate that our system significantly outperforms baselines, with up to 20 points higher accuracy and 17 points higher recall.
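The abstract does not spell out the SQL-like query language, so the following is only an illustrative sketch of how a semantic representation with subset, negation, and preference operators might be modeled and executed against a small knowledge source. All entity names, attributes, and the `Query`/`answer` helpers are hypothetical, not the authors' actual representation:

```python
from dataclasses import dataclass, field

# Hypothetical in-memory "knowledge source": tourism entities and their attributes.
ENTITIES = {
    "Hotel Sunrise": {"wifi", "pool", "budget"},
    "Grand Palace": {"wifi", "spa", "luxury"},
    "Backpacker Inn": {"budget", "bar"},
}

@dataclass
class Query:
    """Toy analogue of a semantic-intent representation for a recommendation question."""
    required: set = field(default_factory=set)   # subset operator: entity must have all of these
    excluded: set = field(default_factory=set)   # negation operator: entity must have none of these
    preferred: set = field(default_factory=set)  # preference operator: soft, used only for ranking

def answer(query, entities=ENTITIES):
    """Return entities satisfying the hard constraints, ranked by preference overlap."""
    matches = [
        name for name, attrs in entities.items()
        if query.required <= attrs and not (query.excluded & attrs)
    ]
    # More preferred attributes matched -> ranked earlier.
    return sorted(matches, key=lambda name: -len(query.preferred & entities[name]))

# "I'd like a place with wifi, definitely no spa, and ideally a pool."
q = Query(required={"wifi"}, excluded={"spa"}, preferred={"pool"})
print(answer(q))  # -> ['Hotel Sunrise']
```

In the paper's pipeline, the question-understanding module (the CRF or BiDiLSTM+CRF tagger) would produce something like `q` from raw text, and the answering component would translate it into queries over the real knowledge sources; the evaluator above only stands in for that last step.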