Leveraging Term Banks for Answering Complex Questions: A Case for Sparse Vectors

04/11/2017
by Peter D. Turney, et al.

While open-domain question answering (QA) systems have proven effective for answering simple questions, they struggle with more complex questions. Our goal is to answer more complex questions reliably, without incurring a significant cost in knowledge resource construction to support the QA. One readily available knowledge resource is a term bank, enumerating the key concepts in a domain. We have developed an unsupervised learning approach that leverages a term bank to guide a QA system, by representing the terminological knowledge with thousands of specialized vector spaces. In experiments with complex science questions, we show that this approach significantly outperforms several state-of-the-art QA systems, demonstrating that significant leverage can be gained from continuous vector representations of domain terminology.
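The abstract describes building many specialized vector spaces from a term bank and using them to guide answer selection. As a rough illustration only (the paper's actual method is not reproduced here), the sketch below builds one sparse context vector per term-bank entry from a toy corpus and scores multiple-choice answers by cosine similarity between question-term and answer-term vectors. The corpus, term bank, and scoring rule are all invented for the example.

```python
from collections import Counter
import math

# Toy corpus standing in for a large domain corpus (assumption).
corpus = [
    "photosynthesis converts light energy into chemical energy in plants",
    "plants use chlorophyll during photosynthesis to absorb light",
    "mitochondria produce chemical energy through cellular respiration",
    "cellular respiration releases energy from glucose in mitochondria",
]

# Term bank: key domain concepts (assumption: single-word terms for brevity).
term_bank = {"photosynthesis", "chlorophyll", "mitochondria", "respiration"}

def sparse_context_vector(term, corpus):
    """Sparse bag-of-words vector over the contexts a term occurs in."""
    vec = Counter()
    for sentence in corpus:
        words = sentence.split()
        if term in words:
            vec.update(w for w in words if w != term)
    return vec

def cosine(u, v):
    """Cosine similarity between two sparse vectors stored as dicts."""
    dot = sum(u[k] * v.get(k, 0) for k in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# One "specialized vector space" per term-bank entry (simplified to one
# vector per term here; the paper builds thousands of spaces).
spaces = {t: sparse_context_vector(t, corpus) for t in term_bank}

def score_answer(question, answer):
    """Hypothetical scoring rule: best similarity between any term-bank
    term found in the question and any found in the answer."""
    q_terms = [t for t in term_bank if t in question.split()]
    a_terms = [t for t in term_bank if t in answer.split()]
    if not q_terms or not a_terms:
        return 0.0
    return max(cosine(spaces[q], spaces[a]) for q in q_terms for a in a_terms)

question = "which pigment do plants use during photosynthesis"
answers = ["chlorophyll", "mitochondria"]
best = max(answers, key=lambda a: score_answer(question, a))
```

With this toy data, "chlorophyll" shares more context with "photosynthesis" than "mitochondria" does, so it scores higher; a real system would use much larger corpora and richer term representations.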


Related research

04/19/2017: Answering Complex Questions Using Open Information Extraction. "While there has been substantial progress in factoid question-answering ..."

02/09/2022: Can Open Domain Question Answering Systems Answer Visual Knowledge Questions? "The task of Outside Knowledge Visual Question Answering (OKVQA) requires..."

02/22/2020: Unsupervised Question Decomposition for Question Answering. "We aim to improve question answering (QA) by decomposing hard questions ..."

05/05/2015: A Feature-based Classification Technique for Answering Multi-choice World History Questions. "Our FRDC_QA team participated in the QA-Lab English subtask of the NTCIR..."

04/14/2021: TWEAC: Transformer with Extendable QA Agent Classifiers. "Question answering systems should help users to access knowledge on a br..."

06/07/2023: When to Read Documents or QA History: On Unified and Selective Open-domain QA. "This paper studies the problem of open-domain question answering, with t..."

07/05/2023: Won't Get Fooled Again: Answering Questions with False Premises. "Pre-trained language models (PLMs) have shown unprecedented potential in..."
