Learning to Focus when Ranking Answers

08/08/2018
by Dana Sagi, et al.

One of the main challenges in ranking is embedding the query and document pairs into a joint feature space, which can then be fed to a learning-to-rank algorithm. To achieve this representation, conventional state-of-the-art approaches perform extensive feature engineering that encodes the similarity of the query-answer pair. Recently, deep-learning solutions have shown that it is possible to achieve comparable performance, in some settings, by learning the similarity representation directly from data. Unfortunately, previous models perform poorly on longer texts, on texts with a significant portion of irrelevant information, or on texts that are grammatically incorrect. To overcome these limitations, we propose a novel ranking algorithm for question answering, QARAT, which uses an attention mechanism to learn which words and phrases to focus on when building the mutual representation. We demonstrate superior ranking performance on several real-world question-answer ranking datasets, and provide visualizations of the attention mechanism to offer more insight into how our models of attention could benefit ranking for difficult question-answering challenges.
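To make the idea concrete, below is a minimal sketch (in PyTorch) of the kind of attention-based question-answer ranker the abstract describes: each answer token is weighted by its relevance to the question before the joint representation is scored. This is not the authors' released implementation of QARAT; the shared GRU encoder, the bilinear attention form, and all names and hyperparameters (embed_dim, hidden_dim) are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentiveQARanker(nn.Module):
    """Hypothetical sketch of an attention-based QA ranking model."""

    def __init__(self, vocab_size, embed_dim=100, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        # One encoder shared by questions and answers (a simplification).
        self.encoder = nn.GRU(embed_dim, hidden_dim, batch_first=True,
                              bidirectional=True)
        # Bilinear form scoring each answer token against the question.
        self.attn = nn.Bilinear(2 * hidden_dim, 2 * hidden_dim, 1)
        # Pointwise relevance scorer over the joint representation.
        self.score = nn.Linear(4 * hidden_dim, 1)

    def forward(self, question, answer):
        # question: (B, Lq) and answer: (B, La) token-id tensors.
        q_states, _ = self.encoder(self.embed(question))  # (B, Lq, 2H)
        a_states, _ = self.encoder(self.embed(answer))    # (B, La, 2H)
        # Mean-pool the question (ignores padding; kept simple here).
        q_vec = q_states.mean(dim=1)                      # (B, 2H)

        # Attention weights: relevance of each answer token to the question.
        q_exp = q_vec.unsqueeze(1).expand_as(a_states)    # (B, La, 2H)
        weights = F.softmax(self.attn(a_states, q_exp).squeeze(-1), dim=1)
        a_vec = (weights.unsqueeze(-1) * a_states).sum(dim=1)  # (B, 2H)

        # Joint question-answer representation fed to the scorer.
        return self.score(torch.cat([q_vec, a_vec], dim=-1)).squeeze(-1)
```

Such a scorer is commonly trained with a pairwise ranking objective, e.g. a margin loss pushing the score of a correct answer above that of a sampled incorrect one; the per-token weights can also be plotted to visualize what the model attends to, in the spirit of the visualizations the abstract mentions.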


Related research

An Attention Mechanism for Answer Selection Using a Combined Global and Local View (07/05/2017)
We propose a new attention mechanism for neural based question answering...

Query-based Attention CNN for Text Similarity Map (09/15/2017)
In this paper, we introduce Query-based Attention CNN (QACNN) for Text Si...

Microsoft AI Challenge India 2018: Learning to Rank Passages for Web Question Answering with Deep Attention Networks (06/14/2019)
This paper describes our system for The Microsoft AI Challenge India 201...

Multi-Cast Attention Networks for Retrieval-based Question Answering and Response Prediction (06/03/2018)
Attention is typically used to select informative sub-phrases that are u...

Attentive Pooling Networks (02/11/2016)
In this work, we propose Attentive Pooling (AP), a two-way attention mec...

aNMM: Ranking Short Answer Texts with Attention-Based Neural Matching Model (01/05/2018)
As an alternative to question answering methods based on feature enginee...

Hyperbolic Representation Learning for Fast and Efficient Neural Question Answering (07/25/2017)
The dominant neural architectures in question answer retrieval are based...
