Microsoft AI Challenge India 2018: Learning to Rank Passages for Web Question Answering with Deep Attention Networks

06/14/2019
by Chaitanya Sai Alaparthi et al.

This paper describes our system for the Microsoft AI Challenge India 2018: Ranking Passages for Web Question Answering. The system uses a biLSTM network with a co-attention mechanism between the query and passage representations. In addition, we apply self-attention over the embeddings to increase lexical coverage by allowing the system to take a union over different embeddings. We also incorporate hand-crafted features to further improve performance. Our system achieved a Mean Reciprocal Rank (MRR) of 0.67 on the eval-1 dataset.
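To make the architecture concrete, below is a minimal PyTorch sketch of a biLSTM ranker with query-passage co-attention. This is not the authors' released code: the vocabulary size, hidden dimensions, max-pooling, and the final scoring layer are illustrative assumptions, and the self-attention over multiple embeddings and the hand-crafted features are omitted for brevity.

```python
# Minimal sketch (assumptions, not the authors' code): a shared biLSTM encoder
# with co-attention between query and passage token representations.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoAttentionRanker(nn.Module):
    def __init__(self, vocab_size=30000, embed_dim=300, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Shared biLSTM encoder for query and passage tokens.
        self.encoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                               bidirectional=True)
        # Scores a (query, passage) pair from pooled attended representations.
        self.score = nn.Linear(4 * hidden_dim, 1)

    def forward(self, query_ids, passage_ids):
        # Contextual encodings: (batch, seq_len, 2 * hidden_dim)
        q, _ = self.encoder(self.embed(query_ids))
        p, _ = self.encoder(self.embed(passage_ids))
        # Co-attention: similarity between every query and passage position.
        sim = torch.bmm(q, p.transpose(1, 2))          # (batch, q_len, p_len)
        q2p = torch.bmm(F.softmax(sim, dim=2), p)      # query attends to passage
        p2q = torch.bmm(F.softmax(sim, dim=1).transpose(1, 2), q)  # passage attends to query
        # Max-pool the attended representations and score the pair.
        pooled = torch.cat([q2p.max(dim=1).values, p2q.max(dim=1).values], dim=-1)
        return self.score(pooled).squeeze(-1)          # higher = more relevant

# Usage: rank candidate passages for a query by descending score.
model = CoAttentionRanker()
query = torch.randint(0, 30000, (1, 12))      # toy token ids
passages = torch.randint(0, 30000, (10, 40))  # 10 candidate passages
scores = model(query.expand(10, -1), passages)
ranking = scores.argsort(descending=True)
```

At inference time, the candidate passages for each query would be sorted by the predicted score, and MRR would be computed from the rank of the first relevant passage.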
