Multi-Cast Attention Networks for Retrieval-based Question Answering and Response Prediction

06/03/2018
by   Yi Tay, et al.

Attention is typically used to select informative sub-phrases that are used for prediction. This paper investigates the novel use of attention as a form of feature augmentation, i.e., casted attention. We propose Multi-Cast Attention Networks (MCAN), a new attention mechanism and general model architecture for a potpourri of ranking tasks in the conversational modeling and question answering domains. Our approach performs a series of soft attention operations, each time casting a scalar feature upon the inner word embeddings. The key idea is to provide a real-valued hint (feature) to a subsequent encoder layer, targeted at improving the representation learning process. There are several advantages to this design, e.g., it allows an arbitrary number of attention mechanisms to be casted, allowing multiple attention types (e.g., co-attention, intra-attention) and attention variants (e.g., alignment-pooling, max-pooling, mean-pooling) to be executed simultaneously. This not only eliminates the costly need to tune the nature of the co-attention layer, but also provides a greater extent of explainability to practitioners. Via extensive experiments on four well-known benchmark datasets, we show that MCAN achieves state-of-the-art performance. On the Ubuntu Dialogue Corpus, MCAN outperforms existing state-of-the-art models by 9%. MCAN also achieves the best-performing score to date on the well-studied TrecQA dataset.
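To make the casting idea concrete, the sketch below illustrates a single co-attention cast in PyTorch. It is a minimal illustration rather than the authors' implementation: the names `cast_attention` and `sum_compression` are hypothetical, simple summation stands in for the compression function that reduces each composed vector to a scalar, and the toy shapes are arbitrary.

```python
import torch
import torch.nn.functional as F

def sum_compression(x):
    # Compress a vector to a single scalar by summation (one simple choice
    # of compression function; learned alternatives are also possible).
    return x.sum(dim=-1, keepdim=True)

def cast_attention(q, d):
    """One attention cast: softly align each document word to the query,
    then emit scalar features per document word.
    q: (len_q, dim) query word embeddings, d: (len_d, dim) document embeddings."""
    scores = d @ q.t()                      # (len_d, len_q) affinity matrix
    align = F.softmax(scores, dim=-1) @ q   # (len_d, dim) attentional representation
    # Scalar features casted per word, from concatenated, multiplicative,
    # and subtractive compositions of the word and its aligned counterpart.
    f1 = sum_compression(torch.cat([align, d], dim=-1))
    f2 = sum_compression(align * d)
    f3 = sum_compression(align - d)
    return torch.cat([f1, f2, f3], dim=-1)  # (len_d, 3)

# Each additional cast (e.g., intra-attention, or co-attention with max- or
# mean-pooling) would contribute further scalar features; all of them are
# concatenated onto the word embeddings before the sequence encoder.
d = torch.randn(12, 50)   # toy document: 12 words, 50-dim embeddings
q = torch.randn(7, 50)    # toy query: 7 words
feats = cast_attention(q, d)
augmented = torch.cat([d, feats], dim=-1)  # (12, 53) input to the encoder
```

Because each cast contributes only a few scalars, adding more attention types or variants grows the encoder input by a handful of dimensions rather than multiplying it, which is what makes stacking many casts cheap.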

