A Fully Attention-Based Information Retriever

Recurrent neural networks are currently the state of the art in natural language processing because they can build rich contextual representations and process texts of arbitrary length. However, recent developments in attention mechanisms have equipped feedforward networks with similar capabilities while enabling faster computation, since more of their operations can be parallelized. We explore this new type of architecture in the domain of question answering and propose a novel approach that we call Fully Attention-Based Information Retriever (FABIR). We show that FABIR achieves competitive results on the Stanford Question Answering Dataset (SQuAD) while having fewer parameters and being faster at both learning and inference than rival methods.
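The abstract's parallelism argument rests on the fact that attention replaces sequential recurrence with matrix products. As a minimal sketch of standard scaled dot-product attention, the common building block of fully attention-based models (the paper's exact architecture is not reproduced here, and the function name and shapes below are illustrative assumptions), the NumPy snippet below computes every output position at once rather than stepping through the sequence:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Standard scaled dot-product attention.

    Unlike a recurrent cell, all output positions come from the same
    matrix products, so they can be computed in parallel.
    Shapes: Q (n_q, d), K (n_k, d), V (n_k, d_v).
    """
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)  # (n_q, n_k) similarity matrix
    # Row-wise softmax over the key dimension.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V  # weighted sum of value vectors

# Toy usage: 4 tokens with 8-dim embeddings attending over themselves.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(x, x, x)  # self-attention, shape (4, 8)
```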

