QANet: Combining Local Convolution with Global Self-Attention for Reading Comprehension

04/23/2018
by Adams Wei Yu, et al.

Current end-to-end machine reading and question answering (Q&A) models are primarily based on recurrent neural networks (RNNs) with attention. Despite their success, these models are often slow for both training and inference due to the sequential nature of RNNs. We propose a new Q&A architecture called QANet, which does not require recurrent networks: Its encoder consists exclusively of convolution and self-attention, where convolution models local interactions and self-attention models global interactions. On the SQuAD dataset, our model is 3x to 13x faster in training and 4x to 9x faster in inference, while achieving equivalent accuracy to recurrent models. The speed-up gain allows us to train the model with much more data. We hence combine our model with data generated by backtranslation from a neural machine translation model. On the SQuAD dataset, our single model, trained with augmented data, achieves 84.6 F1 score on the test set, which is significantly better than the best published F1 score of 81.8.
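
As a rough illustration of the convolution-plus-self-attention encoder described above, the following is a minimal PyTorch sketch of one encoder block. It is not the authors' implementation: the class names (DepthwiseSeparableConv, EncoderBlock), hyperparameter values, and omissions (positional encodings, the full embedding / context-query attention / model-encoder stack, dropout) are illustrative assumptions. The sketch only shows the two ingredients named in the abstract: depthwise separable convolutions for local interactions and multi-head self-attention for global interactions, each wrapped in a layer-norm residual sub-layer.

# Minimal sketch of a QANet-style encoder block (assumed hyperparameters).
import torch
import torch.nn as nn


class DepthwiseSeparableConv(nn.Module):
    """Depthwise separable 1D convolution over the sequence dimension."""
    def __init__(self, dim, kernel_size=7):
        super().__init__()
        self.depthwise = nn.Conv1d(dim, dim, kernel_size,
                                   padding=kernel_size // 2, groups=dim)
        self.pointwise = nn.Conv1d(dim, dim, 1)

    def forward(self, x):                      # x: (batch, seq_len, dim)
        y = x.transpose(1, 2)                  # -> (batch, dim, seq_len)
        y = self.pointwise(self.depthwise(y))
        return y.transpose(1, 2)


class EncoderBlock(nn.Module):
    """[conv x n_convs] -> self-attention -> feed-forward,
    each sub-layer in a layer-norm + residual wrapper."""
    def __init__(self, dim=128, n_convs=4, n_heads=8, kernel_size=7):
        super().__init__()
        self.conv_norms = nn.ModuleList([nn.LayerNorm(dim) for _ in range(n_convs)])
        self.convs = nn.ModuleList([DepthwiseSeparableConv(dim, kernel_size)
                                    for _ in range(n_convs)])
        self.attn_norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.ffn_norm = nn.LayerNorm(dim)
        self.ffn = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, x):                      # x: (batch, seq_len, dim)
        for norm, conv in zip(self.conv_norms, self.convs):
            x = x + conv(norm(x))              # convolution: local interactions
        y = self.attn_norm(x)
        x = x + self.attn(y, y, y)[0]          # self-attention: global interactions
        return x + self.ffn(self.ffn_norm(x))


x = torch.randn(2, 50, 128)                    # (batch, context length, hidden dim)
print(EncoderBlock()(x).shape)                 # torch.Size([2, 50, 128])

Because every sub-layer here is convolutional or attention-based rather than recurrent, the whole block processes all positions in parallel, which is the source of the training and inference speed-ups reported above.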

Related research

Lightweight Convolutional Approaches to Reading Comprehension on SQuAD (10/19/2018)
Current state-of-the-art reading comprehension models rely heavily on re...

Finding Strong Gravitational Lenses Through Self-Attention (10/18/2021)
The upcoming large scale surveys are expected to find approximately 10^5...

Self-Attention Aligner: A Latency-Control End-to-End Model for ASR Using Self-Attention Network and Chunk-Hopping (02/18/2019)
Self-attention network, an attention-based feedforward neural network, h...

Self-Attention for Audio Super-Resolution (08/26/2021)
Convolutions operate only locally, thus failing to model global interact...

Fast Reading Comprehension with ConvNets (11/12/2017)
State-of-the-art deep reading comprehension models are dominated by recu...

Affine Self Convolution (11/18/2019)
Attention mechanisms, and most prominently self-attention, are a powerfu...

Towards Better Modeling Hierarchical Structure for Self-Attention with Ordered Neurons (09/04/2019)
Recent studies have shown that a hybrid of self-attention networks (SANs...
