Making Neural Machine Reading Comprehension Faster

03/29/2019
by Debajyoti Chatterjee, et al.

This study addresses the Machine Reading Comprehension (MRC) problem, in which questions must be answered given a context passage. The challenge is to develop a computationally cheaper model with faster inference time. BERT, the state of the art in many natural language understanding tasks, is used as the base model, and knowledge distillation is applied to train two smaller models. The resulting models are compared with other models developed with the same goal.
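The abstract names knowledge distillation only at a high level. As a rough illustration, the sketch below shows the standard soft-target distillation objective that such a teacher-student setup typically minimizes, written in PyTorch. The temperature, the mixing weight alpha, and the shapes of the student/teacher logits are illustrative assumptions, not details taken from the paper.

    # Hypothetical sketch of a soft-target distillation loss for span-prediction MRC.
    # temperature, alpha, and the logit shapes are assumptions for illustration only.
    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, gold_positions,
                          temperature=2.0, alpha=0.5):
        """Mix teacher-to-student KL divergence with cross-entropy on gold labels."""
        # Soften both distributions with the temperature.
        soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
        log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)
        # KL term, scaled by T^2 so its gradients stay comparable to the hard loss.
        kd_loss = F.kl_div(log_soft_student, soft_teacher,
                           reduction="batchmean") * (temperature ** 2)
        # Hard term: standard cross-entropy against the annotated answer positions.
        ce_loss = F.cross_entropy(student_logits, gold_positions)
        return alpha * kd_loss + (1.0 - alpha) * ce_loss

Scaling the KL term by the squared temperature is the usual convention in distillation, since it keeps that term's gradient magnitude on the same footing as the hard cross-entropy term.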


research · 08/20/2020
An Experimental Study of Deep Neural Network Models for Vietnamese Multiple-Choice Reading Comprehension
Machine reading comprehension (MRC) is a challenging task in natural lan...

research · 08/23/2018
Attention-Guided Answer Distillation for Machine Reading Comprehension
Despite that current reading comprehension systems have achieved signifi...

research · 12/13/2019
WaLDORf: Wasteless Language-model Distillation On Reading-comprehension
Transformer based Very Large Language Models (VLLMs) like BERT, XLNet an...

research · 10/02/2019
AntMan: Sparse Low-Rank Compression to Accelerate RNN inference
Wide adoption of complex RNN based models is hindered by their inference...

research · 12/29/2019
ORB: An Open Reading Benchmark for Comprehensive Evaluation of Machine Reading Comprehension
Reading comprehension is one of the crucial tasks for furthering researc...

research · 03/16/2021
Robustly Optimized and Distilled Training for Natural Language Understanding
In this paper, we explore multi-task learning (MTL) as a second pretrain...

research · 10/24/2020
Improved Synthetic Training for Reading Comprehension
Automatically generated synthetic training examples have been shown to i...
