Comparing Attention-based Convolutional and Recurrent Neural Networks: Success and Limitations in Machine Reading Comprehension

08/27/2018
by   Matthias Blohm, et al.

We propose a machine reading comprehension model based on the compare-aggregate framework with two-staged attention that achieves state-of-the-art results on the MovieQA question answering dataset. To investigate the limitations of our model as well as the behavioral differences between convolutional and recurrent neural networks, we generate adversarial examples to confuse the model and compare its performance to that of humans. Furthermore, we assess the generalizability of our model by analyzing its differences to human inference ...
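The compare-aggregate framework mentioned in the abstract alternates an attention (alignment) step, an element-wise comparison step, and an aggregation step. The following is a minimal NumPy sketch of that general pattern with a single attention stage, not the authors' two-staged implementation; all function and variable names here are illustrative.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def compare_aggregate(question, passage):
    """Toy single-stage compare-aggregate step.

    question: (m, d) word vectors, passage: (n, d) word vectors.
    1. Attend: soft-align each passage word to the question.
    2. Compare: element-wise product of each passage word with
       its attended question summary.
    3. Aggregate: mean-pool the comparison vectors.
    """
    scores = passage @ question.T         # (n, m) similarity matrix
    weights = softmax(scores, axis=1)     # attention over question words
    aligned = weights @ question          # (n, d) per-word question summary
    compared = passage * aligned          # element-wise comparison
    return compared.mean(axis=0)          # (d,) aggregated representation
```

In the full model this aggregated vector would feed a downstream scorer (in the paper, convolutional or recurrent aggregation layers are compared); the sketch only shows the attend-compare-aggregate skeleton.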

Related research

- 09/30/2020: Bridging Information-Seeking Human Gaze and Machine Reading Comprehension
  In this work, we analyze how human gaze during reading comprehension is ...

- 11/12/2017: Fast Reading Comprehension with ConvNets
  State-of-the-art deep reading comprehension models are dominated by recu...

- 10/13/2020: Interpreting Attention Models with Human Visual Attention in Machine Reading Comprehension
  While neural networks with attention mechanisms have achieved superior p...

- 04/04/2019: Frustratingly Poor Performance of Reading Comprehension Models on Non-adversarial Examples
  When humans learn to perform a difficult task (say, reading comprehensio...

- 02/25/2019: Leveraging Knowledge Bases in LSTMs for Improving Machine Reading
  This paper focuses on how to take advantage of external knowledge bases ...

- 07/13/2021: Deep Neural Networks Evolve Human-like Attention Distribution during Reading Comprehension
  Attention is a key mechanism for information selection in both biologica...

- 12/13/2016: Building Large Machine Reading-Comprehension Datasets using Paragraph Vectors
  We present a dual contribution to the task of machine reading-comprehens...
