To Test Machine Comprehension, Start by Defining Comprehension

05/04/2020
by Jesse Dunietz, et al.

Many tasks aim to measure machine reading comprehension (MRC), often focusing on question types presumed to be difficult. Rarely, however, do task designers start by considering what systems should in fact comprehend. In this paper we make two key contributions. First, we argue that existing approaches do not adequately define comprehension; they are too unsystematic about what content is tested. Second, we present a detailed definition of comprehension – a "Template of Understanding" – for a widely useful class of texts, namely short narratives. We then conduct an experiment that strongly suggests existing systems are not up to the task of narrative understanding as we define it.
