Teaching Machine Comprehension with Compositional Explanations

05/02/2020
by   Qinyuan Ye, et al.

Advances in extractive machine reading comprehension (MRC) rely heavily on collecting large-scale human-annotated training data (in the form of "question-paragraph-answer span" triples). A single question-answer example provides limited supervision, while a natural language explanation describing a human's deduction process may generalize to many other questions that share similar solution patterns. In this paper, we focus on "teaching" machines reading comprehension with a small number of natural language explanations. We propose a data augmentation framework that exploits the compositional nature of explanations to rapidly create pseudo-labeled data for training downstream MRC models. Structured variables and rules are extracted from each explanation and formulated into a neural module teacher, which employs softened neural modules and combinatorial search to handle linguistic variations and overcome sparse coverage. The proposed approach is particularly effective when limited annotation effort is available, achieving a practicable F1 score of 59.80 with supervision from only 52 explanations on the SQuAD dataset.
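To make the abstract's idea concrete, here is a minimal, hypothetical sketch of compiling one explanation into a softened rule that pseudo-labels unseen paragraphs. The explanation pattern, the `compile_rule` helper, and the toy synonym table are illustrative assumptions, not the paper's actual method or API; "softening" here is approximated with a synonym set rather than learned neural modules.

```python
# Hypothetical sketch: an explanation such as "the answer directly follows
# the word 'named'" is compiled into a matching rule. The anchor word is
# "softened" so near-synonyms also trigger the rule, letting a single
# explanation pseudo-label many paragraphs with varied wording.
# The synonym table and function names are illustrative assumptions.

ANCHOR_SYNONYMS = {"named": {"named", "called", "titled"}}

def compile_rule(anchor):
    """Return a matcher that labels the token right after any softened
    variant of `anchor` as the pseudo answer span."""
    variants = ANCHOR_SYNONYMS.get(anchor, {anchor})

    def match(tokens):
        for i, tok in enumerate(tokens[:-1]):
            if tok.lower() in variants:
                return tokens[i + 1]  # pseudo-labeled answer span
        return None

    return match

rule = compile_rule("named")
# "called" is not the anchor word, but the softened rule still matches:
print(rule("The ship was called Titanic .".split()))  # Titanic
```

In the paper's actual framework, such rules operate over richer structured variables and are scored by softened neural modules, with combinatorial search used to find rule instantiations that cover new examples.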


research
04/30/2020

STARC: Structured Annotations for Reading Comprehension

We present STARC (Structured Annotations for Reading Comprehension), a n...
research
09/14/2021

Summarize-then-Answer: Generating Concise Explanations for Multi-hop Reading Comprehension

How can we generate concise explanations for multi-hop Reading Comprehen...
research
11/04/2019

Learning to Annotate: Modularizing Data Augmentation for Text Classifiers with Natural Language Explanations

Deep neural networks usually require massive labeled data, which restric...
research
05/20/2022

Explanatory machine learning for sequential human teaching

The topic of comprehensibility of machine-learned theories has recently ...
research
04/09/2021

Evaluating Explanations for Reading Comprehension with Realistic Counterfactuals

Token-level attributions have been extensively studied to explain model ...
research
06/10/2015

Teaching Machines to Read and Comprehend

Teaching machines to read natural language documents remains an elusive ...
