Paired Examples as Indirect Supervision in Latent Decision Models

04/05/2021
by Nitish Gupta, et al.

Compositional, structured models are appealing because they explicitly decompose problems and provide interpretable intermediate outputs that give confidence that the model is not simply latching onto data artifacts. Learning these models is challenging, however, because end-task supervision provides only a weak, indirect signal on what values the latent decisions should take. This often results in the model failing to learn to perform the intermediate tasks correctly. In this work, we introduce a way to leverage paired examples that provide stronger cues for learning latent decisions. When two related training examples share internal substructure, we add an additional training objective to encourage consistency between their latent decisions. Such an objective does not require external supervision for the values of the latent output, or even for the end task, yet it provides a training signal beyond that of the individual training examples themselves. We apply our method to improve compositional question answering using neural module networks on the DROP dataset. We explore three ways to acquire paired questions in DROP: (a) discovering naturally occurring paired examples within the dataset, (b) constructing paired examples using templates, and (c) generating paired examples using a question generation model. We empirically demonstrate that our proposed approach improves both in- and out-of-distribution generalization and leads to correct latent decision predictions.
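To make the consistency objective concrete, the sketch below shows one plausible form it could take in PyTorch, assuming the shared latent decision is a soft attention distribution over paragraph tokens produced by an intermediate module for each question in the pair. The function name, the symmetric-KL choice, and the optional shared-position mask are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def paired_consistency_loss(latent_a, latent_b, shared_mask=None, eps=1e-8):
    """Encourage two paired examples to agree on a shared latent decision.

    latent_a, latent_b: [batch, num_tokens] probability distributions (e.g.,
        attention over paragraph tokens) produced by the same intermediate
        module for the two paired questions.
    shared_mask: optional [batch, num_tokens] 0/1 mask restricting the
        comparison to positions where the substructure is actually shared.

    Note: a hypothetical sketch of a consistency objective, not the paper's
    published loss.
    """
    if shared_mask is not None:
        # Zero out non-shared positions and renormalize so both tensors
        # remain valid distributions over the shared span.
        latent_a = (latent_a * shared_mask)
        latent_b = (latent_b * shared_mask)
        latent_a = latent_a / latent_a.sum(dim=-1, keepdim=True).clamp(min=eps)
        latent_b = latent_b / latent_b.sum(dim=-1, keepdim=True).clamp(min=eps)

    # Symmetric KL divergence between the two latent distributions.
    kl_ab = F.kl_div(latent_b.clamp(min=eps).log(), latent_a, reduction="batchmean")
    kl_ba = F.kl_div(latent_a.clamp(min=eps).log(), latent_b, reduction="batchmean")
    return 0.5 * (kl_ab + kl_ba)
```

In training, this term would simply be added to the end-task losses of the two paired examples, e.g. `loss = task_loss_a + task_loss_b + lam * paired_consistency_loss(attn_a, attn_b)`, where `lam` weights the consistency signal against the end-task supervision.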
