Structurally Diverse Sampling Reduces Spurious Correlations in Semantic Parsing Datasets

03/16/2022
by Shivanshu Gupta, et al.

A rapidly growing body of research has demonstrated the inability of NLP models to generalize compositionally and has tried to alleviate it through specialized architectures, training schemes, and data augmentation, among other approaches. In this work, we study a different, relatively under-explored approach: sampling diverse training sets that encourage compositional generalization. We propose a novel algorithm for sampling a structurally diverse set of instances from a labeled instance pool with structured outputs. Evaluating on 5 semantic parsing datasets of varying complexity, we show that our algorithm performs competitively with or better than prior algorithms on not only compositional template splits but also traditional IID splits of all but the least structurally diverse datasets. In general, we find that diverse training sets lead to better generalization than random training sets of the same size in 9 out of 10 dataset-split pairs, with over 10% improvement in 5 of them, providing further evidence of their sample efficiency. Moreover, we show that structural diversity also makes for more comprehensive test sets, on which models need diverse training to succeed. Finally, we use information theory to show that a reduction in spurious correlations between substructures may be one reason why diverse training sets improve generalization.
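The abstract does not spell out the sampling algorithm, so the sketch below is only a hedged illustration of the general idea, not the paper's actual method: greedily selecting instances that maximize coverage of previously unseen output substructures, plus a pointwise-mutual-information proxy for the substructure correlations the paper analyzes. All function names are hypothetical, and substructures are approximated here as program token n-grams; the paper's notion of substructure and its information-theoretic measure may differ.

```python
import math
from collections import Counter
from itertools import combinations


def substructures(program, max_size=2):
    """Approximate the substructures of a structured output as the
    set of its token n-grams (n <= max_size)."""
    tokens = program.split()
    return {tuple(tokens[i:i + n])
            for n in range(1, max_size + 1)
            for i in range(len(tokens) - n + 1)}


def diverse_sample(pool, k):
    """Greedily pick k (utterance, program) pairs from the pool,
    each time taking the instance whose program covers the most
    substructures not yet covered by the selected set."""
    subs = [substructures(program) for _, program in pool]
    covered, remaining, selected = set(), set(range(len(pool))), []
    for _ in range(min(k, len(pool))):
        best = max(remaining, key=lambda i: len(subs[i] - covered))
        covered |= subs[best]
        remaining.remove(best)
        selected.append(pool[best])
    return selected


def mean_pmi(programs):
    """Average pointwise mutual information, log(p(a,b) / p(a)p(b)),
    over substructure pairs that co-occur in the programs: a rough
    proxy for how strongly one substructure predicts another, i.e.
    for spurious correlations a model could exploit."""
    n = len(programs)
    sub_sets = [substructures(p) for p in programs]
    counts = Counter(s for subs in sub_sets for s in subs)
    pairs = Counter(pair for subs in sub_sets
                    for pair in combinations(sorted(subs), 2))
    pmis = [math.log(c * n / (counts[a] * counts[b]))
            for (a, b), c in pairs.items()]
    return sum(pmis) / len(pmis) if pmis else 0.0
```

Coverage of a set of substructures is submodular, so this simple greedy loop is a standard approximation to the maximum-coverage objective. The connection to the paper's claim would then be that a training set covering many distinct substructures also breaks up co-occurrence patterns, which should show up as a lower mean PMI for `diverse_sample` output than for a random sample of the same size.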
