Finding needles in a haystack: Sampling Structurally-diverse Training Sets from Synthetic Data for Compositional Generalization

09/06/2021
by Inbar Oren, et al.

Modern semantic parsers suffer from two principal limitations. First, training requires expensive collection of utterance-program pairs. Second, semantic parsers fail to generalize at test time to new compositions/structures that have not been observed during training. Recent research has shown that automatic generation of synthetic utterance-program pairs can alleviate the first problem, but its potential for the second has thus far been under-explored. In this work, we investigate automatic generation of synthetic utterance-program pairs for improving compositional generalization in semantic parsing. Given a small training set of annotated examples and an "infinite" pool of synthetic examples, we select a subset of synthetic examples that are structurally diverse and use them to improve compositional generalization. We evaluate our approach on a new split of the schema2QA dataset, and show that it leads to dramatic improvements in compositional generalization as well as moderate improvements in the traditional i.i.d. setup. Moreover, structurally diverse sampling achieves these improvements with as few as 5K examples, compared to 1M examples when sampling uniformly at random – a 200x improvement in data efficiency.
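The core idea of selecting a structurally diverse subset from a large synthetic pool can be illustrated with a minimal sketch. The snippet below is an assumption-laden illustration, not the paper's exact algorithm: it uses a crude template heuristic (masking literal values) as a stand-in for program structure, and round-robin sampling over templates so rare structures are not drowned out by frequent ones.

```python
import random
from collections import defaultdict

def program_template(program: str) -> str:
    """Crude structural signature: mask literal values so examples are
    grouped by program shape. Hypothetical heuristic, not the paper's
    definition of structure."""
    tokens = []
    for tok in program.split():
        if tok.isdigit() or (tok.startswith('"') and tok.endswith('"')):
            tokens.append("<VAL>")
        else:
            tokens.append(tok)
    return " ".join(tokens)

def diverse_sample(pool, k, seed=0):
    """Pick up to k examples, round-robin over structural templates,
    so every observed structure is represented before any structure
    contributes a second example."""
    rng = random.Random(seed)
    by_template = defaultdict(list)
    for ex in pool:
        by_template[program_template(ex["program"])].append(ex)
    groups = list(by_template.values())
    for group in groups:
        rng.shuffle(group)
    selected = []
    while len(selected) < k and any(groups):
        for group in groups:
            if group and len(selected) < k:
                selected.append(group.pop())
    return selected
```

Compared with uniform sampling, which mirrors the (typically skewed) template distribution of the synthetic pool, this kind of coverage-first selection is what allows a much smaller sample to expose the model to the full range of program structures.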

Related research

- Structurally Diverse Sampling Reduces Spurious Correlations in Semantic Parsing Datasets (03/16/2022)
- Diverse Demonstrations Improve In-context Compositional Generalization (12/13/2022)
- COVR: A test-bed for Visually Grounded Compositional Generalization with real images (09/22/2021)
- Improving Compositional Generalization in Semantic Parsing (10/12/2020)
- Unobserved Local Structures Make Compositional Generalization Hard (01/15/2022)
- SeqZero: Few-shot Compositional Semantic Parsing with Sequential Prompts and Zero-shot Models (05/15/2022)
- Measuring and Improving Compositional Generalization in Text-to-SQL via Component Alignment (05/04/2022)
