Compositional generalization in semantic parsing with pretrained transformers

09/30/2021
by A. Emin Orhan et al.

Large-scale pretraining instills large amounts of knowledge in deep neural networks, which in turn improves the generalization behavior of these models on downstream tasks. What exactly are the limits of these generalization benefits? Here, we report observations from simple experiments aimed at addressing this question in the context of two semantic parsing tasks involving natural language, SCAN and COGS. We show that language models pretrained exclusively on non-English corpora, or even on programming language corpora, significantly improve out-of-distribution generalization on these benchmarks compared with models trained from scratch, even though both benchmarks are English-based. This demonstrates the surprisingly broad transferability of pretrained representations and knowledge. Pretraining with a large-scale protein sequence prediction task, on the other hand, mostly degrades generalization performance on SCAN and COGS, suggesting that pretrained representations do not transfer universally and that successful transfer requires some similarity between the pretraining and downstream domains. Finally, we show that larger models are harder to train from scratch and reach lower generalization accuracy when trained to convergence on the relatively small SCAN and COGS datasets, but the benefits of large-scale pretraining become much clearer with larger models.
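The core comparison in the abstract can be made concrete with a small sketch. The snippet below is not the paper's released code; it only illustrates, under assumed choices (HuggingFace Transformers, a t5-small checkpoint, AdamW with an arbitrary learning rate, and a toy SCAN-style example), how the same seq2seq architecture can be initialized either from pretrained weights or from scratch and then fine-tuned on a command-to-action-sequence parsing task.

```python
# A minimal sketch (not the paper's released code) of the comparison described
# above: the same seq2seq transformer architecture is either initialized from
# pretrained weights or from scratch, then trained on a SCAN-style
# command -> action-sequence parsing task. The checkpoint, optimizer settings,
# and the toy example are illustrative assumptions.
import torch
from transformers import AutoTokenizer, T5Config, T5ForConditionalGeneration

MODEL_NAME = "t5-small"  # assumed checkpoint; the paper evaluates several pretrained models
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)

# Pretrained initialization vs. random initialization of the same architecture.
pretrained = T5ForConditionalGeneration.from_pretrained(MODEL_NAME)
scratch = T5ForConditionalGeneration(T5Config.from_pretrained(MODEL_NAME))

# Toy SCAN-style example: natural-language command in, action sequence out.
inputs = tokenizer("jump twice and walk left", return_tensors="pt")
labels = tokenizer("I_JUMP I_JUMP I_TURN_LEFT I_WALK", return_tensors="pt").input_ids

# One gradient step for each model; in the paper's setting, training continues
# to convergence and out-of-distribution accuracy is compared afterwards.
for model in (pretrained, scratch):
    model.train()
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
    loss = model(input_ids=inputs.input_ids,
                 attention_mask=inputs.attention_mask,
                 labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

In the experiments summarized above, this kind of comparison is repeated with models pretrained on different corpora (non-English text, programming languages, protein sequences) and at different model sizes, with out-of-distribution accuracy measured on the SCAN and COGS generalization splits.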


