Evaluating the Impact of Model Scale for Compositional Generalization in Semantic Parsing

05/24/2022
by   Linlu Qiu, et al.

Despite their strong performance on many tasks, pre-trained language models have been shown to struggle on out-of-distribution compositional generalization. Meanwhile, recent work has shown considerable improvements on many NLP tasks from model scaling. Can scaling up model size also improve compositional generalization in semantic parsing? We evaluate encoder-decoder models up to 11B parameters and decoder-only models up to 540B parameters, and compare model scaling curves for three different methods for transfer learning: fine-tuning all parameters, prompt tuning, and in-context learning. We observe that fine-tuning generally has flat or negative scaling curves on out-of-distribution compositional generalization in semantic parsing evaluations. In-context learning has positive scaling curves, but is generally outperformed by much smaller fine-tuned models. Prompt tuning can outperform fine-tuning and exhibits a more positive scaling curve, suggesting further potential improvements from scaling. Additionally, we identify several error trends that vary with model scale. For example, larger models are generally better at modeling the syntax of the output space, but are also more prone to certain types of overfitting. Overall, our study highlights limitations of current techniques for effectively leveraging model scale for compositional generalization, while our analysis also suggests promising directions for future work.
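To make the contrast between the three transfer methods concrete, the following is a minimal illustrative sketch in PyTorch, not the paper's code: the ToySeq2Seq model, the 20-token soft-prompt length, and the helper names (prompt_tuned_inputs, icl_prompt) are assumptions chosen only to show which parameters are trainable in each setting.

```python
# Minimal sketch (not the paper's implementation) contrasting the three
# transfer-learning methods compared in the paper on a toy seq2seq model.
import torch
import torch.nn as nn

class ToySeq2Seq(nn.Module):
    """Stand-in for a pre-trained encoder-decoder (e.g., a T5-style model)."""
    def __init__(self, vocab=1000, d=64):
        super().__init__()
        self.embed = nn.Embedding(vocab, d)
        self.core = nn.Transformer(d_model=d, nhead=4, batch_first=True)
        self.lm_head = nn.Linear(d, vocab)

    def forward(self, src_emb, tgt_emb):
        return self.lm_head(self.core(src_emb, tgt_emb))

model = ToySeq2Seq()

# 1) Full fine-tuning: every pre-trained parameter receives gradient updates.
ft_params = [p for p in model.parameters() if p.requires_grad]

# 2) Prompt tuning: freeze the model; learn only a short sequence of
#    "soft prompt" vectors prepended to the input embeddings.
for p in model.parameters():
    p.requires_grad = False
prompt_len, d = 20, 64  # assumed illustrative values
soft_prompt = nn.Parameter(torch.randn(prompt_len, d) * 0.02)

def prompt_tuned_inputs(input_ids):
    x = model.embed(input_ids)                      # (batch, seq, d), frozen
    prompt = soft_prompt.expand(x.size(0), -1, -1)  # (batch, prompt_len, d)
    return torch.cat([prompt, x], dim=1)            # only soft_prompt trains

pt_params = [soft_prompt]

# 3) In-context learning: no parameter updates at all; exemplars are simply
#    concatenated into the prompt and the frozen model decodes the answer.
def icl_prompt(exemplars, query):
    demos = "\n".join(f"Q: {q}\nA: {a}" for q, a in exemplars)
    return f"{demos}\nQ: {query}\nA:"

print(len(ft_params), len(pt_params))  # many trainable tensors vs. exactly one
```

The key difference the sketch highlights is the size of the trainable state: full fine-tuning updates every weight, prompt tuning optimizes only the small soft-prompt matrix against a frozen model, and in-context learning performs no optimization at all.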

Related research

On the Compositional Generalization Gap of In-Context Learning (11/15/2022)
Pretrained large generative language models have shown great performance...

Unfreeze with Care: Space-Efficient Fine-Tuning of Semantic Parsing Models (03/05/2022)
Semantic parsing is a key NLP task that maps natural language to structu...

Arbitrary Few Parameters are Good Enough for Adapting Large-scale Pre-trained Language Models (06/04/2023)
Parameter-efficient tuning (PET) methods can effectively drive extremely...

Learning to Generalize Compositionally by Transferring Across Semantic Parsing Tasks (11/09/2021)
Neural network models often generalize poorly to mismatched domains or d...

Learning to Perform Complex Tasks through Compositional Fine-Tuning of Language Models (10/23/2022)
How to usefully encode compositional task structure has long been a core...

Amortized Prompt: Lightweight Fine-Tuning for CLIP in Domain Generalization (11/25/2021)
Domain generalization (DG) is a difficult transfer learning problem aimi...

Compositional Semantic Parsing with Large Language Models (09/29/2022)
Humans can reason compositionally when presented with new tasks. Previou...
