How to Plant Trees in Language Models: Data and Architectural Effects on the Emergence of Syntactic Inductive Biases

05/31/2023
by Aaron Mueller, et al.

Accurate syntactic representations are essential for robust generalization in natural language. Recent work has found that pre-training can teach language models to rely on hierarchical syntactic features, rather than incorrect linear features, when performing tasks after fine-tuning. We test which aspects of pre-training are important for endowing encoder-decoder Transformers with an inductive bias that favors hierarchical syntactic generalizations. We focus on architectural features (depth, width, and number of parameters), as well as the genre and size of the pre-training corpus, diagnosing inductive biases using two syntactic transformation tasks: question formation and passivization, both in English. We find that the number of parameters alone does not explain hierarchical generalization: model depth plays a greater role than model width. We also find that pre-training on simpler language, such as child-directed speech, induces a hierarchical bias using an order of magnitude less data than pre-training on more typical datasets based on web text or Wikipedia; this suggests that, in cognitively plausible language acquisition settings, neural language models may be more data-efficient than previously thought.
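As a concrete illustration of the kind of diagnostic described above, the minimal sketch below scores two candidate outputs for an English question-formation input: one produced by the hierarchical rule (front the main-clause auxiliary) and one produced by the linear rule (front the first auxiliary in the string). The checkpoint name (`t5-small` as a stand-in), the example sentence pair, and the scoring helper are illustrative assumptions, not the authors' released evaluation code or data.

```python
# Minimal sketch: compare hierarchical vs. linear question-formation outputs
# under a pre-trained encoder-decoder model. Assumes the Hugging Face
# `transformers` and `torch` libraries; model name and sentences are placeholders.

import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "t5-small"  # stand-in; the paper varies depth, width, and corpus
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
model.eval()

# Declarative with two auxiliaries: only the main-clause "can" should move.
source = "my walrus that can sing can move ."
hierarchical = "can my walrus that can sing move ?"  # front the main-clause auxiliary
linear = "can my walrus that sing can move ?"        # front the linearly first auxiliary

def sequence_log_prob(src: str, tgt: str) -> float:
    """Total log-probability of the target sequence given the source."""
    enc = tokenizer(src, return_tensors="pt")
    labels = tokenizer(tgt, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(**enc, labels=labels)
    # out.loss is the mean cross-entropy over target tokens; negate and
    # rescale by target length to get a summed log-probability.
    return -out.loss.item() * labels.shape[1]

score_h = sequence_log_prob(source, hierarchical)
score_l = sequence_log_prob(source, linear)
print("prefers hierarchical" if score_h > score_l else "prefers linear")
```

Run over a large set of such minimal pairs, and across model configurations that trade depth for width at a fixed parameter count or that swap the pre-training corpus, this kind of comparison is what allows hierarchical generalization to be attributed to specific architectural and data factors.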

