Pretrain on just structure: Understanding linguistic inductive biases using transfer learning

04/25/2023
by Isabel Papadimitriou et al.

Both humans and transformer language models are able to learn language without explicit structural supervision. What inductive learning biases make this learning possible? In this study, we examine the effect of different inductive learning biases by predisposing language models with structural biases through pretraining on artificial structured data, and then evaluating by fine-tuning on English. Our experimental setup lets us actively control the inductive bias of language models. With our experiments, we investigate the comparative success of three types of inductive bias: 1) an inductive bias for recursive, hierarchical processing; 2) an inductive bias for unrestricted token-token dependencies that cannot be modeled by context-free grammars; and 3) an inductive bias for a Zipfian power-law vocabulary distribution. We show that complex token-token interactions form the best inductive biases, and that the effect is strongest in the non-context-free case. We also show that a Zipfian vocabulary distribution forms a good inductive bias independently of grammatical structure. Our study leverages the capabilities of transformer models to run controlled language learning experiments that are not possible to run in humans, and surfaces hypotheses about the structures that facilitate language learning in both humans and machines.
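The abstract does not specify the artificial pretraining grammars, but the three biases can be illustrated with toy data generators: nested matched pairs for recursive, hierarchical (context-free) structure; pairs that may close in any order, producing crossing dependencies no context-free grammar can model; and tokens drawn from a power-law distribution for the Zipfian bias. The sketch below is a minimal illustration under these assumptions; all function names and parameters are hypothetical, not the authors' actual setup.

```python
import random

VOCAB = 500  # hypothetical vocabulary size, not taken from the paper

def nested_sequence(n_pairs):
    """Hierarchical (context-free) bias: matched pairs close in LIFO order,
    e.g. <a <b b> a>."""
    out, stack, opens_left = [], [], n_pairs
    while opens_left or stack:
        if opens_left and (not stack or random.random() < 0.5):
            tok = random.randrange(VOCAB)
            stack.append(tok)
            out.append(f"<{tok}")
            opens_left -= 1
        else:
            out.append(f"{stack.pop()}>")  # always close the innermost pair
    return out

def crossing_sequence(n_pairs):
    """Non-context-free bias: any open pair may close next, so
    dependencies can cross, e.g. <a <b a> b>."""
    out, open_toks, opens_left = [], [], n_pairs
    while opens_left or open_toks:
        if opens_left and (not open_toks or random.random() < 0.5):
            tok = random.randrange(VOCAB)
            open_toks.append(tok)
            out.append(f"<{tok}")
            opens_left -= 1
        else:
            # close a uniformly chosen open pair, not necessarily the innermost
            i = random.randrange(len(open_toks))
            out.append(f"{open_toks.pop(i)}>")
    return out

def zipfian_sequence(length, alpha=1.0):
    """Zipfian bias: token frequency falls off as a power law of rank."""
    ranks = list(range(1, VOCAB + 1))
    weights = [r ** -alpha for r in ranks]
    return [f"w{t}" for t in random.choices(ranks, weights=weights, k=length)]

if __name__ == "__main__":
    random.seed(0)
    print(" ".join(nested_sequence(4)))
    print(" ".join(crossing_sequence(4)))
    print(" ".join(zipfian_sequence(12)))
```

In the experimental paradigm the abstract describes, a transformer language model would be pretrained on a large corpus of such structured sequences and then fine-tuned on English, so that differences in fine-tuning performance can be attributed to the structural bias instilled by pretraining.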


Related research

09/22/2021 - Controlled Evaluation of Grammatical Knowledge in Mandarin Chinese Language Models
Prior work has shown that structural supervision helps English language ...

06/02/2021 - Examining the Inductive Bias of Neural Language Models with Artificial Languages
Since language models are used to model a wide variety of languages, it ...

05/28/2021 - On the Bias Against Inductive Biases
Borrowing from the transformer models that revolutionized the field of n...

07/14/2020 - Can neural networks acquire a structural bias from raw linguistic data?
We evaluate whether BERT, a widely used neural network for sentence proc...

09/17/2018 - The Fast and the Flexible: training neural networks to learn to follow instructions from small data
Learning to follow human instructions is a challenging task because whil...

04/12/2021 - On the Inductive Bias of Masked Language Modeling: From Statistical to Syntactic Dependencies
We study how masking and predicting tokens in an unsupervised fashion ca...

04/05/2022 - OccamNets: Mitigating Dataset Bias by Favoring Simpler Hypotheses
Dataset bias and spurious correlations can significantly impair generali...
