What do Large Language Models Learn beyond Language?

10/21/2022
by Avinash Madasu, et al.

Large language models (LMs) have rapidly become a mainstay in Natural Language Processing. These models are known to acquire rich linguistic knowledge from training on large amounts of text. In this paper, we investigate whether pre-training on text also confers these models with helpful 'inductive biases' for non-linguistic reasoning. On a set of 19 diverse non-linguistic tasks involving quantitative computations, recognizing regular expressions, and reasoning over strings, we find that pretrained models significantly outperform comparable non-pretrained neural models. This remains true even when the non-pretrained models are trained with fewer parameters to account for model regularization effects. We further explore the effect of text domain on LMs by pretraining models on text from different domains and provenances. Surprisingly, our experiments reveal that the positive effects of pre-training persist even when pretraining on multilingual text or computer code, and even on text generated from synthetic languages. Our findings suggest a hitherto unexplored deep connection between pre-training and the inductive learning abilities of language models.
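To make the experimental setup concrete, here is a minimal sketch of the kind of comparison the abstract describes: fine-tuning a pretrained LM and a randomly initialized copy of the same architecture on a toy non-linguistic task (recognizing strings that match the regular expression (ab)+). The backbone (bert-base-uncased), the task construction, and all hyperparameters are illustrative assumptions, not the authors' exact configuration.

```python
import random
import re

import torch
from torch.utils.data import DataLoader
from transformers import (AutoConfig, AutoModelForSequenceClassification,
                          AutoTokenizer)

random.seed(0)
torch.manual_seed(0)

def make_split(n, max_len=12):
    """Strings over {a, b}, labeled 1 iff they match (ab)+."""
    pat = re.compile(r"^(ab)+$")
    out = []
    for _ in range(n):
        # Bias sampling so positive examples are not vanishingly rare.
        if random.random() < 0.5:
            s = "ab" * random.randint(1, max_len // 2)
        else:
            s = "".join(random.choice("ab") for _ in range(random.randint(2, max_len)))
        out.append((s, int(bool(pat.match(s)))))
    return out

train, test = make_split(2000), make_split(500)

name = "bert-base-uncased"  # illustrative choice of backbone
tok = AutoTokenizer.from_pretrained(name)

def collate(batch):
    texts, labels = zip(*batch)
    enc = tok(list(texts), padding=True, return_tensors="pt")
    enc["labels"] = torch.tensor(labels)
    return enc

def finetune_and_eval(model, epochs=2):
    """Briefly fine-tune on the toy task, then report test accuracy."""
    opt = torch.optim.AdamW(model.parameters(), lr=2e-5)
    loader = DataLoader(train, batch_size=32, shuffle=True, collate_fn=collate)
    model.train()
    for _ in range(epochs):
        for batch in loader:
            opt.zero_grad()
            loss = model(**batch).loss
            loss.backward()
            opt.step()
    model.eval()
    correct = 0
    with torch.no_grad():
        for batch in DataLoader(test, batch_size=64, collate_fn=collate):
            labels = batch.pop("labels")
            preds = model(**batch).logits.argmax(-1)
            correct += (preds == labels).sum().item()
    return correct / len(test)

# Same architecture, two initializations: pretrained weights vs. random.
pretrained = AutoModelForSequenceClassification.from_pretrained(name, num_labels=2)
scratch = AutoModelForSequenceClassification.from_config(
    AutoConfig.from_pretrained(name, num_labels=2))

print("pretrained:", finetune_and_eval(pretrained))
print("from scratch:", finetune_and_eval(scratch))
```

The paper's finding would correspond to the pretrained model reaching higher accuracy than the from-scratch model under a matched training budget; the paper additionally controls for regularization effects by shrinking the non-pretrained models, which this sketch omits.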


Related research

10/11/2020 · Do Language Embeddings Capture Scales?
Pretrained Language Models (LMs) have been shown to possess significant ...

05/18/2022 · LogiGAN: Learning Logical Reasoning via Adversarial Pre-training
We present LogiGAN, an unsupervised adversarial pre-training framework f...

04/17/2022 · Language Contamination Explains the Cross-lingual Capabilities of English Pretrained Models
English pretrained language models, which make up the backbone of many m...

05/31/2023 · How to Plant Trees in Language Models: Data and Architectural Effects on the Emergence of Syntactic Inductive Biases
Accurate syntactic representations are essential for robust generalizati...

01/26/2023 · Understanding Finetuning for Factual Knowledge Extraction from Language Models
Language models (LMs) pretrained on large corpora of text from the web h...

04/30/2020 · Linguistic Typology Features from Text: Inferring the Sparse Features of World Atlas of Language Structures
The use of linguistic typological resources in natural language processi...

05/22/2023 · A Pretrainer's Guide to Training Data: Measuring the Effects of Data Age, Domain Coverage, Quality, Toxicity
Pretraining is the preliminary and fundamental step in developing capabl...
