Language Modeling Teaches You More Syntax than Translation Does: Lessons Learned Through Auxiliary Task Analysis

09/26/2018 · by Kelly W. Zhang, et al.

Recent work using auxiliary prediction task classifiers to investigate the properties of LSTM representations has begun to shed light on why pretrained representations, like ELMo (Peters et al., 2018) and CoVe (McCann et al., 2017), are so beneficial for neural language understanding models. Still, we do not yet have a clear understanding of how the choice of pretraining objective affects the type of linguistic information that models learn. With this in mind, we compare four objectives---language modeling, translation, skip-thought, and autoencoding---on their ability to induce syntactic and part-of-speech information. We make a fair comparison between the tasks by holding constant the quantity and genre of the training data, as well as the LSTM architecture. We find that representations from language models consistently perform best on our syntactic auxiliary prediction tasks, even when trained on relatively small amounts of data. These results suggest that language modeling may be the best data-rich pretraining task for transfer learning applications requiring syntactic information. We also find that the representations from randomly initialized, frozen LSTMs perform strikingly well on our syntactic auxiliary tasks, but this effect disappears when the amount of training data for the auxiliary tasks is reduced.
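To make the probing setup concrete, the sketch below shows one way such an auxiliary prediction classifier could be implemented in PyTorch: a linear layer trained on top of frozen LSTM hidden states to predict per-token part-of-speech tags. The names and dimensions (POSProbe, pos_vocab_size, the hidden size) are illustrative assumptions, not the paper's actual code.

# A minimal sketch of an auxiliary-task ("probing") classifier over a frozen
# LSTM encoder. All names and sizes here are assumptions for illustration.
import torch
import torch.nn as nn

class POSProbe(nn.Module):
    """Linear classifier trained on top of frozen LSTM hidden states."""
    def __init__(self, encoder: nn.LSTM, hidden_dim: int, pos_vocab_size: int):
        super().__init__()
        self.encoder = encoder
        for p in self.encoder.parameters():      # freeze the pretrained encoder
            p.requires_grad = False
        self.classifier = nn.Linear(hidden_dim, pos_vocab_size)

    def forward(self, embedded_tokens):          # (batch, seq_len, emb_dim)
        with torch.no_grad():                    # encoder weights are never updated
            hidden_states, _ = self.encoder(embedded_tokens)
        return self.classifier(hidden_states)    # per-token POS logits

# Example: probe a randomly initialized, frozen 2-layer LSTM.
encoder = nn.LSTM(input_size=300, hidden_size=512, num_layers=2, batch_first=True)
probe = POSProbe(encoder, hidden_dim=512, pos_vocab_size=45)
logits = probe(torch.randn(8, 20, 300))          # -> shape (8, 20, 45)

Because only the probe's parameters receive gradients, its accuracy on POS or syntactic labels reflects how much of that information the (pretrained or random) encoder already encodes.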

Related research:

03/08/2019 · Neural Language Models as Psycholinguistic Subjects: Representations of Syntactic State
We deploy the methods of controlled psycholinguistic experimentation to ...

11/02/2018 · Sentence Encoders on STILTs: Supplementary Training on Intermediate Labeled-data Tasks
Pretraining with language modeling and related unsupervised tasks has re...

04/25/2019 · Probing What Different NLP Tasks Teach Machines about Function Word Comprehension
We introduce a set of nine challenge tasks that test for the understandi...

11/10/2020 · When Do You Need Billions of Words of Pretraining Data?
NLP is currently dominated by general-purpose pretrained language models...

01/27/2023 · Call for Papers – The BabyLM Challenge: Sample-efficient pretraining on a developmentally plausible corpus
We present the call for papers for the BabyLM Challenge: Sample-efficien...

07/10/2018 · Revisiting the Hierarchical Multiscale LSTM
Hierarchical Multiscale LSTM (Chung et al., 2016a) is a state-of-the-art...

10/12/2020 · Structural Supervision Improves Few-Shot Learning and Syntactic Generalization in Neural Language Models
Humans can learn structural properties about a word from minimal experie...
