Scalable Syntax-Aware Language Models Using Knowledge Distillation

06/14/2019
by Adhiguna Kuncoro, et al.

Prior work has shown that, on small amounts of training data, syntactic neural language models learn structurally sensitive generalisations more successfully than sequential language models. However, their computational complexity renders scaling difficult, and it remains an open question whether structural biases are still necessary when sequential models have access to ever larger amounts of training data. To answer this question, we introduce an efficient knowledge distillation (KD) technique that transfers knowledge from a syntactic language model trained on a small corpus to an LSTM language model, hence enabling the LSTM to develop a more structurally sensitive representation of the larger training data it learns from. On targeted syntactic evaluations, we find that, while sequential LSTMs perform much better than previously reported, our proposed technique substantially improves on this baseline, yielding a new state of the art. Our findings and analysis affirm the importance of structural biases, even in models that learn from large amounts of data.
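
The distillation objective itself is not reproduced on this page, but the sketch below illustrates word-level knowledge distillation for language modelling in the spirit described by the abstract: a syntactic teacher supplies a next-word distribution and the LSTM student is trained on an interpolation of the usual cross-entropy with a soft-target KL term. The function name, tensor shapes, and the interpolation weight alpha are illustrative assumptions (a PyTorch-style setup), not the paper's exact formulation.

```python
# Minimal sketch of word-level knowledge distillation for language modelling.
# Hypothetical names and shapes; the paper's exact objective may differ.
import torch
import torch.nn.functional as F


def kd_language_model_loss(student_logits, teacher_probs, gold_ids, alpha=0.5):
    """Interpolate next-word cross-entropy with a soft-target distillation term.

    student_logits: (batch, vocab) raw scores from the LSTM student
    teacher_probs:  (batch, vocab) next-word distribution from the syntactic teacher
    gold_ids:       (batch,)       indices of the observed next words
    alpha:          weight on the distillation (soft-target) term
    """
    # Standard language-modelling loss against the observed next word.
    ce = F.cross_entropy(student_logits, gold_ids)
    # Soft-target term: KL(teacher || student) over the vocabulary,
    # averaged over the batch (F.kl_div expects log-probs as input).
    log_p_student = F.log_softmax(student_logits, dim=-1)
    kd = F.kl_div(log_p_student, teacher_probs, reduction="batchmean")
    return alpha * kd + (1.0 - alpha) * ce


# Toy usage with random tensors standing in for real model outputs.
if __name__ == "__main__":
    batch, vocab = 4, 10
    student_logits = torch.randn(batch, vocab)
    teacher_probs = F.softmax(torch.randn(batch, vocab), dim=-1)
    gold_ids = torch.randint(0, vocab, (batch,))
    print(kd_language_model_loss(student_logits, teacher_probs, gold_ids))
```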

Related research

05/27/2020 · Syntactic Structure Distillation Pretraining For Bidirectional Encoders
Textual representation learners trained on large amounts of data have ac...

08/27/2018 · Targeted Syntactic Evaluation of Language Models
We present a dataset for evaluating the grammaticality of the prediction...

07/28/2022 · Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions
Large amounts of training data are one of the major reasons for the high...

11/10/2022 · The CRINGE Loss: Learning what language not to model
Standard language model training employs gold human documents or human-h...

09/22/2021 · Controlled Evaluation of Grammatical Knowledge in Mandarin Chinese Language Models
Prior work has shown that structural supervision helps English language ...

07/10/2022 · FairDistillation: Mitigating Stereotyping in Language Models
Large pre-trained language models are successfully being used in a varie...

05/03/2023 · Distilling Step-by-Step! Outperforming Larger Language Models with Less Training Data and Smaller Model Sizes
Deploying large language models (LLMs) is challenging because they are m...
