Syntactic Structure Distillation Pretraining For Bidirectional Encoders

05/27/2020
by   Adhiguna Kuncoro, et al.

Textual representation learners trained on large amounts of data have achieved notable success on downstream tasks; intriguingly, they have also performed well on challenging tests of syntactic competence. Given this success, it remains an open question whether scalable learners like BERT can become fully proficient in the syntax of natural language by virtue of data scale alone, or whether they still benefit from more explicit syntactic biases. To answer this question, we introduce a knowledge distillation strategy for injecting syntactic biases into BERT pretraining, by distilling the syntactically informative predictions of a hierarchical—albeit harder to scale—syntactic language model. Since BERT models masked words in bidirectional context, we propose to distill the approximate marginal distribution over words in context from the syntactic LM. Our approach reduces relative error by 2-21%, although we obtain mixed results on the GLUE benchmark. Our findings demonstrate the benefits of syntactic biases, even in representation learners that exploit large amounts of data, and contribute to a better understanding of where syntactic biases are most helpful in benchmarks of natural language understanding.
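For concreteness, the sketch below shows what a distillation-style masked-LM objective of this kind can look like: the student (a BERT-like encoder) is trained both on the usual one-hot masked-word targets and on soft targets from a teacher distribution over the vocabulary at each masked position, standing in for the syntactic LM's approximate marginal over words in context. This is a minimal, illustrative PyTorch sketch; the function name, tensor layout, and mixing weight alpha are assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def distillation_mlm_loss(student_logits, teacher_probs, gold_ids, mask, alpha=0.5):
    """
    student_logits: (batch, seq_len, vocab) raw scores from the BERT-like student.
    teacher_probs:  (batch, seq_len, vocab) teacher distribution over words at each
                    position (stand-in for the syntactic LM's approximate marginal).
    gold_ids:       (batch, seq_len) original token ids.
    mask:           (batch, seq_len) boolean tensor, True at masked positions.
    alpha:          interpolation weight between the two terms (illustrative).
    """
    log_probs = F.log_softmax(student_logits, dim=-1)

    # Standard MLM term: cross-entropy against the one-hot gold word.
    ce = F.nll_loss(log_probs[mask], gold_ids[mask], reduction="mean")

    # Distillation term: cross-entropy against the teacher's soft distribution,
    # i.e. -sum_w q_teacher(w) * log p_student(w), averaged over masked positions.
    kd = -(teacher_probs[mask] * log_probs[mask]).sum(dim=-1).mean()

    return alpha * ce + (1.0 - alpha) * kd
```

In this setup, alpha = 1 recovers standard masked-language-model pretraining, while smaller values weight the teacher's syntactically informed soft targets more heavily.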
