Improving Contextual Representation with Gloss Regularized Pre-training

05/13/2022
by   Yu Lin, et al.

Although they achieve impressive results on many NLP tasks, BERT-like masked language models (MLMs) suffer from a discrepancy between pre-training and inference. In light of this gap, we investigate the contextual representations produced during pre-training and inference from the perspective of word probability distributions. We find that BERT risks neglecting contextual word similarity during pre-training. To tackle this issue, we propose adding an auxiliary gloss regularizer module to BERT pre-training (GR-BERT) to enhance word semantic similarity. By predicting masked words and simultaneously aligning contextual embeddings to their corresponding glosses, word similarity can be modeled explicitly. We design two architectures for GR-BERT and evaluate our model on downstream tasks. Experimental results show that the gloss regularizer benefits BERT in both word-level and sentence-level semantic representation. GR-BERT achieves a new state of the art on the lexical substitution task and substantially improves BERT sentence representations on both unsupervised and supervised STS tasks.
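The joint objective described in the abstract can be pictured as the standard MLM cross-entropy plus a term that pulls the contextual embedding of each masked word toward an embedding of its dictionary gloss. Below is a minimal PyTorch sketch under that assumption; the class name GlossRegularizedLoss, the cosine form of the alignment term, and the align_weight hyperparameter are illustrative choices, not the paper's exact architecture or objective.

```python
# Minimal sketch of a gloss-regularized pre-training loss (assumed form,
# not the paper's exact method): MLM cross-entropy plus a cosine alignment
# between contextual embeddings of masked words and their gloss embeddings.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GlossRegularizedLoss(nn.Module):
    """Combine the standard MLM cross-entropy with a gloss-alignment term."""

    def __init__(self, hidden_size: int, vocab_size: int, align_weight: float = 1.0):
        super().__init__()
        self.mlm_head = nn.Linear(hidden_size, vocab_size)   # predicts masked tokens
        self.proj = nn.Linear(hidden_size, hidden_size)      # maps token states to gloss space
        self.align_weight = align_weight

    def forward(self, hidden_states, masked_positions, masked_labels, gloss_embeddings):
        # hidden_states:     (batch, seq_len, hidden)  contextual embeddings from BERT
        # masked_positions:  (batch, n_masked)         indices of masked tokens
        # masked_labels:     (batch, n_masked)         gold token ids at those positions
        # gloss_embeddings:  (batch, n_masked, hidden) encoded glosses of the gold words
        idx = masked_positions.unsqueeze(-1).expand(-1, -1, hidden_states.size(-1))
        masked_states = hidden_states.gather(1, idx)          # (batch, n_masked, hidden)

        # 1) Usual masked-word prediction loss.
        logits = self.mlm_head(masked_states)
        mlm_loss = F.cross_entropy(logits.flatten(0, 1), masked_labels.flatten())

        # 2) Gloss regularizer: pull the contextual embedding of each masked
        #    word toward the embedding of its gloss (cosine alignment here).
        aligned = F.normalize(self.proj(masked_states), dim=-1)
        gloss = F.normalize(gloss_embeddings, dim=-1)
        align_loss = (1.0 - (aligned * gloss).sum(-1)).mean()

        return mlm_loss + self.align_weight * align_loss


# Toy usage with random tensors as stand-ins for real encoder outputs.
if __name__ == "__main__":
    batch, seq_len, hidden, vocab, n_masked = 2, 16, 32, 100, 3
    loss_fn = GlossRegularizedLoss(hidden, vocab)
    hidden_states = torch.randn(batch, seq_len, hidden)
    positions = torch.randint(0, seq_len, (batch, n_masked))
    labels = torch.randint(0, vocab, (batch, n_masked))
    glosses = torch.randn(batch, n_masked, hidden)
    print(loss_fn(hidden_states, positions, labels, glosses).item())
```

In the paper's setting the gloss embeddings would come from an encoder over dictionary definitions; here random tensors stand in so the sketch runs on its own.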


Related research

10/04/2020 · On Losses for Modern Language Models
BERT set many state-of-the-art results over varied NLU benchmarks by pre...

02/16/2020 · SBERT-WK: A Sentence Embedding Method by Dissecting BERT-based Word Models
Sentence embedding is an important research topic in natural language pr...

03/29/2021 · Whitening Sentence Representations for Better Semantics and Faster Retrieval
Pre-training models such as BERT have achieved great success in many nat...

10/29/2020 · Contextual BERT: Conditioning the Language Model Using a Global State
BERT is a popular language model whose main pre-training task is to fill...

08/04/2020 · Taking Notes on the Fly Helps BERT Pre-training
How to make unsupervised language pre-training more efficient and less r...

03/30/2021 · Kaleido-BERT: Vision-Language Pre-training on Fashion Domain
We present a new vision-language (VL) pre-training model dubbed Kaleido-...

12/16/2020 · Focusing More on Conflicts with Mis-Predictions Helps Language Pre-Training
In this work, we propose to improve the effectiveness of language pre-tr...
