Visualizing and Understanding the Effectiveness of BERT

08/15/2019
by Yaru Hao, et al.

Language model pre-training, such as BERT, has achieved remarkable results in many NLP tasks. However, it is unclear why the pre-training-then-fine-tuning paradigm improves performance and generalization across different tasks. In this paper, we propose visualizing the loss landscapes and optimization trajectories of fine-tuning BERT on specific datasets. First, we find that pre-training provides a good initial point across downstream tasks, which leads to wider optima and easier optimization than training from scratch. We also demonstrate that the fine-tuning procedure is robust to overfitting, even though BERT is highly over-parameterized for downstream tasks. Second, the visualization results indicate that fine-tuned BERT tends to generalize better because of its flat and wide optima and the consistency between the training loss surface and the generalization error surface. Third, the lower layers of BERT are more invariant during fine-tuning, which suggests that the layers close to the input learn more transferable representations of language.
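
The loss-landscape views described above are built by evaluating the training loss along directions in parameter space; the simplest such view is a one-dimensional slice between the pre-trained initialization and the fine-tuned weights. The sketch below is a minimal illustration of that idea rather than the authors' code: the model, checkpoints, and dataloader are placeholders, and it assumes a HuggingFace-style PyTorch classifier that returns a .loss field when labels are included in the batch.

# Plot a 1D slice of the fine-tuning loss surface:
#   L(alpha) with theta(alpha) = (1 - alpha) * theta_start + alpha * theta_end,
# where theta_start is the pre-trained initialization and theta_end the
# fine-tuned solution. Checkpoint and dataloader names are placeholders.

import torch

def loss_along_interpolation(model, state_start, state_end, dataloader,
                             device="cuda", num_points=25):
    """Return (alphas, mean losses) along the linear path between two checkpoints."""
    model.to(device)
    alphas = [i / (num_points - 1) for i in range(num_points)]
    losses = []
    for a in alphas:
        # Interpolated parameters theta(alpha) for this point on the path.
        interpolated = {
            name: (1.0 - a) * state_start[name].float() + a * state_end[name].float()
            for name in state_start
        }
        model.load_state_dict(interpolated, strict=False)
        model.eval()

        total, batches = 0.0, 0
        with torch.no_grad():
            for batch in dataloader:
                batch = {k: v.to(device) for k, v in batch.items()}
                outputs = model(**batch)      # assumes labels are in the batch,
                total += outputs.loss.item()  # so the model returns a .loss
                batches += 1
        losses.append(total / max(batches, 1))
    return alphas, losses

For instance, state_start could come from a freshly loaded pre-trained BERT checkpoint and state_end from weights saved after fine-tuning; a loss curve that stays low over a broad range of alpha around 1.0 corresponds to the wide, flat optima the abstract describes, whereas a sharp spike away from alpha = 1.0 would indicate a narrow minimum.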



Related research

11/24/2022  Using Selective Masking as a Bridge between Pre-training and Fine-tuning
Pre-training a language model and then fine-tuning it for downstream tas...

12/31/2020  EarlyBERT: Efficient BERT Training via Early-bird Lottery Tickets
Deep, heavily overparameterized language models such as BERT, XLNet and ...

01/30/2023  CSDR-BERT: a pre-trained scientific dataset match model for Chinese Scientific Dataset Retrieval
As the number of open and shared scientific datasets on the Internet inc...

06/22/2022  reStructured Pre-training
In this work, we try to decipher the internal connection of NLP technolo...

11/05/2019  MML: Maximal Multiverse Learning for Robust Fine-Tuning of Language Models
Recent state-of-the-art language models utilize a two-phase training pro...

08/20/2023  How Good Are Large Language Models at Out-of-Distribution Detection?
Out-of-distribution (OOD) detection plays a vital role in enhancing the ...

04/14/2021  Masked Language Modeling and the Distributional Hypothesis: Order Word Matters Pre-training for Little
A possible explanation for the impressive performance of masked language...
