Syntax-Enhanced Pre-trained Model

12/28/2020
by Zenan Xu, et al.

We study the problem of leveraging the syntactic structure of text to enhance pre-trained models such as BERT and RoBERTa. Existing methods utilize the syntax of text either in the pre-training stage or in the fine-tuning stage only, and thus suffer from a discrepancy between the two stages. Such a discrepancy leads to the necessity of human-annotated syntactic information, which limits the application of existing methods to broader scenarios. To address this, we present a model that utilizes the syntax of text in both the pre-training and fine-tuning stages. Our model is based on the Transformer with a syntax-aware attention layer that considers the dependency tree of the text. We further introduce a new pre-training task of predicting the syntactic distance among tokens in the dependency tree. We evaluate the model on three downstream tasks: relation classification, entity typing, and question answering. Results show that our model achieves state-of-the-art performance on six public benchmark datasets. We have two major findings. First, we demonstrate that infusing automatically produced syntax of text improves pre-trained models. Second, global syntactic distances among tokens bring larger performance gains than local head relations between contiguous tokens.
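The pre-training task described above hinges on syntactic distance: the length of the shortest path between two tokens in the dependency tree. Below is a minimal sketch of how such pairwise distances could be computed from a head-pointer encoding of the tree; the function name and the head-array convention are illustrative assumptions, not the authors' implementation.

```python
from collections import deque

def syntactic_distances(heads):
    """Pairwise shortest-path distances over a dependency tree.

    heads[i] is the index of token i's head; the root token has head -1.
    Arcs are treated as undirected, so the result is a symmetric n x n matrix.
    """
    n = len(heads)
    # Build an undirected adjacency list from the head pointers.
    adj = [[] for _ in range(n)]
    for child, head in enumerate(heads):
        if head >= 0:
            adj[child].append(head)
            adj[head].append(child)
    # BFS from every token; fine for sentence-length inputs.
    dist = [[0] * n for _ in range(n)]
    for src in range(n):
        seen = {src}
        queue = deque([(src, 0)])
        while queue:
            node, d = queue.popleft()
            dist[src][node] = d
            for nxt in adj[node]:
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, d + 1))
    return dist

# Example: "She saw him", with "saw" (index 1) as the root.
# "She" and "him" are two arcs apart (She -> saw -> him).
print(syntactic_distances([1, -1, 1])[0][2])  # prints 2
```

In practice the tree would come from an automatic dependency parser (consistent with the paper's finding that automatically produced syntax suffices), and these distances would serve as targets for the distance-prediction objective.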


