Attending to Entities for Better Text Understanding

11/11/2019
by Pengxiang Cheng et al.

Recent progress in NLP has witnessed the development of large-scale pre-trained language models (GPT, BERT, XLNet, etc.) based on the Transformer (Vaswani et al. 2017), and such models have achieved state-of-the-art results on a range of end tasks, approaching human performance. This demonstrates the power of the stacked self-attention architecture when paired with a sufficient number of layers and a large amount of pre-training data. However, on tasks that require complex and long-distance reasoning, where surface-level cues are not enough, there is still a large gap between pre-trained models and human performance. Strubell et al. (2018) recently showed that it is possible to inject knowledge of syntactic structure into a model through supervised self-attention. We conjecture that a similar injection of semantic knowledge, in particular coreference information, into an existing model would improve performance on such complex problems. On the LAMBADA (Paperno et al. 2016) task, we show that a model trained from scratch with coreference as auxiliary supervision for self-attention outperforms the largest GPT-2 model, setting a new state of the art, while containing only a tiny fraction of GPT-2's parameters. We also conduct a thorough analysis of different variants of model architectures and supervision configurations, suggesting future directions for applying similar techniques to other problems.
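To make the auxiliary-supervision idea concrete, the sketch below shows one way such a loss could be written in PyTorch: the attention distribution of a designated head is pushed toward each token's gold coreference antecedent, in the spirit of Strubell et al. (2018). This is a minimal illustration under assumed inputs, not the paper's implementation; the function and variable names (coref_attention_loss, lambda_coref, etc.) are hypothetical.

import torch

def coref_attention_loss(attn_probs, antecedent_ids, coref_mask):
    # Auxiliary supervision for a single self-attention head (illustrative
    # sketch, not the paper's code): maximize the attention weight that each
    # coreferent token places on its gold antecedent position.
    #   attn_probs:     (batch, seq_len, seq_len) attention weights of the
    #                   supervised head; each row sums to 1 over key positions.
    #   antecedent_ids: (batch, seq_len) index of each token's gold antecedent.
    #   coref_mask:     (batch, seq_len) 1 for tokens in a coreference chain,
    #                   0 elsewhere.
    coref_mask = coref_mask.float()
    log_probs = torch.log(attn_probs.clamp_min(1e-9))
    gold_log_prob = log_probs.gather(-1, antecedent_ids.unsqueeze(-1)).squeeze(-1)
    # Negative log-likelihood of the gold antecedent, averaged over coreferent tokens.
    return -(gold_log_prob * coref_mask).sum() / coref_mask.sum().clamp_min(1.0)

# The auxiliary term would then be added to the usual language-modeling loss,
# weighted by a hyperparameter (lambda_coref is a hypothetical name):
#   total_loss = lm_loss + lambda_coref * coref_attention_loss(head_probs, antecedents, mask)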


