LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention

10/02/2020
by Ikuya Yamada, et al.

Entity representations are useful in natural language tasks involving entities. In this paper, we propose new pretrained contextualized representations of words and entities based on the bidirectional transformer. The proposed model treats words and entities in a given text as independent tokens, and outputs contextualized representations of them. Our model is trained using a new pretraining task based on the masked language model of BERT. The task involves predicting randomly masked words and entities in a large entity-annotated corpus retrieved from Wikipedia. We also propose an entity-aware self-attention mechanism that is an extension of the self-attention mechanism of the transformer, and considers the types of tokens (words or entities) when computing attention scores. The proposed model achieves impressive empirical performance on a wide range of entity-related tasks. In particular, it obtains state-of-the-art results on five well-known datasets: Open Entity (entity typing), TACRED (relation classification), CoNLL-2003 (named entity recognition), ReCoRD (cloze-style question answering), and SQuAD 1.1 (extractive question answering). Our source code and pretrained representations are available at https://github.com/studio-ousia/luke.
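To make the entity-aware self-attention mechanism concrete, below is a minimal single-head sketch in PyTorch. It assumes the mechanism uses a separate query projection for each (query type, key type) pair of token types (word or entity) while sharing keys and values, which is one straightforward way to realize "considering the types of tokens when computing attention scores". Class and parameter names are illustrative and not taken from the released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class EntityAwareSelfAttention(nn.Module):
    """Single-head sketch: the attention score between two tokens uses a
    query projection chosen by the (query type, key type) pair, where each
    token is either a word or an entity."""

    def __init__(self, hidden_size: int):
        super().__init__()
        # Separate query projections for the four token-type pairs
        # (word->word, word->entity, entity->word, entity->entity);
        # keys and values are shared across token types.
        self.q_w2w = nn.Linear(hidden_size, hidden_size)
        self.q_w2e = nn.Linear(hidden_size, hidden_size)
        self.q_e2w = nn.Linear(hidden_size, hidden_size)
        self.q_e2e = nn.Linear(hidden_size, hidden_size)
        self.key = nn.Linear(hidden_size, hidden_size)
        self.value = nn.Linear(hidden_size, hidden_size)
        self.scale = hidden_size ** 0.5

    def forward(self, hidden: torch.Tensor, is_entity: torch.Tensor) -> torch.Tensor:
        # hidden: (seq_len, hidden_size); is_entity: (seq_len,) boolean mask
        k = self.key(hidden)    # (seq_len, hidden_size)
        v = self.value(hidden)  # (seq_len, hidden_size)

        # Queries used when the *key* token is a word vs. an entity,
        # picking the projection that matches the query token's own type.
        q_to_word = torch.where(is_entity.unsqueeze(-1),
                                self.q_e2w(hidden), self.q_w2w(hidden))
        q_to_entity = torch.where(is_entity.unsqueeze(-1),
                                  self.q_e2e(hidden), self.q_w2e(hidden))

        # For every key position, select the score computed with the
        # query projection matching that key's token type.
        scores_word = q_to_word @ k.T      # (seq_len, seq_len)
        scores_entity = q_to_entity @ k.T  # (seq_len, seq_len)
        scores = torch.where(is_entity.unsqueeze(0), scores_entity, scores_word)

        attn = F.softmax(scores / self.scale, dim=-1)
        return attn @ v
```

Note the design implied here: only the query projection is made type-dependent, so the mechanism adds a modest number of parameters over standard self-attention while letting word-to-entity and entity-to-word interactions be scored differently from word-to-word ones.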

