Contextual Knowledge Selection and Embedding towards Enhanced Pre-Trained Language Models

09/29/2020
by Yusheng Su, et al.

Several recent efforts have been devoted to enhancing pre-trained language models (PLMs) with extra heterogeneous knowledge from knowledge graphs (KGs), achieving consistent improvements on various knowledge-driven NLP tasks. However, most of these knowledge-enhanced PLMs embed static sub-graphs of KGs ("knowledge context"), ignoring the fact that the knowledge a PLM requires may change dynamically with the specific input text ("textual context"). In this paper, we propose a novel framework named DKPLM that dynamically selects and embeds knowledge context according to textual context for PLMs, thereby avoiding the interference of redundant and ambiguous KG knowledge that does not match the input text. Our experimental results show that DKPLM outperforms various baselines on typical knowledge-driven NLP tasks, indicating the effectiveness of utilizing dynamic knowledge context for language understanding. Beyond the performance improvements, the dynamically selected knowledge in DKPLM describes the semantics of text-related knowledge in a more interpretable form than conventional PLMs. Our source code and datasets will be made available to provide more details on DKPLM.
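The abstract does not specify how DKPLM scores or filters knowledge context, so the sketch below is only a minimal illustration of the general idea of dynamic knowledge selection: embed the textual context and a set of candidate KG triples, score the candidates against the text, and keep only the best-matching ones before injecting them into the PLM. All names here (select_knowledge_context, text_emb, triple_embs, k) are hypothetical and not taken from the paper.

```python
import torch
import torch.nn.functional as F

def select_knowledge_context(text_emb: torch.Tensor,
                             triple_embs: torch.Tensor,
                             k: int = 4):
    """Keep the k candidate KG triples whose embeddings best match the text.

    text_emb:    (d,) embedding of the textual context (e.g. a sentence).
    triple_embs: (n, d) embeddings of n candidate KG triples.
    Returns the indices and similarity scores of the selected triples.
    """
    # Cosine similarity between each candidate triple and the text.
    scores = F.cosine_similarity(triple_embs, text_emb.unsqueeze(0), dim=-1)
    # Dynamic selection: only the best-matching triples are kept for
    # embedding, so redundant or ambiguous knowledge is filtered per input.
    top = torch.topk(scores, k=min(k, triple_embs.size(0)))
    return top.indices, top.values

# Toy usage with random embeddings (purely illustrative):
text_emb = torch.randn(16)
triple_embs = torch.randn(8, 16)
idx, sims = select_knowledge_context(text_emb, triple_embs, k=3)
```

In an actual knowledge-enhanced PLM, the selected triples would then be fused with the token representations; this sketch only shows the selection step that makes the knowledge context depend on the textual context.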

Related research:

05/17/2019 · ERNIE: Enhanced Language Representation with Informative Entities
Neural language representation models such as BERT pre-trained on large-...

03/17/2022 · Enhanced Temporal Knowledge Embeddings with Contextualized Language Representations
With the emerging research effort to integrate structured and unstructur...

10/22/2020 · Language Models are Open Knowledge Graphs
This paper shows how to construct knowledge graphs (KGs) from pre-traine...

04/15/2018 · Context and Humor: Understanding Amul advertisements of India
Contextual knowledge is the most important element in understanding lang...

07/16/2022 · Multimodal Dialog Systems with Dual Knowledge-enhanced Generative Pretrained Language Model
Text response generation for multimodal task-oriented dialog systems, wh...

01/28/2021 · Combining pre-trained language models and structured knowledge
In recent years, transformer-based language models have achieved state o...

09/25/2021 · Sorting through the noise: Testing robustness of information processing in pre-trained language models
Pre-trained LMs have shown impressive performance on downstream NLP task...