MLRIP: Pre-training a military language representation model with informative factual knowledge and professional knowledge base

07/28/2022
by   Hui Li, et al.

Incorporating prior knowledge into pre-trained language models has proven effective for knowledge-driven NLP tasks, such as entity typing and relation extraction. Current pre-training procedures usually inject external knowledge into models through knowledge masking, knowledge fusion, and knowledge replacement. However, the factual information contained in the input sentences has not been fully mined, and the external knowledge to be injected has not been strictly vetted. As a result, context information cannot be fully exploited, and either extra noise is introduced or the amount of knowledge injected is limited. To address these issues, we propose MLRIP, which modifies the knowledge masking strategies proposed by ERNIE-Baidu and introduces a two-stage entity replacement strategy. Extensive experiments with comprehensive analyses illustrate the superiority of MLRIP over BERT-based models in military knowledge-driven NLP tasks.
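The abstract does not spell out MLRIP's modified masking procedure, but the ERNIE-Baidu strategy it builds on is entity-level masking: rather than masking random subword tokens as in BERT, all tokens of a selected entity span are masked together, forcing the model to recover the whole entity from context. A minimal sketch of that baseline idea (function name, span format, and military example are illustrative assumptions, not MLRIP's actual implementation):

```python
import random

def entity_level_mask(tokens, entity_spans, mask_token="[MASK]",
                      mask_prob=0.5, seed=0):
    """Sketch of ERNIE-style entity-level masking: mask every token of a
    selected entity span, instead of independent random tokens as in BERT,
    so the model must predict the whole entity from its context."""
    rng = random.Random(seed)
    masked = list(tokens)
    for start, end in entity_spans:  # spans are [start, end) token indices
        if rng.random() < mask_prob:
            for i in range(start, end):
                masked[i] = mask_token
    return masked

# Hypothetical military-domain example: "F - 22" is one entity span.
tokens = ["the", "F", "-", "22", "is", "a", "stealth", "fighter"]
print(entity_level_mask(tokens, [(1, 4)], mask_prob=1.0))
# masks all three tokens of the entity, leaving the context intact
```

MLRIP's two-stage entity replacement would additionally substitute checked entities from a professional knowledge base in place of some spans, but the paper's abstract gives no further detail, so that step is not sketched here.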


