Building Language Models for Text with Named Entities

05/13/2018
by Md Rizwan Parvez, et al.

Text in many domains involves a significant amount of named entities. Predicting the entity names is often challenging for a language model because they appear less frequently in the training corpus. In this paper, we propose a novel and effective approach to building a discriminative language model that can learn entity names by leveraging their entity type information. We also introduce two benchmark datasets based on recipes and Java programming code, on which we evaluate the proposed model. Experimental results show that our model achieves 52.2% better perplexity in generation than state-of-the-art language models.
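To make the idea concrete, below is a minimal sketch (not the authors' implementation) of a type-aware language model: a shared LSTM predicts a distribution over regular vocabulary words plus one placeholder per entity type, and a per-type head then chooses the concrete entity name. The class name TypeAwareLM, the layer sizes, the two-stage factorization, and the toy entity types are illustrative assumptions.

import torch
import torch.nn as nn

class TypeAwareLM(nn.Module):
    def __init__(self, vocab_size, type_entity_sizes, embed_dim=128, hidden_dim=256):
        super().__init__()
        self.num_types = len(type_entity_sizes)
        # Output space = regular words + one placeholder per entity type.
        self.embed = nn.Embedding(vocab_size + self.num_types, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.word_or_type = nn.Linear(hidden_dim, vocab_size + self.num_types)
        # One head per entity type to pick a concrete entity name for that type.
        self.entity_heads = nn.ModuleList(
            [nn.Linear(hidden_dim, n) for n in type_entity_sizes]
        )

    def forward(self, tokens):
        # tokens: (batch, seq_len) ids over words + type placeholders
        hidden, _ = self.lstm(self.embed(tokens))
        word_or_type_logits = self.word_or_type(hidden)
        entity_logits = [head(hidden) for head in self.entity_heads]
        return word_or_type_logits, entity_logits

# Tiny usage example with made-up sizes: 1000 regular words and two
# entity types (e.g. "ingredient" with 50 names, "utensil" with 20 names).
model = TypeAwareLM(vocab_size=1000, type_entity_sizes=[50, 20])
batch = torch.randint(0, 1000, (4, 12))   # random token ids
word_logits, entity_logits = model(batch)
print(word_logits.shape)                  # torch.Size([4, 12, 1002])
print(entity_logits[0].shape)             # torch.Size([4, 12, 50])

The design choice here is that rare entity names never compete directly with the full word vocabulary; they are only scored within their (much smaller) type-specific name set, which is the intuition the abstract describes.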

Related research

04/09/2019 · Knowledge-Augmented Language Model and its Application to Unsupervised Named-Entity Recognition
Traditional language models are unable to efficiently model entity names...

01/01/2021 · Sensei: Self-Supervised Sensor Name Segmentation
A sensor name, typically an alphanumeric string, encodes the key context...

07/30/2021 · Towards Continual Entity Learning in Language Models for Conversational Agents
Neural language models (LM) trained on diverse corpora are known to work...

08/13/2019 · Improving Generalization in Coreference Resolution via Adversarial Training
In order for coreference resolution systems to be useful in practice, th...

10/22/2020 · UniCase – Rethinking Casing in Language Models
In this paper, we introduce a new approach to dealing with the problem o...

10/04/2017 · Counterfactual Language Model Adaptation for Suggesting Phrases
Mobile devices use language models to suggest words and phrases for use ...

05/01/2015 · Grounded Discovery of Coordinate Term Relationships between Software Entities
We present an approach for the detection of coordinate-term relationship...
