A Survey on Neural Network Language Models

06/09/2019
by Kun Jing, et al.

As a core component of Natural Language Processing (NLP) systems, a Language Model (LM) provides word representations and probability estimates for word sequences. Neural Network Language Models (NNLMs) overcome the curse of dimensionality and improve upon the performance of traditional LMs. This paper presents a survey of NNLMs. We first describe the structure of classic NNLMs, then introduce and analyze the major improvements. We also summarize and compare the corpora and toolkits used for NNLMs, and discuss some directions for future research.
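For context, the sketch below illustrates what the abstract calls a classic NNLM: a feed-forward network in the style of Bengio et al.'s neural probabilistic language model, which embeds a fixed window of preceding words and outputs a distribution over the next word. This is a minimal illustration assuming PyTorch; the class name, layer sizes, and hyperparameters (FeedForwardNNLM, embed_dim, hidden_dim, etc.) are illustrative choices, not details taken from the survey.

```python
# Minimal sketch of a classic feed-forward NNLM (Bengio-style).
# Hypothetical hyperparameters; the survey does not fix these values.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeedForwardNNLM(nn.Module):
    def __init__(self, vocab_size, embed_dim=100, context_size=4, hidden_dim=256):
        super().__init__()
        # Shared word embeddings: the "word representations" an LM provides.
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.hidden = nn.Linear(context_size * embed_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, context_ids):
        # context_ids: (batch, context_size) indices of the preceding words.
        x = self.embed(context_ids).flatten(start_dim=1)
        h = torch.tanh(self.hidden(x))
        # Log-probabilities over the next word: the "probability estimates"
        # for word sequences mentioned in the abstract.
        return F.log_softmax(self.out(h), dim=-1)

# Usage: score candidates for the next word given a 4-word context
# (the word indices here are purely illustrative).
model = FeedForwardNNLM(vocab_size=10000)
context = torch.tensor([[12, 7, 256, 3]])
log_probs = model(context)  # shape: (1, 10000)
```

Because the context words share one embedding table and the next-word distribution is computed from dense vectors rather than discrete n-gram counts, a model of this form generalizes across similar word sequences, which is how NNLMs address the curse of dimensionality noted in the abstract.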

