A Study on Neural Network Language Modeling

08/24/2017
by Dengliang Shi, et al.

An exhaustive study of neural network language modeling (NNLM) is performed in this paper. Different architectures of basic neural network language models are described and examined. A number of improvements over basic neural network language models, including importance sampling, word classes, caching, and bidirectional recurrent neural networks (BiRNN), are studied separately, and the advantages and disadvantages of each technique are evaluated. Then, the limits of neural network language modeling are explored from the aspects of model architecture and knowledge representation. Part of the statistical information in a word sequence is lost when it is processed word by word in a fixed order, and the mechanism of training a neural network by updating weight matrices and vectors imposes severe restrictions on any significant enhancement of NNLM. As for knowledge representation, the knowledge represented by a neural network language model is the approximate probability distribution of word sequences in a particular training data set, rather than knowledge of the language itself or of the information conveyed by word sequences in a natural language. Finally, some directions for further improving neural network language modeling are discussed.
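To make the word-by-word processing described above concrete, the following is a minimal sketch of a basic recurrent neural network language model of the kind the paper examines. It assumes PyTorch; all module names and hyperparameters are illustrative and not taken from the paper.

    # Minimal sketch (assumption: PyTorch) of a basic RNN language model.
    # Sizes and names are illustrative, not from the paper.
    import torch
    import torch.nn as nn

    class RNNLM(nn.Module):
        def __init__(self, vocab_size, embed_dim=128, hidden_dim=256):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim)  # word id -> dense vector
            self.rnn = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
            self.proj = nn.Linear(hidden_dim, vocab_size)     # hidden state -> next-word logits

        def forward(self, tokens, state=None):
            # tokens: (batch, seq_len) integer word ids, consumed one position
            # at a time in a fixed order -- the sequential constraint the
            # paper identifies as losing part of the statistical information.
            emb = self.embed(tokens)
            out, state = self.rnn(emb, state)
            return self.proj(out), state

    # Training objective: predict each next word given its prefix, i.e. fit an
    # approximate probability distribution over the word sequences of the
    # training data, which is the kind of knowledge the paper says NNLMs learn.
    model = RNNLM(vocab_size=10000)
    tokens = torch.randint(0, 10000, (4, 20))  # toy batch of word ids
    logits, _ = model(tokens[:, :-1])
    loss = nn.functional.cross_entropy(
        logits.reshape(-1, 10000), tokens[:, 1:].reshape(-1))
    loss.backward()  # training proceeds by updating weight matrices and vectors

The improvements surveyed in the paper (importance sampling, word classes, caching, BiRNN) modify this baseline at specific points, for example replacing the full-vocabulary softmax over the projection layer or running a second recurrence over the reversed sequence.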
