Evaluating Computational Language Models with Scaling Properties of Natural Language

06/22/2019
by Shuntaro Takahashi, et al.

In this article, we evaluate computational models of natural language with respect to the universal statistical behaviors of natural language. Statistical mechanical analyses have revealed that natural language text is characterized by scaling properties, which quantify the global structure of the vocabulary population and the long memory of a text. We study whether five scaling properties (given by Zipf's law, Heaps' law, Ebeling's method, Taylor's law, and long-range correlation analysis) can serve as evaluation metrics for computational models. Specifically, we test n-gram language models, a probabilistic context-free grammar (PCFG), language models based on Simon/Pitman-Yor processes, neural language models, and generative adversarial networks (GANs) for text generation. Our analysis reveals that language models based on recurrent neural networks (RNNs) with a gating mechanism (i.e., long short-term memory, LSTM; gated recurrent units, GRUs; and quasi-recurrent neural networks, QRNNs) are the only computational models that can reproduce the long-memory behavior of natural language. Furthermore, through comparison with recently proposed model-based evaluation methods, we find that the exponent of Taylor's law is a good indicator of model quality.
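As a concrete illustration of one of these analyses, below is a minimal sketch (not the authors' code) of estimating the Taylor's law exponent from a tokenized text. Taylor's law relates the mean count mu and the standard deviation sigma of each word across fixed-length text segments as sigma proportional to mu^alpha; an i.i.d. process yields alpha = 0.5, whereas natural language is reported to give larger values (around 0.58). The function name, the segment length, and the plain least-squares fit are illustrative assumptions, not the paper's exact procedure.

```python
# Minimal sketch of a Taylor's law analysis for a token sequence.
# Assumption: segment_len = 1000 is an illustrative choice, not the
# value used in the paper.

from collections import Counter
import numpy as np

def taylor_exponent(tokens, segment_len=1000):
    """Estimate the Taylor's law exponent alpha of a token sequence."""
    # Split the token sequence into non-overlapping segments.
    segments = [tokens[i:i + segment_len]
                for i in range(0, len(tokens) - segment_len + 1, segment_len)]
    seg_counts = [Counter(seg) for seg in segments]

    mu, sigma = [], []
    for w in set(tokens):
        counts = np.array([c[w] for c in seg_counts], dtype=float)
        m, s = counts.mean(), counts.std()
        if m > 0 and s > 0:  # drop words with no fluctuation (log undefined)
            mu.append(m)
            sigma.append(s)

    # Fit log(sigma) = alpha * log(mu) + const by least squares.
    alpha, _ = np.polyfit(np.log(mu), np.log(sigma), 1)
    return alpha
```

Applied to a generated text and to the training corpus, a mismatch in the estimated alpha (e.g., a value near 0.5, as for shuffled text) would indicate that the model fails to reproduce the burstiness of word occurrences that Taylor's law captures.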

