Autocorrelations Decay in Texts and Applicability Limits of Language Models

05/11/2023
by Nikolay Mikhaylovskiy et al.

We show that the laws of autocorrelation decay in texts are closely related to the applicability limits of language models. Using distributional semantics, we empirically demonstrate that word autocorrelations in texts decay according to a power law. We further show that distributional semantics yields consistent autocorrelation decay exponents for texts translated into multiple languages. Autocorrelation decay in generated texts differs quantitatively, and often qualitatively, from that in literary texts. We conclude that language models exhibiting Markov behavior, including large autoregressive language models, may be limited when applied to long texts, whether for analysis or generation.
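The abstract does not spell out the exact estimator, so the following is a minimal sketch rather than the authors' code: it assumes pretrained word embeddings are available for each token, uses the mean cosine similarity between word vectors separated by a given lag as the autocorrelation measure, and fits the power-law exponent by linear regression on log-log axes. The function names and the choice of cosine similarity are assumptions for illustration.

```python
# Sketch (not the authors' code): estimate how word-level
# autocorrelation decays with distance in a text, using mean cosine
# similarity between word embeddings at each lag, then fit a
# power-law exponent acf ~ lag**(-alpha) in log-log space.
# Assumption: `vectors` is an (n_tokens, dim) array of pretrained
# embeddings (e.g. word2vec/GloVe) for the running text.

import numpy as np

def autocorrelation_by_lag(vectors, max_lag=1000):
    """Mean cosine similarity between word vectors at each lag."""
    unit = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    lags = np.arange(1, max_lag + 1)
    acf = np.array([
        np.mean(np.sum(unit[:-lag] * unit[lag:], axis=1))
        for lag in lags
    ])
    return lags, acf

def power_law_exponent(lags, acf):
    """Fit acf ~ lag**(-alpha) via linear regression on log-log axes."""
    mask = acf > 0                          # log requires positive values
    slope, _ = np.polyfit(np.log(lags[mask]), np.log(acf[mask]), 1)
    return -slope                           # decay exponent alpha

# Hypothetical usage:
# lags, acf = autocorrelation_by_lag(vectors)
# alpha = power_law_exponent(lags, acf)
# print(f"estimated decay exponent: {alpha:.2f}")
```

A straight line on log-log axes indicates power-law decay; a straight line on semi-log axes would instead indicate the exponential decay characteristic of Markov processes, which is the distinction the paper's conclusion rests on.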

Related research

02/09/2020 · Limits of Detecting Text Generated by Large-Scale Language Models
Some consider large-scale language models that can generate long and coh...

11/20/2022 · Pragmatic Constraint on Distributional Semantics
This paper studies the limits of language models' statistical learning i...

04/21/2018 · Taylor's law for Human Linguistic Sequences
Taylor's law describes the fluctuation characteristics underlying a syst...

03/15/2022 · Do Language Models Plagiarize?
Past literature has illustrated that language models do not fully unders...

09/02/2023 · Multilingual Text Representation
Modern NLP breakthrough includes large multilingual models capable of pe...

10/10/2022 · Metaphorical Paraphrase Generation: Feeding Metaphorical Language Models with Literal Texts
This study presents a new approach to metaphorical paraphrase generation...

07/26/2023 · Three Bricks to Consolidate Watermarks for Large Language Models
The task of discerning between generated and natural texts is increasing...
