Empirical observations of ultraslow diffusion driven by the fractional dynamics in languages: Dynamical statistical properties of word counts of already popular words

Hayafumi Watanabe et al. (24 January 2018)

Ultraslow diffusion (i.e. logarithmic diffusion) has been extensively studied theoretically but has hardly been observed empirically. In this paper, firstly, we find ultraslow-like diffusion in the time series of word counts of already popular words by analysing three different nationwide language databases: (i) newspaper articles (Japanese), (ii) blog articles (Japanese), and (iii) page views of Wikipedia (English, French, Chinese, and Japanese). Secondly, we use theoretical analysis to show that this diffusion is essentially explained by a random walk model with power-law forgetting with the exponent β ≈ 0.5, which is related to the fractional Langevin equation. The exponent β characterises the speed of forgetting, and β ≈ 0.5 corresponds to (i) the border (or threshold) between stationary and nonstationary dynamics and (ii) dynamics right in the middle between IID noise (β = 1) and the normal random walk (β = 0). Thirdly, a generative model of the time series of word counts of already popular words, namely a Poisson process whose rate is sampled from the above-mentioned random walk model, can almost reproduce not only the empirical mean-squared displacement but also the power spectral density and the probability density function.


I Introduction

A language is a typical complex system and is characterised by well-known, language-independent statistical laws such as Zipf's law and Heaps' law altmann2016statistical . In this study, we investigate the dynamical statistical properties of languages by using massive databases of word usage that have been developed over the past 10 years. In particular, we focus on the stability, or the slowness of change, of the usage of already popular words from the viewpoint of diffusion on a complex system, and we show that a common logarithmic diffusion (i.e. very slow diffusion or change) is approximately observed across several languages and media.

Diffusion on complex systems, which is an attractive research topic in physics and complex-systems science, has been extensively studied both theoretically and empirically and has been applied to various systems, such as biological or social systems. Diffusion on complex systems is basically characterised by the mean-squared displacement (MSD). The vast majority of studies report that the MSD grows asymptotically according to the power law

$\langle x(t)^2 \rangle \propto t^{\alpha}$. (1)

In the case of $\alpha = 1$, the diffusion corresponds to normal diffusion, such as the diffusion of particles in water, which is modelled using a random walk. In other cases, it is known as anomalous diffusion; in particular, it is termed subdiffusion for $\alpha < 1$ and superdiffusion for $\alpha > 1$. Many complex systems have been shown to exhibit this power-law type of anomalous diffusion in diverse areas, such as physics, chemistry, geophysics, biology, and economics metzler2000random ; da2014ultraslow . In theoretical studies, anomalous diffusion is explained using the correlation of random noise (e.g. a random walk in disordered media) bouchaud1990anomalous , an infinite variance of the noise (e.g. a Lévy flight) bouchaud1990anomalous ; metzler2000random , a power-law waiting time (e.g. a continuous-time random walk) bouchaud1990anomalous ; metzler2000random ; burov2011single , and a long memory (e.g. a fractional random walk) lowen2005fractal ; burov2011single .

Another class of anomalous diffusion is predicted by theories in which the MSD grows logarithmically,

$\langle x(t)^2 \rangle \propto (\log t)^{\gamma}$. (2)

This type of diffusion is known as "ultraslow diffusion". One of the best-known and first-discovered examples is diffusion in a disordered medium (known as Sinai diffusion for $\gamma = 4$) sinai1983limiting . Thereafter, other types of models that explain ultraslow diffusion have also been proposed, such as a continuous-time random walk (CTRW) with waiting times generated by a logarithmic-form probability density function godec2014localisation , a CTRW with waiting times generated by a power-law probability density function together with an excluded-volume effect sanders2014severe , temporal change of diffusion coefficients bodrova2015ultraslow , spatial change of diffusion coefficients cherstvy2013population , and fractional dynamics eab2011fractional .

Although many theoretical studies of ultraslow diffusion have been reported, we were unable to find clear empirical examples thereof. A rare example of diffusion related to the logarithmic function, which is similar to but different from the "ultraslow diffusion" defined by Eq. 2, is the mobility of humans measured by mobile-phone data. In that study, by using both data and models, the authors argued that the MSD grows logarithmically or becomes saturated (i.e. the MSD grows more slowly than any power law). This diffusion is mainly explained by the CTRW and the preferential return (to home) effect song2010modelling . Diffusion resembling this very slow diffusion was also observed in the mobility of monkeys, and the authors maintained that this diffusion may be explained by the heterogeneity of the space, as in Sinai's model boyer2011non . Note that logarithmic "relaxation" phenomena, which are known as "ageing", are observed in many systems, such as paper crumpling matan2002crumpling and granular compaction richard2005slow .

We investigate the stability, or the dynamics of usage, of already popular words. In other words, we focus only on the dynamics of the "mature phase" in the life trajectory of words (which consists of an "infant phase", an "adolescent phase", and a "mature phase") petersen2012statistical ; gerlach2013stochastic . The pioneering study of the stability and variation of language from the viewpoint of dynamical statistical properties was given by Lieberman et al. lieberman2007quantifying . In that study, the regularization of English verbs (i.e., the change from irregular to regular verbs) over the past 1200 years was investigated, and a 0.5th-power law of the regularization rate as a function of word frequency (i.e., higher-frequency words change less, or are more stable) was noted. That study quantified the stability of language on a historical timescale (i.e., from 100 to 1000 years). In contrast, our study focuses on stability on a shorter timescale (i.e., from 1 day to 10 years). Note that some findings relating to the dynamics or properties of words in the "infant phase", the "adolescent phase", or the total life trajectory (i.e. from birth to death) were obtained by using the Google Ngram corpus (which gives word frequencies occurring in printed books from 1520 to 2000) michel2011quantitative ; petersen2012statistical ; gerlach2013stochastic . In these studies, the authors found statistical properties such as that the typical time to reach the "adolescent phase" is about 20 or 30 years, that the MSD exhibits superdiffusion, and that the dynamics are related to Yule's, Simon's, and Gibrat's processes and preferential attachment petersen2012statistical ; gerlach2013stochastic .

Note that physicists have studied linguistic phenomena using the concepts of complex systems link1 , such as competitive dynamics abrams2003linguistics , statistical laws altmann2016statistical , complex networks cong2014approaching , phase transitions, and information theory i2005zipf .

In this paper, in order to precisely quantify the "stability" or the "speed of change" of the usage of already popular words (i.e. words in the mature phase), we measure the MSD by using actual data and introduce a time-evolution model of word frequencies for it. In addition, we clarify the dynamics behind this diffusion.

Firstly, we investigate the MSD of the time series of word counts in three types of actual data: (i) newspapers over 10 years (Japanese), (ii) blogs over 5 years (Japanese), and (iii) Wikipedia page views over 2 years (English, French, Chinese, and Japanese). This approach enabled us to observe ultraslow-like diffusion in all data sets (Figs. 2 and 3).

Secondly, we discuss the relation between the empirical results and the random walk model with power-law forgetting given by Eq. 10, which is related to the fractional Langevin equation and can essentially explain the ultraslow diffusion.

Thirdly, we introduce a model of word counts sampled from a Poisson process (Eq. 27) whose rate is generated by the previously mentioned random walk model (Eq. 10), in order to connect the ultraslow diffusion explained by Eq. 10 with peculiar properties of word-count data, such as discreteness (i.e. counts take non-negative integer values). In addition, we show that the model can consistently reproduce the following empirical dynamical statistical properties (Fig. 5):

  • Mean-squared displacement [MSD],

    $\sigma_j^2(L) \equiv \frac{1}{T-L}\sum_{t=1}^{T-L}\left(c_j(t+L)-c_j(t)\right)^2,$ (3)
  • Power spectral density [PSD] (periodogram),

    $S_j(f) \equiv \frac{1}{T}\left|\sum_{t=1}^{T}c_j(t)e^{-2\pi i f t}\right|^2,$ (4)
  • Probability density function [PDF] (histogram),

    $P_j(x;L) \equiv \frac{\#\{\,t : x \le c_j(t+L)-c_j(t) < x+\delta\,\}}{\delta\,(T-L)},$ (5)

where $c_j(t)$ is the word count scaled by the database size at the date $t$, as defined in Section II.2; $T$ is the last date of observation; $L$ is a time lag (a positive integer, $L = 1, 2, \dots$); $f$ is a (spectral) frequency; $\delta$ is the bin size of the histogram; $x$ represents the value of a bin of the histogram; and $\#\{\cdot\}$ means the number of elements of the set $\{\cdot\}$.

Finally, we conclude with a discussion.
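Before proceeding, the following is a minimal computational sketch of the three statistics defined above (Eqs. 3-5), assuming the normalised daily counts $c_j(t)$ are available as a NumPy array; the function names and the synthetic series are illustrative and are not taken from the original study.

```python
import numpy as np

def msd(c, L):
    """Temporal mean-squared displacement at lag L (cf. Eq. 3)."""
    d = c[L:] - c[:-L]
    return np.mean(d ** 2)

def periodogram(c):
    """Periodogram estimate of the power spectral density (cf. Eq. 4)."""
    T = len(c)
    spec = np.abs(np.fft.rfft(c)) ** 2 / T
    freq = np.fft.rfftfreq(T, d=1.0)      # frequency in 1/days
    return freq[1:], spec[1:]             # drop the zero-frequency term

def diff_histogram(c, L, bin_width):
    """Histogram (PDF estimate) of the lag-L differences (cf. Eq. 5)."""
    d = c[L:] - c[:-L]
    edges = np.arange(d.min(), d.max() + bin_width, bin_width)
    pdf, edges = np.histogram(d, bins=edges, density=True)
    return 0.5 * (edges[:-1] + edges[1:]), pdf

# usage with a synthetic series standing in for c_j(t)
rng = np.random.default_rng(0)
c = 1.0 + 0.01 * np.cumsum(rng.normal(size=3650))
print(msd(c, 30), diff_histogram(c, 30, 0.05)[1].sum() * 0.05)
```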

Figure 1: Time series of word appearance on Japanese blogs. (a) Example of a daily time series of raw word appearances for "Sanada" (a well-known Japanese family name) on Japanese blogs. (b) Daily time series of the normalised total number of blogs (i.e., the normalised scale of the database), which is estimated as described in Appendix A. (c) Corresponding daily time series of word appearances scaled by the normalised total number of blogs (i.e., the normalised scale of the database). We can confirm that the time variation of the raw word appearances shown in panel (a) is almost the same as that of the total number of blogs shown in panel (b).

II Data set

We analysed the daily time series of word counts in newspapers (Japanese), blogs (Japanese), and Wikipedia page views (English, French, Chinese, and Japanese).

II.1 Data sources

Newspapers

We obtained the time series of word appearances per day in nationwide Japanese newspapers by using the "Shinbun Trend in NIKKEI Telecom" database, which was provided by Nikkei Inc. Using this database, we obtained the daily number of articles containing a keyword across 80 newspapers published in Japan between Jan. 2005 and Sep. 2017 nikkei . Note that if an article contains a focused keyword two or more times (e.g. for the keyword "dog": "There is a dog. The dog is big."), the system counts it as one article. We used the top 10,000 words in frequency order as keywords. To obtain the word-frequency ranks, we referred to the pages entitled "Wiktionary:Frequency lists" in Wiktionary wikitionaly .

Blogs

We obtained the time series of the daily number of articles containing a keyword in nationwide Japanese blogs by using a large database of Japanese blogs ("Kuchikomi@kakaricho") provided by Hottolink, Inc. This database contains 3 billion Japanese blog articles and covers 90% of Japanese blogs from Nov. 2006 to Dec. 2012 RD_base . Note that, in common with the newspaper data, if an article contains a focused keyword two or more times, the system counts it as one article. We used 1,771 basic adjectives and 60,476 nouns from ipa-dic as keywords idadic .

Wikipedia page views

We obtained daily Wikipedia page views by using the Pageview API, which is a public API developed and maintained by the Wikimedia Foundation. This API provides analytical data about article page views (i.e. the number of page loads) of Wikipedia. By inputting an article title as a keyword (e.g. "dog"), a time period (e.g. from 1st Jan. 2017 to 30th Nov. 2017), and a language edition (e.g. the English Wikipedia) into the API, we can obtain a time series of counts of how many times people visited the focused article (e.g. the number of loads or page views of the "dog" page in the English Wikipedia) per day during the given time period. Although Wikipedia page views, unlike the newspaper and blog data, do not measure the appearance of a keyword in documents, they are often used in the same way to investigate daily changes in the interest in a keyword (or article). We obtained the data of the English, French, Chinese, and Japanese editions from Jul. 2015 to Sep. 2017 wikipedia_pageview . We used the top 10,000 words in frequency order as keywords with respect to each language RD_base . To obtain the word-frequency ranks, we referred to the pages entitled "Wiktionary:Frequency lists" in Wiktionary, as in the case of the newspaper data wikitionaly .

II.2 Normalised time-series of word appearances

We define herein the notation for the time series of the raw word counts $d_j(t)$ and the normalised word counts $c_j(t)$ as follows:

  • $d_j(t)$ ($t = 1, 2, \dots, T$; $j = 1, 2, \dots, W$) is the raw daily count of the $j$-th word within the nationwide dataset (Fig. 1(a)), where $T$ is the last date of observation and $W$ is the number of observed keywords.

    Concretely speaking, for the newspaper and blog data, $d_j(t)$ corresponds to the daily number of articles containing the $j$-th keyword in the database. For the Wikipedia page-view data, it corresponds to the daily page views of the article entitled with the $j$-th keyword (i.e. how many times people visited the focused article).

  • $c_j(t) \equiv d_j(t)/m(t)$ is the time series of the daily count normalised by the temporal scale of the database, $m(t)$ (Fig. 1(c)).

    $c_j(t)$ corresponds to the intrinsic time variation of the $j$-th word, separated from the effects of variations in the scale of the database (Figs. 1(b) and (c)). The scale of the database $m(t)$ almost corresponds to the (normalised) total number of articles (i.e. the temporal database size) for the newspaper and blog data. For the Wikipedia data, it is conceivable that $m(t)$ almost corresponds to the (normalised) temporal total number of users of Wikipedia in the focused language ($m(t)$ does not correspond to the size or the number of articles of the Wikipedia of the focused language itself).

    $m(t)$ is estimated herein by the ensemble median of the word counts at time $t$, normalised as described in Appendix A (Fig. 1(b)).

Figure 2: Empirical MSD given by Eq. 6 for typical words. The grey dots indicate the raw MSD, the thin black solid line indicates the corresponding 7-day moving median, and the thick solid blue line indicates the 365-day moving median. The thick dashed magenta line corresponds to the logarithmic curve given by Eq. 7. Panel (a) shows the newspaper data for "Tachiba" (position or standpoint in English), panel (b) the blog data for "Sanada" (a well-known Japanese family name), and panel (c) the English Wikipedia page views for "Handle". Panels (d), (e), and (f) are the corresponding figures on a semi-logarithmic scale. The results in these figures confirm that the logarithmic curves substantially agree with the empirical data.
Figure 3: (a-f) Ensemble median of the scaled MSD given by Eq. 9 for words with a mean count above 30 (we exclude words with a small mean because they have relatively low signal-to-noise ratios). The thick dashed magenta lines are the corresponding theoretical logarithmic curves. (a) Newspaper data. The grey dots indicate the raw ensemble scaled MSD, the thin black solid line indicates the corresponding 7-day moving median, and the thick red dash-dotted line with circles is the 365-day moving median. (b) MSD for the blog data. The grey thin lines indicate the 7-day moving median of the ensemble scaled MSD for nouns (solid line) and for adjectives (dash-dotted line). The thick black line with triangles is the 365-day moving median for nouns, and the red dash-dotted line with circles is that for adjectives. (c) MSD for the Wikipedia data. The grey thin lines indicate the 7-day moving median of the ensemble scaled MSD for English (solid line), French (dash-dotted line), Chinese (dotted line), and Japanese (long-dashed line). The thick lines indicate the 365-day moving median of the ensemble scaled MSD for English (black solid line with triangles), French (red dash-dotted line with circles), Chinese (green dotted line with plus signs), and Japanese (blue long-dashed line with squares). The results in these figures confirm that the theoretical curve substantially agrees with the corresponding empirical data. Note that, in the case of the newspaper, the scaled MSD given by Eq. 8 was replaced with the corresponding 365-day moving median to avoid the effect of strong weekly and annual cycles.
(g-i) Power spectral density analysis: the ensemble median of the word-independent normalised spectral density for words with a mean count above 30, given by Eq. 21. The thick dashed magenta lines are the theoretical curve given by Eq. 22, and the cyan lines are power-law guide lines. (g) Newspaper data. The black solid line is the raw normalised spectral density, and the red dash-dotted line is the corresponding 31-point moving median. (h) Blog data. The lines show the spectral density for nouns (black solid line) and for adjectives (red dash-dotted line). (i) Wikipedia data. The lines show the spectral density for English (black solid line), French (red dash-dotted line), Chinese (green short-dashed line), and Japanese (blue long-dashed line).
Figure 4: Histograms of the estimated forgetting exponent β in the model described by Eqs. 10 and 27 for words with a mean count above 30. Herein we estimate β for individual words and subsequently construct the histograms. Details of the estimation method are provided in Appendix C. The data are shown in (a) for the newspaper, (b) for the nouns (first row) and the adjectives (second row) of the blog data, and (c) for English (first row), French (second row), Chinese (third row), and Japanese (fourth row) Wikipedia articles, with all histograms standardised. From these figures, we can confirm that the modes are approximately β ≈ 0.5 for all datasets. The blue crosses are the reference histograms of β estimated from the corresponding numerical simulations of the model given by Eqs. 10 and 27 with fixed parameters (the sampling number is 2000).
Figure 5: Comparison between the empirical observations and the model of word counts given by Eqs. 10 and 27. The subfigures in the first column show the newspaper data for "Tachiba" (position or standpoint in English), the second column shows the blog data for "Sanada" (a well-known Japanese family name), and the third column shows the English Wikipedia data for "Handle". Panels (a), (b), and (c) show the normalised word counts; the empirical data are shown as a black solid line and the results of the numerical simulation are shown as a red dashed line. Panels (d), (e), and (f) show the corresponding MSDs given by Eq. 3; the thick magenta dash-dotted line is the theoretical curve of the model expressed by Eq. 30. Panels (g), (h), and (i) show the corresponding power spectral densities given by Eq. 4; the thick magenta dash-dotted line is the corresponding theoretical curve obtained by Eq. 31. Panels (j), (k), and (l) compare the probability density functions (PDFs) of the empirical data and the numerical simulations, given by Eq. 5, for three different time lags: the black triangles, red diamonds, and blue circles indicate the empirical data (the third lag differs between the newspapers, blogs, and Wikipedia), and the grey solid, magenta dashed, and cyan dash-dotted lines show the corresponding numerical simulations. The results in these figures confirm that the theoretical model is almost consistently in accordance with the empirical data. Note that the p-values of the two-sample Kolmogorov-Smirnov (KS) test in panels (j-l) [black-triangle empirical distribution vs grey solid simulation distribution, red-diamond distribution vs magenta dashed distribution, blue-circle distribution vs cyan dash-dotted distribution] are [0.89, 0.46, 0.46] for the newspapers, [0.94, 0.30, 0.74] for the blogs, and [0.98, 0.92, 0.97] for Wikipedia. In this statistical test, we check whether the samples obtained from the empirical data and those obtained from the numerical simulation come from the same distribution, where the samples are the lag-L differences of the data and of the corresponding simulation results. The KS test requires IID samples, but our data have a weak autocorrelation; hence, these p-values are approximate.
Figure 6: Comparison of the simulation results of the models given by Eqs. 10 and 27 for different β (the speed of forgetting) and m(t) (database size). The subfigures in the first row show the case in which the latent concern is IID noise (β = 1). The third row shows the case of the word-count model (β = 0.5). The fourth row shows the case of the random walk model (β = 0). Herein, the scaled database size m(t) (i.e. the scaled total number of articles) is estimated from the data (Appendix A). The subfigures in the second row present a simulation in which the database size is held constant (m(t) = 1) and β is the same as in the third row (β = 0.5); the other parameters are common to all rows. Each column corresponds to one statistical property of these models: panels (a), (d), (g), and (j) (first column) show the word counts, panels (b), (e), (h), and (k) (second column) show the MSD, and panels (c), (f), (i), and (l) (third column) show the PSD. The thin red dashed lines indicate the results of the numerical simulations, while the thick purple dash-dotted lines denote the corresponding theoretical curves given by Eq. F3 for the MSD and Eq. G5 for the PSD. The black solid lines represent the empirical data for "Sanada" (a well-known Japanese family name) in the blog data, and the grey thick dotted lines are the corresponding theoretical curves. These figures confirm that the random walk model and the IID noise cannot reproduce the empirical results. In addition, from the second row, we can also confirm that m(t) is not essential in reproducing the empirical properties.

III Data analysis: Ultraslow-like diffusion in the empirical data

We next calculate the MSD of the actual data. We use the following temporal MSD for the data analysis,

$\sigma_j^2(L) \equiv \frac{1}{T-L}\sum_{t=1}^{T-L}\left(c_j(t+L)-c_j(t)\right)^2,$ (6)

where $L$ is the time lag (e.g. $L = 7$ corresponds to a weekly difference, $L = 30$ corresponds to roughly a monthly difference, and $L = 365$ corresponds to a yearly difference). Thus, the MSD quantifies how much the word count of the focused keyword changes in $L$ days. Note that this statistic is meaningful only when the differences are stationary. The normalised counts $c_j(t)$ (i.e. the counts scaled by the database size) sampled from our mathematical model (described subsequently) do not contradict this condition (Appendix F), and the majority of the corresponding empirical data approximately satisfy it, although the raw counts $d_j(t)$ do not always satisfy this condition because of effects such as the increasing database size (Fig. 1).

Fig. 2 shows examples of the MSDs of typical words for the Japanese newspapers (a), Japanese blogs (b), and English Wikipedia page views (c). The results in these figures confirm that the growth of the MSD is essentially approximated by the logarithmic function,

$\sigma_j^2(L) \approx a_j \log(L) + b_j,$ (7)

where $a_j$ and $b_j$ are word-dependent constants.

Next, we verify the validity of the above result by calculating the ensemble median of the (temporal) scaled MSD over all words with a large frequency in the respective databases. If we assume Eq. 7, the scaled MSD has an approximately word-independent curve,

$\hat{\sigma}_j^2(L) \equiv \frac{\sigma_j^2(L)}{\mathrm{median}_{1 \le L' \le L_{\max}}\{\sigma_j^2(L')\}},$ (8)

where $\mathrm{median}_{1 \le L' \le L_{\max}}\{\cdot\}$ is the temporal median of the set of MSD values and $L_{\max}$ is the maximum lag that we use to make a graph. Thus, we can take the ensemble over words, and the ensemble median obeys the logarithmic function,

$\Sigma^2(L) \equiv \mathrm{median}_{j \in \Omega}\{\hat{\sigma}_j^2(L)\} \approx \hat{a}\log(L) + \hat{b},$ (9)

where $\mathrm{median}_{j \in \Omega}\{\cdot\}$ is the median over the word set $\Omega$ and $|\Omega|$ is the size of the set. We take the median over the set of words whose mean frequency is above 30, $\Omega = \{j : \bar{d}_j \ge 30\}$, where $\bar{d}_j$ is the temporal mean of $d_j(t)$. We exclude words with a small mean $\bar{d}_j$ because they have relatively low signal-to-noise ratios (see Eq. 30). Figs. 3(a)-(f) show that the logarithmic curve is approximately observed for all data sets, namely newspapers, blogs, and Wikipedia page views (English, French, Chinese, and Japanese). Here, because there are words with non-negligible weekly or annual cycles, the raw ensemble MSD also has these cycles (grey dots or grey thin lines). Thus, we can observe the logarithmic curve by using the 365-day moving median, which cancels these cycles. Note that by replacing the ensemble median with the ensemble mode in Eq. 9, we obtain essentially the same logarithmic diffusion. This logarithmic diffusion does not conflict with our intuition that languages are basically stable but change constantly.
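As an illustration of this procedure, the following is a minimal sketch of the scaled MSD and its ensemble median, under the assumption that the normalised counts of all words are stored row-wise in a NumPy array; the names and the synthetic data are illustrative only.

```python
import numpy as np

def scaled_msd(counts, lags):
    """Temporal MSD of one word divided by its median over the lags (cf. Eq. 8)."""
    msd = np.array([np.mean((counts[L:] - counts[:-L]) ** 2) for L in lags])
    return msd / np.median(msd)

def ensemble_scaled_msd(count_matrix, lags, min_mean=30.0):
    """Ensemble median of the scaled MSD over words with a mean above min_mean (cf. Eq. 9)."""
    curves = [scaled_msd(c, lags) for c in count_matrix if c.mean() >= min_mean]
    return np.median(curves, axis=0)

# usage with synthetic data standing in for the normalised counts c_j(t)
rng = np.random.default_rng(1)
lags = np.arange(1, 366)
fake_counts = rng.poisson(50, size=(20, 2000)).astype(float)
print(ensemble_scaled_msd(fake_counts, lags)[:5])
```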

IV Model

This section explains the properties of word counts by the combination of two probabilistic models: (i) the random walk model with power-law forgetting and (ii) the random diffusion model (i.e. a kind of Poisson point process). The random walk model describes the latent concern about the focused word and essentially explains the ultraslow diffusion. The random diffusion model expresses the connection between the latent concern described by the random walk model and the observable word counts $d_j(t)$ or $c_j(t)$. First, we introduce and discuss the random walk model; next, we introduce the word-count model, which is the combination of the random walk model and the random diffusion model.

IV.1 Model: Relation with the random walk

Here, we examine the extent to which the empirical results correspond to a random walk with power-law forgetting, which is one of the most representative standard explanations of anomalous diffusion in previous studies. This approach is also equivalent to the fractional-dynamics approach (in our case, the fractional Langevin equation approach).

The random walk model with the power-law forgetting is given by

(10)

where $t = 1, 2, \dots$, $\beta$ is a constant that characterises the forgetting speed, and $\eta_j(t)$ is independent and identically distributed (IID) noise with mean zero and standard deviation $\sigma_j$; that is, we can write $\eta_j(t) = \sigma_j \xi_j(t)$, where $\xi_j(t)$ is IID noise with mean zero and standard deviation 1.

This model is an extension of the normal random walk model: it corresponds to the random walk for $\beta = 0$ and to steady IID noise for $\beta = 1$. For the time series of word counts, the model is interpreted by considering that the social concern about the $j$-th word at time $t$, $x_j(t)$, is determined by the summation of the outer shocks received up to time $t$ in the case of $\beta = 0$. In the case of $\beta > 0$, $x_j(t)$ (i.e. the social concern) is determined by both the above-mentioned summation effect and the effect of forgetting past shocks in a power-law manner.
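Since Eq. 10 is not reproduced here, the following is a minimal simulation sketch of one concrete realisation that is consistent with the description above: the latent concern is a weighted sum of past IID shocks whose weights decay as a power law with exponent β, reduce to all ones for β = 0 (random walk), and reduce to a single spike for β = 1 (IID noise). The fractional-integration form of the weights is an assumption on our part, chosen because the paper approximates the model by ARFIMA(0, d, 0); the exact discrete definition in Eq. 10 may differ.

```python
import numpy as np

def forgetting_weights(T, beta):
    """Weights w_0 = 1, w_k = w_{k-1} * (k - beta) / k, which decay as k**(-beta).
    beta = 0 gives all ones (random walk); beta = 1 gives [1, 0, 0, ...] (IID noise)."""
    w = np.ones(T)
    for k in range(1, T):
        w[k] = w[k - 1] * (k - beta) / k
    return w

def latent_concern(T, beta, sigma=1.0, rng=None):
    """x_j(t) as a weighted sum of past IID shocks with power-law forgetting."""
    rng = np.random.default_rng() if rng is None else rng
    eta = sigma * rng.normal(size=T)
    return np.convolve(eta, forgetting_weights(T, beta))[:T]

def temporal_msd(x, lags):
    return np.array([np.mean((x[L:] - x[:-L]) ** 2) for L in lags])

lags = np.arange(1, 300)
rng = np.random.default_rng(2)
for beta in (0.0, 0.5, 1.0):
    msd = np.mean([temporal_msd(latent_concern(2048, beta, rng=rng), lags)
                   for _ in range(20)], axis=0)
    # beta = 0: roughly linear growth; beta = 0.5: roughly logarithmic; beta = 1: flat
    print(beta, np.round(msd[[0, 9, 99, 298]], 2))
```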

From Appendix B, the MSD of this model is calculated as

(11)
(12)

This formula implies that $\beta \approx 0.5$ corresponds to our empirical results, that is, logarithmic-like diffusion.

We also verify the validity of the model by comparing the power spectral density (PSD) between the data and the model. The PSD of the model of Eq. 10 is approximated by

(13)
(14)

Here we use the formula for the PSD of an ARFIMA(0, d, 0) process granger1980introduction , by which our model is approximated (Appendix D); the empirical PSD of a time series is calculated as follows:

(15)

where $f$ is the frequency [1/days]. ARFIMA is the abbreviation for the autoregressive fractionally integrated moving average model, which is a well-known time-series model that describes time series with long memory in the field of statistics burnecki2014algorithms . The order $d$ is defined by Eq. D3. For $f \ll 1$, this formula is also approximated by

(16)

Thus, for $\beta = 0.5$ the power spectrum is approximated by the simple power law, $S(f) \propto 1/f$.
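As a small numerical check under the same assumptions as the earlier sketch (a power-law forgetting kernel with exponent β), the low-frequency periodogram of the simulated latent concern should follow approximately $f^{-2(1-\beta)}$, i.e. roughly $1/f$ for β = 0.5. The code below is illustrative, not the paper's estimation procedure.

```python
import numpy as np

rng = np.random.default_rng(3)
T, beta = 2 ** 13, 0.5
w = np.ones(T)
for k in range(1, T):
    w[k] = w[k - 1] * (k - beta) / k              # power-law forgetting weights ~ k**(-beta)
x = np.convolve(rng.normal(size=T), w)[:T]        # simulated latent concern x_j(t)

spec = np.abs(np.fft.rfft(x)) ** 2 / T            # periodogram (up to a constant factor)
freq = np.fft.rfftfreq(T, d=1.0)
low = (freq > 0) & (freq < 0.05)
slope = np.polyfit(np.log(freq[low]), np.log(spec[low]), 1)[0]
print(slope)                                       # expected to be close to -2 * (1 - beta) = -1
```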

Because the concern of the word, $x_j(t)$, is not directly observed from the actual word-count data (see Section IV.2), we alternatively use the normalised power spectrum of the word counts $c_j(t)$,

(17)
(18)
(19)

where $f_{\min}$ is the minimum frequency in the observation, and we used the assumption

(20)

where the coefficients in Eq. 20 are constants depending on the word $j$. Hence, we can obtain the information about the spectrum of the latent concern from the observable word counts, which we estimate by using a periodogram in this study. The validity of this assumption is discussed in Section IV.2. Figs. 3(g-i) show the ensemble median of the normalised power spectrum of word counts given by Eq. 19 over the word sets,

(21)
(22)

where, for the data analysis, we take the median over the set of words whose mean count is above 30. The results in these figures confirm that Eq. 22 is in agreement with the empirical curve of the actual data given by Eq. 21 for all data sets.

In order to check the plausibility of $\beta = 0.5$, in addition, we estimate $\beta$ directly from the data for individual words. Herein we use the model described by Eqs. 10 and 27 (outlined subsequently); details of the estimation method are provided in Appendix C. Fig. 4 shows the histograms of the estimated $\beta$ for the newspaper data, blog data, and Wikipedia data. This figure confirms that the mode of the estimated $\beta$ takes a value of approximately 0.5 for all datasets.

IV.1.1 Relation to the fractional dynamics

Here, we address the relation between the fractional dynamics and the random walk model. From Appendix D, the continuous version of Eq. 10 corresponds to the fractional Langevin equation, which is an extension of the Langevin equation eab2011fractional ; magdziarz2007fractional ,

$\frac{d^{1-\beta}x_j(t)}{dt^{1-\beta}} = \eta_j(t),$ (23)

where, on the condition that $\beta = 0$, this equation is the normal Langevin equation. Here, the Riemann-Liouville fractional derivative operator of order $\nu$ ($0 < \nu < 1$) is defined by

$\frac{d^{\nu}g(t)}{dt^{\nu}} \equiv \frac{1}{\Gamma(1-\nu)}\frac{d}{dt}\int_0^t \frac{g(s)}{(t-s)^{\nu}}\,ds.$ (24)

This operator satisfies $\frac{d^{\nu}}{dt^{\nu}}\frac{d^{\mu}}{dt^{\mu}} = \frac{d^{\nu+\mu}}{dt^{\nu+\mu}}$. For example, in the case of $\nu = 1/3$, the operator is the one-third power of the derivative operator, that is, three applications of $\frac{d^{1/3}}{dt^{1/3}}$ amount to one normal derivative,

$\frac{d^{1/3}}{dt^{1/3}}\frac{d^{1/3}}{dt^{1/3}}\frac{d^{1/3}}{dt^{1/3}}g(t) = \frac{dg(t)}{dt}.$ (25)

Therefore, in the case of the word counts, namely $\beta = 0.5$, we obtain the half-order fractional Langevin equation,

$\frac{d^{1/2}x_j(t)}{dt^{1/2}} = \eta_j(t),$ (26)

where $\frac{d^{1/2}}{dt^{1/2}}$ is the half-derivative operator, $\frac{d^{1/2}}{dt^{1/2}}\frac{d^{1/2}}{dt^{1/2}} = \frac{d}{dt}$. Thus, the properties of the word-count time series are right-in-the-middle dynamics between IID noise (zeroth-order differentiation) and the normal random walk (first-order differentiation).
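As a purely numerical illustration (not taken from the paper), the discrete analogue of this composition property can be checked with Grünwald-Letnikov / binomial fractional-difference weights: applying a half-order difference twice reproduces the ordinary first difference. The weight recursion below is a standard identity; the series is treated as zero before its first observation.

```python
import numpy as np

def frac_diff(x, nu):
    """Discrete fractional difference (1 - B)**nu of a series, with binomial weights
    w_0 = 1 and w_k = w_{k-1} * (k - 1 - nu) / k (values before the start treated as 0)."""
    T = len(x)
    w = np.ones(T)
    for k in range(1, T):
        w[k] = w[k - 1] * (k - 1 - nu) / k
    return np.convolve(x, w)[:T]

rng = np.random.default_rng(4)
x = rng.normal(size=500).cumsum()

half_twice = frac_diff(frac_diff(x, 0.5), 0.5)    # apply the half-order difference twice
first_diff = np.append(x[0], np.diff(x))          # ordinary first difference (x before start := 0)
print(np.allclose(half_twice, first_diff))        # True: two half-orders compose to order one
```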

Table 1 provides a summary of the properties of the model given by Eq. 10.

Random walk ($\beta = 0$) | Word counts ($\beta = 0.5$) | IID noise ($\beta = 1$)
(i) Dynamics statistics
Stability: Unsteady (nonstationary) | Unsteady (at the border) | Steady (stationary)
MSD [Eq. 11] ($L$: time lag [days]): Normal-dif., $\propto L$ | Ultraslow-dif., $\propto \log L$ (Sub-dif. for $0 < \beta < 0.5$) | Steady (constant)
PSD [Eq. 15] ($f$: frequency [1/days]): $\propto f^{-2}$ | $\propto f^{-1}$ | constant
(ii) Time evolution: $x_j(t) = x_j(t-1) + \eta_j(t)$ | Eq. 10 with $\beta = 0.5$ | $x_j(t) = \eta_j(t)$
Table 1: Summary of the model properties obtained by Eq. 10

IV.2 Model of word counts

In the previous section, we confirmed that the logarithmic diffusion of word counts can essentially be explained by the random walk with power-law forgetting given by Eq. 10. However, this random walk model cannot explain all the statistical properties of word counts observed in this paper. For example, it cannot explain (i) the discreteness of the raw word counts and (ii) the word-dependent constants in Eq. 7 and in Eq. 20. Thus, lastly, we discuss the connection between the essential dynamics of the concern about a word given by Eq. 10 (i.e. the latent value) and the time series of word counts $d_j(t)$ or $c_j(t)$ (i.e. the observed values).

Here, we use the random diffusion model (RD model) introduced in PhysRevLett.100.208701 ; PhysRevE.87.012805 ; sano2009 ; RD_base to sample $d_j(t)$ or $c_j(t)$. The RD model is a kind of point process, which can be deduced from a simple model of the writing activity of independent bloggers RD_base .

In this model, the counts are sampled from a Poisson distribution whose rate (or intensity) function is determined by a random variable or a stochastic process (i.e. a doubly stochastic Poisson process lowen2005fractal ). In the case of blogs, the rate function is connected to the latent concern about the word. In particular, the RD model is given by RD_base

(27)

and its rate function of the Poisson distribution, denoted by $\lambda_j(t)$, is determined by the following product:

(28)

where

  • $m(t)$ is the (normalised) scale of the database, such as the total number of blogs (see Fig. 1(b)), where $m(t)$ is estimated by the ensemble median of the word counts at time $t$ and normalised as described in Appendix A.

  • $r_j$ is the scale of the $j$-th word, namely the temporal mean of the $j$-th word, which we estimate by the mean of the raw word counts of the data.

  • $x_j(t)$ is the scaled time variation of the concern about the $j$-th word, sampled from Eq. 10, where we set the initial value to 1 (this sampling using Eq. 10 is the novelty of this study in comparison with Ref. RD_base ).

  • $\rho_j$ is the magnitude (i.e. the standard deviation) of the ensemble fluctuation, which may be related to the magnitude of the heterogeneity of bloggers RD_base .

  • $\epsilon_j(t)$ is the normalised ensemble fluctuation, which is sampled from a system-dependent random variable with mean 0, standard deviation 1, and parameters that characterise the distribution.

Note that in the previous study RD_base , we estimated $x_j(t)$ directly from the data by using a moving average for the data analysis, or used a simplifying assumption for the analytical calculation. Thus, we could not discuss the dynamics of $x_j(t)$ as such in Ref. RD_base (that model only describes the fluctuations when the dynamics of $x_j(t)$ are given). In this study, however, we introduce the time-evolution model given by Eq. 10, which enables us to calculate the basic dynamics of already popular words (the model describes not only the fluctuations but also the dynamics).

Also note that, in the actual data, the word counts are bounded between zero and the size of the database (i.e. the total number of articles for the newspaper and blog data, and the total number of Wikipedia users for the Wikipedia data). Our model does not consider this limitation; however, this problem is essentially negligible in our situation. The main reasons are as follows:

  • The time evolution is very slow (i.e. logarithmic diffusion) under the conditions of the initial value $x_j(0) = 1$ and a finite number of time steps (a maximum of approximately 10 years); hence, $x_j(t)$ stays around 1. Cases in which $x_j(t)$ takes a negative value (for which the rate function and Eq. 27 become meaningless) or a very large value (related to the limitation by the total number of articles) were practically never sampled.

  • Almost all words have a temporal mean word count that is far too small to be affected by the limitation by the total number of articles.

Although in our case we could almost avoid these constraint problems without any special treatment, in more general situations, such as an infinite number of time steps ($T \to \infty$), we may have to extend the model to describe these constraints explicitly.
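Before comparing with the data, the following is a minimal simulation sketch of the combined model. The specific way the factors are combined in the rate, $\lambda_j(t) = r_j\, m(t)\, x_j(t)\,(1 + \rho_j\,\epsilon_j(t))$, and the use of a plain Student t for the normalised fluctuation are assumptions made for illustration; they stand in for Eq. 28 and for the normalised noncentral t-distribution of the paper.

```python
import numpy as np

def simulate_word_counts(T, r_j, sigma_j, rho_j, beta=0.5, m=None, nu=3.0, rng=None):
    """Doubly stochastic Poisson sampling of daily word counts: the latent concern x_j(t)
    follows the power-law-forgetting walk, and the Poisson rate is an assumed product
    r_j * m(t) * x_j(t) * (1 + rho_j * eps), clipped at zero."""
    rng = np.random.default_rng() if rng is None else rng
    m = np.ones(T) if m is None else m                          # database scale m(t)
    w = np.ones(T)                                              # power-law forgetting weights
    for k in range(1, T):
        w[k] = w[k - 1] * (k - beta) / k
    x = 1.0 + np.convolve(sigma_j * rng.normal(size=T), w)[:T]  # latent concern, starts near 1
    t_raw = rng.standard_t(df=nu, size=T)                       # heavy-tailed ensemble fluctuation
    eps = t_raw / np.sqrt(nu / (nu - 2.0))                      # rescaled to unit standard deviation
    lam = np.clip(r_j * m * x * (1.0 + rho_j * eps), 0.0, None)
    return rng.poisson(lam)

counts = simulate_word_counts(T=2000, r_j=50.0, sigma_j=0.01, rho_j=0.1,
                              rng=np.random.default_rng(5))
print(counts[:10], counts.mean())
```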

Fig. 5 compares the statistical properties of the word-count time series between the empirical data and the numerical simulation of the RD model driven by the random walk model with power-law forgetting, given by Eqs. 27 and 10, with respect to (i) the MSD, (ii) the PSD, and (iii) the (temporal) probability density function (PDF) of the differences (see Eq. 5). The results in these figures confirm that the numerical simulations are almost in accordance with the empirical observations for the newspaper, blog, and Wikipedia data, respectively.

In these simulations, $\xi_j(t)$ and $\epsilon_j(t)$ are sampled from the normalised noncentral t-distribution. The normalised noncentral t-distribution, whose mean is zero and whose standard deviation is 1, is the shifted and scaled noncentral t-distribution. The noncentral t-distribution is a skewed, heavy-tailed distribution; the tail parameter determines the heaviness of the tail, and the noncentrality parameter determines the skewness of the distribution. On the condition that the noncentrality parameter is zero, the noncentral t-distribution reduces to the (ordinary, no-skew) t-distribution. The details of this distribution are given in Appendix E.

In the figure, the word-dependent or system-dependent parameters, namely the mean frequency (scale) of word counts $r_j$, the speed of the diffusion (or the mean strength of outer shocks) $\sigma_j$, and the magnitude of the ensemble fluctuation $\rho_j$ (which may be related to the heterogeneity of bloggers), are set separately for "Tachiba" (i.e. position or standpoint in English) in the newspaper data, for "Sanada" (i.e. a well-known Japanese family name) in the blog data, and for "Handle" in the English Wikipedia data.
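For completeness, the following is a small sketch of drawing normalised (mean-zero, unit-variance) noncentral t variates via SciPy; the df and nc values are illustrative, and the parameter names follow scipy.stats.nct rather than the paper's notation.

```python
import numpy as np
from scipy import stats

def normalised_nct_sample(df, nc, size, rng=None):
    """Draw noncentral t variates, then shift and scale them to mean 0 and standard deviation 1."""
    dist = stats.nct(df, nc)
    return (dist.rvs(size=size, random_state=rng) - dist.mean()) / dist.std()

eps = normalised_nct_sample(df=4.0, nc=1.0, size=100_000,
                            rng=np.random.default_rng(6))
print(eps.mean(), eps.std(), stats.skew(eps))   # approximately 0, 1, and a positive skew for nc > 0
```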

Lastly, we show the relation between the RD model and the word-dependent constants in Eq. 7 and in Eq. 20. From Appendix F, the (mean of the temporal) MSD of $c_j(t)$ is written as

(29)
(30)

where $\psi(\cdot)$ is the digamma function; this curve is shown as the thick magenta dash-dotted lines in Figs. 5(d-f).

In addition, from Appendix G, the power spectral density of $c_j(t)$ is written as

(31)

and this curve is shown as the thick magenta dash-dotted lines in Figs. 5(g-i).

We also verified that the model cannot reproduce the statistical properties of the empirical data when $\beta$ is not close to 0.5. Fig. 6 shows the results in which we compare the empirical data with numerical simulations for different $\beta$ (the speed of forgetting) and $m(t)$ (database size): (i) IID noise ($\beta = 1$), (ii) the simple random walk model ($\beta = 0$), and (iii) the case in which the database size is constant ($m(t) = 1$, with $\beta = 0.5$). This figure confirms that the IID noise ($\beta = 1$) and the random walk model ($\beta = 0$) cannot reproduce the empirical properties. In addition, $m(t)$ is not essential for reproducing the empirical properties (see the panels in the second row).

Note that the model given by Eqs. 27 and 10 can also explain "fluctuation scaling", which is known as another statistical property of word counts on social media such as blogs eisler2008fluctuation ; sano2009 ; RD_base . The relation between the empirical fluctuation scaling and the model given by Eqs. 27 and 10 will be discussed in our next paper.

V Conclusion and discussion

In this paper, from the viewpoint of diffusion on complex systems, we investigated the stability of the time series of word counts of already popular words (i.e. "mature phase" words) in several nationwide language data sets (newspaper articles, blog articles, and Wikipedia page views).

Firstly, by analysing the data, we commonly observed a logarithmic-like diffusion (i.e. an ultraslow-like diffusion) of word counts across the different data sets. Although ultraslow diffusion has been extensively studied by using theories and mathematical models, few empirical observations have been reported. Moreover, this observed logarithmic-like diffusion does not conflict with the intuition that languages are basically stable but change constantly. This intuition may also be related to earlier empirical studies of the stability of word-count statistics: (i) more frequent words change more slowly lieberman2007quantifying ; gerlach2016similarity , and (ii) some observations imply a small, stable core (kernel) vocabulary, as distinguished from the many other words used for specific communications that are not shared by all people ferrer2001two ; ferrer2017origins .

Secondly, we showed that the logarithmic diffusion of word counts is essentially explained by the random walk model with power-law forgetting. This random walk model corresponds to the fractional Langevin equation, which is a typical mathematical model in previous theoretical studies of anomalous diffusion. The speed of forgetting, characterised by the power-law exponent $\beta = 0.5$ in Eq. 10, has the following meanings:

  • the border (or threshold) between stationary and nonstationary dynamics (Eq. 12), and

  • right-in-the-middle dynamics between IID noise and the normal random walk (Eq. 23),

    $\frac{d^{1/2}x_j(t)}{dt^{1/2}} = \eta_j(t),$ (32)

which are summarised in Table 1.

Thirdly, we confirmed that the RD model driven by the random walk model with power-law forgetting, given by Eqs. 10 and 27, can almost reproduce the empirical properties of the time series of typical words (Fig. 5): (i) the MSD, (ii) the PSD, and (iii) the PDF.

Although our model can explain the dynamical properties of the word-count time series, our framework cannot explain why the model parameter takes the value $\beta = 0.5$ in Eq. 10. This special value, $\beta = 0.5$, which is the threshold between steady and unsteady dynamics, is observed detail-independently (i.e. independently of words, languages, and media) as far as we investigated. Thus, clarifying the origin of the parameter $\beta = 0.5$ may provide a clue to understanding the fundamental dynamical and memory properties of human systems or societies as complex systems.

In micro-level studies, namely studies of single documents, a power-law forgetting process with word-dependent exponents distributed approximately around 0.5 is used to explain the empirical stretched-exponential distribution of the recurrence distance of words (e.g., for the phrase "This cat is big. That cat is small.", the recurrence distance of "cat" is 4) altmann2009beyond . This quantitative similarity of the power-law forgetting dynamics between the micro-level (single documents) and macro-level (nationwide collective-behaviour datasets) studies might provide important suggestions for understanding, from micro-level human behaviour, the origin of the 0.5th exponent obtained in our macro-level study.

Acknowledgements.
The authors would like to thank Hottolink, Inc. for providing the data. This work was supported by JSPS KAKENHI, Grant Number JP17K13815.

References

  • (1) E. G. Altmann and M. Gerlach, Creativity and Universality in Language (Springer, Basel, 2016), pp. 7–26.
  • (2) R. Metzler and J. Klafter, Physics Reports 339, 1 (2000).
  • (3) M. A. A. da Silva, G. M. Viswanathan, and J. C. Cressoni, Physical Review E 89, 052110 (2014).
  • (4) J.-P. Bouchaud and A. Georges, Physics Reports 195, 127 (1990).
  • (5) S. Burov, J.-H. Jeon, R. Metzler, and E. Barkai, Physical Chemistry Chemical Physics 13, 1800 (2011).
  • (6) S. B. Lowen and M. C. Teich, Fractal-based point processes (John Wiley & Sons, Hoboken, USA, 2005), Vol. 366.
  • (7) Y. G. Sinai, Theory of Probability & Its Applications 27, 256 (1983).
  • (8) A. Godec et al., Journal of Physics A: Mathematical and Theoretical 47, 492002 (2014).
  • (9) L. P. Sanders et al., New Journal of Physics 16, 113050 (2014).
  • (10) A. S. Bodrova, A. V. Chechkin, A. G. Cherstvy, and R. Metzler, New Journal of Physics 17, 063038 (2015).
  • (11) A. G. Cherstvy and R. Metzler, Physical Chemistry Chemical Physics 15, 20220 (2013).
  • (12) C. H. Eab and S. C. Lim, Physical Review E 83, 031136 (2011).
  • (13) C. Song, T. Koren, P. Wang, and A.-L. Barabási, Nature Physics 6, 818 (2010).
  • (14) D. Boyer, M. C. Crofoot, and P. D. Walsh, Journal of The Royal Society Interface rsif20110582 (2011).
  • (15) K. Matan, R. B. Williams, T. A. Witten, and S. R. Nagel, Physical Review Letters 88, 076101 (2002).
  • (16) P. Richard et al., Nature materials 4, 121 (2005).
  • (17) A. M. Petersen, J. Tenenbaum, S. Havlin, and H. E. Stanley, Scientific Reports 2 (2012).
  • (18) M. Gerlach and E. G. Altmann, Physical Review X 3, 021006 (2013).
  • (19) E. Lieberman et al., Nature 449, 713 (2007).
  • (20) J.-B. Michel et al., Science 331, 176 (2011).
  • (21) E. G. Altmann and M. Gerlach, Physicists’ papers on natural language from a complex systems viewpoint, http://www.pks.mpg.de/mpi-doc/sodyn/physicist-language/.
  • (22) D. M. Abrams and S. H. Strogatz, Nature 424, 900 (2003).
  • (23) J. Cong and H. Liu, Phys Life Rev. 11, 598 (2014).
  • (24) R. F. i Cancho, Eur. Phys. J. B 47, 449 (2005).
  • (25) Nikkei Inc. and Nikkei Business Publications, Inc., Shinbun trend (web system), http://ntrend.nikkei.co.jp/.
  • (26) Wiktionary:Frequency lists, https://en.wiktionary.org/wiki/Wiktionary:Frequency_lists.
  • (27) H. Watanabe, Y. Sano, H. Takayasu, and M. Takayasu, Physical Review E 94, 052317 (2016).
  • (28) A. Masayuki and M. Yuji, User’s manual (ipadic), http://chasen.naist.jp/snapshot/ipadic/ipadic/doc/ipadic-ja.pdf, 2003.
  • (29) Analytics/AQS/Pageviews, https://wikitech.wikimedia.org/wiki/Analytics/AQS/Pageviews.
  • (30) C. W. Granger and R. Joyeux, Journal of time series analysis 1, 15 (1980).
  • (31) K. Burnecki and A. Weron, Journal of Statistical Mechanics: Theory and Experiment 2014, P10036 (2014).
  • (32) M. Magdziarz and A. Weron, Studia Math 181, 47 (2007).
  • (33) S. Meloni, J. Gómez-Gardeñes, V. Latora, and Y. Moreno, Phys. Rev. Lett. 100, 208701 (2008).
  • (34) Y. Sano et al., Phys. Rev. E 87, 012805 (2013).
  • (35) Y. Sano, K. K. Kaski, and M. Takayasu, in Proc. Complex ’09 (Springer, Berlin, Germany, 2009), No. 2, pp. 195–198.
  • (36) Z. Eisler, I. Bartos, and J. Kertesz, Adv. Phys. 57, 89 (2008).
  • (37) M. Gerlach, F. Font-Clos, and E. G. Altmann, Phys. Rev. X 6, 021009 (2016).
  • (38) R. Ferrer i Cancho and R. V. Solé, Journal of Quantitative Linguistics 8, 165 (2001).
  • (39) R. Ferrer-i Cancho and M. S. Vitevitch, arXiv preprint arXiv:1801.00168 (2017).
  • (40) E. G. Altmann, J. B. Pierrehumbert, and A. E. Motter, PLoS ONE 4, e7678 (2009).
  • (41) M. Abramowitz and I. A. Stegun, Handbook of mathematical functions: with formulas, graphs, and mathematical tables (Dover Publications, New York, USA, 1964), Vol. 55.
  • (42) http://functions.wolfram.com/HypergeometricFunctions/.
  • (43) N. Johnson, Continuous univariate distributions (Wiley, New York, USA, 1994).

Appendix A Estimation of the normalised scale of the database from the data

We estimate the normalised scale of the database, $m(t)$, such as the total number of blogs, by using the ensemble median as follows:

  1. We create a set consisting of the indexes of the words whose counts take values larger than a threshold.

  2. We estimate $m(t)$ as the median of the word counts at time $t$ with respect to the words in this set.

  3. For every $t = 1, \dots, T$, we calculate $m(t)$ by using step 2.

Here, we use only words with counts above the threshold in step 1 because we neglect the discreteness of small counts. In step 2, we apply the median because of its robustness against outliers.
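A minimal sketch of one possible implementation is given below. The per-word scaling by each word's temporal mean (so that m(t) fluctuates around 1) and the threshold value are assumptions for illustration; they are not specified in the text above.

```python
import numpy as np

def estimate_database_scale(raw_counts, threshold=30.0):
    """Estimate the normalised database scale m(t) from a (words x days) matrix of raw counts.
    Assumed recipe: keep frequent words, divide each word's series by its own temporal mean,
    and take the daily median over words."""
    means = raw_counts.mean(axis=1)
    frequent = raw_counts[means > threshold]                   # step 1: frequent words only
    scaled = frequent / frequent.mean(axis=1, keepdims=True)   # each word relative to its mean
    return np.median(scaled, axis=0)                           # steps 2-3: daily median

# usage with synthetic data: a database whose size grows by 30% over the period
rng = np.random.default_rng(7)
trend = 1.0 + 0.3 * np.linspace(0.0, 1.0, 1000)
raw = rng.poisson(trend * rng.uniform(5.0, 200.0, size=(300, 1)))
m = estimate_database_scale(raw)
print(m[:3], m[-3:])   # increases over time, tracking the database-size trend
```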

Appendix B Mean-squared displacement of the power-law forgetting process

We calculate the MSD of the following power-law forgetting process given by Eq. D2,

(B1)

where

(B2)

and

(B3)

where the constant in the last expression is arbitrary. The MSD can be calculated as

(B4)
(B5)
(B6)

We calculate the three terms of this expansion respectively. The first term is given by

where $\zeta(s,a)$ is the Hurwitz zeta function, $\zeta(s,a) = \sum_{n=0}^{\infty} (n+a)^{-s}$, and $\psi(x)$ is the digamma function, $\psi(x) = \frac{d}{dx}\log\Gamma(x)$. The second term is given by

For large arguments, using the general formulas

(B13)

and

(B14)

we can obtain the corresponding approximations.

Lastly, we calculate the third term of the expansion. Using the Euler-Maclaurin formula abramowitz1964handbook ,