Emergence of order in random languages

02/20/2019
by E. DeGiuli, et al.
École Normale Supérieure

We consider languages generated by weighted context-free grammars. It is shown that the behaviour of large texts is controlled by saddle-point equations for an appropriate generating function. We then consider ensembles of grammars, in particular the Random Language Model of E. DeGiuli, Phys. Rev. Lett., 2019. This model is solved in the replica-symmetric ansatz, which is valid in the high-temperature, disordered phase. It is shown that in the phase in which languages carry information, the replica symmetry must be broken.


1 Partition functions

A CFG in Chomsky Normal Form is defined by two types of rules: A → BC, where one hidden symbol becomes two hidden symbols, and A → a, where a hidden symbol becomes an observable symbol. In a weighted CFG (WCFG), we assign a weight to each rule of each type. With these weights we build a weight for an entire derivation tree as follows. We write for the hidden variables, and for the observables. These are indexed by their positions on a tree. Write for the topology of the tree, namely the identity (observable or hidden) of each node. We write for the set of internal factors, i.e. factors of the form A → BC, and for the boundary factors, i.e. those associated to A → a rules. The number of boundary factors is written , which is also the number of leaves. Since derivations are binary trees, the number of internal factors is one less than the number of leaves.
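To make the setup concrete, the following minimal sketch samples a derivation tree from a small weighted CNF grammar. The symbol names, the weight tables M and O, and the emission probability p_emit are illustrative placeholders chosen for this example, not the notation or the ensemble of the paper.

    import random

    # Toy weighted CFG in Chomsky Normal Form (illustrative notation only):
    # M[a][(b, c)] weights the binary rule a -> b c, O[a][w] weights a -> w.
    M = {"S": {("S", "S"): 1.0, ("S", "X"): 0.5},
         "X": {("S", "S"): 0.2}}
    O = {"S": {"the": 1.0, "cat": 0.8},
         "X": {"sat": 1.5}}

    def sample_rule(table):
        """Pick a key of `table` with probability proportional to its weight."""
        keys, weights = zip(*table.items())
        return random.choices(keys, weights=weights)[0]

    def derive(symbol, p_emit=0.6):
        """Sample a derivation tree; a node emits a leaf with probability p_emit,
        otherwise it branches into two hidden symbols."""
        if random.random() < p_emit:
            return (symbol, sample_rule(O[symbol]))        # boundary factor a -> w
        left, right = sample_rule(M[symbol])               # internal factor a -> b c
        return (symbol, derive(left, p_emit), derive(right, p_emit))

    print(derive("S"))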

Consider a derivation tree, such as that depicted in Figure 1a. In a weighted context-free grammar, a tree of topology with hidden variables and observables has a weight

(1)

where each is a factor in the order

. This defines a conditional probability measure on configurations

(2)

where

(3)

We will additionally add a weight depending only on the size of the tree, with , where is the emission probability, which controls the size of trees. The tree-averaged partition function is then

(4)

as a function of the number of leaves , and the partition function for sentences of total length is

(5)

We have , and , so that just gives a trivial factor. For now we suppress dependence on and . Note that is the weight for the grand canonical partition function ; we will, however, work at fixed and .
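A sketch of the recursion that such a fixed-leaf-number partition function satisfies is given below, reusing the toy tables M and O from the sketch above. The particular way the emission factor q is attached to leaves and internal nodes is an assumption made for illustration, not the paper's exact weighting.

    def tree_partition_function(M, O, k_max, q=0.5):
        """Z[k][a]: total weight of derivation trees rooted at symbol a with k
        leaves, giving each leaf an emission factor q and each internal node a
        factor (1 - q). Illustrative inside-style recursion."""
        symbols = set(M) | set(O)
        Z = {1: {a: q * sum(O.get(a, {}).values()) for a in symbols}}
        for k in range(2, k_max + 1):
            Z[k] = {}
            for a in symbols:
                total = 0.0
                for (b, c), w in M.get(a, {}).items():
                    total += (1 - q) * w * sum(Z[j][b] * Z[k - j][c]
                                               for j in range(1, k))
                Z[k][a] = total
        return Z

    # e.g. total weight of trees rooted at "S" with exactly 6 leaves
    print(tree_partition_function(M, O, 6)[6]["S"])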

2 Energy, Entropy, and Order Parameters

It is convenient to add some auxiliary parameters in order to extract additional observables. First, we note that (1) can be written as

(6)

where is the usage frequency of rule and is the usage frequency of . Adding external fields and let us define the energy of a configuration as

(7)

Then we can generalize (3) to

(8)

where we added a bias . The original ensemble is recovered for and . We see that the average energy is

(9)

and it is natural to define the entropy of the grammar as .
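For reference, the standard thermodynamic identity behind such a definition, stated in generic notation (not necessarily that of the paper): for a Gibbs measure with bias β, the Shannon entropy is the log-partition function plus β times the mean energy.

    % Generic identity for p(c) = e^{-\beta E(c)} / Z,  Z = \sum_c e^{-\beta E(c)}:
    S = -\sum_c p(c) \ln p(c)
      = \sum_c p(c) \left( \beta E(c) + \ln Z \right)
      = \beta \langle E \rangle + \ln Z .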

In [1] it was argued that a natural order parameter for WCFGs is one that measures the extent to which rules are applied uniformly: if all rules have the same weight, the grammar carries no information, and sentences will be indistinguishable from noise. To measure order in the deep grammar, define first

(10)

averaged over all interior vertices , and averaged over derivations. The normalization for . A spin-glass order parameter specific to deep structure is

(11)

where

(12)

with . Here we used the fact that when , the permutation symmetry is restored upon disorder averaging. We see that

(13)

so and can be obtained from derivatives of with respect to the field.
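As a rough computational illustration of "order as non-uniform rule usage" (a crude proxy only; it is not the precise normalization of (10)-(12), and in particular it ignores the replica overlap), one can tabulate rule-usage frequencies in sampled trees:

    from collections import Counter

    def rule_frequencies(tree):
        """Count usages of internal rules a -> b c in a sampled derivation tree
        of the form (a, left_subtree, right_subtree) or (a, word)."""
        counts = Counter()
        def walk(node):
            if len(node) == 3:                 # internal node
                a, left, right = node
                counts[(a, left[0], right[0])] += 1
                walk(left)
                walk(right)
        walk(tree)
        total = sum(counts.values())
        return {r: n / total for r, n in counts.items()} if total else {}

    def uniformity_proxy(freqs, n_rules):
        """Mean squared deviation of observed rule frequencies from the uniform
        value 1/n_rules: zero for perfectly uniform usage, larger when a few
        rules dominate."""
        if not freqs:
            return 0.0
        return sum((f - 1 / n_rules) ** 2 for f in freqs.values()) / len(freqs)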

3 Diagrammatic formulation

We expect that universal properties of weighted context-free languages are contained in the behavior of when becomes large, where is an average over grammars. In order to compute this object, we find it convenient to move to an alternative, particle, representation. In particular, we seek a model whose diagrammatic expansion gives the derivation trees we seek to count, with the appropriate weights. This technique has been widely used in the study of 2D gravity [6], and facilitated Kazakov’s solution of the Ising model on random surfaces [7, 8]. Later, it was shown in a simpler setting that this technique could be used to easily obtain results for spin models on random graphs [9, 10].

Figure 3: Feynman rules for weighted context-free grammars. Elements of Feynman diagram expansion (top row) with weights (middle row). (a-b) interaction, (c-d) Root source, (e-f) source, (g-h): Nonzero propagators. Labels are colour indices, . All propagators are diagonal in colour.

We begin with a simplified model. Consider the formal integral

(14)

where the measure is normalized such that . Strictly speaking, the integral is only defined by its perturbative expansion in ; convergence requires that the real part of be positive-definite. This expansion generates Feynman diagrams with cubic vertices. Each vertex gets a factor , and the expansion with respect to generates sources. By Wick’s theorem, each edge gets a factor , the propagator. The coefficient in this expansion of thus counts all such diagrams, possibly disconnected, with vertices and sources, times an inverse symmetry factor [11]. (Such factors appear when diagrams, including all their colour indices, have nontrivial symmetries, like reflections. In the disordered case where the coupling depends on all indices, these symmetry factors will not play a role, since typical connected graphs will have no symmetries.)
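The counting statement can be checked directly in a zero-dimensional toy version of the integral (a single Gaussian variable, no colour indices): by Wick's theorem, the coefficient of g^V j^L in the expansion of ⟨exp(g φ³/6 + j φ)⟩ is a sum over diagrams with V cubic vertices and L sources, each weighted by its inverse symmetry factor. This is only an illustration of the mechanism, not the coloured two-field integral used below.

    from math import factorial

    def double_factorial(n):
        """n!! for odd n >= -1 (with (-1)!! = 1)."""
        result = 1
        while n > 1:
            result *= n
            n -= 2
        return result

    def diagram_weight(V, L):
        """Coefficient of g**V * j**L in <exp(g*phi**3/6 + j*phi)> over a unit
        Gaussian phi: sum over diagrams with V cubic vertices and L sources,
        each counted with its inverse symmetry factor."""
        n = 3 * V + L
        if n % 2:
            return 0.0                          # odd Gaussian moments vanish
        moment = double_factorial(n - 1)        # <phi**n> = (n-1)!!
        return moment / (factorial(V) * factorial(L) * 6 ** V)

    # One vertex, three sources: 15/36 = 1/6 (star diagram, symmetry 3!)
    #                                  + 1/4 (tadpole plus source-source pair, symmetry 2*2)
    print(diagram_weight(1, 3))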

This is a skeleton of what we need to count derivation trees, but there are several elements missing: first, includes all graphs with cubic vertices, not only trees. Second, even if we could restrict the sum to trees, there is nothing in to distinguish leaves from roots, or to distinguish the left and right branches from a given hidden node.

Figure 4: Feynman diagram corresponding to derivation tree in Figure 1a. Alphabet of hidden symbols is and alphabet of surface symbols is . Vertices are represented by with heads at the tip. The diagram has a weight .

One solution to these problems is to use matrices as the integration variables, because their diagrammatic expansion can be arranged to give planar diagrams in an appropriate large limit [12, 13]. However, for our problem this is overkill. Instead, we will consider a theory of complex scalar fields with colour indices, equivalent to a complex matrix model with matrices of size 1x1. We will have two scalar fields and , with colour indices . Consider

(15)

where denotes complex conjugate and

(16)

The measure is normalized such that , and similarly for .

The propagator is diagonal in the colour indices and for each is such that ; that is, the Feynman rules are as shown in Figure 3. The diagram corresponding to Figure 1a is shown in Figure 4. We claim that, apart from accidental symmetry factors,

(17)

where . The proof is as follows.

The perturbative expansion with respect to generates cubic vertices with distinguished heads and and ends; the vertices can be placed on the plane such that their heads point up. The expansion and contour integral for generate sources of and , which are the leaves. The expansion and contour integral for generate sources of and , which are the roots. An can only be connected to a , and a can only be connected to a . We can orient all edges from and , and similarly for the half-edges in the cubic vertices, flowing from head to L and R branches. Then, from each root, we can define paths by following arrows; any path we take will go through some number of cubic vertices and end in a leaf. Therefore there are connected components, one for each root. Considered as a graph, we can count the number of edges as follows: each source generates half an edge, and each cubic vertex generates 3/2 edges. The total number of edges is . The difference between the number of vertices and the number of edges is , which is the number of connected components. Therefore the graph is a forest.

We thus generate a forest of planar, rooted trees, with leaves in total. The weight of each diagram is

(18)

where is the usage frequency of rule and is the usage frequency of . This expansion counts diagrams with a degeneracy of since each tree root can be either an or an . In the expansion, the different connected components are not ordered. We would like to distinguish forests by the order of the trees, so we multiply the result by . Choosing and such that

(19)

for all and we have our result.

The virtue of working with (17) is that when constant, the leading behavior can be extracted by a saddle-point analysis [14]. (This can be seen explicitly by considering an appropriate rescaling of the fields.) There is one subtlety. The integration variables are the real and imaginary parts of and , and the saddle-point equations should be taken with respect to these parts. The solutions to and , which may be complex, are then added to produce , and similarly for . By linearity, this is equivalent to taking saddle-point equations with respect to and , and treating and as independent. It is convenient to write . The saddle-point equations are

(20)
(21)
(22)
(23)
(24)
(25)
(26)

for all .

(20), (21) and their pairs (22), (23) have an interpretation as recursion equations, which are equivalent to the saddle-point limit of Tutte recursion relations or loop equations in related contexts [15, 16]. They are also related to self-consistent equations derived for spin glasses on trees. (For example, is analogous to what is called in [9] and in [17]. In these works, and are functions of Ising variables, so that they can take different values. Below, we will replicate the so that they take different values; is the Ising case.) Indeed, any node can either propagate into and become a leaf, with weight , or propagate into and become another branch, with weight , including all possibilities. This gives (20). Similarly, any node is either the child of a root, with weight , or the child of a branch, with weight , including all possibilities. This gives (21).
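Schematically, such recursion equations can be solved by fixed-point iteration, as in the following sketch (the precise form of (20)-(23), including field and bias factors, differs; this only illustrates the leaf-or-branch structure described above, with the toy tables M and O from earlier). Convergence requires sufficiently subcritical weights, mirroring the role of the emission probability in keeping trees finite.

    def solve_recursion(M, O, q=0.6, n_iter=200):
        """Iterate a recursion of the schematic form
            S_a = q * sum_w O[a][w] + (1 - q) * sum_{b,c} M[a][(b,c)] * S_b * S_c,
        i.e. a node either emits a leaf or branches into two children."""
        symbols = set(M) | set(O)
        S = {a: 0.0 for a in symbols}
        for _ in range(n_iter):
            S = {a: q * sum(O.get(a, {}).values())
                    + (1 - q) * sum(w * S[b] * S[c]
                                    for (b, c), w in M.get(a, {}).items())
                 for a in symbols}
        return S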

For specific grammars, (20)-(26) can be explicitly analyzed. (After writing these equations in terms of real variables, they take the form of a ‘context-free schema’; see section VII.6.1 in [18].) Our interest, however, is in extracting the behavior of typical grammars in the case when is large. For this we need to choose an ensemble.

4 Grammar ensembles

Figure 5: Distribution of individual grammar weights in (a) the Random Language Model (RLM) and (b) the Gaussian model, for as indicated. The regime of unphysical negative weights is shaded in the Gaussian model.

We will consider two models. In [1] it was argued that a generic model will have lognormally distributed weights, viz.,

(27)

where the deep and surface sparsities and are defined by

(28)

and . Here and . A plot of the weights for a range of is shown in Fig.5a. It is straightforward to show that and satisfy

(29)

where denotes a grammar average and . A small ‘deep temperature’ corresponds to a large deep sparsity. The model (27) was called in [1] the Random Language Model (RLM).
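For intuition, a schematic sampler of lognormal rule weights is given below; the spread sigma plays the role of an inverse 'deep temperature', and the effective number of rules per parent symbol (an inverse participation ratio, used here only as a stand-in for the sparsity of (28), which it is not) shrinks as sigma grows. The exact parametrization of the RLM is the one given in [1].

    import numpy as np

    def sample_lognormal_weights(n_hidden, sigma, rng=None):
        """Sample a lognormal weight tensor M[a, b, c] for binary rules a -> b c
        and report the mean effective number of rules per parent symbol.
        Larger sigma (lower 'deep temperature') concentrates weight on fewer
        rules, i.e. the grammar becomes sparser."""
        rng = np.random.default_rng(rng)
        M = np.exp(sigma * rng.standard_normal((n_hidden, n_hidden, n_hidden)))
        p = M / M.sum(axis=(1, 2), keepdims=True)          # normalize per parent
        eff_rules = (1.0 / (p ** 2).sum(axis=(1, 2))).mean()
        return M, eff_rules

    for sigma in (0.5, 2.0, 4.0):
        _, n_eff = sample_lognormal_weights(20, sigma, rng=0)
        print(sigma, n_eff)     # effective rule count drops as sigma grows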

It was shown in [1] that the RLM exhibits two phases, depending on the value of , plus logarithmic corrections. More precisely, Shannon entropies appear to collapse with respect to , where or depending on the quantity considered. For , Shannon entropies are independent of the deep temperature , and take maximal values, indicating that the grammar does not carry information: despite strictly following the rules of a WCFG, sentences are indistinguishable from random noise. For smaller , entropies drop, and the grammar carries nontrivial information. It is our goal to extract this transition from (17).

It will turn out to be much simpler to consider an alternative model, where the weights and are Gaussian, rather than lognormal; matching the mean and variance to those of the RLM, we can again use the quantities and , and similarly for . The distribution is plotted in Fig.5b. This model has the unphysical feature that weights have a negative tail; naively, we could imagine that this would be unimportant, since the largest weights are most important, but we will have to revise this statement later. We call this the Gaussian model (GM).
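The Gaussian counterpart can be sampled in the same schematic way; the fraction of unphysical negative weights grows with the ratio of the standard deviation to the mean, which is the low-'temperature' regime where the GM stops being a faithful stand-in. Again, the means and variances below are placeholders, not the matched values used in the text.

    import numpy as np

    def sample_gaussian_weights(n_hidden, mean, std, rng=None):
        """Gaussian-model weights with prescribed mean and standard deviation;
        returns the tensor and the fraction of (unphysical) negative entries."""
        rng = np.random.default_rng(rng)
        M = mean + std * rng.standard_normal((n_hidden, n_hidden, n_hidden))
        return M, float((M < 0).mean())

    for std in (0.2, 1.0, 3.0):
        _, neg_frac = sample_gaussian_weights(20, 1.0, std, rng=0)
        print(std, neg_frac)    # negative-weight fraction grows with std/mean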

We wish to compute

(30)

where we used the replica method [19]. The fields , and are all replicated, adding an index . To compute we need the grammar average

(31)
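The replica identity being used here is the standard one, stated in generic notation:

    \overline{\ln Z} \;=\; \lim_{n \to 0} \frac{\overline{Z^{\,n}} - 1}{n},

so that the grammar average of ln Z is obtained from the average of an n-fold replicated partition function, continued to n → 0.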

Write . In the RLM the grammar averages over are of the form ()

(32)
(33)

A term of order corresponds to the rule appearing times. We are interested in a transition due to patterns of repeated rule application between sentences, rather than inside them (this would correspond to a transition deeper in the ordered phase). Therefore a priori we expect that we only need connected terms up to a small finite order . Note that at order , terms involving different replicas will be present. Resumming gives

(34)

with etc. From this divergent sum we only need to retain terms up to order , since the integration over will retain only vertices for each replica.

For practical reasons exact calculations are limited to . In this case, we consider all derivations in which a rule can appear at most twice in one derivation tree. Note that rules can still appear arbitrarily many times in the set of replicas and sentences. Keeping terms to is equivalent to letting the be drawn from a Gaussian distribution. For appropriate choice of mean and variance, we can thus fix the GM to be equal to the RLM to this order. In the remainder of this work, we will first find the exact solution of the GM, and then discuss its extension to the RLM.

4.1 Gaussian model

Applying the same arguments to the integral over , we have for the GM

(35)

where we introduced ‘magnetization’ vectors and overlap matrices

(36)
(37)

and . (Recall that .) Assembling the above results we find that for the GM,

This is now in a form amenable to standard treatment by replicas: the overlap matrices can be introduced as new parameters, and the original variables can be integrated out. We notice that the colour indices play the role usually played by spatial indices in spin glasses [19]. The surprising result is that the model can be exactly integrated, without even making an ansatz on the replica structure, and without taking the large limit. This integrability can be traced to the fact that the overlap matrices depend only on the real and imaginary parts of in the canonical way, i.e. through . This gives rise to a symplectic structure that simplifies the integration over these variables. The derivation is sketched in Appendix A. The final result is

(38)

with ,

(39)

and

(40)

Here and .

The elements of are as follows. Those involving and are precisely those required from (18). is entropic. Terms with and are energetic, since they depend on the grammar weight distribution. The function , which can be written in terms of hypergeometric functions, is plotted in Fig.6. It develops an imaginary part for , indicating that the unphysical negative-probability states are becoming important. For large the condition is equivalent to

(41)

and similarly the condition is equivalent to . These inequalities fix the regime in which the GM is physical.

Figure 6: Function from (39).

4.2 RLM

We now return to the full model. As discussed above, we cannot obtain the exact solution; however, since a saddle-point method is justified for large , we can consider different ansätze for the form of the solution. Two are natural: (i) the colour-symmetric ansatz , and (ii) the replica-symmetric ansatz . After some calculations similar to those for the GM, we eventually find for either (i) or (ii) the same form (38), except that . Besides the ansätze on the form of and , we assume that is large and that the replica limit can be taken perturbatively, i.e. keeping terms in the action, as in the usual approach [19]. (Consistency with the GM suggests that we should be able to recover (38) with for that model, without necessarily taking the limit , in which case the terms are trivially unimportant. Indeed, if instead of taking in (49), we look for a saddle-point with large , the saddle-point perturbative in gives exactly (38) with . This indicates that the function is nonperturbative in the replica limit.) We now analyze (38) with , with the understanding that this holds in the replica-symmetric regime, whose range of validity is to be determined.

It is convenient to separate into its entropic and energetic contributions. This can be done exactly because the RLM has a scaling symmetry when the bias is included. Indeed, it is not hard to show that the partition function satisfies the scaling property (in an abuse of earlier notation). The dependent part of is

(42)

so that at , and the replica-symmetric entropy is

(43)

The entropy cannot be negative; this gives necessary, though perhaps not sufficient, conditions on the regime where the replica-symmetric ansatz is applicable. For simplicity, consider the case ( is the typical length of a tree; we let it be large). Then one can determine that and, for the simulated case where the emission probability is close to , . The condition is approximately equivalent to

(44)

Our main concern is the emergence of deep structure, which does not depend on what happens at the surface of the tree. In the limit , this becomes , very similar to the regime in which the GM is physical, (41).

4.3 Order parameter

Finally, we return to the order parameter that measures deep structure. Let us first give a heuristic derivation of its value in the RLM. We need to compute and , where is the occupancy of the rule . Using (7) and (13) in the diagrammatic representation, one can see that . The satisfy the sum rule and so have a mean value . The occupancies are positively correlated with the grammar weights, since rules with higher weights are sampled more frequently. A crude estimate is then (the mean value of a weight is ). This leads to the estimate

(45)

which indicates that order increases as is lowered simply because the weight variance increases. can be computed more precisely using replicas. After a long computation, assuming the replica-symmetric ansatz and taking large , one eventually finds exactly the same result, (45). Thus this simple expression is in fact the genuine replica-symmetric result; it is plotted in Fig.2, where it is compared with numerical data. (These data have been obtained by the same methods as described in [1]. Here we have simulated many more samples ( compared to in that work) in order to resolve the large part of the curve.) In the large limit, it matches quantitatively, without fitting parameters, above . For smaller , the data asymptote, as they must, while the replica-symmetric prediction diverges.
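The statement that the weight variance grows as the temperature is lowered rests on the standard lognormal moment formulas, quoted here in generic notation (with σ² the log-variance of a weight, which increases as the deep temperature decreases):

    % if  \ln w \sim \mathcal{N}(\mu, \sigma^2),  then
    \langle w^{k} \rangle = e^{k\mu + k^{2}\sigma^{2}/2},
    \qquad
    \frac{\mathrm{Var}(w)}{\langle w \rangle^{2}} = e^{\sigma^{2}} - 1 .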

5 Conclusion

We showed that the partition function for weighted context-free grammars has a convenient diagrammatic representation. For individual grammars, the behavior of a large text is governed by saddle-point equations, which resemble belief-propagation equations [20].

We then considered two ensembles of grammars, which are equivalent in a large temperature limit. The Gaussian model (GM) was solved exactly, and shown to become unphysical for . For the random language model (RLM), previously simulated in [1], the partition function was computed in the replica-symmetric ansatz; the entropy becomes negative at low temperature, again depending essentially on the quantity . Finally, the order parameter was computed in the replica-symmetric ansatz. The prediction quantitatively agrees with simulations above . These results indicate that replica-symmetry must be broken in the nontrivial low-temperature phase.

The RLM bears some similarity to a spin-glass on the Bethe lattice, a difficult problem that is still not fully understood [21, 10, 22, 17, 23]. Indeed, both problems can be generated by a diagrammatic method, and in both problems one finds that overlaps of all orders are needed to compute the partition function. However, for the spin-glass, one can perform an expansion around the mean-field limit, which is the Sherrington-Kirkpatrick model solved by Parisi [19]. Naïvely, the analogue to the SK model would be the Gaussian model, which we solved above. However, we showed that this model does not break the replica symmetry. This is related to a gauge symmetry in the diagrammatic formulation. It is therefore an open question whether there is a more primitive model that captures the essence of random languages in the low-temperature phase, and remains solvable.

Finally, we have focussed here on context-free grammars, for which derivations are trees. The next level up in the Chomsky hierarchy is context-sensitive grammars. A theorem of Kuroda [24] says that it is sufficient to add rules of the form AB → CD to those above to model all context-sensitive grammars. Clearly this will add a quartic vertex to our (16), which is not in itself a difficulty. However, well-formed derivations must be represented by planar diagrams, so that the order of symbols is preserved in the derivation. Generating random planar graphs that are not trees requires matrices as integration variables; this strongly suggests that general grammars require the full machinery of complex matrix models.

References

  • [1] DeGiuli E 2018 to appear in Phys. Rev. Lett. arXiv preprint arXiv:1809.01201
  • [2] Carnie A 2013 Syntax: A generative introduction (John Wiley & Sons, Ltd.)
  • [3] Chomsky N 2002 Syntactic structures (Berlin: Walter de Gruyter)
  • [4] Hopcroft J E, Motwani R and Ullman J D 2007 Introduction to automata theory, languages, and computation 3rd ed (Boston, MA: Pearson)
  • [5] Searls D B 2002 Nature 420 211
  • [6] Di Francesco P, Ginsparg P and Zinn-Justin J 1995 Physics Reports 254 1–133
  • [7] Kazakov V 1986 Physics Letters A 119 140–144
  • [8] Boulatov D and Kazakov V 1987 Physics Letters B 186 379–384
  • [9] Bachas C, De Calan C and Petropoulos P 1994 Journal of Physics A: Mathematical and General 27 6121
  • [10] Baillie C, Janke W, Johnston D and Plecháč P 1995 Nuclear Physics B 450 730–752
  • [11] Bessis D, Itzykson C and Zuber J B 1980 Advances in Applied Mathematics 1 109–157
  • [12] ’t Hooft G 1974 Nuclear Physics B 72 461–473
  • [13] Brezin E, Itzykson C, Parisi G and Zuber J 1978 Commun. math. Phys 59 35–51
  • [14] Le Guillou J C and Zinn-Justin J 2012 Large-order behaviour of perturbation theory vol 7 (Elsevier)
  • [15] Di Francesco P 2006 2D quantum gravity, matrix models and graph combinatorics (Springer) pp 33–88
  • [16] Eynard B 2016 Counting surfaces (Springer)
  • [17] Parisi G and Tria F 2002 The European Physical Journal B-Condensed Matter and Complex Systems 30 533–541
  • [18] Flajolet P and Sedgewick R 2009 Analytic combinatorics (Cambridge University Press)
  • [19] Mézard M, Parisi G and Virasoro M 1987 Spin glass theory and beyond: An Introduction to the Replica Method and Its Applications vol 9 (World Scientific Publishing Company)
  • [20] Mézard M and Montanari A 2009 Information, physics, and computation (Oxford University Press)
  • [21] De Dominicis C and Goldschmidt Y 1989 Journal of Physics A: Mathematical and General 22 L775
  • [22] Mézard M and Parisi G 2001 The European Physical Journal B-Condensed Matter and Complex Systems 20 217–233
  • [23] Parisi G 2017 Journal of Statistical Physics 167 515–542
  • [24] Kuroda S Y 1964 Information and Control 7 207–223
  • [25] Mackey D S and Mackey N 2003 On the determinant of symplectic matrices (Manchester Centre for Computational Mathematics)

Appendix A Solution of Gaussian model

We introduce as a new variable with a corresponding momentum , and similarly for and , with conjugate momenta and , respectively. Let us write . The variables are Gaussian, with a coupling matrix diagonal in colour. For each , the coupling matrix is a matrix acting on ,

(46)

where is the identity matrix. It is easily verified that is complex symplectic: , where

(47)

This implies that and, less obviously, [25]. Hence after integrating out there is no nontrivial entropic term from , nor does there appear the inverse of , as would naively be expected. In fact, after integrating out and the action remains linear in , , and . Hence these can be immediately integrated out and we find that

(48)
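As an aside on the symplectic property invoked above, the determinant statement follows from the standard argument (generic notation), with the sharper sign statement det M = +1 being the "less obvious" part established in [25]:

    M^{\mathsf T} J M = J
    \;\Longrightarrow\;
    \det(M)^{2}\,\det(J) = \det(J)
    \;\Longrightarrow\;
    \det(M) = \pm 1 .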

We find