1 Introduction
In the context of distributional semantics [1, 2], the meaning of words is represented by vectors constructed from the co-occurrences of a word of interest with a set of context words. In tensorial compositional distributional semantics [3, 4, 5, 6, 7], different types of words, depending on their grammatical role, are associated with vectors, matrices or higher rank tensors. In [8, 9] we initiated a study of the statistics of these tensors in the framework of matrix/tensor models. We focused on matrices associated with adjectives or verbs, constructed by a linear regression method from the vectors for nouns and for adjective-noun or verb-noun composites.
We developed a 5-parameter Gaussian model,
(1.2)  
The parameters are coefficients of five linearly independent linear and quadratic functions of the random matrix variables $M_{ij}$ which are permutation invariant, i.e. obey the equation
(1.3) $f(M_{\sigma(i)\,\sigma(j)}) = f(M_{ij})$
for $\sigma \in S_D$, the symmetric group of all permutations of $D$ distinct objects. This invariance implements the notion that the meaning represented by the word matrices is independent of the ordering of the context words. General observables of the model are polynomials obeying the condition (1.3). At quadratic order there are eleven linearly independent polynomials, which are listed in Appendix B of [8]. A three-dimensional subspace of quadratic invariants was used in the model above. The most general Gaussian matrix model compatible with $S_D$ symmetry considers all eleven quadratic invariants and allows arbitrary coefficients for each of them. What makes the 5-parameter model relatively easy to handle is that the diagonal variables $M_{ii}$ are each decoupled from each other and from the off-diagonal elements, and there are $D(D-1)/2$ pairs of off-diagonal elements. For each pair $i \neq j$, $M_{ij}$ and $M_{ji}$ mix with each other, so the solution of the model requires the inversion of a $2 \times 2$ matrix.
Representation theory of $S_D$ offers the techniques to solve the general permutation invariant Gaussian model. The matrix elements $M_{ij}$ transform as the tensor product of two copies of the natural representation $V_D$. We first decompose $V_D \otimes V_D$ into irreducible representations of the diagonal $S_D$.
(1.5) $V_D \otimes V_D = 2\, V_0 \oplus 3\, V_H \oplus V_2 \oplus V_3$
The trivial (one-dimensional) representation $V_0$ occurs with multiplicity 2. The $(D-1)$-dimensional irreducible representation (irrep) $V_H$ occurs with multiplicity 3. $V_2$ is an irrep of dimension $D(D-3)/2$ which occurs with multiplicity 1. Likewise, $V_3$, of dimension $(D-1)(D-2)/2$, occurs with multiplicity 1. As a result of these multiplicities, the 11 parameters can be decomposed as
(1.6) $11 = 3 + 6 + 1 + 1$
Here 3 is the size of a symmetric $2 \times 2$ matrix and 6 is the size of a symmetric $3 \times 3$ matrix. More precisely, the parameters form
(1.7) $\mathrm{Sym}^{+}(2) \times \mathrm{Sym}^{+}(3) \times \mathbb{R}_{\geq 0} \times \mathbb{R}_{\geq 0}$
where $\mathbb{R}_{\geq 0}$ is the set of real numbers greater than or equal to zero and $\mathrm{Sym}^{+}(k)$ is the space of positive semidefinite matrices of size $k$. Calculating the correlators of this Gaussian model amounts to inverting a symmetric $2 \times 2$ matrix, inverting a symmetric $3 \times 3$ matrix, and applying Wick contraction rules for correlators, as in quantum field theory. There is a graph basis for permutation invariant functions of $M$. This is explained in Appendix B of [8], which gives examples of graph basis invariants and representation theoretic counting formulae that make contact with the sequence A052171 (directed multigraphs with loops on any number of nodes) of the Online Encyclopaedia of Integer Sequences (OEIS) [10].
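The counting statements above can be checked mechanically. The following sketch (our own illustration, not part of the paper; the function names are hypothetical) verifies that the irrep multiplicities and dimensions add up to $D^2$, and that the symmetric blocks of quadratic couplings give $3 + 6 + 1 + 1 = 11$ parameters.

```python
# Consistency check (illustrative): the irrep content of the space of matrix
# variables is 2 copies of the trivial irrep, 3 of the hook, and one each of
# the two remaining irreps; the quadratic couplings organise into a symmetric
# 2x2 block, a symmetric 3x3 block and two scalars.

def irrep_dims(D):
    """Dimensions of the four irreps appearing for S_D (assumes D >= 4)."""
    return {"V0": 1, "VH": D - 1, "V2": D * (D - 3) // 2, "V3": (D - 1) * (D - 2) // 2}

def check_counts(D):
    dims = irrep_dims(D)
    mults = {"V0": 2, "VH": 3, "V2": 1, "V3": 1}
    # Multiplicities times dimensions must reproduce dim = D^2.
    total = sum(mults[r] * dims[r] for r in dims)
    # A multiplicity-m irrep contributes a symmetric m x m block of quadratic
    # couplings, i.e. m(m+1)/2 parameters: 3 + 6 + 1 + 1 = 11.
    params = sum(m * (m + 1) // 2 for m in mults.values())
    return total, params

for D in (4, 5, 10, 100):
    total, params = check_counts(D)
    assert total == D * D
    assert params == 11
```

The same counting explains why the general model needs only a $2 \times 2$ and a $3 \times 3$ matrix inversion: those are the two multiplicity blocks.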
In this paper we show how all the linear and quadratic moments of the graph-basis invariants are expressed in terms of the representation theoretic parameters of (1.7). We also show how some cubic and quartic graph-basis invariants are expressed in terms of these parameters. These results are analytic expressions valid for all $D$. The paper is organised as follows. Section 2 introduces the relevant facts from the representation theory of $S_D$ in a fairly self-contained way, which can be read with little prior familiarity with representation theory, assuming only knowledge of linear algebra. This is used to define the 13-parameter family of Gaussian models (equations (2.115), (2.116), (2.118)). Section 3 calculates the expectation values of linear and quadratic graph-basis invariants in the Gaussian model. Sections 4 and 5 describe calculations of expectation values of a selection of cubic and quartic graph-basis invariants in the model.
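The Wick contraction rules mentioned above can be illustrated with a small numerical sketch (our own illustration, with hypothetical names, assuming a generic centred Gaussian): for an action $S = \frac{1}{2} x^T A x$, the two-point function is $G = A^{-1}$ and higher moments are sums over pairings.

```python
import numpy as np

def pairings(indices):
    """All ways to split an even-length tuple of indices into unordered pairs."""
    if not indices:
        yield []
        return
    first, rest = indices[0], indices[1:]
    for k in range(len(rest)):
        pair = (first, rest[k])
        remaining = rest[:k] + rest[k + 1:]
        for rest_pairs in pairings(remaining):
            yield [pair] + rest_pairs

def wick(G, *idx):
    """Moment <x_{i1}...x_{in}> of a centred Gaussian with covariance G."""
    return sum(np.prod([G[i, j] for (i, j) in ps]) for ps in pairings(idx))

# Example: a 2x2 coupling matrix A, as for a mixing off-diagonal pair.
A = np.array([[2.0, 0.5], [0.5, 1.0]])
G = np.linalg.inv(A)          # two-point function <x_a x_b> = (A^{-1})_{ab}
m4 = wick(G, 0, 0, 1, 1)      # <x_0 x_0 x_1 x_1> = G00*G11 + 2*G01^2
assert np.isclose(m4, G[0, 0] * G[1, 1] + 2 * G[0, 1] ** 2)
```

The three terms in the four-point function correspond to the three pairings of four fields, exactly as in free quantum field theory.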
2 General permutation invariant Gaussian Matrix models
In [8] we solved a permutation invariant Gaussian matrix model with 2 linear and 3 quadratic parameters. The linear parameters are coefficients of linear permutation invariant functions of $M$ and the quadratic parameters are coefficients of quadratic functions. We explained the existence of a 13-parameter family of models, based on the fact that there are eleven linearly independent quadratic permutation invariant functions of a matrix $M$. The general 13-parameter family of models can be solved by using techniques from the representation theory of $S_D$. Useful background references on representation theory are [11, 12, 13]. We begin by collecting the relevant facts which will allow a useful parametrisation of the quadratic invariants of a matrix $M$. An important step is to form linear combinations of the $M_{ij}$ labelled by irreducible representations of $S_D$. The results of this step are in (2.68)-(2.77). The quadratic terms in the action of the Gaussian model are close to diagonal in these variables.
The matrix elements $M_{ij}$, where $i, j$ run over $\{1, \dots, D\}$, span a vector space of dimension $D^2$. It is isomorphic to the tensor product $V_D \otimes V_D$, where $V_D$ is a $D$-dimensional space. Consider $V_D$ as the span of basis vectors $e_1, \dots, e_D$. This is a representation of $S_D$. For every permutation $\sigma \in S_D$, there is a linear operator $\rho(\sigma)$ defined by
(2.1) $\rho(\sigma)\, e_i = e_{\sigma(i)}$
on the basis vectors and extended by linearity. With this definition, $\rho$ is a homomorphism from $S_D$ to linear operators acting on $V_D$:
(2.2) $\rho(\sigma_1)\, \rho(\sigma_2) = \rho(\sigma_1 \sigma_2)$
We can take the basis vectors to be orthonormal.
(2.3) $\langle e_i, e_j \rangle = \delta_{ij}$
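The operators $\rho(\sigma)$ can be realised concretely as permutation matrices. The following sketch (our own illustration, not from the paper) checks the homomorphism property and the orthogonality that follows from the orthonormal basis.

```python
import numpy as np
from itertools import permutations

def rho(sigma):
    """Matrix of the operator rho(sigma): e_i -> e_{sigma(i)} (0-indexed)."""
    D = len(sigma)
    R = np.zeros((D, D))
    for i in range(D):
        R[sigma[i], i] = 1.0
    return R

D = 4
sigmas = list(permutations(range(D)))
s1, s2 = sigmas[5], sigmas[17]
compose = tuple(s1[s2[i]] for i in range(D))   # (s1 o s2)(i) = s1(s2(i))

# Homomorphism: rho(s1) rho(s2) = rho(s1 o s2)
assert np.allclose(rho(s1) @ rho(s2), rho(compose))
# Permutation matrices are orthogonal: they preserve <e_i, e_j> = delta_ij
assert np.allclose(rho(s1).T @ rho(s1), np.eye(D))
```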
We can form the following linear combinations
(2.4)  
(2.5)  
(2.6)  
(2.7)  
(2.8)  
(2.9)  
(2.10) 
The vector $E_0$ is invariant under the action of $S_D$:
(2.11) $\rho(\sigma)\, E_0 = E_0$
The one-dimensional vector space spanned by $E_0$ is an invariant subspace of $V_D$. We can call this vector space $V_0$. The vector space spanned by $E_a$ for $a \in \{1, \dots, D-1\}$, which we call $V_H$, is also an invariant subspace.
(2.12) 
We have some matrices $D_{ab}(\sigma)$ such that
(2.13) $\rho(\sigma)\, E_a = \sum_{b=1}^{D-1} D_{ba}(\sigma)\, E_b$
These matrices are obtained by using the action on the $e_i$ and the change of basis coefficients. The vectors $E_a$ for $a \in \{1, \dots, D-1\}$ are orthonormal:
(2.14) $\langle E_a, E_b \rangle = \delta_{ab}$
All the above facts are summarised by saying that the natural representation $V_D$ of $S_D$ decomposes as an orthogonal direct sum of irreducible representations of $S_D$ as
(2.15) $V_D = V_0 \oplus V_H$
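This decomposition is easy to check numerically. A minimal sketch (our own, taking $E_0$ proportional to the sum of the basis vectors, which is the standard choice): $E_0$ spans a one-dimensional invariant subspace, and its orthogonal complement, the hook space, is also preserved by permutations.

```python
import numpy as np

D = 5
E0 = np.ones(D) / np.sqrt(D)             # unit vector along e_1 + ... + e_D

def rho(sigma):
    """Permutation matrix for sigma acting on the basis e_i (0-indexed)."""
    R = np.zeros((D, D))
    for i in range(D):
        R[sigma[i], i] = 1.0
    return R

sigma = (2, 0, 4, 1, 3)                  # an arbitrary permutation
R = rho(sigma)

# E_0 spans the invariant one-dimensional subspace V_0:
assert np.allclose(R @ E0, E0)

# The orthogonal complement V_H is also invariant: a vector orthogonal to E_0
# (i.e. with components summing to zero) stays orthogonal after permutation.
v = np.arange(D) - np.mean(np.arange(D))
assert np.isclose(v @ E0, 0.0)
assert np.isclose((R @ v) @ E0, 0.0)
```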
By reading off the coefficients in the expansion of the $E_a$ in terms of the $e_i$, we can define the coefficients $C_{a,i}$:
(2.16) $E_0 = \sum_{i=1}^{D} C_{0,i}\, e_i$
(2.17) $E_a = \sum_{i=1}^{D} C_{a,i}\, e_i$
They are
(2.18) $C_{0,i} = \frac{1}{\sqrt{D}}$
(2.19) $C_{a,i} = \frac{1}{\sqrt{a(a+1)}} \quad \text{for } 1 \le i \le a$
(2.20) $C_{a,a+1} = \frac{-a}{\sqrt{a(a+1)}}, \qquad C_{a,i} = 0 \ \text{for } i > a+1$
The orthonormality means that
(2.21) $\sum_{i=1}^{D} C_{a,i}\, C_{b,i} = \delta_{ab}$
(2.22) $\sum_{i=1}^{D} C_{0,i}\, C_{a,i} = 0 \quad \text{for } a \in \{1, \dots, D-1\}$
(2.23) $\sum_{a=0}^{D-1} C_{a,i}\, C_{a,j} = \delta_{ij}$
The last equation implies that
(2.25) $\sum_{a=1}^{D-1} C_{a,i}\, C_{a,j} = \delta_{ij} - \frac{1}{D}$
From
(2.26) 
we deduce
(2.27) 
As we will see, this function will play an important role in calculations of correlators in the Gaussian model. It is the projector in $V_D$ for the subspace $V_H$, obeying
(2.28)  
(2.29) 
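The projector onto the hook subspace has the explicit form $\delta_{ij} - \frac{1}{D}$ (a standard fact), and its defining properties are easy to verify numerically. A sketch of our own:

```python
import numpy as np

D = 6
P = np.eye(D) - np.ones((D, D)) / D      # P_ij = delta_ij - 1/D

assert np.allclose(P @ P, P)             # idempotent: a projector
assert np.isclose(np.trace(P), D - 1)    # rank = dim V_H = D - 1
assert np.allclose(P.sum(axis=1), 0.0)   # annihilates the invariant direction

# Equivariance: P_{sigma(i) sigma(j)} = P_{ij}, i.e. P commutes with the
# permutation matrices.
sigma = (3, 5, 0, 2, 4, 1)
R = np.zeros((D, D))
for i in range(D):
    R[sigma[i], i] = 1.0
assert np.allclose(R @ P @ R.T, P)
```

The equivariance is exactly what makes this projector appear in the permutation invariant correlators computed later.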
Now we will use these coefficients to build linear combinations of the matrix elements $M_{ij}$ which have well-defined transformation properties under $S_D$. Define
(2.30)  
(2.31)  
(2.32)  
(2.33) 
The indices $a, b$ range over $\{1, \dots, D-1\}$. These variables are irreducible under $S_D \times S_D$, transforming as $V_0 \otimes V_0$, $V_0 \otimes V_H$, $V_H \otimes V_0$ and $V_H \otimes V_H$ respectively. Under the diagonal $S_D$, the first three transform as $V_0$, $V_H$ and $V_H$, while the $V_H \otimes V_H$ variables form a reducible representation.
Conversely, we can write the $M_{ij}$ in terms of these variables, using the orthogonality properties of the $C_{a,i}$.
(2.35)  
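The passage between the $M_{ij}$ and the representation theoretic variables is an orthogonal change of basis, so it is exactly invertible. A sketch of our own (writing $C_{a,i}$ for the change of basis coefficients; the explicit Gram-Schmidt choice below is ours, and any rotation of the hook rows works equally well):

```python
import numpy as np

def helmert(D):
    """Orthogonal matrix whose rows are candidate C_{a,i}: row 0 is the
    trivial direction, rows 1..D-1 an orthonormal basis of the hook space."""
    C = np.zeros((D, D))
    C[0, :] = 1.0 / np.sqrt(D)
    for a in range(1, D):
        C[a, :a] = 1.0 / np.sqrt(a * (a + 1))
        C[a, a] = -a / np.sqrt(a * (a + 1))
    return C

D = 5
C = helmert(D)
assert np.allclose(C @ C.T, np.eye(D))   # orthonormality of the rows

# Change of basis for matrix variables: S_ab = sum_ij C_{a,i} C_{b,j} M_ij,
# and conversely M = C^T S C by the same orthogonality.
rng = np.random.default_rng(1)
M = rng.standard_normal((D, D))
S = C @ M @ C.T
assert np.allclose(C.T @ S @ C, M)       # round trip recovers M exactly
```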
The next step is to consider quadratic products of these variables, and identify the products which are invariant. In order to do this we need to understand the transformation properties of the above variables under the diagonal action of $S_D$. It is easy to see that the $V_0 \otimes V_0$ variable is invariant. The $V_0 \otimes V_H$ and $V_H \otimes V_0$ variables both have a single index running over $\{1, \dots, D-1\}$, and they transform in the same way as $V_H$. The $V_H \otimes V_H$ variables span a space of dimension $(D-1)^2$, which is
(2.36) $V_H \otimes V_H$
Permutations act on this as
(2.37) 
2.1 Useful facts from representation theory

The representation space $V_H \otimes V_H$ can be decomposed into irreducible representations (irreps) of the diagonal action as
(2.38) $V_H \otimes V_H = V_0 \oplus V_H \oplus V_2 \oplus V_3$
In Young diagram notation for irreps of $S_D$:
(2.39) $V_0 = V_{[D]}$ (2.40) $V_H = V_{[D-1,1]}$ (2.41) $V_2 = V_{[D-2,2]}$ (2.42) $V_3 = V_{[D-2,1,1]}$
These irreps are known to have dimensions $1$, $D-1$, $D(D-3)/2$ and $(D-1)(D-2)/2$ respectively. They add up to $(D-1)^2$, which is the dimension of $V_H \otimes V_H$.
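These dimensions follow from the standard hook length formula for irreps of the symmetric group. A sketch of our own (the implementation is ours; the formula itself is standard):

```python
from math import factorial

def dim_irrep(partition):
    """Dimension of the S_n irrep labelled by a partition, via the hook
    length formula: dim = n! / (product of hook lengths)."""
    n = sum(partition)
    # conjugate partition (column lengths)
    conj = [sum(1 for row in partition if row > j) for j in range(partition[0])]
    hooks = 1
    for i, row in enumerate(partition):
        for j in range(row):
            hooks *= (row - j) + (conj[j] - i) - 1
    return factorial(n) // hooks

D = 6
assert dim_irrep([D]) == 1
assert dim_irrep([D - 1, 1]) == D - 1
assert dim_irrep([D - 2, 2]) == D * (D - 3) // 2
assert dim_irrep([D - 2, 1, 1]) == (D - 1) * (D - 2) // 2
# Inside the tensor square of the hook, one copy of each adds to (D-1)^2:
assert (dim_irrep([D]) + dim_irrep([D - 1, 1]) + dim_irrep([D - 2, 2])
        + dim_irrep([D - 2, 1, 1])) == (D - 1) ** 2
```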

The vector $\sum_{a=1}^{D-1} E_a \otimes E_a$ is invariant under the diagonal action of $S_D$. The action of $\rho(\sigma)$ on the $E_a$ is given by
(2.43) $\rho(\sigma)\, E_a = \sum_{b} D_{ba}(\sigma)\, E_b$
These can be verified to satisfy the homomorphism property
(2.44) $D(\sigma_1)\, D(\sigma_2) = D(\sigma_1 \sigma_2)$
We also have $D(\sigma^{-1}) = D(\sigma)^{T}$, since the matrices $D(\sigma)$ are real orthogonal. Using these properties, we can show that $\sum_a E_a \otimes E_a$ is invariant under the diagonal action. The corresponding unit vector is
(2.45) $\frac{1}{\sqrt{D-1}} \sum_{a=1}^{D-1} E_a \otimes E_a$
The vector in $V_0$ coming from $V_0 \otimes V_0$ is simply
(2.46) $E_0 \otimes E_0$
The vector in $V_H$ inside (2.38) is some linear combination
(2.47)
The coefficients are representation theoretic numbers (called Clebsch-Gordan coefficients) which satisfy the orthonormality condition
(2.48)
As shown in Appendix B, these Clebsch-Gordan coefficients are proportional to
(2.49) $\sum_{i=1}^{D} C_{a,i}\, C_{b,i}\, C_{c,i}$
It is a useful fact that the Clebsch-Gordan coefficients for $S_D$ can be written in terms of the coefficients $C_{a,i}$ describing $V_H$ as a subspace of the natural representation. This has recently played a role in the explicit description of a ring structure on primary fields of free scalar conformal field theory [14]. It would be interesting to explore the more general construction of explicit Clebsch-Gordan coefficients and projectors in the representation theory of $S_D$ in terms of the $C_{a,i}$.
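Both claims, the diagonal invariance of the sum over the hook basis and the invariance of the triple product of change of basis coefficients, can be tested numerically. A sketch of our own (the explicit Gram-Schmidt rows below are our choice of $C_{a,i}$):

```python
import numpy as np

D = 5

# Orthonormal rows: row 0 spans the trivial direction, rows 1..D-1 the hook.
C = np.zeros((D, D))
C[0, :] = 1.0 / np.sqrt(D)
for a in range(1, D):
    C[a, :a] = 1.0 / np.sqrt(a * (a + 1))
    C[a, a] = -a / np.sqrt(a * (a + 1))

sigma = (4, 2, 0, 3, 1)
Pm = np.zeros((D, D))
for i in range(D):
    Pm[sigma[i], i] = 1.0

# Rep matrices in the E-basis are block diagonal: 1 (trivial) + hook block.
R = C @ Pm @ C.T
DH = R[1:, 1:]                            # hook block D_{ab}(sigma)

# sum_a E_a (x) E_a has component matrix delta_ab; diagonal invariance means
# DH delta DH^T = delta, which holds because DH is orthogonal.
assert np.allclose(DH @ np.eye(D - 1) @ DH.T, np.eye(D - 1))

# T_abc = sum_i C_{a,i} C_{b,i} C_{c,i} is an invariant 3-tensor of the hook:
T3 = np.einsum('ai,bi,ci->abc', C[1:], C[1:], C[1:])
assert np.allclose(np.einsum('ap,bq,cr,pqr->abc', DH, DH, DH, T3), T3)
```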

Similarly for $V_2$ we have corresponding vectors and Clebsch-Gordan coefficients
(2.50) where the index ranges from $1$ to $D(D-3)/2$, the dimension of $V_2$. We have the orthogonality property
(2.51) And for $V_3$:
(2.52) (2.53) Here the index runs over $1$ to $(D-1)(D-2)/2$.

The projector for the subspace of $V_H \otimes V_H$ transforming as $V_0$ under the diagonal $S_D$ is
(2.55) 
The projector for $V_H$ in $V_H \otimes V_H$ is
(2.56) $V_3$ is just the antisymmetric part of $V_H \otimes V_H$. $V_2$ is the orthogonal complement of $V_0 \oplus V_H$ inside the symmetric subspace of $V_H \otimes V_H$, which is invariant under the swap of the two factors (often denoted $\mathrm{Sym}^2(V_H)$):
(2.57) (2.58) The quadratic invariant corresponding to $V_2$ is
(2.59) The quadratic invariant corresponding to $V_3$ is similar. We just have to calculate
(2.60) 
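The symmetric/antisymmetric split of the tensor square of the hook can be checked numerically: the trace of a projector equals the dimension of its image. A sketch of our own, using the explicit hook projector $\delta_{ij} - \frac{1}{D}$:

```python
import numpy as np

D = 6
P = np.eye(D) - np.ones((D, D)) / D       # projector onto the hook space
PP = np.kron(P, P)                        # projector onto its tensor square

# Swap operator on the tensor square: S (u (x) v) = v (x) u
S = np.zeros((D * D, D * D))
for i in range(D):
    for j in range(D):
        S[i * D + j, j * D + i] = 1.0

sym = 0.5 * (np.eye(D * D) + S) @ PP      # symmetric part
asym = 0.5 * (np.eye(D * D) - S) @ PP     # antisymmetric part = V_3

# Traces of projectors are the stated dimensions:
assert np.isclose(np.trace(asym), (D - 1) * (D - 2) / 2)   # dim V_3
assert np.isclose(np.trace(sym), D * (D - 1) / 2)          # 1 + (D-1) + dim V_2
# so dim V_2 = D(D-3)/2 sits inside the symmetric part:
assert np.isclose(np.trace(sym) - 1 - (D - 1), D * (D - 3) / 2)
```

Since the swap operator commutes with the hook projector, both products above are themselves projectors, which is what makes the trace argument valid.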
The inner product
(2.61) is invariant under the action of $S_D$:
(2.62) 
The following is an important fact about invariants. Every irreducible representation of $S_D$, let us denote it by $V_\Lambda$, has the property that
(2.63)
contains the trivial irrep once. This invariant is formed by taking the sum over an orthonormal basis of $V_\Lambda$.
(2.64) (2.65) (2.66) (2.67) 
To summarize: the matrix variables $M_{ij}$ can be linearly transformed to the following variables, organised according to representations of the diagonal $S_D$.
Trivial rep: (2.68)
Hook rep: (2.69)
The rep $V_2$: (2.70)
The rep $V_3$: (2.71)
For convenience, we will also use simpler names
(2.72) (2.73) where we introduced labels to distinguish the two occurrences of the trivial irrep in the space spanned by the $M_{ij}$. We will also use
(2.74) (2.75) (2.76) where we introduced labels to distinguish the three occurrences of the hook irrep $V_H$ in the space spanned by the $M_{ij}$. For the multiplicity-free cases, we introduce
(2.77) (2.78) The $M_{ij}$ variables can be written as linear combinations of the representation theoretic variables. Rep-basis expansion of $M_{ij}$:
(2.85) In going from the first to the second line, we have used the fact that the transition from the natural representation to the trivial is given by simple constant coefficients. In the third line, we have used the Clebsch-Gordan coefficients for $V_H$, obeying the orthogonality
(2.87) For $V_0$, which is one-dimensional, we just have
(2.88) It is now useful to collect together the terms corresponding to each irrep
(2.91) Using the notation of (2.72), (2.74), (2.77), we write this as
(2.92) (2.93) (2.94) 
The discussion so far has included explicit bases for $V_0$ and $V_H$ inside $V_D$, which are easy to write down. A key object in the above discussion is the projector defined in (2.27). For the irreps $V_2$ and $V_3$ which appear in $V_H \otimes V_H$, we will not need to write down explicit bases. Although Clebsch-Gordan coefficients for $V_2$ and $V_3$ appear in some of the above formulae, we will only need some of their orthogonality properties rather than their explicit forms. The projectors for $V_2$ and $V_3$ in $V_H \otimes V_H$ can be written in terms of the $C_{a,i}$, and it is these projectors which play a role in the correlators we will be calculating.
2.2 Representation theoretic description of quadratic invariants
With the above background of facts from representation theory at hand, we can give a useful description of quadratic invariants. Quadratic invariant functions of $M$ form the invariant subspace of $\mathrm{Sym}^2(V_D \otimes V_D)$, since the $M_{ij}$ transform as $V_D \otimes V_D$.
(2.95)  
(2.96) 
So there are two copies of $V_0$. The space of symmetric quadratic combinations of these two copies contains three invariants.
(2.97)  
(2.98)  
(2.99) 
These are all easy to write in terms of the original matrix variables (using the formulae for variables in terms of