The Hermitian Jacobi process was introduced in 
as a multidimensional analogue of the real Jacobi process. It is a stationary matrix-valued process whose distribution converges weakly in the large-time limit to the matrix-variate Beta distribution describing the Jacobi unitary ensemble (hereafter JUE). The latter was used in
as a random matrix model for a Multi-Input-Multi-Output (MIMO) optical fiber channel. There, numerical evidence for the Shannon capacity and for the outage probability was supplied, supporting the efficiency of the matrix model. By a general fact about unitarily-invariant matrix models, this capacity may be expressed through the Christoffel-Darboux kernel for Jacobi polynomials, which is the one-point correlation function of the underlying eigenvalues process (). Yet another expression for it was recently obtained in , relying on a remarkable formula for the moments of the unitary Selberg weight (). The strategy employed in  was partially adapted in  to the Hermitian Jacobi process and led to a rather complicated formula for its moments, which did not allow the derivation of their large-size limits. The main ingredients used in  were the expansion of Newton power sums in the basis of Schur functions, the determinantal form of the symmetric Jacobi polynomials, and an integral form of the Cauchy-Binet formula (known as Andreief’s identity).
In this paper, we follow another approach to compute the moments of the Hermitian Jacobi process, based on a change of basis in the algebra of symmetric functions (with a fixed number of indeterminates). More precisely, we rather express the Newton power sums in the basis of symmetric Jacobi polynomials, since the latter are mutually orthogonal with respect to the unitary Selberg weight. Doing so leads to the determinant of an ‘almost triangular’ matrix, which we express in product form using row operations. After a careful rearrangement of the terms, we end up with a considerably simpler moment formula than the one obtained in  (Theorem 1). Indeed, the formula of  involves three nested and alternating sums together with a determinant whose entries are Beta functions; to the best of our knowledge, this determinant has no closed form except in very few special cases. The moment formula obtained in this paper contains only two nested and alternating sums whose summands are ratios of Gamma functions.
As a potential application of our formula, we propose the Hermitian Jacobi process as a dynamical analogue of the MIMO Jacobi channel studied in 
and compute its Shannon capacity for small power per antenna at the transmitter. Motivated by free probability theory, we also examine the case where the size of the Hermitian Jacobi process is larger than the moment order. In this respect, our moment formula may be written as a linear combination of terminating balanced hypergeometric series evaluated at unit argument (, Chapter 3).
The paper is organized as follows. In the next section, we briefly review the construction of the Hermitian Jacobi process and recall the semi-group density of its eigenvalues process (when it exists). In the third section, we state our main result in Theorem 1 below and prove it. For ease of reading, we proceed in several steps until we obtain the sought moment formula. In the last section, we discuss the applications of our main result to the MIMO optical fiber channel and to the large-size limit of the moments of the Hermitian Jacobi process.
2. A review of the Hermitian Jacobi process
For the sake of completeness, we recall the construction of the Hermitian Jacobi process and the expression of the semi-group density of its eigenvalues process. We refer the reader to  and  for further details.
Denote the group of complex unitary matrices. Let be two integers and let be a -valued stochastic process. Set :
are orthogonal projections. In other words, is the upper left corner of . Assume now that is the Brownian motion on
starting at the identity matrix. Then,
is called the Hermitian Jacobi process of size and of parameters where . As , where
is a Haar unitary matrix and the convergence holds in the weak sense. Moreover, it was proved in that the random matrix
has the same distribution as a matrix drawn from the JUE with suitable parameters.
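Although no simulation appears in the paper, the stationary picture above can be illustrated numerically. The sketch below (our own, not from the paper) samples a Haar unitary matrix via the standard QR-of-Ginibre construction with phase correction, extracts its upper-left corner, and inspects the spectrum of the resulting JUE-type matrix; all function names and the chosen sizes are illustrative assumptions.

```python
import numpy as np

def haar_unitary(d, rng):
    # QR of a complex Ginibre matrix, with the phase fix making Q Haar-distributed
    z = (rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diagonal(r) / np.abs(np.diagonal(r)))

def corner_spectrum(m, d, rng):
    # eigenvalues of W W* where W is the m x m upper-left corner of a d x d Haar unitary
    w = haar_unitary(d, rng)[:m, :m]
    return np.linalg.eigvalsh(w @ w.conj().T)

rng = np.random.default_rng(0)
samples = np.concatenate([corner_spectrum(3, 8, rng) for _ in range(2000)])
# the spectrum lies in [0, 1]; since E|W_ij|^2 = 1/d, the mean eigenvalue is m/d = 3/8
print(samples.min(), samples.max(), samples.mean())
```

The phase correction by the diagonal of R is what makes the QR output exactly Haar-distributed rather than merely unitary.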
For any , define the -th moment of by:
for fixed time and write simply . Since the matrix Jacobi process is Hermitian,
where is the eigenvalues process of and stands for the expectation of the underlying probability space. If
then the distribution of the eigenvalues process is absolutely continuous with respect to the Lebesgue measure on . Besides, its semi-group density is given by a bilinear generating series of symmetric Jacobi polynomials with Jack parameter equal to . More precisely, let
be a partition of length at most and let be the sequence of orthonormal Jacobi polynomials with respect to the beta weight:
These polynomials may be defined through the Gauss hypergeometric function as:
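For illustration, such one-variable Jacobi polynomials can be evaluated directly through the terminating Gauss hypergeometric series. The sketch below uses the common shift of the Jacobi polynomials to [0,1] and ignores the orthonormalizing constants, which depend on the paper's (elided) conventions; the parameter names are ours.

```python
import numpy as np
from scipy.special import hyp2f1
from scipy.integrate import quad

def jacobi01(n, a, b, x):
    # degree-n Jacobi polynomial on [0,1] via the terminating Gauss 2F1;
    # orthogonal for the beta weight x^a (1-x)^b (normalization omitted)
    return hyp2f1(-n, n + a + b + 1, a + 1, x)

def inner(m, n, a=1.5, b=0.5):
    integrand = lambda x: (jacobi01(m, a, b, x) * jacobi01(n, a, b, x)
                           * x**a * (1 - x)**b)
    return quad(integrand, 0, 1)[0]

print(inner(2, 3), inner(2, 2))  # off-diagonal ~ 0, diagonal > 0
```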
Then the orthonormal symmetric Jacobi polynomial corresponding to is defined by:
if the coordinates are pairwise distinct, and by L’Hôpital’s rule otherwise. An expansion of these polynomials in the basis of Schur functions may be found in . These polynomials are mutually orthonormal with respect to the unitary Selberg weight:
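The determinantal definition for pairwise distinct coordinates can be sketched as a bialternant-type ratio: a determinant of one-variable Jacobi polynomials at shifted degrees divided by the Vandermonde determinant. The code below is our own illustrative version with normalization constants omitted; the antisymmetry of numerator and denominator cancels, so the ratio is a symmetric function.

```python
import numpy as np
from scipy.special import hyp2f1

def jacobi01(n, a, b, x):
    # one-variable Jacobi polynomial on [0,1] (unnormalized), via Gauss 2F1
    return hyp2f1(-n, n + a + b + 1, a + 1, x)

def sym_jacobi(lam, a, b, x):
    # ratio det[jacobi01(lam_j + m - j)(x_i)] / Vandermonde(x), distinct x_i assumed
    m = len(x)
    lam = list(lam) + [0] * (m - len(lam))
    num = np.array([[jacobi01(lam[j] + m - 1 - j, a, b, xi) for j in range(m)]
                    for xi in x])
    vdm = np.prod([x[i] - x[j] for i in range(m) for j in range(i + 1, m)])
    return np.linalg.det(num) / vdm

# symmetry check: the value is invariant under permuting the coordinates
print(sym_jacobi((2, 1), 1.0, 2.0, [0.2, 0.5, 0.7]),
      sym_jacobi((2, 1), 1.0, 2.0, [0.7, 0.2, 0.5]))
```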
in the sense that two elements corresponding to different partitions are orthogonal and the norm of each equals one (see e.g. , Theorem 3.1). Moreover, the semi-group density of the eigenvalues process of admits the following absolutely-convergent expansion ():
If denotes the sequence of orthogonal Jacobi polynomials:
then may be written as:
where is the squared -norm of the one-variable Jacobi polynomial and
Indeed, Andreief’s identity (, p. 37) shows that is an orthogonal set with respect to the unitary Selberg weight and that the squared -norm of with respect to is nothing else but:
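Recall that Andreief's identity states that the n-fold integral of a product of two determinants det[f_j(x_i)] det[g_j(x_i)] equals n! times the determinant of the matrix of pairwise integrals of f_i g_j. It can be checked exactly over a discrete measure, where both sides are finite sums; the toy setup below is ours.

```python
import math
from itertools import product
import numpy as np

rng = np.random.default_rng(1)
n, s = 3, 5               # n-fold 'integral' over a discrete measure with s atoms
w = rng.random(s)         # positive weights of the measure
F = rng.random((n, s))    # F[j, k] = f_j evaluated at the k-th atom
G = rng.random((n, s))    # G[j, k] = g_j evaluated at the k-th atom

# left-hand side: sum over all n-tuples of atoms of det[f_j(x_i)] det[g_j(x_i)] d mu
lhs = 0.0
for idx in product(range(s), repeat=n):
    cols = list(idx)
    lhs += (np.linalg.det(F[:, cols].T) * np.linalg.det(G[:, cols].T)
            * np.prod(w[cols]))

# right-hand side: n! times the determinant of the pairwise 'integrals' of f_i g_j
rhs = math.factorial(n) * np.linalg.det(F @ np.diag(w) @ G.T)
print(lhs, rhs)  # agree
```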
On the other hand, the polynomial set may be mapped to the set of symmetric Jacobi polynomials considered in  by the affine transformation:
More precisely, one has:
Moreover, the following mirror property is satisfied by :
and is inherited from their one-variable analogues. Indeed, one checks this property directly when has distinct coordinates using the determinantal form of , then extends it by continuity. In particular:
But Proposition 7.1 in  gives:
As a result, we get the special value:
which will be used in our forthcoming computations below.
3. Main result: The moment formula
Let and recall that a hook of weight is a partition of the form:
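Concretely, the hooks of weight n are the partitions (n - k, 1^k) for k = 0, …, n - 1; a minimal generator (our own helper, for illustration):

```python
def hooks(n):
    # all hook partitions of weight n: (n - k, 1, ..., 1) with k ones, 0 <= k <= n - 1
    return [(n - k,) + (1,) * k for k in range(n)]

print(hooks(4))  # [(4,), (3, 1), (2, 1, 1), (1, 1, 1, 1)]
```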
For partitions , recall the order induced by the containment of their Young diagrams: if and only if for any , where the length is the number of non-zero components of . On the other hand, the -th moment of the stationary distribution is given by the normalized integral (as for fixed , we omit the dependence of the stationary moments on ):
is the Selberg integral. The explicit expression of may be read off Corollary 2.3 in . With these notations, our main result is stated as follows:
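In the unitary case (Jack parameter 1, i.e. γ = 1 in Selberg's formula), the Selberg integral has the classical closed form as a product of Gamma functions. The sketch below (our own, with our parameter names) checks it against direct quadrature for n = 2.

```python
import math
from scipy.integrate import dblquad

def selberg_unitary(n, a, b):
    # Selberg's product formula at gamma = 1:
    # prod_{j=0}^{n-1} Gamma(a+j) Gamma(b+j) Gamma(2+j) / Gamma(a+b+n+j-1)
    val = 1.0
    for j in range(n):
        val *= (math.gamma(a + j) * math.gamma(b + j) * math.gamma(2 + j)
                / math.gamma(a + b + n + j - 1))
    return val

a, b = 2.0, 3.0
direct, _ = dblquad(
    lambda y, x: x**(a - 1) * (1 - x)**(b - 1) * y**(a - 1) * (1 - y)**(b - 1)
                 * (x - y)**2,
    0, 1, 0, 1)
print(direct, selberg_unitary(2, a, b))  # both equal 1/1800
```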
The -th moment of the Hermitian Jacobi process is given by:
The rest of this section is devoted to the proof of this result. Since the computations are lengthy, we proceed in several steps, simplifying at each step the moment expression obtained in the previous one.
3.1. The basis change
We start by performing the change of basis from Schur polynomials to symmetric Jacobi polynomials. Doing so leads to the following formula for :
For any , we have:
Recall the -th Newton power sum ():
as well as the Schur polynomials associated to a partition of length ():
These symmetric functions are related by the representation-theoretical formula (see e.g. , p. 48):
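The representation-theoretical formula referred to here is presumably the classical expansion of a Newton power sum over hooks, p_r = Σ_{k=0}^{r-1} (-1)^k s_{(r-k, 1^k)}. It can be checked numerically via the bialternant formula for Schur polynomials; the code below is an illustrative verification of ours, not the paper's computation.

```python
import numpy as np

def schur(lam, x):
    # bialternant formula: s_lam(x) = det[x_i^(lam_j + m - j)] / det[x_i^(m - j)]
    m = len(x)
    lam = list(lam) + [0] * (m - len(lam))
    num = np.array([[xi ** (lam[j] + m - 1 - j) for j in range(m)] for xi in x])
    den = np.array([[xi ** (m - 1 - j) for j in range(m)] for xi in x])
    return np.linalg.det(num) / np.linalg.det(den)

x = [0.3, 0.7, 1.1, 1.9]
r = 3
power_sum = sum(xi ** r for xi in x)                     # p_r(x)
hook_sum = sum((-1) ** k * schur((r - k,) + (1,) * k, x)  # alternating sum over hooks
               for k in range(r))
print(power_sum, hook_sum)  # agree
```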
In order to integrate the Newton sum against the semi-group density (1), we shall further expand the Schur polynomials in the basis of symmetric Jacobi polynomials . To this end, we appeal to the inversion formula ():
together with Proposition 3.1 in  (change of basis formula). Doing so yields:
where we set:
Integrating and applying Fubini's theorem, we are led to:
where the last equality follows again from Andreief’s identity. Keeping in mind the series expansion (1), the stated moment formula follows. ∎
3.2. An almost upper-triangular matrix
For the sake of simplicity, we introduce the following notation:
Since for then provided that . Similarly, implies the same conclusion when . These elementary observations show that the matrix above is ‘almost upper-triangular’.
For any hook of weight and length , and any , set:
Then for and:
If then is upper triangular.
Otherwise, for or while
Take . Then, while except when and . Consequently, in the following three cases:
In particular, is upper triangular if or since then . Otherwise, if then vanishes except for in which case
3.3. Further simplifications
where an empty determinant or product equals one. This expression can be considerably simplified into the one below, where we prove that the factors corresponding to the indices cancel. To this end, we find it convenient to single out the contribution of the empty partition, which corresponds to the stationary regime .
The moment formula (6) reduces to:
where for a non-empty hook , we set:
We only consider hooks such that and we proceed in three steps. In the first one, we work out the product:
Since for , then
equals one. In the second step, we split the product
The first product is expressed as:
As to the second, it splits in turn into: