# An efficient algorithm for sampling from sin^k(x) for generating random correlation matrices

In this note, we develop a novel algorithm for generating random numbers from a distribution with probability density function proportional to sin^k(x), x ∈ (0,π) and k ≥ 1. Our algorithm is highly efficient and is based on rejection sampling where the envelope distribution is an appropriately chosen beta distribution. An example application illustrating how the new algorithm can be used to generate random correlation matrices is discussed.


## 1 Introduction

Consider a positive random variable $X$ with probability density function

$$f_X(x) = c_k \sin^k(x), \qquad x \in (0, \pi),\ k \geq 1, \tag{1}$$

where the normalization constant is given by

$$c_k = \frac{\Gamma\!\left(\frac{k}{2}+1\right)}{\sqrt{\pi}\,\Gamma\!\left(\frac{k}{2}+\frac{1}{2}\right)}.$$

This is a symmetric continuous probability density function whose mean, mode and median all equal $\pi/2$; that is,

$$\mathbb{E}[X] = \pi/2, \qquad f_X(\pi/2) = c_k,$$

for all $k \geq 1$. Generating random numbers from this distribution is not straightforward, as the corresponding cumulative distribution function, although available in closed form, is defined recursively and requires $O(k)$ operations to evaluate. The nature of the cumulative distribution function makes any procedure based on inverse transform sampling computationally inefficient, especially for large $k$. Furthermore, an alternative approach based on numerical integration is not recommended due to numerical issues with the probability density function when $k$ is large.
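For concreteness, the normalization constant is easy to evaluate and check numerically. The sketch below (function names are ours, not from the note) computes $c_k$ with the standard-library gamma function and verifies by a midpoint rule that $c_k \sin^k(x)$ integrates to one over $(0,\pi)$:

```python
import math

def c_k(k):
    """Normalisation constant of f_X(x) = c_k * sin(x)**k on (0, pi)."""
    return math.gamma(k / 2 + 1) / (math.sqrt(math.pi) * math.gamma(k / 2 + 0.5))

def density_integral(k, n=100000):
    """Midpoint-rule integral of c_k * sin(x)**k over (0, pi); should be ~1."""
    h = math.pi / n
    return h * sum(c_k(k) * math.sin((i + 0.5) * h) ** k for i in range(n))
```

For example, $c_1 = 1/2$, recovering the familiar density $\sin(x)/2$.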

In this note, we develop a novel algorithm (see Section 2) for generating random numbers from the distribution defined in (1). Our algorithm is based on rejection sampling [1] (pp. 50–53), where the envelope distribution is a particular beta distribution. We compute the exact unconditional acceptance probability of the proposed algorithm and show that the maximum expected average number of iterations per sample is less than $1.12$ for all $k \geq 1$, which highlights the efficiency of the proposed algorithm. Lastly, we use our novel rejection sampler to implement the algorithm for generating high dimensional correlation matrices based on the hyperspherical parameterization (see Section 3) proposed by Pourahmadi and Wang [2]. Our implementation of the correlation matrix sampling algorithm for the Matlab and R numerical computing platforms is available for download from Matlab Central (File ID: 68810) and CRAN (the randcorr package).

## 2 Rejection sampling algorithm

The proposed algorithm for sampling from density (1) utilises rejection sampling where the envelope distribution is an appropriately scaled symmetric distribution with probability density function

$$g_Y(x) = \frac{x^k(\pi - x)^k}{B(k+1, k+1)\,\pi^{2k+1}}, \qquad x \in (0, \pi),\ k \geq 1, \tag{2}$$

where $B(\cdot,\cdot)$ is the Euler beta function. Theorem 1 proves that $g_Y$ is an envelope density for this problem and gives the expected average sampling efficiency for all $k$. The proof of the theorem is given in Appendix A.

###### Theorem 1.

Let $f_X$ denote the probability density function given in (1) and let $Y$ denote a random variable with probability density function (2). Then,

$$f_X(x) \leq M_k\, g_Y(x), \qquad x \in (0, \pi),\ k \geq 1, \tag{3}$$

where $M_k$ is the upper bound on the likelihood ratio $f_X(x)/g_Y(x)$ and is given by

$$M_k = \frac{f_X(\pi/2)}{g_Y(\pi/2)} = \frac{\sqrt{\pi}\,2^{k-1}\,\Gamma\!\left(\frac{k}{2}+1\right)^2}{\Gamma\!\left(k+\frac{3}{2}\right)}. \tag{4}$$

Based on Theorem 1, our algorithm to generate a random variable $X$ with probability density function (1) is given below:

1. Generate $X \sim g_Y$; that is, set $X = \pi B$ where $B \sim \mathrm{Beta}(k+1, k+1)$.

2. Generate $U$ from the uniform distribution $\mathcal{U}(0, 1)$.

3. Accept $X$ if
   $$\frac{1}{k}\log U \leq \log\!\left(\frac{\pi^2 \sin X}{4X(\pi - X)}\right);$$
   otherwise, return to step 1.
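The three steps above can be sketched in Python using only the standard library (`sample_sin_k` is our name for the routine, not one used in the note):

```python
import math
import random

def sample_sin_k(k, rng=random):
    """Rejection sampler for the density proportional to sin(x)**k on (0, pi),
    using a pi-scaled Beta(k+1, k+1) envelope."""
    while True:
        # Step 1: X ~ g_Y, i.e. X = pi * B with B ~ Beta(k+1, k+1).
        x = math.pi * rng.betavariate(k + 1, k + 1)
        # Step 2: U ~ Uniform(0, 1).
        u = rng.random()
        # Step 3: accept X if (1/k) log U <= log(pi^2 sin(X) / (4 X (pi - X))).
        if math.log(u) / k <= math.log(
                math.pi ** 2 * math.sin(x) / (4.0 * x * (math.pi - x))):
            return x
```

The `while` loop embodies the "otherwise, return to step 1" branch; by the analysis below, it runs fewer than $1.12$ times per sample on average.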

The average number of iterations required to obtain a sample from the distribution (1) is given by (4). This is a strictly increasing function of $k$; the first few values are

$$M_1 = \frac{\pi}{3}, \quad M_2 = \frac{16}{15}, \quad M_3 = \frac{12\pi}{35}, \quad M_4 = \frac{1024}{945}.$$

The limiting value of $M_k$ is given by

$$\lim_{k \to \infty} M_k = \frac{\pi}{2\sqrt{2}} \approx 1.11,$$

which emphasises the efficiency of the proposed algorithm. In particular, for any value of $k$, the expected average number of iterations per sample is less than $1.12$.
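These values are straightforward to reproduce; the sketch below (our own naming) evaluates (4) directly with the standard-library gamma function:

```python
import math

def m_k(k):
    """Expected number of envelope draws per accepted sample, equation (4)."""
    return (math.sqrt(math.pi) * 2 ** (k - 1)
            * math.gamma(k / 2 + 1) ** 2 / math.gamma(k + 1.5))
```

Note that direct evaluation overflows for large $k$ (the gamma factors grow quickly); a log-space version using `math.lgamma` is preferable beyond roughly $k = 150$.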

## 3 Application

The proposed rejection sampling algorithm is suitable for applications requiring repeated sampling from density (1), especially for large $k$. An example application is the problem of random sampling of a $p \times p$ correlation matrix $\mathbf{C}$, which may be high dimensional. For this problem, we implement the procedure outlined in [2], which consists of two steps: (i) generate $p(p-1)/2$ angles $\theta_{i,j}$ ($1 \leq j < i \leq p$) from the distribution (1), where $k = p - j$ for angle $\theta_{i,j}$; and (ii) compute a lower triangular matrix $\mathbf{B}$ (equation (1) in [2]) from the angles and then the corresponding correlation matrix $\mathbf{C} = \mathbf{B}\mathbf{B}^\top$ (see [2] for details).

The procedure of Pourahmadi and Wang requires a computationally efficient method for sampling the angles $\theta_{i,j}$. This is especially true for high dimensional correlation matrices, where $p$ (and therefore $k$) is large and the number of random variates required is of the order $O(p^2)$. Using our rejection sampling algorithm, we have implemented the Pourahmadi and Wang sampler in the Matlab and R numerical computing environments. Our program can generate a random $100 \times 100$ correlation matrix (i.e., 4,950 samples of $X$) in 0.02s, and a random $1000 \times 1000$ correlation matrix (i.e., 499,500 samples of $X$) in 0.5s on a commodity laptop. In contrast, the inverse transform sampling approach implemented in [2], running on comparable computing hardware, requires substantially longer to generate correlation matrices of the same two sizes.
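A minimal Python sketch of the two-step procedure is given below. This is our own illustration, not the authors' Matlab/R code; in particular, the exponent rule $k = p - j$ for angle $\theta_{i,j}$ and the lower-triangular construction of $\mathbf{B}$ are our reading of [2] and should be checked against equation (1) of that paper:

```python
import math
import random

def sample_sin_k(k, rng):
    # Rejection sampler for density proportional to sin(x)**k (Section 2).
    while True:
        x = math.pi * rng.betavariate(k + 1, k + 1)
        if math.log(rng.random()) / k <= math.log(
                math.pi ** 2 * math.sin(x) / (4.0 * x * (math.pi - x))):
            return x

def random_corr(p, rng=None):
    """Random p x p correlation matrix via the hyperspherical
    parameterization of [2] (assumed exponent rule k = p - j)."""
    rng = rng or random.Random()
    # Step (i): p(p-1)/2 angles; 0-indexed column j corresponds to k = p - 1 - j.
    theta = [[sample_sin_k(p - 1 - j, rng) for j in range(i)] for i in range(p)]
    # Step (ii): lower-triangular B whose rows are unit vectors in
    # spherical coordinates: B[i][j] = cos(theta_ij) * prod of earlier sines.
    B = [[0.0] * p for _ in range(p)]
    for i in range(p):
        prod = 1.0
        for j in range(i):
            B[i][j] = math.cos(theta[i][j]) * prod
            prod *= math.sin(theta[i][j])
        B[i][i] = prod  # product of sines; each row has unit norm
    # C = B B^T has unit diagonal by construction.
    return [[sum(B[i][m] * B[j][m] for m in range(p)) for j in range(p)]
            for i in range(p)]
```

Because each row of $\mathbf{B}$ has unit norm, $\mathbf{C} = \mathbf{B}\mathbf{B}^\top$ is symmetric positive semidefinite with unit diagonal, i.e., a valid correlation matrix.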

## Appendix A Proof of Theorem 1

###### Proof.

We wish to show that $f_X(x) \leq M_k\, g_Y(x)$ for all $x \in (0, \pi)$ and $k \geq 1$. First, note that this inequality is equivalent to each of the following, obtained by cancelling common positive factors (the second simplification uses $B(k+1,k+1) = \Gamma(k+1)^2/\Gamma(2k+2)$ together with the Legendre duplication formula):

$$\begin{aligned} f_X(x) &\leq M_k\, g_Y(x) \\ \frac{\Gamma\!\left(\frac{k}{2}+1\right)\sin^k(x)}{\sqrt{\pi}\,\Gamma\!\left(\frac{k}{2}+\frac{1}{2}\right)} &\leq \left(\frac{\sqrt{\pi}\,2^{k-1}\,\Gamma\!\left(\frac{k}{2}+1\right)^2}{\Gamma\!\left(k+\frac{3}{2}\right)}\right) \frac{x^k(\pi-x)^k}{B(k+1,k+1)\,\pi^{2k+1}} \\ \sin^k(x) &\leq 4^k \pi^{-2k}\left((\pi - x)x\right)^k \\ \sin(x) &\leq 4\pi^{-2}(\pi - x)x. \end{aligned}$$

The function $p_3(x) = 4\pi^{-2}x(\pi - x)$ is an interpolation polynomial at the nodes $x_0 = 0$, $x_1 = x_2 = \pi/2$ and $x_3 = \pi$ for the function $\sin(x)$, $x \in [0, \pi]$ [3]. The interpolation error for the polynomial $p_3(x)$ and some $\xi \in (0, \pi)$ is

$$\sin(x) - p_3(x) = \frac{\sin(\xi)}{24}\prod_{j=0}^{3}(x - x_j) = \frac{x}{24}\left(x - \frac{\pi}{2}\right)^2 (x - \pi)\sin(\xi) \leq 0,$$

as the only term that is negative is $(x - \pi)$. The inequality is strict everywhere except at the boundary points ($x = 0$ and $x = \pi$) and at the common maximum of $\sin(x)$ and $p_3(x)$ ($x = \pi/2$). ∎
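As a quick sanity check (not a substitute for the proof), the key inequality $\sin(x) \leq 4\pi^{-2}x(\pi - x)$ can be confirmed numerically on a fine grid over $(0, \pi)$:

```python
import math

# Minimum gap between the quadratic bound and sin(x) on a fine grid.
# It should be non-negative, vanishing only near x = 0, pi/2 and pi.
n = 10000
gap = min((4.0 / math.pi ** 2) * x * (math.pi - x) - math.sin(x)
          for x in (i * math.pi / n for i in range(1, n)))
```

A strictly positive gap away from the three touching points is what makes the envelope tight at $x = \pi/2$ but nowhere violated.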

## References

• [1] C. P. Robert, G. Casella, Monte Carlo Statistical Methods, Springer, 2004.
• [2] M. Pourahmadi, X. Wang, Distribution of random correlation matrices: Hyperspherical parameterization of the Cholesky factor, Statistics & Probability Letters 106 (2015) 5–12.
• [3] M. R. (user:42969), How to prove for all ?, Mathematics, https://math.stackexchange.com/q/2030069 (2016).