# A new Gini correlation between quantitative and qualitative variables

We propose a new Gini correlation to measure dependence between a categorical variable and a numerical variable. Analogous to Pearson's R^2 in the ANOVA model, the Gini correlation is interpreted as the ratio of the between-group variation to the total variation, but it characterizes independence: zero Gini correlation mutually implies independence. Although closely related to the distance correlation, the Gini correlation has a simpler formulation that exploits the nature of the categorical variable. As a result, the proposed Gini correlation has a lower computational cost than the distance correlation, and inference is more straightforward. Simulations and real applications are conducted to demonstrate these advantages.


## 1 Introduction

Measuring the strength of association or dependence between two variables or two sets of variables is of vital importance in many research fields. Various correlation notions have been developed and studied [15, 20]. The widely used Pearson product-moment correlation measures linear relationships. Rank-based or copula-based correlations such as Spearman's [33] and Kendall's [16] explore monotonic relationships. The Gini correlation [26, 28] is based on the covariance of one variable and the rank of the other. A symmetric version of the Gini correlation was proposed by Sang, Dang and Sang (2016) [24]. Other robust correlation measures are surveyed in [6, 31] and explored in detail in [32]. The distance correlation proposed by Székely and Rizzo (2009) [37] characterizes dependence for multivariate data. Those correlations, however, are only defined for numerical and/or ordinal variables. They cannot be applied to a categorical variable.

If both variables are nominal, Cramér's V [3] and Tschuprow's T [41], which are based on the chi-squared test statistic, can be used to measure their association. Grounded in information theory, mutual information is popular due to its easy computation for two discrete variables. However, the mutual information correlation [23, 11] loses this computational attractiveness when measuring dependence between categorical and numerical variables, especially when the numerical variable is high-dimensional.

For this case, two approaches are typically used for defining association measures. The first one treats the continuous numerical variable X as the response and the categorical variable Y as the predictor; the Pearson R^2 of the analysis of variance (ANOVA), or its multivariate counterpart in MANOVA, then serves as the measure of correlation between them. The second approach considers Y to be the response and X the explanatory variable(s); a pseudo-R^2 of the logistic or another generalized regression model serves as a measure of correlation [42]. If X and Y are independent, those correlation parameters are zero. However, the converse is not true in general: those correlations do not characterize independence. In this paper, we propose a so-called Gini distance correlation (denoted as $\rho_g$) for measuring dependence between categorical and numerical variables.

The contributions of this paper are as follows.

• A new dependence measure between categorical and numerical variables. The proposed Gini correlation characterizes independence: zero correlation mutually implies independence. It also has a nice interpretation as the ratio of the between-group Gini variation to the total Gini variation.

• Limiting distributions of the sample Gini correlation, obtained under both the independence and the dependence cases.

• Extension of the distance correlation for dependence measure between categorical and numerical variables.

• Comparison of the Gini correlation and the distance correlation. Compared with the distance correlation, the Gini correlation has a simpler form, leading to simpler computation and easier inference.

The remainder of the paper is organized as follows. Section 2 begins with a motivation of the proposed correlation by considering a dependence measure between a one-dimensional numerical variable and a categorical variable. The connection to the Gini mean difference leads to a natural generalization and a nice interpretation. The properties of the generalized Gini correlation are studied in Section 2.3. The relationship to the distance correlation is treated in Section 2.4, and three examples are given in Section 2.5. Section 3 is devoted to inference for the Gini correlation, where the asymptotic behavior of the sample Gini correlation is explored. In Section 4, we conduct experimental studies by simulation and real data applications to demonstrate the advantages of the Gini correlation over the distance correlation. We conclude and discuss future work in Section 5.

## 2 Categorical Gini Correlation

### 2.1 Motivation

We consider measuring the association between a numerical variable X in ℝ and a categorical variable Y. Suppose that Y takes values $L_1, L_2, \dots, L_K$. Assume the categorical distribution of Y is $P(Y = L_k) = p_k > 0$ for $k = 1, \dots, K$, and the conditional distribution of X given $Y = L_k$ is $F_k$. Then the joint distribution of (X, Y) is determined by the $p_k$ and $F_k$, and the marginal distribution of X is

$$F(x) = \sum_{k=1}^K p_k F_k(x).$$

When the conditional distribution of X given Y is the same as the marginal distribution of X, X and Y are independent. In that case, we say there is no correlation between them. However, when they are dependent, i.e., $F_k \ne F$ for some k, we would like to measure this dependence. Intuitively, the larger the difference between the marginal and conditional distributions is, the stronger the association should be. With that consideration, a natural correlation measure shall be proportional to

$$D := E\int_{\mathbb{R}} \big(F(x|Y) - F(x)\big)^2\,dx = \sum_{k=1}^K p_k \int_{\mathbb{R}} \big(F_k(x) - F(x)\big)^2\,dx, \qquad (1)$$

the expectation of the integrated squared difference between the conditional and marginal distribution functions, provided D is finite.

Clearly, the corresponding correlation is non-negative, just like Pearson-type correlations. It, however, has the advantage that the correlation is zero if and only if X and Y are independent, while for Pearson-type correlations, zero does not mutually imply independence.

Next, we need to find the standardization term so that the corresponding correlation has a range of [0, 1], a desired property for a dependence measure [21]. In other words, under some conditions on the $F_k$ and $p_k$, we want to obtain the maximum of D among all $F_k$ and $p_k$, which can be formulated as the following optimization problem.

$$\max_{F_k,\, p_k} D = \max_{F_k,\, p_k} \sum_{k=1}^K p_k \int_{\mathbb{R}} \big(F_k(x) - F(x)\big)^2\,dx, \qquad (2)$$

$$\text{subject to } p_k > 0,\quad \sum_{k=1}^K p_k = 1,\quad \sum_{k=1}^K p_k F_k(x) = F(x),\quad \text{and } F_k \text{ is a distribution function for } k = 1, \dots, K.$$

Note that $\sum_{k=1}^K p_k (F_k(x) - F(x))^2 = \sum_{k=1}^K p_k F_k^2(x) - F^2(x)$ for any x. Since each $F_k$ is a cumulative distribution function taking values in [0, 1], we have $D \le \int_{\mathbb{R}} (F(x) - F^2(x))\,dx$. The equality holds if and only if each $F_k$ is a single point mass distribution. In that case, X is a discrete distribution with at most K distinct values almost surely. Assuming that $0 < \int_{\mathbb{R}} (F(x) - F^2(x))\,dx < \infty$, we propose the correlation between X and Y as

$$\rho(X, Y) = \frac{\sum_{k=1}^K p_k \int_{\mathbb{R}} \big(F_k(x) - F(x)\big)^2\,dx}{\int_{\mathbb{R}} \big(F(x) - F^2(x)\big)\,dx}. \qquad (3)$$

From the discussion above, we have the following immediate results.

1. $0 \le \rho(X, Y) \le 1$.

2. $\rho(X, Y) = 0$ if and only if X and Y are independent.

3. $\rho(X, Y) = 1$ if and only if each $F_k$ is a single point mass distribution.

The assumption $\int_{\mathbb{R}} (F - F^2)\,dx > 0$ implies that F is not a point mass distribution and hence X is non-degenerate. The assumption $\int_{\mathbb{R}} (F - F^2)\,dx < \infty$ means that the Gini mean difference of F is finite, as we will see in the next subsection. Further, $\rho(X, Y)$ can be written as

$$\rho(X, Y) = 1 - \frac{2\sum_{k=1}^K p_k \int_{\mathbb{R}} \big(F_k(x) - F_k^2(x)\big)\,dx}{2\int_{\mathbb{R}} \big(F(x) - F^2(x)\big)\,dx}. \qquad (4)$$

This formulation provides a Gini mean difference representation of the proposed correlation.
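As a quick numerical illustration (our own sketch, not part of the paper's development), the ratio in (3) can be estimated by plugging in empirical CDFs. Since empirical CDFs are step functions, both integrals reduce to finite sums over the gaps between consecutive order statistics; the function name `gini_correlation_1d` is ours.

```python
import numpy as np

def gini_correlation_1d(x, y):
    """Plug-in estimate of rho(X, Y) in Eq. (3): the ratio of
    sum_k p_k * int (F_k - F)^2 dx  to  int (F - F^2) dx, with all CDFs
    replaced by empirical (step) CDFs and the integrals computed exactly
    over the intervals between consecutive distinct sample values.
    Assumes x contains at least two distinct values."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y)
    grid = np.sort(np.unique(x))          # breakpoints of every step CDF
    widths = np.diff(grid)                # lengths of [t_i, t_{i+1})
    t = grid[:-1]                         # CDFs are constant on each interval
    F = np.searchsorted(np.sort(x), t, side="right") / len(x)
    num = 0.0
    for level in np.unique(y):
        xk = np.sort(x[y == level])
        pk = len(xk) / len(x)
        Fk = np.searchsorted(xk, t, side="right") / len(xk)
        num += pk * np.sum((Fk - F) ** 2 * widths)
    den = np.sum((F - F ** 2) * widths)
    return num / den
```

For instance, for the toy sample x = (0, 1, 2, 3) with labels (a, a, b, b), the numerator is 0.375 and the denominator 0.625, giving a plug-in ratio of 0.6.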

### 2.2 Gini distance representation

Gini mean difference (GMD) was introduced as an alternative measure of variability to the usual standard deviation ([12], [5], [43]). Let X and X′ be independent random variables from a distribution F with a finite first moment in ℝ. The GMD of F is

$$\Delta = \Delta(X) = \Delta(F) = E|X - X'|, \qquad (5)$$

the expected distance between two independent random variables. Dorfman (1979) [7] proved that for non-negative random variables,

$$\Delta = 2\int F(x)\big(1 - F(x)\big)\,dx. \qquad (6)$$

The proof is easily extended to any random variable with a finite first moment. Note that (6) also holds for discrete random variables. Hence, we can write the correlation of (4) as

$$\rho(X, Y) = 1 - \frac{\sum_{k=1}^K p_k \Delta_k}{\Delta} = \frac{\Delta - \sum_{k=1}^K p_k \Delta_k}{\Delta}, \qquad (7)$$

where $\Delta_k$ is the Gini mean difference (GMD) of $F_k$ and $\Delta$ is the GMD of F. We call it the Gini correlation and denote it as $\mathrm{gCor}(X, Y)$ or $\rho_g(X, Y)$.

The representation (7) allows another inspiring interpretation: $\sum_{k=1}^K p_k \Delta_k$, the weighted average of the group Gini mean differences, is a measure of within-group variation, and $\Delta - \sum_{k=1}^K p_k \Delta_k$ is the corresponding between-group variation. The proposed correlation is the ratio of the between-group Gini variation to the total Gini variation, analogous to the Pearson correlation in ANOVA (Analysis of Variance), whose square is defined to be the ratio of the between-group variance to the total variance. Denote $\mu_k$, $\sigma_k^2$ and $\mu$, $\sigma^2$ as the means and variances of $F_k$ and F, respectively. The variance of X can be partitioned into the within variation and the between variation as below,

$$\sigma^2 = \mathrm{Var}(X) = E\big[E(X^2|Y)\big] - \big(E[E(X|Y)]\big)^2 = \sum_{k=1}^K p_k(\sigma_k^2 + \mu_k^2) - \mu^2 = \sum_{k=1}^K p_k \sigma_k^2 + \Big(\sum_{k=1}^K p_k \mu_k^2 - \mu^2\Big).$$

And the squared Pearson correlation, denoted as $\rho_p^2$, is

$$\rho_p^2(X, Y) = 1 - \frac{\sum_{k=1}^K p_k \sigma_k^2}{\sigma^2} = \frac{\sum_{k=1}^K p_k \mu_k^2 - \mu^2}{\sigma^2}.$$
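The partition above is an algebraic identity (the law of total variance), so it can be sanity-checked on any grouped sample. The sketch below is our own; the toy data set is arbitrary, and population (ddof = 0) variances are used throughout.

```python
import numpy as np

# Numerical check of the variance partition
#   sigma^2 = sum_k p_k sigma_k^2 + (sum_k p_k mu_k^2 - mu^2)
# on a small grouped sample.
x = np.array([1.0, 2.0, 4.0, 8.0, 3.0, 5.0])
y = np.array(["a", "a", "a", "b", "b", "b"])

total_var = np.var(x)                       # sigma^2 (population variance)
mu = np.mean(x)
within = between = 0.0
for level in np.unique(y):
    xk = x[y == level]
    pk = len(xk) / len(x)
    within += pk * np.var(xk)               # sum_k p_k sigma_k^2
    between += pk * np.mean(xk) ** 2        # accumulates sum_k p_k mu_k^2
between -= mu ** 2                          # ... minus mu^2
assert np.isclose(total_var, within + between)
```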

Let $(X_k, X_k')$, $k = 1, \dots, K$, be independent pairs of variables drawn independently from $F_k$, and let $(X, X')$ be an independent pair from F. It is easy to derive that

$$\Delta = E|X - X'| = E\big[E(|X - X'| \mid Y, Y')\big] = \sum_{k=1}^K p_k^2 \Delta_k + 2\sum_{1 \le k < l \le K} p_k p_l \Delta_{kl}, \qquad (8)$$

where $\Delta_{kl} = E|X_k - X_l'|$ for $k \ne l$. Then the between-group Gini variation, denoted as the Gini distance covariance between X and Y, is

$$\mathrm{gCov}(X, Y) = \Delta - \sum_{k=1}^K p_k \Delta_k = 2\sum_{1 \le k < l \le K} p_k p_l \Delta_{kl} - \sum_{k=1}^K p_k(1 - p_k)\Delta_k, \qquad (9)$$

and the Gini distance correlation between X and Y is

$$\mathrm{gCor}(X, Y) = \rho_g(X, Y) = \frac{\mathrm{gCov}(X, Y)}{\Delta(X)}. \qquad (10)$$

The total Gini variation is thus partitioned into the within and the between Gini variation, and the proposed Gini correlation is the ratio of the between-group variation to the total variation. Frick et al. (2006) [10] considered another decomposition of the Gini variation into four components, i.e., the within-group Gini variation, the between-group Gini variation among group means, and two effects of overlapping among groups. Although the extra terms provide some insight into the extent of group intertwining, their decomposition is complicated. Not only is our representation of the total Gini variation simple and easy to interpret, but it also extends naturally to the multivariate case.

### 2.3 Generalized Gini Correlations

There are two multivariate generalizations of the Gini mean difference. One is the Gini covariance matrix proposed by Dang, Sang and Weatherall (2016) [4]. Along this line, one may extend the Gini correlation based on an analogue of Wilks' lambda or the Hotelling-Lawley trace in MANOVA; we leave that for future work. Here we explore another generalization defined in [17]. That is, the Gini mean difference of a distribution F in $\mathbb{R}^d$ is

$$\Delta = E\|X - X'\|,$$

or even more generally, for some $\alpha \in (0, 2)$,

$$\Delta(\alpha) = E\|X - X'\|^{\alpha}, \qquad (11)$$

where $\|\cdot\|$ is the Euclidean norm in $\mathbb{R}^d$. With this generalized multivariate Gini mean difference (11), we can define the Gini correlation of (4) as follows.

###### Definition 2.1

For a non-degenerate random vector X in $\mathbb{R}^d$ and a categorical variable Y, if $E\|X\|^{\alpha} < \infty$ for $\alpha \in (0, 2)$, the Gini correlation of X and Y is defined as

$$\rho_g(X, Y; \alpha) = 1 - \frac{\sum_{k=1}^K p_k \Delta_k(\alpha)}{\Delta(\alpha)} = \frac{\Delta(\alpha) - \sum_{k=1}^K p_k \Delta_k(\alpha)}{\Delta(\alpha)}, \qquad (12)$$

where $\Delta_k(\alpha)$ and $\Delta(\alpha)$ are the generalized Gini mean differences of the distributions $F_k$ and F, respectively.

###### Remark 2.1

Note that a small α imposes only a weak moment assumption on the distributions, which allows application of the Gini correlation to heavy-tailed distributions.

###### Remark 2.2

The requirement $\alpha \in (0, 2)$ is needed for the desired properties of the Gini correlation.

The next theorem states the properties of the proposed Gini correlation.

###### Theorem 2.1

For a categorical variable Y and a continuous random vector X in $\mathbb{R}^d$ with $E\|X\|^{\alpha} < \infty$ for $\alpha \in (0, 2)$, $\rho_g(X, Y; \alpha)$ has the following properties.

1. $0 \le \rho_g(X, Y; \alpha) \le 1$.

2. $\rho_g(X, Y; \alpha) = 0$ if and only if X and Y are independent.

3. $\rho_g(X, Y; \alpha) = 1$ if and only if $F_k$ is a single point mass distribution for $k = 1, \dots, K$.

4. $\rho_g(cAX + b, Y; \alpha) = \rho_g(X, Y; \alpha)$ for any orthonormal matrix A, nonzero constant c and vector b.

Properties 3 and 4 immediately follow from the definition. First of all, $\sum_{k=1}^K p_k \Delta_k(\alpha) \ge 0$, so we have $\rho_g(X, Y; \alpha) \le 1$. It is obvious that $\rho_g(X, Y; \alpha) = 1$ if and only if $\Delta_k(\alpha) = 0$ for each k, which mutually implies that each $F_k$ is a singleton distribution. The orthogonal invariance of Property 4 results from the Euclidean distance used in the Gini correlation, which remains invariant under rotation, translation and homogeneous scale change. The remaining part of the proof has two steps. In Step 1, let $(X_k, X_k')$ and $(X, X')$ be independent pairs from $F_k$ and F, respectively. We can write

$$\mathrm{gCov}(X, Y; \alpha) = \Delta(\alpha) - \sum_{k=1}^K p_k \Delta_k(\alpha) = \sum_{k=1}^K p_k T(X_k, X; \alpha), \qquad (13)$$

where $T(X_k, X; \alpha) = 2E\|X_k - X'\|^{\alpha} - \Delta_k(\alpha) - \Delta(\alpha)$. This is because

$$\begin{aligned} \sum_{k=1}^K p_k T(X_k, X; \alpha) &= \sum_{k=1}^K p_k\Big(2p_k \Delta_k(\alpha) + 2\sum_{l \ne k} p_l \Delta_{kl}(\alpha) - \Delta_k(\alpha) - \Delta(\alpha)\Big) \\ &= \sum_{k=1}^K (2p_k^2 - p_k)\Delta_k(\alpha) + 2\sum_{k=1}^K \sum_{l \ne k} p_k p_l \Delta_{kl}(\alpha) - \Delta(\alpha) \\ &= 2\sum_{1 \le k < l \le K} p_k p_l \Delta_{kl}(\alpha) - \sum_{k=1}^K p_k(1 - p_k)\Delta_k(\alpha) = \Delta(\alpha) - \sum_{k=1}^K p_k \Delta_k(\alpha). \end{aligned}$$

In Step 2, one recognizes that $T(X_k, X; \alpha)$ is the energy distance between $X_k$ and X defined in [38]. Applying Proposition 2 of [38], for $0 < \alpha < 2$, we have

$$T(X_k, X; \alpha) = c(d, \alpha) \int_{\mathbb{R}^d} \frac{|\psi_k(t) - \psi(t)|^2}{\|t\|^{d+\alpha}}\,dt, \qquad (14)$$

where $\psi_k$ and $\psi$ are the characteristic functions of $X_k$ and X, respectively, and $c(d, \alpha)$ is a constant depending only on d and α, i.e.,

$$c(d, \alpha) = \frac{\alpha\, 2^{\alpha}\, \Gamma\big((d + \alpha)/2\big)}{2\pi^{d/2}\,\Gamma(1 - \alpha/2)}.$$

The results (13) and (14) show that $T(X_k, X; \alpha) \ge 0$ for all k, and hence $\mathrm{gCov}(X, Y; \alpha) \ge 0$, with equality to zero if and only if $X_k$ and X are identically distributed for all k.

###### Remark 2.3

$T(X_k, X; \alpha)$ is the energy distance between $X_k$ and X, which is a weighted $L_2$ distance between the characteristic functions of $X_k$ and X. For d = 1 and α = 1, $T(X_k, X; 1)$ is also the $L_2$ distance between the distribution functions $F_k$ and F multiplied by a constant. However, such a relationship does not hold for d > 1.

###### Remark 2.4

The Gini covariance of X and Y is the weighted average of the energy distances between the $X_k$ and X. It is also a linear combination of the energy distances between $X_k$ and $X_l$ for $k < l$. That is, $\mathrm{gCov}(X, Y; \alpha) = \sum_{1 \le k < l \le K} p_k p_l T(X_k, X_l; \alpha)$.

Particularly for K = 2, the between-group variation $\mathrm{gCov}(X, Y; \alpha)$ simplifies to

$$p_1 T(X_1, X; \alpha) + p_2 T(X_2, X; \alpha) = p_1 p_2 T(X_1, X_2; \alpha),$$

which is proportional to $T(X_1, X_2; \alpha)$, the energy distance used in [38, 39]. Székely and Rizzo [34] considered a special case of the energy distance with α = 1 and proposed a test for the equality of the two distributions $F_1$ and $F_2$, which is also studied in [1]. That test is equivalent to testing $\mathrm{gCov}(X, Y; 1) = 0$. The test of zero Gini covariance is also used for the K-sample problem, in which case it is equivalent to the test of the DISCO (DIStance COmponents) analysis in [22]. The test statistic in DISCO takes the ratio of the between-group to the within-group Gini variations for the K-sample problem, and testing $\rho_g(X, Y; \alpha) = 0$ is equivalent to their one-way DISCO analysis. What we contribute to the dependence test is that our test is able to provide a power analysis for a particular alternative specified as $\rho_g(X, Y; \alpha) = \rho_0$ for some $\rho_0 > 0$. Also, we can have a test which controls the Type II error rather than the Type I error.

### 2.4 Connection to Distance Correlation

The proposed Gini correlation is closely related to, but different from, the distance correlation studied by Székely, Rizzo and Bakirov (2007) [36] and Székely and Rizzo (2009) [37]. Their distance correlation measures dependence between two sets of continuous random variables. Later, the distance covariance and distance correlation were extended from Euclidean spaces to general metric spaces by Lyons (2013) [19]. Based on that idea, we define the discrete metric

$$d(y, y') = |y - y'| := I(y \ne y'),$$

where $I(\cdot)$ is the indicator function. Equipped with this set-difference metric on the support of Y and the Euclidean distance on the support of X, the corresponding distance covariance and distance correlation for numerical and categorical variables are as follows.

$$\mathrm{dCov}(X, Y; \alpha) = c(d, \alpha) \sum_{k=1}^K \int \frac{|p_k \psi_k(t) - p_k \psi(t)|^2}{\|t\|^{d+\alpha}}\,dt, \qquad (15)$$

$$\mathrm{dCov}(X, X; \alpha) = c(d, \alpha)^2 \int\!\!\int \frac{|\psi(t + s) - \psi(t)\psi(s)|^2}{\|t\|^{d+\alpha}\,\|s\|^{d+\alpha}}\,dt\,ds,$$

$$\mathrm{dCov}(Y, Y) = \sum_{k=1}^K p_k^2 - 2\sum_{k=1}^K p_k^3 + \Big(\sum_{k=1}^K p_k^2\Big)^2, \qquad (16)$$

$$\mathrm{dCor}(X, Y; \alpha) = \frac{\mathrm{dCov}(X, Y; \alpha)}{\sqrt{\mathrm{dCov}(X, X; \alpha)}\,\sqrt{\mathrm{dCov}(Y, Y)}}.$$
###### Remark 2.5

As expected, $\mathrm{dCov}(X, Y; \alpha) = E\big[\|X - X'\|^{\alpha} I(Y \ne Y')\big] + E\|X - X'\|^{\alpha}\, P(Y \ne Y') - 2E\big[\|X - X'\|^{\alpha} I(Y \ne Y'')\big]$, where (X, Y), (X′, Y′) and (X″, Y″) are i.i.d.
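Identity (16) can be verified by exact enumeration: under the discrete metric, conditionally on $Y = L_k$ the indicators $I(Y \ne Y')$ and $I(Y \ne Y'')$ are independent Bernoulli$(1 - p_k)$ variables. The following check is our own sketch (the probability vector is arbitrary); it compares the closed form in (16) with the moment form of the metric-space distance covariance, with no sampling involved.

```python
import numpy as np

# Check Eq. (16): dCov(Y, Y) = S - 2*T + S^2 with S = sum p_k^2, T = sum p_k^3,
# against the moment form E[d^2] + (E d)^2 - 2 E[d(Y,Y') d(Y,Y'')]
# for the discrete metric d(y, y') = I(y != y').
p = np.array([0.2, 0.3, 0.5])             # any categorical distribution

S, T = np.sum(p ** 2), np.sum(p ** 3)
closed_form = S - 2 * T + S ** 2

Ed = 1 - S                                 # P(Y != Y'); note E[d] = E[d^2]
# Given Y = k, d(Y, Y') and d(Y, Y'') are independent Bernoulli(1 - p_k).
cross = np.sum(p * (1 - p) ** 2)           # E[d(Y,Y') d(Y,Y'')]
moment_form = Ed + Ed ** 2 - 2 * cross
assert np.isclose(closed_form, moment_form)
```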

The proofs of the identity in Remark 2.5 and of (16) are given in the Appendix. Comparing (15) with (13) and (14), it is easy to draw the following conclusions.

###### Remark 2.6

$\mathrm{dCov}(X, Y; \alpha) = \sum_{k=1}^K p_k^2 T(X_k, X; \alpha)$.

###### Remark 2.7

$\mathrm{dCov}(X, Y; \alpha) \le \mathrm{gCov}(X, Y; \alpha)$. They are equal if and only if X and Y are independent, in which case both are zero.

###### Remark 2.8

When $p_1 = \dots = p_K = 1/K$, $\mathrm{dCov}(X, Y; \alpha) = \mathrm{gCov}(X, Y; \alpha)/K$.

###### Remark 2.9

For K = 2, $\mathrm{gCov}(X, Y; \alpha) = p_1 p_2 T(X_1, X_2; \alpha)$ and $\mathrm{dCov}(X, Y; \alpha) = 2p_1^2 p_2^2 T(X_1, X_2; \alpha)$.

###### Remark 2.10

For the case of d = 1 and α = 1, $\mathrm{dCov}(X, Y; 1)$ is studied in [8] and

$$\mathrm{dCov}(X, Y; 1) = 2\sum_{k=1}^K \int \big(p_k F_k(x) - p_k F(x)\big)^2\,dx. \qquad (17)$$

A comparison of Remark 2.10 and (1) explains the difference between our Gini approach and the distance correlation approach in the one-dimensional case. The distance covariance of X and Y is based on the squared difference between the joint distribution and the product of the marginal distributions, while the Gini covariance is based on the squared difference between the conditional distribution $F_k$ and the marginal distribution F. Our Gini dependence measure takes the categorical nature of Y into account and has a simpler formulation than the distance correlation, leading to simpler inference and computation.

Before we discuss their computation and inference, let us first demonstrate the Gini correlation and the distance correlation in several examples for α = 1.

### 2.5 Examples

Three examples with K = 2 and d = 1 are provided. In each, X is a two-component mixture with $P(Y = L_1) = p$ and $P(Y = L_2) = 1 - p$.

Example 1. Let $F_1$ be the exponential distribution with mean θ and $F_2$ the exponential distribution with mean β. We have

$$\mu_1 = \sigma_1 = \Delta_1 = \theta, \quad \mu_2 = \sigma_2 = \Delta_2 = \beta, \quad \Delta_{12} = \frac{\theta^2 + \beta^2}{\theta + \beta},$$

$$\begin{aligned} \mathrm{dCov}(X, X) = {} & 2p^2\theta^2 + 2(1-p)^2\beta^2 + \big(p^2\theta + (1-p)^2\beta\big)^2 - \frac{8}{3}p^3\theta^2 - \frac{8}{3}(1-p)^3\beta^2 + \frac{16p(1-p)\theta^2\beta^2}{(\theta+\beta)^2} \\ & + \frac{16p^2(1-p)^2\theta^2\beta^2}{(\theta+\beta)^2} + \frac{8p^3(1-p)\theta^2\beta}{\theta+\beta} + \frac{8p(1-p)^3\theta\beta^2}{\theta+\beta} - \frac{8p(1-p)^2\theta\beta^2(5\theta+\beta)}{(2\theta+\beta)(\theta+\beta)} \\ & - \frac{8p^2(1-p)\theta^2\beta(\theta+5\beta)}{(\theta+2\beta)(\theta+\beta)}. \end{aligned}$$

As we see, the formula of $\mathrm{dCov}(X, X)$ is complicated for the 2-component exponential mixture distribution. The correlations are given as follows.

$$\rho_g(X, Y) = \frac{p(1-p)(\theta-\beta)^2}{(2p - p^2)\theta^2 + (1 - p^2)\beta^2 + (1 - 2p + 2p^2)\theta\beta},$$

$$\rho_d(X, Y) = \frac{p(1-p)(\theta-\beta)^2}{2(\theta+\beta)\sqrt{\mathrm{dCov}(X, X)}},$$

$$\rho_p^2(X, Y) = \frac{p(1-p)(\theta-\beta)^2}{p\theta^2 + (1-p)\beta^2 + p(1-p)(\theta-\beta)^2}.$$

Figure 1 demonstrates the Gini correlation, the distance correlation and the squared Pearson correlation for the exponential mixtures. The cases p = 0 or p = 1 in (a) and θ = β in (b) have zero Gini, distance and Pearson correlation coefficients, corresponding to independence of X and Y. The value of the Gini correlation lies between the squared Pearson correlation and the distance correlation.
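The closed form for $\rho_g$ in this example can be cross-checked against the defining ratio (7) using the three GMDs listed at the start of the example. The helper names below are ours, and the parameter triples are arbitrary test points.

```python
import numpy as np

# rho_g via the defining ratio (Delta - p*Delta_1 - (1-p)*Delta_2) / Delta
# of Eq. (7), using Delta_1 = theta, Delta_2 = beta and
# Delta_12 = (theta^2 + beta^2) / (theta + beta) from Example 1.
def rho_g_from_gmds(p, theta, beta):
    d1, d2 = theta, beta
    d12 = (theta ** 2 + beta ** 2) / (theta + beta)
    delta = p ** 2 * d1 + (1 - p) ** 2 * d2 + 2 * p * (1 - p) * d12
    return (delta - p * d1 - (1 - p) * d2) / delta

# rho_g via the closed form displayed above.
def rho_g_closed_form(p, theta, beta):
    num = p * (1 - p) * (theta - beta) ** 2
    den = ((2 * p - p ** 2) * theta ** 2 + (1 - p ** 2) * beta ** 2
           + (1 - 2 * p + 2 * p ** 2) * theta * beta)
    return num / den

for p, theta, beta in [(0.3, 1.0, 2.0), (0.5, 0.5, 3.0), (0.9, 2.0, 2.0)]:
    assert np.isclose(rho_g_from_gmds(p, theta, beta),
                      rho_g_closed_form(p, theta, beta))
```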

Example 2. Let $F_1 = N(\mu_1, \sigma^2)$ and $F_2 = N(\mu_2, \sigma^2)$, and denote $a = (\mu_1 - \mu_2)/\sigma$. We have

$$\Delta_1 = \Delta_2 = \frac{2\sigma}{\sqrt{\pi}}, \quad \Delta_{12} = \sigma\Big[2a\Phi(a/\sqrt{2}) + 2\sqrt{2}\,\phi(a/\sqrt{2}) - a\Big],$$

where φ and Φ are the density and cumulative distribution functions of the standard normal distribution, respectively. It is, however, too complicated to derive the formula of $\mathrm{dCov}(X, X)$ when X is from a mixture of two normal distributions. In this case, we are only able to derive the Gini correlation and the squared Pearson correlation as follows.

$$\rho_g(X, Y) = \frac{p(1-p)\big[2a\Phi(a/\sqrt{2}) + 2\sqrt{2}\,\phi(a/\sqrt{2}) - a - 2/\sqrt{\pi}\big]}{\big(p^2 + (1-p)^2\big)/\sqrt{\pi} + p(1-p)\big[2a\Phi(a/\sqrt{2}) + 2\sqrt{2}\,\phi(a/\sqrt{2}) - a\big]},$$

$$\rho_p^2(X, Y) = \frac{p(1-p)a^2}{1 + p(1-p)a^2}.$$

For a mixture of two normal distributions with the same standard deviation but different means, independence of X and Y is equivalent to p = 0 or p = 1 in (a) and to a = 0 in (b) for both correlations, as demonstrated in Figure 2. In the dependence cases, the squared Pearson correlation is larger than the Gini correlation.

Example 3. Let $F_1 = N(\mu, \sigma_1^2)$ and $F_2 = N(\mu, \sigma_2^2)$, and denote $r = \sigma_2/\sigma_1$. Again, it is too complicated to derive the formula of $\mathrm{dCov}(X, X)$ in this example. Since the two distributions have the same mean, $\rho_p^2$ is always 0 and hence completely fails to measure the difference between the two distributions when $\sigma_1 \ne \sigma_2$. For the Gini correlation, we have

$$\Delta_1 = \frac{2\sigma_1}{\sqrt{\pi}}, \quad \Delta_2 = \frac{2\sigma_2}{\sqrt{\pi}}, \quad \Delta_{12} = \frac{\sqrt{2(\sigma_1^2 + \sigma_2^2)}}{\sqrt{\pi}}.$$

Then

$$\rho_g(X, Y) = \frac{p(1-p)\big(\sqrt{2(1 + r^2)} - 1 - r\big)}{p^2 + (1-p)^2 r + p(1-p)\sqrt{2(1 + r^2)}}.$$

Figure 3(a) plots the Gini correlation against p for the normal mixture under different ratios of standard deviations, and Figure 3(b) plots the Gini correlation against the ratio of standard deviations under different values of p. In the cases p = 0 and p = 1 in (a) and the case r = 1 in (b), the Gini correlation is 0, corresponding to independence of X and Y.
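The behavior described for Figure 3 can be reproduced directly from the closed form above; the function name is ours and simply transcribes the formula.

```python
import math

# Gini correlation for Example 3 (two normals with a common mean,
# r = sigma_2 / sigma_1), transcribed from the closed form above.
def rho_g_example3(p, r):
    root = math.sqrt(2 * (1 + r ** 2))
    num = p * (1 - p) * (root - 1 - r)
    den = p ** 2 + (1 - p) ** 2 * r + p * (1 - p) * root
    return num / den

# Independence boundary cases from Figure 3: p in {0, 1} or r = 1.
assert rho_g_example3(0.0, 2.0) == 0.0
assert rho_g_example3(1.0, 2.0) == 0.0
assert abs(rho_g_example3(0.4, 1.0)) < 1e-12
# For unequal scales the correlation is strictly positive.
assert rho_g_example3(0.4, 3.0) > 0.0
```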

## 3 Inference

### 3.1 Estimation

Suppose a sample $\{(x_i, y_i),\ i = 1, \dots, n\}$ from (X, Y) is available. The sample counterparts can be easily computed. Let $I_k$ be the index set of sample points with $y_i = L_k$; then $p_k$ is estimated by the sample proportion of that category, that is, $\hat{p}_k = n_k/n$, where $n_k$ is the number of elements in $I_k$. With a given α, point estimators of $\Delta_k(\alpha)$ and $\Delta(\alpha)$ are given as follows.

$$\hat{\Delta}_k(\alpha) = \binom{n_k}{2}^{-1} \sum_{i < j,\; i, j \in I_k} \|x_i - x_j\|^{\alpha}, \qquad \hat{\Delta}(\alpha) = \binom{n}{2}^{-1} \sum_{i < j} \|x_i - x_j\|^{\alpha}.$$

Clearly, $\hat{\Delta}_k(\alpha)$ and $\hat{\Delta}(\alpha)$ are U-statistics with kernels of size 2. Applying the U-statistic theory [13, 27], we are able to establish the asymptotic properties of the sample Gini covariance and correlation. The limiting distribution of the sample Gini correlation depends on whether the corresponding U-statistic is degenerate. We have the following theorems.
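Putting the estimators together, the sample Gini correlation $(\hat{\Delta}(\alpha) - \sum_k \hat{p}_k \hat{\Delta}_k(\alpha))/\hat{\Delta}(\alpha)$ can be computed directly from pairwise distances. The following sketch is our own (`gini_distance_correlation` is not from the paper); it accepts 1-D or multivariate x and costs $O(n^2)$ distance evaluations.

```python
import numpy as np
from itertools import combinations

def gini_distance_correlation(x, y, alpha=1.0):
    """U-statistic estimate of rho_g(X, Y; alpha):
    (Delta_hat - sum_k p_hat_k * Delta_hat_k) / Delta_hat, where Delta_hat
    averages ||x_i - x_j||^alpha over all C(n, 2) pairs and Delta_hat_k
    does the same within group k. Each group needs at least two points."""
    x = np.asarray(x, dtype=float)
    if x.ndim == 1:
        x = x[:, None]                    # treat 1-D input as d = 1
    y = np.asarray(y)
    n = len(x)

    def mean_pairwise(z):
        return np.mean([np.linalg.norm(a - b) ** alpha
                        for a, b in combinations(z, 2)])

    delta = mean_pairwise(x)              # Delta_hat(alpha)
    within = sum((np.sum(y == level) / n) * mean_pairwise(x[y == level])
                 for level in np.unique(y))
    return (delta - within) / delta
```

On the toy sample x = (0, 1, 2, 3), y = (a, a, b, b), the U-statistic estimates are $\hat{\Delta} = 10/6$ and $\hat{\Delta}_1 = \hat{\Delta}_2 = 1$, giving $\hat{\rho}_g = 0.4$.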

###### Theorem 3.1

If $E\|X\|^{\alpha} < \infty$ and $p_k > 0$ for all k, then almost surely

$$\lim_{n \to \infty} \hat{\rho}_g(\alpha) = \rho_g(\alpha).$$

Proof: By the SLLN, $\hat{p}_k$ converges to $p_k$ with probability 1. Also, by the almost sure behavior of U-statistics [29], $\hat{\Delta}_k(\alpha)$ and $\hat{\Delta}(\alpha)$ converge with probability 1 to $\Delta_k(\alpha)$ and $\Delta(\alpha)$, respectively. The sample Gini correlation is a function of $\hat{p}_k$, $\hat{\Delta}_k(\alpha)$ and $\hat{\Delta}(\alpha)$ that is continuous whenever $\Delta(\alpha) > 0$; therefore, the strong consistency of the sample Gini correlation follows from the continuous mapping theorem.

###### Theorem 3.2

Suppose that $E\|X\|^{2\alpha} < \infty$, $p_k > 0$ for all k, and X and Y are dependent. We have

$$\sqrt{n}\big(\hat{\rho}_g(\alpha) - \rho_g(\alpha)\big) \to N(0, v_g^2),$$

where $v_g^2$ is the asymptotic variance given in the proof.

Proof: Let $\Delta_w(\alpha)$ denote $\sum_{k=1}^K p_k \Delta_k(\alpha)$ and let $\hat{\Delta}_w(\alpha) = \sum_{k=1}^K \hat{p}_k \hat{\Delta}_k(\alpha)$ be its sample version. We first provide the joint limiting distribution of $(\hat{\Delta}_w(\alpha), \hat{\Delta}(\alpha))$; then the limiting distributions of the sample Gini covariance and of $\hat{\rho}_g(\alpha)$ are obtained by Slutsky's theorem, since $\hat{\Delta}_w(\alpha)$ and $\hat{\Delta}(\alpha)$ are consistent estimators of $\Delta_w(\alpha)$ and $\Delta(\alpha)$, respectively.

Let . With the U-statistic theorem, we have

 √n(^