On mutual information estimation for mixed-pair random variables

12/27/2018 · by Aleksandr Beknazaryan et al. · The University of Mississippi

We study mutual information estimation for mixed-pair random variables, where one random variable is discrete and the other is continuous. We develop a kernel method to estimate the mutual information between the two random variables. The estimates enjoy a central limit theorem under some regularity conditions on the distributions. The theoretical results are demonstrated by a simulation study.


1 Introduction

The entropy of a discrete random variable $X$ with countable support $\{x_1, x_2, \ldots\}$ and $p_i = P(X = x_i)$ is defined to be

$$H(X) = -\sum_i p_i \log p_i,$$

and the (differential) entropy of a continuous random variable $Y$ with probability density function $f$ is defined as

$$h(Y) = -\int f(y) \log f(y)\, dy.$$

If $X = (X_1, \ldots, X_m)$ or $Y = (Y_1, \ldots, Y_m)$ is a random vector, $H(X)$ or $h(Y)$ is also called the joint entropy of the components in $X$ or $Y$. Entropy is a measure of distribution uncertainty, and it naturally has applications in information theory, statistical classification, pattern recognition and so on.

Let $P_X$, $P_Y$ be probability measures on some arbitrary measure spaces $\mathcal{X}$ and $\mathcal{Y}$, respectively. Let $P_{XY}$ be the joint probability measure on the space $\mathcal{X} \times \mathcal{Y}$. If $P_{XY}$ is absolutely continuous with respect to the product measure $P_X \otimes P_Y$, let $\frac{dP_{XY}}{d(P_X \otimes P_Y)}$ be the Radon-Nikodym derivative. Then the general definition of the mutual information (e.g., [3]) is given by

$$I(X;Y) = \int_{\mathcal{X} \times \mathcal{Y}} \log \frac{dP_{XY}}{d(P_X \otimes P_Y)}\, dP_{XY}. \quad (1)$$

If two random variables $X$ and $Y$ are either both discrete or both continuous, then the mutual information of $X$ and $Y$ can be expressed in terms of entropies as

$$I(X;Y) = H(X) + H(Y) - H(X,Y), \quad (2)$$

where the entropies are differential entropies in the continuous case.
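As a small illustration of identity (2), the following sketch computes the entropies and the mutual information for a discrete pair; the joint probability table is an arbitrary illustrative choice, not an example from the paper.

```python
import numpy as np

# Toy joint probability table P(X = i, Y = j) (arbitrary illustrative values).
p_xy = np.array([[0.30, 0.10],
                 [0.15, 0.45]])

def entropy(p):
    """Shannon entropy -sum p log p, with the convention 0 log 0 = 0."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

H_X  = entropy(p_xy.sum(axis=1))   # marginal entropy H(X)
H_Y  = entropy(p_xy.sum(axis=0))   # marginal entropy H(Y)
H_XY = entropy(p_xy.ravel())       # joint entropy H(X, Y)

# Mutual information via identity (2).
I_XY = H_X + H_Y - H_XY
print(f"H(X)={H_X:.4f}  H(Y)={H_Y:.4f}  H(X,Y)={H_XY:.4f}  I(X;Y)={I_XY:.4f}")
```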

However, in practice and application we often need to work with a mixture of continuous and discrete random variables. Such mixtures arise in several ways: 1) one random variable is discrete and the other random variable is continuous; 2) a single random variable has both discrete and continuous components, i.e., it equals a discrete random variable $X$ with probability $p$ and a continuous random variable $Y$ with probability $1 - p$, where $0 < p < 1$; 3) a random vector in which each component is discrete, continuous, or a mixture as in 2).

In [11], the authors extend the definition of the joint entropy to the first type of mixture, i.e., to a pair of random variables $(X, Y)$ in which the first random variable is discrete and the second one is continuous. Our goal is to study the mutual information for that case and to provide an estimator of the mutual information from a given i.i.d. sample $(X_1, Y_1), \ldots, (X_n, Y_n)$.

In [3], the authors applied the $k$-nearest neighbor method to estimate the Radon-Nikodym derivative and, therefore, to estimate the mutual information for all three mixed cases. In the literature, if the random variables $X$ and $Y$ are either both discrete or both continuous, the estimation of mutual information is usually performed by estimating the three entropies in (2). The estimation of a differential entropy has been well studied. An incomplete list of the related research includes the nearest-neighbor estimators [7], [12], [10]; the kernel estimators [1], [6], [4], [5]; and the orthogonal projection estimators [8], [9]. Basharin [2] studied the plug-in entropy estimator for the discrete case with finitely many values and obtained the mean, the variance and the central limit theorem of this estimator. Vu, Yu and Kass [13] studied the coverage-adjusted entropy estimator with unobserved values for the discrete case with countably infinite support.

2 Main results

Consider a random vector $(X, Y)$. We call $(X, Y)$ a mixed-pair if $X$ is a discrete random variable with countable support $\{x_1, x_2, \ldots\}$ while $Y$ is a continuous random variable taking values in $\mathbb{R}^d$. Observe that $(X, Y)$ induces measures $\{\mu_i\}$ that are absolutely continuous with respect to the Lebesgue measure on $\mathbb{R}^d$, where $\mu_i(A) = P(X = x_i, Y \in A)$ for every Borel set $A$ in $\mathbb{R}^d$. There exists a non-negative function $g(x, y)$ such that, writing $g_i(y) = g(x_i, y)$, $p_i = \int_{\mathbb{R}^d} g_i(y)\, dy$ is the probability mass function of $X$ and $f(y) = \sum_i g_i(y)$ is the marginal density function of $Y$. Here $p_i = P(X = x_i)$, $i \ge 1$. In particular, denote $f_i(y) = g_i(y)/p_i$. We have that $f_i$ is the probability density function of $Y$ conditioned on $X = x_i$. In [11], the authors gave the following regularity condition for a mixed-pair and then defined the joint entropy of a mixed-pair.

Definition 2.1

(Good mixed-pair). A mixed-pair of random variables $(X, Y)$ is called good if the following condition is satisfied:

$$\sum_i \int_{\mathbb{R}^d} \big| g_i(y) \log g_i(y) \big|\, dy < \infty.$$

Essentially, for a good mixed-pair of random variables, when restricted to any of the values $x_i$, the conditional differential entropy of $Y$ is well-defined.

Definition 2.2

(Entropy of a mixed-pair). The entropy of a good mixed-pair random variable $(X, Y)$ is defined by

$$H(X, Y) = -\sum_i \int_{\mathbb{R}^d} g_i(y) \log g_i(y)\, dy.$$

Since $g_i(y) = p_i f_i(y)$, we have that

$$H(X, Y) = -\sum_i p_i \log p_i - \sum_i p_i \int_{\mathbb{R}^d} f_i(y) \log f_i(y)\, dy. \quad (3)$$

We take the conventions $0 \log 0 = 0$ and $0 \log \frac{0}{0} = 0$. From the general formula of the mutual information (1), we get that

$$I(X;Y) = \sum_i \int_{\mathbb{R}^d} g_i(y) \log \frac{g_i(y)}{p_i f(y)}\, dy = H(X) + h(Y) - H(X, Y). \quad (4)$$
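As a numerical illustration of (4), the following sketch evaluates the mutual information of a mixed pair by direct integration; the Bernoulli probabilities and the normal conditional densities are illustrative choices, not an example taken from the paper.

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

# Illustrative good mixed-pair: P(X=0)=0.4, P(X=1)=0.6,
# Y | X=0 ~ N(0, 1) and Y | X=1 ~ N(2, 1).
p = [0.4, 0.6]
f_cond = [norm(0.0, 1.0).pdf, norm(2.0, 1.0).pdf]

def f_marginal(y):
    # f(y) = sum_i p_i f_i(y), the marginal density of Y.
    return sum(p_i * f_i(y) for p_i, f_i in zip(p, f_cond))

def integrand(y, f_i):
    # f_i(y) log{ f_i(y) / f(y) }, with the convention 0 log 0 = 0.
    fi = f_i(y)
    return 0.0 if fi <= 0.0 else fi * np.log(fi / f_marginal(y))

def mutual_information():
    # I(X;Y) = sum_i int g_i(y) log{ g_i(y) / (p_i f(y)) } dy  with  g_i = p_i f_i,
    # which equals sum_i p_i int f_i(y) log{ f_i(y) / f(y) } dy, as in (4).
    total = 0.0
    for p_i, f_i in zip(p, f_cond):
        val, _ = quad(integrand, -np.inf, np.inf, args=(f_i,))
        total += p_i * val
    return total

print(f"I(X;Y) = {mutual_information():.6f}")
```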

Let $(X_1, Y_1), \ldots, (X_n, Y_n)$ be a random sample drawn from a mixed distribution whose discrete component has support $\{x_1, x_2, \ldots\}$ and probability mass function $p_i = P(X = x_i)$, and suppose that the continuous component has marginal density $f$. Let

(5)

and

(6)

be the estimators $\tilde H(X)$, $\tilde h(Y)$ and $\tilde H(X, Y)$ of $H(X)$, $h(Y)$ and $H(X, Y)$, respectively, where $f_i$ is the probability density function of $Y$ conditioned on $X = x_i$, $i \ge 1$. Denote by $W$ the vector of i.i.d. summands underlying these three sample averages. Let $\Sigma$ be the covariance matrix of $W$, and set $\sigma^2 = a^\top \Sigma a$ with $a = (1, 1, -1)^\top$.

Theorem 2.1

$\sigma^2 > 0$ if and only if $X$ and $Y$ are dependent. For the estimator

$$I_n = \tilde H(X) + \tilde h(Y) - \tilde H(X, Y) \quad (7)$$

of $I(X;Y)$ we have that

$$\sqrt{n}\, \big( I_n - I(X;Y) \big) \xrightarrow{d} N(0, \sigma^2), \quad (8)$$

given that $X$ and $Y$ are dependent. Furthermore, the variance $\sigma^2$ can be calculated by

(9)

where the conditional expectations are taken given $X = x_i$.

Proof. First of all, $\sigma^2 \ge 0$ since $\Sigma$ is the variance-covariance matrix of $W$. If $\sigma^2 = 0$, then $a^\top W$ is degenerate and equal to some constant almost surely, which forces the ratio $f_i(y)/f(y)$ to be constant wherever $g_i(y) > 0$. Hence $f_i(y) = c\, f(y)$ for some constant $c$ and for all $y$. But $\int f_i(y)\, dy = \int f(y)\, dy = 1$. Hence, $c = 1$ and $f_i(y) = f(y)$ for all $i$ and all $y$. Then $X$ and $Y$ are independent. On the other hand, if $X$ and $Y$ are independent, then $f_i(y) = f(y)$ for all $i$ and $y$. Therefore, $a^\top W$ is constant and $\sigma^2 = 0$. Hence, $\sigma^2 = 0$ if and only if $X$ and $Y$ are independent.

Notice that the vector $(\tilde H(X), \tilde h(Y), \tilde H(X, Y))$ is the sample mean of a sequence of i.i.d. random vectors with mean $(H(X), h(Y), H(X, Y))$. Then, by the central limit theorem, we have

$$\sqrt{n}\, \Big( \big(\tilde H(X), \tilde h(Y), \tilde H(X, Y)\big) - \big(H(X), h(Y), H(X, Y)\big) \Big) \xrightarrow{d} N(0, \Sigma),$$

and, given $\sigma^2 > 0$, we have (8). By the formula for variance decomposition, we have

(10)

Here the conditional variance is taken given $X = x_i$. By a similar calculation,

(11)

for all $i$, and

(12)

Thus, the covariance matrix $\Sigma$, and therefore $\sigma^2$, can be calculated by the above calculations (10)-(12). We then have (9). $\square$

We consider the case when the random variables $X$ and $Y$ are dependent. Note that in this case $\sigma^2 > 0$ and we have (8). However, $I_n$ is not a practical estimator since the density functions involved are not known.
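To illustrate Theorem 2.1 numerically, the following sketch assumes one natural form of $I_n$, namely the sample average of $\log\{g(X_j, Y_j)/(p_{X_j} f(Y_j))\}$ computed with the true densities, and checks its approximate normality by Monte Carlo; the normal mixture, the sample size and the number of replications are illustrative choices rather than anything specified in the paper.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Illustrative mixed pair: P(X=0)=0.4, P(X=1)=0.6,
# Y | X=0 ~ N(0, 1), Y | X=1 ~ N(2, 1).  The densities are treated as known.
p = np.array([0.4, 0.6])
means = np.array([0.0, 2.0])

def oracle_mi_estimate(n):
    """Average of log{ g(X_j,Y_j) / (p_{X_j} f(Y_j)) } = log{ f_{X_j}(Y_j) / f(Y_j) }."""
    x = rng.choice(2, size=n, p=p)
    y = rng.normal(means[x], 1.0)
    f_cond = norm.pdf(y, loc=means[x], scale=1.0)                    # f_{X_j}(Y_j)
    f_marg = p[0] * norm.pdf(y, 0.0, 1.0) + p[1] * norm.pdf(y, 2.0, 1.0)
    return np.mean(np.log(f_cond / f_marg))

estimates = np.array([oracle_mi_estimate(n=1000) for _ in range(400)])
print("mean of estimates:", estimates.mean())       # close to I(X;Y)
print("sd of estimates:  ", estimates.std(ddof=1))  # close to sigma / sqrt(n) under this form
```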

Now let $K$ be a kernel function on $\mathbb{R}^d$ and let $h > 0$ be the bandwidth. Then

are the “leave-one-out” kernel estimators of the functions $g_i$ and $f$, and

(13)

are the corresponding kernel-based estimators of the entropies $h(Y)$ and $H(X, Y)$. Also

(14)

is an estimator of $H(X)$, where

(15)
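As an illustration of this leave-one-out kernel recipe, the following sketch implements one version of it with a Student-$t$ kernel (echoing the remark after Theorem 2.2); the function name, the bandwidth and the toy data-generating choices are illustrative assumptions rather than the exact estimators in (13)-(16).

```python
import numpy as np
from scipy.stats import t as student_t

def loo_kernel_mi(x, y, h, df=3):
    """Leave-one-out kernel MI estimate for a mixed pair (x discrete, y real):
    a plug-in combination H-hat(X) + h-hat(Y) - H-hat(X, Y)."""
    n = len(y)
    # Kernel matrix K((Y_j - Y_k) / h) / h with the diagonal removed (leave one out).
    k = student_t.pdf((y[:, None] - y[None, :]) / h, df) / h
    np.fill_diagonal(k, 0.0)

    same = (x[:, None] == x[None, :])            # indicator of X_k = X_j
    f_hat = k.sum(axis=1) / (n - 1)              # leave-one-out estimate of f(Y_j)
    g_hat = (k * same).sum(axis=1) / (n - 1)     # leave-one-out estimate of g_{X_j}(Y_j)

    p_hat = np.array([np.mean(x == v) for v in np.unique(x)])
    H_X  = -np.sum(p_hat * np.log(p_hat))        # plug-in estimate of H(X)
    h_Y  = -np.mean(np.log(f_hat))               # kernel estimate of h(Y)
    H_XY = -np.mean(np.log(g_hat))               # kernel estimate of H(X, Y)
    return H_X + h_Y - H_XY

# Toy usage on an illustrative mixture (not one of the paper's simulation cases).
rng = np.random.default_rng(1)
n = 1000
x = rng.choice(2, size=n, p=[0.5, 0.5])
y = rng.normal(2.0 * x, 1.0)
print("kernel MI estimate:", loo_kernel_mi(x, y, h=0.3))
```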
Theorem 2.2

Assume that the tails of the conditional densities $f_i$ decay at polynomial rates as $|y| \to \infty$. Also assume that the kernel function has appropriately heavy tails as in [4]. If the tail-decay exponents are all large enough, with thresholds depending on whether $d = 1$, $d = 2$ or $d \ge 3$, then for the estimator $\hat I_n$ given by

(16)

we have

$$\sqrt{n}\, \big( \hat I_n - I(X;Y) \big) \xrightarrow{d} N(0, \sigma^2). \quad (17)$$

Proof. Under the conditions in the theorem, applying formula (3.1) or (3.2) from [5], the kernel-based entropy estimators are asymptotically equivalent, up to $o_P(n^{-1/2})$ terms, to the estimators used in Theorem 2.1. Together with Theorem 2.1, we have (17). $\square$

We may take the probability density function of a Student-$t$ distribution with a proper degree of freedom, instead of the normal density function, as the kernel function. On the other hand, if $X$ and $Y$ are independent, then $\sigma^2 = 0$ and $I(X;Y) = 0$.

3 Simulation study

In this section we conduct a simulation study in which the discrete random variable $X$ takes two possible values, 0 and 1, to confirm the main result stated in (17) for the kernel mutual information estimation of good mixed-pairs. First we study some one-dimensional examples. Let $t(\nu, \mu, \sigma)$ denote the Student $t$ distribution with degrees of freedom $\nu$, location parameter $\mu$ and scale parameter $\sigma$, and consider also Pareto distributions specified by their density functions. We study mixtures of such Student $t$ and Pareto components in four cases; in each case, the first distribution is the conditional distribution of $Y$ given $X = 0$ and the second distribution is the conditional distribution of $Y$ given $X = 1$.

The second row of Table 1 lists the Mathematica calculation of the mutual information (MI) as stated in (4) for each case. The third row of Table 1 gives the average of 400 estimates based on formula (16). For each estimate, we use the probability density function of the Student $t$ distribution with 3 degrees of freedom as the kernel function. We also ran simulations with other kernel functions satisfying the conditions in the main results and obtained similar results. We take the same bandwidth for the first three cases and a smaller bandwidth for the last case, with the same sample size for each estimate in every case. The Pareto distributions place most of their mass just to the right of 1; this is the reason that we take a relatively small bandwidth for the last case. To apply the kernel method in estimation, one should select an optimal bandwidth based on some criterion, for example, minimizing the mean squared error. It is interesting to investigate the bandwidth selection problem from both the theoretical and the applied viewpoints. However, the study in this direction seems very difficult, and we leave it as an open question for future work. It is clear that the average of the estimates matches the true value of the mutual information.

We apply Mathematica to calculate the covariance matrix and, therefore, the value of $\sigma$ for each case by formulae (10)-(12) or (9). The fourth row of Table 1 lists the values of $\sigma/\sqrt{n}$, which serve as the asymptotic approximation of the standard deviation of the estimator in the central limit theorem (17). The last row gives the sample standard deviation of the 400 estimates. These two values also match well.


  mixture              case 1        case 2       case 3       case 4
  MI                   0.011819      0.20023      0.102063     0.201123
  mean of estimates    0.01167391    0.1991132    0.1014199    0.2010447
  $\sigma/\sqrt{n}$    0.0006617     0.0025       0.0018       0.0023
  sample sd            0.0006616724  0.002345997  0.001819982  0.002349275
Table 1: True value of the mutual information and the mean value of the estimates.
Figure 1: The histograms with kernel density fits of the 400 estimates for the four mixture cases (top left: case 1; top right: case 2; bottom left: case 3; bottom right: case 4).
Figure 2: The normal Q-Q plots of the 400 estimates for the four mixture cases (top left: case 1; top right: case 2; bottom left: case 3; bottom right: case 4).

Figures 1 and 2 show the histograms with kernel density fits and the normal Q-Q plots of the 400 estimates for each case. It is clear that the estimates closely follow a normal distribution.
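The following condensed sketch mimics this Monte Carlo protocol on a hypothetical Student-$t$/Pareto mixture; the degrees of freedom, Pareto index, mixing probabilities, bandwidth, sample size and number of replications are illustrative assumptions rather than the settings behind Table 1, and the estimator repeats, in compact form, the leave-one-out $t$-kernel recipe sketched after (15).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

def loo_t_kernel_mi(x, y, h, df=3):
    """Compact leave-one-out t-kernel MI estimate (same recipe as the earlier sketch)."""
    n = len(y)
    k = stats.t.pdf((y[:, None] - y[None, :]) / h, df) / h
    np.fill_diagonal(k, 0.0)
    f_hat = k.sum(axis=1) / (n - 1)                                  # estimate of f(Y_j)
    g_hat = (k * (x[:, None] == x[None, :])).sum(axis=1) / (n - 1)   # estimate of g_{X_j}(Y_j)
    p_hat = np.bincount(x) / n
    p_hat = p_hat[p_hat > 0]
    return -np.sum(p_hat * np.log(p_hat)) - np.mean(np.log(f_hat)) + np.mean(np.log(g_hat))

def one_replication(n=500, h=0.3):
    # Hypothetical mixture in the spirit of the study: Y | X=0 ~ t_3 and
    # Y | X=1 ~ Pareto with index 3 on [1, infinity), with P(X=0) = P(X=1) = 1/2.
    x = rng.integers(0, 2, size=n)
    y = np.where(x == 0, rng.standard_t(3, size=n), rng.pareto(3.0, size=n) + 1.0)
    return loo_t_kernel_mi(x, y, h)

est = np.array([one_replication() for _ in range(400)])
print("mean of estimates:", est.mean())
print("sd of estimates:  ", est.std(ddof=1))
print("Shapiro-Wilk p-value for normality of the estimates:", stats.shapiro(est).pvalue)
```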

We also study two examples in the two-dimensional case. Let $t_2(\nu, \mu, \Sigma_0)$ denote the two-dimensional Student $t$ distribution with degrees of freedom $\nu$, mean $\mu$ and shape matrix $\Sigma_0$. We study the mixture in two cases; in each case, the first distribution is the conditional distribution of $Y$ given $X = 0$, the second is the conditional distribution of $Y$ given $X = 1$, and the shape matrix is the identity matrix. Table 2 summarizes the estimates of the mutual information, with a fixed sample size for each estimate. We take a kernel function of the same type as in the one-dimensional case. As in the one-dimensional case, we apply Mathematica to calculate the true value of the MI and the value of $\sigma$ given in formula (9). Figure 3 shows the histograms with kernel density fits and the normal Q-Q plots of 200 estimates for each example. It is clear that the estimates also follow a normal distribution in the two-dimensional case. In summary, the simulation study confirms the central limit theorem as stated in (17).


  mixture              case 1         case 2
  MI                   0.01158        0.202516
  mean of estimates    0.0112381      0.2022715
  $\sigma/\sqrt{n}$    0.0006577826   0.002312909
  sample sd            0.0008356947   0.002315134
Table 2: True value of the mutual information and the mean value of the estimates.
Figure 3: The histograms with kernel density fits and the normal Q-Q plots of the 200 estimates for the two two-dimensional cases (left: case 1; right: case 2).

Acknowledgement

The authors thank the editor and the referees for carefully reading the manuscript and for the suggestions that improved the presentation. This research is supported by the College of Liberal Arts Faculty Grants for Research and Creative Achievement at the University of Mississippi. The research of Hailin Sang is also supported by the Simons Foundation Grant 586789.

References

  • [1] Ahmad, I. A. and Lin, P. E. 1976. A nonparametric estimation of the entropy for absolutely continuous distributions. IEEE Trans. Information Theory. 22, 372-375.
  • [2] Basharin, G. P. 1959. On a statistical estimate for the entropy of a sequence of independent random variables. Theory of Probability and Its Applications. 4, 333-336.
  • [3] Gao, W., Kannan, S., Oh, S. and Viswanath, P. 2017. Estimating mutual information for discrete-continuous mixtures. Advances in Neural Information Processing Systems. 5988-5999.
  • [4] Hall, P. 1987. On Kullback-Leibler loss and density estimation. Ann. Statist. 15, 1491-1519.
  • [5] Hall, P. and Morton, S. 1993. On the estimation of entropy. Ann. Inst. Statist. Math. 45, 69-88.
  • [6] Joe, H. 1989. On the estimation of entropy and other functionals of a multivariate density. Ann. Inst. Statist. Math. 41, 683-697.
  • [7] Kozachenko, L. F. and Leonenko, N. N. 1987. Sample estimate of entropy of a random vector. Problems of Information Transmission, 23, 95-101.
  • [8] Laurent, B. 1996. Efficient estimation of integral functionals of a density. Ann. Statist. 24, 659-681.
  • [9] Laurent, B. 1997. Estimation of integral functionals of a density and its derivatives. Bernoulli 3, 181-211.
  • [10] Leonenko, N., Pronzato, L. and Savani, V. 2008. A class of Rényi information estimators for multidimensional densities. Ann. Statist. 36, 2153-2182. Corrections, Ann. Statist. 38 (2010), 3837-3838.
  • [11] Nair, C., Prabhakar, B. and Shah, D. 2006. On entropy for mixtures of discrete and continuous variables. arXiv:cs/0607075.
  • [12] Tsybakov, A. B. and van der Meulen, E. C. 1996. Root-n consistent estimators of entropy for densities with unbounded support. Scand. J. Statist. 23, 75-83.
  • [13] Vu, V. Q., Yu, B. and Kass, R. E. 2007. Coverage-adjusted entropy estimation. Statist. Med., 26, 4039-4060.