# Estimating mutual information in high dimensions via classification error

Multivariate pattern analysis approaches in neuroimaging are fundamentally concerned with investigating the quantity and type of information processed by various regions of the human brain; typically, estimates of classification accuracy are used to quantify information. While an extensive and powerful library of methods can be applied to train and assess classifiers, it is not always clear how to use the resulting measures of classification performance to draw scientific conclusions: e.g., for the purpose of evaluating redundancy between brain regions. An additional confound for interpreting classification performance is the dependence of the error rate on the number and choice of distinct classes obtained for the classification task. In contrast, mutual information is a quantity defined independently of the experimental design, and has ideal properties for comparative analyses. Unfortunately, estimating mutual information from observations becomes statistically infeasible in high dimensions without some kind of assumption or prior. In this paper, we construct a novel classification-based estimator of mutual information based on high-dimensional asymptotics. We show that in a particular limiting regime, the mutual information is an invertible function of the expected k-class Bayes error. While the theory is based on a large-sample, high-dimensional limit, we demonstrate through simulations that our proposed estimator has superior performance to the alternatives in problems of moderate dimensionality.


## 1 Introduction

A fundamental challenge of computational neuroscience is to understand how information about the external world is processed and represented in the brain. Each individual neuron aggregates the incoming information into a single sequence of spikes, an output which is too simplistic by itself to capture the full complexity of sensory input. Only by combining the signals from massive ensembles of neurons is it possible to reconstruct our complex representation of the world. Nevertheless, neurons form hierarchies of specialization within neural circuits, which are further organized in various specialized regions of the brain. At the lowest level of the hierarchy, that of individual neurons, it is possible to infer and interpret the functional relationship between a neuron and stimulus features of interest using single-cell recording technologies. Due to the inherent stochasticity of the neural output, it is natural to view the neuron as a noisy channel, and use mutual information to quantify how much of the stimulus information is encoded by the neuron. Moving up the hierarchy to the macroscale level of organization in the brain requires both different experimental methodologies and new approaches for summarizing and inferring measures of information in the brain.

Shannon’s mutual information $I(X;Y)$ is fundamentally a measure of dependence between random variables $X$ and $Y$, and is defined as

$$I(X;Y) = \int p(x,y)\,\log\frac{p(x,y)}{p(x)\,p(y)}\,dx\,dy.$$

Various properties of $I(X;Y)$ make it ideal for quantifying the information between a random stimulus and the signaling behavior of an ensemble of neurons [1]. A leading metaphor is that of a noisy communications channel; the mutual information describes the rate at which $Y$ can communicate bits from $X$. This framework is well-suited for summarizing the properties of a single neuron coding external stimulus information; indeed, experiments studying the properties of a single or a small number of neurons often make use of the concept of mutual information in summarizing or interpreting their results [2]. See discussions in [3]. However, estimating mutual information for multiple channels requires large and over-parameterized generative models.
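For intuition about the definition above, it can be checked numerically in a case where the answer is known in closed form: for a standard bivariate normal pair with correlation $\rho$, $I(X;Y) = -\tfrac{1}{2}\log(1-\rho^2)$ nats. The following sketch (grid quadrature; the grid size and truncation limits are arbitrary choices, not from the paper) compares the integral to the closed form.

```python
import numpy as np

def gaussian_mi(rho):
    """Closed-form I(X;Y) in nats for a standard bivariate normal
    pair with correlation rho."""
    return -0.5 * np.log(1.0 - rho ** 2)

def gaussian_mi_numeric(rho, grid=400, lim=6.0):
    """Evaluate I(X;Y) = ∫∫ p(x,y) log[p(x,y)/(p(x)p(y))] dx dy
    by Riemann summation on a square grid."""
    xs = np.linspace(-lim, lim, grid)
    dx = xs[1] - xs[0]
    X, Y = np.meshgrid(xs, xs)
    det = 1.0 - rho ** 2
    joint = np.exp(-(X ** 2 - 2 * rho * X * Y + Y ** 2) / (2 * det)) \
        / (2 * np.pi * np.sqrt(det))
    marg = np.exp(-xs ** 2 / 2) / np.sqrt(2 * np.pi)
    ratio = joint / np.outer(marg, marg)
    return float(np.sum(joint * np.log(ratio)) * dx * dx)
```

With $\rho = 0.5$ both evaluate to approximately $0.144$ nats; the point of the paper is precisely that this kind of direct numerical integration stops being feasible once $X$ and $Y$ are high-dimensional.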

Machine learning algorithms showed a way forward: a seminal work by Haxby [4] proposed to quantify the information in multiple channels by measuring how well the stimulus can be identified from the brain responses, in what is known as “multivariate pattern analysis” (MVPA). To demonstrate that a particular brain region responds to a certain type of sensory information, one employs supervised learning to build a classifier that predicts the stimulus class from the brain activation in that region. Classifiers that achieve above-chance classification accuracy indicate that information from the stimulus is represented in the brain region. In principle, one could just as well test the statistical hypothesis that the Fisher information or mutual information between the stimulus and the activation patterns is nonzero. But in practice, the machine learning approach enjoys several advantages: First, it is invariant to the parametric representation of the stimulus space, and is opportunistic in the parameterization of the response space. This is an important quality for naturalistic stimulus-spaces, such as faces or natural images. Second, it scales better with the dimensionality of both the stimulus space and the response space, because a slimmer discriminative model can be used rather than a fully generative model.

Nevertheless, classification error is problematic for quantifying the strength of the relation between stimulus and outputs due to its arbitrary scale and strong dependence on experimental choices. Classification accuracy depends on the particular choice of stimuli exemplars employed in the study and the number of partitions ($k$) used to define the classes for the classification task. The difficulty of the classification task depends on the number of classes defined: high classification accuracy can be achieved relatively easily by using a coarse partition of stimuli exemplars into classes. Often $k$ is an arbitrary design constraint, and researchers try to extrapolate the error for alternative numbers of classes [5]. In a meta-analysis on visual decoding, Coutanche et al (2016) [6] quantified the strength of a classification study using the formula

$$\text{decoding strength} = \frac{\text{accuracy} - \text{chance}}{\text{chance}}.$$

Such an approach may compensate for the differences in accuracy due purely to the choice of the number of classes defined; however, no theory is provided to justify the formula. In contrast, mutual information has ideal properties for quantitatively comparing information between different studies, or between different brain regions, subjects, feature-spaces, or modalities. Not only is the mutual information defined independently of the arbitrary definition of stimulus classes (albeit still dependent on an implied distribution over stimuli), it is even meaningful to discuss the difference between the mutual information measured for one system and the mutual information for a second system.

Hence, a popular approach which combines the strengths of the machine learning approach and the advantages of the information theoretic approach is to obtain a lower bound on the mutual information by using the confusion matrix of a classifier. Treves [7] first proposed using the empirical mutual information of the confusion matrix in order to obtain a lower bound on the mutual information $I(X;Y)$; this confusion-matrix-based lower bound has subsequently enjoyed widespread use in the MVPA literature [2]. Even earlier than this, the idea of linking classification performance to mutual information can be found in the beginnings of information theory. Fano’s inequality provides a lower bound on mutual information in relation to the optimal prediction error, or Bayes error. In practice, the bound obtained may be a vast underestimate [8].

### 1.1 Our contributions

In this paper, we propose a new way to link classification performance to the implied mutual information. To create this link we need to overcome the arbitrary choice of exemplars, and the arbitrary number of classes $k$. Towards this end, we define a notion of $k$-class average Bayes error which is uniquely defined for any given stimulus distribution and stochastic mapping from stimulus to response. The $k$-class average Bayes error is the expectation of the Bayes error (the classification error of the optimal classifier) when stimuli exemplars are drawn i.i.d. from the stimulus distribution, and treated as distinct classes. Hence the average Bayes error can in principle be estimated if the appropriate randomization is employed for designing the experiment.

Specifically, we establish a relationship between the mutual information and the average $k$-class Bayes error, $e_{ABE,k}$. In short, we will identify a function $\pi_k$ (which depends on $k$) such that

$$e_{ABE,k} \approx \pi_k\!\left(\sqrt{2\,I(X;Y)}\right), \tag{1}$$

and we show that this approximation becomes accurate under a limit where $I(X;Y)$ is small relative to the dimensionality of $Y$, and under the condition that the components of $Y$ are approximately independent. The function $\pi_k$ is given by

$$\pi_k(c) = 1 - \int_{\mathbb{R}} \phi(z - c)\,\Phi(z)^{k-1}\,dz,$$

where $\phi$ and $\Phi$ are the standard normal density and distribution functions.

This formula is not new to the information theory literature: it appears as the error rate of an orthogonal constellation [9]. What is surprising is that the same formula can be used to approximate the error rate in a much more general class of classification problems (an intuitive explanation for this fact is that points from any high-dimensional distribution lie in an approximately orthogonal configuration with high probability); this is precisely the universality result which provides the basis for our proposed estimator.
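For concreteness, $\pi_k$ is easy to evaluate numerically. The sketch below (Python with scipy; the quadrature grid and truncation are implementation choices, not from the paper) also exhibits the chance-level identity $\pi_k(0) = 1 - 1/k$.

```python
import numpy as np
from scipy.stats import norm

def pi_k(c, k, grid=4001, lim=12.0):
    """pi_k(c) = 1 - ∫ phi(z - c) Phi(z)^(k-1) dz, by quadrature.
    The integrand decays rapidly, so a truncated uniform grid suffices."""
    z = np.linspace(-lim, lim + c, grid)
    dz = z[1] - z[0]
    integrand = norm.pdf(z - c) * norm.cdf(z) ** (k - 1)
    return 1.0 - np.sum(integrand) * dz
```

For $k = 2$ one can verify the closed form $\pi_2(c) = 1 - \Phi(c/\sqrt{2})$ (the probability that one of two equally likely unit-variance hypotheses separated by $c$ is misclassified), which the quadrature reproduces.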

Figure 1 displays the plot of $\pi_k$ for several values of $k$. For all values of $k$, $\pi_k(c)$ is monotonically decreasing in $c$, and tends to zero as $c \to \infty$, which is what we expect since if $I(X;Y)$ is large, then the average Bayes error should be small. Another intuitive fact is that $\pi_k(0) = 1 - 1/k$, since after all, an uninformative response cannot lead to above-chance classification accuracy.

The estimator we propose is

$$\hat{I}_{HD} = \frac{1}{2}\left(\pi_k^{-1}\left(\hat{e}_{gen,\alpha}\right)\right)^2,$$

obtained by inverting the relation (1), then substituting an estimate of generalization error for the average Bayes error. As such, our estimator can be directly compared to the Fano bound, since both are functions of the error rate (Figure 1). As the estimate of generalization error goes to zero, $\hat{I}_{Fano}$ approaches $\log k$ while $\hat{I}_{HD}$ goes to infinity. This difference in behavior is due to the fact that in contrast to Fano’s inequality, the asymptotic relationship (1) is independent of the number of classes $k$.
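A minimal sketch of this inversion follows (assuming an error estimate is already in hand; the bisection bounds and quadrature settings are implementation choices):

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def pi_k(c, k, grid=4001, lim=12.0):
    """pi_k(c) = 1 - ∫ phi(z - c) Phi(z)^(k-1) dz, by quadrature."""
    z = np.linspace(-lim, lim + c, grid)
    dz = z[1] - z[0]
    return 1.0 - np.sum(norm.pdf(z - c) * norm.cdf(z) ** (k - 1)) * dz

def i_hd(err, k):
    """I_HD = (1/2) (pi_k^{-1}(err))^2, with the limiting behaviors
    noted in the text: infinity as err -> 0, and zero at chance."""
    if err <= 0.0:
        return float("inf")            # error 0: estimate diverges
    if err >= 1.0 - 1.0 / k:
        return 0.0                     # at or below chance: no information
    c = brentq(lambda c: pi_k(c, k) - err, 0.0, 40.0)
    return 0.5 * c ** 2
```

As a sanity check, for $k = 2$ the relation $\pi_2(c) = 1 - \Phi(c/\sqrt{2})$ means an observed error of $1 - \Phi(1) \approx 0.159$ should invert to $c = \sqrt{2}$, i.e. $\hat{I}_{HD} = 1$ nat.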

In the paper we argue for the advantages of our method in comparison to alternative discriminative estimators under the assumption that the discriminative model approximates the Bayes rule. While this is an unrealistic assumption, it simplifies the theoretical discussion, and allows us to clearly discuss the principles behind our method. Alternatively, we can take the view that any observed classification error is an upper bound on the Bayes error, therefore interpreting our result as establishing a usually tighter lower bound on $I(X;Y)$.

The organization of the paper is as follows. We outline our framework in Section 2.1. In Section 2.2 we present our key result, which links the asymptotic average Bayes error to the mutual information, under an asymptotic setting intended to capture the notion of high dimensionality: namely, one where the number of classes $k$ is fixed and the information $I(X;Y)$ remains fixed, while the dimensionality of the input and output both grow to infinity. We make a number of additional regularity conditions to rule out scenarios where $Y$ is really less “high-dimensional” than it appears, because most of the variation is captured by a low-dimensional manifold. In Section 2.3 we apply this result to derive our proposed estimator, $\hat{I}_{HD}$ (where HD stands for “high-dimensional”). Section 3 presents simulation results, and Section 4 concludes. All proofs are given in the supplement.

## 2 Theory

### 2.1 Setting

Let us assume that the variables $(X, Y)$ have a joint distribution $p(x, y)$, that one can define a conditional distribution $p(y \mid x)$ of $Y$ given $X$, and let $p(x)$ denote the marginal distribution of $X$. We assume that data is collected using stratified sampling. For $i = 1, \dots, k$, sample i.i.d. exemplars $X^{(i)} \sim p(x)$. For each observation, draw a class label $Z$ i.i.d. from the uniform distribution on $\{1, \dots, k\}$, then draw $Y$ from the conditional distribution $p(y \mid X^{(Z)})$.

Stratified sampling is commonly seen in controlled experiments, where an experimenter chooses an input $X$ to feed into a black box, which outputs $Y$. An example from fMRI studies is an experimental design where the subject is presented a stimulus $X$, and the experimenter measures the subject’s response via the brain activation $Y$. (Note the asymmetry in our definition of stratified sampling: our convention is to take $X$ to be the variable preceding $Y$ in causal order. Such causal directionality constrains the stratified sampling to have repeated $X$ rather than repeated $Y$ values, but has no consequence for the mutual information $I(X;Y)$, which is a symmetric function.)

When stratified sampling is employed, one can define an exemplar-based classification task. One defines the class function by

$$Z : \{X^{(1)}, \dots, X^{(k)}\} \to \{1, \dots, k\}, \qquad Z(X^{(i)}) = i \quad \text{for } i = 1, \dots, k.$$

One defines the generalization error by

$$e_{gen}(f) = \frac{1}{k}\sum_{i=1}^{k} \Pr\left[f(Y) \neq Z \mid X = X^{(i)}\right]. \tag{2}$$

In an exemplar-based classification, there is no need to specify an arbitrary partition on the input space (as is the case in category-based classification), but note that the classes are randomly defined. One consequence is that the Bayes error is a random variable: when the sampling produces similar exemplars, $e_{Bayes}$ will be higher, and when the sampling produces well-separated exemplars $e_{Bayes}$ may be lower. Therefore, it is useful to consider the average Bayes error,

$$e_{ABE,k} = \mathbb{E}_{X^{(1)},\dots,X^{(k)}}\left[e_{Bayes}\right], \tag{3}$$

where the expectation is taken over the joint distribution of $X^{(1)}, \dots, X^{(k)}$.

We use the terminology classifier to refer to any algorithm which takes data as input, and produces a classification rule as output. Mathematically speaking, the classifier is a functional which maps a set of observations to a classification rule. The data used to obtain the classification rule is called training data. When the goal is to obtain inference about the generalization error of the classification rule $f$, it becomes necessary to split the data into two independent sets: one set to train the classifier, and one to evaluate the performance. Such a splitting is necessary because using the same data to test and train a classifier introduces significant bias into the empirical classification error [10]. The classification rule $f$ is obtained by applying the classifier to the training set, and the performance of the classifier is evaluated by predicting the classes of the test set. The results of this test are summarized by a confusion matrix $M$: the $(i,j)$th entry of $M$ counts how many times an output in the $i$th class was classified to the $j$th class. The test error is the proportion of off-diagonal terms of $M$, and is an unbiased estimator of $e_{gen}(f)$. However, in small sampling regimes this quantity may be too variable to use as an estimator of $e_{gen}$. We recommend the use of Bayesian smoothing, defining an $\alpha$-smoothed estimate $\hat{e}_{gen,\alpha}$ which takes a weighted average of the unbiased estimate and the natural prior of chance classification.
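One such smoothing can be sketched as a plain convex combination of the raw test error and the chance rate $1 - 1/k$ (the exact weighting scheme intended by $\alpha$ is not reproduced here; the mixing rule below is an assumption for illustration):

```python
import numpy as np

def smoothed_error(confusion, alpha=0.1):
    """Alpha-smoothed test error from a k x k confusion matrix M.
    Assumed form: a convex combination of the raw off-diagonal error
    rate and the chance error 1 - 1/k."""
    M = np.asarray(confusion, dtype=float)
    k = M.shape[0]
    n = M.sum()
    raw = 1.0 - np.trace(M) / n      # unbiased test error
    chance = 1.0 - 1.0 / k           # error rate of random guessing
    return (1.0 - alpha) * raw + alpha * chance
```

Besides stabilizing small-sample estimates, the smoothing keeps the estimate strictly inside $(0, 1 - 1/k)$ whenever $0 < \alpha < 1$, which is convenient when the error estimate is subsequently plugged into an estimator that diverges at zero error.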

We define a discriminative estimator to be a function which maps the misclassification matrix to a positive number. We are aware of the following examples of discriminative estimators: (1) estimators derived from Fano’s inequality, and (2) the empirical information of the confusion matrix, $\hat{I}_{CM}$, as introduced by Treves [7]. We discuss these estimators in Section 3.

### 2.2 Universality result

We obtain the universality result in two steps. First, we link the average Bayes error to the moments of the statistics $Z_2, \dots, Z_k$ defined below. Secondly, we use a Taylor approximation in order to express the mutual information in terms of the moments of the $Z_i$. Connecting these two pieces yields the formula (1).

Let us start by rewriting the average Bayes error:

$$e_{ABE,k} = \Pr\left[p(Y \mid X^{(1)}) \leq \max_{j \neq 1}\, p(Y \mid X^{(j)}) \,\middle|\, X = X^{(1)}\right].$$

Defining the statistics $Z_i = \log p(Y \mid X^{(i)}) - \log p(Y \mid X^{(1)})$ for $i = 2, \dots, k$, where $Y$ is drawn from $p(y \mid X^{(1)})$, we obtain $e_{ABE,k} = \Pr[\max_{i \geq 2} Z_i \geq 0]$. The key assumption we need is that $(Z_2, \dots, Z_k)$ are asymptotically multivariate normal. If so, the following lemma allows us to obtain a formula for the misclassification rate.

Lemma 1. Suppose $(Z_1, \dots, Z_k)$ are jointly multivariate normal, with $\mathbb{E}[Z_1 - Z_i] = \alpha$, $\operatorname{Var}(Z_1) = \beta$, $\operatorname{Cov}(Z_1, Z_i) = \gamma$, $\operatorname{Var}(Z_i) = \delta$, and $\operatorname{Cov}(Z_i, Z_j) = \epsilon$ for all $i, j > 1$ such that $i \neq j$. Then, letting

$$\mu = \frac{\mathbb{E}[Z_1 - Z_i]}{\sqrt{\tfrac{1}{2}\operatorname{Var}(Z_i - Z_j)}} = \frac{\alpha}{\sqrt{\delta - \epsilon}},$$
$$\nu^2 = \frac{\operatorname{Cov}(Z_1 - Z_i,\, Z_1 - Z_j)}{\tfrac{1}{2}\operatorname{Var}(Z_i - Z_j)} = \frac{\beta + \epsilon - 2\gamma}{\delta - \epsilon},$$

we have

$$\Pr\left[Z_1 \leq \max_{i \geq 2} Z_i\right] = \Pr\left[M_{k-1} \geq \mu + \nu G\right],$$

where $G \sim N(0, 1)$ and $M_{k-1}$ is the maximum of $k - 1$ independent standard normal variates, which are independent of $G$.

To see why the assumption that $(Z_2, \dots, Z_k)$ are multivariate normal might be justified, suppose that $X$ and $Y$ have the same dimensionality $d$, and that the joint density factorizes as

$$p(x^{(j)}, y) = \prod_{i=1}^{d} p_i\!\left(x^{(j)}_i, y_i\right),$$

where $x^{(j)}_i$ and $y_i$ are the $i$th scalar components of the vectors $x^{(j)}$ and $y$. Then,

$$Z_i = \sum_{m=1}^{d} \log p_m\!\left(y_m \mid x^{(i)}_m\right) - \log p_m\!\left(y_m \mid x^{(1)}_m\right),$$

where $y_m$ is the $m$th component of $Y$. The terms are independent across the indices $m$, but dependent between the $Z_i$. Therefore, the multivariate central limit theorem can be applied to conclude that the vector $(Z_2, \dots, Z_k)$ can be scaled to converge to a multivariate normal distribution. While the componentwise independence condition is not a realistic assumption, the key property of multivariate normality of $(Z_2, \dots, Z_k)$ holds under more general conditions, and appears reasonable in practice.

It remains to link the moments of the $Z_i$ to $I(X;Y)$. This is accomplished by approximating the logarithmic term by the Taylor expansion

$$\log\frac{p(x,y)}{p(x)\,p(y)} \approx \frac{p(x,y) - p(x)p(y)}{p(x)p(y)} - \frac{1}{2}\left(\frac{p(x,y) - p(x)p(y)}{p(x)p(y)}\right)^2 + \cdots.$$

A number of assumptions are needed to ensure that the required approximations are sufficiently accurate; additionally, in order to apply the central limit theorem, we need to consider a limiting sequence of problems with increasing dimensionality. We now state the theorem.

Theorem 1. Let $p^{[d]}(x, y)$ be a sequence of joint densities for $(X, Y)$, indexed by the dimension $d$. Further assume that

• There exist sequences of scaling constants $a^{[d]}$ and $b^{[d]}$ such that the random vector $\left(a^{[d]} Z_2 + b^{[d]}, \dots, a^{[d]} Z_k + b^{[d]}\right)$ converges in distribution to a multivariate normal distribution, where the $Z_i$ are defined as above for independent exemplars $X^{(1)}, \dots, X^{(k)} \sim p^{[d]}(x)$.

• Define

$$u^{[d]}(x, y) = \log p^{[d]}(x, y) - \log p^{[d]}(x) - \log p^{[d]}(y).$$

There exist sequences of scaling constants $a^{[d]}$ and $b^{[d]}$, such that

$$a^{[d]}\, u^{[d]}\!\left(X^{(1)}, Y^{(2)}\right) + b^{[d]}$$

converges in distribution to a univariate normal distribution, where $X^{(1)} \sim p^{[d]}(x)$ and $Y^{(2)} \sim p^{[d]}(y)$ are independent.

• For all $i \neq i'$,

$$\lim_{d \to \infty} \operatorname{Cov}\left[u^{[d]}\!\left(X^{(i)}, Y^{(j)}\right),\; u^{[d]}\!\left(X^{(i')}, Y^{(j)}\right)\right] = 0.$$

Then, writing $\iota = \lim_{d \to \infty} I(X;Y)$ (assumed to exist), we have

$$\lim_{d \to \infty} e_{ABE,k} = \pi_k\!\left(\sqrt{2\iota}\right),$$

where

$$\pi_k(c) = 1 - \int_{\mathbb{R}} \phi(z - c)\,\Phi(z)^{k-1}\,dz,$$

and $\phi$ and $\Phi$ are the standard normal density function and cumulative distribution function, respectively.

Assumptions A1-A4 are satisfied in a variety of natural models. One example is a multivariate Gaussian sequence model where $X \sim N(0, \Sigma_X)$ and $Y = X + \epsilon$ with $\epsilon \sim N(0, \Sigma_\epsilon)$, where $\Sigma_X$ and $\Sigma_\epsilon$ are covariance matrices, and where $X$ and $\epsilon$ are independent. Then, if $\Sigma_X$ and $\Sigma_\epsilon$ have limiting spectra, the joint densities for $(X, Y)$ satisfy assumptions A1-A4. Another example is the multivariate logistic model, which we describe in Section 3. We further discuss the rationale behind A1-A4 in the supplement, along with the detailed proof.
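To illustrate the universality claim, the following Monte Carlo sketch checks relation (1) in the white-noise special case of the Gaussian sequence model ($\Sigma_X = s I_d$, $\Sigma_\epsilon = I_d$, so that $I(X;Y) = \tfrac{d}{2}\log(1+s)$); the dimension, trial count, and quadrature settings are arbitrary choices for the experiment:

```python
import numpy as np
from scipy.stats import norm

def pi_k(c, k, grid=4001, lim=12.0):
    """pi_k(c) = 1 - ∫ phi(z - c) Phi(z)^(k-1) dz, by quadrature."""
    z = np.linspace(-lim, lim + c, grid)
    dz = z[1] - z[0]
    return 1.0 - np.sum(norm.pdf(z - c) * norm.cdf(z) ** (k - 1)) * dz

def average_bayes_error_mc(info, k, d, trials, seed=0):
    """Monte Carlo estimate of the k-class average Bayes error in the model
    X ~ N(0, s I_d), Y = X + N(0, I_d), with s set so I(X;Y) = info nats."""
    rng = np.random.default_rng(seed)
    s = np.exp(2.0 * info / d) - 1.0
    errors = 0
    for _ in range(trials):
        X = rng.standard_normal((k, d)) * np.sqrt(s)   # k random exemplars
        y = X[0] + rng.standard_normal(d)              # response to class 1
        # Bayes rule for equal-covariance Gaussian classes: nearest exemplar
        guess = np.argmin(((X - y) ** 2).sum(axis=1))
        errors += (guess != 0)
    return errors / trials
```

With, say, $\iota = 1$ nat, $k = 4$, and $d = 1000$, the empirical average Bayes error lands close to $\pi_4(\sqrt{2})$, even though the exemplars are in no sense an orthogonal constellation by construction.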

### 2.3 High-dimensional estimator

As stated in the introduction, we propose the estimator

$$\hat{I}_{HD}(M) = \frac{1}{2}\left(\pi_k^{-1}\left(\hat{e}_{gen,\alpha}\right)\right)^2.$$

For sufficiently high-dimensional problems, $\hat{I}_{HD}$ can accurately recover $I(X;Y)$, supposing also that the classifier consistently estimates the Bayes rule. The number of observations needed depends on the convergence rate of the classifier and on the complexity of estimating $e_{gen}$. Without making assumptions on the classifier, the sample complexity is at least exponential in the mutual information: when $I(X;Y)$ is large, the Bayes error is exponentially small, and hence exponentially many observations in the test set are needed to recover $e_{gen}$ to sufficient precision. While a sample complexity exponential in $I(X;Y)$ is by no means ideal, by comparison, nonparametric estimation approaches have a complexity exponential in the dimensionality. Hence, $\hat{I}_{HD}$ is favored over nonparametric approaches in settings with high dimensionality and low signal-to-noise ratio.

## 3 Simulation

We compare the discriminative estimators $\hat{I}_{HD}$, $\hat{I}_{Fano}$, and $\hat{I}_{CM}$ with a nonparametric estimator and with a correctly specified parametric estimator in the following simulation. We generate data according to a multiple-response logistic regression model, where $X \in \mathbb{R}^p$ and $Y \in \{0, 1\}^q$ is a binary vector with conditional distribution

$$Y_i \mid X = x \sim \text{Bernoulli}\left(\sigma\!\left(x^T B_i\right)\right),$$

where $\sigma$ is the logistic function and $B_i$ is the $i$th column of a $p \times q$ coefficient matrix $B$. One application of this model might be modeling neural spike count data arising in response to environmental stimuli [12]. We choose naive Bayes for the classifier: it is consistent for estimating the Bayes rule.
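A data-generating sketch for this model is below; the Gaussian design for $X$ and the coefficient scale are illustrative assumptions, not the paper's exact simulation settings:

```python
import numpy as np

def simulate_logistic(n, d_x, d_y, scale=0.5, seed=0):
    """Draw (X, Y) from a multiple-response logistic model:
    X ~ N(0, I), Y_i | X = x ~ Bernoulli(sigmoid(x^T b_i)).
    (Design distribution and coefficient scale are assumptions.)"""
    rng = np.random.default_rng(seed)
    B = rng.standard_normal((d_x, d_y)) * scale / np.sqrt(d_x)
    X = rng.standard_normal((n, d_x))
    p = 1.0 / (1.0 + np.exp(-X @ B))     # per-response success probabilities
    Y = (rng.random((n, d_y)) < p).astype(int)
    return X, Y, B
```

Naive Bayes is a natural classifier here because, conditional on the exemplar, the responses $Y_1, \dots, Y_q$ are independent by construction, so the factorized likelihood is the true one.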

The estimator $\hat{I}_{Fano}$ is based on Fano’s inequality, which reads

$$H(Z \mid Y) \leq H(e_{Bayes}) + e_{Bayes}\,\log\left(|\mathcal{Z}| - 1\right),$$

where $H(p)$ denotes the entropy of a Bernoulli random variable with probability $p$, and $|\mathcal{Z}|$ is the number of classes. Replacing $e_{Bayes}$ with $\hat{e}_{gen,\alpha}$ and replacing $|\mathcal{Z}|$ with $k$, we get the estimator

$$\hat{I}_{Fano}(M) = \log k - \hat{e}_{gen,\alpha}\log(k - 1) + \hat{e}_{gen,\alpha}\log\hat{e}_{gen,\alpha} + \left(1 - \hat{e}_{gen,\alpha}\right)\log\left(1 - \hat{e}_{gen,\alpha}\right).$$
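The Fano estimator above is a one-line computation (working in nats; the clamping at zero and the error-zero limit $\log k$ follow from the formula):

```python
import numpy as np

def i_fano(err, k):
    """Fano-inequality estimator (nats):
    log k - H(err) - err * log(k - 1), clamped below at 0,
    where H is the binary entropy.  Requires 0 <= err < 1."""
    if err <= 0.0:
        return float(np.log(k))        # limit as the error goes to zero
    h = -err * np.log(err) - (1.0 - err) * np.log(1.0 - err)
    return max(0.0, float(np.log(k) - h - err * np.log(k - 1)))
```

Note that the estimate vanishes exactly at chance error $1 - 1/k$ and never exceeds $\log k$, which is the bounded behavior discussed below.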

Meanwhile, the confusion matrix estimator computes the empirical mutual information of the discrete joint distribution defined by the normalized confusion matrix,

$$\hat{I}_{CM}(M) = \sum_{i=1}^{k}\sum_{j=1}^{k} \frac{M_{ij}}{n}\,\log\frac{M_{ij}/n}{(r_i/n)(c_j/n)},$$

where $n$ is the total number of test observations and $r_i$ and $c_j$ denote the row and column sums of $M$.
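The standard plug-in version of this estimator (empirical mutual information of the normalized confusion matrix, in nats) can be sketched as:

```python
import numpy as np

def i_cm(confusion):
    """Plug-in mutual information (nats) of a confusion matrix,
    treating M / M.sum() as a joint pmf over (true, predicted)."""
    P = np.asarray(confusion, dtype=float)
    P = P / P.sum()
    r = P.sum(axis=1, keepdims=True)   # true-class marginals (rows)
    c = P.sum(axis=0, keepdims=True)   # predicted-class marginals (columns)
    mask = P > 0                        # 0 * log 0 = 0 convention
    return float(np.sum(P[mask] * np.log(P[mask] / (r @ c)[mask])))
```

A perfectly diagonal confusion matrix over $k$ classes saturates the estimator at its $\log k$ ceiling, while a uniform confusion matrix yields zero.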

It is known that $\hat{I}_{Fano}$ and $\hat{I}_{CM}$ tend to underestimate the mutual information. Quiroga et al. [2] discussed two sources of ‘information loss’ which lead to underestimating the mutual information: the discretization of the classes, and the error in approximating the Bayes rule. Meanwhile, Gastpar et al. [11] showed that $\hat{I}_{CM}$ is biased downwards due to undersampling of the exemplars; to counteract this bias, they introduce the anthropic correction estimator. (However, without a principled approach to choose the correction parameter, the anthropic correction could still vastly underestimate or overestimate the mutual information.)

In addition to the sources of information loss discussed by Quiroga et al., a further reason why $\hat{I}_{Fano}$ and $\hat{I}_{CM}$ underestimate the mutual information is that they are upper bounded by $\log k$, where $k$ is the number of classes. As $I(X;Y)$ exceeds $\log k$, these estimates can no longer approximate the mutual information, even up to a constant factor. In contrast, $\hat{I}_{HD}$ is unbounded and may either underestimate or overestimate the mutual information in general, but performs well when the high-dimensionality assumption is met.

In Figure 2 we show the sampling distributions of the five estimators as the true mutual information is varied. The parametric estimator is a plug-in estimator using the coefficient matrix estimated via multinomial regression of $Z$ on $Y$; it recovers the true mutual information to within a small margin with a probability of 90%. We see that the estimators bounded by $\log k$ indeed begin to asymptote as they approach that bound. In contrast, $\hat{I}_{HD}$ remains a good approximation of $I(X;Y)$ within the range, although it begins to overestimate at the right endpoint. The reason why $\hat{I}_{HD}$ loses accuracy as the true information increases is that the multivariate normality approximation used to derive the estimator becomes less accurate when the conditional distribution $p(y \mid x)$ becomes highly concentrated.

## 4 Discussion

Discriminative estimators of mutual information have the potential to estimate mutual information in high-dimensional data without resorting to fully parametric assumptions. However, a number of practical considerations also limit their usage. First, one has to find a good classifier for the data: techniques for model selection can be used to choose from a large library of methods. However, there is no way to guarantee how well the chosen classifier approximates the optimal classification rule. Secondly, one has to estimate the generalization error from test data: the complexity of estimating $e_{gen}$ could become the bottleneck when $e_{gen}$ is close to 0. Thirdly, for the previous estimators $\hat{I}_{Fano}$ and $\hat{I}_{CM}$, the ability of the estimator to distinguish high values of $I(X;Y)$ is limited by the number of classes $k$. Our estimator is subject to the first two limitations, along with any conceivable discriminative estimator, but overcomes the third limitation under the assumption of stratified sampling and high dimensionality.

It can be seen that additional assumptions are indeed needed to overcome the third limitation, the $\log k$ upper bound. Consider the following worst-case example: let $X$ and $Y$ have a joint density on the unit square. Under partition-based classification, if the partition is chosen so that the classes are perfectly separated, then no errors are made under the Bayes rule. We therefore have a joint distribution which maximizes any reasonable discriminative estimator but has finite information $I(X;Y)$. The consequence of this is that under partition-based classification, we cannot hope to distinguish distributions with $I(X;Y) > \log k$. The situation is more promising if we specialize to stratified sampling: in the same example, a Bayes error of zero is no longer likely, due to the possibility of exemplars being sampled from the same bin (‘collisions’); one can obtain an approximation to the average Bayes error through a Poisson model of such collisions. By specializing further to the high-dimensional regime, we obtain even tighter control on the relation between Bayes error and mutual information. Our estimator therefore provides more accurate estimation at the cost of additional assumptions, but just how restrictive are these assumptions?

The assumption of stratified sampling is usually not met in the most common applications of classification where the classes are defined a priori. For instance, if the classes consist of three different species of iris, it does not seem appropriate to model the three species as i.i.d. draws from some distribution on a space of infinitely many potential iris species. Yet, when the classes have been pre-defined in an arbitrary manner, the mutual information between a latent class-defining variable and $Y$ may be only weakly related to the classification accuracy. We rely on the stratified sampling assumption to obtain the necessary control on how the classes in the classification task are defined. Fortunately, in many applications where one is interested in estimating $I(X;Y)$, a stratified sampling design can be practically implemented.

The assumption of high dimensionality is not easy to check: having a high-dimensional response does not suffice, since even then $Y$ could still lie close to a low-dimensional manifold. In such cases, $\hat{I}_{HD}$ could either overestimate or underestimate the mutual information. In situations where the responses lie on a manifold, one could instead estimate mutual information by combining dimensionality reduction with nonparametric information estimation [13]. We suggest the following diagnostic to determine if our method is appropriate: subsample within the classes collected, and check that $\hat{I}_{HD}$ does not systematically increase or decrease with the number of classes $k$.
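The subsampling diagnostic can be sketched as follows. Nearest-centroid classification and Laplace smoothing of the error are illustrative stand-ins for the paper's classifier and $\alpha$-smoothing; `class_data` is a hypothetical list of per-class response arrays:

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def pi_k(c, k, grid=4001, lim=12.0):
    z = np.linspace(-lim, lim + c, grid)
    return 1.0 - np.sum(norm.pdf(z - c) * norm.cdf(z) ** (k - 1)) * (z[1] - z[0])

def i_hd(err, k):
    if err >= 1.0 - 1.0 / k:           # at or below chance
        return 0.0
    return 0.5 * brentq(lambda c: pi_k(c, k) - err, 0.0, 40.0) ** 2

def subsample_diagnostic(class_data, ks, n_rep=20, seed=0):
    """For each k' in ks: repeatedly draw k' classes, classify held-out
    points by nearest class centroid, and invert pi_{k'}.  A systematic
    drift of the estimate with k' suggests the high-dimensional
    approximation is violated."""
    rng = np.random.default_rng(seed)
    out = {}
    for k in ks:
        ests = []
        for _ in range(n_rep):
            idx = rng.choice(len(class_data), size=k, replace=False)
            cents = [class_data[i][::2].mean(axis=0) for i in idx]   # train half
            errs, total = 0, 0
            for j, i in enumerate(idx):
                for y in class_data[i][1::2]:                        # test half
                    guess = int(np.argmin([((y - c) ** 2).sum() for c in cents]))
                    errs += (guess != j)
                    total += 1
            ests.append(i_hd((errs + 1) / (total + 2), k))           # smoothed error
        out[k] = float(np.mean(ests))
    return out
```

If the estimates returned for the different values of $k'$ agree to within sampling noise, the $k$-independence promised by relation (1) is at least not visibly contradicted by the data.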

The assumption that the classifier approximates the Bayes rule is impractical to check, as any nonparametric estimate of the Bayes error requires exponentially many observations. Hence, while the present paper studies the ‘best-case’ scenario where the model is well-specified, it is even more important to understand the robustness of our method in the more realistic case where the model is misspecified. We leave this question to future work.

Even given a classifier which consistently estimates the Bayes error, the estimator can still be improved. One can employ more sophisticated methods to estimate $e_{gen}$: for example, extrapolating from learning curves [14]. Furthermore, depending on the risk function, one may debias or shrink the estimate $\hat{I}_{HD}$ to achieve a more favorable bias-variance tradeoff.

All of the necessary assumptions are met in our simulation experiment, hence our proposed estimator is seen to dramatically outperform existing estimators. It remains to assess the utility of our estimation procedure in a real-world example, where both the high-dimensional assumption and the model specification assumption are likely to be violated. In a forthcoming work, we apply our framework to evaluate visual encoding models in human fMRI data.

#### Acknowledgments

We thank John Duchi, Youngsuk Park, Qingyun Sun, Jonathan Taylor, Trevor Hastie, and Robert Tibshirani for useful discussions. CZ is supported by an NSF graduate research fellowship.

## References

[1] Borst, A. & Theunissen, F. E. (1999). “Information theory and neural coding” Nature Neurosci., vol. 2, pp. 947-957.

[2] Quiroga, R. Q., & Panzeri, S. (2009). “Extracting information from neuronal populations: information theory and decoding approaches”. Nature Reviews Neuroscience, 10(3), 173-185.

[3] Paninski L. , “Estimation of entropy and mutual information,” Neural Comput., vol. 15, no. 6, pp. 1191-1253, 2003.

[4] Haxby, J.V., et al. (2001). "Distributed and overlapping representations of faces and objects in ventral temporal cortex." Science 293.5539: 2425-2430.

[5] Kay, K. N., et al. “Identifying natural images from human brain activity.” Nature 452.7185 (2008): 352-355.

[6] Coutanche, M.N., Solomon, S.H., and Thompson-Schill S. L., “A meta-analysis of fMRI decoding: Quantifying influences on human visual population codes.” Neuropsychologia 82 (2016): 134-141.

[7] Treves, A. (1997). “On the perceptual structure of face space.” BioSystems, 40(1-2), 189-196.

[8] Beirlant, J., Dudewicz, E. J., Gyorfi, L., & der Meulen, E. C. (1997). “Nonparametric Entropy Estimation: An Overview.” International Journal of Mathematical and Statistical Sciences, 6, 17-40.

[9] Tse, D., & Viswanath, P. (2005). Fundamentals of Wireless Communication. Cambridge University Press.

[10] Friedman, J., Hastie, T., & Tibshirani, R. (2008). The Elements of Statistical Learning. Springer Series in Statistics. Springer, Berlin.

[11] Gastpar, M., Gill, P., Huth, A., & Theunissen, F. (2010). “Anthropic Correction of Information Estimates and Its Application to Neural Coding.” IEEE Trans. Info. Theory, Vol. 56, No. 2.

[12] Banerjee, A., Dean, H. L., & Pesaran, B. (2011). “Parametric models to relate spike train and LFP dynamics with neural information processing.” Frontiers in Computational Neuroscience 6: 51.

[13] Theunissen, F. E. & Miller, J.P. (1991). “Representation of sensory information in the cricket cercal sensory system. II. information theoretic calculation of system accuracy and optimal tuning-curve widths of four primary interneurons,” J. Neurophysiol., vol. 66, no. 5, pp. 1690-1703.

[14] Cortes, C., et al. (1994). “Learning curves: Asymptotic values and rate of convergence.” Advances in Neural Information Processing Systems.

## 5 Appendix

Lemma 1. Suppose $(Z_1, \dots, Z_k)$ are jointly multivariate normal, with $\mathbb{E}[Z_1 - Z_i] = \alpha$, $\operatorname{Var}(Z_1) = \beta$, $\operatorname{Cov}(Z_1, Z_i) = \gamma$, $\operatorname{Var}(Z_i) = \delta$, and $\operatorname{Cov}(Z_i, Z_j) = \epsilon$ for all $i, j > 1$ such that $i \neq j$. Then, letting

$$\mu = \frac{\mathbb{E}[Z_1 - Z_i]}{\sqrt{\tfrac{1}{2}\operatorname{Var}(Z_i - Z_j)}} = \frac{\alpha}{\sqrt{\delta - \epsilon}},$$
$$\nu^2 = \frac{\operatorname{Cov}(Z_1 - Z_i,\, Z_1 - Z_j)}{\tfrac{1}{2}\operatorname{Var}(Z_i - Z_j)} = \frac{\beta + \epsilon - 2\gamma}{\delta - \epsilon},$$

we have

$$\Pr\left[Z_1 \leq \max_{i \geq 2} Z_i\right] = \Pr\left[M_{k-1} \geq \mu + \nu G\right],$$

where $G \sim N(0, 1)$ and $M_{k-1}$ is the maximum of $k - 1$ independent standard normal variates, which are independent of $G$.

Proof. We can construct independent normal variates $G_1, \dots, G_k$ such that

$$G_1 \sim N(0, \beta + \epsilon - 2\gamma), \qquad G_i \sim N(0, \delta - \epsilon) \quad \text{for } i > 1,$$

and such that

$$Z_1 - Z_i = \alpha + G_1 + G_i \quad \text{for } i > 1.$$

Hence

$$\Pr\left[Z_1 \leq \max_{i \geq 2} Z_i\right] = \Pr\left[\min_{i=2,\dots,k}(Z_1 - Z_i) \leq 0\right] = \Pr\left[\min_{i=2,\dots,k}(G_1 + G_i + \alpha) \leq 0\right]$$
$$= \Pr\left[\min_{i=2,\dots,k} G_i \leq -\alpha - G_1\right] = \Pr\left[\min_{i=2,\dots,k}\frac{G_i}{\sqrt{\delta - \epsilon}} \leq \frac{-\alpha - G_1}{\sqrt{\delta - \epsilon}}\right].$$

Since the $G_i/\sqrt{\delta - \epsilon}$ for $i \geq 2$ are i.i.d. standard normal variates, and since $(\alpha + G_1)/\sqrt{\delta - \epsilon} \sim N(\mu, \nu^2)$ for $\mu$ and $\nu$ given in the statement of the Lemma, the proof is completed via a straightforward computation.

Theorem 1. Let p^[d] be a sequence of joint densities for (X, Y) as given above, and write u(x, y) = log[p(x, y) / (p(x) p(y))] for the information density. Further assume that

• (A2) There exist sequences of scaling constants a^[d] and b^[d] such that the random vector (a^[d] u(X^(1), Y^(1)) + b^[d], …, a^[d] u(X^(k), Y^(1)) + b^[d]) converges in distribution to a multivariate normal distribution.

• (A3) There exist sequences of scaling constants a^[d] and b^[d] such that

 a^[d] u(X^(1), Y^(2)) + b^[d]

converges in distribution to a univariate normal distribution.

• (A4) For all i ≠ k,

 lim_{d→∞} Cov[u(X^(i), Y^(j)), u(X^(k), Y^(j))] = 0.

Then for ι = lim_{d→∞} I(X; Y) and e_{ABE,k} as defined above, we have

 lim_{d→∞} e_{ABE,k} = π_k(√(2ι)),

where

 π_k(c) = 1 − ∫_R φ(z − c) Φ(z)^{k−1} dz,

and where φ and Φ are the standard normal density function and cumulative distribution function, respectively.
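Since π_k(c) is strictly decreasing in c, the limiting relation e_{ABE,k} ≈ π_k(√(2ι)) can be inverted numerically to recover ι from an estimated k-class Bayes error. A minimal sketch (the function names and the bracketing interval are our own illustrative choices):

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq
from scipy.stats import norm

def pi_k(c, k):
    """Limiting k-class Bayes error pi_k(c) = 1 - int phi(z - c) Phi(z)^(k-1) dz."""
    integrand = lambda z: norm.pdf(z - c) * norm.cdf(z) ** (k - 1)
    # phi(z - c) puts essentially all mass in [c - 10, c + 10].
    val, _ = quad(integrand, c - 10.0, c + 10.0)
    return 1.0 - val

def mi_from_bayes_error(err, k):
    """Solve err = pi_k(sqrt(2 * iota), k) for iota; pi_k is decreasing in c."""
    c = brentq(lambda c: pi_k(c, k) - err, 0.0, 10.0)
    return 0.5 * c ** 2

# Sanity checks: at c = 0 the classes are indistinguishable, so the error
# equals the chance level (k - 1) / k; large c drives the error to zero.
print(pi_k(0.0, 4))                                        # ~0.75
print(mi_from_bayes_error(pi_k(np.sqrt(2 * 1.5), 4), 4))   # recovers iota ~ 1.5
```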

Proof.

For i = 2, …, k, define

 Z_i = log p(Y^(1) | X^(i)) − log p(Y^(1) | X^(1)).

Then, we claim that (Z_2, …, Z_k) converges in distribution to

 Z ∼ N( −2ι·1, 2ι (I_{k−1} + 1 1ᵀ) ),

that is, a multivariate normal with every mean equal to −2ι, every variance equal to 4ι, and every pairwise covariance equal to 2ι. Combining the claim with Lemma 1 (applied with Z_1 ≡ 0, so that α = 2ι, β = γ = 0, δ = 4ι, and ε = 2ι, giving μ = √(2ι) and ν = 1) yields the desired result.

To prove the claim, it suffices to derive the limiting moments

 E[Z_i] → −2ι,
 Var[Z_i] → 4ι,
 Cov[Z_i, Z_j] → 2ι,

for i ≠ j, since then assumption A2 implies the existence of a multivariate normal limiting distribution with the given moments.

Before deriving the limiting moments, note the following identities. Let X′ and Y′ denote draws from the marginal distributions of X and Y, independent of each other and of the pair (X, Y). Then

 E[e^{u(X′, Y)}] = ∫ p(x) p(y) e^{u(x, y)} dx dy = ∫ p(x, y) dx dy = 1.

Therefore, from assumption A3 and the formula for Gaussian exponential moments (E[e^W] = exp(E[W] + ½ Var[W]) for Gaussian W), we have

 lim_{d→∞} E[u(X′, Y)] + ½ Var[u(X′, Y)] = 0.

Let σ² = lim_{d→∞} Var[u(X′, Y)]. Meanwhile, by applying assumption A2,

 lim_{d→∞} I(X; Y)
  = lim_{d→∞} ∫ p(x, y) u(x, y) dx dy
  = lim_{d→∞} ∫ p(x) p(y) e^{u(x, y)} u(x, y) dx dy
  = lim_{d→∞} E[e^{u(X, Y′)} u(X, Y′)]
  = ∫_R e^z z (1/√(2πσ²)) e^{−(z + σ²/2)² / (2σ²)} dz  (applying A2)
  = ∫_R z (1/√(2πσ²)) e^{−(z − σ²/2)² / (2σ²)} dz
  = ½ σ².

Therefore,

 σ² = 2ι,

and

 lim_{d→∞} E[u(X′, Y)] = −ι.

Once again by applying A2, we get

 lim_{d→∞} Var[u(X, Y)]
  = lim_{d→∞} ∫ (u(x, y) − ι)² p(x, y) dx dy
  = lim_{d→∞} ∫ (u(x, y) − ι)² e^{u(x, y)} p(x) p(y) dx dy
  = lim_{d→∞} E[(u(X′, Y) − ι)² e^{u(X′, Y)}]
  = ∫ (z − ι)² e^z (1/√(4πι)) e^{−(z + ι)² / (4ι)} dz  (applying A2)
  = ∫ (z − ι)² (1/√(4πι)) e^{−(z − ι)² / (4ι)} dz
  = 2ι.

We now proceed to derive the limiting moments. We have

 lim_{d→∞} E[Z_i]
  = lim_{d→∞} E[log p(Y | X′) − log p(Y | X)]
  = lim_{d→∞} E[u(X′, Y) − u(X, Y)] = −ι − ι = −2ι.

Also,

 lim_{d→∞} Var[Z_i]
  = lim_{d→∞} Var[u(X′, Y) − u(X, Y)]
  = lim_{d→∞} Var[u(X′, Y)] + Var[u(X, Y)]  (using assumption A4)
  = 4ι,

and similarly, for i ≠ j,

 lim_{d→∞} Cov[Z_i, Z_j]
  = lim_{d→∞} Var[u(X, Y)]  (using assumption A4)
  = 2ι.

This concludes the proof. ∎

### 5.1 Assumptions of Theorem 1

Assumptions A1-A4 are satisfied in a variety of natural models. One example is a multivariate Gaussian model where

 X ∼ N(0, Σ_d),
 E ∼ N(0, Σ_e),
 Y = X + E,

where Σ_d and Σ_e are covariance matrices, and where X and E are independent. Then, if Σ_d and Σ_e have limiting spectra, the joint densities p^[d] for (X, Y) satisfy assumptions A1-A4.
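For this Gaussian model, the mutual information has the familiar closed form I(X; Y) = ½ [log det(Σ_d + Σ_e) − log det Σ_e], which makes it a convenient test bed for estimators. A sketch, with illustrative covariance choices:

```python
import numpy as np

def gaussian_mi(sigma_x, sigma_e):
    """I(X; Y) in nats for Y = X + E, X ~ N(0, sigma_x), E ~ N(0, sigma_e):
    I(X; Y) = h(Y) - h(E) = 0.5 * [logdet(sigma_x + sigma_e) - logdet(sigma_e)]."""
    _, ld_y = np.linalg.slogdet(sigma_x + sigma_e)
    _, ld_e = np.linalg.slogdet(sigma_e)
    return 0.5 * (ld_y - ld_e)

d = 50
rng = np.random.default_rng(1)
A = rng.standard_normal((d, d))
sigma_x = A @ A.T / d   # illustrative signal covariance with a limiting spectrum
sigma_e = np.eye(d)     # white noise

iota = gaussian_mi(sigma_x, sigma_e)
print(iota)
```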

We can also construct a family of densities satisfying A1-A4, which we call an exponential family sequence model, since each joint distribution in the sequence is a member of an exponential family. A given exponential family sequence model is specified by the choice of a base carrier function b(x, y) and a base sufficient statistic t(x, y), with the property that the carrier function factorizes as

 b(x, y) = b_x(x) b_y(y)

for marginal densities b_x and b_y. Note that the dimensions of x and y in the base carrier function are arbitrary. Next, one specifies a sequence of scalar parameters κ_d such that

 lim_{d→∞} d κ_d = c < ∞

for some constant c. For the d-th element of the sequence, X^[d] is a vector which can be partitioned into d blocks

 X^[d] = (X^[d]_1, …, X^[d]_d),

where each X^[d]_i has the dimension of the base carrier function's first argument. Similarly, Y^[d] is partitioned into (Y^[d]_1, …, Y^[d]_d). The density of (X^[d], Y^[d]) is given by

 p^[d](x^[d], y^[d]) = Z_d^{−1} ( ∏_{i=1}^d b(x^[d]_i, y^[d]_i) ) exp[ κ_d ∑_{i=1}^d t(x^[d]_i, y^[d]_i) ],

where Z_d is a normalizing constant. Hence p^[d] can be recognized as a member of an exponential family with carrier measure

 ∏_{i=1}^d b(x^[d]_i, y^[d]_i)

and sufficient statistic

 ∑_{i=1}^d t(x^[d]_i, y^[d]_i).
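In code, the unnormalized log-density of a sequence model is simply a sum over blocks. The sketch below evaluates it for a Gaussian base pair (b_x = b_y = standard normal density, t(x, y) = xy); the function name and the choice of κ_d are our own illustrative assumptions:

```python
import numpy as np

def log_density_unnorm(x_blocks, y_blocks, log_b, t, kappa_d):
    """Unnormalized log of p^[d](x, y) = Z_d^{-1} * prod_i b(x_i, y_i)
    * exp(kappa_d * sum_i t(x_i, y_i)); the log normalizer log Z_d is omitted."""
    return sum(log_b(x, y) for x, y in zip(x_blocks, y_blocks)) + \
        kappa_d * sum(t(x, y) for x, y in zip(x_blocks, y_blocks))

# Gaussian base pair: b_x(x) = b_y(y) = standard normal density, t(x, y) = x * y.
log_b = lambda x, y: -0.5 * (x ** 2 + y ** 2) - np.log(2 * np.pi)
t = lambda x, y: x * y

d = 100
kappa_d = 1.0 / d  # illustrative choice keeping d * kappa_d bounded, as in the text
rng = np.random.default_rng(4)
x, y = rng.standard_normal(d), rng.standard_normal(d)
print(log_density_unnorm(x, y, log_b, t, kappa_d))
```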

One example of such an exponential family sequence model is the multivariate Gaussian model above, scaled so that the marginal variances of the components of X and Y are equal to one. This corresponds to an exponential family sequence model with

 b_x(x) = b_y(x) = (1/√(2π)) e^{−x²/2}

and

 t(x, y) = xy.

Another example is a multivariate logistic regression model, given by

 X ∼ N(0, I),
 Y_i ∼ Bernoulli( e^{βX_i} / (1 + e^{βX_i}) ).

This corresponds to an exponential family sequence model with

 b_x(x) = (1/√(2π)) e^{−x²/2},
 b_y(y) = 1/2 for y ∈ {0, 1},

and

 t(x, y) = x δ_1(y) − x δ_0(y).

The multivariate logistic regression model (and the multivariate Poisson regression model) are especially suitable for modeling neural spike count data; we simulate data from such a multivariate logistic regression model in section X.
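Simulating from the multivariate logistic regression model is straightforward; the sketch below draws binary responses from the stated Bernoulli probabilities (the sample size and the value of β are illustrative only):

```python
import numpy as np

def simulate_logistic(n, d, beta, rng):
    """Draw n samples of X ~ N(0, I_d) and Y_i ~ Bernoulli(sigmoid(beta * X_i))."""
    X = rng.standard_normal((n, d))
    p = 1.0 / (1.0 + np.exp(-beta * X))       # e^{beta x} / (1 + e^{beta x})
    Y = rng.uniform(size=(n, d)) < p
    return X, Y.astype(int)

rng = np.random.default_rng(2)
n, d = 1000, 100
# beta here is purely illustrative; it plays the role of the coupling strength.
X, Y = simulate_logistic(n, d, beta=2.0 / np.sqrt(d), rng=rng)
print(X.shape, Y.shape, Y.mean())  # Y.mean() ~ 0.5 by symmetry
```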

A third example is a multiple-response logistic regression model, given by

 X ∼ N(0, I_p),
 Y ∈ {0, 1}^q,
 Y_i | X = x ∼ Bernoulli( e^{xᵀB_i} / (1 + e^{xᵀB_i}) ),

where B is a q × p matrix and B_i denotes its i-th row.
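A quick simulation of the multiple-response model (the dimensions and the Gaussian draw for B are illustrative assumptions, not prescribed by the model):

```python
import numpy as np

rng = np.random.default_rng(3)
p, q, n = 20, 5, 1000

# B is a q x p coefficient matrix; Gaussian entries scaled so x^T B_i is O(1).
B = rng.standard_normal((q, p)) / np.sqrt(p)

X = rng.standard_normal((n, p))                      # X ~ N(0, I_p)
probs = 1.0 / (1.0 + np.exp(-X @ B.T))               # sigmoid(x^T B_i), shape (n, q)
Y = (rng.uniform(size=(n, q)) < probs).astype(int)   # Y in {0, 1}^q
print(Y.shape)
```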
