1 Introduction
A fundamental challenge of computational neuroscience is to understand how information about the external world is processed and represented in the brain. Each individual neuron aggregates its incoming information into a single sequence of spikes, an output too simplistic by itself to capture the full complexity of sensory input. Only by combining the signals from massive ensembles of neurons is it possible to reconstruct our complex representation of the world. Nevertheless, neurons form hierarchies of specialization within neural circuits, which are further organized into various specialized regions of the brain. At the lowest level of the hierarchy, the individual neuron, it is possible to infer and interpret the functional relationship between a neuron and stimulus features of interest using single-cell recording technologies. Due to the inherent stochasticity of the neural output, it is natural to view the neuron as a noisy channel, and to use mutual information to quantify how much of the stimulus information is encoded by the neuron. Moving up the hierarchy to the macroscale level of organization in the brain requires both different experimental methodologies and new approaches for summarizing and inferring measures of information in the brain.
Shannon's mutual information $I(X;Y)$ is fundamentally a measure of dependence between random variables $X$ and $Y$, and is defined as
$$I(X;Y) = \int p_{X,Y}(x,y) \log \frac{p_{X,Y}(x,y)}{p_X(x)\,p_Y(y)}\,dx\,dy.$$
Various properties of $I(X;Y)$ make it ideal for quantifying the information between a random stimulus $X$ and the signaling behavior $Y$ of an ensemble of neurons [1]. A leading metaphor is that of a noisy communications channel; the mutual information describes the rate at which the channel from $X$ to $Y$ can communicate bits. This framework is well-suited for summarizing the properties of a single neuron coding external stimulus information; indeed, experiments studying the properties of a single neuron or a small number of neurons often make use of the concept of mutual information in summarizing or interpreting their results [2]; see discussions in [3]. However, estimating mutual information for multiple channels requires large and overparameterized generative models.
Machine learning algorithms showed a way forward: the seminal work of Haxby et al. [4] proposed to quantify the information in multiple channels by measuring how well the stimulus can be identified from the brain responses, in what is known as "multivariate pattern analysis" (MVPA). To demonstrate that a particular brain region responds to a certain type of sensory information, one employs supervised learning to build a classifier that predicts the stimulus class from the brain activation in that region. A classifier that achieves above-chance classification accuracy indicates that information from the stimulus is represented in the brain region. In principle, one could just as well test the statistical hypothesis that the Fisher information or mutual information between the stimulus and the activation patterns is nonzero. But in practice, the machine learning approach enjoys several advantages. First, it is invariant to the parametric representation of the stimulus space, and is opportunistic in the parameterization of the response space; this is an important quality for naturalistic stimulus spaces, such as faces or natural images. Second, it scales better with the dimensionality of both the stimulus space and the response space, because a slimmer discriminative model can be used rather than a fully generative model.
Nevertheless, classification error is problematic for quantifying the strength of the relation between stimulus and outputs, due to its arbitrary scale and strong dependence on experimental choices. Classification accuracy depends on the particular choice of stimulus exemplars employed in the study and the number of partitions $k$ used to define the classes for the classification task. The difficulty of the classification task depends on the number of classes defined: high classification accuracy can be achieved relatively easily by using a coarse partition of stimulus exemplars into classes. Often $k$ is an arbitrary design constraint, and researchers try to extrapolate the error to alternative numbers of classes [5]. In a meta-analysis on visual decoding, Coutanche et al. (2016) [6] quantified the strength of a classification study using a formula that rescales the reported accuracy by the number of classes.
Such an approach may compensate for differences in accuracy due purely to the choice of the number of classes; however, no theory is provided to justify the formula. In contrast, mutual information has ideal properties for quantitatively comparing information between different studies, or between different brain regions, subjects, feature spaces, or modalities. Not only is the mutual information defined independently of the arbitrary definition of stimulus classes (albeit still dependent on an implied distribution over stimuli), it is even meaningful to discuss the difference between the mutual information measured for one system and the mutual information for a second system.
Hence, a popular approach which combines the strengths of the machine learning approach with the advantages of the information-theoretic approach is to obtain a lower bound on the mutual information using the confusion matrix of a classifier. Treves [7] first proposed using the empirical mutual information of the confusion matrix in order to obtain a lower bound on the mutual information $I(X;Y)$; this confusion-matrix-based lower bound has subsequently enjoyed widespread use in the MVPA literature [2]. Even earlier than this, the idea of linking classification performance to mutual information can be found in the beginnings of information theory: Fano's inequality provides a lower bound on mutual information in relation to the optimal prediction error, or Bayes error. In practice, the bound obtained may be a vast underestimate [8].
1.1 Our contributions
In this paper, we propose a new way to link classification performance to the implied mutual information. To create this link we need to overcome the arbitrary choice of exemplars, and the arbitrary number of classes $k$. Towards this end, we define a notion of average Bayes error which is uniquely defined for any given stimulus distribution and stochastic mapping from stimulus to response. The average Bayes error is the expectation of the Bayes error (the classification error of the optimal classifier) when stimulus exemplars are drawn i.i.d. from the stimulus distribution and treated as distinct classes. Hence the average Bayes error can in principle be estimated if the appropriate randomization is employed in designing the experiment.
Specifically, we establish a relationship between the mutual information $I = I(X;Y)$ and the average Bayes error $\bar{B}$. In short, we will identify a function $h$ (which depends on $k$) such that
$$\bar{B} \approx h(I), \qquad (1)$$
and show that this approximation becomes accurate under a limit where $I$ is small relative to the dimensionality of the problem, and under the condition that the components of $(X, Y)$ are approximately independent. The function $h$ is given by
$$h(I) = 1 - \int_{-\infty}^{\infty} \phi\big(z - \sqrt{2I}\big)\,\Phi(z)^{k-1}\,dz,$$
where $\phi$ and $\Phi$ denote the standard normal density and cumulative distribution function.
This formula is not new to the information theory literature: it appears as the error rate of an orthogonal constellation [9]. What is surprising is that the same formula can be used to approximate the error rate in a much more general class of classification problems;¹ this is precisely the universality result which provides the basis for our proposed estimator.

¹An intuitive explanation for this fact is that points from any high-dimensional distribution lie in an orthogonal configuration with high probability.

Figure 1 displays the plot of $h$ for several values of $k$. For all values of $k$, $h$ is monotonically decreasing in $I$, and tends to zero as $I \to \infty$, which is what we expect, since if $I$ is large then the average Bayes error should be small. Another intuitive fact is that $h(0) = 1 - 1/k$: after all, an uninformative response cannot lead to above-chance classification accuracy.
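Numerically, $h$ can be evaluated by one-dimensional quadrature. The sketch below assumes the orthogonal-constellation form of $h$ stated above and checks the two intuitive facts just mentioned ($h$ decreasing in $I$, and $h(0) = 1 - 1/k$); the integration grid is an arbitrary choice:

```python
import numpy as np
from math import erf

def h(I, k, grid=None):
    """Approximate h(I) = 1 - ∫ φ(z - √(2I)) Φ(z)^(k-1) dz (I in nats) by
    trapezoidal quadrature. Form assumed from the orthogonal-constellation
    analogy; not the only possible parameterization."""
    z = np.linspace(-12.0, 12.0, 20001) if grid is None else grid
    Phi = 0.5 * (1.0 + np.vectorize(erf)(z / np.sqrt(2.0)))   # standard normal CDF
    phi = np.exp(-0.5 * (z - np.sqrt(2.0 * I)) ** 2) / np.sqrt(2.0 * np.pi)
    f = phi * Phi ** (k - 1)
    return 1.0 - float(((f[:-1] + f[1:]) / 2 * np.diff(z)).sum())

print(round(h(0.0, 4), 4))  # chance error for k = 4 → 0.75
```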
The estimator we propose, $\hat{I}_{HD}$, is obtained by inverting the relation (1), then substituting an estimate of the generalization error for the average Bayes error $\bar{B}$. As such, our estimator can be directly compared to the Fano bound, since both are functions of the estimated error (Figure 1). As the estimate of generalization error goes to zero, the Fano bound approaches $\log k$ while $\hat{I}_{HD}$ goes to infinity. This difference in behavior is due to the fact that, in contrast to Fano's inequality, the asymptotic relationship (1) is independent of the number of classes $k$.
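Operationally, the estimator reduces to a one-dimensional root search. The sketch below inverts our reconstruction of $h$ by bisection, exploiting its monotonicity; the search interval and tolerance are arbitrary choices:

```python
import numpy as np
from math import erf

def h(I, k):
    """Trapezoidal quadrature of the assumed orthogonal-constellation form."""
    z = np.linspace(-12.0, 12.0, 20001)
    Phi = 0.5 * (1.0 + np.vectorize(erf)(z / np.sqrt(2.0)))
    phi = np.exp(-0.5 * (z - np.sqrt(2.0 * I)) ** 2) / np.sqrt(2.0 * np.pi)
    f = phi * Phi ** (k - 1)
    return 1.0 - float(((f[:-1] + f[1:]) / 2 * np.diff(z)).sum())

def I_hd(err_hat, k, I_max=50.0, tol=1e-8):
    """Invert h by bisection (h is monotone decreasing in I).
    err_hat should be a positive error estimate; err_hat = 0 would send
    the estimate to the top of the search interval."""
    lo, hi = 0.0, I_max
    if err_hat >= h(0.0, k):
        return 0.0            # at or below chance accuracy
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if h(mid, k) > err_hat:
            lo = mid          # error too large: need more information
        else:
            hi = mid
    return (lo + hi) / 2
```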
In this paper we argue for the advantages of our method in comparison to alternative discriminative estimators under the assumption that the discriminative model approximates the Bayes rule. While this is an unrealistic assumption, it simplifies the theoretical discussion and allows us to clearly lay out the principles behind our method. Alternatively, we can take the view that any observed classification error is an upper bound on the Bayes error, and therefore interpret our result as establishing a usually tighter lower bound on $I(X;Y)$.
The organization of the paper is as follows. We outline our framework in Section 2.1. In Section 2.2 we present our key result, which links the asymptotic average Bayes error to the mutual information, under an asymptotic setting intended to capture the notion of high dimensionality.² In Section 2.3 we apply this result to derive our proposed estimator $\hat{I}_{HD}$ (where HD stands for "high-dimensional"). Section 3 presents simulation results, and Section 4 concludes. All proofs are given in the supplement.

²Namely, one where the number of classes is fixed, and where the information remains fixed, while the dimensionality of the input and output both grow to infinity. We make a number of additional regularity assumptions to rule out scenarios where the data are really less "high-dimensional" than they appear, because most of the variation is captured by a low-dimensional manifold.
2 Theory
2.1 Setting
Let us assume that the variables $X$ and $Y$ have a joint distribution with a well-defined conditional distribution of $Y$ given $X$, and let $p_X$ denote the marginal distribution of $X$. We assume that data is collected using stratified sampling: first, sample $k$ i.i.d. exemplars $X^{(1)}, \dots, X^{(k)}$ from $p_X$; then, for each observation, draw a class label $C$ i.i.d. from the uniform distribution on $\{1, \dots, k\}$, and draw the response $Y$ from the conditional distribution of $Y$ given $X = X^{(C)}$. Stratified sampling is commonly seen in controlled experiments, where an experimenter chooses an input $X$ to feed into a black box, which outputs $Y$. An example from fMRI studies is an experimental design where the subject is presented a stimulus $X$, and the experimenter measures the subject's response via the brain activation $Y$.³

³Note the asymmetry in our definition of stratified sampling: our convention is to take $X$ to be the variable preceding $Y$ in causal order. Such causal directionality constrains the stratified sampling to have repeated $X$ values rather than repeated $Y$ values, but has no consequence for the mutual information $I(X;Y)$, which is a symmetric function of $X$ and $Y$.
When stratified sampling is employed, one can define an exemplar-based classification task, in which each response is assigned the index of the exemplar that generated it as its class. One defines the generalization error of a classification rule $\hat{c}$ by
$$e(\hat{c}) = \Pr[\hat{c}(Y) \neq C]. \qquad (2)$$
In exemplar-based classification, there is no need to specify an arbitrary partition on the input space (as is the case in category-based classification), but note that the classes are randomly defined. One consequence is that the Bayes error $B$ (the generalization error of the optimal classification rule) is a random variable: when the sampling produces similar exemplars, $B$ will be higher, and when the sampling produces well-separated exemplars, $B$ may be lower. Therefore, it is useful to consider the average Bayes error,
$$\bar{B} = \mathbb{E}[B], \qquad (3)$$
where the expectation is taken over the joint distribution of the exemplars $X^{(1)}, \dots, X^{(k)}$.
We use the term classifier to refer to any algorithm which takes data as input and produces a classification rule as output. Mathematically speaking, the classifier is a functional which maps a set of observations to a classification rule. The data used to obtain the classification rule is called training data. When the goal is to obtain inference about the generalization error of the resulting classification rule, it becomes necessary to split the data into two independent sets: one set to train the classifier, and one to evaluate its performance. Such a splitting is necessary because using the same data to train and test a classifier introduces significant bias into the empirical classification error [10]. The classification rule is obtained by applying the classifier to the training set, and its performance is evaluated by predicting the classes of the test set. The results of this test are summarized by a confusion matrix $M$, whose $(i,j)$th entry counts how many times an output in the $i$th class was classified to the $j$th class. The test error $\hat{e}$ is the proportion of off-diagonal terms of $M$, and is an unbiased estimator of the generalization error. However, in small sampling regimes the quantity $\hat{e}$ may be too variable to use as an estimator. We recommend the use of Bayesian smoothing, defining a smoothed estimate which takes a weighted average of the unbiased estimate $\hat{e}$ and the natural prior of chance classification. We define a discriminative estimator to be a function which maps the confusion matrix to a positive number. We are aware of the following examples of discriminative estimators: (1) estimators derived from Fano's inequality, and (2) the empirical information of the confusion matrix, as introduced by Treves [7]. We discuss these estimators in Section 3.
2.2 Universality result
We obtain the universality result in two steps. First, we link the average Bayes error $\bar{B}$ to the moments of certain statistics. Second, we use a Taylor approximation to express those moments in terms of the mutual information $I$. Connecting these two pieces yields the formula (1). Let us start by rewriting the average Bayes error as the probability that some competing class attains a higher conditional likelihood than the generating class. Defining one statistic per competing class, we obtain an expression for the misclassification rate in terms of these statistics. The key assumption we need is that the statistics are asymptotically multivariate normal. If so, the following lemma allows us to obtain a formula for the misclassification rate.
Lemma 1. Suppose are jointly multivariate normal, with , , , , and for all , such that . Then, letting
we have
where the limiting expression involves the maximum of $k-1$ independent standard normal variates, which are independent of the remaining Gaussian term.
To see why the assumption that these statistics are multivariate normal might be justified, suppose that $X$ and $Y$ have the same dimensionality $d$, and that the joint density factorizes across the scalar components of $X$ and $Y$. Then each statistic decomposes into a sum of terms which are independent across the component index, but dependent between the competing classes. Therefore, the multivariate central limit theorem can be applied to conclude that the vector of statistics can be scaled to converge to a multivariate normal distribution. While the component-wise independence condition is not a realistic assumption, the key property of multivariate normality holds under more general conditions, and appears reasonable in practice. It remains to link the moments of these statistics to $I$. This is accomplished by approximating the logarithmic term by a Taylor expansion.
A number of assumptions are needed to ensure that the required approximations are sufficiently accurate; additionally, in order to apply the central limit theorem, we need to consider a limiting sequence of problems with increasing dimensionality. We now state the theorem.
Theorem 1. Let $\{p_n\}$ be a sequence of joint densities for $(X, Y)$. Further assume that:
(A1) the densities satisfy the regularity conditions on the component-wise factorization described above;
(A2) there exists a sequence of scaling constants such that the random vector of statistics converges in distribution to a multivariate normal distribution generated by independent components;
(A3) there exists a sequence of scaling constants such that the centered log-density converges in distribution to a univariate normal distribution;
(A4) a uniform moment condition holds over the competing classes.
Then, for $h$ as defined above, the average Bayes error converges: $\bar{B} \to h(I)$, where $\phi$ and $\Phi$ are the standard normal density function and cumulative distribution function, respectively.
Assumptions A1–A4 are satisfied in a variety of natural models. One example is a multivariate Gaussian sequence model where $X \sim N(0, \Sigma_X)$ and $Y = X + \varepsilon$ with $\varepsilon \sim N(0, \Sigma_\varepsilon)$, where $\Sigma_X$ and $\Sigma_\varepsilon$ are covariance matrices and $X$ and $\varepsilon$ are independent. Then, if $\Sigma_X$ and $\Sigma_\varepsilon$ have limiting spectral distributions, the joint densities for $(X, Y)$ satisfy assumptions A1–A4. Another example is the multivariate logistic model, which we describe in Section 3. We further discuss the rationale behind A1–A4 in the supplement, along with the detailed proof.
2.3 High-dimensional estimator
As stated in the introduction, we propose the estimator $\hat{I}_{HD} = h^{-1}(\hat{e})$, obtained by applying the inverse of $h$ to the estimated test error. For sufficiently high-dimensional problems, $\hat{I}_{HD}$ can accurately recover $I$, supposing also that the classifier consistently estimates the Bayes rule. The number of observations needed depends on the convergence rate of the classifier and also on the complexity of estimating the generalization error. Therefore, without making further assumptions, the sample complexity is at least exponential in $I$: when $I$ is large, the Bayes error is exponentially small, so exponentially many observations in the test set are needed to estimate it to sufficient precision. While sample complexity exponential in $I$ is by no means ideal, by comparison, nonparametric estimation approaches have a complexity exponential in the dimensionality. Hence, $\hat{I}_{HD}$ is favored over nonparametric approaches in settings with high dimensionality and low signal-to-noise ratio.
3 Simulation
We compare the discriminative estimators (our proposed $\hat{I}_{HD}$, the Fano-based estimator, and the confusion-matrix estimator) with a nonparametric estimator and the correctly specified parametric estimator in the following simulation. We generate data according to a multiple-response logistic regression model, in which $Y$ is a binary vector whose components are conditionally independent given $X$, with logits given by a linear transformation of $X$ through a coefficient matrix. One application of this model might be modeling neural spike count data arising in response to environmental stimuli [12]. We choose naive Bayes as the classifier: it is consistent for estimating the Bayes rule.
Figure 2: Sampling distributions of the estimators.
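A minimal version of this simulation can be sketched as follows. The diagonal coefficient matrix, the dimensions, and the scale constant are our own illustrative choices, and the classifier evaluates the exact conditional likelihood of the model, the rule that naive Bayes is consistent for:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(k=8, d=30, n_per_class=40, scale=0.6):
    """Draw k exemplars x^(i) ~ N(0, I_d); each response y is a binary
    vector with independent components, P(y_j = 1 | x) = sigmoid(scale * x_j).
    (A diagonal coefficient matrix, chosen for illustration.)"""
    exemplars = rng.standard_normal((k, d))
    C = rng.integers(k, size=k * n_per_class)            # class labels
    probs = 1.0 / (1.0 + np.exp(-scale * exemplars[C]))  # P(y_j = 1)
    Y = (rng.random(probs.shape) < probs).astype(float)
    return exemplars, C, Y

def bayes_predict(exemplars, Y, scale=0.6):
    """Exact Bayes rule for this model: class log-likelihoods are linear in y,
    log P(y | x) = y · (scale x) - Σ_j log(1 + exp(scale x_j))."""
    logits = scale * exemplars                           # (k, d)
    loglik = Y @ logits.T - np.log1p(np.exp(logits)).sum(axis=1)
    return loglik.argmax(axis=1)

ex, C, Y = simulate()
err = float(np.mean(bayes_predict(ex, Y) != C))
print(err)  # well below the chance error 1 - 1/8 = 0.875
```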
The first discriminative estimator is derived from Fano's inequality, which (for uniform classes) reads
$$I(X;Y) \geq \log k - H(e) - e \log(k-1),$$
where $H(e)$ is the entropy of a Bernoulli random variable with probability $e$, and $e$ is the Bayes error. Replacing $e$ with the test error $\hat{e}$ and the inequality with an equality, we get the estimator $\hat{I}_{Fano} = \log k - H(\hat{e}) - \hat{e}\log(k-1)$.
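Under the uniform-class setting above, the Fano-based estimator is a one-line computation (a sketch; natural-log units assumed):

```python
import numpy as np

def fano_estimate(err, k):
    """Lower-bound estimate of I (in nats) from Fano's inequality,
    plugging the test error err in for the Bayes error."""
    def bernoulli_entropy(p):
        if p in (0.0, 1.0):
            return 0.0
        return -p * np.log(p) - (1 - p) * np.log(1 - p)
    return np.log(k) - bernoulli_entropy(err) - err * np.log(k - 1)

# Perfect classification saturates the bound at log k;
# chance-level error gives exactly zero.
print(fano_estimate(0.0, k=8))  # prints ≈ 2.079 (= log 8)
```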
Meanwhile, the confusion-matrix estimator computes the empirical mutual information of the discrete joint distribution obtained by normalizing the confusion matrix.
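The confusion-matrix estimator is likewise straightforward to sketch; the normalization treats the confusion counts as an empirical joint distribution over (true class, predicted class):

```python
import numpy as np

def confusion_matrix_information(M):
    """Empirical mutual information (nats) of the joint distribution
    obtained by normalizing the confusion matrix M."""
    P = np.asarray(M, dtype=float)
    P /= P.sum()
    row = P.sum(axis=1, keepdims=True)   # distribution of true classes
    col = P.sum(axis=0, keepdims=True)   # distribution of predictions
    mask = P > 0                         # 0 * log 0 = 0 by convention
    return float((P[mask] * np.log(P[mask] / (row @ col)[mask])).sum())

# A diagonal confusion matrix over k classes saturates at log k.
print(confusion_matrix_information(np.eye(4) * 10))  # → log(4) ≈ 1.386
```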
It is known that the Fano-based and confusion-matrix estimators tend to underestimate the mutual information. Quiroga et al. [2] discussed two sources of 'information loss' which lead to underestimating the mutual information: the discretization of the classes, and the error in approximating the Bayes rule. Meanwhile, Gastpar et al. [11] showed that the confusion-matrix estimator is biased downwards due to undersampling of the exemplars; to counteract this bias, they introduce an anthropic correction estimator.⁴

⁴However, without a principled approach to choose its tuning parameter, the anthropic correction could still vastly underestimate or overestimate the mutual information.
In addition to the sources of information loss discussed by Quiroga et al., another reason why the Fano-based and confusion-matrix estimators underestimate the mutual information is that both are upper bounded by $\log k$, where $k$ is the number of classes. As $I$ exceeds $\log k$, these estimates can no longer approximate $I$, even up to a constant factor. In contrast, $\hat{I}_{HD}$ is unbounded; it may either underestimate or overestimate the mutual information in general, but performs well when the high-dimensionality assumption is met.
In Figure 2 we show the sampling distributions of the five estimators as the true information is varied. The parametric estimator is a plug-in estimator using the coefficient matrix estimated via multinomial regression of $Y$ on $X$; it recovers the true mutual information to within a small tolerance with probability 90%. We see that the bounded discriminative estimators indeed begin to asymptote as they approach $\log k$. In contrast, $\hat{I}_{HD}$ remains a good approximation of $I$ within the range, although it begins to overestimate at the right endpoint. The reason why $\hat{I}_{HD}$ loses accuracy as the true information increases is that the multivariate normality approximation used to derive the estimator becomes less accurate when the conditional distribution becomes highly concentrated.
4 Discussion
Discriminative estimators of mutual information have the potential to estimate mutual information in high-dimensional data without resorting to fully parametric assumptions. However, a number of practical considerations limit their usage. First, one has to find a good classifier for the data: techniques for model selection can be used to choose from a large library of methods, but there is no way to guarantee how well the chosen classifier approximates the optimal classification rule. Second, one has to estimate the generalization error from test data: the complexity of estimating the error could become the bottleneck when it is close to 0. Third, for the previous estimators based on Fano's inequality and the confusion matrix, the ability of the estimator to distinguish high values of $I$ is limited by the number of classes $k$. Our estimator is subject to the first two limitations, along with any conceivable discriminative estimator, but overcomes the third limitation under the assumptions of stratified sampling and high dimensionality.

It can be seen that additional assumptions are indeed needed to overcome the third limitation, the $\log k$ upper bound. Consider the following worst-case example: let $X$ and $Y$ have a joint density supported on the unit square. Under partition-based classification, one can choose a partition so that no errors are made under the Bayes rule. We therefore have a joint distribution which maximizes any reasonable discriminative estimator but has finite information. The consequence is that under partition-based classification, we cannot hope to distinguish distributions whose information exceeds $\log k$. The situation is more promising if we specialize to stratified sampling: in the same example, a Bayes error of zero is no longer likely, due to the possibility of exemplars being sampled from the same bin ('collisions'); a Poisson sampling model then yields an approximation to the average Bayes error in terms of the collision rate. By specializing further to the high-dimensional regime, we obtain even tighter control on the relation between Bayes error and mutual information. Our estimator therefore provides more accurate estimation at the cost of additional assumptions; but just how restrictive are these assumptions?
The assumption of stratified sampling is usually not met in the most common applications of classification, where the classes are defined a priori. For instance, if the classes consist of three different species of iris, it does not seem appropriate to model the three species as i.i.d. draws from some distribution on a space of infinitely many potential iris species. Yet, when the classes have been predefined in an arbitrary manner, the mutual information between a latent class-defining variable and $Y$ may be only weakly related to the classification accuracy. We rely on the stratified sampling assumption to obtain the necessary control over how the classes in the classification task are defined. Fortunately, in many applications where one is interested in estimating $I(X;Y)$, a stratified sampling design can be practically implemented.
The assumption of high dimensionality is not easy to check: having a high-dimensional response does not suffice, since even then the data could still lie close to a low-dimensional manifold. In such cases, $\hat{I}_{HD}$ could either overestimate or underestimate the mutual information. In situations where the data lie on a manifold, one could estimate mutual information effectively by combining dimensionality reduction with nonparametric information estimation [13]. We suggest the following diagnostic to determine whether our method is appropriate: subsample within the classes collected, and check that the estimate does not systematically increase or decrease with the number of classes $k$.
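The diagnostic can be sketched as follows, for any discriminative estimator `estimate(err, k)`. Properly one would re-run the classifier within each subset of classes, so this is only an approximation (predictions outside the kept subset count as errors):

```python
import numpy as np

def subsample_diagnostic(true_classes, predicted, ks, estimate, seed=0):
    """Recompute a discriminative information estimate on random subsets of
    k classes, for each k in ks. A systematic trend of the estimates in k
    suggests the high-dimensionality assumption is violated.
    `estimate(err, k)` is any discriminative estimator of I."""
    rng = np.random.default_rng(seed)
    t = np.asarray(true_classes)
    p = np.asarray(predicted)
    classes = np.unique(t)
    results = {}
    for k in ks:
        keep = rng.choice(classes, size=k, replace=False)
        mask = np.isin(t, keep)
        # Predictions falling outside the kept subset count as errors.
        err = float(np.mean(p[mask] != t[mask]))
        results[k] = estimate(err, k)
    return results
```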
The assumption that the classifier approximates the Bayes rule is impractical to check, as any nonparametric estimate of the Bayes error requires exponentially many observations. Hence, while the present paper studies the 'best-case' scenario where the model is well-specified, it is even more important to understand the robustness of our method in the more realistic case where the model is misspecified. We leave this question to future work.
Even given a classifier which consistently estimates the Bayes error, the estimator can still be improved. One can employ more sophisticated methods to estimate the generalization error: for example, extrapolating from learning curves [14]. Furthermore, depending on the risk function, one may debias or shrink the estimate to achieve a more favorable bias-variance tradeoff.
All of the necessary assumptions are met in our simulation experiment, hence our proposed estimator is seen to dramatically outperform existing estimators. It remains to assess the utility of our estimation procedure in a real-world example, where both the high-dimensionality assumption and the model specification assumption are likely to be violated. In forthcoming work, we apply our framework to evaluate visual encoding models in human fMRI data.
Acknowledgments
We thank John Duchi, Youngsuk Park, Qingyun Sun, Jonathan Taylor, Trevor Hastie, and Robert Tibshirani for useful discussions. CZ is supported by an NSF graduate research fellowship.
References
[1] Borst, A. & Theunissen, F. E. (1999). "Information theory and neural coding." Nature Neuroscience, 2, 947–957.
[2] Quiroga, R. Q. & Panzeri, S. (2009). "Extracting information from neuronal populations: information theory and decoding approaches." Nature Reviews Neuroscience, 10(3), 173–185.
[3] Paninski, L. (2003). "Estimation of entropy and mutual information." Neural Computation, 15(6), 1191–1253.
[4] Haxby, J. V., et al. (2001). "Distributed and overlapping representations of faces and objects in ventral temporal cortex." Science, 293(5539), 2425–2430.
[5] Kay, K. N., et al. (2008). "Identifying natural images from human brain activity." Nature, 452(7185), 352–355.
[6] Coutanche, M. N., Solomon, S. H., & Thompson-Schill, S. L. (2016). "A meta-analysis of fMRI decoding: Quantifying influences on human visual population codes." Neuropsychologia, 82, 134–141.
[7] Treves, A. (1997). "On the perceptual structure of face space." BioSystems, 40(1–2), 189–196.
[8] Beirlant, J., Dudewicz, E. J., Gyorfi, L., & van der Meulen, E. C. (1997). "Nonparametric entropy estimation: An overview." International Journal of Mathematical and Statistical Sciences, 6, 17–40.
[9] Tse, D. & Viswanath, P. (2005). Fundamentals of Wireless Communication. Cambridge University Press.
[10] Friedman, J., Hastie, T., & Tibshirani, R. (2008). The Elements of Statistical Learning. Springer Series in Statistics. Springer, Berlin.
[11] Gastpar, M., Gill, P., Huth, A., & Theunissen, F. (2010). "Anthropic correction of information estimates and its application to neural coding." IEEE Transactions on Information Theory, 56(2).
[12] Banerjee, A., Dean, H. L., & Pesaran, B. (2011). "Parametric models to relate spike train and LFP dynamics with neural information processing." Frontiers in Computational Neuroscience, 6, 51.
[13] Theunissen, F. E. & Miller, J. P. (1991). "Representation of sensory information in the cricket cercal sensory system. II. Information theoretic calculation of system accuracy and optimal tuning-curve widths of four primary interneurons." Journal of Neurophysiology, 66(5), 1690–1703.
[14] Cortes, C., et al. (1994). "Learning curves: Asymptotic values and rate of convergence." Advances in Neural Information Processing Systems.
5 Appendix
Lemma 1. Suppose are jointly multivariate normal, with , , , , and for all , such that . Then, letting
we have
where the limiting expression involves the maximum of $k-1$ independent standard normal variates, which are independent of the remaining Gaussian term.
Proof. We can construct independent normal variates , such that
such that
Hence
Since these are i.i.d. standard normal variates, and since the relevant quantities are as given in the statement of the Lemma, the proof is completed via a straightforward computation.
Theorem 1. Let be a sequence of joint densities for as given above. Further assume that


There exists a sequence of scaling constants and such that the random vector converges in distribution to a multivariate normal distribution.

There exists a sequence of scaling constants , such that
converges in distribution to a univariate normal distribution.

For all ,
Then for as defined above, we have
where
where and are the standard normal density function and cumulative distribution function, respectively.
Proof.
For , define
Then, we claim that converges in distribution to
Combining the claim with the lemma (stated below this proof) yields the desired result.
To prove the claim, it suffices to derive the limiting moments
for , since then assumption A2 implies the existence of a multivariate normal limiting distribution with the given moments.
Before deriving the limiting moments, note the following identities. Let and .
Therefore, from assumption A3 and the formula for Gaussian exponential moments, we have
Let . Meanwhile, by applying assumption A2,
Therefore,
and
Once again by applying A2, we get
We now proceed to derive the limiting moments. We have
Also,
and similarly
This concludes the proof.
5.1 Assumptions of Theorem 1
Assumptions A1–A4 are satisfied in a variety of natural models. One example is the multivariate Gaussian model where $X \sim N(0, \Sigma_X)$ and $Y = X + \varepsilon$ with $\varepsilon \sim N(0, \Sigma_\varepsilon)$, where $\Sigma_X$ and $\Sigma_\varepsilon$ are covariance matrices and $X$ and $\varepsilon$ are independent. Then, if $\Sigma_X$ and $\Sigma_\varepsilon$ have limiting spectral distributions, the joint densities for $(X, Y)$ satisfy assumptions A1–A4.
We can also construct a family of densities satisfying A1–A4, which we call an exponential family sequence model, since each joint distribution in the sequence is a member of an exponential family. A given exponential family sequence model is specified by a choice of a base carrier function and base sufficient statistic, with the property that the carrier function factorizes as
for marginal densities and . Note that the dimensions of and in the base carrier function are arbitrary; let denote the dimension of and the dimension of for the base carrier function. Next, one specifies a sequence of scalar parameters such that
for some constant . For the th element of the sequence, is a dimensional vector, which can be partitioned into blocks
where each is dimensional. Similarly, is partitioned into for . The density of is given by
where is a normalizing constant. Hence can be recognized as the member of an exponential family with carrier measure
and sufficient statistic
One example of such an exponential family sequence model is a multivariate Gaussian model with limiting spectra, but scaled so that the marginal variances of the components of $X$ and $Y$ are equal to one. This corresponds to an exponential family sequence model with
and
Another example is a multivariate logistic regression model, given by
This corresponds to an exponential family sequence model with
and
The multivariate logistic regression model (and the multivariate Poisson regression model) are especially suitable for modeling neural spike count data; we simulate data from such a multivariate logistic regression model in Section 3.
6 Additional simulation results
Multiple-response logistic regression model.