Bayesian Properties of Normalized Maximum Likelihood and its Fast Computation

01/28/2014 · Andrew Barron et al. · Helsingin yliopisto and Yale University

The normalized maximum likelihood (NML) provides the minimax regret solution in universal data compression, gambling, and prediction, and it plays an essential role in the minimum description length (MDL) method of statistical modeling and estimation. Here we show that the normalized maximum likelihood has a Bayes-like representation as a mixture of the component models, even in finite samples, though the weights of linear combination may be both positive and negative. This representation addresses in part the relationship between MDL and Bayes modeling. It also has the advantage of speeding the calculation of the marginals and conditionals required for coding and prediction applications.


I Introduction

For a family of probability mass (or probability density) functions $p(x\mid\theta)$, also denoted $p_{\theta}(x)$ or $p(x;\theta)$, for data $x$ in a data space $\mathcal{X}$ and a parameter $\theta$ in a parameter set $\Theta$, there is a distinguished role in information theory and statistics for the maximum likelihood measure with mass (or density) function proportional to $p(x\mid\hat\theta(x))$, obtained from the maximum likelihood estimator $\hat\theta(x)$ achieving the maximum likelihood value $\max_{\theta}p(x\mid\theta)$. Let $C=\sum_{x}p(x\mid\hat\theta(x))$, where the sum is replaced by an integral in the density case. For statistical models in which $C$ is finite (i.e. the maximum likelihood measure is normalizable), this maximum value characterizes the exact solution, in an arbitrary-sequence (non-stochastic) setting, to certain modeling tasks ranging from universal data compression, to arbitrage-free gambling, to predictive distributions with minimax regret.

Common to these modeling tasks is the problem of providing a single non-negative distribution $q$ with a certain minimax property. For instance, for the compression of data $x$ with codelength $\log 1/q(x)$, the codelength is to be compared to the best codelength with hindsight, $\min_{\theta}\log 1/p(x\mid\theta)$, among the codes parameterized by the family. This ideal codelength is not exactly attainable, because the maximized likelihood $p(x\mid\hat\theta(x))$ will (except in a trivial case) have a sum that is greater than $1$, so that the Kraft inequality required for unique decodability would not be satisfied by plugging in the MLE. We must work with a $q$ with sum not greater than $1$. The difference between these actual and ideal codelengths is the (pointwise) regret

$$\log\frac{1}{q(x)}-\log\frac{1}{p(x\mid\hat\theta(x))},$$
which of course is the same as $\log\bigl[p(x\mid\hat\theta(x))/q(x)\bigr]$. The minimax regret problem is to solve for the distribution $q$ achieving
$$\min_{q}\max_{x}\;\log\frac{p(x\mid\hat\theta(x))}{q(x)},$$

where the minimum is taken over all probability mass functions. Beginning with Shtarkov [8], who formulated the minimax regret problem for universal data compression, it has been shown that the solution is given by the normalized maximum likelihood (NML) distribution
$$q^{*}(x)=\frac{p(x\mid\hat\theta(x))}{C},$$

where $C=\sum_{x}p(x\mid\hat\theta(x))$ is the normalizer. This is an equalizer rule (achieving constant regret), showing that the minimax regret is $\log C$.
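To make these definitions concrete, here is a minimal Python sketch (ours, not from the paper; the function name and the choice $n=10$ are illustrative) that computes the Shtarkov normalizer $C$, and hence the minimax regret $\log C$, for $n$ Bernoulli trials by grouping sequences according to the count of ones so that the sum has only $n+1$ terms.

```python
import numpy as np
from math import comb, log

def max_log_prob_of_count(t, n):
    """log max_theta P(T = t | theta) for the Binomial(n, theta) count T."""
    if t in (0, n):
        return log(comb(n, t))          # (t/n)^t (1-t/n)^(n-t) = 1 at the boundary
    p = t / n
    return log(comb(n, t)) + t * log(p) + (n - t) * log(1 - p)

n = 10
# Shtarkov value C = sum over all 2^n binary sequences of p(x^n | theta_hat(x^n));
# grouping sequences by their count t of ones reduces this to n+1 terms.
C = sum(np.exp(max_log_prob_of_count(t, n)) for t in range(n + 1))

# NML assigns q(x^n) = p(x^n | theta_hat(x^n)) / C, so the pointwise regret
# log[p(x^n | theta_hat(x^n)) / q(x^n)] equals log C for every sequence.
print("Shtarkov value C =", C, "  minimax regret log C =", log(C))
```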

For settings in which $C$ is infinite, the maximized likelihood measure is not normalizable and the minimax regret defined above is infinite. Nevertheless, one can identify problems of this type in which the maximized likelihood value continues to have a distinguished role. In particular, suppose the data comes in two parts $x=(u,v)$, thought of as initial and subsequent data strings. Then the maximum likelihood value often has a finite marginal $\sum_{v}p(u,v\mid\hat\theta(u,v))$, leading to the conditional NML distribution
$$q(v\mid u)=\frac{p(u,v\mid\hat\theta(u,v))}{\sum_{v'}p(u,v'\mid\hat\theta(u,v'))},$$
which is non-negative and sums to $1$ over $v$ for each such conditioning event $u$.

Bayes mixtures are used in approximation, and, as we shall see, in exact representation of the maximized likelihood measure. There are reasons for this use of Bayes mixtures when studying properties of logarithmic regret in general and when studying the normalized maximum likelihood in particular. There are two traditional reasons that have an approximate nature. One is a relationship between minimax pointwise regret and minimax expected regret, for which Bayes procedures are known to play a distinguished role. The other is the established role of such mixtures in the asymptotic characterization of minimax pointwise regret.

Here we offer up two more reasons for consideration of Bayes mixtures which are based on exact representation of the normalized maximum likelihood. One is that representation by mixtures provides computational simplification of coding and prediction by NML or conditional NML. The other is that the exact representation of NML allows determination of which parametric families allow Bayes interpretation with positive weights and which require a combination of positive and negative weights.

Before turning attention to exact representation of the NML, let's first recall the information-theoretic role in which Bayes mixtures arise. The expected regret (redundancy) in data compression is $E_{\theta}\log\bigl[p(X\mid\theta)/q(X)\bigr]$, which is a function of $\theta$ giving the expectation of the difference between the codelength based on $q$ and the optimal codelength $\log 1/p(X\mid\theta)$ (the expected difference being a Kullback divergence). There is a similar formulation of expected regret for the description of $v$ given $u$, which is the risk of the statistical decision problem with loss specified by Kullback divergence.

For these two decision problems the procedures minimizing the average risk are the Bayes mixture distributions and Bayes predictive distributions, respectively. In general admissibility theory for convex losses like Kullback divergence, the only procedures not improvable in their risk functions are Bayes and certain limits of Bayes procedures with positive priors. For minimax expected redundancy, with min over $q$ and max over $\theta$, the minimax solution is characterized using the maximin average redundancy, which calls for a least favorable (capacity achieving) prior [6].

The maximum pointwise regret $\max_{x}\log\bigl[p(x\mid\hat\theta(x))/q(x)\bigr]$ provides an upper bound on the expected regret for each $\theta$, as well as an upper bound on the maximum expected redundancy. It is for the max-over-$\theta$ problems that the minimax solution takes the form of a Bayes mixture. So it is a surprise that the max-over-$x$ form also has a mixture representation, as we shall see.

The other traditional role for Bayes mixtures in the study of the NML arises in asymptotics [12]. Suppose $x^{n}=(x_{1},\ldots,x_{n})$ is a string of $n$ outcomes from a given alphabet. Large sample approximations for smooth families of distributions show a role for sequences of prior distributions with densities close to Jeffreys prior, taken to be proportional to $\sqrt{\det I(\theta)}$ where $I(\theta)$ is the Fisher information. Bayes mixtures of this type are asymptotically minimax for the expected regret [3, 13], and in certain exponential families Bayes mixtures are simultaneously asymptotically minimax for pointwise regret and expected regret [14, 9]. However, in non-exponential families, it is problematic for Bayes mixtures to be asymptotically minimax for pointwise regret, because there are data sequences for which the empirical Fisher information (arising in the large sample Laplace approximation) does not match the Fisher information, so that Jeffreys prior fails. The work of [9, 10] overcomes this problem in the asymptotic setting by putting a Bayes mixture on a slight enlargement of the family to compensate for this difficulty. The present work motivates consideration of signed mixtures in the original family rather than enlarging the family.

We turn now to the main finite sample reasons for exploration of Bayes representation of NML. The first is the matter of computational simplification of the representation of NML by mixtures with possibly signed weights of combination. For any coding distribution $q$ in general, and NML in particular, coding implementation (for which arithmetic coding is the main tool) requires computation of the sequence of conditional distributions of $x_{k+1}$ given $x^{k}$, defined by the ratios of consecutive marginals $q(x^{k+1})/q(x^{k})$ for $k=1,\ldots,n-1$. This appears to be a very difficult task for normalized maximum likelihood, for which direct methods require sums of size up to $|\mathcal{X}|^{\,n-k}$. Fast methods for NML coding have been developed in specialized settings [5], yet remain intractable for most models.

In contrast, for computation of the corresponding ingredients of Bayes mixtures $q_{w}(x^{n})=\int p(x^{n}\mid\theta)\,w(d\theta)$, one can make use of simplifying conditional rules for $p(x^{n}\mid\theta)$, e.g. it is equal to $\prod_{i=1}^{n}p(x_{i}\mid\theta)$ in the conditionally i.i.d. case or $\prod_{i=1}^{n}p(x_{i}\mid x_{i-1},\theta)$ in the first order Markov case, which multiply in providing $p(x^{k+1}\mid\theta)=p(x^{k}\mid\theta)\,p(x_{k+1}\mid x^{k},\theta)$ for each $k$. So to compute these conditionals one has ready access to the computation of the ratios of consecutive marginals $q_{w}(x^{k+1})/q_{w}(x^{k})$, contingent on the ability to do the sums or integrals required by the measure $w$. Equivalently, one has representation of the required predictive distributions as a posterior average, e.g. in the i.i.d. case it is $q_{w}(x_{k+1}\mid x^{k})=\int p(x_{k+1}\mid\theta)\,w(d\theta\mid x^{k})$. So Bayes mixtures permit simplified marginalization (and conditioning) compared to direct marginalization of the NML.
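As a concrete illustration of this simplification (a sketch of ours, not code from the paper), the conjugate Beta-Bernoulli mixture gives each ratio of consecutive marginals in closed form, so every predictive probability is an O(1) posterior average instead of an exponentially large sum; the function names and the Jeffreys-prior default are our choices.

```python
import numpy as np
from scipy.special import betaln

def log_marginal(x, a=0.5, b=0.5):
    """log q_w(x^k) for a Beta(a, b) mixture of i.i.d. Bernoulli(theta) components."""
    k, t = len(x), sum(x)
    return betaln(a + t, b + k - t) - betaln(a, b)

def predictive(x, a=0.5, b=0.5):
    """q_w(x_{k+1} = 1 | x^k): posterior mean of theta under Beta(a + t, b + k - t)."""
    k, t = len(x), sum(x)
    return (a + t) / (a + b + k)

x = [1, 0, 1, 1]
# The ratio of consecutive marginals and the posterior-average form agree.
ratio = np.exp(log_marginal(x + [1]) - log_marginal(x))
print(ratio, predictive(x))   # both 0.7 under the Jeffreys prior a = b = 1/2
```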

The purpose of the present paper is to explore, in the finite sample setting, the question of whether we can take advantage of the Bayes mixture to provide exact representation of the maximized likelihood measures. That is, the question explored is whether there is a prior measure $w$ such that exactly
$$p(x\mid\hat\theta(x))=\int p(x\mid\theta)\,w(d\theta)\quad\text{for all }x.$$

Likewise for strings $x^{n}$ we want the representation $p(x^{n}\mid\hat\theta(x^{n}))=\int p(x^{n}\mid\theta)\,w_{n}(d\theta)$ for some prior measure $w_{n}$ that may depend on $n$. Then, to perform the marginalization required for sequential prediction and coding by maximized likelihood measures, we can get the marginals computationally easily as $\int p(x^{k}\mid\theta)\,w_{n}(d\theta)$ for $k\le n$. We point out that this computational simplicity holds just as well if $w_{n}$ is a signed (not necessarily non-negative) measure.

In Section II, we give a result on exact representation in the case that the family has a finitely supported sufficient statistic. In Section III we demonstrate numerical solution in the Bernoulli trials case, using linear algebra solutions for the prior weights as well as a Rényi divergence optimization, close to optimization of the maximum ratio of the NML and the mixture.

Finally, we emphasize that at no point are we trying to report a negative probability as a direct model of an observable variable. Negative weights of combination over unobservable parameters arise instead in the representation and calculation of ingredients of non-negative probability mass functions of observable quantities. The marginal and predictive distributions of observable outcomes all remain non-negative, as they must.

II Signed Prior Representation of NML

We say that a function $q_{w}(x^{n})=\int p(x^{n}\mid\theta)\,w(d\theta)$ is a signed Bayes mixture when $w$ is allowed to be a signed measure, with positive and negative parts $w_{+}$ and $w_{-}$, respectively. These signed Bayes mixtures may play a role in the representation of $p(x^{n}\mid\hat\theta(x^{n}))$. For now, let's note that for strings a signed Bayes mixture has some of the same marginalization properties as a proper Bayes mixture. The marginal for $x^{k}$ is defined by summing out $x_{k+1}$ through $x_{n}$. For the components $p(x^{n}\mid\theta)$, the marginals are denoted $p(x^{k}\mid\theta)$. These may be conveniently simple to evaluate for some choices of the component family, e.g. i.i.d. or Markov. Then the signed Bayes mixture has marginals $q_{w}(x^{k})=\int p(x^{k}\mid\theta)\,w(d\theta)$. [Here it is being assumed that, at least for indices past some initial value, the $q_{w_{+}}(x^{k})$ and $q_{w_{-}}(x^{k})$ are finite, so that the exchange of the order of the integral and the sum producing this marginal is valid.] Our emphasis will be on cases in which the mixture is non-negative (that is, $q_{w}(x^{n})\ge 0$ for all $x^{n}$) and then the marginals will be non-negative as well. Accordingly one has predictive distributions $q_{w}(x_{k+1}\mid x^{k})$ defined as ratios of consecutive marginals, as long as the conditioning string has $q_{w}(x^{k})$ finite and non-zero. It is seen then that $q_{w}(\,\cdot\mid x^{k})$ is a non-negative distribution which sums to $1$, summing over $x_{k+1}$ in the alphabet, for each such conditioning string. Moreover, one may formally define a possibly-signed posterior distribution $w(d\theta\mid x^{k})$ such that the predictive distribution is still a posterior average, e.g. in the i.i.d. case one still has the representation $q_{w}(x_{k+1}\mid x^{k})=\int p(x_{k+1}\mid\theta)\,w(d\theta\mid x^{k})$.

We mention that for most families the maximized likelihood has a horizon dependence property, such that the marginals defined as $q_{n}(x^{k})=\sum_{x_{k+1},\ldots,x_{n}}p(x^{n}\mid\hat\theta(x^{n}))$ remain slightly dependent on the horizon $n$ for each $k$. In suitable Bayes approximations and exact representations, this horizon dependence is reflected in a (possibly-signed) prior $w_{n}$ depending on $n$, such that its marginals take the form $q_{n}(x^{k})=\int p(x^{k}\mid\theta)\,w_{n}(d\theta)$. (Un)achievability of asymptotic minimax regret without dependence on the horizon was characterized, in connection with a conjecture, in [12]. Three models within one-dimensional exponential families are exceptions to this horizon dependence [2]. It is also of interest that, as shown in [2], those horizon independent maximized likelihood measures have exact representation using horizon independent positive priors.

Any finite-valued statistic $T=t(x)$ has a distribution $P_{\theta}(T=t)$. It is a sufficient statistic if there is a function $h(x)$ not depending on $\theta$ such that the likelihood factorizes as $p(x\mid\theta)=h(x)\,P_{\theta}(T=t(x))$. If the statistic takes on $m$ values, we may regard the distribution of $T$ as a vector in the positive orthant of $\mathbb{R}^{m}$, with sum of coordinates equal to $1$. For example, if $x^{n}=(x_{1},\ldots,x_{n})$ are Bernoulli($\theta$) trials, then $T=\sum_{i=1}^{n}x_{i}$ is well known to be a sufficient statistic having a Binomial($n,\theta$) distribution, with $m=n+1$.

The main point of the present paper is to explore the ramifications of the following simple result.

Theorem: Signed-Bayes representation of maximized likelihood. Suppose the parametric family $\{p(x\mid\theta)\}$, with $x$ in $\mathcal{X}$ and $\theta$ in $\Theta$, has a sufficient statistic $T=t(x)$ with values in a set of cardinality $m$, where $m$ may depend on the sample size. Then for any subset $\{\theta_{1},\ldots,\theta_{m}\}\subset\Theta$ for which the distributions of $T$ are linearly independent in $\mathbb{R}^{m}$, there is a possibly-signed measure $w$ supported on $\{\theta_{1},\ldots,\theta_{m}\}$, with values $w_{1},\ldots,w_{m}$, such that $p(x\mid\hat\theta(x))$ has the representation
$$p(x\mid\hat\theta(x))=\sum_{j=1}^{m}w_{j}\,p(x\mid\theta_{j})\qquad\text{for all }x.$$

Proof: By sufficiency, it is enough to represent the vector $\bigl(\max_{\theta}P_{\theta}(T=t)\bigr)_{t}$ as a linear combination of the distributions of $T$ at $\theta_{1},\ldots,\theta_{m}$, which is possible since these are linearly independent and hence span $\mathbb{R}^{m}$.

Remark 1: Consequently, the task of computation of the marginals (and hence conditionals) of maximized likelihood needed for minimax pointwise redundancy codes is reduced from the seemingly hard task of summing $p(x^{n}\mid\hat\theta(x^{n}))$ over $x_{k+1},\ldots,x_{n}$ to the much simpler task of computing the sum of $m$ terms $\sum_{j=1}^{m}w_{j}\,p(x^{k}\mid\theta_{j})$.

Remark 2: Often the likelihood ratio $p(x\mid\theta)/p(x\mid\hat\theta(x))$ simplifies, where $\hat\theta(x)$ is the maximum likelihood estimate. For instance, in i.i.d. exponential families it takes the form $e^{-n\,D(\hat\theta\,\|\,\theta)}$, where $D(\hat\theta\,\|\,\theta)$ is the relative entropy between the distributions at $\hat\theta$ and at $\theta$. So then, dividing through by $p(x\mid\hat\theta(x))$, the representation task is to find a possibly-signed measure $w$ such that the integral of these ratios is constant for all possible values of $\hat\theta$, that is, $\int e^{-n\,D(\hat\theta\,\|\,\theta)}\,w(d\theta)=1$. The ratios depend on $x$ only through the sufficient statistic, so this is a simplified form of the representation.

Remark 3: Summing out $x$, one sees that the Shtarkov value $C=\sum_{x}p(x\mid\hat\theta(x))$ has the representation $C=\sum_{j=1}^{m}w_{j}$. That is, the Shtarkov value matches the total signed measure of $w$. When $C$ is finite, one may alternatively divide out $C$ and provide a representation of the NML distribution in which the possibly-signed prior has total measure $1$.

Remark 4: In the Bernoulli trials case, the likelihoods are proportional to $\theta^{T}(1-\theta)^{n-T}=(1-\theta)^{n}e^{T\eta}$, where $\eta=\log[\theta/(1-\theta)]$ is the log odds. The factors $(1-\theta_{j})^{n}$ can be associated with the weights of combination. To see the linear independence required in the theorem, it is enough to note that the vectors of exponentials $(e^{t\eta_{j}})_{t=0,\ldots,n}$ are linearly independent for any $n+1$ distinct values of the log odds $\eta_{1},\ldots,\eta_{n+1}$. The roles of $\theta$ and $1-\theta$ can be exchanged in the maximized likelihood, and, correspondingly, the representation can be arranged with a prior symmetric around $\theta=1/2$. Numerical selections of the support points are studied below.
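Written out for chosen support points $\theta_{1},\ldots,\theta_{n+1}$ in $(0,1)$ (our rendering of the remark; the binomial coefficients cancel from both sides), the representation of the theorem is the $(n+1)\times(n+1)$ linear system
$$\sum_{j=1}^{n+1}w_{j}\,\theta_{j}^{\,t}(1-\theta_{j})^{\,n-t}=\Bigl(\tfrac{t}{n}\Bigr)^{t}\Bigl(1-\tfrac{t}{n}\Bigr)^{n-t},\qquad t=0,1,\ldots,n,$$
with the convention $0^{0}=1$. Writing $\theta^{t}(1-\theta)^{n-t}=(1-\theta)^{n}e^{t\eta}$ shows the coefficient matrix to be a Vandermonde matrix in $e^{\eta_{j}}$, scaled column-wise by the nonzero factors $(1-\theta_{j})^{n}$, and hence invertible whenever the log odds $\eta_{j}$ are distinct.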

III Numerical Results

We now proceed to demonstrate the discussed mixture representations.

III-A A trivial example where negative weights are required

We start with a simple illustration of a case where negative weights are required. Consider a single observation of a ternary random variable $x$ under a model consisting of three probability mass functions, $p(\cdot\mid\theta)$, $\theta\in\{1,2,3\}$, defined as follows.

The maximum likelihood values $p(x\mid\hat\theta(x))=\max_{\theta}p(x\mid\theta)$ are equal for all three values of $x$, since for each $x$ the maximum is achieved by one (or both) of two of the component distributions. The NML distribution is therefore the uniform distribution $(1/3,1/3,1/3)$.

An elementary calculation of the weights $(w_{1},w_{2},w_{3})$ such that $\sum_{\theta}w_{\theta}\,p(x\mid\theta)=p(x\mid\hat\theta(x))$ for all $x$ yields a solution in which one of the weights is negative. The solution is unique, implying in particular that there is no weight vector that achieves the matching with only positive weights.
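Since the specific table of three distributions is not reproduced above, the following Python sketch uses a hypothetical ternary model of the same flavor (the numbers are ours, not the paper's) just to show how solving the $3\times 3$ system can force a negative weight.

```python
import numpy as np

# Hypothetical ternary model: row j holds p(x | theta = j+1) for x = 1, 2, 3.
# (Illustrative values only; not the table used in the paper.)
P = np.array([[0.5, 0.5, 0.0],
              [0.0, 0.5, 0.5],
              [0.2, 0.6, 0.2]])

b = P.max(axis=0)            # maximized likelihood p(x | theta_hat(x)) for each x
w = np.linalg.solve(P.T, b)  # solve sum_j w_j p(x | j) = max_j p(x | j) for all x

print("weights:", w)         # -> [ 1.8  1.8 -2. ]  (a negative weight is required)
print("check:  ", P.T @ w, "should equal", b)
```

In this hypothetical model the weights sum to 1.6, which is the Shtarkov value of the toy model, in line with Remark 3; dividing by that sum gives a signed prior representing the NML distribution itself.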

III-B Bernoulli trials: Linear equations for $w$ with fixed $\theta$

For Bernoulli trials, the sufficient statistic $T=\sum_{i=1}^{n}x_{i}$ takes on $n+1$ possible values. We discuss two alternative methods for finding the prior. First, as a direct application of linear algebra, we choose a set of fixed parameter values $\theta_{1},\ldots,\theta_{n+1}$ and obtain the weights $w_{1},\ldots,w_{n+1}$ by solving a system of linear equations. By Remark 4 above, any combination of $n+1$ distinct values yields linearly independent distributions of $T$, and a signed-Bayes representation is guaranteed to exist. However, the choice of the $\theta_{j}$ has a strong effect on the resulting weights $w_{j}$.

We consider two alternative choices of the support points $\theta_{1},\ldots,\theta_{n+1}$: first, a uniform grid, and second, a grid with points at the quantiles of the Beta(1/2,1/2) distribution (also known as the arcsine law), which is the Jeffreys prior motivated by the asymptotics discussed in the introduction.

Figure 1 shows priors representing NML obtained by solving the associated linear equations. For mass points given by the Beta(1/2,1/2) quantiles, the prior is nearly uniform except at the boundaries of the parameter space, where the weights are higher. For uniformly spaced mass points, the prior involves both negative and positive weights once $n$ is large enough. Without the non-negativity constraint, the requirement that the weights sum to one no longer implies a bound (of at most one) on the magnitudes of the weights, and in fact, the absolute values of the weights become very large as $n$ grows.
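A minimal Python sketch of this linear-algebra method (our own code; the interior uniform grid, the quantile levels, and $n=10$ are assumptions made for illustration) builds the system from Remark 4 for the two grids and reports the range of the resulting weights. Per Remark 3, the weights sum to the Shtarkov value $C$, so dividing by their sum gives a signed prior for the NML distribution itself.

```python
import numpy as np
from scipy.stats import beta

def signed_weights(n, thetas):
    """Solve sum_j w_j theta_j^t (1-theta_j)^(n-t) = (t/n)^t (1-t/n)^(n-t), t = 0..n
    (the binomial coefficients cancel from both sides)."""
    t = np.arange(n + 1, dtype=float)
    A = thetas[None, :] ** t[:, None] * (1.0 - thetas[None, :]) ** (n - t)[:, None]
    rhs = (t / n) ** t * (1.0 - t / n) ** (n - t)   # 0.0 ** 0.0 evaluates to 1
    return np.linalg.solve(A, rhs)

n = 10
levels = (np.arange(1, n + 2) - 0.5) / (n + 1)      # mid-quantile levels (our choice)
grids = {"uniform": levels,                          # equally spaced points in (0, 1)
         "arcsine": beta.ppf(levels, 0.5, 0.5)}      # Beta(1/2,1/2) quantiles

for name, grid in grids.items():
    w = signed_weights(n, grid)
    print(f"{name:8s} grid: sum = {w.sum():.4f}, min = {w.min():.4f}, max = {w.max():.4f}")
```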

III-C Bernoulli trials: Divergence optimization of $w$ and $\theta$

For fewer than $n+1$ parameter values with non-zero prior probability, there is no guarantee that an exact representation of NML is possible. However, roughly half as many mass points should suffice when we are also free to choose the locations $\theta_{j}$, because the total number of degrees of freedom of the symmetric discrete prior, locations as well as weights, coincides with the number of equations to be satisfied by the mixture.

While solving for the required weights, $w_{j}$, can be done in a straightforward manner using linear algebra, the same does not hold for the locations $\theta_{j}$. Inspired by the work of Watanabe and Ikeda [11], we implemented a Newton-type algorithm for minimizing a criterion equivalent to the Rényi divergence [7] of order $\lambda$,
$$D_{\lambda}(q_{\mathrm{NML}}\,\|\,q_{w})=\frac{1}{\lambda-1}\log\sum_{x^{n}}q_{\mathrm{NML}}(x^{n})^{\lambda}\,q_{w}(x^{n})^{1-\lambda},$$
with a large value of $\lambda$, under the constraint that $\sum_{j=1}^{m}w_{j}=1$, where $m$ is the number of mass points. This criterion converges to the log of the worst-case ratio $\max_{x^{n}}q_{\mathrm{NML}}(x^{n})/q_{w}(x^{n})$ as $\lambda\to\infty$. In the following, we use a single large value of $\lambda$, switching to a somewhat smaller value where needed in order to avoid numerical problems. The mass points were initialized at a fixed starting grid.
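The following Python sketch (ours, not the implementation used in the paper) sets up this objective for the Bernoulli model and optimizes it; a generic derivative-free SciPy optimizer stands in for the Newton-type algorithm, a logistic/softmax parametrization enforces locations in $(0,1)$ and non-negative weights summing to one, and the values of $n$, $m$ and $\lambda$ are arbitrary choices. Working with the distribution of the sufficient statistic $T$ gives the same divergence as working with full sequences, because the binomial factors carry total exponent $\lambda+(1-\lambda)=1$.

```python
import numpy as np
from scipy.special import gammaln, logsumexp
from scipy.optimize import minimize

def log_binom_pmf(n, thetas):
    """Matrix of log P(T = t | theta_j), shape (n+1, len(thetas))."""
    t = np.arange(n + 1)[:, None]
    lgc = gammaln(n + 1) - gammaln(t + 1) - gammaln(n - t + 1)
    return lgc + t * np.log(thetas)[None, :] + (n - t) * np.log1p(-thetas)[None, :]

def log_nml(n):
    """log of the NML distribution of the sufficient statistic T."""
    t = np.arange(n + 1)
    with np.errstate(divide="ignore", invalid="ignore"):
        ml = np.where((t == 0) | (t == n), 0.0,
                      t * np.log(t / n) + (n - t) * np.log(1 - t / n))
    lgc = gammaln(n + 1) - gammaln(t + 1) - gammaln(n - t + 1)
    lp = lgc + ml
    return lp - logsumexp(lp)            # subtracting log of the Shtarkov value

def renyi_objective(params, n, m, lam):
    """D_lambda(q_NML || q_w) with logit-parametrized locations and
    softmax-parametrized (hence non-negative, normalized) weights."""
    thetas = 1.0 / (1.0 + np.exp(-params[:m]))      # mass points in (0, 1)
    logw = params[m:] - logsumexp(params[m:])       # log weights summing to 1
    log_qw = logsumexp(log_binom_pmf(n, thetas) + logw[None, :], axis=1)
    return logsumexp(lam * log_nml(n) + (1 - lam) * log_qw) / (lam - 1)

n, m, lam = 10, 6, 50                   # sample size, mass points, Renyi order
x0 = np.concatenate([np.linspace(-2, 2, m), np.zeros(m)])   # arbitrary start
res = minimize(renyi_objective, x0, args=(n, m, lam), method="Nelder-Mead",
               options={"maxiter": 50000, "maxfev": 50000, "fatol": 1e-12})
print("Renyi divergence of order", lam, "at the optimum:", res.fun)
```

Because the softmax keeps every weight non-negative, this sketch shares the limitation noted below that the divergence-based method cannot produce signed priors.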

Figure 2 shows the priors obtained by optimizing the locations of the mass points, $\theta_{j}$, and the respective prior weights, $w_{j}$. The left panels show priors where the number of mass points is smaller than $n+1$, while the right panels show priors with $n+1$ mass points, which guarantees that an exact representation is possible even without optimization of the locations. Note, however, that the divergence optimization method we use can only deal with non-negative prior weights. The obtained mixtures had small Kullback-Leibler divergence and worst-case ratio close to one in each case.

IV Conclusions and Future Work

Unlike many earlier studies that have focused on either finite-sample or asymptotic approximations of the normalized maximum likelihood (NML) distribution, the focus of the present paper is on exact representations. We showed that an exact representation of NML as a Bayes-like mixture with a possibly signed prior exists under a mild condition related to linear independence of a subset of the statistical model under consideration. We presented two techniques for finding the required signed priors in the case of Bernoulli trials.


Fig. 1: Examples of priors representing the NML distribution in the Bernoulli model. The left panels show priors for mass points chosen at uniform intervals in $[0,1]$; the right panels show priors for the same number of mass points at the quantiles of the Beta(1/2,1/2) distribution. The prior weights are obtained by directly solving a set of linear equations. Negative weights are plotted in red.

Fig. 2: Examples of priors representing the NML distribution in the Bernoulli model. Both the locations of the mass points, $\theta_{j}$, and the respective prior weights, $w_{j}$, are optimized using a Newton-type method. The left panels show priors with fewer than $n+1$ mass points; the right panels show priors with $n+1$ mass points.

The implications of this work are two-fold. First, from a theoretical point of view, it provides insight into the relationship between MDL and Bayesian methods by demonstrating that in some models, a finite-sample Bayes-like counterpart to NML only exists when the customary assumption that prior probabilities are non-negative is removed. This complements earlier asymptotic and approximate results. Second, from a practical point of view, a Bayes-like representation offers a computationally efficient way to extract marginal probabilities $q(x^{k})$ and conditional probabilities $q(x_{k+1}\mid x^{k})$ for $k<n$, where $n$ is the total sample size. These probabilities are required in, for instance, data compression using arithmetic coding.

Other algorithms will be explored in the full paper, along with other families, including the truncated Poisson and multinomial, which have an interesting relationship between the supports of the prior. The full paper will also show how matching NML produces a prior with some interesting prediction properties. In particular, in Bernoulli trials, by the NML matching device, we can arrange a prior in which the posterior mean of the log odds is the same as the maximum likelihood estimate of the log odds, whenever the count of ones is neither $0$ nor $n$.

References

  • [1] A. R. Barron, J. Rissanen and B. Yu, “The minimum description length principle in coding and modeling,” IEEE Trans. Inform. Theory, Vol. 44 No. 6, pp. 2743 - 2760, 1998.
  • [2] P. Bartlett, P. Grünwald, P. Harremoës, F. Hedayati, W. Kotłowski, "Horizon-independent optimal prediction with log-loss in exponential families," arXiv:1305.4324v1, May 2013.
  • [3] B. Clarke & A. R. Barron, “Jeffreys prior is asymptotically least favorable under entropy risk,” J. Statistical Planning and Inference, 41:37-60, 1994.
  • [4] P. D. Grünwald, The Minimum Description Length Principle, MIT Press, 2007.
  • [5] P. Kontkanen & P. Myllymäki. “A linear-time algorithm for computing the multinomial stochastic complexity.” Information Processing Letters, vol.103, pp.227-233, 2007.
  • [6] D. Haussler, “A general minimax result for relative entropy,” IEEE Trans. Inform. Theory, vol. 43, no. 4, pp. 1276-1280, 1997.
  • [7] A. Rényi, "On measures of entropy and information," Proc. of the Fourth Berkeley Symp. on Math. Statist. and Prob., vol. 1, Univ. of Calif. Press, pp. 547-561, 1961.
  • [8] Yu M. Shtarkov, “Universal sequential coding of single messages,” Problems of Information Transmission, vol. 23, pp. 3-17, July 1988.
  • [9] J. Takeuchi & A. R. Barron, “Asymptotically minimax regret by Bayes mixtures,” Proc. 1998 IEEE ISIT, 1998.
  • [10] J. Takeuchi & A. R. Barron, “Asymptotically minimax regret by Bayes mixtures for non-exponential families,” Proc. 2013 IEEE ITW, pp.204-208, 2013.
  • [11] K. Watanabe & S. Ikeda, “Convex formulation for nonparametric estimation of mixing distribution,” Proc. 2012 WITMSE, pp. 36-39, 2012.
  • [12] K. Watanabe, T. Roos & P. Myllymäki, “Achievability of asymptotic minimax regret in online and batch prediction,” Proc. 2013 ACML, pp. 181-196, 2013.
  • [13] Q. Xie & A. R. Barron, “Minimax redundancy for the class of memoryless sources”, IEEE Trans. Inform. Theory, vol. 43, pp. 646-657, 1997.
  • [14] Q. Xie & A. R. Barron, “Asymptotic minimax regret for data compression, gambling and prediction,” IEEE Trans. Inform. Theory, vol. 46, pp. 431-445, 2000.