Supervised learning, the task of inferring a function that predicts a target
from a feature vector by using labeled training samples, has been a problem of central interest in machine learning. Given the underlying distribution
, the optimal prediction rules have long been studied and formulated in the statistics literature. However, the advent of high-dimensional problems raised an important question: what would be a good prediction rule when we do not have enough samples to estimate the underlying distribution?
To understand the difficulty of learning in high-dimensional settings, consider a genome-based classification task where we seek to predict a binary trait of interest from an observation of SNPs, each of which can be considered a discrete variable . Hence, to estimate the underlying distribution we need samples.
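To make the counting argument concrete, here is a short illustrative computation (the function name and the specific numbers, 3 genotype values and 100 SNPs, are ours, not from the text) showing how the number of parameters of a full joint distribution explodes with the dimension:

```python
# Free parameters of a full joint distribution over d discrete features,
# each taking k values: the joint table has k**d entries summing to one.
def num_joint_parameters(k: int, d: int) -> int:
    return k ** d - 1

# Even modest genomic dimensions are hopeless to estimate directly:
# 3 genotype values per SNP and 100 SNPs give roughly 5.2e47 parameters.
print(num_joint_parameters(3, 100))
```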
With no possibility of estimating the underlying
in such problems, several approaches have been proposed to deal with high-dimensional settings. The standard approach in statistical learning theory is empirical risk minimization (ERM) [vapnik]. ERM learns the prediction rule by minimizing an approximation of the loss under the empirical distribution of the samples. To avoid overfitting, however, ERM restricts the set of allowable decision rules to a class of functions with limited complexity, measured through its VC-dimension. Moreover, the ERM problem for several interesting loss functions, such as the 0-1 loss, is computationally intractable [NP_01].
This paper focuses on a complementary approach to ERM where one can learn the prediction rule through minimizing a decision rule’s worst-case loss over a larger set of distributions centered at the empirical distribution . In other words, instead of restricting the class of decision rules, we consider and evaluate all possible decision rules, but based on a more stringent criterion that they will have to perform well over all distributions in . As seen in Figure 2, this minimax approach can be broken into three main steps:
We compute the empirical distribution from the data,
We form a distribution set based on ,
We learn a prediction rule that minimizes the worst-case expected loss over .
An important example of the above minimax approach is linear regression fitted via the least-squares method, which also minimizes the worst-case squared error over all distributions with the same first and second-order moments as those empirically estimated from the samples. Other special cases of this minimax approach, which are also based on learning a prediction rule from low-order marginals/moments, have been addressed in the literature: [mpm] solves a robust minimax classification problem for continuous settings with fixed first and second-order moments; [DCC] develops a classification approach by minimizing the worst-case hinge loss subject to fixed low-order marginals; [DRC] fits a model minimizing the maximal correlation under fixed pairwise marginals to design a robust classification scheme. In this paper, we develop a general minimax approach for supervised learning problems with arbitrary loss functions.
To formulate Step 3 in Figure 2, given a general loss function and a set of distributions , we generalize the problem formulation discussed in [DCC] to
Here, is the space of all decision rules. Notice the difference with the ERM problem where was restricted to smaller function classes while .
If we have to predict with no access to , (1) reduces to
where is the action space for loss function . For being the logarithmic loss function (log loss), Topsoe [topsoe1979information] reduces (2) to the entropy maximization problem over . This result is based on Sion's minimax theorem [sion], which shows that under some mild conditions one can exchange the order of the min and max in the minimax problem. Note that when is the log loss, the maximin problem corresponding to (2) results in a maximum entropy problem. More generally, this result provides a game-theoretic interpretation of the principle of maximum entropy introduced by Jaynes in [jaynes]. By the principle of maximum entropy, one should select and act based on a distribution in that maximizes the Shannon entropy.
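As a small numerical sketch of the principle (our illustration, not part of the original development): over distributions on the alphabet {0, 1, 2} constrained to have mean 1, the uniform distribution is feasible and attains the global entropy maximum log(3), so no sampled feasible distribution can beat it.

```python
import math
import random

def shannon_entropy(p):
    """Shannon entropy (in nats) of a probability vector."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

# Sample random distributions on {0, 1, 2}, keep those whose mean is
# approximately 1, and track the best entropy found among them.
random.seed(0)
best = 0.0
for _ in range(20000):
    a, b = sorted(random.random() for _ in range(2))
    p = [a, b - a, 1 - b]          # uniform sample from the simplex
    mean = p[1] + 2 * p[2]
    if abs(mean - 1.0) > 0.02:
        continue
    best = max(best, shannon_entropy(p))

# The uniform distribution has mean 1 and entropy log(3), the global
# maximum, so it also solves the constrained problem.
uniform_entropy = shannon_entropy([1 / 3, 1 / 3, 1 / 3])
```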
Grünwald and Dawid [MaxEnt] generalize the minimax theorem for log loss in [topsoe1979information] to other loss functions, showing (2) and its corresponding maximin problem have the same solution for a large class of loss functions. They further interpret the maximin problem as maximizing a generalized entropy function, which motivates generalizing the principle of maximum entropy for other loss functions: Given loss function , select and act based on a distribution maximizing the generalized entropy function for . Based on their minimax interpretation, the maximum entropy principle can be used for a general loss function to find and interpret the optimal action minimizing the worst-case expected loss in (2).
How can we use the principle of maximum entropy to solve (1) where we observe as well? A natural idea is to apply the maximum entropy principle to the conditional instead of the marginal . This idea motivates a generalized version of the principle of maximum entropy, which we call the principle of maximum conditional entropy. Conditional entropy maximization for prediction problems was first introduced and interpreted by Berger et al. [berger], who proved that the logistic regression model maximizes the conditional entropy over a particular set of distributions. In this work, we extend the minimax interpretation from the maximum entropy principle [topsoe1979information, MaxEnt] to the maximum conditional entropy principle, which reveals how the maximum conditional entropy principle breaks Step 3 into two smaller steps:
We search for the distribution maximizing the conditional entropy over ,
We find the optimal decision rule for .
Although the principle of maximum conditional entropy characterizes the solution to (1), computing the maximizing distribution is hard in general. In [minMI], the authors propose a conditional version of the principle of maximum entropy for the specific case of Shannon entropy and draw its connection to (1). They call it the principle of minimum mutual information, by which one should predict based on the distribution minimizing the mutual information between and . However, they develop their theory for a broad class of distribution sets, which results in a convex problem whose number of variables is exponential in the dimension of the problem.
To overcome this issue, we propose a specific structure for the distribution set by matching the marginal
of all the joint distributions in to the empirical marginal , while matching only the cross-moments between and with those of the empirical distribution . We show that this choice of has two key advantages: 1) the minimax decision rule can be computed efficiently; 2) the minimax generalization error can be controlled by allowing a level of uncertainty in the matching of the cross-moments, which can be viewed as regularization in the minimax framework.
More importantly, by applying this idea to the generalized conditional entropy, we generalize the duality shown in [berger] between the maximum conditional Shannon entropy problem and the maximum likelihood problem for fitting the logistic regression model. In particular, we show how under the quadratic and logarithmic loss functions our framework leads to the linear regression and logistic regression models, respectively. Through the same framework, we also derive a classifier which we call the maximum entropy machine (MEM). We further show how regularization in the empirical risk minimization problem can be interpreted as an expansion of the uncertainty set in the dual maximum conditional entropy problem, which allows us to bound the worst-case generalization error in the minimax framework.
2 Two Examples
In this section, we highlight two important examples to compare the minimax approach with the ERM approach. These examples also motivate a particular structure for the distribution set in the minimax approach, which was discussed earlier in the introduction.
2.1 Regression: Squared-error
Consider a regression task to predict a continuous from feature vector . A well-known approach for this task is the linear regression, where one considers the set of linear prediction rules . The ERM problem over this function class is the least-squares problem, where given samples we solve
Interestingly, the minimax approach for the squared-error loss also results in the linear regression and least-squares method. Consider the space of functions and define as the following set of distributions fixing the cross-moments and the marginal using the data
where is the empirical marginal . Then, in Section 4 we show if we solve the minimax problem
the minimax optimal is a linear function which is the same as the solution to the least-squares problem. This simple example motivates the minimax approach and defined above by fixing the cross-moments and marginal. Note that the maximin problem corresponding to (5) is
where is in fact the generalized conditional entropy for the squared-error loss function.
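The equivalence above can be checked numerically. The following sketch, with synthetic data and coefficients of our own choosing, verifies that the least-squares solution depends on the data only through the empirical second moments and cross-moments, which are exactly the statistics the distribution set fixes.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 3
X = rng.normal(size=(n, d))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=n)

# ERM / least-squares solution: beta = argmin ||y - X beta||^2.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# beta solves the normal equations X^T X beta = X^T y, so it is determined
# only by the empirical moments X^T X and X^T y, never the full distribution.
residual = X.T @ X @ beta - X.T @ y
```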
2.2 Classification: 0-1 Loss
The 0-1 loss is a loss function of central interest for the classification task. In the binary classification problem, the empirical risk minimization problem over linear decision rules is commonly formulated as
where denotes the indicator function. This ERM problem, which minimizes the number of misclassifications over the training samples, is known to be non-convex and NP-hard [NP_01]. To resolve this issue in practice, the 0-1 loss is replaced with a surrogate loss function. The hinge loss is an important example of a surrogate loss function; it is empirically minimized by the Support Vector Machine (SVM) [SVM] and is defined for a binary as
On the other hand, one can change the loss function from squared-error to 0-1 loss and solve the minimax problem (5) instead of empirical risk minimization. Then, by swapping the order of min and max as
we reduce the 0-1 loss minimax problem to the maximization of a concave objective over a convex set of probability distributions . Therefore, unlike the ERM problem with the 0-1 loss, this minimax problem can be solved efficiently by the MEM method. In fact, MEM solves the maximum conditional entropy problem by reducing it to a convex ERM problem. For a binary , the new ERM problem has a loss function which we call the minimax hinge loss, defined as
As seen in Figure 2, the minimax hinge loss is different from the hinge loss; while the hinge loss is an ad hoc surrogate loss function, the minimax hinge loss emerges naturally from the minimax framework. Another notable difference between the ERM and minimax frameworks is that while the linear prediction rule coming from the ERM framework is deterministic, the prediction rule resulting from the minimax approach is randomized linear (see Figure 3). Indeed, the relaxation from deterministic rules to randomized rules is an important step in overcoming the computational intractability of the 0-1 loss minimization problem in the minimax approach. We discuss the details of the randomized prediction rule for MEM later in Section 4. The 0-1 loss therefore provides an important example where, unlike the ERM problem, the generalized maximum entropy framework developed in [MaxEnt] results in a computationally tractable problem which is well-connected to the loss function.
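The two surrogate losses can be compared numerically. Since the displayed formula is elided above, the minimax-hinge form used below, max{0, (1 - z)/2, -z}, is our assumption, consistent only with the description that it differs from the hinge loss; treat it as illustrative rather than the paper's exact definition.

```python
def hinge(z: float) -> float:
    """Standard hinge loss, max{0, 1 - z}, minimized by the SVM."""
    return max(0.0, 1.0 - z)

def minimax_hinge(z: float) -> float:
    """Assumed minimax hinge form: max{0, (1 - z)/2, -z}."""
    return max(0.0, (1.0 - z) / 2.0, -z)

# Both are convex, piecewise-linear, and vanish for confidently correct
# predictions (z >= 1), but the minimax hinge never exceeds the hinge.
margins = [-2.0, -1.0, 0.0, 0.5, 1.0, 2.0]
comparison = [(z, hinge(z), minimax_hinge(z)) for z in margins]
```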
3 Principle of Maximum Conditional Entropy
In this section, we provide a conditional version of the key definitions and results developed in [MaxEnt]. We propose the principle of maximum conditional entropy to break Step 3 into 3a and 3b in Figure 1. We also define and characterize Bayes decision rules for different loss functions to address Step 3b.
3.1 Decision Problems, Bayes Decision Rules, Conditional Entropy
Consider a decision problem. Here the decision maker observes , from which she predicts a random target variable using an action . Let be the underlying distribution for the random pair . Given a loss function , indicates the loss suffered by the decision maker by deciding action when . The decision maker uses a decision rule to select an action from based on an observation . We will in general allow the decision rules to be random, i.e. is random. The main purpose of extending to the space of randomized decision rules is to form a convex set of decision rules. Later in Theorem 1, this convexity is used to prove a saddle-point theorem.
We call a (randomized) decision rule a Bayes decision rule if for all decision rules and for all :
It should be noted that depends only on , i.e. it remains a Bayes decision rule under a different . The (unconditional) entropy of is defined as [MaxEnt]
Similarly, we can define conditional entropy of given as
and the conditional entropy of given as
Note that and are both concave in . Applying Jensen’s inequality, this concavity implies that
which motivates the following definition for the information that carries about ,
i.e. the reduction of the expected loss in predicting by observing . In [Dawid], the author defines the same concept, which he calls a coherent dependence measure. It can be seen that
where is the divergence measure corresponding to the loss , defined for any two probability distributions with Bayes actions as [MaxEnt]
3.2.1 Logarithmic loss
For an outcome and distribution , define the logarithmic loss as . It can be seen that , , and are the well-known Shannon entropy, conditional Shannon entropy, and mutual information [cover]. Also, the Bayes decision rule for a distribution is given by .
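For finite alphabets these quantities are straightforward to compute directly. The following sketch (ours, not from the text) evaluates the conditional Shannon entropy H(Y|X) and the mutual information from a joint probability table:

```python
import math

def entropy(p):
    """Shannon entropy H(Y) in nats of a marginal probability vector."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def cond_entropy(joint):
    """H(Y|X) in nats for a joint table joint[x][y] of P(X=x, Y=y)."""
    h = 0.0
    for row in joint:
        px = sum(row)
        for pxy in row:
            if pxy > 0:
                h -= pxy * math.log(pxy / px)
    return h

# Independent X, Y: observing X removes no uncertainty about Y, so
# H(Y|X) = H(Y) and the mutual information I(X;Y) = H(Y) - H(Y|X) is zero.
joint_indep = [[0.25, 0.25], [0.25, 0.25]]
mi = entropy([0.5, 0.5]) - cond_entropy(joint_indep)
```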
3.2.2 0-1 loss
The 0-1 loss function is defined for any as . Then, we can show
The Bayes decision rule for a distribution is the well-known maximum a posteriori (MAP) rule, i.e.
3.2.3 Quadratic loss
The quadratic loss function is defined as . It can be seen
The Bayes decision rule for any is the well-known minimum mean-square error (MMSE) estimator that is .
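The two Bayes rules just described can be stated in a few lines of code; this is an illustration with a hypothetical posterior over three labels, not a construction from the text:

```python
def map_rule(p_y_given_x):
    """Bayes rule for the 0-1 loss: predict the most probable label (MAP)."""
    return max(p_y_given_x, key=p_y_given_x.get)

def mmse_rule(p_y_given_x):
    """Bayes rule for the quadratic loss: the conditional mean (MMSE)."""
    return sum(y * p for y, p in p_y_given_x.items())

posterior = {0: 0.2, 1: 0.5, 2: 0.3}
map_rule(posterior)    # -> 1
mmse_rule(posterior)   # approximately 1.1
```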
3.3 Principle of Maximum Conditional Entropy & Robust Bayes decision rules
Given a distribution set , consider the following minimax problem to find a decision rule minimizing the worst-case expected loss over
where is the space of all randomized mappings from to and denotes the expected value over distribution . We call any solution to the above problem a robust Bayes decision rule against . The following results motivate a generalization of the maximum entropy principle to find a robust Bayes decision rule. Refer to the Appendix for the proofs.
(Weak Version) Suppose is convex and closed, and let be a bounded loss function. Assume are finite and that the risk set is closed. Then there exists a robust Bayes decision rule against , which is a Bayes decision rule for a distribution that maximizes the conditional entropy over .
(Strong Version) Suppose is convex and that under any there exists a Bayes decision rule. We also assume the continuity in Bayes decision rules for distributions in (See the Appendix for the exact condition). Then, if maximizes over , any Bayes decision rule for is a robust Bayes decision rule against .
Principle of Maximum Conditional Entropy: Given a set of distributions , predict based on a distribution in that maximizes the conditional entropy of given , i.e.
Note that while the weak version of Theorem 1 guarantees only the existence of a saddle point for (16), the strong version further guarantees that any Bayes decision rule of the maximizing distribution results in a robust Bayes decision rule. However, the continuity in Bayes decision rules does not hold for the discontinuous 0-1 loss, which requires the weak version of Theorem 1 to address this case.
4 Prediction via Maximum Conditional Entropy Principle
Consider a prediction task with target variable and feature vector . We do not require the variables to be discrete. As discussed earlier, the maximum conditional entropy principle reduces (16) to (17), which formulate steps 3 and 3a in Figure 2, respectively. However, a general formulation of (17) in terms of the joint distribution leads to an exponential computational complexity in the feature dimension .
The key question is therefore under what structures of in Step 2 we can solve (17) efficiently. In this section, we propose a specific structure for , under which we provide an efficient solution to Steps 3a and 3b in Figure 1. In addition, we prove a bound on the generalization worst-case risk for the proposed . In fact, we derive these results by reducing (17) to the maximum likelihood problem over a generalized linear model, under this specific structure.
To describe this structure, consider a set of distributions centered around a given distribution , where for a given norm , mapping vector ,
Here encodes with -dimensional , and denotes the th entry of . The first constraint in the definition of requires all distributions in to share the same marginal on as ; the second imposes constraints on the cross-moments between and , allowing for some uncertainty in estimation. When applied to the supervised learning problem, we will choose to be the empirical distribution and select appropriately based on the loss function . However, for now we will consider the problem of solving (17) over for general and .
To that end, we use a technique similar to the one in Fenchel's duality theorem, also used in [altun2006, dudik, semi2010] to address divergence minimization problems. However, we consider a different version of the convex conjugate for , which is defined with respect to . Considering as the set of all probability distributions for the variable , we define as the convex conjugate of with respect to the mapping ,
Refer to the supplementary material for the proof. ∎
When applying Theorem 2 to a supervised learning problem with a specific loss function, will be chosen such that provides sufficient information to compute the Bayes decision rule for . This enables the direct computation of , i.e. Step 3 of Figure 2, without the need to explicitly compute itself. For the loss functions discussed in Subsection 3.2, we choose the identity
for the quadratic loss and the one-hot encoding for the logarithmic and 0-1 loss functions. Later in this section, we discuss how this theorem applies to these loss functions.
We make the key observation that the problem in the RHS of (20), when for all ’s, is equivalent to minimizing the negative log-likelihood for fitting a generalized linear model [GLM] given by
An exponential-family distribution with the log-partition function and the sufficient statistic ,
A linear predictor, ,
A mean function, .
Therefore, Theorem 2 reveals a duality between the maximum conditional entropy problem over and the regularized maximum likelihood problem for the specified generalized linear model. This duality further provides a minimax justification for generalized linear models and fitting them using maximum likelihood, since we can consider the convex conjugate of its log-partition function as the negative entropy in the maximum conditional entropy problem.
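The GLM structure can be made concrete for the binary (Bernoulli) case. The log-partition function A(eta) = log(1 + e^eta) and its derivative, the logistic mean function, are standard GLM facts rather than quantities taken from the elided equations; the sketch below checks numerically that the mean function is the gradient of the log-partition function, which is the structure the duality exploits.

```python
import math

def log_partition(eta: float) -> float:
    """Log-partition of the Bernoulli GLM: A(eta) = log(1 + exp(eta))."""
    return math.log1p(math.exp(eta))

def mean_function(eta: float) -> float:
    """Its derivative, the logistic (sigmoid) mean function."""
    return 1.0 / (1.0 + math.exp(-eta))

# Central finite difference of A at an arbitrary point matches the sigmoid.
eta, h = 0.7, 1e-6
numeric = (log_partition(eta + h) - log_partition(eta - h)) / (2 * h)
```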
4.1 Generalization Bound on the Worst-case Risk
By establishing the objective’s Lipschitzness and boundedness through appropriate assumptions, we can apply standard results to bound the rate of uniform convergence for the problem in the RHS of (20). Here we consider the uniform convergence of the empirical averages, when is the empirical distribution of samples drawn i.i.d. from the underlying distribution , to their expectations when .
In the supplementary material, we prove the following theorem which bounds the generalization worst-case risk, by interpreting the mentioned uniform convergence on the other side of the duality. Here and denote the robust Bayes decision rules against and , respectively. As explained earlier, by the maximum conditional entropy principle we can learn by solving the RHS of (20) for the empirical distribution and then applying (21).
Consider a loss function with the entropy function and suppose includes only one element, i.e. . Let be the maximum entropy value over . Also, take to be the pair where , . Given that and , for any with probability at least
Theorem 3 states that though we learn the prediction rule by solving the maximum conditional entropy problem for the empirical case, we can bound the excess -based worst-case risk. This generalization result justifies the constraint of fixing the marginal across the proposed and explains the role of the uncertainty parameter in bounding the generalization worst-case risk.
4.2 Geometric Interpretation of Theorem 2
By solving the regularized maximum likelihood problem in the RHS of (20), we in fact minimize a regularized KL-divergence
where is the set of all exponential-family conditional distributions for the specified generalized linear model. This can be viewed as projecting onto (See Figure 4).
Furthermore, it can be seen that for a label-invariant entropy function
, the Bayes act for the uniform distribution leads to the same expected loss under any distribution on . Based on the definition of the divergence in (15), maximizing over in the LHS of (20) is therefore equivalent to the following divergence minimization problem
Here denotes the uniform conditional distribution over given any . This can be interpreted as projecting the joint distribution onto (See Figure 4). Then, the duality shown in Theorem 2 implies the following corollary.
4.3.1 Logarithmic Loss: Logistic Regression
To gain sufficient information for the Bayes decision rule under the logarithmic loss, for , let be the one-hot encoding of , i.e. for . Here, we exclude as . Then
which is the logistic regression model [elements]. Also, the RHS of (20) becomes the regularized maximum likelihood problem for logistic regression. This particular result is well-studied in the literature and follows directly from the duality shown in [berger].
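As an illustration only (synthetic data, with a step size and penalty weight of our own choosing rather than values from the paper), the regularized maximum likelihood problem for logistic regression can be solved with plain gradient descent:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 300, 2
X = rng.normal(size=(n, d))
w_true = np.array([2.0, -1.0])
y = (rng.random(n) < 1 / (1 + np.exp(-X @ w_true))).astype(float)

# Gradient descent on the negative log-likelihood plus an l2 penalty;
# lam and the step size 0.5 are illustrative choices.
lam, w = 0.1, np.zeros(d)
for _ in range(500):
    p = 1 / (1 + np.exp(-X @ w))          # logistic mean function
    grad = X.T @ (p - y) / n + lam * w    # penalized NLL gradient
    w -= 0.5 * grad
```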
4.3.2 0-1 Loss: Maximum Entropy Machine
To get sufficient information for the Bayes decision rule under the 0-1 loss, we again consider the one-hot encoding described for the logarithmic loss. We show in the Appendix that if and denotes the th largest element of ,
In particular, if is binary where
Then, if , the maximum likelihood problem (20) for learning the optimal linear predictor given samples will be
The first term is the empirical risk of a linear classifier over the minimax-hinge loss as shown in Figure 2. In contrast, the standard SVM is formulated using the hinge loss :
We therefore call this classification approach the maximum entropy machine. However, unlike the standard SVM, the maximum entropy machine is naturally extended to multi-class classification.
Using Theorem 1.A (we show that, given the specific structure of , Theorem 1.A holds whether is finite or infinite), we prove that for the 0-1 loss the robust Bayes decision rule exists and is randomized in general, where given the optimal linear predictor it randomly predicts a label according to the following -based distribution on labels
Here is the permutation sorting in the ascending order, i.e. , and is the largest index satisfying . For example, in the binary case discussed, the maximum entropy machine first solves (28) to find the optimal and then predicts label vs. label with probability .
We can also find the conditional-entropy maximizing distribution via (21), where the gradient of is given by
Note that is not differentiable if , but the above vector is still in the subgradient . Although we can find the -maximizing distribution, there could be multiple Bayes decision rules for that distribution. Since the strong result in Theorem 1 does not hold for the 0-1 loss, we are not guaranteed that all of these decision rules are robust against . However, as we show in the Appendix, the randomized decision rule given by (30) is robust.
4.3.3 Quadratic Loss: Linear Regression
Based on the Bayes decision rule for the quadratic loss, we choose . To derive , note that if we let in (19) include all possible distributions, the maximized entropy (the variance for the quadratic loss) and thus the value of would be infinite. Therefore, given a parameter , we restrict the second moment of distributions in and then apply (19). We show in the Appendix that an adjusted version of Theorem 2 holds after this change, and
which is the Huber function [huber1981]. To find via (21), we have
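For reference, here is a sketch of the textbook Huber function. Since the displayed formula and the paper's exact scaling are elided above, the threshold parameter `rho` and the form below are the standard ones from [huber1981] and may differ from the paper's version by constants.

```python
def huber(x: float, rho: float = 1.0) -> float:
    """Textbook Huber function: quadratic near zero, linear in the tails."""
    if abs(x) <= rho:
        return 0.5 * x * x
    return rho * (abs(x) - 0.5 * rho)
```

The two branches agree at |x| = rho (both give rho**2 / 2), so the function and its derivative are continuous there, which is what makes it a smooth robust alternative to the pure quadratic.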
Given the samples of a supervised learning task if we choose the parameter large enough, by solving the RHS of (20) when is replaced with and set greater than , we can equivalently take . Then, (33) reduces to the linear regression model and the maximum likelihood problem in the RHS of (20) is equivalent to
Least squares when .
Lasso [lasso, Lasso_Donoho] when is the pair.
Ridge regression [Ridge] when is the -norm.
(overlapping) Group lasso [grouplasso, GLoverlap] with the penalty when is defined, given subsets of and , as
See the Appendix for the proofs. Another type of minimax, but non-probabilistic, argument for the robustness of lasso-like regression algorithms can be found in [RobustLasso, RobustLassoLike].
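As a sketch of one special case above, ridge regression with a squared l2 penalty admits a closed form that can be verified against its optimality condition; the data and penalty weight below are synthetic choices of ours.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(50, 4))
y = rng.normal(size=50)
lam = 0.5

# Ridge closed form: beta = (X^T X + lam I)^{-1} X^T y, the minimizer of
# ||y - X beta||^2 + lam ||beta||^2.
beta = np.linalg.solve(X.T @ X + lam * np.eye(4), X.T @ y)
```

At the minimizer, the gradient X^T (X beta - y) + lam beta vanishes, which is exactly the identity the test below checks.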
5 Robust Feature Selection
Using a minimax criterion over a set of distributions , we solve the following problem to select the most informative subset of features,
which under the assumption that the marginal is fixed across all distributions in is equivalent to selecting a subset maximizing the worst-case generalized information over , i.e.
Here by constraining where denotes the th column of , we impose the same sparsity pattern across the rows of . Let be the -norm and relax the above problem to
Note that if, for the uncertainty parameters 's, the solution to (39) satisfies (due to the tendency of -regularization to produce sparse solutions), then it is the solution to (38) as well. In addition, based on the generalization bound established in Theorem 3, by allowing some gap we can generalize this sparse solution to (35) with for the underlying distribution .
It is noteworthy that for the quadratic loss and the identity , (39) is the same as the lasso. Also, for the logarithmic loss and the one-hot encoding , (39) is equivalent to -regularized logistic regression. Hence, -regularized logistic regression maximizes the worst-case mutual information over , which seems superior to methods maximizing a heuristic objective instead of the mutual information [pengfeature, feature2].
6 Numerical Experiments
We evaluated the performance of the maximum entropy machine on six binary classification datasets from the UCI repository, comparing against five benchmarks: Support Vector Machines (SVM), Discrete Chebyshev Classifiers (DCC) [DCC], the Minimax Probabilistic Machine (MPM) [mpm], Tree Augmented Naive Bayes (TAN) [tan], and Discrete Rényi Classifiers (DRC) [DRC]. The results are summarized in Table 1, where the numbers indicate the percentage of classification error.
We implemented the maximum entropy machine by applying gradient descent to (28) with the regularizer . We determined the value of the lambda coefficient by cross-validation: we used a randomly-selected 70% of the training set for training and the remaining 30% for validation, testing the values in . Using the tuned lambda, we trained the algorithm on the entire training set and then evaluated the error rate over the test set. We performed this procedure in 1000 Monte Carlo runs, each training on 70% of the data points and testing on the remaining 30%, and averaged the results.
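The tuning procedure above can be sketched as follows; the grid of candidate values is illustrative (the paper's exact grid is elided), and `train_and_eval` is a hypothetical caller-supplied routine, not a function from the paper.

```python
import random

def tune_lambda(samples, train_and_eval, grid=(0.01, 0.1, 1.0)):
    """Hold out 30% of the training samples and return the lambda in `grid`
    whose trained model attains the lowest held-out error.

    `train_and_eval(train, held_out, lam)` must return the held-out error
    rate; its name and signature are our illustrative assumptions.
    """
    samples = list(samples)
    random.seed(0)
    random.shuffle(samples)
    cut = int(0.7 * len(samples))
    train, held_out = samples[:cut], samples[cut:]
    return min(grid, key=lambda lam: train_and_eval(train, held_out, lam))
```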
As seen in the table, the maximum entropy machine achieves the best performance on four of the six datasets. Also, note that except for a single dataset, the maximum entropy machine outperforms SVM. To compare these methods in high-dimensional problems, we ran an experiment over synthetic data with samples and features. We generated features by i.i.d. Bernoulli with , and considered where . Using the same approach, we measured error rates of 20.6% for SVM, 20.4% for DRC, and 20.0% for the MEM, which shows that the MEM can outperform SVM and DRC in high-dimensional settings as well.
We are grateful to Stanford University for providing a Stanford Graduate Fellowship, and to the Center for Science of Information (CSoI), an NSF Science and Technology Center under grant agreement CCF-0939370, for support during this research.
7.1 Proof of Theorem 1
7.1.1 Weak Version
First, we list the assumptions of the weak version of Theorem 1:
is convex and closed,
Loss function is bounded by a constant ,
Risk set is closed.
Given these assumptions, Sion’s minimax theorem [sion] implies that the minimax problem has a finite answer ,
Thus, there exists a sequence of decision rules for which
As we supposed, the risk set is closed. Therefore, the randomized risk set ( is a short-form for , where is a random action distributed according to ) defined over the space of randomized acts is also closed and, since is bounded, is a compact subset of . Therefore, since and are both finite, we can find a randomized decision rule which, on taking a subsequence, satisfies
Then is a robust Bayes decision rule against , because
Moreover, since is assumed to be convex and closed (hence compact), achieves its supremum over at some distribution . By the definition of conditional entropy, (43) implies that
which shows that is a Bayes decision rule for as well. This completes the proof.
7.1.2 Strong Version
Let’s recall the assumptions of the strong version of Theorem 1:
For any distribution , there exists a Bayes decision rule.
We assume continuity in Bayes decision rules over , i.e., if a sequence of distributions with the corresponding Bayes decision rules converges to with a Bayes decision rule , then under any , the expected loss of converges to the expected loss of .
maximizes the conditional entropy .
Note: A particular structure used in our paper is given by fixing the marginal across . Under this structure, the condition of the continuity in Bayes decision rules reduces to the continuity in Bayes acts over ’s in . It can be seen that while this condition holds for the logarithmic and quadratic loss functions, it does not hold for the 0-1 loss.
Let be a Bayes decision rule for . We need to show that is a robust Bayes decision rule against . To show this, it suffices to show that is a saddle point of the mentioned minimax problem, i.e.,
Clearly, inequality (45) holds due to the definition of the Bayes decision rule. To show (46), let us fix an arbitrary distribution . For any , define . Notice that since is convex. Let be a Bayes decision rule for . Due to the linearity of the expected loss in the probability distribution, we have
for any . Here the first inequality is due to the definition of the conditional entropy and the last inequality holds since maximizes the conditional entropy over . Applying the assumption of the continuity in Bayes decision rules, we have
which makes the proof complete.
7.2 Proof of Theorem 2
Let us recall the definition of the set :