1 Binary classification as hypothesis testing
Classification, when a complete probabilistic description of the classes of interest is available, coincides with statistical hypothesis testing. For this reason we revisit the basic results of this simple and interesting theory and use them as a basis on which to build an equivalent data driven version of the problem.
Assume we observe a random vector $X$ for which we distinguish two possible scenarios (classes or hypotheses) regarding its probabilistic description:
$$\mathsf{H}_1: X \sim f_1(X), \qquad \mathsf{H}_2: X \sim f_2(X).$$
Here $f_1(X), f_2(X)$ are the probability densities that capture the statistical behavior of $X$ under the corresponding classes, while $\pi_1 = \mathsf{P}(\mathsf{H}_1)$ and $\pi_2 = \mathsf{P}(\mathsf{H}_2)$, with $\pi_1 + \pi_2 = 1$, express the prior probability of occurrence of each class.
We are interested in developing a mechanism which, every time we observe a vector $X$, will assign the label 1 or 2 to $X$ in an effort to identify the class the vector is coming from. Specifically, we are looking for a function $\mathsf{c}(X)$ with values in the set $\{1,2\}$ and we would like to select it properly. Following a Bayesian approach, each function $\mathsf{c}(X)$ produces labeling errors with the corresponding error probability being equal to
$$\mathsf{P}_{\mathrm{err}}(\mathsf{c}) = \pi_1 \mathsf{P}_1\big(\mathsf{c}(X)=2\big) + \pi_2 \mathsf{P}_2\big(\mathsf{c}(X)=1\big) = \pi_1 \mathsf{E}_1\big[\mathbb{1}_{\{\mathsf{c}(X)=2\}}\big] + \pi_2 \mathsf{E}_2\big[\mathbb{1}_{\{\mathsf{c}(X)=1\}}\big],$$
where $\mathsf{P}_i$ denotes probability under the density $f_i$, $\mathsf{E}_i$ the corresponding expectation and $\mathbb{1}_A$ the indicator of the event $A$.
An optimum classifier is obtained if we select $\mathsf{c}(X)$ to minimize the error probability $\mathsf{P}_{\mathrm{err}}(\mathsf{c})$. It is known [11, Pages 26–28] that the optimum is the well celebrated likelihood ratio test (LRT)
$$\frac{f_1(X)}{f_2(X)} \overset{1}{\underset{2}{\gtrless}} \frac{\pi_2}{\pi_1} \;\iff\; \pi_1 f_1(X) - \pi_2 f_2(X) \overset{1}{\underset{2}{\gtrless}} 0. \tag{1}$$
As we can see, optimum classification is achieved by consulting the sign of $\pi_1 f_1(X) - \pi_2 f_2(X)$. From (1) we understand that we could formulate the classification problem as
$$\mathsf{c}(X) = 1 \text{ if } u(X) \ge 0, \qquad \mathsf{c}(X) = 2 \text{ if } u(X) < 0, \tag{2}$$
where $u(X)$ is some scalar function, and look for the optimum $u(X)$. Clearly, restricting ourselves to this smaller class inflicts no performance loss since the optimum classifier (LRT), as pointed out in (1), can be put exactly under this form with $u(X) = \pi_1 f_1(X) - \pi_2 f_2(X)$.
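The equivalence between the ratio form and the sign form of the test in (1)–(2) can be verified numerically; the two Gaussian class densities below are illustrative assumptions, not part of the development above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example: class 1 is N(0,1), class 2 is N(2,1), equal priors.
pi1, pi2 = 0.5, 0.5
f1 = lambda x: np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
f2 = lambda x: np.exp(-(x - 2)**2 / 2) / np.sqrt(2 * np.pi)

def lrt_label(x):
    # Likelihood ratio test: decide class 1 when f1/f2 >= pi2/pi1.
    return np.where(f1(x) / f2(x) >= pi2 / pi1, 1, 2)

def sign_label(x):
    # Equivalent formulation: consult the sign of u(x) = pi1*f1(x) - pi2*f2(x).
    u = pi1 * f1(x) - pi2 * f2(x)
    return np.where(u >= 0, 1, 2)

x = rng.normal(size=1000)
assert np.array_equal(lrt_label(x), sign_label(x))
```

The two rules agree on every sample, as the algebra predicts.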
1.1 Alternative optimization problems
To find the optimum $u(X)$ one would work directly with the error probability $\mathsf{P}_{\mathrm{err}}$ and attempt to minimize it. This will obviously lead to LRT when the probability densities and the priors are known. When, however, this information is not at our disposal and we are in the purely data driven case, the same optimization problem becomes completely unsuitable. The reason is that, in order to solve it, we must limit $u(X)$ to some parametric family of functions (such as neural networks) and employ (stochastic) gradient-type algorithms to optimize for the parameters. This would inevitably require the computation of gradients of indicator functions. Unfortunately, the latter are notorious for having gradients that cannot be used in numerical computations: the gradients are either 0 or, at border points, do not exist due to discontinuity.
It is for the above reason that the minimization of the error probability is abandoned in favor of different optimization problems where gradients are well defined. Of course, there is an important property that needs to be satisfied by any alternative approach:
The solution of the alternative optimization must be equivalent to LRT
in order to be useful for the classification problem.
Otherwise it will produce “suboptimum” classification results. Fortunately, there exists a significant number of optimization problems proposed in the literature that satisfy this basic requirement. Actually, the main goal of this work is to offer additional classes of optimization problems that enjoy the same basic property and can therefore be used to design neural networks.
Regarding existing alternative optimizations, in [1], [2] one can find an interesting mathematical analysis that treats cases that can be put under the following form
$$\min_{u}\Big\{\pi_1 \mathsf{E}_1\big[\varphi\big(u(X)\big)\big] + \pi_2 \mathsf{E}_2\big[\varphi\big(-u(X)\big)\big]\Big\}, \tag{3}$$
where $\varphi(z)$ is a scalar function. Not every $\varphi(z)$ satisfies the basic requirement that the solution of the minimization in (3) is equivalent to LRT.
Table 1 depicts characteristic examples of $\varphi(z)$ and the corresponding optimum $\hat{u}(X)$ for which this desirable property is indeed valid. As we can verify, it is always possible from $\hat{u}(X)$ to produce a classifier which is equivalent to the likelihood ratio test. A slight extension of the previous formulation is obtained if we replace $u$ in (3) by $\omega(u)$, where $\omega(z)$ is a scalar increasing function. Under this more general setting fall the regularized loss [6] and the expectation loss [9] (for binary classification the latter coincides with the Chebyshev loss [7]), both of which accept optimum solutions that satisfy the basic requirement of being equivalent to LRT.
1.2 Proposed optimization problems
Let us now introduce our own class of problems. The difference between the methodology we intend to present and the existing techniques lies in the fact that in our case, provided certain very simple conditions are met, it is straightforward to show that the optimum is indeed a strategy equivalent to LRT. In place of (3), we propose the following criterion and the accompanying optimization
$$\max_{u} \mathsf{J}(u), \qquad \mathsf{J}(u) = \pi_1 \mathsf{E}_1\big[\omega\big(u(X)\big)\big] - \pi_2 \mathsf{E}_2\big[\omega\big(u(X)\big)\big], \tag{4}$$
where $\omega(z)$ is a scalar function. Using a change of measure we can rewrite our criterion as
$$\mathsf{J}(u) = \mathsf{E}\big[\omega\big(u(X)\big)\big(2\mathsf{r}(X)-1\big)\big], \tag{5}$$
where $\mathsf{E}$ denotes expectation with respect to the mixture density $f(X) = \pi_1 f_1(X) + \pi_2 f_2(X)$ and $\mathsf{r}(X) = \pi_1 f_1(X)/f(X)$ is the posterior probability that $X$ belongs to the class 1. For the pair of functions $\big(\omega(z), u(X)\big)$ we distinguish two categories.
Category A. Let $\omega(z)$ be a scalar function which satisfies
$$\min_z \omega(z) = \omega(-1) = -1, \qquad \max_z \omega(z) = \omega(1) = 1. \tag{6}$$
In other words, $\omega(z)$ has a global minimum equal to $-1$ attained at $z=-1$ and a global maximum equal to 1 attained at $z=1$. The class of functions satisfying (6) is very rich. In fact, if a scalar function has finite maximum and minimum attained at finite points then, with suitable scaling and translation of its argument and its values, it can be transformed into a function satisfying (6).
If we attempt to solve (4) then, since $|\omega(z)| \le 1$, it follows directly from (5) that
$$\mathsf{J}(u) \le \mathsf{E}\big[|2\mathsf{r}(X)-1|\big],$$
with the upper bound being attainable by $u^{\mathrm{o}}(X) = \mathrm{sign}\big(\pi_1 f_1(X) - \pi_2 f_2(X)\big)$, namely, a classifier which is equivalent to LRT.
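The bound and its attainment can be checked by Monte Carlo; in the sketch below both the Category A function $\omega(z) = 2z/(1+z^2)$ (which satisfies (6)) and the Gaussian class densities are hypothetical illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical Category A function: global min -1 at z=-1, global max +1 at z=+1.
omega = lambda z: 2 * z / (1 + z**2)

# Two 1-D Gaussian classes with priors 0.6 / 0.4 (illustrative choice).
pi1, pi2 = 0.6, 0.4
f1 = lambda x: np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
f2 = lambda x: np.exp(-(x - 1.5)**2 / 2) / np.sqrt(2 * np.pi)

# Sample from the mixture density f = pi1*f1 + pi2*f2.
n = 200000
comp = rng.random(n) < pi1
x = np.where(comp, rng.normal(0.0, 1.0, n), rng.normal(1.5, 1.0, n))

r = pi1 * f1(x) / (pi1 * f1(x) + pi2 * f2(x))   # posterior of class 1
bound = np.mean(np.abs(2 * r - 1))               # sample version of E|2r - 1|

J_opt = np.mean(omega(np.sign(2 * r - 1)) * (2 * r - 1))  # u = sign(2r - 1)
J_other = np.mean(omega(0.3 * x) * (2 * r - 1))           # arbitrary competitor

assert J_other <= bound + 1e-9   # any u respects the bound
assert abs(J_opt - bound) < 1e-9 # omega(+-1) = +-1, so the bound is attained
```

Because $\omega(\pm 1) = \pm 1$ exactly, the sign classifier attains the sample bound exactly, while the arbitrary competitor falls short.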
Category B. Here $\omega(z)$ is assumed to be strictly increasing for $z \in [-1,1]$ with $\omega(-1) = -1$ and $\omega(1) = 1$. Actually, any strictly increasing function can be properly scaled to satisfy this constraint. Consider now the classifier function $u(X)$ which we would like to determine. In the previous category $u(X)$ was not limited in any sense. In this case we impose the following boundedness condition
$$|u(X)| \le 1. \tag{7}$$
If we attempt to solve (4) then, again, it is straightforward to see that
$$\mathsf{J}(u) \le \mathsf{E}\big[|2\mathsf{r}(X)-1|\big],$$
with the upper bound attained, as in the previous category, by $u^{\mathrm{o}}(X) = \mathrm{sign}\big(\pi_1 f_1(X) - \pi_2 f_2(X)\big)$.
Even though we did not impose any additional conditions on $\omega(z)$ beyond the ones that define the two categories, it is understood that this function must be differentiable, except perhaps at a finite number of points where it must have right and left derivatives. This is necessary in order to be able to derive, in the next section, gradient-type training algorithms. Regarding Category A, it is preferable that $\omega(z)$ has no extra local extrema except the two global ones appearing at $z = \pm 1$. This will help the training algorithm avoid convergence to incorrect limits. Finally, we should mention that the log loss and the square log loss function are special cases of Category B.
2 Neural network classifiers
Suppose now that we are interested in restricting the function $u(X)$ to be the scalar output $u(X,\theta)$ of a neural network, where $\theta$ summarizes the parameters of the network. The classifier and its error probability, following (2), become functions of $\theta$:
$$\mathsf{P}_{\mathrm{err}}(\theta) = \pi_1 \mathsf{P}_1\big(u(X,\theta) < 0\big) + \pi_2 \mathsf{P}_2\big(u(X,\theta) \ge 0\big). \tag{8}$$
Similarly, for the cost proposed in (4) we have
$$\mathsf{J}(\theta) = \pi_1 \mathsf{E}_1\big[\omega\big(u(X,\theta)\big)\big] - \pi_2 \mathsf{E}_2\big[\omega\big(u(X,\theta)\big)\big]. \tag{9}$$
Maximization of $\mathsf{J}(u)$ over the classifier $u$ is reduced to maximization of $\mathsf{J}(\theta)$, namely, maximization over the parameters of the network. This will produce an optimum neural network by identifying the best $\theta$.
In the ideal case when $u(X)$ is an arbitrary scalar function, the optimum solutions of all the alternative optimization problems are equivalent, since they all match some version of LRT. Unfortunately, this significant property is lost when we limit ourselves to neural networks. The optimum parameters and, therefore, the resulting network are optimization-problem dependent.
The previous observation then raises the logical question: since the possible “optimum” choices are not equal, which is the most appropriate for classification? Clearly, the answer is: the one that minimizes $\mathsf{P}_{\mathrm{err}}(\theta)$ defined in (8), because this produces the smallest misclassification probability. We stress again that the reason we do not perform the minimization of $\mathsf{P}_{\mathrm{err}}(\theta)$ and resort to alternative problems instead is the presence of indicator functions, which makes it impossible to develop (stochastic) gradient-type algorithms that iteratively compute the minimizer of $\mathsf{P}_{\mathrm{err}}(\theta)$.
2.1 Data driven optimization problems
The next step in our presentation consists in assuming that the probability densities $f_1(X), f_2(X)$ and the prior probabilities $\pi_1, \pi_2$ are unknown. Instead, we are given two sets of data $\{X_1^1, \ldots, X_{n_1}^1\}$ and $\{X_1^2, \ldots, X_{n_2}^2\}$ which are realizations that follow the two unknown densities. Furthermore, for the sizes $n_1, n_2$ of the two data sets we assume that they are consistent with the two unknown prior probabilities in the sense that $\pi_1 \approx n_1/(n_1+n_2)$ and $\pi_2 \approx n_2/(n_1+n_2)$.
Computing the two expectations in (9) is no longer an option; hence, it makes sense to approximate the stochastic means by sample averages with the help of the available data. More precisely,
$$\mathsf{J}(\theta) \approx \widehat{\mathsf{J}}(\theta) = \frac{1}{n_1+n_2}\Big[\sum_{i=1}^{n_1} \omega\big(u(X_i^1,\theta)\big) - \sum_{j=1}^{n_2} \omega\big(u(X_j^2,\theta)\big)\Big]. \tag{10}$$
It is then clear that the maximization
$$\max_{\theta} \widehat{\mathsf{J}}(\theta) \tag{11}$$
replaces the maximization of $\mathsf{J}(\theta)$. We note that for the computation of $\widehat{\mathsf{J}}(\theta)$ in (10) we need the training data, the function $\omega(z)$ and the geometry of the neural network. No knowledge or modeling of densities or priors is necessary. We should also mention that if we perform a similar approximation for $\mathsf{P}_{\mathrm{err}}(\theta)$ we obtain
$$\mathsf{P}_{\mathrm{err}}(\theta) \approx \widehat{\mathsf{P}}_{\mathrm{err}}(\theta) = \frac{1}{n_1+n_2}\Big[\sum_{i=1}^{n_1} \mathbb{1}_{\{u(X_i^1,\theta) < 0\}} + \sum_{j=1}^{n_2} \mathbb{1}_{\{u(X_j^2,\theta) \ge 0\}}\Big], \tag{12}$$
which constitutes the preferable data driven criterion to optimize.
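A minimal sketch of the sample-average criterion in (10), using a hypothetical one-parameter-pair "network" and the Category B choice $\omega(z) = z$ with a tanh-squashed output (both illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical Category B setup: omega(z) = z, network output squashed by tanh.
omega = lambda z: z
u = lambda x, theta: np.tanh(theta[0] * x + theta[1])  # toy scalar "network"

x1 = rng.normal(0.0, 1.0, 500)   # realizations from class 1
x2 = rng.normal(2.0, 1.0, 400)   # realizations from class 2

def J_hat(theta):
    # Sample-average criterion: class-1 terms enter with +, class-2 terms
    # with -, both normalized by the total sample size n1 + n2.
    n = len(x1) + len(x2)
    return (omega(u(x1, theta)).sum() - omega(u(x2, theta)).sum()) / n

# A classifier oriented the "right way" (class 1 sits left of class 2 here)
# should score higher than one oriented the wrong way.
assert J_hat((-2.0, 2.0)) > J_hat((2.0, -2.0))
```

The criterion rewards functions that are large on class-1 samples and small on class-2 samples, exactly as (10) prescribes.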
Summary. For the design of the neural network classifier, we propose the solution of the optimization problem depicted in (11). For the function $\omega(z)$ and the output of the neural network we offer two possibilities. Category A: $\omega(z)$ must have a global minimum equal to $-1$ at $z=-1$ and a global maximum equal to 1 at $z=1$. No condition is imposed on the output of the neural network. Category B: $\omega(z)$ must be increasing in $[-1,1]$ with $\omega(-1) = -1$ and $\omega(1) = 1$. The output of the neural network must be limited within the interval $[-1,1]$.
For Category A, we consider two potential forms for $\omega(z)$, depicted in Figure 1 for different values of their shape parameter. In the first case the tails of the function decrease to 0 as $|z|$ raised to some fixed negative power. This means that $\omega(z)$ has “fat” tails, which in turn implies that outputs of the network that are far from the two target values $\pm 1$ can still contribute to the overall criterion in (10). If, on the other hand, we use the second function, whose tails decrease to zero rapidly, then network outputs far from the target values have small or even negligible contribution to the overall cost. This form of “data screening” could, potentially, be advantageous when robustness against possible outliers is necessary.
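As an illustration of the two tail behaviors, here are one fat-tailed and one rapidly decaying function, both satisfying the Category A conditions; these specific forms are our own hypothetical examples, not necessarily the ones depicted in Figure 1.

```python
import numpy as np

# Two hypothetical Category A candidates (min -1 at z=-1, max +1 at z=+1):
omega_fat = lambda z: 2 * z / (1 + z**2)            # tails decay like 1/|z|
omega_thin = lambda z: z * np.exp((1 - z**2) / 2)   # tails decay exponentially

for w in (omega_fat, omega_thin):
    # Both attain the required global extrema at z = +-1 ...
    assert abs(w(1.0) - 1.0) < 1e-12 and abs(w(-1.0) + 1.0) < 1e-12
    # ... and stay within [-1, 1] everywhere on a wide grid.
    z = np.linspace(-50, 50, 100001)
    assert np.all(np.abs(w(z)) <= 1.0 + 1e-12)

# Far from the targets, the thin-tailed form "screens out" the contribution
# while the fat-tailed one still contributes noticeably.
assert abs(omega_thin(10.0)) < 1e-12 < abs(omega_fat(10.0))
```

The last assertion is the "data screening" effect in miniature: an output at $z=10$ is effectively ignored by the rapidly decaying form but not by the fat-tailed one.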
2.2 Training algorithms
Let us now continue with a detailed presentation of possible training algorithms for the optimization problem introduced in (11). We limit ourselves to a full, two-layer network, because in most of our simulations we observed satisfactory performance from this simple geometry. Of course, one can very easily extend our derivations to cover networks with more layers and/or with special structure, such as convolutional neural networks.
For Category A, we recall that there is no nonlinear transformation in the last layer of the neural network. On the output, we do however apply the nonlinearity $\omega(\cdot)$ which we select for our criterion in (10). For Category B, because of the constraint in (7), we need to apply some nonlinearity to contain the network output in the interval $[-1,1]$. On top of this nonlinearity, according to our approach, we must apply the strictly increasing function $\omega(\cdot)$ that we select for our optimization problem.
Both categories can be put under the same form, allowing for the presentation of a common training algorithm. For this reason, let $X$ denote the length-$n$ vector that we would like to classify. We then apply the following transformations
$$z = \mathsf{A}X + a, \qquad v = f(z), \qquad y = b^{\intercal} v + \beta, \qquad \mathsf{u} = \rho(y),$$
where $\mathsf{A}$ is an $m \times n$ dimensional matrix, $a, b$ are vectors of length $m$, $f(\cdot)$ is one of the popular scalar nonlinearities employed in neural networks and applied to each element of the vector $z$, and $\beta$ is a scalar offset. For the output nonlinearity $\rho(\cdot)$ we have, for Category A, that $\rho(y) = \omega(y)$, while for Category B the nonlinearity takes the form $\rho(y) = \omega\big(\sigma(y)\big)$ (composition of the two functions), with $\sigma(\cdot)$ a squashing nonlinearity that contains the output in $[-1,1]$. Clearly, the scalar output $\mathsf{u} = \rho(y)$ takes over the role of $\omega\big(u(X,\theta)\big)$ in the criterion (10), and $\theta = \{\mathsf{A}, a, b, \beta\}$ summarizes the quantities which need to be identified.
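The transformations above can be sketched as follows; the dimensions and the ReLU/tanh choices are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def forward(x, A, a, b, beta, f=lambda z: np.maximum(z, 0.0)):
    """Two-layer forward pass: z = A x + a, v = f(z), y = b^T v + beta.
    f (here ReLU) is applied element-wise to the first-layer output z."""
    z = A @ x + a            # first (hidden) layer, length m
    v = f(z)                 # element-wise nonlinearity
    y = float(b @ v + beta)  # scalar second-layer output
    return z, v, y

n, m = 4, 8                                   # input length and hidden width
A = rng.normal(size=(m, n)) / np.sqrt(n)      # m x n matrix
a, beta = np.zeros(m), 0.0                    # offsets
b = rng.normal(size=m) / np.sqrt(m)           # length-m vector
x = rng.normal(size=n)

z, v, y = forward(x, A, a, b, beta)
assert z.shape == (m,) and v.shape == (m,)

# For Category B, a squashing nonlinearity keeps the output within [-1, 1]:
u = np.tanh(y)
assert -1.0 <= u <= 1.0
```

For Category A the scalar $y$ would instead be passed directly through $\omega(\cdot)$ with no squashing step.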
To develop a training algorithm that solves (11) we must find the gradient of the cost function with respect to all network parameters. This translates into computing the gradient of $\rho(y)$ with respect to $\mathsf{A}, a, b, \beta$. We have the following formulas that can be easily verified:
$$\frac{\partial \rho}{\partial \beta} = \rho'(y), \qquad \nabla_b\, \rho = \rho'(y)\, v, \qquad \nabla_a\, \rho = \rho'(y)\, \big(b \odot f'(z)\big), \qquad \nabla_{\mathsf{A}}\, \rho = \rho'(y)\, \big(b \odot f'(z)\big)\, X^{\intercal},$$
where $\odot$ denotes element-by-element multiplication of the corresponding vectors, “$\intercal$” denotes transpose and “$'$” derivative. For the computation of the solution of the optimization problem in (11), we distinguish a batch and a stochastic gradient version.
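These chain-rule expressions can be verified against finite differences; the tanh choices for both nonlinearities below are assumptions made so that all derivatives exist everywhere.

```python
import numpy as np

rng = np.random.default_rng(4)

# Smooth choices so that derivatives exist everywhere in this check:
f, fp = np.tanh, lambda z: 1 - np.tanh(z)**2       # first-layer nonlinearity
rho, rhop = np.tanh, lambda y: 1 - np.tanh(y)**2   # output nonlinearity

n, m = 3, 5
A = rng.normal(size=(m, n)); a = rng.normal(size=m)
b = rng.normal(size=m); beta = rng.normal()
x = rng.normal(size=n)

def out(A, a, b, beta):
    return rho(b @ f(A @ x + a) + beta)

# Closed-form gradients via the chain rule (element-wise products):
z = A @ x + a; v = f(z); y = b @ v + beta
g_beta = rhop(y)
g_b = rhop(y) * v
g_a = rhop(y) * (b * fp(z))
g_A = np.outer(rhop(y) * (b * fp(z)), x)

# Compare against central finite differences on beta and on a[0]:
eps = 1e-6
num_beta = (out(A, a, b, beta + eps) - out(A, a, b, beta - eps)) / (2 * eps)
e0 = np.zeros(m); e0[0] = eps
num_a0 = (out(A, a + e0, b, beta) - out(A, a - e0, b, beta)) / (2 * eps)

assert abs(g_beta - num_beta) < 1e-6
assert abs(g_a[0] - num_a0) < 1e-6
```

The numerical and analytical gradients agree to well within the finite-difference tolerance.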
Batch version. We form a gradient ascent scheme by considering directly the criterion in (10). Table 2 summarizes the algorithm. In the corresponding formulas, one symbol denotes the element-wise raise to the power 2 and another the element-by-element division of the corresponding vectors or matrices. The parameter $\mu$ is the learning rate and must be sufficiently small so that the algorithm does not diverge. Finally, $\lambda$ is an exponential forgetting factor and is used to estimate average powers of the gradient elements. Following the scheme in [12], we normalize each gradient element with the square root of its estimated power before using it in the update of the corresponding parameter. The batch version tends to become computationally demanding when $n_1, n_2$ are very large, since it requires a number of computations per iteration which is proportional to $n_1 + n_2$.
Table 2. Batch version of the training algorithm (outline):
- Initialize $\mathsf{A}, a, b, \beta$ using the method in [8] and set the power estimates to zero.
- At iteration $t$, with the parameter and power estimates available from iteration $t-1$: compute the layer outputs for all available data vectors; update the power estimates; update the parameter estimates.
- Repeat until some stopping rule is satisfied.
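The power-normalized update used in both tables can be sketched as follows; this is a minimal rmsprop-style step in the spirit of [12], with the function name and constants being our own illustrative choices.

```python
import numpy as np

def rmsprop_step(theta, grad, power, mu=1e-3, lam=0.99, eps=1e-8):
    """One normalized gradient-ascent step: keep a running (exponentially
    forgotten) estimate of each gradient element's power and divide the
    gradient by its square root before updating the parameter."""
    power = lam * power + (1 - lam) * grad**2            # power estimate update
    theta = theta + mu * grad / (np.sqrt(power) + eps)   # ascent (maximization)
    return theta, power

theta = np.zeros(3)
power = np.zeros(3)
grad = np.array([1.0, -2.0, 0.5])
theta, power = rmsprop_step(theta, grad, power)

# Normalization makes each element's step roughly mu/sqrt(1-lam) in magnitude,
# regardless of the raw gradient scale.
assert np.all(np.abs(theta) < 2e-2)
```

The same step applies to every parameter group ($\mathsf{A}, a, b, \beta$), each with its own running power estimate.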
Table 3. Stochastic gradient version of the training algorithm (outline):
- Initialize $\mathsf{A}, a, b, \beta$ using the method in [8] and set the power estimates to zero.
- At iteration $t$, with the estimates available from iteration $t-1$, select the next data vector from the merged set (recycle when the data are exhausted). Compute the layer outputs for this vector; update the power estimates; update the parameter estimates, with the gradient contribution entering with $+$ if the label of the vector is 1 and with $-$ if the label is 2.
- Repeat until some stopping rule is satisfied.
Stochastic gradient version. For this version we need the two data sets to be merged into a single set and the data to be randomly permuted. In the merged set the data must of course retain their original labeling. Table 3 summarizes the algorithm. We would like to emphasize that the random permutation of the data is absolutely necessary because, otherwise, if the data are grouped according to their labels, the algorithm will exhibit a periodically biased convergence behavior as the data are being reused. In fact, it would be advisable to perform a new random permutation every time we recycle the data.
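A minimal sketch of the stochastic version on synthetic scalar data, assuming the Category B choice $\omega(z) = z$ with a tanh output and a toy two-parameter "network"; for brevity this uses plain gradient ascent, without the power normalization of Table 3.

```python
import numpy as np

rng = np.random.default_rng(5)

# Two labeled data sets, merged; labels are retained in the merged set.
x1 = rng.normal(-1.0, 1.0, 300)                    # class 1 realizations
x2 = rng.normal(+1.0, 1.0, 300)                    # class 2 realizations
data = np.concatenate([x1, x2])
labels = np.concatenate([np.ones(300, int), np.full(300, 2)])

# Toy classifier u(x) = tanh(w*x + c); criterion terms enter with sign
# +1 for class-1 samples and -1 for class-2 samples.
w, c = rng.normal(), rng.normal()
mu = 0.05                                          # learning rate
for epoch in range(20):
    perm = rng.permutation(len(data))              # fresh permutation per recycle
    for i in perm:
        x, lab = data[i], labels[i]
        sign = 1.0 if lab == 1 else -1.0
        t = np.tanh(w * x + c)
        g = sign * (1.0 - t**2)                    # d/d(wx+c) of sign*tanh(wx+c)
        w += mu * g * x                            # stochastic gradient ascent
        c += mu * g

# Decide class 1 when u(x) >= 0; the error rate should approach the Bayes
# error of this symmetric Gaussian pair (roughly 0.16).
pred = np.where(np.tanh(w * data + c) >= 0.0, 1, 2)
err = float(np.mean(pred != labels))
assert err < 0.35
```

Note the fresh permutation at every recycle: without it, label-grouped data would induce exactly the periodically biased behavior described above.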
Remark. Regarding the nonlinearity $f(\cdot)$ applied to the output of the first layer, our simulations suggest that the best choice is the ReLU, since the resulting algorithm exhibits a far more stable convergence behavior compared to other alternatives.
To test our methodology, from Category A we select the fat-tailed form of $\omega(z)$, while from Category B we use $\omega(z) = z$ and, for limiting the output of the neural network in $[-1,1]$, we adopt the hyperbolic tangent. We would like to add that we also performed simulations with the rapidly decaying $\omega(z)$ from Category A; however, the results were almost identical, and for this reason we are not including them in our presentation. We compare our algorithms against the method based on the Hinge loss. In [9] this technique was evaluated and found to enjoy many positive characteristics compared to other possibilities.
Before presenting our simulation results we would like to mention a very interesting interpretation of our Category B selection. As we claimed before, the most desirable criterion to minimize is $\widehat{\mathsf{P}}_{\mathrm{err}}(\theta)$. It is because of the indicator function that we seek different formulations. If in (12) we approximate the indicator with a sigmoid then minimizing the resulting criterion is equivalent to maximizing the part inside the brackets, which is exactly our optimization problem in (11) for our Category B selection. This connection to the desired optimization problem will prove beneficial for our method, as we will see next.
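Under our notation the algebra behind this claim is short; with the sigmoid $\sigma(t) = 1/(1+e^{-t})$ (so that $\sigma(-t) = 1 - \sigma(t)$) approximating the indicators in (12):

```latex
\widehat{\mathsf{P}}_{\mathrm{err}}(\theta)
\;\approx\;
\frac{1}{n_1+n_2}\Big[\sum_{i=1}^{n_1}\sigma\big(-u(X_i^1,\theta)\big)
  +\sum_{j=1}^{n_2}\sigma\big(u(X_j^2,\theta)\big)\Big]
=\frac{n_1}{n_1+n_2}
 -\frac{1}{n_1+n_2}\Big[\sum_{i=1}^{n_1}\sigma\big(u(X_i^1,\theta)\big)
  -\sum_{j=1}^{n_2}\sigma\big(u(X_j^2,\theta)\big)\Big].
```

Minimizing the left-hand side is therefore equivalent to maximizing the bracketed difference and, since $2\sigma(t) - 1 = \tanh(t/2) \in (-1,1)$, this difference coincides, up to an additive constant and a positive scaling, with criterion (11) for the Category B choice $\omega(z) = z$ applied to a tanh-squashed network output.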
In the first set of experiments we considered scalar random variables ($n = 1$) and attempted to classify between a standard normal under $\mathsf{H}_1$ and a mixture of Gaussians under $\mathsf{H}_2$. We used a two-layer network with the first-layer output of length $m$ and we applied the ReLU nonlinearity $f(z) = \max(z, 0)$. In all three training algorithms we used the same learning rate $\mu$ and forgetting factor $\lambda$. The networks were initialized with exactly the same values following the initialization scheme in [8], and the gradients were normalized following the scheme in [12].
Regarding the training data, we applied the stochastic gradient algorithm. To avoid the need for random permutations, in each iteration we simply used one data point from each class, instead of randomly switching between classes. To evaluate the quality of the corresponding network, at each iteration after the parameters were updated, we applied the resulting network to testing data from $\mathsf{H}_1$ and an equal number from $\mathsf{H}_2$ in order to estimate the corresponding error probabilities the classifiers could deliver. There are two error types, one for each data class, and there is also their average which, as we know, is optimized by LRT.
In Figure 2 we plot the evolution of the error probabilities as a function of the number of iterations of the training algorithms. We also include the errors delivered by LRT (for which we used the true densities). In Figure 2(a) we present the errors when the data come from , in Figure 2(b) when the data come from , and in Figure 2(c) the average of the two errors.
We have the following interesting observations: our training algorithms can design neural networks whose classification errors approach the errors of LRT more efficiently than the algorithm based on the Hinge loss. Remarkably, despite the significant differences, when the errors are averaged all three classifiers yield similar performance, closely matching the optimum LRT performance. To obtain a more precise idea of the corresponding errors, in Table 4 we present the error probabilities delivered by the networks as designed in the final (5000th) iteration.
|Method|Error under $\mathsf{H}_1$|Error under $\mathsf{H}_2$|Average Error|
We can see that both our methods approximate the LRT errors better than the Hinge loss based scheme. The average error, on the other hand, for all three methods is extremely close to the optimum LRT performance. This specific behavior is typical of all the simulations we performed, where we experimented with numerous data dimensions, means, variances and mixture probabilities for the corresponding mixture densities.
The next simulation involves real datasets. Here we limit ourselves to the Category A algorithm (since it has shown slightly better convergence speed than its Category B counterpart) and compare it against the Hinge loss based scheme. The goal is to distinguish between the two handwritten numerals “4” and “9” from the MNIST database.
Each image is of size $28 \times 28$ and is reshaped into a vector of length $n = 784$. We use a full two-layer network with the first-layer output of length $m$. The learning rates and the forgetting factors are selected as in the previous experiment. Our training set is comprised of 5842 handwritten “4” and 5949 handwritten “9”. We also have an additional 982 handwritten “4” and 1009 handwritten “9” that we use for testing. Every time the training data are exhausted we recycle them. Following the same strategy as in the previous experiment, at each iteration we test the quality of the computed classifiers by applying them to the testing data. Figure 3, similarly to Figure 2, depicts the evolution of the two errors and their average as a function of the number of iterations.
As we can see, our scheme exhibits a faster convergence rate and attains lower levels of average error probability. If we now focus on the neural networks obtained during the final iteration, then our design method, applied to the testing data, makes 4 errors out of 982 when applied to the set of “4” and 12 errors out of 1009 when applied to the set of “9”. The average number of errors is 8. The corresponding figures for the Hinge based scheme are 17 and 5, with average equal to 11.
It is interesting to visualize a few examples where each method fails. In Figure 4 we depict four errors for each method and each data class. Figure 4(a) contains cases where our scheme misclassified “4” as “9” and in (b) the opposite, namely, “9” as “4”. In Figure 4(c) and (d) we have the corresponding failures for the Hinge based method. We could agree that in most of these cases the correct decision would have been challenging even for a human decision maker.
A number of interesting experiments and comparisons follow, based on the CIFAR-10 database, where we tested several combinations of pairs of classes. This particular set of experiments involves far less (statistically) structured data than the preceding examples; therefore the error probabilities we obtain are significantly more modest. We focus on comparing our Category B algorithmic version with the classical Hinge loss based algorithm. Each class contains 5000 training data and 1000 testing data. In Figures 5, 6, 7, 8 and 9 we present, as before, the evolution of the classification error probabilities with the number of iterations for the pairs “Cats & Dogs”, “Airplanes & Automobiles”, “Deers & Birds”, “Frogs & Horses” and “Ships & Trucks”, respectively.
The CIFAR-10 images were converted from RGB to grayscale and reshaped into vectors of length $n = 1024$. Furthermore, the mean and the variance of each element of the 1024-length vector were computed from all training data and applied to transform each element to zero mean and unit variance. The same transformation was then applied to the testing data. This pre-processing is necessary in order for the adopted network parameter initialization method, proposed in [8], to be appropriate, since it assumes that the input is a standard Gaussian.
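The pre-processing step can be sketched as follows; random arrays stand in for the actual CIFAR-10 images, and the luminance weights used for the grayscale conversion are a standard but assumed choice.

```python
import numpy as np

rng = np.random.default_rng(6)

# Stand-ins for RGB images of shape (N, 32, 32, 3); real CIFAR-10 loading omitted.
train = rng.integers(0, 256, size=(100, 32, 32, 3)).astype(np.float64)
test = rng.integers(0, 256, size=(20, 32, 32, 3)).astype(np.float64)

def to_gray_vec(imgs):
    # Luminance-style grayscale conversion, then reshape to length-1024 vectors.
    gray = imgs @ np.array([0.299, 0.587, 0.114])
    return gray.reshape(len(imgs), -1)

Xtr, Xte = to_gray_vec(train), to_gray_vec(test)

# Per-element mean/std computed on TRAINING data only, then applied to both
# the training and the testing vectors.
mean, std = Xtr.mean(axis=0), Xtr.std(axis=0)
Xtr_n = (Xtr - mean) / std
Xte_n = (Xte - mean) / std

assert Xtr_n.shape == (100, 1024)
assert np.allclose(Xtr_n.mean(axis=0), 0.0, atol=1e-9)
assert np.allclose(Xtr_n.std(axis=0), 1.0, atol=1e-9)
```

Reusing the training statistics on the test set (rather than recomputing them) is what "the same transformation was then applied to the testing data" requires.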
For the neural network we use two layers, with the first-layer output of length $m$. Different learning rates were selected for our Category B version and for the Hinge based one. In both cases we applied a forgetting factor equal to 0.99. We recycle the training data every time they are exhausted. At each iteration we test the quality of the computed classifiers by applying them to the testing data and computing the resulting error percentage.
We observe that the two competing algorithms have comparable performance. In most of the cases our algorithm exhibits an improvement over the classical scheme, and only in one case do we observe the opposite. Furthermore, generally speaking, our algorithm exhibits a faster convergence rate, although such claims need far more simulations and, whenever possible, a theoretical analysis in order to be trusted.
With this work our goal was twofold: first, we wanted to understand how existing classifier design techniques are related to each other and, more importantly, to the optimum likelihood ratio test. Second, we wanted to demonstrate that there exists a very simple formulation that can provide an abundance of optimization problems that enjoy the same characteristics as the existing techniques for the classification problem. These problems lend themselves to the development of proper training techniques for neural network based classifiers. The resulting algorithms, compared to existing alternatives, produce classifiers which, in simulations with synthetic data, exhibit a more effective approximation of the optimum likelihood ratio test performance. Additionally, in simulations with real datasets they demonstrate a faster convergence speed, attaining, in the limit, smaller error probabilities most of the time. Our immediate future goals include the extension of these ideas to the classification of more than two classes and the study of the convergence properties of the corresponding training algorithms.
This work was supported by the US National Science Foundation under Grant CIF 1513373, through Rutgers University.
[1] Bartlett, P. L., Jordan, M. I. & McAuliffe, J. D. (2006) Convexity, classification, and risk bounds. Journal of the American Statistical Association, Theory and Methods, vol. 101, no. 473, pp. 138–156.
[2] Buja, A., Stuetzle, W. & Shen, Y. (2005) Loss functions for binary class probability estimation and classification: Structure and applications. Technical report, University of Pennsylvania.
[3] Cantrell, C. D. (2000) Modern Mathematical Methods for Physicists and Engineers. Cambridge University Press.
[4] Lee, C.-Y., Xie, S., Gallagher, P., Zhang, Z. & Tu, Z. (2015) Deeply-supervised nets. In Proceedings of Machine Learning Research, vol. 38, pp. 562–570.
[5] Choromanska, A., Henaff, M., Mathieu, M., Ben Arous, G. & LeCun, Y. (2015) The loss surfaces of multilayer networks. In Proceedings of Machine Learning Research, vol. 38, pp. 192–204.
[6] Dereziński, M. & Warmuth, M. K. (2014) The limits of squared Euclidean distance regularization. In Advances in Neural Information Processing Systems, pp. 2807–2815.
[7] Eban, E., Mezuman, E. & Globerson, A. (2014) Discrete Chebyshev classifiers. In Proceedings of the International Conference on Machine Learning.
[8] Glorot, X. & Bengio, Y. (2010) Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the International Conference on Artificial Intelligence and Statistics.
[9] Janocha, K. & Czarnecki, W. M. (2017) On loss functions for deep neural networks in classification. arXiv:1702.05659.
[10] Masnadi-Shirazi, H. & Vasconcelos, N. (2009) On the design of loss functions for classification: Theory, robustness to outliers, and SavageBoost. In Advances in Neural Information Processing Systems.
[11] Moulin, P. & Veeravalli, V. V. (2019) Statistical Inference for Engineers and Data Scientists. Cambridge University Press.
[12] Tieleman, T. & Hinton, G. (2012) Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning.