1 Introduction
The success of DNNs in learning complex relationships between inputs and outputs may be attributed mainly to their multiple nonlinear hidden layers [1,2]. As a consequence of having multiple layers, DNNs typically have tens of thousands of parameters, sometimes even millions. Such a large number of parameters gives the method an incredible amount of flexibility. On the downside, however, this may lead to overfitting the data, especially if the training sample is not large enough. Overfitting means that the method may work well on the training set but not on the test set. Since overfitting is a typical problem for DNNs, many methods have been suggested to reduce it. Adding weight penalties to the cost function, dropout, early stopping, max-norm regularization and data augmentation are some of the popular regularization methods used to avoid overfitting. In this paper, we narrow our focus to regularization methods based on weight penalties appended to the cost function.
The two most commonly considered penalties for DNN regularization are the $\ell_1$ and $\ell_2$ penalties. In the statistical literature, these two penalties are known as the Lasso [3] and Ridge [4,5] penalties respectively. One of the main advantages of working with these two penalties is the convexity of the optimization problem, which guarantees that a local optimum will always be a global optimum. $\ell_1$ penalization is also a selection procedure, as it sets many parameters to zero; $\ell_2$ penalization does not have this property. All the parameters after $\ell_2$ penalization, and all nonzero parameters after $\ell_1$ penalization, are shrunk towards zero. The resulting bias in the regularized solution of the above convex penalties has motivated a few authors to consider nonconvex penalties [6,7], which have the potential to yield nearly unbiased estimates for the parameters. Recent theoretical work [8,9] has also shown that although nonconvex regularizers may yield multiple local optima, these are essentially as good as a global optimum from a statistical perspective.
In this paper we present nonconvex penalty functions which can be utilized as regularizers of the parameters in a DNN. In the methods section we motivate the definition of these penalty functions based on the $\ell_0$ norm. The main focus of our paper is to compare the performance of DNNs regularized with nonconvex penalties against regularization based on convex penalty functions. We provide theoretical justifications for our proposed regularization approaches and also assess their performance on real datasets.
The paper is structured as follows. In section 2, we motivate and introduce our method for regularizing DNNs, and justify it based on theoretical considerations. In section 3, we apply our method to real datasets and compare its performance with $\ell_1$ and $\ell_2$ regularization. Finally, we present our conclusions in section 4.
2 Methods
2.1 Background and Motivation
Consider a classifier $f(x; \mathbf{w})$ parameterized by the weight vector $\mathbf{w} = (w_1, \ldots, w_d)$, for input $x$ and categorical output $y$. Optimal weights in a nonregularized setting are obtained by minimizing a cost function $\mathcal{L}(\mathbf{w})$. Typically the negative log-likelihood is taken as the cost function; in the case of a categorical output it will be the cross-entropy function. One general approach for regularizing DNNs is to append a penalty function $p_{\boldsymbol{\lambda}}(\mathbf{w})$ to the cost function, where $\boldsymbol{\lambda}$ denotes the vector of tuning parameters associated with the penalty function. As done in most of the literature, we restrict our attention to coordinate-separable penalty functions, which can be expressed as a sum $p_{\boldsymbol{\lambda}}(\mathbf{w}) = \sum_{j=1}^{d} p_{\boldsymbol{\lambda}}(w_j)$. Thus the regularized optimization problem that we are interested in is
$\hat{\mathbf{w}} = \arg\min_{\mathbf{w}} \left\{ \mathcal{L}(\mathbf{w}) + \sum_{j=1}^{d} p_{\boldsymbol{\lambda}}(w_j) \right\}. \qquad (1)$
The most commonly discussed approach, known as the 'canonical selection procedure', is based on

$p_{\lambda}(w_j) = \lambda \, 1\{w_j \neq 0\}; \qquad (2)$

the penalty function in this case is referred to as the $\ell_0$ norm, since $\sum_j 1\{w_j \neq 0\} = \|\mathbf{w}\|_0$. The key ideas behind Akaike's, Bayesian and Minimax Risk based Information Criteria (AIC, BIC and RIC), and Mallows' $C_p$ are all based on the above $\ell_0$ norm. However, it is intractable for DNN applications because finding the minimum of the objective function in (1) with the penalty function in (2) is in general NP-hard. The problem is combinatorial in nature and has exponential complexity, as it requires an exhaustive search over $2^d$ candidate support sets.
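Because the penalty is coordinate-separable, the combinatorial nature of the search can be made concrete on a toy problem. The sketch below (illustrative names and data, not from the paper) brute-forces all $2^d$ support sets for a separable quadratic loss and checks the result against the closed-form hard-threshold solution:

```python
from itertools import combinations

def l0_brute_force(z, lam):
    """Minimize sum_j (w_j - z_j)^2 + lam * ||w||_0 by trying all 2^d supports."""
    d = len(z)
    best_cost, best_w = float("inf"), None
    for k in range(d + 1):
        for S in combinations(range(d), k):  # 2^d support sets in total
            w = [z[j] if j in S else 0.0 for j in range(d)]
            cost = sum((w[j] - z[j]) ** 2 for j in range(d)) + lam * k
            if cost < best_cost:
                best_cost, best_w = cost, w
    return best_w

def l0_hard_threshold(z, lam):
    """Closed form for this separable toy case: keep z_j iff z_j^2 > lam."""
    return [zj if zj * zj > lam else 0.0 for zj in z]

z = [0.3, -1.2, 0.05, 2.0]
assert l0_brute_force(z, lam=0.25) == l0_hard_threshold(z, lam=0.25)
```

The brute-force search visits every one of the $2^d$ supports, which is exactly why this procedure is infeasible for a DNN with thousands of weights.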
The above-mentioned intractability has led to considerations of approximations for the penalty function in (2). The most widely considered approximations are of the class of Bridge functions [10,11],

$p_{\lambda}(w_j) = \lambda |w_j|^q, \qquad 0 < q \le 2,$

motivated by the fact that

$\lim_{q \to 0^+} |w_j|^q = 1\{w_j \neq 0\}.$

The $q = 1$ and $q = 2$ cases (the $\ell_1$ and $\ell_2$ penalties) are known in the literature as the Lasso and Ridge penalties. Note that the penalty function in (2) is singular at zero and the optimization problem based on it is nonconvex. Bridge penalty functions are convex when $q \ge 1$ and nonconvex for $q < 1$. Bridge functions are singular at zero only in the case $q \le 1$. Thus the Lasso is the only case among the class of Bridge functions which is both convex and has a singularity at the origin. Convex relaxation of a nonconvex problem has its advantage in the optimization setting based on the simple fact that a local minimum of a convex function is also a global minimum. Singularity at the origin for the penalty function is essentially what guarantees the sparsity of the solution (i.e. setting small estimated weights to zero to reduce model complexity).
Although the Lasso has the above-mentioned advantages over other Bridge estimators, it differs from the $\ell_0$ norm in a crucial aspect: whereas the $\ell_0$ norm is constant for any nonzero argument, the $\ell_1$ norm increases linearly with the absolute value of the argument. This linear increase results in a bias for the regularized solution [6], which in turn can lead to modeling bias. As mentioned in [6], in addition to unbiasedness and sparsity, a good penalty function should result in an estimator with the continuity property. Continuity is necessary to avoid instability in model prediction. Note that the penalty function in (2) does not satisfy the continuity criterion. None of the Bridge penalty functions satisfy all three of the preceding required properties simultaneously. The solution for Bridge penalties is continuous only when $q \ge 1$; however, when $q > 1$ the Bridge penalties do not produce sparse solutions. The case $q = 1$ (i.e. the Lasso) produces a continuous and sparse solution, but this comes at the price of shifting the resulting estimator by a constant (i.e. bias).
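The bias contrast between the $\ell_1$ and $\ell_0$ penalties is visible already in their scalar proximal operators. This is a hedged illustration with our own function names, not code from the paper:

```python
import math

def prox_l1(z, lam):
    """Soft threshold: argmin_w (w - z)^2 / 2 + lam * |w|  (Lasso prox)."""
    return math.copysign(max(abs(z) - lam, 0.0), z)

def prox_l0(z, lam):
    """Hard threshold: argmin_w (w - z)^2 / 2 + lam * 1{w != 0}."""
    return z if z * z > 2.0 * lam else 0.0

# The L1 solution stays shifted by lam even for large |z| (constant bias),
# while the L0 solution returns z unchanged once it survives the threshold.
assert prox_l1(10.0, 0.5) == 9.5
assert prox_l0(10.0, 0.5) == 10.0
```

Both rules set small inputs to zero (sparsity), but only the soft threshold carries the constant shift discussed above.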
The above issues for the Bridge functions have led to considerations of other approximations for the penalty function in (2) (especially nonconvex approximations) with the hope that these new approximations will satisfy (or nearly satisfy) all the three desirable properties mentioned above. In this paper we present two nonconvex approximation functions:
$p_{\lambda,\sigma}(w) = \lambda\left(1 - e^{-\sigma|w|}\right), \qquad (3)$
$p_{\lambda,\sigma}(w) = \frac{2\lambda}{\pi}\arctan(\sigma|w|). \qquad (4)$
The first penalty has appeared previously in the medical imaging literature in a method for magnetic resonance image reconstruction [12], where it has been referred to as the Laplace penalty function; see also [13]. The second penalty function, based on arctan, has not been considered in the literature so far, to the best of our knowledge. Two other nonconvex penalties that currently exist in the literature are the SCAD penalty,

$p_{\lambda,a}(w) = \begin{cases} \lambda |w|, & |w| \le \lambda, \\ \frac{2a\lambda|w| - w^2 - \lambda^2}{2(a-1)}, & \lambda < |w| \le a\lambda, \\ \frac{(a+1)\lambda^2}{2}, & |w| > a\lambda, \end{cases}$

developed by Fan and Li (2001), and the MCP regularizer (Zhang 2010),

$p_{\lambda,a}(w) = \begin{cases} \lambda|w| - \frac{w^2}{2a}, & |w| \le a\lambda, \\ \frac{a\lambda^2}{2}, & |w| > a\lambda. \end{cases}$
There are two other nonconvex penalties that have appeared in the literature before that we do not consider in this paper. We present these two penalties in a later section and provide reasons for not considering them.
Although nonconvex penalties are worth considering in DNN applications, they rarely get as much attention as the convex penalty functions. For example, textbooks such as [14] mention only $\ell_1$ and $\ell_2$ as regularization methods based on weight penalties. In this paper, we compare the performance of nonconvex regularizers (including ours, SCAD and MCP) with $\ell_1$ and $\ell_2$ regularizers for DNNs.
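For concreteness, the two proposed penalties can be written as one-line functions. The exact parameterizations below ($\lambda$, $\sigma$, and the $2/\pi$ normalization) are assumptions of this sketch rather than verbatim definitions from the paper:

```python
import math

# Assumed forms: Laplace lam*(1 - exp(-sigma*|w|)), arctan (2*lam/pi)*atan(sigma*|w|).
def laplace_penalty(w, lam, sigma):
    return lam * (1.0 - math.exp(-sigma * abs(w)))

def arctan_penalty(w, lam, sigma):
    return lam * (2.0 / math.pi) * math.atan(sigma * abs(w))

# Both are bounded by lam: unlike the L1 penalty they stop growing for large
# |w|, which is the source of their near-unbiasedness.
for p in (laplace_penalty, arctan_penalty):
    assert p(0.0, 1.0, 10.0) == 0.0
    assert 0.99 < p(100.0, 1.0, 10.0) <= 1.0
```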
2.2 Theoretical considerations
Properties of the SCAD and MCP penalties have been studied in the original papers in which they were presented. Below we present a few properties satisfied by the Laplace and arctan penalty functions. These properties will allow us to apply theorems from the existing literature [6,8,9] that guarantee that any local optimum lies close to the target parameter vector. These properties are easy to see from plots, but we give proofs.
Properties of the Laplace penalty function
We begin with a useful lemma.
Lemma 2.1.
For $\sigma > 0$ and $t \ge 0$,

$1 - e^{-\sigma t} \ge \frac{\sigma t}{1 + \sigma t}. \qquad (5)$
Proof.
Let $x = \sigma t$. Note that $x \ge 0$ based on the assumptions. Inequality (5) is equivalent to $e^{-x} \le \frac{1}{1+x}$. Taking logarithms on both sides of the inequality, we get $-x \le -\log(1+x)$. Multiplying by $-1$ on both sides, we need $\log(1+x) \le x$. But this follows from the inequality $\log(1+u) \le u$ for all $u > -1$ (in particular for $u = x$) and the fact that $x \ge 0$. ∎
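A quick numeric spot-check of the lemma, assuming the reconstructed form $1 - e^{-\sigma t} \ge \sigma t/(1+\sigma t)$, i.e. $1 - e^{-x} \ge x/(1+x)$ with $x = \sigma t$:

```python
import math

# Verify the reconstructed inequality 1 - exp(-x) >= x/(1+x) for x >= 0
# on a fine grid; the small slack absorbs floating-point rounding.
def lemma_holds(x):
    return 1.0 - math.exp(-x) >= x / (1.0 + x) - 1e-12

assert all(lemma_holds(i * 0.01) for i in range(1000))
```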
We present a few properties satisfied by the penalty function $p_{\lambda,\sigma}(w) = \lambda\left(1 - e^{-\sigma|w|}\right)$.
(P1) $p_{\lambda,\sigma}(0) = 0$ and $p_{\lambda,\sigma}$ is symmetric around zero. This is easily verified.
(P2) $p_{\lambda,\sigma}(w)$ is increasing for $w \in [0, \infty)$. It is easy to see that $p'_{\lambda,\sigma}(w) = \lambda\sigma e^{-\sigma w}$ is positive for $w > 0$, $\lambda > 0$ and $\sigma > 0$.
(P3) For $t > 0$, the function $p_{\lambda,\sigma}(t)/t$ is nonincreasing in $t$. Since, for $t > 0$,

$\frac{d}{dt}\left(\frac{p_{\lambda,\sigma}(t)}{t}\right) = \frac{\lambda\left(\sigma t e^{-\sigma t} - \left(1 - e^{-\sigma t}\right)\right)}{t^2},$

it suffices to show that the numerator is $\le 0$ for $t > 0$. But this follows from Lemma 2.1 above, since $\sigma t e^{-\sigma t} \le \frac{\sigma t}{1 + \sigma t} \le 1 - e^{-\sigma t}$.
(P4) The function $p_{\lambda,\sigma}$ is differentiable for all $w \neq 0$ and subdifferentiable at $w = 0$, with $p'_{\lambda,\sigma}(0^+) = \lambda\sigma$. It is easy to see that any point in the interval $[-\lambda\sigma, \lambda\sigma]$ is a subgradient of $p_{\lambda,\sigma}$ at $w = 0$.
(P5) There exists $\mu > 0$ such that $p_{\lambda,\sigma}(w) + \frac{\mu}{2} w^2$ is convex: $\mu = \lambda\sigma^2$ will work. $\mu$ is a measure of the severity of nonconvexity of the penalty function.
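A numerical sanity check of (P5): under the assumed form $p_{\lambda,\sigma}(w) = \lambda(1 - e^{-\sigma|w|})$ with $\mu = \lambda\sigma^2$, second differences of $p_{\lambda,\sigma}(w) + \frac{\mu}{2}w^2$ should be nonnegative on a grid:

```python
import math

lam, sigma = 1.0, 2.0
mu = lam * sigma ** 2  # the weak-convexity constant claimed in (P5)

def g(w):
    # assumed Laplace penalty plus the quadratic correction
    return lam * (1.0 - math.exp(-sigma * abs(w))) + 0.5 * mu * w * w

# Discrete convexity check: second difference >= 0 everywhere on [-2, 2],
# up to a tiny tolerance for floating-point noise.
h = 1e-3
for i in range(-2000, 2001):
    w = i * h
    assert g(w + h) - 2.0 * g(w) + g(w - h) >= -1e-9
```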
Since the penalty function $p_{\lambda,\sigma}$ satisfies properties (P1) to (P5), the regularity conditions required by the theory in [8] hold for it. These properties imply that $p_{\lambda,\sigma}$ is Lipschitz as a function of $w$ [8]. In particular, all subgradients and derivatives of $p_{\lambda,\sigma}$ are bounded in magnitude by $\lambda\sigma$ [8]. We also see that, for an empirical loss satisfying the restricted strong convexity condition and the conditions on $\lambda$ and the sample size in Theorem 1 of [8], the squared error of the estimator grows proportionally with the number of nonzeros in the target parameter and with $\lambda^2$. One condition that is not satisfied by $p_{\lambda,\sigma}$ is
(P6) There exists $a > 0$ such that $p'_{\lambda,\sigma}(t) = 0$ for all $t > a$. It is clear that such an $a$ does not exist for our penalty function. However, we note that $p'_{\lambda,\sigma}(t) = \lambda\sigma e^{-\sigma t}$ can be made arbitrarily close to zero for large $t$. In other words, the following property is satisfied:

(P6′) $\lim_{t \to \infty} p'_{\lambda,\sigma}(t) = 0.$

(P6) and (P6′) are related to unbiasedness, as mentioned in Fan and Li (2001). (P6) guarantees unbiasedness (and (P6′) near-unbiasedness) when the true unknown parameter is large, which avoids unnecessary modeling bias.
Our penalty function depends on two parameters, $\lambda$ and $\sigma$, whereas the Lasso and Ridge penalties depend on $\lambda$ alone. In our case, is there an optimal choice of $\sigma$ that depends on $\lambda$ alone? The following considerations, based on Fan and Li [6], shed some light on this. According to Fan and Li [6], a good penalty function should have the following two properties.
(P7) The minimum of the function $t \mapsto |t| + p'_{\lambda,\sigma}(|t|)$ is positive. This property guarantees sparsity, at least in the empirical loss case; that is, the resulting estimator is a thresholding rule.
(P8) The minimum of the function $t \mapsto |t| + p'_{\lambda,\sigma}(|t|)$ is attained at $t = 0$. This property, at least in the empirical loss case, is related to the continuity of the resulting estimator. Continuity helps to avoid instability in model prediction.
Consider the function $g(t) = |t| + p'_{\lambda,\sigma}(|t|)$, which is symmetric around zero, so that if a minimum is attained at $t_0$, then it is attained at $-t_0$ as well. This allows us to restrict our attention to the domain $t \ge 0$. Note that in this domain $g(t) = t + \lambda\sigma e^{-\sigma t}$ and $g'(t) = 1 - \lambda\sigma^2 e^{-\sigma t}$. We have $g'(t^*) = 0$ so that $e^{-\sigma t^*} = \frac{1}{\lambda\sigma^2}$, or

$t^* = \frac{\log\left(\lambda\sigma^2\right)}{\sigma}. \qquad (6)$

In order that $t^* > 0$, we require that $\lambda\sigma^2 > 1$; conversely, $g'(t) \ge 0$ for all $t \ge 0$, so that the minimum of $g$ is attained at $t = 0$ and (P8) is satisfied, if

$\sigma \le \lambda^{-1/2}. \qquad (7)$

For any $\sigma$ given by eq. (7) (i.e. with any $\sigma \le \lambda^{-1/2}$), the minimum of $g$ equals $g(0) = \lambda\sigma$; so, in particular, the minimum corresponding to $\sigma = \lambda^{-1/2}$ is positive. Thus, for a given $\lambda$, choosing $\sigma$ based on eq. (7) will ensure that properties (P7) and (P8) are satisfied by our penalty function.
Theorem 1 in Fan and Li [6] provides required conditions for the consistency of the estimator in a maximum likelihood framework and generalized linear models setting. The main assumption on the penalty function is stated as the following property.
(P9) $\liminf_{t \to 0^+} p'_{\lambda,\sigma}(t)/\lambda > 0$. In our case, this property is satisfied because $\lim_{t \to 0^+} p'_{\lambda,\sigma}(t)/\lambda = \sigma > 0$.
Lemma 2.1 suggests considering another penalty function $q_{\lambda,\sigma}$, where

$q_{\lambda,\sigma}(w) = \frac{\lambda\sigma|w|}{1 + \sigma|w|}.$

This penalty function is equivalent to Geman's penalty function mentioned in [15]; it is also mentioned in [12,13]. Most of the properties listed above are satisfied by the penalty function $q_{\lambda,\sigma}$ as well. For example,

$q'_{\lambda,\sigma}(w) = \frac{\lambda\sigma}{(1 + \sigma w)^2} \quad \text{for } w > 0.$

In order to check (P3) we consider, for $t > 0$, the function $q_{\lambda,\sigma}(t)/t = \frac{\lambda\sigma}{1 + \sigma t}$, which is easy to check is nonincreasing in $t$. However, the second derivative $q''_{\lambda,\sigma}(w) = -\frac{2\lambda\sigma^2}{(1 + \sigma w)^3}$ suggests that the $\mu$ required for (P5) is $2\lambda\sigma^2$, which is twice that for $p_{\lambda,\sigma}$. That is, the nonconvexity of $q_{\lambda,\sigma}$ is twice as severe as that of $p_{\lambda,\sigma}$. Also, $q'_{\lambda,\sigma}(t)$ converges to zero (as does $p'_{\lambda,\sigma}(t)$) for large $t$, satisfying (P6′) for near-unbiasedness. However, since it can be shown that

$\lim_{t \to \infty} \frac{p'_{\lambda,\sigma}(t)}{q'_{\lambda,\sigma}(t)} = \lim_{t \to \infty} (1 + \sigma t)^2 e^{-\sigma t} = 0,$

we see that the convergence for $p_{\lambda,\sigma}$ is faster. Hence we do not consider the latter penalty function, $q_{\lambda,\sigma}$, in this paper.
We may also generalize our penalty function to $\lambda\left(1 - e^{-\sigma|w|^m}\right)$ for $0 < m < 1$. However, the $\mu$ corresponding to this function is unbounded, making it more severely nonconvex, similar to the Bridge penalty function when $q < 1$. Hence, in this paper we focus only on the case in eq. (3). Further comparison with the Bridge penalty is given in the subsection below.
Properties of the arctan penalty function
Here we check the properties for the arctan penalty,

$p_{\lambda,\sigma}(w) = \frac{2\lambda}{\pi}\arctan(\sigma|w|).$
Property (P1) ($p_{\lambda,\sigma}(0) = 0$ and $p_{\lambda,\sigma}$ is symmetric around zero) is again easily verified.
(P2): For $w > 0$,

$p'_{\lambda,\sigma}(w) = \frac{2\lambda}{\pi} \cdot \frac{\sigma}{1 + \sigma^2 w^2}$

is positive for $\lambda, \sigma > 0$. Hence $p_{\lambda,\sigma}(w)$ is increasing for $w \in [0, \infty)$.
We state as a lemma a well-known fact about the arctan function.
Lemma 2.2.
For $t \ge 0$,

$\frac{t}{1 + t^2} \le \arctan(t) \le t. \qquad (8)$
Proof.
If we take $f(t) = t - \arctan(t)$, then $f'(t) = 1 - \frac{1}{1+t^2} \ge 0$ for $t \ge 0$, and hence $f$ is nondecreasing in the interval $[0, \infty)$. In particular $f(t) \ge f(0) = 0$, which proves the right inequality. Similarly, by writing $g(t) = \arctan(t) - \frac{t}{1+t^2}$, we have $g'(t) = \frac{1}{1+t^2} - \frac{1 - t^2}{(1+t^2)^2} = \frac{2t^2}{(1+t^2)^2} \ge 0$ for $t \ge 0$, proving the left inequality. ∎
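The two-sided bound can be spot-checked numerically, assuming the reconstructed form $t/(1+t^2) \le \arctan(t) \le t$:

```python
import math

# Check the lemma's bound on a grid of nonnegative t values.
def bound_holds(t):
    return t / (1.0 + t * t) <= math.atan(t) <= t

assert all(bound_holds(i * 0.01) for i in range(1000))
```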
(P3) For $t > 0$ consider the function $p_{\lambda,\sigma}(t)/t = \frac{2\lambda}{\pi} \cdot \frac{\arctan(\sigma t)}{t}$. Its derivative has the same sign as $\frac{\sigma t}{1 + \sigma^2 t^2} - \arctan(\sigma t)$, which is $\le 0$ by the left inequality of the above lemma; hence $p_{\lambda,\sigma}(t)/t$ is nonincreasing.
(P4) $p'_{\lambda,\sigma}(0^+) = \frac{2\lambda\sigma}{\pi}$. Any point in the interval $\left[-\frac{2\lambda\sigma}{\pi}, \frac{2\lambda\sigma}{\pi}\right]$ is a subgradient of $p_{\lambda,\sigma}$ at $w = 0$.
(P5) $\mu = \frac{3\sqrt{3}\,\lambda\sigma^2}{4\pi}$ makes $p_{\lambda,\sigma}(w) + \frac{\mu}{2}w^2$ convex.
(P6′) It is easy to check that $\lim_{t \to \infty} p'_{\lambda,\sigma}(t) = 0$. It is also easy to see that $\lim_{t \to 0^+} p'_{\lambda,\sigma}(t)/\lambda = \frac{2\sigma}{\pi} > 0$, which guarantees that (P9) is satisfied.
Convergence of the Laplace and arctan approximation functions
Here we present heuristic justifications for using the Laplace or arctan penalties over the Bridge penalties, by considering their respective errors in approximating the indicator function involved in the canonical selection procedure (eq. (2)).
Lemma 2.3.
Consider the approximation functions $|w|^q$ for $0 < q < 1$, and $1 - e^{-\sigma|w|}$ for some fixed $\sigma > 0$. The overall error in approximating the indicator function for the Bridge function $|w|^q$ is much larger than that for the Laplace-type function $1 - e^{-\sigma|w|}$.
Proof.
We give a proof based on heuristic analysis. Because of symmetry we focus only on the right side of the origin on the x-axis for the error analysis. For an interval $[a, a+1]$, with $a \gg 1$, we have $\int_a^{a+1} w^q\,dw = \frac{(a+1)^{q+1} - a^{q+1}}{q+1} \approx a^q$, where we used the leading term of the Taylor series approximation to the function $(1 + 1/a)^{q+1}$. The area under the curve for the indicator function in this interval is $1$, so that the error in approximation is approximately

$a^q - 1.$

For the approximation function $1 - e^{-\sigma w}$, the area under the curve in the interval $[a, a+1]$ with $a \gg 1$ is

$1 - \frac{e^{-\sigma a}\left(1 - e^{-\sigma}\right)}{\sigma}.$

Using the leading term in the Taylor series approximation, $1 - e^{-\sigma}$ can be approximated by $\sigma$, so that the absolute value of the error is approximately $e^{-\sigma a}$. Thus the absolute value of the error for $|w|^q$ in a unit interval (with $a \gg 1$) is approximately $a^q - 1$, and that for $1 - e^{-\sigma w}$ in the same interval is approximately $e^{-\sigma a}$. For fixed $q$ and $\sigma$, the former can be made arbitrarily large, and the latter arbitrarily small, by increasing $a$. The error for $1 - e^{-\sigma w}$ may be larger than that for $|w|^q$ in the interval $[0,1]$, but the difference in this interval is bounded. ∎
Lemma 2.4.
Consider the approximation functions $|w|^q$ for $0 < q < 1$, and $\frac{2}{\pi}\arctan(\sigma|w|)$, where $\sigma > 0$ is fixed. The overall error for $|w|^q$ is much larger than that of the arctan approximation.
Proof.
In this case

$\int_a^{a+1} \frac{2}{\pi}\arctan(\sigma w)\,dw = \frac{2}{\pi}\left[w \arctan(\sigma w) - \frac{\log\left(1 + \sigma^2 w^2\right)}{2\sigma}\right]_a^{a+1} \approx 1 - \frac{2}{\pi \sigma a},$

where the approximate equality above was obtained using the leading term in the Taylor series expansion of each of the following functions: $\arctan\left(\frac{1}{\sigma w}\right)$ (via the identity $\arctan(\sigma w) = \frac{\pi}{2} - \arctan\left(\frac{1}{\sigma w}\right)$) and $\log\left(1 + \frac{1}{a}\right)$. Thus the absolute value of the error for the arctan approximation in a unit interval (with $a \gg 1$) is approximately

$\frac{2}{\pi \sigma a},$

which can be made arbitrarily small by increasing $a$, since $\arctan(t)$ increases to $\frac{\pi}{2}$ as $t \to \infty$. On the other hand, as shown in the previous lemma, the corresponding error for $|w|^q$ can be made arbitrarily large by increasing $a$. Also, the errors for $|w|^q$ and the arctan approximation in the interval $[0,1]$ are bounded. ∎
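The two lemmas can be illustrated numerically: integrating the absolute deviation from the indicator function over an interval bounded away from zero, the Bridge penalty's error grows while the Laplace-type function's error is negligible. The $q$ and $\sigma$ values below are illustrative choices, not the paper's:

```python
import math

# Midpoint-rule integral of |f(w) - 1| over [a, b], i.e. the deviation from
# the indicator 1{w != 0}, which equals 1 on this interval.
def total_error(f, a=1.0, b=10.0, n=10000):
    h = (b - a) / n
    return sum(abs(f(a + (i + 0.5) * h) - 1.0) for i in range(n)) * h

bridge = lambda w, q=0.5: w ** q                    # grows without bound
laplace = lambda w, s=5.0: 1.0 - math.exp(-s * w)   # bounded by 1

assert total_error(laplace) < 0.01   # exact value is about exp(-5)/5
assert total_error(bridge) > 10.0    # grows like a^q over the interval
```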
Two other nonconvex penalties
Here we mention two other nonconvex penalties that have appeared in the regularization literature previously. The first one is the Geman–McClure function

$\frac{\lambda\sigma w^2}{1 + \sigma w^2};$

this function is exactly the same as the function $q_{\lambda,\sigma}$ mentioned above if we replace $|w|$ with $w^2$. As mentioned above, the function $q_{\lambda,\sigma}$ is related to the Laplace penalty via Lemma 2.1. For the same parameters, the nonconvexity for $q_{\lambda,\sigma}$ is twice that for the Laplace penalty. It can also be shown that the derivative of the Laplace penalty converges to zero at a faster rate than that of $q_{\lambda,\sigma}$. Based on these considerations, we did not study the Geman–McClure function in this paper.
Yet another nonconvex penalty that has appeared in the literature is the concave logarithmic penalty

$\lambda \log\left(1 + \sigma|w|\right).$

This function increases with the absolute value of the argument, like the $\ell_1$ and $\ell_2$ penalties; although the increase is at a lower rate than $\ell_1$ and $\ell_2$ for large $|w|$, it is still an increasing function, thereby resulting in bias. Hence we do not consider this latter penalty in this paper either.
3 Experimental results
We assess the performance of regularized DNNs with the nonconvex penalty functions presented in this paper by applying them to a real dataset (MNIST). Details of the analysis and a description of the MNIST dataset are given below.
The optimal weights of the fitted deep neural networks (DNNs) were estimated by minimizing the total cross-entropy loss function. We used a batch gradient descent algorithm with early stopping. To avoid the vanishing/exploding gradients problem, the weights were initialized to values obtained from a normal distribution with mean zero and variance $2/n_l$, where $n_l$ is the number of neurons in the $l$-th layer [16, 17]. The rectified linear unit (ReLU) function was used as the activation function.
The training data was randomly split into multiple batches. During each epoch, the gradient descent algorithm was sequentially applied to each of these batches, resulting in new weight estimates. At the end of each epoch, the total validation loss was calculated using the validation set. When twenty consecutive epochs failed to improve the total validation loss, the iteration was stopped. The maximum number of epochs was set at 250. The weight estimate that resulted in the lowest total validation loss was selected as the final estimate. Since there was a random aspect to the way the training set was split into batches, the whole process was repeated three times with seed values 1, 2, and 3. The reported test error rates are the median of the three test error rates obtained with these seed values.
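The training procedure above can be sketched as a generic early-stopping loop (patience 20, at most 250 epochs). The model, batching and losses below are toy stand-ins, not the paper's TensorFlow code:

```python
def train(step_fn, val_loss_fn, patience=20, max_epochs=250):
    """Run step_fn once per epoch; keep the weights with the lowest
    validation loss; stop after `patience` epochs without improvement."""
    best_loss, best_weights, bad_epochs = float("inf"), None, 0
    weights = 0.0
    for epoch in range(max_epochs):
        weights = step_fn(weights)      # one epoch of batch gradient descent
        loss = val_loss_fn(weights)     # total validation loss
        if loss < best_loss:
            best_loss, best_weights, bad_epochs = loss, weights, 0
        else:
            bad_epochs += 1
            if bad_epochs >= patience:  # 20 consecutive non-improving epochs
                break
    return best_weights, best_loss

# Toy quadratic "model": each epoch steps the single weight toward 3.0.
step = lambda w: w + 0.2 * (3.0 - w)
vloss = lambda w: (w - 3.0) ** 2
w, loss = train(step, vloss)
assert abs(w - 3.0) < 1e-3
```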
A triangular learning rate schedule was used because it produced the lowest test error rates [18]. The learning rates varied from a minimum of 0.01 to a maximum of 0.25 (see figure 1 below).
For all penalty functions, the optimal $\lambda$ was found by fitting models over a grid of logarithmically equidistant $\lambda$ values. We used Python ver. 3.6.7rc2 and TensorFlow ver. 1.12.0 for the calculations.
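If, as we assume here, the grid indices 4.00, 4.20, ..., 7.00 appearing in the Appendix B tables are $-\log_{10}\lambda$, the grid can be generated as:

```python
# Assumption: table indices are -log10(lambda), giving 16 logarithmically
# equidistant values from 1e-4 down to 1e-7.
indices = [4.0 + 0.2 * k for k in range(16)]
grid = [10.0 ** -x for x in indices]

assert len(grid) == 16
assert abs(grid[0] - 1e-4) < 1e-15
assert grid[0] > grid[-1]
```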
The models were fit with no regularization, with $\ell_1$ and $\ell_2$ regularization, and with the nonconvex regularization methods. The results based on the new nonconvex penalty functions were comparable to $\ell_1$ and $\ell_2$ regularization. A general overview of the dataset and the DNN model specifications is given in Table 1. The models were intentionally overparameterized to better contrast the effects of the various regularization methods.
Table 1. Overview of dataset and DNN model specifications

Dataset  Domain  Dimensionality  Classes  DNN Specifications  Training Set  Validation Set  Test Set
MNIST  Visual  784 (28×28 greyscale)  10  5 layers, 1024 units  48000  2000  10000
MNIST:
The Modified National Institute of Standards and Technology (MNIST) dataset is a widely used toy dataset of 60,000 greyscale images of handwritten digits. Each image has $28 \times 28 = 784$ pixels. The intensity measures of these 784 pixels form the input variables of the model. The dataset was split into a 48,000-image training set, a 2,000-image validation set, and a 10,000-image test set.
The test error rate obtained with no regularization was 1.87. With Lasso regularization the test error reduced to 1.24, and with Ridge regularization the test error was 1.23. The Laplace and arctan methods gave test error rates of 1.23 and 1.25, which were comparable to Lasso and Ridge.
All the results mentioned above are summarized in Table 2 below.
Table 2. Median test error rates at the optimal $\lambda$

Dataset  None  $\ell_1$  $\ell_2$  SCAD  MCP  Laplace  Arctan
MNIST  1.87  1.24  1.23  1.80  1.60  1.23  1.25
4 Conclusion
Nonconvex regularizers were originally considered in the statistical literature after observing certain limitations of the convex regularizers from the class of Bridge functions. Yet nonconvex regularizers never gained as much popularity as their convex counterparts in DNN applications, mainly because of certain perceived computational and optimization limitations: in the presence of local optima that are not global optima, iterative methods such as gradient or coordinate descent may terminate undesirably at a local optimum. However, recent theoretical work [8,9] that established regularity conditions under which both local and global minima lie within a small neighborhood of the true minimum has brought the limelight back onto nonconvex regularizers. The new theory eliminates the need for specially designed optimization algorithms for most nonconvex regularizers, as it implies that standard first-order optimization methods will converge to points within statistical error of the truth. In other words, nonconvex regularizers that satisfy such regularity conditions enjoy guarantees for both statistical accuracy and optimization efficiency.
Penalty functions typically considered for regularization of DNNs are convex. In this paper, we presented nonconvex penalty functions (Laplace, arctan, SCAD and MCP) that are typically not considered in the DNN literature. The arctan penalty function has not been considered in the statistical literature previously, to the best of our knowledge. We studied the performance of the nonconvex penalty functions by applying DNNs to a large dataset (MNIST). Test error rates for the Laplace and arctan penalty functions were comparable to those obtained with the convex penalties.
References
 (1) Rumelhart, D. E., Hinton, G. E., and Williams, R. J. (1986). Learning representations by backpropagating errors. Nature, 323, 533536.
 (2) Rumelhart, D. E., Hinton, G. E., and Williams, R. J. (1986). Learning internal representations by error propagation. In Rumelhart, D. E. and McClelland, J. L., editors, Parallel Distributed Processing: Explorations in the Microstructure of Cognition. Volume 1: Foundations. MIT Press, Cambridge, MA.
 (3) Tibshirani, R. (1996). Regression shrinkage and selection via the lasso. J. Royal Statist. Soc. B, Vol. 58, No. 1, pages 267–288.

 (4) Hoerl, A.E. and Kennard, R. (1970). Ridge regression: Biased estimation for nonorthogonal problems. Technometrics, 12:55–67.
 (5) Frank, I. and Friedman, J. (1993). A statistical view of some chemometrics regression tools. Technometrics, 35, 109–148.
 (6) J. Fan and R. Li. Variable selection via nonconcave penalized likelihood and its oracle properties. Journal of the American Statistical Association, 96:1348–1360, 2001.
 (7) C.H. Zhang. Nearly unbiased variable selection under minimax concave penalty. Annals of Statistics, 38(2):894–942, 2010.

 (8) Loh, P. and Wainwright, M.J. Regularized M-estimators with nonconvexity: Statistical and algorithmic theory for local optima. Journal of Machine Learning Research, 16 (2015), 559–616.
 (9) P. Loh and M. J. Wainwright. Support recovery without incoherence: A case for nonconvex regularization. Annals of Statistics 45(6): 24552482, 2017.
 (10) Fu, W. J. (1998). Penalized regressions: the Bridge versus the Lasso. J. Comput. Graph. Statist. 7, 397–416.
 (11) Knight, K. and Fu, W. (2000). Asymptotics for lasso-type estimators. Ann. Statist. 28, no. 5, 1356–1378.
 (12) Trzasko, J. and Manduca, A. Highly undersampled magnetic resonance image reconstruction via homotopic L0-minimization. IEEE Transactions on Medical Imaging, Vol. 28, Issue 1, Jan. 2009.

 (13) Lu, C., Tang, J., Yan, S. and Lin, Z. (2014). Generalized Nonconvex Nonsmooth Low-Rank Minimization. In Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR '14). IEEE Computer Society, Washington, DC, USA, 4130–4137.
 (14) Geron, A. HandsOn Machine Learning with ScikitLearn and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems. O’Reilly Media; 1 edition (2017)
 (15) Geman, D. and Yang, C. Nonlinear image recovery with half-quadratic regularization. IEEE Transactions on Image Processing, Volume 4, Issue 7, Jul 1995.
 (16) Glorot, X. and Bengio, Y. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of AISTATS 2010.
 (17) He, K., Zhang, X., Ren, S. and Sun, J. Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification. In Proceedings of ICCV 2015.
 (18) Smith, L.N. Cyclical Learning Rates for Training Neural Networks. DOI: 10.1109/WACV.2017.58 https://ieeexplore.ieee.org/document/7926641
 (19) Lewis, D. D., Yang, Y., Rose, T. G., Li, F. (2004). RCV1: A new benchmark collection for text categorization research. The Journal of Machine Learning Research, 5, 361397.

 (20) Tarigan, B. and van de Geer, S. (2006). Classifiers of support vector machine type with l1 complexity regularization. Bernoulli, 12, 1045–1076.

 (21) Meier, L., van de Geer, S. and Buehlmann, P. (2008). The group Lasso for logistic regression. JRSS, Series B, 70, 53–71.
Appendix A: Statistical Consistency
Statistical consistency results for the weight estimates based on the penalty functions in eq. (3) and eq. (4) can be obtained by slightly modifying existing theoretical results [20,21] in the literature. Asymptotic results for SCAD and MCP are presented in [6] and [7], so we focus only on the Laplace and arctan penalties. Consider the class of logistic classifiers. Classification is done based on the sign of the function $f$ defined as

$f(x) = \log\left(\frac{\pi(x)}{1 - \pi(x)}\right).$

Here $\pi(x) = P(y = 1 \mid x)$ denotes the class probability. If there are $K$ classes, then the class probability for the $k$-th class may be modeled via the softmax function, but for simplicity we focus on binary classification.
We assume that the input space is endowed with a probability measure and let $\|\cdot\|$ be the corresponding $L_2$ norm. The design matrix consists of the $n$ observed copies of the input. With labels coded as $y \in \{-1, +1\}$, the empirical logistic (also known as cross-entropy) loss is

$L_n(f) = \frac{1}{n}\sum_{i=1}^{n} \log\left(1 + e^{-y_i f(x_i)}\right),$

and the theoretical loss is $L(f) = E \log\left(1 + e^{-y f(x)}\right)$. Let $f^*$ denote the minimizer of the theoretical loss.
We assume the following three conditions given in [20].
(C1): There exist constants such that for all inputs the stated bound holds.
(C3): holds for some fixed index $j$. Here $e_j$ denotes the unit vector with 1 as its $j$-th element and 0's elsewhere.
The following theorem holds for , where equals either the Laplace penalty function given in eq (3) or the arctan penalty function given in eq. (4).
Theorem 4.1.
Assume conditions C1 to C3 hold. Then, for universal constants $c_1$ and $c_2$,
where
(9) 
The constant in eq. (9) depends on the penalty function: it takes one value for the Laplace penalty and another for the arctan penalty.
Proof.
We give only a sketch of the proof, as it is the same as the lengthy proof given in [20], with only minor differences. First of all, note that the only difference between the statement of the above theorem and that of Theorem 1 in [20] is the constant in eq. (9).
Although the loss function used in [20] was the hinge loss function, the steps in their proof follow for the logistic loss also (in fact they become easier), as pointed out in [21]. Thus, the only differences in the proof that we need to focus on are the steps corresponding to the penalty functions. Their theorem is stated for the Lasso penalty, and the triangle inequality satisfied by the Lasso penalty is used in certain steps of the proof. But since both the Laplace and arctan penalties are subadditive (being concave on the positive real line, with value 0 at the origin), those steps hold true for these two penalties as well.
The only other step we need to focus on is Lemma 5.2 in their proof, where the key inequality used is
(10) 
Instead, we use the inequalities
(11) 
and
(12) 
Inequality (11) follows from the inequality in (10) and the fact that $1 - e^{-x} \le x$ for $x \ge 0$, which follows easily by considering the function $f(x) = x - \left(1 - e^{-x}\right)$ and noting that $f(0) = 0$ and $f'(x) = 1 - e^{-x} \ge 0$.
Inequality in (12) follows from the inequality in (10) and the right inequality in Lemma 2.2.
∎
Appendix B: Tables
The DNN analysis was repeated for multiple seed values. The test error rate presented in section 3 was the median of the test error rates over all seed values. Detailed results (i.e. test error rates for each grid point and seed value) used for compiling the summary table in section 3 are presented below.
MNIST
MNIST results with no regularization  
Seed  1  2  3  4  5  6  7  8  9  10  Median 
Error  1.69  2.54  1.76  2.38  1.70  1.98  2.34  1.72  2.56  1.76  1.87 
MNIST results with Lasso regularization  
Seed 1  Seed 2  Seed 3  Median  
4.00  1.98  1.98  1.69  1.98 
4.20  1.55  1.55  1.52  1.55 
4.40  1.45  1.36  1.32  1.36 
4.60  1.37  1.34  1.41  1.37 
4.80  1.28  1.40  1.29  1.29 
5.00  1.44  1.27  1.24  1.27 
5.20  1.16  1.34  1.24  1.24 
5.40  1.45  1.36  1.35  1.36 
5.60  1.36  1.42  1.34  1.36 
5.80  1.24  1.32  1.21  1.24 
6.00  1.26  1.31  1.24  1.26 
6.20  1.72  2.09  1.87  1.87 
6.40  2.16  1.78  2.31  2.16 
6.60  2.14  1.97  1.41  1.97 
6.80  1.97  1.84  2.45  1.97 
7.00  1.84  1.97  2.03  1.97 
MNIST results with Ridge regularization  
Seed 1  Seed 2  Seed 3  Median  
4.00  1.26  1.23  1.22  1.23 
4.20  1.36  1.25  1.22  1.25 
4.40  1.45  1.33  1.29  1.33 
4.60  1.31  1.28  1.34  1.31 
4.80  1.33  1.35  1.26  1.33 
5.00  1.90  1.30  1.27  1.30 
5.20  1.70  2.45  2.41  2.41 
5.40  2.21  2.16  2.43  2.21 
5.60  2.09  2.23  2.02  2.09 
5.80  2.43  1.71  1.92  1.92 
6.00  2.14  2.07  2.15  2.14 
6.20  1.99  1.96  2.38  1.99 
6.40  1.83  2.05  2.03  2.03 
6.60  2.02  1.38  1.47  1.47 
6.80  1.88  2.40  1.83  1.88 
7.00  2.19  1.40  1.97  1.97 
MNIST results with Laplace (σ = 1e07) regularization  
Seed 1  Seed 2  Seed 3  Median  
4.00  6.05  5.78  6.43  6.05 
4.20  3.07  2.70  2.93  2.93 
4.40  2.29  2.37  2.41  2.37 
4.60  2.33  2.10  1.93  2.10 
4.80  2.04  1.77  1.94  1.94 
5.00  1.81  1.86  1.79  1.81 
5.20  1.46  1.53  1.39  1.46 
5.40  1.35  1.43  1.45  1.43 
5.60  1.18  1.36  1.23  1.23 
5.80  1.47  1.42  1.45  1.45 
6.00  1.46  1.46  1.35  1.46 
6.20  1.45  1.40  1.41  1.41 
6.40  1.33  1.35  1.35  1.35 
6.60  1.33  1.36  1.30  1.33 
6.80  1.25  1.28  1.34  1.28 
7.00  1.19  1.40  1.33  1.33 

MNIST results with Arctan (σ = 100) regularization  
Seed 1  Seed 2  Seed 3  Median  
4.00  3.43  3.65  5.71  3.65 
4.20  1.95  3.62  1.75  1.95 
4.40  2.57  1.66  2.45  2.45 
4.60  1.94  2.64  1.36  1.94 
4.80  1.51  1.43  1.49  1.49 
5.00  1.47  1.50  2.11  1.50 
5.20  1.41  1.45  2.02  1.45 
5.40  1.51  1.51  1.50  1.51 
5.60  1.44  1.39  1.57  1.44 
5.80  1.88  1.74  1.57  1.74 
6.00  1.30  1.34  1.31  1.31 
6.20  1.35  1.31  1.31  1.31 
6.40  1.38  1.30  1.29  1.30 
6.60  1.33  1.25  1.25  1.25 
6.80  1.39  1.30  1.27  1.30 
7.00  1.34  1.27  1.34  1.34 
MNIST results with SCAD (a = 3.7) regularization  
Seed 1  Seed 2  Seed 3  Median  
4.00  1.95  2.29  1.92  1.95 
4.20  2.07  2.05  1.78  2.05 
4.40  1.94  2.29  1.92  1.94 
4.60  2.08  1.99  1.67  1.99 
4.80  1.70  1.80  1.95  1.80 
5.00  2.34  2.26  2.08  2.26 
5.20  2.28  2.11  1.94  2.11 
5.40  2.29  2.01  2.10  2.10 
5.60  2.28  2.39  1.83  2.28 
5.80  2.02  1.88  1.58  1.88 
6.00  2.32  2.06  1.94  2.06 
6.20  2.12  1.74  1.86  1.86 
6.40  2.20  1.97  1.94  1.97 
6.60  1.89  1.61  2.12  1.89 
6.80  2.00  1.44  1.96  1.96 
7.00  2.07  2.08  2.00  2.07 
MNIST results with MCP (a = 1.5) regularization  
Seed 1  Seed 2  Seed 3  Median  
4.00  1.72  1.97  2.09  1.97 
4.20  2.19  1.89  1.84  1.89 
4.40  2.16  1.88  1.91  1.91 
4.60  1.94  1.92  1.85  1.92 
4.80  1.82  2.38  1.95  1.95 
5.00  2.10  1.93  2.34  2.10 
5.20  1.76  2.28  1.80  1.80 
5.40  2.09  1.33  1.60  1.60 
5.60  1.80  2.12  1.90  1.90 
5.80  2.01  1.83  2.12  2.01 
6.00  2.24  2.20  1.65  2.20 
6.20  2.16  1.46  1.74  1.74 
6.40  2.00  1.67  2.15  2.00 
6.60  2.02  1.78  1.83  1.83 
6.80  2.02  1.96  1.89  1.96 
7.00  2.02  2.08  2.56  2.08 