Large Scale Empirical Risk Minimization via Truncated Adaptive Newton Method

05/22/2017 ∙ by Mark Eisen, et al. ∙ University of Pennsylvania

We consider large scale empirical risk minimization (ERM) problems, where both the problem dimension and the number of samples are large. In these cases, most second order methods are infeasible due to the high cost of computing the Hessian over all samples and of inverting it in high dimensions. In this paper, we propose a novel adaptive sample size second-order method, which reduces the cost of computing the Hessian by solving a sequence of ERM problems corresponding to a subset of samples and lowers the cost of computing the Hessian inverse using a truncated eigenvalue decomposition. We show that while we geometrically increase the size of the training set at each stage, a single iteration of the truncated Newton method is sufficient to solve the new ERM problem within its statistical accuracy. Moreover, for a large number of samples we are allowed to double the size of the training set at each stage, and the proposed method subsequently reaches the statistical accuracy of the full training set after approximately two effective passes. In addition to this theoretical result, we show empirically on a number of well known datasets that the proposed truncated adaptive sample size algorithm outperforms stochastic alternatives for solving ERM problems.




1 Introduction

The recent advances in large scale machine learning focus largely on solving the expected risk minimization problem, in which a set of model parameters of dimension p is found that minimizes an expected loss function. Typically, the expected loss is taken with respect to an unknown probability distribution, so the expected risk is instead estimated by a statistical average over N samples, where N is very large. The minimization of this statistical average is called the empirical risk minimization (ERM) problem. The computational complexity of solving ERM problems depends on the dataset size N

if we use deterministic methods that operate on the full dataset at each iteration. Stochastic optimization has long been used to reduce this cost. Among the stochastic optimization methods are the stochastic gradient descent algorithm (Robbins and Monro, 1951; Bottou, 2010), Nesterov-based methods (Nesterov et al., 2007; Beck and Teboulle, 2009), variance reduction methods (Johnson and Zhang, 2013; Nguyen et al., 2017), stochastic average gradient algorithms (Roux et al., 2012; Defazio et al., 2014a), stochastic majorization-minimization algorithms (Defazio et al., 2014b; Mairal, 2013), hybrid methods (Konecnỳ and Richtárik, 2013), and dual coordinate methods (Shalev-Shwartz and Zhang, 2013, 2016). Although these stochastic first-order methods succeed in reducing the computational complexity of deterministic methods, they suffer from slow convergence on ill-conditioned problems. This drawback inspires the development of stochastic second-order methods, which improve upon the performance of first-order methods by using a curvature correction. These include subsampled Hessian methods (Erdogdu and Montanari, 2015; Pilanci and Wainwright, 2015; Roosta-Khorasani and Mahoney, 2016a, b), incremental Hessian updates (Gürbüzbalaban et al., 2015), stochastic dual Newton ascent (Qu et al., 2016), and stochastic quasi-Newton methods (Schraudolph et al., 2007; Mokhtari and Ribeiro, 2014, 2015; Lucchi et al., 2015; Moritz et al., 2016; Byrd et al., 2016; Mokhtari et al., 2017). However, many of these cannot improve upon the asymptotic convergence rate of linearly convergent first-order methods. Those that do improve the rate do so at great computational cost due to the computation of the full gradient and the Hessian inverse, both of which are prohibitive when N and p are large, as is often true in real datasets.

Each of these methods solves the ERM problem using the full sample set. However, the samples in ERM problems are drawn from a common distribution, so a smaller ERM problem over a subset of samples should have a solution close to that of the full problem. Solving a sequence of smaller ERM problems over growing subsets of samples may therefore reduce computational complexity. The work in (Mokhtari et al., 2016), for instance, reduces the complexity of Newton’s method by adaptively increasing the sample size, but remains impractical for high dimensional problems due to the inverse computation. A key insight in ERM problems is that only the empirical average is optimized, not the true expected loss function. Indeed, the error between the optimal empirical loss and the expected loss, called the statistical accuracy, can be made arbitrarily small by increasing the number of samples N. More important, however, is that the unavoidable presence of such an error grants us latitude in the accuracy of our optimization methods. Given that we can only optimize the expected loss function up to a certain accuracy, we can employ further approximation techniques with negligible loss, so long as the error induced by these approximations is small relative to that of the stochastic approximation. It is therefore possible to significantly reduce the computational cost of our optimization methods without negatively impacting the overall accuracy, making second order methods a viable option.

In this paper, we propose a novel adaptive sample size second-order method, which reduces the cost of computing the Hessian by solving a sequence of ERM problems corresponding to a subset of samples, and lowers the cost of computing the inverse of the Hessian by using a truncated eigenvalue decomposition. In the presented scheme, we increase the size of the training set geometrically, while solving each subproblem up to its statistical accuracy. We show that we can increase the sample size in such a manner that a single iteration of a truncated Newton method is sufficient to solve the increased sample size ERM problem to within its statistical accuracy. This is achieved by exploiting the quadratic convergence region of the new ERM problem. While the proposed method does not converge at a purely quadratic rate due to the truncation of the Newton step, the additional linear term incurred by the approximation can be made negligible with respect to the statistical accuracy of the dataset. The resulting k-Truncated Adaptive Newton (k-TAN) method uses a rank-k approximation of the Hessian to solve high dimensional ERM problems with large sample sizes up to the statistical accuracy at a significantly lower overall cost than existing optimization techniques. Specifically, we demonstrate that, in many cases, we can reach the statistical accuracy of the full problem at a total computational cost far below that of classical Newton’s method by using rank-k approximations of the Hessian inverse.

2 Problem Formulation

We consider in this paper the empirical risk minimization (ERM) problem for a convex function f(w, z), where z is a realization of a random variable Z. More specifically, we seek the optimal variable w of dimension p that minimizes the expected loss L(w) := E_z[f(w, z)]. Define w* as the variable that minimizes the expected loss, i.e.

w* := argmin_w L(w) = argmin_w E_z[f(w, z)].   (1)

In general, the problem in (1) cannot be solved without knowing the distribution of z. As an alternative, we traditionally consider the case that we have access to N samples of z, labelled z_1, ..., z_N. Define then the functions f_i(w) := f(w, z_i) for i = 1, ..., N and an associated empirical risk function L_n(w) := (1/n) Σ_{i=1}^n f_i(w) as the statistical mean over the first n samples. We say that the function L_n approximates the original expected loss with statistical accuracy V_n if the difference between the empirical risk function and the expected loss is upper bounded by V_n for all w with high probability (w.h.p.). The statistical accuracy is typically bounded by V_n = O(1/√n) (Vapnik, 2013), or the stronger V_n = O(1/n) for a set of common problems (Bartlett et al., 2006; Bottou and Bousquet, 2007).

Observe that the sampled loss function L_n differs from the true loss function L by a term of order V_n and, consequently, any additional change of the same order has negligible effect. It is therefore common to regularize non-strongly convex loss functions by a term of order V_n. We then seek the minimum argument of the regularized risk function R_n(w),

w_n† := argmin_w R_n(w) := argmin_w L_n(w) + (cV_n/2) ||w||²,   (2)

where c is a scalar constant. The solution w_n† minimizes the regularized risk function using the first n samples, and is within order V_n of the minimum of the expected loss function L. It follows that by setting n = N we find a solution in (2) that solves the original problem in (1) up to the statistical accuracy V_N of using all N samples.

The problem in (2) is strongly convex and can be solved using any descent method. In particular, Newton’s method uses a curvature-corrected gradient to iteratively update a variable w, and is known to converge to the optimal argument at a very fast quadratic rate. To implement Newton’s method, it is necessary to compute the gradient and Hessian of R_n as

∇R_n(w) = (1/n) Σ_{i=1}^n ∇f_i(w) + cV_n w,   H_n(w) := ∇²R_n(w) = (1/n) Σ_{i=1}^n ∇²f_i(w) + cV_n I.   (3)

The variable w is updated in Newton’s method as

w⁺ = w − H_n(w)⁻¹ ∇R_n(w).   (4)

Solving (1) to the full statistical accuracy (i.e. solving (2) for n = N) using Newton’s method would then require the computation of N individual gradients and Hessians of the functions f_i, for a computational cost of O(Np²) at each iteration. Furthermore, the computation of the Hessian inverse in (4) requires a cost of O(p³), bringing a total of O(Np² + p³) for an iteration of Newton’s method using the whole dataset. For large N and p, this may become computationally infeasible. In this paper we show how this complexity can be reduced by gradually increasing the sample size n and approximating the inverse of the respective Hessian H_n.
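As a concrete illustration of the update in (4), the following sketch runs exact Newton steps on an ℓ2-regularized logistic-loss ERM. This is not the authors' code; the logistic loss, the synthetic data, and the constant c_reg are illustrative assumptions. Assembling the Hessian costs O(np²) and the dense solve costs O(p³), the two bottlenecks discussed above.

```python
import numpy as np

def newton_step(w, X, y, c_reg):
    """One exact Newton update for R(w) = (1/n) sum log(1+exp(-y_i x_i'w)) + (c_reg/2)||w||^2."""
    n, p = X.shape
    z = y * (X @ w)
    s = 1.0 / (1.0 + np.exp(z))                        # sigmoid(-y_i x_i'w)
    grad = -(X * (y * s)[:, None]).mean(axis=0) + c_reg * w
    D = s * (1.0 - s)                                  # per-sample curvature weights
    H = (X * D[:, None]).T @ X / n + c_reg * np.eye(p)  # O(n p^2) to assemble
    return w - np.linalg.solve(H, grad)                # O(p^3) dense solve

rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.standard_normal((n, p))
y = np.sign(X @ rng.standard_normal(p) + 0.1 * rng.standard_normal(n))
w = np.zeros(p)
for _ in range(5):                                     # quadratic rate: few steps suffice
    w = newton_step(w, X, y, c_reg=0.1)
```

Because of the quadratic rate, a handful of iterations typically drives the gradient norm near machine precision; the per-iteration cost, not the iteration count, is what becomes prohibitive at scale.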

3 k-Truncated Adaptive Newton (k-TAN) Method

We propose the k-Truncated Adaptive Newton (k-TAN) method as a low cost alternative for solving (1) to within its statistical accuracy. In the k-TAN method, at each iteration we start from a point w_m within the statistical accuracy of R_m, i.e. R_m(w_m) − R_m(w_m†) ≤ V_m. We geometrically increase the sample size from m to n = αm, where α > 1, and compute w_n using an approximated Newton step on the increased sample size risk function R_n. More specifically, we update the decision variable w_m associated with R_m to a new decision variable w_n associated with R_n with the Newton-type update

w_n = w_m − H̃_{n,k}(w_m)⁻¹ ∇R_n(w_m),   (5)

where H̃_{n,k}(w) is a matrix approximating the Hessian H_n(w) and parametrized by the rank k. In particular we are interested in an approximation whose inverse can be computed with complexity less than O(p³). To define such a matrix, consider λ_1 ≥ λ_2 ≥ ... ≥ λ_p to be the eigenvalues of the Hessian of the empirical risk, ∇²L_n(w), with associated eigenvectors v_1, ..., v_p. We perform an eigenvalue decomposition ∇²L_n(w) = VΛVᵀ, where V := [v_1, ..., v_p] and Λ := diag(λ_1, ..., λ_p). We can then define the truncated eigenvalue decomposition with rank k as V_k Λ_k V_kᵀ, where V_k := [v_1, ..., v_k] and Λ_k := diag(λ_1, ..., λ_k). The full approximated Hessian is subsequently defined as the rank-k approximation of ∇²L_n(w) regularized by cV_n I, i.e.

H̃_{n,k}(w) := V_k Λ_k V_kᵀ + cV_n I.   (6)
The inverse of the approximated Hessian can then be computed directly using V_k and Λ_k as

H̃_{n,k}(w)⁻¹ = V_k (Λ_k + cV_n I)⁻¹ V_kᵀ + (1/(cV_n)) (I − V_k V_kᵀ).   (7)

Observe that setting k = p, i.e., using the full Hessian inverse, recovers the AdaNewton method of (Mokhtari et al., 2016). To understand how we may determine k, consider that the full Hessian computed in (3) is regularized by cV_n I. Therefore, the eigenvalues of ∇²L_n(w) smaller than cV_n are made negligible by the regularization, and can be left out of the approximation. We thus select the k largest eigenvalues of the Hessian, namely those larger than ρcV_n for some truncation parameter ρ.

To analyze the computational complexity of (7), observe that the inverse computation in (7) requires only the inversion of diagonal matrices, and thus the primary cost lies in computing the k largest eigenvalues λ_1, ..., λ_k and their associated eigenvectors V_k. Indeed, the truncated eigenvalue decomposition can in general be computed with at most complexity O(p²k), with recent randomized algorithms finding it at even lower cost (Halko et al., 2011). This results in a total cost of, at worst, O(np² + p²k) to perform the update in (5), thus removing the O(p³) cost.
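A minimal sketch of the truncated inverse in (7), under the assumption that the empirical Hessian is symmetric positive semidefinite and denoting the regularizer cV_n by reg. Here numpy's full eigh stands in for the randomized decomposition of (Halko et al., 2011); only the k largest eigenpairs are kept, and applying the inverse to a vector then costs O(pk) rather than O(p³).

```python
import numpy as np

def truncated_inverse_apply(H_emp, g, k, reg):
    """Return (V_k Lam_k V_k' + reg*I)^{-1} g using the k largest eigenpairs of H_emp."""
    lam, V = np.linalg.eigh(H_emp)            # eigh returns ascending eigenvalues
    lam_k, V_k = lam[-k:], V[:, -k:]          # keep the k largest eigenpairs
    coef = V_k.T @ g
    # Components in span(V_k) are scaled by 1/(lam_i + reg);
    # the orthogonal complement is scaled by 1/reg, as in (7):
    return V_k @ (coef / (lam_k + reg)) + (g - V_k @ coef) / reg

rng = np.random.default_rng(1)
p, k, reg = 50, 10, 0.05
A = rng.standard_normal((p, p))
H_emp = A @ A.T / p                            # PSD stand-in for the empirical Hessian
g = rng.standard_normal(p)
d_trunc = truncated_inverse_apply(H_emp, g, k, reg)
```

Setting k = p reproduces the exact regularized inverse, matching the observation that k = p recovers AdaNewton.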

In this paper we aim to show that while we geometrically increase the size of the training set, a single iteration of the truncated Newton method in (5) is sufficient to solve the new risk function to within its statistical accuracy. To state this result, we first introduce the following assumptions.

Assumption 1

The loss functions f(w, z) are convex with respect to w for all values of z. Moreover, their gradients ∇f(w, z) are Lipschitz continuous with constant M.

Assumption 2

The loss functions f(w, z) are self-concordant with respect to w for all z.

Assumption 3

The difference between the gradients of the empirical loss and the statistical average loss is bounded by V_n^{1/2} for all w with high probability,

||∇L(w) − ∇L_n(w)|| ≤ V_n^{1/2},   w.h.p.   (8)
Based on Assumption 1, the regularized empirical risk gradients ∇R_n are Lipschitz continuous with constant M + cV_n. Assumption 2 states that the loss functions are additionally self-concordant, which is a customary assumption in the analysis of second-order methods; it follows that the functions R_n are therefore also self-concordant. Assumption 3 bounds the difference between the gradients of the expected loss and the empirical loss with n samples by V_n^{1/2}. This is a reasonable bound for the convergence of gradients to their statistical averages by the law of large numbers.

We are interested in establishing the result that, as we increase the sample size at each step, the k-TAN method stays in the quadratic convergence region of the associated risk function. More explicitly, we wish to show the sample size can be increased from m to n = αm such that w_m is in the quadratic region of R_n. Moreover, if w_m is indeed in the quadratic region of R_n, then we demonstrate that a single step of k-TAN as in (5) produces a point w_n that is within the statistical accuracy of the risk R_n.

Consider the k-TAN method defined in (5)-(7) and suppose that the constant ρ for the low rank factorization is defined in terms of a free parameter β chosen from the interval (0, 1). Further consider the variable w_m as a V_m-optimal solution of the risk R_m, i.e., a solution such that R_m(w_m) − R_m(w_m†) ≤ V_m. Let n = αm for some α > 1 and suppose Assumptions 1-3 hold. If the sample size n is chosen such that the following conditions (9) and (10)
are satisfied, then the variable w_n computed from (5) is within the statistical accuracy of R_n with high probability, i.e.,

R_n(w_n) − R_n(w_n†) ≤ V_n   w.h.p.   (11)
The result in Theorem 3 establishes the conditions required to guarantee that the iterates always stay within the statistical accuracy of the risk R_n. The expression in (9) provides a condition on the growth rate α to ensure that the iterate w_m, which is a V_m-suboptimal solution for R_m, is within the quadratic convergence neighborhood of Newton’s method for R_n. The second condition in (10) ensures that a single iteration of k-TAN is sufficient for the updated variable w_n to be within the statistical accuracy V_n of R_n. Note that the first term on the left hand side of (10) is quadratic in the suboptimality and comes from the quadratic convergence of Newton’s method, while the remaining lower order terms are the outcome of the Hessian approximation. Indeed, these terms depend on ρ, which is the upper bound on the ratio of the discarded eigenvalues to the regularization cV_n. The truncation rank must be large enough that ρ is sufficiently small to make (10) hold. It is worth mentioning, as a sanity check, that if we set ρ = 0 then we keep all the eigenvalues and recover the update of Newton’s method, which makes the non-quadratic terms in (10) zero.

The conditions in Theorem 3 are cumbersome but can be simplified if we focus on large n and assume the stronger statistical accuracy bound V_n = O(1/n) holds. Then, (9) and (10) can be simplified to conditions (12) on the pair (α, ρ), respectively. Observe that the first condition depends on α and the second condition depends on both α and ρ. Thus, a pair (α, ρ) must be chosen that satisfies (12) for the result in Theorem 3 to hold. Consequently, when n is large we may double the sample size with each update, α = 2, until n = N, after which we will have obtained a point w_N such that R_N(w_N) − R_N(w_N†) ≤ V_N. After log₂(N/m₀) iterations (roughly 2N samples processed, where m₀ is the initial sample size), we solve the full risk function R_N to within the statistical accuracy V_N. At each iteration, the truncated inverse step requires cost O(p²k). Computing Hessians over n samples requires cost O(np²), resulting in a total complexity of O(Np² + p²k log₂(N/m₀)).

In practice, these parameters may be chosen in a backtracking manner, in which the iterate is updated using an estimated pair (α, ρ). If the resulting iterate is not within the statistical accuracy V_n, the increase factor α is decreased by a backtracking factor and ρ is likewise decreased by a factor. Since w_n† is not known in practice, the suboptimality R_n(w_n) − R_n(w_n†) can be upper bounded using strong convexity as R_n(w_n) − R_n(w_n†) ≤ ||∇R_n(w_n)||² / (2cV_n).
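The strong convexity surrogate above can be checked numerically. The sketch below verifies, on a toy quadratic with known minimizer, the general bound f(w) − f(w*) ≤ ||∇f(w)||² / (2μ) that underlies the exit test (with μ playing the role of cV_n in the k-TAN setting); the quadratic and its constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
p = 6
M = rng.standard_normal((p, p))
A = M @ M.T + np.eye(p)               # Hessian; strong convexity constant mu >= 1
b = rng.standard_normal(p)
f = lambda w: 0.5 * w @ A @ w - b @ w
w_star = np.linalg.solve(A, b)        # known minimizer
mu = np.linalg.eigvalsh(A).min()      # exact strong convexity constant

w = w_star + rng.standard_normal(p)   # arbitrary test point
subopt = f(w) - f(w_star)             # true (unknowable in practice) suboptimality
bound = np.linalg.norm(A @ w - b) ** 2 / (2 * mu)   # computable surrogate
```

The bound holds for any w, which is what makes the surrogate a valid exit condition even though w_n† is never computed.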

The resulting method is presented in Algorithm 1. After preliminaries and initializations in Steps 1-4, the backtracking loop starts in Step 6 with the sample size increased by rate α. After computing the gradient and Hessian in Steps 7 and 8, the rank-k decomposition is computed in Step 9. The k-TAN update is then performed via (7) and (5) in Steps 10 and 11. The factors α and ρ are then decreased using the backtracking parameters, and the statistical accuracy condition is checked in Step 13. We stress that, while ∇R_n(w_n) must be computed to check the exit condition in Step 13, the gradient for these samples must be computed in any case in the following iteration, so no additional computation is added by this step.

1:  Parameters: Sample size increase constant α, truncation parameter ρ, and backtracking factors
2:  Input: Initial sample size n = m₀ and argument w_n with R_n(w_n) − R_n(w_n†) ≤ V_n
3:  while n ≤ N do {main loop}
4:     Update argument and index: w_m = w_n and m = n. Reset factors α̂ = α, ρ̂ = ρ.
5:     repeat {sample size backtracking loop}
6:        Increase sample size: n = min(α̂m, N).
7:        Compute gradient [cf. (3)]: ∇R_n(w_m)
8:        Compute Hessian: ∇²L_n(w_m)
9:        Find low rank decomposition (Halko et al., 2011): V_k Λ_k V_kᵀ ≈ ∇²L_n(w_m)
10:        Compute H̃_{n,k}(w_m)⁻¹ [cf. (7)]
11:        Newton update [cf. (5)]: w_n = w_m − H̃_{n,k}(w_m)⁻¹ ∇R_n(w_m)
12:        Backtrack sample size increase α̂ and truncation factor ρ̂.
13:     until ||∇R_n(w_n)||² / (2cV_n) ≤ V_n
14:  end while
Algorithm 1 k-TAN
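The adaptive loop above can be sketched as follows. This is a simplified version of Algorithm 1 without the backtracking loop; the names, the doubling factor α = 2, the choice V_n = 1/√n, and the rapidly decaying feature scales (which make a small rank k adequate) are all illustrative assumptions, and the warm-start gradient descent phase is omitted.

```python
import numpy as np

def grad_hess(w, X, y, reg):
    """Gradient of the regularized logistic risk and its unregularized empirical Hessian."""
    n, p = X.shape
    z = y * (X @ w)
    s = 1.0 / (1.0 + np.exp(z))                       # sigmoid(-y x'w)
    grad = -(X * (y * s)[:, None]).mean(axis=0) + reg * w
    H_emp = (X * (s * (1.0 - s))[:, None]).T @ X / n
    return grad, H_emp

def ktan(X, y, m0=32, alpha=2, k=5, c=1.0):
    """One truncated Newton step per stage; the sample size grows by alpha each stage."""
    N, p = X.shape
    w, n = np.zeros(p), m0
    while True:
        reg = c / np.sqrt(n)                          # stands in for c * V_n
        grad, H_emp = grad_hess(w, X[:n], y[:n], reg)
        lam, V = np.linalg.eigh(H_emp)                # ascending eigenvalues
        lam_k, V_k = lam[-k:], V[:, -k:]              # k largest eigenpairs
        coef = V_k.T @ grad
        w = w - (V_k @ (coef / (lam_k + reg)) + (grad - V_k @ coef) / reg)
        if n == N:
            return w
        n = min(alpha * n, N)

rng = np.random.default_rng(2)
N, p = 1024, 20
scales = 0.5 ** np.arange(p)                          # fast-decaying spectrum
X = rng.standard_normal((N, p)) * scales
y = np.sign(X @ rng.standard_normal(p) + 0.1 * rng.standard_normal(N))
w_hat = ktan(X, y)
```

Because the discarded eigenvalues here are far below the regularization cV_n, the truncated step is nearly the exact Newton step, mirroring the regime in which the theory applies.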

4 Convergence Analysis

We study the convergence properties of the k-TAN method and in particular prove the result in Theorem 3. Namely, we show that the update described in (5) produces a variable w_n that is within the statistical accuracy V_n of R_n. This, in turn, implies that all future updates will be within the statistical accuracy of their respective increased sample size risk functions until the full set of N samples is used, at which point we will have reached a point that is within the statistical accuracy of problem (1).

4.1 Preliminaries

Before proceeding with the analysis of k-TAN, we first present two propositions that relate the current iterate w_m to the suboptimality and quadratic convergence region of the increased sample size risk R_n. Define Δ_n(w) to be the R_n-suboptimality of a point w, i.e.

Δ_n(w) := R_n(w) − R_n(w_n†),   (13)

where w_n† is the point that minimizes R_n. We establish in the following proposition a bound on the R_n-suboptimality of w_m from the difference in sample sizes m and n and their associated statistical accuracies. The proof can be found in (Mokhtari et al., 2016, Proposition 3).

Consider a point w_m that minimizes the risk function R_m to within its statistical accuracy V_m, i.e. R_m(w_m) − R_m(w_m†) ≤ V_m. If the sample size is increased from m to n = αm with α > 1, the sub-optimality Δ_n(w_m) is upper bounded as


Proposition 4.1 establishes a bound on the R_n-suboptimality of a point w_m whose R_m-suboptimality is within the statistical accuracy V_m. It is also necessary to establish conditions on the increase rate α such that w_m is also in the quadratic convergence region of R_n. Traditional analysis of Newton’s method characterizes quadratic convergence in terms of the Newton decrement λ_n(w) := (∇R_n(w)ᵀ ∇²R_n(w)⁻¹ ∇R_n(w))^{1/2}. The iterate w is said to be in the quadratic convergence region of R_n when λ_n(w) < 1/4 — see (Boyd and Vandenberghe, 2004, Chapter 9.6.4). The conditions for the current iterate w_m to be within this region are presented in the following proposition. The proof can be found in (Mokhtari et al., 2016, Proposition 5).

Define w_n† as an optimal solution of the risk R_n, i.e., w_n† := argmin_w R_n(w). In addition, define λ_n(w_m) as the Newton decrement of the variable w_m associated with the risk R_n. If Assumptions 1-3 hold, then Newton’s method at point w_m is in the quadratic convergence phase for the objective function R_n, i.e., λ_n(w_m) < 1/4, if we have


4.2 Analysis of k-TAN

To analyze the k-TAN method, it is necessary to study the error incurred from approximating the Hessian inverse in (6) with rank k. Because we are only interested in solving each risk function to within its statistical accuracy, however, some approximation error can be afforded. In the following lemma, we characterize the error between the approximate and exact Newton steps using the chosen rank k of the approximation and the associated eigenvalues of the Hessian. Consider the k-TAN update in (5)-(7) for some k ≤ p. The norm of the difference between the k-TAN step and the exact Newton step can be upper bounded as


The result in Lemma 4.2 gives an upper bound on the error incurred in a single iteration of a rank-k approximation of the Newton step versus an exact Newton step. To make this error small, a sufficiently large k must be chosen such that the discarded eigenvalues are on the order of the regularization cV_n. The size of k will therefore depend on the distribution of the eigenvalues of the particular empirical risk function. However, in practical datasets of high dimension, it is often the case that most eigenvalues of the Hessian are close to 0, in which case k can be made very small. This trend is supported by our numerical experiments on real world data sets in Section 5 and the Appendix of this paper.

With the results of Proposition 4.1 and Lemma 4.2 in mind, we can characterize the R_n-suboptimality of the updated variable w_n from (5). This is stated formally in the following lemma.

Consider the k-TAN update in (5)-(7). If w_m is in the quadratic neighborhood of R_n, i.e. λ_n(w_m) < 1/4, then the R_n-suboptimality of w_n can be upper bounded by


With Lemma 4.2 we establish a bound on the R_n-suboptimality of the iterate w_n obtained from the k-TAN update in (5). Observe that this is bounded by terms proportional to the suboptimality of the previous point, Δ_n(w_m). We can then establish that Δ_n(w_n) is indeed upper bounded by the statistical accuracy V_n by combining the results in (14) and (17) to obtain Theorem 3. The proofs of Theorem 3 and all supporting lemmata are provided in the Appendix.
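The Newton decrement central to this analysis is cheap to check numerically. The sketch below evaluates λ(w) = (∇R(w)ᵀ ∇²R(w)⁻¹ ∇R(w))^{1/2} on an illustrative quadratic risk (the matrix, vector, and perturbation size are assumptions for the sketch); for a quadratic, a single Newton step lands exactly on the minimizer, driving the decrement to zero.

```python
import numpy as np

def newton_decrement(grad, H):
    """lambda(w) = sqrt(grad' H^{-1} grad) for a strongly convex risk with Hessian H."""
    return float(np.sqrt(grad @ np.linalg.solve(H, grad)))

# Quadratic risk R(w) = 0.5 w'Aw - b'w, so grad = Aw - b and the Hessian is A.
rng = np.random.default_rng(3)
p = 8
M = rng.standard_normal((p, p))
A = M @ M.T + np.eye(p)                       # strongly convex Hessian
b = rng.standard_normal(p)
w = np.linalg.solve(A, b) + 0.005 * rng.standard_normal(p)  # start near the optimum
lam = newton_decrement(A @ w - b, A)          # inside the quadratic phase if < 1/4
w_next = w - np.linalg.solve(A, A @ w - b)    # one Newton step: exact for a quadratic
```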

5 Experiments

We compare the performance of the k-TAN method to existing optimization methods on large scale machine learning problems of practical interest. In particular, we consider a regularized logistic loss function. The k-TAN method is compared against the second order method AdaNewton (Mokhtari et al., 2016) and two first order methods, SGD and SAGA (Defazio et al., 2014a). We study the performance of these methods on the logistic regression problem for two datasets. First, the GISETTE handwritten digit classification dataset from the NIPS 2003 feature selection challenge and, second, the well-known RCV1 dataset for classifying news stories from the Reuters database. We perform additional experiments on the ORANGE dataset from KDD Cup 2009 and the BIO dataset from KDD Cup 2004.

Figure 1: Convergence of k-TAN, AdaNewton, SGD, and SAGA in terms of number of processed gradients (left) and runtime (right) for the GISETTE handwritten digit classification problem.

The GISETTE dataset includes N = 6000 samples of dimension p = 5000. We use constant step sizes for both SGD and SAGA. In both k-TAN and AdaNewton, the sample size is increased by a constant factor at each iteration (the backtracking condition is always satisfied), starting from an initial sample size m₀. For both of these methods, we initially run gradient descent on the initial risk for several iterations so that we may begin within its statistical accuracy. For k-TAN, the truncation is observed to afford a rank k that is a small fraction of p in all of our simulations.

In Figure 1, the convergence results of the four methods on the GISETTE data are shown. The left plot demonstrates the sub-optimality with respect to the number of gradients, or samples, processed. In particular, k-TAN and AdaNewton compute n gradients per iteration, while SGD and SAGA compute 1 gradient per iteration. Observe that the second order methods converge with a smaller number of total processed gradients than the first order methods. We point out that, while k-TAN only approximates the Hessian inverse, its convergence path follows that of AdaNewton exactly. Indeed, both algorithms reach the statistical accuracy of the full dataset after 15000 processed samples, or just over two passes over the dataset. To see the gain in computation time of k-TAN over AdaNewton and the other methods, we present in the right image of Figure 1 the convergence in terms of runtime. In this case, k-TAN outperforms all methods, reaching a lower sub-optimality after 60 seconds than AdaNewton reaches after 80 seconds. Note that the first order methods have a lower cost per iteration than the second order methods, so SAGA also continues to make progress over the 80 seconds shown.

Figure 2: Convergence of k-TAN, AdaNewton, SGD, and SAGA in terms of number of processed gradients (left) and runtime (right) for the RCV1 text classification problem.

For a very high dimensional problem, we consider the RCV1 dataset, for which both the dimension p and sample size N are large. We use constant step sizes for both SGD and SAGA and a small truncation rank for k-TAN, while keeping the parameters for the other methods the same. The results of these simulations are shown in Figure 2. In the left image, observe that, in terms of processed gradients, the second order methods again outperform SGD and SAGA, as expected, with k-TAN again following the path of AdaNewton. Given the high dimension p, the cost of computing the inverse in AdaNewton becomes a large bottleneck. The gain in computation time can then be best seen in the right image of Figure 2. Observe that AdaNewton becomes entirely ineffective in this high dimension. The k-TAN method, alternatively, continues to descend at a fast rate because of the truncated inverse step. For this dataset, k-TAN outperforms all the other methods, reaching the lowest error after 1500 seconds. Since both N and p are large, SAGA also performs well on this dataset due to its small cost per iteration.

Figure 3: Convergence of k-TAN, AdaNewton, SGD, and SAGA in terms of number of processed gradients (left) and runtime (right) for the ORANGE customer relationship prediction problem.

We perform additional numerical experiments on the ORANGE dataset used for customer relationship prediction in KDD Cup 2009. The convergence results are shown in Figure 3. Observe in the right hand plot that all second order methods perform similarly well on this dataset. The first order methods, including SAGA, do not converge after 2000 seconds. We also note that, in this experiment, we were able to reduce the truncation rank k to a very small fraction of the dimension p.

Figure 4: Convergence of k-TAN, AdaNewton, SGD, and SAGA in terms of number of processed gradients (left) and runtime (right) for the BIO protein homology classification problem.

In Figure 4, we show results on the BIO dataset used for protein homology classification in KDD Cup 2004. In this setting, the number of samples N is very large but the problem dimension p is very small. Observe in Figure 4 that both k-TAN and AdaNewton greatly outperform the first order methods, due to the reduced cost in Hessian computation that comes from the adaptive sample size. However, because p is small, the additional gain from truncating the inverse in k-TAN does not provide significant benefit relative to AdaNewton.

6 Discussion

We demonstrated in this paper, both theoretically and empirically, the success of the proposed k-TAN method in solving large scale empirical risk minimization problems. The k-TAN method reduces the total cost of solving (1) to its statistical accuracy in two ways: (i) progressively increasing the sample size to reduce the costs of computing gradients and Hessians, and (ii) using a low rank approximation of the Hessian to reduce the cost of inversion. The gain provided by k-TAN relative to existing methods is therefore most significant in large scale ERM problems with both large sample size N and large dimension p. To see this, consider the alternatives previously discussed:

  • Stochastic first order methods, such as SAGA (Defazio et al., 2014a) and SVRG (Johnson and Zhang, 2013), compute a single gradient per iteration and achieve the statistical accuracy of the full training set with an overall complexity that is linear in N and p but degrades with the condition number of the problem.

  • Newton’s method computes gradients and Hessians over the entire dataset and inverts a p × p matrix at each iteration, requiring a total cost of O(T(Np² + p³)), where T is the number of iterations required to converge. Because Newton’s method converges quadratically, T may be small, but the total cost is made large by N and p.

  • The AdaNewton method (Mokhtari et al., 2016) computes gradients and Hessians for a subset of the dataset and inverts a p × p matrix at each iteration, with the size of the subset increasing geometrically. By doubling the sample size every iteration, the statistical accuracy V_N can be reached in log₂(N/m₀) iterations, after about two passes over the dataset, for a total cost of O(Np² + p³ log₂(N/m₀)). While the Hessian computation cost is reduced, for high dimensional problems the O(p³) inversion cost dominates and the algorithm remains costly.

The k-TAN method computes gradients and Hessians on an increasing subset of data in the same manner as AdaNewton, but reduces the inversion cost at each iteration to O(p²k), resulting in a total cost of O(Np² + p²k log₂(N/m₀)) if the size of the initial training set is large enough. For ill-conditioned problems, this method is a more feasible second-order option than Newton’s method or AdaNewton. This theoretical intuition is indeed supported by the empirical simulations performed on large, high dimensional datasets in this paper.

We acknowledge the support of the National Science Foundation (NSF CAREER CCF-0952867) and the Office of Naval Research (ONR N00014-12-1-0997).

7 Appendix

7.1 Proof of Lemma 4.2

We factorize the difference between the truncated and exact Newton steps and bound it as


Thus, it remains to bound the remaining matrix norm term. To do so, consider that we can factorize H̃_{n,k} and H_n as in (7). We can then expand the difference as


where Λ̃ is the truncated eigenvalue matrix, with zeros padding the last p − k diagonal entries. Observe that the first k entries of the resulting diagonal product cancel, while the last p − k entries are controlled by the ratio of the discarded eigenvalues to the regularization cV_n. Thus, we have that


7.2 Proof of Lemma 4.2

To begin, recall the result from Lemma 4.2 in (16). From this, we use the following result from (Pilanci and Wainwright, 2015, Lemma 6), which we present here as a lemma.

Consider the k-TAN step in (5). The Newton decrement of the k-TAN iterate is bounded by


Lemma 7.2 provides a bound on the Newton decrement of the iterate w_n computed from the k-TAN update in (5) in terms of the Newton decrement of the previous iterate and the error incurred from the truncation of the Hessian. We proceed in a manner similar to (Mokhtari et al., 2016, Proposition 4) by finding upper and lower bounds for the sub-optimality in terms of the Newton decrement parameter λ_n. Consider the result from (Nesterov, 1998, Theorem 4.1.11),


Consider the Taylor expansion of the risk to obtain the lower bound on the sub-optimality,


Assume that the iterate is such that λ_n < 1/4. Then the expression in (23) can be rearranged and bounded as


Now, consider the Taylor expansion in a similar manner to obtain the corresponding upper bound, from (Boyd and Vandenberghe, 2004, Chapter 9.6.3).


Using these bounds with the inequalities in (22) we obtain the upper and lower bounds on the sub-optimality as


Now, consider the bound on the Newton decrement of the k-TAN iterate from (21). As we assume that λ_n(w_m) < 1/4, we have


We substitute this back into the upper bound in (26) to obtain


Consider also from (26) that we can upper bound the Newton decrement in terms of the sub-optimality. We plug this back into the preceding bound to obtain a final bound on the sub-optimality as


7.3 Proof of Theorem 3

The proof of this theorem follows from the previous results. Observe that, from Proposition 4.1, the condition in (9) ensures that w_m will be in the quadratic region of R_n, i.e. λ_n(w_m) < 1/4. This condition validates the result in (17), restated as


From Proposition 4.1 we can bound the R_n-suboptimality of the previous iterate w_m. For notational simplicity, we focus on the case in which the statistical accuracy is V_n = O(1/n), as given in (14). Furthermore, we can take the expression for the truncation error from (16). Consider that, for some ρ, the discarded eigenvalues of the Hessian satisfy λ_{k+1} ≤ ρcV_n. Substituting this and (14) into (30), we obtain


The bound in (31) then provides us the condition in (10).


  • Bartlett et al. (2006) Peter L. Bartlett, Michael I. Jordan, and Jon D. McAuliffe. Convexity, classification, and risk bounds. Journal of the American Statistical Association, 101(473):138–156, 2006.
  • Beck and Teboulle (2009) Amir Beck and Marc Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences, 2(1):183–202, 2009.
  • Bottou (2010) Léon Bottou. Large-scale machine learning with stochastic gradient descent. In Proceedings of COMPSTAT’2010, pages 177–186. Springer, 2010.
  • Bottou and Bousquet (2007) Léon Bottou and Olivier Bousquet. The tradeoffs of large scale learning. In Advances in Neural Information Processing Systems 20, Vancouver, British Columbia, Canada, pages 161–168, 2007.
  • Boyd and Vandenberghe (2004) Stephen Boyd and Lieven Vandenberghe. Convex Optimization. Cambridge University Press, New York, NY, USA, 2004.
  • Byrd et al. (2016) Richard H Byrd, Samantha L Hansen, Jorge Nocedal, and Yoram Singer. A stochastic quasi-Newton method for large-scale optimization. SIAM Journal on Optimization, 26(2):1008–1031, 2016.
  • Defazio et al. (2014a) Aaron Defazio, Francis R. Bach, and Simon Lacoste-Julien. SAGA: A fast incremental gradient method with support for non-strongly convex composite objectives. In Advances in Neural Information Processing Systems 27, Montreal, Quebec, Canada, pages 1646–1654, 2014a.
  • Defazio et al. (2014b) Aaron Defazio, Justin Domke, and Tiberio Caetano. Finito: A faster, permutable incremental gradient method for big data problems. In Proceedings of the 31st international conference on machine learning (ICML-14), pages 1125–1133, 2014b.
  • Erdogdu and Montanari (2015) Murat A. Erdogdu and Andrea Montanari. Convergence rates of sub-sampled Newton methods. In Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, Montreal, Quebec, Canada, pages 3052–3060, 2015.
  • Gürbüzbalaban et al. (2015) Mert Gürbüzbalaban, Asuman Ozdaglar, and Pablo Parrilo. A globally convergent incremental Newton method. Mathematical Programming, 151(1):283–313, 2015.
  • Halko et al. (2011) Nathan Halko, Per-Gunnar Martinsson, and Joel A Tropp. Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions. SIAM review, 53(2):217–288, 2011.
  • Johnson and Zhang (2013) Rie Johnson and Tong Zhang. Accelerating stochastic gradient descent using predictive variance reduction. In Advances in Neural Information Processing Systems 26. Lake Tahoe, Nevada, United States., pages 315–323, 2013.
  • Konecnỳ and Richtárik (2013) Jakub Konecnỳ and Peter Richtárik. Semi-stochastic gradient descent methods. arXiv preprint arXiv:1312.1666, 2(2.1):3, 2013.
  • Lucchi et al. (2015) Aurelien Lucchi, Brian McWilliams, and Thomas Hofmann. A variance reduced stochastic Newton method. arXiv, 2015.
  • Mairal (2013) Julien Mairal. Stochastic majorization-minimization algorithms for large-scale optimization. In Advances in Neural Information Processing Systems, pages 2283–2291, 2013.
  • Mokhtari and Ribeiro (2014) Aryan Mokhtari and Alejandro Ribeiro. RES: Regularized stochastic BFGS algorithm. IEEE Transactions on Signal Processing, 62(23):6089–6104, 2014.
  • Mokhtari and Ribeiro (2015) Aryan Mokhtari and Alejandro Ribeiro. Global convergence of online limited memory BFGS. Journal of Machine Learning Research, 16:3151–3181, 2015.
  • Mokhtari et al. (2016) Aryan Mokhtari, Hadi Daneshmand, Aurelien Lucchi, Thomas Hofmann, and Alejandro Ribeiro. Adaptive Newton method for empirical risk minimization to statistical accuracy. In Advances in Neural Information Processing Systems, pages 4062–4070, 2016.
  • Mokhtari et al. (2017) Aryan Mokhtari, Mark Eisen, and Alejandro Ribeiro. IQN: An incremental quasi-Newton method with local superlinear convergence rate. arXiv preprint arXiv:1702.00709, 2017.
  • Moritz et al. (2016) Philipp Moritz, Robert Nishihara, and Michael I. Jordan. A linearly-convergent stochastic L-BFGS algorithm. In Proceedings of the 19th International Conference on Artificial Intelligence and Statistics, AISTATS 2016, Cadiz, Spain, pages 249–258, 2016.
  • Nesterov (1998) Yu Nesterov. Introductory Lectures on Convex Programming Volume I: Basic course. Citeseer, 1998.
  • Nesterov et al. (2007) Yurii Nesterov et al. Gradient methods for minimizing composite objective function. 2007.
  • Nguyen et al. (2017) Lam Nguyen, Jie Liu, Katya Scheinberg, and Martin Takáč. SARAH: A novel method for machine learning problems using stochastic recursive gradient. arXiv preprint arXiv:1703.00102, 2017.
  • Pilanci and Wainwright (2015) Mert Pilanci and Martin J Wainwright. Newton sketch: A linear-time optimization algorithm with linear-quadratic convergence. arXiv preprint arXiv:1505.02250, 2015.
  • Qu et al. (2016) Zheng Qu, Peter Richtárik, Martin Takác, and Olivier Fercoq. SDNA: stochastic dual Newton ascent for empirical risk minimization. In Proceedings of the 33nd International Conference on Machine Learning, ICML 2016, New York City, NY, USA, June 19-24, 2016, pages 1823–1832, 2016.
  • Robbins and Monro (1951) Herbert Robbins and Sutton Monro. A stochastic approximation method. The Annals of Mathematical Statistics, pages 400–407, 1951.
  • Roosta-Khorasani and Mahoney (2016a) Farbod Roosta-Khorasani and Michael W Mahoney. Sub-sampled Newton methods I: globally convergent algorithms. arXiv preprint arXiv:1601.04737, 2016a.
  • Roosta-Khorasani and Mahoney (2016b) Farbod Roosta-Khorasani and Michael W Mahoney. Sub-sampled Newton methods II: Local convergence rates. arXiv preprint arXiv:1601.04738, 2016b.
  • Roux et al. (2012) Nicolas Le Roux, Mark W. Schmidt, and Francis R. Bach. A stochastic gradient method with an exponential convergence rate for finite training sets. In Advances in Neural Information Processing Systems 25, Lake Tahoe, Nevada, United States, pages 2672–2680, 2012.
  • Schraudolph et al. (2007) Nicol N. Schraudolph, Jin Yu, and Simon Günter. A stochastic quasi-Newton method for online convex optimization. In Proceedings of the Eleventh International Conference on Artificial Intelligence and Statistics, AISTATS 2007, San Juan, Puerto Rico, pages 436–443, 2007.
  • Shalev-Shwartz and Zhang (2013) Shai Shalev-Shwartz and Tong Zhang. Stochastic dual coordinate ascent methods for regularized loss. The Journal of Machine Learning Research, 14:567–599, 2013.
  • Shalev-Shwartz and Zhang (2016) Shai Shalev-Shwartz and Tong Zhang. Accelerated proximal stochastic dual coordinate ascent for regularized loss minimization. Mathematical Programming, 155(1-2):105–145, 2016.
  • Vapnik (2013) Vladimir Vapnik. The Nature of Statistical Learning Theory. Springer Science & Business Media, 2013.