An inexact subsampled proximal Newton-type method for large-scale machine learning

08/28/2017, by Xuanqing Liu et al. (University of California, Davis; University of Michigan; University of Southern California)

We propose a fast proximal Newton-type algorithm for minimizing regularized finite sums that returns an ϵ-suboptimal point in Õ(d(n + √(κ d)) log(1/ϵ)) FLOPS, where n is the number of samples, d is the feature dimension, and κ is the condition number. As long as n > d, the proposed method is more efficient than state-of-the-art accelerated stochastic first-order methods for non-smooth regularizers, which require Õ(d(n + √(κ n)) log(1/ϵ)) FLOPS. The key idea is to form the subsampled Newton subproblem in a way that preserves the finite-sum structure of the objective, thereby allowing us to leverage recent developments in stochastic first-order methods to solve the subproblem. Experimental results verify that the proposed algorithm outperforms previous algorithms for ℓ_1-regularized logistic regression on real datasets.


1 Introduction

We consider optimization problems of the form

(1)    min_{x ∈ ℝ^d} F(x) := f(x) + g(x),   f(x) = (1/n) ∑_{i=1}^{n} f_i(x),

where the f_i's are smooth, convex loss functions and g is a convex but possibly non-smooth regularizer. We also require the smooth part f to be strongly convex and Lipschitz continuous. Such problems are ubiquitous in machine learning applications; concrete instances include (regularized) linear regression and logistic regression.
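
As a concrete instance of (1), consider the ℓ_1-regularized logistic regression problem used later in our experiments. The NumPy sketch below (synthetic data, illustrative only) spells out the finite-sum smooth part f and the non-smooth regularizer g:

import numpy as np

def make_problem(n=1000, d=50, lam=0.01, seed=0):
    """Synthetic l1-regularized logistic regression instance of (1)."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((n, d))                      # rows a_i are feature vectors
    b = np.sign(A @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n))
    return A, b, lam

def f(x, A, b):
    """Smooth part f(x) = (1/n) sum_i log(1 + exp(-b_i a_i^T x))."""
    return np.mean(np.log1p(np.exp(-b * (A @ x))))

def g(x, lam):
    """Non-smooth regularizer g(x) = lam * ||x||_1."""
    return lam * np.abs(x).sum()

def F(x, A, b, lam):
    """Composite objective F(x) = f(x) + g(x)."""
    return f(x, A, b) + g(x, lam)

A, b, lam = make_problem()
x0 = np.zeros(A.shape[1])
print("F(x0) =", F(x0, A, b, lam))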

For (1) with a smooth regularizer, most of the current state-of-the-art algorithms are accelerated stochastic first-order methods, which need Õ(d(n + √(κ n)) log(1/ϵ)) floating point operations (FLOPS) to return an ϵ-suboptimal point (cf. (Allen-Zhu, 2016b)). A notable exception is LiSSA and its variants by (Agarwal et al., 2016), a Newton-type method that only needs Õ(d(n + √(κ d)) log(1/ϵ)) FLOPS to return an ϵ-suboptimal point; by convention we use Õ(·) to suppress logarithmic factors of n, d, κ, etc. As long as n > d, LiSSA is more efficient than accelerated stochastic first-order methods. However, it only handles smooth regularizers, which motivates our study of second-order methods for problems with non-smooth regularizers.

In this paper, we propose a Newton-type method for solving (1) that has a fast rate of convergence. Our convergence rate matches that of state-of-the-art stochastic first-order methods and of LiSSA for smooth regularizers, but the method also accommodates non-smooth regularizers. The basic idea is to combine a proximal Newton-type method with a subsampled Hessian approximation that preserves the finite-sum structure of the smooth part of the objective in the Newton subproblem. This allows us to leverage state-of-the-art stochastic first-order methods to solve the subproblem. As we shall see, the proposed method matches the efficiency of LiSSA: it needs Õ(d(n + √(κ d)) log(1/ϵ)) FLOPS to return an ϵ-suboptimal point. Thus, as long as n > d, the proposed method is more efficient than accelerated stochastic first-order methods.

The rest of the paper is outlined as follows. We present our main algorithm in Section 2 and introduce some related work in Section 3. The theoretical analysis is presented in Section 4 and experimental results are in Section 5.

2 Subsampled Proximal Newton-type methods

1:  Input: Data pairs (a_i, b_i), i = 1, …, n; initial point x_0; desired precision ϵ.
2:  Output: an ϵ-suboptimal solution.
3:  Initialize x_0;
4:  for t = 0, 1, 2, … do
5:     Sample a subset S ⊆ {1, …, n} by leverage score sampling defined in (5); we need |S| = O(d log d) samples (Proposition 4.1);
6:     Calculate the subsampled Hessian H_t by (4);
7:     Solve the quasi-Newton subproblem (2) approximately (using Catalyst+SVRG) to ensure the convergence condition ‖γ_t‖_{H_t^{-1}} ≤ η_t ‖Δx_t‖_{H_t},
where γ_t is the gradient residual defined in (7) and Δx_t is the solution of the subproblem (2). In Section 4.3 we show it only takes a constant number of iterations to ensure this stopping condition.
8:     Choose the step size α_t by Theorem 4.2 (for Phase I) or Theorem 4.2 (for Phase II);
9:     Update the iterate: x_{t+1} = x_t + α_t Δx_t;
10:     if the suboptimality is below ϵ (this can be checked via Corollary 4.2) then
11:        Break;
12:     end if
13:  end for
14:  Return the final iterate.
Algorithm 1 Fast proximal Newton

The proposed method is, at its core, a proximal Newton-type method. The search directions are found by solving the sub-problem

(2)    Δx_t = argmin_{Δ} ∇f(x_t)^T Δ + (1/2) Δ^T H_t Δ + g(x_t + Δ),

where f is the smooth part of the composite function defined in (1) and H_t is a positive definite approximation to the Hessian ∇²f(x_t). The objective of the subproblem is obtained by replacing the smooth part of the objective with a quadratic approximation. For this reason, the algorithm is also called a successive quadratic approximation method (Byrd et al., 2013). If there is no regularizer, the method reduces to a Newton-type method for minimizing the smooth part of the objective.
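
For intuition, the sketch below solves an instance of (2) with an ℓ_1 regularizer by plain proximal gradient (ISTA) on the quadratic model. This generic inner solver is only for illustration; the proposed method instead uses Catalyst+SVRG to exploit the finite-sum structure of the subproblem.

import numpy as np

def soft_threshold(z, tau):
    """Proximal operator of tau * ||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def prox_newton_subproblem(grad, H, x, lam, iters=200, step=None):
    """Approximately solve the subproblem (2),
        min_D  grad^T D + 0.5 * D^T H D + lam * ||x + D||_1,
    with proximal gradient (ISTA).  Illustrative inner solver only; the
    paper's method uses Catalyst+SVRG for this step."""
    d = x.size
    if step is None:
        # 1 / largest eigenvalue of H is a safe proximal-gradient step size.
        step = 1.0 / np.linalg.eigvalsh(H)[-1]
    D = np.zeros(d)
    for _ in range(iters):
        q_grad = grad + H @ D                  # gradient of the quadratic model at D
        D = soft_threshold(x + D - step * q_grad, step * lam) - x
    return D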

From a theoretical perspective, proximal Newton-type methods are known to inherit the desirable convergence properties of Newton-type methods for minimizing smooth functions. Unfortunately, the high cost of solving (2) has prevented widespread adoption of the methods for large-scale machine learning applications.

In this paper, we combine sub-sampling and recent advances in stochastic first-order methods to solve the sub-problem efficiently. For problem (1), the Hessian can be written as

(3)    ∇²f(x) = (1/n) ∑_{i=1}^{n} ∇²f_i(x).

Let S ⊆ {1, …, n} be a random sample consisting of |S| training instances; we define H to be the sub-sampled Hessian:

(4)    H = (1/n) ∑_{i ∈ S} (1/p_i) ∇²f_i(x),

where p_i is the sampling probability for the i-th instance (each instance is included in S independently with probability p_i), and the p_i depend on a small constant c that controls the Hessian approximation error. In leverage score sampling, the probability p_i is proportional to the corresponding leverage score of the Hessian factor: for generalized linear models ∇²f(x) = (1/n) A^T D A with D diagonal, where A is the data matrix. Let the i-th leverage score be (Drineas et al., 2012):

(5)    ℓ_i = a_i^T (A^T A)^{-1} a_i,

where a_i is the i-th row of A (in our case, A is the curvature-weighted data matrix). As we shall see, it is possible to emulate leverage score subsampling in Õ(nd + d^ω) FLOPS, where d^ω is (up to a constant) the computational complexity of d × d matrix multiplication. In this case, subproblem (2) can be rewritten as

(6)    Δx_t = argmin_{Δ} ∇f(x_t)^T Δ + (1/(2n)) ∑_{i ∈ S} (1/p_i) Δ^T ∇²f_i(x_t) Δ + g(x_t + Δ),

which is the sum of |S| quadratic terms plus the gradient term and the regularizer. The key benefit of forming the Hessian approximation by subsampling is that the subproblem objective retains the finite-sum structure of the original objective, thereby allowing us to leverage state-of-the-art stochastic first-order methods to solve the subproblem efficiently. We remark that only the Hessian is subsampled, not the gradient. This allows the subproblem to capture the first-order characteristics of the original problem, thereby preserving the fast convergence rate of proximal Newton-type methods. As we shall see, the computational cost of an inexact subsampled proximal Newton method is competitive with that of state-of-the-art stochastic first-order methods.
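
To make the finite-sum structure concrete, the sketch below forms the subsampled Hessian of the logistic loss in factored form, so the subproblem solver only ever touches |S| rank-one terms and Hessian-vector products. The 1/(|S| n p_i) reweighting is a standard unbiased importance-sampling choice and may differ from the paper's constants.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def subsampled_hessian_factor(A, b, x, p, S):
    """Return W (|S| x d) with H = W^T W approximating the logistic-loss
    Hessian (1/n) sum_i s_i (1 - s_i) a_i a_i^T from the sampled rows S.
    Each sampled row is reweighted by 1/(|S| * n * p_i), a standard
    unbiased importance-sampling choice (constants may differ from the
    paper).  p is the vector of sampling probabilities; S is drawn
    i.i.d. from p with replacement."""
    n, m = A.shape[0], len(S)
    s = sigmoid(b[S] * (A[S] @ x))
    w = s * (1.0 - s) / (m * n * p[S])
    return np.sqrt(w)[:, None] * A[S]

def hessian_vec(W, v):
    """Hessian-vector product H v = W^T (W v) in O(|S| d) time,
    without ever forming the d x d matrix H."""
    return W.T @ (W @ v)

# Example usage (p from leverage-score sampling, x the current iterate):
#   S = np.random.default_rng(0).choice(A.shape[0], size=200, replace=True, p=p)
#   W = subsampled_hessian_factor(A, b, x, p, S)
#   Hv = hessian_vec(W, v)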

Besides subsampling and leveraging state-of-the-art stochastic first-order methods to solve the subproblem, the third idea that is crucial to making the proposed method competitive with stochastic first-order methods is the use of inexact search directions. (Lee et al., 2014; Byrd et al., 2013) propose an inexact stopping condition based on the relative lengths of the composite gradient step on the subproblem and the original objective. We modify their stopping condition to suit our convergence analysis. Define the gradient residual

(7)    γ_t = ∇f(x_t) + H_t Δx_t + v_t,   v_t ∈ ∂g(x_t + Δx_t),

where Δx_t is the (approximate) solution of (2), so γ_t is the residual of the first-order optimality condition of the subproblem; if γ_t = 0 then Δx_t is the exact solution to the subproblem (2). However, we only require

‖γ_t‖_{H_t^{-1}} ≤ η_t ‖Δx_t‖_{H_t},

where ‖·‖_{H_t} is the norm induced by H_t and ‖·‖_{H_t^{-1}} is its dual norm (equivalently, the norm induced by H_t^{-1}), and {η_t} is a pre-determined sequence of forcing parameters. As long as the eigenvalues of H_t remain bounded, the proposed inexact stopping condition is (up to a constant) equivalent to the inexact stopping condition of (Lee et al., 2014).

To check the inexact stopping condition, we need a more tractable way to compute the gradient residual given an approximate subproblem solution. To do so, imagine we take one proximal gradient (PG) step of step size τ on the subproblem (2):

Δx_t^+ = prox_{τ g(x_t + ·)} ( Δx_t − τ ( ∇f(x_t) + H_t Δx_t ) ),

where Δx_t^+ is the iterate at which the residual (7) is evaluated. By the properties of the proximal mapping,

(1/τ)(Δx_t − Δx_t^+) − (∇f(x_t) + H_t Δx_t) ∈ ∂g(x_t + Δx_t^+).

Substituting this subgradient into (7), evaluated at Δx_t^+, we see that the residual in the inexact stopping condition is the computable quantity

(8)    γ_t = (τ^{-1} I − H_t)(Δx_t − Δx_t^+).
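
A small sketch of this residual computation for an ℓ_1 regularizer is given below; the step size τ and the callable hessvec (which should return H_t v) are supplied by the caller, and the returned vector is the quantity in (8).

import numpy as np

def soft_threshold(z, tau):
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def subproblem_residual(grad, hessvec, x, D, lam, step=1.0):
    """One proximal-gradient step on the subproblem (2) at the current
    inexact solution D (l1 regularizer), returning the residual (8),
    i.e. (step^{-1} I - H)(D - D_plus).  For step <= 1/lambda_max(H)
    this vanishes only at the exact subproblem solution."""
    q_grad = grad + hessvec(D)                      # gradient of the quadratic model at D
    D_plus = soft_threshold(x + D - step * q_grad, step * lam) - x
    diff = D - D_plus
    return diff / step - hessvec(diff)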

In order to study the computational complexity of the proposed method, we analyze the convergence rate of inexact proximal Newton-type methods on self-concordant composite minimization problems, which may be of independent interest. Compared to the analysis of (Tran-Dinh et al., 2015), our analysis does not rely on an infeasible choice of step size that requires evaluating the proximal Newton decrement.

Our proposed algorithm is summarized in Algorithm 1. Note that in our algorithm, ν controls the inexactness of the Hessian approximation and η_t controls the inexactness of the Newton subproblem solver. Since our analysis holds for any ν and η_t satisfying these constraints, in practice we can simply choose them to be constants.

Here are some more details for each step in Algorithm 1:

  1. With leverage score sampling, Theorem 3.3 and Propositions 4.1 and 4.1 show that O(d log d) samples are sufficient to guarantee that the subsampled Hessian (4) satisfies Assumptions 4.1 and 4.1 with high probability. In this case, we will show that the subsampled Hessian is close enough to the exact Hessian that the inexact proximal Newton method achieves a linear convergence rate.

  2. When forming the subsampled Hessian (4), all we need to do is calculate the sampled terms ∇²f_i(x_t), i ∈ S, in factored form. There is no need to explicitly form the d-by-d matrix H_t, since the subproblem solver directly works with the resulting finite-sum problem.

  3. Instead of solving the subproblem exactly, we only require an inexact search direction up to a certain precision (controlled by η_t). As we shall see, by initializing the subproblem solver at the previous solution Δx_{t−1}, it is possible to obtain an inexact search direction of sufficient accuracy in a constant number of SVRG+Catalyst iterations.

  4. To determine the step size α_t, we first calculate the approximate proximal Newton decrement defined as

    (9)    λ̃_t := ‖Δx_t‖_{H_t} = (Δx_t^T H_t Δx_t)^{1/2},

    which is the “inexact” Newton decrement computed from the inexact subproblem solution Δx_t and the approximate Hessian H_t. If λ̃_t ≥ σ, where σ is a predefined constant, our algorithm is in Phase I and we choose the step size according to Theorem 4.2. Otherwise our algorithm is in Phase II and we choose the step size according to Theorem 4.2. (A small sketch of the decrement computation and the phase test follows this list.)
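
The following small sketch shows how the approximate decrement (9) can be computed from the factored subsampled Hessian and how the phase test might look. The damped Phase I choice 1/(1 + λ̃) and the threshold value are generic stand-ins, not the exact prescriptions of Theorem 4.2.

import numpy as np

def newton_decrement(W, D):
    """Approximate proximal Newton decrement (9): sqrt(D^T H D) with
    H = W^T W held in factored form, so H is never built explicitly."""
    return float(np.linalg.norm(W @ D))

def choose_step(lam_tilde, sigma=0.25):
    """Two-phase step size selection.  sigma is an illustrative threshold;
    the damped choice 1/(1 + lam) is a generic damped-Newton rule standing
    in for the exact Phase I formula of Theorem 4.2."""
    if lam_tilde < sigma:
        return 1.0                      # Phase II: undamped proximal Newton step
    return 1.0 / (1.0 + lam_tilde)      # Phase I: damped step (illustrative)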

3 Related work

Existing algorithms for minimizing composite problems fall into two broad classes: first-order methods and Newton-type (second-order) methods.

3.1 First-order methods

First-order methods are dominant in large-scale optimization due to the fact that their memory requirement is O(d), where d is the problem dimension. The basic variants of most first-order methods converge linearly on strongly convex objectives, and their rate of convergence depends on the condition number κ of the objective. The accelerated variants improve the dependence on the condition number to √κ (Nesterov, 2004). Broadly speaking, to produce an ϵ-suboptimal iterate, first-order methods require O(ndκ log(1/ϵ)) floating point operations (FLOPS), while their accelerated counterparts need O(nd√κ log(1/ϵ)).

On large problems, stochastic first-order methods are preferred because they need fewer passes over the data than their non-stochastic counterparts (Robbins and Monro, 1951). Recently, the idea of variance reduction has led to significant improvements in the efficiency of stochastic first-order methods (Johnson and Zhang, 2013; Xiao and Zhang, 2014; Roux et al., 2012; Defazio et al., 2014; Shalev-Shwartz and Zhang, 2013). The key idea is to compute the full gradient of the objective sparingly during optimization to reduce the variance of the steps as the algorithm converges. The resulting algorithms achieve linear rates of convergence that are comparable to those of their non-stochastic counterparts. Broadly speaking, these methods reduce the computational cost of obtaining an ϵ-suboptimal point to O(d(n + κ) log(1/ϵ)) FLOPS. Accelerated variants of such stochastic first-order methods with variance reduction further reduce the cost to Õ(d(n + √(nκ)) log(1/ϵ)) (Allen-Zhu, 2016a; Lin et al., 2015; Shalev-Shwartz and Zhang, 2014).

3.2 Newton-type methods

Traditional second-order methods, due to their higher computational cost, have been relegated to medium-scale problems. The main bottleneck is forming the d-by-d Hessian matrix and computing the Newton direction by solving a d-by-d linear system. The conjugate gradient method can be used to accelerate this procedure by solving the linear system inexactly, and it has been used successfully in some machine learning tasks (Lin et al., 2008; Keerthi and DeCoste, 2005).

For problems with non-smooth regularizers (e.g., the ℓ_1 penalty), Newton-type methods cannot be directly applied since the objective is non-differentiable. For these problems, a family of proximal Newton methods has been studied recently (Lee et al., 2014). To deal with non-smooth regularizers, proximal Newton methods compute the Newton direction by solving a quadratic-plus-non-smooth subproblem which does not have a closed-form solution, so another iterative solver has to be used to solve the subproblem approximately. There are a few specialized proximal Newton algorithms tailored to specific problems that achieve state-of-the-art performance (Hsieh et al., 2011; Yuan et al., 2012; Friedman et al., 2007).

Recently, there has been a line of research that aims to reduce the computational cost of Newton-type methods so that they are competitive with state-of-the-art first-order methods. The key ideas are subsampling and exploiting the low-rank structure in limited-memory Hessian approximations to accelerate the solution of the Newton subproblem (Erdogdu and Montanari, 2015; Agarwal et al., 2016; Byrd et al., 2011; Roosta-Khorasani and Mahoney, 2016a, b; Xu et al., 2016; Ye et al., 2016; Pilanci and Wainwright, 2015). (Erdogdu and Montanari, 2015) use a uniformly subsampled Hessian in a method called NewSamp, and (Pilanci and Wainwright, 2015) use sketching in place of subsampling. Non-uniform subsampling, and especially leverage score sampling, is desirable because it requires a subsample size that does not depend on the condition number. For example, (Xu et al., 2016) use blocked partial leverage scores to sample data points efficiently, but the overall method only handles smooth problems. At the same time, (Roosta-Khorasani and Mahoney, 2016a, b) subsample both the Hessian and the gradient uniformly to reach a better per-iteration cost when n ≫ d.

Unfortunately, all of the previous work focuses on smooth functions, where the Newton direction can be computed in closed form or by solving a linear system. Compared with prior work, our method is the first subsampled second-order method for problems with non-smooth regularizers. To deal with non-smooth regularizers, we appeal to the proximal Newton framework. Since the proximal Newton subproblem is itself non-smooth and does not have a closed-form solution, we use another iterative solver to compute search directions, which leads to the question of how to balance the inexactness of the subproblem solver against the computational cost. Our theoretical analysis shows that the proposed second-order method has better computational complexity than state-of-the-art first-order methods for solving (1) with non-smooth regularizers.

In a related area, there has also been considerable research on stochastic Newton-type methods that aim to incorporate second-order information into stochastic first-order methods (Byrd et al., 2016; Schraudolph et al., 2007). Unfortunately, these methods generally retain the sublinear convergence rate of their first-order counterparts. The exception is the algorithm of (Moritz et al., 2016), which attains a linear rate of convergence; unfortunately, its rate depends poorly on the condition number of the objective.

3.3 Leverage score subsampling

In this section we will introduce the fast leverage score subsampling algorithm of (Cohen et al., 2015). This algorithm is used for forming the Hessian approximation in Algorithm 1.

[Cohen et al. (2015)] Given a matrix A ∈ ℝ^{n×d}, we can compute a matrix Ã with O(d log d / ε²) rows such that, for all x ∈ ℝ^d,

(1 − ε)‖Ax‖₂² ≤ ‖Ãx‖₂² ≤ (1 + ε)‖Ax‖₂²,

in time Õ(nd + d^ω), where d^ω is the running time of d × d matrix multiplication (ω ≈ 2.373).

The highlight of (Cohen et al., 2015) is that it does not compute the exact leverage scores in one shot; rather, it adopts an iterative scheme: first sample a subset of the rows of A uniformly to construct a crude spectral approximation, then resample rows from A using the leverage score estimates derived from it to get a finer approximation. By repeating this process we can find a spectral approximation of A within the desired error tolerance.
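
For reference, exact leverage scores can be computed from a thin SVD at O(nd²) cost; the point of the iterative scheme above is precisely to avoid this cost, which the small NumPy sketch below does not attempt to replicate.

import numpy as np

def exact_leverage_scores(A):
    """Exact leverage scores l_i = a_i^T (A^T A)^{-1} a_i, equal to the
    squared row norms of the left singular vectors.  O(n d^2) work; used
    only as a reference implementation."""
    U, _, _ = np.linalg.svd(A, full_matrices=False)
    return np.sum(U**2, axis=1)

A = np.random.default_rng(0).standard_normal((2000, 50))
lev = exact_leverage_scores(A)
p = lev / lev.sum()          # sampling probabilities proportional to leverage scores
print(lev.sum(), p.max())    # the scores sum to the rank of A (here 50)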

3.4 Catalyst and SVRG

The Catalyst procedure, or accelerated proximal point algorithm, is a continuation technique for improving the computational complexity of optimization algorithms. At each step, Catalyst adds a small strongly convex proximal term to the objective function, thereby making it easier to solve, and solves the modified problem using a first-order method. By carefully controlling the decrease of the subproblem accuracies ε_k, Lin et al. showed that the overall convergence rate can be improved. The algorithm for Catalyst with SVRG is listed in Algorithm 2.

1:  Input: objective F, strong convexity parameter μ, Catalyst parameter κ_c, initial point x_0
2:  q = μ/(μ + κ_c), α_0 = √q, y_0 = x_0
3:  for k = 1 to K do
4:     x_k ≈ SVRG( F(·) + (κ_c/2)‖· − y_{k−1}‖², init = x_{k−1} ).
5:     Solve α_k² = (1 − α_k) α_{k−1}² + q α_k for α_k ∈ (0, 1)
6:     y_k = x_k + β_k (x_k − x_{k−1}), with β_k = α_{k−1}(1 − α_{k−1}) / (α_{k−1}² + α_k)
7:  end for
8:  Return x_K
Algorithm 2 Universal Catalyst with SVRG solver

The following theorem shows that Catalyst+SVRG converges linearly. In this paper, we will use this algorithm to solve the proximal Newton subproblem (2). [Lin et al.] Choose the parameter κ_c as prescribed in Lin et al. and use SVRG to solve the modified subproblems; then the function value of the subproblem decreases linearly, and Catalyst+SVRG finds an ϵ-suboptimal solution within Õ((m + √(m κ̃)) log(1/ϵ)) component gradient evaluations, where m is the number of terms in the finite sum, κ̃ is its condition number, and Õ omits constants and logarithmic factors.
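
For concreteness, the sketch below shows a plain prox-SVRG loop (Xiao and Zhang, 2014) for a finite sum plus an ℓ_1 term, the kind of variance-reduced inner solver that Catalyst (Algorithm 2) would wrap. The epoch length and step size are illustrative choices, not the tuned values from Lin et al.

import numpy as np

def soft_threshold(z, tau):
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def prox_svrg(grad_i, n_terms, x0, lam, step, epochs=10, inner_iters=None, seed=0):
    """Prox-SVRG for min (1/m) sum_i q_i(x) + lam * ||x||_1.
    grad_i(i, x) must return the gradient of the i-th smooth term."""
    rng = np.random.default_rng(seed)
    m = n_terms
    if inner_iters is None:
        inner_iters = 2 * m
    x_ref = np.asarray(x0, dtype=float).copy()
    for _ in range(epochs):
        full_grad = np.mean([grad_i(i, x_ref) for i in range(m)], axis=0)
        x = x_ref.copy()
        for _ in range(inner_iters):
            i = rng.integers(m)
            v = grad_i(i, x) - grad_i(i, x_ref) + full_grad   # variance-reduced gradient
            x = soft_threshold(x - step * v, step * lam)      # proximal step
        x_ref = x                                             # new reference point
    return x_ref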

In the next section, we present a convergence analysis of inexact proximal Newton-type methods on self-concordant composite minimization problems, which may be of independent interest.

4 Convergence analysis

4.1 Preliminaries on self-concordant function

In this paper we focus on composite objectives whose smooth part f is self-concordant. (Self-concordant) A closed, convex function f is called (standard) self-concordant if

(10)    |∇³f(x)[u, u, u]| ≤ 2 (u^T ∇²f(x) u)^{3/2} = 2 ‖u‖_x³

for all x ∈ dom f and u ∈ ℝ^d, where ‖u‖_x = (u^T ∇²f(x) u)^{1/2} is the local norm. The regularized logistic regression loss is self-concordant for any choice of the regularization parameter (Zhang and Lin, 2015), so we use logistic regression in our experiments. For a self-concordant function we have some useful inequalities:

  • Hessian bound:

    (11)    (1 − ‖y − x‖_x)² ∇²f(x) ⪯ ∇²f(y) ⪯ (1 − ‖y − x‖_x)^{-2} ∇²f(x)
  • Gradient bound:

    (12)    ‖∇f(y) − ∇f(x) − ∇²f(x)(y − x)‖_x* ≤ ‖y − x‖_x² / (1 − ‖y − x‖_x)
  • Function value bound:

    (13)    ω(‖y − x‖_x) ≤ f(y) − f(x) − ∇f(x)^T (y − x) ≤ ω*(‖y − x‖_x)

where ω(t) = t − ln(1 + t) and ω*(t) = −t − ln(1 − t). (11), (12) and the right-hand side of (13) hold for ‖y − x‖_x < 1. Similar to the global analysis of Newton's method, we divide the convergence analysis into two phases. In the first phase, we show in Section 4.2 that the objective function value decreases by at least a constant amount at each iteration. In the second phase, the objective function value converges to its minimum linearly. Since we use a subsampled Hessian and solve the inner problem inexactly, the following conditions on the inaccuracy of the Hessian approximation are required. These are analogues of the Dennis–Moré condition in the analysis of quasi-Newton methods.

(Dennis–Moré condition) For all u ∈ B_d (the unit ball in ℝ^d), the subsampled Hessian matrix H satisfies |u^T (H − ∇²f(x)) u| ≤ ν u^T ∇²f(x) u, where ν is a parameter that controls the preciseness of H (to be fixed later). This is also equivalent to (1 − ν) ∇²f(x) ⪯ H ⪯ (1 + ν) ∇²f(x).

(Deviation condition) For all u, we have ‖(H − ∇²f(x)) u‖_x* ≤ ν ‖u‖_x, where the dual norm is ‖v‖_x* = (v^T ∇²f(x)^{-1} v)^{1/2}.

Next we show that Assumption 4.1 (the Dennis–Moré condition) implies Assumption 4.1: if the Hessian approximation satisfies the Dennis–Moré condition, then the deviation condition will also hold. Further, both Assumptions 4.1 and 4.1 are satisfied if H is a good enough spectral approximation of ∇²f(x): to satisfy Assumption 4.1 it is enough to choose the approximation parameter in Theorem 3.3 on the order of ν. Theorem 3.3 therefore indicates that Assumptions 4.1 and 4.1 will hold if we form the Hessian approximation defined in (4) using O(d log d) samples, where the probability that a sample is selected is proportional to its leverage score.
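
The following self-contained NumPy check illustrates (it does not prove) why leverage-score row sampling yields the kind of spectral approximation the assumptions ask for; the 1/(m n p_i) reweighting is the standard unbiased importance-sampling choice, and the problem sizes are arbitrary.

import numpy as np

# Rows sampled with probability proportional to their leverage scores give a
# subsampled matrix H_sub that spectrally approximates H = W^T W / n.
rng = np.random.default_rng(1)
n, d, m = 5000, 40, 1000
W = rng.standard_normal((n, d)) * np.linspace(0.1, 3.0, n)[:, None]  # rows of very different norms
H = W.T @ W / n

U, _, _ = np.linalg.svd(W, full_matrices=False)
lev = np.sum(U**2, axis=1)                    # leverage scores of the rows of W
p = lev / lev.sum()
S = rng.choice(n, size=m, replace=True, p=p)  # sample rows proportionally to leverage
W_sub = W[S] / np.sqrt(m * n * p[S])[:, None] # reweight so that E[H_sub] = H
H_sub = W_sub.T @ W_sub

L = np.linalg.cholesky(H)
M = np.linalg.solve(L, np.linalg.solve(L, H_sub).T).T   # L^{-1} H_sub L^{-T}
err = np.max(np.abs(np.linalg.eigvalsh(M) - 1.0))
print("relative spectral error:", err)        # err = eps means (1-eps) H <= H_sub <= (1+eps) H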

Based on the properties of self-concordant functions and the preceding conditions on the Hessian approximation, we are ready to show that the convergence rate is linear. In the proof we will follow the update rule and notations introduced in Algorithm 1.

4.2 Outer loop analysis and stopping criterion

Denote by λ_t the (exact) proximal Newton decrement and by λ̃_t the approximate Newton decrement defined in (9); from Assumption 4.1, λ_t and λ̃_t agree up to a factor depending only on ν. As long as λ̃_t ≥ σ, where σ is a small constant, the algorithm is in Phase I. Theorem 4.2 together with Corollary 4.2 show that during this phase, the objective value decreases by at least a constant in each iteration.

By the update rule of Algorithm 1 with the Phase I step size, if we solve the inner problem to the precision

‖γ_t‖_{H_t^{-1}} ≤ η_t ‖Δx_t‖_{H_t},

where γ_t is the subgradient residual and η_t is a forcing coefficient, then the function value will decrease by:

(14)

We remark that, unlike the step size proposed by (Tran-Dinh et al., 2013), our step size does not depend on the exact Newton decrement or the exact Hessian ∇²f(x_t). In practical implementations of proximal quasi-Newton methods, calculating these quantities is impractical. Our step size only depends on the approximate Newton decrement λ̃_t, which is available in our algorithm.

By fixing η_t and the step size, the decrease of the function value at each step is bounded away from zero as long as λ̃_t ≥ σ. So within a finite number of steps, the iterates enter the region λ̃_t < σ (defined as Phase II).

When λ̃_t < σ, our algorithm switches to the undamped subsampled proximal Newton method, where the step size α_t = 1 is adopted and so x_{t+1} = x_t + Δx_t. The following theorem indicates that the Newton decrement, as a metric of suboptimality, converges to zero linear-quadratically: when λ̃_t < σ, if the step size is α_t = 1 and the subproblem solver yields a solution such that the subgradient residual satisfies ‖γ_t‖_{H_t^{-1}} ≤ η_t ‖Δx_t‖_{H_t}, then the Newton decrement converges to zero linear-quadratically:

(15)

If ν, η_t, and λ̃_t are small, the linear term dominates the numerator of the right-hand side, so the contraction factor is asymptotically a constant smaller than one.

If we set ν = 0 (so that H_t is the exact Hessian) and η_t = 0 (so that the subproblem is solved exactly), we recover the quadratic convergence rate of the proximal Newton method: (15) reduces to a purely quadratic contraction of the Newton decrement.

By using the connection between the Newton decrement and the suboptimality F(x_t) − F*, we show that the suboptimality also decreases linearly:

If λ̃_t < σ and we use the undamped update x_{t+1} = x_t + Δx_t, then the gap between the function value and its minimum is upper bounded by:

(16)

It is easy to see that the bound goes to zero as λ̃_t → 0. In practice we use λ̃_t in place of the exact decrement λ_t; this is validated by the Dennis–Moré condition (Assumption 4.1). Note that the proof is similar to the analysis of (Li et al., 2016), but we modify it to accommodate an inexact Hessian approximation.

In addition to the convergence rate, Corollary 4.2 gives a stopping criterion for the outer iteration of our algorithm: for any desired error tolerance ϵ, we terminate the algorithm as soon as the bound in (16) falls below ϵ. In practice, we replace the exact decrement λ_t by λ̃_t inflated by a small constant factor. This is justified by the fact that H_t is a good spectral approximation of the exact Hessian.

4.3 Inner loop analysis

In this part we show that, to satisfy the precision requirement, we only need a constant number of inner iterations if we use a variance reduction method such as SVRG, which can be further accelerated by Catalyst (Lin et al.). Theorem 2 indicates that Catalyst can accelerate many first-order methods, such as SVRG, improving the dependence on the condition number of the subproblem roughly from κ̃ to √κ̃.

Recall that the solution of the subproblem should satisfy the inexact stopping condition (see Theorem 4.2 and Theorem 4.2). The following lemma converts the condition on the residual γ_t into a condition on the function value of the subproblem. To satisfy the condition it is enough to solve the subproblem to a certain precision ϵ_t defined below:

(17)

where Q_t, introduced in (2), is the subproblem at the t-th outer iteration:

(18)    Q_t(Δ) = ∇f(x_t)^T Δ + (1/2) Δ^T H_t Δ + g(x_t + Δ),

and Q_t* is its minimum.

Since the proximal Newton decrement converges to zero linearly in Phase II, the required precision ϵ_t also shrinks, so the number of inner iterations would otherwise have to increase from one outer iteration to the next. However, if we use the last iterate as the initial guess to “warm start” the subproblem solver, we only need a constant number of iterations each time (a toy illustration follows below).
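
The toy NumPy experiment below illustrates warm starting with a plain proximal-gradient inner solver and the exact Hessian, which is much simpler than the Catalyst+SVRG solver of Algorithm 1 but shows the mechanics: the previous direction is passed as the initial point of the next subproblem.

import numpy as np

def soft_threshold(z, tau):
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def solve_subproblem(grad, H, x, lam, D0, tol=1e-8, max_iters=100000):
    """Proximal-gradient solver for the subproblem (2), started from D0.
    Returns the approximate solution and the number of iterations used."""
    step = 1.0 / np.linalg.eigvalsh(H)[-1]
    D = D0.copy()
    for k in range(max_iters):
        q_grad = grad + H @ D
        D_new = soft_threshold(x + D - step * q_grad, step * lam) - x
        if np.linalg.norm(D - D_new) <= tol * step:
            return D_new, k + 1
        D = D_new
    return D, max_iters

# Two successive subproblems from a small logistic-regression instance.
rng = np.random.default_rng(0)
n, d, lam = 500, 30, 0.01
A = rng.standard_normal((n, d))
b = np.sign(A @ rng.standard_normal(d))

def quadratic_model(x):
    s = 1.0 / (1.0 + np.exp(b * (A @ x)))                 # sigma(-b_i a_i^T x)
    grad = -(A * (b * s)[:, None]).mean(axis=0)           # gradient of the logistic loss
    H = (A * (s * (1.0 - s))[:, None]).T @ A / n          # exact Hessian (for the demo)
    return grad, H

x = np.zeros(d)
grad, H = quadratic_model(x)
D, _ = solve_subproblem(grad, H, x, lam, np.zeros(d))
x = x + D                                                  # one undamped proximal Newton step
grad, H = quadratic_model(x)
_, it_cold = solve_subproblem(grad, H, x, lam, np.zeros(d))   # cold start from zero
_, it_warm = solve_subproblem(grad, H, x, lam, D)             # warm start from previous direction
print("inner iterations, cold vs warm start:", it_cold, it_warm)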

With the definition of the subproblem in (18), suppose Δx_t is the ϵ_t-inexact solution of Q_t that satisfies (17), i.e.

(19)    Q_t(Δx_t) − Q_t* ≤ ϵ_t.

If we initialize the next subproblem with Δx_t, then the initial error has the same order of magnitude as the desired error; that is, Q_{t+1}(Δx_t) − Q_{t+1}* ≤ C ϵ_{t+1}, where C is a constant that does not change across major iterations of the subsampled proximal Newton method. Note that many stochastic first-order methods, such as SVRG, are only guaranteed to find an ϵ-optimal solution with a certain probability. As the number of outer iterations grows, the number of SVRG calls also increases, so we need to make sure each SVRG call succeeds with high enough probability that the whole process succeeds. Specifically, we use the union bound P(∪_{t=1}^{T} A_t) ≤ ∑_{t=1}^{T} P(A_t), where A_t is the event that the t-th subproblem fails to converge within the given number of iterations and T is the number of outer iterations. By Markov's inequality, the failure probability of each SVRG call is bounded by:

(20)

Therefore, if we desire the overall probability of failure to be at most δ, it suffices to make the failure rate of each call small enough:

(21)

Combining the above inequalities, we obtain the following theorem, which gives the overall complexity including the sampling overhead and the inner- and outer-loop costs. Our fast inexact proximal Newton method, with Catalyst and SVRG as the inner solver, can find an ϵ-optimal solution with probability 1 − δ in Õ(d(n + √(κ d)) log(1/ϵ)) time (here the sample size is |S| = O(d log d)). The complexity of leverage score sampling is simplified from Lemma 10 in (Cohen et al., 2015), assuming n ≥ d. We outline the proof of Theorem 4.3 here. First of all, from Theorem 4.2 we know that the iterates reach Phase II after a number of iterations bounded by the initial suboptimality divided by the per-iteration decrease from Corollary 4.2. As long as that decrease is bounded below, Phase II is reached within a constant number of iterations. From Theorem 4.2 we know that λ̃_t decreases linearly, and from Corollary 4.2 the algorithm can exit once λ̃_t is small enough, so the number of outer iterations in Phase II is O(log(1/ϵ)), with a constant depending on the linear convergence rate of λ̃_t.

For each major iteration, as long as n is sufficiently larger than d, the cost of subsampling is Õ(nd) FLOPS according to Theorem 3.3. Further, by the union bound (21) and Lemma 4.3, using Catalyst and SVRG we obtain an upper bound on the number of inner iterations per outer iteration that does not grow across outer iterations. Finally, we combine the outer-loop complexity with the inner-loop complexity and note that each inner iteration takes O(d) FLOPS, which yields:

(22)

5 Experiments

In this section, we compare our proposed algorithm with other first/second-order methods on the ℓ_1-regularized logistic regression problem

min_x (1/n) ∑_{i=1}^{n} log(1 + exp(−b_i a_i^T x)) + λ ‖x‖_1,

where (a_i, b_i) are the training data/label pairs and λ is the regularization parameter. Three datasets from the LIBSVM website are chosen. Because Mnist8M is a multiclass dataset, we extract the 1st and 6th classes to form a two-class dataset. Other basic information about the datasets is listed in Table 1. These three datasets mainly differ in sparsity, which we believe is an important factor when comparing different algorithms.

Dataset #Data #Features #Non-zeros
Realsim 72,309 20,958 3,781,392
Covtype 581,012 54 7,521,450
Mnist8M 1,603,260 784 345,075,085
Table 1: Dataset Statistics and Parameters Used in Experiments

We compare the following algorithms with our fast proximal Newton method:

  • LIBLINEAR (Full proximal Newton): The proximal Newton method with exact Hessian is used as the default solver for logistic regression in LIBLINEAR (Fan et al., 2008).

  • SVRG: the variance reduced SGD algorithm proposed in (Johnson and Zhang, 2013).

  • SAGA: another variance reduced SGD algorithm proposed in (Defazio et al., 2014). Our implementation uses O(1) storage per sample by exploiting the structure of the ERM problem (see the sketch after this list).
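
The O(1)-storage trick mentioned above relies on the linear-model structure: since ∇f_i(x) = φ_i'(a_i^T x) a_i, SAGA only needs to remember one scalar per sample. A minimal NumPy sketch (illustrative step size and zero initialization of the gradient table, not our C++ implementation):

import numpy as np

def saga_l1_logreg(A, b, lam, step, epochs=5, seed=0):
    """Proximal SAGA for l1-regularized logistic regression.  Only the
    scalar phi_i'(a_i^T x) is stored per sample instead of a d-vector."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    phi = np.zeros(n)                      # stored scalar derivatives
    avg = A.T @ phi / n                    # running average of stored gradients
    for _ in range(epochs * n):
        i = rng.integers(n)
        new_phi = -b[i] / (1.0 + np.exp(b[i] * (A[i] @ x)))   # phi_i'(a_i^T x)
        v = (new_phi - phi[i]) * A[i] + avg                   # SAGA gradient estimate
        avg += (new_phi - phi[i]) * A[i] / n                  # update running average
        phi[i] = new_phi
        z = x - step * v
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)   # proximal (soft-threshold) step
    return x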

Since the notion of “epoch” is quite different for these algorithms, unlike many other experiments which use data passes or gradient calculations as the x-axis, we evaluate performance by comparing the running time of the different methods. We implement all the algorithms in C++ by modifying the code base of LIBLINEAR, and try to optimize each of them in order to have a fair comparison. In the following, we first test our algorithm with different parameter settings, and then compare it with other competing algorithms.

In the first set of experiments we consider how the number of inner iterations affects the convergence rate; by varying the number of inner iterations we obtain the convergence curves in Figure 1.

Figure 1: Solving logistic regression on Covtype with different numbers of inner iterations and different λ. Unlike the suggestion in (Johnson and Zhang, 2013), we fix the number of stochastic steps per SVRG epoch to a fixed multiple of the data size n.

The result in Figure 1 shows that our algorithm is quite robust to the choice of the number of inner iterations. This is a nice advantage in practice when we cannot afford a grid search for the best hyper-parameters. However, we also noticed that with only one inner iteration the performance is substantially worse than for other choices; this is probably because a single inner iteration cannot solve the subproblem precisely enough to satisfy Lemma 4.3.

Next we compare the different algorithms on the three datasets and choose the regularization coefficient λ from a small set of values. For each (algorithm, dataset, λ) triple we search for the best step size (we later found that the step size is largely determined by λ and depends less on the data for these three datasets, so in fact we use the same step size for all algorithms). We choose a fixed number of inner iterations throughout because, as the previous experiment shows, it does not affect the outcome much. The result is shown in Figure 2. We can see that our algorithm outperforms the others on Covtype and Mnist with both large and small λ; furthermore, our algorithm is especially good with large regularization, where we expect to see linear or even superlinear convergence. On the Realsim dataset, our algorithm is slower than LIBLINEAR, probably because the general-purpose algorithms, including our fast proximal Newton method, do not exploit the sparsity of the data; when λ is large our algorithm is comparable to LIBLINEAR. Another key observation is that as the regularization factor increases, the computational time to convergence decreases (a similar conclusion is made in (Shi et al., 2010)). Intuitively, a larger regularization leads to a sparser solution, so if we initialize at zero we are already close to the optimal solution.

Overall, the experiments show that our algorithm is competitive and slightly better than the state-of-the-art implementation, LIBLINEAR. Therefore, our algorithm not only achieves better theoretical convergence but also has better practical performance than other methods.

Figure 2: Running time comparison of the different algorithms under different conditions. Our algorithm is labeled Prox+SVRG(inner, ·), where the first argument is the number of inner iterations and the second is the number of stochastic steps per SVRG epoch.

6 Summary and discussion

We proposed an inexact subsampled proximal Newton-type method for composite minimization that attains a fast rate of convergence. In particular, it matches the computational efficiency of state-of-the-art stochastic first-order methods and of LiSSA. At a high level, the proposed method combines subsampling with accelerated variance-reduced first-order methods, and is essentially the composite counterpart to LiSSA. We remark that as long as n > d, the proposed method has the best known computational complexity for composite minimization under the stated assumptions. The key takeaway is that, by leveraging recent advances in stochastic first-order methods, it is possible to design second-order methods that are equally efficient, if not more so, for large-scale machine learning.


References

  • Agarwal et al. (2016) Naman Agarwal, Brian Bullins, and Elad Hazan. Second order stochastic optimization in linear time. arXiv preprint arXiv:1602.03943, 2016.
  • Allen-Zhu (2016a) Zeyuan Allen-Zhu. Katyusha: Accelerated variance reduction for faster sgd. ArXiv e-prints, abs/1603.05953, 2016a.
  • Allen-Zhu (2016b) Zeyuan Allen-Zhu. Katyusha: The first direct acceleration of stochastic gradient methods. arXiv preprint arXiv:1603.05953, 2016b.
  • Byrd et al. (2011) Richard H Byrd, Gillian M Chin, Will Neveitt, and Jorge Nocedal. On the use of stochastic hessian information in optimization methods for machine learning. SIAM Journal on Optimization, 21(3):977–995, 2011.
  • Byrd et al. (2013) Richard H Byrd, Jorge Nocedal, and Figen Oztoprak. An inexact successive quadratic approximation method for convex l-1 regularized optimization. arXiv preprint arXiv:1309.3529, 2013.
  • Byrd et al. (2016) Richard H Byrd, Samantha L Hansen, Jorge Nocedal, and Yoram Singer. A stochastic quasi-newton method for large-scale optimization. SIAM Journal on Optimization, 26(2):1008–1031, 2016.
  • Cohen et al. (2015) Michael B Cohen, Yin Tat Lee, Cameron Musco, Christopher Musco, Richard Peng, and Aaron Sidford. Uniform sampling for matrix approximation. In Proceedings of the 2015 Conference on Innovations in Theoretical Computer Science, pages 181–190. ACM, 2015.
  • Defazio et al. (2014) Aaron Defazio, Francis Bach, and Simon Lacoste-Julien. Saga: A fast incremental gradient method with support for non-strongly convex composite objectives. In Advances in Neural Information Processing Systems, pages 1646–1654, 2014.
  • Drineas et al. (2012) Petros Drineas, Malik Magdon-Ismail, Michael W Mahoney, and David P Woodruff. Fast approximation of matrix coherence and statistical leverage. Journal of Machine Learning Research, 13(Dec):3475–3506, 2012.
  • Erdogdu and Montanari (2015) Murat A Erdogdu and Andrea Montanari. Convergence rates of sub-sampled newton methods. In Advances in Neural Information Processing Systems, pages 3052–3060, 2015.
  • Fan et al. (2008) Rong-En Fan, Kai-Wei Chang, Cho-Jui Hsieh, Xiang-Rui Wang, and Chih-Jen Lin. Liblinear: A library for large linear classification. Journal of machine learning research, 9(Aug):1871–1874, 2008.
  • Friedman et al. (2007) J. Friedman, T. Hastie, H. Höfling, and R. Tibshirani. Pathwise coordinate optimization. Ann. Appl. Stat., 1(2):302–332, 2007.
  • Hsieh et al. (2011) C.J. Hsieh, M.A. Sustik, I.S. Dhillon, and P. Ravikumar. Sparse inverse covariance matrix estimation using quadratic approximation. In Adv. Neural Inf. Process. Syst. (NIPS), 2011.
  • Johnson and Zhang (2013) Rie Johnson and Tong Zhang. Accelerating stochastic gradient descent using predictive variance reduction. In Advances in Neural Information Processing Systems, pages 315–323, 2013.
  • Keerthi and DeCoste (2005) S Sathiya Keerthi and Dennis DeCoste. A modified finite newton method for fast solution of large scale linear svms. Journal of Machine Learning Research, 6(Mar):341–361, 2005.
  • Lee et al. (2014) Jason D. Lee, Yuekai Sun, and Michael A. Saunders. Proximal Newton-type methods for minimizing composite functions. SIAM Journal on Optimization, 24(3):1420–1443, 2014. doi: 10.1137/130921428.
  • Li et al. (2016) Jinchao Li, Martin S Andersen, and Lieven Vandenberghe. Inexact proximal newton methods for self-concordant functions. Mathematical Methods of Operations Research, pages 1–23, 2016.
  • Lin et al. (2008) Chih-Jen Lin, Ruby C Weng, and S Sathiya Keerthi. Trust region newton method for logistic regression. Journal of Machine Learning Research, 9(Apr):627–650, 2008.
  • (19) Hongzhou Lin, Julien Mairal, and Zaid Harchaoui. A universal catalyst for first-order optimization.
  • Lin et al. (2015) Hongzhou Lin, Julien Mairal, and Zaid Harchaoui. A universal catalyst for first-order optimization. In Advances in Neural Information Processing Systems, pages 3384–3392, 2015.
  • Moritz et al. (2016) Philipp Moritz, Robert Nishihara, and Michael I Jordan. A linearly-convergent stochastic l-bfgs algorithm. In Proceedings of the Nineteenth International Conference on Artificial Intelligence and Statistics, 2016.
  • Nesterov (2004) Yurii Nesterov. Introductory Lectures on Convex Optimization, volume 87. Springer Science & Business Media, 2004.
  • Pilanci and Wainwright (2015) Mert Pilanci and Martin J Wainwright. Newton sketch: A linear-time optimization algorithm with linear-quadratic convergence. arXiv preprint arXiv:1505.02250, 2015.
  • Robbins and Monro (1951) Herbert Robbins and Sutton Monro. A stochastic approximation method. The annals of mathematical statistics, pages 400–407, 1951.
  • Roosta-Khorasani and Mahoney (2016a) Farbod Roosta-Khorasani and Michael W Mahoney. Sub-sampled newton methods i: globally convergent algorithms. arXiv preprint arXiv:1601.04737, 2016a.
  • Roosta-Khorasani and Mahoney (2016b) Farbod Roosta-Khorasani and Michael W Mahoney. Sub-sampled newton methods ii: Local convergence rates. arXiv preprint arXiv:1601.04738, 2016b.
  • Roux et al. (2012) Nicolas L Roux, Mark Schmidt, and Francis R Bach. A stochastic gradient method with an exponential convergence rate for finite training sets. In Advances in Neural Information Processing Systems, pages 2663–2671, 2012.
  • Schraudolph et al. (2007) Nicol N Schraudolph, Jin Yu, Simon Günter, et al. A stochastic quasi-newton method for online convex optimization. In AISTATS, volume 7, pages 436–443, 2007.
  • Shalev-Shwartz and Zhang (2013) Shai Shalev-Shwartz and Tong Zhang. Stochastic dual coordinate ascent methods for regularized loss minimization. Journal of Machine Learning Research, 14(Feb):567–599, 2013.
  • Shalev-Shwartz and Zhang (2014) Shai Shalev-Shwartz and Tong Zhang. Accelerated proximal stochastic dual coordinate ascent for regularized loss minimization. In ICML, pages 64–72, 2014.
  • Shi et al. (2010) Jianing Shi, Wotao Yin, Stanley Osher, and Paul Sajda. A fast hybrid algorithm for large-scale l1-regularized logistic regression. Journal of Machine Learning Research, 11(Feb):713–741, 2010.
  • Tran-Dinh et al. (2013) Quoc Tran-Dinh, Anastasios Kyrillidis, and Volkan Cevher. Composite self-concordant minimization. arXiv preprint arXiv:1308.2867, 2013.
  • Tran-Dinh et al. (2015) Quoc Tran-Dinh, Anastasios Kyrillidis, and Volkan Cevher. Composite self-concordant minimization. Journal of Machine Learning Research, 16:371–416, 2015.
  • Xiao and Zhang (2014) Lin Xiao and Tong Zhang. A proximal stochastic gradient method with progressive variance reduction. SIAM Journal on Optimization, 24(4):2057–2075, 2014.
  • Xu et al. (2016) Peng Xu, Jiyan Yang, Farbod Roosta-Khorasani, Christopher Ré, and Michael W Mahoney. Sub-sampled newton methods with non-uniform sampling. In Advances in Neural Information Processing Systems, pages 3000–3008, 2016.
  • Ye et al. (2016) Haishan Ye, Luo Luo, and Zhihua Zhang. Revisiting sub-sampled newton methods. arXiv preprint arXiv:1608.02875, 2016.
  • Yuan et al. (2012) G.X. Yuan, C.H. Ho, and C.J. Lin. An improved glmnet for ℓ_1-regularized logistic regression. J. Mach. Learn. Res., 13:1999–2030, 2012.
  • Zhang and Lin (2015) Yuchen Zhang and Xiao Lin. Disco: Distributed optimization for self-concordant empirical loss. In ICML, pages 362–370, 2015.

Appendix A Proof of Proposition 4.1

If the Hessian approximation satisfies the Dennis–Moré condition, then the deviation condition will also hold.

Notice that the exact Hessian is ∇²f(x) = (1/n) ∑_{i=1}^{n} ∇²f_i(x). The subsampled Hessian can be written as H = (1/n) ∑_{i=1}^{n} (δ_i / p_i) ∇²f_i(x), i.e. it is obtained by reweighting with a random diagonal matrix whose i-th diagonal entry is δ_i / p_i, where the δ_i are independent Bernoulli random variables; that is, we sample each instance independently with probability p_i.

Then, by expanding the square of the left-hand side of the deviation condition, we have:

From the first deviation condition:

Because u is arbitrary, every eigenvalue of the matrix relating H to ∇²f(x) must lie in the interval prescribed by the Dennis–Moré condition, which ensures:

After rearranging, we complete the proof:

Appendix B Proof of Proposition 4.1

To satisfy Assumption 4.1 it is enough to choose the approximation parameter in Theorem 3.3 on the order of ν.

First of all, we can expand the Dennis–Moré condition:

(23)

Rearranging:

(24)

So if we choose the approximation parameter in Theorem 3.3 accordingly and expand the norm, we get:

(25)

so that (24) naturally holds.

Appendix C Proofs for Section 4.2

By the update rule of Algorithm 1 with the Phase I step size: