# Second-Order Stochastic Optimization for Machine Learning in Linear Time

First-order stochastic methods are the state-of-the-art in large-scale machine learning optimization owing to efficient per-iteration complexity. Second-order methods, while able to provide faster convergence, have been much less explored due to the high cost of computing the second-order information. In this paper we develop second-order stochastic methods for optimization problems in machine learning that match the per-iteration cost of gradient based methods, and in certain settings improve upon the overall running time over popular first-order methods. Furthermore, our algorithm has the desirable property of being implementable in time linear in the sparsity of the input data.


## 1 Introduction

In recent literature, stochastic first-order optimization has taken the stage as the primary workhorse for training learning models, due in large part to its affordable computational cost, which is linear (in the data representation) per iteration. The main research efforts devoted to improving the convergence rates of first-order methods have introduced elegant ideas and algorithms in recent years, including adaptive regularization [DHS11], variance reduction [JZ13, DBLJ14], dual coordinate ascent [SSZ13], and many more. In contrast, second-order methods have typically been much less explored in large-scale machine learning (ML) applications due to their prohibitive computational cost per iteration, which requires computation of the Hessian in addition to a matrix inversion. These operations are infeasible for large-scale problems in high dimensions. In this paper we propose a family of novel second-order algorithms, LiSSA (Linear time Stochastic Second-Order Algorithm), for convex optimization that attain fast convergence rates while also allowing for an implementation with linear per-iteration cost, matching the running time of the best known gradient-based methods. Moreover, in the setting where the number of training examples $m$ is much larger than the underlying dimension $d$, we show that our algorithm has provably faster running time than the best known gradient-based methods. Formally, the main optimization problem we are concerned with is the empirical risk minimization (ERM) problem:

$$\min_{x \in \mathbb{R}^d} f(x) = \min_{x \in \mathbb{R}^d}\left\{\frac{1}{m}\sum_{k=1}^{m} f_k(x) + R(x)\right\}$$

where each $f_k$ is a convex function and $R$ is a convex regularizer. The above optimization problem is the standard objective minimized in most supervised learning settings; examples include logistic regression, SVMs, etc. A common aspect of many applications of ERM in machine learning is that the loss function $f_k$ is of the form $\ell(x^\top v_k, y_k)$, where $(v_k, y_k)$ is the $k$-th training example-label pair. We call such functions generalized linear models (GLM) and will restrict our attention to this case. We will assume that the regularizer is an $\ell_2$ regularizer, typically $R(x) = \frac{\lambda}{2}\|x\|^2$ (some of these assumptions can be relaxed in our analyses, but since these choices are standard in ML, we make them to simplify the discourse). Our focus is second-order optimization methods (Newton's method), where in each iteration the underlying principle is to move to the minimizer of the second-order Taylor approximation at the current point. Throughout the paper, we will let $\nabla^{-2}f(x) \triangleq \left(\nabla^2 f(x)\right)^{-1}$. The update of Newton's method at a point $x_t$ is then given by

$$x_{t+1} = x_t - \nabla^{-2}f(x_t)\,\nabla f(x_t). \qquad (1)$$
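For concreteness, the update (1) can be exercised on a small synthetic problem. The sketch below uses $\ell_2$-regularized logistic regression; the data, sizes, and parameter values are illustrative and not taken from the paper.

```python
import numpy as np

# Toy illustration of the Newton update (1) on l2-regularized logistic
# regression; all data and parameters are synthetic.
rng = np.random.default_rng(0)
m, d, lam = 200, 5, 0.1
A = rng.standard_normal((m, d))
y = rng.choice([-1.0, 1.0], size=m)

def gradient(x):
    s = 1.0 / (1.0 + np.exp(y * (A @ x)))      # sigma(-y_k a_k^T x)
    return -(A * (y * s)[:, None]).mean(axis=0) + lam * x

def hessian(x):
    s = 1.0 / (1.0 + np.exp(y * (A @ x)))
    w = s * (1.0 - s)                          # per-example curvature l''
    return (A * w[:, None]).T @ A / m + lam * np.eye(d)

x = np.zeros(d)
for _ in range(10):                            # x_{t+1} = x_t - H(x_t)^{-1} g(x_t)
    x = x - np.linalg.solve(hessian(x), gradient(x))
assert np.linalg.norm(gradient(x)) < 1e-6      # converged to high accuracy
```

Note that forming and solving with the Hessian here costs $O(md^2 + d^3)$ per step, which is precisely the expense the paper sets out to avoid.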

Certain desirable properties of Newton's method include the fact that its updates are independent of the choice of coordinate system and that the Hessian provides the necessary regularization based on the curvature at the present point. Indeed, Newton's method can be shown to eventually achieve quadratic convergence [Nes13]. Although Newton's method comes with good theoretical guarantees, the complexity per step grows roughly as $md^2 + d^\omega$ (the former term for computing the Hessian and the latter for inverting it, where $\omega$ is the matrix multiplication constant), making it prohibitive in practice. Our main contribution is a suite of algorithms, each of which performs an approximate Newton update based on stochastic Hessian information and is implementable in linear time. These algorithms match and improve upon the performance of first-order methods in theory and give promising results as optimization methods on real-world data sets. In the following we give a summary of our results. We propose two algorithms, LiSSA and LiSSA-Sample.

LiSSA: Algorithm 1 is a practical stochastic second-order algorithm based on a novel estimator of the Hessian inverse, leading to an efficient approximate Newton step (Equation (1)). The estimator is based on the well-known Taylor approximation of the inverse (Fact 2) and is described formally in Section 3.1. We prove the following informal theorem about LiSSA.

###### Theorem 1.1 (Informal).

LiSSA returns a point $x_t$ such that $f(x_t) \leq \min_x f(x) + \varepsilon$ in total time

$$\tilde{O}\left(\left(m + S_1\kappa\right)d\,\log\left(\frac{1}{\varepsilon}\right)\right)$$

where $\kappa$ is the underlying condition number of the problem and $S_1$ is a bound on the variance of the estimator.

The precise version of the above theorem appears as Theorem 3.3. In theory, the best bound we can show for $S_1$ is $O\big((\hat{\kappa}^{\max}_l)^2\big)$; however, in our experiments we observe that setting $S_1$ to be a small constant (often 1) is sufficient. We conjecture that $S_1$ can be improved to $O(1)$ and leave this for future work. If indeed $S_1$ can be improved to $O(1)$ (as is indicated by our experiments), LiSSA enjoys a convergence rate comparable to first-order methods. We provide a detailed comparison of our results with existing first-order and second-order methods in Section 1.2. Moreover, in Section 7 we present experiments on real-world data sets demonstrating that LiSSA as an optimization method performs well compared to popular first-order methods. We also show that LiSSA runs in time proportional to input sparsity, making it an attractive method for high-dimensional sparse data.

LiSSA-Sample: This variant brings together efficient first-order algorithms with matrix sampling techniques [LMP13, CLM15] to achieve better runtime guarantees than the state of the art in convex optimization for machine learning in the regime when $m \gg d$. Specifically, we prove the following theorem:

###### Theorem 1.2 (Informal).

LiSSA-Sample returns a point $x_t$ such that $f(x_t) \leq \min_x f(x) + \varepsilon$ in total time

$$\tilde{O}\left(\left(m + \sqrt{\kappa d}\right)d\,\log^2\left(\frac{1}{\varepsilon}\right)\log\log\left(\frac{1}{\varepsilon}\right)\right).$$

The above result improves upon the best known running time for first-order methods achieved by acceleration when we are in the setting where $m \gg d$. We discuss the implications of our bounds and further work in Section 1.3. In all of our results stated above, $\kappa$ corresponds to the condition number of the underlying problem. In particular, we assume some strong convexity for the underlying problem. This is a standard assumption, usually enforced by the addition of the regularizer. In stating our results formally we stress the nuances between different notions of the condition number (ref. Section 2.1), and we state our results precisely with respect to these notions. In general, all of our generalizations/relaxations of the condition number are smaller than $1/\lambda$, where $\lambda$ is the regularization parameter, and this is usually taken to be the condition number of the problem. The condition of strong convexity has been relaxed in the literature by introducing proximal methods. It is an interesting direction, which we leave for future work, to adapt our results to those settings.

We also remark that all of our results focus on the very high accuracy regime. In general, the benefits of linear convergence and second-order methods become effective only when considerably small error is required. This is also the case for recent advances in fast first-order methods, whose improvement over stochastic gradient descent becomes apparent only in the high accuracy regime. Our experiments also demonstrate that second-order methods can improve upon fast first-order methods in the regime of very high accuracy. While it is possible that this regime is less interesting for generalization, in this paper we focus on the optimization problem itself.

We further consider the special case when the function $f$ is self-concordant. Self-concordant functions are a sub-class of convex functions which have been extensively studied in the convex optimization literature in the context of interior point methods [Nem04]. For self-concordant functions we propose an algorithm (Algorithm 5) which achieves linear convergence with running time guarantees independent of the condition number. We prove the formal running time guarantee as Theorem 6.2. We believe our main contribution to be a demonstration of the fact that second-order methods are comparable to, or even better than, first-order methods in the large data regime, in both theory and practice.

### 1.1 Overview of Techniques

LiSSA: The key idea underlying LiSSA is the use of the Taylor expansion to construct a natural estimator of the Hessian inverse. Indeed, as can be seen from the description of the estimator in Section 3.1, the estimator we construct becomes unbiased in the limit as we include additional terms in the series. We note that this is not the case with estimators considered in previous works such as that of [EM15], and we therefore consider our estimator to be more natural. In the implementation of the algorithm we achieve the optimal bias/variance trade-off by truncating the series appropriately. An important observation underlying our linear time step is that for GLM functions, $\nabla^2 f_k(x)$ has the form $c_k(x)\,v_k v_k^\top$, where $c_k(x)$ is a scalar dependent on $x$. A single step of LiSSA requires us to efficiently compute $\nabla^2 f_k(x)\,w$ for a given vector $w$. In this case it can be seen that the matrix-vector product reduces to a vector-vector product, giving us an $O(d)$ time update.

LiSSA-Sample: LiSSA-Sample is based on Algorithm 2, which represents a general family of algorithms that couples the quadratic minimization view of Newton's method with any efficient first-order method. In essence, Newton's method allows us to reduce (up to logarithmic factors) the optimization of a general convex function to solving intermediate quadratic or ridge regression problems. Such a reduction is useful in two ways. First, as we demonstrate through our algorithm LiSSA-Sample, the quadratic nature of ridge regression problems allows us to leverage powerful sampling techniques, leading to an improvement over the running time of the best known accelerated first-order method. On a high level this improvement comes from the fact that when solving a system of $m$ linear equations in $d$ dimensions, a constant number of passes through the data is enough to reduce the system to $\tilde{O}(d)$ equations. We carefully couple this principle and the computation required with accelerated first-order methods to achieve the running times for LiSSA-Sample. The result for the quadratic sub-problem (ridge regression) is stated in Theorem 5.1, and the result for convex optimization is stated in Theorem 5.2. The second advantage of the reduction to quadratic sub-problems comes from the observation that the intermediate quadratic sub-problems can potentially be better conditioned than the function itself, allowing a better choice of step size in practice. We define these local notions of condition number formally in Section 2.1 and summarize the typical benefits for such algorithms in Theorem 4.1. In theory this is not a significant improvement; however, in practice we believe that this could be significant and lead to runtime improvements (while this is a difficult property to verify experimentally, we conjecture that it is a possible explanation for why LiSSA performs better than first-order methods on certain data sets and ranges of parameters). To achieve the bound for LiSSA-Sample we extend the definition and procedure for sampling via leverage scores described by [CLM15] to the case when the matrix is given as a sum of PSD matrices and not just rank-one matrices. We reformulate and reprove the theorems of [CLM15] in this context, which may be of independent interest.
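The $O(d)$ observation above can be sketched directly. In the snippet, `v_k` and `c_k` stand for the example vector and curvature scalar from the text; the sizes are placeholders.

```python
import numpy as np

# Sketch of the O(d) Hessian-vector product for a GLM component: since
# grad^2 f_k(x) = c_k(x) * v_k v_k^T, the product with a vector w needs
# only two inner products and never forms the d x d matrix.
def glm_hessian_vector_product(v_k, c_k, w):
    return c_k * (v_k @ w) * v_k            # O(d) work

rng = np.random.default_rng(1)
d = 1000
v, w, c = rng.standard_normal(d), rng.standard_normal(d), 0.7
fast = glm_hessian_vector_product(v, c, w)
slow = (c * np.outer(v, v)) @ w             # explicit O(d^2) reference
assert np.allclose(fast, slow)
```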

### 1.2 Comparison with Related Work

In this section we aim to provide a short summary of the key ideas and results underlying optimization methods for large-scale machine learning. We divide the summary into three high-level principles: first-order gradient-based methods, second-order Hessian-based methods, and quasi-Newton methods. For the sake of brevity we restrict our summary to results in the case when the objective is strongly convex, which as justified above is usually ensured by the addition of an appropriate regularizer. In such settings the main focus is often to obtain algorithms which have provably linear convergence and fast implementations.

First-Order Methods: First-order methods have dominated the space of optimization algorithms for machine learning owing largely to the fact that they can be implemented in time proportional to the underlying dimension (or sparsity). Gradient descent is known to converge linearly to the optimum with a rate of convergence that depends on the condition number of the objective. In the large data regime, stochastic first-order methods, introduced and analyzed first by [RM51], have proven especially successful. Stochastic gradient descent (SGD), however, converges sub-linearly even in the strongly convex setting. A significant advancement in the running time of first-order methods was achieved recently by a clever merging of stochastic gradient descent with its full version to provide variance reduction. The representative algorithms in this space are SAG/SAGA [RSB12, DBLJ14] and SVRG [JZ13, ZMJ13]. The key technical achievement of the above algorithms is to relax the running time dependence on $m$ (the number of training examples) and $\kappa$ (the condition number) from a product to a sum. Another algorithm which achieves similar running time guarantees is based on dual coordinate ascent, known as SDCA [SSZ13].
Further improvements over SAGA, SVRG, and SDCA have been obtained by applying the classical idea of acceleration emerging from the seminal work of [Nes83]. The progression of work here includes an accelerated version of SDCA [SSZ16]; APCG [LLX14]; Catalyst [LMH15], which provides a generic framework to accelerate first-order algorithms; and Katyusha [AZ16], which introduces the concept of negative momentum to extend acceleration for variance-reduced algorithms beyond the strongly convex setting. The key technical achievement of accelerated methods in general is to reduce the dependence on the condition number from linear to a square root. LiSSA places itself naturally into the space of fast first-order methods by having a running time dependence comparable to SAGA/SVRG (ref. Table 1). In LiSSA-Sample we leverage the quadratic structure of the sub-problem, for which efficient sampling techniques have been developed in the literature, and use accelerated first-order methods to improve the running time in the case when the underlying dimension is much smaller than the number of training examples. Indeed, to the best of our knowledge, LiSSA-Sample is the theoretically fastest known algorithm under the condition $m \gg d$. Such an improvement seems out of reach for present first-order methods, as it strongly leverages the quadratic nature of the sub-problem to reduce its size. We summarize these results in Table 1.

Second-Order Methods: Second-order methods such as Newton's method have classically been used in optimization in many different settings, including the development of interior point methods [Nem04] for general convex programming. The key advantage of Newton's method is that it achieves a linear-quadratic convergence rate. However, naive implementations of Newton's method have two significant issues: the standard analysis requires the full Hessian calculation, which costs $O(md^2)$, an expense not suitable for machine learning applications, and the matrix inversion typically requires $O(d^\omega)$ time. These issues were addressed recently by the algorithm NewSamp [EM15], which tackles the first issue by subsampling and the second by low-rank projections. We improve upon the work of [EM15] by defining a more natural estimator for the Hessian inverse and by demonstrating that the estimator can be computed in time proportional to $d$. We also point the reader to the works of [Mar10, BCNN11], which incorporate the idea of taking samples of the Hessian; however, these works do not provide precise running time guarantees for their proposed algorithms in terms of problem-specific parameters. Second-order methods have also enjoyed success in the distributed setting [SSZ14].

Quasi-Newton Methods: The expensive computation of the Newton step has also been tackled via estimation of the curvature from the change in gradients. These algorithms are generally known as quasi-Newton methods, stemming from the seminal BFGS algorithm [Bro70, Fle70, Gol70, Sha70]. The book of [NW06] is an excellent reference for the algorithm and its limited-memory variant (L-BFGS). More recent work in this area has focused on stochastic quasi-Newton methods, which were proposed and analyzed in various settings by [SYG07, MR14, BHNS16]. These works typically achieve sub-linear convergence to the optimum.
A significant advancement in this line of work was provided by [MNJ16] who propose an algorithm based on L-BFGS by incorporating ideas from variance reduction to achieve linear convergence to the optimum in the strongly convex setting. Although the algorithm achieves linear convergence, the running time of the algorithm depends poorly on the condition number (as acknowledged by the authors). Indeed, in applications that interest us, the condition number is not necessarily a constant as is typically assumed to be the case for the theoretical results in [MNJ16]. Our key observation of linear time Hessian-vector product computations for machine learning applications provides evidence that in such instances, obtaining true Hessian information is efficient enough to alleviate the need for quasi-Newton information via gradients.

### 1.3 Discussion and Subsequent Work

In this section we provide a brief survey of certain technical aspects of our bounds which have since been improved by subsequent work. An immediate improvement in the dependence on the condition number (in fact suggested in the original manuscript) was achieved by [BBN16] via conjugate gradient on a sub-sampled Hessian, which reduces the dependence to its square root. A similar improvement can also be achieved in theory through the extensions of LiSSA proposed in the paper. As we show in Section 7, the worse dependence on the condition number has an effect on the running time when $\kappa$ is quite large (equivalently, when the regularization parameter $\lambda$ is small). Accelerated first-order methods, such as APCG [LLX14], outperform LiSSA in this regime. To the best of our knowledge, second-order stochastic methods have so far not exhibited an experimental improvement in that regime. We believe a more practical version of LiSSA-Sample could lead to improvements here, and we leave this as future work. To the best of our knowledge, the factor of $S_1$ that appears to reduce the variance of our estimator has not yet been improved, despite it being $O(1)$ in our experiments. This is an interesting question to which partial answers have been provided in the analysis of [YLZ17]. Significant progress has been made in the space of inexact Newton methods based on matrix sketching techniques. We refer the reader to the works of [PW15, XYRK16, Coh16, LACBL16, YLZ17] and the references therein. We would also like to comment on the presence of a warm start parameter in our proofs of Theorems 3.3 and 5.1. In our experiments the warm start required was quite small (often a few steps of gradient descent were sufficient) for LiSSA to converge. The warm start does not affect the asymptotic results proven in Theorems 3.3 and 5.1, because reaching such a warm start takes time independent of $\varepsilon$. However, improving this warm start, especially in the context of Theorem 5.1, is left as interesting future work.
On the complexity side, [AS16] proved lower bounds on the best running times achievable by second-order methods. In particular, they show that to obtain the faster rates achieved by LiSSA-Sample, it is necessary to use a non-uniform sampling based method, as LiSSA-Sample does. We remark that in theory, up to logarithmic factors, the running time of LiSSA-Sample is still the best achieved so far in the setting $m \gg d$. Some of the techniques and motivations from this work were also generalized by the authors to provide faster rates for a large family of non-convex optimization problems [AAZB17].

### 1.4 Organization of the Paper

The paper is organized as follows: we first present the necessary definitions, notation, and conventions adopted throughout the paper in Section 2. We then describe our estimator for LiSSA, and state and prove the convergence guarantee for LiSSA, in Section 3. After presenting a generic procedure to couple first-order methods with Newton's method in Section 4, we present LiSSA-Sample and the associated fast quadratic solver in Section 5. We then present our results regarding self-concordant functions in Section 6. Finally, we present an experimental evaluation of LiSSA in Section 7.

## 2 Preliminaries

We adopt the convention of denoting scalars in lowercase, vectors in lowercase boldface, and matrices in uppercase. We use $\|\cdot\|$ without a subscript to denote the $\ell_2$ norm for vectors and the spectral norm for matrices. Throughout the paper we denote $\nabla^{-2}f(x) \triangleq \left(\nabla^2 f(x)\right)^{-1}$. A convex function $f$ is defined to be $\alpha$-strongly convex and $\beta$-smooth if, for all $x, y$,

$$\nabla f(x)^\top(y - x) + \frac{\beta}{2}\|y - x\|^2 \geq f(y) - f(x) \geq \nabla f(x)^\top(y - x) + \frac{\alpha}{2}\|y - x\|^2.$$

The following is a well-known fact about the inverse of a matrix $A$ such that $A \succ 0$ and $\|A\| \leq 1$:

$$A^{-1} = \sum_{i=0}^{\infty}\left(I - A\right)^i. \qquad (2)$$
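A quick numerical check of Equation (2) can be made by truncating the series; the matrix below is synthetic, with eigenvalues placed in $(0, 1]$ to satisfy the fact's hypotheses, and the truncation depth is chosen for illustration.

```python
import numpy as np

# Verify Equation (2): for symmetric A with 0 < A <= I, the truncated
# series sum_{i=0}^{j} (I - A)^i converges to inv(A).
rng = np.random.default_rng(2)
d = 4
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
A = Q @ np.diag(np.linspace(0.2, 0.9, d)) @ Q.T   # eigenvalues in (0, 1]

approx, power = np.zeros((d, d)), np.eye(d)
for _ in range(200):                               # truncation depth j = 199
    approx += power                                # add (I - A)^i
    power = power @ (np.eye(d) - A)

assert np.allclose(approx, np.linalg.inv(A), atol=1e-8)
```

The error of the $j$-term truncation decays as $\|I - A\|^{j+1}$, which is the geometric rate exploited throughout Section 3.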

### 2.1 Definition of Condition Numbers

We now define several measures of the condition number of a function $f$. The differences between these notions are subtle, and we use them to precisely characterize the running times of our algorithms (on an initial reading, the reader may skip the subtleties between these notions with the knowledge that they are all smaller than the pessimistic bound one can achieve by considering a value proportional to $1/\lambda$, where $\lambda$ is the coefficient of the regularizer). For an $\alpha$-strongly convex and $\beta$-smooth function $f$, the condition number of the function is defined as $\kappa(f) \triangleq \beta/\alpha$, or simply $\kappa$ when the function is clear from the context. Note that by definition this corresponds to the following notion:

$$\kappa \triangleq \frac{\max_x \lambda_{\max}\left(\nabla^2 f(x)\right)}{\min_x \lambda_{\min}\left(\nabla^2 f(x)\right)}.$$

We define a slightly relaxed notion of condition number in which the maximization over $x$ is applied to the ratio as a whole. We refer to this notion as a local condition number, as compared to the global condition number defined above:

$$\kappa_l \triangleq \max_x \frac{\lambda_{\max}\left(\nabla^2 f(x)\right)}{\lambda_{\min}\left(\nabla^2 f(x)\right)}.$$

It follows that $\kappa_l \leq \kappa$. The above notions are defined for any general function $f$, but in the case of functions of the form $f = \frac{1}{m}\sum_{k=1}^m f_k$, a further distinction is made with respect to the component functions. We refer to such definitions of the condition number by $\hat{\kappa}$. In such cases one typically assumes that the maximum eigenvalue of each component Hessian is bounded, i.e., $\lambda_{\max}\left(\nabla^2 f_k(x)\right) \leq \beta_{\max}(x)$ for all $k$. The running times of algorithms like SVRG depend on the following notion of condition number:

$$\hat{\kappa} = \frac{\max_x \beta_{\max}(x)}{\min_x \lambda_{\min}\left(\nabla^2 f(x)\right)}.$$

Similarly, we define a notion of local condition number for $\hat{\kappa}$, namely

$$\hat{\kappa}_l \triangleq \max_x \frac{\beta_{\max}(x)}{\lambda_{\min}\left(\nabla^2 f(x)\right)},$$

and it again follows that $\hat{\kappa}_l \leq \hat{\kappa}$. For our (admittedly pessimistic) bounds on the variance we also need a per-component strong convexity bound $\alpha_{\min}(x) \leq \min_k \lambda_{\min}\left(\nabla^2 f_k(x)\right)$. We can now define

$$\hat{\kappa}^{\max}_l \triangleq \max_x \frac{\beta_{\max}(x)}{\alpha_{\min}(x)}.$$

Assumptions: In light of the previous definitions, we make the following assumptions about the given function $f$ to make the analysis easier. We first assume that the regularization term has been divided equally among and included in the component functions $f_k$. We further assume that each $f_k$ has been scaled so that $\left\|\nabla^2 f_k(x)\right\| \leq 1$ (the scaling is without loss of generality, even when looking at additive errors, as it gets picked up in the log term due to the linear convergence). We also assume that $f$ is $\alpha$-strongly convex and $\beta$-smooth, that $\hat{\kappa}_l$ is the associated local condition number, and that the Hessian $\nabla^2 f(x)$ has a Lipschitz constant bounded by $M$. We now collect key concepts and pre-existing results that we use for our analysis in the rest of the paper.

Matrix Concentration: The following theorem is a standard concentration-of-measure result for sums of independent matrices (the theorem in the reference states the inequality for the more nuanced bounded-variance case; we only state the simpler bounded-spectral-norm case, which suffices for our purposes). An excellent reference for this material is by [Tro12].

###### Theorem 2.1 (Matrix Bernstein, [Tro12]).

Consider a finite sequence $\{X_k\}$ of independent, random, Hermitian matrices with dimension $d$. Assume that

$$\mathbf{E}[X_k] = 0 \quad\text{and}\quad \|X_k\| \leq R.$$

Define $Y \triangleq \sum_k X_k$. Then we have, for all $t \geq 0$,

$$\mathbf{Pr}\left(\|Y\| \geq t\right) \leq d\,\exp\left(-\frac{t^2}{4R^2}\right).$$

Accelerated SVRG: The following theorem was proved by [LMH15].

###### Theorem 2.2.

Given a function $f = \frac{1}{m}\sum_{k=1}^m f_k$ with condition number $\kappa$, the accelerated version of SVRG via Catalyst [LMH15] finds an $\varepsilon$-approximate minimum with probability $1 - \delta$ in time

$$\tilde{O}\left(\left(md + \min\left(\sqrt{\kappa m}, \kappa\right)d\right)\log\left(\frac{1}{\varepsilon}\right)\right).$$

Sherman-Morrison Formula: The following is a well-known expression for the inverse of a rank-one perturbation of a matrix:

$$\left(A + vv^\top\right)^{-1} = A^{-1} - \frac{A^{-1}vv^\top A^{-1}}{1 + v^\top A^{-1}v}.$$
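The formula is easy to verify numerically; the matrix and vector below are synthetic, with a diagonal shift added only to keep $A$ comfortably invertible.

```python
import numpy as np

# Numerical check of the Sherman-Morrison formula for a rank-one update.
rng = np.random.default_rng(3)
d = 6
A = rng.standard_normal((d, d)) + d * np.eye(d)   # comfortably invertible
v = rng.standard_normal(d)

Ainv = np.linalg.inv(A)
lhs = np.linalg.inv(A + np.outer(v, v))
rhs = Ainv - (Ainv @ np.outer(v, v) @ Ainv) / (1.0 + v @ Ainv @ v)
assert np.allclose(lhs, rhs)
```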

## 3 LiSSA: Linear (time) Stochastic Second-Order Algorithm

In this section, we provide an overview of LiSSA (Algorithm 1) along with its main convergence results.

### 3.1 Estimators for the Hessian Inverse

Based on a recursive reformulation of the Taylor expansion (Equation (2)), we may describe an unbiased estimator of the Hessian inverse. For a matrix $A$, define $A^{-1}_j$ as the sum of the first $j+1$ terms of the Taylor expansion, i.e.,

$$A^{-1}_j \triangleq \sum_{i=0}^{j}\left(I - A\right)^i, \quad\text{or equivalently}\quad A^{-1}_j \triangleq I + \left(I - A\right)A^{-1}_{j-1}.$$

Note that $A^{-1}_j \to A^{-1}$ as $j \to \infty$. Using the above recursive formulation, we now describe an unbiased estimator of $\nabla^{-2}f$ by first deriving an unbiased estimator $\widetilde{\nabla^{-2}}f_j$ for $\nabla^{-2}_jf$.

###### Definition 3.1 (Estimator).

Given $j$ independent and unbiased samples $X_1, \dots, X_j$ of the Hessian $\nabla^2 f$, define the estimators $\widetilde{\nabla^{-2}}f_0, \dots, \widetilde{\nabla^{-2}}f_j$ recursively as follows:

$$\widetilde{\nabla^{-2}}f_0 = I \quad\text{and}\quad \widetilde{\nabla^{-2}}f_t = I + \left(I - X_t\right)\widetilde{\nabla^{-2}}f_{t-1} \quad\text{for } t = 1, \dots, j.$$

It can be readily seen that $\mathbf{E}\left[\widetilde{\nabla^{-2}}f_j\right] = \nabla^{-2}_jf$, and so $\mathbf{E}\left[\widetilde{\nabla^{-2}}f_j\right] \to \nabla^{-2}f$ as $j \to \infty$, providing us with an unbiased estimator in the limit.

###### Remark 3.2.

One can also define and analyze a simpler (non-recursive) estimator based on directly sampling terms from the series in Equation (2). Theoretically, one can get similar guarantees for the estimator; however, empirically our proposed estimator exhibited better performance.
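The recursion of Definition 3.1 can be simulated directly; the component matrices below are synthetic stand-ins for sampled Hessians, and the depth and trial counts are illustrative.

```python
import numpy as np

# Sketch of the estimator in Definition 3.1: with i.i.d. unbiased Hessian
# samples X_t, the recursion tilde_H_t = I + (I - X_t) tilde_H_{t-1} has
# expectation equal to the truncated Taylor series, so averaging many
# independent copies approximates the Hessian inverse.
rng = np.random.default_rng(4)
d, depth, trials = 3, 100, 500
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
components = [Q @ np.diag(rng.uniform(0.2, 0.9, d)) @ Q.T for _ in range(10)]
H = sum(components) / len(components)          # "true Hessian", 0 < H <= I

avg = np.zeros((d, d))
for _ in range(trials):
    est = np.eye(d)
    for _ in range(depth):                     # one fresh sample per level
        X = components[rng.integers(len(components))]
        est = np.eye(d) + (np.eye(d) - X) @ est
    avg += est / trials

Hinv = np.linalg.inv(H)
rel_err = np.linalg.norm(avg - Hinv) / np.linalg.norm(Hinv)
assert rel_err < 0.25                          # concentrates as trials grows
```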

### 3.2 Algorithm

Our algorithm runs in two phases: in the first phase it runs any efficient first-order method FO for a prescribed number of steps to shrink the function value to the regime where we can then show linear convergence for our algorithm. It then takes approximate Newton steps based on the estimator from Definition 3.1 in place of the true Hessian inverse. We use two parameters, $S_1$ and $S_2$, to define the Newton step. $S_1$ represents the number of unbiased estimators of the Hessian inverse we average to get better concentration for our estimator, while $S_2$ represents the depth to which we capture the Taylor expansion. In the algorithm, we compute the average Newton step directly, which can be computed in linear time as observed earlier, instead of estimating the Hessian inverse.
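The two-phase structure can be sketched end to end. Everything below is an illustrative stand-in, not the paper's Algorithm 1 verbatim: the objective is $\ell_2$-regularized logistic regression, a zero initialization stands in for the first-order warm start, and `S1`, `S2` are picked by hand.

```python
import numpy as np

# Hedged sketch of a LiSSA-style update: S1 = number of averaged estimators,
# S2 = Taylor-series depth; each inner step uses the rank-one GLM structure,
# so it costs O(d).
rng = np.random.default_rng(5)
m, d, lam = 500, 10, 0.05
A = rng.standard_normal((m, d)) / np.sqrt(d)   # rows scaled so ||H_k|| <= 1
y = rng.choice([-1.0, 1.0], size=m)

def grad(x):
    s = 1.0 / (1.0 + np.exp(y * (A @ x)))      # sigma(-y_k a_k^T x)
    return -(A * (y * s)[:, None]).mean(axis=0) + lam * x

def hvp(x, k, w):
    """Sampled Hessian-vector product: one GLM component plus regularizer."""
    s = 1.0 / (1.0 + np.exp(-(A[k] @ x)))
    c = s * (1.0 - s)                          # l''(a_k^T x), logistic loss
    return c * (A[k] @ w) * A[k] + lam * w

def lissa_step(x, S1=10, S2=150):
    g = grad(x)
    newton_dirs = []
    for _ in range(S1):
        u = g.copy()                           # u estimates H(x)^{-1} g
        for _ in range(S2):                    # u <- g + (I - X_t) u
            k = rng.integers(m)
            u = g + u - hvp(x, k, u)
        newton_dirs.append(u)
    return x - np.mean(newton_dirs, axis=0)    # approximate Newton step

x = np.zeros(d)                                # stands in for the warm start
for _ in range(40):
    x = lissa_step(x)
assert np.linalg.norm(grad(x)) < 1e-4
```

Since the estimated direction is linear in the gradient, the stochastic error contracts multiplicatively with it, which is why no step-size schedule appears in the sketch.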

### 3.3 Main Theorem

In this section we present our main theorem, which analyzes the convergence properties of LiSSA. Define $T_1$ to be the total time required by a first-order algorithm to achieve the accuracy $\frac{1}{4\hat{\kappa}_lM}$ required by Equation (3).

###### Theorem 3.3.

Consider Algorithm 1, and set the parameters as follows: $S_1 = O\left(\left(\hat{\kappa}^{\max}_l\right)^2\log\frac{d}{\delta}\right)$, $S_2 \geq \hat{\kappa}_l\ln\left(16\hat{\kappa}_l\right)$. The following guarantee holds for every $t \geq 1$ with probability $1 - \delta$:

$$\|x_{t+1} - x^*\| \leq \frac{\|x_t - x^*\|}{2}.$$

Moreover, we have that each step of the algorithm takes at most $O\left(md + S_1S_2d^2\right)$ time. Additionally, if $f$ is GLM, then each step of the algorithm can be run in time $O\left(md + S_1S_2d\right)$.

As an immediate corollary, we obtain the following:

###### Corollary 3.4.

For a GLM function $f$, Algorithm 1 returns a point $x_t$ such that, with probability at least $1 - \delta$,

$$f(x_t) \leq \min_{x} f(x) + \varepsilon$$

in total time $T_1 + \tilde{O}\left(\left(m + S_1\hat{\kappa}_l\right)d\,\log\left(\frac{1}{\varepsilon}\right)\right)$ for $t = O\left(\log\frac{1}{\varepsilon}\right)$.

In the above theorems, $\tilde{O}$ hides logarithmic factors in $d$, $\hat{\kappa}$, and $\frac{1}{\delta}$. We note that our bound on the variance is possibly pessimistic and can likely be improved to a more average-case quantity. However, since in our experiments setting the parameter $S_1$ to a small constant suffices, we have not tried to optimize it further. We now prove our main theorem about the convergence of LiSSA (Theorem 3.3).

###### Proof of Theorem 3.3.

Note that since we use a first-order algorithm to get a solution of accuracy at least $\frac{1}{4\hat{\kappa}_lM}$, we have that

$$\|x_1 - x^*\| \leq \frac{1}{4\hat{\kappa}_lM}. \qquad (3)$$

As can be seen from Definition 3.1, a single step of our algorithm is equivalent to $x_{t+1} = x_t - \widetilde{\nabla^{-2}}f(x_t)\nabla f(x_t)$, where $\widetilde{\nabla^{-2}}f(x_t)$ is the average of $S_1$ independent estimators $\widetilde{\nabla^{-2}}f_{S_2}(x_t)$. We now make use of the following lemma.

###### Lemma 3.5.

Let $x_{t+1} = x_t - \widetilde{\nabla^{-2}}f(x_t)\nabla f(x_t)$, as per a single iteration of Algorithm 1, and suppose $S_1, S_2$ are as defined in Algorithm 1. Then if we choose $S_2 \geq \hat{\kappa}_l\ln\left(16\hat{\kappa}_l\right)$, we have the following guarantee on the convergence rate for every step, with probability $1 - \delta$:

$$\|x_{t+1} - x^*\| \leq \gamma\|x_t - x^*\| + M\left\|\nabla^{-2}f(x_t)\right\|\|x_t - x^*\|^2$$

where $\gamma = 16\hat{\kappa}^{\max}_l\sqrt{\frac{\ln(d/\delta)}{S_1}} + \frac{1}{16}$.

Substituting the values of $S_1$ and $S_2$, combining Equation (3) and Lemma 3.5, and noting that $\left\|\nabla^{-2}f(x_t)\right\| \leq \hat{\kappa}^{\max}_l$, we have that at the start of the Newton phase the following inequality holds:

$$\|x_{t+1} - x^*\| \leq \frac{\|x_t - x^*\|}{4} + M\hat{\kappa}^{\max}_l\|x_t - x^*\|^2 \leq \frac{\|x_t - x^*\|}{2}.$$

It can be shown via induction that the above property holds for all $t$, which concludes the proof. ∎

We now provide a proof of Lemma 3.5.

###### Proof of Lemma 3.5.

Define $\chi(x_t) \triangleq \int_0^1 \nabla^2 f\left(x^* + \tau(x_t - x^*)\right)d\tau$. Note that $\nabla f(x_t) = \chi(x_t)(x_t - x^*)$, since $\nabla f(x^*) = 0$. Following an analysis similar to that of [Nes13], we have that

$$\begin{aligned}
\|x_{t+1} - x^*\| &= \left\|x_t - x^* - \widetilde{\nabla^{-2}}f(x_t)\nabla f(x_t)\right\| \\
&= \left\|x_t - x^* - \widetilde{\nabla^{-2}}f(x_t)\chi(x_t)(x_t - x^*)\right\| \\
&\leq \left\|I - \widetilde{\nabla^{-2}}f(x_t)\chi(x_t)\right\|\|x_t - x^*\|.
\end{aligned}$$

Following from the previous equations, we have that

$$\frac{\|x_{t+1} - x^*\|}{\|x_t - x^*\|} \leq \left\|I - \widetilde{\nabla^{-2}}f(x_t)\chi(x_t)\right\| = \bigg\|\underbrace{I - \nabla^{-2}f(x_t)\chi(x_t)}_{a} - \underbrace{\left(\widetilde{\nabla^{-2}}f(x_t) - \nabla^{-2}f(x_t)\right)\chi(x_t)}_{b}\bigg\|.$$

We now analyze the two terms $a$ and $b$ separately:

$$\begin{aligned}
\|a\| &= \left\|I - \nabla^{-2}f(x_t)\chi(x_t)\right\| \\
&\leq \left\|\nabla^{-2}f(x_t)\int_0^1\left(\nabla^2 f(x_t) - \nabla^2 f\left(x^* + \tau(x_t - x^*)\right)\right)d\tau\right\| \\
&\leq M\left\|\nabla^{-2}f(x_t)\right\|\|x_t - x^*\|.
\end{aligned}$$

The second inequality follows from the Lipschitz bound on the Hessian. The second term can be bounded as follows:

$$\|b\| \leq \left\|\widetilde{\nabla^{-2}}f(x_t) - \nabla^{-2}f(x_t)\right\|\left\|\chi(x_t)\right\| \leq \gamma.$$

The previous claim follows from Lemma 3.6, which shows a concentration bound on the sampled estimator, and by noting that, due to our assumption on the function, we have $\left\|\nabla^2 f(x)\right\| \leq 1$ for all $x$, and hence $\|\chi(x_t)\| \leq 1$. Putting the above two bounds together and using the triangle inequality, we have that

$$\frac{\|x_{t+1} - x^*\|}{\|x_t - x^*\|} \leq M\left\|\nabla^{-2}f(x_t)\right\|\|x_t - x^*\| + \gamma$$

which concludes the proof. ∎

###### Lemma 3.6.

Let be the average of independent samples of , as defined in Definition 3.1 and used in the per-step update of Algorithm 1, and let be the true Hessian. If we set , then we have that

 \Pr\left(\left\|\tilde\nabla^{-2}f(x_t) - \nabla^{-2}f(x_t)\right\| > 16\,\hat\kappa_l^{\max}\sqrt{\frac{\ln(d/\delta)}{S_1}} + \frac{1}{16}\right) \le \delta.
###### Proof of Lemma 3.6.

First note the following statement which is a straightforward implication of our construction of the estimator:

 \mathbf{E}\left[\tilde\nabla^{-2}f(x_t)\right] = \mathbf{E}\left[\tilde\nabla^{-2}f(x_t)_{S_2}\right] = \sum_{i=0}^{S_2}\left(I - \nabla^2 f(x_t)\right)^i.

We also know from Equation (2) that for any matrix $X$ such that $X \succ 0$ and $\|X\| \le 1$,

 X^{-1} = \sum_{i=0}^{\infty}(I - X)^i.

Since we have scaled the function such that $\|\nabla^2 f(x_t)\| \le 1$, it follows that

 \nabla^{-2}f(x_t) = \mathbf{E}\left[\tilde\nabla^{-2}f(x_t)_{S_2}\right] + \sum_{i=S_2+1}^{\infty}\left(I - \nabla^2 f(x_t)\right)^i. \qquad (4)

Also note that since $\nabla^2 f(x_t) \succeq \frac{1}{\hat\kappa_l}I$, it follows that $\|I - \nabla^2 f(x_t)\| \le 1 - \frac{1}{\hat\kappa_l}$. Observing the second term in the above equation,

 \left\|\sum_{i=S_2+1}^{\infty}\left(I-\nabla^2 f(x_t)\right)^i\right\| \le \left\|I-\nabla^2 f(x_t)\right\|^{S_2}\left(\sum_{i=0}^{\infty}\left\|I-\nabla^2 f(x_t)\right\|^i\right)
 \le \left(1-\frac{1}{\hat\kappa_l}\right)^{S_2}\left(\sum_{i=0}^{\infty}\left(1-\frac{1}{\hat\kappa_l}\right)^i\right)
 \le \left(1-\frac{1}{\hat\kappa_l}\right)^{S_2}\hat\kappa_l
 \le \exp\left(-\frac{S_2}{\hat\kappa_l}\right)\hat\kappa_l.

Since we have chosen $S_2 \ge \hat\kappa_l\ln(16\,\hat\kappa_l)$, we get that the above term is bounded by $1/16$. We will now show, using the matrix Bernstein inequality (Theorem 2.1), that the estimate $\tilde\nabla^{-2}f(x_t)$ is concentrated around its expectation. To apply the inequality we first need to bound the spectral norm of each random variable. To that end we note that each independent sample $\tilde\nabla^{-2}f(x_t)_{S_2}$ has maximum spectral norm bounded by

 \left\|\tilde\nabla^{-2}f_{S_2}\right\| \le \sum_{i=0}^{S_2}\left(1 - \frac{1}{\hat\kappa_l^{\max}}\right)^i \le \hat\kappa_l^{\max}.

We can now apply Theorem 2.1, which gives the following:

 \Pr\left(\left\|\tilde\nabla^{-2}f - \mathbf{E}\left[\tilde\nabla^{-2}f\right]\right\| > \varepsilon\right) \le 2d\exp\left(-\frac{\varepsilon^2 S_1}{64\left(\hat\kappa_l^{\max}\right)^2}\right).

Setting $\varepsilon = 16\,\hat\kappa_l^{\max}\sqrt{\ln(d/\delta)/S_1}$ gives us that the probability above is bounded by $\delta$. Now putting together the bounds and Equation (4), we get the required result. ∎
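The tail bound used above can be checked numerically in the scalar case, where the same geometric-series argument applies: for $1/\kappa \le h \le 1$, the truncated Neumann series $\sum_{i\le S_2}(1-h)^i$ approximates $1/h$ with error at most $(1-1/\kappa)^{S_2}\kappa$. This framing (and the constant $16$ in the choice of $S_2$) follows the derivation above; the paper works with matrices, but the scalar case carries the identical bound.

```python
# Scalar check of the truncated-Neumann tail bound: choosing
# S2 >= kappa * ln(16 * kappa) drives the bound (1 - 1/kappa)^S2 * kappa
# below 1/16, as used in Lemma 3.6.
import math

kappa = 50.0
S2 = math.ceil(kappa * math.log(16 * kappa))

bound = (1 - 1 / kappa) ** S2 * kappa
assert bound <= 1 / 16

for h in [1 / kappa, 0.1, 0.5, 1.0]:
    truncated = sum((1 - h) ** i for i in range(S2 + 1))
    tail = abs(1 / h - truncated)
    assert tail <= bound + 1e-12
```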

### 3.4 Leveraging Sparsity

A key property of real-world datasets is that although the input is a high-dimensional vector, the number of non-zero entries is typically very low. The following theorem shows that LiSSA can be implemented in a way that leverages the underlying sparsity of the data. Our key observation is that for GLM functions, the rank-one Hessian-vector product can be performed in time $O(s)$, where $s$ is the number of non-zero entries of the input example.
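The observation can be sketched directly in code: for a GLM loss $f_k(w) = \phi(x_k^T w)$, the Hessian is $\phi''(x_k^T w)\,x_k x_k^T$, so a Hessian-vector product touches only the nonzero coordinates of $x_k$. The names below (`hvp`, `phi_dd`) and the toy data are ours.

```python
# Rank-one GLM Hessian-vector product in O(s) time, with the sparse example
# x_k stored as a {index: value} dict.

def hvp(xk_sparse, phi_dd, v):
    """Return phi'' * x_k (x_k^T v) as a sparse dict, touching only s entries."""
    inner = sum(val * v[i] for i, val in xk_sparse.items())   # x_k^T v: O(s)
    return {i: phi_dd * inner * val for i, val in xk_sparse.items()}

xk = {2: 0.5, 7: -1.0}            # s = 2 nonzeros in a d = 10 dimensional input
v = [float(i) for i in range(10)]
out = hvp(xk, 0.25, v)

# Dense check: phi'' * (x_k^T v) * x_k restricted to the support of x_k.
inner = 0.5 * v[2] + (-1.0) * v[7]
assert out == {2: 0.25 * inner * 0.5, 7: 0.25 * inner * (-1.0)}
```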

###### Theorem 3.7.

For GLM functions, Algorithm 1 returns a point $x_t$ such that, with probability at least $1-\delta$,

 f(x_t) \le \min_{x^*} f(x^*) + \varepsilon

in total time linear in the sparsity of the input, as quantified by Theorem 3.8.

We will prove the following theorem, from which Theorem 3.7 will immediately follow.

###### Theorem 3.8.

Consider Algorithm 1, let $f$ be of the form described above, and let each input example $x_k$ be such that the number of non-zero entries in $x_k$ is bounded by $s$. Then each step of the algorithm can be implemented in time $O(s)$.

###### Proof of Theorem 3.8.

The proof is by induction. Write the inner iterate as $X[i,j] = c_j\nabla f(x) + d_j v_j$, where $c_j, d_j$ are scalars, $v_j$ is a vector, and $\lambda$ is the regularization parameter, and consider the update rules for $c_{j+1}$, $d_{j+1}$, and $v_{j+1}$ implied by the derivation below. For the base case, note that $X[i,0] = \nabla f(x)$, as is the case in Algorithm 1, so $c_0 = 1$ and $d_0 = 0$. Furthermore, suppose $X[i,j] = c_j\nabla f(x) + d_j v_j$. Then we see that

 X[i,j+1] = \nabla f(x) + \left(I - \lambda I - \tilde\nabla^2 f_{[i,j+1]}(x)\right)X[i,j]
 = \nabla f(x) + \left((1-\lambda)I - \tilde\nabla^2 f_{[i,j+1]}(x)\right)\left(c_j\nabla f(x) + d_j v_j\right)
 = \left(1 + (1-\lambda)c_j\right)\nabla f(x) + (1-\lambda)d_j v_j - \tilde\nabla^2 f_{[i,j+1]}(x)\left(c_j\nabla f(x) + d_j v_j\right)
 = c_{j+1}\nabla f(x) + d_{j+1}v_{j+1}.

Note that updating $c_{j+1}$ and $d_{j+1}$ takes constant time, and $x_k^T\nabla f(x)$ and $x_k^T v_j$ can each be calculated in $O(s)$ time. It can also be seen that each product $\tilde\nabla^2 f_{[i,j+1]}(x)\,u$ gives an $s$-sparse vector, so computing it takes time $O(s)$. Since $v_{j+1}$ can then be obtained from $v_j$ in $O(s)$ time, and since $v_0$ is $0$-sparse, which implies the number of non-zero entries of $v_j$ is at most $js$, it follows that the total time to calculate $X[i,j+1]$ is $O(s)$. ∎
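The bookkeeping above can be sketched as follows: the iterate is stored implicitly as $c_j\nabla f(x) + d_j v_j$, only the $s$ nonzero coordinates of the sampled example are touched per step, and the result is checked against the dense recursion. All names, the rank-one coefficient `beta`, and the toy data are ours; the rank-one Hessian sample is $\beta\,x_k x_k^T$ plus the $\lambda I$ regularizer, as for GLMs.

```python
# Implicit O(s)-per-step implementation of X <- g + ((1-lam) I - beta x_k x_k^T) X,
# storing X as c * g + dscale * v, versus a dense reference computation.
import random

d_dim, lam, beta = 10, 0.05, 0.5
random.seed(0)
g = [random.uniform(-1, 1) for _ in range(d_dim)]

# Sparse examples: each x_k has s = 2 nonzero entries, stored as {index: value}.
examples = [{0: 0.7, 3: -0.2}, {1: 0.4, 7: 0.9}, {3: 0.5, 8: -0.6}]

c, dscale = 1.0, 1.0          # implicit representation: X = c*g + dscale*v
v = [0.0] * d_dim
X = g[:]                      # dense reference iterate X_0 = g

for step in range(30):
    xk = examples[step % len(examples)]
    # O(s): tau = x_k^T X with X = c*g + dscale*v.
    tau = sum(val * (c * g[i] + dscale * v[i]) for i, val in xk.items())
    c_next = 1 + (1 - lam) * c
    d_next = (1 - lam) * dscale
    for i, val in xk.items():            # O(s) sparse update of v
        v[i] -= beta * tau * val / d_next
    c, dscale = c_next, d_next

    # Dense reference: X <- g + ((1-lam) I - beta x_k x_k^T) X.
    t = sum(val * X[i] for i, val in xk.items())
    X = [g[i] + (1 - lam) * X[i] for i in range(d_dim)]
    for i, val in xk.items():
        X[i] -= beta * t * val

imp = [c * g[i] + dscale * v[i] for i in range(d_dim)]
assert max(abs(imp[i] - X[i]) for i in range(d_dim)) < 1e-9
```

Folding the $(1-\lambda)$ scaling into the scalar `dscale` is what keeps the per-step cost at $O(s)$: scaling $v$ explicitly would cost $O(d)$.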

## 4 LiSSA: Extensions

In this section we first describe a family of algorithms which generically couple first-order methods as sub-routines with second-order methods. In particular, we formally describe the algorithm LiSSA-Quad (Algorithm 2) and provide its runtime guarantee (Theorem 4.1). The key idea underlying this algorithm is that Newton’s method essentially reduces a convex optimization problem to solving intermediate quadratic sub-problems given by the second-order Taylor approximation at a point, i.e., the sub-problem given by

 Q_t(y) = f(x_{t-1}) + \nabla f(x_{t-1})^T y + \frac{y^T\nabla^2 f(x_{t-1})\,y}{2},

where $y = x - x_{t-1}$. The above ideas provide an alternative implementation of our estimator of the Newton direction $\nabla^{-2}f(x_{t-1})\nabla f(x_{t-1})$ used in LiSSA. Consider running gradient descent on the above quadratic $Q_t$, and let $y_t^i$ be the $i$-th step in this process. By definition we have that

 y_t^{i+1} = y_t^i - \nabla Q_t(y_t^i) = \left(I - \nabla^2 f(x_{t-1})\right)y_t^i - \nabla f(x_{t-1}).

It can be seen that the above expression corresponds exactly to the steps taken in LiSSA (Algorithm 2, line 8), the difference being that we use a sample of the Hessian instead of the true Hessian. Therefore LiSSA can also be interpreted as performing a partial stochastic gradient descent on the quadratic $Q_t$: partial because we have a precise estimate of the gradient of the function and a stochastic estimate of the Hessian. We note that this is essential for the linear convergence guarantees we show for LiSSA. The above interpretation indicates the possibility of using any linearly convergent first-order scheme for approximating the minimizer of the quadratic $Q_t$. In particular, consider any algorithm ALG that, given a convex quadratic function $Q_t$ and an error value $\varepsilon$, produces a point $y$ such that

 \|y - y_t^*\| \le \varepsilon \qquad (5)

with probability at least $1-\delta$, where $y_t^* = \arg\min_y Q_t(y)$. Let the total time taken by the algorithm ALG to produce such a point be $T_{ALG}(\varepsilon, \delta)$. For our applications we require ALG to be linearly convergent, i.e., $T_{ALG}(\varepsilon, \delta)$ is proportional to $\log(1/\varepsilon)$ with probability at least $1-\delta$. Given such an algorithm ALG, LiSSA-Quad, as described in Algorithm 2, generically implements the above idea, modifying LiSSA by replacing the inner loop with the given algorithm ALG. The following is a meta-theorem about the convergence properties of LiSSA-Quad.
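As a sanity check on this reduction, a toy run of exact (unit-step) gradient descent on a 2-D quadratic reproduces the recursion above and converges to the Newton direction $-H^{-1}g$. The matrix $H$ and vector $g$ below are hypothetical, with $H$ scaled so that $\|H\| \le 1$ as the paper assumes.

```python
# Gradient descent on Q(y) = g^T y + y^T H y / 2 with unit step:
# y_{i+1} = y_i - (H y_i + g) = (I - H) y_i - g, whose fixed point is -H^{-1} g.
H = [[0.6, 0.2],
     [0.2, 0.3]]
g = [0.5, -1.0]

y = [0.0, 0.0]
for _ in range(300):
    Hy = [H[0][0] * y[0] + H[0][1] * y[1], H[1][0] * y[0] + H[1][1] * y[1]]
    y = [y[0] - Hy[0] - g[0], y[1] - Hy[1] - g[1]]   # y - grad Q(y)

# Newton direction -H^{-1} g via the 2x2 closed-form inverse.
det = H[0][0] * H[1][1] - H[0][1] * H[1][0]
newton = [-(H[1][1] * g[0] - H[0][1] * g[1]) / det,
          -(H[0][0] * g[1] - H[1][0] * g[0]) / det]
assert max(abs(y[i] - newton[i]) for i in range(2)) < 1e-8
```

The contraction factor is $\|I-H\|$, which is why the scaling $\|H\| \le 1$ (together with strong convexity) gives linear convergence of the inner solver.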

###### Theorem 4.1.

Given the function $f$, which is $\alpha$-strongly convex, let $x^*$ be the minimizer of the function and let $x_t$ be defined as in Algorithm 2. Suppose the algorithm ALG satisfies condition (5) with probability $1-\delta_{ALG}$ under the appropriate setting of its parameters. Set the parameters in the algorithm as prescribed in Algorithm 2, in terms of the final error guarantee $\varepsilon$ one wishes to achieve. Then we have that after $T$ steps, with probability at least $1-\delta$,

 \min_{t\in\{1\ldots T\}}\|x_t - x^*\| \le \varepsilon.

In particular, LiSSA-Quad(ALG) produces a point $x$ such that

 \|x - x^*\| \le \varepsilon

in total time proportional, up to logarithmic factors, to the running time of ALG on the quadratic sub-problems, with probability at least $1-\delta$.

Note that for GLM functions, the gradient of $Q_t$ at any point can be computed in time linear in the input size. In particular, a full gradient of $Q_t$ can be computed in time $O(md)$, and a stochastic gradient (corresponding to a stochastic estimate of the Hessian) in time $O(d)$. Therefore, a natural choice for the algorithm ALG in the above are first-order algorithms which are linearly convergent, for example SVRG, SDCA, and Acc-SDCA. Choosing a first-order algorithm FO gives us a family of algorithms LiSSA-Quad(FO), each with running time comparable to that of the underlying FO, up to logarithmic factors. The following corollary summarizes the typical running time guarantee for LiSSA-Quad(FO) when FO is Acc-SVRG.

###### Corollary 4.2.

Given a GLM function $f$, if ALG is taken to be Acc-SVRG [LMH15], then under a suitable setting of parameters, LiSSA-Quad produces a point $x$ such that

 f(x) - f(x^*) \le \varepsilon

with probability at least $1-\delta$, in total time comparable to that of Acc-SVRG on the quadratic sub-problems, up to logarithmic factors.

Here the running-time bound hides logarithmic factors in the problem parameters, but not in $\varepsilon$. Note that the above running times depend on the condition number, which, as described in Section 2, can potentially provide a better dependence compared to its global counterpart. In practice this difference could lead to a faster running time for LiSSA-Quad as compared to the underlying first-order algorithm FO. We now provide a proof of Theorem 4.1.

###### Proof of Theorem 4.1.

We run the algorithm ALG to achieve accuracy $\varepsilon^2$ on each of the intermediate quadratic functions $Q_t$, and we set $\delta_{ALG} = \delta/T$, which implies via a union bound that for all $t \le T$,

 \|x_{t+1} - x_t^*\| \le \varepsilon^2 \qquad (6)

with probability at least $1-\delta$. Assume that for all $t \le T$, $\|x_t - x^*\| \ge \varepsilon$ (otherwise the theorem is trivially true). Using the analysis of Newton's method as before, we get that for all $t$,

 \|x_{t+1}-x^*\| \le \|x_t^* - x^*\| + \|x_{t+1} - x_t^*\|
 \le \|x_t - \nabla^{-2}f(x_t)\nabla f(x_t) - x^*\| + \|x_{t+1} - x_t^*\|
 \le \frac{M}{4\alpha}\|x_t - x^*\|^2 + \varepsilon^2
 \le \left(\frac{M}{4\alpha} + 1\right)\|x_t - x^*\|^2,

where we have used the analysis in the proof of Theorem 3.3 together with Equation (6). We know that $\|x_1 - x^*\|$ is sufficiently small from the initial run of the first-order algorithm. Applying the above bound inductively and using the value of $T$ prescribed by the theorem statement, we get that $\min_t \|x_t - x^*\| \le \varepsilon$. ∎
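The recursion $e_{t+1} \le C\,e_t^2$ is what makes $T$ doubly logarithmic in $1/\varepsilon$: the number of accurate digits roughly doubles per step. The constants below ($C = 2$, initial error $0.1$) are made up for illustration.

```python
# Quadratic convergence in action: e_{t+1} = C * e_t^2 reaches eps = 1e-12
# from e_0 = 0.1 in just a handful of steps (T grows like ln ln(1/eps)).
C, e, eps = 2.0, 0.1, 1e-12
steps = 0
while e > eps:
    e = C * e * e
    steps += 1
assert steps == 5     # errors: 0.02, 8e-4, 1.28e-6, ~3.3e-12, ~2.1e-23
assert e < eps
```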

## 5 Runtime Improvement through Fast Quadratic Solvers

The previous section establishes a reduction from general convex optimization to quadratic functions. In this section we show how to leverage the fact that, for quadratic functions, the running time of accelerated first-order methods can be improved in the regime where the condition number is large relative to the number of samples. In particular, we show the following theorem.

###### Theorem 5.1.

Given a vector and a matrix where each is of the form for some