Source code for "Sub-sampled Cubic Regularization for Non-convex Optimization", JM Kohler, A Lucchi, https://arxiv.org/abs/1705.05933
We consider the minimization of non-convex functions that typically arise in machine learning. Specifically, we focus our attention on a variant of trust region methods known as cubic regularization. This approach is particularly attractive because it escapes strict saddle points and provides stronger convergence guarantees than first- and second-order methods as well as classical trust region methods. However, it suffers from a high computational complexity that makes it impractical for large-scale learning. Here, we propose a novel method that uses sub-sampling to lower this computational cost. By the use of concentration inequalities we provide a sampling scheme that gives sufficiently accurate gradient and Hessian approximations to retain the strong global and local convergence guarantees of cubically regularized methods. To the best of our knowledge this is the first work that gives global convergence guarantees for a sub-sampled variant of cubic regularization on non-convex functions. Furthermore, we provide experimental results supporting our theory.
In this paper we address the problem of minimizing an objective function of the form

$\min_{w \in \mathbb{R}^d} f(w) := \frac{1}{n} \sum_{i=1}^{n} f_i(w), \qquad (1)$

where $f_i$ is a not necessarily convex, (regularized) loss function over $n$ datapoints. Stochastic Gradient Descent (SGD) is a popular method to optimize this type of objective, especially in the context of large-scale learning when $n$ is very large. Its convergence properties are well understood for convex functions, which arise in many machine learning applications (Nesterov, 2004).
However, non-convex functions are also ubiquitous and have recently drawn a lot of interest due to the growing success of deep neural networks. Yet, non-convex functions are extremely hard to optimize due to the presence of saddle points and local minima which are not global optima (Dauphin et al., 2014; Choromanska et al., 2015). In fact, the work of (Hillar & Lim, 2013) showed that even a degree-four polynomial can be NP-hard to optimize. Instead of aiming for a global minimizer, we will thus seek a local optimum of the objective. In this regard, a lot of attention has focused on a specific type of functions known as strict saddle functions or ridable functions (Ge et al., 2015; Sun et al., 2015). These functions are characterized by the fact that the Hessian of every saddle point has a negative eigenvalue. Geometrically, this means that there is a direction of negative curvature along which decreasing function values can be found. Examples of strict saddle functions include dictionary learning, orthogonal tensor decomposition and generalized phase retrieval (Ge et al., 2015; Sun et al., 2015).
In this work, we focus our attention on trust region methods to optimize Eq. 1. These methods construct and optimize a local model of the objective function within a region whose radius depends on how well the model approximates the real objective. One of the keys to the efficiency of these methods is to pick a model that is comparatively easy to optimize, such as a quadratic function (Conn et al., 2000). Following the trust region paradigm, cubic regularization methods (Nesterov & Polyak, 2006; Cartis et al., 2011a) suggest finding the step $s_k$ that minimizes a cubic model of the form

$m_k(s) = f(w_k) + s^\top g_k + \frac{1}{2} s^\top H_k s + \frac{\sigma_k}{3} \lVert s \rVert^3, \qquad (2)$
where $g_k := \nabla f(w_k)$ and $H_k := \nabla^2 f(w_k)$.[1]

[1] In the work of (Nesterov & Polyak, 2006), $\sigma_k$ is assumed to be the Lipschitz constant of the Hessian, in which case the model defined in Eq. 2 is a global overestimator of the objective, i.e. $f(w_k + s) \le m_k(s)$ for all $s$. We will elaborate on the role of $\sigma_k$ in (Cartis et al., 2011a) later on.
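For concreteness, evaluating this model is cheap once $g_k$ and $H_k$ (or approximations thereof) are available. The sketch below is purely illustrative (the function names and the toy quadratic are ours, not the paper's):

```python
import numpy as np

def cubic_model(f_k, g_k, H_k, sigma_k, s):
    """m_k(s) = f_k + g_k^T s + 0.5 * s^T H_k s + (sigma_k / 3) * ||s||^3."""
    return (f_k + g_k @ s + 0.5 * s @ (H_k @ s)
            + (sigma_k / 3.0) * np.linalg.norm(s) ** 3)

# Toy check on f(w) = 0.5 * w^T A w at w_k = 0: the model value is the
# quadratic term plus the cubic penalty term.
A = np.diag([2.0, 1.0])
s = np.array([1.0, 0.0])
m_val = cubic_model(0.0, np.zeros(2), A, 3.0, s)  # 0.5*2 + (3/3)*1 = 2.0
```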
(Nesterov & Polyak, 2006) were able to show that, if the step is computed by globally minimizing the cubic model and if the Hessian is globally Lipschitz continuous, cubic regularization methods possess the best known worst-case complexity to solve Eq. 1: an overall worst-case iteration count of order $\epsilon^{-3/2}$ for generating $\lVert \nabla f(w_k) \rVert \le \epsilon$, and of order $\epsilon^{-3}$ for achieving approximate nonnegative curvature. However, minimizing Eq. 2 exactly impedes the performance of this method for machine learning applications as it requires access to the full Hessian matrix. More recently, (Cartis et al., 2011a) presented a method (hereafter referred to as ARC) which relaxed this requirement by assuming that one can construct an approximate Hessian $B_k$ that is sufficiently close to $H_k$ in the following way:
Furthermore, they showed that it is sufficient to find an approximate minimizer by applying a Lanczos method to build up evolving Krylov spaces, which can be constructed in a Hessian-free manner, i.e. by accessing the Hessians only indirectly via matrix-vector products. However, there are still two obstacles for the application of ARC in the field of machine learning: (1) the cost of the Lanczos process increases linearly in $n$ and is thus not suitable for large datasets, and (2) there is no theoretical guarantee that quasi-Newton approaches satisfy Eq. 3, and (Cartis et al., 2011a) do not provide any alternative approximation technique.
In this work, we make explicit use of the finite-sum structure of Eq. 1 by applying a sub-sampling technique in order to provide guarantees for machine learning applications. Towards this goal, we make the following contributions:
Since the dominant iteration costs lie in the construction of the Lanczos process and increase linearly in $n$, we lower the computational cost significantly by reducing the number of samples used in each iteration.
Finally, we provide experimental results demonstrating significant speed-ups compared to standard first- and second-order optimization methods for various convex and non-convex objectives.
In large-scale learning, when $n \gg d$, most of the computational cost of traditional deterministic optimization methods is spent in computing the exact gradient information. A common technique to address this issue is to use sub-sampling in order to compute an unbiased estimate of the gradient. The simplest instance is Stochastic Gradient Descent (SGD), whose convergence does not depend on the number of datapoints $n$.
However, the variance in the stochastic gradient estimates slows its convergence down. The work of (Friedlander & Schmidt, 2012) explored a sub-sampling technique for gradient descent in the case of convex functions, showing that it is possible to maintain the same convergence rate as full-gradient descent by carefully increasing the sample size over time. Another way to recover a linear rate of convergence for strongly-convex functions is to use variance-reduced methods (Johnson & Zhang, 2013; Defazio et al., 2014; Roux et al., 2012; Hofmann et al., 2015; Daneshmand et al., 2016). Recently, the convergence of SGD and its variance-reduced counterparts has also been extended to non-convex functions (Ghadimi & Lan, 2013; Reddi et al., 2016a), but the techniques used in these papers require a randomized sampling scheme which is different from what is typically used in practice. Furthermore, the guarantees these methods provide are only in terms of convergence to critical points. However, the work of (Ge et al., 2015; Sun et al., 2015) recently showed that SGD can achieve stronger guarantees in the case of strict saddle functions. Yet, the convergence rate has a polynomial dependency on the dimension $d$ and on the smallest eigenvalue of the Hessian, which can make this method fairly impractical.
For second-order methods, the problem of avoiding saddle points is even worse, as they might be attracted by saddle points or even by local maximizers (Dauphin et al., 2014). Another predominant issue is the computation (and perhaps storage) of the Hessian matrix, which requires $\mathcal{O}(nd^2)$ operations, as well as computing the inverse of the Hessian, which requires $\mathcal{O}(d^3)$ computations. Quasi-Newton methods such as the well-known (L-)BFGS algorithm partially address this issue by requiring $\mathcal{O}(nd + d^2)$ per-iteration cost (Nesterov, 2004) instead of $\mathcal{O}(nd^2 + d^3)$. An increasingly popular alternative is to use sub-sampling techniques to approximate the Hessian matrix, as done for example in (Byrd et al., 2011) and (Erdogdu & Montanari, 2015). The latter method, named NewSamp, approximates the Hessian with a low-rank approximation, which reduces the complexity per iteration to $\mathcal{O}(nd + |S| d^2)$ with $|S|$ being the sample size.[2] Although this is a significant reduction in terms of complexity, NewSamp yields a composite convergence rate: quadratic at first but only linear near the minimizer. Unlike NewSamp, our sampling scheme yields a locally quadratic rate of convergence (as well as faster global convergence). Our analysis also does not require exact gradients and can thus further reduce the complexity per iteration.

[2] Note that this method still requires $\mathcal{O}(nd)$ computation for the gradient as it only sub-samples the Hessian.
Trust region methods are among the most effective algorithmic frameworks to avoid pitfalls such as saddle points in non-convex optimization. Classical versions iteratively construct a local quadratic model and minimize it within a certain radius wherein the model is trusted to be sufficiently similar to the actual objective function. This is equivalent to minimizing the model function with a suitable quadratic penalty term on the stepsize. A natural extension is the cubic regularization method introduced by (Nesterov & Polyak, 2006), which uses a cubic over-estimator of the objective function as a regularization technique for the computation of a step to minimize the objective function. The drawback of their method is that it requires computing the exact minimizer of Eq. 2, thus requiring the exact gradient and Hessian matrix. However, finding a global minimizer of the cubic model may not be essential in practice, and doing so might be prohibitively expensive from a computational point of view. (Cartis et al., 2011a) introduced a method named ARC which relaxed this requirement by letting $s_k$ be an approximation to the minimizer. The model defined by the adaptive cubic regularization method introduced two further changes. First, instead of computing the exact Hessian $H_k$ it allows for a symmetric approximation $B_k$. Second, it introduces a dynamic positive parameter $\sigma_k$ instead of using the global Lipschitz constant of the Hessian.
There have been efforts to further reduce the computational complexity of this problem. For example, (Agarwal et al., 2016) refined the approach of (Nesterov & Polyak, 2006) to return an approximate local minimum in time which is linear in the input representation. Similar improvements have been made by (Carmon & Duchi, 2016) and (Hazan & Koren, 2016). These methods provide alternatives to minimize the cubic model and can thus be seen as complementary to our approach. Finally, the work of (Blanchet et al., 2016) proposed a stochastic variant of a trust region method but their analysis does not specify any accuracy level required for the estimation of the stochastic Hessian. (Cartis & Scheinberg, 2017) also analyzed a probabilistic cubic regularization variant that allows approximate second-order models but they did not provide an explicit derivation of sampling conditions.
We are interested in optimizing Eq. 1 in a large-scale setting when the number of datapoints is very large such that the cost of solving Eq. 2 exactly becomes prohibitive. In this regard we identify a sampling scheme that allows us to retain the convergence results of deterministic trust region and cubic regularization methods, including quadratic local convergence rates and global second-order convergence guarantees as well as worst-case complexity bounds. A detailed theoretical analysis is given in Section 4. Here we shall first state the algorithm itself and elaborate further on the type of local nonlinear models we employ as well as how these can be solved efficiently.
Instead of using deterministic gradient and Hessian information as in Eq. 2, we use unbiased estimates of the gradient and Hessian constructed from two independent sets of sampled datapoints. We then construct a local cubic model that is (approximately) minimized in each iteration:
The model derivative with respect to the step $s$ is defined as:
Our Sub-sampled Cubic Regularization approach (SCR) is presented in Algorithm 1. At iteration step $k$, we sub-sample two sets of datapoints from which we compute a stochastic estimate of the gradient and of the Hessian. We then solve the problem in Eq. 4 approximately using the method described in Section 3.4 and update the regularization parameter $\sigma_k$ depending on how well the model approximates the real objective. In particular, very successful steps indicate that the model is (at least locally) an adequate approximation of the objective, such that the penalty parameter is decreased in order to allow for longer steps. For unsuccessful iterations we proceed in exactly the opposite way. Readers familiar with trust region methods may note that the penalty parameter $\sigma_k$ can be interpreted as inversely proportional to the trust region radius.
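The outer loop just described can be sketched as follows. This is a minimal illustration under assumed interfaces (grad_batch, hess_batch and solve_cubic are hypothetical helpers, and the update constants eta1, eta2, gamma are placeholders, not the paper's values):

```python
import numpy as np

def scr_step(w, f, grad_batch, hess_batch, solve_cubic, sigma,
             eta1=0.1, eta2=0.9, gamma=2.0):
    """One outer iteration of a sub-sampled cubic regularization loop (sketch).

    grad_batch/hess_batch return sub-sampled estimates g_k, B_k; solve_cubic
    (approximately) minimizes the cubic model and reports the model decrease
    m(0) - m(s). All names and constants are illustrative."""
    g = grad_batch(w)
    B = hess_batch(w)
    s, m_dec = solve_cubic(g, B, sigma)
    rho = (f(w) - f(w + s)) / max(m_dec, 1e-16)  # actual vs. predicted decrease
    if rho >= eta2:                               # very successful: relax penalty
        return w + s, max(sigma / gamma, 1e-8)
    elif rho >= eta1:                             # successful: keep penalty
        return w + s, sigma
    else:                                         # unsuccessful: increase penalty
        return w, gamma * sigma
```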
Solving Eq. 4 requires minimizing an unconstrained non-convex problem that may have isolated local minima. As shown in (Cartis et al., 2011a), the global model minimizer $s_k$ is characterized by the following system of equations:

$(B_k + \lambda_k I)\, s_k = -g_k, \qquad \lambda_k = \sigma_k \lVert s_k \rVert. \qquad (9)$
In order to find a solution we can express $s_k = -(B_k + \lambda_k I)^{-1} g_k$, apply this in the second equation of (9), and obtain a univariate, nonlinear equation in $\lambda_k$:

$\lambda_k = \sigma_k \lVert (B_k + \lambda_k I)^{-1} g_k \rVert. \qquad (10)$
Furthermore, we need $\lambda_k \ge \max(0, -\lambda_{\min}(B_k))$, where $\lambda_{\min}(B_k)$ is the leftmost eigenvalue of $B_k$, in order to guarantee the positive semi-definiteness of $B_k + \lambda_k I$.
Thus, computing the global solution of the model boils down to finding the root of Eq. 10 in the above specified range of $\lambda_k$. The problem can be solved by Newton's method, which involves factorizing $B_k + \lambda I$ for various $\lambda$ and is thus prohibitively expensive for large problem dimensions $d$. See Section 6.2 in (Cartis et al., 2011a) for more details. In the following section we instead explore an approach to approximately minimize the model while retaining the convergence guarantees of the exact minimization.
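For small $d$, the exact characterization above can be implemented directly. The sketch below is illustrative only: it uses a dense eigendecomposition and simple bisection on Eq. 10 instead of the factorization-based Newton root-finder discussed in (Cartis et al., 2011a), and it ignores the degenerate "hard case":

```python
import numpy as np

def solve_cubic_exact(g, H, sigma, tol=1e-10):
    """Global minimizer of s^T g + 0.5 s^T H s + (sigma/3)||s||^3 via the
    characterization (H + lam*I) s = -g with lam = sigma*||s||,
    lam >= max(0, -lam_min(H)).  Small-d illustration only."""
    evals, Q = np.linalg.eigh(H)          # ascending eigenvalues
    gt = Q.T @ g                          # gradient in the eigenbasis
    lam_lo = max(0.0, -evals[0]) + 1e-12  # valid range for lam
    # phi(lam) = ||(H + lam I)^{-1} g|| - lam/sigma is decreasing in lam
    phi = lambda lam: np.linalg.norm(gt / (evals + lam)) - lam / sigma
    lam_hi = lam_lo + 1.0
    while phi(lam_hi) > 0:                # bracket the root
        lam_hi *= 2.0
    while lam_hi - lam_lo > tol:          # bisection
        mid = 0.5 * (lam_lo + lam_hi)
        if phi(mid) > 0:
            lam_lo = mid
        else:
            lam_hi = mid
    lam = 0.5 * (lam_lo + lam_hi)
    return Q @ (-gt / (evals + lam))
```

As a sanity check, the returned step satisfies the stationarity condition $(H + \sigma \lVert s \rVert I)s = -g$ up to the bisection tolerance, also for indefinite $H$.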
(Cartis et al., 2011a) showed that it is possible to retain the remarkable properties of the cubic regularization algorithm with an inexact model minimizer. A necessary condition is that the step $s_k$ satisfies the two requirements stated in A1.
Note that the first equation is equal to and the second to .
As shown in (Cartis et al., 2011a), Lemma 3.2, the global minimizer of the cubic model in a Krylov subspace satisfies this assumption independent of the subspace dimension. This comes in handy, as minimizing the model in the Krylov subspace only involves factorizing a tri-diagonal matrix, which can be done at a cost that is linear in the subspace dimension. However, a Lanczos-type method must be used in order to build up an orthogonal basis of this subspace, which typically involves one matrix-vector product ($\mathcal{O}(nd)$ for the finite sum in Eq. 1) per additional subspace dimension (see Chapter 5 in (Conn et al., 2000) for more details).
Thus, in order to keep the per-iteration cost of SCR low and in accordance with ARC, we apply the following termination criterion to the Lanczos process, in the hope of finding a suitable trial step before the Krylov subspace dimensionality reaches $d$.
For each outer iteration $k$, assume that the Lanczos process stops as soon as some Lanczos iteration $i$ satisfies the criterion
However, we argue that, especially for high-dimensional problems, the cost of the Lanczos process may significantly slow down cubically regularized methods, and since this cost increases linearly in $n$, carefully sub-sampled versions are an attractive alternative.
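A minimal sketch of such a Hessian-free Lanczos process (our own illustration, without the reorthogonalization and the model-based termination test a production implementation would need) shows how the tridiagonal matrix $T$ and the orthonormal basis $Q$ are built from matrix-vector products alone:

```python
import numpy as np

def lanczos_basis(hvp, dim, k):
    """Run k Lanczos steps using only Hessian-vector products hvp(v).

    Returns an orthonormal Krylov basis Q and the tridiagonal T = Q^T H Q
    (sketch: no reorthogonalization, random start vector)."""
    Q = np.zeros((dim, k))
    alphas, betas = [], []
    q = np.random.default_rng(0).standard_normal(dim)
    q /= np.linalg.norm(q)
    beta, q_prev = 0.0, np.zeros(dim)
    for j in range(k):
        Q[:, j] = q
        v = hvp(q) - beta * q_prev      # one matrix-vector product per step
        alpha = q @ v
        v -= alpha * q
        alphas.append(alpha)
        beta = np.linalg.norm(v)
        betas.append(beta)
        if beta < 1e-12:                # invariant subspace found: stop early
            break
        q_prev, q = q, v / beta
    m = len(alphas)
    T = np.diag(alphas) + np.diag(betas[:m - 1], 1) + np.diag(betas[:m - 1], -1)
    return Q[:, :m], T
```

Minimizing the cubic model restricted to the span of $Q$ then reduces to a small problem involving only $T$, which is the source of the cost savings discussed above.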
In this section, we provide the convergence analysis of SCR. For the sake of brevity, we assume Lipschitz continuous Hessians right away but note that a superlinear local convergence result as well as the global first-order convergence theorem can both be obtained without the former assumption.
First, we lay out some critical assumptions regarding the gradient and Hessian approximations. Second, we show that one can theoretically satisfy these assumptions with high probability by sub-sampling first- and second-order information. Third, we give a condensed convergence analysis of SCR, which is largely based on (Cartis et al., 2011a) but adapted for the case of stochastic gradients. In particular, we show that the local and global convergence properties of ARC can be retained by sub-sampled versions at the price of slightly worse constants.
The functions $f_i$, $\nabla f_i$ and $\nabla^2 f_i$ are Lipschitz continuous for all $i$, with Lipschitz constants $\kappa_f$, $\kappa_g$ and $\kappa_H$, respectively.
By use of the triangle inequality, it follows that these assumptions hold for the sub-sampled gradients and Hessians as well, independent of the sample size. Furthermore, note that the Hessian and gradient norms are uniformly bounded as a consequence of A3.
In each iteration, the Hessian approximation shall satisfy condition AM.4 from (Cartis et al., 2011a), which we restate here for the sake of completeness.
We explicitly stress the fact that this condition is stronger than the well-known Dennis-Moré condition. While quasi-Newton approximations satisfy the latter, there is no theoretical guarantee that they also satisfy the former (Cartis et al., 2011a). Furthermore, any sub-sampled gradient shall satisfy the following condition.
Based on probabilistic deviation bounds for random vectors and matrices[3], we now present sampling conditions that guarantee sufficient steepness and curvature information in each iteration. In particular, the Bernstein inequality gives exponentially decaying bounds on the probability of a random variable differing by more than $\epsilon$ from its mean, for any fixed number of samples. We use this inequality to upper bound the $\ell_2$-norm distance between the sub-sampled and the full gradient, as well as the spectral-norm distance between the sub-sampled and the full Hessian, by quantities involving the sample size. By applying the resulting bounds in the sufficient agreement assumptions (A4 & A5) and re-arranging for the sample size, we are able to translate the latter into concrete sampling conditions.

[3] These bounds have lately become popular under the name of concentration inequalities. Unlike classic limit theorems, such as the Central Limit Theorem, concentration inequalities are specifically attractive for applications in machine learning because of their non-asymptotic nature.
As detailed in the Appendix, the following Lemma arises from the Vector Bernstein Inequality.
Let the sub-sampled gradient $g$ be defined as in Eq. 4. For $\delta \in (0,1)$, we have with probability $(1-\delta)$ that
It constitutes a non-asymptotic bound on the deviation of the gradient norms that holds with high probability. Note how the accuracy of the gradients increases with the sample size. This bound yields the following condition.
then the sub-sampled gradient $g$ satisfies the sufficient agreement condition A5 with probability $(1-\delta)$.
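The way gradient accuracy improves with the sample size is easy to check numerically. The following toy finite-sum experiment (synthetic data, our own illustration) compares sub-sampled gradients of $f(w) = \frac{1}{n}\sum_i \frac{1}{2}(a_i^\top w)^2$ against the full gradient:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 20000, 5
A = rng.standard_normal((n, d))
w = rng.standard_normal(d)

# grad f_i(w) = a_i (a_i^T w), so the full gradient is (1/n) A^T A w.
full_grad = A.T @ (A @ w) / n

def subsampled_grad(size):
    idx = rng.choice(n, size=size, replace=False)
    return A[idx].T @ (A[idx] @ w) / size

err_small = np.linalg.norm(subsampled_grad(100) - full_grad)
err_big = np.linalg.norm(subsampled_grad(10000) - full_grad)
```

As predicted by the concentration bounds, the deviation shrinks roughly like $1/\sqrt{|S|}$ as the sample size grows (and vanishes entirely at the full sample size).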
In analogy to the gradient case, we use the matrix version of Bernstein’s Inequality to derive the following Lemma.
Let the sub-sampled Hessian $B$ be defined as in Eq. 4. For $\delta \in (0,1)$, we have with probability $(1-\delta)$ that
This, in turn, can be used to derive a Hessian sampling condition that is guaranteed to satisfy the sufficient agreement condition (A4) with high probability.
then the sub-sampled Hessian $B$ satisfies the strong agreement condition A4 with probability $(1-\delta)$.
As expected, the required sample size grows with the problem dimensionality $d$ and with the Lipschitz constants. Finally, as outlined in the Appendix (Lemma 24), the sample size eventually equals the full sample size $n$ as SCR converges, and thus we have
The entire analysis of cubically regularized methods is prohibitively lengthy and we shall thus establish only the crucial properties that ensure global, as well as fast local convergence and improve the worst-case complexity of these methods over standard trust region approaches. Next to the cubic regularization term itself, these properties arise mainly from the penalty parameter updates and step acceptance criteria of the ARC framework, which give rise to a good relation between regularization and stepsize. Further details can be found in (Cartis et al., 2011a).
First, we note that the penalty parameter sequence is guaranteed to stay within some bounded positive range, which is essentially due to the fact that SCR is guaranteed to find a successful step as soon as the penalty parameter exceeds some critical value.
Furthermore, for any successful iteration the objective decrease can be directly linked to the model decrease via the step acceptance criterion in Eq. 8. The latter, in turn, can be shown to be lower bounded by the stepsize which combined gives the following result.
Suppose that $s_k$ satisfies A1. Then, for all successful iterations
Finally, the termination criterion (13) also guarantees step sizes that do not become too small compared to the respective gradient norm which leads to the following Lemma.
We here provide a proof of local convergence for any sampling scheme that satisfies the conditions presented in Theorem 7 and Theorem 9 as well as the additional condition that the sample size does not decrease in unsuccessful iterations. We show that such sampling schemes eventually yield exact gradient and Hessian information. Based upon this observation, we obtain the following local convergence result (as derived in the Appendix).
where the Hessian at $w^*$ is positive definite. Moreover, assume the stopping criterion TC (A2). Then,
That is, $w_k$ converges q-quadratically to $w^*$ as $k \to \infty$, with high probability.
Lemmas 10 and 11 allow us to lower bound the function decrease of a successful step in terms of the full gradient (as we will shortly detail in Eq. 31). Combined, this allows us to give deterministic global convergence guarantees using only stochastic first-order information.
Unsurprisingly, the second-order convergence guarantee relies mainly on the use of second-order information, so that the stochastic gradients neither alter the result nor the proof, which can be found in Section 5 of (Cartis et al., 2011a). We restate it here for the sake of completeness.
Let A3, A4 and A5 hold. Furthermore, let $f$ be bounded below by $f_{\mathrm{low}}$ and let $s_k$ be a global minimizer of $m_k$ over a subspace that is spanned by the columns of the orthogonal matrix $Q_k$. As the Hessian approximations become exact asymptotically (Eq. 20), any subsequence of negative leftmost eigenvalues converges to zero for sufficiently large, successful iterations. Hence
Finally, if $Q_k$ becomes a full orthogonal basis of $\mathbb{R}^d$ as $k \to \infty$, then any limit point of the sequence of successful iterates is second-order critical (provided such a limit point exists).
For the worst-case analysis we shall establish the two disjoint index sets $\mathcal{U}_j$ and $\mathcal{S}_j$, which represent the unsuccessful and successful SCR iterations that have occurred up to some iteration $j$, respectively. As stated in Lemma 10, the penalty parameter is bounded above, and hence SCR may only take a limited number of consecutive unsuccessful steps. As a consequence, the total number of unsuccessful iterations is at most a problem-dependent constant times the number of successful iterations.
For any fixed , let Lemma 10 hold. Then we have that
Regarding the number of successful iterations, we have already established the two key ingredients: (i) a sufficient function decrease in each successful iteration (Lemma 11) and (ii) a step size that does not become too small compared to the respective gradient norm (Lemma 12), which is essential to driving the latter below $\epsilon$ at a fast rate. Combined, they give rise to the guaranteed function decrease for successful iterations
which already contains the power of 3/2 that appears in the complexity bound. Finally, by summing over all successful iterations, one obtains the following worst-case iteration bound to reach first-order criticality, the best known so far.
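Written out, the summation argument is short. With $\kappa > 0$ denoting the constant in the per-iteration decrease above and $f_{\mathrm{low}}$ a lower bound on $f$ (symbols ours, matching the surrounding discussion):

```latex
% While first-order criticality has not yet been reached, every successful
% iteration k in S_j decreases f by at least kappa * epsilon^{3/2}:
f(w_0) - f_{\mathrm{low}}
  \;\ge\; \sum_{k \in \mathcal{S}_j} \bigl( f(w_k) - f(w_{k+1}) \bigr)
  \;\ge\; |\mathcal{S}_j| \, \kappa \, \epsilon^{3/2},
\qquad\text{hence}\qquad
|\mathcal{S}_j| \;\le\; \frac{f(w_0) - f_{\mathrm{low}}}{\kappa\, \epsilon^{3/2}}
  \;=\; \mathcal{O}\bigl(\epsilon^{-3/2}\bigr).
```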
Note that the constants involved in this upper bound increase with the gradient inaccuracy and the Hessian inaccuracy (via the agreement constants), such that more inaccuracy in the sub-sampled quantities may well lead to an increased overall number of iterations.
Finally, we want to point out that similar results can be established regarding a second-order worst-case complexity bound similar to Corollary 5.5 in (Cartis et al., 2011b), which we do not prove here for the sake of brevity.
In this section we present experimental results on real-world datasets where $n \gg d$. They largely confirm the analysis derived in the previous section. Please refer to the Appendix for more detailed results and experiments on higher-dimensional problems.
Figure 1: 1. A9A (n=32561, d=123), 2. COVTYPE (n=581012, d=54), 3. HIGGS (n=11000000, d=28). Top (bottom) row shows log suboptimality of convex (non-convex) regularized logistic regression over time (avg. of 10 runs).
We implement SCR as stated in Algorithm 1 and note the following details. Following (Erdogdu & Montanari, 2015), we require the sampling conditions derived in Section 4 to hold with high probability, which yields the following practically applicable sampling schemes
The positive constants in these schemes can be used to scale the sample size to a reasonable portion of the entire dataset, and can furthermore be used to offset the Lipschitz constants, which are generally expensive to obtain.
However, when choosing the sample sizes for the current iteration $k$, the stepsize $s_k$ is yet to be determined. Based on the Lipschitz continuity of the involved functions, we argue that the previous stepsize is a fair estimator of the current one, and this is confirmed by experimental results. Finally, we would like to point out that the sampling schemes derived in Eq. 34 give our method a clear edge over sampling schemes that do not take any iteration information into account, e.g. linearly or geometrically increased sample sizes.
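Such an iteration-adaptive schedule can be sketched in a few lines. The bound-inversion constants below are placeholders (they absorb the Lipschitz and log(1/delta) terms and are not the paper's Eq. 34 constants); the point is the structure: the sample size grows as the previous step norm shrinks, capped at $n$:

```python
import math

def gradient_sample_size(c_g, prev_step_norm, n):
    """|S_g| from inverting a Bernstein-type deviation bound sqrt(c_g/|S|)
    against the agreement target ~ ||s_{k-1}||^2 (placeholder constant c_g)."""
    target = prev_step_norm ** 2
    return min(n, max(1, math.ceil(c_g / target ** 2)))

def hessian_sample_size(c_B, prev_step_norm, n):
    """|S_B| from the matrix Bernstein bound; agreement target ~ ||s_{k-1}||."""
    target = prev_step_norm
    return min(n, max(1, math.ceil(c_B / target ** 2)))
```

Note how both sizes automatically approach the full sample size $n$ as the steps (and hence the targets) shrink near convergence, which is exactly the behaviour required by the analysis in Section 4.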
We compare SCR to various optimization methods presented in Section 2. This includes SGD (with constant step-size), SAGA, Newton's method, BFGS, L-BFGS and ARC. More details concerning the choice of the hyper-parameters are provided in the appendix. We ran experiments on the datasets a9a, covtype and higgs (see details in the appendix). We experimented with a binary logistic regression model with two different regularizers: a standard $\ell_2$ penalty and a non-convex regularizer (see (Reddi et al., 2016b)).
The results in Figure 1 confirm our intuition that SCR can reduce ARC's computation time without losing its global convergence property. Newton's method is the closest in terms of performance. However, it suffers heavily from an increase in $d$, as can be seen from additional results provided in the appendix. Furthermore, it cannot optimize the non-convex version of covtype due to a singular Hessian. Notably, BFGS terminates early on the non-convex higgs dataset due to a local saddle point. Finally, the high condition number of covtype has a significant effect on the performance of SGD, SAGA and L-BFGS.
In this paper we proposed a sub-sampling technique to estimate the gradient and Hessian in order to construct a cubic model, in analogy to trust region methods. We showed that this method exhibits the same convergence properties as its deterministic counterpart, which are the best known worst-case convergence properties for non-convex functions. Our proposed method is especially interesting in the large-scale regime when $n \gg d$. Numerical experiments on both real and synthetic datasets demonstrate the performance of the proposed algorithm, which we compared to its deterministic variant as well as to more classical optimization methods. As future work we would like to explore the adequacy of our method for training neural networks, which are known to be hard to optimize due to the presence of saddle points.
For the sake of simplicity we shall drop the iteration subscript in the following results of this section.
First, we extend the Vector Bernstein inequality as it can be found in (Gross, 2011) to the average of independent, zero-mean vector-valued random variables.
Let $x_1, \dots, x_n$ be independent vector-valued random variables with common dimension $d$, and assume that each one is centered and uniformly bounded, and that the variance is bounded above:
Then we have for
Proof: Theorem 6 in (Gross, 2011) gives the following Vector Bernstein inequality for independent, zero-mean vector-valued random variables
where $V$ is the sum of the variances of the centered vectors.
First, we shall define , which allows us to rewrite the above equation as
Based on the observation that
always holds, we can formulate a slightly weaker Vector Bernstein version as follows
Since the individual variance is assumed to be bounded above, we can write
This term also constitutes an upper bound on the variance of each summand, because the $x_i$ are independent and thus uncorrelated. However, we must also account for the averaging term. Since the $x_i$ are centered, we have $\mathbb{E}[x_i] = 0$, and thus
where we used the fact that the expectation of a sum equals the sum of the expectations, and that the cross-terms vanish because of the independence assumption. Hence, we can bound the variance term for the random vector sum.
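In symbols, this is the familiar variance-of-the-average computation (for centered, independent $x_i$ with $\mathbb{E}\lVert x_i \rVert^2 \le \sigma^2$):

```latex
\mathbb{E}\Bigl\| \frac{1}{n}\sum_{i=1}^{n} x_i \Bigr\|^2
  = \frac{1}{n^2}\sum_{i=1}^{n} \mathbb{E}\lVert x_i \rVert^2
    + \frac{1}{n^2}\sum_{i \ne j} \underbrace{\mathbb{E}\bigl[x_i^\top x_j\bigr]}_{=\,0}
  \;\le\; \frac{\sigma^2}{n}.
```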
Now, since the tail bound in (39) is decreasing in the variance, we can use this upper bound on the variance of the average in (39), which gives the desired inequality
This result was applied in order to find the probabilistic bound on the deviation of the sub-sampled gradient from the full gradient as stated in Lemma 6, for which we will give the proof next.
Proof of Lemma 6:
To apply vector Bernstein’s inequality (35) we need to center the gradients. Thus we define
and note that from the Lipschitz continuity of (A3), we have
in equation (35), we can require the probability of a deviation larger than or equal to $\epsilon$ to be lower than some $\delta \in (0,1)$
Conversely, the probability of a deviation of less than $\epsilon$ is then higher than or equal to $(1-\delta)$.
Of course, any sampling scheme that guarantees the right-hand side of (16) to be smaller than or equal to the agreement constant times the squared step size directly satisfies the sufficient gradient agreement condition (A5). Consequently, plugging the former into the latter and rearranging for the sample size gives Theorem 7, as we shall prove now.
Proof of Theorem 7:
By use of Lemma 6 we can write
Let $X_1, \dots, X_n$ be independent random Hermitian matrices with common dimension $d \times d$, and assume that each one is centered and uniformly bounded, and that the variance is bounded above:
Introduce the sum
Then we have
Proof: Theorem 12 in (Gross, 2011) gives the following Operator-Bernstein inequality
where $V$ denotes the variance parameter in the bound. As we shall see, this is an upper bound on the variance of the sum, since the $X_i$ are independent and have an expectation of zero ($\mathbb{E}[X_i] = 0$).
where we used the fact that the expectation of a sum equals the sum of the expectations, and that the cross-terms vanish because of the independence assumption.
However, we must again account for the averaging term, and thus
Hence, we can bound the variance for the average random matrix sum. Furthermore, since the tail bound is decreasing in the variance, we have that
This result shows that sums of independent random matrices exhibit normal concentration near their mean, in a range determined by the variance of the sum. We apply it in order to derive the bound on the deviation of the sub-sampled Hessian from the full Hessian as stated in Lemma 8, which we shall prove next.
Proof of Lemma 8: Bernstein's Inequality holds since $f$ is twice continuously differentiable, and thus the Hessian is symmetric by Schwarz's theorem. Since the expectation of the random matrix needs to be zero, we center the individual Hessians,
and note that now from the Lipschitz continuity of (A3):
Hence, for sufficiently small $\epsilon$, we are in the small-deviation regime of Bernstein's bound, with a sub-gaussian tail. Then, we may plug
into (49), to get
Finally, we shall require the probability of a deviation of $\epsilon$ or higher to be lower than some $\delta \in (0,1)$,
which is equivalent to staying within this particular choice of $\epsilon$ with probability $(1-\delta)$, generally perceived as high probability.
happens to be (exactly in the direction of) the largest eigenvector of.
Finally, we shall state a Lemma which illustrates that the stepsize goes to zero as the algorithm converges. The proof can be found in Section 5 of (Cartis et al., 2011a).
Let $f$ be bounded below by some $f_{\mathrm{low}}$. Also, let $s_k$ satisfy A1 and let the penalty parameter be bounded below by some $\sigma_{\min} > 0$. Then we have for all successful iterations that
In the top row of Figure 2 we illustrate the Hessian sample sizes that result when applying SCR with a practical version of Theorem 9 to the datasets used in our experiments.[4] In the bottom row of Figure 2, we benchmark our algorithm against the deterministic version as well as two naive stochastic versions of ARC with linearly and exponentially increasing sample sizes.

[4] See Section A.3 for details.
Figure 2: 1. a9a, 2. covtype, 3. gaussian.
Note that both the linear and the exponential sampling schemes do not quite reach the same performance as SCR, even though they were carefully fine-tuned to achieve the best possible performance. Furthermore, their sample size was manually set to reach the full sample size at the very last iteration. This highlights another advantage of the automatic sampling scheme, which does not require knowledge of the total number of iterations.
Proof of Lemma 10:
The lower bound follows directly from Step 7 in the algorithm design (see Algorithm 1). Within the upper bound, the first constant accounts for the start value of the penalty parameter. Now, we show that as soon as the penalty parameter exceeds some critical value, the iteration is very successful and the parameter is not increased further. Finally, the last factor allows for the penalty parameter being 'close to' the critical threshold but increased 'one last time'.
Any iteration with