Non-Uniform Stochastic Average Gradient Method for Training Conditional Random Fields

04/16/2015 · by Mark Schmidt, et al.

We apply stochastic average gradient (SAG) algorithms for training conditional random fields (CRFs). We describe a practical implementation that uses structure in the CRF gradient to reduce the memory requirement of this linearly-convergent stochastic gradient method, propose a non-uniform sampling scheme that substantially improves practical performance, and analyze the rate of convergence of the SAGA variant under non-uniform sampling. Our experimental results reveal that our method often significantly outperforms existing methods in terms of the training objective, and performs as well or better than optimally-tuned stochastic gradient methods in terms of test error.


1 Introduction

Conditional random fields (CRFs) (Lafferty et al., 2001) are a ubiquitous tool in natural language processing. They are used for part-of-speech tagging (McCallum et al., 2003), semantic role labeling (Cohn and Blunsom, 2005), topic modeling (Zhu and Xing, 2010), information extraction (Peng and McCallum, 2006), shallow parsing (Sha and Pereira, 2003), named-entity recognition (Settles, 2004), as well as a host of other applications in natural language processing and in other fields such as computer vision (Nowozin and Lampert, 2011). Similar to generative Markov random field (MRF) models, CRFs allow us to model probabilistic dependencies between output variables. The key advantage of discriminative CRF models is the ability to use a very high-dimensional feature set, without explicitly building a model for these features (as required by MRF models). Despite the widespread use of CRFs, a major disadvantage of these models is that they can be very slow to train, and the time needed for numerical optimization in CRF models remains a bottleneck in many applications.

Due to the high cost of evaluating the CRF objective function on even a single training example, it is now common to train CRFs using stochastic gradient methods (Vishwanathan et al., 2006). These methods are advantageous over deterministic methods because on each iteration they only require computing the gradient of a single example (and not all $n$ examples as in deterministic methods). Thus, if we have a data set with $n$ training examples, the iterations of stochastic gradient methods are $n$ times faster than deterministic methods. However, the number of stochastic gradient iterations required might be very high. This has been studied in the optimization community, which considers the problem of finding the minimum number of iterations $k$ so that we can guarantee that we reach an accuracy of $\epsilon$, meaning that

$$f(w^k) - f(w^*) \leq \epsilon,$$

where $f$ is our training objective function, $w^k$ is our parameter estimate on iteration $k$, and $w^*$ is the parameter vector minimizing the training objective function. For strongly-convex objectives like $\ell_2$-regularized CRFs, stochastic gradient methods require $O(1/\epsilon)$ iterations (Nemirovski et al., 2009). This is in contrast to traditional deterministic methods which only require $O(\log(1/\epsilon))$ iterations (Nesterov, 2004). However, this much lower number of iterations comes at the cost of requiring us to process the entire data set on each iteration.

For problems with a finite number of training examples, Le Roux et al. (2012) recently proposed the stochastic average gradient (SAG) algorithm which combines the advantages of deterministic and stochastic methods: it only requires evaluating a single randomly-chosen training example on each iteration, and only requires $O(\log(1/\epsilon))$ iterations to reach an accuracy of $\epsilon$. Beyond this faster convergence rate, the SAG method also allows us to address two issues that have traditionally frustrated users of stochastic gradient methods: setting the step-size and deciding when to stop. Implementations of the SAG method use both an adaptive step-size procedure and a cheaply-computable criterion for deciding when to stop. Le Roux et al. (2012) show impressive empirical performance of the SAG algorithm for binary classification.

This is the first work to apply a SAG algorithm to train CRFs. We show that tracking marginals in the CRF can drastically reduce the SAG method’s huge memory requirement. We also give a non-uniform sampling (NUS) strategy that adaptively estimates how frequently we should sample each data point, and we show that the SAG-like algorithm of Defazio et al. (2014) converges under any NUS strategy while a particular NUS strategy achieves a faster rate. Our experiments compare the SAG algorithm with a variety of competing deterministic, stochastic, and semi-stochastic methods on benchmark data sets for four common tasks: part-of-speech tagging, named entity recognition, shallow parsing, and optical character recognition. Our results indicate that the SAG algorithm with NUS often outperforms previous methods by an order of magnitude in terms of the training objective and, despite not requiring us to tune the step-size, performs as well or better than optimally tuned stochastic gradient methods in terms of the test error.

2 Conditional Random Fields

CRFs model the conditional probability of a structured output $y$ (such as a sequence of labels) given an input $x$ (such as a sequence of words) based on features $F(x,y)$ and parameters $w$ using

$$p(y|x,w) = \frac{\exp(w^T F(x,y))}{\sum_{y'} \exp(w^T F(x,y'))}. \quad (1)$$

Given $n$ pairs $(x_i, y_i)$ comprising our training set, the standard approach to training the CRF is to minimize the $\ell_2$-regularized negative log-likelihood,

$$\min_w \; f(w) = \frac{\lambda}{2}\|w\|^2 - \frac{1}{n}\sum_{i=1}^n \log p(y_i|x_i,w), \quad (2)$$

where $\lambda > 0$ is the strength of the regularization parameter. Unfortunately, evaluating $\log p(y_i|x_i,w)$ is expensive due to the summation over all possible configurations $y'$. For example, in chain-structured models the forward-backward algorithm is used to compute $\log p(y_i|x_i,w)$ and its gradient. A second problem with solving (2) is that the number of training examples $n$ in applications is constantly growing, and thus we would like to use methods that only require a few passes through the data set.
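To make definitions (1) and (2) concrete, the following Python sketch evaluates the conditional probability and the regularized objective by brute-force enumeration over all label sequences. It is a minimal illustration (feasible only for tiny chains), not the forward-backward implementation used in practice, and the feature function `F` is a placeholder for problem-specific features.

```python
import itertools
import numpy as np

def log_prob(y, x, w, F, labels):
    """log p(y|x,w) under model (1), by enumerating all label sequences."""
    score = w @ F(x, y)
    scores = [w @ F(x, yp) for yp in itertools.product(labels, repeat=len(y))]
    return score - np.logaddexp.reduce(scores)

def objective(w, data, F, labels, lam):
    """Equation (2): l2-regularized average negative log-likelihood."""
    nll = np.mean([-log_prob(y, x, w, F, labels) for x, y in data])
    return 0.5 * lam * np.dot(w, w) + nll
```

In practice the normalization and the marginals are computed with the forward-backward algorithm rather than by enumeration.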

3 Related Work

Lafferty et al. (2001) proposed an iterative scaling algorithm to solve problem (2), but this proved to be inferior to generic deterministic optimization strategies like the limited-memory quasi-Newton algorithm L-BFGS (Wallach, 2002; Sha and Pereira, 2003). The bottleneck in these methods is that we must evaluate $\log p(y_i|x_i,w)$ and its gradient for all $n$ training examples on every iteration. This is very expensive for problems where $n$ is very large, so to deal with this problem stochastic gradient methods were examined (Vishwanathan et al., 2006; Finkel et al., 2008). However, traditional stochastic gradient methods require $O(1/\epsilon)$ iterations rather than the much smaller $O(\log(1/\epsilon))$ required by deterministic methods.

There have been several attempts at improving the cost of deterministic methods or the convergence rate of stochastic methods. For example, the exponentiated gradient method of Collins et al. (2008) processes the data online and only requires $O(\log(1/\epsilon))$ iterations to reach an accuracy of $\epsilon$ in terms of the dual objective. However, this does not guarantee good performance in terms of the primal objective or the weight vector. Although this method is highly effective if $\lambda$ is very large, our experiments and the experiments of others show that the performance of online exponentiated gradient can degrade substantially if a small value of $\lambda$ is used (which may be required to achieve the best test error); see Collins et al. (2008, Figures 5-6 and Table 3) and Lacoste-Julien et al. (2013, Figure 1). In contrast, SAG degrades more gracefully as $\lambda$ becomes small, even achieving a faster convergence rate than classic SG methods in this regime (Schmidt et al., 2013). Lavergne et al. (2010) consider using multiple processors and vectorized computation to reduce the high iteration cost of quasi-Newton methods, but when $n$ is enormous these methods still have a high iteration cost. Friedlander and Schmidt (2012) explore a hybrid deterministic-stochastic method that slowly grows the number of examples that are considered in order to achieve an $O(\log(1/\epsilon))$ convergence rate with a decreased cost compared to deterministic methods.

Below we state the convergence rates of different methods for training CRFs, including the fastest known rates for deterministic algorithms (like L-BFGS and accelerated gradient) (Nesterov, 2004), stochastic algorithms (like [averaged] stochastic gradient and AdaGrad) (Ghadimi and Lan, 2012), online exponentiated gradient, and SAG. Here $L$ is the Lipschitz constant of the gradient of the objective, $\mu$ is the strong-convexity constant (and we have $\mu \leq L$), and $\sigma^2$ bounds the variance of the gradients.

Deterministic: $O(n\sqrt{L/\mu}\,\log(1/\epsilon))$ (primal)
Online EG: $O((n + L/\mu)\log(1/\epsilon))$ (dual)
Stochastic: $O(\sqrt{L/\mu}\,\log(1/\epsilon) + \sigma^2/(\mu\epsilon))$ (primal)
SAG: $O((n + L/\mu)\log(1/\epsilon))$ (primal)

4 Stochastic Average Gradient

Le Roux et al. (2012) introduce the SAG algorithm, a simple method with the low iteration cost of stochastic gradient methods that nevertheless only requires $O(\log(1/\epsilon))$ iterations. To motivate this new algorithm, we write the classic gradient descent iteration as

$$w^{k+1} = w^k - \frac{\alpha_k}{n}\sum_{i=1}^n s_i^k, \quad (3)$$

where $\alpha_k$ is the step-size and at each iteration we set the 'slope' variables $s_i^k$ to the gradient with respect to training example $i$ at $w^k$, so that $s_i^k = \nabla f_i(w^k)$. The SAG algorithm uses this same iteration, but instead of updating $s_i^k$ for all $n$ data points on every iteration, it simply sets $s_i^k = \nabla f_i(w^k)$ for one randomly chosen data point and keeps the remaining $s_i^k$ at their values from the previous iteration. Thus the SAG algorithm is a randomized version of the gradient algorithm where we use the gradient of each example from the last iteration where it was selected. The surprising aspect of the work of Le Roux et al. (2012) is that this simple delayed gradient algorithm achieves a similar convergence rate to the classic full gradient algorithm despite the iterations being $n$ times faster.
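As a concrete illustration of this update, here is a minimal Python sketch of SAG for a generic smooth finite-sum objective; the gradient oracle `grad_i`, the step-size, and the dimensions are illustrative placeholders rather than details taken from the paper.

```python
import numpy as np

def sag(grad_i, n, dim, alpha=1e-2, iters=10000, seed=0):
    """Keep one stored gradient per example and step using their sum."""
    rng = np.random.default_rng(seed)
    w = np.zeros(dim)
    slopes = np.zeros((n, dim))   # s_i: last gradient computed for example i
    d = np.zeros(dim)             # d = sum_i s_i
    for _ in range(iters):
        i = int(rng.integers(n))
        g = grad_i(w, i)          # fresh gradient for the sampled example
        d += g - slopes[i]        # swap out example i's old contribution
        slopes[i] = g
        w -= (alpha / n) * d      # iteration (3) with delayed slopes
    return w
```

For CRFs, storing the dense `slopes` array directly is impractical, which motivates the modifications described next.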

4.1 Implementation for CRFs

Unfortunately, a major problem with applying (3) to CRFs is the requirement to store the $s_i^k$. While the CRF log-likelihood gradients have a nice structure (see Section 4.2), $s_i^k$ also includes the regularizer gradient $\lambda w^{k'}$ from some previous iteration $k'$, which is dense and unstructured. To get around this issue, instead of using (3) we use the following SAG-like update (Le Roux et al., 2012, Section 4)

$$w^{k+1} = w^k - \alpha_k\left(\frac{1}{n}\sum_{i=1}^n g_i^k + \lambda w^k\right), \quad (4)$$

where $g_i^k$ is the gradient of the negative log-likelihood for example $i$ from the last iteration where $i$ was selected, and $d = \sum_i g_i$ is the sum of the $g_i$ over all $i$. Thus, this update uses the exact gradient of the regularizer and only uses an approximation for the (structured) CRF log-likelihood gradients. Since we don't yet have any information about these log-likelihoods at the start, we initialize the algorithm by setting $g_i = 0$. But to compensate for this, we track the number of examples seen $m$, and normalize $d$ by $m$ in the update (instead of $n$). In Algorithm 1, we summarize this variant of the SAG algorithm for training CRFs. (If we solve the problem for a sequence of regularization parameters, we can obtain better performance by warm-starting $w$, $d$, and the $g_i$.)

0:  Input: $\{x_i, y_i\}_{i=1}^n$, $\lambda$, $w$, $\delta$
1:  $m \leftarrow 0$, $g_i \leftarrow 0$ for $i = 1, 2, \ldots, n$
2:  $d \leftarrow 0$, $L_g \leftarrow 1$
3:  while $m < n$ or $\|d/m + \lambda w\|_\infty \geq \delta$ do
4:     Sample $i$ from $\{1, 2, \ldots, n\}$
5:     $f \leftarrow -\log p(y_i|x_i, w)$
6:     $g \leftarrow -\nabla \log p(y_i|x_i, w)$
7:     if this is the first time we sampled $i$ then
8:        $m \leftarrow m + 1$
9:     end if   {Subtract old gradient $g_i$, add new gradient $g$:}
10:     $d \leftarrow d - g_i + g$   {Replace old gradient of example $i$:}
11:     $g_i \leftarrow g$
12:     if $g \neq 0$ then
13:        $L_g \leftarrow$ lineSearch($x_i, y_i, f, g, w, L_g$)
14:     end if
15:     $\alpha \leftarrow 1/(L_g + \lambda)$
16:     $w \leftarrow w - \alpha(d/m + \lambda w)$
17:     $L_g \leftarrow L_g \cdot 2^{-1/n}$
18:  end while
Algorithm 1 SAG algorithm for training CRFs

In many applications of CRFs the gradients $\nabla \log p(y_i|x_i,w)$ are very sparse, and we would like to take advantage of this as in stochastic gradient methods. Fortunately, we can implement (4) without using dense vector operations by using the representation $w = \beta v$ for a scalar $\beta$ and a vector $v$, and using 'lazy updates' that apply the accumulated changes to an individual variable only when it is needed (Le Roux et al., 2012).
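The sketch below illustrates (in Python, under our own simplified assumptions) how the $w = \beta v$ representation avoids dense work: the multiplication by $(1 - \alpha\lambda)$ coming from the regularizer is absorbed into the scalar $\beta$ in constant time, while the correction term only touches the coordinates where it is non-zero. The full lazy-update bookkeeping of Le Roux et al. (2012), which also defers repeated sparse corrections until a coordinate is read, is omitted for brevity.

```python
import numpy as np

class ScaledVector:
    """Represent w = beta * v so that scaling w is O(1) and sparse
    additions only touch the affected coordinates."""

    def __init__(self, dim):
        self.beta = 1.0
        self.v = np.zeros(dim)

    def scale(self, c):
        """w <- c * w (constant time)."""
        self.beta *= c

    def add_sparse(self, idx, vals):
        """w <- w + u, where u is non-zero only at positions idx."""
        self.v[idx] += np.asarray(vals) / self.beta

    def to_dense(self):
        return self.beta * self.v

def sag_like_step(w, d_idx, d_vals, m, alpha, lam):
    """One update w <- (1 - alpha*lam) * w - (alpha/m) * d, with d given
    by its non-zero indices d_idx and values d_vals."""
    w.scale(1.0 - alpha * lam)
    w.add_sparse(d_idx, -(alpha / m) * np.asarray(d_vals))
```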

Also following Le Roux et al. (2012), we set the step-size to $\alpha = 1/L$, where $L$ is an approximation to the maximum Lipschitz constant of the gradients. This is the smallest number $L$ such that

$$\|\nabla f_i(w) - \nabla f_i(v)\| \leq L\|w - v\| \quad (5)$$

for all $i$, $w$, and $v$. This quantity is a bound on how fast the gradient can change as we change the weight vector. The Lipschitz constant with respect to the gradient of the regularizer is simply $\lambda$. This gives $L = \lambda + L_g$, where $L_g$ is the Lipschitz constant of the gradient of the log-likelihood. Unfortunately, $L_g$ depends on the covariance of the CRF and is typically too expensive to compute. To avoid this computation, as in Le Roux et al. (2012) we approximate $L_g$ in an online fashion using the standard backtracking line-search given by Algorithm 2 (Beck and Teboulle, 2009). The test used in this algorithm is faster than testing (5), since it uses function values (which only require the forward algorithm for CRFs) rather than gradient values (which require both the forward and backward steps). Algorithm 2 monotonically increases $L_g$, but we also slowly decrease it in Algorithm 1 in order to allow the possibility that we can use a more aggressive step-size as we approach the solution.

0:  Input: $x_i$, $y_i$, $f$, $g$, $w$, $L_g$.
1:  $f' \leftarrow -\log p(y_i \mid x_i, w - \frac{1}{L_g} g)$
2:  while $f' > f - \frac{1}{2L_g}\|g\|^2$ do
3:     $L_g \leftarrow 2 L_g$
4:     $f' \leftarrow -\log p(y_i \mid x_i, w - \frac{1}{L_g} g)$
5:  end while
6:  return  $L_g$.
Algorithm 2 Lipschitz line-search algorithm
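A Python rendering of this line-search might look as follows; `nll` stands for the function value $-\log p(y_i|x_i,\cdot)$ evaluated by a forward pass, and the doubling of the estimate follows the standard backtracking scheme described above (this is a sketch, not the authors' exact implementation).

```python
import numpy as np

def line_search(nll, f, g, w, L):
    """Double L until f(w - g/L) <= f(w) - ||g||^2 / (2L) holds,
    using only function values of the sampled example."""
    g_sq = float(np.dot(g, g))
    while nll(w - g / L) > f - g_sq / (2.0 * L):
        L *= 2.0
    return L
```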

Since the solution is the only stationary point, we must have $\nabla f(w) = 0$ at the solution. Further, the value $d/m + \lambda w$ converges to $\nabla f(w)$, so we can use the size of this value to decide when to stop the algorithm (although we also require that $m = n$ to avoid premature stopping before we have seen the full data set). This is in contrast to classic stochastic gradient methods, where the step-size must go to zero and it is therefore difficult to decide if the algorithm is close to the optimal value or if we simply require a small step-size to continue making progress.
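For illustration, the stopping quantity can be computed directly from values the algorithm already maintains (the names below are ours):

```python
import numpy as np

def approx_grad_norm(d, m, w, lam):
    """Infinity norm of d/m + lam*w, which approaches the gradient norm as
    the stored gradients become up to date; compare it to a tolerance
    (once m == n) to decide when to stop."""
    return float(np.max(np.abs(d / m + lam * w)))
```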

4.2 Reducing the Memory Requirements

Even if the gradients are not sparse, we can often reduce the memory requirements of Algorithm 1 because it is known that the CRF gradients only depend on $w$ through marginals of the features. Specifically, the gradient of the log-likelihood under model (1) with respect to feature $j$ is given by

$$\nabla_j \log p(y|x,w) = F_j(x,y) - \mathbb{E}_{y'|x,w}\left[F_j(x,y')\right].$$

Typically, each feature $F_j$ only depends on a small 'part' of $y$. For example, we typically include features of the form $F_j(x,y) = F(x)\,\mathbb{1}[y_k = s]$ for some function $F$, where $y_k$ is an element of $y$ and $s$ is a discrete state that $y_k$ can take. In this case, the gradient can be written in terms of the marginal probability of element $y_k$ taking state $s$,

$$\nabla_j \log p(y|x,w) = F(x)\left(\mathbb{1}[y_k = s] - p(y_k = s \mid x, w)\right).$$

Notice that Algorithm 1 only depends on the old gradient through its difference with the new gradient (line 10), which in this example gives

$$F(x)\left(p(y_k = s \mid x, w) - p(y_k = s \mid x, w_{\text{old}})\right),$$

where $w$ is the current parameter vector and $w_{\text{old}}$ is the old parameter vector. Thus, to perform this calculation the only thing we need to know about $w_{\text{old}}$ is the unary marginal $p(y_k = s \mid x, w_{\text{old}})$, which will be shared across features that only depend on the event $y_k = s$. Similarly, features that depend on pairs of values in $y$ will need to store the pairwise marginals $p(y_k = s, y_{k'} = s' \mid x, w_{\text{old}})$. For general pairwise graphical model structures, the memory requirement to store these marginals is thus on the order of $V$ node marginals and $E$ edge marginals (each over the corresponding state space), where $V$ is the number of vertices and $E$ is the number of edges. This can be an enormous reduction since it does not depend on the number of features. Further, since computing these marginals is a by-product of computing the gradient, this potentially-enormous reduction in the memory requirements comes at no extra computational cost.
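The following Python sketch (with hypothetical data structures of our own) illustrates the bookkeeping for unary features of the form $F_j(x,y) = F(x)\,\mathbb{1}[y_k = s]$: instead of storing example $i$'s old gradient, we store its old unary marginals and reconstruct the line-10 update of Algorithm 1 from the difference of new and old marginals.

```python
def gradient_difference(features, old_marg, new_marg):
    """Sparse contribution of one example to the update d <- d - g_i + g,
    where g is the gradient of the negative log-likelihood.

    features: list of (j, k, s, value) tuples, one per unary feature j of
        the form value * 1[y_k = s] for this example.
    old_marg, new_marg: arrays with marg[k, s] = p(y_k = s | x, w) under
        the old and current parameters.
    Returns a dict mapping feature index j to the change in d[j]."""
    delta = {}
    for j, k, s, value in features:
        # g_j = value * (p(y_k = s) - 1[y_k = s]); the indicator term is the
        # same for old and new parameters, so it cancels in the difference.
        delta[j] = delta.get(j, 0.0) + value * (new_marg[k, s] - old_marg[k, s])
    return delta
```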

5 Non-Uniform Sampling

Recently, several works show that we can improve the convergence rates of randomized optimization algorithms by using non-uniform sampling (NUS) schemes. This includes randomized Kaczmarz (Strohmer and Vershynin, 2009), randomized coordinate descent (Nesterov, 2012), and stochastic gradient methods (Needell et al., 2014). The key idea behind all of these NUS strategies is to bias the sampling towards the Lipschitz constants of the gradients, so that gradients that change quickly get sampled more often and gradients that change slowly get sampled less often. Specifically, we maintain a Lipschitz constant $L_i$ for each training example $i$ and, instead of the usual sampling strategy $p_i = 1/n$, we bias towards the distribution $p_i = L_i / \sum_j L_j$. In these various contexts, NUS allows us to improve the dependence on the $L_i$ values in the convergence rate, since the NUS methods depend on the mean $\bar{L} = \frac{1}{n}\sum_i L_i$, which may be substantially smaller than the usual dependence on the maximum $L_{\max} = \max_i L_i$. Schmidt et al. (2013) argue that faster convergence rates might be achieved with NUS for SAG, since it allows a larger step size that depends on $\bar{L}$ instead of $L_{\max}$. (An interesting difference between the SAG update with NUS and NUS for stochastic gradient methods is that the SAG update does not seem to need to decrease the step-size for frequently-sampled examples, since the SAG update does not rely on using an unbiased gradient estimate.)

The scheme for SAG proposed by Schmidt et al. (2013, Section 5.5) uses a fairly complicated adaptive NUS scheme and step-size, but the key ingredient is estimating each constant $L_i$ using Algorithm 2. Our experiments show this method often already improves on state-of-the-art methods for training CRFs by a substantial margin, but we found we could obtain improved performance for training CRFs using the following simple NUS scheme for SAG: as in Needell et al. (2014), with probability 0.5 choose $i$ uniformly and with probability 0.5 sample $i$ with probability $L_i / \sum_j L_j$ (restricted to the examples we have previously seen). (Needell et al. (2014) analyze the basic stochastic gradient method and thus still require $O(1/\epsilon)$ iterations.) We also use a larger step-size based on the average of the $L_i$ rather than their maximum, since the faster convergence rate with NUS is due to the ability to use a larger step-size than $1/L_{\max}$. This simple step-size and sampling scheme contrasts with the more complicated choices described by Schmidt et al. (2013, Section 5.5), which make the degree of non-uniformity grow with the number of examples seen $m$. This prior work initializes the $L_i$ to a small value and re-estimates them aggressively each subsequent time an example is chosen. In the context of CRFs, this leads to a large number of expensive backtracking iterations. To avoid this, we initialize $L_i$ using the line-search the first time an example is chosen, and only slowly decrease $L_i$ each time it is subsequently chosen. Allowing the $L_i$ to decrease seems crucial to obtaining the best practical performance of the method, as it allows the algorithm to take bigger step sizes if the values of $L_i$ are small near the solution.
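A sketch of this sampling rule in Python (with illustrative data structures; `L` holds the current per-example Lipschitz estimates and `seen` the indices sampled so far) might look like the following, where we let the uniform branch draw from all $n$ examples so that unseen examples can still be reached:

```python
import numpy as np

def sample_index(n, L, seen, rng):
    """With probability 0.5 sample uniformly; otherwise sample in proportion
    to the Lipschitz estimates L_i, restricted to previously seen examples."""
    if len(seen) == 0 or rng.random() < 0.5:
        return int(rng.integers(n))
    seen = np.asarray(seen)
    probs = L[seen] / L[seen].sum()
    return int(rng.choice(seen, p=probs))
```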

5.1 Convergence Analysis under NUS

Schmidt et al. (2013) give an intuitive but non-rigorous motivation for using NUS in SAG. More recently, Xiao and Zhang (2014) show that NUS gives a dependence on the average Lipschitz constant $\bar{L} = \frac{1}{n}\sum_i L_i$ in the context of a related algorithm that uses occasional full passes through the data (which substantially simplifies the analysis). Below, we analyze a NUS extension of the SAGA algorithm of Defazio et al. (2014), which does not require full passes through the data and has similar performance to SAG in practice, but is much easier to analyze.

Proposition 1.

Let the sequences $\{x^k\}$ and $\{\phi_i^k\}$ be defined by

$$x^{k+1} = x^k - \gamma\left[\frac{1}{n p_{i_k}}\left(\nabla f_{i_k}(x^k) - \nabla f_{i_k}(\phi_{i_k}^k)\right) + \frac{1}{n}\sum_{i=1}^n \nabla f_i(\phi_i^k)\right], \qquad \phi_{i_k}^{k+1} = x^k,$$

where $i_k$ is chosen with probability $p_{i_k}$.

(a) If the step-size $\gamma$ is chosen appropriately, then for any sampling distribution with $p_i > 0$ for all $i$ the quantity $\mathbb{E}\|x^k - x^*\|^2$ converges linearly to zero, with a rate and constant (given in Appendix A) that depend on $\mu$, the $L_i$, and the $p_i$.

(b) If $p_i \propto L_i$ and an additional example is chosen uniformly at random on each iteration, then for an appropriate $\gamma$ the linear rate (given in Appendix B) depends on the average Lipschitz constant $\bar{L} = \frac{1}{n}\sum_i L_i$ rather than on $\max_i L_i$.

This result (which we prove in Appendices A and B) shows that SAGA has (a) a linear convergence rate for any NUS scheme where $p_i > 0$ for all $i$, and (b) a rate depending on $\bar{L}$ when sampling proportional to the Lipschitz constants while also generating a uniform sample. However, (a) achieves its fastest rate when the sampling is close to uniform, while (b) requires two samples on each iteration. We were not able to show a faster rate using only one sample on each iteration, as used in our implementation.

5.2 Line-Search Skipping

To reduce the number of function evaluations required by the NUS strategy, we also explored a line-search skipping strategy. The general idea is to skip the line-search for example $i$ if the line-search criterion was previously satisfied for example $i$ without backtracking. Specifically, if the line-search criterion has been satisfied a number of consecutive times for example $i$ (without backtracking), then we do not do the line-search on a corresponding number of subsequent times that example $i$ is selected (and we also do not decrease $L_i$ on these iterations). This drastically reduces the number of function evaluations required in the later iterations.

6 Experiments

We compared a wide variety of approaches on four CRF training tasks: the optical character recognition (OCR) dataset of Taskar et al. (2003), the CoNLL-2000 shallow parse chunking dataset (http://www.cnts.ua.ac.be/conll2000/chunking), the CoNLL-2002 Dutch named-entity recognition dataset (http://www.cnts.ua.ac.be/conll2002/ner), and a part-of-speech (POS) tagging task using the Penn Treebank Wall Street Journal data (POS-WSJ). The OCR dataset labels the letters in images of words. Chunking segments a sentence into syntactic chunks by tagging each sentence token with a chunk tag corresponding to its constituent type (e.g., 'NP', 'VP', etc.) and location (e.g., beginning, inside, ending, or outside any constituent); we use standard n-gram and POS tag features (Sha and Pereira, 2003). For the named-entity recognition task, the goal is to identify named entities and correctly classify them as persons, organizations, locations, times, or quantities. We again use standard n-gram and POS tag features, as well as word-shape features over the case of the characters in the token. The POS-tagging task assigns one of 45 syntactic tags to each token in each of the sentences in the data. For this data, we follow the standard division of the WSJ data given by Collins (2002), using sections 0-18 for training, 19-21 for development, and 22-24 for testing. We use the standard set of features following Ratnaparkhi (1996) and Collins (2002): n-gram, suffix, and shape features. As is common on these tasks, our pairwise features do not depend on $x$.

On these datasets we compared the performance of a set of competitive methods, including five variants of classic stochastic gradient methods: Pegasos, a standard stochastic gradient method with a step-size of $\eta/(\lambda k)$ on iteration $k$ (Shalev-Shwartz et al., 2011) (we also tested Pegasos with averaging, but it always performed worse than the non-averaged version); a basic stochastic gradient (SG) method where we use a constant step-size $\eta$; an averaged stochastic gradient (ASG) method where we use a constant step-size and average the iterates (we also tested SG and ASG with decreasing step-sizes, but these gave worse performance than using a constant step-size); AdaGrad, where we use a per-variable step-size and the proximal step with respect to the $\ell_2$-regularizer (Duchi et al., 2011); and stochastic meta-descent (SMD), where we initialize with a constant step-size and dynamically update the step-size (Vishwanathan et al., 2006). Since setting the step-size is a notoriously hard problem when applying stochastic gradient methods, we let these classic stochastic gradient methods cheat by choosing the value of $\eta$ that gives the best performance among powers of 10 on the training data (for SMD we additionally tested the choices suggested in the paper and associated code of Vishwanathan et al. (2006), and we similarly tuned the additive constant used by AdaGrad). Because of the extra implementation effort required to implement it efficiently, we did not test SMD on the POS dataset, but we do not expect it to be among the best performers on this data set. Our comparisons also included a deterministic L-BFGS algorithm (Schmidt, 2005) and the hybrid L-BFGS/stochastic algorithm of Friedlander and Schmidt (2012). We also included the online exponentiated gradient (OEG) method (Collins et al., 2008), following the heuristics in the authors' code: we proceed through a random permutation of the dataset on the first pass through the data, we limit the number of backtracking iterations per example, we initialize the per-sample step-sizes to a constant and divide them by a fixed factor if the dual objective does not increase (multiplying them by a smaller factor after processing the example), and we initialize the dual variables by placing most of the mass on the parts with the correct label from the training set. Finally, we included the SAG algorithm as described in Section 4, the SAG-NUS variant of Schmidt et al. (2013), and our proposed SAG-NUS* strategy from Section 5. (We also tested SG with the proposed NUS scheme, but the performance was similar to the regular SG method; this is consistent with the analysis of Needell et al. (2014, Corollary 3.1) showing that NUS for regular SG only improves the non-dominant term.) We also tested SAGA variants of each of the SAG algorithms, and found that they gave very similar performance. All methods (except OEG) were initialized at zero.

Figure 1: Objective minus optimal objective value against effective number of passes for different deterministic, stochastic, and semi-stochastic optimization strategies. Top-left: OCR, Top-right: CoNLL-2000, bottom-left: CoNLL-2002, bottom-right: POS-WSJ.
Figure 2: Test error against effective number of passes for different deterministic, stochastic, and semi-stochastic optimization strategies (this figure is best viewed in colour). Top-left: OCR, Top-right: CoNLL-2000, bottom-left: CoNLL-2002, bottom-right: POS-WSJ. The dotted lines show the performance of the classic stochastic gradient methods when the optimal step-size is not used. Note that the performance of all classic stochastic gradient methods is much worse when the optimal step-size is not used, whereas the SAG methods have an adaptive step-size so are not sensitive to this choice.

Figure 1 shows the results of our experiments on the training objective and Figure 2 shows the results of tracking the test error. Here we measure the number of 'effective passes', meaning the number of times we performed the bottleneck operation of computing $\log p(y_i|x_i,w)$ and its gradient, divided by $n$. This is an implementation-independent way to compare the convergence of the different algorithms (most of whose runtimes differ only by a small constant), but we have included the performance in terms of runtime in Appendix E. For the different SAG methods that use a line-search, we count the extra 'forward' operations used by the line-search as full evaluations of $\log p(y_i|x_i,w)$ and its gradient, even though these operations are cheaper because they do not require the backward pass nor computing the gradient. In these experiments we used a fixed value of $\lambda$ chosen to yield a test error close to optimal across all data sets. The objective is strongly-convex and thus has a unique minimum value. We approximated this value by running L-BFGS for a very large number of iterations, which always gave a solution that serves as a very accurate approximation of the true solution. In the test error plots, we have excluded the SAG and SAG-NUS methods to keep the plots interpretable (while Pegasos does not appear because it performs very poorly), but Appendix C includes these plots with all methods added. In the test error plots, we have also plotted as dotted lines the performance of the classic stochastic gradient methods when the second-best step-size is used.

We make several observations based on these experiments:

  • SG outperformed Pegasos. Pegasos is known to move exponentially away from the solution in the early iterations (Bach and Moulines, 2011), meaning that $\|w^k - w^*\|$ can grow like $\gamma^k$ for some $\gamma > 1$, while SG moves exponentially towards the solution in the early iterations ($\|w^k - w^*\|$ shrinks like $\gamma^k$ for some $\gamma < 1$) (Nedic and Bertsekas, 2000).

  • ASG outperformed AdaGrad and SMD (in addition to SG). ASG methods are known to achieve the same asymptotic efficiency as an optimal stochastic Newton method (Polyak and Juditsky, 1992), while AdaGrad and SMD can be viewed as approximations to a stochastic Newton method.  Vishwanathan et al. (2006) did not compare to ASG, because applying ASG to large/sparse data requires the recursion of Xu (2010).

  • Hybrid outperformed L-BFGS. The hybrid algorithm processes fewer data points in the early iterations, leading to cheaper iterations.

  • None of the three algorithms ASG/Hybrid/SAG dominated the others: the relative ranks of these methods changed based on the data set and whether we could choose the optimal step-size.

  • OEG performed very well on the first two datasets, but was less effective on the other two. By experimenting with various initializations, we found that we could obtain much better performance with OEG on these two datasets. We report these results in Appendix D, although Appendix E shows that OEG was less competitive in terms of runtime.

  • Both SAG-NUS methods outperform all other methods (except OEG) by a substantial margin based on the training objective, and are always among the best methods in terms of the test error. Further, our proposed SAG-NUS* always outperforms SAG-NUS.

On three of the four data sets, the best classic stochastic gradient methods (AdaGrad and ASG) seem to reach the optimal test error with a similar speed to the SAG-NUS* method, although they require many passes to reach the optimal test error on the OCR data. Further, we see that the good test error performance of the AdaGrad and ASG methods is very sensitive to choosing the optimal step-size, as the methods perform much worse if we don't use the optimal step-size (dotted lines in Figure 2). In contrast, SAG uses an adaptive step-size and has virtually identical performance even if the initial value of $L_g$ is too small by several orders of magnitude (the line-search quickly increases $L_g$ to a reasonable value on the first training example, so the dotted black line in Figure 2 would be on top of the solid line).

To quantify the memory savings given by the choices in Section 4, below we report the size of the memory required for these datasets under different memory-saving strategies, divided by the memory required by the naive SAG algorithm. Sparse refers to only storing non-zero gradient values, Marginals refers to storing all unary and pairwise marginals, and Mixed refers to storing node marginals and the gradient with respect to pairwise features (recall that the pairwise features do not depend on $x$ in our models).

Dataset Sparse Marginals Mixed
OCR
CoNLL-2000
CoNLL-2002
POS-WSJ

7 Discussion

Due to its memory requirements, it may be difficult to apply the SAG algorithm to natural language applications involving complex features that depend on a large number of labels. However, grouping training examples into mini-batches can also reduce the memory requirement (since only the gradients with respect to the mini-batches would be needed). An alternative strategy for reducing the memory is to use the algorithm of Johnson and Zhang (2013) or Zhang et al. (2013). These require evaluating the chosen training example twice on each iteration, and occasionally require full passes through the data, but do not have the memory requirements of SAG (in our experiments, these performed similarly to or slightly worse than running SAG at half speed).

We believe linearly-convergent stochastic gradient algorithms with non-uniform sampling could give a substantial performance improvement in a large variety of CRF training problems, and we emphasize that the method likely has extensions beyond what we have examined. For example, we have focused on the case of $\ell_2$-regularization, but for large-scale problems there is substantial interest in using $\ell_1$-regularized CRFs (Tsuruoka et al., 2009; Lavergne et al., 2010; Zhou et al., 2011). Fortunately, such non-smooth regularizers can be handled with a proximal-gradient variant of the method; see Defazio et al. (2014). While we have considered chain-structured data, the algorithm applies to general graph structures, and any method for computing/approximating the marginals could be adopted. Finally, the SAG algorithm could be modified to use multi-threaded computation as in the algorithm of Lavergne et al. (2010), and indeed might be well-suited to massively distributed parallel implementations.

Acknowledgments

We would like to thank the anonymous reviewers as well as Simon Lacoste-Julien for their helpful comments. This research was supported by the Natural Sciences and Engineering Research Council of Canada (RGPIN 262313, RGPAS 446348, and CRDPJ 412844 which was made possible by a generous contribution from The Boeing Company and AeroInfo Systems). Travel support to attend the conference was provided by the Institute for Computing, Information and Cognitive Systems (ICICS).

Appendix A: Proof of Part (a) of Proposition 1

In this section we consider the minimization problem

$$\min_x f(x) = \frac{1}{n}\sum_{i=1}^n f_i(x),$$

where each $\nabla f_i$ is $L_i$-Lipschitz continuous and each $f_i$ is $\mu$-strongly convex. We will define Algorithm 2, a variant of SAGA, by the sequences $\{x^k\}$, $\{\phi_i^k\}$, and $\{\nu^k\}$ given by

$$\nu^k = \frac{1}{n p_{i_k}}\left(\nabla f_{i_k}(x^k) - \nabla f_{i_k}(\phi_{i_k}^k)\right) + \frac{1}{n}\sum_{i=1}^n \nabla f_i(\phi_i^k), \qquad x^{k+1} = x^k - \gamma\,\nu^k, \qquad \phi_{i_k}^{k+1} = x^k,$$

where $i_k$ is chosen with probability $p_{i_k}$. In this section we use the convention that $\phi_i^0 = x^0$, that expectations are taken with respect to the choice of $i_k$ conditioned on the past, and that $x^*$ is the minimizer of $f$. We first show that $\nu^k$ is an unbiased gradient estimator and derive a bound on its variance.

Lemma 1.

We have $\mathbb{E}[\nu^k] = \nabla f(x^k)$, and subsequently a bound on the variance $\mathbb{E}\|\nu^k - \nabla f(x^k)\|^2$ in terms of the gradients at the reference points $\phi_i^k$.

Proof.

We have

To show the second part, we use that if and are independent, , and ,

We will also make use of the inequality

(6)

which follows from Defazio et al. (2014, Lemma 1). We now give the proof of part (a) of Proposition 1, which we state below.

Proposition 1 (a).

If the step-size $\gamma$ is chosen appropriately and $p_i > 0$ for all $i$, then Algorithm 2 has a linear convergence rate: $\mathbb{E}\|x^k - x^*\|^2$ converges linearly to zero.

Proof.

We denote the Lyapunov function at iteration $k$ by $T^k$. We will show that $\mathbb{E}[T^{k+1}] \leq (1-\rho)T^k$ for some $\rho > 0$. First, we write the expectation of its first term as

(7)

Next, we simplify the other term of $T^k$,

We now use Lemma 1 followed by Inequality (6),

We use this to bound the expected improvement in the Lyapunov function, combining (7), the bound above, and the definition of $\rho$.