In this paper, we propose a novel approach to automatically determine the batch size in stochastic gradient descent methods. The choice of the batch size induces a trade-off between the accuracy of the gradient estimate and the per-update cost in terms of samples. We propose to determine the batch size by optimizing the ratio between a lower bound to a linear or quadratic Taylor approximation of the expected improvement and the number of samples used to estimate the gradient. The performance of the proposed approach is empirically compared with related methods on popular classification tasks. This work was presented at the NIPS 2016 workshop on Optimizing the Optimizers, Barcelona, Spain.


## 1 Introduction

The optimization of the expectation of a function is a relevant problem in large-scale machine learning and in many stochastic optimization problems arising in finance, signal processing, and neural networks, to mention a few. The availability of large datasets has drawn attention to algorithms that scale favorably with both the number of trainable parameters and the size of the input data. Batch approaches that exploit a large number of samples to compute an approximation of the gradient have been gradually replaced by stochastic approaches that sample a small dataset (usually a single point) per iteration. For example, stochastic gradient descent (SGD) methods have been observed to yield faster convergence and (sometimes) lower test errors than standard batch methods (Bottou and Bousquet, 2011).

Despite the optimality results (Bottou and LeCun, 2003) and the successful applications, in practice SGD requires several steps of manual parameter adjustment to obtain good performance. For example, the initial step size, together with the design of an appropriate annealing scheme, is required for learning with stationary data (Bottou, 2012; Schaul et al., 2013). In addition, to limit the effects of noisy updates, it is often necessary to exploit mini-batch techniques that require the choice of an additional parameter: the batch size. This tuning is costly and tedious since parameters have to be tested over several iterations. Such problems get even worse when nonstationary settings are considered (Schaul et al., 2013).

Several techniques have been designed for tuning the step size in the pure SGD method. Although these approaches have been successfully applied to mini-batch settings, the design of an appropriate batch size is still an open problem. The contribution of this paper is the derivation of a novel algorithm for the selection of the batch size that compromises between noisy updates and more certain but expensive steps. The proposed algorithm automatically adapts the batch size at each iteration in order to maximize a lower bound to the expected improvement while accounting for the cost of processing samples. In particular, we consider both a first-order and a second-order Taylor approximation of the expected improvement and, exploiting concentration inequalities, we compute lower bounds to such approximations. The batch size is chosen by maximizing the ratio between the lower bound to the expected improvement and the number of samples used to estimate the gradient. This optimization problem trades off the desire to increase the batch size to get more accurate estimates against the cost of using more samples. The only parameter to be handled is the probability δ that regulates the confidence level of the lower bound to the improvement step.

The rest of the paper is organized as follows. In the next section we give a brief overview of stochastic gradient descent methods. In Section 3 we define the optimization problem used to select the batch size. Section 4 introduces an approximation of the expected improvement exploiting Taylor expansion, and Sections 5 and 6 deal respectively with a linear and a quadratic Taylor approximation. Section 7 discusses the application of diagonal preconditioning to define dimension-dependent step sizes. Empirical comparisons of the proposed methods with related approaches are reported in Section 8, while Section 9 draws conclusions and outlines future work.

## 2 Background

Stochastic gradient descent (SGD) is one of the most important optimization methods in machine learning. Most of the research on SGD has focused on the choice of the step size (Peters, 2007; Roux and Fitzgibbon, 2010; Duchi et al., 2011; Zeiler, 2012; Schaul et al., 2013; Orabona, 2014). Several annealing schemes have been proposed in the literature based on the standard rule originally proposed in (Herbert Robbins, 1951) and analyzed in (Xu, 2011; Bach and Moulines, 2011). More recently, researchers have proposed techniques to adapt the step size online according to the observed samples and gradients. These techniques derive a global step size or adapt the step size for each parameter (diagonal preconditioning). Refer to (George and Powell, 2006) for a survey on annealing schemes and adaptive step size rules.

Traditional SGD processes one example per iteration. This sequential nature makes SGD challenging to distribute. A common practical solution is to employ mini-batch training, which aggregates multiple examples at each iteration. The choice of the batch size is critical, however: batches that are too small lead to high communication costs, while large batches may slow down convergence in practice (Li et al., 2014). Despite the increasing amount of research in this field, all the mentioned approaches focus on obtaining (sub)optimal convergence rates of SGD without considering the possibility of adapting the size of the mini-batch. A notable exception is the work presented in (Byrd et al., 2012), where the authors proposed to adapt the sample size as the algorithm progresses. The batch size is selected according to the variance of the gradient estimated from the observed samples. Starting from the geometrical definition of descent direction, through several manipulations, the authors derived the following condition

$$|S| \;\ge\; \frac{\big\|\mathrm{Var}[\nabla_\theta f]\big\|_1}{\gamma^2\,\big\|\nabla_\theta J_S\big\|_2^2}, \qquad (1)$$

where γ > 0 and Var[∇θf] is the vector storing the population variance of each gradient component. The population variance is then approximated through its unbiased estimate computed on the sample set S. However, the interplay between sample size and step size is not investigated, resulting in an algorithm with hyper-parameters for both the selection of the batch size (the meaning of γ is not clearly defined) and the tuning of the step size.
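As a concrete reading of condition (1), the following sketch (function name and interface are illustrative, not from the paper) checks whether a given mini-batch already satisfies the variance test of Byrd et al. (2012):

```python
import numpy as np

def byrd_batch_size(per_sample_grads, gamma):
    """Sketch of the test behind eq. (1): the batch S is large enough when
    |S| >= ||Var[grad]||_1 / (gamma^2 * ||mean grad||_2^2).
    `per_sample_grads` is an (n, d) array with one gradient per row."""
    g = per_sample_grads.mean(axis=0)              # mini-batch gradient estimate
    var = per_sample_grads.var(axis=0, ddof=1)     # unbiased per-component variance
    required = np.sum(np.abs(var)) / (gamma ** 2 * np.dot(g, g))
    return required, per_sample_grads.shape[0] >= required
```

The test either validates the current batch size or indicates the minimum size required to satisfy (1).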

## 3 Cost Sensitive Scenario

In this section we formalize the problem and the methodology that will be used throughout the paper. Consider the problem of maximizing the expected value of a function f (we assume that f is Lipschitz continuous with Lipschitz constant L):

$$\max_\theta J(\theta) = \max_\theta \mathbb{E}_{x \sim P}\left[f(x, \theta)\right],$$

where θ is the trainable parameter vector and the samples x are drawn i.i.d. from a distribution P. A common approach is to optimize the previous function through gradient ascent. However, since P is unknown, it is not possible to compute the exact gradient ∇θJ, but we can estimate it from samples. Given a training set D, the mini-batch stochastic gradient (SG) ascent is the stochastic process

$$\theta^{(t+1)} = \theta^{(t)} + \Delta\theta_n = \theta^{(t)} + \eta^{(t)}\, \nabla_\theta J_n, \qquad t \in \mathbb{N}^+, \qquad (2)$$

where Δθn is a random variable associated to an n-dimensional subset of D (e.g., randomly drawn). Formally, Δθn is defined as the product of a positive scalar η (or a positive semi-definite matrix) and the gradient estimate built on the n samples drawn from D:

$$\nabla_\theta J_n = \frac{1}{n} \sum_{i \in I_n} \nabla_\theta f(x_i, \theta),$$

where In is an index set used to identify elements in D. Note that ∇θJn is a random variable that depends on the selection of the subset of D, i.e., on the index set In. In the following we will show how to select the batch size n for each gradient update.
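The update (2) with the estimate above can be sketched as follows (a minimal illustration; the function names and the uniform sampling of the index set are assumptions):

```python
import numpy as np

def sg_step(theta, data, grad_f, n, eta, rng):
    """One mini-batch stochastic gradient *ascent* step, as in eq. (2):
    draw an index set I_n of size n, average the per-sample gradients,
    and move along the estimate. `grad_f(x, theta)` is assumed to return
    the gradient of f(x, theta) with respect to theta."""
    idx = rng.choice(len(data), size=n, replace=False)   # index set I_n
    grad_n = np.mean([grad_f(data[i], theta) for i in idx], axis=0)
    return theta + eta * grad_n
```

Repeated calls with fresh index sets realize the stochastic process (2).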

To evaluate the quality of an update we consider the improvement ΔJn = J(θ^(t+1)) − J(θ^(t)), which is again a random variable. As the number of samples increases, the gradient estimate (and consequently the estimated improvement) gets more and more certain. So, adopting a risk-averse approach, we consider as a goal the maximization of some statistical lower bound Υn to the expected improvement. This allows us to account for the uncertainty in the stochastic process defined in (2). On the other hand, this problem is trivially solved by taking the batch size as large as possible, which ignores the additional computational cost of processing a larger batch. In practice, the batch dimension induces a trade-off between a secure but costly update (the estimate converges to the true value as n → ∞) and a noisy one. In order to formalize this trade-off, in this paper we consider that any additional sample comes at a price: when the addition of a new sample does not provide any significant improvement in the estimated performance, it is not worth paying that price. As a consequence, we can formalize the batch size selection problem as the cost-sensitive optimization

$$n^* = \operatorname*{arg\,max}_{n \in \mathbb{N}^+} \frac{\Upsilon_n}{n}. \qquad (3)$$
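When no closed form is available, problem (3) can be solved by direct search over a bounded range of batch sizes. A minimal sketch (exhaustive search is our assumption here, not the paper's prescription):

```python
def best_batch_size(lower_bound, n_max):
    """Solve problem (3) numerically: pick the n in {1, ..., n_max} that
    maximizes lower_bound(n) / n, where lower_bound(n) returns a
    (probabilistic) lower bound to the expected improvement for batch size n."""
    return max(range(1, n_max + 1), key=lambda n: lower_bound(n) / n)
```

For instance, a bound that grows like the square root of n yields an interior maximum of the ratio, so neither the smallest nor the largest batch wins.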

## 4 Lower Bound to the Improvement

This section focuses on the derivation of the lower bound to the improvement ΔJn. Given an increment Δθn, a realization of the random variable ΔJn can be computed. However, we do not know the analytical relationship that ties the two terms. The lack of this information prevents a closed-form solution for the optimal batch size. On the other hand, resorting to black-box optimization methods (e.g., grid search) is generally not a suitable alternative due to their high cost.

In order to simplify the optimization problem, we can consider the Taylor expansion of the expected improvement. For example, the first order expansion is given by

$$\Delta J_n = \nabla_\theta J^T \Delta\theta_n + R_1(\Delta\theta_n), \qquad (4)$$

where R1(Δθn) is the remainder. A lower bound to the remainder can be derived by minimizing the remainder along the line connecting the current parameterization and the updated value. By plugging this result into (4), a deterministic lower bound to the improvement is derived.

This formalization has two issues. First, the computation of such a lower bound requires solving a minimization problem that involves the evaluation of the Hessian at several points along the gradient direction. Second, the above lower bound does not explicitly depend on the batch size, since it does not take into consideration the uncertainty in the gradient estimate. The first issue will be solved by considering an approximation of the expected improvement obtained from a truncation of the Taylor expansion, while the second issue is addressed by considering a probabilistic lower bound to the expected improvement that explicitly depends on the batch size (uncertainty reduces as the batch size increases).

### 4.1 Approximation of the Expected Improvement

As mentioned, the computation of the lower bound to the remainder of the first-order Taylor expansion requires the evaluation of the Hessian at several points (depending on Δθn), which has a quadratic cost in the number of parameters. One way to deal with this issue is to require a higher-order Lipschitz continuity condition on the objective function in order to derive a bound to the Hessian, or to exploit knowledge of the objective function (Pirotta et al., 2013). However, in practice this information is hard to retrieve and, since our goal is to derive a practical algorithm, we suggest exploiting approximations of the improvement ΔJn.

A first formulation is obtained by considering a local linear expansion: ΔJn ≈ ∇θJᵀΔθn. As we will see in the next section, this simplification has several advantages. As a second option, in Section 6, we suggest replacing the infimum with the evaluation of the Hessian in the current parametrization. Equivalently, this means that we consider a quadratic expansion, a choice that is common in the literature (Roux and Fitzgibbon, 2010; Schaul et al., 2013). Formally, we consider ΔJn ≈ ∇θJᵀΔθn + ½ ΔθnᵀHθJ Δθn, where the second-order remainder has been dropped.

## 5 Linear Probabilistic Adaptive Sample Technique (L-PAST)

The linear expansion allows us to select the batch size in a way that is complementary to the step size selection technique, since it is independent of the selected step size. In other words, the advantage of this approach is that the step size can be tuned using any automatic technique provided in the literature, while the batch size is selected automatically according to the quality of the observed samples.

Let ΔĴn denote the linear simplification ∇θJᵀΔθn of the expected improvement. We still need to manipulate this formulation in order to remove the dependence on the true gradient. This goal can be achieved by exploiting concentration inequalities on the exact gradient ∇θJ. Formally, we consider that the following inequality holds with probability (w.p.) 1 − δ:

$$\left\| \nabla_\theta J - \nabla_\theta J_n \right\|_2 < B_{\|\nabla\|}(n, \delta). \qquad (5)$$

Given the previous inequality, it is easy to prove the following bound, which holds w.p. 1 − δ (see the appendix, Sec. 10.2):

$$\hat{\Delta J}_n = \eta\, \nabla_\theta J^T \nabla_\theta J_n > \eta \left( \left\| \nabla_\theta J_n \right\|_2 - B_{\|\nabla\|}(n, \delta) \right) \left\| \nabla_\theta J_n \right\|_2 = \Upsilon^L_{n,\delta}, \qquad (6)$$

where we have considered the global step size (Δθn = η∇θJn). As expected, the lower bound to the expected improvement depends on the batch size through the concentration bound (∇θJn is a realization of the random variable given the current index set In). In particular, as the number of samples increases, the empirical error (according to the concentration inequality) decreases, leading to better estimates of the expected improvement. Having derived a sample-based bound to the expected improvement, we can solve the cost-sensitive problem (3) for the "optimal" batch size n*. L-PAST is outlined in Algorithm 1.
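A minimal sketch of the bound in (6), taking the already-evaluated concentration bound B(n, δ) as an input (how B is computed depends on the inequality chosen in Section 5.1; the function name is illustrative):

```python
import numpy as np

def lpast_lower_bound(grad_n, bound, eta):
    """Lower bound (6) to the linearized improvement with a global step
    size eta: Upsilon = eta * (||grad_n||_2 - B(n, delta)) * ||grad_n||_2.
    `bound` is the evaluated concentration bound B(n, delta)."""
    norm = np.linalg.norm(grad_n)
    return eta * (norm - bound) * norm
```

Evaluating this quantity over candidate batch sizes and dividing by n realizes problem (3) for L-PAST.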

### 5.1 Concentration Inequalities and Batch Size

The bound in (6) provides a generic lower bound to the expected improvement that is independent of the specific concentration inequality used. It is now necessary to provide an explicit formulation in order to solve Problem (3). Several concentration inequalities have been provided in the literature; in this paper we consider Hoeffding's, Chebyshev's and Bernstein's inequalities (Massart, 2007). Chebyshev's inequality has been widely exploited in the literature due to its simplicity: it can be applied to any arbitrary distribution (provided the variance is known). On the other hand, Hoeffding's and Bernstein's inequalities require a bounded support of the distribution, i.e., knowledge of the range of the random variables (here the per-sample gradients). We use the term distribution aware to refer to the scenario where the properties of the distribution are known (e.g., variance and range). Although these values can be estimated online from the observed samples, the results may be unreliable in the event of poor estimates. Empirical versions, which directly account for the estimation error, have been presented in the literature (Saw et al., 1984; Mnih et al., 2008; Stellato et al., 2016).

The advantage of using these inequalities is that the batch size can be easily computed in closed form; see Table 1 for the distribution aware scenario. It is worth noticing that all the proposed approaches retain one hyper-parameter, δ, which denotes the desired confidence level. This parameter can be easily set due to its clear meaning, and typically its contribution is small since it appears inside a logarithm.

It is worth noticing that, when we consider Chebyshev's inequality, our approach provides a probabilistic interpretation of the AGSS algorithm presented in (Byrd et al., 2012) and reported in (1). Our derivation gives a different and more formal interpretation of their approach and gives an explicit meaning to the hyper-parameter γ by mapping it to the confidence level δ. Note that this result is obtained by considering the distribution aware Chebyshev's inequality instead of the empirical version; by replacing the variance with its empirical estimate, the result may be unreliable.
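To make the Chebyshev case concrete, a hedged sketch: we assume a bound of the shape B(n, δ) = sqrt(V/(nδ)), with V the summed per-component variance (this shape is an illustrative assumption; the paper's Table 1 gives the actual closed forms), plug it into (6) with unit step size, and solve (3) by direct search:

```python
import math

def chebyshev_past_n(grad_norm, var_sum, delta, n_max=10_000):
    """Batch size from problem (3) under an assumed Chebyshev-style bound
    B(n, delta) = sqrt(var_sum / (n * delta))."""
    def ratio(n):
        b = math.sqrt(var_sum / (n * delta))
        return (grad_norm - b) * grad_norm / n   # Upsilon_n / n with eta = 1
    return max(range(1, n_max + 1), key=ratio)
```

As expected from (1), the selected n scales with the variance-to-squared-gradient-norm ratio, with δ playing the role of γ.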

## 6 Quadratic Probabilistic Adaptive Sample Technique (Q-PAST)

The simplicity of the previous approach comes with a low expressive power. The quadratic expansion of the expected improvement (with global step size)

$$\hat{\Delta J}_n = \nabla_\theta J^T \Delta\theta_n + \tfrac{1}{2} \Delta\theta_n^T H_\theta J\, \Delta\theta_n = \eta\, \nabla_\theta J^T \nabla_\theta J_n + \tfrac{1}{2} \eta^2\, (\nabla_\theta J_n)^T H_\theta J\, \nabla_\theta J_n \qquad (7)$$

allows to account for local curvatures of the space.

Before describing the Quadratic-PAST (Q-PAST), as done for L-PAST, we need to manipulate the expected improvement in order to remove the dependence on the exact gradient and Hessian. While the linear term can be lower bounded as done in Section 5, here we show how to handle the quadratic form in a similar way. Consider a component-wise concentration inequality for the Hessian estimate such that, w.p. 1 − δ:

$$\left| H^{(ij)}_\theta J - H^{(ij)}_\theta J_n \right| < B_H(n, \delta). \qquad (8)$$

Then,

$$\nabla_\theta J_n^T H_\theta J\, \nabla_\theta J_n > \nabla_\theta J_n^T \tilde{H}_\theta J_n\, \nabla_\theta J_n, \qquad (9)$$

where each entry of the matrix H̃θJn is obtained by shifting the corresponding entry of the estimated Hessian by the concentration bound in the direction that decreases the quadratic form. By plugging inequalities (6)–(9) into (7) we obtain a lower bound to the quadratic expansion of the improvement

$$\hat{\Delta J}_n > \Upsilon^L_{n,\delta/2} + \tfrac{1}{2} \eta^2\, \nabla_\theta J_n^T \tilde{H}_\theta J_n\, \nabla_\theta J_n = \Upsilon^Q_{n,\delta}. \qquad (10)$$

Given the step size η and an index set In, we can optimize the lower bound ΥQ for the batch size n.

Finally, we can exploit this sample-based bound to compute the "optimal" batch size as in Problem (3). The concentration inequalities mentioned in Section 5.1 can be used to bound the Hessian components. By exploiting these bounds it is possible to derive a closed-form solution for n* even in this context.
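The quadratic bound (10) can be sketched by shifting each Hessian entry against the quadratic form, which is one way to realize (9) (the sign handling and names below are assumptions made for illustration):

```python
import numpy as np

def qpast_lower_bound(grad_n, hess_n, grad_bound, hess_bound, eta):
    """Sketch of the quadratic lower bound (10). Since
    x^T H x >= x^T H_n x - B * sum_ij |x_i x_j|, shifting each entry of the
    estimated Hessian by the component-wise bound B against the sign of the
    outer product x x^T gives a valid lower bound on the quadratic term."""
    norm = np.linalg.norm(grad_n)
    linear = eta * (norm - grad_bound) * norm            # linear term, as in (6)
    outer = np.outer(grad_n, grad_n)
    h_tilde = hess_n - hess_bound * np.sign(outer)       # worst-case entry shift
    quad = 0.5 * eta ** 2 * grad_n @ h_tilde @ grad_n
    return linear + quad
```

The linear and quadratic terms are combined exactly as in (10); dividing by n over candidate batch sizes again realizes problem (3).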

## 7 Diagonal Preconditioning

Until now we have considered the global step size scenario where each parameter is scaled by the same amount η. In practice, it may be necessary to consider individual step sizes in order to account for the different magnitudes of the parameters.

There are several ways to deal with this scenario. We start by considering the linear expansion ΔĴn = ∇θJᵀ(η ∘ ∇θJn), where η is a d-dimensional vector and ∘ is the Hadamard (element-wise) product. If we now consider the gradient to be scaled by the vector η, we can apply the same procedure presented in Section 5 to the scaled gradient. This means that it is necessary to recompute the concentration inequalities to take into account the change of magnitude. For example, Hoeffding's inequality requires an upper bound to the L2-norm of the random vector involved in the estimate. In our setting (see Section 3) we have assumed such a bound on the per-sample gradients. To use diagonal preconditioning with L-PAST and Hoeffding, we just need to compute the upper bound to the norm of the scaled gradient, which in a trivial form is obtained by rescaling the original bound by the largest component of η. Similar considerations can be derived for the other concentration inequalities.

Another possible way to deal with diagonal preconditioning is to exploit a component-wise concentration inequality. Let B(n, δ) be a vector such that, w.p. 1 − δ, each component of the gradient estimation error is bounded by the corresponding component of B(n, δ). This is the element-wise counterpart of the concentration inequality considered in (5), and it always implies a bound on the L2-norm of the error. Let us consider this scenario together with the quadratic expansion; then:

$$\hat{\Delta J}_n \ge (\eta \circ \nabla_\theta J_n)^T \tilde{\nabla}_\theta J_n + \tfrac{1}{2} (\eta \circ \nabla_\theta J_n)^T \tilde{H}_\theta J_n\, (\eta \circ \nabla_\theta J_n) \quad \text{w.p. } 1 - \delta, \qquad (11)$$

where ∇̃θJn denotes the gradient estimate shifted component-wise by the concentration bound and H̃θJn is defined as in (9).

A different way to deal with diagonal preconditioning is to assume the problem to be separable (Schaul et al., 2013). In our setting this maps to a diagonal approximation of the Hessian HθJ.
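Under the separability assumption the quadratic term of (7) decomposes over components; a small sketch (names are illustrative):

```python
import numpy as np

def separable_quadratic_term(grad_n, hess_n, eta_vec):
    """Diagonal (separable) approximation of the quadratic term in (7):
    only the diagonal of the Hessian is kept, and `eta_vec` is the
    per-component step size applied via the Hadamard product."""
    step = eta_vec * grad_n                    # eta ∘ grad, element-wise
    return 0.5 * float(np.sum(np.diag(hess_n) * step ** 2))
```

Only the d diagonal entries are needed, avoiding the quadratic cost of the full Hessian.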

## 8 Experiments

We tested the approaches on digit recognition and news classification tasks, with both convex (logistic regression) and non-convex (multi-layer perceptron) models.

Mini-batch SG (SG-n) with a fixed batch size n is the standard approach to stochastic optimization problems. This section compares SG-n with the adaptive algorithms introduced above (three versions of PAST and DSG (Byrd et al., 2012)).

A critical parameter in SG optimization is the definition of the step length η. In order to remove the dependence on this parameter, we tested several adaptive strategies (e.g., AdaGrad, Adam, RMSprop, AdaDelta). We finally decided to use RMSprop, which provided the most consistent results across the different settings (parameters are set as suggested in (Tieleman and Hinton, 2012)). Finally, the n-dimensional subset of D is sampled sequentially, without shuffling at the beginning of each epoch.

#### Evaluation.

The main measure to be considered is the loss. However, the evaluation needs to take into account two orthogonal dimensions: samples and iterations. The number of samples processed by the algorithm is relevant in applications where the samples need to be actively collected. For example, it is a relevant measure in reinforcement learning problems, where samples are obtained by interacting with a real or simulated environment. On the contrary, in off-line applications (e.g., supervised learning) the iterations play a central role because there is no cost in collecting samples. For example, the highest cost in deep learning approaches is the computation of the gradient and the consequent update of the parameters. Clearly this cost is proportional to the number of iterations. In the following we will investigate both dimensions.

#### Datasets.

We chose to test the algorithms on both classification and regression tasks. The classification tasks are: the MNIST digit recognition task (LeCun et al., 1998) over the 10 digit classes, and a subset of the Reuters newswire topics classification (Reuters data are available at https://keras.io/datasets/). For the Reuters task we selected the most frequent words and used them as binary features. The regression task is performed on the Parkinsons Telemonitoring dataset (Tsanas et al., 2010), available at https://archive.ics.uci.edu/ml/datasets/Parkinsons+Telemonitoring. This dataset is composed of voice measurements, and the goal is to predict the total_UPDRS field using the available features. We did not use any form of preprocessing for the classification tasks, while we normalized the Parkinsons data (zero mean and unit variance).

#### Estimators.

The multi-class classifier was modeled through different architectures of feed-forward neural networks. The simplest one is a logistic regression (i.e., a network without hidden layers). This model has a convex loss (categorical cross-entropy) in the parameters; this configuration is denoted 'M0'. The second configuration is a fully connected multi-layer perceptron with one ReLU hidden layer; this network (denoted 'M1') is used in the MNIST task but not in Reuters. Finally, we test a deeper fully connected multi-layer perceptron with two ReLU hidden layers; this architecture has been used only in the MNIST problem (denoted 'M2'). The multi-layer perceptrons have a non-convex loss (cross-entropy) in the parameters.

For the regression task we decided to exploit a simple linear regressor. We are aware of the limited power of such an estimator, but the focus is not on the final performance but on the relationship between the different batch strategies.

### 8.1 L-PAST Behavior.

In this section we compare the behavior of L-PAST approaches with the state-of-the-art on the classification tasks.

We start by considering the total number of processed samples as the evaluation dimension (together with the accuracy score). As shown in Figure 1 (top line), the best accuracy is obtained by the algorithms that select small batches (e.g., Bernstein L-PAST). This is clearly a consequence of the higher number of updates performed by the algorithms that select small batch sizes. In particular, Bernstein L-PAST is able to outperform the other algorithms in the MNIST task due to its ability to quickly approach the optimal solution in the initial phase. The rightmost figure shows the number of samples selected by Bernstein L-PAST w.r.t. SG-n. Other approaches (DSG, Hoeffding/Chebyshev L-PAST) that exploit more general inequalities are prone to select bigger batch sizes, which results in fewer updates and slower convergence.

When we consider the number of iterations, the ranking of the algorithms changes. In particular, the algorithms that select the smallest batch sizes are penalized by the noisy estimate of the gradient. The other algorithms perform updates that are more certain and lead to higher scores.

Finally, we also tested different confidence levels. Table 2 shows that, as expected, the confidence has a small influence on the overall behavior. It is worth noticing that smaller batches generally lead to better (though noisier) performance, since they allow a larger number of updates.

### 8.2 Q-PAST Behavior.

In this section we evaluate the performance of the quadratic approximation on the regression task. We assume the problem to be separable in order to consider only the diagonal component of the Hessian. Figure 2 shows that Q-PAST outperforms all the other approaches. It is worth noticing that it is less aggressive in changing the batch dimension, in particular when compared with L-PAST (bottom figure).

Table 3 shows the R2-score achieved by the algorithm with different confidence levels. It is possible to observe that the influence of δ on the final performance is very limited. This means that the choice of this value is not critical.

### 8.3 Non-Stationary Scenario

PAST approaches regulate the batch dimension according to the statistical information associated with the estimated gradient. In particular, when we are far away from the optimal solution we can exploit noisy steps (i.e., small batches) to rapidly approach a "good" solution. Instead, when we approach the optimal solution, we need an accurate estimate of the gradient to closely converge to the optimum.

This property is relevant in realistic applications (e.g., online scenarios) where the optimal solution may change (even drastically) over time. To simulate this scenario we considered a polynomial regression problem where the optimal solution is changed after a fixed number of iterations (i.e., parameter updates).

Figure 3 shows how Bernstein L-PAST handles this scenario. Initially, small batches are exploited to approach the optimum; then the batch size is increased proportionally to the noise-to-signal ratio. Intuitively, when the noise-to-signal ratio is high we need to average the gradient over many samples in order to lower the influence of the noise. A good proxy for the noise-to-signal ratio is provided by the ratio between the variance and the squared Euclidean norm of the gradient; see Figure 3. When the optimum is changed, the algorithm detects a decrease in the noise-to-signal ratio (the gradient norm increases w.r.t. the variance) and adapts the batch size to the new scenario. This analysis is even clearer when we consider Chebyshev L-PAST, since it directly optimizes the batch size according to the noise-to-signal ratio. On the contrary, Hoeffding L-PAST, which exploits less information about the distribution than the other approaches, considers only the gradient magnitude.
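The proxy described above can be computed directly from per-sample gradients; a minimal sketch:

```python
import numpy as np

def noise_to_signal(per_sample_grads):
    """Proxy used in Section 8.3: summed per-component gradient variance
    divided by the squared Euclidean norm of the mini-batch gradient."""
    g = per_sample_grads.mean(axis=0)
    var = per_sample_grads.var(axis=0, ddof=1)
    return float(np.sum(var) / np.dot(g, g))
```

A spike of the gradient norm relative to the variance (a drop of this ratio) signals a change of the objective, prompting a smaller batch.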

The same figure reports the performance of Bernstein Q-PAST. It is possible to observe that Q-PAST selects smaller batches than L-PAST. This means that it performs noisy steps that lead to less overfitting w.r.t. L-PAST. In fact, when a change in the objective happens, Q-PAST enjoys lower losses than L-PAST.

## 9 Conclusions

Pure SG has proved to be effective in several applications, but it is highly time consuming since it exploits one sample for each update. We have shown that it is possible to exploit automatic techniques that adapt the batch size over time. Moreover, these techniques can be used in conjunction with any scheme for the update of the parameters. While L-PAST based on Bernstein's inequality has proved effective on the well-known MNIST task and the Reuters dataset, Q-PAST has proved more effective in the regression problem. However, the computation or estimation of the Hessian may be prohibitive in big-data applications such as deep neural networks.

Although the batch size may not play a fundamental role in supervised applications, it is a critical parameter in reinforcement learning, especially when the environment is highly stochastic (updating the estimate with one sample may be too optimistic). Future work will apply the proposed techniques to refine policy gradient approaches.

## References

• Bach and Moulines (2011) Francis R. Bach and Eric Moulines.

Non-asymptotic analysis of stochastic approximation algorithms for machine learning.

In NIPS 24, December 2011, Granada, Spain, pages 451–459, 2011.
• Bottou (2012) Léon Bottou. Neural Networks: Tricks of the Trade: Second Edition, chapter Stochastic Gradient Descent Tricks, pages 421–436. Springer Berlin Heidelberg, Berlin, Heidelberg, 2012. ISBN 978-3-642-35289-8.
• Bottou and Bousquet (2011) Léon Bottou and Olivier Bousquet. The tradeoffs of large scale learning. In Optimization for Machine Learning, pages 351–368. MIT Press, 2011.
• Bottou and LeCun (2003) Léon Bottou and Yann LeCun. Large scale online learning. In NIPS 16, December 8-13, 2003, Vancouver and Whistler, British Columbia, Canada, pages 217–224. MIT Press, 2003.
• Byrd et al. (2012) Richard H. Byrd, Gillian M. Chin, Jorge Nocedal, and Yuchen Wu. Sample size selection in optimization methods for machine learning. Math. Program., 134(1):127–155, 2012.
• Duchi et al. (2011) John C. Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12:2121–2159, 2011.
• Ferentios (1982) K. Ferentios. On Tchebycheff's type inequalities. Trabajos de Estadistica y de Investigacion Operativa, 33(1):125–132, 1982. ISSN 0041-0241. doi: 10.1007/BF02888707.
• George and Powell (2006) Abraham P. George and Warren B. Powell. Adaptive stepsizes for recursive estimation with applications in approximate dynamic programming. Machine Learning, 65(1):167–198, 2006.
• Herbert Robbins (1951) Herbert Robbins and Sutton Monro. A stochastic approximation method. The Annals of Mathematical Statistics, 22(3):400–407, 1951. ISSN 00034851.
• LeCun et al. (1998) Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998. ISSN 0018-9219. doi: 10.1109/5.726791.
• Li et al. (2014) Mu Li, Tong Zhang, Yuqiang Chen, and Alexander J Smola. Efficient mini-batch training for stochastic optimization. In Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 661–670. ACM, 2014.
• Massart (2007) Pascal Massart. Concentration inequalities and model selection, volume 6. Springer, 2007.
• Mnih et al. (2008) Volodymyr Mnih, Csaba Szepesvári, and Jean-Yves Audibert. Empirical bernstein stopping. In ICML, volume 307 of ACM International Conference Proceeding Series, pages 672–679. ACM, 2008.
• Orabona (2014) Francesco Orabona. Simultaneous model selection and optimization through parameter-free stochastic learning. In NIPS 27, December 2014, Montreal, Quebec, Canada, pages 1116–1124, 2014.
• Peters (2007) Jan Reinhard Peters. Machine learning of motor skills for robotics. PhD thesis, University of Southern California, 2007.
• Pirotta et al. (2013) Matteo Pirotta, Marcello Restelli, and Luca Bascetta. Adaptive step-size for policy gradient methods. In NIPS 26, December 2013, Lake Tahoe, Nevada, United States, pages 1394–1402, 2013.
• Roux and Fitzgibbon (2010) Nicolas Le Roux and Andrew W. Fitzgibbon. A fast natural Newton method. In ICML-10, June 2010, Haifa, Israel, pages 623–630. Omnipress, 2010.
• Saw et al. (1984) John G. Saw, Mark C. K. Yang, and Tse Chin Mo. Chebyshev inequality with estimated mean and variance. The American Statistician, 38(2):130–132, 1984. ISSN 0003-1305.
• Schaul et al. (2013) Tom Schaul, Sixin Zhang, and Yann LeCun. No more pesky learning rates. In Proceedings of the 30th International Conference on Machine Learning, ICML 2013, Atlanta, GA, USA, 16-21 June 2013, volume 28 of JMLR Proceedings, pages 343–351. JMLR.org, 2013.
• Stellato et al. (2016) Bartolomeo Stellato, Bart Van Parys, and Paul J. Goulart. Multivariate chebyshev inequality with estimated mean and variance. The American Statistician, 2016.
• Tieleman and Hinton (2012) Tijmen Tieleman and Geoffrey Hinton. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 2012.
• Tropp (2012) Joel A. Tropp. User-friendly tail bounds for sums of random matrices. Foundations of Computational Mathematics, 12(4):389–434, 2012. ISSN 1615-3375.
• Tropp (2015) Joel A. Tropp. An introduction to matrix concentration inequalities. CoRR, abs/1501.01571, 2015.
• Tsanas et al. (2010) A. Tsanas, M. A. Little, P. E. McSharry, and L. O. Ramig. Accurate telemonitoring of Parkinson's disease progression by non-invasive speech tests. IEEE Transactions on Biomedical Engineering, 57(4):884–893, April 2010. ISSN 0018-9294.
• Xu (2011) Wei Xu. Towards optimal one pass large scale learning with averaged stochastic gradient descent. CoRR, abs/1107.2490, 2011.

## 10 Appendix

### 10.1 Bounding the Expected Improvement with Global Step Size

Consider the quadratic expansion of the expected improvement:

 ˆΔJn = ∇θJ⊤ Δθn + ½ Δθn⊤ HθJ Δθn, (12)

where Δθn = η ∇θJn and η is a scalar step size. In this section we show how to bound the linear and the quadratic terms of the expansion.

### 10.2 Bounding the Linear Term of ΔJn

In the following we show how to bound the expected improvement obtained through a stochastic gradient update by means of an upper confidence bound on the estimated gradient.

Consider the generic representation of the expected and estimated gradient vectors provided in Figure 4. By fixing a confidence value δ and a concentration inequality, we can write:

 ∥∇θJ − ∇θJn∥₂ ≤ B^{∥∇∥}_n, w.p. 1−δ. (13)

From the previous inequality, noticing that the exact gradient must lie in the ball of radius B^{∥∇∥}_n around the estimated gradient, it is easy to derive the following relationships:

 ∥∇θJn∥₂ − B^{∥∇∥}_n < ∥∇θJ∥₂ < ∥∇θJn∥₂ + B^{∥∇∥}_n. (14)

Exploiting a simple trigonometric relationship (the law of cosines) together with inequalities (13)–(14), we can write, w.p. 1−δ:

 (B^{∥∇∥}_n)² > ∥∇θJ − ∇θJn∥₂² = ∥∇θJ∥₂² + ∥∇θJn∥₂² − 2 ∥∇θJ∥₂ ∥∇θJn∥₂ cos γ > (∥∇θJn∥₂ − B^{∥∇∥}_n)² + ∥∇θJn∥₂² − 2 ∥∇θJ∥₂ ∥∇θJn∥₂ cos γ,

where γ is the angle between ∇θJ and ∇θJn.

Then,

 cos γ > (∥∇θJn∥₂ − B^{∥∇∥}_n) / ∥∇θJ∥₂, w.p. 1−δ.

As a consequence, the linear term of (12) can be lower bounded as follows (w.p. 1−δ):

 ∇θJ⊤ ∇θJn = ∥∇θJ∥₂ ∥∇θJn∥₂ cos γ > (∥∇θJn∥₂ − B^{∥∇∥}_n) ∥∇θJn∥₂.

Finally, w.p. 1−δ,

 ˆΔJ^L_n = ∇θJ⊤ Δθn > Υ^L_n(η) = η (∥∇θJn∥₂ − B^{∥∇∥}_n) ∥∇θJn∥₂.
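For concreteness, the lower bound Υ^L_n(η) is straightforward to evaluate from the estimated gradient and its confidence radius; a minimal sketch (function and variable names are ours, not from the paper):

```python
import numpy as np

def linear_lower_bound(grad_est, B_n, eta):
    """Upsilon_n^L(eta) = eta * (||g_n||_2 - B_n) * ||g_n||_2:
    lower bound on the linear term of the expected improvement,
    valid w.p. 1 - delta whenever ||grad - g_n||_2 <= B_n."""
    g_norm = np.linalg.norm(grad_est)
    return eta * (g_norm - B_n) * g_norm

# Sanity check: any "true" gradient inside the confidence ball of radius
# B_n around the estimate attains at least the bound on eta * grad . g_n.
rng = np.random.default_rng(0)
g_n, B_n, eta = rng.normal(size=5), 0.3, 0.1
lb = linear_lower_bound(g_n, B_n, eta)
for _ in range(1000):
    u = rng.normal(size=5)
    grad = g_n + rng.uniform(0.0, B_n) * u / np.linalg.norm(u)
    assert eta * grad @ g_n >= lb - 1e-12
```

The loop simply verifies Cauchy–Schwarz: the worst case inside the ball is the perturbation anti-aligned with the estimate.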

### 10.3 Bounding the Quadratic Term of ΔJS

Let us consider the quadratic term in (12). Then:

 ∇θJS⊤ HθJ ∇θJS = Σ_{i,j} H^{(ij)}_θJ ∇θJ^{(i)}_S ∇θJ^{(j)}_S.

The previous equation requires knowledge of the exact Hessian at the current parametrization. However, we can only estimate such a quantity from observations. In order to derive a lower bound on the quadratic term, we assume that the following inequalities hold w.p. 1−δ:

 |H^{(ij)}_θJ − H^{(ij)}_θJS| ≤ B^{(ij)}_H, ∀ i,j,

where H^{(ij)}_θJS denotes the (i,j)-th entry of the Hessian estimated on the sample set S.

In order to properly handle the approximation error and obtain a lower bound, we just need to subtract the bound:

 ∇θJS⊤ HθJ ∇θJS > Σ_{i,j} (H^{(ij)}_θJS − B^{(ij)}_H) ∇θJ^{(i)}_S ∇θJ^{(j)}_S, w.p. 1−δ,

then, combining the gradient and Hessian bounds through a union bound, w.p. 1−2δ:

 ˆΔJ^Q_n = ∇θJ⊤ ΔθS + ½ ΔθS⊤ HθJ ΔθS > η (∥∇θJS∥₂ − B^{∥∇∥}_n) ∥∇θJS∥₂ + ½ η² Σ_{i,j} (H^{(ij)}_θJS − B^{(ij)}_H) ∇θJ^{(i)}_S ∇θJ^{(j)}_S.
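As a sketch, the combined quadratic lower bound can be evaluated from the estimated gradient and Hessian. Names are ours, and in one respect this is a hedged variant rather than the paper's exact formula: we charge the Hessian bound against |∇_i ∇_j| so the correction stays pessimistic even when the products ∇_i ∇_j change sign.

```python
import numpy as np

def quadratic_lower_bound(grad_est, hess_est, B_grad, B_hess, eta):
    """Lower bound on the quadratic expansion (12) of the expected
    improvement, combining the scalar gradient bound B_grad and the
    element-wise Hessian bound B_hess (a d x d matrix)."""
    g = np.asarray(grad_est)
    g_norm = np.linalg.norm(g)
    linear = eta * (g_norm - B_grad) * g_norm
    # Pessimistic quadratic term: each Hessian entry may be off by
    # B_hess[i, j], so subtract B_hess[i, j] * |g_i * g_j| entrywise.
    quad = g @ hess_est @ g - np.sum(B_hess * np.abs(np.outer(g, g)))
    return linear + 0.5 * eta ** 2 * quad
```

With a well-estimated, negative-definite-corrected Hessian this reduces to the Υ^Q bound above; the absolute value only matters for sign-indefinite products.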

## 11 Probabilistic Adaptive Sample Technique

In this section we derive the Probabilistic Adaptive Sample Technique (PAST) used to select both the batch size n and the step size η. First of all, we review the concentration inequalities we rely on.

We start with Hoeffding's inequality for uncentered random vectors. The following lemma is adapted from [Tropp, 2012, Theorem 1.3].

###### Lemma 1 (Vector Hoeffding).

Consider a finite sequence {x_k}_{k=1}^n of independent random vectors with common dimension d, such that ∥x_k∥₂ ≤ L for any k. Introduce the sample mean Z = (1/n) Σ_{k=1}^n x_k with population mean μ = E[Z]; then, for all t ≥ 0,

 P(∥Z − μ∥₂ ≥ t) ≤ (d+1) · e^{−nt² / (8L²)}.
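As a quick empirical sanity check of Lemma 1 (a simulation sketch of our own, with arbitrary constants), the deviation of the sample mean of bounded random vectors should exceed the threshold obtained by inverting the tail bound at confidence δ in at most a δ fraction of trials:

```python
import numpy as np

# Monte Carlo check of the vector Hoeffding bound: for n i.i.d. vectors
# with ||x_k||_2 <= L and mean mu = 0, the event ||Z - mu||_2 >= t should
# occur w.p. at most delta when t = L * sqrt(8/n * log((d+1)/delta)).
rng = np.random.default_rng(0)
d, n, L, delta = 4, 200, 1.0, 0.05
t = L * np.sqrt(8.0 / n * np.log((d + 1) / delta))

failures, trials = 0, 500
for _ in range(trials):
    # Vectors uniform in the L-ball (mean 0 by symmetry, ||x_k||_2 <= L).
    directions = rng.normal(size=(n, d))
    directions /= np.linalg.norm(directions, axis=1, keepdims=True)
    x = rng.uniform(0.0, L, size=(n, 1)) * directions
    Z = x.mean(axis=0)
    failures += int(np.linalg.norm(Z) >= t)

assert failures / trials <= delta  # empirical tail within the guarantee
```

The bound is quite loose for this light-tailed example, so the empirical failure rate is typically far below δ.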

Note that Hoeffding's inequality only requires knowledge of the support of the distribution (here, the bound L). On the contrary, Chebyshev's and Bernstein's inequalities additionally require knowledge of the distribution variance.

We then consider the vector version of Chebyshev's inequality provided in [Ferentios, 1982]. Here we report Chebyshev's inequality for the sample mean.

###### Lemma 2 (Vector Chebyshev).

Consider a finite sequence {x_k}_{k=1}^n of independent random vectors with common dimension d. Let μ be the population mean and ν(x) the vector variance statistic storing the population variance of each component. Introduce the sample mean Z = (1/n) Σ_{k=1}^n x_k; then, for all t > 0,

 P( ∥Z − μ∥₂ ≥ t ∥√ν(x)∥₂ / √n ) ≤ 1/t².
###### Lemma 3 (Vector Bernstein, adapted from [Tropp, 2015, Corollary 6.1.2]).

Consider a finite sequence {x_k}_{k=1}^n of independent random vectors with common dimension d. Let μ be the population mean and ν(x) the vector variance statistic storing the population variance of each component. Assume that each vector has uniformly bounded deviation from the mean:

 ∥x_k − μ∥₂ ≤ L, ∀ k.

Introduce the sample mean Z = (1/n) Σ_{k=1}^n x_k; then, for all t ≥ 0,

 P(∥Z − μ∥₂ ≥ t) ≤ (d+1) exp( −(nt²/2) / (∥ν(x)∥₂ + Lt/3) ).

### 11.1 Linear-PAST

Given the lower bound derived in Section 10.2, we need to solve the cost-sensitive problem in (3), reported here for the sake of clarity:

 n′ = argmax_{n̄ ∈ ℕ⁺} η (∥∇θJn∥₂ − B^{∥∇∥}_{n̄}) ∥∇θJn∥₂ / n̄,

where ∥∇θJn∥₂ is the realization of the gradient estimate computed on a sample set of dimension n (it is independent from the optimization variable n̄). It is easy to observe that, since η and ∥∇θJn∥₂ do not depend on n̄, the problem can be further reduced to maximizing (∥∇θJn∥₂ − B^{∥∇∥}_{n̄}) / n̄.

The bound B^{∥∇∥}_{n̄} has to be instantiated using one of the concentration inequalities presented above.

• Hoeffding's inequality: w.p. 1−δ,

 ∥∇θJn − ∇θJ∥₂ ≤ L √( (8/n) ln((d+1)/δ) ),

then

 n ≥ (18 L² / ∥∇θJn∥₂²) ln((d+1)/δ).
• Chebyshev's inequality: w.p. 1−δ,

 ∥∇θJn − ∇θJ∥₂ ≤ √( ∥√ν(∇θJ)∥₂² / (δn) ) = √( ∥ν(∇θJ)∥₁ / (δn) ),

then

 n ≥ 9 ∥ν(∇θJ)∥₁ / (4 δ ∥∇θJn∥₂²).
• Bernstein's inequality: w.p. 1−δ,

 ∥∇θJn − ∇θJ∥₂ ≤ √( (2 ∥ν(∇θJ)∥₂ / n) ln((d+1)/δ) ) + (2L / (3n)) ln((d+1)/δ),

then

 n ≥ ( 9b + 16a ∥∇θJn∥₂ + 3 √( 9b² + 32ab ∥∇θJn∥₂ ) ) / (8 ∥∇θJn∥₂²),

where

 a := (2/3) L ln((d+1)/δ), b := 2 ∥ν(∇θJ)∥₂ ln((d+1)/δ).
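The closed-form batch sizes above can be checked numerically: each reported n should (up to grid resolution) maximize the reduced objective (∥∇θJn∥₂ − B(n̄))/n̄ for the corresponding bound B. A sketch with illustrative constants of our own choosing:

```python
import numpy as np

# G plays the role of ||grad estimate||_2; L, nu2, d, delta are arbitrary.
G, L, nu2, d, delta = 1.0, 2.0, 0.5, 10, 0.05
log_term = np.log((d + 1) / delta)

def argmax_on_grid(bound, n_star):
    """Maximize (G - bound(n)) / n on a fine grid around n_star."""
    grid = np.linspace(0.5 * n_star, 2.0 * n_star, 100001)
    return grid[np.argmax((G - bound(grid)) / grid)]

# Hoeffding: B(n) = L * sqrt(8 * log_term / n)  ->  n* = 18 L^2 log_term / G^2
n_hoeff = 18.0 * L ** 2 * log_term / G ** 2
assert abs(argmax_on_grid(lambda n: L * np.sqrt(8.0 * log_term / n), n_hoeff)
           - n_hoeff) < 1e-2 * n_hoeff

# Bernstein: B(n) = sqrt(b / n) + a / n, with a and b as in the text.
a = 2.0 / 3.0 * L * log_term
b = 2.0 * nu2 * log_term
n_bern = (9 * b + 16 * a * G + 3 * np.sqrt(9 * b ** 2 + 32 * a * b * G)) / (8 * G ** 2)
assert abs(argmax_on_grid(lambda n: np.sqrt(b / n) + a / n, n_bern)
           - n_bern) < 1e-2 * n_bern
```

In practice the optimal value is rounded up to the next integer, since the argmax over ℕ⁺ need not be the continuous stationary point.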