    # SGDLibrary: A MATLAB library for stochastic gradient descent algorithms

We consider the problem of finding a minimizer of a function f: R^d → R of the form f(w) = (1/n) ∑_i f_i(w). This problem has been studied intensively in recent years in the machine learning research field. One typical and promising approach for large-scale data is the stochastic optimization algorithm. SGDLibrary is a flexible, extensible and efficient pure-MATLAB library of a collection of stochastic optimization algorithms. The purpose of the library is to provide researchers and implementers with a comprehensive evaluation environment for those algorithms on various machine learning problems.


## 1 Introduction

This work aims to facilitate research on stochastic optimization for large-scale data. We particularly address a regularized finite-sum minimization problem defined as

 min_{w∈R^d} f(w) := (1/n) ∑_{i=1}^n f_i(w) = (1/n) ∑_{i=1}^n L(w, x_i, y_i) + λ R(w),   (1)

where w ∈ R^d represents the model parameter and n denotes the number of samples. L(w, x_i, y_i) is the loss function and R(w) is the regularizer with the regularization parameter λ ≥ 0. Widely diverse machine learning (ML) models fall into this problem. Considering the squared loss L(w, x_i, y_i) = (w^T x_i − y_i)^2 and the regularizer R(w) = ‖w‖_2^2, this results in an ℓ2-norm regularized linear regression problem (a.k.a. ridge regression) for the n training samples (x_i, y_i). In the case of binary classification with the desired class label y_i ∈ {+1, −1} and R(w) = ‖w‖_1, an ℓ1-norm regularized logistic regression (LR) problem is obtained as f_i(w) = log(1 + exp(−y_i w^T x_i)) + λ‖w‖_1, which encourages the sparsity of the solution w. Other problems covered are matrix completion, support vector machines (SVM), and sparse principal components analysis, to name but a few.

Full gradient descent (a.k.a. steepest descent) with a step-size η is the most straightforward approach for (1): it updates w at the k-th iteration as w^{k+1} = w^k − η ∇f(w^k). However, this is expensive when n is extremely large. In fact, one needs a sum of n calculations of the inner product w^T x_i in the regression problems above, leading to O(nd) cost overall per iteration. For this issue, a popular and effective alternative is stochastic gradient descent (SGD), which updates w as w^{k+1} = w^k − η ∇f_i(w^k) for the i-th sample chosen uniformly at random [2, 3]. SGD assumes an unbiased estimator of the full gradient, i.e., E_i[∇f_i(w)] = ∇f(w). As the update rule shows, the calculation cost is independent of n, resulting in O(d) per iteration. Furthermore, mini-batch SGD calculates (1/|S|) ∑_{i∈S} ∇f_i(w), where S is the set of samples of size |S|. SGD needs a diminishing step-size algorithm to guarantee convergence, which causes a severely slow convergence rate. To accelerate this rate, there are two active research directions in ML. The first is variance reduction (VR) techniques [4, 5, 6, 7, 8], which explicitly or implicitly exploit a full gradient estimation to reduce the variance of the noisy stochastic gradient, leading to superior convergence properties. The second promising direction is to modify deterministic second-order algorithms into stochastic settings, and thereby address the weakness of first-order algorithms for ill-conditioned problems. A direct extension of quasi-Newton (QN) is known as online BFGS [9]. Its variants include a regularized version (RES) [10], a limited-memory version (oLBFGS) [9, 11], a stochastic QN (SQN) [12], an incremental QN [13], and a non-convex version. Lastly, hybrid algorithms of the SQN with VR have been proposed [14, 15]. Others include [16, 17].
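As a concrete illustration of the update rules above, a minimal mini-batch SGD step for a least-squares loss can be sketched as follows (the data, gradient function, and constants here are placeholders for illustration, not SGDLibrary's API):

```matlab
% Minimal mini-batch SGD sketch for f(w) = (1/n) * sum_i f_i(w).
% All names below are illustrative, not SGDLibrary's API.
n = 1000; d = 10;
x = randn(n, d); y = randn(n, 1);    % synthetic least-squares data
% Mini-batch stochastic gradient (1/|S|) * sum_{i in S} grad f_i(w):
grad_fi = @(w, idx) x(idx, :)' * (x(idx, :) * w - y(idx)) / numel(idx);
w = zeros(d, 1);
eta0 = 0.01; batch = 10;
for k = 1:100
    idx = randi(n, batch, 1);        % sample a mini-batch uniformly at random
    eta = eta0 / (1 + k);            % diminishing step-size to guarantee convergence
    w = w - eta * grad_fi(w, idx);   % cost per step is O(d) per sample, independent of n
end
```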

The performance of stochastic optimization algorithms is strongly influenced not only by the distribution of the data but also by the step-size algorithm. Therefore, we often encounter results that are completely different from those reported in papers in every experiment. Consequently, an evaluation framework to test and compare the algorithms at hand is crucially important for fair and comprehensive experiments. One existing tool is Lightning [18], a Python library for large-scale ML problems. However, its supported algorithms are limited, and the solvers and the problems, such as classifiers, are mutually coupled. Moreover, the implementations rely on Cython, a C-extension for Python, for efficiency. This decreases the readability of the code for users, and makes users' evaluations and extensions more complicated.

SGDLibrary is a readable, flexible and extensible pure-MATLAB library of a collection of stochastic optimization algorithms. The library is also operable on GNU Octave. Its purpose is to provide researchers and implementers with a collection of state-of-the-art stochastic optimization algorithms that solve a variety of large-scale optimization problems, such as linear/non-linear regression problems and classification problems. It also allows researchers and implementers to easily extend or add solvers and problems for further evaluation. To the best of our knowledge, no report in the literature and no library provide a comprehensive experimental environment specialized for stochastic optimization algorithms. The code is available at https://github.com/hiroyuki-kasai/SGDLibrary.

## 2 Software architecture

The software architecture of SGDLibrary follows a typical module-based architecture, which separates the problem descriptor from the optimization solver. To use the library, the user selects one problem descriptor of interest and one or more optimization solvers to be compared.

Problem descriptor: The problem descriptor, denoted as problem, specifies the problem of interest with respect to the model parameter, noted as w in the library. This is implemented by MATLAB classdef. The user does nothing other than call a problem definition function, for instance, logistic_regression() for the ℓ2-norm regularized LR problem. Each problem definition includes the functions necessary for solvers: (i) the (full) cost function f(w), (ii) the mini-batch stochastic derivative v = (1/|S|) ∑_{i∈S} ∇f_i(w) for a set of samples S, (iii) the stochastic Hessian, and (iv) the stochastic Hessian-vector product for a vector v. The built-in problems include, for example, ℓ2-norm regularized multidimensional linear regression, ℓ2-norm regularized linear SVM, ℓ2-norm regularized LR, ℓ2-norm regularized softmax classification (multinomial LR), ℓ1-norm regularized multidimensional linear regression, and ℓ1-norm regularized LR. The problem descriptor also provides additional problem-specific functions. For example, the LR problem includes the prediction and the classification accuracy calculation functions.

Optimization solver: The optimization solver implements the main routine of the stochastic optimization algorithm. Once a solver function is called with one selected problem descriptor problem as the first argument, it solves the optimization problem by calling some corresponding functions via problem, such as the cost function and the stochastic gradient calculation function. The supported optimization solvers in the library are listed in categorized groups as: (i) SGD methods: vanilla SGD [2], SGD with classical momentum, SGD with classical momentum with Nesterov's accelerated gradient [19], and AdaGrad [16], (ii) variance reduction (VR) methods: SVRG [4], SAG [5], SAGA [7], and SARAH [8], (iii) second-order methods: SQN [12], oBFGS-Inf [9, 11], oBFGS-Lim (oLBFGS) [9, 11], Reg-oBFGS-Inf (RES) [10], and Damp-oBFGS-Inf, (iv) second-order methods with VR: SVRG-LBFGS [14], SS-SVRG [15], and SVRG-SQN, and (v) others: BB-SGD and SVRG-BB. The solver function also receives optional parameters as the second argument, which forms a struct, designated as options

in the library. It contains elements such as the maximum number of epochs, the batch size, and the step-size algorithm with an initial step-size. Finally, the solver

function returns to the caller the final solution w and rich statistical information, such as a record of the cost function values, the optimality gap, the processing time, and the number of gradient calculations.

Others: SGDLibrary accommodates a user-defined step-size algorithm. This is achieved by setting options.stepsizefun=@my_stepsize_alg, which is delivered to solvers. Additionally, when the regularizer R(w) in the minimization problem (1) is a non-smooth regularizer such as the ℓ1-norm regularizer λ‖w‖_1, the solver calls the proximal operator as problem.prox(w,stepsize), which is the wrapper function defined in each problem. The ℓ1-norm regularized LR problem, for example, calls the soft-threshold function as w = soft_thresh(w, stepsize*lambda), where stepsize is the step-size and lambda is the regularization parameter λ in (1).
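The soft-threshold operator for the ℓ1-norm regularizer has a standard closed form; a minimal sketch is given below (the library's actual soft_thresh may differ in details):

```matlab
function v = soft_thresh(w, t)
% Proximal operator of t*||w||_1: shrink each entry of w toward zero by t.
    v = sign(w) .* max(abs(w) - t, 0);
end
```

A solver would then apply it after a gradient step, e.g., `w = soft_thresh(w, stepsize * lambda);`.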

## 3 Tour of the SGDLibrary

We embark on a tour of SGDLibrary, exemplifying the ℓ1-norm regularized LR problem. The LR model generates n pairs (x_i, y_i) for an unknown model parameter w, where x_i is an input d-dimensional vector and y_i ∈ {−1, 1} is its binary class label. The problem seeks the w that fits the regularized LR model to the generated data (x_i, y_i). This is cast as the minimization problem min_w (1/n) ∑_{i=1}^n log(1 + exp(−y_i w^T x_i)) + λ‖w‖_1. The code for this problem is in Listing 1.

First, we generate train/test datasets using logistic_regression_data_generator(), which produces the input feature vectors x_i and their class labels y_i. The LR problem is defined by calling logistic_regression(), which internally contains the functions for the cost value, the gradient, and the Hessian. This is stored in problem. Then, we execute the solvers, i.e., SGD and SVRG, by calling the solver functions sgd() and svrg() with problem and options after setting some options into the options struct. They return the final solutions {w_sgd, w_svrg} and the statistical information {info_sgd, info_svrg}. Finally, display_graph() visualizes the behavior of the cost function values in terms of the number of gradient evaluations. It is noteworthy that each algorithm requires a different number of evaluations of samples in each epoch; therefore, it is common to use this value to evaluate the algorithms instead of the number of iterations. Illustrative results, additionally including SQN and SVRG-LBFGS, are presented in Figure 1; they are generated by display_graph() and display_classification_result(), which is specialized for classification problems. Thus, SGDLibrary provides rich visualization tools as well.
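Listing 1 is not reproduced in this excerpt; a minimal sketch of the four steps it performs, using the function names described above, might look like the following (the data-generator arguments, the fields of the returned struct, and the exact display_graph() argument strings are assumptions):

```matlab
% Step 1: generate synthetic train/test data (argument values are illustrative).
data = logistic_regression_data_generator(300, 3);
% Step 2: define the problem descriptor (cost, gradient, Hessian).
problem = logistic_regression(data.x_train, data.y_train, data.x_test, data.y_test);
% Step 3: run two solvers with common options.
options.max_epoch = 100;
[w_sgd,  info_sgd]  = sgd(problem, options);
[w_svrg, info_svrg] = svrg(problem, options);
% Step 4: plot cost vs. number of gradient evaluations.
display_graph('grad_calc_count', 'cost', {'SGD','SVRG'}, {info_sgd, info_svrg});
```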

## Appendix A Supported stochastic optimization solvers

Table 1 summarizes the supported stochastic optimization algorithms and configurations.

## Appendix B Directory and file structure

The tree below represents the directory and file structure of SGDLibrary.

• README.md: Readme file.

• run_me_first.m: First script to run.

• demo.m: Demonstration script to check the library.

• demo_ext.m: Extended demonstration script to check the library.

• sgdlibrary_version.m: Version and release date information.

• LICENSE.txt: License file.

• problem/: Problem definitions.

• sgd_solver/: Stochastic optimization solvers.

• sgd_test/: Test scripts to use this library.

• plotter/: Tools for plotting.

• tool/: Auxiliary tools.

• gd_solver/: Gradient descent optimization solver files.

• gd_test/: Test scripts for gradient descent solvers.

## Appendix C How to use SGDLibrary

### C.1 First to do

Run run_me_first for path configurations.

>> run_me_first;
##########################################################
###                                                    ###
###                Welcome to SGDLibrary               ###
###        (version:1.0.16, released:01-April-2017)    ###
###                                                    ###
##########################################################


Now, we are ready to use the library. Just execute demo for the simplest demonstration of the library. This is the case of the ℓ2-norm regularized logistic regression problem.

>> demo;
SGD: Epoch = 000, cost = 3.3303581454622559e+01, optgap = Inf
SGD: Epoch = 001, cost = 1.1475439725295747e-01, optgap = Inf
............
SGD: Epoch = 099, cost = 7.9162101289075484e-02, optgap = Inf
SGD: Epoch = 100, cost = 7.8493133230913101e-02, optgap = Inf
Max epoch reached: max_epochr = 100
SVRG: Epoch = 000, cost = 3.3303581454622559e+01, optgap = Inf
SVRG: Epoch = 001, cost = 1.531453394302157711148737e-01, optgap = Inf
SVRG: Epoch = 002, cost = 9.330987606770735354189128e-02, optgap = Inf
............
SVRG: Epoch = 099, cost = 7.715265139028108787311311e-02, optgap = Inf
SVRG: Epoch = 100, cost = 7.715265135253832062822710e-02, optgap = Inf
Max epoch reached: max_epochr = 100


The cost function values of the two algorithms at every epoch are shown in the MATLAB command window. Additionally, the convergence plots of the cost function values are shown as in Figure 2.

### C.2 Simplest usage example: 4 steps!

The code of demo.m is shown in Listing 2.

Let us take a closer look at the code above bit by bit. The procedure has only 4 steps!

Step 1: Generate data

First, we generate datasets, including a train set and a test set, using the data generator function logistic_regression_data_generator(). The output includes the train and test sets and an initial value of the solution w.

Step 2: Define problem

The problem to be solved should be defined properly from the supported problems. logistic_regression() provides comprehensive functions for a logistic regression problem. It returns the cost value by cost(w), the gradient by grad(w), and the Hessian by hess(w) when given w. These are essential for any gradient descent algorithm.

Step 3: Perform solver

Now, you can run the optimization solvers, i.e., SGD and SVRG, by calling the solver functions sgd() and svrg() after setting some optimization options in the options struct.

They return the final solutions w and the statistical information, which include the histories of the epoch numbers, the cost function values, the norms of the gradient, the number of gradient evaluations, and so on.
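As an illustration, a typical options struct before calling a solver might look like this; only step_alg, batch_size, stepsizefun, and max_epoch appear elsewhere in this document, while step_init is an assumed field name:

```matlab
options.batch_size = 10;        % mini-batch size
options.max_epoch  = 100;       % maximum number of outer iterations (epochs)
options.step_alg   = 'decay-2'; % built-in step-size rule (see Appendix F)
options.step_init  = 0.01;      % initial step-size (assumed field name)
[w_sgd, info_sgd] = sgd(problem, options);
```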

Step 4: Show result

Finally, display_graph() plots the decreasing behavior of the cost values in terms of the number of gradient evaluations. Note that each algorithm requires a different number of evaluations of samples in each epoch. Therefore, it is common to use this number to evaluate stochastic optimization algorithms instead of the number of iterations.

## Appendix D Problem definition

This specifies the problem of interest with respect to the model parameter, noted as w in the library. It is noteworthy that the problem is defined as a class-based structure by exploiting MATLAB classdef.

The user does nothing other than calling a problem definition function.

The built-in problems in the library include

• ℓ2-norm regularized multidimensional linear regression

• ℓ2-norm regularized linear support vector machines (SVM)

• ℓ2-norm regularized logistic regression (LR)

• ℓ2-norm regularized softmax classification (multinomial LR)

• ℓ1-norm regularized multidimensional linear regression

• ℓ1-norm regularized logistic regression (LR)

Each problem definition contains the functions necessary for solvers:

• cost(w): calculates the full cost function f(w).

• grad(w,indices): calculates the mini-batch stochastic gradient (1/|S|) ∑_{i∈S} ∇f_i(w) for the set of samples S, which is noted as indices.

• hess(w,indices): calculates the stochastic Hessian for indices.

• hess_vec(w,v,indices): calculates the stochastic Hessian-vector product for a vector v and indices.

We illustrate the definition of the ℓ2-norm regularized linear regression function as an example. Listing 7 shows the entire code of the linear_regression() function.

We explain Listing 7 line by line below:

Listing 8 defines the properties (i.e., class variables) of the class.

Listing 9 defines the constructor linear_regression() of the class.

Listing 10 defines the cost calculation function cost() with respect to w.

Listing 11 defines the stochastic gradient calculation function grad() with respect to w for indices. full_grad() calculates the full gradient estimation.

Listing 12 defines the stochastic Hessian calculation function hess() with respect to w for indices. full_hess() calculates the full Hessian estimation.

Listing 13 defines the stochastic Hessian-vector product calculation function hess_vec() with respect to w and a vector v for indices.
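Since Listings 7–13 are not reproduced in this excerpt, the overall shape of such a classdef-based problem definition can be sketched as below for ℓ2-norm regularized linear regression; the class name and internals are a plain reimplementation for illustration, not the library's exact code:

```matlab
classdef linear_regression_sketch
% Hypothetical skeleton mirroring the structure described above.
% x is a d-by-n data matrix, y a 1-by-n response row vector.
    properties
        x
        y
        lambda
        n
        d
    end
    methods
        function obj = linear_regression_sketch(x, y, lambda)
            obj.x = x; obj.y = y; obj.lambda = lambda;
            [obj.d, obj.n] = size(x);
        end
        function f = cost(obj, w)
            % Full cost: (1/2n) * sum_i (w'x_i - y_i)^2 + (lambda/2)||w||^2
            f = sum((w' * obj.x - obj.y).^2) / (2 * obj.n) ...
                + 0.5 * obj.lambda * (w' * w);
        end
        function g = grad(obj, w, indices)
            % Mini-batch stochastic gradient averaged over indices.
            r = w' * obj.x(:, indices) - obj.y(indices);
            g = obj.x(:, indices) * r' / numel(indices) + obj.lambda * w;
        end
        function h = hess(obj, w, indices)
            % Stochastic Hessian (constant in w for the squared loss).
            h = obj.x(:, indices) * obj.x(:, indices)' / numel(indices) ...
                + obj.lambda * eye(obj.d);
        end
        function hv = hess_vec(obj, w, v, indices)
            % Stochastic Hessian-vector product, without forming the Hessian.
            hv = obj.x(:, indices) * (obj.x(:, indices)' * v) / numel(indices) ...
                + obj.lambda * v;
        end
    end
end
```

Note that hess_vec avoids materializing the d-by-d Hessian, which matters when d is large.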

The problem descriptor also provides some specific functions that are necessary for a particular problem. For example, the linear regression problem predicts the responses for test data based on the model parameter trained by a stochastic optimization algorithm, and the final prediction error is then calculated. Therefore, the regression problem equips the prediction and the mean squared error (MSE) calculation functions.

• prediction(w)

• mse(y_pred)

Listing 14 shows the prediction function prediction from the final solution w, and the MSE calculation function mse.

Meanwhile, in the case of the logistic regression problem, the final classification accuracies are calculated. Therefore, the LR problem equips the prediction and the prediction accuracy calculation functions.

• prediction(w)

• accuracy(y_pred)

Listing 15 illustrates the binary-class prediction function prediction with the final solution w, and prediction accuracy calculation function accuracy with the predicted classes.

The last example is the softmax regression problem case. This case also contains the prediction function prediction and the prediction accuracy calculation function accuracy as shown in Listing 16.

## Appendix E Solver definition

This implements the main routine of the stochastic optimization solver. The optimization solver solves an optimization problem by calling the corresponding functions via the problem definition, such as cost(w), grad(w,indices), and possibly hess(w,indices). The final optimal solution w and the statistical information are returned; the latter is stored in the info struct. See also Appendix H.

This section illustrates the definition of the SGD function sgd() as an example. The entire code of sgd() is shown in Listing 17.

We explain Listing 17 line by line below.

We first set the option parameters as an options struct. The solver merges the local default options local_options, the common default options returned by get_default_options(), which are used in all solvers, and the input options in_options. This is shown in Listing 18.

Next, the initial statistics data are collected by store_infos function as shown in Listing 19 before entering the main optimization routine. The statistics data are also collected at the end of every epoch, i.e., outer iteration, as Listing 20. See also Appendix H.

The code in Listing 21 shows the calculation of the step-size. If the user does not specify a user-defined step-size algorithm, the solver calls a default step-size algorithm function with the options parameter, which contains the step-size algorithm type, the initial step-size, and so on. Otherwise, the user-defined step-size algorithm is called. See also Appendix G.

Listing 22 illustrates the code for the stochastic gradient calculation, where indice_j is calculated from the number of the inner iteration, j, and the batch size options.batch_size.

Finally, the model parameter w is updated using the calculated stepsize step and the stochastic gradient grad as Listing 23.
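Putting Listings 18–23 together, the core of sgd() follows a loop of roughly the shape below. This is a simplified sketch assuming n, w, problem, and options are already set up as described above; the step-size function signature is an assumption:

```matlab
% Simplified sketch of the sgd() main routine described above.
total_iter = 0;
num_of_batches = floor(n / options.batch_size);
for epoch = 1:options.max_epoch
    for j = 1:num_of_batches
        step = options.stepsizefun(total_iter, options);   % step-size (Appendix F/G)
        start_idx = (j - 1) * options.batch_size + 1;
        indice_j = start_idx : start_idx + options.batch_size - 1;
        grad = problem.grad(w, indice_j);                  % mini-batch stochastic gradient
        w = w - step * grad;                               % parameter update
        total_iter = total_iter + 1;
    end
    % statistics (cost, gnorm, time, ...) are collected here via store_infos()
end
```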

## Appendix F Default stepsize algorithms

SGDLibrary supports four step-size algorithms, which can be switched by setting the option struct, e.g., options.step_alg='decay-2'. After the initial step-size η0 and the parameter λ are properly configured, the step-size η is determined by one of the following algorithms according to the total inner iteration number k:

• fix: η = η0.

• decay: η = η0 / (1 + η0 λ k).

• decay-2: η = η0 / (1 + k).

• decay-3: η = η0 / (λ + k).
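The four rules above can be written compactly as a function of the total inner iteration count k; the sketch below is for illustration, and the library's internal implementation may differ:

```matlab
function eta = default_stepsize(alg, eta0, lambda, k)
% Step-size eta at total inner iteration k for the four built-in rules.
    switch alg
        case 'fix'
            eta = eta0;
        case 'decay'
            eta = eta0 / (1 + eta0 * lambda * k);
        case 'decay-2'
            eta = eta0 / (1 + k);
        case 'decay-3'
            eta = eta0 / (lambda + k);
        otherwise
            error('unknown step-size algorithm: %s', alg);
    end
end
```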

SGDLibrary also accommodates a user-defined stepsize algorithm. See the next section.

## Appendix G User-defined stepsize algorithm

SGDLibrary allows the user to define a new step-size algorithm. This is done by setting options.stepsizefun = @my_stepsize_alg, which is delivered to solvers via their input arguments. An illustrative example is shown in Listing 24.

Listing 25 defines an example of a user-defined step-size algorithm named my_stepalg.

Then, the new algorithm function my_stepalg is set via the options struct for each solver, as shown in Listing 26.
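Since Listings 24–26 are not reproduced here, a hypothetical user-defined step-size function and its registration might look like the following; the exact signature expected by the solvers, and the step_init field name, are assumptions:

```matlab
% Register a user-defined step-size so solvers call it instead of the built-ins.
options.step_init   = 0.1;            % used by my_stepalg below (field name assumed)
options.stepsizefun = @my_stepalg;
[w, info] = sgd(problem, options);

function step = my_stepalg(iter, options)
% Hypothetical user-defined step-size: eta0 / sqrt(iter + 1).
    step = options.step_init / sqrt(iter + 1);
end
```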

## Appendix H Collect statistics information

The solver automatically collects statistics every epoch, i.e., outer iteration, via the store_infos() function. The collected data are stored in the infos struct, which is returned to the caller function to visualize the results. The collected information includes the following.

• iter: number of iterations

• time: elapsed time

• grad_calc_count: count of gradient calculations

• optgap: optimality gap

• cost: cost function value

• gnorm: norm of full gradient

Additionally, when a regularizer exists, its value is collected. This provides useful information, for example, the sparsity of the solution when the ℓ1-norm regularizer is used. Furthermore, the user can collect the history of the solution w in every epoch.

The code of store_infos() is illustrated in Listing 27.
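As a usage illustration, the returned histories can be inspected or plotted directly; the field names are those listed above, and the call to sgd() assumes a problem and options struct already defined:

```matlab
[w, infos] = sgd(problem, options);
fprintf('final cost: %g after %d gradient evaluations\n', ...
        infos.cost(end), infos.grad_calc_count(end));
semilogy(infos.grad_calc_count, infos.gnorm);   % full-gradient norm vs. work
xlabel('number of gradient evaluations');
ylabel('norm of full gradient');
```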

## Appendix I Visualizations

SGDLibrary provides various visualization tools, which include

• display_graph(): display various graphs such as cost function values vs. the number of gradient evaluations, and the optimality gap vs. the number of gradient evaluations.

• display_regression_result(): show regression results for regression problems.

• display_classification_result(): show classification results specialized for classification problems.

• draw_convergence_animation(): draw convergence behavior animation of solutions.

This section provides illustrative explanations for the case of the ℓ2-norm regularized logistic regression problem. The entire code is shown in Listing 28.

First, for the calculation of the optimality gap, the user needs the optimal solution w_opt beforehand. This is obtained by calling the problem.calc_solution() function of the problem definition. This function internally uses the L-BFGS solver to obtain the optimal solution, with a large maximum number of iterations and a very tight tolerance in the stopping condition. Then, the optimal cost function value f_opt is calculated from w_opt. Listing 29 shows the code.

Then, you obtain the result of the optimality gap with display_graph() as in Listing 30. The first and second arguments correspond to the values of the x-axis and y-axis, respectively, in the graph. We can change these values according to the statistics data explained in Appendix H. The third argument is the list of the algorithm names shown in the legend of the graph. The fourth argument indicates the list of the info structs to be shown.
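For instance, the optimality gap for two solvers might be plotted as below, following the argument order just described; the exact axis-name strings are assumptions based on the statistics fields of Appendix H:

```matlab
% Plot optimality gap vs. number of gradient evaluations for two solvers.
display_graph('grad_calc_count', 'optimality_gap', ...
              {'SGD', 'SVRG'}, {info_sgd, info_svrg});
```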

Additionally, in the case of the ℓ2-norm regularized logistic regression problem, the classification accuracies are calculated using the corresponding prediction function problem.prediction() and the accuracy calculation function problem.accuracy(). Then, the classification accuracies are illustrated by the display_classification_result() function. The code is shown in Listing 31.

Finally, you can also show a demonstration of the convergence animation. You need to specify additional options before executing the solvers, as in Listing 32.

Now, the animation of the convergence behavior is shown. The code is shown in Listing 33. It should be noted that draw_convergence_animation() is executable only when the dimension of the model parameter is two. The last argument of the function indicates the speed of the animation.

## Appendix J More results

We show more results for the ℓ2-norm regularized linear regression problem, the ℓ2-norm regularized softmax classification problem, and the ℓ2-norm regularized linear SVM problem in Figures 5 to 7, respectively.

## References

•  H. Kasai. SGDLibrary: A MATLAB library for stochastic gradient optimization algorithms. J. Mach. Learn. Res., 18(215):1–5, 2018.
•  H. Robbins and S. Monro. A stochastic approximation method. Ann. Math. Statistics, 22(3):400–407, 1951.
•  L. Bottou. Online learning and stochastic approximations. In D. Saad, editor, On-Line Learning in Neural Networks. Cambridge University Press, 1998.
•  R. Johnson and T. Zhang. Accelerating stochastic gradient descent using predictive variance reduction. In NIPS, 2013.
•  N. L. Roux, M. Schmidt, and F. R. Bach. A stochastic gradient method with an exponential convergence rate for finite training sets. In NIPS, 2012.
•  S. Shalev-Shwartz and T. Zhang. Stochastic dual coordinate ascent methods for regularized loss minimization. J. Mach. Learn. Res., 14:567–599, 2013.
•  A. Defazio, F. Bach, and S. Lacoste-Julien. SAGA: A fast incremental gradient method with support for non-strongly convex composite objectives. In NIPS, 2014.
•  L. M. Nguyen, J. Liu, K. Scheinberg, and M. Takac. SARAH: A novel method for machine learning problems using stochastic recursive gradient. In ICML, 2017.
•  N. N. Schraudolph, J. Yu, and S. Gunter. A stochastic quasi-Newton method for online convex optimization. In AISTATS, 2007.
•  A. Mokhtari and A. Ribeiro. RES: Regularized stochastic BFGS algorithm. IEEE Trans. on Signal Process., 62(23):6089–6104, 2014.
•  A. Mokhtari and A. Ribeiro. Global convergence of online limited memory BFGS. J. Mach. Learn. Res., 16:3151–3181, 2015.
•  R. H. Byrd, S. L. Hansen, J. Nocedal, and Y. Singer. A stochastic quasi-Newton method for large-scale optimization. SIAM J. Optim., 26(2), 2016.
•  A. Mokhtari, M. Eisen, and A. Ribeiro. An incremental quasi-Newton method with a local superlinear convergence rate. In ICASSP, 2017.
•  P. Moritz, R. Nishihara, and M. I. Jordan. A linearly-convergent stochastic L-BFGS algorithm. In AISTATS, 2016.
•  R. Kolte, M. Erdogdu, and A. Ozgur. Accelerating SVRG via second-order information. In OPT2015, 2015.
•  J. Duchi, E. Hazan, and Y. Singer. Adaptive subgradient methods for online learning and stochastic optimization. J. Mach. Learn. Res., 12:2121–2159, 2011.
•  A. Bordes, L. Bottou, and P. Gallinari. SGD-QN: Careful quasi-Newton stochastic gradient descent. J. Mach. Learn. Res., 10:1737–1754, 2009.
•  M. Blondel and F. Pedregosa. Lightning: large-scale linear classification, regression and ranking in Python, 2016.
•  I. Sutskever, J. Martens, G. Dahl, and G. Hinton. On the importance of initialization and momentum in deep learning. In ICML, 2013.
•  T. Tieleman and G. Hinton. Lecture 6.5 - RMSProp. Technical report, COURSERA: Neural Networks for Machine Learning, 2012.