SVGD: A Virtual Gradients Descent Method for Stochastic Optimization

07/09/2019 ∙ by Zheng Li, et al. ∙ Xiangtan University

Inspired by dynamic programming, we propose the Stochastic Virtual Gradient Descent (SVGD) algorithm, where the virtual gradient is defined via the computational graph and automatic differentiation. The method is computationally efficient and has low memory requirements. We also analyze the theoretical convergence properties and implementation of the algorithm. Experimental results on multiple datasets and network models show that SVGD has advantages over other stochastic optimization methods.


1 Introduction

Stochastic gradient-based optimization is widely used in many fields of science and engineering. In recent years, many scholars have compared SGD [1] with adaptive learning rate optimization methods [2, 3]. [4] shows that adaptive methods often display faster initial progress on the training set, but their performance quickly plateaus on the development/test set. Therefore, many excellent models [5, 6, 7] still use SGD for training. However, SGD is greedy for objective functions with many multi-scale local convex regions (cf. Figure 1 of [8] or Fig. 1, left), because the negative gradient may not point toward the minimum at the coarse scale. Thus, the learning rate of SGD is difficult to set and significantly affects model performance [9].

Unlike greedy methods, dynamic programming (DP) [10] converges faster by solving simple sub-problems that are decomposed from the original problem. Inspired by this, we propose the virtual gradient to construct a stochastic optimization method that combines the advantages of SGD and adaptive learning rate methods.

Consider a general objective function with the following composite form:

(1)

where , functions and each component function of is first-order differentiable.

We note that:

(2)

In addition, when we minimize and with the same iterative method, the former should converge faster because the structure of is simpler than . Based on these facts, we construct sequences and that converge to and , respectively, with equations:

(3)

Fig. 1 (right) shows the relationship between and . The sequence can be obtained by using first-order iterative methods (see Sec.5 for details):

(4)

where is the learning rate, is an operator of mapping .

Figure 1: Left: an objective function with many multi-scale local convex regions. Right: the relationship between the sequences constructed in Eqn. (3).

The difficulty in constructing the operator is how to make condition (3) hold true. Let , and let be an operator of mapping ; we give the following iterations:

(5)
(6)

Since in Eqn. (6) occupies the position of in the gradient descent method, we define as the virtual gradient of the function with respect to the variable .

For Eqn. (6), it is easy to prove that condition (3) holds when is a linear mapping. If is a nonlinear mapping, let the second derivatives of be bounded and ; then, owing to (5), (6) and the Taylor formula, the following holds true:

(7)

In this case, condition (3) holds approximately.

According to the analysis above, the sequence produced by Eqn. (5) and Eqn. (6) yields similar convergence, but is faster than minimizing the function directly with the same first-order method.

Note that the iterative method (6) is derived from the composite form (1), and since this form is generally not unique, it is inconvenient for our algorithm design. We therefore begin by introducing the computational graph. It is a directed graph in which each node indicates a variable that may be a scalar, vector, matrix, tensor, or even a variable of another type, and each edge uniquely corresponds to an operation that maps one node to another. We sometimes annotate the output node with the name of the operation applied. In particular, the computational graph corresponding to the objective function is a DAG (directed acyclic graph) [11]. For example, for the computational graph of the objective function shown in Fig. 2 (a), the corresponding composite form (1) is:

(8)
Figure 2: The nodes are associated with the leaf values, hidden values, and output value, and the edges are associated with the corresponding operations.

For a given general objective function, let correspond to a computational graph that maps the set of leaf values to the output value , where the set of hidden values is . Let . In this paper, the objective function in Eqn.(1) will be expressed as the following composite form:

(9)

where , , and

For example, Eqn.(8) can be expressed as:

where

In deep learning, the gradient of the objective function is usually calculated by the Automatic Differentiation (AD) technique [12, 9]. The following example shows how to calculate the gradient of in Eqn. (8) using the AD technique; a short TensorFlow sketch after Fig. 3 illustrates the same procedure on a generic graph.

  1. Find the Operation associated with output value and its input node , cf. Fig. 2 (b). Then, calculate the following gradients:

  2. Perform the following steps by the partial order of :

    1. Find the operation and its input nodes associated with the hidden value , cf. Fig. 2 (c). Let:

      where ’’ denotes that the quantity is treated as a constant during the calculation of gradients; this convention will not be restated later. Calculate the following gradients:

      where .

    2. Find the operation and its input nodes associated with the hidden value , cf. Fig. 2 (d). Let:

      Calculate the following gradients:

      where .

  3. Calculate the gradients of :

According to the analysis above, the computational graph of is shown in Fig. 3 (a). If is a broadcast-like operator, the computational graph of the virtual gradients is shown in Fig. 3 (b), where , and is defined by Eqn. (10).

Figure 3: are edges associated with operation and denotes .
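
To make the AD walkthrough above concrete, here is a minimal TensorFlow 1.x sketch on a small stand-in graph with two hidden nodes and one output; the shapes and operations are illustrative assumptions rather than the exact graph of Fig. 2.

    # Minimal sketch of reverse-mode AD on a small computational graph
    # (TensorFlow 1.x style). The graph below is an illustrative stand-in
    # for Fig. 2, not the paper's exact Eqn. (8).
    import tensorflow.compat.v1 as tf
    tf.disable_v2_behavior()

    x = tf.placeholder(tf.float32, shape=[None, 4], name="x")   # leaf value
    w1 = tf.Variable(tf.random_normal([4, 3]), name="w1")       # leaf value
    w2 = tf.Variable(tf.random_normal([3, 1]), name="w2")       # leaf value

    h1 = tf.nn.relu(tf.matmul(x, w1), name="h1")                # hidden value
    h2 = tf.matmul(h1, w2, name="h2")                           # hidden value
    y = tf.reduce_mean(tf.square(h2), name="y")                 # output value

    # tf.gradients traverses the DAG in reverse topological order, applying
    # each edge's bprop operation -- the same bookkeeping as steps 1-3 above.
    grads = tf.gradients(y, [w1, w2])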

According to the definition of virtual gradient, for any :

(10)

Obviously, where is an identity operator. The bprop operation is uniquely determined by .

Then, Eqn. (6) can be written as the following virtual gradient descent iteration:

(11)

We show by experiments on multiple network models and datasets that SVGD (Alg. 1) has advantages over SGD, RMSProp, and Adam in training speed and test accuracy.

In Sec. 2 we describe the operator and the SVGD algorithm for stochastic optimization. Sec. 3 introduces two methods to encapsulate SVGD, and Sec. 4 provides a theoretical analysis of convergence. Sec. 5 discusses related work, and Sec. 6 compares our method with other methods by experiments.

2 Stochastic Virtual Gradients Descent Method

In this section, we will use the accumulated squared gradient from RMSProp to construct the operator . According to Eqn. (7), Eqn. (3) holds when the mapping is linear. Based on this fact, we design the following SVGD algorithm. The functions and variables in the algorithm are given by Eqn. (9) and Eqn. (10).

Input: computational graph associated with the objective function; learning rate; minibatch size; scaling coefficient; initial parameters.
/* define operator before training */
for  do
        // Initialize gradient accumulation variable
       for  do
             if  about is linear then
                    // define
                  
            else
                    // define
                  
             end if
            
       end for
      
end for
/* update */
while  not converged do
       Sample a minibatch of examples from the training set with corresponding targets .
       for  parallel do
              // Accumulate squared gradient
            
       end for
      for  parallel do
              // apply update
            
       end for
      
end while
Algorithm 1 SVGD, our proposed algorithm for stochastic optimization. indicates the elementwise square . Good default settings for the tested machine learning problems are and . All operations are element-wise.
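
To make the update loop concrete, the following NumPy sketch gives one plausible reading of Algorithm 1. It is an illustration under stated assumptions rather than the authors' verbatim algorithm: ordinary minibatch gradients stand in for the virtual gradients of Eqn. (10), the operator is taken to be an RMSProp-style rescaling by the accumulated squared gradient, and the exact role of the scaling coefficient s is a guess.

    # Sketch of an SVGD-style update step (NumPy). Assumptions, not the
    # authors' exact method: grads stands in for the virtual gradients, and
    # the preconditioner is an RMSProp-style accumulated squared gradient.
    import numpy as np

    def svgd_sketch_step(params, grads, accum, lr=0.01, s=0.1, rho=0.9, eps=1e-8):
        for i, (p, g) in enumerate(zip(params, grads)):
            accum[i] = rho * accum[i] + (1.0 - rho) * (g * g)  # accumulate squared gradient
            p -= lr * g / (np.sqrt(s * accum[i]) + eps)        # element-wise "apply update"
        return params, accum

    # Usage: set accum = [np.zeros_like(p) for p in params] before the first
    # minibatch, then call svgd_sketch_step once per minibatch.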

SVGD works well in neural network training tasks (Figs. 9, 11, 12); it has a relatively faster convergence rate and better test accuracy than SGD, RMSProp, and Adam.

For the linear operation Conv2D [13] and matrix multiplication MatMul as follows:

there are and . Thus, SVGD also has lower memory requirements than RMSProp and Adam for deep neural networks.

For the same stochastic objective function, the learning rate at timestep t in SVGD has the following relationship with the stepsizes in SGD and RMSProp:

3 Encapsulation

In this section, we introduce two methods to generate the computational graph of the virtual gradients. We begin by assuming that the objective function is (cf. Fig. 4 (b)), the set is (cf. Fig. 4 (a)), and the function used to construct the computational graph of gradients is "gradients", cf. Fig. 4 (c).

Figure 4:

We hope to generate the computational graph of virtual gradients by using the function "gradients", Fig. 4 (c).

3.1 Extend the API libraries

As shown in Fig. 4, we begin by replacing with , where is a copy of but corresponds to a new bprop operation. Then, we call the function "gradients" to generate the computational graph of the virtual gradients.

To realize the idea above in programming, we need to extend the core libraries to customize new operations and their bprop operations. Fig. 5 shows that we need to extend three libraries in the layered architecture of TensorFlow [14].

Figure 5: Layered architecture of TensorFlow.
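
As a concrete illustration of attaching a new bprop operation in TensorFlow 1.x, one can register a custom gradient function and override the default gradient of an op type inside a graph scope, so that a later call to the "gradients" function routes through it. The Identity op and the trivial rescaling below are toy stand-ins, not the library extensions the authors actually wrote.

    # Hedged TF 1.x illustration of swapping an op's bprop operation.
    import tensorflow.compat.v1 as tf
    tf.disable_v2_behavior()

    @tf.RegisterGradient("VirtualIdentityGrad")
    def _virtual_identity_grad(op, grad):
        # A custom bprop: here one would turn the incoming gradient into the
        # "virtual gradient" for this edge; the no-op rescaling is a placeholder.
        return grad * 1.0

    g = tf.Graph()
    with g.as_default():
        x = tf.Variable(tf.ones([3]), name="x")
        with g.gradient_override_map({"Identity": "VirtualIdentityGrad"}):
            h = tf.identity(x, name="h_copy")  # a copy of h wired to the new bprop
        y = tf.reduce_sum(tf.square(h))
        grads = tf.gradients(y, [x])           # "gradients" now routes through the override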

3.2 Modify the topology of the calculation graph

According to Eqn. (10) and Fig. 3, the computational graph of the virtual gradients can be obtained by adding new nodes to the computational graph of the gradients and rerouting the inputs and outputs of the new nodes, cf. Fig. 6.

Figure 6: Subgraph views of gradients and virtual gradients. Left: the part of the computational graph of the gradients. Right: the part of the computational graph of the virtual gradients.
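
One hedged way to achieve this kind of rerouting with stock TensorFlow 1.x APIs is to re-import the gradient subgraph while remapping one of its inputs to a newly added node through the input_map argument of tf.import_graph_def; the authors' actual graph-editing implementation may differ.

    # Sketch: add a new node and reroute an input of an existing gradient
    # subgraph onto it (TF 1.x). Illustrative only.
    import tensorflow.compat.v1 as tf
    tf.disable_v2_behavior()

    # Original graph: y = sum(x^2) and its gradient dy/dx.
    g1 = tf.Graph()
    with g1.as_default():
        x = tf.placeholder(tf.float32, [3], name="x")
        y = tf.reduce_sum(tf.square(x), name="y")
        (dx,) = tf.gradients(y, [x])
    graph_def = g1.as_graph_def()

    # New graph: add a node and reroute the imported gradient subgraph so
    # that it consumes the new node instead of the original input x.
    g2 = tf.Graph()
    with g2.as_default():
        x2 = tf.placeholder(tf.float32, [3], name="x")
        new_node = tf.multiply(x2, 0.5, name="new_node")  # newly added node
        (rerouted_dx,) = tf.import_graph_def(
            graph_def,
            input_map={"x:0": new_node},                  # reroute the input
            return_elements=[dx.name],
            name="reimported")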

4 Convergence Analysis

In this section, we will analyze the theoretical convergence of Eqn. (6) under some assumptions.

Lemma. Let be a random ()-matrix and let be an i.i.d. variable from . Then

(12)
Proof.

Let be the unit vector whose i-th component is 1. Since is bilinear, we have

(13)

Since is an i.i.d. variable from , the following holds true:

Thus:

(14)

Fig. 7 empirically verifies our lemma.

Figure 7: The relationship between and its estimate. Each point corresponds to a pair of a random vector and a random matrix set.

Corollary. For defined in Lemma 4, if , then:

(15)

Theorem. Let and be second-order differentiable functions with random variables in their expressions, and set:

If each component of the Jacobian matrix is an i.i.d. variable from , then, for and , there exists a such that

Proof.

Without loss of generality, we can assume . Then, the Maclaurin series for around the point is:

Let . According to corollary 4:

Although our convergence analysis in Thm. 4 only applies under the assumption of a uniform distribution, we empirically found that SVGD often outperforms other methods in more general settings.

5 Related Work

First-order methods. For general first-order methods, the moving direction of the variables can be regarded as a function of the stochastic gradient ; textbook forms of these updates are sketched in code after this list:

  • SGD:    .

  • Momentum:[15]    Let . Then:

  • RMSProp:    Let . Then:

  • Adam:    Let . Then:
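
For reference, the textbook forms of these updates, written as functions of the stochastic gradient g_t, are sketched below in NumPy; these are the standard definitions with the usual hyperparameter names, not the paper's own notation.

    # Standard first-order update rules as functions of the stochastic gradient g.
    import numpy as np

    def sgd(theta, g, lr):
        return theta - lr * g

    def momentum(theta, v, g, lr, mu=0.9):
        v = mu * v - lr * g                      # velocity update
        return theta + v, v

    def rmsprop(theta, r, g, lr, rho=0.9, eps=1e-8):
        r = rho * r + (1 - rho) * g * g          # accumulated squared gradient
        return theta - lr * g / (np.sqrt(r) + eps), r

    def adam(theta, m, v, g, lr, t, beta1=0.9, beta2=0.999, eps=1e-8):
        m = beta1 * m + (1 - beta1) * g          # first moment
        v = beta2 * v + (1 - beta2) * g * g      # second moment
        m_hat = m / (1 - beta1 ** t)             # bias correction, t >= 1
        v_hat = v / (1 - beta2 ** t)
        return theta - lr * m_hat / (np.sqrt(v_hat) + eps), m, v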

However, in the SVGD method, cannot be written as a function of . Thus, SVGD is not essentially a first-order method.

Global minimum. A central challenge of non-convex optimization is avoiding sub-optimal local minima. Although it has been shown that the variable can sometimes converge to a neighborhood of the global minimum by adding noise [16, 17, 18, 19, 20], the convergence rate is still a problem. Note that the DP method has some probability of escaping "appropriately shallow" local minima because the moving direction of the variable is generated by solving several sub-problems instead of the original problem. We use the computational graph and automatic differentiation to generate the sub-problems in DP, as we did in the SVGD method.

6 Experiments

In this section, we evaluate our method on two benchmark datasets using several different neural network architectures. We train the neural networks using RMSProp, Adam, SGD, and SVGD to minimize the cross-entropy objective function, with weight decay on the parameters to prevent over-fitting. To be fair, each method minimizes a given objective function with its own tuned learning rates. All extension libraries, algorithms, and experimental logs in this paper can be found at https://github.com/LizhengMathAi/svgd.

The following experiments show that SVGD has a relatively faster convergence rate and better test accuracy than SGD, RMSProp, and Adam.

6.1 Multi-layer neural network

In our first set of experiments, we train a 5-layer neural network (Fig. 8) on the MNIST [21] handwritten digit classification task.

Figure 8: MLP architecture for MNIST with 5 parameter layers (245482 params).

The model is trained with a mini-batch size of 32 and a weight decay of . In Table 1, we decay the learning rate at 1.6k and 3.6k iterations and summarize the optimal learning rates for RMSProp, Adam, SGD, and SVGD obtained from hundreds of experiments.

                     RMSProp    Adam       SGD     SVGD (s=0.1)
iter < 1.6k          0.001      0.001      0.1     0.01
1.6k ≤ iter < 3.6k   0.0005     0.00005    0.05    0.005
iter ≥ 3.6k          0.00005    0.00005    0.01    0.001
test top-1 error     1.80%      1.94%      1.76%   1.60%
Table 1: The test error and learning rates in the MLP experiments.

In Table 1 and Fig. 9 we compare the error rates and their descent process on the MNIST test set, respectively.

Figure 9: Comparison of first-order methods on MNIST digit classification for 3.75 epochs.

6.2 Convolutional neural network

We train a VGG model (Fig. 10) on the CIFAR-10 [22] classification task, following the simple data augmentation in [23, 24] for training and evaluating on the original images for testing.

Figure 10: VGG model architecture for CIFAR-10 with 6 parameter layers (46126 params).

The model is trained with a mini-batch size of 128 and a weight decay of . In Table 2, we decay the learning rate at 12k and 24k iterations and summarize the optimal learning rates for RMSProp, Adam, SGD, and SVGD obtained from hundreds of experiments.

                     RMSProp    Adam      SGD      SVGD (s=0.001)
iter < 12k           0.02       0.02      2.0      2.0
12k ≤ iter < 24k     0.01       0.01      0.5      0.5
iter ≥ 24k           0.002      0.005     0.005    0.005
test top-1 error     17.78%     18.02%    17.32%   17.07%
Table 2: The test error and learning rates in the VGG experiment.

In Table 2 and Fig. 11 we compare the error rates and their descent process on the CIFAR-10 test set, respectively.

Figure 11: Comparison of first-order methods on CIFAR-10 dataset for 90 epochs.

6.3 Deep neural network

We use the same hyperparameters as [24] to train a ResNet-20 model (0.27M params) on the CIFAR-10 classification task. In Table 3, we decay the learning rate at 12k and 24k iterations and summarize the optimal learning rates for RMSProp, Adam, SGD, and SVGD obtained from hundreds of experiments.

                     RMSProp    Adam      SGD      SVGD (s=0.01)
iter < 12k           0.001      0.001     0.1      0.5
12k ≤ iter < 24k     0.0001     0.0001    0.01     0.02
iter ≥ 24k           0.0001     0.00005   0.001    0.01
test top-1 error     11.18%     11.12%    10.69%   8.62%
Table 3: The test error and learning rates in the ResNet experiments.

In Table 3 and Fig. 12 we compare the error rates and their descent process on the CIFAR-10 test set, respectively. The top-1 error fluctuations in experiments do not exceed 1%. See [25] for more information on the CIFAR-10 experimental record.

Figure 12: Comparison of first-order methods on CIFAR-10 dataset for 125 epochs.

References