Stochastic Recursive Variance Reduction for Efficient Smooth Non-Convex Compositional Optimization

12/31/2019 · Huizhuo Yuan et al. · Peking University

Stochastic compositional optimization arises in many important machine learning tasks such as value function evaluation in reinforcement learning and portfolio management. The objective function is the composition of two expectations of stochastic functions, and is more challenging to optimize than vanilla stochastic optimization problems. In this paper, we investigate the stochastic compositional optimization in the general smooth non-convex setting. We employ a recently developed idea of Stochastic Recursive Gradient Descent to design a novel algorithm named SARAH-Compositional, and prove a sharp Incremental First-order Oracle (IFO) complexity upper bound for stochastic compositional optimization: O((n+m)^1/2ε^-2) in the finite-sum case and O(ε^-3) in the online case. Such a complexity is known to be the best one among IFO complexity results for non-convex stochastic compositional optimization, and is believed to be optimal. Our experiments validate the theoretical performance of our algorithm.


1 Introduction

We consider the general smooth, non-convex compositional optimization problem of minimizing the composition of two expectations of stochastic functions:

    \min_{x \in \mathbb{R}^d} \; \Phi(x) := f(g(x)) = \mathbb{E}_j \big[ f_j \big( \mathbb{E}_i [ g_i(x) ] \big) \big],    (1)

where the outer and inner functions are defined as f(y) := E_j[f_j(y)] and g(x) := E_i[g_i(x)], the indices i and j

are random variables, and each component

f_j : R^l → R and g_i : R^d → R^l is smooth but not necessarily convex. Compositional optimization can be used to formulate many important machine learning problems, e.g. reinforcement learning (Sutton and Barto, 1998), risk management (Dentcheva et al., 2017), multi-stage stochastic programming (Shapiro et al., 2009), deep neural nets (Yang et al., 2018), etc. We list two specific application instances that can be written in the stochastic compositional form of (1):

Risk management problem, which is formulated as

    \min_{x \in \mathbb{R}^N} \; -\frac{1}{n} \sum_{t=1}^{n} \langle r_t, x \rangle + \frac{1}{n} \sum_{t=1}^{n} \Big( \langle r_t, x \rangle - \frac{1}{n} \sum_{s=1}^{n} \langle r_s, x \rangle \Big)^2,    (2)

where r_t ∈ R^N denotes the returns of the N assets at time t, and x ∈ R^N denotes the investment quantity corresponding to the N assets. The goal is to maximize the return while controlling the variance of the portfolio. (2) can be written as a compositional optimization problem with the two functions

    g_t(x) = \big( x^\top, \; \langle r_t, x \rangle \big)^\top,    (3)
    f_j(y) = -\langle r_j, y_{1:N} \rangle + \big( \langle r_j, y_{1:N} \rangle - y_{N+1} \big)^2,    (4)

where y_{1:N} denotes the (column) subvector of the first N coordinates of y, and y_{N+1} denotes its (N+1)-th coordinate.

Value function evaluation in reinforcement learning, where the objective function of interest is

    \min_{V^\pi} \; \sum_{s} \Big( V^\pi(s) - \sum_{s'} P^\pi_{s,s'} \big( r_{s,s'} + \gamma V^\pi(s') \big) \Big)^2,    (5)

where s and s' are two plausible states, r_{s,s'} denotes the reward to move from s to s', and V^π(s) is the value function on state s corresponding to policy π.

Compared with traditional optimization problems that allow direct access to stochastic gradients, problem (1) is more difficult to solve, and existing algorithms for (1) are often more computationally challenging. This is mainly due to the nonlinear structure of the composition with respect to the random index pair (i, j). Treating the objective function as an expectation

E_j[f_j(g(x))], computing a stochastic estimate of the gradient at each iterate involves recalculating the inner value

g(x) = E_i[g_i(x)], which is either time-consuming or impractical. To tackle this weakness in practice, Wang et al. (2017a) first introduced a two-time-scale algorithm called Stochastic Compositional Gradient Descent (SCGD), along with its (in Nesterov's sense) accelerated variant (Acc-SCGD), and provided a first convergence rate analysis for this problem. Subsequently, Wang et al. (2017b) proposed the accelerated stochastic compositional proximal gradient algorithm (ASC-PG), which improves upon the upper-bound complexities of Wang et al. (2017a). Furthermore, variance-reduced gradient methods designed specifically for compositional optimization in the non-convex setting arose in Liu et al. (2017) and were later generalized to the non-smooth setting (Huo et al., 2018). These approaches construct variance-reduced estimators of g(x), of its Jacobian ∂g(x), and of the gradient ∇Φ(x), respectively. Such success signals the necessity and possibility of designing a special algorithm for non-convex objectives with better convergence rates.

Algorithm | Finite-sum | Online
SCGD (Wang et al., 2017a) | unknown | O(ε^{-8})
Acc-SCGD (Wang et al., 2017a) | unknown | O(ε^{-7})
ASC-PG (Wang et al., 2017b) | unknown | O(ε^{-4.5})
SCVR / SC-SCSG (Liu et al., 2017) | O((n+m)^{4/5} ε^{-2}) | O(ε^{-3.6})
VRSC-PG (Huo et al., 2018) | O((n+m)^{2/3} ε^{-2}) | unknown
SARAH-Compositional (this work) | O((n+m)^{1/2} ε^{-2}) | O(ε^{-3})
Table 1: Comparison of IFO complexities of different algorithms for the general non-convex problem.

In this paper, we propose an efficient algorithm called SARAH-Compositional for the stochastic compositional optimization problem (1). For notational simplicity, we write [n] = {1, ..., n}, [m] = {1, ..., m}, and let the index pair (i, j) be uniformly distributed over the product set [n] × [m], i.e.

    \Phi(x) = f(g(x)) = \frac{1}{m} \sum_{j=1}^{m} f_j \Big( \frac{1}{n} \sum_{i=1}^{n} g_i(x) \Big).    (6)

We use the same notation for the online case, in which case either n or m can be infinite.

A fundamental theoretical question for stochastic compositional optimization is the Incremental First-order Oracle (IFO) complexity bound (the number of individual gradient and function evaluations; see Definition 1 in §2 for a precise definition). Our new SARAH-Compositional algorithm is developed by integrating the iteration of stochastic recursive gradient descent (Nguyen et al., 2017), shortened as SARAH,111This is also referred to as the stochastic recursive variance reduction method, the incremental variance reduction method or SPIDER-BOOST in the recent literature. We stick to naming the algorithm after SARAH to credit, to the best of our knowledge, its earliest discovery. with the stochastic compositional optimization formulation (Wang et al., 2017a). The motivation of this approach is that SARAH with a specific choice of stepsizes is known to be optimal in (non-compositional) stochastic optimization and is regarded as a cutting-edge variance reduction technique, with significantly lower oracle complexities than earlier variance reduction methods (Fang et al., 2018). We prove that SARAH-Compositional reaches an IFO computational complexity of O((n+m)^{1/2} ε^{-2}) in the finite-sum case and O(ε^{-3}) in the online case, improving the best known results in non-convex compositional optimization. See Table 1 for a detailed comparison.

Related Work

Classical first-order methods such as gradient descent (GD), accelerated gradient descent (AGD) and stochastic gradient descent (SGD) have received intensive attention in both convex and non-convex optimization

(Nesterov, 2004; Ghadimi and Lan, 2016; Li and Lin, 2015). When the objective can be written in a finite-sum or online/expectation structure, variance-reduced gradient (a.k.a. variance reduction) techniques including SAG (Schmidt et al., 2017), SVRG (Xiao and Zhang, 2014; Allen-Zhu and Hazan, 2016; Reddi et al., 2016), SDCA (Shalev-Shwartz and Zhang, 2013, 2014), SAGA (Defazio et al., 2014), SCSG (Lei et al., 2017), SARAH/SPIDER (Nguyen et al., 2017; Fang et al., 2018; Wang et al., 2018), etc., can be employed to improve the theoretical convergence properties of classical first-order algorithms. Notably, Fang et al. (2018) recently proposed the SPIDER-SFO algorithm, which non-trivially hybridizes the iteration of stochastic recursive gradient descent (SARAH) (Nguyen et al., 2017) with normalized gradient descent (NGD). In the representative case of batch size 1, SPIDER-SFO adopts a small step-length that is proportional to the squared targeted accuracy ε², and (by rebooting the SPIDER tracking iteration once every fixed number of iterates) the variance of the stochastic estimator can be kept controlled at O(ε²). For the purpose of finding ε-accurate solutions, Wang et al. (2018) rediscovered a variant of the SARAH algorithm that achieves the same complexity as SPIDER-SFO (Fang et al., 2018) (their algorithm goes under the name SPIDER-BOOST, since it can be seen as SPIDER-SFO with relaxed step-length restrictions). The theoretical convergence property of SARAH/SPIDER methods in the smooth non-convex case outperforms that of SVRG, and is provably optimal under a set of mild assumptions (Fang et al., 2018; Wang et al., 2018).

It turns out that when solving the compositional optimization problem (1), first-order methods for optimizing a single objective function are either not applicable or require at least O(n) queries per iteration to evaluate the inner function g(x). To remedy this issue, Wang et al. (2017a, b) considered the stochastic setting and proposed the SCGD algorithm, which estimates the inner finite-sum more efficiently and achieves a polynomial rate that is independent of n. Later on, Lian et al. (2017); Liu et al. (2017); Huo et al. (2018) and Lin et al. (2018) merged the SVRG method into the compositional optimization framework to perform variance reduction on all three steps of the estimation. In stark contrast, our work adopts the SARAH/SPIDER method, which is theoretically more efficient than the SVRG method in the non-convex compositional optimization setting.

After the initial submission of the short version of this technical report, we became aware of a line of concurrent works by Zhang and Xiao (Zhang and Xiao, 2019b, a), who adapted the idea of SPIDER (Fang et al., 2018) to solve the stochastic compositional problem. More relevant to this work is Zhang and Xiao (2019a), which considers a special non-smooth setting of the compositional optimization problem where the objective function has an additive non-smooth term that admits an easy proximal mapping.222Such a setting has also been studied in Wang et al. (2017b); Lin et al. (2018); Huo et al. (2018), among many others. Due to the lack of space, we content ourselves with the smooth case and leave the analysis of the aforementioned non-smooth case to future work. For a fair comparison, the IFO complexity upper bound obtained in Zhang and Xiao (2019a) is similar to ours (Theorems 3 and 4), yet with significant differences: (i) Zhang and Xiao (2019a) inherits the step-length restriction that SPIDER has in nature, while our work circumvents this issue and is hence applicable to a wider range of statistical learning tasks; (ii) Zhang and Xiao (2019a) fixes the batch size, while our work theoretically optimizes the choice of batch sizes (Corollary 5 and its context) and further halves the IFO upper bound in its asymptotics, which can serve as parameter-tuning guidance to practitioners.

Contributions This work makes two contributions as follows. First, we propose a new algorithm for stochastic compositional optimization called SARAH-Compositional, which uses SARAH/SPIDER-type recursive variance reduction to estimate the relevant quantities. Second, we conduct theoretical analysis for both the online and finite-sum cases, which verifies the superiority of SARAH-Compositional over the best previously known results. In the finite-sum case, we obtain a complexity of O((n+m)^{1/2} ε^{-2}), which improves over the best known complexity O((n+m)^{2/3} ε^{-2}) achieved by Huo et al. (2018). In the online case we obtain a complexity of O(ε^{-3}), which improves over the best known complexity O(ε^{-3.6}) obtained in Liu et al. (2017).

Notational Conventions Throughout the paper, we treat the smoothness and boundedness parameters introduced in §2 as global constants. Let ‖·‖ denote the Euclidean norm of a vector or the operator norm of a matrix induced by the Euclidean norm, and let ‖·‖_F denote the Frobenius norm. For fixed t, let x_{0:t} denote the sequence {x_0, ..., x_t}. Let E_t[·] denote the conditional expectation E[· | x_{0:t}]. Let A denote a multi-set of samples (a generic set that permits repeated instances) and |A| its cardinality. The averaged sub-sampled stochastic estimator is denoted as g_A(x) := (1/|A|) Σ_{i∈A} g_i(x), where the summation counts repeated instances. We denote a_n = O(b_n) if there exists some constant C > 0 such that a_n ≤ C b_n as n becomes large. Other notations are explained at their first appearances.

Organization The rest of our paper is organized as follows. §2 formally poses our algorithm and assumptions. §3 presents the convergence rate theorems, and §4 presents numerical experiments that apply our algorithm to portfolio management, value function evaluation in reinforcement learning, and stochastic neighborhood embedding. We conclude our paper in §5. Proofs of the convergence results for the finite-sum and online cases and auxiliary lemmas are deferred to §6 and §7 in the supplementary material.

2 SARAH for Stochastic Compositional Optimization

Recall that our goal is to solve the compositional optimization problem (1), i.e. to minimize Φ(x) = f(g(x)), where in the finite-sum case

    g(x) = \frac{1}{n} \sum_{i=1}^{n} g_i(x), \qquad f(y) = \frac{1}{m} \sum_{j=1}^{m} f_j(y).

Here for each i ∈ [n] and j ∈ [m], the component functions are g_i : R^d → R^l and f_j : R^l → R. We can formally take the derivative of the function Φ

and obtain (via the chain rule) the gradient descent iteration

    x_{t+1} = x_t - \eta \, [\partial g(x_t)]^\top \nabla f(g(x_t)),    (7)

where the operator ∂ computes the Jacobian matrix of a smooth mapping, and the gradient operator ∇ is only taken with respect to the first-level variable. As discussed in §1, it can be either impossible (online case) or time-consuming (finite-sum case) to compute the terms g(x_t), ∂g(x_t) and ∇f(g(x_t)) in the iteration scheme (7) exactly. In this paper, we design a novel algorithm (SARAH-Compositional) based on the Stochastic Compositional Variance Reduced Gradient method (see Lin et al. (2018)), hybridized with the stochastic recursive gradient method of Nguyen et al. (2017). As the reader will see later, SARAH-Compositional is more efficient than all existing algorithms for non-convex compositional optimization.
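To make the chain-rule iteration (7) concrete, here is a minimal NumPy sketch (our own illustration; the component functions g_i, their Jacobians and the gradients of f_j are assumed to be supplied as Python callables). It forms the exact compositional gradient [∂g(x)]^⊤ ∇f(g(x)); it is precisely this quantity that is expensive to evaluate and that Algorithm 1 below estimates recursively.

    import numpy as np

    def full_gradient(x, g_list, Jg_list, grad_f_list):
        """Exact gradient of Phi(x) = f(g(x)) via the chain rule, iteration (7).

        g_list[i](x)      -> g_i(x), a vector in R^l
        Jg_list[i](x)     -> the Jacobian of g_i at x, an (l x d) matrix
        grad_f_list[j](y) -> the gradient of f_j at y, a vector in R^l
        """
        g_x = np.mean([g(x) for g in g_list], axis=0)               # g(x) = (1/n) sum_i g_i(x)
        J_x = np.mean([J(x) for J in Jg_list], axis=0)              # dg(x), an (l x d) Jacobian
        grad_f = np.mean([gf(g_x) for gf in grad_f_list], axis=0)   # grad f(g(x))
        return J_x.T @ grad_f                                       # [dg(x)]^T grad f(g(x))

    def gradient_descent(x0, g_list, Jg_list, grad_f_list, eta=0.01, T=100):
        # Plain (full) compositional gradient descent, iteration (7);
        # each step costs O(n + m) IFO calls, which motivates the recursive estimators below.
        x = x0.copy()
        for _ in range(T):
            x = x - eta * full_gradient(x, g_list, Jg_list, grad_f_list)
        return x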

We introduce some definitions and assumptions. First, we assume the algorithm has access to an Incremental First-order Oracle in our black-box environment (Lin et al., 2018); see also (Agarwal and Bottou, 2015; Woodworth and Srebro, 2016) for the vanilla (non-compositional) optimization case:

Definition 1 (IFO).

(Lin et al., 2018) The Incremental First-order Oracle (IFO) returns, when some x ∈ R^d and i ∈ [n] are inputted, the vector-matrix pair (g_i(x), ∂g_i(x)); or, when some y ∈ R^l and j ∈ [m] are inputted, the scalar-vector pair (f_j(y), ∇f_j(y)).
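In code, the IFO of Definition 1 can be viewed as the following minimal interface (a sketch of our own for illustration, not part of any existing library); the IFO complexity of an algorithm is simply the number of calls made to its two methods.

    import numpy as np

    class IFO:
        """Incremental First-order Oracle for Phi(x) = f(g(x)) (Definition 1)."""

        def __init__(self, g_list, Jg_list, f_list, grad_f_list):
            self.g, self.Jg = g_list, Jg_list          # inner components g_i and their Jacobians
            self.f, self.grad_f = f_list, grad_f_list  # outer components f_j and their gradients
            self.calls = 0                             # IFO complexity counter

        def inner(self, x, i):
            # Given (x, i), return the vector-matrix pair (g_i(x), dg_i(x)).
            self.calls += 1
            return self.g[i](x), self.Jg[i](x)

        def outer(self, y, j):
            # Given (y, j), return the scalar-vector pair (f_j(y), grad f_j(y)).
            self.calls += 1
            return self.f[j](y), self.grad_f[j](y)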

Second, our goal in this work is to find an ε-accurate solution, defined as follows:

Definition 2 (ε-accurate solution).

We call x̃ an ε-accurate solution to problem (1) if

    \mathbb{E} \, \| \nabla \Phi(\tilde{x}) \| \le \varepsilon.    (8)

It is worth remarking here that the inequality (8) can be modified to E‖∇Φ(x̃)‖ ≤ Cε for some global constant C > 0 without affecting the magnitude of the IFO complexity bounds.

Let us first make some assumptions regarding each component of the (compositional) objective function. Analogous to Assumption 1(i) of Fang et al. (2018), we make the following finite-gap assumption:

Assumption 1 (Finite gap).

We assume that the algorithm is initialized at x_0 with

    \Delta := \Phi(x_0) - \Phi^* < \infty,    (9)

where Φ* denotes the global minimum value of Φ.

We make the following smoothness and boundedness assumptions, which are standard in the recent compositional optimization literature (e.g. Lian et al. (2017); Huo et al. (2018); Lin et al. (2018)).

Assumption 2 (Smoothness).

There exist Lipschitz constants L_g and L_f such that for all i ∈ [n], j ∈ [m], x_1, x_2 ∈ R^d and y_1, y_2 ∈ R^l, we have

    \| \partial g_i(x_1) - \partial g_i(x_2) \|_F \le L_g \| x_1 - x_2 \|, \qquad \| \nabla f_j(y_1) - \nabla f_j(y_2) \| \le L_f \| y_1 - y_2 \|.    (10)

Here, for the purpose of using stochastic recursive estimation of ∂g(x), we slightly strengthen the smoothness assumption by adopting the Frobenius norm on the left-hand side of the first inequality in (10).

Assumption 3 (Boundedness).

There exist boundedness constants M_g and M_f such that for all i ∈ [n], j ∈ [m], x ∈ R^d and y ∈ R^l, we have

    \| \partial g_i(x) \| \le M_g, \qquad \| \nabla f_j(y) \| \le M_f.    (11)

Notice that applying the mean-value theorem for vector-valued functions to (11) gives another Lipschitz condition

    \| g_i(x_1) - g_i(x_2) \| \le M_g \| x_1 - x_2 \|,    (12)

and analogously ‖f_j(y_1) - f_j(y_2)‖ ≤ M_f ‖y_1 - y_2‖. It turns out that under the above two assumptions, a Lipschitz constant L for the gradient ∇Φ of the composite objective can be expressed as a polynomial of L_f, L_g, M_f and M_g. For clarity, in the rest of this paper we adopt the following typical choice of L:

    L := M_g^2 L_f + M_f L_g,    (13)

whose applicability can be verified via a simple application of the chain rule; the short calculation is displayed right after this paragraph. We integrate both the finite-sum and online cases into one algorithm, SARAH-Compositional, and present it in Algorithm 1.
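For completeness, the chain-rule calculation behind (13) reads as follows (a short sketch using the triangle inequality together with Assumptions 2 and 3 and the Lipschitz property (12), with the constants labeled as above):

    \|\nabla \Phi(x_1) - \nabla \Phi(x_2)\|
      = \big\| [\partial g(x_1)]^\top \nabla f(g(x_1)) - [\partial g(x_2)]^\top \nabla f(g(x_2)) \big\|
      \le \|\partial g(x_1)\| \, \|\nabla f(g(x_1)) - \nabla f(g(x_2))\| + \|\partial g(x_1) - \partial g(x_2)\| \, \|\nabla f(g(x_2))\|
      \le M_g \cdot L_f \|g(x_1) - g(x_2)\| + L_g \|x_1 - x_2\| \cdot M_f
      \le \big( M_g^2 L_f + M_f L_g \big) \, \|x_1 - x_2\|,

where the last step uses (12).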

  Input: x_0, stepsize η > 0, epoch length q, large batch sizes S_1, S_2, S_3, mini-batch sizes B_1, B_2, B_3, number of iterations T
  for t = 0 to T - 1 do
     if mod(t, q) = 0 then
        Draw S_1 samples and let g_t = g_{S_1}(x_t) (resp. g_t = g(x_t) in finite-sum case)
        Draw S_2 samples and let G_t = ∂g_{S_2}(x_t) (resp. G_t = ∂g(x_t) in finite-sum case)
        Draw S_3 samples and let F_t = ∇f_{S_3}(g_t) (resp. F_t = ∇f(g_t) in finite-sum case)
     else
        Draw B_1 samples and let g_t = g_{B_1}(x_t) - g_{B_1}(x_{t-1}) + g_{t-1}
        Draw B_2 samples and let G_t = ∂g_{B_2}(x_t) - ∂g_{B_2}(x_{t-1}) + G_{t-1}
        Draw B_3 samples and let F_t = ∇f_{B_3}(g_t) - ∇f_{B_3}(g_{t-1}) + F_{t-1}
     end if
     Update x_{t+1} = x_t - η G_t^⊤ F_t
  end for
  return Output x̃ chosen uniformly at random from {x_t}_{t=0}^{T-1}
Algorithm 1 SARAH-Compositional, Online Case (resp. Finite-Sum Case)
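For concreteness, the following self-contained NumPy sketch mirrors the finite-sum version of Algorithm 1 (a minimal illustration of our own, not the code used in §4; the component functions, their Jacobians and the outer gradients are assumed to be supplied as Python callables, and the restart step simply uses the full sums):

    import numpy as np

    def sarah_compositional(x0, g_list, Jg_list, grad_f_list, eta, q, b1, b2, b3, T, rng=None):
        """Minimal finite-sum SARAH-Compositional sketch (Algorithm 1)."""
        rng = np.random.default_rng() if rng is None else rng
        n, m = len(g_list), len(grad_f_list)
        x_prev, x = None, x0.copy()
        iterates = [x0.copy()]
        for t in range(T):
            if t % q == 0:
                # Restart step: exact (full-batch) estimators of g(x), dg(x), grad f(g(x)).
                g_est = np.mean([g(x) for g in g_list], axis=0)
                J_est = np.mean([J(x) for J in Jg_list], axis=0)
                F_est = np.mean([gf(g_est) for gf in grad_f_list], axis=0)
            else:
                # Recursive (SARAH-type) mini-batch updates of the three estimators.
                g_prev_est = g_est
                i1 = rng.integers(n, size=b1)
                g_est = g_est + np.mean([g_list[i](x) - g_list[i](x_prev) for i in i1], axis=0)
                i2 = rng.integers(n, size=b2)
                J_est = J_est + np.mean([Jg_list[i](x) - Jg_list[i](x_prev) for i in i2], axis=0)
                j3 = rng.integers(m, size=b3)
                F_est = F_est + np.mean(
                    [grad_f_list[j](g_est) - grad_f_list[j](g_prev_est) for j in j3], axis=0)
            # Gradient step with the compositional estimator J_est^T F_est of grad Phi(x).
            x_prev, x = x, x - eta * J_est.T @ F_est
            iterates.append(x.copy())
        # Output an iterate chosen uniformly at random from x_0, ..., x_{T-1}, as in Algorithm 1.
        return iterates[rng.integers(T)]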

3 Convergence Rate Analysis

In this section, we show that our proposed SARAH-Compositional algorithm provides IFO complexities of O((n+m)^{1/2} ε^{-2}) in the finite-sum case and O(ε^{-3}) in the online case, which improve upon the concurrent and comparable algorithms (see Table 1 for more).

Let us first analyze convergence in the finite-sum case. In this case we have g(x) = (1/n) Σ_{i=1}^{n} g_i(x), f(y) = (1/m) Σ_{j=1}^{m} f_j(y), and Φ(x) = f(g(x)). A careful analysis leads us to conclude the following theorem.

Theorem 3 (Finite-sum case).

Suppose Assumptions 1, 2 and 3 in §2 hold. Let L be as in (13) and Δ as in (9). For any mini-batch sizes B_1, B_2, B_3, choose the remaining parameters of Algorithm 1 according to

(14)

and set the stepsize

(15)

Then for the finite-sum case, the SARAH-Compositional Algorithm 1 outputs an x̃ satisfying E‖∇Φ(x̃)‖ ≤ ε in

(16)

iterates. Furthermore, let the mini-batch sizes B_1, B_2, B_3 satisfy

(17)

then the IFO complexity to achieve an ε-accurate solution is bounded by

(18)

As in Fang et al. (2018), for a wide range of mini-batch sizes the IFO complexity to achieve an ε-accurate solution is upper bounded by O((n+m)^{1/2} ε^{-2}), as long as (17) holds.333Here and below, the smoothness and boundedness parameters L_f, L_g, M_f and M_g are treated as constants. Note that if the batch sizes are chosen as B_1 = B_2 = B_3 = 1, then from (18) the IFO complexity upper bound is

(19)

Let us now analyze convergence in the online case, where we sample large mini-batches of the relevant quantities, instead of computing the exact (ground-truth) quantities, once every q iterates. To characterize the estimation error, we impose one additional finite-variance assumption:

Assumption 4 (Finite Variance).

We assume that there exist σ_g, σ_{g'} and σ_f serving as upper bounds on the variances of g_i(x), ∂g_i(x) and ∇f_j(y), respectively, such that for all x ∈ R^d and y ∈ R^l,

    \mathbb{E} \| g_i(x) - g(x) \|^2 \le \sigma_g^2, \quad \mathbb{E} \| \partial g_i(x) - \partial g(x) \|^2 \le \sigma_{g'}^2, \quad \mathbb{E} \| \nabla f_j(y) - \nabla f(y) \|^2 \le \sigma_f^2.    (20)
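Under Assumption 3, the last two bounds in (20) hold automatically; the following short calculation (a standard application of the triangle inequality and Jensen's inequality, included here for convenience) gives the constants quoted in the next paragraph:

    \mathbb{E} \| \partial g_i(x) - \partial g(x) \|^2 \le \mathbb{E} \big( \|\partial g_i(x)\| + \|\partial g(x)\| \big)^2 \le (2 M_g)^2, \qquad \mathbb{E} \| \nabla f_j(y) - \nabla f(y) \|^2 \le (2 M_f)^2,

since ‖∂g(x)‖ ≤ E‖∂g_i(x)‖ ≤ M_g by Jensen's inequality, and similarly for ∇f.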

From Assumptions 2 and 3 we can easily verify, via the triangle inequality and convexity of the norm, that σ_{g'} can be chosen as 2M_g and σ_f can be chosen as 2M_f (see the display above). In contrast, σ_g cannot in general be expressed as a function of the boundedness and smoothness constants. We conclude the following theorem for the online case:

Theorem 4 (Online case).

Suppose Assumptions 1, 2, 3 and 4 in §2 hold. Let L be as in (13) and Δ as in (9). For any mini-batch sizes B_1, B_2, B_3, let the parameters of Algorithm 1 be chosen according to

(21)

define the noise-relevant parameter

(22)

choose the epoch length q, and set the stepsize

(23)

Then the SARAH-Compositional Algorithm 1 outputs an x̃ satisfying E‖∇Φ(x̃)‖ ≤ ε in

(24)

iterates. Furthermore, let the mini-batch sizes B_1, B_2, B_3 satisfy

(25)

then the IFO complexity to achieve an ε-accurate solution is bounded by

(26)

We see that in the online case, the IFO complexity to achieve an ε-accurate solution is upper bounded by O(ε^{-3}), as long as (25) holds.444Here and below, the smoothness and boundedness parameters L_f, L_g, M_f and M_g are treated as constants. Note that if the batch sizes are chosen as B_1 = B_2 = B_3 = 1, then from (26) the IFO complexity upper bound is

(27)

In fact, we can further improve the coefficients of the (n+m)^{1/2} ε^{-2} term in (18) and of the ε^{-3} term in (26). A simple optimization trick enables us to obtain an optimal choice (as in (28) and (30) below) of mini-batch sizes, as follows.

Corollary 5 (Optimal batch size, finite-sum and online case).

In (28) and (30) below, a real-valued batch size is understood to be rounded to its closest admissible integer value (in the finite-sum and online cases, respectively).

  • When the mini-batch sizes in the finite-sum case are chosen as

    (28)

    the IFO complexity bound to achieve an ε-accurate solution for SARAH-Compositional is further minimized to

    (29)
  • When the mini-batch sizes in the online case are chosen to satisfy

    (30)

    where the noise-relevant parameter is as defined in (22), the IFO complexity bound to achieve an ε-accurate solution for SARAH-Compositional is further minimized to

    (31)

To understand the new IFO complexity upper bounds (29) and (31) with optimally chosen batch sizes, note that a basic inequality allows the complexity in (29) to be further upper bounded, so that compared to the single-sample case (19), the IFO complexity upper bound is reduced by at least a factor of two in its leading coefficient when n + m is asymptotically large. To the best of our knowledge, the theoretical phenomenon that mini-batch SARAH can reduce the IFO complexity has not been quantitatively characterized in the previous literature. It is worth noting that the analogous property does not hold in the classical (non-compositional) optimization case, where the single-sample and mini-batch cases share the same IFO complexity upper bound (Fang et al., 2018). With further effort, it can be shown that the running time can be further reduced by adopting parallel computing techniques; we omit the details for clarity.

Due to space limits, the detailed proofs of Theorems 3 and 4 and Corollary 5 are deferred to §6 in the supplementary material.

4 Experiments

In this section, we conduct numerical experiments to support our theory by applying the proposed SARAH-Compositional algorithm to three practical tasks: portfolio management, reinforcement learning, and a dimension-reduction technique named stochastic neighborhood embedding (SNE). In the sequel, §4.1 studies the performance of our algorithm on the (risk-averse) portfolio management/optimization problem, §4.2 tests the performance of SARAH-Compositional on evaluating value functions in reinforcement learning, and §4.3 focuses on SNE, which possesses a non-convex objective function. We follow the setups in Huo et al. (2018); Liu et al. (2017) and compare with existing algorithms for compositional optimization. Readers are referred to Wang et al. (2017a, b) for more tasks to which our algorithm can be applied.555We conduct experiments on synthetic data and the MNIST dataset; the source code can be found at http://github.com/angeoz/SCGD.

4.1 SARAH-Compositional Applied to Portfolio Management

Figure 1: Experiment on portfolio management. The x-axis is the number of passes over the dataset, that is, the number of gradient calculations divided by the number of samples. The y-axis is the function value gap and the norm of the gradient, respectively.
Figure 2: Experiment on portfolio management. The x-axis is the number of gradient calculations divided by the number of samples; the y-axis is the function value gap and the norm of the gradient, respectively.

The portfolio management problem can be formulated as a mean-variance minimization problem:

    \min_{x \in \mathbb{R}^N} \; \frac{1}{n} \sum_{t=1}^{n} \Big( \langle r_t, x \rangle - \frac{1}{n} \sum_{s=1}^{n} \langle r_s, x \rangle \Big)^2 - \frac{1}{n} \sum_{t=1}^{n} \langle r_t, x \rangle,    (32)

where x ∈ R^N denotes the quantities invested in each of the N assets and r_t ∈ R^N the per-period returns. We recall that in Section 1 we introduced the equivalent formulation of (32) as a compositional optimization problem via (3)-(4). As it satisfies Assumptions 1-4, it serves as a good example to validate our theory.
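As a concrete reference for how (32) decomposes into the inner and outer maps (3)-(4), here is a minimal NumPy sketch (our own illustration, not the experiment code; R is an n × N matrix of per-period returns):

    import numpy as np

    def g_i(x, r_i):
        # Inner map (3): append the single-period return <r_i, x> to the decision vector x,
        # so that y = g(x) carries both x and the mean return (1/n) sum_i <r_i, x>.
        return np.concatenate([x, [r_i @ x]])

    def f_j(y, r_j):
        # Outer map (4): negative return plus squared deviation from the mean return,
        # evaluated on y = (x, mean_return).
        x, mean_ret = y[:-1], y[-1]
        ret_j = r_j @ x
        return -ret_j + (ret_j - mean_ret) ** 2

    def objective(x, R):
        # Phi(x) = (1/n) sum_j f_j( (1/n) sum_i g_i(x) ), the mean-variance objective (32).
        n = R.shape[0]
        y = np.mean([g_i(x, R[i]) for i in range(n)], axis=0)
        return np.mean([f_j(y, R[j]) for j in range(n)])

    # usage: R = np.abs(np.random.randn(100, 10)); x = np.ones(10) / 10; print(objective(x, R))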

For illustration purposes in the finite-sum case, we fix the number of samples n and the number of assets N. Each row r_t of the returns matrix is generated from an N-dimensional Gaussian distribution with covariance matrix Σ. The condition number of Σ is predetermined as one of the experiment settings; for example, we experimented with two different values of the condition number κ(Σ). Inspired by the setting of Huo et al. (2018), we sample from the Gaussian distribution and take the absolute value.
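A minimal sketch of this data-generation step (our own illustration; the sizes and the target condition number in the usage line are hypothetical placeholders):

    import numpy as np

    def synthetic_returns(n, N, cond, rng=None):
        """Draw n return vectors in R^N from a Gaussian whose covariance has condition number `cond`,
        then take absolute values so that all returns are nonnegative."""
        rng = np.random.default_rng() if rng is None else rng
        # Build a covariance Sigma = Q diag(lam) Q^T with eigenvalues spanning [1, cond].
        Q, _ = np.linalg.qr(rng.standard_normal((N, N)))
        lam = np.linspace(1.0, cond, N)
        Sigma = Q @ np.diag(lam) @ Q.T
        R = rng.multivariate_normal(mean=np.zeros(N), cov=Sigma, size=n)
        return np.abs(R)

    # usage: R = synthetic_returns(n=2000, N=200, cond=10)   # hypothetical sizes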

Furthermore, we ran experiments on real-world datasets as a demonstration of the online case. The datasets consist of different portfolio data formed on size and operating profitability.666http://mba.tuck.dartmouth.edu/pages/faculty/ken.french/Data_Library/. We choose six different datasets, a selection that partly coincides with the one used in (Lin et al., 2018).

Throughout the portfolio management experiments, the epoch length and batch sizes are set separately for the finite-sum and online cases, and the remaining parameters are fixed across runs. For the SCGD and ASC-PG algorithms, we fix the extrapolation parameter to 0.9. The learning rate is searched over a grid, and we plot the learning curve of each algorithm corresponding to the best learning rate found. The results are shown in Figures 1 and 2, respectively.

We compare our algorithm, SARAH-Compositional, against SCGD (Wang et al., 2017a), ASC-PG (Wang et al., 2017b) and VRSC-PG (Huo et al., 2018), the latter serving as a baseline for variance-reduced stochastic compositional optimization methods. We plot the objective function value gap and the gradient norm against the IFO complexity (measured in gradient calculations) for all four algorithms, the two covariance settings and the six real-world problems. We observe that SARAH-Compositional outperforms all other algorithms.

This result is an experimental confirmation that our SARAH-based compositional optimization method achieves state-of-the-art performance on portfolio management. Moreover, we note that due to the small batch sizes, basic SCGD fails to reach a satisfying result, as also observed by Huo et al. (2018); Lian et al. (2017). Smaller batch sizes also cause oscillations in SCGD training, a problem that the SARAH-Compositional algorithm does not encounter.

4.2 SARAH-Compositional Applied to Reinforcement Learning

Next we demonstrate an experiment on reinforcement learning and test the performance of SARAH-Compositional on value function evaluation. Let V^π(s) be the value of state s under policy π; then the value function can be evaluated through the Bellman equation

    V^\pi(s) = \mathbb{E} \big[ r_{s,s'} + \gamma V^\pi(s') \mid s \big],    (33)

for all s ∈ S, where S represents the set of available states and 0 < γ < 1 is the discount factor. In value function evaluation tasks, we minimize the squared loss

    \min_{V} \; \sum_{s \in S} \Big( V(s) - \mathbb{E} \big[ r_{s,s'} + \gamma V(s') \mid s \big] \Big)^2.    (34)

We write the value function as a vector V = (V(1), ..., V(|S|)). Equation (34) is a special form of the stochastic compositional optimization problem (1): the inner function g collects, for every state s, the pair consisting of V(s) and the Bellman backup E[r_{s,s'} + γ V(s') | s], and the outer function f sums the squared differences between the two entries of each pair (Wang et al., 2017b).
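A minimal NumPy sketch of this compositional structure (our own illustration; P denotes the policy-induced transition matrix and r the vector of expected one-step rewards, both assumed given):

    import numpy as np

    def inner_g(V, P, r, gamma):
        # Inner map: for every state s, stack V(s) together with the Bellman backup
        # E[r_{s,s'} + gamma * V(s') | s] = r(s) + gamma * (P V)(s).
        return np.concatenate([V, r + gamma * (P @ V)])

    def outer_f(y):
        # Outer map: sum of squared Bellman residuals over all states.
        S = y.shape[0] // 2
        V, backup = y[:S], y[S:]
        return np.sum((V - backup) ** 2)

    def bellman_loss(V, P, r, gamma=0.95):
        # The squared loss (34), written as a composition f(g(V)).
        return outer_f(inner_g(V, P, r, gamma))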

To model a reinforcement learning problem, we adopt one of the commonly used settings of Dann et al. (2014) and generate a Markov decision process (MDP) with a moderate number of states and several actions at each state. The transition probabilities are generated randomly from the uniform distribution, with a small positive constant added to each entry to ensure ergodicity, and each row is normalized to sum to one. In addition, the rewards are sampled from a uniform distribution.
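For reproducibility, a small sketch of this MDP-generation procedure (our own reading of the setup; the state/action counts and the smoothing constant are placeholders):

    import numpy as np

    def random_mdp(num_states, num_actions, smooth=1e-5, rng=None):
        """Generate random transition probabilities and rewards for an ergodic MDP."""
        rng = np.random.default_rng() if rng is None else rng
        # Uniform random transition weights, plus a small constant so every entry is positive
        # (which guarantees ergodicity), then normalize each row into a probability distribution.
        P = rng.uniform(size=(num_states, num_actions, num_states)) + smooth
        P = P / P.sum(axis=-1, keepdims=True)
        # Rewards sampled from a uniform distribution.
        R = rng.uniform(size=(num_states, num_actions, num_states))
        return P, R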

Figure 3: Experiment on reinforcement learning. We plot the objective value gap and the gradient norm vs. the IFO complexity (gradient calculations).

We tested our method under different settings of batch size and inner iteration number. In Figure 3 we plot the results for three batch-size settings. The learning rate is searched over a grid, and the inner-loop iteration number is set to 100. We plot the objective value gap together with the gradient norm and use a moving average to smooth the curves, which gives Figure 3. From the figures we note that when the batch size is small and the iteration number is large, SARAH-Compositional outperforms VRSC in convergence speed, gradient norm and stability. This supports our theoretical results and shows the advantage of SARAH-Compositional over VRSC in terms of variance reduction.

4.3 SARAH-Compositional Applied to SNE

In the SNE problem (Hinton and Roweis, 2003), we use x_i to denote points in the high-dimensional space and y_i to denote their low-dimensional images. We define the pairwise similarities

    p_{ij} = \frac{\exp(-\|x_i - x_j\|^2 / 2\sigma^2)}{\sum_{k \ne i} \exp(-\|x_i - x_k\|^2 / 2\sigma^2)}, \qquad q_{ij} = \frac{\exp(-\|y_i - y_j\|^2)}{\sum_{k \ne i} \exp(-\|y_i - y_k\|^2)},

where σ controls the sensitivity to distance. The SNE problem minimizes the Kullback-Leibler divergence Σ_{i,j} p_{ij} log(p_{ij}/q_{ij}) over the low-dimensional embedding, which can be formulated as a non-convex compositional optimization problem (Liu et al., 2017) of the form (1) and (6), where the inner function gathers the pairwise normalization terms and the outer function computes the resulting divergence.
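A minimal NumPy sketch of this objective (our own illustration of the standard SNE loss, not the experiment code; the exact compositional splitting used by Liu et al. (2017) is only summarized in prose above):

    import numpy as np

    def sne_similarities(Z, sigma=1.0):
        # Pairwise similarities p_{ij} (or q_{ij}) from points stored as the rows of Z.
        sq_dists = np.sum((Z[:, None, :] - Z[None, :, :]) ** 2, axis=-1)
        W = np.exp(-sq_dists / (2.0 * sigma ** 2))
        np.fill_diagonal(W, 0.0)                       # exclude k = i from the normalization
        return W / W.sum(axis=1, keepdims=True)

    def sne_objective(Y, P):
        # KL-divergence objective sum_{i,j} p_{ij} log(p_{ij} / q_{ij});
        # P holds the fixed high-dimensional similarities, Y the low-dimensional embedding.
        Q = sne_similarities(Y, sigma=1.0 / np.sqrt(2.0))   # q_{ij} uses a unit-variance kernel
        mask = P > 0
        return np.sum(P[mask] * np.log(P[mask] / Q[mask]))

    # usage: P = sne_similarities(X, sigma=5.0); loss = sne_objective(Y, P)   # sigma is hypothetical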

We run the SNE method on the MNIST dataset, with sample size 2000 and dimension 784. We use SCGD (Wang et al., 2017a), ASC-PG (Wang et al., 2017b), and VRSC (Liu et al., 2017), the latter serving as a baseline variance-reduced stochastic compositional optimization method, and compare their performance with SARAH-Compositional. For each algorithm we choose the best learning rate that keeps it convergent.

Figure 4: Experiment on SNE for the MNIST dataset. The x-axis is the IFO complexity (gradient calculations) and the y-axis is the gradient norm.

In our experiment, we choose an inner batch size of 5, an outer batch size of 1000, and the optimal learning rate for each algorithm. In the left panel of Figure 4, we plot the change of the objective function value gap during the iterations, and in the right panel we plot the gradient norm with respect to each outer-loop update for SCGD, ASC-PG, VRSC and SARAH-Compositional. The left panel of Figure 4 shows that SARAH-Compositional has significantly better stability compared to VRSC. The gradient norm of SARAH-Compositional gradually decreases within each inner loop, while the gradient norm of VRSC accumulates within each inner loop and decreases only at each outer loop.

We note that the objective function of SNE is non-convex. We observe from Figure 4 that SARAH-Compositional outperforms VRSC with respect to the decrease of the gradient norm against the IFO complexity (gradient calculations), which is numerically consistent with our theory.

5 Conclusion

In this paper, we propose a novel algorithm called SARAH-Compositional for solving stochastic compositional optimization problems, built on a recently developed recursive variance-reduced gradient method. Our algorithm achieves strong theoretical and experimental results. Theoretically, we show that SARAH-Compositional attains the desirable IFO upper-bound complexities for finding an ε-accurate solution of non-convex compositional problems in both the finite-sum and online cases. Experimentally, we compare our new compositional optimization method with several rival algorithms on the tasks of portfolio management, value function evaluation in reinforcement learning, and stochastic neighborhood embedding. Future directions include handling the non-smooth case and developing lower-bound theory for stochastic compositional optimization. We hope this work provides new perspectives to both the optimization and machine learning communities interested in compositional optimization.

References

6 Detailed Analysis of Convergence Theorems

In this section, we detail the analysis of Theorems 3 and 4. Before moving on, we first provide a key lemma that serves their common analysis, whose proof is provided in §6.3. We assume that the expected squared estimation error is bounded as follows for any t and some parameters to be specified later:

Lemma 6.

Assume that for any initial point