Stochastic Gradient Descent for Stochastic Doubly-Nonconvex Composite Optimization

05/21/2018
by Takayuki Kawashima, et al.

Stochastic gradient descent has been widely used to solve composite optimization problems in big data analysis, and many algorithms and convergence properties have been developed. The composite functions studied were at first primarily convex; nonconvex composite functions have gradually been adopted to obtain more desirable properties. However, convergence properties have been established only when at most one of the composite functions is nonconvex; no convergence property is known when both are nonconvex, a setting we call the doubly-nonconvex case. To overcome this difficulty, we assume a simple and weak condition, namely that the penalty function is quasiconvex, and obtain convergence properties for the stochastic doubly-nonconvex composite optimization problem. The convergence rate obtained here is of the same order as in existing work. We further analyze the convergence rate under constant step and mini-batch sizes and derive the optimal rate for appropriately chosen sizes, which improves on existing work. Experimental results illustrate that our method outperforms existing methods.
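The abstract does not spell out the update rule, but a natural reading of "stochastic gradient descent for composite optimization" is a proximal stochastic gradient iteration: take a mini-batch gradient step on the smooth (possibly nonconvex) loss, then apply the proximal operator of the penalty. The sketch below is a minimal illustration of that scheme, not the authors' exact algorithm: the MCP penalty (which is nonconvex but quasiconvex componentwise) is a stand-in example, and the names `mcp_prox`, `prox_sgd`, `grad_batch` and the parameters `eta`, `b`, `lam`, `gamma` are illustrative choices, not from the paper.

```python
# Sketch of proximal stochastic gradient descent for a composite objective
#   F(x) = f(x) + g(x),
# where f is a smooth (possibly nonconvex) loss estimated from mini-batches
# and g is an example nonconvex penalty (MCP) with a closed-form prox.
# All names and parameter values here are illustrative assumptions.
import numpy as np

def mcp_prox(z, eta, lam, gamma):
    """Closed-form prox of the componentwise MCP penalty (requires gamma > eta)."""
    shrunk = np.where(np.abs(z) <= eta * lam,
                      0.0,
                      np.sign(z) * (np.abs(z) - eta * lam) / (1.0 - eta / gamma))
    # Coefficients beyond the MCP knot gamma*lam are left unpenalized.
    return np.where(np.abs(z) > gamma * lam, z, shrunk)

def prox_sgd(grad_batch, x0, n, eta=0.01, b=32, lam=0.1, gamma=3.0,
             iters=1000, seed=0):
    """grad_batch(x, idx) must return the mini-batch gradient of f at x."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    for _ in range(iters):
        idx = rng.choice(n, size=b, replace=False)  # sample a mini-batch
        # Gradient step on the smooth part, then prox step on the penalty.
        x = mcp_prox(x - eta * grad_batch(x, idx), eta, lam, gamma)
    return x
```

For a concrete smooth part, a least-squares loss with design matrix `A` and response `y` would use `grad_batch = lambda x, idx: A[idx].T @ (A[idx] @ x - y[idx]) / len(idx)`. The abstract's analysis of constant step size `eta` and mini-batch size `b` refers to how these two quantities jointly determine the convergence rate of iterations of this form.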
