Stochastic Gradient Descent for Stochastic Doubly-Nonconvex Composite Optimization

05/21/2018
by Takayuki Kawashima, et al.

Stochastic gradient descent has been widely used for solving composite optimization problems in big-data analyses, and many algorithms and convergence properties have been developed. The component functions were primarily assumed to be convex, and nonconvex composite functions have gradually been adopted to obtain more desirable properties. Convergence properties have been investigated, however, only when at most one of the two component functions is nonconvex; no convergence property is known when both component functions are nonconvex, which we call the doubly-nonconvex case. To overcome this difficulty, we assume a simple and weak condition, namely that the penalty function is quasiconvex, and obtain convergence properties for the stochastic doubly-nonconvex composite optimization problem. The convergence rate obtained here is of the same order as that of existing work. We further analyze the convergence rate with constant step size and mini-batch size, and give the optimal convergence rate under appropriate choices of these sizes, which improves on existing work. Experimental results illustrate that our method outperforms existing methods.
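To make the setting concrete, the objective is a composite F(x) = f(x) + g(x), where the loss f is smooth but possibly nonconvex and the penalty g is nonconvex yet quasiconvex, minimized by mini-batch proximal stochastic gradient steps with constant step and batch sizes. The sketch below is illustrative only: the least-squares loss, the MCP penalty, and all function names are assumptions for exposition, not the paper's exact algorithm.

```python
# Minimal sketch of proximal mini-batch SGD for a composite objective
#   F(x) = f(x) + g(x),
# with f smooth (possibly nonconvex) and g a nonconvex but quasiconvex
# penalty. The MCP penalty and these names are illustrative assumptions,
# not the method from the paper.
import numpy as np

def mcp_prox(v, step, lam=0.1, gamma=3.0):
    """Closed-form proximal operator of the MCP penalty (needs gamma > step)."""
    out = v.copy()
    small = np.abs(v) <= gamma * lam
    # Soft-threshold-then-rescale region of the MCP prox; identity elsewhere.
    shrunk = np.sign(v) * np.maximum(np.abs(v) - step * lam, 0.0) / (1.0 - step / gamma)
    out[small] = shrunk[small]
    return out

def prox_sgd(A, b, n_steps=1000, batch_size=32, step=0.01, seed=0):
    """Mini-batch proximal SGD with constant step size and mini-batch size."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    for _ in range(n_steps):
        idx = rng.choice(n, size=batch_size, replace=False)    # sample a mini-batch
        grad = A[idx].T @ (A[idx] @ x - b[idx]) / batch_size   # stochastic gradient of f
        x = mcp_prox(x - step * grad, step)                    # proximal step on g
    return x
```

The constant step size and mini-batch size in `prox_sgd` correspond to the two quantities whose joint tuning the abstract says yields the optimal convergence rate.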


