Proximal Gradient Method for Manifold Optimization

11/02/2018
by Shixiang Chen, et al.

This paper considers manifold optimization problems with a nonsmooth and nonconvex objective function. Existing methods for solving problems of this kind fall into two classes. Algorithms in the first class rely on subgradient information of the objective function, which leads to a slow convergence rate. Algorithms in the second class are based on operator-splitting techniques, but they usually lack rigorous convergence guarantees. In this paper, we propose a retraction-based proximal gradient method for solving this class of problems. We prove that the proposed method globally converges to a stationary point. The iteration complexity of obtaining an ϵ-stationary solution is also analyzed. Numerical results on sparse PCA and compressed modes problems are reported to demonstrate the advantages of the proposed method.
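To make the abstract's high-level description concrete, here is a minimal, illustrative sketch of a retraction-based proximal gradient iteration for sparse PCA, min_x -x^T A x + mu*||x||_1 restricted to the unit sphere (the simplest instance of the Stiefel manifold). This is not the authors' reference implementation: on the sphere the tangent-constrained proximal subproblem reduces to a one-dimensional root-finding problem, which is solved below by bisection, whereas the paper treats the more general setting. The step size, line-search constants, and all function names are assumptions made for illustration.

```python
import numpy as np

def soft(u, tau):
    # Soft-thresholding: proximal operator of tau * ||.||_1.
    return np.sign(u) * np.maximum(np.abs(u) - tau, 0.0)

def tangent_prox(x, g, t, mu, tol=1e-10):
    # Tangent-constrained proximal subproblem on the unit sphere:
    #   min_v  <g, v> + ||v||^2/(2t) + mu*||x + v||_1   s.t.  <x, v> = 0.
    # With a multiplier lam for the linear constraint, the minimizer is
    #   v(lam) = soft(x - t*(g + lam*x), t*mu) - x,
    # and lam must satisfy phi(lam) = <x, x + v(lam)> - 1 = 0.
    # phi is monotone nonincreasing in lam, so bisection suffices.
    def phi(lam):
        return x @ soft(x - t * (g + lam * x), t * mu) - 1.0
    lo, hi = -1.0, 1.0
    while phi(lo) < 0:        # expand the bracket downward
        lo *= 2.0
    while phi(hi) > 0:        # expand the bracket upward
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if phi(mid) > 0:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    return soft(x - t * (g + lam * x), t * mu) - x

def manpg_sphere(A, mu, x0, t=0.1, max_iter=500, tol=1e-8):
    # Retraction-based proximal gradient loop for
    #   min  f(x) + mu*||x||_1,  f(x) = -x^T A x,  on the unit sphere.
    x = x0 / np.linalg.norm(x0)
    obj = lambda z: -z @ A @ z + mu * np.abs(z).sum()
    for _ in range(max_iter):
        g = -2.0 * A @ x                  # Euclidean gradient of f
        v = tangent_prox(x, g, t, mu)     # descent direction in the tangent space
        if np.linalg.norm(v) < tol:       # approximate stationarity reached
            break
        alpha, fx = 1.0, obj(x)
        while True:                       # backtracking line search
            y = x + alpha * v
            y /= np.linalg.norm(y)        # retraction onto the sphere
            if obj(y) <= fx - 1e-4 * alpha * (v @ v) / t or alpha < 1e-12:
                break
            alpha *= 0.5
        x = y
    return x

# Usage: leading sparse principal component of a random covariance matrix.
rng = np.random.default_rng(0)
B = rng.standard_normal((50, 20))
A = B.T @ B / 50
x = manpg_sphere(A, mu=0.8, x0=rng.standard_normal(20))
print(np.count_nonzero(np.abs(x) > 1e-6), "nonzero loadings")
```

The normalization step x + alpha*v followed by rescaling to unit norm is the retraction; the backtracking loop enforces a sufficient-decrease condition, mirroring the role of the line search in guaranteeing convergence to a stationary point.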


Related research

03/27/2019 · An Alternating Manifold Proximal Gradient Method for Sparse PCA and Sparse CCA
Sparse principal component analysis (PCA) and sparse canonical correlati...

10/12/2021 · Global Convergence of Triangularized Orthogonalization-free Method
This paper proves the global convergence of a triangularized orthogonali...

03/09/2018 · A Stochastic Semismooth Newton Method for Nonsmooth Nonconvex Optimization
In this work, we present a globalized stochastic semismooth Newton metho...

05/09/2016 · Structured Nonconvex and Nonsmooth Optimization: Algorithms and Iteration Complexity Analysis
Nonconvex and nonsmooth optimization problems are frequently encountered...

03/03/2022 · Parametric complexity analysis for a class of first-order Adagrad-like algorithms
A class of algorithms for optimization in the presence of noise is prese...

11/18/2017 · Proximal Gradient Method with Extrapolation and Line Search for a Class of Nonconvex and Nonsmooth Problems
In this paper, we consider a class of possibly nonconvex, nonsmooth and ...

06/18/2020 · Improving the Convergence Rate of One-Point Zeroth-Order Optimization using Residual Feedback
Many existing zeroth-order optimization (ZO) algorithms adopt two-point ...
