An Incremental Path-Following Splitting Method for Linearly Constrained Nonconvex Nonsmooth Programs
Linearly constrained nonconvex nonsmooth programs have drawn much attention in recent years owing to their broad modeling power in machine learning. A variety of important problems, including deep learning, matrix factorization, and phase retrieval, can be reformulated as the optimization of a highly nonconvex, nonsmooth objective function subject to linear constraints. However, solving such a program is challenging, as it is considerably more complicated than its unconstrained counterpart. In particular, the feasible region is a polyhedron onto which even a simple projection is intractable in general, and the per-iteration cost can be prohibitively expensive in the high-dimensional case. It is therefore widely recognized as promising to develop provable and practical algorithms for linearly constrained nonconvex nonsmooth programs. In this paper, we develop an incremental path-following splitting algorithm with a theoretical guarantee and a low computational cost. Specifically, we show that the algorithm converges to an ϵ-approximate stationary solution within O(1/ϵ) iterations, and that its per-iteration cost is very small under a randomized variable selection rule. To the best of our knowledge, this is the first incremental method for solving linearly constrained nonconvex nonsmooth programs with a theoretical guarantee. Experiments on constrained concave penalized linear regression (CCPLR) and the nonconvex support vector machine (NCSVM) demonstrate that the proposed algorithm is more effective and stable than competing heuristic methods.
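To give a concrete feel for the kind of splitting update described above, the following is a minimal toy sketch, not the paper's algorithm: a randomized-coordinate augmented-Lagrangian step for a simple linearly constrained composite problem (smooth quadratic plus an ℓ1 term, one linear equality constraint). The problem instance, the penalty parameter `rho`, and the single-coordinate update rule are illustrative assumptions; only one variable is touched per iteration, which is what makes the per-iteration cost small under randomized selection.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*|.| (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def randomized_splitting(c, lam=0.1, rho=1.0, iters=5000, seed=0):
    """Illustrative sketch (not the paper's method):
    minimize 0.5*||x - c||^2 + lam*||x||_1  subject to  sum(x) = 1,
    updating one randomly chosen coordinate of the augmented
    Lagrangian per iteration, followed by a dual ascent step."""
    rng = np.random.default_rng(seed)
    n = len(c)
    x = np.zeros(n)
    y = 0.0  # dual variable for the constraint sum(x) = 1
    for _ in range(iters):
        i = rng.integers(n)      # randomized variable selection: O(1) work per step
        r = x.sum() - x[i]       # contribution of the other (fixed) coordinates
        # Exact minimizer over x_i of
        #   0.5*(x_i - c_i)^2 + lam*|x_i| + y*(x_i + r - 1) + 0.5*rho*(x_i + r - 1)^2,
        # which reduces to soft-thresholding of the smooth part's minimizer.
        v = (c[i] - y - rho * (r - 1.0)) / (1.0 + rho)
        x[i] = soft_threshold(v, lam / (1.0 + rho))
        y += rho * (x.sum() - 1.0)  # dual ascent on the constraint residual
    return x
```

On this small strongly convex instance the iterates settle near a point satisfying the linear constraint; the nonconvex setting analyzed in the paper requires the path-following machinery and a far more delicate argument than this convex toy suggests.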