DC Proximal Newton for Non-Convex Optimization Problems

07/02/2015
by Alain Rakotomamonjy, et al.

We introduce a novel algorithm for solving learning problems in which both the loss function and the regularizer are non-convex but belong to the class of difference of convex (DC) functions. Our contribution is a new general-purpose proximal Newton algorithm able to handle such a situation. The algorithm obtains a descent direction from an approximation of the loss function and then performs a line search to ensure sufficient descent. A theoretical analysis shows that the limit points of the iterates are stationary points of the DC objective function. Numerical experiments show that our approach is more efficient than the current state of the art on a problem with a convex loss function and a non-convex regularizer. We also illustrate the benefit of our algorithm on a high-dimensional transductive learning problem in which both the loss function and the regularizer are non-convex.
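The two-step scheme described above (a proximal step on a convex majorizer of the DC objective, followed by a line search enforcing sufficient descent) can be sketched as follows. This is an illustrative first-order variant, not the authors' implementation: it uses a scaled-identity Hessian approximation (so the subproblem reduces to soft-thresholding) and a capped-L1 penalty, written as the DC decomposition lam*min(|x|, theta) = lam*|x| - lam*max(|x| - theta, 0).

```python
import numpy as np

# Illustrative sketch of a DC proximal descent scheme for
#   min_x 0.5*||Ax - b||^2 + lam * min(|x|, theta)
# The capped-L1 penalty is DC: lam*|x| (convex) minus lam*max(|x|-theta, 0) (convex).
# A scaled-identity Hessian approximation replaces the Newton model here,
# so the inner subproblem has a closed-form soft-thresholding solution.

def soft_threshold(z, t):
    """Proximal operator of t*||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def obj(A, b, x, lam, theta):
    """Full non-convex DC objective."""
    return 0.5 * np.sum((A @ x - b) ** 2) + lam * np.sum(np.minimum(np.abs(x), theta))

def dc_prox_descent(A, b, lam=0.05, theta=0.5, iters=200):
    n = A.shape[1]
    x = np.zeros(n)
    t = 1.0 / np.linalg.norm(A, 2) ** 2   # step from the Lipschitz constant of the gradient
    for _ in range(iters):
        grad = A.T @ (A @ x - b)                      # gradient of the smooth convex loss
        v = lam * np.sign(x) * (np.abs(x) > theta)    # subgradient of the concave part
        # Proximal step on the convex majorizer (concave part linearized via v)
        x_new = soft_threshold(x - t * (grad - v), t * lam)
        d = x_new - x                                 # candidate descent direction
        # Backtracking line search for sufficient (Armijo-type) descent
        alpha, f = 1.0, obj(A, b, x, lam, theta)
        while (obj(A, b, x + alpha * d, lam, theta) > f - 1e-4 * alpha * (d @ d)
               and alpha > 1e-8):
            alpha *= 0.5
        x = x + alpha * d
    return x
```

Because the capped-L1 penalty is flat beyond theta, the linearized concave term cancels the L1 shrinkage on large coefficients, so (unlike the plain Lasso) large entries are estimated without bias.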


