On a fixed-point continuation method for a convex optimization problem

by Jean-Baptiste Fest, et al.

We consider a variation of the classical proximal-gradient algorithm for the iterative minimization of a cost function consisting of a sum of two terms, one smooth and the other prox-simple, whose relative weight is determined by a penalty parameter. This so-called fixed-point continuation method allows one to approximate the problem's trade-off curve, i.e., to compute the minimizers of the cost function for a whole range of values of the penalty parameter at once. The algorithm is shown to converge, and a rate of convergence of the cost function is derived. Furthermore, the method is shown to be related to iterative algorithms constructed on the basis of the ϵ-subdifferential of the prox-simple term. Some numerical examples are provided.
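To fix ideas, the classical setting the abstract builds on can be sketched on a LASSO-type objective (1/2)‖Ax − b‖² + λ‖x‖₁, where the ℓ₁ term is prox-simple (its proximal operator is soft-thresholding). The sketch below is not the paper's continuation scheme itself; it is a minimal warm-started proximal-gradient loop over a decreasing sequence of penalty parameters λ, which is the standard way to trace an approximate trade-off curve. The function names and the 500-iteration budget are illustrative assumptions.

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of t * ||.||_1 (soft-thresholding).
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def prox_grad_path(A, b, lambdas, n_iter=500):
    """Warm-started proximal-gradient (ISTA) over a penalty path.

    Illustrative sketch only: for each lambda in `lambdas`, run ISTA on
    (1/2)||Ax - b||^2 + lambda * ||x||_1, starting from the previous
    solution, and record the minimizer along the path.
    """
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the smooth gradient
    step = 1.0 / L
    x = np.zeros(A.shape[1])             # warm start carried across lambdas
    path = {}
    for lam in lambdas:
        for _ in range(n_iter):
            grad = A.T @ (A @ x - b)     # gradient of the smooth term
            x = soft_threshold(x - step * grad, step * lam)
        path[lam] = x.copy()
    return path
```

Warm-starting each penalty value from the previous minimizer is what makes computing the whole path cheap compared to solving each problem from scratch; the paper's fixed-point continuation method goes further by varying the penalty within the iteration itself.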

