Non-convex optimization via strongly convex majorization-minimization

06/13/2019
by Azita Mayeli, et al.

In this paper, we introduce a class of nonsmooth, non-convex least squares optimization problems using tools from convex analysis, and we propose the iterative majorization-minimization (MM) algorithm on a convex set, with an initializer away from the origin, to find an optimal point of the problem. To this end, we first construct a class of convex majorizers that approximate the non-convex cost function on a convex set. Convergence of the iterative algorithm is guaranteed when the initial point x^(0) is away from the origin and each iterate x^(k) is obtained in a ball of small radius centred at x^(k-1). The algorithm converges to a stationary point of the cost function when the surrogates are strongly convex. For our class of optimization problems, the proposed penalizer of the cost function is the difference of the ℓ_1-norm and the Moreau envelope of a convex function; it generalizes the non-separable GMC penalty previously introduced by Ivan Selesnick in [IS17].
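The abstract describes an MM scheme in which each iterate minimizes a strongly convex surrogate inside a small ball centred at the previous point. The sketch below is only an illustration of that generic loop, not the paper's algorithm: the names (mm_sketch, majorizer_argmin, radius), the ball-projection step, and the quadratic-surrogate toy problem are assumptions for the example, and the ℓ_1-minus-Moreau-envelope (GMC-type) penalty from the paper is not implemented here.

```python
import numpy as np

def mm_sketch(f, majorizer_argmin, x0, radius=0.1, max_iter=100, tol=1e-8):
    """Generic majorization-minimization loop (illustrative sketch only).

    f                : non-convex cost function to be decreased
    majorizer_argmin : user-supplied routine returning the minimizer of a
                       strongly convex surrogate that majorizes f and
                       touches it at the current iterate
    x0               : initial point, assumed to be away from the origin
    radius           : iterates are kept in a small ball around the previous
                       point, mirroring the condition stated in the abstract
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_new = majorizer_argmin(x)            # minimize the surrogate at x
        step = x_new - x
        norm = np.linalg.norm(step)
        if norm > radius:                      # pull the step back into the ball
            x_new = x + radius * step / norm
        if abs(f(x) - f(x_new)) < tol:         # monotone decrease has stalled
            return x_new
        x = x_new
    return x

# Toy usage (hypothetical): plain least squares with a quadratic surrogate
# g(x; x_k) = f(x_k) + <grad f(x_k), x - x_k> + (L/2)||x - x_k||^2,
# whose minimizer is a gradient step of size 1/L.
A = np.array([[2.0, 0.0], [0.0, 1.0]])
y = np.array([1.0, -1.0])
f = lambda x: 0.5 * np.linalg.norm(A @ x - y) ** 2
L = np.linalg.norm(A, 2) ** 2                  # gradient Lipschitz constant
surrogate_argmin = lambda xk: xk - (A.T @ (A @ xk - y)) / L
x_star = mm_sketch(f, surrogate_argmin, x0=np.array([5.0, 5.0]))
```

Because each surrogate majorizes the cost and agrees with it at the current iterate, the loop produces a non-increasing sequence of cost values, which is the standard descent property underlying MM methods.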
