Self-adaptive algorithms for quasiconvex programming and applications to machine learning

12/13/2022
by   Thang Tran Ngoc, et al.

For a broad class of nonconvex programming problems over unbounded constraint sets, we propose a self-adaptive step-size strategy that requires no line-search techniques, and we establish the convergence of the resulting generic method under mild assumptions. In particular, the objective function need not be convex. Unlike descent line-search algorithms, the method does not require a known Lipschitz constant to choose its initial step size. Its crucial feature is the steady reduction of the step size until a certain condition is fulfilled. In particular, it yields a new gradient projection method for optimization problems over unbounded constraint sets. Preliminary results on computational examples verify the correctness of the proposed method. To demonstrate its effectiveness on large-scale problems, we apply it to several machine learning experiments, including supervised feature selection, multi-variable logistic regression, and neural networks for classification.
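The abstract's core idea of shrinking a trial step size until a condition holds, without a per-iteration line search or a known Lipschitz constant, can be sketched generically. The sketch below is an illustrative projected-gradient loop under assumed choices (the Lipschitz-style test, the 0.9 constant, the halving factor, and the test problem are all assumptions, not the paper's exact rule):

```python
import numpy as np

def projected_gradient_adaptive(grad, proj, x0, lam0=1.0, shrink=0.5,
                                tol=1e-8, max_iter=1000):
    """Projected-gradient loop with a self-adaptive step size.

    The step size lam is kept while iterates make progress and is shrunk
    (never re-enlarged) whenever a local Lipschitz-style test fails.
    This is a generic illustration of the self-adaptive idea, not the
    authors' specific algorithm.
    """
    x = np.asarray(x0, dtype=float)
    lam = lam0
    for _ in range(max_iter):
        g = grad(x)
        x_new = proj(x - lam * g)
        g_new = grad(x_new)
        # Self-adaptive test: the step is "too long" if the gradient
        # changes too much relative to the move (a local Lipschitz probe).
        if lam * np.linalg.norm(g_new - g) > 0.9 * np.linalg.norm(x_new - x):
            lam *= shrink          # reduce the step and retry from x
            continue
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```

For example, minimizing the smooth convex function ‖x − c‖² over the (unbounded) nonnegative orthant with `proj = lambda x: np.maximum(x, 0.0)` converges to the projection of `c` onto the orthant, with `lam` settling automatically once the test passes.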


