The Novel Adaptive Fractional Order Gradient Descent Algorithms Design via Robust Control

03/08/2023
by Jiaxu Liu, et al.

The vanilla fractional order gradient descent may converge only oscillatorily to a region around the global minimum, rather than to the exact minimum point, or may even diverge, when the objective function is strongly convex. To address this problem, a novel adaptive fractional order gradient descent (AFOGD) method and a novel adaptive fractional order accelerated gradient descent (AFOAGD) method are proposed in this paper. Inspired by quadratic constraints and Lyapunov stability analysis from robust control theory, we establish a linear matrix inequality to analyse the convergence of the proposed algorithms, and we prove that they achieve R-linear convergence when the objective function is L-smooth and m-strongly convex. Several numerical simulations are presented to verify the effectiveness and superiority of the proposed algorithms.
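For intuition about the behaviour the abstract describes, the sketch below (Python, not code from the paper) runs a vanilla fractional-order gradient iteration of a commonly used Caputo-inspired form, x_{k+1} = x_k - eta * f'(x_k) * |x_k - x_{k-1}|^(1-alpha) / Gamma(2-alpha), on a strongly convex quadratic. The step size, fractional order, and test function are illustrative assumptions, and the adaptive corrections of AFOGD/AFOAGD analysed in the paper are not reproduced here.

```python
from math import gamma

def fractional_gd(grad, x0, eta=0.1, alpha=0.7, iters=50):
    """Vanilla fractional-order gradient descent (illustrative sketch only).

    Uses the Caputo-style update
        x_{k+1} = x_k - eta * grad(x_k) * |x_k - x_{k-1}|**(1 - alpha) / Gamma(2 - alpha),
    a common form in the fractional-gradient literature; it is NOT the
    paper's adaptive AFOGD/AFOAGD method.
    """
    x_prev, x = x0, x0 - eta * grad(x0)   # plain gradient step to initialise the history
    for _ in range(iters):
        frac_term = abs(x - x_prev) ** (1.0 - alpha) / gamma(2.0 - alpha)
        x_prev, x = x, x - eta * grad(x) * frac_term
    return x

if __name__ == "__main__":
    m = 2.0                                # strong-convexity constant (assumed test problem)
    grad = lambda x: m * x                 # gradient of f(x) = 0.5 * m * x**2, minimum at 0
    print(fractional_gd(grad, x0=5.0))     # typically stalls near, not exactly at, the minimum
```

Because the fractional term shrinks with the distance between consecutive iterates, the step length collapses before the iterate reaches the exact minimiser, which is one way the non-exact convergence of the vanilla scheme can show up in practice.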


