Convergence Analysis of Optimization Algorithms

07/06/2017
by HyoungSeok Kim, et al.

The regret bound of an optimization algorithm is one of the basic criteria for evaluating the performance of the given algorithm. By inspecting the differences between the regret bounds of traditional algorithms and adaptive ones, we provide a guide for choosing an optimizer with respect to the given data set and the loss function. For the analysis, we assume that the loss function is convex and that its gradient is Lipschitz continuous.
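For context, the regret referred to in the abstract is the standard online-learning quantity; a minimal sketch of its definition, with notation ($f_t$, $x_t$, $T$) chosen here for illustration rather than taken from the paper:

% Regret after T rounds: cumulative loss of the iterates x_t,
% measured against the best fixed decision in hindsight.
R(T) \;=\; \sum_{t=1}^{T} f_t(x_t) \;-\; \min_{x} \sum_{t=1}^{T} f_t(x)

A sublinear regret bound (R(T)/T -> 0) implies that the average loss of the iterates approaches that of the best fixed point, which is why regret bounds serve as a convergence criterion for comparing traditional and adaptive optimizers.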


