
Accelerating Optimization via Adaptive Prediction

09/18/2015
by Mehryar Mohri, et al.
New York University

We present a powerful general framework for designing data-dependent optimization algorithms, building upon and unifying recent techniques in adaptive regularization, optimistic gradient predictions, and problem-dependent randomization. We first present a series of new regret guarantees that hold at any time and under minimal assumptions, and then show how different relaxations recover existing algorithms, both basic ones and more recent, sophisticated ones. Finally, we show how combining adaptivity, optimism, and problem-dependent randomization can guide the design of algorithms that benefit from more favorable guarantees than recent state-of-the-art methods.
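
To make the interplay of the first two ingredients concrete, below is a minimal Python sketch of an optimistic online gradient method with AdaGrad-style per-coordinate step sizes, using the last observed gradient as the prediction. This is an illustrative toy under our own assumptions (the function names, the L2-ball domain, the last-gradient hint, and the constants are not from the paper), not the paper's exact algorithm or its randomized variants.

```python
import numpy as np

def project_l2_ball(x, radius=1.0):
    """Euclidean projection onto the L2 ball of the given radius."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

def adaptive_optimistic_gd(gradient_oracle, dim, num_rounds, eta=1.0, eps=1e-8):
    """Yield the point played on each round (hypothetical sketch, not the paper's method)."""
    x = np.zeros(dim)          # primary iterate
    g_sq_sum = np.zeros(dim)   # running sum of squared gradient coordinates
    g_hat = np.zeros(dim)      # optimistic prediction of the upcoming gradient
    for t in range(num_rounds):
        # Per-coordinate adaptive step sizes (AdaGrad-style regularization).
        scale = eta / np.sqrt(g_sq_sum + eps)
        # Optimistic step: play a point corrected by the predicted gradient.
        x_play = project_l2_ball(x - scale * g_hat)
        g = gradient_oracle(t, x_play)   # observe the actual gradient
        g_sq_sum += g ** 2               # adapt the regularizer to the data
        # Standard step with the observed gradient.
        x = project_l2_ball(x - (eta / np.sqrt(g_sq_sum + eps)) * g)
        g_hat = g                        # reuse the last gradient as the next hint
        yield x_play

# Toy run: slowly drifting linear losses, where the last-gradient hint is accurate.
rng = np.random.default_rng(0)
oracle = lambda t, x: np.array([1.0, -0.5]) + 0.01 * rng.standard_normal(2)
for x_t in adaptive_optimistic_gd(oracle, dim=2, num_rounds=100):
    pass
print("final point played:", x_t)
```

When the hint is accurate, the optimistic step cancels most of the effect of the upcoming gradient; this is the intuition behind optimistic regret bounds that scale with the cumulative prediction error rather than with the raw gradient magnitudes.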


Related research

07/06/2017 · Convergence Analysis of Optimization Algorithms
The regret bound of an optimization algorithm is one of the basic crite...

09/05/2019 · More Adaptive Algorithms for Tracking the Best Expert
In this paper, we consider the problem of prediction with expert advice ...

05/27/2022 · A gradient estimator via L1-randomization for online zero-order optimization with two point feedback
This work studies online zero-order optimization of convex and Lipschitz...

06/20/2017 · A Unified Approach to Adaptive Regularization in Online and Stochastic Optimization
We describe a framework for deriving and analyzing online optimization a...

06/09/2023 · Faster Discrete Convex Function Minimization with Predictions: The M-Convex Case
Recent years have seen a growing interest in accelerating optimization a...

05/20/2016 · Structured Prediction Theory Based on Factor Graph Complexity
We present a general theoretical analysis of structured prediction with ...

08/22/2014 · Applications and Analysis of Bio-Inspired Eagle Strategy for Engineering Optimization
All swarm-intelligence-based optimization algorithms use some stochastic...