
Accelerating Optimization via Adaptive Prediction

by   Mehryar Mohri, et al.
New York University

We present a general framework for designing data-dependent optimization algorithms that builds upon and unifies recent techniques in adaptive regularization, optimistic gradient prediction, and problem-dependent randomization. We first present a series of new regret guarantees that hold at any time and under very minimal assumptions, then show how different relaxations recover existing algorithms, both basic and more sophisticated recent ones. Finally, we show how combining adaptivity, optimism, and problem-dependent randomization can guide the design of algorithms with more favorable guarantees than recent state-of-the-art methods.
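To make the combination of adaptivity and optimism concrete, here is a minimal sketch (not the paper's exact algorithm) of an AdaGrad-style online learner that incorporates an optimistic "hint" for the upcoming gradient: per-coordinate step sizes are scaled by the accumulated *prediction error* rather than the raw gradients, so accurate hints yield a tighter effective regret. The function name and interface are illustrative assumptions.

```python
import numpy as np

def optimistic_adagrad(grads, hints, lr=1.0, eps=1e-8):
    """Sketch of an adaptive + optimistic online update (illustrative only).

    grads: list of observed gradient vectors g_1..g_T
    hints: list of predictions of each gradient, available before it is seen
    Returns the sequence of iterates x_1..x_T (starting from x_1 = 0).
    """
    d = len(grads[0])
    grad_sum = np.zeros(d)     # running sum of observed gradients
    err_sq = np.full(d, eps)   # running sum of squared prediction errors
    xs = []
    for g, hint in zip(grads, hints):
        # Optimistic step: act as if the hint were the next gradient.
        x = -lr * (grad_sum + hint) / np.sqrt(err_sq)
        xs.append(x)
        # Adapt per-coordinate step sizes to the hint's error, so that
        # good predictions leave the denominator (and thus regret) small.
        err_sq += (g - hint) ** 2
        grad_sum += g
    return xs
```

With zero hints this reduces to a standard AdaGrad-flavored update; with perfect hints the error term stays near zero and the learner moves aggressively, which is the intuition behind optimism-dependent regret bounds.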




Convergence Analysis of Optimization Algorithms

The regret bound of an optimization algorithm is one of the basic crite...

More Adaptive Algorithms for Tracking the Best Expert

In this paper, we consider the problem of prediction with expert advice ...

A gradient estimator via L1-randomization for online zero-order optimization with two point feedback

This work studies online zero-order optimization of convex and Lipschitz...

A Unified Approach to Adaptive Regularization in Online and Stochastic Optimization

We describe a framework for deriving and analyzing online optimization a...

Faster Discrete Convex Function Minimization with Predictions: The M-Convex Case

Recent years have seen a growing interest in accelerating optimization a...

Structured Prediction Theory Based on Factor Graph Complexity

We present a general theoretical analysis of structured prediction with ...

Applications and Analysis of Bio-Inspired Eagle Strategy for Engineering Optimization

All swarm-intelligence-based optimization algorithms use some stochastic...