A Modular Analysis of Adaptive (Non-)Convex Optimization: Optimism, Composite Objectives, and Variational Bounds

09/08/2017
by Pooria Joulani, et al.

Recently, much work has been done on extending the scope of online learning and incremental stochastic optimization algorithms. In this paper we contribute to this effort in two ways. First, based on a new regret decomposition and a generalization of Bregman divergences, we provide a self-contained, modular analysis of the two workhorses of online learning: (general) adaptive versions of Mirror Descent (MD) and Follow-the-Regularized-Leader (FTRL). The analysis is carried out with care not to introduce assumptions that are not needed in the proofs, and it makes it possible to combine, in a straightforward way, different algorithmic ideas (e.g., adaptivity, optimism, implicit updates) and learning settings (e.g., strongly convex or composite objectives). In this way we are able to reprove, extend, and refine a large body of the literature while keeping the proofs concise. The second contribution is a byproduct of this careful analysis: we present algorithms with improved variational bounds for smooth, composite objectives, including a new family of optimistic MD algorithms requiring only one projection step per round. Furthermore, we provide a simple extension of adaptive regret bounds to practically relevant non-convex problem settings with essentially no extra effort.
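To make the idea of an optimistic MD update with a single projection per round more concrete, the following is a minimal illustrative sketch, not the paper's actual algorithm or analysis. It uses the Euclidean special case of Mirror Descent (online gradient descent on an L2 ball) with an AdaGrad-style scalar step size; the feasible set, the hint functions, and the helper names (`project_l2_ball`, `optimistic_adaptive_md`) are all assumptions made for this example.

```python
import numpy as np

def project_l2_ball(x, radius=1.0):
    """Euclidean projection onto the L2 ball of the given radius."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

def optimistic_adaptive_md(grad_fns, hint_fns, dim, radius=1.0, eta0=1.0):
    """
    Toy optimistic online gradient descent (Euclidean special case of MD)
    with an AdaGrad-style scalar step size and one projection per round.

    grad_fns[t](x) returns the loss gradient observed at round t;
    hint_fns[t]() returns an optimistic guess of that gradient
    (e.g., the previous gradient when losses vary slowly).
    """
    z = np.zeros(dim)          # intermediate iterate, kept unprojected
    sum_sq = 0.0               # running sum of squared gradient norms
    iterates = []
    for grad_fn, hint_fn in zip(grad_fns, hint_fns):
        # Play the hint-shifted point; this is the only projection in the round.
        eta = eta0 / np.sqrt(1.0 + sum_sq)
        x = project_l2_ball(z - eta * hint_fn(), radius)
        iterates.append(x)
        # Observe the true gradient and update the unprojected iterate.
        g = grad_fn(x)
        sum_sq += float(g @ g)
        z = z - eta * g
    return iterates

if __name__ == "__main__":
    # Toy demo: linear losses <g_t, x> with slowly varying gradients,
    # using the previous gradient as the optimistic hint.
    rng = np.random.default_rng(0)
    T, d = 50, 3
    gs = [np.ones(d) + 0.1 * rng.normal(size=d) for _ in range(T)]
    grad_fns = [lambda x, g=g: g for g in gs]
    hints = [np.zeros(d)] + gs[:-1]
    hint_fns = [lambda h=h: h for h in hints]
    xs = optimistic_adaptive_md(grad_fns, hint_fns, dim=d)
    print("final iterate:", xs[-1])
```

Keeping the intermediate iterate `z` unprojected means only the played point `x` is projected each round; in the paper's more general setting the Euclidean distance would be replaced by a (generalized) Bregman divergence and the step sizes by the adaptive regularizers of the analysis.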
