Distributed Online Non-convex Optimization with Composite Regret

09/21/2022
by Zhanhong Jiang, et al.

Regret has been widely adopted as the metric of choice for evaluating the performance of online optimization algorithms in distributed, multi-agent systems. However, data/model variations associated with agents can significantly impact decisions and require consensus among agents. Moreover, most existing work has focused on developing approaches for (either strongly or non-strongly) convex losses, and very few results have been obtained on regret bounds in distributed online optimization for general non-convex losses. To address these two issues, we propose a novel composite regret with a new network-based regret metric for evaluating distributed online optimization algorithms, and we concretely define its static and dynamic forms. Leveraging the dynamic form of the composite regret, we develop a consensus-based online normalized gradient (CONGD) approach for pseudo-convex losses and prove that its regret is sublinear in a regularity term capturing the path variation of the optimizer. For general non-convex losses, we first build on recent advances to show that, in the distributed online non-convex learning setting, no deterministic algorithm can achieve sublinear regret. We then develop distributed online non-convex optimization with composite regret (DINOCO), which operates without access to gradients and instead relies on an offline optimization oracle. DINOCO is shown to achieve sublinear regret; to our knowledge, this is the first regret bound for general distributed online non-convex learning.
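The abstract describes CONGD only at a high level. As a rough illustration of the general consensus-plus-normalized-gradient pattern it refers to, the sketch below shows what one round of such an update could look like; the mixing matrix W, the step size eta, and the helper congd_round are assumptions made for this sketch, not the paper's actual algorithm or notation.

```python
# Minimal sketch (not the paper's exact CONGD update): one round of a
# consensus-based online normalized gradient step for n agents, assuming a
# doubly stochastic mixing matrix W and a common step size eta.
import numpy as np

def congd_round(X, grads, W, eta):
    """One hypothetical round: average neighbors' iterates via W, then take a
    normalized gradient step using each agent's local round-t loss gradient.

    X     : (n, d) array, row i is agent i's current decision
    grads : (n, d) array, row i is agent i's gradient of its current loss
    W     : (n, n) doubly stochastic mixing matrix (network weights)
    eta   : step size
    """
    mixed = W @ X                                    # consensus/averaging step
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    norms = np.where(norms > 0, norms, 1.0)          # avoid division by zero
    return mixed - eta * grads / norms               # normalized gradient step

# Toy usage: 4 agents on a ring graph, 3-dimensional decisions.
n, d = 4, 3
W = 0.5 * np.eye(n) + 0.25 * (np.roll(np.eye(n), 1, axis=0)
                              + np.roll(np.eye(n), -1, axis=0))
X = np.zeros((n, d))
grads = np.random.randn(n, d)                        # stand-in for real loss gradients
X = congd_round(X, grads, W, eta=0.1)
```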


