Online Aggregation of Unbounded Losses Using Shifting Experts with Confidence

08/02/2018
by   Vladimir V'yugin, et al.

We develop a setting of sequential prediction based on shifting experts and on a "smooth" version of the method of specialized experts. To aggregate the experts' predictions, we use the AdaHedge algorithm, a version of the Hedge algorithm with an adaptive learning rate, and extend it with the Fixed Share meta-algorithm. This combines the advantages of both algorithms: (1) we bound the shifting regret, a more demanding performance measure than ordinary regret; (2) the regret bounds remain valid when the experts' losses are signed and unbounded. In addition, (3) we incorporate into this scheme a "smooth" version of the method of specialized experts, which allows more flexible and accurate predictions. All results are obtained in the adversarial setting: no assumptions are made about the nature of the data source. We present results of numerical experiments on short-term forecasting of electricity consumption using real data.
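To make the combination concrete, the sketch below shows a plain exponentially weighted (Hedge-style) update followed by a Fixed Share mixing step, which redistributes a small fraction of weight uniformly each round so that the algorithm can track shifting experts. This is an illustration only: it uses a fixed learning rate `eta` and omits AdaHedge's adaptive tuning of the learning rate, the signed-loss machinery, and the specialized-experts component described in the paper; the function name and `alpha` parameter are our own.

```python
import numpy as np

def fixed_share_hedge(losses, alpha=0.05, eta=1.0):
    """Hedge-style aggregation with a Fixed Share mixing step.

    losses: (T, N) array of per-round losses of N experts.
    alpha:  share rate; the fraction of weight redistributed
            uniformly each round so past leaders can be displaced.
    eta:    fixed learning rate (AdaHedge would tune this adaptively).
    Returns the (T, N) array of weights used at each round.
    """
    T, N = losses.shape
    w = np.full(N, 1.0 / N)          # start from the uniform distribution
    history = np.zeros((T, N))
    for t in range(T):
        history[t] = w
        # Hedge step: exponential weight update on observed losses
        w = w * np.exp(-eta * losses[t])
        w /= w.sum()
        # Fixed Share step: mix with the uniform distribution
        w = (1 - alpha) * w + alpha / N
    return history
```

With `alpha = 0`, this reduces to the ordinary Hedge update, whose weights can concentrate irreversibly on one expert; the mixing step keeps every weight bounded away from zero, which is what makes shifting-regret bounds possible.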


