Stochastic Online Convex Optimization; Application to probabilistic time series forecasting

02/01/2021
by   Olivier Wintenberger, et al.

Stochastic regret bounds for online algorithms are usually derived from an "online to batch" conversion. Inverting the reasoning, we begin our analysis with a "batch to online" conversion that applies to any Stochastic Online Convex Optimization problem under a stochastic exp-concavity condition. We obtain fast-rate stochastic regret bounds that hold with high probability for non-convex loss functions. Building on this approach, we provide prediction and probabilistic forecasting methods for non-stationary, unbounded time series.
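To fix ideas, here is a minimal sketch of the classical "online to batch" conversion that the abstract inverts: run an online learner (here, online gradient descent on streaming squared losses) and return the averaged iterate as a batch estimator. The toy data, step size, and all variable names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Toy streaming regression problem: y_t = <x_t, theta_star> + noise.
rng = np.random.default_rng(0)
d, T = 3, 2000
theta_star = np.array([1.0, -2.0, 0.5])

theta = np.zeros(d)   # online iterate
avg = np.zeros(d)     # running average of iterates (the "batch" output)
for t in range(1, T + 1):
    x = rng.normal(size=d)
    y = x @ theta_star + 0.1 * rng.normal()
    grad = (theta @ x - y) * x       # gradient of 0.5 * (<x, theta> - y)^2
    theta -= 0.1 / np.sqrt(t) * grad  # OGD step with eta_t = 0.1 / sqrt(t)
    avg += (theta - avg) / t          # online-to-batch: average the iterates

print(np.round(avg, 2))
```

The averaged iterate inherits the online learner's regret guarantee as a generalization bound; the paper's "batch to online" conversion runs this argument in the opposite direction.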


Related research

10/25/2017
Stochastic Non-convex Optimization with Strong High Probability Second-order Convergence
In this paper, we study stochastic non-convex optimization with non-conv...

11/19/2015
Online Batch Selection for Faster Training of Neural Networks
Deep neural networks are commonly trained using stochastic non-convex op...

05/22/2020
Online Non-convex Learning for River Pollution Source Identification
In this paper, novel gradient based online learning algorithms are devel...

01/26/2023
Causal Structural Learning from Time Series: A Convex Optimization Approach
Structural learning, which aims to learn directed acyclic graphs (DAGs) ...

07/19/2018
Adaptive Variational Particle Filtering in Non-stationary Environments
Online convex optimization is a sequential prediction framework with the...

04/05/2011
Online and Batch Learning Algorithms for Data with Missing Features
We introduce new online and batch algorithms that are robust to data wit...

05/24/2021
Robust learning with anytime-guaranteed feedback
Under data distributions which may be heavy-tailed, many stochastic grad...
