Hierarchical Partitioning Forecaster

05/22/2023
by Christopher Mattern et al.

In this work we consider a new family of algorithms for sequential prediction, Hierarchical Partitioning Forecasters (HPFs). Our goal is to obtain appealing theoretical properties (regret guarantees against a powerful model class) and practical properties (empirical performance comparable to deep networks) at the same time. We build on three principles: hierarchically partitioning the feature space into sub-spaces, blending forecasters specialized to each sub-space, and learning HPFs via local online learning applied to these individual forecasters. Following these principles allows us to obtain regret guarantees where Constant Partitioning Forecasters (CPFs) serve as competitors. A CPF partitions the feature space into sub-spaces and predicts with a fixed forecaster per sub-space. Fixing a hierarchical partition ℋ and considering any CPF whose partition can be constructed from elements of ℋ, we provide two guarantees: first, a generic one that unveils how local online learning determines the regret of learning the entire HPF online; second, a concrete instance that considers HPFs with linear forecasters (LHPF) and exp-concave losses, where we obtain O(k log T) regret for sequences of length T, with k a measure of complexity for the competing CPF. Finally, we provide experiments that compare LHPF to various baselines, including state-of-the-art deep learning models, on precipitation nowcasting. Our results indicate that LHPF is competitive in various settings.
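To make the three principles concrete, here is a minimal sketch in Python/NumPy. It assumes a fixed binary tree that splits one (standardized) feature coordinate at zero per level, squared loss, and plain online gradient steps; the class names (`Node`, `HPF`) and these modeling choices are illustrative, not the paper's exact construction, whose analysis for exp-concave losses would pair naturally with updates such as Online Newton Step.

```python
import numpy as np

class Node:
    """One cell of the hierarchical partition, with its own linear forecaster."""
    def __init__(self, dim, depth, max_depth, split_dim=0):
        self.w = np.zeros(dim)       # local linear forecaster, predicts w @ x
        self.log_weight = 0.0        # log-domain mixing weight for blending
        self.split_dim = split_dim
        if depth < max_depth:
            nxt = (split_dim + 1) % dim
            self.left = Node(dim, depth + 1, max_depth, nxt)
            self.right = Node(dim, depth + 1, max_depth, nxt)
        else:
            self.left = self.right = None

    def path(self, x):
        """Root-to-leaf chain of nodes whose sub-space contains x
        (each split thresholds one coordinate at 0, for simplicity)."""
        node, nodes = self, [self]
        while node.left is not None:
            node = node.left if x[node.split_dim] <= 0.0 else node.right
            nodes.append(node)
        return nodes

class HPF:
    def __init__(self, dim, max_depth=3, lr=0.1, eta=0.5):
        self.root = Node(dim, 0, max_depth)
        self.lr = lr    # step size of each local online learner
        self.eta = eta  # learning rate of the exponential-weights blend

    def predict(self, x):
        nodes = self.root.path(x)
        logw = np.array([n.log_weight for n in nodes])
        p = np.exp(logw - logw.max())
        p /= p.sum()                 # blend the specialists on the active path
        return float(sum(pi * (n.w @ x) for pi, n in zip(p, nodes)))

    def update(self, x, y):
        # Local online learning: only forecasters whose sub-space contains x
        # observe the example; all other nodes are left untouched.
        for n in self.root.path(x):
            err = n.w @ x - y
            n.log_weight -= self.eta * err ** 2   # exponential weights on the loss
            n.w -= self.lr * 2.0 * err * x        # gradient step on squared loss

# Toy usage: a target that is piecewise constant over the partition.
rng = np.random.default_rng(0)
hpf = HPF(dim=2, max_depth=4)
for t in range(5000):
    x = rng.normal(size=2)
    y = 1.0 if x[0] > 0 else -1.0
    y_hat = hpf.predict(x)
    hpf.update(x, y)
```

The key property the sketch preserves is locality: each example touches only the O(depth) forecasters on its root-to-leaf path, which is what makes a blend over exponentially many candidate partitions cheap to learn online.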
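Stated in symbols, the LHPF guarantee from the abstract reads roughly as follows, where ℓ_t is the exp-concave loss at step t, ŷ_t the respective predictions, and 𝒞(ℋ) the class of CPFs whose partitions can be assembled from elements of ℋ; this notation is ours for illustration, and the precise constants and definition of k are in the paper:

\[
\sum_{t=1}^{T} \ell_t\big(\hat{y}_t^{\mathrm{LHPF}}\big)
\;-\; \min_{C \in \mathcal{C}(\mathcal{H})} \sum_{t=1}^{T} \ell_t\big(\hat{y}_t^{C}\big)
\;=\; O(k \log T),
\]

with k a measure of complexity of the best competing CPF C.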


Related research

- Provable Regret Bounds for Deep Online Learning and Control (10/15/2021). The use of deep neural networks has been highly successful in reinforcem...
- Online Prediction in Sub-linear Space (07/16/2022). We provide the first sub-linear space and sub-linear regret algorithm fo...
- Differentially Private Online Learning (09/01/2011). In this paper, we consider the problem of preserving privacy in the onli...
- A Local Regret in Nonconvex Online Learning (11/13/2018). We consider an online learning process to forecast a sequence of outcome...
- On Exploration, Exploitation and Learning in Adaptive Importance Sampling (10/31/2018). We study adaptive importance sampling (AIS) as an online learning proble...
- Black-Box Reductions for Parameter-free Online Learning in Banach Spaces (02/17/2018). We introduce several new black-box reductions that significantly improve...
- Multi-Resolution Online Deterministic Annealing: A Hierarchical and Progressive Learning Architecture (12/15/2022). Hierarchical learning algorithms that gradually approximate a solution t...
