Context-Aware Ensemble Learning for Time Series

11/30/2022
by Arda Fazla, et al.

We investigate ensemble methods for prediction in an online setting. Unlike the existing ensembling literature, we introduce, for the first time, an approach in which a meta learner combines the base model predictions using a superset of features, namely the union of the base models' feature vectors, rather than the predictions themselves. Hence, our model does not feed the base model predictions into a machine learning algorithm as inputs; instead, it chooses the best possible combination at each time step based on the state of the problem. We explore three constraint spaces for linearly combining the base predictions: convex combinations, where the components of the ensembling weight vector are nonnegative and sum to 1; affine combinations, where the weight components are only required to sum to 1; and unconstrained combinations, where the components may take any real value. The constraints are both theoretically analyzed under known statistics and integrated into the learning procedure of the meta learner as part of the optimization in an automated manner. To show the practical efficiency of the proposed method, we employ a gradient-boosted decision tree and a multi-layer perceptron separately as the meta learner. Our framework is generic, so any machine learning architecture can serve as the ensembler provided that it allows minimization of a custom differentiable loss. We demonstrate the learning behavior of our algorithm on synthetic data and show significant performance improvements over conventional methods on various real-life datasets widely used in well-known data competitions. Furthermore, we openly share the source code of the proposed method to facilitate further research and comparison.
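The abstract describes the mechanism only at a high level, so the following is a minimal, hypothetical sketch (not the authors' released code) of the idea under stated assumptions: a small multi-layer perceptron meta learner maps the union of the base models' feature vectors to a combination weight vector, the weights are restricted to a convex, affine, or unconstrained space, and the linearly combined prediction is trained end to end with a differentiable loss. The names (WeightMLP, combine, the constraint argument) and the toy data are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch only: an MLP meta learner that outputs combination
# weights for the base model predictions, with simple constraint handling.
import torch
import torch.nn as nn

class WeightMLP(nn.Module):
    def __init__(self, n_features: int, n_base: int, constraint: str = "convex"):
        super().__init__()
        # Input is the union of the base models' feature vectors (the "state").
        self.net = nn.Sequential(
            nn.Linear(n_features, 32), nn.ReLU(), nn.Linear(32, n_base)
        )
        self.constraint = constraint

    def forward(self, x):                        # x: (batch, n_features)
        w = self.net(x)                          # raw scores, one per base model
        if self.constraint == "convex":          # nonnegative, sums to 1
            return torch.softmax(w, dim=-1)
        if self.constraint == "affine":          # sums to 1, components may be negative
            return w - w.mean(dim=-1, keepdim=True) + 1.0 / w.shape[-1]
        return w                                 # unconstrained

def combine(weights, base_preds):                # base_preds: (batch, n_base)
    # Per-sample linear combination of the base predictions.
    return (weights * base_preds).sum(dim=-1)

if __name__ == "__main__":
    # Toy, randomly generated data; any differentiable loss could be substituted.
    n_features, n_base, batch = 10, 3, 64
    meta = WeightMLP(n_features, n_base, constraint="affine")
    opt = torch.optim.Adam(meta.parameters(), lr=1e-2)
    x = torch.randn(batch, n_features)           # union of base feature vectors
    base_preds = torch.randn(batch, n_base)      # base model predictions
    y = torch.randn(batch)                       # targets
    loss = ((combine(meta(x), base_preds) - y) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```

The softmax and mean-shift mappings shown here are just one simple way to enforce the convex and affine constraints inside the optimization, in the spirit of the automated constraint handling mentioned in the abstract.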
