Conceptually Diverse Base Model Selection for Meta-Learners in Concept Drifting Data Streams

11/29/2021
by   Helen McKay, et al.

Meta-learners and ensembles aim to combine a set of relevant yet diverse base models to improve predictive performance. However, determining an appropriate set of base models is challenging, especially in online environments where the underlying distribution of the data can change over time. In this paper, we present a novel approach for estimating the conceptual similarity of base models, calculated using the Principal Angles (PAs) between their underlying subspaces. We propose two methods that use conceptual similarity as a metric to obtain a relevant yet diverse subset of base models: (i) parameterised threshold culling and (ii) parameterless conceptual clustering. We evaluate these methods against thresholding with common ensemble pruning metrics, namely predictive performance and Mutual Information (MI), in the context of online Transfer Learning (TL), using both synthetic and real-world data. Our results show that conceptual similarity thresholding has a reduced computational overhead, yet yields predictive performance comparable to thresholding using predictive performance or MI. Furthermore, conceptual clustering achieves similar predictive performance without requiring parameterisation, and does so with lower computational overhead than thresholding using predictive performance or MI when the number of base models becomes large.
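The Principal Angles between two subspaces can be computed from the singular values of the product of their orthonormal bases. The sketch below illustrates this standard computation with NumPy; note that how the resulting angles are aggregated into a single similarity score is an assumption here for illustration, not necessarily the aggregation used in the paper.

```python
import numpy as np

def principal_angles(A, B):
    """Principal angles between the column spaces of A and B.

    Orthonormalise each basis with QR; the singular values of
    Qa.T @ Qb are then the cosines of the principal angles.
    """
    Qa, _ = np.linalg.qr(A)
    Qb, _ = np.linalg.qr(B)
    s = np.linalg.svd(Qa.T @ Qb, compute_uv=False)
    s = np.clip(s, -1.0, 1.0)  # guard against rounding just outside [-1, 1]
    return np.arccos(s)

def conceptual_similarity(A, B):
    """Illustrative similarity in [0, 1] (1 = identical subspaces),
    obtained by averaging the cosines of the principal angles.
    The paper's exact aggregation may differ."""
    return float(np.mean(np.cos(principal_angles(A, B))))

# A subspace is maximally similar to itself: all angles are zero.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 3))
assert np.allclose(principal_angles(X, X), 0.0, atol=1e-7)
```

Base models whose pairwise similarity exceeds a threshold would be treated as conceptually redundant, and one representative per cluster retained.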

