Learning with Subset Stacking

12/12/2021
by Ş. İlker Birbil et al.

We propose a new algorithm that learns from a set of input-output pairs. Our algorithm is designed for populations where the relation between the input variables and the output variable exhibits heterogeneous behavior across the predictor space. The algorithm starts by generating subsets that are concentrated around random points in the input space, then trains a local predictor on each subset. These predictors are combined in a novel way to yield an overall predictor. We call this algorithm "LEarning with Subset Stacking," or LESS, due to its resemblance to the method of stacking regressors. We compare the test performance of LESS with state-of-the-art methods on several datasets and find that LESS is a competitive supervised learning method. Moreover, LESS is efficient in terms of computation time and allows a straightforward parallel implementation.
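The three steps in the abstract (random anchor points, local predictors per subset, a stacking-style combination) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the subset size, the Gaussian distance weighting, and the choice of linear models for both the local and the combining (meta) predictors are all assumptions made here for brevity.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

def _local_preds(X, anchors, local_models):
    # Distance-based weights: points near an anchor trust that anchor's model more.
    W = np.stack([np.exp(-np.linalg.norm(X - c, axis=1)) for c in anchors], axis=1)
    W /= W.sum(axis=1, keepdims=True)           # normalize weights per sample
    P = np.stack([m.predict(X) for m in local_models], axis=1)
    return W * P                                # weighted local predictions

def fit_less(X, y, n_subsets=5, subset_frac=0.5):
    """Sketch of LESS-style learning: local predictors on neighborhoods of
    random anchor points, combined by a global (stacking) regressor."""
    n = len(X)
    k = max(1, int(subset_frac * n))
    anchors, local_models = [], []
    for _ in range(n_subsets):
        c = X[rng.integers(n)]                  # random anchor in the input space
        idx = np.argsort(np.linalg.norm(X - c, axis=1))[:k]  # subset around it
        local_models.append(LinearRegression().fit(X[idx], y[idx]))
        anchors.append(c)
    # The meta-model stacks the weighted local predictions into one predictor.
    meta = LinearRegression().fit(_local_preds(X, anchors, local_models), y)
    return anchors, local_models, meta

def predict_less(model, X):
    anchors, local_models, meta = model
    return meta.predict(_local_preds(X, anchors, local_models))

# Toy heterogeneous problem: the input-output relation changes across the space.
X = rng.uniform(-2, 2, size=(200, 1))
y = np.where(X[:, 0] < 0, -X[:, 0], 2 * X[:, 0]) + 0.05 * rng.normal(size=200)
model = fit_less(X, y)
print(predict_less(model, X[:5]))
```

Because each local predictor is trained on its own subset, the loop over subsets is embarrassingly parallel, which matches the parallelization claim in the abstract.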

