Stepwise regression for unsupervised learning

06/10/2017
by Jonathan Landy, et al.

I consider unsupervised extensions of the fast stepwise linear regression algorithm (Efroymson, 1960). These extensions allow one to efficiently identify highly-representative feature variable subsets within a given set of jointly distributed variables. This in turn allows for the efficient dimensional reduction of large data sets via the removal of redundant features. Fast search is effected here through the avoidance of repeat computations across trial fits, allowing a full representative-importance ranking of a set of feature variables to be carried out in O(n^2 m) time, where n is the number of variables and m is the number of data samples available. This runtime complexity matches that needed to carry out a single regression and is O(n^2) faster than that of naive implementations. I present pseudocode suitable for efficient forward, reverse, and forward-reverse unsupervised feature selection. To illustrate the algorithm's application, I apply it to the problem of identifying representative stocks within a given financial market index -- a challenge relevant to the design of Exchange Traded Funds (ETFs). I also characterize the growth of numerical error with iteration step in these algorithms, and finally demonstrate and rationalize the observation that the forward and reverse algorithms return exactly inverted feature orderings in the weakly-correlated feature set regime.
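To make the selection criterion concrete, the following is a minimal sketch of the *naive* forward variant of the idea: at each step, greedily add the feature whose inclusion most reduces the total squared error of linearly reconstructing all features from the selected subset. The function name and this brute-force re-fitting approach are illustrative assumptions, not the paper's fast algorithm; the paper's contribution is reusing partial computations across trial fits so the full ranking runs in O(n^2 m) rather than the cost of refitting from scratch at every step, as done here.

```python
import numpy as np

def forward_unsupervised_selection(X, k):
    """Greedily pick k representative columns of X (samples x features).

    Criterion: minimize the total squared error of reconstructing every
    feature as a linear combination of the selected features. This is a
    naive illustration that refits all regressions at each step; the
    paper's fast variant avoids these repeat computations.
    """
    n_samples, n_features = X.shape
    selected = []
    remaining = list(range(n_features))
    for _ in range(k):
        best_err, best_j = np.inf, None
        for j in remaining:
            S = X[:, selected + [j]]
            # Least-squares reconstruction of all features from subset S.
            coef, *_ = np.linalg.lstsq(S, X, rcond=None)
            err = np.sum((X - S @ coef) ** 2)
            if err < best_err:
                best_err, best_j = err, j
        selected.append(best_j)
        remaining.remove(best_j)
    return selected
```

As a sanity check, if a feature set contains an exactly duplicated column, the criterion selects only one copy, since the duplicate adds no reconstruction power -- which is the redundancy-removal behavior the abstract describes.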


