SWAG: A Wrapper Method for Sparse Learning

06/23/2020
by   Roberto Molinari, et al.

Predictive power has always been the main research focus of learning algorithms. While the general approach for these algorithms is to consider all available attributes in a dataset to best predict the response of interest, an important branch of research focuses on sparse learning. Indeed, in many practical settings we believe that only an extremely small combination of different attributes affects the response. However, even sparse-learning methods can preserve a high number of attributes in high-dimensional settings and possibly deliver inconsistent prediction performance. These methods can also be hard to interpret for researchers and practitioners, a problem which is even more relevant for the “black-box”-type mechanisms of many learning approaches. Finally, there is often a problem of replicability, since not all data-collection procedures measure (or observe) the same attributes and therefore cannot make use of proposed learners for testing purposes.

To address all the previous issues, we propose to study a procedure that combines screening and wrapper methods and aims to find a library of extremely low-dimensional attribute combinations (with consequently low data-collection and storage costs) in order to (i) match or improve the predictive performance of any particular learning method which uses all attributes as an input (including sparse learners); (ii) provide a low-dimensional network of attributes that is easily interpretable by researchers and practitioners; and (iii) increase the potential replicability of results due to a diversity of attribute combinations defining strong learners with equivalent predictive power. We call this algorithm the “Sparse Wrapper AlGorithm” (SWAG).
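To illustrate the screening-plus-wrapper idea described in the abstract, here is a minimal sketch, not the authors' implementation: it screens attributes one at a time, then repeatedly extends the best-performing subsets by a single attribute, keeping only a strong fraction at each dimension. The function name `swag_sketch`, the `evaluate` callback, and the `keep_frac` parameter are all illustrative assumptions; a real application would plug in cross-validated performance of the chosen learner as the score.

```python
def swag_sketch(n_attr, evaluate, max_dim=3, keep_frac=0.5):
    """Hypothetical sketch of a SWAG-style subset search (illustrative only).

    n_attr:    total number of attributes in the dataset
    evaluate:  callable(subset_indices) -> score (higher is better),
               e.g. cross-validated accuracy of a learner on that subset
    max_dim:   largest subset size to explore
    keep_frac: fraction of candidate subsets retained at each dimension

    Returns a library mapping subset size -> list of retained subsets.
    """
    # Step 1: screen each attribute alone and keep the best fraction.
    scored = sorted(((evaluate([j]), (j,)) for j in range(n_attr)), reverse=True)
    keep = max(1, int(len(scored) * keep_frac))
    library = {1: [s for _, s in scored[:keep]]}

    # Step 2: grow subsets one attribute at a time from the retained ones,
    # again keeping only the strongest candidates at each size.
    for d in range(2, max_dim + 1):
        candidates = set()
        for subset in library[d - 1]:
            for j in range(n_attr):
                if j not in subset:
                    candidates.add(tuple(sorted(subset + (j,))))
        scored = sorted(((evaluate(list(c)), c) for c in candidates), reverse=True)
        keep = max(1, int(len(scored) * keep_frac))
        library[d] = [s for _, s in scored[:keep]]
    return library
```

The returned library of low-dimensional subsets is what gives the interpretability and replicability benefits the abstract emphasizes: several small, roughly equivalent attribute combinations rather than a single large model.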


