Sparse Unit-Sum Regression

07/10/2019
by Nick Koning, et al.

This paper considers sparsity in linear regression under the restriction that the regression weights sum to one. We propose an approach that combines ℓ_0- and ℓ_1-regularization. We compute its solution by adapting a recent methodological innovation made by Bertsimas et al. (2016) for ℓ_0-regularization in standard linear regression. In a simulation experiment we compare our approach to ℓ_0-regularization and ℓ_1-regularization and find that it performs favorably in terms of predictive performance and sparsity. In an application to index tracking we show that our approach can obtain substantially sparser portfolios compared to ℓ_1-regularization while maintaining a similar tracking performance.
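To make the problem concrete, the sketch below illustrates the ℓ_1-regularized part of the setting: least-squares regression with weights constrained to sum to one. This is an illustrative formulation only, not the authors' combined ℓ_0/ℓ_1 method (which adapts the discrete-optimization approach of Bertsimas et al. (2016)); the data, penalty level `lam`, and use of `scipy.optimize.minimize` with SLSQP are assumptions for the demo. Note that under the unit-sum constraint the ℓ_1 penalty induces sparsity only when negative weights are allowed, as they are here.

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic data: sparse true weights that sum to one (hypothetical example).
rng = np.random.default_rng(0)
n, p = 100, 10
X = rng.standard_normal((n, p))
w_true = np.zeros(p)
w_true[:3] = [0.5, 0.3, 0.2]  # three nonzero weights, summing to one
y = X @ w_true + 0.01 * rng.standard_normal(n)

lam = 0.1  # illustrative penalty level, not tuned

def objective(w):
    # Least-squares loss plus an l1 penalty on the weights.
    r = y - X @ w
    return 0.5 * (r @ r) / n + lam * np.abs(w).sum()

# Unit-sum (portfolio-style) equality constraint: sum of weights equals one.
cons = {"type": "eq", "fun": lambda w: w.sum() - 1.0}
w0 = np.full(p, 1.0 / p)  # start from equal weights

res = minimize(objective, w0, method="SLSQP", constraints=cons)
w_hat = res.x
```

In the index-tracking application described above, `y` would be the index returns and the columns of `X` the returns of candidate assets, so the unit-sum constraint corresponds to a fully invested portfolio. The authors' ℓ_0 component would additionally bound the number of nonzero weights directly.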


