We consider a sparse multi-task regression framework for fitting a collection of related sparse models. Representing the models as nodes in a graph, with edges between related models, a framework that fuses lasso regressions with a total variation penalty is investigated. Under a form of restricted eigenvalue assumption, bounds on the prediction and squared error are given that depend upon the sparsity of each model and the differences between related models. The assumption concerns the smallest eigenvalue, restricted to the intersection of two cone sets, of the covariance matrix constructed from the agents' covariances; we show that it is satisfied whenever this matrix satisfies a restricted isometry property. In the case of a grid topology, high-probability bounds are given that match, up to log factors, the bound of the no-communication setting, in which a lasso is fitted to each model separately, divided by the number of agents. A decentralised dual method that exploits a convex-concave formulation of the penalised problem is proposed to fit the models, and its effectiveness is demonstrated on simulations against the group lasso and variants.
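The abstract does not state the penalised objective explicitly, but a natural reading of "fusing lasso regressions with a total variation penalty over a graph" is the following sketch, where the function name, arguments, and the particular choice of an ℓ1 edge penalty are assumptions for illustration rather than the paper's exact formulation:

```python
import numpy as np

def multitask_tv_objective(ys, Xs, B, edges, lam1, lam2):
    """Assumed form of the fused multi-task lasso objective.

    ys, Xs : per-node responses y_k and design matrices X_k
    B      : list of coefficient vectors beta_k, one per node/agent
    edges  : list of (u, v) pairs marking related models in the graph
    lam1   : weight of the per-model lasso (sparsity) penalty
    lam2   : weight of the total variation penalty across edges
    """
    K = len(ys)
    # Per-agent least-squares fit: (1/2) * sum_k ||y_k - X_k beta_k||^2
    fit = 0.5 * sum(np.sum((ys[k] - Xs[k] @ B[k]) ** 2) for k in range(K))
    # Lasso penalty encouraging each model to be sparse
    sparsity = lam1 * sum(np.abs(B[k]).sum() for k in range(K))
    # Total variation penalty fusing coefficients of related models
    tv = lam2 * sum(np.abs(B[u] - B[v]).sum() for (u, v) in edges)
    return fit + sparsity + tv
```

Setting `lam2 = 0` decouples the problem into independent lassos (the no-communication baseline mentioned above), while a large `lam2` forces neighbouring models on the graph toward a single shared coefficient vector.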
