
High-dimensional Joint Sparsity Random Effects Model for Multi-task Learning
Joint sparsity regularization in multitask learning has attracted much ...

Sparse Empirical Bayes Analysis (SEBA)
We consider a joint processing of n independent sparse regression proble...

Flagging and handling cellwise outliers by robust estimation of a covariance matrix
We propose a method for detecting cellwise outliers. Given a robust cova...

On Dantzig and Lasso estimators of the drift in a high dimensional Ornstein-Uhlenbeck model
In this paper we present new theoretical results for the Dantzig and Las...

Efficient structure learning with automatic sparsity selection for causal graph processes
We propose a novel algorithm for efficiently computing a sparse directed...

Logistic regression and Ising networks: prediction and estimation when violating lasso assumptions
The Ising model was originally developed to model magnetisation of solid...

Sparse Overlapping Sets Lasso for Multitask Learning and its Application to fMRI Analysis
Multitask learning can be effective when features useful in one task are...
Decentralised Sparse Multi-Task Regression
We consider a sparse multi-task regression framework for fitting a collection of related sparse models. Representing models as nodes in a graph with edges between related models, a framework that fuses lasso regressions with the total variation penalty is investigated. Under a form of restricted eigenvalue assumption, bounds on prediction and squared error are given that depend upon the sparsity of each model and the differences between related models. This assumption relates to the smallest eigenvalue, restricted to the intersection of two cone sets, of the covariance matrix constructed from each of the agents' covariances. We show that this assumption can be satisfied if the constructed covariance matrix satisfies a restricted isometry property. In the case of a grid topology, high-probability bounds are given that match, up to log factors, the no-communication setting of fitting a lasso on each model, divided by the number of agents. A decentralised dual method that exploits a convex-concave formulation of the penalised problem is proposed to fit the models, and its effectiveness is demonstrated on simulations against the group lasso and variants.
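The penalised objective the abstract describes (per-task lasso fits, fused across graph edges by a total variation penalty) can be illustrated with a minimal centralised sketch. This is not the paper's decentralised dual method; the data, graph, step sizes, and penalty weights below are illustrative assumptions, and a plain subgradient method stands in for the actual solver:

```python
import numpy as np

# Hedged sketch of the fused multi-task lasso objective
#   sum_i ||y_i - X_i b_i||^2 + lam1 * sum_i ||b_i||_1
#                             + lam2 * sum_{(i,j) in E} ||b_i - b_j||_1,
# where tasks are nodes of a graph and edges fuse related models.
rng = np.random.default_rng(0)
p, n_samples = 20, 50
beta_true = np.zeros(p)
beta_true[:3] = [2.0, -1.5, 1.0]          # shared sparse support

# Two related tasks joined by the single edge (0, 1);
# task 1 slightly perturbs the shared model.
edges = [(0, 1)]
X = [rng.standard_normal((n_samples, p)) for _ in range(2)]
y = [X[0] @ beta_true + 0.1 * rng.standard_normal(n_samples),
     X[1] @ (beta_true + 0.1) + 0.1 * rng.standard_normal(n_samples)]

lam1, lam2 = 0.5, 0.5                     # illustrative penalty weights

def objective(B):
    fit = sum(np.sum((y[i] - X[i] @ B[i]) ** 2) for i in range(2))
    sparsity = lam1 * sum(np.abs(B[i]).sum() for i in range(2))
    fusion = lam2 * sum(np.abs(B[i] - B[j]).sum() for i, j in edges)
    return fit + sparsity + fusion

def subgradient(B):
    G = np.zeros_like(B)
    for i in range(2):
        G[i] = -2 * X[i].T @ (y[i] - X[i] @ B[i]) + lam1 * np.sign(B[i])
    for i, j in edges:                    # total variation fusion term
        s = np.sign(B[i] - B[j])
        G[i] += lam2 * s
        G[j] -= lam2 * s
    return G

B = np.zeros((2, p))                      # one coefficient row per task
f0 = objective(B)
for t in range(1, 2001):
    B -= (1e-3 / np.sqrt(t)) * subgradient(B)  # diminishing step size
f1 = objective(B)                         # should be far below f0
```

The fusion term is what couples the tasks: with `lam2 = 0` the problem separates into independent lasso fits (the no-communication baseline the abstract compares against), while larger `lam2` pulls neighbouring models together.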