Multi-Task Feature Learning Via Efficient l2,1-Norm Minimization

05/09/2012
by   Jun Liu, et al.

The problem of joint feature selection across a group of related tasks has applications in many areas, including biomedical informatics and computer vision. We consider the l2,1-norm regularized regression model for joint feature selection from multiple tasks, which can be derived in a probabilistic framework by assuming a suitable prior from the exponential family. One appealing property of the l2,1-norm regularization is that it encourages multiple predictors to share similar sparsity patterns. However, the resulting optimization problem is challenging to solve due to the non-smoothness of the l2,1-norm regularization. In this paper, we propose to accelerate the computation by reformulating it as two equivalent smooth convex optimization problems, which are then solved via Nesterov's method, an optimal first-order black-box method for smooth convex optimization. A key building block in solving the reformulations is the Euclidean projection. We show that the Euclidean projection for the first reformulation can be computed analytically, while the Euclidean projection for the second one can be computed in linear time. Empirical evaluations on several data sets verify the efficiency of the proposed algorithms.
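To illustrate the shared-sparsity effect of the l2,1-norm penalty described in the abstract, the following is a minimal sketch of an accelerated proximal gradient (FISTA-style) solver for the l2,1-regularized multi-task least-squares problem. Note that this is a related but simpler approach than the paper's own algorithms, which solve equivalent smooth constrained reformulations via Nesterov's method with Euclidean projections; the function names, problem setup, and parameters below are illustrative assumptions, not the authors' implementation.

```python
# Sketch: l2,1-regularized multi-task regression
#   min_W  0.5 * ||X W - Y||_F^2  +  lam * sum_j ||W[j, :]||_2
# solved with an accelerated proximal gradient (FISTA-style) loop.
# Illustrative only; the paper instead uses smooth reformulations
# solved by Nesterov's method with Euclidean projections.
import numpy as np

def prox_l21(W, tau):
    """Row-wise group soft-thresholding: prox of tau * sum_j ||W[j, :]||_2."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - tau / np.maximum(norms, 1e-12))
    return scale * W

def multi_task_l21(X, Y, lam, n_iter=500):
    """Accelerated proximal gradient for the objective above (hypothetical helper)."""
    d, T = X.shape[1], Y.shape[1]
    L = np.linalg.norm(X, 2) ** 2          # Lipschitz constant of the smooth part
    W = Z = np.zeros((d, T))
    t = 1.0
    for _ in range(n_iter):
        grad = X.T @ (X @ Z - Y)           # gradient of 0.5 * ||X Z - Y||_F^2
        W_new = prox_l21(Z - grad / L, lam / L)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        Z = W_new + ((t - 1.0) / t_new) * (W_new - W)  # Nesterov extrapolation
        W, t = W_new, t_new
    return W

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((100, 30))
    W_true = np.zeros((30, 4))
    W_true[:5] = rng.standard_normal((5, 4))   # 5 features shared across all 4 tasks
    Y = X @ W_true + 0.01 * rng.standard_normal((100, 4))
    W_hat = multi_task_l21(X, Y, lam=2.0)
    print("rows selected:", np.flatnonzero(np.linalg.norm(W_hat, axis=1) > 1e-6))
```

Because the penalty is applied to the rows of W, a feature is either kept or discarded jointly for all tasks, which is the shared sparsity pattern the abstract refers to.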
