New Tight Relaxations of Rank Minimization for Multi-Task Learning

12/09/2021
by   Wei Chang, et al.

Multi-task learning has been widely studied; it assumes that different tasks can share a common yet latent low-rank subspace, so that learning multiple tasks jointly is better than learning them independently. In this paper, we propose two novel multi-task learning formulations based on two regularization terms, which learn the optimal shared latent subspace by minimizing exactly the k smallest singular values. The proposed regularization terms are tighter approximations of rank minimization than the trace norm. However, the exact rank minimization problem is NP-hard. We therefore design a novel re-weighted iterative strategy to solve our models, which tractably handles the exact rank minimization problem by setting a large penalty parameter. Experimental results on benchmark datasets demonstrate that our methods correctly recover the low-rank structure shared across tasks and outperform related multi-task learning methods.
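The contrast the abstract draws between the trace norm and a regularizer on only the k smallest singular values can be illustrated numerically. The sketch below is not the paper's algorithm (whose re-weighted iterations are not specified in the abstract); it only shows, on a toy task-weight matrix, that the trace norm penalizes all singular values including the informative ones, while the sum of the k smallest singular values vanishes exactly when the rank is low enough:

```python
import numpy as np

def trace_norm(W):
    """Classical convex relaxation of rank: the sum of ALL singular values."""
    return np.linalg.svd(W, compute_uv=False).sum()

def k_min_singular_values(W, k):
    """Tighter rank surrogate from the abstract: sum of the k smallest
    singular values. It is zero iff rank(W) <= min(W.shape) - k, whereas
    the trace norm also shrinks the large (informative) singular values."""
    s = np.linalg.svd(W, compute_uv=False)
    return np.sort(s)[:k].sum()

# Toy task-weight matrix: 5 tasks, 8 features, sharing a rank-2 subspace.
rng = np.random.default_rng(0)
W = rng.standard_normal((5, 2)) @ rng.standard_normal((2, 8))

print(trace_norm(W))                  # positive: penalizes the shared structure too
print(k_min_singular_values(W, k=3))  # essentially zero: only residual singular values
```

With rank(W) = 2 and min(W.shape) = 5, the 3 smallest singular values are numerically zero, so the proposed surrogate incurs no penalty on the shared low-rank structure, unlike the trace norm.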


