Task-Feature Collaborative Learning with Application to Personalized Attribute Prediction

04/29/2020
by   Zhiyong Yang, et al.

As an effective learning paradigm for settings with insufficient training samples, Multi-Task Learning (MTL) encourages knowledge sharing across multiple related tasks to improve overall performance. A major challenge in MTL arises from negative transfer: sharing knowledge with dissimilar or hard tasks often worsens performance. Although a substantial number of studies have addressed negative transfer, most existing methods model the transfer relationship only through task correlations, leaving the transfer across features and tasks unconsidered. Unlike these methods, our goal is to alleviate negative transfer collaboratively across both features and tasks. To this end, we propose a novel multi-task learning method called Task-Feature Collaborative Learning (TFCL). Specifically, we first propose a base model with a heterogeneous block-diagonal structure regularizer that leverages the collaborative grouping of features and tasks and suppresses inter-group knowledge sharing. We then propose an optimization method for this model. Extensive theoretical analysis shows that the proposed method has two benefits: (a) it enjoys global convergence, and (b) it comes with a block-diagonal structure recovery guarantee. As a practical extension, we extend the base model to allow overlapping features and to differentiate hard tasks. We further apply it to the personalized attribute prediction problem with fine-grained modeling of user behaviors. Finally, experimental results on both simulated and real-world datasets demonstrate the effectiveness of the proposed method.
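The abstract names the heterogeneous block-diagonal structure regularizer but does not give its form, so the following is only a minimal NumPy sketch of the general idea: features and tasks are each assigned to groups, and weights linking a feature and a task from different groups are penalized so that knowledge sharing stays within blocks. The function names, the fixed co-grouping, and the plain gradient-descent loop are illustrative assumptions, not TFCL's actual regularizer or optimizer.

```python
import numpy as np

# Toy setup: d features, T tasks, n samples per task.
d, T, n = 20, 6, 50
rng = np.random.default_rng(0)
X_list = [rng.normal(size=(n, d)) for _ in range(T)]
y_list = [rng.normal(size=n) for _ in range(T)]

# Hypothetical co-grouping: each feature and each task is assigned to one of K groups.
K = 2
feature_groups = rng.integers(0, K, size=d)
task_groups = rng.integers(0, K, size=T)

def inter_group_penalty(W, feature_groups, task_groups):
    """Sum of squared weights that link a feature and a task from different
    groups, i.e. entries of W lying outside the block-diagonal pattern."""
    mask = (feature_groups[:, None] != task_groups[None, :]).astype(float)
    return np.sum((W * mask) ** 2)

def objective(W, lam=1.0):
    """Per-task squared loss plus the inter-group (off-block) penalty."""
    loss = sum(np.mean((X @ W[:, t] - y) ** 2)
               for t, (X, y) in enumerate(zip(X_list, y_list)))
    return loss + lam * inter_group_penalty(W, feature_groups, task_groups)

# Plain gradient descent on the task-feature weight matrix W, as a stand-in
# for the paper's optimization method.
W = np.zeros((d, T))
lam, lr = 1.0, 0.05
mask = (feature_groups[:, None] != task_groups[None, :]).astype(float)
for _ in range(200):
    grad = np.zeros_like(W)
    for t, (X, y) in enumerate(zip(X_list, y_list)):
        grad[:, t] = 2.0 / n * X.T @ (X @ W[:, t] - y)  # squared-loss gradient
    grad += 2.0 * lam * (W * mask)                       # penalty gradient
    W -= lr * grad
print("final objective:", objective(W, lam))
```

Under this penalty, weights connecting, say, group-0 features to group-1 tasks are driven toward zero, which mimics the suppression of inter-group knowledge sharing described above; the actual TFCL model also learns the grouping rather than fixing it in advance.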


