MT-GBM: A Multi-Task Gradient Boosting Machine with Shared Decision Trees

01/17/2022
by ZhenZhe Ying, et al.

Despite the success of deep learning in computer vision and natural language processing, the Gradient Boosted Decision Tree (GBDT) remains one of the most powerful tools for applications with tabular data, such as e-commerce and FinTech. However, applying GBDT to multi-task learning is still a challenge: unlike deep models, which can jointly learn a shared latent representation across multiple tasks, GBDT can hardly learn a shared tree structure. In this paper, we propose the Multi-task Gradient Boosting Machine (MT-GBM), a GBDT-based method for multi-task learning. MT-GBM finds shared tree structures and splits branches according to multi-task losses. First, it assigns multiple outputs to each leaf node. Next, it computes the gradient corresponding to each output (task). We then propose an algorithm to combine the gradients of all tasks and update the tree. Finally, we apply MT-GBM to LightGBM. Experiments show that MT-GBM improves the performance of the main task significantly, demonstrating that the proposed method is both efficient and effective.
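
To make the gradient-combination step concrete, here is a minimal sketch. Stock LightGBM does not support multiple outputs per leaf, so this example only illustrates the idea of growing shared trees against a combined multi-task gradient, using LightGBM's standard custom-objective hook: per-task gradients are computed against the same raw score and blended with a fixed weight before each boosting iteration. The toy data, the auxiliary labels y_aux, the weight W_AUX, and the function name combined_objective are illustrative assumptions, not details from the paper.

```python
import numpy as np
import lightgbm as lgb

# Toy multi-task setup: a main regression target and a correlated
# auxiliary target that share the same features (illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))
y_main = X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.3, size=1000)
y_aux = X[:, 0] - 0.4 * X[:, 2] + rng.normal(scale=0.3, size=1000)

# Fixed blend weight for the auxiliary gradient; the paper proposes an
# algorithm to combine task gradients, this constant is just an assumption.
W_AUX = 0.3


def combined_objective(y_true, y_pred):
    """Blend per-task squared-error gradients so that every split of the
    shared trees is chosen with respect to the combined signal."""
    grad_main = y_pred - y_true   # gradient of 0.5 * (y_pred - y_main)^2
    grad_aux = y_pred - y_aux     # same loss against the auxiliary labels
    grad = grad_main + W_AUX * grad_aux
    hess = np.full_like(y_pred, 1.0 + W_AUX)  # constant hessians for L2 loss
    return grad, hess


# Train a single booster on the combined gradient (sklearn API accepts a
# callable objective with signature (y_true, y_pred) -> (grad, hess)).
model = lgb.LGBMRegressor(objective=combined_objective, n_estimators=200,
                          learning_rate=0.05, num_leaves=31)
model.fit(X, y_main)
```

In the full MT-GBM each leaf carries one output per task and the gradient combination is part of the proposed algorithm; the fixed blend weight above is only a stand-in for that step.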


