MetaBalance: Improving Multi-Task Recommendations via Adapting Gradient Magnitudes of Auxiliary Tasks

03/14/2022
by Yun He, et al.

In many personalized recommendation scenarios, the generalization ability of a target task can be improved by learning additional auxiliary tasks alongside the target task on a multi-task network. However, this approach often suffers from a serious optimization imbalance problem. On the one hand, one or more auxiliary tasks might have a larger influence than the target task and even dominate the network weights, resulting in worse recommendation accuracy for the target task. On the other hand, the influence of one or more auxiliary tasks might be too weak to assist the target task. More challenging still, this imbalance changes dynamically throughout training and varies across different parts of the same network. We propose a new method, MetaBalance, which balances auxiliary losses by directly manipulating their gradients w.r.t. the shared parameters of the multi-task network. Specifically, in each training iteration, and adaptively for each part of the network, the gradient of an auxiliary loss is carefully reduced or enlarged so that its magnitude is closer to that of the target loss's gradient, preventing auxiliary tasks from becoming so strong that they dominate the target task or so weak that they fail to help it. Moreover, the proximity between the gradient magnitudes can be flexibly adjusted to adapt MetaBalance to different scenarios. Experiments show that our proposed method achieves a significant improvement of 8.34% in terms of NDCG@10 over the strongest baseline on two real-world datasets. The code of our approach can be found here: https://github.com/facebookresearch/MetaBalance
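The core idea above can be sketched in a few lines: for each shared parameter, rescale every auxiliary gradient so its magnitude moves toward the target gradient's magnitude, with a relax factor controlling how closely the magnitudes are matched. The following is a minimal, simplified sketch of that gradient-rescaling step (function name, signature, and the NumPy setting are assumptions for illustration; the official PyTorch implementation at the repository above additionally uses moving averages of gradient norms and operates per network part):

```python
import numpy as np

def metabalance_step(grad_target, grads_aux, relax=0.7, eps=1e-12):
    """Sketch of a MetaBalance-style rescaling for one shared parameter
    tensor (hypothetical helper, not the official API).

    Each auxiliary gradient is rescaled so its magnitude moves toward the
    target gradient's magnitude. `relax` in [0, 1] controls the proximity:
    relax=1.0 forces the magnitudes to match exactly, relax=0.0 leaves the
    auxiliary gradient unchanged.
    """
    tgt_norm = np.linalg.norm(grad_target)
    balanced = []
    for g in grads_aux:
        g_norm = np.linalg.norm(g)
        # Scale factor that would match the target magnitude exactly,
        # softened by the relax factor.
        scale = relax * (tgt_norm / (g_norm + eps)) + (1.0 - relax)
        balanced.append(g * scale)
    # The shared parameters are then updated with the summed gradients.
    return grad_target + sum(balanced)
```

With relax=1.0, an auxiliary gradient ten times larger than the target gradient is shrunk to the same magnitude before the update, so it can no longer dominate the shared weights; symmetrically, a very small auxiliary gradient is enlarged so it still contributes.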


Related research

- Self-Supervised Generalisation with Meta Auxiliary Learning (01/25/2019)
  Learning with auxiliary tasks has been shown to improve the generalisati...

- Adapting Auxiliary Losses Using Gradient Similarity (12/05/2018)
  One approach to deal with the statistical inefficiency of neural network...

- Improving Self-supervised Learning for Out-of-distribution Task via Auxiliary Classifier (09/07/2022)
  In real world scenarios, out-of-distribution (OOD) datasets may have a l...

- GradTS: A Gradient-Based Automatic Auxiliary Task Selection Method Based on Transformer Networks (09/13/2021)
  A key problem in multi-task learning (MTL) research is how to select hig...

- Group Buying Recommendation Model Based on Multi-task Learning (11/25/2022)
  In recent years, group buying has become one popular kind of online shop...

- DEPHN: Different Expression Parallel Heterogeneous Network using virtual gradient optimization for Multi-task Learning (07/24/2023)
  Recommendation system algorithm based on multi-task learning (MTL) is th...

- Auxiliary Learning as an Asymmetric Bargaining Game (01/31/2023)
  Auxiliary learning is an effective method for enhancing the generalizati...
