TBGC: Task-level Backbone-Oriented Gradient Clip for Multi-Task Foundation Model Learning

07/07/2023
by Zelun Zhang, et al.

The AllInOne training paradigm squeezes a wide range of tasks into a unified model in a multi-task learning manner. However, optimization in multi-task learning is more challenging than in single-task learning, because the gradient norms of different tasks may vary greatly, making the backbone overly biased towards one specific task. To address this issue, we propose the task-level backbone-oriented gradient clip paradigm. Compared with the vanilla gradient clip method, it has two points of emphasis: 1) gradient clipping is performed independently for each task; 2) the backbone gradients generated by each task are rescaled to the same norm scale. Based on the experimental results, we argue that the task-level backbone-oriented gradient clip paradigm can relieve the gradient bias problem to some extent. We also propose a novel multi-branch data augmentation strategy in which conflicting augmentations are placed in different branches. Our approach proved effective, achieving 1st place on Leaderboard A and 2nd place on Leaderboard B of the CVPR2023 Foundation Model Challenge. It is worth noting that while Leaderboard A evaluates all three tasks (detection, segmentation, and fine-grained classification), Leaderboard B does not evaluate the segmentation task, in which our team has a large advantage.
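To make the paradigm concrete, the following is a minimal PyTorch-style sketch of the two points above, assuming a shared backbone module and a dict of per-task heads and losses; the function name tbgc_backward, the clip_norm value, and the accumulation details are illustrative assumptions rather than the authors' exact implementation.

```python
# Minimal sketch of task-level backbone-oriented gradient clip (TBGC),
# assuming a PyTorch setup with a shared `backbone` and a dict of per-task
# `heads`. Names and defaults here are illustrative assumptions.

import torch


def tbgc_backward(backbone, heads, losses, clip_norm=1.0):
    """Back-propagate each task's loss separately and rescale its backbone
    gradient to a common norm before accumulating (the two points of TBGC)."""
    backbone_params = [p for p in backbone.parameters() if p.requires_grad]
    accumulated = [torch.zeros_like(p) for p in backbone_params]

    for task_name, loss in losses.items():
        head_params = [p for p in heads[task_name].parameters() if p.requires_grad]

        # 1) Gradients are computed and clipped independently for each task.
        grads = torch.autograd.grad(
            loss, backbone_params + head_params,
            retain_graph=True, allow_unused=True,
        )
        bb_grads = grads[:len(backbone_params)]
        head_grads = grads[len(backbone_params):]

        # 2) Rescale this task's backbone gradient to the same norm scale.
        #    Unlike vanilla clipping, the rescaling here is unconditional.
        norms = [g.norm() for g in bb_grads if g is not None]
        total_norm = torch.norm(torch.stack(norms)) if norms else torch.tensor(0.0)
        scale = clip_norm / (total_norm + 1e-12)
        for acc, g in zip(accumulated, bb_grads):
            if g is not None:
                acc.add_(g, alpha=scale.item())

        # Each head only receives gradients from its own task.
        for p, g in zip(head_params, head_grads):
            if g is not None:
                p.grad = g

    # Write the norm-balanced, accumulated gradient back to the backbone.
    for p, acc in zip(backbone_params, accumulated):
        p.grad = acc
```

A single optimizer step over the backbone and all heads then applies the norm-balanced update.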


