GPPF: A General Perception Pre-training Framework via Sparsely Activated Multi-Task Learning

08/03/2022
by Benyuan Sun, et al.

Pre-training over mixed multi-task, multi-domain, and multi-modal data remains an open challenge in vision perception pre-training. In this paper, we propose GPPF, a General Perception Pre-training Framework, which pre-trains a task-level dynamic network, composed of knowledge "legos" in each layer, on labeled multi-task and multi-domain datasets. Inspired by humans' innate ability to learn in complex environments, we identify and transfer three critical elements to deep networks: (1) simultaneous exposure to diverse cross-task and cross-domain information in each batch; (2) partitioned knowledge storage in separate lego units driven by knowledge sharing; (3) sparse activation of a subset of lego units for both pre-training and downstream tasks. Notably, jointly training disparate vision tasks is non-trivial due to their differences in input shapes, loss functions, output formats, data distributions, etc. We therefore develop a plug-and-play multi-task training algorithm that supports Single Iteration Multiple Tasks (SIMT) concurrent training. SIMT lays the foundation for pre-training with large-scale multi-task, multi-domain datasets and proves essential for stable training in our GPPF experiments. Extensive experiments show that our GPPF-R50 model achieves significant improvements of 2.5-5.8 points over a strong baseline on the 8 pre-training tasks in GPPF-15M and reaches state-of-the-art results on 22 downstream tasks under similar computation budgets. We also validate the generalization ability of GPPF to state-of-the-art vision transformers, with consistent improvements. These results demonstrate the effective knowledge learning, storing, sharing, and transfer enabled by the GPPF framework.
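The abstract describes two mechanisms: layers built from "lego" units that each task activates sparsely, and SIMT training that mixes batches from several tasks in a single iteration. The sketch below is a minimal, illustrative PyTorch rendering of those two ideas only; the names (LegoLayer, task_to_units, simt_step) and the fixed per-task unit assignment are assumptions for illustration, not the paper's actual implementation.

```python
# Minimal sketch (not the authors' code): a layer of parallel "lego" units,
# each task activating only its assigned subset, plus a SIMT-style step that
# accumulates losses from several task batches before one shared update.
import torch
import torch.nn as nn


class LegoLayer(nn.Module):
    """One layer of parallel 'lego' units; each task uses a sparse subset."""

    def __init__(self, dim, num_units, task_to_units):
        super().__init__()
        self.units = nn.ModuleList([nn.Linear(dim, dim) for _ in range(num_units)])
        self.task_to_units = task_to_units  # e.g. {"cls": [0, 1], "seg": [1, 2, 3]}

    def forward(self, x, task):
        # Only the units assigned to this task are activated (and receive gradients).
        active = self.task_to_units[task]
        return torch.stack([self.units[i](x) for i in active], dim=0).mean(dim=0)


def simt_step(backbone, heads, losses, optimizer, task_batches):
    """Single Iteration Multiple Tasks: gradients from every task batch are
    accumulated, then one optimizer step updates the shared parameters."""
    optimizer.zero_grad()
    total = 0.0
    for task, (inputs, targets) in task_batches.items():
        features = backbone(inputs, task=task)      # sparse subset of legos per task
        loss = losses[task](heads[task](features), targets)
        loss.backward()                             # accumulate gradients across tasks
        total += loss.item()
    optimizer.step()
    return total


if __name__ == "__main__":
    dim = 16
    backbone = LegoLayer(dim, num_units=4, task_to_units={"cls": [0, 1], "seg": [1, 2, 3]})
    heads = {"cls": nn.Linear(dim, 10), "seg": nn.Linear(dim, 21)}
    losses = {"cls": nn.CrossEntropyLoss(), "seg": nn.CrossEntropyLoss()}
    params = list(backbone.parameters()) + [p for h in heads.values() for p in h.parameters()]
    opt = torch.optim.SGD(params, lr=0.1)
    batches = {
        "cls": (torch.randn(8, dim), torch.randint(0, 10, (8,))),
        "seg": (torch.randn(8, dim), torch.randint(0, 21, (8,))),
    }
    print("summed loss:", simt_step(backbone, heads, losses, opt, batches))
```

In this toy setup the per-task unit assignment is fixed by hand; in the framework described above, which units a task uses and shares is what the pre-training itself is meant to learn.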
