Explore and Exploit the Diverse Knowledge in Model Zoo for Domain Generalization

06/05/2023
by   Yimeng Chen, et al.

Advances in pretraining techniques have produced a vast zoo of publicly available pretrained models. How to exploit these resources to obtain models with robust out-of-distribution (OOD) generalization on downstream tasks has become a crucial research question. Prior work has focused primarily on identifying the single most powerful model in the zoo, neglecting the diverse inductive biases the zoo contains. This paper argues that the knowledge in weaker models is also valuable and presents a method for leveraging the zoo's diversity to improve OOD generalization. Specifically, we study how different pretrained models behave across the domains of a downstream task by characterizing the variation in their encoded representations along two dimensions: diversity shift and correlation shift. This characterization enables a new algorithm that integrates diverse pretrained models, not only the strongest ones, to achieve enhanced out-of-distribution generalization. The proposed method achieves state-of-the-art empirical results on a variety of datasets, validating the benefits of exploiting diverse knowledge.
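To make the idea of integrating a model zoo concrete, here is a minimal, hypothetical sketch of one simple way to pool features from several pretrained encoders and train a single downstream probe on the combined representation. The encoders below are toy random projections standing in for real pretrained networks, and the ridge-regression probe is an illustrative choice; the paper's actual algorithm additionally uses the diversity-shift and correlation-shift characterization to decide how models are integrated, which this toy does not implement.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for a "model zoo": each encoder is a fixed random
# nonlinear projection of the input (real encoders would be pretrained
# networks with their own inductive biases).
def make_encoder(in_dim, feat_dim, rng):
    W = rng.normal(size=(in_dim, feat_dim)) / np.sqrt(in_dim)
    return lambda X: np.tanh(X @ W)

in_dim, feat_dim, n = 20, 8, 200
zoo = [make_encoder(in_dim, feat_dim, rng) for _ in range(3)]

# Synthetic downstream task.
X = rng.normal(size=(n, in_dim))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)

# Integrate the zoo by concatenating every encoder's features,
# then fit a ridge-regularized linear probe in closed form.
Z = np.concatenate([enc(X) for enc in zoo], axis=1)
lam = 1e-2
w = np.linalg.solve(Z.T @ Z + lam * np.eye(Z.shape[1]), Z.T @ y)

preds = (Z @ w > 0.5).astype(float)
train_acc = (preds == y).mean()
print(f"train accuracy of zoo-concatenation probe: {train_acc:.2f}")
```

Concatenation treats every model equally; the point of the paper is that a smarter integration, informed by how each model's representation shifts across domains, can retain the useful inductive biases of weaker models without being dominated by any single one.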


