Comparing Foundation Models using Data Kernels

05/09/2023
by Brandon Duderstadt, et al.

Recent advances in self-supervised learning and neural network scaling have enabled the creation of large models, known as foundation models, which can be easily adapted to a wide range of downstream tasks. The current paradigm for comparing foundation models involves evaluating them with aggregate metrics on various benchmark datasets. This method of model comparison is heavily dependent on the chosen evaluation metric, which makes it unsuitable for situations where the ideal metric is either not obvious or unavailable. In this work, we present a methodology for directly comparing the embedding space geometry of foundation models, which facilitates model comparison without the need for an explicit evaluation metric. Our methodology is grounded in random graph theory and enables valid hypothesis testing of embedding similarity on a per-datum basis. Further, we demonstrate how our methodology can be extended to facilitate population-level model comparison. In particular, we show how our framework can induce a manifold of models equipped with a distance function that correlates strongly with several downstream metrics. We remark on the utility of this population-level model comparison as a first step towards a taxonomic science of foundation models.
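To make the core idea concrete, the sketch below illustrates one simple way to compare the embedding-space geometry of two models without a downstream metric: embed a shared dataset with each model, form each model's Gram ("data kernel") matrix, and compute a normalized alignment between the two kernels. This is a minimal illustration in the spirit of the abstract, not the paper's actual procedure; the function names and the CKA-style alignment score are assumptions introduced here for demonstration.

```python
import numpy as np

def data_kernel(embeddings):
    # Center the embeddings (n_samples x dim) and return the Gram matrix,
    # which captures the pairwise geometry of the embedded datapoints.
    X = embeddings - embeddings.mean(axis=0)
    return X @ X.T

def kernel_alignment(K1, K2):
    # CKA-style normalized alignment between two data kernels.
    # Returns a value in [0, 1]; 1.0 means the two embedding geometries
    # agree up to rotation and scaling.
    num = np.sum(K1 * K2)
    denom = np.linalg.norm(K1) * np.linalg.norm(K2)
    return num / denom

# Toy example: two hypothetical "models" embedding the same 5 datapoints.
rng = np.random.default_rng(0)
emb_a = rng.normal(size=(5, 8))
emb_b = emb_a @ rng.normal(size=(8, 8))  # model B as a linear map of model A

score = kernel_alignment(data_kernel(emb_a), data_kernel(emb_b))
print(f"kernel alignment: {score:.3f}")
```

A population-level comparison, as described in the abstract, could then be built by computing such pairwise similarities across many models and embedding the resulting distance matrix to obtain a manifold of models; the hypothesis-testing machinery from random graph theory is not reproduced here.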

