Effective Robustness against Natural Distribution Shifts for Models with Different Training Data

02/02/2023
by Zhouxing Shi, et al.

“Effective robustness” measures the extra out-of-distribution (OOD) robustness beyond what can be predicted from the in-distribution (ID) performance. Existing effective robustness evaluations typically use a single test set, such as ImageNet, to evaluate ID accuracy. This becomes problematic when evaluating models trained on different data distributions, e.g., comparing models trained on ImageNet with zero-shot language-image pre-trained models trained on LAION. In this paper, we propose a new effective robustness evaluation metric for comparing the effective robustness of models trained on different data distributions. To do this, we control for accuracy on multiple ID test sets that cover the training distributions of all the evaluated models. Our new evaluation metric provides a better estimate of effective robustness and explains the surprising effective robustness gains that zero-shot CLIP-like models exhibit when only one ID dataset is considered; these gains diminish under our evaluation.
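As a rough illustration of the quantity the abstract refers to, the sketch below fits a baseline trend between logit-transformed ID accuracy and logit-transformed OOD accuracy over a pool of models, and reports effective robustness as a model's OOD accuracy beyond that baseline; passing accuracies on several ID test sets turns the fit into a multiple regression, in the spirit of the metric proposed here. The logit-scale linear fit follows common practice in prior effective robustness work; the function names, the ordinary least-squares fit, and the data layout are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of effective robustness with one or multiple ID test sets.
# Names and fitting choices are assumptions for illustration only.
import numpy as np
from scipy.special import logit, expit


def fit_baseline(id_accs, ood_accs):
    """Fit a linear trend from logit(ID accuracy) to logit(OOD accuracy).

    id_accs:  (n_models, n_id_sets) array of ID accuracies in (0, 1);
              a single column recovers the usual single-ID-set baseline.
    ood_accs: (n_models,) array of OOD accuracies in (0, 1).
    Returns the regression coefficients (intercept first).
    """
    X = np.column_stack([np.ones(len(ood_accs)), logit(id_accs)])
    y = logit(ood_accs)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef


def effective_robustness(coef, id_acc, ood_acc):
    """OOD accuracy of one model beyond the baseline's prediction."""
    x = np.concatenate([[1.0], logit(np.atleast_1d(id_acc))])
    predicted_ood = expit(x @ coef)
    return ood_acc - predicted_ood


# Example with synthetic numbers: two ID test sets make the baseline a
# multiple regression over both ID accuracies rather than a one-variable fit.
id_accs = np.array([[0.70, 0.55], [0.76, 0.60], [0.80, 0.66], [0.85, 0.72]])
ood_accs = np.array([0.40, 0.45, 0.52, 0.58])
coef = fit_baseline(id_accs, ood_accs)
print(effective_robustness(coef, [0.78, 0.70], 0.56))
```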


Related research

10/12/2022 · Are Sample-Efficient NLP Models More Robust?
Recent work has observed that pre-trained models have higher out-of-dist...

04/10/2023 · Zero-Shot In-Distribution Detection in Multi-Object Settings Using Vision-Language Foundation Models
Removing out-of-distribution (OOD) images from noisy images scraped from...

05/03/2022 · Data Determines Distributional Robustness in Contrastive Language Image Pre-training (CLIP)
Contrastively trained image-text models such as CLIP, ALIGN, and BASIC h...

02/24/2023 · Robust Weight Signatures: Gaining Robustness as Easy as Patching Weights?
Given a robust model trained to be resilient to one or multiple types of...

07/17/2019 · Robustness properties of Facebook's ResNeXt WSL models
We investigate the robustness properties of ResNeXt image recognition mo...

03/28/2022 · Understanding out-of-distribution accuracies through quantifying difficulty of test samples
Existing works show that although modern neural networks achieve remarka...

11/17/2021 · Understanding and Testing Generalization of Deep Networks on Out-of-Distribution Data
Deep network models perform excellently on In-Distribution (ID) data, bu...
