Efficiently Robustify Pre-trained Models

09/14/2023
by Nishant Jain et al.

A recent trend in deep learning has been to train large-scale models with high parameter counts on large datasets. However, the robustness of such large-scale models in real-world settings remains a less-explored topic. In this work, we first benchmark the performance of these models under different perturbations and datasets that represent real-world shifts, and highlight their degraded performance under these shifts. We then discuss how existing robustification schemes based on full model fine-tuning may not be a scalable option for very large networks and can also cause them to forget some of their desired characteristics. Finally, we propose a simple and cost-effective method to solve this problem, inspired by the knowledge-transfer literature. It involves robustifying smaller models at a lower computational cost, and then using them as teachers to tune a small fraction of these large-scale networks, reducing the overall computational overhead. We evaluate the proposed method under various vision perturbations, including the ImageNet-C, R, S, and A datasets, as well as in transfer learning and zero-shot evaluation setups on different datasets. Benchmark results show that our method efficiently induces robustness in these large-scale models while requiring significantly less time, and also preserves the transfer learning and zero-shot properties of the original model, which none of the existing methods achieve.
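The core recipe the abstract describes, robustifying a small model cheaply and then distilling it into a small trainable fraction of the large pre-trained network, can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's exact method: the model choices (a ResNet-18 teacher standing in for the robustified small model, a ViT-B/16 student), the decision to unfreeze only the last encoder block and the classification head, and the plain KL distillation loss are all assumptions made here for illustration.

```python
# Minimal sketch: distill a small (assumed already-robustified) teacher
# into a small unfrozen fraction of a large pre-trained student.
import torch
import torch.nn.functional as F
from torchvision import models

device = "cuda" if torch.cuda.is_available() else "cpu"

# Small teacher, assumed to have been robustified beforehand (e.g., tuned
# with heavy augmentations); a plain ImageNet ResNet-18 is a placeholder.
teacher = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
teacher.eval().requires_grad_(False).to(device)

# Large pre-trained student whose robustness we want to improve.
student = models.vit_b_16(weights=models.ViT_B_16_Weights.IMAGENET1K_V1).to(device)

# Tune only a fraction of the student: freeze everything, then unfreeze
# the last encoder block and the classification head (an assumption; the
# choice of which fraction to tune is a design knob).
for p in student.parameters():
    p.requires_grad = False
for p in student.encoder.layers[-1].parameters():
    p.requires_grad = True
for p in student.heads.parameters():
    p.requires_grad = True

trainable = [p for p in student.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(trainable, lr=1e-4)

def distill_step(images, labels, temperature=2.0, alpha=0.5):
    """One update: match the robust teacher's softened logits while
    keeping a standard cross-entropy term on the ground-truth labels."""
    images, labels = images.to(device), labels.to(device)
    with torch.no_grad():
        t_logits = teacher(images)
    s_logits = student(images)
    kd = F.kl_div(
        F.log_softmax(s_logits / temperature, dim=1),
        F.softmax(t_logits / temperature, dim=1),
        reduction="batchmean",
    ) * temperature ** 2
    ce = F.cross_entropy(s_logits, labels)
    loss = alpha * kd + (1 - alpha) * ce
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The fraction of unfrozen student parameters controls the trade-off between compute and robustness gains, and keeping most of the backbone frozen is what would let the student retain its original transfer-learning and zero-shot behavior.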

