Does Robustness on ImageNet Transfer to Downstream Tasks?

04/08/2022
by   Yutaro Yamada, et al.
As clean ImageNet accuracy nears its ceiling, the research community is increasingly concerned with robust accuracy under distribution shifts. While a variety of methods have been proposed to robustify neural networks, these techniques often target models trained for ImageNet classification. At the same time, it is common practice to use ImageNet-pretrained backbones for downstream tasks such as object detection, semantic segmentation, and image classification in other domains. This raises a question: can these robust image classifiers transfer their robustness to downstream tasks? For object detection and semantic segmentation, we find that a vanilla Swin Transformer, a variant of the Vision Transformer tailored for dense prediction tasks, transfers robustness better than convolutional neural networks that are trained to be robust to corrupted versions of ImageNet. For CIFAR-10 classification, we find that models robustified for ImageNet do not retain their robustness when fully fine-tuned. These findings suggest that current robustification techniques tend to emphasize ImageNet evaluations, and that network architecture is a strong source of robustness in transfer learning.


