Effect of large-scale pre-training on full and few-shot transfer learning for natural and medical images

05/31/2021
by Mehdi Cherti, et al.

Transfer learning aims to exploit pre-trained models for more efficient follow-up training on a wide range of downstream tasks and datasets, enabling successful training even on small data. A recent line of work posits strong benefits for model generalization and transfer when model size, data size, and compute budget are increased during pre-training. However, it remains largely unclear whether the transfer improvements observed with increasing scale also hold when source and target data distributions are far apart. In this work, we conduct large-scale pre-training on large source datasets of either natural (ImageNet-21k/1k) or medical chest X-ray images and compare full and few-shot transfer using different target datasets from both the natural and medical imaging domains. Our observations provide evidence that, while pre-training and transfer on closely related datasets show a clear benefit of increasing model and data size during pre-training, such benefits are not clearly visible when source and target datasets are further apart. These observations hold for both full and few-shot transfer and indicate that scaling laws hinting at improved generalization and transfer with increasing model and data size are incomplete: to correctly predict the effect of varying model and data size during pre-training on transfer, they should also take into account how distinct the source and target data distributions are. (A repository for reproducing the experiments will be made available.)
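To make the setup concrete, the following is a minimal sketch of the transfer protocol the abstract describes: take a model pre-trained on a large source dataset, swap its classification head for the target task, and fine-tune on a small amount of target data. The ImageNet-pre-trained ResNet-50, the target class count, and the shots-per-class value are illustrative assumptions, not the paper's actual models, datasets, or training schedule.

```python
# Minimal transfer-learning sketch, assuming a torchvision ResNet-50
# pre-trained on ImageNet-1k as the source model; the target task size
# and shot count below are hypothetical stand-ins.
import torch
import torch.nn as nn
from torchvision.models import resnet50, ResNet50_Weights

NUM_TARGET_CLASSES = 5   # hypothetical target task
SHOTS_PER_CLASS = 10     # "few-shot": only a handful of labeled examples per class

# Source model: weights learned during large-scale pre-training.
model = resnet50(weights=ResNet50_Weights.IMAGENET1K_V1)

# Replace the source classification head with one for the target task.
model.fc = nn.Linear(model.fc.in_features, NUM_TARGET_CLASSES)

# Full transfer fine-tunes all parameters; a linear-probe variant would
# freeze the backbone and optimize only the new head (model.fc).
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
loss_fn = nn.CrossEntropyLoss()

def finetune_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One optimization step on a (small) batch of target-domain data."""
    model.train()
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Smoke test with random tensors standing in for target images/labels.
x = torch.randn(SHOTS_PER_CLASS, 3, 224, 224)
y = torch.randint(0, NUM_TARGET_CLASSES, (SHOTS_PER_CLASS,))
print(finetune_step(x, y))
```

In this sketch, few-shot and full transfer differ only in how much labeled target data is fed through finetune_step; the paper's comparison of source/target domain distance then amounts to varying which pre-trained weights and which target dataset fill these roles.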
