Comparing to Learn: Surpassing ImageNet Pretraining on Radiographs By Comparing Image Representations

07/15/2020
by   Hong-Yu Zhou, et al.

In the deep learning era, pretrained models play an important role in medical image analysis, where ImageNet pretraining has been widely adopted as the default choice. However, there is an obvious domain gap between natural images and medical images. To bridge this gap, we propose a new pretraining method that learns from 700k radiographs without any manual annotations. We call our method Comparing to Learn (C2L) because it learns robust features by comparing different image representations. To verify the effectiveness of C2L, we conduct comprehensive ablation studies and evaluate it on different tasks and datasets. Experimental results on radiographs show that C2L significantly outperforms ImageNet pretraining and previous state-of-the-art approaches. Code and models are available.
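The abstract describes learning features by comparing image representations, i.e. a contrastive pretraining objective. C2L's exact formulation is given in the paper; as a rough, hypothetical illustration of the general idea, the sketch below implements a minimal InfoNCE-style contrastive loss in NumPy, where a query representation is pulled toward the representation of an augmented view of the same image (the positive key) and pushed away from representations of other images (negatives). All function and variable names here are illustrative, not the authors' code.

```python
import numpy as np

def info_nce_loss(query, key, negatives, temperature=0.2):
    """InfoNCE-style contrastive loss (illustrative, not the exact C2L objective).
    `query` and `key` are L2-normalized features of two views of the same image;
    `negatives` is a (K, d) array of L2-normalized features of other images."""
    l_pos = query @ key                          # similarity to the positive pair
    l_neg = negatives @ query                    # similarities to K negatives
    logits = np.concatenate([[l_pos], l_neg]) / temperature
    logits -= logits.max()                       # numerical stability
    # cross-entropy with the positive pair at index 0
    return -logits[0] + np.log(np.exp(logits).sum())

def normalize(v):
    return v / np.linalg.norm(v)

rng = np.random.default_rng(0)
q = normalize(rng.normal(size=128))
k_pos = normalize(q + 0.1 * rng.normal(size=128))     # augmented view: close to q
negatives = np.stack([normalize(rng.normal(size=128)) for _ in range(8)])

loss_matched = info_nce_loss(q, k_pos, negatives)     # positive is a true match
loss_random = info_nce_loss(q, normalize(rng.normal(size=128)), negatives)
print(loss_matched, loss_random)
```

Minimizing such a loss over many image pairs encourages representations of different views of the same radiograph to agree while staying distinct from other images, which is the sense in which robust features are learned "by comparing".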

