Learning Better Contrastive View from Radiologist's Gaze

05/15/2023
by Sheng Wang, et al.

Recent self-supervised contrastive learning methods greatly benefit from the Siamese structure, which aims to minimize distances between positive pairs. These methods usually apply random data augmentation to input images, expecting the augmented views of the same image to be similar and positively paired. However, random augmentation may overlook image semantic information and degrade the quality of augmented views in contrastive learning. This issue becomes more challenging in medical images, since the abnormalities related to diseases can be tiny and are easily corrupted (e.g., cropped out) under the current scheme of random augmentation. In this work, we first demonstrate that, for widely used X-ray images, the conventional augmentation prevalent in contrastive pre-training can degrade the performance of downstream diagnosis or classification tasks. We then propose a novel augmentation method, FocusContrast, which learns from radiologists' gaze during diagnosis and generates contrastive views for medical images guided by radiologists' visual attention. Specifically, we track the gaze movement of radiologists and model their visual attention while they read and diagnose X-ray images. The learned model can predict the radiologists' visual attention on a new input image, and in turn guide an attention-aware augmentation that rarely discards disease-related abnormalities. As a plug-and-play and framework-agnostic module, FocusContrast consistently improves the state-of-the-art contrastive learning methods SimCLR, MoCo, and BYOL by 4.0-7.0% in classification accuracy on a knee X-ray dataset.
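The abstract describes attention-aware augmentation only at a high level. As a rough illustration of the idea, the sketch below shows how a predicted attention map could guide random cropping so that high-attention (potentially disease-related) regions are retained in the augmented view. The function name, the rejection-sampling rule over candidate crops, and the `keep_ratio` threshold are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np


def attention_guided_crop(image, attention, crop_size, n_trials=10,
                          keep_ratio=0.6, rng=None):
    """Pick a random crop that retains most of the predicted visual attention.

    `image` is an (H, W) or (H, W, C) array and `attention` an (H, W) saliency
    map, e.g. the output of a gaze-prediction model. Among `n_trials` random
    candidate crops, return the first one that keeps at least `keep_ratio` of
    the total attention mass; otherwise fall back to the candidate with the
    highest retained attention.
    """
    rng = rng or np.random.default_rng()
    h, w = attention.shape
    ch, cw = crop_size
    total = attention.sum() + 1e-8  # avoid division by zero on blank maps

    best_score, best_box = -1.0, (0, 0)
    for _ in range(n_trials):
        top = rng.integers(0, h - ch + 1)
        left = rng.integers(0, w - cw + 1)
        score = attention[top:top + ch, left:left + cw].sum() / total
        if score >= keep_ratio:
            return image[top:top + ch, left:left + cw]
        if score > best_score:
            best_score, best_box = score, (top, left)

    top, left = best_box
    return image[top:top + ch, left:left + cw]
```

Because such a crop operator only replaces the random-crop step of the augmentation pipeline, it can be dropped into SimCLR, MoCo, or BYOL without changing the contrastive objective itself, which is consistent with the plug-and-play claim above.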

