ConCL: Concept Contrastive Learning for Dense Prediction Pre-training in Pathology Images

07/14/2022
by   Jiawei Yang, et al.

Detecting and segmenting objects within whole slide images is essential in the computational pathology workflow. Self-supervised learning (SSL) is appealing for such annotation-heavy tasks. Despite extensive benchmarks of SSL methods on dense tasks in natural images, such studies are, unfortunately, absent for pathology. This paper intends to narrow that gap. We first benchmark representative SSL methods for dense prediction tasks on pathology images. Then, we propose concept contrastive learning (ConCL), an SSL framework for dense pre-training. We explore how ConCL performs with concepts provided by different sources and ultimately propose a simple, dependency-free concept-generating method that relies on neither external segmentation algorithms nor saliency detection models. Extensive experiments demonstrate the superiority of ConCL over previous state-of-the-art SSL methods across different settings. Along the way, we distill several important and intriguing components that contribute to the success of dense pre-training for pathology images. We hope this work provides useful data points and encourages the community to conduct ConCL pre-training for problems of interest. Code is available.
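To make the idea of concept-level contrastive pre-training concrete, below is a minimal sketch of what such a loss could look like. This is illustrative only, not the authors' released implementation: it assumes each augmented view's dense feature map comes with an integer "concept" mask, that the same concept IDs appear in both views, and the helper names (concept_pool, concl_loss) are hypothetical.

```python
# Minimal sketch of a concept-level contrastive loss (illustrative, not ConCL's code).
import torch
import torch.nn.functional as F


def concept_pool(features, concept_mask, concept_id):
    """Average-pool a (C, H, W) feature map over one concept's mask region."""
    mask = (concept_mask == concept_id).float()            # (H, W)
    pooled = (features * mask).sum(dim=(1, 2)) / mask.sum().clamp(min=1.0)
    return pooled                                           # (C,)


def concl_loss(feat_q, feat_k, mask_q, mask_k, temperature=0.2):
    """InfoNCE over concept embeddings: the same concept across two augmented
    views is a positive pair; all other concepts in the key view are negatives."""
    concept_ids = torch.unique(mask_q)
    q = torch.stack([concept_pool(feat_q, mask_q, c) for c in concept_ids])
    k = torch.stack([concept_pool(feat_k, mask_k, c) for c in concept_ids])
    q, k = F.normalize(q, dim=1), F.normalize(k, dim=1)
    logits = q @ k.t() / temperature                        # (N_concepts, N_concepts)
    targets = torch.arange(len(concept_ids))                # diagonal entries are positives
    return F.cross_entropy(logits, targets)
```

The key design choice this sketch highlights is that contrast happens between pooled concept embeddings rather than whole-image embeddings, which is what makes the pre-training signal dense. How the concept masks are produced (the paper's dependency-free generator versus external segmentation or saliency models) is orthogonal to this loss.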

Related research

05/30/2022  Self-Supervised Pre-training of Vision Transformers for Dense Prediction Tasks
We present a new self-supervised pre-training of Vision Transformers for...

04/04/2023  Multi-Level Contrastive Learning for Dense Prediction Task
In this work, we present Multi-Level Contrastive Learning for Dense Pred...

08/07/2023  Exploring Visual Pre-training for Robot Manipulation: Datasets, Models and Methods
Visual pre-training with large-scale real-world data has made great prog...

11/14/2022  What Images are More Memorable to Machines?
This paper studies the problem of measuring and predicting how memorable...

02/17/2022  Augment with Care: Contrastive Learning for the Boolean Satisfiability Problem
Supervised learning can improve the design of state-of-the-art solvers f...

06/08/2023  R-MAE: Regions Meet Masked Autoencoders
Vision-specific concepts such as "region" have played a key role in exte...

02/02/2023  Boosting Low-Data Instance Segmentation by Unsupervised Pre-training with Saliency Prompt
Recently, inspired by DETR variants, query-based end-to-end instance seg...
