SLPD: Slide-level Prototypical Distillation for WSIs

07/20/2023
by   Zhimiao Yu, et al.

Improving feature representation ability is fundamental to many whole slide pathological image (WSI) tasks. Recent works have achieved great success in pathology-specific self-supervised learning (SSL). However, most of them focus only on learning patch-level representations, leaving a gap between the pretext task and slide-level downstream tasks, e.g., subtyping, grading and staging. Aiming at slide-level representations, we propose Slide-Level Prototypical Distillation (SLPD) to explore intra- and inter-slide semantic structures for context modeling on WSIs. Specifically, we iteratively cluster the regions (4096x4096 patches) within each WSI to yield prototypes and encourage the region representations to move closer to their assigned prototypes. By representing each slide with its prototypes, we further select similar slides via a set distance over prototypes and assign regions to cross-slide prototypes for distillation. SLPD achieves state-of-the-art results on multiple slide-level benchmarks, demonstrating that learning the semantic structure of slides is a suitable proxy task for WSI analysis. Code will be available at https://github.com/Carboxy/SLPD.
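The abstract outlines two mechanisms: intra-slide clustering that pulls region embeddings toward their assigned prototypes, and an inter-slide set distance over prototypes used to select similar slides. The paper's exact objectives are not given here, so the following is a minimal PyTorch sketch under stated assumptions: the k-means procedure, the InfoNCE-style prototype-pulling loss, and the Chamfer-style set distance are illustrative choices, and all function names are hypothetical rather than the authors' implementation.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def kmeans_prototypes(region_feats, k, iters=10):
    """Cluster one slide's region embeddings into k prototypes.

    region_feats: (n_regions, d) L2-normalized region features.
    Returns: (k, d) prototypes and (n_regions,) cluster assignments.
    """
    # Initialize prototypes from k randomly chosen regions.
    idx = torch.randperm(region_feats.size(0))[:k]
    protos = region_feats[idx].clone()
    for _ in range(iters):
        # Assign each region to its nearest prototype by cosine similarity.
        assign = (region_feats @ protos.t()).argmax(dim=1)
        # Recompute each prototype as the normalized mean of its members.
        for j in range(k):
            members = region_feats[assign == j]
            if members.numel() > 0:
                protos[j] = F.normalize(members.mean(dim=0), dim=0)
    return protos, assign

def intra_slide_loss(region_feats, protos, assign, tau=0.1):
    """Pull each region toward its assigned prototype
    (an InfoNCE-style formulation, assumed here)."""
    logits = region_feats @ protos.t() / tau   # (n_regions, k)
    return F.cross_entropy(logits, assign)

def prototype_set_distance(protos_a, protos_b):
    """Chamfer-style set distance between two slides' prototype sets,
    usable for selecting similar slides."""
    sim = protos_a @ protos_b.t()              # pairwise cosine similarities
    d_ab = (1.0 - sim.max(dim=1).values).mean()
    d_ba = (1.0 - sim.max(dim=0).values).mean()
    return 0.5 * (d_ab + d_ba)
```

A quick usage sketch with random features, assuming 64 regions per slide and 256-d embeddings:

```python
feats = F.normalize(torch.randn(64, 256), dim=1)
protos, assign = kmeans_prototypes(feats, k=8)
loss = intra_slide_loss(feats, protos, assign)
```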


