Multi-modal Masked Autoencoders Learn Compositional Histopathological Representations

09/04/2022
by Wisdom Oluchi Ikezogwo, et al.

Self-supervised learning (SSL) enables learning useful inductive biases through pretext tasks that require no labels. The unlabeled nature of SSL makes it especially valuable for whole slide histopathological images (WSIs), where patch-level human annotation is difficult. The Masked Autoencoder (MAE) is a recent SSL method well suited to digital pathology, as it requires neither negative sampling nor, in most cases, data augmentation. However, the domain shift between natural images and digital pathology images motivates further research into designing MAEs for patch-level WSIs. In this paper, we investigate several design choices for MAE in histopathology. Furthermore, we introduce a multi-modal MAE (MMAE) that leverages the specific compositionality of Hematoxylin & Eosin (H&E) stained WSIs. We performed our experiments on the public patch-level dataset NCT-CRC-HE-100K. The results show that the MMAE architecture outperforms supervised baselines and other state-of-the-art SSL techniques on an eight-class tissue phenotyping task, using only 100 labeled samples for fine-tuning. Our code is available at https://github.com/wisdomikezogwo/MMAE_Pathology
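The pretext task underlying MAE pre-training is simple: randomly mask a large fraction of image patches and train the model to reconstruct them from the visible remainder. The following is a minimal sketch of that random patch masking step; the 75% mask ratio and the 14×14 patch grid follow the original MAE paper's defaults, not necessarily the settings used in this work, and the function name is illustrative.

```python
import numpy as np

def random_mask_patches(patches, mask_ratio=0.75, seed=None):
    """Randomly split patch embeddings into visible and masked sets,
    as in MAE pre-training. `patches` is an (N, D) array of flattened
    patch embeddings; only the visible patches are fed to the encoder.
    """
    rng = np.random.default_rng(seed)
    n = patches.shape[0]
    n_keep = int(n * (1 - mask_ratio))        # patches the encoder sees
    perm = rng.permutation(n)                 # random shuffle of indices
    visible_idx = np.sort(perm[:n_keep])
    masked_idx = np.sort(perm[n_keep:])       # reconstruction targets
    return patches[visible_idx], visible_idx, masked_idx

# Example: a 14x14 grid of 768-dim patch embeddings, 75% masked
patches = np.zeros((196, 768))
visible, vis_idx, mask_idx = random_mask_patches(patches, 0.75, seed=0)
```

With a 75% ratio, only 49 of 196 patches reach the encoder, which is what makes MAE pre-training computationally cheap relative to contrastive methods that process full views.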


