Mutual Contrastive Learning to Disentangle Whole Slide Image Representations for Glioma Grading

03/08/2022
by   Lipei Zhang, et al.
Whole slide images (WSI) provide valuable phenotypic information for the histological assessment and malignancy grading of tumors. WSI-based computational pathology promises to provide rapid diagnostic support and to facilitate digital health. The most commonly used WSI are derived from formalin-fixed paraffin-embedded (FFPE) tissue and from frozen sections. Currently, most automatic tumor-grading models are developed on FFPE sections, which can be affected by artifacts introduced during tissue processing. Here we propose a mutual contrastive learning scheme that integrates FFPE and frozen sections and disentangles cross-modality representations for glioma grading. We first design a mutual learning scheme to jointly optimize model training on FFPE and frozen sections. We then develop a multi-modality domain alignment mechanism to ensure semantic consistency during backbone training. Finally, we design a sphere normalized temperature-scaled cross-entropy (NT-Xent) loss, which promotes cross-modality representation disentangling between FFPE and frozen sections. Our experiments show that the proposed scheme outperforms models trained on either single modality or on mixed modalities, and that the sphere NT-Xent loss outperforms other typical metric-learning loss functions.
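As context for the loss described above, a minimal sketch of the standard NT-Xent loss (the base that the paper's sphere-normalized variant builds on) is shown below. This is a generic illustration, not the authors' sphere-normalized formulation; the convention that adjacent indices (2k, 2k+1) form positive pairs is an assumption of this sketch.

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors given as lists of floats."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(a * a for a in v))
    return dot / (nu * nv)

def nt_xent(z, tau=0.5):
    """Standard NT-Xent loss over 2N embeddings.

    Assumes embeddings at indices (2k, 2k+1) are a positive pair
    (e.g. an FFPE patch and its matched frozen-section patch).
    tau is the temperature parameter.
    """
    n = len(z)
    total = 0.0
    for i in range(n):
        j = i + 1 if i % 2 == 0 else i - 1  # index of the positive partner
        # Denominator sums similarities to all other samples (the negatives
        # plus the positive), excluding the anchor itself.
        denom = sum(math.exp(cosine(z[i], z[k]) / tau)
                    for k in range(n) if k != i)
        total += -math.log(math.exp(cosine(z[i], z[j]) / tau) / denom)
    return total / n
```

With well-aligned positive pairs and orthogonal negatives, the loss is low; shuffling the pairing raises it, which is the signal that drives the representations of matched cross-modality patches together.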

Related research

10/17/2021  Contrastive Learning of Visual-Semantic Embeddings
Contrastive learning is a powerful technique to learn representations th...

03/10/2023  Understanding and Constructing Latent Modality Structures in Multi-modal Representation Learning
Contrastive loss has been increasingly used in learning representations ...

08/08/2021  Contrastive Representation Learning for Rapid Intraoperative Diagnosis of Skull Base Tumors Imaged Using Stimulated Raman Histology
Background: Accurate diagnosis of skull base tumors is essential for pro...

06/06/2022  Multimodal Contrastive Learning with LIMoE: the Language-Image Mixture of Experts
Large sparsely-activated models have obtained excellent performance in m...

05/01/2023  A Simplified Framework for Contrastive Learning for Node Representations
Contrastive learning has recently established itself as a powerful self-...

08/12/2023  Contrastive Learning for Cross-modal Artist Retrieval
Music retrieval and recommendation applications often rely on content fe...

11/04/2020  Mutual Modality Learning for Video Action Classification
The construction of models for video action classification progresses ra...
