Vision Transformers for Small Histological Datasets Learned through Knowledge Distillation

05/27/2023
by Neel Kanwal, et al.

Computational Pathology (CPATH) systems have the potential to automate diagnostic tasks. However, artifacts on digitized histological glass slides, known as Whole Slide Images (WSIs), may hamper the overall performance of CPATH systems. Deep Learning (DL) models such as Vision Transformers (ViTs) can detect and exclude artifacts before the diagnostic algorithm is run. A simple way to develop robust and generalized ViTs is to train them on massive datasets. Unfortunately, acquiring large medical datasets is expensive and inconvenient, prompting the need for a generalized artifact detection method for WSIs. In this paper, we present a student-teacher recipe to improve the classification performance of ViT for the air bubble detection task. The ViT, trained under the student-teacher framework, boosts its performance by distilling existing knowledge from a high-capacity teacher model. Our best-performing ViT yields an F1-score of 0.961 and an MCC of 0.911, a 7% gain in MCC over stand-alone training. The proposed method presents a new perspective on leveraging knowledge distillation over transfer learning, encouraging the use of customized transformers for efficient preprocessing pipelines in CPATH systems.
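The student-teacher recipe described above is, in its common formulation, a soft-target distillation objective: the student ViT is trained both on the ground-truth artifact labels and to match the teacher's temperature-softened output distribution. The sketch below is a minimal, illustrative implementation of the standard Hinton-style distillation loss, not the paper's published code; the function name `distillation_loss` and the hyperparameters `T` and `alpha` are assumptions chosen for illustration.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Hinton-style soft-target distillation loss (illustrative sketch).

    T (temperature) and alpha (hard/soft mixing weight) are assumed
    hyperparameters, not values reported in the paper.
    """
    # Hard-label term: ordinary cross-entropy on the artifact labels.
    hard = F.cross_entropy(student_logits, labels)
    # Soft-label term: KL divergence between the temperature-softened
    # teacher and student distributions; the T**2 factor restores the
    # gradient scale that dividing the logits by T shrinks.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T ** 2)
    return alpha * hard + (1.0 - alpha) * soft

# Example per-batch usage with a frozen teacher (hypothetical models):
# with torch.no_grad():
#     t_logits = teacher(images)
# loss = distillation_loss(student(images), t_logits, labels)
```

Mixing the two terms lets the student benefit from the teacher's inter-class similarity structure while remaining anchored to the true labels, which is the usual motivation for distillation on small datasets like the one considered here.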

Related research:

02/21/2023 · MaskedKD: Efficient Distillation of Vision Transformers with Masked Images
Knowledge distillation is a popular and effective regularization techniq...

07/17/2022 · Subclass Knowledge Distillation with Known Subclass Labels
This work introduces a novel knowledge distillation framework for classi...

09/12/2021 · On the Efficiency of Subclass Knowledge Distillation in Classification Tasks
This work introduces a novel knowledge distillation framework for classi...

10/15/2021 · From Multimodal to Unimodal Attention in Transformers using Knowledge Distillation
Multimodal Deep Learning has garnered much interest, and transformers ha...

12/28/2022 · OVO: One-shot Vision Transformer Search with Online distillation
Pure transformers have shown great potential for vision tasks recently. ...

10/14/2022 · Knowledge Distillation approach towards Melanoma Detection
Melanoma is regarded as the most threatening among all skin cancers. The...

04/01/2022 · Consistency driven Sequential Transformers Attention Model for Partially Observable Scenes
Most hard attention models initially observe a complete scene to locate ...
