HistoTransfer: Understanding Transfer Learning for Histopathology

06/13/2021
by Yash Sharma, et al.

Advances in digital pathology and artificial intelligence have enabled deep learning-based computer vision techniques for automated disease diagnosis and prognosis. However, whole slide images (WSIs) present unique computational and algorithmic challenges. WSIs are gigapixel-sized, making it infeasible to use them directly for training deep neural networks. Hence, a two-stage modeling approach is adopted: patch representations are extracted first, followed by aggregation for WSI-level prediction. These approaches require detailed pixel-level annotations for training the patch encoder, but obtaining such annotations is time-consuming and tedious for medical experts. Transfer learning is used to address this gap, with deep learning architectures pre-trained on ImageNet generating the patch-level representations. Even though ImageNet differs significantly from histopathology data, pre-trained networks have been shown to perform impressively on histopathology data. Moreover, progress in self-supervised and multi-task learning, coupled with the release of multiple histopathology datasets, has led to histopathology-specific pre-trained networks. In this work, we compare the performance of features extracted from networks trained on ImageNet and on histopathology data. We use an attention pooling network over these extracted features for slide-level aggregation. We investigate whether features learned using more complex networks lead to performance gains. We use a simple top-k sampling approach for fine-tuning the framework and study the representation similarity between frozen and fine-tuned networks using Centered Kernel Alignment (CKA). Further, to examine whether intermediate block representations are better suited for feature extraction and whether ImageNet architectures are unnecessarily large for histopathology, we truncate ResNet18 and DenseNet121 at intermediate blocks and examine the performance.
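
Below is a minimal sketch (PyTorch, assumed) of the two-stage pipeline described above: an ImageNet-pretrained ResNet18 used as a frozen patch encoder, optionally truncated after an intermediate block, followed by attention pooling over the patch embeddings for slide-level prediction. The class names, the number of retained blocks, and the hidden dimensions are illustrative choices, not details from the paper.

```python
import torch
import torch.nn as nn
from torchvision import models


def truncated_resnet18(num_blocks: int = 4) -> nn.Module:
    """Keep the stem and the first `num_blocks` residual stages of ResNet18."""
    backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
    stages = [backbone.conv1, backbone.bn1, backbone.relu, backbone.maxpool,
              backbone.layer1, backbone.layer2, backbone.layer3, backbone.layer4]
    encoder = nn.Sequential(*stages[:4 + num_blocks],
                            nn.AdaptiveAvgPool2d(1), nn.Flatten())
    for p in encoder.parameters():      # frozen feature extractor
        p.requires_grad_(False)
    return encoder


class AttentionPooling(nn.Module):
    """Attention-based pooling over a bag of patch embeddings for one WSI."""

    def __init__(self, in_dim: int, hidden_dim: int = 128, num_classes: int = 2):
        super().__init__()
        self.attention = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.Tanh(),
                                       nn.Linear(hidden_dim, 1))
        self.classifier = nn.Linear(in_dim, num_classes)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (num_patches, in_dim)
        weights = torch.softmax(self.attention(feats), dim=0)   # (num_patches, 1)
        slide_embedding = (weights * feats).sum(dim=0)          # (in_dim,)
        return self.classifier(slide_embedding)


# Usage: extract features for one slide's patches, then aggregate.
encoder = truncated_resnet18(num_blocks=2)      # e.g. stop after the second block
patches = torch.randn(32, 3, 224, 224)          # placeholder patch batch
with torch.no_grad():
    feats = encoder(patches)                    # (32, 128) for a 2-block ResNet18
logits = AttentionPooling(in_dim=feats.shape[1])(feats)
```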

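The representation-similarity analysis mentioned above relies on Centered Kernel Alignment. A short sketch of linear CKA between two feature matrices is given below; the variable names are illustrative, and only the standard formula is assumed.

```python
import numpy as np


def linear_cka(x: np.ndarray, y: np.ndarray) -> float:
    """Linear CKA between two representation matrices of shape (n_examples, dim)."""
    x = x - x.mean(axis=0, keepdims=True)   # center each feature over examples
    y = y - y.mean(axis=0, keepdims=True)
    # ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    numerator = np.linalg.norm(y.T @ x, ord="fro") ** 2
    denominator = (np.linalg.norm(x.T @ x, ord="fro") *
                   np.linalg.norm(y.T @ y, ord="fro"))
    return float(numerator / denominator)


# Example: similarity between frozen and fine-tuned patch embeddings.
feats_frozen = np.random.randn(500, 512)
feats_finetuned = np.random.randn(500, 512)
print(linear_cka(feats_frozen, feats_finetuned))
```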
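The abstract also mentions a simple top-k sampling approach for fine-tuning. One plausible variant is sketched below, reusing the AttentionPooling module from the earlier sketch: all patches are scored without gradients, and only the k highest-scoring patches are re-encoded with gradients so the encoder and aggregation head can be updated end to end. The ranking criterion, the value of k, and the helper names are assumptions for illustration, not the paper's exact procedure.

```python
import torch
import torch.nn.functional as F


def topk_finetune_step(encoder, pooler, patches, label, optimizer, k=16):
    """One fine-tuning step: train on the k most attended patches of a slide."""
    encoder.train()  # assumes the encoder has been unfrozen for fine-tuning

    # 1) Score every patch without building a graph over the whole bag.
    with torch.no_grad():
        all_feats = encoder(patches)
        scores = pooler.attention(all_feats).squeeze(-1)      # (num_patches,)
    topk_idx = scores.topk(min(k, patches.shape[0])).indices

    # 2) Re-encode only the selected patches with gradients enabled.
    feats_k = encoder(patches[topk_idx])
    logits = pooler(feats_k)                                  # slide-level logits
    loss = F.cross_entropy(logits.unsqueeze(0), label.view(1))

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```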