Resolution-Based Distillation for Efficient Histology Image Classification
Developing deep learning models to analyze histology images has been computationally challenging, as the massive size of the images causes excessive strain on all parts of the computing pipeline. This paper proposes a novel deep learning-based methodology for improving the computational efficiency of histology image classification. The proposed approach is robust when used with images that have reduced input resolution and can be trained effectively with limited labeled data. Our method uses knowledge distillation (KD) to transfer learned knowledge from a teacher model, pre-trained on the original high-resolution (HR) images, to a student model trained on the same images at a much lower resolution. To address the lack of large-scale labeled histology image datasets, we perform KD in a self-supervised manner. We evaluate our approach on two histology image datasets associated with celiac disease (CD) and lung adenocarcinoma (LUAD). Our results show that a combination of KD and self-supervision allows the student model to approach, and in some cases surpass, the classification accuracy of the teacher, while being far more efficient. Additionally, we observe an increase in student classification performance as the size of the unlabeled dataset increases, indicating that there is potential to scale further. For the CD data, our model outperforms the HR teacher model while needing 4 times fewer computations. For the LUAD data, our student model results at 1.25x magnification are within 3% of the teacher model at 10x magnification, with a 64 times reduction in computational cost. Moreover, our CD outcomes benefit from performance scaling with the use of more unlabeled data: at 0.625x magnification, using unlabeled data improves accuracy by 4%. These results pave the way for practical deep learning solutions for digital pathology on standard computational hardware.
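To make the distillation setup concrete, the sketch below shows one way a high-resolution teacher can supervise a low-resolution student on unlabeled patches. This is not the authors' implementation: the ResNet-18 backbones, the 4x downsampling factor, the softmax temperature, and the optimizer settings are illustrative assumptions, and the soft-target KL loss stands in for whatever self-supervised objective the paper actually uses.

```python
# Minimal sketch of resolution-based knowledge distillation (illustrative only).
# A teacher pre-trained on high-resolution (HR) patches provides soft targets
# for a student that sees the same patches downsampled, so no labels are needed.
import torch
import torch.nn.functional as F
from torchvision import models

teacher = models.resnet18(num_classes=2).eval()  # assumed pre-trained on HR images
student = models.resnet18(num_classes=2)         # trained only on low-resolution input
optimizer = torch.optim.Adam(student.parameters(), lr=1e-4)
T = 4.0  # softmax temperature (assumed value)

def distill_step(hr_batch: torch.Tensor) -> float:
    """One self-supervised KD step on an unlabeled batch of HR patches."""
    # Downsample to mimic the low-resolution input the student is meant to handle.
    lr_batch = F.interpolate(hr_batch, scale_factor=0.25,
                             mode="bilinear", align_corners=False)
    with torch.no_grad():
        teacher_logits = teacher(hr_batch)       # soft targets from the HR teacher
    student_logits = student(lr_batch)           # student never sees HR pixels
    # KL divergence between temperature-softened distributions (standard KD loss).
    loss = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * (T * T)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because the loss depends only on the teacher's predictions, the same step can be run over arbitrarily large pools of unlabeled whole-slide patches, which is where the reported scaling with unlabeled data comes from.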