Self-Knowledge Distillation for Surgical Phase Recognition

06/15/2023
by Jinglu Zhang, et al.

Purpose: Advances in surgical phase recognition are generally led by training deeper networks. Rather than going further with a more complex solution, we believe that current models can be exploited better. We propose a self-knowledge distillation framework that can be integrated into current state-of-the-art (SOTA) models without adding any extra complexity to the models or requiring additional annotations.

Methods: Knowledge distillation is a framework for network regularization in which knowledge is distilled from a teacher network to a student network. In self-knowledge distillation, the student model becomes its own teacher, so that the network learns from itself. Most phase recognition models follow an encoder-decoder framework. Our framework utilizes self-knowledge distillation in both stages: the teacher model guides the training of the student model to extract enhanced feature representations from the encoder and to build a more robust temporal decoder that tackles the over-segmentation problem.

Results: We validate our proposed framework on the public Cholec80 dataset. Our framework is embedded on top of four popular SOTA approaches and consistently improves their performance; in particular, our best GRU model boosts performance by +3.33% over its baseline model.

Conclusion: We embed a self-knowledge distillation framework in the surgical phase recognition training pipeline for the first time. Experimental results demonstrate that our simple yet powerful framework can improve the performance of existing phase recognition models. Moreover, our extensive experiments show that even with 75% of the training set, our model performs on par with the same baseline model trained on the full set.
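To make the training signal described above concrete, the sketch below shows one common way to implement self-knowledge distillation: the ground-truth cross-entropy loss on phase labels is combined with a KL term that distills from a temporally averaged (EMA) copy of the model acting as the teacher. The EMA-teacher choice, the loss weights, the temperature, and all function names here are illustrative assumptions, not the authors' exact formulation.

```python
# Minimal self-knowledge-distillation training step (sketch; assumes an EMA
# copy of the student serves as the teacher -- not the paper's exact method).
import copy
import torch
import torch.nn.functional as F


def make_ema_teacher(student: torch.nn.Module) -> torch.nn.Module:
    """Create a frozen copy of the student to act as the teacher."""
    teacher = copy.deepcopy(student)
    for p in teacher.parameters():
        p.requires_grad_(False)
    return teacher


@torch.no_grad()
def update_ema_teacher(teacher, student, momentum: float = 0.99):
    """Move teacher weights toward the current student weights."""
    for pt, ps in zip(teacher.parameters(), student.parameters()):
        pt.mul_(momentum).add_(ps, alpha=1.0 - momentum)


def self_distillation_loss(student_logits, teacher_logits, labels,
                           alpha: float = 0.5, temperature: float = 2.0):
    """Cross-entropy on ground-truth phases plus KL distillation from the teacher."""
    ce = F.cross_entropy(student_logits, labels)
    kd = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)
    return (1.0 - alpha) * ce + alpha * kd


if __name__ == "__main__":
    # Stand-in for an encoder/decoder head producing logits over the 7 Cholec80 phases.
    student = torch.nn.Linear(128, 7)
    teacher = make_ema_teacher(student)
    optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

    features = torch.randn(32, 128)          # batch of per-frame features
    labels = torch.randint(0, 7, (32,))      # ground-truth phase labels

    optimizer.zero_grad()
    loss = self_distillation_loss(student(features), teacher(features), labels)
    loss.backward()
    optimizer.step()
    update_ema_teacher(teacher, student)
```

In this kind of setup the same pattern can be applied at both stages of an encoder-decoder phase recognizer: once on the encoder's frame-level predictions and once on the temporal decoder's outputs, so the model regularizes itself without any extra annotations.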
