Distilling Visual Priors from Self-Supervised Learning

08/01/2020
by Bingchen Zhao, et al.

Convolutional Neural Networks (CNNs) are prone to overfitting on small training datasets. We present a novel two-phase pipeline that leverages self-supervised learning and knowledge distillation to improve the generalization of CNN models for image classification in the data-deficient setting. In the first phase, a teacher model learns rich, generalizable visual representations via self-supervised learning; in the second phase, these representations are distilled into a student model in a self-distillation manner while the student is simultaneously fine-tuned for the image classification task. We also propose a novel margin loss for the self-supervised contrastive learning proxy task, which learns better representations under the data-deficient scenario. Together with other tricks, we achieve competitive performance in the VIPriors image classification challenge.
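The abstract does not give the exact form of the proposed margin loss, but one common way to add a margin to a contrastive (InfoNCE-style) objective is to subtract a fixed margin from the positive-pair similarity before the softmax, so positives must beat negatives by at least that margin. The sketch below is an illustrative NumPy implementation under that assumption; the function name, margin placement, and hyperparameter values are hypothetical, not taken from the paper.

```python
import numpy as np


def margin_contrastive_loss(anchor, positive, negatives,
                            temperature=0.1, margin=0.2):
    """Margin-augmented InfoNCE loss (illustrative formulation).

    Subtracting `margin` from the positive-pair similarity tightens the
    objective: the anchor must be closer to its positive than to every
    negative by at least the margin before the loss goes to zero.
    """
    def cos(a, b):
        # Cosine similarity between two embedding vectors.
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Positive logit is penalized by the margin; negatives are not.
    pos_logit = (cos(anchor, positive) - margin) / temperature
    neg_logits = [cos(anchor, n) / temperature for n in negatives]

    # Numerically stable cross-entropy with the positive as the target:
    # loss = -log( exp(pos) / sum(exp(all)) )
    logits = np.array([pos_logit] + neg_logits)
    logits -= logits.max()
    return float(np.log(np.exp(logits).sum()) - logits[0])


# Toy usage: an anchor, an augmented view of it, and random negatives.
rng = np.random.default_rng(0)
anchor = rng.normal(size=128)
positive = anchor + 0.1 * rng.normal(size=128)   # second "view" of the anchor
negatives = [rng.normal(size=128) for _ in range(8)]

loss = margin_contrastive_loss(anchor, positive, negatives)
```

Note that with `margin=0.0` this reduces to the standard InfoNCE loss, and any positive margin can only increase the loss for the same embeddings, which is what drives the larger inter-class separation in the representation.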


