Contrastive Deep Supervision

07/12/2022
by Linfeng Zhang, et al.

The success of deep learning is usually accompanied by growth in neural network depth. However, the traditional training method supervises the neural network only at its last layer and propagates the supervision layer by layer, which makes it difficult to optimize the intermediate layers. Recently, deep supervision has been proposed to add auxiliary classifiers to the intermediate layers of deep neural networks. By optimizing these auxiliary classifiers with the supervised task loss, supervision can be applied to the shallow layers directly. However, deep supervision conflicts with the well-known observation that shallow layers learn low-level features rather than task-biased high-level semantic features. To address this issue, this paper proposes a novel training framework named Contrastive Deep Supervision, which supervises the intermediate layers with augmentation-based contrastive learning. Experimental results on nine popular datasets with eleven models demonstrate its effectiveness on general image classification, fine-grained image classification and object detection in supervised learning, semi-supervised learning and knowledge distillation. Code has been released on GitHub.
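
To make the idea concrete, below is a minimal sketch of how intermediate layers can be supervised with augmentation-based contrastive learning instead of auxiliary task classifiers. This is not the authors' released implementation: the ResNet-18 backbone, the SimCLR-style NT-Xent loss, the choice of which stages receive projection heads, and the hyperparameters (proj_dim, temperature, lam) are all illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision

def nt_xent(z1, z2, temperature=0.5):
    # SimCLR-style contrastive loss between two batches of projected embeddings.
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)                     # (2N, d)
    sim = z @ z.t() / temperature                      # pairwise cosine similarities
    n = z1.size(0)
    sim.masked_fill_(torch.eye(2 * n, dtype=torch.bool, device=z.device), float('-inf'))
    # The positive for sample i is its other augmented view, located at index i + n (mod 2N).
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

class ContrastiveDeepSupervision(nn.Module):
    # Hypothetical wrapper: a ResNet-18 backbone with a projection head attached to
    # each intermediate stage; the final classifier keeps the usual task loss.
    def __init__(self, num_classes=100, proj_dim=128):
        super().__init__()
        net = torchvision.models.resnet18(num_classes=num_classes)
        self.stem = nn.Sequential(net.conv1, net.bn1, net.relu, net.maxpool)
        self.stages = nn.ModuleList([net.layer1, net.layer2, net.layer3, net.layer4])
        self.pool, self.fc = net.avgpool, net.fc
        stage_dims = [64, 128, 256]                    # channel widths of the intermediate stages
        self.heads = nn.ModuleList([
            nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                          nn.Linear(d, d), nn.ReLU(inplace=True), nn.Linear(d, proj_dim))
            for d in stage_dims
        ])

    def forward(self, x):
        x = self.stem(x)
        feats = []
        for stage in self.stages:
            x = stage(x)
            feats.append(x)
        logits = self.fc(torch.flatten(self.pool(x), 1))
        projections = [head(f) for head, f in zip(self.heads, feats[:3])]
        return logits, projections

def training_step(model, view1, view2, labels, lam=0.1):
    # view1, view2: two random augmentations of the same batch of labeled images.
    logits1, proj1 = model(view1)
    logits2, proj2 = model(view2)
    task_loss = F.cross_entropy(logits1, labels) + F.cross_entropy(logits2, labels)
    # Contrastive supervision applied directly to every intermediate stage.
    contrastive_loss = sum(nt_xent(p1, p2) for p1, p2 in zip(proj1, proj2))
    return task_loss + lam * contrastive_loss

Since the projection heads serve only the auxiliary contrastive loss, they can typically be discarded at inference time, so the deployed backbone is unchanged.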

Related research

03/01/2023
Can representation learning for multimodal image registration be improved by supervision of intermediate layers?
Multimodal imaging and correlative analysis typically require image alig...

07/06/2022
A Comprehensive Review on Deep Supervision: Theories and Applications
Deep supervision, also known as 'intermediate supervision' or 'auxiliary s...

11/06/2016
The Shallow End: Empowering Shallower Deep-Convolutional Networks through Auxiliary Outputs
The depth is one of the key factors behind the great success of convolut...

10/15/2020
Why Layer-Wise Learning is Hard to Scale-up and a Possible Solution via Accelerated Downsampling
Layer-wise learning, as an alternative to global back-propagation, is ea...

11/08/2021
Hybrid BYOL-ViT: Efficient approach to deal with small datasets
Supervised learning can learn large representational spaces, which are c...

06/03/2019
Deeply-supervised Knowledge Synergy
Convolutional Neural Networks (CNNs) have become deeper and more complic...

08/07/2016
Residual CNDS
Convolutional Neural Networks nowadays are of tremendous importance for ...
