Revisiting Locally Supervised Learning: an Alternative to End-to-end Training

01/26/2021
by Yulin Wang, et al.

Due to the need to store intermediate activations for back-propagation, end-to-end (E2E) training of deep networks usually suffers from a high GPU memory footprint. This paper aims to address this problem by revisiting locally supervised learning, where a network is split into gradient-isolated modules and trained with local supervision. We experimentally show that simply training local modules with an E2E loss tends to collapse task-relevant information at early layers, and hence hurts the performance of the full model. To avoid this issue, we propose an information propagation (InfoPro) loss, which encourages local modules to preserve as much useful information as possible while progressively discarding task-irrelevant information. As the InfoPro loss is difficult to compute in its original form, we derive a feasible upper bound as a surrogate optimization objective, yielding a simple but effective algorithm. In fact, we show that the proposed method boils down to minimizing the combination of a reconstruction loss and a normal cross-entropy/contrastive term. Extensive empirical results on five datasets (i.e., CIFAR, SVHN, STL-10, ImageNet and Cityscapes) validate that InfoPro is capable of achieving competitive performance with less than 40% of the memory footprint of E2E training, while allowing the use of higher-resolution training data or larger batch sizes under the same GPU memory constraint. Our method also enables training local modules asynchronously for potential training acceleration. Code is available at: https://github.com/blackfeather-wang/InfoPro-Pytorch.
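To make the surrogate objective concrete, below is a minimal PyTorch sketch of gradient-isolated local training with an InfoPro-style loss (a reconstruction term plus a cross-entropy term per module). The module widths, the one-layer decoder and classifier heads, the weighting factor lam, and the synthetic data are illustrative assumptions, not the authors' exact architecture; the official implementation is in the linked repository.

```python
# Sketch: each stage is trained with its own local loss, and activations are
# detached between stages so no gradient flows back to earlier modules.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LocalModule(nn.Module):
    """One gradient-isolated stage with auxiliary reconstruction and classification heads."""
    def __init__(self, in_ch, out_ch, num_classes, image_ch=3):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )
        # Auxiliary decoder: reconstructs the input to encourage the module to keep useful information.
        self.decoder = nn.Conv2d(out_ch, image_ch, 1)
        # Auxiliary classifier: provides the local supervised (cross-entropy) signal.
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(out_ch, num_classes)
        )

    def local_loss(self, h, images, labels, lam=1.0):
        recon = F.interpolate(self.decoder(h), size=images.shape[-2:],
                              mode="bilinear", align_corners=False)
        loss_r = F.mse_loss(torch.sigmoid(recon), images)     # reconstruction term
        loss_c = F.cross_entropy(self.classifier(h), labels)  # cross-entropy term
        return loss_c + lam * loss_r                          # lam is an illustrative weighting

modules = nn.ModuleList([LocalModule(3, 32, 10), LocalModule(32, 64, 10)])
optimizers = [torch.optim.SGD(m.parameters(), lr=0.1, momentum=0.9) for m in modules]

def train_step(images, labels):
    h = images
    for m, opt in zip(modules, optimizers):
        h = m.body(h)
        loss = m.local_loss(h, images, labels)
        opt.zero_grad()
        loss.backward()   # gradients stay inside the current module
        opt.step()
        h = h.detach()    # gradient isolation: nothing propagates to earlier modules
    return h

# Example usage with random CIFAR-sized inputs (assumed shapes):
x = torch.randn(8, 3, 32, 32)
y = torch.randint(0, 10, (8,))
train_step(x, y)
```

Because each module only back-propagates through its own layers and auxiliary heads, the activations of the other modules need not be kept in memory, which is the source of the memory savings described in the abstract.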

Related research

07/17/2022  Gigapixel Whole-Slide Images Classification using Locally Supervised Learning
Histopathology whole slide images (WSIs) play a very important role in c...

08/01/2022  Locally Supervised Learning with Periodic Global Guidance
Locally supervised learning aims to train a neural network based on a lo...

12/08/2022  Deep Model Assembling
Large deep learning models have achieved remarkable success in many scen...

01/18/2023  Local Learning with Neuron Groups
Traditional deep network training methods optimize a monolithic objectiv...

02/27/2023  Layer Grafted Pre-training: Bridging Contrastive Learning And Masked Image Modeling For Label-Efficient Representations
Recently, both Contrastive Learning (CL) and Mask Image Modeling (MIM) d...

05/14/2022  BackLink: Supervised Local Training with Backward Links
Empowered by the backpropagation (BP) algorithm, deep neural networks ha...

07/20/2023  The Role of Entropy and Reconstruction in Multi-View Self-Supervised Learning
The mechanisms behind the success of multi-view self-supervised learning...
