Random Teachers are Good Teachers

02/23/2023
by Felix Sarnthein, et al.

In this work, we investigate the implicit regularization induced by teacher-student learning dynamics. To isolate its effect, we describe a simple experiment in which, instead of trained teachers, we consider teachers at random initialization. Surprisingly, when distilling a student into such a random teacher, we observe that the resulting model and its representations already possess very interesting characteristics: (1) the distilled student strongly improves over its teacher in terms of probing accuracy; (2) the learnt representations transfer well between different tasks but deteriorate strongly if trained on random inputs; (3) the student checkpoint suffices to discover so-called lottery tickets, i.e. it contains identifiable, sparse subnetworks that are as performant as the full network. These observations have interesting consequences for several important areas in machine learning: (1) self-distillation can work solely on the basis of the implicit regularization present in the gradient dynamics, without relying on any dark knowledge; (2) self-supervised learning can learn features even in the absence of data augmentation; and (3) SGD, when initialized from the student checkpoint, is already stable with respect to batch orderings. Finally, we shed light on an intriguing local property of the loss landscape: the process of feature learning is strongly amplified if the student is initialized close to the teacher. This raises interesting questions about the nature of the landscape that have so far remained unexplored.
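The core experiment, distilling a student into a frozen, randomly initialized teacher, can be sketched in a few lines. The snippet below is a minimal illustration rather than the paper's actual setup: the small MLP backbone, the L2 distillation loss, and plain SGD are assumptions chosen for brevity, and the authors' architecture, loss, and optimizer may differ.

```python
# Minimal sketch of distilling a student into a *random* teacher.
# Assumptions (not taken from the paper): a small MLP backbone,
# an L2 distillation loss on the outputs, and plain SGD.
import torch
import torch.nn as nn


def mlp(in_dim=784, hidden=512, out_dim=128):
    return nn.Sequential(
        nn.Linear(in_dim, hidden), nn.ReLU(),
        nn.Linear(hidden, hidden), nn.ReLU(),
        nn.Linear(hidden, out_dim),
    )


teacher = mlp()                    # stays at random initialization
for p in teacher.parameters():     # the teacher is never trained
    p.requires_grad_(False)

student = mlp()
# Optional (see abstract): initialize the student close to the teacher by
# copying the teacher's weights and perturbing them slightly, e.g.
#   student.load_state_dict(teacher.state_dict())
#   for p in student.parameters():
#       p.data.add_(1e-3 * torch.randn_like(p))

opt = torch.optim.SGD(student.parameters(), lr=0.1, momentum=0.9)


def distill_step(x):
    """One self-distillation step: match the frozen random teacher's outputs."""
    with torch.no_grad():
        target = teacher(x)
    loss = ((student(x) - target) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()


# Usage with dummy data (replace with real, un-augmented inputs):
x = torch.randn(64, 784)
print(distill_step(x))
```

After training, the quality of the student's representations would be measured with a linear probe on a downstream task, which is how the probing-accuracy improvement over the random teacher is assessed.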

