Knowledge distillation via adaptive instance normalization

03/09/2020
by Jing Yang, et al.

This paper addresses the problem of model compression via knowledge distillation. To this end, we propose a new knowledge distillation method based on transferring feature statistics, specifically the channel-wise mean and variance, from the teacher to the student. Our method goes beyond the standard way of enforcing the mean and variance of the student to be similar to those of the teacher through an L_2 loss, which we found to be of limited effectiveness. Specifically, we propose a new loss based on adaptive instance normalization to effectively transfer the feature statistics. The main idea is to transfer the learned statistics back to the teacher via adaptive instance normalization (conditioned on the student) and let the teacher network "evaluate" via a loss whether the statistics learned by the student are reliably transferred. We show that our distillation method outperforms other state-of-the-art distillation methods over a large set of experimental settings, including different (a) network architectures, (b) teacher-student capacities, (c) datasets, and (d) domains.
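The abstract does not spell out the exact form of the loss, but the mechanism it describes can be illustrated with a short sketch. The snippet below is a minimal PyTorch-style illustration, not the authors' implementation: the names channel_stats, adain, naive_stat_l2, adain_transfer_loss, and the teacher_head module standing in for the remaining teacher layers are all hypothetical and introduced here for illustration only. It contrasts the baseline L2 loss on channel-wise statistics with the AdaIN-based idea of injecting the student's statistics into the teacher's features and letting the teacher judge the result.

```python
# Hypothetical sketch of statistic-transfer distillation, assuming
# teacher/student feature maps of shape (N, C, H, W). Not the authors' code.
import torch
import torch.nn.functional as F

def channel_stats(feat, eps=1e-5):
    """Per-instance, per-channel mean and std over the spatial dimensions."""
    n, c = feat.shape[:2]
    flat = feat.reshape(n, c, -1)
    mean = flat.mean(dim=2)
    std = (flat.var(dim=2) + eps).sqrt()
    return mean, std

def adain(content, mean, std, eps=1e-5):
    """Adaptive instance normalization: re-normalize `content` so that its
    channel-wise statistics match the given mean and std."""
    c_mean, c_std = channel_stats(content, eps)
    normalized = (content - c_mean[:, :, None, None]) / c_std[:, :, None, None]
    return normalized * std[:, :, None, None] + mean[:, :, None, None]

def naive_stat_l2(f_teacher, f_student):
    """Baseline mentioned in the abstract: L2 between channel-wise statistics."""
    t_mean, t_std = channel_stats(f_teacher)
    s_mean, s_std = channel_stats(f_student)
    return F.mse_loss(s_mean, t_mean) + F.mse_loss(s_std, t_std)

def adain_transfer_loss(f_teacher, f_student, teacher_head):
    """Sketch of the AdaIN-based idea: re-normalize the teacher's features with
    the student's statistics and let the remaining teacher layers (`teacher_head`,
    a hypothetical frozen module) judge whether those statistics are reliable."""
    s_mean, s_std = channel_stats(f_student)
    mixed = adain(f_teacher, s_mean, s_std)      # teacher content, student statistics
    with torch.no_grad():
        reference = teacher_head(f_teacher)      # teacher's output from its own features
    return F.mse_loss(teacher_head(mixed), reference)
```

In practice such a term would be added to the student's usual task loss during training; the sketch only shows how the statistic transfer itself could be wired up under these assumptions.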


Related research

01/21/2021
Collaborative Teacher-Student Learning via Multiple Knowledge Transfer
Knowledge distillation (KD), as an efficient and effective model compres...

04/25/2022
Proto2Proto: Can you recognize the car, the way I do?
Prototypical methods have recently gained a lot of attention due to thei...

10/23/2019
Contrastive Representation Distillation
Often we wish to transfer representational knowledge from one neural net...

06/05/2021
Bidirectional Distillation for Top-K Recommender System
Recommender systems (RS) have started to employ knowledge distillation, ...

06/25/2023
Feature Adversarial Distillation for Point Cloud Classification
Due to the point cloud's irregular and unordered geometry structure, con...

09/14/2021
Exploring the Connection between Knowledge Distillation and Logits Matching
Knowledge distillation is a generalized logits matching technique for mo...

06/01/2021
Natural Statistics of Network Activations and Implications for Knowledge Distillation
In a matter that is analog to the study of natural image statistics, we ...
