Stochastic Precision Ensemble: Self-Knowledge Distillation for Quantized Deep Neural Networks

09/30/2020
by Yoonho Boo, et al.

The quantization of deep neural networks (QDNNs) has been actively studied for deployment on edge devices. Recent studies employ the knowledge distillation (KD) method to improve the performance of quantized networks. In this study, we propose stochastic precision ensemble training for QDNNs (SPEQ). SPEQ is a knowledge distillation training scheme; however, the teacher is formed by sharing the model parameters of the student network. We obtain the soft labels of the teacher by stochastically changing the bit precision of the activations at each layer during the forward-pass computation. The student model is trained with these soft labels to reduce the activation quantization noise. The cosine similarity loss is employed, instead of the KL-divergence, for KD training. Because the teacher model changes continuously through random bit-precision assignment, training exploits the effect of a stochastic ensemble for KD. SPEQ outperforms existing quantization training methods on various tasks, such as image classification, question answering, and transfer learning, without the need for cumbersome teacher networks.
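
The abstract describes the core mechanics: a teacher that shares the student's weights but runs the forward pass with randomly drawn activation bit-widths, and a cosine-similarity loss that replaces KL-divergence for distillation. Below is a minimal PyTorch-style sketch of that idea. The toy MLP, the uniform quantizer with a straight-through estimator, the candidate bit-widths, the use of logits in the cosine loss, and the added hard-label cross-entropy term are all illustrative assumptions, not the paper's exact recipe (weight quantization is omitted entirely).

```python
# Sketch of SPEQ-style self-distillation with stochastic activation precision.
# Assumptions: activations are sigmoid outputs in [0, 1], bit-widths and losses
# are illustrative, and weights are kept in full precision for brevity.
import torch
import torch.nn as nn
import torch.nn.functional as F


def quantize_activation(x, bits):
    """Uniform activation quantizer with a straight-through gradient estimator."""
    levels = 2 ** bits - 1
    x = x.clamp(0.0, 1.0)
    q = torch.round(x * levels) / levels
    # Forward uses the quantized value; backward treats quantization as identity.
    return x + (q - x).detach()


class QuantMLP(nn.Module):
    """Toy network whose activation precision can differ on every forward pass."""
    def __init__(self, in_dim=784, hidden=256, num_classes=10):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden)
        self.fc2 = nn.Linear(hidden, hidden)
        self.fc3 = nn.Linear(hidden, num_classes)

    def forward(self, x, bits_per_layer):
        # bits_per_layer: activation bit-width applied after each hidden layer
        x = quantize_activation(torch.sigmoid(self.fc1(x)), bits_per_layer[0])
        x = quantize_activation(torch.sigmoid(self.fc2(x)), bits_per_layer[1])
        return self.fc3(x)


def speq_step(model, optimizer, x, y, student_bits=(2, 2), candidate_bits=(2, 4, 8)):
    """One training step: the 'teacher' is the same model (shared parameters)
    run with randomly drawn activation bit-widths; its soft outputs guide the
    low-precision student."""
    # Teacher pass: stochastic per-layer activation precision, no gradient.
    with torch.no_grad():
        teacher_bits = [candidate_bits[torch.randint(len(candidate_bits), (1,)).item()]
                        for _ in range(2)]
        teacher_logits = model(x, teacher_bits)

    # Student pass: fixed low-precision activations.
    student_logits = model(x, list(student_bits))

    # Cosine-similarity distillation loss (in place of KL-divergence), plus an
    # ordinary cross-entropy term on hard labels (an assumption of this sketch).
    kd_loss = 1.0 - F.cosine_similarity(student_logits, teacher_logits, dim=1).mean()
    ce_loss = F.cross_entropy(student_logits, y)
    loss = ce_loss + kd_loss

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because the teacher's bit-widths are re-sampled at every step, each update distills from a slightly different quantized view of the same shared weights, which is what gives the stochastic-ensemble effect without a separate teacher network.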

Related research

QKD: Quantization-aware Knowledge Distillation (11/28/2019)
Quantization and Knowledge distillation (KD) methods are widely used to ...

Annealing Knowledge Distillation (04/14/2021)
Significant memory and computational requirements of large deep neural n...

Empirical Analysis of Knowledge Distillation Technique for Optimization of Quantized Deep Neural Networks (09/04/2019)
Knowledge distillation (KD) is a very popular method for model size redu...

Feature Affinity Assisted Knowledge Distillation and Quantization of Deep Neural Networks on Label-Free Data (02/10/2023)
In this paper, we propose a feature affinity (FA) assisted knowledge dis...

Divide and Conquer: Leveraging Intermediate Feature Representations for Quantized Training of Neural Networks (06/14/2019)
The deep layers of modern neural networks extract a rather rich set of f...

Knowledge Transfer via Distillation of Activation Boundaries Formed by Hidden Neurons (11/08/2018)
An activation boundary for a neuron refers to a separating hyperplane th...
