Large-scale Knowledge Distillation with Elastic Heterogeneous Computing Resources

07/14/2022
by Ji Liu, et al.

Although adding more layers and parameters generally improves model accuracy, such big models have high computational complexity and large memory footprints, which exceed the capacity of small devices for inference and incur long training times. Even on high-performance servers, the training and inference time of big models can be prohibitively long. Knowledge distillation, which compresses a large deep model (a teacher model) into a compact model (a student model), has emerged as a promising approach to deal with big models. However, existing knowledge distillation methods cannot exploit elastically available computing resources and thus suffer from low efficiency. In this paper, we propose an Elastic Deep Learning framework for knowledge Distillation, i.e., EDL-Dist. The advantages of EDL-Dist are three-fold. First, the inference and the training processes are separated. Second, elastically available computing resources can be utilized to improve efficiency. Third, fault tolerance of the training and inference processes is supported. Extensive experiments show that the throughput of EDL-Dist is up to 3.125 times higher than that of the baseline method (online knowledge distillation), while the accuracy is similar or higher.
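
For readers unfamiliar with the underlying mechanism, the sketch below shows standard response-based knowledge distillation in PyTorch: the student is trained against a weighted combination of hard-label cross-entropy and KL divergence to the teacher's temperature-softened outputs. This is a generic illustration under assumed choices (ResNet teacher/student pair, temperature T, weight alpha), not EDL-Dist's actual implementation. It does highlight why the separation described in the abstract is possible: the teacher's forward pass needs no gradients, so in a decoupled setup those logits could be produced by a separate, elastically scaled inference service rather than by the training process itself.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Illustrative teacher/student pair; any compatible architectures
# could be substituted here.
teacher = models.resnet50(num_classes=10).eval()   # inference-only
student = models.resnet18(num_classes=10).train()  # being trained

optimizer = torch.optim.SGD(student.parameters(), lr=0.1, momentum=0.9)

T = 4.0       # softmax temperature (assumed value)
alpha = 0.7   # weight of the distillation term (assumed value)

def distillation_step(images, labels):
    # Teacher forward pass runs without gradients; in a decoupled
    # setup this is the part that can be offloaded to separate
    # inference resources.
    with torch.no_grad():
        teacher_logits = teacher(images)

    student_logits = student(images)

    # Soft-target loss: KL divergence between temperature-scaled
    # distributions, scaled by T^2 as in Hinton et al.'s formulation.
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)

    # Hard-label loss on the ground-truth labels.
    hard_loss = F.cross_entropy(student_logits, labels)

    loss = alpha * soft_loss + (1.0 - alpha) * hard_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```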
