Designing and Training of Lightweight Neural Networks on Edge Devices using Early Halting in Knowledge Distillation

09/30/2022
by   Rahul Mishra, et al.

Automated feature extraction and the strong performance of Deep Neural Networks (DNNs) make them well suited to Internet of Things (IoT) applications. However, deploying DNNs on edge devices is often prohibitive due to their colossal computation, energy, and storage requirements. This paper presents a novel approach for designing and training a lightweight DNN using a large-size DNN. The approach accounts for the available storage, the processing speed, and the maximum allowable processing time for executing the task on an edge device. We present a knowledge distillation based training procedure that trains the lightweight DNN to adequate accuracy. During training, we introduce a novel early halting technique that conserves network resources and thereby speeds up the training procedure. Finally, we present empirical and real-world evaluations to verify the effectiveness of the proposed approach under different constraints using various edge devices.
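The two ingredients the abstract names, a knowledge distillation training loss and an early halting rule, can be illustrated with a minimal NumPy sketch. The distillation loss follows the standard soft-target formulation (temperature-scaled KL term plus hard-label cross-entropy); the halting rule shown here is a generic patience-based criterion chosen for illustration, not the paper's actual technique. All function names, temperatures, and thresholds below are assumptions.

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled softmax; higher T softens the distribution.
    z = np.asarray(z, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Standard KD objective: alpha * KL(softened teacher || softened student)
    + (1 - alpha) * cross-entropy of the student against the hard labels."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    # Soft-target term, scaled by T^2 to keep gradient magnitudes comparable.
    soft = (p_t * (np.log(p_t) - np.log(p_s))).sum(axis=-1).mean() * T * T
    hard = -np.log(softmax(student_logits)[np.arange(len(labels)), labels]).mean()
    return alpha * soft + (1 - alpha) * hard

def early_halting_epoch(loss_history, patience=3, min_delta=1e-3):
    """Hypothetical halting rule: return the epoch index at which training
    stops, once the loss has failed to improve by at least `min_delta`
    for `patience` consecutive epochs (saving further computation)."""
    best, stale = float("inf"), 0
    for epoch, loss in enumerate(loss_history):
        if best - loss > min_delta:
            best, stale = loss, 0
        else:
            stale += 1
            if stale >= patience:
                return epoch
    return len(loss_history) - 1
```

For example, when the student exactly matches the teacher, the KL term vanishes, and a loss curve that plateaus for `patience` epochs triggers the halt, which is the resource-saving effect the abstract attributes to early halting.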


Related research:

- A Survey on Deep Neural Network Compression: Challenges, Overview, and Solutions (10/05/2020)
- PQK: Model Compression via Pruning, Quantization, and Knowledge Distillation (06/25/2021)
- ScissionLite: Accelerating Distributed Deep Neural Networks Using Transfer Layer (05/05/2021)
- Lightweight compression of neural network feature tensors for collaborative intelligence (05/12/2021)
- Edge-Host Partitioning of Deep Neural Networks with Feature Space Encoding for Resource-Constrained Internet-of-Things Platforms (02/11/2018)
- A Lightweight Approach for Network Intrusion Detection based on Self-Knowledge Distillation (07/09/2023)
- Survey of Knowledge Distillation in Federated Edge Learning (01/14/2023)
