Arch-Net: Model Distillation for Architecture Agnostic Model Deployment

11/01/2021
by Weixin Xu, et al.

The vast computational power required by Deep Neural Networks is a major hurdle to their real-world application. Many recent Application Specific Integrated Circuit (ASIC) chips feature dedicated hardware support for Neural Network acceleration. However, because ASICs take years to develop, they are inevitably outpaced by the latest developments in Neural Architecture research. For example, Transformer networks do not have native support on many popular chips and are hence difficult to deploy. In this paper, we propose Arch-Net, a family of Neural Networks built only from operators that are efficiently supported across most ASIC architectures. When an Arch-Net is produced, less common network constructs, such as Layer Normalization and Embedding Layers, are eliminated progressively through label-free Blockwise Model Distillation, while sub-eight-bit quantization is performed at the same time to maximize performance. Empirical results on machine translation and image classification tasks confirm that recently developed Neural Architectures can be transformed into fast-running, equally accurate Arch-Nets, ready for deployment on multiple mass-produced ASIC chips. The code will be available at https://github.com/megvii-research/Arch-Net.
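To make the distillation step concrete, the sketch below illustrates the general idea of label-free blockwise distillation: each student block is trained to reproduce the corresponding teacher block's output features on unlabeled inputs, so no ground-truth labels are needed. This is a minimal illustration under assumed interfaces, not the authors' implementation; the names `teacher_blocks`, `student_blocks`, and `unlabeled_loader` are hypothetical placeholders, and the sub-eight-bit quantization step described in the abstract is omitted here.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def teacher_feature_pairs(teacher_blocks, x):
    # Run the input through the frozen teacher and record each block's
    # (input, output) feature pair. No labels are used anywhere.
    pairs = []
    for blk in teacher_blocks:
        y = blk(x)
        pairs.append((x, y))
        x = y
    return pairs

def blockwise_distill(teacher_blocks, student_blocks, unlabeled_loader,
                      epochs=1, lr=1e-4, device="cpu"):
    # Train each student block, in order, to imitate the matching teacher block
    # by minimizing the MSE between their intermediate features.
    mse = nn.MSELoss()
    for blk in teacher_blocks:
        blk.eval().to(device)
    for i, s_blk in enumerate(student_blocks):
        s_blk.train().to(device)
        opt = torch.optim.Adam(s_blk.parameters(), lr=lr)
        for _ in range(epochs):
            for x in unlabeled_loader:  # unlabeled inputs only
                x = x.to(device)
                blk_in, blk_target = teacher_feature_pairs(teacher_blocks, x)[i]
                loss = mse(s_blk(blk_in), blk_target)
                opt.zero_grad()
                loss.backward()
                opt.step()
    return nn.Sequential(*student_blocks)
```

In the paper this replacement of unsupported constructs is carried out progressively and combined with sub-eight-bit quantization of the student blocks; the sketch above shows only the feature-matching objective that makes the procedure label-free.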
