Kernel Based Progressive Distillation for Adder Neural Networks

09/28/2020
by Yixing Xu, et al.

Adder Neural Networks (ANNs), which contain only additions, offer a new way of developing deep neural networks with low energy consumption. Unfortunately, there is an accuracy drop when all convolution filters are replaced by adder filters. The main reason is the optimization difficulty of ANNs using the ℓ_1-norm, in which the gradient estimated during back propagation is inaccurate. In this paper, we present a novel method for further improving the performance of ANNs without increasing the number of trainable parameters, via a progressive kernel based knowledge distillation (PKKD) method. A convolutional neural network (CNN) with the same architecture is simultaneously initialized and trained as a teacher network, and the features and weights of the ANN and the CNN are transformed into a new space to eliminate the accuracy drop. The similarity is computed in a higher-dimensional space using a kernel based method to disentangle the difference between their distributions. Finally, the desired ANN is learned progressively from both the ground-truth labels and the teacher. The effectiveness of the proposed method for learning ANNs with higher performance is verified on several benchmarks. For instance, an ANN-50 trained with the proposed PKKD method obtains 76.8% top-1 accuracy on the ImageNet dataset, which is 0.6% higher than that of ResNet-50.
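For readers unfamiliar with the adder operation, the PyTorch sketch below illustrates the ℓ_1-norm based filter response that ANNs use in place of cross-correlation, together with a toy Gaussian-kernel feature-matching loss in the spirit of the kernel based comparison described above. This is an illustrative sketch under stated assumptions: the unfold-based adder layer and the kernel_feature_loss function are hypothetical simplifications for exposition, not the authors' PKKD implementation.

```python
# Sketch of an l1-norm "adder" layer and a kernel-based feature-matching loss.
# Assumptions: shapes and the Gaussian kernel choice are illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AdderLayer(nn.Module):
    """Filter response = negative l1-distance between each input patch and the
    filter, instead of the usual cross-correlation used by convolutions."""

    def __init__(self, in_channels, out_channels, kernel_size, stride=1, padding=0):
        super().__init__()
        self.weight = nn.Parameter(
            torch.randn(out_channels, in_channels, kernel_size, kernel_size))
        self.kernel_size = kernel_size
        self.stride = stride
        self.padding = padding

    def forward(self, x):
        n, _, h, w = x.shape
        # Extract sliding patches: (N, C*k*k, L), L = number of spatial positions.
        patches = F.unfold(x, self.kernel_size, stride=self.stride, padding=self.padding)
        w_flat = self.weight.view(self.weight.size(0), -1)  # (O, C*k*k)
        # Negative l1 distance between every patch and every filter -> (N, O, L).
        out = -(patches.unsqueeze(1) - w_flat.unsqueeze(0).unsqueeze(-1)).abs().sum(dim=2)
        h_out = (h + 2 * self.padding - self.kernel_size) // self.stride + 1
        w_out = (w + 2 * self.padding - self.kernel_size) // self.stride + 1
        return out.reshape(n, -1, h_out, w_out)


def kernel_feature_loss(f_ann, f_cnn, sigma=1.0):
    """Illustrative kernel-based matching: compare pairwise Gaussian-kernel
    similarity matrices of student (ANN) and teacher (CNN) features rather
    than the raw features, so the comparison happens in a higher-dimensional
    kernel space. This is a hypothetical sketch, not the paper's exact loss."""
    a = f_ann.flatten(1)  # (N, D)
    c = f_cnn.flatten(1)
    k_a = torch.exp(-torch.cdist(a, a) ** 2 / (2 * sigma ** 2))
    k_c = torch.exp(-torch.cdist(c, c) ** 2 / (2 * sigma ** 2))
    return (k_a - k_c).pow(2).mean()


if __name__ == "__main__":
    x = torch.randn(4, 3, 32, 32)
    layer = AdderLayer(3, 8, kernel_size=3, padding=1)
    student_feat = layer(x)
    teacher_feat = torch.randn_like(student_feat)  # stand-in for CNN teacher features
    print(student_feat.shape, kernel_feature_loss(student_feat, teacher_feat).item())
```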

Related research

01/26/2019  Progressive Label Distillation: Learning Input-Efficient Deep Neural Networks
Much of the focus in the area of knowledge distillation has been on dist...

03/11/2019  HetConv: Heterogeneous Kernel-Based Convolutions for Deep CNNs
We present a novel deep learning architecture in which the convolution o...

08/22/2018  Progressive Deep Neural Networks Acceleration via Soft Filter Pruning
This paper proposed a Progressive Soft Filter Pruning method (PSFP) to p...

04/17/2023  LaSNN: Layer-wise ANN-to-SNN Distillation for Effective and Efficient Training in Deep Spiking Neural Networks
Spiking Neural Networks (SNNs) are biologically realistic and practicall...

07/04/2019  Graph-based Knowledge Distillation by Multi-head Self-attention Network
Knowledge distillation (KD) is a technique to derive optimal performance...

11/08/2019  Deep geometric knowledge distillation with graphs
In most cases deep learning architectures are trained disregarding the a...

11/30/2020  Extracting Electron Scattering Cross Sections from Swarm Data using Deep Neural Networks
Electron-neutral scattering cross sections are fundamental quantities in...
