Feature Fusion for Online Mutual Knowledge Distillation

04/19/2019
by   Jangho Kim, et al.

We propose a learning framework named Feature Fusion Learning (FFL) that efficiently trains a powerful classifier through a fusion module that combines the feature maps generated by parallel neural networks. Specifically, we train a number of parallel neural networks as sub-networks, then combine the feature maps from each sub-network using a fusion module to create a more meaningful feature map. The fused feature map is passed into a fused classifier for overall classification. Unlike existing feature fusion methods, in our framework an ensemble of sub-network classifiers transfers its knowledge to the fused classifier, and the fused classifier delivers its knowledge back to each sub-network, so that they mutually teach one another in an online knowledge distillation manner. This mutual teaching not only improves the performance of the fused classifier but also yields performance gains in each sub-network. Moreover, our model is more flexible because different types of networks can be used as sub-networks. Through extensive experiments on multiple datasets, including CIFAR-10, CIFAR-100, and ImageNet, we show that our method outperforms alternative methods in terms of the performance of both the sub-networks and the fused classifier.
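The two distillation directions described in the abstract can be sketched numerically: an ensemble of sub-network predictions teaches the fused classifier, and the fused classifier teaches each sub-network back. The following is a minimal NumPy sketch under stated assumptions — the feature dimensions, random weight matrices, concatenation-based fusion, and temperature value are illustrative placeholders, not the paper's actual architecture or training code.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax along the last axis."""
    z = np.asarray(z, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kl_div(p, q, eps=1e-12):
    """KL(p || q), summed over the batch; eps guards against log(0)."""
    return float(np.sum(p * (np.log(p + eps) - np.log(q + eps))))

rng = np.random.default_rng(0)

# Hypothetical pooled feature vectors from two sub-networks
# (batch of 2 samples, 4 features each)
feat_a = rng.normal(size=(2, 4))
feat_b = rng.normal(size=(2, 4))

# Fusion module sketched as concatenation followed by a linear
# classifier over 3 classes (a stand-in for the fused classifier)
fused_feat = np.concatenate([feat_a, feat_b], axis=1)   # shape (2, 8)
W_fused = rng.normal(size=(8, 3))
fused_logits = fused_feat @ W_fused

# Each sub-network keeps its own classifier head
W_a, W_b = rng.normal(size=(4, 3)), rng.normal(size=(4, 3))
logits_a, logits_b = feat_a @ W_a, feat_b @ W_b

T = 3.0  # distillation temperature (illustrative)
# Direction 1: the ensemble of sub-network predictions teaches
# the fused classifier
ensemble_probs = softmax((logits_a + logits_b) / 2.0, T)
loss_fused = kl_div(ensemble_probs, softmax(fused_logits, T))
# Direction 2: the fused classifier delivers its knowledge back
# to each sub-network
loss_subs = (kl_div(softmax(fused_logits, T), softmax(logits_a, T))
             + kl_div(softmax(fused_logits, T), softmax(logits_b, T)))
```

In training, these KL terms would be added to each classifier's cross-entropy loss and minimized jointly, so both directions of teaching happen online in a single run rather than with a pre-trained teacher.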

