Student Becoming the Master: Knowledge Amalgamation for Joint Scene Parsing, Depth Estimation, and More

04/23/2019
by Jingwen Ye, et al.

In this paper, we investigate a novel deep-model reuse task. Our goal is to train a lightweight and versatile student model, without human-labelled annotations, that amalgamates the knowledge and masters the expertise of two pretrained teacher models working on heterogeneous problems: one on scene parsing and the other on depth estimation. To this end, we propose an innovative training strategy that learns the student's parameters intertwined with those of the teachers, achieved by 'projecting' the student's amalgamated features onto each teacher's domain and computing the loss there. We also introduce two options for generalizing the proposed strategy to handle three or more tasks simultaneously. The proposed scheme yields very encouraging results. As demonstrated on several benchmarks, the trained student model achieves results superior to those of the teachers in their own expertise domains, and on par with state-of-the-art fully supervised models that rely on human-labelled annotations.
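The core idea of the training strategy above can be illustrated with a minimal sketch: the student produces a single amalgamated feature, which is projected through a small learnable head into each teacher's feature space, and the per-teacher distances are summed into one loss. All dimensions, variable names, and the use of a plain L2 distance here are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature dimensions: the student's shared (amalgamated)
# feature and each teacher's own feature space.
D_STUDENT, D_PARSE, D_DEPTH = 64, 32, 32

# Frozen teacher features for one input (stand-ins for the outputs of the
# pretrained scene-parsing and depth-estimation networks).
f_parse = rng.normal(size=D_PARSE)
f_depth = rng.normal(size=D_DEPTH)

# The student's amalgamated feature and one learnable projection head per
# teacher, mapping the shared feature into that teacher's domain.
f_student = rng.normal(size=D_STUDENT)
W_parse = rng.normal(size=(D_PARSE, D_STUDENT)) * 0.1
W_depth = rng.normal(size=(D_DEPTH, D_STUDENT)) * 0.1

def amalgamation_loss(f_s, teacher_feats, heads):
    """Project the shared student feature onto each teacher's domain and
    sum the per-teacher mean-squared distances (a simple stand-in for the
    paper's projection-based loss)."""
    total = 0.0
    for f_t, W in zip(teacher_feats, heads):
        projected = W @ f_s          # student feature in the teacher's space
        total += np.mean((projected - f_t) ** 2)
    return total

loss = amalgamation_loss(f_student, [f_parse, f_depth], [W_parse, W_depth])
print(f"amalgamation loss: {loss:.4f}")
```

Extending this sketch to three or more tasks only requires appending another teacher feature and projection head to the two lists, which mirrors how the paper generalizes the strategy beyond two teachers.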


Related research

05/28/2019 · Amalgamating Filtered Knowledge: Learning Task-customized Student from Multi-task Teachers
Many well-trained Convolutional Neural Network (CNN) models have now been...

11/07/2018 · Amalgamating Knowledge towards Comprehensive Classification
With the rapid development of deep learning, there have been an unpreced...

08/20/2019 · Customizing Student Networks From Heterogeneous Teachers via Adaptive Knowledge Amalgamation
A massive number of well-trained deep networks have been released by dev...

04/23/2019 · A Large RGB-D Dataset for Semi-supervised Monocular Depth Estimation
The recent advance of monocular depth estimation is largely based on dee...

06/24/2019 · Knowledge Amalgamation from Heterogeneous Networks by Common Feature Learning
An increasing number of well-trained deep networks have been released on...

04/30/2021 · Distilling EEG Representations via Capsules for Affective Computing
Affective computing with Electroencephalogram (EEG) is a challenging tas...

12/14/2021 · Model Uncertainty-Aware Knowledge Amalgamation for Pre-Trained Language Models
As many fine-tuned pre-trained language models (PLMs) with promising per...
