Learning from Higher-Layer Feature Visualizations

03/06/2019
by Konstantinos Nikolaidis, et al.

Driven by the goal of enabling sleep apnea monitoring and machine-learning-based detection at home with small mobile devices, we investigate whether interpretation-based indirect knowledge transfer can be used to create classifiers with acceptable performance. In interpretation-based indirect knowledge transfer, a classifier (the student) learns from a synthetic dataset derived from the knowledge representation of an already trained deep network (the teacher). We use activation maximization to generate visualizations from the teacher and assemble them into a synthetic dataset for training the student classifier. This approach has the advantage that student classifiers can be trained without access to the original training data. With experiments we investigate the feasibility of interpretation-based indirect knowledge transfer and its limitations. The student achieves an accuracy of 97.8% with a smaller architecture similar to that of the teacher. The student classifier achieves an accuracy of 86.1% (teacher: 89.5%).
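The pipeline the abstract describes, activation maximization on a trained teacher to produce class-conditional synthetic inputs, then training a student on only those inputs, can be sketched with a deliberately tiny, self-contained example. The linear "teacher", its hand-picked weights, and the perceptron "student" below are illustrative assumptions, not the networks from the paper:

```python
import random

# Hypothetical toy "teacher": a fixed linear scorer over 4-D inputs
# (stands in for the paper's trained deep network).
TEACHER_W = [[ 1.0, -0.5,  0.8, 0.0],   # class-0 weights
             [-1.0,  0.7, -0.2, 0.9]]   # class-1 weights

def activation_maximization(c, steps=200, lr=0.1, l2=0.05, seed=0):
    """Gradient ascent on the *input* to maximize the class-c score,
    with L2 decay so the synthetic input stays bounded."""
    rng = random.Random(seed)
    x = [rng.gauss(0, 0.1) for _ in range(4)]
    for _ in range(steps):
        # d/dx (w_c . x - l2 * ||x||^2) = w_c - 2 * l2 * x
        x = [xi + lr * (w - 2 * l2 * xi)
             for xi, w in zip(x, TEACHER_W[c])]
    return x

# Synthetic dataset: several "visualizations" per class, labeled by
# the class whose activation they maximize. No original data is used.
synthetic = [(activation_maximization(c, seed=s), c)
             for c in (0, 1) for s in range(10)]

# "Student": a multiclass perceptron trained only on the synthetic set.
student_w = [[0.0] * 4, [0.0] * 4]
for _ in range(20):
    for x, y in synthetic:
        pred = max((0, 1),
                   key=lambda c: sum(w * xi for w, xi in zip(student_w[c], x)))
        if pred != y:  # classic perceptron update on mistakes
            for i in range(4):
                student_w[y][i] += x[i]
                student_w[pred][i] -= x[i]

def student_predict(x):
    return max((0, 1),
               key=lambda c: sum(w * xi for w, xi in zip(student_w[c], x)))
```

The L2 term pulls each synthetic input toward the teacher's class prototype, so the student sees tight clusters it can separate; in the paper this role is played by activation-maximization visualizations of a deep network's higher layers.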

