An Interpretable Neuron Embedding for Static Knowledge Distillation

11/14/2022
by Wei Han, et al.

Although deep neural networks perform well on a wide range of tasks, they are often criticized for poor interpretability. In this paper, we propose a new interpretable neural network method that embeds neurons into a semantic space to extract their intrinsic global semantics. In contrast to previous methods that probe latent knowledge inside the model, the proposed semantic vectors externalize this latent knowledge as static knowledge, which is easy to exploit. Specifically, we assume that neurons with similar activations carry similar semantic information. The semantic vectors are then optimized by continuously aligning activation similarity with semantic-vector similarity during training of the neural network. Visualizing the semantic vectors allows a qualitative explanation of the neural network, and we assess the static knowledge quantitatively on knowledge distillation tasks. Visualization experiments show that the semantic vectors describe neuron activation semantics well. Without sample-by-sample guidance from the teacher model, static knowledge distillation achieves performance comparable to, or even better than, existing relation-based knowledge distillation methods.
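
The core mechanism described above is a pairwise-similarity alignment between neuron activations and learnable per-neuron embeddings. Below is a minimal sketch of that idea in PyTorch; the cosine-similarity choice, the MSE objective, and all names and hyperparameters are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def semantic_alignment_loss(activations, semantic_vecs):
    """Align pairwise neuron-activation similarity with semantic-vector similarity.

    activations:   (batch, num_neurons) activations of one layer over a mini-batch
    semantic_vecs: (num_neurons, dim)   learnable embedding, one vector per neuron
    """
    # Pairwise cosine similarity between neurons, measured across the batch.
    act = F.normalize(activations.t(), dim=1)   # (num_neurons, batch)
    act_sim = act @ act.t()                     # (num_neurons, num_neurons)

    # Pairwise cosine similarity between the neurons' semantic vectors.
    sem = F.normalize(semantic_vecs, dim=1)
    sem_sim = sem @ sem.t()

    # Push the two similarity structures toward each other.
    return F.mse_loss(sem_sim, act_sim)

# Usage sketch: this loss would be added to the task loss during training,
# so the semantic vectors are learned jointly with the network.
num_neurons, dim = 512, 64
semantic_vecs = torch.nn.Parameter(torch.randn(num_neurons, dim) * 0.01)
activations = torch.relu(torch.randn(32, num_neurons))  # stand-in for a hidden layer
loss = semantic_alignment_loss(activations, semantic_vecs)
loss.backward()
```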

