Interpreting and Disentangling Feature Components of Various Complexity from DNNs

06/29/2020
by   Jie Ren, et al.

This paper aims to define, quantify, and analyze the complexity of features learned by a DNN. We propose a generic definition of feature complexity: given the feature of a certain layer in the DNN, our method disentangles feature components of different complexity orders from that feature. We further design a set of metrics to evaluate the reliability, the effectiveness, and the significance of over-fitting of these feature components. Furthermore, we discover a close relationship between feature complexity and the performance of DNNs. As a generic mathematical tool, the feature complexity and the proposed metrics can also be used to analyze the success of network compression and knowledge distillation.
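The abstract does not give the paper's exact formulation, but the idea of disentangling feature components by complexity order can be sketched roughly as follows: approximate the target feature with networks of increasing depth, and take the component of order k to be what the order-k approximator explains beyond the order-(k-1) one. The sketch below is a toy illustration of that telescoping decomposition, not the authors' method; all shapes, the random-feature approximators, and the function names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a DNN's intermediate feature: a 3-layer ReLU map of the input.
X = rng.normal(size=(500, 8))
W1 = rng.normal(size=(8, 16)) * 0.5
W2 = rng.normal(size=(16, 16)) * 0.5
W3 = rng.normal(size=(16, 4)) * 0.5
F = np.maximum(0, np.maximum(0, X @ W1) @ W2) @ W3  # feature to disentangle

def fit_order_k(X, F, k):
    """Approximate F with a k-nonlinear-layer model.

    k=0 is a plain linear least-squares fit; k>=1 stacks k random ReLU
    feature layers and solves least squares on top -- a cheap stand-in
    for training a k-layer 'disentangler' network.
    """
    H = X
    for _ in range(k):
        Wr = rng.normal(size=(H.shape[1], 32))
        H = np.maximum(0, H @ Wr)
    coef, *_ = np.linalg.lstsq(H, F, rcond=None)
    return H @ coef

# Successive approximations of increasing complexity order.
approx = [fit_order_k(X, F, k) for k in range(3)]

# Component of order k = what order k explains beyond order k-1 (telescoping).
components = [approx[0]] + [approx[k] - approx[k - 1] for k in range(1, 3)]
residual = F - approx[-1]  # part not captured up to the highest order tried

# By construction, the feature decomposes exactly into components + residual.
assert np.allclose(F, sum(components) + residual)
```

The decomposition is exact by construction; the interesting quantities in an analysis like the paper's would be the relative magnitudes of the components and the residual, which indicate how much of the feature is of low versus high complexity.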


Related research

05/04/2022 · Towards Theoretical Analysis of Transformation Complexity of ReLU DNNs
  This paper aims to theoretically analyze the complexity of feature trans...

08/05/2019 · Knowledge Isomorphism between Neural Networks
  This paper aims to analyze knowledge isomorphism between pre-trained dee...

03/07/2020 · Explaining Knowledge Distillation by Quantifying the Knowledge
  This paper presents a method to interpret the success of knowledge disti...

08/18/2022 · Quantifying the Knowledge in a DNN to Explain Knowledge Distillation for Classification
  Compared to traditional learning from scratch, knowledge distillation so...

11/05/2021 · Visualizing the Emergence of Intermediate Visual Patterns in DNNs
  This paper proposes a method to visualize the discrimination power of in...

09/23/2021 · DeepRare: Generic Unsupervised Visual Attention Models
  Human visual system is modeled in engineering field providing feature-en...
