Comparing Deep Neural Nets with UMAP Tour

10/18/2021
by Mingwei Li et al.

Neural networks should be interpretable to humans. In particular, there is growing interest in the concepts learned in each layer and in the similarity between layers. In this work, we build UMAP Tour, a tool for visually inspecting and comparing the internal behavior of real-world neural network models using well-aligned, instance-level representations. The alignment method behind the visualization also yields a new similarity measure between neural network layers. Using the visual tool and the similarity measure, we find concepts learned in state-of-the-art models such as GoogLeNet and ResNet, as well as dissimilarities between them.
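The abstract describes comparing layers through well-aligned low-dimensional embeddings of the same input instances. As a rough illustration of that idea (not the paper's actual method), one can align two 2-D layer embeddings with an orthogonal Procrustes rotation and read off a similarity score from the residual after alignment; the function below is a hypothetical sketch using only NumPy, with the embeddings standing in for per-layer UMAP projections:

```python
import numpy as np

def procrustes_similarity(a, b):
    """Align embedding b to embedding a with the best orthogonal rotation
    (Procrustes analysis) and return a similarity score in [0, 1].
    a, b: (n_instances, dim) arrays of per-instance embeddings of the
    same inputs at two layers. Illustrative only; the paper's measure
    may differ."""
    # center each embedding and scale to unit Frobenius norm
    a = a - a.mean(axis=0)
    b = b - b.mean(axis=0)
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    # best orthogonal map from b onto a via SVD of the cross-covariance
    u, s, vt = np.linalg.svd(a.T @ b)
    r = vt.T @ u.T
    aligned = b @ r
    # residual after alignment; for unit-norm inputs this simplifies
    # to similarity = sum of singular values of a.T @ b
    residual_sq = np.linalg.norm(a - aligned) ** 2
    return 1.0 - residual_sq / 2.0
```

With this convention, two embeddings that differ only by rotation score 1.0, while unrelated embeddings score lower, which mirrors the intuition that aligned views of the same instances reveal how similar two layers' representations are.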


Related research:

- Model Stitching: Looking For Functional Similarity Between Representations (03/20/2023)
- NeuralDivergence: Exploring and Understanding Neural Networks by Comparing Activation Distributions (06/02/2019)
- Pointwise Representational Similarity (05/30/2023)
- Similarity of Neural Network Models: A Survey of Functional and Representational Measures (05/10/2023)
- Graph-Based Similarity of Neural Network Representations (11/22/2021)
- Searching Similarity Measure for Binarized Neural Networks (06/05/2022)
