Explaining Deep Neural Networks using Unsupervised Clustering

07/15/2020
by Yu-han Liu, et al.

We propose a novel method to explain trained deep neural networks (DNNs) by distilling them into surrogate models using unsupervised clustering. Our method can be applied flexibly to any subset of layers of a DNN architecture and can incorporate both low-level and high-level information. Given pre-trained DNNs on image datasets, we demonstrate the strength of our method in finding similar training samples and in shedding light on the concepts the DNNs base their decisions on. Via user studies, we show that our model can improve user trust in the model's predictions.
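The core idea — clustering a DNN's intermediate activations and explaining a prediction by retrieving training samples from the same cluster — can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes activations have already been extracted from the chosen layers, uses synthetic arrays as stand-ins for them, and picks k-means as the unsupervised clustering algorithm (the paper does not commit to a specific one here).

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Stand-ins for intermediate-layer activations of a pre-trained DNN
# (in practice these would come from a forward hook on the chosen layers).
train_acts = rng.normal(size=(200, 64))                  # training-set activations
query_act = train_acts[7] + 0.01 * rng.normal(size=64)   # a test sample's activation

# Fit an unsupervised surrogate: cluster the activation space.
kmeans = KMeans(n_clusters=10, n_init=10, random_state=0).fit(train_acts)

# Explain the prediction by retrieving training samples from the query's cluster.
query_cluster = kmeans.predict(query_act[None, :])[0]
members = np.flatnonzero(kmeans.labels_ == query_cluster)

# Rank cluster members by distance to the query in activation space.
dists = np.linalg.norm(train_acts[members] - query_act, axis=1)
similar = members[np.argsort(dists)][:5]
print(similar)  # indices of the most similar training samples
```

Showing a user the retrieved training images (and the cluster they share) is one way such a surrogate can ground a DNN's decision in concrete, familiar examples.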


Related research

10/17/2019 — On Concept-Based Explanations in Deep Neural Networks
Deep neural networks (DNNs) build high-level intelligence on low-level r...

06/06/2021 — Topological Measurement of Deep Neural Networks Using Persistent Homology
The inner representation of deep neural networks (DNNs) is indecipherabl...

03/30/2020 — Architecture Disentanglement for Deep Neural Networks
Deep Neural Networks (DNNs) are central to deep learning, and understand...

02/06/2022 — Aligning Eyes between Humans and Deep Neural Network through Interactive Attention Alignment
While Deep Neural Networks (DNNs) are deriving the major innovations in ...

03/05/2023 — Discrepancies among Pre-trained Deep Neural Networks: A New Threat to Model Zoo Reliability
Training deep neural networks (DNNs) takes significant time and resources...

03/11/2023 — Probing neural representations of scene perception in a hippocampally dependent task using artificial neural networks
Deep artificial neural networks (DNNs) trained through backpropagation p...

12/02/2018 — Image Score: How to Select Useful Samples
There have long been debates on how we could interpret neural networks an...
