Understanding Instance-based Interpretability of Variational Auto-Encoders

05/29/2021 ∙ by Zhifeng Kong, et al.

Instance-based interpretation methods have been widely studied for supervised learning methods, as they help explain how black-box neural networks make predictions. However, instance-based interpretations remain poorly understood in the context of unsupervised learning. In this paper, we investigate influence functions [20], a popular instance-based interpretation method, for a class of deep generative models called variational auto-encoders (VAEs). We formally frame the counterfactual question answered by influence functions in this setting and, through theoretical analysis, examine what they reveal about the impact of training samples on classical unsupervised learning methods. We then introduce VAE-TracIn, a computationally efficient and theoretically sound solution for VAEs based on Pruthi et al. [28]. Finally, we evaluate VAE-TracIn on several real-world datasets with extensive quantitative and qualitative analysis.
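The TracIn idea underlying VAE-TracIn scores the influence of a training sample on a test sample as a sum, over saved checkpoints, of the learning rate times the dot product of the two samples' per-sample loss gradients (for a VAE, the per-sample negative ELBO). A minimal sketch of that checkpoint-sum computation, using a hypothetical one-parameter toy loss in place of the VAE loss (the function names and checkpoint values below are illustrative, not from the paper's implementation):

```python
def loss_grad(theta, x):
    # Gradient of a toy squared-error loss 0.5 * (x - theta)^2,
    # a stand-in for the per-sample VAE loss gradient w.r.t. parameters.
    return theta - x

def tracin_influence(checkpoints, lrs, z_train, z_test):
    """TracIn-style influence of z_train on z_test:
    sum over checkpoints of lr * grad(loss(z_train)) . grad(loss(z_test))."""
    return sum(
        lr * loss_grad(theta, z_train) * loss_grad(theta, z_test)
        for theta, lr in zip(checkpoints, lrs)
    )

# Hypothetical parameter checkpoints from a training run and their learning rates.
checkpoints = [0.0, 0.5, 0.8]
lrs = [0.1, 0.1, 0.1]

# Self-influence (z_train == z_test) is a sum of squared gradient norms,
# so it is always non-negative.
print(tracin_influence(checkpoints, lrs, 1.0, 1.0))
```

In the full method, each gradient is taken with respect to all model parameters and the dot product is over that parameter vector; self-influence scores (a sample's influence on itself) are what the paper uses to surface memorized or atypical training examples.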



Code Repositories

VAE-TracIn-pytorch

Official PyTorch implementation for "Understanding Instance-based Interpretability of Variational Auto-Encoders."
