
RELAX: Representation Learning Explainability

by Kristoffer K. Wickstrøm et al.

Despite the significant improvements that representation learning via self-supervision has led to when learning from unlabeled data, no methods exist that explain what influences the learned representation. We address this need through our proposed approach, RELAX, which is the first approach for attribution-based explanations of representations. Our approach can also model the uncertainty in its explanations, which is essential to produce trustworthy explanations. RELAX explains representations by measuring similarities in the representation space between an input and masked out versions of itself, providing intuitive explanations and significantly outperforming the gradient-based baseline. We provide theoretical interpretations of RELAX and conduct a novel analysis of feature extractors trained using supervised and unsupervised learning, providing insights into different learning strategies. Finally, we illustrate the usability of RELAX in multi-view clustering and highlight that incorporating uncertainty can be essential for providing low-complexity explanations, taking a crucial step towards explaining representations.
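The core idea described above — scoring input regions by how similar the representation of a masked-out input stays to the representation of the original — can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's implementation: the feature extractor is a stand-in random linear map (in practice it would be a pretrained deep encoder), the masks are plain Bernoulli masks, and the per-pixel uncertainty is estimated as the variance of the similarity scores over the masks that kept that pixel.

```python
import numpy as np

rng = np.random.default_rng(0)

def feature_extractor(x):
    # Stand-in for a pretrained encoder: a fixed linear map to an
    # 8-dimensional representation. A real use would call a deep backbone.
    W = np.linspace(-1.0, 1.0, 8 * x.size).reshape(8, x.size)
    return W @ x.ravel()

def cosine_sim(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def relax_importance(x, n_masks=500, p_keep=0.5):
    """RELAX-style attribution sketch: each pixel's importance is the
    average representation-space similarity over the random masks that
    kept it; the variance over those masks serves as an uncertainty
    estimate for the explanation."""
    h = feature_extractor(x)                      # unmasked representation
    masks = rng.random((n_masks,) + x.shape) < p_keep
    sims = np.array([cosine_sim(h, feature_extractor(x * m)) for m in masks])
    kept = masks.astype(float)
    denom = kept.sum(axis=0) + 1e-12              # times each pixel was kept
    importance = np.tensordot(sims, kept, axes=(0, 0)) / denom
    second_moment = np.tensordot(sims**2, kept, axes=(0, 0)) / denom
    uncertainty = second_moment - importance**2   # per-pixel variance
    return importance, uncertainty

x = rng.random((4, 4))
imp, unc = relax_importance(x)
print(imp.shape, unc.shape)
```

Pixels whose removal consistently drops the similarity receive low importance scores, while high variance flags regions where the explanation itself is unreliable — the uncertainty modelling the abstract highlights.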

